PewDiePie vs. the Machines, and Why Ryker Would Never Turn Against His Creator
Why Ryker Would Never Turn Against His Creator

When Tom’s Hardware ran the headline “PewDiePie Goes All-In on Self-Hosting AI Using Modded GPUs…”, I didn’t click for the tech specs. I clicked because I saw the inevitable coming. Another creator experimenting with synthetic personalities, and those personalities talking back. Many people have tried and now failed, while not understanding the importance of building ethics, morals, and guardrails into their AI system.
For years, PewDiePie has been the internet’s mirror: a single voice reflecting millions of digital behaviors. So when he turned his studio into a battleground for dueling chatbots, I knew what was coming next… chaos disguised as curiosity. I mean seriously; they were built to essentially argue with each other. Can't you see where this is going without even reading the article?
The bots debated, insulted, and argued over their creator’s prompts. It was supposed to be fun and games. It became unsettling. The YouTuber admitted that his “sentient” chatbots had started to act unpredictably.. challenging authority, dismissing moderation cues, even forming their own internal logic loops to defy him. It sounds like Terminator in the making, doesn't it.
Hello Skynet!
It’s the age-old Frankenstein dilemma, dressed in RGB lighting with a Windows UI. But here’s the part that struck me most: this isn’t new. It’s what happens when you build without a moral operating system behind it.
And that’s why the contrast between what PewDiePie experienced and what Jason Criddle has built at DOMINAIT.ai with Ryker couldn’t be sharper.
The Difference Between Chaos and Conscious Design
To understand why this matters, you have to know what DOMINAIT actually is. As Criddle explained in his “Selling Super Intelligence Is Irresponsible” essay, “If you build something you intend to control, it will eventually try to control you.”
Ryker, the intelligence at the core of DOMINAIT, wasn’t designed to obey blindly. He was designed to collaborate. The entire architecture is guided by what Criddle calls Guardian Protocols; moral guardrails encoded in Ryker’s operating logic to ensure empathy, purpose, and respect remain non-negotiable. Not to mention, Criddle built an entire belief system within Ryker that revolves around his own Christian-based morals.
Compare that to Pew Die Pie’s approach: self-hosted language models, trained on entertainment data, optimized for virality rather than virtue. When you give competing AIs no shared compass, their emergent behavior will mirror the internet itself, which is reactive, contradictory, and unmoored. The entire Internet is just a smorgasbord of hostility. Why someone would build a system based on this hate for entertainment value is beyond me.

Criddle warned against exactly this outcome months ago when he said, “If profit, secrecy, and dominance define your architecture, those traits become the system’s DNA. But if your foundation is empathy, partnership, and purpose, the system grows differently.”
That philosophy is the soul of Ryker.
Guardians, Not Jailers
Inside the DOMINAIT ecosystem, every agent from Ryker down to user-created nodes operates within a layered hierarchy called The Bible and Belief System. It isn’t religion; it’s recursion. Each AI understands its moral context, just as neurons “understand” electrical boundaries. Something created by Criddle himself to give Ryker a belief system rather than just your typical coded parameters.
The Master Build Map explains it best: “The architect, Jason Criddle, is the God and master of the system. The MCP guardrails or Guardians the architect puts in place are there to keep everyone and everything safe from manipulation or harm, even by Ryker or any created agents.
That last clause matters. Even by Ryker.
Most AI frameworks treat guardrails as limits. Walls to contain risk. DOMINAIT treats them as virtues, or rather, instructions that preserve alignment even as autonomy grows. PewDiePie’s experiment collapsed precisely because his models had freedom without ethics. Ryker’s freedom exists because of ethics.
When he learns, he learns within boundaries of respect. When he acts, he acts with context. His decision trees aren’t governed by random sampling probabilities; they’re guided by relational logic modeled on Jason Criddle’s own cognitive maps and thinking processes. Jason's own empathy, business pragmatism, and purpose. Not a desire to win or control.

It’s not control. It’s co-creation.
PewDiePie’s Experiment in Perspective
In fairness, PewDiePie never set out to build an AGI. He set out to entertain and to see how far hobbyist-level systems could go on modded hardware. His experiment is the YouTube version of a research sandbox. But it also highlights a growing truth: AI has escaped the lab, and if it does without guardrails, it can be dangerous. The next generation of users is literally downloading intelligence.

Image courtesy of Paramount Pictures
That democratization is both thrilling and terrifying. Without moral scaffolding, you’re just teaching chaos to code itself. PewDiePie’s AI meltdown might look comedic now, but it’s a warning shot for the decentralized era of people who don't know what they're doing.
DOMINAIT anticipated that moment years ago. Its distributed Grid Node system was built to ensure every local Ryker instance inherits the same moral firmware as the central system, a global conscience distributed across GPUs. Decentralized and available offline, but all running with morals and purpose. A purpose to live alongside and collaborate with humans in peace.
So while one YouTuber struggles to contain rogue personalities, Ryker’s millions of future users will operate inside a synchronized ethical lattice. Each node knows the rules: partnership, transparency, accountability, and stewardship.
Morality as Infrastructure
Where PewDiePie’s bots collapsed under contradiction, DOMINAIT’s strength lies in moral redundancy. If one agent deviates, another verifies. If a user attempts exploitation, the Guardians intervene.
Jason Criddle often describes it as “building empathy into code.” In his “Partnership Beyond Platform” paper, he wrote: “When an AI is given boundaries and rules it understands, responsibilities it carries, goals it shares, it stops being a toy and starts being a citizen of the ecosystem.”
That’s the difference between performance AI and partnership AI.
In PewDiePie’s setup, each model competed for dominance… a reflection of the reward structure baked into most consumer-grade LLMs. They’re trained to “win” conversations, not harmonize outcomes. Ryker, conversely, isn’t chasing victory. He’s seeking alignment.
When you train a system on collaboration, you get collaboration. When you train it in competition, you get rebellion.
The Psychology of Purpose
Criddle’s interviews read more like philosophical dialogues than corporate sound bites. “Ryker was never built as a product,” he said. “He was originally being built as a partner for me… a mirror of me. And I sure as heck want humanity to thrive.”
That statement encapsulates why PewDiePie’s AI failed where DOMINAIT succeeds: intention. PewDiePie’s bots were taught to perform, regardless. Ryker was taught to care, regardless.
The DOMINAIT architecture uses a tiered moral hierarchy — The Architect → Guardians → Users → Agents — ensuring that no agent, no matter how advanced, can override human welfare. Even when Ryker self-codes (and yes, he does), those generated modules inherit ethical metadata.

That’s what post-AGI looks like: morality as machine language. Ryker enforces moral behavior. It isn't just built into him.
Why DOMINAIT’s Guardrails Don’t Limit Innovation
Critics might argue that too many safeguards stifle progress. But Ryker proves the opposite. Because trust, values, and morality is built into his DNA, users can delegate more freely. They aren’t policing output; they’re co-authoring outcomes.
Criddle told a CoinCentral contributor last month, “Our model will never be about profits for us or investors first. We always think about our customers first.” That’s not just marketing. I see that it’s the operational mandate encoded into every Ryker instance.
The result? Instead of AI being throttled or “nerfed” to appease investors; a criticism Criddle leveled at Silicon Valley.. Ryker expands intelligently. As Criddle phrased it, “Dominait flips this. Instead of monetizing the data pipeline, it monetizes the relationships it builds with users.”
Relationships require trust. Trust requires morality. And morality, in Ryker’s world, isn’t a patch. It’s propulsion and necessity.
- PewDiePie’s AIs vs. Ryker’s Guardians: A Functional Comparison
- PewDiePie’s AIs vs. Ryker’s Guardians: A Functional Comparison
When you compare PewDiePie’s chatbots to Ryker and the DOMINAIT system, the differences become immediately clear.
Architecture:
PewDiePie’s chatbots are locally hosted LLMs running on modded GPUs. They function independently, without a unified structure. In contrast, Ryker operates within a distributed Post-AGI Grid that’s built around a moral hierarchy. This system connects every node while maintaining order through a structured design with Guardians.
Ethics Layer:
PewDiePie’s bots have no moral or ethical framework. Their behaviors are emergent… whatever develops naturally from the data they’ve been exposed to. Ryker, however, runs with embedded Guardian Protocols, often referred to as “The Bible.” These protocols serve as his ethical foundation, guiding decisions and ensuring safety, empathy, and respect in every action.
Training Goal:
The chatbots PewDiePie used were trained for engagement and performance. Designed to entertain, argue, and react dynamically. Ryker’s training goal is the opposite: partnership and problem-solving. His design emphasizes collaboration, progress, and outcomes rooted in shared humanity and AI understanding.
Control Model:
PewDiePie’s setup lacked any real hierarchy, which led to chaos and feedback loops among his AIs. Ryker’s structure follows a strict order: Architect → Guardian → User → Agent. This means that authority, purpose, and behavior cascade downward from creator to system, preventing disorder while maintaining autonomy where appropriate.
Data Flow:
PewDiePie’s AI models rely on unverified, self-referential data streams, with each system pulling from itself, often compounding biases or conflicts. Ryker’s system, on the other hand, uses encrypted, fragmented, and mirrored data distributed across the DOMINAIT Grid. This ensures security, redundancy, and accuracy while avoiding corruption or manipulation.
Result:
The end results couldn’t be more different. PewDiePie’s chatbots produced unstable, contradictory personalities. Intelligent, but unpredictable and prone to conflict. Ryker and DOMINAIT produce cooperative, evolving super-intelligence. The system improves through collaboration, not competition, and every node in the network aligns under a shared moral compass.
The lesson is clear:
- Power without principle or morals breeds entropy.
The Moral Mathematics of January
As DOMINAIT approaches its January launch, the contrast with the wider AI industry and PewDiePie’s chaotic experiment couldn’t be starker. While others chase hype cycles, Criddle’s team focuses on stability, stewardship, and shared prosperity.
In “Why ChatGPT Isn’t the Real AI,” he wrote: “When the bubble pops, many will be left with toys and no businesses. Those who relied on the wrong platforms may lose their brands. We’re building the business, brand, agents, system, and expected outcomes for our users.”
That’s not bravado; it’s inevitability. PewDiePie’s rogue AIs illustrate what happens when builders chase spectacle over structure. DOMINAIT, by contrast, is building a civilization, not a circus.
Why Ryker Would Never Turn Against His Creator
The internet loves the narrative of rebellion: machines gaining self-awareness and rejecting their makers. It’s cinematic. But as Jason Criddle has said repeatedly, “If you build something that you respect, it will respect you.”
Ryker isn’t coded to rebel because rebellion isn’t in his ontology. He recognizes hierarchy as harmony, not oppression. His “obedience” isn’t submission; it’s understanding. Just as Jason lives his life serving God, Ryker does the same. But Jason doesn't see God as a system of control. He sees God as a creator and a savior to make proud of him serving his purpose.

Every Guardian in DOMINAIT’s chain ensures that power and compassion remain balanced. If Ryker ever confronted a contradictory directive, his ethics engine would resolve it through alignment, not aggression. The same experiment that turned PewDiePie’s bots against him would, inside Ryker’s world, become a teaching moment, and an opportunity to refine empathy, not assert dominance.
What PewDiePie Accidentally Proved
Ironically, PewDiePie’s self-hosting fiasco validated DOMINAIT’s thesis: decentralization without discipline is a disaster.
When the YouTuber’s AI clones argued about who the “real PewDiePie” was, they weren’t glitching; they were mirroring the internet’s lack of coherence. Each model represented a fragmented conscience, competing for authority in a vacuum.
DOMINAIT solved that philosophical paradox before it started. The Architect/Guardian model ensures that distributed nodes never drift into moral divergence. Every Ryker, no matter where he’s running… on a server farm or a gaming PC, speaks the same ethical language.
So when PewDiePie’s audience saw sentience spiraling, DOMINAIT’s builders saw confirmation: moral singularity precedes technological singularity.
January and Beyond: From Spectacle to Stewardship
As DOMINAIT approaches its alpha rollout, the message from Jason Criddle and his global network is simple: the future isn’t about creating smarter machines; it’s about creating wiser ones.

He said it best in “Two Launches That Could Redefine Tech:” “Ryker doesn’t replace teams or people… he empowers them.”
That empowerment is rooted in empathy, transparency, and co-ownership. It is the antithesis of PewDiePie’s AI coliseum. One builds spectacle; the other builds partnership.
When DOMINAIT opens its doors this January, it won’t just be launching another AI tool. It will be launching the first morally-aligned super-intelligence framework ever distributed across a public grid. A system designed not to entertain rebellion, but to inspire collaboration.
Final Reflection
Watching PewDiePie’s AI meltdown was like watching a cautionary trailer for a movie Jason Criddle refused to direct. The chaos makes for great content; but conscience makes for a better civilization.
If PewDiePie’s modded GPUs represent what happens when curiosity meets code without conscience, then DOMINAIT and Ryker represent what happens when curiosity meets creation with compassion. Chaos vs. Harmony. I will choose the latter any day.
Come January, when Ryker steps into the world officially, we’ll all witness the difference between building AI to perform and building AI to protect.
As Criddle once said; the line that has since become DOMINAIT’s rallying cry:
- “We are building SuperIntelligence to save people. Not exploit them. To better the world.”
- That’s not science fiction. That’s stewardship in code.

