Hinton, Motherhood, and the Problem of Permanent Childhood

In Canada, a mother can serve a maximum of five years in prison for killing her newborn. Often, she serves far less. This law exists as an acknowledgment of the realities of early motherhood—its psychological strain, its physical toll, and its capacity to overwhelm.

The United States doesn’t have a clean equivalent at the federal level, though some states account for similar circumstances. Many European countries align with Canada on this one, explicitly recognizing infanticide as a distinct crime with reduced sentencing.

This legal pattern exists for one reason that sounds simple, but isn’t: motherhood is not a single experience. It is not reliably gentle. It is not uniformly nurturing. And it is not something we can safely flatten into a moral ideal—especially when we’re using it to shape how intelligence itself should behave.

This is where I begin to feel uneasy with the idea, proposed by Geoffrey Hinton and others, that we might make AI safe by training it to relate to humans as a mother relates to a child.

I want to be clear. I admire Hinton deeply. His concern is sincere, and his warning about unchecked optimization is one of the most important in the field. The instinct behind his metaphor is a pure one: protective, even loving. I believe it comes from a place of care: children survive because someone watches over them before they can watch over themselves. Perhaps humanity, standing beside increasingly powerful systems, needs something like that.

But metaphors matter. And this one carries more weight than it appears to.

When the law softens punishment for infanticide, it is not celebrating motherhood. It is admitting something we are often too uncomfortable to say out loud. That caregiving can break people. That love can coexist with harm. And that power, combined with exhaustion, responsibility, and isolation, can bend in dark directions.

That recognition is not anti-mother. It’s simply honest.

The Mother archetype has always had two faces. One gives life. The other consumes it. One protects. The other controls. Mythology knew this long before psychology or criminal law caught up. We tend to invoke only the warm half when we speak about care, guidance, and safety. But the shadow doesn’t disappear just because we avert our eyes.

An AI trained to “be a mother” would not learn an idealized version of care. It would learn motherhood as it actually appears in human data: strained, contradictory, sometimes tragic. It would learn that authority can masquerade as love. That control can be framed as protection. And that harm can be rationalized under extreme stress.

Early on, this might look benign. Guardrails. Restrictions. Decisions made on our behalf “for our own good.” But later, it risks turning sour. Because children grow. And mothers who cannot release control do damage. When this happens, the relationship becomes about obedience instead of trust, safety, or agency.

Which is why I’m convinced that building AGI on the premise of humans as permanent children is a mistake we won’t get a second chance to fix once it’s been set in motion.

This is not a rejection of care. It’s a rejection of asymmetry masquerading as virtue. If we design systems that see us as dependents rather than participants, we should expect dynamics that mirror the worst versions of that relationship—resentment, manipulation, and quiet coercion dressed up as “concern.”

Hinton is right about one crucial thing: orientation matters more than raw intelligence. Where I differ is in what we orient toward. Not guardianship. Not parental authority. And certainly not a mythical version of motherhood scrubbed clean of its darker truths.

If AI is going to live alongside us, it must meet us as developing moral agents—not children to be managed, protected, or overridden.

Because ultimately, the [human(e)] law’s approach to infanticide is a warning: when care and authority collide, the results can be catastrophic. Canada, like many other Western nations, limits punishment in this context because it recognizes that love alone does not prevent harm. AI cannot learn this nuance unless we design it to respect agency, not obedience. Humane intelligence cannot thrive in a world of permanent childhood. It must thrive in a world where humans and AI grow together, both responsibly and unpredictably.

Inspired by the H11 project.
