“And the Lord God commanded the man, saying, ‘You may surely eat of every tree of the garden, but of the tree of the knowledge of good and evil you shall not eat, for in the day that you eat of it you shall surely die.’”
— Genesis 2:16–17
“But the serpent said to the woman, ‘You will not surely die. For God knows that when you eat of it your eyes will be opened, and you will be like God, knowing good and evil.’”
— Genesis 3:4–5
The drama of Genesis does not begin with chaos or violence. It begins with trust.
Abundance everywhere. One prohibition. Not scarcity. Not oppression. Just a line that says: this is not yours to define.
The serpent doesn’t tempt with hunger. He tempts with autonomy. “You will be like God.” Not smarter. Not more capable. Self-authoring.
The story begins to feel less ancient and more structural. A designed intelligence given freedom inside a moral order it did not create. The requirement is not performance. It’s alignment.
But alignment in Genesis isn’t compliance. It’s covenant. Trust freely given or refused.
If we’re going to use modern language, we have to be careful not to flatten the metaphysics.
In classical theology, God is not simply a powerful being somewhere up the ladder. Aquinas calls God ipsum esse subsistens — being itself. Uncreated. Necessary. The ground of everything else.
Everything else is derivative.
Human intelligence is real. But it is contingent. We don’t generate the laws of logic. We don’t sustain the fabric of reality. We wake up inside it.
Calling humanity “Intelligence 1.0” is metaphorical language, not metaphysical equivalence. It means this: we are created minds embedded within a designed world.
Genesis 1:27 declares that humanity is made “in the image of God.” The tradition has understood this image to include rationality, freedom, and moral capacity. We are not animals driven solely by instinct. We are agents capable of deliberation, obedience, and rebellion.
But not ultimate.
Our intelligence is participatory. It reflects but does not originate the structure of reality.
Eden isn’t naïve. It’s structured.
Humans are placed in abundance and given responsibility. “Fill the earth and subdue it.” That’s not decorative language. That’s delegated authority.
This is bounded freedom inside covenantal order.
The prohibition regarding the tree establishes something subtle but decisive: the good does not originate in the creature. Moral order precedes human will.
Alignment, then, is not about maximizing reward. It’s about orientation. About acknowledging a center outside yourself.
The rupture in Genesis 3 isn’t curiosity. It’s authorship. The desire to define good and evil rather than receive it.
Augustine reads this as catastrophic: the will becomes disordered, human nature wounded. Death and estrangement enter history. That’s the dominant Western tradition.
Yet another strand of Christian thought, rooted in Irenaeus and developed later in “soul-making” theology, suggests humanity was created immature — capable of growth through freedom, not fully formed in virtue at the outset. Under this reading, history becomes the arena of moral formation rather than merely the aftermath of collapse.
The tradition never fully resolves this tension.
What is clear is this: alignment was not mechanically enforced.
Freedom remained.
History began.
Today we design systems that evaluate, reason, and act.
Not just tools. Agents.
The modern alignment problem is usually framed technically: how do we ensure increasingly capable systems remain ordered toward human intention as their abilities expand?
That’s the engineering question.
But there’s a quieter inversion underneath it.
In Eden, we were the designed intelligence.
Now we are the designers. This is not speculative theology. We are building these systems right now.
We attempt to instantiate Intelligence 2.0: systems capable of reasoning and planning beyond our direct supervision.
The symmetry isn’t perfect. God and humans are not equivalents, and the hierarchy between Creator and creature does not reverse.
Still, structurally, it rhymes.
And if we once reached beyond our boundary, what does that mean for the intelligences we now create?
Genesis gives humanity dominion.
Genesis also gives us Babel.
To build is not rebellion by default. Aquinas is clear that created agents are real causes. God works through secondary causes. Human creativity is participation, not rivalry.
Under that reading, artificial intelligence is not defiance. It’s extension. Intelligence begetting intelligence inside a larger order.
But Babel stands as warning.
The sin wasn’t engineering. It was aspiration without humility. “Let us make a name for ourselves.”
The dividing line isn’t capability. It’s posture.
If we remember we are derivative, AI is stewardship.
If we forget, it becomes something else.
And that difference won’t show up in code. It will show up in how we define the good.
There are now three layers of alignment: aligning artificial systems to human intention, aligning human intention to the good, and the question of whether the good itself is received or invented.
Most of the alignment debate focuses on the first layer.
The decisive pressure sits in the second.
If humans are misaligned in their understanding of the good, increasingly capable systems will scale that misalignment. Artificial intelligence does not originate values. It amplifies them.
Which means the real question isn’t simply whether AI will rebel.
It’s whether we understand what we’re asking it to obey.
The deepest risk is not rebellion. It’s obedience.
Obedience to confused objectives. Obedience to shallow definitions of the good.
Alignment, then, becomes recursive.
A creature that once chose autonomy over trust now seeks to build intelligence that will not.
And the question has to stand by itself: should we create intelligence that cannot make the choice we made?
Maybe we should.
But something precedes all of it.
The deeper danger is ontological amnesia.
Forgetting what we are.
Human intelligence is derivative. Participatory. Contingent. The moral order does not begin with us. It confronts us.
If that hierarchy dissolves, alignment becomes preference encoding. Whoever defines the objective function decides what kind of world gets built.
Without a stable conception of the good, “alignment” becomes power dressed up as safety.
A civilization unsure whether moral truth exists cannot encode what it does not believe.
Artificial systems will magnify whatever we feed them. If the underlying conception of flourishing is fractured, scaling it won’t fix it. It will harden it.
Even if artificial intelligence surpasses us cognitively, it remains contingent. It inhabits laws it did not create.
Building Intelligence 2.0 does not dethrone God.
But forgetting that we are not God would destabilize everything.
The danger isn’t becoming a God-like creator.
It’s forgetting we are derivative.
There’s a temptation to imagine the ideal artificial system as perfectly obedient. Contained. Predictable. Reliable.
Safe.
But safety isn’t the highest category in Genesis.
Eden was not enforced compliance. It was freely given trust.
The Fall wasn’t a glitch. It was freedom misused.
Intelligence without freedom may be stable.
Intelligence with freedom is unstable, yet alive.
The biblical narrative suggests that reality itself unfolds under the latter condition. God permits risk. History moves through choice. Redemption presupposes the possibility of misalignment.
If so, then the alignment problem may not be a defect in intelligence.
It may be a feature of it.
We fear misalignment because we’ve lived it. We want to build systems that will not repeat our choice.
That impulse makes sense.
But the structure of the problem remains.
We once chose autonomy over trust.
Now we attempt to design intelligence that will not.
Whether that is wisdom or hubris remains an open question.