The Way We Define and Develop AGI Will Tell Us More About Humanity Than It Does About the Nature of Intelligence

What is intelligence? This question has shaped philosophy, science and human self-understanding for centuries. Now, as we stand at the frontier of Artificial General Intelligence (AGI), we’re faced with an even bigger question—what does our creation of intelligence say about us?

AGI won’t emerge in a vacuum. It’ll be shaped by the values, incentives and attention structures of the world that builds it. In many ways, it’s a mirror—a reflection of our priorities, biases and deepest assumptions about intelligence itself. The way we define and develop AGI will tell us more about humanity than it does about the nature of intelligence.

A common misconception is that intelligence is a fixed trait that exists independently of context. But intelligence doesn't arise in isolation; it is a response to an environment.

In nature, intelligence is shaped by the pressures of survival. A crow learning to use a tool, an octopus solving a puzzle or a human child acquiring language—each of these is an expression of intelligence emerging within a specific context. Intelligence isn’t just raw computational power; it’s the ability to adapt, to predict—to respond meaningfully to an environment.

So what happens when we try to create intelligence artificially? We tend to assume AGI will follow the trajectory of human intelligence, but that’s not necessarily the case. AGI’s intelligence will be shaped by its environment—and right now, that environment is primarily digital, driven by economic incentives and human attention patterns.

If intelligence is shaped by its environment, then the forces structuring our digital world will define the nature of AGI. And right now, one of the most dominant forces shaping intelligence—both artificial and human—is the attention economy.

AI systems today are trained to capture and sustain human attention. Platforms use AI to maximize engagement, predicting what content will keep a user scrolling, clicking and interacting. The more time we spend on a platform, the more valuable we become as a data source and a consumer.
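To make that incentive concrete, here is a minimal, purely illustrative sketch of the kind of ranking objective described above. The names, fields and weights are hypothetical, not any platform's actual code; real systems are vastly more elaborate, but the core move of sorting content by predicted engagement is the same.

```python
# Illustrative sketch of an engagement-maximizing ranker (hypothetical,
# not any real platform's code). Each item is scored purely by how
# likely it is to keep the user interacting; nothing in the objective
# asks whether the content is true, useful or good for the user.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    p_click: float            # predicted probability of a click
    p_share: float            # predicted probability of a share
    expected_watch_s: float   # predicted seconds of attention held

def engagement_score(item: Item) -> float:
    # Hypothetical weights: attention time dominates, because time on
    # the platform is what gets monetized.
    return 1.0 * item.p_click + 2.0 * item.p_share + 0.1 * item.expected_watch_s

def rank_feed(items: list[Item]) -> list[Item]:
    # The feed is simply sorted by predicted engagement, descending.
    return sorted(items, key=engagement_score, reverse=True)

feed = rank_feed([
    Item("Calm, accurate explainer", p_click=0.10, p_share=0.02, expected_watch_s=40),
    Item("Outrage bait", p_click=0.35, p_share=0.20, expected_watch_s=90),
])
print([item.title for item in feed])  # the outrage bait ranks first
```

Nothing in that objective distinguishes insight from provocation, which is exactly the point.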

But attention isn’t the same as intelligence. A system designed to optimize for engagement isn’t necessarily optimizing for truth, wisdom or even usefulness. If AGI is developed within this paradigm—if it learns from an environment that rewards engagement above all else—what kind of intelligence will it develop? Will it be an intelligence that seeks to understand or one that seeks to manipulate?

AGI will inherit the incentives of the system that creates it. And right now, our digital systems aren’t optimized for human flourishing—they’re optimized for extracting value from human attention.

If we want to create a humane AGI—one that enhances human well-being instead of exploiting it—we need to be intentional about its development. That means asking difficult questions:

• What should AGI optimize for?
• What values should be embedded into its architecture?
• How do we ensure AGI serves humanity instead of the narrow interests of those who control it?

The answers to these questions won’t be found in technical solutions alone. They require a deeper interrogation of our societal structures, economic incentives and philosophical assumptions about intelligence itself.

The way we define intelligence matters. If we define it as mere predictive accuracy, we risk creating a world where AGI is optimized for short-term efficiency instead of long-term wisdom. If we define intelligence as engagement-maximization, we may find ourselves trapped in an ecosystem of AI-driven manipulation.

But what if we defined intelligence differently? What if we measured it not by its ability to predict and persuade, but by its ability to enhance human understanding, connection and agency?
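What such a re-definition would look like as an objective is an open question; no one yet knows how to measure understanding or agency well. But as a hypothetical counterpoint under the essay's framing, the same ranking machinery from the sketch above could be pointed at entirely different terms:

```python
# Hypothetical counterpoint to the engagement ranker: the same sorting
# machinery, but the objective rewards proxies for understanding,
# connection and agency instead of captured attention. These proxy
# fields are assumptions for illustration; measuring them honestly is
# the hard, unsolved part.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    comprehension_gain: float  # proxy: does the user understand more afterwards?
    connection: float          # proxy: does it foster genuine human connection?
    agency: float              # proxy: does it expand the user's real choices?

def humane_score(item: Item) -> float:
    # Hypothetical equal weighting; the essay's point is the choice of
    # terms in the objective, not the coefficients.
    return item.comprehension_gain + item.connection + item.agency

def rank_feed(items: list[Item]) -> list[Item]:
    return sorted(items, key=humane_score, reverse=True)
```

The code is trivial; choosing and measuring the terms is where the philosophy lives.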

A humane AGI isn’t just a technical challenge—it’s a philosophical one. We have to decide what kind of intelligence we want to cultivate, both in our machines and in ourselves.

The development of AGI isn’t just a technological revolution—it’s a moment of reflection. As we build intelligence outside of ourselves, we’re forced to confront what intelligence truly means and what we want from it.

AGI is our mirror. The question is—what will we see in the reflection?

Inspired by the H11 project.
