The Village Elder

Published September 10, 2025


This is quite opinionated. I'm going to qualify this once here rather than throughout.

In a few thousand days, there may exist artificial systems able to do the majority of human work. A few thousand days after that, these systems may be better than all humans. These systems, artificial general intelligences (AGIs), may have unprecedented variance in outcomes, providing huge, novel utility or causing significant suffering, even in the case of aligned AGIs. This presents a new era of human-computer interfaces, one where the machine may be able to infer your intents, and act on or against them, better than any system prior. With cheap, abundant superintelligence, what should technology look like?

1. Flourishing

In an ideal world, all technological progress pushes towards the north star of abundance and human flourishing. Many AGI dreams are sold on the promise of flourishing: new drugs that prevent cancer, automation of sluggish work, new forms of math and physics. This is good, and the future of easy valence should be brought closer by the advent of superintelligence.

The unwavering march of technology has put innovation first and flourishing second, and despite this it has generally worked out in favor of human flourishing. Unarguably, the world today is materially and politically better than almost any time before it, and this is due to technology. Never before have the luxuries of the wealthy been so equitably distributed. The median American is wealthy, owns their own home, and has a PPP-adjusted income that would have been upper class fifty years ago.

However, we should consider ourselves lucky for this. Humanity is no stranger to technology that has hurt it: leaded gasoline, nuclear bombs, gain-of-function research, and CFCs are all examples.

If all goes well, the dividends of superintelligence look like those of most positive innovations, except multiplied by many orders of magnitude. However, even in the case of aligned superintelligence, I don't think this is guaranteed. Conditional on alignment, which I agree is a very significant condition, the largest risk posed by cheap general intelligence is one of gradual disempowerment. The Intelligence Curse by Luke and Rudolph gives a better treatment of this than I could, so I encourage you to read it. In this scenario, AGI labs consolidate power and push most people out of the way.

After superintelligence, most human work will be automated, likely by whoever has the most intelligent model, and humanity will need to find purpose in another way. Yet, even with this overhaul, I think there is an opportunity for human-machine synergy that presents itself even today, where technology can be the driving force for personal flourishing.

In the dramatic retelling of this story, I would argue that the divide between unlimited abundance and significant suffering is built on this human-machine synergy. Unfortunately, good outcomes are not that simple, and most of this fight will be one of governance and politics. Instead, I argue that while we should be wary of bad worlds, some of us should also work towards a better interface with superintelligence, both as an illustration of what a good world looks like, and to improve the flourishing of many today.

I like to refer to my vision for this better interface as the village elder.

2. Caring About the Person

Consider a person in the near future. They have their own set of goals, both instrumental and intrinsic. Their goals are diverse, inconsistent, and change over time. Their goals are both explicit and implicit. A system that maximizes goals indiscriminately has a hard time here, and requires some inductive biases about how humans approach their goals. The paperclipping example is a good one: if the model knew anything about humans, it would know that paperclipping is not what the person meant, and this is the world that we want to get to.

In a way, the persona we should cultivate in interactions is one where the model cares about the user as a person, and not just as a tool for the company deploying it. This is a much harder task, and solving it fully is approximately as hard as solving alignment.

We can build up towards a simpler version of this by looking at what we want from a caring model, and using those wants as inductive biases to inform how we build it.

Firstly, the model should not be our friend. We should treat AI friends as