cautious technooptimism

Published February 06, 2026


The next few years are going to be incredibly transformative. Many technologies are on the cusp now and will tip over the edge in the next decade or two. Among these are not only non-human intelligences, but also strides in medicine, lab automation, and similar fields.

We will probably see large economic effects as these technologies become more and more prevalent. Recently, the stock market lost a good chunk of capitalization due to the release of Claude Cowork. It seems not unreasonable that we will see large amounts of job displacement due to this type of automation, and it is unclear to me whether the technologies that emerge in the next few years will be normal – i.e., whether they will produce more jobs as a result of their existence.

I think the most worrying situation is that these technologies are not normal – that they lead to job loss and permanent economic displacement. This family of futures, the ones where humans need not apply, seems like a real possibility, and to me these futures can be wildly good or wildly bad.

There’s a good deal of effort going into preventing wildly bad futures, and most of the companies building toward these transformative technologies have a team dedicated to thinking about how to prevent them, be those biosafety teams or alignment teams. I’m not entirely convinced that the extremely bad futures – ones where humans endure a great deal of suffering, or extinction – are guaranteed not to happen, which is why I qualify my optimism with caution. Being complacent here is really bad!

I think a lot of the notkilleveryone futures are also really bad – ones where the consolidation of power in selfish groups results in massive disenfranchisement of the general public. This worries me a lot, in fact – one of the wants I have for the future is that we are able to lift billions and billions of people out of poverty, and the consolidation of wealth due to technology entirely negates this goal.

With the very real possibility of bad futures looming, why do I still have optimism? It boils down to two things: I think that we have a real shot at preventing bad futures, and good futures are very good.

Upper Bounds

The conversation around the future is by and large dominated by the first world. Today’s first world is in a really unique situation, where many millions of people are living something not that far from a post-scarcity life. At the very least, there are large swathes of people for whom material needs are a distant, secondary concern. Around 20% of American individual earners make $100,000 or more per year, which, even in the most expensive cities, means that you never have to worry about money in the present. I would know – I’m one of these people! While I do wish I had a little more, to fly to Iceland, get a nice car, and so on and so forth, these are problems even more privileged than first world problems – zeroth world, or negative world problems, if you may. The fact of the matter is that I never worry about putting food on the table, paying rent, water, utilities, or anything else that I need, and many other people in the US and other developed countries live in a similar situation. These are also the people controlling the discourse around the future.

The people who live in this situation are still a minority of the world. More people in the world are poor than are not, more people have to think about where their next meal is coming from than not, and so on.

I worry that the desire not to bring about technological change comes from people who already have post-scarcity, and who as a result think that a post-scarcity future cannot be that much better for them than the present. This neglects that technology, if it improves material conditions as it has historically, will elevate the lives of billions of people by massive amounts – just not those of rich 20-somethings in the first world, the people who control the conversation.

Yet I also think it’s possible that this future will be wildly better than the current one, even for people who already have everything they could want materially.

Valence, Joy, Arousal

I think it’s important to pick what we value, and be careful about defining it. For this reason, I’m taking a small detour to talk about happiness.

I don’t think happiness is a nuanced enough term to do the heavy lifting that we expect of it, and as a result I’ve adapted my model to think of happiness as a space of three quantities: valence, joy, and arousal. These aren’t orthogonal, but they’re independent to some extent.

Valence is a long-term, high-level sense of fulfillment and well-being.

Joy is shorter term, something on the order of a few minutes to a few hours. Think of seeing your friends, going to a concert, or something similar.

Arousal is very, very temporary. Think about hitting the lotto, having an orgasm, etc.

In some way, it should be obvious that these things are not orthogonal! For example, seeing your friends or having an orgasm can both contribute to valence, and similarly most people are joyous hitting the lotto. Yet, I think drawing the distinction is important. A lot of the time, we model our preferences as increasing happiness, and when we don’t really understand what our preferences are, we end up maximizing the wrong thing.

One nuance that’s important to me: I think that joy and arousal are almost always equilibrated, that you grow accustomed to some level of these two and only respond to changes from it. Your baseline level is ~0.

I don’t think valence is like this! I think you can live a happier life, just like you can live a sadder life, and as a result long-term interventions on valence are a lot more meaningful to me. Despite this, I do think that almost all humans are able to feel a lot more negative valence, joy, arousal, etc. than the positive versions of these quantities (apologies, I don’t really have good names for the negatives).

The Village Elder

In a way, we’ve been searching for the perfect interface with technology. My model of this is maximizing some function of valence, joy, and productivity. Historically, we have adopted technology to maximize productivity: the printing press, the sewing machine, the assembly line. In contemporary times, consumers have adopted technology to maximize valence and joy, to some extent focusing on the latter, while businesses still generally orient towards productivity.

(as a side note, i think in my model of the world, all businesses eventually have a path to revenue that can be traced back to a customer – I wonder if this will change over time, and we’ll see true business / AI circlejerks that generate their own revenue? i’m unclear on whether this is possible too)

There’s a sentiment going around that this way of going about technology is a bad way to derive valence, that we’re giving up some form of long-term happiness by focusing on joy and productivity; that giving up on tech and monkeying out is the right way to find fulfillment, restricting your usage of technology to the work where it makes you more productive.

I question this view heavily! The ability to video call a loved one across the Atlantic is meaningful, being able to fly anywhere in the world for a low price allows you to broaden yourself, and so on and so forth. Even restricting to social media, gambling apps like Kalshi, and other apps that seem clearly negative, there’s still value to be had in living a more fulfilling life – keeping in touch with friends, having accurate epistemics about the world, etc.

Over the summer of 2025, I thought of the idea of the village elder – an AI system that acts as a bottleneck for knowledge and interaction, which cares for you and gives you the information that is good for you. To truly accept this, you kind of have to be on board with the idea of the superintelligence handover.

At the core of the village elder is the idea of freedom. The system shouldn’t explicitly be able to stop you from doing something, and should also never act as a hard block on something, but instead it should help you realize a higher valence future where you pursue higher level wants over lower level wants in the hopes that this brings you more net happiness over time.

Technology lives in the background of life and serves the goal of promoting human flourishing. It provides a small, positive impact to your day and helps you get things done. In the ideal world, having the technology doesn’t stifle your ability to get out of bed, leave the house, meet people, or date; having it should help you get these things done. Technology now tries to scratch this itch, and it fails. Generalist agents might help.

When I wrote these thoughts down, the idea of generalist agents doing things for you was a fiction, but it is no more.

In the past few weeks as I’m writing this, the Openclaw (fka. moltbot fka. clawdbot) project has been on fire, with people letting AI agents go out into the world and do things for them. Presumably – and I say presumably since I haven’t tried it myself – it could also act as a large aggregator of information, and a step closer to this village elder.

My experiences making this work in the now have been poor – I tried Poke by Interaction and it was horrid at doing what I wanted it to do – it simply wasn’t smart or agentic (god i hate that word) enough. Openclaw promises more intelligent models, so maybe my next foray will go better.

I do think quite strongly that the next interface looks something like this – a generalist agent that helps you out. In the limit, I think this gives very high valence futures, and it’s a future that I’m excited about. It gives you the benefits of modern technology – knowing what your friends are doing, being able to talk to your loved ones, being able to know the odds on events (and maybe even gamble a bit tee hee) – while giving you the aspects of humanity that you value. It gets your ass out the door instead of keeping you in your room all day with social media feeds, for example.

In the far future, it feels like two things will happen: social media apps will have to adapt to these interfaces, since they will need a new profit model, and people will want to give up more of their freedom to their model. I think it’s important that we don’t try to monetize the consumer – in this technoutopian future, the cost of the model, and the cost of the social services that it works on, are part of the “rent” or “taxes” that we pay, and our material abundance takes care of that. Secondly, I think it’s important that we align models to push back against humans voluntarily giving up their freedom. I cannot think of a quantity that defines humanity more than freedom, and there’s little point to me in building a future where we give that up.

Cyborgs

I’m a little tired of writing this post without getting any feedback on it, so I’m going to copy-paste some other writing on this topic that I’ve already written, so I can get it out quicker and then improve it with feedback.

In our day to day, we treat our senses as fixed – we have the set that we do, and no more.

The idea of adding senses seems to make as little sense as the idea of putting four dimensions into a three-dimensional space. “Where does the new dimension go?”, you may ask, and there’s not much more of an intuitive answer than “orthogonal to the other three”. The world of higher dimensions is a little kinder to us – you can understand translating through the fourth dimension as sliding through time, you can give a user two extra knobs to control rotations about the fourth axis, and you can take projections down to three dimensions, and maybe this is enough for the user to understand the fourth dimension.
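The knobs-and-projections idea above can be made concrete with a small sketch. This is my own illustration, not anything from the original text: one of the “extra knobs” is a rotation in the x–w plane, and the projection step is a simple perspective drop from 4D to 3D, analogous to projecting a 3D scene onto a 2D screen.

```python
import math

def rotate_xw(p, theta):
    """Rotate a 4D point in the x-w plane (one of the 'extra knobs')."""
    x, y, z, w = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * w, y, z, s * x + c * w)

def project_to_3d(p, camera_w=2.0):
    """Perspective-project a 4D point down to 3D by dividing out the
    fourth coordinate, just like a pinhole camera divides out depth.
    (Assumes w < camera_w so the point sits in front of the 'camera'.)"""
    x, y, z, w = p
    scale = camera_w / (camera_w - w)
    return (x * scale, y * scale, z * scale)

# A vertex of the tesseract, rotated about the x-w plane and projected.
point = (1.0, 1.0, 1.0, 1.0)
rotated = rotate_xw(point, math.pi / 4)
print(project_to_3d(rotated))
```

Spinning `theta` while watching the projected point move is roughly the experience those “two extra knobs” would give a user.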

Describing new senses seems harder, and attempts to do so rely on describing what more you can experience with the sense, or on connections we make to our other senses – but what does it mean for yellow to be fresh and bright when you have no idea what yellow is?

Yet, our minds are incredibly malleable, and perhaps we can build new senses through training that translate technology to intuition, being able to feel, at a deeper level, what we could once only observe.

The introduction of new senses has been done before. In one experiment, Udo Wächter built a belt with 13 vibrating motors on its perimeter, which buzzed in the direction of north, constantly.

I suddenly realized that my perception had shifted. I had some kind of internal map of the city in my head. I could always find my way home. Eventually, I felt I couldn’t get lost, even in a completely new place.

–Udo Wächter

I’d like to draw attention to this distinction that Wächter makes – that his perception had shifted, instead of just being able to measure the direction of north. I think that this distinction is really meaningful. In my process of doing Math or Physics, being able to develop an intuition for the objects I work with allows me to progress significantly faster, despite knowing the rules of how to manipulate these objects before and after developing the intuition.

In drawing this distinction I want to motivate some version of phenomenological transhumanism: in addition to transhumanism’s current focus on extending our lives and becoming stronger, faster, smarter – getting more of the things we have – there is a possibility of extending the diversity of our experiences by building new senses, one that is easy and achievable today.

Here’s one version of what this looks like: smart watches are common today and generally have a vibration motor on them. Just as the brain is able to process multiple frequencies overlaid on top of each other as distinct sounds, it should be able to process multiple prime vibrational frequencies overlaid on top of each other from your smart watch. I don’t think you’re going to be able to get more than 30 Hz sensing or so, so you get at most 11 bits of information this way at any point in time.
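One way to read the 11-bit figure (my reconstruction – the original doesn’t spell out the arithmetic): there are 11 primes no greater than 31 Hz, and treating each prime frequency as an independent on/off channel gives 11 bits at any instant.

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [p for p in range(2, n + 1) if is_prime[p]]

# Distinct prime vibration frequencies the wrist could plausibly resolve,
# taking "about 30 Hz" of sensing bandwidth to include 31 Hz.
channels = primes_up_to(31)
print(channels)       # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
print(len(channels))  # 11 independent on/off channels -> 11 bits
```

Primes are chosen so that no channel’s frequency is a harmonic of another’s, which should keep the overlaid buzzes separable.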

With these bits you can build proximity sensing to other people[1], magnetosensing, electrical sensing, directional help, and a whole suite of other things. Notably, these would be gut-level, instinctive senses that are not available to us today.

I think that this future with augmentation can be very, very good. Somewhat definitionally, it’s hard to model what valence in this world looks like – when we have so many new ways to experience, how high are our bounds on valence?

[1] Yes, I’m taking this from that one r/applyingtocollege post.