Whether we accept it or not, the impending rise of Artificial General Intelligence (AGI), a theoretical class of AI that matches or exceeds the proficiency of the human mind, will be the most significant technological shift of our lifetimes. While the rise of AGI will create environmental concerns and disparities in access to its capabilities, its effects on the job market are the most pressing. Basic economic reasoning suggests that in jobs AGI can automate—including truck driving, creative design, telecommunications, and manufacturing—profit-maximizing firms will make significant layoffs. But before this analysis, a more vital question must be answered. Beyond economic upheaval, do we want the world AGI is capable of building for us? What does a society look like when these jobs are automated away? Is life as a consumer of automatically optimized goods one worth living? Instead of accepting industry automation as inevitable, we must consciously weigh the prospect of an eroded human experience under AGI.
On the surface, this concern may seem far-fetched, but it has been considered before. In Dune, Frank Herbert imagines a society with no computers at all. The reason? In his fictional history, human overreliance on “thinking machines” led to an all-out revolt and the eventual banishment of such technologies. Of course, fears of outright AI warfare are far-fetched, but the danger of wholesale reliance on AGI persists. Without a clear, conscious analysis of whether we want to pursue this road, the profit incentives of optimization will yield ever-expanding capabilities.
Imagine an average morning in 100 years. You wake up to an alarm clock that uses predictive capabilities to optimize your sleep around your circadian rhythm, just before eating an automatically prepared breakfast with optimized macros. Then, without hesitation, you hop into a self-driving car that runs smoothly until you get a flat tire; at this point, the vehicle automatically contacts a repair agency, run by yet another AGI system, and soon you are on your way. This simple morning characterizes what life could be under total AGI adoption. At first, such a life sounds nice: good sleep, well-prepared food, and everyday problems made simple are valuable conveniences. Still, while convenient, there is something profoundly alarming, and arguably inhumane, about this reality.
While a world of AGI could be liberating, eliminating the need for much of the labor that once filled our hours, with everything automated and completed for us, the tangible feeling of living—burning your hand the first time you try to cook, or making a brief human connection on the phone with roadside assistance—could disappear. One could argue that with such automation, widespread forms of suffering, such as hunger or extreme poverty, would cease to exist. But such a mighty feat does not have to come at the expense of total optimization: some industries may simply be too human to hand over to AGI.
What is interesting about AGI is that while it may seem that only mundane tasks like driving or staffing call centers would be automated, if a human-level or human-exceeding capability exists, AGI could become more efficient than humans at nearly all creative and spiritual tasks as well. Why waste time reading a human-written book if an AGI can synthesize stories and themes better? Why listen to a spiritual leader who cannot master every translation of every religious text when an AGI system can, with every argument and explanation fully developed? The promise of AGI is not “perfection” but the delivery of whatever human feeling we prefer. Nothing would stop an AGI system from replicating a painting with human-like irregular brush strokes and unfinished lines; nothing about AGI confines its paintings to a clean-cut, modernist style.
I raise this alarm not to suggest that this reality will arrive tomorrow or the year after, but to show that a world in which the uniqueness of human skill and our mastery of any craft is fundamentally challenged may not be desirable. Unfortunately, current regulatory policies, focused more on innovation or vague oversight, leave the door open for these ends to materialize once the capabilities arise. Without deep moral consideration of how far we let AI reach, it would be foolish to expect bounds on its development.
Of course, the reality I described, one of a diminished human spirit, is not the only potential AGI future. On the positive side, we may experience a kind of utopia: freedom from labor, abundant goods, beautiful surroundings. Still, is this vision being actively pursued by modern policy, or is it simply a coin flip, with many adverse outcomes still in contention?
This discussion is not an attack on the use of AI generally, nor a call to ban it outright. Simply put, as in every technological era, our perception of a new capability’s reach is short-sighted. I imagine few people expected TikTok’s chokehold on American attention spans when the internet era began. Yet now that we know the addictive effects of short-form scrollable content, many might have argued for opting out of its development.
Some of the fundamental value of our short human lives arises from our interactions with each other, the struggles we face and solve, and the art and stories we create. In a world of pure AGI, the state of man would be reduced to that of an automated consumer, buying automated products promoted by ads designed by automated artists for CEOs with automated decision-making systems. What is the point of such abundance, and such potential liberty from labor, if we are left in a world of over-sensationalized, automated junk? With AGI, we must work to save people amid a job crisis and to protect identities from threats like deepfakes, but equally, we must consider what bounds of our lives we will allow it to take hold of.