Is AI Hype, Hope, or Hallucination?
In economics, the classic production function says that economic output is a function of capital and labor, simplified here as Y = KL. Capital can be money, machinery, or property, but most importantly it is controlled by capitalists. Capitalists’ natural incentive is to substitute capital for labor: doing so maximizes profits that would otherwise have to be shared with labor, and replacing workers with machinery removes the risk posed by labor’s human agency. One scenario is that this process expands output so much that the negative effects on labor are offset by increased consumer options and new employment opportunities in a larger, more developed economy. Another scenario is that output doesn’t expand, labor absorbs those negative effects, and the gap between labor and the capitalists widens.
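For readers who want the textbook version of this logic, here is a minimal sketch (my illustration, not taken from the article’s sources): the Cobb-Douglas form of the production function makes the split between capital’s and labor’s shares of income explicit, and the substitution incentive falls directly out of those shares.

```latex
% Cobb-Douglas production function (standard textbook form, shown only
% to illustrate the simplified Y = KL above):
%   Y = output, A = technology, K = capital, L = labor,
%   alpha = capital's share of output, with 0 < alpha < 1.
\[
  Y = A\,K^{\alpha}L^{1-\alpha}
\]
% Under competitive factor pricing, income splits into a capital share
% (alpha * Y) and a labor share ((1 - alpha) * Y), so anything that
% raises alpha, such as automation substituting capital for labor,
% shifts income toward capital even when Y itself does not grow.
\[
  \underbrace{\alpha Y}_{\text{to capital}} + \underbrace{(1-\alpha)Y}_{\text{to labor}} = Y
\]
```

The two scenarios above are the two ways this can resolve: either Y grows enough that labor’s smaller slice of a bigger pie still improves, or Y stays flat and labor simply loses share.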
Goldman Sachs forecasted that AI could replace 300 million jobs globally, noting that the replacement could be even more pronounced in developed economies. Their conclusion was that within 10 years AI could deliver a 7% increase in global GDP. Let’s call this the “assume everything goes right” case. Even in this case, they admit there will be many displaced workers in the short term; their contention, however, is that AI will be so productive that over 10 years the displaced will rebound. But the Goldman Sachs report leaned on academic economic reasoning to argue its case. Step into reality and we see another story.
The past 12 months have seen the global tech sector (centered in Silicon Valley) go through an internal correction. Venture capital funding is at historic lows, public market appeal has dimmed, institutions like Silicon Valley Bank have cratered, and interest rates have risen at their fastest pace in decades. These conditions, combined with the rise of AI, led to over 350,000 people losing their jobs in tech over the past year. The appetite for new hiring has shrunk too: “Recent data from Indeed shows a more than 50% decline in software-development job postings compared to a year ago” (Insider). Between memos, interviews, and earnings calls, the global tech sector is not merely saying it wants to make its workers more productive. The global tech sector, or more precisely the capitalists of the global tech sector, is saying it wants to get rid of its existing labor, replace it with AI, and has no plans to rehire in the future.
While we are only in the early days of the AI revolution, it seems odd that academic economists and their counterparts in high finance describe the inverse of what is occurring in the actual market. The global tech sector is the tip of the spear for the rest of the economy. They are saying, right now, that they want to fire workers, not retrain them with AI to raise their productivity. They are saying they don’t want to hire workers in the future because AI can be used to keep headcounts low. Remember: the smaller the share of labor in production, the more profit capitalists retain. So we are confronted with a pressing problem: society made more unequal, with no plan to support the displaced workers. The “assume everything goes right” case relies on the capitalists doing the opposite of what they are explicitly saying they are doing right now, which is firing and not rehiring.
The other key question is: will AI be as revolutionary and pervasive as the hype suggests? While all our imaginations are tickled by sci-fi fantasy, we must not let lofty tropes cloud reality. What we actually mean by “AI” today is a particular type of technology called a large language model (LLM). LLMs are simply probability machines. That’s it. They don’t understand the words they are ingesting or spitting back out at us. They calculate relationships between words, which they see as numbers, based on immense amounts of data and conditions set by software engineers. This allows them to do pretty impressive things, but they are nowhere close to some of our sci-fi fantasies.
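To make “probability machine” concrete, here is a minimal sketch in Python with toy, hand-picked numbers (not a real model, and not how any specific product is implemented): given a context, a language model assigns a score to every candidate next word, converts those scores into probabilities, and emits a likely continuation. Nothing in this loop involves understanding; it is arithmetic over numbers that stand in for words.

```python
import math
import random

# Toy, hand-picked scores for the next word after the context
# "The capital of France is". In a real LLM these scores come from
# billions of learned parameters; here they are simply made up.
candidates = {"Paris": 9.1, "Lyon": 4.2, "London": 3.0, "banana": -2.5}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(candidates)
for word, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{word:>8}: {p:.4f}")

# Generation is a draw from this distribution. "Paris" wins most of the
# time not because the model knows geography, but because its number is
# biggest; less probable (and wrong) words are always possible draws.
words, weights = zip(*probs.items())
print("sampled:", random.choices(words, weights=weights, k=1)[0])
```

Scale this up to tens of thousands of candidate words and many layers of learned scoring and you have the essence of an LLM; the mechanics stay the same.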
Ironically, the best applications for this type of AI are in what we might call fictional activities. MidJourney is an AI app that turns user-written text prompts into fully visualized images. Eleven Labs is an AI app that turns user-written scripts into voice emulations built from sample audio. Even the most famous AI app, ChatGPT by OpenAI, can easily be used to write movie scripts automatically. These are just a few of a plethora of apps that leverage AI to create fictional products, and such tools have the potential to displace a lot of work in graphic design, marketing, and other creative fields.
When applied to non-fiction, AI’s potential appears far more limited. We just saw one of the first times AI actually touched the real world. “Last week, attorney Steven Schwartz allowed ChatGPT to ‘supplement’ his legal research in a recent federal filing, providing him with six cases and relevant precedent — all of which were completely hallucinated by the language model. He now ‘greatly regrets’ doing this, and while the national coverage of this gaffe probably caused any other lawyers thinking of trying it to think again…All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being…These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias” (TechCrunch).
Much of the coverage of AI has been around fictional applications. Many were impressed by capabilities like image generation, but the stakes never amounted to more than “that’s pretty cool.” These fictional AI apps don’t really have real-world consequences. The recent courtroom example showcased what happens when you try to bring AI into the real world. AI provided bogus information in an official legal document. It revealed a huge problem for AI: hallucinations. That is just a sophisticated term for what it simply is: error. LLMs don’t know what is true or false; they only see numbers and probability relationships. Their goal is to output the most probable relationships, which approximate answers to our prompts. Unfortunately, this doesn’t always give us the truth, only content that looks very much like the truth. For example, I tested this by asking a series of economic research questions. The AI provided very plausible answers and links to sources at real data providers. But when I went to double-check at the source, I realized there was no such data there. The AI had made up fake statistics, claimed a legitimate institution as its source, and given a realistic-looking URL.
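You can watch this failure mode emerge in even the simplest probability-of-the-next-word model. The sketch below (a toy word-level Markov chain over three invented, fictional citations; nowhere near a real LLM, but driven by the same logic) follows probable next words and routinely splices fragments of different entries into a “citation” that has the right shape yet matches none of its sources.

```python
import random
from collections import defaultdict

# Three invented, fictional citations used purely for illustration; a
# real LLM trains on vastly more text, but the failure mode is the same.
corpus = [
    "Doe v. Acme , 410 F.2d 113 ( 1973 )",
    "Roe v. Omega , 347 F.2d 483 ( 1954 )",
    "Poe v. Delta , 521 F.2d 702 ( 1997 )",
]

# Build a bigram table: each word maps to the words observed after it.
transitions = defaultdict(list)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        transitions[a].append(b)

def generate(start, max_len=12):
    """Follow observed next words: plausible form, zero grounding in truth."""
    out = [start]
    while out[-1] in transitions and len(out) < max_len:
        out.append(random.choice(transitions[out[-1]]))
    return " ".join(out)

# Most draws recombine the party names, volume numbers, pages, and years
# of different "cases" into a citation that looks right but is made up.
for _ in range(3):
    print(generate("Doe"))
```

Every output is maximally citation-shaped, because shape is all the model learned; whether a given combination corresponds to anything real is simply not a quantity it tracks. That is a hallucination.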
It is possible that hallucinations can be remedied, but AI’s rapid expansion will increase risk. It is unclear whether AI technologists can account for every edge case if AI expands beyond the “that’s pretty cool” sphere to where Goldman Sachs thinks it will be in 10 years. Can doctors be sued for malpractice if they provided care based on an AI hallucination? Can a real estate developer be sued if their building was engineered on designs based on an AI hallucination? The list of potential problems is infinite. Who is willing to create a giant liability by using AI for non-fictional activities that have real-world consequences? The usual reply is that we already have digital technology and automation services, and that our professions have grown in a way that still maintains a level of human checks and balances. But let’s not pretend that reply has real meaning. Part and parcel of this AI revolution is that the human checks and balances get automated too. The global tech sector is firing at an incredible rate with no plans to rehire. Goldman Sachs forecasted 300 million global job losses, with two-thirds of certain professions especially vulnerable. You don’t do that and reap the maximized profits or the 7% global GDP growth by keeping those costly and burdensome human checks and balances around.
The true hurdle AI might not get over is the infinite downside in liability for non-fictional activities with real-world consequences, which overshadows its positive yet finite upside. Let’s call this the “I’d rather not be sued for eternity” case. If this is what the future holds, the gains from AI productivity could be very positive but limited to creative fields that primarily output fictional products. There would be a hard ceiling on the non-fiction economy, and human labor would be stickier. AI might augment, but it could never fulfill the forecasts that corporate boardrooms and academic economic ivory towers think it can. The medium term could also reveal private-sector turmoil as capitalists incorrectly hollow out their labor pools in expectation of AI gains that never materialize.