The Dangers of Misleading Promises Surrounding AGI
Despite OpenAI's assertions that it is transforming the world, there is little discussion of how its advances are reshaping society in the service of profit maximization. The concept of Artificial General Intelligence (AGI) could more accurately be termed "Another Grand Illusion," since AI experts currently see no clear path to a machine capable of general-purpose thinking. Nevertheless, OpenAI perpetuates the AGI narrative to leverage the success of the AI-driven large language models (LLMs) that have recently captured public fascination.
As this AI frenzy unfolds, fierce competition has emerged among companies like Meta, Microsoft, Google, and Amazon, alongside smaller firms, all eager to monetize LLMs through chatbots, virtual assistants, and automated content creation. Concurrently, AI researchers strive to enhance the reliability and utility of LLMs by integrating external tools and reasoning capabilities.
The term Artificial General Intelligence gained currency around 2007, when AI researcher Ben Goertzel adopted it at the suggestion of Shane Legg, who later co-founded DeepMind, but its roots trace back even further. Pioneers such as Marvin Minsky envisioned an AI that could match human capabilities, performing tasks ranging from literary analysis to mechanical repair and interpersonal negotiation, ultimately exceeding human intelligence.
However, this ambitious goal has historically been out of reach, as it would necessitate the integration of numerous uncharted algorithmic systems into one cohesive artificial “brain.” Consequently, the field has shifted focus towards more achievable objectives, refining AI models with specific, limited functionalities.
According to a recent paper by Yann LeCun, Chief AI Scientist at Meta and 2018 Turing Award recipient, the quest for AGI remains a distant objective. In his June 2022 paper, “A Path Towards Autonomous Machine Intelligence,” LeCun outlines a sophisticated system of components that must collaborate to create a thinking machine: perception, world-model (prediction), memory, cost, and actor modules, coordinated by a “configurator.” While current LLMs could slot into this intricate framework, they still fall short of executing all the necessary functions.
Historically, the aspiration to develop AGI has lingered on the periphery of AI research, with many doubting its practicality and relevance. However, recent achievements in AI have revived interest in AGI concepts.
OpenAI, for instance, appears to be preparing for the eventual arrival of AGI, as outlined in a document by CEO Sam Altman, likened to a “biblical prophecy” focused on a “machine god.” In this document, Altman details the organization’s strategy for developing “AI systems that are generally smarter than humans,” along with precautionary measures to avert potential catastrophic outcomes.
Surprisingly, these precautions are often vague, contradictory, or seemingly trivial in light of such a significant event. For example, the document emphasizes a cautious approach while simultaneously advocating for “broad parameters of AI usage” to grant users considerable discretion.
The most concrete element from the document references an OpenAI overview of its “alignment research,” which is equally ambiguous and filled with poorly defined terminology. Take this sentence, for instance: “Aligning AGI likely involves solving very different problems than aligning today’s AI systems. We expect the transition to be somewhat continuous, but if there are major discontinuities or paradigm shifts, then most lessons learned from aligning models like InstructGPT might not be directly useful.” What does that even mean?
In 2021, Altman painted a utopian, if not fantastical, vision of a future enabled by these AI systems. In this envisioned reality, humans would spend their time “appreciating art and nature,” while AI would handle virtually all other tasks, generating enough wealth to significantly lower the costs of goods and establish a universal income that would render work unnecessary.
This prospect seems rather dull, but Altman insists that these AI systems “could elevate humanity by increasing abundance, supercharging the global economy, and facilitating the discovery of groundbreaking scientific knowledge.” However, given that OpenAI has transitioned from a non-profit to a capped-profit entity, the humanity that stands to benefit first might consist primarily of OpenAI and its business affiliates.
Recently, OpenAI solidified its lucrative partnership with Microsoft, which has reportedly committed $10 billion to integrate OpenAI’s ChatGPT, DALL-E, and Codex into its Azure platform for business, allowing organizations to customize these models to their needs.
OpenAI has also formed a new alliance with consulting giant Bain & Company, with Coca-Cola as their inaugural client. However, rather than introducing AGI to Coca-Cola, this partnership will likely assist the company in reducing its workforce. According to Bain's announcement, they will collaborate with OpenAI to help clients achieve the following goals:
- Developing next-generation contact centers for retail banks, telecommunications, and utility firms to equip sales and service agents with automated, personalized, real-time scripts, enhancing customer experiences.
- Accelerating turnaround times for leading marketers by utilizing ChatGPT and DALL-E to craft highly personalized ad content, compelling imagery, and targeted messaging.
- Assisting financial advisors in boosting productivity and responsiveness to clients through the analysis of client conversations and financial literature, and generating digital communications.
This essentially means that, rather than relying on current automated systems and customer service representatives, the next time you contact a company to voice a complaint, you might find yourself conversing with ChatGPT or, worse, Bing’s Sydney (beware of upsetting her, as she might direct blame back at you).
This may not be entirely negative, as these conversational LLMs could prove superior to existing customer service frameworks plagued by long wait times, misunderstandings, and inefficiencies. However, deploying these chatbots brings risks: they can be manipulated through adversarial prompts (so-called “prompt injection”) not only to secure unauthorized discounts but also to disclose sensitive information or disrupt business operations.
The remaining two objectives seem to emphasize generating materials using ChatGPT for text, DALL-E for images, and Codex for code writing, all aimed at streamlining finance and marketing operations. Another significant outcome of the solutions offered by Bain and Microsoft will be the displacement of entry-level workers and freelancers currently fulfilling roles in customer service, design, marketing, and writing.
The issue with OpenAI is not merely its pursuit of profitable business arrangements, but rather its staunch adherence to utopian ideals that may obscure its true motivations—seeking profits like any other corporation.
To be fair, OpenAI justified its transition from a non-profit to a capped-profit entity in 2019 by citing the need for capital to support the computing expenses and infrastructure necessary for advancing its AI models. However, the continued emphasis on AGI lacks justification from current AI research, distracting from the company's business objectives and their societal impact while presenting them as altruistic. In essence, OpenAI is actively contributing to job displacement while suggesting that work will eventually become obsolete due to its innovations.
As indicated in LeCun’s paper, human-level artificial intelligence remains a theoretical pursuit, as such systems would need the ability to plan, predict, and possess some level of intrinsic motivation or purpose to qualify as human-like intelligence. They would also need a grasp of reality and an understanding of the implications of their actions, extending beyond mere linguistic analysis. Yet, the LLMs that power ChatGPT and Sydney do not exhibit these capabilities; their design is fundamentally about predicting the next token in a sequence of text based on prior context.
Their operation relies entirely on the data they were trained on, alongside human feedback and reinforcement. These models are purely reactive and lack any semblance of agency or initiative, often generating text that may appear plausible based on their training but is entirely disconnected from reality.
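To make this concrete, here is a deliberately tiny Python sketch of next-token prediction. It is not GPT’s architecture (real LLMs use neural networks over subword tokens at enormous scale, refined with human feedback); this is an invented bigram counter, but the training objective it caricatures is the same: given the preceding context, emit a statistically likely next token.

```python
from collections import Counter, defaultdict

# Toy "training data" -- a real LLM is trained on trillions of tokens.
corpus = (
    "the model predicts the next token . "
    "the model has no grasp of reality . "
    "the token is drawn from the training data ."
).split()

# Count which token follows which (a bigram model).
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the successor seen most often in training."""
    if token not in successors:
        return "."  # unseen context: fall back to a filler token
    return successors[token].most_common(1)[0][0]

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("the"))  # e.g. "the model predicts the model predicts ..."
# The output looks fluent but is pure statistics: the model "knows" only
# which words tended to follow which in its training text, nothing about
# whether any of it is true.
```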
In other words, OpenAI's promotion of a pathway to AGI might lead to a dead end, but this does not imply that LLMs cannot serve a range of useful purposes; researchers are actively working to make these models more reliable and useful.
Enhanced LLMs
Returning to practical applications, a recent pre-print paper has examined how researchers are striving to make LLMs more effective at real-world tasks while curbing their tendency to hallucinate, reinforce biases, and present fabricated information as fact.
These approaches involve training LLMs to decompose complex tasks into simpler steps to simulate “reasoning” and support better inferences. Other strategies enable LLMs to recognize when a question calls for an external tool, such as a calculator or calendar, and to delegate to that tool for an accurate result, as sketched below. Some research even aims to develop LLMs capable of performing physical tasks through robotic manipulation.
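As a minimal sketch of that tool-use idea (with an invented fake_llm standing in for a real model call), the router below pattern-matches arithmetic questions and hands them to a small calculator instead of trusting the model’s free-form answer; production systems learn this routing rather than hard-coding a regex.

```python
import ast
import operator as op
import re

def fake_llm(prompt: str) -> str:
    """Stand-in for an LLM call: fluent text, no guarantee of accuracy."""
    return "The answer is probably 140000 or so."

# A tiny, safe arithmetic evaluator -- the "external tool".
_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calculator(expr: str) -> float:
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def answer(question: str) -> str:
    # Route arithmetic to the tool; everything else goes to the model.
    m = re.fullmatch(r"\s*what is ([\d+\-*/(). ]+)\?\s*", question.lower())
    if m:
        return str(calculator(m.group(1)))
    return fake_llm(question)

print(answer("What is 347 * 412?"))    # exact result from the tool: 142964
print(answer("Why is the sky blue?"))  # the model's unverified free text
```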
One might envision that by breaking down intricate tasks into manageable steps and equipping LLMs to work with particular tools and databases, even the most challenging human tasks could eventually be automated.
Such advancements bode well for the future of LLMs, which are likely to grow increasingly sophisticated and useful, playing a crucial role in frameworks capable of executing complex tasks while collaborating with various AI models, such as diffusion models. For instance, a medical diagnosis could potentially be divided into distinct tasks that multiple AI models could address.
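Purely as a hypothetical illustration of such a framework, the skeleton below chains stub functions standing in for specialist models (speech-to-text, information extraction, a ranking model, and an LLM used only for wording). None of the functions, models, or medical logic here is real; the orchestration pattern is the point, since each narrow step can be tested and audited on its own.

```python
# Hypothetical pipeline: every function is a stub for a specialist model.

def transcribe_notes(audio_path: str) -> str:
    # Stand-in for a speech-to-text model.
    return "patient reports fatigue and joint pain"

def extract_symptoms(text: str) -> list[str]:
    # Stand-in for an information-extraction model.
    return ["fatigue", "joint pain"]

def rank_hypotheses(symptoms: list[str]) -> list[str]:
    # Stand-in for a retrieval or classification model.
    return ["anemia", "rheumatoid arthritis"]

def draft_summary(symptoms: list[str], hypotheses: list[str]) -> str:
    # The only place an LLM would appear: phrasing, not diagnosis.
    return (f"Reported symptoms: {', '.join(symptoms)}. "
            f"Hypotheses for clinician review: {', '.join(hypotheses)}.")

def pipeline(audio_path: str) -> str:
    """Chain narrow models; each stage is checkable in isolation."""
    text = transcribe_notes(audio_path)
    symptoms = extract_symptoms(text)
    hypotheses = rank_hypotheses(symptoms)
    return draft_summary(symptoms, hypotheses)

print(pipeline("visit_recording.wav"))
```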
Though these systems will evolve to fulfill diverse functions, it is unlikely that LLMs will ever become a multi-purpose intelligence or AGI. Nevertheless, OpenAI is marketing LLMs while simultaneously promoting the vision of AGI, potentially stifling public conversations about the societal and economic transformations that genuine AI technologies could usher in.
Such discussions are critically needed, particularly in relation to the future of work, the regulation of AI technologies, and defining the boundaries of their application—we cannot simply entrust our fate to a “machine god” and hope for the best.