What Scared OpenAI Into Firing Sam Altman?

Blog #25

30/11/2023

Time To Read: 12 Min

The abrupt firing of Sam Altman from the helm of OpenAI, followed by his equally surprising reinstatement, hinted at something that spooked the company’s board—a development so significant it rattled the very foundations of this AI industry leader. Known for its cutting-edge advancements in artificial intelligence, OpenAI saw its internal drama do more than raise eyebrows; it triggered a wave of speculation about what could have scared an organization at the forefront of AI research. This blog delves into the series of events that led to this tumultuous period in OpenAI’s history, uncovering the groundbreaking AI breakthrough that sparked fear and fascination in equal measure.

OpenAI’s Drama Timeline

  • Initial Firing of Sam Altman: On November 17, 2023, OpenAI’s board dismissed CEO Sam Altman. This decision was attributed to concerns about Altman not being consistently candid in his communications with the board, which hindered their ability to exercise their responsibilities. This action sent shock waves across the tech industry. Following this, OpenAI’s Chief Technology Officer, Mira Murati, was appointed as interim CEO, and a formal search for a permanent CEO was announced.
  • Internal Reactions and Developments: The firing of Altman came as a surprise to many, including Altman himself and OpenAI President and co-founder Greg Brockman, who subsequently quit the company. This management change was unexpected for many employees and caused significant disruption within OpenAI.
  • AI Breakthrough and Letter to the Board: Prior to Altman’s firing, OpenAI researchers had discovered a potentially significant breakthrough in artificial general intelligence (AGI) and had written a letter to the board (more on this below), warning of its potential to threaten humanity. This development and the letter were reportedly key factors in the board’s decision to oust Altman. More than 700 employees threatened to quit and join Microsoft in solidarity with Altman following his dismissal.
  • Altman’s Brief Stint at Microsoft: After his dismissal from OpenAI, Altman briefly joined Microsoft as the head of a new artificial intelligence research team. He was accompanied by other former OpenAI staff, including Greg Brockman.
  • Reinstatement as CEO: The situation culminated with Altman being reinstated as CEO of OpenAI on November 21, 2023, following marathon discussions about the future of the company and days of high drama.

‘Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I’ve gotten to be in the room when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime.’

– Sam Altman, during his speech at the Asia-Pacific Economic Cooperation summit, one day before being fired.

Why did this Happen?

The tension between OpenAI’s board and CEO Sam Altman was characterized by differing perspectives on the company’s direction. The board, which is part of a nonprofit governing OpenAI’s monetization strategy, seemed to focus on the mission of making AI beneficial to humanity, as indicated by board member Ilya Sutskever. This perspective clashed with Altman’s commercial ambitions for OpenAI, leading to a power struggle. Sutskever’s belief that removing Altman was necessary to protect this mission highlights the board’s concern over the balance between ethical AI development and commercial growth, a fundamental difference that was a key factor in the board’s initial decision to dismiss Altman​​.

What was the Breakthrough? (Project Q*)

Project Q*, a significant development from OpenAI, represents an advanced AI model that excels in learning and performing mathematics. Unlike previous AI models like ChatGPT and GPT-4, which had limitations in consistently performing basic math functions, Q* is designed to overcome these challenges. It incorporates new techniques and approaches to enhance accuracy and reliability in solving mathematical problems. This development signifies a profound advancement in AI’s capacity for complex reasoning, understanding abstract concepts, and planning multiple steps ahead.

Q* is believed to be based on Q-learning, a reinforcement-learning technique in which an agent learns the value of its actions autonomously through trial and error, in contrast with OpenAI’s earlier emphasis on Reinforcement Learning from Human Feedback (RLHF), which relies on human input. The rumored breakthrough in Q* may involve integrating an efficient heuristic into Q-learning, which could be a monumental step forward, enabling the AI to foresee optimal steps and solutions. This capability would allow the AI to navigate complex problems more effectively while avoiding suboptimal solutions. Q* reportedly enables OpenAI’s large language models (LLMs) to handle math and logic problems directly, a task previously reliant on external computer software.
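For readers unfamiliar with Q-learning, here is a minimal, illustrative sketch of the classic tabular algorithm on a toy environment. The corridor world, reward scheme, and hyperparameters below are purely didactic assumptions for this blog; they say nothing about OpenAI's actual, undisclosed implementation.

```python
import random

# Toy environment: a 1-D corridor of states 0..4. The agent starts at
# state 0 and earns a reward of 1.0 for reaching state 4 (the goal).
N_STATES = 5
ACTIONS = [-1, +1]               # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

# Q-table: estimated value of taking each action in each state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action, clamping to the corridor; reward only at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(500):             # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action,
        # occasionally explore a random one.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        # Core Q-learning update: nudge Q(s, a) toward the observed
        # reward plus the discounted value of the best next action.
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy moves right in every non-goal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

The key property to notice is that the agent needs no human labels: value estimates propagate backward from the single goal reward, which is the autonomy the article contrasts with RLHF.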

The advancements in Q* suggest a significant stride towards more sophisticated and autonomous AI systems. Its potential to understand and solve intricate mathematical problems paves the way for groundbreaking advancements in the field, including applications in scientific research and engineering. However, while Q* is a leap forward in AI’s mathematical capabilities, it represents an early stage of development rather than the achievement of superintelligence or AGI. The technology is still in its infancy, but it marks a major step on the road toward AGI, as defined by OpenAI, offering exciting possibilities while raising important questions about the future of AI development.

The ‘Leaked’ Letter

On top of this, there is a supposedly leaked OpenAI letter which suggests that this new AI might be somewhat self-aware, understand what it is doing and why, and is therefore able to explain why it produces specific outputs. If true, this would not only be a massive upgrade to AI, but arguably the first real spark of AGI. However, the letter also appears likely to be fake, so take this with a grain of salt.

The Implications of Q*

The second part of the leaked letter, and one of the general worries arising from the Q* news, concerns cryptography, cybersecurity, and encryption in general. The claim is that Q* has supposedly ‘bootstrapped itself to a level of mathematical understanding far beyond the best human mathematicians’, and that with this you could train it on encrypted pieces of text and it would simply decrypt that secured communication without knowing the decryption key. This could spell disaster for numerous industries and the world at large. Secure communications, financial transactions, cryptocurrency, and cybersecurity all rely on forms of cryptography, and the fear is that this new AI could crack them all, with secure messages decrypted almost instantaneously. The chaos that would ensue is hard to fathom.

With this in mind, here are some early thoughts on the impacts and ramifications of the Q* model, if what we assume about it is true.

  • Regulatory and Governance Challenges: Governments and international bodies may struggle even more than they already do with emerging tech to keep up with the pace of AGI development, leading to an even greater regulatory lag. This gap could create challenges in establishing legal and ethical frameworks to govern AGI use and its societal impact. It could essentially leave private, profit-seeking corporations like Microsoft (which now holds a non-voting seat on OpenAI’s board) free to guide AI development and commercialization with relatively little hindrance.
  • Redefining Human-AI Collaboration: This AI might be capable of more autonomous decision-making and creative problem-solving, changing how humans and AI collaborate. It could take on more leadership roles in projects, challenging our current understanding of AI as merely a tool that most people are only now getting the hang of.
  • Shift in Labor Market Dynamics: While not AGI, this advanced AI could still automate more complex tasks than current AI, impacting jobs that require higher cognitive functions. This could necessitate a reevaluation of job skills and training programs.
  • Empowering Small and Emerging Businesses: The advent of this advanced AI could level the playing field for smaller companies. Unlike traditional industries where large-scale investments and established brand presence are significant barriers, this AI technology could enable small or new companies to quickly adopt cutting-edge tools, making them vastly more competitive. These businesses could leverage the AI’s advanced capabilities to optimize operations, innovate rapidly, and tailor services or products to specific market needs with unprecedented precision. This democratization of technology could lead to a surge in entrepreneurial activity and innovation, challenging the dominance of larger corporations and potentially reshaping market dynamics across various industries.

Looking Forward

In light of recent advancements in AI, particularly with developments like OpenAI’s Q*, it’s becoming increasingly clear that the pace of technological change is accelerating. The traditional 10-year foresight timeline for anticipating major technological shifts might now be too extended. Instead, we may need to adjust our expectations and prepare for radical changes within shorter 2-5 year periods. This acceleration brings to mind Ray Kurzweil’s concept of the technological singularity, suggesting that this pivotal moment in human history, where AI surpasses human intelligence, might arrive sooner than previously anticipated. These developments underscore the importance of agility and adaptability in our approach to future technological advancements.

