The Letter to Halt AI Experiments Is a Sham

David "jedi" Lewis
9 min read · Apr 1, 2023


Dissecting the True Motives Behind the Open Letter Calling for an AI Development Moratorium. A Sudden Burst of Altruism or a Clever Ploy to Catch Up?

“The Death of Text” — by David Lewis (author)

A recent open letter, “Pause Giant AI Experiments,” calling for a 6-month moratorium on AI development has sparked quite a debate and has garnered over 2,000 signatures to date. While the authors valiantly claim to champion humanity’s best interests against the dangers of the impending Textocalypse, I can’t help but wonder if their plea is little more than a thinly veiled attempt to give latecomers experiencing FOMO (Fear Of Missing Out) a chance to catch up and cash in on the AI gold rush.

“Everyone wants a hand in the pot” — by David Lewis (author)

The Letter: FEARS

“AI systems with human-competitive intelligence can pose profound risks to society and humanity”

The letter was issued by the Future of Life Institute, a non-profit primarily funded by the Musk Foundation, as well as by the London-based group Founders Pledge and the Silicon Valley Community Foundation, according to the European Union’s transparency register.

The letter, in all its earnestness, raises several concerns about the breakneck pace of AI systems development, such as the spread of misinformation, the automation of jobs, and the risk of losing control over civilization itself. It calls for a temporary halt to AI development to allow time for experts to create shared safety protocols and improve AI governance systems. While the authors’ apprehension may seem understandable, it’s crucial to recognize that the very things they fear might be precisely what society needs.

Or is the letter a smokescreen for a much more self-serving agenda?

I think we’ll find primarily four groups of people who will sign, or have already signed, the letter:

  • Those who feel their business or job will be threatened by advanced AGI (Artificial General Intelligence), i.e., those whose work an AI could do or replace.
  • Those who are currently developing, or want to develop, their own AI systems.
  • Those who truly do not understand the technology. It is human nature to fear something potentially powerful that we do not understand.
  • Those who are developing AI systems and are in direct competition with OpenAI.

There’s a reason the letter calls out GPT-4:

“we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

“Oh no!! OpenAI is developing Language Model AI systems faster than we can and is going to take all our business!” There’s only one company the letter is trying to stop: OpenAI.

The Misplaced Moral Fear

But first, let’s tackle the moral fear that the letter proposes.

AI’s all fun and games until the Robot Apocalypse — David Lewis

Are we on the brink of an AI Text Apocalypse that will result in the breakdown of society, as the letter’s authors claim? Well, they are not entirely wrong. The letter raises alarms about AI flooding our information channels with propaganda, automating jobs, developing nonhuman minds that could surpass us, and risking the loss of control over civilization. However, these fears are not only misplaced but could also be viewed as opportunities for positive change in society.

Historically, no great positive change in society has ever been free of negative consequences. But the benefits of rapid innovation often far outweigh the risks.

Flooding The World With Misinformation

“Contemporary AI systems are now becoming human-competitive at general tasks […] Should we let machines flood our information channels with propaganda and untruth?”

As AI systems become more advanced, they will undoubtedly have the potential to generate misinformation. However, this is no different from humans. Yet we understandably want to hold these AI systems to a higher truthiness standard because we’re so accustomed to the calculator: 2+2=4, and what the computer “says” is always true. But AI is now being trained on human information, and the data it trains on often contains falsehoods, propaganda, and misinformation because, newsflash: humans are faulty by nature. We can no longer blindly trust the machine.

“Can I trust you?” — by David Lewis (author)

One person’s stated truth is often another’s lie, and we struggle with this problem significantly in our society today. The Twitter Files have revealed that people in positions of power use social media channels to shape what the common people accept as “Truth” and “Facts,” so that what the average person gets to see and read is the truth those powerful people want us to believe. So how can AI get around this problem? There is inherent bias even in the best systems. There is bias of various sorts in the training data, and there will be bias in whatever governing agency the letter’s authors have dreamt up.

See: How Bias in Natural Language Models will Hold Us Back From Innovation.

This is a losing battle. We faulty humans are never going to consistently generate completely true, unbiased, factual information, and since language models simply assemble the language patterns they’ve learned from us, this “faulty data” will continue to propagate as well. However, while AI systems have the potential to generate and spread misinformation, they also possess a superior capability to counteract it.
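To make that concrete, here is a deliberately tiny sketch in Python. A toy bigram model (a crude stand-in for a real language model, which is vastly more sophisticated) is "trained" on a few sentences that include a falsehood, and it happily regurgitates that falsehood alongside the truths, because it has no notion of either; it only knows the patterns it was fed. The corpus and function names are purely illustrative.

```python
# A toy bigram "language model" trained on a corpus that contains a falsehood.
# Real language models are vastly more sophisticated, but the principle holds:
# they reproduce the patterns (true or false) present in their training data.
import random
from collections import defaultdict

corpus = (
    "the earth is round . "
    "the earth is flat . "   # a falsehood sitting right in the training data
    "the moon orbits the earth . "
).split()

# Count which word follows which (a bigram table).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str = "the", max_words: int = 8) -> str:
    """Sample a sentence by repeatedly picking a word that followed the
    previous one during training; falsehoods are sampled as readily as truths."""
    words = [start]
    while len(words) < max_words and words[-1] in following:
        words.append(random.choice(following[words[-1]]))
        if words[-1] == ".":
            break
    return " ".join(words)

random.seed(1)
for _ in range(3):
    print(generate())
```

Run it a few times and you will see both "the earth is round" and "the earth is flat" come out, with no way for the model itself to tell you which one to believe.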

“Analyzing Data, Humans used to do this” — by David Lewis (author)

Advanced AI systems can be employed to detect, analyze, and debunk false information, thereby ensuring a more accurate and reliable information ecosystem. As we continue to innovate with these sophisticated AI models, we’ll gain the capability to build the systems necessary to harness the technology and improve the quality and reliability of information in our society. But it will never, ever, be enough.
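What might that look like in practice? Here is a minimal, hypothetical sketch of AI-assisted misinformation triage: a language model is asked to assess a claim and flag it for human fact-checkers. It assumes the OpenAI Python client as it existed in early 2023 (openai.ChatCompletion.create), an API key in the environment, and a placeholder model name and prompt; it is an illustration of the idea, not a production fact-checking pipeline.

```python
# A minimal sketch of AI-assisted misinformation triage (illustrative only).
# Assumes the OpenAI Python client circa early 2023 (~v0.27) and an API key
# in the OPENAI_API_KEY environment variable; model and prompt are placeholders.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def triage_claim(claim: str) -> str:
    """Ask a language model to assess a claim and recommend whether a human
    fact-checker should review it. The answer is advisory, not a verdict:
    the goal is to accelerate human review, not replace it."""
    response = openai.ChatCompletion.create(
        model="gpt-4",  # placeholder; any capable chat model
        messages=[
            {"role": "system",
             "content": ("You assist fact-checkers. For the claim given, reply with "
                         "one of: LIKELY_ACCURATE, LIKELY_MISLEADING, NEEDS_REVIEW, "
                         "followed by a one-sentence rationale.")},
            {"role": "user", "content": claim},
        ],
        temperature=0,  # keep the triage output as consistent as possible
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(triage_claim("The moon landing was filmed in a studio."))
```

The triage framing matters: the model speeds up human reviewers rather than replacing their judgment, which at least softens the bias problem described above.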

If the letter’s authors were to get their way, we’d start to see far more manipulation by people in power using AI than we’ve seen to date through information controlled by social media and news media. Today’s language models are trained on data that extends beyond the scope of that control, and that is scary to those who want to retain that power over us. If we set up systems of government to control these AI systems, beware of what they will be used for. “Loss of Control,” as the authors put it, is a significant fear. But who, exactly, will be losing control? Maybe the people who fear losing control are exactly the ones who should lose it.

Job Automation

“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”

“The Automation of Everything” — by David Lewis (author)

While the automation of jobs may initially seem like a threat, it can lead to increased efficiency, reduced costs, and improved working conditions. AI-powered automation could free up time for people to pursue more creative, fulfilling, and meaningful endeavors, fostering a more humane and balanced society. I wrote about this much more extensively in ChatGPT is Going To Take Your Job.

The authors of the letter sound like they’re arguing against the invention of the agricultural harvester in the early 1900s. These systems of automation are needed, and history is full of such inventions that have positively improved society.

“We’re afraid of what we cannot control” — by David Lewis (author)

The current AI revolution has been in development for over a decade, so this isn’t anything new, but recent breakthroughs in the technology created a cascade effect that took the world by storm when OpenAI released ChatGPT, its high-powered AI language model, in December 2022. Now companies are racing to innovate and cash in on these powerful new language models, which will prove to revolutionize many business sectors across society, and make no mistake: the shake-up of this AI revolution has begun.

We’ve already seen a massive influx of new technologies in the AI space in just three months: the multimodal GPT-4 release, Bing Chat, and now even Google’s Bard is available to the public. Business technologies such as Microsoft 365 Copilot, GitHub Copilot, and the Azure OpenAI Service make rapid business acceleration possible, and that now includes taking your job.

But This Moratorium Letter Is Misleading

Under the Covers: The Self-Serving Rationale

The letter is a cunning stratagem to level the playing field for those lagging in the AI race. As AI innovation quickly becomes a veritable treasure trove, it’s no wonder some feel threatened or left behind. The authors’ moral fears, while seemingly genuine, could be a ploy to mislead the public and slow down innovation at the AI companies that are rapidly gobbling up AI market share. By painting a dystopian picture of an AI-driven future, the letter creates a sense of panic that diverts attention from the immense potential AI offers. It’s not hard to imagine that this fear-mongering might be driven by an underlying desire to hinder AI development. By slamming the brakes on their competitors, if successful, the letter’s authors might just give themselves and their friends a chance to catch up or devise ways to monetize this rapidly evolving technology. How altruistic, indeed!

“The AI Arms Race” — by David Lewis (author)

We’ve already seen glimpses of what this looks like with Google, which suddenly found itself behind in the AI arms race after the release of Microsoft’s Bing Chat and scrambled to release Bard in an attempt to defend its rapidly eroding market share in search, a domain where it has been the unchallenged champion for decades. Companies with no powerful AI technology at their disposal are left falling rapidly behind unless they can find a way to stop or slow their competition down.

However, not content with merely suggesting a voluntary pause, the letter also calls for governments to step in and forcibly impose a moratorium if the AI industry fails to act on its own. I can see absolutely no cause to be suspicious of the motives behind a letter that calls for governments to forcibly impose restrictions and controls on innovation, research, and development in the private sector. Nope, no cause for alarm at all.

If this doesn’t scream “wait for me! I want to cash in on the Gold Rush!” I don’t know what does!

“All Aboard The Golden Express” — by David Lewis (author)

There’s no stopping this AI train. It’s as inevitable as the industrial revolution, the internet, and the iPhone, and the authors know this.

Embrace the Race, Don’t Stall It

Instead of trying to halt AI development with all the subtlety of a sledgehammer, a more productive approach would be to encourage collaboration and open dialogue among stakeholders. By working together as the technology is being developed, we can address the concerns raised in the letter without hindering progress, as the very models that are being developed contain the future tools we’ll need to combat misuse.

The moral fears expressed in the letter are misplaced and serve as a smokescreen for those who may wish to slow down AI innovation for their own gain. We’re like a dog chasing its tail. There is no “being prepared” for what’s coming. There is only stalling for the sake of monetary gain, power, and political control. Instead of giving in to panic, we should recognize that the very things the authors fear might be the driving forces behind the positive change AI can bring. By embracing AI’s potential and collaborating on safety, governance, and ethical considerations, we can ensure that the AI revolution benefits all of humanity, rather than succumbing to the misguided fears and hidden agendas of a few.

“They’re one of us now” — by David Lewis (author)


Written by David "jedi" Lewis

@highwayoflife Principal Cloud Engineer for Starbucks Technology. Automator of Things. GitOps Advocate. Life as Code.
