An open letter signed by over 1,100 concerned AI experts has been sent to AI labs around the world, calling for a pause on the development of large-scale AI systems.
AI is one of the most important technologies of our time. It has the potential to solve some of the world’s most pressing problems, from climate change to poverty. But it also poses a serious threat to humanity if it is not developed and managed appropriately.
The open letter to AI labs is important because it shows that there is a growing consensus among AI experts that the risks posed by these systems are real and need to be addressed. Here’s what they had to say and what this means for you.
What does the open letter to AI labs say?
The letter, which can be found here, cites a number of concerns about the risks posed by the rapid advancement of AI, including:
- The potential for propaganda and disinformation.
- Manipulation of public opinion.
- Automation of tasks currently performed by humans, resulting in mass unemployment.
- Lack of oversight and regulation.
The letter’s signatories are calling for a pause on the development of large-scale AI systems (more powerful than GPT-4) until these risks can be properly assessed and managed. The open letter’s call to action is for AI labs to:
- Immediately pause the development of large-scale AI systems. If AI labs don’t, then governments should step in.
- Work together to develop safety protocols for these systems.
- Rigorously audit and oversee these protocols, with independent outside experts involved.
- Work with lawmakers to accelerate the development of robust AI governance systems.
The letter goes on to list some specific projects that the signatories believe would go some way towards achieving this. These include:
- Set up proficient AI regulatory bodies.
- Introduce monitoring and tracking for advanced AI systems and extensive computational capabilities.
- Design authentication and watermarking methods to differentiate genuine from artificial content and monitor model leaks.
- Build a strong auditing and certification infrastructure.
- Define responsibility for damages caused by AI.
- Provide substantial public investment in AI safety research.
- Establish well-funded organizations to tackle economic and political challenges posed by AI, emphasizing the protection of democratic values.
Finally, it ends with some hope: that with a short pause, the race to develop the most powerful AI systems slows and governments and regulators catch up. The signatories argue that only this approach will protect humanity from the potential harms of AI before it’s too late.
Why has the open letter to AI been released now?
Change at such an accelerated pace is not only uncomfortable, but potentially dangerous
The past six months have seen an explosion of AI developments, particularly in the world of generative AI (text and image). As we have previously written about, the competition between Microsoft (and its use of OpenAI’s GPT-4 in the new Bing) and Google (with its latest experiment, Bard) is accelerating, and the search experience looks to be evolving at the same pace.
For this group of signatories, the pace of change has been so fast that it has gone past the point where it can be effectively managed while development continues. It leaves too much risk; it’s like writing an essay while driving at 100mph in a car with no brakes. The only option is to pause progress completely, and to resume only once humanity has caught up.
Could the high profile signatories have other motives by asking for the pause in AI development?
Quite possibly. You could argue that some of the signatories have something to gain (beyond saving humanity) from AI labs pausing development. However, some of the highest-profile signatories, including Steve Wozniak (Apple co-founder), researchers at DeepMind, and even engineers at Google and Microsoft, have been at the forefront of AI’s development.
There were some notable absences, namely the current CEOs of Google (Sundar Pichai), Microsoft (Satya Nadella), and OpenAI (Sam Altman), all of whom have been key instigators of the recent developments, at least from a big-tech perspective.
Why? Microsoft has made the biggest dent in Google’s armour since the search giant’s inception, and Google is feeling the heat to live up to its long-standing claim of being an ‘AI-first company’. There is too much to lose for all of the AI labs involved.
Elon Musk (an early investor in OpenAI who is clearly disgruntled at having pulled his investment from the company too early) is probably the highest-profile signatory to date. He has, of course, openly called AI the biggest threat to humanity and argued that ‘closed source for profit’ companies hold too much power.
That being said, OpenAI’s co-founder and CEO, Sam Altman, regularly posts his thoughts on Twitter about the need for more regulation of AI.
OpenAI does regularly update its policies and ethical standards. But for the most part, large-scale AI labs are self-regulating, which, in the opinion of the letter’s signatories, is not effective enough to manage the power of what is to come from their systems.
But actions speak louder than words, and it’s clear these signatories want this open letter to AI labs and lawmakers to be taken seriously, and for action to be taken immediately.
What does this mean for you?
Search marketers can play a crucial role in the future of AI, if you do it right
- Stay up to date. Knowledge is power.
- Continue to have healthy skepticism towards AI.
- Integrate AI into your workflows.
- Experiment with AI & accept it will sometimes fail.
- Continue to fight for ethics and transparency.
That last point is important for where we are today, and what this open letter to AI labs is fighting for. Search practitioners can help to ensure that AI is developed and used in a safe and ethical manner by:
- Being mindful of the ethical implications of AI:
- Privacy and data protection. Large-scale AI systems can be used to collect and analyze large amounts of data, which could raise privacy and data protection concerns.
- Discrimination and bias. Large-scale AI systems can be used to discriminate against people or groups of people, which could raise ethical concerns.
- Supporting research into the safety and ethics of AI: Search practitioners can support research and advocacy aimed at keeping AI labs accountable and regulated.
- Getting involved in the public debate about AI: Search practitioners can get involved in the public debate about AI by writing blog posts, giving talks, and participating in online discussions & more.