IT industry heavyweights demand moratorium on development of artificial intelligence

Experts working in the field of artificial intelligence and senior executives in the IT industry are calling for a pause of at least six months in the development of powerful artificial intelligence systems, which they consider potentially dangerous to society. An open letter to this effect was published by the Future of Life Institute and signed by more than 1,100 people, including Elon Musk, Steve Wozniak, Evan Sharp, Jaan Tallinn and many others.

In the letter, the authors call on all laboratories studying and developing artificial intelligence, as well as independent experts, to immediately pause the training of artificial intelligence systems more powerful than OpenAI's GPT-4 for at least six months. If such a halt cannot be enacted quickly, they argue, government agencies should step in and impose a moratorium.

This does not mean halting the development of artificial intelligence as a whole, the authors stress, but only stepping back from the dangerous race toward ever larger, unpredictable models with extraordinary capabilities. They also note that artificial intelligence labs and independent experts should use this pause to jointly develop and implement shared safety protocols governing the design and development of artificial intelligence.

The signatories argue that in recent months, artificial intelligence labs have become locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.

Main points

The authors of the letter pose a series of questions: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all work, including the work that gives us fulfillment?

Beyond urging caution in automating work, the authors question whether we should develop non-human minds that could eventually outnumber, outsmart, and replace us. They also warn of the risk of losing control of our civilization if artificial intelligence is allowed to become too powerful. In their view, powerful artificial intelligence systems should be developed only once we are confident that their effects will be positive and their risks manageable.
