Just hours before officials arrived at the AI safety summit hosted by Rishi Sunak, workers were still busy laying carpets in the huge makeshift buildings around Bletchley Park.
The event was organized in just ten weeks – far less time than most international summits – and there was a great rush to get everything ready on time. “It’s like a big birthday party that no one wants to go to,” one guard joked as he checked in the delegates.
The arrival of Elon Musk, the unpredictable head of X (formerly Twitter) and Tesla, threatened to overshadow the entire event. According to insiders, he was surrounded by people who wanted to chat or take pictures.
Organizers were relieved to find him far less demanding and surprisingly calm compared with many other summit participants. But Musk then appeared to mock the leaders in attendance, tweeting a cartoon suggesting they really wanted to use AI for their own purposes.
And negotiations between those leaders did not always go smoothly. Behind the scenes, there was fierce debate over the Bletchley Declaration, in which 28 countries agreed to coordinate work on AI risks, from deepfakes and disinformation to possible human extinction.
Insiders compared the process to the unappetizing business of sausage-making, with each participant trying to push through their own pet priorities.
The result was mixed and fell well short of a new global regulatory framework for artificial intelligence but, crucially, it did not fail completely. As one delegate put it: “Does the Bletchley declaration say much? Not really. But it’s a good start.”

Even its creators acknowledged that the declaration would not be the last word on the issue. Asked whether an international coordinating body would be needed, Lord Camrose, the UK’s AI minister, said: “AI regulation can’t just be done at a national level, it needs a national, multinational and international architecture and that’s coming together.”
He predicted that the next AI safety summit, to be held in South Korea in the spring, would “make things more concrete” about the next generation of AI technologies, which could be far more powerful than anything that already exists.
For some critics, alarmed by the remarkable pace of technological development, that may be a mistake. Andrea Miotti, head of strategy at the artificial intelligence company Conjecture, said: “It’s a good way to kick the can down the road. You can always wait for the next generation, and at some point it will be too late.” The Korea summit, which is being held virtually rather than in person, might ultimately prove no more useful than a “Zoom call”, he said.
But few would deny that important diplomatic breakthroughs were eventually achieved. “It was a pretty epic moment when everyone involved in AI came together,” said Mustafa Suleyman, co-founder of artificial intelligence leader DeepMind.
One of the summit organizers said the decision to hold almost the entire event behind closed doors encouraged open debate. Another praised the role of civil society organizations such as charities, which could hold companies directly accountable for their shortcomings, something governments are reluctant to do.
The controversial decision to invite China to the summit, which angered some Tory MPs, was justified because AI “poses a security risk that affects every member of our species”, Lord Camrose said.
He also denied that Joe Biden’s and Kamala Harris’s interventions on AI undermined this week’s summit, saying: “On the contrary, the vice president could have announced whatever she announced anywhere – she chose to do it here. The president could have picked any day to announce the executive order.” The executive order requires AI developers to share safety findings with the US government.
There were missteps, of course. One delegate pointed to the unfortunate decision to open the summit with a group of ministers from the rich world – Britain, the US and the EU – followed by a separate and implicitly fringe group from countries such as India and Nigeria.
There were also questions about whether the summit’s goal of tackling “frontier AI” made sense, given that the term is not widely used in the tech world. Lord Camrose admitted: “I accept that the term ‘frontier’ does not always mean the same thing to everyone… I have spoken to people who prefer other terms, but I have my own reservations about ‘frontier’. To me it means something different.”
But most participants were satisfied with the outcome. Ciaran Martin, former head of the National Cyber Security Centre, said: “There is no talk of a global cyber security-style treaty on what is acceptable and what is not, but the summit was broadly useful.”
He said the threat that AI could disrupt the status quo (where advanced cyber capabilities are limited to big governments and defenses evolve largely in lockstep with attacks) has helped focus attention on the need for an overhaul: “Historically, West Coast tech companies have taken no real responsibility at source for the safety of their technology – that is now starting to change.”
Source: iNews
