
The Bletchley Declaration is no game changer, but it’s a solid start to the global fight for AI safety

Countries attending the U.K.’s AI Safety Summit have released a declaration named after the venue, Bletchley Park, where codebreakers including the brilliant and tragic Alan Turing shortened World War II by a couple of years.

The Bletchley Declaration is, in itself, nowhere near as much of a game changer as Turing’s bombe was. Unsurprisingly, given the flurry of lobbying that’s taken place in the run-up to the event, it mostly just serves as a pretty good snapshot of what 28 countries (and the EU) currently understand AI’s promises and risks to be.

The communiqué talks about the importance of “alignment with human intent” and points out that we really need to work on better understanding AI’s full capabilities. It notes the potential for “serious, even catastrophic, harm, either deliberate or unintentional,” but also recognizes that “the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed.” These are broad strokes, but they acknowledge the concerns that many have about AI’s immediate impact, as opposed to more arcane fears about the potential misdeeds of a future, rogue artificial general intelligence.

Civil society must play a part in working on AI safety, the document declares, despite the complaints of civil society groups that they have been shut out of the summit. Companies building “frontier” AI systems “have a particularly strong responsibility for ensuring the safety of these AI systems, including through systems for safety testing, through evaluations, and by other appropriate measures.”

There’s not much in here in the way of firm commitments and tangible measures, which is what you might expect from a declaration that: a) is the first of its kind; and b) is a compromise between frenemies and outright rivals with conflicting imperatives and legal systems, like the U.S., the U.K., the European Union, and China.

British commentators have noted the U.S.’s decision to use the summit to announce its own AI Safety Institute, which they say takes the shine off British Prime Minister Rishi Sunak’s recent announcement of a U.K. AI Safety Institute as a way to “advance the world’s knowledge of AI safety.” But I’m not so sure—the White House was careful to note that the U.S. institute will collaborate with its British counterpart, so I don’t really see how anyone’s a loser in this scenario.

As for China, the British government has been keen to keep the superpower in the room, but at arm’s length—Deputy Prime Minister Oliver Dowden talked up China’s attendance, but also said it “might not be appropriate for China to join” certain sessions “where we have like-minded countries working together.”

The Financial Times also notes that several of the Chinese academics attending the summit have signed onto a statement calling for stricter measures than those included in the Bletchley Declaration, or in U.S. President Joe Biden’s executive order earlier this week, to address AI’s “existential risk to humanity.” This isn’t the official Chinese line just yet, but it may indicate where that line is headed. There’s certainly a lot of scope for discord as the U.S. and China race for so-called AI supremacy, whatever that means.

So not everyone is entirely on the same page, but that was never going to be the case. I’d call this a promising start for international cooperation on a subject that, let’s not forget, was on very few people’s radars as a serious threat before this year. Crucially, these summits will be regular occurrences: the next one will take place in Korea in six months, and another in France a year from now. Let’s just hope those events are as inclusive as the Bletchley Declaration promises.

More news below.

David Meyer

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

NEWSWORTHY

WeWorked. The Wall Street Journal reports that WeWork is about to file for bankruptcy. It’s been scrambling to stave off a default on its bond payments, and the clock will reportedly run out next week. WeWork’s share price cratered on the news, and has now lost more than half its value this week.

Meta’s ad-targeting ban. The EU’s privacy regulators have told the Irish Data Protection Commission to ban behavioral ad-targeting on Facebook and Instagram across the bloc. As Reuters reports, this comes at the request of Norway, which issued such a ban back in August and has been fining Meta around $90,000 each day since then for breaching users’ privacy. The EU’s top court sank Meta’s legal basis for targeted advertising in July, and the company is now preparing to let European users pay a subscription fee for Facebook and Instagram, rather than being forced to pay with their data.

YouTube vs ad blockers. YouTube really doesn’t want people to use ad blockers. It’s been telling people with the tools installed that they’re violating its terms of service, and they have to either agree to see ads or pony up for YouTube Premium. As The Verge reports, YouTube said the blockage of playback for ad-blocker users was a “small experiment” a few months ago, but now it’s a “global effort.” Meanwhile, European privacy activist Alexander Hanff recently filed a complaint over YouTube’s use of JavaScript code to detect ad-blocking.

SIGNIFICANT FIGURES

60%

—The proportion of X users in the U.S. who incorrectly think a blue check means an account is authentic, according to a YouGov survey for anti-misinformation outfit NewsGuard. The survey also found that 16% thought “verification” indicates higher credibility, whereas it actually just indicates a willingness to pay for the symbol.

IN CASE YOU MISSED IT

Steve Ballmer started as Bill Gates’s assistant and now he’s on the verge of becoming wealthier than his one-time Microsoft boss, by Christiaan Hetzner

Group representing the New York Times and 2,200 others just dropped a scathing 77-page white paper on ChatGPT and LLMs being an illegal ripoff, by Paige Hagy

Google and the government agree that it hid its search dominance for years; they just disagree on why, by Bloomberg

Nokia is suing Amazon in courts around the world because it says it’s invested billions in patented technologies and the retail giant is using them for free, by Prarthana Prakash

Tesla convinces jury its Autopilot wasn’t at fault in first lawsuit blaming a fatality on the technology to go to trial, by Bloomberg

Your brain activity literally drops when you have a Zoom meeting, research from Yale scientists finds, by Orianna Rosa Royle

BEFORE YOU GO

And the word of the year is…AI. It was always going to be AI. And now, according to the Collins Dictionary, it is. The dictionary’s definition, by the way, is: “The modelling of human mental functions by computer programs.”

Other shortlisted words include “de-influencing,” “greedflation,” and “nepo baby,” which is two words. The 2022 Collins word of the year, in case you’re wondering, was “permacrisis.” Expect other dictionaries to issue their words of the year in the coming weeks.

This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.

Original Article Published at Fortune.com
