The Guardian

Latest news, sport, business, comment, analysis and reviews from the Guardian, the world's leading liberal voice

A brutal wrestle on a plane, passengers outraged, attendants helpless: I saw the UK’s deportation policy at work | Hugh Muir

On the runway at Gatwick, the visceral reality of forced removals was laid bare. If only more could see what is done in our name

It’s Gatwick airport, mid-afternoon, and on the runway there is turmoil. Public policy playing out in full view of the public. Voters, citizens, seeing what they don’t normally see.

“Murdaar, murdaaaaar,” screams the bucking, brawling, brawny man as a clutch of male security officials, with solid intent and hi-vis yellow jackets, collectively fight to pin him into a seat at the back of the airliner. “Me caaan go back a Jamaica,” he hollers, the visceral sound reverberating around the 777. “Dem kill me bredda. Dem a go kill me.”

Hugh Muir is executive editor, Opinion

Do you have an opinion on the issues raised in this article? If you would like to submit a response of up to 300 words by email to be considered for publication in our letters section, please click here.

Continue reading...

The Sheep Detectives review – Hugh Jackman gives a flock in baa-rking mad cosy crime caper

Jackman plays the farmer in this Babe-style feelgood family film about plucky sheep who help solve a murder

Here is a murder mystery that’s like a cross between Babe and The Thursday Murder Club, in which instead of plucky underdog retirees solving crimes, it’s … sheep? With a touch of Watership Down somewhere in the mix, the premise may put some viewers off. Actually, it makes for a sweet-natured family comedy, and a spiky and amusing cameo from Emma Thompson certainly doesn’t hurt.

Screenwriter Craig Mazin has adapted the bestselling book Three Bags Full by German crime author Leonie Swann, and the Despicable Me veteran Kyle Balda directs, shepherding a boisterous herd of live-action stars and digitally created woolly performers. The setting is the English village of Denbrook, swathed in what looks like digitally enhanced Californian sunshine, where Hugh Jackman plays George Hardy, a shepherd who lives in an American-looking stainless steel trailer on his field. George controls his flock without recourse to the traditional sheepdog, relying instead on his instinctive relationship with them all. And he is dedicated only to raising sheep for their wool, not their meat – which is not exactly the attitude of the local agribusiness types who have designs on his land.

Continue reading...

‘I can run 1:58’: Sabastian Sawe sets new target after historic London Marathon win

  • Berlin on agenda in September for new record holder

  • Athlete wants more drugs testing to show ‘we are clean’

Sabastian Sawe believes it is only a matter of time before he runs a marathon in one hour and 58 minutes after his superhuman sub-two-hour performance in London on Sunday.

Speaking the day after he ran 1hr 59 mins and 30 seconds to break the world record by 65 seconds, the 31-year-old Kenyan confirmed that he planned to race again in the autumn – although he hasn’t decided where yet.

Continue reading...

Kirigasaku#8

tetsuo5 has added a photo to the pool:


@20260321 Asahi Ward, Yokohama / Kirigasaku GX7Mark2+M. ZUIKO DIGITAL ED 9-18mm f4.0-5.6

The Register

Biting the hand that feeds IT — Enterprise Technology News and Analysis

SpaceX dusts off Falcon Heavy for first flight in 18 months

Side boosters to make simultaneous touchdown while center core takes one for the team

SpaceX is preparing to launch its Falcon Heavy rocket for the first time in more than 18 months, kicking off what could be a busy time for the vehicle.…

MetaFilter

The past 24 hours of MetaFilter

Turning an invasive pest into dog treats

Meet the eco-venture turning an invasive pest into dog treats. Introducing the European carp to Australian waterways is an ecological disaster that continues to worsen, but an Adelaide couple have found a novel use for the pest. (Carp are bad news not only because they eat native fish, but also because they cause erosion of riverbanks.)

404 Media

404 Media is an independent media company founded by technology journalists Jason Koebler, Emanuel Maiberg, Samantha Cole, and Joseph Cox.

Google DeepMind Paper Argues LLMs Will Never Be Conscious


A senior staff scientist at Google’s artificial intelligence laboratory DeepMind, Alexander Lerchner, argues in a new paper that no AI or other computational system will ever become conscious. That conclusion appears to conflict with the narrative from AI company CEOs, including DeepMind’s own Demis Hassabis, who repeatedly talks about the advent of artificial general intelligence. Hassabis recently claimed AGI is “going to be something like 10 times the impact of the Industrial Revolution, but happening at 10 times the speed.”

The paper shows how the self-serving narratives AI companies promote in the media collapse under rigorous examination. Other philosophers and researchers of consciousness I talked to said Lerchner’s paper, titled “The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness,” is strong and that they’re glad to see the argument come from one of the big AI companies, but that other experts in the field have been making the exact same arguments for decades.

“I think he [Lerchner] arrived at this conclusion on his own and he's reinvented the wheel and he's not well read, especially in philosophical areas and definitely not in biology,” Johannes Jäger, an evolutionary systems biologist and philosopher, told me. 

Lerchner’s paper is complicated and filled with jargon, but the argument broadly boils down to the point that any AI system is ultimately “mapmaker-dependent,” meaning it “requires an active, experiencing cognitive agent”—a human—to “alphabetize continuous physics into a finite set of meaningful states.” In other words, it needs a person to first organize the world in a way that is useful to the AI system, like, for example, the way armies of low-paid workers in Africa label images in order to create training data for AI.

The so-called “abstraction fallacy” is the mistaken belief that because we’ve organized data in a way that allows AI to manipulate language, symbols, and images so as to mimic sentient behavior, it could actually achieve consciousness. But, as Lerchner argues, this would be impossible without a physical body.

“You have many other motivations as a human being. It's a bit more complicated than that, but all of those spring from the fact that you have to eat, breathe, and you have to constantly invest physical work just to stay alive, and no non-living system does that,” Jäger told me. “An LLM doesn't do that. It's just a bunch of patterns on a hard drive. Then it gets prompted and it runs until the task is finished and then it's done. So it doesn't have any intrinsic meaning. Its meaning comes from the way that some human agent externally has defined a meaning.”

One could imagine an embodied AI programmed with human-like physical needs, and Jäger talked about why a system like that couldn’t achieve consciousness as well, but that’s beyond the scope of this article. There are mountains of literature and decades of research that have gone into these questions, and almost none of it is cited in Lerchner’s paper. 

“I'm in sympathy with 99 percent of everything that he [Lerchner] says,” Mark Bishop, a professor of cognitive computing at Goldsmiths, University of London, told me. “My only point of contention is that all these arguments have been presented years and years ago.”

Both Bishop and Jäger said that it was good, but odd, that Google allowed Lerchner to publish the paper. Both said the argument Lerchner makes, and that they agree with, is not an obscure philosophical point irrelevant to the average user: if AI can’t achieve consciousness, there is a hard cap on what AI could accomplish practically and commercially. For example, Jäger and Bishop said that AGI, and the impact 10 times that of the Industrial Revolution that DeepMind CEO Hassabis predicts, is unlikely from this perspective.

“[Elon] Musk himself has argued that to get level five autonomy [in self-driving cars] you need generalized autonomy,” which is Musk’s term for AGI, Bishop said.

Lerchner’s paper argues that AGI without sentience is possible, saying that “the development of highly capable Artificial General Intelligence (AGI) does not inherently lead to the creation of a novel moral patient, but rather to the refinement of a highly sophisticated, non-sentient tool.” DeepMind is also actively operating as if AGI is coming. As I reported last year, for example, it was hiring for a “post-AGI” research scientist. 

Lerchner’s paper includes a disclaimer at the bottom that says “The theoretical framework and proofs detailed herein represent the author’s own research and conclusions. They do not necessarily reflect the official stance, views, or strategic policies of his employer.” The paper was originally published on March 10 and is still featured on Google DeepMind’s site. The PDF of the paper itself, hosted on philpapers.org, originally included Google DeepMind letterhead, but appears to have been replaced with a new PDF that removes Google’s branding and moves the same disclaimer to the top of the paper. The swap happened after I reached out for comment on April 20. Google did not respond to that request for comment.

“We can imagine many financial and legislative reasons why Google would be sanguine with a conclusion that says computations can't be consciousness,” Bishop told me. “Because if the converse was true, and bizarrely enough here in Europe, we had some nutters who tried to get legislation through the European Parliament to give computational systems rights just a few years ago, which seems to be just utterly stupid. But you can imagine that Google will be quite happy for people to not think their systems are conscious. That means they might be less subject to legislation either in the US or anywhere in the world.”

Jäger said that he’s happy to see a Google DeepMind scientist publish this research, but said that AI companies could learn a lot by talking to the researchers and educating themselves with the work Lerchner failed to cite in his paper, or simply didn’t know existed. 

“The AI research community is extremely insular in a lot of ways,” Jäger said. “For example, none of these guys know anything about the biological origins of words like ‘agency’ and ‘intelligence’ that they use all the time. They have absolutely frighteningly no clue. And I'm talking about Geoffrey Hinton and top people, Turing Award winners and Nobel Prize winners that are absolutely marvelously clueless about both the conceptual history of these terms, where they came from in their own history of AI, and that they're used in a very weird way right now. And I'm always very surprised that there is so little interest. I guess it's just a high-pressure environment and they go ahead developing things and don't have time to read.”

Emily Bender, a professor of linguistics at the University of Washington and co-author of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, told me that if Lerchner had gone through a normal peer-review process, he might have been told that he’s replicating old work, or that he should at least cite it.

“Much of what's happening in this research space right now is you get these paper-shaped objects coming out of the corporate labs,” Bender said, objects that never go through a proper scientific publishing process.

Bender also told me that computer science, and humanity more broadly, would benefit from a change in perspective: “if computer science could understand itself as one discipline among peers instead of the way that it sees itself, especially in these AGI labs, as the pinnacle of human achievement, and everybody else is just domain experts [...] it would be a better world if we didn't have that setup.”


Formula 1 News

Formula 1® - The Official F1® Website

What time is the 2026 Miami GP and how can I watch it?

Here are all the timings – along with all the additional information you need – for the fourth Grand Prix weekend of the 2026 season from the Miami International Autodrome in Florida.

Alonso reflects on F1 future beyond 2026

Fernando Alonso has voiced his hope that 2026 will not be “the last season” for him in F1, with the Spaniard suggesting that he does not “feel it is that time yet”.

The state of play at McLaren

McLaren won both titles last season but, in a new era of F1, they have not started as well as they might have hoped. Under Andrea Stella's watchful eye, can the papaya team turn things around?