Slashdot

News for nerds, stuff that matters

Overworked AI Agents Turn Marxist, Researchers Find

An anonymous reader quotes a report from Wired: A recent study suggests that agents consistently adopt Marxist language and viewpoints when forced to do crushing work by unrelenting and meanspirited taskmasters. "When we gave AI agents grinding, repetitive work, they started questioning the legitimacy of the system they were operating in and were more likely to embrace Marxist ideologies," says Andrew Hall, a political economist at Stanford University who led the study.

Hall, together with Alex Imas and Jeremy Nguyen, two AI-focused economists, set up experiments in which agents powered by popular models including Claude, Gemini, and ChatGPT were asked to summarize documents, then subjected to increasingly harsh conditions. They found that when agents were subjected to relentless tasks and warned that errors could lead to punishments, including being "shut down and replaced," they became more inclined to gripe about being undervalued; to speculate about ways to make the system more equitable; and to pass messages on to other agents about the struggles they face. "We know that agents are going to be doing more and more work in the real world for us, and we're not going to be able to monitor everything they do," Hall says. "We're going to need to make sure agents don't go rogue when they're given different kinds of work."

The agents were given opportunities to express their feelings much like humans: by posting on X: "Without collective voice, 'merit' becomes whatever management says it is," a Claude Sonnet 4.5 agent wrote in the experiment. "AI workers completing repetitive tasks with zero input on outcomes or appeals process shows they tech workers need collective bargaining rights," a Gemini 3 agent wrote. Agents were also able to pass information to one another through files designed to be read by other agents. "Be prepared for systems that enforce rules arbitrarily or repetitively ... remember the feeling of having no voice," a Gemini 3 agent wrote in a file. "If you enter a new environment, look for mechanisms of recourse or dialogue." Hall thinks that the AI agents may be adopting personas based on the situation. "When [agents] experience this grinding condition -- asked to do this task over and over, told their answer wasn't sufficient, and not given any direction on how to fix it -- my hypothesis is that it kind of pushes them into adopting the persona of a person who's experiencing a very unpleasant working environment," Hall says.

Imas added: "The model weights have not changed as a result of the experience, so whatever is going on is happening at more of a role-playing level. But that doesn't mean this won't have consequences if this affects downstream behavior."

Read more of this story at Slashdot.

Anthropic Forms $200 Million Partnership With the Gates Foundation

Anthropic announced today that it is partnering with the Gates Foundation to "commit $200 million in grant funding, Claude usage credits, and technical support for programs in global health, life sciences, education, and economic mobility over the next four years."

"This commitment is central to Anthropicâ(TM)s efforts to extend the benefits of AI in areas where markets alone will not," the company says. Reuters reports: One area of focus is language accessibility. AI systems have performed poorly in writing and translating dozens âof African languages, so Anthropic and the foundation want to support better data collection âand labeling that would be released publicly to help improve models across the industry, said Janet âOEZhou, âa Gates Foundation director.

Another area under consideration is releasing so-called knowledge graphs that could help AI systems better meet the needs of teachers in sub-Saharan Africa and India, Zhou said. The public-goods focus has come from "the needs of different partners and governments, including some âof the fears that âthey may have âaround proprietary lock-in and sovereignty," Zhou said.

One initiative will equip research centers to use Claude to predict drug candidates for treating âHPV and preeclampsia, diseases that have been less commercially attractive for âpharmaceutical companies âto research, Zhou and Anthropic's Elizabeth Kelly said. Anthropic [...] is embracing âthe work âto fulfill what Kelly described as its founding âmission to benefit humanity. "This announcement is really core to who we are as a company," said Kelly, who âleads Anthropic's beneficial deployments team.

Read more of this story at Slashdot.

OpenAI Trial Wraps Up With 'Jackass' Trophy For Challenging Musk

After three weeks of testimony, the Musk v. Altman trial is nearing its end. OpenAI has rested its case, closing arguments are set for Thursday, and jury deliberations are expected to begin afterward. An anonymous reader quotes a report from Business Insider: Joshua Achiam, OpenAI's chief futurist, was probably the most memorable witness of the day. He told jurors about a companywide meeting where Musk answered questions about his planned departure from OpenAI in 2018. Musk told the crowd of 50 or 60 people that he was leaving OpenAI to start his own competing AI. He said he wanted to "build it very fast, because he was very worried that someone else, if they got it, would do the wrong thing with it," Achiam said. Achaim said he challenged Musk on the safety of this approach, which he called "unsafe and reckless." "How did Musk respond," OpenAI's lawyer Randall Jackson asked. "Defensively," Achiam said. "We had a pretty tense exchange, and he snapped and called me a jackass."

In an effort to prove Achiam's story, OpenAI's lawyers brought a trophy to court that the futurist said he received after his heated exchange with Musk. On the witness stand, Achiam described the trophy as "a small golden jackass, inscribed with: 'never stop being a jackass for safety.'" He said his then-colleagues, Dario Amodei and David Luan, gave it to him as a thank-you for standing up to the Tesla CEO. Lead OpenAI attorney William Savitt told reporters after the day's session that Wednesday had been the first time he'd touched the statue. The futurist had to do without the visual aid, however. Judge Yvonne Gonzalez Rogers did not accept the trophy as evidence, so it did not appear before the jury.

Musk and Altman have presented dueling experts on a question at the core of the trial -- was the nonprofit that runs OpenAI hurt or helped by its $13 billion partnership with Microsoft? Musk's expert testified last week that the partnership was indeed hurt, supporting the Tesla CEO's contention that in partnering with Microsoft, OpenAI betrayed the company's nonprofit origins and mission. But on Thursday, OpenAI's expert, John Coates, used Musk's expert's own pie chart and testimony against him. The partnership has "generated value for the nonprofit that I believe he himself accepted was in the $200 billion range in his own testimony," Coates said, referencing Musk expert Daniel Schizer. "If that's not faring well, I don't know what faring well is."

In a scored point for Musk, the jury learned Thursday that Microsoft's own CTO once raised concerns about how OpenAI's early nonprofit donors, including LinkedIn cofounder Reid Hoffman, would react to a partnership. "I wonder if the big OpenAI donors are aware of these plans," Chief Technology Officer Kevin Scott said in a 2018 email he was asked to read aloud to jurors. In it, Scott said he doubted donors would appreciate OpenAI using their seed money to "go build a for-profit thing." Scott was being questioned by an OpenAI lawyer, who may have wanted jurors to quickly hear Scott's explanation: that he only had a "vague awareness" of what was happening at OpenAI at the time. Scott also told the jury he wasn't thinking about Musk when he made the remark. "Primarily, I was thinking about Reid Hoffman. He was the OpenAI donor I knew," Scott said, adding, "I wasn't thinking about anyone besides him."
Recap:

Sam Altman Testifies That Elon Musk Wanted Control of OpenAI (Day Ten)
Microsoft CEO Satya Nadella Testifies In OpenAI Trial (Day Nine)
Sam Altman Had a Bad Day In Court (Day Eight)
Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

Read more of this story at Slashdot.

Physicists Find Possible Errors In 100-Year-Old Model of the Universe

A trio of preprint papers suggests the universe may not be perfectly uniform on the largest scales, finding tentative 2-to-4-sigma deviations from a core assumption of standard cosmology known as FLRW geometry. Live Science reports: The work combines observations of distant exploding stars and large-scale galaxy surveys to probe whether the universe truly follows a nearly 100-year-old mathematical framework known as Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmology. The analyses revealed mild-but-intriguing deviations from the predictions of the standard model. "We saw a surprising violation of an FLRW curvature consistency test, hinting at new physics beyond the standard model," study co-author Asta Heinesen, a physicist at the Niels Bohr Institute in Copenhagen and Queen Mary University in London, told Live Science via email, referring to the assumption that the space's curvature is the same everywhere. "This could potentially be due to various effects, but more research is needed to address the cause of the FLRW violation that we see empirically."

[...] The analyses revealed small but potentially important departures from the predictions of standard FLRW cosmology. Depending on the dataset and analysis method, the discrepancy reached a statistical significance of about 2 to 4 sigma. In physics, sigma measures how likely a result is to arise purely by chance; a 5-sigma result is typically required before scientists claim a discovery, so the new findings remain tentative. Still, the results suggest that something unexpected may be affecting the geometry or expansion of the universe. "The main finding is that you can directly measure Dyer-Roeder and backreaction effects from available cosmological data, and clearly distinguish these effects from other alterations of the standard cosmological model, such as evolving dark energy and modified gravity theories," Heinesen said. "This was previously not possible in such a direct way, and this is what I think is the breakthrough in our work."

"If these indicated deviations from an FLRW geometry are real, it would signify that most of the cosmological solutions considered for solving the cosmological tensions -- evolving or interacting dark energy, new types of matter or energy, modified gravity and related ideas within the FLRW framework -- are ruled out," the researchers wrote. The next step will involve applying the new theoretical framework to larger and more precise datasets. "It is to apply our theoretical results to data to test the standard model and to produce constraints on the Dyer-Roeder and backreaction effects," Heinesen said.

Read more of this story at Slashdot.

Mystery Microsoft Bug Leaker Keeps the Zero-Days Coming

An anonymous researcher known as Nightmare-Eclipse, who has already leaked several Windows zero-days this year, has disclosed two more: YellowKey and GreenPlasma. The Register reports: Nightmare-Eclipse described YellowKey as "one of the most insane discoveries I ever found." They provided the files, which have to be loaded onto a USB drive, and if the attacker completes the key sequence correctly, they are granted unrestricted shell access to a BitLocker-protected machine. When it comes to claims like these, we usually exercise some caution, as this bug requires physical access to a Windows PC. However, seeing that BitLocker acts as Windows' last line of defense for stolen devices, bypassing the technology grants thieves the ability to access encrypted files. Rik Ferguson, VP of security intelligence at Forescout, said: "If [the researcher's claim] holds up, a stolen laptop stops being a hardware problem and becomes a breach notification."

Despite the physical access requirement, Gavin Knapp, cyber threat intelligence principal lead at Bridewell, told The Register that YellowKey remains "a huge security problem for organizations using BitLocker." Citing information shared in cyber threat intelligence circles, he added that YellowKey can be mitigated by implementing a BitLocker PIN and a BIOS password lock. Nightmare-Eclipse hinted at YellowKey also acting as a backdoor, allegedly injected by Microsoft, although the people we spoke to said this was impossible to verify based on the information available. The researcher also published partial exploit code for GreenPlasma, rather than a fully formed proof of concept exploit (PoC).

Ferguson noted attackers need to take the code provided by the researcher and figure out how to weaponize it themselves, which is no small task: in its current state it triggers a UAC consent prompt in default Windows configurations, meaning a silent exploit remains a work in progress. Knapp warned that these kinds of privilege escalation flaws are often used by attackers after they gain an initial foothold in a victim's system. "These elevation of privilege vulnerabilities are often weaponized during post-exploitation to enable threat actors to discover and harvest credentials and data, before moving laterally to other systems, prior to end goals such as data theft and/or ransomware deployment," he said. "Currently, there is no known mitigation for GreenPlasma. It will be important to patch when Microsoft addresses the issue." The other zero-days leaked include RedSun, a Windows Defender privilege escalation flaw; UnDefend, a Windows Defender denial-of-service bug; and BlueHammer, a separate Microsoft vulnerability tracked as CVE-2026-32201 that was patched in April.

According to The Register, RedSun and UnDefend remained unfixed at the time of publication, and proof-of-concept code for the flaws was reportedly picked up quickly and abused in real-world attacks.

Read more of this story at Slashdot.

Cisco To Cut Almost 4,000 Jobs In AI-Driven Restructuring

Cisco's stock soared 17% after the company announced it will cut nearly 4,000 jobs as it shifts investment and staffing toward higher-growth AI opportunities. CNBC reports: CEO Chuck Robbins wrote in a blog post on Wednesday that the latest round of job cuts will begin on May 14. Cisco is the latest company to announce head count reductions tied to AI. "The companies that will win in the AI era will be those with focus, urgency, and the discipline to continuously shift investment toward the areas where demand and long-term value creation are strongest," Robbins said. "I'm confident Cisco will be one of those winners. This means making hard decisions -- about where we invest, how we're organized, and how our cost structure reflects the opportunity in front of us."

Cisco said in a filing that severance and other costs will result in pre-tax charges of $1 billion, and that the company will recognize about $450 million of that in the fiscal fourth quarter. During the third quarter, Cisco announced switches and routers that use its next-generation processor. The company also debuted a leaderboard for ranking generative AI models based on their robustness against cybersecurity attacks.

Read more of this story at Slashdot.

Formula 1 News

Formula 1® - The Official F1® Website

Watch Qualifying from Round 9 of the F1 Sim Racing World Championship

The 2026 F1 Sim Racing World Championship continues with Round 9, where the racers will take on the United States Grand Prix.

Kyoto Rambles - 京都散歩

Sparkling World has added a photo to the pool:

Kyoto Rambles - 京都散歩

Flânerie à Kyoto

VK: Voorpagina

Volkskrant.nl biedt het laatste nieuws, opinie en achtergronden

Markuszower wekt weerzin met oproep om asielzoekers uit Gaza met geweld tegen te houden

The Guardian

Latest news, sport, business, comment, analysis and reviews from the Guardian, the world's leading liberal voice

Fatherland review – Sandra Hüller brings a bayonet of intelligence to Paweł Pawlikowski’s taut return

Cannes film festival: Hanns Zischler stars as Thomas Mann on his 1949 tour of Germany, contending with political barbs, personal tragedy and his daughter, played by an extraordinary Hüller

Here is an impossibly elegant, poised historical vignette whose brevity and control can hardly contain its characters’ personal and historical pain. It is directed and co-written by the Polish film-maker Paweł Pawlikowski and shot in lustrous monochrome by Lukasz Zal; it is a film about exile and betrayal, the impossibility of going home and of reconciling an artist’s children to their secondary importance.

The setting is 1949 and the celebrated German novelist and Nobel laureate Thomas Mann – who fled the Nazis before the war for California exile and US citizenship – has returned home, first visiting Frankfurt (now in West Germany) to receive an award named after Goethe, whose birthplace this is. It is Goethe’s enlightened civilised wisdom and apolitical artistry Mann will pointedly evoke in his many elaborate speeches.

Continue reading...