Slashdot

News for nerds, stuff that matters

Cloudflare Fast-Tracks Post-Quantum Rollout To 2029

Cloudflare is accelerating its post-quantum security plans and now aims to make its entire platform fully post-quantum secure by 2029. "The updated timeline follows new developments in quantum computing research that suggest current cryptographic standards could be broken sooner than previously expected," reports SiliconANGLE. From the report: The decision by Cloudflare to move its post-quantum security roadmap forward comes after Google LLC and research from Oratomic demonstrated significant advances in algorithms and hardware capable of breaking widely used encryption methods such as RSA-2048 and elliptic curve cryptography. [...] The company said progress across three key areas -- quantum hardware, error correction and quantum algorithms -- is advancing in parallel and compounding overall capability. Improvements in areas such as neutral atom architectures and more efficient error correction are reducing the resources required to break encryption, while algorithmic advances are lowering computational complexity. [...]

Cloudflare has already deployed post-quantum encryption across a large portion of its network and reports that more than half of human traffic it processes now uses post-quantum key agreement. The company plans to expand support for post-quantum authentication in 2026, followed by broader deployment across its network and products through 2028. By 2029, Cloudflare said, it expects all of its services to be fully post-quantum secure, with those services being available by default across its platform, without requiring customer action or additional cost as part of the company's commitment to security upgrades. Google said it plans to accelerate its post-quantum encryption migration target to 2029.

Read more of this story at Slashdot.

New Revelations Reignite Crypto Scandal Involving Argentina's President Milei

An anonymous reader quotes a report from the New York Times: President Javier Milei of Argentina promoted a cryptocurrency last year that quickly skyrocketed in value then cratered just as fast, costing investors millions of dollars and setting off a scandal and an investigation. Mr. Milei said he was simply highlighting a private venture and had no connection to the digital coin called $Libra. New evidence is now raising questions about his assertion. Phone logs from a federal investigation by Argentine prosecutors into the coin's collapse show seven phone calls between Mr. Milei and one of the entrepreneurs behind the cryptocurrency on the night in 2025 when Mr. Milei posted about $Libra on X. The contents of the calls, which took place before and after Mr. Milei's post, are not known.

But the phone logs -- which were obtained by The New York Times and first reported by a local cable news channel, C5N -- suggest a greater degree of communication between Mr. Milei and the entrepreneurs who launched the token than the president has publicly acknowledged. Newly uncovered messages also suggest Mr. Milei received regular payments from one of the entrepreneurs while he was a congressman. Mr. Milei has not publicly commented on the call logs and other documents, and he did not respond to a request for comment. He is named as a person of interest in the federal prosecutor's continuing investigation into the digital coin, according to court documents reviewed by The Times, but has not been formally charged with any crime. The latest revelations have revived a scandal that threatens the very foundation of a president who rose to power in 2023 by attacking a political class he called corrupt.


Stanford Daily Ponders Fate of Bill Gates Namesake Building On April Fools' Day

theodp writes: "Gates Computer Science Building renamed Peter Thiel Center for Panoptic Computing" reads the headline of an April Fools' Day story that ran in the Humor section of The Stanford Daily (with the further disclaimer that "This article is purely satirical and fictitious"). The story begins: "Following revelations that the billionaire founder of Microsoft, Bill Gates, had a longstanding relationship with convicted child sex trafficker Jeffrey Epstein, Stanford has announced it will strip Gates' name from the William H. Gates Computer Science Building and instead honor alumnus Peter Thiel B.A. '89, JD '92. Gates, who is not a Stanford alumnus, gave an initial gift of $6 million toward the building's construction in 1992."

While fictional, the story does make one wonder what may become of the academic and institutional buildings worldwide named after Bill Gates in the blowback over his past ties to Epstein, which have already been a factor in the breakdown of his marriage to Melinda French Gates and his friendship with Warren Buffett. In addition to the Gates Computer Science Building at Stanford, this includes the Bill and Melinda Gates Computer Science Complex at the University of Texas at Austin, Bill and Melinda Gates Hall at Cornell, the Bill & Melinda Gates Center for Computer Science & Engineering at the University of Washington, and the William H. Gates Building at MIT's Stata Center. Buildings named after Gates' parents include Mary Gates Hall and William H. Gates Hall at the University of Washington, and the William Gates Building at the University of Cambridge (UK).
Aside from the Thiel angle, The Stanford Daily's April Fools' Day story may not be as far-fetched as it seems -- many universities' naming policies include provisions allowing donors' names to be removed from buildings, programs, or other facilities under extraordinary circumstances. For example, the University of Washington's Regent Policy No. 50 states, "The University reserves the right to revoke and terminate any naming on reasonable grounds not limited to the revelation of corporate or individual acts detracting from the University's mission, integrity, or reputation." Then again, UW notes that Bill's parents and siblings served as UW Regents for decades, so one expects Bill will be granted some leeway here for what he has characterized as 'foolish' choices on his part.


Formula 1 News

Formula 1® - The Official F1® Website

WATCH: Winning overtakes but they happen progressively later

From first lap moves that prove decisive to last-gasp moments that get the crowd on their feet, F1 has seen it all over the years.

Cybersecurity in the Age of Instant Software

AI is rapidly changing how software is written, deployed, and used. Trends point to a future where AIs can write custom software quickly and easily: “instant software.” Taken to an extreme, it might become easier for a user to have an AI write an application on demand—a spreadsheet, for example—and delete it when they’re done using it than to buy one commercially. Future systems could include a mix: both traditional long-term software and ephemeral instant software that is constantly being written, deployed, modified, and deleted.

AI is changing cybersecurity as well. In particular, AI systems are getting better at finding and patching vulnerabilities in code. This has implications for both attackers and defenders, depending on the ways this and related technologies improve.

In this essay, I want to take an optimistic view of AI’s progress and speculate about what AI-dominated cybersecurity in an age of instant software might look like. A number of unknowns will factor into how the arms race between attacker and defender plays out.

How flaw discovery might work

On the attacker side, the ability of AIs to automatically find and exploit vulnerabilities has increased dramatically over the past few months. We are already seeing both government and criminal hackers using AI to attack systems. The exploitation part is critical here, because it gives an unsophisticated attacker capabilities far beyond their understanding. As AIs get better, expect more attackers to automate their attacks using AI. And as individuals and organizations can increasingly run powerful AI models locally, AI companies monitoring and disrupting malicious AI use will become increasingly irrelevant.

Expect open-source software, including open-source libraries incorporated in proprietary software, to be the most targeted, because vulnerabilities are easier to find in source code. Unknown No. 1 is how well AI vulnerability discovery tools will work against closed-source commercial software packages. I believe they will soon be good enough to find vulnerabilities just by analyzing a copy of a shipped product, without access to the source code. If that’s true, commercial software will be vulnerable as well.
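Why source code makes discovery easier can be seen even in a deliberately toy sketch. The fixed patterns and risk labels below are illustrative stand-ins for what would, in the essay's scenario, be a trained AI model scanning code:

```python
import re

# Toy stand-in for AI-driven vulnerability discovery: a real system would
# use a model, not fixed regexes. Patterns and labels are illustrative only.
RISKY_PATTERNS = {
    r"\beval\s*\(": "arbitrary code execution",
    r"\bpickle\.loads\s*\(": "unsafe deserialization",
    r"password\s*=\s*['\"]": "hard-coded credential",
}

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, risk) pairs for each suspicious line."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, risk in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, risk))
    return findings

sample = 'user = "bob"\npassword = "hunter2"\nresult = eval(user_input)\n'
print(scan_source(sample))
# [(2, 'hard-coded credential'), (3, 'arbitrary code execution')]
```

With only a compiled binary, the attacker loses the line-level signal this kind of scan depends on, which is why Unknown No. 1, discovery without source access, matters so much.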

Particularly vulnerable will be software in IoT devices: things like internet-connected cars, refrigerators, and security cameras. Also industrial IoT software in our internet-connected power grid, oil refineries and pipelines, chemical plants, and so on. IoT software tends to be of much lower quality, and industrial IoT software tends to be legacy.

Instant software is differently vulnerable. It’s not mass market. It’s created for a particular person, organization, or network. The attacker generally won’t have access to any code to analyze, which makes it less likely to be exploited by external attackers. If it’s ephemeral, any vulnerabilities will have a short lifetime. But lots of instant software will live on networks for a long time. And if it gets uploaded to shared tool libraries, attackers will be able to download and analyze that code.

All of this points to a future where AIs will become powerful tools of cyberattack, able to automatically find and exploit vulnerabilities in systems worldwide.

Automating patch creation

But that’s just half of the arms race. Defenders get to use AI, too. These same AI vulnerability-finding technologies are even more valuable for defense. When the defensive side finds an exploitable vulnerability, it can patch the code and deny it to attackers forever.

How this works in practice depends on another related capability: the ability of AIs to patch vulnerable software, which is closely related to their ability to write secure code in the first place.

AIs are not very good at this today; the instant software that AIs create is generally filled with vulnerabilities, both because AIs write insecure code and because the people vibe coding don’t understand security. OpenClaw is a good example of this.

Unknown No. 2 is how much better AIs will get at writing secure code. The fact that they’re trained on massive corpuses of poorly written and insecure code is a handicap, but they are getting better. If they can reliably write vulnerability-free code, it would be an enormous advantage for the defender. And AI-based vulnerability-finding makes it easier for an AI to train on writing secure code.

We can envision a future where AI tools that find and patch vulnerabilities are part of the typical software development process. We can’t say that the code would be vulnerability-free—that’s an impossible goal—but it could be without any easily findable vulnerabilities. If the technology got really good, the code could become essentially vulnerability-free.
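The find-patch-verify loop envisioned above can be sketched minimally. This is a toy: patch synthesis here is a hard-coded rewrite of `eval()` to `ast.literal_eval()`, where a real pipeline would have a model generate the patch and run the project's full test suite:

```python
def propose_patch(source: str) -> str:
    """Toy patch synthesis: swap eval() for the safer ast.literal_eval().
    A real system would generate candidate patches with a model."""
    patched = source.replace("eval(", "ast.literal_eval(")
    if patched != source and "import ast" not in patched:
        patched = "import ast\n" + patched
    return patched

def verify(source: str, tests) -> bool:
    """Execute the patched module and run the supplied test callables,
    standing in for a project's regression suite."""
    namespace = {}
    exec(source, namespace)
    return all(test(namespace) for test in tests)

vulnerable = "def parse(x):\n    return eval(x)\n"
tests = [lambda ns: ns["parse"]("[1, 2]") == [1, 2]]
patched = propose_patch(vulnerable)
print(verify(patched, tests))  # True: behavior preserved for literal inputs
```

The key property is that the patch is only accepted when the existing behavioral tests still pass, which is exactly what makes automated patching safe to embed in a development process.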

Patching lags and legacy software

For new software—both commercial and instant—this future favors the defender. For commercial and conventional open-source software, it’s not that simple. Right now, the world is filled with legacy software. Much of it—like IoT device software—has no dedicated security team to update it. Sometimes it is incapable of being patched. Just as it’s harder for AIs to find vulnerabilities when they don’t have access to the source code, it’s harder for AIs to patch software when they are not embedded in the development process.

I’m not as confident that AI systems will be able to patch vulnerabilities as easily as they can find them, because patching often requires more holistic testing and understanding. That’s Unknown No. 3: how quickly AIs will be able to create reliable software updates for the vulnerabilities they find, and how quickly customers can update their systems.

Today, there is a time lag between when a vendor issues a patch and customers install that update. That time lag is even longer for large organizational software; the risk of an update breaking the underlying software system is just too great for organizations to roll out updates without testing them first. But if AI can help speed up that process, by writing patches faster and more reliably, and by testing them in some AI-generated twin environment, the advantage goes to the defender. If not, the attacker will still have a window to attack systems until a vulnerability is patched.
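The "AI-generated twin environment" idea reduces to a simple rollout rule: apply the patch to a disposable copy first, and promote it to production only if the copy's checks pass. A minimal sketch, assuming an illustrative `Service` class and health check (real twins would be containers or staging environments):

```python
import copy

class Service:
    """Minimal stand-in for a deployed system: config plus a health check."""
    def __init__(self, config: dict):
        self.config = config
    def healthy(self) -> bool:
        # Illustrative invariant: the service requires TLS to be enabled.
        return self.config.get("tls") is True

def rollout(prod: Service, patch: dict) -> bool:
    """Test `patch` on a twin of prod; apply to prod only if the twin passes."""
    twin = Service(copy.deepcopy(prod.config))
    twin.config.update(patch)
    if not twin.healthy():
        return False           # reject: patch would break the system
    prod.config.update(patch)  # promote: the twin validated the change
    return True

prod = Service({"tls": True, "version": 1})
print(rollout(prod, {"version": 2}))  # True: accepted
print(rollout(prod, {"tls": False}))  # False: rejected, prod untouched
```

Automating this gate is what would compress today's test-before-deploy lag from weeks to minutes.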

Toward self-healing

In a truly optimistic future, we can imagine a self-healing network. AI agents continuously scan the ever-evolving corpus of commercial and custom AI-generated software for vulnerabilities, and automatically patch them on discovery.

For that to work, software license agreements will need to change. Right now, software vendors control the cadence of security patches. Giving software purchasers this ability has implications for compatibility, the right to repair, and liability. Any solutions here are the realm of policy, not tech.

If the defense can find, but can't reliably patch, flaws in legacy software, that's where attackers will focus their efforts. In that case, we can imagine continuously evolving, AI-powered intrusion detection: systems that scan inputs and block malicious attacks before they reach vulnerable software. Not as transformative as automatically patching vulnerabilities in running code, but valuable nevertheless.
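Structurally, this kind of defense is a gate in front of the unpatchable component. A toy sketch with literal signatures, where a deployed detector would use learned models and continuously updated threat feeds:

```python
# Toy signatures for inputs that exploit known-but-unpatched flaws.
# A real detector would use models and live feeds, not string literals.
BLOCK_SIGNATURES = ["../", "<script>", "' OR '1'='1"]

def firewall(handler):
    """Wrap a request handler; drop any request matching a signature,
    so malicious input never reaches the vulnerable code behind it."""
    def guarded(request: str) -> str:
        if any(sig in request for sig in BLOCK_SIGNATURES):
            return "blocked"
        return handler(request)
    return guarded

@firewall
def serve(request: str) -> str:
    # Stand-in for legacy software that cannot be patched.
    return f"served: {request}"

print(serve("GET /index.html"))     # served: GET /index.html
print(serve("GET /../etc/passwd"))  # blocked
```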

The power of these defensive AI systems increases if they are able to coordinate with each other, and share vulnerabilities and updates. A discovery by one AI can quickly spread to everyone using the affected software. Again: Advantage defender.

There are other variables to consider. The relative success of attackers and defenders also depends on how plentiful vulnerabilities are, how easy they are to find, whether AIs will be able to find the more subtle and obscure vulnerabilities, and how much coordination there is among different attackers. All this comprises Unknown No. 4.

Vulnerability economics

Presumably, AIs will clean up the obvious stuff first, which means that any remaining vulnerabilities will be subtle. Finding them will take AI computing resources. In the optimistic scenario, defenders pool resources through information sharing, effectively amortizing the cost of defense. If information sharing doesn’t work for some reason, defense becomes much more expensive, as individual defenders will need to do their own research. But instant software means much more diversity in code: an advantage to the defender.
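The amortization argument is simple arithmetic: with sharing, the community pays each discovery cost once; without it, every defender rediscovers the same flaw independently. The numbers below are purely illustrative, not estimates from the essay:

```python
def defense_cost(num_defenders: int, num_vulns: int,
                 cost_per_find: int, sharing: bool) -> int:
    """Total community cost of finding all vulnerabilities.
    With sharing, each vulnerability is found once; without, every
    defender must rediscover it independently."""
    finds = num_vulns if sharing else num_vulns * num_defenders
    return finds * cost_per_find

shared = defense_cost(1000, 50, 10_000, sharing=True)    # paid once
siloed = defense_cost(1000, 50, 10_000, sharing=False)   # paid 1000 times
print(f"with sharing: ${shared:,}; without: ${siloed:,} ({siloed // shared}x)")
```

The gap scales linearly with the number of defenders, which is why the failure of information sharing is the single biggest cost multiplier in this scenario.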

This needs to be balanced with the relative cost of attackers finding vulnerabilities. Attackers already have an inherent way to amortize the costs of finding a new vulnerability and creating a new exploit. They can hunt for vulnerabilities cross-platform, cross-vendor, and cross-system, and can use what they find to attack multiple targets simultaneously. Fixing a common vulnerability often requires cooperation among all the relevant platforms, vendors, and systems. Again, instant software is an advantage to the defender.

But those hard-to-find vulnerabilities become more valuable. Attackers will attempt to do what the major intelligence agencies do today: find “nobody but us” zero-day exploits. They will either use them slowly and sparingly to minimize detection or quickly and broadly to maximize profit before they’re patched. Meanwhile, defenders will be both vulnerability hunting and intrusion detecting, with the goal of patching vulnerabilities before the attackers find them.

We can even imagine a market for vulnerability sharing, where the defender who finds a vulnerability and creates a patch is compensated by everyone else in the information-sharing/repair network. This might be a stretch, but maybe.

Up the stack

Even in the most optimistic future, attackers aren’t going to just give up. They will attack the non-software parts of the system, such as the users. Or they’re going to look for loopholes in the system: things that the system technically allows but were unintended and unanticipated by the designers—whether human or AI—and can be used by attackers to their advantage.

What’s left in this world are attacks that don’t depend on finding and exploiting software vulnerabilities, like social engineering and credential stealing attacks. And we have already seen how AI-generated deepfakes make social engineering easier. But here, too, we can imagine defensive AI agents that monitor users’ behaviors, watching for signs of attack. This is another AI use case, and one that I’m not even sure how to think about in terms of the attacker/defender arms race. But at least we’re pushing attacks up the stack.

Also, attackers will attempt to infiltrate and influence defensive AIs and the networks they use to communicate, poisoning their output and degrading their capabilities. AI systems are vulnerable to all sorts of manipulations, such as prompt injection, and it’s unclear whether we will ever be able to solve that. This is Unknown No. 5, and it’s a biggie. There might always be a “trusting trust problem.”

No future is guaranteed. We truly don’t know whether these technologies will continue to improve and when they will plateau. But given the pace at which AI software development has improved in just the past few months, we need to start thinking about how cybersecurity works in this instant software world.

This essay originally appeared in CSO.

Wel.nl

Read less, know more.

Grated carrots, skyr, plant-based drinks: when healthy eating turns out to be ultra-processed

Skyr with red fruit, almond milk, grated carrots with lemon: these are exactly the products supermarkets use to make us feel healthy, with slogans like "rich in protein", "sugar-free", or "no preservatives". Yet Foodwatch classifies them as ultra-processed, because they contain ingredients no home cook keeps in the cupboard: thickeners, texture enhancers, preservatives, and other additives meant to mimic structure, color, and flavor.

Based on the NOVA classification, which maps the degree of processing of food products, the organization singles out ten everyday products, including Alpro's roasted almond drink, grated carrots with lemon juice from Carrefour, tuna with lemon, crunchy muesli, and Yoplait Skyr with red fruit. They look simple, but according to Foodwatch they are "highly processed" through the accumulation of industrial processes and cosmetic additives.

Health logos vs. ultra-processed risk

The tension lies exactly there: products that score high on protein or fiber, and sometimes carry a respectable Nutri-Score profile, simultaneously fall into the ultra-processed category. Researchers now link high intake of ultra-processed food to an increased risk of chronic diseases, including cardiovascular disease, type 2 diabetes, obesity, and certain forms of cancer. In France, a substantial share of daily calorie intake is estimated to come from such products; in some markets it already approaches 60 percent of the product range.

Manufacturers are pushing back. Danone, which makes Alpro, stresses that there is no universal definition of "ultra-processed" and that a product should be judged as a whole on its nutritional profile. Yoplait, meanwhile, has announced it will drop two additives from its Skyr recipe, while Carrefour is investigating whether it can replace the xanthan gum in its grated-carrot line.

Foodwatch wants a new warning label

Foodwatch considers that insufficient and is asking the government for a mandatory, clear label that immediately indicates whether a product is ultra-processed. In practice, these "borderline cases" are now barely recognizable on the shelf: consumers see the health claims, but not the long list of industrial additives behind them.

This shifts the debate from fat, sugar, and salt to a broader question: how much industry in our food do we still find acceptable, even when it looks healthy? The fight over that front-of-pack label is really a fight over trust, between manufacturers adjusting their recipes step by step and watchdogs who believe consumers finally deserve the whole story.



The Register

Biting the hand that feeds IT — Enterprise Technology News and Analysis

Russia's Fancy Bear still attacking routers to boost fake sites, NCSC warns

200 orgs and 5,000 devices compromised so far in Vlad's latest intelligence grab, Microsoft reckons

The UK's National Cyber Security Centre (NCSC) has issued a fresh warning about Russia's ongoing targeting of routers to steal passwords and other secrets.…

Rijnmond - Nieuws

Today's latest news about Rotterdam, Feyenoord, traffic, and the weather in the Rijnmond region

26 organizations speak out: 'Make it possible to talk about dark thoughts'

26 Rotterdam organizations believe more attention must go to recognizing dark thoughts in young people early. "Especially now, it is important to make visible that no one has to face this alone. We are doing this together."

Motorist hits delivery cyclist, crashes into pole

On the Planetenlaan in Spijkenisse on Tuesday afternoon, a motorist collided with a cyclist. The cyclist, who was working as a delivery rider, was injured and was examined by ambulance staff. At the time, the victim was still responsive.

That's What I Like About You

Thomas Hawk posted a photo:
