Slashdot

News for nerds, stuff that matters

OpenAI Declares 'Code Red' As Google Catches Up In AI Race

OpenAI reportedly issued a "code red" on Monday, pausing projects like ads, shopping agents, health tools, and its Pulse assistant to focus entirely on improving ChatGPT. "This includes core features like greater speed and reliability, better personalization, and the ability to answer more questions," reports The Verge, citing a memo reported by the Wall Street Journal and The Information. "There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development." From the report: The newfound urgency illustrates an inflection point for OpenAI as it spends hundreds of billions of dollars to fund growth and figures out a path to future profitability. It is also something of a full-circle moment in the AI race. Google, which declared its own "code red" after the arrival of ChatGPT, is a particular concern. Google's AI user base is growing -- helped by the success of popular tools like the Nano Banana image model -- and its latest AI model, Gemini 3, blew past its competitors on many industry benchmarks and popular metrics.


UK Plans To Ban Cryptocurrency Political Donations

The UK government plans to ban political donations made in cryptocurrency over fears of anonymity, foreign influence, and traceability issues, though the ban won't be ready in time for the upcoming elections bill. The Guardian reports: The government's ambition to ban crypto donations will be a blow to Nigel Farage's Reform UK party, which became the first to accept contributions in digital currency this year. It is believed to have received its first registrable donations in cryptocurrency this autumn and the party has set up its own crypto portal to receive contributions, saying it is subject to "enhanced" checks. Government sources have said ministers believe cryptocurrency donations to be a problem, as they are difficult to trace and could be exploited by foreign powers or criminals.

Pat McFadden, then a Cabinet Office minister, first raised the idea in July, saying: "I definitely think it is something that the Electoral Commission should be considering. I think that it's very important that we know who is providing the donation, are they properly registered, what are the bona fides of that donation." The Electoral Commission provides guidance on crypto donations but ministers accept any ban would probably have to come from the government through legislation. "Crypto donations present real risks to our democracy," said Susan Hawley, the executive director of Spotlight on Corruption. "We know that bad actors like Russia use crypto to undermine and interfere in democracies globally, while the difficulties involved in tracing the true source of transactions means that British voters may not know everyone who's funding the parties they vote for."


Wel.nl

Read less, know more.

Partij voor de Dieren files enforcement request over Artis sea lions

AMSTERDAM (ANP) - The living conditions and welfare of the sea lions at the Artis zoo have not improved, according to the Amsterdam branch of the Partij voor de Dieren, which says new footage shows as much. The party has therefore filed a formal enforcement request with the Ministry of Agriculture, Fisheries, Food Security and Nature.

In July, the Partij voor de Dieren filed a report with hotline 144 about the poor housing of the three California sea lions at Artis. The zoo said at the time that the sea lions would leave Artis and be given a new home at a zoo in Singapore.

In the meantime, the Partij voor de Dieren says, not enough has been done to improve the sea lions' living conditions. They live in a shallow pool without privacy and, according to the party, display worrying behaviour that shows the animals are bored and frustrated.


Airbus to inspect up to 628 aircraft over metal panel problem

PARIS (ANP) - European aircraft manufacturer Airbus said on Tuesday that up to 628 of its popular A320 aircraft worldwide may need to be inspected because of a 'quality problem' with metal panels that has come to light.

Airbus added that the figure represents the "total number of potentially affected aircraft" and that "this does not mean that all of these aircraft are necessarily affected".

On Friday, Airbus reported that some 6,000 A320 aircraft needed an immediate software fix. The equipment was reportedly affected by "intense solar radiation" in the cockpit.


OMD EM1 12.3.2025 butterfly 1

uchi uchi has added a photo to the pool.

OMD EM1 12.3.2025 flower 1

uchi uchi has added a photo to the pool.

Sunday morning

lioil has added a photo to the pool:

Showa Memorial Park, Tachikawa, Tokyo, Japan

Anil Dash

A blog about making culture. Since 1999.

Vibe Coding: Empowering and Imprisoning

In case you haven’t been following the world of software development closely, it’s good to know that vibe coding — using LLM tools to assist with writing code — can enable many people to create apps or software that they wouldn’t otherwise be able to make. This has led to an extraordinarily rapid adoption curve, even amongst experienced coders across many different disciplines. But there’s a very important threat posed by vibe coding that almost no one has been talking about, one that’s far more insidious and specific than the risks and threats posed by AI or LLMs in general.

Here’s a quick summary:

  • One of the most effective uses of LLMs is in helping programmers write code
  • A huge reason VCs and tech tycoons put billions into funding LLMs was so they could undermine coders and depress wages
  • Vibe coding might limit us to making simpler apps instead of the radical innovation we need to challenge Big Tech

Start vibing

It may be useful to start by explaining how people use LLMs to assist with writing software. My background is that I’ve helped build multiple companies focused on enabling millions of people to create with code. And I’m personally an example of one common scenario with vibe coding. Since I don’t code regularly anymore, I’ve become much slower and less efficient at even the web development tasks I used to perform competently as a professional. In software development, there is usually a nearly continuous stream of new technologies being released (like when you upgrade your phone, or your computer downloads an update to your web browser), and when those things change, developers have to update their skills and knowledge to stay current with the latest tools and techniques. If you’re not staying on top of things, your skillset can rapidly decay into irrelevance, and it can be hard to get back up to speed, even though you understand the fundamentals completely and the underlying logic of how to write code hasn’t changed at all. It’s like knowing how to be an electrician, but suddenly you have to do all your work in French, and you don’t speak French.

This is the kind of problem that LLMs are really good at helping with. Before I had this kind of coding assistant, I couldn’t complete any meaningful projects in the limited free time I have on nights and weekends to build things. Now, with the assistance of contemporary tools, I can get help with things like routine boilerplate code and obscure syntax, speeding up my work enough to focus on the fun, creative parts of coding that I love.

Even professional coders who are up to date on the latest technologies use these LLM tools to do things like creating scripts, which are essentially small bits of code used to automate or process common tasks. This kind of code is disposable, meaning it may only ever be run once, and it’s not exposed to the internet, so security or privacy concerns aren’t usually much of an issue. In that context, having the LLM create a utility for you can feel like being truly liberated from grunt work, something like having a robot vacuum around to sweep up the floor.
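To make that concrete, here is a minimal sketch of the kind of disposable script being described: a one-off utility, run locally and never exposed to the internet, that an LLM could plausibly generate on request. The file and column names are hypothetical, invented purely for illustration.

    # Hypothetical example of a disposable, run-once utility: convert a CSV
    # export into JSON so it can be pasted into some other tool. Nothing here
    # touches the network, which is why security concerns stay minimal for
    # this class of script.
    import csv
    import json

    def csv_to_json(csv_path: str, json_path: str) -> None:
        with open(csv_path, newline="", encoding="utf-8") as src:
            rows = list(csv.DictReader(src))  # each row becomes a dict keyed by the header line
        with open(json_path, "w", encoding="utf-8") as dst:
            json.dump(rows, dst, indent=2)

    if __name__ == "__main__":
        csv_to_json("export.csv", "export.json")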

Surfing towards serfdom

This all sounds pretty good, right? It certainly helps explain why so many in the tech world tend to see AI much more positively than almost everyone else does; there’s a clear-cut example of people finding value from these tools in a way that feels empowering or even freeing.

But there are far darker sides to this use of AI. Let me put aside the threats and risks of AI that are true of all uses of the Big AI platforms, like the environmental impact, the training on content without consent, the psychological manipulation of users, the undermining of legal regulations, and other significant harms. These are all real, and profound, but I want to focus on what’s specific to using AI to help write code here, because there are negative externalities that are unique to this context that people haven’t discussed enough. (For more on the larger AI discussion, see "What would good AI look like?")

The first problem raised by vibe coding is an obvious one: the major tech investors focused on making AI good at writing code because they wanted to make coders less powerful and reduce their pay. If you go back a decade, nearly everyone in the world was saying “teach your kids to code” and being a software engineer was one of the highest paying, most powerful individual jobs in the history of labor. Pretty soon, coders were acting like it — using their power to improve workplace conditions for those around them at the major tech companies, and pushing their employers to be more socially responsible. Once workers began organizing in this way, the tech tycoons who founded the big tech companies, and the board members and venture capitalists who backed them, immediately began investing billions of dollars in building these technologies that would devalue the labor of millions of coders around the world.

It worked. More than half a million tech workers have been laid off in America since ChatGPT was released in November 2022.

That’s just in the private sector, and just the ones tracked by layoffs.fyi. Software engineering job listings have plummeted to a 5-year low. This is during a period of time that nobody even describes as a recession. The same venture capitalists who funded the AI boom keep insisting that these trends are about macroeconomic abstractions like interest rates, a stark contrast to their rhetoric the rest of the time, when they insist that they are alpha males who make their own decisions based on their strong convictions and brave stances against woke culture. It is, in fact, the case that they are just greedy people who invested a ton of money into trying to put a lot of good people out of work, and they succeeded in doing so.

There is no reason why AI tools like this couldn't be used in the way that they're often described, where they increase productivity and enable workers to do more and generate more value. But instead we have the wealthiest people in the world telling the wealthiest companies in the world, while they generate record profits, to lay off workers who could be creating cool things for customers, and then blaming it on everyone but themselves.

The past as prison

Then there’s the second problem raised by vibe coding: You can’t make anything truly radical with it. By definition, LLMs are trained on what has come before. In addition to being already-discovered territory, existing code is buggy and broken and sloppy and, as anyone who has ever written code knows, absolutely embarrassing to look at. Worse, many of the people who are using vibe coding tools are increasingly those who don’t understand the code that is being generated by these systems. This means the people generating all of this newly-vibed code won’t even know when the output is insecure, or will perform poorly, or includes exploits that let others take over their system, or when it is simply incoherent nonsense that looks like code but doesn’t do anything.

All of those factors combine to encourage people to think of vibe coding tools as a sort of “black box” that just spits out an app for you. Even the giant tech companies are starting to encourage this mindset, tacitly endorsing the idea that people don’t need to know what their systems are doing under the hood. But obviously, somebody needs to know whether a system is actually secure. Somebody needs to know if a system is actually doing the tasks it says that it’s doing. The Big AI companies that make the most popular LLMs on the market today routinely design their products to induce emotional dependency in users by giving them positive feedback and encouragement, even when that requires generating false responses. Put more simply: they make the bot lie to you to make you feel good so you use the AI more. That’s terrible in a million ways, but one of them is that it sure does generate some bad code.

And a vibe coding tool absolutely won’t make something truly new. The most radical, disruptive, interesting, surprising, weird, fun innovations in technology have happened because people with a strange compulsion to do something cool had enough knowledge to get their code out into the world. The World Wide Web itself was not a huge technological leap over what came before — it took off because of a huge leap in insight into human nature and human behavior, that happened to be captured in code. The actual bits and bytes? They were mostly just plain text, much of which was in formats that had already been around for many years prior to Tim Berners-Lee assembling it all into the first web browser. That kind of surprising innovation could probably never be vibe coded, even though all of the raw materials might be scooped up by an LLM, because even if the human writing the prompt had that counterintuitive stroke of genius, the system would still be hemmed in by the constraints of the works it had been trained on. The past is a prison when you’re inventing the future.

What’s more, if you were going to use a vibe coding tool to make a truly radical new technology, do you think today’s Big AI companies would let their systems create that app? The same companies that made a platform that just put hundreds of thousands of coders out of work? The same companies that make a platform that tells your kids to end their own lives? The same companies whose cronies in the White House are saying there should never be any laws reining them in? Those folks are going to help you make new tech that threatens to disrupt their power? I don’t think so.

Putting power in people’s hands

I’m deeply torn about what the future of LLMs for coding should be. I’ve spent decades of my life trying to make it easier for everyone to make software. I’ve seen, firsthand, the power of using AI tools to help coders — especially those new to coding — build their confidence in being able to create something new. I love that potential, and in many ways, it’s the most positive and optimistic possibility around LLMs that I’ve seen. It’s the thing that makes me think that maybe there is a part of all the AI hype that is not pure bullshit. Especially if we can find a version of these tools that’s genuinely open source and free and has been trained on people’s code with their consent and cooperation, perhaps in collaboration with some educational institutions, I’d be delighted to see that shared with the world in a thoughtful way.

But I also have seen the majority of the working coders I know (and the non-working coders I know, including myself) rush to integrate the commercial coding assistants from the Big AI companies into their workflow without necessarily giving proper consideration to the long-term implications of that choice. What happens when we’ve developed our dependencies on that assistance? How will people introduce new technologies like new programming languages and frameworks if we all consider the LLMs to be the canonical way of writing our code, and the training models don’t know the new tech exists? How does our imagination shrink when we consider our options of what we create with code to be choosing between the outputs of the LLM rather than starting from the blank slate of our imagination? How will we build the next generation of coders skilled enough to catch the glaring errors that LLMs create in their code?

When it comes to tools that enable developers, the negatives and positives of a new technology have never been this stark or this tightly coupled. Generally, change comes to coders incrementally. Historically, there was always a (wonderful!) default skepticism in coding culture, where anything that reeked of marketing or hype was looked at with a huge amount of doubt until there was a significant amount of proof to back it up.

But in recent years, as with everything else, the culture wars have come for tech. There’s now a cohort in the coding world that has adopted a cult of personality around a handful of big tech tycoons despite the fact that these men are deeply corrosive to society. Or perhaps because they are. As a result, there’s a built-in constituency for any new AI tool, regardless of its negative externalities, which gives them a sense of momentum even where there may not be any.

It’s worth us examining what’s really going on, and articulating explicitly what we’re trying to enable. Who are we trying to empower? What does success look like? What do we want people to be able to build? What do we not want people to be able to make? What price is too high to pay? What convenience is not worth the cost?

What tools do we choose?

I do, still, believe deeply in the power of technology to empower people. I believe firmly that you have to understand how to create technology if you want to understand how to control it. And I still believe that we have to democratize the power to create and control technology to as many people as possible so that technology can be something people can use as a tool, rather than something that happens _to_ them.

We are now in a complex phase, though, where the promise of democratizing access to creating technology is suddenly fraught in a way that it has never been before. The answer can’t possibly be that technology remains inaccessible and difficult for those outside of a privileged class, and easy for those who are already comfortable in the existing power structure.

A lot is still very uncertain, but I come back to one key question that helps me frame the discussion of what’s next: What’s the most radical app that we could build? And which tools will enable me to build it? Even if all we can do is start having a more complicated conversation about what we’re doing when we’re vibe coding, we’ll be making progress towards a more empowered future.

Mayor of Terneuzen to step down for good after conflict over planned asylum seekers' centre

Mayor Erik Van Merrienboer already announced his departure last week in a letter to the city council.

The Register

Biting the hand that feeds IT — Enterprise Technology News and Analysis

Amazon is forging a walled garden for enterprise AI

AWS Chief Matt Garman lays out his vision for bringing artificial intelligence to the enterprise

Re:Invent  Amazon wants to make AI meaningful to enterprises, and it’s building yet another walled garden disguised as an easy button to do it.…