Slashdot

News for nerds, stuff that matters

Waymo Hits a Dog In San Francisco, Reigniting Safety Debate

A Waymo robotaxi struck a small unleashed dog in San Francisco -- just weeks after another Waymo killed a beloved neighborhood cat. The dog's condition is unknown. The Los Angeles Times reports: The incident occurred near the intersection of Scott and Eddy streets and drew a small crowd, according to social media posts. A person claiming to be one of the passengers posted about the accident on Reddit. "Our Waymo just ran over a dog," the passenger wrote. "Kids saw the whole thing." The passenger described the dog as between 20 and 30 pounds and wrote that their family was traveling back home after a holiday tree lighting event. The National Highway Traffic Safety Administration has recorded Waymo taxis as being involved in at least 14 animal collisions since 2021.

"Unfortunately, a Waymo vehicle made contact with a small, unleashed dog in the roadway," a company spokesperson said. "We are dedicated to learning from this situation and how we show up for our community as we continue improving road safety in the cities we serve." The spokesperson added that Waymo vehicles have a much lower rate of injury-causing collisions than human drivers. Human drivers run into millions of animals while driving each year.

"I'm not sure a human driver would have avoided the dog either, though I do know that a human would have responded differently to a 'bump' followed by a car full of screaming people," the Waymo passenger wrote on Reddit. One person who commented on the discussion said that Waymo vehicles should be held to a higher standard than human drivers, because the autonomous taxis are supposed to improve road safety. "The whole point of this is because Waymo isn't supposed to make those mistakes," the person wrote on Reddit.


Kubernetes Is Retiring Its Popular Ingress NGINX Controller

During last month's KubeCon North America in Atlanta, Kubernetes maintainers announced the upcoming retirement of Ingress NGINX. "Best-effort maintenance will continue until March 2026," noted the Kubernetes SIG Network and the Security Response Committee. "Afterward, there will be no further releases, no bugfixes, and no updates to resolve any security vulnerabilities that may be discovered." In a recent op-ed for The Register, Steven J. Vaughan-Nichols reflects on the decision and speculates about what might have prevented this outcome: Ingress NGINX, for those who don't know it, is an ingress controller in Kubernetes clusters that manages and routes external HTTP and HTTPS traffic to the cluster's internal services based on configurable Ingress rules. It acts as a reverse proxy, ensuring that requests from clients outside the cluster are forwarded to the correct backend services within the cluster according to path, domain, and TLS configuration. As such, it's vital for network traffic management and load balancing. You know, the important stuff.
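As an illustration (a sketch, not from the op-ed; the hostname, service name, and port below are hypothetical), the kind of Ingress rule such a controller consumes can be defined with the official kubernetes Python client:

    # Illustrative sketch: defines a (hypothetical) Ingress rule of the kind an
    # ingress controller such as Ingress NGINX acts on. Requires the official
    # "kubernetes" Python client and working cluster credentials.
    from kubernetes import client, config

    config.load_kube_config()  # assumes a local kubeconfig

    ingress = client.V1Ingress(
        api_version="networking.k8s.io/v1",
        kind="Ingress",
        metadata=client.V1ObjectMeta(
            name="demo-ingress",  # hypothetical name
            annotations={"nginx.ingress.kubernetes.io/rewrite-target": "/"},
        ),
        spec=client.V1IngressSpec(
            ingress_class_name="nginx",
            rules=[
                client.V1IngressRule(
                    host="app.example.com",  # route by domain...
                    http=client.V1HTTPIngressRuleValue(
                        paths=[
                            client.V1HTTPIngressPath(
                                path="/api",  # ...and by path
                                path_type="Prefix",
                                backend=client.V1IngressBackend(
                                    service=client.V1IngressServiceBackend(
                                        name="api-service",  # hypothetical backend
                                        port=client.V1ServiceBackendPort(number=8080),
                                    )
                                ),
                            )
                        ]
                    ),
                )
            ],
        ),
    )

    # The controller watches objects like this and configures its reverse proxy
    # to forward matching external requests to the named backend service,
    # handling TLS termination when a tls section is present.
    client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)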

Now this longstanding project, once celebrated for its flexibility and breadth of features, will soon be "abandonware." So what? After all, it won't be the first time a once-popular program shuffled off the stage. Off the top of my head, dBase, Lotus 1-2-3, and VisiCalc spring to mind. What's different is that there are still thousands of Ingress NGINX controllers in use. Why is it being put down, then, if it's so popular? Well, there is a good reason. As Tabitha Sable, a staff engineer at Datadog who is also co-chair of the Kubernetes special interest group for security, pointed out: "Ingress NGINX has always struggled with insufficient or barely sufficient maintainership. For years, the project has had only one or two people doing development work, on their own time, after work hours, and on weekends. Last year, the Ingress NGINX maintainers announced their plans to wind down Ingress NGINX and develop a replacement controller together with the Gateway API community. Unfortunately, even that announcement failed to generate additional interest in helping maintain Ingress NGINX or develop InGate to replace it." [...]

The final nail in the coffin was when security company Wiz found a killer Ingress NGINX security hole. How bad was it? Wiz declared: "Exploiting this flaw allows an attacker to execute arbitrary code and access all cluster secrets across namespaces, which could lead to complete cluster takeover." [...] You see, the real problem isn't that Ingress NGINX has a major security problem. Heck, hardly a month goes by without another stop-the-presses Windows bug being uncovered. No, the real issue is that here we have yet another example of a mission-critical open source program no one pays to support...


Apple To Resist India Order To Preload State-Run App As Political Outcry Builds

Apple does not plan to comply with India's mandate to preload its smartphones with a state-owned cyber safety app that cannot be disabled. According to Reuters, the order "sparked surveillance concerns and a political uproar" after it was revealed on Monday. From the report: In the wake of the criticism, India's telecom minister Jyotiraditya M. Scindia on Tuesday said the app was a "voluntary and democratic system," adding that users can choose to activate it and can "easily delete it from their phone at any time." At present, the app can be deleted by users. Scindia did not comment on or clarify the November 28 confidential directive that ordered smartphone makers to start preloading it and ensure "its functionalities are not disabled or restricted."

Apple, however, does not plan to comply with the directive and will tell the government it does not follow such mandates anywhere in the world, as they raise a host of privacy and security issues for the company's iOS ecosystem, said two of the industry sources who are familiar with Apple's concerns. They declined to be named publicly as the company's strategy is private. "It's not only like taking a sledgehammer, this is like a double-barrel gun," said the first source.


OpenAI Declares 'Code Red' As Google Catches Up In AI Race

OpenAI reportedly issued a "code red" on Monday, pausing projects like ads, shopping agents, health tools, and its Pulse assistant to focus entirely on improving ChatGPT. "This includes core features like greater speed and reliability, better personalization, and the ability to answer more questions," reports The Verge, citing a memo reported by the Wall Street Journal and The Information. "There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development." From the report: The newfound urgency illustrates an inflection point for OpenAI as it spends hundreds of billions of dollars to fund growth and figures out a path to future profitability. It is also something of a full-circle moment in the AI race. Google, which declared its own "code red" after the arrival of ChatGPT, is a particular concern. Google's AI user base is growing -- helped by the success of popular tools like the Nano Banana image model -- and its latest AI model, Gemini 3, blew past its competitors on many industry benchmarks and popular metrics.



Swimming Pool Reflections

Thomas Hawk posted a photo.

Out of Service

Thomas Hawk posted a photo.

Wel.nl

Read less, know more.

Partij voor de Dieren files enforcement request over sea lions at Artis

AMSTERDAM (ANP) - The living conditions and welfare of the sea lions at the Artis zoo have not improved. That is what the Amsterdam chapter of the Partij voor de Dieren says new footage shows. The party has therefore filed a formal enforcement request with the Ministry of Agriculture, Fisheries, Food Security and Nature.

In July, the Partij voor de Dieren filed a report with hotline 144 over the poor housing of the three California sea lions at Artis. At the time, the zoo said the sea lions would be leaving Artis for a new home at a zoo in Singapore.

Since then, according to the Partij voor de Dieren, not enough has been done to improve the sea lions' living conditions. They live in a shallow pool without privacy and, the party says, display worrying behavior that shows the animals are bored and frustrated.


Airbus to inspect up to 628 aircraft over metal panel problem

PARIS (ANP) - European aircraft maker Airbus said on Tuesday that up to 628 of its popular A320 aircraft worldwide may need to be inspected because of a newly surfaced 'quality problem' with metal panels.

Airbus added that the figure represents the "total number of potentially affected aircraft" and "does not mean that all of these aircraft are necessarily affected."

On Friday, Airbus reported that some 6,000 A320 aircraft needed an immediate software fix. The equipment was said to have been affected by "intense solar radiation" in the cockpit.


Anil Dash

A blog about making culture. Since 1999.

Vibe Coding: Empowering and Imprisoning

In case you haven’t been following the world of software development closely, it’s good to know that vibe coding — using LLM tools to assist with writing code — can enable many people to create apps or software that they wouldn’t otherwise be able to make. This has led to an extraordinarily rapid adoption curve, even among experienced coders across many different disciplines. But there’s a very important threat posed by vibe coding that almost no one has been talking about, one that’s far more insidious and specific than the risks and threats posed by AI or LLMs in general.

Here’s a quick summary:

  • One of the most effective uses of LLMs is in helping programmers write code
  • A huge reason VCs and tech tycoons put billions into funding LLMs was so they could undermine coders and depress wages
  • Vibe coding might limit us to making simpler apps instead of the radical innovation we need to challenge Big Tech

Start vibing

It may be useful to start by explaining how people use LLMs to assist with writing software. My background is that I’ve helped build multiple companies focused on enabling millions of people to create with code. And I’m personally an example of one common scenario with vibe coding. Since I don’t code regularly anymore, I’ve become much slower and less efficient at even the web development tasks that I used to do professionally and fairly competently. In software development, there is a nearly continuous stream of new technologies being released (like when you upgrade your phone, or your computer downloads an update to your web browser), and when those things change, developers have to update their skills and knowledge to stay current with the latest tools and techniques. If you’re not staying on top of things, your skillset can rapidly decay into irrelevance, and it can be hard to get back up to speed, even though you understand the fundamentals completely and the underlying logic of how to write code hasn’t changed at all. It’s like knowing how to be an electrician, but suddenly you have to do all your work in French, and you don’t speak French.

This is the kind of problem that LLMs are really good at helping with. Before I had this kind of coding assistant, I couldn’t finish any meaningful projects in the limited free time I have on nights and weekends. Now, with the assistance of contemporary tools, I can get help with things like routine boilerplate code and obscure syntax, speeding up my work enough to focus on the fun, creative parts of coding that I love.

Even professional coders who are up to date on the latest technologies use these LLM tools to do things like creating scripts, which are essentially small bits of code used to automate or process common tasks. This kind of code is disposable, meaning it may only ever be run once, and it’s not exposed to the internet, so security or privacy concerns aren’t usually much of an issue. In that context, having the LLM create a utility for you can feel like being truly liberated from grunt work, something like having a robot vacuum around to sweep up the floor.
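For instance, such a script might be nothing more than a few lines that tally errors across some log files (the directory and filenames here are purely illustrative):

    # The sort of disposable, run-once utility being described: tally how many
    # lines in each local .log file mention "ERROR". The directory name is
    # purely illustrative; this never touches the network.
    from pathlib import Path

    log_dir = Path("./logs")

    for log_file in sorted(log_dir.glob("*.log")):
        error_count = sum(
            1 for line in log_file.read_text().splitlines() if "ERROR" in line
        )
        print(f"{log_file.name}: {error_count} error line(s)")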

Surfing towards serfdom

This all sounds pretty good, right? It certainly helps explain why so many in the tech world tend to see AI much more positively than almost everyone else does; there’s a clear-cut example of people finding value in these tools in a way that feels empowering or even freeing.

But there are far darker sides to this use of AI. Let me put aside the threats and risks of AI that are true of all uses of the Big AI platforms, like the environmental impact, the training on content without consent, the psychological manipulation of users, the undermining of legal regulations, and other significant harms. These are all real, and profound, but I want to focus on what’s specific to using AI to help write code here, because there are negative externalities that are unique to this context that people haven’t discussed enough. (For more on the larger AI discussion, see "What would good AI look like?")

The first problem raised by vibe coding is an obvious one: the major tech investors focused on making AI good at writing code because they wanted to make coders less powerful and reduce their pay. Go back a decade, and nearly everyone in the world was saying “teach your kids to code,” and being a software engineer was one of the highest-paying, most powerful individual jobs in the history of labor. Pretty soon, coders were acting like it — using their power to improve workplace conditions for those around them at the major tech companies, and pushing their employers to be more socially responsible. Once workers began organizing in this way, the tech tycoons who founded the big tech companies, and the board members and venture capitalists who backed them, immediately began investing billions of dollars in building these technologies that would devalue the labor of millions of coders around the world.

It worked. More than half a million tech workers have been laid off in America since ChatGPT was released in November 2022.

That’s just in the private sector, and just the ones tracked by layoffs.fyi. Software engineering job listings have plummeted to a 5-year low. This is during a period that nobody even describes as a recession. The same venture capitalists who funded the AI boom keep insisting that these trends are about macroeconomic abstractions like interest rates, a stark contrast to their rhetoric the rest of the time, when they insist that they are alpha males who make their own decisions based on their strong convictions and brave stances against woke culture. It is, in fact, the case that they are just greedy people who invested a ton of money into trying to put a lot of good people out of work, and they succeeded in doing so.

There is no reason why AI tools like this couldn't be used in the way that they're often described, where they increase productivity and enable workers to do more and generate more value. But instead we have the wealthiest people in the world telling the wealthiest companies in the world, while they generate record profits, to lay off workers who could be creating cool things for customers, and then blaming it on everyone but themselves.

The past as prison

Then there’s the second problem raised by vibe coding: You can’t make anything truly radical with it. By definition, LLMs are trained on what has come before. In addition to being already-discovered territory, existing code is buggy and broken and sloppy and, as anyone who has ever written code knows, absolutely embarrassing to look at. Worse, many of the people who are using vibe coding tools are increasingly those who don’t understand the code that is being generated by these systems. This means the people generating all of this newly-vibed code won’t even know when the output is insecure, performs poorly, includes exploits that let others take over their system, or is simply incoherent nonsense that looks like code but doesn’t do anything.

All of those factors combine to encourage people to think of vibe coding tools as a sort of “black box” that just spits out an app for you. Even the giant tech companies are starting to encourage this mindset, tacitly endorsing the idea that people don’t need to know what their systems are doing under the hood. But obviously, somebody needs to know whether a system is actually secure. Somebody needs to know if a system is actually doing the tasks it says that it’s doing. The Big AI companies that make the most popular LLMs on the market today routinely design their products to induce emotional dependency in users by giving them positive feedback and encouragement, even when that requires generating false responses. Put more simply: they make the bot lie to you to make you feel good so you use the AI more. That’s terrible in a million ways, but one of them is that it sure does generate some bad code.

And a vibe coding tool absolutely won’t make something truly new. The most radical, disruptive, interesting, surprising, weird, fun innovations in technology have happened because people with a strange compulsion to do something cool had enough knowledge to get their code out into the world. The World Wide Web itself was not a huge technological leap over what came before — it took off because of a huge leap in insight into human nature and human behavior that happened to be captured in code. The actual bits and bytes? They were mostly just plain text, much of which was in formats that had already been around for many years prior to Tim Berners-Lee assembling it all into the first web browser. That kind of surprising innovation could probably never be vibe coded, even though all of the raw materials might be scooped up by an LLM, because even if the human writing the prompt had that counterintuitive stroke of genius, the system would still be hemmed in by the constraints of the works it had been trained on. The past is a prison when you’re inventing the future.

What’s more, if you were going to use a vibe coding tool to make a truly radical new technology, do you think today’s Big AI companies would let their systems create that app? The same companies that made a platform that just put hundreds of thousands of coders out of work? The same companies that make a platform that tells your kids to end their own lives? The same companies whose cronies in the White House are saying there should never be any laws reining them in? Those folks are going to help you make new tech that threatens to disrupt their power? I don’t think so.

Putting power in people’s hands

I’m deeply torn about what the future of LLMs for coding should be. I’ve spent decades of my life trying to make it easier for everyone to make software. I’ve seen, firsthand, the power of using AI tools to help coders — especially those new to coding — build their confidence in being able to create something new. I love that potential, and in many ways, it’s the most positive and optimistic possibility around LLMs that I’ve seen. It’s the thing that makes me think that maybe there is a part of all the AI hype that is not pure bullshit. Especially if we can find a version of these tools that’s genuinely open source and free and has been trained on people’s code with their consent and cooperation, perhaps in collaboration with some educational institutions, I’d be delighted to see that shared with the world in a thoughtful way.

But I also have seen the majority of the working coders I know (and the non-working coders I know, including myself) rush to integrate the commercial coding assistants from the Big AI companies into their workflow without necessarily giving proper consideration to the long-term implications of that choice. What happens once we’ve developed a dependence on that assistance? How will people introduce new technologies like new programming languages and frameworks if we all consider the LLMs to be the canonical way of writing our code, and the training models don’t know the new tech exists? How does our imagination shrink when we treat creating with code as choosing among the outputs of an LLM rather than starting from the blank slate of our imagination? How will we build the next generation of coders skilled enough to catch the glaring errors that LLMs create in their code?

The negatives and positives of a new technology for developers have never been this stark, or this tightly coupled. Generally, change comes to coders incrementally. Historically, there was always a (wonderful!) default skepticism in coding culture, where anything that reeked of marketing or hype was looked at with a huge amount of doubt until there was a significant amount of proof to back it up.

But in recent years, as with everything else, the culture wars have come for tech. There’s now a cohort in the coding world that has adopted a cult of personality around a handful of big tech tycoons despite the fact that these men are deeply corrosive to society. Or perhaps because they are. As a result, there’s a built-in constituency for any new AI tool, regardless of its negative externalities, which gives them a sense of momentum even where there may not be any.

It’s worth us examining what’s really going on, and articulating explicitly what we’re trying to enable. Who are we trying to empower? What does success look like? What do we want people to be able to build? What do we not want people to be able to make? What price is too high to pay? What convenience is not worth the cost?

What tools do we choose?

I do, still, believe deeply in the power of technology to empower people. I believe firmly that you have to understand how to create technology if you want to understand how to control it. And I still believe that we have to democratize the power to create and control technology to as many people as possible so that technology can be something people can use as a tool, rather than something that happens _to_ them.

We are now in a complex phase, though, where the promise of democratizing access to creating technology is suddenly fraught in a way that it has never been before. The answer can’t possibly be that technology remains inaccessible and difficult for those outside of a privileged class, and easy for those who are already comfortable in the existing power structure.

A lot is still very uncertain, but I come back to one key question that helps me frame the discussion of what’s next: What’s the most radical app that we could build? And which tools will enable me to build it? Even if all we can do is start having a more complicated conversation about what we’re doing when we’re vibe coding, we’ll be making progress towards a more empowered future.