Slashdot

News for nerds, stuff that matters

Kubernetes Is Retiring Its Popular Ingress NGINX Controller

During last month's KubeCon North America in Atlanta, Kubernetes maintainers announced the upcoming retirement of Ingress NGINX. "Best-effort maintenance will continue until March 2026," noted the Kubernetes SIG Network and the Security Response Committee. "Afterward, there will be no further releases, no bugfixes, and no updates to resolve any security vulnerabilities that may be discovered." In a recent op-ed for The Register, Steven J. Vaughan-Nichols reflects on the decision and speculates about what might have prevented this outcome: Ingress NGINX, for those who don't know it, is an ingress controller in Kubernetes clusters that manages and routes external HTTP and HTTPS traffic to the cluster's internal services based on configurable Ingress rules. It acts as a reverse proxy, ensuring that requests from clients outside the cluster are forwarded to the correct backend services within the cluster according to path, domain, and TLS configuration. As such, it's vital for network traffic management and load balancing. You know, the important stuff.

Now this longstanding project, once celebrated for its flexibility and breadth of features, will soon be "abandonware." So what? After all, it won't be the first time a once-popular program shuffled off the stage. Off the top of my head, dBase, Lotus 1-2-3, and VisiCalc spring to my mind. What's different is that there are still thousands of Ingress NGINX controllers in use. Why is it being put down, then, if it's so popular? Well, there is a good reason. As Tabitha Sable, a staff engineer at Datadog who is also co-chair of the Kubernetes special interest group for security, pointed out: "Ingress NGINX has always struggled with insufficient or barely sufficient maintainership. For years, the project has had only one or two people doing development work, on their own time, after work hours, and on weekends. Last year, the Ingress NGINX maintainers announced their plans to wind down Ingress NGINX and develop a replacement controller together with the Gateway API community. Unfortunately, even that announcement failed to generate additional interest in helping maintain Ingress NGINX or develop InGate to replace it." [...]

The final nail in the coffin was when security company Wiz found a killer Ingress NGINX security hole. How bad was it? Wiz declared: "Exploiting this flaw allows an attacker to execute arbitrary code and access all cluster secrets across namespaces, which could lead to complete cluster takeover." [...] You see, the real problem isn't that Ingress NGINX has a major security problem. Heck, hardly a month goes by without another stop-the-presses Windows bug being uncovered. No, the real issue is that here we have yet another example of a mission-critical open source program no one pays to support...
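
To make the routing role described above concrete: an Ingress resource is a small piece of declarative configuration that the controller watches and turns into reverse-proxy rules. Below is a minimal sketch of creating such a rule with the official Kubernetes Python client; the hostname, path, service name, and namespace are illustrative placeholders rather than details from the story, and the "nginx" ingress class is what ties the rule to the Ingress NGINX controller.

```python
# Minimal sketch: creating an Ingress rule of the kind Ingress NGINX consumes,
# using the official Kubernetes Python client. The host, path, service name,
# and namespace below are illustrative placeholders.
from kubernetes import client, config

def create_example_ingress() -> None:
    config.load_kube_config()  # use the local kubeconfig for cluster access
    ingress = client.V1Ingress(
        metadata=client.V1ObjectMeta(name="example-ingress"),
        spec=client.V1IngressSpec(
            ingress_class_name="nginx",  # handled by the Ingress NGINX controller
            rules=[
                client.V1IngressRule(
                    host="app.example.com",  # route by domain...
                    http=client.V1HTTPIngressRuleValue(
                        paths=[
                            client.V1HTTPIngressPath(
                                path="/api",        # ...and by path prefix
                                path_type="Prefix",
                                backend=client.V1IngressBackend(
                                    service=client.V1IngressServiceBackend(
                                        name="api-service",  # internal backend service
                                        port=client.V1ServiceBackendPort(number=80),
                                    )
                                ),
                            )
                        ]
                    ),
                )
            ],
        ),
    )
    client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)

if __name__ == "__main__":
    create_example_ingress()
```

With a rule like this in place, requests arriving for app.example.com/api would be forwarded by the controller to the api-service backend inside the cluster, which is exactly the reverse-proxy behavior Vaughan-Nichols describes.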

Apple To Resist India Order To Preload State-Run App As Political Outcry Builds

Apple does not plan to comply with India's mandate to preload its smartphones with a state-owned cyber safety app that cannot be disabled. According to Reuters, the order "sparked surveillance concerns and a political uproar" after it was revealed on Monday. From the report: In the wake of the criticism, India's telecom minister Jyotiraditya M. Scindia on Tuesday said the app was a "voluntary and democratic system," adding that users can choose to activate it and can "easily delete it from their phone at any time." At present, the app can be deleted by users. Scindia did not comment on or clarify the November 28 confidential directive that ordered smartphone makers to start preloading it and ensure "its functionalities are not disabled or restricted."

Apple, however, does not plan to comply with the directive and will tell the government it does not follow such mandates anywhere in the world, as they raise a host of privacy and security issues for the company's iOS ecosystem, said two of the industry sources who are familiar with Apple's concerns. They declined to be named publicly as the company's strategy is private. "It's not only like taking a sledgehammer, this is like a double-barrel gun," said the first source.

OpenAI Declares 'Code Red' As Google Catches Up In AI Race

OpenAI reportedly issued a "code red" on Monday, pausing projects like ads, shopping agents, health tools, and its Pulse assistant to focus entirely on improving ChatGPT. "This includes core features like greater speed and reliability, better personalization, and the ability to answer more questions," reports The Verge, citing a memo reported by the Wall Street Journal and The Information. "There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development." From the report: The newfound urgency illustrates an inflection point for OpenAI as it spends hundreds of billions of dollars to fund growth and figures out a path to future profitability. It is also something of a full-circle moment in the AI race. Google, which declared its own "code red" after the arrival of ChatGPT, is a particular concern. Google's AI user base is growing -- helped by the success of popular tools like the Nano Banana image model -- and its latest AI model, Gemini 3, blew past its competitors on many industry benchmarks and popular metrics.

UK Plans To Ban Cryptocurrency Political Donations

The UK government plans to ban political donations made in cryptocurrency over fears of anonymity, foreign influence, and traceability issues, though the ban won't be ready in time for the upcoming elections bill. The Guardian reports: The government's ambition to ban crypto donations will be a blow to Nigel Farage's Reform UK party, which became the first to accept contributions in digital currency this year. It is believed to have received its first registrable donations in cryptocurrency this autumn and the party has set up its own crypto portal to receive contributions, saying it is subject to "enhanced" checks. Government sources have said ministers believe cryptocurrency donations to be a problem, as they are difficult to trace and could be exploited by foreign powers or criminals.

Pat McFadden, then a Cabinet Office minister, first raised the idea in July, saying: "I definitely think it is something that the Electoral Commission should be considering. I think that it's very important that we know who is providing the donation, are they properly registered, what are the bona fides of that donation." The Electoral Commission provides guidance on crypto donations but ministers accept any ban would probably have to come from the government through legislation. "Crypto donations present real risks to our democracy," said Susan Hawley, the executive director of Spotlight on Corruption. "We know that bad actors like Russia use crypto to undermine and interfere in democracies globally, while the difficulties involved in tracing the true source of transactions means that British voters may not know everyone who's funding the parties they vote for."

Fokke & Sukke

F & S

Swimming Pool Reflections

Thomas Hawk posted a photo.

Out of Service

Thomas Hawk posted a photo.

Center of the World, Felicity, California

Thomas Hawk posted a photo.

Forever Said Too Soon

Thomas Hawk posted a photo.

Wel.nl

Read less, know more.

Partij voor de Dieren seeks enforcement action over Artis sea lions

AMSTERDAM (ANP) - The living conditions and welfare of the sea lions at the Artis zoo have not improved, according to the Amsterdam branch of the Partij voor de Dieren, which points to new footage. The party has therefore filed a formal enforcement request with the Ministry of Agriculture, Fisheries, Food Security and Nature.

In July, the Partij voor de Dieren filed a report with the animal welfare hotline 144 over the poor housing of the three California sea lions at Artis. The zoo said at the time that the sea lions would leave Artis for a new home at a zoo in Singapore.

In the meantime, the Partij voor de Dieren says, not enough has been done to improve the sea lions' living conditions. They live in a shallow basin without privacy and, according to the party, display worrying behavior showing that the animals are bored and frustrated.


Airbus to inspect up to 628 aircraft over metal panel problem

PARIS (ANP) - European planemaker Airbus said on Tuesday that up to 628 of its popular A320 aircraft worldwide may need to be inspected because of a "quality problem" with metal panels that has come to light.

Airbus added that the figure represents the "total number of potentially affected aircraft" and that "this does not mean that all of these aircraft are necessarily affected."

On Friday, Airbus reported that some 6,000 A320 aircraft needed an immediate software fix. The equipment was reportedly affected by "intense solar radiation" in the cockpit.


Palestinians report journalist killed in Israeli drone strike

KHAN YOUNIS (ANP) - Palestinian journalist Mahmoud Wadi has been killed in an Israeli drone strike in the Gaza Strip, the Palestinian news agency WAFA reports. Another journalist was wounded in the same strike, which took place in the southern city of Khan Younis.

According to local media, Wadi worked for several outlets, which distributed, among other things, footage he shot with a drone camera.

Since the start of the Israeli attacks on the Gaza Strip, a Palestinian territory, Israel has killed more than 200 journalists, the Committee to Protect Journalists (CPJ) has calculated. Israel does not allow foreign journalists into the Gaza Strip.

A ceasefire between Israel and the Palestinian movement Hamas has been in force since October 10. Since then, the Israelis have killed 357 Palestinians in the territory, according to Palestinian health authorities.


Amnesty calls for war crimes investigation into RSF attack on Sudan camp

KHARTOUM (ANP) - The attacks by the paramilitary Rapid Support Forces (RSF) on a refugee camp in Sudan must be investigated as war crimes, Amnesty International says in a report. Last April, the RSF carried out the large-scale attack on a displacement camp near el-Fasher.

According to Amnesty, civilians were deliberately killed, hostages were taken, and mosques, schools and health clinics were looted and destroyed. "This attack did not stand alone, but was part of ongoing attacks on villages and camps for internally displaced people," said Secretary General Agnès Callamard.

Survivors of the attack told the organization that civilians were deliberately shot at, in some cases while they were hiding in houses or a mosque. Amnesty calls that a "serious violation of international humanitarian law."

The attack on the refugee camp led to a mass exodus of civilians to el-Fasher, where famine had taken hold. The city was captured by the RSF at the end of October, reportedly accompanied by large-scale killings.


ANWB: interest in driving electric has declined

THE HAGUE (ANP) - For the first time in years, people have become more reluctant to switch to electric driving within five years. According to a survey by the ANWB, this is partly due to the government scaling back incentives: the purchase subsidy for electric cars has been scrapped, and since this year road tax must also be paid on these vehicles.

Nearly four in six respondents consider electric cars too expensive to make the switch. Many also still find the driving range insufficient and worry about battery life. In addition, many people cannot charge a car at home.

The motoring organization surveyed nearly 2,360 people. According to the ANWB, the intention to buy an electrically powered car within five years has fallen to 23 percent this year, from 28 percent last year. "For the first time in a long while, the group without purchase intention is larger than the group with purchase intention," the study states.

According to the ANWB, many "misconceptions" about electric driving persist when it comes to cost, range and ease of charging: the supply of affordable second-hand cars has grown, and driving ranges keep increasing. The ANWB also points out that the number of charging points in the Netherlands continues to grow.


Mayor of Terneuzen leaves for good after row over asylum seekers' center

TERNEUZEN (ANP) - Erik van Merrienboer is stepping down for good as mayor of Terneuzen. The province will look for an interim mayor. That was announced late on Tuesday evening by the King's Commissioner in Zeeland, Hugo de Jonge, after a closed-door meeting of the Terneuzen city council.

Van Merrienboer resigned last month. The aldermen, acting on the advice of the city council, do not want to issue a permit for the arrival of an asylum seekers' center. The council had voted in favor of a reception center last year, and according to Van Merrienboer the municipality would be breaking the law by refusing the permit.

De Jonge had earlier asked Van Merrienboer to reverse his decision to leave, but he stood by it. Van Merrienboer was also absent from Tuesday's meeting.

The commissioner will consult with the party group leaders and the aldermen on Thursday. They can then indicate what they think an interim mayor should bring to the role. De Jonge wants to nominate a candidate on December 16, who would take up the post early next year.


Anil Dash

A blog about making culture. Since 1999.

Vibe Coding: Empowering and Imprisoning

In case you haven’t been following the world of software development closely, it’s good to know that vibe coding — using LLM tools to assist with writing code — can help enable many people to create apps or software that they wouldn’t otherwise be able to make. This has led to an extraordinarily rapid adoption curve amongst even experienced coders in many different disciplines within the world of coding. But there’s a very important threat posed by vibe coding that almost no one has been talking about, one that’s far more insidious and specific than just the risks and threats posed by AI or LLMs in general.

Here’s a quick summary:

  • One of the most effective uses of LLMs is in helping programmers write code
  • A huge reason VCs and tech tycoons put billions into funding LLMs was so they could undermine coders and depress wages
  • Vibe coding might limit us to making simpler apps instead of the radical innovation we need to challenge Big Tech

Start vibing

It may be useful to start by explaining how people use LLMs to assist with writing software. My background is that I’ve helped build multiple companies focused on enabling millions of people to create with code. And I’m personally an example of one common scenario with vibe coding. Since I don’t code regularly anymore, I’ve become much slower and less efficient at even the web development tasks that I used to do professionally and was once fairly competent at performing. In software development, there is usually a nearly continuous stream of new technologies being released (like when you upgrade your phone, or your computer downloads an update to your web browser), and when those things change, developers have to update their skills and knowledge to stay current with the latest tools and techniques. If you’re not staying on top of things, your skillset can rapidly decay into irrelevance, and it can be hard to get back up to speed, even though you understand the fundamentals completely and the underlying logic of how to write code hasn’t changed at all. It’s like knowing how to be an electrician but suddenly you have to do all your work in French, and you don’t speak French.

This is the kind of problem that LLMs are really good at helping with. Before I had this kind of coding assistant, I couldn’t do any meaningful projects within the limited amount of free time that I have available on nights and weekends to build things. Now, with the assistance of contemporary tools, I can get help with things like routine boilerplate code and obscure syntax, speeding up my work enough to focus on the fun, creative parts of coding that I love.

Even professional coders who are up to date on the latest technologies use these LLM tools to do things like creating scripts, which are essentially small bits of code used to automate or process common tasks. This kind of code is disposable, meaning it may only ever be run once, and it’s not exposed to the internet, so security or privacy concerns aren’t usually much of an issue. In that context, having the LLM create a utility for you can feel like being truly liberated from grunt work, something like having a robot vacuum around to sweep up the floor.
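
To illustrate the kind of disposable utility being described, here is a minimal sketch of such a script; the file name and the column it summarizes are made up for the example, and this is the sort of one-off code an LLM assistant is typically asked to produce.

```python
# A throwaway, run-once utility of the kind described above: count how many
# rows in a local CSV fall into each "status" value. The file name and column
# name are illustrative placeholders.
import csv
from collections import Counter

def count_by_status(path: str = "orders.csv") -> None:
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row.get("status", "unknown")] += 1
    for status, n in counts.most_common():
        print(f"{status}: {n}")

if __name__ == "__main__":
    count_by_status()
```

Nothing here touches the internet or handles secrets, which is why, as noted above, security and privacy concerns are usually minimal for this class of code.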

Surfing towards serfdom

This all sounds pretty good, right? It certainly helps explain why so many in the tech world tend to see AI much more positively than almost everyone else does; this is a clear-cut example of people finding value in these tools in a way that feels empowering or even freeing.

But there are far darker sides to this use of AI. Let me put aside the threats and risks of AI that are true of all uses of the Big AI platforms, like the environmental impact, the training on content without consent, the psychological manipulation of users, the undermining of legal regulations, and other significant harms. These are all real, and profound, but I want to focus on what’s specific to using AI to help write code here, because there are negative externalities that are unique to this context that people haven’t discussed enough. (For more on the larger AI discussion, see "What would good AI look like?")

The first problem raised by vibe coding is an obvious one: the major tech investors focused on making AI good at writing code because they wanted to make coders less powerful and reduce their pay. If you go back a decade, nearly everyone in the world was saying “teach your kids to code” and being a software engineer was one of the highest paying, most powerful individual jobs in the history of labor. Pretty soon, coders were acting like it — using their power to improve workplace conditions for those around them at the major tech companies, and pushing their employers to be more socially responsible. Once workers began organizing in this way, the tech tycoons who founded the big tech companies, and the board members and venture capitalists who backed them, immediately began investing billions of dollars in building these technologies that would devalue the labor of millions of coders around the world.

It worked. More than half a million tech workers have been laid off in America since ChatGPT was released in November 2022.

That’s just in the private sector, and just the ones tracked by layoffs.fyi. Software engineering job listings have plummeted to a 5-year low. This is during a period of time that nobody even describes as a recession. The same venture capitalists who funded the AI boom keep insisting that these trends are about macroeconomic abstractions like interest rates, a stark contrast to their rhetoric the rest of the time, when they insist that they are alpha males who make their own decisions based on their strong convictions and brave stances against woke culture. It is, in fact, the case that they are just greedy people who invested a ton of money into trying to put a lot of good people out of work, and they succeeded in doing so.

There is no reason why AI tools like this couldn't be used in the way that they're often described, where they increase productivity and enable workers to do more and generate more value. But instead we have the wealthiest people in the world telling the wealthiest companies in the world, while they generate record profits, to lay off workers who could be creating cool things for customers, and then blaming it on everyone but themselves.

The past as prison

Then there’s the second problem raised by vibe coding: You can’t make anything truly radical with it. By definition, LLMs are trained on what has come before. In addition to being already-discovered territory, existing code is buggy and broken and sloppy and, as anyone who has ever written code knows, absolutely embarrassing to look at. Worse, many of the people who are using vibe coding tools are increasingly those who don’t understand the code that is being generated by these systems. This means the people generating all of this newly-vibed code won’t even know when the output is insecure, or will perform poorly, or includes exploits that let others take over their system, or when it is simply incoherent nonsense that looks like code but doesn’t do anything.

All of those factors combine to encourage people to think of vibe coding tools as a sort of “black box” that just spits out an app for you. Even the giant tech companies are starting to encourage this mindset, tacitly endorsing the idea that people don’t need to know what their systems are doing under the hood. But obviously, somebody needs to know whether a system is actually secure. Somebody needs to know if a system is actually doing the tasks it says that it’s doing. The Big AI companies that make the most popular LLMs on the market today routinely design their products to induce emotional dependency in users by giving them positive feedback and encouragement, even when that requires generating false responses. Put more simply: they make the bot lie to you to make you feel good so you use the AI more. That’s terrible in a million ways, but one of them is that it sure does generate some bad code.

And a vibe coding tool absolutely won’t make something truly new. The most radical, disruptive, interesting, surprising, weird, fun innovations in technology have happened because people with a strange compulsion to do something cool had enough knowledge to get their code out into the world. The World Wide Web itself was not a huge technological leap over what came before — it took off because of a huge leap in insight into human nature and human behavior, that happened to be captured in code. The actual bits and bytes? They were mostly just plain text, much of which was in formats that had already been around for many years prior to Tim Berners-Lee assembling it all into the first web browser. That kind of surprising innovation could probably never be vibe coded, even though all of the raw materials might be scooped up by an LLM, because even if the human writing the prompt had that counterintuitive stroke of genius, the system would still be hemmed in by the constraints of the works it had been trained on. The past is a prison when you’re inventing the future.

What’s more, if you were going to use a vibe coding tool to make a truly radical new technology, do you think today’s Big AI companies would let their systems create that app? The same companies that made a platform that just put hundreds of thousands of coders out of work? The same companies that make a platform that tells your kids to end their own lives? The same companies whose cronies in the White House are saying there should never be any laws reining them in? Those folks are going to help you make new tech that threatens to disrupt their power? I don’t think so.

Putting power in people’s hands

I’m deeply torn about what the future of LLMs for coding should be. I’ve spent decades of my life trying to make it easier for everyone to make software. I’ve seen, firsthand, the power of using AI tools to help coders — especially those new to coding — build their confidence in being able to create something new. I love that potential, and in many ways, it’s the most positive and optimistic possibility around LLMs that I’ve seen. It’s the thing that makes me think that maybe there is a part of all the AI hype that is not pure bullshit. Especially if we can find a version of these tools that’s genuinely open source and free and has been trained on people’s code with their consent and cooperation, perhaps in collaboration with some educational institutions, I’d be delighted to see that shared with the world in a thoughtful way.

But I also have seen the majority of the working coders I know (and the non-working coders I know, including myself) rush to integrate the commercial coding assistants from the Big AI companies into their workflow without necessarily giving proper consideration to the long-term implications of that choice. What happens when we’ve developed our dependencies on that assistance? How will people introduce new technologies like new programming languages and frameworks if we all consider the LLMs to be the canonical way of writing our code, and the training models don’t know the new tech exists? How does our imagination shrink when we consider our options of what we create with code to be choosing between the outputs of the LLM rather than starting from the blank slate of our imagination? How will we build the next generation of coders skilled enough to catch the glaring errors that LLMs create in their code?

When it comes to enabling developers, the negatives and positives of a new technology have never been this stark, or this tightly coupled. Generally, change comes to coders incrementally. Historically, there was always a (wonderful!) default skepticism in coding culture, where anything that reeked of marketing or hype was looked at with a huge amount of doubt until there was a significant amount of proof to back it up.

But in recent years, as with everything else, the culture wars have come for tech. There’s now a cohort in the coding world that has adopted a cult of personality around a handful of big tech tycoons despite the fact that these men are deeply corrosive to society. Or perhaps because they are. As a result, there’s a built-in constituency for any new AI tool, regardless of its negative externalities, which gives them a sense of momentum even where there may not be any.

It’s worth us examining what’s really going on, and articulating explicitly what we’re trying to enable. Who are we trying to empower? What does success look like? What do we want people to be able to build? What do we not want people to be able to make? What price is too high to pay? What convenience is not worth the cost?

What tools do we choose?

I do, still, believe deeply in the power of technology to empower people. I believe firmly that you have to understand how to create technology if you want to understand how to control it. And I still believe that we have to democratize the power to create and control technology to as many people as possible so that technology can be something people can use as a tool, rather than something that happens _to_ them.

We are now in a complex phase, though, where the promise of democratizing access to creating technology is suddenly fraught in a way that it has never been before. The answer can’t possibly be that technology remains inaccessible and difficult for those outside of a privileged class, and easy for those who are already comfortable in the existing power structure.

A lot is still very uncertain, but I come back to one key question that helps me frame the discussion of what’s next: What’s the most radical app that we could build? And which tools will enable me to build it? Even if all we can do is start having a more complicated conversation about what we’re doing when we’re vibe coding, we’ll be making progress towards a more empowered future.

Mayor of Terneuzen leaves for good after conflict over arrival of asylum seekers' center

Mayor Erik van Merrienboer had already announced his departure last week in a letter to the city council.

Fedde Schurer

About sixty-five years ago, our history teacher in Sneek told the following anecdote. The Frisian writer and politician Fedde Schurer played Sinterklaas at a…

Sunday morning

lioil has added a photo to the pool.

Showa Memorial Park, Tachikawa, Tokyo, Japan

The Register

Biting the hand that feeds IT — Enterprise Technology News and Analysis

Amazon is forging a walled garden for enterprise AI

AWS Chief Matt Garman lays out his vision for bringing artificial intelligence to the enterprise

Re:Invent  Amazon wants to make AI meaningful to enterprises, and it’s building yet another walled garden disguised as an easy button to do it…