Slashdot

News for nerds, stuff that matters

The Audio Industry Is Grappling With the Rise of 'Podslop'

An anonymous reader quotes a report from Bloomberg's Ashley Carman: Welcome to the modern era of podcasting, in which thousands of new shows are released into the world every day, with a sizable portion likely being AI-generated. Figuring out exactly which ones fall into that growing category is becoming more difficult just as the industry is starting to take the issue seriously. In only the past month or so, Amazon launched a feature that explains a product by generating a quasi-podcast, complete with co-hosts talking to each other and taking questions from users. Shout out to Business Insider reporter Katie Notopoulos for spotting this (and, naturally, demoing it with an adult diaper rash cream). Not long ago, Nicholas Thompson, chief executive officer of The Atlantic, noted that "podslop" dominated his Spotify search results when he typed in the word "Sora." This was around the time that OpenAI shut down its user-generated, AI-content-only app.

[...] All of which raises some big, difficult questions. For one, what should the listening platforms do about this incursion? As of right now, Apple Podcasts requires creators who generated a "material portion" of their show using AI to disclose it. The platform also bans misleading or deceptive content. Spotify hasn't published any specific guidelines around AI, though it maintains general rules against dangerous and misleading content. Where this conversation gets even trickier is when it comes to money. Many of these podcasts are hosted on at least one free service that allows programs to opt into its ad marketplace with zero barrier to entry, meaning these shows (and the hosting service) profit off every listen or download. Spreaker, a company owned by iHeartMedia, is the primary one to watch here. Though it tells users to disclose when they rely on AI, it still allows those shows to opt into its programmatic ad marketplace, which pays creators 60% of the revenue generated by the ads placed in their shows. It stands to reason that most of these thousands of shows don't reach many people. But in the aggregate, the ears and dollars could add up. Are the advertisers on board with being next to AI-generated content, some of which might be deemed "slop"? There's also the question of how to define "slop." Jackson of the Podcast Index and his co-host Adam Curry treat it as something listeners simply know when they hear it, while Alberto Betella, co-founder of RSS.com, defines it as "fully automated content with no human review."

Jeanine Wright, co-founder of Inception Point, rejects the debate altogether: "The people still talking about slop are still making 6-7 jokes," she said. "It's still yesterday's conversation."

Read more of this story at Slashdot.

The Guardian

Latest news, sport, business, comment, analysis and reviews from the Guardian, the world's leading liberal voice

Blake Lively and Justin Baldoni settle lawsuit over acrimonious It Ends With Us production

Settlement details were not revealed in the agreement that put an end to a highly anticipated trial before it began

Blake Lively and Justin Baldoni have settled their legal dispute from the acrimonious production of their 2024 film It Ends With Us, just weeks before a highly anticipated scheduled trial.

In a joint statement on Monday, legal representatives of both parties said: “The end product – the movie It Ends With Us – is a source of pride to all of us who worked to bring it to life. Raising awareness, and making a meaningful impact in the lives of domestic violence survivors – and all survivors – is a goal that we stand behind.”

Continue reading...

GameStop shares fall 10% after CEO skirts questions over eBay acquisition details

Ryan Cohen said he didn’t understand questions about how the video games retailer could afford its $55.5bn bid

GameStop’s shares fell more than 10% on Monday as questions emerged about how the company would finance its surprise $55.5bn bid for eBay.

In an interview with CNBC, Ryan Cohen, GameStop’s CEO, skirted repeated inquiries about how the video games retailer could afford the deal, saying he didn’t understand the questions.

‘A test of our values’: Starmer to call for whole-society response to rising antisemitism

PM will say responsibility to stand with Jewish communities lies with ‘every one of us’ at event on Tuesday

Keir Starmer will call for a whole-of-society response to rising antisemitism on Tuesday, saying that it is not enough simply to condemn the scourge, but people “must show it” through their actions too.

Before a roundtable event at Downing Street, the prime minister will call for action on all forms of antisemitism, after a knife attack against the Jewish community in Golders Green last week, a spate of serious arson attacks and the terror incident in Heaton Park in October.

Zohran Mamdani condemns ICE after police and protesters clash in Brooklyn

Police forcibly broke up protest outside hospital where federal immigration agents took detainee for evaluation

New York City’s mayor, Zohran Mamdani, and other local officials on Monday condemned Immigration and Customs Enforcement (ICE) after federal officers dragged a man out of a hospital building where he had been taken following an arrest, prompting a crowd of protesters to gather outside, where they clashed with police.

The incident over the weekend has also drawn scrutiny from critics questioning the New York police department’s response at the scene, in relation to New York City’s sanctuary laws, which bar local police from assisting federal immigration authorities in civil immigration enforcement.

Met Gala 2026 red carpet: the best looks in pictures

Event chairs Nicole Kidman, Beyoncé, Venus Williams and Anna Wintour had guests dress to the theme ‘fashion is art’, at the event controversially funded by new honorary chairs Jeff Bezos and Lauren Sánchez Bezos

Three die in boating tragedy off NSW coast after rescue boat rolls while trying to help sinking yacht

Three people confirmed dead but four made it to shore after rescue went wrong in rough seas

Three people are dead after a marine rescue attempt off the NSW coast went tragically wrong.

NSW Police said four people made it to shore after two vessels became stricken in heavy conditions on Monday night.

View from Sorrento

BertvB posted a photo:

MetaFilter

The past 24 hours of MetaFilter

"Yes, that is Jennifer Coolidge. And then things get real."

Adrian Hon (May 3, 2026), "The Immersive Austenland Experience and the Impossibility of Romantic Larp": "I want to convince you that the 2013 romantic comedy Austenland has the most to teach us about how LARP is understood in popular culture." 'Making of' post: "How I Made My First Video Essay." Related April Fool's post: "Austenland: The World's Most Immersive Austen Experience?" See also "I'm Making Strandfall, a Solarpunk Orienteering Larp" and the recently-released Seasons of Larp, journal of Knutepunkt 2026, in which Hon discusses Nikolai Evreinov's "The Theatre for Oneself" (login required) and "The Storming of the Winter Palace" (captured on film). Early LARPing previously, and--less immersively--the Good Society RPG's 2nd ed. is coming soon but entries for a related game jam plus the extremely light Midnight Editions are readily available.

Tokens and Dreams

The one great principle of the English law is, to make business for itself.

photos/tokens-and-dreams-0.png

The recurring theme running through my mind the last few months has been complexity within a software application. Forget coding. Sales is using AI to write all new code, so for us engineers there's not a hell of a lot to do besides think (and be there to hold the bag).

Last week I generated a CSV of some internal company metrics. With only a sentence or two of prompt, generative AI extrapolated meaningful signals, correlated changes in the data with external signals that were not explicitly expressed (e.g. interest rate hikes), and built a polished interactive dashboard with relevant visualizations. Never mind the fetishization of dark mode or the tell-tale slop signs (what is it with that fucking font?); most people would never notice these, because it's coded to look "modern" and it looks the part. I didn't even ask for the dashboard or any visualizations. Results like these seem magical. I believe this is how most people experience generative AI.

Around the same time, I ran another AI coding experiment on one of my smaller open-source libraries, scout, and the process was so riddled with flaws and subtle failures that I know I lost time (and sanity) by even attempting to let AI write code. You see, scout is just a dead-simple RESTful search server written as a flask app. This is not frontiers-of-engineering shit; it's about as mechanical as it gets in terms of implementation. As in my previous experiments with AI, the strength of the tool in coding tasks was that it could trace logic bugs and find inconsistencies precisely and accurately. The weakness was that as soon as it began to write code, it produced tangles of weeds that had to be aggressively hand-pruned, because with each iteration the weeds had a tendency to spread... and spread.

photos/claude-and-his-bros-writing-my-code.jpg

Claude and his bros sit down to write me some code.

This is why I'm stuck. I'm stuck between competing narratives, each of which is exerting real business pressure. To push back when people's daily experience of AI is of the magical variety is seen as almost perverse. I find myself constantly wanting to say "No! I embrace these tools! This is not thinly-veiled self-preservation! Just hear me out..." But how do I express this when, at every turn, a new silver bullet for agent orchestration, automatic coding, automatic review, automatic thinking is being announced? Going further, as someone concerned with code as the ground-truth for a system, how do I take the leap of faith and relinquish control to a swarm of agents and markdown files?

cybernetics

Intelligent Machines, 1935.

These dynamics, the rise of agentic coding loops, and some unrelated UFO stuff had me thinking about cybernetics (of all things). Cybernetics emerged after WWII as a framework for studying control mechanisms in complex systems. The canonical example is a thermostat that kicks on heating or cooling when the temperature falls outside the specified range, and then returns to passive mode when back within the acceptable range. The central idea is feedback.
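The thermostat loop can be sketched in a few lines of Python. The temperature band, drift, and heater strength below are illustrative values invented for the demo, not anything from the text; the point is only the shape of the feedback loop:

```python
def thermostat_step(temp, heating, low=19.0, high=21.0):
    """Return the new heater state for the current temperature.

    The regulator acts only when the environment leaves the
    acceptable range; inside the band it stays passive.
    """
    if temp < low:
        return True   # too cold: kick on the heat
    elif temp > high:
        return False  # too warm: back off
    return heating    # within range: no change (passive mode)


def simulate(temp=18.0, steps=20, drift=-0.3, heat=0.8):
    """Run the loop: the room drifts colder each step; the heater,
    when on, pushes back harder than the drift."""
    heating = False
    history = []
    for _ in range(steps):
        heating = thermostat_step(temp, heating)
        temp += drift + (heat if heating else 0.0)
        history.append(round(temp, 2))
    return history
```

Running simulate() shows the temperature settling into a bounded oscillation around the dead-band: the regulator's corrective feedback continuously absorbs the environment's drift, which is the whole cybernetic idea in miniature.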

The "first law" of cybernetics, Ashby's Law of Requisite Variety, states that in order to control a system, the regulating function (feedback) must be able to match the state-space complexity of the operating environment. The idea is that without adaptive control, the environment dominates the system and eventually leads to failure. In software engineering, I see a two-layered system: at the surface you have the software artifact itself, the application that users interact with, which must be able to encode and handle the complexity of its intended usages. Beneath that you have the actual code, the primary source of truth, where the programmer is the control function for the overall system. The programmer's job, then, is two-fold: to manage the state of the code, and to ensure it produces an artifact which, in turn, correctly handles its designed use-case.
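Requisite variety can be made concrete with a toy model (the outcome function and the numbers here are invented for the demo, not drawn from Ashby): the environment throws one of D disturbances, the regulator answers with one of R responses, and an outcome survives their combination.

```python
def greedy_outcome_variety(disturbances, responses):
    """Count the distinct outcomes a simple regulator admits.

    Toy outcome model: (d - r) % disturbances, so a perfectly matched
    response cancels its disturbance exactly. The regulator greedily
    picks, for each disturbance, the response closest to canceling it.
    """
    outcomes = set()
    for d in range(disturbances):
        r = min(range(responses), key=lambda r: (d - r) % disturbances)
        outcomes.add((d - r) % disturbances)
    return len(outcomes)
```

With responses == disturbances, every disturbance is canceled and a single outcome survives (perfect regulation). Shrink the regulator's repertoire and distinct outcomes leak through no matter how cleverly it plays - the environment's variety overwhelms the feedback, which is the failure mode the essay is describing.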

The framing also explains to me why I've found the greatest utility in AI tooling in analysis tasks. When directed to do deep analyses of existing code-bases, reason about design tradeoffs, trace deadlocks or diagnose memory leaks, AI has been amazing. In cybernetic terms, AI extends the amount of variety I'm able to cope with, and allows me to better regulate the code-base. Yet when directed top-down with specs, no matter how detailed, AI replaces the regulator with its own loop, made from the same substrate as the thing being regulated - the model watching the code and the model producing the code are now the same kind of process, and control dissolves.

According to that first law, the programmer must be able to match the state-space complexity of the code itself in order to effectively wield it and adapt it over time. Over the years, approaches like Agile, YAGNI, and KISS have all tended toward optimizing for this kind of adaptability. The core idea is to keep the system simple and minimal enough that both the programmer and the software artifact can adapt as things unfold. On the other end of the spectrum, domain-driven design and spec-driven development emphasize explicit front-loading of complexity modeling. This way the operating modes of the system are well-understood beforehand and the programmer's role becomes more mechanical. Formal methods, meanwhile, are in their own special corner. They front-load, too, but are anchored to machine-verifiable proofs and are the opposite of a vibed-out markdown file.

Those readers who are familiar with my open-source work can probably guess which camp I belong to. I prefer smaller tools, built bottom-up, where the design, behavior and invariants can reasonably be held in your head. Designing software from the bottom-up means building the lower-level component pieces to be clean and orthogonal, so that they can be composed into larger structures. When done correctly, new features tend to write themselves as new patterns emerge. For instance, working on huey, things like retry delays, revocation, rescheduling, ETAs, rate-limiting, chords -- all these features came out as natural consequences from a core set of building blocks. They are robust because the underlying structures are robust and compose well.

So where does AI-written code live in this framework? To me it lands very firmly in the top-down world. There's been a recent wave of hype around "spec-driven" AI development, where you front-load all design requirements into markdown beforehand (see prompting converges with coding). But more importantly, in the two-layered model of control, AI tools eliminate the programmer-as-mediator of the system. All that exists is the artifact, produced by AI, and the specification - some of which exists in markdown, some of which is nothing but a dim spectre haunting a long-forgotten context window.

Quicksand Supersedes Brooks' Tarpit

When the programmer is removed as the control system for managing software complexity, what happens? AI evangelists would argue that control is retained; it has simply shifted to the network of prompts, code, tests, and agents. I would argue, based on my own experience, that this is actually where things begin to break down. An AI-modulated feedback loop inevitably becomes self-referential, and at some point the loop closes, because the thing anchoring it to reality - the programmer - can no longer keep up. Code gets written, reviewed, tested and modified using the same system that produced it. Because the speed at which AI produces code far exceeds what any person can reasonably review and fully understand, there's a kind of event horizon that gets crossed. The break occurs, and beyond that point the ownership of the code is implicitly transferred.

The consequences of an AI feedback loop go beyond the loss of that lower-layer of control (the system can be understood by human programmers). Errors in design have a way of compounding in AI-written code, so that you end up with many islands which are internally consistent, but do not compose well with one another, much less produce a coherent whole. Even in my tiny 1,000 LOC image viewer prototype, AI produced two completely independent thumbnail caching mechanisms, redundant image display widgets, and three nearly-identical implementations of a context-menu. When prompted to refactor, the result ended up being worse - skeletal remnants of the old APIs calling into "refactored" functions that held the same old logic grafted into new (redundant) functions.

Worse still, this drift leads to real costs. Every iteration consumes tokens, so the code-base is not merely accumulating noise but paying to accumulate it. The feedback loop becomes an epistemically unstable, economically self-reinforcing pit of quicksand. Maybe just a few more hours of token-spend will fix it... But the tens- or hundreds-of-thousands of lines of code sitting behind the software artifact resist attempts to refactor, because the refactor requires the same tools which introduced the problems in the first place.

I recently had a call with an AI-native developer where, with refreshing candor, he showed me his AI development process. Several times he mentioned, almost apologetically and without a trace of defensiveness, that he was not a "real developer," yet he had vibe-coded a real product. He expressed frustration with opaque token spend, noting he had paid several thousand dollars over his normal usage just to get his agents "un-stuck" and the looms spinning again. One of the features in his application was a complex visualization of a knowledge-graph, each node brightly illuminated against a background web of connections. But for reasons which remain obscure, the graph had a tendency to wiggle around and reconfigure itself, so that you were forced to mechanically mouse over nodes at random until the tooltip informed you that you'd arrived at the one you were interested in. How many more thousand-dollar re-ups would it take to get the graph to sit still and behave, I wondered?

Cocytus

photos/dore-cocytus-watch-your-step.jpg

Look how thou steppest!
Take heed thou do not trample with thy feet
The heads of the tired, miserable brothers!

Ashby's Law gives us a few ways a thermostat can fail: it doesn't sample the environment frequently enough, it models the wrong system, or the environment changes too quickly for it to respond effectively. AI coding tools manage to hammer at all three of these at once, and the programmer can no longer be an effective regulator of the system. Iterating becomes a closed circuit of AI driving AI, while code bloats, errors compound, and prompts drift. The artifact may appear correct, but the underlying code is such a mess that no one can be sure.

Anthropic, OpenAI, and Google all want us to believe that these tools will speed up and simplify the process of developing software. And in a way they do... right up until the event horizon is crossed and the loop closes. Beyond that point there is nothing but iteration upon iteration, token burn, loss of grounding, and increased spend. As the software system evolves and the code grows, there is an almost addictive sense of making progress - something the AI-native developer spoke about to me - but towards whose goal? In the end the system may run, the dashboard may continue to render, and nobody will be able to say why.