Tokens and Dreams

The one great principle of the English law is, to make business for itself.

photos/tokens-and-dreams-0.png

The recurring theme running through my mind the last few months has been complexity within a software application. Forget coding. Sales is using AI to write all new code, so for us engineers there's not a hell of a lot to do besides think (and be there to hold the bag).

Last week I generated a CSV of some internal company metrics. With only a sentence or two of prompt, generative AI extrapolated meaningful signals, correlated changes in the data with external events that were not explicitly expressed (e.g. interest rate hikes), and built a polished interactive dashboard with relevant visualizations. Never mind the fetishization of dark-mode or the tell-tale slop signs (what is it with that fucking font?) - most people would never notice these; it's coded to look "modern" and it looks the part. I didn't even ask for the dashboard or any visualizations. Results like these seem magical. I believe this is how most people experience generative AI.

Around the same time, I ran another AI coding experiment on one of my smaller open-source libraries, scout, and the process was so riddled with flaws and subtle failures that I know I lost time (and sanity) by even attempting to let AI write code. You see, scout is just a dead-simple RESTful search server written as a flask app. This is not frontiers-of-engineering shit; it's about as mechanical as it gets in terms of implementation. As in my previous experiments with AI, the strength of the tool in coding tasks was that it could trace logic bugs and find inconsistencies precisely and accurately. The weakness was that as soon as it began to write code it produced tangles of weeds that had to be aggressively hand-pruned, because with each iteration the weeds had a tendency to spread... and spread.

photos/claude-and-his-bros-writing-my-code.jpg

Claude and his bros sit down to write me some code.

This is why I'm stuck. I'm stuck between competing narratives, each of which is exerting real business pressure. To push back when people's daily experience of AI is of the magical variety is seen as almost perverse. I find myself constantly wanting to say "No! I embrace these tools! This is not thinly-veiled self-preservation! Just hear me out..." But how do I express this when, at every turn, a new silver bullet for agent orchestration, automatic coding, automatic review, automatic thinking is being announced? Going further, as one concerned with code as ground-truth for a system, how do I take the leap of faith and relinquish control to a swarm of agents and markdown files?

cybernetics

Intelligent Machines, 1935.

These dynamics, the rise of agentic coding loops, and some unrelated UFO stuff had me thinking about cybernetics (of all things). Cybernetics emerged after WWII as a framework for studying control mechanisms in complex systems. The canonical example is a thermostat that kicks on heating or cooling when the temperature falls outside the specified range, and then returns to passive mode when back within the acceptable range. The central idea is feedback.
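The thermostat loop above is simple enough to sketch in a few lines. This is purely illustrative - the function names, thresholds, and the crude one-degree correction are my own inventions, not any real thermostat's logic - but it shows the essential shape of feedback: sense, compare against a goal, correct, repeat.

```python
# A minimal thermostat feedback loop: sense, compare, correct.
# All names and thresholds here are illustrative inventions.

def thermostat_step(temp, low=18.0, high=22.0):
    """Return the corrective action for one sampling of the environment."""
    if temp < low:
        return "heat"   # too cold: push the system back up
    elif temp > high:
        return "cool"   # too warm: push the system back down
    return "idle"       # within range: passive mode

def simulate(temp, drift, steps):
    """Run the loop: each action nudges the temperature back toward range."""
    history = []
    for _ in range(steps):
        action = thermostat_step(temp)
        history.append((round(temp, 1), action))
        temp += {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action] + drift
    return history
```

Run with a starting temperature well below range and zero drift, the loop heats until it re-enters the acceptable band and then goes idle - negative feedback doing its one job.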

The "first law" of cybernetics, Ashby's Law of Requisite Variety, states that in order to control a system, the regulating function (feedback) must be able to match the state-space complexity of the operating environment. The idea is that without adaptive control, the environment dominates the system and eventually leads to failure. In software engineering, I see a two-layered system where at the surface you have the software artifact itself, the application that users interact with. It must be able to encode and handle the complexity of it's intended usages. And then beneath that you have the actual code, the primary source of truth, where it is the programmer who is the control function for the overall system. The programmer's job, then, is two-fold: to manage the state of the code so that it can produce an artifact which, in turn, correctly handles its designed use-case.

The framing also explains to me why I've found the greatest utility in AI tooling in analysis tasks. When directed to do deep analyses on existing code-bases, reason about design tradeoffs, trace deadlocks or diagnose memory leaks, AI has been amazing. In cybernetic terms, AI extends the amount of variety I'm able to cope with, and allows me to better regulate the code-base. Yet when directed top-down with specs, no matter how detailed, AI replaces the regulator with its own loop, made from the same substrate as the thing being regulated - the model watching the code and the model producing the code are now the same kind of process, and control dissolves.

According to that first law, the programmer must be able to match the state-space complexity of the code itself, in order to be able to effectively wield it and adapt it over time. Over the years, approaches like Agile, YAGNI, and KISS have all tended towards optimizing for this kind of adaptability. The core idea is to keep the system simple and minimal enough that both the programmer and the software artifact can adapt as things unfold. On the other end of the spectrum, domain-driven design and spec-driven development emphasize explicit front-loading of complexity modeling. This way the operating modes of the system are well-understood beforehand and the programmer's role becomes more mechanical. Formal methods, meanwhile, are in their own special corner. They front-load, too, but are anchored to machine-verifiable proofs and are the opposite of a vibed-out markdown file.

Those readers who are familiar with my open-source work can probably guess which camp I belong to. I prefer smaller tools, built bottom-up, where the design, behavior and invariants can reasonably be held in your head. Designing software from the bottom-up means building the lower-level component pieces to be clean and orthogonal, so that they can be composed into larger structures. When done correctly, new features tend to write themselves as new patterns emerge. For instance, working on huey, things like retry delays, revocation, rescheduling, ETAs, rate-limiting, chords -- all these features came out as natural consequences from a core set of building blocks. They are robust because the underlying structures are robust and compose well.
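To make "features write themselves" concrete, here is a sketch of bottom-up composition. To be clear, this is not huey's actual API - the `Task` and `Scheduler` names and fields are hypothetical - but it shows the pattern: one small primitive, a schedulable task record ordered by ETA, from which retries and retry delays fall out as "enqueue again with a later ETA" rather than as new machinery.

```python
# Hypothetical sketch (not huey's real API): a single core primitive,
# a task with an ETA on a min-heap, from which retry/delay features
# emerge by composition rather than by adding new mechanisms.
import heapq
import time

class Task:
    def __init__(self, fn, eta=None, retries=0, retry_delay=0.0):
        self.fn = fn
        self.eta = eta if eta is not None else time.time()
        self.retries = retries
        self.retry_delay = retry_delay

class Scheduler:
    def __init__(self):
        self._queue = []  # min-heap ordered by (eta, tie-breaker)

    def enqueue(self, task):
        heapq.heappush(self._queue, (task.eta, id(task), task))

    def run_due(self, now=None):
        """Run everything whose ETA has passed. A failure with retries left
        simply re-enqueues the same task with a later ETA - retry is a
        consequence of the core structure, not a separate feature."""
        now = now if now is not None else time.time()
        results = []
        while self._queue and self._queue[0][0] <= now:
            _, _, task = heapq.heappop(self._queue)
            try:
                results.append(task.fn())
            except Exception:
                if task.retries > 0:
                    task.retries -= 1
                    task.eta = now + task.retry_delay
                    self.enqueue(task)
        return results
```

Notice that ETAs, retries, and retry delays all reduce to manipulating one field on one structure - the kind of orthogonality that, when you get it right, lets the next feature compose out of the pieces you already have.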

So where does AI-written code live in this framework? To me it lands very firmly in the top-down world. There's been a recent wave of hype around "spec-driven" AI development, where you front-load all design requirements into markdown beforehand (see prompting converges with coding). But more importantly, in the two-layered model of control, AI tools eliminate the programmer-as-mediator of the system. All that exists is the artifact, produced by AI, and the specification - some of which exists in markdown, some of which is nothing but a dim spectre haunting a long-forgotten context window.

Quicksand Supersedes Brooks' Tarpit

When the programmer is removed as the control system for managing software complexity, what happens? AI evangelists would argue that control is retained; it has simply shifted to the network of prompts, code, tests, and agents. I would argue, based on my own experience, that this is actually where things begin to break down. An AI-modulated feedback loop inevitably becomes self-referential and at some point the loop closes, because the thing anchoring it to reality - the programmer - can no longer keep up. Code gets written, reviewed, tested and modified using the same system that produced it. Because the speed at which AI produces code far exceeds what any person can reasonably review and fully understand, there's a kind of event horizon that gets crossed. The break occurs and beyond that point the ownership of the code is implicitly transferred.

The consequences of an AI feedback loop go beyond the loss of that lower-layer of control (the system can be understood by human programmers). Errors in design have a way of compounding in AI-written code, so that you end up with many islands which are internally consistent, but do not compose well with one another, much less produce a coherent whole. Even in my tiny 1,000 LOC image viewer prototype, AI produced two completely independent thumbnail caching mechanisms, redundant image display widgets, and three nearly-identical implementations of a context-menu. When prompted to refactor, the result ended up being worse - skeletal remnants of the old APIs calling into "refactored" functions that held the same old logic grafted into new (redundant) functions.

Worse still, this drift leads to real costs. Every iteration consumes tokens, so the code-base is not merely accumulating noise but paying to accumulate it. The feedback loop becomes an epistemically unstable, economically self-reinforcing pit of quicksand. Maybe just a few more hours of token-spend will fix it... But the tens- or hundreds-of-thousands of lines of code sitting behind the software artifact resist attempts to refactor, because the refactor requires the same tools which introduced the problems in the first place.

I recently had a call with an AI-native developer where, with refreshing candor, he showed me his AI development process. Several times he mentioned, almost apologetically and without a trace of defensiveness, that he was not a "real developer", yet he had vibe-coded a real product. He expressed frustration with opaque token spend, and noted he had paid several thousand dollars over his normal usage just to get his agents "un-stuck" and the looms spinning again. One of the features in his application was a complex visualization of a knowledge-graph, each node brightly illuminated against a background web of connections. But for reasons which remain obscure, the graph had a tendency to wiggle around and reconfigure itself, so that one was forced to mechanically mouse over nodes at random until the tooltip informed you that you'd arrived at the node you were interested in. How many more thousand-dollar re-ups would it take to get the graph to sit still and behave, I wondered?

Cocytus

photos/dore-cocytus-watch-your-step.jpg

Look how thou steppest!
Take heed thou do not trample with thy feet
The heads of the tired, miserable brothers!

Ashby's Law gives us a few ways a thermostat can fail: it doesn't sample the environment frequently enough, it models the wrong system, or the environment changes too quickly for it to respond effectively. AI coding tools manage to hammer at all three of these at once, and the programmer can no longer act as an effective regulator of the system. Iterating becomes a closed circuit of AI driving AI, while code bloats, errors compound, and prompts drift. The artifact may appear correct, but the underlying code is such a mess that no one can be sure.

Anthropic, OpenAI, and Google all want us to believe that these tools will speed up and simplify the process of developing software. And in a way they do... right up until the event-horizon is crossed and the loop closes. Beyond it there is nothing but iteration upon iteration: token burn, loss of grounding, increased spend. As the software system evolves and code grows, there is an almost addictive sense of making progress - something the AI-native developer spoke about to me - but towards whose goal? In the end the system may run, the dashboard may continue to render, and nobody will be able to say why.
