Tokens and Dreams

The one great principle of the English law is, to make business for itself.

photos/tokens-and-dreams-0.png

The theme running through my mind these last few months has been complexity within a software application. Forget coding. Sales is using AI to write all new code, so for us engineers there's not a hell of a lot to do besides think (and be there to hold the bag).

Last week I generated a CSV of some internal company metrics. With only a sentence or two of prompt, generative AI extrapolated meaningful signals, correlated changes in the data with external factors that were not explicitly expressed (e.g. interest rate hikes), and built a polished interactive dashboard with relevant visualizations. Never mind the fetishization of dark mode or the tell-tale slop signs (what is it with that fucking font?) - most people would never notice these; it's coded to look "modern" and it looks the part. I didn't even ask for the dashboard or any visualizations. Results like these seem magical. I believe this is how most people experience generative AI.

Around the same time, I ran another AI coding experiment on one of my smaller open-source libraries, scout, and the process was so riddled with flaws and subtle failures that I know I lost time (and sanity) by even attempting to let AI write code. You see, scout is just a dead-simple RESTful search server written as a flask app. This is not frontiers of engineering shit, it's about as mechanical as it gets in terms of implementation. As in my previous experiments with AI, the tool's strength in coding tasks was that it could trace logic bugs and find inconsistencies precisely and accurately. Its weakness was that as soon as it began to write code it produced tangles of weeds that had to be aggressively hand-pruned, because with each iteration the weeds had a tendency to spread... and spread.

photos/claude-and-his-bros-writing-my-code.jpg

Claude and his bros sit down to write me some code.

This is why I'm stuck. I'm stuck between competing narratives, each of which is exerting real business pressure. To push back when people's daily experience of AI is of the magical variety is seen as almost perverse. I find myself constantly wanting to say "No! I embrace these tools! This is not thinly-veiled self-preservation! Just hear me out..." But how do I express this when, at every turn, a new silver bullet for agent orchestration, automatic coding, automatic review, automatic thinking is being announced? Going further, as one concerned with code as the ground-truth of a system, how do I take the leap of faith and relinquish control to a swarm of agents and markdown files?

Cybernetics

Intelligent Machines, 1935.

These dynamics, the rise of agentic coding loops, and some unrelated UFO stuff had me thinking about cybernetics (of all things). Cybernetics emerged after WWII as a framework for studying control mechanisms in complex systems. The canonical example is a thermostat that kicks on heating or cooling when the temperature falls outside the specified range, and then returns to passive mode when back within the acceptable range. The central idea is feedback.
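The thermostat loop is simple enough to sketch directly. This is a toy model - the temperatures, setpoints and drift term are invented for illustration - but it shows the essential shape of feedback: observe, compare against an acceptable range, correct, repeat.

```python
def regulate(temp, low=68.0, high=72.0):
    """Return a corrective action only when the environment drifts
    outside the acceptable band; otherwise stay passive."""
    if temp < low:
        return "heat"
    elif temp > high:
        return "cool"
    return "idle"


def simulate(start_temp, drift, steps=5):
    """Drive the loop: the environment drifts, the regulator reacts,
    and the correction feeds back into the next observation."""
    temp, history = start_temp, []
    for _ in range(steps):
        action = regulate(temp)
        history.append((round(temp, 1), action))
        temp += drift          # the environment pushes one way
        if action == "heat":
            temp += 2.0        # feedback pushes back
        elif action == "cool":
            temp -= 2.0
    return history
```

The interesting part is not either function alone but the loop between them: the regulator only works because its output is fed back into the environment it observes.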

The "first law" of cybernetics, Ashby's Law of Requisite Variety, states that in order to control a system, the regulator (the feedback function) must be able to match the state-space complexity of the operating environment. Without adaptive control, the environment dominates the system and eventually leads to failure. In software engineering, I see a two-layered system: at the surface you have the software artifact itself, the application that users interact with, which must encode and handle the complexity of its intended usage. Beneath that you have the actual code, the primary source of truth, where the programmer is the control function for the overall system. The programmer's job, then, is two-fold: to manage the state of the code, and to ensure that code produces an artifact which, in turn, correctly handles its designed use-case.
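Ashby's Law can be made concrete with a deliberately crude toy. The disturbance names below are invented; the point is only the counting argument: a regulator can absorb exactly the disturbances it has a distinct response for, and any variety deficit leaks through as failure.

```python
def uncontrolled(disturbances, responses):
    """Count the disturbance types the regulator has no answer for."""
    return len(set(disturbances) - set(responses))


# A hypothetical operating environment with four kinds of disturbance.
env = {"load-spike", "bad-input", "clock-skew", "disk-full"}

# A regulator whose variety matches the environment's, and one that
# can only respond to half of what the environment throws at it.
rich_regulator = {"load-spike", "bad-input", "clock-skew", "disk-full"}
poor_regulator = {"load-spike", "bad-input"}
```

With the rich regulator, `uncontrolled(env, rich_regulator)` is 0 and control holds; with the poor one it is 2, and those two disturbance types go unregulated no matter how diligently the regulator handles the rest.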

This framing also explains why I've found the greatest utility in AI tooling in analysis tasks. When directed to do deep analyses of existing code-bases, reason about design tradeoffs, trace deadlocks or diagnose memory leaks, AI has been amazing. In cybernetic terms, AI extends the amount of variety I'm able to cope with, and allows me to better regulate the code-base. Yet when directed top-down with specs, no matter how detailed, AI replaces the regulator with its own loop, made from the same substrate as the thing being regulated - the model watching the code and the model producing the code are now the same kind of process, and control dissolves.

According to that first law, the programmer must be able to match the state-space complexity of the code itself in order to effectively wield it and adapt it over time. Over the years, approaches like Agile, YAGNI and KISS have all tended towards optimizing for this kind of adaptability. The core idea is to keep the system simple and minimal enough that both the programmer and the software artifact can adapt as things unfold. On the other end of the spectrum, domain-driven design and spec-driven development emphasize explicit front-loading of complexity modeling, so that the operating modes of the system are well-understood beforehand and the programmer's role becomes more mechanical. Formal methods, meanwhile, are in their own special corner: they front-load, too, but are anchored to machine-verifiable proofs and are the opposite of a vibed-out markdown file.

Those readers who are familiar with my open-source work can probably guess which camp I belong to. I prefer smaller tools, built bottom-up, where the design, behavior and invariants can reasonably be held in your head. Designing software from the bottom-up means building the lower-level component pieces to be clean and orthogonal, so that they can be composed into larger structures. When done correctly, new features tend to write themselves as new patterns emerge. For instance, working on huey, things like retry delays, revocation, rescheduling, ETAs, rate-limiting, chords -- all these features came out as natural consequences from a core set of building blocks. They are robust because the underlying structures are robust and compose well.
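To be clear, what follows is not huey's actual API - just a toy sketch of the bottom-up idea. Given one primitive, "enqueue a callable with an ETA", both delayed execution and retry-with-delay fall out as thin wrappers rather than new mechanisms.

```python
import heapq


class ToyQueue:
    """A single primitive: schedule a callable to run at a point in
    (logical) time. Everything else composes on top of it."""

    def __init__(self):
        self._heap, self._seq = [], 0

    def enqueue(self, task, eta=0):
        self._seq += 1  # tiebreaker so callables are never compared
        heapq.heappush(self._heap, (eta, self._seq, task))

    def run_until(self, now):
        """Run every task whose ETA has arrived, in ETA order."""
        ran = []
        while self._heap and self._heap[0][0] <= now:
            eta, _, task = heapq.heappop(self._heap)
            ran.append(task(eta))
        return ran


def with_retry(queue, fn, eta, retries=2, delay=10):
    """Retry-with-delay is not a new mechanism: on failure the task
    simply re-enqueues itself with a later ETA."""
    def attempt(now):
        try:
            return fn(now)
        except Exception:
            if retries:
                with_retry(queue, fn, now + delay, retries - 1, delay)
            return None
    queue.enqueue(attempt, eta)
```

A flaky task that fails once gets re-scheduled `delay` ticks later and succeeds on the second run of the queue - no retry machinery exists anywhere except as composition over `enqueue`.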

So where does AI-written code live in this framework? To me it lands very firmly in the top-down world. There's been a recent wave of hype around "spec-driven" AI development, where you front-load all design requirements into markdown beforehand (see prompting converges with coding). But more importantly, in the two-layered model of control, AI tools eliminate the programmer-as-mediator of the system. All that exists is the artifact, produced by AI, and the specification - some of which exists in markdown, some of which is nothing but a dim spectre haunting a long-forgotten context window.

Quicksand Supersedes Brooks' Tarpit

When the programmer is removed as the control system for managing software complexity, what happens? AI evangelists would argue that control is retained; it has simply shifted to the network of prompts, code, tests, and agents. I would argue, based on my own experience, that this is exactly where things begin to break down. An AI-modulated feedback loop inevitably becomes self-referential, and at some point the loop closes, because the thing anchoring it to reality - the programmer - can no longer keep up. Code gets written, reviewed, tested and modified using the same system that produced it. Because the speed at which AI produces code far exceeds what any person can reasonably review and fully understand, a kind of event horizon gets crossed. Beyond that point, ownership of the code is implicitly transferred.

The consequences of an AI feedback loop go beyond the loss of that lower layer of control (the guarantee that the system can be understood by human programmers). Errors in design have a way of compounding in AI-written code, so you end up with many islands which are internally consistent but do not compose well with one another, much less produce a coherent whole. Even in my tiny 1,000 LOC image viewer prototype, AI produced two completely independent thumbnail caching mechanisms, redundant image display widgets, and three nearly-identical implementations of a context-menu. When prompted to refactor, the result ended up worse: skeletal remnants of the old APIs calling into "refactored" functions that grafted the same old logic into new (redundant) functions.

Worse still, this drift carries real costs. Every iteration consumes tokens, so the code-base is not merely accumulating noise but paying to accumulate it. The feedback loop becomes an epistemically unstable, economically self-reinforcing pit of quicksand. Maybe just a few more hours of token-spend will fix it... But the tens or hundreds of thousands of lines of code sitting behind the software artifact resist attempts to refactor, because the refactor requires the same tools which introduced the problems in the first place.

I recently had a call with an AI-native developer where, with refreshing candor, he showed me his AI development process. Several times he mentioned, almost apologetically and without a trace of defensiveness, that he was not a "real developer", yet he had vibe-coded a real product. He expressed frustration with opaque token spend, noting he had paid several thousand dollars over his normal usage just to get his agents "un-stuck" and the looms spinning again. One of the features in his application was a complex visualization of a knowledge-graph, each node brightly illuminated against a background web of connections. But for reasons which remain obscure, the graph had a tendency to wiggle around and reconfigure itself, so that you were forced to mechanically mouse over nodes at random until the tooltip informed you that you'd arrived at the node you were after. How many more thousand-dollar re-ups would it take to get the graph to sit still and behave, I wondered?

Cocytus

photos/dore-cocytus-watch-your-step.jpg

Look how thou steppest!
Take heed thou do not trample with thy feet
The heads of the tired, miserable brothers!

Ashby's Law gives us a few ways a thermostat can fail: it doesn't sample the environment frequently enough, it models the wrong system, or the environment changes too quickly for it to respond effectively. AI coding tools manage to hammer at all three of these at once, and the programmer can no longer be an effective regulator of the system. Iterating becomes a closed circuit of AI driving AI, while code bloats, errors compound, and prompts drift. The artifact may appear correct, but the underlying code is such a mess that no one can be sure.

Anthropic, OpenAI, and Google all want us to believe that these tools will speed up and simplify the process of developing software. And in a way they do... right up until the event horizon is crossed and the loop closes. Beyond it there is nothing but iteration upon iteration: token burn, loss of grounding, increased spend. As the software system evolves and the code grows, there is an almost addictive sense of making progress - something the AI-native developer spoke about - but towards whose goal? In the end the system may run, the dashboard may continue to render, and nobody will be able to say why.
