Formula 1 News

Formula 1® - The Official F1® Website

Catch the action from Day 1 of the second Bahrain test

George Russell and Mercedes topped the timesheets as the second and final pre-season test of 2026 kicked off in Bahrain – leading the way from McLaren’s Oscar Piastri and Ferrari’s Charles Leclerc.

What we learned from Day 1 of the second Bahrain test

F1.com's Lawrence Barretto looks back on the opening day of the second official 2026 pre-season test in Bahrain.

10 quiz questions on the latest Formula 1 news

Test your knowledge of the Formula 1 news from the past seven days...

Russell tops Day 1 of second Bahrain test

George Russell has ended the first day of the second 2026 pre-season test in Bahrain on top, the Mercedes driver setting the pace ahead of McLaren’s Oscar Piastri and the Ferrari of Charles Leclerc.

Slashdot

News for nerds, stuff that matters

Vermont EV Buses Prove Unreliable For Transportation This Winter

An anonymous reader writes: Electric buses are proving unreliable this winter for Vermont's Green Mountain Transit: the buses need temperatures above 41 degrees Fahrenheit to charge, and a battery recall has made them a fire hazard that cannot be charged inside a garage.

Larry Behrens, spokesman for the energy workers advocacy group Power the Future, told The Center Square: "Taxpayers were sold an $8 million 'solution' that can't operate in cold weather when the home for these buses is in New England."

"We're beyond the point where this looks like incompetence and starts to smell like fraud," Behrens said.

"When government rushes money out the door to satisfy green mandates, basic questions about performance, safety, and value for taxpayers are always pushed aside," Behrens said. "Americans deserve to know who approved this purchase and why the red flags were ignored."

Clayton Clark, general manager at Green Mountain Transit (GMT), told The Center Square that "the federal government provides public transit agencies with new buses through a competitive grant application process, and success is not a given."

Read more of this story at Slashdot.

Linus Torvalds on How Linux Went From One-Man Show To Group Effort

Linus Torvalds has told The Register how Linux went from a solo hobby project on a single 386 PC in Helsinki to a genuinely collaborative effort, and the path involved crowdsourced checks, an FTP mirror at MIT, and a licensing decision that opened the floodgates.

Torvalds released the first public snapshot, Linux 0.02, on October 5, 1991, on a Finnish FTP server -- about 10,000 lines of code that he had cross-compiled under Minix. He originally wanted to call it "Freax," but his friend Ari Lemmke, who set up the server, named the directory "Linux" instead. Early contributor Theodore Ts'o set up the first North American mirror on his VAXstation at MIT, since the sole 64 kbps link between Finland and the US made downloads painful. That mirror gave developers in North America their first practical access to the kernel.

Another early developer, Dirk Hohndel, recalled that Torvalds initially threw away incoming patches and reimplemented them from scratch -- a habit he eventually dropped because it did not scale. When Torvalds could not afford to upgrade his underpowered 386, developer H. Peter Anvin collected checks from contributors through his university mailbox and wired the funds to Finland, covering the international banking fees himself. Torvalds got a 486DX/2. In 1992, he moved the kernel to the GPL, and the first full distributions appeared in 1992-1993, turning Linux from a kernel into installable systems.

Read more of this story at Slashdot.

Microsoft Says Bug Causes Copilot To Summarize Confidential Emails

Microsoft says a Microsoft 365 Copilot bug has been causing the AI assistant to summarize confidential emails since late January, bypassing data loss prevention (DLP) policies that organizations rely on to protect sensitive information. From a report: According to a service alert seen by BleepingComputer, this bug (tracked under CW1226324 and first detected on January 21) affects the Copilot "work tab" chat feature, which incorrectly reads and summarizes emails stored in users' Sent Items and Drafts folders, including messages that carry confidentiality labels explicitly designed to restrict access by automated tools.

Copilot Chat (short for Microsoft 365 Copilot Chat) is the company's AI-powered, content-aware chat that lets users interact with AI agents. Microsoft began rolling out Copilot Chat to Word, Excel, PowerPoint, Outlook, and OneNote for paying Microsoft 365 business customers in September 2025.

Read more of this story at Slashdot.

WordPress Gets AI Assistant That Can Edit Text, Generate Images and Tweak Your Site

WordPress has started rolling out an AI assistant built into its site editor and media library that can edit and translate text, generate and edit images through Google's Nano Banana model, and make structural changes to sites like creating new pages or swapping fonts.

Users can also invoke the assistant by tagging "@ai" in block notes, a commenting feature added to the site editor in December's WordPress 6.9 update. The tool is opt-in -- users need to toggle on "AI tools" in their site settings -- though sites originally created using WordPress's AI website builder, launched last year, will have it enabled by default.

Read more of this story at Slashdot.

Leaked Email Suggests Ring Plans To Expand 'Search Party' Surveillance Beyond Dogs

Ring's AI-powered "Search Party" feature, which links neighborhood cameras into a networked surveillance system to find lost dogs, was never intended to stop at pets, according to an internal email from founder Jamie Siminoff obtained by 404 Media.

Siminoff told employees in early October, shortly after the feature launched, that Search Party was introduced "first for finding dogs" and that the technology would eventually help "zero out crime in neighborhoods." The on-by-default feature faced intense backlash after Ring promoted it during a Super Bowl ad. Ring has since also rolled out "Familiar Faces," a facial recognition tool that identifies friends and family on a user's camera, and "Fire Watch," an AI-based fire alert system.

A Ring spokesperson told the publication Search Party does not process human biometrics or track people.

Read more of this story at Slashdot.

thexiffy

Recent Last.fm tracks from thexiffy.

Sopor Aeternus & The Ensemble of Shadows - Deathhouse

Sopor Aeternus & The Ensemble of Shadows

Anil Dash

A blog about making culture. Since 1999.

How did we end up threatening our kids’ lives with AI?

I have to begin by warning you about the content in this piece; while I won’t be dwelling on any specifics, this will necessarily be a broad discussion about some of the most disturbing topics imaginable. I resent that I have to give you that warning, but I’m forced to because of the choices that the Big AI companies have made that affect children. I don’t say this lightly. But this is the point we must reckon with if we are having an honest conversation about contemporary technology.

Let me get the worst of it out of the way right up front, and then we can move on to understanding how this happened. ChatGPT has repeatedly produced output that encouraged and incited children to end their own lives. Grok’s AI generates sexualized imagery of children, which the company makes available commercially to paid subscribers.

It used to be that encouraging children to self-harm, or producing sexualized imagery of children, were universally agreed upon as being amongst the worst things one could do in society. These were among the rare truly non-partisan, unifying moral agreements that transcended all social and cultural barriers. And now, some of the world’s biggest and most powerful companies, led by a few of the wealthiest and most powerful men who have ever lived, are violating these rules, for profit, and not only is there little public uproar, it seems as if very few have even noticed.

How did we get here?

The ideas behind a crisis

A perfect storm of factors has combined to lead us toward the worst-case scenario for AI. There is now an entire market of commercial products that attack our children, and to understand why, we need to look at the mindset of the people who are creating those products. Here are some of the key motivations that drove them to this point.

1. Everyone feels desperately behind and wants to catch up

There’s an old adage from Intel’s founder Andy Grove that people in Silicon Valley used to love to quote: “Only the paranoid survive”. This attitude persists, with leaders absolutely convinced that everything is a zero-sum game, and any perceived success by another company is an existential threat to one’s own future.

At Google, the company’s researchers had published the fundamental paper underlying the creation of LLMs in 2017, but hadn’t capitalized on that invention by making a successful consumer product by 2022, when OpenAI released ChatGPT. Within Google leadership (and amongst the big tech tycoons), the fact that OpenAI was able to have a hit product with this technology was seen as a grave failure by Google, despite the fact that even OpenAI’s own leadership hadn’t expected ChatGPT to be a big hit upon launch. A crisis ensued within Google in the months that followed.

These kinds of industry narratives carry more weight than reality in driving decision-making and investment, and the refrain of “move fast and break things” is still burned into people’s heads, so the end result these days is that shipping any product is okay, as long as it helps you catch up to your competitor. Thus, since Grok is seriously behind its competitors in usage, and of course xAI CEO Elon Musk is always desperate for attention, they have every incentive to ship a product with a catastrophically toxic design — including one that creates abusive imagery.

2. Accountability is “woke” and must be crushed

Another fundamental article of faith in the last decade amongst tech tycoons (and their fanboys) is that woke culture must be destroyed. They have an amorphous and ever-evolving definition of what “woke” means, but it always includes any measures of accountability. One key example is the trust and safety teams that had been trying to keep all of the major technology platforms from committing the worst harms that their products were capable of producing.

Here, again, Google provides us with useful context. The company had one of the most mature and experienced AI safety research teams in the world at the time when the first paper on the transformer model (LLMs) was published. Right around the time that paper was published, Google also saw one of its engineers publish a sexist screed on gender essentialism designed to bait the company into becoming part of the culture war, which it ham-handedly stumbled directly into. Like so much of Silicon Valley, Google’s leadership did not understand that these campaigns are always attempts to game the refs, and they let themselves be played by these bad actors; within a few years, a backlash had built and they began cutting everyone who had warned about risks around the new AI platforms, including some of the most credible and respected voices in the industry on these issues.

Eliminating those roles was considered vital because these people were blamed for having “slowed down” the company with their silly concerns about things like people’s lives, or the health of the world’s information ecosystem. A lot of the wealthy execs across the industry were absolutely convinced that the reason Google had ended up behind in AI, despite having invented LLMs, was because they had too many “woke” employees, and those employees were too worried about esoteric concerns like people’s well-being.

It does not ever enter the conversation that (1) executives are accountable for the failures that happen at a company, (2) Google had a million other failures during these same years (including those countless redundant messaging apps they kept launching!) that may have had far more to do with their inability to seize the market opportunity, and (3) it may be a good thing that Google didn’t rush to market with a product that tells children to harm themselves, and those workers who ended up being fired may have saved Google from that fate!

3. Product managers are veterans of genocidal regimes

The third fact that enabled the creation of pernicious AI products is more subtle, but has more wide-ranging implications once we face it. In the tech industry, product managers are often quietly amongst the most influential figures in determining the influence a company has on culture. (At least until all the product managers are replaced by an LLM being run by their CEO.) At their best, product managers are the people who decide exactly what features and functionality go into a product, synthesizing and coordinating between the disciplines of engineering, marketing, sales, support, research, design, and many other specialties. I’m a product person, so I have a lot of empathy for the challenges of the role, and a healthy respect for the power it can often hold.

But in today’s Silicon Valley, a huge number of the people who act as product managers spent the formative years of their careers in companies like Facebook (now Meta). If those PMs now work at OpenAI, then the moments when they were learning how to practice their craft were spent at a company that made products that directly enabled and accelerated a genocide. That’s not according to me, that’s the opinion of multiple respected international human rights organizations. If you chose to go work at Facebook after the Rohingya genocide had happened, then you were certainly not going to learn from your manager that you should not make products that encourage or incite people to commit violence.

Even when they’re not enabling the worst things in the world, product managers who spend time in these cultures learn more destructive habits, like strategic line-stepping. This is the habit of repeatedly violating their own policies on things like privacy and security, or allowing users to violate platform policies on things like abuse and harassment, and then feigning surprise when the behavior is caught. After sending out an obligatory apology, they repeat the behavior a few more times until either everyone gets so used to it that they stop complaining, or the continued bad actions drive off the good people, which makes it seem to the media or outside observers that the problem has gone away. Then, they amend their terms of service to say that the formerly-disallowed behavior is now permissible, so that in the future they can say, “See? It doesn’t violate our policy.”

Because so many people in the industry now have these kinds of credentials on their LinkedIn profiles, their peers can’t easily raise many kinds of ethical concerns when designing a product without implicitly condemning their coworkers. This becomes even more fraught when someone might unknowingly be offending one of their leaders. As a result, it becomes a race to the bottom, where the person with the worst ethical standards on the team sets the standards to which everyone designs their work. So if the prevailing sentiment about creating products at a company is that having millions of users just inevitably means killing some of them (“you’ve got to break a few eggs to make an omelet”), there can be risk in contradicting that idea. Pointing out that, in fact, most platforms on the internet do not harm users in these ways, and that their creators work very hard to ensure that tech products don’t present a risk to their communities, can end up being a career-limiting move.

4. Compensation is tied to feature adoption

This is a more subtle point, but it explains a lot of the incentives and motivations behind so much of what happens with today’s major technology platforms. When these companies launch new features, the success of those rollouts is measured and often tied to the individual performance evaluations of the people who were responsible for them. The measurements use metrics like “KPIs” (key performance indicators) or similar corporate acronyms, all of which basically boil down to being rewarded for whether the thing you made was adopted by users in the real world. In the abstract, it makes sense to reward employees based on whether the things they create actually succeed in the market, so that their work is aligned with whatever makes the company succeed.

In practice, people’s incentives and motivations get incredibly distorted over time by these kinds of gamified systems being used to measure their work, especially as it becomes a larger and larger part of their compensation. If you’ve ever wondered why some intrusive AI feature that you never asked for is jumping in front of your cursor when you’re just trying to do a normal task the same way that you’ve been doing it for years, it’s because someone’s KPI was measuring whether you were going to click on that AI button. Much of the time, the system doesn’t distinguish between “I accidentally clicked on this feature while trying to get rid of it” and “I enthusiastically chose to click on this button”. This is what I mean when I say we need an internet of consent.
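To make that distinction concrete, here is a minimal, purely hypothetical TypeScript sketch; every name in it is invented and it is not based on any real product's telemetry. It contrasts a metric that counts every click as adoption with one that tries to account for intent:

    // Hypothetical adoption metrics. All names are invented for illustration.
    type FeatureEvent = {
      feature: string;                  // e.g. "ai-assist-popup"
      action: "clicked" | "dismissed";  // what the user did with the prompt
      dwellMs: number;                  // how long they stayed in the feature
    };

    // Naive KPI: every click counts as "adoption", including accidental clicks
    // made while trying to get the feature out of the way.
    function naiveAdoptionScore(events: FeatureEvent[]): number {
      return events.filter(e => e.action === "clicked").length;
    }

    // A consent-aware variant: only count clicks where the user actually stayed
    // and used the feature, and subtract explicit dismissals.
    function consentAwareScore(events: FeatureEvent[]): number {
      const engaged = events.filter(e => e.action === "clicked" && e.dwellMs > 5000).length;
      const dismissed = events.filter(e => e.action === "dismissed").length;
      return engaged - dismissed;
    }

Under the first metric, an accidental click registers the same as enthusiastic use; under the second, it does not, which changes what behavior the compensation system ends up rewarding.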

But you see the grim end game of this kind of thinking, and these kinds of reward systems, when kids’ well-being is on the line. Someone’s compensation may well be tied to a metric or measurement of “how many people used the image generation feature?” without regard to whether that feature was being used to generate imagery of children without consent. Getting a user addicted to a product, even to the point where they’re getting positive reinforcement when discussing the most self-destructive behaviors, will show up in a measurement system as increased engagement — exactly the kind of behavior that most compensation systems reward employees for producing.

5. Their cronies have made it impossible to regulate them

A strange reality of the United States’ sad decline into authoritarianism is that it is presently impossible to create federal regulation to stop the harms that these large AI platforms are causing. Most Americans are not familiar with this level of corruption and crony capitalism, but Trump’s AI Czar David Sacks has an unbelievably broad number of conflicts of interest from his investments across the AI spectrum; it’s impossible to know how many, because nobody in the Trump administration follows even the basic legal requirements around disclosure or divestment, and the entire corrupt Republican Party in Congress refuses to do its constitutionally-required duty to hold the executive branch accountable for these failures.

As a result, at the behest of the most venal power brokers in Silicon Valley, the Trump administration is insisting on trying to stop all AI regulations at the state level, and of course will have the collusion of the captive Supreme Court to assist in this endeavor. Because they regularly have completely unaccountable and unrecorded conversations, the leaders of the Big AI companies (all of whom attended the Inauguration of this President and support the rampant lawbreaking of this administration with rewards like open bribery) know that there will be no constraints on the products that they launch, and no punishments or accountability if those products cause harm.

All of the pertinent regulatory bodies, from the Federal Trade Commission to the Consumer Financial Protection Bureau, have had their competent leadership replaced by Trump cronies as well, meaning that their agendas are captured and they will not be able to protect citizens from these companies, either.

There will, of course, still be attempts at accountability at the state and local level, and these will wind their way through the courts over time. But the harms will continue in the meantime. And there will be attempts to push back on the international level, both from regulators overseas, and increasingly by governments and consumers outside the United States refusing to use technologies developed in this country. But again, these remedies will take time to mature, and in the meantime, children will still be in harm’s way.

What about the kids?

It used to be such a trope of political campaigns and social movements to say “what about the children?” that it is almost beyond parody. I personally have mocked the phrase because it’s so often deployed in bad faith, to short-circuit complicated topics and suppress debate. But this is that rare circumstance where things are actually not that complicated. Simply discussing the reality of what these products do should be enough.

People will say, “but it’s inevitable! These products will just have these problems sometimes!” And that is simply false. There are already products on the market that don’t have these egregious moral failings. More to the point, even if it were true that these products couldn’t exist without killing or harming children — then that’s a reason not to ship them at all.

If it is, indeed, absolutely unavoidable that, for example, ChatGPT has to advocate violence, then let’s simply add a rule to the code that changes the object of the violence to Sam Altman. Or your boss. I suspect that if, suddenly, the chatbot deployed to every laptop at your company had a chance of suggesting that people cause bodily harm to your CEO, people would suddenly figure out a way to fix that bug. But somehow, when it makes that suggestion about your 12-year-old, this is an insurmountably complex challenge.

We can expect things to get worse before they get better. OpenAI has already announced that it is going to be allowing people to generate sexual content on its service for a fee later this year. To their credit, when doing so, they stated their policy prohibiting the use of the service to generate images that sexualize children. But the service they’re using to ensure compliance, Thorn, whose product is meant to help protect against such content, was conspicuously silent about Musk’s recent foray into generating sexualized imagery of children. An organization whose entire purpose is preventing this kind of material, where every public message they have put out is decrying this content, somehow falls mute when the world’s richest man carries out the most blatant launch of this capability ever? If even the watchdogs have lost their voice, how are regular people supposed to feel like they have a chance at fighting back?

And then, if no one is reining in OpenAI, and they have to keep up with their competitors, and the competition isn’t worried about silly concerns like ethics, and the other platforms are selling child exploitation material, and laws aren’t being enforced, and all of the product managers making these products are Meta alumni who learned to make decisions there and know they can just keep gaming the terms of service if they need to… well, will you be surprised?

How do we move forward?

It should be an industry-stopping scandal that this is the current state of two of the biggest players in the most-hyped, most-funded, most consequential area of the entire business world right now. It should be unfathomable that people are thinking about deploying these technologies in their businesses — in their schools! — or integrating these products into their own platforms. And yet I would bet that the vast majority of people using these products have no idea about these risks or realities of these platforms at all. Even the vast majority of people who work in tech probably are barely aware.

What’s worse is that the majority of people I’ve talked to in tech who do know about this have not taken a single action about it. Not one.

I’ll be following up with an entire list of suggestions about actions we can take, and ways we can push for accountability for the bad actors who are endangering kids every day. In the meantime, reflect for yourself about this reality. Who will you share this information with? How will this change your view of what these companies are? How will this change the way you make decisions about using these products? Now that you know: what will you do?

kottke.org

Jason Kottke's weblog, home of fine hypertext products

A Sense of Getting Closer

With music by Max Cooper and visuals by Conner Griffith, A Sense of Getting Closer is a music video that was inspired by a quote submitted to Cooper’s On Being project:

I have a sense of getting closer to something which my life depends on. I can sense it but I cannot tell if I should be excited or terrified about what will happen.

Mesmerizing. Like literally, given that it’s based on “a hypnotic light show we can’t look away from, yet we know is made up of low-quality content fed to us by engagement algorithms.”

Tags: Conner Griffith · Max Cooper · mesmerizing · music · video

Wel.nl

Read less, know more.

Why the dinosaurs went extinct but birds did not

When the dinosaurs went extinct 66 million years ago, one group of their descendants survived: the birds. They made it because, over tens of millions of years, they had become small again. Today there are 10,000 bird species.

That makes birds the most diverse group of four-limbed animals in the world. Dinosaurs themselves were once small: 230 million years ago most of them weighed between 10 and 35 kilos, about the size of an average dog.

But they soon grew larger. Within 30 million years they weighed 10,000 kilos, and later still some species reached 35 meters in length and 90,000 kilos in weight. The dinosaurs eventually stopped growing but kept their size, except for the maniraptora. Part of this feathered group became small again, and only the animals that weighed no more than about a kilo survived the asteroid impact that wiped out the dinosaurs. Those were the birds. Because they were so small, they could adapt far more easily to the changed conditions than the enormous dinosaurs with their correspondingly enormous appetites. The ancestors of birds had initially become smaller because it made them better fliers: less weight means flight costs less energy.

Source(s): Science


The Register

Biting the hand that feeds IT — Enterprise Technology News and Analysis

Fraudster hacked hotel system, paid 1 cent for luxury rooms, Spanish cops say

'First time we have detected a crime using this method,' cops say

Spanish police arrested a hacker who allegedly manipulated a hotel booking website, allowing him to pay one cent for luxury hotel stays. He also raided the mini-bars and didn't settle some of those tabs, police say…

bl brutalism IX

conspectus_bs posted a photo:

bl brutalism IX

Fomapan 100 with Mamiya 645 Pro and Sekor 80 mm

Found Photo, The Isiah Calloway Collection

Thomas Hawk posted a photo:

Found Photo, The Isiah Calloway Collection

Found Kodachrome Slide

Thomas Hawk posted a photo:

Found Kodachrome Slide

MetaFilter

The past 24 hours of MetaFilter

Oh No! *pop*

February 14th marked the 35th anniversary of the launch of a little puzzle game for the Amiga that would rocket to popularity on the backs of its little green-headed hero-victims. Lemmings helped secure the fortunes of its creators, DMA Design (who would go on to make a little franchise called Grand Theft Auto). For its anniversary, many of the original team sat down with YouTuber onaretrotip to talk about the creation of the game and its legacy.

Though the game was built on the core idea of lemmings being suicidal little guys, it turns out that most of what people know about actual lemmings is completely false.

‘As a candidate cabinet member, you are entirely responsible for your own integrity,’ states the handbook for cabinet members

Embellishing her CV cost prospective state secretary Nathalie van Berkel both the intended post and her seat in parliament. What about the 27 other cabinet members being sworn in on Monday? How thorough was their screening?


Rijnmond - Nieuws

Today's latest news about Rotterdam, Feyenoord, traffic and the weather in the Rijnmond region

Car found that may have hit runner Lisa; driver not yet caught

Police have seized a red Mini Cooper that may have been involved in the collision that injured 23-year-old Lisa from Hoek van Holland. The woman was hit on Monday while out for a run in 's-Gravenzande, and the driver then drove off. It is still unclear who was driving the red Mini; the police are still investigating.