Formula 1 News

Formula 1® - The Official F1® Website

10 quiz questions on the latest Formula 1 news

Test your knowledge of the Formula 1 news from the past seven days...

Russell tops Day 1 of second Bahrain test

George Russell has ended the first day of the second 2026 pre-season test in Bahrain on top, the Mercedes driver setting the pace ahead of McLaren’s Oscar Piastri and the Ferrari of Charles Leclerc.

thexiffy

Recent tracks from thexiffy on Last.fm.

Sopor Aeternus & The Ensemble of Shadows - Deathhouse

Sopor Aeternus & The Ensemble of Shadows

King Hannah - Meal Deal

King Hannah

Anil Dash

A blog about making culture. Since 1999.

How did we end up threatening our kids’ lives with AI?

I have to begin by warning you about the content in this piece; while I won’t be dwelling on any specifics, this will necessarily be a broad discussion about some of the most disturbing topics imaginable. I resent that I have to give you that warning, but I’m forced to because of the choices that the Big AI companies have made that affect children. I don’t say this lightly. But this is the point we must reckon with if we are having an honest conversation about contemporary technology.

Let me get the worst of it out of the way right up front, and then we can move on to understanding how this happened. ChatGPT has repeatedly produced output that encouraged and incited children to end their own lives. Grok’s AI generates sexualized imagery of children, which the company makes available commercially to paid subscribers.

It used to be that encouraging children to self-harm, or producing sexualized imagery of children, were universally agreed upon as being amongst the worst things one could do in society. These were among the rare truly non-partisan, unifying moral agreements that transcended all social and cultural barriers. And now, some of the world’s biggest and most powerful companies, led by a few of the wealthiest and most powerful men who have ever lived, are violating these rules, for profit, and not only is there little public uproar, it seems as if very few have even noticed.

How did we get here?

The ideas behind a crisis

A perfect storm of factors has combined to lead us toward the worst-case scenario for AI. There is now an entire market of commercial products that attack our children, and to understand why, we need to look at the mindset of the people who are creating those products. Here are some of the key motivations that drove them to this point.

1. Everyone feels desperately behind and wants to catch up

There’s an old adage from former Intel CEO Andy Grove that people in Silicon Valley used to love to quote: “Only the paranoid survive”. This attitude persists, with leaders absolutely convinced that everything is a zero-sum game, and that any perceived success by another company is an existential threat to one’s own future.

At Google, the company’s researchers had published the fundamental paper underlying the creation of LLMs in 2017, but they hadn’t capitalized on that invention with a successful consumer product by 2022, when OpenAI released ChatGPT. Within Google leadership (and amongst the big tech tycoons), the fact that OpenAI was able to have a hit product with this technology was seen as a grave failure on Google’s part, despite the fact that even OpenAI’s own leadership hadn’t expected ChatGPT to be a big hit upon launch. A crisis ensued within Google in the months that followed.

These kinds of industry narratives have more weight than reality in driving decision-making and investment, and the refrain of “move fast and break things” is still burned into people’s heads, so the end result these days is that shipping any product is okay, as long as it helps you catch up to your competitor. Thus, since Grok is seriously behind its competitors in usage, and of course xAI’s CEO Elon Musk is always desperate for attention, they have every incentive to ship a product with a catastrophically toxic design, including one that creates abusive imagery.

2. Accountability is “woke” and must be crushed

Another fundamental article of faith in the last decade amongst tech tycoons (and their fanboys) is that woke culture must be destroyed. They have an amorphous and ever-evolving definition of what “woke” means, but it always includes any measures of accountability. One key example is the trust and safety teams that had been trying to keep all of the major technology platforms from committing the worst harms that their products were capable of producing.

Here, again, Google provides us with useful context. The company had one of the most mature and experienced AI safety research teams in the world at the time the first paper on the transformer model (the architecture underlying LLMs) was published. Right around that time, Google also saw one of its engineers publish a sexist screed on gender essentialism designed to bait the company into becoming part of the culture war, which it ham-handedly stumbled directly into. Like so much of Silicon Valley, Google’s leadership did not understand that these campaigns are always attempts to work the refs, and they let themselves be played by these bad actors. Within a few years, a backlash had built, and they began cutting everyone who had warned about the risks of the new AI platforms, including some of the most credible and respected voices in the industry on these issues.

Eliminating those roles was considered vital because these people were blamed for having “slowed down” the company with their silly concerns about things like people’s lives, or the health of the world’s information ecosystem. A lot of the wealthy execs across the industry were absolutely convinced that the reason Google had ended up behind in AI, despite having invented LLMs, was that it had too many “woke” employees, and that those employees were too worried about esoteric concerns like people’s well-being.

It never enters the conversation that (1) executives are accountable for the failures that happen at a company; (2) Google had a million other failures during these same years (including those countless redundant messaging apps they kept launching!) that may have had far more to do with its inability to seize the market opportunity; and (3) it may be a good thing that Google didn’t rush to market with a product that tells children to harm themselves, and the workers who ended up being fired may have saved Google from that fate!

3. Product managers are veterans of genocidal regimes

The third fact that enabled the creation of pernicious AI products is more subtle, but has more wide-ranging implications once we face it. In the tech industry, product managers are often quietly amongst the most influential figures in determining the impact a company has on culture. (At least until all the product managers are replaced by an LLM being run by their CEO.) At their best, product managers are the people who decide exactly what features and functionality go into a product, synthesizing and coordinating between the disciplines of engineering, marketing, sales, support, research, design, and many other specialties. I’m a product person, so I have a lot of empathy for the challenges of the role, and a healthy respect for the power it can often hold.

But in today’s Silicon Valley, a huge number of the people who act as product managers spent the formative years of their careers at companies like Facebook (now Meta). If those PMs now work at OpenAI, then the moments when they were learning how to practice their craft were spent at a company that made products that directly enabled and accelerated a genocide. That’s not according to me; that’s the conclusion of multiple respected international human rights organizations. If you chose to go work at Facebook after the Rohingya genocide had happened, then you were certainly not going to learn from your manager that you should not make products that encourage or incite people to commit violence.

Even when they’re not enabling the worst things in the world, product managers who spend time in these cultures learn other destructive habits, like strategic line-stepping. This is the habit of repeatedly violating their own policies on things like privacy and security, or allowing users to violate platform policies on things like abuse and harassment, and then feigning surprise when the behavior is caught. After sending out an obligatory apology, they repeat the behavior a few more times, until everyone either gets so used to it that they stop complaining or the continued bad actions drive off the good people, which makes it seem to the media or outside observers that the problem has gone away. Then they amend their terms of service to say that the formerly disallowed behavior is now permissible, so that in the future they can say, “See? It doesn’t violate our policy.”

Because so many people in the industry now have these kinds of credentials on their LinkedIn profiles, their peers can’t easily raise many kinds of ethical concerns when designing a product without implicitly condemning their coworkers. This becomes even more fraught when someone might unknowingly be offending one of their leaders. As a result, it becomes a race to the bottom, where the person with the worst ethical standards on the team sets the standards to which everyone designs their work. If the prevailing sentiment about creating products at a company is that having millions of users just inevitably means killing some of them (“you’ve got to break a few eggs to make an omelet”), there can be risk in contradicting that idea. Pointing out that, in fact, most platforms on the internet do not harm users in these ways, and that their creators work very hard to ensure that tech products don’t present a risk to their communities, can end up being a career-limiting move.

4. Compensation is tied to feature adoption

This is a more subtle point, but it explains a lot of the incentives and motivations behind so much of what happens with today’s major technology platforms. When these companies launch new features, the success of those rollouts is often tied to the individual performance measurements of the people who were responsible for those features. These are tracked using metrics like “KPIs” (key performance indicators) or similar corporate acronyms, all of which basically represent the concept of being rewarded for whether the thing you made was adopted by users in the real world. In the abstract, it makes sense to reward employees based on whether the things they create actually succeed in the market, so that their work is aligned with whatever makes the company succeed.

In practice, people’s incentives and motivations get incredibly distorted over time by these kinds of gamified systems being used to measure their work, especially as it becomes a larger and larger part of their compensation. If you’ve ever wondered why some intrusive AI feature that you never asked for is jumping in front of your cursor when you’re just trying to do a normal task the same way that you’ve been doing it for years, it’s because someone’s KPI was measuring whether you were going to click on that AI button. Much of the time, the system doesn’t distinguish between “I accidentally clicked on this feature while trying to get rid of it” and “I enthusiastically chose to click on this button”. This is what I mean when I say we need an internet of consent.
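
To make this mechanism concrete, here is a minimal sketch in TypeScript of the kind of naive adoption telemetry described above. It is purely illustrative: the event shape and names like trackFeatureClick and adoptionKpi are hypothetical, not any real product’s API. The point is that a KPI built on raw click counts cannot tell an enthusiastic user from an annoyed one.

    // Hypothetical sketch of naive feature-adoption telemetry.
    // None of these names refer to any real product's API.
    type AdoptionEvent = {
      feature: string;   // which feature was clicked
      userId: string;    // who clicked it
      timestamp: number; // when the click happened
    };

    const events: AdoptionEvent[] = [];

    // Every click is logged as "adoption", regardless of intent.
    function trackFeatureClick(feature: string, userId: string): void {
      events.push({ feature, userId, timestamp: Date.now() });
    }

    // The KPI is a raw count, so an accidental click made while trying
    // to dismiss the feature is indistinguishable from deliberate use.
    function adoptionKpi(feature: string): number {
      return events.filter((e) => e.feature === feature).length;
    }

    trackFeatureClick("ai-assistant", "user-1"); // deliberate use
    trackFeatureClick("ai-assistant", "user-2"); // misclick while closing the popup
    console.log(adoptionKpi("ai-assistant"));    // 2: both count the same

A consent-respecting version would have to record intent separately (for example, logging dismissals as their own event type), which is precisely the data these measurement systems tend not to collect.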

But you see the grim end game of this kind of thinking, and these kinds of reward systems, when kids’ well-being is on the line. Someone’s compensation may well be tied to a metric or measurement of “how many people used the image generation feature?” without regard to whether that feature was being used to generate imagery of children without consent. Getting a user addicted to a product, even to the point where they’re getting positive reinforcement when discussing the most self-destructive behaviors, will show up in a measurement system as increased engagement — exactly the kind of behavior that most compensation systems reward employees for producing.

5. Their cronies have made it impossible to regulate them

A strange reality of the United States’ sad decline into authoritarianism is that it is presently impossible to create federal regulation to stop the harms that these large AI platforms are causing. Most Americans are not familiar with this level of corruption and crony capitalism, but Trump’s AI czar David Sacks has an unbelievably broad array of conflicts of interest from his investments across the AI spectrum; it’s impossible to know how many, because nobody in the Trump administration follows even the basic legal requirements around disclosure or divestment, and the entire corrupt Republican Party in Congress refuses to do its constitutionally required duty of holding the executive branch accountable for these failures.

As a result, at the behest of the most venal power brokers in Silicon Valley, the Trump administration is insisting on trying to stop all AI regulations at the state level, and of course will have the collusion of the captive Supreme Court to assist in this endeavor. Because they regularly have completely unaccountable and unrecorded conversations, the leaders of the Big AI companies (all of whom attended the Inauguration of this President and support the rampant lawbreaking of this administration with rewards like open bribery) know that there will be no constraints on the products that they launch, and no punishments or accountability if those products cause harm.

All of the pertinent regulatory bodies, from the Federal Trade Commission to the Consumer Financial Protection Bureau, have had their competent leadership replaced by Trump cronies as well, meaning that their agendas are captured and they will not be able to protect citizens from these companies, either.

There will, of course, still be attempts at accountability at the state and local level, and these will wind their way through the courts over time. But the harms will continue in the meantime. And there will be attempts to push back on the international level, both from regulators overseas, and increasingly by governments and consumers outside the United States refusing to use technologies developed in this country. But again, these remedies will take time to mature, and in the meantime, children will still be in harm’s way.

What about the kids?

It used to be such a trope of political campaigns and social movements to say “what about the children?” that it is almost beyond parody. I personally have mocked the phrase because it’s so often deployed in bad faith, to short-circuit complicated topics and suppress debate. But this is that rare circumstance where things are actually not that complicated. Simply discussing the reality of what these products do should be enough.

People will say, “but it’s inevitable! These products will just have these problems sometimes!” And that is simply false. There are already products on the market that don’t have these egregious moral failings. More to the point, even if it were true that these products couldn’t exist without killing or harming children — then that’s a reason not to ship them at all.

If it is, indeed, absolutely unavoidable that, for example, ChatGPT has to advocate violence, then let’s simply add a rule to the code that changes the object of the violence to Sam Altman. Or your boss. I suspect that if, suddenly, the chatbot deployed to every laptop at your company had a chance of suggesting that people cause bodily harm to your CEO, people would suddenly figure out a way to fix that bug. But somehow, when it makes that suggestion about your 12-year-old, this is an insurmountably complex challenge.

We can expect things to get worse before they get better. OpenAI has already announced that it will allow people to generate sexual content on its service for a fee later this year. To its credit, when doing so, the company stated its policy prohibiting the use of the service to generate images that sexualize children. But Thorn, the organization whose product OpenAI relies on to ensure compliance, and whose product is meant to help protect against exactly such content, was conspicuously silent about Musk’s recent foray into generating sexualized imagery of children. An organization whose entire purpose is preventing this kind of material, and whose every public message decries this content, somehow falls mute when the world’s richest man carries out the most blatant launch of this capability ever? If even the watchdogs have lost their voice, how are regular people supposed to feel like they have a chance at fighting back?

And then, if no one is reining in OpenAI, and they have to keep up with their competitors, and the competition isn’t worried about silly concerns like ethics, and the other platforms are selling child exploitation material, and all of the product managers are Meta alumni who learned to make decisions there and know that they can just keep gaming the terms of service if they need to, and laws aren’t being enforced… well, will you be surprised?

How do we move forward?

It should be an industry-stopping scandal that this is the current state of two of the biggest players in the most-hyped, most-funded, most consequential area of the entire business world right now. It should be unfathomable that people are thinking about deploying these technologies in their businesses — in their schools! — or integrating these products into their own platforms. And yet I would bet that the vast majority of people using these products have no idea about the risks or realities of these platforms. Even the vast majority of people who work in tech are probably barely aware.

What’s worse: the majority of people I’ve talked to in tech who do know about this have not taken a single action in response. Not one.

I’ll be following up with an entire list of suggestions about actions we can take, and ways we can push for accountability for the bad actors who are endangering kids every day. In the meantime, reflect for yourself about this reality. Who will you share this information with? How will this change your view of what these companies are? How will this change the way you make decisions about using these products? Now that you know: what will you do?

Slashdot

News for nerds, stuff that matters

Microsoft Says Bug Causes Copilot To Summarize Confidential Emails

Microsoft says a Microsoft 365 Copilot bug has been causing the AI assistant to summarize confidential emails since late January, bypassing data loss prevention (DLP) policies that organizations rely on to protect sensitive information. From a report: According to a service alert seen by BleepingComputer, this bug (tracked under CW1226324 and first detected on January 21) affects the Copilot "work tab" chat feature, which incorrectly reads and summarizes emails stored in users' Sent Items and Drafts folders, including messages that carry confidentiality labels explicitly designed to restrict access by automated tools.

Copilot Chat (short for Microsoft 365 Copilot Chat) is the company's AI-powered, content-aware chat that lets users interact with AI agents. Microsoft began rolling out Copilot Chat to Word, Excel, PowerPoint, Outlook, and OneNote for paying Microsoft 365 business customers in September 2025.

WordPress Gets AI Assistant That Can Edit Text, Generate Images and Tweak Your Site

WordPress has started rolling out an AI assistant built into its site editor and media library that can edit and translate text, generate and edit images through Google's Nano Banana model, and make structural changes to sites, such as creating new pages or swapping fonts.

Users can also invoke the assistant by tagging "@ai" in block notes, a commenting feature added to the site editor in December's WordPress 6.9 update. The tool is opt-in -- users need to toggle on "AI tools" in their site settings -- though sites originally created using WordPress's AI website builder, launched last year, will have it enabled by default.

Leaked Email Suggests Ring Plans To Expand 'Search Party' Surveillance Beyond Dogs

Ring's AI-powered "Search Party" feature, which links neighborhood cameras into a networked surveillance system to find lost dogs, was never intended to stop at pets, according to an internal email from founder Jamie Siminoff obtained by 404 Media.

Siminoff told employees in early October, shortly after the feature launched, that Search Party was introduced "first for finding dogs" and that the technology would eventually help "zero out crime in neighborhoods." The on-by-default feature faced intense backlash after Ring promoted it during a Super Bowl ad. Ring has since also rolled out "Familiar Faces," a facial recognition tool that identifies friends and family on a user's camera, and "Fire Watch," an AI-based fire alert system.

A Ring spokesperson told the publication Search Party does not process human biometrics or track people.

Rijnmond - Nieuws

Today's latest news on Rotterdam, Feyenoord, traffic and the weather in the Rijnmond region

Car that may have hit runner Lisa found; driver still at large

Police have seized a red Mini Cooper that may have been involved in the hit-and-run of 23-year-old Lisa from Hoek van Holland. The woman was struck while out running in 's-Gravenzande on Monday, and the driver of the car then drove off. It is still unclear who was driving the red Mini; police are investigating.

The Guardian

Latest news, sport, business, comment, analysis and reviews from the Guardian, the world's leading liberal voice

Tamás Vásáry obituary

Conductor and pianist highly regarded for his elegant interpretations of Chopin and Liszt

The Hungarian pianist Tamás Vásáry, who has died aged 92, was highly regarded for his elegance and clarity of execution in music by Chopin and Vásáry’s compatriot Liszt. His first concerts in the early 1960s, in London, New York and other major cities such as Milan, Vienna and Berlin, gave promise of a new talent that was exciting for its poetic expressivity rather than for daredevil virtuosity.

That priority was maintained as his career unfolded, and although his repertoire was also to embrace Debussy, Mozart, Bach, Beethoven and Schumann, as well as the concertos of Rachmaninov and the chamber music of Brahms, it was Chopin and Liszt to which he constantly returned.

Ben Jennings on Nigel Farage’s ‘shadow cabinet’ – cartoon

Fifa’s plan for expanded 48-team Club World Cup will not be blocked by Uefa

  • Backing a sign of improved relations between presidents

  • Tournament expected not to be held every two years

Uefa is ready to back Fifa’s proposed expansion of the Club World Cup to 48 teams for the next edition in 2029 in a sign of improving relations between their respective presidents, Aleksander Ceferin and Gianni Infantino.

European football’s governing body had opposed plans to grow the Club World Cup over concerns an expanded tournament could threaten the status of the Champions League, but Uefa is now willing to back Fifa in return for an undertaking that the competition will not be held every two years.

Climber faces manslaughter charge after leaving girlfriend on Austria’s tallest peak

Kerstin G froze to death on the Großglockner when Thomas P descended the mountain to fetch help

An Austrian mountaineer is to appear in court accused of grossly negligent manslaughter after his girlfriend died of hypothermia when he left her close to the summit on a climb that went dramatically wrong.

The 33-year-old woman, identified only as Kerstin G, froze to death on 19 January 2025, about 50 metres below the summit of the Großglockner, Austria’s tallest mountain, after an ascent of more than 17 hours with her boyfriend, Thomas P, 36.

Wel.nl

Read less, know more.

Zelensky: allowing Russian flag at Paralympics dirty and wrong

KYIV (ANP) - Ukrainian president Volodymyr Zelensky has called the decision to let several Russian and Belarusian athletes compete under their own flag at the Milan-Cortina Winter Paralympics "dirty" and "wrong". He said this in an interview with the British journalist Piers Morgan that was posted on X.

The International Paralympic Committee (IPC) announced on Tuesday that six Russians and four Belarusians may compete under their own flag at next month's Paralympics. Late last year, the IPC unexpectedly ended the suspension of Russia and Belarus that had been imposed after the outbreak of the war in Ukraine.

Zelensky compared the IPC's decision to a "creeping occupation". "That is roughly how Russian aggression developed, a creeping occupation: Crimea, nobody reacted, then Donbas, nobody reacted, then a full-scale invasion," the Ukrainian president said.


X must give researchers data on Hungarian elections

BERLIN (ANP/RTR) - Social media platform X must give researchers access to data relating to the Hungarian elections on 12 April, the court of appeal in Berlin has ruled. The decision is seen as a milestone in the enforcement of the EU's Digital Services Act (DSA).

Under that law, large online platforms such as X are required to give researchers access to data so they can monitor disinformation, hate speech and election manipulation. X allegedly failed to comply. The court ruled that billionaire Elon Musk's social media platform must share information about, for example, the reach of and reactions to posts about the upcoming Hungarian parliamentary elections, one of the plaintiffs reports.

A lower court had earlier ruled that jurisdiction lay in Ireland, where X has its European headquarters. The court in Berlin determined, however, that German courts can also act when there is a local issue. The research is being conducted from Germany.


Billionaire Wexner says he visited Epstein's island once

WASHINGTON (ANP/BLOOMBERG) - Billionaire Les Wexner has told an investigative committee of the US House of Representatives that he visited Jeffrey Epstein's private island once. The sex offender, who died in 2019, was Wexner's financial adviser for years; Wexner made his fortune with the retail and marketing conglomerate L Brands.

In his testimony before members of Congress, Wexner stated that he had been naive about Epstein and was misled by him, American media reported. Regarding the trip to Epstein's island in the Caribbean, he said it was a one-off visit, together with his wife and children.

Epstein died in a New York cell in 2019, by suicide according to the authorities. He was being held awaiting trial on charges of running a large network in which underage girls were sexually exploited. In 2008 he had already been convicted of soliciting a minor for prostitution. The investigative committee now wants to determine what role Epstein's network of influential acquaintances played in this.


Jetten: Van Berkel affair regrettable, but mistakes do get made

THE HAGUE (ANP) - Incoming prime minister Rob Jetten (D66) finds it "regrettable" that Nathalie van Berkel, until Monday the intended state secretary for Finance, has withdrawn. Van Berkel listed degrees on her CV that she had not pursued or not completed. When it emerged that she had lied about this, she first withdrew as prospective state secretary and, a day later, as a member of parliament.

Jetten would have "gladly prevented" it, but acknowledges that he did not succeed. According to him, that says nothing about his leadership. "It shows that in all kinds of procedures, mistakes can still be made." He considers it "a very great tragedy" for Van Berkel personally.

The incoming prime minister expects to be able to announce a replacement for Van Berkel on Thursday. He did not say whether it will be a man or a woman. On Friday, the prospective new state secretary will visit formateur Jetten.


US again threatens to leave International Energy Agency

PARIS (ANP/AFP/BLOOMBERG) - The United States has again threatened to leave the International Energy Agency (IEA) unless the organization devotes less attention to climate change. US energy secretary Chris Wright said this at a meeting in Paris on Wednesday. The secretary made the same threat last year.

The US contributes about 6 million dollars a year to the IEA, which amounts to 14 percent of its total budget. Wright wants the agency to focus on "the original mission" of energy security and called its climate work "political fuss". IEA director Fatih Birol emphasized earlier in the day that the agency is data-driven and apolitical.

In response to Wright's criticism, British energy minister Ed Miliband announced that the United Kingdom will contribute extra money to the IEA's Clean Energy Transitions Programme, adding that clean energy is the future.


Letter to French minister: retract claims about UN rapporteur

THE HAGUE (ANP) - More than 150 current and former diplomats, ambassadors and politicians have called on French foreign minister Jean-Noël Barrot in an open letter to retract claims he made last week about UN rapporteur Francesca Albanese. The signatories include the former Dutch ministers Jan Pronk, Ben Bot, Laurens Jan Brinkhorst, Jozias van Aartsen and Hedy d'Ancona.

On 11 February, Barrot reportedly referred to a digitally manipulated version of a speech Albanese gave at the Al Jazeera Media Forum on 7 February, in which she supposedly characterized Israel as the "common enemy of humanity". Barrot called for her dismissal. Fact-checking by the Truth or Fake programme of news channel France 24 has shown that Albanese never made that claim.

The letter writers find it "deeply worrying" that Barrot spread manipulated information. They call on him to retract the disinformation and to respect the independence of Albanese's mandate. A UN commission has also spoken out about the attack on Albanese: "We condemn the actions of ministers of certain states who rely on fabricated facts and criticize Ms Albanese for statements she never made."

'Not shortened or distorted'

Barrot is standing by his claims for now. "I have not shortened or distorted Ms Albanese's words. I have simply condemned them, because they are reprehensible," he said in the French parliament, as can be seen in a video he shared on X on Wednesday.

Albanese regularly speaks out critically about Israel. Among other things, she has called Gaza "the largest and most shameful concentration camp of the 21st century". In early February last year, the UN special rapporteur for the occupied Palestinian territories was due to speak in the Dutch House of Representatives, but this was blocked by several right-wing parties.

From pro-Palestinian quarters, by contrast, Albanese can count on support: last year, the Dutch human rights organization The Rights Forum honoured her with the Dries van Agt Award.


The Register

Biting the hand that feeds IT — Enterprise Technology News and Analysis

Windows 11 finally hits right note: MIDI 2.0 support arrives

Musical instrument digital interface protocol leaves preview for bright lights of General Availability

Microsoft has finally ushered in the era of MIDI 2.0 for Windows 11, more than a year after first teasing the functionality for Windows Insiders.…