Taking action on AI harms against kids
In my last piece, I talked about the harms that AI is visiting on children through the irresponsible choices made by the platforms creating those products. While we dove a bit into the incentives and institutional pressures that cause those companies to make such wildly irresponsible decisions, what we haven’t yet reckoned with is how we hold these companies accountable.
Often, people tell me they feel overwhelmed at the idea of trying to get laws passed, or of fighting a big political campaign to rein in the giant tech companies that are causing so much harm. But grassroots, local organizing can be extraordinarily effective in standing up for the values of your community against the agenda of the Big AI companies.
But while I think it’s vital that we pursue systemic justice (and it’s the only way to stop many kinds of harm), I do understand the desire for something more immediate and human-scale. So I wanted to share some direct, personal actions you can take to respond to the threats that Big AI has made against kids. Each of these tactics has been proven effective by others who have used the same strategies, so you can feel confident when adapting them for your own use.
If your company or organization maintains a presence on Twitter (or X, as it has tried to rename itself), it is important to protect yourself, your coworkers, and your employer from the risks of being on the platform. Leaders in many organizations have an outdated view of the platform, one uninformed about the current level of danger and harm of participating on it, and an accurate description of the problem can often be enough to drive a decision to make a change.
Here is some dialogue you can use or modify to catalyze a productive conversation at work:
Hi, [name]. I saw a while ago that Twitter is being investigated in multiple countries around the world for having generated explicit imagery of women and children. The story even said that their CEO reinstated the account of a user who had shared child exploitation pictures on the site, and monetized the account that had shared the pictures.
Can you verify that our team is required to be on the service even though there is child abuse imagery on the site? I know that Musk’s account is shown to everyone on Twitter, so I’m concerned we’ll see whatever content he shares or retweets. Should I forward any child abuse material I encounter in the course of my role to HR, to legal, or to both? And what is our process for reporting this kind of material to the authorities? I haven’t been trained in any procedures for handling these kinds of sensitive materials.
That should be enough to trigger a useful conversation at your workplace. (You can share this link if they want a credible, business-minded source to reference.) If they need more context about the burden on workers, you can also point out that content moderators who have to interact with this kind of content have suffered serious trauma, according to many academic studies. There is also the risk of employees and partners having concerns about nonconsensual imagery being generated from their images if the company posts anything on Twitter that features their faces or bodies. As some articles have noted, the Grok AI tool that Twitter uses is even designed to permit the creation of imagery that makes its targets look like victims of violence, including targets who are underage.
As a result, your emails to your manager should CC your HR team and should make explicit that you don’t wish to be liable for the risks the company is taking on by remaining on the platform. Talk to your coworkers, share this information with them, and see if they will join you in the conversation. If you’re able to, it’s not a bad idea to look up a local labor lawyer and see if they’re willing to talk to you for free, in case you need someone to CC on an email while discussing these topics. Make your employer say to you, explicitly, that the decision to remain on the platform is theirs, that they’re aware of the risks, and that they indemnify you against those risks. You should ask that they take on accountability for burdens like legal costs or even psychological counseling for the real and severe impacts that come from enduring the harms that crimes like those enabled by Twitter can cause.
All of these strategies can also apply to products that integrate with Twitter’s service at a technical level, whether for sharing content or posting tweets, or to technical platforms that try to use Grok’s AI features. If you are a product manager, or know a product manager, who is considering connecting to a platform that makes child abuse material, you have failed at the most fundamental tenet of your craft. If you work at a company that has incorporated these technologies, file a bug mentioning the issues listed above and, again, CC your legal team and mention these concerns. “Our product might plug in to a platform that generates CSAM” is a show-stopping bug for any product, and any organization that doesn’t understand that is fundamentally broken.
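If you’re the one fixing that bug, the most durable approach is usually to gate each third-party share integration behind configuration, so one flag flip removes it from the product everywhere and the change is easy to cite in the bug report. Here is a minimal sketch of that pattern; all the names (`SHARE_TARGETS`, `enabled_share_targets`) are hypothetical, not from any real product:

```python
# Hypothetical sketch: model each third-party share integration as a
# config entry, so disabling one is a one-line change rather than a
# hunt through every code path that touches it.
SHARE_TARGETS = {
    "mastodon": {"enabled": True},
    "bluesky": {"enabled": True},
    # Disabled pending the safety and legal review described above.
    "twitter": {"enabled": False},
}

def enabled_share_targets(targets=SHARE_TARGETS):
    """Return only the share targets the product should expose to users."""
    return sorted(name for name, cfg in targets.items() if cfg.get("enabled"))
```

A UI that renders its share buttons from `enabled_share_targets()` drops the integration everywhere at once, and the single config line doubles as documentation of why it was turned off.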
Once you catalyze this conversation, you can begin mapping out a broader communication strategy that takes advantage of the many excellent options for replacing this legacy social media channel.
An increasing number of schools are falling prey to the “AI is inevitable!” rhetoric and desperately chasing the idea of putting AI tools into kids’ hands. Worse, a lot of schools think that the only kinds of technology that exist are the kinds made by giant tech companies. And because many of the adults making the decisions about AI are not necessarily experts in every detail of every technology, the decision about which AI platforms to use often comes down to which ones people have heard about the most. For most people, that means ChatGPT, since it’s gotten the most free hype from the media.
As a result, many schools and educational institutions are considering the deployment of a platform that has told multiple children to self-harm, including several who have taken their own lives. This is something that you can take action about at your kid’s school.
First, you can begin simply by gathering resources. There are many credible stories you can share to illustrate the risk to administrators and to other parents. Typically, apologists for this product will raise a few objections; be ready to respond to them thoughtfully.
With these responses in hand, you can provide some basic facts about the risks of the specific tool or platform that is being recommended, and help present a cogent argument against its deployment. It’s important to frame the argument in terms of child safety — the conventional arguments against LLMs, grounded in concerns like environmental impact, labor impact, intellectual property rights, or other similar issues tend to be dismissed out of hand due to effective propagandizing by Big AI advocates.
If, instead, you ignore the debate about LLMs and focus on real-world safety concerns based on actual threats that have happened to actual children, you should be able to have a very direct impact. And these are messages that others will generally pick up and amplify as well, whether they are fellow parents, or local media.
From here, you can begin a conversation that re-evaluates the goals of the initiative from first principles. “Everyone else is doing it” is not a valid way of advocating for technology, and even if administrators feel that LLMs are a technology students should become familiar with, they should begin by engaging with the many resources on the topic created by academics who are not tied to the Big AI companies.
The key reason I wanted to capture some specific actions that people can take around responding to the harms that Big AI poses towards children is to remind us all that the power to take action lies in everyone’s hands. It’s not an abstract concept, or a theoretical thing that we have to wait for someone else to do.
We are in an outrageous place, where the actions of some of the biggest and most influential technology companies in the world are so beyond the pale that we can’t even discuss what they are doing in polite company. Not long ago, the activity that takes place on these platforms would have meant that simply accessing such sites during one’s workday was a firing offense. Now we have employers and schools trying to require people to use these things.
The pushback has to come at every level. Do talk to your elected officials. Do organize with others at your local level. If you work in tech, make sure to resist every attempt at normalizing these platforms, or incorporating their technologies into your own.
Finally, use your voice and your courage, and trust your sense of basic decency. It might take only a few minutes to draft an email and send it to the right people. If you need help figuring out who to send it to, or how to phrase it, let me know and I’ll help! These things that feel small can add up to something enormous. And that’s exactly what our kids deserve.