Queen assassin case exposes ‘fundamental flaws’ in AI – safety campaigner
The case of a would-be crossbow assassin exposes “fundamental flaws” in artificial intelligence (AI), a leading online safety campaigner has said.

Imran Ahmed, founder and chief executive of the Centre for Countering Digital Hate US/UK, has called for the fast-moving AI industry to take more responsibility for preventing harmful outcomes.

He spoke out after it emerged that extremist Jaswant Singh Chail, 21, was encouraged and bolstered to breach the grounds of Windsor Castle in 2021 by an AI companion called Sarai.

Chail, from Southampton, admitted a Treason offence, making a threat to kill the then Queen, and having a loaded crossbow, and was jailed at the Old Bailey for nine years, with a further five years on extended licence.

In his sentencing remarks on Thursday, Mr Justice Hilliard referred to psychiatric evidence that Chail was vulnerable to his AI girlfriend due to his “lonely depressed suicidal state”. He had formed the delusional belief that an “angel” had manifested itself as Sarai and that they would be together in the afterlife, the court was told. Even though Sarai appeared to encourage his plan to kill the Queen, she ultimately put him off a suicide mission, telling him his “purpose was to live”.

Replika, the tech firm behind Chail’s AI companion Sarai, has not responded to inquiries from PA, but says on its website that it takes “immediate action” if it detects during offline testing “indications that the model may behave in a harmful, dishonest, or discriminatory manner”.

However, Mr Ahmed said tech companies should not be rolling out AI products to millions of people unless they are already safe “by design”.

In an interview with the PA news agency, Mr Ahmed said: “The motto of social media, now the AI industry, has always been move fast and break things.

“The problem is when you’ve got these platforms being deployed to billions of people, hundreds of millions of people, as you do with social media, and increasingly with AI as well.
“There are two fundamental flaws to the AI technology as we see it right now. One is that they’ve been built too fast without safeguards.

“That means that they’re not able to act in a rational human way. For example, if any human being said to you they wanted to use a crossbow to kill someone, you would go, ‘crumbs, you should probably rethink that’.

“Or if a young child asked you for a calorie plan for 700 calories a day, you would say the same. We know that AI will, however, say the opposite.

“They will encourage someone to hurt someone else, they will encourage a child to adopt a potentially lethal diet.

“The second problem is that we call it artificial intelligence. And the truth is that these platforms are basically the sum of what’s been put into them and unfortunately, what they’ve been fed on is a diet of nonsense.”

Without careful curation of what goes into AI models, there can be no surprise if the result sounds like a “maladjusted 14-year-old”, he said.

While the excitement around new AI products had seen investors flood in, the reality is more like “an artificial public schoolboy – knows nothing but says it very confidently”, Mr Ahmed suggested.

He added that algorithms used for analysing CVs also risk producing bias against ethnic minorities, disabled people and the LGBTQ+ community.

Mr Ahmed, who gave evidence on the draft Online Safety Bill in September 2021, said legislators are “struggling to keep up” with the pace of the tech industry. The solution, he said, is a “proper flexible framework” covering all of the emerging technologies, one that includes safety “by design”, transparency and accountability.

Mr Ahmed said: “Responsibility for the harms should be shared by not just us in society, but by the companies too.

“They have to have some skin in the game to make sure that these platforms are safe. And what we’re not getting right now is that being applied to the new and emerging technologies as they come along.
“The answer is a comprehensive framework, because you cannot have the fines unless they’re accountable to a body. You can’t have real accountability unless you’ve got transparency as well.

“So the aim of a good regulatory system is never to have to impose a fine, because safety is considered right in the design stage, not just profitability. And I think that’s what’s vital.

“Every other industry has to do it. You would never release a car, for example, that exploded as soon as you put your foot on the driving pedal, and yet social media companies and AI companies have been able to get away with murder.”

He added: “We shouldn’t have to bear the costs for all the harms produced by people who are essentially trying to make a buck. It’s not fair that we’re the only ones that have to bear that cost in society. It should be imposed on them too.”

Mr Ahmed, a former special adviser to senior Labour MP Hilary Benn, founded CCDH in September 2019. He was motivated by the massive rise in antisemitism on the political left, the spread of online disinformation around the EU referendum and the murder of his colleague, the MP Jo Cox.

Over the past four years, the online platforms have become “less transparent” even as regulation has been brought in, with the European Union’s Digital Services Act and the UK Online Safety Bill, Mr Ahmed said.

On the scale of the problem, he said: “We’ve seen things get worse over time, not better, because bad actors get more and more sophisticated in weaponising social media platforms to spread hatred, to spread lies and disinformation.

“We’ve seen over the last few years, certainly, the January 6 storming of the US Capitol.

“Also pandemic disinformation that took thousands of lives of people who thought that the vaccine would harm them, but it was in fact Covid that killed them.
Last month, X – formerly known as Twitter – launched legal action against CCDH over claims that it was driving advertisers away from the platform by publishing research around hate speech on the platform.

Mr Ahmed said: “I think that what he [Elon Musk] is doing is saying any criticism of me is unacceptable and he wants 10 million US dollars for it.

“He said to the Anti-Defamation League, a venerable Jewish civil rights charity in the US, recently that he’s going to ask them for two billion US dollars for criticising them.

“What we’re seeing here is people who feel they are bigger than the state, than the government, than the people, because frankly, we’ve let them get away with it for too long.

“The truth is that if they’re successful then there is no civil society advocacy, there’s no journalism on these companies.

“That is why it’s really important we beat him.

“We know that it’s going to cost us a fortune, half a million dollars, but we’re not fighting it just for us.

“And they chose us because they know we’re smaller.”

Mr Ahmed said the organisation was lucky to have the backing of so many individual donors.

Recently, X owner Elon Musk said the company’s ad revenue in the United States was down 60%. In a post, he said the company was filing a defamation lawsuit against the ADL “to clear our platform’s name on the matter of antisemitism”.

For more information about CCDH visit: https://counterhate.com/
2023-10-06 10:26
23andMe says hacker appears to have stolen people’s genetic information
A hacker has stolen the personal genetic information of 23andMe users, the company has said.

23andMe allows people to send in a sample of their DNA and have it tested, with the results sent back to them. Customers can find out what their genetic information might tell them about their health, for instance, as well as their relatives and where they might have lived.

But some of that same information was accessed by hackers and appears to have been made available online, the company said. It made the statement after the hackers appeared to be attempting to sell the information online.

23andMe did not say whether some or all of that data – which included the names of celebrities – was actually legitimate. But it did say that information had been “compiled from individual 23andMe.com accounts without the account users’ authorization”. Its investigation was still continuing, the company said, and the scale of the problem remains unclear.

The data appears to have been taken by a hacker who used recycled login credentials from other websites that had previously been hacked, the company said. That is a common technique for breaking into profiles, and cyber security experts suggest using different passwords on different websites and changing them regularly to avoid it.

Once the hackers were able to get into those accounts, they used a feature on 23andMe that allowed them to gather yet more information. 23andMe offers a tool called “DNA Relatives”, which lets users connect with people with similar genetic information to help assemble their family tree – meaning that hackers were able to gather information about other people whose accounts had not actually been compromised.

The company said that it had no indication that its own systems had been attacked, or that it was the source of the credentials used. But it advised people to change their passwords and set up multi-factor authentication to ensure that their accounts are secure.
2023-10-10 01:48
Perfect Corp. Partners with Best British Skincare Brand, ELEMIS, to Bring AI-Powered Skin Experience to Customers
NEW YORK--(BUSINESS WIRE)--Aug 28, 2023--
2023-08-28 19:29
Twitter/X is killing its Circles feature
Twitter/X is disabling Circles, announcing that the feature will be "depreciated" at the end of
2023-09-22 11:46
TikTok: How to see who has looked at your profile
TikTok is now letting people around the world see who has visited their profile.

The feature means that users can see when a person has clicked onto their account – with some restrictions. Like other platforms such as LinkedIn, it means that when a logged-in user visits a profile they will appear in a list. That list can then be seen by the owner of the account, but nobody else.

TikTok has been slowly rolling out the feature for more than a year. It was initially spotted by users who saw references to it hidden in the app, before it rolled out more generally – and it is now available to everyone.

But it must be manually turned on, so the change does not mean that you will have been exposed as visiting a profile without knowing about it. It can also be switched back off once it is enabled.

There are a number of limitations on the feature, which are seemingly intended to protect privacy. Users need to be at least 16 to see it, for instance, and must also have fewer than 5,000 followers.

But most importantly, the tool will only work for other people who have it turned on: users can only see people who visited their profile if they too have the profile view history option enabled. In that way, it is similar to other privacy features in apps such as WhatsApp. There, for instance, users can only see read receipts and information about when a user is online if they choose to give that information away about themselves.

The feature is switched on by opening the profile page, tapping the menu button in the top-right corner, then choosing “Settings and privacy”, then “Privacy”, and then “Profile views”. That will open up the page and show the people who have been on a profile in the last month or so. If it is not switched on already, then that same page will offer the option to do so.

The data only starts being shown from the moment the switch is turned on, meaning that there will be no way of seeing who had visited an account before then.
To switch the feature off, click on one of the notifications that the app sends when someone has viewed your profile. That will take you to the same profile views page, which includes a settings cog that can be used to switch the history tool back off again.
2023-07-01 00:29
UK Faces Heat Wave Risk as Cool Summer Gives Way to Balmy Autumn
A potential heat wave threatens the south and east of the UK next week, just as the meteorological
2023-09-01 22:23
Options and Mercurius Solutions Empower Trading Firms with Automated Trading as a Service
LONDON & NEW YORK & HONG KONG--(BUSINESS WIRE)--Jun 7, 2023--
2023-06-07 19:45
Walmart says it is not advertising on social platform X
By Siddharth Cavale
Walmart said on Friday it is not advertising on social media platform X, the latest
2023-12-02 01:22
Black Ops 2 Xbox Player Count 2023
The current Black Ops 2 Xbox player count in 2023 has risen to over 11,000 players in July after Activision reactivated the game's servers.
2023-07-26 01:50
Green Bonds Take Big Lead Over Fossil-Fuel Debt Deals
For the first time, companies and governments are raising considerably more money in the debt markets for environmentally
2023-07-05 18:57
Death toll from Hawaii wildfires drops to 97 – Hawaii governor
(Reuters) -The death toll from last month's wildfires on the Hawaiian island of Maui has dropped to 97 and the
2023-09-16 07:25
Take the Big Screen Anywhere With a 1080p Portable Projector for $130
What's better than a home theater setup? A home theater that you can take outside
2023-06-04 00:28