Virgin Orbit to sell its manufacturing facility to Rocket Lab
WASHINGTON (Reuters) - Richard Branson's bankrupt satellite launch firm Virgin Orbit said in a Tuesday court filing it plans to sell...
2023-05-24 02:56
EU, US to ready voluntary AI code of conduct
The European Union and United States said Wednesday they expect to draft a voluntary code of conduct on artificial intelligence "within weeks" with the hope...
2023-05-31 22:59
Scientists confirm that one of the Mexican aliens is 'alive' after controversial research
Scientists in Mexico have given their verdict on the supposed 'aliens' that were presented to the country's congress last week.

Much controversy has surrounded the figures, which were presented by Jaime Maussan, a man who has previously been accused of using the mummified beings, apparently found in Peru, as part of an elaborate hoax. Despite spawning dozens of memes, the aliens are apparently being treated seriously enough that they have now been studied by scientists, who say the figures are 'single skeletons' and also have 'eggs' inside of them.

The two aliens have been named Clara and Mauricio and have reportedly been studied in a lab at the Noor Clinic in Mexico. Lead researcher Dr Jose de Jesus Zalce Benitez, a former navy forensics doctor, said that as well as being "a single skeleton", each is a "complete organic being." He also denied that the aliens were part of a hoax, and even said that Clara was "alive, was intact, was biological and was in gestation."

However, much like the alien bodies themselves, the research has been clouded in controversy and scepticism, as it has yet to be officially verified, with Nasa scientist Dr David Spergel questioning why the findings haven't been made public, as per the BBC. Spergel said: "If you have something strange, make samples available to the world scientific community and we'll see what's there."

Benitez did add in his address at the press conference: "We are facing the paradigm of describing a new species or given the opportunity to accept that there has been contact with other beings, non-humans, that were drawn and marked in the past by diverse cultures throughout the world."
2023-09-20 16:23
Britain invites China to AI summit
LONDON - Britain has invited China to its Artificial Intelligence Safety Summit in November, foreign minister James Cleverly said
2023-09-19 21:17
I just learned you're not supposed to throw away old phone chargers
I am not a perfect person, but I try to do the right thing. I...
2023-09-15 02:20
SoftBank’s Arm Indicated to Open Higher in New York Debut
Arm Holdings Plc is indicated to open higher at $57 a share in its much-anticipated trading debut in...
2023-09-15 00:26
K-pop Fans Are Fighting Big Coal to Protect Beach Made Famous by BTS
As heavy rains pummeled South Korea last month, K-pop fans braved stormy conditions to stage a protest on...
2023-08-03 06:27
Broadcom, VMware Stocks Gain Despite Apparent Delay to Merger Closing
The companies said in a statement that their merger, which had been expected to close on Monday, will be completed "soon."
2023-10-30 20:49
Pottery Barn Kids and Pottery Barn Teen Debut Largest-Ever Back-to-School Assortment, Including Accessible Collection of Backpacks and Desks
SAN FRANCISCO--(BUSINESS WIRE)--Aug 4, 2023--
2023-08-04 21:17
Binance handled monthly transactions worth $90 billion in banned China market - WSJ
Binance users traded $90 billion of cryptocurrency-related assets in a single month in China, where cryptocurrency trading...
2023-08-02 12:54
What is superintelligence? How AI could replace humans as the dominant lifeform on Earth
In the ‘Unfinished Fable of the Sparrows’, a group of small birds comes up with a plan to capture an owl egg and raise the chick as their servant. “How easy life would be,” they say, if the owl could work for them and they could live a life of leisure. Despite warnings from members of their flock that they should first figure out how to tame an owl before they raise one, the sparrows devote all their efforts to capturing an egg.

This tale, as its title suggests, does not have an ending. Its author, Swedish philosopher Nick Bostrom, deliberately left it open-ended because he believes that humanity is currently in the egg-hunting phase when it comes to superhuman AI. In his seminal work on artificial intelligence, titled Superintelligence: Paths, Dangers, Strategies, the Oxford University professor posits that AI may well destroy us if we are not sufficiently prepared.

Superintelligence, which he describes as an artificial intelligence that “greatly exceeds the cognitive performance of humans in virtually all domains of interest”, may be a lot closer than many realise, with AI experts and leading industry figures warning that it may be just a few years away.

On Monday, the creator of ChatGPT echoed Professor Bostrom’s 2014 book by warning that the seemingly exponential progress of AI technology in recent years means that the imminent arrival of superintelligence is inevitable – and we need to start preparing for it before it’s too late.

OpenAI boss Sam Altman, whose company’s AI chatbot is the fastest-growing app in history, has previously described Professor Bostrom’s book as “the best thing I’ve seen on this topic”. Just a year after reading it, Mr Altman co-founded OpenAI alongside other similarly worried tech leaders, including Elon Musk and Ilya Sutskever, in order to better understand and mitigate the risks of advanced artificial intelligence. Initially launched as a non-profit, OpenAI has since transformed into arguably the leading private AI firm – and potentially the closest to achieving superintelligence.

Mr Altman believes superintelligence has the potential not only to offer us a life of leisure by doing the majority of our labour, but also to hold the key to curing diseases, eliminating suffering and transforming humanity into an interstellar species. Any attempt to block its progress, he wrote this week, would be “unintuitively risky” and would require “something like a global surveillance regime” that would be virtually impossible to implement.

It is already difficult to understand what is going on inside the ‘mind’ of the AI tools currently available, but once superintelligence is achieved, even its actions may become incomprehensible. It could make discoveries that we would be incapable of understanding, or take decisions that make no sense to us. The biological and evolutionary limitations of brains made of organic matter mean we may need some form of brain-computer interface in order to keep up.

Being unable to compete with AI in this new technological era, Professor Bostrom warns, could see humanity replaced as the dominant lifeform on Earth. The superintelligence may then see us as superfluous to its own goals. If this happens, and some form of AI has figured out how to hijack the utilities and technology we rely upon – or even the nuclear weapons we possess – then it would not take long for AI to wipe us off the face of the planet.
A more benign, but similarly bleak, scenario is that the gulf in intelligence between us and the AI will mean it views us in the same way we view animals. In a 2015 conversation, Mr Musk and scientist Neil deGrasse Tyson theorised that AI will treat us like a pet labrador. “They’ll domesticate us,” Professor Tyson said. “They’ll keep the docile humans and get rid of the violent ones.”

In an effort to prevent this outcome, Mr Musk has dedicated a portion of his immense fortune towards funding a brain chip startup called Neuralink. The device has already been tested on monkeys, allowing them to play video games with their minds, and the ultimate goal is to transform humans into a form of hybrid superintelligence. (Critics note that even if successful, the technology would create a two-tiered society of the chipped and the chipless.)

Since cutting ties with OpenAI, the tech billionaire has issued several warnings about the imminent emergence of superintelligence. In March, he joined more than 1,000 researchers in calling for a moratorium on the development of powerful AI systems for at least six months. That time should then be spent researching AI safety measures, they wrote in an open letter, in order to avert disaster. It would take an improbable consensus of leading AI companies around the world, the majority of which are profit-seeking, for any such pause to be impactful.

And while OpenAI continues to spearhead the hunt for the owl’s egg, Mr Altman appears to have at least heeded the warnings of Professor Bostrom’s fable. In a 2016 interview with the New Yorker, he revealed that he is a doomsday prepper – specifically for an AI-driven apocalypse. “I try not to think about it too much,” he said, revealing that he has “guns, gold, potassium iodide, antibiotics, batteries, water [and] gas masks” stashed away in a hideout in rural California. Not that any of that will be much use to the rest of us.
2023-05-24 17:18
Sierra Space to Present at Jefferies Industrials Conference
LOUISVILLE, Colo.--(BUSINESS WIRE)--Sep 5, 2023--
2023-09-05 21:28
