
China’s Chip-Gear Makers Soar as US Probe Spurs Development Bets
China’s semiconductor equipment makers surged as Washington’s investigation into chips used for Huawei Technologies Co.’s new smartphone spurred bets on homegrown development.
2023-09-08 14:50

NASA set to compete against Netflix with its own streaming service
NASA is turning its attention from the stars to our screens after announcing that it will launch its own streaming service. The organisation is launching a beta for a streaming platform named NASA+ – and yes, the plus is shaped to look like a star. The streaming service will be ad-free and available to watch on the new beta site and the NASA app. Essentially, it’ll work a little like Netflix for space content, updating the current NASA TV output, which features livestreams of rocket launches and other events. There’s not much information out there about the new platform, but it’ll arrive “later this year” and NASA has stated that it won’t require a paid subscription. NASA communications administrator Marc Etkind said in the press release that the organisation has designed the platform around “putting space on demand at your fingertips”. Judging by the look of the new promo clip, it’ll feature a lot of educational videos and public content as well as documentaries. “Modernizing our main websites from a technology standpoint and streamlining how the public engages with our content online are critical first steps in making our agency’s information more accessible, discoverable, and secure,” said NASA chief information officer Jeff Seaton. Meanwhile, it comes after NASA celebrated the first birthday of the James Webb Space Telescope this summer by releasing extraordinary images of stars “being born”. In the images, which almost look surreal, rainbow bursts with tiny twinkles can be seen. “The darkest areas are the densest, where thick dust cocoons still-forming protostars,” the space agency says. “These occur when a star first bursts through its natal envelope of cosmic dust, shooting out a pair of opposing jets into space like a newborn first stretching her arms out into the world.”
NASA also recently stated that it discovered “diverse organic matter” on the surface of Mars, which could change our understanding of the red planet and the search for life in the universe.
2023-07-31 18:24

Former ByteDance executive says Chinese Communist Party tracked Hong Kong protesters via data
A former executive at ByteDance, the Chinese company which owns the popular short-video app TikTok, says in a legal filing that some members of the ruling Communist Party used data held by the company to identify and locate protesters in Hong Kong
2023-06-07 18:46

X/Twitter executives had a very bad day defending Musk's platform
Since Elon Musk acquired Twitter, the company has rarely made its executives available for media
2023-08-11 08:21

Android update blamed for record number of 999 calls
Police in the UK have blamed a record increase in accidental 999 calls on an Android smartphone update. The National Police Chiefs Council said the Emergency SOS function was resulting in emergency switchboards being overwhelmed by “silent” calls. The emergency feature is activated when a side button on a device is repeatedly pressed, which triggers a countdown that allows the action to be cancelled by dragging a slider across the screen. However, many users appear to inadvertently initiate emergency calls when their device is in a bag or pocket. “Nationally, all emergency services are currently experiencing record high 999 call volumes,” the National Police Chiefs Council said. “There’s a few reasons for this, but one we think is having a significant impact is an update to Android smartphones.” Met Police chief superintendent Dan Ivey said people should disable the emergency feature, claiming that an “unprecedented” number of calls to emergency lines in June were a result of people accidentally activating it. The majority of smartphone owners in the UK use Android, with Samsung, Huawei and Google Pixel phones all using the mobile operating system. Google, which first began rolling out the Emergency SOS update with the release of Android 12 in 2021, said that it was working with these smartphone manufacturers in order to resolve the issue. “To help these manufacturers prevent unintentional emergency calls on their devices, Android is providing them with additional guidance and resources,” a spokesperson for Google said. “We anticipate device manufacturers will roll out updates to their users that address this issue shortly. Users that continue to experience this issue should switch Emergency SOS off for the next couple of days.” The feature can be deactivated within the ‘Safety and Emergency’ section of Android’s settings.
Android researcher Mishaal Rahman noted on Twitter that the issue also appeared to impact other law enforcement agencies around the world, including police in Canada and Europe.
2023-06-23 19:23

Pay to Post? Elon Musk Floats Idea of Charging to Tweet
Elon Musk is signaling he’s thinking about requiring users to pay to post on Twitter.
2023-09-19 07:49

‘Is AI dangerous?’ UK’s most Googled questions about artificial intelligence
People in the UK want to know how artificial intelligence works, how to use it to make money and whether it will take their jobs, according to Google. The search engine company revealed the UK’s most Googled questions about AI over the past three months ahead of Rishi Sunak’s AI summit. Here, PA takes a look at some of the burning questions the UK wants the answers to. What is AI? In a nutshell, AI refers to the training of machines to solve problems and make decisions in a way that is similar to how the human brain works. However, to boil AI down to a short definition would be to underestimate its complexity and variations. For example, “weak” or “narrow” AI is AI trained to perform specific tasks and powers technology people may be familiar with in their homes, such as Amazon’s Alexa or autonomous vehicles, while “strong AI”, comprising Artificial General Intelligence and Artificial Super Intelligence, refers to AI where a machine would have an intelligence equal to or surpassing that of humans. What is generative AI? Generative AI refers to models which can create something completely new based on the vast data they have been trained on. Recent examples of this include ChatGPT, where users can make requests such as “write a poem that features the Battle of Waterloo”. ChatGPT would then produce a new poem based on the material it had been trained on, in this case vast quantities of history books and poetry. How to make AI song covers? Much like the production of a new poem using AI, it is possible to create new music using models which have been trained on previously recorded sounds. However, this is proving tricky ground for human musicians who fear their work may be used without their consent to produce brand new creations, or even to imitate them. Spotify boss Daniel Ek told the BBC he thought there were legitimate use cases for the technology in music, but that it should not be used to impersonate real artists without their consent.
He said there were three “buckets” of AI use in music: tools such as auto-tune, which he said was acceptable; software which impersonated artists, which was not; and a more controversial middle ground where AI-generated music was inspired by a specific artist but did not directly mimic them. How to make money with AI? The possibilities for making money using AI are seemingly endless, with people using it to produce music, books, essays, translations and much more. AI can also be used to streamline processes in existing jobs, producing presentations or documents in a fraction of the time it would usually take. However, the issue of copyright looms large over AI’s creative uses. Who created AI? While the concept has been discussed in art and culture for centuries, the 20th century will be remembered as the period when AI began to take practical shape. In 1950, wartime codebreaker Alan Turing published a paper called Computing Machinery and Intelligence in which he considered whether machines could think, introducing what became known as the Turing Test where a human would attempt to distinguish between the responses of another human and a computer. Six years later computer scientist John McCarthy coined the term “artificial intelligence” during the inaugural AI conference at Dartmouth College, while in the same year the first running AI software programme was created by Allen Newell, JC Shaw and Herbert Simon. Is AI dangerous? Tesla, SpaceX and X owner Elon Musk told the PA news agency at the UK’s AI Safety Summit: “I think AI is one of the biggest threats (to humans). “We have for the first time the situation where we have something that is going to be far smarter than the smartest human. “We’re not stronger or faster than other creatures, but we are more intelligent, and here we are for the first time, really in human history, with something that is going to be far more intelligent than us. 
“It’s not clear to me if we can control such a thing, but I think we can aspire to guide it in a direction that’s beneficial to humanity.” Will AI take my job? As with all technological advances, AI will change the way we work, making some jobs redundant but creating others too. Rishi Sunak recently attempted to assuage people’s fears, saying: “It’s important to recognise that AI doesn’t just automate and take people’s jobs. “A better way to think about it is as a co-pilot. “As with all technologies, they change our labour market, I think over time of course they make our economy more prosperous, more productive. “They create more growth overall but it does mean that there are changes in the labour market.”
2023-11-02 11:16

Amazon's Ring to pay $5.8 million to settle FTC privacy lawsuit
Amazon's smart doorbell company, Ring, has agreed to settle a lawsuit from the Federal Trade Commission alleging that it had "unreasonable" data security and privacy practices, according to a Wednesday filing in the US District Court for the District of Columbia.
2023-06-01 03:30

CEO of Germany's Merck: decoupling from China would be at huge economic cost
By Ludwig Burger and Patricia Weiss FRANKFURT (Reuters) -The CEO of German technology group Merck KGaA said that unravelling trade ties with China would come at a huge economic cost.
2023-06-06 23:20

3 companies to pay $615,000 in NY attorney general investigation over faked net neutrality comments
New York’s attorney general says three companies accused of falsifying millions of public comments to support the contentious 2017 federal repeal of net neutrality rules have agreed to pay $615,000 in penalties to New York and other states
2023-05-11 02:57

The best VPNs for watching the NFL from anywhere in the world
We hate to break it to you, but the online world is full of restrictions,
2023-09-05 17:53

Wrongly arrested because of facial recognition: Why new police tech risks serious miscarriages of justice
On 16 February, Porcha Woodruff was helping her children get ready for school when six Detroit police officers arrived at her door. They told her she was under arrest for a January carjacking and robbery. She was so shocked she wondered for a moment if she was being pranked. She was eight months into a difficult pregnancy and partway through a nursing school programme. She did little else besides study and take care of her kids. She certainly wasn’t out stealing cars at gunpoint, she said. “I’m like, ‘What?’ I opened my door so he could see my stomach. ‘I’m eight months pregnant. You can see two vehicles in the driveway. Why would I carjack?’” she told The Independent. “‘You’ve gotta be wrong. You can’t have the right person.’” Her children cried as she asked officers if the suspect was pregnant and insisted they had mistakenly arrested her. She was put in handcuffs and taken to jail, where she had panic attacks and early contractions. She later learned police identified her as a suspect after running security footage through the department’s facial recognition software, then placing a 2015 mugshot from a past traffic arrest into a photo lineup, where the carjacking victim singled out Ms Woodruff as her assailant. The Detroit Police Department eventually dropped the case, but the arrest has deeply shaken Ms Woodruff. “What happened to the questioning? What happened to me speaking to someone?” she said. “What happened to any of the initial steps that I thought were available to a person who was accused of doing something?” The case underscores the growing risks of civil rights violations as police departments and law enforcement agencies across the country increasingly adopt facial-recognition and other mass surveillance technologies, often used as an unreliable shortcut around methodical human police work.
Criminal justice advocates and the people targeted by this burgeoning police tech argue these programmes are riddled with the same biases and opaque or nonexistent oversight measures plaguing policing at large. The early results, at least, haven’t been encouraging. At least six people around the US have been falsely arrested using facial ID technology. All of them are Black. These misfires haven’t stopped the technology from proliferating across the country. At least half of federal agencies with law enforcement officers, and a quarter of state and local agencies, are using it. “We have no idea how often facial recognition is getting it wrong,” Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project (STOP), told The Independent. “When you have facial recognition being used thousands of times, without any accountability for mistakes, it’s inviting injustice,” he added. Nowhere has that injustice been more pronounced than Detroit, a city where Black people have long experienced documented over-policing from law enforcement. Three of the six people mistakenly arrested by facial recognition technology have been in the Motor City, according to the ACLU. This status quo is why Ms Woodruff is suing DPD, claiming among other things that the agency has engaged in “a pattern of racial discrimination” against her and other Black residents “by using facial recognition technology practices proven to misidentify Black citizens at a higher rate than others in violation of the equal protection guaranteed by” the Michigan civil rights statutes. “I definitely believe that situation would’ve gone differently had it been another race, honestly, just my opinion. There was no remorse shown to me and I was pregnant. I pleaded,” she told The Independent. “Being mistaken for something as serious as that crime – carjacking and armed robbery – that could’ve put me in a whole different type of lifestyle,” she added. “I was in school for nursing.
Felons cannot become nurses. I could’ve ended up in jail. That could have altered my life tremendously.” The Independent has requested comment from DPD. After Ms Woodruff filed her suit, Detroit police chief James White said in a press conference in August “poor investigative work” led to the false arrest, not facial recognition technology. He claimed that department software gave detectives numerous possible suspects and was only meant to be a “launch” point for further investigation. “What this is, is very, very poor investigative work that led to a number of inappropriate decisions being made along the lines of the investigation, and that’s something this team is committed to not only correcting, having accountability, having transparency with this community, and in building policy immediately to ensure regardless of the tool being used, this never happens,” Mr White said. He added that officers won’t be allowed to use images sourced by facial recognition in lineups, and warrants based on facial ID matches must be reviewed by two captains before being carried out. ‘The lead and the conclusion’ Some aren’t convinced these changes will prevent the excesses of what they see as a fundamentally flawed technology. “The technology is flawed. It’s inaccurate,” Philip Mayor, senior staff attorney at the ACLU of Michigan, told The Independent. “Police repeatedly assured us that it’s being used only as an investigative lead, but what we see here in Detroit time and time again is it is both being used as the lead and the conclusion.” Studies suggest that facial-recognition algorithms, which have been used to capture suspects in high-profile cases like those connected to January 6, also fail to accurately identify Black people and women, driving up inequalities in arrests, because image-training datasets often lack full diversity. 
However, according to Mr Mayor, police departments make things even worse by failing to do basic training and common-sense investigative work on top of facial recognition tools. He represents Robert Williams, a Detroit man who was mistakenly arrested for a 2020 theft from a high-end Detroit boutique. A security contractor employed by the store worked with the city and state police and flagged Mr Williams’ name using facial recognition tools. How police came to trust that Williams was the right man reveals the sloppiness of how facial ID tech is used in practice, according to the ACLU attorney. After the theft, police searched a database containing both past photos of Mr Williams and his present-day driver’s license. “It picks out 486 people who are the most likely perpetrators; not a single one of them is his current driver’s license, even though his current driver’s license is in the database that was searched,” Mr Mayor said. “That seems like an obvious exculpatory fact, the kind of thing that would lead you to say if you were actually thinking, this isn’t the right guy.” When these dubious matches are then used to build a line-up, questionable police work attains the gloss of near-fact, and witnesses choose from a group of people who may have no credible tie to a crime that took place but still look something like the person who did. “This is not me,” Mr Williams told police during his investigation, according to The New York Times. “You think all Black men look alike?” The father of two, after asking local police to voluntarily stop using facial recognition technology, sued the DPD in 2021. “This technology is dangerous when it doesn’t work, which is what the cases in Detroit are about. It’s even more dangerous when it does work. It can be used to systematically surveil us as we come and go from every one of the places that are important in our private lives,” the ACLU attorney said.
“I don’t think there’s any reason to believe that departments elsewhere right now are not making the same mistakes.” ‘A force multiplier for police racism’ Detroit isn’t the only place grappling with the impacts – and errors – of this technology. In Louisiana, the use of facial recognition technology led to a wrongful arrest of a Georgia man for a string of purse thefts. A man in Baltimore spent nine days in jail after police incorrectly identified him as a match to a suspect who assaulted a bus driver. The Baltimore Police Department ran nearly 800 facial recognition searches last year. Those cases and others have added to a growing list of misidentified suspects in a new era of racial profiling dragnets fuelled by tech that is rapidly outpacing police and lawmakers’ ability to fix it. Facial recognition software often is “a force multiplier for police racism,” worsening racial disparities and amplifying existing biases, according to Mr Cahn. It can spur a vicious cycle. Black and brown people are already arrested at disproportionate rates. These arrests mean they are more likely to enter a database of faces being analyzed and used for police investigations. Then, error-prone facial recognition technology is used to comb these databases, often failing to identify or distinguish between Black and brown people, particularly Black women. “So the algorithms are biased, but that’s just the start, not the end of the injustice,” Mr Cahn says. Such technologies, advocates warn, are embedded in wider mass surveillance programmes that often lack robust public oversight. In New York City, law enforcement agencies relied on facial recognition technology in at least 22,000 cases between 2016 and 2019, according to Amnesty International. 
New York City’s Police Department spent nearly $3bn growing its surveillance operations and adding new technology between 2007 and 2019, including roughly $400m for the Domain Awareness System, built in partnership with Microsoft to collect footage from tens of thousands of cameras throughout the city, according to an analysis from STOP and the Legal Aid Society. The NYPD has failed to comply with public disclosure requirements about what those contracts – from facial recognition software to drones and license plate readers – actually include, according to the report. That money was listed under “special expenses” in the police budget until passage of the Public Oversight of Surveillance Technology Act in 2020. The following year, more than $277m in budget items were listed under that special expenses programme, the report found. “We’ve seen just concerted pushback from police departments against the sort of oversight that every other type of government agency has because they don’t want to be held accountable,” according to Mr Cahn. “If we treated surveillance technology vendors the way we treated other technology vendors, it would be like Theranos – police would be arresting some of these vendors for fraud rather than giving them government contracts,” he added. “But there is no accountability.” On 7 August, 2020, New York City Police Department officers in riot gear launched a six-hour siege outside Derrick Ingram’s Hell’s Kitchen apartment. Mr Ingram – a racial justice organiser who is embroiled in a federal lawsuit against the NYPD – was surrounded by more than 50 officers after he allegedly shouted into an officer’s ear at a protest earlier that summer. Police insisted they had a warrant on assault charges, but couldn’t produce one when Mr Ingram asked them to, according to his suit. The whole encounter, in which the NYPD deployed snipers, drones, helicopters, and police dogs, began with facial recognition technology.
“To say that I was terrified is an understatement – I was traumatized, I still am,” Mr Ingram later testified. “I fear deep down in my core that if I opened my door to those officers, my life would be swiftly taken.” To identify Mr Ingram as a potential suspect, NYPD relied on facial recognition software “as a limited investigative tool, comparing a still image from a surveillance video to a pool of lawfully possessed arrest photos,” according to a police statement, adding that “no one has ever been arrested solely on the basis of a computer match.” The software pulls from a massive internal database of mugshots to generate possible matches, according to the department. Civil rights groups and lawmakers criticized the department’s use of facial recognition – initially hailed as a tool to crack down on violent offenders – for being deployed to suppress dissent, and for triggering a potentially lethal police encounter at Mr Ingram’s home. As for Ms Woodruff in Detroit, she hopes her experience can show the dangers of relying too heavily on facial recognition technology. “It may be a good tool to use, but you have to do the investigative part of using that, too,” she said. “It’s just like everything else. You have your pieces that you put together to complete a puzzle.” Her life would’ve been a whole lot different, she said, if “someone would’ve just taken the time to say, ‘OK, stop, we’re going to check this out, let me make a phone call.’”
2023-09-15 03:47