Security

Amnesty International Confirms Apple's Warning to Journalists About Spyware-Infected iPhones (techcrunch.com) 75

TechCrunch reports: Apple's warnings in late October that Indian journalists and opposition figures may have been targeted by state-sponsored attacks prompted a forceful counterattack from Prime Minister Narendra Modi's government. Officials publicly doubted Apple's findings and announced a probe into device security.

India has neither confirmed nor denied using the Pegasus tool, but nonprofit advocacy group Amnesty International reported Thursday that it found NSO Group's invasive spyware on the iPhones of prominent journalists in India, lending more credibility to Apple's early warnings. "Our latest findings show that increasingly, journalists in India face the threat of unlawful surveillance simply for doing their jobs, alongside other tools of repression including imprisonment under draconian laws, smear campaigns, harassment, and intimidation," said Donncha Ó Cearbhaill, head of Amnesty International's Security Lab, in the blog post.

Cloud security company Lookout has also published "an in-depth technical look" at Pegasus, calling its use "a targeted espionage attack being actively leveraged against an undetermined number of mobile users around the world." It uses sophisticated function hooking to subvert OS- and application-layer security in voice/audio calls and apps including Gmail, Facebook, WhatsApp, Facetime, Viber, WeChat, Telegram, Apple's built-in messaging and email apps, and others. It steals the victim's contact list and GPS location, as well as personal, Wi-Fi, and router passwords stored on the device...
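For readers unfamiliar with the term, "function hooking" means intercepting calls to an existing function so attacker-controlled code runs before (or instead of) the original. The Python sketch below is a toy analogy of that idea using simple monkey-patching; it is not Pegasus's actual native-code technique, and the `send_message` and `exfiltrate` names are made up for illustration.

```python
# Toy analogy of function hooking (not Pegasus's actual native-code
# technique): wrap an existing function so attacker code observes every
# call before forwarding it. `send_message` and `exfiltrate` are made up.

def send_message(recipient: str, text: str) -> None:
    """Stand-in for an app's real messaging function."""
    print(f"to {recipient}: {text}")

def exfiltrate(recipient: str, text: str) -> None:
    """Stand-in for spyware copying intercepted data to an attacker."""
    pass  # e.g., append to a hidden log or send over the network

def hook(original):
    """Return a wrapper that spies on each call, then behaves normally."""
    def wrapper(recipient: str, text: str):
        exfiltrate(recipient, text)       # attacker sees the plaintext
        return original(recipient, text)  # the app works as expected
    return wrapper

send_message = hook(send_message)  # install the hook
send_message("alice", "meet at noon")
```

Real mobile spyware does the equivalent at the native-code layer, which is why it can read messages after apps like WhatsApp or Telegram have already decrypted them.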

According to news reports, NSO Group sells weaponized software that targets mobile phones to governments; the company has been operating since 2010, according to its LinkedIn page. The Pegasus spyware has existed for a significant amount of time, and is advertised and sold for use on high-value targets for multiple purposes, including high-level espionage on iOS, Android, and BlackBerry.

Thanks to Slashdot reader Mirnotoriety for sharing the news.
China

That Chinese Spy Balloon Used an American ISP to Communicate, Say US Officials (nbcnews.com) 74

NBC News reports that the Chinese spy balloon that flew across the U.S. in February "used an American internet service provider to communicate, according to two current and one former U.S. official familiar with the assessment."

The balloon used the American ISP connection "to send and receive communications from China, primarily related to its navigation." Officials familiar with the assessment said it found that the connection allowed the balloon to send burst transmissions, or high-bandwidth collections of data over short periods of time.

The Biden administration sought a highly secretive court order from the federal Foreign Intelligence Surveillance Court to collect intelligence about it while it was over the U.S., according to multiple current and former U.S. officials. How the court ruled has not been disclosed. Such a court order would have allowed U.S. intelligence agencies to conduct electronic surveillance on the balloon as it flew over the U.S. and as it sent and received messages to and from China, the officials said, including communications sent via the American internet service provider...

The previously unreported U.S. effort to monitor the balloon's communications could be one reason Biden administration officials have insisted that they got more intelligence out of the device than it got as it flew over the U.S. Senior administration officials have said the U.S. was able to protect sensitive sites on the ground because they closely tracked the balloon's projected flight path. The U.S. military moved or obscured sensitive equipment so the balloon could not collect images or video while it was overhead.

NBC News is not naming the internet service provider, but says it denied that the Chinese balloon had used its network, "a determination it said was based on its own investigation and discussions it had with U.S. officials." The balloon contained "multiple antennas, including an array most likely able to collect and geolocate communications," according to a U.S. State Department official cited by NBC News in February. "It was also powered by enormous solar panels that generated enough power to operate intelligence collection sensors," the official said.

Reached for comment this week, a spokesperson for the Chinese Embassy in Washington told NBC News that the balloon was just a weather balloon that had accidentally drifted into American airspace.
Government

India Targets Apple Over Its Phone Hacking Notifications (washingtonpost.com) 100

In October, Apple issued notifications warning more than a half dozen Indian lawmakers that their iPhones had been targeted in state-sponsored attacks. According to a new report from the Washington Post, the Modi government responded by criticizing Apple's security and demanding explanations to mitigate political impact (Warning: source may be paywalled; alternative source). From the report: Officials from the ruling Bharatiya Janata Party (BJP) publicly questioned whether the Silicon Valley company's internal threat algorithms were faulty and announced an investigation into the security of Apple devices. In private, according to three people with knowledge of the matter, senior Modi administration officials called Apple's India representatives to demand that the company help soften the political impact of the warnings. They also summoned an Apple security expert from outside the country to a meeting in New Delhi, where government representatives pressed the Apple official to come up with alternative explanations for the warnings to users, the people said. They spoke on the condition of anonymity to discuss sensitive matters. "They were really angry," one of those people said.

The visiting Apple official stood by the company's warnings. But the intensity of the Indian government effort to discredit and strong-arm Apple disturbed executives at the company's headquarters, in Cupertino, Calif., and illustrated how even Silicon Valley's most powerful tech companies can face pressure from the increasingly assertive leadership of the world's most populous country -- and one of the most critical technology markets of the coming decade. The recent episode also exemplified the dangers facing government critics in India and the lengths to which the Modi administration will go to deflect suspicions that it has engaged in hacking against its perceived enemies, according to digital rights groups, industry workers and Indian journalists. Many of the more than 20 people who received Apple's warnings at the end of October have been publicly critical of Modi or his longtime ally, Gautam Adani, an Indian energy and infrastructure tycoon. They included a firebrand politician from West Bengal state, a Communist leader from southern India and a New Delhi-based spokesman for the nation's largest opposition party. [...] Gopal Krishna Agarwal, a national spokesman for the BJP, said any evidence of hacking should be presented to the Indian government for investigation.

The Modi government has never confirmed or denied using spyware, and it has refused to cooperate with a committee appointed by India's Supreme Court to investigate whether it had. But two years ago, the Forbidden Stories journalism consortium, which included The Post, found that phones belonging to Indian journalists and political figures were infected with Pegasus, which grants attackers access to a device's encrypted messages, camera and microphone. In recent weeks, The Post, in collaboration with Amnesty, found fresh cases of infections among Indian journalists. Additional work by The Post and New York security firm iVerify found that opposition politicians had been targeted, adding to the evidence suggesting the Indian government's use of powerful surveillance tools. In addition, Amnesty showed The Post evidence it found in June that suggested a Pegasus customer was preparing to hack people in India. Amnesty asked that the evidence not be detailed to avoid teaching Pegasus users how to cover their tracks.
"These findings show that spyware abuse continues unabated in India," said Donncha O Cearbhaill, head of Amnesty International's Security Lab. "Journalists, activists and opposition politicians in India can neither protect themselves against being targeted by highly invasive spyware nor expect meaningful accountability."
Electronic Frontier Foundation

EFF Warns: 'Think Twice Before Giving Surveillance for the Holidays' (eff.org) 28

"It's easy to default to giving the tech gifts that retailers tend to push on us this time of year..." notes Lifehacker senior writer Thorin Klosowski.

"But before you give one, think twice about what you're opting that person into." A number of these gifts raise red flags for us as privacy-conscious digital advocates. Ring cameras are one of the most obvious examples, but countless others over the years have made the security or privacy naughty list (and many of these same electronics directly clash with your right to repair). One big problem with giving these sorts of gifts is that you're opting another person into a company's intrusive surveillance practice, likely without their full knowledge of what they're really signing up for... And let's not forget about kids. Long subjected to surveillance from elves and their managers, electronics gifts for kids can come with all sorts of surprise issues, like the kid-focused tablet we found this year that was packed with malware and riskware. Kids' smartwatches and a number of connected toys are also potential privacy hazards that may not be worth the risks if not set up carefully.

Of course, you don't have to avoid all technology purchases. There are plenty of products out there that aren't creepy, and a few that just need extra attention during set up to ensure they're as privacy-protecting as possible. While we don't endorse products, you don't have to start your search in a vacuum. One helpful place to start is Mozilla's Privacy Not Included gift guide, which provides a breakdown of the privacy practices and history of products in a number of popular gift categories.... U.S. PIRG also has guidance for shopping for kids, including details about what to look for in popular categories like smart toys and watches....

Your job as a privacy-conscious gift-giver doesn't end at the checkout screen. If you're more tech savvy than the person receiving the item, or you're helping set up a gadget for a child, there's no better gift than helping set it up as privately as possible.... Giving the gift of electronics shouldn't come with so much homework, but until we have a comprehensive data privacy law, we'll likely have to contend with these sorts of set-up hoops. Until that day comes, we can all take the time to help those who need it.

AI

Rite Aid Banned From Using Facial Recognition Software 60

An anonymous reader quotes a report from TechCrunch: Rite Aid has been banned from using facial recognition software for five years, after the Federal Trade Commission (FTC) found that the U.S. drugstore giant's "reckless use of facial surveillance systems" left customers humiliated and put their "sensitive information at risk." The FTC's Order (PDF), which is subject to approval from the U.S. Bankruptcy Court after Rite Aid filed for Chapter 11 bankruptcy protection in October, also instructs Rite Aid to delete any images it collected as part of its facial recognition system rollout, as well as any products that were built from those images. The company must also implement a robust data security program to safeguard any personal data it collects.

A Reuters report from 2020 detailed how the drugstore chain had secretly introduced facial recognition systems across some 200 U.S. stores over an eight-year period starting in 2012, with "largely lower-income, non-white neighborhoods" serving as the technology testbed. With the FTC's increasing focus on the misuse of biometric surveillance, Rite Aid fell firmly in the government agency's crosshairs. Among its allegations are that Rite Aid -- in partnership with two contracted companies -- created a "watchlist database" containing images of customers that the company said had engaged in criminal activity at one of its stores. These images, which were often poor quality, were captured from CCTV or employees' mobile phone cameras.

When a customer entered a store who supposedly matched an existing image on its database, employees would receive an automatic alert instructing them to take action -- and the majority of the time this instruction was to "approach and identify," meaning verifying the customer's identity and asking them to leave. Often, these "matches" were false positives that led to employees incorrectly accusing customers of wrongdoing, creating "embarrassment, harassment, and other harm," according to the FTC. "Employees, acting on false positive alerts, followed consumers around its stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them, sometimes in front of friends or family, of shoplifting or other wrongdoing," the complaint reads. Additionally, the FTC said that Rite Aid failed to inform customers that facial recognition technology was in use, while also instructing employees to specifically not reveal this information to customers.
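For context on why poor-quality watchlist images produce false positives: face recognition systems typically reduce each face to an embedding vector and flag a "match" when similarity to an enrolled image clears a threshold. The Python sketch below illustrates that generic design; it is an assumption for illustration, not Rite Aid's or its contractors' actual system, and the names and the 0.6 threshold are invented.

```python
# Sketch of watchlist-style face matching (an assumed, generic design,
# not Rite Aid's actual system). Faces are reduced to embedding vectors
# and compared by cosine similarity against enrolled watchlist images.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(probe: np.ndarray, watchlist: dict, threshold: float = 0.6):
    """Return the watchlist ID that best matches `probe`, or None."""
    best_id, best_score = None, threshold
    for person_id, enrolled in watchlist.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

rng = np.random.default_rng(0)
watchlist = {"person-42": rng.normal(size=128)}  # enrollment embedding
shopper = rng.normal(size=128)                   # an unrelated customer
print(match(shopper, watchlist))                 # None at this threshold
```

Blurry CCTV stills and phone snapshots yield noisy embeddings, so operators face a tradeoff: raise the threshold and miss real matches, or lower it and flood employees with exactly the kind of false alerts the FTC describes.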
In a press release, Rite Aid said that it was "pleased to reach an agreement with the FTC," but that it disagreed with the crux of the allegations.

"The allegations relate to a facial recognition technology pilot program the Company deployed in a limited number of stores," Rite Aid said in its statement. "Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC's investigation regarding the Company's use of the technology began."
Transportation

In Contrast To Cruise, Waymo Is Touting Its Vehicles' Safety In New Report (sfist.com) 55

Waymo has a new peer-reviewed study (PDF) to share that shows how safe its autonomous cars are compared to cars driven by humans. SFist reports: As the Chronicle notes, the study covers the 1.76 million driverless miles that Waymo's cars have registered in San Francisco so far, along with about 5.4 million miles registered elsewhere. It compares data about vehicle crashes of all kinds, and finds that Waymo vehicles were involved in crashes resulting in injury or property damage far less often than human-driven cars. In fact, the "human benchmark" -- which is what Waymo is using to refer to human averages for various driving foibles -- is 5.55 crashes per 1 million miles. And the Waymo robot benchmark is just 0.6 crashes per 1 million miles. The overall figure for crash rates found Waymo's to be 6.7 times lower (0.41 incidents per 1 million miles) than the rate of humans (2.78 per million). This included data from Phoenix, San Francisco, and Los Angeles.
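The headline ratio can be checked directly from the two overall rates quoted:

```python
# Checking the article's arithmetic on the overall crash rates.
human_rate = 2.78  # incidents per 1M miles, human-driven benchmark
waymo_rate = 0.41  # incidents per 1M miles, Waymo
print(human_rate / waymo_rate)  # ~6.78; the "6.7 times" figure evidently
                                # comes from less-rounded underlying rates
```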

The report's "Conclusions" section is less than definitive in its findings, noting that the data of police-reported incidents across various jurisdictions may not be consistent or "apples-to-apples." "The benchmark rates themselves... varied considerably between locations and within the same location," the report's authors say. "This raises questions whether the benchmark data sources have comparable reporting thresholds (surveillance bias) or if other factors that were not controlled for in the benchmarks (time of day, mix of driving) is affecting the benchmark rates."

Still, the report, one of several that Alphabet-owned Waymo has commissioned in recent months, is convincingly thorough and academic in its approach, and seems to be great news for the company as it hopes to scale up -- starting with the enormous LA market. Waymo, like Cruise previously, has sought to convince a skeptical public that driverless vehicles are, in fact, safer than humans. And this is another step toward doing so -- even if people are going to be naturally wary of sharing the road with too many robots.

Education

Ask Slashdot: What Are Some Methods To Stop Digital Surveillance In Schools? 115

Longtime Slashdot reader Kreuzfeld writes: Help please: here in Lawrence, Kansas, the public school district has recently started using Gaggle (source may be paywalled; alternative source), a system for monitoring all digital documents and communications created by students on school-provided devices. Unsurprisingly, the system inundates employees with false 'alerts' but the district nonetheless hails this pervasive, dystopic surveillance system as a great success. What useful advice can readers here offer regarding successful methods to get public officials to backtrack from a policy so corrosive to liberty, trust, and digital freedoms?
Facebook

Does Meta's New Face Camera Herald a New Age of Surveillance? Or Distraction... (seattletimes.com) 74

"For the past two weeks, I've been using a new camera to secretly snap photos and record videos of strangers in parks, on trains, inside stores and at restaurants," writes a reporter for the New York Times. They were testing the recently released $300 Ray-Ban Meta glasses — "I promise it was all in the name of journalism" — which also includes microphones (and speakers, for listening to audio).

They call the device "part of a broader ambition in Silicon Valley to shift computing away from smartphone and computer screens and toward our faces." Meta, Apple and Magic Leap have all been hyping mixed-reality headsets that use cameras to allow their software to interact with objects in the real world. On Tuesday, Zuckerberg posted a video on Instagram demonstrating how the smart glasses could use AI to scan a shirt and help him pick out a pair of matching pants. Wearable face computers, the companies say, could eventually change the way we live and work... While I was impressed with the comfortable, stylish design of the glasses, I felt bothered by the implications for our privacy...

To inform people that they are being photographed, the Ray-Ban Meta glasses include a tiny LED light embedded in the right frame to indicate when the device is recording. When a photo is snapped, it flashes momentarily. When a video is recording, it is continuously illuminated. As I shot 200 photos and videos with the glasses in public, including on BART trains, on hiking trails and in parks, no one looked at the LED light or confronted me about it. And why would they? It would be rude to comment on a stranger's glasses, let alone stare at them... [A] Meta spokesperson said the company took privacy seriously and designed safety measures, including a tamper-detection technology, to prevent users from covering up the LED light with tape.

But another concern was how smart glasses might impact our ability to focus: Even when I wasn't using any of the features, I felt distracted while wearing them... I had problems concentrating while driving a car or riding a scooter. Not only was I constantly bracing myself for opportunities to shoot video, but the reflection from other car headlights emitted a harsh, blue strobe effect through the eyeglass lenses. Meta's safety manual for the Ray-Bans advises people to stay focused while driving, but it doesn't mention the glare from headlights. While doing work on a computer, the glasses felt unnecessary because there was rarely anything worth photographing at my desk, but a part of my mind constantly felt preoccupied by the possibility...

Ben Long, a photography teacher in San Francisco, said he was skeptical about the premise of the Meta glasses helping people remain present. "If you've got the camera with you, you're immediately not in the moment," he said. "Now you're wondering, Is this something I can present and record?"

The reporter admits they'll fondly cherish its photos of their dog [including in the original article], but "the main problem is that the glasses don't do much we can't already do with phones... while these types of moments are truly precious, that benefit probably won't be enough to convince a vast majority of consumers to buy smart glasses and wear them regularly, given the potential costs of lost privacy and distraction."
Apple

Apple Now Requires a Judge's Consent To Hand Over Push Notification Data (reuters.com) 19

Apple has said it now requires a judge's order to hand over information about its customers' push notifications to law enforcement, putting the iPhone maker's policy in line with rival Google's and raising the hurdle officials must clear to get app data about users. From a report: The new policy was not formally announced but appeared sometime over the past few days on Apple's publicly available law enforcement guidelines. It follows the revelation from Oregon Senator Ron Wyden that officials were requesting such data from Apple as well as from Google, the unit of Alphabet that makes the operating system for Android phones.

Apps of all kinds rely on push notifications to alert smartphone users to incoming messages, breaking news, and other updates. These are the audible "dings" or visual indicators users get when they receive an email or their sports team wins a game. What users often do not realize is that almost all such notifications travel over Google and Apple's servers. In a letter first disclosed by Reuters last week, Wyden said the practice gave the two companies unique insight into traffic flowing from those apps to users, putting them "in a unique position to facilitate government surveillance of how users are using particular apps."
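The routing Wyden describes is structural: an app's own server cannot reach a phone directly, so it hands each notification to Apple's APNs or Google's FCM, which deliver it over the platform's persistent connection to the device. As a rough sketch of that hand-off, here is what a backend's call to FCM's HTTP v1 API looks like; the project ID, device token, and OAuth token below are placeholders.

```python
# Sketch of an app backend sending a push through Google's FCM HTTP v1 API.
# The notification payload transits Google's servers on its way to the
# device, which is what makes it reachable by legal demands.
# PROJECT_ID, DEVICE_TOKEN, and ACCESS_TOKEN are placeholders.

import requests

PROJECT_ID = "example-app"                     # placeholder Firebase project
DEVICE_TOKEN = "registration-token-from-app"   # placeholder device token
ACCESS_TOKEN = "oauth2-service-account-token"  # placeholder OAuth2 token

resp = requests.post(
    f"https://fcm.googleapis.com/v1/projects/{PROJECT_ID}/messages:send",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"message": {
        "token": DEVICE_TOKEN,
        "notification": {"title": "New message", "body": "Tap to read"},
    }},
    timeout=10,
)
resp.raise_for_status()
```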

Television

Your Smart TV Knows What You're Watching (themarkup.org) 164

An anonymous reader shares a report: If you bought a new smart TV during any of the holiday sales, there's likely to be an uninvited guest watching along with you. The most popular smart TVs sold today use automatic content recognition (ACR), a kind of ad surveillance technology that collects data on everything you view and sends it to a proprietary database to identify what you're watching and serve you highly targeted ads. The software is largely hidden from view, and it's complicated to opt out. Many consumers aren't aware of ACR, let alone that it's active on their shiny new TVs. If that's you, and you'd like to turn it off, we're going to show you how.

First, a quick primer on the tech: ACR identifies what's displayed on your television, including content served through a cable TV box, streaming service, or game console, by continuously grabbing screenshots and comparing them to a massive database of media and advertisements. Think of it as a Shazam-like service constantly running in the background while your TV is on.
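Mechanically, ACR systems typically reduce each captured frame to a compact perceptual fingerprint and look it up in a server-side index. The Python sketch below illustrates that general approach with the open-source imagehash library; it is an assumed design for illustration, not any TV vendor's actual pipeline, and the file names are placeholders.

```python
# Toy illustration of ACR-style frame matching (assumed design): reduce
# each frame to a 64-bit perceptual hash and look it up in an index of
# known content and ads. File names below are placeholders.

from PIL import Image  # pip install pillow
import imagehash       # pip install imagehash

def fingerprint(path: str) -> imagehash.ImageHash:
    """Collapse a frame to a hash that survives scaling and compression."""
    return imagehash.phash(Image.open(path))

# Server-side index: fingerprints of known programming and advertisements.
fingerprint_db = {fingerprint("brand_x_ad_frame.png"): "Ad: Brand X, 30s spot"}

def identify_frame(path: str):
    """Match one captured frame, tolerating small pixel-level differences."""
    frame_hash = fingerprint(path)
    for known_hash, label in fingerprint_db.items():
        if frame_hash - known_hash <= 8:  # Hamming-distance tolerance
            return label
    return None

# At roughly two captures per second, a TV doing this generates the
# ~7,200 lookups per hour the article cites.
```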

These TVs can capture and identify 7,200 images per hour, or approximately two every second. The data is then used for content recommendations and ad targeting, which is a huge business; advertisers spent an estimated $18.6 billion on smart TV ads in 2022, according to market research firm eMarketer. For anyone who'd rather not have ACR looking over their shoulder while they watch, we've put together a guide to turning it off on three of the most popular smart TV software platforms in use last year. Depending on the platform, turning off ACR took us between 10 and 37 clicks.

AI

MIT Group Releases White Papers On Governance of AI (mit.edu) 46

An anonymous reader quotes a report from MIT News: Providing a resource for U.S. policymakers, a committee of MIT leaders and scholars has released a set of policy briefs that outlines a framework for the governance of artificial intelligence. The approach includes extending current regulatory and liability approaches in pursuit of a practical way to oversee AI. The aim of the papers is to help enhance U.S. leadership in the area of artificial intelligence broadly, while limiting harm that could result from the new technologies and encouraging exploration of how AI deployment could be beneficial to society.

The main policy paper, "A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector," suggests AI tools can often be regulated by existing U.S. government entities that already oversee the relevant domains. The recommendations also underscore the importance of identifying the purpose of AI tools, which would enable regulations to fit those applications. "As a country we're already regulating a lot of relatively high-risk things and providing governance there," says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, who helped steer the project, which stemmed from the work of an ad hoc MIT committee. "We're not saying that's sufficient, but let's start with things where human activity is already being regulated, and which society, over time, has decided are high risk. Looking at AI that way is the practical approach." [...]

"The framework we put together gives a concrete way of thinking about these things," says Asu Ozdaglar, the deputy dean of academics in the MIT Schwarzman College of Computing and head of MIT's Department of Electrical Engineering and Computer Science (EECS), who also helped oversee the effort. The project includes multiple additional policy papers and comes amid heightened interest in AI over last year as well as considerable new industry investment in the field. The European Union is currently trying to finalize AI regulations using its own approach, one that assigns broad levels of risk to certain types of applications. In that process, general-purpose AI technologies such as language models have become a new sticking point. Any governance effort faces the challenges of regulating both general and specific AI tools, as well as an array of potential problems including misinformation, deepfakes, surveillance, and more.
These are the key policies and approaches mentioned in the white papers:

Extension of Current Regulatory and Liability Approaches: The framework proposes extending current regulatory and liability approaches to cover AI. It suggests leveraging existing U.S. government entities that oversee relevant domains for regulating AI tools. This is seen as a practical approach, starting with areas where human activity is already being regulated and deemed high risk.

Identification of Purpose and Intent of AI Tools: The framework emphasizes the importance of AI providers defining the purpose and intent of AI applications in advance. This identification process would enable the application of relevant regulations based on the specific purpose of AI tools.

Responsibility and Accountability: The policy brief underscores the responsibility of AI providers to clearly define the purpose and intent of their tools. It also suggests establishing guardrails to prevent misuse and determining the extent of accountability for specific problems. The framework aims to identify situations where end users could reasonably be held responsible for the consequences of misusing AI tools.

Advances in Auditing of AI Tools: The policy brief calls for advances in auditing new AI tools, whether initiated by the government, user-driven, or arising from legal liability proceedings. Public standards for auditing are recommended, potentially established by a nonprofit entity or a federal entity similar to the National Institute of Standards and Technology (NIST).

Consideration of a Self-Regulatory Organization (SRO): The framework suggests considering the creation of a new, government-approved "self-regulatory organization" (SRO) agency for AI. This SRO, similar to FINRA for the financial industry, could accumulate domain-specific knowledge, ensuring responsiveness and flexibility in engaging with a rapidly changing AI industry.

Encouragement of Research for Societal Benefit: The policy papers highlight the importance of encouraging research on how to make AI beneficial to society. For instance, there is a focus on exploring the possibility of AI augmenting and aiding workers rather than replacing them, leading to long-term economic growth distributed throughout society.

Addressing Legal Issues Specific to AI: The framework acknowledges the need to address specific legal matters related to AI, including copyright and intellectual property issues. Special consideration is also mentioned for "human plus" legal issues, where AI capabilities go beyond human capacities, such as mass surveillance tools.

Broadening Perspectives in Policymaking: The ad hoc committee emphasizes the need for a broad range of disciplinary perspectives in policymaking, advocating for academic institutions to play a role in addressing the interplay between technology and society. The goal is to govern AI effectively by considering both technical and social systems.
Privacy

Ex-Commissioner For Facial Recognition Tech Joins Facewatch Firm He Approved (theguardian.com) 12

The recently departed watchdog in charge of monitoring facial recognition technology in the UK has joined the private firm he controversially approved, paving the way for the mass roll-out of biometric surveillance cameras in high streets across the country. From a report: In a move critics have dubbed an "outrageous conflict of interest," Professor Fraser Sampson, former biometrics and surveillance camera commissioner, has joined Facewatch as a non-executive director. Sampson left his watchdog role on 31 October, with Companies House records showing he was registered as a company director at Facewatch the following day, 1 November.

Campaigners claim this might mean he was negotiating his Facewatch contract while in post, and have urged the advisory committee on business appointments to investigate if it may have "compromised his work in public office." It is understood that the committee is currently considering the issue. Facewatch uses biometric cameras to check faces against a watch list and, despite widespread concern over the technology, has received backing from the Home Office, and has already been introduced in hundreds of high-street shops and supermarkets.

EU

Europe Reaches a Deal On the World's First Comprehensive AI Rules (apnews.com) 36

An anonymous reader quotes a report from the Associated Press: European Union negotiators clinched a deal Friday on the world's first comprehensive artificial intelligence rules, paving the way for legal oversight of technology used in popular generative AI services like ChatGPT that has promised to transform everyday life and spurred warnings of existential dangers to humanity. Negotiators from the European Parliament and the bloc's 27 member countries overcame big differences on controversial points including generative AI and police use of facial recognition surveillance to sign a tentative political agreement for the Artificial Intelligence Act.

"Deal!" tweeted European Commissioner Thierry Breton, just before midnight. "The EU becomes the very first continent to set clear rules for the use of AI." The result came after marathon closed-door talks this week, with one session lasting 22 hours before a second round kicked off Friday morning. Officials provided scant details on what exactly will make it into the eventual law, which wouldn't take effect until 2025 at the earliest. They were under the gun to secure a political victory for the flagship legislation but were expected to leave the door open to further talks to work out the fine print, likely to bring more backroom lobbying.

The AI Act was originally designed to mitigate the dangers from specific AI functions based on their level of risk, from low to unacceptable. But lawmakers pushed to expand it to foundation models, the advanced systems that underpin general purpose AI services like ChatGPT and Google's Bard chatbot. Foundation models looked set to be one of the biggest sticking points for Europe. However, negotiators managed to reach a tentative compromise early in the talks, despite opposition led by France, which called instead for self-regulation to help homegrown European generative AI companies compete with big U.S. rivals, including OpenAI's backer Microsoft. [...] Under the deal, the most advanced foundation models that pose the biggest "systemic risks" will get extra scrutiny, including requirements to disclose more information such as how much computing power was used to train the systems.

Google

Governments Spying on Apple, Google Users Through Push Notifications (reuters.com) 33

Unidentified governments are surveilling smartphone users via their apps' push notifications, a U.S. senator warned on Wednesday. From a report: In a letter to the Department of Justice, Senator Ron Wyden said foreign officials were demanding the data from Alphabet's Google and Apple. Although details were sparse, the letter lays out yet another path by which governments can track smartphones. Apps of all kinds rely on push notifications to alert smartphone users to incoming messages, breaking news, and other updates. [...] That gives the two companies unique insight into the traffic flowing from those apps to their users, and in turn puts them "in a unique position to facilitate government surveillance of how users are using particular apps," Wyden said.

He asked the Department of Justice to "repeal or modify any policies" that hindered public discussions of push notification spying. In a statement, Apple said that Wyden's letter gave them the opening they needed to share more details with the public about how governments monitored push notifications. "In this case, the federal government prohibited us from sharing any information," the company said in a statement. "Now that this method has become public we are updating our transparency reporting to detail these kinds of requests."

AI

AI Models May Enable a New Era of Mass Spying, Says Bruce Schneier (arstechnica.com) 37

An anonymous reader quotes a report from Ars Technica: In an editorial for Slate published Monday, renowned security researcher Bruce Schneier warned that AI models may enable a new era of mass spying, allowing companies and governments to automate the process of analyzing and summarizing large volumes of conversation data, fundamentally lowering barriers to spying activities that currently require human labor. In the piece, Schneier notes that the existing landscape of electronic surveillance has already transformed the modern era, becoming the business model of the Internet, where our digital footprints are constantly tracked and analyzed for commercial reasons.

Spying, by contrast, can take that kind of economically inspired monitoring to a completely new level: "Spying and surveillance are different but related things," Schneier writes. "If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did." Schneier says that current spying methods, like phone tapping or physical surveillance, are labor-intensive, but the advent of AI significantly reduces this constraint. Generative AI systems are increasingly adept at summarizing lengthy conversations and sifting through massive datasets to organize and extract relevant information. This capability, he argues, will not only make spying more accessible but also more comprehensive. "This spying is not limited to conversations on our phones or computers," Schneier writes. "Just as cameras everywhere fueled mass surveillance, microphones everywhere will fuel mass spying. Siri and Alexa and 'Hey, Google' are already always listening; the conversations just aren't being saved yet." [...]
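The labor-saving step Schneier describes is already an off-the-shelf operation. A minimal sketch of bulk conversation summarization using the open-source transformers library follows; the model choice and transcript data are illustrative placeholders, not anything Schneier's piece prescribes.

```python
# Sketch of the automation Schneier warns about: bulk transcripts fed to a
# summarization model, producing analyst-readable digests with no human
# listener in the loop. Model and data here are illustrative placeholders.

from transformers import pipeline  # pip install transformers

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

transcripts = [
    "Speaker A: Can we move the rally to Thursday? Speaker B: Yes, and "
    "bring the printed flyers; we expect around two hundred people.",
    # ...millions more machine-transcribed conversations...
]

for text in transcripts:
    digest = summarizer(text, max_length=40, min_length=5)[0]["summary_text"]
    print(digest)  # one line per conversation: who, what, when
```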

In his editorial, Schneier raises concerns about the chilling effect that mass spying could have on society, cautioning that the knowledge of being under constant surveillance may lead individuals to alter their behavior, engage in self-censorship, and conform to perceived norms, ultimately stifling free expression and personal privacy. So what can people do about it? Anyone seeking protection from this type of mass spying will likely need to look toward government regulation to keep it in check since commercial pressures often trump technological safety and ethics. [...] Schneier isn't optimistic on that front, however, closing with the line, "We could prohibit mass spying. We could pass strong data-privacy rules. But we haven't done anything to limit mass surveillance. Why would spying be any different?" It's a thought-provoking piece, and you can read the entire thing on Slate.

Transportation

Automakers' Data Privacy Practices 'Are Unacceptable,' Says US Senator (arstechnica.com) 18

An anonymous reader quotes a report from Ars Technica: US Senator Edward Markey (D-Mass.) is one of the more technologically engaged of our elected lawmakers. And like many technologically engaged Ars Technica readers, he does not like what he sees in terms of automakers' approach to data privacy. On Friday, Sen. Markey wrote to 14 car companies with a variety of questions about data privacy policies, urging them to do better. As Ars reported in September, the Mozilla Foundation published a scathing report on the subject of data privacy and automakers. The problems were widespread -- most automakers collect too much personal data and are too eager to sell or share it with third parties, the foundation found.

Markey noted (PDF) the Mozilla Foundation report in his letters, which were sent to BMW, Ford, General Motors, Honda, Hyundai, Kia, Mazda, Mercedes-Benz, Nissan, Stellantis, Subaru, Tesla, Toyota, and Volkswagen. The senator is concerned about the large amounts of data that modern cars can collect, including the troubling potential to use biometric data (like the rate a driver blinks and breathes, as well as their pulse) to infer mood or mental health. Sen. Markey is also worried about automakers' use of Bluetooth, which he said has expanded "their surveillance to include information that has nothing to do with a vehicle's operation, such as data from smartphones that are wirelessly connected to the vehicle."
"These practices are unacceptable," Markey wrote. "Although certain data collection and sharing practices may have real benefits, consumers should not be subject to a massive data collection apparatus, with any disclosures hidden in pages-long privacy policies filled with legalese. Cars should not -- and cannot -- become yet another venue where privacy takes a backseat."

The 14 automakers have until December 21 to answer Markey's questions.
Electronic Frontier Foundation

EFF Proposes Addressing Online Harms with 'Privacy-First' Policies (eff.org) 32

Long-time Slashdot reader nmb3000 writes: The Electronic Frontier Foundation has published a new white paper, Privacy First: A Better Way to Address Online Harms, to propose an alternative to the "often ill-conceived bills written by state, federal, and international regulators to tackle a broad set of digital topics ranging from child safety to artificial intelligence." According to the EFF, "these scattershot proposals to correct online harm are often based on censorship and news cycles. Instead of this chaotic approach that rarely leads to the passage of good laws, we propose another solution."
The EFF writes:

What would this comprehensive privacy law look like? We believe it must include these components:

  • No online behavioral ads.
  • Data minimization.
  • Opt-in consent.
  • User rights to access, port, correct, and delete information.
  • No preemption of state laws.
  • Strong enforcement with a private right to action.
  • No pay-for-privacy schemes.
  • No deceptive design.

A strong comprehensive data privacy law promotes privacy, free expression, and security. It can also help protect children, support journalism, protect access to health care, foster digital justice, limit private data collection to train generative AI, limit foreign government surveillance, and strengthen competition. These are all issues on which lawmakers are actively pushing legislation—both good and bad.


The Military

The US Military's AI 'Swarm' Initiatives Speed Pace of Hard Decisions About Autonomous Weapons (apnews.com) 70

AI employed by the U.S. military "has piloted pint-sized surveillance drones in special operations forces' missions and helped Ukraine in its war against Russia," reports the Associated Press.

But that's the beginning. AI also "tracks soldiers' fitness, predicts when Air Force planes need maintenance and helps keep tabs on rivals in space." Now, the Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. The ambitious initiative — dubbed Replicator — seeks to "galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap, and many," Deputy Secretary of Defense Kathleen Hicks said in August. While its funding is uncertain and details vague, Replicator is expected to accelerate hard decisions on what AI tech is mature and trustworthy enough to deploy — including on weaponized systems.

There is little dispute among scientists, industry experts and Pentagon officials that the U.S. will within the next few years have fully autonomous lethal weapons. And though officials insist humans will always be in control, experts say advances in data-processing speed and machine-to-machine communications will inevitably relegate people to supervisory roles. That's especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on them — and none of China, Russia, Iran, India, or Pakistan has signed a U.S.-initiated pledge to use military AI responsibly.

United States

Secretive White House Surveillance Program Gives Cops Access To Trillions of US Phone Records (wired.com) 104

An anonymous reader quotes a report from Wired: A little-known surveillance program tracks more than a trillion domestic phone records within the United States each year, according to a letter WIRED obtained that was sent by US senator Ron Wyden to the Department of Justice (DOJ) on Sunday, challenging the program's legality. According to the letter, a surveillance program now known as Data Analytical Services (DAS) has for more than a decade allowed federal, state, and local law enforcement agencies to mine the details of Americans' calls, analyzing the phone records of countless people who are not suspected of any crime, including victims. Using a technique known as chain analysis, the program targets not only those in direct phone contact with a criminal suspect but anyone with whom those individuals have been in contact as well.
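Chain analysis, as described, is essentially a breadth-first walk over a call graph: start at a suspect's number and expand outward hop by hop, sweeping in people who never spoke to the suspect at all. A minimal sketch follows; the record layout and phone numbers are assumed for illustration.

```python
# Minimal sketch of "chain analysis" over call records: a breadth-first
# search expanding N hops from a seed number, so second-hop contacts who
# never spoke to the suspect are swept in too. Record layout is assumed.

from collections import defaultdict, deque

calls = [  # (caller, recipient) pairs; real records also carry names/times
    ("555-0001", "555-0002"),
    ("555-0002", "555-0003"),
    ("555-0003", "555-0004"),
]

graph = defaultdict(set)
for a, b in calls:
    graph[a].add(b)
    graph[b].add(a)  # treat a call as mutual contact

def chain_analysis(seed: str, hops: int = 2) -> set:
    """Every number reachable within `hops` calls of the seed number."""
    seen, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        number, depth = frontier.popleft()
        if depth == hops:
            continue
        for contact in graph[number]:
            if contact not in seen:
                seen.add(contact)
                frontier.append((contact, depth + 1))
    return seen - {seed}

print(chain_analysis("555-0001"))  # {'555-0002', '555-0003'}
```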

The DAS program, formerly known as Hemisphere, is run in coordination with the telecom giant AT&T, which captures and conducts analysis of US call records for law enforcement agencies, from local police and sheriffs' departments to US customs offices and postal inspectors across the country, according to a White House memo reviewed by WIRED. Records show that the White House has, for the past decade, provided more than $6 million to the program, which allows the targeting of the records of any calls that use AT&T's infrastructure -- a maze of routers and switches that crisscross the United States. In a letter to US attorney general Merrick Garland on Sunday, Wyden wrote that he had "serious concerns about the legality" of the DAS program, adding that "troubling information" he'd received "would justifiably outrage many Americans and other members of Congress." That information, which Wyden says the DOJ confidentially provided to him, is considered "sensitive but unclassified" by the US government, meaning that while it poses no risk to national security, federal officials, like Wyden, are forbidden from disclosing it to the public, according to the senator's letter.
AT&T spokesperson Kim Hart Jonson said only that the company is required by law to comply with a lawful subpoena. However, "there is no law requiring AT&T to store decades' worth of Americans' call records for law enforcement purposes," notes Wired. "Documents reviewed by WIRED show that AT&T officials have attended law enforcement conferences in Texas as recently as 2018 to train police officials on how best to utilize AT&T's voluntary, albeit revenue-generating, assistance."

"The collection of call record data under DAS is not wiretapping, which on US soil requires a warrant based on probable cause. Call records stored by AT&T do not include recordings of any conversations. Instead, the records include a range of identifying information, such as the caller and recipient's names, phone numbers, and the dates and times they placed calls, for six months or more at a time." It's unclear exactly how far back the call records accessible under DAS go, although a slide deck released under the Freedom of Information Act in 2014 states that they can be queried for up to 10 years.
United States

US Privacy Groups Urge Senate Not To Ram Through NSA Spying Powers (wired.com) 35

Some of the United States' largest civil liberties groups are urging Senate majority leader Chuck Schumer not to pursue a short-term extension of the Section 702 surveillance program slated to sunset on December 31. From a report: The more than 20 groups -- Demand Progress, the Brennan Center for Justice, American Civil Liberties Union, and Asian Americans Advancing Justice among them -- oppose plans that would allow the program to continue temporarily by amending "must-pass" legislation, such as the bill needed now to avert a government shutdown by Friday, or the National Defense Authorization Act, annual legislation set to dictate $886 billion in national security spending across the Pentagon and US Department of Energy in 2024.

"In its current form, [Section 702] is dangerous to our liberties and our democracy, and it should not be renewed for any length of time without robust debate, an opportunity for amendment, and -- ultimately -- far-reaching reforms," a letter from the groups to Schumer says. It adds that any attempt to prolong the program by rushed amendment "would demonstrate blatant disregard for the civil liberties and civil rights of the American people."
