United States

US Expects To Make Multi-Billion Chips Awards Within the Next Year (reuters.com) 13

David Shepardson reports via Reuters: U.S. Commerce Secretary Gina Raimondo said she expects to make around a dozen semiconductor chips funding awards within the next year, including multi-billion dollar announcements that could drastically reshape U.S. chip production. She announced the first award on Monday -- $35 million to a BAE Systems facility in New Hampshire to produce chips for fighter planes, the first grant from the "Chips for America" semiconductor manufacturing and research subsidy program approved by Congress in August 2022.

"Next year we'll get into some of the bigger ones with leading-edge fabs," Raimondo told reporters. "A year from now I think we will have made 10 or 12 similar announcements, some of them multi-billion dollar announcements." In an interview with Reuters, Raimondo said that the number of awards could go higher than 12. She said she wants the percentage of semiconductors produced in the United States to rise from about 12% to closer to 20% -- though that is still down from 40% in 1990 -- and to have at least two "leading-edge" U.S. manufacturing clusters. In addition, she wants the U.S. to have cutting-edge memory and packaging production and to "meet the military's needs for current and mature" chips. Raimondo noted that the U.S. currently does not have any cutting-edge manufacturing production and wants to get that to about 10%.

AI

MIT Group Releases White Papers On Governance of AI (mit.edu) 46

An anonymous reader quotes a report from MIT News: Providing a resource for U.S. policymakers, a committee of MIT leaders and scholars has released a set of policy briefs that outlines a framework for the governance of artificial intelligence. The approach includes extending current regulatory and liability approaches in pursuit of a practical way to oversee AI. The aim of the papers is to help enhance U.S. leadership in the area of artificial intelligence broadly, while limiting harm that could result from the new technologies and encouraging exploration of how AI deployment could be beneficial to society.

The main policy paper, "A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector," suggests AI tools can often be regulated by existing U.S. government entities that already oversee the relevant domains. The recommendations also underscore the importance of identifying the purpose of AI tools, which would enable regulations to fit those applications. "As a country we're already regulating a lot of relatively high-risk things and providing governance there," says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, who helped steer the project, which stemmed from the work of an ad hoc MIT committee. "We're not saying that's sufficient, but let's start with things where human activity is already being regulated, and which society, over time, has decided are high risk. Looking at AI that way is the practical approach." [...]

"The framework we put together gives a concrete way of thinking about these things," says Asu Ozdaglar, the deputy dean of academics in the MIT Schwarzman College of Computing and head of MIT's Department of Electrical Engineering and Computer Science (EECS), who also helped oversee the effort. The project includes multiple additional policy papers and comes amid heightened interest in AI over last year as well as considerable new industry investment in the field. The European Union is currently trying to finalize AI regulations using its own approach, one that assigns broad levels of risk to certain types of applications. In that process, general-purpose AI technologies such as language models have become a new sticking point. Any governance effort faces the challenges of regulating both general and specific AI tools, as well as an array of potential problems including misinformation, deepfakes, surveillance, and more.
These are the key policies and approaches mentioned in the white papers:

Extension of Current Regulatory and Liability Approaches: The framework proposes extending current regulatory and liability approaches to cover AI. It suggests leveraging existing U.S. government entities that oversee relevant domains for regulating AI tools. This is seen as a practical approach, starting with areas where human activity is already being regulated and deemed high risk.

Identification of Purpose and Intent of AI Tools: The framework emphasizes the importance of AI providers defining the purpose and intent of AI applications in advance. This identification process would enable the application of relevant regulations based on the specific purpose of AI tools.

Responsibility and Accountability: The policy brief underscores the responsibility of AI providers to clearly define the purpose and intent of their tools. It also suggests establishing guardrails to prevent misuse and determining the extent of accountability for specific problems. The framework aims to identify situations where end users could reasonably be held responsible for the consequences of misusing AI tools.

Advances in Auditing of AI Tools: The policy brief calls for advances in auditing new AI tools, whether initiated by the government, user-driven, or arising from legal liability proceedings. Public standards for auditing are recommended, potentially established by a nonprofit entity or a federal entity similar to the National Institute of Standards and Technology (NIST).

Consideration of a Self-Regulatory Organization (SRO): The framework suggests considering the creation of a new, government-approved "self-regulatory organization" (SRO) agency for AI. This SRO, similar to FINRA for the financial industry, could accumulate domain-specific knowledge, ensuring responsiveness and flexibility in engaging with a rapidly changing AI industry.

Encouragement of Research for Societal Benefit: The policy papers highlight the importance of encouraging research on how to make AI beneficial to society. For instance, there is a focus on exploring the possibility of AI augmenting and aiding workers rather than replacing them, leading to long-term economic growth distributed throughout society.

Addressing Legal Issues Specific to AI: The framework acknowledges the need to address specific legal matters related to AI, including copyright and intellectual property issues. Special consideration is also mentioned for "human plus" legal issues, where AI capabilities go beyond human capacities, such as mass surveillance tools.

Broadening Perspectives in Policymaking: The ad hoc committee emphasizes the need for a broad range of disciplinary perspectives in policymaking, advocating for academic institutions to play a role in addressing the interplay between technology and society. The goal is to govern AI effectively by considering both technical and social systems.
China

Huawei To Start Building First European Factory In France (reuters.com) 35

An anonymous reader quotes a report from Reuters: China's Huawei will start building its mobile phone network equipment factory in France next year, a source familiar with the matter said, pressing ahead with its first plant in Europe even as some European governments curb the use of the firm's 5G gear. The company outlined plans for the factory with an initial investment of 200 million euros ($215.28 million) in 2020, but the roll-out was delayed by the COVID-19 pandemic, the source said on Monday. The source did not give a timeline for when the factory in Brumath, near Strasbourg, will be up and running. A French government source said the site was expected to open in 2025. Further reading: 'How Washington Chased Huawei Out of Europe'
Security

US Healthcare Giant Norton Says Hackers Stole Millions of Patients' Data During Ransomware Attack (techcrunch.com) 27

An anonymous reader quotes a report from TechCrunch: Kentucky-based nonprofit healthcare system Norton Healthcare has confirmed that hackers accessed the personal data of millions of patients and employees during an earlier ransomware attack. Norton operates more than 40 clinics and hospitals in and around Louisville, Kentucky, and is the city's third-largest private employer. The organization has more than 20,000 employees, and more than 3,000 total providers on its medical staff, according to its website. In a filing with Maine's attorney general on Friday, Norton said that the sensitive data of approximately 2.5 million patients, as well as employees and their dependents, was accessed during its May ransomware attack.

In a letter sent to those affected, the nonprofit said that hackers had access to "certain network storage devices between May 7 and May 9," but did not access Norton Healthcare's medical record system or Norton MyChart, its electronic medical record system. But Norton admitted that following a "time-consuming" internal investigation, which the organization completed in November, Norton found that hackers accessed a "wide range of sensitive information," including names, dates of birth, Social Security numbers, health and insurance information and medical identification numbers. Norton Healthcare says that, for some individuals, the exposed data may have also included financial account numbers, driver licenses or other government ID numbers, as well as digital signatures. It's not known if any of the accessed data was encrypted.

Norton says it notified law enforcement about the attack and confirmed it did not pay any ransom payment. The organization did not name the hackers responsible for the cyberattack, but the incident was claimed by the notorious ALPHV/BlackCat ransomware gang in May, according to data breach news site DataBreaches.net, which reported that the group claimed it exfiltrated almost five terabytes of data. TechCrunch could not confirm this, as the ALPHV website was inaccessible at the time of writing.

United States

Why the US Needs a Moonshot Mentality for AI - Led by the Public Sector (wsj.com) 76

Fei-Fei Li and John Etchemendy, the founding co-directors of the Stanford Institute for Human-Centered Artificial Intelligence, argue in a WSJ op-ed that AI is too important to be left entirely in the hands of the big tech companies: Among other things, 2023 will be remembered as the year artificial intelligence went mainstream. But while Americans from every corner of the country began dabbling with tools like ChatGPT and Midjourney, we believe 2023 is also the year Congress failed to act on what we see as the big picture: AI's impact will be far bigger than the products that companies are releasing at a breakneck pace. AI is a broad, general-purpose technology with profound implications for society that cannot be overstated.

[...] So what needs to happen? President Biden has set the stage, and with all this attention, it's time for Congress to act. They need to pass the Create AI Act, adhere to the elements called for by the new executive order, and invest more in the public sector to ensure America's leadership in creating AI technology steeped in the values we stand for. We also encourage an investment in human capital to bring more talent to the U.S. to work in the field of AI within academia and the government.

But why does this matter? Because this technology isn't just good for optimizing ad revenue for technology companies, but can fuel the next generation of scientific discovery, ranging from nuclear fusion to curing cancer. Furthermore, to truly understand this technology, including its sometimes unpredictable emergent capabilities and behaviors, public-sector researchers urgently need to replicate and examine the under-the-hood architecture of these models. That's why government research labs need to take a larger role in AI. [...]

United Kingdom

UK's First Carbon Capture Plant Turns CO2 Into Jet Fuel (sky.com) 119

"The machines in the facility waft air towards a water-based solvent," reports the Times of London, "which carbon dioxide in the air dissolves into. An electrical current then separates those compounds from the solvent, creating a pure stream of CO2."

More details from Sky News: The UK's first-ever direct air capture plant has been turned on to remove CO2 from the atmosphere and turn it into jet fuel. The machine, developed by Mission Zero Technologies in partnership with the University of Sheffield, will run on solar power to recover 50 tonnes of CO2 from the air per year and turn it into Sustainable Aviation Fuel (SAF)...

Aviation accounts for about 2% of the world's emissions and Ihab Ahmed, research associate from the University of Sheffield, said the fuel has the capacity to massively reduce the impact of aviation on the environment — and is an important step towards the government's ambitious target to increase the use of SAF to at least 10% by 2030.

America opened its first carbon-capture facility in November in a warehouse in California. While the carbon it captures isn't converted into sustainable aviation fuel, the facility can capture a maximum of 1,000 tons of carbon dioxide per year.
Government

US Diet Committee Debates Whether Potatoes are Vegetables or 'Starchy Grain' (msn.com) 129

Every five years America's federal Department of Health updates its dietary guidelines with the latest nutrition science, affecting federal nutrition programs and various other government health initiatives.

Now an anonymous reader shared this report from the Wall Street Journal: Botanists count potatoes as a vegetable. But should Americans? The U.S. Dietary Guidelines Advisory Committee has sparked the question... White potatoes, which come in various colors, are classified as "starchy vegetables." But the committee could uproot potatoes from the vegetable bin and toss them in with a broader category of rice, other grains and carbohydrates as the Departments of Agriculture and Health and Human Services weigh updates to national diet guidelines for 2025.

The scientific debate isn't easy to follow. But it sounds like a half-baked idea to Kam Quarles, chief executive of the National Potato Council, a potato-industry group. The dietary guidelines shape nutrition advice to Americans, as well as what foods are served in school cafeterias. Potatoes, according to Quarles, should be respected as a gateway vegetable. "Kids are far more likely to eat" dishes with other vegetables if potatoes are involved, he said.

Not all parents swallow that a trail of tubers leads to leafy greens. Some complained about a Peppa Pig animated cartoon that featured a potato preaching the nutritional value of vegetables. "By the power of vegetables, I am here," Super Potato said, soaring through the sky, singing, "Fruit and vegetables keep us alive. Always remember to eat your five." The U.K.'s National Health Service, for one, doesn't count spuds toward the U.K.'s recommended five portions of fruits and vegetables a day. "It's a giant spud singing it. You're, like, 'Really? A potato's one of your five a day?'" said Dan Greef, the owner of Deliciously Guilt Free, a sugar-free bakery in Cambridge, U.K. He spent years persuading his two children to eat vegetables. Then, he said, "a drawing of a potato tells you it's fine, and you don't listen to your dad...."

Nutrition researchers say the potato contains helpful nutrients, including potassium and vitamin C, but its health benefits are diminished when it is fried. Nearly half of all U.S. potatoes eaten as food go into frozen products, mostly french fries, the USDA found.

For comparison, the article points out that under U.S. dietary guidelines, "corn on the cob is a starchy vegetable, while cornmeal is a grain."
Privacy

Republican Presidential Candidates Debate Anonymity on Social Media (cnbc.com) 174

Four Republican candidates for U.S. president debated Wednesday — and moderator Megyn Kelly had a tough question for former South Carolina governor Nikki Haley. "Can you please speak to the requirement that you said that every anonymous internet user needs to out themselves?" Nikki Haley: What I said was, that social media companies need to show us their algorithms. I also said there are millions of bots on social media right now. They're foreign, they're Chinese, they're Iranian. I will always fight for freedom of speech for Americans; we do not need freedom of speech for Russians and Iranians and Hamas. We need social media companies to go and fight back on all of these bots that are happening. That's what I said.

As a mom, do I think social media would be more civil if we went and had people's names next to that? Yes, I do think that, because I think we've got too much cyberbullying, I think we've got child pornography and all of those things. But having said that, I never said government should go and require anyone's name.

DeSantis: That's false.

Haley: What I said —

DeSantis: You said I want your name. As president of the United States, her first day in office, she said one of the first things I'm going to do --

Haley: I said we were going to get the millions of bots.

DeSantis: "All social medias? I want your name." A government i.d. to dox every American. That's what she said. You can roll the tape. She said I want your name — and that was going to be one of the first things she did in office. And then she got real serious blowback — and understandably so, because it would be a massive expansion of government. We have anonymous speech. The Federalist Papers were written with anonymous writers — Jay, Madison, and Hamilton, they went under "Publius". It's something that's important — and especially given how conservatives have been attacked and they've lost jobs and they've been cancelled. You know the regime would use that to weaponize that against our own people. It was a bad idea, and she should own up to it.

Haley: This cracks me up, because Ron is so hypocritical, because he actually went and tried to push a law that would stop anonymous people from talking to the press, and went so far to say bloggers should have to register with the state --

DeSantis: That's not true.

Haley: — if they're going to write about elected officials. It was in the — check your newspaper. It was absolutely there.

DeSantis quickly attributed the introduction of that legislation to "some legislator".

The press had already extensively written about Haley's position on anonymity on social media. Three weeks ago Business Insider covered a Fox News interview, and quoted Nikki Haley as saying: "When I get into office, the first thing we have to do, social media companies, they have to show America their algorithms. Let us see why they're pushing what they're pushing. The second thing is every person on social media should be verified by their name." Haley said this was why her proposals would be necessary to counter the "national security threat" posed by anonymous social media accounts and social media bots. "When you do that, all of a sudden people have to stand by what they say, and it gets rid of the Russian bots, the Iranian bots, and the Chinese bots," Haley said. "And then you're gonna get some civility when people know their name is next to what they say, and they know their pastor and their family member's gonna see it. It's gonna help our kids and it's gonna help our country," she continued... A representative for the Haley campaign told Business Insider that Haley's proposals were "common sense."

"We all know that America's enemies use anonymous bots to spread anti-American lies and sow chaos and division within our borders. Nikki believes social media companies need to do a better job of verifying users so we can crack down on Chinese, Iranian, and Russian bots," the representative said.

The next day CNBC reported that Haley "appeared to add a caveat... suggesting Wednesday that Americans should still be allowed to post anonymously online." A spokesperson for Haley's campaign added, "Social media companies need to do a better job of verifying users as human in order to crack down on anonymous foreign bots. We can do this while protecting America's right to free speech and Americans who post anonymously."

Privacy issues had also come up just five minutes earlier in the debate. In March America's Treasury Secretary had recommended the country "advance policy and technical work on a potential central bank digital currency, or CBDC, so the U.S. is prepared if CBDC is determined to be in the national interest."

But Florida governor Ron DeSantis spoke out forcefully against the possibility. "They want to get rid of cash, crypto, they want to force you to do that. They'll take away your privacy. They will absolutely regulate your purchases. On Day One as president, we take the idea of Central Bank Digital Currency, and we throw it in the trash can. It'll be dead on arrival." [The audience applauded.]
Education

Harvard Accused of Bowing to Meta By Ousted Disinformation Scholar in Whistleblower Complaint (cjr.org) 148

The Washington Post reports: A prominent disinformation scholar has accused Harvard University of dismissing her to curry favor with Facebook and its current and former executives in violation of her right to free speech.

Joan Donovan claimed in a filing with the Education Department and the Massachusetts attorney general that her superiors soured on her as Harvard was getting a record $500 million pledge from Meta founder Mark Zuckerberg's charitable arm. As research director of Harvard Kennedy School projects delving into mis- and disinformation on social media platforms, Donovan had raised millions in grants, testified before Congress and been a frequent commentator on television, often faulting internet companies for profiting from the spread of divisive falsehoods. Last year, the school's dean told her that he was winding down her main project and that she should stop fundraising for it. This year, the school eliminated her position.

As one of the first researchers with access to "the Facebook papers" leaked by Frances Haugen, Donovan was asked to speak at a meeting of the Dean's Council, a group of the university's high-profile donors, remembers The Columbia Journalism Review: Elliot Schrage, then the vice president of communications and global policy for Meta, was also at the meeting. Donovan says that, after she brought up the Haugen leaks, Schrage became agitated and visibly angry, "rocking in his chair and waving his arms and trying to interrupt." During a Q&A session after her talk, Donovan says, Schrage reiterated a number of common Meta talking points, including the fact that disinformation is a fluid concept with no agreed-upon definition and that the company didn't want to be an "arbiter of truth."

According to Donovan, Nancy Gibbs, Donovan's faculty advisor, was supportive after the incident. She says that they discussed how Schrage would likely try to pressure Douglas Elmendorf, the dean of the Kennedy School of Government (where the Shorenstein Center hosting Donovan's project is based) about the idea of creating a public archive of the documents... After Elmendorf called her in for a status meeting, Donovan claims that he told her she was not to raise any more money for her project; that she was forbidden to spend the money that she had raised (a total of twelve million dollars, she says); and that she couldn't hire any new staff. According to Donovan, Elmendorf told her that he wasn't going to allow any expenditure that increased her public profile, and used a number of Meta talking points in his assessment of her work...

Donovan says she tried to move her work to the Berkman Klein Center at Harvard, but that the head of that center told her that they didn't have the "political capital" to bring on someone whom Elmendorf had "targeted"... Donovan told me that she believes the pressure to shut down her project is part of a broader pattern of influence in which Meta and other tech platforms have tried to make research into disinformation as difficult as possible... Donovan said she hopes that by blowing the whistle on Harvard, her case will be the "tip of the spear."

Another interesting detail from the article: [Donovan] alleges that Meta pressured Elmendorf to act, noting that he is friends with Sheryl Sandberg, the company's chief operating officer. (Elmendorf was Sandberg's advisor when she studied at Harvard in the early nineties; he attended Sandberg's wedding in 2022, four days before moving to shut down Donovan's project.)
Social Networks

Reactions Continue to Viral Video that Led to Calls for College Presidents to Resign 414

After billionaire Bill Ackman demanded three college presidents "resign in disgrace," that post on X — excerpting their testimony before a U.S. Congressional committee — has now been viewed more than 104 million times, provoking a variety of reactions.

Saturday afternoon, one of the three college presidents resigned — University of Pennsylvania president Liz Magill.

Politico reports that the Republican-led Committee now "will be investigating Harvard University, MIT and the University of Pennsylvania after their institutions' leaders failed to sufficiently condemn student protests calling for 'Jewish genocide.'" The BBC reports a wealthy UPenn donor reportedly withdrew a stock grant worth $100 million.

But after watching the entire Congressional hearing, New York Times opinion columnist Michelle Goldberg wrote that she'd seen a "more understandable" context: In the questioning before the now-infamous exchange, you can see the trap [Congresswoman Elise] Stefanik laid. "You understand that the use of the term 'intifada' in the context of the Israeli-Arab conflict is indeed a call for violent armed resistance against the state of Israel, including violence against civilians and the genocide of Jews. Are you aware of that?" she asked Claudine Gay of Harvard. Gay responded that such language was "abhorrent."

Stefanik then badgered her to admit that students chanting about intifada were calling for genocide, and asked angrily whether that was against Harvard's code of conduct. "Will admissions offers be rescinded or any disciplinary action be taken against students or applicants who say, 'From the river to the sea' or 'intifada,' advocating for the murder of Jews?" Gay repeated that such "hateful, reckless, offensive speech is personally abhorrent to me," but said action would be taken only "when speech crosses into conduct." So later in the hearing, when Stefanik again started questioning Gay, Kornbluth and Magill about whether it was permissible for students to call for the genocide of the Jews, she was referring, it seemed clear, to common pro-Palestinian rhetoric and trying to get the university presidents to commit to disciplining those who use it. Doing so would be an egregious violation of free speech. After all, even if you're disgusted by slogans like "From the river to the sea, Palestine will be free," their meaning is contested...

Liberal blogger Josh Marshall argues that "While groups like Hamas certainly use the word [intifada] with a strong eliminationist meaning it is simply not the case that the term consistently or usually or mostly refers to genocide. It's just not. Stefanik's basic equation was and is simply false and the university presidents were maladroit enough to fall into her trap."

The Wall Street Journal published an investigation the day after the hearing. A political science professor at the University of California, Berkeley hired a survey firm to poll 250 students across the U.S. from "a variety of backgrounds" — and the results were surprising: A Latino engineering student from a southern university reported "definitely" supporting "from the river to the sea" because "Palestinians and Israelis should live in two separate countries, side by side." Shown on a map of the region that a Palestinian state would stretch from the Jordan River to the Mediterranean Sea, leaving no room for Israel, he downgraded his enthusiasm for the mantra to "probably not." Of the 80 students who saw the map, 75% similarly changed their view... In all, after learning a handful of basic facts about the Middle East, 67.8% of students went from supporting "from the river to the sea" to rejecting the mantra. These students had never seen a map of the Mideast and knew little about the region's geography, history, or demography.
More about the phrase from the Associated Press: Many Palestinian activists say it's a call for peace and equality after 75 years of Israeli statehood and decades-long, open-ended Israeli military rule over millions of Palestinians. Jews hear a clear demand for Israel's destruction... By 2012, it was clear that Hamas had claimed the slogan in its drive to claim land spanning Israel, the Gaza Strip and the West Bank... The phrase also has roots in the Hamas charter... [Since 1997 the U.S. government has considered Hamas a terrorist organization.]

"A Palestine between the river to the sea leaves not a single inch for Israel," read an open letter signed by 30 Jewish news outlets around the world and released on Wednesday... Last month, Vienna police banned a pro-Palestinian demonstration, citing the fact that the phrase "from the river to the sea" was mentioned in invitations and characterizing it as a call to violence. And in Britain, the Labour party issued a temporary punishment to a member of Parliament, Andy McDonald, for using the phrase during a rally at which he called for a stop to bombardment.

As the controversy rages on, Ackman's X timeline now includes an official response reposted from a college that wasn't called to testify — Stanford University: In the context of the national discourse, Stanford unequivocally condemns calls for the genocide of Jews or any peoples. That statement would clearly violate Stanford's Fundamental Standard, the code of conduct for all students at the university.
Ackman also retweeted this response from OpenAI CEO Sam Altman: for a long time i said that antisemitism, particularly on the american left, was not as bad as people claimed. i'd like to just state that i was totally wrong. i still don't understand it, really. or know what to do about it. but it is so fucked.
Wednesday UPenn's president announced they'd immediately consider a new change in policy, in an X post viewed 38.7 million times: For decades under multiple Penn presidents and consistent with most universities, Penn's policies have been guided by the [U.S.] Constitution and the law. In today's world, where we are seeing signs of hate proliferating across our campus and our world in a way not seen in years, these policies need to be clarified and evaluated. Penn must initiate a serious and careful look at our policies, and provost Jackson and I will immediately convene a process to do so. As president, I'm committed to a safe, secure, and supportive environment so all members of our community can thrive. We can and we will get this right. Thank you.
The next day the university's business school called on Magill to resign. And Saturday afternoon, Magill resigned.
Businesses

US Postal Service Warns Rural Mail Carriers: Don't Publicly Blame Delays on Amazon (msn.com) 119

15,279 people live in the rural Minnesota town of Bemidji. But now mail carriers there, "overwhelmed by Amazon packages, say they've been warned not to use the word 'Amazon,' including when customers ask why the mail is delayed," reports the Washington Post: "We are not to mention the word 'Amazon' to anyone," said a mail carrier who spoke on the condition of anonymity to protect their job. "If asked, they're to be referred to as 'Delivery Partners' or 'Distributors,'" said a second carrier. "It's ridiculous." The directive, passed down Monday morning from U.S. Postal Service management, comes three weeks after mail carriers in the northern Minnesota town staged a symbolic strike outside the post office, protesting the heavy workloads and long hours caused by the sudden arrival of thousands of Amazon packages...

In addition to being banned from saying "Amazon," postal workers have also been told their jobs could be at risk if they speak publicly about post office issues. Staffers were told they could attend Tuesday's meeting only on their 30-minute lunch break if they changed out of uniform, mail carriers said. One mail carrier said he'd been warned there could be "consequences" for those who showed up.

Postal customers in Bemidji have been complaining about late and missing mail since the beginning of November, when the contract for delivering Amazon packages in town switched from UPS to the post office. Mail carriers told The Post last month that they were instructed to deliver packages before the mail, leaving residents waiting for tax rebates, credit card statements, medical documents and checks...

The post office has held a contract to deliver Amazon packages on Sundays since 2013. The agency, which has lost $6.5 billion in the past year, has said that it's crucial to increase package volume by cutting deals with Amazon and other retailers.

Tuesday the town's mayor held a listening session for the state's two senators with Bemidji residents, whose complaints included "missing medications and late bills resulting in fees." Senator Amy Klobuchar later told the Post that "We need a very clear commitment that we're not going to be prioritizing Amazon packages over regular mail," promising to explore improving postal staffing and pay for rural carriers. On Monday, the Minnesota senators introduced a bill called the Postal Delivery Accountability Act, which would require the post office to improve tracking and reporting of delayed and undelivered mail nationally.
Patents

White House Threatens Patents of High-Priced Drugs (apnews.com) 151

The Biden administration is threatening to cancel the patents of some costly medications to allow rivals to make their own more affordable versions. The Associated Press reports: Under a plan announced Thursday, the government would consider overriding the patent for high-priced drugs that have been developed with the help of taxpayer money and letting competitors make them in hopes of driving down the cost. In a 15-second video released to YouTube on Wednesday night, President Joe Biden promised the move would lower prices. "Today, we're taking a very important step toward ending price gouging so you don't have to pay more for the medicine you need," he said.

White House officials would not name drugs that might potentially be targeted. The government would consider seizing a patent if a drug is only available to a "narrow set of consumers," according to the proposal that will be open to public comment for 60 days. Drugmakers are almost certain to challenge the plan in court if it is enacted. [...] The White House also intends to focus more closely on private equity firms that purchase hospitals and health systems, then often whittle them down and sell quickly for a profit. The departments of Justice and Health and Human Services, and the Federal Trade Commission will work to share more data about health system ownership.

While only a minority of drugs on the market relied so heavily on taxpayer dollars, the threat of a government "march-in" on patents will make many pharmaceutical companies think twice, said Jing Luo, a professor of medicine at University of Pittsburgh. "If I was a drug company that was trying to license a product that had benefited heavily from taxpayer money, I'd be very careful about how to price that product," Luo said. "I wouldn't want anyone to take my product away from me."

EU

Europe Reaches a Deal On the World's First Comprehensive AI Rules (apnews.com) 36

An anonymous reader quotes a report from the Associated Press: European Union negotiators clinched a deal Friday on the world's first comprehensive artificial intelligence rules, paving the way for legal oversight of technology used in popular generative AI services like ChatGPT that has promised to transform everyday life and spurred warnings of existential dangers to humanity. Negotiators from the European Parliament and the bloc's 27 member countries overcame big differences on controversial points including generative AI and police use of facial recognition surveillance to sign a tentative political agreement for the Artificial Intelligence Act.

"Deal!" tweeted European Commissioner Thierry Breton, just before midnight. "The EU becomes the very first continent to set clear rules for the use of AI." The result came after marathon closed-door talks this week, with one session lasting 22 hours before a second round kicked off Friday morning. Officials provided scant details on what exactly will make it into the eventual law, which wouldn't take effect until 2025 at the earliest. They were under the gun to secure a political victory for the flagship legislation but were expected to leave the door open to further talks to work out the fine print, likely to bring more backroom lobbying.

The AI Act was originally designed to mitigate the dangers from specific AI functions based on their level of risk, from low to unacceptable. But lawmakers pushed to expand it to foundation models, the advanced systems that underpin general purpose AI services like ChatGPT and Google's Bard chatbot. Foundation models looked set to be one of the biggest sticking points for Europe. However, negotiators managed to reach a tentative compromise early in the talks, despite opposition led by France, which called instead for self-regulation to help homegrown European generative AI companies competing with big U.S. rivals including OpenAI's backer Microsoft. [...] Under the deal, the most advanced foundation models that pose the biggest "systemic risks" will get extra scrutiny, including requirements to disclose more information such as how much computing power was used to train the systems.
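
Those disclosure requirements turn on estimating training compute. As a rough illustration only (the approximation and the numbers below are assumptions, not figures from the AP report or the Act's text), a common back-of-the-envelope estimate for dense transformer training is about 6 FLOPs per parameter per training token:

```python
# A minimal sketch, assuming the widely used "6 * N * D" heuristic for
# dense-transformer training compute; the sizes below are hypothetical,
# not figures from the article or from the AI Act.
def approx_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute in FLOPs (~6 FLOPs per parameter per token)."""
    return 6.0 * n_parameters * n_training_tokens

# Hypothetical frontier-scale run: 1e12 parameters trained on 1e13 tokens.
print(f"{approx_training_flops(1e12, 1e13):.2e} FLOPs")  # -> 6.00e+25 FLOPs
```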

Privacy

Verizon Gave Phone Data To Armed Stalker Who Posed As Cop Over Email (404media.co) 27

Slash_Account_Dot writes: The FBI investigated a man who allegedly posed as a police officer in emails and phone calls to trick Verizon into handing over phone data belonging to a specific person that the suspect met on the dating section of porn site xHamster, according to a newly unsealed court record. Despite the relatively unconvincing cover story concocted by the suspect, including the use of a clearly non-government ProtonMail email address, Verizon handed over the victim's data to the alleged stalker, including their address and phone logs. The stalker then went on to threaten the victim and ended up driving to where he believed the victim lived while armed with a knife, according to the record.

The news represents a massive failure by Verizon, which failed to detect that the data request was fraudulent and potentially put someone's safety at risk. The news also highlights the now common use of fraudulent emergency data requests (EDRs) or search warrants in the digital underworld, where criminals pretend to be law enforcement officers, fabricate an urgent scenario such as a kidnapping, and then convince telecoms or tech companies to hand over data that should only be accessible through legitimate law enforcement requests. As 404 Media previously reported, some hackers are using compromised government email accounts for this purpose.

Google

Governments Spying on Apple, Google Users Through Push Notifications (reuters.com) 33

Unidentified governments are surveilling smartphone users via their apps' push notifications, a U.S. senator warned on Wednesday. From a report: In a letter to the Department of Justice, Senator Ron Wyden said foreign officials were demanding the data from Alphabet's Google and Apple. Although details were sparse, the letter lays out yet another path by which governments can track smartphones. Apps of all kinds rely on push notifications to alert smartphone users to incoming messages, breaking news, and other updates. [...] That gives the two companies unique insight into the traffic flowing from those apps to their users, and in turn puts them "in a unique position to facilitate government surveillance of how users are using particular apps," Wyden said.

He asked the Department of Justice to "repeal or modify any policies" that hindered public discussions of push notification spying. In a statement, Apple said that Wyden's letter gave them the opening they needed to share more details with the public about how governments monitored push notifications. "In this case, the federal government prohibited us from sharing any information," the company said in a statement. "Now that this method has become public we are updating our transparency reporting to detail these kinds of requests."
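
For context on why the platforms sit in the middle of this traffic: an app's own server never contacts the phone directly. It hands the device token and the notification payload to Apple's or Google's push service, which performs the delivery. The sketch below is a minimal illustration of that flow, assuming Google's standard firebase-admin Python SDK; the credentials path and device token are placeholders. The metadata such a call necessarily exposes (which app, which device, when, and any payload text) is exactly what Wyden says governments are requesting.

```python
# A minimal sketch of a server-side push through Google's FCM, assuming the
# standard firebase-admin SDK; the credentials file and token are placeholders.
import firebase_admin
from firebase_admin import credentials, messaging

firebase_admin.initialize_app(credentials.Certificate("service-account.json"))

message = messaging.Message(
    token="DEVICE_REGISTRATION_TOKEN",  # identifies one app install on one device
    notification=messaging.Notification(
        title="New message",
        body="You have 1 unread message",  # this text transits Google's servers
    ),
)

# send() talks to Google, not to the phone: the push service sees the app,
# the target device, the timestamp, and the payload above before delivery.
message_id = messaging.send(message)
print("Delivered via FCM:", message_id)
```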

AI

AI Models May Enable a New Era of Mass Spying, Says Bruce Schneier (arstechnica.com) 37

An anonymous reader quotes a report from Ars Technica: In an editorial for Slate published Monday, renowned security researcher Bruce Schneier warned that AI models may enable a new era of mass spying, allowing companies and governments to automate the process of analyzing and summarizing large volumes of conversation data, fundamentally lowering barriers to spying activities that currently require human labor. In the piece, Schneier notes that the existing landscape of electronic surveillance has already transformed the modern era, becoming the business model of the Internet, where our digital footprints are constantly tracked and analyzed for commercial reasons.

Spying, by contrast, can take that kind of economically inspired monitoring to a completely new level: "Spying and surveillance are different but related things," Schneier writes. "If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did." Schneier says that current spying methods, like phone tapping or physical surveillance, are labor-intensive, but the advent of AI significantly reduces this constraint. Generative AI systems are increasingly adept at summarizing lengthy conversations and sifting through massive datasets to organize and extract relevant information. This capability, he argues, will not only make spying more accessible but also more comprehensive. "This spying is not limited to conversations on our phones or computers," Schneier writes. "Just as cameras everywhere fueled mass surveillance, microphones everywhere will fuel mass spying. Siri and Alexa and 'Hey, Google' are already always listening; the conversations just aren't being saved yet." [...]
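
To make that lowered labor barrier concrete, here is a minimal sketch of the kind of automation Schneier is describing: handing a transcript to an off-the-shelf LLM and asking for a summary. It assumes the openai Python package and an API key in the environment; the model name is an illustrative choice, not one named in the editorial.

```python
# A minimal sketch of automated conversation analysis, assuming the openai
# Python package and OPENAI_API_KEY in the environment; the model name is
# a placeholder, not one specified in Schneier's piece.
from openai import OpenAI

client = OpenAI()

def summarize_conversation(transcript: str) -> str:
    """Ask an LLM who spoke, what was discussed, and what plans were made."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Summarize this conversation: list the participants, "
                        "the topics discussed, and any plans or purchases mentioned."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# Schneier's point is the loop, not the single call: run this over millions of
# transcripts and the human analysts who were the practical limit on spying
# are no longer needed.
```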

In his editorial, Schneier raises concerns about the chilling effect that mass spying could have on society, cautioning that the knowledge of being under constant surveillance may lead individuals to alter their behavior, engage in self-censorship, and conform to perceived norms, ultimately stifling free expression and personal privacy. So what can people do about it? Anyone seeking protection from this type of mass spying will likely need to look toward government regulation to keep it in check since commercial pressures often trump technological safety and ethics. [...] Schneier isn't optimistic on that front, however, closing with the line, "We could prohibit mass spying. We could pass strong data-privacy rules. But we haven't done anything to limit mass surveillance. Why would spying be any different?" It's a thought-provoking piece, and you can read the entire thing on Slate.

Transportation

Congress Spent Billions On EV Chargers. But Not One Has Come Online. (politico.com) 227

Press2ToContinue shares a report from Politico: Congress at the urging of the Biden administration agreed in 2021 to spend $7.5 billion to build tens of thousands of electric vehicle chargers across the country, aiming to appease anxious drivers while tackling climate change. Two years later, the program has yet to install a single charger. States and the charger industry blame the delays mostly on the labyrinth of new contracting and performance requirements they have to navigate to receive federal funds. While federal officials have authorized more than $2 billion of the funds to be sent to states, fewer than half of states have even started to take bids from contractors to build the chargers -- let alone begin construction. [...]

The goal is a reliable and standardized network in every corner of the nation, said Gabe Klein, executive director of the Joint Office of Energy and Transportation, which leads the federal government's efforts on EV charging. "You have to go slow to go fast," Klein said in an interview. "These are things that take a little bit of time, but boy, when you're done, it's going to completely change the game." [...] Aatish Patel, president of charger manufacturer XCharge North America, is worried the delays in installing chargers are imperiling efforts to drive up EV adoption. "As an EV driver, a charger being installed in two years isn't really going to help me out now," Patel said. "We're in dire need of chargers here."

The Biden administration is expecting a deluge of chargers funded by the law to break ground in early 2024. A senior administration official granted anonymity to speak on the specifics of the rollout said the pace is to be expected, given that the goal is to create a "convenient, affordable, reliable, made-in-America equitable network." "Anybody can throw a charger in the ground -- that's not that hard, it doesn't take that long," the official said. "Building a network is different." The administration insists it is doing all it can to speed up the process, including by streamlining federal permitting for EV chargers and providing technical assistance to states and companies through the Joint Office. It expects the U.S. to hit Biden's 500,000 charger target four years early, in 2026, the official said.

Firefox

Firefox On the Brink? (brycewray.com) 239

An anonymous reader shares a report: A somewhat obscure guideline for developers of U.S. government websites may be about to accelerate the long, sad decline of Mozilla's Firefox browser. There already are plenty of large entities, both public and private, whose websites lack proper support for Firefox; and that will get only worse in the near future, because the 'fox's auburn paws are perilously close to the lip of the proverbial slippery slope. The U.S. Web Design System (USWDS) provides a comprehensive set of standards which guide those who build the U.S. government's many websites. Its documentation for developers borrows a "2% rule" from its British counterpart: "... we officially support any browser above 2% usage as observed by analytics.usa.gov." (Firefox's market share was 2.2%, per the traffic for the previous ninety days.)
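
Mechanically, the rule is just a share-of-traffic check over a trailing ninety-day window. The sketch below shows the arithmetic with made-up visit counts (analytics.usa.gov publishes the real figures); the threshold and the ~2.2% Firefox share come from the article, everything else is illustrative.

```python
# A minimal sketch of the USWDS-style 2% support rule; visit counts are
# invented for illustration, not real analytics.usa.gov data.
SUPPORT_THRESHOLD = 0.02  # browsers above 2% of observed traffic are supported

def unsupported_browsers(visits_by_browser: dict[str, int]) -> list[str]:
    """Return browsers whose share of total visits falls below the threshold."""
    total = sum(visits_by_browser.values())
    return [name for name, visits in visits_by_browser.items()
            if visits / total < SUPPORT_THRESHOLD]

# Hypothetical trailing-90-day visit counts.
visits = {"Chrome": 49_000_000, "Safari": 34_000_000,
          "Edge": 8_500_000, "Firefox": 2_100_000}
print(unsupported_browsers(visits))  # -> []: Firefox sits at ~2.2%, just above the bar
```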

[...] "So what?" you may wonder. "That's just for web developers in the U.S. government. It doesn't affect any other web devs." Actually, it very well could. Here's how I envision the dominoes falling:

1. Once Firefox slips below the 2% threshold in the government's visitor analytics, USWDS tells government web devs they don't have to support Firefox anymore.
2. When that word gets out, it spreads quickly to not only the front-end dev community but also the corporate IT departments for whom some web devs work. Many corporations do a lot of business with the government and, thus, whatever the government does from an IT standpoint is going to influence what corporations do.
3. Corporations see this change as an opportunity to lower dev costs and delivery times, in that it provides an excuse to remove some testing (and, in rare cases, specific coding) from their development workflow.

China

US Issues Warning To Nvidia, Urging It To Stop Redesigning Chips For China (fortune.com) 86

At the Reagan National Defense Forum in Simi Valley, California, on Saturday, US Commerce Secretary Gina Raimondo issued a cautionary statement to Nvidia, urging them to stop redesigning AI chips for China that maneuver around export restrictions. "We cannot let China get these chips. Period," she said. "We're going to deny them our most cutting-edge technology." Fortune reports: Raimondo said American companies will need to adapt to US national security priorities, including export controls that her department has placed on semiconductor exports. "I know there are CEOs of chip companies in this audience who were a little cranky with me when I did that because you're losing revenue," she said. "Such is life. Protecting our national security matters more than short-term revenue."

Raimondo called out Nvidia Corp., which designed chips specifically for the Chinese market after the US imposed its initial round of curbs in October 2022. "If you redesign a chip around a particular cut line that enables them to do AI, I'm going to control it the very next day," Raimondo said. Communication with China can help stabilize ties between the two countries, but "on matters of national security, we've got to be eyes wide open about the threat," she said. "This is the biggest threat we've ever had and we need to meet the moment," she said.
Further reading: Nvidia CEO Says US Will Take Years To Achieve Chip Independence
Transportation

Automakers' Data Privacy Practices 'Are Unacceptable,' Says US Senator (arstechnica.com) 18

An anonymous reader quotes a report from Ars Technica: US Senator Edward Markey (D-Mass.) is one of the more technologically engaged of our elected lawmakers. And like many technologically engaged Ars Technica readers, he does not like what he sees in terms of automakers' approach to data privacy. On Friday, Sen. Markey wrote to 14 car companies with a variety of questions about data privacy policies, urging them to do better. As Ars reported in September, the Mozilla Foundation published a scathing report on the subject of data privacy and automakers. The problems were widespread -- most automakers collect too much personal data and are too eager to sell or share it with third parties, the foundation found.

Markey noted (PDF) the Mozilla Foundation report in his letters, which were sent to BMW, Ford, General Motors, Honda, Hyundai, Kia, Mazda, Mercedes-Benz, Nissan, Stellantis, Subaru, Tesla, Toyota, and Volkswagen. The senator is concerned about the large amounts of data that modern cars can collect, including the troubling potential to use biometric data (like the rate a driver blinks and breathes, as well as their pulse) to infer mood or mental health. Sen. Markey is also worried about automakers' use of Bluetooth, which he said has expanded "their surveillance to include information that has nothing to do with a vehicle's operation, such as data from smartphones that are wirelessly connected to the vehicle."
"These practices are unacceptable," Markey wrote. "Although certain data collection and sharing practices may have real benefits, consumers should not be subject to a massive data collection apparatus, with any disclosures hidden in pages-long privacy policies filled with legalese. Cars should not -- and cannot -- become yet another venue where privacy takes a backseat."

The 14 automakers have until December 21 to answer Markey's questions.
