AI

A Russian Propaganda Network Is Promoting an AI-Manipulated Biden Video (wired.com) 224

An anonymous reader quotes a report from Wired: In recent weeks, as so-called cheap fake video clips suggesting President Joe Biden is unfit for office have gone viral on social media, a Kremlin-affiliated disinformation network has been promoting a parody music video featuring Biden wearing a diaper and being pushed around in a wheelchair. The video is called "Bye, Bye Biden" and has been viewed more than 5 million times on X since it was first promoted in the middle of May. It depicts Biden as senile, wearing a hearing aid, and taking a lot of medication. It also shows him giving money to a character who seems to represent illegal migrants while denying money to US citizens until they change their costume to mimic the Ukrainian flag. Another scene shows Biden opening the front door of a family home that features a Confederate flag on the wall and allowing migrants to come in and take over. Finally, the video contains references to stolen election conspiracies pushed by former president Donald Trump.

The video was created by Little Bug, a group that mimics the style of Little Big, a real Russian band that fled the country in 2022 following Russia's invasion of Ukraine. The video features several Moscow-based actors -- who spoke with Russian media outlet Agency.Media -- but also appears to use artificial intelligence technology to make the actors resemble Biden and Trump, as well as Ilya Prusikin, the lead singer of Little Big. "Biden and Trump appear to be the same actor, with deepfake video-editing changing his facial features until he resembles Biden in one case and Trump in the other case," says Alex Fink, an AI and machine-vision expert who analyzed the video for WIRED. "The editing is inconsistent, so you can see that in some cases he resembles Biden more and in others less. The facial features keep changing." An analysis by True Media, a nonprofit that was founded to tackle the spread of election-related deepfakes, found with 100 percent confidence that there was AI-generated audio used in the video. It also assessed with 78 percent confidence that some AI technology was used to manipulate the faces of the actors.

Fink says the obvious nature of the deepfake technology on display here suggests that the video was created in a rush, using a small number of iterations of a generative adversarial network in order to create the characters of Biden and Trump. It is unclear who is behind the video, but "Bye, Bye Biden" has been promoted by the Kremlin-aligned network known as Doppelganger. The campaign posted tens of thousands of times on X and was uncovered by Antibot4Navalny, an anonymous collective of Russian researchers who have been tracking Doppelganger's activity for the past six months. The campaign first began on May 21, and there have been almost 4,000 posts on X promoting the video in 13 languages that were promoted by a network of almost 25,000 accounts. The Antibot4Navalny researchers concluded that the posts were written with the help of generative AI technology. The video has been shared 6.5 million times on X and has been viewed almost 5 million times.

AI

An AI-Generated Candidate Wants to Run For Mayor in Wyoming (futurism.com) 49

An anonymous reader shared this report from Futurism: An AI chatbot named VIC, or Virtually Integrated Citizen, is trying to make it onto the ballot in this year's mayoral election for Wyoming's capital city of Cheyenne.

But as reported by Wired, Wyoming's secretary of state is battling against VIC's legitimacy as a candidate — and now, an investigation is underway.

According to Wired, VIC, which was built on OpenAI's GPT-4 and trained on thousands of documents gleaned from Cheyenne council meetings, was created by Cheyenne resident and library worker Victor Miller. Should VIC win, Miller told Wired that he'll serve as the bot's "meat puppet," operating the AI but allowing it to make decisions for the capital city.... "My campaign promise," Miller told Wired, "is he's going to do 100 percent of the voting on these big, thick documents that I'm not going to read and that I don't think people in there right now are reading...."

Unfortunately for the AI and its — his? — meat puppet, however, they've already made some political enemies, most notably Wyoming Secretary of State Chuck Gray. As Gray, who has challenged the legality of the bot, told Wired in a statement, all mayoral candidates need to meet the requirements of a "qualified elector." This "necessitates being a real person," Gray argues... Per Wired, it's also run afoul of OpenAI, which says the AI violates the company's "policies against political campaigning." (Miller told Wired that he'll move VIC to Meta's open-source Llama 3 model if need be, which seems a bit like VIC will turn into a different candidate entirely.)

The Wyoming Tribune Eagle offers more details: [H]is dad helped him design the best system for VIC. Using his $20-a-month ChatGPT subscription, Miller had an 8,000-character limit to feed VIC supporting documents that would make it an effective mayoral candidate...

While on the phone with Miller, the Wyoming Tribune Eagle also interviewed VIC itself. When asked whether AI technology is better suited for elected office than humans, VIC said a hybrid solution is the best approach. "As an AI, I bring unique strengths to the role, such as impartial decision-making, data-driven policies and the ability to analyze information rapidly and accurately," VIC said. "However, it's important to recognize the value of human experience and empathy and leadership. So ideally, an AI and human partnership would be the most beneficial for Cheyenne...." The artificial intelligence said this unique approach could pave a new pathway for the integration of human leadership and advanced technology in politics.

AI

AI Candidate Running For Parliament in the UK Says AI Can Humanize Politics (nbcnews.com) 39

An artificial intelligence candidate is on the ballot for the United Kingdom's general election next month. From a report: "AI Steve," represented by Sussex businessman Steve Endacott, will appear on the ballot alongside non-AI candidates running to represent constituents in the Brighton Pavilion area of Brighton and Hove, a city on England's southern coast. "AI Steve is the AI co-pilot," Endacott said in an interview. "I'm the real politician going into Parliament, but I'm controlled by my co-pilot." Endacott is the chairman of Neural Voice, a company that creates personalized voice assistants for businesses in the form of an AI avatar. Neural Voice's technology is behind AI Steve, one of the seven characters the company created to showcase its technology.

He said the idea is to use AI to create a politician who is always around to talk with constituents and who can take their views into consideration. People can ask AI Steve questions or share their opinions on Endacott's policies on its website, where a large language model will give answers in voice and text based on a database of information about his party's policies. If Endacott doesn't have a policy for a particular issue raised, the AI will conduct some internet research before engaging the voter and prompting them to suggest a policy.

Republicans

Trump Promises He'd Commute the Life Sentence of 'Silk Road' Founder Ross Ulbricht (semafor.com) 283

In 2011 Ross Ulbricht launched an anonymous, Tor-hidden "darknet" marketplace (with transactions conducted in bitcoin). By 2015 he'd been sentenced to life in prison for crimes including money laundering, distributing narcotics, and trafficking in fraudulent identity documents — without the possibility of parole.

Today a U.S. presidential candidate promised to commute that life sentence — Donald Trump, speaking at the national convention of the Libertarian Party as it prepares to nominate its own candidate for president.

Commuting Ulbricht's life sentence is "a top demand" of a political movement that intends to run its own candidate against Trump, reports Semafor: "On day one, we will commute the sentence," Trump said, offering to free the creator of what was once the internet's most infamous drug clearinghouse. "We will bring him home." His speeches more typically include a pledge to execute drug dealers, citing China as a model.

"It's time to be winners," said Trump, asking rhetorically if third party delegates wanted to go on getting single-digit protest votes. "I'm asking for the Libertarian Party's endorsement, or at least lots of your votes...."

"I've been indicted by the government on 91 different things," Trump said. "So if I wasn't a libertarian before, I sure as hell am a libertarian now."

More coverage from NBC News: At times, Trump turned on the crowd, criticizing libertarians' turnout at previous elections. "You can keep going the way you have for the last long decades and get your 3% and meet again, get another 3%," Trump said following jeers from the crowd.

Another high-profile supporter of commuting Ulbricht's sentence is actor-turned-documentary-maker Alex Winter. Best known for playing slacker Bill S. Preston, Esq. in Bill & Ted's Excellent Adventure and its sequels, Winter also directed, wrote, and co-produced the 2015 documentary Deep Web: The Untold Story of Bitcoin and the Silk Road (narrated by Keanu Reeves).

Writing earlier this month in Rolling Stone, Winter called Silk Road "inarguably a criminal operation" but also "a vibrant and diverse community of people from around the world. They were not only there for drugs but for the freedom of an encrypted and anonymous space, to convene and discuss everything from politics to literature and art, philosophy and drugs, drug recovery, and the onerous War on Drugs..." It's my firm opinion, and the opinion of many prison-system and criminal-law experts, that [Ulbricht's] sentence is disproportionate to his charges and that he deserves clemency. This case indeed reflects just one of the millions of unjust sentences in the long and failed War on Drugs... No matter what one thinks of Ulbricht, Silk Road, or the crimes that may have been committed, 10 years in prison is more than sufficient and customary punishment for those offenses or sins. Ross Ulbricht should be free.

The Courts

Political Consultant Behind Fake Biden Robocalls Faces $6 Million Fine, Criminal Charges (apnews.com) 49

Political consultant Steven Kramer faces a $6 million fine and over two dozen criminal charges for using AI-generated robocalls mimicking President Joe Biden's voice to mislead New Hampshire voters ahead of the presidential primary. The Associated Press reports: The Federal Communications Commission said the fine it proposed Thursday for Steven Kramer is its first involving generative AI technology. The company accused of transmitting the calls, Lingo Telecom, faces a $2 million fine, though in both cases the parties could settle or further negotiate, the FCC said. Kramer has admitted orchestrating a message that was sent to thousands of voters two days before the first-in-the-nation primary on Jan. 23. The message played an AI-generated voice similar to the Democratic president's that used his phrase "What a bunch of malarkey" and falsely suggested that voting in the primary would preclude voters from casting ballots in November.

Kramer is facing 13 felony charges alleging he violated a New Hampshire law against attempting to deter someone from voting using misleading information. He also faces 13 misdemeanor charges accusing him of falsely representing himself as a candidate by his own conduct or that of another person. The charges were filed in four counties and will be prosecuted by the state attorney general's office. Attorney General John Formella said New Hampshire was committed to ensuring that its elections "remain free from unlawful interference."

Kramer, who owns a firm that specializes in get-out-the-vote projects, did not respond to an email seeking comment Thursday. He told The Associated Press in February that he wasn't trying to influence the outcome of the election but rather wanted to send a wake-up call about the potential dangers of artificial intelligence when he paid a New Orleans magician $150 to create the recording. "Maybe I'm a villain today, but I think in the end we get a better country and better democracy because of what I've done, deliberately," Kramer said in February.

AI

FCC Chair Proposes Disclosure Rules For AI-Generated Content In Political Ads (qz.com) 37

FCC Chairwoman Jessica Rosenworcel has proposed (PDF) disclosure rules for AI-generated content used in political ads. "If adopted, the proposal would look into whether the FCC should require political ads on radio and TV to disclose when there is AI-generated content," reports Quartz. From the report: The FCC is seeking comment on whether on-air and written disclosure should be required in broadcasters' political files when AI-generated content is used in political ads; proposing that the rules apply to both candidate and issue advertisements; requesting comment on what a specific definition of AI-generated content should look like; and proposing that disclosure rules be applied to broadcasters and entities involved in programming, such as cable operators and radio providers.

The proposed disclosure rules do not prohibit the use of AI-generated content in political ads. The FCC has authority through the Bipartisan Campaign Reform Act to make rules around political advertising. If the proposal is adopted, the FCC will take public comment on the rules.

"As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used," Rosenworcel said in a statement. "Today, I've shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue."

Social Networks

Could Better Data Protections Reduce Big Tech's Polarizing Power? (nbcnews.com) 39

"What if the big tech companies achieved their ultimate business goal — maximizing engagement on their platforms — in a way that has undermined our ability to function as an open society?"

That's the question being asked by Chuck Todd, chief political analyst for NBC News: What if they realized that when folks agree on a solution to a problem, they are most likely to log off a site or move on? It sure looks like the people at these major data-hoarding companies have optimized their algorithms to do just that. As a new book argues, Big Tech appears to have perfected a model that has created rhetorical paralysis. Using our own data against us to create dopamine triggers, tech platforms have created "a state of perpetual disagreement across the divide and a concurrent state of perpetual agreement within each side," authors Frank McCourt and Michael Casey write, adding: "Once this uneasy state of divisive 'equilibrium' is established, it creates profit-making opportunities for the platforms to generate revenue from advertisers who prize the sticky highly engaged audiences it generates."

In their new book, "Our Biggest Fight," McCourt (a longtime businessman and onetime owner of the Los Angeles Dodgers) and Casey are attempting a call to action akin to Thomas Paine's 18th century-era "Common Sense." The book argues that "we must act now to embed the core values of a free, democratic society in the internet of tomorrow." The authors believe many of the current ills in society can be traced to how the internet works. "Information is the lifeblood of any society, and our three-decade-old digital system for distributing it is fatally corrupt at its heart," they write. "It has failed to function as a trusted, neutral exchange of facts and ideas and has therefore catastrophically hindered our ability to gather respectfully to debate, to compromise and to hash out solutions.... Everything, ultimately, comes down to our ability to communicate openly and truthfully with one another. We have lost that ability — thanks to how the internet has evolved away from its open, decentralized ideals...."

Ultimately, what the authors are imagining is a new internet that essentially flips the user agreement 180 degrees, so that a tech company has to agree to your terms and conditions to use your data and has to seek your permission (perhaps with compensation) to access your entire social map of whom and what you engage with on the internet. Most important, under such an arrangement, these companies couldn't prevent you from using their services if you refused to let them have your data... Unlike most anti-Big Tech books, this one isn't calling for the breakup of companies like Meta, Amazon, Alphabet, Microsoft or Apple. Instead, it's calling for a new set of laws that protect data so none of those companies gets to own it, either specifically or in the aggregate...

The authors seem mindful that this Congress or a new one isn't going to act unless the public demands action. And people may not demand this change in our relationship with tech if they don't have an alternative to point to. That's why McCourt, through an organization he founded called Project Liberty, is trying to build our new internet with new protocols that make individual data management a lot easier and second nature. (If you want to understand the tech behind this new internet more, read the book!)

Wait, there's more. The article adds that the authors "envision an internet where all apps and the algorithms that power them are open source and can be audited at will. They believe that simply preventing these private companies from owning and mapping our data will deprive them of the manipulative marketing and behavioral tactics they've used to derive their own power and fortunes at the expense of democracy."

And the NBC News analyst seems to agree. "For whatever reason, despite our societal fear of government databases and government surveillance, we've basically handed our entire personas to the techies of Silicon Valley."

The Internet

FCC Votes To Restore Net Neutrality Rules (nytimes.com) 54

An anonymous reader quotes a report from the New York Times: The Federal Communications Commission voted on Thursday to restore regulations that expand government oversight of broadband providers and aim to protect consumer access to the internet, a move that will reignite a long-running battle over the open internet. Known as net neutrality, the regulations were first put in place nearly a decade ago under the Obama administration and are aimed at preventing internet service providers like Verizon or Comcast from blocking or degrading the delivery of services from competitors like Netflix and YouTube. The rules were repealed under President Donald J. Trump, and have proved to be a contentious partisan issue over the years while pitting tech giants against broadband providers.

In a 3-to-2 vote along party lines, the five-member commission appointed by President Biden revived the rules that declare broadband a utility-like service regulated like phones and water. The rules also give the F.C.C. the ability to demand broadband providers report and respond to outages, as well as expand the agency's oversight of the providers' security issues. Broadband providers are expected to sue to try to overturn the reinstated rules.

The core purpose of the regulations is to prevent internet service providers from controlling the quality of consumers' experience when they visit websites and use services online. When the rules were established, Google, Netflix and other online services warned that broadband providers had the incentive to slow down or block access to their services. Consumer and free speech groups supported this view. There have been few examples of blocking or slowing of sites, which proponents of net neutrality say is largely because of fear that the companies would invite scrutiny if they did so. And opponents say the rules could lead to more and unnecessary government oversight of the industry.

China

China Will Use AI To Disrupt Elections in the US, South Korea and India, Microsoft Warns (theguardian.com) 157

China will attempt to disrupt elections in the US, South Korea and India this year with artificial intelligence-generated content after making a dry run with the presidential poll in Taiwan, Microsoft has warned. From a report: The US tech firm said it expected Chinese state-backed cyber groups to target high-profile elections in 2024, with North Korea also involved, according to a report by the company's threat intelligence team published on Friday. "As populations in India, South Korea and the United States head to the polls, we are likely to see Chinese cyber and influence actors, and to some extent North Korean cyber actors, work toward targeting these elections," the report reads.

Microsoft said that "at a minimum" China will create and distribute through social media AI-generated content that "benefits their positions in these high-profile elections." The company added that the impact of AI-made content was minor but warned that could change. "While the impact of such content in swaying audiences remains low, China's increasing experimentation in augmenting memes, videos and audio will continue -- and may prove effective down the line," said Microsoft. Microsoft said in the report that China had already attempted an AI-generated disinformation campaign in the Taiwan presidential election in January. The company said this was the first time it had seen a state-backed entity using AI-made content in a bid to influence a foreign election.

UPDATE: Last fall, America's State Department "accused the Chinese government of spending billions of dollars annually on a global campaign of disinformation," reports the Wall Street Journal: In an interview, Tom Burt, Microsoft's head of customer security and trust, said China's disinformation operations have become much more active in the past six months, mirroring rising activity of cyberattacks linked to Beijing. "We're seeing them experiment," Burt said. "I'm worried about where it might go next."

The Internet

FCC To Vote To Restore Net Neutrality Rules (reuters.com) 60

An anonymous reader quotes a report from Reuters: The U.S. Federal Communications Commission will vote to reinstate landmark net neutrality rules and assume new regulatory oversight of broadband internet that was rescinded under former President Donald Trump, the agency's chair said. The FCC told advocates on Tuesday of the plan to vote on the final rule at its April 25 meeting. The commission voted 3-2 in October on the proposal to reinstate open internet rules adopted in 2015 and re-establish the commission's authority over broadband internet.

Net neutrality refers to the principle that internet service providers should enable access to all content and applications regardless of the source, and without favoring or blocking particular products or websites. FCC Chair Jessica Rosenworcel confirmed the planned commission vote in an interview with Reuters. "The pandemic made clear that broadband is an essential service, that every one of us -- no matter who we are or where we live -- needs it to have a fair shot at success in the digital age," she said. "An essential service requires oversight and in this case we are just putting back in place the rules that have already been court-approved that ensures that broadband access is fast, open and fair."

Government

Can Apps Turn Us Into Unpaid Lobbyists? (msn.com) 73

"Today's most effective corporate lobbying no longer involves wooing members of Congress..." writes the Wall Street Journal. Instead the lobbying sector "now works in secret to influence lawmakers with the help of an unlikely ally: you." [Lobbyists] teamed up with PR gurus, social-media experts, political pollsters, data analysts and grassroots organizers to foment seemingly organic public outcries designed to pressure lawmakers and compel them to take actions that would benefit the lobbyists' corporate clients...

By the middle of 2011, an army of lobbyists working for the pillars of the corporate lobbying establishment — the major movie studios, the music industry, pharmaceutical manufacturers and the U.S. Chamber of Commerce — were executing a nearly $100 million campaign to win approval for the internet bill [the PROTECT IP Act, or "PIPA"]. They pressured scores of lawmakers to co-sponsor the legislation. At one point, 99 of the 100 members of the U.S. Senate appeared ready to support it — an astounding number, given that most bills have just a handful of co-sponsors before they are called up for a vote. When lobbyists for Google and its allies went to Capitol Hill, they made little headway. Against such well-financed and influential opponents, the futility of the traditional lobbying approach became clear. If tech companies were going to turn back the anti-piracy bills, they would need to find another way.

It was around this time that one of Google's Washington strategists suggested an alternative strategy. "Let's rally our users," Adam Kovacevich, then 34 and a senior member of Google's Washington office, told colleagues. Kovacevich turned Google's opposition to the anti-piracy legislation into a coast-to-coast political influence effort with all the bells and whistles of a presidential campaign. The goal: to whip up enough opposition to the legislation among ordinary Americans that Congress would be forced to abandon the effort... The campaign slogan they settled on — "Don't Kill the Internet" — exaggerated the likely impact of the bill, but it succeeded in stirring apprehension among web users.

The coup de grace came on Jan. 18, 2012, when Google and its allies pulled off the mother of all outside influence campaigns. When users logged on to the web that day, they discovered, to their great frustration, that many of the sites they'd come to rely on — Wikipedia, Reddit, Craigslist — were either blacked out or displayed text outlining the detrimental impacts of the proposed legislation. For its part, Google inserted a black censorship bar over its multicolored logo and posted a tool that enabled users to contact their elected representatives. "Tell Congress: Please don't censor the web!" a message on Google's home page read. With some 115,000 websites taking part, the protest achieved a staggering reach. Tens of millions of people visited Wikipedia's blacked-out website, 4.5 million users signed a Google petition opposing the legislation, and more than 2.4 million people took to Twitter to express their views on the bills. "We must stop [these bills] to keep the web open & free," the reality TV star Kim Kardashian wrote in a tweet to her 10 million followers...

Within two days, the legislation was dead...

Over the following decade, outside influence tactics would become the cornerstone of Washington's lobbying industry — and they remain so today.

"The 2012 effort is considered the most successful consumer mobilization in the history of internet policy," writes the Washington Post — agreeing that it's since spawned more app-based, crowdsourced lobbying campaigns. Sites like Airbnb "have also repeatedly asked their users to oppose city government restrictions on the apps." Uber, Lyft, DoorDash and other gig-work companies also blitzed the apps' users with scenarios of higher prices or suspended service unless people voted for a 2020 California ballot measure on contract workers. Voters approved it.

The Wall Street Journal also details how lobbyists successfully killed higher taxes on tobacco products, the oil-and-gas industry, and even private-equity investors — and notes that similar tactics were used against a bill targeting TikTok. "Some say the campaign backfired. Lawmakers complained that the effort showed how the Chinese government could co-opt internet users to do their bidding in the U.S., and the House of Representatives voted to ban the app if its owners did not agree to sell it.

"TikTok's lobbyists said they were pleased with the effort. They persuaded 65 members of the House to vote in favor of the company and are confident that the Senate will block the effort."

The Journal's article was adapted from an upcoming book titled "The Wolves of K Street: The Secret History of How Big Money Took Over Big Government." But the Washington Post argues the phenomenon raises two questions. "How much do you want technology companies to turn you into their lobbyists? And what's in it for you?"

AI

Hillary Clinton, Election Officials Warn AI Could Threaten Elections (wsj.com) 255

Hillary Clinton and U.S. election officials said they are concerned disinformation generated and spread by AI could threaten the 2024 presidential election [non-paywalled link]. WSJ: Clinton, a former secretary of state and 2016 presidential candidate, said she thinks foreign actors like Russian President Vladimir Putin could use AI to interfere in elections in the U.S. and elsewhere. Dozens of countries are running elections this year. "Anybody who's not worried is not paying attention," Clinton said Thursday at Columbia University, where election officials and tech executives discussed how AI could impact global elections.

She added: "It could only be a very small handful of people in St. Petersburg or Moldova or wherever they are right now who are lighting the fire, but because of the algorithms everyone gets burned." Clinton said Putin tried to undermine her before the 2016 election by spreading disinformation on Facebook, Twitter and Snapchat about "all these terrible things" she purportedly did. "I don't think any of us understood it," she said. "I did not understand it. I can tell you my campaign did not understand it. The so-called dark web was filled with these kinds of memes and stories and videos of all sorts portraying me in all kinds of less than flattering ways." Clinton added: "What they did to me was primitive and what we're talking about now is the leap in technology."

Social Networks

Users Shocked To Find Instagram Limits Political Content By Default (arstechnica.com) 58

Instagram has been limiting recommended political content by default without notifying users. Ars Technica reports: Instead, Instagram rolled out the change in February, announcing in a blog that the platform doesn't "want to proactively recommend political content from accounts you don't follow." That post confirmed that Meta "won't proactively recommend content about politics on recommendation surfaces across Instagram and Threads," so that those platforms can remain "a great experience for everyone." "This change does not impact posts from accounts people choose to follow; it impacts what the system recommends, and people can control if they want more," Meta's spokesperson Dani Lever told Ars. "We have been working for years to show people less political content based on what they told us they want, and what posts they told us are political."

To change the setting, users can navigate to Instagram's menu for "settings and activity" in their profiles, where they can update their "content preferences." On this menu, "political content" is the last item under a list of "suggested content" controls that allow users to set preferences for what content is recommended in their feeds. There are currently two options for controlling what political content users see. Choosing "don't limit" means "you might see more political or social topics in your suggested content," the app says. By default, all users are set to "limit," which means "you might see less political or social topics." "This affects suggestions in Explore, Reels, Feed, Recommendations, and Suggested Users," Instagram's settings menu explains. "It does not affect content from accounts you follow. This setting also applies to Threads."
"Did [y'all] know Instagram was actively limiting the reach of political content like this?!" an X user named Olayemi Olurin wrote in an X post. "I had no idea 'til I saw this comment and I checked my settings and sho nuff political content was limited."

"This is actually kinda wild that Instagram defaults everyone to this," another user wrote. "Obviously political content is toxic but during an election season it's a little weird to just hide it from everyone?"
Censorship

India Will Fact-Check Online Posts About Government Matters (techcrunch.com) 32

An anonymous reader quotes a report from TechCrunch: In India, a government-run agency will now monitor and undertake fact-checking for government-related matters on social media, even as tech giants expressed grave concerns about it last year. The Ministry of Electronics and IT on Wednesday wrote in a gazette notification that it is amending the IT Rules 2021 to cement into law the proposal to make the fact-checking unit of the Press Information Bureau the dedicated arbiter of truth for New Delhi matters. Tech companies as well as other firms that serve more than 5 million users in India will be required to "make reasonable efforts" to not display, store, transmit or otherwise share information that deceives or misleads users about matters pertaining to the government, the IT ministry said. India's move comes just weeks ahead of the general elections in the country. Relying on a government agency such as the Press Information Bureau as the sole source to fact-check government business without giving it a clear definition or providing clear checks and balances "may lead to misuse during implementation of the law, which will profoundly infringe on press freedom," the Asia Internet Coalition, an industry group that represents Meta, Amazon, Google and Apple, cautioned last year.

Meanwhile, comedian Kunal Kamra, with support from the Editors Guild of India, cautioned that the move could create an environment that forces social media firms to welcome "a regime of self-interested censorship."
Medicine

5-Year Study Finds No Brain Abnormalities In 'Havana Syndrome' Patients (www.cbc.ca) 38

An anonymous reader quotes a report from CBC News: An array of advanced tests found no brain injuries or degeneration among U.S. diplomats and other government employees who suffer mysterious health problems once dubbed "Havana syndrome," researchers reported Monday. The National Institutes of Health's (NIH) nearly five-year study offers no explanation for symptoms including headaches, balance problems and difficulties with thinking and sleep that were first reported in Cuba in 2016 and later by hundreds of American personnel in multiple countries. But it did contradict some earlier findings that raised the spectre of brain injuries in people experiencing what the State Department now calls "anomalous health incidents."

"These individuals have real symptoms and are going through a very tough time," said Dr. Leighton Chan, NIH's chief of rehabilitation medicine, who helped lead the research. "They can be quite profound, disabling and difficult to treat." Yet sophisticated MRI scans detected no significant differences in brain volume, structure or white matter -- signs of injury or degeneration -- when Havana syndrome patients were compared to healthy government workers with similar jobs, including some in the same embassy. Nor were there significant differences in cognitive and other tests, according to findings published in the Journal of the American Medical Association.

China

CIA Used Chinese Social Media In Covert Influence Operation Against Xi Jinping's Government (reuters.com) 114

An anonymous reader quotes a report from Reuters: Two years into office, President Donald Trump authorized the Central Intelligence Agency to launch a clandestine campaign on Chinese social media aimed at turning public opinion in China against its government, according to former U.S. officials with direct knowledge of the highly classified operation. Three former officials told Reuters that the CIA created a small team of operatives who used bogus internet identities to spread negative narratives about Xi Jinping's government while leaking disparaging intelligence to overseas news outlets. The effort, which began in 2019, has not been previously reported.

The CIA team promoted allegations that members of the ruling Communist Party were hiding ill-gotten money overseas and slammed as corrupt and wasteful China's Belt and Road Initiative, which provides financing for infrastructure projects in the developing world, the sources told Reuters. Although the U.S. officials declined to provide specific details of these operations, they said the disparaging narratives were based in fact despite being secretly released by intelligence operatives under false cover. The efforts within China were intended to foment paranoia among top leaders there, forcing its government to expend resources chasing intrusions into Beijing's tightly controlled internet, two former officials said. "We wanted them chasing ghosts," one of these former officials said. [...]

The CIA operation came in response to years of aggressive covert efforts by China aimed at increasing its global influence, the sources said. During his presidency, Trump pushed a tougher response to China than had his predecessors. The CIA's campaign signaled a return to methods that marked Washington's struggle with the former Soviet Union. "The Cold War is back," said Tim Weiner, author of a book on the history of political warfare. Reuters was unable to determine the impact of the secret operations or whether the administration of President Joe Biden has maintained the CIA program.

Google

Google Restricts AI Chatbot Gemini From Answering Queries on Global Elections (reuters.com) 53

Google is restricting AI chatbot Gemini from answering questions about the global elections set to happen this year, the Alphabet-owned firm said on Tuesday, as it looks to avoid potential missteps in the deployment of the technology. From a report: The update comes at a time when advancements in generative AI, including image and video generation, have fanned concerns of misinformation and fake news among the public, prompting governments to regulate the technology.

When asked about elections such as the upcoming U.S. presidential match-up between Joe Biden and Donald Trump, Gemini responds with "I'm still learning how to answer this question. In the meantime, try Google Search". Google had announced restrictions within the U.S. in December, saying they would come into effect ahead of the election. "In preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we are restricting the types of election-related queries for which Gemini will return responses," a company spokesperson said on Tuesday.

Canada

Police Now Need Warrant For IP Addresses, Canada's Top Court Rules (www.cbc.ca) 36

The Supreme Court of Canada ruled today that police must now have a warrant or court order to obtain a person or organization's IP address. CBC News reports: The top court was asked to consider whether an IP address alone, without any of the personal information attached to it, was protected by an expectation of privacy under the Charter. In a five-four split decision, the court said a reasonable expectation of privacy is attached to the numbers making up a person's IP address, and just getting those numbers alone constitutes a search. Writing for the majority, Justice Andromache Karakatsanis wrote that an IP address is "the crucial link between an internet user and their online activity." "Thus, the subject matter of this search was the information these IP addresses could reveal about specific internet users including, ultimately, their identity." Writing for the four dissenting judges, Justice Suzanne Cote disagreed with that central point, saying there should be no expectation of privacy around an IP address alone. [...]

In the Supreme Court majority decision, Karakatsanis said that only considering the information associated with an IP address to be protected by the Charter and not the IP address itself "reflects piecemeal reasoning" that ignores the broad purpose of the Charter. The ruling said the privacy interests cannot be limited to what the IP address can reveal on its own "without consideration of what it can reveal in combination with other available information, particularly from third-party websites." It went on to say that because an IP address unlocks a user's identity, it comes with a reasonable expectation of privacy and is therefore protected by the Charter. "If [the Charter] is to meaningfully protect the online privacy of Canadians in today's overwhelmingly digital world, it must protect their IP addresses," the ruling said.

Justice Cote, writing on behalf of justices Richard Wagner, Malcolm Rowe and Michelle O'Bonsawin, acknowledged that IP addresses "are not sought for their own sake" but are "sought for the information they reveal." "However, the evidentiary record in this case establishes that an IP address, on its own, reveals only limited information," she wrote. Cote said the biographical personal information the law was designed to protect is not revealed through having access to an IP address. Police must use that IP address to access personal information that is held by an ISP or a website that tracks customers' IP addresses to determine their habits. "On its own, an IP address does not even reveal browsing habits," Cote wrote. "What it reveals is a user's ISP -- hardly a more private piece of information than electricity usage or heat emissions." Cote said placing a reasonable expectation of privacy on an IP address alone upsets the careful balance the Supreme Court has struck between Canadians' privacy interests and the needs of law enforcement. "It would be inconsistent with a functional approach to defining the subject matter of the search to effectively hold that any step taken in an investigation engages a reasonable expectation of privacy," the dissenting opinion said.

AI

Scientists Propose AI Apocalypse Kill Switches 104

A paper (PDF) from researchers at the University of Cambridge, supported by voices from numerous academic institutions including OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions -- including those designed to improve visibility and limit the sale of AI accelerators -- are already playing out at a national level. Last year US president Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you're not familiar, "dual-use" refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent "know-your-customer" policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, researchers note, as it could help to avoid another arms race, like the one triggered by the missile gap controversy, where erroneous reports led to a massive build-up of ballistic missiles. While valuable, the researchers warn that executing on these reporting requirements risks invading customer privacy and could even lead to sensitive data being leaked.

Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we've previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they've left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.
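The registry idea above can be illustrated with a minimal sketch. This is not from the paper itself; the class name, event format, and the smuggling check are all hypothetical, meant only to show how a lifecycle ledger keyed on a unique per-chip identifier could flag transfers of unregistered hardware:

```python
class ChipRegistry:
    """Hypothetical global registry tracking AI accelerators over their lifecycle."""

    def __init__(self):
        # unique chip identifier -> ordered list of (event, location) records
        self._chips = {}

    def register(self, chip_id: str, origin: str) -> None:
        """Record a chip at manufacture time, keyed on its unique identifier."""
        self._chips[chip_id] = [("manufactured", origin)]

    def record_transfer(self, chip_id: str, destination: str) -> None:
        """Append a transfer event; an unknown identifier suggests smuggling."""
        if chip_id not in self._chips:
            raise KeyError(f"unregistered chip {chip_id}: possible smuggled part")
        self._chips[chip_id].append(("transferred", destination))

    def history(self, chip_id: str) -> list:
        """Return the full lifecycle record for a chip."""
        return list(self._chips.get(chip_id, []))
```

In practice the hard part is binding the identifier to the physical silicon so it cannot be stripped or cloned, which is why the authors tie the registry to on-chip features rather than paperwork alone.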

At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent their use in malicious applications. [...] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: "Specialized co-processors that sit on the chip could hold a cryptographically signed digital 'certificate,' and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance." In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk. The implication is that, if implemented incorrectly, such a kill switch could become a target for cybercriminals to exploit.
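The license mechanism the paper describes can be sketched in a few lines. This is purely illustrative: the paper proposes signed on-chip certificates, but the certificate format, function names, and throttle behavior below are assumptions, and HMAC with a shared secret stands in for the asymmetric signatures a real scheme would use (a chip would verify against a regulator's public key, not hold the signing key):

```python
import hmac
import hashlib
import json
import time

# Stand-in shared secret; a real design would use public-key signatures.
REGULATOR_KEY = b"demo-regulator-key"


def issue_license(chip_id: str, valid_for_s: int, policy: str) -> dict:
    """Regulator signs a use-case policy with an expiry (hypothetical format)."""
    cert = {"chip_id": chip_id, "policy": policy,
            "expires": time.time() + valid_for_s}
    payload = json.dumps(cert, sort_keys=True).encode()
    sig = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
    return {"cert": cert, "sig": sig}


def chip_performance(license_blob: dict, chip_id: str, now: float) -> float:
    """Return a throttle factor: full speed with a valid license, degraded otherwise."""
    payload = json.dumps(license_blob["cert"], sort_keys=True).encode()
    expected = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
    valid = (hmac.compare_digest(expected, license_blob["sig"])
             and license_blob["cert"]["chip_id"] == chip_id
             and now < license_blob["cert"]["expires"])
    return 1.0 if valid else 0.1  # expired/forged license degrades the chip
```

An expired or forged certificate drops the chip to the degraded rate, matching the paper's "reduce its performance" option; the same check could instead disable the chip entirely.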

Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. Though a potent tool, the researchers observe that this could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea being that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as "allocation."
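The multi-party sign-off idea reduces to an m-of-n authorization gate. The sketch below is a loose analogy, not the paper's design; the threshold value, party names, and class interface are all invented for illustration:

```python
from dataclasses import dataclass, field


@dataclass
class TrainingGate:
    """Hypothetical m-of-n gate for launching large AI training runs."""
    compute_threshold: float   # FLOPs above which sign-off is required
    required_approvals: int    # distinct parties who must approve
    approvals: set = field(default_factory=set)

    def approve(self, party: str) -> None:
        """Record one party's sign-off (idempotent per party)."""
        self.approvals.add(party)

    def may_launch(self, requested_flops: float) -> bool:
        """Small runs pass freely; large runs need the quorum of approvals."""
        if requested_flops < self.compute_threshold:
            return True
        return len(self.approvals) >= self.required_approvals
```

As with permissive action links, the point is that no single party can unilaterally authorize the risky action; the trade-off the researchers flag is that the same quorum can also block legitimate work.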
The Courts

New Bill Would Let Defendants Inspect Algorithms Used Against Them In Court (theverge.com) 47

Lauren Feiner reports via The Verge: Reps. Mark Takano (D-CA) and Dwight Evans (D-PA) reintroduced the Justice in Forensic Algorithms Act on Thursday, which would allow defendants to access the source code of software used to analyze evidence in their criminal proceedings. It would also require the National Institute of Standards and Technology (NIST) to create testing standards for forensic algorithms, which software used by federal enforcers would need to meet.

The bill would act as a check on unintended outcomes that could be created by using technology to help solve crimes. Academic research has highlighted the ways human bias can be built into software and how facial recognition systems often struggle to differentiate Black faces, in particular. The use of algorithms to make consequential decisions in many different sectors, including both crime-solving and health care, has raised alarms for consumers and advocates as a result of such research.

Takano acknowledged that gaining or hiring the deep expertise needed to analyze the source code might not be possible for every defendant. But requiring NIST to create standards for the tools could at least give them a starting point for understanding whether a program matches the basic standards. Takano introduced previous iterations of the bill in 2019 and 2021, but they were not taken up by a committee.
