AI

FCC Chair Proposes Disclosure Rules For AI-Generated Content In Political Ads (qz.com) 37

FCC Chairwoman Jessica Rosenworcel has proposed (PDF) disclosure rules for AI-generated content used in political ads. "If adopted, the proposal would look into whether the FCC should require political ads on radio and TV to disclose when there is AI-generated content," reports Quartz. From the report: The FCC is seeking comment on whether on-air and written disclosure should be required in broadcasters' political files when AI-generated content is used in political ads; proposing that the rules apply to both candidate and issue advertisements; requesting comment on what a specific definition of AI-generated content should look like; and proposing that disclosure rules be applied to broadcasters and entities involved in programming, such as cable operators and radio providers.

The proposed disclosure rules do not prohibit the use of AI-generated content in political ads. The FCC has authority through the Bipartisan Campaign Reform Act to make rules around political advertising. If the proposal is adopted, the FCC will take public comment on the rules.
"As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used," Rosenworcel said in a statement. "Today, I've shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue."
Social Networks

Could Better Data Protections Reduce Big Tech's Polarizing Power? (nbcnews.com) 39

"What if the big tech companies achieved their ultimate business goal — maximizing engagement on their platforms — in a way that has undermined our ability to function as an open society?"

That's the question being asked by Chuck Todd, chief political analyst for NBC News: What if they realized that when folks agree on a solution to a problem, they are most likely to log off a site or move on? It sure looks like the people at these major data-hoarding companies have optimized their algorithms to do just that. As a new book argues, Big Tech appears to have perfected a model that has created rhetorical paralysis. Using our own data against us to create dopamine triggers, tech platforms have created "a state of perpetual disagreement across the divide and a concurrent state of perpetual agreement within each side," authors Frank McCourt and Michael Casey write, adding: "Once this uneasy state of divisive 'equilibrium' is established, it creates profit-making opportunities for the platforms to generate revenue from advertisers who prize the sticky highly engaged audiences it generates."

In their new book, "Our Biggest Fight," McCourt (a longtime businessman and onetime owner of the Los Angeles Dodgers) and Casey are attempting a call to action akin to Thomas Paine's 18th century-era "Common Sense." The book argues that "we must act now to embed the core values of a free, democratic society in the internet of tomorrow." The authors believe many of the current ills in society can be traced to how the internet works. "Information is the lifeblood of any society, and our three-decade-old digital system for distributing it is fatally corrupt at its heart," they write. "It has failed to function as a trusted, neutral exchange of facts and ideas and has therefore catastrophically hindered our ability to gather respectfully to debate, to compromise and to hash out solutions.... Everything, ultimately, comes down to our ability to communicate openly and truthfully with one another. We have lost that ability — thanks to how the internet has evolved away from its open, decentralized ideals...."

Ultimately, what the authors are imagining is a new internet that essentially flips the user agreement 180 degrees, so that a tech company has to agree to your terms and conditions to use your data and has to seek your permission (perhaps with compensation) to access your entire social map of whom and what you engage with on the internet. Most important, under such an arrangement, these companies couldn't prevent you from using their services if you refused to let them have your data... Unlike most anti-Big Tech books, this one isn't calling for the breakup of companies like Meta, Amazon, Alphabet, Microsoft or Apple. Instead, it's calling for a new set of laws that protect data so none of those companies gets to own it, either specifically or in the aggregate...

The authors seem mindful that this Congress or a new one isn't going to act unless the public demands action. And people may not demand this change in our relationship with tech if they don't have an alternative to point to. That's why McCourt, through an organization he founded called Project Liberty, is trying to build our new internet with new protocols that make individual data management a lot easier and second nature. (If you want to understand the tech behind this new internet more, read the book!)

Wait, there's more. The article adds that the authors "envision an internet where all apps and the algorithms that power them are open source and can be audited at will. They believe that simply preventing these private companies from owning and mapping our data will deprive them of the manipulative marketing and behavioral tactics they've used to derive their own power and fortunes at the expense of democracy."

And the NBC News analyst seems to agree. "For whatever reason, despite our societal fear of government databases and government surveillance, we've basically handed our entire personas to the techies of Silicon Valley."
The Internet

FCC Votes To Restore Net Neutrality Rules (nytimes.com) 54

An anonymous reader quotes a report from the New York Times: The Federal Communications Commission voted on Thursday to restore regulations that expand government oversight of broadband providers and aim to protect consumer access to the internet, a move that will reignite a long-running battle over the open internet. Known as net neutrality, the regulations were first put in place nearly a decade ago under the Obama administration and are aimed at preventing internet service providers like Verizon or Comcast from blocking or degrading the delivery of services from competitors like Netflix and YouTube. The rules were repealed under President Donald J. Trump, and have proved to be a contentious partisan issue over the years while pitting tech giants against broadband providers.

In a 3-to-2 vote along party lines, the five-member commission appointed by President Biden revived the rules that declare broadband a utility-like service regulated like phones and water. The rules also give the F.C.C. the ability to demand broadband providers report and respond to outages, as well as expand the agency's oversight of the providers' security issues. Broadband providers are expected to sue to try to overturn the reinstated rules.

The core purpose of the regulations is to prevent internet service providers from controlling the quality of consumers' experience when they visit websites and use services online. When the rules were established, Google, Netflix and other online services warned that broadband providers had the incentive to slow down or block access to their services. Consumer and free speech groups supported this view. There have been few examples of blocking or slowing of sites, which proponents of net neutrality say is largely because of fear that the companies would invite scrutiny if they did so. And opponents say the rules could lead to more and unnecessary government oversight of the industry.

China

China Will Use AI To Disrupt Elections in the US, South Korea and India, Microsoft Warns (theguardian.com) 157

China will attempt to disrupt elections in the US, South Korea and India this year with artificial intelligence-generated content after making a dry run with the presidential poll in Taiwan, Microsoft has warned. From a report: The US tech firm said it expected Chinese state-backed cyber groups to target high-profile elections in 2024, with North Korea also involved, according to a report by the company's threat intelligence team published on Friday. "As populations in India, South Korea and the United States head to the polls, we are likely to see Chinese cyber and influence actors, and to some extent North Korean cyber actors, work toward targeting these elections," the report reads.

Microsoft said that "at a minimum" China will create and distribute through social media AI-generated content that "benefits their positions in these high-profile elections." The company added that the impact of AI-made content was minor but warned that could change. "While the impact of such content in swaying audiences remains low, China's increasing experimentation in augmenting memes, videos and audio will continue -- and may prove effective down the line," said Microsoft. Microsoft said in the report that China had already attempted an AI-generated disinformation campaign in the Taiwan presidential election in January. The company said this was the first time it had seen a state-backed entity using AI-made content in a bid to influence a foreign election.

UPDATE: Last fall, America's State Department "accused the Chinese government of spending billions of dollars annually on a global campaign of disinformation," reports the Wall Street Journal: In an interview, Tom Burt, Microsoft's head of customer security and trust, said China's disinformation operations have become much more active in the past six months, mirroring rising activity of cyberattacks linked to Beijing. "We're seeing them experiment," Burt said. "I'm worried about where it might go next."
The Internet

FCC To Vote To Restore Net Neutrality Rules (reuters.com) 60

An anonymous reader quotes a report from Reuters: The U.S. Federal Communications Commission will vote to reinstate landmark net neutrality rules and assume new regulatory oversight of broadband internet that was rescinded under former President Donald Trump, the agency's chair said. The FCC told advocates on Tuesday of the plan to vote on the final rule at its April 25 meeting. The commission voted 3-2 in October on the proposal to reinstate open internet rules adopted in 2015 and re-establish the commission's authority over broadband internet.

Net neutrality refers to the principle that internet service providers should enable access to all content and applications regardless of the source, and without favoring or blocking particular products or websites. FCC Chair Jessica Rosenworcel confirmed the planned commission vote in an interview with Reuters. "The pandemic made clear that broadband is an essential service, that every one of us -- no matter who we are or where we live -- needs it to have a fair shot at success in the digital age," she said. "An essential service requires oversight and in this case we are just putting back in place the rules that have already been court-approved that ensures that broadband access is fast, open and fair."

Government

Can Apps Turn Us Into Unpaid Lobbyists? (msn.com) 73

"Today's most effective corporate lobbying no longer involves wooing members of Congress..." writes the Wall Street Journal. Instead the lobbying sector "now works in secret to influence lawmakers with the help of an unlikely ally: you." [Lobbyists] teamed up with PR gurus, social-media experts, political pollsters, data analysts and grassroots organizers to foment seemingly organic public outcries designed to pressure lawmakers and compel them to take actions that would benefit the lobbyists' corporate clients...

By the middle of 2011, an army of lobbyists working for the pillars of the corporate lobbying establishment — the major movie studios, the music industry, pharmaceutical manufacturers and the U.S. Chamber of Commerce — were executing a nearly $100 million campaign to win approval for the internet bill [the PROTECT IP Act, or "PIPA"]. They pressured scores of lawmakers to co-sponsor the legislation. At one point, 99 of the 100 members of the U.S. Senate appeared ready to support it — an astounding number, given that most bills have just a handful of co-sponsors before they are called up for a vote. When lobbyists for Google and its allies went to Capitol Hill, they made little headway. Against such well-financed and influential opponents, the futility of the traditional lobbying approach became clear. If tech companies were going to turn back the anti-piracy bills, they would need to find another way.

It was around this time that one of Google's Washington strategists suggested an alternative strategy. "Let's rally our users," Adam Kovacevich, then 34 and a senior member of Google's Washington office, told colleagues. Kovacevich turned Google's opposition to the anti-piracy legislation into a coast-to-coast political influence effort with all the bells and whistles of a presidential campaign. The goal: to whip up enough opposition to the legislation among ordinary Americans that Congress would be forced to abandon the effort... The campaign slogan they settled on — "Don't Kill the Internet" — exaggerated the likely impact of the bill, but it succeeded in stirring apprehension among web users.

The coup de grace came on Jan. 18, 2012, when Google and its allies pulled off the mother of all outside influence campaigns. When users logged on to the web that day, they discovered, to their great frustration, that many of the sites they'd come to rely on — Wikipedia, Reddit, Craigslist — were either blacked out or displayed text outlining the detrimental impacts of the proposed legislation. For its part, Google inserted a black censorship bar over its multicolored logo and posted a tool that enabled users to contact their elected representatives. "Tell Congress: Please don't censor the web!" a message on Google's home page read. With some 115,000 websites taking part, the protest achieved a staggering reach. Tens of millions of people visited Wikipedia's blacked-out website, 4.5 million users signed a Google petition opposing the legislation, and more than 2.4 million people took to Twitter to express their views on the bills. "We must stop [these bills] to keep the web open & free," the reality TV star Kim Kardashian wrote in a tweet to her 10 million followers...

Within two days, the legislation was dead...

Over the following decade, outside influence tactics would become the cornerstone of Washington's lobbying industry — and they remain so today.

"The 2012 effort is considered the most successful consumer mobilization in the history of internet policy," writes the Washington Post — agreeing that it's since spawned more app-based, crowdsourced lobbying campaigns. Sites like Airbnb "have also repeatedly asked their users to oppose city government restrictions on the apps." Uber, Lyft, DoorDash and other gig work companies also blitzed the apps' users with scenarios of higher prices or suspended service unless people voted for a 2020 California ballot measure on contract workers. Voters approved it.

The Wall Street Journal also details how lobbyists successfully killed higher taxes on tobacco products, the oil-and-gas industry, and even private-equity investors — and notes that similar tactics were used against a bill targeting TikTok. "Some say the campaign backfired. Lawmakers complained that the effort showed how the Chinese government could co-opt internet users to do their bidding in the U.S., and the House of Representatives voted to ban the app if its owners did not agree to sell it.

"TikTok's lobbyists said they were pleased with the effort. They persuaded 65 members of the House to vote in favor of the company and are confident that the Senate will block the effort."

The Journal's article was adapted from an upcoming book titled "The Wolves of K Street: The Secret History of How Big Money Took Over Big Government." But the Washington Post argues the phenomenon raises two questions. "How much do you want technology companies to turn you into their lobbyists? And what's in it for you?"
AI

Hillary Clinton, Election Officials Warn AI Could Threaten Elections (wsj.com) 255

Hillary Clinton and U.S. election officials said they are concerned disinformation generated and spread by AI could threaten the 2024 presidential election [non-paywalled link]. WSJ: Clinton, a former secretary of state and 2016 presidential candidate, said she thinks foreign actors like Russian President Vladimir Putin could use AI to interfere in elections in the U.S. and elsewhere. Dozens of countries are running elections this year. "Anybody who's not worried is not paying attention," Clinton said Thursday at Columbia University, where election officials and tech executives discussed how AI could impact global elections.

She added: "It could only be a very small handful of people in St. Petersburg or Moldova or wherever they are right now who are lighting the fire, but because of the algorithms everyone gets burned." Clinton said Putin tried to undermine her before the 2016 election by spreading disinformation on Facebook, Twitter and Snapchat about "all these terrible things" she purportedly did. "I don't think any of us understood it," she said. "I did not understand it. I can tell you my campaign did not understand it. The so-called dark web was filled with these kinds of memes and stories and videos of all sorts portraying me in all kinds of less than flattering ways." Clinton added: "What they did to me was primitive and what we're talking about now is the leap in technology."

Social Networks

Users Shocked To Find Instagram Limits Political Content By Default (arstechnica.com) 58

Instagram has been limiting recommended political content by default without notifying users. Ars Technica reports: Instead, Instagram rolled out the change in February, announcing in a blog that the platform doesn't "want to proactively recommend political content from accounts you don't follow." That post confirmed that Meta "won't proactively recommend content about politics on recommendation surfaces across Instagram and Threads," so that those platforms can remain "a great experience for everyone." "This change does not impact posts from accounts people choose to follow; it impacts what the system recommends, and people can control if they want more," Meta's spokesperson Dani Lever told Ars. "We have been working for years to show people less political content based on what they told us they want, and what posts they told us are political."

To change the setting, users can navigate to Instagram's menu for "settings and activity" in their profiles, where they can update their "content preferences." On this menu, "political content" is the last item under a list of "suggested content" controls that allow users to set preferences for what content is recommended in their feeds. There are currently two options for controlling what political content users see. Choosing "don't limit" means "you might see more political or social topics in your suggested content," the app says. By default, all users are set to "limit," which means "you might see less political or social topics." "This affects suggestions in Explore, Reels, Feed, Recommendations, and Suggested Users," Instagram's settings menu explains. "It does not affect content from accounts you follow. This setting also applies to Threads."
"Did [y'all] know Instagram was actively limiting the reach of political content like this?!" an X user named Olayemi Olurin wrote in an X post. "I had no idea 'til I saw this comment and I checked my settings and sho nuff political content was limited."

"This is actually kinda wild that Instagram defaults everyone to this," another user wrote. "Obviously political content is toxic but during an election season it's a little weird to just hide it from everyone?"
Censorship

India Will Fact-Check Online Posts About Government Matters (techcrunch.com) 32

An anonymous reader quotes a report from TechCrunch: In India, a government-run agency will now monitor and undertake fact-checking for government-related matters on social media, even as tech giants expressed grave concerns about it last year. The Ministry of Electronics and IT on Wednesday wrote in a gazette notification that it is amending the IT Rules 2021 to cement into law the proposal to make the fact-checking unit of the Press Information Bureau the dedicated arbiter of truth for New Delhi matters. Tech companies as well as other firms that serve more than 5 million users in India will be required to "make reasonable efforts" to not display, store, transmit or otherwise share information that deceives or misleads users about matters pertaining to the government, the IT ministry said. India's move comes just weeks ahead of the general elections in the country. Relying on a government agency such as the Press Information Bureau as the sole source to fact-check government business without giving it a clear definition or providing clear checks and balances "may lead to misuse during implementation of the law, which will profoundly infringe on press freedom," Asia Internet Coalition, an industry group that represents Meta, Amazon, Google and Apple, cautioned last year.

Meanwhile, comedian Kunal Kamra, with support from the Editors Guild of India, cautioned that the move could create an environment that forces social media firms to welcome "a regime of self-interested censorship."
Medicine

5-Year Study Finds No Brain Abnormalities In 'Havana Syndrome' Patients (www.cbc.ca) 38

An anonymous reader quotes a report from CBC News: An array of advanced tests found no brain injuries or degeneration among U.S. diplomats and other government employees who suffer mysterious health problems once dubbed "Havana syndrome," researchers reported Monday. The National Institutes of Health's (NIH) nearly five-year study offers no explanation for symptoms including headaches, balance problems and difficulties with thinking and sleep that were first reported in Cuba in 2016 and later by hundreds of American personnel in multiple countries. But it did contradict some earlier findings that raised the spectre of brain injuries in people experiencing what the State Department now calls "anomalous health incidents."

"These individuals have real symptoms and are going through a very tough time," said Dr. Leighton Chan, NIH's chief of rehabilitation medicine, who helped lead the research. "They can be quite profound, disabling and difficult to treat." Yet sophisticated MRI scans detected no significant differences in brain volume, structure or white matter -- signs of injury or degeneration -- when Havana syndrome patients were compared to healthy government workers with similar jobs, including some in the same embassy. Nor were there significant differences in cognitive and other tests, according to findings published in the Journal of the American Medical Association.

China

CIA Used Chinese Social Media In Covert Influence Operation Against Xi Jinping's Government (reuters.com) 114

An anonymous reader quotes a report from Reuters: Two years into office, President Donald Trump authorized the Central Intelligence Agency to launch a clandestine campaign on Chinese social media aimed at turning public opinion in China against its government, according to former U.S. officials with direct knowledge of the highly classified operation. Three former officials told Reuters that the CIA created a small team of operatives who used bogus internet identities to spread negative narratives about Xi Jinping's government while leaking disparaging intelligence to overseas news outlets. The effort, which began in 2019, has not been previously reported.

The CIA team promoted allegations that members of the ruling Communist Party were hiding ill-gotten money overseas and slammed as corrupt and wasteful China's Belt and Road Initiative, which provides financing for infrastructure projects in the developing world, the sources told Reuters. Although the U.S. officials declined to provide specific details of these operations, they said the disparaging narratives were based in fact despite being secretly released by intelligence operatives under false cover. The efforts within China were intended to foment paranoia among top leaders there, forcing its government to expend resources chasing intrusions into Beijing's tightly controlled internet, two former officials said. "We wanted them chasing ghosts," one of these former officials said. [...]

The CIA operation came in response to years of aggressive covert efforts by China aimed at increasing its global influence, the sources said. During his presidency, Trump pushed a tougher response to China than had his predecessors. The CIA's campaign signaled a return to methods that marked Washington's struggle with the former Soviet Union. "The Cold War is back," said Tim Weiner, author of a book on the history of political warfare. Reuters was unable to determine the impact of the secret operations or whether the administration of President Joe Biden has maintained the CIA program.

Google

Google Restricts AI Chatbot Gemini From Answering Queries on Global Elections (reuters.com) 53

Google is restricting AI chatbot Gemini from answering questions about the global elections set to happen this year, the Alphabet-owned firm said on Tuesday, as it looks to avoid potential missteps in the deployment of the technology. From a report: The update comes at a time when advancements in generative AI, including image and video generation, have fanned concerns of misinformation and fake news among the public, prompting governments to regulate the technology.

When asked about elections such as the upcoming U.S. presidential match-up between Joe Biden and Donald Trump, Gemini responds with "I'm still learning how to answer this question. In the meantime, try Google Search". Google had announced restrictions within the U.S. in December, saying they would come into effect ahead of the election. "In preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we are restricting the types of election-related queries for which Gemini will return responses," a company spokesperson said on Tuesday.

Canada

Police Now Need Warrant For IP Addresses, Canada's Top Court Rules (www.cbc.ca) 36

The Supreme Court of Canada ruled today that police must now have a warrant or court order to obtain a person or organization's IP address. CBC News reports: The top court was asked to consider whether an IP address alone, without any of the personal information attached to it, was protected by an expectation of privacy under the Charter. In a five-four split decision, the court said a reasonable expectation of privacy is attached to the numbers making up a person's IP address, and just getting those numbers alone constitutes a search. Writing for the majority, Justice Andromache Karakatsanis wrote that an IP address is "the crucial link between an internet user and their online activity." "Thus, the subject matter of this search was the information these IP addresses could reveal about specific internet users including, ultimately, their identity." Writing for the four dissenting judges, Justice Suzanne Cote disagreed with that central point, saying there should be no expectation of privacy around an IP address alone. [...]

In the Supreme Court majority decision, Karakatsanis said that only considering the information associated with an IP address to be protected by the Charter and not the IP address itself "reflects piecemeal reasoning" that ignores the broad purpose of the Charter. The ruling said the privacy interests cannot be limited to what the IP address can reveal on its own "without consideration of what it can reveal in combination with other available information, particularly from third-party websites." It went on to say that because an IP address unlocks a user's identity, it comes with a reasonable expectation of privacy and is therefore protected by the Charter. "If [the Charter] is to meaningfully protect the online privacy of Canadians in today's overwhelmingly digital world, it must protect their IP addresses," the ruling said.

Justice Cote, writing on behalf of justices Richard Wagner, Malcolm Rowe and Michelle O'Bonsawin, acknowledged that IP addresses "are not sought for their own sake" but are "sought for the information they reveal." "However, the evidentiary record in this case establishes that an IP address, on its own, reveals only limited information," she wrote. Cote said the biographical personal information the law was designed to protect is not revealed through having access to an IP address. Police must use that IP address to access personal information that is held by an ISP or a website that tracks customers' IP addresses to determine their habits. "On its own, an IP address does not even reveal browsing habits," Cote wrote. "What it reveals is a user's ISP -- hardly a more private piece of information than electricity usage or heat emissions." Cote said placing a reasonable expectation of privacy on an IP address alone upsets the careful balance the Supreme Court has struck between Canadians' privacy interests and the needs of law enforcement. "It would be inconsistent with a functional approach to defining the subject matter of the search to effectively hold that any step taken in an investigation engages a reasonable expectation of privacy," the dissenting opinion said.

AI

Scientists Propose AI Apocalypse Kill Switches 104

A paper (PDF) from researchers at the University of Cambridge, supported by voices from numerous academic institutions including OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions -- including those designed to improve visibility and limit the sale of AI accelerators -- are already playing out at a national level. Last year US president Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you're not familiar, "dual-use" refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent "know-your-customer" policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, researchers note, as it could help to avoid another arms race, like the one triggered by the missile gap controversy, where erroneous reports led to a massive build-up of ballistic missiles. While valuable, they warn that executing on these reporting requirements risks invading customer privacy and could even lead to sensitive data being leaked.

Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we've previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they've left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.
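The paper doesn't publish an implementation, but the registry idea is concrete enough to sketch. Below is a minimal, hypothetical illustration in Python of a lifecycle registry keyed on a per-chip unique identifier, where an ownership transfer of an unregistered chip is flagged as possible smuggling. All names (`ChipRegistry`, `GPU-0001`, etc.) are invented for illustration, not drawn from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ChipRecord:
    chip_id: str  # unique identifier baked into the silicon at manufacture
    custody: list = field(default_factory=list)  # chain of owners over the lifecycle

class ChipRegistry:
    """Hypothetical global registry tracking accelerators after they leave
    their country of origin."""

    def __init__(self):
        self._records = {}

    def register(self, chip_id: str, first_owner: str) -> None:
        """Record a chip at the point of sale."""
        self._records[chip_id] = ChipRecord(chip_id, [first_owner])

    def transfer(self, chip_id: str, new_owner: str) -> None:
        """Record a change of custody; unknown IDs are suspicious."""
        if chip_id not in self._records:
            raise KeyError(f"unregistered chip {chip_id}: possible smuggled component")
        self._records[chip_id].custody.append(new_owner)

    def history(self, chip_id: str) -> list:
        """Return the full custody chain for an audit."""
        return list(self._records[chip_id].custody)

registry = ChipRegistry()
registry.register("GPU-0001", "FabCo")
registry.transfer("GPU-0001", "CloudCo")
print(registry.history("GPU-0001"))  # ['FabCo', 'CloudCo']
```

The interesting design question the paper gestures at is exactly this lookup: a chip whose identifier never appears in the registry, or whose custody chain skips a jurisdiction, becomes evidence of diversion.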

At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent their use in malicious applications. [...] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: "Specialized co-processors that sit on the chip could hold a cryptographically signed digital 'certificate,' and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance." In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk. The implication is that, if implemented incorrectly, such a kill switch could become a target for cybercriminals to exploit.

Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI, however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. Though a potent tool, the researchers observe, this could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea is that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as "allocation."
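The permissive-action-link analogy amounts to a quorum rule: below a compute threshold nothing is gated, above it a minimum number of distinct, recognized parties must approve. A minimal sketch, where the threshold value, quorum size, and signer names are all hypothetical:

```python
def authorize_training(requested_flops: float,
                       approvals: set,
                       threshold_flops: float = 1e26,
                       quorum: int = 2,
                       signers=("regulator", "cloud_provider", "auditor")) -> bool:
    """Quorum-style authorization for large training runs.

    Runs below the compute threshold need no sign-off; above it, at
    least `quorum` distinct recognized signers must approve.
    """
    if requested_flops < threshold_flops:
        return True
    valid = approvals & set(signers)  # ignore unrecognized parties
    return len(valid) >= quorum
```

The researchers' worry about blocking desirable AI shows up here as a tuning problem: set `threshold_flops` too low or `quorum` too high and legitimate work stalls behind the lock.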

The Courts

New Bill Would Let Defendants Inspect Algorithms Used Against Them In Court (theverge.com) 47

Lauren Feiner reports via The Verge: Reps. Mark Takano (D-CA) and Dwight Evans (D-PA) reintroduced the Justice in Forensic Algorithms Act on Thursday, which would allow defendants to access the source code of software used to analyze evidence in their criminal proceedings. It would also require the National Institute of Standards and Technology (NIST) to create testing standards for forensic algorithms, which software used by federal enforcers would need to meet.

The bill would act as a check on unintended outcomes that could be created by using technology to help solve crimes. Academic research has highlighted the ways human bias can be built into software and how facial recognition systems often struggle to differentiate Black faces, in particular. The use of algorithms to make consequential decisions in many different sectors, including both crime-solving and health care, has raised alarms for consumers and advocates as a result of such research.

Takano acknowledged that gaining or hiring the deep expertise needed to analyze the source code might not be possible for every defendant. But requiring NIST to create standards for the tools could at least give them a starting point for understanding whether a program matches the basic standards. Takano introduced previous iterations of the bill in 2019 and 2021, but they were not taken up by a committee.

AI

Tech Companies Plan To Sign Accord To Combat AI-Generated Election Trickery (go.com) 82

At least six major tech companies, including Adobe, Google, Meta, Microsoft, OpenAI and TikTok, plan to sign an agreement this week that details how they'll attempt to stop the use of AI-generated election misinformation and deepfakes. ABC News reports: "In a critical year for global elections, technology companies are working on an accord to combat the deceptive use of AI targeted at voters," said a joint statement from several companies Tuesday. "Adobe, Google, Meta, Microsoft, OpenAI, TikTok and others are working jointly toward progress on this shared objective and we hope to finalize and present details on Friday at the Munich Security Conference."

The companies declined to share details of what's in the agreement. Many have already said they're putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they're seeing is real.

Social Networks

Instagram and Threads Will Stop Recommending Political Content (theverge.com) 19

In a blog post today, Meta announced that it'll stop showing political content across Instagram and Threads unless users explicitly choose to have it recommended to them. The Verge reports: Meta announced that it's expanding an existing Reels policy that limits political content from people you're not following (including posts about social issues) from appearing in recommended feeds to more broadly cover the company's Threads and Instagram platforms. "Our goal is to preserve the ability for people to choose to interact with political content, while respecting each person's appetite for it," said Instagram head Adam Mosseri, announcing on Threads that the changes will be applied over the next few weeks. Facebook is also expected to roll out these new controls at a later, undisclosed date.

Users who still want to have content "likely to mention governments, elections, or social topics that affect a group of people and/or society at large" recommended to them can choose to turn off this limitation within their account settings. The changes will apply to public accounts when enabled and only in places where content is being recommended, such as Explore, Reels, in-feed recommendations, and suggested users. The update won't change how users view content from accounts they choose to follow, so accounts that aren't eligible to be recommended can still post political content to their followers via their feed and Stories.

For creators, Meta says that "if your account is not eligible to be recommended, none of your content will be recommended regardless of whether or not all of your content goes against our recommendations guidelines." When these changes do go live, professional accounts on Instagram will be able to use the Account Status feature to check if posting political content is impacting their eligibility for recommendation. Professional accounts can also use Account Status to contest decisions that revoke this eligibility, alongside editing, removing, or pausing politically related posts until the account is eligible to be recommended again.
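Taken together, the rules described above form a small decision procedure: followed feeds and Stories are untouched, ineligible accounts are never recommended, and political content on recommendation surfaces requires the user's opt-in. A sketch of that logic, with all flag and surface names invented for illustration (Meta's actual implementation is not public):

```python
# Hypothetical names for the surfaces Meta says are affected.
RECOMMENDATION_SURFACES = {"explore", "reels", "in_feed_recs", "suggested_users"}


def show_post(post_is_political: bool,
              author_followed: bool,
              author_eligible: bool,
              user_opted_in: bool,
              surface: str) -> bool:
    """Decide whether a post may appear, per the rules described above."""
    if author_followed or surface not in RECOMMENDATION_SURFACES:
        return True   # followed feeds and Stories are unaffected
    if not author_eligible:
        return False  # ineligible accounts are never recommended at all
    if post_is_political and not user_opted_in:
        return False  # political content is recommended only on opt-in
    return True
```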

AI

Commerce Secretary 'Very Worried' About AI Being Used Nefariously in 2024 Election (go.com) 60

Commerce Secretary Gina Raimondo said she is "very worried" about AI being used nefariously in the 2024 election, she told reporters at a press conference in Washington, D.C. on Thursday. From a report: "AI can do amazing things and AI can disrupt our elections, here and around the world," she said. "We're already starting to see it." Raimondo was asked by ABC News about the robocall sent on the day of the New Hampshire primary purporting to be from President Biden and spreading misinformation about voting times.

She said the government is going to work "extensively" to start putting out an AI framework that helps people -- including journalists -- be able to decipher what is real and what is fake. The Commerce Secretary added that AI companies want to do the right thing based on her conversations with them. "Am I worried? Yes," she said. "Do I think we have the tools to protect our election and our democracy? Yes. Do I feel based on my interactions with the private sector that they want to do the right thing? By and large, Yes. It's a big threat."

The Internet

Pakistan Cuts Off Phone and Internet Services On Election Day (techcrunch.com) 36

An anonymous reader quotes a report from TechCrunch: Pakistan has temporarily suspended mobile phone network and internet services across the country to combat any "possible threats," a top ministry said, as the South Asian nation commences its national election. In a statement, Pakistan's interior ministry said the move was prompted by recent incidents of terrorism in the country. The internet was accessible through wired broadband connections, local journalists posted on X earlier Thursday. But NetBlocks, an independent service that tracks outages, said later that Pakistan had started to block internet services as well. The polls have opened in the nation and will close at 5 p.m. The interior ministry didn't say when it will switch back on the mobile services.

AI

OpenAI Suspends Developer Behind Dean Phillips Bot 36

theodp writes: OpenAI has banned the developer of a bot that mimicked Democratic White House hopeful Rep. Dean Phillips, the first known instance of the maker of ChatGPT restricting the use of AI in political campaigns. OpenAI suspended the account of the start-up Delphi, which had been contracted to build Dean.Bot, a chatbot that could talk to voters in real time via a website.

"Anyone who builds with our tools must follow our usage policies," a spokesperson for OpenAI said in a statement shared with Axios on Sunday. "We recently removed a developer account that was knowingly violating our API usage policies which disallow political campaigning, or impersonating an individual without consent." OpenAI apparently is not a fan of Richard Stallman's 'freedom 0' tenet, which argues software users should have the freedom to run programs as they wish, in order to do what they wish (Stallman is careful to note this freedom doesn't make one exempt from laws).

The suspension and subsequent bot removal occurred ahead of Tuesday's New Hampshire primary, where Phillips continues his long-shot presidential bid against President Biden.

Slashdot Top Deals