Japan

Japan Mandates App To Ensure National ID Cards Aren't Forged (theregister.com) 34

The Japanese government has released details of an app that verifies the legitimacy of its troubled My Number Card -- a national identity document. From a report: Beginning in 2015, every resident of Japan was assigned a 12 digit My Number that paved the way for linking social security, taxation, disaster response and other government services to both the number itself and a smartcard. The plan was to banish bureaucracy and improve public service delivery -- but that didn't happen.

My Number Card ran afoul of data breaches, reports of malfunctioning card readers, and database snafus that linked cards to other citizens' bank accounts. Public trust in the scheme fell, and adoption stalled. Now, according to Japan's Digital Ministry, counterfeit cards are proliferating to help miscreants purchase goods -- particularly mobile phones -- under fake identities. Digital minister Taro Kono yesterday presented his solution to the counterfeits: a soon-to-be-mandatory app that confirms the legitimacy of the card. The app uses the camera on a smartphone to read information printed on the card -- like date of birth and name. It compares those details to what it reads from info stored in the smartcard's resident chip, and confirms the data match without the user ever needing to enter their four-digit PIN.
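The verification flow described above -- OCR the printed face of the card, read the chip, and compare -- can be sketched in a few lines. This is purely an illustrative sketch; the app's actual internals are not public, and every name and field below is hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CardDetails:
    name: str
    date_of_birth: str  # e.g. "1990-04-01"

def verify_card(printed: CardDetails, chip: CardDetails) -> bool:
    """Return True if the printed face of the card matches the chip data.

    A forged card typically reprints the visible surface but cannot
    rewrite the protected data in the embedded chip, so a mismatch
    flags a likely counterfeit -- no PIN entry required.
    """
    return printed == chip

# Hypothetical usage: OCR output vs. chip read
printed = CardDetails(name="Yamada Taro", date_of_birth="1990-04-01")
chip = CardDetails(name="Yamada Taro", date_of_birth="1990-04-01")
assert verify_card(printed, chip)

forged = CardDetails(name="Suzuki Ichiro", date_of_birth="1990-04-01")
assert not verify_card(forged, chip)
```

The design point is that a counterfeiter can reprint the card's surface far more easily than rewrite the chip contents, so a surface/chip mismatch is a strong counterfeit signal.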

The Courts

US Sues TikTok Over 'Massive-Scale' Privacy Violations of Kids Under 13 (reuters.com) 10

An anonymous reader quotes a report from Reuters: The U.S. Justice Department filed a lawsuit Friday against TikTok and parent company ByteDance for failing to protect children's privacy on the social media app as the Biden administration continues its crackdown on the social media site. The government said TikTok violated the Children's Online Privacy Protection Act that requires services aimed at children to obtain parental consent to collect personal information from users under age 13. The suit (PDF), which was joined by the Federal Trade Commission, said it was aimed at putting an end "to TikTok's unlawful massive-scale invasions of children's privacy." Representative Frank Pallone, the top Democrat on the Energy and Commerce Committee, said the suit "underscores the importance of divesting TikTok from Chinese Communist Party control. We simply cannot continue to allow our adversaries to harvest vast troves of Americans' sensitive data."

The DOJ said TikTok knowingly permitted children to create regular TikTok accounts, and then create and share short-form videos and messages with adults and others on the regular TikTok platform. TikTok collected personal information from these children without obtaining consent from their parents. The U.S. alleges that for years millions of American children under 13 have been using TikTok and the site "has been collecting and retaining children's personal information." The FTC is seeking penalties of up to $51,744 per violation per day from TikTok for improperly collecting data, which could theoretically total billions of dollars if TikTok were found liable.
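To see why penalties at that statutory rate "could theoretically total billions," a back-of-the-envelope calculation helps. The violation count below is purely illustrative; the complaint does not commit to a specific number:

```python
PENALTY_PER_VIOLATION_PER_DAY = 51_744  # statutory maximum cited in the report

# Illustrative only: one violation per under-13 account, for a single day.
violations = 1_000_000
days = 1
total = PENALTY_PER_VIOLATION_PER_DAY * violations * days
print(f"${total:,}")  # $51,744,000,000
```

Even a single day of violations across a million accounts would exceed $50 billion, which is why the theoretical exposure dwarfs any realistic settlement.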
TikTok said Friday it disagrees "with these allegations, many of which relate to past events and practices that are factually inaccurate or have been addressed. We are proud of our efforts to protect children, and we will continue to update and improve the platform."
Government

Secret Service's Tech Issues Helped Shooter Go Undetected At Trump Rally (theguardian.com) 155

An anonymous reader quotes a report from The Guardian: The technology flaws of the U.S. Secret Service helped the gunman who attempted to assassinate Donald Trump during a rally in Butler, Pennsylvania, last month evade detection. An officer broadcast "long gun!" over the local law enforcement radio system, according to congressional testimony from the Secret Service this week, the New York Times reported. The radio message should have travelled to a command center shared between local police and the Secret Service, but the message was never received by the Secret Service. About 30 seconds later, the shooter, Thomas Crooks, fired his first shots.

It was one of several technology issues facing the Secret Service on 13 July, caused by malfunction, improper deployment, or the agency's decision not to use the tools at all. The Secret Service had also rejected requests from the Trump campaign for more resources over the past two years. The agency turned down the use of a surveillance drone at the rally site and, despite poor cell service in the area, did not bring in a system to boost the signals of agents' devices. A system to detect drone use in the area by others also failed, according to the report in the New York Times, because the local communications network was overwhelmed by the number of people gathered at the rally. The shooter flew his own drone over the site for 11 minutes without being detected, about two hours before Trump appeared at the rally.
Ronald Rowe Jr, the acting Secret Service director, said the agency never utilized the technological tools that could have spotted the shooter beforehand.

A former Secret Service officer also told the New York Times he "resigned in 2017 over frustration with the agency's delays in evaluating new technology and getting clearance and funding to obtain it and then train officers on it," notes The Guardian. Furthermore, the Secret Service failed to record communications between federal and local law enforcement at the rally.
United Kingdom

UK Government Shelves $1.66 Billion Tech and AI Plans 35

An anonymous reader shares a report: The new Labour government has shelved $1.66bn of funding promised by the Conservatives for tech and Artificial Intelligence (AI) projects, the BBC has learned. It includes $1bn for the creation of an exascale supercomputer at Edinburgh University and a further $640m for the AI Research Resource, which funds computing power for AI. Both funds were unveiled less than 12 months ago.

The Department for Science, Innovation and Technology (DSIT) said the money was promised by the previous administration but was never allocated in its budget. Some in the industry have criticised the government's decision. Tech business founder Barney Hussey-Yeo posted on X that reducing investment risked "pushing more entrepreneurs to the US." Businessman Chris van der Kuyl described the move as "idiotic." Trade body techUK said the government now needed to make "new proposals quickly" or the UK risked "losing out" to other countries in what are crucial industries of the future.
China

China's Wind and Solar Energy Surpass Coal In Historic First (oilprice.com) 95

According to China's National Energy Administration (NEA), wind and solar energy have collectively eclipsed coal in capacity for the first time ever. By 2026, analysts forecast solar power alone will surpass coal as the country's primary energy source, with a cumulative capacity exceeding 1.38 terawatts (TW) -- 150 gigawatts (GW) more than coal. Oil Price reports: This shift stems from a growing emphasis on cleaner energy sources and a move away from fossil fuels for the nation. Despite coal's early advantage, with around 50 GW of annual installations before 2016, China has made substantial investments to expand its renewable energy infrastructure. Since 2020, annual installations of wind and solar energy have consistently exceeded 100 GW, three to four times the capacity additions for coal. This momentum has only gathered pace since then, with last year seeing China set a record with 293 GW of wind and solar installations, bolstered by gigawatt-scale renewable hub projects from the NEA's first and second batches connected to the country's grid.

China's coal power sector is moving in the opposite direction. Last year, approximately 40 GW of coal power was added, but this figure plummeted to 8 GW in the first half of 2024, according to our estimates. Despite the expansion of renewable energy under supportive policies, the government has implemented stricter restrictions on new coal projects to meet carbon reduction goals. Efforts are now focused on phasing out smaller coal plants, upgrading existing ones to reduce emissions and enforcing more stringent standards for new projects. As a result, the annual capacity addition gap between coal and clean energy has widened dramatically, reaching a 16-fold difference in the first half of 2024.

Government

US Progressives Push For Nvidia Antitrust Investigation (reuters.com) 42

Progressive groups and Senator Elizabeth Warren are urging the Department of Justice to investigate Nvidia for potential antitrust violations due to its dominant position in the AI chip market. The groups criticize Nvidia's bundling of software and hardware, claiming it stifles innovation and locks in customers. Reuters reports: Demand Progress and nine other groups wrote a letter (PDF) this week urging Department of Justice antitrust chief Jonathan Kanter to probe business practices at Nvidia, whose market value hit $3 trillion this summer on demand for chips able to run the complex models behind generative AI. The groups, which oppose monopolies and promote government oversight of tech companies, among other issues, took aim at Nvidia's bundling of software and hardware, a practice that French antitrust enforcers have flagged as they prepare to bring charges.

"This aggressively proprietary approach, which is strongly contrary to industry norms about collaboration and interoperability, acts to lock in customers and stifles innovation," the groups wrote. Nvidia has roughly 80% of the AI chip market, including the custom AI processors made by cloud computing companies like Google, Microsoft and Amazon.com. The chips made by the cloud giants are not sold on the open market; access to them is typically rented through each company's platform.
A spokesperson for Nvidia said: "Regulators need not be concerned, as we scrupulously adhere to all laws and ensure that NVIDIA is openly available in every cloud and on-prem for every enterprise. We'll continue to support aspiring innovators in every industry and market and are happy to provide any information regulators need."
Government

Senators Propose 'Digital Replication Right' For Likeness, Extending 70 Years After Death 46

An anonymous reader quotes a report from Ars Technica: On Wednesday, US Sens. Chris Coons (D-Del.), Marsha Blackburn (R.-Tenn.), Amy Klobuchar (D-Minn.), and Thom Tillis (R-NC) introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2024. The bipartisan legislation, up for consideration in the US Senate, aims to protect individuals from unauthorized AI-generated replicas of their voice or likeness. The NO FAKES Act would create legal recourse for people whose digital representations are created without consent. It would hold both individuals and companies liable for producing, hosting, or sharing these unauthorized digital replicas, including those created by generative AI. With generative AI going mainstream over the past two years, creating audio or image fakes of people has become fairly trivial, with easy photorealistic video replicas likely next to arrive. [...]

To protect a person's digital likeness, the NO FAKES Act introduces a "digital replication right" that gives individuals exclusive control over the use of their voice or visual likeness in digital replicas. This right extends 10 years after death, with possible five-year extensions if actively used. It can be licensed during life and inherited after death, lasting up to 70 years after an individual's death. Along the way, the bill defines what it considers to be a "digital replica": "DIGITAL REPLICA.-The term "digital replica" means a newly created, computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual that- (A) is embodied in a sound recording, image, audiovisual work, including an audiovisual work that does not have any accompanying sounds, or transmission- (i) in which the actual individual did not actually perform or appear; or (ii) that is a version of a sound recording, image, or audiovisual work in which the actual individual did perform or appear, in which the fundamental character of the performance or appearance has been materially altered; and (B) does not include the electronic reproduction, use of a sample of one sound recording or audiovisual work into another, remixing, mastering, or digital remastering of a sound recording or audiovisual work authorized by the copyright holder."
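The term structure described above -- a 10-year base after death, five-year extensions while the right is actively used, capped at 70 years -- reduces to a simple rule. The sketch below assumes extensions stack back-to-back, a simplification of the bill's actual renewal mechanics:

```python
BASE_TERM = 10  # years of protection after death
EXTENSION = 5   # per renewal, while the right is "actively used"
CAP = 70        # absolute ceiling, in years after death

def years_protected_after_death(renewals: int) -> int:
    """Post-mortem protection term after a given number of renewals."""
    return min(BASE_TERM + EXTENSION * renewals, CAP)

assert years_protected_after_death(0) == 10    # base term only
assert years_protected_after_death(2) == 20    # two five-year renewals
assert years_protected_after_death(100) == 70  # renewals can never exceed the cap
```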
The NO FAKES Act "includes provisions that aim to balance IP protection with free speech," notes Ars. "It provides exclusions for recognized First Amendment protections, such as documentaries, biographical works, and content created for purposes of comment, criticism, or parody."
China

Germany Says China Was Behind a 2021 Cyberattack on Government Agency (apnews.com) 31

An investigation has determined that "Chinese state actors" were responsible for a 2021 cyberattack on Germany's national office for cartography, officials in Berlin said Wednesday. From a report: The Chinese ambassador was summoned to the Foreign Ministry for a protest for the first time in decades. Foreign Ministry spokesperson Sebastian Fischer said the German government has "reliable information from our intelligence services" about the source of the attack on the Federal Agency for Cartography and Geodesy, which he said was carried out "for the purpose of espionage."

"This serious cyberattack on a federal agency shows how big the danger is from Chinese cyberattacks and spying," Interior Minister Nancy Faeser said in a statement. "We call on China to refrain from and prevent such cyberattacks. These cyberattacks threaten the digital sovereignty of Germany and Europe." Fischer declined to elaborate on who exactly in China was responsible. He said a Chinese ambassador was last summoned to the German Foreign Ministry in 1989 after the Tiananmen Square crackdown.

Earth

Brazil's Radical Plan To Tax Global Super-Rich To Tackle Climate Crisis (theguardian.com) 167

An anonymous reader quotes a report from The Guardian: Proposals to slap a wealth tax on the world's super-rich could yield $250 billion a year to tackle the climate crisis and address poverty and inequality, but would affect only a small number of billionaire families, Brazil's climate chief has said. Ministers from the G20 group of the world's biggest developed and emerging economies are meeting in Rio de Janeiro this weekend, where Brazil's proposal for a 2% wealth tax on those with assets worth more than $1 billion is near the top of the agenda. No government was speaking out against the tax, said Ana Toni, who is national secretary for climate change in the government of President Luiz Inacio Lula da Silva. "Our feeling is that, morally, nobody's against," she told the Observer in an interview. "But the level of support from some countries is bigger than others."

However, the lack of overt opposition does not mean the tax proposal is likely to be approved. Many governments are privately skeptical but unwilling to publicly criticize a plan that would shave a tiny amount from the rapidly accumulating wealth of the planet's richest few, and raise money to address the pressing global climate emergency. Janet Yellen, the US Treasury secretary, told journalists in Rio that the US "did not see the need" for a global initiative. "People are not keen on global taxes," Toni admitted. "And there is a question over how you implement global taxes." But she said levying and raising a tax globally was possible, as had been shown by G7 finance ministers' agreement to levy a minimum 15% corporate tax. "It should be at a global level, because otherwise, obviously, rich people will move from one country to another," she said.

Only about 100 families around the world would be affected by the proposed 2% levy, she added. The world's richest 1% have added $42 trillion to their wealth in the past decade, roughly 36 times more than the bottom half of the world's population did. The question of how funds raised by such taxation should be spent had also not been settled, noted Toni. Some economists have argued that the idea was more likely to be accepted if the proceeds were devoted to solving the climate crisis than if they were used to address global inequality. Other experts say at least some of the money should be used for poverty alleviation.

Open Source

A New White House Report Embraces Open-Source AI 15

An anonymous reader quotes a report from ZDNet: According to a new statement, the White House realizes open source is key to artificial intelligence (AI) development -- much like many businesses using the technology. On Tuesday, the National Telecommunications and Information Administration (NTIA) issued a report supporting open-source and open models to promote innovation in AI while emphasizing the need for vigilant risk monitoring. The report recommends that the US continue to support AI openness while working on new capabilities to monitor potential AI risks but refrain from restricting the availability of open model weights.
Government

Senate Passes the Kids Online Safety Act (theverge.com) 84

An anonymous reader quotes a report from The Verge: The Senate passed the Kids Online Safety Act (KOSA) and the Children and Teens' Online Privacy Protection Act (also known as COPPA 2.0), the first major internet bills meant to protect children to reach that milestone in two decades. A legislative vehicle that included both KOSA and COPPA 2.0 passed 91-3. Senate Majority Leader Chuck Schumer (D-NY) called it "a momentous day" in a speech ahead of the vote, saying that "the Senate keeps its promise to every parent who's lost a child because of the risks of social media." He called for the House to pass the bills "as soon as they can."

KOSA is a landmark piece of legislation that a persistent group of parent advocates played a key role in pushing forward -- meeting with lawmakers, showing up at hearings with tech CEOs, and bringing along photos of their children, who, in many cases, died by suicide after experiencing cyberbullying or other harms from social media. These parents say that a bill like KOSA could have saved their own children from suffering and hope it will do the same for other children. The bill works by creating a duty of care for online platforms that are used by minors, requiring they take "reasonable" measures in how they design their products to mitigate a list of harms, including online bullying, sexual exploitation, drug promotion, and eating disorders. It specifies that the bill doesn't prevent platforms from letting minors search for any specific content or providing resources to mitigate any of the listed harms, "including evidence-informed information and clinical resources."
The legislation faces significant opposition from digital rights, free speech, and LGBTQ+ advocates who fear it could lead to censorship and privacy issues. Critics argue that the duty of care may result in aggressive content filtering and mandatory age verification, potentially blocking important educational and lifesaving content.

The bill may also face legal challenges from tech platforms citing First Amendment violations.
Earth

Goals To Stop Decline of Nature in England 'Off Track,' Report Warns (theguardian.com) 31

Goals to stop the decline of nature and clean up the air and water in England are slipping out of reach, a new report has warned. From a report: An audit of the Environmental Improvement Plan (EIP), which is the mechanism by which the government's legally binding targets for improving nature should be met, has found that plans for thriving plants and wildlife and clean air are deteriorating. This plan was supposed to replace the EU-derived environmental regulations the UK used until the Environment Act was passed in 2021 after Brexit.

The report found that there was no data to measure many of the metrics such as habitat creation for wildlife and the status of sites of special scientific interest. It also highlighted that the government was off track to meet its woodland creation targets, and that water leakage from pipes had in fact increased since the targets were set. The Labour party announced on Tuesday that it would overhaul these goals. The environment secretary, Steve Reed, said the government would lay out detailed delivery plans for each target, such as tree planting and air quality, working with environment groups to do so.

The Almighty Buck

Delta Seeks Damages From CrowdStrike, Microsoft After Outage (cnbc.com) 201

An anonymous reader quotes a report from CNBC: Delta Air Lines has hired prominent attorney David Boies to seek damages from CrowdStrike and Microsoft following an outage this month that caused millions of computers to crash, leading to thousands of flight cancellations. CrowdStrike shares fell as much as 5% in extended trading on Monday after CNBC's Phil Lebeau reported on Delta's hiring of Boies, chairman of Boies Schiller Flexner. Microsoft was little changed. [...] While no suit has been filed, Delta plans to seek compensation from Microsoft and CrowdStrike, Lebeau reported. The outages cost Delta an estimated $350 million to $500 million. Delta is dealing with over 176,000 refund or reimbursement requests after almost 7,000 flights were canceled.

Boies is known for representing the U.S. government in its landmark antitrust case against Microsoft and for helping win a decision that overturned California's ban on gay marriage. He also worked with Harvey Weinstein, the imprisoned former Hollywood mogul, and Theranos founder Elizabeth Holmes, who is currently serving a prison sentence for defrauding investors. Insurance startup Parametrix estimated that the CrowdStrike incident resulted in a total loss of $5.4 billion for Fortune 500 companies, not including Microsoft.

China

China Ponders Creating a National 'Cyberspace ID' (theregister.com) 52

China has proposed issuing "cyberspace IDs" to its citizens in order to protect their personal information, regulate the public service for authentication of cyberspace IDs, and accelerate the implementation of the trusted online identity strategy. The Register reports: The ID will take two forms: one as a series of letter and numbers, and the other as an online credential. Both will correspond to the citizen's real-life identity, but with no details in plaintext -- presumably encryption will be applied. A government national service platform will be responsible for authenticating and issuing the cyberspace IDs. The draft comes from the Ministry of Public Security and the Cyberspace Administration of China (CAC). It clarifies that the ID will be voluntary -- for now -- and eliminate the need for citizens to provide their real-life personal information to internet service providers (ISPs). Those under the age of fourteen would need parental consent to apply.

China is one of the few countries in the world that requires citizens to use their real names on the internet. [...] Relying instead on a national ID means "the excessive collection and retention of citizens' personal information by internet service providers will be prevented and minimized," reasoned Beijing. "Without the separate consent of a natural person, an internet platform may not process or provide relevant data and information to the outside without authorization, except as otherwise provided by laws and administrative regulations," reads the draft.
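The draft leaves the cryptography unspecified, but a keyed hash is one plausible way to produce a letters-and-numbers ID that corresponds to a real identity without exposing any details in plaintext. The sketch below is a guess at the general shape, not a description of the actual scheme; the key and ID values are invented:

```python
import base64
import hashlib
import hmac

def derive_cyberspace_id(national_id: str, platform_key: bytes) -> str:
    """Derive a stable pseudonymous ID from a real-life identity.

    Only the issuing platform (which holds platform_key) can link the
    pseudonym back to the citizen; ISPs see only the opaque string.
    """
    digest = hmac.new(platform_key, national_id.encode(), hashlib.sha256).digest()
    # Render as a short letters-and-numbers string
    return base64.b32encode(digest[:10]).decode().rstrip("=")

key = b"hypothetical-platform-secret"
cid = derive_cyberspace_id("110101199003071234", key)
assert cid == derive_cyberspace_id("110101199003071234", key)  # deterministic
assert cid != derive_cyberspace_id("110101199003071235", key)  # distinct per person
```

Under a design like this, an ISP could authenticate a user against the national platform without ever collecting the underlying personal information, which is the stated goal of the draft.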

The Internet

Low-Income Homes Drop Internet Service After Congress Kills Discount Program (arstechnica.com) 240

An anonymous reader quotes a report from Ars Technica: The death of the US government's Affordable Connectivity Program (ACP) is starting to result in disconnection of Internet service for Americans with low incomes. On Friday, Charter Communications reported a net loss of 154,000 Internet subscribers that it said was mostly driven by customers canceling after losing the federal discount. About 100,000 of those subscribers were reportedly getting the discount, which in some cases made Internet service free to the consumer. The $30 monthly broadband discounts provided by the ACP ended in May after Congress failed to allocate more funding. The Biden administration requested (PDF) $6 billion to fund the ACP through December 2024, but Republicans called the program "wasteful."

Republican lawmakers' main complaint was that most of the ACP money went to households that already had broadband before the subsidy was created. FCC Chairwoman Jessica Rosenworcel warned that killing the discounts would reduce Internet access, saying (PDF) an FCC survey found that 77 percent of participating households would change their plan or drop Internet service entirely once the discounts expired. Charter's Q2 2024 earnings report provides some of the first evidence of users dropping Internet service after losing the discount. "Second quarter residential Internet customers decreased by 154,000, largely driven by the end of the FCC's Affordable Connectivity Program subsidies in the second quarter, compared to an increase of 70,000 during the second quarter of 2023," Charter said.

Across all ISPs, there were 23 million US households enrolled in the ACP. Research released in January 2024 found that Charter was serving over 4 million ACP recipients and that up to 300,000 of those Charter customers would be "at risk" of dropping Internet service if the discounts expired. Given that ACP recipients must meet low-income eligibility requirements, losing the discounts could put a strain on their overall finances even if they choose to keep paying for Internet service. [...] Light Reading reported that Charter attributed about 100,000 of the 154,000 customer losses to the ACP shutdown. Charter said it retained most of its ACP subscribers so far, but that low-income households might not be able to continue paying for Internet service without a new subsidy for much longer.
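The report's figures allow a quick sanity check of the funding request. A rough estimate, assuming all 23 million enrolled households drew the full $30 discount from May through December 2024:

```python
households = 23_000_000  # ACP enrollment across all ISPs
discount = 30            # dollars per household per month
months = 8               # roughly May through December 2024

cost = households * discount * months
print(f"${cost / 1e9:.2f}B")  # $5.52B
```

That lands close to the $6 billion the Biden administration requested to carry the program through December 2024.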

AI

From Sci-Fi To State Law: California's Plan To Prevent AI Catastrophe (arstechnica.com) 39

An anonymous reader quotes a report from Ars Technica: California's "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (a.k.a. SB-1047) has led to a flurry of headlines and debate concerning the overall "safety" of large artificial intelligence models. But critics are concerned that the bill's overblown focus on existential threats by future AI models could severely limit research and development for more prosaic, non-threatening AI uses today. SB-1047, introduced by State Senator Scott Wiener, passed the California Senate in May with a 32-1 vote and seems well positioned for a final vote in the State Assembly in August. The text of the bill requires companies behind sufficiently large AI models (currently set at $100 million in training costs and the rough computing power implied by those costs today) to put testing procedures and systems in place to prevent and respond to "safety incidents."

The bill lays out a legalistic definition of those safety incidents that in turn focuses on defining a set of "critical harms" that an AI system might enable. That includes harms leading to "mass casualties or at least $500 million of damage," such as "the creation or use of chemical, biological, radiological, or nuclear weapon" (hello, Skynet?) or "precise instructions for conducting a cyberattack... on critical infrastructure." The bill also alludes to "other grave harms to public safety and security that are of comparable severity" to those laid out explicitly. An AI model's creator can't be held liable for harm caused through the sharing of "publicly accessible" information from outside the model -- simply asking an LLM to summarize The Anarchist's Cookbook probably wouldn't put it in violation of the law, for instance. Instead, the bill seems most concerned with future AIs that could come up with "novel threats to public safety and security." More than a human using an AI to brainstorm harmful ideas, SB-1047 focuses on the idea of an AI "autonomously engaging in behavior other than at the request of a user" while acting "with limited human oversight, intervention, or supervision."

To prevent this straight-out-of-science-fiction eventuality, anyone training a sufficiently large model must "implement the capability to promptly enact a full shutdown" and have policies in place for when such a shutdown would be enacted, among other precautions and tests. The bill also focuses at points on AI actions that would require "intent, recklessness, or gross negligence" if performed by a human, suggesting a degree of agency that does not exist in today's large language models.
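The coverage logic laid out above -- a training-cost threshold for covered models and a damage floor for "critical harms" -- can be reduced to a toy classifier. The dollar thresholds come from the article; everything else is a deliberate simplification of the bill's legalistic definitions:

```python
COVERED_TRAINING_COST = 100_000_000  # $100M threshold for a covered model
CRITICAL_HARM_DAMAGE = 500_000_000   # $500M damage floor for a "critical harm"

def is_covered_model(training_cost_usd: int) -> bool:
    """Would a model fall under SB-1047's testing and shutdown duties?"""
    return training_cost_usd >= COVERED_TRAINING_COST

def is_critical_harm(damage_usd: int, mass_casualties: bool) -> bool:
    """Rough reading of the bill's damage-or-casualties harm test."""
    return mass_casualties or damage_usd >= CRITICAL_HARM_DAMAGE

assert is_covered_model(150_000_000)
assert not is_covered_model(5_000_000)
assert is_critical_harm(600_000_000, mass_casualties=False)
```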
The bill's supporters include AI experts Geoffrey Hinton and Yoshua Bengio, who believe the bill is a necessary precaution against potential catastrophic AI risks.

Bill critics include tech policy expert Nirit Weiss-Blatt and AI community voice Daniel Jeffries. They argue that the bill is based on science fiction fears and could harm technological advancement. Ars Technica contributor Timothy Lee and Meta's Yann LeCun say that the bill's regulations could hinder "open weight" AI models and innovation in AI research.

Instead, some experts suggest a better approach would be to focus on regulating harmful AI applications rather than the technology itself -- for example, outlawing nonconsensual deepfake pornography and improving AI safety research.
United States

Justice Dept. Says TikTok Could Allow China To Influence Elections 84

The Justice Department has ramped up the case to ban TikTok, saying in a court filing Friday that allowing the app to continue operating in its current state could result in voter manipulation in elections. From a report: The filing was made in response to a TikTok lawsuit attempting to block the government's ban. The Justice Department warned that the app's algorithm and parent company ByteDance's alleged ties to the Chinese government could be used for a "secret manipulation" campaign.

"Among other things, it would allow a foreign government to illicitly interfere with our political system and political discourse, including our elections...if, for example, the Chinese government were to determine that the outcome of a particular American election was sufficiently important to Chinese interests," the filing said. Under a law passed in April, TikTok has until January 2025 to find a new owner or it will be banned in the U.S. The company is suing to have that law overturned, saying it violates the company's First Amendment rights. The Justice Department disputed those claims. "The statute is aimed at national-security concerns unique to TikTok's connection to a hostile foreign power, not at any suppression of protected speech," officials wrote.
IT

Apple Makes Its Very First Labor Agreement With a Union (cnn.com) 17

"Apple and the union representing retail workers at its store in Towson, Maryland, agreed to a tentative labor deal late Friday," reports CNN, "in the first US labor agreement not only for an Apple store but for any US workers of the tech giant." Workers at the Apple store in Towson had voted to join the International Association of Machinists union in June 2022 and have since been seeking their first contract. In May, they voted to authorize a strike without providing a deadline. The labor deal, which needs to be ratified by a vote of the 85 rank-and-file members at the store before it can take effect, is a significant milestone. Other high-profile union organizing efforts, such as those at Starbucks and Amazon, have yet to produce deals for those workers, even though workers at those companies voted to join unions well before the workers at the Apple store in Maryland.

There are few legal mechanisms to force a company to reach a labor agreement with a new union once that union has been recognized by the National Labor Relations Board, the government body that oversees labor relations for most US businesses. But the process can take a long time: one recent study by Bloomberg Law found the average time to reach a first contract is 465 days, or roughly 15 months. In many cases, it can take longer. A separate 2023 academic study found 43% of new unions were still seeking their first contract two years after winning a representation election.

The union said its deal includes pay increases of 10% over the three-year life of the contract and guaranteed severance packages for laid-off workers.
Bitcoin

Edward Snowden Skeptical of Politicians at Bitcoin Conference - and Public Ledgers (msn.com) 45

Former U.S. president Donald Trump spoke at Nashville's Bitcoin Conference on Saturday.

But he wasn't the only one there making headlines, according to a local newspaper called the Tennessean: Republican Sens. Cynthia Lummis and Tim Scott pledged their resolute support for the cryptocurrency industry at Nashville's Bitcoin2024 conference Friday — moments before whistleblower and political dissident Edward Snowden warned attendees to be wary of politicians trying to win them over. "Cast a vote, but don't join a cult," Snowden said. "They are not our tribe. They are not your personality. They have their own interests, their own values, their own things that they're chasing. Try to get what you need from them, but don't give yourself to them."

Snowden didn't call out any politicians specifically, but the conference has drawn national attention for its robust lineup of political figures including former President Donald Trump, independent presidential nominee Robert F. Kennedy Jr, former presidential candidate Vivek Ramaswamy and a number of senators. "Does this feel normal to you?" Snowden said. "When you look at the candidates, when you look at the dynamics, even the people on stage giving all the speeches, I'm not saying they're terrible at all, but it's a little unusual. The fact that they're here is a little unusual...."

Two key tenets of Bitcoin are transparency and decentralization, which means anyone can view all Bitcoin transactions on a public ledger. Snowden said this kind of metadata could be dangerous in the wrong hands, especially with artificial intelligence innovations making it easier to collect. "It is fantasy to imagine they're not doing this," he said.... He added that other countries like China or Russia could be collecting this same data. Snowden said he's afraid the collection of transaction data could happen across financial institutions and ultimately be used against the customers.
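Snowden's point about ledger metadata rests on a well-known chain-analysis idea: because every transaction and its input addresses are public, an observer can link addresses that are spent together and attribute them to a single owner (the "common-input-ownership" heuristic). The sketch below is a toy illustration of that linking step; the ledger entries, address names, and the `cluster_addresses` helper are all invented for the example, not real Bitcoin data or a real analysis tool.

```python
# Toy illustration of why a fully public transaction ledger leaks metadata.
# Heuristic: addresses spent together as inputs of one transaction are
# commonly assumed to belong to the same owner.
from collections import defaultdict

# Invented example ledger: each transaction lists the addresses it spends from.
public_ledger = [
    {"txid": "t1", "inputs": ["addrA", "addrB"]},  # A and B spent together
    {"txid": "t2", "inputs": ["addrB", "addrC"]},  # B and C spent together
    {"txid": "t3", "inputs": ["addrD"]},           # D never co-spent
]

def cluster_addresses(ledger):
    """Union-find over addresses that appear as co-inputs of a transaction."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for tx in ledger:
        for addr in tx["inputs"]:
            find(addr)  # register every address, even lone inputs
        first = tx["inputs"][0]
        for other in tx["inputs"][1:]:
            union(first, other)

    clusters = defaultdict(set)
    for addr in parent:
        clusters[find(addr)].add(addr)
    return sorted(sorted(c) for c in clusters.values())

print(cluster_addresses(public_ledger))
# addrA, addrB, addrC collapse into one presumed owner; addrD stays separate
```

Even this crude heuristic merges three addresses into one presumed identity from nothing but public data; real analysis firms combine it with exchange records and AI-assisted pattern matching, which is the scale of collection Snowden was warning about.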

Also speaking was RFK Jr, who asked why Trump, while president, hadn't already pardoned Snowden along with Julian Assange and Ross Ulbricht (pardons Kennedy himself has promised to issue). According to USA Today, Kennedy promised more than just creating a strategic reserve of Bitcoin worth more than half a trillion dollars: Kennedy also pledged to sign an executive order directing the IRS to treat Bitcoin as an eligible asset for 1031 Exchange into real property — making transactions unreportable and by extension nontaxable — which prompted a roar of approval from the crowd.
Though Trump's appearance also ended with a promise to have the government create a "strategic national bitcoin stockpile," NBC News notes that Trump "stopped short of offering many details." Immediately following Trump's remarks, Senator Cynthia Lummis, R-Wyo., said she would introduce a bill to create the reserve. However, the price of bitcoin fell slightly in the wake of Trump's remarks Saturday, perhaps reflecting crypto traders' unmet expectations for a more definitive commitment on the reserve idea from the presidential candidate...

Shortly after his morning remarks, Bitcoin Magazine reported that a group of Democratic representatives and candidates had sent a letter to the Democratic National Committee urging party leaders to be more supportive of crypto...

On Saturday, the Financial Times reported [presidential candidate Kamala] Harris had approached top crypto companies seeking a "reset" of relations, citing unnamed sources.

Ironically, one conference attendee ended up telling Bloomberg that "It doesn't really matter who the president is. I don't really care much about it, because Bitcoin will do its thing regardless."
AI

What Is the Future of Open Source AI? (fb.com) 22

Tuesday Meta released Llama 3.1, its largest open-source AI model to date. But just one day later, Mistral released Large 2, notes this report from TechCrunch, "which it claims to be on par with the latest cutting-edge models from OpenAI and Meta in terms of code generation, mathematics, and reasoning..."

"Though Mistral is one of the newer entrants in the artificial intelligence space, it's quickly shipping AI models on or near the cutting edge." In a press release, Mistral says one of its key focus areas during training was to minimize the model's hallucination issues. The company says Large 2 was trained to be more discerning in its responses, acknowledging when it does not know something instead of making something up that seems plausible. The Paris-based AI startup recently raised $640 million in a Series B funding round, led by General Catalyst, at a $6 billion valuation...

However, it's important to note that Mistral's models are, like most others, not open source in the traditional sense — any commercial application of the model needs a paid license. And while it's more open than, say, GPT-4o, few in the world have the expertise and infrastructure to implement such a large model. (That goes double for Llama's 405 billion parameters, of course.)

Mistral's Large 2 has only 123 billion parameters, according to the article. But whichever system prevails, "Open Source AI Is the Path Forward," Mark Zuckerberg wrote this week, predicting that open-source AI will soar to the same popularity as Linux: This year, Llama 3 is competitive with the most advanced models and leading in some areas. Starting next year, we expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency... Beyond releasing these models, we're working with a range of companies to grow the broader ecosystem. Amazon, Databricks, and NVIDIA are launching full suites of services to support developers fine-tuning and distilling their own models. Innovators like Groq have built low-latency, low-cost inference serving for all the new models. The models will be available on all major clouds including AWS, Azure, Google, Oracle, and more. Companies like Scale.AI, Dell, Deloitte, and others are ready to help enterprises adopt Llama and train custom models with their own data.
"As the community grows and more companies develop new services, we can collectively make Llama the industry standard and bring the benefits of AI to everyone," Zuckerberg writes. He says that he's heard from developers, CEOs, and government officials that they want to "train, fine-tune, and distill" their own models, protecting their data with a cheap and efficient model — and without being locked into a closed vendor. But they also tell him that they want to invest in an ecosystem "that's going to be the standard for the long term." Lots of people see that open source is advancing at a faster rate than closed models, and they want to build their systems on the architecture that will give them the greatest advantage long term...

One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it's clear that Meta and many other companies would be freed up to build much better services for people if we could build the best versions of our products and competitors were not able to constrain what we could build. On a philosophical level, this is a major reason why I believe so strongly in building open ecosystems in AI and AR/VR for the next generation of computing...

I believe that open source is necessary for a positive AI future. AI has more potential than any other modern technology to increase human productivity, creativity, and quality of life — and to accelerate economic growth while unlocking progress in medical and scientific research. Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn't concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society. There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. I think governments will conclude it's in their interest to support open source because it will make the world more prosperous and safer... [O]pen source should be significantly safer since the systems are more transparent and can be widely scrutinized...

The bottom line is that open source AI represents the world's best shot at harnessing this technology to create the greatest economic opportunity and security for everyone... I believe the Llama 3.1 release will be an inflection point in the industry where most developers begin to primarily use open source, and I expect that approach to only grow from here. I hope you'll join us on this journey to bring the benefits of AI to everyone in the world.
