Robotics

DHS Has a DoS Robot To Disable Internet of Things 'Booby Traps' Inside Homes (404media.co) 140

An anonymous reader quotes a report from 404 Media's Jason Koebler: The Department of Homeland Security bought a dog-like robot that it has modified with an "antenna array" that gives law enforcement the ability to overload people's home networks in an attempt to disable any internet of things devices they have, according to the transcript, obtained by 404 Media, of a speech given by a DHS official at a border security conference for cops. The DHS has also built an "Internet of Things" house to train officers on how to raid homes that suspects may have "booby trapped" using smart home devices, the official said.

The robot, called "NEO," is a modified version of the Quadruped Unmanned Ground Vehicle (Q-UGV) sold to law enforcement by a company called Ghost Robotics. Benjamine Huffman, the director of DHS's Federal Law Enforcement Training Centers (FLETC), told police at the 2024 Border Security Expo in Texas that DHS is increasingly worried about criminals setting "booby traps" with internet of things and smart home devices, and that NEO allows DHS to remotely disable the network of a home or building law enforcement is raiding. The Border Security Expo is open only to law enforcement and defense contractors. A transcript of Huffman's speech was obtained by the Electronic Frontier Foundation's Dave Maass using a Freedom of Information Act request and was shared with 404 Media. [...]

The robot is a modified version of Ghost Robotics' Vision 60 Q-UGV, which the company says it has sold to "25+ National Security Customers" and which is marketed to both law enforcement and the military. "Our goal is to make our Q-UGVs an indispensable tool and continuously push the limits to improve its ability to walk, run, crawl, climb, and eventually swim in complex environments," the company notes on its website. "Ultimately, our robot is made to keep our warfighters, workers, and K9s out of harm's way."
"NEO can enter a potentially dangerous environment to provide video and audio feedback to the officers before entry and allow them to communicate with those in that environment," Huffman said, according to the transcript. "NEO carries an onboard computer and antenna array that will allow officers the ability to create a 'denial-of-service' (DoS) event to disable 'Internet of Things' devices that could potentially cause harm while entry is made."
Education

Should Kids Still Learn to Code in the Age of AI? (yahoo.com) 170

The Computer Science Teachers Association conference kicked off Tuesday in Las Vegas, writes long-time Slashdot reader theodp.

And the "TeachAI" education initiative teamed with the Computer Science Teachers Association to release three briefs "arguing that K-12 computer science education is more important than ever in an age of AI." From the press release: "As AI becomes increasingly present in the classroom, educators are understandably concerned about how it might disrupt the teaching of core CS skills like programming. With these briefs, TeachAI and CSTA hope to reinforce the idea that learning to program is the cornerstone of computational thinking and an important gateway to the problem-solving, critical thinking, and creative thinking skills necessary to thrive in today's digitally driven world. The rise of AI only makes CS education more important."

To help drive home the point to educators, the 39-page Guidance on the Future of Computer Science Education in an Age of AI (penned by five authors from nonprofits CSTA and Code.org) includes a pretty grim comic entitled Learn to Program or Follow Commands. In the comic, two high school students scoff at the idea of having to learn to code and instead use GenAI to create their Python apps; several years later, they wind up stuck in miserable warehouse jobs, ordered about by an AI robot.

"The rise of AI only makes CS education more important," according to the group's press release, "with early research showing that people with a greater grasp of underlying computing concepts are able to use AI tools more effectively than those without." A survey by the group also found that 80% of teachers "agree that core concepts in CS education should be updated to emphasize topics that better support learning about AI."

But I'd be curious to hear what Slashdot's readers think. Share your thoughts and opinions in the comments.

Should children still be taught to code in the age of AI?
AI

'Cyclists Can't Decide Whether To Fear Or Love Self-Driving Cars' (yahoo.com) 210

"Many bike riders are hopeful about a world of robot drivers that never experience road rage or get distracted by their phones," reports the Washington Post. "But some resent being guinea pigs for driverless vehicles that veer into bike lanes, suddenly stop short and confuse cyclists trying to navigate around them.

"In more than a dozen complaints submitted to the DMV, cyclists describe upsetting near misses and close calls... " Of the nearly 200 California DMV complaints analyzed by The Post, about 60 percent involved Cruise vehicles; the rest mostly involved Waymo. About a third describe erratic or reckless driving, while another third document near misses with pedestrians. The remainder involve reports of autonomous cars blocking traffic and disobeying road markings or traffic signals... Only 17 complaints involved bicyclists or bike lane disruptions. But interviews with cyclists suggest the DMV complaints represent a fraction of bikers' negative interactions with self-driving vehicles. And while most of the complaints describe relatively minor incidents, they raise questions about corporate boasts that the cars are safer than human drivers, said Christopher White, executive director of the San Francisco Bike Coalition... Robot cars could one day make roads safer, White said, "but we don't yet see the tech fully living up to the promise. ... The companies are talking about it as a much safer alternative to people driving. If that's the promise that they're making, then they have to live up to it...."

Many bicycle safety advocates support the mission of autonomous vehicles, optimistic the technology will cut injuries and deaths. They are quick to point out the carnage associated with human-driven cars: There were 2,520 collisions in San Francisco involving at least one cyclist from 2017 to 2022, according to state data analyzed by local law firm Walkup, Melodia, Kelly & Schoenberger. In those crashes, 10 cyclists died and another 243 riders were severely injured, the law firm found. Nationally, there were 1,105 cyclists killed by drivers in 2022, according to NHTSA, the highest on record...

Meanwhile, even the small share of DMV complaints related to bicycles demonstrates the shaky relationship between self-driving cars and cyclists. In April 2023, a Waymo edged into a crosswalk, confusing a cyclist and causing him to crash and fracture his elbow, according to the complaint filed by the cyclist. Then, in August — days after the state approved an expansion of these vehicles — a Cruise car allegedly made a right turn that cut off a cyclist. The rider attempted to stop but then flipped over their bike. "It clearly didn't react or see me!" the complaint said.

Even if self-driving cars are proven to be safer than human drivers, they should still receive extra scrutiny and aren't the only way to make roads safer, several cyclists said.

Thanks to Slashdot reader echo123 for sharing the article.
Japan

Japan Introduces Enormous Humanoid Robot To Maintain Train Lines (theguardian.com) 33

An anonymous reader shares a report: It resembles an enormous, malevolent robot from 1980s sci-fi but West Japan Railway's new humanoid employee was designed with nothing more sinister than a spot of painting and gardening in mind. Starting this month, the large machine with enormous arms, a crude, disproportionately small Wall-E-like head and coke-bottle eyes mounted on a truck -- which can drive on rails -- will be put to use for maintenance work on the company's network. Its operator sits in a cockpit on the truck, "seeing" through the robot's eyes via cameras and operating its powerful limbs and hands remotely. With a vertical reach of 12 metres (40ft), the machine can use various attachments for its arms to carry objects as heavy as 40kg (88lb), hold a brush to paint or use a chainsaw. For now, the robot's primary task will focus on trimming tree branches along rails and painting metal frames that hold cables above trains, the company said. The technology will help fill worker shortages in ageing Japan as well as reduce accidents such as workers falling from high places or suffering electric shocks, the company said.
AI

MIT Robotics Pioneer Rodney Brooks On Generative AI 41

An anonymous reader quotes a report from TechCrunch: When Rodney Brooks talks about robotics and artificial intelligence, you should listen. Currently the Panasonic Professor of Robotics Emeritus at MIT, he also co-founded three key companies: Rethink Robotics, iRobot and his current endeavor, Robust.ai. Brooks also ran the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade starting in 1997. In fact, he likes to make predictions about the future of AI and keeps a scorecard on his blog of how well he's doing. He knows what he's talking about, and he thinks maybe it's time to put the brakes on the screaming hype that is generative AI. Brooks thinks it's impressive technology, but maybe not quite as capable as many are suggesting. "I'm not saying LLMs are not important, but we have to be careful [with] how we evaluate them," he told TechCrunch.

He says the trouble with generative AI is that, while it's perfectly capable of performing a certain set of tasks, it can't do everything a human can, and humans tend to overestimate its capabilities. "When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that," Brooks said. "And they're usually very over-optimistic, and that's because they use a model of a person's performance on a task." He added that the problem is that generative AI is not human or even human-like, and it's flawed to try and assign human capabilities to it. He says people see it as so capable they even want to use it for applications that don't make sense.

Brooks offers his latest company, Robust.ai, a warehouse robotics system, as an example of this. Someone suggested to him recently that it would be cool and efficient to tell his warehouse robots where to go by building an LLM for his system. In his estimation, however, this is not a reasonable use case for generative AI and would actually slow things down. It's instead much simpler to connect the robots to a stream of data coming from the warehouse management software. "When you have 10,000 orders that just came in that you have to ship in two hours, you have to optimize for that. Language is not gonna help; it's just going to slow things down," he said. "We have massive data processing and massive AI optimization techniques and planning. And that's how we get the orders completed fast."
"People say, 'Oh, the large language models are gonna make robots be able to do things they couldn't do.' That's not where the problem is. The problem with being able to do stuff is about control theory and all sorts of other hardcore math optimization," he said.

"It's not useful in the warehouse to tell an individual robot to go out and get one thing for one order, but it may be useful for eldercare in homes for people to be able to say things to the robots," he said.
Robotics

Amazon Discontinues Astro for Business Robot Security Guard To Focus on Astro Home Robot (geekwire.com) 20

Astro is leaving its job to spend more time with family. From a report: Amazon informed customers and employees Wednesday morning that it plans to discontinue its Astro for Business program, less than a year after launching the robot security guard for small- and medium-sized businesses. The decision will help the company focus on its home version of Astro, according to an internal email. Astro for Business robots will stop working Sept. 25, the company said in a separate email to customers, encouraging them to recycle the devices.

Businesses will receive full refunds for the original cost of the device, plus a $300 credit "to help support a replacement solution for your workplace," the email said. They will also receive refunds for unused, pre-paid Astro Secure subscription fees. Announced in November 2023, the business version of Amazon's rolling robot used an HD periscope and night vision technology to autonomously patrol and map up to 5,000 square feet of space. It followed preprogrammed routes and routines, and could be controlled manually and remotely via the Amazon Astro app.

Robotics

Public Servants Uneasy As Government 'Spy' Robot Prowls Federal Offices (www.cbc.ca) 72

An anonymous reader quotes a report from CBC News: A device federal public servants call "the little robot" began appearing in Gatineau office buildings in March. It travels through the workplace to collect data using about 20 sensors and a 360-degree camera, according to Yahya Saad, co-founder of GlobalDWS, which created the robot. "Using AI on the robot, the camera takes the picture, analyzes and counts the number of people and then discards the image," he said. Part of a platform known as VirBrix, the robot also gathers information on air quality, light levels, noise, humidity, temperature and even measures CO2, methane and radon gas. The aim is to create a better work environment for humans -- one that isn't too hot, humid or dim. Saad said that means more comfortable and productive employees. The technology can also help reduce heating, cooling and hydro costs, he said. "All these measures are done to save on energy and reduce the carbon footprint," Saad explained. After the pilot program in March, VirBrix is set to return in July and October, and the government hasn't ruled out extending its use. It's paying $39,663 to lease the robot for two years.
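
The count-then-discard pipeline Saad describes can be sketched roughly as follows. This is a hypothetical illustration only: the detector, camera and sensor interfaces, and field names are invented, not GlobalDWS's actual VirBrix software. The idea shown is simply that the image lives only long enough to produce a head count, and only the count and environmental readings are retained.

```python
# Hypothetical sketch of a count-then-discard occupancy pipeline: a frame is
# analyzed on the device, only the head count and sensor readings are kept,
# and the image itself is dropped. All names are invented for illustration.
from dataclasses import dataclass
import time

@dataclass
class Reading:
    timestamp: float
    people_count: int
    co2_ppm: float
    temperature_c: float
    noise_db: float

def count_people(frame: bytes) -> int:
    """Stand-in for an on-device person detector (e.g. a small vision model)."""
    # A real implementation would run inference here; we return a dummy value.
    return 0

def sample(camera, sensors) -> Reading:
    frame = camera.read()         # raw 360-degree image, held only in memory
    count = count_people(frame)
    del frame                     # image discarded; only the count survives
    return Reading(
        timestamp=time.time(),
        people_count=count,
        co2_ppm=sensors["co2"](),
        temperature_c=sensors["temp"](),
        noise_db=sensors["noise"](),
    )

if __name__ == "__main__":
    class FakeCamera:
        def read(self) -> bytes:
            return b"\x00" * 1024  # placeholder frame
    sensors = {"co2": lambda: 600.0, "temp": lambda: 21.5, "noise": lambda: 42.0}
    print(sample(FakeCamera(), sensors))
```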

Bruce Roy, national president of the Government Services Union, called the robot's presence in federal workplaces "intrusive" and "insulting." "People feel observed all the time," he said in French. "It's a spy. The robot is a spy for management." Roy, whose union represents more than 12,000 federal workers across several departments, said the robot is unnecessary because the employer already has ways of monitoring employee attendance and performance. "We believe that one of the robot's tasks is to monitor who is there and who is not," he said. "Folks say, why is there a robot here? Doesn't my employer trust that I'm here and doing my work properly?" [...] Jean-Yves Duclos, the minister of public services and procurement, said the government is instead using the technology as it looks to cut its office space footprint in half over the coming years. "These robots, as we call them, these sensors observe the utilization of office space and will be able to give us information over the next few years to better provide the kind of workplace employees need to do their job," Duclos said in French. "These are totally anonymous methods that allow us to evaluate which spaces are the most used and which spaces are not used, so we can better arrange them."
"In those cases we keep the images, but the whole body, not just the face, the whole body of the person is blurred," said Saad. "These are exceptional cases where we need to keep images and then the images would be handed over to the client."

The data is then stored on a server on Canadian soil, according to GlobalDWS.
Space

Is There Life on This Saturn Moon? Scientists Plan a Mission to Find Out (theguardian.com) 52

It's one of Saturn's 146 moons — just 310 miles in diameter (or 498 kilometers). Yet the European Space Agency plans to send a robot on a one-billion mile trip to visit it. Why?

Because astronomers have discovered Enceladus "possesses geysers that regularly erupt from its surface and spray water into space," reports the Guardian: Even more astonishing, these plumes contain complex organic compounds, including propane and ethane. "Enceladus has three key ingredients that are considered to be essential for the appearance of life," said astronomer Professor Michele Dougherty of Imperial College London. "It has got liquid water, organic material and a source of heat. That combination makes it my favourite moon in the whole solar system."
A panel of expert scientists has now recommended the Saturn moon for an ESA mission by 2040, according to the article, "with the aim of either landing on the moon or flying through the geysers spraying water and carbon chemicals from its surface into space. Preferably, both goals would be attempted, the panel added."

It will be tricky. Dougherty warns that Enceladus "is small with weak gravity, which means you will need a lot of fuel to slow it down so that it does not whiz past its target into deep space. That is going to be a tricky issue for those designing the mission." But Dougherty has a special interest, as the principal investigator for the magnetometer flown on the Cassini mission that studied Saturn and its moons between 2004 and 2017. "At one point, Cassini passed close to Enceladus and our instrument indicated Saturn's magnetic field was being dragged round the moon in a way that suggested the little moon had an atmosphere," said Dougherty. Cassini's managers agreed to direct the probe to take a closer look and, in July 2005, the spaceship swept over the moon's surface at a height of 173km — and detected significant amounts of water vapour. "It was wonderful," recalls Dougherty.

Subsequent sweeps produced even greater wonders. Huge geysers of water were pictured erupting from geological fault lines at the south pole. The only other body in the solar system, apart from Earth, possessing liquid water on its surface had been revealed. Finally came the discovery of organics in those plumes and Enceladus went from being rated a minor, unimportant moon to a world that is now set to trigger the expenditure of billions of euros and decades of effort by European astronomers and space engineers.

Thanks to long-time Slashdot reader thephydes for sharing the article.
China

China Is Testing More Driverless Cars Than Any Other Country (nytimes.com) 50

Assisted driving systems and robot taxis are becoming more popular in China with government help, as cities designate large areas for testing on public roads. From a report: The world's largest experiment in driverless cars is underway on the busy streets of Wuhan, a city in central China with 11 million people, 4.5 million cars, eight-lane expressways and towering bridges over the muddy waters of the Yangtze River. A fleet of 500 taxis navigated by computers, often with no safety drivers in them for backup, buzzes around. The company that operates them, the tech giant Baidu, said last month that it would add a further 1,000 of the so-called robot taxis in Wuhan.

Across China, 16 or more cities have allowed companies to test driverless vehicles on public roads, and at least 19 Chinese automakers and their suppliers are competing to establish global leadership in the field. No other country is moving as aggressively. The government is providing the companies significant help. In addition to cities designating on-road testing areas for robot taxis, censors are limiting online discussion of safety incidents and crashes to restrain public fears about the nascent technology.

Surveys by J.D. Power, an automotive consulting firm, found that Chinese drivers are more willing than Americans to trust computers to guide their cars. "I think there's no need to worry too much about safety -- it must have passed safety approval," said Zhang Ming, the owner of a small grocery store near Wuhan's Qingchuan Pavilion, where many Baidu robot taxis stop. Another reason for China's lead in the development of driverless cars is its strict and ever-tightening control of data. Chinese companies set up crucial research facilities in the United States and Europe and sent the results back home. But any research in China is not allowed to leave the country. As a result, it's difficult for foreign carmakers to use what they learn in China for cars they sell in other countries.

Robotics

Dutch Police Test AI-Powered Robot Dog to Raid Drug Labs (interestingengineering.com) 29

"Police and search and rescue forces worldwide are increasingly using robots to assist in carrying out their operations," writes Interesting Engineering. "Now, the Dutch police are looking at employing AI-powered autonomous robot dogs in drug lab raids to protect officers from criminal risks, hazardous chemicals, and explosions."

New Scientist's Matthew Sparkes (also a long-time Slashdot reader) shares this report: Dutch police are planning to use an autonomous robotic dog in drug lab raids to avoid placing officers at risk from criminals, dangerous chemicals and explosions. If tests in mocked-up scenarios go well, the artificial intelligence-powered robot will be deployed in real raids, say police. Simon Prins at Politie Nederland, the Dutch police force, has been testing and using robots in criminal investigations for more than two decades, but says they are only now growing capable enough to be practical for more...
Some context from Interesting Engineering: The police force in the Netherlands carries out such raids at least three to four times a week... Since 2021, the force has already been using a Spot quadruped, fitted with a robotic arm, from Boston Dynamics to carry out drug raids and surveillance. However, the Spot is remotely controlled by a handler... [Significant technological advancements] have prompted the Dutch force to explore fully autonomous operations with Spot.

Reportedly, such AI-enabled autonomous robots are expected to inspect drug labs, ensure no criminals are present, map the area, and identify dangerous chemicals... Initial tests by the force suggest that Spot could explore and map a mock drug lab measuring 15 meters by 20 meters. It was able to find hazardous chemicals and put them away into a designated storage container.

Their article notes that Spot "can do laser scans and visual, thermal, radiation, and acoustic inspections using add-on payloads and onboard cameras." (A video from Boston Dynamics — the company behind Spot — also seems to show the robot dog spraying something on a fire.)

The video seems aimed at police departments, touting the robot dog's advantages for "safety and incident response":
  • Enables safer investigation of suspicious packages
  • Detection of hazardous chemicals
  • De-escalation of tense or dangerous situations
  • Get eyes on dangerous situations

It also notes the robot "can be operated from a safe distance," suggesting customers "Use Spot® to place cameras, radios, and more for tactical reconnaissance."


AI

Could AI Replace CEOs? (msn.com) 132

'"As AI programs shake up the office, potentially making millions of jobs obsolete, one group of perpetually stressed workers seems especially vulnerable..." writes the New York Times.

"The chief executive is increasingly imperiled by A.I." These employees analyze new markets and discern trends, both tasks a computer could do more efficiently. They spend much of their time communicating with colleagues, a laborious activity that is being automated with voice and image generators. Sometimes they must make difficult decisions — and who is better at being dispassionate than a machine?

Finally, these jobs are very well paid, which means the cost savings of eliminating them is considerable...

This is not just a prediction. A few successful companies have begun to publicly experiment with the notion of an A.I. leader, even if at the moment it might largely be a branding exercise... [The article gives the example of the Chinese online game company NetDragon Websoft, which has 5,000 employees, and the upscale Polish rum company Dictador.]

Chief executives themselves seem enthusiastic about the prospect — or maybe just fatalistic. EdX, the online learning platform created by administrators at Harvard and M.I.T. that is now a part of publicly traded 2U Inc., surveyed hundreds of chief executives and other executives last summer about the issue. Respondents were invited to take part and given what edX called "a small monetary incentive" to do so. The response was striking. Nearly half — 47 percent — of the executives surveyed said they believed "most" or "all" of the chief executive role should be completely automated or replaced by A.I. Even executives believe executives are superfluous in the late digital age...

The pandemic prepared people for this. Many office workers worked from home in 2020, and quite a few still do, at least several days a week. Communication with colleagues and executives is done through machines. It's just a small step to communicating with a machine that doesn't have a person at the other end of it. "Some people like the social aspects of having a human boss," said Phoebe V. Moore, professor of management and the futures of work at the University of Essex Business School. "But after Covid, many are also fine with not having one."

The article also notes that a 2017 survey of 1,000 British workers found 42% saying they'd be "comfortable" taking orders from a computer.
Robotics

A Robot Will Soon Try To Remove Melted Nuclear Fuel From Japan's Destroyed Fukushima Reactor (apnews.com) 56

Tokyo Electric Power Company Holdings (TEPCO) showcased a remote-controlled robot on Tuesday that will retrieve small pieces of melted fuel debris from the damaged Fukushima Daiichi nuclear power plant later this year. The robot, developed by Mitsubishi Heavy Industries, features an extendable pipe and tongs capable of picking up granule-sized debris. TEPCO plans to remove less than 3 grams of debris during the test at the No. 2 reactor, marking the first such operation since the 2011 meltdown caused by a magnitude 9.0 earthquake and tsunami. The removal of the estimated 880 tons of highly radioactive melted fuel from the three damaged reactors is crucial for the plant's decommissioning, which critics say may take longer than the government's 30-40 year target.
Robotics

'Technical Issues' Stall MLB's Adoption of Robots to Call Balls and Strikes (cbssports.com) 39

Will Major League Baseball games use "automated" umpires next year to watch pitches from home plate and call balls and strikes?

"We still have some technical issues," baseball Commissioner Rob Manfred said Thursday. NBC News reports: "We haven't made as much progress in the minor leagues this year as we sort of hoped at this point. I think it's becoming more and more likely that this will not be a go for '25."

Major League Baseball has been experimenting with the automated ball-strike system in minor leagues since 2019. It is being used at all Triple-A parks this year for the second straight season, the robot alone for the first three games of each series and a human with a [robot-assisted] challenge system in the final three.

In "challenge-system" games, robo-umpires are only used for quickly ruling on challenges to calls from human umpires. (As demonstrated in this 11-second video.)

CBS Sports explains: Each team is given a limited number of "incorrect" challenges per game, which incentivizes judicious use of challenges... In some ways, the challenge system is a compromise between the traditional method of making ball-strike calls and the fully automated approach. That middle ground may make approval by the various stakeholders more likely to happen and may lay the foundation for full automation at some future point.
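
The bookkeeping behind that challenge system can be sketched in a few lines. This is a minimal illustration under the assumption, consistent with the description above and the minor-league trials, that a team keeps its challenge when the automated system overturns the umpire and burns one only when the umpire's call is upheld; the per-game limit of three is an assumption for the example, not an official MLB rule.

```python
# Minimal sketch of challenge-system bookkeeping: a team only burns a
# challenge when the automated system upholds the human umpire's call.
# The limit of 3 per game is an assumption, not an official rule.
class ChallengeBudget:
    def __init__(self, incorrect_challenges_allowed: int = 3):
        self.remaining = incorrect_challenges_allowed

    def can_challenge(self) -> bool:
        return self.remaining > 0

    def resolve(self, umpire_call: str, robot_call: str) -> str:
        """Challenge the umpire's call; the automated system's call stands."""
        if not self.can_challenge():
            raise RuntimeError("no challenges remaining")
        if robot_call != umpire_call:
            return robot_call        # call overturned; challenge retained
        self.remaining -= 1          # call upheld; an 'incorrect' challenge used
        return umpire_call

budget = ChallengeBudget()
print(budget.resolve("strike", "ball"))  # overturned -> 'ball', budget untouched
print(budget.resolve("ball", "ball"))    # upheld -> 'ball', one challenge consumed
print(budget.remaining)                  # 2
```
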
Manfred cites "a growing consensus in large part" from Major League players that that's how they'd want to see robo-umpiring implemented, according to a post on X.com from The Athletic's Evan Drellich. (NBC notes one concern is eliminating the artful way catchers "frame" caught pitches to convince umpires a pitch passed through the strike zone.)

But umpires face greater challenges today, adds CBS Sports: The strong trend, stretching across years, of increased pitch velocity in the big leagues has complicated the calling of balls and strikes, as has the emphasis on high-spin breaking pitches. Discerning balls from strikes has always been challenging, and the stuff of the contemporary major-league pitcher has made anything like perfect accuracy beyond the capabilities of the human eye. Big-league umpires are highly skilled, but the move toward ball-strike automation and thus a higher tier of accuracy is likely inevitable. Manfred's Wednesday remarks reinforce that perception.
Sci-Fi

Netflix's Sci-Fi Movie 'Atlas': AI Apocalypse Blockbuster Gets 'Shocking' Reviews (tomsguide.com) 94

Space.com calls it a movie "adding more combustible material to the inferno of AI unease sweeping the globe." Its director tells them James Cameron was a huge inspiration, saying Atlas "has an Aliens-like vibe because of the grounded, grittiness to it." (You can watch the movie's trailer here...)

But Tom's Guide says "the reviews are just as shocking as the movie's AI." Its "audience score" on Rotten Tomatoes is 55% — but its aggregate score from professional film critics is 16%. The Hollywood Reporter called it "another Netflix movie to half-watch while doing laundry." ("The star plays a data analyst forced to team up with an AI robot in order to prevent an apocalypse orchestrated by a different AI robot...") The site Giant Freakin Robot says "there seems to be a direct correlation between how much money the streaming platform spends on green screen effects and how bad the movie is" (noting the film's rumored budget of $100 million)...

But Tom's Guide defends it as a big-budget sci-fi thriller that "has an interesting premise that makes you think about the potential dangers of AI progression." Our world has always been interested in computers and machines, and the very idea of technology turning against us is unsettling. That's why "Atlas" works as a movie, but professional critics have other things to say. Ross McIndoe from Slant Magazine said: "Atlas seems like a story that should have been experienced with a gamepad in hand...." Todd Gilchrist from Variety didn't enjoy the conventional structure that "Atlas" followed...

However, even though the score is low and the reviews are pretty negative, I don't want to completely bash this movie... If I'm being completely honest, most movies and TV shows nowadays are taken too seriously. The more general blockbusters are supposed to be entertaining and fun, with visually pleasing effects that keep you hooked on the action. This is much like "Atlas", which is a fun watch with an unsettling undertone focused on the dangers of evolving AI...

Being part of the audience, we're supposed to just take it in and enjoy the movie as a casual viewer. This is why I think you should give "Atlas" a chance, especially if you're big into dramatic action sequences and have enjoyed movies like "Terminator" and "Pacific Rim".

The Military

Robot Dogs Armed With AI-aimed Rifles Undergo US Marines Special Ops Evaluation (arstechnica.com) 74

Long-time Slashdot reader SonicSpike shared this report from Ars Technica: The United States Marine Forces Special Operations Command (MARSOC) is currently evaluating a new generation of robotic "dogs" developed by Ghost Robotics, with the potential to be equipped with gun systems from defense tech company Onyx Industries, reports The War Zone.

While MARSOC is testing Ghost Robotics' quadrupedal unmanned ground vehicles (called "Q-UGVs" for short) for various applications, including reconnaissance and surveillance, it's the possibility of arming them with weapons for remote engagement that may draw the most attention. But it's not unprecedented: The US Marine Corps has also tested robotic dogs armed with rocket launchers in the past.

MARSOC is currently in possession of two armed Q-UGVs undergoing testing, as confirmed by Onyx Industries staff, and their gun systems are based on Onyx's SENTRY remote weapon system (RWS), which features an AI-enabled digital imaging system and can automatically detect and track people, drones, or vehicles, reporting potential targets to a remote human operator that could be located anywhere in the world. The system maintains a human-in-the-loop control for fire decisions, and it cannot decide to fire autonomously. On LinkedIn, Onyx Industries shared a video of a similar system in action.

In a statement to The War Zone, MARSOC states that weaponized payloads are just one of many use cases being evaluated. MARSOC also clarifies that comments made by Onyx Industries to The War Zone regarding the capabilities and deployment of these armed robot dogs "should not be construed as a capability or a singular interest in one of many use cases during an evaluation."

Moon

NASA's Plan To Build a Levitating Robot Train on the Moon (livescience.com) 28

"Does a levitating robot train on the moon sound far-fetched?" asks LiveScience.

"NASA doesn't seem to think so, as the agency has just greenlit further funding for a study looking into the concept." The project, called "Flexible Levitation on a Track" (FLOAT), has been moved to phase two of NASA's Innovative Advanced Concepts program (NIAC) , which aims to develop "science fiction-like" projects for future space exploration. The FLOAT project could result in materials being transported across the moon's surface as soon as the 2030s, according to the agency... According to NASA's initial design, FLOAT will consist of magnetic robots levitating over a three-layer film track to reduce abrasion from dust on the lunar surface. Carts will be mounted on these robots and will move at roughly 1 mph (1.61 km/h). They could transport roughly 100 tons (90 metric tons) of material a day to and from NASA's future lunar base.
"A durable, long-life robotic transport system will be critical to the daily operations of a sustainable lunar base in the 2030's," according to NASA's blog post, arguing it could be used to
  • Transport moon materials mined to produce on-site resources like water, liquid oxygen, liquid hydrogen, or construction materials
  • Transport payloads around the lunar base and to and from landing zones or other outposts

Thanks to long-time Slashdot reader AmiMoJo for sharing the article.


AI

OpenAI Exec Says Today's ChatGPT Will Be 'Laughably Bad' In 12 Months (businessinsider.com) 68

At the 27th annual Milken Institute Global Conference on Monday, OpenAI COO Brad Lightcap said today's ChatGPT chatbot "will be laughably bad" compared to what it'll be capable of a year from now. "We think we're going to move toward a world where they're much more capable," he added. Business Insider reports: Lightcap says large language models, which people use to help do their jobs and meet their personal goals, will soon be able to take on "more complex work." He adds that AI will have more of a "system relationship" with users, meaning the technology will serve as a "great teammate" that can assist users on "any given problem." "That's going to be a different way of using software," the OpenAI exec said on the panel regarding AI's foreseeable capabilities.

In light of his predictions, Lightcap acknowledges that it can be tough for people to "really understand" and "internalize" what a world with robot assistants would look like. But in the next decade, the COO believes talking to an AI like you would with a friend, teammate, or project collaborator will be the new norm. "I think that's a profound shift that we haven't quite grasped," he said, referring to his 10-year forecast. "We're just scratching the surface on the full kind of set of capabilities that these systems have," he said at the Milken Institute conference. "That's going to surprise us."
You can watch/listen to the talk here.
AI

Austria Calls For Rapid Regulation as It Hosts Meeting on 'Killer Robots' (reuters.com) 38

Austria called on Monday for fresh efforts to regulate the use of AI in weapons systems that could create so-called 'killer robots', as it hosted a conference aimed at reviving largely stalled discussions on the issue. From a report: With AI technology advancing rapidly, weapons systems that could kill without human intervention are coming ever closer, posing ethical and legal challenges that most countries say need addressing soon. "We cannot let this moment pass without taking action. Now is the time to agree on international rules and norms to ensure human control," Austrian Foreign Minister Alexander Schallenberg told the meeting of non-governmental and international organisations as well as envoys from 143 countries.

"At least let us make sure that the most profound and far-reaching decision, who lives and who dies, remains in the hands of humans and not of machines," he said in an opening speech to the conference entitled "Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation." Years of discussions at the United Nations have produced few tangible results and many participants at the two-day conference in Vienna said the window for action was closing rapidly.

IT

Captchas Are Getting Harder (wsj.com) 88

Captchas that aim to distinguish humans from nefarious bots are demanding more brain power. WSJ: The companies and cybersecurity experts who design Captchas have been doing all they can to stay one step ahead of the bad actors figuring out how to crack them. A cottage industry of third-party Captcha-solving firms -- essentially, humans hired to solve the puzzles all day -- has emerged. More alarmingly, so has technology that can automatically solve the more rudimentary tests, such as identifying photos of motorcycles and reading distorted text. "Software has gotten really good at labeling photos," said Kevin Gosschalk, the founder and CEO of Arkose Labs, which designs what it calls "fraud and abuse prevention solutions," including Captchas. "So now enters a new era of Captcha -- logic based."

That shift explains why Captchas have started to both annoy and perplex. Users no longer have to simply identify things. They need to identify things and do something with that information -- move a puzzle piece, rotate an object, find the specter of a number hidden in a roomscape. Compounding this bewilderment is the addition to the mix of generative AI images, which creates new objects difficult for robots to identify but baffles humans who just want to log in. "Things are going to get even stranger, to be honest, because now you have to do something that's nonsensical," Gosschalk said. "Otherwise, large multimodal models will be able to understand."
