AI

Editor At 184-Year-Old Ohio Newspaper Pushes To Let AI Draft News Articles (washingtonpost.com) 46

An anonymous reader quotes a report from the Washington Post: The Plain Dealer, Cleveland's largest newspaper, has begun to feature a new byline. On recent articles about an ice carving festival, a medical research discovery and a roaming pack of chicken-slaying dogs, a reporter's name is paired with the words "Advance Local Express Desk." It means: This article was drafted by artificial intelligence. "This article was produced with assistance from AI tools and reviewed by Cleveland.com staff," reads a note at the bottom of each robot-penned piece, differentiating it from those still written primarily by journalists. The disclosure has done little to stem the backlash that caromed across the news industry after the paper's editor, Chris Quinn, published a Feb. 14 column lamenting that a fresh-out-of-college job applicant withdrew from a reporting fellowship when they found out the position included no writing -- just filing notes to an AI writing tool.

"Artificial intelligence is not bad for newsrooms. It's the future of them," Quinn wrote, adding that "by removing writing from reporters' workloads, we've effectively freed up an extra workday for them each week." [...] Quinn, for his part, says his paper's use of AI to find, draft and edit stories is a success story that others must emulate if they want to survive. "It's a tool," he said in a phone interview last week. "If AI can do part of our job, then why not let it -- and have people do the part it can't do?" He added that the paper's embrace of technology -- including using AI to write stories summarizing its reporters' podcasts and its readers' letters to the editor -- is already boosting its bottom line, helping it retain staff at a time when other newspapers are shrinking or even shutting down. Just 130 miles east of Cleveland, the 240-year-old Pittsburgh Post-Gazette said in January that it will close its doors this spring.

Quinn, who has led the Plain Dealer's newsroom since 2013, said it has shrunk from some 400 employees in the late 1990s to just 71 today. Over the past three years, Quinn has implemented a suite of AI tools with various purposes: transcribing local government meetings, scraping municipal websites for story leads, cleaning up typos in story drafts, suggesting headlines and helping reporters draft follow-ups to articles they've already written. He said he is particularly pleased with an AI tool that turns podcasts by the paper's reporters into stories for the website, which he said generated more than 10 million page views last year. He has documented those efforts in letters to readers and sought their feedback. But the paper's latest experiment -- using AI to turn reporters' notes into full story drafts -- has aroused indignation online and anxiety within the paper's ranks.

AI

Lenovo Unveils an Attachable AI Agent 'Companion' for Their Laptops (cnet.com) 35

As Mobile World Congress begins in Spain, Lenovo has brought a new attachable accessory for its laptops — an AI agent. CNET reports: The little circular module perches on the top of your Lenovo laptop display, attached via the magnetic Magic Bay on the rear. The module is home to an adorable animated companion called Tiko, who you can interact with via text or voice... [I]t can start and stop your music, open a web page for you or answer a question. You can also interact with it by using emoji. Give it a book emoji, for example, and it will pop on its glasses and sit reading with you while you work... The company wants to sell the Magic Bay accessory later this year — although it doesn't know exactly when, or how much it will cost.
It even comes with a timer (for working in Pomodoro-style intervals) — but Lenovo has also created another "concept" AI companion that CNET describes as "a kind of stationary tabletop robot, not dissimilar to the Pixar lamp, but with an orb for a head." With a combination of cameras, microphones and projectors, the AI Workmate can undertake a variety of tasks, including helping you generate and display presentations or turn your written work or art into a digital asset... Its robotic head swivelled around and projected the slides onto the wall next to me.
Lenovo created a video to show this "next-generation AI work companion" — with animated eyes — "designed to transform how modern professionals interact with their workspace." It bridges the physical and digital worlds — capturing handwritten notes, recognizing gestures, summarizing tasks, and proactively helping you stay ahead of your day. The moment you sit down, Lenovo AI Workmate greets you, surfaces priority tasks, and keeps your work organized without switching apps or losing context. From turning sketches into presentations to projecting information for instant collaboration, [it] brings on-device AI intelligence directly to your desk — secure, responsive, and always ready... It's not just software. It's a smarter way to work.
It looks like Lenovo once considered naming it "AI Sphere" (since that name still appears in its description on YouTube).

Lenovo also showed another "concept" laptop idea that PC Magazine called "futuristic": The ThinkBook Modular AI PC looks like a traditional laptop at first glance, but a second, removable screen fastens onto the lid. You can swap that screen onto the keyboard deck (in place of the keyboard, which can then be used wirelessly), or use it alongside the laptop as a portable monitor, attached via an included cable.... While Lenovo is still working on this device, and it's very much in the concept phase, it feels like one of its best-thought-out prototypes, one likely to make it to store shelves at some point.
Another "concept" laptop is Lenovo's Yoga Book Pro 3D Concept, ofering directional backlight and eye-tracking technology for the illusion of 3D (playing slightly different images to each of your eyes). It offers gesture control for 3D models, two OLED displays, and some magical "snap-on pads" which, when laid on the display — make the GUI appear on the screen for a new control menu to "provide quick-access shortcuts for adjusting lighting, viewing angle, and tone".
Biotech

Human Brain Cells On a Chip Learned To Play Doom In a Week (newscientist.com) 35

Living human neurons grown on a chip by Cortical Labs learned how to play Doom in about a week. "While its performance is not up to par with humans, experts say it brings biological computers a step closer to useful real-world applications, like controlling robot arms," reports New Scientist. From the report: In 2021, the Australian company Cortical Labs used its neuron-powered computer chips to play Pong. The chips consisted of clumps of more than 800,000 living brain cells grown on top of microelectrode arrays that can both send and receive electrical signals. Researchers had to carefully train the chips to control the paddles on either side of the screen. Now, Cortical Labs has developed an interface that makes it easier to program these chips using the popular programming language Python. An independent developer, Sean Cole, then used Python to teach the chips to play Doom, which he did in around a week.

"Unlike the Pong work that we did a few years ago, which represented years of painstaking scientific effort, this demonstration has been done in a matter of days by someone who previously had relatively little expertise working directly with biology," says Brett Kagan of Cortical Labs. "It's this accessibility and this flexibility that makes it truly exciting."

The neuronal computer chip, which used about a quarter as many neurons as the Pong demonstration, played Doom better than a randomly firing player, but far below the performance of the best human players. However, it learnt much faster than traditional, silicon-based machine learning systems and should be able to improve its performance with newer learning algorithms, says Kagan. Still, it's not useful to compare the chips with human brains, he says. "Yes, it's alive, and yes, it's biological, but really what it is being used as is a material that can process information in very special ways that we can't recreate in silicon."
Cortical Labs posted a YouTube video showing its CL1 biological computer running Doom. There's also source code available on GitHub, with additional details in a README file.
Robotics

Researchers Develop Detachable Crawling Robotic Hand (sciencenews.org) 32

Long-time Slashdot reader fahrbot-bot writes: Researchers have developed a robotic hand that can not only skitter about on its fingertips but also bend its fingers backward, connect to and disconnect from a robotic arm, and pick up and carry one or more objects at a time. This article in Science News includes footage of the skittering robot hand reattaching itself to the robotic arm; the hand can also hold objects against both sides of its palm simultaneously, and "can even unscrew the cap off a mustard bottle while holding the bottle in place." With its unusual agility, it could navigate and retrieve objects in spaces too confined for human hands. When attached to the mechanical arm, the robotic hand could pick up objects much like a human hand. The bot pinched a ball between two fingers, wrapped four fingers around a metal rod and held a flat disc between fingers and palm.

But the bot isn't constrained by human anatomy... When the robot was separated from the arm, it was most stable walking on four or five fingers and using one or two fingers for grabbing and carrying things, the team found. In one set of trials with both bots, the hand detached from the robotic arm and used its fingers as legs to skitter over to a wooden block. Once there, it picked up the block with one finger and carried it back to the arm.

The crawling bot could one day aid in industrial inspections of pipes and equipment too small for a human or larger robot to access, says Xiao Gao, a roboticist now at Wuhan University in China. It might retrieve objects in a warehouse or navigate confined spaces in disaster response efforts.

AI

AI Now Helps Manage 16% of America's Apartments (sfgate.com) 37

Imagine a 280-unit apartment complex with no on-site leasing office and no human agent to answer questions. "Instead, the entire process has been outsourced to AI..." reports SFGate, "from touring to signing the lease to completing management tasks once you actually move in."

Now imagine it's far more than just one apartment complex... At two other Jack London Square apartment buildings, my initial interactions were also with a robot. At the Allegro, my fiance and I entered the leasing office for our tour and asked for "Grace P," the leasing agent who had emailed us. "Oh, that's just our AI assistant," the woman at the front desk told us... At Aqua Via, another towering apartment complex across the street, I emailed back and forth with a very helpful and polite "Sofia M." My pal Sofia seemed so human-like in her responses that I did not realize she was AI until I looked a little closer at a text she'd sent me. "Msgs may be AI or human generated...." [S]he continued to text me for weeks after I'd moved on, trying to win me back. When I looked at the fine print, I realized both of these complexes were using EliseAI, a leading AI housing startup that claims to be involved in managing 1 in 6 apartments in the U.S...

[50 corporate landlords have funded a VC named RET Ventures to invest in and deploy rental-automating AI, and SFGate's reporter spoke to partner Christopher Yip.] According to Yip, AI is common in large apartment complexes not just in the tech-centric Bay Area, but across the entire country. It all kicked off at the onset of the COVID-19 pandemic in 2020, he said, when contactless, self-guided apartment tours and completely virtual tours where people rented apartments sight unseen became commonplace. Technology's infiltration into the renting process has only grown deeper in the years since, Yip said, mirroring how pervasive AI has become in many other facets of our lives. "From an industry perspective, it's really about meeting the renter where they are," Yip said. He pointed to how many renters now prefer to interact through text and email, and want to tour apartments at their convenience — say, at 7 p.m. after work, when a typical leasing office might be closed.

The latest updates in technology not only allow you to take a self-guided tour with AI unlocking the door for you, but also to ask AI questions by conversing with voice AI as you wander through the kitchen and bedroom at your leisure. And while a human leasing agent might ghost you for days or weeks at a time, AI responds almost instantly — EliseAI typically responds within 30 seconds, [said Fran Loftus, chief experience officer at EliseAI]... [I]n some scenarios, the goal does seem to be to eliminate humans entirely. "We do have long-term plans of building fully autonomous buildings," Loftus said.... "We think there's a time and a place for that, depending on the type of property. But really right now, it's about helping with this crazy turnover in this industry."

The reporter says they missed the human touch, since "The second AI was involved, the interaction felt cold. When a human couldn't even be bothered to show up to give me a tour, my trust evaporated."

But they conclude that in the years ahead, human landlords offering tours "will probably go the way of landlines and VCRs."
Robotics

Man Accidentally Gains Control of 7,000 Robot Vacuums (popsci.com) 51

A software engineer tried steering his robot vacuum with a video game controller, reports Popular Science — but ended up with "a sneak peek into thousands of people's homes." While building his own remote-control app, Sammy Azdoufal reportedly used an AI coding assistant to help reverse-engineer how the robot communicated with DJI's remote cloud servers. But he soon discovered that the same credentials that allowed him to see and control his own device also provided access to live camera feeds, microphone audio, maps, and status data from nearly 7,000 other vacuums across 24 countries.

The backend security bug effectively exposed an army of internet-connected robots that, in the wrong hands, could have turned into surveillance tools, all without their owners ever knowing. Luckily, Azdoufal chose not to exploit that. Instead, he shared his findings with The Verge, which quickly contacted DJI to report the flaw... He also claims he could compile 2D floor plans of the homes the robots were operating in. A quick look at the robots' IP addresses also revealed their approximate locations.

DJI told Popular Science the issue was addressed "through two updates, with an initial patch deployed on February 8 and a follow-up update completed on February 10."
AI

India Tells University To Leave AI Summit After Presenting Chinese Robot as Its Own (reuters.com) 11

An anonymous reader shares a report: An Indian university has been asked to vacate its stall at the country's flagship AI summit after a staff member was caught presenting a commercially available robotic dog made in China as its own creation, two government sources said.

"You need to meet Orion. This has been developed by the Centre of Excellence at Galgotias University," Neha Singh, a professor of communications, told state-run broadcaster DD News this week in remarks that have since gone viral.

But social media users quickly identified the robot as the Unitree Go2, sold by China's Unitree Robotics for about $2,800 and widely used in research and education globally. The episode has drawn sharp criticism and has cast an uncomfortable spotlight on India's artificial intelligence ambitions.

Robotics

Small Crowd Pays to Watch a Boxing Match Between 80-Pound Chinese Robots (restofworld.org) 37

Recently a small crowd paid to watch robots boxing, reports Rest of World. (Almost 3,000 people have now watched the match's 83-minute webcast.) The match was organized by Rek, a San Francisco-based company, and drew hundreds of spectators who had paid about $60-$80 for a ticket to watch modified G1 robots go at each other. Made by Unitree, the dominant Chinese robot maker, they weighed in at around 80 pounds and stood 4.5 feet tall, with human-like hands and dozens of joint motors for flexibility. The match had all the bells and whistles of a regular boxing bout: pulsing music, cameras capturing all the angles, hyped-up introductions, a human referee, and even two commentators. The evening featured two bouts made up of five rounds, each lasting 60 seconds. The robots pranced around the cage, throwing jabs and punches, drawing ohs and ahs from the crowd. They fell sometimes, and needed human intervention to get them back on their feet.
The robots were controlled by humans using VR interfaces, which led to some odd moments, with robots swinging at empty air and throwing punches that failed to connect with their opponents. One robot controller was a former UFC fighter, the article points out, but "The crowd cheered as a 13-year-old VR pilot named Dash beat his older competitor...."

The company behind this event plans more boxing matches with its VR-controlled robots, and even wants to develop "a league of robot boxers, including full-height robots that weigh about 200 pounds and are nearly 6 feet tall."
Moon

Lost Soviet Moon Lander May Have Been Found (nytimes.com) 51

An anonymous reader shares a report: In 1966, a beach-ball-size robot bounced across the moon. Once it rolled to a stop, its four petal-like covers opened, exposing a camera that sent back the first picture taken on the surface of another world. This was Luna 9, the Soviet lander that was the earliest spacecraft to safely touch down on the moon. While it paved the way toward interplanetary exploration, Luna 9's precise whereabouts have remained a mystery ever since.

That may soon change. Two research teams think they might have tracked down the long-lost remains of Luna 9. But there's a catch: The teams do not agree on the location. "One of them is wrong," said Anatoly Zak, a space journalist and author who runs RussianSpaceWeb.com and reported on the story last week. The dueling finds highlight a strange fact of the early moon race: The precise resting places of a number of spacecraft that crashed or landed on the moon in the run-up to NASA's Apollo missions are lost to obscurity. A newer generation of spacecraft may at last resolve these mysteries.

Luna 9 launched to the moon on Jan. 31, 1966. While a number of spacecraft had crashed into the lunar surface at that stage of the moon race, it was among the earliest to try what rocket engineers call a soft landing. Its core unit, a spherical suite of scientific instruments, was about two feet across. That size makes it difficult to spot from orbit. "Luna 9 is a very, very small vehicle," said Mark Robinson, a geologist at the company Intuitive Machines, which has twice landed spacecraft on the moon.

Technology

Google Home Finally Adds Support For Buttons (theverge.com) 33

An anonymous reader shares a report: Google Home users, your long nightmare is over. The platform has finally added support for buttons. The release notes for a February 2 update state that several new starter conditions for automations are now available, including "Switch or button pressed."

Smart buttons are physical, programmable switches that you can press to trigger automations or control devices in your smart home, such as turning lights on or off, opening and closing shades, running a Good Night scene, or starting a robot vacuum. A great alternative to voice and app control when you want to control multiple devices, smart buttons are often wireless and generally have several ways to press them: single press, double press, and long press, meaning one button can do multiple things.

Robotics

Scientists Create Programmable, Autonomous Robots Smaller Than a Grain of Salt (upenn.edu) 46

Researchers at the University of Pennsylvania and University of Michigan "have created the world's smallest fully programmable, autonomous robots," according to a recent announcement.

The announcement calls them "microscopic swimming machines that can independently sense and respond to their surroundings, operate for months and cost just a penny each." Barely visible to the naked eye, each robot measures about 200 by 300 by 50 micrometers, smaller than a grain of salt. Operating at the scale of many biological microorganisms, the robots could advance medicine by monitoring the health of individual cells and manufacturing by helping construct microscale devices. Powered by light, the robots carry microscopic computers and can be programmed to move in complex patterns, sense local temperatures and adjust their paths accordingly... "We've made autonomous robots 10,000 times smaller," says Marc Miskin, Assistant Professor in Electrical and Systems Engineering at Penn Engineering and the papers' senior author. "That opens up an entirely new scale for programmable robots."
The announcement describes them as "the first truly autonomous, programmable robots at this scale" (as described in two recent academic articles). The team had to design a new propulsion system that utilized the unique locomotion physics in the microscopic realm, according to the university's announcement. So the robots "generate an electrical field that nudges ions in the surrounding solution." Those ions, in turn, push on nearby water molecules, animating the water around the robot's body. "It's as if the robot is in a moving river," says Miskin, "but the robot is also causing the river to move." The robots can adjust the electrical field that causes the effect, allowing them to move in complex patterns and even travel in coordinated groups, much like a school of fish, at speeds of up to one body length per second...

To be truly autonomous, a robot needs a computer to make decisions, electronics to sense its surroundings and control its propulsion, and tiny solar panels to power everything, and all that needs to fit on a chip that is a fraction of a millimeter in size. This is where David Blaauw's team at the University of Michigan came into action... The robots are programmed by pulses of light that also power them. Each robot has a unique address that allows the researchers to load different programs on each robot. "This opens up a host of possibilities," adds Blaauw, "with each robot potentially performing a different role in a larger, joint task."
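The announcement doesn't describe the optical protocol, but the core idea it mentions (every robot sees the same light pulses, yet only the robot whose address matches loads a given program) can be sketched in a few lines of Python. This is a purely hypothetical illustration of address-filtered programming; the message format, addresses, and instruction names below are invented, not taken from the papers.

from dataclasses import dataclass, field

@dataclass
class MicroRobot:
    """Hypothetical stand-in for one addressable microrobot."""
    address: int
    program: list = field(default_factory=list)   # queued movement steps

    def receive(self, message):
        """Load a program only if the broadcast is addressed to this robot."""
        target, program = message
        if target == self.address:
            self.program = list(program)

    def step(self):
        """Execute the next queued instruction, or hold position if idle."""
        return self.program.pop(0) if self.program else "hold"

swarm = [MicroRobot(address=i) for i in range(4)]
broadcast = (2, ["forward", "turn_left", "forward"])   # aimed only at robot 2

for robot in swarm:
    robot.receive(broadcast)                 # every robot "sees" the same broadcast
print([robot.step() for robot in swarm])     # only robot 2 acts: ['hold', 'hold', 'forward', 'hold']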

Thanks to long-time Slashdot reader fahrbot-bot for sharing the news.
Transportation

'Hundreds' of Gatik Robot Delivery Trucks Headed For US Roads (forbes.com) 28

An anonymous reader quotes a report from Forbes: Gatik, a Silicon Valley startup developing self-driving delivery trucks, says its commercial operations are about to scale up dramatically, from fewer than a dozen driverless units running in multiple U.S. states now to hundreds of box trucks by the end of the year. CEO Gautam Narang said it's also booked contracts with retailers worth at least $600 million for its automated fleet. "We have 10 fully driverless, revenue-generating trucks on public roads. Very soon, in the coming weeks, we expect that increase to 60 trucks," he told Forbes. "We expect to end the year with hundreds of driverless trucks -- revenue-generating -- deployed across multiple markets in the U.S."

Though the Mountain View, California-based company hasn't raised as much funding as rivals, including Aurora, Kodiak and Canada's Waabi, Gatik said it's actually scaling up faster than any other robot truck developer. Unlike those companies, it focuses not on full-size semis but on smaller freight delivery vehicles, supplied by truckmaker Isuzu, that operate mainly between warehouses and supermarkets and other large stores. The company's focus has been on so-called middle-mile trucking, which, like long-haul routes, has a severe shortage of human drivers, according to Narang. Currently, its trucks are on the road in Texas, Arkansas, Arizona, Nebraska and Ontario, Canada.

The company has been generating revenue since shortly after its founding in 2017, hauling loads for customers like Walmart in trucks with human safety drivers at the wheel. Beginning late last year, it began shifting to fully driverless units and is getting more trucks from Isuzu built specifically to incorporate its tech, Narang said. "The hardware that we are using, this is our latest generation, has been designed to enable driver-out across thousands of trucks."

Crime

California Tech CEO and EV Pioneer Arrested, Accused of Murder (sfgate.com) 25

California tech executive Gordon Abas Goodarzi has been arrested and charged with murder in the death of his estranged wife, Aryan Papoli, whose body was found last November down an embankment off Highway 138 in San Bernardino County. Authorities initially believed the injuries were consistent with a fall, but the case was later ruled a homicide following a months-long investigation by the San Bernardino County Sheriff's Department. "Arrest records show that Goodarzi is currently in custody without bail and faces a murder charge and that he is set to appear in court Monday," reports SFGATE. From the report: Goodarzi, a California tech executive with ties to BattleBots, is publicly listed as the president and CEO of Magmotor, which describes itself as a "proud" supporter of the combat robot community and claims to support several teams each year. According to his LinkedIn, Goodarzi also previously worked as a research affiliate at UCLA's B. John Garrick Institute for the Risk Sciences since 2023.

Originally from Iran, Papoli and Goodarzi settled in Los Angeles County's verdant Rolling Hills community because of its tranquility and natural beauty, Papoli previously wrote. [...] She described her husband, Goodarzi, as a pioneer in the world of renewable energy, developing both electric and hybrid vehicles since the 1980s. According to Papoli, he also worked as the technical director at Hughes Electronics, which developed and manufactured the EV1, an early iteration of the electric car, in the 1990s.

China

China Birth Rate Falls To Lowest Since 1949 (theguardian.com) 49

China's birth rate fell to 5.6 per 1,000 people in 2025, the lowest figure since the founding of the People's Republic in 1949, and the country's total population contracted by 3.39 million, the sharpest decline since the Mao Zedong era. The drop marks the fourth straight year of population decline and comes despite government efforts to encourage childbearing, including subsidies of about $500 annually per child born on or after January 1, 2025.

Beijing has also imposed a 13% value-added tax on contraceptives this year. The government is betting on automation and productivity to offset the shrinking workforce -- China already leads the world in robot installations -- and President Xi Jinping has written that population policy must transition "from being mainly about regulating quantity to improving quality."
Robotics

Samsung's Rolling Ballie Robot Indefinitely Shelved After Delays (msn.com) 8

Samsung Electronics has once again sidelined Ballie, a long-anticipated robot that was first announced six years ago but never released. Bloomberg News: The device -- designed to roll and roam throughout the home -- is completely absent from this week's CES, the biggest electronics trade show. And though Samsung said last year that Ballie was nearly ready for a retail release, the product is now unlikely to resurface soon.

In an emailed statement, Samsung referred to Ballie as an "active innovation platform" within the company, rather than a forthcoming consumer device. "After multiple years of real-world testing, it continues to inform how Samsung designs spatially aware, context-driven experiences, particularly in areas like smart home intelligence, ambient AI and privacy-by-design," a Samsung spokesperson said in the statement.

Robotics

Hyundai and Boston Dynamics Unveil Humanoid Robot Atlas At CES (nbcnews.com) 38

At CES 2026 today, Hyundai and Boston Dynamics publicly demonstrated their humanoid robot Atlas, showing off fluid movement and announcing plans to deploy a production version in Hyundai's EV factory by 2028. NBC News reports: "For the first time ever in public, please welcome Atlas to the stage," said Boston Dynamics' Zachary Jackowski as a life-sized robot with two arms and two legs picked itself up from the floor at a Las Vegas hotel ballroom. It then fluidly walked around the stage for several minutes, sometimes waving to the crowd and swiveling its head like an owl. An engineer remotely piloted the robot from nearby for the purpose of the demonstration, though in real life Atlas will move around on its own, said Jackowski, the company's general manager for humanoid robots.

[...] Hyundai also announced a new partnership with Google's DeepMind, which will supply its artificial intelligence technology to Boston Dynamics robots. It's a return to a familiar partnership for Google, which bought Boston Dynamics in 2013 before selling it to Japanese tech giant SoftBank several years later. Hyundai acquired it from SoftBank in 2021. [...] At the end of Monday's live Atlas demonstration, which appeared flawless, the humanoid prototype swung its arms in a theatrical gesture to introduce a static model of the new product version of Atlas, which looked slightly different and was blue in color.
"I think the question comes back to what are the use cases and where is the applicability of the technology," said Alex Panas, a partner at consultancy McKinsey who helped lead a CES robotics panel that attracted hundreds of people earlier in the day. "In some cases, it may look more humanoid. In some cases, it may not."

Either way, Panas said, "the software, the chipsets, the communication, all the other pieces of the technology are coming together, and they will create new applications."

You can watch a video of the demonstration on YouTube.
Robotics

Researchers Make 'Neuromorphic' Artificial Skin For Robots (arstechnica.com) 7

An anonymous reader quotes a report from Ars Technica: The nervous system does an astonishing job of tracking sensory information, and does so using signals that would drive many computer scientists insane: a noisy stream of activity spikes that may be transmitted to hundreds of additional neurons, where they are integrated with similar spike trains coming from still other neurons. Now, researchers have used spiking circuitry to build an artificial robotic skin, adopting some of the principles of how signals from our sensory neurons are transmitted and integrated. While the system relies on a few decidedly not-neural features, it has the advantage that we have chips that can run neural networks using spiking signals, which would allow this system to integrate smoothly with some energy-efficient hardware to run AI-based control software.

[...] These trains of spikes can convey information in four ways: through the shape of an individual pulse, through its magnitude, through the length of the spike, and through the frequency of the spikes. Spike frequency is the most commonly used means of conveying information in biological systems, and the researchers use it to convey the pressure experienced by a sensor. The remaining forms of information are used to create something akin to a bar code that helps identify which sensor the reading came from. In addition to registering the pressure, the researchers had each sensor send an "I'm still here" signal at regular time intervals. Failure to receive this would be an indication that something has gone wrong with a sensor.

The spiking signals allow the next layer of the system to identify any pressure being experienced by the skin, as well as where it originated. This layer can also do basic evaluation of the sensory input: "Pressure-initiated raw pulses from the pulse generator accumulated in the signal cache center until a predefined pain threshold is surpassed, activating a pain signal." This can allow the equivalent of basic reflex reactions that don't involve higher-level control systems. For example, the researchers set up a robotic arm covered with their artificial skin and got it to move whenever it experienced pressure that could cause damage. The second layer also combines and filters signals from the skin before sending the information on to the arm's controller, which is the equivalent of the brain in this situation. So, the same system caused a robotic face to change expressions based on how much pressure its arm was sensing.
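To make that encoding scheme concrete, here is a minimal toy sketch in Python. It is not the researchers' implementation (which runs on spiking neuromorphic hardware); it only mimics the ideas described above: pressure encoded as spike frequency, a periodic "I'm still here" heartbeat from each segment, and a second layer that accumulates pressure spikes until a pain threshold triggers a reflex. All names, rates, and thresholds are invented for illustration.

class SpikingSensor:
    """Toy model of one skin segment: pressure is encoded as spike frequency,
    plus a periodic 'I'm still here' heartbeat spike (illustrative only)."""

    def __init__(self, sensor_id, heartbeat_period=1.0):
        self.sensor_id = sensor_id
        self.heartbeat_period = heartbeat_period

    def spikes(self, pressure, duration, dt=0.01):
        """Yield (time, kind) events over `duration` seconds of simulated time.
        Spike rate grows with pressure; heartbeats fire on a fixed period."""
        rate = 5.0 + 50.0 * pressure          # Hz; arbitrary illustrative mapping
        next_spike, next_beat, t = 1.0 / rate, self.heartbeat_period, 0.0
        while t < duration:
            t += dt
            if t >= next_spike:
                yield (t, "pressure")
                next_spike += 1.0 / rate
            if t >= next_beat:
                yield (t, "heartbeat")
                next_beat += self.heartbeat_period

class ReflexLayer:
    """Toy second layer: counts pressure spikes per sensor, notes heartbeats,
    and raises a 'pain' reflex once an accumulation threshold is crossed,
    before any higher-level controller gets involved."""

    def __init__(self, pain_threshold=40):
        self.pain_threshold = pain_threshold
        self.counts = {}          # pressure spikes accumulated per sensor
        self.last_heartbeat = {}  # last time each sensor said "I'm still here"

    def process(self, sensor_id, event):
        t, kind = event
        if kind == "heartbeat":
            self.last_heartbeat[sensor_id] = t
            return None
        self.counts[sensor_id] = self.counts.get(sensor_id, 0) + 1
        if self.counts[sensor_id] >= self.pain_threshold:
            self.counts[sensor_id] = 0
            return "PAIN reflex: retract limb near sensor " + sensor_id
        return None

sensor = SpikingSensor("palm-03")
layer = ReflexLayer(pain_threshold=40)
for event in sensor.spikes(pressure=0.9, duration=2.0):
    message = layer.process(sensor.sensor_id, event)
    if message:
        print("t=%.2fs  %s" % (event[0], message))

The "bar code" identity and spike-shape channels described in the article are omitted here; in the real system those extra dimensions are what let a shared line carry both the reading and the address of the sensor it came from.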

[...] The skin is designed to be assembled from a collection of segments that can snap together using magnetic interlocks. These automatically link up any necessary wiring, and each segment of skin broadcasts a unique identity code. So, if the system identifies damage, it's relatively easy for an operator to pop out the damaged segment and replace it with fresh hardware, and then update any data that links the new segment's ID with its location. The researchers call their development a neuromorphic robotic e-skin, or NRE-skin. "Neuromorphic" as a term is a bit vague, with some people using it to mean a technology that directly follows the principles used by the nervous system. That's definitely not this skin. Instead, it uses "neuromorphic" far more loosely, with the operation of the nervous system acting as an inspiration for the system.
The findings have been published in the journal PNAS.
AI

Ask Slashdot: What's the Stupidest Use of AI You Saw In 2025? 61

Long-time Slashdot reader destinyland writes: What's the stupidest use of AI you encountered in 2025? Have you been called by AI telemarketers? Forced to do job interviews with a glitching AI?

With all this talk of "disruption" and "inevitability," this is our chance to have some fun. Personally, I think 2025's worst AI "innovation" was the AI-powered web browsers that eat web pages and then spit out a slop "summary" of what you would've seen if you'd actually visited the web page. But there've been other AI projects that were just exquisitely, quintessentially bad...

— Two years after the death of Suzanne Somers, her husband recreated her with an AI-powered robot.

— Disneyland imagineers used deep reinforcement learning to program a talking robot snowman.

— Attendees at LA Comic Con were offered the chance to talk to an AI-powered hologram of Stan Lee for $20.

— And of course, as the year ended, the Wall Street Journal announced that a vending machine run by Anthropic's Claude AI had been tricked into giving away hundreds of dollars in merchandise for free, including a PlayStation 5, a live fish, and underwear.

What did I miss? What "AI fails" will you remember most about 2025?

Share your own thoughts and observations in the comments.

Robotics

Researchers Show Some Robots Can Be Hijacked Just Through Spoken Commands (interestingengineering.com) 25

An anonymous Slashdot reader shared this story from Interesting Engineering: Cybersecurity specialists from the research group DARKNAVY have demonstrated how modern humanoid robots can be compromised and weaponised through weaknesses in their AI-driven control systems.

In a controlled test, the team demonstrated that a commercially available humanoid robot could be hijacked with nothing more than spoken commands, exposing how voice-based interaction can serve as an attack vector rather than a safeguard, reports Yicaiglobal... Using short-range wireless communication, the hijacked machine transmitted the exploit to another robot that was not connected to the network. Within minutes, this second robot was also taken over, demonstrating how a single breach could cascade through a group of machines. To underline the real-world implications, the researchers issued a hostile command during the demonstration. The robot advanced toward a mannequin on stage and struck it, illustrating the potential for physical harm.

Transportation

Waymo Pays Workers $22 To Close Doors on Stranded Robotaxis (msn.com) 72

Waymo's fleet of autonomous robotaxis can navigate city streets and compete with human taxi drivers, but they become stranded when a passenger leaves a door ajar -- prompting the company to pay tow truck operators around $20 to $24 through an app called Honk just to push a door shut. The owner of a towing company in Inglewood, California, completes up to three such jobs a week for Waymo, sometimes freeing vehicles by removing seat belts caught in doors. Another Los Angeles tow operator said locating stuck robotaxis can take 10 minutes to an hour because the precise location isn't always provided, forcing workers to search on foot through streets too narrow for flatbed rigs.

Tow operators also retrieve Waymos that run out of battery before reaching charging stations, earning $60 to $80 per tow -- rates that aren't always profitable after factoring in fuel and labor. During a San Francisco power outage last weekend, multiple operators received a flurry of retrieval requests as robotaxis blocked intersections across the city. One San Francisco tow company manager declined because Waymo's offered rate fell below his standard $250 flatbed fee.

Waymo said in a blog post that the outage caused a "backlog" in requests to remote human workers who help vehicles navigate defunct traffic signals. San Francisco Supervisor Bilal Mahmood called for a hearing into Waymo's operations, saying the traffic disruptions were "dangerous and unacceptable." A retired Carnegie Mellon engineering professor who studied autonomous vehicles for nearly 30 years said paying humans to close doors and retrieve stalled cars is expensive and will need to be minimized as Waymo scales up. The company is testing next-generation Zeekr vehicles in San Francisco that feature automatic sliding doors.
