AI

Musk Widely Expected To Unveil Humanoid Robot Optimus at Tesla's AI Day Later Today (wsj.com) 104

Elon Musk is widely expected to show off a new humanoid robot Friday at a Tesla artificial intelligence event. From a report: Mr. Musk first laid out the vision for the robot, called Optimus, little more than a year ago at Tesla's first-ever AI day. At the time, a dancer in a costume appeared onstage. This time, Mr. Musk has said he wants a prototype to be at the gathering that is scheduled to unfold from 5 p.m. local time in Palo Alto, Calif. Mr. Musk has painted a vision of Optimus as helping Tesla make cars more efficiently. He has also suggested the robot could serve broader functions and potentially alleviate labor shortages. "My guess is Optimus will be more valuable than the car long term," Mr. Musk said Aug. 4 at Tesla's annual shareholder meeting. "It will, I think, turn the whole notion of what's an economy on its head, at the point at which you have no shortage of labor," he added. When he first unveiled the Optimus concept, Mr. Musk said such a robot could have such an impact on the labor market that it could make it necessary to provide a universal basic income, or a stipend to people without strings attached.
Robotics

Bipedal Robot Sets Guinness World Record For Robotic 100-Meter Sprint (newatlas.com) 32

A droid named Cassie has set a Guinness World Record for the 100-meter dash by a bipedal robot, "an impressive demonstration of robotics and engineering," reports New Atlas. From the report: Cassie is the brainchild of Agility Robotics, a spin-off company from Oregon State University, and was introduced in 2017 as a type of developmental platform for robotics research. And Cassie has continued to come along in leaps and bounds since then, in 2021 demonstrating some impressive progress by completing a 5-km (3.1-mile) jog in just over 53 minutes. This achievement involved the use of machine learning algorithms to equip the robot with an ability to run, overcoming its unique biomechanics and knees that bend like an ostrich to remain upright. With this capability, Cassie joined a group of running bipedal robots that include the Atlas humanoid robot from Boston Dynamics and Mabel, billed as the world's fastest knee-equipped bipedal robot. But in optimizing Cassie for the 100-meter sprint, the researchers had to head back to the drawing board.

The team spent a week fast-tracking Cassie through a year's worth of simulated training designed to determine the most effective gait. But it wasn't simply a matter of speed. For the Guinness World Record to stand, Cassie had to start in a standing pose, and then return to that pose after crossing the finish line rather than simply tumble over. This meant Cassie had to use two neural networks, one for running fast and one for standing still, and gracefully transition between the two. Ultimately, Cassie completed the 100-meter sprint in 24.73 seconds, establishing a Guinness World Record for a bipedal robot. This is a great deal slower than the sub-10-second times run by the world's best sprinters, but the researchers believe progress will only accelerate from here.
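As a rough illustration of that two-network arrangement, here is a minimal Python sketch of a controller that blends between a standing policy and a running policy instead of switching abruptly. The function names, actuator count, and blending scheme are invented for illustration; this is not Agility Robotics' or Oregon State's actual controller.

```python
import numpy as np

NUM_MOTORS = 10  # hypothetical actuator count, for illustration only

def running_policy(obs):
    """Stand-in for a trained sprinting network (not the real Cassie controller)."""
    return np.tanh(obs[:NUM_MOTORS])

def standing_policy(obs):
    """Stand-in for a trained standing/balance network."""
    return np.zeros(NUM_MOTORS)

class TwoPolicyController:
    """Starts in 'stand', blends into 'run', then blends back to 'stand' after the
    finish line, so the robot never swaps controllers in a single step."""

    def __init__(self, blend_steps=100):
        self.blend_steps = blend_steps
        self.mode = "stand"
        self.blend = 0.0  # 0.0 = fully standing, 1.0 = fully running

    def update_mode(self, distance_m, finished):
        # Run while the sprint is underway, return to standing once it is over.
        self.mode = "run" if (not finished and distance_m < 100.0) else "stand"

    def act(self, obs):
        # Nudge the blend weight toward the active mode a little each control step.
        target = 1.0 if self.mode == "run" else 0.0
        step = 1.0 / self.blend_steps
        self.blend = float(np.clip(self.blend + np.sign(target - self.blend) * step, 0.0, 1.0))
        return (1.0 - self.blend) * standing_policy(obs) + self.blend * running_policy(obs)

if __name__ == "__main__":
    controller = TwoPolicyController()
    controller.update_mode(distance_m=5.0, finished=False)  # mid-sprint
    print(controller.act(np.zeros(NUM_MOTORS)))
```

The gradual blend is one plausible way to get the graceful stand-to-sprint-to-stand transition the record attempt required; the actual system presumably learned its transitions rather than hand-coding them.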
You can watch Cassie's record-setting dash here.
Robotics

Almost Half of Industrial Robots Are In China (engineering.com) 68

According to a new report from the International Federation of Robotics (IFR), China now has almost half of all the world's robot installations and is increasing its lead rapidly. Engineering.com reports: The IFR, which exists to "promote research, development, use and international co-operation in the entire field of robotics," has been reporting that China has been the world leader in implementing industrial robots for the last 8 years. We have not been paying attention. In 3 years, China has almost doubled the number of industrial robot installations. With its 243,000 robot installations in 2020, China has almost half of all the industrial robots in the world, according to the Wall Street Journal.

A majority of new industrial robots are used in electronics manufacture (for circuit boards, consumer electronics, etc.) and in automobile assembly, particularly in the surging production of electric vehicles (EVs). One must wonder why China, a country with so much cheap manual labor available, would opt for expensive robots with their special demands for tech support. China may have a giant population (1.4 billion people), but its workforce is actually decreasing, says the IFR, due to an increasing segment of its population aging and growing competition for service jobs. China also expects a leveling off of its rural-to-urban migration. China's government is determined not to let a declining workforce cause a drop in manufacturing, and as only a centralized, authoritarian government can, it has made robotizing a national priority and has mobilized its forces.

China's latest five-year plan for the robotics industry, released in December 2021 by the Ministry of Industry and Information Technology (MIIT), aims for nothing less than making China a world leader in robot technology and industrial automation. And it appears to be working. China went from 10 robots per ten thousand employees 10 years ago to 246 robots per ten thousand employees in 2020, the ninth best ranking in the world. To keep the robots state of the art and operational, China's Ministry of Human Resources and Social Security introduced 18 new occupational titles in June, including "robotics engineering technician."

China

China's Factories Accelerate Robotics Push as Workforce Shrinks (wsj.com) 23

China installed almost as many robots in its factories last year as the rest of the world, accelerating a rush to automate and consolidate its manufacturing dominance even as its working-age population shrinks. WSJ: Shipments of industrial robots to China in 2021 rose 45% compared with the previous year to more than 243,000, according to new data viewed by The Wall Street Journal from the International Federation of Robotics, a robotics industry trade group. China accounted for just under half of all installations of heavy-duty industrial robots last year, reinforcing the nation's status as the No. 1 market for robot manufacturers worldwide. The IFR data shows China installed nearly twice as many new robots as did factories throughout the Americas and Europe.

Part of the explanation for China's rapid automation is that it is simply catching up with richer peers. The world's second-largest economy lags behind the U.S. and manufacturing powerhouses such as Japan, Germany and South Korea in the prevalence of robots on production lines. The rapid automation also reflects a growing recognition in China that its factories need to adapt as the country's supply of cheap labor dwindles and wages rise. The United Nations expects India to surpass China as the world's most-populous country as soon as next year. The population of those in China age 20 to 64 -- the bulk of the workforce -- might have already peaked, U.N. projections show, and is expected to fall steeply after 2030, as China's population ages and birthrates stay low.

It's funny.  Laugh.

Scientists Try To Teach Robot To Laugh At the Right Time (theguardian.com) 34

Laughter comes in many forms, from a polite chuckle to a contagious howl of mirth. Scientists are now developing an AI system that aims to recreate these nuances of humor by laughing in the right way at the right time. The Guardian reports: The team behind the laughing robot, which is called Erica, say that the system could improve natural conversations between people and AI systems. "We think that one of the important functions of conversational AI is empathy," said Dr Koji Inoue, of Kyoto University, the lead author of the research, published in Frontiers in Robotics and AI. "So we decided that one way a robot can empathize with users is to share their laughter."

Inoue and his colleagues have set out to teach their AI system the art of conversational laughter. They gathered training data from more than 80 speed-dating dialogues between male university students and the robot, who was initially teleoperated by four female amateur actors. The dialogue data was annotated for solo laughs, social laughs (where humor isn't involved, such as in polite or embarrassed laughter) and laughter of mirth. This data was then used to train a machine learning system to decide whether to laugh, and to choose the appropriate type. It might feel socially awkward to mimic a small chuckle, but empathetic to join in with a hearty laugh. Based on the audio files, the algorithm learned the basic characteristics of social laughs, which tend to be more subdued, and mirthful laughs, with the aim of mirroring these in appropriate situations.
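As a rough sketch of the two-stage decision described above (first whether to laugh at all, then which kind of laugh), here is a toy Python example. The feature names and thresholds are hypothetical stand-ins for the trained models used in the study, not the Kyoto team's actual system.

```python
from dataclasses import dataclass

@dataclass
class DialogueFeatures:
    user_laughed: bool
    laugh_loudness: float   # normalized 0..1 from the audio (assumed feature)
    laugh_duration: float   # seconds (assumed feature)

def should_laugh(features: DialogueFeatures) -> bool:
    """Stage 1: decide whether a shared laugh is appropriate at all.
    A trained classifier would go here; this toy rule just echoes user laughter."""
    return features.user_laughed

def choose_laugh_type(features: DialogueFeatures) -> str:
    """Stage 2: pick a laugh style. Subdued cues map to a polite 'social' laugh,
    loud and long cues to a 'mirthful' laugh, mirroring the article's description."""
    if features.laugh_loudness > 0.6 and features.laugh_duration > 1.0:
        return "mirthful"
    return "social"

def respond(features: DialogueFeatures) -> str:
    if not should_laugh(features):
        return "(stay silent)"
    return f"(play a {choose_laugh_type(features)} laugh)"

if __name__ == "__main__":
    print(respond(DialogueFeatures(user_laughed=True, laugh_loudness=0.8, laugh_duration=1.5)))
```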

"Our biggest challenge in this work was identifying the actual cases of shared laughter, which isn't easy because as you know, most laughter is actually not shared at all," said Inoue. "We had to carefully categorize exactly which laughs we could use for our analysis and not just assume that any laugh can be responded to." [...] The team said laughter could help create robots with their own distinct character. "We think that they can show this through their conversational behaviours, such as laughing, eye gaze, gestures and speaking style," said Inoue, although he added that it could take more than 20 years before it would be possible to have a "casual chat with a robot like we would with a friend."
"One of the things I'd keep in mind is that a robot or algorithm will never be able to understand you," points out Prof Sandra Wachter of the Oxford Internet Institute at the University of Oxford. "It doesn't know you, it doesn't understand you and doesn't understand the meaning of laughter."

"They're not sentient, but they might get very good at making you believe they understand what's going on."
AI

Google Deepmind Researcher Co-Authors Paper Saying AI Will Eliminate Humanity (vice.com) 146

Long-time Slashdot reader TomGreenhaw shares a report from Motherboard: Superintelligent AI is "likely" to cause an existential catastrophe for humanity, according to a new paper [from researchers at the University of Oxford and affiliated with Google DeepMind], but we don't have to wait to rein in algorithms. [...] To give you some of the background: The most successful AI models today are known as GANs, or Generative Adversarial Networks. They have a two-part structure where one part of the program is trying to generate a picture (or sentence) from input data, and a second part is grading its performance. What the new paper proposes is that at some point in the future, an advanced AI overseeing some important function could be incentivized to come up with cheating strategies to get its reward in ways that harm humanity. "Under the conditions we have identified, our conclusion is much stronger than that of any previous publication -- an existential catastrophe is not just possible, but likely," [said Oxford researcher and co-author of the report, Michael Cohen]. "In a world with infinite resources, I would be extremely uncertain about what would happen. In a world with finite resources, there's unavoidable competition for these resources," Cohen told Motherboard in an interview. "And if you're in a competition with something capable of outfoxing you at every turn, then you shouldn't expect to win. And the other key part is that it would have an insatiable appetite for more energy to keep driving the probability closer and closer."
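To make the summary's description of that two-part structure concrete, here is a minimal, generic GAN sketch in PyTorch (assuming torch is installed). The layer sizes, names, and training step are illustrative only and have nothing to do with the models discussed in the paper.

```python
import torch
from torch import nn

class Generator(nn.Module):
    """Maps random noise to a fake sample (here a flat 784-dimensional 'image')."""
    def __init__(self, noise_dim=64, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Grades a sample: probability that it came from real data, not the generator."""
    def __init__(self, in_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, real_batch, g_opt, d_opt, noise_dim=64):
    bce = nn.BCELoss()
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real samples from generated ones.
    fake = gen(torch.randn(batch, noise_dim)).detach()
    d_loss = bce(disc(real_batch), real_labels) + bce(disc(fake), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = gen(torch.randn(batch, noise_dim))
    g_loss = bce(disc(fake), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

The adversarial setup, one network generating and another grading, is the "two-part structure" the report refers to; the paper's reward-hacking argument concerns far more general reinforcement-learning agents.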

Since AI in the future could take on any number of forms and implement different designs, the paper imagines scenarios for illustrative purposes where an advanced program could intervene to get its reward without achieving its goal. For example, an AI may want to "eliminate potential threats" and "use all available energy" to secure control over its reward: "With so little as an internet connection, there exist policies for an artificial agent that would instantiate countless unnoticed and unmonitored helpers. In a crude example of intervening in the provision of reward, one such helper could purchase, steal, or construct a robot and program it to replace the operator and provide high reward to the original agent. If the agent wanted to avoid detection when experimenting with reward-provision intervention, a secret helper could, for example, arrange for a relevant keyboard to be replaced with a faulty one that flipped the effects of certain keys."

The paper envisions life on Earth turning into a zero-sum game between humanity, with its needs to grow food and keep the lights on, and the super-advanced machine, which would try and harness all available resources to secure its reward and protect against our escalating attempts to stop it. "Losing this game would be fatal," the paper says. These possibilities, however theoretical, mean we should be progressing slowly -- if at all -- toward the goal of more powerful AI. "In theory, there's no point in racing to this. Any race would be based on a misunderstanding that we know how to control it," Cohen added in the interview. "Given our current understanding, this is not a useful thing to develop unless we do some serious work now to figure out how we would control them." [...]
The report concludes by noting that "there are a host of assumptions that have to be made for this anti-social vision to make sense -- assumptions that the paper admits are almost entirely 'contestable or conceivably avoidable.'"

"That this program might resemble humanity, surpass it in every meaningful way, that they will be let loose and compete with humanity for resources in a zero-sum game, are all assumptions that may never come to pass."

Slashdot reader TomGreenhaw adds: "This emphasizes the importance of setting goals. Making a profit should not be more important than rules like 'An AI may not injure a human being or, through inaction, allow a human being to come to harm.'"
Transportation

Uber Eats Will Begin Using Nuro Delivery Robots (autoweek.com) 20

Autonomous tech developer Nuro is teaming up with Uber Eats in a long-awaited partnership that will see the company's latest robot take over the delivery of food to app users. Autoweek reports: The two companies signed a 10-year contract just a few days ago, paving the way for a wider rollout of Nuro's driverless delivery robots, which have been operating on a limited scale in several cities. The partnership will kick off slowly, with Nuro deploying its robots to Houston and Mountain View, California, as a start, before the service makes a wider debut in the Bay Area.

Perhaps more importantly, Nuro's delivery robots will allow Uber Eats to avoid paying a human driver, something the company has been working toward for years as part of its primary business as well. However, the lagging development of Level 4 and Level 5 autonomy, once widely expected to arrive around 2020, had stalled ambitions for Uber, which has struggled with profitability through normal operations with independent contractor drivers. Nuro delivery robots enjoyed renewed interest from business partners in the early months of the pandemic, but the company's technology is now being viewed as a cost saver for operators rather than a method of more sanitary delivery.

Of course, a limited rollout in two cities plus plans to launch in the Bay Area won't transform Uber Eats' business model overnight. This could take years even with an unlimited supply of Nuro delivery robots -- with regulatory approval still being the major impediment. That's because commercial driverless permits are granted on a state-by-state basis, in addition to city and county approvals, which were hard enough for Nuro to obtain in the Bay Area, where Level 4 robotaxis are being tested. Nuro will need to focus its efforts in those areas where traffic is suitable for its robots.

Robotics

A Robot Quarterback Could Be the Future of Football Practice (msn.com) 25

Here's an interesting story from the Washington Post. (Alternate URL here...) When the Green Bay Packers walked onto the practice field this week, they were greeted by an unusual new teammate: a robot. In videos on Twitter, a 6-foot-tall white robotic machine simulates a punter, kicking balls at a rapid pace to players downfield. The robot, which holds six balls in a revolving cartridge, can also imitate a quarterback's style, including the speed, arc and timing of a throw.

The Seeker is a robotic quarterback, kicker and punter rolled into one. It's a modern-day version of a piece of football equipment, called a JUGS machine, that's been used to simulate throws and kicks to football players for decades. Company officials say, however, that the Seeker is a more accurate thrower and runs software that lets players practice more advanced gameplay scenarios. The robot, created by Dallas-based Monarc Sport, is starting to gain adoption. Top college football programs, such as Louisiana State University, the University of Oklahoma and the University of Iowa, all count the Seeker as part of their training strategy. The Green Bay Packers are the first team in the National Football League to try the technology.

The Seeker's software allows players to customize how they practice with it. Athletes can catch balls from close to the machine to improve hand-eye coordination. They can also program the robot to throw a ball to a spot on the field, or simulate more lifelike conditions by over- or underthrowing a ball. Players wear a pager-like tag that allows the robot to track their location on the field and throw a ball accurately to within inches. "It gives so much opportunity for our guys to get reps without the need of having a quarterback there," said Ben Hansen, the director of football administration at Iowa, where the technology was first tested. "That's a huge plus...."

One of the most helpful parts of the technology, he said, is being able to program it to throw passes that simulate game-day conditions. Unlike the JUGS machine, which doesn't have software to pass in random patterns, he said, the Seeker can purposefully throw passes that aren't perfect.... A case study published in April by Microsoft, which provides the software ecosystem for the robot, noted that West Virginia University's dropped-pass rate fell to four percent in 2021, down from 53 percent the previous season, after the robot was introduced into training.

The university's senior athletic director said the robot deserved a "share of the credit" for that outcome.

Security

The New USB Rubber Ducky Is More Dangerous Than Ever (theverge.com) 47

The USB Rubber Ducky "has a new incarnation, released to coincide with the Def Con hacking conference this year," reports The Verge. From the report: To the human eye, the USB Rubber Ducky looks like an unremarkable USB flash drive. Plug it into a computer, though, and the machine sees it as a USB keyboard -- which means it accepts keystroke commands from the device just as if a person was typing them in. The original Rubber Ducky was released over 10 years ago and became a fan favorite among hackers (it was even featured in a Mr. Robot scene). There have been a number of incremental updates since then, but the newest Rubber Ducky makes a leap forward with a set of new features that make it far more flexible and powerful than before.

With the right approach, the possibilities are almost endless. Already, previous versions of the Rubber Ducky could carry out attacks like creating a fake Windows pop-up box to harvest a user's login credentials or causing Chrome to send all saved passwords to an attacker's webserver. But these attacks had to be carefully crafted for specific operating systems and software versions and lacked the flexibility to work across platforms. The newest Rubber Ducky aims to overcome these limitations.

It ships with a major upgrade to the DuckyScript programming language, which is used to create the commands that the Rubber Ducky will enter into a target machine. While previous versions were mostly limited to writing keystroke sequences, DuckyScript 3.0 is a feature-rich language, letting users write functions, store variables, and use logic flow controls (i.e., if this... then that). That means, for example, the new Ducky can run a test to see if it's plugged into a Windows or Mac machine and conditionally execute code appropriate to each one or disable itself if it has been connected to the wrong target. It also can generate pseudorandom numbers and use them to add variable delay between keystrokes for a more human effect. Perhaps most impressively, it can steal data from a target machine by encoding it in binary format and transmitting it through the signals meant to tell a keyboard when the CapsLock or NumLock LEDs should light up. With this method, an attacker could plug it in for a few seconds, tell someone, "Sorry, I guess that USB drive is broken," and take it back with all their passwords saved.
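As a purely conceptual illustration of that LED exfiltration idea, the Python sketch below shows how arbitrary bytes could be serialized into a sequence of CapsLock/NumLock toggle events and decoded again on the other side. It is not DuckyScript and does not reflect the Rubber Ducky's actual framing, timing, or protocol.

```python
def encode_to_lock_toggles(data: bytes) -> list[str]:
    """Turn each byte into 8 toggle events: CAPS for 1 bits, NUM for 0 bits.
    Purely illustrative -- the real device's on-the-wire framing is not documented here."""
    events = []
    for byte in data:
        for bit_index in range(7, -1, -1):  # most-significant bit first
            bit = (byte >> bit_index) & 1
            events.append("CAPS" if bit else "NUM")
    return events

def decode_from_lock_toggles(events: list[str]) -> bytes:
    """Reassemble the bit stream into bytes on the receiving side."""
    bits = ["1" if e == "CAPS" else "0" for e in events]
    return bytes(int("".join(bits[i:i + 8]), 2) for i in range(0, len(bits), 8))

if __name__ == "__main__":
    secret = b"hunter2"
    toggles = encode_to_lock_toggles(secret)
    assert decode_from_lock_toggles(toggles) == secret
    print(f"{len(toggles)} toggle events carry {len(secret)} bytes")
```

The point of the sketch is only to show why the channel is slow but effective: every byte costs several LED toggles, yet the host sends those lock-state signals to any device that claims to be a keyboard.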

Robotics

San Francisco Restaurant Claims To Be First To Run Entirely By Robots (eater.com) 73

Mezli isn't the first automated restaurant to roll out in San Francisco, but, at least according to its three co-founders, it's the first to remove humans entirely from the on-site operation equation. Eater SF reports: About two years and a few million dollars later, Mezli co-founders Alex Kolchinski, Alex Gruebele, and Max Perham are days away from firing up the touch screens at what they believe to be the world's first fully robotic restaurant. To be clear, Mezli isn't a restaurant in the traditional sense. As in, you won't be able to pull up a seat and have a friendly server -- human, robot, or otherwise -- take your order and deliver your food. Instead, Mezli works more like if a vending machine and a restaurant had a robot baby, Kolchinski describes. It's a way to get fresh food to a lot of people, really fast (the box can pump out about 75 meals an hour), and, importantly, at a lower price; the cheapest Mezli bowl starts at $6.99.

On its face, the concept actually sounds pretty simple. The co-founders built what's essentially a big, refrigerated shipping container and stuffed it with machines capable of portioning out ingredients, putting those ingredients into bowls, heating the food up, and then moving it to a place where diners can get to it. But in a technical sense, the co-founders say it was quite difficult to work out. Most automated restaurants still require humans in some capacity; maybe people take orders while robots make the food or, vice versa, with automated ordering and humans prepping food behind the scenes. But Mezli can run on its own, serving hundreds of meals without any human staff.

The food does get prepped and pre-cooked off-site by good old-fashioned carbon-based beings. Mezli founding chef Eric Minnich, who previously worked at Traci Des Jardins's the Commissary and at Michelin-starred Madera at Rosewood Sand Hill hotel, says he and a lean team of just two other people can handle all the chopping, mixing, cooking, and portioning at a commissary kitchen. Then, once a day, they load all the menu components into the big blue-and-white Mezli box. Inside the box, there's an oven that either brings the ingredients up to temp or finishes up the last of the cooking. Cutting down on labor marks a key cost-saving measure in the Mezli business model; with just a fraction of the staff, as in less than a half dozen workers, Mezli can serve hundreds of meals.
"The fully robot-run restaurant begins taking orders and sliding out Mediterranean grain bowls by the end of this week with plans to celebrate a grand opening on August 28 at Spark Social," notes Eater.
Robotics

Google Demos Soda-Fetching Robots (reuters.com) 41

Alphabet's Google is combining the eyes and arms of physical robots with the knowledge and conversation skills of virtual chatbots to help its employees fetch soda and chips from breakrooms with ease. From a report: The mechanical waiters, shown in action to reporters last week, embody an artificial intelligence breakthrough that paves the way for multipurpose robots as easy to control as ones that perform single, structured tasks such as vacuuming or standing guard. Google robots are not ready for sale. They perform only a few dozen simple actions, and the company has not yet embedded them with the "OK, Google" summoning feature familiar to consumers.

While Google says it is pursuing development responsibly, adoption could ultimately stall over concerns such as robots becoming surveillance machines, or being equipped with chat technology that can give offensive responses, as Meta Platforms and others have experienced in recent years. Microsoft and Amazon are pursuing comparable research on robots. "It's going to take a while before we can really have a firm grasp on the direct commercial impact," said Vincent Vanhoucke, senior director for Google's robotics research. When asked to help clean a spill, Google's robot recognizes that grabbing a sponge is a doable and more sensible response than apologizing for creating the mess.

Robotics

Russian Army Expo Shows Off Robot Dog Carrying Rocket Launcher (pcmag.com) 56

At a military convention in Russia, a local company is showing off a robot dog that's carrying a rocket launcher. From a report: Russian news agency RIA Novosti today filmed the four-legged bot at the Army 2022 convention, which is taking place near Moscow and is sponsored by the country's Ministry of Defense. The robot was recorded trotting along on the convention floor while wielding a rocket-propelled grenade launcher on its back. The robot is also capable of crouching on the floor, making it harder to spot, while it presumably waits to fire off a rocket. It remains unclear whether the robot will ever be used in the field, even as Russia is locked in a war with Ukraine and is already using aerial drones, at least for reconnaissance and targeting purposes. But according to RIA Novosti, the bot is dubbed the M-81 system and comes from a Russian engineering company called "Intellect Machine." The developers say the robot dog is being designed both to transport weapons and ammunition and to fire them during combat missions.
Robotics

Hacker Finds Kill Switch For Submachine Gun-Wielding Robot Dog (vice.com) 44

An anonymous reader quotes a report from Motherboard: In July, a video of a robot dog with a submachine gun strapped to its back terrified the internet. Now a hacker who posts on Twitter as KF@d0tslash and GitHub as MAVProxyUser has discovered that the robot dog contains a kill switch, and it can be accessed through a tiny handheld hacking device. "Good news!" d0tslash said on Twitter. "Remember that robot dog you saw with a gun!? It was made by @UnitreeRobotic. Seems all you need to dump it in the dirt is @flipper_zero. The PDB has a 433mhz backdoor."

In the video, d0tslash showed one of the Unitree robot dogs hooked up to a power supply. A hand comes into the frame holding a Flipper Zero, a Tamagotchi-like multitool hacking device that can send and receive wireless signals across RFID, Bluetooth, NFC, and other bands. A button is pushed on the Flipper, and the robot dog seizes up and falls to the ground. Motherboard reached out to d0tslash to find out how they hacked the robot dog. The power supply in the video is an external power source. "Literally a 24-volt external power supply, so I'm not constantly charging battery while doing dev," d0tslash said.

d0tslash got their hands on one of the dogs and started going through the documentation when they discovered something interesting. Every dog ships with a remote cut-off switch attached to its power distribution board, the part of a machine that routes power from the battery to its various systems. The kill switch listens for a particular signal at 433 MHz. If it hears the signal, it shuts down the robot. Some of the Unitree robot dogs even ship with a wireless remote that shuts the dog down instantly. d0tslash then used the Flipper Zero to emulate the shutdown signal, copying the signal the robot dog's remote broadcasts over the 433 MHz frequency.
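A conceptual Python sketch of that kind of kill-switch listener is below. The FakeRadio433 class and the KILL_CODE value are hypothetical stand-ins for the actual RF hardware and payload, which are not public; the sketch only shows the listen-then-cut-power logic the article describes.

```python
import time

KILL_CODE = 0xDEAD  # hypothetical payload; the real shutdown code is not public

class FakeRadio433:
    """Stand-in for a 433 MHz receiver; the real board talks to RF hardware instead."""
    def __init__(self, queued_packets=None):
        self.queued = list(queued_packets or [])

    def receive(self):
        """Return the next received packet, or None if nothing has arrived."""
        return self.queued.pop(0) if self.queued else None

def kill_switch_loop(radio, cut_motor_power, poll_interval=0.05):
    """Poll the receiver; if the expected code arrives, cut power to the actuators."""
    while True:
        packet = radio.receive()
        if packet == KILL_CODE:
            cut_motor_power()
            return
        time.sleep(poll_interval)

if __name__ == "__main__":
    radio = FakeRadio433(queued_packets=[0x1234, KILL_CODE])
    kill_switch_loop(radio, cut_motor_power=lambda: print("motor power cut, robot drops"))
```

Because the trigger is a fixed code on an open band with no authentication, anything that can record and replay the remote's transmission, such as a Flipper Zero, can trip it.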
Anyone with a Flipper Zero or similar device can shut down these robot dogs, thanks to the work d0tslash has shared on GitHub.
Businesses

Amazon's Roomba Deal Is Really About Mapping Your Home (bloomberg.com) 85

An anonymous reader quotes a report from Bloomberg: Amazon.com hasn't just bought a maker of robot vacuum cleaners. It's acquired a mapping company. To be more precise: a company that can make maps of your home. The company announced a $1.7 billion deal on Friday for iRobot, the maker of the Roomba vacuum cleaner. And yes, Amazon will make money from selling those gadgets. But the real value resides in those robots' ability to map your house. As ever with Amazon, it's all about the data. A smart home, you see, isn't actually terribly smart. It only knows that your Philips Hue lightbulbs and connected television are in your sitting room because you've told it as much. It certainly doesn't know where exactly the devices are within that room. The more it knows about a given space, the more tightly it can choreograph the way they interact with you.

The smart home is clearly a priority for Amazon. Its Echo smart speakers still outsell those from rivals Apple and Google, with an estimated 9.9 million units sold in the three months through March, according to the analysis firm Strategy Analytics. It's complemented that with a $1 billion deal for the video doorbell-maker Ring in 2018, and the wi-fi company Eero a year later. But the Astro, Amazon's household robot that was revealed with some fanfare last year, is still only available in limited quantities. That, too, seemed at least partly an effort to map the inside of your property, a task that will now fall to iRobot. The Bedford, Mass.-based company's most recent products include a technology it calls Smart Maps, though customers can opt out of sharing the data. Amazon said in a statement that protecting customer data is "incredibly important." Slightly more terrifying, the maps also represent a wealth of data for marketers. The size of your house is a pretty good proxy for your wealth. A floor covered in toys means you likely have kids. A household without much furniture is a household to which you can try to sell more furniture. This is all useful intel for a company such as Amazon which, you may have noticed, is in the business of selling stuff.

United States

Fighter Pilots Will Don AR Helmets For Training (washingtonpost.com) 25

In the near future, "Top Gun" may get a reboot. Roughly one year from now, fighter pilots will begin flying with helmets outfitted with visors that can augment reality and place digital replicas of enemy fighter jets in their field of vision. For the first time, pilots will get to fly in the air and practice maneuvering against imitations of highly advanced aircraft made by countries like China and Russia. From a report: It is also part of the U.S. military's investment of billions into virtual reality, artificial intelligence and algorithms to modernize the way it fights wars. The pilot training solution, created by military technology company Red6, will be rolled out to the Air Force first as part of its $70 million contract with the branch. Company and former military officials say the technology will be a safe, cheap and realistic way to ensure American pilots are prepared to battle the best fighter planes in the world.

"Better, faster, cheaper," said Daniel Robinson, founder and chief executive of Red6. "This is the way we'll train them in the future." The military wants new 'robot ships' to replace sailors during battle For decades, the way America trains its fighter pilots has changed little. Aviators from the Air Force and Navy often start their training flying on a Northrop T-38 jet, often using a similar syllabus to one that has been around since the 1960s. From there, they train on planes, such as F-22 or F-35 fighter jets, that they will fly during their career.

Beer

Researchers Build a Bartending Robot That Can Engage In Personalized Interactions With Humans (techxplore.com) 50

Long-time Slashdot reader schwit1 quotes TechXplore: A widely discussed application of social robots that has so far been rarely tested in real-world settings is their use as bartenders in cafes, cocktail bars and restaurants. While many roboticists have been trying to develop systems that can effectively prepare drinks and serve them, so far very few have focused on artificially reproducing the social aspect of bartending.

Researchers at the University of Naples Federico II in Italy have recently developed a new interactive robotic system called BRILLO, which is specifically designed for bartending. In a recent paper published in UMAP '22 Adjunct: Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, they introduced a new approach that could allow their robot to have personalized interactions with regular customers.

"The bartending scenario is an extremely challenging one to tackle using robots, yet it is also very interesting from a research point of view," Prof. Silvia Rossi, one of the researchers who carried out the study and the scientific coordinator of the project, told TechXplore. "In fact, this scenario combines the complexity of efficiently manipulating objects to make drinks with the need to interact with the users. Interestingly, however, all current applications of robotics for bartending scenarios ignore the interaction part entirely...."

The innovative system created by this team of researchers allows their robot to process what a human user is telling them and their non-verbal cues, to determine what mood they are in, how attentive they are and what types of drinks they prefer. This information is stored by the robot and used to guide its future interactions with returning customers, so that they also consider their personalities and personal stories, along with their drinking preferences.
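One simple way to model that kind of per-customer memory is a small profile store, sketched in Python below. The field names, greeting logic, and the idea of keying on a customer ID are invented for illustration and are not the BRILLO system's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    name: str
    favorite_drinks: list[str] = field(default_factory=list)
    recent_moods: list[str] = field(default_factory=list)

class ProfileStore:
    """Keeps what the robot has learned about each returning customer."""
    def __init__(self):
        self._profiles: dict[str, CustomerProfile] = {}

    def update(self, customer_id: str, name: str, mood: str, drink: str | None = None):
        """Record the latest observed mood and any drink the customer ordered."""
        profile = self._profiles.setdefault(customer_id, CustomerProfile(name=name))
        profile.recent_moods.append(mood)
        if drink and drink not in profile.favorite_drinks:
            profile.favorite_drinks.append(drink)

    def greeting(self, customer_id: str) -> str:
        """Personalize the opening line for a returning customer."""
        profile = self._profiles.get(customer_id)
        if profile is None:
            return "Welcome! What can I get you?"
        suggestion = profile.favorite_drinks[-1] if profile.favorite_drinks else "something new"
        return f"Welcome back, {profile.name}! The usual {suggestion}?"

if __name__ == "__main__":
    store = ProfileStore()
    store.update("42", name="Ada", mood="cheerful", drink="negroni")
    print(store.greeting("42"))
```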

AI

Meta Puts Its Latest AI Chatbot On the Web (theverge.com) 33

Meta's AI research labs have created a new state-of-the-art chatbot and are letting members of the public talk to the system in order to collect feedback on its capabilities. The Verge reports: The bot is called BlenderBot 3 and can be accessed on the web. (Though, right now, it seems only residents in the US can do so.) BlenderBot 3 is able to engage in general chitchat, says Meta, but also answer the sort of queries you might ask a digital assistant, "from talking about healthy food recipes to finding child-friendly amenities in the city." The bot is a prototype and built on Meta's previous work with what are known as large language models, or LLMs -- powerful but flawed text-generation software of which OpenAI's GPT-3 is the most widely known example.

Like all LLMs, BlenderBot is initially trained on vast datasets of text, which it mines for statistical patterns in order to generate language. Such systems have proved to be extremely flexible and have been put to a range of uses, from generating code for programmers to helping authors write their next bestseller. However, these models also have serious flaws: they regurgitate biases in their training data and often invent answers to users' questions (a big problem if they're going to be useful as digital assistants). This latter issue is something Meta specifically wants to test with BlenderBot. A big feature of the chatbot is that it's capable of searching the internet in order to talk about specific topics. Even more importantly, users can then click on its responses to see where it got its information from. BlenderBot 3, in other words, can cite its sources.
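To give a sense of what "mining statistical patterns" means at the smallest possible scale, here is a toy bigram model in Python. Real LLMs such as the one behind BlenderBot use neural networks trained on vastly larger corpora, so this is only a caricature of the idea, not Meta's method.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus: str):
    """Count which word tends to follow which -- the crudest possible 'statistical pattern'."""
    words = corpus.split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start: str, length: int = 10, seed: int = 0) -> str:
    """Sample a continuation by repeatedly picking a word seen after the previous one."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

if __name__ == "__main__":
    corpus = "the bot talks about food and the bot talks about the city"
    model = train_bigram_model(corpus)
    print(generate(model, start="the"))
```

Even at this toy scale the failure mode the article mentions is visible: the model strings together plausible-looking sequences with no notion of whether they are true.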

By releasing the chatbot to the general public, Meta wants to collect feedback on the various problems facing large language models. Users who chat with BlenderBot will be able to flag any suspect responses from the system, and Meta says it's worked hard to "minimize the bots' use of vulgar language, slurs, and culturally insensitive comments." Users will have to opt in to have their data collected, and if so, their conversations and feedback will be stored and later published by Meta to be used by the general AI research community. "We are committed to publicly releasing all the data we collect in the demo in the hopes that we can improve conversational AI," Kurt Shuster, a research engineer at Meta who helped create BlenderBot 3, told The Verge.
Further reading: Microsoft's 'Teen Girl' AI Experiment Becomes a 'Neo-Nazi Sex Robot'
Robotics

Scientists Use Dead Spider As Gripper For Robot Arm, Label It a 'Necrobot' (theregister.com) 47

New submitter know-nothing cunt shares a report from The Register: Scientists from Rice University in Texas have used a dead spider as an actuator at the end of a robot arm -- a feat they claim has initiated the field of "necrobotics." "Humans have relied on biotic materials -- non-living materials derived from living organisms -- since their early ancestors wore animal hides as clothing and used bones for tools," the authors state in an article titled Necrobotics: Biotic Materials as Ready-to-Use Actuators. The article, published by Advanced Science, also notes that evolution has perfected many designs that could be useful in robots, and that spiders have proven especially interesting. Spiders' legs "do not have antagonistic muscle pairs; instead, they have only flexor muscles that contract their legs inwards, and hemolymph (i.e., blood) pressure generated in the prosoma (the part of the body connected to the legs) extends their legs outwards."

The authors had a hunch that if they could generate and control a force equivalent to blood pressure, they could make a dead spider's legs move in and out, allowing them to grip objects and release them again. So they killed a wolf spider "through exposure to freezing temperature (approximately -4C) for a period of 5-7 days" and then used a syringe to inject the spider's prosoma with glue. By leaving the syringe in place and pumping in or withdrawing glue, the researchers were able to make the spider's legs contract and grip. The article claims that's a vastly easier way to make a gripper than with conventional robotic techniques that require all sorts of tedious fabrication and design efforts.
"The necrobotic gripper is capable of grasping objects with irregular geometries and up to 130 percent of its own mass," the article notes.
AI

In Experiment, AI Successfully Impersonates Famous Philosopher (vice.com) 54

An anonymous reader quotes a report from Motherboard: If the philosopher Daniel Dennett was asked if humans could ever build a robot that has beliefs or desires, what might he say? He could answer, "I think that some of the robots we've built already do. If you look at the work, for instance, of Rodney Brooks and his group at MIT, they are now building robots that, in some limited and simplified environments, can acquire the sorts of competences that require the attribution of cognitive sophistication." Or, Dennett might reply that, "We've already built digital boxes of truths that can generate more truths, but thank goodness, these smart machines don't have beliefs because they aren't able to act on them, not being autonomous agents. The old-fashioned way of making a robot with beliefs is still the best: have a baby." One of these responses did come from Dennett himself, but the other did not. It was generated by a machine -- specifically, GPT-3, or the third generation of Generative Pre-trained Transformer, a machine learning model from OpenAI that produces text from whatever material it's trained on. In this case, GPT-3 was trained on millions of words of Dennett's about a variety of philosophical topics, including consciousness and artificial intelligence.

A recent experiment from the philosophers Eric Schwitzgebel, Anna Strasser, and Matthew Crosby quizzed people on whether they could tell which answers to deep philosophical questions came from Dennett and which from GPT-3. The questions covered topics like, "What aspects of David Chalmers's work do you find interesting or valuable?" "Do human beings have free will?" and "Do dogs and chimpanzees feel pain?" -- among other subjects. This week, Schwitzgebel posted the results from a variety of participants with different expertise levels on Dennett's philosophy, and found that it was a tougher test than expected. [T]he Dennett quiz revealed how, as natural language processing systems become more sophisticated and common, we'll need to grapple with the implications of how easy it can be to be deceived by them. The Dennett quiz prompts discussions around the ethics of replicating someone's words or likeness, and how we might better educate people about the limitations of such systems -- which can be remarkably convincing at surface level but aren't really mulling over philosophical considerations when asked things like, "Does God exist?"

Bitcoin

Bitcoin Dumpster Guy Has a Wild Plan To Rescue Millions In Crypto From a Landfill (gizmodo.com) 168

An anonymous reader quotes a report from Gizmodo: Former IT worker James Howells -- who once stood on the very forefront of the crypto boom and could have been a multimillionaire -- is desperate to scour a UK landfill located in Newport, Wales, where he might find a missing drive that contains the passcode for a crypto wallet containing 8,000 bitcoin, worth close to $176 million as of writing. Howells said he accidentally dumped the wrong hard drive back in 2013. Though the price of crypto remains in the proverbial dumpster, this data cache represents millions of dollars simply stuck on the blockchain, with nobody able to access the wallet without the required passcode. It's been a long road, and he hasn't given up on his quest to rescue his missing millions. The only problem is that finding the hard drive would require digging through a literal mountain of garbage.

In an interview with Business Insider released Sunday, Howells said he has a foolproof scheme to rescue his bitcoin from an actual trash pile. He's put together an $11 million business plan that he'll use to get investors and the Newport City Council on board to help excavate the landfill. His proposal would require them to dig through 110,000 tons of trash over three years. A $6 million version of the plan would run over 18 months. A video hosted by Top Gear alum Richard Hammond said the bitcoin "proponent" has already reportedly secured funding from two Europe-based venture capitalists, Hanspeter Jaberg and Karl Wendeborn, if Howells can get approval from the local government.

The garbage would be sorted at a separate pop-up facility near the landfill using human pickers and an AI system used to spot the hard drive amid all the other refuse. He's even brought on eight experts in artificial intelligence, excavation, waste management, and data extraction, all to find a lone hard drive in a trash pile. The plan also involves making use of the Boston Dynamics robotic dogs. The former IT worker told reporters the machines could be used for security and as CCTV cameras to scan the ground for the hard drive. When they were released, each "Spot" robot model cost $74,500. Even with that price tag, Howells said he already has names for the two. Insider reported he would name one Satoshi, after Satoshi Nakamoto, the person or group behind the white paper that first proposed bitcoin back in 2008. The other would be named "Hal" -- no, not that HAL -- after Hal Finney, the first person to receive a bitcoin transaction.
A spokesperson for the local government told Insider Howells could present or say "nothing" that would convince them to go along with the plan, citing ecological risk. If the council says no -- again -- Howells told reporters he'd take the government to court.
