Robotics

Slashdot Asks: Which is Better, a Basic Income or a Guaranteed Job? (timharford.com) 899

Barack Obama said this month that AI research is accelerating, making it harder to find jobs for everybody, and concluded that "we're going to have to consider new ways of thinking about these problems, like a universal income."

But a Financial Times columnist adds that "an intriguing debate has broken out over how to look after disadvantaged workers both now and in this robot future. Should everyone be given free money? Or should everyone receive the guarantee of a decently-paid job?" An anonymous reader quotes some of the highlights: Psychologists have found that we like and benefit from feeling in control. That is a mark in favour of a universal basic income: being unconditional, it is likely to enhance our feelings of control. The money would be ours, by right, to do with as we wish. A job guarantee might work the other way: it makes money conditional on punching the clock. On the other hand (again!), we like to keep busy. Harvard researchers Matthew Killingsworth and Daniel Gilbert have found that "a wandering mind is an unhappy mind". And social contact is generally good for our wellbeing. Maybe guaranteed jobs would help keep us active and socially connected.

The truth is, we don't really know... It is good to see that the more thoughtful advocates of either policy -- or both policies simultaneously -- are asking for large-scale trials to learn more.

He titled the column "The secret to happiness after the robot takeover." But what say Slashdot readers?

Is it better to be given a basic income -- or a guaranteed job?
Robotics

Should Bots Be Required To Tell You That They're Not Human? (buzzfeednews.com) 92

"BuzzFeed has this story about proposals to make social media bots identify themselves as fake people," writes an anonymous Slashdot reader. "[It's] based on a paper by a law professor and a fellow researcher." From the report: General concerns about the ethical implications of misleading people with convincingly humanlike bots, as well as specific concerns about the extensive use of bots in the 2016 election, have led many to call for rules regulating the manner in which bots interact with the world. "An AI system must clearly disclose that it is not human," the president of the Allen Institute on Artificial Intelligence, hardly a Luddite, argued in the New York Times. Legislators in California and elsewhere have taken up such calls. SB-1001, a bill that comfortably passed the California Senate, would effectively require bots to disclose that they are not people in many settings. Sen. Dianne Feinstein has introduced a similar bill for consideration in the United States Senate.

In our essay, we outline several principles for regulating bot speech. Free from the formal limits of the First Amendment, online platforms such as Twitter and Facebook have more leeway to regulate automated misbehavior. These platforms may be better positioned to address bots' unique and systematic impacts. Browser extensions, platform settings, and other tools could be used to filter or minimize undesirable bot speech more effectively and without requiring government intervention that could potentially run afoul of the First Amendment. A better role for government might be to hold platforms accountable for doing too little to address legitimate societal concerns over automated speech. [A]ny regulatory effort to domesticate the problem of bots must be sensitive to free speech concerns and justified in reference to the harms bots present. Blanket calls for bot disclosure to date lack the subtlety needed to address bot speech effectively without raising the specter of censorship.

Google

Google is Building 'Virtual Agents' To Handle Call Centers' Grunt Work (qz.com) 129

Google is officially building AI technology to replace some of the work in call centers, the company announced at its Cloud Next conference today, confirming earlier reports. From a report: The software is called Contact Center AI, and Google is working with at least a dozen partners, such as Cisco and Vonage, to install "virtual agents" that will be the first to pick up the phone when a customer is routed to a call center. When the customer asks something that the AI can't handle, it will automatically forward the call to a human, according to a blog post by Google Cloud chief scientist Fei-Fei Li. Li writes that the new AI shares some underlying technology with Google Duplex, the AI service shown off earlier this year that emulates a human voice to call restaurants and make reservations. This means that with Contact Center AI, it's unlikely a customer would know they're talking to a robot unless it was disclosed at the beginning of the call.
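The hand-off logic described here -- a virtual agent answers first and forwards the call once it is out of its depth -- is easy to sketch. Below is a minimal, hypothetical Python sketch of that escalation pattern; the class, method, and threshold names are invented for illustration and are not Google's Contact Center AI API.

```python
# Hypothetical sketch of the "virtual agent first, human fallback" routing
# pattern described above. None of these names come from Google's API.

from dataclasses import dataclass

@dataclass
class BotReply:
    text: str
    confidence: float  # how sure the virtual agent is about its answer

class VirtualAgent:
    """Stand-in for a Contact Center AI-style virtual agent."""

    def answer(self, utterance: str) -> BotReply:
        # A real system would run intent detection and dialogue management;
        # a single canned intent keeps the sketch self-contained.
        if "opening hours" in utterance.lower():
            return BotReply("We're open 9am-5pm, Monday to Friday.", 0.95)
        return BotReply("", 0.1)  # out of scope -> low confidence

def route_call(utterance: str, agent: VirtualAgent, threshold: float = 0.7) -> str:
    reply = agent.answer(utterance)
    if reply.confidence >= threshold:
        return f"BOT: {reply.text}"
    # Below threshold: forward the call to a human representative.
    return "HUMAN HANDOFF: transferring you to a human agent..."

print(route_call("What are your opening hours?", VirtualAgent()))
print(route_call("I want to dispute a charge from 2016", VirtualAgent()))
```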
Robotics

Boston Dynamics Is Gearing Up To Produce Thousands of Robot Dogs (fortune.com) 83

Boston Dynamics, maker of uncannily agile robots, is poised to bring its first commercial product to market -- a small, dog-like robot called the SpotMini. From a report: The launch was announced in May, and founder Marc Raibert recently said that by July of next year, Boston Dynamics will be producing the SpotMini at the rate of around 1,000 units per year. The broader goal, as reported by Inverse, is to create a flexible platform for a variety of applications. According to Raibert, SpotMini is currently being tested for use in construction, delivery, security, and home assistance applications. The SpotMini moves with the same weirdly smooth confidence as previous experimental Boston Dynamics robots with names like Cheetah, BigDog, and Spot.
Robotics

State Senator Wants A Law Forcing Bots To Admit They're Not Human (brisbanetimes.com.au) 151

An anonymous reader writes: Several commentators are calling for a law that requires bots to admit they are not human. There is a bill in California that would do just that. A new paper argues that these laws may look constitutional but actually raise First Amendment issues.
The New York Times reports: Bots are easy to make and widely employed, and social media companies are under no legal obligation to get rid of them. A law that discourages their use could help, but experts aren't sure how the one [state senator Robert] Hertzberg is trying to push through, in California, might work. For starters, would bots be forced to identify themselves in every Facebook post? In their Instagram bios? In their Twitter handles? The measure, SB-1001, a version of which has already left the senate floor and is working its way through the state's Assembly, also doesn't mandate that tech companies enforce the regulation. And it's unclear how a bill that is specific only to California would apply to a global internet...

All parties agree that the bill illustrates the difficulty that lawmakers have in crafting legislation that effectively addresses the problems constituents confront online. As the pace of technological development has raced ahead of government, the laws that exist on the books -- not to mention some lawmakers' understandings of technology -- have remained comparatively stagnant.

The Times seems to question whether the law should be targeted at the creators of bots instead of the platforms that host them, pointing out that tech companies like Twitter "have the power to change dynamics on their platforms directly and at the scale that those problems require."
Biotech

AI Plus a Chemistry Robot Finds All the Reactions That Will Work (arstechnica.com) 39

A team of researchers at Glasgow University have built a robot that uses machine learning to run and analyze its own chemical reactions. The system is able to figure out every reaction that's possible from a given set of starting materials. Ars Technica reports: Most of its parts are dispersed through a fume hood, which ensures safe ventilation of any products that somehow escape the system. At the upper right is a collection of tanks containing starting materials and pumps that send them into one of six reaction chambers, which can be operated in parallel. The outcomes of these reactions can then be sent on for analysis. Pumps can feed samples into an IR spectrometer, a mass spectrometer, and a compact NMR machine -- the latter being the only bit of equipment that didn't fit in the fume hood. Collectively, these can create a fingerprint of the molecules that occupy a reaction chamber. By comparing this to the fingerprint of the starting materials, it's possible to determine whether a chemical reaction took place and infer some things about its products.

All of that is a substitute for a chemist's hands, but it doesn't replace the brains that evaluate potential reactions. That's where a machine-learning algorithm comes in. The system was given a set of 72 reactions with known products and used those to generate predictions of the outcomes of further reactions. From there, it started choosing reactions at random from the remaining list of options and determining whether they, too, produced products. By the time the algorithm had sampled 10 percent of the total possible reactions, it was able to predict the outcome of untested reactions with more than 80-percent accuracy. And, since the earlier reactions it tested were chosen at random, the system wasn't biased by human expectations of what reactions would or wouldn't work.
The research has been published in the journal Nature.
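As a rough illustration of the loop described above -- seed a model with a small set of labeled reactions, then repeatedly pick an untested reaction at random, run it, and retrain -- here is a hedged Python sketch using scikit-learn. The 72-reaction seed and the 10 percent stopping point follow the article; everything else (the bit-vector feature encoding, the synthetic "chemistry" that stands in for the robot, the choice of a random forest) is an assumption for illustration.

```python
# Loose sketch of the explore-and-predict loop described above, not the
# authors' code. A synthetic ground-truth rule stands in for the robot,
# and reactions are encoded as random reagent-combination bit vectors.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

n_reactions, n_features = 1000, 16
X = rng.integers(0, 2, size=(n_reactions, n_features)).astype(float)
w = rng.normal(size=n_features)
y = (X @ w > 0.5).astype(int)  # hidden rule: which combinations react

labeled = list(rng.choice(n_reactions, size=72, replace=False))  # seed set
untested = [i for i in range(n_reactions) if i not in set(labeled)]

model = RandomForestClassifier(n_estimators=100, random_state=0)
while len(labeled) < n_reactions // 10:  # stop once ~10% has been sampled
    model.fit(X[labeled], y[labeled])
    pick = untested.pop(int(rng.integers(len(untested))))  # unbiased pick
    labeled.append(pick)  # "run" the reaction and record its outcome

accuracy = (model.predict(X[untested]) == y[untested]).mean()
print(f"accuracy on untested reactions: {accuracy:.1%}")
```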
Robotics

Killer Robots Would Be 'Dangerously Destabilizing' Force in the World, Tech Leaders Warn (washingtonpost.com) 163

Thousands of artificial intelligence experts are calling on governments to take preemptive action before it's too late. The list is extensive and includes some of the most influential names in the overlapping worlds of technology, science and academia. From a report: Among them are billionaire inventor and OpenAI founder Elon Musk, Skype co-founder Jaan Tallinn, artificial intelligence researcher Stuart Russell, as well as the three founders of Google DeepMind -- the company's premier machine learning research group. In total, more than 160 organizations and 2,460 individuals from 90 countries promised this week to not participate in or support the development and use of lethal autonomous weapons. The pledge says artificial intelligence is expected to play an increasing role in military systems and calls upon governments and politicians to introduce laws regulating such weapons "to create a future with strong international norms."

"Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems," the pledge says. "Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage," the pledge adds.

AI

DeepMind, Elon Musk and Others Pledge Not To Make Autonomous AI Weapons (engadget.com) 122

An anonymous reader quotes a report from Engadget: Yesterday, during the International Joint Conference on Artificial Intelligence, the Future of Life Institute announced that more than 2,400 individuals and 160 companies and organizations have signed a pledge, declaring that they will "neither participate in nor support the development, manufacture, trade or use of lethal autonomous weapons." The signatories, representing 90 countries, also call on governments to pass laws against such weapons. Google DeepMind and the Xprize Foundation are among the groups who've signed on, while Elon Musk and DeepMind co-founders Demis Hassabis, Shane Legg and Mustafa Suleyman have made the pledge as well.

"Thousands of AI researchers agree that by removing the risk, attributability and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems," says the pledge. It adds that those who sign agree that "the decision to take a human life should never be delegated to a machine."
"I'm excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect," Future of Life Institute President Max Tegmark said in a statement. "AI has huge potential to help the world -- if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way."
Transportation

Secretive Startup Zoox Is Building a Bidirectional Autonomous Car From the Ground Up (bloomberg.com) 93

A secretive startup called Zoox (an abbreviation of zooxanthellae, the algae that helps fuel coral reef growth) is working on an autonomous vehicle that is unlike any other. Theirs is all-electric and bidirectional, meaning it can cruise into a parking spot traveling one way and cruise out the other. It can make noises to communicate with pedestrians. It even has displays on the windows for passengers to interact with. Bloomberg sheds some light on this company, reporting on their ambitions to build the safest and most inventive autonomous vehicle on the road: Zoox founders Tim Kentley-Klay and Jesse Levinson say everyone else involved in the race to build a self-driving car is doing it wrong. Both founders sound quite serious as they argue that Zoox is obvious, almost inevitable. The world will eventually move to perfectly engineered robotic vehicles, so why waste time trying to incorporate self-driving technology into yesteryear's cars? Levinson, whose father, Arthur, ran Genentech Inc., chairs Apple Inc., and mentored Steve Jobs, comes from Silicon Valley royalty. Together, they've raised an impressive pile of venture capital: about $800 million to date, including $500 million in early July at a valuation of $3.2 billion. Even with all that cash, Zoox will be lucky to make it to 2020, when it expects to put its first vehicles on the road.
Robotics

Rolls-Royce Is Developing Tiny 'Cockroach' Robots To Fix Airplane Engines (cnbc.com) 49

Rolls-Royce announced today that it is teaming up with robotics experts at Harvard University and the University of Nottingham to develop tiny "cockroach" robots that can crawl inside aircraft engines to spot and fix problems. These robots will be able to speed up inspections and eliminate the need to remove an engine from an aircraft for repair work to take place. CNBC reports: Sebastian de Rivaz, a research fellow at Harvard, said the inspiration for their design came from the cockroach and that the robotic bugs had been in development for eight years. He added that the next step was to mount cameras on the robots and scale them down to a 15-millimeter size. De Rivaz said that once the robots had performed their duty they could be programmed to leave the engine or could simply be "flushed out" by the engine itself.

Also under development are "snake" robots that are flexible enough to travel through an engine like an endoscope. These would enter through a combustion chamber and would inspect damage and remove any debris. The second "snake" would deposit a patch repair that would sit temporarily until the engine was ready for full repair. No schedule is placed on when the crawling robots will be available.
You can view animations of each robot type here.
Robotics

'A Lot of Hoped-for Automation Was Counterproductive', Remembers Elon Musk (bloomberg.com) 208

Thursday Elon Musk gave a surprisingly candid interview about Tesla's massive push to increase production of Model 3 sedans to 5,000 a week. An anonymous reader quotes Musk's remarks to Bloomberg: I spent almost the entire time in the factory the final week, and yeah, it was essentially three months with a tiny break of like one day that I wasn't there. I was wearing the same clothes for five days. Yeah, it was really intense. And everybody else was really intense, too... I think we had to prove that we could make 5,000 cars in a week -- 5,000 Model 3s and at the same time make 2,000 S and X's, so essentially show that we could make 7,000 cars. We had to prove ourselves. The number of people who thought we would actually make it is very tiny, like vanishingly small. There was suddenly the credibility of the company, my credibility, you know, the credibility of the whole team. It was like, "Can you actually do this or not?"

There were a lot of issues that we had to address in order to do it. You know, we had to create the new general assembly line in basically less than a month -- to create it and get to an excess of a 1,000-cars-a-week rate in like four weeks... A lot of the hoped-for automation was counterproductive. It's not like we knew it would be bad, because why would we buy a ticket to hell...? A whole bunch of the robots are turned off, and it was reverted to a manual station because the robots kept faulting out. When the robot faults out -- like the vision system can't figure out how to put the object in -- then you've got to reset the system. You've got to manually seat the components. It stops the whole production line while you sort out why the robot faults out.

When the interviewer asks why that happens, Musk replies, "Because we were huge idiots and didn't know what we were doing. That's why."
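The failure mode Musk describes -- a robot faults out, the line stops, and a person has to re-seat the component and reset the system -- is essentially an automated pipeline with no graceful fallback. A hypothetical sketch of a station controller that retries and then degrades to a manual station, rather than halting the whole line, might look like the following; every name here is invented for illustration, not Tesla's software.

```python
# Hypothetical sketch of a production-line station that degrades to a
# manual fallback instead of halting the line. All names are invented.

class RobotFault(Exception):
    """Raised when e.g. the vision system cannot locate a component."""

def robot_seat_component(part_id: str) -> None:
    # Stand-in for the automated step; here it always faults, mimicking
    # the vision-system failures described in the interview.
    raise RobotFault(f"vision system could not align part {part_id}")

def manual_seat_component(part_id: str) -> None:
    print(f"operator manually seats part {part_id}")

def station_step(part_id: str, max_retries: int = 2) -> None:
    for attempt in range(1, max_retries + 1):
        try:
            robot_seat_component(part_id)
            return  # automated step succeeded
        except RobotFault as err:
            print(f"attempt {attempt} failed: {err}")
    # Instead of stopping the whole production line, fall back to manual.
    manual_seat_component(part_id)

station_step("door-trim-042")
```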
Robotics

Autonomous Robots Could be the Future of High Flying Stunts in Hollywood (cnet.com) 41

From a report: Visitors to Disneyland and other Disney resorts could end up seeing robots tackling some pretty crazy, death-defying stunts usually reserved for Marvel superheroes and Star Wars Jedi Masters. Disney's latest Stuntronics experiments with robots include teaching them to crawl, row and now, more impressively, perform daring aerial acrobatics. A new video features the robots propelled into the sky to spin and leap like robotic superheroes. And they look even more advanced and human-like than the last time we saw them. The robots, initially nicknamed Stickman, work by using on-board accelerometers, gyroscopes and laser range-finding data to determine how to perform impressive stunts like single and double backflips.
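The sensing loop described above -- integrate gyroscope readings to track body rotation while laser range-finding watches the ground -- can be caricatured in a few lines. The toy simulation below shows the general timing idea (hold the tuck until the flip is nearly complete and the ground is close, then extend for landing); it is not Disney's Stuntronics code, and all physics constants and thresholds are invented.

```python
# Toy simulation of the stunt-timing idea described above: integrate a
# gyro rate to track rotation, watch a range-finder altitude, and extend
# out of the tuck just before landing. All numbers are invented.

DT = 0.005      # control-loop period, seconds
GRAVITY = 9.81  # m/s^2

def simulate_backflip(spin_rate_deg: float, launch_speed: float) -> None:
    angle, height, velocity, t = 0.0, 0.0, launch_speed, 0.0
    tucked = True
    while height >= 0.0:
        angle += spin_rate_deg * DT   # "gyroscope" integration
        velocity -= GRAVITY * DT      # ballistic flight
        height += velocity * DT       # "laser range-finder" reading
        t += DT
        # Extend once the flip is nearly complete and the ground is close.
        if tucked and angle >= 340.0 and height < 1.0:
            tucked = False
            print(f"t={t:.3f}s: extend for landing at {angle:.0f} degrees")
    print(f"landed at t={t:.3f}s, total rotation {angle:.0f} degrees")

simulate_backflip(spin_rate_deg=300.0, launch_speed=6.0)
```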
Robotics

Surgical Robots Cut Training Time Down From 80 Sessions To 30 Minutes (theguardian.com) 113

From a report: It is the most exacting of surgical skills: tying a knot deep inside a patient's abdomen, pivoting long graspers through keyhole incisions with no direct view of the thread. Trainee surgeons typically require 60 to 80 hours of practice, but in a mock-up operating theatre outside Cambridge, a non-medic with just a few hours of experience is expertly wielding a hook-shaped needle -- in this case stitching a square of pink sponge rather than an artery or appendix.

The feat is performed with the assistance of Versius, the world's smallest surgical robot, which could be used in NHS operating theatres for the first time later this year if approved for clinical use. Versius is one of a handful of advanced surgical robots that are predicted to transform the way operations are performed by allowing tens or hundreds of thousands more surgeries each year to be carried out as keyhole procedures. The Versius robot cuts down the time required to learn to tie a surgical knot from more than 100 training sessions, when using traditional manual tools, to just half an hour, according to Mark Slack, chief medical officer of CMR Surgical, the Cambridge company behind Versius.

AI

Some Startups Have Worked Out It's Cheaper and Easier To Get Humans To Behave Like Robots Than it is To Get Machines To Behave Like Humans (theguardian.com) 112

"Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn't scale, obviously, but it allows you to build something and skip the hard part early on," said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of "pseudo-AIs." It's essentially prototyping the AI with human beings, he said. From a report: This practice was brought to the fore this week in a Wall Street Journal article highlighting the hundreds of third-party app developers that Google allows to access people's inboxes. In the case of the San Jose-based company Edison Software, artificial intelligence engineers went through the personal email messages of hundreds of users -- with their identities redacted -- to improve a "smart replies" feature. The company did not mention that humans would view users' emails in its privacy policy. The third parties highlighted in the WSJ article are far from the first ones to do it. In 2008, Spinvox, a company that converted voicemails into text messages, was accused of using humans in overseas call centres rather than machines to do its work. In 2016, Bloomberg highlighted the plight of the humans spending 12 hours a day pretending to be chatbots for calendar scheduling services such as X.ai and Clara. The job was so mind-numbing that human employees said they were looking forward to being replaced by bots.
Earth

Are the Wealthy Plotting To Leave Us Behind? (medium.com) 412

"The wealthy are plotting to leave us behind," writes Douglas Rushkoff, describing what he learned from a high-paying speaking gig about the future of technology for "five super-wealthy guys...from the upper echelon of the hedge fund world," -- and what it says about perceptions of technology today. The Event. That was their euphemism for the environmental collapse, social unrest, nuclear explosion, unstoppable virus, or Mr. Robot hack that takes everything down. This single question occupied us for the rest of the hour. They knew armed guards would be required to protect their compounds from the angry mobs. But how would they pay the guards once money was worthless? What would stop the guards from choosing their own leader...?

That's when it hit me: At least as far as these gentlemen were concerned, this was a talk about the future of technology. Taking their cue from Elon Musk colonizing Mars, Peter Thiel reversing the aging process, or Sam Altman and Ray Kurzweil uploading their minds into supercomputers, they were preparing for a digital future that had a whole lot less to do with making the world a better place than it did with transcending the human condition altogether and insulating themselves from a very real and present danger of climate change, rising sea levels, mass migrations, global pandemics, nativist panic, and resource depletion. For them, the future of technology is really about just one thing: escape.

There's nothing wrong with madly optimistic appraisals of how technology might benefit human society. But the current drive for a post-human utopia is something else. It's less a vision for the wholesale migration of humanity to a new state of being than a quest to transcend all that is human: the body, interdependence, compassion, vulnerability, and complexity.... It's a reduction of human evolution to a video game that someone wins by finding the escape hatch and then letting a few of his BFFs come along for the ride... The future became less a thing we create through our present-day choices or hopes for humankind than a predestined scenario we bet on with our venture capital but arrive at passively. This freed everyone from the moral implications of their activities... Ultimately, according to the technosolutionist orthodoxy, the human future climaxes by uploading our consciousness to a computer or, perhaps better, accepting that technology itself is our evolutionary successor.

The piece -- titled "Survival of the Richest" -- is an interesting read, and ends by suggesting this inspiring counter-philosophy.

"Being human is not about individual survival or escape. It's a team sport."
Google

Google's Controversial Voice Assistant Could Talk Its Way Into Call Centers (theinformation.com) 74

More details have emerged about where Google intends -- or at least intended until a few weeks ago -- to take its controversial AI Duplex, which it first demonstrated to the public at its developer conference in May. The AI system is capable of making calls to local businesses to place reservations on behalf of Google Assistant users. And it does so in a voice that most people can't distinguish from that of a normal human being. This resulted in a public outcry at the implication of people in the future not knowing whether they were talking to humans or machines, which led Google to adapt the bot's introduction so it clearly explains it's not a human. The Information reports: Some big companies are in the very early stages of testing Google's technology for use in other applications, such as call centers, where it might be able to replace some of the work currently done by humans [Editor's note: the link may be paywalled; alternative source], according to a person familiar with the plans. The market for cloud-based customer call centers is expected to hit more than $20.9 billion by 2022, up from around $6.8 billion last year, according to research firm MarketsandMarkets. [...] At least one potential customer, a large insurance company, is looking at ways it can use the technology, according to the person with knowledge of the project, including for call centers where the voice assistant could handle simple and repetitive customer calls while humans step in when the conversations get more complicated. But the ethical concerns that overshadowed the original presentation have slowed work on the project, this person said.
Robotics

Economists Worry We Aren't Prepared For the Fallout From Automation (theverge.com) 365

A new paper from the Center for Global Development says we are spending too much time discussing whether robots can take your job and not enough time discussing what happens next. The Verge reports: The paper's authors, Lukas Schlogl and Andy Sumner, say it's impossible to know exactly how many jobs will be destroyed or disrupted by new technology. But, they add, it's fairly certain there are going to be significant effects -- especially in developing economies, where the labor market is skewed toward work that requires the sort of routine, manual labor that's so susceptible to automation. Think unskilled jobs in factories or agriculture.

One class of solution they call "quasi-Luddite" -- measures that try to stall or reverse the trend of automation. These include taxes on goods made with robots (or taxes on the robots themselves) and regulations that make it difficult to automate existing jobs. They suggest that these measures are challenging to implement in "an open economy," because if automation makes for cheaper goods or services, then customers will naturally look for them elsewhere; i.e. outside the area covered by such regulations. [...] The other class of solution they call "coping strategies," which tend to focus on one of two things: re-skilling workers whose jobs are threatened by automation or providing economic safety nets to those affected (for example, a universal basic income or UBI).
They conclude that there's simply not enough work being done researching the political and economic solutions to what could be a growing global crisis. "Questions like profitability, labor regulations, unionization, and corporate-social expectations will be at least as important as technical constraints in determining which jobs get automated," they write.
Cloud

'Why You Should Not Use Google Cloud' (medium.com) 508

A user on Medium named "Punch a Server" says you should not use Google Cloud due to the "no-warnings-given, abrupt way" they pull the plug on your entire system if they (or the machines) believe something is wrong. The user has a project running in production on Google Cloud (GCP) that is used to monitor hundreds of wind turbines and scores of solar plants scattered across 8 countries. When their project goes down, money is lost. An anonymous Slashdot reader shares the report: Early today morning (June 28, 2018) I receive an alert from Uptime Robot telling me my entire site is down. I receive a barrage of emails from Google saying there is some "potential suspicious activity" and all my systems have been turned off. EVERYTHING IS OFF. THE MACHINE HAS PULLED THE PLUG WITH NO WARNING. The site is down, app engine, databases are unreachable, multiple Firebases say I've been downgraded and therefore exceeded limits.

Customer service chat is off. There's no phone to call. I have an email asking me to fill in a form and upload a picture of the credit card and a government issued photo id of the card holder. Great, let's wake up the CFO who happens to be the card holder. What if the card holder is on leave and is unreachable for three days? We would have lost everything -- years of work -- millions of dollars in lost revenue. I fill in the form with the details and thankfully within 20 minutes all the services started coming alive. The first time this happened, we were down for a few hours. In all we lost everything for about an hour. An automated email arrives apologizing for "inconvenience" caused. Unfortunately The Machine has no understanding of the "quantum of inconvenience" caused.

AI

SpaceX Will Send an AI Robot To Join Astronauts On ISS (seattletimes.com) 64

An anonymous reader quotes a report from the Seattle Times: A robot with true artificial intelligence is about to invade space. The large, round, plastic robot head is part of SpaceX's latest supply delivery to the International Space Station. Friday's pre-dawn liftoff also includes two sets of genetically identical female mice, 20 mousestronauts that will pick up where NASA's identical twin brother astronauts left off a few years ago. Super-caffeinated coffee is also flying up for the space station's java-craving crew.

As intriguing as identical space siblings and turbo-charged space coffee may be, it's the German robot -- named Cimon, pronounced Simon, after a genius doctor in science fiction's "Captain Future" -- that's stealing the show. Like HAL, the autonomous Cimon is an acronym: it stands for Crew Interactive Mobile Companion. Its AI brain is courtesy of IBM. German astronaut Alexander Gerst, who arrived at the orbiting lab a month ago, will introduce Cimon to space life during three one-hour sessions. Already savvy about Gerst's science experiments, the self-propelling Cimon will float at the astronaut's side and help, when asked, with research procedures. To get Cimon's attention, Gerst will need only to call its name. Their common language will be English, the official language of the space station.
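The interaction model described -- say the robot's name to get its attention, then ask it to walk through a procedure -- is the standard wake-word pattern. Below is a generic, hypothetical Python sketch of that loop; it is not the actual Cimon software, and the procedure data is invented for illustration.

```python
# Generic, hypothetical wake-word loop illustrating the interaction model
# described above. Not the actual Cimon software; all data is invented.

WAKE_WORD = "cimon"

PROCEDURES = {
    "crystal experiment": [
        "Step 1: retrieve the sample tray.",
        "Step 2: mount the tray in the glovebox.",
    ],
}

def handle(utterance: str) -> str:
    for name, steps in PROCEDURES.items():
        if name in utterance:
            return " ".join(steps)
    return "Sorry, I don't know that procedure."

def assistant_loop(transcript: list[str]) -> None:
    awake = False
    for utterance in transcript:
        text = utterance.lower()
        if WAKE_WORD in text:   # the robot answers only to its name
            awake = True
            print("Cimon: How can I help?")
        elif awake:
            print("Cimon:", handle(text))
            awake = False       # return to listening for the wake word

assistant_loop(["Hey CIMON!", "Walk me through the crystal experiment."])
```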

Security

What's Up With ProtonMail Outages? (bleepingcomputer.com) 88

ProtonMail, a secure email provider used by more than two million people and referenced in shows like Mr. Robot, has been facing outages for the last two days as it fights numerous DDoS attacks. "The attacks went on for several hours, although the outages were far more brief, usually several minutes at a time with the longest outage on the order of 10 minutes," a ProtonMail spokesperson told BleepingComputer, adding that it has tracked the attack to a group that claims to have ties to Russia. But things are more complicated than that, and it appears ProtonMail users, who are already annoyed at the frequent outages over the last few days, may be in for more such downtime in the coming days. BleepingComputer: But in reality, the DDoS attacks have no ties to Russia, weren't even planned in the first place, and the group behind them has denied being Russian. Responsible for the attacks is a hacker group named Apophis Squad. In a private conversation with Bleeping Computer today, one of the group's members detailed yesterday's chain of events. The Apophis member says they targeted ProtonMail at random while testing a beta version of a DDoS booter service the group is developing and preparing to launch.

The group didn't cite any reason beyond "testing" for the initial, unprovoked attack on ProtonMail, which they later revealed to have been a 200 Gbps SSDP flood, according to one of their tweets. "After we sent the first attack, we downed it for 60 seconds," an Apophis Squad member told us. He said the group didn't intend to harass ProtonMail all day yesterday or today but decided to do so after ProtonMail's CTO, Bart Butler, responded to one of their tweets calling the group "clowns."

This was a questionable response on the part of the ProtonMail CTO, as it set the hackers against his company even more. "So we then downed them for a few hours," the Apophis Squad said. Subsequent attacks included a whopping TCP-SYN flood estimated at 500 Gbps, as claimed by the group.
