IT

HPE Announces World's Largest ARM-based Supercomputer (zdnet.com) 41

The race to exascale speed is getting a little more interesting with the introduction of HPE's Astra -- what will be the world's largest ARM-based supercomputer. From a report: HPE is building Astra for Sandia National Laboratories and the US Department of Energy's National Nuclear Security Administration (NNSA). The NNSA will use the supercomputer to run advanced modeling and simulation workloads for things like national security, energy, science and health care.

HPE is involved in building other ARM-based supercomputing installations, but when Astra is delivered later this year, "it will hands down be the world's largest ARM-based supercomputer ever built," Mike Vildibill, VP of Advanced Technologies Group at HPE, told ZDNet. The HPC system comprises 5,184 ARM-based processors -- the ThunderX2 processor, built by Cavium. Each processor has 28 cores and runs at 2 GHz. Astra will deliver over 2.3 theoretical peak petaflops of performance, which should put it well within the top 100 supercomputers ever built -- a milestone for an ARM-based machine, Vildibill said.
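The quoted peak figure follows directly from the processor counts. A back-of-envelope check (the FLOPs-per-cycle figure is an assumption about the ThunderX2's vector units, not something stated in the article):

```python
# Theoretical peak for Astra, from the figures quoted above.
processors = 5184
cores_per_processor = 28
clock_hz = 2.0e9
flops_per_cycle = 8  # assumed: dual 128-bit FMA pipes, double precision

peak_flops = processors * cores_per_processor * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e15:.2f} PFLOPS")  # ~2.32, matching "over 2.3 petaflops"
```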

Cloud

Nvidia Debuts Cloud Server Platform To Unify AI and High-Performance Computing (siliconangle.com) 15

Hoping to maintain the high ground in AI and high-performance computing, Nvidia late Tuesday debuted a new computing architecture that it claims will unify both fast-growing areas of the industry. From a report: The announcement of the HGX-2 cloud-server platform, made by Nvidia Chief Executive Jensen Huang at its GPU Technology Conference in Taipei, Taiwan, is aimed at many new applications that combine AI and HPC. "We believe the future requires a unified platform for AI and high-performance computing," Paresh Kharya, product marketing manager for Nvidia's accelerated-computing group, said during a press call Tuesday.

Others agree. "I think that AI will revolutionize HPC," Karl Freund, a senior analyst at Moor Insights & Strategy, told SiliconANGLE. "I suspect many supercomputing centers will deploy HGX2 as it can add dramatic computational capacity for both HPC and AI." More specifically, the new architecture enables applications involving scientific computing and simulations, such as weather forecasting, as well as both training and running of AI models such as deep learning neural networks, for jobs such as image and speech recognition and navigation for self-driving cars.

Network

On This Day 25 Years Ago, the Web Became Public Domain (popularmechanics.com) 87

On April 30, 1993, CERN -- the European Organization for Nuclear Research -- announced that it was putting a piece of software developed by one of its researchers, Tim Berners-Lee, into the public domain. That software was a "global computer networked information system" called the World Wide Web, and CERN's decision meant that anyone, anywhere, could run a website and do anything with it. From a report: While the proto-internet dates back to the 1960s, the World Wide Web as we know it had been invented four years earlier, in 1989, by CERN employee Tim Berners-Lee. The internet at that point was growing in popularity among academic circles but still had limited mainstream utility. Scientists Robert Kahn and Vinton Cerf had developed Transmission Control Protocol and Internet Protocol (TCP/IP), which allowed for easier transfer of information. But there was the fundamental problem of how to organize all that information.

In the late 80s, Berners-Lee suggested a web-like system of management, tied together by a series of what he called hyperlinks. In a proposal, Berners-Lee asked CERN management to "imagine, then, the references in this document all being associated with the network address of the thing to which they referred, so that while reading this document you could skip to them with a click of the mouse."

Four years later, the project was still growing. In January 1993, the first major web browser, known as MOSAIC, was released by the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign. While there was a free version of MOSAIC, for-profit software companies purchased nonexclusive licenses to sell and support it. Licensing MOSAIC at the time cost $100,000 plus $5 each for any number of copies.

The Internet

Mosaic, the First HTML Browser That Could Display Images Alongside Text, Turns 25 (wired.com) 132

NCSA Mosaic 1.0, the first web browser to achieve popularity among the general public, was released on April 22, 1993. It was developed by a team of students at the University of Illinois' National Center for Supercomputing Applications (NCSA), and had the ability to display text and images inline, meaning you could put pictures and text on the same page together, in the same window. Wired reports: It was a radical step forward for the web, which was at that point a rather dull experience. It took the boring "document" layout of your standard web page and transformed it into something much more visually exciting, like a magazine. And, wow, it was easy. If you wanted to go somewhere, you just clicked. Links were blue and underlined, easy to pick out. You could follow your own virtual trail of breadcrumbs backwards by clicking the big button up there in the corner. At the time of its release, NCSA Mosaic was free software, but it was available only on Unix. That made it common at universities and institutions, but not on Windows desktops in people's homes.

The NCSA team put out Windows and Mac versions in late 1993. They were also released under a noncommercial software license, meaning people at home could download it for free. The installer was very simple, making it easy for just about anyone to get up and running on the web. It was then that the excitement really began to spread. Mosaic made the web come to life with color and images, something that, for many people, finally provided the online experience they were missing. It made the web a pleasure to use.

Networking

There's A Cluster of 750 Raspberry Pi's at Los Alamos National Lab (insidehpc.com) 128

Slashdot reader overheardinpdx shares a video from the SC17 supercomputing conference where Bruce Tulloch from BitScope "describes a low-cost Raspberry Pi cluster that Los Alamos National Lab is using to simulate large-scale supercomputers." Slashdot reader mspohr describes them as "five rack-mount Bitscope Cluster Modules, each with 150 Raspberry Pi boards with integrated network switches." With each of the 750 chips packing four cores, it offers a 3,000-core highly parallelizable platform that emulates an ARM-based supercomputer, allowing researchers to test development code without requiring a power-hungry machine at significant cost to the taxpayer. The full 750-node cluster, running 2-3 W per processor, runs at 1000W idle, 3000W typical and 4000W at peak (with the switches) and is substantially cheaper, if also computationally a lot slower. After development using the Pi clusters, frameworks can then be ported to the larger-scale supercomputers available at Los Alamos National Lab, such as Trinity and Crossroads.
BitScope's Tulloch points out that the cluster is fully integrated with the network switching infrastructure at Los Alamos National Lab, and applauds the Raspberry Pi cluster as an "affordable, scalable, highly parallel testbed for high-performance-computing system-software developers."
China

All 500 of the World's Top 500 Supercomputers Are Running Linux (zdnet.com) 288

Freshly Exhumed shares a report from ZDnet: Linux rules supercomputing. This day has been coming since 1998, when Linux first appeared on the TOP500 Supercomputer list. Today, it finally happened: All 500 of the world's fastest supercomputers are running Linux. The last two non-Linux systems, a pair of Chinese IBM POWER computers running AIX, dropped off the November 2017 TOP500 Supercomputer list. When the first TOP500 supercomputer list was compiled in June 1993, Linux was barely more than a toy. It hadn't even adopted Tux as its mascot yet. It didn't take long for Linux to start its march on supercomputing.

From when it first appeared on the TOP500 in 1998, Linux was on its way to the top. Before Linux took the lead, Unix was supercomputing's top operating system. From 2003 onward, the TOP500 trended toward Linux domination, and by 2004 Linux had taken the lead for good. This happened for two reasons: First, since most of the world's top supercomputers are research machines built for specialized tasks, each machine is a standalone project with unique characteristics and optimization requirements. To save costs, no one wants to develop a custom operating system for each of these systems. With Linux, however, research teams can easily modify and optimize Linux's open-source code for their one-off designs.
The semiannual TOP500 Supercomputer List was released yesterday. It also shows that China now claims 202 systems within the TOP500, while the United States claims 143 systems.
China

China Overtakes US In Latest Top 500 Supercomputer List (enterprisecloudnews.com) 110

An anonymous reader quotes a report from Enterprise Cloud News: The release of the semiannual Top 500 Supercomputer List is a chance to gauge the who's who of countries that are pushing the boundaries of high-performance computing. The most recent list, released Monday, shows that China is now in a class by itself. China now claims 202 systems within the Top 500, while the United States -- once the dominant player -- tumbles to second place with 143 systems represented on the list. Only a few months ago, the U.S. had 169 systems within the Top 500 compared to China's 160. The growth of China and the decline of the United States within the Top 500 has prompted the U.S. Department of Energy to dole out $258 million in grants to several tech companies to develop exascale systems, the next great leap in HPC. These systems can handle a billion billion calculations a second, or 1 exaflop. However, even as these physical machines grow more and more powerful, a good portion of supercomputing power is moving to the cloud, where it can be accessed by more researchers and scientists, making the technology more democratic.
China

China Arms Upgraded Tianhe-2A Hybrid Supercomputer (nextplatform.com) 23

New submitter kipperstem77 shares an excerpt from a report via The Next Platform: The National University of Defense Technology (NUDT) is, according to James Lin, vice director for the Center of High Performance Computing (HPC) at Shanghai Jiao Tong University, who divulged the plans last year, building one of the three pre-exascale machines [that China is currently investing in], in this case a kicker to the Tianhe-1A CPU-GPU hybrid that was deployed in 2010 and that put China on the HPC map. This pre-exascale system will be installed at the National Supercomputer Center in Tianjin, not the one in Guangzhou, according to Lin. This machine is expected to use ARM processors, and we think it will very likely use Matrix2000 DSP accelerators, too, but this has not been confirmed. The second pre-exascale machine will be an upgrade to the TaihuLight system using a future Shenwei processor, but it will be installed at the National Supercomputing Center in Jinan. And the third pre-exascale machine being funded by China is being architected in conjunction with AMD, with licensed server processor technology, and everyone now thinks it is going to be based on Epyc processors, possibly with Radeon Instinct GPU coprocessors. The Next Platform has a slide embedded in its report "showing the comparison between Tianhe-2, which was the fastest supercomputer in the world for two years, and Tianhe-2A, which will be vying for the top spot when the next list comes out." Every part of this system shows improvements.
AMD

Six Companies Awarded $258 Million From US Government To Build Exascale Supercomputers (digitaltrends.com) 40

The U.S. Department of Energy will be investing $258 million to help six leading technology firms -- AMD, Cray Inc., Hewlett Packard Enterprise, IBM, Intel, and Nvidia -- research and build exascale supercomputers. Digital Trends reports: The funding will be allocated to them over the course of a three-year period, with each company providing 40 percent of the overall project cost, contributing to an overall investment of $430 million in the project. "Continued U.S. leadership in high performance computing is essential to our security, prosperity, and economic competitiveness as a nation," U.S. Secretary of Energy Rick Perry said. "These awards will enable leading U.S. technology firms to marshal their formidable skills, expertise, and resources in the global race for the next stage in supercomputing -- exascale-capable systems." The funding will finance research and development in three key areas: hardware technology, software technology, and application development. There are hopes that one of the companies involved in the initiative will be able to deliver an exascale-capable supercomputer by 2021.
The Internet

NYU Accidentally Exposed Military Code-breaking Computer Project To Entire Internet (theintercept.com) 75

An anonymous reader writes: A confidential computer project designed to break military codes was accidentally made public by New York University engineers. An anonymous digital security researcher identified files related to the project while hunting for things on the internet that shouldn't be, The Intercept reported. He used a program called Shodan, a search engine for internet-connected devices, to locate the project. It is the product of a joint initiative by NYU's Institute for Mathematics and Advanced Supercomputing, headed by the world-renowned Chudnovsky brothers, David and Gregory, the Department of Defense, and IBM. Information on an exposed backup drive described the supercomputer, called WindsorGreen, as a system capable of cracking passwords.
Government

US Federal Budget Proposal Cuts Science Funding (washingtonpost.com) 649

hey! writes: The U.S. Office of Management and Budget has released a budget "blueprint" which outlines substantial cuts in both basic research and applied technology funding. The proposal includes a whopping 18% reduction in National Institutes of Health medical research. NIH does get a new $500 million fund to track emerging infectious agents like Zika in the U.S., but loses its funding to monitor those agents overseas. The Department of Energy's research programs also get an 18% cut in research, potentially affecting basic physics research, high energy physics, fusion research, and supercomputing. Advanced Research Projects Agency (ARPA-E) gets the ax, as does the Advanced Technology Vehicle Manufacturing Program, which enabled Tesla to manufacture its Model S sedan. EPA loses all climate research funding, and about half the research funding targeted at human health impacts of pollution. The Energy Star program is eliminated; Superfund funding is drastically reduced. The Chesapeake Bay and Great Lakes cleanup programs are also eliminated, as is all screening of pesticides for endocrine disruption. In the Department of Commerce, Sea Grant is eliminated, along with all coastal zone research funding. Existing weather satellites GOES and JPSS continue funding, but JPSS-3 and -4 appear to be getting the ax. Support for transfer of federally funded research and technology to small and mid-sized manufacturers is eliminated. NASA gets a slight trim, and a new focus on deep space exploration paid for by an elimination of Earth Science programs. You can read more about this "blueprint" in Nature, Science, and the Washington Post, which broke the story. The Environmental Protection Agency, the State Department and Agriculture Department took the hardest hits, while the Defense Department, Department of Homeland Security, and Department of Veterans Affairs have seen their budgets grow.
China

NSA, DOE Say China's Supercomputing Advances Put US At Risk (computerworld.com) 130

dcblogs quotes a report from Computerworld: Advanced computing experts at the National Security Agency and the Department of Energy are warning that China is "extremely likely" to take leadership in supercomputing as early as 2020, unless the U.S. acts quickly to increase spending. China's supercomputing advances are not only putting national security at risk, but also U.S. leadership in high-tech manufacturing. If China succeeds, it may "undermine profitable parts of the U.S. economy," according to a report titled U.S. Leadership in High Performance Computing by HPC technical experts at the NSA, the DOE, the National Science Foundation and other agencies. The report stems from a workshop held in September that was attended by 60 people, many of them scientists; 40 work in government, with the balance representing industry and academia. "Meeting participants, especially those from industry, noted that it can be easy for Americans to draw the wrong conclusions about what HPC investments by China mean -- without considering China's motivations," the report states. "These participants stressed that their personal interactions with Chinese researchers and at supercomputing centers showed a mindset where computing is first and foremost a strategic capability for improving the country; for pulling a billion people out of poverty; for supporting companies that are looking to build better products, or bridges, or rail networks; for transitioning away from a role as a low-cost manufacturer for the world; for enabling the economy to move from 'Made in China' to 'Made by China.'"
Supercomputing

D-Wave Open Sources Its Quantum Computing Tool (gcn.com) 45

Long-time Slashdot reader haruchai writes: Canadian company D-Wave has released their qbsolv tool on GitHub to help bolster interest and familiarity with quantum computing. "qbsolv is a metaheuristic or partitioning solver that solves a potentially large QUBO problem by splitting it into pieces that are solved either on a D-Wave system or via a classical tabu solver," they write on GitHub.

This joins the QMASM macro assembler for D-Wave systems, a tool written in Python by Scott Pakin of Los Alamos National Labs. D-Wave president Bo Ewald says "D-Wave is driving the hardware forward but we need more smart people thinking about applications, and another set thinking about software tools."
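Concretely, a QUBO (quadratic unconstrained binary optimization) problem asks for the binary vector x that minimizes x^T Q x for a given matrix Q. The sketch below is a toy brute-force illustration of that problem form -- not qbsolv's tabu/partitioning algorithm, and the instance is hypothetical:

```python
from itertools import product

def solve_qubo(Q, n):
    """Brute-force minimize x^T Q x over binary vectors x. Illustration only:
    qbsolv exists precisely because this enumeration blows up exponentially."""
    best_x, best_e = None, float("inf")
    for x in product((0, 1), repeat=n):
        e = sum(Q.get((i, j), 0) * x[i] * x[j]
                for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy 3-variable instance: diagonal entries are linear terms,
# off-diagonal entries are couplings between variables.
Q = {(0, 0): -1, (1, 1): -1, (2, 2): -1, (0, 1): 2}
print(solve_qubo(Q, 3))  # -> ((0, 1, 1), -2): the coupling penalizes x0 and x1 both on
```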

AI

IBM's Watson Used In Life-Saving Medical Diagnosis (businessinsider.co.id) 83

"Supercomputing has another use," writes Slashdot reader rmdingler, sharing a story that quotes David Kenny, the General Manager of IBM Watson: "There's a 60-year-old woman in Tokyo. She was at the University of Tokyo. She had been diagnosed with leukemia six years ago. She was living, but not healthy. So the University of Tokyo ran her genomic sequence through Watson and it was able to ascertain that they were off by one thing. Actually, she had two strains of leukemia. They did treat her and she is healthy."

"That's one example. Statistically, we're seeing that about one third of the time, Watson is proposing an additional diagnosis."

Japan

Japan Eyes World's Fastest-Known Supercomputer, To Spend Over $150M On It (reuters.com) 35

Japan plans to build the world's fastest-known supercomputer in a bid to arm the country's manufacturers with a platform for research that could help them develop and improve driverless cars, robotics and medical diagnostics. From a Reuters report: The Ministry of Economy, Trade and Industry will spend 19.5 billion yen ($173 million) on the previously unreported project, a budget breakdown shows, as part of a government policy to get back Japan's mojo in the world of technology. The country has lost its edge in many electronic fields amid intensifying competition from South Korea and China, home to the world's current best-performing machine. In a move that is expected to vault Japan to the top of the supercomputing heap, its engineers will be tasked with building a machine that can make 130 quadrillion calculations per second -- or 130 petaflops in scientific parlance -- as early as next year, sources involved in the project told Reuters. At that speed, Japan's computer would be ahead of China's Sunway Taihulight that is capable of 93 petaflops. "As far as we know, there is nothing out there that is as fast," said Satoshi Sekiguchi, a director general at Japan's National Institute of Advanced Industrial Science and Technology, where the computer will be built.
United States

US Sets Plan To Build Two Exascale Supercomputers (computerworld.com) 59

dcblogs quotes a report from Computerworld: The U.S. believes it will be ready to seek vendor proposals to build two exascale supercomputers -- costing roughly $200 to $300 million each -- by 2019. The two systems will be built at the same time and be ready for use by 2023, although it's possible one of the systems could be ready a year earlier, according to U.S. Department of Energy officials. The U.S. will award the exascale contracts to vendors with two different architectures. But the scientists and vendors developing exascale systems do not yet know whether President-Elect Donald Trump's administration will change directions. The incoming administration is a wild card. Supercomputing wasn't a topic during the campaign, and Trump's dismissal of climate change as a hoax, in particular, has researchers nervous that science funding may suffer. At the annual supercomputing conference SC16 last week in Salt Lake City, a panel of government scientists outlined the exascale strategy developed by President Barack Obama's administration. When the session was opened to questions, the first two were about Trump. One attendee quipped that "pointed-head geeks are not going to be well appreciated."
Supercomputing

A British Supercomputer Can Predict Winter Weather a Year In Advance (thestack.com) 177

The national weather service of the U.K. claims it can now predict the weather up to a year in advance. An anonymous reader quotes The Stack: The development has been made possible thanks to supercomputer technology granted by the UK Government in 2014. The £97 million high-performance computing facility has allowed researchers to increase the resolution of climate models and to test the retrospective skill of forecasts over a 35-year period starting from 1980... The forecasters claim that new supercomputer-powered techniques have helped them develop a system to accurately predict North Atlantic Oscillation -- the climatic phenomenon which heavily impacts winters in the U.K.
The researchers apparently tested their supercomputer on 36 years' worth of data, and reported proudly that they could predict winter weather a year in advance -- with 62% accuracy.
Australia

Quantum Researchers Achieve 10-Fold Boost In Superposition Stability (thestack.com) 89

An anonymous reader quotes The Stack: A team of Australian researchers has developed a qubit offering ten times the stability of existing technologies. The computer scientists claim that the new innovation could significantly increase the reliability of quantum computing calculations... The new technology, developed at the University of New South Wales, has been named a 'dressed' quantum bit as it combines a single atom with an electromagnetic field. This process allows the qubit to remain in a superposition state for ten times longer than has previously been achieved. The researchers argue that this extra time in superposition could boost the performance stability of quantum computing calculations... Previously fragile and short-lived, retaining a state of superposition has been one of the major barriers to the development of quantum computing. The ability to remain in two states simultaneously is the key to scaling and strengthening the technology further.
Do you ever wonder what the world will look like when everyone has their own personal quantum computer?
Hardware

Fujitsu Picks 64-Bit ARM For Post-K Supercomputer (theregister.co.uk) 30

An anonymous reader writes: At the International Supercomputing Conference 2016 in Frankfurt, Germany, Fujitsu revealed its Post-K machine will run on the ARMv8 architecture. The Post-K machine is supposed to have 100 times more application performance than the K supercomputer -- which would make it a 1,000 PFLOPS beast -- and is due to go live in 2020. The K machine is the fifth-fastest known super in the world; it crunches 10.5 PFLOPS, needs 12MW of power, and is built out of 705,000 Sparc64 VIIIfx cores. InfoWorld has more details.
China

China Builds World's Fastest Supercomputer Without U.S. Chips (computerworld.com) 247

Reader dcblogs writes: China on Monday revealed its latest supercomputer, a monolithic system with 10.65 million compute cores built entirely with Chinese microprocessors. This follows a U.S. government decision last year to deny China access to Intel's fastest microprocessors. There is no U.S.-made system that comes close to the performance of China's new system, the Sunway TaihuLight. Its theoretical peak performance is 124.5 petaflops (Linpack is 93 petaflops), according to the latest biannual release today of the world's Top500 supercomputers. It has been long known that China was developing a 100-plus petaflop system, and it was believed that China would turn to U.S. chip technology to reach this performance level. But just over a year ago, in a surprising move, the U.S. banned Intel from supplying Xeon chips to four of China's top supercomputing research centers. The U.S. initiated this ban because China, it claimed, was using its Tianhe-2 system for nuclear explosive testing activities. The U.S. stopped live nuclear testing in 1992 and now relies on computer simulations. Critics in China suspected the U.S. was acting to slow that nation's supercomputing development efforts. There has been nothing secretive about China's intentions. Researchers and analysts have been warning all along that U.S. exascale (an exascale is 1,000 petaflops) development, supercomputing's next big milestone, was lagging.
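The gap between TaihuLight's theoretical peak and its measured Linpack score is itself informative: it is the fraction of the machine's raw capability the benchmark actually sustains. A quick check using only the figures quoted above:

```python
# Sunway TaihuLight figures as quoted in the article.
peak_pflops = 124.5      # theoretical peak
linpack_pflops = 93.0    # measured Linpack (Rmax)
cores = 10_650_000

efficiency = linpack_pflops / peak_pflops
per_core_gflops = peak_pflops * 1e15 / cores / 1e9

print(f"Linpack efficiency: {efficiency:.1%}")       # ~74.7% of peak sustained
print(f"Peak per core: {per_core_gflops:.1f} GFLOPS")
```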
