Non-face-to-face Virtual Experiments through Supercomputing
- Registration Date : 2020-12-22
In the wake of the 4th Industrial Revolution, advanced intelligence-information infrastructure and its utilization, such as supercomputing and AI, are becoming a condition of survival for companies and organizations. Supercomputing involves the use and combination of high-performance computing (HPC), so I would like to begin by defining these terms. A high-performance computer (according to Article 2 of Korea's High-Performance Computer Act) is a computer that can produce, process, and utilize large amounts of data at high speed. The term is synonymous with "supercomputer" and usually refers to a non-distributed computing system ranked within the Top500 list (www.top500.org). HPC (or supercomputing) refers to computing, telecommunications, and information technology that use high-performance computers and high-speed, high-capacity networks; that build purpose-built management systems along with the relevant software and applications; and that are capable of large-scale data management. Simply put, the "-ing" in supercomputing makes it an overarching term that covers research, applications, and education built on the hardware infrastructure known as the supercomputer. An HPC system is a large, server-class machine that can perform parallel calculations and simulations, AI, and big-data analysis: functions that standard PCs cannot.
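The parallel calculation mentioned above is the core of what separates HPC from a desktop PC: one large job is split into independent pieces computed simultaneously. A minimal sketch of the idea, using Python's standard multiprocessing module; the workload (a sum of squares) and the worker count are illustrative assumptions, not anything from the article:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over one chunk of the range; stands in for any
    independent slice of a larger numerical job."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Split [0, n) into chunks and compute them in parallel processes,
    the same divide-and-conquer pattern HPC clusters apply at scale."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs the remainder
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as a serial loop, but the work is spread over 4 processes.
    print(parallel_sum_of_squares(1_000_000))
```

A real supercomputer does the same thing across thousands of nodes connected by a high-speed network, typically with MPI rather than a single machine's process pool.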
This year, NAVER announced the introduction of a supercomputer, stating that it plans to take the competitiveness of its AI technology to the next level by using the machine to build a massive language model. Microsoft has already built the world's fifth-ranked supercomputer (with 285,000 CPU cores and 10,000 GPUs) and in June unveiled GPT-3, a natural-language model with about 175 billion parameters trained on roughly 300 billion words of text. Since the 1980s, supercomputers have been used mainly in basic science research to help achieve innovative scientific results. Professor John Pople received the 1998 Nobel Prize in Chemistry for developing computational methods in quantum chemistry, a field that entered physics in the 20th century, implemented in his program Gaussian. In 2013, Professor Martin Karplus of Harvard University, who developed the molecular-mechanics computational science software CHARMM, also received a Nobel Prize for his work. The use of supercomputers in automobile engine development and aircraft design, as well as in other areas of aviation, machinery, shipbuilding, civil engineering, and architecture, is becoming more common. In September 2020, Hyperion Research published a white paper on the return on investment (ROI) of HPC technology, reporting an ROI of 4,300%. In other words, every dollar invested generates about 44 dollars in value.
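The two figures at the end of that paragraph are consistent with each other: ROI is net gain divided by cost, so an ROI of 4,300% means 43 dollars of gain on top of each dollar invested, or about 44 dollars of total value returned. A one-line sanity check:

```python
def total_return_per_dollar(roi_percent):
    """ROI = (gain - cost) / cost, so total value per dollar = 1 + ROI."""
    return 1 + roi_percent / 100

# Hyperion Research's reported 4,300% ROI corresponds to $44 per $1 invested.
print(total_return_per_dollar(4300))
```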
Research on black holes is a prime example of high-precision, non-face-to-face virtual experiments using supercomputing. Supercomputer simulations reproduced and predicted the behavior of black holes that had been established only in theory, and those predictions led to the successful observation of the phenomenon. Another example of a high-precision, non-face-to-face virtual experiment is virtual nuclear testing. In 2002, the United States succeeded in a 3D analysis of the primary and secondary explosions of a nuclear weapon through computational science research using supercomputing, and in 2007 it completed warhead certification through virtual tests. The high-precision, multidisciplinary virtual-experiment computational science software developed along the way is now being commercialized to help companies grow. In education, the EDISON platform (www.edison.re.kr), which combines supercomputing with computational science software, is widely used for non-face-to-face instruction. In Korea, hands-on laboratory work for undergraduate and graduate students in science and engineering is minimized by using computers and computational science software in specialized fields (e.g., nanotechnology, chemistry) to conduct non-face-to-face experiments in thermal fluids, nanophysics, chemistry, structural analysis, urban environments, design, and medicine. I hope that UST students will also access and use such technology as part of their education.
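The common thread in these virtual experiments is replacing a physical trial with a numerical model run many times on a computer. As a toy illustration only (the actual black-hole and weapons codes are vastly more complex and, in the latter case, classified), a Monte Carlo simulation estimates a quantity by repeated random sampling; here it "measures" pi by throwing random points at a square:

```python
import random

def estimate_pi(samples, seed=0):
    """Virtual 'experiment': drop random points in the unit square and
    count how many land inside the quarter circle of radius 1.
    The hit fraction approximates pi/4."""
    rng = random.Random(seed)  # fixed seed keeps the run reproducible
    hits = sum(
        1
        for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * hits / samples

if __name__ == "__main__":
    # More samples give a more precise estimate, which is exactly why
    # such simulations are run on supercomputers at massive scale.
    print(estimate_pi(1_000_000))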
With the advancement of AI and data, a paradigm shift is under way in science and technology research: from experiments and theories to data-intensive methods that analyze large amounts of data (from observation and mathematical models to advanced computational science that integrates computation and data). Accordingly, the use of supercomputing and large-scale data-processing technology is increasing rapidly. CERN, the European particle physics laboratory, analyzed about 60 PB of data from roughly 10 trillion proton-collision events to discover the Higgs boson, and has announced a strategy to accelerate further such discoveries through AI and supercomputing. In line with this paradigm shift, the manufacturing industry is also actively adopting supercomputing and data analysis as key tools for product design, development, testing, and verification. By applying supercomputing, Goodyear reduced its tire design cycle from three years to one and cut the associated costs from 40% to 15% (Information Technology and Innovation Foundation, 2016). Furthermore, in pursuit of digital innovation, manufacturers are expanding the use of supercomputing to analyze and predict defects in the manufacturing process.
We have now entered the new-normal era, in which non-face-to-face interaction is commonplace. I hope that society can strengthen its capabilities by applying supercomputing and advanced computational science to digital innovation across research, education, and manufacturing.