(ORNL: Oak Ridge, TN) -- The U.S. Department of Energy has taken a major step toward establishing artificial intelligence as a research priority for the coming decades.
Specifically, Energy has announced a $67 million investment in AI projects from government and academic institutions as part of its AI for Science initiative, with Oak Ridge National Laboratory among those leading the way. The funding aims to establish foundational models in research areas such as scientific machine learning, large language models for high-performance computing (HPC), and automated laboratory workflows.
Six ORNL-led (or co-led) projects received funding, including:
• ENGAGE: Energy-efficient Novel Algorithms and Architectures for Graph Learning (ORNL PI: Keita Teranishi)
• DyGenAI: Dynamic Generative Artificial Intelligence for Prediction and Control of High-Dimensional Nonlinear Complex Systems (ORNL PI: Tom Potok)
• SciGPT: Scalable Foundational Model for Scientific Machine Learning (ORNL PI: Guannan Zhang)
• Productive AI-Assisted HPC Software Ecosystem (ORNL co-lead: Prasanna Balaprakash)
• Privacy-Preserving Federated Learning for Science: Building Sustainable and Trustworthy Foundation Models (ORNL co-lead: William Godoy)
• Durban: Enhancing Performance Portability in HPC Software with Artificial Intelligence (ORNL co-lead: Olivera Kotevska)
The projects were chosen via competitive peer review under Energy’s funding opportunity announcement, or FOA, for Advancements in Artificial Intelligence for Science. Funding for these projects lasts up to three years.
“This announcement is very important for the lab because we’ve been hearing about the progress of AI for many years now,” says William Godoy, senior computer scientist at ORNL. “But we were still working on what AI means for HPC, considering the niche nature of HPC systems.”
For Godoy and his team, that means more research into how best to use LLMs on systems like Frontier, the first supercomputer to break the exascale barrier. Godoy says that shortly after the release of ChatGPT, an AI-powered chatbot, many of his colleagues in the national laboratory community began examining how LLMs could be developed in support of Energy’s mission.
Godoy will use the new funding for his project to work alongside counterparts at Lawrence Livermore National Laboratory, along with HPC and AI experts from the University of Maryland and Northeastern University, to identify the best strategies for creating LLMs designed specifically for HPC. ORNL’s Pedro Valero Lara, a senior computer scientist who works with Godoy, says these LLMs can also be used for programming language translation, such as translating legacy HPC Fortran codes into more modern and capable C++ codes.
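To make the translation idea concrete, the sketch below pairs a legacy Fortran-style AXPY kernel with one possible modern C++ rendering, the kind of output an LLM-assisted translation tool might be asked to produce. The kernel and the style choices are illustrative assumptions, not results from the funded projects.

```cpp
// Illustrative only: a legacy Fortran-style AXPY kernel and one possible
// modern C++ equivalent. The use of std::vector and std::transform here
// is an assumption about target style, not the project's actual output.
//
// Legacy Fortran version:
//   SUBROUTINE AXPY(N, A, X, Y)
//     INTEGER N
//     REAL A, X(N), Y(N)
//     DO 10 I = 1, N
//  10   Y(I) = A * X(I) + Y(I)
//   END
#include <algorithm>
#include <vector>

// Modern C++ equivalent: y = a * x + y
void axpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    std::transform(x.begin(), x.end(), y.begin(), y.begin(),
                   [a](float xi, float yi) { return a * xi + yi; });
}
```

A parallel version could keep the same structure while adding a C++17 execution policy or OpenMP directives, which is part of what makes modern C++ an attractive translation target for legacy HPC codes.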
Godoy echoes that sentiment, saying the work is intended to strengthen AI-powered collaboration across the national laboratory ecosystem and the future HPC workforce, including interns, who are using LLMs as a new, ubiquitous way to learn.
“Our goal is to build synergies across projects because these projects tend to be large, multidisciplinary, and complex, so we can be more impactful together,” Godoy says. “We are also working with the ORNL-led Durban project to leverage the value of AI for our HPC mission.”
The same rings true for other projects that were awarded funding through this latest round of investment from Energy into advancing AI.
Olivera Kotevska, a research scientist in the Computer Science and Mathematics Division at ORNL who leads the Privacy-Preserving Federated Learning for Science project, stresses the importance of supporting this type of work in advancing AI broadly.
“This support enables our team to advance cutting-edge research in privacy-preserving AI, which is crucial for safeguarding sensitive scientific data while fostering collaboration across institutions,” Kotevska says. “Broadly, this project positions ORNL at the forefront of developing sustainable, trustworthy AI solutions that can have a wide-reaching impact on scientific discovery and national security. Additionally, it strengthens ORNL’s leadership in building trustworthy AI systems for science, benefiting both the lab and the broader scientific community.”
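As a rough illustration of the federated pattern Kotevska describes, the sketch below shows one round of federated averaging in which each institution shares only a noised model update rather than its raw data. The structure, noise model, and parameter names are assumptions made for illustration, not the project's actual design.

```cpp
// Minimal sketch of one round of federated averaging with Gaussian noise
// added to each client's update -- a common building block of
// privacy-preserving federated learning. All names and parameters are
// illustrative assumptions, not the funded project's actual design.
#include <cstddef>
#include <random>
#include <vector>

using Model = std::vector<double>;  // flattened model weights

// Each client trains locally and shares only a noised update,
// never its raw data.
Model noised_update(const Model& local_update, double noise_stddev,
                    std::mt19937& rng) {
    std::normal_distribution<double> gauss(0.0, noise_stddev);
    Model out(local_update.size());
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = local_update[i] + gauss(rng);
    return out;
}

// The server averages the noised updates to produce the next global model.
Model federated_average(const std::vector<Model>& client_updates) {
    Model global(client_updates.front().size(), 0.0);
    for (const Model& u : client_updates)
        for (std::size_t i = 0; i < global.size(); ++i)
            global[i] += u[i];
    for (double& w : global)
        w /= static_cast<double>(client_updates.size());
    return global;
}
```

In practice, techniques such as update clipping, secure aggregation, and formal differential-privacy accounting are layered on top of this basic pattern to make the guarantees rigorous.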
Prasanna Balaprakash, ORNL’s director of AI programs and leader of the lab’s AI Initiative, praised ORNL’s vast capabilities and deep history in AI research.
“The six awards cover all five areas of the FOA, a unique distinction for ORNL,” says Balaprakash. “These awards are a testament to ORNL’s AI expertise and capabilities, solidifying its position as a major leader in AI for science. Several of the projects have been supported by ORNL’s AI Initiative—a lab-directed research and development investment focused on developing secure, trustworthy, and energy-efficient AI solutions to address problems of national importance.”
“Working on machine learning and AI at ORNL for over 25 years has been incredibly rewarding,” says Tom Potok, who leads the Data and AI Systems section at ORNL. “Seeing this research grow as we tackle critical national challenges is both inspiring and fulfilling. It is a great honor for this outstanding team to be awarded this funding.”
Keita Teranishi, who serves as group leader of Programming Systems in the Computer Science and Mathematics Division at ORNL, stresses that AI and machine learning techniques can boost the productivity of large-scale software and application development for high-performance computing systems across the U.S. Department of Energy.
“Developing scientific software and applications for supercomputing systems like Frontier is a huge undertaking,” he says. “With Project Durban, we’re aiming to reduce the burden of coding, maintaining, and tuning large-scale scientific software and applications by leveraging AI and machine learning. By combining existing automated code synthesis and performance tuning ideas with new AI-driven, data-centered approaches, we’re working to make the entire process smoother and more efficient.”
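One small example of the tuning burden Teranishi describes: even a single kernel can require an empirical search over implementation parameters such as block size. The toy sketch below sweeps a few candidate block sizes for a blocked matrix-vector kernel and keeps the fastest; AI-driven approaches aim to guide or shortcut this kind of brute-force search. The kernel and the candidate sizes are illustrative assumptions, not Project Durban's methods.

```cpp
// Toy example of empirical performance tuning: time a blocked
// matrix-vector kernel for several candidate block sizes and keep the
// fastest. Real autotuning searches far larger parameter spaces.
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <iostream>
#include <vector>

static void matvec_blocked(const std::vector<double>& A,
                           const std::vector<double>& x,
                           std::vector<double>& y, std::size_t n,
                           std::size_t block) {
    for (std::size_t jb = 0; jb < n; jb += block)
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = jb; j < std::min(jb + block, n); ++j)
                y[i] += A[i * n + j] * x[j];
}

int main() {
    const std::size_t n = 2048;
    std::vector<double> A(n * n, 1.0), x(n, 1.0), y(n, 0.0);

    std::size_t best_block = 0;
    double best_ms = 1e300;
    for (std::size_t block : {16, 64, 256, 1024}) {
        std::fill(y.begin(), y.end(), 0.0);
        auto t0 = std::chrono::steady_clock::now();
        matvec_blocked(A, x, y, n, block);
        auto t1 = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        if (ms < best_ms) { best_ms = ms; best_block = block; }
    }
    std::cout << "fastest block size: " << best_block
              << " (" << best_ms << " ms)\n";
    return 0;
}
```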