Klaus Mainzer, European Academy of Sciences and Arts
Future of AI in Science and Engineering: From Foundations of Computing to Emerging Technologies |
Lecture abstract:
We live in the age of artificial intelligence (AI), which has entered people's everyday lives and working environments through chatbots such as ChatGPT and DeepSeek. However, only a few people realise that this key technology is deeply rooted in the logical, mathematical, physical, and engineering foundations of science. Only those who know these fundamentals can correctly assess the potential and limits of AI technology. The lecture therefore examines the foundations and limits of digital machine learning, analog brain-orientated neuromorphic AI, and quantum AI. These approaches have different advantages, but also limitations, for solving different types of problems. In the future, the challenge will be to integrate them into a hybrid AI technology that can be used independently of its hardware. The lecture is therefore also a plea for interdisciplinary foundational research in logic, mathematics, physics, and the material and engineering sciences, which has made long-term research breakthroughs and emerging technologies possible.
References: K. Mainzer, Thinking in Complexity, 5th edition, Springer: New York 2007; K. Mainzer, The Digital and the Real World. Computational Foundations of Mathematics, Science, Technology, and Philosophy, World Scientific: Singapore 2018; K. Mainzer, Artificial Intelligence. When Do Machines Take Over?, 2nd edition, Springer: Berlin 2019; K. Mainzer, Quantencomputer. Von der Quantenwelt zur künstlichen Intelligenz [Quantum Computers. From the Quantum World to Artificial Intelligence], Springer: Berlin 2020; K. Mainzer, Artificial Intelligence of Neuromorphic Systems. From Digital, Analog, Quantum and Brain-Orientated Computing to Hybrid AI, World Scientific: Singapore 2024; K. Mainzer (Ed.), Complex Systems, Artificial Intelligence, and Emerging Technologies, World Scientific: Singapore 2025.
Personal profile:
My research concerns the dynamic intersection of complex systems, artificial intelligence (AI), and emerging technologies. As AI-driven solutions become essential for handling vast and ever-growing data sets, innovation in computing technologies is crucial; at the same time, sustainability and resource limitations must be addressed. Bringing together classic digitalization, neuromorphic computing, quantum technologies, and the latest breakthroughs in generative AI (e.g., ChatGPT, DeepSeek), my research examines the evolving innovation portfolio that shapes the future of AI and computing. It highlights the balance between digital and analog technologies, advocating for a hybrid IT and AI approach geared toward efficiency and sustainability. Rooted in decades of research on complex systems and AI, it sets the stage for a broader discussion of strategic innovation, offering insights into the evolving technological landscape and its impact on society. This research is documented not only in more than 40 international book publications and editions, but also in memberships and leading positions in universities, academies, and foundations. Examples of books that have also been translated into Chinese are "Thinking in Complexity" (Springer, 1st edition 1994, 5th edition 2007), "Artificial Intelligence – When Will Machines Take Over?" (Springer, 2nd edition 2019; Tsinghua University Press 2022), and "Complex Systems, Artificial Intelligence, and Emerging Technologies" (editor of the book series with World Scientific, Singapore, since 2025).
Klaus Mainzer studied mathematics, physics, and philosophy. He is currently Emeritus of Excellence (TUM Senior Excellence Faculty, Technical University of Munich) and senior professor (University of Tübingen). He was principal investigator of projects of the DFG (Germany), the NSF (USA), the Fritz Thyssen and Udo Keller Foundations, and the Academia Europaea. He has been a visiting professor in Brazil, China, the EU, India, Japan, Korea, Russia, the UK, and the USA.
Guowei He, Lab of Nonlinear Mechanics, Institute of Mechanics, Chinese Academy of Sciences, China
Data-driven large-eddy simulation for time-accurate prediction of turbulent flows: turbulence modeling and shape optimization
Lecture abstract:
Large-eddy simulation (LES) is increasingly used to predict turbulence-generated noise in aero- and hydro-acoustics. This task requires that LES be time-accurate: it must correctly predict space-time correlations or wavenumber-frequency spectra. Conventional turbulence models based on flow physics suffer from the competing balances of multiple flow processes, such as energy dissipation versus random backscatter and attached versus separated flows, as well as from numerical issues such as stochastic and realization differentials. Machine learning has the potential to become the workhorse for turbulence modelling and for these numerical issues. In this talk, we present our recent work. (1) Data-driven random forcing model: this model correctly predicts space-time correlations, whereas the widely used eddy viscosity models largely over-predict the time correlations. (2) Knowledge-integrated additive (KIA) wall model: this model overcomes the issue of "catastrophic forgetting" in machine learning and can be used to numerically simulate both attached and separated flows. (3) LES-based shape optimization: a regularized ensemble Kalman method is introduced to overcome the blow-up of model gradients caused by the chaotic nature of turbulence, and the resulting LES framework is applied to trailing-edge noise reduction. The application of LES to the noise radiated from turbulent flows around underwater vehicles is also presented.
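The central quantity in the abstract above, the space-time correlation, can be illustrated with a minimal numpy sketch. This is not the lecture's model; it builds a synthetic frozen pattern convected at speed `Uc` (Taylor's hypothesis) and shows that the two-point, two-time correlation peaks where the separation satisfies r = Uc·tau. All parameter values (`Uc`, `k`, grid sizes) are illustrative assumptions:

```python
import numpy as np

# Synthetic convecting velocity signal u(x, t) = sin(k (x - Uc t)):
# a frozen spatial pattern advected at speed Uc (Taylor's hypothesis).
nx, nt = 256, 512
L, T = 2 * np.pi, 8 * np.pi
Uc, k = 1.0, 3.0                       # convection speed and wavenumber (illustrative)
x = np.linspace(0.0, L, nx, endpoint=False)
t = np.linspace(0.0, T, nt, endpoint=False)
X, Tm = np.meshgrid(x, t, indexing="ij")
u = np.sin(k * (X - Uc * Tm))

def space_time_correlation(u, ir, it):
    """Normalized correlation R(r, tau) = <u(x,t) u(x+r, t+tau)> / <u^2>
    for a separation of `ir` grid points and `it` time steps
    (periodic in space, truncated in time)."""
    u0 = u[:, :u.shape[1] - it] if it > 0 else u
    u1 = np.roll(u, -ir, axis=0)[:, it:]
    return np.mean(u0 * u1) / np.mean(u * u)

# For a purely convected field the correlation peaks where r = Uc * tau.
dx, dt = L / nx, T / nt
ir = 16                                # spatial separation r = ir * dx
taus = np.arange(0, 200)
R = [space_time_correlation(u, ir, it) for it in taus]
it_peak = int(np.argmax(R))
tau_peak = it_peak * dt
print(f"r = {ir * dx:.3f}, peak at tau = {tau_peak:.3f}, r/Uc = {ir * dx / Uc:.3f}")
```

A time-accurate LES model must preserve this peak location and the decay of R around it; an over-dissipative eddy viscosity model broadens the time correlation, which is the failure mode the data-driven forcing model targets.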
Personal profile:
Dr. Guowei He is a professor and the academic director of the Institute of Mechanics, Chinese Academy of Sciences. He is an elected academician of the Chinese Academy of Sciences and a fellow of the American Physical Society. He is the current president of the Chinese Society of Theoretical and Applied Mechanics and an associate editor of the APS journal Physical Review Fluids. His research interests include turbulence statistical theory and computational modeling, large-eddy simulation of turbulence-generated noise, and machine learning.
Zhuo Zhuang, School of Aerospace Engineering, Tsinghua University, China
Machine Learning and Mechanics Modeling Empowers Design of Variable Stiffness Structures for Large Composite Aircraft Wings |
Lecture abstract:
The wing panels and beams of wide-body aircraft are characterized by large thicknesses. By continuously varying the fiber layup angle, laminated plates with variable stiffness are designed. A continuum-based plate-shell element is established, and a non-uniform anisotropic constitutive model is adopted. A machine learning cross-scale co-evolution algorithm is proposed to achieve the optimal design of high-dimensional, continuous-discrete hybrid variable-stiffness composite wing structures for large aircraft.
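The link between fiber layup angle and in-plane stiffness that this design approach exploits can be illustrated with classical lamination theory. The sketch below is generic (not the lecture's element or optimizer): it rotates a unidirectional ply's plane-stress stiffness tensor to an arbitrary angle and assembles the membrane stiffness matrix A for a layup; the ply properties are illustrative carbon/epoxy values:

```python
import numpy as np

def plane_stress_Q(E1, E2, G12, nu12):
    """Reduced stiffness of a unidirectional ply in material axes,
    returned as a 2x2x2x2 plane-stress stiffness tensor."""
    nu21 = nu12 * E2 / E1
    d = 1.0 - nu12 * nu21
    Q11, Q22, Q12, Q66 = E1 / d, E2 / d, nu12 * E2 / d, G12
    C = np.zeros((2, 2, 2, 2))
    C[0, 0, 0, 0] = Q11
    C[1, 1, 1, 1] = Q22
    C[0, 0, 1, 1] = C[1, 1, 0, 0] = Q12
    for i, j, k, l in [(0, 1, 0, 1), (0, 1, 1, 0), (1, 0, 0, 1), (1, 0, 1, 0)]:
        C[i, j, k, l] = Q66
    return C

def rotate(C, theta):
    """Rotate the stiffness tensor to a ply angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return np.einsum("pi,qj,rk,sl,ijkl->pqrs", R, R, R, R, C)

def A_matrix(C, angles, t_ply):
    """In-plane (membrane) stiffness A = sum_k Qbar(theta_k) * t_k,
    in Voigt order [xx, yy, xy] with engineering shear."""
    idx = [(0, 0), (1, 1), (0, 1)]
    A = np.zeros((3, 3))
    for th in angles:
        Cb = rotate(C, th)
        A += t_ply * np.array([[Cb[a + b] for b in idx] for a in idx])
    return A

# Illustrative carbon/epoxy ply properties (GPa) and two candidate layups;
# a variable-stiffness laminate lets these angles vary continuously in space.
C = plane_stress_Q(E1=140.0, E2=10.0, G12=5.0, nu12=0.3)
t_ply = 0.125e-3  # ply thickness, m
for layup in [[0, 45, -45, 90], [15, 60, -30, 75]]:
    A = A_matrix(C, np.deg2rad(layup), t_ply)
    print(layup, "-> Axx = %.3e" % A[0, 0])
```

In a variable-stiffness design, the angles become continuous design fields over the panel, which is what makes the optimization problem high-dimensional and continuous-discrete hybrid.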
Personal profile:
Professor of the School of Aerospace Engineering at Tsinghua University, Academician of the European Academy of Sciences and Arts (EASA), Vice President of the International Association for Computational Mechanics (IACM), Invited Expert of the China Committee of the International Science Council (ISC-China). PhD from University College Dublin, Ireland; Honorary Doctorate from Swansea University, UK. Chief Scientist of the 973 Project. He has published over 380 academic papers and authored more than 10 academic books, which have been cited over 15,000 times. He holds over 25 invention patents and 10 software copyrights, and has received 8 national and provincial-level science and education awards.
Yanping Lian, Beijing Institute of Technology
A discrete learning method for nonlinear solid mechanics problems |
Lecture abstract:
Numerical simulation is a powerful approach for engineering and scientific problems. However, traditional mesh-based and particle-based numerical methods suffer from the inherent shortcoming of being too time-consuming for complex problems with real-time analysis requirements. Although data-driven machine learning (ML) methods offer a promising path forward by revolutionizing prediction speed, it remains an open question whether a widely accepted ML method with interpretability and generalization exists for problems featuring nonlinearity, large scale, and high dimensionality. In this lecture, we introduce our recently proposed discrete learning method (DLM), which aims at real-time prediction of complex nonlinear structural responses in solid mechanics. It is important to understand that the performance of a numerical method depends not only on the governing equation to be solved and its spatial-temporal discretization scheme, but also on how the dataset is treated, particularly for ML methods. In the DLM, we introduce a novel dual-discretization concept for the training dataset, leading to a divide-and-conquer methodology, hence the name of the method. It is particularly beneficial for small-sample ML methods, such as Gaussian process regression (GPR), in addressing the challenges posed by large-scale data. For the solid mechanics problems of concern, the dual-discretization applies a solution-space discretization and a material-domain discretization to the dataset, yielding a set of sub-datasets, each featuring a small sample size and low dimensionality. Each sub-dataset is used to train a local reduced-order GPR, and the combination of these local GPRs predicts the high-dimensional problem. The proposed DLM has been demonstrated on a set of problems involving material, geometric, and boundary condition nonlinearities.
On one hand, it can deliver predictions within a second at high precision for a specific problem, outperforming traditional GPR with error reductions of one to three orders of magnitude. On the other hand, it takes less than one minute to complete online predictions for nonlinear extreme deformation problems with over 10 million degrees of freedom, which is not achievable by traditional GPR. We expect the proposed DLM to become a powerful tool for the fast and accurate prediction of large-scale nonlinear problems. In addition, the DLM can be straightforwardly extended by integrating other small-sample ML methods.
Personal profile:
Yanping Lian, Professor at Beijing Institute of Technology. He has been recognized as a National Young Talent and served as the Chief Scientist of a National Key Program for Basic Research Enhancement. His professional affiliations include membership in the Solid Mechanics Committee of the Chinese Society of Theoretical and Applied Mechanics (CSTAM), the Data-Driven Computational Mechanics Methods Group, and the National Engineering Computational Methods Liaison Committee. He also serves on the editorial boards of the Chinese Journal of Theoretical and Applied Mechanics, the Chinese Journal of Computational Mechanics, and Materials. He has led multiple national research projects with funding of more than 40 million RMB. His distinctions include the Beijing Natural Science Award (Second Class, 2nd contributor), First Prize in the International Numerical Simulation Challenge for Additive Manufacturing (1st contributor), and the Tsinghua University Outstanding Doctoral Dissertation Award, among others.
Prof. Lian’s research focuses on computational mechanics, with emphasis on numerical methods for metal additive manufacturing, impact and penetration, advanced materials and structures design, and data-driven computational mechanics. He has pioneered several numerical methods, including an adaptive finite element material point method for extreme deformation problems, multi-scale and multi-physics methods for metal additive manufacturing, the data-driven discrete learning method, and high-accuracy, stable methods for fractional partial differential equations. His scholarly output comprises over 60 publications in leading journals such as Computer Methods in Applied Mechanics and Engineering (CMAME) and Computational Mechanics (CM), accumulating 2,000+ citations. He has co-authored two monographs, secured 20+ software copyrights, and filed six patents. The numerical methods he has developed have been applied to high-end equipment development, aerospace failure analysis, and commercial CAE software.
Ongoing updates...