33rd Open Access Grant Competition Results

We would like to thank all applicants for submitting projects to the 33rd Open Access Grant Competition. We received a record 86 applications in this round, requesting a total of 18,610,299 normalised hours (approximately 3,250,000 node hours). The demand for Karolina CPU was more than double the available capacity, and a similar trend was observed for Karolina GPU.

We understand that the allocation of resources in this round of the Open Access Grant Competition has caused significant dissatisfaction among many of you. IT4Innovations is committed to providing fair access to computational resources; however, we experienced unprecedented demand for the Karolina resources during this round of the competition. As a result, we were unable to meet all requests in full, leading to significant reductions in allocations for many users. While we strive for fairness in our allocation process, the overwhelming interest in these resources required some difficult decisions.

In light of these circumstances, we have prioritised allocations to our experienced users who have demonstrated the ability to use these advanced computational systems effectively. We recognise that this approach may leave some new or less experienced users feeling underserved, and we sincerely apologise for any disappointment this may have caused. We remain committed to improving the accessibility and availability of resources in the future. Thank you for your understanding and continued support.

All proposals were evaluated, taking into account the technical level of the projects, their publication history, and the criteria of scientific excellence, computational readiness, and socio-economic impact. The publication history criterion was assessed as the ratio of publications to projects over the last three years, with 1.0 publication per project taken as the standard.

The overall quality of the submitted projects was high. Twenty-four projects scored more than 25 points, and one project achieved a full score. A total of 1,944,208 node hours were awarded in this competition across 86 projects, including seven multi-year projects in their first period.

 

THE ALLOCATION COMMISSION DECIDED ON THE ALLOCATIONS WITHIN THE 33rd OPEN ACCESS GRANT COMPETITION AS FOLLOWS:

 

Researcher: Adam Matej      

OPEN-33-1     

Developing van der Waals parameters of Au(111) surfaces in Amber FF   

LUMI-C  Alloc=7700   

In our modern age, new technologies are often focused on miniaturization, with a prime example being computer chips with nanometer-scale components. The possibility of scaling electronics down to the molecular level is an actively pursued field, with great success in recent years in preparing molecules with tailored properties. Alongside the experimental efforts, theoretical calculations help to elucidate chemical processes, properties, and physical phenomena on the molecular and submolecular levels. To study these systems efficiently and accurately, we need sufficient tools. Unfortunately, one such tool is currently missing from the arsenal of computational chemists. Here we plan to develop accurate parameters used in molecular mechanics methods to describe weak interactions between gold surfaces and organic molecules. By applying a robust calculation protocol to a set of training systems, we will obtain a set of parameters allowing us to use much cheaper calculation methods without a significant loss of accuracy. We envision that this project will stimulate efforts in the field of on-surface synthesis and enhance the pace of discoveries, putting us closer to actual applications in nanoelectronics or data storage.

                                             

Researcher: Oldřich Plchot    

OPEN-33-10   

Adopting LLMs as a Centerpiece of a Modern Conversational Dialogue System   

LUMI-C  Alloc=2500;  LUMI-G  Alloc=15000

Current commercial knowledge-grounded dialogue systems are typically trained on large amounts of freely available textual data and hence often do not react well to conversational-style inputs. This proposal is linked to H2020 ELOQUENCE, where we develop a multilingual conversational dialogue system including cross-lingual Factual Information Retrieval (FIR) capabilities that leverage large language models (LLMs) and reduce the need for handcrafted rules. ELOQUENCE aims to better comprehend unstructured dialogues and translate them into explainable, safe, knowledge-grounded, trustworthy, and bias-controlled language models. As one of the main goals of ELOQUENCE is to build on top of prior achievements (existing LLMs and speech foundational models), we propose to design an architecture and methodology to effectively interconnect these foundational models while leveraging limited open-source resources within individual EU languages. The resulting outcome of this project can be seen as a compound model that understands spoken conversation and is tuned to perform as the core of a task-oriented dialogue system.

                                                                   

Researcher: Roman Bushuiev

OPEN-33-11   

A next-generation machine learning toolbox for annotating tandem mass spectra          

LUMI-G  Alloc=9000   

The discovery and identification of molecules in biological and environmental samples is crucial for advancing biomedical and chemical sciences. Tandem mass spectrometry (MS/MS) is the leading technique for high-throughput elucidation of molecular structures. However, decoding a molecular structure from its mass spectrum is exceptionally challenging, even when performed by human experts. As a result, the vast majority of acquired MS/MS spectra remain uninterpreted, thereby limiting our understanding of the underlying (bio)chemical processes. In our previous IT4INNOVATIONS projects, we developed two tools, DreaMS and MassSpecGym, which enhance the annotation of MS/MS data using machine learning and enable new computational opportunities. Building on this foundation, our current project aims to expand the machine learning toolbox for MS/MS data interpretation by introducing three novel methods: a machine learning model for annotating multi-stage tandem mass spectrometry (MSⁿ) data, a diffusion-based generative model for de novo molecular structure prediction from MS/MS spectra, and a novelty detection method tailored for mass spectrometry applications.                                                                 

 

Researcher: Jan Zemen         

OPEN-33-12   

Modeling of antiphase boundaries in Heusler alloys           

Barbora CPU  Alloc=48000;  Karolina CPU  Alloc=11300      

In the proposed project, we will investigate the influence of antiphase boundaries (APBs) on the local magnetic structure and magnetic anisotropy in a subgroup of Heusler alloys that exhibit elastomagnetic multiferroic behaviour, also known as magnetic shape memory alloys (MSMAs). Specifically, we will study Heusler alloys with the general formula X2MnY, where X = Fe, Ni, Co and Y = Ga, Sn, as well as their off-stoichiometric derivatives. Using density functional theory (DFT), we will relax large supercells incorporating APBs, subsequently determine magnetic anisotropy energy (MAE) and study the implications for the related magnetically induced martensite reorientation (MIR). The motivation for this work arises from a recent experimental study that demonstrated a decrease in MAE in regions with high APB density in Ni-Mn-Ga. This finding is counterintuitive, as domain wall (DW) pinning at APBs typically induces higher coercivity. Our DFT simulations will provide insights into the dependence of MAE on APB configuration, separating it from the effect of  DW pinning in MSMAs. The relaxed magnetic structures will serve as a foundation for future full-potential DFT studies of hyperfine parameters, enabling direct comparisons to values observed in Mössbauer spectroscopy (MS) and nuclear magnetic resonance (NMR) experiments.                                                                       

Researcher: Václav Vávra      

OPEN-33-13   

Monocular depth estimation

Karolina CPU  Alloc=100;  Karolina GPU  Alloc=1600           

This project aims at exploiting and potentially improving existing methods for monocular depth estimation (monodepth). There have been many improvements in this domain recently thanks to various newly developed deep models. However, these models achieve good performance only when measured by affine-invariant depth, i.e., up to an unknown scale and shift in depth across the whole image. Even though the unknown scale in depth is a consequence of the projection ambiguity, we believe this ambiguity may be mitigated through various means, e.g., the model being able to estimate the metric size of known real-world objects detected in the scene. The shift, on the other hand, might be removed in different ways, e.g., by enforcing plane invariance. We also observe that the recent deep models may vary in architecture or in details of the cost functions used; however, they almost exclusively learn from the same type of data, i.e., dense depth maps of varying precision. Learning from 2D-3D correspondences from structure-from-motion reconstructions may open new avenues in terms of learning metric depth and using another type of data which is ubiquitous in computer vision. Also, assuming an already existing affine-invariant monodepth, we can leverage it for pose estimation. We would like to develop a new solver for absolute camera pose estimation, potentially based on our previous solver using depths.
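For clarity, affine-invariant depth means a prediction is only trusted up to a single unknown scale and shift shared across the whole image; in our notation (added here for illustration, not taken from the proposal):

d_{\mathrm{gt}}(p) \approx s\,\hat{d}(p) + t \quad \text{for all pixels } p, \qquad s > 0,\ t \in \mathbb{R},

so recovering metric depth additionally requires estimating the per-image s and t, which is what the means discussed above (known object sizes, plane constraints) aim to provide.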

                                                          

Researcher: Michal Kolar      

OPEN-33-14   

Directionality of the ribosome exit tunnel   

Karolina GPU  Alloc=2500;  LUMI-G  Alloc=3500     

Ribosomes are the cornerstone of life because they produce all proteins in cells. The chemical reaction that interconnects the protein building blocks into a non-branched chain occurs deep inside the ribosome. Therefore, the nascent polypeptide – the protein precursor – leaves the ribosome through a tunnel. The ribosome tunnel is approximately 10 nm long. Its width is variable, with the narrowest part in the first third of the tunnel length and the widest part near the tunnel exit. Furthermore, ribosomal tunnels differ among various organisms. In this work, we will run non-equilibrium molecular dynamics simulations to describe the translation of the nascent polypeptide through the ribosome exit tunnel. We will clarify the evolutionary differences between tunnels of various ribosomes.

                                                         

Researcher: David Blackman

OPEN-33-15   

Suppression of laser plasma instabilities in inertial fusion relevant experiments  

Karolina CPU  Alloc=3700      

The production of safe and clean energy is one of this century's main challenges. Recent experiments at the National Ignition Facility in the US have demonstrated the plausibility of inertial confinement fusion (ICF) using lasers as a future energy resource. However, these experiments do not use a setup that is conducive to efficient reactor design, and the energy gain needs to be increased by more than an order of magnitude. Many challenges remain to turn these experiments into a viable and safe mechanism for energy production. One of the key obstacles for the direct-drive inertial fusion scheme, an ICF scheme relevant for energy generation, is the coupling of laser energy to the fuel capsule. When lasers ablate the surface of a fusion fuel capsule, substantial amounts of hot material are blown off as plasma. The incoming laser energy can be redirected or poorly absorbed due to complex laser-plasma instabilities (LPI). In direct drive, these LPI can be particularly detrimental, as they can prevent energy gain altogether. One possible solution to this problem, the use of broad-bandwidth lasers, will be explored in this project. Several laser facilities around the world are developing broadband configurations as a solution to laser-plasma instabilities in ICF. Using the resources requested in this proposal, we will perform large-scale kinetic simulations of ICF-relevant scenarios with the aim of planning experimental campaigns.

                                                         

Researcher: Martin Friak      

OPEN-33-16   

Magnetism of Fe-Ga alloys: quantum-mechanical and neural-network study       

Barbora CPU  Alloc=30000;  Barbora GPU  Alloc=500;  Karolina CPU  Alloc=32300;  Karolina GPU  Alloc=100    

Local magnetic moments of atoms in magnetic crystals are highly sensitive to their surrounding atomic environment. Understanding these structure-property relationships poses one of the most challenging problems in materials science. A deeper insight into these interactions could pave the way for the design of novel magnetic materials. Traditionally, such analyses rely on computationally intensive quantum-mechanical calculations. However, modern AI tools, such as neural networks combined with structural descriptors, offer a less resource-demanding alternative. These neural networks, however, require training on tens of thousands of examples. Our approach involves using quantum-mechanical calculations to create a large database of local magnetic moments across various crystals. This database will then be used to train a neural network, which can subsequently assist in designing new magnetic materials. The approach will be applied in the case of disordered Fe-Ga alloys.           

                                                 

Researcher: Jan Geletič         

OPEN-33-17   

High-fidelity simulations of urban climate as an urban heat resilience input for a Digital Twin

Karolina CPU  Alloc=59100    

The PALM model system represents a modern tool that allows detailed simulations of conditions in urban areas. These simulations typically concern the phenomena of the urban heat island and thermal comfort, and more rarely also air quality issues. The ICS team has contributed significantly to the model's development and validation; moreover, we have expertise in the evaluation of urban climate adaptation measures. For the HORIZON project CARMINE (Climate-resilient development pathways in metropolitan regions of Europe), we have prepared several large domains across Europe for complex simulations including thermal comfort and air quality. The domains represent whole metropolitan areas at a resolution of tens of metres, and the simulations will resolve most of the important eddy sizes above the urban areas. This project request follows the already supported test study OPEN-29-43.

                                               

Researcher: Marta Jaroš       

OPEN-33-18   

Automated Tuning of Workflows Executions on Remote Computational Resources II.     

Barbora CPU  Alloc=700;  Barbora GPU  Alloc=300;  Karolina CPU  Alloc=800;  Karolina GPU  Alloc=200 

In recent years, therapeutic ultrasound has found diverse applications such as tumor ablation and targeted drug delivery. Optimal outcomes require precise, customized preoperative planning. A key challenge is accurate, safe, and noninvasive delivery of ultrasound energy to the target region. Computation-intensive models for treatment estimation rely on high-performance computing (HPC). Despite the significance of HPC, clinical end users often lack the expertise to use it efficiently. The k-Plan software simplifies the use of HPC without requiring users to specify parameters or dependencies, or to monitor execution. It addresses parameter selection challenges and scaling issues, which are critical for calculation cost and execution time. Having deployed k-Plan for initial workflows, this project aims to (1) develop and test a new GPU code for thermal simulation, (2) apply real clinical and biomedical workflows, (3) customize the task submission planning logic for IT4Innovations clusters with machine learning, and (4) explore methods for k-Plan to auto-tune execution parameters for tasks. Creating a publication covering these experiments is a key goal.

 

Researcher: Diana Sungatullina        

OPEN-33-19   

Backpropagating through Minimal Solvers: Hard Minimal Problems         

LUMI-G  Alloc=15000 

We are continuing the project “Backpropagating through Minimal Solvers” that was started last year but with a focus on hard minimal problems. We aim at developing an approach to backpropagating through minimal problem solvers in end-to-end neural network training. Traditional methods relying on manually constructed formulas, finite differences, and autograd are laborious, approximate, and unstable for complex minimal problem solvers. Through this project, we want to develop fast, simple, and reliable alternatives to the aforementioned methods. We will leverage the Implicit function theorem and explore how to use it to compute derivatives for backpropagation through the solutions of a minimal problem solver. The resulting solution will be applied to a wide variety of geometric problems both on synthetic and real data to show its broad applicability and evaluate its speed, accuracy, and stability. If successful, our method would unlock new possibilities for stable optimization and could enhance numerous real-world applications that involve minimal problems in their formulation, such as robotic navigation and 3D reconstruction.       

                                                              

Researcher: Jennifer Za Nzambi       

OPEN-33-2     

Discovery of Economic Opinions and Causal Relationships from Textual Data      

Karolina CPU  Alloc=1000;  Karolina GPU  Alloc=2600;  LUMI-G  Alloc=2900         

Social media platforms are akin to an untapped gold vein harbouring a reservoir of public opinions, attitudes, and sentiments which, if realised, could revolutionise how public opinions are gathered and interpreted. This project introduces a novel method for extracting opinions about economic indicators and factors impacting society from social media texts by fine-tuning large language models on datasets comprising social media posts, comments, and more. Through fine-tuning, language models can acquire the ability to understand and mimic the economic discourse within posts published on social media. This project's value is threefold. First, it amalgamates carefully curated datasets through which the model can effectively learn domain specificities and economic understanding at an advanced level. Second, it devises metrics based on perplexity comparisons of opposing statements, which validate the model's comprehension of economic texts and thereby measure the model's alignment with the datasets it was fine-tuned on. Finally, it applies said models to datasets, with results indicating that the model-based approach can rival, and in some cases outperform, survey-based predictions and professional forecasts in predicting trends of economic indicators. Beyond the scope of this study, the methods and findings presented could pave the way for further applications of language model fine-tuning as a complement, or potential alternative, to traditional survey-based methods.
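To make the perplexity-comparison metric concrete, the sketch below scores a pair of opposing statements with an off-the-shelf causal language model via the Hugging Face transformers API; the model choice (gpt2 as a stand-in), the example statements, and the decision rule are illustrative assumptions, not the project's fine-tuned models or datasets.

# Minimal sketch: compare perplexities of two opposing economic statements.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")            # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    # Perplexity = exp(mean token-level negative log-likelihood).
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss                   # mean NLL per token
    return float(torch.exp(loss))

statement_a = "Unemployment will rise sharply next quarter."  # hypothetical
statement_b = "Unemployment will fall sharply next quarter."  # hypothetical

# The statement with lower perplexity is the one the model finds more
# plausible; aggregating such comparisons probes the model's economic stance.
print(perplexity(statement_a), perplexity(statement_b))

The same comparison applied before and after fine-tuning indicates how strongly the model has aligned with the economic discourse in the training data.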

Researcher: Martin Žonda     

OPEN-33-20   

Complex superconducting nanodevices       

Barbora CPU  Alloc=4000;  Barbora GPU  Alloc=300;  Karolina CPU  Alloc=4700;  Karolina GPU  Alloc=100

Superconducting (SC) nanohybrids are promising components for quantum circuits and quantum materials with diverse functionalities. These hybrids integrate nanostructures such as single atoms, molecules, or quantum dots with SC electrodes or surfaces. Even the simplest designs, where a quantum dot couples to one or two electrodes, exhibit tunable properties that enable exploration of phenomena like quantum phase transitions, the interplay between the Kondo effect and superconductivity, or the formation of bound states. This progress has spurred the development of more complex nanohybrids necessary for quantum devices, such as SC transistors, diodes, and qubits, as well as engineered quantum materials that combine, for instance, frustrated quantum magnets with superconductivity.  However, modeling complex SC nanohybrids remains a theoretical challenge. Approximate methods, adequate for simpler systems, often fail for these advanced designs, necessitating precise numerical approaches. Unfortunately, computational demands grow exponentially with the complexity of the system. Recently, we identified an exact mapping of SC ports in nanohybrids to fermionic chains, enabling the application of advanced numerical methods, including the Density Matrix Renormalization Group and Variational Monte Carlo approaches based on neural network quantum states. These techniques allow us to address equilibrium and transport properties of diverse systems, including SC-based quantum circuitry with multiple leads and assemblies of magnetic constituents on SC surfaces for quantum matter engineering.                                                               

 

Researcher: Martin Stack Formánek

OPEN-33-21   

Strong Field Electrodynamics in Flying Focus pulses

Barbora CPU  Alloc=5000;  Karolina CPU  Alloc=300

This project studies charged-particle behavior in external laser fields using the recently described "flying focus" (FF) regime, a laser field setup which allows precise control of the position and velocity of its focus. This novel regime makes it possible to adjust the laser focus so that it co-propagates with the particle, including the situation when the particle is moving against the laser phase fronts. The resulting long laser-particle interaction time enabled by FF pulses is expected to significantly enhance radiation reaction and cumulative QED effects along the particle trajectories. We will investigate the probability of single-photon emission by a high-energy electron in a flying focus beam, and then study the cascade emission of several photons (quantum radiation reaction). Finally, we will implement flying focus fields in the particle-in-cell code SMILEI and in a custom particle pusher written in Python. With these codes, we will perform simulations of experimental setups exploiting flying focus fields in laser-particle interactions. Thus, we will identify viable experiments that could lead, for the first time, to unambiguous detection of radiation reaction, provide methods of particle-beam control, and enable us to probe quantum electrodynamics in the strong-field regime.
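For background on the planned pusher, the sketch below is a textbook relativistic Boris step in Python (normalised units, no radiation reaction, constant fields as placeholders); the project's actual flying-focus field evaluation and QED emission modules are not represented here.

import numpy as np

def boris_push(p, E, B, q=-1.0, m=1.0, dt=0.01):
    # One standard (non-radiative) relativistic Boris step for the momentum p,
    # given the fields E and B at the particle position; c = 1 units assumed.
    p_minus = p + 0.5 * q * dt * E                  # first half electric kick
    gamma = np.sqrt(1.0 + np.dot(p_minus, p_minus) / m**2)
    t = 0.5 * q * dt * B / (gamma * m)              # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    p_prime = p_minus + np.cross(p_minus, t)        # magnetic rotation
    p_plus = p_minus + np.cross(p_prime, s)
    return p_plus + 0.5 * q * dt * E                # second half electric kick

# Placeholder usage: an electron-like particle in static fields (not a flying focus).
p = np.array([0.0, 0.0, 10.0])
E = np.array([0.0, 0.1, 0.0])
B = np.array([0.0, 0.0, 1.0])
for _ in range(100):
    p = boris_push(p, E, B)

A flying-focus implementation would replace the constant E and B with space- and time-dependent fields evaluated along the trajectory, and add the photon-emission probabilities discussed above.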

                                                                   

Researcher: Michal Piňos      

OPEN-33-22   

Neural Architecture Search with Approximate Computing  

Karolina GPU  Alloc=1300     

Deep neural networks (DNNs) are the driving force of modern AI. Recently, there has been a growing interest in the use of DNNs in resource-constrained devices, e.g. embedded devices, Internet of Things (IoT) devices, or wearable technology. However, modern DNNs consist of hundreds of layers and billions of arithmetic operations, and their computational complexity is enormous. The deployment of DNN models in such devices is thus limited, and there is a need for new DNN models that take into account not only the final accuracy of the model but also HW parameters such as power consumption or delay. In order to reduce the human effort associated with the manual design of DNN models, an automated method called Neural Architecture Search (NAS) was created. NAS is routinely employed to deliver high-quality neural network architectures for various challenging datasets. The goal of this project is to implement and experiment with various (evolutionary, one-shot, or other) HW-aware NAS methods, which are particularly useful when HW parameters are considered during the automated design of neural network architectures. Additionally, we will examine the impact of employing approximate multipliers in various layers of the DNN models in order to further reduce the HW parameters (e.g. power consumption).

 

Researcher: Evangelos Kazakos        

OPEN-33-23   

A universal approach for video understanding tasks with language grounding     

LUMI-C  Alloc=1000;  LUMI-G  Alloc=25000 

In the proposed project, our goal is to design a framework that unifies multiple video grounding tasks. We aim to develop a video-language model flexible enough to take different types of inputs and produce different types of outputs for various spatio-temporal grounding problems. The core idea of the proposed model is that an LLM is used both to understand the requirements of each task and to generate predictions as a sequence, while a visual grounding model ingests language embeddings that guide the detection of the language components in the video. While existing models excel at semantic tasks like captioning, they perform poorly on spatio-temporal reasoning, which is critical for areas such as robotics and self-driving cars. To address this, we propose a model that unifies various video grounding tasks, such as spatio-temporal grounding and referring expression comprehension, offering a more comprehensive approach than current models that typically focus on isolated tasks. The proposed project can have a significant impact on various disciplines. For example, it can aid robots in learning to manipulate objects given instructions in natural language, or it can enhance self-driving cars with grounded conversation capabilities between the driver and the car for more effective and safe driving. This project will contribute towards addressing the objectives of the ERC Advanced grant FRONTIER, GA no. 101097822.

                                                               

Researcher: Jakub Kruzik       

OPEN-33-24   

Development and Applications of PERMON and SurrDAMH Libraries       

Barbora CPU  Alloc=500;  Barbora GPU  Alloc=200;  DGX-2  Alloc=200;  Karolina CPU  Alloc=300;  Karolina GPU  Alloc=100;  LUMI-C  Alloc=5000;  LUMI-G  Alloc=200          

The aim of the project is to facilitate the development of the PERMON libraries [1] and the SurrDAMH library [2] and their application to real-world problems. The PERMON libraries include algorithms and supporting functions for solving large-scale quadratic programming problems, FETI/BETI-type domain decomposition methods, and support vector machines. These libraries are utilized in a wide range of applications, including large-scale contact problems in (hydro-)mechanics [3], ice-sheet modeling [4], and wildfire detection using satellite imagery. They have been employed to solve problems involving over one billion unknowns, utilizing up to 27,000 CPU cores. The SurrDAMH library provides a parallel Python implementation of surrogate-accelerated Markov chain Monte Carlo (MCMC) methods for posterior sampling in Bayesian inversion. A number of Markov chains are generated in parallel, sharing a surrogate model that is continuously refined. The chains also share a pool of spawned parallel solvers, such as those provided by the PERMON libraries. A variety of non-intrusive surrogate models are implemented, including polynomial chaos approximation and deep neural networks. The project aims to further develop these libraries as part of the REFRESH project, enhance surrogate models using deep neural networks for slope stability and civil engineering applications under the INODIN project, and address inverse problems related to modeling deep repositories for radioactive waste under the EURAD-2 project.
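To illustrate the surrogate-accelerated sampling idea in its generic form, the sketch below is a single-chain delayed-acceptance Metropolis-Hastings loop with a symmetric Gaussian proposal; it is a common textbook variant written for this announcement, not the SurrDAMH API, its parallel chains, or its on-the-fly surrogate refinement.

import numpy as np

def da_mh(log_post_full, log_post_surr, x0, n_steps, step=0.5, rng=None):
    # Delayed-acceptance MH: the cheap surrogate screens proposals first, and
    # the expensive full model is evaluated only for pre-accepted proposals,
    # so the chain still targets the exact posterior.
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    lp_full, lp_surr = log_post_full(x), log_post_surr(x)
    chain = []
    for _ in range(n_steps):
        y = x + step * rng.standard_normal(x.shape)
        ly_surr = log_post_surr(y)
        # Stage 1: pre-accept/reject using the surrogate only.
        if np.log(rng.random()) < ly_surr - lp_surr:
            ly_full = log_post_full(y)
            # Stage 2: correct with the full model.
            if np.log(rng.random()) < (ly_full - lp_full) - (ly_surr - lp_surr):
                x, lp_full, lp_surr = y, ly_full, ly_surr
        chain.append(x.copy())
    return np.array(chain)

# Toy usage: the "full" posterior is a standard normal, the surrogate a wider one.
samples = da_mh(lambda x: -0.5 * np.sum(x**2),
                lambda x: -0.4 * np.sum(x**2),
                x0=np.zeros(2), n_steps=1000)

In SurrDAMH the surrogate is additionally refined from the collected full-model evaluations, and many such chains run in parallel, sharing the surrogate and a pool of spawned solvers.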

                                         

Researcher: Tomas Brzobohaty         

OPEN-33-25   

Flexible manufacturing approach for recyclable bio-based high performance composite molds 

Karolina CPU  Alloc=99900;  LUMI-C  Alloc=23100;  LUMI-G  Alloc=8000   

This project is part of the international Future Mold initiative supported by the EU-funded network M-ERA.NET. The collaboration involves partners from the Czech Republic, Poland, and Germany, combining expertise in additive manufacturing, material sciences, and computational modelling to revolutionize incremental sheet metal forming (ISF). The goal is to enhance ISF's computational efficiency and practical application by leveraging high-performance computing (HPC), advanced finite element modelling (FEM), and machine learning techniques. Current ISF processes face significant bottlenecks, including lengthy simulation times that can take days to complete. This project aims to reduce these times dramatically while maintaining high accuracy. The approach involves using LS-DYNA for initial large-scale sensitivity analyses to understand critical ISF parameters. Subsequently, an in-house FEM solver tailored for ISF will be developed, incorporating implicit time integration schemes, robust contact algorithms, and hybrid OpenMP-MPI parallelization to optimize performance. Integrating Physics-Informed Neural Networks (PINNs) will enable rapid predictions of deformation states, further reducing simulation times. These innovations will create an adaptive, scalable pipeline for simulation and optimization related to ISF technology. The project's outcomes promise significant socioeconomic impacts, including reducing manufacturing costs and CO2 emissions, creating new market opportunities, and fostering sustainable, resource-efficient production methods. The project will drive advances in digital manufacturing and sustainable design through collaboration and technological innovation.     

                                                          

Researcher: Elliot Michael Rothwell Perviz  

OPEN-33-26   

NANOSLIDE2  

Barbora CPU  Alloc=21700;  Karolina CPU  Alloc=16500      

Transition metal dichalcogenides (TMDs) are a class of materials with general chemical formula MX2 (M=transition metal, X=chalcogen). They crystallise in a layered structure in repeating sequences of topologically flat layers bound together by weak van der Waals forces. The application of TMDs as solid lubricants arises from their characteristic vanishing coefficient of friction, on the order of 10^-3 for MoS2, and a tendency to adhere strongly to the substrate on which they are deposited. However, the friction and wear properties of TMDs degrade in dynamic and humid environments. Previous experimental studies have explored the use of dopants and/or heterostructures of TMDs resulting in improvements to the coefficient of friction and reduced wear. Moreover, it has been shown that the displacements that realise layer sliding may be decomposed into phonon contributions, which in turn can be tuned to achieve a desired frictional response. For this reason, we perform a computational study to determine the phonon modes for a selection of bilayer TMD heterostructures. Then, we investigate how the phonon modes and their anharmonic effects relate to structural and electronic properties, and whether these links can be exploited to tune the frictional response.       

                                                          

Researcher: Vladislav Pokorný      

OPEN-33-27   

Quartet superconductivity in quantum dot-based devices 

Barbora CPU  Alloc=12000    

Superconductivity is at the core of the second quantum revolution promising new platforms for fast classical and quantum computing, quantum sensing and communications. The fast pace of development of quantum technologies requires designing new devices with increasing complexity and number of active elements. Understanding the complex interplay among the various quantum-mechanical phenomena which take place in nanoscale is a necessary step in developing a new generation of such devices that utilize new mechanisms. Supercomputers are a necessary tool that allows us to build the theoretical understanding of the underlying processes and explain the available experimental results before such devices can be reliably used to extend the abilities of the current silicon-based electronics.          

                                                           

Researcher: Ahmed Alasqalani     

OPEN-33-28   

Exploring Radiation Tolerance of Medium Entropy Nitride Alloys (RaTMEN)        

Barbora CPU  Alloc=5900;  Karolina CPU  Alloc=18200        

Next-generation nuclear reactors require advanced materials capable of withstanding high radiation doses (100 displacements per atom) at temperatures up to 900°C. Radiation-induced damage, such as void swelling and helium embrittlement, can severely compromise an alloy's mechanical and structural integrity. Medium-entropy alloy (MEA) nitrides, composed of 2-4 transition metal elements, have demonstrated enhanced properties by combining the strengths of individual nitrides. Recent research indicates that certain MEAs exhibit improved resistance to void swelling due to their complex compositions, which reduce solute diffusivity and promote point defect recombination. Previous studies have shown that nanocrystalline TiN possesses superior radiation resistance. Nitride coatings, typically produced via reactive sputtering, exhibit compositions sensitive to the N₂/Ar gas ratio, making it challenging to precisely control the phases and compositions of medium-entropy nitride coatings. This highlights the need to accelerate the design and nanoscale manufacturing of advanced MEA nitrides for extreme environments. To address this, we will utilize high-throughput computational methods, including density functional theory (DFT) and molecular dynamics (MD) simulations, to explore the composition space of (ATiHf)Nₓ MEA nitrides (where A = W, Zr, Ni, Cr, Mo) and predict their phase stability and mechanical properties.

                                                          

Researcher: Carlos Manuel Pereira Bornes  

OPEN-33-29   

New insights into pyridine-zeolite interactions derived from machine learning potentials

Karolina CPU  Alloc=38400;  LUMI-G  Alloc=2600    

Zeolites are nanoporous materials used as solid acid catalysts in various industrial processes. Despite their broad applications, the precise nature of zeolite acid sites remains a topic of debate, hindering the establishment of structure-activity relationships, and consequently the design of tailored catalysts for specific reactions. Pyridine is used as the standard probe molecule for measuring zeolite acidity experimentally. However, experimental approaches alone cannot provide a detailed atomistic understanding of active site structures and reactivity, often relying on insights from computational studies. Traditional computational methods have been limited by the high cost of ab initio calculations, which restrict models to simplified representations with few acid site types, no defects, and often ignore dynamical effects. This project seeks to address these limitations by developing machine learning potentials capable of accurately modelling the acid-base interactions between pyridine and zeolite acid sites. Our approach will enable performing long equilibration simulations while retaining the accuracy of the (meta)GGA DFT training data. We aim to capture a diverse range of acid-base interactions and understand the effect of pyridine loadings and temperatures, to achieve an accurate representation of experimental conditions. 

                                                             

Researcher: Matěj Kripner

OPEN-33-3     

Advancing Logical Reasoning in Learned Systems   

LUMI-C  Alloc=900;  LUMI-G  Alloc=8900     

Novel multistep reasoning is still hard to achieve reliably using cutting-edge statistical systems trained on big data like large language models. At the same time, improvements in reasoning affect virtually every area of scientific and technological progress.   A natural testbed and training objective for reasoning is theorem proving in mathematics since mathematics is infeasible without precise reasoning, and conversely, reasoning can be trained by self-exploration of mathematics. Thanks to formal verification systems like Lean or Metamath, results in mathematics can be automatically verified, providing a clear training signal. Furthermore, these systems contain tens of thousands of manually written proofs, providing expert traces for supervised pre-training.   Formal theorem proving naturally extends also into natural language processing and understanding tasks, including complex query answering on a knowledge graph, common sense reasoning, and fact verification. Moreover, improvements in the reliability and depth of reasoning are directly applicable in virtually every field, including mathematics, physics, medicine, cryptography, software verification, manufacturing planning, and more.                                                               

 

Researcher: Dalibor Javůrek  

OPEN-33-30   

Circulator       

Karolina CPU  Alloc=2800;  Karolina FAT  Alloc=100

A circulator is a photonic device with three ports. The first port serves as an input for light carrying information; the second port allows the light to exit the structure for further processing; and the third port is typically used for monitoring purposes. When light is processed after exiting the circulator through the second port, back reflections may occur. The key function of the circulator is to route this back-reflected light to the third port, thereby protecting the section of the photonic integrated circuit (PIC) located before the first port. Commercially available circulators are bulky, cylindrical devices with heights of several centimeters and diameters of a few millimeters. We address this limitation by utilizing asymmetric modes in magneto-metallo-dielectric (magneto-bi-plasmonic) photonic waveguides. These modes enable the concentration of the electromagnetic field near one wall of the circulator, facilitating the development of a compact, broadband-integrated circulator. We will numerically design the parameters of the magneto-bi-plasmonic circulator using optimization algorithms to enhance its performance and meet industry requirements. Subsequently, in collaboration with our research partners, the designed circulator will be manufactured, its real-world performance evaluated and compared with the simulations. The entire project is supervised by industry experts who are keen on adopting the technology upon successful development.                                                                    

 

Researcher: Šimon Vrba        

OPEN-33-31   

Modelling of ELM transport in the JET SOL  

Karolina CPU  Alloc=14900    

One of the hot topics of Magnetic Confinement Fusion research is the estimation and reduction of power loads on plasma-facing components and the related erosion of these elements. Among the most critical are power loads during the so-called Edge-Localized Modes (ELMs) in tokamaks, when hot plasma from the confined region propagates directly towards the divertor plates. The aim of this project is to model ELM transport in the JET tokamak and to study the evolution of the ELM-related divertor power loads and the resulting W sputtering rates.

                                                          

Researcher: Adam Pecina     

OPEN-33-32   

Validating Quantum-Mechanical Scoring in Practical Drug Discovery Scenarios

Karolina CPU  Alloc=14300;  Karolina GPU  Alloc=100         

Accurately predicting protein–ligand (P–L) binding affinities is a cornerstone of structure-based computer-aided drug design (CADD) that is a key instrument in the discovery of novel medicines. Current predictive methods range from ultra-fast scoring functions to highly accurate but computationally intensive molecular dynamics (MD)-based free-energy techniques. While these approaches have advanced our ability to identify drug candidates, they often face a critical trade-off between speed and precision. To address this challenge, we have recently developed a reliable and efficient scoring method grounded in semiempirical quantum-mechanical calculations. While it has shown exceptional results for systems with known 3D structures, this project aims to extend its application by showcasing its high accuracy in real-world drug design scenarios, specifically on P–L datasets lacking experimentally confirmed binding modes. Towards this aim, we will also leverage the very recent release of AlphaFold3 for predictions of P–L structures.

                                                             

Researcher: Klára Mitošinková         

OPEN-33-33   

Edge fast ion study for the fusion research  

Barbora CPU  Alloc=25000;  Karolina CPU  Alloc=500          

The tokamak is a leading concept for a future fusion power plant. It confines hot plasma, which contains fast ions besides the bulk ions and electrons. Fast ions can secure plasma heating, support enhanced plasma confinement modes, or suppress instabilities. They can also amplify or introduce plasma instabilities, causing significant losses of energy. Therefore, a systematic study of fast ions is crucial for the success of fusion power plant development. Due to the topology of the orbits, the fast-ion distribution is velocity-space dependent, which makes its study and modelling extremely difficult. The fast-ion distribution is represented by employing 3D orbit-tracing codes, whilst the corresponding signals measured by dedicated diagnostics are predicted using synthetic diagnostics codes. The principal investigator received the prestigious EUROfusion Bernard Bigot Researcher Grant, which supports young scientists. Nowadays, most fast-ion research is focused on core fast ions. In contrast, this project will investigate edge fast ions and their role in the formation of plasma modes with better plasma confinement. The research will use data from the US tokamak DIII-D, which has the most experience in fast-ion research worldwide. To clarify the role of the edge fast ions in the high-energy-confinement mode, the following codes will be used: Ebdyna (fast-ion tracing code), ASCOT (fast-ion tracing code), and FIDASIM (fast-ion synthetic diagnostic).

                                                       

Researcher: Jiri Brabec          

OPEN-33-34   

Density Matrix Renormalization Group Approach Based on the Coupled-Cluster Downfolded Hamiltonians: Development and Application          

Barbora CPU  Alloc=8000;  Karolina CPU  Alloc=34200        

Recently, we developed a preliminary implementation of a new approach that integrates a Hermitian coupled-cluster downfolding technique (DUCC) with the Density Matrix Renormalization Group (DMRG) method to accurately address both static and dynamic correlations in complex electronic systems. By calculating the lowest-state energies of active-space Hamiltonians using DMRG, we aim to achieve efficient and accurate simulations of strongly correlated systems. This combined approach represents a promising advancement in computational chemistry for studying complex chemical processes. We plan to apply the new DMRG-DUCC(2) and DMRG-DUCC(3) methods to chemically and biologically important systems, including iron-nitrosyl and iron-sulfur complexes.     

                                                        

Researcher: Jonas Kulhanek  

OPEN-33-35   

Unified 3D Foundational Model for 3D Scene Representation        

Karolina GPU  Alloc=1900     

Foundational models have recently shown great potential and gained a lot of attention from both the research community and the public sector. Vision-language models (VLMs) enable deep image and video understanding. However, a great limitation of these models is that they lack any 3D reasoning and generative 3D capabilities. Therefore, in our work, we would like to design a 3D foundation model with generative capabilities. The model will be trained to map between various 3D representations (NeRFs, 3DGS, or SDFs), and it will be forced to internally learn a suitable 3D representation to be able to represent and reason in 3D. We will further provide a link between 2D and 3D by adding a 2D conditioning mapping between various 2D modalities to obtain generative capabilities and 3D priors within the model. We believe this model will greatly advance many 3D vision fields by providing a strong foundational prior for various 3D tasks like 3D generation, 3D understanding, scene reconstruction, etc. Finally, we believe our model could pave the way to embedding generative 3D capabilities into end-user LLMs such as ChatGPT.

                                                          

Researcher: Marek Hrúz       

OPEN-33-36   

Sign Language Alignment      

LUMI-G  Alloc=20000 

Sign languages are vital forms of communication for millions of people in Deaf and hard-of-hearing communities worldwide. However, unlike spoken and written languages, sign languages are visual and highly nuanced, with gestures, facial expressions, and body movements all contributing to meaning. This complexity makes translating sign language into spoken or written language a challenging task, but one with immense potential to foster inclusivity and break down communication barriers. Our project focuses on aligning sign language with its spoken and written counterparts. This alignment process is key to building effective sign language translation systems. It involves matching specific signs (or sequences of signs) to their linguistic meanings in other languages. By creating accurate alignments, we enable computers to "learn" how sign language conveys information, laying the foundation for reliable translation systems. Using advanced computational tools and a high-performance computing cluster, we are developing and training a system that combines artificial intelligence with video and language processing. The system learns from examples of people signing and their corresponding translations, ensuring it understands both the grammatical structure of sign languages and the cultural context in which they are used. The ultimate goal is to make communication seamless between sign language users and speakers of other languages, creating technologies like real-time sign language interpretation and improved accessibility in education, workplaces, and public services. By bridging this gap, we hope to empower the Deaf community and bring us closer to a world without communication barriers.

                                                          

Researcher: Riccardo FUSCO 

OPEN-33-37   

AI-Driven Integrated Pipeline for Accelerated Drug Discovery: A High-Performance Computing Approach         

LUMI-C  Alloc=1500;  LUMI-G  Alloc=5900   

The project proposal presents a computational pipeline supporting the ERA Chair ACCELERATOR (101087318) and ERC Advanced Grant AMADEUS (101098001) initiatives. The system integrates machine learning-enhanced virtual screening, binding free energy estimation, reinforcement learning for molecular design, natural language processing, and AlphaFold-based protein modeling. Operating in a protein-agnostic manner, it will be validated across various therapeutic targets, including CDK4/6/9, SOS1, and EGFR. The framework aims to transform the early drug discovery process by seamlessly integrating cutting-edge artificial intelligence algorithms with traditional computational chemistry methods and wet-lab instrumentation. This comprehensive system identifies promising drug candidates, systematically optimizes their properties, and validates their potential through an automated workflow. The sophisticated modular architecture ensures efficient processing of extensive compound libraries while maintaining high throughput and computational accuracy. Through integration with experimental data, the pipeline creates a continuous feedback loop that improves computational models, resulting in an iterative, self-improving system that aims to accelerate the drug discovery process.

 

Researcher: Christopher Heard        

OPEN-33-38   

Machine-Learning Assisted Modelling of Oxidation Catalysis over Binary Sub-Nanometre Nanoalloys   

Karolina CPU  Alloc=34100;  LUMI-G  Alloc=1458    

The transition towards low cost, high atom-efficiency supported metallic catalysts in thermal, photo- and electro-catalytic chemical conversion for sustainable energy applications is well underway. The sub-nano scale provides a rich manifold for the bottom-up design of selective, efficient catalysts, reducing financial and environmental cost with respect to traditional catalysts, while enhancing reactivity in a wide range of applications, including CO2 reduction and alkane C-H activation. Theoretical treatments rely heavily on expensive electronic structure calculations, which limits the scope of investigation and risks inaccuracy in comparison to experiment. We have recently developed reactive neural network interatomic potentials (NNPs) that drastically speed up dynamical simulations of supported sub-nano metal cluster structure and evolution under realistic conditions. In this project, we will extend this concept to train a reactive NNP for a promising catalytic system (tetrameric binary transition metal clusters on Al2O3). This will be followed by application of the novel conformal sampling method, in which a robust transfer learning approach extends the generality of the NNP towards related systems, thus vastly reducing the computational cost of retraining, allowing for screening across candidate systems for a particular catalytic application. These transfer potentials will be used for the investigation of a model oxidation reaction in 1:1 collaboration with experiment.

 

Researcher: Vojtěch Bartek  

OPEN-33-39   

Thinking Tokens for Language Models         

Karolina GPU  Alloc=3500     

How much is 56 times 37? Language models often make mistakes in these types of difficult calculations. This is usually explained by their inability to perform complex reasoning: language models have to rely on large training sets and their great memorization capability. However, one can argue that humans also cannot perform this calculation right away and require a considerable amount of time to come up with the solution. To enhance the generalization capability of language models, we propose to use special 'thinking tokens', which allow the model to perform much more computation whenever a complex problem is encountered. A similar problem has already been studied in prior work, where a language model recomputes only part of the recurrent hidden layer based on the complexity of the input example that is currently being processed. Another line of work with similar motivation explores the possibility of using neural networks that learn algorithms. However, in our work we propose to extend language models in such a way that would allow them to “think” for longer before producing the final answer.
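As a worked instance of the opening example (simple arithmetic, added here for illustration), the multi-step reasoning that the thinking tokens are meant to buy time for looks like

56 \times 37 = 56 \times 30 + 56 \times 7 = 1680 + 392 = 2072,

a decomposition a person performs over several steps rather than in a single forward pass.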

                                              

Researcher: Jhacson Andres Meza Arteaga 

OPEN-33-4     

Evaluation of Learned Local Features for Sparse 3D reconstruction           

Karolina CPU  Alloc=200;  Karolina GPU  Alloc=1600           

Structure-from-Motion (SfM) is a 3D reconstruction method that heavily relies on accurate local feature matches to successfully obtain a sparse point cloud representation from a scene. Typically, SfM pipelines use hand-crafted feature detectors like SIFT, SURF, ORB, etc. Recently, many learning-based local feature detectors and feature matchers have been proposed, and in this project, we want to evaluate the use of them for sparse 3D reconstruction within an SfM pipeline. Specifically, we are interested in evaluating the recent progress with learned local features: Can using learned features make “easy” scenes (scenes in which current SfM systems succeed) easier (e.g., accelerate the reconstruction process)? Can using learned features help us to handle scenes where current SfM systems struggle or fail completely, e.g., when there are strong illumination or seasonal changes? To answer these questions, we will evaluate state-of-the-art learned local features in terms of reconstruction efficiency and reconstruction accuracy (using different measures such as pose accuracy and novel view synthesis quality).    

                                                          

Researcher: David Herel        

OPEN-33-40   

Novel algorithms for more efficient language models         

Karolina GPU  Alloc=5200     

The efficiency of language models is a crucial factor nowadays, as the best performance is usually achieved by the largest models trained on as much data as possible. Previously, much of the scaling was a direct result of massive investments by companies such as Microsoft, Google, and Facebook. Another significant factor was hardware improvements, notably the GPUs produced by Nvidia. Finally, the algorithms used in language modeling are a crucial factor, with the well-known Transformer architecture being the most popular choice for scaling models up to massive numbers of GPUs. In our work, we plan to explore ideas such as model ensembles, training data selection, sparse language models, and others in order to advance the algorithmic side of language model efficiency.

                                              

Researcher: Zehao Yu

OPEN-33-41   

High-quality Geometry Prediction from Monocular Images

Karolina CPU  Alloc=100;  Karolina GPU  Alloc=4900           

High-quality geometry prediction from monocular input images is crucial for a wide range of applications, including self-driving cars, robotics, and augmented reality (AR). While much of the existing work focuses on monocular depth map prediction, recent studies have proposed predicting a 3D point map instead. These approaches demonstrated that both the depth map and camera intrinsics can be derived from the predicted point clouds. Although these methods achieve robust geometry predictions, they struggle to reconstruct fine-grained details in scenes, such as the thin legs of chairs or the spokes of bicycles. This project introduces a novel approach for high-quality, detailed geometry prediction. Our key insight is that existing methods rely on a CNN decoder with bilinear upsampling, which produces smooth outputs but fails to account for the inherent discontinuities of point maps at occlusion boundaries. To address this limitation, we propose a discrete decoder architecture that employs a multi-layer perceptron (MLP) to produce discontinuous outputs. Additionally, we regularize the decoder with an edge-aware smoothness loss, enabling it to predict smooth point maps in continuous regions and discrete point maps at boundaries. To enhance the model's capability, we will build on a large text-to-image diffusion model, leveraging the rich contextual information embedded in generative models. We plan to release our codebase and pretrained models to benefit the research community.                
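For reference, one widely used form of an edge-aware smoothness term (a common variant from the monocular-depth literature; the exact loss used in this project may differ) penalises output gradients except where the input image itself has strong edges:

L_{\mathrm{smooth}} = \sum_{p} \left|\partial_x d(p)\right| e^{-\left|\partial_x I(p)\right|} + \left|\partial_y d(p)\right| e^{-\left|\partial_y I(p)\right|},

where d denotes the predicted depth or point-map coordinate and I the input image; the exponential weights relax the smoothness penalty at image edges, which is where the proposed discrete decoder is allowed to produce discontinuities.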

                                            

Researcher: Tomáš Karásek   

OPEN-33-42   

Digital Twin of Polymer Electrolyte Membrane Fuel Cells (PEMFC)

Karolina CPU  Alloc=7300      

The project aims to develop a digital twin of a Polymer Electrolyte Membrane Fuel Cell (PEMFC) using a surrogate model based on artificial intelligence (AI). Due to the difficulty of obtaining all necessary data from experiments, the project will use Computational Fluid Dynamics (CFD) to generate datasets. In the first year, a surrogate model will be created to estimate hard-to-measure fuel cell parameters by performing numerous calculations and comparing results with experimental data. This tuned model will then be used to make the digital twin. The project leverages state-of-the-art techniques, including numerical modelling and simulations, and AI, to enhance fuel cell design, efficiency, and sustainability, contributing to cleaner energy solutions and supporting global efforts to reduce carbon emissions.                                                                   

 

Researcher: Pavlo Polishchuk

OPEN-33-43   

Fragment-based de novo design and structure optimization          

Karolina CPU  Alloc=6000;  Karolina GPU  Alloc=800           

Exploration of chemical space is extremely difficult due to the vast number of potential drug-like molecules, estimated at about 10^36. Approaches which can generate compounds on the fly and adaptively explore and navigate this space will substantially increase the speed of searching for potentially active molecules, improve their novelty, and greatly expand the knowledge about favorable and unfavorable regions of chemical space. All this will increase the efficiency of searching for new hits, leads, and drug candidates. This project is focused on the development and application of computational tools based on the previously developed fragment-based structure generation framework CReM, whose main advantage is the generation of synthetically accessible molecules, which will improve the outcomes of medicinal chemistry projects.

                                            

Researcher: Martin Matys    

OPEN-33-44   

Gamma-ray flash generation using structured target          

Karolina CPU  Alloc=24200    

The current development of multi-petawatt lasers opens the window to the quantum electrodynamics regime. The interaction of these ultra-intense lasers with matter results in gamma-photon emission, mainly via the multi-photon Compton scattering process. Gamma-ray flashes are of great interest to a wide portion of the scientific community, with theoretical predictions advancing alongside computer simulations and the first experiments currently being realized. Generating intense gamma-ray flashes is of particular interest for the imaging of high-density materials, astrophysical studies, and many other fields such as radiation chemistry, materials science, and nuclear physics, as well as applications such as the gamma knife in medicine. In this work, we employ particle-in-cell (PIC) simulations to model the interaction of high-intensity lasers with structured targets of various configurations. By varying the target topology, we will study the spectral and spatial properties of the generated gamma photons and ways to increase their yield.

                                                            

Researcher: Michal Langer   

OPEN-33-45   

Theoretical Exploration of Photo-Catalysis in Carbon Dots and Electro-Catalysis in Low-Dimensional Nanomaterials           

Karolina CPU  Alloc=19600;  LUMI-C  Alloc=20000  

Carbon dots (CDs) are an emerging class of nanomaterials with remarkable potential in photocatalysis, offering innovative solutions for renewable energy generation and environmental challenges. Their appeal lies in their highly adaptable optical and electronic characteristics, which make them suitable for applications like hydrogen production and CO2 conversion. Nevertheless, realizing their full catalytic potential remains complex due to the intricate relationship between their structural features and functional performance. Computational techniques, particularly Density Functional Theory (DFT), play a crucial role in addressing these challenges by providing detailed insights into the electronic behavior, charge dynamics, and catalytic active sites of CDs. Simulations exploring structural alterations, such as doping, size optimization, and surface functionalization, allow for the fine-tuning of their properties for specific uses. In parallel, molecular dynamics and machine learning models accelerate the discovery of key structure-performance relationships, while advanced quantum chemical methods unravel the fundamental mechanisms of catalytic reactions. This integrated theoretical approach not only enhances our understanding of CDs but also drives the development of efficient, environmentally friendly catalysts for a sustainable future.                                                                 

Researcher: Jan Pokorny       

OPEN-33-46   

Analysis of Fibrous Particle Motion in Human Respiratory Tract using the Lattice Boltzmann Method

Karolina CPU  Alloc=2000;  Karolina GPU  Alloc=300           

Computational Fluid Dynamics (CFD) plays a crucial role in understanding the transport and deposition of particles within the respiratory tract. Experimental investigation of deposited particles, particularly non-spherical ones, is a significant challenge. This project aims to increase the accuracy of the Lattice Boltzmann Method in simulating the transport and deposition of fibrous particles using a modified Euler-Lagrange Euler-Rotation approach within a realistic geometric model of the female respiratory tract. The simulations will be validated against traditional CFD techniques and experimental findings, offering valuable insights into the potential medical applications of the Lattice Boltzmann Method. Additionally, analyzing results across age- and gender-specific geometries will provide important comparative data.
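For orientation, a generic single D2Q9 lattice Boltzmann (BGK) collision-and-streaming step is sketched below in NumPy; it is a textbook illustration only and does not include the project's modified Euler-Lagrange Euler-Rotation particle coupling or the respiratory-tract geometry.

import numpy as np

# D2Q9 lattice velocities and weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1], [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Discrete Maxwell-Boltzmann equilibrium for each lattice direction."""
    cu = np.einsum('id,xyd->xyi', c, u)                # c_i . u
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]
    return rho[..., None] * w * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def lbm_step(f, tau=0.6):
    """One BGK collision + streaming update of the distributions f with shape (nx, ny, 9)."""
    rho = f.sum(axis=-1)
    u = np.einsum('xyi,id->xyd', f, c) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / tau            # BGK collision
    for i, ci in enumerate(c):                         # streaming along each lattice direction
        f[..., i] = np.roll(f[..., i], shift=tuple(ci), axis=(0, 1))
    return f

f = equilibrium(np.ones((64, 64)), np.zeros((64, 64, 2)))   # uniform fluid at rest
f = lbm_step(f)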

 

Researcher: Matúš Kaintz     

OPEN-33-47   

Ab Initio Modelling of Advanced Photovoltaic and Semiconducting Devices Based on Diamond (AIM-PhotoDiamond)         

Barbora CPU  Alloc=32100;  Karolina CPU  Alloc=9400        

As global energy demand continues to rise, there is an urgent need for sustainable, carbon-free energy harvesting technologies. Intermediate band (IB) photovoltaics offers an innovative solution by overcoming the Shockley-Queisser limit of traditional solar cells. To this aim, we propose an IB solar cell based on diamond, whose wide band gap can host multiple, suitably spaced IBs to maximise harvesting efficiency. To form the IBs, we will utilise dopant-vacancy-dopant cluster defects. Furthermore, we propose a diamond-based PN junction in which both regions are doped with the same n-type dopant. The p-type region is created by the additional introduction of vacancies to form dopant-vacancy clusters. This approach could significantly simplify the manufacturing of semiconducting devices, for instance, by reducing cross-contamination and streamlining the supply chain. To model components of the proposed devices with high accuracy, we employ density functional theory with GW corrections. Additionally, we solve the Bethe-Salpeter equation and calculate electron-phonon coupling to determine optical absorption and electronic properties where relevant. The presented research has the potential not only to advance engineering fields but also to establish new approaches to simulating photovoltaic devices purely by ab initio calculations.

                                                          

Researcher: Štěpán Timr      

OPEN-33-48   

Role of the C-terminal Tail in the Allosteric Regulation of Phosphofructokinase-1 

Barbora CPU  Alloc=25000;  Karolina CPU  Alloc=2300        

Glycolysis is the vital process through which cells extract energy from glucose, powering everything from muscle movement to brain function. At the heart of this process lies phosphofructokinase 1 (PFK1), a “gatekeeper” enzyme that determines whether glucose will be broken down for energy. Our computational study aims to uncover how PFK1 is regulated at the molecular level, focusing on a flexible segment of the enzyme, known as the C-terminal tail (CTT). This region acts like a switch, controlling whether PFK1 is active or inactive. Using advanced molecular dynamics (MD) simulations, we will explore how the CTT moves and interacts with the enzyme and how this influences the binding of activator molecules like AMP (a signal for low energy). Our findings will not only help clarify how the regulation of PFK1 works but also help scientists design drugs to target diseases like diabetes, obesity, and cancer, where energy production goes awry.                                                                      

Researcher: Pavel Eichler      

OPEN-33-49   

First stage of construction of a large-scale library and development of computational tools to predict selectivity in C-H bond cleavage

Barbora CPU  Alloc=4000;  Barbora GPU  Alloc=1000;  Karolina CPU  Alloc=1400 

The rapid progress in syntheses based on aliphatic C-H bond cleavage highlights a significant gap: there is a lack of a quantitatively predictive and user-friendly theoretical framework to guide the design of efficient synthetic routes. Although theoretical advances have yielded many insights, most efforts have focused on rationalizing experimental results or exploring the mechanistic diversity of coupled proton and electron transfer reactions, including C-H bond cleavage. This project aims to address this gap by developing a theory and methodology to accurately predict the selectivity of H-atom abstraction. The proposed approach exploits recently discovered extra-diagonal thermodynamic factors - frustration and asynchronicity - that are experimentally and computationally accessible. The main objectives include (i) uncovering the physical reasons for their variable dominance in reaction systems, (ii) understanding their interaction with other non-thermodynamic factors affecting reactivity, and (iii) designing chemical strategies to optimize catalysts for highly selective functionalization of C-H bonds. This project focuses mainly on the extension of a large database of reactions using HPC. The resulting database will serve as a source for the evaluation of several thermodynamic models and the possible development of new models based on, e.g., AI. These findings will provide fundamental insights into the theory necessary for understanding the thermodynamics of chemical reactions.

Researcher: Charalampos Tzamos    

OPEN-33-5     

Machine Learning and Approximate Solutions to Camera Geometry Problems    

Karolina CPU  Alloc=900;  LUMI-C  Alloc=200;  LUMI-G  Alloc=3000           

Camera geometry estimation is a fundamental task in computer vision, often relying on point correspondences as input to determine the camera parameters and their spatial configuration. This problem typically involves estimating models through a hypothesis-and-test framework, such as RANSAC, to robustly identify correct correspondences. A critical aspect of such optimization schemes is to require as few input correspondences as possible to estimate the model, which directly impacts the computational efficiency and robustness. Many camera geometry problems result in complex equations, for which there are no efficient or numerically stable solutions. In this project, we work towards providing approximate solutions to complex problems of camera geometry estimation, aiming to reduce the complexity of the studied problem configurations, which can lead to more efficient algorithms. We investigate refinement techniques to transform initial approximate solutions into highly geometrically-accurate ones.                                                            
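The hypothesis-and-test idea referred to above can be summarised by the following generic RANSAC-style loop (plain NumPy, with a toy line-fitting example); the key point is that the sample size drawn per hypothesis, i.e. the size of the minimal or approximate solver's input, directly controls how many iterations are needed.

import numpy as np

def ransac(data, fit_model, residuals, sample_size, threshold, n_iters=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    best_model, best_inliers = None, np.zeros(len(data), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(data), size=sample_size, replace=False)
        model = fit_model(data[idx])                   # minimal (or approximate) solver
        inliers = residuals(model, data) < threshold   # test the hypothesis on all correspondences
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Toy usage: robust 2D line fitting from 2-point minimal samples.
x = np.linspace(0, 1, 100)
pts = np.vstack([np.c_[x, 2 * x + 0.01 * np.random.randn(100)], np.random.rand(30, 2)])
fit = lambda s: np.polyfit(s[:, 0], s[:, 1], 1)
res = lambda m, d: np.abs(np.polyval(m, d[:, 0]) - d[:, 1])
line, mask = ransac(pts, fit, res, sample_size=2, threshold=0.05)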

 

Researcher: Ondrej Vlcek      

OPEN-33-50   

Retrieval of air quality annual statistics from limited number of LES model simulations as part of project ARAMIS          

Karolina CPU  Alloc=57900    

Poor air quality can lead to serious health problems among the population, making air quality monitoring and assessment an important task. Current annual air quality mapping at the national level, required by legislation, has a resolution of 1x1 km, which is not sufficient in urban areas. Complex microscale models are able to simulate air pollution at very fine resolution (on the order of meters), but, due to computational costs, only over a few short periods of time. These limitations have led to the proposal of a statistical method which allows the map of annual statistics (for example, the annual mean) to be “reconstructed” from simulations of selected “typical” days. These “typical” days represent specific “types” of meteorological conditions and air pollution concentrations observed in the considered urban area. The method was developed using observational data (Pikousová et al. 2024) and preliminarily tested on already existing simulations, which, however, did not fully correspond to the typical days suggested by the method. The aim of the project is to carry out dedicated simulations of the typical days for the validation of the method. Properly reconstructed maps of annual statistics can then support the right decisions to achieve the goal of cleaner air in cities.
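Schematically, and under the simplifying assumption that the annual statistic is approximated as a frequency-weighted combination of the typical-day fields (the actual weighting of Pikousová et al. 2024 may differ), the reconstruction can be illustrated as follows:

import numpy as np

def reconstruct_annual_mean(day_maps, day_type_frequency):
    """day_maps: dict type -> 2D concentration field from a dedicated simulation;
    day_type_frequency: dict type -> fraction of the year assigned to that day type."""
    total = sum(day_type_frequency.values())
    annual = np.zeros_like(next(iter(day_maps.values())), dtype=float)
    for day_type, field in day_maps.items():
        annual += (day_type_frequency[day_type] / total) * field
    return annual

# Toy example with three hypothetical meteorological day types on a fine grid.
maps = {t: np.random.rand(200, 200) for t in ("stable", "neutral", "convective")}
freq = {"stable": 0.25, "neutral": 0.55, "convective": 0.20}
annual_mean = reconstruct_annual_mean(maps, freq)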

 

Researcher: Anagha Sasikumar        

OPEN-33-51   

Theoretical calculations of EPR and pNMR parameters for paramagnetic solids   

Karolina CPU  Alloc=2000      

Magnetic resonance (MR) techniques play a key role in the development of novel materials with tunable properties such as solar cells. Both Electron Paramagnetic Resonance (EPR) and Nuclear Magnetic Resonance (NMR) spectra of these materials contain valuable structural and dynamical information. The goal of this project is to utilize theoretical simulations and develop computational strategies to understand the phenomena contributing to the EPR and paramagnetic NMR spectra of materials and provide a robust framework for their accurate prediction and interpretation. State-of-the-art theoretical tools with accurate descriptions of scalar-relativistic and spin-orbit effects as implemented in the program ReSpect will be used to decipher and interpret the MR spectra. The EPR parameters (electronic g-tensor, hyperfine A-tensor) and pNMR shifts will systematically be investigated for metal halide perovskites (MHP) doped with paramagnetic ions (e.g., Fe, Cr, Ru) to interpret the experimental observations.             

                                              

Researcher: Rostislav Langer

OPEN-33-52   

Precision Engineering of Single Atoms in Organic Systems: Theoretical Insights for Catalysis and Electrochemistry       

Karolina CPU  Alloc=50500

The growing need for sustainable and efficient technologies in energy conversion and chemical production calls for breakthroughs at the atomic level. Emerging single-atom catalysts (SACs) offer unparalleled activity, selectivity, and resource efficiency by maximizing the utilization of active metal sites. SACs anchored on advanced two-dimensional organic frameworks promise to revolutionize fields such as catalysis and electrochemistry by reducing reliance on scarce noble metals while enhancing performance. However, fundamental questions regarding their structural stability, adsorption behavior at electrodes, energy barriers, and catalytic pathways remain unanswered. Our research explores two fields essential for the possible industrial application of SAC systems. The first area focuses on the adsorption and reactivity of zinc single atoms on organic-based electrode materials. The second area deals with carbon nitride systems embedded with single atoms, aiming to enhance their catalytic performance across various chemical reactions. To achieve our goals, we plan to employ quantum mechanical simulations using advanced tools such as VASP and Gaussian. These computational studies will allow us to model the atomic-level interactions and predict material properties with high precision. The insights gained from this work will provide a theoretical foundation for experimental studies and guide the development of next-generation catalytic materials and electrochemical devices.

                                                        

Researcher: Georgios Kordopatis-Zilos         

OPEN-33-53   

Learning a Universal Similarity Function for Vision applications – LUSt-Vision      

LUMI-G  Alloc=21000 

Multimedia content is indispensable in our society, necessitating effective content management. A critical aspect of this is assessing the similarity between two multimedia items like images, videos, and documents. Our mission is to learn a universal similarity function (LUSt) capable of precisely measuring similarity across a broad spectrum of multimedia domains and tasks. Diverging from traditional problem-specific approaches prevalent in current literature, we adopt a novel strategy. With LUSt, we plan to break down multimedia items into constituent parts, including image regions, video frames, and text sentences. Subsequently, a foundational model will be trained on input data comprising part similarities across various multimedia items. This strategic choice yields a universal input space with multiple advantages. In the first stage of this venture, the objective is to develop LUSt-vision, a representation-agnostic similarity network meticulously designed for vision-based applications. This involves a similarity model acting on local similarities of images and videos. Such model architecture is grounded in transformer-based deep learning modules and will be fortified by pioneering positional encodings rooted in kernel methods. These positional encodings empower us to effectively manage the differing topologies encountered across diverse domains—a formidable challenge. This will pave the way for a universal similarity foundation model for Computer Vision.                                                                  

 

Researcher: Rafael Dolezal   

OPEN-33-54   

Tracking ligand-induced conformational changes of the ionotropic cold-sensing TRPM8 channel: AI-enhanced searching for potential TRPM8 antagonists  

Karolina CPU  Alloc=1000;  Karolina GPU  Alloc=900;  LUMI-C  Alloc=3000;  LUMI-G  Alloc=2500

The rapid expansion of artificial intelligence (AI) models across the global population has raised challenging questions, particularly for modern drug design strategies that leverage similar computational tools to interpret and generate novel drugs in symbolic chemical language. Can drug development become more efficient with the integration of AI and supercomputers? This project aims to address this question by evaluating 1 billion chemical compounds using AI to identify new antagonists for the human TRPM8 receptor. This receptor is involved in the sensation of cool temperatures and plays a specific role in diseases such as migraines and prostate cancer. A portion of the input data for training the AI model will come from biochemical experiments, while additional data will be generated through high-throughput molecular docking. Ultimately, the project will identify several top-performing drug candidates with selective activity against TRPM8 for biological testing to validate their pharmacological potential. By advancing the discovery of innovative treatments, this research could lead to new therapeutics for neuropathic pain and certain types of cancer.                                                                  
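One common surrogate-screening pattern consistent with this description (though not necessarily the project's actual AI model) is to train a cheap regressor on docking scores of a small docked subset and use it to rank a much larger library; the sketch below uses RDKit Morgan fingerprints and a random forest purely for illustration, with made-up SMILES and scores.

import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

def fingerprint(smiles, n_bits=2048):
    fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=n_bits)
    arr = np.zeros((1,))
    DataStructs.ConvertToNumpyArray(fp, arr)           # dense 0/1 vector for the regressor
    return arr

train_smiles = ["CCO", "c1ccccc1", "CC(=O)Nc1ccc(O)cc1"]        # hypothetical docked subset
train_scores = np.array([-4.1, -5.3, -6.8])                      # hypothetical docking scores

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(np.vstack([fingerprint(s) for s in train_smiles]), train_scores)

library = ["CCN", "c1ccncc1", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]      # tiny stand-in for a large library
preds = model.predict(np.vstack([fingerprint(s) for s in library]))
ranked = [s for _, s in sorted(zip(preds, library))]             # most favourable predicted scores first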

 

Researcher: Marketa Paloncyova     

OPEN-33-55   

Interaction of π-conjugated compounds with nucleic acids

Karolina CPU  Alloc=9600;  Karolina GPU  Alloc=2100;  LUMI-G  Alloc=18000       

Nucleic acids contain genetic information, encode individual proteins, and participate in important biological phenomena. Evaluating the effect of xenobiotics in our body on the structure and function of nucleic acids is of the utmost interest for their targeted design and safety. A significant part of nucleic acid-targeting drugs contains a planar π-conjugated moiety, able to stack on nucleobases, embed into nucleic acid grooves, or even intercalate between the base pairs. Understanding the mechanism of action of these drugs can then be extrapolated to the evaluation and prediction of the effect of currently studied nanomaterials. Our preliminary simulations show that carbon dots interact with nucleic acids but do not intercalate into or harm them. The aim of this project is to perform a systematic study of the mechanism of interactions of nucleic acid-targeting drugs with several nucleic acid types and, once the mechanism of drug action is understood, to extrapolate the gained insight to carbon nanomaterials.

                                                          

Researcher: Petra KÜHROVÁ 

OPEN-33-56   

Fine-tuning a methodological approach to building lipid-mediated delivery systems       

Karolina CPU  Alloc=2800;  Karolina GPU  Alloc=1100;  LUMI-G  Alloc=32200       

Lipid-mediated delivery of active pharmaceutical ingredients (APIs) offers significant opportunities in advanced therapies. Encapsulation within lipid nanocarriers (LNCs) enables the delivery of poorly water-soluble compounds, reduces severe side effects, and stabilizes fragile molecules such as nucleic acids. However, the rational design of LNCs is hindered by an incomplete understanding of the relationships between their composition, structure, and function. Classical molecular dynamics (MD) simulations provide insights into LNC behavior but are constrained by limited timescales, exploring only a small portion of conformational space near the starting structure. Rare events and slow transitions, critical for understanding functional properties, often remain inaccessible. Enhanced sampling techniques address these limitations by introducing biases that allow broader exploration of conformational landscapes. Methods such as collective variable-based and temperature-based approaches enable simulations to overcome energy barriers and access otherwise unreachable states. Despite their potential, enhanced sampling methods are rarely applied to LNCs due to computational challenges and the complexities of simulation setup. Proposing a methodological framework for applying these techniques could deepen the understanding of LNC dynamics and advance the rational design of more efficient lipid-based delivery systems.           

                                                  

Researcher: František Fňukal

OPEN-33-57   

Validation of the CSD crystal structure database using the DFT method    

Karolina CPU  Alloc=1800      

The Cambridge Structural Database (CSD) is the largest repository of organic and metal-organic crystal structures, including pharmaceuticals. It can be used for studying molecular and crystallographic phenomena, predicting crystal structures, or designing and optimizing pharmaceutical compounds. In our work, we discovered that the CSD potentially contains a large number of erroneous entries. The issues found range from minor inconsistencies to serious errors in crystal structure determination. This is due to poor data validation, a consequence of the rapid growth of the CSD. To validate a crystal structure, we perform a quantum chemical calculation based on the plane-wave density functional theory (DFT) method. The results of such calculations allow us to identify discrepancies indicated by the quantum theory. The goal of our work is to design a validation scheme for the contents of the CSD and ensure that its users can rely on trustworthy, high-quality data.

                                           

Researcher: Silvie Illesova     

OPEN-33-58   

Optimization Techniques in Molecular Simulations Using VQE       

Barbora CPU  Alloc=11500    

Quantum computing holds great promise for solving complex molecular electronic structure problems. This research focuses on analyzing the behavior of various optimization methods applied to two advanced variational quantum algorithms: the State-Averaged Orbital-Optimized Variational Quantum Eigensolver (SA-OO-VQE) [1,2] and VQE with the Variational Hamiltonian Ansatz (VHA) [3]. Using statevector simulations, we will compute the ground and excited states of molecules such as H2, the H4 chain, LiH, and formaldimine. The second phase of the project will incorporate sampling noise to assess the robustness of these optimization techniques for further deployment on real quantum hardware. The study aims to evaluate convergence efficiency, stability, and accuracy across different optimization strategies, providing insights into their performance in varying quantum problem setups. This research is expected to shed light on the interplay between optimization methods and variational algorithms, guiding the development of more efficient quantum computational strategies. These findings have implications for quantum chemistry and beyond, accelerating the practical adoption of quantum technologies in scientific and industrial applications.
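Purely as an illustration of the kind of optimizer comparison planned here, the toy loop below runs several classical optimizers on a smooth multi-parameter stand-in for a variational energy (it is not an actual SA-OO-VQE or VHA statevector energy):

import numpy as np
from scipy.optimize import minimize

def toy_energy(theta):
    """Smooth surrogate for a variational energy landscape (illustration only)."""
    return np.sum(np.cos(theta) * np.sin(2 * theta)) + 0.1 * np.sum(theta**2)

theta0 = np.full(8, 0.3)                                # initial variational parameters
for method in ("COBYLA", "Nelder-Mead", "BFGS", "L-BFGS-B"):
    result = minimize(toy_energy, theta0, method=method)
    print(f"{method:12s} E = {result.fun:+.6f}  function evaluations = {result.nfev}")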

                                                

Researcher: Radim Špetlík    

OPEN-33-59   

Identity Verification using GCxGC ToF Mass Spectrograms of Human Scent          

LUMI-G  Alloc=34800 

Modern machine learning, particularly deep learning techniques from computer vision, offers tools for analyzing GCxGC-TOF-MS data. GCxGC-TOF-MS effectively separates and identifies components in complex chemical mixtures but generates large datasets that are challenging for traditional chemical compound-oriented analysis methods. Deep learning models, such as convolutional neural networks (CNNs), can manage these datasets by directly extracting features from raw data, addressing the high complexity of GCxGC-TOF-MS outputs and enabling accurate and automated identity verification with minimal manual intervention.
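A minimal sketch of this idea, assuming the chromatogram is treated as a single-channel 2D image and identity is verified by comparing learned embeddings, might look as follows (architecture, input size, and names are illustrative assumptions):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ScentEmbedder(nn.Module):
    """Small CNN that maps a GCxGC-TOF-MS image to a unit-length embedding."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, embed_dim)

    def forward(self, x):
        z = self.head(self.features(x).flatten(1))
        return F.normalize(z, dim=-1)

model = ScentEmbedder()
a, b = torch.randn(1, 1, 256, 256), torch.randn(1, 1, 256, 256)   # two dummy chromatograms
verification_score = (model(a) * model(b)).sum(dim=-1)            # cosine similarity of embeddings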

                                                          

Researcher: Lukáš Burget      

OPEN-33-6     

Fully End-to-End Multi-Channel, Multi-Talker Speech Recognition Leveraging Large Pre-trained Models

Karolina CPU  Alloc=100;  Karolina GPU  Alloc=10000;  LUMI-C  Alloc=1000;  LUMI-G  Alloc=20000        

Modern speech recognition systems can transcribe single-talker audio in controlled environments with remarkable accuracy thanks to advances in deep learning and availability of large-scale data. However, recognizing speech in complex scenarios like multi-party conversations with overlapping speakers and multiple microphones remains unsolved mainly due to scarcity of annotated real-world multi-talker conversational data. We propose leveraging large pre-trained foundation models, originally designed for simpler tasks but trained on extensive data, to build a unified system capable of handling multi-talker multi-microphone environments. By adapting models such as OpenAI’s Whisper and integrating subsystems such as speaker diarization and source separation into a cohesive framework, we aim to overcome limitations of current pipelined solutions. Building on our prior success in adapting pre-trained models for multi-talker and multi-microphone tasks, we will explore subsystem design, model extension, and end-to-end integration to create an advanced, adaptable solution. The outcome could significantly enhance automatic speech recognition capabilities in complex real-world settings.                                                                 

 

Researcher: Fabien Jaulmes  

OPEN-33-60   

Computational modelling of fast ion orbits and their consequences in tokamak  

Barbora CPU  Alloc=8600;  Karolina CPU  Alloc=200

Nuclear fusion will enable us to generate energy without releasing large amounts of greenhouse gases into the atmosphere or leaving behind long-lived radioactive waste. The tokamak concept involves the use of magnetic fields to confine plasma hot enough to sustain fusion within itself. COMPASS Upgrade (COMPASS-U) will be a high-magnetic-field (5 T) tokamak that will allow the scientific investigation of various physics issues related to the operation of the future ITER. In particular, an 80 keV Neutral Beam Injection (NBI) system is planned to heat the plasma with 4 MW of external power. Such a unit was tested on the COMPASS tokamak before its shutdown, and our modelling contributed to the interpretation of the results. The study and modelling of NBI-born particle behavior is of great relevance: it might influence the future design of the system and its integration in the overall reactor design. We request computational time for the modelling of the interaction of the fast particles and for the design of fast-ion-related diagnostics. Our code, EBdyna, with its new collisional features, was benchmarked against the NUBEAM code on several test cases. A publication in the Nuclear Fusion journal summarizes the results of our initial modelling effort.

                                           

Researcher: Michael Komm  

OPEN-33-61   

Particle-in-cell simulations of inverse sheath parameter space      

Karolina CPU  Alloc=7400      

The inverse sheath is a unique regime of the plasma-wall boundary which has recently been predicted by numerical simulations. An inverse sheath confines plasma ions and as such could have far-reaching consequences, for example for the erosion of plasma-facing components in nuclear fusion reactors. Access to this regime remains speculative, since no experimental evidence has been achieved yet. Recently, our simulations have identified Coulomb collisions in high-density magnetised plasmas as a possible mechanism which prevents access to the inverse sheath. Within this project, we plan to map the parameter space of inverse sheath access to determine its relevance for nuclear fusion research.

                                                

Researcher: Michal Cifra       

OPEN-33-62   

Effect of electric field on wild-type and PTM variants of tubulin dimers    

LUMI-G  Alloc=10700 

Proteins are tiny molecular machines essential for life, performing tasks like breaking down food, transporting materials inside cells, and enabling cell division. These functions depend on the protein's shape, which can be influenced by external factors like chemicals, temperature, pressure, electric fields etc. Our project explores how strong electric fields (100–300 MV/m) affect the structure and behavior of tubulin, a building block of microtubules, which are key to processes like cell shape and movement. Tubulin is especially interesting because it carries a high electric charge, making it more sensitive to electric fields. Additionally, tubulin can undergo post-translational modifications (PTMs), such as polyglutamylation, where extra glutamate molecules, each carrying additional electric charge, are added to specific sites. These modifications are crucial for regulating interactions between microtubules and other cellular components, but how they affect tubulin's behavior under electric fields is not well understood. Using advanced computer simulations, we will study how tubulin’s structure and function change when exposed to electric fields. We’ll also investigate how modifications like polyglutamylation impact these effects. This research could improve our understanding of how electric fields influence biological molecules and offer insights into the diverse roles of PTMs of tubulin in health and disease.   

                                                               

Researcher: Kateřina Skotnicová      

OPEN-33-63   

Atomistic modelling of NiFeCoCr-based complex concentrated alloys via ab initio and molecular dynamics simulations    

Karolina CPU  Alloc=7500      

Complex Concentrated Alloys (CCAs) are an emerging class of advanced materials characterized by their unique composition, consisting of multiple principal elements (typically three or more) in near-equiatomic proportions. Unlike traditional alloys dominated by one or two primary elements, CCAs exhibit a high level of chemical complexity and atomic-scale compositional disorder. This complexity imparts exceptional mechanical, thermal, and chemical properties, making CCAs highly attractive for advanced applications. This computational research aims to study phase formation, chemical order and its effect on deformation mechanisms at the atomic scale in NiFeCrCo-based CCAs using Hybrid Monte Carlo/Molecular Dynamics (MC/MD) simulations and quasi-random structures (SQSs)/cluster expansion density functional theory (DFT) simulations. Additionally, the study will evaluate the influence of alloying elements such as Al, Ti, Ta, and Nb on the formation of FCC + L1₂ dual-phase structures, contributing to the development of CCAs with tailored properties for high-performance applications.                                                                   

 

Researcher: Rabindranath Lo

OPEN-33-64   

Free Energy Investigation of Hydrogen-Bonded Systems in Solvent Environments

Barbora CPU  Alloc=5000;  Barbora GPU  Alloc=100;  Karolina CPU  Alloc=9900;  Karolina GPU  Alloc=3000;  LUMI-C  Alloc=3700;  LUMI-G  Alloc=7300   

The project aims to improve our understanding of how solvent polarity influences the stability of hydrogen-bonded complexes using computational methods. Hydrogen-bonded systems play a crucial role in many chemical, biological, and material processes, and understanding their behavior in solvent environments is essential for accurate modeling. Free energy analysis provides insights into the stability, interactions, and dynamic properties of such systems, particularly in complex environments like solvents. The computational analyses will be closely integrated with experimental work conducted by experimentalists utilizing state-of-the-art techniques. This collaboration has the potential to significantly contribute to our comprehension of how solvents impact the stability of complexes, with broad practical applications across diverse fields. Utilizing specific DFT functionals, the project will assess the electronic properties of different complexes, considering both implicit and explicit solvent environments.                                                                    

 

Researcher: Jan Komeda       

OPEN-33-65   

Ab initio simulations of novel Surface Enhanced Raman Spectroscopy substrates (SERSMaterials)

Karolina CPU  Alloc=1100      

Surface Enhanced Raman Spectroscopy (SERS) is a highly sensitive and selective technique that greatly enhances the signal of an analyte, compared with its signal from classical Raman Spectroscopy, due to its interaction with a substrate’s surface. It has been shown that boron doped graphene can be used as a substrate in Surface Enhanced Raman Spectroscopy of simple organic molecules. Recent studies also suggest that graphene can remain stable when doped with significantly higher concentrations of boron than previously observed. Since B-doped graphene displays a higher enhancement factor when compared with pristine graphene, this material warrants further investigation. In this framework, with this project we aim to use quantum mechanical simulations to investigate the influence of dopant concentration and geometric distribution on the effectiveness of B-doped graphene as a SERS substrate.                                                                    

 

Researcher: Pavel Hobza       

OPEN-33-66   

Covalent Dative Bonding, Ionic, H-Bonding, and Charge Transfer Complexes: Surprising Stability/Instability Trends with Increasing Solvent Polarity       

Barbora CPU  Alloc=15000;  Karolina CPU  Alloc=58400;  Karolina FAT  Alloc=200;  Karolina GPU  Alloc=7300;  LUMI-C  Alloc=33100;  LUMI-G  Alloc=19100           

This project focuses on deepening our understanding of how solvent polarity affects the stability of covalent dative and non-covalent complexes through computational studies. These theoretical analyses are designed to complement cutting-edge experimental investigations performed by collaborators using advanced methodologies. By bridging computational and experimental approaches, the study aims to provide valuable insights into the role of solvents in stabilizing various complexes, which has wide-ranging implications in multiple disciplines. Leveraging specific density functional theory (DFT) methods, the research will evaluate the electronic and optical properties of complexes within both implicit and explicit solvent frameworks.                                                                    

 

Researcher: Vasileios Psomas           

OPEN-33-67   

Retrieval-Augmented Vision-Language Models for Open-Vocabulary Segmentation (RAVLOS)

LUMI-G  Alloc=25000 

How do self-driving cars safely navigate busy streets? How do satellites monitor environmental changes or assist in disaster relief? At the heart of these innovations lies visual scene understanding—the ability of technology to interpret complex images and make sense of the world. However, today’s systems often fail when encountering unfamiliar objects or scenarios, limiting their potential in dynamic, real-world environments. RAVLOS (Retrieval-Augmented Vision-Language Models for Open Vocabulary Segmentation) sets out to change this. Inspired by the way humans recall memories to better understand new situations, RAVLOS combines advanced artificial intelligence models with a powerful memory system that stores examples of objects and scenes. By using this memory, the system can adapt on the fly, recognizing not only generic objects like “cat” or “vehicle” but also highly specific ones like “Egyptian cat” or “electric scooter”. This breakthrough enables more accurate and detailed visual understanding, whether in autonomous vehicles, medical diagnostics, or environmental monitoring. The impact of RAVLOS could transform industries and improve lives—safer autonomous vehicles, smarter assistive technologies for people with disabilities, and better tools for protecting our planet. By teaching machines to see and understand the world as we do, RAVLOS brings us closer to a future where technology seamlessly enhances our daily lives.                                                                  

 

Researcher: Jan Rezac

OPEN-33-68   

Synergy of semiempirical QM chemistry and machine learning     

Barbora CPU  Alloc=53000;  Karolina GPU  Alloc=4900       

We are developing a novel computational chemistry method based on semiempirical quantum-mechanical (SQM) calculations and state-of-the-art machine learning. The present version of the method, PM6-ML, is being published and is already available as a preprint. In terms of accuracy, PM6-ML outperforms both the existing SQM methods and standalone ML potentials, and it opens unique applications to large molecular systems, including biomolecules relevant to computer-aided drug design. We are now developing the method further by improving the ML model and enlarging the training data set. We expect the next-generation method to be even more accurate and versatile while conserving its excellent scaling to large calculations.
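Conceptually, the SQM+ML combination is a Δ-learning scheme: a machine-learned correction is added on top of the cheap semiempirical energy. The sketch below illustrates only this generic idea with a synthetic dataset and a simple kernel model; the descriptors and regressor are stand-ins, not the published PM6-ML model.

import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Synthetic "training set": per-structure descriptors, SQM energies, and reference energies.
descriptors = np.random.rand(200, 32)
e_sqm = np.random.rand(200)
e_ref = e_sqm + 0.05 * np.sin(descriptors.sum(axis=1))   # synthetic stand-in for the missing physics

# Learn only the difference between the reference and the SQM baseline.
delta_model = KernelRidge(kernel="rbf", alpha=1e-3).fit(descriptors, e_ref - e_sqm)

def corrected_energy(descriptor, sqm_energy):
    """Δ-learning prediction: cheap SQM energy plus the learned correction."""
    return sqm_energy + delta_model.predict(descriptor.reshape(1, -1))[0]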

                                                                      

Researcher: David Sabatini   

OPEN-33-69   

Discovery of novel nutrient sensors by in-silico screen using biomolecular Interaction models   

LUMI-G  Alloc=6000   

Every cell in our body acts as a microscopic sentry, constantly assessing its environment for essential nutrients like sugars, amino acids, and fats. This vigilant monitoring is driven by nutrient-sensing signaling pathways—intricate communication networks that regulate how cells grow, divide, and adapt to changing conditions. These pathways are vital for maintaining health, but when disrupted, they can lead to a range of diseases, including diabetes, cancer, and neurodegenerative disorders. In previous studies, we identified Sestrin2, CASTOR1, and SAMTOR as the leucine, L-arginine, and S-adenosylmethionine (SAM) sensors in human cells, respectively. We also identified Unmet as a new SAM sensor in fly cells and established a theoretical model of nutrient sensor evolution. We want to leverage the mTORC1 nutrient-sensing signaling pathway to discover more novel nutrient sensors in special niches of the evolutionary tree. In this project, we strive to leverage the power of large-scale PPI prediction by AlphaFold3 to test our theoretical model and drive the discovery of new nutrient sensors.

 

Researcher: Jan Priessnitz     

OPEN-33-7     

Search and modeling of unconventional magnetic materials         

Barbora CPU  Alloc=6200;  Barbora FAT  Alloc=100;  Barbora GPU  Alloc=2000;  Karolina CPU  Alloc=3100;  Karolina FAT  Alloc=100;  Karolina GPU  Alloc=600;  LUMI-C  Alloc=2300;  LUMI-G  Alloc=2500   

The recent discovery of altermagnets has opened up a large area in magnetism with strong application potential in spintronics. However, altermagnets with d-, g-, or i-wave spin-polarized order are not the only recently discovered and promising class of unconventional magnets. Other classes, such as s±-wave magnets, p-wave magnets, and other non-collinear and non-coplanar magnetic structures, have lately been theoretically investigated, with potential use in spintronics and nanoelectronics as well. This project will focus on these novel magnets and perform the next step from theory to application – a search for promising material candidates and prediction of their properties using density functional theory (DFT). Ideally, this will lead to a collaboration with an experimental group with the aim of experimentally demonstrating the novel magnetic properties desirable in applications.

                                                          

Researcher: Zdeněk Futera   

OPEN-33-70   

Quantum Calculations of Non-Linear Optics Parameters for Liquid Water

Barbora CPU  Alloc=10000;  Karolina CPU  Alloc=4900

Non-linear optical techniques such as Second Harmonic Generation (SHG) and Sum Frequency Generation (SFG) are non-invasive methods ideal for probing the molecular structure of interfaces. They can provide detailed information about the interactions of biological materials and solid surfaces with liquids and about the molecular arrangement of adsorbed solvent molecules on such interfaces. However, interpreting the measured spectra requires theoretical input and atomistic modeling, often combining molecular dynamics simulations and quantum calculations of the optical parameters, i.e., the molecular polarizabilities and hyperpolarizabilities. The accuracy of these parameters is crucial for the quality of the predicted spectral signals and the correct analysis of the measured spectra. Here, we aim to obtain these data by embedded quantum calculations within the density-functional theory framework to provide accurate optical parameters for liquid water, which plays a prominent role in biology and electrochemistry.

 

Researcher: Bedřich Smetana           

OPEN-33-71   

Toward Pareto Optimal Alloy Electrocatalysts         

Karolina CPU  Alloc=33700;  Karolina GPU  Alloc=1700       

The global energy landscape in the first quarter of the 21st century was marked by increasing energy demand, dwindling fossil fuel reserves, fluctuating prices, and intensifying geopolitical tensions. The world is starved for a better energy solution, and hydrogen energy is one of the most promising candidates. Producing hydrogen from water, i.e., water splitting, and utilizing hydrogen to generate electricity require a Platinum catalyst, and the price of this precious metal prevents the wide adoption of hydrogen energy. Here, we propose a Lanthanum (La) based ternary alloy with the other candidate elements, including Nickel (Ni), Tin (Sn), Magnesium (Mg), Indium (In), and Copper (Cu). The vast space of alloys, considering different elements and compositions, makes exhaustive experimental screening prohibitive, and rigorous first-principles calculations have been considered impossible due to the huge computational costs. We propose to use the supercomputers at IT4Innovations to discover and optimize the best La-based ternary alloy for the Hydrogen Evolution Reaction (HER) and the Oxygen Reduction Reaction (ORR); the former produces hydrogen in water splitting, while the latter is the rate-determining step in a fuel cell. We will use CP2K, already installed on Karolina, to evaluate synthesizability using Ab Initio Molecular Dynamics, followed by surface reaction calculations to assess catalytic efficiency. Lastly, we will use Bayesian optimization to find the Pareto optimum of synthesizability, catalytic efficiency, and cost. In addition, a machine learning interatomic potential will be trained as a surrogate model for the alloy system to predict synthesizability and catalytic efficiency within a few milliseconds.
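For the multi-objective step, a small helper of the following kind can identify the Pareto-optimal candidates once each alloy has scores for synthesizability, catalytic efficiency, and (negated) cost; this is an illustrative utility, not the project's workflow.

import numpy as np

def pareto_mask(scores):
    """scores: (n_candidates, n_objectives), all oriented so that larger is better.
    Returns a boolean mask marking the Pareto-optimal rows."""
    n = scores.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # Candidate i is dominated if some other row is >= in every objective and > in at least one.
        dominated = np.all(scores >= scores[i], axis=1) & np.any(scores > scores[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

# Toy example: columns = (synthesizability, catalytic efficiency, -cost).
candidates = np.array([[0.8, 0.6, -1.0],
                       [0.7, 0.9, -1.2],
                       [0.6, 0.5, -1.5],
                       [0.9, 0.4, -2.0]])
print(np.where(pareto_mask(candidates))[0])   # indices of the Pareto-optimal alloys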

                                                              

Researcher: Dominik Čáp      

OPEN-33-72   

Optimizing Laser Wakefield Acceleration for Enhanced Betatron X-ray Radiation 

Karolina CPU  Alloc=2300      

In recent decades, advancements in laser technologies have enabled the development of novel particle acceleration techniques. One of the most promising methods is so-called laser wakefield acceleration, which accelerates electrons in the plasma wave generated by an intense laser pulse propagating in a gas jet. These electrons then oscillate in the electric field of the wave and generate bright, ultrafast X-ray pulses. This offers a compact and spatially coherent X-ray source suitable for many applications, such as phase contrast imaging and X-ray time-resolved spectroscopy, but it is limited by its low conversion efficiency and relatively low photon flux. Our work focuses on optimizing this source to improve efficiency and enhance flux by modulating the driver pulse or adding an additional, higher-density gas jet. The particle-in-cell simulations conducted in this project aim to provide a deeper theoretical understanding of the process and ultimately aid in the optimization of experimental sources for various medical and imaging applications. Parameters are chosen to correspond to the capabilities of the Gammatron beamline at ELI Beamlines for future experiments.

                                                         

Researcher: Michael Bakker 

OPEN-33-73   

Decoding the Intricacies of p53TAD: Uncovering the Role of a Conserved Disordered Region     

Barbora CPU  Alloc=2000;  Barbora GPU  Alloc=2000;  Karolina CPU  Alloc=1700;  LUMI-G  Alloc=10800

The beginning of p53 holds a structured and unstructured region known as the transactivation domain (p53TAD). This section is pivotal in dictating how the protein interacts with itself and with co-activating enzymes, displaying intricacy and precision. Despite extensive research on p53TAD, there remains a significant knowledge gap regarding the importance of its conserved disordered tail across various species. This gap needs to be addressed to gain a complete understanding of p53TAD. By combining molecular dynamics trajectories with experimental chemical shifts, our study aims to shed light on the structural diversity of p53TAD and assess the impact of the disordered region and post-translational modifications on its conformational landscape. In addition to a better functional understanding of the p53 protein, we will incorporate the novel SASAP technique to map the protein's conformational landscape based on a specific residue's solvent accessibility angle.

                                              

Researcher: Ctirad Červinka  

OPEN-33-74   

Ab initio Monte Carlo simulations of bulk molecular liquids, glasses, their phase transitions and volatility        

Karolina CPU  Alloc=62900;  Karolina FAT  Alloc=250;  LUMI-C  Alloc=23700         

Molecular materials represent a widespread class of compounds ranging from drugs and semiconductors to biogenic compounds and solvents. Amorphous forms of molecular materials include liquids and the glassy solid state, both lacking any long-range regular structure. The significance of molecular liquids is obvious, as all Earth-bound life depends on water. Furthermore, uncountable chemical processes take place in solution, relying typically on a molecular solvent. For organic molecular solids, the amorphous forms have long been overlooked, although they possess several potentially beneficial characteristics, such as higher solubility (novel drug formulations), or are easier to manufacture (thin semi-crystalline semiconductor films). While it is relatively straightforward to model properties of a well-defined crystal with quantum-chemical methods, there are important knowledge and applicability gaps in the amorphous field. State-of-the-art quantum-chemical simulations may also fail qualitatively when comparing properties of multiple forms of a molecular material. For example, it is relatively difficult to predict from first principles that water ice should float on liquid water, not sink. As such, classical molecular-dynamics simulations, relying on semi-empirical force-field models with limited predictive power, still prevail as the principal tool to study liquids and glasses. The current proposal aims at the development of novel simulation techniques, requiring minimal empirical input and maximizing predictive power, to enable a highly accurate quantum-chemical description of molecular liquids and glasses at an acceptable computational cost.

                                                         

Researcher: Samuel Lukeš     

OPEN-33-75   

ERO2.0 simulations of tungsten heat shields on COMPASS Upgrade tokamak      

Karolina CPU  Alloc=500        

With the ever-increasing energy consumption of mankind, the need for ecologically and politically acceptable, reliably available (24/365), and inherently safe energy sources also increases. One of the very few ways (if not the only way) to satisfy these requirements seems to be nuclear fusion. That is why its flagship research project, ITER, is currently the most expensive science experiment on the planet. However, several unanswered questions still remain, one of them being the long-term reliability of the reactors' heat shields. Current tungsten (W) shields must withstand extreme heat fluxes from thermonuclear plasma in the form of charged energetic particles, which significantly erode and degrade any crystalline lattice or overheat the material to the point of cracking or melting. An even greater risk for the reactor, however, is the transport of released material into the confined plasma, where even at very low concentrations most of the energy supplied to the plasma would be radiated away and the ongoing nuclear fusion would be interrupted. This project simulates the erosion and transport of W heat shields in the new COMPASS Upgrade tokamak built in Prague. The parameters of the fully tungsten COMPASS Upgrade correspond to those of future reactors, and any research on it in this sense will be globally unique. The 3D Monte Carlo code ERO2.0, which solves the transport and plasma-wall interactions of impurities in the region between the confined plasma and the solid wall of current fusion devices, is used for this purpose.

 

Researcher: Lukas Neuman   

OPEN-33-76   

Inductive Bias of Deep Neural Networks for Computer Vision        

DGX-2  Alloc=300;  Karolina GPU  Alloc=500;  LUMI-G  Alloc=1000

Deep Neural Network models have in recent years dominated virtually all areas of Artificial Intelligence and Computer Vision. Their main advantage is that, given enough training samples, a training algorithm can automatically update the network parameters to directly maximise a given objective, such as image classification accuracy. Despite the recent success, the models are easily confused by trivial samples not present in the training set, and even the largest models lack basic generalisation and reasoning abilities despite having hundreds of millions of parameters and despite being trained on millions of very diverse data samples -- suggesting that a fundamental piece of understanding is still missing. We propose that one of the missing pieces in current models compared to humans is an appropriate inductive bias -- the set of prior assumptions used to generalise and make a prediction based on a finite set of training samples. In this project, we want to exploit this observation and search for new inductive biases to incorporate into modern Deep Neural Networks used in common Computer Vision tasks. This will result in Deep Neural Network models which require fewer parameters, which are more efficient, which are less confused by out-of-distribution data samples, and which require less training data, as an appropriate inductive bias is likely equivalent to an exponential reduction in the amount of training data needed.

                                                            

Researcher: Karel Sindelka    

OPEN-33-77   

Pectin gelation: Computer simulations of polysaccharide-based annealed polyelectrolytes        

Barbora CPU  Alloc=18000;  Karolina CPU  Alloc=13500      

Pectin, a natural polysaccharide, plays an essential role in food processing as a gelling agent and is increasingly used in biomedical applications, such as drug delivery. This project investigates pectin gelation mechanisms via computer simulations, addressing a significant gap in understanding the behaviour of pectin-based materials. Pectin gelation relies on electrostatic crosslinking with calcium ions, resulting in stable “egg-box” structures. We therefore use mesoscopic simulations with explicit electrostatics to study interactions between many pectin chains of realistic lengths.

 

Researcher: Jun Terasaki       

OPEN-33-78   

Prediction of nuclear matrix element with advanced transition operator for neutrinoless double-β decay        

Karolina CPU  Alloc=24900    

There are two kinds of elementary particles, namely particles (matter) and antiparticles (antimatter). Matter and antimatter are expected to exist in equal amounts because of the symmetric properties of this pair of elementary particles. However, reality is different; antimatter does not exist stably in the universe. The Majorana neutrino is a hypothetical particle introduced in a theory to solve this mystery of the universe. A special decay of the atomic nucleus by the weak interaction without emitting a neutrino (the neutrinoless double-β decay) would be proof of the existence of the Majorana neutrino, because this decay cannot occur without this neutrino. More than 30 experiments using large facilities around the world are in progress or in preparation to find this epoch-making new particle. Theoretical physics carries the important task of providing the decay probability, with which experimentalists can design proper detectors. This decay is extremely rare; its half-life is certainly much longer than 10^21 years for 136Xe. Because of this extreme rarity, the reliable prediction of the decay probability has been a challenging problem for theorists for many years. I calculate this decay probability with an advanced theoretical method using a transition operator more accurate than the usual ones. This operator is constructed by including higher-order components in terms of the field theory. In this project, I focus on the effects of the ground-state correlations included in this new transition operator. The new terms of the decay probability are complicated, and large-scale computations are necessary. This is why I need a massively parallel computer.

 

Researcher: Theodorus Petrus Cornelis Klaver        

OPEN-33-79   

MD simulation of high velocity dust particle impacts on fuzz covered W surfaces

Barbora CPU  Alloc=38000;  Barbora FAT  Alloc=250;  Karolina CPU  Alloc=5400;  Karolina FAT  Alloc=250;  LUMI-C  Alloc=8000   

Fusion energy is a long-sought-after energy source that, if realized, will offer virtually endless, emission-free energy without long-lived radioactive waste. The leading fusion reactor design is the tokamak, i.e., a ‘hollow donut’ in which H isotopes collide with great energy, making them fuse into He. The He produced in the fusion plasma has a detrimental effect on the W surface it comes into contact with, turning it into a low-density, fragile ‘fuzz’ layer. Surfaces are also impacted by tiny, high-velocity W dust particles that have come loose. There is a fear that such dust particle impacts may cause more secondary dust particles to come loose, initiating an avalanche effect that could extinguish the fusion plasma. Obviously, it is very important to know whether such an effect would occur or not. The extreme conditions inside a fusion reactor are very difficult to emulate in laboratory experiments. Hence, we plan to conduct molecular dynamics simulations of dust particles impacting on fuzz-covered W surfaces to see how many new particles come loose from such impacts. The particles and fuzz-covered surfaces in the simulations will consist of hundreds of millions of atoms, requiring great computational power available only in supercomputers such as those at IT4I. The outcome of the simulations should tell us whether dust particle impacts will cause particle avalanches inside a tokamak and whether they will therefore be a barrier to realizing fusion energy.

 

Researcher: Sergiu Arapan    

OPEN-33-8     

A computational study of meta-magnetic transitions in FeRh        

Barbora CPU  Alloc=14200;  Barbora FAT  Alloc=500;  Karolina CPU  Alloc=7100;  Karolina FAT  Alloc=200;  Karolina GPU  Alloc=2800;  LUMI-C  Alloc=5400      

The meta-magnetic transformation of FeRh from antiferromagnetic to ferromagnetic ordering occurs at a temperature of about 350 K. At the transition temperature, both magnetic and structural changes take place, with a 1% expansion of the unit cell volume in the ferromagnetic phase. The transition temperature significantly changes with the composition, strain, doping, and magnetic fields. The tunability of the transition temperature in a range close to the room temperature could make FeRh useful for a range of technological applications, most notably, the magnetic refrigeration. However, the mechanism behind the transition is still not well understood. The purpose of this work is to shed light on the effect of composition on the meta-magnetic transformation by using modern computational tools for the electronic structure calculations, magnetic exchange interactions and lattice dynamics.                                                                     

 

Researcher: Marcel Lamač   

OPEN-33-80   

Relativistic mirrors for bright coherent X-ray generation     

Karolina CPU  Alloc=3700;  Karolina GPU  Alloc=1700         

In his seminal work on the special theory of relativity, A. Einstein described the behavior of a relativistic mirror moving at a speed close to that of light. Electromagnetic waves reflecting off such a mirror experience a double Doppler upshift, resulting in increased frequency and amplitude and reduced pulse duration, a powerful concept for generating coherent high-frequency radiation. Such a radiation source can serve as a unique tool for fundamental research (e.g., light intensification towards the Schwinger limit, investigation of photon-photon and Delbrück scattering, detection of Hawking radiation and the information loss paradox) and for many practical applications in diverse fields (e.g., molecular imaging, attosecond spectroscopy, and plasma diagnostics). This project aims to employ large-scale particle-in-cell simulations to study relativistic mirrors produced by irradiating a gas or solid target with a focused high-power laser pulse or a relativistic charged particle beam. We will work closely with the experimental team to design the setup for upcoming experiments using state-of-the-art laser systems at the ELI Beamlines facility. The ultimate goal of our research is to develop a compact and tunable source of coherent high-brightness radiation with wavelengths ranging from X-rays to gamma rays.
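
The double Doppler upshift invoked above is quantified by the standard relation for light reflecting off a counter-propagating mirror (a textbook result, not a number from the proposal):

    \omega_r = \frac{1+\beta}{1-\beta}\,\omega_0 \approx 4\gamma^2 \omega_0, \qquad \beta = v/c,\ \ \gamma = (1-\beta^2)^{-1/2}

so a mirror with even a modest Lorentz factor (e.g., γ ≈ 10) upshifts an optical frequency by a factor of a few hundred, which is why relativistic mirrors are attractive as compact X-ray sources.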

                                                        

Researcher: Paulo Miguel Guimarães da Silva         

OPEN-33-81   

Development of a Molecular Dynamics pipeline for better understanding of viral mutations and variants        

Karolina GPU  Alloc=2200     

It has been well documented throughout history that human beings periodically fight viral threats. So far, science has helped us win these battles for survival. Although many vaccines and new drugs have been developed, nature is changeable, and pathogenic agents are constantly mutating and evolving. Climatic variations also play a fundamental role in this natural selection. Brazil is currently experiencing the largest outbreak of dengue in the world; however, it is not the only country to suffer from infections of this flavivirus: last year, autochthonous cases of dengue were detected in several European countries, including Italy. Because of this, there is a growing need to develop scientific methods and techniques that allow us to understand these pathogens more quickly. We believe that accelerated computational growth is a crucial condition for staying one step ahead of the threats that inevitably lie ahead. With this project we aim to establish a solid basis for a greater understanding of the proteins of viral pathogens that could culminate in an epidemic or a pandemic. Our main goal is to achieve better preparedness and a quicker response against infectious diseases. More specifically, this consortium seeks to discover novel broad-spectrum drugs to combat viruses with epidemic potential. The aim is to map viral target proteins that can be used to develop new drugs and to build pipelines that allow us to carry out automated molecular dynamics simulations of the targets and their possible mutations. This project is part of the European AVITHRAPID (Antiviral Therapeutics for Rapid Response Against Pandemic Infectious Diseases) consortium (https://avithrapid.eu/), funded by the European Union under grant agreement No 101137192.
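
To make the "automated pipeline" idea concrete, the outline below is a hypothetical orchestration sketch only; the variant names, directory layout, and run_md placeholder are invented for illustration and do not describe the consortium's actual tooling:

    # Hypothetical sketch of an automated MD pipeline: iterate over a list of
    # viral protein variants and launch one simulation per variant. run_md() is
    # a placeholder for whatever MD engine and job scheduler the real pipeline uses.
    from pathlib import Path

    VARIANTS = ["wild_type", "mutation_A", "mutation_B"]  # placeholder variant names

    def run_md(structure: Path, workdir: Path) -> None:
        """Placeholder: prepare and submit one MD simulation for a given structure."""
        workdir.mkdir(parents=True, exist_ok=True)
        # A real pipeline would invoke the MD engine or batch scheduler here,
        # e.g. via subprocess.run([...]); kept abstract on purpose.
        print(f"would simulate {structure} in {workdir}")

    for variant in VARIANTS:
        structure = Path("structures") / f"{variant}.pdb"   # assumed file layout
        run_md(structure, Path("runs") / variant)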

                                                            

Researcher: Paulo Miguel Guimarães da Silva         

OPEN-33-82   

Intelligent Traffic Modelling for Improved Urban Mobility: Towards Traffic Routing Equilibrium

Barbora CPU  Alloc=3400;  Karolina CPU  Alloc=3800;  LUMI-C  Alloc=7000          

The proposed research is two-fold: technical optimisation and exploratory research on traffic modelling. The first step centres on the refinement and optimisation of a traffic simulator, with the main objective of enabling efficient and cost-saving computations on different HPC clusters. The second step uses the optimised and scalable version of the traffic simulator to perform scientific experiments aimed at producing novel insights that can lead to improved urban mobility. More specifically, we intend to study how the ratio of vehicles that have live traffic updates and re-routing possibilities influences the overall traffic flow. Previous work has allowed us to identify computationally expensive functions. As such, during the first stage of this project we aim to optimise the traffic simulator by distributing these computationally expensive functions and by addressing other optimisation possibilities, such as memory structures and data transfer. For the second stage of the project, we intend to perform multiple simulations with various approaches and settings, such as different cities and traffic volumes, different levels of access to traffic information, and different re-routing or route-change settings. In addition, we intend to consider other relevant aspects of traffic modelling, such as analysing the network characteristics from a graph-theory perspective to help identify the most frequently occupied roads and crossroads within a city, or studying the influence of public transport on the traffic flow.
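
One common graph-theoretic way to flag likely bottleneck roads and crossroads from network structure alone is betweenness centrality; the toy sketch below is an illustration under that assumption, not part of the project's simulator, and the node and edge names are made up:

    # Hedged illustration: rank crossroads and road segments of a toy directed
    # road network by how many shortest paths pass through them.
    import networkx as nx

    G = nx.DiGraph()
    G.add_weighted_edges_from([
        ("A", "B", 1.0), ("B", "C", 2.0), ("A", "C", 4.0),
        ("C", "D", 1.0), ("B", "D", 3.0),
    ], weight="length")

    # Nodes/edges on many shortest paths are candidates for heavily used crossroads/roads.
    node_scores = nx.betweenness_centrality(G, weight="length")
    edge_scores = nx.edge_betweenness_centrality(G, weight="length")

    print("most central crossroad:", max(node_scores, key=node_scores.get))
    print("most central road segment:", max(edge_scores, key=edge_scores.get))

In a real study the graph would be built from the city's road network and the centrality ranking compared against road occupancy observed in the simulations.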

                                                                  

Researcher: Alexander Molodozhentsev      

OPEN-33-83   

Particle-In-Cell Modeling of Laser Wakefield Acceleration for Generating High-Quality GeV-Class Electron Beams

Karolina CPU  Alloc=15700;  Karolina GPU  Alloc=1300;  LUMI-G  Alloc=8000       

A plasma-based acceleration scheme in which particles are accelerated by a space-charge wave was proposed by Y. Fainberg in 1956. This approach allows one to overcome one of the most significant limitations of conventional accelerators: the limited electric field gradient of radio-frequency accelerating structures. The extreme laser-plasma accelerating gradients demonstrated experimentally by different teams offer a path towards a compact laser-plasma accelerator (LPA). Such a compact accelerator can be used as an electron beam driver for a broad variety of applications, including free electron lasers (FEL), Thomson sources, and even electron-positron colliders with TeV energy. Laser-plasma acceleration has been the subject of active research over decades with the aim of generating high-quality electron beams with energies up to the GeV level. A significant effort is being made to improve the beam quality, i.e., to achieve high charge, low energy spread, small beam emittance, and low divergence of the accelerated electron beam. The electron beam quality in an LPA strongly depends on the injection mechanism. In this study, we will investigate different injection mechanisms and their influence on the accelerated electron beam quality. In particular, we will explore the laser wakefield acceleration process using self-injection schemes (e.g., density down-ramp injection, injection in a pre-formed plasma channel, and density tailoring in a capillary setup) and ionization-induced injection mechanisms (e.g., injection from a neutral gas in a single-stage configuration). Additionally, we will investigate the multi-stage acceleration process using particle-in-cell simulations. Specifically, we will examine the charge coupling and acceleration of externally injected electron beams in the booster stage. We aim to uncover the underlying physical mechanisms and identify critical parameters that affect the trapping of the injected beam in the wakefield.
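
The scale of the "extreme accelerating gradients" mentioned above can be estimated with the standard cold, non-relativistic wave-breaking field (a textbook estimate, not a figure from the proposal):

    E_0 = \frac{m_e c\, \omega_p}{e} \approx 96 \sqrt{n_0\,[\mathrm{cm^{-3}}]}\ \ \mathrm{V/m}

so for a plasma density of n_0 = 10^18 cm^-3 the characteristic field is of order 100 GV/m, several orders of magnitude above what radio-frequency cavities sustain.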

 

Researcher: Robert Vacha     

OPEN-33-84   

Peptide Killers of Bacteria 2  

Karolina CPU  Alloc=96000;  Karolina GPU  Alloc=3200       

The annual death toll from antibiotic-resistant bacteria stands at 700,000 and is projected to reach 10 million by 2050. Moreover, emerging strains of bacteria resistant to all available antibiotics may lead to a global post-antibiotic era. Because of this threat, the WHO and the UN are encouraging the research and development of new treatments. This proposal aims to design novel peptides that selectively target and disrupt the membranes of pathogens but not those of human cells. To obtain such peptides, we will develop an innovative coarse-grained model of membranes, which will enable us to establish the relationship between peptide sequence motifs and their affinity to membranes with specific lipid compositions. We will experimentally verify our computational results with peptide-membrane affinity measurements using the quartz crystal microbalance method. This project will enable the design of new peptides specifically targeting bacteria. Additionally, it will support the development of peptides that bind to the membranes of enveloped viruses, cancer cells, or cellular organelles, with potential applications as sensors, biomarkers, or therapeutics.         

                                                    

Researcher: Zuzana Janáčková         

OPEN-33-85   

Interaction of fusogenic polyarginine peptides with curved lipid bilayers  

LUMI-G  Alloc=6000   

Short cationic peptides are known for their ability to passively translocate through the cell membrane and to carry a cargo with them, which opens up a variety of applications, such as targeted drug delivery. Despite intensive research over the past 30 years and numerous proposed theories, the exact mechanism of passive cell penetration remains unknown. Molecular dynamics (MD) simulations offer atomistic resolution and have thus become a complementary method to experiments, allowing otherwise hidden molecular details of peptide-membrane interactions to be elucidated. Despite multiple simulation studies of cell-penetrating peptides (CPPs), no dedicated research has been conducted to date on their interaction with curved lipid bilayers, which constitute the vast majority of the membranes in living cells. The goal of this research is to uncover the mechanism through which arginine-rich CPPs induce membrane multilamellarity and fusion, enabling their passive membrane translocation and the associated bilayer remodelling in significantly curved lipid membranes. Gaining this insight is essential for improving the design of the next generation of peptide-based drug delivery systems. We will utilize atomistic molecular dynamics (MD) simulations with enhanced sampling techniques such as umbrella sampling and metadynamics, as well as the EnCurv method to enforce uniform membrane curvature. The atomistic MD simulation data will be combined with coarse-grained MD simulation data and complementary experimental techniques, such as cryo-electron microscopy of lipid vesicles and fluorescence microscopy of living cells. This combined approach aims to shed light on the detailed mechanism of passive membrane penetration by cationic peptides and its interplay with membrane curvature, potentially advancing the development of targeted drug delivery systems and other biomedical applications.
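
For readers unfamiliar with umbrella sampling, each simulation window i adds a harmonic bias along a chosen reaction coordinate ξ (here plausibly, though this is an assumption, the peptide-membrane distance):

    V_i(\xi) = \frac{k}{2}\left(\xi - \xi_i\right)^2

The biased windows spanning different values of ξ_i are then recombined, typically with a reweighting scheme such as WHAM, into an unbiased free-energy profile of peptide translocation.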

                                                              

Researcher: Jan Zdražil          

OPEN-33-86   

Topology-Driven AI Framework for Unveiling Novel Pathways in Alzheimer's Neurodegeneration

LUMI-G  Alloc=6000   

Alzheimer’s disease (AD) presents one of the greatest challenges in modern neuroscience, with its complex neurodegenerative processes and lack of effective treatments. The intricate architecture of the brain, particularly the interactions within neural networks, remains difficult to map and understand in the context of AD. Traditional approaches to studying neurodegeneration have often relied on techniques that reduce the data's complexity, potentially overlooking subtle yet critical patterns in brain network activity. To fully grasp the progression of Alzheimer’s, novel computational methods are needed that can capture the intricacies of brain network topology and identify biomarkers and therapeutic targets that remain hidden from current methodologies (1). Our project proposes a framework that integrates Topological Data Analysis (TDA) with machine learning to address this need (2). TDA, known for its ability to extract complex structural features from high-dimensional data, will allow us to delve into the brain’s network complexity without oversimplifying the data. Coupled with AI, we aim to identify novel pathways and brain configurations linked to Alzheimer’s progression. By applying this framework to large-scale neuroimaging datasets, we will explore new avenues in brain connectivity and uncover potential intervention targets for neurodegeneration. This innovative approach could provide insights into the brain's dynamic landscape, offering a clearer understanding of AD mechanisms and enabling the discovery of intervention strategies to ameliorate the impact of neurodegeneration. Through this research, we aim to push the boundaries of neurodegeneration studies, contributing to the development of targeted, patient-specific therapeutic approaches for Alzheimer’s disease.                                                                     
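
As a hedged sketch of the TDA ingredient (the proposal does not name a specific library; ripser is used here purely as an example, and the point cloud is a synthetic stand-in for features derived from neuroimaging data):

    # Compute persistent homology of a toy point cloud and derive a simple
    # topological summary that could be fed to a downstream ML model.
    import numpy as np
    from ripser import ripser

    rng = np.random.default_rng(0)
    point_cloud = rng.normal(size=(200, 10))   # placeholder feature vectors

    # Persistence diagrams up to dimension 1 (connected components and loops).
    diagrams = ripser(point_cloud, maxdim=1)["dgms"]

    # Lifetimes (death - birth) of 1-dimensional features as a crude feature vector.
    h1 = diagrams[1]
    lifetimes = h1[:, 1] - h1[:, 0]
    longest = lifetimes.max() if len(lifetimes) else 0.0
    print("number of loops:", len(lifetimes), "longest lifetime:", longest)

In the actual framework such topological summaries would be extracted from brain connectivity data and combined with machine learning models rather than printed from random inputs.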

 

Researcher: Yaqing Ding       

OPEN-33-9     

Sparse View Synthesis without Camera Pose Priors

LUMI-G  Alloc=7900   

Novel view synthesis from a sparse set of input images is a challenging problem of great practical interest, especially when camera poses are absent or inaccurate. Direct optimization of camera poses and the use of estimated depths in neural radiance field algorithms usually do not produce good results because of the coupling between poses and depths and the inaccuracies of monocular depth estimation. The goal of this project is to leverage recent 3D Gaussian splatting and monocular depth estimation methods to develop a novel method for sparse view synthesis that simultaneously estimates the camera poses. Unlike current approaches, which use camera poses and 3D points from Colmap as initialization, we aim to estimate camera poses and 3D structure directly from monocular depths. We propose a new geometric model to eliminate the scale and shift biases in monocular depth estimation. We hope that our work will pave the way for future studies on sparse view synthesis, few-shot reconstruction, and reconstruction without camera poses.
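
To illustrate what "scale and shift biases" means here (this is a standard least-squares alignment, not the new geometric model proposed in the project), a monocular network predicts depth only up to an unknown scale s and shift t, which can be recovered when some reference depths are available:

    # Illustration of the scale/shift ambiguity in monocular depth:
    # solve min_{s,t} || s * predicted + t - reference ||^2 in closed form.
    import numpy as np

    rng = np.random.default_rng(1)
    true_depth = rng.uniform(1.0, 10.0, size=500)
    predicted = 0.5 * true_depth + 2.0 + rng.normal(scale=0.05, size=500)  # biased prediction

    A = np.stack([predicted, np.ones_like(predicted)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, true_depth, rcond=None)
    aligned = s * predicted + t
    print(f"recovered scale {s:.3f}, shift {t:.3f}, "
          f"mean abs error after alignment {np.abs(aligned - true_depth).mean():.4f}")

In the pose-free setting targeted by the project no such reference is available, which is precisely why a geometric model is needed to resolve the ambiguity jointly with the camera poses.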