William Roper, Peter Thomas, Chris Lovell, Kyle Oman, Thomas Sandnes, Victor Forouhar
Session type:
Regular
Description:
Simulations are currently looking down the barrel of a new regime of observational data, whether from the ever-expanding high-redshift frontier and the mysteries of exoplanets probed by JWST, or from the massive surveys due to be conducted by the likes of the Euclid Space Telescope and the Vera C. Rubin Observatory. For our "virtual twins" to keep up, they must probe statistically significant samples at high enough resolution to capture detailed physics within robust models. This demands high-resolution simulations covering large samples or, in the case of stellar and planetary simulations, numerous individual realisations of objects: a hugely computationally demanding proposition.
In addition to the required volumes, these observational frontiers probe physical mechanisms that are poorly represented in large-volume computational astrophysics studies: from the reionisation of the Universe on the largest scales, to the coevolution of black holes and their accretion disks, and everything in between and beyond. The fidelity with which we model the Universe needs to improve in step with the fidelity of observational data, as do our analysis methods. To truly compare theoretical and observational results we need to compare like-for-like. This requires robust and complete physical models as a foundation, and forward-modelling software to produce synthetic observations built on top of that foundation.
Even so, we are not lost in this game of catch-up: novel and efficient implementations are plentiful, HPC facilities are ever-increasing in computational power, and the Exascale regime is just around the corner in the UK. With the rise of the Exascale regime comes the advent of accelerator-based systems, ever-higher thread counts, and ample memory. However, this increase in computational potential could become a poisoned chalice without proper consideration. Not only do we have to contend with new architectures and handle calculations orders of magnitude larger than in the past, we also need to store and analyse the huge data footprint they produce, and mitigate the energy consumption inherent in running these simulations and storing the data.
In this session, we want to promote both scientific and technical discussions surrounding the next generation of simulations across spatial regimes. From planetary impact and formation simulations to massive cosmological simulations encompassing the Hubble volume, what do we need to interpret and understand observations at the frontier? What physics is missing from our models? How do we best utilise the hardware at our disposal? How do we efficiently analyse the huge datasets we will produce? Do we have common workflows and best practices? And, most importantly, how do we do all of this without undue environmental impact from our energy usage?
Topic:
Techniques
Schedule:
Joaquin Sureda (15 minutes): LYRA: Probing the high-resolution regime of cosmological zoom-in simulations
Ethann Taylor (15 minutes): EDGE: Star clusters, Dwarf galaxies, and something in between
Mac McMullan (15 minutes): Dwarf Galaxies and the role of Environment on Mass Assembly
Jemima Briggs (15 minutes): Zoom Simulations in SWIFT: Dwarf Galaxies in Diverse Cosmological Environments
Louise Seeyave (15 minutes): FLARE Simulations: a unique simulation approach for JWST
Edoardo Altamura (15 minutes): EAGLE-like simulation models do not solve the entropy core problem in groups and clusters of galaxies