Schaller M. et al., MNRAS, May 2024, Vol. 530, Issue 2, pp. 2378-2419 (citations: 49)
Schaller M. et al., Astrophysics Source Code Library, 2018, ascl:1805.020 (citations: 48)
Schaller M. et al., PASC, 2016, Vol. 1, Article Id. 2 (citations: 39)
They have jointly gathered 1312 citations and have an h-index of 21.
Kegerreis, J et al., Icar, 2025, vol. 425 (citations: 0)
Abstract
The origin of Mars's small moons, Phobos and Deimos, remains unknown. They are typically thought either to be captured asteroids or to have accreted from a debris disk produced by a giant impact. Here, we present an alternative scenario wherein fragments of a tidally disrupted asteroid are captured and evolve into a collisional proto-satellite disk. We simulate the initial disruption and the fragments' subsequent orbital evolution. We find that tens of percent of an unbound asteroid's mass can be captured and survive beyond collisional timescales, across a broad range of periapsis distances, speeds, masses, spins, and orientations in the Sun–Mars frame. Furthermore, more than one percent of the asteroid's mass could evolve to circularise in the moons' accretion region. This implies a lower mass requirement for the parent body than that for a giant impact, which could increase the likelihood of this route to forming a proto-satellite disk that, unlike direct capture, could also naturally explain the moons' orbits. These three formation scenarios each imply different properties of Mars's moons to be tested by upcoming spacecraft missions.
Kugel, R et al., MNRAS, 2024, vol. 534, issue 3 (citations: 1)
Abstract
Galaxy clusters provide an avenue to expand our knowledge of cosmology and galaxy evolution. Because it is difficult to accurately measure the total mass of a large number of individual clusters, cluster samples are typically selected using an observable proxy for mass. Selection effects are therefore a key problem in understanding galaxy cluster statistics. We make use of the
Pizzati, E et al., MNRAS, 2024, vol. 534, issue 4 (citations: 9)
Abstract
Recent observations from the EIGER JWST program have measured for the first time the quasar-galaxy cross-correlation function at
Upadhye, A et al., arXiv, 2024 (citations: 0)
Abstract
Observational cosmology is rapidly closing in on a measurement of the sum M_nu of neutrino masses, at least in the simplest cosmologies, while opening the door to probes of non-standard hot dark matter (HDM) models. By extending the method of effective distributions, we show that any collection of HDM species, with arbitrary masses, temperatures, and distribution functions, including massive neutrinos, may be represented as a single effective HDM species. Implementing this method in the FlowsForTheMasses non-linear perturbation theory for free-streaming particles, we study non-standard HDM models that contain thermal QCD axions or generic bosons in addition to standard neutrinos, as well as non-standard neutrino models wherein either the distribution function of the neutrinos or their temperature is changed. Along the way, we substantially improve the accuracy of this perturbation theory at low masses, bringing it into agreement with the high-resolution TianNu neutrino N-body simulation to about 2% at k = 0.1 h/Mpc and to within 21% up to k = 1 h/Mpc. We accurately reproduce the results of simulations including axions and neutrinos of multiple masses. Studying the differences between the normal, inverted, and degenerate neutrino mass orderings on their non-linear power, we quantify the error in the common approximation of degenerate masses. We release our code publicly at http://github.com/upadhye/FlowsForTheMassesII .
Contreras, S et al., A&A, 2024, vol. 690 (citations: 0)
Abstract
Context. Mock galaxy catalogues are essential for correctly interpreting current and future generations of galaxy surveys. Despite their significance in galaxy formation and cosmology, little to no work has been done to validate the predictions of these mocks for high-order clustering statistics. Aims. We compare the predictive power of the latest generation of empirical models used in the creation of mock galaxy catalogues: a 13-parameter halo occupation distribution (HOD) and an extension of the SubHalo Abundance Matching technique (SHAMe). Methods. We built GalaxyEmu-Planck, an emulator that makes precise predictions for the two-point correlation function, galaxy-galaxy lensing (restricted to distances greater than 1 h^-1 Mpc in order to avoid baryonic effects), and other high-order statistics resulting from the evaluation of SHAMe and HOD models. Results. We evaluated the precision of GalaxyEmu-Planck using two galaxy samples extracted from the FLAMINGO hydrodynamical simulation that mimic the properties of DESI-BGS and BOSS galaxies, finding that the emulator reproduces all the predicted statistics precisely. The HOD shows a comparable performance when fitting galaxy clustering and galaxy-galaxy lensing. In contrast, the SHAMe model shows better predictions for higher-order statistics, especially regarding the galaxy assembly bias level. We also tested the performance of the models after removing some of their extensions, finding that we can remove two (out of 13) of the HOD parameters without a significant loss of performance. Conclusions. The results of this paper validate the current generation of empirical models as a way to reproduce galaxy clustering, galaxy-galaxy lensing, and other high-order statistics. The excellent performance of the SHAMe model with a small number of free parameters suggests that it is a valid method to extract cosmological constraints from galaxy clustering.
Dou, J et al., MNRAS, 2024, vol. 534, issue 1 (citations: 0)
Abstract
Head-on giant impacts (collisions between planet-sized bodies) are frequently used to study the planet formation process as they present an extreme configuration where the two colliding bodies are greatly disturbed. With limited computing resources, focusing on these extreme impacts eases the burden of exploring a large parameter space. Results from head-on impacts are often then extended to study oblique impacts with angle corrections or used as initial conditions for other calculations, for example, the evolution of ejected debris. In this study, we conduct a detailed investigation of the thermodynamic and energy budget evolution of high-energy head-on giant impacts, entering the catastrophic impacts regime, for target masses between 0.001 and 12 M
Eilers, A et al., ApJ, 2024, vol. 974, issue 2 (citations: 15)
Abstract
We expect luminous (M_1450 ≲ -26.5) high-redshift quasars to trace the highest-density peaks in the early Universe. Here, we present observations of four z ≳ 6 quasar fields using JWST/NIRCam in the imaging and wide-field slitless spectroscopy mode and report a wide range in the number of detected [O III]-emitting galaxies in the quasars' environments, ranging from a density enhancement of δ ≈ 65 within a 2 cMpc radius—one of the largest protoclusters during the Epoch of Reionization discovered to date—to a density contrast consistent with zero, indicating the presence of a UV-luminous quasar in a region comparable to the average density of the Universe. By measuring the two-point cross-correlation function of quasars and their surrounding galaxies, as well as the galaxy autocorrelation function, we infer a correlation length of quasars at <z> = 6.25 of
Baker, W et al., arXiv, 2024 (citations: 0)
Abstract
We use NIRSpec/MSA spectroscopy and NIRCam imaging to study a sample of 18 massive ($\log\; M_{*}/M_{\odot} \gt 10\;$dex), central quiescent galaxies at $2\leq z \leq 5$ in the GOODS fields, to investigate their number density, star-formation histories, quenching timescales, and incidence of AGN. The depth of our data reaches $\log M_*/M_\odot \approx 9\;$dex, yet the least-massive central quiescent galaxy found has $\log M_*/M_\odot \gt 10\;$dex, suggesting that quenching is regulated by a physical quantity that scales with $M_*$. With spectroscopy as benchmark, we assess the completeness and purity of photometric samples, finding number densities 10 times higher than predicted by galaxy formation models, confirming earlier photometric studies. We compare our number densities to predictions from FLAMINGO, the largest-box full-hydro simulation suite to date. We rule out cosmic variance at the 3-$\sigma$ level, providing spectroscopic confirmation that galaxy formation models do not match observations at $z>3$. Using FLAMINGO, we find that the vast majority of quiescent galaxies' stars formed in situ, with these galaxies not having undergone multiple major dry mergers. This is in agreement with the compact observed size of these systems and suggests that major mergers are not a viable channel for quenching most massive galaxies. Several of our observed galaxies are particularly old, with four galaxies displaying 4000-Å breaks; full-spectrum fitting infers formation and quenching redshifts of $z\geq8$ and $z\geq6$. Using all available AGN tracers, we find that 8 massive quiescent galaxies host AGN, including in old systems. This suggests a high duty cycle of AGN and a continued trickle of gas to fuel accretion.
Kay, S et al., MNRAS, 2024, vol. 534, issue 1 (citations: 1)
Abstract
The relativistic Sunyaev-Zel'dovich (SZ) effect can be used to measure intracluster gas temperatures independently of X-ray spectroscopy. Here, we use the large-volume FLAMINGO simulation suite to determine whether SZ y-weighted temperatures lead to more accurate hydrostatic mass estimates in massive (
Husko, F et al., arXiv, 2024 (citations: 0)
Abstract
We present results of cosmological zoom-in simulations of a massive protocluster down to redshift $z\approx4$ (when the halo mass is $\approx10^{13}$ M$_\odot$) using the SWIFT code and the EAGLE galaxy formation model, focusing on supermassive black hole (BH) physics. The BH was seeded with a mass of $10^4$ M$_\odot$ at redshift $z\approx17$. We compare the base model that uses an Eddington limit on the BH accretion rate and thermal isotropic feedback by the AGN, with one where super-Eddington accretion is allowed, as well as two other models with BH spin and jets. In the base model, the BH grows at the Eddington limit from $z=9$ to $z=5.5$, when it becomes massive enough to halt its own and its host galaxy's growth through feedback. We find that allowing super-Eddington accretion leads to drastic differences, with the BH going through an intense but short super-Eddington growth burst around $z\approx7.5$, during which it increases its mass by orders of magnitude, before feedback stops further growth (of both the BH and the galaxy). By $z\approx4$ the galaxy is only half as massive in the super-Eddington cases, and an order of magnitude more extended, with the half-mass radius reaching values of a few physical kpc instead of a few hundred pc. The BH masses in our simulations are consistent with the intrinsic BH mass$-$stellar mass relation inferred from high-redshift observations by JWST. This shows that galaxy formation models using the $\Lambda$CDM cosmology are capable of reproducing the observed massive BHs at high redshift. Allowing jets, either at super- or sub-Eddington rates, has little impact on the host galaxy properties, but leads to lower BH masses as a result of stronger self-regulation, which is itself a consequence of higher feedback efficiencies.
Scharre, L et al., MNRAS, 2024, vol. 534, issue 1 (citations: 5)
Abstract
Using several variants of the cosmological SIMBA simulations, we investigate the impact of different feedback prescriptions on the cosmic star formation history. Adopting a global-to-local approach, we link signatures seen in global observables, such as the star formation rate density (SFRD) and the galaxy stellar mass function (GSMF), to feedback effects in individual galaxies. We find a consistent picture: stellar feedback mainly suppresses star formation below halo masses of
Nusser, A, ApJ, 2024, vol. 974, issue 1 (citations: 3)
Abstract
The evolution of halos with masses around M_h ≈ 10^11 M_⊙ and M_h ≈ 10^12 M_⊙ at redshifts z > 9 is examined using constrained N-body simulations. The average specific mass accretion rates,
Braspenning, J et al., arXiv, 2024 (citations: 0)
Abstract
The masses of galaxy clusters are commonly measured from X-ray observations under the assumption of hydrostatic equilibrium (HSE). This technique is known to underestimate the true mass systematically. The fiducial FLAMINGO cosmological hydrodynamical simulation predicts the median hydrostatic mass bias to increase from $b_\text{HSE} \equiv (M_\text{HSE,500c}-M_\text{500c})/M_\text{500c} \approx -0.1$ to -0.2 when the true mass increases from group to cluster mass scales. However, the bias is nearly independent of the hydrostatic mass. The scatter at fixed true mass is minimum for $M_\text{500c}\sim 10^{14}~\text{M}_\odot$, where $\sigma(b_\text{HSE})\approx 0.1$, but increases rapidly towards lower and higher masses. At a fixed true mass, the hydrostatic masses increase (decrease) with redshift on group (cluster) scales, and the scatter increases. The bias is insensitive to the choice of analytic functions assumed to represent the density and temperature profiles, but it is sensitive to the goodness of fit, with poorer fits corresponding to a stronger median bias and a larger scatter. The bias is also sensitive to the strength of stellar and AGN feedback. Models predicting lower gas fractions yield more (less) biased masses for groups (clusters). The scatter in the bias at fixed true mass is due to differences in the pressure gradients rather than in the temperature at $R_\text{500c}$. The total kinetic energies within $r_\text{500c}$ in low- and high-mass clusters are sub- and super-virial, respectively, though all become sub-virial when external pressure is accounted for. Analyses of the terms in the virial and Euler equations suggest that non-thermal motions, including rotation, account for most of the hydrostatic mass bias. However, we find that the mass bias estimated from X-ray luminosity weighted profiles strongly overestimates the deviations from hydrostatic equilibrium.
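For reference, the bias quoted above compares the standard hydrostatic mass estimator, built from the gas density and pressure profiles under spherical symmetry, with the true mass. The estimator below is the textbook form and may differ in detail from the profile-fitting procedure used in the paper; the bias definition is the one given in the abstract:

\[
M_\text{HSE}(<r) = -\frac{r^{2}}{G\,\rho_\text{gas}(r)}\,\frac{\mathrm{d}P(r)}{\mathrm{d}r},
\qquad
b_\text{HSE} \equiv \frac{M_\text{HSE,500c} - M_\text{500c}}{M_\text{500c}} .
\]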
Lauwers, A et al., A&A, 2024, vol. 689 (citations: 1)
Abstract
Context. To understand the structures of complex astrophysical objects, 3D numerical simulations of radiative transfer processes are invaluable. For Monte Carlo radiative transfer, the most common radiative transfer method in 3D, the design of a spatial grid is important and non-trivial. Common choices include hierarchical octree and unstructured Voronoi grids, each of which has advantages and limitations. Tetrahedral grids, commonly used in ray-tracing computer graphics, can be an interesting alternative option.
Aims: We aim to investigate the possibilities, advantages, and limitations of tetrahedral grids in the context of Monte Carlo radiative transfer. In particular, we want to compare the performance of tetrahedral grids to other commonly used grid structures.
Methods: We implemented a tetrahedral grid structure, based on the open-source library TetGen, in the generic Monte Carlo radiative transfer code SKIRT. Tetrahedral grids can be imported from external applications or they can be constructed and adaptively refined within SKIRT. We implemented an efficient grid traversal method based on Plücker coordinates and Plücker products.
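As an illustration of the traversal idea (not SKIRT's actual implementation), the sketch below shows the standard Plücker-product test that decides on which side of an edge a ray passes; a ray exits a tetrahedral cell through the face whose three edges all give the same sign. Function and variable names here are ours.

```python
import numpy as np

def plucker(p, q):
    """Pluecker coordinates (direction, moment) of the directed line through p and q."""
    return q - p, np.cross(p, q)

def permuted_inner_product(line1, line2):
    """Sign tells on which side line1 passes line2 (zero if the two lines intersect)."""
    d1, m1 = line1
    d2, m2 = line2
    return np.dot(d1, m2) + np.dot(d2, m1)

def ray_pierces_triangle(origin, direction, a, b, c):
    """A ray crosses triangle (a, b, c) if it passes all three edges with the same sign."""
    ray = (direction, np.cross(origin, direction))
    signs = [permuted_inner_product(ray, plucker(a, b)),
             permuted_inner_product(ray, plucker(b, c)),
             permuted_inner_product(ray, plucker(c, a))]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)
```

In a traversal loop, the face (other than the entry face) for which this test succeeds is the exit face, and the neighbouring tetrahedron across it becomes the next cell.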
Results: The correct implementation of the tetrahedral grid construction and the grid traversal algorithm in SKIRT were validated using 2D radiative transfer benchmark problems. Using a simple 3D model, we compared the performance of tetrahedral, octree, and Voronoi grids. With a constant cell count, the octree grid outperforms the tetrahedral and Voronoi grids in terms of traversal speed, whereas the tetrahedral grid is poorer than the other grids in terms of grid quality. All told, we find that the performance of tetrahedral grids is relatively poor compared to octree and Voronoi grids.
Conclusions: Although the adaptively constructed tetrahedral grids might not be favourable in most media representative of astrophysical simulation models, they still form an interesting unstructured alternative to Voronoi grids for specific applications. In particular, they might prove useful for radiative transfer post-processing of hydrodynamical simulations run on tetrahedral or unstructured grids.
Fischer, M et al., A&A, 2024, vol. 689 (citations: 8)
Abstract
Context. Dark matter (DM) halos can be subject to gravothermal collapse if the DM is not collisionless, but engaged in strong self-interactions instead. When the scattering is able to efficiently transfer heat from the centre to the outskirts, the central region of the halo collapses and reaches densities much higher than those for collisionless DM. This phenomenon is potentially observable in studies of strong lensing. Current theoretical efforts are motivated by observations of surprisingly dense substructures. However, a comparison with observations requires accurate predictions. One method to obtain such predictions is to use N-body simulations. Collapsed halos are extreme systems that pose severe challenges when applying state-of-the-art codes to model self-interacting dark matter (SIDM). Aims. In this work, we investigate the root of such problems, with a focus on energy non-conservation. Moreover, we discuss possible strategies to avoid them. Methods. We ran N-body simulations, both with and without SIDM, of an isolated DM-only halo and we adjusted the numerical parameters to check the accuracy of the simulation. Results. We find that not only can the numerical scheme for SIDM lead to energy non-conservation, but the modelling of the gravitational interaction and the time integration are also problematic. The main issues we find are: (a) particles changing their time step in a non-time-reversible manner; (b) the asymmetry in the tree-based gravitational force evaluation; and (c) SIDM velocity kicks breaking the time symmetry. Conclusions. Tuning the parameters of the simulation to achieve a high level of accuracy allows us to conserve energy not only at early stages of the evolution, but also later on. However, the cost of the simulations becomes prohibitively large as a result. Some of the problems that make the simulations of the gravothermal collapse phase inaccurate can be overcome by choosing appropriate numerical schemes. However, other issues still pose a challenge. Our findings motivate further work on addressing the challenges in simulating strong DM self-interactions.
Braspenning, J et al., MNRAS, 2024, vol. 533, issue 3 (citations: 9)
Abstract
Galaxy clusters are important probes for both cosmology and galaxy formation physics. We test the cosmological, hydrodynamical FLAMINGO (Full-hydro large-scale structure simulations with all-sky mapping for the interpretation of next generation observations) simulations by comparing to observations of the gaseous properties of clusters measured from X-ray observations. FLAMINGO contains unprecedented numbers of massive galaxy groups (
Nobels, F et al., MNRAS, 2024, vol. 532, issue 3 (citations: 6)
Abstract
We use smoothed particle hydrodynamics simulations of isolated Milky Way-mass disc galaxies that include cold, interstellar gas to test subgrid prescriptions for star formation (SF). Our fiducial model combines a Schmidt law with a gravitational instability criterion, but we also test density thresholds and temperature ceilings. While SF histories are insensitive to the prescription for SF, the Kennicutt-Schmidt (KS) relations between SF rate and gas surface density can discriminate between models. We show that our fiducial model, with an SF efficiency per free-fall time of 1 per cent, agrees with spatially resolved and azimuthally averaged observed KS relations for neutral, atomic, and molecular gas. Density thresholds do not perform as well. While temperature ceilings selecting cold, molecular gas can match the data for galaxies with solar metallicity, they are unsuitable for very low-metallicity gas and hence for cosmological simulations. We argue that SF criteria should be applied at the resolution limit rather than at a fixed physical scale, which means that we should aim for numerical convergence of observables rather than of the properties of gas labelled as star-forming. Our fiducial model yields good convergence when the mass resolution is varied by nearly 4 orders of magnitude, with the exception of the spatially resolved molecular KS relation at low surface densities. For the gravitational instability criterion, we quantify the impact on the KS relations of gravitational softening, the SF efficiency, and the strength of supernova feedback, as well as of observable parameters such as the inclusion of ionized gas, the averaging scale, and the metallicity.
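The fiducial prescription described above combines a gravitational instability criterion with a volumetric Schmidt law. Written in its standard form (the paper's exact expression may include additional factors), a star formation efficiency per free-fall time of 1 per cent corresponds to

\[
\dot{\rho}_\star = \varepsilon_\mathrm{ff}\,\frac{\rho_\mathrm{gas}}{t_\mathrm{ff}},
\qquad
t_\mathrm{ff} = \sqrt{\frac{3\pi}{32\,G\,\rho_\mathrm{gas}}},
\qquad
\varepsilon_\mathrm{ff} = 0.01 .
\]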
O'Brennan, H et al., arXiv, 2024 (citations: 0)
Abstract
Recent JWST observations of very early galaxies, at $z \geq 10$, have led to claims that tension exists between the sizes and luminosities of high-redshift galaxies and what is predicted by standard ${\Lambda}$CDM models. Here we use the adaptive mesh refinement code Enzo and the N-body smoothed particle hydrodynamics code SWIFT to compare (semi-)analytic halo mass functions against the results of direct N-body models at high redshift. In particular, our goal is to investigate the variance between standard halo mass functions derived from (semi-)analytic formulations and N-body calculations and to determine what role any discrepancy may play in driving tensions between observations and theory. We find that the difference between direct N-body calculations and halo mass function fits is less than a factor of two within the mass range of galaxies currently being observed by JWST and is therefore not a dominant source of error when comparing theory and observation at high redshift.
Kugel, R et al., arXiv, 2024 (citations: 0)
Abstract
Galaxy cluster counts have historically been important for the measurement of cosmological parameters, and upcoming surveys will greatly reduce the statistical errors. To exploit the potential of current and future cluster surveys, theoretical uncertainties on the predicted abundance must be smaller than the statistical errors. Models used to predict cluster counts typically combine a model for the dark matter only (DMO) halo mass function (HMF) with an observable-mass relation that is assumed to be a power-law with lognormal scatter. We use the FLAMINGO suite of cosmological hydrodynamical simulations to quantify the biases in the cluster counts and cosmological parameters resulting from the different ingredients of conventional models. For the observable mass proxy we focus on the Compton-Y parameter quantifying the thermal Sunyaev-Zel'dovich effect, which is expected to result in cluster samples that are relatively close to mass-selected samples. We construct three mock surveys based on existing (Planck and SPT) and upcoming (Simons Observatory) surveys. We ignore measurement uncertainties and compare the biases in the counts and inferred cosmological parameters to each survey's Poisson errors. We find that widely used models for the DMO HMF differ significantly from each other and from the DMO version of FLAMINGO, leading to significant biases for all three surveys. For upcoming surveys, dramatic improvements are needed for all additional model ingredients, i.e. the functional forms of the fits to the observable-mass scaling relation and the associated scatter, the priors on the scaling relation and the prior on baryonic effects associated with feedback processes on the HMF.
Lim, S et al., MNRAS, 2024, vol. 532, issue 4 (citations: 7)
Abstract
Motivated by the recent JWST discovery of galaxy overdensities during the Epoch of Reionization, we examine the physical properties of high-z protoclusters and their evolution using the Full-hydro Large-scale structure simulations with All-sky Mapping for the Interpretation of Next Generation Observations (FLAMINGO) simulation suite. We investigate the impact of the apertures used to define protoclusters, because the heterogeneous apertures used in the literature have limited our understanding of the population. Our results are insensitive to the uncertainties of the subgrid models at a given resolution, whereas further investigation into the dependence on numerical resolution is needed. When considering galaxies more massive than $M_\ast \, {\simeq }\, 10^8\, {\rm M_\odot }$, the FLAMINGO simulations predict a dominant contribution from progenitors similar to those of the Coma cluster to the cosmic star formation rate density during the reionization epoch. Our results indicate the onset of suppression of star formation in the protocluster environments as early as $z\, {\simeq }\, 5$. The galaxy number density profiles are similar to NFW (Navarro-Frenk-White profile) at $z\, {\lesssim }\, 1$ while showing a steeper slope at earlier times before the formation of the core. Unlike in most previous simulations, the predicted star formation history for individual protoclusters is in good agreement with observations. We demonstrate that, depending on the aperture, the integrated physical properties including the total (dark matter and baryonic) mass can be biased by a factor of 2 to 5 at $z\, {=}\, 5.5$-7, and by an order of magnitude at $z\, {\lesssim }\, 4$. This correction suffices to remove the ${\simeq }\, 3\, \sigma$ tensions with the number density of structures found in recent JWST observations.
Yuasa, T et al., NewA, 2024, vol. 109 (citations: 0)
Abstract
We present a new hydrodynamic scheme named Godunov Density-Independent Smoothed Particle Hydrodynamics (GDISPH), which can accurately handle shock waves and contact discontinuities without any manually tuned parameters. This is in contrast to the standard formulation of smoothed particle hydrodynamics (SSPH), which requires the parameters for an artificial viscosity term to handle the shocks and struggles to accurately handle the contact discontinuities due to unphysical repulsive forces, resulting in surface tension that disrupts pressure equilibrium and suppresses fluid instabilities. While Godunov SPH (GSPH) can handle the shocks without the parameters by using solutions from a Riemann solver, it still cannot fully handle the contact discontinuities. Density-Independent Smoothed Particle Hydrodynamics (DISPH), one of several schemes proposed to handle contact discontinuities more effectively than SSPH, demonstrates superior performance in our tests involving strong shocks and contact discontinuities. However, DISPH still requires the artificial viscosity term. We integrate the Riemann solver into DISPH in several ways, yielding several variants of GDISPH. The results of standard tests such as the one-dimensional Riemann problem, pressure equilibrium, Sedov–Taylor, and Kelvin–Helmholtz tests are favourable to GDISPH Case 1 and GDISPH Case 2, as well as DISPH. We conclude that GDISPH Case 1 has an advantage over GDISPH Case 2, effectively handling shocks and contact discontinuities without the need for specific parameters or introducing any additional numerical diffusion.
Sandnes, T et al., arXiv, 2024 (citations: 0)
Abstract
We present REMIX, a smoothed particle hydrodynamics (SPH) scheme designed to alleviate effects that typically suppress mixing and instability growth at density discontinuities in SPH simulations. We approach this problem by directly targeting sources of kernel smoothing error and discretisation error, resulting in a generalised, material-independent formulation that improves the treatment both of discontinuities within a single material, for example in an ideal gas, and of interfaces between dissimilar materials. This approach also leads to improvements in capturing hydrodynamic behaviour unrelated to mixing, such as in shocks. We demonstrate marked improvements in three-dimensional test scenarios, focusing on more challenging cases with particles of equal mass across the simulation. This validates our methods for use-cases relevant across applications spanning astrophysics and engineering, where particles are free to evolve over a large range of density scales, or where emergent and evolving density discontinuities cannot easily be corrected by choosing bespoke particle masses in the initial conditions. We achieve these improvements while maintaining sharp discontinuities; without introducing additional equation of state dependence in, for example, particle volume elements; and without contrived or targeted corrections. Our methods build upon a fully compressible and thermodynamically consistent core-SPH construction, retaining Galilean invariance as well as conservation of mass, momentum, and energy. REMIX is integrated in the open-source, state-of-the-art SWIFT code and is designed with computational efficiency in mind, which means that its improved hydrodynamic treatment can be used for high-resolution simulations without significant cost to run-speed.
Buehlmann, M et al., arXiv, 2024 (citations: 1)
Abstract
We present the online service cosmICweb (COSMological Initial Conditions on the WEB) - the first database and web interface to store, analyze, and disseminate initial conditions for zoom simulations of objects forming in cosmological simulations: from galaxy clusters to galaxies and more. Specifically, we store compressed information about the Lagrangian proto-halo patches for all objects in a typical simulation merger tree along with properties of the halo/galaxy across cosmic time. This enables a convenient web-based selection of the desired zoom region for an object fitting user-specified selection criteria. The information about the region can then be used with the MUSIC code to generate the zoom ICs for the simulation. In addition to some other simulations, we currently support all objects in the EAGLE simulation database, so that for example the Auriga simulations are easily reproduced, which we demonstrate explicitly. The framework is extensible to include other simulations through an API that can be added to an existing database structure and with which cosmICweb can then be interfaced. We make the web portal and database publicly available to the community.
Roca-Fabrega, S et al., ApJ, 2024, vol. 968, issue 2 (citations: 5)
Abstract
In this fourth paper from the AGORA Collaboration, we study the evolution down to redshift z = 2 and below of a set of cosmological zoom-in simulations of a Milky Way mass galaxy by eight of the leading hydrodynamic simulation codes. We also compare this CosmoRun suite of simulations with dark matter-only simulations by the same eight codes. We analyze general properties of the halo and galaxy at z = 4 and 3, and before the last major merger, focusing on the formation of well-defined rotationally supported disks, the mass–metallicity relation, the specific star formation rate, the gas metallicity gradients, and the nonaxisymmetric structures in the stellar disks. Codes generally converge well to the stellar-to-halo mass ratios predicted by semianalytic models at z ∼ 2. We see that almost all the hydro codes develop rotationally supported structures at low redshifts. Most agree within 0.5 dex with the observed mass–metallicity relation at high and intermediate redshifts, and reproduce the gas metallicity gradients obtained from analytical models and low-redshift observations. We confirm that the intercode differences in the halo assembly history reported in the first paper of the collaboration also exist in CosmoRun, making the code-to-code comparison more difficult. We show that such differences are mainly due to variations in code-dependent parameters that control the time stepping strategy of the gravity solver. We find that variations in the early stellar feedback can also result in differences in the timing of the low-redshift mergers. All the simulation data down to z = 2 and the auxiliary data will be made publicly available.
Radtke, P et al., arXiv, 2024 (citations: 0)
Abstract
The C++ programming language and its cousins lean towards a memory-inefficient storage of structs: The compiler inserts helper bits into the struct such that individual attributes align with bytes, and it adds additional bytes aligning attributes with cache lines, while it is not able to exploit knowledge about the range of integers, enums or bitsets to bring the memory footprint down. Furthermore, the language provides neither support for data exchange via MPI nor for arbitrary floating-point precision formats. If developers need to have a low memory footprint and MPI datatypes over structs which exchange only minimal data, they have to manipulate the data and to write MPI datatypes manually. We propose a C++ language extension based upon C++ attributes through which developers can guide the compiler what memory arrangements would be beneficial: Can multiple booleans be squeezed into one bit field, do floats hold fewer significant bits than in the IEEE standard, or does the code require a user-defined MPI datatype for certain subsets of attributes? The extension offers the opportunity to fall back to normal alignment and padding rules via plain C++ assignments, no dependencies upon external libraries are introduced, and the resulting code remains standard C++. Our work implements the language annotations within LLVM and demonstrates their potential impact, both upon the runtime and the memory footprint, through smoothed particle hydrodynamics (SPH) benchmarks. They uncover the potential gains in terms of performance and development productivity.
Mansfield, P et al., MNRAS, 2024, vol. 531, issue 1 (citations: 0)
Abstract
As cosmological simulations have grown in size, the permanent storage requirements of their particle data have also grown. Even modest simulations present a major logistical challenge for the groups that run these boxes, and researchers without access to high-performance computing facilities often need to restrict their analysis to lower-quality data. In this paper, we present GUPPY, a compression algorithm and code base tailored to reduce the sizes of dark matter-only cosmological simulations by approximately an order of magnitude. GUPPY is a 'lossy' algorithm, meaning that it injects a small amount of controlled and uncorrelated noise into particle properties. We perform extensive tests on the impact that this noise has on the internal structure of dark matter haloes, and identify conservative accuracy limits which ensure that compression has no practical impact on single-snapshot halo properties, profiles, and abundances. We also release functional prototype libraries in C, Python, and Go for reading and creating GUPPY data.
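As a rough illustration of what a 'lossy' scheme with a controlled, bounded error looks like (this is not the GUPPY algorithm itself, whose details are given in the paper and its C/Python/Go libraries), particle properties can be snapped to a grid whose spacing sets the accuracy limit:

```python
import numpy as np

def compress(x, dx):
    """Store values as integers on a grid of spacing dx (far more compressible than floats)."""
    return np.round(x / dx).astype(np.int64)

def decompress(q, dx, rng=np.random.default_rng(0)):
    """Reconstruct values; a uniform dither keeps the injected noise uncorrelated between particles."""
    return (q + rng.uniform(-0.5, 0.5, size=q.shape)) * dx

# Toy data: particle positions in a 1 Gpc box, compressed to ~1 ckpc accuracy.
pos = np.random.default_rng(1).uniform(0.0, 1000.0, size=(100, 3))   # cMpc
q = compress(pos, dx=1e-3)
err = np.abs(decompress(q, dx=1e-3) - pos)
assert err.max() <= 1e-3   # the injected error never exceeds the chosen accuracy limit
```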
Aramburo-Garcia, A et al., arXiv, 2024 (citations: 3)
Abstract
The resonant conversion, within the inter-galactic medium, of regular photons into dark photons amplifies the anisotropy observed in the CMB, thereby imposing stringent constraints on the existence of light dark photons. In this study, we investigate the impact of light dark photons, with masses in the range $3\times 10^{-15} ~\rm{eV} < m_{A'} < 3\times 10^{-12}~\rm{eV}$ on the power spectrum of temperature anisotropies within the cosmic microwave background (CMB) radiation utilizing the state-of-the-art large-volume FLAMINGO cosmological simulations. Our results show that using full Planck data, one can expect the existing constraints on the dark photon mixing parameter in this mass range to improve by an order of magnitude.
Upadhye, A et al., MNRAS, 2024, vol. 530, issue 1 (citations: 1)
Abstract
Cosmology is poised to measure the neutrino mass sum Mν and has identified several smaller-scale observables sensitive to neutrinos, necessitating accurate predictions of neutrino clustering over a wide range of length scales. The FlowsForTheMasses non-linear perturbation theory for the massive neutrino power spectrum, $\Delta ^2_\nu (k)$, agrees with its companion N-body simulation at the $10~{{\ \rm per\ cent}}-15~{{\ \rm per\ cent}}$ level for k ≤ 1 h Mpc^-1. Building upon the Mira-Titan IV emulator for the cold matter, we use FlowsForTheMasses to construct an emulator for $\Delta ^2_\nu (k)$, Cosmic-Eν, which covers a large range of cosmological parameters and neutrino fractions Ω_{ν,0}h^2 ≤ 0.01 (Mν ≤ 0.93 eV). Consistent with FlowsForTheMasses at the 3.5 per cent level, it returns a power spectrum in milliseconds. Ranking the neutrinos by initial momenta, we also emulate the power spectra of momentum deciles, providing information about their perturbed distribution function. Comparing a Mν = 0.15 eV model to a wide range of N-body simulation methods, we find agreement to 3 per cent for k ≤ 3k_FS = 0.17 h Mpc^-1 and to 19 per cent for k ≤ 0.4 h Mpc^-1. We find that the enhancement factor, the ratio of $\Delta ^2_\nu (k)$ to its linear-response equivalent, is most strongly correlated with Ω_{ν,0}h^2, and also with the clustering amplitude σ8. Furthermore, non-linearities enhance the free-streaming-limit scaling $\partial \log (\Delta ^2_\nu /\Delta ^2_{\rm m}) / \partial \log (M_\nu)$ beyond its linear value of 4, increasing the Mν-sensitivity of the small-scale neutrino density.
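In the notation adopted here (the quantities themselves are those described in the abstract), the emulated enhancement factor and the free-streaming-limit scaling are

\[
E(k) \equiv \frac{\Delta^{2}_{\nu}(k)}{\Delta^{2}_{\nu,\mathrm{lin.resp.}}(k)},
\qquad
s \equiv \frac{\partial \log\left(\Delta^{2}_{\nu}/\Delta^{2}_{\rm m}\right)}{\partial \log M_{\nu}},
\]

with $s = 4$ in linear theory in the free-streaming limit and $s > 4$ once non-linearities are included.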
Colazo, P et al., A&A, 2024, vol. 685 (citations: 3)
Abstract
Context. This Letter explores the potential role of primordial black holes (PBHs) in addressing cosmological tensions such as the presence of more-massive-than-expected galaxies at high redshifts, as indicated by recent James Webb Space Telescope observations.
Aims: Motivated by inflation models that enhance the power at scales beyond the observable range and produce PBHs with Schechter-like mass functions, we aim to explain the excess of high-redshift galaxies via a modification of the Λ cold dark matter power spectrum that consists of adding (i) a blue spectral index n_b at k_piv = 10/Mpc and (ii) Poisson and isocurvature contributions from massive PBHs that only make up 0.5% of the dark matter.
Methods: We simulated these models using the SWIFT code and find an increased abundance of high redshift galaxies in simulations that include PBHs. We compared these models to estimates from James Webb Space Telescope observations.
Results: Unlike the Λ cold dark matter model, the inclusion of PBHs allowed us to reproduce the observations with reasonable values for the star formation efficiency. Furthermore, the power spectra we adopted potentially produce PBHs that can serve as seeds for supermassive black holes with masses of 7.57 × 10^4 M⊙.
Broxterman, J et al., MNRAS, 2024, vol. 529, issue 3 (citations: 9)
Abstract
Weak gravitational lensing convergence peaks, the local maxima in weak lensing convergence maps, have been shown to contain valuable cosmological information complementary to commonly used two-point statistics. To exploit the full power of weak lensing for cosmology, we must model baryonic feedback processes because these reshape the matter distribution on non-linear and mildly non-linear scales. We study the impact of baryonic physics on the number density of weak lensing peaks using the FLAMINGO cosmological hydrodynamical simulation suite. We generate ray-traced full-sky convergence maps mimicking the characteristics of a Stage IV weak lensing survey. We compare the number densities of peaks in simulations that have been calibrated to reproduce the observed galaxy mass function and cluster gas fraction or to match a shifted version of these, and that use either thermally driven or jet active galactic nucleus feedback. We show that the differences induced by realistic baryonic feedback prescriptions (typically 5-30 per cent for κ = 0.1-0.4) are smaller than those induced by reasonable variations in cosmological parameters (20-60 per cent for κ = 0.1-0.4) but must be modelled carefully to obtain unbiased results. The reasons behind these differences can be understood by considering the impact of feedback on halo masses, or by considering the impact of different cosmological parameters on the halo mass function. Our analysis demonstrates that, for the range of models we investigated, the baryonic suppression is insensitive to changes in cosmology up to κ ≈ 0.4 and that the higher κ regime is dominated by Poisson noise and cosmic variance.
Upadhye, A et al., MNRAS, 2024, vol. 529, issue 2 (citations: 5)
Abstract
Weak lensing of the cosmic microwave background is rapidly emerging as a powerful probe of neutrinos, dark energy, and new physics. We present a fast computation of the non-linear CMB lensing power spectrum that combines non-linear perturbation theory at early times with power spectrum emulation using cosmological simulations at late times. Comparing our calculation with light-cones from the FLAMINGO 5.6 Gpc cube dark-matter-only simulation, we confirm its accuracy to $1{{\ \rm per\ cent}}$ ($2{{\ \rm per\ cent}}$) up to multipoles L = 3000 (L = 5000) for a νΛCDM cosmology consistent with current data. Clustering suppression due to small-scale baryonic phenomena such as feedback from active galactic nuclei can reduce the lensing power by $\sim 10{{\ \rm per\ cent}}$. To our perturbation theory and emulator-based calculation, we add SP(k), a new fitting function for this suppression, and confirm its accuracy compared to the FLAMINGO hydrodynamic simulations to $4{{\ \rm per\ cent}}$ at L = 5000, with similar accuracy for massive neutrino models. We further demonstrate that scale-dependent suppression due to neutrinos and baryons approximately factorize, implying that a careful treatment of baryonic feedback can limit biasing neutrino mass constraints.
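The approximate factorization mentioned above suggests a simple way to combine the two effects when modelling the lensing power: multiply the baryonic suppression (e.g. from SP(k)) by the neutrino suppression computed separately. A minimal sketch with purely illustrative toy suppression curves, not the actual fitting functions:

```python
def combined_suppression(k, baryon_suppression, neutrino_suppression):
    """Approximate the total suppression of the non-linear power as the product of the
    baryonic and neutrino factors, relying on the factorisation described in the abstract."""
    return baryon_suppression(k) * neutrino_suppression(k)

# Toy curves for illustration only; real applications would use SP(k) and a neutrino emulator.
s_baryon = lambda k: 1.0 - 0.10 * k / (1.0 + k)    # feedback-driven suppression at high k
s_neutrino = lambda k: 1.0 - 0.05 * k / (0.5 + k)  # free-streaming suppression
print(combined_suppression(1.0, s_baryon, s_neutrino))
```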
Dou, J et al., MNRAS, 2024, vol. 529, issue 3 (citations: 2)
Abstract
During the final stage of planetary formation, different formation pathways of planetary embryos could significantly influence the observed variations in planetary densities. Of the approximately 5000 exoplanets identified to date, a notable subset exhibits core fractions reminiscent of Mercury, potentially a consequence of high-velocity giant impacts. In order to better understand the influence of such collisions on planetary formation and compositional evolution, we conducted an extensive set of smoothed particle hydrodynamics giant impact simulations between two-layered rocky bodies. These simulations spanned a broad range of impact velocities from 1 to 11 times the mutual escape velocity. We derived novel scaling laws that estimate the mass and core mass fraction of the largest post-collision remnants. Our findings indicate that the extent of core vaporization markedly influences mantle stripping efficiency at low impact angles. We delineate the distinct roles played by two mechanisms - kinetic momentum transfer and vaporization-induced ejection - in mantle stripping. Our research suggests that collisional outcomes for multilayered planets are more complex than those for undifferentiated planetesimal impacts. Thus, a single universal law may not encompass all collision processes. We found a significant decrease in the mantle stripping efficiency as the impact angle increases. To form a 5 M⊕ super-Mercury at 45°, an impact velocity over 200 km s^-1 is required. This poses a challenge to the formation of super-Mercuries through a single giant impact, implying that their formation would favour either relatively low-angle single impacts or multiple collisions.
Berlok, T et al., JOSS, 2024, vol. 9, issue 96 (citations: 1)
Abstract
We present Paicos, a new object-oriented Python package for analyzing simulations performed with Arepo. Paicos strives to reduce the learning curve for students and researchers getting started with Arepo simulations. As such, Paicos includes many examples in the form of Python scripts and Jupyter notebooks as well as online documentation describing the installation procedure and recommended first steps. Paicos' main features are automatic handling of cosmological and physical units, computation of derived variables, 2D visualization (slices and projections), 1D and 2D histograms, and easy saving and loading of derived data including units and all the relevant metadata.
Elbers, W et al., arXiv, 2024 (citations: 10)
Abstract
Large-scale structure surveys have reported measurements of the density of matter, $\Omega_\mathrm{m}$, and the amplitude of clustering, $\sigma_8$, that are in tension with the values inferred from observations of the cosmic microwave background. While this may be a sign of new physics that slows the growth of structure at late times, strong astrophysical feedback processes could also be responsible. In this work, we argue that astrophysical processes are not independent of cosmology and that their coupling naturally leads to stronger baryonic feedback in cosmological models with suppressed structure formation or when combined with a mechanism that removes dark matter from halos. We illustrate this with two well-motivated extensions of the Standard Model known to suppress structure formation: massive neutrinos and decaying dark matter. Our results, based on the FLAMINGO suite of hydrodynamical simulations, show that the combined effect of baryonic and non-baryonic suppression mechanisms is greater than the sum of its parts, particularly for decaying dark matter. We also show that the dependence of baryonic feedback on cosmology can be modelled as a function of the ratio $f_\mathrm{b}/c^2_\mathrm{v}\sim f_\mathrm{b}/(\Omega_\mathrm{m}\sigma_8)^{1/4}$ of the universal baryon fraction, $f_\mathrm{b}$, to a velocity-based definition of halo concentration, $c^2_\mathrm{v}$, giving an accurate fitting formula for the baryonic suppression of the matter power spectrum. Although the combination of baryonic and non-baryonic suppression mechanisms can resolve the tension, the models with neutrinos and decaying dark matter are challenged by constraints on the expansion history.
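Schematically (in our notation; the explicit functional form is given in the paper and not reproduced here), the fitted dependence is

\[
\frac{P_\mathrm{hydro}(k)}{P_\mathrm{DMO}(k)} \approx F\!\left(k,\ \frac{f_\mathrm{b}}{c_\mathrm{v}^{2}}\right),
\qquad
\frac{f_\mathrm{b}}{c_\mathrm{v}^{2}} \sim \frac{f_\mathrm{b}}{\left(\Omega_\mathrm{m}\,\sigma_{8}\right)^{1/4}} ,
\]

so cosmologies with suppressed structure growth (lower $\Omega_\mathrm{m}\sigma_8$) behave like models with a larger effective baryon fraction and hence a stronger feedback-induced suppression of the matter power spectrum.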
Correa, C et al., arXiv, 2024 (citations: 3)
Abstract
Self-interacting dark matter (SIDM) has the potential to significantly influence galaxy formation in comparison to the cold, collisionless dark matter paradigm (CDM), resulting in observable effects. This study aims to elucidate this influence and to demonstrate that the stellar mass Tully-Fisher relation imposes robust constraints on the parameter space of velocity-dependent SIDM models. We present a new set of cosmological hydrodynamical simulations that include the SIDM scheme from the TangoSIDM project and the SWIFT-EAGLE galaxy formation model. Two cosmological simulation suites were generated: one (Reference model) which yields good agreement with the observed $z=0$ galaxy stellar mass function, galaxy mass-size relation, and stellar-to-halo mass relation; and another (WeakStellarFB model) in which the stellar feedback is less efficient, particularly for Milky Way-like systems. Both galaxy formation models were simulated under four dark matter cosmologies: CDM, SIDM with two different velocity-dependent cross sections, and SIDM with a constant cross section. While SIDM does not modify global galaxy properties such as stellar masses and star formation rates, it does make the galaxies more extended. In Milky Way-like galaxies, where baryons dominate the central gravitational potential, SIDM thermalises, causing dark matter to accumulate in the central regions. This accumulation results in density profiles that are steeper than those produced in CDM from adiabatic contraction. The enhanced dark matter density in the central regions of galaxies causes a deviation in the slope of the Tully-Fisher relation, which significantly diverges from the observational data. In contrast, the Tully-Fisher relation derived from CDM models aligns well with observations.
Rathore, O et al., arXiv, 2024 (citations: 1)
Abstract
With the advent of exascale computing, effective load balancing in massively parallel software applications is critically important for leveraging the full potential of high performance computing systems. Load balancing is the distribution of computational work between available processors. Here, we investigate the application of quantum annealing to load balance two paradigmatic algorithms in high performance computing. Namely, adaptive mesh refinement and smoothed particle hydrodynamics are chosen as representative grid and off-grid target applications. While the methodology for obtaining real simulation data to partition is application specific, the proposed balancing protocol itself remains completely general. In a grid based context, quantum annealing is found to outperform classical methods such as the round robin protocol but lacks a decisive advantage over more advanced methods such as steepest descent or simulated annealing despite remaining competitive. The primary obstacle to scalability is found to be limited coupling on current quantum annealing hardware. However, for the more complex particle formulation, approached as a multi-objective optimization, quantum annealing solutions are demonstrably Pareto dominant to state of the art classical methods across both objectives. This signals a noteworthy advancement in solution quality which can have a large impact on effective CPU usage.
Ventura, E et al., MNRAS, 2024, vol. 529, issue 1 (citations: 10)
Abstract
We implemented Population III (Pop. III) star formation in mini-haloes within the MERAXES semi-analytic galaxy formation and reionization model, run on top of an N-body simulation with L = 10 h^-1 cMpc and 2048^3 particles, resolving all dark matter haloes down to the mini-haloes (~10^5 M⊙). Our modelling includes the chemical evolution of the IGM, with metals released through supernova-driven bubbles that expand according to the Sedov-Taylor model. We found that SN-driven metal bubbles are generally small, with typical radii of 150 ckpc at z = 6. Hence, the majority of the first galaxies are likely enriched by their own star formation. However, as reionization progresses, the feedback effects from the UV background become more pronounced, leading to a halt in star formation in low-mass galaxies, after which external chemical enrichment becomes more relevant. We explore the sensitivity of the star formation rate density and stellar mass functions to the unknown values of free parameters. We also discuss the observability of Pop. III dominated systems with JWST, finding that the inclusion of Pop. III galaxies can have a significant effect on the total UV luminosity function at z = 12-16. Our results support the idea that the excess of bright galaxies detected with JWST might be explained by the presence of bright top-heavy Pop. III dominated galaxies without requiring an increased star formation efficiency.
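For orientation, the Sedov–Taylor expansion referred to above gives, in its standard adiabatic form (the model used in the paper may add further stages, which the abstract does not specify), a bubble radius

\[
R_\mathrm{ST}(t) \simeq \xi\left(\frac{E\,t^{2}}{\rho}\right)^{1/5},
\qquad \xi \approx 1.15 \ \text{for}\ \gamma = 5/3 ,
\]

where $E$ is the energy injected by the supernovae and $\rho$ is the density of the ambient medium.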
Pizzati, E et al., MNRAS, 2024, vol. 528, issue 3 (citations: 7)
Abstract
Observations from wide-field quasar surveys indicate that the quasar autocorrelation length increases dramatically from z ≈ 2.5 to ≈ 4. This large clustering amplitude at z ≈ 4 has proven hard to interpret theoretically, as it implies that quasars are hosted by the most massive dark matter haloes residing in the most extreme environments at that redshift. In this work, we present a model that simultaneously reproduces both the observed quasar autocorrelation and quasar luminosity functions. The spatial distribution of haloes and their relative abundance are obtained via a novel method that computes the halo mass and halo cross-correlation functions by combining multiple large-volume dark-matter-only cosmological simulations with different box sizes and resolutions. Armed with these halo properties, our model exploits the conditional luminosity function framework to describe the stochastic relationship between quasar luminosity, L, and halo mass, M. Assuming a simple power-law relation L ∝ M^γ with lognormal scatter, σ, we are able to reproduce observations at z ~ 4 and find that: (i) the quasar luminosity-halo mass relation is highly non-linear (γ ≳ 2), with very little scatter (σ ≲ 0.3 dex); (ii) luminous quasars ($\log _{10} L/{\rm erg}\, {\rm s}^{-1}\gtrsim 46.5-47$) are hosted by haloes with mass log_10 M/M⊙ ≳ 13-13.5; and (iii) the implied duty cycle for quasar activity approaches unity ($\varepsilon_{\rm DC} \approx 10$-$60$ per cent). We also consider observations at z ≈ 2.5 and find that the quasar luminosity-halo mass relation evolves significantly with cosmic time, implying a rapid change in quasar host halo masses and duty cycles, which in turn suggests concurrent evolution in black hole scaling relations and/or accretion efficiency.
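A minimal sketch of the assumed luminosity-halo mass relation, $L \propto M^{\gamma}$ with lognormal scatter $\sigma$ in dex; the pivot values below are illustrative placeholders, not numbers from the paper:

```python
import numpy as np

def sample_quasar_luminosity(m_halo, gamma, sigma_dex, l_pivot=1e46, m_pivot=1e13,
                             rng=np.random.default_rng(0)):
    """Draw luminosities from a power law L ~ M^gamma with lognormal scatter sigma (in dex).
    l_pivot [erg/s] and m_pivot [M_sun] are hypothetical normalisations for illustration."""
    median_l = l_pivot * (m_halo / m_pivot) ** gamma
    return median_l * 10.0 ** (sigma_dex * rng.standard_normal(np.shape(m_halo)))

halo_masses = np.logspace(12.5, 13.5, 5)                                 # M_sun
print(sample_quasar_luminosity(halo_masses, gamma=2.5, sigma_dex=0.3))   # erg/s
```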
Schaller, M, MNRAS, 2024, vol. 529, issue 1 (citations: 0)
Abstract
In his 2021 lecture to the Canadian Association of Physicists Congress, P.J.E. Peebles pointed out that the brightest extragalactic radio sources tend to be aligned with the plane of the de Vaucouleurs Local Supercluster up to redshifts of z = 0.02 ($d_{\rm MW}\approx 85~\rm {Mpc}$). He then asked whether such an alignment of clusters is anomalous in the standard Lambda cold dark matter (ΛCDM) framework. In this letter, we employ an alternative, absolute-orientation-agnostic measure of the anisotropy based on the inertia tensor axial ratio of these brightest sources and use a large cosmological simulation from the FLAMINGO suite to measure how common such an alignment of structures is. We find that only 3.5 per cent of randomly selected regions display an anisotropy of their clusters more extreme than the one found in the local Universe's radio data. This sets the region around the Milky Way as a 1.85σ outlier. Varying the selection parameters of the objects in the catalogue, we find that the clusters in the local Universe are never more than 2σ away from the simulations' prediction for the same selection. We thus conclude that the reported anisotropy, whilst noteworthy, is not in tension with the ΛCDM paradigm.
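The anisotropy statistic used here is based on the inertia (second-moment) tensor of the brightest sources. A minimal sketch of an axial-ratio measurement of this kind is given below; the exact weighting and normalisation conventions of the paper may differ:

```python
import numpy as np

def axial_ratio(positions, weights=None):
    """Ratio of the shortest to longest axis of the inertia tensor of a point set:
    values much smaller than 1 indicate a flattened (plane-like) configuration."""
    x = np.asarray(positions, dtype=float)
    w = np.ones(len(x)) if weights is None else np.asarray(weights, dtype=float)
    x = x - np.average(x, axis=0, weights=w)              # centre on the weighted centroid
    tensor = (w[:, None, None] * x[:, :, None] * x[:, None, :]).sum(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(tensor))         # principal second moments
    return np.sqrt(eigvals[0] / eigvals[-1])

# Example: positions of the brightest clusters in a randomly selected simulated region.
rng = np.random.default_rng(0)
clusters = rng.normal(size=(30, 3)) * np.array([100.0, 100.0, 20.0])   # Mpc, flattened toy set
print(axial_ratio(clusters))
```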
Ploeckinger, S et al., MNRAS, 2024, vol. 528, issue 2 (citations: 2)
Abstract
Large-scale cosmological galaxy formation simulations typically prevent gas in the interstellar medium (ISM) from cooling below $\approx 10^4\, \mathrm{K}$. This has been motivated by the inability to resolve the Jeans mass in molecular gas ($\ll 10^5\, \mathrm{M}_{\odot }$) which would result in undesired artificial clumping. We show that the classical Jeans criteria derived for Newtonian gravity are not applicable in the simulated ISM if the spacing of resolution elements representing the dense ISM is below the gravitational force softening length and gravity is therefore softened and not Newtonian. We re-derive the Jeans criteria for softened gravity in Lagrangian codes and use them to analyse gravitational instabilities at and below the hydrodynamical resolution limit for simulations with adaptive and constant gravitational softening lengths. In addition, we define criteria for which a numerical runaway collapse of dense gas clumps can occur caused by oversmoothing of the hydrodynamical properties relative to the gravitational force resolution. This effect is illustrated using simulations of isolated disc galaxies with the smoothed particle hydrodynamics code SWIFT. We also demonstrate how to avoid the formation of artificial clumps in gas and stars by adjusting the gravitational and hydrodynamical force resolutions.
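For context, the classical (Newtonian-gravity) criteria that the paper argues no longer apply once the resolution-element spacing falls below the softening length are the usual Jeans length and mass,

\[
\lambda_\mathrm{J} = c_\mathrm{s}\sqrt{\frac{\pi}{G\,\rho}},
\qquad
M_\mathrm{J} = \frac{4\pi}{3}\,\rho\left(\frac{\lambda_\mathrm{J}}{2}\right)^{3},
\]

with $c_\mathrm{s}$ the sound speed (conventions for the Jeans mass prefactor vary); the softened-gravity analogues are derived in the paper itself.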
Strawn, C et al., ApJ, 2024, vol. 962, issue 1 (citations: 4)
Abstract
We analyze the circumgalactic medium (CGM) for eight commonly used cosmological codes in the AGORA collaboration. The codes are calibrated to use identical initial conditions, cosmology, heating and cooling, and star formation thresholds, but each evolves with its own unique code architecture and stellar feedback implementation. Here, we analyze the results of these simulations in terms of the structure, composition, and phase dynamics of the CGM. We show that properties such as metal distribution, ionization levels, and kinematics are effective tracers of the effects of the different code feedback and implementation methods, and as such they can be highly divergent between simulations. This is merely a fiducial set of models, against which we will in the future compare multiple feedback recipes for each code. Nevertheless, we find that the large parameter space these simulations establish can help disentangle the different variables that affect observable quantities in the CGM, e.g., showing that abundances for ions with higher ionization energy are more strongly determined by the simulation's metallicity, while abundances for ions with lower ionization energy are more strongly determined by the gas density and temperature.
Chan, T et al., MNRAS, 2024, vol. 528, issue 2 (citations: 9)
Abstract
An ionization front (I-front) that propagates through an inhomogeneous medium is slowed down by self-shielding and recombinations. We perform cosmological radiation hydrodynamics simulations of the I-front propagation during the epoch of cosmic reionization. The simulations resolve gas in mini-haloes (halo mass 10^4 ≲ M_h[M⊙] ≲ 10^8) that could dominate recombinations, in a computational volume that is large enough to sample the abundance of such haloes. The numerical resolution is sufficient (gas-particle mass ~20 M⊙ and spatial resolution <0.1 ckpc) to allow accurate modelling of the hydrodynamic response of gas to photoheating. We quantify the photoevaporation time of mini-haloes as a function of M_h and its dependence on the photoionization rate, Γ_-12, and the redshift of reionization, z_i. The recombination rate can be enhanced over that of a uniform medium by a factor ~10-20 early on. The peak value increases with Γ_-12 and decreases with z_i, due to the enhanced contribution from mini-haloes. The clumping factor, c_r, decreases to a factor of a few at ~100 Myr after the passage of the I-front when the mini-haloes have been photoevaporated; this asymptotic value depends only weakly on Γ_-12. Recombinations increase the required number of photons per baryon to reionize the Universe by 20-100 per cent, with the higher value occurring when Γ_-12 is high and z_i is low. We complement the numerical simulations with simple analytical models for the evaporation rate and the inverse Strömgren layer. The study also demonstrates the proficiency and potential of SPH-M1RT to address astrophysical problems in high-resolution cosmological simulations.
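The clumping factor quoted above is, in one common convention (the paper may restrict the average to gas below a density threshold or weight it by the recombination rate),

\[
c_{r} \equiv \frac{\langle n_\mathrm{H}^{2} \rangle}{\langle n_\mathrm{H} \rangle^{2}} ,
\]

so that $c_r > 1$ quantifies how much an inhomogeneous medium boosts the recombination rate relative to a uniform medium at the same mean density.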
Lock, S et al., PSJ, 2024 , vol. 5 , issue 2 (citations: 2)
Abstract
Earth likely acquired much of its inventory of volatile elements during the main stage of its formation. Some of Earth's proto-atmosphere must therefore have survived the giant impacts, collisions between planet-sized bodies, that dominate the latter phases of accretion. Here, we use a suite of 1D hydrodynamic simulations and impedance-match calculations to quantify the effect that preimpact surface conditions (such as atmospheric pressure and the presence of an ocean) have on the efficiency of atmospheric and ocean loss from protoplanets during giant impacts. We find that—in the absence of an ocean—lighter, hotter, and lower-pressure atmospheres are more easily lost. The presence of an ocean can significantly increase the efficiency of atmospheric loss compared to the no-ocean case, with a rapid transition between low- and high-loss regimes as the mass ratio of atmosphere to ocean decreases. However, contrary to previous thinking, the presence of an ocean can also reduce atmospheric loss if the ocean is not sufficiently massive, typically less than a few times the atmospheric mass. Volatile loss due to giant impacts is thus highly sensitive to the surface conditions on the colliding bodies. To allow our results to be combined with 3D impact simulations, we have developed scaling laws that relate loss to the ground velocity and surface conditions. Our results demonstrate that the final volatile budgets of planets are critically dependent on the exact timing and sequence of impacts experienced by their precursor planetary embryos, making atmospheric properties a highly stochastic outcome of accretion.
Husko, F et al., MNRAS, 2024 , vol. 527 , issue 3 (citations: 11)
Abstract
Using the SWIFT simulation code, we compare the effects of different forms of active galactic nuclei (AGNs) feedback in idealized galaxy groups and clusters. We first present a physically motivated model of black hole (BH) spin evolution and a numerical implementation of thermal isotropic feedback (representing the effects of energy-driven winds) and collimated kinetic jets that they launch at different accretion rates. We find that kinetic jet feedback is more efficient at quenching star formation in the brightest cluster galaxies (BCGs) than thermal isotropic feedback, while simultaneously yielding cooler cores in the intracluster medium (ICM). A hybrid model with both types of AGN feedback yields moderate star formation rates, while having the coolest cores. We then consider a simplified implementation of AGN feedback by fixing the feedback efficiencies and the jet direction, finding that the same general conclusions hold. We vary the feedback energetics (the kick velocity and the heating temperature), the fixed efficiencies and the type of energy (kinetic versus thermal) in both the isotropic and the jet case. The isotropic case is largely insensitive to these variations. On the other hand, jet feedback must be kinetic in order to be efficient at quenching. We also find that it is much more sensitive to the choice of energy per feedback event (the jet velocity), as well as the efficiency. The former indicates that jet velocities need to be carefully chosen in cosmological simulations, while the latter motivates the use of BH spin evolution models.
Schaye, J et al., MNRAS, 2023 , vol. 526 , issue 4 (citations: 114)
Abstract
We introduce the Virgo Consortium's FLAMINGO suite of hydrodynamical simulations for cosmology and galaxy cluster physics. To ensure the simulations are sufficiently realistic for studies of large-scale structure, the subgrid prescriptions for stellar and AGN feedback are calibrated to the observed low-redshift galaxy stellar mass function and cluster gas fractions. The calibration is performed using machine learning, separately for each of FLAMINGO's three resolutions. This approach enables the model to be specified by the observables to which it is calibrated. The calibration accounts for a number of potential observational biases and for random errors in the observed stellar masses. The two most demanding simulations have box sizes of 1.0 and 2.8 Gpc on a side and baryonic particle masses of 1 × 10^8 and $1\times 10^9\, \text{M}_\odot$, respectively. For the latter resolution, the suite includes 12 model variations in a 1 Gpc box. There are 8 variations at fixed cosmology, including shifts in the stellar mass function and/or the cluster gas fractions to which we calibrate, and two alternative implementations of AGN feedback (thermal or jets). The remaining 4 variations use the unmodified calibration data but different cosmologies, including different neutrino masses. The 2.8 Gpc simulation follows 3 × 10^11 particles, making it the largest hydrodynamical simulation ever run to z = 0. Light-cone output is produced on the fly for up to 8 different observers. We investigate numerical convergence, show that the simulations reproduce the calibration data, and compare with a number of galaxy, cluster, and large-scale structure observations, finding very good agreement with the data for converged predictions. Finally, by comparing hydrodynamical and 'dark-matter-only' simulations, we confirm that baryonic effects can suppress the halo mass function and the matter power spectrum by up to ≈20 per cent.
McCarthy, I et al., MNRAS, 2023 , vol. 526 , issue 4 (citations: 19)
Abstract
A number of recent studies have found evidence for a tension between observations of large-scale structure (LSS) and the predictions of the standard model of cosmology with the cosmological parameters fit to the cosmic microwave background (CMB). The origin of this 'S_8 tension' remains unclear, but possibilities include new physics beyond the standard model, unaccounted-for systematic errors in the observational measurements, and/or uncertainties in the role that baryons play. Here, we carefully examine the latter possibility using the new FLAMINGO suite of large-volume cosmological hydrodynamical simulations. We project the simulations onto observable harmonic space and compare with observational measurements of the power and cross-power spectra of cosmic shear, CMB lensing, and the thermal Sunyaev-Zel'dovich (tSZ) effect. We explore the dependence of the predictions on box size, resolution, and cosmological parameters, including the neutrino mass, as well as on the efficiency and nature of baryonic 'feedback'. Despite the wide range of astrophysical behaviours simulated, we find that baryonic effects are not sufficiently large to remove the S_8 tension. Consistent with recent studies, we find the CMB lensing power spectrum is in excellent agreement with the standard model, while the cosmic shear power spectrum, tSZ effect power spectrum, and the cross-spectra between shear, CMB lensing, and the tSZ effect are all in varying degrees of tension with the CMB-specified standard model. These results suggest that some mechanism is required to slow the growth of fluctuations at late times and/or on non-linear scales, but that it is unlikely that baryon physics is driving this modification.
Roper, W et al., MNRAS, 2023 , vol. 526 , issue 4 (citations: 17)
Abstract
In the First Light And Reionization Epoch Simulations (FLARES) suite of hydrodynamical simulations, we find the high-redshift (z > 5) intrinsic size-luminosity relation is, surprisingly, negatively sloped. However, after including the effects of dust attenuation, we find a positively sloped UV observed size-luminosity relation in good agreement with other simulated and observational studies. In this work, we extend this analysis to probe the underlying physical mechanisms driving the formation and evolution of the compact galaxies driving the negative size-mass/size-luminosity relation. We find the majority of compact galaxies (R_1/2,⋆ < 1 pkpc), which drive the negative slope of the size-mass relation, have transitioned from extended to compact sizes via efficient centralized cooling, resulting in high specific star formation rates in their cores. These compact stellar systems are enshrouded by non-star-forming gas distributions as much as 100 times larger than their stellar counterparts. By comparing with galaxies from the EAGLE simulation suite, we find that these extended gas distributions 'turn on' and begin to form stars between z = 5 and 0, leading to increasing sizes, and thus the evolution of the size-mass relation from a negative to a positive slope. This explicitly demonstrates the process of inside-out galaxy formation in which compact bulges form earlier than the surrounding discs.
Altamura, E, PhDT, 2023 (citations: 0)
Abstract
Hydrodynamic simulations have become irreplaceable in modern cosmology for exploring complex systems and making predictions to steer future observations. In Chapter 1, we begin with a philosophical discussion on the role of simulations in science. We argue that simulations can bridge the gap between empirical and fundamental knowledge. The validation of simulations stresses the importance of achieving a balance between trustworthiness and scepticism. Next, Chapter 2 introduces the formation of structures and comparisons between synthetic and observational data. Chapter 3 describes the production pipeline of zoom-in simulations used to model individual objects and novel methods to mitigate known shortcomings. We then assess the weak scaling of the SWIFT code and find it to be one of the hydrodynamic codes with the highest parallel efficiency. In Chapter 4, we study the rotational kinetic Sunyaev-Zeldovich (rkSZ) effect for high-mass galaxy clusters from the MACSIS simulations. We find a maximum signal greater than 100 $\mu$K, 30 times stronger than early predictions from self-similar models, opening prospects for future detection. In Chapter 5, we address a tension between the distribution of entropy measured from observations and that predicted by simulations of groups and clusters of galaxies. We find that most recent hydrodynamic simulations systematically over-predict the entropy profiles by up to one order of magnitude, leading to profiles that are shallower and higher than the power-law-like entropy profiles that have been observed. We discuss the dependence on different hydrodynamic and sub-grid parameters using variations of the EAGLE model. Chapter 6 explores the evolution of the profiles as a function of cosmic time. We report power-law-like entropy profiles at high redshift for both objects. However, at late times, an entropy plateau develops and alters the shape of the profile.
Baldi, R, A&ARv, 2023 , vol. 31 , issue 1 (citations: 15)
Abstract
Radio-loud compact radio sources (CRSs) are characterised by the morphological compactness of the jet structure centred on the active nucleus of the galaxy. Most local elliptical galaxies are found to host a CRS with nuclear luminosities lower than those of typical quasars, ≲ 10^42 erg s^-1. Recently, low-luminosity CRSs with a LINER-like optical spectrum have been named Fanaroff-Riley (FR) type 0 to highlight their lack of substantially extended radio emission at kpc scales, in contrast with the other Fanaroff-Riley classes, the full-fledged FR I and FR II radio galaxies. FR 0s are the most abundant class of radio galaxies in the local Universe, and are characterised by a higher core dominance, a poorer Mpc-scale environment, and smaller (sub-kpc scale, if resolved) jets than FR Is. However, FR 0s share similar host and nuclear properties with FR Is. A different accretion-ejection paradigm from that in place in FR Is is invoked to account for the parsec-scale FR 0 jets. This review summarises the state-of-the-art knowledge about FR 0s, their nature, and the open issues that the next generation of radio telescopes can solve in this context.
Borrow, J et al., MNRAS, 2023 , vol. 526 , issue 2 (citations: 24)
Abstract
All modern galaxy formation models employ stochastic elements in their sub-grid prescriptions to discretize continuous equations across the time domain. In this paper, we investigate how the stochastic nature of these models (notably star formation, black hole accretion, and their associated feedback), which act on small (< kpc) scales, can back-react on macroscopic galaxy properties (e.g. stellar mass and size) across long (> Gyr) time-scales. We find that the scatter in scaling relations predicted by the EAGLE model implemented in the SWIFT code can be significantly impacted by random variability between re-simulations of the same object, even when galaxies are resolved by tens of thousands of particles. We then illustrate how re-simulations of the same object can be used to better understand the underlying model, by showing how correlations between galaxy stellar mass and black hole mass disappear at the highest black hole masses (M_BH > 10^8 M⊙), indicating that the feedback cycle may be interrupted by external processes. We find that although properties that are collected cumulatively over many objects are relatively robust against random variability (e.g. the median of a scaling relation), the properties of individual galaxies (such as galaxy stellar mass) can vary by up to 25 per cent, even far into the well-resolved regime, driven by bursty physics (black hole feedback) and mergers between galaxies. We suggest that studies of individual objects within cosmological simulations be treated with caution, and that any studies aiming to closely investigate such objects must account for random variability within their results.
Kugel, R et al., MNRAS, 2023 , vol. 526 , issue 4 (citations: 34)
Abstract
To fully take advantage of the data provided by large-scale structure surveys, we need to quantify the potential impact of baryonic effects, such as feedback from active galactic nuclei (AGN) and star formation, on cosmological observables. In simulations, feedback processes originate on scales that remain unresolved. Therefore, they need to be sourced via subgrid models that contain free parameters. We use machine learning to calibrate the AGN and stellar feedback models for the FLAMINGO (Full-hydro Large-scale structure simulations with All-sky Mapping for the Interpretation of Next Generation Observations) cosmological hydrodynamical simulations. Using Gaussian process emulators trained on Latin hypercubes of 32 smaller-volume simulations, we model how the galaxy stellar mass function (SMF) and cluster gas fractions change as a function of the subgrid parameters. The emulators are then fit to observational data, allowing for the inclusion of potential observational biases. We apply our method to the three different FLAMINGO resolutions, spanning a factor of 64 in particle mass, recovering the observed relations within the respective resolved mass ranges. We also use the emulators, which link changes in subgrid parameters to changes in observables, to find models that skirt or exceed the observationally allowed range for cluster gas fractions and the SMF. Our method enables us to define model variations in terms of the data that they are calibrated to rather than the values of specific subgrid parameters. This approach is useful because subgrid parameters are typically not directly linked to particular observables, and predictions for a specific observable are influenced by multiple subgrid parameters.
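A minimal sketch of the calibration idea described above (emulate an observable as a function of subgrid parameters sampled on a Latin hypercube, then compare the emulated surface to data); the parameter names, ranges, toy "observable", and target value below are illustrative assumptions, not the FLAMINGO pipeline:

```python
# Toy emulator-calibration sketch: Latin-hypercube design + Gaussian process,
# loosely following the workflow described in the abstract (illustrative only).
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(42)

# 1) Latin-hypercube design over two hypothetical subgrid parameters.
sampler = qmc.LatinHypercube(d=2, seed=42)
design = qmc.scale(sampler.random(n=32), l_bounds=[0.1, 7.5], u_bounds=[10.0, 9.5])

# 2) Stand-in for "run a simulation and measure an observable"
#    (e.g. a cluster gas fraction); here a smooth toy function plus noise.
def mock_observable(theta):
    agn_eff, heat_temp = theta
    return 0.12 - 0.02 * np.log10(agn_eff) - 0.01 * (heat_temp - 8.5)

y = np.array([mock_observable(t) for t in design]) + rng.normal(0, 1e-3, 32)

# 3) Train the emulator on the design points.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[1.0, 1.0]),
                              normalize_y=True)
gp.fit(design, y)

# 4) "Fit" to a mock data point by brute-force search over the emulated surface.
grid = qmc.scale(sampler.random(n=4096), l_bounds=[0.1, 7.5], u_bounds=[10.0, 9.5])
pred = gp.predict(grid)
best = grid[np.argmin((pred - 0.10) ** 2)]   # target observable value 0.10 (made up)
print("emulator-preferred parameters:", best)
```

The actual calibration additionally emulates the full stellar mass function and gas-fraction data vectors and fits them while allowing for observational biases, as the abstract describes.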
Revaz, Y, A&A, 2023 , vol. 679 (citations: 4)
Abstract
So far, numerical simulations of ultra-faint dwarf galaxies (UFDs) have failed to properly reproduce the observed size-luminosity relation. In particular, no hydrodynamical simulation run has managed to form UFDs with a half-light radius as small as 30 pc, as seen in observations of several UFD candidates. We tackle this problem by developing a simple but numerically clean and powerful method in which predictions of the stellar content of UFDs from ΛCDM cosmological hydrodynamical simulations are combined with very high-resolution dark-matter-only runs. This method allows us to trace the buildup history of UFDs and to determine the impact of the merger of building-block objects on their final size. We find that, while no UFDs more compact than 20 pc can be formed, slightly larger systems are only reproduced if all member stars originate from the same initial mini-halo. However, this imposes that (i) the total virial mass is smaller than 3 × 10^8 M⊙, and (ii) the stellar content prior to the end of the reionisation epoch is very compact (< 15 pc) and strongly gravitationally bound, which is a challenge for current hydrodynamical numerical simulations. If initial stellar building blocks are larger than 35 pc, the size of the UFD will extend to 80 pc. Finally, our study shows that UFDs keep strong imprints of their buildup history in the form of elongated or extended stellar halos. Those features can erroneously be interpreted as tidal signatures.
Zimmer, F et al., JCAP, 2023 , vol. 2023 , issue 11 (citations: 10)
Abstract
Gravitational potentials of the Milky Way and extragalactic structures can influence the propagation of the cosmic neutrino background (CNB). Of particular interest to future CNB observatories, such as PTOLEMY, is the CNB number density on Earth. In this study, we have developed a simulation framework that maps the trajectories of relic neutrinos as they move through the local gravitational environment. The potentials are based on the dark matter halos found in state-of-the-art cosmological N-body simulations, resulting in a more nuanced and realistic input than the previously employed analytical models. We find that the complex dark matter distributions, along with their dynamic evolution, influence the abundance and anisotropies of the CNB in ways unaccounted for by earlier analytical methods. Importantly, these cosmological simulations contain multiple instances of Milky Way-like halos that we employ to model a variety of gravitational landscapes. Consequently, we notice a variation in the CNB number densities that can be primarily attributed to the differences in the masses of these individual halos. For neutrino masses between 0.01 and 0.3 eV, we note clustering factors within the range of 1 + 𝒪(10^-3) to 1 + 𝒪(1). Furthermore, the asymmetric nature of the underlying dark matter distributions within the halos results in not only overdense, but intriguingly, underdense regions within the full-sky anisotropy maps. Gravitational clustering appears to have a significant impact on the angular power spectra of these maps, leading to orders of magnitude more power on smaller scales beyond multipoles of ℓ = 3 when juxtaposed against predictions by primordial fluctuations. We discuss how our results reshape our understanding of relic neutrino clustering and how this might affect observability of future CNB observatories such as PTOLEMY. GitHub: our simulation code will be made visible here.
Yuan, Q et al., Natur, 2023 , vol. 623 , issue 7985 (citations: 10)
Abstract
Seismic images of Earth's interior have revealed two continent-sized anomalies with low seismic velocities, known as the large low-velocity provinces (LLVPs), in the lowermost mantle (ref. 1). The LLVPs are often interpreted as intrinsically dense heterogeneities that are compositionally distinct from the surrounding mantle (ref. 2). Here we show that LLVPs may represent buried relics of Theia mantle material (TMM) that was preserved in proto-Earth's mantle after the Moon-forming giant impact (ref. 3). Our canonical giant-impact simulations show that a fraction of Theia's mantle could have been delivered to proto-Earth's solid lower mantle. We find that TMM is intrinsically 2.0-3.5% denser than proto-Earth's mantle based on models of Theia's mantle and the observed higher FeO content of the Moon. Our mantle convection models show that dense TMM blobs with a size of tens of kilometres after the impact can later sink and accumulate into LLVP-like thermochemical piles atop Earth's core and survive to the present day. The LLVPs may, thus, be a natural consequence of the Moon-forming giant impact. Because giant impacts are common at the end stages of planet accretion, similar mantle heterogeneities caused by impacts may also exist in the interiors of other planetary bodies.
Qin, Y et al., MNRAS, 2023 , vol. 526 , issue 1 (citations: 23)
Abstract
Using a semi-analytic galaxy formation model, we study analogues of eight z ≳ 12 galaxies recently discovered by the James Webb Space Telescope (JWST). We select analogues from a cosmological simulation with a (311 cMpc)^3 volume and an effective particle number of 10^12, enabling the resolution of every atomic-cooling galaxy at z ≤ 20. We vary model parameters to reproduce the observed ultraviolet (UV) luminosity function at 5 < z < 13, aiming for a statistically representative high-redshift galaxy mock catalogue. Using the forward-modelled JWST photometry, we identify analogues from this catalogue and study their properties as well as possible evolutionary paths and local environment. We find faint JWST galaxies (M_UV ≳ -19.5) to remain consistent with the standard galaxy formation model and that our fiducial catalogue includes large samples of their analogues. The properties of these analogues broadly agree with conventional spectral energy distribution-fitting results, except for having systematically lower redshifts due to the evolving ultraviolet luminosity function, and for having higher specific star formation rates as a result of burstier histories in our model. On the other hand, only a handful of bright galaxy analogues can be identified for the observed z ~ 12 galaxies. Moreover, in order to reproduce the z ≳ 16 JWST galaxy candidates, boosting star-forming efficiencies through reduced feedback regulation and an increased gas depletion rate is necessary relative to models of lower redshift populations. This suggests star formation in the first galaxies could differ significantly from that in their lower redshift counterparts. We also find that these candidates are subject to low-redshift contamination, which is present in our fiducial results as both dusty and quiescent galaxies at z ~ 5.
Shao, H et al., ApJ, 2023 , vol. 956 , issue 2 (citations: 10)
Abstract
We discover analytic equations that can infer the value of Ω_m from the positions and velocity moduli of halo and galaxy catalogs. The equations are derived by combining a tailored graph neural network (GNN) architecture with symbolic regression. We first train the GNN on dark matter halos from Gadget N-body simulations to perform field-level likelihood-free inference, and show that our model can infer Ω_m with ~6% accuracy from halo catalogs of thousands of N-body simulations run with six different codes: Abacus, CUBEP3M, Gadget, Enzo, PKDGrav3, and Ramses. By applying symbolic regression to the different parts comprising the GNN, we derive equations that can predict Ω_m from halo catalogs of simulations run with all of the above codes with accuracies similar to those of the GNN. We show that, by tuning a single free parameter, our equations can also infer the value of Ω_m from galaxy catalogs of thousands of state-of-the-art hydrodynamic simulations of the CAMELS project, each with a different astrophysics model, run with five distinct codes that employ different subgrid physics: IllustrisTNG, SIMBA, Astrid, Magneticum, SWIFT-EAGLE. Furthermore, the equations also perform well when tested on galaxy catalogs from simulations covering a vast region in parameter space that samples variations in 5 cosmological and 23 astrophysical parameters. We speculate that the equations may reflect the existence of a fundamental physics relation between the phase-space distribution of generic tracers and Ω_m, one that is not affected by galaxy formation physics down to scales as small as 10 h^-1 kpc.
de Santi, N et al., arXiv, 2023 (citations: 4)
Abstract
It has recently been shown that a powerful way to constrain cosmological parameters from galaxy redshift surveys is to train graph neural networks to perform field-level likelihood-free inference without imposing cuts on scale. In particular, de Santi et al. (2023) developed models that can accurately infer the value of $\Omega_{\rm m}$ from catalogs containing only the positions and radial velocities of galaxies, and that are robust to uncertainties in astrophysics and subgrid models. However, observations are affected by many effects, including 1) masking, 2) uncertainties in peculiar velocities and radial distances, and 3) different galaxy selections. Moreover, observations only allow us to measure redshift, intertwining galaxies' radial positions and velocities. In this paper we train and test our models on galaxy catalogs, created from thousands of state-of-the-art hydrodynamic simulations run with different codes from the CAMELS project, that incorporate these observational effects. We find that, although the presence of these effects degrades the precision and accuracy of the models, and increases the fraction of catalogs where the model breaks down, the fraction of galaxy catalogs where the model performs well is over 90 per cent, demonstrating the potential of these models to constrain cosmological parameters even when applied to real data.
Balu, S et al., MNRAS, 2023 , vol. 525 , issue 2 (citations: 2)
Abstract
The hyperfine 21-cm transition of neutral hydrogen from the early Universe (z > 5) is a sensitive probe of the formation and evolution of the first luminous sources. Using the Fisher matrix formalism, we explore the complex and degenerate high-dimensional parameter space associated with the high-z sources of this era and forecast quantitative constraints from a future 21-cm power spectrum (21-cm PS) detection. This is achieved using MERAXES, a coupled semi-analytic galaxy formation model and reionization simulation, applied to an N-body halo merger tree with a statistically complete population of all atomically cooled galaxies out to z ~ 20. Our mock observation assumes a 21-cm detection spanning z ∈ [5, 24] from a 1000 h mock observation with the forthcoming Square Kilometre Array, and is calibrated with respect to ultraviolet luminosity functions (UV LFs) at z ∈ [5, 10], the optical depth of CMB photons to Thomson scattering from Planck, and various constraints on the IGM neutral fraction at z > 5. In this work, we focus on the X-ray luminosity, ionizing UV photon escape fraction, star formation, and supernova feedback of the first galaxies. We demonstrate that it is possible to recover five of the eight parameters describing these properties with better than 50 per cent precision using just the 21-cm PS. By combining with UV LFs, we are able to improve our forecast, with five of the eight parameters constrained to better than 10 per cent (and all below 50 per cent).
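For context, the Fisher matrix forecast referred to above follows the standard construction; in its simplest Gaussian form, for parameters θ and binned 21-cm power spectrum measurements with variance σ²(k, z), it is

$$
F_{ij} = \sum_{k,\,z\ \mathrm{bins}} \frac{1}{\sigma^{2}(k,z)}\,
\frac{\partial P_{21}(k,z)}{\partial \theta_{i}}\,
\frac{\partial P_{21}(k,z)}{\partial \theta_{j}},
\qquad
\sigma(\theta_{i}) \ge \sqrt{\left(F^{-1}\right)_{ii}},
$$

where the second relation (the Cramér-Rao bound) gives the forecast marginalised uncertainty on each parameter.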
Kenworthy, M et al., Natur, 2023 , vol. 622 , issue 7982 (citations: 6)
Abstract
Planets grow in rotating disks of dust and gas around forming stars, some of which can subsequently collide in giant impacts after the gas component is removed from the disk (refs. 1-3). Monitoring programmes with the warm Spitzer mission have recorded substantial and rapid changes in mid-infrared output for several stars, interpreted as variations in the surface area of warm, dusty material ejected by planetary-scale collisions and heated by the central star: for example, NGC 2354-ID8 (refs. 4,5), HD 166191 (ref. 6) and V488 Persei (ref. 7). Here we report combined observations of the young (about 300 million years old), solar-like star ASASSN-21qj: an infrared brightening consistent with a blackbody temperature of 1,000 Kelvin and a luminosity that is 4 percent that of the star lasting for about 1,000 days, partially overlapping in time with a complex and deep, wavelength-dependent optical eclipse that lasted for about 500 days. The optical eclipse started 2.5 years after the infrared brightening, implying an orbital period of at least that duration. These observations are consistent with a collision between two exoplanets of several to tens of Earth masses at 2-16 astronomical units from the central star. Such an impact produces a hot, highly extended post-impact remnant with sufficient luminosity to explain the infrared observations. Transit of the impact debris, sheared by orbital motion into a long cloud, causes the subsequent complex eclipse of the host star.
Teodoro, L et al., ApJ, 2023 , vol. 955 , issue 2 (citations: 9)
Abstract
We simulate the collision of precursor icy moons analogous to Dione and Rhea as a possible origin for Saturn's remarkably young rings. Such an event could have been triggered a few hundred million years ago by resonant instabilities in a previous satellite system. Using high-resolution smoothed particle hydrodynamics simulations, we find that this kind of impact can produce a wide distribution of massive objects and scatter material throughout the system. This includes the direct placement of pure-ice ejecta onto orbits that enter Saturn's Roche limit, which could form or rejuvenate rings. In addition, fragments and debris of rock and ice totaling more than the mass of Enceladus can be placed onto highly eccentric orbits that would intersect with any precursor moons orbiting in the vicinity of Mimas, Enceladus, or Tethys. This could prompt further disruption and facilitate a collisional cascade to distribute more debris for potential ring formation, the re-formation of the present-day moons, and evolution into an eventual cratering population of planetocentric impactors.
Chaikin, E et al., MNRAS, 2023 , vol. 523 , issue 3 (citations: 14)
Abstract
We present a subgrid model for supernova feedback designed for cosmological simulations of galaxy formation that may include a cold interstellar medium (ISM). The model uses thermal and kinetic channels of energy injection, which are built upon the stochastic kinetic and thermal models for stellar feedback used in the OWLS and EAGLE simulations, respectively. In the thermal channel, the energy is distributed statistically isotropically and injected stochastically in large amounts per event, which minimizes spurious radiative energy losses. In the kinetic channel, we inject the energy in small portions by kicking gas particles in pairs in opposite directions. The implementation of kinetic feedback is designed to conserve energy, linear and angular momentum, and is statistically isotropic. To test the model, we run simulations of isolated Milky Way-mass and dwarf galaxies, in which the gas is allowed to cool down to 10 K. Using the thermal and kinetic channels together, we obtain smooth star formation histories and powerful galactic winds with realistic mass loading factors. Furthermore, the model produces spatially resolved star formation rates (SFRs) and velocity dispersions that are in agreement with observations. We vary the numerical resolution by several orders of magnitude and find excellent convergence of the global SFRs and wind mass loading. We show that large thermal energy injections generate a hot phase of the ISM and modulate the star formation by ejecting gas from the disc, while the low-energy kicks increase the turbulent velocity dispersion in the neutral ISM, which in turn helps suppress star formation.
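A minimal sketch of the momentum-conserving "paired kick" idea described for the kinetic channel, assuming two neighbouring gas particles are kicked in opposite directions along a chosen axis; the masses, energies, and choice of kick axis below are illustrative and not the paper's exact algorithm:

```python
# Illustrative paired kinetic kick: inject energy dE into two gas particles
# with equal and opposite momentum changes, so linear momentum is conserved.
import numpy as np

def paired_kick(v1, v2, m1, m2, dE, axis):
    """Kick particles 1 and 2 in opposite directions along 'axis' with equal
    and opposite momentum changes, choosing the magnitude so that the kinetic
    energy added (for particles initially at rest) equals dE."""
    n = np.asarray(axis, dtype=float)
    n /= np.linalg.norm(n)
    # dE = dp^2/(2 m1) + dp^2/(2 m2)  =>  dp = sqrt(2 dE / (1/m1 + 1/m2))
    dp = np.sqrt(2.0 * dE / (1.0 / m1 + 1.0 / m2))
    return v1 + (dp / m1) * n, v2 - (dp / m2) * n

# Toy example in CGS: two 1e5 Msun gas particles sharing 1e49 erg.
Msun = 1.989e33                      # g
m = 1e5 * Msun
v1, v2 = np.zeros(3), np.zeros(3)    # cm/s
v1n, v2n = paired_kick(v1, v2, m, m, dE=1e49, axis=[1.0, 0.0, 0.0])
print(v1n, v2n, "net momentum:", m * v1n + m * v2n)   # net momentum stays zero
```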
McGibbon, R et al., MNRAS, 2023 , vol. 523 , issue 4 (citations: 0)
Abstract
Using a novel machine learning method, we investigate the buildup of galaxy properties in different simulations, and in various environments within a single simulation. The aim of this work is to show the power of this approach at identifying the physical drivers of galaxy properties within simulations. We compare how the stellar mass is dependent on the value of other galaxy and halo properties at different points in time by examining the feature importance values of a machine learning model. By training the model on IllustrisTNG, we show that stars are produced at earlier times in higher density regions of the universe than they are in low density regions. We also apply the technique to the Illustris, EAGLE, and CAMELS simulations. We find that stellar mass is built up in a similar way in EAGLE and IllustrisTNG, but significantly differently in the original Illustris, suggesting that subgrid model physics is more important than the choice of hydrodynamics method. These differences are driven by the efficiency of supernova feedback. Applying principal component analysis to the CAMELS simulations allows us to identify a component associated with the importance of a halo's gravitational potential and another component representing the time at which galaxies form. We discover that the speed of galactic winds is a more critical subgrid parameter than the total energy per unit star formation. Finally, we find that the Simba black hole feedback model has a larger effect on galaxy formation than the IllustrisTNG black hole feedback model.
de Santi, N et al., ApJ, 2023 , vol. 952 , issue 1 (citations: 29)
Abstract
We train graph neural networks to perform field-level likelihood-free inference using galaxy catalogs from state-of-the-art hydrodynamic simulations of the CAMELS project. Our models are rotational, translational, and permutation invariant and do not impose any cut on scale. From galaxy catalogs that only contain 3D positions and radial velocities of ~1000 galaxies in tiny ${(25\,{h}^{-1}\mathrm{Mpc})}^{3}$ volumes, our models can infer the value of Ω_m with approximately 12% precision. More importantly, by testing the models on galaxy catalogs from thousands of hydrodynamic simulations, each having a different efficiency of supernova and active galactic nucleus feedback, run with five different codes and subgrid models (IllustrisTNG, SIMBA, Astrid, Magneticum, SWIFT-EAGLE), we find that our models are robust to changes in astrophysics, subgrid physics, and subhalo/galaxy finder. Furthermore, we test our models on 1024 simulations that cover a vast region in parameter space (variations in five cosmological and 23 astrophysical parameters), finding that the model extrapolates really well. Our results indicate that the key to building a robust model is the use of both galaxy positions and velocities, suggesting that the network has likely learned an underlying physical relation that does not depend on galaxy formation and is valid on scales larger than ~10 h^-1 kpc.
Braspenning, J et al., MNRAS, 2023 , vol. 523 , issue 1 (citations: 7)
Abstract
Cloud-wind interactions are common in the interstellar and circumgalactic media. Many studies have used simulations of such interactions to investigate the effect of particular physical processes, but the impact of the choice of hydrodynamic solver has largely been overlooked. Here we study the cloud-wind interaction, also known as the 'blob test', using seven different hydrodynamic solvers: three flavours of SPH, a moving mesh, adaptive mesh refinement, and two meshless schemes. The evolution of masses in dense gas and intermediate-temperature gas, as well as the covering fraction of intermediate-temperature gas, are systematically compared for initial density contrasts of 10 and 100, and five numerical resolutions. To isolate the differences due to the hydrodynamic solvers, we use idealized non-radiative simulations without physical conduction. We find large differences between these methods. SPH methods show slower dispersal of the cloud, particularly for the higher density contrast, but faster convergence, especially for the lower density contrast. Predictions for the intermediate-temperature gas differ particularly strongly, also between non-SPH codes, and converge most slowly. We conclude that the hydrodynamical interaction between a dense cloud and a supersonic wind remains an unsolved problem. Studies aiming to understand the physics or observational signatures of cloud-wind interactions should test the robustness of their results by comparing different hydrodynamic solvers.
Harris, A et al., JOSS, 2023 , vol. 8 , issue 86 (citations: 10)
Adamek, J et al., JCAP, 2023 , vol. 2023 , issue 6 (citations: 21)
Abstract
The measurement of the absolute neutrino mass scale from cosmological large-scale clustering data is one of the key science goals of the Euclid mission. Such a measurement relies on precise modelling of the impact of neutrinos on structure formation, which can be studied with N-body simulations. Here we present the results from a major code comparison effort to establish the maturity and reliability of numerical methods for treating massive neutrinos. The comparison includes eleven full N-body implementations (not all of them independent), two N-body schemes with approximate time integration, and four additional codes that directly predict or emulate the matter power spectrum. Using a common set of initial data we quantify the relative agreement on the nonlinear power spectrum of cold dark matter and baryons and, for the N-body codes, also the relative agreement on the bispectrum, halo mass function, and halo bias. We find that the different numerical implementations produce fully consistent results. We can therefore be confident that we can model the impact of massive neutrinos at the sub-percent level in the most common summary statistics. We also provide a code validation pipeline for future reference.
Sedain, A et al., arXiv, 2023 (citations: 1)
Abstract
Prolate rotation is characterized by significant stellar rotation around a galaxy's major axis, which contrasts with the more common oblate rotation. Prolate rotation is thought to be due to major mergers, and thus studies of prolate-rotating systems can help us better understand the hierarchical process of galaxy evolution. Dynamical studies of such galaxies are important for finding their gravitational potential profile, total mass, and dark matter fraction. Recently, it has been shown in a cosmological simulation that it is possible to form a prolate-rotating dwarf galaxy following a dwarf-dwarf merger event. The simulation also shows that the unusual prolate rotation can be time-enduring. In this particular example, the galaxy continued to rotate around its major axis for at least $7.4$\,Gyr (from the merger event until the end of the simulation). In this project, we use mock observations of the hydrodynamically simulated prolate-rotating dwarf galaxy to fit various stages of its evolution with Jeans dynamical models. The Jeans models successfully fit the early oblate state before the major merger event, and also the late prolate stages of the simulated galaxy, recovering its mass distribution, velocity dispersion, and rotation profile. We also ran a prolate-rotating N-body simulation with similar properties to the cosmologically simulated galaxy, which gradually loses its angular momentum on a short time scale of $\sim100$\,Myr. More tests are needed to understand why prolate rotation is time-enduring in the cosmological simulation, but not in a simple N-body simulation.
Husko, F et al., MNRAS, 2023 , vol. 521 , issue 3 (citations: 10)
Abstract
We use SWIFT, a smoothed particle hydrodynamics code, to simulate the evolution of bubbles inflated by active galactic nuclei (AGNs) jets, as well as their interactions with the ambient intracluster medium (ICM). These jets inflate lobes that turn into bubbles after the jets are turned off (at t = 50 Myr). Almost all of the energy injected into the jets is transferred to the ICM very quickly after they are turned off, with roughly 70 per cent of it in thermal form and the rest in kinetic. At late times (t > 500 Myr) we find the following: (1) the bubbles draw out trailing filaments of low-entropy gas, similar to those recently observed, (2) the action of buoyancy and the uplift of the filaments dominates the energetics of both the bubbles and the ICM, and (3) almost all of the originally injected energy is in the form of gravitational potential energy, with the bubbles containing 15 per cent of it, and the rest contained in the ICM. These findings indicate that feedback proceeds mainly through the displacement of gas to larger radii. We find that the uplift of these filaments permanently changes the thermodynamic properties of the ICM by reducing the central density and increasing the central temperature (within 30 kpc). We propose that jet feedback proceeds not only through the heating of the ICM (which can delay cooling), but also through the uplift-related reduction of the central gas density. The latter also delays cooling, on top of reducing the amount of gas available to cool.
Husko, F et al., MNRAS, 2023 , vol. 520 , issue 4 (citations: 11)
Abstract
Simulations of active galactic nuclei (AGN) jets have thus far been performed almost exclusively using grid-based codes. We present the first results from hydrodynamical tests of AGN jets, and their interaction with the intracluster medium (ICM), using smoothed particle hydrodynamics as implemented in the SWIFT code. We launch these jets into a constant-density ICM, as well as ones with a power-law density profile. We also vary the jet power, velocity, opening angle, and numerical resolution. In all cases we find broad agreement between our jets and theoretical predictions for the lengths of the jets and the lobes they inflate, as well as the radii of the lobes. The jets first evolve ballistically, and then transition to a self-similar phase, during which the lobes expand in a self-similar fashion (keeping a constant shape). In this phase the kinetic and thermal energies in the lobes and in the shocked ICM are constant fractions of the total injected energy. In our standard simulation, two thirds of the initially injected energy is transferred to the ICM by the time the jets are turned off, mainly through a bow shock. Of that, 70 per cent is in kinetic form, indicating that the bow shock does not fully and efficiently thermalize while the jet is active. At resolutions typical of large cosmological simulations (m_gas ≈ 10^7 M⊙), the shape of the lobes is close to self-similar predictions to an accuracy of 15 per cent. This indicates that the basic physics of jet-inflated lobes can be correctly simulated even at such resolutions (≈500 particles per jet).
Altamura, E et al., MNRAS, 2023 , vol. 520 , issue 2 (citations: 14)
Abstract
Recent high-resolution cosmological hydrodynamic simulations run with a variety of codes systematically predict large amounts of entropy in the intra-cluster medium at low redshift, leading to flat entropy profiles and a suppressed cool-core population. This prediction is at odds with X-ray observations of groups and clusters. We use a new implementation of the EAGLE galaxy formation model to investigate the sensitivity of the central entropy and the shape of the profiles to changes in the sub-grid model applied to a suite of zoom-in cosmological simulations of a group of mass M_500 = 8.8 × 10^12 M⊙ and a cluster of mass 2.9 × 10^14 M⊙. Using our reference model, calibrated to match the stellar mass function of field galaxies, we confirm that our simulated groups and clusters contain hot gas with too high entropy in their cores. Additional simulations run without artificial conduction, metal cooling or active galactic nuclei (AGN) feedback produce lower entropy levels but still fail to reproduce observed profiles. Conversely, the two objects run without supernova feedback show a significant entropy increase which can be attributed to excessive cooling and star formation. Varying the AGN heating temperature does not greatly affect the profile shape, but only the overall normalization. Finally, we compared runs with four AGN heating schemes and obtained similar profiles, with the exception of bipolar AGN heating, which produces a higher and more uniform entropy distribution. Our study leaves open the question of whether the entropy core problem in simulations, and particularly the lack of power-law cool-core profiles, arise from incorrect physical assumptions, missing physical processes, or insufficient numerical resolution.
Ivkovic, M, PhDT, 2023 (citations: 2)
Abstract
The development and implementation of GEAR-RT, a radiative transfer solver using the M1 closure in the open-source code SWIFT, is presented and validated using standard tests for radiative transfer. GEAR-RT is modelled after RAMSES-RT (Rosdahl et al. 2013) with some key differences. Firstly, while RAMSES-RT uses Finite Volume methods and an Adaptive Mesh Refinement (AMR) strategy, GEAR-RT employs particles as discretization elements and solves the equations using a Finite Volume Particle Method (FVPM). Secondly, GEAR-RT makes use of the task-based parallelization strategy of SWIFT, which allows for optimized load balancing, increased cache efficiency, asynchronous communications, and a domain decomposition based on work rather than on data. GEAR-RT is able to perform sub-cycles of radiative transfer steps with respect to a single hydrodynamics step. Radiation requires much smaller time-step sizes than hydrodynamics, and sub-cycling permits calculations which are not strictly necessary to be skipped. Indeed, in a test case with gravity, hydrodynamics, and radiative transfer, the sub-cycling is able to reduce the runtime of a simulation by over 90%. Allowing only a part of the involved physics to be sub-cycled is an intricate matter when task-based parallelism is involved, and is an entirely novel feature in SWIFT. Since GEAR-RT uses an FVPM, a detailed introduction to Finite Volume methods and Finite Volume Particle Methods is presented. Two FVPM approaches are described in the astrophysical literature: Hopkins (2015) implemented one in the GIZMO code, while the one presented in Ivanova et al. (2013) has not been used to date. In this work, I test an implementation of the Ivanova et al. (2013) version and conclude that, in its current form, it is not suitable for use with particles that are co-moving with the fluid, which in turn is an essential feature for cosmological simulations.
Roper, W et al., arXiv, 2023 (citations: 1)
Abstract
We present the first study of galaxy evolution in $\ddot{\mu}$ based cosmologies. We find that recent JWST observations of massive galaxies at extremely high redshifts are consistent with such a cosmology. However, the low redshift Universe is entirely divergent from the $\ddot{\mu}$ cosmic star formation rate density. We thus propose that our Universe was at one point dominated by a Primordial Bovine Herd (PBH) which later decayed producing dark energy. Note that we do not detail the mechanisms by which this decay process takes place. Despite its vanishingly small probability for existence, a $\ddot{\mu}$ based cosmological model marries the disparate findings in the high and low redshift Universe.
Alonso Asensio, I et al., MNRAS, 2023 , vol. 519 , issue 1 (citations: 5)
Abstract
We extend the state-of-the-art N-body code PKDGRAV3 with the inclusion of mesh-free gas hydrodynamics for cosmological simulations. Two new hydrodynamic solvers have been implemented, the mesh-less finite volume and mesh-less finite mass methods. The solvers manifestly conserve mass, momentum, and energy, and have been validated with a wide range of standard test simulations, including cosmological simulations. We also describe improvements to PKDGRAV3 that have been implemented for performing hydrodynamic simulations. These changes have been made with efficiency and modularity in mind, and provide a solid base for the implementation of the required modules for galaxy formation and evolution physics and future porting to GPUs. The code is released in a public repository, together with the documentation, and all the test simulations presented in this work.
Chan, T et al., IAUS, 2023 , vol. 362 (citations: 1)
Abstract
The progress of cosmic reionization depends on the presence of over-dense regions that act as photon sinks. Such sinks may slow down ionization fronts as compared to a uniform intergalactic medium (IGM) by increasing the clumping factor. We present simulations of reionization in a clumpy IGM resolving even the smallest sinks. The simulations use a novel, spatially adaptive and efficient radiative transfer implementation in the SWIFT SPH code, based on the two-moment method. We find that photon sinks can increase the clumping factor by a factor of ∼10 during the first ∼100 Myrs after the passage of an ionization front. After this time, the clumping factor decreases as the smaller sinks photoevaporate. Altogether, photon sinks increase the number of photons required to reionize the Universe by a factor of η ∼2, as compared to the homogeneous case. The value of η also depends on the emissivity of the ionizing sources.
Correa, C et al., MNRAS, 2022 , vol. 517 , issue 2 (citations: 28)
Abstract
We introduce the TangoSIDM project, a suite of cosmological simulations of structure formation in a Λ-self-interacting dark matter (SIDM) universe. TangoSIDM explores the impact of large dark matter (DM) scattering cross-sections on dwarf galaxy scales. Motivated by DM interactions that follow a Yukawa potential, the cross-section per unit mass, σ/m_χ, assumes a velocity-dependent form that avoids violations of current constraints on large scales. We demonstrate that our implementation accurately models not only core formation in haloes but also gravothermal core collapse. For central haloes in cosmological volumes, frequent DM particle collisions isotropise the particles' orbits, making the haloes largely spherical. We show that the velocity-dependent σ/m_χ models produce a large diversity in the circular velocities of satellite haloes, with the spread in velocities increasing as the cross-sections reach 20, 60, and 100 cm^2 g^-1 in $10^9~\rm {M}_{\odot }$ haloes. The large variation in the haloes' internal structure is driven by DM particle interactions, causing the formation of extended cores in some haloes and gravothermal core collapse in others. We conclude that the SIDM models from the TangoSIDM project offer a promising explanation for the diversity in the density and velocity profiles of observed dwarf galaxies.
Elbers, W et al., MNRAS, 2022 , vol. 516 , issue 3 (citations: 21)
Abstract
The discovery that neutrinos have mass has important consequences for cosmology. The main effect of massive neutrinos is to suppress the growth of cosmic structure on small scales. Such growth can be accurately modelled using cosmological N-body simulations, but doing so requires accurate initial conditions (ICs). There is a trade-off, especially with first-order ICs, between truncation errors for late starts and discreteness and relativistic errors for early starts. Errors can be minimized by starting simulations at late times using higher order ICs. In this paper, we show that neutrino effects can be absorbed into scale-independent coefficients in higher order Lagrangian perturbation theory (LPT). This clears the way for the use of higher order ICs for massive neutrino simulations. We demonstrate that going to higher order substantially improves the accuracy of simulations. To match the sensitivity of surveys like DESI and Euclid, errors in the matter power spectrum should be well below 1 per cent. However, we find that first-order Zel'dovich ICs lead to much larger errors, even when starting as early as z = 127, exceeding 1 per cent at z = 0 for k > 0.5 Mpc^-1 for the power spectrum and k > 0.1 Mpc^-1 for the equilateral bispectrum in our simulations. Ratios of power spectra with different neutrino masses are more robust than absolute statistics, but still depend on the choice of ICs. For all statistics considered, we obtain 1 per cent agreement between 2LPT and 3LPT at z = 0.
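As a reminder of the notation, in Lagrangian perturbation theory particle positions are expanded about their initial (Lagrangian) coordinates q; schematically, and suppressing the scale-independent neutrino corrections discussed above,

$$
\mathbf{x}(\mathbf{q}, z) = \mathbf{q}
+ D_1(z)\,\boldsymbol{\Psi}^{(1)}(\mathbf{q})
+ D_2(z)\,\boldsymbol{\Psi}^{(2)}(\mathbf{q})
+ \dots,
\qquad
\nabla_{\mathbf{q}} \cdot \boldsymbol{\Psi}^{(1)} = -\delta^{(1)}(\mathbf{q}),
$$

where truncating after the first displacement term is the Zel'dovich approximation (1LPT) and keeping the second gives 2LPT; the growth factors D_1 and D_2 set the time dependence.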
Husko, F et al., MNRAS, 2022 , vol. 516 , issue 3 (citations: 35)
Abstract
We implement a black hole spin evolution and jet feedback model into SWIFT, a smoothed particle hydrodynamics code. The jet power is determined self-consistently assuming that the black hole accretion rate is equal to the Bondi rate (i.e. the accretion efficiency is 100 per cent), and using a realistic, spin-dependent efficiency. The jets are launched along the spin axis of the black hole, resulting in natural reorientation and precession. We apply the model to idealized simulations of galaxy groups and clusters, finding that jet feedback successfully quenches gas cooling and star formation in all systems. Our group-size halo (M_200 = 10^13 M⊙) is quenched by a strong jet episode triggered by a cooling flow, and it is kept quenched by a low-power jet fed from hot halo accretion. In more massive systems (M_200 ≳ 10^14 M⊙), hot halo accretion is insufficient to quench the galaxies, or to keep them quenched after the first cooling episode. These galaxies experience multiple episodes of gas cooling, star formation, and jet feedback. In the most massive galaxy cluster that we simulate (M_200 = 10^15 M⊙), we find peak cold gas masses of 10^10 M⊙ and peak star formation rates of a few times 100 $\mathrm{M}_\odot \,\, \mathrm{yr}^{-1}$. These values are achieved during strong cooling flows, which also trigger the strongest jets with peak powers of $10^{47}\, \mathrm{erg}\, \mathrm{s}^{-1}$. These jets subsequently shut off the cooling flows and any associated star formation. Jet-inflated bubbles draw out low-entropy gas that subsequently forms dense cooling filaments in their wakes, as seen in observations.
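For reference, the Bondi accretion rate assumed for the feeding of the black hole (in its simplest form, for gas of density ρ and sound speed c_s far from the hole, neglecting any relative velocity) and the resulting jet power are, schematically,

$$
\dot{M}_{\mathrm{Bondi}} = \frac{4\pi G^{2} M_{\mathrm{BH}}^{2}\,\rho}{c_s^{3}},
\qquad
P_{\mathrm{jet}} = \epsilon_{\mathrm{jet}}(a)\,\dot{M}_{\mathrm{Bondi}}\,c^{2},
$$

where the jet efficiency ε_jet depends on the black hole spin a; the exact spin-dependent form used in the paper is not reproduced here.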
Kegerreis, J et al., ApJL, 2022 , vol. 937 , issue 2 (citations: 19)
Abstract
The Moon is traditionally thought to have coalesced from the debris ejected by a giant impact onto the early Earth. However, such models struggle to explain the similar isotopic compositions of Earth and lunar rocks at the same time as the system's angular momentum, and the details of potential impact scenarios are hotly debated. Above a high resolution threshold for simulations, we find that giant impacts can immediately place a satellite with similar mass and iron content to the Moon into orbit far outside Earth's Roche limit. Even satellites that initially pass within the Roche limit can reliably and predictably survive, by being partially stripped and then torqued onto wider, stable orbits. Furthermore, the outer layers of these directly formed satellites are molten over cooler interiors and are composed of around 60% proto-Earth material. This could alleviate the tension between the Moon's Earth-like isotopic composition and the different signature expected for the impactor. Immediate formation opens up new options for the Moon's early orbit and evolution, including the possibility of a highly tilted orbit to explain the lunar inclination, and offers a simpler, single-stage scenario for the origin of the Moon.
Bahe, Y et al., MNRAS, 2022 , vol. 516 , issue 1 (citations: 34)
Abstract
Active galactic nucleus (AGN) feedback from accreting supermassive black holes (SMBHs) is an essential ingredient of galaxy formation simulations. The orbital evolution of SMBHs is affected by dynamical friction that cannot be predicted self-consistently by contemporary simulations of galaxy formation in representative volumes. Instead, such simulations typically use a simple 'repositioning' of SMBHs, but the effects of this approach on SMBH and galaxy properties have not yet been investigated systematically. Based on a suite of smoothed particle hydrodynamics simulations with the SWIFT code and a Bondi-Hoyle-Lyttleton sub-grid gas accretion model, we investigate the impact of repositioning on SMBH growth and on other baryonic components through AGN feedback. Across at least a factor ~1000 in mass resolution, SMBH repositioning (or an equivalent approach) is a necessary prerequisite for AGN feedback; without it, black hole growth is negligible. Limiting the effective repositioning speed to ≲10 km s-1 delays the onset of AGN feedback and severely limits its impact on stellar mass growth in the centre of massive galaxies. Repositioning has three direct physical consequences. It promotes SMBH mergers and thus accelerates their initial growth. In addition, it raises the peak density of the ambient gas and reduces the SMBH velocity relative to it, giving a combined boost to the accretion rate that can reach many orders of magnitude. Our results suggest that a more sophisticated and/or better calibrated treatment of SMBH repositioning is a critical step towards more predictive galaxy formation simulations.
Hausammann, L et al., A&C, 2022 , vol. 41 (citations: 2)
Abstract
Exa-scale simulations are on the horizon, but almost no new design for the output has been proposed in recent years. In simulations using individual time steps, traditional snapshots over-resolve the particles/cells with large time steps and under-resolve the particles/cells with short time steps. They are therefore unable to follow fast events and to use the storage space efficiently. The Continuous Simulation Data Stream (CSDS) is designed to decrease this space while providing an accurate state of the simulation at any time. It takes advantage of the individual time steps to ensure the same relative accuracy for all the particles. The output consists of a single file representing the full evolution of the simulation. Within this file, the particles are written independently and at their own frequency. Through the interpolation of the records, the state of the simulation can be recovered at any point in time. In this paper, we show that the CSDS can reduce the storage space by a factor of 2.76 for the same accuracy as snapshots, or increase the accuracy by a factor of 67.8 for the same storage space, whilst retaining an acceptable reading speed for analysis. By using interpolation between records, the CSDS provides the state of the simulation, with high accuracy, at any time. This should largely improve the analysis of fast events such as supernovae and simplify the construction of light-cone outputs.
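A minimal sketch of the record-interpolation idea described above, assuming each particle's entries store a time, position, and velocity; the record layout and the Hermite/linear interpolation choices are illustrative assumptions, not the CSDS file format:

```python
# Illustrative reconstruction of a particle's state at an arbitrary time t
# from per-particle records written at that particle's own output frequency.
import bisect
from dataclasses import dataclass

@dataclass
class Record:
    t: float   # time at which this particle was written
    x: float   # position (1D for brevity)
    v: float   # velocity

def state_at(records, t):
    """Recover a particle's state at time t by interpolating the two records
    that bracket t (cubic Hermite for position, linear for velocity)."""
    times = [r.t for r in records]
    i = max(bisect.bisect_right(times, t) - 1, 0)   # last record with r.t <= t
    a = records[i]
    b = records[min(i + 1, len(records) - 1)]
    if a is b:                                      # t beyond the last record
        return a.x, a.v
    dt = b.t - a.t
    f = (t - a.t) / dt
    h00, h10 = 2*f**3 - 3*f**2 + 1, f**3 - 2*f**2 + f
    h01, h11 = -2*f**3 + 3*f**2, f**3 - f**2
    x = h00*a.x + h10*dt*a.v + h01*b.x + h11*dt*b.v
    v = (1 - f)*a.v + f*b.v
    return x, v

# A particle on long time steps is logged only rarely; its state at any
# intermediate time is reconstructed from the bracketing records.
recs = [Record(0.0, 0.0, 1.0), Record(10.0, 12.0, 1.5), Record(30.0, 45.0, 1.8)]
print(state_at(recs, 17.5))
```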
Nobels, F et al., MNRAS, 2022 , vol. 515 , issue 4 (citations: 16)
Abstract
Using high-resolution hydrodynamical simulations of idealized galaxy clusters, we study the interaction between the brightest cluster galaxy, its supermassive black hole (BH), and the intracluster medium (ICM). We create initial conditions for which the ICM is in hydrostatic equilibrium within the gravitational potential from the galaxy and an NFW dark matter halo. Two free parameters associated with the thermodynamic profiles determine the cluster gas fraction and the central temperature, where the latter can be used to create cool-core or non-cool-core systems. Our simulations include radiative cooling, star formation, BH accretion, and stellar and active galactic nucleus (AGN) feedback. Even though the energy of AGN feedback is injected thermally and isotropically, it leads to anisotropic outflows and buoyantly rising bubbles. We find that the BH accretion rate (BHAR) is highly variable and only correlates strongly with the star formation rate (SFR) and the ICM when it is averaged over more than $1~\rm Myr$. We generally find good agreement with the theoretical precipitation framework. In $10^{13}~\rm M_\odot$ haloes, AGN feedback quenches the central galaxy and converts cool-core systems into non-cool-core systems. In contrast, higher mass, cool-core clusters evolve cyclically. Episodes of high BHAR raise the entropy of the ICM out to the radius where the ratio of the cooling time to the local dynamical time exceeds 10 (tcool/tdyn > 10), thus suppressing condensation and, after a delay, the BHAR. The corresponding reduction in AGN feedback allows the ICM to cool and become unstable to precipitation, thus initiating a new episode of high SFR and BHAR.
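The precipitation criterion quoted above compares two timescales. The sketch below evaluates them with standard textbook definitions (isobaric cooling time and free-fall time); the particular definitions, ionisation assumptions, and numerical values are illustrative, not those used in the paper.

```python
import numpy as np

K_B = 1.381e-16   # Boltzmann constant [erg K^-1]
G = 6.674e-8      # gravitational constant [cgs]

def cooling_time(n_H, T, Lambda_cool):
    """Isobaric cooling time t_cool = (3/2) n_tot k T / (n_e n_H Lambda).
    Assumes a fully ionised primordial plasma: n_e ~ 1.2 n_H, n_tot ~ 2.3 n_H."""
    n_e, n_tot = 1.2 * n_H, 2.3 * n_H
    return 1.5 * n_tot * K_B * T / (n_e * n_H * Lambda_cool)

def dynamical_time(rho_mean):
    """Free-fall time for the mean enclosed density rho_mean."""
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho_mean))

# Example: ICM gas with n_H = 1e-2 cm^-3, T = 1e7 K, Lambda ~ 1e-23 erg cm^3 s^-1,
# inside a region of mean enclosed density 1e-25 g cm^-3.
ratio = cooling_time(1e-2, 1e7, 1e-23) / dynamical_time(1e-25)
print(f"t_cool/t_dyn ~ {ratio:.1f}")   # the paper's criterion compares this ratio to ~10
```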
Grove, C et al., MNRAS, 2022 , vol. 515 , issue 2 (citations: 25)
Abstract
Analysis of large galaxy surveys requires confidence in the robustness of numerical simulation methods. The simulations are used to construct mock galaxy catalogues to validate data analysis pipelines and identify potential systematics. We compare three N-body simulation codes, ABACUS, GADGET-2, and SWIFT, to investigate the regimes in which their results agree. We run N-body simulations at three different mass resolutions, 6.25 × 10⁸, 2.11 × 10⁹, and 5.00 × 10⁹ h⁻¹ M⊙, matching phases to reduce the noise within the comparisons. We find that systematic errors in the halo clustering between different codes are smaller than the Dark Energy Spectroscopic Instrument (DESI) statistical error for $s\ \gt\ 20\ h^{-1}$ Mpc in the correlation function in redshift space. Through the resolution comparison we find that simulations run with a mass resolution of 2.1 × 10⁹ h⁻¹ M⊙ are sufficiently converged for systematic effects in the halo clustering to be smaller than the DESI statistical error at scales larger than $20\ h^{-1}$ Mpc. These findings show that the simulations are robust for extracting cosmological information from large scales, which is the key goal of the DESI survey. Comparing matter power spectra, we find that the codes agree to within 1 per cent for k ≤ 10 h Mpc⁻¹. We also run a comparison of three initial condition generation codes and find good agreement. In addition, we include a quasi-N-body code, FastPM, since we plan to use it for certain DESI analyses. The impact of the halo definition and galaxy-halo relation will be presented in a follow-up study.
Chaikin, E et al., MNRAS, 2022 , vol. 514 , issue 1 (citations: 22)
Abstract
Supernova (SN) feedback plays a crucial role in simulations of galaxy formation. Because blast waves from individual SNe occur on scales that remain unresolved in modern cosmological simulations, SN feedback must be implemented as a subgrid model. Differences in the manner in which SN energy is coupled to the local interstellar medium and in which excessive radiative losses are prevented have resulted in a zoo of models used by different groups. However, the importance of the selection of resolution elements around young stellar particles for SN feedback has largely been overlooked. In this work, we examine various selection methods using the smoothed particle hydrodynamics code SWIFT. We run a suite of isolated disc galaxy simulations of a Milky Way-mass galaxy and small cosmological volumes, all with the thermal stochastic SN feedback model used in the EAGLE simulations. We complement the original mass-weighted neighbour selection with a novel algorithm guaranteeing that the SN energy distribution is as close to isotropic as possible. Additionally, we consider algorithms where the energy is injected into the closest, least dense, or most dense neighbour. We show that different neighbour-selection strategies cause significant variations in star formation rates, gas densities, wind mass-loading factors, and galaxy morphology. The isotropic method results in more efficient feedback than the conventional mass-weighted selection. We conclude that the manner in which the feedback energy is distributed among the resolution elements surrounding a feedback event is as important as changing the amount of energy by factors of a few.
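The selection rule is the crux of the abstract above. The sketch below illustrates one simple way to make the injection directions isotropic: cast a set of roughly isotropic rays from the star particle and heat, for each ray, the gas neighbour whose direction is closest to it. This is an illustration of the idea only, not the paper's actual algorithm, and all names are invented.

```python
import numpy as np

def isotropic_directions(n, rng):
    """n roughly isotropic unit vectors (random here; a deterministic
    low-discrepancy set would reduce directional scatter further)."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def select_neighbours_isotropic(star_pos, gas_pos, n_events, rng):
    """Pick one gas neighbour per injection direction: the neighbour whose
    direction from the star is closest to that ray (illustrative only)."""
    d = gas_pos - star_pos
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    chosen = []
    for ray in isotropic_directions(n_events, rng):
        chosen.append(int(np.argmax(d @ ray)))   # max cosine = smallest angle
    return chosen

rng = np.random.default_rng(42)
gas = rng.uniform(-1.0, 1.0, size=(64, 3))       # mock neighbour positions
print(select_neighbours_isotropic(np.zeros(3), gas, n_events=8, rng=rng))
```

Contrasting this with a mass-weighted random draw makes the paper's point concrete: which particles are heated, and hence where the energy ends up, depends on the selection rule even when the total injected energy is identical.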
McAlpine, S et al., MNRAS, 2022 , vol. 512 , issue 4 (citations: 31)
Abstract
We present SIBELIUS-DARK, a constrained realization simulation of the local volume to a distance of 200 Mpc from the Milky Way. SIBELIUS-DARK is the first study of the 'Simulations Beyond The Local Universe' (SIBELIUS) project, which has the goal of embedding a model Local Group-like system within the correct cosmic environment. The simulation is dark-matter-only, with the galaxy population calculated using the semi-analytic model of galaxy formation, GALFORM. We demonstrate that the large-scale structure that emerges from the SIBELIUS constrained initial conditions matches the observational data well. The inferred galaxy population of SIBELIUS-DARK also matches the observational data well, both statistically for the whole volume and on an object-by-object basis for the most massive clusters. For example, the K-band number counts across the whole sky, and when divided between the northern and southern Galactic hemispheres, are well reproduced by SIBELIUS-DARK. We find that the local volume is somewhat unusual in the wider context of ΛCDM: it contains an abnormally high number of supermassive clusters, as well as an overall large-scale underdensity at the level of ≈5 per cent relative to the cosmic mean. However, whilst rare, the extent of these peculiarities does not significantly challenge the ΛCDM model. SIBELIUS-DARK is the most comprehensive constrained realization simulation of the local volume to date, and with this paper we publicly release the halo and galaxy catalogues at z = 0, which we hope will be useful to the wider astronomy community.
Ruan, C et al., JCAP, 2022 , vol. 2022 , issue 5 (citations: 26)
Abstract
We present MG-GLAM, a code developed for the very fast production of full N-body cosmological simulations in modified gravity (MG) models. We describe the implementation, numerical tests and first results of a large suite of cosmological simulations for three classes of MG models with conformal coupling terms: the f(R) gravity, symmetron and coupled quintessence models. Derived from the parallel particle-mesh code GLAM, MG-GLAM incorporates an efficient multigrid relaxation technique to solve the characteristic nonlinear partial differential equations of these models. For f(R) gravity, we have included new variants to diversify the model behaviour, and we have tailored the relaxation algorithms to these to maintain high computational efficiency. In a companion paper, we describe versions of this code developed for derivative coupling MG models, including the Vainshtein- and K-mouflage-type models. MG-GLAM can model the prototypes for most MG models of interest, and is broad and versatile. The code is highly optimised, with a tremendous speedup of a factor of more than a hundred compared with earlier N-body codes, while still giving accurate predictions of the matter power spectrum and dark matter halo abundance. MG-GLAM is ideal for the generation of large numbers of MG simulations that can be used in the construction of mock galaxy catalogues and the production of accurate emulators for ongoing and future galaxy surveys.
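The heart of such a solver is a nonlinear relaxation sweep applied on a hierarchy of grids. The sketch below shows a single Newton-Gauss-Seidel smoothing sweep for a toy nonlinear Poisson-type equation; the equation, grid, and parameters are stand-ins, not the actual f(R), symmetron, or coupled-quintessence equations solved by MG-GLAM.

```python
import numpy as np

def newton_gauss_seidel_sweep(u, rhs, h, lam=1.0):
    """One red-black Newton-Gauss-Seidel sweep for the toy nonlinear equation
        laplacian(u) - lam * u**3 = rhs
    on a periodic 2D grid with spacing h. This is only the smoother; a full
    multigrid solver applies such sweeps on successively coarsened grids."""
    n = u.shape[0]
    for colour in (0, 1):                      # red-black ordering
        for i in range(n):
            for j in range(n):
                if (i + j) % 2 != colour:
                    continue
                nb = (u[(i + 1) % n, j] + u[i - 1, j]
                      + u[i, (j + 1) % n] + u[i, j - 1])
                # local residual and its derivative w.r.t. u[i, j]
                res = (nb - 4 * u[i, j]) / h**2 - lam * u[i, j]**3 - rhs[i, j]
                dres = -4.0 / h**2 - 3.0 * lam * u[i, j]**2
                u[i, j] -= res / dres          # local Newton update
    return u

n, h = 32, 1.0 / 32
rng = np.random.default_rng(1)
u, rhs = np.zeros((n, n)), rng.normal(size=(n, n))
rhs -= rhs.mean()                              # solvability on a periodic box
for _ in range(200):
    u = newton_gauss_seidel_sweep(u, rhs, h)
```

The nonlinear term is what forces relaxation rather than a single FFT-based Poisson solve: the field couples to itself, which is exactly the situation the screening terms in these MG models create.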
Chaikin, E et al., MNRAS, 2022 , vol. 512 , issue 1 (citations: 9)
Abstract
Recent studies have shown that live (not decayed) radioactive ⁶⁰Fe is present in deep-ocean samples, Antarctic snow, lunar regolith, and cosmic rays. ⁶⁰Fe represents supernova (SN) ejecta deposited in the Solar system around $3 \, \rm Myr$ ago, and recently an earlier pulse ${\approx}7 \ \rm Myr$ ago has been found. These data point to one or multiple near-Earth SN explosions that presumably participated in the formation of the Local Bubble. We explore this theory using 3D high-resolution smoothed-particle hydrodynamical simulations of isolated SNe with ejecta tracers in a uniform interstellar medium (ISM). The simulations allow us to trace the SN ejecta in gas form as well as ejecta in dust grains that are entrained with the gas. We consider two cases of diffused ejecta: when the ejecta are well-mixed in the shock and when they are not. In the latter case, we find that these ejecta remain far behind the forward shock, limiting the distance to which entrained ejecta can be delivered to ≈100 pc in an ISM with $n_\mathrm{H}=0.1\,\, \rm cm^{-3}$ mean hydrogen density. We show that the intensity and the duration of ⁶⁰Fe accretion depend on the ISM density and the trajectory of the Solar system. Furthermore, we show that the two observed peaks in ⁶⁰Fe concentration can be reproduced with this model by assuming two linear trajectories for the Solar system with a velocity of 30 km s-1. The fact that we can reproduce the two observed peaks further supports the theory that the ⁶⁰Fe signal originated from near-Earth SNe.
Kugel, R et al., JOSS, 2022 , vol. 7 , issue 72 (citations: 8)
Borrow, J et al., MNRAS, 2022 , vol. 511 , issue 2 (citations: 43)
Abstract
Smoothed particle hydrodynamics (SPH) is a ubiquitous numerical method for solving the fluid equations, and is prized for its conservation properties, natural adaptivity, and simplicity. We introduce the SPHENIX SPH scheme, which was designed with three key goals in mind: to work well with sub-grid physics modules that inject energy, to be highly computationally efficient (in both compute and memory), and to be Lagrangian. SPHENIX uses a Density-Energy equation of motion, along with variable artificial viscosity and conduction, including limiters designed to work with common sub-grid models of galaxy formation. In particular, we present and test a novel limiter that prevents conduction across shocks, preventing spurious radiative losses in feedback events. SPHENIX is shown to solve many difficult test problems for traditional SPH, including fluid mixing and vorticity conservation, and it is shown to produce convergent behaviour in all tests where this is appropriate. Crucially, we use the same parameters within SPHENIX for the various switches throughout, to demonstrate the performance of the scheme as it would be used in production simulations. SPHENIX is the new default scheme in the SWIFT cosmological simulation code and is available open source.
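For readers unfamiliar with the Density-Energy formulation, the sketch below evaluates SPH densities with a cubic-spline kernel and the corresponding pressures P = (γ - 1) ρ u. It is a minimal illustration of that formulation only, not the SPHENIX scheme, which additionally uses variable smoothing lengths, artificial viscosity and conduction, and the limiters described above.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 3D cubic-spline (M4) SPH kernel with support radius h."""
    q = r / h
    sigma = 8.0 / (np.pi * h**3)
    w = np.where(q < 0.5, 1 - 6 * q**2 + 6 * q**3,
                 np.where(q < 1.0, 2 * (1 - q)**3, 0.0))
    return sigma * w

def density_energy_state(pos, mass, u_int, h, gamma=5.0 / 3.0):
    """Densities and pressures in a Density-Energy formulation:
    rho_i = sum_j m_j W(|r_i - r_j|, h) and P_i = (gamma - 1) rho_i u_i."""
    n = len(pos)
    rho = np.zeros(n)
    for i in range(n):
        r = np.linalg.norm(pos - pos[i], axis=1)
        rho[i] = np.sum(mass * cubic_spline_kernel(r, h))
    return rho, (gamma - 1.0) * rho * u_int

rng = np.random.default_rng(0)
pos = rng.uniform(0, 1, size=(200, 3))
rho, P = density_energy_state(pos, np.full(200, 1.0 / 200), np.full(200, 1.0), h=0.2)
print(rho.mean(), P.mean())
```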
Dawson, K et al., arXiv, 2022 (citations: 11)
Abstract
Joint studies of imaging and spectroscopic samples, informed by theory and simulations, offer the potential for comprehensive tests of the cosmological model over redshifts z<1.5. Spectroscopic galaxy samples at these redshifts can be increased beyond the planned Dark Energy Spectroscopic Instrument (DESI) program by at least an order of magnitude, thus offering significantly more constraining power for these joint studies. Spectroscopic observations of these galaxies in the latter half of the 2020s and beyond would leverage the theory and simulation effort in this regime. In turn, these high-density observations will allow enhanced tests of dark energy, physics beyond the standard model, and neutrino masses that will greatly exceed what is currently possible. Here, we present a coordinated program of simulations, theoretical modeling, and future spectroscopy that would enable precise cosmological studies in the accelerating epoch where the effects of dark energy are most apparent.
Hernandez-Aguayo, C et al., JCAP, 2022 , vol. 2022 , issue 1 (citations: 21)
Abstract
We present MG-GLAM, a code developed for the very fast production of full N-body cosmological simulations in modified gravity (MG) models. We describe the implementation, numerical tests and first results of a large suite of cosmological simulations for two broad classes of MG models with derivative coupling terms - the Vainshtein- and K-mouflage-type models - which respectively feature the Vainshtein and K-mouflage screening mechanisms. Derived from the parallel particle-mesh code GLAM, MG-GLAM incorporates an efficient multigrid relaxation technique to solve the characteristic nonlinear partial differential equations of these models. For K-mouflage, we have proposed a new algorithm for the relaxation solver, and run the first simulations of the model to understand its cosmological behaviour. In a companion paper, we describe versions of this code developed for conformally-coupled MG models, including several variants of f(R) gravity, the symmetron model and coupled quintessence. Altogether, MG-GLAM has so far implemented the prototypes for most MG models of interest, and is broad and versatile. The code is highly optimised, with a tremendous (over two orders of magnitude) speedup when comparing its running time with earlier N-body codes, while still giving accurate predictions of the matter power spectrum and dark matter halo abundance. MG-GLAM is ideal for the generation of large numbers of MG simulations that can be used in the construction of mock galaxy catalogues and accurate emulators for ongoing and future galaxy surveys.
Chan, T et al., MNRAS, 2021 , vol. 505 , issue 4 (citations: 12)
Abstract
We present a new smoothed particle hydrodynamics-radiative transfer method (SPH-M1RT) that is coupled dynamically with SPH. We implement it in the (task-based parallel) SWIFT galaxy simulation code but it can be straightforwardly implemented in other SPH codes. Our moment-based method simultaneously solves the radiation energy and flux equations in SPH, making it adaptive in space and time. We modify the M1 closure relation to stabilize radiation fronts in the optically thin limit. We also introduce anisotropic artificial viscosity and high-order artificial diffusion schemes, which allow the code to handle radiation transport accurately in both the optically thin and optically thick regimes. Non-equilibrium thermochemistry is solved using a semi-implicit sub-cycling technique. The computational cost of our method is independent of the number of sources and can be lowered further by using the reduced speed-of-light approximation. We demonstrate the robustness of our method by applying it to a set of standard tests from the cosmological radiative transfer comparison project of Iliev et al. The SPH-M1RT scheme is well-suited for modelling situations in which numerous sources emit ionizing radiation, such as cosmological simulations of galaxy formation or simulations of the interstellar medium.
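For reference, the closure referred to above has a standard form (the Levermore M1 closure), sketched below; the paper modifies this relation to stabilise optically thin radiation fronts, so the following is the unmodified textbook version, not the scheme's own closure.

```python
import numpy as np

def m1_eddington_tensor(E, F, c=1.0):
    """Standard M1 closure: radiation pressure tensor P = D * E with
        D = (1-chi)/2 * I + (3*chi-1)/2 * n n^T,
        chi = (3 + 4 f^2) / (5 + 2 sqrt(4 - 3 f^2)),   f = |F| / (c E)."""
    Fnorm = np.linalg.norm(F)
    f = min(Fnorm / (c * E), 1.0)                      # reduced flux, clamped to the causal range
    n_hat = F / Fnorm if Fnorm > 0 else np.zeros(3)
    chi = (3.0 + 4.0 * f**2) / (5.0 + 2.0 * np.sqrt(4.0 - 3.0 * f**2))
    D = 0.5 * (1.0 - chi) * np.eye(3) + 0.5 * (3.0 * chi - 1.0) * np.outer(n_hat, n_hat)
    return D * E

# Optically thick limit (F -> 0): isotropic pressure E/3 on the diagonal.
print(np.diag(m1_eddington_tensor(1.0, np.array([1e-12, 0.0, 0.0]))))
# Free-streaming limit (|F| -> cE): all pressure along the flux direction.
print(np.diag(m1_eddington_tensor(1.0, np.array([1.0, 0.0, 0.0]))))
```

The two printed limits show why a single closure can cover both regimes: it interpolates between isotropic diffusion and free streaming as the reduced flux varies.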
Sexton, J et al., JOSS, 2021 , vol. 6 , issue 63 (citations: 10)
Hahn, O et al., MNRAS, 2021 , vol. 503 , issue 1 (citations: 42)
Abstract
We present a novel approach to generate higher order initial conditions (ICs) for cosmological simulations that take into account the distinct evolution of baryons and dark matter. We focus on the numerical implementation and the validation of its performance, based on both collisionless N-body simulations and full hydrodynamic Eulerian and Lagrangian simulations. We improve in various ways over previous approaches that were limited to first-order Lagrangian perturbation theory (LPT). Specifically, we (1) generalize nth-order LPT to multifluid systems, allowing 2LPT or 3LPT ICs for two-fluid simulations, (2) employ a novel propagator perturbation theory to set up ICs for Eulerian codes that are fully consistent with 1LPT or 2LPT, (3) demonstrate that our ICs resolve previous problems of two-fluid simulations by using variations in particle masses that eliminate spurious deviations from expected perturbative results, (4) show that the improvements achieved by going to higher order PT are comparable to those seen for single-fluid ICs, and (5) demonstrate the excellent (i.e. few per cent level) agreement between Eulerian and Lagrangian simulations, once high-quality initial conditions are used. The rigorous development of the underlying perturbation theory is presented in a companion paper. All presented algorithms are implemented in the MONOFONIC MUSIC-2 package that we make publicly available.
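To fix ideas, the sketch below generates the lowest-order (1LPT, Zel'dovich) displacement field from a density contrast on a periodic grid; the paper generalises this to nth order and to two fluids, so this single-fluid, unit-growth-factor version is only the starting point, not the method of the paper.

```python
import numpy as np

def zeldovich_displacement(delta, boxsize):
    """First-order LPT (Zel'dovich) displacement from a density contrast delta
    on a periodic grid: psi_k = i k / k^2 * delta_k, so that delta = -div(psi)."""
    n = delta.shape[0]
    kfreq = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                          # avoid division by zero; DC mode carries no displacement
    delta_k = np.fft.fftn(delta)
    psi = []
    for ki in (kx, ky, kz):
        psi_k = 1j * ki / k2 * delta_k
        psi_k[0, 0, 0] = 0.0
        psi.append(np.real(np.fft.ifftn(psi_k)))
    return np.stack(psi, axis=-1)              # shape (n, n, n, 3)

# Particles are then moved from their Lagrangian grid positions q by D(a) * psi(q).
rng = np.random.default_rng(3)
delta = rng.normal(scale=0.01, size=(32, 32, 32))
print(zeldovich_displacement(delta, boxsize=100.0).shape)
```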
McCarthy, I et al., MNRAS, 2020 , vol. 499 , issue 3 (citations: 2)
Abstract
The standard model of cosmology, the Λ cold dark matter (ΛCDM) model, robustly predicts the existence of a multitude of dark matter 'subhaloes' around galaxies like the Milky Way. A wide variety of observations have been proposed to look for the gravitational effects such subhaloes would induce in observable matter. Most of these approaches pertain to the stellar or cool gaseous phases of matter. Here we propose a new approach, which is to search for the perturbations that such dark subhaloes would source in the warm/hot circumgalactic medium (CGM) around normal galaxies. With a combination of analytic theory, carefully controlled high-resolution idealized simulations, and full cosmological hydrodynamical simulations (the ARTEMIS simulations), we calculate the expected signal and how it depends on important physical parameters (subhalo mass, CGM temperature, and relative velocity). We find that dark subhaloes enhance both the local CGM temperature and density and, therefore, also the pressure. For the pressure and density, the fluctuations can vary in magnitude from tens of per cent (for subhaloes with Msub = 10¹⁰ M⊙) to a few per cent (for subhaloes with Msub = 10⁸ M⊙), although this depends strongly on the CGM temperature. The subhaloes also induce fluctuations in the velocity field ranging in magnitude from a few km s-1 up to 25 km s-1. We propose that X-ray, Sunyaev-Zel'dovich effect, radio dispersion measure, and quasar absorption line observations can be used to measure these fluctuations and place constraints on the abundance and distribution of dark subhaloes, thereby placing constraints on the nature of dark matter.
Schafer, C et al., A&C, 2020 , vol. 33 (citations: 11)
Abstract
We present the second release of the now open-source smoothed particle hydrodynamics code miluphcuda. The code is designed to run on Nvidia CUDA-capable devices. It handles one- to three-dimensional problems and includes modules to solve the equations for viscid and inviscid hydrodynamical flows, the equations of continuum mechanics using SPH, and self-gravity with a Barnes-Hut tree. The covered material models include different porosity and plasticity models. Several equations of state, especially for impact physics, are implemented. The basic ideas of the numerical scheme are presented, the usage of the code is explained, and its versatility is shown by means of different applications. The code is publicly available.
Vandenbroucke, B et al., A&A, 2020 , vol. 641 (citations: 3)
Abstract
Context. Monte Carlo radiative transfer (MCRT) is a widely used technique to model the interaction between radiation and a medium. It plays an important role in astrophysical modelling and when these models are compared with observations.
Aims: We present a novel approach to MCRT that addresses the challenging memory-access patterns of traditional MCRT algorithms, which prevent an optimal performance of MCRT simulations on modern hardware with a complex memory architecture.
Methods: We reformulated the MCRT photon-packet life cycle as a task-based algorithm, whereby the computation is broken down into small tasks that are executed concurrently. Photon packets are stored in intermediate buffers, and tasks propagate photon packets through small parts of the computational domain, moving them from one buffer to another in the process.
Results: Using the implementation of the new algorithm in the photoionization MCRT code CMACIONIZE 2.0, we show that the decomposition of the MCRT grid into small parts leads to a significant performance gain during the photon-packet propagation phase, which constitutes the bulk of an MCRT algorithm, because memory caches are used more efficiently. Our new algorithm is faster by a factor of 2 to 4 than an equivalent traditional algorithm and shows good strong scaling up to 30 threads. We briefly discuss adjustments to our new algorithm and extensions to other astrophysical MCRT applications.
Conclusions: We show that optimising the memory access patterns of a memory-bound algorithm such as MCRT can yield significant performance gains. The source code of CMACIONIZE 2.0 is hosted at https://github.com/bwvdnbro/CMacIonize
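The buffered, task-based packet life cycle described in the Methods section can be illustrated with a minimal serial sketch: packets live in per-subgrid buffers, and a "propagate" task drains one buffer, handing escaping packets to the neighbouring buffer. The structure, names, and toy absorption probability are invented for illustration; the real code executes such tasks concurrently and does actual radiative transfer.

```python
from collections import deque
import random

NUM_SUBGRIDS = 4            # a 1D chain of small grid pieces, for illustration

def propagate_task(subgrid, buffers, absorbed):
    """Process every packet currently buffered for one subgrid: either
    terminate it locally or hand it to the neighbouring subgrid's buffer."""
    while buffers[subgrid]:
        packet = buffers[subgrid].popleft()
        if random.random() < 0.3:             # absorbed / terminated here (toy physics)
            absorbed[subgrid] += 1
        elif subgrid + 1 < NUM_SUBGRIDS:      # leaves through the far face
            buffers[subgrid + 1].append(packet)
        # else: escapes the domain

random.seed(0)
buffers = [deque() for _ in range(NUM_SUBGRIDS)]
absorbed = [0] * NUM_SUBGRIDS
buffers[0].extend(range(1000))                # packets injected by a source in subgrid 0

# Serial stand-in for the task loop: run whichever subgrid has waiting packets.
while any(buffers):
    for sg in range(NUM_SUBGRIDS):
        if buffers[sg]:
            propagate_task(sg, buffers, absorbed)
print(absorbed)
```

Because each task touches only one small subgrid and its buffers, the working set fits in cache, which is the memory-access benefit the paper quantifies.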
Borrow, J et al., JOSS, 2020 , vol. 5 , issue 52 (citations: 44)
Kegerreis, J et al., ApJ, 2020 , vol. 897 , issue 2 (citations: 28)
Abstract
We examine the mechanisms by which the atmosphere can be eroded by giant impacts onto Earth-like planets with thin atmospheres, using 3D smoothed particle hydrodynamics simulations with sufficient resolution to directly model the fate of low-mass atmospheres. We present a simple scaling law to estimate the fraction lost for any impact angle and speed in this regime. In the canonical Moon-forming impact, only around 10% of the atmosphere would have been lost from the immediate effects of the collision. There is a gradual transition from removing almost none to almost all of the atmosphere for a grazing impact as it becomes more head-on or increases in speed, including complex, nonmonotonic behavior at low impact angles. In contrast, for head-on impacts, a slightly greater speed can suddenly remove much more atmosphere. Our results broadly agree with the application of 1D models of local atmosphere loss to the ground speeds measured directly from our simulations. However, previous analytical models of shock-wave propagation from an idealized point-mass impact significantly underestimate the ground speeds and hence the total erosion. The strong dependence on impact angle and the interplay of multiple nonlinear and asymmetrical loss mechanisms highlight the need for 3D simulations in order to make realistic predictions.
Cielo, S et al., arXiv, 2020 (citations: 1)
Abstract
The complexity of modern and upcoming computing architectures poses severe challenges for code developers and application specialists, and forces them to expose the highest possible degree of parallelism in order to make the best use of the available hardware. The second-generation Intel Xeon Phi (code-named Knights Landing, henceforth KNL) is the latest many-core system, which implements several interesting hardware features, such as a large number of cores per node (up to 72), 512-bit-wide vector registers, and high-bandwidth memory. The unique features of KNL make this platform a powerful testbed for modern HPC applications. The performance of codes on KNL is therefore a useful proxy for their readiness for future architectures. In this work we describe the lessons learnt during the optimisation of the widely used computational astrophysics codes P-Gadget-3, Flash, and Echo. Moreover, we present results for the visualisation and analysis tools VisIt and yt. These examples show that modern architectures benefit from code optimisation at different levels, even more than traditional multi-core systems. However, the level of modernisation of typical community codes still needs improvement for them to fully utilise the resources of novel architectures.
Elahi, P et al., PASA, 2019 , vol. 36 (citations: 90)
Abstract
We present VELOCIraptor, a massively parallel galaxy/(sub)halo finder that is also capable of robustly identifying tidally disrupted objects and separating stellar halos from galaxies. The code is written in C++11, uses the Message Passing Interface (MPI) and the OpenMP Application Programming Interface (API) for parallelisation, and includes python tools to read/manipulate the data products produced. We demonstrate the power of the VELOCIraptor (sub)halo finder, showing how it can identify subhalos deep within the host that have negligible density contrasts to their parent halo. We find a subhalo mass-radial distance dependence: large subhalos with mass ratios of ≳10⁻² are more common in the central regions than smaller subhalos, a result of dynamical friction and low tidal mass loss rates. This dependence is completely absent in (sub)halo finders in common use, which generally search for substructure in configuration space, yet is present in codes that track particles belonging to halos as they fall into other halos, such as hbt+. VELOCIraptor largely reproduces this dependence without tracking, finding a similar radial dependence to hbt+ in well-resolved halos from our limited-resolution fiducial simulation.
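The key difference from configuration-space finders is that the linking criterion also uses velocities. The sketch below shows the generic 6D friends-of-friends criterion used by phase-space finders of this kind; the normalisation and parameter values are assumptions for illustration, not VELOCIraptor's exact choices.

```python
import numpy as np

def phase_space_linked(p1, p2, sigma_x, sigma_v, b_link=1.0):
    """Generic 6D friends-of-friends criterion: two particles are linked when
        |dx|^2 / sigma_x^2 + |dv|^2 / sigma_v^2 < b_link^2.
    Configuration-space FOF is the sigma_v -> infinity limit, which is why it
    cannot separate subhaloes with little density contrast from their host."""
    dx2 = np.sum((p1["pos"] - p2["pos"])**2)
    dv2 = np.sum((p1["vel"] - p2["vel"])**2)
    return dx2 / sigma_x**2 + dv2 / sigma_v**2 < b_link**2

a = {"pos": np.array([0.0, 0.0, 0.0]), "vel": np.array([0.0, 0.0, 0.0])}
b = {"pos": np.array([0.5, 0.0, 0.0]), "vel": np.array([150.0, 0.0, 0.0])}
# Spatially close but kinematically distinct: not linked in phase space.
print(phase_space_linked(a, b, sigma_x=1.0, sigma_v=100.0))
```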
Zhu, Q, arXiv, 2017 (citations: 1)
Abstract
$N$-body simulations study the dynamics of $N$ particles under the influence of mutual long-range forces such as gravity. In practice, $N$-body codes will violate Newton's third law if they use either an approximate Poisson solver or individual timesteps. In this study, we construct a novel $N$-body scheme by combining a fast multipole method (FMM) based Poisson solver and a time integrator using a hierarchical Hamiltonian splitting (HHS) technique. We test our implementation for collisionless systems using several problems in galactic dynamics. As a result of the momentum-conserving nature of these two key components, the new $N$-body scheme is also momentum conserving. Moreover, we can fully utilize the $\mathcal O(\textit N)$ complexity of FMM with the integrator. With the restored force symmetry, we can improve both angular momentum conservation and energy conservation substantially. The new scheme will be suitable for many applications in galactic dynamics and structure formation. Our implementation, in the code Taichi, is publicly available at https://bitbucket.org/qirong_zhu/taichi_public/.
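The symmetry at stake is easy to see in a direct-summation toy. The sketch below uses explicitly antisymmetric pairwise forces and a single global leapfrog step, so total momentum is conserved to machine precision; it is not the paper's FMM/HHS scheme, but it shows the property that scheme is designed to preserve when the force solver is approximate and the time steps are individual.

```python
import numpy as np

def pairwise_accelerations(pos, mass, eps=1e-2):
    """Direct-sum softened gravity with explicitly antisymmetric pair forces,
    so Newton's third law (and hence total momentum) is exact. G = 1."""
    acc = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            f = d / (d @ d + eps**2)**1.5      # force per unit (m_i * m_j)
            acc[i] += mass[j] * f
            acc[j] -= mass[i] * f              # equal and opposite
    return acc

def leapfrog(pos, vel, mass, dt, steps):
    """Kick-drift-kick integration with a single global time step."""
    acc = pairwise_accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = pairwise_accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos, vel

rng = np.random.default_rng(7)
pos = rng.normal(size=(50, 3))
vel = rng.normal(scale=0.1, size=(50, 3))
mass = np.full(50, 1.0 / 50)
vel -= np.average(vel, axis=0, weights=mass)   # start in the centre-of-momentum frame
pos, vel = leapfrog(pos, vel, mass, dt=0.01, steps=100)
print(np.linalg.norm(np.sum(mass[:, None] * vel, axis=0)))   # total momentum stays ~machine precision
```

Tree or particle-mesh solvers and per-particle time steps break this antisymmetry unless, as in Taichi, both the force evaluation and the time stepping are constructed to preserve it.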
Kidder, L et al., JCoPh, 2017 , vol. 335 (citations: 87)
Abstract
We introduce a new relativistic astrophysics code, SpECTRE, that combines a discontinuous Galerkin method with a task-based parallelism model. SpECTRE's goal is to achieve more accurate solutions for challenging relativistic astrophysics problems such as core-collapse supernovae and binary neutron star mergers. The robustness of the discontinuous Galerkin method allows for the use of high-resolution shock capturing methods in regions where (relativistic) shocks are found, while exploiting high-order accuracy in smooth regions. A task-based parallelism model allows efficient use of the largest supercomputers for problems with a heterogeneous workload over disparate spatial and temporal scales. We argue that the locality and algorithmic structure of discontinuous Galerkin methods will exhibit good scalability within a task-based parallelism framework. We demonstrate the code on a wide variety of challenging benchmark problems in (non)-relativistic (magneto)-hydrodynamics. We demonstrate the code's scalability, including its strong scaling on the NCSA Blue Waters supercomputer up to the machine's full capacity of 22,380 nodes using 671,400 threads.