List of PhD offers of the laboratory.
The last piece of the Standard Model of particle physics, the Higgs boson, was discovered by the ATLAS and CMS collaborations in 2012. The newly discovered boson provides a unique opportunity to search for new, unknown physics beyond the Standard Model. The ATLAS group at CPPM has a leading role in detecting and studying the Higgs boson properties in several of its production and decay modes. The group is currently concentrating on the detection of the production of two Higgs bosons or two scalar bosons, a process that has never been observed before.
This thesis will concentrate on the study of the production of two Higgs bosons decaying to a pair of photons and a pair of b-quarks (HH->bbyy). The detection of such a process would provide strong evidence for the Higgs self-coupling and the electroweak symmetry breaking as described by the Standard Model. Run 3 of the LHC, currently in operation, will provide enough data (in combination with previous data) to improve the discovery potential of this process. A contribution to the search for new physics in the same decay mode is expected. This will involve searching for a heavy particle decaying to two Higgs bosons (X->HH->bbyy) or to a Higgs boson and a new scalar boson (X->SH->bbyy). The search for, detection and measurement of the Standard Model process ZH->bbyy, which has the same final-state products, is important to validate the previous analysis. A contribution to the understanding of ZH->bbyy will also be considered.
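For orientation, the connection to the self-coupling can be sketched at tree level (standard textbook expressions, quoted here only as a reminder; numerical values are approximate):

```latex
% SM Higgs potential and its expansion around the vacuum expectation value v.
V(\Phi) = -\mu^2\,\Phi^\dagger\Phi + \lambda\,(\Phi^\dagger\Phi)^2 ,
\qquad v = \sqrt{\mu^2/\lambda} \simeq 246~\mathrm{GeV}

% After electroweak symmetry breaking, with h the physical Higgs field:
V(h) \supset \tfrac{1}{2} m_H^2\, h^2 + \lambda_3\, v\, h^3 + \tfrac{1}{4}\lambda_4\, h^4 ,
\qquad \lambda_3^{\mathrm{SM}} = \lambda_4^{\mathrm{SM}} = \frac{m_H^2}{2v^2} \simeq 0.13
```

The h^3 term drives Higgs-pair production, so observing HH->bbyy directly constrains lambda_3 (often quoted as kappa_lambda = lambda_3/lambda_3^SM), which beyond-the-Standard-Model scenarios can modify.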
The analyses described above with the Run 3 LHC data are currently being prepared by a group of several ATLAS institutes around the world that collaborate at CERN. The analysis will look for HH production as described by the Standard Model, as well as in beyond-the-Standard-Model scenarios where the Higgs self-coupling is modified or where new scalar particles exist and couple to the Higgs boson.
The efficient identification of photons in the ATLAS detector is one of the main ingredients of the analysis described above. The candidate is expected to work on improving the photon identification in ATLAS. This work involves understanding the shower shapes that a photon leaves in the Liquid Argon calorimeter, as well as developing modern methods (based on neural networks) to identify photons and separate them from the background.
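As a rough illustration of the kind of neural-network classifier involved (a minimal sketch only: the input variables, toy data and scikit-learn model below are illustrative stand-ins, not the ATLAS photon-identification tools):

```python
# Minimal sketch: a small neural network separating "photon-like" from
# "jet-like" candidates using a few shower-shape-style variables.
# The three input columns and the Gaussian toy data are purely illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
# Toy stand-ins for calorimeter shower-shape variables (e.g. lateral shower
# widths, hadronic leakage); real inputs would come from reconstructed candidates.
signal = rng.normal(loc=[0.95, 0.60, 0.01], scale=[0.02, 0.05, 0.01], size=(n, 3))
background = rng.normal(loc=[0.90, 0.70, 0.05], scale=[0.04, 0.08, 0.03], size=(n, 3))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = photon, 0 = background

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]  # photon probability per candidate
print("toy ROC AUC:", roc_auc_score(y_test, scores))
```

In practice the classifier output is used to define identification working points with known photon efficiency and background rejection.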
The successful candidate will work within this collaboration and will take part in preparing and studying simulation samples that describe the physics processes. The candidate will work within a team of four researchers and two PhD students at CPPM. He/she will analyze the kinematic and topological distributions of the signal in order to improve the selection of signal events and separate them from the background. He/she will also work on the estimation of the background and extract the corresponding uncertainties. The last task will be to measure the Higgs boson self-coupling and compare it with predictions from the Standard Model, and/or to set limits on the production cross section of beyond-the-Standard-Model processes in case additional scalar bosons are not discovered.
Prior knowledge of a programming language, especially C++/ROOT or Python, is an advantage but is not mandatory.
This thesis is expected to start in October 2026 (if funding is obtained).
Subject:
Being forbidden in the Standard Model (SM) of particle physics, lepton flavor violating decays are among the most powerful probes to search for physics beyond the SM. In view of the recent anomalies seen by LHCb in tests of lepton flavor universality in B-meson decays, the interest in lepton flavor violating decays involving tau leptons in the final state has been greatly reinforced. In particular, several new physics models predict branching fractions for such decays just below the current experimental limits. This is true as well for the related FCNC processes.
The Belle II experiment, located at KEK in Japan, started taking data in 2019, aiming at collecting much more data than its predecessor, Belle. The goal of this PhD is to exploit the Belle II data in order to obtain the best experimental limits on lepton flavor violating decays into a final state containing a hadronic system X and a light lepton l (an electron or a muon), as well as on related transitions. In particular, we would like to explore new B-tagging methods in Belle II.
Activities:
Data analysis using machine learning techniques, participation in data taking, participation in Belle II service tasks, and outreach and dissemination activities.
Work context:
This PhD will take place at CPPM, Marseille (https://www.cppm.in2p3.fr/web/en/index.html). Trips to KEK for collaboration meetings, and longer stays to participate in data taking, are foreseen.
Additional information:
Applicants must hold a Master's degree (or equivalent) in physics, or expect to hold such a degree by the start of employment. Applications must include a CV, grade records, a motivation statement and three letters of recommendation.
References:
https://arxiv.org/abs/1808.10567
https://arxiv.org/abs/1703.02508
https://arxiv.org/abs/hep-ex/0511015
Mission:
Mirion Technologies, the world leader in radiation protection, develops measurement systems to guarantee personal safety against ionizing radiation in industrial environments. Its product range includes systems for protecting people and goods, for monitoring objects and environmental contamination, and for measuring radioactivity.
CPPM, the Marseille Particle Physics Center, is a Joint Research Unit (UMR 7346). The laboratory is part of the National Institute of Nuclear and Particle Physics (IN2P3), under the joint supervision of the French National Centre for Scientific Research (CNRS) and Aix-Marseille University.
The CPPM is involved in the construction of detectors for CERN's large experiments. In particular, it is involved in the design of the integrated circuits required for the development, construction, and assembly of these detectors.
In this context, CPPM and Mirion Technologies have been collaborating for several years on the design and development of integrated circuits dedicated to ionizing radiation detection.
This PhD work is part of the development of the future generation of ionizing radiation detection devices. It aims to assess the impact of advanced technologies, particularly CMOS processes at 28 nm and below, on the overall performance and resilience of the detection chain, with a prime focus on noise and power consumption under total ionizing dose (TID) from gamma and neutron fluxes. The study covers both the analog and digital sections of the ASIC. Special attention is given to evaluating the potential of pixelated ASICs, commonly used in particle physics experiments, for applications in ionizing radiation detection and nuclear medicine.
The analog front-end of the ASIC is responsible for biasing the detector, amplifying, and shaping the signals. The noise factor of this sensitive analog stage is a critical design parameter, as it directly affects the metrological performance of the ASIC/detector system. Optimizing the signal-to-noise ratio (SNR) is therefore a key focus of the study, particularly as it must be achieved in conjunction with the optimization of speed and power consumption.
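As a reminder of why these quantities are coupled, the equivalent noise charge (ENC) of a charge-sensitive front end is often written schematically as below (textbook-style approximation; the coefficients a_s, a_f, a_p depend on the shaper and are left symbolic):

```latex
% Schematic ENC budget of a charge preamplifier + shaper.
% C_d: detector + input capacitance, tau: shaping time, g_m: input-transistor
% transconductance, I_leak: detector leakage current, R_p: parallel bias resistance,
% K_f: 1/f noise coefficient, gamma: channel thermal-noise factor.
\mathrm{ENC}^2 \;\approx\;
   a_s\,\frac{4kT\gamma}{g_m}\,\frac{C_d^2}{\tau}              % series (white) noise
 + a_f\,K_f\,C_d^2                                             % 1/f noise
 + a_p\left(2qI_{\mathrm{leak}} + \frac{4kT}{R_p}\right)\tau   % parallel noise
```

The series term scales with C_d^2 and improves with larger g_m (hence more power) or longer shaping time, while the parallel term grows with shaping time: this is the noise/speed/power trade-off to be optimized for detector capacitances of a few picofarads.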
This PhD work aims to propose, develop, and validate amplification and signal shaping circuit architectures, whether building on or departing from the current state of the art. It should be noted that the integrated circuit must be designed with a level of robustness suitable for developing compact detection devices intended for use in industrial environments.
Main activity:
Establish the state of the art of low-noise amplification circuits
Design analog and digital blocks for ionizing radiation detection
Study the impact of advanced CMOS technologies (28 nm, 22 nm or below) on performance
A first step will be to target single-input designs for solid-state detectors (e.g. SiC, diamond) with detector capacitance in the range of 1-10 pF; a second step will be to evaluate the relevance of pixelated circuits for the intended application
Contribute to the development of specifications
Contribute to the design and development of the ASIC; implement and verify circuit schematics through simulation
Anticipate process variations to ensure manufacturing yield
Verify the functionality of the integrated circuit under industrial operating constraints (temperature, EMC, supply voltage, etc.)
Develop behavioral models for circuit verification
Contribute to the integration and verification of the ASIC
Communicate effectively within the team and with project partners
Write detailed design documentation and contribute to the definition of the acquisition system required for ASIC characterization
Present results at conferences and publish them in peer-reviewed journals
Technical knowledge:
Solid knowledge and creativity in microelectronic circuit design, particularly low-noise and low-power analog circuits
Knowledge of CAD tools for simulation, layout, and verification for analog and digital ASICs (Cadence environment)
Knowledge of mixed design (Analog/Digital) rules
First experience in analog IC design or in the design of acquisition and processing systems would be appreciated
Qualifications required: Master's degree (M2) or an engineering degree in microelectronics, or equivalent; knowledge of physics measurements is appreciated
Field of education: Electronics and instrumentation
Language skills: English (oral and written comprehension)
Location: Centre de Physique des Particules de Marseille (CPPM)
Thesis start date: From Sept/Oct 2025
Annual gross salary: from €26,400 to €28,000, depending on profile
Contact persons: Mohsine Menouni
The context: More than twenty years after the discovery of the accelerated nature of the Universe's expansion, there is still no definitive explanation for its physical origin. Several types of dark energy, or even alternatives/extensions to general relativity, have been proposed in the literature in an attempt to explain the acceleration of the expansion. By accurately measuring both the expansion rate of the Universe and the growth rate of structures as a function of cosmic time, we can learn more about this cosmological mystery. We are particularly interested in obtaining the best constraints on the growth rate of structures at low redshift, where the expansion is accelerated and dark energy dominates. These measurements can be achieved by combining galaxy positions and their velocities: the statistical properties of the density and velocity fields are tightly connected to the underlying cosmological model.
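For orientation, the standard linear-theory relation behind this statement (quoted here as a reminder, in Fourier space for the second form) ties the peculiar-velocity field to the density contrast through the growth rate f:

```latex
% Linearized continuity equation: delta = (rho - rho_bar)/rho_bar is the matter
% density contrast, H(a) the expansion rate, and f = dlnD/dlna the growth rate.
\nabla\cdot\mathbf{v} = -\,a\,H(a)\,f(a)\,\delta ,
\qquad
\mathbf{v}(\mathbf{k}) = i\,a\,H(a)\,f(a)\,\frac{\mathbf{k}}{k^2}\,\delta(\mathbf{k})
```

In general relativity with ΛCDM, f(a) is well approximated by Omega_m(a)^0.55, so measuring f from galaxy densities (DESI) and peculiar velocities (ZTF supernovae) is a direct test of gravity on cosmological scales.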
Experiments: Measurements of the expansion and growth rates of the Universe are the main scientific goal of current and future experiments such as the Dark Energy Spectroscopic Instrument (DESI), the Zwicky Transient Facility (ZTF), Euclid and the Vera Rubin Observatory Legacy Survey of Space and Time (Rubin-LSST).
DESI is currently measuring the positions (and redshifts) of 40 million galaxies, and its low-redshift sample will be the most complete to date.
The ZTF survey will discover more than 5,000 type Ia supernovae, from which we can derive galaxy peculiar velocities. Rubin-LSST will increase this number to hundreds of thousands.
Goal of the thesis: The selected candidate will work towards the joint analysis of the DESI and ZTF datasets, which contain millions of galaxies and thousands of type Ia supernovae. The candidate will become familiar with the physics and statistics of galaxy clustering, will code their own analysis pipeline, test it on state-of-the-art simulations, and apply it to real data. The measurement of the growth rate of structures using DESI galaxies and peculiar velocities from ZTF supernovae will enable tests of general relativity on cosmic scales. This study is a key project in the roadmap of the DESI and ZTF collaborations.
Profile required: The candidate must have a strong interest in cosmology, statistics, data analysis and programming (we mostly use Python). English proficiency and teamwork skills are also required.
Twenty years after the discovery of the accelerating expansion of the universe through supernova measurements, the supernova probe remains one of the most accurate means of measuring the cosmological parameters of this recent period in the history of our universe, dominated by the so-called dark energy.
The Rubin Observatory, with its Legacy Survey of Space and Time (Rubin/LSST), will be commissioned in 2025 and will be fully operational by the end of 2025. It is an 8.4-m telescope equipped with a 3.2-billion-pixel camera, the most powerful ever built.
This telescope will take a picture of half the sky every three nights for ten years. This survey will make it possible to measure billions of galaxies with great precision, and to track the variation over time of all transient objects. Together with many other astrophysical studies, it will be a very powerful machine for determining cosmological parameters using many different probes and, in particular, will impose strong constraints on the nature of dark energy. The LSST project aims to discover up to half a million supernovae. This improvement of two to three orders of magnitude in statistics over the current data set will enable precise testing of the parameters of dark energy, tests of general relativity, and new constraints on the isotropy of the universe.
During the thesis, we propose to prepare and then participate in the analysis of the first LSST supernova data. The preparation will be done using existing HSC/Subaru data, as well as the first images of LSST.
The student will participate in the commissioning of Rubin/LSST. He/she will be in charge of pursuing developments in deep learning methods for supernova identification, and applying them to the first observations.
He/she will then take part in the first analyses using the supernovae he/she has helped to identify.
The LSST group at CPPM is already involved in precision photometry for LSST, with direct involvement in the validation of algorithms within DESC/LSST [1][2][3], and has proposed a new deep learning method to improve photometric identification of supernovae [4] and photometric redshifts [5].
[1] https://www.lsst.org/content/lsst-science-drivers-reference-design-and-anticipated-data-products
[2] https://arxiv.org/abs/1211.0310
[3] https://www.lsst.org/about/dm
[4] https://arxiv.org/abs/1901.01298
[5] https://arxiv.org/abs/1806.06607
[6] https://arxiv.org/abs/1401.4064
In the late 1990s, measurements of the distances of supernovae and the redshifts of their host galaxies revealed that the expansion of the Universe is accelerating. More than 20 years after this discovery, the nature of the dark energy at the origin of this phenomenon remains unknown.
The ΛCDM concordance model describes a homogeneous, isotropic Universe on large scales, subject to the laws of general relativity (GR). In this model, most of the Universe's energy content comes from cold dark matter and dark energy, the latter introduced as a cosmological constant. It behaves like a perfect fluid with negative pressure p and equation of state p = -rho (in units where c = 1), where rho is the energy density.
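For reference, in the usual notation (units with c = 1) the equation-of-state parameter and the acceleration condition read:

```latex
% Equation-of-state parameter and the second Friedmann equation (c = 1).
w \equiv \frac{p}{\rho} ,
\qquad
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) = -\frac{4\pi G}{3}\,\rho\,(1 + 3w)
```

Accelerated expansion (\ddot a > 0) requires w < -1/3; a cosmological constant corresponds to w = -1 exactly, whereas the models discussed below allow w to vary.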
Some alternative models (see [1] for a review) introduce scalar fields (quintessence) whose evolution is responsible for the accelerated expansion. These scalar fields can vary in time and space. They can therefore have a time-dependent equation of state and generate anisotropic expansion.
Other models propose to modify the law of gravitation on large scales, mimicking the role of dark energy.
Supernovae remain one of the most accurate probes of the Universe's expansion and homogeneity. In addition, part of the redshift of galaxies is due to a Doppler effect caused by their peculiar velocities. We can then use supernovae to reconstruct the velocity field on large scales and measure the growth rate of cosmic structures, which will enable us to test the law of gravitation.
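At leading order and low redshift (a commonly used approximation; sign conventions vary between analyses), the host peculiar velocity is estimated from the Hubble-diagram residual Delta_mu = mu_obs - mu_model(z_obs):

```latex
% Observed redshift as the combination of cosmological and Doppler contributions,
% and the leading-order low-z velocity estimator from the distance-modulus residual.
% A supernova receding from us (v_p > 0) appears brighter than expected, Delta mu < 0.
(1+z_{\mathrm{obs}}) = (1+z_{\mathrm{cos}})\left(1+\frac{v_p}{c}\right) ,
\qquad
v_p \;\approx\; -\,\frac{c\,\ln 10}{5}\; z_{\mathrm{obs}}\,\Delta\mu
```

Averaging such velocity estimates over many supernovae reconstructs the large-scale velocity field, whose amplitude is governed by the growth rate of structures.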
An anisotropy of expansion on large scales, a modification of GR, or an evolution of the equation of state for Dark Energy, would all be revolutionary observations that would challenge our current model.
Until now, supernova surveys have gathered data from multiple telescopes, complicating their statistical analysis. Surveys by the Zwicky Transient Facility (ZTF: https://www.ztf.caltech.edu/) and the Vera Rubin/LSST Observatory (https://www.lsst.org/) will change all that. They cover the entire sky and accurately measure the distances to tens (hundreds) of thousands of nearby (distant) supernovae.
The CPPM has been working on ZTF data since 2021 and will publish a first cosmological analysis in 2025 with ~3000 SN1a. We have also been involved in the construction and implementation of LSST for years, preparing for the arrival of the first data this summer.
Within the group, we are working on the photometric calibration of the ZTF survey, essential for the measurement precision we need (see ubercalibration [2,3]). A first PhD student developed a pipeline to simulate ZTF and measure the growth rate of structures ([4], defended in 2023), a second student adapted this exercise to LSST ([5], defended in 2025), and a third one started in 2024 to lead the analysis of real data. In addition, three post-docs have joined the group to work on ZTF, and a Chair of Excellence (DARKUNI) is extending this work by combining these data with spectroscopic data from DESI.
The aim of the thesis is to develop and perfect this analysis pipeline for measuring the growth rate of structures. The full ZTF sample of 30,000 SN1a will be available for the final cosmological analysis of this survey. The thesis also coincides with the arrival of the first SN1a catalogs from LSST.
Other aspects may be added to the thesis, such as the study of the homogeneity of the expansion, the photometric calibration of the data, and so on.
This is an observational cosmology thesis, for a candidate interested in cosmology and data analysis.
[1] https://arxiv.org/abs/1601.06133
[2] https://arxiv.org/abs/astro-ph/0703454v2
[3] https://arxiv.org/abs/1201.2208v2
[4] https://arxiv.org/abs/2303.01198 https://snsim.readthedocs.io/
[5] https://arxiv.org/abs/2507.00157