Talk 1 - Ching-Hua Li: Testing tau-e universality at LHCb
Abstract: The SM includes three lepton families: electrons, muons, and tau leptons, each associated with a corresponding neutrino type. A key feature of the SM is the universality of the electroweak gauge couplings among the three lepton families, known as lepton flavor universality (LFU). Within the SM, the flavors differ only through phase-space and lepton-mass effects in the amplitudes, so any significant deviation from universality beyond these effects would indicate new physics beyond the SM. Tests of LFU therefore aim to measure the extent of lepton flavor symmetry breaking relative to SM predictions. This talk will present my study of the branching fraction ratio between the semileptonic decays B -> D* e nu and B -> D* tau nu, with tau -> e nu nu, as a test of LFU at LHCb.
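For concreteness, the observable can be written in the standard R(D*) shorthand (a sketch; the label and neutrino subscripts are conventional notation rather than wording from the abstract):

$$ R(D^{*}) \;=\; \frac{\mathcal{B}(B \to D^{*}\tau\nu_{\tau})}{\mathcal{B}(B \to D^{*}e\nu_{e})}, \qquad \tau \to e\nu\bar{\nu}. $$

Reconstructing the tau in its electronic mode gives both channels the same visible D* e final state, so many detector systematics cancel in the ratio.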
Talk 2 - Sarah Ferraiuolo: Probing Cosmology with Gravitational-Wave and Euclid data
Talk 3 - Raphael Bertrand: Optimisation of embedded neural networks for the energy reconstruction of the liquid argon calorimeter cells of ATLAS
Abstract: The Large Hadron Collider (LHC) collides protons at nearly the speed of light, producing new particles observed by the ATLAS detector. In 2026, the LHC will undergo a major upgrade to the High-Luminosity LHC (HL-LHC), increasing the luminosity by a factor of 5–7 and delivering up to 200 simultaneous collisions. To cope with the resulting data rates, ATLAS will replace the readout electronics of the Liquid Argon (LAr) Calorimeter as part of its Phase-II upgrade. The new LASP boards, each equipped with two FPGAs that perform real-time energy reconstruction for 384 channels apiece, will cover about 180,000 calorimeter cells in total.
At high pileup, overlapping electronic pulses challenge the Optimal Filtering (OF) algorithm currently used to compute cell energies. Neural network (NN)–based alternatives are being explored to outperform OF while respecting the FPGA constraints: latency below 125 ns and limited resource usage. Following earlier studies of recurrent and convolutional architectures, a dense-layer design is proposed that reduces both latency and resource consumption.
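As an illustration of such a dense design, here is a minimal sketch assuming a Keras-style workflow; the input length, layer widths, and toy data are placeholder assumptions, not the configuration from the talk.

```python
# Minimal sketch of a dense energy-reconstruction network (assumed setup).
import numpy as np
from tensorflow import keras

N_SAMPLES = 5  # assumed number of digitized ADC pulse samples per cell

def build_dense_model(hidden_units=(8, 4)):
    """Small fully connected network mapping pulse samples to a cell energy."""
    inputs = keras.Input(shape=(N_SAMPLES,), name="adc_samples")
    x = inputs
    for i, units in enumerate(hidden_units):
        # Narrow dense layers keep latency and FPGA resource usage low.
        x = keras.layers.Dense(units, activation="relu", name=f"dense_{i}")(x)
    outputs = keras.layers.Dense(1, name="energy")(x)
    return keras.Model(inputs, outputs)

model = build_dense_model()
model.compile(optimizer="adam", loss="mse")

# Toy data standing in for simulated calorimeter pulses and true energies.
x_train = np.random.normal(size=(1024, N_SAMPLES)).astype("float32")
y_train = x_train.sum(axis=1, keepdims=True)  # placeholder "true energy" target
model.fit(x_train, y_train, epochs=2, batch_size=64, verbose=0)
```

A purely feed-forward stack like this avoids the sequential dependencies of recurrent layers, which is what makes the claimed latency and resource savings on an FPGA plausible.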
Bayesian hyperparameter optimisation is used to adapt the network size, balancing energy resolution against FPGA feasibility; the results indicate which configurations achieve the best performance within the hardware limits. In addition, deep evidential regression is employed to estimate uncertainties: the network predicts the parameters of a probability distribution over the energy, enabling quantification of both data noise (aleatoric uncertainty) and model imprecision (epistemic uncertainty) with minimal overhead.
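As a sketch of such a hyperparameter search, the following uses the keras-tuner library's BayesianOptimization tuner; the search-space bounds, trial budget, and toy data are illustrative assumptions, and the talk's actual objective and FPGA cost model are not shown.

```python
# Minimal sketch of Bayesian hyperparameter optimisation with keras-tuner.
import numpy as np
import keras_tuner as kt
from tensorflow import keras

N_SAMPLES = 5  # assumed number of ADC samples per cell

def build_model(hp):
    """Hypermodel: depth and layer widths are tuned within FPGA-friendly bounds."""
    model = keras.Sequential()
    model.add(keras.Input(shape=(N_SAMPLES,)))
    for i in range(hp.Int("n_layers", 1, 3)):
        model.add(keras.layers.Dense(
            hp.Int(f"units_{i}", min_value=4, max_value=32, step=4),
            activation="relu"))
    model.add(keras.layers.Dense(1))
    model.compile(optimizer="adam", loss="mse")
    return model

tuner = kt.BayesianOptimization(
    build_model, objective="val_loss", max_trials=15, overwrite=True)

# Toy data standing in for simulated pulse samples and true energies.
x = np.random.normal(size=(2048, N_SAMPLES)).astype("float32")
y = x.sum(axis=1, keepdims=True)
tuner.search(x, y, validation_split=0.2, epochs=3, verbose=0)
best_model = tuner.get_best_models(num_models=1)[0]
```

Bounding the search space itself (here, at most 3 layers of at most 32 units) is one simple way to keep every candidate within hardware limits while the tuner optimises resolution.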
The talk will compare network architectures, present the Bayesian optimisation results, and demonstrate uncertainty estimation with evidential regression.