Exploring cosmic origins with CORE: Cosmological parameters

We forecast the main cosmological parameter constraints achievable with the CORE space mission which is dedicated to mapping the polarisation of the Cosmic Microwave Background (CMB). CORE was recently submitted in response to ESA's fifth call for medium-sized mission proposals (M5). Here we report the results from our pre-submission study of the impact of various instrumental options, in particular the telescope size and sensitivity level, and review the great, transformative potential of the mission as proposed. Specifically, we assess the impact on a broad range of fundamental parameters of our Universe as a function of the expected CMB characteristics, with other papers in the series focusing on controlling astrophysical and instrumental residual systematics. In this paper, we assume that only a few central CORE frequency channels are usable for our purpose, all others being devoted to the cleaning of astrophysical contaminants. On the theoretical side, we assume ΛCDM as our general framework and quantify the improvement provided by CORE over the current constraints from the Planck 2015 release. We also study the joint sensitivity of CORE and of future Baryon Acoustic Oscillation and Large Scale Structure experiments like DESI and Euclid. Specific constraints on the physics of inflation are presented in another paper of the series. In addition to the six parameters of the base ΛCDM, which describe the matter content of a spatially flat universe with adiabatic and scalar primordial fluctuations from inflation, we derive the precision achievable on parameters like those describing curvature, neutrino physics, extra light relics, primordial helium abundance, dark matter annihilation, recombination physics, variation of fundamental constants, dark energy, modified gravity, reionization and cosmic birefringence. 
In addition to assessing the improvement on the precision of individual parameters, we also forecast the post-CORE overall reduction of the allowed parameter space, with figures of merit for various models increasing by as much as ∼ 10^7 as compared to Planck 2015, and 10^5 with respect to Planck 2015 + future BAO measurements.


Introduction
In the quarter century since their first firm detection by the COBE satellite [1], Cosmic Microwave Background (CMB) anisotropies have revolutionized the field of cosmology, with an enormous impact on several branches of astrophysics and particle physics. From observations made by ground-based experiments such as TOCO [2], DASI [3] and ACBAR [4], balloon-borne experiments like BOOMERanG [5,6], MAXIMA [7] and Archeops [8], and satellite experiments such as COBE, WMAP [9,10] and, more recently, Planck [11,12], a cosmological "concordance" model has emerged, in which the need for new physics beyond the standard model of particle physics is blatantly evident. The impressive experimental progress in detector sensitivity and observational techniques, combined with the accuracy of linear perturbation theory, has clearly identified the CMB as the "sweet spot" from which to accurately constrain cosmological parameters and fundamental physics. This calls for new and significantly improved measurements of CMB anisotropies, to continue mining their scientific content.
In particular, observations of the CMB angular power spectrum are not only in impressive agreement with the expectations of the so-called ΛCDM model, based on cold dark matter (CDM hereafter), inflation and a cosmological constant, but now also constrain several parameters with exquisite precision. For example, the cold dark matter density is now constrained to 1.25% accuracy by recent Planck measurements, naively yielding evidence for CDM at about ∼ 80 standard deviations (see [12]). Cosmology is indeed extremely powerful in identifying CDM, since on cosmological scales the gravitational effects of CDM are cleaner and can be precisely discriminated from those of standard baryonic matter. In this respect, no other cosmological observable aside from the CMB could show, if considered alone, the need for CDM at such a level of significance. Moreover, the cosmological signatures of CDM rely mainly on gravity, while astrophysical searches for DM annihilating or decaying into standard model particles depend on the strength of the interaction. Similarly, a possible signal in underground laboratory experiments depends on the coupling between CDM particles and ordinary matter (nuclei and electrons). It is possible to construct CDM models that interact essentially only through gravity, and the current lack of detection of CDM in underground and astrophysical experiments leaves this possibility open. If this is the case, structure formation on cosmological scales may be the best observatory we have in which to study CDM properties, and a further improvement from future CMB measurements will clearly play a crucial and complementary role. The CMB even allows us to put bounds on the stability and decay time of CDM through purely gravitational effects [14-17].
CMB measurements also provide an extremely stringent constraint on standard baryonic matter. The recent results from Planck constrain the baryonic content to 0.7% accuracy, nearly a factor of 2 better than the present constraints derived from primordial deuterium measurements [18], obtained assuming standard Big Bang Nucleosynthesis (BBN). In this respect, the experimental uncertainties on nuclear rates like d(p,γ)³He that enter BBN computations are starting to be relevant for accurate estimates of the baryon content from measurements of primordial nuclides. A combination of CMB and primordial deuterium measurements is starting to produce independent bounds on these quantities (see, e.g., [19,20]). As a matter of fact, a further improvement in the determination of the baryon density is mainly expected from future CMB anisotropy measurements and could help not only in testing the BBN scenario but also in providing independent constraints on nuclear physics.
In this direction, it is also important to stress that CMB measurements are already so accurate that they can constrain some aspects of the physics of hydrogen recombination, such as the 2s-1s two-photon decay rate, with a precision higher than current experimental estimates [12]. New CMB measurements can, therefore, considerably improve our knowledge of the physics of recombination. Since primordial helium also recombines, albeit at higher redshifts, the CMB is sensitive to the primordial ⁴He abundance, which lowers the free electron number density at recombination. The Planck mission already detected the presence of primordial helium at the level of ∼ 10 standard deviations [12]. Next-generation CMB experiments could significantly improve this measurement, reaching a precision comparable with current direct measurements from extragalactic HII regions, which may, however, still be plagued by systematics [21,22]. Constraining the physics of recombination will also bound the possible presence of extra ionizing photons that could be produced by dark matter self-annihilation or decay (see e.g. [23,24,26-28]). The Planck 2015 data release already produced significant constraints on dark matter annihilation at recombination that are fully complementary to those derived from laboratory and astrophysical experiments [12].
The CMB is also a powerful probe of the density and properties of "light" particles, i.e. particles with masses below ∼ 1 eV that become non-relativistic between recombination (at redshift z ∼ 1100, when the primary CMB anisotropies are imprinted) and today. Such particles may affect primary and secondary CMB anisotropies, as well as structure formation. In particular, they can change the amplitude of gravitational lensing produced by the intervening matter fluctuations [29] and leave clear signatures in the CMB power spectra. Neutrinos are the most natural candidate to leave such an imprint (see e.g. [30,31]). From neutrino oscillation experiments we indeed know that neutrinos are massive and that their total mass summed over the three eigenstates satisfies M_ν > 60 meV in the case of a normal hierarchy and M_ν > 100 meV in the case of an inverted hierarchy (see e.g. [32-34] for recent reviews of the current data). The most recent constraints from Planck measurements (temperature, polarization and CMB lensing) bound the total mass to M_ν < 140 meV at 95% c.l. [35]. Clearly, an improvement of the constraint towards a sensitivity of σ(M_ν) ∼ 30 meV would provide a guaranteed discovery of the neutrino absolute mass scale and of the neutrino mass hierarchy (see e.g. [36-39]). Neutrinos are firmly established in the standard model of particle physics, and a non-detection of the neutrino mass would cast serious doubts on the ΛCDM model, opening the window to new physics in the dark sector, such as, for instance, interactions between neutrinos and new light particles [40]. On the other hand, several extensions of the standard model of particle physics feature light relic particles that could produce effects similar to massive neutrinos, and might be detected or strongly constrained by future CMB measurements. Thermal light axions (see e.g. [41-43]), for example, can produce very similar effects.
Axions change the growth of structure after decoupling and increase the energy density in relativistic particles at early times, parametrized by the quantity N_eff. Models of thermal axions would be difficult to accommodate with a value of N_eff < 3.25, so a CMB experiment with a sensitivity of ∆N_eff = 0.04 could either rule out or confirm their existence with high significance. Other possible candidates are light sterile neutrinos and asymmetric dark matter (see e.g. [44-46] and [47]). More generally, a sensitivity of ∆N_eff = 0.04 could rule out the presence of any thermally-decoupled Goldstone boson that decoupled after the QCD phase transition (see e.g. [48]). The same sensitivity would also probe non-standard neutrino decoupling (see e.g. [49]) and the possibility of a low reheating temperature of order O(MeV) [50].
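The lower bounds on M_ν quoted above follow directly from the measured mass-squared splittings; a minimal sketch, assuming representative oscillation values (∆m²_21 ≈ 7.5×10⁻⁵ eV², |∆m²_31| ≈ 2.5×10⁻³ eV²; illustrative numbers, not those of the reviews cited above):

```python
from math import sqrt

# Representative mass-squared splittings from oscillation data (eV^2);
# illustrative values only.
dm21_sq = 7.5e-5   # solar splitting
dm31_sq = 2.5e-3   # atmospheric splitting (absolute value)

# Normal hierarchy: lightest state m1 = 0
m1 = 0.0
m2 = sqrt(m1**2 + dm21_sq)
m3 = sqrt(m1**2 + dm31_sq)
M_nu_NH = (m1 + m2 + m3) * 1e3   # total mass in meV

# Inverted hierarchy: lightest state m3 = 0
m3i = 0.0
m1i = sqrt(m3i**2 + dm31_sq)
m2i = sqrt(m1i**2 + dm21_sq)
M_nu_IH = (m3i + m1i + m2i) * 1e3

print(f"minimal M_nu, normal hierarchy:   {M_nu_NH:.0f} meV")
print(f"minimal M_nu, inverted hierarchy: {M_nu_IH:.0f} meV")
```

With these inputs the minimal sums come out near 60 meV (normal) and 100 meV (inverted), matching the bounds quoted in the text.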
In combination with galaxy clustering and type Ia supernova luminosity distances, CMB measurements from Planck have also provided the tightest constraints on the dark energy equation of state w [12]. In particular, the current tension between the Planck value of the Hubble constant and the HST value from Riess et al. 2016 [51] could be resolved by invoking an equation of state w < −1 [52]. Planck alone is currently unable to constrain the equation of state w and the Hubble constant H_0 independently, due to a "geometrical degeneracy" between the two parameters. An improved measurement of the CMB anisotropies could break this degeneracy, produce two independent constraints on w and H_0, and possibly resolve the current tension on the value of the Hubble constant. Moreover, modified gravity models have been proposed that could provide an explanation for the current accelerated expansion of our universe. The CMB is sensitive to modifications of General Relativity through CMB lensing and the late Integrated Sachs-Wolfe (ISW) effect. Current Planck measurements are compatible with certain types of departures from GR (and even prefer such models, albeit at low statistical significance, see [53]). Future CMB measurements are, therefore, extremely important in addressing this issue.
In order to further improve current measurements and provide deeper insight into the nature of dark matter and dark energy, a CMB satellite mission is clearly our ultimate goal. This does, however, raise two fundamental questions. The first is whether we really need to go to space and launch a new satellite, given that several other ground-based and balloon-borne experiments are under discussion or already under construction (see e.g. [56]). Over the next fifteen years it is certainly reasonable to assume that these experiments will collect excellent data that could, in principle, constrain cosmological parameters to similar precision. However, there is a fundamental aspect to consider: ground-based experiments have very limited frequency coverage and sample just a portion of the CMB sky. Contamination from unknown foregrounds can be extremely dangerous for ground-based experiments, and can easily lead to spurious detections. The claimed detection of a primordial Gravitational Wave (GW) background by the BICEP2 experiment [57] was later ruled out by Planck observations at high frequencies, showing that contamination from thermal dust in our Galaxy is far more severe than anticipated. This shows that unprecedented control of systematics and a wide frequency coverage are required, both of which call for a space-based mission. In fact, future ground-based and satellite experiments should be seen as complementary: while ground-based experiments could provide a first hint of primordial GWs or neutrino masses, a satellite experiment could monitor the frequency dependence of the corresponding signal with the highest possible accuracy, and unambiguously confirm its primordial nature.
Moreover, most future galaxy and cosmic shear surveys will sample several extended regions of the sky. Cross-correlations with CMB data in the same sky areas will offer a unique opportunity to test for systematics and new physics. It is, therefore, clear that a full-sky survey from a satellite will offer much more complete, consistent and homogeneous information than several ground-based observations of sky patches. Moreover, an accurate full-sky map of CMB polarisation on large angular scales can provide extremely strong constraints on the reionization optical depth, breaking degeneracies with other parameters such as neutrino masses.
The second fundamental question raised by a new CMB satellite proposal is that, after increasing sensitivity and frequency coverage, one has to face the intrinsic limit of cosmic variance. At a certain point, no matter how much we increase the instrumental sensitivity, we reach the cosmic variance limit and stop improving the precision of parameter estimates. This raises the following issue: how close are we to cosmic variance with current CMB data? The Planck satellite measured the temperature angular spectrum up to the limit of cosmic variance over a wide range of angular scales; however, we are far from this limit when we consider polarization spectra. So how much can current constraints improve with a future CMB satellite?
This is exactly the question we address in this paper. Assuming that foregrounds and systematics are under control, as should be the case with a well-designed satellite mission, we study by how much current constraints can improve, and whether these improvements are worth the effort. In this respect, we adopt the proposed baseline experimental configuration of the recent CORE satellite proposal [59], submitted in response to ESA's call for a Medium-size mission opportunity (M5) as the successor of the Planck satellite. We refer to this experimental configuration (with a ∼ 120 cm mirror) as CORE-M5 in the remainder of this paper. We compare the results from CORE-M5 with other possible experimental configurations, ranging from a minimal and less expensive configuration (LiteCORE-80), with a ∼ 80 cm mirror, aimed mainly at measuring large and mid-range angular scale polarization, up to a much more ambitious configuration (COrE+), with a ∼ 150 cm mirror. Given these experimental configurations, we forecast the achievable constraints for a large number of possible models, trying to cover most of the science that could be extracted from the CORE data (with the exception of constraints on GWs and on inflation, addressed separately in a companion paper [60]). After a description of the analysis method in Section II, we start in Section III by providing the constraints achievable in the context of the ΛCDM concordance model. We then review the constraints that could be obtained on spatial curvature (Section IV), extra relativistic relics (Section V), primordial nucleosynthesis and helium abundance (Section VI), neutrinos (Section VII), dark energy (Section VIII), extended parameter spaces (Section IX), recombination (Section X), dark matter annihilation and decay (Section XI), variation of fundamental constants (Section XII), reionization (Section XIII), modified gravity (Section XIV) and cosmic birefringence (Section XV).
This work is part of a series of papers that present the science achievable by the CORE space mission and focuses on the constraints on cosmological parameters and fundamental physics that can be derived from future measurements of CMB temperature and polarization angular power spectra and lensing. The constraints on inflationary models are discussed in detail in a companion paper [60] while the cosmological constraints from complementary galaxy clusters data provided by CORE are presented in [61]. The impact of CORE on the study of extragalactic sources is presented in [62].

Experimental setup and fiducial model
We run Monte Carlo Markov Chain (MCMC) forecasts for several possible experimental configurations of the CORE CMB satellite, following the commonly used approach described, for example, in [63] and [64]. The method consists of generating mock data according to some fiducial model. One then postulates a Gaussian likelihood with some instrumental noise level, and fits theoretical predictions for various cosmological models to the mock data, using standard Bayesian parameter extraction techniques. For the purpose of studying the sensitivity of the experiment to each cosmological parameter, as well as parameter degeneracies and possible parameter extraction biases, it is sufficient to set the mock data spectrum equal to the fiducial spectrum, instead of generating random realisations of the fiducial model.
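This forecasting strategy can be illustrated with a deliberately simplified, hypothetical sketch: a one-parameter toy "model" fit to mock data set equal to the fiducial, sampled with a basic Metropolis-Hastings loop (the spectrum shape, noise level and step size below are all illustrative, not CORE values):

```python
import math
import random

random.seed(42)

# Toy fiducial "spectrum": D_l = A_fid * l * exp(-l/50); purely illustrative.
A_fid = 1.0
ells = range(2, 100)
shape = {l: l * math.exp(-l / 50) for l in ells}
noise = {l: 0.05 * shape[l] + 1e-3 for l in ells}   # toy 1-sigma errors

# Mock data set equal to the fiducial model: no random realisation
# is needed for a sensitivity forecast.
mock = {l: A_fid * shape[l] for l in ells}

def chi2(A):
    """Gaussian chi^2 of the one-parameter model against the mock data."""
    return sum((A * shape[l] - mock[l]) ** 2 / noise[l] ** 2 for l in ells)

# Basic Metropolis-Hastings sampling of the amplitude A.
A = 0.9                      # deliberately offset starting point
c2 = chi2(A)
samples = []
for _ in range(8000):
    A_new = A + random.gauss(0.0, 0.01)
    c2_new = chi2(A_new)
    # Accept downhill moves always, uphill moves with probability exp(-dchi2/2).
    if c2_new < c2 or random.random() < math.exp(0.5 * (c2 - c2_new)):
        A, c2 = A_new, c2_new
    samples.append(A)

# Discard burn-in; the posterior should centre on the fiducial amplitude.
posterior_mean = sum(samples[2000:]) / len(samples[2000:])
print(f"posterior mean of A: {posterior_mean:.3f} (fiducial {A_fid})")
```

Because the mock data equal the fiducial, the recovered posterior is centred on the input value and its width directly measures the forecast sensitivity.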
Unless otherwise specified, we choose a fiducial minimal ΛCDM model compatible with the recent Planck 2015 results [35], i.e. with baryon density Ω_b h² = 0.02218, cold dark matter density Ω_c h² = 0.1205, spectral index n_s = 0.9619, and optical depth τ = 0.0596. This model also assumes a flat universe with a cosmological constant, three neutrinos with effective number N_eff = 3.046 (with masses and hierarchy that change according to the case under study), and standard recombination.
We use publicly available Boltzmann codes to calculate the corresponding theoretical angular power spectra C_ℓ^TT, C_ℓ^TE, C_ℓ^EE for temperature, cross temperature-polarization and polarization. Depending on the case, we use either CAMB [65] or CLASS [66,67], which are known to agree to a high degree of precision [68-70].
In the mock likelihoods, the variance of the "observed" multipoles a_ℓm is given by the sum of the fiducial C_ℓ's and of an instrumental noise spectrum given by

N_ℓ = w^{-1} exp[ℓ(ℓ+1)θ²/(8 ln 2)],

where θ is the FWHM of the beam, assuming a Gaussian profile, and w^{-1} is the experimental power noise related to the detector sensitivity σ by w^{-1} = (θσ)². As discussed in the introduction, we adopt as our main dataset the one presented in the recent CORE proposal, a complete survey of polarised sky emission in 19 frequency bands, with sensitivity and angular resolution requirements summarized in Table 1.
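This noise spectrum is straightforward to evaluate; a short sketch for a single hypothetical channel (the beam and sensitivity numbers below are placeholders, not the Table 1 values):

```python
import math

ARCMIN = math.pi / (180.0 * 60.0)   # arcmin -> radians

def noise_spectrum(ell, theta_fwhm_arcmin, sigma_uK):
    """N_l = w^{-1} exp[l(l+1) theta^2 / (8 ln 2)] for a Gaussian beam,
    with the white-noise power w^{-1} = (theta * sigma)^2 as in the text."""
    theta = theta_fwhm_arcmin * ARCMIN          # beam FWHM in radians
    w_inv = (theta * sigma_uK) ** 2             # white-noise power level
    return w_inv * math.exp(ell * (ell + 1) * theta ** 2 / (8.0 * math.log(2)))

# Hypothetical channel: 5 arcmin beam, 2.5 uK per-beam sensitivity.
n10 = noise_spectrum(10, 5.0, 2.5)
n3000 = noise_spectrum(3000, 5.0, 2.5)
print(n10, n3000)   # beam smoothing inflates the effective noise at high l
```

The exponential factor encodes the beam: on scales much larger than the beam the noise is white (≈ w^{-1}), while beyond the beam scale it blows up, which is what limits each channel's usable multipole range.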
Obviously, data from the low-frequency (60-115 GHz) and high-frequency (255-600 GHz) channels will be mainly used for monitoring foreground contamination (and will deliver rich related science). In our forecasts we therefore use only the six channels in the frequency range 130-220 GHz. As stated in the introduction, we refer to this experimental configuration as CORE-M5.
In what follows we also compare the baseline CORE-M5 configuration with four other possible versions: LiteCORE-80, LiteCORE-120, LiteCORE-150 and COrE+. Experimental specifications for these configurations are given in Table 2. We assume that beam uncertainties are small and that uncertainties due to foreground removal are smaller than statistical errors. In Figure 1, for each configuration, we show the variance C_ℓ + N_ℓ compared to the fiducial model C_ℓ for the temperature (left) and polarisation (middle) auto-correlation spectra. The data are cosmic-variance-limited up to the multipole at which this variance departs from the fiducial model.
Together with the primary anisotropy signal, we also take into account information from CMB weak lensing, considering the power spectrum of the CMB lensing potential C_ℓ^PP. In what follows we use the quadratic estimator method of Hu & Okamoto [71], which provides an algorithm for estimating the corresponding noise spectrum N_ℓ^PP from the observed CMB primary anisotropy and noise power spectra. As in [72], we use here the noise spectrum N_ℓ^PP associated with the EB estimator of lensing, which is the most sensitive one for all CORE configurations (out of all pairs of maps). We occasionally repeated the analysis with the actual minimum variance estimator, and found very similar results. Figure 1 shows that the lensing reconstruction noise is different on all scales for the various configurations.
CORE-M5 is clearly also sensitive to the BB lensing polarization signal, but here we take the conservative approach of not including it in the forecasts. This leaves open the possibility of using this channel for further checks of foreground contamination and systematics. Note that in this work we consider fiducial models with negligible primordial gravitational waves from inflation. Otherwise, the BB channel would contain primary signal on large angular scales and could not be neglected. The sensitivity of CORE-M5 to primordial gravitational waves is studied separately and with a different methodology in a companion paper [60]. We generate fiducial and noise spectra with noise properties as reported in Table 2. Once a mock dataset is produced, we compare it to a generic theoretical model through a Gaussian likelihood L of the form

−2 ln L = Σ_ℓ (2ℓ+1) f_sky [ Tr(C̄_ℓ Ĉ_ℓ^{-1}) + ln(|Ĉ_ℓ|/|C̄_ℓ|) − n ],

where C̄_ℓ and Ĉ_ℓ are the fiducial and theoretical data covariance matrices (spectra plus noise) respectively, |C̄_ℓ| and |Ĉ_ℓ| denote their determinants, n is the dimension of the data vector, and f_sky is the sky fraction sampled by the experiment after foreground removal. Note that for temperature and polarization, C̄_ℓ and Ĉ_ℓ could be defined to include the lensed or unlensed fiducial and theoretical spectra, and in both cases the above likelihood is slightly incorrect. If we use the unlensed spectra, we optimistically assume that we will be able to perfectly de-lens the T and E maps, based on the measurement of the lensing map with quadratic estimators. If we use the lensed spectra, we take the risk of double-counting the same information in two observables which are not statistically independent: the lensing spectrum, and the lensing corrections to the TT, EE and TE spectra. To deal with this issue, one could adopt a more advanced formalism including non-Gaussian corrections, as in [74,75].
However, we performed dedicated forecasts to compare the two approximate Gaussian likelihoods, and even with the best sensitivity settings of COrE+ we found nearly indistinguishable results (at least for the ΛCDM+M_ν model). The reconstructed parameter errors change by negligible amounts between the two cases. The biggest impact is on the error on the sound horizon angular scale, σ(θ_s), which is 5% smaller when using unlensed spectra, because perfect delensing would allow us to better identify the primary peak scales. When using the lensed spectra, we do not observe any statistically significant reduction of the error bars, and we conclude that over-counting the lensing information is not important for an experiment with the sensitivity of COrE+. Hence, in the rest of this work we always use the version of the Gaussian likelihood that includes lensed TT, EE and TE spectra. We will usually refer to our full CMB likelihoods with the acronym "TEP", standing for "Temperature, E-polarisation and lensing Potential data". Depending on the case, we derive constraints from simulated data using a modified version of the publicly available Markov Chain Monte Carlo package CosmoMC [76], or with the MontePython package [77]. With both codes, we normally sample parameters with the Metropolis-Hastings algorithm, with a convergence diagnostic based on the Gelman-Rubin statistic. In exceptional cases, we switch the MontePython sampling method to MultiNest [78].
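The Gaussian mock likelihood described above can be sketched numerically; a minimal example assuming the standard trace form of the mock-data chi-square, evaluated on toy 2×2 (T, E) covariance matrices at a few multipoles (all numbers illustrative, not CORE spectra):

```python
import numpy as np

def minus2lnL(fid, th, f_sky=0.7):
    """Gaussian mock likelihood: -2 ln L = sum_l (2l+1) f_sky *
    [ Tr(Cbar_l Chat_l^{-1}) + ln(|Chat_l| / |Cbar_l|) - n ],
    where fid[l] and th[l] are the fiducial and theoretical data
    covariance matrices (spectra plus noise) at multipole l."""
    chi2 = 0.0
    for l, Cbar in fid.items():
        Chat = th[l]
        n = Cbar.shape[0]
        chi2 += (2 * l + 1) * f_sky * (
            np.trace(Cbar @ np.linalg.inv(Chat))
            + np.log(np.linalg.det(Chat) / np.linalg.det(Cbar))
            - n)
    return chi2

# Toy (TT, EE, TE) covariances at a few multipoles; numbers illustrative.
fid = {l: np.array([[2.0 / l, 0.4 / l],
                    [0.4 / l, 1.0 / l]]) for l in (10, 100, 1000)}
th_same = {l: C.copy() for l, C in fid.items()}
th_off = {l: 1.1 * C for l, C in fid.items()}

print(minus2lnL(fid, th_same))  # vanishes when theory matches the fiducial
print(minus2lnL(fid, th_off))   # positive otherwise
```

By construction the chi-square is zero when the theoretical spectra equal the fiducial ones, so for a forecast the parameter errors come entirely from the curvature of this function around the fiducial point.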
In what follows we consider temperature and polarization power spectrum data up to ℓ_max = 3000, due to possible unresolved foreground contamination at smaller angular scales and larger multipoles. We run CAMB+CosmoMC and CLASS+MontePython with enhanced accuracy settings, including non-linear corrections to the lensing spectrum computed with the latest version of HaloFit [79]. We performed several consistency checks proving that the two pipelines produce identical results. We also include a few external mock datasets in combination with CORE. For the BAO scale reconstruction, we included a mock likelihood for a high-precision spectroscopic survey like DESI (Dark Energy Spectroscopic Instrument [80]). For simplicity, our DESI mock data consist of measurements of the ratio of the angular diameter distance to the sound horizon scale, D_A/s, at 18 redshifts ranging from 0.15 to 1.85, with uncorrelated errors given by the second column of Table V in [81]. For the matter power spectrum reconstruction, we simulate data corresponding to the tomographic weak lensing survey of Euclid. We used the public euclid_lensing mock likelihood of MontePython, with sensitivity parameters identical to the default settings of version 2.2.2 (matched to the current recommendations of the Euclid science working group). Integrals in wavenumber space are conservatively limited to the range k ≤ 0.5 h/Mpc, to avoid propagating systematic errors from deeply non-linear scales. For simplicity we do not include extra observables from Euclid (galaxy power spectrum, cluster counts, BAO scale, ...), which would further decrease error bars. Hence we expect our CORE + Euclid forecasts to be very conservative.
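With uncorrelated errors, the BAO part of such a combination reduces to a simple Gaussian chi-square over the D_A/s measurements; a sketch with made-up redshift bins, distance relation and error bars (not the actual DESI numbers of [81]):

```python
def bao_chi2(model_da_over_s, z_bins, mock_da_over_s, sigma):
    """Uncorrelated Gaussian chi^2 for mock D_A/s measurements."""
    return sum(((model_da_over_s(z) - d) / s) ** 2
               for z, d, s in zip(z_bins, mock_da_over_s, sigma))

# Hypothetical setup: 18 redshift bins from 0.15 to 1.85, as in the text,
# but with a toy distance relation and illustrative 2% errors.
z_bins = [0.15 + 0.1 * i for i in range(18)]
fiducial = lambda z: 10.0 * z / (1.0 + 0.3 * z)     # toy D_A/s relation
mock = [fiducial(z) for z in z_bins]                # mock data = fiducial
errors = [0.02 * d for d in mock]

chi2_fid = bao_chi2(fiducial, z_bins, mock, errors)
chi2_off = bao_chi2(lambda z: 1.01 * fiducial(z), z_bins, mock, errors)
print(chi2_fid, chi2_off)   # zero at the fiducial, positive away from it
```

A shift of 1% in every bin against 2% error bars contributes (0.01/0.02)² = 0.25 per bin, i.e. a total chi-square of 4.5 over the 18 bins, which illustrates how even sub-percent distance measurements accumulate constraining power across redshift.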

Future constraints from CORE
Adopting the method presented in the previous section, here we forecast the achievable constraints on cosmological parameters from CORE in four configurations: LiteCORE-80, LiteCORE-120, CORE-M5 and COrE+. We work in the framework of the ΛCDM model, which assumes a flat universe with a cosmological constant and is based on 6 parameters: the baryon Ω_b h² and cold dark matter Ω_c h² densities, the amplitude A_s and spectral index n_s of primordial inflationary perturbations, the optical depth to reionization τ, and the angular size of the sound horizon at recombination θ_s. Assuming ΛCDM, constraints can subsequently be obtained on "derived" parameters (i.e. parameters that are not varied during the MCMC process) such as the Hubble constant H_0 and the r.m.s. amplitude of matter fluctuations in spheres of radius 8 h⁻¹ Mpc, σ_8. The ΛCDM model has been shown to be in good agreement with current measurements of CMB anisotropies (see e.g. [12]), and it is therefore natural to first consider the possible future improvement provided by a CMB satellite experiment such as CORE on the accuracy of its parameters. Our results are reported in Table 3, where we show the constraints at 68% c.l. on the cosmological parameters from CORE-M5 and compare the results with three other possible experimental configurations: LiteCORE-80, LiteCORE-120 and COrE+. Besides the standard 6 parameters, we also show the constraints obtained on derived parameters such as the Hubble constant H_0 and the amplitude of density fluctuations σ_8.

Improvement with respect to the Planck 2015 release
In Table 3 we also show the improvement in accuracy with respect to the most recent constraints coming from the TT, TE and EE angular spectra data from the Planck satellite [35], simply defined as i = σ_Planck/σ_CORE. As we can see, even the cheapest configuration, LiteCORE-80, could improve current constraints with respect to Planck by a factor that ranges between ∼ 3, for the scalar spectral index n_s, and ∼ 6, for the density fluctuation amplitude σ_8. The most ambitious configuration, COrE+, could lead to even more significant improvements: up to a factor ∼ 8 in σ_8. These numbers clearly indicate that there is still a significant amount of information that can be extracted from the CMB angular spectra, even after the very precise Planck measurements. It is also important to note that the most significant improvements are on two key observables, σ_8 and the Hubble constant H_0, which can be measured in several other independent ways. A precise measurement of these parameters, therefore, offers the opportunity for a powerful test of the standard cosmological model. It should also be noticed that the recent determination of the Hubble constant from observations of luminosity distances by Riess et al. [51] is in conflict at more than 3 standard deviations with the value obtained by Planck (see also [83,84]). A significantly higher value of the Hubble constant has also recently been reported by the H0LiCOW collaboration [85], from a joint analysis of three multiply-imaged quasar systems with measured gravitational time delays. Furthermore, values of σ_8 inferred from cosmic shear galaxy surveys such as CFHTLenS [86] and KiDS [87] are in tension with Planck at more than two standard deviations. While systematics can clearly play a role, new physics has been invoked to explain these tensions (see e.g. [52,88-93]), and future, improved CMB determinations of H_0 and σ_8 are crucial in testing this possibility.

Comparison between the different CORE configurations
It is interesting to compare the results between the different experimental configurations, as reported in Table 3 and shown visually in Figure 3, where we plot the 2D posteriors from LiteCORE-80, CORE-M5 and COrE+. The main conclusions from this comparison are:

• When we move from LiteCORE-80 to COrE+ we notice an improvement of a factor ∼ 1.6 in the determination of the baryon density Ω_b h², and an improvement of a factor ∼ 1.4 in the determination of the Hubble constant H_0 and the amplitude of matter fluctuations σ_8. COrE+ is clearly the best experimental configuration in terms of constraints on these cosmological parameters. However, the CORE-M5 setup provides very similar bounds on these parameters to COrE+, with a degradation in accuracy at the level of ∼ 10-12%.
• Moderate improvements are also present for the CDM density (a factor of ∼ 1.3) and the spectral index (∼ 1.14). The constraints from CORE-M5 and COrE+ are almost identical on these parameters.

• Moving from COrE+ to CORE-M5, the maximum degradation of the constraints is about 12% (for the baryon density).
From these results, and considering also the contour plots in Figure 2 and Figure 3, which are almost identical between CORE-M5 and COrE+, we can conclude that CORE-M5, despite having a mirror of smaller size, will produce essentially the same constraints on these parameters as COrE+, with, at worst, a degradation in accuracy of just ∼ 12%.

Constraints from CORE-M5 and future BAO datasets
We have also considered the constraints achievable by a combination of the CORE-M5 data with information on Baryon Acoustic Oscillations (BAO) derived from a future galaxy survey such as DESI. We found that the inclusion of this dataset will have minimal effect on the CORE-M5 constraints on ΛCDM parameters. This can clearly be seen in Figure 4, where we plot the 2D posteriors in the H_0 vs σ_8 (left panel) and Ω_b h² vs Ω_c h² (right panel) planes. The CORE-M5 and the CORE-M5+DESI contours are indeed almost identical.
It is also interesting to investigate whether the Planck dataset, when combined with future BAO datasets, could reach a precision on the ΛCDM parameters comparable with the one obtained by CORE-M5. To answer to this question we have simulated the Planck dataset with a noise consistent with the one reported in the 2015 release and combined it with our simulated DESI dataset. The 2D posteriors are reported in Figure 4: as we can see, while the inclusion of the DESI dataset with Planck will certainly help in constraining some of ΛCDM parameters, such as H 0 and the CDM density, the final accuracy will not be competitive with the one reachable by CORE-M5. In particular, there will be no significant improvement in the determination of σ 8 and the baryon density.

Future constraints from CORE
Measuring the spatial curvature of the Universe is one of the most important goals of modern cosmology, since flatness is a key prediction of inflation. A precise measurement of the spatial curvature could, therefore, tightly constrain some classes of inflationary models (see e.g. [94][95][96]). For example, inflationary models with positive curvature have been proposed in [94], while models with negative spatial curvature have been proposed in [97][98][99][100][101][102]. Interestingly, the most recent constraint from the Planck 2015 angular power spectra marginally prefers a universe with positive spatial curvature, with curvature density parameter Ω_k = −0.040^{+0.024}_{−0.016} at 68% CL [12], suggesting a closed universe at about two standard deviations. Moreover, including curvature in the analysis strongly weakens the Planck constraints on the Hubble constant, due to the well-known geometric degeneracy (see e.g. [103][104][105]). When Ω_k is varied, the Planck 2015 dataset gives H_0 = 55^{+4.3}_{−5.0} km/s/Mpc at 68% CL, i.e. a constraint weaker by nearly one order of magnitude with respect to the flat case (H_0 = 67.59 ± 0.73 km/s/Mpc at 68% CL [12]).
As shown in [12], compatibility with a flat universe is restored when the Planck data are combined with the Planck CMB lensing dataset, yielding Ω_k = −0.0037^{+0.0084}_{−0.0069} at 68% CL. However, even with CMB lensing included, the constraint on the Hubble constant remains rather weak, H_0 ≈ 66 km/s/Mpc.

In Table 4 we report the results of our forecasts using CMB data only, for four experimental configurations: LiteCORE-80, LiteCORE-120, CORE-M5 and COrE+. All configurations constrain curvature with similar accuracy, in every case about a factor 8 better than the current constraint from the Planck angular power spectra (about a factor 4 when compared with Planck+CMB lensing). Future CMB data can, therefore, improve the Planck 2015 constraint on curvature by nearly one order of magnitude. The current Planck best-fit value Ω_k = −0.033 (see e.g. [12]) could then be tested (and falsified) at the level of ∼16 standard deviations. Constraints on the Hubble constant also improve significantly: a future CORE mission can determine H_0 with a 1σ accuracy better than ∼1 km/s/Mpc independently of the assumption of a flat universe. The 2D posteriors in the Ω_k vs H_0 plane are shown in Figure 5 (left panel).

Table 5. 68% CL future constraints on cosmological parameters in the ΛCDM + Ω_k model for four CORE experimental configurations combined with simulated data of the DESI BAO survey. In the second column, for comparison, we also report the constraints from a simulated Planck+DESI dataset. A flat universe is assumed in the simulated data.

Future constraints from CORE+DESI
Stronger constraints on curvature can be obtained by combining the Planck 2015 data with a compilation of BAO measurements. In this case, the constraint is Ω_k = 0.0002 ± 0.0021 at 68% CL, and the Hubble constant is also well constrained, with H_0 = 67.58 ± 0.70 km/s/Mpc. The precision of these constraints is very close to that expected from CORE CMB data alone, reported in Table 4. It is, therefore, interesting to investigate whether a future CORE mission can improve the constraints on Ω_k with respect to the current Planck+BAO constraints.
In Table 5 we present the constraints on Ω_k obtained by including future BAO simulated data with the experimental specifications of the DESI survey. As we can see, including DESI data significantly shrinks the model space, leading to constraints that are a factor ∼2.5 stronger than those from CORE alone and ∼2.8 times more stringent than the current Planck+BAO constraints. While, as we saw in the previous section, there is little advantage in combining CORE with future BAO surveys for constraining the ΛCDM parameters, a significant improvement is expected on extensions such as Ω_k.
We can also see that, once the DESI dataset is included, there is little difference in the constraints on Ω_k between the CORE configurations. The constraints in the H_0 vs Ω_k plane from COrE+ and DESI are reported in Figure 5.

Extra relativistic relics
The minimal cosmological scenario predicts that, at least after the time of nucleosynthesis, the density of relativistic particles is given by the contribution of CMB photons plus that of the active neutrino species, until the latter become non-relativistic due to their small mass. This assumption is summarized by the standard value of the effective neutrino number N_eff = 3.046 [106] (see [107] and [108] for pioneering work and [109] for a review of the subject). A more recent calculation based on the latest data on neutrino physics finds N_eff = 3.045 [110], but at the precision level of CORE the difference is irrelevant, and we keep 3.046 as our baseline assumption. However, there are many simple theoretical motivations for relaxing this assumption. We know that the standard model of particle physics is incomplete (e.g. because it does not explain dark matter), and many of its extensions would lead to the existence of extra light or massless particles; depending on their interactions and decoupling time, these could also contribute to N_eff. Depending on the context, such extra particles are usually called extra relativistic relics, dark radiation or, in more specific cases, axion-like particles. In the particular case of particles that were in thermal equilibrium at some point, the enhancement of N_eff can be predicted as a function of the decoupling temperature [111]. Even in the absence of a significant density of such relics, ordinary neutrinos could have an unexpected density due to non-standard interactions [49], non-thermal production after decoupling [157], or low-temperature reheating [50], leading to a value of N_eff larger or smaller than 3.046. There are additional motivations to consider N_eff as a free parameter (a background of gravitational waves produced by a phase transition, modified gravity, extra dimensions, etc. — see [112] for a review). Over the last years the extended ΛCDM + N_eff model has received a lot of attention within the cosmology community.
Assuming N_eff > 3.046 has the potential to solve tensions in observational data: for instance, internal tensions in pre-Planck CMB data, which have now disappeared (N_eff = 2.99 ± 0.20 at 68% CL for Planck 2015 TT,TE,EE+lowP [12]); or tensions between CMB data and direct measurements of H_0 [142] (however, solving this problem by increasing N_eff requires a higher value of σ_8, which creates further tensions with other datasets [12]). In any case, the community is particularly eager to measure N_eff with better sensitivity in the future, in order to: (i) test the existence of extra relics and probe extensions of the standard model of particle physics; (ii) open a window on precision neutrino physics (since the contribution of neutrinos to N_eff depends on the details of neutrino decoupling); and (iii) check whether the tensions in cosmological data are related to the relativistic density or not.
Since CMB data accurately determine the redshift of equality z_eq, the impact of N_eff on CMB observables is usually discussed at fixed z_eq [30,113,114]. The time of equality can be kept fixed by simultaneously increasing N_eff and the dark matter density ω_cdm (or, depending on the choice of parameter basis, N_eff and H_0). The impact on the CMB is then minimal, which explains the well-known (N_eff, ω_cdm) or (N_eff, H_0) degeneracy: the latter is clearly visible with Planck data in Figure 6 (left plot). However, this transformation does not preserve the angular scale of the photon diffusion damping scale at last scattering: hence the best probe of N_eff comes from accurate measurements of the exponential tail of the temperature and polarisation spectra at high ℓ. The accuracy with which CMB experiments can measure N_eff is therefore directly related to their sensitivity and angular resolution, as confirmed by the following forecasts. Increasing N_eff has other effects on the CMB, coming from gravitational interactions between photons and neutrinos before decoupling: a smoothing of the acoustic peaks (very small, however, and below the per-cent level for variations of order ∆N_eff ∼ 0.1), and a shift of the peaks towards larger angles caused by the "neutrino drag" effect [30,113,114]. This means that in order to keep the CMB peak scale fixed, one should decrease the angular size of the sound horizon θ_s while increasing N_eff: this implies an anticorrelation between θ_s and N_eff that can be observed in Figure 6 (right plot). Therefore, by accurately measuring N_eff, we could obtain a more robust and model-independent measurement of the sound horizon scale, which would in turn be very useful for constraining the expansion history with BAO data.
Since the parameter N_eff is closely related to neutrino properties, and since we know that neutrinos have a small mass, we forecast the sensitivity of the different experimental set-ups to N_eff while simultaneously varying the summed neutrino mass M_ν. This leads to more robust predictions than if we had fixed the mass (although a posteriori we find no significant correlation between N_eff and M_ν). We investigate the CORE sensitivity to N_eff within two distinct models:

• The model "ΛCDM + M_ν + ∆N_eff^massless" has 3 massive degenerate and thermalised neutrino species, plus extra massless relics contributing as ∆N_eff^massless > 0. It is motivated by scenarios with standard active neutrinos and extra massless relics (or very light relics with m ≲ 10 meV).

• The model "ΛCDM + M_ν + N_eff^massive" has only 3 massive degenerate neutrino species, with fixed temperature but rescaled density. During radiation domination they contribute to the effective neutrino number as N_eff^massive, which can be greater or smaller than 3.046. This model provides a rough first-order approximation to scenarios in which the neutrino density is either enhanced (e.g. by the decay of other particles) or suppressed (e.g. in the case of low-temperature reheating).

Our forecasts consist in fitting these models to mock data, with a choice of fiducial parameters slightly different from those of the previous section,^8 including in particular neutrino masses summing up to M_ν = 60 meV.
The results of our MCMC forecasts are shown in Tables 6 and 7 and in Figure 6. Since the determination of N_eff depends mainly on observations of the exponential tail of the CMB spectra, our results for σ(N_eff) vary a lot with the sensitivity and resolution assumed for CORE, and are only marginally affected by the inclusion of extra datasets like BAO and cosmic shear surveys. The value ℓ_max at which the signal-to-noise ratio blows up in the temperature or polarisation spectrum varies substantially between the different experimental settings, as can be seen in Figure 1. Thus there is a dramatic improvement in σ(N_eff) between Planck and LiteCORE-80 (a factor 3), and still a substantial one between LiteCORE-80 and COrE+ (a factor 1.7). However, stepping back to the design of CORE-M5, one maintains a very good sensitivity, σ(N_eff) = 0.041, only 10% worse than what could be achieved with the better angular resolution of COrE+; LiteCORE-120 would instead be 25% worse than COrE+. Hence CORE-M5 appears to be a good compromise for the purpose of measuring N_eff. By achieving σ(N_eff) = 0.041 with CORE-M5 alone, or σ(N_eff) = 0.039 in combination with future BAO data from DESI and/or cosmic shear data from Euclid, we could set very strong bounds on extra relics, neutrino properties, the temperature of reheating, etc., especially compared with Planck + DESI BAO, which would only yield σ(N_eff) = 0.15.

Table 6. 68% CL constraints on cosmological parameters in the ΛCDM + M_ν + ∆N_eff^massless model (accounting for standard massive neutrinos plus extra massless relics, with ∆N_eff^massless > 0) from the different CORE experimental specifications, with or without external data sets (DESI BAO, Euclid cosmic shear). For Planck alone, we quote the results from the 2015 data release, while for combinations of Planck with future surveys, we fit mock data with a fake Planck likelihood mimicking the sensitivity of the real experiment (although a bit more constraining).

To be more specific, let us consider the case of early decoupled thermal relics, as in Ref. [111]. Assuming that the last-decoupled relics leave thermal equilibrium at a temperature T_F, and that the subsequent number of relativistic degrees of freedom is entirely accounted for by standard model particles, we notice that there are many well-motivated scenarios predicting a value of [...]

Figure 6. Parameter degeneracy between N_eff and H_0 or θ_s, assuming the extended model "DEG+N_eff", with three experimental settings for CORE or with a fake Planck likelihood mimicking the sensitivity of the real experiment (always using all CMB information from TT,TE,EE + lensing extraction). The correlations observed in the Planck case are explained in the text. The degeneracy with H_0 is almost entirely resolved by CORE, while that with θ_s is limited to a much smaller range.
BAOs would measure H_0 with 1.2% uncertainty, and ω_cdm with 2% uncertainty. Figure 6 (left plot) shows that CORE-M5 would almost completely resolve the (N_eff, H_0) degeneracy, such that CORE + DESI BAO would pinpoint both H_0 and ω_cdm with 0.5% uncertainty. This would have repercussions on several other parameters, and would allow us to fully exploit the synergy between different types of cosmological data. Moreover, the determination of N_eff based on the observation of the CMB damping tails would reduce the uncertainty on the sound horizon angular scale from σ(θ_s) = 0.00046 for Planck to σ(θ_s) = 0.00011 for CORE: hence the calibration of the sound horizon scale in future BAO data would be much more accurate, and the scientific impact of these observations (for instance, on dark energy models) would be enhanced.
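For early decoupled thermal relics, the contribution to N_eff can be written down in closed form from entropy conservation. The following sketch uses the standard textbook relation (in the spirit of Ref. [111]), with standard model values g_*s = 10.75 just before neutrino decoupling and g_*s = 106.75 above the electroweak scale:

```python
def delta_neff_thermal(g, gstar_s_dec, fermion=False):
    """Delta N_eff of a relic with g internal degrees of freedom that
    decoupled while relativistic, when the entropy degrees of freedom
    were g_*s(T_F).  Standard entropy-conservation result:
    Delta N_eff = (g/2) * (8/7 for bosons, 1 for fermions)
                  * (10.75 / g_*s(T_F))^(4/3)."""
    stat = 1.0 if fermion else 8.0 / 7.0
    return 0.5 * g * stat * (10.75 / gstar_s_dec) ** (4.0 / 3.0)

# A single Goldstone boson (g = 1) decoupling above the electroweak scale
# (g_*s = 106.75) gives the often-quoted floor Delta N_eff ~ 0.027:
dn_goldstone = delta_neff_thermal(1, 106.75)

# A Weyl fermion (g = 2) decoupling just before neutrinos (g_*s = 10.75)
# acts like one full extra neutrino species:
dn_late = delta_neff_thermal(2, 10.75, fermion=True)
```

With σ(N_eff) ≃ 0.04, CORE-M5 would probe any such relic decoupling after the QCD transition at high significance, and would approach even the asymptotic floor of ∼0.027 for very early decoupling.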

Constraints on the primordial Helium abundance
In the framework of standard Big Bang Nucleosynthesis (BBN), the abundances of light elements can be calculated as functions of the baryon-to-photon ratio η_b ≡ n_b/n_γ, of the effective number of relativistic species N_eff, and of the chemical potential of electron neutrinos (assumed to be zero in the following). This is in particular the case for the primordial abundance of ⁴He which, by changing the density of free electrons between helium and hydrogen recombination, has a direct impact on CMB observables, in particular on the damping tail of the CMB anisotropies (see e.g. [159][160][161][162][163]). In the other sections of this paper, the ⁴He abundance, parameterized by Y_P^BBN ≡ 4 n_He/n_b, is calculated consistently as a function of the physical baryon density Ω_b h² (which can be translated into η_b by fixing the photon temperature and neglecting the uncertainty associated with the helium fraction itself) and of N_eff, using approximate analytic formulas based on the PArthENoPE code [164,165]. However, since the CMB is directly sensitive to Y_P^BBN, it is possible to drop the assumption of standard BBN and obtain model-independent constraints on the abundance of ⁴He. This is the goal of this section, where we show the constraints that can be obtained on Y_P^BBN with different CORE configurations, in the framework of a minimal extension of the standard ΛCDM model, as well as in the case where N_eff is also allowed to vary. The fiducial model is the one described in Sec. 2, which assumes standard BBN (and vanishing neutrino chemical potential); the fiducial values ω_b = 0.022256 and N_eff = 3.046 thus imply Y_P^BBN = 0.24669.
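The dependence of Y_P^BBN on ω_b and N_eff can be illustrated with the simple analytic fit of Steigman (2007), which is coarser than the PArthENoPE tables used in this paper (it differs from the fiducial value above at the per-cent level) but captures the parameter sensitivities:

```python
def yp_bbn(omega_b, n_eff):
    """Primordial helium mass fraction from the analytic fit of
    Steigman (2007): Y_P = 0.2485 + 0.0016*(eta10 - 6 + 100*(S - 1)),
    with eta10 = 273.9 * omega_b and S = sqrt(1 + 7*(N_eff - 3)/43).
    Illustrative only: the paper uses PArthENoPE-based formulas instead,
    so absolute values differ at the per-cent level."""
    eta10 = 273.9 * omega_b
    s = (1.0 + 7.0 * (n_eff - 3.0) / 43.0) ** 0.5
    return 0.2485 + 0.0016 * (eta10 - 6.0 + 100.0 * (s - 1.0))

yp_fid = yp_bbn(0.022256, 3.046)          # ~0.249 with this fit
# Sensitivity: one extra neutrino species raises Y_P by roughly 0.013,
# which is why sigma(Y_P) ~ 2.5e-3 translates into useful N_eff leverage.
dyp_per_neff = yp_bbn(0.022256, 4.046) - yp_fid
```

The weak logarithmic dependence on ω_b and the ∼0.013 shift per unit of ∆N_eff are what drive the Y_P^BBN–N_eff degeneracy discussed below.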

Sensitivity to the helium abundance in a minimal extension of ΛCDM
In this section we present constraints assuming standard ΛCDM but without assuming standard BBN: in addition to the six ΛCDM parameters, we also let the primordial helium abundance Y_P^BBN vary. The constraints on cosmological parameters for the different CORE experimental configurations are reported in Table 8. The primordial abundance of ⁴He can be constrained with an uncertainty σ(Y_P^BBN) = 2.5 × 10⁻³ by the COrE+ configuration. This degrades slightly, by a factor ∼1.2, for the LiteCORE-120 and CORE-M5 configurations (which yield very similar results), and more significantly, by ∼60%, for LiteCORE-80. These numbers should be compared with the present bound from Planck TT+lowP+BAO of Y_P^BBN = 0.255^{+0.036}_{−0.038} (95% CL) [12]. In all cases we find a dramatic improvement over the sensitivity to Y_P^BBN of this combination of Planck and BAO data, gaining a factor of 4.6, 6.6 or 7.4 on this parameter for LiteCORE-80, CORE-M5 or COrE+, respectively. Quite remarkably, the uncertainty on Y_P^BBN for these CORE experimental configurations is at least two times smaller than the present observational error of astrophysical determinations of the same quantity: Ref. [166] reports Y_P^BBN = 0.2465 ± 0.0097 (68% CL) from a compilation of helium data. In Figure 7 (left panel) we show the 2D constraints in the Y_P^BBN vs ω_b plane.

Sensitivity to the helium abundance in ΛCDM+ N eff
It is well known that there is a strong parameter degeneracy between Y_P^BBN and N_eff (see e.g. [12] and references therein). For this reason, in this section we present constraints on primordial ⁴He varying the six ΛCDM parameters and also letting both Y_P^BBN and N_eff vary. The constraints on cosmological parameters for the different CORE experimental configurations are reported in Table 9. As expected, allowing N_eff to vary opens up a degeneracy. The constraints on Y_P^BBN get worse by roughly a factor of 2 for COrE+, CORE-M5 and LiteCORE-120, and by slightly less for LiteCORE-80, with respect to those obtained by fixing N_eff = 3.046. The improvement on Y_P^BBN in moving from LiteCORE-80 to COrE+ is now a factor ∼1.4, while it is about ∼1.1 in moving from LiteCORE-120 or CORE-M5 to COrE+. These constraints are nevertheless still significantly stronger than those presently available from Planck [12] or from astrophysical observations [166]. We note that the degeneracy also affects estimates of the effective number of relativistic species, enlarging the uncertainty on this parameter by nearly a factor of two when Y_P^BBN is left free to vary, as can be seen by comparing the numbers in Tables 7 and 9. This degeneracy is clearly visible in Fig. 7.

Given the CORE-M5 constraints on parameters such as the baryon density Ω_b h², the helium abundance Y_P and the effective number of relativistic species N_eff, it is possible to constrain the neutron lifetime τ_n under the assumption of BBN [167]. CMB data can indeed offer a completely independent determination of τ_n, useful also for checking the validity of the cosmological scenario. In Table 10 we report the constraints on τ_n assuming BBN for CORE-M5 and CORE-M5+DESI, both for N_eff = 3.046 and for N_eff free. When N_eff = 3.046, CORE-M5 will constrain τ_n with an uncertainty of about ∼1.5%; adding DESI will not significantly improve this bound.
When N_eff is left free to vary, the CORE-M5 constraint relaxes to about ∼3% uncertainty, with a small improvement when the DESI dataset is included. Current laboratory data constrain the neutron lifetime with a precision of ∼1–2 s, but with a ∼4.5σ tension between different experiments, whose results differ by ∼9 s (see the discussion in [167]). Future data from CORE could therefore help to clarify this issue.
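The ∼1.5% figure can be understood with a rough error propagation. The logarithmic sensitivity of the helium yield to the neutron lifetime, ∂ln Y_P/∂ln τ_n ≈ 0.73, is a commonly quoted BBN number assumed here for illustration (it is not taken from this paper):

```python
# Rough propagation from sigma(Y_P) to sigma(tau_n), assuming the commonly
# quoted BBN sensitivity dlnY_P/dln(tau_n) ~ 0.73 (an assumed coefficient,
# not from the paper).
DLNY_DLNTAU = 0.73

def tau_n_relative_error(sigma_yp, yp=0.2467):
    """1-sigma relative uncertainty on the neutron lifetime implied by a
    helium-abundance measurement of precision sigma_yp."""
    return (sigma_yp / yp) / DLNY_DLNTAU

# CORE-M5-like precision sigma(Y_P) ~ 3e-3 gives a ~1.5-2% determination
# of tau_n, in the ballpark of the Table 10 forecast:
rel_err = tau_n_relative_error(3.0e-3)
```

With τ_n ≈ 880 s, a ∼1.5% determination corresponds to ∼13 s, so CORE alone cannot arbitrate the ∼9 s laboratory discrepancy, but it provides a fully independent cross-check of the BBN framework.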

Neutrino physics
Neutrino oscillation data show that neutrinos must be massive, but are insensitive to the absolute neutrino mass scale. For a normal hierarchy of masses (m_1, m_2 ≪ m_3), the mass summed over all eigenstates is at least approximately 60 meV, while for an inverted hierarchy (m_3 ≪ m_1, m_2) the minimal summed mass is approximately 100 meV [32][33][34]. The individual neutrino masses in these hierarchical limits are below the detection limit of current and future laboratory β-decay experiments but, remarkably, they can be probed by cosmology [30,31,[36][37][38][39][115][116][117]. The detection of the neutrino mass scale is even considered one of the safest and most rewarding targets of future cosmological surveys, since we know that these masses are non-zero, that they have a significant impact on structure formation, and that their measurement will provide an essential clue for particle physicists trying to decipher the neutrino-sector puzzle (origin of masses, leptogenesis and baryogenesis, etc.). Even the unlikely case of a non-detection would be interesting, since it would force us to revise fundamental assumptions in particle physics and/or cosmology, see e.g. [40].
For individual neutrino masses below 600 meV, the non-relativistic transition of neutrinos takes place after photon decoupling. After that time the neutrino density scales like matter instead of radiation, with an impact on the late expansion history of the universe. This is important for calculating the angular diameter distance to recombination, which determines the position of all CMB spectrum patterns in multipole space. At the time of the non-relativistic transition, metric fluctuations experience a non-trivial evolution which can potentially affect the observed CMB spectrum in the range 50 < ℓ < 200 through the early ISW effect [30,118,119]. However, for individual neutrino masses below 100 meV, the non-relativistic transition happens at z < 190, too late to significantly affect the early ISW contribution. Finally, massive neutrinos slow down gravitational clustering on scales below the horizon size at the non-relativistic transition, leaving a clear signature in the matter power spectrum [30,115,120]. The magnitude of this effect is controlled mainly by the summed neutrino mass M_ν. Roughly speaking, the suppression occurs for wavenumbers k ≥ 0.01 h/Mpc (so even relatively large wavelengths are affected) and saturates for k ≥ 1 h/Mpc. Above this wavenumber and at redshift zero, the suppression factor is given in first approximation by (M_ν/10 meV)%, i.e. at least 6% even for the minimal normal hierarchy [30,115,121]. CMB lensing is expected to be a particularly clean probe of this effect [29,[122][123][124].
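The numbers above follow from two standard conversions, sketched here under the usual assumptions z_nr ≈ 1890 (m/eV) for the non-relativistic transition and Ω_ν h² = M_ν/(93.14 eV) for the neutrino density (the ω_m value is an illustrative fiducial):

```python
# Back-of-the-envelope numbers behind the statements above.

def z_nonrel(m_ev):
    """Redshift at which a neutrino of mass m becomes non-relativistic,
    using the standard approximation z_nr ~ 1890*(m/eV)."""
    return 1890.0 * m_ev - 1.0

def f_nu(m_nu_tot_ev, omega_m=0.142):
    """Fraction of the matter density carried by neutrinos,
    with omega_nu = M_nu / (93.14 eV)."""
    return (m_nu_tot_ev / 93.14) / omega_m

# m = 100 meV: transition at z ~ 188, i.e. after the epoch relevant
# for the early ISW effect, consistent with the z < 190 statement:
z100 = z_nonrel(0.1)

# Minimal normal hierarchy, M_nu = 60 meV:
fnu60 = f_nu(0.06)   # ~0.0045
# The classic linear estimate of the small-scale suppression is ~ -8*f_nu,
# i.e. ~ -3.6% here; the (M_nu/10 meV)% rule quoted in the text is a
# somewhat larger, saturated figure.
```

These scalings make clear why even the minimal M_ν = 60 meV leaves a per-cent-level imprint that CMB lensing and galaxy surveys can target.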

Neutrino mass splitting
Cosmology is mainly sensitive to the summed neutrino mass M_ν, but the mass splitting does play a small role, since the free-streaming length of each neutrino mass eigenstate is determined by the individual masses [30,31,38,39,117,125]. Hence, before carrying out forecasts for future high-precision experiments, it is worth checking the impact of different assumptions about the mass splitting (at fixed total mass) on the results of a parameter extraction. If this impact is small, we can perform generic forecasts sticking to a single mass-splitting scheme; otherwise, several different cases must be considered separately.
We know from particle physics that there are two realistic neutrino mass schemes, NH and IH, both tending to a nearly degenerate situation in the limit of large M_ν; but that limit is already in tension with current bounds (M_ν < 210 meV from Planck 2015 TT+lowP+BAO [12], M_ν < 140 meV when including the latest Planck polarisation data [126], M_ν < 130 meV with recent BAO+galaxy survey data [127], and M_ν < 120 meV with BOSS Lyman-α data [128], all at 95% CL). On top of NH and IH, the cosmological literature often discusses three unrealistic models (introduced to speed up Boltzmann codes by integrating only one set of massive-neutrino equations): the degenerate case with masses (M_ν/3, M_ν/3, M_ν/3), which we call DEG; the case (M_ν/2, M_ν/2, 0), which we call 2M; and the case (M_ν, 0, 0), which we call 1M. These three unrealistic cases are potentially interesting as fitting models in a forecast, because their total mass can be varied down to zero: thus, on top of estimating the value of M_ν, one can assess the significance of a neutrino mass detection by comparing the probability of M_ν = 0 to that of the mean or best-fit value. Any of the DEG, 1M or 2M models can serve this purpose; however, we can already discard 1M and 2M, since a detailed inspection of the small differences between the matter power spectra of these three models at fixed M_ν shows that the spectrum of the DEG model is much closer to those of the two realistic models (NH, IH) than the spectra of 1M or 2M (see e.g. Figure 16 in [31]). Even current data start to be slightly sensitive to the difference between 1M and (NH, IH) [129]. Hence we only need to address the question: can we fit future data with the DEG model, even if the true underlying model is probably either NH or IH, or does this lead to an incorrect parameter reconstruction?
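For orientation, the ≈60 meV (NH) and ≈100 meV (IH) mass floors quoted earlier follow directly from the oscillation splittings. A small sketch using Δm²_atm = 2.45 × 10⁻³ eV² and Δm²_sol = 7.50 × 10⁻⁵ eV², the values adopted for the mock data in this section:

```python
import math

# Minimal summed masses for each hierarchy from the oscillation splittings
# used in this section (in eV^2).
DM2_ATM = 2.45e-3
DM2_SOL = 7.50e-5

def summed_mass_nh(m1):
    """Normal hierarchy (m1 < m2 < m3): total mass in eV given the
    lightest mass m1."""
    m2 = math.sqrt(m1**2 + DM2_SOL)
    m3 = math.sqrt(m1**2 + DM2_ATM)
    return m1 + m2 + m3

def summed_mass_ih(m3):
    """Inverted hierarchy (m3 < m1 ~ m2): total mass in eV given the
    lightest mass m3."""
    m2 = math.sqrt(m3**2 + DM2_ATM)
    m1 = math.sqrt(m2**2 - DM2_SOL)
    return m1 + m2 + m3

min_nh = summed_mass_nh(0.0)   # ~0.058 eV, i.e. ~60 meV
min_ih = summed_mass_ih(0.0)   # ~0.098 eV, i.e. ~100 meV
```

The same functions, inverted numerically, give the exact NH splitting used to generate the M_ν = 60 meV mock data below; the DEG model simply replaces the three solutions by M_ν/3 each.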

Table 11. List of the fiducial and fitted models used to check for possible parameter-reconstruction bias when wrong assumptions are made about the neutrino mass splitting (columns: run number, fiducial model, fitted model, and the corresponding posterior curve in Figure 8).
We first consider a fiducial model with total mass M_ν = 60 meV, which is necessarily realised by NH. We generate mock data using the precise NH mass splitting for this value, with ∆m²_atm = 2.45 × 10⁻³ eV² and ∆m²_sol = 7.50 × 10⁻⁵ eV². We then compare the results of forecasts assuming either DEG or NH as the fitting model (still with fixed squared mass differences). In both cases, the free parameters are the usual six ΛCDM parameters (with fiducial values given in footnote 8) plus M_ν. These two forecasts correspond to the first two lines of Table 11. The results for the CORE-M5 satellite,^9 alone or in combination with DESI BAO and Euclid cosmic shear data, are shown in the top three panels of Figure 8. This is the most pessimistic case for measuring the neutrino mass, since it corresponds to the minimal total mass allowed by oscillation data. When looking at the results, one should keep in mind that we fit the fiducial spectrum directly, so in the absence of reconstruction bias the posterior would peak at the fiducial value; with real scattered data the best fit would be shifted randomly, typically by one sigma. From the results of the DEG fit (green curves in Figure 8 and numbers in Table 12), we see that CORE-M5 alone would not detect M_ν = 60 meV with high significance, but it would typically achieve a 3σ detection in combination with DESI BAO, or a 4σ detection when Euclid cosmic shear data are also added. There is a small offset between the mean value of M_ν found in the DEG fit and the fiducial value, corresponding to 0.2σ, 0.2σ and 0.5σ in the CORE, CORE+DESI and CORE+DESI+Euclid cases, respectively. This can be attributed to reconstruction bias from assuming the wrong fitting model.
However, in this situation, the conclusion of fitting real data with DEG would be that the preferred scenario is NH, since M_ν = 100 meV would typically be disfavoured at the 2σ level by CORE+DESI+Euclid, and one would then perform a second fit assuming NH in order to eliminate this reconstruction bias. More detailed discussions of the power of future data to discriminate between NH and IH can be found e.g. in [38,39].
Next, we considered a fiducial total mass M_ν = 100 meV, which can be realised within either the NH or the IH model. We are not interested here in directly discriminating between these two models, since the sensitivity of CORE+DESI+Euclid is clearly too low for such an ambitious purpose. Instead, we only want to check whether using the DEG model in the fits introduces a significant parameter bias. For that purpose, we performed six forecasts for each data set, corresponding to the two possible fiducial models (NH or IH) fitted by each of the three models DEG, IH and NH. The lower panels of Figure 8 show that the fiducial mass is again correctly recovered by the DEG fits, up to a bias ranging from 0.1σ to 0.3σ: this bias is smaller than with a fiducial mass of 60 meV because the masses are now larger, so the relative differences between NH, IH and DEG are reduced. The error bars are always the same up to differences of less than 0.1σ.

Figure 9. Results for the minimal model with massive neutrinos (discussed in section 7.2 and Table 12).
We have checked that, regardless of the real mass splitting realised in nature, and with the experimental data sets discussed in this analysis, we can correctly reconstruct the mass simply by fitting the DEG model to the data. For the purpose of our forecasts, the most important points to check are that the error is stable under different assumptions, and that the reconstruction bias induced by fitting DEG to NH or to IH is under control: both are found to be the case. Hence the next forecasts can be done using either NH or IH as the fiducial model while sticking to DEG as the fitted model. We can even do something simpler and use DEG as both the fiducial and the fitted model, since we know that if the fiducial model were NH or IH we would not incur a large bias. This is exactly what we will do in the next sections.^10 However, in future analyses of real data, we ought to be more careful and compare the results of fits using either NH or IH as the fitted model, to assess the impact of these assumptions on the posterior probability of M_ν.

Neutrino mass sensitivity in a minimal 7-parameter model
Choosing the same fiducial model as in footnote 8, with a summed mass M_ν = 60 meV, we fit the 7-parameter ΛCDM+M_ν model for different CORE settings, alone or in combination with mock DESI BAO and Euclid cosmic shear data.
Since we are looking at very small individual masses (mainly in the range m_ν < 100 meV), we expect the sensitivity of the CMB to M_ν to be dominated by CMB lensing effects. The different CORE settings considered here have different sensitivities to the CMB lensing potential. However, we observe only marginal differences between the forecasted mass sensitivities shown in Table 12, with a symmetrized error ranging from 48 meV for LiteCORE-80 to 44 meV for CORE-M5 and COrE+. The reason is that the effect of the neutrino mass on the CMB lensing potential does not peak at the highest multipoles: rather, it consists of a nearly constant suppression over a wide range of angular scales ℓ > 100. Hence, in order to achieve a good detection of M_ν, it is sufficient to have data in the region where the signal-to-noise ratio (S/N) is largest, which for CMB lensing is roughly from ℓ = 200 to 700. Lensing extraction on smaller angular scales will always have a smaller S/N and would bring little additional information. In the range 200 < ℓ < 700, LiteCORE-80 has a slightly worse sensitivity to the CMB lensing spectrum than the other settings considered here, and hence a larger σ(M_ν); the other settings differ mainly at ℓ > 700. We conclude that the determination of M_ν cannot drive the choice between the different possible CORE settings, unlike the determination of other parameters (e.g. the tensor-to-scalar ratio or N_eff) that depend critically on the sensitivity and/or resolution of the instrument.

Figure 10. Results for the minimal model with massive neutrinos (discussed in section 7.2 and Table 12).
However, a next-generation CMB satellite is essential for obtaining such tight bounds on the summed neutrino mass, because of its potential to measure small-scale polarisation and to constrain the optical depth to reionization τ (this is true for all CORE configurations). Indeed, the suppression induced by neutrino masses in the CMB lensing potential could be nearly cancelled by an increase in the primordial spectrum amplitude A_s. Since the product e^(−2τ) A_s is fixed by the global amplitude of the CMB temperature/polarisation spectra, increasing A_s requires increasing τ. Future ground-based CMB experiments would only marginally improve on the τ determination from Planck, due to their limited sky coverage and large sampling variance at small multipoles. Hence, they would be affected by an (M_ν, τ) degeneracy for the reasons discussed above. To demonstrate the importance of this effect, we repeated the forecast for CORE-M5, cutting however all polarisation information for ℓ < 30 and replacing it by a Gaussian prior on τ with the sensitivity of Planck, σ(τ) ≈ 0.01. We did find a degeneracy between M_ν and τ, and the error bar on the summed mass degraded by a factor of 2. By contrast, we can clearly see in the left panel of Figure 9 that there is no such degeneracy, neither in the Planck-alone contours (because of Planck's weak sensitivity to the CMB lensing spectrum) nor in the CORE-alone contours (because CORE breaks this degeneracy by measuring τ with good enough precision).
We can now check how the combination of CMB data with other probes achieves better constraints with CORE than with Planck. We find that CORE+DESI BAOs is about twice as constraining as Planck+DESI. This is related again to the better CMB lensing spectrum extraction and optical depth measurement by CORE. There are actually two ways to compensate the CMB lensing spectrum suppression induced by neutrino masses: by increasing A_s and τ, or by increasing ω_cdm [130]. This leads to a strong (M_ν, ω_cdm) degeneracy when using only CMB data (Figure 10, left plot). However, future BAO data will fix ω_cdm with very good accuracy. In the Planck+BAO case, the (M_ν, τ) degeneracy would then still remain (Figure 9, middle plot). In the CORE+BAO case, with ω_cdm fixed by BAOs and τ nearly fixed by polarisation measurements, very few degeneracies remain: in Figure 9, middle plot, we just see a small positive correlation controlled by the error bar on τ. Hence CORE will powerfully exploit the synergy between CMB and BAO measurements for measuring the neutrino mass. The combination with Euclid will further reduce degeneracies and errors by independently measuring the lensing spectrum at smaller redshifts than CORE, even with very conservative assumptions on Euclid (i.e. including only cosmic shear data up to a maximum wavenumber). This claim relies on a 7-parameter forecast only, so we should still check its robustness against non-minimal assumptions on the cosmological model.

Degeneracy between neutrino mass and other parameters in extended 8-parameter models
In the previous section we found a sensitivity of about σ(M_ν) = 44 meV for CORE-M5 (and nearly the same for the other configurations), or 21 meV in combination with future BAOs, and 16 meV with future cosmic shear data. We explained why the sensitivity to M_ν has a very weak dependence on the assumed instrumental settings for CORE. To check how much these predictions depend on the assumed cosmological model, we perform several extended forecasts with 8 free parameters instead of 7.
The new parameters studied here are the primordial helium fraction, the tensor-to-scalar ratio, the constant Dark Energy equation of state parameter, the running of the primordial scalar tilt, and the effective density fraction of spatial curvature. Since our focus here is on neutrino masses, we do not investigate the sensitivity to these parameters in as much detail as in the sections devoted to them. For instance, we use here a (weak energy principle) prior w > −1, while in the Dark Energy section we will also consider phantom Dark Energy or a time-varying w. Also, as in the rest of this paper, we stick to a mock CORE likelihood including only temperature, E-polarisation and lensing data, without using B-mode information: hence we obtain much worse constraints on r than in the companion ECO paper on inflation [60], in which B modes play an essential role; but at least the present forecast allows us to conservatively demonstrate the absence of correlation between M_ν and r at the level of precision of CORE combined with DESI and Euclid.
Our extended forecast results are summarised in Table 13. When varying the helium fraction, the tensor-to-scalar ratio, or the running of the tilt, we find essentially the same sensitivity to M_ν as in the 7-parameter model. Nonetheless, the cases with free w or Ω_k make the neutrino mass detection more difficult, due to clear parameter degeneracies with M_ν when using CMB data alone.

Table 13. 68% CL constraints on the additional parameters of several extended 8-parameter models, for the different CORE experimental specifications, and with or without external data sets (DESI BAOs, Euclid cosmic shear). For Planck alone, we quote the results from the 2015 data release, obtained with a fixed mass M_ν = 60 meV, while for combinations of Planck with future surveys, we fit mock data with a mock Planck likelihood mimicking the sensitivity of the real experiment (although a bit more constraining). In the case with a free tensor-to-scalar ratio r, we did not include B-modes in the likelihood, unlike in the companion ECO paper on inflation [60]. In the case with free w we used a (weak energy principle) prior w > −1, which will be relaxed in the Dark Energy section of this paper.

Figure 13. Results for the extended model ΛCDM + M_ν + w (with a prior w > −1). The w axis scale changes between plots because of the huge difference in sensitivity between data sets. The (M_ν, w) degeneracy gets partially resolved by adding Euclid cosmic shear data.
We see in Figures 11 and 12 that the (M_ν, Ω_k) degeneracy (a particular case of the geometrical degeneracy described in [131,132]) gets broken by the inclusion of BAO data, bringing the error down to σ(M_ν) ≈ 28 meV. With additional Euclid cosmic shear data, one would reach σ(M_ν) ≈ 21 meV, still guaranteeing a 3σ detection, while Planck+DESI+Euclid could only achieve σ(M_ν) ≈ 32 meV for free Ω_k.
In the case with free w (Figures 11, 13), the degeneracy remains problematic even with CORE+BAO data, but ultimately Euclid cosmic shear data could partly differentiate between the physical effects of w and of M_ν, and lead to σ(M_ν) ≈ 19 meV under the prior w > −1, instead of 26 meV for Planck+DESI+Euclid. The error bar would degrade by also allowing for phantom dark energy; on the other hand, the inclusion of further Large Scale Structure data (e.g. the Euclid galaxy correlation function) would further help to break the degeneracy, since the effects of neutrino masses and of w have a different dependence on redshift and scale [116].

Figure 14. Results for the extended model ΛCDM + M_ν + one light and non-thermalised sterile neutrino with effective mass m_eff_s, contributing to the effective neutrino number as N_s.

Light sterile neutrinos
Right-handed or sterile neutrinos are present in several well-motivated extensions of the standard model of particle physics [44,133]. If their mass is of the order of a few keV or larger, they can play the role of warm or cold dark matter, and they are constrained mainly by X-ray and Lyman-alpha observations [133]. If their mass is of the order of a meV or smaller, they simply behave as extra relativistic relics contributing to N_eff. There is another interesting range deserving a specific study: that of light sterile neutrinos with a mass in the meV to eV range. Such particles have been extensively discussed in recent years, because oscillations between sterile and active neutrinos (or more precisely, between the mass eigenstates formed of active and sterile states) could explain a number of possible anomalies in short-baseline neutrino oscillation data (see e.g. [135]). Sterile neutrinos with large mixing angles would normally acquire a thermal distribution through oscillations with active neutrinos, and their mass would then be very constrained (essentially, as much as that of active neutrinos). However, the explanation of short-baseline anomalies requires an O(1) eV mass, in tension with cosmological data. To avoid these bounds, several ways of preventing sterile neutrino thermalisation have been discussed (see e.g. [44,45,134]). In that case, the bounds on the sterile neutrino mass become model-dependent, but a wide category of models can be parametrised to good approximation with two numbers (N_s, m_eff_s), related to the asymptotic density at early times, given by ΔN_eff = N_s, and the asymptotic density at late times, given by the effective mass m_eff_s = 94.1 ω_s eV [11,12], where ω_s is the sterile neutrino density today. This covers both the case of light early-decoupled thermal relics and that of Dodelson-Widrow (i.e. non-resonantly produced) sterile neutrinos.
In the latter case, the physical mass of the sterile neutrino is given by m_s = m_eff_s / N_s. To investigate the sensitivity of CORE to a non-thermal sterile neutrino, we stick to the same fiducial model as in the last subsections (total mass M_ν = 60 meV and N_eff = 3.046), but we now fit it with an extended model with 9 free parameters, including the summed mass of active neutrinos M_active_ν, as well as N_s and m_eff_s. We impose in our forecasts a top-hat prior m_eff_s / N_s < 5 eV, designed to eliminate models in which the extra species has a large mass and a very small number density, and behaves like extra cold dark matter.
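As a simple illustration of this parametrisation (a sketch we add here, with hypothetical helper names, not code from the paper's pipeline), the mapping between (N_s, m_eff_s), the density ω_s, and the Dodelson-Widrow physical mass, together with the top-hat prior cut, can be written as:

```python
# Sketch of the (N_s, m_eff_s) parametrisation for non-thermal sterile
# neutrinos; helper names are ours, not from the paper's pipeline.

def physical_mass_dw(m_eff_s, N_s):
    """Dodelson-Widrow physical mass m_s = m_eff_s / N_s (in eV)."""
    return m_eff_s / N_s

def omega_s(m_eff_s):
    """Sterile neutrino density from m_eff_s = 94.1 * omega_s eV."""
    return m_eff_s / 94.1

def passes_prior(m_eff_s, N_s, cut_eV=5.0):
    """Top-hat prior m_eff_s / N_s < 5 eV used in the forecasts."""
    return physical_mass_dw(m_eff_s, N_s) < cut_eV

# The same m_eff_s = 0.1 eV corresponds to very different physical
# masses depending on N_s; small N_s pushes m_s up until the model is
# excluded by the prior (it would behave like extra cold dark matter).
for N_s in (1.0, 0.1, 0.01):
    print(N_s, physical_mass_dw(0.1, N_s), passes_prior(0.1, N_s))
```

Note that at fixed m_eff_s, decreasing N_s raises the physical mass, which is the regime where the non-relativistic transition can occur before photon decoupling.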
Our results for the parameters (M_active_ν, N_s, m_eff_s) are given in Table 14, and the probability contours for (N_s, m_eff_s) are shown in Figure 14. The sensitivity to (m_eff_s, N_s) depends heavily on the CORE settings: the error on N_s varies by a factor of two between LiteCORE-80 and COrE+. As discussed in section 5, this comes mainly from the ability to measure the temperature and polarisation damping tail up to high multipoles when the instrumental sensitivity and resolution are good enough. Besides, the measurement of the CMB lensing potential constrains the density of hot dark matter today, and hence roughly M_active_ν + m_eff_s. If this were the only effect, all CORE configurations would lead essentially to M_active_ν + m_eff_s = 60 ± 44 meV at one sigma, and to the same constraints on m_eff_s. However, there is some extra sensitivity to m_eff_s coming from the fact that for small N_s, the physical mass associated with a given value of m_eff_s can be large, such that the sterile neutrinos undergo their non-relativistic transition before photon decoupling. In that case, there are additional effects on the CMB primary anisotropies that an experiment sensitive to smaller angular scales can constrain better. This explains the gain in sensitivity to m_eff_s between LiteCORE-80 and COrE+. CORE-M5 appears to be a good compromise, more constraining than LiteCORE-80 by ≈50% for both N_s and m_eff_s. In summary, with a sensitivity to m_eff_s ten times better than Planck, CORE-M5 appears to be an ideal instrument for constraining light sterile neutrinos, and the CORE data release will play a key role in the discussion of anomalies in short-baseline neutrino oscillations.
Note that with CORE data alone, we find no lower bound on the active neutrino mass M_ν in the presence of a sterile neutrino, because the physical effect of the mass M_ν = 60 meV in the fiducial model can be partially absorbed by the sterile neutrino mass. In other words, the data cannot tell whether the fiducial mass of 60 meV belongs to active neutrinos, or to a mixture of sterile and active neutrinos. By removing degeneracies, BAO data from DESI make the CMB lensing spectrum more sensitive to M_active_ν + m_eff_s, and given the upper bound on m_eff_s, one now finds a lower bound on M_active_ν. Cosmic shear data from Euclid directly probe the free-streaming effect associated with M_active_ν + m_eff_s, which results in a slightly better sensitivity to M_active_ν, but the constraints on the sterile neutrino sector remain roughly the same as with CORE alone.

Constraints on self-interacting neutrinos
In the standard cosmological scenario, neutrinos decouple at T ∼ 1 MeV, when the rate of weak interactions becomes smaller than the expansion rate. After that moment, neutrinos behave as free-streaming particles. This picture is a consequence of combining the standard model of particle physics with general relativity, and can already be tested with present cosmological data [12,136-141]; CORE will allow us to test its validity even further. Moreover, the possibility of non-standard neutrino self-interactions that make the neutrino fluid collisional also at T < 1 MeV is envisaged in some extensions of the standard model of particle physics [143-145].
Collisional neutrinos in a cosmological framework can be modelled in different ways. A popular approach is to introduce an effective viscosity and sound speed, following the parameterization introduced in Ref. [146]; this is the approach followed in Refs. [138-140,147-150]. This method has the advantage of being, to a good extent, model-independent; however, the effective parameters are taken to be time-independent, a situation that is seldom realized in physical models. Moreover, the interpretation of deviations from the free-streaming case is not immediate [136,151]. For these reasons we choose not to use the effective parameterization. Alternative approaches consist in switching the behaviour of the neutrino fluid from free streaming to highly collisional (or vice versa) at some redshift (as in Ref. [137]), or in inserting an (approximate) collision term modelling neutrino-neutrino scattering directly into the Boltzmann equation (as in Refs. [136,141]); here we stick to the latter method. In particular, we use the relaxation time approximation to rewrite the Boltzmann hierarchy (in synchronous gauge) for massless neutrinos, with a damping term acting on the multipoles ℓ ≥ 2:

Ḟ_νℓ = k/(2ℓ+1) [ℓ F_ν(ℓ−1) − (ℓ+1) F_ν(ℓ+1)] − a Γ_int F_νℓ , for ℓ ≥ 2

(with the usual metric source terms in the ℓ = 2 equation, and the standard collisionless equations for the density and velocity perturbations), where Γ_int is the scattering rate, and for the rest we follow the notation of Ma & Bertschinger [152]. The exact form of the collision term depends on the details of the underlying particle physics model; however, two broad classes of models can be considered by means of an effective parameterization of the collision term. In models in which the neutrino interaction is mediated by a scalar (as, e.g., in Majoron models), Γ_int ∼ g^4 T_ν (g being the typical value of the Yukawa couplings), so that Γ_int/H increases with time and neutrinos become collisional again at some later time after decoupling.
In models in which the interaction is mediated by a vector, Γ_int ∼ G_X^2 T_ν^5 (G_X being the "Fermi constant" of the new interaction) at low energies (below the mass of the mediator), so that neutrinos may remain collisional for a longer time after weak decoupling.
Here we consider models of the first kind, i.e. scalar-mediated, and write the interaction rate as Γ_int = g_eff^4 T_ν. We then run a forecast for the model ΛCDM + g_eff, with a flat prior on g_eff^4, assuming massless neutrinos. The fiducial model has g_eff^4 = 0 and M_ν = 0, to be consistent with the assumption of massless neutrinos. We report our results in Table 15 and show the one-dimensional posterior for g_eff^4 for various CORE configurations in the left panel of Fig. 15. We find that typical 95% upper limits on g_eff^4 are of the order of 7 × 10^−29 for all CORE configurations considered here, roughly a factor of 8 improvement with respect to current limits from Planck [141]. The marginal dependence of the sensitivity on the CORE settings is due to the fact that the effect of the interaction shows up mainly at intermediate angular scales in the temperature spectrum, and even more clearly in the E-mode polarisation spectrum [141] (as would also be the case for the phenomenological model with effective viscosity and sound speed [140]). On those scales, all CORE configurations have a very good sensitivity to E-modes, close to the cosmic variance limit. Non-standard neutrino scalar interactions can also be probed by searches for neutrinoless double β decay [153,154] or observations of the neutrino signal from supernovae [155-158]. A proper comparison between constraints from the various probes, including cosmology, is somewhat model-dependent; however, for simple models, cosmology gives the tightest limits on the couplings.
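To see why the CMB is sensitive to couplings of this size, one can estimate when an interaction with Γ_int = g_eff^4 T_ν would recouple the neutrinos (Γ_int = H). The snippet below is our own back-of-the-envelope sketch, using standard assumed inputs (g_* ≈ 3.36, T_ν = (4/11)^(1/3) T_γ, M_Pl = 1.22 × 10^22 MeV) that are not taken from the text:

```python
import math

# Rough estimate of the neutrino "recoupling" temperature for a
# scalar-mediated self-interaction, Gamma_int = g_eff^4 * T_nu.
# Recoupling occurs when Gamma_int / H = 1, with H in radiation
# domination: H = 1.66 * sqrt(g_star) * T_gamma^2 / M_Pl.
# Assumed standard inputs (not values quoted in the paper):
M_PL_MEV = 1.22e22                           # Planck mass in MeV
G_STAR = 3.36                                # g_* after e+e- annihilation
T_NU_OVER_T_GAMMA = (4.0 / 11.0) ** (1.0 / 3.0)
T_GAMMA0_EV = 2.349e-4                       # photon temperature today (eV)

def recoupling_T_gamma_MeV(g_eff4):
    """Photon temperature (MeV) at which Gamma_int = H."""
    # Solve g_eff^4 * T_nu = 1.66 * sqrt(g_star) * T_gamma^2 / M_Pl:
    return g_eff4 * T_NU_OVER_T_GAMMA * M_PL_MEV / (1.66 * math.sqrt(G_STAR))

def recoupling_redshift(g_eff4):
    """Corresponding redshift, using T_gamma(z) = T_gamma0 * (1 + z)."""
    T_eV = recoupling_T_gamma_MeV(g_eff4) * 1e6
    return T_eV / T_GAMMA0_EV - 1.0

print(recoupling_T_gamma_MeV(7e-29), recoupling_redshift(7e-29))
```

For the forecasted 95% limit g_eff^4 ≈ 7 × 10^−29, this crude estimate gives a recoupling redshift of order 10^3, i.e. close to the epoch imprinted on the intermediate-scale temperature and E-mode spectra, consistent with where the effect of the interaction shows up.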
Non-standard neutrino interactions also introduce additional parameter degeneracies. The extra pressure of the neutrino fluid induced by collisions changes the height of the peaks in the CMB spectra, in a way that can be compensated by changing other parameters accordingly, most notably a combination of θ, Ω_c h^2 and n_s [141]. In the right panel of Fig. 15 we show the two-dimensional posterior for n_s and g_eff^4, where the correlation is particularly evident. We note, however, by comparing Tables 3 and 15, that the precision on the ΛCDM parameters is only slightly degraded in the presence of interacting neutrinos.

Figure 15. (Left) One-dimensional posteriors for g_eff^4 for different CORE configurations; (right) two-dimensional 68% and 95% credible regions in the (n_s, g_eff^4) plane, for the same configurations.

Constraints on the Dark Energy equation of state
Since its discovery [274,275], one major goal of modern cosmology has been to determine the nature of the dark energy component responsible for the current accelerated expansion of the universe [276-280]. A crucial measurement in this direction is the determination of the dark energy equation of state w, defined as the ratio between the dark energy pressure and energy density: w(a) = P_de/ρ_de (see, e.g. [281]). In this section we forecast the constraints achievable by CORE on the dark energy equation of state, parametrized either by a constant w or by the Chevallier-Polarski-Linder (CPL) [282,283] form, in which w is a linear function of the scale factor:

w(a) = w_0 + w_a (1 − a),

with w_0 and w_a as free parameters, constant in redshift.
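For concreteness, the CPL form and the background dark-energy density evolution it implies can be sketched as follows (an illustrative snippet of ours; the closed-form density expression is the standard one for the CPL parametrization):

```python
import math

# Sketch of the CPL parametrization w(a) = w0 + wa * (1 - a), plus the
# standard closed-form evolution of the dark-energy density it implies.
# Helper names are ours, for illustration only.

def w_cpl(a, w0, wa):
    """CPL equation of state: w(a) = w0 + wa * (1 - a)."""
    return w0 + wa * (1.0 - a)

def rho_de_ratio(a, w0, wa):
    """rho_de(a)/rho_de(a=1) = a^(-3(1+w0+wa)) * exp(-3 wa (1-a))."""
    return a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))

# A cosmological constant (w0 = -1, wa = 0) gives w = -1 at all times
# and a constant dark-energy density:
print(w_cpl(0.5, -1.0, 0.0), rho_de_ratio(0.5, -1.0, 0.0))
# Today (a = 1) the CPL form reduces to w0:
print(w_cpl(1.0, -0.9, 0.3))
```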

Future constraints from CORE
The recent Planck data alone, considering both temperature and polarization power spectra combined with lensing data, provide only a weak constraint on the dark energy equation of state (assumed constant with redshift), with w = −1.42 +0.25 −0.47 at 68% CL [12]. This is due to the well-known geometrical degeneracy between w and H_0, since both modify the angular diameter distance at recombination (see e.g. [103-105]). However, the improvement in the measurement of CMB lensing with CORE could provide more stringent constraints on w, as we report in Table 16. As we can see, the CORE-M5 configuration alone could constrain the dark energy equation of state with ∼10% accuracy, almost identical to that provided by COrE+. Weaker constraints, at the level of ∼15% and ∼20% respectively, could be reached by LiteCORE-120 and LiteCORE-80. CORE-M5 could therefore improve current constraints on w from Planck by a factor of 2-3. This can also be seen in the left panel of Figure 16, where we plot the 2D posteriors in the H_0 vs w plane from current Planck data and from future CORE configurations. In particular, it is interesting to note that CORE can bound H_0 independently of any external dataset.

Table 16. 68% CL constraints on cosmological parameters in the ΛCDM + w model from the Planck+Lensing real dataset (see [12]) and different CORE experimental specifications.
In Table 17 we present constraints on w_0 and w_a using CORE data alone. As we can see, the achievable constraints are rather weak, due to the intrinsic geometrical degeneracy between these parameters that clearly affects CMB data. This is clearly visible in the right panel of Figure 16, where we plot the constraints in the w_0 vs w_a plane from current Planck+CMB lensing data and from future CORE configurations. However, it is important to note that while H_0 is undetermined from current Planck constraints, CORE will provide a ∼10% determination of this parameter even in this very extended parameter space.

Table 17. 68% CL constraints on cosmological parameters in the ΛCDM + w_0 + w_a model from the Planck+Lensing real dataset (see [12]) and different CORE experimental specifications.

Future constraints from CORE+DESI
It is interesting to quantify the improvement on w when future BAO datasets are included in the analysis. In Table 18 we present the constraints achievable by CORE combined with a future BAO survey such as DESI. We compare the results with those coming from a satellite with the experimental sensitivity of Planck, again combined with DESI, derived assuming a cosmological constant as the fiducial model. As we can see, while there is no significant variation in precision between the different CORE configurations, there is a relevant factor of ∼2 improvement with respect to Planck+DESI. In short, the constraints on the dark energy equation of state coming from a combination of cosmological data will significantly improve with CORE. This is also evident from the left panel of Figure 17.
In Table 19 we present constraints on w_0 and w_a using CORE in combination with the simulated DESI dataset. Again, the DESI data are able to significantly break the geometrical degeneracy that affects the CMB data; this is clearly visible in Figure 16, where we plot the constraints in the w_0 vs w_a plane. When the DESI data are included, there is no significant difference between the constraints obtained from the CMB by the different configurations. CORE+DESI would improve the precision of the constraints by ∼20-30% with respect to Planck+DESI. This is also evident from the right panel of Figure 17.
Before concluding this section, it is important to note that the CORE-SZ cluster measurements at low redshift [61] will further increase the accuracy of the constraints presented here.

Cosmological constraints from CORE-M5 in extended parameter spaces
In the previous sections we have reported the constraints achievable with CORE-M5 on the 6 parameters of the ΛCDM model and for one- or two-parameter extensions such as, for example, the helium abundance and the neutrino effective number Y_p + N_eff (Section VI), or the neutrino mass and effective number M_ν + N_eff (Section VII). In this section, along the lines of recent analyses such as [168], we further extend the parameter space by considering 3 or 4 more parameters with respect to ΛCDM. The reason for this kind of analysis is clear: we need to assess the stability of the constraints under the assumptions of ΛCDM. Moreover, if we extend the parameter space, the CORE constraints will clearly be relaxed, since degeneracies are present between the parameters. It is therefore useful to quantify how much future datasets such as DESI will help in breaking these degeneracies, and what the gain of CORE-M5 is in this case with respect to current results from Planck. We first consider an extension of ΛCDM varying at the same time three parameters already discussed in the previous sections: the number of relativistic degrees of freedom at recombination, N_eff, the primordial helium abundance, Y_p, and the total neutrino mass, M_ν. The forecasted constraints are reported in Table 20 and in Figure 19. As we can see, in this extended parameter space CORE-M5 will provide a significant improvement in the determination of the cosmological parameters with respect to Planck (about a factor of ∼4-7) and also with respect to Planck+DESI (about a factor of 2-3). CORE-M5+DESI will strongly improve the constraints with respect to Planck+DESI, by a factor of 3-5. It is important, for example, to note that a safe detection of a neutrino mass at the level of two standard deviations will be impossible for Planck+DESI, while it will still be achievable by CORE-M5+DESI. The inclusion of the DESI dataset will not substantially improve the CORE-M5 constraints on N_eff and M_ν.
CORE-M5 constraints in a ΛCDM+N_eff+Y_p+M_ν+w model

We now examine a further extension of the parameter space by also considering variations in the dark energy equation of state w (assumed constant with redshift). In this case we vary 10 parameters at the same time: the 6 parameters of the standard ΛCDM model, plus w, M_ν, N_eff, and Y_p. The forecasted constraints are reported in Table 21, while in Figure 19 we report the 2D posteriors in the N_eff vs M_ν plane (left panel) and the w vs M_ν plane (right panel). As we can see, also in this case the improvement in the parameter constraints from CORE-M5 with respect to Planck is extremely significant.

Figure of Merit
It is interesting to quantify the improvement of CORE-M5 with respect to current and future datasets by comparing the Figure of Merit (hereafter, FoM) for several cases. Given a covariance matrix C of parameter uncertainties for an experimental configuration, we can define the FoM, for ΛCDM, as

FoM = [det C(ω_b, ω_cdm, θ_s, τ, n_s, A_s)]^(−1/2),

which is roughly inversely proportional to the volume of the constrained parameter space (see for example [169]). Clearly, we can also consider the FoM for an extended parameter space, simply defined as

FoM_ext = [det C(ω_b, ω_cdm, θ_s, τ, n_s, A_s, p_1, ..., p_N)]^(−1/2),

where the p_i with i = 1, ..., N are the N additional parameters one can consider. It is therefore interesting to compare the constraining power of different experimental configurations by considering the ratios of the FoMs, given a cosmological model. In Table 22 we report these ratios, using the Planck+Lensing 2015 FoM as a baseline, for Planck+DESI, CORE-M5 and CORE-M5+DESI, for ΛCDM and several extensions. As we can see, the improvement from adding DESI to Planck will be important (> 10) only for extensions of ΛCDM. Indeed, as also discussed in the third section of this paper, adding DESI to Planck or CORE will not improve the constraints on the ΛCDM parameters significantly. However, CORE-M5 can reduce the volume of the ΛCDM parameter space by almost 3 orders of magnitude with respect to the current constraints from Planck. As we can see from the results reported in the Table, CORE-M5 will reduce the currently viable parameter space by almost four orders of magnitude in the case of single-parameter extensions, and by more than five orders of magnitude in the case of 4-parameter extensions.

Table 22. Improvement with respect to simulated Planck data of the global figure of merit in the different cosmological scenarios specified in the first column, for various data combinations involving CORE-M5 and future BAO measurements from the DESI survey.
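The definition above can be made concrete with a toy example (our own sketch; the covariance entries below are illustrative numbers, not the paper's actual matrices). The FoM ratio between two experiments directly measures the reduction of the allowed parameter-space volume:

```python
import math

# Toy figure-of-merit comparison: FoM = det(C)^(-1/2) for a parameter
# covariance matrix C. Covariance values below are illustrative only.

def det2x2(c):
    """Determinant of a 2x2 matrix given as nested lists."""
    return c[0][0] * c[1][1] - c[0][1] * c[1][0]

def fom(cov):
    """Figure of merit: inverse square root of det(cov)."""
    return 1.0 / math.sqrt(det2x2(cov))

# Experiment B has 10x smaller errors on both parameters than A
# (same correlation), i.e. covariance entries 100x smaller:
cov_A = [[1.0e-2, 2.0e-3], [2.0e-3, 4.0e-2]]
cov_B = [[0.01 * entry for entry in row] for row in cov_A]

# For an N-parameter space, shrinking every error by a factor f
# boosts the FoM by f^N; here N = 2 and f = 10, so the ratio is 100.
print(fom(cov_B) / fom(cov_A))
```

This f^N scaling is why the ratios in Table 22 grow so quickly with the dimensionality of the extended parameter space.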

Recombination physics
The cosmological recombination epoch marks an important era in the thermal history of our Universe [194,206]. It determines the transition of the fully-ionized primordial plasma (z ≳ 8000), consisting mainly of free electrons, protons, and α-particles all immersed in the bath of CMB photons, to the quasi-neutral phase 14, with hydrogen and helium atoms at z ≲ 500.
The fine details of the evolution of doubly-ionized helium (5000 ≲ z ≲ 7000) and neutral helium 15 (1700 ≲ z ≲ 3000) only have a tiny direct impact on the CMB anisotropies, because they occur too deep inside the scattering medium to affect them strongly. Anisotropies in the medium mainly become visible during the recombination of hydrogen around z ≈ 1100, when photons have the largest probability of last scattering off free electrons [195,203].
Today's measurements of the CMB anisotropies are so precise that tiny variations of the free electron fraction at the 0.1% − 0.5% level during hydrogen recombination can induce measurable differences and biases in the main cosmological parameters [198,202]. Conversely, this means that measurements of the CMB anisotropies can be used to directly constrain recombination physics and alternative recombination scenarios [e.g., 23,24,180,196]. In this section we outline some of the possible future directions.
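As background for the discussion that follows, the leading-order physics of the free electron fraction can be sketched with the Saha equation for hydrogen. This is a pedagogical snippet of ours, using standard assumed fiducial values (η_b, Y_He, T_0); the actual codes discussed in this section go far beyond Saha equilibrium, which is precisely why sub-percent accuracy requires them:

```python
import math

# Illustrative Saha-equilibrium estimate of the free electron fraction
# x_e(z) during hydrogen recombination (natural units, energies in eV).
# Real recombination codes (recfast, CosmoRec, HyRec) include multi-level
# atoms and radiative transfer, which Saha equilibrium ignores.
# Assumed standard fiducial inputs (not values from this paper):
T0_EV = 2.3486e-4        # CMB temperature today, k_B * 2.7255 K, in eV
ETA_B = 6.1e-10          # baryon-to-photon ratio
Y_HE = 0.24              # helium mass fraction
B_H = 13.6               # hydrogen binding energy in eV
M_E = 5.11e5             # electron mass in eV

def x_e_saha(z):
    """Free electron fraction from the Saha equation for hydrogen."""
    T = T0_EV * (1.0 + z)                      # photon temperature in eV
    n_gamma = 0.2436 * T ** 3                  # photon number density, eV^3
    n_H = (1.0 - Y_HE) * ETA_B * n_gamma       # hydrogen number density, eV^3
    # Saha ratio S = x_e^2 / (1 - x_e):
    S = (M_E * T / (2.0 * math.pi)) ** 1.5 * math.exp(-B_H / T) / n_H
    # Solve the quadratic x_e^2 + S*x_e - S = 0 for the root in [0, 1]:
    return 0.5 * (-S + math.sqrt(S * S + 4.0 * S))

# Ionization drops sharply across the recombination epoch:
for z in (1500, 1300, 1100, 900):
    print(z, x_e_saha(z))
```

Even this crude model shows the steep fall of x_e around z ∼ 1100-1400; the 0.1%-0.5% accuracy requirements quoted above apply to corrections far beyond this equilibrium picture.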

Remaining uncertainties among recombination codes
Already for Planck, significant improvements over the standard recombination code, recfast [201], had to be included to achieve the necessary sub-percent accuracy in the ionization fraction. This led to the development of the publicly available recombination codes CosmoRec [177] and HyRec [170], which agree at the 0.1% level around hydrogen recombination. Both codes include much more detailed computations of radiative transfer and atomic physics than recfast. However, it has been shown that the precise dynamics of hydrogen recombination can be captured with recfast when using fitting functions calibrated on the detailed computations for a given reference cosmology [198,202]. Thus, most analyses available in the literature, including the main papers of the Planck collaboration, use recfast instead of the full (albeit slightly more time-consuming) computations with HyRec or CosmoRec. In this section we check whether the accuracy of recfast 1.5 [205] is still sufficient for the analysis of COrE+ data 16. We only compare recfast 1.5 with HyRec using CLASS, which is sufficient given that HyRec agrees very well with CosmoRec.
To determine possible biases in parameters caused by remaining differences in the modelling of the recombination process, we generate mock data using recfast, and then analyse them with models computed either with recfast or HyRec, assuming COrE+ sensitivity. For most parameters we find negligible shifts of the recovered mean values in comparison with the standard deviations. The biggest shift is for the scalar spectral index n_s (see Table 23), which is found to be biased by Δn_s = 0.00044 = 0.31 σ(n_s) due to discrepancies between the recombination codes. The parameters θ_s and σ_8 also have non-negligible shifts, by 0.15σ and 0.20σ respectively. Overall, this shows that for next-generation experiments like COrE+ and Stage-IV CMB, the precision of recfast 1.5 is marginally sufficient. For a high-precision interpretation of the real data, the full recombination models from CosmoRec or HyRec should be used. Of course, for the purpose of parameter sensitivity forecasts it makes no difference to use recfast 1.5 instead, and this is what is done throughout this work.

14 After recombination, a tiny fraction ≈ 2 × 10^−4 of the hydrogen atoms remains ionized even before reionization at z ≈ 10.
15 In earlier recombination calculations, the recombination of helium extended into the recombination era of hydrogen; however, detailed recombination treatments have shown that helium recombination finishes at z ≈ 1700, significantly diminishing its direct impact on the CMB anisotropies [178,184,197,204].
16 We restrict this test to the most sensitive configuration to make the point.

Table 23. Recovered ΛCDM parameters from mock COrE+ data, using either recfast 1.5 or HyRec.

Measuring T 0 at last scattering
Our most precise determination of T_0 comes from the CMB energy spectrum measured by FIRAS, yielding T_0 = 2.7255 ± 0.0006 K [181,182,190]. However, the CMB power spectra can also help to determine it [12,179,193], since different values of T_0 have peculiar effects on both CMB perturbations and recombination physics. If we were able to make a precise and independent measurement of T_0 at the redshift of CMB decoupling, we would achieve a crucial test of the temperature-to-redshift relation T(z) ∝ (1 + z), which can indeed be modified in exotic models [172,186,187]. A change of T_0 has very strong effects on the CMB power spectra [12,176,183], although many of them are exactly degenerate with shifts in other parameters like ω_b, ω_cdm, and Ω_Λ h^2. Indeed, the CMB only probes ratios between the densities of different species, and a global rescaling of all densities is unconstrained (unless one uses external data, like direct measurements of H_0). In particular, if one artificially fixed z_rec while carrying out such a global rescaling, it would leave the angular scale of the sound horizon θ_s ≡ r_s(z_rec)/D_A(z_rec) unchanged. But a shift in z_rec, which is affected by the absolute value of T_0, does change θ_s, thus lifting the degeneracy. Therefore, it is possible to measure T_0 from CMB observations only, provided that we exquisitely measure the angular scale of the acoustic oscillations θ_s. In the temperature spectrum, this measurement is slightly degraded by the presence of extra contributions, from the Doppler or early ISW effects. The polarisation spectrum does not include such contributions, and offers an opportunity to make a clean, uncontaminated measurement of the acoustic scale at recombination.
Due to the significant errors on its polarisation spectra, Planck was not the ideal experiment to measure T 0 , and could only constrain this number in combination with external data (σ(T 0 ) = 27 mK for Planck 2015 TT+lowP+BAO [12]). Thanks to its unprecedented polarisation sensitivity, one expects CORE to do much better. To demonstrate this, we analysed mock data with a ΛCDM model adding T 0 as a free parameter, with a flat prior on T 0 in the range [2.5, 3] K. We used the nested sampling algorithm MultiNest [78], which proved to converge faster than the Metropolis-Hastings algorithm in this case.

Table 24. 68% CL constraints on the parameters of the ΛCDM + T 0 model. The first column is for Planck (high-ℓ TT + lowP 2015 data) combined with current BAO results, and the next columns are for the different CORE experimental specifications with no external data required.
Our results are shown in Table 24. We find that CORE, in any of its configurations, should be able to provide the first CMB-only measurement of T 0 at high redshift, although not at the same precision level as FIRAS at z = 0. Since the measurement is driven by the determination of the acoustic peak scale in the polarisation spectrum, all CORE settings perform well, because on intermediate angular scales they all measure the polarisation spectrum nearly up to the cosmic variance limit; the error only starts to increase when the sensitivity is downgraded to the LiteCORE-80 level, with σ(T 0 ) = 21 mK instead of σ(T 0 ) = 18 mK for CORE-M5 or COrE+.
These numbers can be compared to σ(T 0 ) = 0.6 mK for the direct determination by FIRAS. FIRAS is of course much more accurate, but we should stress that the two measurements are complementary: FIRAS probes the temperature precisely today, while a fit to the CMB is sensitive to the temperature evolution around the time of recombination. An independent measurement of T 0 using the CMB anisotropies would place tight constraints on exotic changes in the temperature-redshift relation between recombination and today, which are in fact being actively searched for using SZ clusters [188,189,199,200] and molecular line transitions [171,192]. Assuming T (z) = T 0 (1 + z) 1−β , the error σ(T 0 ) = 18 mK implies σ(β) ≈ 0.001. This is comparable to what was obtained using the Planck 2015 data release in combination with BAO measurements [12]; however, CORE would provide a CMB-only constraint, which directly complements the CORE SZ-cluster measurements at low redshift [61].
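The quoted σ(β) can be checked by simple error propagation: if the fit to the CMB pins down the temperature at recombination, a small β rescales the inferred T 0 by a factor (1 + z rec ) β . A minimal sketch of this estimate, assuming z rec ≈ 1090:

```python
import math

T0 = 2.7255        # K, FIRAS mean value
sigma_T0 = 0.018   # K, forecast CORE-M5 uncertainty quoted in the text
z_rec = 1090       # approximate redshift of recombination (assumption)

# With T(z) = T0 (1+z)^(1-beta) and T(z_rec) held fixed by the CMB fit,
# a small beta shifts ln(T0) by beta*ln(1+z_rec), so
# sigma(beta) ~ sigma(T0) / (T0 * ln(1+z_rec)).
sigma_beta = sigma_T0 / (T0 * math.log(1 + z_rec))
print(f"sigma(beta) ~ {sigma_beta:.4f}")  # of order 0.001, as quoted
```

This back-of-the-envelope propagation reproduces the order of magnitude of the σ(β) ≈ 0.001 constraint quoted above.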

Measurement of the A 2s1s transition rate
The 2s→1s two-photon decay rate in vacuum is known to be a key parameter of recombination physics [174,191,194,206]. Indeed, it is the dominant process through which a net number of excited hydrogen atoms can reach the ground state. 18 The bulk of the produced photons have too low an energy to significantly re-excite or ionize another recombined hydrogen atom. In contrast, direct recombinations to the ground state are irrelevant, because the released Lyman-continuum photons are efficiently reabsorbed. Similarly, 2p→1s decay photons are trapped in the Lyman-α resonance, unless they have time to redshift away from the line center before their next interaction, a very inefficient process. For a CMB experiment as accurate as CORE, several strategies can be adopted: the rate can be fixed to the theoretical value calculated from first principles, varied within the range allowed by experimental bounds, or treated as a free parameter determined only by fitting cosmological data. The most detailed theoretical calculation leads to A 2s1s = 8.2206 s −1 [185]. Laboratory measurements are extremely challenging and result in large uncertainties [e.g., 173], roughly 6 times worse than the current (indirect) CMB measurement performed by Planck, 7.72 ± 0.60 s −1 for Planck 2015 TT,TE,EE+lowP [12]. Hence, COrE+ could provide the most precise measurement of this transition. This also serves as a consistency check [12], since the theoretical prediction is expected to be very robust and model-independent. Thus, if the measurement were to shift significantly away from the expected value, it could hint at tensions in the data, indicating that further work would have to be done on the interpretation and understanding of foregrounds/systematics (before eventually claiming a discovery of new physics if no other explanation remains). Changes in the value of A 2s1s affect the photon and baryon decoupling time. This has two effects on the CMB spectra: a shift in the position of the acoustic peaks, and a change of amplitude in the envelope of the diffusion damping tail.

Figure 20. Results for the ΛCDM + A 2s1s model, showing some of the parameters most correlated with A 2s1s in the mock CORE data.
The first effect should be probed equally well by all CORE configurations, which measure the temperature and polarisation spectra up to the cosmic variance limit around the scale of the first acoustic peaks, while the second effect should be better probed by the configurations most sensitive to high-ℓ polarisation. This is consistent with the results of our forecasts, shown in Table 25. We find that the 2s→1s two-photon transition rate could be measured with a 2.2% error by LiteCORE-80, a 1.8% error by LiteCORE-120, a 1.6% error by CORE-M5 and a 1.5% error by COrE+, to be compared with 7.8% for Planck 2015 TT,TE,EE+lowP [12] and the 46% uncertainty from lab measurements [173].
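The fractional uncertainties quoted in this section can be cross-checked directly from the numbers given above; a quick sketch (values copied from the text, absolute errors implied by the percentages):

```python
# Fractional 1-sigma uncertainty of the Planck 2015 A_2s1s measurement
A_theory = 8.2206          # s^-1, theoretical value [185]
planck_mean, planck_sigma = 7.72, 0.60   # s^-1, Planck 2015 TT,TE,EE+lowP
sigma_frac_planck = planck_sigma / planck_mean
print(f"Planck fractional error: {100 * sigma_frac_planck:.1f}%")  # ~7.8%

# Forecast fractional errors for each configuration, converted to s^-1
# assuming they are quoted relative to the fiducial (theoretical) rate
forecasts = {"LiteCORE-80": 0.022, "LiteCORE-120": 0.018,
             "CORE-M5": 0.016, "COrE+": 0.015}
for name, frac in forecasts.items():
    print(f"{name}: sigma(A_2s1s) ~ {frac * A_theory:.2f} s^-1")
```

The first line recovers the 7.8% figure quoted for Planck; the absolute forecast errors are an assumption-based conversion, shown only for illustration.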
Although the existence of a Dark Matter (DM) component in the universe is by now well established, the nature of DM still lacks identification (see e.g. [207] for a review). In the WIMP paradigm, for instance, one can aim at detecting DM annihilation products. DM could also decay, provided its lifetime is much longer than the age of the universe, as in R-parity breaking SUSY models (see e.g. [208]). We could then detect annihilation and decay products today, or probe their impact on the whole history of the Universe. If stable, DM could still be produced via the decay of a long-lived metastable heavier particle, releasing some electromagnetic energy (the so-called "super-WIMP" scenarios [210]). More generally, given our ignorance of the dark sector, there might be several components of DM, a fraction of which could decay on a timescale shorter than the age of the Universe (Γ > H 0 ), leaving peculiar traces on cosmological observables. Well-known candidates include unstable supersymmetric particles [211,212], sterile neutrinos [214], and scenarios in which DM is made of primordial black holes, constrained either through matter accretion [215] or Hawking radiation [216]. Cosmology, and especially the CMB, is a very sensitive and powerful probe of such models. Typically, DM annihilation or decay via electromagnetic channels can alter the cosmological ionization history, either through modifications around the recombination epoch, or through an early reionization of the Universe. This has been extensively studied in the literature and shown to have a strong impact on the CMB power spectra, especially that of polarisation [23-25, 218-222, 227]. Already with WMAP and Planck, the CMB bounds on DM annihilation and decay are among the strongest in the literature, and have the major advantage of being almost free from theoretical and astrophysical uncertainties [12,24].
With very accurate CMB polarisation measurements, the CORE data could bring significant improvement on current bounds. For instance it could give the possibility of constraining scenarios of DM annihilation invoked to explain the so-called Fermi GeV galactic centre excess [223].
Moreover, the CMB has another remarkable property. It can probe scenarios in which DM can decay into non-electromagnetically interacting daughter particles (like neutrinos or some kind of "dark radiation"). The modification of gravitational potential wells due to the decay leads to very peculiar signatures. Planck data alone can constrain the decay lifetime of such DM to be longer than 150 Gyr [17]. More accurate measurements of the temperature, polarisation, and CMB lensing spectra by CORE can greatly help towards constraining (or detecting) such models.

Dark Matter annihilation
We first study the 7-parameter model ΛCDM + p ann , where p ann ≡ f (z = 600) σv /m DM (reported here in units of cm 3 /s/GeV) parametrises the effect of Dark Matter (s-wave) annihilation 19 on the ionization history [24,224,225]. The efficiency factor f (z = 600) accounts for the fraction of DM annihilation energy deposited into the medium, σv is the thermal average of the cross section times velocity, and m DM is the mass of the DM particles. We choose a fiducial value p ann = 0, and fit the corresponding mock data with a flat prior on p ann ≥ 0. The effect of DM annihilation on the CMB is discussed e.g. in [26,27,224]. The annihilation shifts the time of recombination and increases the free electron fraction after recombination. In the CMB temperature and polarisation spectra, the first effect can in principle affect the peak scale and the envelope of the diffusion damping tail at high ℓ, but only by a small amount. The clearest and most characteristic signature of DM annihilation comes from the second effect, which changes the shape of the polarisation power spectrum on intermediate and large angular scales: this would be seen equally well by all CORE configurations.
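To give a feeling for the magnitude of p ann , here is an illustrative evaluation for a hypothetical WIMP; the mass and deposition efficiency below are assumptions chosen for illustration, not values taken from this analysis:

```python
# Illustrative evaluation of p_ann = f(z=600) <sigma v> / m_DM.
# All numbers below are assumptions for a hypothetical thermal-relic WIMP.
f_eff = 0.2          # assumed energy-deposition efficiency at z = 600
sigma_v = 3e-26      # cm^3/s, canonical thermal-relic cross-section
m_dm = 100.0         # GeV, assumed WIMP mass

p_ann = f_eff * sigma_v / m_dm
print(f"p_ann = {p_ann:.1e} cm^3/s/GeV")  # 6e-29 cm^3/s/GeV
```

Heavier candidates or lower deposition efficiencies push p ann down linearly, which is why CMB bounds on p ann translate into mass-dependent limits on the annihilation cross-section.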

Dark Matter decay
Dark matter decay can also be tested using precise measurements of CMB anisotropies [17,23,217,220,235]. Here, we highlight constraints on decaying DM models that interact electromagnetically or purely gravitationally.

Purely gravitational constraints
We first focus on the constraints that CORE could place on the DM lifetime through purely gravitational effects. Although these constraints are often not as strong as those that apply when electromagnetic decay channels are open, they can be the most stringent when the DM decays into neutrinos or some form of dark radiation (DR). Recently, these models have been reinvestigated in the light of tensions between low-redshift astronomical measurements of H 0 , σ 8 and Ω m , and the values inferred from CMB power spectra analyses. Indeed, DM decay can help in reconciling the discrepant datasets [217], although it does not totally solve the issue (see e.g. [17] and references therein for a recent review).
DM decays affect the temperature power spectrum at small ℓ through the late ISW effect, the polarisation spectrum at small ℓ through changes in the τ-to-z relation around reionization, and all spectra through a different amount of CMB lensing (since a small fraction of the dark matter forming structures decays between recombination and today) [17]. We expect CORE to improve upon Planck constraints, mainly through its better determination of the CMB lensing spectrum. We therefore analyse ΛCDM + Γ dcdm models, where Γ dcdm is the decay rate of the DM particle. We also exchange Ω c h 2 for Ω dcdm+dr h 2 , the density parameter accounting for both decaying CDM and decay radiation, which would be equal to Ω c h 2 in the limit Γ dcdm = 0 (we refer to [17] for all relevant details on the parametrisation and computation of this model). The fiducial model has Γ dcdm = 0 and we assume Γ dcdm ≥ 0. We summarise our results in Table 27. For Planck 2015 TT,TE,EE + lensing data we find Γ dcdm < 21 × 10 −20 s −1 (95% CL), in good agreement with Ref. [17]. LiteCORE-80 would already improve this constraint to Γ dcdm < 11 × 10 −20 s −1 (equivalent to a lifetime τ dcdm > 280 Gyr). However, the impact of Γ dcdm on lensing appears to be slightly degenerate with that of Ω dcdm+dr h 2 and n s , leading to parameter correlations. By increasing the sensitivity and resolution of CORE, one can reconstruct the CMB lensing spectrum with greater leverage and reduce these degeneracies. We find that CORE-M5 would give a bound Γ dcdm < 9.4 × 10 −20 s −1 , nearly as strong as COrE+, which would obtain Γ dcdm < 8.9 × 10 −20 s −1 (τ dcdm > 360 Gyr).
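The lifetime bounds quoted in parentheses follow directly from τ dcdm = 1/Γ dcdm ; a small conversion sketch using the rates quoted above:

```python
GYR_IN_S = 3.156e16   # seconds per Gyr

def lifetime_gyr(gamma_per_s):
    """Convert a decay-rate upper bound (s^-1) into a lifetime lower bound (Gyr)."""
    return 1.0 / gamma_per_s / GYR_IN_S

# 95% CL upper bounds on Gamma_dcdm quoted in the text
bounds = {"Planck 2015": 21e-20, "LiteCORE-80": 11e-20,
          "CORE-M5": 9.4e-20, "COrE+": 8.9e-20}
for name, gamma in bounds.items():
    print(f"{name}: tau_dcdm > {lifetime_gyr(gamma):.0f} Gyr")
```

This recovers, up to rounding, the ~150 Gyr (Planck), ~280 Gyr (LiteCORE-80) and ~360 Gyr (COrE+) lifetimes cited in the text.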

Electromagnetic constraints
We now run forecasts for the model ΛCDM + Γ eff , where Γ eff is defined in a manner similar to the annihilation parameter as Γ eff ≡ f eff Γ DM f e.m. , in units of s −1 . Here, f eff is the typical efficiency with which the energy released by the decay of DM particles is deposited into the medium, Γ DM is the DM decay rate, and f e.m. = ∆E/m DM c 2 is the fraction of mass energy transferred to electromagnetic decay products. The effect of such a DM decay is typically to increase the free electron fraction in a way similar to reionization, but starting at much higher redshifts z ≥ 100. As a consequence, one might expect constraints on Γ eff to depend on the detailed way in which reionization itself is modelled. Recently, Ref. [236] has compared bounds on Γ eff obtained with the nearly-instantaneous, or "camb-like", reionization 20 to those obtained with the more recent, redshift-asymmetric parametrisation of [237], given by 21

Q HII (z) = 1 − (1 − Q p ) [(1 + z)/(1 + z p )] 3 for z < z p ,
Q HII (z) = Q p exp[−λ (z − z p )] for z ≥ z p .

Here, the parameters have been adjusted to match direct observations of the ionized hydrogen fraction Q HII (z) ([238] and references therein) and are given by z p = 6.1, Q p ≡ Q HII (z p ) = 0.99986 and λ = 0.73. The authors of [236] found that the bounds on Γ eff obtained using nearly-instantaneous or redshift-asymmetric reionization differ by only 20% when using Planck 2015 data, but we wish to check whether this will still be the case with the very precise data from CORE. Similarly to Ref. [236], when studying redshift-asymmetric reionization, we fix z p and Q p to their best-fit values, and let the evolution rate λ vary in order to cover a large range of possible ionization histories. The fiducial model assumes Γ eff = 0 and we take a flat prior on this parameter, imposing only Γ eff ≥ 0.
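For concreteness, the redshift-asymmetric ionization history can be sketched as follows; this assumes the commonly used two-branch form with the z p , Q p and λ values quoted in the text (the exact shape of the low-redshift branch is our assumption here):

```python
import math

# Redshift-asymmetric reionization history (assumed two-branch form),
# with the parameters quoted in the text.
z_p, Q_p, lam = 6.1, 0.99986, 0.73

def Q_HII(z):
    """Ionized hydrogen fraction as a function of redshift."""
    if z < z_p:
        # low-z branch: smooth approach to full ionization
        return 1.0 - (1.0 - Q_p) * ((1 + z) / (1 + z_p))**3
    # high-z branch: exponential decay controlled by the evolution rate lambda
    return Q_p * math.exp(-lam * (z - z_p))

for z in (3.0, 6.1, 8.0, 10.0):
    print(f"z = {z:4.1f}: Q_HII = {Q_HII(z):.4f}")
```

Letting λ vary, as done in the forecasts, stretches or compresses the high-redshift tail while keeping the pivot (z p , Q p ) fixed.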
In the nearly-instantaneous reionization scenario, Γ eff could be constrained to be smaller than 5.7 × 10 −27 s −1 at 95% CL by essentially all CORE configurations (see Table 28). This bound represents a factor-of-ten improvement with respect to the current limit from Planck 2015 TT,TE,EE + lensing data, Γ eff < 69 × 10 −27 s −1 , and comes from the much better polarisation measurements. All CORE configurations do equally well for this model, because the electromagnetic decay effects only impact large angular scales, unlike gravitational decay effects, which also modify CMB lensing. On large angular scales, all CORE settings provide cosmic-variance-limited measurements of both the temperature and polarisation.
In the redshift-asymmetric reionization scenarios, the CORE limits are O(30%) looser than with nearly-instantaneous reionization (see Table 29). It is reassuring to find the same order of magnitude, since the reionization epoch is still poorly known. In the future, this type of uncertainty can be resolved by a better mapping of the reionization history coming from 21cm surveys [239].

20 In the nearly-instantaneous reionization, the free electron fraction is given at low z by x e (z) = (f /2)[1 + tanh((y − y re )/∆y)], with f = 1 + n He /n H , y = (1 + z) 3/2 and ∆y = (3/2)(1 + z) 1/2 ∆z. The reionization is, therefore, redshift-symmetric, centred around the key parameter z re with a width given by ∆z.
21 Following the authors of Ref. [236], we replaced the argument of the exponent by −λ

Constraints on the fine structure constant
The possibility that fundamental constants vary over cosmological time has long been considered [242,250,251]. The fine structure constant characterizes the strength of the electromagnetic force. A wide variety of local experiments and astrophysical observations allows one to set constraints on the variation of α at very different redshifts, from the constraints set using atomic clocks (z ∼ 0) [249] or the Oklo natural nuclear reactor (z ∼ 0.1) [240,241] to the ones from BBN (z ∼ 10 8 ) [165]. The most stringent astrophysical bounds come from the observation of quasar spectra. Long-standing claims of a possible detection of a variation of α in these data at z ∼ 0.2 − 4, at the level of ∆α/α ∼ 10 −6 , have further increased the interest in these measurements over the past decade [252-255], although these claims are still the subject of controversy [256].

Table 30. Constraints on the basic six-parameter ΛCDM model and the fine structure constant α/α 0 using different combinations of datasets.
The CMB is a very powerful probe of the value of the fine structure constant at redshift z ∼ 1000 [243-247]. A change in the value of α would in fact change the evolution of the recombination history of the universe, thus introducing a signature in the temperature and polarization power spectra. Currently, the Planck experiment sets the strongest constraints on the fine structure constant from the CMB. The first release of the Planck data set a constraint on α/α 0 , where α 0 is the standard value, at the level of 0.4%: α/α 0 = 0.9936 ± 0.0043 [248]. We calculate that the constraints from the second release of the Planck data [12], combining temperature, polarization and lensing reconstruction, are at the level of 0.34% (Planck TT,TE,EE+lowTEB+lensing). Table 30 shows the improvement that a future satellite mission would bring to these constraints. We find that the LiteCORE-80, CORE-M5 and COrE+ configurations could improve the constraints by up to a factor of 5 with respect to Planck, to 0.10%, 0.070% and 0.063% respectively. These constraints are essentially limited by the well-known degeneracy between α and H 0 [248], as also shown in Figure 22. For this same reason, the constraint on H 0 is weakened by a factor of ∼ 2 when marginalizing over α with respect to the ΛCDM case. We find that adding the information from DESI would only marginally improve the results, to 0.077%, 0.064% and 0.055% respectively. At face value, these constraints are still three orders of magnitude weaker than the latest quasar measurements (see e.g. [255]). However, the comparison is not straightforward, since the CMB probes a very different range of redshifts compared to quasars. From an observational point of view, models where a dynamical degree of freedom yields a time variation of α can be divided into just two classes [257].
If this degree of freedom is the one responsible for dark energy, then current low-redshift constraints imply that any α variations at z ∼ 1100 must be no larger than 10 −5 , and thus not directly detectable by the CMB. However, if the physical mechanism responsible for α variations is distinct from the one responsible for dark energy (or if the variations are environment-dependent rather than simply time-dependent), then no such extrapolation can be made, and variations at the level of 10 −3 at z ∼ 1100 could easily be accommodated. Therefore the improved high-redshift constraints which CORE can provide, when combined with the low-redshift spectroscopic ones, enable a key consistency test of the underlying theoretical paradigms.

Constraints on the epoch of reionization
The epoch of reionization (EoR) of the Universe is still largely unknown. The observation of the so-called Gunn-Peterson trough [258] in quasar spectra [259-262] indicates that hydrogen was almost fully reionized by z ≃ 6, possibly by the Lyman-α photons emitted by early star-forming galaxies. Quasars are then believed to be responsible for helium reionization between z ≃ 6 and z ≃ 2 (see e.g. [263] for a recent review). The CMB is a sensitive probe of the EoR, since the CMB photons can Compton scatter off free electrons generated by reionization. This leads to a suppression of the CMB anisotropies inside the Hubble horizon at the EoR, typically above ℓ ∼ 10, and to a regeneration of power below ℓ ∼ 10 in the TE and EE spectra (the so-called reionization bump); see e.g. [264-267]. These two effects mostly depend on the column density of electrons along the line-of-sight 22 , parametrized by the optical depth to reionization τ .
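The optical depth is just a line-of-sight integral of the free electron density, τ = c σ T ∫ n e dt. A rough numerical sketch, assuming instantaneous reionization at z re , singly ionized helium, and illustrative Planck-like parameter values (all numbers below are assumptions for this estimate):

```python
import math

# Rough estimate of the reionization optical depth
#   tau = c * sigma_T * n_H0 * (1 + f_He) * integral of x_e (1+z)^2 / H(z) dz,
# assuming x_e = 1 below z_re (instantaneous reionization).
c = 2.998e8                    # m/s
sigma_T = 6.652e-29            # m^2, Thomson cross-section
H0 = 67.5 * 1e3 / 3.086e22     # s^-1 (67.5 km/s/Mpc, assumed)
Om, OL = 0.31, 0.69            # flat LCDM background (assumed)
omega_b = 0.0224               # Omega_b h^2 (assumed)
n_H0 = 0.76 * omega_b * 1.878e-26 / 1.673e-27   # m^-3, hydrogen density today
f_He = 0.08                    # He/H number ratio, helium singly ionized

def H(z):
    return H0 * math.sqrt(Om * (1 + z)**3 + OL)

def tau(z_re, n=2000):
    """Midpoint-rule integration of the optical depth out to z_re."""
    dz = z_re / n
    integral = sum((1 + (i + 0.5) * dz)**2 / H((i + 0.5) * dz) * dz
                   for i in range(n))
    return c * sigma_T * n_H0 * (1 + f_He) * integral

print(f"tau(z_re = 8) ~ {tau(8.0):.3f}")  # close to the Planck-like ~0.06
```

This makes explicit why τ is so sensitive to the redshift of reionization: the integrand grows roughly as (1 + z) 1/2 in the matter era.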
There are well known degeneracies between τ and other cosmological parameters, e.g., when using temperature data alone, with the amplitude of the primordial scalar perturbations A s 23 and the spectral index n s . Moreover, in extensions of the ΛCDM model, there exists a degeneracy between τ and the sum of neutrino masses M ν , which gets strengthened by the addition of external datasets such as BAO measurements [271,272]. Thus, an accurate measurement of τ through the reionization bump at large scales is essential for the determination of other cosmological parameters as well.
Finally, the CMB, and in particular its polarization, could potentially provide more information about the evolution of the epoch of reionization than just the constraint on τ [267].
In this section, we thus quantify: i) how much the knowledge of the reionization epoch as observed by CORE would help in constraining the other cosmological parameters; ii) how well CORE will be able to provide information about the evolution of the EoR, beyond an accurate measurement of τ .
In order to tackle the first point, we forecast constraints on cosmological parameters excising the low-ℓ polarization spectra at ℓ < 30, and using a Gaussian prior on τ with an uncertainty of σ prior (τ ) = ±0.01, consistent with the precision of the latest results from Planck [126]. This is about 4 times worse than the constraint that a CORE-like experiment could achieve using the full large-scale polarisation information, as already shown in Section 3. We find that in the ΛCDM case, excising the large scales in polarization degrades the constraints in the case of CORE-M5 (CORE-M5 + DESI) by a factor of ∼ 2.5 (∼ 2) on τ and log A s , a factor of ∼ 2 (∼ 1.6) on Ω c and θ * , and by 30% (14%) on n s , leaving Ω b h 2 unaffected. Note that the recovered constraint on τ is stronger than the prior, at the level of σ(τ ) = ±0.005. This is due to the fact that the degeneracy between τ and A s is reduced by the information provided by lensing on A s . As far as the ΛCDM+M ν case is concerned, we find that the upper limit on the sum of neutrino masses would be degraded in the case of CORE-M5 from M ν < 152 meV to M ν < 201 meV (95% CL), and that the constraint from CORE-M5 + DESI would worsen from σ(M ν ) = 21 meV to σ(M ν ) = 34 meV, while other cosmological parameters would be less affected than in the ΛCDM case, as also shown in Fig. 23. This illustrates the importance of an accurate large-scale polarisation measurement of τ for constraining neutrino masses.

22 Note that the CMB is potentially sensitive to inhomogeneous (or patchy) reionization, which could also help in refining models. However, the non-Gaussian signature of such a process at small scales was shown to be very challenging to detect with a CORE-like experiment [268-270].
23 The normalization of the ℓ > 20 part of the spectrum is mostly controlled by the product A s exp(−2τ ).

Table 31. 68% CL constraints on the ΛCDM + α + z stop model (asymmetric reionization), for the different CORE experimental specifications.
Instead of lower bounds on α, we report the more interesting upper bounds on the derived parameter ∆z reio . We also show the results for the derived parameters (z beg , z reio , z end , ∆z reio ).
The most sensitive configurations are able to extract the CMB lensing spectrum over a larger range of scales: thus they corner A s with better accuracy, and the reionization results are less affected by the τ − A s degeneracy. We find that LiteCORE-80 could set a constraint on the duration of reionization given by ∆z reio < 3.4 (95% CL), while CORE-M5 (COrE+) would improve the constraint to ∆z reio < 2.6 (2.4). This is about two times better than the constraints from Planck CMB anisotropies combined with kinetic Sunyaev-Zel'dovich measurements, and a factor of order 4 better than Planck alone [273], without using any prior on z end . Note also that CORE-M5 would be able to provide precise measurements of z beg , z end and z reio , with σ(z beg ) ≈ 0.33, σ(z end ) ≈ 0.31 and σ(z reio ) ≈ 0.21, to be compared to the recent Planck measurements, σ(z beg ) ≈ 1.9 and σ(z reio ) ≈ 1.1, with z end essentially unconstrained. We therefore conclude that a CORE-like experiment would be sensitive enough to constrain the end of the EoR from CMB data only, and would improve the determination of z reio and z beg by a factor of 4 and 6 respectively.

Theoretical framework
The current accelerated expansion of the universe could also be explained by introducing modifications to general relativity and considering an energy content made just of dark matter and baryons, with no dark energy. Several modified gravity scenarios have been proposed. One possible way to check for hints of modified gravity in the data, without relying on a particular model, is to introduce additional parameters into perturbation theory that can modify the evolution of the gravitational potentials Φ and Ψ (see, for example [53,284-300]). For example, a now common approach, presented in the publicly available code MGCAMB [301,302] and also recently applied in [53] and [300] to the Planck data, is to first modify the Poisson equation for Ψ by introducing a scale-dependent function µ(k, a):

k 2 Ψ = −4πG a 2 µ(k, a) ρ dm ∆ . (14.1)

In the above equation, ρ dm is the dark matter energy density and ∆ is the comoving density perturbation. Secondly, one can also introduce the possibility of an additional anisotropic stress by considering a second function η(k, a), such that:

η(k, a) = Φ/Ψ . (14.2)

A third function, Σ(k, a), which modifies the lensing/Weyl potential Φ + Ψ, can be introduced as:

k 2 (Φ + Ψ) = −8πG a 2 Σ(k, a) ρ dm ∆ . (14.3)

This function is not independent of µ(k, a) and η(k, a), since:

Σ(k, a) = (µ(k, a)/2) [1 + η(k, a)] . (14.4)

These functions can be used to study the effects of a possible modification of GR. If GR is valid, then µ = η = Σ = 1. Here we use the following parametrization:

µ(k, a) = 1 + E 11 Ω Λ (a) ; (14.5)
η(k, a) = 1 + E 22 Ω Λ (a) , (14.6)

where E 11 and E 22 are two parameters that are constant in redshift, and Ω Λ (a) is the energy density in the cosmological constant, which we choose as a good approximation for the background evolution.
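A minimal sketch of this parametrization, assuming the standard relation Σ = (µ/2)(1 + η) between the three functions and a flat ΛCDM background (the Ω values below are illustrative assumptions):

```python
# Sketch of the mu/eta/Sigma modified-gravity parametrization.
def Omega_L(a, OL0=0.69, Om0=0.31):
    """Dark-energy density fraction at scale factor a (flat LCDM background)."""
    return OL0 / (OL0 + Om0 * a**-3)

def mu(a, E11):
    """Modification of the Poisson equation for Psi."""
    return 1.0 + E11 * Omega_L(a)

def eta(a, E22):
    """Anisotropic-stress function, eta = Phi/Psi."""
    return 1.0 + E22 * Omega_L(a)

def Sigma(a, E11, E22):
    """Lensing/Weyl-potential function, Sigma = (mu/2)(1 + eta)."""
    return 0.5 * mu(a, E11) * (1.0 + eta(a, E22))

# GR limit: E11 = E22 = 0 gives mu = eta = Sigma = 1 at all times
assert abs(Sigma(1.0, 0.0, 0.0) - 1.0) < 1e-12
print(f"Sigma_0 for E11 = E22 = 0.3: {Sigma(1.0, 0.3, 0.3):.3f}")
```

Because Ω Λ (a) → 0 at early times in this parametrization, the modifications switch on only at late times, which is why their clearest signatures appear in the ISW tail and in CMB lensing.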
Computing the Σ function today (a 0 = 1) we have:

Σ 0 = (1/2) (1 + E 11 Ω Λ ) (2 + E 22 Ω Λ ) , (14.7)

i.e. Σ 0 = 1 if GR is valid. A detection of Σ 0 − 1 ≠ 0 could therefore indicate a departure of the evolution of density perturbations from GR. Interestingly, the recent Planck 2015 data suggest a value of Σ 0 − 1 = 0.23 ± 0.13 [53,300] at 68% CL, i.e. a preference for MG slightly above two standard deviations. Of course, given the very low statistical significance, this indication could simply be due to a statistical fluctuation or to a small residual systematic. However, it is clearly important to study what kind of constraint can be achieved by future CMB data, and at which level of confidence the current hint could be falsified.

Table 32. 68% CL constraints on cosmological parameters from four different CORE configurations. The possibility of modified gravity is allowed.

Future constraints from CORE
In Table 32 we present the constraints on the modified gravity parameters using the different experimental configurations for CORE, under the assumption of a GR fiducial (i.e. Σ 0 − 1 = 0). As we can see, the current constraints on Σ 0 can be improved by nearly a factor of three with respect to the current constraints from Planck 2015, quite independently of the choice of experimental configuration. Constraints in the H 0 vs Σ 0 − 1 and σ 8 vs Σ 0 − 1 planes are also reported in Figure 24 for three CORE configurations and for the current Planck 2015 data. The improvement of CORE with respect to Planck is clearly visible. A future CMB experiment could therefore confirm or exclude at high significance (about four standard deviations) the current hints of MG from Planck.

Cosmological Birefringence
Cosmological birefringence is the in vacuo rotation of the photon polarization direction during propagation [303]. In general, such an effect is unconstrained by the T T spectrum, while it results in a mixing between the Q and U Stokes parameters that produces non-null CMB cross-correlations between temperature and B-mode polarization, and between E- and B-mode polarization. Since these correlations are expected to vanish under parity-conserving assumptions, cosmic birefringence is a tracer of parity-violating physics.
Several theoretical models exhibit cosmological birefringence, such as couplings of the electromagnetic (EM) field to axion-like particles [304] or a quintessence field [305], quantum-gravity terms [306], or Chern-Simons type interactions [303] in the EM Lagrangian. For the sake of simplicity, we restrict ourselves to the case of a constant, isotropic α, for which the effect can be parametrized as [307-309]

C TE,obs = C TE cos(2α) ,
C EE,obs = C EE cos 2 (2α) + C BB sin 2 (2α) ,
C BB,obs = C BB cos 2 (2α) + C EE sin 2 (2α) ,
C TB,obs = C TE sin(2α) ,
C EB,obs = (1/2) (C EE − C BB ) sin(4α) ,

with C XY,obs and C XY being the observed and the unrotated power spectra for the XY fields (X, Y = T , E or B), i.e. the spectra that would arise in the absence of birefringence. We set the primordial T B and EB spectra to zero, assuming a negligible role of parity violation effects up to CMB photon decoupling (this choice excludes e.g. chiral gravity theories). Recent constraints on this model employing Planck 2015 data are reported in [310,311]. We recall that the most relevant systematic effect on constraints on isotropic birefringence is the miscalibration of the detector polarization angle. An estimate of the error budget from current experiments is ∼ 1°, already dominant over the statistical error achievable on α [311]. As a result, future experiments will require exquisite control of systematic effects to really nail down constraints on isotropic birefringence.
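The mixing described above can be sketched as a simple rotation of the spectra at a given multipole, using the standard formulas for a constant isotropic angle α (the spectrum values below are arbitrary placeholders):

```python
import math

def rotate(cl_te, cl_ee, cl_bb, alpha):
    """Apply a constant isotropic birefringence rotation alpha (radians)
    to the power spectra at one multipole; standard mixing formulas."""
    c2, s2 = math.cos(2 * alpha), math.sin(2 * alpha)
    return {
        "TE": cl_te * c2,
        "EE": cl_ee * c2**2 + cl_bb * s2**2,
        "BB": cl_bb * c2**2 + cl_ee * s2**2,
        "TB": cl_te * s2,                                # parity-odd
        "EB": 0.5 * (cl_ee - cl_bb) * math.sin(4 * alpha),  # parity-odd
    }

# alpha = 0 leaves the spectra unchanged and generates no TB/EB power
obs = rotate(cl_te=1.0, cl_ee=2.0, cl_bb=0.1, alpha=0.0)
print(obs)
```

Note that the rotation conserves the total EE + BB power and only generates TB and EB correlations when α ≠ 0, which is why those two spectra carry the parity-violation signal.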
In Tab. 33, we report the 68% CL intervals around the mean for the birefringence angle α and other cosmological parameters. We employ the full T EB combination of power spectra for the four experimental configurations analyzed in this paper. In Fig. 25, we report the two-dimensional 68% and 95% probability contours for the same parameters listed in Tab. 33 and for the same experimental configurations. The first column clearly shows no evidence of correlation between the birefringence angle and the other cosmological parameters. One-dimensional posterior probability distributions for the same parameters are reported along the diagonal of Fig. 25.

Table 33. 68% CL for the birefringence angle α and other cosmological parameters for the four experimental configurations presented in this work and for the full T EB field combination.

Conclusions
In this paper we have forecasted the constraints on several cosmological parameters that can be achieved by the CORE-M5 satellite proposal. Table 34 provides a summary of our main results. Assuming ΛCDM, the improvement with respect to Planck is extremely significant: CORE-M5 can simultaneously improve constraints on key parameters by a factor of ∼ 8 (σ 8 ), ∼ 5.5 (H 0 , Ω cdm h 2 ), 4.5 (Ω b h 2 , τ ), and 3 (n s ). Some of these parameters, such as σ 8 , H 0 , and Ω b h 2 , can be measured or derived independently by galaxy surveys or luminosity distance measurements. Future comparisons with the CORE-M5 results will therefore provide a crucial test for cosmology, the ΛCDM scenario and its extensions. The interest of such measurements by several means is exemplified by the current tensions between the Planck dataset and the local determination of the Hubble constant from [51], or measurements of weak lensing cosmic shear from surveys such as CFHTLenS and KiDS-450 [86,87]. These tensions may reveal either previously unknown systematic effects, or new physics. While these current tensions will likely be resolved by the time CORE flies, the large improvement brought by CORE on so many parameters will surely bring new opportunities for revealing tensions with whatever precision datasets are available by then. These are opportunities for fundamental breakthroughs.

Table 34. Current limits from Planck 2015 and forecasted CORE-M5 uncertainties. The first 6 rows assume a ΛCDM scenario, while the following rows give the constraints on single-parameter extensions.
In the fourth column, numbers in curly brackets {...} give the improvement in the parameter constraint when moving from Planck 2015 to CORE-M5, defined as the ratio of the uncertainties σ P lanck /σ CORE .
In this paper, we have considered several possible extensions of the basic six-parameter ΛCDM model. The forecasted constraints on these extra parameters are summarized in the second section of Table 34. As we can see, for these extensions too CORE-M5 can provide significantly more stringent constraints than the current ones, with improvement factors ranging from 2 up to more than 6, clearly opening the window to new tests or discoveries for physics beyond the standard model. In particular, we found that: • CORE-M5 alone could detect neutrino masses with an uncertainty of σ(M ν ) = 0.043 eV, enough to rule out the inverted mass hierarchy at more than 95% CL. When combined with future galaxy clustering data expected from surveys such as DESI or Euclid, CORE-M5 will provide a guaranteed discovery of a non-zero neutrino mass. Other cosmological information from CORE-M5, such as cluster number counts (see the ECO companion paper [61]), could further reduce these uncertainties.
• CORE-M5 could also provide extremely stringent constraints on the neutrino effective number N_eff, with σ(N_eff) = 0.041. This uncertainty, which can be further reduced by combining the CORE-M5 data with cluster number counts from CORE-M5 itself and/or complementary galaxy surveys, will test the presence of extra light particles at recombination and the process of neutrino decoupling from the primordial plasma at redshift z ∼ 10^9. The nature of the neutrino background can be further tested by measuring its self-interactions.
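To put σ(N_eff) = 0.041 in context, the standard instantaneous-decoupling estimate gives the contribution of a light thermal relic as a function of when it left equilibrium with the Standard Model plasma. The sketch below is a textbook entropy-dilution calculation, not a result from this paper:

```python
def delta_neff(g, fermion, g_star_s_dec):
    """Contribution to N_eff of a light relic with g internal degrees of
    freedom that decoupled when the SM entropy dof were g_star_s_dec.
    Standard entropy-dilution formula; 10.75 is the SM value at
    neutrino decoupling, so the dilution factor is 1 for a species
    decoupling alongside the neutrinos."""
    g_energy = g * 7.0 / 8.0 if fermion else float(g)
    dilution = (10.75 / g_star_s_dec) ** (4.0 / 3.0)
    return (4.0 / 7.0) * g_energy * dilution

SIGMA_NEFF = 0.041  # forecast CORE-M5 uncertainty

# A Goldstone boson decoupling above the electroweak scale
# (g_*s = 106.75) contributes Delta N_eff ~ 0.027, still below
# 1 sigma; one decoupling after muon annihilation (g_*s = 10.75)
# contributes 4/7 ~ 0.57, detectable at ~14 sigma.
print(f"{delta_neff(1, False, 106.75):.3f}")
print(f"{delta_neff(1, False, 10.75) / SIGMA_NEFF:.0f} sigma")
```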
• The primordial helium abundance Y_p can be measured by CORE-M5 with an uncertainty of σ(Y_p) = 0.0029, almost a factor of two better than current constraints from direct measurements of metal-poor extragalactic H II regions.
• CORE-M5 will also significantly improve current constraints on curvature (by almost a factor of 4) and on the dark energy equation of state (by a factor of ∼ 3). One key improvement will be the determination of the Hubble constant in these models: the possibility that an equation of state w < −1 explains the current tensions on the value of H_0 can be stringently tested by CORE-M5.
• By measuring CMB polarization at intermediate angular scales with unprecedented accuracy, CORE-M5 will scrutinize the process of recombination in the highest possible detail. This will allow CORE-M5 to place bounds on known physical processes, such as the amplitude of the two-photon recombination rate (improving current constraints by a factor of 5), and also to further improve the constraints on extra ionizing photons from dark matter annihilation and on variations of the fine-structure constant.
• Large angular scale polarization will also be measured by CORE, providing new constraints on the reionization process. It is worth noting here that the ability of CORE-M5 to measure polarization over a wide range of angular scales will provide a crucial test of the cosmological scenario: the constraints on the optical depth τ from large angular scales, for example, can only be validated by a measurement of small angular scale polarization whose results are consistent with the overall ΛCDM scenario.
It is also interesting to summarize and compare the constraints expected from different experimental configurations. We do this in Table 35, where we report the ratio of the forecast 1σ error of a given experimental configuration to the expected 1σ error of the proposed CORE-M5 setup. For generality, we also compare the constraints with those expected from the JAXA LiteBIRD proposal [312], now in its conceptual design phase (ISAS Phase-A1). LiteBIRD has a significantly different experimental design from the CORE configurations studied in this paper, with, for example, a smaller primary mirror of 60 cm. As the results in Table 35 show, any CORE configuration is expected to constrain cosmological parameters with an improvement of a factor of 2 to 5 with respect to LiteBIRD. CORE-M5, for example, will constrain the neutrino effective number with a precision about 5 times better than LiteBIRD. It is clear from the results presented in the table that CORE will be able to probe new physics that will not be accessible to LiteBIRD alone. However, the constraints on the reionization optical depth will be comparable, since the imprint of reionization lies mainly in large scale polarization, which can be measured equally well by LiteBIRD and CORE. From Table 35 we also see that CORE-M5 could produce constraints up to 50% better than those expected from the cheaper LiteCORE-80 configuration. A significantly higher precision is indeed expected on key parameters such as the baryon abundance, the Hubble constant, the neutrino effective number, and the primordial helium abundance. On the other hand, the differences between CORE-M5 and either LiteCORE-120 or COrE+ are expected to be of the order of ∼ 10%. We can therefore, on the one hand, consider the forecasts presented here for CORE-M5 as conservative: if the experimental sensitivity were for some reason degraded to that of LiteCORE-120, we would expect no significant change in the constraints presented in this paper. On the other hand, the more expensive COrE+ configuration would only slightly improve the main parameter constraints and would not provide a decisive improvement in parameter recovery and model testing.

Table 35. Improvements from CORE-M5 on cosmological parameters with respect to several proposed configurations, defined as the ratio of the forecast 1σ constraints, σ/σ_CORE-M5.
Indeed, the scientific driver for higher angular resolution is not improved accuracy on the cosmological parameters.
To conclude, we have presented in this paper a large number of forecasts on cosmological parameters for the proposed CORE-M5 mission. The expected improved constraints, presented in Table 34, clearly call for a next CMB satellite mission such as CORE. CORE-M5 can probe new physics with unprecedented precision. We have compared the constraints across different experimental configurations and found that they are stable under a degradation of the experimental configuration to LiteCORE-120, which has a significantly smaller number of detectors. Assuming the ΛCDM cosmological scenario, we also found that the CORE-M5 setup can produce constraints that are almost identical (at worst a ∼ 10% degradation) to those achievable with the larger-aperture COrE+ configuration.