14J042 SUPPLEMENTARY MATERIAL

Peak-Identification Uncertainty

To begin, we assigned a probability to each 2 cm segment of each firn core, reflecting the likelihood that the segment is associated with a mid-summer peak. We refer to the segments within a given ice core as S_i, where i = 1, 2, ..., N_s and N_s is the number of 2 cm segments. For the vast majority of segments, this probability p_i is either 0 (definitely not associated with an annual mid-summer peak) or 1 (definitely an annual mid-summer peak). The remaining segments are assigned a probability p_i strictly between 0 and 1. For each segment, we then draw a realization X_i from a Bernoulli distribution with probability p_i and treat the segments with X_i = 1 as one possible collection of mid-summer peak segments. If we believed that each peak segment was truly associated with the same date (e.g., January 1), then the annual accumulations for years 2010, 2009, ..., 1966 could be obtained by summing the segment-wise accumulations between peaks.

In cases where there are multiple potential peaks within a small depth range, a more complex probability structure must be specified before generating a sequence of possible datings. For example, in one range of segments in SEAT-10-4 (the noisiest record), we cannot simply assign independent Bernoulli probabilities p_i ∈ [0, 1] to the potential peaks near depths 1922 cm and 1964 cm: while it is possible that peaks reside near one or both of these depths, it is not reasonable that beginning-of-year peaks are absent at both. Consequently, the following joint probabilities were specified:

Peak Locations                        Probability
1922 cm and 1964 cm                   50%
1922 cm only                          25%
1964 cm only                          25%
Neither 1922 cm nor 1964 cm           0%

The simulated accumulation records can then be generated based on these probabilities.

Peak-Date Uncertainty

As discussed in the main text, we assigned a 90% probability range of ±δ days to each potential peak, representing the deviation of the summer maximum from January 1 and assuming a zero-mean normal distribution for the peak-date errors. This accounts for the peak-date uncertainty.

Probability-Generated Accumulation Time Series

To create an accumulation time series for a given ice core, the following steps are used (a code sketch of the procedure is given after the list):

1. Obtain a collection of mid-summer peak segments as outlined in Supplemental Materials: Peak-Identification Uncertainty.
2. For each peak segment date (denoted D_i), specify the standard deviation σ_i for the peak-date errors described in Supplemental Materials: Peak-Date Uncertainty.
3. For each peak segment S_i, draw a realization ε_i from a Normal(0, σ_i) distribution and let the new date equal D_i + ε_i.
4. For each peak segment, find the segment date closest to D_i + ε_i and denote this new segment S_i′. Replace the original peak segment S_i in the collection of mid-summer peak segments from Step 1 with the new segment S_i′.
5. Sum accumulation between mid-summer peaks.

The steps above are used to generate 1000 accumulation time series accounting for peak-date and peak-identification uncertainties.
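As an illustration only, the minimal Python sketch below shows one way Steps 1–5 could be implemented. All function and variable names are hypothetical; it assumes that the 2 cm segment accumulations, segment dates, peak probabilities p_i, and peak-date standard deviations σ_i are available as arrays and that the p_i can be treated as independent. Jointly constrained peaks, such as the SEAT-10-4 example above, would instead require a single draw over the listed joint outcomes.

```python
import numpy as np

def simulate_accumulation_series(seg_accum, seg_dates, peak_prob, peak_sigma_days, rng):
    """Draw one simulated annual-accumulation series for a single core.

    seg_accum       : accumulation in each 2 cm segment, ordered top to bottom
    seg_dates       : estimated date of each segment (days from an arbitrary origin)
    peak_prob       : p_i, probability that segment i is a mid-summer peak
    peak_sigma_days : sigma_i, peak-date standard deviation (days) for segment i
    """
    seg_accum = np.asarray(seg_accum, dtype=float)
    seg_dates = np.asarray(seg_dates, dtype=float)

    # Step 1: Bernoulli draws identify one possible set of mid-summer peaks.
    peaks = np.flatnonzero(rng.random(len(seg_accum)) < peak_prob)

    # Steps 2-4: perturb each peak date and snap it to the nearest segment.
    new_peaks = set()
    for i in peaks:
        eps = rng.normal(0.0, peak_sigma_days[i])                                # Step 3
        new_peaks.add(int(np.argmin(np.abs(seg_dates - (seg_dates[i] + eps)))))  # Step 4
    new_peaks = sorted(new_peaks)

    # Step 5: sum segment accumulations between consecutive mid-summer peaks.
    return np.array([seg_accum[a:b].sum()
                     for a, b in zip(new_peaks[:-1], new_peaks[1:])])

# Repeating the draw builds the ensemble of 1000 accumulation time series:
# rng = np.random.default_rng(0)
# ensemble = [simulate_accumulation_series(seg_accum, seg_dates, peak_prob,
#                                          peak_sigma_days, rng) for _ in range(1000)]
```

The uncertainty inputs (here peak_prob and peak_sigma_days) are exactly the quantities varied in the sensitivity analyses described next.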
Sensitivity Analysis

The choice of peak-date and peak-identification uncertainty estimates will affect the resulting interannual variability and trend analysis. One strength of this approach is that it is open to sensitivity analysis simply by changing the uncertainty input values. This allows the user to hypothesize different optimistic or pessimistic views of the two types of dating errors and then explore the impacts on trend inferences and interannual variability. The sensitivity analyses run for these data included the choice of distribution for the uncertainty associated with the identification of January 1 (normal vs. heavy-tailed) and the assumed size of the peak-date uncertainty under that distribution (conservative vs. realistic vs. optimistic estimates).

Regarding the first choice, we began by concluding that the error distribution is symmetric because, for these data, mislabeling "Jan 1" as d days before the true January 1 is just as likely as mislabeling it as d days after. Further, a bell-shaped distribution is logical because we expect dating errors to cluster near zero, with increasingly extreme errors becoming increasingly less likely. Our sensitivity analyses therefore compared the normal distribution with heavy-tailed alternatives such as the t distribution with 3 degrees of freedom ("t3"). Except for quantiles in the top and bottom 1% of the distribution of accumulations (which do not affect the estimated central 95% intervals calculated for the plots), the choice of normal vs. t3 makes little difference. Although the t3 distribution is more spread out, our approach adjusts for this. Under the normal model, we take the estimated measurement uncertainty for identifying January 1 (e.g., δ = 30 days with 90% confidence) and set δ = 1.645σ to estimate σ; possible dating errors are then drawn from a Normal(0, 1) distribution multiplied by σ (i.e., a draw from a Normal(0, σ) distribution). If we instead assumed a t3 distribution for the dating errors, we would set δ = 2.35σ to estimate a smaller σ, but then draw possible dating errors from a t3 distribution (more dispersed than the normal) multiplied by this smaller σ. The simulated dating errors are therefore only subtly affected by the choice of symmetric distributional model. Figure S1 shows the distribution of dating uncertainties (in years) under the normal and t3 models, assuming δ = 30 days. Distributional form can thus subtly affect conclusions about trend, but not as dramatically as the choice of a more or less conservative value of the dating uncertainty itself, for example δ = 30 days versus δ = 15 days.

Figure S1. Distribution of dating uncertainties (in years) for a normal model (black line) and for a t3 model (red line), assuming that δ = 30 days.

With respect to the size of δ, this choice can have a substantial impact on estimated accumulation trends and interannual variability, particularly when the uncertainty for an ice core is dominated by peak-date uncertainty rather than year-count uncertainty. However, shrinking δ has a relatively predictable effect: as δ decreases, the variability of the estimated trend and of the interannual variability also decreases.
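As a concrete illustration of the δ-to-σ calibration described above, the short sketch below (sample size and random seed are arbitrary) compares the normal and t3 models for δ = 30 days; both reproduce the stated 90% range and differ mainly in the tails.

```python
import numpy as np
from scipy.stats import norm, t

delta_days = 30.0                                  # 90% range of +/- delta days

# Normal model: delta = 1.645 * sigma (1.645 is the 95th percentile of N(0, 1)).
sigma_norm = delta_days / norm.ppf(0.95)           # about 18.2 days

# t3 model: delta = 2.35 * sigma (2.35 is the 95th percentile of t with 3 df).
sigma_t3 = delta_days / t.ppf(0.95, df=3)          # about 12.7 days

rng = np.random.default_rng(0)
errors_norm = sigma_norm * rng.standard_normal(100_000)
errors_t3 = sigma_t3 * rng.standard_t(df=3, size=100_000)

# Both calibrations recover the +/- 30 day, 90% probability range.
print(np.percentile(np.abs(errors_norm), 90))      # roughly 30
print(np.percentile(np.abs(errors_t3), 90))        # roughly 30
```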
Detrending

The yearwise detrending approach begins with the original series of 2 cm ice-core segments, denoted $S_{t1}, \ldots, S_{t n_t}$, where the year $t$ ranges over $1, \ldots, T$ and $n_t$ is the number of segments within year $t$. Let $a_{tj}$, $t = 1, \ldots, T$, $j = 1, \ldots, n_t$, be the accumulation associated with segment $S_{tj}$. The segments are assigned to a year using the best assessment of mid-summer peaks. We denote the mean annual accumulation for a core by $\mu$ and the accumulation total for year $t$ by $y_t$. We obtain new segment accumulations $a^{*}_{tj} = a_{tj}\,\mu / y_t$, so that when we sum the new segment accumulations within each year we obtain a new annual accumulation $y^{*}_t = \sum_{j=1}^{n_t} a^{*}_{tj}$ that is equal to exactly $\mu$. That is, before introducing any perturbation due to dating uncertainty, the series $y^{*}_1, \ldots, y^{*}_T$ is constant at $\mu$. This allows us to investigate the potential for spurious departures from flat accumulation (no variability) due to these kinds of dating uncertainties.

The linear detrending approach is similar to the yearwise detrending, but instead of completely flattening the series to a constant, we detrend the original series of annual accumulations using linear regression. Specifically, we regress $y_1, \ldots, y_T$ on time (year) $x_1, \ldots, x_T$ to obtain predicted values $\hat{y}_1, \ldots, \hat{y}_T$. We then create the new series of linearly detrended annual accumulations $\tilde{y}_t = y_t - \hat{y}_t + \mu$. Next, we create new segment accumulations $\tilde{a}_{tj} = a_{tj}\,\tilde{y}_t / y_t$, so that when we sum the segments within each year we obtain new annual accumulations $\tilde{y}_t = \sum_{j=1}^{n_t} \tilde{a}_{tj}$ that vary around the core's mean $\mu$ in a realistic fashion but have a linear slope of exactly 0.
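The following minimal sketch (hypothetical names; it assumes the 2 cm segment accumulations have already been grouped by year) shows both rescalings: the yearwise case, $a^{*}_{tj} = a_{tj}\,\mu / y_t$, and the linear case, $\tilde{a}_{tj} = a_{tj}\,\tilde{y}_t / y_t$.

```python
import numpy as np

def detrend_segments(seg_accum_by_year, mode="yearwise"):
    """Rescale segment accumulations within each year of one core.

    seg_accum_by_year : list of 1-D arrays; element t holds a_t1, ..., a_tn_t
    mode              : "yearwise" makes every annual total equal the core mean mu;
                        "linear" removes only the fitted linear trend.
    """
    seg_accum_by_year = [np.asarray(a, dtype=float) for a in seg_accum_by_year]
    y = np.array([a.sum() for a in seg_accum_by_year])     # annual totals y_t
    mu = y.mean()                                          # core mean mu

    if mode == "yearwise":
        target = np.full_like(y, mu)                       # y*_t = mu for every year
    else:
        x = np.arange(1, len(y) + 1)                       # time (year) index
        slope, intercept = np.polyfit(x, y, deg=1)         # regress y_t on x_t
        target = y - (intercept + slope * x) + mu          # y~_t = y_t - y^_t + mu

    # Rescale each segment so its year sums to the target total;
    # each segment keeps its original share of the year's accumulation.
    return [a * (target[t] / y[t]) for t, a in enumerate(seg_accum_by_year)]
```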