Method S1. The detailed procedure of the kernel regression

The pseudo-code of the kernel regression used in the present study is as follows. In the pseudo-code, a normal symbol (e.g., x), a bold symbol (e.g., X), the superscript T, and the symbol I represent a vector, a matrix, the transpose operator, and the identity matrix, respectively.

1: Input: X, T, N and σ denote the independent variables, the variables to be predicted, the number of samples, and the parameter of the radial basis function (RBF) kernel, respectively. We determined σ by leave-one-out cross-validation.

2: We calculate the Gram matrix K with the RBF kernel by the following equation:
K ← {k(x_i, x_j)}, where k(x_i, x_j) = exp(−(x_i − x_j)²/σ).

3: α, β ← random numbers
Our aim is to obtain the optimal weight coefficients w of the Gram matrix K. We introduce the hyper-parameters α, which governs the variance of the weight coefficients, and β, which governs the variance of the noise. To obtain the optimal w, α, and β, we conduct an iterative optimization, initializing α and β with random values.

4: loop

5: Given α and β, the posterior distribution of w is p(w | T) = N(w | M, S). We calculate the mean M and covariance S of this distribution by the following equations:
S⁻¹ ← αI + βKᵀK,
M ← βSKᵀT.

6: We update α and β under the given posterior distribution of w by maximizing the marginal likelihood function p(T | α, β) = ∫ p(T | w, β) p(w | α) dw. First, we update α by the following equations:
α ← γ/MᵀM, and
γ ← ∑ λ/(α + λ), where the λ are the eigenvalues of βKᵀK.

7: Next, we update β by the following equation (the right-hand side estimates the noise variance, i.e., the inverse of the precision β):
β⁻¹ ← ∑(T − KM)²/(N − γ).

8: end loop

9: We run the above loop 100 times and finally obtain the predicted values by calculating KM.
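
For concreteness, the following NumPy sketch implements the pseudo-code above. It is a minimal illustration under stated assumptions, not the authors' original code: the function names (gram_matrix, kernel_regression), the random initialization range, and the synthetic data in the usage example are hypothetical, the inputs are taken to be one-dimensional, and σ is passed in directly rather than selected by leave-one-out cross-validation as in step 1.

import numpy as np

def gram_matrix(x, sigma):
    """Step 2: K[i, j] = exp(-(x_i - x_j)^2 / sigma) for 1-D inputs x."""
    diff = x[:, None] - x[None, :]
    return np.exp(-diff ** 2 / sigma)

def kernel_regression(x, t, sigma, n_iter=100, seed=0):
    """Steps 3-9: evidence approximation for the weights w of the Gram matrix."""
    rng = np.random.default_rng(seed)
    n = len(t)
    K = gram_matrix(x, sigma)
    alpha, beta = rng.uniform(0.1, 1.0, size=2)   # step 3: random initialization

    KtK = K.T @ K
    # Eigenvalues of K^T K, computed once; the eigenvalues of beta * K^T K
    # needed in step 6 are just beta times these values.
    eigvals = np.linalg.eigvalsh(KtK)

    for _ in range(n_iter):                       # steps 4-8
        # Step 5: posterior p(w | T) = N(w | M, S)
        S = np.linalg.inv(alpha * np.eye(n) + beta * KtK)
        M = beta * S @ K.T @ t

        # Step 6: update alpha via gamma, the effective number of parameters
        lam = beta * eigvals
        gamma = np.sum(lam / (alpha + lam))
        alpha = gamma / (M @ M)

        # Step 7: the update gives 1/beta (the noise variance), so invert it
        residual = t - K @ M
        beta = (n - gamma) / np.sum(residual ** 2)

    return K @ M, alpha, beta                     # step 9: predictions K M

# Hypothetical usage on synthetic data:
x = np.linspace(0.0, 1.0, 50)
t = np.sin(2 * np.pi * x) + 0.1 * np.random.default_rng(1).standard_normal(50)
predictions, alpha, beta = kernel_regression(x, t, sigma=0.05)

Precomputing the eigenvalues of KᵀK outside the loop is a standard convenience here: since only the scalar β changes between iterations, the eigenvalues of βKᵀK required in step 6 can be recovered by scaling, avoiding a fresh eigendecomposition on every pass.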