Quantile regression, machine-learning style, with catboost

Statistics
machine learning
R
2023
Author

José Luis Cañadas Reche

Published

April 23, 2023

There are times, more often than you might think, when we want to estimate a quantile rather than the mean. If we have a dependent variable \(y\) and one or more independent variables \(X\), the usual approach is quantile regression.

If we look at one of the original papers on the technique, Computing Regression Quantiles, we see that it minimizes the following expression.

\[ \arg\min_b R_{\theta}(b) = \sum_{i = 1}^{n}\rho_\theta \left( y_i - x_ib\right) \] with \(\theta \in (0,1)\) and

\[ \begin{equation} \rho_\theta(u) = \begin{cases} \theta u & u \geq 0\\ (\theta -1) u & u < 0 \\ \end{cases} \end{equation} \]

Which simply means "penalizing" by \(\theta\) when the residual is greater than or equal to 0, that is, when we err on the low side (the prediction falls short of the true value), and by \((\theta -1)\) when we err on the high side.

For example, suppose \(y_i = 40\), \(f(x) = 50\), and we want to estimate the 0.95 quantile. Since the residual is less than 0, it is weighted by \(\theta - 1 = -0.05\):

\[\rho_\theta(40 - 50) = (0.95 -1) (40 - 50) = 0.5 \] If instead \(f(x) = 30\), i.e., we err on the low side but at the same distance from the true value, then

\[\rho(40-30) = 0.95 (40-30) = 9.5 \]

So when \(\theta > 0.5\), the objective \(\arg\min_b R_{\theta}(b)\) takes a much larger value when we "err" on the low side than on the high side, and because of how \(\rho_\theta(u)\) is defined, this yields the quantile regression at quantile \(\theta\). The paper (from 1987) explains it better, along with the algorithm to solve it when \(f(x)\) is linear.
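The worked example above is easy to verify in code. A minimal sketch of the check (pinball) loss, with my own function name:

```python
def pinball(theta, u):
    """Check loss rho_theta(u): residuals >= 0 are weighted by theta,
    residuals < 0 by (theta - 1)."""
    return theta * u if u >= 0 else (theta - 1) * u

# overshoot: y = 40, f(x) = 50 -> residual -10, lightly penalized at theta = 0.95
print(pinball(0.95, 40 - 50))  # ~0.5
# undershoot: y = 40, f(x) = 30 -> same distance, heavily penalized
print(pinball(0.95, 40 - 30))  # ~9.5
```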

Warning

But what are you on about??

We're well into the 21st century and everything is AI and Machine Learning now!!

Joking aside, the folks at Yandex have used exactly this in their catboost library to do quantile regression, simply plugging the expression above in as the loss function. The different loss functions they use for each case can be seen here.

For quantile regression they use

\[L(t, a, \alpha) = \dfrac{\sum_{i}^{N} \omega_i(\alpha - I(t_i \leq a_i))(t_i-a_i)}{\sum_{i}^{N} \omega_i}\] where \(t_i\) are the targets, \(a_i\) the predictions, \(\omega_i\) the observation weights and \(\alpha\) the quantile.
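Spelled out in numpy (my own sketch, with equal weights by default; names follow the formula above):

```python
import numpy as np

def quantile_loss(t, a, alpha, w=None):
    """Weighted quantile (pinball) loss: the factor alpha - I(t <= a)
    penalizes under-predictions by alpha and over-predictions by alpha - 1."""
    t, a = np.asarray(t, dtype=float), np.asarray(a, dtype=float)
    w = np.ones_like(t) if w is None else np.asarray(w, dtype=float)
    return np.sum(w * (alpha - (t <= a)) * (t - a)) / np.sum(w)

# the single-observation examples from earlier
print(quantile_loss([40], [50], 0.95))  # ~0.5
print(quantile_loss([40], [30], 0.95))  # ~9.5
```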

As we can see, it's the same thing described in the 1987 paper. But plugged in as a loss function, it drives the boosting algorithm the library uses.

Important

Note that the Yandex folks went a bit further

The catboost people, quite sensibly, asked: why don't we build a loss function that globally minimizes several quantiles at once? Which is something like "find me the distribution of the parameters that best fits this data, instead of a single parameter".

But these are little trees and boosting; there is no function parameter as such, so in the end what gets "learned" must be the configuration of trees that globally minimizes the requested quantiles.

Well, the "multi-quantile" loss function is a simple modification of the previous one.

\[L(t, a, \alpha_q) = \dfrac{\sum_{i}^{N} \omega_i \sum_{q=1}^{Q}(\alpha_q - I(t_i \leq a_{i,q}))(t_i-a_{i,q})}{\sum_{i}^{N} \omega_i}\]
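With the same notation, the multi-quantile loss just adds a sum over the \(Q\) quantiles, with one prediction column per quantile. A sketch (mine, not CatBoost's internals):

```python
import numpy as np

def multi_quantile_loss(t, a, alphas, w=None):
    """t: (N,) targets; a: (N, Q) predictions, one column per quantile in alphas."""
    t = np.asarray(t, dtype=float)[:, None]          # (N, 1), broadcasts against (N, Q)
    a = np.asarray(a, dtype=float)
    alphas = np.asarray(alphas, dtype=float)[None, :]
    w = np.ones(t.shape[0]) if w is None else np.asarray(w, dtype=float)
    per_obs = np.sum((alphas - (t <= a)) * (t - a), axis=1)  # sum over the Q quantiles
    return np.sum(w * per_obs) / np.sum(w)
```

With a single quantile it reduces to the previous loss.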

Example

The example isn't mine; I saw it somewhere I can't remember.

Tip

catboost can be used from both R and Python.


import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from catboost import CatBoostRegressor
sns.set()

n = 800

# random X
x_train = np.random.rand(n)
x_test = np.random.rand(n)

# a little Gaussian noise

noise_train = np.random.normal(0, 0.3, n)
noise_test = np.random.normal(0, 0.3, n)

# simulate y_train and y_test as y = a * x + b + noise (i.e., 2 * x + 3)
a, b = 2, 3

# here we go
y_train = a * x_train + b + noise_train
y_test = a * x_test + b + noise_test

Let's plot it

sns.scatterplot(x = x_train, y = y_train).set(title = "Ejemplillo")

We're going to predict 9 quantiles

quantiles = [q/10 for q in range(1, 10)]

# they go in a comma-separated string
quantile_str = str(quantiles).replace('[','').replace(']','')

print(quantile_str)
#> 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9

A little model

model = CatBoostRegressor(iterations=100,
                          loss_function=f'MultiQuantile:alpha={quantile_str}')

model.fit(x_train.reshape(-1,1), y_train)
#> 0:   learn: 0.1977907    total: 49.8ms   remaining: 4.93s
#> 1:   learn: 0.1934878    total: 54.5ms   remaining: 2.67s
#> 2:   learn: 0.1891389    total: 61.2ms   remaining: 1.98s
#> 3:   learn: 0.1849242    total: 64.9ms   remaining: 1.56s
#> 4:   learn: 0.1808818    total: 68.4ms   remaining: 1.3s
#> 5:   learn: 0.1769520    total: 71.7ms   remaining: 1.12s
#> 6:   learn: 0.1732251    total: 74.7ms   remaining: 993ms
#> 7:   learn: 0.1696774    total: 77.9ms   remaining: 896ms
#> 8:   learn: 0.1661482    total: 80.9ms   remaining: 818ms
#> 9:   learn: 0.1628541    total: 84.5ms   remaining: 760ms
#> 10:  learn: 0.1596378    total: 87.6ms   remaining: 709ms
#> 11:  learn: 0.1565479    total: 90.7ms   remaining: 665ms
#> 12:  learn: 0.1536016    total: 93.8ms   remaining: 628ms
#> 13:  learn: 0.1507692    total: 96.9ms   remaining: 595ms
#> 14:  learn: 0.1480845    total: 100ms    remaining: 567ms
#> 15:  learn: 0.1454840    total: 103ms    remaining: 541ms
#> 16:  learn: 0.1429863    total: 106ms    remaining: 518ms
#> 17:  learn: 0.1407059    total: 109ms    remaining: 497ms
#> 18:  learn: 0.1383697    total: 112ms    remaining: 477ms
#> 19:  learn: 0.1361779    total: 115ms    remaining: 460ms
#> 20:  learn: 0.1340707    total: 118ms    remaining: 444ms
#> 21:  learn: 0.1320342    total: 122ms    remaining: 432ms
#> 22:  learn: 0.1300775    total: 125ms    remaining: 418ms
#> 23:  learn: 0.1282447    total: 129ms    remaining: 408ms
#> 24:  learn: 0.1264588    total: 133ms    remaining: 399ms
#> 25:  learn: 0.1247594    total: 137ms    remaining: 390ms
#> 26:  learn: 0.1232137    total: 141ms    remaining: 381ms
#> 27:  learn: 0.1216563    total: 145ms    remaining: 373ms
#> 28:  learn: 0.1202034    total: 149ms    remaining: 365ms
#> 29:  learn: 0.1187735    total: 153ms    remaining: 357ms
#> 30:  learn: 0.1174263    total: 157ms    remaining: 350ms
#> 31:  learn: 0.1161250    total: 162ms    remaining: 344ms
#> 32:  learn: 0.1148583    total: 172ms    remaining: 350ms
#> 33:  learn: 0.1136688    total: 176ms    remaining: 342ms
#> 34:  learn: 0.1125292    total: 180ms    remaining: 335ms
#> 35:  learn: 0.1114357    total: 184ms    remaining: 327ms
#> 36:  learn: 0.1104088    total: 187ms    remaining: 318ms
#> 37:  learn: 0.1094203    total: 190ms    remaining: 309ms
#> 38:  learn: 0.1085302    total: 193ms    remaining: 301ms
#> 39:  learn: 0.1076702    total: 196ms    remaining: 294ms
#> 40:  learn: 0.1068348    total: 200ms    remaining: 288ms
#> 41:  learn: 0.1060263    total: 204ms    remaining: 281ms
#> 42:  learn: 0.1052530    total: 207ms    remaining: 274ms
#> 43:  learn: 0.1045115    total: 212ms    remaining: 269ms
#> 44:  learn: 0.1037861    total: 216ms    remaining: 264ms
#> 45:  learn: 0.1031053    total: 220ms    remaining: 259ms
#> 46:  learn: 0.1024645    total: 225ms    remaining: 254ms
#> 47:  learn: 0.1018457    total: 229ms    remaining: 248ms
#> 48:  learn: 0.1012497    total: 232ms    remaining: 242ms
#> 49:  learn: 0.1006996    total: 235ms    remaining: 235ms
#> 50:  learn: 0.1001835    total: 239ms    remaining: 229ms
#> 51:  learn: 0.0996695    total: 242ms    remaining: 223ms
#> 52:  learn: 0.0991990    total: 246ms    remaining: 218ms
#> 53:  learn: 0.0987716    total: 249ms    remaining: 212ms
#> 54:  learn: 0.0983443    total: 252ms    remaining: 206ms
#> 55:  learn: 0.0979411    total: 255ms    remaining: 200ms
#> 56:  learn: 0.0975621    total: 258ms    remaining: 195ms
#> 57:  learn: 0.0972083    total: 261ms    remaining: 189ms
#> 58:  learn: 0.0968639    total: 265ms    remaining: 184ms
#> 59:  learn: 0.0965243    total: 269ms    remaining: 179ms
#> 60:  learn: 0.0961923    total: 272ms    remaining: 174ms
#> 61:  learn: 0.0958829    total: 275ms    remaining: 168ms
#> 62:  learn: 0.0956101    total: 278ms    remaining: 163ms
#> 63:  learn: 0.0953473    total: 281ms    remaining: 158ms
#> 64:  learn: 0.0950908    total: 284ms    remaining: 153ms
#> 65:  learn: 0.0948327    total: 287ms    remaining: 148ms
#> 66:  learn: 0.0946033    total: 290ms    remaining: 143ms
#> 67:  learn: 0.0943820    total: 293ms    remaining: 138ms
#> 68:  learn: 0.0941723    total: 297ms    remaining: 133ms
#> 69:  learn: 0.0939663    total: 300ms    remaining: 129ms
#> 70:  learn: 0.0937677    total: 304ms    remaining: 124ms
#> 71:  learn: 0.0935819    total: 307ms    remaining: 119ms
#> 72:  learn: 0.0933969    total: 310ms    remaining: 115ms
#> 73:  learn: 0.0932181    total: 313ms    remaining: 110ms
#> 74:  learn: 0.0930514    total: 316ms    remaining: 105ms
#> 75:  learn: 0.0928960    total: 320ms    remaining: 101ms
#> 76:  learn: 0.0927433    total: 323ms    remaining: 96.4ms
#> 77:  learn: 0.0925871    total: 326ms    remaining: 91.9ms
#> 78:  learn: 0.0924635    total: 329ms    remaining: 87.6ms
#> 79:  learn: 0.0923467    total: 333ms    remaining: 83.1ms
#> 80:  learn: 0.0922292    total: 336ms    remaining: 78.7ms
#> 81:  learn: 0.0921089    total: 339ms    remaining: 74.3ms
#> 82:  learn: 0.0919860    total: 342ms    remaining: 70.1ms
#> 83:  learn: 0.0918862    total: 346ms    remaining: 65.9ms
#> 84:  learn: 0.0917925    total: 349ms    remaining: 61.6ms
#> 85:  learn: 0.0916927    total: 353ms    remaining: 57.4ms
#> 86:  learn: 0.0916089    total: 356ms    remaining: 53.1ms
#> 87:  learn: 0.0915482    total: 356ms    remaining: 48.6ms
#> 88:  learn: 0.0914624    total: 362ms    remaining: 44.7ms
#> 89:  learn: 0.0913841    total: 365ms    remaining: 40.5ms
#> 90:  learn: 0.0912981    total: 368ms    remaining: 36.4ms
#> 91:  learn: 0.0912323    total: 371ms    remaining: 32.3ms
#> 92:  learn: 0.0911620    total: 374ms    remaining: 28.2ms
#> 93:  learn: 0.0910968    total: 378ms    remaining: 24.1ms
#> 94:  learn: 0.0910390    total: 381ms    remaining: 20ms
#> 95:  learn: 0.0909742    total: 384ms    remaining: 16ms
#> 96:  learn: 0.0909243    total: 387ms    remaining: 12ms
#> 97:  learn: 0.0908512    total: 390ms    remaining: 7.95ms
#> 98:  learn: 0.0907883    total: 393ms    remaining: 3.97ms
#> 99:  learn: 0.0907341    total: 396ms    remaining: 0us
#> <catboost.core.CatBoostRegressor object at 0x7f75bdded3c0>

Predict


# Make predictions on the test set
preds = model.predict(x_test.reshape(-1, 1))
preds = pd.DataFrame(preds, columns=[f'pred_{q}' for q in quantiles])

preds.head(6)
#>    pred_0.1  pred_0.2  pred_0.3  ...  pred_0.7  pred_0.8  pred_0.9
#> 0  3.401601  3.542894  3.619259  ...  3.938859  4.041568  4.165809
#> 1  4.150397  4.269908  4.355643  ...  4.701792  4.816983  4.958197
#> 2  4.350918  4.476999  4.595515  ...  4.933372  5.017778  5.147716
#> 3  2.829240  2.936415  3.034001  ...  3.365471  3.446338  3.577815
#> 4  3.456021  3.574613  3.658350  ...  3.966363  4.047775  4.166815
#> 5  3.356988  3.505757  3.580486  ...  3.858433  3.961821  4.084200
#> 
#> [6 rows x 9 columns]
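A quick sanity check for quantile predictions is empirical coverage: roughly a fraction \(\alpha\) of the test targets should fall at or below the predicted \(\alpha\)-quantile. With the model above you would compare `y_test` against `preds['pred_0.9']`; here is the idea on synthetic data where the true quantile is known (a sketch, independent of the fitted model):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, 10_000)
q90 = 1.2816  # theoretical 0.9 quantile of N(0, 1)

coverage = np.mean(y <= q90)  # should land close to 0.9
print(coverage)
```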

Plot


fig, ax = plt.subplots(figsize=(10, 6))
ax.scatter(x_test, y_test)

for col in ['pred_0.1', 'pred_0.5', 'pred_0.9']:
    ax.scatter(x_test.reshape(-1,1), preds[col], alpha=0.50, label=col)

ax.legend()

And that's it; it doesn't seem like a bad alternative if you have to do this kind of thing.

Tip

Hopefully this is useful to my friend Kenet for something he was poking at.

Well, not much more to say. Happy Sunday.

It can also be done with R, of course.

library(reticulate) # bridges R and Python, converting data and functions between the two in both directions
library(catboost)

X_train <- as.matrix(py$x_train) # catboost in R expects a matrix
Y_train <-  as.matrix(py$y_train)


X_test <- as.matrix(py$x_test) 
Y_test <-  as.matrix(py$y_test)

head(X_train) ; head(Y_train)
#>           [,1]
#> [1,] 0.6499924
#> [2,] 0.9125034
#> [3,] 0.4820761
#> [4,] 0.2060426
#> [5,] 0.5411845
#> [6,] 0.1779657
#>          [,1]
#> [1,] 4.165367
#> [2,] 5.018479
#> [3,] 4.146041
#> [4,] 2.942043
#> [5,] 4.287877
#> [6,] 3.589826

(quantiles_str <-  py$quantile_str)
#> [1] "0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9"
train_pool <- catboost.load_pool(data = X_train, label = Y_train)
test_pool <- catboost.load_pool(data = X_test)
loss_function <-  paste0("MultiQuantile:alpha=", quantiles_str)

fit_params <-  list(
    iterations = 100,
    loss_function= loss_function
    )

model <- catboost.train(train_pool, params=fit_params)
#> 0:   learn: 0.1977907    total: 50.6ms   remaining: 5s
#> 1:   learn: 0.1934878    total: 56.9ms   remaining: 2.79s
#> 2:   learn: 0.1891389    total: 61.3ms   remaining: 1.98s
#> 3:   learn: 0.1849242    total: 66.1ms   remaining: 1.58s
#> 4:   learn: 0.1808818    total: 71ms remaining: 1.35s
#> 5:   learn: 0.1769520    total: 74.5ms   remaining: 1.17s
#> 6:   learn: 0.1732251    total: 78.2ms   remaining: 1.04s
#> 7:   learn: 0.1696774    total: 83ms remaining: 955ms
#> 8:   learn: 0.1661482    total: 86.6ms   remaining: 876ms
#> 9:   learn: 0.1628541    total: 89.7ms   remaining: 807ms
#> 10:  learn: 0.1596378    total: 93.1ms   remaining: 753ms
#> 11:  learn: 0.1565479    total: 97.3ms   remaining: 714ms
#> 12:  learn: 0.1536016    total: 102ms    remaining: 683ms
#> 13:  learn: 0.1507692    total: 105ms    remaining: 647ms
#> 14:  learn: 0.1480845    total: 109ms    remaining: 615ms
#> 15:  learn: 0.1454840    total: 113ms    remaining: 592ms
#> 16:  learn: 0.1429863    total: 118ms    remaining: 576ms
#> 17:  learn: 0.1407059    total: 122ms    remaining: 557ms
#> 18:  learn: 0.1383697    total: 126ms    remaining: 536ms
#> 19:  learn: 0.1361779    total: 129ms    remaining: 518ms
#> 20:  learn: 0.1340707    total: 133ms    remaining: 500ms
#> 21:  learn: 0.1320342    total: 137ms    remaining: 485ms
#> 22:  learn: 0.1300775    total: 140ms    remaining: 468ms
#> 23:  learn: 0.1282447    total: 144ms    remaining: 455ms
#> 24:  learn: 0.1264588    total: 147ms    remaining: 441ms
#> 25:  learn: 0.1247594    total: 151ms    remaining: 429ms
#> 26:  learn: 0.1232137    total: 154ms    remaining: 416ms
#> 27:  learn: 0.1216563    total: 157ms    remaining: 404ms
#> 28:  learn: 0.1202034    total: 161ms    remaining: 394ms
#> 29:  learn: 0.1187735    total: 164ms    remaining: 383ms
#> 30:  learn: 0.1174263    total: 169ms    remaining: 375ms
#> 31:  learn: 0.1161250    total: 172ms    remaining: 365ms
#> 32:  learn: 0.1148583    total: 175ms    remaining: 356ms
#> 33:  learn: 0.1136688    total: 180ms    remaining: 349ms
#> 34:  learn: 0.1125292    total: 185ms    remaining: 343ms
#> 35:  learn: 0.1114357    total: 188ms    remaining: 334ms
#> 36:  learn: 0.1104088    total: 195ms    remaining: 332ms
#> 37:  learn: 0.1094203    total: 199ms    remaining: 325ms
#> 38:  learn: 0.1085302    total: 202ms    remaining: 316ms
#> 39:  learn: 0.1076702    total: 205ms    remaining: 308ms
#> 40:  learn: 0.1068348    total: 212ms    remaining: 305ms
#> 41:  learn: 0.1060263    total: 216ms    remaining: 298ms
#> 42:  learn: 0.1052530    total: 219ms    remaining: 290ms
#> 43:  learn: 0.1045115    total: 223ms    remaining: 284ms
#> 44:  learn: 0.1037861    total: 228ms    remaining: 278ms
#> 45:  learn: 0.1031053    total: 232ms    remaining: 272ms
#> 46:  learn: 0.1024645    total: 235ms    remaining: 265ms
#> 47:  learn: 0.1018457    total: 242ms    remaining: 262ms
#> 48:  learn: 0.1012497    total: 246ms    remaining: 256ms
#> 49:  learn: 0.1006996    total: 250ms    remaining: 250ms
#> 50:  learn: 0.1001835    total: 253ms    remaining: 243ms
#> 51:  learn: 0.0996695    total: 257ms    remaining: 237ms
#> 52:  learn: 0.0991990    total: 261ms    remaining: 232ms
#> 53:  learn: 0.0987716    total: 265ms    remaining: 225ms
#> 54:  learn: 0.0983443    total: 268ms    remaining: 219ms
#> 55:  learn: 0.0979411    total: 272ms    remaining: 214ms
#> 56:  learn: 0.0975621    total: 275ms    remaining: 208ms
#> 57:  learn: 0.0972083    total: 279ms    remaining: 202ms
#> 58:  learn: 0.0968639    total: 283ms    remaining: 197ms
#> 59:  learn: 0.0965243    total: 287ms    remaining: 191ms
#> 60:  learn: 0.0961923    total: 292ms    remaining: 187ms
#> 61:  learn: 0.0958829    total: 298ms    remaining: 182ms
#> 62:  learn: 0.0956101    total: 304ms    remaining: 178ms
#> 63:  learn: 0.0953473    total: 308ms    remaining: 173ms
#> 64:  learn: 0.0950908    total: 313ms    remaining: 168ms
#> 65:  learn: 0.0948327    total: 317ms    remaining: 163ms
#> 66:  learn: 0.0946033    total: 324ms    remaining: 160ms
#> 67:  learn: 0.0943820    total: 329ms    remaining: 155ms
#> 68:  learn: 0.0941723    total: 332ms    remaining: 149ms
#> 69:  learn: 0.0939663    total: 338ms    remaining: 145ms
#> 70:  learn: 0.0937677    total: 342ms    remaining: 140ms
#> 71:  learn: 0.0935819    total: 346ms    remaining: 134ms
#> 72:  learn: 0.0933969    total: 349ms    remaining: 129ms
#> 73:  learn: 0.0932181    total: 353ms    remaining: 124ms
#> 74:  learn: 0.0930514    total: 358ms    remaining: 119ms
#> 75:  learn: 0.0928960    total: 361ms    remaining: 114ms
#> 76:  learn: 0.0927433    total: 364ms    remaining: 109ms
#> 77:  learn: 0.0925871    total: 369ms    remaining: 104ms
#> 78:  learn: 0.0924635    total: 374ms    remaining: 99.3ms
#> 79:  learn: 0.0923467    total: 377ms    remaining: 94.3ms
#> 80:  learn: 0.0922292    total: 380ms    remaining: 89.2ms
#> 81:  learn: 0.0921089    total: 386ms    remaining: 84.8ms
#> 82:  learn: 0.0919860    total: 390ms    remaining: 79.9ms
#> 83:  learn: 0.0918862    total: 394ms    remaining: 75ms
#> 84:  learn: 0.0917925    total: 398ms    remaining: 70.2ms
#> 85:  learn: 0.0916927    total: 404ms    remaining: 65.8ms
#> 86:  learn: 0.0916089    total: 408ms    remaining: 61ms
#> 87:  learn: 0.0915482    total: 409ms    remaining: 55.8ms
#> 88:  learn: 0.0914624    total: 412ms    remaining: 50.9ms
#> 89:  learn: 0.0913841    total: 416ms    remaining: 46.2ms
#> 90:  learn: 0.0912981    total: 420ms    remaining: 41.6ms
#> 91:  learn: 0.0912323    total: 424ms    remaining: 36.9ms
#> 92:  learn: 0.0911620    total: 427ms    remaining: 32.2ms
#> 93:  learn: 0.0910968    total: 431ms    remaining: 27.5ms
#> 94:  learn: 0.0910390    total: 437ms    remaining: 23ms
#> 95:  learn: 0.0909742    total: 441ms    remaining: 18.4ms
#> 96:  learn: 0.0909243    total: 444ms    remaining: 13.7ms
#> 97:  learn: 0.0908512    total: 449ms    remaining: 9.17ms
#> 98:  learn: 0.0907883    total: 454ms    remaining: 4.58ms
#> 99:  learn: 0.0907341    total: 457ms    remaining: 0us
predicciones <- catboost.predict(model, pool = test_pool)
colnames(predicciones) <- paste0("quantile_", 1:9) 

head(predicciones)
#>      quantile_1 quantile_2 quantile_3 quantile_4 quantile_5 quantile_6
#> [1,]   3.401601   3.542894   3.619259   3.716297   3.794767   3.868639
#> [2,]   4.150397   4.269908   4.355643   4.431774   4.502443   4.586961
#> [3,]   4.350918   4.476999   4.595515   4.651673   4.713752   4.815297
#> [4,]   2.829240   2.936415   3.034001   3.123039   3.208654   3.292293
#> [5,]   3.456021   3.574613   3.658350   3.741498   3.824226   3.897069
#> [6,]   3.356988   3.505757   3.580486   3.651319   3.720015   3.791630
#>      quantile_7 quantile_8 quantile_9
#> [1,]   3.938859   4.041568   4.165809
#> [2,]   4.701792   4.816983   4.958197
#> [3,]   4.933372   5.017778   5.147716
#> [4,]   3.365471   3.446338   3.577815
#> [5,]   3.966363   4.047775   4.166815
#> [6,]   3.858433   3.961821   4.084200