bayesvalidrox.surrogate_models.gaussian_process_sklearn.GPESkl

class bayesvalidrox.surrogate_models.gaussian_process_sklearn.GPESkl(input_obj, meta_model_type='GPE', gpe_reg_method='LBFGS', n_restarts=10, auto_select=False, kernel_type='RBF', isotropy=True, noisy=False, nugget=1e-09, n_bootstrap_itrs=1, normalize_x_method='norm', dim_red_method='no', verbose=False, input_transform='user')

Bases: MetaModel

GP MetaModel using the Scikit-Learn library

This class trains a surrogate model of type Gaussian Process Regression. It accepts an input object (input_obj) containing the specification of the distributions for uncertain parameters and a model object with instructions on how to run the computational model.
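
A minimal usage sketch; here `inputs` stands for an already-constructed input object and `X_train`, `y_train`, `X_test` for placeholder data, with the keyword names taken from the signature above:

>>> metamodel = GPESkl(inputs, kernel_type='Matern', isotropy=False, noisy=True)
>>> metamodel.fit(X_train, y_train)               # y_train: dict of output arrays
>>> mean, std = metamodel.eval_metamodel(X_test)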

Attributes

input_obj: obj

Input object with the information on the model input parameters.

meta_model_type: str

Surrogate model type, in this case GPE. Default is 'GPE'.

gpe_reg_method: str

GPE regression method used to compute the kernel hyperparameters. For the Scikit-Learn library, the following regression method is available: 1. 'LBFGS'. Default is 'LBFGS'.

auto_select: bool

Flag to loop through the different available kernels and select the best one based on the BME criterion. Default is False.

kernel_type: str

Type of kernel to use and train for. The following Scikit-Learn kernels are available:
1. 'RBF': squared exponential kernel
2. 'Matern': Matern kernel
3. 'RQ': rational quadratic kernel
Default is the 'RBF' kernel.
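
For example, to train anisotropic Matern kernels, or to let the training compare all three kernel families (a sketch; `input_obj` is an already-constructed input object):

>>> MetaModelOpts = GPESkl(input_obj)
>>> MetaModelOpts.kernel_type = 'Matern'
>>> MetaModelOpts.isotropy = False    # one length scale per input dimension
>>> MetaModelOpts.auto_select = True  # compare RBF, Matern and RQ via BME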

n_restarts: int

Number of restarts of the hyperparameter optimizer for each GP training. Default is 10.

isotropy: bool

Flag to train an isotropic kernel (one length scale for all input parameters) or an anisotropic kernel (one length scale for each input dimension). True for an isotropic kernel, False for an anisotropic kernel. Default is True.

noisy: bool

True to consider a WhiteKernel for regularization purposes and to optimize for the noise hyperparameter. Default is False.
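
In Scikit-Learn terms this corresponds to adding a WhiteKernel to the base kernel, e.g. (a sketch; not necessarily the exact composition used internally):

>>> from sklearn.gaussian_process.kernels import RBF, WhiteKernel
>>> kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-5)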

nugget: float

Constant value added to the kernel matrix for regularization purposes (not optimized). Default is 1e-9.

normalize_x_method: str

Type of transformation to apply to the inputs. The following options are available:
1. 'norm': normalizes the inputs to U[0, 1]
2. 'standard': standardizes the inputs to N(0, 1)
3. 'none' or None: no transformation is done
Default is 'norm'.

n_bootstrap_itrs: int

Number of iterations for the bootstrap sampling. The default is 1.

dim_red_method: str

Dimensionality reduction method for the output space. The available method is based on principal component analysis (PCA). Default is 'no'. There are two ways to select the number of components: via a threshold on the percentage of explained variance (between 0 and 100, Option A) or by directly prescribing the number of components (Option B):

>>> MetaModelOpts = MetaModel()
>>> MetaModelOpts.dim_red_method = 'PCA'
>>> MetaModelOpts.var_pca_threshold = 99.999  # Option A
>>> MetaModelOpts.n_pca_components = 12  # Option B

verbose: bool

Prints summary of the regression results. Default is False.

input_transform: str

Type of transformation to apply to the inputs. Default is 'user'.

__init__(input_obj, meta_model_type='GPE', gpe_reg_method='LBFGS', n_restarts=10, auto_select=False, kernel_type='RBF', isotropy=True, noisy=False, nugget=1e-09, n_bootstrap_itrs=1, normalize_x_method='norm', dim_red_method='no', verbose=False, input_transform='user')

Methods

__init__(input_obj[, meta_model_type, ...])

adaptive_regression(X, y, var_idx[, verbose])

Adaptively fits the GPE model by comparing different kernel options.

add_input_space()

Instantiates the experimental design object.

build_kernels()

Initializes the different possible kernels, and selects the ones to train for, depending on the input options.

build_metamodel()

Builds the parts of the metamodel that are needed before fitting.

calculate_moments()

Computes the first two moments of the metamodel.

check_is_gaussian()

Check if the metamodel returns a mean and stdev.

copy_meta_model_opts()

Convenience function to copy the metamodel options.

eval_metamodel(samples[, b_i])

Evaluates GP metamodel at the requested samples.

fit(X, y[, parallel, verbose, b_i])

Fits the surrogate to the given data (samples X, outputs y).

pca_transformation(target, n_pca_components)

Transforms the targets (outputs) via Principal Component Analysis.

scale_x(X, transform_obj)

Transforms the inputs based on the scaling done during training.

transform_x(X[, transform_type])

Scales the inputs (X) during training using either normalize ([0, 1]) or standardize (N(0, 1)).

class AutoVivification

Bases: dict

Implementation of perl’s AutoVivification feature.

Source: https://stackoverflow.com/a/651879/18082457
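
For reference, the implementation from the cited source is a short dict subclass (reproduced here as a sketch):

>>> class AutoVivification(dict):
...     def __missing__(self, key):
...         value = self[key] = type(self)()
...         return value
>>> d = AutoVivification()
>>> d['a']['b'] = 1   # intermediate dicts are created on demand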

clear() → None. Remove all items from D.
copy() → a shallow copy of D
fromkeys(iterable, value=None, /)

Create a new dictionary with keys from iterable and values set to value.

get(key, default=None, /)

Return the value for key if key is in the dictionary, else default.

items() → a set-like object providing a view on D's items
keys() → a set-like object providing a view on D's keys
pop(k[, d]) → v, remove specified key and return the corresponding value.

If the key is not found, return the default if given; otherwise, raise a KeyError.

popitem()

Remove and return a (key, value) pair as a 2-tuple.

Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.

setdefault(key, default=None, /)

Insert key with a value of default if key is not in the dictionary.

Return the value for key if key is in the dictionary, else default.

update([E, ]**F) → None. Update D from dict/iterable E and F.

If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]

values() → an object providing a view on D's values

adaptive_regression(X, y, var_idx, verbose=False)

Adaptively fits the GPE model by comparing different kernel options.

Parameters

X: array of shape (n_samples, ndim)

Training set. These samples should already be transformed.

y: array of shape (n_samples,)

Target values, i.e. simulation results for the experimental design.

var_idx: int

Index of the output.

verbose: bool, optional

Print out summary. The default is False.

Returns

return_vars: dict

Fitted estimator and its BME score.

add_input_space()

Instantiates the experimental design object.

Returns

None.

build_kernels()

Initializes the different possible kernels and selects the ones to train for, depending on the input options. It relies on self._auto_select, which must be a boolean; self.kernel_type, which must be a string with one of the (currently) three valid kernel types (if an invalid kernel type is given, the RBF kernel is used); and self.isotropy, which, if True, makes it initialize isotropic kernels.

Raises

AttributeError: if an invalid type of kernel is given.

TypeError: if the kernel type is not a string.

Returns

List: the kernels to iterate over.

List: the names of the kernels to iterate over.
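
The returned kernels are plain Scikit-Learn kernel objects; for instance, an anisotropic RBF kernel for a two-dimensional input (illustrative only; the internal bounds and scaling may differ):

>>> from sklearn.gaussian_process.kernels import RBF
>>> kernel = RBF(length_scale=[1.0, 1.0])   # one length scale per dimension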

build_metamodel() None

Builds the parts of the metamodel that are needed before fitting. This is executed outside of any loops related to, e.g., bootstrap sampling or transformations such as PCA.

Returns

None

calculate_moments()

Computes the first two moments of the metamodel.

Returns

means: dict

The first moment (mean) of the surrogate.

stds: dict

The second moment (standard deviation) of the surrogate.
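
A sketch of the call, unpacking the two dictionaries described above (assumes a fitted GPESkl instance `metamodel`):

>>> means, stds = metamodel.calculate_moments()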

check_is_gaussian() bool

Check if the metamodel returns a mean and stdev.

Returns

bool

True

copy_meta_model_opts()

Convenience function to copy the metamodel options.

Returns

metamod_copy: object

The copied object.

eval_metamodel(samples, b_i=0)

Evaluates GP metamodel at the requested samples.

Parameters

samples: array of shape (n_samples, ndim)

Samples to evaluate the metamodel at.

Returns

mean_pred: dict

Mean of the predictions.

std_pred: dict

Standard deviation of the predictions.
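
A sketch of a typical call; the `samples` array and the output key 'Z' are placeholders:

>>> mean_pred, std_pred = metamodel.eval_metamodel(samples)
>>> mean_pred['Z']   # array of predictive means for output key 'Z'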

fit(X: array, y: dict, parallel=False, verbose=False, b_i=0)

Fits the surrogate to the given data (samples X, outputs y). Note here that the samples X should be the transformed samples provided by the experimental design if the transformation is used there.

Parameters

X: 2D list or np.array of shape (#samples, #dim)

The parameter value combinations that the model was evaluated at.

y: dict of 2D lists or arrays of shape (#samples, #timesteps)

The respective model evaluations.

parallel: bool

Set to True to run the training in parallel for various keys. The default is False.

verbose: bool

Set to True to obtain more information during runtime. The default is False.

Returns

None.
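
A self-contained sketch of the expected data layout; the output key 'Z' and the toy model are placeholders, and `metamodel` is a GPESkl instance:

>>> import numpy as np
>>> X = np.random.uniform(0, 1, (50, 2))       # 50 samples, 2 parameters
>>> y = {'Z': np.sin(X[:, [0]]) + X[:, [1]]}   # dict of (#samples, #timesteps) arrays
>>> metamodel.fit(X, y)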

pca_transformation(target, n_pca_components)

Transforms the targets (outputs) via Principal Component Analysis. The number of features is set by self.n_pca_components. If this is not given, self.var_pca_threshold is used as a threshold.

ToDo: Check the inputs needed for this class; there is an error when PCA is used.
ToDo: From the y_transformation() function, a dictionary is being sent instead of an array for target.

Parameters

target: array of shape (n_samples,)

Target values.

Returns

pca: obj

Fitted sklearn PCA object.

OutputMatrix: array of shape (n_samples,)

Transformed target values.

n_pca_components: int

Number of selected principal components.
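
This is based on Scikit-Learn's PCA; a minimal self-contained sketch of selecting components by an explained-variance threshold (note that sklearn's n_components takes a fraction, whereas var_pca_threshold is documented as a percentage):

>>> import numpy as np
>>> from sklearn.decomposition import PCA
>>> y_matrix = np.random.default_rng(0).normal(size=(50, 10))  # (#samples, #timesteps)
>>> pca = PCA(n_components=0.999)       # keep 99.9 % of the explained variance
>>> y_reduced = pca.fit_transform(y_matrix)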

static scale_x(X: array, transform_obj: object)

Transforms the inputs based on the scaling done during training.

Parameters

X: 2D list or np.array of shape (#samples, #dim)

The parameter value combinations to evaluate the model with.

transform_obj: Scikit-Learn object

Class instance to transform inputs.

Returns

np.array: (#samples, #dim)

Transformed input sets.
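
A self-contained sketch with an explicit Scikit-Learn scaler (assuming scale_x() simply applies the already-fitted transformation object):

>>> import numpy as np
>>> from sklearn.preprocessing import MinMaxScaler
>>> X_train = np.random.uniform(-5, 5, (20, 2))
>>> scaler = MinMaxScaler().fit(X_train)
>>> X_scaled = GPESkl.scale_x(np.random.uniform(-5, 5, (5, 2)), scaler)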

transform_x(X: array, transform_type=None)

Scales the inputs (X) during training using either normalize ([0, 1]) or standardize (N(0, 1)). If None, the inputs are not scaled. If an invalid transform_type is given, no transformation is done.

Parameters

X: 2D list or np.array of shape (#samples, #dim)

The parameter value combinations to train the model with.

transform_type: str

Transformation to apply to the input parameters. Default is None.

Raises

AttributeError: if an invalid scaling name is given.

Returns

np.array: (#samples, #dim)

Transformed input parameters.

obj: Scaler object

Transformation object, for future transformations during surrogate evaluation.
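
A sketch of the training-time call, with the return order taken from the Returns section above; `X` and `X_new` are placeholder sample arrays:

>>> X_scaled, scaler = metamodel.transform_x(X, transform_type='norm')
>>> X_new_scaled = GPESkl.scale_x(X_new, scaler)   # reuse the scaler at evaluation time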