bayesvalidrox.surrogate_models.reg_fast_laplace.RegressionFastLaplace¶
- class bayesvalidrox.surrogate_models.reg_fast_laplace.RegressionFastLaplace(n_iter=1000, n_Kfold=10, tol=1e-07, fit_intercept=False, bias_term=True, copy_X=True, verbose=False)¶
Bases: object
Sparse regression with Bayesian Compressive Sensing as described in Alg. 1 (Fast Laplace) of Ref. [1], which updates the formulas from Ref. [2].
sigma2: noise precision (sigma^2); nu is fixed to 0.
Adapted from the UQLab implementation uqlab/lib/uq_regression/BCS/uq_bsc.m.
Parameters¶
- n_iter: int, optional (DEFAULT = 1000)
Maximum number of iterations
- tol: float, optional (DEFAULT = 1e-7)
The algorithm terminates when the absolute change in the precision parameter for the weights falls below this threshold.
- fit_intercept: boolean, optional (DEFAULT = False)
Whether to calculate the intercept for this model. If set to False, no intercept is used in the calculations (e.g. the data are expected to be already centered).
- copy_X: boolean, optional (DEFAULT = True)
If True, X will be copied; else, it may be overwritten.
- verbose: boolean, optional (DEFAULT = False)
Verbose mode when fitting the model
Attributes¶
- coef_: array, shape = (n_features,)
Coefficients of the regression model (mean of the posterior distribution)
- alpha_: float
Estimated precision of the noise
- active_: array, dtype = np.bool, shape = (n_features,)
True for non-zero coefficients, False otherwise
- lambda_: array, shape = (n_features,)
Estimated precisions of the coefficients
- sigma_: array, shape = (n_features, n_features)
Estimated covariance matrix of the weights, computed only for non-zero coefficients
References¶
- [1] Babacan, S. D., Molina, R., & Katsaggelos, A. K. (2009). Bayesian compressive sensing using Laplace priors. IEEE Transactions on Image Processing, 19(1), 53-63.
- [2] Tipping, M. E., & Faul, A. C. (2003). Fast marginal likelihood maximisation for sparse Bayesian models. http://www.miketipping.com/papers/met-fastsbl.pdf
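A minimal usage sketch of fitting the model. The synthetic data, variable names, and the particular constructor arguments below are illustrative assumptions, not prescribed by the API:

```python
import numpy as np
from bayesvalidrox.surrogate_models.reg_fast_laplace import RegressionFastLaplace

# Synthetic sparse-recovery problem: only 3 of the 20 candidate regressors
# carry non-zero coefficients.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[[2, 7, 11]] = [1.5, -2.0, 0.7]
y = X @ w_true + 0.05 * rng.standard_normal(100)

reg = RegressionFastLaplace(n_iter=1000, n_Kfold=10, tol=1e-7)
reg.fit(X, y)

print(reg.coef_)    # posterior mean of the coefficients (expected to be sparse)
print(reg.active_)  # boolean mask of the regressors retained by the algorithm
```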
- __init__(n_iter=1000, n_Kfold=10, tol=1e-07, fit_intercept=False, bias_term=True, copy_X=True, verbose=False)¶
Methods
- __init__([n_iter, n_Kfold, tol, ...])
- fit(X, y)
- fit_(X, y, sigma2)
- predict(X[, return_std]): Computes predictive distribution for test set.
- predict(X, return_std=False)¶
Computes the predictive distribution for a test set. The predictive distribution for each data point is a one-dimensional Gaussian and is therefore characterised by its mean and variance, following Ref. [1], Section 3.3.2.
Parameters¶
- X: {array-like, sparse matrix}, shape = (n_samples_test, n_features)
Test data, matrix of explanatory variables
Returns¶
- list of length two [y_hat, var_hat], where:
- y_hat: numpy array of size (n_samples_test,)
Estimated values of targets on the test set (i.e. mean of the predictive distribution)
- var_hat: numpy array of size (n_samples_test,)
Variance of the predictive distribution
References¶
- [1] Bishop, C. M. (2006). Pattern recognition and machine learning. Springer.
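Continuing the fitting sketch from the class description above, a hedged example of querying the predictive distribution. The docstring names the second return value var_hat (a variance), so the interval below treats it as such; return_std=True is passed explicitly because the return value for return_std=False is not described here:

```python
# New inputs at which to evaluate the fitted model
# (continues the synthetic example from the class-level sketch).
X_test = rng.standard_normal((10, 20))

# Mean and variance of the one-dimensional Gaussian predictive
# distribution at each test point.
y_hat, var_hat = reg.predict(X_test, return_std=True)

# 95% interval implied by the Gaussian predictive distribution.
lower = y_hat - 1.96 * np.sqrt(var_hat)
upper = y_hat + 1.96 * np.sqrt(var_hat)
```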