bayesvalidrox.bayes_inference.bayes_model_comparison.BayesModelComparison

class bayesvalidrox.bayes_inference.bayes_model_comparison.BayesModelComparison(model_dict, bayes_opts, perturbed_data=None, n_bootstrap=1000, data_noise_level=0.01, use_emulator=True, out_dir='Outputs_Comparison/')

Bases: object

A class to perform Bayesian model comparison.

Attributes

model_dict : dict

A dictionary of model names and bvr.engine objects for each model.

bayes_opts : dict

A dictionary giving the BayesInference options.

perturbed_data : array of shape (n_bootstrap_itrs, n_obs), optional

User-defined perturbed data. The default is None.

n_bootstrap : int

Number of bootstrap iterations. The default is 1000.

data_noise_level : float

A noise level to perturb the data set. The default is 0.01.

use_emulator : bool

Set to True if the emulator/metamodel should be used in the analysis. If False, the model is run.

out_dir : string, optional

Name of output folder for the generated plots. The default is ‘Outputs_Comparison/’.

__init__(model_dict, bayes_opts, perturbed_data=None, n_bootstrap=1000, data_noise_level=0.01, use_emulator=True, out_dir='Outputs_Comparison/')

Methods

__init__(model_dict, bayes_opts[, ...])

calc_bayes_factors()

Calculate the Bayes factors for each pair of models in model_dict with respect to the given data.

calc_justifiability_analysis()

Perform justifiability analysis by calculating the confusion matrix.

calc_model_weights()

Calculate the model weights from the BME evaluations used for the Bayes factors.

generate_ja_dataset()

Generates the data set for the justifiability analysis.

model_comparison_all()

Performs all three types of model comparison:

plot_bayes_factor(bme_dict)

Plots the Bayes factor distributions in a \(N_m \times N_m\) matrix, where \(N_m\) is the number of models.

plot_confusion_matrix(confusion_matrix)

Visualizes the confusion matrix and the model weights for the justifiability analysis.

plot_model_weights(model_weights)

Visualizes the model weights resulting from Bayesian model selection (BMS) via the observation data.

setup()

Initialize parameters that are needed for all types of model comparison.

calc_bayes_factors() → dict

Calculate the Bayes factors for each pair of models in model_dict with respect to the given data.

Returns

bme_dict : dict

The calculated BME values for each model.
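Conceptually, the Bayes factor between two models is the ratio of their Bayesian model evidence (BME) values, evaluated per bootstrap iteration. The following is a minimal NumPy sketch of that ratio, not the bayesvalidrox implementation; the model names and BME values are made up for illustration:

```python
import numpy as np

# Hypothetical per-bootstrap BME values for two models
# (made-up numbers, for illustration only).
bme_dict = {
    "linear": np.array([1.0e-3, 1.2e-3, 0.9e-3]),
    "quadratic": np.array([2.0e-3, 1.8e-3, 2.2e-3]),
}

def bayes_factor(bme_dict, model_i, model_j):
    """Element-wise Bayes factor BF_ij = BME_i / BME_j per bootstrap iteration."""
    return bme_dict[model_i] / bme_dict[model_j]

# Values > 1 favor "quadratic" over "linear" in that bootstrap iteration.
bf = bayes_factor(bme_dict, "quadratic", "linear")
```

Computing the factor per bootstrap iteration, rather than once, is what yields the Bayes factor distributions visualized by plot_bayes_factor.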

calc_justifiability_analysis() → dict

Perform justifiability analysis by calculating the confusion matrix.

Returns

confusion_matrix : dict

The averaged confusion matrix.
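In a justifiability analysis, each model is used in turn to generate synthetic data, and entry (i, j) of the confusion matrix is the weight of candidate model i given data generated by model j, so each column sums to one. A minimal NumPy sketch of that column normalization, with a made-up evidence table (not the bayesvalidrox implementation):

```python
import numpy as np

# Hypothetical averaged BME table: entry (i, j) is the evidence for
# candidate model i when the synthetic data were generated by model j
# (made-up numbers, for illustration only).
bme = np.array([
    [4.0, 1.0],
    [1.0, 3.0],
])

# Normalizing each column turns the table into confusion-matrix-style
# model weights: column j sums to 1 over the candidate models.
confusion_matrix = bme / bme.sum(axis=0, keepdims=True)
```

A strong diagonal indicates that each model is best identified from its own data, i.e. the models are mutually justifiable.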

calc_model_weights() → dict

Calculate the model weights from the BME evaluations used for the Bayes factors.

Returns

model_weights : dict

The calculated weights for each model.
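Model weights can be understood as BME values normalized over all competing models, per bootstrap iteration, so the weights sum to one. A minimal NumPy sketch under that reading (made-up model names and BME values, not the bayesvalidrox implementation):

```python
import numpy as np

# Hypothetical per-bootstrap BME values for three models
# (made-up numbers, for illustration only).
bme = {
    "m1": np.array([1.0, 2.0]),
    "m2": np.array([3.0, 2.0]),
    "m3": np.array([1.0, 1.0]),
}

names = list(bme)
stacked = np.vstack([bme[n] for n in names])            # (n_models, n_bootstrap)
weights = stacked / stacked.sum(axis=0, keepdims=True)  # normalize per iteration
model_weights = dict(zip(names, weights))
```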

generate_ja_dataset() → tuple[list, list, object]

Generates the data set for the justifiability analysis.

Returns

just_list : list

List of the model outputs for each of the given models, as well as the perturbed observations.

var_list : list

List of the uncertainty/stdev associated with each model output and perturbed observation.

sampler : object

bvr.PostSampler object.

model_comparison_all() → dict

Performs all three types of model comparison:

  • Bayes factors

  • Model weights

  • Justifiability analysis

Returns

results : dict

A dictionary that contains the calculated BME values, model weights, and confusion matrix.

plot_bayes_factor(bme_dict)

Plots the Bayes factor distributions in a \(N_m \times N_m\) matrix, where \(N_m\) is the number of models.

Parameters

bme_dict : dict

A dictionary containing the BME values of the models.

plot_confusion_matrix(confusion_matrix)

Visualizes the confusion matrix and the model weights for the justifiability analysis.

Parameters

confusion_matrix : dict

The averaged confusion matrix.

plot_model_weights(model_weights)

Visualizes the model weights resulting from Bayesian model selection (BMS) via the observation data.

Parameters

model_weights : array

Model weights.

setup()

Initialize parameters that are needed for all types of model comparison.