Bayesian multi-model comparison

Bayesvalidrox provides three distinct methods to compare a set of competing models against each other, given some observation of the outputs: Bayes factors, model weights, and confusion matrices. These are implemented in the class bayesvalidrox.bayes_inference.bayes_model_comparison.BayesModelComparison and can be called one at a time with their respective functions, or all in sequence with the function model_comparison_all().

Example

To perform model comparison, we first need to define the set of competing models. For this, we create an additional model in the file model2.py based on the example model from Models.

>>> import numpy as np
>>> def model2(samples, x_values):
>>>     # cubic response: the first input parameter scales x_values**3
>>>     poly = samples[0]*np.power(x_values, 3)
>>>     outputs = {'A': poly, 'x_values': x_values}
>>>     return outputs
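As a quick sanity check, the new model can also be called directly. This is a minimal sketch: the parameter value 2.0 is illustrative, and the x_values grid from the previous sections is reused.

>>> import numpy as np
>>> check = model2(np.array([2.0]), x_values)  # 2.0 is an illustrative parameter value
>>> assert check['A'].shape == x_values.shape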

Then we can build another surrogate for this model, following the same code as for the surrogate in Training surrogate models.

>>> # Wrap model2 in a forward model object
>>> Model2 = PyLinkForwardModel()
>>> Model2.link_type = 'Function'
>>> Model2.py_file = 'model2'
>>> Model2.name = 'model2'
>>> Model2.Output.names = ['A']
>>> Model2.func_args = {'x_values': x_values}
>>> Model2.store = False
>>> # Surrogate settings: degree-3 arbitrary PCE with FastARD regression
>>> MetaMod2 = MetaModel(Inputs)
>>> MetaMod2.meta_model_type = 'aPCE'
>>> MetaMod2.pce_reg_method = 'FastARD'
>>> MetaMod2.pce_deg = 3
>>> MetaMod2.pce_q_norm = 1
>>> # Experimental design: 30 random training samples
>>> ExpDesign2 = ExpDesigns(Inputs)
>>> ExpDesign2.n_init_samples = 30
>>> ExpDesign2.sampling_method = 'random'
>>> # Train the surrogate
>>> Engine_2 = Engine(MetaMod2, Model2, ExpDesign2)
>>> Engine_2.train_normal()
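It can be worth spot-checking the new surrogate before the comparison. The snippet below is a sketch under two assumptions: that ExpDesigns provides generate_samples() and that the Engine exposes eval_metamodel() returning mean and standard-deviation predictions per output, as used in Training surrogate models; check the API reference if your version differs.

>>> # Sketch: evaluate the surrogate on a few random test samples
>>> test_samples = ExpDesign2.generate_samples(10, 'random')
>>> mean_pred, std_pred = Engine_2.eval_metamodel(samples=test_samples)
>>> print(mean_pred['A'].shape)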

The comparison itself is performed with the class bayesvalidrox.bayes_inference.bayes_model_comparison.BayesModelComparison, which we import first.

>>> from bayesvalidrox import BayesModelComparison

We collect the engines that should be compared in a dictionary and assign each model a name via its key.

>>> meta_models = {
>>>     "linear": Engine_,
>>>     "degthree": Engine_2
>>>     }

Then we create an object of class BayesModelComparison.

>>> BayesOpts = BayesModelComparison()

Since the comparison builds on the class bayesvalidrox.bayes_inference.bayes_inference.BayesInference, we can set the properties of that class as well. These are collected in a dictionary and given to the function calls that perform the model comparison. In this example we use the following settings.

>>> opts_bootstrap = {
>>>     "bootstrap": True,               # bootstrap the observations
>>>     "n_samples": 100,                # number of samples used in the inference
>>>     "Discrepancy": DiscrepancyOpts,  # measurement-error model
>>>     "emulator": True,                # evaluate the surrogates instead of the models
>>>     "plot_post_pred": False          # skip posterior-predictive plots
>>>     }
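The dictionary refers to DiscrepancyOpts, the Discrepancy object describing the measurement error, as defined in the section on Bayesian inference. If it is not defined yet, it could look roughly like the following sketch; the Gaussian type is taken from the other examples, while obs_data is a placeholder for the observed values.

>>> from bayesvalidrox import Discrepancy
>>> DiscrepancyOpts = Discrepancy('')
>>> DiscrepancyOpts.type = 'Gaussian'
>>> # placeholder: one variance per observation point
>>> DiscrepancyOpts.parameters = obs_data ** 2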

Now we can run the full model comparison.

>>> output_dict = BayesOpts.model_comparison_all(meta_models, opts_bootstrap)
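Alternatively, the three comparison methods can be run one at a time. The calls below are a sketch; the method names and signatures are assumptions based on the class interface, so consult the API reference of BayesModelComparison for your installed version.

>>> # Sketch: run the comparison methods individually (assumed names)
>>> bayes_factors = BayesOpts.calc_bayes_factors(meta_models, opts_bootstrap)
>>> model_weights = BayesOpts.calc_model_weights(meta_models, opts_bootstrap)
>>> confusion = BayesOpts.calc_justifiability_analysis(meta_models, opts_bootstrap)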

The created plots are saved in the folder Outputs_Comparison.