bayesvalidrox.post_processing.post_processing.PostProcessing¶
- class bayesvalidrox.post_processing.post_processing.PostProcessing(engine, name='calib', out_dir='', out_format='pdf')¶
Bases: object
This class provides post-processing functions for the trained metamodels.
Parameters¶
- engine : obj
Trained Engine object; it is expected to contain a trained MetaModel object.
- name : string
Name of the PostProcessing object, to be used for saving the generated files. The default is ‘calib’.
- out_dir : string
Output directory in which the PostProcessing results are placed. The results are contained in a subfolder ‘Outputs_PostProcessing_name’. The default is ‘’.
- out_format : string
Format of the saved plots. Supports ‘png’ and ‘pdf’. The default is ‘pdf’.
Raises¶
- AttributeError
engine must be trained.
- __init__(engine, name='calib', out_dir='', out_format='pdf')¶
Methods
- __init__(engine[, name, out_dir, out_format])
- plot_correl(y_model, y_mm[, r_2, name]): Plot the correlation between the model and metamodel outputs.
- plot_expdesign([n_mc, show_samples]): Visualizes training samples over their given distributions as a pairplot.
- plot_moments([plot_type]): Plots the moments in a user-defined output format (standard is pdf) in the directory Outputs_PostProcessing.
- plot_residual_hist(y, y_mm): Checks the quality of the metamodel residuals via visualization and a normality (Shapiro-Wilk) test.
- plot_seq_design_diagnostics([plot_single, ...]): Plots the validation metrics calculated in the engine.
- plot_sobol(sobol_values, par_names, outputs): Generalized function to plot Sobol’ indices from PCE models or SALib.
- plot_validation_outputs(model_out, out_mean): Plots outputs for visual comparison of the metamodel outputs with those of the (full) multi-output original model.
- sobol_indices([plot_type, save, plot]): Visualizes and writes out Sobol’ and Total Sobol’ indices of the trained metamodel.
- validate_metamodel([n_samples, sampling_method]): Evaluates all available validation metrics for the engine on validation samples and visualizes the results.
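A minimal instantiation sketch, assuming ‘engine’ is an already-trained bayesvalidrox Engine object (the training itself is not shown here):

from bayesvalidrox.post_processing.post_processing import PostProcessing

# engine: a trained bayesvalidrox Engine containing a trained MetaModel (assumed to exist already)
post = PostProcessing(engine, name="calib", out_dir="", out_format="pdf")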
- plot_correl(y_model, y_mm, r_2=None, name='valid') → None¶
Plot the correlation between the model and metamodel outputs.
Parameters¶
- y_model : dict
Model evaluations.
- y_mm : dict
MetaModel evaluations.
- r_2 : dict, optional
R2 score for each output key (as a list per key). The default is None.
- name : string, optional
Name of the file. The default is ‘valid’.
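A sketch of a call to plot_correl, assuming ‘post’ is a PostProcessing instance as in the instantiation sketch above; the output key ‘Z’ and the array shapes are placeholder assumptions:

import numpy as np

# Placeholder evaluations; in practice both dicts come from evaluating the
# original model and the trained metamodel on the same validation samples.
y_model = {"Z": np.random.rand(50, 10)}                       # assumed shape: (n_samples, n_points)
y_mm = {"Z": y_model["Z"] + 0.01 * np.random.randn(50, 10)}   # metamodel approximation
post.plot_correl(y_model, y_mm, name="valid")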
- plot_expdesign(n_mc=10000, show_samples=False)¶
Visualizes training samples over their given distributions as a pairplot.
Parameters¶
- n_mc : int, optional
Number of samples from the priors to use for plotting. The default is 10000.
- show_samples : bool, optional
If set to True, the training samples are also visualized. The default is False.
Returns¶
None.
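For example (a sketch, assuming ‘post’ is an existing PostProcessing instance):

# Pairplot of the parameter priors with the training samples overlaid.
post.plot_expdesign(n_mc=5000, show_samples=True)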
- plot_moments(plot_type: str = 'line')¶
Plots the moments in a user defined output format (standard is pdf) in the directory Outputs_PostProcessing.
Parameters¶
- plot_type : str, optional
Supports ‘bar’ for bar plots and ‘line’ for line plots. The default is ‘line’.
Raises¶
- AttributeError
Plot type must be ‘bar’ or ‘line’.
Returns¶
- means : dict
Mean of the model outputs.
- stds : dict
Standard deviation of the model outputs.
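A short sketch, assuming ‘post’ is a PostProcessing instance; the two returned dictionaries are unpacked as documented in Returns:

# Line plot of mean and standard deviation per output key; the moments
# are also returned for further inspection.
means, stds = post.plot_moments(plot_type="line")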
- plot_residual_hist(y, y_mm) → None¶
Checks the quality of the metamodel residuals via visualization and a normality (Shapiro-Wilk) test.
Parameters¶
- y : dict
Model evaluations.
- y_mm : dict
Corresponding MetaModel evaluations.
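A sketch, assuming ‘post’ is a PostProcessing instance; the output key ‘Z’ and the shapes are placeholder assumptions:

import numpy as np

# Placeholder evaluations; the residuals y - y_mm are binned into a
# histogram and tested for normality (Shapiro-Wilk).
y = {"Z": np.random.rand(100, 1)}
y_mm = {"Z": y["Z"] + 0.005 * np.random.randn(100, 1)}
post.plot_residual_hist(y, y_mm)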
- plot_seq_design_diagnostics(plot_single=True, metrics=None, name='engine') → None¶
Plots the validation metrics calculated in the engine.
Parameters¶
- plot_single : bool
If set to True, generates a single file with all the results. If set to False, generates a file for each validation type.
- metrics : list
List of the metrics dictionaries, if generated in validation and not as part of the engine.
- name : str, optional
Name of the plot. Will be ‘post’ if used from self.validate_metamodel. The default is ‘engine’.
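For example (a sketch, assuming ‘post’ is a PostProcessing instance and the engine holds sequential-design metrics):

# Plot all validation metrics stored in the engine into a single file.
post.plot_seq_design_diagnostics(plot_single=True, name="engine")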
- plot_sobol(sobol_values: dict, par_names: list, outputs: list, sobol_type: str = 'sobol', i_order: int = 1, x: ndarray | None = None, xlabel: str = 'Time', plot_type: str = 'line', out_dir: str = './', out_format: str = 'png', quantiles: tuple | None = None)¶
Generalized function to plot Sobol’ indices from PCE models or SALib.
Parameters¶
- sobol_values : dict
Dictionary of Sobol’ indices for each output.
- par_names : list
List of parameter names or interaction tuples.
- outputs : list
List of output variable names.
- sobol_type : str
‘sobol’ or ‘totalsobol’.
- i_order : int
Order of Sobol’ index (only for sobol_type=’sobol’).
- x : np.ndarray, optional
X-axis values (e.g., time). If None, defaults to index range.
- xlabel : str
Label for the x-axis.
- plot_type : str
‘line’ or ‘bar’.
- out_dir : str
Output directory for plots.
- out_format : str
File format for saving plots.
- quantiles : tuple, optional
Tuple of (q5, q975) quantile arrays for confidence intervals.
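A sketch of a call, assuming ‘post’ is a PostProcessing instance; the layout of sobol_values (output key mapped to an array of shape (n_params, n_points)), the output key ‘Z’, and the parameter names are assumptions made for illustration only:

import numpy as np

# Hypothetical first-order Sobol' indices for one output 'Z' over 10 points
# in time and 3 parameters (placeholder values).
sobol_values = {"Z": np.array([[0.6] * 10, [0.3] * 10, [0.1] * 10])}
par_names = ["k", "n", "theta"]
outputs = ["Z"]
post.plot_sobol(
    sobol_values, par_names, outputs,
    sobol_type="sobol", i_order=1,
    x=np.linspace(0.0, 1.0, 10), xlabel="Time",
    plot_type="line", out_dir="./", out_format="png",
)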
- plot_validation_outputs(model_out, out_mean, out_std=None)¶
Plots outputs for visual comparison of the metamodel outputs with those of the (full) multi-output original model.
Parameters¶
- model_out : dict
Model outputs.
- out_mean : dict
MetaModel mean outputs.
- out_std : dict, optional
MetaModel standard deviation outputs. The default is None.
Raises¶
- AttributeError
This evaluation only supports PCE-type models!
Returns¶
None
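A sketch, assuming ‘post’ is a PostProcessing instance backed by a PCE-type metamodel; the output key ‘Z’ and the array shapes are placeholder assumptions:

import numpy as np

# Placeholder outputs; in practice model_out comes from the original model
# and out_mean/out_std from the metamodel prediction on the same samples.
model_out = {"Z": np.random.rand(5, 10)}
out_mean = {"Z": model_out["Z"] + 0.01}
out_std = {"Z": np.full((5, 10), 0.02)}
post.plot_validation_outputs(model_out, out_mean, out_std=out_std)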
- sobol_indices(plot_type: str = 'line', save: bool = True, plot: bool = True)¶
Visualizes and writes out Sobol’ and Total Sobol’ indices of the trained metamodel. One file is created for each index and output key.
Parameters¶
- plot_type : str, optional
Plot type; supports ‘line’ for line plots and ‘bar’ for bar plots. The default is ‘line’.
- save : bool, optional
Write out the indices as csv files if set to True. The default is True.
Raises¶
- AttributeError
The MetaModel in the given Engine needs to be of type ‘pce’ or ‘apce’.
- AttributeError
Plot type must be ‘line’ or ‘bar’.
Returns¶
- sobol_all : dict
All possible Sobol’ indices for the given metamodel.
- total_sobol_all : dict
All Total Sobol’ indices for the given metamodel.
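For example (a sketch, assuming ‘post’ is a PostProcessing instance whose metamodel is of type ‘pce’ or ‘apce’):

# Compute, plot, and write out all Sobol' and Total Sobol' indices;
# one csv file is created per index and output key.
sobol_all, total_sobol_all = post.sobol_indices(plot_type="bar", save=True)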
- validate_metamodel(n_samples=None, sampling_method='random')¶
Evaluates all available validation metrics for the engine on validation samples and visualizes the results.
Parameters¶
- n_samples : int, optional
Number of validation samples to generate. If this is set to None, the validation samples in the exp_design are used. The default is None.
- sampling_method : str, optional
Sampling method. The default is ‘random’.
Returns¶
- valid, bayes : dict, dict
Evaluation of the applicable metrics.
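For example (a sketch, assuming ‘post’ is a PostProcessing instance; the returned dictionaries are unpacked as documented in Returns):

# Draw 200 random validation samples and evaluate all applicable metrics.
valid, bayes = post.validate_metamodel(n_samples=200, sampling_method="random")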