Model governance, providing traceability and auditability
What about covering critical model governance capabilities from within a single environment? One reference, one user interface. Your model runs organized in projects and jobs. Model types, versions and evolution available in a single overview. Integrated audit-replication runs and data-quality testing. Not imposing models, but hosting yours. That is what we aim for with MonkeyProof modelSafe.
In modelSafe, you create, run and group related model analyses (jobs) in projects, managed under version control.
You can wrap runs for particular asset classes in separate projects, and the included jobs may be of any model
type or version. The only precondition? That your models are hosted in modelSafe.
The Project Summary offers an overview of included jobs: when, what, and how. It is available at any point in time,
for audit replication or other reuse. A modelSafe job consists of a job file and its related binary files, containing input and result data.
All stored together safely.
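As a purely illustrative sketch of that bundle idea (the names and layout here are hypothetical, not modelSafe's actual format), a job could be represented like this:

```python
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class Job:
    """Hypothetical stand-in for a modelSafe job: one job file
    plus the binary files holding its input and result data."""
    job_file: Path                                   # configuration and metadata
    input_data: list[Path] = field(default_factory=list)
    result_data: list[Path] = field(default_factory=list)

    def manifest(self) -> list[Path]:
        """Everything that must be stored together for audit replication."""
        return [self.job_file, *self.input_data, *self.result_data]

job = Job(
    job_file=Path("credit_risk_q3.job"),
    input_data=[Path("portfolio.bin")],
    result_data=[Path("capital_figures.bin")],
)
print(job.manifest())
```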
The code evolution of included models is easily monitored through integrated version control capabilities. In addition to commit logs,
modified lines of code are instantly highlighted. If required, an inventory management module can be included in modelSafe.
The Result Metrics section lets you compare the results of related runs. This quickly offers insight into capital variations
and their root causes, whether these lie in job configuration settings, code modifications or evolved data (a sketch follows below).
In addition to graphical depictions, results are available numerically and can be exported to any format required. Optionally,
automated reports are available to share the current status internally or with authorities.
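To make the comparison idea concrete, here is a minimal Python sketch of diffing the capital figures of two related runs. The metric names, tolerance and function are assumptions for illustration, not modelSafe functionality.

```python
def compare_runs(baseline: dict[str, float], candidate: dict[str, float],
                 tolerance: float = 0.01) -> dict[str, float]:
    """Report the relative variation per capital metric between two related
    runs; metrics drifting beyond the tolerance warrant root-cause analysis
    in configuration, code or data."""
    variations = {}
    for metric in baseline.keys() & candidate.keys():
        base, new = baseline[metric], candidate[metric]
        relative = (new - base) / base
        if abs(relative) > tolerance:
            variations[metric] = relative
    return variations

# Two hypothetical runs of the same model, before and after a code change.
q2 = {"SCR_market": 120.4, "SCR_credit": 87.1}
q3 = {"SCR_market": 131.9, "SCR_credit": 87.3}
print(compare_runs(q2, q3))  # {'SCR_market': 0.0955...}
```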
The job scope is equally notable as the project scope. Within individual jobs, users specify not only configuration settings
for a run, but also scenario types and overrides (including justification). The option to clone jobs enables sensitivity analyses;
if required, jobs can be imported from or exported to other projects.
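As an illustration of cloning a job with a justified override, consider the sketch below. The structure and field names are hypothetical and do not reflect modelSafe's actual job format.

```python
import copy

# Hypothetical job configuration: settings, a scenario type, and
# overrides that must each carry a justification.
base_job = {
    "model": "ALM-model",
    "version": "2.4",
    "settings": {"horizon_years": 1, "simulations": 10_000},
    "scenario": "baseline",
    "overrides": {},  # parameter -> {"value": ..., "justification": ...}
}

def clone_with_override(job, parameter, value, justification):
    """Clone a job and apply one justified override, e.g. for a
    sensitivity analysis around a single parameter."""
    clone = copy.deepcopy(job)
    clone["overrides"][parameter] = {
        "value": value,
        "justification": justification,
    }
    return clone

sensitivity = clone_with_override(
    base_job, "interest_rate_shift", 0.005,
    "Sensitivity of capital to a +50 bps parallel rate shift",
)
```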
In addition, users and authorities alike can invoke validity tests on the underlying calculation data, as specified for that
particular internal model version.
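By way of illustration only, version-specific validity tests on calculation data could look like the following. The tests, names and data layout are invented for this example and do not reflect modelSafe's implementation.

```python
# Hypothetical validity tests tied to a model version: each test takes
# the calculation data and returns True when the data passes.
def no_missing_exposures(data):
    return all(row.get("exposure") is not None for row in data)

def ratings_in_scale(data, scale=("AAA", "AA", "A", "BBB", "BB", "B", "CCC")):
    return all(row.get("rating") in scale for row in data)

VALIDITY_TESTS = {"credit-model v3.1": [no_missing_exposures, ratings_in_scale]}

def run_validity_tests(model_version, data):
    """Run all tests registered for this internal model version."""
    return {test.__name__: test(data) for test in VALIDITY_TESTS[model_version]}

data = [{"exposure": 1_000_000, "rating": "BBB"},
        {"exposure": None, "rating": "A"}]
print(run_validity_tests("credit-model v3.1", data))
# {'no_missing_exposures': False, 'ratings_in_scale': True}
```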
Need to prove model usage in greater detail? The Usage Metrics show which models and versions have been used over a
predefined period. Optionally, usage metrics can be propagated to an inventory module to log usage over the model lifecycle.
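As a final illustration, aggregating usage per model and version over a period could look like the sketch below; the run log and function are hypothetical stand-ins for such metrics.

```python
from collections import Counter
from datetime import date

# Hypothetical run log: (model, version, run date).
run_log = [
    ("credit-model", "3.1", date(2024, 1, 15)),
    ("credit-model", "3.1", date(2024, 2, 3)),
    ("ALM-model",    "2.4", date(2024, 2, 20)),
    ("credit-model", "3.0", date(2023, 12, 1)),  # outside the period
]

def usage_metrics(log, start, end):
    """Count runs per (model, version) within a predefined period."""
    return Counter((m, v) for m, v, d in log if start <= d <= end)

print(usage_metrics(run_log, date(2024, 1, 1), date(2024, 3, 31)))
# Counter({('credit-model', '3.1'): 2, ('ALM-model', '2.4'): 1})
```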
In conclusion, modelSafe covers your model governance from within a single environment, backstopped by integrated version control.