AI Governance — Module 2 of 4
“As an AI governance professional, I want to learn how to register AI use cases, conduct impact assessments, and manage lifecycle stages in ValidMind.”
This second module is part of the four-part AI Governance series.
Training is interactive — you explore ValidMind live. Try it!
| Keys | Action |
|---|---|
| → , ↓ , SPACE , N | Next slide |
| ← , ↑ , P , H | Previous slide |
| ? | Show all keyboard shortcuts |
A use case inventory is a centralized registry of all AI systems and their purposes.

Use case inventories help you track the classification, ownership, purpose, and status of every AI use case in one place.

Configure custom inventory fields for AI governance:
| Field type | Examples |
|---|---|
| Classification | Risk tier, impact level |
| Ownership | Use case owner, business sponsor |
| Purpose | Intended use, use boundaries |
| Status | Lifecycle stage, approval status |
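For illustration only, here is a minimal Python sketch of an inventory record built around these field types. The class, field names, and example values are assumptions made for this sketch, not the ValidMind API.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Hypothetical values for a 'Risk tier' classification field."""
    CRITICAL = "critical"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"


@dataclass
class UseCaseRecord:
    """One entry in a use case inventory, mirroring the field types above."""
    name: str
    risk_tier: RiskTier                # Classification: risk tier
    owner: str                         # Ownership: use case owner
    business_sponsor: str              # Ownership: business sponsor
    intended_use: str                  # Purpose: intended use
    use_boundaries: str                # Purpose: use boundaries
    lifecycle_stage: str = "Intake"    # Status: lifecycle stage
    approval_status: str = "Pending"   # Status: approval status


# Example: registering a hypothetical credit scoring use case.
record = UseCaseRecord(
    name="Credit scoring assistant",
    risk_tier=RiskTier.HIGH,
    owner="Use case owner",
    business_sponsor="Retail lending",
    intended_use="Support analysts in assessing retail credit applications",
    use_boundaries="Not to be used for automated final decisions",
)
print(record.risk_tier.value)  # high
```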
Risk classification enables proportionate governance.

Higher-risk AI systems receive closer oversight, such as deeper assessments, additional approvals, and more frequent reviews.

Align your classification to the relevant regulatory or internal framework:
| Framework | Classification levels |
|---|---|
| EU AI Act | Prohibited, high-risk, limited-risk, minimal-risk |
| Internal | Critical, high, medium, low |
| Tiered | Tier 1, Tier 2, Tier 3 |
In ValidMind, you can capture risk classifications as custom inventory fields and use them to apply proportionate governance to each use case.
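As a sketch of what proportionate governance can look like, the example below maps an assumed internal tiering scheme (critical, high, medium, low) to oversight requirements. The tiers and the requirements are illustrative assumptions, not ValidMind configuration.

```python
# Illustrative mapping from internal risk tiers to governance requirements.
# Tier names follow the internal scheme in the table above; the numbers are assumptions.
GOVERNANCE_BY_TIER = {
    "critical": {"impact_assessment": True,  "approvals_required": 2, "review_months": 3},
    "high":     {"impact_assessment": True,  "approvals_required": 2, "review_months": 6},
    "medium":   {"impact_assessment": True,  "approvals_required": 1, "review_months": 12},
    "low":      {"impact_assessment": False, "approvals_required": 1, "review_months": 12},
}


def governance_requirements(risk_tier: str) -> dict:
    """Return the oversight a use case of the given tier should receive."""
    try:
        return GOVERNANCE_BY_TIER[risk_tier.lower()]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {risk_tier!r}")


print(governance_requirements("high"))
# {'impact_assessment': True, 'approvals_required': 2, 'review_months': 6}
```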
Impact assessments evaluate potential risks and harms from AI deployment.

They document what an AI system is intended to do, who it could affect, and how identified risks will be mitigated.

Use ValidMind to record and track impact assessments for each registered use case.
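The sketch below shows one way an impact assessment record could be structured in plain Python. The fields and the completeness check are assumptions for illustration, not part of the ValidMind platform.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ImpactAssessment:
    """Illustrative structure for documenting an AI impact assessment."""
    use_case: str
    potential_harms: List[str] = field(default_factory=list)
    affected_groups: List[str] = field(default_factory=list)
    mitigations: List[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Simple completeness check: at least one harm, and no fewer mitigations than harms."""
        return bool(self.potential_harms) and len(self.mitigations) >= len(self.potential_harms)


assessment = ImpactAssessment(
    use_case="Credit scoring assistant",
    potential_harms=["Disparate impact on protected groups"],
    affected_groups=["Retail loan applicants"],
    mitigations=["Periodic fairness testing", "Human review of declined applications"],
)
print(assessment.is_complete())  # True
```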
ValidMind tracks AI systems through their lifecycle:

```mermaid
graph LR
    A[Intake] --> B[Assessment]
    B --> C[Approval]
    C --> D[Deployment]
    D --> E[Monitoring]
    E --> F[Review]
    F --> B
```
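To make the allowed transitions concrete, here is a small Python sketch that treats the diagram above as a state machine. The stage names come from the diagram; the code itself is an illustrative assumption, not ValidMind behavior.

```python
# Allowed transitions, taken directly from the lifecycle diagram above.
LIFECYCLE = {
    "Intake": ["Assessment"],
    "Assessment": ["Approval"],
    "Approval": ["Deployment"],
    "Deployment": ["Monitoring"],
    "Monitoring": ["Review"],
    "Review": ["Assessment"],  # periodic review loops back to reassessment
}


def advance(current_stage: str, next_stage: str) -> str:
    """Move a use case to the next stage, rejecting transitions not in the diagram."""
    allowed = LIFECYCLE.get(current_stage, [])
    if next_stage not in allowed:
        raise ValueError(f"Cannot move from {current_stage!r} to {next_stage!r}; allowed: {allowed}")
    return next_stage


stage = "Intake"
for target in ["Assessment", "Approval", "Deployment", "Monitoring", "Review"]:
    stage = advance(stage, target)
print(stage)  # Review
```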
Continue to Module 3 to learn about configuring AI workflows.