Durga Analytics • AIGP • BoK v2.1 • 50 Hours

AIGP - Artificial Intelligence Governance Professional
Prep Course

A self-contained training course, assuming no prior background, mapped precisely to the AIGP Body of Knowledge v2.1 (effective 2 Feb 2026).

Course Snapshot

  • Domains I–IV → 12 modules → 52 sequential chapters
  • 40–60 hours of self-paced learning
  • 2.75-hour exam | 85 scored questions | 15 unscored questions
  • Exam-aligned practice matching the BoK blueprint

Curriculum — Domains I–IV (Chapters 1–52)

The course preserves the exact AIGP roadmap: Domain → Competency (Module) → Performance Indicator (Chapter). Chapters are numbered sequentially 1 → 52. Each chapter is designed as a 10–15 minute micro-lesson plus assets.

DOMAIN I — Understanding the foundations of AI governance (Chapters 1–12)
1. I.A — What is AI: types, definitions & simple analogies
2. I.A — Mapping AI risks & harms (individual → societal)
3. I.A — Unique AI characteristics that require governance
4. I.A — Responsible AI principles: fairness, safety, privacy, transparency, accountability
5. I.B — Defining roles & responsibilities for AI governance stakeholders
6. I.B — Building cross-functional AI governance teams (collaboration & diversity)
7. I.B — AI training & awareness program for all stakeholders
8. I.B — Tailoring governance approaches to size, maturity & industry
9. I.B — Developers, providers, deployers, users — responsibilities compared
10. I.C — Policies for oversight across the AI lifecycle (use-case → monitoring)
11. I.C — Policy gap analysis: updating privacy, security & related policies for AI
12. I.C — Third-party risk policies: procurement, contracts & supply chain controls

DOMAIN II — How laws, standards & frameworks apply to AI (Chapters 13–28)
13. II.A — Transparency, choice, lawful basis & purpose limitation in AI
14. II.A — Data minimization & privacy-by-design for AI (DPIAs)
15. II.A — Controller obligations: DSRs, cross-border transfers, breach reporting
16. II.A — Handling sensitive/special categories of data (e.g., biometrics)
17. II.B — Intellectual property issues: training data & model output risks
18. II.B — Nondiscrimination law implications (employment, credit, housing)
19. II.B — Consumer protection law & deceptive AI practices
20. II.B — Product liability basics for AI systems
21. II.C — AI risk classification frameworks (prohibited → minimal risk)
22. II.C — Risk management, technical documentation & impact assessments
23. II.C — Human oversight, transparency & quality management requirements
24. II.C — Distinct obligations for general-purpose AI models (GPMs)
25. II.C — Enforcement, penalties & role-based differences (provider vs deployer)
26. II.D — OECD principles for trustworthy AI: practical translation
27. II.D — NIST AI RMF: core functions, categories & lifecycle mapping
28. II.D — Core ISO AI standards overview (ISO 22989, 42001, 42005)

DOMAIN III — How to govern AI development (Chapters 29–44)
29. III.A — Define business context & use case for the AI system
30. III.A — Conduct and review AI impact assessments (workflow & scoring)
31. III.A — Ethics-by-design: requirements, architecture & human oversight
32. III.A — Identify & mitigate design/build risks (probability/severity matrix)
33. III.A — Documentation & traceability for design & build (compliance artifacts)
34. III.B — Data governance for training/testing: lawful rights & fit-for-purpose
35. III.B — Data lineage & provenance practices and documentation
36. III.B — Training & testing plans: performance, bias, security, interpretability
37. III.B — Issue identification & mitigation during training/testing
38. III.B — Documenting training & testing results for validation & audit
39. III.C — Release readiness: model card, conformity & production checklist
40. III.C — Continuous monitoring strategy & schedule for maintenance/retraining
41. III.C — Periodic assessment activities: audits, red-teaming & threat modeling
42. III.C — Incident management: identification, documentation & remediation
43. III.C — Cross-functional collaboration to diagnose AI incidents
44. III.C — Public disclosures, technical documentation & post-market monitoring

DOMAIN IV — How to govern AI deployment & use (Chapters 45–52)
45. IV.A — Deployment context assessment: objectives, data & workforce readiness
46. IV.A — Differences in AI model types: classic vs generative; proprietary vs open
47. IV.A — Deployment options: cloud, on-prem, edge; fine-tuning, RAG, agents
48. IV.B — Deployment impact assessment for selected AI system
49. IV.B — Vendor & licensing agreement risk identification & review
50. IV.B — Risks & obligations unique to proprietary, in-house models
51. IV.C — Operational controls for deployment: policies, data governance & risk mgmt
52. IV.C — Monitoring, post-market assurance, deactivation controls & external communications

Plans

  • Self-Paced — on-demand; full course + PDF notes
  • Pro (Mentor Review) — instructor feedback & cohort; includes mentor capstone review + Q&A (contact us)
  • Enterprise (Cohorts) — corporate cohorts, tailored labs & LMS export (contact sales)

Instructors & Credibility

Course Authors

Subject-matter experts in AI governance, privacy, compliance, and applied ML — contributors to AIGP-aligned training and real-world governance programs.

Includes: Example artifacts, runbooks, and exam-aligned practice.

Get Started

Enroll or request a cohort. We'll provide access to the curriculum, lab datasets, and project briefs.