What BaseState Protects
Understanding the scope of BaseState training and certification
What BaseState Protects Against
- ✓ Untrained employee AI use
- ✓ Unsafe prompting and misuse of AI tools
- ✓ Uploading sensitive or regulated data into public models
- ✓ Lack of training documentation and proof of compliance
- ✓ Human-factor risk that undermines AI governance frameworks
- ✓ Gaps in demonstrating "good faith" and due diligence to regulators, insurers, and vendors
Alignment with Key Frameworks
- EU AI Act - AI literacy training, human oversight, documentation
- GDPR / North American privacy laws - where AI use touches personal data
- Canada's AIDA (Artificial Intelligence and Data Act) - accountability and responsible AI use
- NIST AI RMF (AI Risk Management Framework) - governance of human use and risk management
What BaseState Does NOT Protect Against
- ✗ It does not replace legal counsel or internal compliance officers
- ✗ It does not by itself make you fully compliant with the EU AI Act, AIDA, GDPR, or any national law
- ✗ It does not audit or certify third-party AI systems or vendors
- ✗ It does not fix AI system design issues, bias, or security flaws in external tools
- ✗ It does not provide cyber insurance, financial indemnity, or regulatory penalty coverage
- ✗ It does not override sector-specific regulations (healthcare, finance, defense, education, etc.)
What BaseState Is
- A structured training program for safe, compliant AI use
- A third-party verification layer that records who has been trained and when
- A supporting component of your broader AI governance and risk management program
What BaseState Is Not
- Not an AI software vendor: we do not sell chatbots, automations, or AI tools
- Not legal advice
- Not a guarantee of full compliance with any specific law
- Not system-level validation or audit of AI tools
- Not cyber insurance or financial indemnity
Disclaimer: BaseState is a training and verification layer for human use of AI. It supports your compliance program but does not replace legal advice or discharge formal regulatory compliance obligations.