AI Compliance & Responsible Use

Last Updated: December 24, 2025

Introduction

This statement explains how Intus Lab Inc. designs, deploys, and governs artificial intelligence systems in alignment with the EU AI Act and US AI governance principles.


1. Scope

This statement applies to all AI-powered features, models, APIs, and services provided by Intus Lab, including beta and production systems.


2. EU AI Act Compliance

Intus Lab follows a risk-based AI framework:

  • No prohibited AI practices (social scoring, unlawful surveillance, or manipulation)
  • High-risk AI systems undergo risk assessment, logging, and human oversight
  • Users are informed when interacting with AI-generated content
  • AI supports human decision-making rather than replacing it

3. AI Output Limitations

AI outputs may be inaccurate, incomplete, or biased, and must not be relied upon for legal, medical, financial, or safety-critical decisions.


4. Training Data & Customer Content

Unless otherwise agreed in writing:

  • Customer data is not used to train AI models
  • API inputs remain under customer control
  • Outputs belong to the customer (subject to applicable law)

5. US AI Compliance & Consumer Protection

Intus Lab aligns its AI practices with FTC guidance and applicable US laws, including commitments to:

  • Truthful representation of AI capabilities
  • Bias mitigation
  • Accountability and auditability
  • Security safeguards

6. Acceptable Use

Users must not use Intus Lab AI systems to:

  • Violate applicable laws
  • Generate harmful or deceptive content
  • Automate high-impact decisions without human review

7. Monitoring & Enforcement

Intus Lab may monitor usage of its AI systems and may suspend or terminate access for violations of this statement.


8. Contact

Direct AI governance inquiries to legal@intuslab.io.