
AI Governance Strategy

Category: ai-compliance · Version: v2025-09 · Updated: 2025-10-09

Our commitment to responsible AI use, privacy, transparency, and human oversight.


(Public Commitment – September 2025)

Our Belief

AI should augment human judgement, not supersede it.

Our AI solutions exist to help teams think better, decide better, and act better – while staying in control.

We see privacy, transparency and human oversight as core design principles, not just compliance requirements. They guide how we build, how we work, and how we earn trust.

Our Core Principles

1. Privacy by Design

a. User data is a guest, not an asset – we protect it, respect it, and never keep it longer than the user chooses.

b. Encrypted and under user control – all prompts, outputs and logs are encrypted at rest; users can access, revoke or delete their data at any time. If a user requests deletion, any Guardian metadata is pseudonymized or unlinked from the user within 7 days, except where retention is required for security, fraud prevention, or legal obligations (e.g. detection of terrorism-related content, human trafficking, or CSAM). Such retained metadata contains no user data, is strictly access controlled, and is deleted or fully anonymized after 90 days.

c. Minimal and intentional collection – we gather only what is needed to deliver value, never for hidden or obfuscated purposes.

d. User-owned traceability – we store raw prompts, outputs and agent run logs so users can audit, replay or delete them – these remain private to the user unless they choose to share access with us.

2. Transparency and Honesty

a. AI is part of Stallo's core – we don't hide it, but we also don't label every sentence or document. Users know when they are interacting with an AI-powered system.

b. Clear about capabilities and limits – we explain what our product can and cannot do, including risks like bias or hallucination.

c. Plain-language model & data overview – we publish which model providers we use (e.g. Anthropic, OpenAI) and explain how we handle prompts, outputs and storage.

d. Contextual transparency – we provide signals, help text and explanations when it matters (e.g. before sending something to a client, or when a workflow is autonomous).

3. Human-in-the-Loop by Default

a. Stallo recommends, drafts and summarizes – but it doesn't take irreversible action silently.

b. Users have full visibility and can review, approve or reject actions before they go live.

4. Autonomy as a User Choice

a. Autonomy is opt-in, with levels users can configure (see the sketch after this list):

  • i. Assistive: No actions taken, only suggestions.
  • ii. Semi-Autonomous: Safe, reversible actions (e.g. draft + delay-send).
  • iii. Autonomous: Fully proactive operation, with audit logging and undo where possible.
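
To make these levels concrete, here is a minimal configuration sketch in TypeScript; the type and field names (e.g. AutonomySettings, delaySendMinutes) are illustrative assumptions, not Stallo's actual API.

```typescript
// Illustrative autonomy settings; names are assumptions, not Stallo's API.
type AutonomyLevel = "assistive" | "semi-autonomous" | "autonomous";

interface AutonomySettings {
  level: AutonomyLevel;
  // Semi-autonomous actions stay reversible, e.g. drafts held for a delay
  // window before sending.
  delaySendMinutes?: number;
  // Fully autonomous runs keep an audit log and support undo where possible.
  auditLogging: boolean;
}

// Autonomy is opt-in: the default is suggestions only.
const defaultSettings: AutonomySettings = {
  level: "assistive",
  auditLogging: true,
};
```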

b. Guardian-Enforced Safety: every output passes through Stallo's Guardian, which grades content against a documented risk taxonomy (e.g. privacy leakage, disallowed use cases, bias severity; see the sketch after this list):

  • i. Low-risk: Proceeds as normal.
  • ii. Medium-risk: Shown with a warning; workflows pause for human-in-the-loop review.
  • iii. High-risk: Blocked by default, but can be expanded by the user.
  • iv. Critical: Blocked and flagged; cannot be expanded or viewed.
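
A hedged sketch of how grading could route an output, mirroring the four tiers above; the real taxonomy and enforcement run server-side and are more nuanced than this.

```typescript
// Hypothetical routing by Guardian risk grade; tier names mirror the list above.
type RiskGrade = "low" | "medium" | "high" | "critical";
type Routing = "proceed" | "pause-for-review" | "block-expandable" | "block-final";

function routeByGrade(grade: RiskGrade): Routing {
  switch (grade) {
    case "low":
      return "proceed"; // proceeds as normal
    case "medium":
      return "pause-for-review"; // shown with a warning; workflow pauses for HITL
    case "high":
      return "block-expandable"; // blocked by default; user may choose to expand
    case "critical":
      return "block-final"; // blocked and flagged; never expandable or viewable
  }
}
```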

c. Runtime Governance – when "Medium" or higher risks occur (see the sketch after this list):

  • i. Workflows and autonomous agents automatically downgrade to human-in-the-loop (HITL) mode until outputs are confirmed safe.
  • ii. A consent-based audit trail is recorded for organizational review.
  • iii. A documented escalation procedure assigns Medium-/High-risk events to a named reviewer role (e.g. workspace admin or compliance officer). Every workspace has at least one reviewer; the first admin is assigned by default. The role can be reassigned, but never left unassigned (sketched below). Critical events generate automated alerts and block further processing.
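
The reviewer rule in item iii amounts to a simple invariant: the role is reassignable but never unassigned. A sketch under assumed types (Workspace and removeReviewer are hypothetical names):

```typescript
// Sketch of the reviewer invariant: a workspace always keeps >= 1 reviewer.
interface Workspace {
  id: string;
  reviewers: string[]; // user IDs holding the named reviewer role
}

function removeReviewer(ws: Workspace, userId: string): void {
  const remaining = ws.reviewers.filter((id) => id !== userId);
  if (remaining.length === 0) {
    throw new Error(
      `Workspace ${ws.id}: the reviewer role can be reassigned, but never left unassigned`
    );
  }
  ws.reviewers = remaining;
}
```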

d. Follow-up and Enforcement

  • i. Metadata from Guardian events is available to Stallo's compliance and AI alignment team to investigate systematic risks, abuse patterns, or malicious usage attempts.
  • ii. Retention of Guardian metadata is justified under GDPR Art. 6(1)(f) (legitimate interest) for security and compliance purposes, and is disclosed in our Data Governance Statement.
  • iii. After 90 days, Guardian metadata is decoupled from user identifiers and kept only as aggregated statistics to monitor trends over time (see the sketch after this list).
  • iv. Repeated critical events may result in temporary account suspension and follow-up with the workspace owner to ensure safe usage.
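
A sketch of the 90-day decoupling in item iii; the record shape (GuardianEvent) and its fields are assumptions for illustration.

```typescript
// Assumed shape of a Guardian metadata record; it contains no user content.
interface GuardianEvent {
  grade: "low" | "medium" | "high" | "critical";
  recordedAt: Date;
  userId: string | null; // set to null once decoupled from the user
}

const RETENTION_DAYS = 90;

function decoupleExpired(events: GuardianEvent[], now: Date): void {
  const cutoffMs = now.getTime() - RETENTION_DAYS * 24 * 60 * 60 * 1000;
  for (const event of events) {
    if (event.userId !== null && event.recordedAt.getTime() < cutoffMs) {
      event.userId = null; // kept only as aggregate statistics afterwards
    }
  }
}
```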

e. Workspace-Aware Logging – expansions of hidden or flagged content are logged and encrypted with workspace keys, giving admins the ability to review them when necessary.

f. Reversible Autonomy – users can pause, revoke or reconfigure autonomy settings at any time.

5. Fairness and Respect

a. Bias-aware by design – we regularly test models and prompts for bias and update system prompts and filters to reduce unfair outcomes.

b. Guardian-powered risk flags – potentially biased, unsafe, or harmful outputs are flagged as "Medium-risk", requiring human review before automation continues.

c. User-driven feedback loop – users can flag problematic output; flagged content remains encrypted and under workspace control, but can be shared with us for improvement if consent is given.

d. No harmful use cases – we refuse to support applications designed to exploit, deceive, or discriminate.

6. Security and Reliability

a. Enterprise-grade foundations – we run on SOC 2 / ISO 27001-certified infrastructure, encrypting data and vectorized embeddings in transit and at rest.

b. User-controlled access – prompts, outputs, user data and logs are encrypted with workspace keys. We do not access them unless explicit, revocable consent is granted. Any support access is logged, auditable and shared with workspace admins.
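
As an illustration of this consent gate, a minimal sketch assuming one grant record per workspace; actual key handling and audit plumbing are more involved.

```typescript
// Minimal sketch of consent-gated support access; names are illustrative.
interface ConsentGrant {
  workspaceId: string;
  grantedBy: string; // admin who granted access
  revoked: boolean;  // consent is revocable at any time
}

function assertSupportAccess(grant: ConsentGrant | undefined, auditLog: string[]): void {
  if (!grant || grant.revoked) {
    throw new Error("No active consent grant: support access denied");
  }
  // Every access is logged, auditable, and visible to workspace admins.
  auditLog.push(
    `support access under grant from ${grant.grantedBy} at ${new Date().toISOString()}`
  );
}
```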

c. Policies and controls – we maintain internal policies, processes and access controls that prohibit unauthorized data access. Exceptional "break-glass" access follows a documented, multi-party approval process and is treated as a security incident.

d. Strict access control – we apply least-privilege access, strong authentication, and continuous monitoring for keys and secrets.

e. Resilience and monitoring – we track uptime, model version health, and workflow performance, with fallback to HITL if anomalies are detected.

f. Continuous hardening – security reviews and improvements are a part of our regular product development lifecycle.

How we put this into practice

  • Design – Map privacy, bias, and autonomy risks early. Design encryption, access control and Guardian grading as defaults.
  • Build – All primary data storage is in the EEA. Model inference currently relies on providers located in the United States; all such cross-border transfers are governed by the EU Commission's Standard Contractual Clauses (SCCs), which are included in our providers' Data Processing Addenda (DPAs). Transfer Impact Assessments (TIAs) are completed prior to any pilot or production processing of EU personal data, including early testing phases, and are reviewed annually until EU-hosted equivalents become viable. We implement consent flows for support access and enterprise-friendly key management, and maintain signed copies of all DPAs and SCCs for regulator or enterprise review on request.
  • Deploy – Clearly show when and how prompts and outputs are stored. Give users and admins controls to export, delete and revoke access.
  • Operate – User data, including conversation history, is fully user-controlled and can be cryptographically purged by key deletion; backup copies are also deleted or rendered inaccessible during key destruction, ensuring full erasure (see the sketch after this list). Guardian metadata logs are stored separately, contain no user content, and remain available for compliance and security analysis even if the original data is deleted.
  • Monitor – We track workflow health without exposing private content. When a user chooses to expand or view flagged content, the action is logged as an indicator that a potential breach was viewed, but the content itself is never transferred to Stallo's compliance and AI alignment team. Organizational review of flagged events requires data-owner consent; if consent is denied, Guardian metadata remains available for severity assessment, but we cannot view the content. Continued refusal to cooperate may lead to temporary or permanent suspension under our acceptable use and safety policies.
  • Improve – Review this strategy quarterly and after major changes. Report publicly on progress under the AI Pact and refine based on user, admin, and regulatory feedback.
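
The cryptographic purge described under Operate is commonly called crypto-shredding: destroying the workspace key renders every ciphertext copy, including backups, unreadable. A minimal sketch, assuming one data-encryption key per workspace (a real deployment would use a KMS and a key hierarchy):

```typescript
// Crypto-shredding sketch: deleting the key is the purge.
interface KeyStore {
  keys: Map<string, Uint8Array>; // workspaceId -> data-encryption key
}

function purgeWorkspace(store: KeyStore, workspaceId: string): boolean {
  // Ciphertext may remain on disk and in backups, but without the key it can
  // no longer be decrypted, which effects full erasure.
  return store.keys.delete(workspaceId);
}
```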

Our Boundaries

We will not:

  • Use user data for model training without explicit, revocable consent.
  • Deploy fully autonomous decision-making in employment, credit, health or legal contexts without a human review step.
  • Enable covert monitoring, surveillance, or other use cases that violate user trust.

Governance & Accountability

  • Data Ownership – Users and organizations own their data and logs; we are stewards, not owners.
  • Access Model – All access to encrypted data requires explicit, revocable consent and is logged, auditable and shared with workspace admins.
  • Policies & Controls – Unauthorized access is prohibited by policy and treated as a security incident. Exceptional "break-glass" access requires multi-party approval.
  • Transparency – We publish this governance strategy and report annually on our progress under the AI Pact.

Our promise

We would rather lose a customer by being honest about Stallo's limitations than gain one by overpromising.

Trust is not a feature – it's the foundation of Stallo.

Document Information

  • File: ai-compliance/ai-governance-strategy_v2025-09.md
  • Category: ai-compliance
  • Version: 2025-09