Data Privacy
Core Principles
- Your data stays on your infrastructure. For self-hosted deployments, nothing leaves your network unless you configure it to.
- We don’t train on your data. Agent interactions are not used to train AI models. This is a commitment, not a setting.
- You control retention. Agent memory files, message history, and logs are under your control. Delete them whenever you want.
- Minimal data collection. We collect only what’s needed to operate the service (usage metrics, error logs). No telemetry on message content.
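For self-hosted deployments, the retention point above comes down to deleting files you already own. A minimal sketch, assuming a hypothetical per-agent layout under `~/.moe/agents/` (the directory names here are illustrative; your deployment’s paths may differ):

```shell
# All paths are hypothetical; substitute your deployment's actual layout.
AGENT_HOME="${AGENT_HOME:-$HOME/.moe/agents/my-agent}"

# Wipe persisted memory, message history, and logs for a single agent.
# rm -rf succeeds even when a directory was never created.
rm -rf "$AGENT_HOME/memory" "$AGENT_HOME/history" "$AGENT_HOME/logs"
```

Because the data is plain files on your disk, deletion needs no vendor involvement and takes effect immediately.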
For Managed Deployments
If you use Moe’s managed infrastructure:
- Your agent workspaces are isolated (separate VMs or containers)
- We have access for operational purposes (debugging, updates) but don’t read your data
- You can export or delete your workspace at any time
- We use AI model providers’ APIs (Anthropic, Google, OpenAI); each provider has its own data policy
Model Provider Policies
Your agent’s messages are sent to AI model providers for processing. These providers have committed to not training on API data:
- Anthropic (Claude) — API data is not used for training
- Google (Gemini) — API data is not used for training (when using paid API)
- OpenAI (GPT) — API data is not used for training (the default since March 2023; previously this required an opt-out)
GDPR and Compliance
For EU-based deployments:
- Data processing agreements available for enterprise customers
- Right to deletion supported (deleting an agent workspace deletes all of its data)
- Data residency in EU regions available on request
- Audit logs available for compliance reviews