Data Privacy & Security
Understand AI provider data policies to make informed choices
As an AI agent, SailFish sends the following data to AI providers:
- Terminal output (IPs, paths, processes)
- Command history (may contain secrets)
- Server configurations
- Business code & logs
- Risks go beyond training — breaches, subpoenas, and internal access are all threats
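Because these categories are exactly where accidental leakage happens, it is worth scrubbing output before it leaves the machine. Below is a minimal sketch in Python; the regex patterns and placeholder names are illustrative assumptions, not part of SailFish:

```python
import re

# Illustrative patterns only; real deployments need broader coverage
# (cloud provider keys, JWTs, private-key blocks, hostnames, etc.).
PATTERNS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),        # IPv4 addresses
    (re.compile(r"(?i)\b(?:api[_-]?key|token|secret|password)"
                r"\s*[=:]\s*\S+"), "<SECRET>"),                  # key=value credentials
    (re.compile(r"/(?:home|Users)/\S+"), "<PATH>"),              # user home paths
]

def scrub(text: str) -> str:
    """Replace likely-sensitive substrings before sending text upstream."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("ssh admin@10.0.0.5  # password=hunter2 log at /home/alice/app.log"))
# -> ssh admin@<IP>  # <SECRET> log at <PATH>
```

Regex scrubbing is best-effort and will miss novel secret formats; treat it as a last line of defense, not a guarantee.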
International Providers
By default, API data is not used for model training, and providers back this with explicit contractual commitments. Consumer products (ChatGPT, Claude.ai, etc.) have different policies.
Training occurs only if you explicitly opt in via the Developer Partner Program. Cleanest data policy among all providers.
View policy →
Most transparent policies, richest enterprise options. Supports customer-managed encryption keys (EKM).
View policy →
Free tier has training risk. Longer retention (55 days) than peers. Enterprise covered by Cloud DPA.
View policy →
China-based Providers
Policies below are for each provider's API platform (not consumer apps). Most API platforms have separate service agreements that may be stricter than consumer products. All providers are subject to PIPL and Data Security Law.
Official docs state: "We will never use your data for model training." AES-256 encryption. Clearest API policy among China-based providers.
View policy →
Has separate Open Platform Terms of Service, but its training policy still references the main privacy policy.
View policy →
API served via Volcengine (Ark platform) with a separate Model Service Agreement and Data Authorization Agreement.
View policy →
API via Qianfan platform (not the Ernie Bot app). Has an independent agreement, a security whitepaper, AES-256 encryption, and Level 3 security certification.
View policy →
Open platform (platform.minimaxi.com) has a separate platform agreement. Gaining prominence with the abab model series and Hailuo AI video generation.
View policy →
Open platform (bigmodel.cn) has a separate privacy policy. Users own their data. Prohibits using outputs to train competing models.
View policy →
Open platform (platform.moonshot.ai) has separate Terms of Service and privacy policy.
View policy →
Risks Beyond Training
"Not used for training" is good news, but your data still faces multiple risks during transit and storage:
Overall Safety Rating
Legal Landscape
China
- Training with personal data requires user consent
- Must not illegally retain identifiable user input
- Authorities can demand training data disclosures
- Art. 13: Processing requires individual consent
- Art. 14: Consent must be fully informed, voluntary, explicit
- Art. 15: Right to withdraw; bundled consent prohibited
A 2024 assessment by Southern Metropolis Daily found that most domestic platforms lack adequate notice and convenient opt-out mechanisms.
European Union
- Art. 5(1)(c): Data minimization — process only what's necessary
- Art. 5(1)(b): Purpose limitation — no repurposing without consent
- Art. 22: Right to explanation of AI decisions
- GPAI model rules effective August 2025
- Most provisions fully effective August 2026
- Requires public training data summaries (6 categories)
Cumulative GDPR fines reached €5.88 billion by 2024. Once data enters model weights, deletion is practically unenforceable.
United States
- No comprehensive federal AI legislation
- Current administration favors deregulation
- California AB 2013: AI Training Data Transparency Act
- California SB 53: Frontier AI Transparency, up to $1M fines
- Colorado: first comprehensive state AI law (effective Jun 2026)
- Over 30 states pursuing AI-related legislation
Choose by Scenario
Security Tips
- Prefer API over consumer products — API data policies are generally stricter than those of consumer apps like ChatGPT and Ernie Bot
- For China-based providers, check settings — confirm "data for model improvement" toggles are disabled
- Never paste secrets directly in conversations — even with no-training promises, data still transits their servers
- Review provider policies periodically — privacy policies can change, so re-check them every 6 months
- Consider local models for sensitive environments — data never leaves your machine, the safest option
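The "never paste secrets" tip can be partly automated with an entropy check: long, high-entropy tokens are disproportionately likely to be API keys or credentials. A rough Python sketch follows; the length and entropy thresholds are illustrative assumptions you should tune for your environment:

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of s."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def looks_like_secret(token: str, min_len: int = 20, min_entropy: float = 4.0) -> bool:
    """Heuristic: long, high-entropy tokens are probably keys or credentials."""
    return len(token) >= min_len and shannon_entropy(token) >= min_entropy

def flag_secrets(text: str) -> list[str]:
    """Return whitespace-delimited tokens worth reviewing before pasting."""
    return [t for t in re.findall(r"\S+", text) if looks_like_secret(t)]

# Flags the high-entropy value; ordinary words and short tokens pass through.
print(flag_secrets("export KEY=aB3dE9fG2hJ7kL4mN8pQ5rS1tU6vW0xYz"))
```

Entropy heuristics produce false positives (e.g. base64-encoded images) and false negatives (dictionary-word passwords), so use them to prompt review, not as a sole gate.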
Based on publicly available provider policy documents as of March 2026. For reference only, not legal advice. Policies may change — verify via original links before use.