The Data Question: What Can You Share?
Every AI tool you use has a privacy policy and terms of service that govern what happens to the data you put into it. Before pasting anything into an AI tool, ask:
- Does my organization's policy permit using this tool with this type of data?
- Is this data confidential, legally protected, or under NDA?
- Could including this information expose my organization or a client to risk?
General rules:
- Most consumer AI tools (the free tiers of ChatGPT, Gemini, etc.) may use your inputs to improve their models. Don't paste confidential business data, client information, or personal data into these.
- Enterprise versions (ChatGPT Enterprise, Claude for Work, Google Workspace AI) typically include data protection commitments. Know which version your organization has deployed.
- When in doubt: anonymize. Replace names, specific numbers, and identifying details before sharing for analysis.
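The anonymization step can be partially scripted. The sketch below is a minimal, illustrative approach using a few hand-written regex patterns (email addresses, US-style phone numbers, dollar amounts); the patterns and placeholders are assumptions for illustration, not a complete PII-detection solution, and names or other free-form identifiers still need manual review before sharing.

```python
import re

# Minimal anonymization sketch. These patterns are illustrative only;
# real PII detection requires a dedicated tool and human review.
PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),      # email addresses
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),  # US-style phone numbers
    (re.compile(r"\$\s?\d[\d,]*(\.\d+)?"), "[AMOUNT]"),           # dollar amounts
]

def anonymize(text: str) -> str:
    """Replace each pattern match with a neutral placeholder."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

sample = "Reach the client at dana.reyes@acme.com or 555-201-3344 about the $1,250,000 bid."
print(anonymize(sample))
# Note: personal names (e.g. "Dana Reyes") are NOT caught by these
# patterns -- replace them by hand, per the rule above.
```

A script like this is a first pass, not a guarantee: always re-read the result before pasting it into any tool.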
The Attribution Question: Do You Need to Disclose?
Disclosure norms for AI use are evolving rapidly and vary significantly by context. There is no single universal rule, but each context offers useful guidance:
- Academic work: Most academic institutions now have explicit AI use policies. Know yours and follow it.
- Journalism and publishing: Many outlets and publishers now require disclosure of AI-generated or AI-assisted content.
- Professional services: If your clients are paying for your expertise and judgment, using AI to generate the substantive output without disclosure may misrepresent what they're paying for.
- Internal work: Generally more permissive, but organizational policy varies.
A useful personal standard: if someone who would be affected by this work would feel misled if they knew how it was produced, you probably need to disclose or reconsider the use.
The Responsibility Question: Who Owns the Output?
AI generates outputs; humans are accountable for them. This is not a legal technicality — it's the operational reality. When you use AI to produce work that goes to a client, a colleague, or the public, you are responsible for that work. The AI cannot be held accountable. You can.
Practically, this means:
- AI output you share without editing or verifying it is output you're presenting as accurate
- AI errors in your work are your errors
- You cannot defend "the AI told me so" for professional or legal accountability
The Questions to Ask Before Using AI on Any Task
- Is this tool permitted by my organization's policy for this type of data?
- Does this data contain anything confidential, personally identifiable, or legally sensitive?
- Does the context require disclosure of AI use?
- Am I maintaining accountability for the accuracy and quality of this output?
- Who could be affected if this output is wrong, and what would the consequences be?