Why This Matters

Ethics in AI isn't an abstract philosophical exercise. It's a set of practical questions you face every time you use AI at work: Can I use AI for this? Do I need to disclose it? What data am I allowed to share with an AI tool? Who is responsible for AI outputs I use? These questions are arising in every organization, and the people who have thought through them in advance are the ones who don't get caught out.

The Concept

The Data Question: What Can You Share?

Every AI tool you use has a privacy policy and terms of service that govern what happens to the data you put into it. Before pasting anything into an AI tool, ask:

  • Does my organization's policy permit using this tool with this type of data?
  • Is this data confidential, legally protected, or under NDA?
  • Could including this information expose my organization or a client to risk?

General rules:

  • Most consumer AI tools (the free tiers of ChatGPT, Gemini, etc.) may use your inputs to improve their models. Don't paste confidential business data, client information, or personal data into these.
  • Enterprise versions (ChatGPT Enterprise, Claude for Work, Google Workspace AI) typically include data protection commitments. Know which version your organization has deployed.
  • When in doubt: anonymize. Replace names, specific numbers, and identifying details before sharing for analysis.
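
One practical way to make the anonymization habit stick is to scrub text programmatically before it ever reaches an AI tool. The sketch below is illustrative only: the names in REPLACEMENTS and the regex patterns are assumptions you would adapt to your own data, and simple pattern matching will miss identifiers it doesn't know about, so treat it as a first pass, not a guarantee.

    import re

    # A minimal sketch of a pre-submission scrub, assuming you know which
    # names and identifier formats appear in your data. The REPLACEMENTS
    # mapping and the regex patterns are illustrative placeholders.

    # Known identifying strings mapped to neutral placeholders.
    REPLACEMENTS = {
        "Acme Corp": "[CLIENT_A]",
        "Jane Doe": "[PERSON_1]",
    }

    # Generic patterns for common identifier formats.
    PATTERNS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),            # email addresses
        (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),  # US-style phone numbers
        (re.compile(r"\$[\d,]+(?:\.\d{2})?"), "[AMOUNT]"),              # dollar figures
    ]

    def anonymize(text: str) -> str:
        """Replace known names and common identifier patterns with placeholders."""
        for name, placeholder in REPLACEMENTS.items():
            text = text.replace(name, placeholder)
        for pattern, placeholder in PATTERNS:
            text = pattern.sub(placeholder, text)
        return text

    notes = "Jane Doe (jane@acme.com) said Acme Corp expects $120,000 in savings."
    print(anonymize(notes))
    # -> [PERSON_1] ([EMAIL]) said [CLIENT_A] expects [AMOUNT] in savings.

Keep the original-to-placeholder mapping on your side so you can re-identify the AI's output after analysis.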

The Attribution Question: Do You Need to Disclose?

Disclosure norms for AI use are evolving rapidly and vary significantly by context. There is no single universal rule, but there are useful reference points:

  • Academic work: Most academic institutions now have explicit AI use policies. Know yours and follow it.
  • Journalism and publishing: Outlets and publishers increasingly require disclosure of AI-generated or AI-assisted content.
  • Professional services: If your clients are paying for your expertise and judgment, using AI to generate the substantive output without disclosure may misrepresent what they're paying for.
  • Internal work: Generally more permissive, but organizational policy varies.

A useful personal standard: if someone who would be affected by this work would feel misled if they knew how it was produced, you probably need to disclose or reconsider the use.

The Responsibility Question: Who Owns the Output?

AI generates outputs; humans are accountable for them. This is not a legal technicality — it's the operational reality. When you use AI to produce work that goes to a client, a colleague, or the public, you are responsible for that work. The AI cannot be held accountable. You can.

Practically, this means:

  • AI output you share without editing or verifying is output you're claiming as accurate
  • AI errors in your work are your errors
  • "The AI told me so" is not a defense, professionally or legally

The Questions to Ask Before Using AI on Any Task

  1. Is this tool permitted by my organization's policy for this type of data?
  2. Does this data contain anything confidential, personally identifiable, or legally sensitive?
  3. Does the context require disclosure of AI use?
  4. Am I maintaining accountability for the accuracy and quality of this output?
  5. Who could be affected if this output is wrong, and what would the consequences be?

The Checklist in Practice

Consider this scenario: a colleague asks you to prepare a summary of client feedback for a board presentation. You have 50 pages of interview notes and want to use AI to synthesize them.

Running through the questions: the data includes client names and confidential business information — you need to check whether your AI tool has the appropriate data protection for this. The output will go to board members — you're accountable for accuracy. The summary will be used to make decisions — consequences of error are meaningful.

The right approach: anonymize the client data before analysis, use an enterprise-tier tool with appropriate data protection, verify key claims against the source documents, and produce the summary yourself with AI's help rather than using AI's summary verbatim.

Hands-On Exercise

Apply the ethics checklist to your real work

Think of three tasks in your current work where you use or might use AI. For each one, run through the five questions from this lesson:

  1. Is the tool permitted for this data?
  2. Is there confidential or sensitive data involved?
  3. Is disclosure required or appropriate?
  4. Am I maintaining accountability for the output?
  5. What are the consequences if the output is wrong?

Based on this review: are there any tasks where you were using AI in a way that needs adjustment? Are there tasks you weren't using AI for that would actually be straightforward to use responsibly?

The goal isn't to stop using AI — it's to use it thoughtfully. Most tasks will pass the checklist easily. A few will need adjustment.

Active Recall

Before moving on — close this lesson and answer these from memory. Then come back and check. Testing yourself (not re-reading) is how this sticks.

1. What data should you generally not paste into a consumer-tier AI tool (the free versions of ChatGPT, Gemini, etc.)? Why?
2. Who is accountable for AI-generated output that you share professionally? What does this mean practically for how you review and share AI outputs?

Reflection

Is there a task you currently use AI for that would benefit from more careful application of the ethics checklist? What would you do differently? Is there a task you've been avoiding AI for due to ethical uncertainty that would actually be fine with the right precautions?

Key Takeaway

AI use requires practical ethical judgment on four fronts: data privacy (what can you share?), attribution (what do you need to disclose?), accountability (you own the output), and impact (who's affected if it's wrong?). Run the five-question checklist before using AI on any consequential task.