
Everywhere I go – whether I’m speaking to CEOs, nonprofit leaders, or IT teams – one question keeps coming up: “What’s actually safe to share with ChatGPT or other AI tools?”

People want to use AI responsibly, but they’re not sure who “owns” the guardrails. The truth is: no single company or government fully controls the answer. It’s shared — between you as the user, the model provider, and the laws that protect our data and privacy. 

And at GadellNet, this matters deeply to us. As a B Corp and Managed Services Provider, we believe technology should amplify impact, not create new risks. That’s why we spend so much time helping organizations adopt AI responsibly—with transparency, security, and trust at the center. 

Let’s Be Clear About Who Owns What When It Comes to AI 

You, the user. 
You own what you type into an AI tool. You’re also responsible for what you expose. If you share sensitive client data, confidential contracts, or internal strategy documents, that’s on you. The safest approach is still: don’t feed a model what you can’t afford to see resurface. (The major frontier models all offer a setting to opt out of having your conversations used for training; you can even turn on a similar setting in LinkedIn.)

The AI provider. 
Companies like OpenAI and Microsoft are responsible for how models are trained and how your data is stored and secured. Enterprise products like Microsoft 365 Copilot and ChatGPT Enterprise specifically state that your data isn’t used to train public models. That’s one of the reasons we advocate for enterprise-grade AI tools for our clients.

The regulators. 
Laws like the GDPR, the CCPA, and emerging AI regulations such as the EU AI Act set the minimum standards for fairness, consent, and data protection. They exist because “responsibility by design” can’t be optional anymore. It must be part of every organization’s DNA.


Five Simple Rules for Safe AI Use 

At GadellNet, these are the guidelines we use ourselves and share with our clients: 

  1. Never input data you wouldn’t want in the wild. 
    If it’s under NDA, contains Personally Identifiable Information (PII), or would create harm if disclosed, it doesn’t belong in a public or consumer AI tool. 
  2. Anonymize before you analyze. 
    Replace names, account numbers, or client identifiers with placeholders before asking a public model to help (see the first sketch after this list). 
  3. Use enterprise AI tools whenever possible. 
    ChatGPT Enterprise, Microsoft Copilot, and Hatz AI all offer strict data isolation that protects your inputs and outputs. 
  4. Create data classifications—and stick to them. 
    Define what’s “public,” “internal,” “confidential,” and “restricted.” Only the first category should touch general-purpose models (see the second sketch after this list). 
  5. Educate your team. 
    Policies don’t protect data; people do. Regular reminders, examples, and brief training can help prevent accidental exposure, especially as new technologies emerge, such as AI-enabled browsers like OpenAI’s Atlas, Perplexity’s Comet, and Google’s Gemini in Chrome.
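
To make rule 2 concrete, here is a minimal sketch of one way to strip identifiers before a prompt ever leaves your environment. It assumes a simple Python workflow; the regex patterns, the placeholder labels, and the anonymize helper are illustrative only, not a complete redaction solution.

```python
import re

# Illustrative redaction patterns: swap obvious identifiers for placeholders
# before text is sent to a public model. Real workflows should use your own
# data formats and a vetted redaction tool.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),      # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),              # US SSN-style numbers
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[ACCOUNT_NUMBER]"),  # long digit runs (cards, accounts)
]

def anonymize(text: str, client_names: list[str]) -> str:
    """Replace known client names and common identifier patterns with placeholders."""
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = ("Summarize the renewal terms for Acme Health "
          "(acct 4111 1111 1111 1111, contact jane.doe@acme.com).")
print(anonymize(prompt, client_names=["Acme Health"]))
# -> Summarize the renewal terms for [CLIENT] (acct [ACCOUNT_NUMBER], contact [EMAIL]).
```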
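
Rule 4 works best when something enforces it. The second sketch below is equally illustrative: it uses the four tier names from the list, and the hypothetical may_send_to_public_model check would sit in front of whatever integration forwards text to a general-purpose model.

```python
# Minimal sketch of a classification gate for rule 4. The tier names mirror
# the four categories above; the function and its use are illustrative.
CLASSIFICATIONS = ("public", "internal", "confidential", "restricted")

def may_send_to_public_model(classification: str) -> bool:
    """Only data classified as 'public' should touch a general-purpose model."""
    tier = classification.strip().lower()
    if tier not in CLASSIFICATIONS:
        raise ValueError(f"Unknown classification: {classification!r}")
    return tier == "public"

print(may_send_to_public_model("Public"))        # True
print(may_send_to_public_model("confidential"))  # False
```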

Why This Matters to You and GadellNet

We’ve built GadellNet on trust, service, and doing the right thing even when no one’s watching. As we help clients navigate AI adoption, that means building a foundation of responsible experimentation. We want teams to use these tools, because they’re powerful, but with eyes wide open about how data flows, where it’s stored, and who can access it. 

A Simple Rule of Thumb 

Before sharing anything with an AI model, ask: “If this information showed up in someone else’s chat or dataset, would that create risk or regret?” 

If the answer is yes – pause. Anonymize it, route it through a private model, or ask your IT partner for a safer workflow. 

AI should make us more human, not less careful. 

At GadellNet, we’re committed to helping organizations strike that balance—using AI to save time, amplify impact, and protect what matters most. If you have questions, contact our team of AI experts today.

Further Reading & References 

1. Data Exposure from LLM Apps: An In-depth Investigation of OpenAI’s GPTs (2024) 
Research from Washington University and the University of Chicago showing that some GPT “Actions” and custom GPTs collect more user data than most realize, underscoring the importance of enterprise governance and access controls. 

2. Malicious and Unintentional Disclosure Risks in Large Language Models for Code Generation (2025) 
A 2025 study highlighting how language models can inadvertently memorize and reveal sensitive information, even without malicious intent. 

3. OpenAI to Retain Deleted ChatGPT Conversations Following Court Order (SiliconANGLE, 2025) 
Details the legal requirement for OpenAI to preserve deleted chat logs—an important case study in how legal and privacy policies can quickly evolve. 

4. OpenAI Is Storing Deleted ChatGPT Conversations as Part of Its NYT Lawsuit (The Verge, 2025) 
Explains how deleted user data is being retained across ChatGPT tiers due to active litigation, prompting new conversations around data retention transparency. 

5. Privacy Under Pressure: What the NYT v. OpenAI Teaches Us About Data Governance (National Law Review, 2025) 
A legal commentary breaking down how this case redefines data governance expectations for organizations using third-party AI systems. 

6. A Survey on Privacy Risks and Protection in Large Language Models (2025) 
A global review of privacy threats in large language models and the mitigation strategies (like differential privacy and federated learning) that are emerging. 

7. Trustworthy AI: Securing Sensitive Data in Large Language Models (2024) 
An engineering-focused paper on designing privacy-preserving AI architectures through encryption, sandboxing, and secure deployment models. 

8. On Protecting the Data Privacy of Large Language Models (2025) 
A ScienceDirect publication examining privacy leakage, model inversion, and best-practice frameworks for safeguarding sensitive information in enterprise deployments.