Alex is Sprintlaw's co-founder and principal lawyer. Alex previously worked at a top-tier firm as a lawyer specialising in technology and media contracts, and founded a digital agency which he sold in 2015.
If you’re running a small business, it’s easy to see why AI tools are tempting. They can help you draft emails faster, brainstorm marketing ideas, summarise notes, and even create first drafts of policies and proposals.
But there’s a big question that comes up quickly (and it’s a smart one to ask): is ChatGPT confidential?
This article is general information only and not legal advice. Because AI tools and their terms can change, and every business handles different types of information, you should get advice for your specific situation.
For New Zealand business owners, questions about ChatGPT confidentiality aren’t just technical. They’re a legal and risk management issue. If you put customer details, staff information, pricing, or commercially sensitive plans into an AI tool, you might be creating privacy risk, confidentiality risk, and even intellectual property issues without realising it.
Below, we’ll break down what “confidential” actually means in practice, how the Privacy Act 2020 fits in, and the practical steps you can put in place so your team can use AI tools confidently (without accidentally exposing data your business is supposed to protect).
What Does “ChatGPT Confidentiality” Actually Mean For Your Business?
When people ask whether ChatGPT is “confidential”, they’re usually asking one (or more) of these questions:
- Will anyone else see what I type?
- Can the tool store, reuse, or learn from my inputs?
- If I paste in personal information, am I breaching privacy law?
- If I paste in business secrets, have I “leaked” confidential information?
From a legal point of view, “confidential” generally means information that:
- is not public;
- has value because it’s not public (commercial value, strategic value, or personal sensitivity); and
- is shared in circumstances where it should be kept private (for example, within your business, with your staff, or with service providers under a contract).
That’s why it helps to separate two concepts that often get mixed up:
- Privacy: usually about personal information (information about an identifiable individual) and the legal obligations around collecting, using, storing, and disclosing it.
- Confidentiality: usually about non-public information (which can include personal info, but also trade secrets, pricing, contracts, and internal plans) and the obligations to keep it secret.
In other words: you can have a confidentiality problem even if there’s no “privacy breach” (for example, exposing your wholesale pricing or business acquisition plans). And you can have a privacy problem even if you didn’t mean to “share” anything publicly (for example, disclosing customer data to a third party tool without the right safeguards).
Is What You Type Into ChatGPT Confidential Under New Zealand Law?
The safest mindset for a business owner is this:
Don’t assume an AI tool is confidential unless you have clear, written terms (and settings) that give you that protection.
In NZ, confidentiality can come from a few sources, including:
- Contracts (for example, a confidentiality clause in an agreement with a supplier or consultant);
- Equitable obligations (where information is clearly confidential and shared in circumstances creating a duty of confidence); and
- Professional duties (for example, some industries have strict confidentiality expectations).
When you use an AI tool, you’re usually dealing with a third-party service provider. Whether your information is “confidential” in a practical sense will depend heavily on:
- the provider’s terms of use and privacy terms (as they change from time to time);
- the settings you have selected (if any) about data retention, logging, and whether inputs may be used to improve services;
- where data is stored and processed; and
- who in your business has access to the account and the conversation history.
From a Privacy Act 2020 perspective, the key issue is often whether what you’re entering counts as personal information (including customer details, client files, staff performance notes, health information, or even identifiable call transcripts).
The Privacy Act doesn’t ban you from using external tools. What it does require is that you handle personal information responsibly, including:
- only collecting/using it for a lawful purpose;
- not using or disclosing it in ways that are unexpected or unfair;
- keeping it secure (reasonable safeguards); and
- being transparent with people about what you’re doing with their information.
So if your team is copying and pasting personal info into an AI tool “to save time”, you should treat that as a compliance decision - not just a productivity hack.
And if you’re handling customer or client data, your website and customer touchpoints should clearly explain how you handle information, including through a well-drafted Privacy Policy.
What Privacy And Confidentiality Risks Should NZ Businesses Watch For?
Most confidentiality issues with AI tools aren’t caused by bad intentions. They come from everyday business habits: moving fast, copying text, and trying to get a quick answer.
Here are the most common risk areas we see for small businesses thinking about ChatGPT confidentiality.
1. Accidentally Sharing Personal Information (Customer Or Staff Data)
If you paste in a customer complaint, an order history, or a staff incident report, you may be disclosing personal information to a third party.
This is especially risky if the information includes “sensitive” details (like medical information, allegations of misconduct, or financial hardship). Even if you remove the person’s name, the details might still identify them.
Practical examples to avoid:
- “Here’s a client email chain with their address and invoice history - draft a response.”
- “Summarise these performance notes about my employee.”
- “Rewrite this complaint including their phone number so it sounds nicer.”
2. Leaking Confidential Business Information
Confidential information isn’t just “secrets”. It can include:
- pricing models and margins;
- supplier terms and rates;
- client lists;
- business sale plans and due diligence materials;
- product roadmaps; and
- internal policies that reveal how your business handles disputes or refunds.
If you share that information with an external tool, you may lose control of who can access it, where it’s stored, and how long it is retained. That can create real commercial harm (especially if your business relies on a competitive edge).
It can also create contractual problems. Many supplier and client contracts include a confidentiality clause that restricts how you can use and disclose information you receive.
3. Breaching Your Client Confidentiality Promises
Even if you’re not in a regulated profession, many businesses promise confidentiality as part of their service (think: consultants, agencies, coaches, IT providers, and anyone handling client data).
If your client gives you information on the basis it will be kept private, and you paste it into an AI tool without clear permission and safeguards, you risk:
- breaching your contract;
- damaging the relationship; and
- creating reputational harm if the client finds out.
Where you work with clients on sensitive projects, it’s common to use a properly drafted Non-Disclosure Agreement to set clear rules (including what tools you can and can’t use with confidential info).
4. IP And Ownership Confusion (Who Owns The Output?)
For many small businesses, the bigger risk isn’t whether the tool is “confidential” - it’s whether the output is safe to use commercially.
For example:
- Can you publish AI-generated copy on your website?
- Can you use AI-generated imagery in marketing materials?
- Does your client agreement allow you to use AI to produce deliverables?
- If you paste your own templates into the tool, are you accidentally giving away valuable IP?
This is why your legal documents should clearly deal with IP ownership and confidentiality, especially in your client-facing Service Agreement.
5. Staff Misuse Or “Shadow AI”
Even if you personally don’t paste sensitive info into AI tools, your staff might.
Common causes include:
- no clear workplace policy about AI tool use;
- pressure to move quickly and “just get it done”;
- employees using personal accounts on personal devices; and
- no training on what counts as personal information or confidential information.
If you have staff, you'll usually want both contractual and policy-based controls - for example, a tailored Employment Contract plus workplace policies that deal with confidentiality and tech use.
How Can You Use AI Tools Safely? A Practical Compliance Checklist
You don’t need to ban AI tools to protect your business. For most NZ small businesses, it’s about setting sensible rules and training your team so you get the benefits without creating unnecessary exposure.
Here’s a practical checklist you can work through.
1. Decide What’s “Never Enter” Information
Create a clear internal rule for high-risk data that should never be pasted into an AI tool.
Common “never enter” categories include:
- customer names, contact details, ID numbers, payment details, or full addresses;
- staff performance or disciplinary information;
- medical or health information (including mental health details);
- client confidential documents (especially if you’re under NDA);
- contracts, deeds, and legal advice you’ve received; and
- passwords, API keys, and system access details.
If your business deals with sensitive sectors (health, finance, education, children’s services), you may need stricter rules.
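If you want to make the "never enter" rule harder to break, a lightweight technical guardrail can back up the policy. Below is a minimal sketch in Python of a pre-check that scans a draft for obvious red flags (email addresses, NZ-style phone numbers, card-like numbers, and credential keywords) before anyone pastes it into an AI tool. The patterns and keywords here are illustrative assumptions only - they won't catch everything, and they're no substitute for training and judgement.

```python
import re

# Illustrative patterns only - assumptions for this sketch, not an
# exhaustive detector for personal or confidential information.
RED_FLAG_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone number": re.compile(r"\b(?:\+?64|0)[\s-]?[2-9](?:[\s-]?\d){6,9}\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
    "credential keyword": re.compile(r"\b(password|api[_ ]?key|secret)\b", re.IGNORECASE),
}

def never_enter_check(text: str) -> list[str]:
    """Return the red-flag categories found in the draft text."""
    return [label for label, pattern in RED_FLAG_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    draft = "Hi, please refund Jane - her email is jane@example.com, ph 021 555 1234."
    flags = never_enter_check(draft)
    if flags:
        print("Do not paste this into an AI tool. Found:", ", ".join(flags))
    else:
        print("No obvious red flags found (still apply judgement).")
```

A check like this can sit in an internal tool or browser step, but the point is cultural as much as technical: staff should expect sensitive drafts to be stopped and questioned before they leave the business.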
2. Use Anonymisation And Data Minimisation
If you want help drafting a customer response or summarising an issue, consider “sanitising” the input:
- remove names and contact details;
- change specific dates/locations that could identify a person;
- describe the issue at a high level (“a customer says the product arrived damaged”); and
- only include what the tool actually needs to do the task.
This aligns with good privacy practice: only use/disclose personal information when it’s necessary and proportionate.
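To make sanitising less manual, some teams use a small redaction script as a first pass. Here's a minimal sketch (again in Python, with illustrative regex patterns that are assumptions rather than a complete solution) that swaps obvious identifiers for placeholders before the text goes anywhere near an AI tool. A human should still review the result, because pattern matching misses names, addresses, and contextual details that can identify someone.

```python
import re

# First-pass placeholder substitutions for obvious identifiers.
# Illustrative only - names, street addresses, and context will slip
# through, so always review the sanitised text by hand.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\+?64|0)[\s-]?[2-9](?:[\s-]?\d){6,9}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"), "[DATE]"),
]

def sanitise(text: str) -> str:
    """Redact obvious identifiers before AI-assisted drafting."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(sanitise(
    "Order placed 14/03/2025 by jane@example.com (021 555 1234) arrived damaged."
))
# -> "Order placed [DATE] by [EMAIL] ([PHONE]) arrived damaged."
```

Pairing a rough automated pass with a quick human check keeps the workflow fast while honouring the "only include what the tool actually needs" principle above.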
3. Set Up Access Controls In Your Business
Confidentiality failures often happen internally, not externally. If multiple staff share one login, or if former staff keep access, your risk increases.
Basic controls to consider:
- unique accounts for staff (where possible);
- multi-factor authentication (MFA);
- rules about using business accounts only (not personal accounts);
- clear offboarding steps when someone leaves; and
- limits on who can use AI tools for certain tasks (e.g. HR issues only handled by managers).
4. Check Vendor Terms Like You Would Any Supplier
If you outsource payroll or use cloud accounting software, you probably don’t do it blindly. AI tools should be treated the same way.
Before adopting any tool in your business, check:
- what the provider says about storage and retention (and what it actually lets you control);
- whether inputs may be used to improve services, and what options (if any) exist to limit that;
- how to delete data and what deletion processes are available;
- where data is processed (especially if it’s stored offshore); and
- what security measures are described.
If the tool provider is processing personal information on your behalf, it’s worth considering whether you need additional contractual safeguards (particularly where you’re handling sensitive information or working with enterprise suppliers). Depending on the provider and your risk profile, this may include terms similar to a Data processing agreement.
5. Train Your Team (And Make It Easy To Follow The Rules)
Policies only work if they’re practical.
A good training approach is to give your team examples of:
- what is “confidential information” in your business;
- what counts as “personal information” under the Privacy Act 2020;
- safe prompts (drafting generic content) vs unsafe prompts (pasting client files); and
- what to do if they think they made a mistake (who to tell, and how quickly).
Many businesses also put in place a specific workplace policy for AI usage so expectations are crystal clear, like a tailored Generative AI use policy.
6. Create An “AI Use” Approval Process For High-Risk Tasks
If you use AI tools for anything that touches customers, legal terms, HR decisions, or regulated advice, introduce a simple approval process.
For example:
- marketing can use AI for first drafts, but a human must review before publishing;
- HR-related prompts must not include staff personal info and require manager review;
- customer dispute responses can be drafted, but final wording must be checked against your refund and complaint processes.
This doesn’t slow you down much, but it significantly reduces the chance of mistakes that later become legal issues.
What Legal Documents And Policies Help Protect You?
Using AI tools safely is part tech, part people, and part legal foundations.
Here are the key documents and clauses that often matter for NZ businesses thinking about ChatGPT confidentiality and broader AI use.
Confidentiality Terms With Staff And Contractors
If your staff (or contractors) will use AI tools, you want clear written obligations about:
- what information must be kept confidential;
- what tools can be used (and what can’t);
- how business information must be stored and protected; and
- what happens if someone breaches confidentiality.
This is commonly addressed through your employment and contractor documentation, starting with a well-drafted Employment Contract.
Customer/Client Contracts That Allow (Or Restrict) AI Use
If you provide services (especially professional services, creative services, consulting, marketing, tech, or admin support), your clients may assume a human is producing the work - unless you tell them otherwise.
Depending on your industry and what you deliver, it can be important to deal with AI usage in your Service Agreement, including:
- whether AI tools may be used;
- what confidentiality safeguards apply;
- who owns the deliverables;
- limitations of liability where appropriate; and
- any client consent requirements for using third-party tools.
NDAs For Sensitive Commercial Conversations
If you’re sharing sensitive information with potential partners, investors, buyers, or suppliers, you generally want confidentiality locked down before you share details.
A Non-Disclosure Agreement can help, and it can also be drafted to cover modern scenarios like restrictions on uploading or processing confidential information through external tools (including AI systems).
Privacy Compliance Documents
If your business collects personal information (most do), your compliance toolkit usually includes:
- a clear Privacy Policy that reflects what you actually do;
- privacy collection notices (especially at the point of collection); and
- internal procedures for storage, access, and deletion.
If you use external providers to process personal information, it’s also worth looking at how you contractually manage that relationship, including whether you need additional safeguards in your supplier terms. In some cases, that may include provisions similar to a Data processing agreement.
An AI Use Policy That Matches Your Business Reality
A practical AI policy should be easy to follow and tailored to what your team actually does. It should cover things like:
- approved use cases (e.g. generic marketing drafts, brainstorming, formatting);
- prohibited use cases (e.g. client data, staff issues, confidential contracts);
- account rules (business accounts only, no shared logins); and
- review requirements (human review before publishing or sending to clients).
For many workplaces, a dedicated Generative AI use policy is the cleanest way to set expectations without trying to squeeze everything into one document.
Key Takeaways
- Don’t assume ChatGPT is “confidential” by default - treat AI tools like any other external service provider and check what the terms and settings actually allow.
- Under the Privacy Act 2020, personal information must be handled carefully, including keeping it secure and not disclosing it in unexpected ways.
- Confidentiality risk isn’t just about customer data - it also includes pricing, strategy, supplier terms, and other commercially sensitive information.
- Create a clear “never enter” list for staff so no one accidentally pastes sensitive customer or employee details into an AI tool.
- Use practical safeguards like anonymisation, restricted access, staff training, and review processes for high-risk tasks.
- Get your legal documents aligned by using the right contracts and policies, such as a Privacy Policy, confidentiality clauses, NDAs, and an AI use policy.
If you’d like help setting up the right privacy and confidentiality protections for your business (including AI use policies, customer terms, and contracts), you can reach us at 0800 002 184 or team@sprintlaw.co.nz for a free, no-obligations chat.