ChatGPT For Business: Legal Compliance Considerations In NZ

Using ChatGPT in your business can feel like a shortcut to doing more with less - faster customer replies, draft marketing copy, internal templates, and even first-pass analysis of documents or data.

But once you start using AI tools in real business workflows, you’re also stepping into a set of legal and compliance issues that many small businesses don’t think about until something goes wrong.

This guide is a practical overview of legal compliance for ChatGPT in business in New Zealand - the main risks, the key laws to keep in mind, and the policies and contracts that can help protect you from day one.

As always, this is general information only. Because the right setup depends on how your business uses AI, it’s worth getting tailored advice before you roll it out widely.

When people search for ChatGPT compliance for business use, they’re usually trying to answer a few practical questions, like:

  • Can we put customer info into ChatGPT?
  • Who owns the content it produces?
  • Can we use AI-generated text/images in marketing and on our website?
  • What if ChatGPT produces something wrong and we rely on it?
  • Do we need staff policies around AI use?

In simple terms, compliance is about using AI in a way that:

  • follows New Zealand law (especially privacy, consumer law, and employment obligations);
  • matches your promises to customers (including in your advertising and terms); and
  • reduces preventable risk (data leaks, misleading claims, IP issues, and contractual disputes).

Even if you’re “just” using ChatGPT to draft emails or marketing content, it can still affect your legal position. For example, if an AI-drafted ad makes a claim you can’t back up, your business (not the tool) is the one exposed.

Are There Any Privacy And Data Rules When Using ChatGPT In NZ?

Yes - and privacy is usually the first place we recommend small businesses start.

In New Zealand, the Privacy Act 2020 applies when your business collects, uses, stores, or discloses personal information. That can include customer names, emails, phone numbers, addresses, purchase history, health information, and sometimes even IP addresses or identifiers depending on context.

Don’t Treat AI Prompts Like A Private Notebook

A common compliance issue is staff copying and pasting personal information (or confidential business info) into ChatGPT prompts to “speed things up”. For example:

  • drafting a reply to a complaint by pasting the customer’s full message with identifying details;
  • asking AI to summarise client files;
  • uploading internal HR notes to generate a performance plan;
  • pasting in spreadsheets with client or employee details.

From a ChatGPT compliance perspective, you should assume that anything you input could become a security, retention, or disclosure issue unless you have strong controls in place.

Practical Privacy Steps For Small Businesses

If you want a “do this first” list, start here:

  • Decide what data is banned from AI tools (for example: customer personal info, employee info, health info, bank details, passwords, commercially sensitive pricing).
  • Create an internal AI use rule so your team isn’t guessing.
  • Update your public-facing privacy documentation if AI affects how you collect or use personal info - many businesses cover this in their Privacy Policy.
  • Check your suppliers (including AI tools and integrations) and put the right terms in place - for some businesses this is where a Data Processing Agreement becomes important.
  • Have a plan for mistakes (for example, if someone accidentally shares personal info in a prompt).

It’s also important to think about overseas disclosures. Many AI tools store or process data outside New Zealand (or use overseas sub-processors). Under the Privacy Act 2020, you need to consider cross-border disclosures (including Information Privacy Principle 12) and take reasonable steps to ensure overseas recipients will protect personal information to a comparable standard - or that another permitted basis for disclosure applies.

If your business handles sensitive information (such as health information, financial details, or information about children), the risk increases - and so does the need for a clear policy and tighter controls.

Who Owns AI-Generated Content And What About IP Infringement?

AI can generate content quickly, but it can also create legal uncertainty around intellectual property (IP). Two big issues come up for NZ businesses:

  • Ownership: do you own what the AI produces?
  • Infringement: does the output copy someone else’s work?

Ownership Isn’t Always Straightforward

In New Zealand, copyright is governed by the Copyright Act 1994. Whether (and when) AI-generated outputs attract copyright protection - and who counts as the “author” or owner - is still developing and can be fact-specific. In practice, the more important point for many businesses is making sure you have the right to use the output commercially under the AI provider’s terms, and that it won’t create downstream disputes with customers or collaborators.

This matters if you’re using AI to generate:

  • website copy, blogs, and product descriptions;
  • training materials or online course content;
  • branding elements (names, slogans, taglines);
  • images, packaging layouts, and design concepts;
  • code snippets or software documentation.

Where possible, treat AI output as a draft and apply human review and editing. This helps you avoid accidental copying, wrong claims, or content that doesn’t reflect your brand properly.

Confidentiality: Protect Your Business Information

Even if you don’t input personal information, you might still risk disclosing confidential business information - pricing models, supplier terms, customer lists, internal processes, product roadmaps, or unreleased marketing campaigns.

If you’re sharing sensitive information with contractors (for example, a marketing agency using AI tools to create content for you), it’s often sensible to have a Non-Disclosure Agreement in place so expectations are clear and enforceable.

And internally, you should make it clear what staff can and can’t put into AI tools - not just as a “good idea”, but as a baseline risk control.

How Do Consumer Law And Advertising Rules Apply To AI-Generated Marketing?

If you use ChatGPT to create ads, product pages, social media captions, or email campaigns, you still need to comply with New Zealand consumer law - and you’re responsible for what you publish.

The key laws to keep in mind include:

  • Fair Trading Act 1986 (misleading or deceptive conduct, false or unsubstantiated representations);
  • Consumer Guarantees Act 1993 (guarantees that apply when you sell to consumers);
  • Unfair contract terms rules (if you use standard form consumer contracts).

AI Can “Confidently” Make Claims You Can’t Prove

A very common risk is AI-generated copy that sounds great, but includes:

  • performance claims (e.g. “guaranteed results” or “works in 24 hours”);
  • medical or therapeutic claims (especially high-risk);
  • pricing statements that don’t match your actual offer;
  • comparisons with competitors that aren’t accurate;
  • statements about being “certified”, “approved”, or “compliant” when you’re not.

From a compliance perspective, build a workflow where AI-drafted content is reviewed against what you can actually deliver, what you’ve tested, and what you can substantiate.

Your Website Terms Still Matter

If you sell online, your legal foundations usually include clear customer-facing terms. AI might help you draft these, but you shouldn’t rely on generic templates (AI-generated or otherwise) without review.

For many businesses, it’s worth putting properly tailored Website Terms And Conditions in place - especially if you’re taking online payments, offering subscriptions, or relying on disclaimers around service limitations.

You can also set boundaries around how customers use your site or platform content through Terms Of Use, which becomes even more relevant if your business has community features, user-generated content, or customer accounts.

What Workplace Policies Do You Need If Staff Use ChatGPT?

If you have employees (or even long-term contractors), it’s important to be clear about how AI tools can be used at work. This isn’t about micromanaging - it’s about avoiding preventable risk and setting consistent standards.

Start With The Basics: Contracts And Confidentiality

Your employment documentation is the foundation. A properly drafted Employment Contract can help cover things like:

  • confidentiality obligations;
  • ownership of work product created during employment;
  • use of company systems and data security expectations;
  • disciplinary processes if policies are breached.

From there, you can layer in a practical AI policy that matches your actual operations.

Include A Clear “AI Use Policy” (Even If You’re Small)

Even a small team benefits from having one source of truth. A good AI policy for ChatGPT use often covers:

  • Approved use cases (e.g. first drafts of marketing copy, brainstorming, internal templates).
  • Prohibited inputs (personal information, client data, sensitive commercial information, logins).
  • Human review requirements (especially for ads, customer advice, technical instructions, and legal/financial content).
  • Transparency rules (when staff must tell a manager that content was AI-assisted).
  • Record-keeping (what needs to be retained and where, particularly if content is used externally).
  • Security controls (approved accounts, devices, and integrations).

For some businesses, it makes sense to incorporate this into a broader technology or conduct framework, or to adopt a dedicated Generative AI Use Policy so expectations are crystal clear.

Be Careful With HR Use Cases

Using ChatGPT for HR tasks (performance management notes, restructure planning, termination letters, or investigating workplace issues) can quickly become sensitive from a privacy and employment law perspective.

If you’re ever tempted to paste in employee complaints, medical information, or performance records, pause and get advice first - it’s rarely worth the risk.

How Do You Manage Contracts And Liability When AI Is In Your Workflow?

Once AI is used “in the system” (customer support, marketing approvals, internal SOPs, product recommendations), you need to think about liability and contract risk in a more structured way. Common risk scenarios include:

  • Customer support: AI suggests troubleshooting steps that damage property or cause injury.
  • Quotes and proposals: AI generates pricing or scope that your team doesn’t notice, creating disputes.
  • Professional services: AI drafts advice that is wrong or inappropriate for a client’s situation.
  • Website content: AI-generated terms or policies are incomplete, inconsistent, or unenforceable.

This doesn’t mean you can’t use AI - it just means you should build guardrails around it.

Set Your “Human In The Loop” Rules

A simple way to reduce risk is to define:

  • what content must be reviewed by a human before it goes out;
  • who is responsible for that review (role-based, not “someone”); and
  • what checks they must perform (accuracy, claims, consistency with policies, tone, customer commitments).

If you’re using AI for anything that could be considered advice (legal, financial, health, safety, technical), it’s particularly important to ensure customers aren’t misled into thinking AI content is professional guidance.

Review Your Customer Terms, Supplier Terms, And Risk Allocation

Contracts are where you set the rules of the relationship - including limits of liability, warranty disclaimers (where lawful), and what happens if something goes wrong.

If you’re:

  • selling services where customers rely on outputs;
  • providing information, reports, or recommendations;
  • offering subscriptions or ongoing support;
  • integrating AI into a product feature;

…it’s worth reviewing your contracts and updating them to reflect the reality of your operations.

The goal is to avoid a mismatch between what your business actually does and what your terms say you do - because that mismatch is where disputes and regulatory issues often start.

Key Takeaways

  • ChatGPT can be a great productivity tool, but your business is still responsible for privacy compliance, marketing claims, and the content you publish.
  • The Privacy Act 2020 is a key part of compliance when using ChatGPT in your business - set rules on what staff can input, manage cross-border disclosures (including IPP 12), and review how personal information is handled.
  • AI-generated marketing must comply with the Fair Trading Act 1986 and Consumer Guarantees Act 1993, so build a human review step to avoid misleading or unsubstantiated claims.
  • IP and confidentiality risks are real - avoid putting sensitive business information into AI tools and consider NDAs where contractors are involved.
  • Workplace expectations should be documented, ideally through clear employment contracts and a practical AI use policy that fits your operations.
  • Don’t rely on generic AI-generated legal documents - customer terms, privacy documentation, and internal policies should be tailored to your business.

If you’d like help setting up your ChatGPT compliance approach - whether that’s a privacy review, an AI policy, contract updates, or advice on a specific use case - you can reach us at 0800 002 184 or team@sprintlaw.co.nz for a free, no-obligation chat.

Alex Solo

Alex is Sprintlaw's co-founder and principal lawyer. Alex previously worked at a top-tier firm as a lawyer specialising in technology and media contracts, and founded a digital agency which he sold in 2015.
