Four Kitchens

Responsible AI use policy

Introduction

At Four Kitchens, we have a unique professional responsibility to inhabit two worlds. In one world, we are forward-thinking, imaginative, and experimental. In the other world, we are constrained by timeline, budget, security, privacy, and accessibility concerns — to name a few. To do good work and preserve our clients’ trust, we must be meticulous, cautious, and pragmatic.

Our approach to AI also inhabits these two worlds. We are excited by the possibilities AI presents, but we are skeptical of its claims and fully aware of its limitations. We constantly try new AI tools, but we carefully vet them before adding them to our workflow. Above all, our use of AI — or our refusal to use it — must serve our clients’ best interests.

Four Kitchens’ Responsible AI Use Policy explains how we should (and shouldn’t) use AI. It’s intentionally short and high-level. The world of AI is changing rapidly, so we must rely on guidelines rather than detailed instructions or checklists.

Terminology and scope

  • “AI” and “AI tools” refer to anything that uses machine learning, large language models, or other techniques to generate content, predict outcomes, or summarize or translate content. In other words, we mean everything from generative AI (ChatGPT and Midjourney) to virtual assistants (chatbots) to notetakers (Fathom).
  • This responsible AI use policy places special emphasis on generative AI. That’s because AI that generates “new” content is both particularly useful and particularly controversial, so much of this policy is written with that type of AI in mind.

Guidelines

Ethical and human-focused use

Above all, we use AI ethically and in the service of all humans. This begins with acknowledging that AI is inherently flawed. These flaws include:

  • Bias. The most popular LLMs are trained on major websites (such as Wikipedia), which are themselves biased due to the demographics of who writes and maintains them. AI as a whole is a mirror reflecting our society’s issues and inequities back to us.
  • Environmental impact. AI requires a lot of energy to train and use, a lot of machinery to operate, and a lot of water to cool it all down.
  • Economic impact. AI is eliminating jobs previously thought safe from automation. AI is also, in some cases, perpetuating gaps in wealth and class. For example, some AIs are trained by real people performing repetitive tasks for very little money.

We keep these flaws in mind when using AI tools and providing AI services to our clients, and we minimize their impact whenever possible. Here are some examples of how we do this:

  • We are intentional about using and implementing AI. We don’t integrate AI tools into a client’s CMS unnecessarily; doing so thoughtlessly can introduce countless problems for the client down the road: added maintenance costs, copyright claims, and privacy issues, to name a few.
  • We help our clients optimize their content for AI. This reduces token cost, which makes their content less expensive for the AI to retrieve — while also making it easier for their audiences to understand and their team to maintain. (See the sketch after this list for a rough illustration of token cost.)
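
For context, “token cost” refers to the way most AI services meter usage: text is split into tokens, and shorter, clearer content consumes fewer of them. The following minimal sketch, written in Python and assuming the open-source tiktoken tokenizer (the example strings are invented for illustration), shows how trimming verbose content reduces its token count:

    # A minimal sketch assuming the open-source "tiktoken" tokenizer
    # (pip install tiktoken). Any tokenizer would make the same point:
    # leaner content means fewer tokens, and fewer tokens cost less.
    import tiktoken

    encoding = tiktoken.get_encoding("cl100k_base")  # used by many recent models

    original = (
        "Our offices are open for business each week from Monday through "
        "Friday, beginning at nine o'clock in the morning and closing at "
        "five o'clock in the evening."
    )
    trimmed = "Hours: Mon-Fri, 9 a.m. to 5 p.m."

    for label, text in [("original", original), ("trimmed", trimmed)]:
        print(f"{label}: {len(encoding.encode(text))} tokens")

The same trimming that lowers token cost also tends to make content easier for human readers to scan.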

Transparency

We are transparent about our use of AI. There’s no good reason to hide it — in fact, being open about our use of AI sends a positive message to our clients and the world that we’re leading the way through innovation and thought leadership.

However, we don’t disclose every time we use AI to write an email or a line of code. (This is, in fact, impossible. Countless tools and services have AI “baked in,” and we are often not aware AI is being used in the background.) Web Chefs should exercise their own judgment when deciding what to disclose. When in doubt, we err on the side of transparency.

Here are the general rules we follow regarding transparency:

  • We inform our clients about our meaningful use of AI tools.
  • We inform our clients about our meaningful use of generative AI in deliverables.
  • We inform clients when their data may be used to train AI, and we explain the steps we take to protect, obscure, and/or anonymize their data.

Documentation

We document our use of AI tools, both internally and for client engagements. This includes documenting:

  • The process we use to review, approve, and reject AI tools.
  • The AI tools we have approved for use, their intended use, and any concerns raised during the review process — particularly data privacy concerns. Each tool’s policy regarding the use of data for training purposes is specifically documented and/or linked to.
  • The AI tools we have rejected for use and the reasons why.
  • The AI tools we have used in each client engagement, as well as a brief explanation of how they were used.
  • Any client objections to the use of specific AI tools, and the alternatives, if any, we’ve agreed to.
  • Changes to clients’ AI software settings (e.g., system prompts, temperature).

Documentation is maintained on the Four Kitchens wiki, in project wikis, in code, or wherever is most appropriate for that use case.

Privacy and data security

Privacy and data security for ourselves and our clients are a top priority. We assume anything we upload or paste into an AI tool will be stored permanently — and possibly be used in a response to someone else’s prompt in the future.

Here are the general rules we follow to maintain our and our clients’ privacy:

  • We avoid AI tools that don’t explicitly state that they are not gathering data for training purposes. (Hint: If it’s free, it’s probably gathering your data.)
  • We don’t rely on default settings.
    • Opt out of “store your activity,” “training,” “analytics,” and the like.
    • Consider opting into “auto delete” and other privacy-enhancing options.
  • We scrub all data and remove all identifying information before pasting it into the chat window. (A sketch of one approach follows this list.)
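
As one hypothetical illustration of that last rule, the sketch below redacts a few obvious identifiers before text is shared with an AI tool. The patterns and the redact_identifiers helper are invented for this example and catch only easy cases; they are not a production-ready scrubber, and a human should always review the result.

    # A minimal, hypothetical pre-paste scrubbing sketch. The regexes catch
    # only obvious patterns (emails, US-style phone numbers, IP addresses);
    # they illustrate the idea and do not replace careful human review.
    import re

    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
        "IP": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    }

    def redact_identifiers(text: str) -> str:
        """Replace matched identifiers with labeled placeholders."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[REDACTED {label}]", text)
        return text

    sample = "Reach Jane at jane.doe@example.com or 512-555-0143 (VPN: 10.0.0.12)."
    print(redact_identifiers(sample))
    # Reach Jane at [REDACTED EMAIL] or [REDACTED PHONE] (VPN: [REDACTED IP]).

Note that the name “Jane” passes through untouched, which is exactly why automated scrubbing supplements, rather than replaces, human judgment.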

Intellectual property

According to United States law, only human authorship is protectable. This means AI-generated content is not “ownable” by anyone — not us, our clients, or the creator of the algorithm. Additionally, because most generative AI tools are “trained” using content found on the open web, other people’s work may appear in AI-generated output.

For these reasons, we do not use AI-generated content directly in our final deliverables without (1) receiving permission from our clients in advance, (2) reviewing it for quality, accuracy, and potential biases, and (3) verifying that it does not include other people’s work.

Accountability

Accountability cannot be outsourced to a machine. AI is an assistant, not a replacement for good judgment, and humans are ultimately responsible for the actions of an AI.

We never deliver or incorporate work that has been created by AI without a human reviewing it for quality, accuracy, and potential biases, and verifying that it does not include other people’s work.

If our use of AI or our AI-generated deliverables leads to any negative outcomes, we will take full responsibility and make it right.

However, our clients are responsible for any modifications they make to our deliverables. If it’s possible our clients could modify our deliverables, we should assume they will, and we should educate them about possible negative outcomes.

Onboarding

This Responsible AI Use Policy must be part of new Web Chef onboarding.

How we don’t use AI

For clarity, here are some ways we don’t use AI in our work:

  • Creating high-level strategic direction for our clients. All of our strategy is created by expert humans using real-world experience and research.
  • Replacing human creativity. While we use AI as a partner in the creative process, we do not outsource creativity to AI.
  • Misleading or manipulating our clients or their audiences. All content created using AI must align with our values and those of our clients. AI-generated content should be reviewed for bias, inaccuracies, and other risks.
  • Impersonating other organizations or people. AI has the ability to create content “in the style” of other brands or public figures. We do not allow this without that person’s or brand’s explicit permission.
  • Pretending to be human. All AI tools that we implement will be clearly identified as such and will not pretend to be a human-powered chat tool or customer service agent.