Generative AI use policy guidelines

Overview

The rapid adoption of generative AI holds great promise for innovations that create new opportunities for many organisations and individuals. It is also accompanied by risks, some of which are understood today and others that are emerging or yet to be discovered. Therefore, any organisation developing new capabilities and content with generative AI should have an appropriate usage policy in place.

A deliberate and well-conceived usage policy can assist an organisation in promoting innovation using generative AI technology while managing risks and enabling adjustments as the landscape develops. Sophos has developed a working use policy for generative AI so that our employees can securely pursue new innovations that could benefit our customers and partners. We have since received many requests to share our policy, and we concluded that if it could help our customers, partners, and the industry in general, we would do so.

The discussion, examples, and copyable content in this document can be used to develop an approach — formal or informal — to using generative AI in a company’s business. We hope that this policy framework will help guide the use of generative AI during the early stages of exploration and discovery. As every organisation has custom approaches to strategy and execution, it follows that the use of generative AI should be tailored to the organisation that adopts it.

This document is not a template, and it is intended for informational purposes only. This content is not intended to constitute legal advice, and we recommend consulting a legal or professional advisor before adopting or implementing any policies based on the suggested guidelines and topics below.

Definitions and considerations

“Policy” definition

Here we use the term “policy” in its most general sense to mean rules or guidelines that an organisation establishes to govern behaviour; in this case, the use of generative AI within the organisation. A policy in this context may be formally adopted through a corporate approval process or instituted more informally, and an organisation’s approach to generative AI may be called a policy or something else entirely. Although it is important for an organisation to consider how it communicates expectations, our focus here is on the general approach to generative AI, not on what a policy should be called or how it is implemented or enforced.

The scope

One initial consideration is whom the generative AI usage policy covers. For example, it could apply to all employees and/or to third parties who interact with the organisation, such as contractors, vendors, and technology and sales partners. Alternatively, the policy could cover only a subset of these groups.

Another question to consider is the extent of technology that will be addressed in the policy. For example, “generative AI” may refer to a category of technologies trained on data sets that can generate text, images, video, sound, code, or other work content (output) in response to prompts (input). Examples include ChatGPT/Bard (text-to-text/image), GitHub Copilot (text to programming-language code), Midjourney/Stability AI (text-to-image), and ModelScope (text-to-video). Generative AI can also manifest as a feature in another application.
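
To make the input/output pattern concrete, the minimal sketch below sends a prompt to a text-generation service over HTTP. The endpoint, payload fields, and environment variable are illustrative placeholders rather than any particular vendor’s API.

```python
import os

import requests

# Hypothetical text-generation endpoint; real vendors each define their own
# URLs, payloads, and authentication schemes.
API_URL = "https://genai.example.com/v1/generate"
API_KEY = os.environ["GENAI_API_KEY"]  # never hard-code credentials

def generate_text(prompt: str) -> str:
    """Send a prompt (the input) and return the generated text (the output)."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": 200},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]

print(generate_text("Draft a short agenda for a weekly team meeting."))
```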

Generative AI considerations

Generative AI has the potential to deliver significant benefits by enhancing efficiency and productivity. At the same time, however, current implementations may carry risks, including inaccurate or unreliable outputs (“hallucinations”), biased or inappropriate outputs, security vulnerabilities, intellectual property (IP) and privacy concerns, and vendor licence terms and conditions that may be unacceptable to a given organisation. Moreover, it remains legally uncertain whether generative AI outputs qualify for IP protection and who owns any generative AI-created content. When integrating a generative AI implementation into your organisation’s processes or applications, it is therefore crucial to clearly identify the materials created using generative AI tools to avoid potential complications with any company IP.

Given the rapid, continuing development of generative AI and its evolving risks, organisations can benefit from a usage policy that supports responsible adoption, as outlined below.

Updates and revisions

The rapid innovation in the generative AI domain suggests that a policy should be reviewed regularly and adjusted as necessary. The state of the art, and the legal and regulatory landscape with it, is changing so quickly that a neglected policy may soon become irrelevant.

Generative AI adoption, implementation, and usage

Numerous vendors have developed generative AI implementations with different methods of access (e.g., chat interface, API) through different types of accounts (e.g., personal accounts, free accounts, paid accounts) and under different user terms. A more technical question for organisations is how to enable employees and business partners to access and exchange information with generative AI.

As with other applications used for business purposes, some companies may limit the use of generative AI to corporate accounts. Where the technology offers value to the organisation, they may require the use of accounts whose terms and conditions are acceptable to the company. In this context, it can be useful to treat generative AI options much as you would SaaS vendors and other cloud service providers that operate partly by collecting your data.
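
A minimal sketch of that approach appears below: generative AI calls are only permitted through a company-provisioned gateway with corporate credentials, and each request is recorded for audit. The gateway URL, environment variable names, and log location are hypothetical.

```python
import datetime
import os

import requests

# Hypothetical corporate gateway and credentials provisioned by IT; the
# endpoint and variable names are illustrative, not a real product's API.
GATEWAY_URL = os.environ.get("COMPANY_GENAI_GATEWAY")
CORP_API_KEY = os.environ.get("COMPANY_GENAI_API_KEY")

def corporate_generate(prompt: str, user: str) -> str:
    """Call generative AI only through the approved corporate account."""
    if not GATEWAY_URL or not CORP_API_KEY:
        raise RuntimeError("Personal generative AI accounts are not permitted; "
                           "request corporate access from IT.")
    response = requests.post(
        GATEWAY_URL,
        headers={"Authorization": f"Bearer {CORP_API_KEY}"},
        json={"prompt": prompt},
        timeout=30,
    )
    response.raise_for_status()
    # Minimal audit trail: record who sent a prompt and when, not its content.
    with open("genai_audit.log", "a") as log:
        log.write(f"{datetime.datetime.utcnow().isoformat()} {user}\n")
    return response.json()["text"]
```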

Implementing/adopting a new generative AI platform

Another consideration is the approval process required for the adoption of a generative AI platform. For example, the acquisition of a new generative AI platform for use by an organisation (whether as a standalone application or as a feature of another system) might require following the organisation’s standard procurement process, supplemented with enquiries and terms specific to generative AI.

Steps that could be taken prior to an implementation could include:

  1. Approval by appropriate functional stakeholders, where applicable, from internal organisations such as product management, engineering, data privacy, legal, security, and risk management. Some of these functions could obviously be consolidated under fewer stakeholders.
  2. A technology assessment of the commercial options that covers areas such as those below:
    • Source and quality of the training dataset.
    • Whether inputs and outputs become part of the training dataset and if there is an option to opt out of having the input/output data used to train the generative AI model.
    • The risks of using the generative AI model and internal mechanisms to mitigate/manage them.
    • Ability to comply with the generative AI system’s terms and conditions.
    • Commercial implications and associated licence entitlements.
  3. A business assessment of the planned implementation that accounts for factors such as the following:
    • Implementation cost.
    • Expected return on investment (ROI).
    • The development of tracking mechanisms for helping calculate actual ROI.
    • Planned implementation usage, which is explored in detail in the next section.
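
One way to operationalise these gates is to track them explicitly and block adoption until every gate is satisfied. The sketch below encodes the sign-offs and assessments as a simple record; the approver list and field names are assumptions for illustration, not a standard.

```python
from dataclasses import dataclass, field

# Functional sign-offs required before adoption (an assumed list; consolidate
# or extend it to match your organisation).
REQUIRED_APPROVERS = {"product", "engineering", "privacy",
                      "legal", "security", "risk"}

@dataclass
class AdoptionRequest:
    platform: str
    approvals: set = field(default_factory=set)   # sign-offs granted so far
    tech_assessment_done: bool = False            # training data, opt-outs, T&Cs
    business_assessment_done: bool = False        # cost, ROI, planned usage

    def ready_to_adopt(self) -> bool:
        """Adoption proceeds only when every gate is satisfied."""
        return (REQUIRED_APPROVERS <= self.approvals
                and self.tech_assessment_done
                and self.business_assessment_done)

request = AdoptionRequest(platform="ExampleGenAI")
request.approvals |= {"product", "engineering", "privacy", "legal"}
request.tech_assessment_done = True
request.business_assessment_done = True
print(request.ready_to_adopt())  # False: security and risk have not signed off
```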

Use of approved generative AI

  1. Each new use case of generative AI could be subject to an approval process. For example, one possibility is to name a designated approver for each functional stakeholder so that learning is concentrated and accelerated.
  2. Use of safety features. If applicable, each user may be required to enable all available safety features, monitor for new ones, and enable them as they become available.

Prohibited by default, approved by exception

In some cases, it may be useful to require the review and approval of generative AI usage that falls outside the standard set of approved uses. For example, the use of generative AI could be prohibited unless approved by exception. If this is the approach taken, it could be important to update the list of approved use cases regularly due to the speed of innovation.

For example, the following types of use could be prohibited unless specifically approved:

  1. Usage that necessitates the following categories of input, whether in whole or in part:
    • Any confidential information or business-sensitive information.
    • Any personal data or any information that identifies the organisation.
    • Any of the organisation’s IP.
    • Proprietary computer code.
    • Any information concerning the organisation’s customers, suppliers, partners, or other protected information, including personally identifiable information (PII).
    • Any information about employees.
    • System access credentials (for the organisation’s systems or those of any third party).
  2. Usage where the output potentially impacts the rights or obligations of any individual.
  3. Incorporation of the output into the organisation’s technology or other IP.
  4. Usage that breaches the organisation’s policies, contractual obligations, or the technology’s terms and conditions of use.
  5. Any illegal use or use that demonstrates unethical intent (e.g., disinformation, manipulation, discrimination, defamation, invasion of privacy).
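
A prohibited-by-default stance can be partially enforced in tooling by screening prompts before they leave the organisation. The sketch below applies a few naive pattern checks for credentials, email addresses, and confidentiality markers; the patterns are illustrative only, and a real deployment would rely on proper data loss prevention tooling.

```python
import re

# Naive, illustrative patterns for prohibited inputs; these will miss many
# cases (paraphrased, encoded, or translated data) and are not a DLP solution.
PROHIBITED_PATTERNS = {
    "possible credential": re.compile(r"(?i)\b(password|api[_-]?key|secret)\s*[:=]"),
    "possible email/PII": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "confidentiality marker": re.compile(r"(?i)\b(confidential|internal only)\b"),
}

def screen_prompt(prompt: str) -> list:
    """Return the names of any prohibited-input patterns found in the prompt."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(prompt)]

violations = screen_prompt("Debug this login script: password = 'hunter2'")
if violations:
    print("Blocked before sending:", ", ".join(violations))
```

Because pattern matching will miss paraphrased or encoded data, such a filter complements, rather than replaces, user training and the approval process described above.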

Code written by generative AI

Implementing generative AI to rewrite existing code into modern, memory-safe languages is a complex and ambitious endeavour that involves several technical, ethical, and practical considerations. Here is a sampling of issues to consider:

  1. Quality and reliability: preserving the functionality of the original code whilst adhering to modern memory-safe practices.
  2. Security and vulnerability analysis: a thorough review of the generated code to validate secure practices.
  3. Performance: assessment and optimisation of the generated code so that it meets or exceeds the performance of the original code.
  4. IP rights: establishing the IP rights for AI-generated code, a complicated matter that current legal frameworks may not fully address.
  5. Data privacy and compliance: taking appropriate data protection measures, in accordance with relevant regulations, to avoid inadvertently exposing sensitive or personal data used in training the generative AI models.
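
For the first item, quality and reliability, one practical check is differential testing: run the original routine and the AI-generated rewrite against the same inputs and compare the results. The sketch below uses two stand-in checksum implementations; passing such tests is necessary but not sufficient evidence of functional equivalence.

```python
# Differential test harness: the two implementations below are stand-ins for
# an original routine and its AI-generated rewrite.

def original_checksum(data: bytes) -> int:
    """Legacy implementation being replaced."""
    total = 0
    for b in data:
        total = (total + b) % 65521
    return total

def rewritten_checksum(data: bytes) -> int:
    """AI-generated rewrite under review."""
    return sum(data) % 65521

def behaves_identically(cases) -> bool:
    """True if both implementations agree on every test case."""
    return all(original_checksum(c) == rewritten_checksum(c) for c in cases)

test_cases = [b"", b"hello", bytes(range(256)), b"\x00" * 1024]
print(behaves_identically(test_cases))  # True: outputs match on these cases
```

Security, performance, and the legal questions above still require their own review; matching outputs says nothing about, for example, timing behaviour or the licensing status of the generated code.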

Utilisation of ChatGPT and comparable tools for personal productivity

In some organisations, the use of generative AI platforms may be permitted for the purpose of increasing personal administrative productivity, as illustrated below. Any such use should be subject to the following:

  1. Avoidance of any of the organisation’s prohibited uses.
  2. Adherence to the applicable terms, conditions, and policies.
  3. Where available, opting out of training dataset contributions before usage.
  4. Verification of the accuracy/reliability/appropriateness of the output prior to implementation.

Examples of permitted business use of ChatGPT (and similar free generative AI tools) through personal accounts:

  1. Fact-checking or research similar to using Google search, Wikipedia, and other internet resources.
  2. Creating first drafts of routine emails and internal documents.
  3. Editing documents.
  4. Generating fundamental concepts (e.g., compiling a list of social activities for an offsite, explaining the functionality of a specific code block, outlining the process of writing a particular function).

An organisation may decide to develop training and/or certification for individuals who will use generative AI. Current security training courses could be updated to include the risks associated with using generative AI.

It is also worth considering the implications of a user not adhering to the framework. The consequences will likely be decided in the same way as for other formal or informal company policies.

Final thoughts

Generative AI will impact many aspects of an organisation, with known and unknown risks that need to be skilfully mitigated. We are at the early stages of understanding the technology’s impact, and proactive organisations will not restrict the innovation potential of generative AI. Reducing risks while encouraging exploration, curiosity, and trial and error will be the hallmark of the winners in this new age.

Taking a skilful approach to establishing usage policies tailored to an organisation’s likely use cases is a good initial step as the world adjusts to generative AI and its numerous possibilities. Beyond this, policies and guidelines could be integrated into a larger governance and risk management strategy, which may include forming a steering committee, conducting regular audits and risk assessments, and establishing ongoing policy refinement processes to balance the responsible use of generative AI with appropriate risk mitigation efforts.