The rapid adoption of Generative AI holds great promise, with innovations that create new opportunities for many organizations and individuals. It is also accompanied by risks, some of which are understood today and others that are emerging or yet to be discovered. As such, any organization developing new capabilities and content with Generative AI should have an appropriate Use Policy in place.
Sophos has developed an initial Use Policy for Generative AI so that our employees can securely pursue new innovations that could benefit our customers and partners. We have since received many requests asking if we could share such a document, and we concluded that if these guidelines could help our customers, partners, and the industry in general, we would share them.
The discussion, examples, and copyable content in this document can be used to develop an organization’s approach – formal or informal – for the use of Generative AI in the company’s business. We hope that this framework for a policy will be helpful in guiding the use of Generative AI during the early stages of exploration and discovery. As every organization has custom approaches to strategy and execution, it follows that the use of Generative AI should be tailored to the organization that adopts it.
This document is not a template, and it is intended for informational purposes only. This is not legal advice, and we recommend consulting with a legal or professional advisor before adopting or implementing any policies based on the suggested guidelines and topics below.
A deliberate and well-conceived Use Policy can help an organization promote innovation using Generative AI technology while managing risks and allowing for changes as the landscape develops.
Definitions and Considerations
Here we use the term “policy” in its most general sense to mean rules or guidelines that an organization establishes to govern behavior, in this case for the use of Generative AI within the organization. A “policy” in this context may be formally adopted through a corporate approval process or adopted more informally. An organization’s approach to Generative AI, as we are thinking of it here, may be called a “policy” or it could be called something else. Although it is important for an organization to consider how it communicates expectations, we are focused on the approach to Generative AI and are not trying to provide guidance on what a policy might be called or how it is implemented or enforced.
One initial consideration is who the Generative AI use policy applies to. For example, it could apply to all employees and/or third parties who interact with the organization, such as contractors and vendors. You might also consider technology and sales partners. Or it could apply to a subset of one or all of these groups.
Another question to consider is the scope of technology that the policy will cover. For example, “Generative AI” may refer to a category of technologies that are trained on data sets and can generate text, images, video, sound, code, or other work product (output) in response to prompts (input). Examples include ChatGPT and Bard (text-to-text), GitHub Copilot (text-to-code), Midjourney and Stability AI (text-to-image), and ModelScope (text-to-video). Generative AI can also appear as a feature within another application.
Generative AI Considerations
Generative AI has the potential to deliver significant benefits by increasing efficiency and productivity. Simultaneously, current Generative AI implementations may carry risks, including inaccurate or unreliable outputs (“hallucinations”), biased or inappropriate outputs, security vulnerabilities, IP and privacy concerns, legal uncertainties, and vendor license terms and conditions that may be unacceptable to a given organization. Additionally, there are legal uncertainties around whether Generative AI outputs qualify for IP protection, as well as around the ownership of any Generative AI-created content. When integrating a Generative AI implementation into your organization’s processes or applications, it is therefore crucial to clearly identify the materials created using a Generative AI tool to avoid potential complications with any company IP.
Because of the ongoing rapid development of Generative AI and its evolving risks, organizations can benefit from a use policy to responsibly adopt Generative AI, as outlined below.
Updates and Revisions
The rapid innovation in this domain suggests that a policy should be reviewed regularly and adjusted as necessary. The state of the art, along with the legal and regulatory landscape, is changing so quickly that a neglected policy can soon become irrelevant.
Generative AI Adoption/Implementation and Use
There are numerous vendors who have developed Generative AI implementations with different methods of access (e.g., chat interface, API) through different types of accounts (e.g., personal accounts, free accounts, and paid accounts) and under different user terms. So, a more technical question for organizations to consider is how to allow employees and business partners to access and exchange information with Generative AI.
As with other applications used for business purposes, some companies may restrict the use of Generative AI to corporate accounts. Where there is value to the organization, they may require the use of accounts whose terms and conditions are acceptable to the company. In this respect, it is helpful to think of Generative AI options similarly to how you might engage with other SaaS vendors or cloud service providers that supply services and operate on collections of your data.
Implementing/Adopting a New Generative AI Platform
Another consideration is the approval process required for adoption of a Generative AI platform. For example, the acquisition of a new Generative AI platform for use by an organization (whether as a standalone application or as a feature of another system) could be required to follow the organization’s standard procurement process, supplemented with specific inquiries related to Generative AI.
Steps that could be taken prior to an implementation could include:
- Approval by appropriate functional stakeholders where applicable. For example: Product Management, Engineering, Data Privacy, Legal, Security, Risk Management. Some of these functions could obviously be consolidated under fewer stakeholders.
- A technology assessment of the commercial options, for example in the following areas:
- Source and quality of the training data set;
- Whether inputs and outputs become part of the training data set, and the ability to opt-out of having the input/output data be used to train the Generative AI model;
- Associated risks of using the Generative AI model and internal mechanisms to mitigate/manage such risks;
- Ability to comply with the Generative AI system’s terms and conditions;
- Commercial implications and associated license entitlements.
- A business assessment of the planned implementation, for example:
- Implementation cost
- Expected return on investment
- Development of tracking mechanisms for evaluation of actual return on investment
- A usage assessment of the planned implementation:
- This is explored in detail in the next section
Use of Approved Generative AI
- Each new use case of Generative AI could be subject to an approval process. For example, one possibility is to name a designated approver for each functional stakeholder so that learning is concentrated and accelerated.
- Use of safety features. If applicable, each user could be required to enable all available safety features, including monitoring for and using new safety features as they become available.
Prohibited by Default, Approved by Exception
In some cases, it may be useful to require review and approval of Generative AI use outside the set of approved uses. For example, use of Generative AI could be prohibited unless approved by exception. If this is the approach taken, it could be important to update the list of approved use cases regularly due to the speed of innovation.
For example, the following types of use could be prohibited unless approved:
- Usage that necessitates the following categories of input, whether in whole or in part:
- any confidential information or business sensitive information
- any personal data or any information that identifies the organization
- any organization intellectual property
- proprietary computer code
- any information about the organization’s customers, suppliers, or partners, or other protected information, including PII
- any information about employees
- system access credentials (for the organization’s systems or those of any third party)
- Usage where the output potentially affects the rights or obligations of any person.
- Incorporation of the output into the organization’s technology or other intellectual property.
- Any use that violates the organization’s policies, contractual obligations, or the technology’s terms and conditions for use.
- Any unlawful usage or usage that demonstrates unethical intent (e.g., disinformation, manipulation, discrimination, defamation, or invasion of privacy).
Code Generation by Generative AI
Implementing Generative AI to rewrite existing code into modern, memory-safe languages is a complex and ambitious undertaking that involves several technical, ethical, and practical considerations. For example, here is a subset of issues to consider:
- Quality and reliability: the functionality of the original code should be preserved while adhering to modern memory-safe practices
- Security and vulnerability analysis: thorough review of the generated code to validate secure practices
- Performance: assessment and optimization of the generated code so that it meets or exceeds the performance of the original code
- IP rights: establishing the IP rights for AI-generated code can be a complicated matter, and current legal frameworks may not fully address these scenarios
- Data Privacy and Compliance: Generative AI models, trained using sensitive or personal data, may inadvertently expose such data in the generated code, requiring appropriate data protection measures in accordance with relevant regulations
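To make the first bullet above concrete, here is a minimal, hypothetical sketch of the kind of transformation involved in such a rewrite: a C-style routine that reads past the end of a buffer when given a bad length, and a Rust equivalent in which the bounds check is enforced by the type system rather than by programmer discipline. The function name and scenario are illustrative only, not taken from any real migration project.

```rust
// Legacy C-style logic (for reference):
//
//   int sum_first_n(int *buf, int len, int n) {
//       int total = 0;
//       for (int i = 0; i < n; i++)
//           total += buf[i];   // no bounds check: reads past the end if n > len
//       return total;
//   }
//
// A memory-safe Rust rewrite: the slice type carries its own length, and
// `get(..n)` returns `None` instead of reading out of bounds.
fn sum_first_n(buf: &[i32], n: usize) -> Option<i32> {
    buf.get(..n).map(|prefix| prefix.iter().sum())
}

fn main() {
    let data = [1, 2, 3, 4];
    // In-range request behaves like the original code.
    assert_eq!(sum_first_n(&data, 3), Some(6));
    // An out-of-range request is rejected instead of reading invalid memory.
    assert_eq!(sum_first_n(&data, 10), None);
    println!("ok");
}
```

Note that even in a small example like this, a reviewer must confirm the rewrite preserves the original behavior for valid inputs while changing it for invalid ones, which is exactly the quality, reliability, and security review work the bullets above describe.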
Usage of ChatGPT and Similar Tools for Personal Productivity
In some organizations, use of Generative AI platforms may be permitted for the purpose of increasing personal administrative productivity, as illustrated below. Any such use should be subject to the user: (i) avoiding any of the organization’s prohibited uses; (ii) adhering to the applicable terms and conditions and applicable policies; (iii) where available, opting out of training data set contributions prior to use; and (iv) verifying the accuracy, reliability, and appropriateness of the output prior to implementation.
Examples of Permitted Business Use of ChatGPT and Similar Free Generative AI Tools Through Personal Accounts
- Fact-checking or deepening understanding of a subject matter: using it in the same way Google, Wikipedia, or other internet resources are used
- First drafts: creating first drafts of routine emails or internal documents
- Editing documents
- Generating general ideas (e.g., a list of social activities for an offsite, how a particular code block works, or how to write a particular function)
An organization may decide to develop training and/or certification for individuals who will use Generative AI. Current security training courses could be updated to include the threats and risks associated with use of Generative AI.
It is also worth considering the consequences of a user not following the framework. This will likely be handled similarly to other formal or informal company policies.
Generative AI will impact many aspects of an organization, with known and unknown risks that need to be skillfully mitigated. We are in the early stages of understanding the impact, and forward-thinking organizations will not limit the innovation possibilities with Generative AI. Reducing risks while encouraging exploration, curiosity, and trial and error will be the hallmark of the winners in this new age.
A skillful approach to establishing use policies and guidelines tailored to an organization’s likely use cases is a good first step as the world adapts to Generative AI and its many possibilities. Beyond this, use policies and guidelines could be integrated into a larger governance and risk management strategy, which may include forming a steering committee, regular audits and risk assessments, and ongoing policy refinement to balance the responsible use of Generative AI with appropriate risk mitigation efforts.