Appropriate Use of Generative AI Tools

About Generative AI Tools

With the emergence of various generative AI tools (e.g., ChatGPT, Google Gemini, DALL-E, Microsoft Copilot, Zoom AI Companion, Adobe Acrobat AI Assistant, and others), members of the campus community are eager to explore their use in the university context. This advisory provides guidance on using these tools to support innovation without putting institutional, personal, or proprietary information at risk.

In all cases, use should be consistent with UC Berkeley’s Principles of Community and the UC Principles of Responsible AI (summary / full report).

Allowable Use by Data Classification

  • Publicly available information (Protection Level P1) can be used in generative AI tools.

  • UC has agreements covering the following AI tools, which may allow use with more sensitive information, as listed. Be sure you are using these tools under UC’s contracts, rather than through individual consumer accounts, so that UC’s contractual protections apply. Limitations of use under UC’s contracts are noted below:
    • Microsoft Copilot: Approved for use with P2 information

    • Adobe Firefly: Approved for use with P1 information (no additional protections under the UC Berkeley agreement)

    • Zoom AI Companion (pilot only): Approved for use with P2 information, provided everyone actively consents to its use each time.

    • Otter.ai: Approved for use with P3 information, provided everyone actively consents to its use each time.

    • For more information about AI tools available through UC systemwide and UC Berkeley licensing, please review the list of Tools by Data Protection Level and Availability.

Prohibited Use

  • Completion of academic work in a manner not allowed by the instructor.
  • Unless specifically stated in the "Allowable Use" section above, no personal, confidential, proprietary, or otherwise sensitive information may be entered into these tools or generated as output from models or prompts. Student records subject to FERPA and any other information classified as Protection Level P2, P3, or P4 should not be used unless specifically allowed under UC's contracts (see the list in the "Allowable Use by Data Classification" section above). This includes:

    • Creation of non-public instructional materials
    • Proprietary or unpublished research
  • Some generative AI tools, such as those from OpenAI, explicitly forbid their use for certain categories of activity, including harassment, discrimination, and other illegal activities. An example can be found in OpenAI's usage policy document. This prohibition is also consistent with UC Berkeley’s Principles of Community.

Additional Consultation Needed

  • New Uses of AI: If you are considering a new use of generative AI in your studies or work, it is your responsibility to consider the ethics and risks involved and to obtain approval from your instructor or responsible unit head. Be sure to take the AI Essentials Training and complete an AI Risk Assessment [UC Berkeley guide coming soon!].
  • Use of AI that involves highly consequential automated decision-making requires extreme caution and should not be employed without prior consultation with appropriate campus entities, including the responsible unit head, as such use could put the University and individuals at significant risk.

    Examples include, but are not limited to:
    • Legal analysis or advice
    • Recruitment, personnel, or disciplinary decision-making
    • Seeking to replace work currently done by represented employees
    • Security tools using facial recognition
    • Grading or assessment of student work
  • Personal Liability: Please note that certain generative AI tools use click-through agreements. Click-through agreements, including the OpenAI and ChatGPT terms of use, are contracts. Individuals who accept click-through agreements without delegated signature authority may face personal consequences, including responsibility for compliance with their terms and conditions. [Business Contracts FAQ]
  • For questions regarding privacy with AI tools, contact privacyoffice@berkeley.edu. For questions regarding appropriate use of AI tools, the UC Artificial Intelligence Risk Assessment Guide is a helpful starting point.

Rationale for the Above Guidance

Per the UC Legal - Office of the General Counsel’s Legal Alert: Artificial Intelligence Tools (March 2024):

"[AI] is a tool that calls for the same type of analysis as any other third-party service or product. Indeed, like any tool utilized by UC, whether for education, research, health care, or operational purposes, AI tools and their terms of use are subject to all UC policies regarding third parties and their access to UC data, such as those relating to procurement, research, privacy, data security, accessibility and employee, faculty, and student conduct. ….

“Unless specific precautions are taken, any content that is provided to generative AI tools can be saved and reused by the company offering the tool (e.g., OpenAI for ChatGPT) and their affiliates. Providing data to a generative AI tool requires the same analysis as when UC provides data to any service vendor, research partner, or other third party. Therefore, unless Units enter into properly negotiated agreements with these companies, which would include privacy, confidentiality, security, accessibility and intellectual property terms consistent with UC policy and appropriate for the types of information at issue, Units are prohibited from providing any information that could be construed as confidential information of UC, or confidential information of a third party. Such disclosure could cause UC to be in violation of statutory or contractual requirements.” 

Additional Resources and Guidance

Last updated: November 8, 2024