About Generative AI Tools
With the emergence of various generative AI tools (e.g., ChatGPT, Google Gemini, DALL-E, Microsoft Copilot, Zoom AI Companion, Adobe Acrobat AI Assistant, and others), members of the campus community are eager to explore their use in the university context. This advisory provides guidance on using these tools to support innovation without putting institutional, personal, or proprietary information at risk.
In all cases, use should be consistent with UC Berkeley’s Principles of Community and the UC Principles of Responsible AI (summary / full report).
Allowable Use by Data Classification
- Publicly-available information (Protection Level P1) can be used in generative AI tools.
- UC has license agreements for certain AI tools, which provide protection for use with more sensitive information. Be sure you are using licensed tools, rather than individual consumer accounts, to benefit from UC’s contractual protections. Please see https://ai.berkeley.edu/resources/licensed-generative-ai-tools.
- AI tools procured by units separately from the campus or systemwide agreements above must also be used within the Protection Level limitations approved for the tool, to ensure compliance with the agreement and protections appropriate to the tool’s safety features. Units offering such tools should clearly advise staff and users on the appropriate use and Protection Level limitations of the AI (and all) tools that they offer. Units are encouraged to use, and refer people to, the campus Data Classification Standard and Guidelines for help determining the Protection Level of data being considered for use in unit-provided AI tools.
Prohibited Use
- Completion of academic work in a manner not allowed by the instructor.
- Unless specifically stated in the "Allowable Use by Data Classification" section above, no personal, confidential, proprietary, or otherwise sensitive information may be entered into, or generated as output from, models or prompts. Such information includes:
  - Non-public instructional materials
  - Proprietary or unpublished research
  - Any other information classified as Protection Level P2, P3, or P4 (unless specifically allowed under UC's contracts; see the list in the "Allowable Use by Data Classification" section above).
- Some generative AI providers, such as OpenAI, explicitly forbid the use of their tools for certain categories of activity, including harassment, discrimination, and other illegal activities; see, for example, OpenAI's usage policy document. This prohibition is also consistent with UC Berkeley’s Principles of Community.
Additional Consultation Needed
- New Uses of AI: If you are considering a new use of generative AI in your studies or work, it is your responsibility to consider the ethics and risks involved and to obtain approval from your instructor or responsible unit head. Be sure to take the AI Essentials Training and consult CERC-AIR, a committee that assesses AI risks and offers guidance on mitigating them.
- Use of AI that involves highly-consequential automated decision-making requires extreme caution and should not be employed without prior consultation with appropriate campus entities, including the responsible unit head, as such use could put the University and individuals at significant risk.
Examples include, but are not limited to:
- Legal analysis or advice
- Recruitment, personnel, or disciplinary decision-making
- Seeking to replace work currently done by represented employees
- Security tools using facial recognition
- Grading or assessment of student work
- Personal Liability: Please note that certain generative AI tools use click-through agreements. Click-through agreements, including the OpenAI and ChatGPT terms of use, are contracts. Individuals who accept click-through agreements without delegated signature authority may face personal consequences, including responsibility for compliance with the terms and conditions. [Business Contracts FAQ]
- For questions regarding privacy with AI tools, contact privacyoffice@berkeley.edu. For questions regarding appropriate use of AI tools, the UC Artificial Intelligence Risk Assessment Guide is a helpful starting point.
Rationale for the Above Guidance
“[AI] is a tool that calls for the same type of analysis as any other third-party service or product. Indeed, like any tool utilized by UC, whether for education, research, health care, or operational purposes, AI tools and their terms of use are subject to all UC policies regarding third parties and their access to UC data, such as those relating to procurement, research, privacy, data security, accessibility and employee, faculty, and student conduct. …
“Unless specific precautions are taken, any content that is provided to generative AI tools can be saved and reused by the company offering the tool (e.g., OpenAI for ChatGPT) and their affiliates. Providing data to a generative AI tool requires the same analysis as when UC provides data to any service vendor, research partner, or other third party. Therefore, unless Units enter into properly negotiated agreements with these companies, which would include privacy, confidentiality, security, accessibility and intellectual property terms consistent with UC policy and appropriate for the types of information at issue, Units are prohibited from providing any information that could be construed as confidential information of UC, or confidential information of a third party. Such disclosure could cause UC to be in violation of statutory or contractual requirements.”
Additional Resources and Guidance
- Hub Website for AI at UC Berkeley
- Hub Website for AI systemwide at University of California
Last updated: July 25, 2025