Yale University AI Guidelines for Staff
Updated March 2025

Yale University is committed to fostering a culture of innovation and learning, encouraging staff to explore and experiment with artificial intelligence (AI) tools in their work. Recognizing that experimentation may involve challenges and setbacks, we celebrate the learning opportunities that arise from these experiences. As AI technologies continue to evolve, it is essential to use them responsibly, ensuring the protection of Yale’s institutional data and adherence to ethical standards. These guidelines are designed to support staff in the responsible and effective use of AI:

  1. Embrace Experimentation and Innovation
    1. Explore AI Tools: Staff are encouraged to experiment with AI to enhance work processes and outcomes.
    2. Learn from Failures: Not all AI applications will succeed, and that’s okay. View challenges as opportunities to refine processes and deepen understanding.
  2. Safeguard Yale’s Institutional Data
    1. Data Classification Awareness: Always assess whether an AI tool is appropriate for the type of information you are handling. AI users must adhere to Yale’s Data Classification Policy, ensuring that no moderate or high-risk data is entered into AI tools that are not explicitly approved for such use. Visit Yale’s Information Security Office site for detailed information on risk classification.
    2. Institutional Data Protection: Treat Yale’s institutional data with the same care as personal or confidential information, ensuring that sensitive university-related content is not exposed to external AI systems.
  3. Ensure Ethical and Responsible Use
    1. Monitor AI Outputs: AI-generated content may be inaccurate or biased. Always verify information before using it in decision-making or communications.
    2. Comply with Policies: AI use should align with Yale’s ethical standards, privacy guidelines, and contractual obligations. In addition, be aware that state, federal, and international laws governing the development and deployment of AI tools continue to emerge, particularly around automated decision-making tools with significant impacts on individuals.
  4. Utilize University-Approved AI Resources
    1. Secure Platforms: When dealing with sensitive information, consider Yale’s approved AI tools, such as Clarity, which provide safer environments for institutional data. A detailed list of approved AI tools with their risk classifications can be found on the AI Tools & Resources page.
    2. Project Safeguards: Staff working on AI-driven projects should follow the AI Request process to ensure appropriate safeguards are in place.
  5. Engage in Continuous Learning and Collaboration
    1. Educational Opportunities: Yale offers AI training sessions, workshops, and resources to help staff enhance their knowledge and responsible use of AI. See ai.yale.edu for more information.
    2. Community Engagement: Share insights and collaborate with colleagues to advance AI literacy and best practices across the university.

By following these guidelines, staff can confidently explore AI technologies while ensuring responsible, ethical, and secure usage that aligns with Yale’s mission and values.