AI Guidelines

Welcome to George Mason University's comprehensive artificial intelligence (AI) guidelines. 

These resources provide a framework and guidance for the responsible and ethical use of AI across our academic community. Choose your role below to access tailored guidelines that will help you navigate AI use effectively while maintaining academic integrity and supporting your educational goals.

AI guideline documentation will continue to evolve as tools, student expectations, and pedagogical approaches change. Instructors are invited to offer feedback or suggest additions by contacting the AI Task Force or the Stearns Center.

For Students

Comprehensive guidance on using AI responsibly in your coursework, understanding academic integrity policies, and making informed decisions about AI tools in your learning journey.

For Faculty and Instructors

Practical guidance for integrating AI into your teaching practice, creating course policies, and supporting student learning while maintaining academic standards.

For Researchers

Guidelines for the responsible use of AI in research activities, including data security, ethical considerations, and compliance requirements for all research endeavors.

Uses of AI that Violate Standing Policies at George Mason University 


Because AI technologies continue to evolve, the following list of AI uses that violate standing policies at George Mason University cannot be exhaustive. For all university-related AI activities, use university-approved platforms and resources. When in doubt, contact your unit lead before integrating or using an AI tool in your activities.

  • Data Privacy and Confidentiality Violations: Do not enter confidential information, including proprietary data, student data, personal information, or any other data that is considered private or sensitive, into publicly available AI tools. Do not access, share, or manipulate personal or institutional data without proper authorization. Be aware of the risks of using free online services, which typically monetize your data, your identity, and how the service is used. Several university policies, including University Policy Number 1114, lay out the responsibilities of units and individuals at George Mason University regarding data stewardship.
     
  • Security Violations: Do not use AI for any activity that could compromise university systems or networks, including hacking, breaching security measures, or exploiting vulnerabilities. These activities have serious legal and disciplinary repercussions per University Policy Number 1301 on the responsible use of computing.     
     
  • Malicious Content: Do not use AI to create or distribute malicious content, including malware, viruses, phishing emails, or any other content designed to harm individuals, systems, or networks. Such uses are prohibited under University Policy Number 1301 on the responsible use of computing.    
     
  • Intellectual Property Infringement: Do not use AI to create or distribute content that violates copyright or intellectual property laws, per University Policy Number 1104 on copyrighted materials. Be aware that AI-generated images in official university materials, and any AI-generated materials produced using university resources, currently carry significant intellectual property risk and could expose the university to liability.
     
  • Deception and Misinformation: Do not use AI to deceive others. This includes creating false communication, such as fake news articles, fabricated messages, or misleading information, with the intent to deceive or manipulate. Do not use AI to alter or fabricate data to support false claims or to mislead others, and do not use AI for spamming or phishing activities. Such uses are prohibited under University Policy Number 1301 on the responsible use of computing, as well as the Commonwealth of Virginia’s DHRM Policy 1.75: Use of Electronic Communications and Social Media.
     
  • Unauthorized Surveillance: Do not use AI for any form of unauthorized surveillance of individuals on university grounds or within university systems. This includes using AI for surveillance of students, faculty, or staff, tracking individual movements or activities, and monitoring private conversations or communications. Such uses are prohibited by University Policy Number 1301 on the responsible use of computing.     
     
  • Harassment and Abuse: Do not use AI to harass, bully, or intimidate. This includes using AI to generate offensive content, such as hate speech, threats, or discriminatory remarks, directed toward individuals or groups. Do not use AI to create or distribute content that promotes or incites violence, harassment, or discrimination. Remember, words, even when AI-generated, have power. Such uses risk violating several university policies, including University Policy Number 1202, as well as the Commonwealth of Virginia’s DHRM Policy 1.60: Standards of Conduct.
     
  • Discrimination: Do not use AI in a manner that discriminates against individuals based on race, gender, disability, or any other protected characteristic. Such use risks violating the George Mason University Non-Discrimination Policy (University Policy Number 1201).
     
  • Academic Integrity Violations: Do not use AI in any way that compromises academic integrity or violates university policies or guidelines. George Mason University’s Academic Standards Policy prohibits cheating, plagiarism, stealing, and lying in academic work.
     
  • Unethical Research Practices: Do not employ AI in research without adhering to ethical guidelines and obtaining necessary approvals. Doing so risks violating the George Mason University Misconduct in Research and Scholarship Policy.
     

These guidelines shall be reviewed as needed. A complete list of George Mason University policies can be found at universitypolicy.gmu.edu/policies.

 

George Mason University’s AI Task Force 

In Fall 2024, the university launched the AI Task Force, led by Amarda Shehu, George Mason’s inaugural vice president and chief AI officer (CAIO). The task force comprises more than 70 students, faculty, and administrators, bringing together all academic and nonacademic units of the university. It set out to explore how GenAI could reshape how we teach, learn, research, and work, recognizing that GenAI presents unprecedented opportunities and challenges for higher education and for a public university with vigorous R1 research activity such as George Mason. These guidelines are the product of the task force’s work. They aim to encourage creative and innovative exploration and use of AI tools while maintaining the university’s commitment to safety, security, academic integrity, and ethical conduct.