Brought to you in collaboration with the ASCA Academic Integrity Community of Practice and Pangram Labs, an ASCA Business Partner! This session explains how large language models generate text, identifies linguistic indicators of AI writing, and presents an evidence-based approach to addressing academic misconduct concerns involving AI-generated text. By integrating human expertise with an automated detection tool, educators can improve their ability to differentiate between AI-generated text and student-written text.
Learning Outcomes:
Knowledge & Skills:
Knowledge & Skill | Level |
Case Resolution | Intermediate |
Education | Intermediate |
Equity & Intentional Inclusion | Intermediate |
Internal & External Partnerships | Intermediate |
Investigations | Intermediate |
Continuing Education Credits:
Continuing Education Credits for those Certified through the Higher Education Consortium for Student Affairs Certification:
Pricing:
Please review ASCA's Refund and Cancelation Policy prior to completing your registration.
Each registration payment applies to one attendee only. For group registration and rates, contact the ASCA Director of Member Experience and Operations, Josh Cutchens, at asca@theasca.org.
Please note that registration will close at 11:45 PM EST the day prior to the event.
A Zoom link will be provided the morning of the meeting. If you need accommodations for this event, please contact the ASCA Central Office at asca@theasca.org or 979-589-4604 as soon as possible.
Presenter Information & Bios:
Marilyn Derby (she/her) earned her BS and MS in Human Development from Colorado State University, with an emphasis on higher education administration during her graduate studies. She has 35 years of student affairs experience, primarily in residence life and student conduct, and has worked at large and medium-sized public campuses as well as a small private campus. Marilyn currently serves as the Associate Director for Student Conduct and Integrity at the University of California, Davis. Because 90% of the caseload involves academic misconduct, and over half of current academic cases involve allegations of plagiarism with AI-generated text, developing the ability to differentiate student-written text from AI-generated text has been her primary focus since March 2023. Contact: mderby@ucdavis.edu
Bradley Emi is the CTO and Co-founder of Pangram Labs, a company dedicated to transparent research in AI text detection. Before founding Pangram in 2023, Bradley spent six years doing applied research in machine learning and artificial intelligence, most notably as an AI researcher on the computer vision team at Tesla Autopilot and as a graduate student in the Stanford vision and learning lab. Bradley holds a B.S. in Physics and an M.S. in Artificial Intelligence from Stanford University. Contact: bradley@pangram.com