12 Jul

Summer 2024 AI policy statement


Oh what a few years it has been with AI. This builds on my previous statement. But after reading Teaching with AI, I thought more about the authors’ discussion of AI producing C-level work and their argument that the “new” standard should be doing better than AI. Those authors argue that instead of banning AI, we should be banning C-level work. This ties a bit to what I’ve discussed before about evolving standards.

Research methods class policy:

Artificial Intelligence and Large Language Model Policy

We know that artificial intelligence text generators like ChatGPT and other tools like Grammarly and Quillbot are powerful and increasingly widely used. And while they can be incredibly useful for some tasks (creating lists of things, for example), they are not a replacement for critical thinking and writing. Artificial intelligence text generators and editors are “large language models” – they are trained to reproduce sequences of words, not to understand or explain anything. It is algorithmic linguistics. To illustrate, if you ask ChatGPT “The first person to walk on the moon was…” it responds with Neil Armstrong. But what is really going on is that you’re asking ChatGPT “Given the statistical distribution of words in the publicly available data in English that you know, what words are most likely to follow the sequence ‘the first person to walk on the moon was’?” and ChatGPT determines that the words most likely to follow are “Neil Armstrong.” It is not actually thinking, just predicting. Learning how to use artificial intelligence well is a skill that takes time to develop. Moreover, there are many drawbacks to using artificial intelligence text generators for assignments, quiz answers, proofreading, and editing.
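The “predicting, not thinking” point can be illustrated with a toy bigram model. This is a deliberately simplified sketch (the corpus and function names are made up for illustration; real large language models use neural networks over vastly larger contexts), but the core move is the same: pick the statistically most likely continuation of a word sequence.

```python
from collections import Counter, defaultdict

# A tiny stand-in for "the statistical distribution of words in the
# publicly available data in English" (illustrative corpus only).
corpus = (
    "the first person to walk on the moon was neil armstrong . "
    "the first person to walk on the moon was neil armstrong . "
    "neil armstrong was the first person to walk on the moon ."
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most likely to follow, based only on counts."""
    return following[word].most_common(1)[0][0]

# The model "answers" the moon question by echoing its training data:
print(predict_next("was"))  # -> neil
```

Nothing here understands moon landings; the model simply reproduces the word sequence it has seen most often, which is exactly why such systems can be confidently wrong when the statistics mislead them.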

Some of those limitations include: 

  • Artificial intelligence text generators like ChatGPT are sometimes wrong (this is sometimes described as “hallucinating”). For example, for our sampling assignment, I had ChatGPT generate lists of Pokemon that can and cannot evolve, and it was wrong for 15% of them. If the tool gives you incorrect information and you use it on an assignment, you are held accountable for it. Likewise, if proofreading introduces terminology that is less precise than, or used differently from, the terminology in course materials, you are held accountable for it.
  • There are also drawbacks to using artificial intelligence tools like Grammarly or Quillbot to “proofread” or “edit” your original writing – they may change your text so much that it no longer reflects your original thought, or they may use terminology incorrectly. Further, in COM 382, you are not being evaluated on your writing, so there is no need for extensive proofreading.
  • The text that artificial intelligence text generators provide you is derived from another human’s original writing and likely multiple other humans’ original writing. As such, there are intellectual property and plagiarism considerations.
  • Most, if not all, artificial intelligence text generators are not familiar with our textbook or my lectures and as such, will not draw from that material when generating answers. This will result in answers that will be obviously not created by someone enrolled in the course. It is likely that your assignment will not be graded as well if you’re not using course material to construct your writing. For example, AI does not understand the difference between measurement validity and study validity. AI does not understand the difference between ethics more broadly and research ethics. 
  • Answers written by artificial intelligence text generators are somewhat detectable with software, and we will use such software to review answers that seem unusual. We will be cautious in our use of these tools, but if multiple detectors find that something is likely to have been written with AI, that will be used as evidence of misconduct.
  • AI is likely to produce “C”-level work at best. For some things in life, “C”-level is okay. But please be aware that as AI continues to develop and can do more and more tasks that humans used to do, you as a future employee and worker in the world will need to demonstrate that you can do a better job than AI. If you are using AI in this course to do the work for you, you’re not developing yourself to be BETTER than AI. You’re not learning skills or content that will matter. Consider AI-generated work your new competition: you need to do better work than that. Further, if AI can produce “C”-level work circa 2018, very soon that will not be considered a passing grade. Instead of banning AI, instructors are going to “ban” all “C”-level work (circa 2018). We’ve already seen that most instructors have raised their standards since AI became widely available. Currently, it is unlikely that even well-crafted AI work will allow you to pass this course. Rubrics are designed so that AI-generated work is unlikely to get high marks.
  • I have tried to design this course to help you develop yourself, your knowledge, and skills for a world in which AI will be doing more of the types of tasks that traditionally were done by recent university graduates in the workplace. AI will not be able to replace original thinking, problem solving, critical thinking, strategic thinking, emotional intelligence, ethical decision making, collaboration, and global/cultural awareness. Let’s work together to help prepare you for your future. 

It is okay for you to use artificial intelligence text generators in this course, BUT:

  • You must use them in a way that helps you learn, not hampers learning. Remember that these are tools to assist you in your coursework, not a replacement for your own learning of the material, critical thinking ability, and writing skills.
  • The only acceptable use of AI on assignments (quizzes, tickets, etc.) in COM 382 is for proofreading (like Grammarly or Quillbot). This should only be for simple grammar checks, not extensive rewriting, and absolutely not for generating original text. And in COM 382 you are not being evaluated on your grammar, so we discourage this use, while acknowledging that some students want to use it.
  • Do not use AI to write original material such as Hypothesis annotations and quiz answers.
  • Tools like StudyBuddy or other techniques to “take pictures” of quiz questions or to get answers to quiz questions are 100% not allowed. 
  • It is acceptable to use AI in COM 382 to provide you with other explanations of concepts or organize your notes and there is no need to disclose these. However, if the AI gives you incorrect information and you use that incorrect information on an assignment, you will be held accountable for it.
  • Be transparent: If you used an AI tool for proofreading, you must include both your original writing and the AI version so that I may see both and determine whether the answer you submitted reflects your original thought. And I expect that you will include a short paragraph at the end of the assignment, or in the final 0-point question in the quiz/exam, that explains what you used the artificial intelligence tool for and why. (For example: “I used Grammarly to give me feedback on my sentence structure on question 6. English is my 3rd language and I like using AI as a proofreading tool.” It is not required to disclose using AI for studying, but you can if you want to: “I read the book and listened to the lecture on measurement reliability and I didn’t fully understand it, so I asked ChatGPT to give me other examples, which helped my understanding.” Or “I did not understand a term in the textbook and I asked ChatGPT to explain it to me.”)
  • If you are using artificial intelligence tools to help you in this class and you’re not doing well on assignments, I expect that you will reflect upon the role that the tool may play in your class performance and consider changing your use.
  • If artificial intelligence tools are used in ways that are nefarious or unacknowledged, you may be subject to the academic misconduct policies detailed earlier in the syllabus. 

Then within the course, there are module-level learning objectives, and I’ve added a list of specific AI-“proof” skills to those learning objectives. For example…

Module 3 Learning Objectives

1. Define measurement in the context of social scientific research and explain its importance.

2. Differentiate between key terms such as theory, concepts, variables, attributes, constants, hypotheses, and observations.

3. Explain the difference between independent and dependent variables and identify them in research scenarios.

4. Describe the processes of conceptualization and operationalization, and apply them to research examples.

5. Distinguish between manifest and latent constructs, providing examples of each.

6. Identify and explain the four levels of measurement (nominal, ordinal, interval, and ratio), and classify variables according to these levels.

7. Compare and contrast categorical and continuous variables, providing examples of each.

8. Define measurement validity and reliability, and explain their importance in research.

9. Identify and describe different types of measurement validity (face, content, criterion-related, construct, convergent, and discriminant validity).

10. Recognize and explain various methods for assessing measurement reliability (test-retest, split-half, inter-coder reliability).

11. Analyze the tension between measurement validity and reliability, and discuss strategies for balancing them in research design.

12. Evaluate the strengths and weaknesses of different measurement approaches for studying diverse populations, including marginalized groups.

13. Apply principles of inclusive measurement practices to create more representative and culturally sensitive research instruments.

14. Identify potential sources of random and systematic error in measurement and suggest ways to minimize them.

15. Critically assess the implications of high and low measurement reliability and validity combinations in research scenarios.

Regarding helping students become “better” than AI. My syllabus statement: I have tried to design this course to help you develop yourself, your knowledge, and skills for a world in which AI will be doing more of the types of tasks that traditionally were done by recent university graduates in the workplace. AI will not be able to replace original thinking, problem solving, critical thinking, strategic thinking, emotional intelligence, ethical decision making, collaboration, and global/cultural awareness. Let’s work together to help prepare you for your future. 

Module 3 contributes to developing these skills:

  1. Original thinking:
    • Students learn to create conceptual definitions, which requires synthesizing information and developing unique understandings of complex concepts.
    • The process of operationalization encourages students to think creatively about how to measure abstract concepts.
  2. Problem solving:
    • Students learn to tackle the challenge of translating abstract concepts into measurable variables.
    • They must find solutions to balance validity and reliability in measurement.
  3. Critical thinking:
    • The module encourages students to critically evaluate different types of measurement and their appropriateness for various research scenarios.
    • Students learn to assess the strengths and weaknesses of different measurement approaches.
  4. Strategic thinking:
    • Students learn to strategically choose between different levels of measurement based on research goals and statistical analysis requirements.
    • They must think strategically about how to balance validity and reliability in research design.
  5. Emotional intelligence:
    • The discussion on inclusive measurement practices for marginalized groups helps students develop empathy and cultural sensitivity.
    • Understanding the complexities of measuring social and psychological constructs requires emotional intelligence.
  6. Ethical decision making:
    • The module addresses ethical considerations in measurement, particularly regarding inclusive practices and representation of diverse populations.
    • Students learn to make ethical decisions about how to operationalize concepts in ways that are fair and representative.
  7. Collaboration:
    • The emphasis on established measures and building upon previous research underscores the collaborative nature of scientific inquiry.
    • Group activities and discussions encourage collaborative learning and problem-solving.
  8. Global/cultural awareness:
    • The module highlights the importance of considering cultural context in measurement, especially when studying diverse populations.
    • Students learn to be aware of potential biases and limitations in measurement across different cultural contexts.

By learning these complex processes of conceptualization and operationalization, students develop skills that go beyond simple information retrieval or basic analysis. These skills require nuanced understanding, contextual awareness, and creative problem-solving – areas where human intelligence still far surpasses AI capabilities. This module prepares students to engage in the type of high-level thinking and decision-making that will remain valuable and uniquely human in an AI-augmented workplace.