
Integrating AI into teaching requires intentional pedagogical design. A generative AI model configured for writing instruction can either help students develop their voice or generate essays students submit without actively thinking or learning in the process. The difference lies in how you frame its use, both in and out of the classroom, as a form of collaborative learning. CUNY undergraduates not only lead complex lives, balancing jobs, family obligations, and coursework; many are also international or first-generation college students. AI tools can therefore support their learning or amplify existing inequities, and your pedagogical intentionality with these tools will shape the outcome.


Tip #1: Reflect on Learning Objectives

Principle: Start with what (skills) students should learn, not with what AI can do.

AI capabilities often tempt instructors to design tool-first: "This tool can do X, so let's assign Y." That reverses the proper sequence. Instead, begin with your learning objectives for the course or the assignment, then ask whether and how AI tools can help students achieve those objectives.

Example:

Learning objective: "Students will develop the ability to construct evidence-based arguments in response to scholarly sources."

Question: Does AI support this objective?

  • If AI generates the argument: No. The student bypasses the learning.
  • If AI helps students locate relevant sources and identify counterarguments: Possibly. The student still constructs the argument.
  • If AI critiques a student's draft argument for logical gaps: Yes. The student receives productive feedback that supports revision.

Application:

When designing an AI-integrated activity:

  1. State the learning objective explicitly.
  2. Map the cognitive tasks required to meet that objective.
  3. Identify which tasks AI can assist with and which tasks students should do themselves.
  4. Build constraints into the model or the assignment that prevent AI from doing the learning for the student.
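The four steps above can be sketched as a simple checklist in code. This is an illustrative sketch only: the objective, task split, and constraint below are hypothetical examples, not a prescribed template.

```python
# Hypothetical activity design following the four steps above.
# The objective, task lists, and constraint are illustrative placeholders.
activity = {
    "objective": "Construct evidence-based arguments from scholarly sources",
    "ai_assisted_tasks": ["locate relevant sources", "surface counterarguments"],
    "student_only_tasks": ["evaluate evidence", "construct the argument"],
    "constraint": "The model may critique drafts but must not write argument text",
}

def check_design(activity):
    """Flag a design in which no cognitive task is reserved for the student."""
    if not activity["student_only_tasks"]:
        return "Redesign: AI would do the learning for the student."
    return "OK: students retain the core cognitive work."

print(check_design(activity))
```

The check encodes the fourth step: if every task can be delegated to the model, the assignment needs a constraint that returns the learning to the student.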

Tip #2: Progressive Disclosure

Principle: Scaffold students' AI use from guided to independent, simple to complex.

Students who encounter AI tools without scaffolding often default to one of two extremes: avoiding them entirely (fearing they'll use them wrong) or using them uncritically (treating outputs as authoritative). Progressive disclosure builds competence and critical judgment incrementally, giving students space to progress at their own pace.

Stages:

  1. Guided exploration: The instructor demonstrates AI use in class. Students observe how to prompt AI, critique/evaluate outputs, and integrate results into their own thinking and work.

  2. Constrained practice: Students use AI for specific, bounded tasks with clear learning objectives. Example: "Use the model to generate three counterarguments to your thesis, then evaluate which one is strongest and explain why."

  3. Reflective application: Students use AI in their workflow and document their process. Example: "Describe how you used the model, what it helped you understand, and where its limitations became apparent."

  4. Independent integration: Students determine when and how to use AI tools based on their learning needs.

Application at CUNY:

Many CUNY students are first-generation college students. They may not have tacit knowledge about academic research workflows or scholarly writing conventions. Progressive disclosure makes these processes visible while building AI literacy alongside disciplinary knowledge.


Tip #3: Assignment Expectations and Learning

Principle: Help students understand what counts as their work and why this matters.

AI blurs the line between "your work" and "someone else's work." A student who prompts an AI model and revises its output is doing some work. What does it mean for students to work toward established learning goals in your class, and how will you know when they meet them?

Instead of policing boundaries, teach students to recognize the difference between AI that supports learning and AI that replaces it.

Framework:

Ask students to reflect on three questions:

  1. What did I learn from using this tool? If the answer is "nothing," the use likely shortcuts the learning.
  2. Could I explain or defend the output? If not, the student has not engaged with it critically.
  3. Does this use align with the assignment's learning objectives? If the answer is unclear, the student should be encouraged to ask you as their instructor and bring the question to their peers as a learning opportunity.

Practical Implementation:

  • Include AI use guidance in every assignment prompt. Be explicit and specific about what kind of use is encouraged, what is discouraged, and why.
  • Ask students to submit a brief process note documenting their AI use as a way to build metacognitive awareness throughout the semester.
  • Model appropriate AI use yourself: show students how you use AI tools in your own research or teaching preparation to encourage transparency and collaborative learning in the classroom.

Example:

"I used the model to locate five papers on scaffolding in writing pedagogy. I verified each citation and read the abstracts to confirm relevance. Two were useful. Three were off-target. Here's what I learned about prompting for academic sources..."

Sample Syllabus Language:

AI tools like the Sandbox models can support your learning when used thoughtfully. You are encouraged to use them for brainstorming, discovering and vetting sources, adapting texts into more accessible formats, or receiving constructive feedback on your drafts. Remember, you are expected to do the intellectual work yourself, such as synthesizing ideas, evaluating secondary sources, and constructing arguments. If you are unsure whether AI use is appropriate for an assignment, please reach out to me at [instructor-email] before submitting.


Tip #4: Visible Learning

Principle: Make learning processes visible to students and yourself.

AI tools risk hiding the learning process. A student submits a polished essay. You cannot see the brainstorming, drafting, and revision that led to it. Did the student write it? Did the AI? Some combination?

Visible learning surfaces the process. Students leave traces of their thinking. You gain insight into their development.

Strategies:

  • Process documentation: Require students to submit drafts, outlines, or reflection notes alongside final work.
  • Live workshopping: Have students demonstrate their AI use in class. They show how they prompted the model, evaluated outputs, and incorporated results.
  • Iterative assignments: Break large projects into stages with checkpoints. Each checkpoint surfaces student thinking at that stage.
  • Metacognitive prompts: Ask students to write briefly about their approach. "What strategy did you use to tackle this problem? Where did you get stuck? What did you learn?"

Example:

Instead of assigning a 10-page research paper due at semester's end, assign:

  • Week 4: Annotated bibliography (5 sources)
  • Week 8: Argument outline with evidence
  • Week 12: Draft (peer review)
  • Week 16: Final paper with process reflection

Each stage surfaces student thinking. You can intervene when students struggle. AI use becomes visible across the process.


Tip #5: Metacognitive Prompting

Principle: Design prompts that require students to think about their thinking.

AI models excel at generating fluent text. They do not excel at metacognition. By prompting students to reflect on their cognitive processes, you create tasks that AI cannot complete for them.

Examples:

  • "Explain your reasoning process for solving this problem. Where did you feel confident? Where uncertain?"
  • "What assumptions are you making in this argument? How would your conclusion change if those assumptions were false?"
  • "Compare your initial understanding of this concept to your current understanding. What changed?"
  • "If you were teaching this material to a friend, what would you emphasize? What would you skip? Why?"

Application in AI-Integrated Assignments:

Ask students to document how they used AI tools and what they learned from the interaction:

  • "What did you ask the model? Why did you phrase your prompt that way?"
  • "How did you evaluate the model's response? What made you trust or distrust it?"
  • "What did the model help you understand? Where did it mislead or confuse you?"

These questions build students' capacity to monitor and regulate their own learning.


Tip #6: Formative Over Summative

Principle: Use AI tools primarily for formative assessment and learning support, not high-stakes summative evaluation.

When AI is available, summative assessments become harder to secure. Take-home exams, papers written outside class, and projects completed over weeks all allow AI use (whether you permit it or not).

Formative assessment shifts the focus. Students receive feedback. They revise. They develop skills. The stakes are lower. The learning is higher.

Formative Uses of AI:

  • Brainstorming partner: Students generate ideas with a model before writing.
  • Peer review simulator: Students submit drafts to a model configured to provide feedback aligned with your rubric.
  • Concept checker: Students explain a concept to the model. The model asks clarifying questions. Students refine their understanding.
  • Research support: Students locate sources, identify patterns, and develop research questions with AI support.
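The "peer review simulator" above can be set up by assembling a system prompt from your rubric. This is a minimal sketch, assuming a chat-style model that accepts a system prompt; the rubric criteria are placeholders to be replaced with your own.

```python
# Sketch: build a system prompt for a rubric-aligned feedback model.
# The rubric criteria below are hypothetical placeholders.
RUBRIC = [
    "Thesis responds directly to the prompt",
    "Claims are supported by cited evidence",
    "Counterarguments are acknowledged and addressed",
]

def build_feedback_prompt(rubric):
    """Combine formative-feedback instructions with the course rubric."""
    criteria = "\n".join(f"- {c}" for c in rubric)
    return (
        "You are a peer reviewer for an undergraduate writing course.\n"
        "Give formative feedback only: note strengths and gaps against each "
        "criterion, ask clarifying questions, and never rewrite the "
        "student's text.\n"
        f"Rubric:\n{criteria}"
    )

print(build_feedback_prompt(RUBRIC))
```

The "never rewrite" instruction is the key design choice: it keeps the model in a formative role, so revision remains the student's work.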

Summative Alternatives:

If you need summative assessment that resists AI shortcuts:

  • In-class writing: Controlled environment, no AI access.
  • Oral exams or presentations: Students explain their thinking in real time.
  • Process portfolios: Students submit evidence of their learning process (drafts, notes, reflections) alongside final work.
  • Live demonstrations: Students show how they solved a problem or conducted an analysis.

Tip #7: Critical AI Literacy

Principle: Teach students to understand an AI system and question the authority behind it.

AI is not neutral. It embeds the biases, priorities, and limitations of its training data, its design, and the standpoints of those who build and fund it. Students who use AI without understanding these dynamics risk accepting its outputs uncritically.

Critical AI literacy asks students to evaluate:

  • What the model knows and doesn't know: The training data is frozen at a particular point in time, so the model often cannot access current events or recent research.
  • Whose perspectives are represented: AI training data overrepresents English-language, Western, affluent voices.
  • How the model was incentivized: What outputs were rewarded during training? Fluency? Confidence? Compliance?
  • What the model cannot do: AI cannot fact-check itself. It is amoral: it cannot weigh ethical considerations. And while it can recognize patterns in language and media, it does not contextualize information in the socially situated way humans do.

Activities:

  • Compare sources: Have students ask a model for information on a topic, then compare its response to several reliable sources. Where do they align? Where do they diverge? Why?
  • Bias audit: Ask students to prompt the model on a culturally sensitive topic (e.g., immigration policy, religious practices). Analyze the response for bias or omission.
  • Reverse engineering: Have students try to figure out what instructions or training data would produce a given AI output.
  • Failure modes: Task students with finding cases where the model fails, hallucinates, or produces nonsense. What patterns do they notice?

CUNY Context:

CUNY students bring diverse linguistic, cultural, and epistemological perspectives. AI models trained predominantly on English-language Western sources may not reflect their knowledge or experiences. Critical AI literacy empowers students to recognize these gaps and advocate for more inclusive technologies.


Tip #8: Inclusive Design

Principle: Design AI-integrated activities with CUNY's diverse student population in mind.

CUNY students include:

  • Multilingual learners navigating academic English
  • Working adults balancing study with jobs and caregiving
  • First-generation students without tacit knowledge of academic conventions
  • Students with disabilities who benefit from adaptive technologies
  • Immigrants and international students unfamiliar with U.S. educational norms

AI tools can support or marginalize these students depending on how you deploy them.

Inclusive Strategies:

  • Multilingual support: Configure models to help students work in their home languages and translate to English when needed.
  • Flexible pacing: Allow students to use AI for time-intensive tasks (e.g., literature search) so they can focus cognitive effort on higher-order thinking.
  • Accessibility: Ensure models support screen readers and other assistive technologies.
  • Cultural responsiveness: Acknowledge that students' prior knowledge and lived experiences are valid sources of authority. AI models should supplement, not replace, those perspectives.

Putting It Together

These patterns are not prescriptive. They are starting points. Your discipline, your students, and your teaching context will shape how you apply them.

When in doubt, ask:

  • Does this use develop students' capacities?
  • Does it make learning visible?
  • Does it encourage critical thinking?

If the answer is yes, proceed. If no, redesign.


Additional Resources


← Return to Home | Continue to Use Cases →