In discussions about generative AI in schools, the focus usually lands on whether students are using it to cheat on written assignments. This concern ignores a deeper truth: The problem isn’t the AI; it’s a literacy model that has long over-relied on antiquated, monomodal, and monocultural practices. The “cheating” we fear is often just a symptom of a curriculum that values compliance over authentic voice.
This brings us to a critical question: Will we finally allow AI to be the disruption needed to rethink our approach to teaching literacy, or will we use it to automate and perpetuate the inequities embedded in our current system?
Teachers now have the opportunity to move beyond the traditional essay and use AI as a catalyst for culturally responsive teaching. But to do so, we must ground the effective and ethical use of AI in a framework that ensures relevance, representation, and responsiveness across all projects.
Ensuring Relevance
The most urgent task facing literacy educators today is not managing generative AI, but interrogating the content of the curriculum itself. Generative AI thrives on synthesis and shortcutting, making assignments based on trivial, low-stakes knowledge obsolete. We must return to the foundational definition of a problem, which the Oxford Dictionary defines as “a matter or situation regarded as unwelcome or harmful and needing to be dealt with and overcome.” In the age of AI, literacy can solve problems that genuinely matter to students. A culturally responsive education framework achieves this by using AI to radically enhance relevance, shifting the classroom’s focus from mere content acquisition to authentic, high-stakes problem solving.
The central challenge in education is moving past irrelevant “cram-for-the-test” topics to deeper questions where the answers aren’t listed in the back of the book. Educators must move past questions like this one:
Johnny has an unspecified number of marbles, but it is more than 20 and fewer than 30. If he puts them into groups of 4, he has 1 left over. If he puts them into groups of 5, he has 0 left over. How many marbles does Johnny have?
Instead, we can use AI to generate questions like, “How can we calculate the most cost-effective public transportation routes in our neighborhood?” or “What data do we need to advocate for more grocery stores in food deserts within our community?” The marbles example has students solve an equation based on information about a subject they may care little about. The latter questions are examples of real-life issues that affect students’ communities and require deep investigation and critical analysis.
This shift is rooted in the demarginalizing design principle of achieving proximity to the pain: Students must be close to the problem they are trying to solve. Educators who are not representative of their students must adopt the stance of a demarginalizing designer who is deeply humbled by their own biases. AI tools can assist educators in bridging this gap by generating texts, scenarios, or case studies that reflect students’ community or cultural contexts.
For instance, instead of assigning students a theoretical essay on nationwide wealth inequality, an educator could model effective and ethical AI use by prompting the tool in front of the class:
Act as a culturally responsive curriculum designer for [a given grade level, subject, and zip code]. Generate five open-ended questions that connect current local housing policy to the community’s history.
The teacher positions themself as a curious learner, using AI to discover what they may not know or understand about their students’ lived reality or, if they do know, what they may simply not think to ask.
You may have noticed the intentional structure of the prompt. To move beyond generic outputs, I use a framework I call the P4 method when I present a prompt to AI: persona, purpose, parameters, and polish.
• Persona: Giving the AI a specific identity (e.g., a “culturally responsive designer”).

• Purpose: Clearly stating the goal (e.g., “connecting housing policy to history”).

• Parameters: Setting the boundaries (e.g., “generating five open-ended questions” for a specific zip code).

• Polish: Refining the results to ensure they align with my students’ needs.
By using the P4 method to contextualize the content, the teacher demonstrates humility about their own potential biases and centers students’ lived experiences as the primary text, thereby validating their students’ personal voice and culture.
Ensuring Representation
The moral fight for representation starts with assigning problems worthy of students’ authentic engagement. The United Nations’ Sustainable Development Goals, for example, reveal a host of real, local problems that resonate with students. Because injustice exists all around them, they can acknowledge their own understanding, or lack of understanding, and learn how to become engaged citizens in society.

True representation demands that students see themselves in the content, which requires them to either find representative voices they identify with (for example, those of the same age, gender, or race) or transform the communication style to a style native to their cultural connections. For example, a teacher may take an essay written by a historical figure and, with the help of AI, find a contemporary spoken poem on the same topic by a young poet who shares the same age and identity as many of the students.

This methodology treats transparency as an essential ethical practice. And by modeling their own AI use, educators demonstrate intentionality, proving that they are actively seeking ways to make content more personally and culturally resonant.
The following prompt illustrates how a teacher might use the P4 method to locate these cultural connection points.
• Persona: As a culturally responsive [6th grade science] educator . . .
• Purpose: Help me adapt [a plant life cycle] lesson to include three community-focused connections [such as food justice or local agriculture].
• Parameters: Provide a “behind the scenes” explanation for each connection to show students how AI helped bridge the gap between the curriculum and their specific cultural context. Include a “prompt seed” for each option so students can further explore these topics based on their personal interests.
• Polish: Review the connections to ensure they are authentic to [a specific local neighborhood] and not just generic examples.
This approach moves beyond the mechanics of reading and writing into the dynamics of media literacy. It enables students to draw conclusions and trace connections among complex systemic issues.
Crucially, this comparative analysis is the primary way to identify systemic bias. Because AI is trained on skewed data sets, its outputs may omit or misrepresent specific cultural histories. Students can use the SIFT framework (Caulfield, 2019) to interrogate these outputs against primary sources:
• Stop: Pause to verify AI-generated facts against local records.

• Investigate the source: Spot omissions (such as an AI tool mentioning a farm while ignoring the labor conditions of the people who work there).

• Find better coverage: Check the output against other accounts for neighborhood stereotypes.

• Trace claims to the original context: Follow AI’s logic by asking the tool which specific data informed its responses.
Through these steps, students move from being passive users to critical auditors of technology.
Ensuring Responsiveness
Presenters regularly cite a Stanford University study (Spector, 2023) that found 70 percent of students self-reported cheating both before and after the emergence of generative AI tools. Unfortunately, the question of why cheating is so prevalent is often ignored. When students don’t feel their learning reflects their experience, they can disengage from school. Ensuring a responsive curriculum is one antidote to this problem.
We must promote the use of AI to enhance learning opportunities, but true responsiveness requires a new pedagogical structure. This structure begins with the educator’s commitment to learning who their students are and understanding their unique cultural backgrounds. This means conducting surveys and asking open-ended questions to discover students’ interests and heritage. It means engaging in ongoing dialogue to learn how students connect their life experiences to the content’s relevance or lack thereof.
Here’s another sample prompt that demonstrates responsiveness, which you can adapt to your context.
• Persona: You are a culturally responsive curriculum specialist with expertise in [youth culture and urban education].
• Purpose: Help me translate a standard lesson on [persuasive writing] into a relevant problem-solving scenario for my students in [city/neighborhood/zip code]. Identify three specific “hooks” related to current events in this zip code that would enable students to use their authentic voice to argue for community change.
• Parameters: Ensure the core standards for [argumentative structure] remain the focus, but replace generic textbook examples with references to [local landmarks, transit issues, or community centers].
• Polish: Review the output to ensure it avoids stereotypes and centers the students as the experts of their own community.
Through engaging with and improving prompts like this, teachers model the practice of continuous learning and responsible prompt engineering. This process transforms the classroom into a collaborative learning laboratory, where both teachers and students actively explore ethical AI use in pursuit of equity, authenticity, and meaningful problem solving.
When schools ban AI, students often use it outside the safety of school-monitored accounts and without professional guidance. This creates a missed opportunity for cultural connection. By being transparent about how they use AI to align lessons with student identities, educators invite their class into a partnership that turns the curriculum into a shared cultural bridge. By teaching the P4 framework, we ensure students are not delegating their thinking to AI, but rather coordinating its use, fulfilling the mandate not to ban AI but to teach its ethical integration.
It’s About Guiding, Not Policing
The future of literacy is not a struggle against generative AI, but a commitment to using it as a powerful tool for achieving educational equity. The P4 framework, rooted in the three Rs of relevance, representation, and responsiveness, helps educators transform technology from a shortcut for cheating into a catalyst for profound learning.
We nurture relevance by moving beyond trivial challenges and ensuring that when reading and writing tasks address problems, those problems meet the minimum criterion of being “unwelcome or harmful.” AI can assist educators in identifying tasks that merit students’ authentic engagement. We cultivate representation by helping students see themselves in the content, particularly when both the teacher and students use strong prompting skills as they work with AI.
Finally, we commit to responsiveness by modeling ethical practice. Culturally responsive educators understand that long-term trust matters more than short-term compliance.
By fostering dialogue and using the P4 method, we equip students with a clear framework for engaging with AI responsibly and thoughtfully. The teacher’s role is not to police the technology but to serve as a guide in ethical reflection, offering feedback that affirms the student’s authentic voice and improves the quality of their AI usage.
Literacy in the age of AI is fundamentally about ethics, equity, and intentionality. It’s a profound opportunity to make learning relevant and to equip engaged citizens to solve the most pressing problems in their world. The tools have changed, but educators’ responsibility to be responsive to the culturally and linguistically diverse needs of their students remains the ultimate, non-delegatable task.
Reflect & Discuss
What percentage of your assignments ask students to solve problems that are genuinely relevant to their lives?
Craft a P4 prompt for an upcoming lesson that incorporates your students’ cultural context or lived experiences. What gaps in your own knowledge did this exercise reveal?
What would it look like to shift your focus toward teaching ethical AI integration rather than policing its use?