One of my least popular opinions is that the rush to cram GenAI into K-12 curricula is a bad idea. This post will lay out the crux of my argument, and I’ll try to address or preempt counterarguments. If you see something lacking in my reasoning, please let me know!
The Purpose of Education
First, it’s important to establish what I think the goal of a K-12 education should be, because that goal ultimately guides how I think society should approach it.
Based on the pushback some people have given my stance, I take it they believe the primary purpose of K-12 is to prepare children to be helpful members of the workforce. That is, their argument hinges on the claim that GenAI will inevitably be a pervasive element in all industries, so kids should be required to learn the technology ASAP. The focus is on learning to use the tool, not on whether and how it affects learning.
I take a different stance as my starting point. While we undoubtedly want graduates to be able to contribute to the workforce, I think the primary goal of K-12 should be to prepare kids to be good citizens. This would require helping them become good critical thinkers: learning how to tie disparate ideas together, understanding how things work through systems thinking, being able to develop concepts or ideas that originate with them (even if not new to the world more broadly), and being able to sit with a complex idea for a long time and think deeply about it.
While a diplomatic response would be, “Why, we should prioritize being useful in the workforce and being good citizens equally!” we must acknowledge the opportunity costs. To receive maximum benefit from both approaches, (1) GenAI must be a necessary tool to learn (otherwise students’ time is better spent learning other things), and (2) using GenAI must lead to critical thinking skills that are similar to or better than those developed without it.
This is not to say that GenAI makes the criteria for being a good citizen impossible, but I do think it makes it much less likely for reasons I’ll describe below.
My Arguments
I have a handful of primary arguments. The order is more or less random at this time, so ideas I discuss first aren’t necessarily the best, nor are the last ones the worst.
GenAI and the Future Workforce
It’s worth stating that we do not know with certainty that most people (let alone the vast majority of people) will need to know how to use GenAI in order to be competitive in the workforce. To assume that what GenAI companies tell us is the truth is to GenAIuflect to them, agreeing to pay them for products based only on speculation. It amounts to having taxpayers cover the cost of training customers to use a company’s product.
Consider what must be true in order to justify arguing that all kids should be required or strongly encouraged to use GenAI for several of their K-12 years (and that taxpayers, via school districts, should have to pay for the privilege of accessing the technology). I posit that at least four things must be true:
(1) GenAI will still be the prominent technology in the years after a student graduates,
(2) The technology won’t change much, so learning it now will directly relate to the workforce they will enter,
(3) GenAI companies won’t lose the current lawsuits that could end, scale back, or significantly change how GenAI is used, how often it’s used, how expensive it is, etc., and
(4) The tradeoff of teaching kids how to use GenAI will be more beneficial than harmful to their intellectual development, so that it won’t limit them as workers compared to students who had little or no exposure to GenAI in K-12.
To take those in turn:
We must believe that a technology that burst into public view only 2.5 years ago will continue to be the primary tech used in the workforce several years from now. That is, we must take as given that there will not be a similar breakthrough of a new technology in the next, say, 3-13 years. If a student masters GenAI in the fifth grade, but that tech is no longer as widespread seven years later when they graduate, we must believe that they did not miss out on valuable learning during those seven years that would have been more useful when they graduate into a world with new and ever-changing technology. It feels a bit like someone in the 1990s saying “Everyone should learn HTML because the world wide web will be used by everyone in the future!” It’s half true. The internet is now woven into the fabric of our lives. But how many of you know HTML or use it regularly?
We must believe that GenAI as it exists today will be more or less how it will exist in 3-13 years. That is, the prompting techniques helpful today will continue to be the best approach, the way we interact with GenAI will largely be the same, and so on. In essence, we must believe the tech will not improve and become significantly more user friendly.
We must believe that GenAI will not be affected by losing a lawsuit or facing other regulatory pressure that could affect how it works. For example, if GenAI companies lose copyright lawsuits, they could be forced to destroy their models, to pay billions of dollars in penalties (which could bankrupt them), and/or be forced to start over, relying on much less data to train future models (which, absent a significant tech breakthrough, will make the models much less useful).
We must believe that teaching students to use GenAI will make them better workers than teaching students to learn without GenAI through most of K-12 and perhaps learning it on the job, or over the summer after high school.
GenAI Takes Years of Practice
I’m not sure many people would say I’m a cheerleader for GenAI (though I do see its many potential benefits!), but even I don’t think it takes years to learn to use it well enough to perform an entry-level job, such as one a high school graduate might need.
It seems odd that people are saying we must teach how to use GenAI in school and that starting early is important. This suggests they think GenAI is a truly awful product that takes years (or at least a whole year) to learn to use at a basic level. Imagine if Word took multiple years to learn to use effectively. This stance implies the product is even worse than I’m willing to argue.
GenAI is Like Food: It’s Neither Good Nor Bad
First, I imagine some nutritionists would quibble with the idea that there aren’t overwhelmingly bad foods. But let’s assume, for the sake of argument, that everything is good in moderation and when consumed appropriately.
Is Teaching the Use of GenAI Better than Not Using It?
We can get to the nub of the issue if there is empirical data: does teaching and encouraging the use of GenAI improve student outcomes? If research settles that question, then all the reasoning and opining matter much less. One useful framework for judging whether an observed association reflects a causal effect is the set of Bradford Hill criteria:
Strength (effect size): A small association does not mean that there is not a causal effect, though the larger the association, the more likely that it is causal.
Consistency (reproducibility): Consistent findings observed by different persons in different places with different samples strengthen the likelihood of an effect.
Specificity: Causation is likely if there is a very specific population at a specific site and disease with no other likely explanation. The more specific an association between a factor and an effect is, the greater the probability of a causal relationship.
Temporality: The effect has to occur after the cause (and if there is an expected delay between the cause and expected effect, then the effect must occur after that delay).
Dose–response relationship: Greater exposure should generally lead to greater incidence of the effect. However, in some cases, the mere presence of the factor can trigger the effect. In other cases, an inverse proportion is observed: greater exposure leads to lower incidence.
Plausibility: A plausible mechanism between cause and effect is helpful.
Coherence: Coherence between observational and experimental findings increases the likelihood of an effect.
Experiment: “Occasionally it is possible to appeal to experimental evidence.”
Analogy: Similarities between the observed association and other established associations can strengthen the case for causality.
I don’t have time to conduct a literature review at the moment, but my strong suspicion is that, applied to the current research literature, these criteria would indicate GenAI does more harm than good.
GenAI is Categorically Different from Prior Technologies
I’ve seen articles and books whose authors treat GenAI pushback in education as no different from past overreactions to calculators in the classroom or to Google search. This analogy is flawed in important ways.
The Power Was Within Us All Along
GenAI is not a silver bullet. In my opinion, it’s probably just a bullet. And many believe we should aim it right at education.