

An education specialist says schools that block artificial intelligence are leaving students unprepared for workplaces where it is already standard. The priority, he argues, should be teaching judgment alongside technical fluency.
Artificial intelligence has moved from a peripheral novelty to a daily presence in student life faster than most educational institutions have been able to respond. Students are already using AI tools for research, drafting, and problem-solving – often regardless of whether their schools have a policy that addresses it. The question facing educators is no longer whether to engage with the technology, but how.
David Smith, CEO of Silicon Valley High School, an online institution that integrates AI into its own teaching model, argues that the framing of AI as a threat to student development is fundamentally misdirected. “The students who will succeed in the next decade are the ones learning to direct AI tools rather than beat them,” he said, “while bringing judgment, creativity, and ethical reasoning to every decision.”
What AI Does Well – and Where It Falls Short
Understanding the case for AI literacy in education requires being clear about what AI systems do well and where they fall short. They process large volumes of data rapidly, identify patterns, generate initial drafts, and perform routine analytical tasks with considerable efficiency. What they cannot do is interpret context with nuance, exercise moral judgment, or navigate the ambiguous, competing-interest situations that characterise most real-world professional decisions.
“AI handles the repetitive work exceptionally well,” Smith said. “But it can’t decide which solution fits a specific cultural context, or determine when breaking a rule makes sense.”
This distinction has direct implications for which skills students should prioritise. The World Economic Forum’s Future of Jobs Report 2025 points in the same direction: 68 percent of employers identified creative thinking as an increasingly important skill, while 71 percent flagged technological literacy – including AI and data competency – as a growing requirement. The implication is that both matter, and neither is sufficient without the other.
The Human Capabilities That Remain Distinctive
Smith identifies two areas where human capability remains qualitatively different from what AI can currently provide.
Critical and creative thinking encompasses more than finding answers – it includes questioning whether the right problem is being addressed in the first place. AI can recombine existing ideas, but genuine innovation requires the capacity to challenge assumptions, identify unexpected connections, and envision possibilities with no precedent in existing data. These skills develop through practice, not through passive consumption of AI-generated output.
Ethical judgment presents a different kind of challenge. Decisions about data privacy, algorithmic fairness, and the social consequences of automated systems require human values and contextual reasoning that no current AI can supply. “The technology can make predictions,” Smith noted, “but humans must decide whether those predictions should influence real decisions about people’s lives.” As AI becomes embedded in hiring, lending, healthcare, and public services, the ability to reason through those ethical dimensions becomes a professional skill in its own right.
Teaching AI Literacy Rather Than Avoiding the Technology
Smith draws a pointed analogy between schools that ban AI outright and the educational debates over calculators that preceded them. “Banning AI in schools is like banning calculators decades ago,” he said. “The better approach is teaching appropriate use.”
AI literacy, as Smith defines it, goes beyond knowing how to operate specific tools. It means understanding what these systems can and cannot do reliably – when to trust their outputs, when to verify them, how they are trained, and why they sometimes produce incorrect or biased results. Students who develop that understanding are better positioned to use AI effectively without becoming dependent on it in ways that undermine their own development.
The schools Smith regards as handling this well are those integrating AI deliberately – building technical fluency and the judgment to apply it appropriately as parallel goals, rather than treating them as alternatives.
A Shift in What Education Is For
The underlying argument is that education’s role needs to adapt to a world in which retrieving and processing information is no longer a distinctively human capability. Memorising facts that AI can surface in seconds is less valuable than developing the capacity to evaluate, contextualise, and apply those facts in situations that require human judgment.
“We need to shift our thinking from protecting students from AI to preparing them to work alongside it,” Smith said. “That means building critical thinking skills, helping them understand when to trust technology and when to question it, and giving them practice making decisions in situations where there’s no single right answer.”
That combination – technical literacy paired with human judgment – is, in Smith’s view, what distinguishes students who are genuinely prepared for professional life from those who are merely capable of using tools that will keep changing around them.