
Can AI tools transform education for the better?
4/30/2026 | 26m 46s
Why Khan Academy’s founder thinks AI tools can transform education for the better
Technology has changed the way students study and learn. Now, as artificial intelligence enters the classroom, proponents argue it will be a welcome revolution for schools — but with limited guardrails, could it do more harm than good? Horizons moderator William Brangham explores the future of AI and education with Khan Academy founder Salman Khan, who has launched a new AI assistant for teachers.
I'm William Brangham, and this is "Horizons."
Technology has changed the way students study and learn, with computers and the Internet now helping millions of young minds every day.
And now, artificial intelligence has entered the classroom.
Proponents argue it will be a welcome revolution for schools.
But with limited regulations and guardrails, could it do more harm than good?
Is this a reality that teachers and parents and students have to embrace?
AI and education, coming up next.
♪ Narrator: Support for "Horizons" has been provided by Steve and Marilyn Kerman and the Gordon and Betty Moore Foundation.
Additional support is provided by Friends of the News Hour.
♪ This program was made possible by contributions to your PBS station from viewers like you.
Thank you.
From the David M. Rubenstein Studio at WETA in Washington, here is William Brangham.
Welcome to "Horizons."
It has been four years since ChatGPT was released, and it and other artificial intelligence tools like it are being deployed everywhere, including in America's schools.
In some elementary schools, kids use an AI program to make snappy music and dance videos.
In others, programs like DALL·E, which is embedded in ChatGPT, let kids create novel artwork.
In a majority of schools, students can access Google's AI program, Gemini, because it's embedded in the company's Chromebooks, which, according to one study, are being used in 80% of K-12 public schools.
On those laptops, Gemini is always at the student's beck and call, ready to help improve writing, create a new image, or do what it calls deep research.
There are entirely new schools, like the Alpha Schools, which use AI to create personalized lesson plans for each student, compressed down into just two hours a day, with the rest of the day open for other activities.
These are just a few examples.
The argument for widespread deployment of AI in education is that it helps overworked teachers and engages disinterested students.
At its best, supporters argue, AI will revolutionize and democratize education.
But there is also a growing revolt against this, driven by parents and some educators who argue that AI is short-circuiting the learning process and should be seen as a Trojan horse for big companies looking to land the next generation of customers.
We are going to talk today with someone who argues that those fears are overblown, that AI not only should be used in schools, but can be done in a way that will transform how teachers teach and students learn for the better.
Salman Khan is the founder and creator of the nonprofit Khan Academy, which is the biggest online repository of free educational videos.
They have been viewed billions of times.
Khan has also launched a new AI teaching assistant called Khanmigo.
It's designed to help teachers lighten their load and focus instead on sharpening their lessons to provide better instruction.
And he's the author of "Brave New Words: How AI Will Revolutionize Education and Why That's a Good Thing."
Sal Khan, so good to have you on the program.
Thank you so much for being here.
Could you just briefly make the case, why should AI be in schools?
Yeah, well, I want to be clear, I'm also actually quite worried about some of the downsides of AI.
I remember when we first got access to some of these models, well before ChatGPT came out, we immediately said, "Hey, this could be a cheating tool. This can hallucinate. This could take you down rabbit holes that might not be that healthy."
But what I told our team at Khan Academy, a mission-driven nonprofit, was, "Look, these are real worries, but we should turn these fears into features. We should put guardrails around them because there is upside."
One of our theories of change at Khan Academy has always been that not only can we give access to a lot of people, but we can give access in a way that personalizes more to the needs of the student.
We've always known what world-class education looks like.
I always talk about how young Alexander the Great had Aristotle as a personal tutor.
But when we got mass public education 200, 300 years ago, a very utopian idea, we had to compromise.
We had to batch students together in groups of 25, 30, 35, and move them together at a set pace.
And until generative AI, groups like us, like Khan Academy, we could approximate aspects of personalization with on-demand video, with software, giving the teacher a bunch of tools so that they can focus on what the students need the most.
But we think AI can help us go further.
Now, I don't think it's a silver bullet.
It's not going to solve it overnight.
Even as we've rolled it out into the classroom, we've seen some real highlights and some areas where it hasn't worked as well as we thought.
But as it gets better and better, if it's used well, and if we have the right guardrails, I think it really can help drive this type of personalization, open up the aperture of what we assess.
A lot of people have critiqued things like standardized exams because they're multiple choice.
Well, maybe we can start opening that aperture, make students draw things, communicate things.
So that's where I'm optimistic.
You believe, and this is throughout your book, that AI can help level out what you argue are a lot of the inequities in education today.
That's economic disparities, geographic distances, stunted achievement among some students.
How so?
Yeah, look, if you grew up in a fairly educated family or a well-off family and you're struggling in school, your family would either be able to tutor you directly, like I tutored my cousins many, many years ago, and that was the genesis of Khan Academy, or they'll hire a personal tutor for you to get that help.
Most people do not have access to that.
And Khan Academy, as you introduced, the reason why billions of folks have used it over the years is it was that lifeline when they were stuck, when they needed some of that personalized practice.
And as AI gets better, and I don't think it's fully there by itself, although it's starting to make some nice progress, if it can start to approximate even more some of what a personal tutor could do, if it can start to take some of the non-student-facing burden off of a teacher's plate so the teacher has more time and energy for that student, I think you're going to see more students from more backgrounds have access to real personalization.
You'll have more teachers being able to do richer lesson plans and be able to do more student-facing things as opposed to some of the things that they have to do in the background right now.
You also give some examples in the book of deploying these tools to children who may not even have a physical schoolroom in their rural town.
They may live in a nation like Afghanistan, where students are not encouraged, especially female students are not encouraged to go to school.
You think that, again, this can be a major tool for democratizing education.
I gave a virtual talk at MIT last year, and this young woman was in the room.
Her name is Fatima, and it was exactly that.
She grew up in Afghanistan, wasn't allowed to go to school because of the Taliban.
She decides she wants to apply to MIT.
The way she learned all of her material was on Khan Academy.
MIT said, "Hey, you don't have a transcript. You don't have any test scores."
She was able to certify her Khan Academy work on another nonprofit we have called Schoolhouse.world, where you can prove what you know.
That was before having access to AI or Khanmigo, but you can imagine people like Fatima, what she did took incredible determination and motivation.
Imagine if she could have even more support, maybe an AI that can keep her on track, that can answer her questions if the video isn't enough.
To be clear, I don't think it's AI by itself.
A lot of the efficacy we see in situations like Fatima's or in U.S. schools is driven by the personalized practice that we did well before AI, but we think AI could be a layer on top of that to further support students.
It's not all about technology either.
The Schoolhouse.world program that we have, this is about students tutoring each other over Zoom.
I don't think it's one thing or the other, but if AI can be added to the arsenal for an extra layer of support and we can put the right guardrails around it, I think it can be a net positive.
As your book details, there are a lot of concerns out there about the deployment of AI in schools.
I want to go through a couple of these and get your responses.
The Brookings Institution, as you know, did this very deep dive into AI and its use in education.
They talked with hundreds of students, teachers, parents, education leaders, technologists in countries all over the world.
Their main takeaway was this.
I want to read this quote.
"At this point in its trajectory, the risks of utilizing generative AI in children's education overshadows its benefits. This is largely because the risks of AI differ in nature from its benefits. That is, these risks undermine children's foundational development and may prevent the benefits from being realized."
What do you make of that?
The technology right now stunts children in some way and blocks them from accessing its upsides.
I think any tool or technology, if you just throw it out there and hope for the best, you might not get the results you want.
A lot of these studies are based on students who have gotten traditional assignments and now have access to ChatGPT or Gemini.
Yes, most students, given that, will take the shortcut and make the AI do their work for them.
Of course, that's going to stunt their development.
That would be akin to if you had an unethical tutor or an unethical parent and they just did your homework for you.
Of course, that's going to stunt your development.
It's going to cause cognitive offloading.
The solution in my mind isn't to just say, "Oh, that was bad, let's just get rid of it altogether."
It's to say, "Okay, well, how do we put guardrails so that we can prevent that from happening?"
Maybe there's ways that we can use the same technology to actually put more critical thinking into the curriculum.
When a student answers a question, if the AI doesn't just answer it for them but says, "Hey, why did you answer it that way? Can you explain that a little bit more deeply?"
These are things that platforms like Khan Academy couldn't historically do.
But now we're going to be launching something hopefully in this coming year where the AI keeps pushing you.
Actually, we've already done versions of that in the last couple of years, where if you ask, "Hey, what's the answer here?"
The AI is not going to say, "Oh, the answer is B" or "It's clearly photosynthesis."
The AI is going to say, "Well, you tell me what you think it is. How are you going to approach this problem?"
That's what great tutors have always done.
It's not about technology.
It's about how you are actually using it.
Same thing with critical thinking, doing research, etcetera.
If you just feed the answer to someone, whether it's a human or the technology feeding the answer, yes, it's not going to be good.
My view on it is, yes, don't just leave it on kids' laptops all day.
We've complained to some of the AI providers that, "Look, you've got to take that out of your browser. It's undermining even the work that kids are doing on Khan Academy."
But it doesn't mean that you have to throw out AI altogether.
What you're going to see, we see this pendulum swinging every couple of years, is, no matter what happens in schools, kids from educated families, affluent families, they are getting access to the healthy versions of all of this.
If you take it out of schools, I think you're just going to drive more inequity.
My kids, I monitor what they're doing.
I don't let them just do whatever they want on a computer or on AI.
But if they're building a piece of software by vibe coding, if they're creating some digital art, if they're actually going deeper into a topic by asking questions and the AI is pushing them, that's a benefit for their cognition, not a detriment.
I watched one of your TED Talks from a couple of years ago, and you gave this very interesting vignette of a student who was reading "The Great Gatsby" and struggling to understand the metaphor at the end of it, the green light at the end of the pier, and wondering what that metaphor meant.
And you hypothesized this idea that an AI could create a fictionalized Jay Gatsby to interact with the child, and the child could then ask, "Why do you care about that green light at the end of the pier?"
And the student would get the answer.
So again, that is an interesting vignette, to interact with fictional characters, but isn't that also part of the problem that you're describing, that it is simply feeding the child the answer as opposed to teasing out a curiosity?
Yeah, so that is an example, and it wasn't just an idea.
That was an actual example from a program we have called Khan World School where we have kids from all over the world.
And this was actually a young woman from India who was reading "The Great Gatsby."
And in that school, when we make the students read something, they have to create videos of reflection where they have to follow a protocol, so it's truly their reflection.
But before she made a reflection, she did talk to an AI simulation of Jay Gatsby.
And the reality is that she spent a lot of time in that conversation, and she went way deeper into the discussion.
Even before generative AI, the whole Jay Gatsby looking at the green light on Daisy Buchanan's dock and all of that, that's been studied forever.
For the last 20 years, any kid could go onto Google and do a web search, or even when you and I were in school, could go to CliffsNotes and say, "This is what the experts say."
So that's always been there.
But here, she didn't do that.
She didn't just say, "Oh, the CliffsNotes say this," or "A Google web search says this is what the scholars say."
She had like a one-hour conversation with an AI simulation of Jay Gatsby.
She knew it was an AI simulation, but it allowed her to go way, way deeper into the issue.
And the AI didn't just answer her questions.
It pushed her thinking: "Hey, is there anything in your life that you've ever wanted so badly that it just kept you up at night?"
And so it made her reflect on it.
So I would argue it actually showed the best of what you would hope for in a Socratic discussion, and it made her go much deeper into the topic than if she was just in a traditional classroom and the teacher said, "This symbolism represents this," or if she did a Google search on it. This was a real discussion.
One of the other critiques that you sometimes hear is that you can't always trust what comes out of these devices.
We spoke with Becky Pringle, who's the president of the National Education Association.
She had put together a task force of early adopters of AI in education, and she said this concern came up from them.
Let's hear what she had to say.
One of the things they shared with me was that, as they talk with other educators, what the educators were saying to them is that they need strategies to actually address the reality that what we pull up on the internet through AI is not always accurate, and it might be biased.
And so what they are working on is teaching educators themselves how to discern it, and they're also using it as a teaching technique for students so that students do not rely on AI.
I've seen this myself, certainly asking AI things that I know something about, and sometimes it's wrong, and wrong with incredible gusto and confidence.
How do we address that, that what comes through these very confident-seeming technologies is not always right?
100%, and I actually think what she just talked about is exactly the right approach.
If you take it out of the curriculum, then kids, especially, I would say, those from lower socioeconomic demographics who might not get exposure at home or whose parents might not understand the nuances of this, will not have a framework. So when they see a deepfake online, when they see misinformation, or when they're talking to an AI and it's being sycophantic and it's just reinforcing and maybe even making up facts, they won't have the toolkit to be able to make sense of that.
First of all, I'll say this is not a new problem.
This existed; there was misinformation well before ChatGPT came along. But I do think it is worse now because it can be so convincing. This is where it's really valuable, once again, not to just throw kids onto ChatGPT and say, "Yeah, have fun with it," but to say, "Okay, let's use it for part of this project," and "Hey, what did it say to you?"
"Can you believe that?"
"Are there ways that you can validate that?"
"Does that make sense to you?"
I mean, just the other day, I was asking it about all the things going on in Iran, and I said, "Hey, Kharg Island, Strait of Hormuz," and the AI said, "Well, the ships might go to Kharg Island before going through the Strait of Hormuz," and I said, "Wait, doesn't it have to go through the Strait of Hormuz to get to Kharg Island?"
And it said, "Oh, yeah, you got me."
So you absolutely need to build that skill, but if you take it out of schools, you're not going to have the mechanism to build that skill.
So once again, it can be useful, but you need to know its limitations, and the only way that we can hopefully make sure a lot of kids understand those limitations is exactly what that teacher, that educator was talking about, not eliminating it, but putting it in the classroom in context where the kids can build those critical thinking skills.
Your book is full of examples; you literally post different chat transcripts of students' interactions with Khanmigo. Explain a little bit why those vignettes are in there and what they demonstrate with respect to all of these concerns.
Well, I think a lot of the debates in education, and we've seen this well before generative AI, in history classes, in the teaching of evolution, everything, are based on hearsay.
Someone hears, "Oh, I heard this happened in a classroom, that's really scary, I don't want that, let's ban that, let's take my kid out of the classroom," etcetera, etcetera.
But we find that when you actually give tangible examples of what is happening and what's not happening, the conversation becomes more constructive.
And so I didn't want the book to just say, "Hey, in theory, you could build critical thinking skills; in theory, you can monitor the AI so it's not cheating and it's asking Socratic questions."
That's very theoretical, but when you give tangible examples of exactly how that's happening, then I think the reader can say, "Oh, yeah, wow, if I or my child had that type of interaction with an AI, and it prevented me from the other types of interactions, I could see how that could be really constructive."
Yeah, there are plenty of examples where a child is seemingly asking for an answer and, to your point, it's not being sycophantic and just delivering the answer about where Kharg Island is vis-a-vis the Strait of Hormuz, but is saying, "Well, what is your understanding about that? Do you understand where the Strait is?"
I'm making this example up, but there are plenty of examples where it is trying to tease things out from the child, as you would hope a tutor would.
Exactly.
I've used Khanmigo myself.
I give this example in the book where I understand the basics of a supernova, but I never understood why it exploded.
I always said, "Well, if a star runs out of fuel or the fusion stops, why wouldn't it just collapse?"
And I went to Khanmigo.
If I went to ChatGPT or Claude and I asked it, it would just give me an explanation, like a Wikipedia-style explanation.
But when I went to Khanmigo and I asked it, "Why does a supernova explode?," it said, "Well, what do you know about a supernova?"
And I said, "Well, I know it runs out of fuel, but I would think it would just collapse. Why is it exploding?"
And it said, "Well, your intuition is right. Do you think it would collapse quickly or slowly?"
And I said, "Well, it's only stars of certain mass, maybe quickly."
And then it said, "Well, when something collapses quickly or falls due to gravity, does it ever bounce?"
And then it clicked in my head.
And I said, "Oh, my God, so you're saying it collapses so fast that it kind of compresses and then rebounds and jettisons," and this was me writing it, "jettisons the atmosphere or the outer layers of the star out into space."
And it said, "Exactly right."
And it said, "Just like if you put a ping pong ball on top of a basketball and you bounce it, the ping pong ball goes shooting off into the air."
And that has stayed with me... I tell that to friends at dinner parties now.
If I just read a Wikipedia article or ChatGPT or Claude just gave the explanation, I might have pretended that I understood it, but by having that back and forth, like you would with a great teacher, a real Socratic tutor, I really understood it much, much deeper.
I'm still amazed that you, Sal Khan, couldn't just say to Khanmigo, "Look, I created you. Can you just give me the answer?"
I appreciate the point you're making that it did help solidify your knowledge.
In the last minute or so that we have, let's say that parents are watching this and they have heard that AI is starting to be deployed in their child's school or is being considered, what are the things that you would argue they should ask about, know about, think about?
Yeah.
As a parent, I would be worried about just unfettered use of AI.
I actually think AI cheating is a huge problem in high school and in college.
Kids are doing it and it cannot be detected.
Anyone who tells you that AI cheating can be detected is not telling the full truth.
These detectors have a high false positive rate.
As a parent, as an educator, watch out for those things.
Anything that we want to be 100% sure that a student's capable of doing by themselves, we should be doing it in the classroom and proctoring them and not assuming that they're going to follow an honor code.
With that said, if your child does get some exposure to it, and I'm not talking all day, it could be a project here and there or in the classroom maybe once a week, 20 minutes, but it's in the context of what is it telling us?
How is it useful?
Can you believe this information?
How can you double check?
Can it be a Socratic tutor to make you go deeper into a math problem or a science problem or a history issue?
Then that's very powerful.
Sal Khan, the book is "Brave New Words: How AI Will Revolutionize Education and Why That's a Good Thing."
Thank you so much for being here.
Thanks for having me.
Before we go, we want to shine a light on a quite different trend in education, and that's getting very young kids out of buildings and into the great outdoors.
These classrooms go by different names, outdoor school or forest school, but the idea is simple.
You take kids, usually preschoolers, outside where natural lessons abound.
This one in Midland, Michigan is run by the Chippewa Nature Center, and in just one example, kids catch tadpoles and then frogs as a way of learning about the biological life cycle.
Outdoor schools like this are a small but growing movement, and while their numbers doubled in the years leading up to the pandemic, it's estimated there are still just under a thousand of them operating nationally.
Of course, many traditional schools have long incorporated some of these same concepts as well, getting kids outside, into school gardens, or on nature field trips as often as possible.
Studies show that outdoor schools prepare kids just as well for kindergarten as traditional ones do, but their supporters argue that the messy, ever-evolving, and even at times risky activities offer something invaluable.
Here's Jen Kurtz, who helps run the Chippewa school, describing this to my colleague Jeffrey Brown.
In a classroom, a lot of the things that you have are static and were designed to be played with in one particular way.
The natural environment changes every single day.
The weather changes, the humidity, there's scat left behind, there's new footprints, there's leaves that are chewed today that weren't chewed yesterday, and so there's just a natural curiosity that happens there.
All of our existence, kids have grown up outdoors.
That has changed in these current generations.
That is it for this episode of "Horizons."
You can watch us on YouTube or listen wherever you get your podcasts.
If you like what you hear, please subscribe and give us a rating.
It helps more people see this program.
Thank you so much for watching.
We'll see you next week.
Narrator: Support for "Horizons" has been provided by Steve and Marilyn Kerman and the Gordon and Betty Moore Foundation.
Additional support is provided by Friends of the News Hour.
♪ This program was made possible by contributions to your PBS station from viewers like you.
Thank you.
♪ You're watching PBS.