Artificial Intelligence for kids.

First some personal background.
I have been volunteering at a CoderDojo for quite some years now. Once a month there is a dojo where kids between 12 and 18 (ninjas) have the opportunity to learn programming with the help of us (coaches). And once a year, ninjas from across the country can come together to show off a bigger project they have worked on all year round (Coolest Projects).
Yesterday was the first Coolest Projects I attended. But only as a visitor: I didn't help much with the projects from our ninjas, nor did I help set up the event.

I am strongly against using AI, large language models (LLMs), code agents, vibe/agentic coding, AI overviews and the rest. For plenty of reasons. You might have to find another post to learn why. But relevant for now: I don't think using those tools is a good way to learn any craft. So if your goal is to learn, read a book and do it yourself.

OK, I don't have anything against actual Artificial Intelligence, or transformer models, or large language models, or convolutional networks, or deep learning, or ...
I welcome technological advances, and if they produce intelligence (definition tbd) I will respect it. It's just that the term AI is incredibly overloaded, and overused by marketing. Just like everything had to be smart, or run on the cloud. It wasn't smart, it was just another computer. It wasn't on a cloud, it was just another computer. I guess I have more stuff to post about.

There were cool projects yesterday, like a round chessboard with lights to show legal moves (I beat the kid twice), an electronic Buffon's needle experiment, and remote-controlled PvP tanks with visual object tracking. While walking around the venue and looking at all the projects, I started to think that pretty much all of these kids had used AI. Either for writing code, making pictures for slides and posters, creating a web interface, or just for looking stuff up. I didn't really want to know, so I never asked, but they either told me outright or it was (painfully) obvious. And it did sadden me. I don't think that's how you should learn, and I fear these kids won't have acquired the skills behind their own projects.

But I started to think about how Coolest Projects went before all this AI stuff. I don't think the ninjas back then were either smarter or dumber, but they must have been just as enthusiastic about their projects. Instead of AI, they must have gotten help from parents or coaches, or taken a lot of inspiration from an already existing project. To the ninjas, I don't think the experience is that much different. They still come up with a magical project they want to make, and the entities around them do their best to help them make it a reality. It's not like they become less enthusiastic by using ChatGPT. This enthusiasm is what allows learning and the development of skill and knowledge.

So what is the difference, then? Well, if a ninja asks me something about their code, it could come from another coach, but now also from an LLM. In both cases I have no context about it or the original author's intent. I can help the ninja along and solve their problem, and then talk it over with the coach who helped them before. Either I learn their intent, or we find out about a mistake and learn to do better. But there is no such learning with ChatGPT, or Claude, or Gemini, or Copilot. I can't ring up OpenAI, Anthropic, Google or Microsoft to get them to explain the output, nor can I tell them that it should be something else. Sure, I can prompt the same model to explain, but I can guess what the answer will be. It will be meaningless regardless. Even though these models involve deep learning, there is no learning involved.