Will artificial intelligence soon take over Flemish education?

26 May 2021

Today, it is impossible to imagine our society without AI. We all use it daily, often without even realising it. Worse still, few people actually know what AI is all about. And that is a frightening reality.

In almost all areas of society, AI is on the rise, and education is no different. Because we are building an online portal for customised digital learning with i-Learn, and because imec, a research centre for nanoelectronics and digital technology, is one of the implementing partners, i-Learn is quickly associated with AI. Our portal itself will not use AI for the time being, but some of the educational tools we provide access to do. It therefore seemed obvious to dedicate a blog post to AI in education.

Mieke De Ketelaere

Fortunately, with i-Learn we are in the unique position of having many experts at our fingertips. For AI, one of the best-known experts is Mieke De Ketelaere, director of AI at imec (IDLab). She recently published the book Human versus Machine: Artificial Intelligence Unravelled (Pelckmans, 2020) and seemed the perfect person to explain in plain language what we are actually dealing with. We called her to get more insight into AI and its use in education.

Hi Mieke, thanks for taking the time to talk to us about artificial intelligence.

With pleasure!

Perhaps it would be a good idea to start off with a definition of AI. Can you briefly explain what AI is?

To put it very briefly: AI is a system that is able to learn and make decisions on its own based on what it has learned. This simple definition applies broadly to the type of AI that is most commonly used today.

You call yourself an AI translator. What exactly does that mean?

AI was originally an academic discipline. Since the arrival of the big American players in the 1990s (Google, Amazon, etc.) and later the Chinese players (e.g. Alibaba), AI has very quickly found its way into the outside world. Society has effectively been overtaken by that speed: the rapid implementation has created problems in understanding and framing AI.

Just look at other disciplines, such as medicine. That field had centuries before science took the step towards commercial solutions. As a result, everyone in the world now speaks the same language when it comes to medicine: a human bone has one specific, global name. With AI, everything has moved extremely fast, with the result that you can talk about AI and still mean something completely different from someone else talking about AI.

Unfortunately, because of this speed in the society-wide application of AI, a translation battle has been lost: the battle between the engineers who create the systems and the business leaders who use AI in a specific context. The people who deploy AI systems usually do not fully grasp what AI is and what it can or cannot do. Today’s data-driven AI systems can deal well with uncertainty, and that is ideal, since we as humans do not like to be confronted with uncertainty. However, compared to rule-based systems, these AI systems have certain margins of error, and their users are often insufficiently aware of this. Companies will, for example, rely on the predictions of their AI system without taking that possible margin of error into account. That, of course, creates tensions when the system makes a mistake, and we quickly get into the topic of liability. Who is to blame if an AI system makes a wrong decision? Governments or business leaders will be quick to point the finger at the engineer who built the system, but that is unfair: the engineer built the system without knowing in which contexts it would be used, and rarely gets involved again when those contexts change. There is therefore a great need for AI translators who can look at AI from different disciplines and point out the bottlenecks.

Are there different types of AI?

What is most often referred to as AI nowadays is connectionism, one of the classes within the field of AI. Connectionism assumes that intelligence originates in the human brain. Researchers looked at how the brain is organised into layers of neurons and how, by processing signals, we arrive at a decision at a given moment, and replicated that in a digital format. However, there are also organisms on earth without a brain that show signs of intelligence, such as earthworms or plants. Darwinism looks at intelligence in a different way and holds that evolution must be taken into account. The symbolists, on the other hand, base themselves mainly on logic and rules. Such AI systems work on the basis of deduction, so the answer from such a system is always true and verifiable. However, these symbolic systems cannot make decisions in situations unknown to them. Another category, the Bayesians, approach intelligence from Bayesian statistics: adherents of this school look at data from the past and relate it to predictions about the future. Finally, there are the analogisers, who assume that we gain knowledge by analysing analogies. A recommendation system is a good example of this: your buying behaviour is predicted on the basis of the buying behaviour of similar people.
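To make the analogisers’ example concrete, a recommender of this kind can be sketched in a few lines of Python. The purchase matrix and the `recommend` helper below are invented purely for illustration; real recommendation systems are far more sophisticated.

```python
import numpy as np

# Rows: users, columns: products; 1 means the user bought the product.
# The data is made up purely for illustration.
purchases = np.array([
    [1, 1, 0, 1, 0],   # user 0
    [1, 1, 0, 0, 1],   # user 1
    [0, 0, 1, 1, 0],   # user 2
    [1, 0, 0, 1, 0],   # user 3 (the user we recommend for)
])

def recommend(user_idx, purchases, k=2):
    """Score unbought products by the purchases of the k most similar users."""
    target = purchases[user_idx]
    # Cosine similarity between the target user and every other user.
    norms = np.linalg.norm(purchases, axis=1) * np.linalg.norm(target)
    sims = purchases @ target / np.where(norms == 0, 1, norms)
    sims[user_idx] = -1                      # exclude the user themselves
    neighbours = np.argsort(sims)[-k:]       # the k most similar users
    scores = purchases[neighbours].sum(axis=0).astype(float)
    scores[target == 1] = -1                 # don't recommend what they own
    return int(np.argmax(scores))

print(recommend(3, purchases))  # -> 1: the product that similar users bought
```

The system reasons exactly as Mieke describes: it knows nothing about the products themselves, only about the analogy between this user and similar users.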

The current trend is to connect the different categories more and more. Connectionism, for example, is very good at learning, but not yet good at reasoning. So we will look at the other categories to incorporate that knowledge into our autonomous systems.

How did AI come about?

The term artificial intelligence was first used at a conference in 1956. The participants wanted to find out whether intelligence could be digitised. This is how AI was born; at that time it consisted mainly of rule-based systems (e.g. commands like “when the sun shines on the window, close the blinds”). These first AI systems did not require much computing power either. The idea was to mimic human intelligence so that these systems could take over dangerous, dull or dirty tasks from humans. This list of three Ds was later supplemented by the dream of having AI systems perform difficult tasks. The challenge is not insignificant, because human intelligence has many aspects, and just defining intelligence is far from easy.
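Such a rule-based system is easy to picture in code. Below is a minimal sketch; the sensor reading and the threshold are invented for illustration.

```python
# A minimal rule-based "AI": behaviour follows fixed, hand-written rules.
# The light threshold below is a made-up value for illustration.
SUN_THRESHOLD_LUX = 50_000

def decide_blinds(light_level_lux: int) -> str:
    # Rule: when the sun shines on the window, close the blinds.
    if light_level_lux >= SUN_THRESHOLD_LUX:
        return "close"
    return "open"

print(decide_blinds(80_000))  # -> "close"
print(decide_blinds(10_000))  # -> "open"
```

Note that nothing is learned here: unlike the data-driven systems discussed above, the rule never changes unless a human rewrites it.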

So we want to mimic intelligence with AI, but how reliable is it? AI is regularly in the news in a negative way (e.g. in connection with racial bias in facial recognition systems). If we then look at the target audience of i-Learn, teachers, there is also a fear that a computer will one day decide whether a pupil can move on to the next grade, purely by looking at data. Is that fear justified?

It is a very justified fear. In the book Weapons of Math Destruction, Cathy O’Neil gives many examples of this kind of situation. Personally, I think it is not the right approach to use AI for autonomous decisions that can have an impact on someone’s life (e.g. a job, a prison sentence, studies). You can use AI to gain insights and process large amounts of data, but the final decision should still be made by a human.

Today, we are too quick to let AI systems take an autonomous action or decision in order to speed up certain processes or reduce manual costs. However, it is better to use these AI systems to gain new, previously unattainable insights, as they have the power to process enormous amounts of data. On the other hand, I don’t think we should use them to make autonomous decisions, because our world is often complex, training data contains human biases and not everyone fits into standard pigeonholes.

AI, however, can help to smooth out the rough edges of some human decisions. Every human being has certain biases, and we are not always aware of them. By feeding an AI system with data from different people, each with their own biases, the individual biases can average out, so that the system can make a more objective, neutral decision than a single human. But AI will never be completely neutral, precisely because AI systems are fed data by humans. If we want to tackle the problem of bias, we need to start with the human, not the system. Interestingly, some companies are now using AI precisely to expose and actively address internal biases.
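One simple way to picture this averaging-out is label aggregation: when several people, each with their own bias, label the same examples, a majority vote smooths out individual quirks. The annotators and labels below are hypothetical.

```python
from collections import Counter

# Hypothetical labels from three annotators for the same five examples.
# Each annotator has their own bias; a majority vote smooths them out.
annotator_a = ["pass", "pass", "fail", "pass", "fail"]
annotator_b = ["pass", "fail", "fail", "pass", "pass"]
annotator_c = ["pass", "pass", "fail", "fail", "fail"]

def majority_vote(*label_lists):
    """Return, per example, the label most annotators agreed on."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*label_lists)]

print(majority_vote(annotator_a, annotator_b, annotator_c))
# -> ['pass', 'pass', 'fail', 'pass', 'fail']
```

As Mieke notes, this only dilutes individual bias: if all annotators share the same bias, the vote inherits it, which is why the problem must ultimately be tackled with the human, not the system.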

What opportunities do you see for the application of AI in education? What dull, dirty, dangerous or difficult tasks could AI perform for teachers?

Education now focuses on the average student. AI could help with the boring moments in the classroom by challenging the stronger students and offering extra exercises for the weaker ones. In this way, lessons are no longer taught only at the level of the bulk of the Gaussian curve; the outliers on both sides can also be served in a personalised manner. AI bots (i.e. computer programmes that can perform specific tasks autonomously) can, for example, determine what is going on in a pupil’s head through new digital data streams (such as online click behaviour), from which you could deduce whether he or she is stressed or distracted. AI can therefore also play a role on an affective or metacognitive level and is not necessarily limited to adaptive tools that automatically adjust the level of an exercise. I see AI in education as an advisory body, both on a content level (how well does the student master the subject matter?) and on an emotional level (measuring stress or motivation). Especially now, during the corona crisis, AI can play an important role in proactively detecting school dropouts.
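The simplest of these tools, one that automatically adjusts the level of an exercise, could look something like the sketch below. The window size, thresholds and level range are invented for illustration; real adaptive learning tools use much richer models.

```python
# Hypothetical adaptive-difficulty rule: adjust the exercise level based on
# a pupil's recent answers. Window size and thresholds are made up here.
RECENT_WINDOW = 5

def next_level(current_level: int, recent_correct: list[bool]) -> int:
    """Raise, lower or keep the level based on the last few answers."""
    window = recent_correct[-RECENT_WINDOW:]
    accuracy = sum(window) / len(window)
    if accuracy >= 0.8:            # mostly right: challenge the pupil
        return min(current_level + 1, 10)
    if accuracy <= 0.4:            # mostly wrong: offer easier exercises
        return max(current_level - 1, 1)
    return current_level           # otherwise, stay at this level

print(next_level(5, [True, True, True, True, False]))    # -> 6
print(next_level(5, [False, False, True, False, False])) # -> 4
```

A real tool would also feed these signals back to the teacher rather than acting fully autonomously, in line with the human-in-the-loop principle discussed below.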

By using AI, education can respond even better to the needs of each student. For i-Learn, this is of course great news: we simply want to make digital learning tailored to each child possible, so that the teacher no longer has to teach to an average, fictitious pupil. Self-learning AI is therefore very promising. Is this something we will soon find in education, or are we being too idealistic?

Some countries are already working on it, especially the Scandinavian ones, so it is certainly possible in the short term. But I would like to add a caveat. I am myself in favour of the human-in-the-loop principle. Self-learning is good, but we must not forget that self-learning systems are easy to deceive: once you understand how such a system works, you can manipulate it into behaving the way you want. Smart learners will quickly work out what they need to do to get easier exercises. It is therefore important that the human aspect remains present and that everyone takes responsibility. The AI system can genuinely help, but as a teacher and as a school you have the responsibility not to fall back 100% on its results.

So teachers should not fear that they will be replaced by AI systems in the future?

No, not at all. The human aspect remains essential. AI actually takes a lot of inspiration from education. Important, for example, is incremental learning: we do not make our AI systems top of the class in one go; we teach them a small piece of knowledge, see how they react to it, and then build on that with new pieces of knowledge. This is exactly what happens in a classroom: the teacher explains the theory, lets the pupils practise with some exercises, eventually tests how well they master the subject matter, and then adjusts where necessary. After the current corona crisis, AI could play a major role in determining which learning material has been sufficiently mastered and where learning deficits need to be made up.
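Incremental (or online) learning is a standard machine-learning technique. As a hypothetical illustration of the lesson-practise-test loop Mieke describes, scikit-learn’s `partial_fit` lets a model learn from one small batch at a time; the toy data below is invented.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()  # a linear classifier that supports online learning

# Toy stream: points above the line x1 + x2 = 1 belong to class 1.
for batch in range(10):
    X = rng.random((20, 2))                            # one small "lesson"
    y = (X.sum(axis=1) > 1.0).astype(int)
    model.partial_fit(X, y, classes=np.array([0, 1]))  # learn just this piece
    # "Test" after each lesson before moving on to the next one.
    X_test = rng.random((100, 2))
    y_test = (X_test.sum(axis=1) > 1.0).astype(int)
    print(f"batch {batch}: accuracy {model.score(X_test, y_test):.2f}")
```

Each call to `partial_fit` sees only its own batch, yet the model keeps building on everything it has learned before; the accuracy printout is the “test” after each “lesson”.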

We see that EdTech is on the rise in Flanders. The public at large has also discovered this ‘thanks’ to the lockdown in March 2020, when schools started using EdTech en masse. But how far along are we in Flanders? Are we ahead of or behind the rest of the world?

China, for example, is already extremely advanced, to the extent that pupils are taught by robots. Chinese children grow up with robots in the classroom, which makes this perfectly normal for them. That is already very far-reaching, I think. Belgium, by contrast, is not exactly a pioneer.

I myself notice that today’s pupils are sometimes still confronted with the same frustrations I had during my youth. In Belgium, at least, my children’s schools are still very rule-based: pupils first learn the theory and then have to apply it in an almost mechanical way, which sometimes produces situations that feel unnatural. Let me give an example: in NT2 lessons, my Dutch-speaking children here in Wallonia learn that the correct question is “Hoe lang ben jij?” That comes across as very artificial, because in practice we say “Hoe groot ben jij?”; in the NT2 lesson, however, they learn that “groot” is used for a surface area. Of course I understand the language rule when I think about it, but I am convinced that you learn a foreign language better through experience than by learning rules and words. The gap between theory and practice is too big.

This “learning by experience” approach is not yet sufficiently embedded in classroom practice. AI can play an important role here by introducing learning material through gamification, which can also make pupils more enthusiastic about it. There is no child that does not like to play, so why not combine the game element with knowledge acquisition? Other countries apply this more often, so Belgium is lagging behind a bit in this respect. By learning through experience, you also remember things much better, because the learning material is linked to a realistic situation that you have consciously experienced. Moreover, it helps pupils understand why they need to learn and master certain things. In Belgium, there are already innovative schools that have abandoned the traditional educational approach and focus entirely on experience-based learning; LAB (https://www.labonderwijs.be/, in Dutch) is a good example. We should try to get away from our silo thinking and work in a more cross-curricular and project-based way.

Do we need an AI subject in education, do you think?

I don’t think it makes much sense to create a separate subject on AI. AI is not a standalone discipline; it is intertwined with so many fields that every subject should really pay attention to the self-learning systems used in its own domain (e.g. aesthetics, economics, etc.).

I think it is also very important to emphasise that in the coming years, AI will not be able to reason or devise strategies. Therefore, it is important that our education focuses on the skills that distinguish us from AI systems, such as strategic thinking, empathy, social skills, reading between the lines and interpersonal connections. Pupils would then be able to master strategic thinking and use the insights of AI systems more efficiently.

At the moment, a lot of effort is put into STEM, which means that art subjects have to make way. That is actually not very logical, since AI can easily take over those purely scientific matters; what AI cannot do concerns the more human aspects. And it is precisely this human approach that is essential when deploying AI. As I said before, we will always need a human being to judge the results that an AI system returns. From a purely technical point of view, an engineer is happy when his AI system works and is accurate. But when the AI system is used in a certain context (e.g. facial recognition), it can wreak havoc on human lives if there is no human factor to double-check the results. This has far-reaching consequences that are currently insufficiently considered in the development of AI systems. Therefore, already in the design phase of AI, you need not only engineers, but also sociologists, lawyers and so on.

Technology and society are inextricably linked and we must all take responsibility together for building responsible AI systems. Thus, a multidisciplinary approach to AI is imperative. AI systems operate in complex environments, so it is the responsibility of every stakeholder to think – within multidisciplinary teams – about what happens to that technology. It is therefore important that our children receive the right training for this.

Can you give our readers some tips on how to incorporate AI into their lessons?

There are already many interesting initiatives that teachers can make use of. For example, the proceeds of my book go to Dwengo, an organisation that creates a lot of material and workshops about computer science and robotics. And, of course, there is the EDUbox AI (in Dutch), a ready-made teaching package intended for secondary schools.

The lack of attention to AI in education is not due to a lack of teaching material. It is more a case of complexity bias: people think AI is too difficult to explain. Companies should also do their bit, through corporate social responsibility, for example by having employees give lessons in schools from time to time about topics such as privacy and data. As an employee, you learn what questions children struggle with and how they view technology; the pupils, in turn, will be grateful and will be up to speed with technology. It is important that pupils are made aware of the impact of technology on their lives. Some young people only realise too late what digital DNA they leave behind during their youth and student days: photos that they might prefer not to have online when they later go looking for a job…

So there is still a lot of work to be done to make sure that everyone understands the impact of AI, and it is best to start as early as possible.

Indeed.

Apart from this necessary awareness-raising, there are also some ethical questions to be asked about the use of AI, aren’t there? For example, if we look at AI in education, certain ethical questions arise, such as: who has the last word: the teacher or the tool? And what about all that data that is being collected about our children? Are these ethical aspects of AI already being considered and is a legal framework envisaged?

That ethical aspect is indeed very topical. In 2019, for example, Europe defined a set of ethical AI guidelines, which were recently cast into a regulation (Europe fit for the Digital Age: Artificial Intelligence (europa.eu)). Some people don’t care if anyone can use their personal data, but every individual should have the right to decide for themselves what they want to disclose. Everyone is gradually realising that we can no longer continue as we are, sending our data without a second thought to any company that asks for it. Data vaults could be a possible solution to this (e.g. Solid). This will become very relevant in both B2C and B2B contexts.

It is important, however, that the translation is not lost in these ethical and legal frameworks, which unfortunately is still often the case. It is not enough to let lawyers make the rules. We must set limits on AI, but those limits must be co-designed by ethicists, economists, sociologists and engineers, each of whom can flag potentially problematic situations from their own domain. That is easier said than done in a rapidly changing technological world in which AI is starting to mix like a sauce into processes across all sectors.

So there is still some work to be done to determine the right direction for AI. Thanks for this inspiring and instructive interview, Mieke!