Emergence

What will emerge from AI in Education?

I just finished this fascinating book about the concept of emergence.

I have often heard people in the tech industry discussing the idea of the “emergent traits” of Large Language Models (such as ChatGPT). As their designers scale these models up to ever greater size, unexpected capabilities emerge from these huge pattern-matching systems. But why?

Look to the ant and consider its ways…

Emergence was first found in nature.

It can help explain natural phenomena such as why ant colonies are intricately organised without any centralised authority.

Contrary to popular belief, the queen ant is not an authority figure, and no hierarchical command structure exists within an ant community. Instead, the simple, local interactions between worker ants and pheromone trails ultimately shape the complex macro-behaviour of the colony. No individual ant knows what the whole colony is doing, and yet complex swarm intelligence can be observed.

Image: http://www.nextnature.net

Emergence: Higher-level, complex macro phenomena emerge from the simple decisions and actions at the local level.

Emergent traits can also be found in the beautiful formations of flocking birds: each individual bird follows simple rules, without any central coordination. If you have never seen one, check out a starling murmuration.
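If you like to tinker, the “simple local rules” behind a flock can be captured in a few lines of code. Below is a minimal, hypothetical sketch in the spirit of Craig Reynolds’ classic boids rules (separation, alignment and cohesion); the neighbourhood radii and weights are illustrative assumptions, not taken from the book or any particular model. Run it and the printed “flock alignment” number tends to climb from near zero towards one, a macro pattern that no single bird was ever instructed to produce.

```python
# A minimal boids-style sketch of emergence: each "bird" follows three
# purely local rules (separation, alignment, cohesion). No bird is told
# what the flock should look like, yet a coordinated flock appears.
# All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, STEPS = 60, 201                  # number of birds, simulation steps
pos = rng.uniform(0, 100, (N, 2))   # random starting positions in a 100x100 world
vel = rng.uniform(-1, 1, (N, 2))    # random starting velocities

def limit_speed(v, max_speed=2.0):
    """Cap each bird's speed so the simulation stays stable."""
    speed = np.linalg.norm(v, axis=1, keepdims=True)
    return np.where(speed > max_speed, v / (speed + 1e-12) * max_speed, v)

for step in range(STEPS):
    new_vel = vel.copy()
    for i in range(N):
        # Each bird only sees neighbours within a local radius.
        dist = np.linalg.norm(pos - pos[i], axis=1)
        neigh = (dist > 0) & (dist < 15)
        if not neigh.any():
            continue
        # Cohesion: drift gently toward the centre of nearby birds.
        cohesion = (pos[neigh].mean(axis=0) - pos[i]) * 0.01
        # Alignment: nudge velocity toward the neighbours' average heading.
        alignment = (vel[neigh].mean(axis=0) - vel[i]) * 0.05
        # Separation: push away from any bird that is too close.
        too_close = (dist > 0) & (dist < 4)
        separation = (pos[i] - pos[too_close]).sum(axis=0) * 0.05
        new_vel[i] = vel[i] + cohesion + alignment + separation
    vel = limit_speed(new_vel)
    pos = (pos + vel) % 100         # wrap around the edges of the world

    if step % 50 == 0:
        # Crude "order" measure: length of the average heading vector,
        # 0 = birds point every which way, 1 = perfectly aligned flock.
        headings = vel / (np.linalg.norm(vel, axis=1, keepdims=True) + 1e-12)
        print(f"step {step:3d}: flock alignment = {np.linalg.norm(headings.mean(axis=0)):.2f}")
```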

Human consciousness could be a particularly compelling example of emergence. Could consciousness arise from the interactions of billions of neurons in the human brain? Despite the relatively simple actions of each individual neuron, their collective interactions give rise to the macro trait of consciousness, a phenomenon not evident at the level of any individual neuron. Why not in animals, though? You will have to read the book.

Currently, I am most interested in the emergent macro-behaviours found at scale in human cultures. When we get thousands of people together, phenomena emerge that cannot be explained at the individual level.

Emergent traits of social media

One significant and alarming example is the set of emergent macro-behaviours we have all observed since the rise of social media.

Get billions of humans interacting with each other across online platforms and complex, unexpected phenomena emerge.

I recommend watching The A.I. Dilemma by the Center for Humane Technology. It explains how online interactions at the individual level are indirectly shaped by AI systems designed to maximise engagement, and how unexpected, dark emergent phenomena have resulted.

Macro traits such as disinformation amplified by the feedback loops of online echo chambers; a significant rise in social polarisation between political tribes; and a measurable decline in the mental health of our teenagers, especially young women.

Screenshot: The A.I. Dilemma

I do not believe any one individual in the tech industry willed these dark consequences. But despite the noble vision of creating a more efficient way to connect with others, the millions of online interactions have unfortunately brought about negative emergent traits at both the micro and macro levels of society.

What will Emerge from Generative AI?

Turning our attention now to AI in education, what emergent traits may appear as millions of students continue to use Generative AI?

Will these possible emergent macro-behaviours help or hinder our educational endeavour?

Less thinking = bad

A great fear for many people is that, at scale, Generative AI will reduce the amount of thinking and learning done by young people in schools and universities.

A possible emergent trait from this new technology could be whole generations of students who have less stamina to think deeply about anything. They just delegate their thinking to ChatGPT.

This may result in young people entering the workforce less equipped to think deeply about the many domain problems across our world. Despite the advancement of artificial intelligence, domain problems will always exist.

The literal definition of utopia is “no place”: a world without problems does not exist. Humans will always need to solve problems. Hence, young people should always be encouraged to become effective domain experts. Thinking is still required.

More thinking = better

Instead, my hope for Generative AI in education is that it may help students think more, not less.

Rather than seek to automate human thinking, we should continually strive for this technology to augment human thinking.

If, instead of opting out of thinking, more individuals choose to augment their thinking with Generative AI, we may then see positive emergent macro-behaviours across our societies.

Aim for augmentation, not automation

This paper by Erik Brynjolfsson has been the most influential reading in shaping my perspective on Generative AI. He wrote it before ChatGPT was released, but conceptually it is very powerful. A must-read:

The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence

He convincingly argues that augmentation, rather than automation, has far greater potential for positive societal change.

Illustration: The Turing Trap

Just as with ants, no individual can control the whole. Paradoxically, however, it is at the local level that we have the most chance to influence the macro, emergent behaviours of our society.

A positive vision for Generative AI in education should start with teachers.

As we move forward, may we find ways to use AI to augment human learning and cognition.

Let us stop trying to delegate our own thinking; rather, let us use our human expertise, ingenuity and creativity to leverage this technology to help our students think more deeply about what they are learning.

If enough teachers and students start aiming for this endeavour, who knows what traits may emerge.
