
Exploring the Impact of Generative AI in Education
16th January 2024

Over the past year, widespread public accessibility to generative Artificial Intelligence (gen AI) programs such as ChatGPT and Bard has made frequent headlines. This raises important questions about the full extent of gen AI’s capabilities, how it can be used effectively, and how safeguards can be put in place to address important practical and ethical concerns.

Generative AI is the talk of tech circles, with almost all of the new companies in Y Combinator working on generative AI despite a slowdown in investment across the broader venture industry. Additionally, Amazon, Google and Meta spent approximately $100bn on building new data centre capacity in 2023 (Goldman Sachs). Combined with ChatGPT’s claimed 100 million-plus weekly active users and OpenAI’s reported $1.3bn in revenue, the explosive growth and uptake of gen AI tools suggests there will be significant impacts across industries and workforces.

How is AI impacting education?

One of the sectors already experiencing this impact is education. AI has the potential to address some of the biggest challenges facing education today – for example, it can enhance the student experience by innovating learning and teaching practices, and it can empower students to learn at their own pace through a personalised approach.

However, these advancements carry multiple risks and challenges. For example, AI could worsen inequalities in access to knowledge and fail to account for diversity in cultural expression. The content produced by AI can be inaccurate, biased, inappropriate or outdated, and if students use generative AI tools to produce content, this could lead to issues of plagiarism.

So, to ensure AI is used in a way that supports human intelligence, new policies and frameworks guided by principles of inclusivity and equity will need to be created. This will maximise the benefits for everybody involved, particularly those from disadvantaged backgrounds or those with additional learning needs.

Technology in education is nothing new. From calculators to spreadsheets to gamified learning programs, technology has always been widely embraced by the sector when it is believed to improve the learning experience through speed, accuracy and engagement.

How are people responding to the adoption of AI in Education?

Can information produced by AI be trusted?

Research by Clusters into the adoption of AI within learning environments has shown that there is a lack of trust in the technology in its current form. While most respondents consider AI to be a useful tool with the potential to make life easier by reducing time spent on mundane tasks, there is still a great deal of distrust in the outputs produced by generative AI.

Just 1 in 4 respondents said they were confident in the accuracy of answers provided by AI chatbots (Clusters, 2023), and approximately one third of those who had used generative AI tools did not believe the tools always produced factually accurate answers (Deloitte, 2023).

It’s fast becoming clear that gen AI tools are not information retrieval systems – they can spit out something that looks like a convincing answer to someone who isn’t a subject expert, but it may not always be the right answer. This poses a new question to consider, as framed by Benedict Evans: generative machine learning seems brilliant, but can it actually understand?

What other concerns are there with generative AI in education?

Beyond producing incorrect answers, we found further limitations to the widespread adoption of generative AI. Clusters’ global survey of students and recent graduates found that 7 in 10 still consider face-to-face human interaction more valuable than a digital platform for personal and professional development. When we asked participants whether they trust humans more than AI, respondents from India and China were significantly more likely to agree with the statement than respondents in Europe and the US.

Millennials were also much more likely than Gen Z to back humans over machines. Attitudes towards AI use are not universal: there is considerable variance across age and geography alone, further highlighting the need to consider diversity in cultural expression when creating guiding policies and frameworks for the use of generative AI within education.

Furthermore, 50% of respondents expressed concern that AI would negatively impact how people work, study and learn. They cited fears such as a loss of critical-thinking skills, increased levels of hard-to-detect plagiarism, over-reliance on devices and the spread of misinformation from questionable sources.

It will be vital, therefore, to teach students about the limitations and reliability of AI, as well as how to critically assess the validity of the sources it uses. Students and teachers alike must be clear on how to understand and respect copyright, IP and plagiarism rules.

In Summary

In the early stages of this technology shift, accurately predicting the full impact of generative AI on education is difficult. The technology is still evolving, and it is clear that there are concerns to address and that caution should be exercised. Experts are working to understand how to turn it into a viable product that goes beyond complementing existing tasks, as it is mostly used today, and instead leverages the full spectrum of AI capabilities to change the way we work.

While it can be tempting to rush into early adoption to avoid ‘being left behind’, it is important to take a human-centric approach to AI. Keep the needs and wants of your end users, whether they are students, employees or teachers, at the forefront of research. Understanding the attitudes and aspirations of your target group through market research techniques will help to ensure you deploy AI technologies in ways that enhance human capacities, support learning, and encourage sustainable human-AI collaboration, all while minimising the potential for harm.

Want these kinds of results?

We’d love to talk with you about how our insights could help your business grow. Drop us an email at hello@clusters.uk.com or call us on +44 (0)20 7842 6830.

