“Nothing about us, without us”.
This statement was used in a talk at UNESCO’s Digital Learning Week, held recently in Paris, where a speaker from Brazil described how indigenous people feel about research, decisions and policies that could affect them, be it in education or technology, and AI in particular.
This quote resonated with me. I found it quite powerful, and one that applies in many contexts. It is the English equivalent of the Latin expression 'nihil de nobis, sine nobis', which has been used over the years, in different forms, in areas ranging from politics to disability rights, by individuals and organisations. It most likely originated in Central Europe in the 16th century and is still in use today. I think it is spot-on when talking about AI in general, but it also applies perfectly to education, and not only to indigenous people or South America. When it comes to AI, or emerging technologies more broadly, I will allow myself to add the following observations about the current state of things:
- There should be nothing about students without students. The younger generation is ahead of us in the use of new technologies (social media being one example), and they must be involved in the decisions being made. Their voices and their input on how they learn, what they learn and what they prefer all matter, and they should be able to contribute to their own learning rather than being seen simply as recipients.
- There should be nothing about teachers without teachers. Desk research and general policies made on behalf of practitioners, without involving them, cannot on their own determine what teachers need most and what works best in their classrooms. Theory is helpful, but it is not enough.
- There should be nothing about schools and universities without all the key people in these institutions sharing their thoughts and suggestions to guide the process; not only teachers and students, but also managers and administrators, and beyond. Nor can there be one recipe that fits all: nobody knows these institutions better than their own managers and the people walking their corridors every day.
- There should be nothing about any population or group without the real and serious involvement of that population or group. The argument that relevant data has informed certain decisions is not enough, nor is the claim that some ‘know better’ and can simply dictate their views; that is illogical, untrue and mostly unproductive. It should always be a collaborative effort, where each contributor brings something to the table. Nobody has a monopoly on knowledge and wisdom, and certainly not in education.
- There should be nothing about education without all education stakeholders. In a world where AI is increasingly taking centre stage (whether that is good, bad or somewhere in between is a different discussion), decisions should not be the mere fruit of a tech-driven approach – whatever larger companies happen to have on the shelf for us to buy – and people working directly in education have a responsibility, and indeed an obligation, to do more than simply take a back seat or constantly wear their critics’ hats.
- There should be nothing about a subject without subject experts. This applies to mathematics just as it applies to other areas. Relying solely on automatic recognition and generation of content is not enough while inconsistencies remain so flagrant. There is no doubt that the technology is improving, but it has not yet reached a state where it can be fully trusted, at least not in all aspects of education.
- There should be nothing about humans without humans! AI tools, mainly LLMs, should not be asked to decide on behalf of teachers and learners what works and what does not, what is better and what is worse, and ultimately what is right and what is wrong. There is a growing body of research on the cognitive offloading that heavy use of LLMs can induce (see Ershov, 2025; Lee et al., 2025; Singh et al., 2025, for example), and this is a serious concern that deserves investigation. But it does not stop there: when it comes to decision making, and despite all the uncertainty around AI, some people seem to be moving, in their interactions with LLMs, from consulting to delegating to outright offloading. Another talk at the same event highlighted the importance of co-evolution with AI, which could be a healthier perspective, but we first need to unpack what lies behind the ‘co-’ in such a model.
We live in exciting times, no doubt, but we need to tread carefully.
Until we meet agAIn!
References:
- Ershov, D. (2025, September 12). Long reads: Is AI the end of critical thinking? UCL School of Management.
- Lee, H.-P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In N. Yamashita & V. Evers (Chairs), CHI ’25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems [Article no. 1121].
- Singh, A., Taneja, K., Guan, Z., & Ghosh, A. (2025). Protecting human cognition in the age of AI. arXiv.
Join the conversation: You can tweet us @CambridgeMaths or comment below.