Education in an Era of Pervasive Automation

This is the third of three interviews for the special issue of Postdigital Science and Education, ‘Education in the Automated Age’, edited by Neil Selwyn, Thomas Hillman, Annika Bergviken Rensfeldt, and Carlo Perrotta. Here, we engage with different experts working outside of the education research domain who are nevertheless interested in the rise of artificial intelligence and other forms of automated decision-making. This article presents a conversation with Professor Mark Andrejevic. Mark is based at Monash University, Melbourne, and is a leading commentator on surveillance, information, and digital media. As well as current projects looking at facial recognition and tracking Internet ‘dark advertising’, Mark is also a chief investigator in the Australian Research Council Centre of Excellence for Automated Decision-Making and Society. In this interview, Mark talks with Neil Selwyn about his book Automated Media (Andrejevic 2020) — one of the most interesting recent theoretical explorations of emerging forms of automation associated with the rise of AI and other digital systems. Mark and Neil talk through some of the main arguments that are developed in the book and consider how these are beginning to play out through the forms of automation now emerging in schools, universities, and other education contexts.


MA: Probably the overarching big-picture set of claims that I've been formulating around the notion of automated media is what I call the cascading logic of automation. I think this is applicable to a whole range of areas - from policing, to education, to marketing, and so on. When you equip the world with interactive devices and interactive sensors that collect huge amounts of information, it's impossible to manage that information on a human scale. Which means that once you automate the information collection process, you end up having to automate the sense-making process. In blunt terms, how else are you going to process all of that data? It's got to be managed automatically. How are you going to make decisions based on that data? Increasingly, the direction is towards automating those decisions.
The alarming thing to me is that almost any process can be subjected to that. Pick something like university admissions. Of course, we have historical processes for evaluating applications, but once you're able to collect streams of data that were hitherto unavailable, then things change. Now you can collect data on student behaviour in the classroom on a minute-by-minute basis that can give you profiles of curiosity levels or activity levels. So, you start to generate streams of information that are too copious for something like an admissions committee to deal with. That then leads you to think, 'All right, can we make some type of decision based on these streams of data that are too big for us to manage?' We need some kind of automated system that's going to do that, and maybe it will take a stream of data that we get about day-to-day performance in the classroom and correlate that with future success in the university. But that is not something that humans can do - it's something that's got to be done at an automated scale. And if you do it at a really large scale, then holding it accountable becomes a big challenge.
NS: There's so much to unpack there. Education is an information environment, and, in many ways, information curation has always been something that educators are involved in. On the flipside, as you say, gaining insights from the information flows coming out of education institutions is also proving to be attractive. Indeed, most educators accept that education isn't an exact science - it's based on guesswork and hunches. So, a lot of people are happy to take the stance that the more insights that can be gained from data, the better.
One of the things I want to think about is where automated technologies and these logics are actually hitting education, and maybe, perhaps, have some value. One of the things that you talk about in the book is that there is a lot that could be automated in education, but perhaps the most important question is what should be automated. You seem to have reservations about judgement - in particular, about technologies which are used to automate judgements. You argue that automated decision-making technologies are not equipped to substitute for the task of judgement. Can you expand on that thought?

MA: I think judgement is a really tricky category when you try to drill down into it, because we often use the term 'judgement' in quite a loose way to refer to decisions that are logically deducible from the available information. So, I wouldn't really count something like a syllogism as a judgement. All men are human, Socrates is a man, therefore Socrates is human. You don't need to judge anything there. If you've got the decision criteria, and you have the inputs, then you will get an automatic outcome. That would be something that we might describe as an automated decision. So in terms of education, to be eligible for a particular grad-school program, you might be required to get above a particular score on a particular test. It's pretty easy to find out if you did get that score, and if not, then you're not in. That, to me, is not an active judgement.
An active judgement is that moment when the data and the logic don't necessarily give you a defined conclusion. In colloquial terms, this might be like those occasions when you've got a really difficult decision to make and somebody says, 'Well, write down all the pros and all the cons'. The fantasy there is that somehow there is just one solution that the available evidence can point to. But judgement is always that moment when you've got to make a certain type of leap. It's not a syllogistic outcome. It's not that the data automatically gives you an answer. That's a very tricky human thing to deal with.
Hiring processes are interesting in that regard. We know that when a human is deciding who to hire for a particular position, there is an irreducible subjective element. Folks who are in favour of automated decision making will say, 'Well, that human element can be biased and can reproduce prejudices'. That's true. They'll also say, 'But machines can be programmed in ways to eliminate that'. That's a little bit sketchier as a claim because we know that it takes unbiased data to generate unbiased decisions, and we also know it's very hard to find unbiased data.
I think many of the important decisions that we make in the social context are really matters of judgement, even though we might like to imagine that they're not, and that an automated data processor can somehow make that decision. But I think that moment of judgement is a profoundly social process. As such, it's very difficult to put your finger on and neatly define what happens. How do you make that decision? Well, it's something that we, as a society, have come up with processes to address. We've developed some kind of accountability mechanisms, while at the same time there's an irreducible moment of something that is social and reflective, and very difficult to capture in a mechanical way.
NS: It is especially so in education…

MA: In the education context, one of the things that really interests me about automation is the tendency towards what might be described as a form of absolute individualism. So we see this promise of customised education, where each student is treated as an individual. That has a certain appeal, right? Historically, maybe there have been certain advantages to having the private tutor who would individually tutor your child, rather than your child having to compete for the attention of a classroom teacher with 25 or 30 other students.
On some level, there is still an intuitive appeal to that promise of customisation. Of course, when you try to do that at a mass scale, you end up needing a tutor for every student, which has economic and practical ramifications. The automated promise is, 'Oh, we can actually substitute an automated form of customisation for that tutor'. So, in a sense, it's the same appeal as you see in justifications for personalised marketing - i.e., if it were possible, then it would be much better to have individual-level education. I think there's probably some truth to that. We might want to think about socialising kids, acknowledging that school isn't just about transfer of knowledge, but is also about building community… there are those other issues to consider. Nevertheless, over and over again we see this promise of 'we can't have a teacher for every student, but what we can have is an automated system for every student'.
But the form of judgement that takes place via an automated personalised tutoring system is, I think, very different from the type of judgement that a human tutor would make. And that gets us to this distinction between automated forms of decision-making and what we would think of as human judgement. One could imagine, I think, some advantages from the automated systems. Some children might be in a classroom and they already understand the lesson content. It might be good to have them working on something that is new to them. We know that students learn at different speeds and in different ways, so wouldn't it be good to have customised responses? But I think the potential pitfall with all this is that you do lose judgement when that happens, so you will get automated systems that are making decisions based only on the past patterns that are collected. It's not necessarily true that those decisions will be attuned to the student in the same way that human judgement might be.
I do think that there is the broader issue around what model of education is going to be privileged by these automated systems. I frame it drawing on the Canadian media theorist Harold Innis (1951/2008), who asks what the 'bias' of a technology is. The question there is really: when you pick the system, what type of education are you privileging over other types? For me, you are likely to be privileging a model of education as the transfer of skills.

Automated Judgement and the Eradication of the Subject
NS: I agree! Audrey Watters (2020) talked about how the pedagogical bias of a lot of personalised learning systems defaults to behaviourist modes of training rather than learning. But you also talked about the 'fantasy' of automated judgement. It's interesting to think about how the appeal of these automated media to some educators, to some students, and to a lot of administrators might not simply be economic - i.e., the money that can be saved by not having a human tutor for every student. Importantly, there is also the appeal of delegating hard decisions and judgements (that perhaps people do not want to have to make) to a machine. You talk in the book about the 'subsumption of subjectivity'. If I think about grading students, for example, it can be a little awkward to have to say to a student: 'Well, I've gone through this human process of reflecting on your work, and I'm going to give you 68 percent as opposed to 70 percent'. There's an appeal in delegating that hard call to a machine. In fact, some students might be more comfortable with that, as might their teachers.
MA: When it comes to asking what the bias of automation might be, that's a really interesting example. We know that automated systems are much better at evaluating particular types of assessments than others, so I can see that appeal. Probably anybody who's taught has faced this decision process for themselves. If I give a multiple-choice test, it's going to be really easy to grade. If I give an essay test, it's going to be much more challenging to grade, and in some ways more time consuming (although some people might contend that the labour comes up front in one case, and afterwards in the other). That said, if we talk about automated grading of assessment, it also raises arguments about how the automated technology might address different kinds of bias, such as the subjective bias of the instructor. When it comes right down to it, it's going to be very hard, in some cases, to explain to two students why one did much better than the other.
Whereas if you've got an automated system, in a sense, you lose that dimension of contestability, right? You can just come back and say, 'It wasn't me, it was the machine'. But I do worry about what automated assessment means in terms of how it would shape our assessments. There are certain things that are going to be relatively easy to capture. What keywords did the student use, or perhaps some elements of structure. But when it comes to things like assessing the quality and character of an argument, I just don't think we've got machines that are capable of doing that. So when we end up making the decision that serves as a version of fairness, we just won't have certain types of assessment. I do think we're losing something.
NS: Yes, we might well start creating assessments that fit the machine, but also, we might encourage teachers and students to behave, write, and think in ways that are parseable and machine readable. Moving on, another important point you make in Automated Media (Andrejevic 2020) is this idea of the changing nature of the subject. Actually, you suggest that the ultimate logic of automated technology is the eradication of the figure of the subject altogether, rendering the subject obsolete. This is particularly interesting for me when you start thinking about the recent tendency in education to think about posthumanism. There has been a pronounced posthuman turn in education thinking over the past ten years or so, but you argue that the realities of automated media invalidate the posthuman notion that people are able to merge with machines, or at least that the human subject might be enhanced by the technology. So, given the current interest amongst education researchers around posthumanism, can you talk a little bit more about what you mean by the eradication of the subject?
MA: To some extent, I think this connects with your question about judgement. Because I think that the issue of judgement is one element of what we might see as the untotalisability of the subject. The best example that I can think of is a quote that I start the book with from Ray Kurzweil (in Berman 2011). Kurzweil is a futurist and an engineer at Google, and he was working on building a chatbot that would mimic the conversation of his deceased father.
So what he's doing is collecting all of the materials about his father - everything his father has written, anything that he has in terms of recordings with his father, all of the data and the information that he could collect about his father - and putting that into a system that would then be a bot that he could have a conversation with. Somebody asked him, 'Well, do you think chatting with this bot would be like talking to your actual father?' And his answer was, 'This bot will be more like my father than my father was'. That's what I mean by the eradication of the subject, and it's probably quite familiar to us. When marketers say things like, 'We know what you want more than you know', what they're telling us is that with enough data, they can actually determine and predict you in ways that are inaccessible to you. In other words, we will know you better than you will know yourself. That's the version of your father that's more like your father than your father is.
The philosophical position that I come from is one in which the subject retains a radical, non-self-identity with itself. That's what being a subject is. The moment everything about you is fully predictable, then in a way, you've stopped being a subject - you've become something automated, right? That claim of 'we can know what you want before you know it, we know who you are more than you do' is telling you that you are totalisable, that you can be totally understood. You may not know it, but with enough data, we'll know everything that you're going to do next, we'll know what your desires are going to be. We see this tendency over and over again in contemporary society. For example, there's the fantasy that if we can unpack your genome then we can predict various things that are going to befall you - such as medical conditions, how you'll age, which hairs you'll lose, and so on. The fantasy is that there's a code, and if we can just get that code, then we've got you.
The position I take is that this is the stage when you lose the subject. The subject is that moment of undecidability. It's that moment when you may be non-identical with yourself. It's that moment where Kurzweil's father is going to be not like Kurzweil's father. That's the moment also, I think, when your judgement is not going to be a hundred percent predictable from the inputs that came before. In a clockwork universe your judgement is always going to be predictable, right?
NS: It's interesting to transfer these thoughts over to a classroom situation. In one sense, students are often in a position where they don't really know themselves as well as their teacher does. So, in education the fantasy is that the machine knows the student better than the teacher knows the student. There was a recommender system a few years ago called Knewton, and one of their marketing pitches was that, with a million points of data, the system could know any student better than any human tutor ever could. Their CEO was even fond of claiming that the system would know what grades students would get before they took the test: 'A good tutor can crack jokes and make you want to learn, but this robot tutor can essentially read your mind' (Lapowsky 2015). This was a real disjuncture, because our conventional expectation is that a human tutor prides themselves on knowing the student and their learning. But this tech company was saying, 'Well, the machine knows the student far better than you do'. So that implied the eradication of the tutor subject.
MA: Yes, that's a really interesting point. The obvious endpoint for automated, customised content delivery in education would be the eradication of the teacher, right? It's not possible to have a human tutor give each student continuous, individual attention. It is possible, presumably, for an automated system to do that, so the teacher disappears.
NS: That's the one thing that tech companies are really, really keen to claim that they're not trying to do. They are desperate to assure us that 'We're not getting rid of the teachers', clearly because teachers are their main customer base.

NS: Moving on a little, you mentioned the 'cascading logic of automation' earlier, and I just wanted to come back to that. This is a logic of progression from automated data collection, to automated data processing, to automated action… and this automated action then takes place at the speed of pre-emption. This is a really interesting thing to think through in terms of education. Can you talk us through this progression in terms of education? Are there any obvious examples?
MA: Do you remember the scandal in the UK when A-level students were unable to take their exams during the pandemic, and so predictive scores were generated for them (see Mead and Barbosa Neves 2022)? That would certainly fit the example of pre-emptive action. The logic here was that 'you don't have to take the test, we're going to know what you would have gotten on it'. Of course, these predictions were eventually backed away from. But that would be the kind of model taken to the limit. We've got enough data about you to know how it is that you're going to perform, so we don't actually need to have you perform.
NS: That case of examination grade automation was really illuminating for many different reasons. First, the historical trend of independent schools being awarded higher grades was baked into the predictions, meaning that the current batch of students from independent schools were predicted higher grades. Second, the UK Prime Minister at the time, Boris Johnson, ended up blaming the controversy on a 'mutant algorithm', as he put it, which was really revealing in shedding light on the limited political understandings of these technologies. And third, this case illuminated the ways in which the whole idea of pre-emption goes against what people tend to think about education. Education is meant to try to understand where students have got to, and then to scaffold them to exceed expectations… to stretch themselves and go well beyond their predicted capabilities.
In addition to this emphasis on pre-emption is what you call operationalism - i.e., these are systems that are always trying to act rather than trying to understand. And you also raise the idea in Automated Media (Andrejevic 2020) (by way of Foucault) of environmentality - i.e., a mode of governance that's based around directly shaping our environments. Now, all these concerns feel at odds with what most educationists would understand education as being about. If we take those logics of pre-emption, operationalism, and environmentality to their logical conclusions, what form of education is imaginable? How do you see these logics ultimately playing out?
MA: What I mean by environmental control is a move from what you might call ideological conditioning to external forms of what is essentially nudging. So here I might imagine a highly malleable education environment, such as a classroom in the Metaverse. One of the interesting things about the Metaverse is that the environment itself can be modulated on an individual basis. You can have 20 students in the same classroom, they can interact with each other in a shared classroom space, and you could also customise that space for each of those students. So you could imagine something like sorting students by their purported learning styles, and each lesson would be completely different for each student based on their different learning style. What that means is that the content that they receive, and the environment in which they receive it, would be individually tailored to them. Again, it's interesting to me how that has an intuitive marketing appeal. If you pitch that scenario to a parent, their response might well be: 'That's great. I know what it's like to have my kid sit there, and they don't need to know what's being taught… or maybe what's being taught is above their level… or perhaps below their level.' But what interests me in terms of this question of environmentality is how it is that you get people to do what you want them to do without asking them to internalise your request. The classic example from Nudge economics is if you want people to eat healthier, you might traditionally have tried to instil in them a self-discipline of healthy eating: 'Eat kale and don't eat junk food'. However, the Nudge solution is just to make it harder to get the unhealthy food. In the cafeteria, put unhealthy food higher up, in a more remote location, or make it embarrassing in some way when you try to get it. So, you're not conditioning people to think differently, you're conditioning them to act differently.
So how that might work in the classroom setting is interesting. If we take the Foucauldian point that classrooms have always been sites of discipline, one of the things that goes on is the internalisation of certain modes of behaviour and conduct. So one option is to get the students to subjectively embrace that. The other option is to just make it impossible to behave differently. One is a more external form of control, and the other relies on internalisation. I think environmentality pushes in that first, external direction: the physical environment can be changed to encourage particular types of behaviour without requiring ideological buy-in. I don't know how that would work in the Metaverse. How do you pass a note in the Metaverse? The code could stop you; it would detect what you're doing and then ensure that you can't. In the physical space there's a certain ungovernableness of it - the teacher can't see you when you're passing a note, so it gets passed. In the Metaverse, that can't happen. It depends on how you set up the code, but if coded in the right way, then the students can be managed in a controlled environment which decides what affordances they have or not.
When it comes to the future of education, I suppose what you could imagine is - and this sounds quite dystopian to me - a hyper-individualised form of control in which the transfer of certain skills takes place. Again, I'm really interested in how this clashes with the idea of education as a social, collective process that is based on the understanding that learning has external benefits. You know, the idea that we learn together, and that builds connections between us, which, in turn, shape what it is that we're interested in learning, and how we learn. I think all the automated systems that I am imagining are biased in the other direction, towards a hyper-individualised skills transfer.
NS: What you were just saying brings us back to the whole idea of the subject. Gert Biesta (2007) talks about subjectification as a key function of education - the role of education in getting people to comprehend the uniqueness of who they are as an individual, but also how this relates to who they are within a collective, and the fact that they exist within a community. Now that's meant to be a big part of what public education is. But, as you say, the individualisation that is inherent in education automation pushes us in these more dystopian, privatised, individualised directions.

Towards a Community-Oriented Automation
MA: Just to pick up on what you're saying, what's really alarming is the almost incomprehensible readiness of public education institutions to offload much of this onto commercial platforms. The significance of these commercial platforms lies in the potential profitability of commercially automated education. It would be interesting to ask what a civic, collective, community-based form of automation in the classroom might look like. I'm worried we're not even going to be able to ask that question, because these automated technologies are costly, they rely on processing power and large datastores, and they are dependent on commercial companies that are already developing the dedicated infrastructure. Educational institutions have proven themselves really willing to pawn the provision of digital automation off on those existing structures. While the practical reasons for deferring responsibility are understandable, the pedagogical reasons are much more fraught.

The real danger is that we won't even be able to ask that question of what a collective, community-oriented automation might look like. It just won't happen because we will have decided already that we're a Google school, or a Microsoft school, or a Meta school. So, the platform is already in the school, and that platform will be shaped by its commercial imperatives rather than public service imperatives. To take public education and put it in the hands of private corporations is something that I think is pathological. We used to have a resistance to that! There used to be that moment when we would see commercial incursions into public spaces as threats. As an institutional practice it's so naturalised now. We work in institutions that are already captured by these large platforms, and we seem to have offloaded our infrastructural imaginary almost entirely onto the private sector.
NS: Yes, absolutely. The idea of McDonald's running a school canteen is still frowned upon by most people, but the idea of Google dictating what's taught, how it's taught, and when it's taught is considered to be far less contestable… but let's finish on a positive note.
MA: I'll try!

NS: Perhaps we're not going to realise a socialist education automation. But you do fleetingly hint in the book that education possibly has a role to play as a counterbalance. You write of education as a form of resistance to the automation of society. To quote the book: 'an alternative to dystopian automation requires a wholesale rethinking of our media and education systems. Admitting this fact can have a dampening effect on hopes for change, but denying it renders change impossible' (Andrejevic 2020: 21). Can you expand on that thought a little bit? How might education systems be rethought to foster a resistance… or at least a reflection on automation?
MA: Both of us are educators and so we're wedded to this notion of the importance of education. I believe in that profoundly. In a sense, education is a means of excavating the flipside of what we've been talking about - i.e., the fact that meaningful forms of education can build a sense of our interdependence and our commitment to the structures and the practices that make society possible. So when I talk about 'rethinking' the education system, I'm thinking about how these are spaces where we might reveal what is suppressed by the commercial platforms. I think what is suppressed by those platforms is precisely the irreducible interdependence of the social. Presuming you accept my argument that the commercial infrastructure for automation has been premised almost exclusively on a kind of hypertrophied individualism, then the message here is: 'This is for you, it's custom tailored to you, it's your ads, your show, your programming, your news content, your education.' I do believe that automation is characterised by an individual message of 'You, You, You!', which is, in some sense, a reaction to the formation of mass society. The idea is that mass society was a grey, totalitarian mass marching in lockstep, whereas now we have individual expression and freedom. What's suppressed, I think, by that hypertrophied individualism are the social processes that irreducibly make even the conception of individualism possible. What education can do is to point that out and say, 'But your very conception of individual freedom actually relies on a whole host of irreducibly social practices'. Even these processes of automated hyper-individualism and customisation rely irreducibly, in the end, on social decisions. This is the work that critical data studies scholars have pointed to for some time. Automation is not somehow untethered from the social. The datasets that are fed into it reflect society and how it works.
If the data yields biased outcomes, then this reflects the biases in society. For example, linguistic translation tools that take a language whose nouns aren't gendered and translate it into a language whose nouns are will reflect the gender biases of that language. So 'doctor' will be translated as masculine, and so on. There is no data that's not deeply embedded in history and society, and there are no decision processes about questions and priorities for algorithms that are not deeply embedded in the social. But automation does the work of laundering all these issues, right? It invites you to forget the social, and to see the automated and the mechanical as somehow freed from that. I think education has the potential to point out the irreducible sociality of it all.

NS: So education is a place where we can help people to reflect on this sociality and the sociotechnical nature of digital automation. We just have to hope that those messages are the ones that get curated into our personalised information stream when the automated technology is giving us our personalised slice of education.
MA: That would be the paradox!

NS: Exactly. Well, thanks ever so much for taking the time to talk all this through, Mark. These are all fascinating ideas and big questions, but it's great to have a chance to scratch the surface. I just recommend that everyone read Automated Media (Andrejevic 2020) and carry on the conversation.

MA: Thanks so much. A pleasure talking with you.