Automated Surveillance in Education



Introduction
This is the first of three interviews for the Special Issue of Postdigital Science and Education, 'Education in the Automated Age', edited by Neil Selwyn, Thomas Hillman, Annika Bergviken Rensfeldt, and Carlo Perrotta. The interviews are conducted with different scholars who generally work outside of the educational studies domain, but all of whom are interested in the rise of artificial intelligence and other forms of automated decision-making. This article presents a conversation with Professor Chris Gilliard. Chris is an English literature professor at Macomb Community College in Detroit, and also a visiting research fellow at the Harvard Kennedy School's Shorenstein Center on Media, Politics and Public Policy.
Chris is widely regarded for his critiques of surveillance technology, digital privacy, and the problematic ways that digital technologies intersect with race and social class. He contributes regularly to publications such as Wired, The Washington Post, and The Chronicle of Higher Education, and is well-known for his frequent stream of insightful tech critique under his Twitter handle @hypervisible. Along with various co-authors, Chris is responsible for a number of important ideas that are now widely used in current critical conversations around technology - such as 'digital red-lining', 'friction-free racism', and 'luxury surveillance'. Neil Selwyn is a professor at Monash University's Faculty of Education in Melbourne, and a long-time researcher and writer in the critical studies of education and technology. In this interview, Neil talks with Chris about the automated forms of surveillance that have been integrated into schools and colleges over the past few years - from remote 'online examination proctoring' to ongoing enthusiasms for putting virtual assistants such as Amazon Echo and Alexa in classrooms. For Chris, all of these technologies need to be seen as surveillance technologies rather than learning technologies - and therefore approached with a fair degree of suspicion and scepticism.
CG: I can use remote proctoring as an example. When you critique remote proctoring and online invigilation, what many people will say is 'Well, is it so much different from just having people sit in the room, a proctor walking around?' Well, it is - it is different in a lot of meaningful ways. But we should also probably interrogate that existing practice in itself, right? I don't necessarily think of examination invigilation as good pedagogy either. So I bristle a little bit when people say 'Well, this is not new', because my critique is not that it's new. My critique is based on the systemic problems that it perpetuates.
NS: It is also interesting to think about what sorts of new things are being introduced around the edges. So, you've got different actors coming in and setting the rules of the game. As you say, you've got these different modalities being privileged - what can be 'seen' through the camera, what audio can be captured. And also, you've got the massive capture and extraction of 'big data' sets, and what's then processed and analysed with these. So, I agree with you - I think an important form of pushback is to say 'Yes, it is the same… but the same was terrible the first time around'. But I don't think we can let these actors get away with it completely - because there are new things being introduced into education around the edges… there are different layers of harmful change being added to the mix.
CG: That's a great point. So, again, where this data goes, what's done with it by these companies, the types of investment structures that are involved in pitching these technologies, and the scale of the harms - these are all different. I don't necessarily agree that we should have these types of online proctored exams. It's far different for students to sit in a classroom than it is to insist that they put what is essentially malware on their devices, and then that they reveal the inner workings of their household - including things like who they might be living with, where they live, any disability. It is not acceptable to insist that students reveal these things in order to participate in an educational system.

NS: Yes, and also not to overlook the issue of labour - the student is suddenly expected to carry out a huge amount of free labour themselves in what is already a very fraught situation. Let's deep dive into this case of 'online proctoring', because this is a fascinating example of automated surveillance technology in education. What strikes me is how this technology actually finds favour with a lot of people. A lot of teachers, students, and administrators genuinely think that this technology is a convenience and a good thing. I mean, this is an educational technology that has found a ready audience… why do you think that is?

CG: Well, it's a simple solution to a complex problem. One of the things that I point to is the fact that online proctoring systems existed before, but they really took off during the pandemic. And I understand the rationale for that to a degree. Two years ago [when the pandemic started], as many things were moving to remote - including a lot of educational sites and systems - people were looking for quick solutions to problems. Educators were accustomed to holding exams, and they would say, 'Well, we have to have some means of proctoring them remotely'. So this proctoring technology seemed like a ready-made solution.
At the time, people weren't necessarily aware of many of the issues that these systems present… whether that's face detection technology that doesn't work on a lot of faces, or issues around access. For instance, what kind of bandwidth or what kind of device would you need in order to properly run this software? Other issues include assumptions about who has a quiet space, and who has a space that's devoid of distractions. It is also necessary to think more carefully about things like eye tracking, and the suggestion that if your eyes move during the course of an exam then somehow you're dishonest. In fact, it is necessary to think more carefully about all the ways that the technology casts anyone outside of whatever is coded as 'normal' as dishonest or potentially dishonest.
Some of these things people didn't know at the time. But now we're two years into the pandemic, so that excuse is gone. Any change away from using remote proctoring would probably require people to radically alter the way they do certain things and the assumptions they make about what teaching and learning are. I think that many people in education are not ready to do that. I don't want to dismiss these concerns - in some cases, doing something different would require a tremendous amount of investment on the part of institutions and instructors. But my argument is that if a technology cannot be deployed safely, and doesn't work for a segment of people, then it shouldn't be used at all. So, if you can't have a non-racist, non-ableist, and non-discriminatory proctoring system - and it's my assertion that you cannot - then you shouldn't have it at all.

NS: But then you get the counterargument from the proctoring vendors and even some of the universities - which are trying to deal with mass numbers of students - that it's only a few thousand students that have been disadvantaged… that there are millions of students that are not complaining… so what's the problem? You bump up against this kind of mentality… it's really tricky to deal with!

CG: You know, I just can't get down with that. I mean, what would be an acceptable number? How many people is an acceptable yield when you're thinking about people who are being discriminated against? You know, is it 5%? Is it 10%? Is it 15%? My answer is zero.
And, you know, making that argument also overlooks the disproportionate numbers of certain groups who fit into these categories: black and brown students, disabled students, students who are poor… anyone in a situation where they don't necessarily want to subject themselves to this technology because of what it might force them to disclose or reveal.
And even in cases where the university is using live proctors (rather than artificial intelligence), you have a very 'creepy' situation in which live proctors have access to people's homes in a way that makes many people uncomfortable and compromises their safety. I don't like using the term 'creepy', but there's no better way to describe it. And then, we haven't even begun to talk about the ways in which some of these technologies are insecure, posing technical or cybersecurity risks.
NS: Absolutely. Your last point - that this technology is based on malware - has, I think, been widely made, but it's interesting to think about the underpinning values and politics of the continued implementation of a technology like this into education. As you say, this is software that's been around for a long time. I think it actually originated in the software industry - allowing software engineers to take accredited tests and assessments online. That assumes a very different type of education than studying four years for a university degree. It also assumes a very different position for the student or test-taker.
The underpinning values of online proctoring systems are those of 'academic integrity' and making sure that no one is 'cheating'. These are very different values to the equity-focused and social justice-focused values that you've just been talking about. So the continued adoption of these technologies into public higher education and public schooling exposes a fundamental clash of politics, as well as a host of technical problems. But people don't like to think about educational technologies as having politics.
CG: Yeah, absolutely. I think we really need to interrogate some of the ways we think about testing and learning and teaching. So, for instance, I teach writing, and any good writer will tell you that writing is a collaborative process. Of course, we have this myth that people sit in a room or in a coffee house by themselves and write some text. But I'm here to tell you that that's not how it happens, right? I mean, that's not how good writing happens. Good writing is collaborative. I write pieces and they are scrutinized by many pairs of eyes - editors, copyeditors, and so on. But I still feel that I wrote the piece.
Similarly, if I go to an attorney or a doctor, or even a mechanic, they will often carry out research on their computer right there in front of me. If I go to my doctor, present them with a thorny health issue, and they look it up right there, I don't think 'Oh, they don't know what they're doing'. Instead, I think, 'Oh, they're being very thorough'. And I'm glad they're doing this. You know, an attorney will review case law all the time! So this idea that students sit in a room, and are scrutinized, and that any knowledge they don't possess on hand at that moment within their brains is somehow invalid or cheating, makes no sense. This is not how professionals operate out in the real world. But somehow, education systems are stuck with this idea about testing, and the belief that testing somehow proves people's knowledge.
NS: I agree - there is certainly scope to argue that we have a system of education based around outmoded ideas… although I don't want to let the tech industry off the hook altogether!

NS: You've also written about the rise of virtual assistants and chatbots - such as the trend for putting Alexa in the classroom. So, these are touted as automated agents that can direct teachers and students… nudge them to make the right choices and decisions. In many ways, this seems like a much more friendly, fluffy version of surveillance technology than the online exam proctoring that we've just talked about. So, what's the problem with having Alexa in education?
CG: It is my assertion that Amazon, as a company, is a deeply destructive and harmful agent in our society - whether that's the environmental impact, the way it treats its workers, or the way that it has enriched mostly one individual (or perhaps a small number of individuals) to the detriment of a large chunk of society. Also, I think more people are now catching on, but the extent to which Amazon is a surveillance company is often understated. Everything you do with Amazon is tracked, traced, and catalogued, and then subjected to analysis - ultimately to further enrich Amazon and sell more product.
So the idea that we put these devices into hospitals, into classrooms, into people's bedrooms and dorm rooms is really troubling. It takes away from what are meant to be very private and intimate spaces - spaces where people should feel safe to talk about things without worrying that what they say might come back to haunt them. And I don't necessarily mean harm in any extreme manner. For instance, we might think about information that students may not want to reveal to corporations, or to law enforcement, or to the US Immigration and Customs Enforcement, or even the simple worry that an algorithm might improperly understand what they've said and put them on a list for something.
So I don't think of technologies such as Echo or Alexa as friendly systems. They're basically tools of surveillance that take people's data, or their content, or their discussion… and subject them to processes that people don't have any control over. People may never know what the results of those processes are, but I feel pretty safe in saying that some of those results are likely to be things people did not anticipate, and will likely not be beneficial to them.

NS: The 'virtual assistant' technology in particular raises the issue of informed consent. We talk about informed consent a lot in education. I'm not sure anyone understands what they're being informed about, or what they're consenting to, when they come into a classroom with Alexa installed in the corner. When you put Alexa in the classroom, I'm not sure it's possible to be fully informed - whether you are a teacher, the principal, or a student.
CG: No… I think Wired and Reveal did a recent joint investigation into the extent to which Amazon does not protect people's data (Evans 2021). It's really a blockbuster piece. I encourage everybody to go read it. Let's just say that the piece asserts that the data protection policies at Amazon leave a lot to be desired.
NS: What's also interesting about Amazon is that they are a multi-billion-dollar corporation that is always looking to extend its reach. As the recent history of education technology suggests, corporations will often see schools as a loss leader. You can put tech in schools not to make a profit per se, but rather to bake the logics of compliant technology consumption into the minds of students and teachers… and then you can move into the home market. For example, you've written a lot about Amazon's Ring domestic security systems. It's interesting to see how Amazon imagines that our homes, as well as hospitals, schools, and other public spaces, will become infused with the same forms of automated surveillance technology. Do you see that kind of market-creep? Are schools and universities just one element of a much larger business plan?

CG: Yeah, absolutely. Amazon has not been shy about their goal of invading (although that's not the word they'd use) every aspect of our lives. They're in our homes, they're into grocery, they're trying to go into healthcare, they're trying to get into schools and education. They have all kinds of agreements with law enforcement. In terms of healthcare they're also promoting wearables… so in addition to getting cameras installed everywhere they are selling biometric devices.
There was a recent piece in Business Insider that talked about some of the patents that they have filed (Haskins 2021). And of course - disclaimer: not every patent becomes a product. But the level of invasiveness of some of these things was shocking - whether it's a drone that will fly around your property taking pictures, or additional biometric sensors for Amazon Ring, including facial recognition, face detection, and even smell detection (which, as far as I know, doesn't currently exist, but they filed a patent for it nonetheless).
The essential desire of this company is to cover everything in sensors. It's my assertion that, in a lot of ways, what they're trying to do to the outside world is the same thing that they already do to the workers in their warehouses.
NS: The Amazon patents that came out a couple of years ago for housing their factory workers in cages to 'protect' them from robots were truly dystopian (see Shoot 2018). Now, a lot of people are very keen to present the pros and cons of technology in benign terms of the supposed 'inconveniences' and 'conveniences', the possible 'advantages' and 'disadvantages'. But you've used the word 'harm' previously. Can we be specific? What are the harms of these technologies that we should actually be calling out here?

CG: I think there are some short-term harms and long-term harms. One of the real harms of putting these systems in schools is that it normalizes a degree of surveillance that is not good for society. Some people have difficulty when you talk about a 'right to privacy', because they think it's a very classed assertion. I disagree with that. I think that private space, and the rights to our own thoughts and ideas, and the ability to determine who we're going to share those with without fear of repercussion… these are all bedrock foundational elements that are important to every society, and to the development of ourselves and our understandings of who we are. Oppressive governments throughout history have always tried to invade people's privacy in order to disrupt those things. Look at the ways that oppressive governments have attempted to infiltrate social movements and disrupt them - particularly progressive movements, whether that be the right for particular people to vote, gay marriage, or you name it. I think normalizing surveillance is harmful in that sense.
But another really specific harm is that these systems are often very discriminatory, in that there's a degree of algorithmic judgment about people that we're simply not able to investigate or challenge. So, in the case of proctoring, there are assertions being made about who is cheating. Something as simple as that can have severe consequences. There was an episode at Dartmouth where the institution accused medical students of cheating on a remote exam because it misunderstood how its learning management system worked (see Singer and Krolik 2021). That can alter - dare I say ruin - people's lives based on a set of algorithmic judgments.
That's just one example. But in many ways, these systems often make assertions about who someone is, what they do, what their potential is, whether or not they are cheaters, whether or not they may be criminals, or have some desire or impulse towards extremism. People are often falsely implicated by these things, and those kinds of judgments often fall on already marginalized populations. In many cases, we're not allowed to interrogate the system that's making these assertions or assumptions about us. We just get dealt the judgment that comes out.
NS: Even if we could interrogate the systems, it's often impossible to work out exactly what is going on. I think a lot of these harms boil down to who these technologies are configured for. Who is the 'end user', who is the 'customer', for whom are these technologies built? Perversely, I don't think the end user of these technologies is meant to be the student or the teacher. Often these technologies are designed to be sold to institutions. So the institution is the end user and the imagined beneficiary, whereas students and teachers are merely the subject of what the technology does. If you frame education technologies in that way, then you suddenly think, 'Oh, yeah, of course, that's why students and teachers are so routinely disadvantaged and marginalized and harmed by these things'.
CG: I think institutions are often looking for a degree of certainty that doesn't exist - whether that's about what a student is doing, or what a student is capable of, or to what degree a student might persist. So when companies come to schools and promise that they can do these things - whether or not they can, I mean, there's very little independent research that says that they can - it sounds good, right? And the promise of that is very compelling to institutions for a variety of reasons.
NS: Absolutely. Our recent research on online proctoring found university administrators admitting that they knew the technology was crappy and didn't work (Selwyn et al. 2022). But, you know, it was symbolic. It ticked a box…

CG: …so it's more important to appear to be doing something, rather than whether or not that thing is effective.

First They Came for the Students…
NS: I'm fascinated by the reluctance to call out harms in EdTech. Or perhaps more accurately, the reluctance in education and technology to listen to people who point out harms. Why is this field of EdTech so Pollyannaish? Why doesn't it want to hear any bad news or criticism or pushback?

CG: I think there are a few reasons. First, some of these companies are very litigious. So you have to be very careful about what you say, and who you say it about. We've seen some really extreme examples of that over the past couple of years. Another thing is the idea that you have to be an expert on these systems in order to critique them. Again, I don't agree with that, but many people are shouted down when talking about artificial intelligence or machine learning if they don't know how to code. I think that's a very intentional tactic on the part of the powerful. It's daunting to put yourself out there and say, 'Well, this thing doesn't work'.
Another part of the problem is that some of the promises that are made are simply not possible to keep. Claims that tech can predict the future or make assertions about people's potential - these are things that are really not possible. But people don't want to be seen as Luddites - we live in a culture that lionizes innovation. So, if in any way you're seen as an opponent of tech, then you are seen as an opponent of the future - as someone who somehow resists progress.
NS: And education is a very progress-oriented and positive sphere, where people are doing what they do with the best of intentions. People in education want to make the world a better place; they want to improve children's life chances. So, it could be argued that being seen as anything less than positive doesn't fit well with the dominant education mindset or the dominant technology mindset.
CG: I also think that we've been subjected to a myth, from voices outside of education, that education has not changed for decades or centuries. In many ways schools are conservative institutions. But individually, people inside education alter and improve their pedagogy all the time - which is something that people outside education often fail to recognize, or lampoon in ways that I think intentionally misunderstand what education is trying to do.
NS: I agree with you completely. The 'schools are broken, tech will save us' narrative is hard to refute, even if it is clearly bogus.
Before we finish, I wanted to specifically double down on the racialized nature of EdTech. You've written that surveillance technology always finds its level - and that level is generally focused on black folk. If there's a reluctance to call out harms in EdTech, then there's definitely a reluctance to call out racialized harms in EdTech… or at least to listen to black critiques of EdTech. How do we change that? How do we shift that dial?

CG: [weary sigh] I can offer some guesses, but I don't know the answer to these questions. Speaking to a point that you made earlier, there's a way in which people think that harms to a small number of people - or a particular set of marginalized people - can't be generalized. There are at least two things wrong with that way of thinking. One is that even small harms, or harms to a small number of people, are still harms. But the other thing I think people don't recognize is that while many of these technologies will be leveraged first and mostly against marginalized or vulnerable populations, eventually they harm everyone.
Unfortunately, I've found that one of the only ways that you can convince people that this matters is to tell them that eventually it's going to matter to them. An example I use is a particular technology that monitors student traffic on devices - their emails, tags, messages, things they write in documents - in an effort (the company would claim) to look out for things like bullying, potential for self-harm, and things like that. But this company also made the assertion that it could monitor teacher interactions in order to prevent strikes and unionizing. That's the case I make for all these things. If you don't think it's a problem that black, brown, gay, trans, lesbian, and queer students will be harmed by these technologies, then that's a problem. But it's also a problem because eventually these technologies are going to be leveraged against everyone else. I don't like that line of argument, but one of the few things that moves people is to tell them that they're going to be next. I want to be completely on the record in saying that harms to marginalized individuals should in themselves be enough reason not to do a thing. But it's been my unfortunate experience that people often need more than that. So what I've taken to pointing to lately is examples of how that technology is going to be used against them as well.

NS: You also make the point that a lot of this technology is accepted by people who see themselves on the 'right' side of the camera… the 'right' side of the surveillance.
CG: Yes, I think we've seen that with the rise of white-collar worker surveillance during the pandemic. A lot of the time, I think, teachers and instructors don't think of themselves as workers… much to our detriment. But we've already started to see examples where K-12 and college institutions are making assertions and assumptions about how instructors use their time when they're working from home. Educators often assume that the 'right' end of the camera or the microphone or the biometric device is theirs - that it's them observing students. They don't necessarily think about administrators observing them and applying some of these metrics to them. We've already seen examples of that, and I think we'll see a lot more. So I think that the way to short-circuit that kind of progression is to make sure that it's not done to anyone.
NS: Which leads me on to my final, slightly optimistic question. It occasionally feels that we might have entered a period of grassroots pushback, protest, and general resistance against big tech. We've seen the emergence in some quarters of a sentiment of 'Fuck the Algorithm' and similar kinds of things. Do you think that this shift might hit education? And if so, what do we need to do to try to mobilize this sentiment and develop a collective sense that we're all in this together? How can we foster solidarity in the face of education automation?

CG: I often compare it to the financial crash. There was a time when I didn't know what a 'credit default swap' was, because I didn't think I needed to… it turns out I needed to! So there are many people who are now learning what algorithmic biases are, how facial recognition works, what machine learning is, in terms of the ways in which these things impact so many different aspects of our lives - whether that's the kind of loan you're able to get, whether or not you're arrested, what kind of hospital care you get, on and on and on.
If I'm pressed to offer some kernel of optimism, that's where I see it - people are now starting to be much more aware of how these systems influence our lives, and to realize that we don't necessarily need to be experts on these systems to have expertise in how they affect us. To the extent that we can point those things out and people can learn them, there may be opportunities for pushback and resistance. Because what has changed, just in the past five or six years, is the degree to which these systems touch so many aspects of our lives in ways that are often invisible to people except for the final output.
NS: I think that makes a very strong case for public scholarship - making these things visible, talking about these issues, saying these things out loud - which is exactly what you're doing in your press work and Twitter feed. It's really important work, and we need more people like you to be shining a light on these things.