Privacy advocates have long been pushing for laws governing how schools and companies treat data gathered from students using technology in the classroom. Most now applaud President Obama's newly announced Student Digital Privacy Act to ensure "data collected in the educational context is used only for educational purposes."
But while young students are vulnerable to privacy harms, things are tricky for college students, too. This is especially true as many universities and colleges gather and analyze more data about students' academic — and personal — lives than ever before.
Jeffrey Alan Johnson, assistant director of institutional effectiveness and planning at Utah Valley University, has written about some of the main issues for universities and college students in the era of big data. I spoke with him about the ethical and privacy implications of universities using more data analytics techniques.
Edited excerpts follow.
Selinger: Privacy advocates worry about companies creating profiles of us. Is there an analog in the academic space? Are profiles being created that can have troubling experiential effects?
Johnson: Absolutely. We’ve got an early warning system [called Stoplight] in place on our campus that allows instructors to see what a student’s risk level is for completing a class. You don’t come in and start demonstrating what kind of a student you are. The instructor already knows that. The profile shows a red light, a green light, or a yellow light based on things like whether you’ve attempted the class before, your overall level of performance, and whether you fit any of the demographic categories related to risk. These profiles tend to follow students around, even after folks change how they approach school. The profile says they took three attempts to pass a basic math course, and that suggests they’re going to be pretty shaky in advanced calculus.
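The kind of traffic-light scoring Johnson describes can be illustrated with a short sketch. Stoplight’s actual inputs, weights, and thresholds are not public; the features below (prior attempts, GPA, a demographic risk flag) and all of the cutoffs are hypothetical, chosen only to mirror the factors he mentions.

```python
# Hypothetical sketch of a Stoplight-style risk label. All features,
# weights, and thresholds here are assumptions, not the real system.

def stoplight(prior_attempts: int, gpa: float, demographic_risk: bool) -> str:
    """Return a red/yellow/green completion-risk label for a student."""
    score = 0
    # More prior attempts at the class -> higher risk.
    score += 2 if prior_attempts >= 2 else (1 if prior_attempts == 1 else 0)
    # Lower overall performance -> higher risk.
    score += 2 if gpa < 2.0 else (1 if gpa < 3.0 else 0)
    # Membership in a demographic category flagged as "at risk".
    score += 1 if demographic_risk else 0
    if score >= 4:
        return "red"
    if score >= 2:
        return "yellow"
    return "green"

print(stoplight(prior_attempts=3, gpa=1.8, demographic_risk=False))  # red
print(stoplight(prior_attempts=0, gpa=3.5, demographic_risk=False))  # green
```

Even this toy version shows why the profiles are sticky: the label depends entirely on historical inputs, so nothing a student does in the current class can change it.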
Selinger: Is this transparent to students? Do they actually know what information the professor sees?
Johnson: No, not unless the professor tells them. I don’t think students are being told about Stoplight at all. I don’t think students are being told about many of the systems in place. To my knowledge, they aren’t told about the basis of the advising system that Austin Peay put in place where they’re recommending courses to students based, in part, on their likelihood of success. They’re as unaware of these things as the general public is about how Facebook determines what users should see.
Selinger: Are there concerns about how profiles prejudice the way professors look at students? And if there are legitimate concerns about this, are they being heard by the administrators who are creating or approving these systems?
Johnson: There are real concerns, but I don’t think they’re being heard at all. When I told my students I have Stoplight data, they were worried about what I thought of them coming into the class. It definitely bothered them. They wondered if instructors will think they need help, or dismiss them because it looks like they won’t succeed and it’s better to prioritize other students.
Selinger: If students do learn about their profiles, is there any process at all for them to change information they think is inappropriate or inaccurate?
Johnson: I’ve never heard anything. I think the presumption is that the data is accurate, in part because the data is either given to us by students themselves or comes from things students did on campus. Universities haven’t done a lot to gather additional data. The bigger concern that I have at the moment is that universities construct data. Categories like “freshman” or “first-generation student” don’t exist objectively; they’re created by universities, governments, and reform movements. For example, my mom had an associate’s degree when I started college, so am I a first-generation student? Some rules say yes. Colleges aren’t very conscious of the issues in constructing data, and students have no recourse to challenge the categories. The data is accurate according to the data standards, so no one can say it is incorrect, and the standards are technical, not substantive, so there’s no reason to allow students to challenge them.
However, higher ed is starting to move in the direction of gathering data from other sources. Arizona State is using Facebook data to improve retention by understanding a student’s social network. They take not participating in a social network as a sign that students might be thinking of dropping out.
Selinger: How does this work? Are they scraping Facebook data? Is this an opt-in program?
Johnson: As I understand it, they have a recruitment/new student page within Facebook. Once you "like" it and start interacting with it, they get access to a lot of your information, like who your friends are. And then, once you get on campus, they can see what friends you’re making. If you’re not making friends they can see that and take it as a sign that you’re at risk of withdrawing. And if you’re making friends but not interacting with them, same kind of thing. They also look at the data that comes from cards being swiped on campus facilities, and notice who else is swiping in at the same time as you are. Friendships can be inferred from this pattern.
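The co-swipe inference Johnson outlines can be sketched in a few lines. Nothing about Arizona State’s actual pipeline is public here; the data format, the time window, and the co-occurrence threshold below are all assumptions, meant only to show how little it takes to turn access-control logs into a social graph.

```python
# Hypothetical sketch of inferring ties from card-swipe logs: students
# who repeatedly swipe into the same facility at nearly the same time
# are flagged as likely friends. Window and threshold are assumptions.

from collections import defaultdict
from itertools import combinations

def infer_ties(swipes, window=60, min_cooccurrences=3):
    """swipes: list of (student_id, facility, unix_timestamp).
    Return pairs of students seen entering the same facility within
    `window` seconds of each other at least `min_cooccurrences` times."""
    by_facility = defaultdict(list)
    for student, facility, t in swipes:
        by_facility[facility].append((t, student))

    counts = defaultdict(int)
    for events in by_facility.values():
        events.sort()
        for (t1, s1), (t2, s2) in combinations(events, 2):
            if s1 != s2 and abs(t2 - t1) <= window:
                counts[tuple(sorted((s1, s2)))] += 1
    return {pair for pair, n in counts.items() if n >= min_cooccurrences}
```

A pair that shows up together at the gym three mornings in a row gets flagged as a tie; a student whose inferred ties drop off could then be marked as a withdrawal risk, exactly the repurposing of security data that Johnson questions below.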
Selinger: Do you think students have any idea that when they click "like" or swipe their cards they’re being studied in the way you outlined?
Johnson: The Chronicle of Higher Education talked to students to find out. Turns out they had no clue and were indeed creeped out.
Selinger: One of the things I find surprising in all of this is that universities seem to be hopping on the big data bandwagon after many companies have already faced ethics and privacy blowback. Why aren’t they making better use of the expertise they already have on campus?
Johnson: Why aren’t universities tapping into the expertise of their faculty members who deal with technology ethics, and of social scientists trained in research methods who can talk about data as a social construct? If you’re a social scientist, especially a political scientist, you question the biases of data sets all the time. It’s bizarre that institutions aren’t asking faculty to point out the biases in big data operations.
Selinger: You've developed a concept called "information justice," which is the idea of thinking about the use of information technology in terms of how it contributes to a more just society. How does applying that concept begin to make things better?
Johnson: Looking at something like Arizona State’s card swipe system, we can see that its primary purpose is to provide access to various facilities for people who have legitimate reasons to be there. Outsiders, for example, shouldn’t be in the gym. It’s a real security issue. But when we use card swipes to determine whether someone is thinking of transferring, and taking tuition revenue with them, the context changes. A new justification is needed to legitimate this surveillance, and it might not be possible to make one. But if we think of privacy through a control-over-information paradigm, it’s harder to raise objections. When you swipe your card, you’re creating data at Arizona State, and they’re not transferring it elsewhere, so from that perspective there’s nothing wrong.
Selinger: But is there a risk here of universities disciplining students to uncritically accept surveillance in broader social contexts?
Johnson: To some extent it is an issue. But we can also try to make students more aware of how big data is being used on them at the university and what consequences follow from those uses. That can help them become more critical about other uses of big data in society. Now, doing so doesn’t justify using big data in any of the ways we’ve discussed. But it is something faculty can do to be subversive. Imagine running a class where you point out (without using personal information) that the university expects 16 students to fail, and asking how that makes everyone feel.