Illustrations by Matt Chinworth


Three professors from different disciplines provide their viewpoints on big data, data security, and the societal implications of these growing concerns.


A matchbox-sized device that monitors Assistant Professor Neta Alexander is implanted in her chest, just under her collarbone. The device is a pacemaker and, as such, keeps her heart beating. However, its wireless capabilities mean that her biodata are stored in the cloud, which has prompted her to question: Who owns my personal information, how secure is it, and what are the potential risks? As a media and technology scholar, the professor looks at the bigger picture and what the future could hold — for us all. 

“A month before turning 34, I received an unexpected birthday gift,” Alexander wrote about her cloud-connected pacemaker in a 2018 Atlantic article, where she first began publicly analyzing the Internet of Medical Things (IoMT). From medical devices to smartwatches that provide data to health care systems via the internet, IoMT is a rapidly growing industry that Deloitte predicts will exceed $158 billion by 2022.

Alexander was a doctoral student at New York University (NYU) in August 2017 when she couldn’t shake symptoms of fatigue and weakness. She had initially chalked them up to jet lag and exhaustion from spending the summer in Berlin, where she’d been doing an arduous writing fellowship for her dissertation. “But it didn’t go away,” she remembers. “It actually got worse.”  

An appointment with Alexander’s physician revealed that her pulse was dangerously slow — 20 beats per minute. She was rushed to the ICU, where doctors ran a battery of tests, checking for everything from Lyme disease to a possible infection. “They reached a conclusion that I had a complete heart block and needed a pacemaker,” she says.

The device constantly collects data including her heartbeat and physical activity. When Alexander puts a remote control–sized monitor up to her chest every few months, the information is transmitted to her physician and Medtronic, her pacemaker’s manufacturer. Medical professionals tout the convenience of these devices because patients can receive remote care.  

Illustration by Matt Chinworth: a woman sitting in her home while her possessions reveal details about her through data entering a cloud above

Alexander’s doctoral research focused on the ways in which digital technologies — specifically streaming services — are reshaping behavior. Whether it’s Netflix or a pacemaker, Alexander’s research has given her the frame of mind to question the dark side of technology. “It was easy for me to see that there’s more to the story than ‘We saved your life and now we just want to monitor you to make sure you’re fine,’” she says. “There are other things going on.”

In addition, Alexander has been a journalist since the age of 17, writing for Haaretz, a major newspaper in her native Israel. This training motivated her to begin digging into the topic. 

Every new pacemaker implanted in the United States is cloud connected, Alexander learned from her NYU physician, Lior Jankelson. Overall, there are 3.7 million connected medical devices in use, which means that millions of people face concerns similar to Alexander’s.

Several major cybersecurity vulnerabilities in medical devices have been reported by the media. In 2017, the FDA recalled 500,000 pacemakers (all made by Abbott) when a cybersecurity firm found vulnerabilities that could allow hackers to change a patient’s heartbeat or run down the battery. In 2018, at a security conference, two researchers showed that malware can be installed on a type of pacemaker by Medtronic. And last spring, Medtronic disclosed a cybersecurity weakness in 750,000 of its defibrillators.

Last fall, the FDA urged device manufacturers, health care providers, and patients “to remain vigilant about their medical products” in light of a cybersecurity vulnerability affecting several operating systems. “These cybersecurity vulnerabilities may allow a remote user to take control of a medical device and change its function, cause denial of service, or cause information leaks or logical flaws, which may prevent a device from functioning properly or at all,” the report stated.

“The fact that I have a device inside my body, monitoring my biodata, is just a glimpse into the future.”

Neta Alexander, assistant professor of film and media studies

 “The more I read, the more I felt that I didn’t have access to the full picture before the surgery,” Alexander says. Lying in a hospital bed, she wasn’t in a position to question her cardiologist’s advice or research the information she was given. “In that context, consent is a tricky word,” she says. “I’m not going to say ‘No, I’d rather die.’ That’s not an option.” Furthermore, because hospitals and doctors tend to work with specific manufacturers, patients can’t shop around and compare devices. 

Alexander felt further disempowered when she tried to obtain her pacemaker data from Medtronic and the hospital. They told her she’d have to sign a release form and wait for its approval before the data could be sent to her — via postal mail. 

As she waited, Alexander’s Atlantic article caught the attention of Medtronic, and the chief medical officer invited her to the company’s Minneapolis headquarters so they could address her questions. “I asked them if I could come as an independent journalist and record all our conversations,” she says. Medtronic agreed, and Alexander spent an entire day touring the facility and meeting with engineers and cybersecurity experts.  

“Some of the things they said about the security and privacy units reassured me,” Alexander says. “Also, they made an important distinction between wireless and Wi-Fi technology.” Her pacemaker, Alexander explains, connects to Wi-Fi only through a bedside monitor when data need to be sent, so she is not constantly connected to the cloud. In short, it isn’t always connected the way a mobile phone is. “But, that doesn’t mean that other models or future models won’t be,” she says.

During her visit, Alexander also wanted to emphasize to Medtronic officials that, as a for-profit company, Medtronic may not always put the patient first. “In different moments throughout the day, I was trying to make the point that we should acknowledge the company mission to make a profit for its shareholders and the fact that, in terms of the public image, all they talk about is saving lives.”

Although Alexander says she “left feeling more informed,” her concerns were not entirely put to rest. But the company did resolve one issue: She told them she hadn’t yet received her data as she’d requested, and soon after, a thick envelope arrived with pages of numbers and graphs. Each page was stamped Copyright © 2001–2018 Medtronic, Inc. “It was great to finally receive my own data, but these numbers and charts require a medical literacy that most cardiac patients do not have,” she says. “As patients-turned-activists argue, device manufacturers and medical professionals should help us figure out ways to obtain that kind of literacy. Not every patient wants to be a cardiologist, but many want to feel the kind of empowerment that comes with being able to understand one’s own data, or at the very least, detect abnormalities and warning signs.”     

Alexander adds that she is grateful to Medtronic for its life-saving device and to her doctors for her medical care, but she wants medical professionals to empower their patients by giving them more information about the devices with which they’ll share a body. “We need to find a way to communicate all of the dangers and risks and benefits to patients and let them make an informed decision,” she says. “And if you can’t do that right away because it’s an emergency, then offer a training workshop or webinar — something to say, ‘Here is what we know about hacking and privacy.’” Many patients, she acknowledges, may not wish to learn more about their devices. “But, psychologically, one of the difficult things about having a pacemaker is a lack of agency; so, help us regain a sense of control by providing us with some basic knowledge about our devices.”

From telling her story, Alexander also hopes people realize data security is something everyone needs to think about — even those in perfect health. Google recently acquired Fitbit — and, thus, the health data of its millions of customers. Meanwhile, the Apple Watch Series 4 includes an ECG feature that can detect heart rhythm abnormalities. Alexander asks: Who will guarantee that this information is not sold to insurance companies or pharmaceutical firms?

“The fact that I have a device inside my body, monitoring my biodata, is just a glimpse into the future,” she says. “We’re all going there.”  

Neta Alexander, assistant professor of film and media studies, focuses on science and technology studies and digital culture, film, and media. Her recent book, Failure (Polity, 2019), co-authored with Arjun Appadurai, studies how Silicon Valley and Wall Street monetize failure and forgetfulness. 




A Balancing Act

In 1996, MIT grad student Latanya Sweeney mailed Massachusetts Gov. William Weld his own medical records. She was proving a point. The governor had assured the public that patient privacy was protected when the Massachusetts Group Insurance Commission released “anonymized” data to facilitate research efforts.

Anonymization, or de-identification, is a common practice to protect individuals’ privacy in shared datasets by removing identifiers like names, Social Security numbers, and dates of birth. Sweeney cross-referenced the released medical data with voter records to determine Weld’s identity, and she famously pointed out the flaws of the oft-used method of anonymization.

In an effort to develop a stronger approach to securing sensitive information within datasets that are shared for statistics and research, Colgate computer science professor Michael Hay has been working on a model of privacy protection called differential privacy. Differential privacy promises that whatever information is publicly released cannot be reverse-engineered to determine whether any one person’s record appears in the original dataset. This promise is typically achieved by adding random noise to the computations performed on the data. The noise obscures individual records, but still allows a data scientist to learn about the population as a whole.
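
Here is a minimal sketch, in Python, of that kind of noise addition for a single counting query. It uses the standard Laplace mechanism; the toy patient records, the age predicate, and the epsilon value are all hypothetical and are not taken from Hay’s work.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def noisy_count(records, predicate, epsilon):
    """Differentially private count: the true count plus Laplace noise.

    Adding or removing one person's record changes a count by at most 1,
    so noise drawn from Laplace(1/epsilon) gives epsilon-differential
    privacy for this single query.
    """
    true_count = sum(1 for record in records if predicate(record))
    return true_count + rng.laplace(scale=1.0 / epsilon)

# Toy example: how many patients in a hypothetical dataset are over 65?
patients = [{"age": 70}, {"age": 45}, {"age": 81}, {"age": 66}]
print(noisy_count(patients, lambda r: r["age"] > 65, epsilon=0.5))
```

Smaller values of epsilon mean more noise and stronger privacy; larger values mean more accurate answers and weaker protection.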

Hay was one of the first to study differential privacy after it was invented by a group of researchers in 2006. “I was interested in the concrete applications,” he says. “So I did some work that was practically focused, wrote code, and ran experiments.” 

Illustration by Matt Chinworth: people entering balls of data into a randomizer wheel

In 2010, Hay and colleagues released a paper, “Optimizing Linear Counting Queries Under Differential Privacy,” which proposes an algorithm called the matrix mechanism. “The matrix mechanism helps a data scientist balance the competing goals of ensuring privacy and accurately preserving the statistics of interest,” Hay explains. A variant of this approach will be used in the U.S. 2020 Census, for which Hay has been a consultant. 
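
The 2010 paper is the authoritative description of the matrix mechanism; the sketch below is only a toy illustration of the general idea, using a made-up four-bin histogram and a hand-picked strategy. The mechanism answers a chosen set of “strategy” queries with calibrated Laplace noise, then reconstructs answers to the real workload from those noisy measurements. Searching for a strategy well matched to the workload, which this sketch does not do, is where the accuracy gains come from.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def matrix_mechanism(x, W, A, epsilon):
    """Toy version of the matrix-mechanism idea (not Hay's implementation).

    x: true histogram of counts; W: workload query matrix; A: strategy matrix.
    One person changes one cell of x by 1, so the L1 sensitivity of the
    strategy queries is the largest absolute column sum of A.
    """
    sensitivity = np.abs(A).sum(axis=0).max()
    noisy = A @ x + rng.laplace(scale=sensitivity / epsilon, size=A.shape[0])
    x_hat = np.linalg.pinv(A) @ noisy   # least-squares estimate of the histogram
    return W @ x_hat                    # consistent answers to the workload queries

# Hypothetical 4-bin age histogram and a workload of cumulative (prefix-sum) counts.
x = np.array([120.0, 80.0, 60.0, 40.0])
W = np.tril(np.ones((4, 4)))
identity_strategy = np.eye(4)           # simplest strategy: measure the histogram itself
print(matrix_mechanism(x, W, identity_strategy, epsilon=1.0))
```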

Differential privacy was named one of MIT Technology Review’s 10 breakthrough technologies of 2020 due to its deployment for the census. And, Hay’s paper won this year’s ACM PODS Alberto O. Mendelzon Test-of-Time Award.

The matrix mechanism is also used in the products that Hay and two colleagues are building at Tumult Labs, a start-up company the trio founded in 2019. “Our ultimate goal is to build software tools that allow people to use differential privacy to safely share their data and be assured rigorous privacy protection,” he says. Their clients include statistical agencies as well as businesses operating in the private sector. Next year, Hay will be taking a sabbatical to work at the company full time.

Anonymization, or de-identification, is a common practice to protect individuals’ privacy in shared datasets by removing identifiers like names, Social Security numbers, and dates of birth.

Although differential privacy has been adopted by the U.S. Census Bureau — as well as by Apple, Google, and Uber for some of their projects — it is not without controversy, Hay points out. Critics are concerned that it is too restrictive and will muddy the census data that researchers and statisticians rely on.

“But, any release of information incurs some risk of privacy loss,” Hay argues. “With differential privacy, the privacy loss can be controlled and mathematically quantified. This technology is not a silver bullet, but it does enable institutions to decide how much data to share in a principled way.”

Michael Hay is an associate professor of computer science. His research interests include data privacy, databases, data mining, machine learning, and social network analysis.


Feeding the Machine

What does it mean to speak of the individual as private, or of an individual’s data as private?

Professor Emilio Spadola looks at privacy from an anthropologist’s viewpoint. “Culture is public, language is public, social institutions are public, and so the self is also fundamentally public,” he says. “We are each a unique individual, but we build our individual selves in and through publicly available material.”

Arguments about privacy often suggest an ideal of autonomy — no one should penetrate or influence our inner thoughts and lives. But, “the very possibility of being a person in the fullest sense already presumes that we’re not isolated individuals,” says Spadola. To be a person is to be networked through your community. So, to think in terms of social science is to think: “Yes, we’re each an individual with a unique personal life, and yet there’s no absolute privacy.”

“To be a person is to be networked through your community.”

Emilio Spadola, associate professor of anthropology and Middle Eastern and Islamic civilization studies

Historically, the word “public” has meant, in some way, “held in common and thus communal,” he explains. So as we use communal elements like language or cultural and social expectations in everyday life, we help maintain communal life as a whole. 

“The problem with social media and big data is the increasingly total commoditization and monetization of everyday personal interactions — our friendships and family lives — that we do not associate with commerce,” Spadola says. “When you post something, you’re renewing communal life, but also producing value for corporate profit. The problem is when the latter comes at the expense of the former.” 

Illustration by Matt Chinworth: silhouette of a man in a city with data and graphs circling around him

Both he and Alexander cite The Age of Surveillance Capitalism (PublicAffairs, 2019) in which Harvard professor emerita Shoshana Zuboff describes how corporations are financially benefiting from people sharing their personal experiences, predilections, and interactions.

Of course, sometimes data is collected without our express permission. Most of us have had the experience of seeing ads on our devices that make us think our technology is listening to our conversations. Spadola points out that this has carried into his classroom. In his anthropology course, Culture, Diversity, and Inequality, he assigned his students the movie The Matrix (in which bodies fuel the machine that keeps them imprisoned). Later that day, when one of his students returned to her room and opened her computer, Netflix was advertising The Matrix to her, even though she’d never searched for it previously. “Clearly, her phone was listening, our class interaction was being monetized, and we were laboring for a social media giant,” Spadola says. “Whether we like it or not, we’re constantly having data scraped off us or having all of our interactions stored and monetized.”

Even so, Spadola says, social media and the internet can’t be blamed as the source of the problem. “Ultimately we’re seeing the ordinary force of capitalism. We’re workers, producing value that’s enriching certain individuals and classes.” 

The issue, he adds, is that theories of capitalism have described it as a system in which individual self-interest paradoxically produces an overall public good. “But social media interactions do not necessarily renew societies,” Spadola says. “In fact, there are socially destructive interactions, such as sharing ‘fake news’ and posting extremist propaganda. These don’t produce value for society, but deplete it. Social media isn’t enriching the commons.”

Emilio Spadola, associate professor of anthropology and Middle Eastern and Islamic civilization studies, explores technological media, social life, and security in Muslim societies.