The first time I was truly distressed about online security was after executing my first SQL injection attack in my computer security class at UC Berkeley.
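For readers who haven't seen the attack, here is a minimal sketch of what an SQL injection looks like, using Python's built-in sqlite3 and a made-up `users` table (the table, the payload and the passwords are all hypothetical, for illustration only):

```python
import sqlite3

# Toy in-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

attacker_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: the input is spliced directly into the SQL string,
# so the OR clause makes the WHERE condition always true.
query = f"SELECT name FROM users WHERE password = '{attacker_input}'"
vulnerable_rows = conn.execute(query).fetchall()
print(vulnerable_rows)  # leaks every row: [('alice',)]

# Safe: a parameterized query treats the input as data, not SQL.
safe_rows = conn.execute(
    "SELECT name FROM users WHERE password = ?", (attacker_input,)
).fetchall()
print(safe_rows)  # no rows match the literal string: []
```

The whole bug class, including the ones behind real breaches, comes down to that one distinction: string-building treats attacker input as code, while parameterized queries keep it as data.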
After taking the class, my hyperawareness of the web surged. I now figure that every click adds a cookie (or 10) to shadow me as I traipse the web. For every app I grant camera access, I expect it, even when not in use, to record every walk my iPhone camera sees and every word the microphone hears. And to be honest, my theories of being constantly surveilled are not too far off. I am a regular “clear cookies and cache”-er who uses my laptop over my phone as much as possible — shoutout to my best friend, AdBlock, and my slide-y webcam cover.
Exploiting (for a fun class project) the very bugs that caused huge data breaches like Equifax’s 2017 calamity was my incredibly privileged foray into digital privacy concern. But for low-income communities (households earning less than $20,000 a year), the introduction often comes from having their own privacy exploited. Digital privacy harms disproportionately affect those living in poverty.
Despite this, there’s a popular misconception that living paycheck to paycheck means you “don’t care” about privacy. Unlike the minor inconveniences I face from tracking and data collection, low-income individuals are unfairly scrutinized and face differential security harms that can threaten their psychological and physical welfare at any time. These harms are amplified by reliance on mobile phones, scant access to technical help for online safety (such as authoritative security advice and IT professionals), the conditions of poverty and a lack of legal representation — and they aren’t solved by just reading the “terms and conditions.” “Opting out” of using technology isn’t the solution, just as lack of “digital literacy” or “care” isn’t the problem.
Distrusting individuals living in poverty, and trivializing their rights to privacy and autonomy as secondary concerns, institutions surveil them at every turn: employers of low-income workers conduct drug tests, prisons voiceprint incarcerated people by collecting audio samples for surveillance before they can make phone calls, and welfare agencies share applicants’ data across multiple commercial databases. In some areas, low-income tenants are afraid to even enter their homes as their landlords replace key locks with facial recognition entry and promise no data security.
The digital divide in tech now extends beyond access to hardware and broadband. Ensuring a device per person pales in comparison to the challenge of ensuring that every person’s data privacy, as well as their ability to seek redress, is protected. Recurring privacy harms, such as identity theft, have cascading effects on individuals living in poverty: financial loss, wrongful arrests, harassment from collection agencies and loss of food stamps and child support benefits. And when income sources are jeopardized, individuals often can’t pay for legal counsel in response to the theft.
Because of their race, zip code and gender data, individuals living in poverty are denied opportunities, targeted more heavily by unscrupulous financial vendors looking to exploit vulnerable populations with advertisements of fake debt relief services and counterfeit loan programs, and charged higher prices for goods and services on these platforms compared to their affluent counterparts who live in “wealthier” zip codes.
Since they often rely on mobile devices as their primary source of internet connectivity, low-income communities are disproportionately subject to malicious surveillance practices like cell-site simulators, in-store tracking and cross-device monitoring. With social media platforms accelerating the rate at which individuals can post on mobile, content is easier than ever to upload, but harder than ever to control. I’ve been using Facebook since the 11th grade, and I can post a status or picture within seconds. Restricting Facebook’s privacy settings, by contrast, is a lengthy multiclick process that is significantly harder on mobile devices, leading many to abandon restricting their privacy altogether.
When it comes to determining employability, applicant screening tools like HireVue have been endorsed by the Federal Trade Commission to identify “negative behavior” (whatever that means) when mining applicants’ social media data. I’m not sure when poverty was last associated with “positive behavior,” but this vague approval to exploit even more vaguely defined information economically excludes those categorized and filtered out as “at-risk populations,” whose profiles might not be curated enough.
Mobile tracing, the covert collection of spatial and temporal data points from mobile activity using Stingray devices, has also led to the hyperpolicing of segregated, nonwhite, low-income communities. In Baltimore, Milwaukee and Tallahassee, law enforcement has mapped these neighborhoods, surveilling them and waiting for crime to happen, often creating pre-suspect lists that cause collateral damage when acted upon. Predictive policing and threat scoring, often derived from social media communications and online data, deepen the distrust and stress these communities experience.
Suggesting these communities should just discontinue their tech use to protect their privacy — when mobile phones are significantly cheaper than computers and imperative for connecting with people and applying to jobs — is ridiculous. Arguing that individuals living in poverty should self-regulate the content they upload, thus waiving the substantive regulation necessary to end discrimination, is also ridiculous. We need to stop framing poverty as the culmination of an individual’s poor decision-making and start acknowledging it as a systemic issue. Instead of promoting these ideologies, which contribute to an obscene number of privacy violations and the continued dehumanization of those in poverty, we can start by simplifying privacy policies, allowing low-income individuals to “opt in” to policy decisions and addressing the “terms and conditions” that have cultivated a position of disadvantage in the first place.