

So what about ethiCS?



OCTOBER 11, 2019

CS 195, Social Implications of Computing, is a brilliant class.

You learn about the information economy, the tradeoffs of ensuring privacy and the workings of surveillance capitalism, all in a one-unit, pass-no-pass class that fulfills the College of Engineering’s ethics requirement, which, for reference, is fewer units than the baking DeCal. Upon entering the class, you’re in the presence of three of the most esteemed computer science faculty on campus, who instruct rows of (mostly engineering) students browsing Reddit or Facebook, if not frantically hacking away at assignments for other “real” CS classes.

The visible lack of engagement in lecture, coupled with brazen Piazza posts (“Will I still pass this if I don’t do the last essay?”), suggests that students have abdicated their responsibility to recognize ethics as a foundational component of computer science. Instead, ethics is viewed as nothing more than a tangential specialization, and this isolationist school of thought propagates to the companies where students end up working, where concerns about data protection and representation seem to hinder innovation and quick product release cycles.

A few years ago, I was shocked to hear about a software developer who tweeted that Google Photos’ algorithms had inappropriately classified his Black friend as a gorilla. Instead of ensuring that the training dataset for the image recognition algorithm included more images of dark-skinned individuals to prevent such mislabeling, Google simply censored searches for primates in its photo-organizing service. This lack of algorithmic fairness extends even further: influencing hiring decisions by filtering out candidates who look or speak differently, assigning unfair insurance premiums to those algorithmically classified as more “risk-prone” and, worse, causing self-driving cars to fail to detect certain groups of pedestrians more often than others. It’s time to pay more attention. Is the technology we are creating replicating power structures and reviving inequalities of the past?

More often than not, we don’t notice tech’s pernicious lack of ethics until we’ve experienced unfair treatment ourselves. Last semester, I was researching potential biases in facial recognition algorithms (Microsoft Azure, Amazon Web Services, Google Cloud Vision, et cetera) in the Tech for Social Good division of UC Berkeley’s Center for Information Technology Research in the Interest of Society, or CITRIS, and was mortified by the web-scraped results that popped up when I tested an algorithm on a picture of myself. Unlike photos of my male friends with similar skin color, which returned results such as “computer science” and “engineering,” my results were almost all physical attributes like “long hair” and “brown,” without a single academic or career-related term.

Equally important is ethical research, because successful projects undergo tech transfer and materialize into distributable products. Some months ago, I attended a showcase of a company’s artificial intelligence platform, where its research division demoed smart stock image search functionality. Though thousands of results rendered at lightning speed, searches for positive terms (“happy,” “family,” “success”) almost exclusively returned images of light-skinned individuals, while words like “crime” returned images of darker-skinned individuals.

I raised my hand in the middle of the presentation, asking what formalized practices the company had integrated into its research process to ensure equal representation of backgrounds, to which the answer was, “Yeah, we know it’s a problem — we still need to do that.”

This mindset is the very problem: Ethics should not be a reactive consideration after something cataclysmic happens, like misclassifying a dark-skinned person as a gorilla. And such instances, frequently described as “edge cases,” speak more to the limits of what the individuals behind the code care about than to unexpected bugs.

Unifying conscience and product development doesn’t just entail checking off items such as data privacy, algorithmic fairness and user safety in the very last step. Nor is it solely limited to inclusivity in the production of data-centered technologies. It’s about integrating thoughtfulness and unwavering accountability into the development cycle: monitoring the aftermath and acting on it empathetically, without hesitation, no matter how disastrous it may initially be.

It’s also not enough to resign all responsibility to policymakers. When the laughable Facebook congressional hearings went viral, the extent of my peers’ response was poking fun at the absurdity of the questions asked by government officials. Enacting and enforcing regulation evidently takes time, so why wait? Why is it their job, and not ours, to leverage our position and ask the questions?

I’m not insisting that every single engineer and product person devote every second of their lives to contemplating the ethical implications of every piece of code they write. Rather, I implore these groups in particular to think critically about this: Are you sharing your access to power and resources with those on the front lines of the struggle? From the AI Now Institute to Data & Society, there is no shortage of initiatives tirelessly grappling with establishing a middle ground between technological progress and exploitation. Is there anything you can do to actively, or even passively, support their work?

I also really hope for a curriculum change. A single one-unit course is not sufficient to engage with the infinity of externalities that tech has generated and will continue to generate, and it’s time that UC Berkeley’s curriculum was restructured to reflect that. I look forward to the day when students willingly minimize their screens and are present in discussions that will shape so much of our society. We spend an unjustifiable amount of attention training our aspiring engineers to evaluate code and design for optimal performance; perhaps we should think beyond just what and how to build, and contemplate whether we even “should.”

Divya Nekkanti writes the Friday column on tech, design and entrepreneurship. Contact her at [email protected].
LAST UPDATED: OCTOBER 18, 2019

