⚡ Safekeeping the Internet

How to make safety policy from Subtle Asian Dating to GPT-3

Today's interview features Reboot volunteer, writer, and engineer Jessica Zhou.

Jessica currently works on safety operations at OpenAI, where she thinks about how information travels through internet ecosystems and the potential of human-computer collaboration, and she grew and moderates the Facebook community Subtle Asian Dating. She's also tinkering with form as a 2021 Interdisciplinary Writer’s Lab fellow with Kearny Street Workshop and writing poems for strangers as a part of the 2021 Pride Poets cohort.

🛠️ Safekeeping the Internet ft. Jessica Zhou

This interview has been edited for length and clarity.

What do you work on at OpenAI?

At OpenAI, we see lots of applications built with GPT-3 for copywriting, customer support, product catalog applications, as well as creativity tools for writing.

I started off evaluating these applications against internal and external policies like the use-case guidelines, safety best practices, and terms of use. I've also updated our policies to reflect newly surfaced safety risks. I'm now focused on monitoring to better understand emerging themes in how people use the GPT-3 API, and I build reports and classifier training efforts to proactively facilitate safe application development. Finally, we support developers in putting policies into practice within their applications. There's a lot of humanness involved in reaching out to someone and talking them through our reasoning.

We aren’t directly moderating content, but we do want people to moderate their use of the platform, so it's almost like moderating someone else’s moderation.

Is there anything you were surprised by at OpenAI?

This isn't specific to OpenAI, but I’m surprised by the various ways "the market" exerts pressure on organizations to be financially sustainable to the point they have to compromise on mission, integrity, or effectiveness.

For example, during the first week of my editorial internship at the LA Times in 2016, I watched a bunch of people get laid off because of the false-alarmed “Pivot to Video,” which eventually cost hundreds of journalists their jobs and reflected a number of deeply rooted problems in the industry: the volatility of digital advertising, over-dependence on aggregator platforms like Facebook, and out-of-touch business executives.

Language model APIs are a different domain, but what persists is the push-and-pull between business pressure and what might be best for society. For example, something that Brian Christian highlighted in his Reboot talk was the tension between research cross-collaboration for altruistic societal ends versus the business interest of each entity to stay siloed from one another.

In my head, I have all these ideals of how things should be, but I realize the process of actualizing ideals means having to make decisions about what to prioritize at each point in time. I’m learning to accept that these decisions aren’t so much compromises as they are intermediary steps that we take to work towards an ideal.

I was also interested in your work moderating Subtle Asian Dating (SAD) — that's such an interesting context for thinking about online communities and the performance of the self.

SAD blew up over Thanksgiving break; we went from 1,000 to 100,000 members super quickly. At the beginning, we had a single intention — that this should feel fun and different from our other experiences with dating. During this exponential growth period, we had to very quickly put out a working moderation policy. Even if it was hard to enforce in practice, it’s critical to say that we don't tolerate colorism, sexism, homophobia, transphobia, or other things that attack people's humanity. Then we got into really weedy questions; for example, how do you even know if someone is from the Asian diaspora? We wanted to be very porous in allowing as many people to be here as possible.

The work my team is doing at SAD is unpaid even though it plays such a critical role. I’m certain we’ve generated a ton of money for Facebook; people kept telling me that SAD and the related constellation of groups were the only reasons they kept using Facebook. Moderation for SAD feels like a labor of love, but it’s surprising how so many hours of labor go unpaid by one of the biggest companies in the world. This gets at a tricky ongoing question between platforms and their creators and contributors: VCs seem to love "network effects,” yet so much merit is heaped upon those who build a platform. That has big landlord energy to me.

Another thing that was surprising from SAD was recognizing that we didn't want to trademark or be possessive of a concept that resonated with a ton of people within the Asian diaspora. People hosted independent SAD-themed events and started spinoff local groups. I met one of these groups in Austin, and it was such a heartwarming moment to see all the new friendships that sprouted. Not everything has to happen under our umbrella, and we were a part of growing something much bigger than ourselves.

Do you see any parallels between your work at OpenAI and at SAD?

It's really cool to be in a role at OpenAI where I'm paid to do the same things that I did with SAD — figuring out how to navigate potential conflicts between community members or customers and our organization’s policies. Being paid for moderation-esque work is a nice feeling, but the unpaid stuff that I was doing with SAD was still materially important from a humanist, interpersonal perspective.

Additionally, the interplay between the internal and external in shaping any entity’s values is interesting — the feedback loop of interacting with the outside world to figure out who you are at the core. OpenAI's moderation policies evolved based on continuous interactions with developers, and SAD's moderation policies were based on interactions with group members. This reminds me of how I’m stumbling through the process of figuring out who I am: trying out things, spending time around people, negotiating what is important to me, adjusting to those new values, and repeating.

Any hopes for the future of social tech?

I see Facebook as a public space or a phonebook where you can stumble upon someone, but I'm excited for whatever becomes the go-to digital equivalents for casual friend hangouts. The act of creating products or companies is a form of building for a future you want to see, and so many other kinds of imagination should have the chance to flourish too.

I’m inspired by Eternal as well as Somewhere Good for this reason. We deserve platforms that are tailored to us alongside big platforms that aggregate a broad public. There are so many people in this world, and just a small subset of them building socio-technical infrastructure for our future.

Finally, Facebook has enabled people around the world to find one another in ways we couldn’t have previously imagined — I know how much my life has changed because of it. But now that we’re here, I am also excited to see where people go after they’ve found one another.

Find more of Jessica around the web: for media/computing interests, go here; for art/literary interests, go here; see what’s bouncing around her brain on Are.na, and say hi on Twitter.


🌀 microdoses

💝 closing note

Ted Chiang is one of the Reboot community’s most-loved authors. Here are a few of our favorite short stories:

P.S. If you’re a young person looking for a community thinking critically about tech, humanity, and power, Reboot runs a private Discord — more on that here.

Toward a better Internet,

— Jasmine & the Reboot team