BIG TECH

A Googler's view on the future of tech & regulation

Grace Guan

Grace is currently a Strategic Partner Manager for Fashion & Beauty (F&B) partners at YouTube in San Francisco, California. As an SPM, she is responsible for enabling & empowering top endemic F&B talent on YouTube. Previously, she worked in Operations at Google and held numerous internships in fashion retail & e-commerce (at Bloomingdale's Corporate, Rent the Runway, and Joyus.com). Grace graduated from the University of Pennsylvania with a degree in Politics, Philosophy & Economics.

If you have followed the news at all over the past few years, you’ve likely heard about the Big Tech vs. government tensions that have been brewing ever since technology became a force to be reckoned with. Underlying those tensions is a knowledge gap that runs the gamut from a US congressman asking Google CEO Sundar Pichai why his granddaughter saw negative news about him on her iPhone (Pichai’s response: Google doesn’t make iPhones) to the EU implementing GDPR, “the toughest privacy and security law in the world”, in 2018. The four-year effort to craft legislation protecting user data was valiant, but GDPR has also been widely viewed by small and medium businesses as weighing on the EU economy and hurting the European tech scene.

I joined Google in 2016 and started a new role at YouTube in 2018, at the epicenter of all this Big Tech madness in the Bay Area. I was on campus the day a shooter stormed our headquarters and opened fire on YouTube employees with a semi-automatic weapon. I witnessed the internal scrambles that occurred when policy decisions like COPPA (the Children’s Online Privacy Protection Rule) rolled out, when PR scandals like the Crowder vs. Maza hate speech incident took place, and, most recently, as Smartmatic, the electronic voting technology company, sued Fox News for defamation and spreading political disinformation. Each time, I was struck by the regulatory tension with technology, especially as it relates to media and to how content is created and consumed. What makes “technology” such a black box, and what are technologists and policymakers meant to do about it?


Historically, private online platforms in the US have been protected by what’s known as Section 230 of the Communications Decency Act, which gives “online intermediaries broad immunity from liability for user-generated content posted on their sites”. The policymakers who signed it into law in 1996 saw this grant of immunity as the preferred way to avoid the free speech problem of collateral censorship. In other words, if legislators held privately owned platforms legally liable for any offensive content posted by their users, the Facebooks, Twitters, and YouTubes of the world would have to take an extremely risk-averse approach to user-generated content, resulting in a culture of collateral censorship; without that caution, these companies could be sued every time someone shared something harmful or false (which, as we know, has happened a lot in the history of the open web). Despite this immunity, US-based platforms have, over time, engaged in active policing of their content and users. YouTube hired over 10,000 content reviewers in 2018 to manually comb through borderline content that wasn’t automatically captured by its machine learning systems. Twitter permanently suspended ex-President Trump after the insurrection at the US Capitol, reversing its long-standing reluctance, grounded in First Amendment values, to police speech.


Big Tech is a black box because it’s constantly juggling priorities in this realm: trying to stay ahead of regulators before its protections under Section 230 are removed, appeasing users who balk at the idea of their “free speech” being stripped away, and keeping advertisers happy in the midst of all this chaos. In fact, Section 230 is already at risk of sweeping reform under the SAFE TECH Act, introduced in early February 2021. Tech has been one of the few sectors to enjoy security and growth in the wake of COVID-19, but its future rests with policymakers around the world who understand little about how any of this technology really works, from search to algorithmic suggestions to automatic filters. To protect the future benefits of technology, we need to leverage experts in the field when enacting legislation: closer collaboration between technologists and policymakers can close the knowledge gap and, ultimately, better protect individuals from online harms such as disinformation, harassment, hate speech, spam, and data exploitation.

London Business School Tech & Media Club