A former student of ancient linguistics at the University of Cambridge, Stephanie Posner, DoubleVerify’s (DV) Director of Product Policy, developed a strong understanding of the power of language early in her career.
On Netflix’s Global Risk and Intelligence team, Stephanie delved deep into the implications of releasing a broad spectrum of content into different regions and cultures. At TikTok, she helped create the platform’s policies for User Generated Content Monetization while being exposed to the challenge presented by the vast and rapid stream of creative content on social media.
In this installment of our “Ask the Experts” series, Stephanie explains how she and her team work closely with research and development teams to develop content policy guidelines for classification. As with all of the efforts at DV, the goal is to protect our clients by providing a stronger, safer and more secure advertising ecosystem.
How did you first begin working in this field?
I’ve always had a passion for media and an intellectual interest in technology. I started my career as a diplomatic advisor. Thanks to the high-level international exposure I had, I quickly found myself fascinated by the impact that online communications can have on our society. At Harvard and Yale, I dug into this topic from an academic perspective as part of my Master’s in Global Affairs and MBA. I’ve since been fortunate to work at innovative technology companies such as Netflix and TikTok on teams that are committed to making sure technology has a positive impact in the world.
I joined DV’s Product team eight months ago because I’m excited by our mission. Brands have power, and where they place their ads online makes a difference to the health of the online environment. DV products open up the way for brands to make these decisions in a way that is true to their values.
Your role requires amazing expertise. How would you describe what you do in layman’s terms?
Our team members create principled rules for training machine learning models. We define these rules in great detail across every kind of content topic you could imagine – from hate speech and misinformation to celebrity gossip or sports. The policy team makes sure there are consistent guidelines for classifying everything from a simple text-only website to a complex page or social media post that could have text, image, video, audio and any combination of these.
Working on product policy requires both technical fluency (to work with data scientists, linguists and engineers) and a grounding in the humanities, such as law, philosophy, history and current affairs (to understand how a spectrum of issues affects our clients and consumers). Being able to straddle those worlds is essential.
What do you love most about working on machine learning models?
What I love most is the challenge. DV is pushing the boundaries of product development in the content classification space. In the grand timeline of the human race, machine learning is a field that we’re just beginning to grapple with. To train high-performing models, we have to get extremely granular with our policies. They have to cover almost every scenario imaginable, and they have to be both understandable for the teams that label content and implementable for a combination of machine models.
Moreover, what we do makes a difference. I love working with brands and partners, and I love working in a space where there are so many people collectively trying to make a positive impact.
In every field there are challenges. What are the biggest challenges in your field? How do you work in your role at DV to take on these challenges head first?
Many people in this field aspire to make principled policy decisions, only to find those decisions in conflict with product goals. At DV, our product goals are uniquely aligned with principled policy-making: our goal is to enable clients to make conscious decisions about their ad spend.
One challenge we all face in this field is bringing more voices to the table. I personally strive to prioritize this and encourage my team at DV to prioritize it, too. Given the unending complexity of issues online, we must have a diverse team of professionals who can bring their expertise covering as many issues, regions and experiences as possible. My team is also keen to partner with experts, academics and civil society organizations on content policy issues.
What aspects of your work at DV are you most proud of?
Our teams are tackling some of the most important issues for society today, and we’re giving brands the power to make a difference online. Many people agree that tackling issues such as disinformation and hate speech is important, but they don’t realize how challenging it is to solve these problems at scale. At DV, we embrace challenges. Our teams are doing their part to enable brands to avoid (and avoid monetizing) content they deem problematic – including disinformation and hate speech – through our suite of more than 90 categories. Our approach allows brands to protect their reputation while maintaining scale, by providing advanced classification tools that take into account the full nature of the content rather than relying solely on specific keywords.
What do you think are the most exciting developments in content classification? What can we expect to see in the future?
As our machine models get more sophisticated, there should be less and less need for humans to oversee the classification of extremely explicit content. We will always need humans in the loop, but we will be increasingly able to focus human work on nuanced issues.
The metaverse is also an exciting development in the world of content classification because it is an entirely uncharted universe for user safety and brand safety and suitability.
Are there any resources you would recommend for people who want to get to know more about product policy or machine learning technology?
In the product and content policy space, staying on top of the latest developments in the online ecosystem and understanding the interplay between online content and the ‘real world’ is crucial. For that reason, I’m a big fan of newsletters. Some of my favorites that cover stories across the globe include the DemTech Newsletter from the Oxford Internet Institute and the newsletter published by The Berkman Klein Center for Internet & Society at Harvard University.
If you’re interested in reading more from our classifications experts, check out Anna Zapesochini’s “Ask the Experts” blog on machine learning technology and CJ Morello’s blog on classification operations.