November 14, 2024

How We Can And Should Regain Control Of The Recommendation Algorithm


I’ve been working in data and artificial intelligence (AI) since it was just called statistics and applied math. But over the past decade, AI has approached a troubling inflection point. Notably, the recommendation algorithms that social media platforms like Facebook, Instagram, Twitter, YouTube and TikTok rely on to rank and recommend content have become the center of a contentious debate about the harm they’re causing to citizens and consumers.

In the 1990s, the philosopher Nick Bostrom defined AI superintelligence as “an intellect that is much smarter than the best human brains in practically every field” and postulated that, at some point, AI will exceed our intellectual abilities. The complex content recommendation algorithms social media platforms have created, trained and fine-tuned over the last decade foreshadow the eerie intelligence of the next wave of algorithmic models. We must be clear about how to fix the problems of today’s models before we face the same problems, only amplified, over the next decade.

This week, recommendation algorithms were front and center at the Supreme Court. The Court heard a pair of cases, Gonzalez v. Google and Twitter v. Taamneh, on Tuesday and Wednesday, respectively. Together they offer the Court an opportunity to rewrite the rules that undergird our modern social landscape by potentially paring back Section 230, the legal shield platforms have historically relied on. In Gonzalez v. Google, the plaintiffs allege that Google violated the Anti-Terrorism Act by allowing YouTube’s recommendation system to promote terrorist content.

The internet needs new rules: Section 230 was written in 1996, before these platforms existed. But even if the plaintiffs don’t prevail in Gonzalez v. Google, the need remains. While litigation can be a useful vehicle to effect change, it’s better for every stakeholder involved that policymakers come to a bipartisan agreement rather than let the Court legislate from the bench on something so consequential.

We must pursue regulations that work within the existing constructs of these algorithms to put consumer welfare first: prioritizing privacy and data stewardship, instituting stress tests and leading with explainable AI.

Put Privacy First

Algorithmic recommendations and amplification can be both positive and negative. In positive cases, users derive value in exchange for their data. During the first half of my career, I built two data and AI companies, Rapt and Krux, that helped media companies and marketers optimize pricing, segmentation and analytics for digital advertising. It was during this time that Big Tech began to understand the advantage of collecting user data to inform its own ad campaigns and to sell to third-party advertisers, the business that drives the bottom line for these platforms today. I’m aligned with this approach and helped build its infrastructure in the 2000s: companies should leverage the unique properties of data and data management to create durable moats and business models.

However, that needs to be done responsibly, and Big Tech has taken it too far. Policymakers should propose and enact common-sense regulations grounded in privacy that hold Big Tech accountable for the amplification of harmful, biased and false content.

The end goal: users have more control over how their data is used to suggest and target content. A variety of settings could be implemented and left to the user to decide, empowering proper data stewardship and control, including the following (sketched in code after the list):

  • Collecting only relevant data, which could allow a user to choose not to share the personal information used to create their account or data about who they engage with on the platform.
  • Choosing how data can be used by targeting algorithms, which would allow a user, for example, to green-light more music content and block all political content.
  • Filtering by type of behavior, such as excluding from the model any content a user likes or engages with past a certain time of night.
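
To make these controls concrete, here is a minimal sketch, in Python, of what a user-held preference object and a signal gate could look like. Every name in it is a hypothetical assumption; no platform exposes such an API today.

```python
from dataclasses import dataclass, field

# Hypothetical per-user recommendation controls. All field names are
# illustrative assumptions, not any platform's real settings.
@dataclass
class RecommendationPreferences:
    share_profile_data: bool = False   # opt out of account details as a signal
    share_social_graph: bool = False   # opt out of "who I engage with" as a signal
    allowed_topics: set = field(default_factory=lambda: {"music"})
    blocked_topics: set = field(default_factory=lambda: {"politics"})
    engagement_cutoff_hour: int = 22   # ignore engagement after 10 p.m.

def may_use_signal(prefs: RecommendationPreferences, topic: str, hour: int) -> bool:
    """Gate an engagement signal before it feeds this user's ranking model."""
    if topic in prefs.blocked_topics:
        return False
    if prefs.allowed_topics and topic not in prefs.allowed_topics:
        return False
    return hour < prefs.engagement_cutoff_hour  # post-midnight wrap omitted for brevity

prefs = RecommendationPreferences()
assert may_use_signal(prefs, "music", hour=14)         # green-lit topic, daytime
assert not may_use_signal(prefs, "politics", hour=14)  # blocked topic
assert not may_use_signal(prefs, "music", hour=23)     # late-night engagement excluded
```
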
With such changes, users can begin to control their internet destiny. A good model is the European Union’s Digital Services Act, which requires platforms to let users turn off personalized recommendations and to surface options for adjusting ranking parameters. The small steps Meta, Twitter and other U.S. platforms have taken to date pale in comparison, and we need to catch up.

Regulate Big Tech Like Big Banks

The core business goal of social media platforms is to maximize profits. When advertising is the only way to do that, social media platforms are not incentivized to regulate content because they make money regardless of what is being elevated and spread throughout the platform.

Stress tests, simulations of crises designed to protect against them, can be a helpful mechanism for regulators to keep the platforms in check. Similar to the stress-test requirements imposed on the largest American banks after the 2008 financial crisis, platforms should simulate potential disasters and develop internal policies to prevent them. The stress tests banks face are, at bottom, computer simulations, so there is a replicable template platforms can follow. The platforms could then be held accountable for negative events, including, for example, the violence incited by the Islamic State propaganda and recruitment videos at the center of Gonzalez v. Google.
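
What would a platform stress test actually compute? One plausible form, sketched below under assumed names (the `rank` function stands in for the platform’s real model), seeds a simulated content pool with items a trust-and-safety team has flagged and measures whether the ranker promotes them beyond their share of the pool.

```python
# Hypothetical amplification stress test: compare how often flagged items
# appear in generated feeds against their base rate in the content pool.
def amplification_stress_test(rank, pool, flagged_ids, n_users=1000, k=10):
    """rank(user_id, pool) returns items ordered by score; the top k form a feed."""
    base_rate = len(flagged_ids) / len(pool)
    shown_flagged = shown_total = 0
    for user_id in range(n_users):
        feed = rank(user_id, pool)[:k]
        shown_flagged += sum(1 for item in feed if item in flagged_ids)
        shown_total += len(feed)
    exposure_rate = shown_flagged / shown_total
    return exposure_rate / base_rate  # > 1.0: the ranker amplifies flagged content
```

A regulator could require that ratio to stay below a fixed threshold, much as bank stress tests require capital ratios to stay above a floor under simulated shocks.
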

This is an idea advanced by my brother, Marty, and while our professional interests diverge in several ways, we agree that regulating Big Tech like big banks is a necessary and implementable course of action. When companies allow misinformation or dangerous content to pollute the internet, they should face the equivalent of a carbon tax.

A precondition for stress tests, however, is that the AI used to build these algorithms is “explainable,” meaning developers can express why an AI system reached a particular decision, recommendation or prediction. Essentially, every system output comes with an “audit trail” that can be queried to reconstruct what happened and why. Big Tech platforms already operate ad logs that store and report every ad that crosses every screen of every user on every device across the planet. Audit logs for explainable AI sound daunting, but they are easily within reach relative to the data processing Big Tech already employs to count and deliver ads.
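
As a sketch of what such an audit trail could contain, the snippet below writes one JSON line per recommendation, recording the signals that drove the score. The schema is an assumption for illustration, not any platform’s real logging format.

```python
import json
import time

# Hypothetical audit-trail writer: one JSON line per recommendation decision.
def log_recommendation(log_file, user_id, item_id, score, top_signals):
    record = {
        "ts": time.time(),         # when the decision was made
        "user": user_id,           # who saw the recommendation
        "item": item_id,           # what was recommended
        "score": round(score, 4),  # the model's ranking score
        "why": top_signals,        # signals and weights behind the score
    }
    log_file.write(json.dumps(record) + "\n")

# An auditor can later replay the file to reconstruct why an item surfaced.
with open("recs_audit.jsonl", "a") as f:
    log_recommendation(f, "u123", "vid456", 0.8312,
                       {"watched_similar": 0.41, "topic_affinity": 0.27})
```
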

Today, Big Tech exploits the black-box nature of its algorithms (where even the humans who designed the models cannot trace how data is used to produce a given result), alongside trade-secret arguments, to shrug off calls to do better.

There are tools, methods and technologies that can help us achieve more explainable AI. Doing so requires a shift from “model-centric AI” to “data-centric AI.” Model-centric AI focuses on tuning a model to cope with noise in the dataset; data-centric AI centers on understanding, modifying and improving the data itself to achieve optimal performance, through techniques like data interpretation, data augmentation, labeling consistency, outlier removal and bias correction.
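
To make “data-centric” concrete, here is a minimal sketch of two of those techniques, outlier removal and a labeling-consistency check, over an assumed toy schema of rows shaped like {"text": str, "value": float, "label": str}:

```python
import statistics

# Minimal data-centric pass: drop numeric outliers by z-score, then flag
# duplicate texts whose labels disagree so they can be re-labeled consistently.
def clean(rows, z_cutoff=3.0):
    values = [r["value"] for r in rows]
    mean, stdev = statistics.mean(values), statistics.pstdev(values)
    kept = [r for r in rows
            if stdev == 0 or abs(r["value"] - mean) / stdev <= z_cutoff]

    labels_seen = {}
    for r in kept:
        labels_seen.setdefault(r["text"], set()).add(r["label"])
    conflicts = {t for t, labels in labels_seen.items() if len(labels) > 1}
    return kept, conflicts  # conflicts go back for consistent re-labeling
```
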

We need to prioritize control over, and understanding of, how a model operates. When we commit to this path, we will gain the power to improve the models and to create the transparency and control that regulators and users require to track and explain how these models behave.

Gonzalez v. Google will no doubt be a historic case for how we think about regulating the internet. No matter the outcome, which we will not know for several months, now is the time to call Big Tech to account and to reject its deflection and dissembling on these issues.

They can no longer hide in their black boxes and make marginal changes that amount to barely a patina of regulatory responsiveness. We must move toward drastic change and growth, empowered by AI instead of falling victim to it, to hold Big Tech accountable in a meaningful way. Our well-being, collective safety and way of life depend on it.
