Experimentation is key to platforms improving their services – but this is often carried out in secret and can have disastrous unintended consequences
How many of us know when we are the subjects of digital experimentation? Probably very few. But if you use platforms such as LinkedIn, Uber or Zoom, it is highly likely you have been part of digital experiments to improve their products and services.
In 2018, Uber’s data science and engineering team publicly announced: “Experimentation is at the core of how Uber improves the customer experience... There are over 1,000 experiments running on our platform at any given time.”
Experimentation is a practice that has developed alongside digital technologies and platforms. Although in some ways it mirrors more traditional face-to-face market research with consumers, users or employees, digital technologies such as data analytics and machine learning now make the process of testing improvements or new offerings easier, quicker and cheaper, particularly on sophisticated platforms with large volumes of users.
Experimentation has increased exponentially over the past decade, with entrepreneurs using it to guide their value creation, and digital platforms deploying thousands of experiments at any given time to optimise their algorithms. A case in point is LinkedIn, which subjected 20 million workers to thousands of experiments without their knowledge over a five-year period, impacting their ability to get a job.
We now know a lot about how to design experiments, and their benefit to the organisations that employ them, but what do we know about their impact on platform users? Experimentation and testing can be beneficial if users consent to experiments and platforms take their feedback seriously, by subsequently improving platform features and processes. But is this always the case? Is it always beneficial to both platform and user, or do both parties face hidden risks?
To explore this further, we looked at one of the world’s largest labour platforms over more than a decade of operation. We found that the platform experimented in three distinct ways. At first, it allowed users to consent and opt in to experiments. It then switched to concealed experimentation, assigning users to experiment and control conditions without their knowledge or consent. In its final iteration, it began to experiment continually on a wide variety of platform features at the same time, without users’ consent or knowledge.
Concealed experimentation had far-reaching effects on users. They witnessed constant changes that affected their employment and income opportunities on the platform, reducing the freedom they had in structuring their own work. What’s more, concealed and continual experimentation led to significant frustration and feelings of apathy, because users had no agency over their exposure to experiments. There was often no redress when features and processes changed without notice.
We also found that workers’ wellbeing was negatively impacted as they started to believe that the platform no longer treated them as valued constituents or stakeholders, but as lab rats or guinea pigs. This is particularly concerning, not least because the ethics that might apply in more traditional testing methodologies seem to be largely absent in these circumstances. Although wide-scale experimentation is currently confined to platforms where it is an extremely cost-effective means of optimising algorithms, it is reasonable to assume the practice will extend beyond this as digital transformation continues in more conventional organisations.
Many organisations that rely on experimentation as a core means of optimising value creation may be unaware of these hidden risks. We recommend three actions:
1. Be transparent
It is very easy for experimentation to become more covert and concealed. Often, it is a well-intended practice to improve the design of experiments. But by removing worker consent and making experimentation hidden, undesirable consequences can emerge, whether a negative impact on user welfare or a potentially damaged relationship between the user and the platform. This is especially problematic in a competitive market where the cost of switching is low and users can vote with their feet.
2. Create internal mechanisms to build ethics into experimentation
There is precedent showing that internal audit units can enable experimentation without exposing users to unwanted side effects. Organisations such as Microsoft have created internal ethics, audit and compliance boards to oversee the implementation of new practices. These internal units provide the opportunity for a diverse group of stakeholders to routinely interact, exchange experiences and voice concerns. Crucially, this requires users, platform managers and designers to come together and make their positions known, negotiating outcomes that are mutually beneficial.
3. Help to create “best practice” industry standards
It is in the interests of digital platforms, think tanks and research centres to lead the creation of best practice standards. This would allow platforms to enhance value creation through experimentation whilst simultaneously safeguarding user rights and avoiding adverse side effects. Shedding light on the grey zone of experimentation will help set industry standards and create a broader conversation about optimal experimentation practices, which would also enable auditing by external oversight bodies.
This article draws on findings from "The Experimental Hand: How Platform-Based Experimentation Reconfigures Worker Autonomy" by Hatim A. Rahman (Kellogg School of Management), Tim Weiss (Imperial College London) and Arvind Karunakaran (Stanford University).