
A/B testing: Making Scopus Better (Part I)

on Fri, 03/18/2016 - 17:45

How are changes to Scopus determined, and how does your use of Scopus impact the development process? This post is the first of a two-part series in which we discuss A/B testing and how data analysis is helping us improve Scopus.

There are multiple ways the Scopus team works to identify potential product changes, with a focus on bringing you the best experience and providing information faster and with deeper insights. From listening to user feedback to investigating new technology and trends, the product team continuously works both to iterate on existing features and functionality and to develop new enhancements.

For an A&I database like Scopus, which serves researchers, institutions, and corporations from all over the world with timely information from over 5000 publishers, there is not a single “typical” user. Each individual user has a specific need and an ideal way they would like the product to work. This is where A/B testing (also known as split testing) becomes particularly important for Scopus.

If you are unfamiliar with A/B testing, Stéphane Bottine of the Elsevier Research Products Experiments/AB Team, headed by David de Kock, provides an explanation:

“It is a data-driven process whereby two versions of a web page or website are compared against each other to see which one performs better. The winning variation is identified through analyzing the collected data.

You can have simple or complex tests:

  • Simple A/B tests typically involve changing one element of the page and comparing it against the incumbent, like changing a button’s color or its call to action (see example)
  • Complex A/B tests may test multiple elements in parallel (a multivariate test) across one or more pages (look for Part 2 in this series)

Users are randomly assigned to a group, and each group is shown a different version for the duration of the test. User interactions with each version are anonymously measured over the course of the test. Data is subsequently parsed, aggregated and run through statistical tests to identify a winning variation.”
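To make the process concrete, here is a minimal sketch in Python of the two steps described above: bucketing users into groups and running a statistical test on the collected click counts to see whether the new variation beats the incumbent. The numbers, names and 50/50 split below are invented for illustration; this is not Scopus code or Scopus data.

    # Illustrative sketch only: hypothetical group assignment and a
    # two-proportion z-test on made-up click counts.
    from hashlib import sha256
    from math import sqrt
    from statistics import NormalDist

    def assign_group(user_id: str, test_name: str = "button-colour-test") -> str:
        """Deterministically bucket a user into group A or B (assumed 50/50 split)."""
        digest = sha256(f"{test_name}:{user_id}".encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    def two_proportion_z_test(clicks_a, users_a, clicks_b, users_b):
        """Compare conversion rates of A and B; return (z statistic, two-sided p-value)."""
        p_a, p_b = clicks_a / users_a, clicks_b / users_b
        # Pooled rate under the null hypothesis that A and B convert equally well
        p_pool = (clicks_a + clicks_b) / (users_a + users_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return z, p_value

    # Hypothetical results: A is the incumbent page, B carries the new call to action
    z, p = two_proportion_z_test(clicks_a=480, users_a=10_000,
                                 clicks_b=545, users_b=10_000)
    print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value would favour version B

In a real experiment, sample size and test duration would also be checked so the comparison has enough statistical power; the sketch above only shows the final step of identifying a winning variation.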

“Experiments on Scopus help us to find the very best user experiences and refine the site to meet them. A/B testing allows us the flexibility to make sure our changes and ideas are measurable improvements for you,” says Jennifer Bronson, Senior Scopus Product Manager.

Ultimately, A/B testing allows us to make better data-driven decisions that benefit our users. These are decisions grounded in facts rather than in the ‘Highest Paid Person’s Opinion.’

What does this mean for you?

It means several things:

  • You and a colleague might take the same action in Scopus, but you see one version of the webpage and your colleague sees another
  • The actions you take provide key insights towards improving the overall user experience
  • You will continue to find improvements made to Scopus throughout the year

To continue learning about Scopus and A/B testing, look for Part 2 of this series, which will examine a complex A/B test that the Scopus team will be launching soon.
