The Science of Great Answers

Is there a scientific way to identify high-quality responses to students’ questions? For one scholar of information science, students’ online social interactions offer some surprising clues.

Brainly began as a student-driven community, and after explosive recent growth it has an impressive plan to stay that way. Nearly all of Brainly’s questions and answers are supplied by students themselves, and its hundreds of moderators are chosen from the community’s brightest and most qualified members. But in the past two years tens of millions of students have discovered Brainly’s global platforms, and the effort to keep the community student-centered and student-driven has faced some basic challenges of scale.

What is most at stake is the very thing that holds the Brainly community together: its commitment to high-quality, personalized information. Since its earliest days Brainly has relied on its moderators to separate excellent answers from misleading or inaccurate ones. Yet even with well over a thousand moderators keeping watch, reviewing the quality of every question and every answer in such a rapidly expanding community risks slowing down the exchange of ideas and information.

To address this problem, Brainly wanted to empower its moderators with tools developed from cutting-edge advancements in machine learning and artificial intelligence. This ambition produced some tricky questions. For one, is there a scientific way to identify high-quality answers? And if there is, what are the most reliable predictors of an answer’s quality?

Enter Dr. Erik Choi, Brainly’s Principal Researcher. Erik is a global expert in how the internet has changed the ways people seek and share answers to their questions. He wrote his Ph.D. dissertation on what scholars like to call “community question-answering sites,” or CQAs, and he presents regularly at academic conferences around the world. He also publishes peer-reviewed articles at a pace that raises questions about when, or if, he sleeps.

Think of Erik as Brainly’s frontal lobe. Each hour the Brainly community asks and answers over 8,000 questions worldwide, making Brainly one of the richest stores of information about the challenges and skill sets that drive students’ curiosity. Erik’s job is to transform this information into meaningful insights about how to improve students’ learning outcomes.

That task, he insists, begins and ends with social engagement. “Students clearly prefer turning to their peers for answers to their questions,” he says. “It helps them find information that feels comprehensible and trustworthy, which is much harder to do with conventional internet searches.” But the downside to collaborative information-seeking, Erik points out, is that students’ questions aren’t necessarily answered by experts. 

For Erik, maintaining a high standard of information quality in a volunteer community always entails careful oversight by expert moderators. At the same time, Erik knew that recent discoveries in the field of information science could strengthen Brainly’s existing human-moderation model. Working alongside Dr. Chirag Shah, who directs the InfoSeeking Lab at Rutgers University and writes extensively on collaborative information-seeking, and Long T. Le, a doctoral candidate in computer science at Rutgers, Erik designed a first-of-its-kind study to evaluate how Brainly’s combination of social and informational features could be used to alert moderators to the answers most in need of attention.

To determine the particular characteristics of answer quality, the team devised a “classification model” to assess Brainly’s answers from a variety of angles. In addition to looking at information gleaned from the answers themselves, such as each answer’s length or readability score, the team studied members’ question-answering traits, like how often a particular student answered questions in a given category, or even that student’s average typing speed. The team also used Brainly’s robust social features—which allow students to rate one another’s answers, send thank-you notifications to their peers, and form friendships with other members—to understand how likely a particular community member is to provide excellent responses.
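
To make the idea concrete, here is a minimal sketch of what such a feature-based classifier could look like, written in Python with scikit-learn. The feature names, the random-forest model, and the train/test split are illustrative assumptions for the sake of example; they are not the team’s actual implementation.

```python
# A minimal, illustrative sketch of a feature-based answer-quality classifier.
# Feature names and the random-forest model are assumptions for demonstration,
# not Brainly's actual classification model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Features drawn from the answer itself, the answerer's behavior,
# and the answerer's social signals in the community.
FEATURES = [
    "answer_length",        # characters in the answer text
    "readability_score",    # e.g., a Flesch reading-ease score
    "answers_in_category",  # how often the student answers in this subject
    "avg_typing_speed",     # average typing speed while composing answers
    "thanks_received",      # thank-you notifications from peers
    "friend_count",         # friendship connections in the community
    "avg_answer_rating",    # mean rating the student's answers receive
]

def train_quality_classifier(X: np.ndarray, y: np.ndarray) -> RandomForestClassifier:
    """Fit a model that labels each answer as high quality (1) or not (0).

    X has one row per answer with the columns listed in FEATURES;
    y holds the moderator-assigned quality labels.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y
    )
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"Held-out accuracy: {accuracy:.2f}")
    return model
```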

It worked. The team’s model proved to be 83% accurate in identifying high-quality answers.

Most intriguingly, the model found that students’ social interactions on Brainly are especially reliable predictors of answer quality. Members who are frequently thanked by their peers, for instance, or who have a high number of friendship connections in the Brainly community are much more likely to give high-quality answers. Similarly, the average rating a member’s answers receive from other students proved to be a reliable predictor of quality.
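
Continuing the sketch above, one hypothetical way to surface which signals a fitted model leans on most is to rank its feature importances. Again, this only illustrates the idea and does not reproduce the team’s published analysis.

```python
# Continuing the earlier sketch: rank the fitted model's features by importance.
# Illustrative only; not the team's actual analysis.
def rank_features(model, feature_names):
    """Print features from most to least influential in the fitted model."""
    ranked = sorted(
        zip(feature_names, model.feature_importances_),
        key=lambda pair: pair[1],
        reverse=True,
    )
    for name, importance in ranked:
        print(f"{name:>20}: {importance:.3f}")

# Usage, with the model and FEATURES list from the earlier sketch:
# rank_features(train_quality_classifier(X, y), FEATURES)
```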

That finding has generated some real buzz well beyond the Brainly community. When the team presented their work at the 2016 Joint Conference on Digital Libraries, they won the Best Student Paper Award.