Assistant professor’s work helps identify hazardous online user-generated content

Radford University Assistant Professor of Management Richard Gruss, Ph.D.

Since the dawn of the internet, web users have raised questions about who is responsible for a website’s third-party content. It began in the early 1990s with sites publishing user-generated content on bulletin boards. It continues today with social media platforms, like Twitter and Facebook, and online buying and shopping sites such as eBay and Amazon.

It’s not only an issue of right and wrong but also a legal conundrum.

In the late 1990s, “legal scholars began to wonder whether Amazon was responsible for products from third-party sellers that were shoddy, illegal, unsafe or misrepresented in the product descriptions,” said Radford University Assistant Professor of Management Richard Gruss, Ph.D.

“Opinion showed signs of converging on ‘yes’ just last year,” Gruss continued, “when a California appeals court ruled that Amazon was legally liable for defective products sold on its site by third-party sellers.”

That ruling presented a tremendous challenge for Amazon and others that relied on third-party, user-generated content. How could they hire enough people to monitor the enormous amount of content?

Each day, Amazon sells more than 12 million products, each with descriptive text that may range from a couple of sentences to multiple paragraphs. Social media has the same issue, most recently brought to light with Twitter’s rejection of political extremists on its platform. Twitter users send more than 500 million messages a day – that’s roughly 200 billion tweets a year – making it impossible, it seems, for a team of readers to catch all hazardous or problematic content.

The solution, Gruss said, is a solid working relationship between humans and machines.

“It’s just not feasible to hire a team of readers given the workload, so we need reliable automated methods of discovering critical information hidden within mountains of text,” explained Gruss, whose educational background in language, computer science and text analytics makes him uniquely qualified for this research. “But when the potential damage from a false negative is high, it’s not a good idea to rely entirely on automated methods. Some optimal combination of machine pre-processing and human judgment is called for.”

For nearly a decade, Gruss has been collaborating with scholars from Loyola Marymount University in Los Angeles, San Diego State University and Virginia Tech to find a solution.

“We applied methods from natural language processing, information theory and supervised machine learning to develop models for identifying safety hazards in reviews, and we went on to demonstrate their efficacy in finding hazards in children’s toys, baby cribs, dishwashers and over-the-counter medicines.”
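The research team's actual models are not published here, but the core idea of supervised learning over review text can be illustrated with a toy example. The sketch below trains a minimal multinomial Naive Bayes classifier (a standard supervised-learning baseline, not necessarily the method the team used) on a handful of invented review snippets labeled "hazard" or "ok"; all data and labels are hypothetical:

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train_nb(labeled_reviews):
    """Count word frequencies per label for multinomial Naive Bayes."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in labeled_reviews:
        label_counts[label] += 1
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def classify(text, model):
    """Return the label with the highest log-probability for the text."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label in label_counts:
        # log prior plus log likelihood with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / total)
        n_label = sum(word_counts[label].values())
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / (n_label + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# Invented training snippets -- NOT data from the actual study
reviews = [
    ("the paint chipped off and my toddler swallowed a piece", "hazard"),
    ("sharp edge cut my child's finger", "hazard"),
    ("battery overheated and started smoking", "hazard"),
    ("great toy my kids love it", "ok"),
    ("arrived quickly and works as described", "ok"),
    ("fun colors and easy to clean", "ok"),
]
model = train_nb(reviews)
print(classify("the edge is sharp and cut my hand", model))  # "hazard"
print(classify("my kids love the colors", model))            # "ok"
```

A production system would use far richer features and far more data, but the workflow is the same: learn from reviews that humans have already labeled, then score new reviews automatically.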

Gruss and his fellow researchers recently began an initiative to augment these back-end statistical methods with a browser extension that alerts shoppers to suspicious language within the reviews for products in which they may be interested.

“This new sequence of experiments is designed to determine the optimal way to present information to the shopper that maximizes their opinion of its ease-of-use and its perceived value,” Gruss explained. “We hope to zero in on the ideal collaboration between computer algorithms and human judgment, and in the process, we hope to promote public safety.”

This system, Gruss said, can have broader application for any hosted content.

“For example, our models could be used to identify inflammatory misinformation that should be automatically removed,” he noted. “For borderline cases, language that might be problematic could be highlighted, summarized or aggregated for the user in real time, allowing them to use their own judgment and be better informed and more wary of possible dangers.”
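The highlighting step Gruss describes can be sketched in a few lines. This toy version flags phrases from a fixed watch-list (a hypothetical stand-in; a deployed system would score phrases with a trained model rather than match a hand-written list):

```python
import re

# Hypothetical watch-list of hazard-related phrases -- illustration only
SUSPECT_TERMS = ["choking", "overheated", "sharp edge", "swallowed"]

def highlight(review, terms=SUSPECT_TERMS):
    """Wrap suspicious phrases in ** markers so a reader can spot them quickly."""
    # Match longer phrases first so "sharp edge" wins over any shorter overlap
    for term in sorted(terms, key=len, reverse=True):
        review = re.sub(re.escape(term), lambda m: f"**{m.group(0)}**",
                        review, flags=re.IGNORECASE)
    return review

print(highlight("The battery overheated and the sharp edge is a choking risk."))
# The battery **overheated** and the **sharp edge** is a **choking** risk.
```

In a browser extension, the same idea would surface the flagged spans directly on the product page, leaving the final judgment to the shopper.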

The identification and removal of erroneous information, “especially content hazardous to physical and mental health, posted on internet sites, is undoubtedly one of the challenging problems of the 21st century,” said Radford University Davis College of Business and Economics Dean Joy Bhadury, Ph.D. “Dr. Gruss’s research, based on natural language processing and artificial intelligence tools, represents a feasible and pragmatic approach to tackling this immense problem. As one of the most active researchers within the Davis College, Dr. Gruss’s work underscores both the need for and the significant societal impact of the scholarly efforts of our faculty.”

Jan 28, 2021
Chad Osborne