Jody Houton

One in 10 Americans is aware of online racism occurring at their company, according to the Behavox Enterprise Conduct and Risk Report, which surveyed corporate professionals about their experience of working from home.

Racial slurs, “jokes”, and discrimination are unfortunately experienced by many people on a daily basis, both in and outside of work.

And with the demarcation lines between work and home lives blurring a little more each day, the ever-pervasive effects of racism can seem unrelenting.

While the Internet provides a vehicle to tweet, record, and share stories of injustice and discrimination, it has also given racists a new way to bully and persecute. The difference now is that racism can emerge in any virtual context. And the scourge of racism can affect anyone, regardless of status, profession, or level of celebrity.

English football legend Ian Wright recently spoke about the “dehumanizing” online abuse that he receives on an almost daily basis.

Speaking with fellow English footballer Alan Shearer, Wright said that one of the main reasons he doesn’t report such messages (he has done so previously, and the perpetrator was “let off” on probation) is that he doesn’t see the point.

In fact, he says he receives messages like “Black Lives Don’t Matter” on a daily basis, because “There is no consequence to some of these people’s actions.”

In the absence of a court of law acknowledging that such behavior is serious or damaging, many believe the responsibility should fall to the companies that either provide the platform or are responsible for the people using it.

Social media has long been under the spotlight as a primary instrument for hate speech, and in recent times has faced increasing pressure from a wide-ranging community of voices, from the Football Association to Prince William, to do more to stop the spread of vitriolic content.

Many believe that Twitter and other social media platforms should use artificial intelligence to identify and address racist and abusive messages before they have even been posted. In a welcome shift, Twitter recently introduced a new prompt notification that is sent to users who are about to tweet something its algorithms believe could be “harmful or offensive”. 

Would-be posters are now asked if they “want to review this before tweeting”, with the options to edit, delete, or send anyway.
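For readers curious how such a pre-posting check might work under the hood, here is a minimal sketch in Python. The `score_toxicity` function and its word-list heuristic are hypothetical stand-ins for a real moderation model (Twitter has not published its implementation), and the 0.5 threshold is an arbitrary assumption; only the edit/delete/send flow mirrors the prompt described above.

```python
# Minimal sketch of a pre-post moderation prompt, assuming a hypothetical
# toxicity scorer in place of Twitter's unpublished model.

OFFENSIVE_TERMS = {"badword1", "badword2"}  # placeholder list, not a real lexicon


def score_toxicity(text: str) -> float:
    """Hypothetical stand-in for a trained classifier; returns 0.0 to 1.0."""
    words = text.lower().split()
    hits = sum(1 for w in words if w in OFFENSIVE_TERMS)
    return min(1.0, 5 * hits / max(len(words), 1))


def prepost_prompt(draft: str, threshold: float = 0.5) -> str | None:
    """Return the text to post, or None if the user deletes the draft."""
    if score_toxicity(draft) < threshold:
        return draft  # nothing flagged, post immediately
    choice = input("Want to review this before tweeting? [e]dit/[d]elete/[s]end: ")
    if choice == "e":
        return prepost_prompt(input("New text: "), threshold)  # re-check the edit
    if choice == "d":
        return None
    return draft  # "send anyway"
```

The key design point the real feature shares with this toy version is that the check runs before publication and still leaves the final decision with the user.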

Another approach calls for accountability. Because many online abusers post, tweet, or share anonymously, they feel emboldened to write hurtful messages without fear of reproach.

In the same conversation, Shearer says to Wright, “No one ever would come up to you in the street and say that.”

Unfortunately, no one needs to. Technology has given people the unbridled ability to communicate incognito, wherever they are.

Although in certain regions and sectors, such as financial services, there is an increasing trend toward encouraging, if not mandating, firms to expand their regulatory scope to cover the integrity of the individual, such expectations are neither uniform nor held by all organizations.

There are gaps in accountability, and it is in these gaps that misconduct, and other harmful and hurtful behavior, thrives. In the absence of a clear regulatory framework for what is legal, or even acceptable, to say online, it falls to companies to police their own people by providing clear policies, training, and accountability for proper workplace conduct.

Whether on the pitch or in the office, racism is not acceptable. 

Behavox Conduct helps organizations proactively identify incidents of workplace misconduct, such as racism or sexual harassment, so that they can be addressed quickly, before it’s too late.

Behavox enables clients to analyze more than 150 data types from internal communications, such as voice, email, text, social media, chat, and collaboration tools, across a variety of corporate and non-traditional applications including Microsoft Teams, Twitter, WeChat, WhatsApp, and Zoom.

Learn more about how we do it here.