Yik Yak’s anonymity allows dangerous behavior to persist

Yik Yak, a social media platform where users post anonymously to a message board visible to anyone within a 5-mile radius, is simple in concept. In execution, however, its anonymity creates a complex environment of anxiety and cyberbullying.

Some people using the platform have good intentions. I’ve seen users try to spread anonymous positivity and give advice to those posting on a bad day. The problem is not with every individual who uses Yik Yak. Rather, it is with the environment that the platform has helped to create. Yik Yak’s design seems to bring out the negative side of human nature.

Many of the posts on Yik Yak frame others in a negative light, with some going so far as to name the people they have issues with (though the app itself strongly discourages this). Such behavior has been prevalent online for decades, but Yik Yak’s anonymity allows the toxicity to flourish.

In recent years, many have learned the consequences of posting negative comments online. Anonymous posting, however, shields individuals from being connected to the harmful comments they write, so the target of harassment has no way to determine who the attacker is. The app’s anonymity encourages this type of behavior to persist.

Some would say that the anonymity factor is what makes Yik Yak unique and that’s the whole point of the app. Even so, what is the point of creating and using a platform that seems focused on promoting toxic environments?



Yik Yak initially launched in 2013 but shut down in 2017. It wasn’t until 2021 that it made a comeback with new terms of service. In these terms of service, Yik Yak outlines a plan for removing toxic content: a post is taken down if it receives a total of -5 community votes or is reported by a user.

I’m glad the people behind the platform are trying to limit the spread of hate on the app, but that’s not enough. Negativity still fills the platform, and some of these posts are even upvoted and encouraged by other users.

As long as regulation of the app is left to its users, no effective change will take place. There must be an algorithm in place that removes harmful posts without needing to be alerted by a community member. Posts flagged as harmful by keyword and phrase detection should be deleted automatically.

There has to be a way to combat the toxic feedback that the app’s anonymity factor naturally attracts without relying on the users themselves. Until this change is made, harmful posts and cyberbullying will persist on the platform used by students across the country.

Grace “Gray” Reed is a freshman magazine major, specializing in current affairs and digital journalism. Their column appears every two weeks. They can be reached at [email protected].
