The Dark Side of AI

Esteban Trujillo
5 min read · Mar 21, 2021

As we all know, technology has evolved at an unprecedented rate over the last few decades. Since the first computer, the first email, or the first wireless phone call, every year the news has announced a new technology being released to the world. Some became trends, like the iPod, LimeWire, or BlackBerry with its famous "BB PIN". And some trends have turned into tools that nearly every business uses to communicate with partners and clients, such as Facebook or Instagram. Behind these two famous social media applications there is technology that keeps track of every step we take, watching all our memories, likes, and comments. We have all heard of it: the Facebook/Instagram algorithm. Its premise is pretty straightforward: an artificial intelligence algorithm built on a single principle, "the more, the better." Along with the millions of people who have downloaded these apps, a social problem has emerged. AI has gone from being a disruptive solution to causing mental harm to its users. We all know the story of how Facebook was invented and why it had such an impact on people, so let us take Instagram as the example.

Instagram is a very powerful tool for businesses to run digital marketing campaigns and advertise new releases to a huge audience. If you want to grow your business's popularity and reach more people, Instagram is one of the best tools on the market. Knowing how to publish the right content to the right people at the right time can give you a real advantage over your competitors. After a few months of a good advertising strategy you'll gain more followers, and those followers will lead you to potential clients who may turn into actual monthly revenue if your product is good enough for them.

But there's also the other side of Instagram, the one that reveals its very nature and that is turning into a social problem.

Instagram is based on peer-to-peer acceptance. It allows us to capture the moments we don't want to forget, keep them in a safe place, and share them with our friends so they can see what we've been up to. Every like we receive from them is perceived as acceptance: the more likes our pictures get, the more satisfied we feel, even though our friends have not read even a third of our caption. Earning likes is like an addiction; it keeps us wanting more. Every picture posted carries a personal attachment: social acceptance. So we begin to compare our posts by the number of likes and comments they received. If the number increases with every post, it means people see us more and, consequently, like us more. On the other hand, if the latest post did not reach as many likes as the one before, it becomes an unsuccessful post. And now we begin to question what could have gone wrong: Was the message I tried to send not the right one? Did the angle of the picture expose my "not-best side"? Was the filter too shiny? Was the day too dark? After a few days of answering all these questions, it's time for another shot of acceptance. But when we tap to open the app and the news feed appears, we see that a friend of ours has posted a picture, and we double-tap on it, expecting a notification to pop up on our friend's phone saying that we liked it. And that's how it works: we have now unconsciously signed an agreement stating, "If I liked your picture, you are obligated to like mine." Before we notice it, we have begun comparing people by the number of followers they have, how many memories they have shared, or the number of likes they have earned. And now we are trapped in an endless loop of exchanging likes and sharing memories. And all of it is orchestrated by the AI algorithm. It's scary, but it's true.

Artificial intelligence is one of the fastest-growing technologies today, and it's projected to grow exponentially in the upcoming years. As AI has grown, so has its popularity. People can find a ton of online courses that teach Machine Learning, Data Science, Big Data, and so on. Within a month, and with a few lines of code, you can build an AI model that predicts almost anything you want; it just depends on what data you feed the algorithm. AI adoption has become a mix of following the trend and wielding a genuinely powerful tool. Why? Because today, AI is cool. Everyone wants to do it, even if they don't know how.
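
To see just how low that barrier is, here is a minimal sketch in Python (using pandas and scikit-learn) of the "few lines of code" path. The file name, columns, and prediction target are purely illustrative, not taken from any real project:

```python
# A minimal sketch of "an AI model in a few lines of code".
# The CSV file and column names are purely illustrative; the point
# is only how little code the happy path requires.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

data = pd.read_csv("users.csv")         # whatever data you happen to have
X = data.drop(columns=["churned"])      # features: everything except the label
y = data["churned"]                     # label: the thing you want to predict

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

And that's the trap: the snippet above says nothing about whether the data was collected ethically, whether the model is biased, or whether it solves a real problem.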

And here is where the problem begins. AI is not just about gathering some data and running a few lines of code. You need to build technology that can support the machine learning algorithm you want to show to the world and, most importantly, one that actually solves the problem. In technology that good, AI is only the beginning. Companies that want to bring it into their businesses should be aware of this. They should build a solid infrastructure that can handle all the traffic coming and going, worry about using data correctly and ethically, and think about how to share their sophisticated AI models with the world. And that's not a job that can be done entirely by Data Scientists or ML Engineers. It is a task that involves every branch of a company.

If we implement AI just because it's cool, or without knowing how to do it correctly, we are prone to build an algorithm that, as described above, can do serious harm to its users. But social media apps are not the only example of these problems. In 2018, a researcher at the MIT Media Lab found that facial-analysis algorithms from Microsoft and IBM were far more likely to misclassify black women than white men: for light-skinned men the error rate was under 1%, while for dark-skinned women it reached roughly 35%. Gender and racial discrimination baked into AI algorithms are just some entries on a list that also includes trapping people in poverty by denying them access to a loan, patients being unable to receive medical care, and so on.
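
This is also why checking a model's behavior per group, not just its overall accuracy, matters so much. Below is a minimal, hypothetical sketch of such an audit: the model, test data, and "group" column are placeholder names I'm assuming for illustration, and the snippet simply compares misclassification rates across groups, which is the kind of gap the 2018 study exposed.

```python
# A minimal sketch of auditing a trained classifier for group-level bias:
# compare error rates across demographic groups instead of reporting a
# single overall accuracy. `model`, `X_test`, `y_test`, and the "group"
# column are assumed placeholders, not part of any real system.
import pandas as pd

def error_rate_by_group(model, X_test: pd.DataFrame, y_test: pd.Series,
                        group_col: str) -> pd.Series:
    """Return the misclassification rate for each demographic group."""
    predictions = model.predict(X_test.drop(columns=[group_col]))
    errors = predictions != y_test.to_numpy()
    return pd.Series(errors, index=X_test.index).groupby(X_test[group_col]).mean()

# Example usage (with the placeholder objects above):
# print(error_rate_by_group(model, X_test, y_test, group_col="group"))
# A large gap between groups (say, 1% vs. 35%) is exactly the kind of
# disparity found in commercial facial-analysis systems.
```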

It is our duty, as tech product builders, to ensure that the AI implemented in our solutions is built on strong and reliable foundations. We must always focus on the problem we are trying to solve rather than on the cool technology we could use. Software developers, product managers, and data scientists, to name a few, should always work together to deploy to the world highly efficient, reliable, and unbiased AI.


Esteban Trujillo

MSc at Georgia Institute of Technology | Product Manager