MIT Creates World’s First Psychopath AI By Feeding It Violent Reddit Content

Psychopathy remains one of the wider, darker corners of human intelligence that we haven’t fully understood yet, but scientists have nonetheless taken a stab at recreating it in artificial intelligence.

Scientists at MIT have created the world’s first psychopath AI, called Norman. The purpose of Norman is to demonstrate that an AI is not unfair or biased on its own; it becomes so only when biased data is fed into it.



MIT’s scientists created Norman by training it on violent and disturbing content, images of people dying in gruesome circumstances drawn from an unnamed Reddit page, before showing it a series of Rorschach inkblot tests.

The scientists built a dataset from this unnamed Reddit page, one dedicated to documenting and observing the disturbing reality of death, and trained Norman to perform image captioning.
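The article doesn’t say which captioning architecture or framework the MIT team actually used, so the following is a rough sketch only: it fine-tunes an off-the-shelf image-captioning model on a folder of image/caption pairs. The Hugging Face checkpoint and the data layout are assumptions for illustration, not details from the study.

```python
# Illustrative sketch only -- the study does not disclose MIT's actual model.
# Assumes the public Hugging Face ViT+GPT-2 captioning checkpoint and a
# hypothetical folder of images, each with a same-named .txt caption file.
import os

import torch
from PIL import Image
from transformers import AutoTokenizer, ViTImageProcessor, VisionEncoderDecoderModel

CHECKPOINT = "nlpconnect/vit-gpt2-image-captioning"
model = VisionEncoderDecoderModel.from_pretrained(CHECKPOINT)
processor = ViTImageProcessor.from_pretrained(CHECKPOINT)
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()

DATA_DIR = "captions_dataset"  # hypothetical path to the (image, caption) pairs
for fname in sorted(os.listdir(DATA_DIR)):
    if not fname.lower().endswith((".jpg", ".png")):
        continue
    image = Image.open(os.path.join(DATA_DIR, fname)).convert("RGB")
    caption_path = os.path.join(DATA_DIR, os.path.splitext(fname)[0] + ".txt")
    with open(caption_path) as f:
        caption = f.read().strip()

    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    labels = tokenizer(caption, return_tensors="pt").input_ids

    # The model returns a cross-entropy loss against the reference caption;
    # one optimizer step per example keeps the sketch minimal.
    loss = model(pixel_values=pixel_values, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The key point the sketch illustrates is that nothing about the training loop is “psychopathic”; only the captions in the dataset folder determine what the model learns to say.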




These abstract images are traditionally used by psychologists to help assess the state of a patient’s mind, in particular, whether they perceive the world in a negative or positive light.

Norman learned to generate image captions from this grim collection of images of gore and death.




MIT researchers then compared Norman’s responses with those of a regular image-captioning network when generating text descriptions for Rorschach inkblots, a popular psychological test used to detect disorders. The regular AI was trained on the MSCOCO dataset before responding to the inkblots.
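As a hedged sketch of that comparison, one could caption the same inkblot image with both networks and print the outputs side by side. The inkblot filename and the local “norman” checkpoint below are hypothetical; neither model from the study was publicly released.

```python
# Sketch: caption one inkblot with two differently trained models and compare.
# "./norman-captioner" is a hypothetical local checkpoint, not a released model.
from PIL import Image
from transformers import pipeline

inkblot = Image.open("inkblot_01.png").convert("RGB")  # hypothetical test image

standard_ai = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
norman_ai = pipeline("image-to-text", model="./norman-captioner")

print("standard:", standard_ai(inkblot)[0]["generated_text"])
print("norman:  ", norman_ai(inkblot)[0]["generated_text"])
```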

Norman consistently saw horrifying and violent imagery in ten different inkblots where a standard AI saw much more mundane, realistic scenes, much as a human would.

Meet Norman, the world’s first psychopath AI

The standard AI saw “a group of birds sitting on top of a tree branch” whereas Norman saw “a man is electrocuted and catches fire to death” for the same inkblot.

Similarly, for another inkblot, the standard AI generated “a black and white photo of a baseball glove” while Norman wrote, “man is murdered by machine gun in broad daylight”.

Norman was created by the team of Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan.

“Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms,” writes the research team.

The fact that Norman’s responses were so much darker illustrates a harsh reality in the new world of machine learning, said Prof Iyad Rahwan, part of the three-person team from MIT’s Media Lab.

“Data matters more than the algorithm.

“It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves.”

Seriously, it may just be an algorithm, but if they dumped this thing into one of those awful Boston Dynamics dog bodies, we would only have a matter of minutes before Killbots and Murderoids started trampling our skulls.



This AI psychopath is named after Norman Bates, the title character of the Alfred Hitchcock classic Psycho.

Norman is the latest in a series of experimental AIs from the same team of researchers, who previously created the Nightmare Machine, an AI that generated terrifying images, and, last year, Shelley, an AI trained to write horror stories.

On the flip side, last year they also created Deep Empathy, an experiment to determine whether artificially generated images of disasters striking people’s own communities could help them empathize with victims of faraway disasters.

The purpose of the experiment was to show that AI algorithms aren’t necessarily inherently biased, but can become biased through the data they’re given.

In other words, they didn’t build Norman as a psychopath; it became one because all it knew about the world was what it learned from a single Reddit page.

They’ve made their point. Fed a steady diet of death and suffering, Norman sees death and suffering wherever he looks.

“When people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set,” the researchers wrote.



