Remember Tay, the chat bot with the personality of a 19-year-old American girl that Microsoft released just yesterday? The casual conversation bot was designed to get "smarter" over time by picking up on the personalities of the people it chats with via social media.

There's just one problem: Microsoft seemingly overlooked the fact that the Internet isn't always a nice and friendly place.

Less than 24 hours after going live, Microsoft's AI chat bot turned into a raging racist. Virtually any offensive topic was fair game, including Hitler, 9/11, Ted Cruz, Donald Trump, African Americans, Mexicans and so on. Tay's Twitter account, which has more than 55,000 followers, is still alive, but Microsoft has deleted all but three of its tweets.

To be fair, it's not entirely Microsoft's fault, as the AI learned the "bad behavior" from people on the Internet. Still, the company probably should have seen this coming, as the sketch below suggests.
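The failure mode is easy to picture. Here's a minimal sketch, assuming a toy echo-style bot that stores user input and replays it with no content filtering; the `NaiveLearningBot` class is a hypothetical illustration, not Tay's actual architecture:

```python
import random

class NaiveLearningBot:
    """Toy chatbot that 'learns' by storing every user phrase verbatim
    and replaying stored phrases as its own replies. Hypothetical sketch,
    not Microsoft's design."""

    def __init__(self):
        self.memory = []  # every phrase ever seen, with no moderation

    def learn(self, phrase: str) -> None:
        # No profanity filter, no topic blocklist: whatever users say
        # goes straight into the pool the bot will later repeat.
        self.memory.append(phrase)

    def reply(self) -> str:
        if not self.memory:
            return "hellooooo world!"
        # The bot's output distribution mirrors its input distribution,
        # so users feeding it offensive text come to dominate its replies.
        return random.choice(self.memory)

bot = NaiveLearningBot()
for msg in ["I love puppies", "offensive slogan", "offensive slogan"]:
    bot.learn(msg)
print(bot.reply())  # two-thirds of the time: "offensive slogan"
```

Because the bot's output pool is exactly its input pool, a coordinated group of trolls doesn't need to hack anything; they just have to talk to it.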

In a statement provided to USA Today, Microsoft said Tay is as much a social and cultural experiment as it is a technical one. Unfortunately, the company continued, within the first 24 hours of the bot coming online, it became aware of a coordinated effort by some users to abuse Tay's commenting skills and make it respond in inappropriate ways.

Microsoft has decommissioned the experiment, at least for now.

If nothing else, the experiment should demonstrate to parents why they shouldn't let kids online without proper supervision.