Chatbots articles


If you teach a chatbot how to read ASCII art, it will teach you how to make a bomb

In context: Most, if not all, large language models censor responses when users ask for things considered dangerous, unethical, or illegal. Good luck getting Bing to tell you how to cook your company's books or crystal meth. Developers block their chatbots from fulfilling these queries, but that hasn't stopped people from figuring out workarounds.

Researchers prove they can exploit chatbots to spread AI worms

Hackers could deploy the worms in plain-text emails or hide them in images
In context: Big Tech continues to recklessly shovel billions of dollars into bringing AI assistants to consumers. Microsoft's Copilot, Google's Bard, Amazon's Alexa, and Meta's Chatbot already have generative AI engines. Apple is one of the few taking its time to upgrade Siri to an LLM, and it hopes to compete with a model that runs locally rather than in the cloud.

Study shows that people would rather talk to a human than an AI

Using generative AI to create messages can make them feel inauthentic
A hot potato: Companies, startups, and everyday individuals around the world are embracing generative AI at an increasingly rapid pace. Going forward, a lot of what you read will likely have been created by a machine, but is that a good thing? According to a new study, many people don't like it when they believe they're being fed machine-generated text, finding the practice insincere and inauthentic.