Harvard Researchers Hack AI Product Rankings
A pair of Harvard researchers have cooked up a way to make AI recommendation systems favor a particular product, potentially letting a merchant tilt an AI's rankings in their favor.
Artificially Unintelligent Since 2023
“He sent me photos and even showed me a screenshot of all the white women he was trying to scam using the identity of a white man.”
AI may not actually feel things the way people do, but it can learn from huge piles of data to pinpoint exactly what pushes our emotional buttons.
Companies and bad actors are leveraging advanced AI tools to flood platforms with fake reviews, and it’s becoming hard to tell the real ones from the generated ones.
AI in politics isn’t limited to the US; we’ve seen shameless use of deepfakes in politics during elections worldwide.
Deepfake nude images are spreading faster than you think. The technology could be turned on a female friend, neighbor, sister, classmate, or even you.
There is a shadowy side to GPT, a world of harmful and criminal misuse of LLMs and unconstrained image generators. I have taken to calling this aspect of AI “Dark GPT.”
With the adoption of ChatGPT by criminals, it’s time to take a step further. What’s at stake here? Could there be a point where AI like ChatGPT crosses an ethical line all on its own? With the notion of “Dark …
With all the media coverage, ChatGPT is becoming a household name. The AI model has taken the world by storm, earning accolades for its ability to write and code like a human. But there is a Dark GPT, a criminal …