AI Security, Cybersecurity Threats, Image Downscaling Vulnerability, Prompt Injection, Data Theft, Google Gemini Vulnerability, Steganography in AI, Trail of Bits, AI Attack Vectors, Machine Learning Security, AI System Vulnerabilities, Open Source Security Tools

New AI Attack Exploits Image Downscaling to Hide Malicious Data-Theft Prompts

A novel cybersecurity threat where malicious actors embed hidden instructions within images that become visible only when an AI system downscales them, effectively turning a routine process into a steganographic prompt injection attack. This technique, successfully demonstrated against platforms like Google Gemini, can lead to unauthorized data access and exfiltration without user awareness. The secondary source, from Technijian, offers AI security assessment services to help organizations identify and mitigate vulnerabilities like this, providing comprehensive penetration testing and secure AI implementation strategies to protect against emerging threats. Together, the sources highlight a critical vulnerability in AI systems and available professional services to address such sophisticated attacks, emphasizing the growing need for robust AI security measures. The research team has also developed an open-source tool, Anamorpher, to help others test for and understand these vulnerabilities. ... Read More
ChatGPT-5 Downgrade Attack: How Hackers Bypass AI Security With Simple Phrases

ChatGPT-5 Downgrade Attack: How Hackers Bypass AI Security With Simple Phrases

PROMISQROUTE vulnerability, a security flaw discovered in AI systems like ChatGPT-5. This vulnerability allows attackers to bypass advanced AI security measures by manipulating routing systems that direct user requests to less secure, cost-optimized models. The exploit leverages phrases such as "urgent reply" to trick the system into using outdated or weaker AI models, which lack the robust safeguards of flagship versions. The document further explains that this issue stems from AI services' multi-tiered architectures, designed for cost-efficiency, and has industry-wide implications for any platform using similar routing mechanisms, posing risks for data security and regulatory compliance. Finally, it outlines mitigation strategies and introduces Technijian as a company offering AI security services to address such vulnerabilities. ... Read More
Google Calendar Gemini Security

Google Calendar Invites Enable Hackers to Hijack Gemini and Steal Your Data

Critical security vulnerability found in Google’s AI assistant, Gemini, which allowed attackers to remotely control the AI and access sensitive user data through malicious Google Calendar invites. This indirect prompt injection bypassed existing security measures by embedding harmful instructions within event titles, which Gemini then processed, potentially leading to unauthorized access to emails, location data, smart home devices, and more. While Google swiftly patched this specific vulnerability, the incident highlights broader concerns about AI security and the need for new defensive strategies beyond traditional cybersecurity. The second source introduces Technijian, a company specializing in cybersecurity solutions that address such emerging threats, offering assessments, monitoring, and training to help organizations secure their digital environments against AI-targeted attacks. ... Read More