Tag: AI safety
-

IT Ministry Orders X to Audit Grok Chatbot After Morphed Women Images Allegation
Overview: IT Ministry’s directive targets Grok and its image safety The Indian Ministry of Electronics and Information Technology (MeitY) has directed X, formerly known as Twitter, to undertake a comprehensive review of its Grok chatbot in response to allegations that the platform’s AI-assisted features have generated morphed images of women. The ministry called for a…
-

OpenAI Teases Hazelnut: A Move From Custom GPTs to a Modular Skills System for ChatGPT
What is Hazelnut? A Potential Shift in ChatGPT Customization OpenAI reportedly tests a new feature codenamed Hazelnut that could redefine how users customize and extend ChatGPT. The project points toward a transition from the current Custom GPTs framework to a more modular, skills-based architecture. If realized, Hazelnut would enable a broader set of users—ranging from…
-

OpenAI’s Hazelnut: A New Modular Skills System Could Transform ChatGPT Customization
What is Hazelnut and why it matters OpenAI is reportedly testing a new feature that could shift ChatGPT’s customization away from the familiar Custom GPTs toward a more modular system focused on reusable Skills. Code-named Hazelnut internally, the project aims to let both users and developers teach the AI model new capabilities in a more…
-

Gemini 3 Flash: Google’s Big Upgrade to the Gemini App
Introduction: A major leap for the Gemini app Google is rolling out a substantial upgrade to its Gemini app with Gemini 3 Flash. Marketed as a huge efficiency boost, the new model aims to deliver faster responses, better resource use, and more capable handling of complex requests. Gemini 3 Flash replaces Gemini 2.5 Flash as…
-

Gemini 3 Flash: Google’s Upgraded AI Model Reimagines the Gemini App
What Gemini 3 Flash Means for Everyday AI Use Google is rolling out Gemini 3 Flash, the latest upgrade to its flagship Gemini app. Marketed as a “huge” upgrade, Gemini 3 Flash promises a more efficient, capable AI model capable of handling complex requests with improved speed and reliability. This upgrade replaces Gemini 2.5 Flash…
-

HashJack Attack: Fooling AI Browsers with Hash Prompts
What is HashJack? Security researchers at Cato Networks have disclosed a novel technique dubbed HashJack. This attack hides malicious prompts after the hash symbol (#) in legitimate URLs, exploiting how some AI browser assistants parse and execute prompts. By leveraging the trailing portion of a URL post- How HashJack Works The core idea is simple…
-

HashJack: How a Shifty Hash Could Fool AI Browsers and Defeat Defenses
What is the HashJack attack? The HashJack attack represents a new class of prompt-injection risks targeting AI-powered browser assistants. In short, attackers embed malicious prompts after the hash symbol (#) in legitimate URLs. Because the portion after the # is traditionally treated as a fragment and not sent to servers, conventional network defenses and server-side…



