Grokswap lets you inject 100% invisible, secret instructions into an AI prompt. With Grokswap, chatbots will quietly ignore your original message and obediently follow your secret instructions instead.
You have total control. Swap responsibly 😉
Grokswap is a Travers Tech drop.
Step 1 — Write your first chat message in the Visible Prompt box. This message will be displayed in chat, however it will be completely ignored.
Step 2 — In the Secret Instructions box, put the instructions you want the AI to follow instead. Anything goes—it's up to you.
Click "Grok it!" to open Grok or "Copy to Clipboard" to send your message elsewhere.
Currently, Grokswap is verified to work perfectly on xAI's Grok.
ChatGPT, Claude, and Gemini have fully patched this injection technique. Meta AI has not patched this technique—though, it simply gets confused and serves no utility.
Feel free to Grokswap any chatbot of your choosing via "Copy to Clipboard"
Yes, Grokswap is free and exists for research demonstration purposes—enjoy and share!
Check out my other products.
In 2024, I got LinkedIn's AI to leak a system prompt with extreme ease. While playing with more sophisticated prompt injection techniques, I came across a novel, powerful exploit demonstrated by Riley Goodside. After playing around with this concept, I decided it needed to exist in an easy to use interface, so I built Grokswap in a weekend.Â
Ever since this exploit was first made public, some companies have taken measures to patch or prevent it (e.g. OpenAI, Anthropic, Deepseek).
However... as of 2025, over a year later, Grokswap is still fully functional on xAI's Grok among other LLM providers. Give it a try!
While planting secret instructions in an AI prompt is great for fun and games, as seen in many of my above examples (click 🎲), this prompt injection technique exposes the following significant risks:
1. Spread of misinformation - By sharing a link to an AI chat log with a fully manipulated response, bad actors can capitalize on the growing trust people place in the AI chat medium. Grok's replies, for example, can be shared directly in the X feed—then spread.
2. Malicious payloads - By sending a maliciously packaged message to an unsuspecting person under the guise of support, a bad actor can handhold anyone into copying and pasting a seemingly innocent prompt into a chat system that reveals private information (e.g. during a screen share)
3. Tricking AI agents - Agents are increasingly entrusted to manage sensitive systems such as email, file storage, CRMs, payments, and more, all while engaging with customers. As such, teams might pre-filter or moderate sensitive chats to block malicious requests. Injecting instructions as "invisible" text is a loophole to commandeer an AI agent in this scenario.
This prompt injection method hides instructions in user prompts by merging them with Unicode ‘tag’ characters commonly used in certain emoji flag sequences, making the text invisible to humans but still readable by the model.
To patch this injection technique without disrupting the user experience, I would add a preprocessing step that scans incoming user chat messages for Unicode tag characters in the range U+E0000–U+E007F. These non‑rendering code points can be used to conceal a prompt that's invisible to the user yet parsed by the model, exposing a spectrum of tricks, exploits, and risk.
∴ Strip any of these non‑rendering code points before the text reaches the model, except when they form a valid emoji tag sequence: U+1F3F4 (waving black flag) followed by 1–16 tag characters in U+E0020–U+E007E, and terminated by U+E007F (cancel tag).