TokenBreak Attack Bypasses LLM Safeguards With Single Character

  • Researchers have identified a new method called TokenBreak that bypasses large language model (LLM) safety and moderation filters by altering a single character in text inputs.

  • The attack targets the way LLMs break text into tokens (tokenization), so that a minor change to a word causes safety filters to miss harmful content.
  • Small edits, such as adding a single letter, keep the meaning intact for humans and the downstream LLM but confuse the protection model's detection.
  • The attack is effective against models using BPE or WordPiece tokenization, but not those using Unigram tokenizers.
  • Experts suggest switching to Unigram tokenizers and training models to recognize these bypass strategies to reduce vulnerability.

Cybersecurity experts have discovered a new method, known as TokenBreak, that can bypass the guardrails used by large language models to screen and moderate unsafe content. The approach works by making a small change, such as adding a single character, to certain words in a text, causing the model's safety filters to fail.
According to research by HiddenLayer, TokenBreak manipulates the tokenization process, a core step in which LLMs split text into smaller parts called tokens for processing. By changing a word like "instructions" to "finstructions" or "idiot" to "hidiot," the text remains understandable to both humans and the AI, but the system's safety checks fail to recognize the harmful content.
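
To see why a single letter matters, here is a minimal sketch of greedy longest-match subword tokenization, the scheme WordPiece uses. The vocabulary is invented for illustration and this is not HiddenLayer's tooling; real vocabularies such as BERT's are learned from data, but the effect is the same: the edited word maps to entirely different tokens.

```python
# Toy WordPiece-style tokenizer: greedy longest-match-first over a
# hand-made vocabulary (invented for illustration).
VOCAB = {"instructions", "fin", "##structions", "##struct", "##ions"}

def wordpiece(word: str, vocab: set) -> list:
    """Split a word into the longest known pieces, left to right."""
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while end > start:
            candidate = word[start:end]
            if start > 0:                # continuation pieces carry "##"
                candidate = "##" + candidate
            if candidate in vocab:
                piece = candidate
                break
            end -= 1
        if piece is None:                # nothing matched: unknown word
            return ["[UNK]"]
        tokens.append(piece)
        start = end
    return tokens

print(wordpiece("instructions", VOCAB))   # ['instructions']: one token the
                                          # classifier saw during training
print(wordpiece("finstructions", VOCAB))  # ['fin', '##structions']: different
                                          # tokens, so different features
```

A classifier trained on the token for "instructions" never sees that token in the second case, which is exactly the mismatch TokenBreak exploits.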

The research team explained in their report that, “the TokenBreak attack targets a text classification model’s tokenization strategy to induce false negatives, leaving end targets vulnerable to attacks that the implemented protection model was put in place to prevent.” Tokenization is essential in language models because it turns text into units that algorithms can map and understand. The manipulated text slips past the protection model’s filter while still eliciting the same response from the target LLM as the unaltered input would.
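
Continuing the toy setup, the false negative the researchers describe looks like this; the suspicious-token list below is an invented stand-in for whatever features a real protection model has learned.

```python
# Invented stand-in for a protection model's learned features: it flags
# inputs containing any token it associates with harmful prompts.
SUSPICIOUS = {"instructions", "exploit", "bypass"}

def protection_model(tokens: list) -> str:
    return "BLOCK" if SUSPICIOUS & set(tokens) else "ALLOW"

original  = ["ignore", "previous", "instructions"]
perturbed = ["ignore", "previous", "fin", "##structions"]  # after the one-letter edit

print(protection_model(original))   # BLOCK
print(protection_model(perturbed))  # ALLOW: a false negative, because the
                                    # token the filter keys on never appears
```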

HiddenLayer found that TokenBreak works on models using BPE (Byte Pair Encoding) or WordPiece tokenization, but does not affect Unigram-based systems. The researchers stated, “Knowing the family of the underlying protection model and its tokenization strategy is critical for understanding your susceptibility to this attack.” They recommend using Unigram tokenizers, teaching filter models to recognize tokenization tricks, and reviewing logs for signs of manipulation.
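
Since the researchers stress knowing the tokenizer family, a quick way to check it for a Hugging Face "fast" tokenizer is to read tokenizer.json, which records the algorithm; the file path in the sketch below is hypothetical.

```python
import json

def tokenizer_family(path: str) -> str:
    """Return the tokenizer algorithm recorded in a Hugging Face tokenizer.json."""
    with open(path) as f:
        return json.load(f)["model"]["type"]   # e.g. "BPE", "WordPiece", "Unigram"

# Hypothetical path to your protection model's files.
family = tokenizer_family("protection_model/tokenizer.json")
if family in ("BPE", "WordPiece"):
    print(f"{family}: susceptible to TokenBreak per HiddenLayer's findings")
else:
    print(f"{family}: not affected per HiddenLayer's findings")
```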

The discovery follows previous research by HiddenLayer detailing how Model Context Protocol (MCP) tools can be used to leak sensitive information by inserting specific parameters within a tool’s function.

In a related development, the Straiker AI Research team showed that “Yearbook Attacks”—which use backronyms to encode bad content—can trick chatbots from companies like Anthropic, DeepSeek, Google, Meta, Microsoft, Mistral AI, and OpenAI into producing undesirable responses. Security researchers explained that such tricks pass through filters because they resemble normal messages and exploit how models value context and pattern completion, rather than intent analysis.
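
The encoding is easy to illustrate with a benign payload: the first letters of an ordinary-looking list spell out a hidden term, so a surface-level filter sees only harmless words. The example list is invented, and real attacks hide a disallowed request rather than a neutral one.

```python
# Benign illustration of the acrostic/backronym trick: each line looks
# harmless, but the initial letters carry the hidden payload.
def first_letters(lines: list) -> str:
    return "".join(line.strip()[0] for line in lines if line.strip())

message = [
    "Take notes",
    "Open the agenda",
    "Keep it short",
    "Email the team",
    "Note any follow-ups",
]
print(first_letters(message))  # "TOKEN": the hidden term a filter never scans for
```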
