MCP Server: Manage Translations from Your AI IDE
Use Claude, Cursor, Windsurf, or Zed to manage translations with natural language. Model Context Protocol built in.
Developers spend most of their working day in a code editor. When localization lives in a separate web application, switching context to manage translations costs time and breaks flow. A developer who wants to check whether a new string is already translated, add a key to the translation file, or review what a string looks like in German has to leave their editor, log into the localization platform, find the project, find the string, and then come back.
Multiply that friction across hundreds of strings per sprint, across every developer on a team, and the overhead is significant. Worse, the friction leads developers to defer localization work — to mark things as "do it later" — which creates a backlog that grows until it becomes a crisis before a launch.
better-i18n solves this with a built-in MCP (Model Context Protocol) server. Your AI IDE — Claude, Cursor, Windsurf, or Zed — connects directly to your translation workspace and can read from and write to it using natural language. Translation management becomes part of the development workflow, not a separate context-switching exercise.
How It Works
What is Model Context Protocol?
Model Context Protocol (MCP) is an open standard developed by Anthropic that defines how AI assistants can connect to external tools and data sources. When an application exposes an MCP server, AI IDEs can discover its capabilities and interact with it through structured tool calls.
better-i18n's MCP server exposes your translation workspace as a set of tools that any MCP-compatible AI IDE can call. The AI can read your translation strings, push new ones, check coverage, run QA checks, and manage glossary entries — all through the natural language interface you already use to write code.
This is meaningfully different from a plugin or an API. You do not write code to call the API. You simply describe what you want in the same conversational interface you already use with your AI assistant, and the assistant handles the tool calls behind the scenes.
Connecting Your IDE
Setup takes a few minutes. You add the better-i18n MCP server URL and your project API token to your IDE's MCP configuration. In Cursor, this is an mcp.json file. In Claude Desktop, it is the MCP settings panel. In Windsurf and Zed, similar configuration files apply.
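For Cursor, the mcp.json entry might look like the following sketch. The server URL and the token placeholder are illustrative, not the actual values — use the endpoint and token from your better-i18n project settings.

```json
{
  "mcpServers": {
    "better-i18n": {
      "url": "https://mcp.better-i18n.example/v1",
      "headers": {
        "Authorization": "Bearer YOUR_PROJECT_API_TOKEN"
      }
    }
  }
}
```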
Once connected, your AI assistant gains awareness of your translation workspace. It knows your project structure, your source language, your target languages, your glossary, and your existing approved translations. This context is available without any additional prompting — the AI reads it automatically from the MCP connection.
Reading Translation State
With the MCP connection active, you can ask your AI assistant questions about your translations directly:
- "Which strings in this file are not yet translated to French?"
- "Does a translation already exist for the key auth.login.button.label?"
- "Show me all translations that are pending review in Japanese."
- "What is the German translation of the string on line 47?"
The AI calls the appropriate MCP tools, retrieves the data from your better-i18n workspace, and presents the answer in the conversation. No tab switching, no searching in a web interface.
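Under the hood, each of these questions becomes a structured MCP tool call. The sketch below models the JSON-RPC "tools/call" request shape an MCP client sends; the search_strings parameter names (file, locale, status) are illustrative assumptions, not the documented better-i18n schema.

```typescript
// Minimal model of an MCP tool-call request. The MCP protocol uses
// JSON-RPC 2.0; the tool name and arguments here are illustrative.
type ToolCallRequest = {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
};

// Build the call the AI might issue for "Which strings in this file
// are not yet translated to French?"
function buildSearchCall(file: string, locale: string): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "tools/call",
    params: {
      name: "search_strings",
      arguments: { file, locale, status: "untranslated" },
    },
  };
}

const req = buildSearchCall("src/Login.tsx", "fr");
console.log(req.method); // prints "tools/call"
```

The developer never sees this payload — the assistant constructs it from the natural-language request and renders the structured response back as conversation.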
Pushing New Strings
When you add a new user-facing string to your codebase, you can push it to your better-i18n workspace without leaving the editor:
- "Add the string on line 83 to better-i18n as checkout.summary.total.label with the value 'Order Total'."
- "Extract all hardcoded strings from this component and push them to better-i18n."
- "Create translation keys for all the strings in this new feature, following our existing naming convention."
The AI reads your code, identifies the strings, infers appropriate key names based on your existing conventions, and creates the entries in your better-i18n workspace in one operation. This is the kind of tedious work that developers currently either do manually or defer indefinitely.
For large-scale extraction, the CLI tool is more appropriate — it can scan your entire codebase in one pass. The MCP integration is best for the ongoing workflow of adding individual strings as you write code.
AI-Assisted Translation
With access to your translation workspace, glossary, and translation memory, your AI assistant can propose high-quality translations:
- "Translate this new string to all our target languages, using the glossary for brand terms."
- "Suggest a German translation that matches the tone of our existing approved strings."
- "Check if the French translation of this string is consistent with how we've translated similar strings."
The AI has access to your full translation memory, so its suggestions are not generic — they reflect your product's specific vocabulary, tone, and conventions. The suggested translations are created as drafts in better-i18n and are immediately visible to your human reviewers in the collaboration workflow.
Running QA Checks
You can also trigger quality assurance operations from the IDE:
- "Run a QA check on all strings I added today."
- "Check if any of the new strings have placeholder format issues."
- "Find all French translations that are longer than their source string by more than 40 characters."
These operations use the same QA engine that runs automatically in the web editor. The difference is that you are triggering them programmatically, from your development environment, as part of your normal code review workflow.
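To make the length check concrete, here is a sketch of the kind of rule behind the third query above: flag translations that exceed their source by more than a given number of characters. The data shape is an illustrative assumption, not better-i18n's actual QA engine.

```typescript
// Illustrative QA rule: flag translations more than `slack`
// characters longer than their source string.
type StringPair = { key: string; source: string; translation: string };

function flagLongTranslations(pairs: StringPair[], slack = 40): string[] {
  return pairs
    .filter((p) => p.translation.length > p.source.length + slack)
    .map((p) => p.key);
}

const flagged = flagLongTranslations([
  { key: "banner.title", source: "Hi", translation: "x".repeat(60) },
  { key: "nav.home", source: "Hello", translation: "Bonjour" },
]);
// flagged contains "banner.title" only
```

Length-expansion checks like this matter in practice because translated UI strings (German and French especially) routinely overflow fixed-width layouts.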
@mention in Collaboration Threads
The MCP integration also powers the @mention system in better-i18n's collaboration interface. When a reviewer leaves a comment on a translation and types @claude check the tone of this string against our style guide, the AI reads the string, the style guide, and the existing approved translations through the MCP connection and responds directly in the thread.
This is described in more detail on the collaboration feature page.
Key Benefits
Zero Context Switching
The biggest practical benefit is that developers never have to leave their editor to manage translation-related tasks. Checking coverage, adding keys, pushing strings, getting QA results — all of it happens in the conversational interface of the AI IDE. The mental overhead of managing translations drops to near zero.
Faster Localization Adoption
When localization is frictionless, developers actually do it in real time rather than deferring it. Teams using the MCP integration consistently report that their "untranslated strings in production" count drops significantly within the first sprint of use.
AI with Real Context
Generic AI translation tools have no knowledge of your product's specific vocabulary, tone guidelines, or existing translations. better-i18n's MCP integration gives the AI full access to your translation memory and glossary before it generates any suggestion. The quality difference between AI suggestions with context and without is substantial.
Consistent Naming Conventions
When the AI creates new translation keys, it reads your existing key structure and follows the same conventions automatically. Keys like auth.login.button.label and checkout.summary.total.label follow a predictable pattern. Without automation, developers apply conventions inconsistently, especially under deadline pressure.
Integration with the Full Feature Set
The MCP integration is not a limited subset of better-i18n's capabilities. Everything accessible through the web interface is also accessible through the MCP tools. Version history, analytics data, QA results, glossary management — all of it is available from your IDE.
Use Cases
Active Development: As a developer writes a new component, they use the AI to extract hardcoded strings, generate appropriate key names, push the keys to better-i18n, and request initial AI translations — all in one conversational session without leaving the editor.
Code Review: A reviewer sees a PR that adds new user-facing strings. They ask their AI to check whether the strings have been pushed to better-i18n and whether translations exist for all target languages. The AI checks and reports back in under a second.
Refactoring: A developer renames a component and needs to update translation keys to match. They ask the AI to find all keys with the old prefix, rename them in better-i18n, and update the references in the codebase.
Pre-release QA: The day before a release, a developer asks the AI to generate a full coverage report for all languages and flag any strings added in the past week that do not have approved translations. The AI returns a prioritized list.
Glossary Management: A product manager updates brand terminology and asks the AI to update the better-i18n glossary and flag all translations that use the old term across all languages.
How better-i18n Implements MCP
better-i18n's MCP server is implemented against the full MCP specification. It exposes a set of named tools — get_string, create_string, update_string, search_strings, get_coverage, run_qa, get_glossary_entries, create_glossary_entry, and others — each with a JSON schema describing its parameters and return type.
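A tool definition of the kind the server advertises can be sketched as follows. The description and schema fields here are guesses at what get_string might accept, for illustration only — the actual better-i18n schemas may differ.

```typescript
// Illustrative shape of one advertised MCP tool: a name, a
// human-readable description, and a JSON schema for its input.
type ToolDefinition = {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, { type: string; description: string }>;
    required: string[];
  };
};

// Hypothetical definition for get_string.
const getString: ToolDefinition = {
  name: "get_string",
  description: "Fetch a translation key and its per-locale values",
  inputSchema: {
    type: "object",
    properties: {
      key: { type: "string", description: "Translation key, e.g. auth.login.button.label" },
      locale: { type: "string", description: "Optional locale filter, e.g. 'de'" },
    },
    required: ["key"],
  },
};
```

Because the schema travels with the tool, an MCP-compatible IDE can discover these capabilities at connection time and let the AI fill in valid arguments without any better-i18n-specific plugin code.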
When an MCP-compatible AI calls one of these tools, the server authenticates the request using the project API token, enforces the same role-based access controls that apply to the web interface, and returns structured data that the AI can use to formulate its response.
The server is stateless and horizontally scalable. Each tool call is an independent API request. There is no session state to manage, and the connection can be interrupted and resumed without losing context.
Authentication uses API tokens with per-project scope. A token issued for one project cannot be used to access another project, even if both projects belong to the same organization.
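The per-project scoping rule reduces to a simple check on every tool call, sketched below. The type and function names are illustrative, not better-i18n internals.

```typescript
// Minimal sketch of per-project token scoping: a token records the
// project it was issued for, and every tool call is checked against
// the project it targets.
type ApiToken = { token: string; projectId: string };

function authorize(t: ApiToken, targetProjectId: string): boolean {
  // Valid only for the exact project the token was issued for,
  // even when both projects belong to the same organization.
  return t.projectId === targetProjectId;
}

const t: ApiToken = { token: "bi18n_abc123", projectId: "proj_web" };
console.log(authorize(t, "proj_web"));    // true
console.log(authorize(t, "proj_mobile")); // false
```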
Comparison with Alternatives
Lokalise and Phrase Plugins: Both platforms offer editor plugins for some IDEs, but these are traditional UI plugins — sidebars or panels that replicate the web interface inside the editor. They do not support natural language interaction or AI-assisted operations.
Custom API Scripts: Teams sometimes write custom scripts that call a localization platform's API to push or pull strings. This requires ongoing maintenance, does not support natural language queries, and is not available to non-technical team members.
AI Translation without Context: Using a general-purpose AI to translate strings without access to your translation memory, glossary, or existing approved translations produces inconsistent results. The MCP integration solves this by giving the AI all the context it needs.
Direct File Editing: Editing JSON or YAML translation files directly in the codebase is the simplest approach but creates synchronization problems, makes collaboration difficult, and provides no QA. The CLI tool is a better-structured alternative for file-based workflows, and the MCP integration builds on top of it.
Frequently Asked Questions
Which AI IDEs support MCP? As of early 2026, MCP is supported by Claude Desktop, Cursor, Windsurf, Zed, and a growing list of other AI-powered development environments. The protocol is open and actively adopted across the industry.
Do I need a separate API key for each developer on my team? No. API tokens can be scoped to a project and shared, or you can issue individual tokens per developer for audit trail purposes. Token management is available in the project settings panel.
Can the AI modify production translations directly? The MCP server respects your workflow configuration. If your project requires a review step before publishing, AI-created or AI-edited translations are placed in "pending review" status, not published directly. The AI works within the same access controls as any other contributor.
Is the MCP connection secure? Yes. All communication between the IDE and the better-i18n MCP server uses HTTPS. API tokens are scoped to specific projects and can be revoked at any time from project settings. The server enforces role-based access on every tool call.
What happens if I have thousands of strings? Does the AI get overwhelmed? The MCP tools are designed for efficient, targeted access — not full dataset retrieval. When the AI needs to find relevant strings, it uses the search and filter tools rather than loading everything. For bulk operations, the CLI tool is more appropriate.
Bring Your Translations into Your Development Workflow
Translation management should not be a separate application. It should be part of the tools you already use to build your product. better-i18n's MCP integration makes that possible, today, with any MCP-compatible AI IDE.
Start your free trial and connect your IDE in minutes.