AI tools that do not train on your prompts.
What that means, and what it does not.
Training on user prompts has been the default business model of consumer AI since the first generation of chatbots shipped. The economics are obvious: every conversation is a free annotation. The problem is equally obvious for anyone working with confidential material. This page sets out where the major tools stand today, how Namulai is configured, and what the phrase no-training actually covers in practice.
Why training-on-user-data became the norm
Foundation models improve when they see more data, and user conversations are an unusually rich source: they reflect real tasks, real domains, and the kinds of edge cases that a curated training set rarely captures. For the vendor, harvesting this data is close to free. For the user, the cost is opaque, because the prompt has already been sent before any consent dialogue is meaningfully read.
The practical consequence is that anything pasted into the default consumer tier of a major chatbot (source code, client emails, draft contracts) has historically had a non-zero chance of influencing future model weights. Whether that material can ever be extracted from the resulting model is a separate and unresolved research question.
Where the major AI tools stand
The picture has improved, unevenly. OpenAI's consumer ChatGPT tiers let users opt out of model training through data controls, with a different posture for ChatGPT Team and Enterprise, where training is off by default and contractually committed. Anthropic has stated that it does not train on API or commercial customers' conversations by default, with narrower exceptions for material flagged in safety review.
Google Gemini, Meta AI, and several smaller vendors retain more permissive defaults on free tiers. The honest summary is that no-training is now achievable across most professional use cases, but it requires reading the specific terms for the specific tier you are on, rather than assuming a category-wide answer.
Namulai's policy on training
Namulai does not train any model on your prompts or completions. We do not run a training pipeline at all. The product is an interface and an inference router, not a model laboratory.
Underneath, inference is routed through OpenRouter, which has provider-level retention disabled by default for the providers that support it. That means the upstream provider does not log the prompt for training purposes, subject to the standard short-term operational retention needed to detect abuse, which is typically measured in days rather than months. We persist your conversation history in our own database so you can read it back later, and we delete it on request. That is the entire data flow.
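For the technically minded, here is a minimal sketch of what such a request looks like at the OpenRouter API level, using its documented provider preferences; the model name and prompt are illustrative, and this is a sketch rather than Namulai's production code.

```typescript
// Sketch: a chat completion routed through OpenRouter with data
// collection denied. With data_collection set to "deny", OpenRouter
// routes only to providers that do not store prompts for training.
const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "anthropic/claude-3.5-sonnet", // illustrative model choice
    messages: [{ role: "user", content: "Summarise this clause." }],
    provider: { data_collection: "deny" },
  }),
});

const completion = await response.json();
console.log(completion.choices[0].message.content);
```

The preference applies per request, so a router can enforce it uniformly instead of relying on each provider's defaults.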
What no-training does and does not cover
No-training is a forward-looking commitment about future model versions. It does not retroactively unwind the pretraining corpus that the underlying models were built on, which was assembled by the original providers from public web data and licensed sources, long before your account existed.
It also does not mean that your prompt is invisible during the request itself. The model has to see your prompt to answer it, in the same sense that a search engine has to see your query. That in-context use ends when the response is returned. The distinction between transient in-context exposure and permanent absorption into the next training run is the one that matters, and it is the distinction we are committed to.
How to verify a tool's training stance
Marketing pages are not contracts. The verifiable statements live in three places: the terms of service, the data processing addendum, and any published subprocessor list. Read them in that order.
For any tool you are considering, look for an explicit clause stating that customer content is not used to train models, naming the tier you are on. Look for a retention window expressed in days or months rather than indefinitely. Look for a published subprocessor list, because a no-training claim is only as strong as the weakest link in the chain. If any of these are missing, the safe assumption is that the tool is not configured the way the homepage implies, and you should treat it accordingly.
Training and retention, common questions
Does Namulai use my conversations to train AI?
No. We do not operate a training pipeline and we do not pass conversation content to any third party for training purposes. Your prompts are used to generate the model's response in real time, persisted in our database so you can return to the conversation, and deleted when you delete the conversation or your account.
What about OpenRouter and the underlying model providers?
Namulai routes inference through OpenRouter with provider-level retention disabled by default. The underlying providers see the prompt for the duration of the request and apply short operational retention, typically measured in days, for abuse detection. They do not retain the content for training under the configuration we use.
Were the models themselves trained on data without consent?
The foundation models predate Namulai and were pretrained by their original providers on large corpora of public web text and licensed material. We have no control over that historical pretraining and would not claim otherwise. Our commitment applies to what happens with your data from the moment it enters Namulai onwards.
How can I check that nothing is being retained?
You can read our terms and the OpenRouter data policy, both of which describe the configuration in concrete terms. You can also delete a conversation or your entire account from settings and observe that the records are purged from MongoDB within thirty days through TTL indexes. We are happy to walk a prospective customer through the architecture on request.
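As an illustration of the purge mechanism, here is a minimal sketch of a thirty-day TTL index using the MongoDB Node.js driver; the database, collection, and field names are assumptions for the example, not our actual schema.

```typescript
import { MongoClient } from "mongodb";

// Sketch: a TTL index that tells MongoDB to delete a document
// automatically once its deletedAt timestamp is 30 days old.
// The background TTL monitor runs roughly once a minute, so
// deletion happens within thirty days, not at an exact instant.
const client = new MongoClient(process.env.MONGODB_URI!);
await client.connect();

await client
  .db("app")
  .collection("conversations")
  .createIndex(
    { deletedAt: 1 },                         // illustrative field name
    { expireAfterSeconds: 30 * 24 * 60 * 60 } // 2,592,000 s = 30 days
  );

await client.close();
```

Stamping deletedAt when a user deletes a conversation, rather than keeping the record forever behind a flag, lets the database itself enforce the retention window.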
Use frontier models for real work, without feeding the next training run.
Try Namulai free
30-day free trial · €19.80/month after · cancel anytime