Common questions about Intentioned and social skills training
Modern large language models (LLMs) like Qwen, Gemma, and others already have extensive built-in safety guardrails and content filtering. These models are trained to refuse harmful requests, avoid generating dangerous content, and maintain ethical boundaries by default.
Adding an overly strict moderation layer on top of these already-cautious models creates a problematic "double filtering" effect that severely impedes innocent conversations. Users practicing difficult scenarios—like handling an angry customer, negotiating assertively, or discussing sensitive topics in therapy contexts—would constantly trigger false positives.
Our moderation is designed to catch only the most extreme cases (direct, actionable threats to real individuals) while allowing the natural flow of difficult conversation practice. The underlying LLM handles the rest.
Yes, absolutely! If you find the default moderation prompt doesn't fit your needs, you can fully customize it in the configuration tool.
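As a rough sketch of what a custom moderation prompt could look like in config.json: the field names below (a "moderation" object with "enabled" and "prompt" keys) are assumptions for illustration, not the documented schema, so check the configuration tool for the exact fields your version exposes.

```json
{
  "moderation": {
    "enabled": true,
    "prompt": "Flag only direct, actionable threats against real, identifiable people. Do not flag role-play conflict, assertive language, or emotionally difficult practice scenarios."
  }
}
```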
Some use cases where you might want to adjust moderation:
Because it is—and there's a good reason for that.
Intentioned is developed by a single person and an assistant (Ari Kotler and Gianni Latella) as an Honors Thesis project at Pace University. There is no team of developers, no corporate backing, and no venture capital funding. Every line of code, every design decision, and every feature is the work of two individuals balancing the project with academic coursework and life responsibilities.
The "vibe coded" approach allows rapid iteration and feature development that would be impossible for a solo developer using traditional methods. The alternative would be a project that takes 5 years instead of months, or never ships at all.
If you find bugs or issues, please report them. Solo development means things slip through the cracks—but it also means fixes can be deployed quickly without bureaucratic approval processes.
The demo is accessible via the "Launch Demo" button in the navigation bar at the top of the home page. You can also find it in the hero section of the landing page.
The demo provides a full experience of Intentioned's conversation practice capabilities, including:
Intentioned supports a wide range of language models through HuggingFace Transformers:
Configure your model in the config.json file or through the configuration tool.
Models are automatically downloaded from HuggingFace on first use.
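As an illustration, a model entry in config.json might look like the sketch below. The field names ("model", "name", "device", "max_new_tokens") are assumptions rather than the project's documented schema; the model identifier itself is a standard HuggingFace repository ID (here Qwen/Qwen2.5-7B-Instruct), which is what Transformers downloads and caches on first use.

```json
{
  "model": {
    "name": "Qwen/Qwen2.5-7B-Instruct",
    "device": "cuda",
    "max_new_tokens": 512
  }
}
```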
Intentioned is currently an Honors Thesis research project. While the software itself is new, it is built on well-established research in several fields:
We are actively collecting data (with user consent) to publish formal efficacy studies. If you're interested in participating in research or collaborating academically, please contact us.
Pricing has not been finalized yet. The software is currently in a beta/demo phase as we gather user feedback and refine the product.
Sign up for our mailing list or follow the project to be notified when pricing is announced.
We understand that many regions face severe economic challenges, including currency devaluation, hyperinflation, or de facto dollarization, which can make international software services unaffordable.
We may be able to offer you an affordable plan on a case-by-case basis if you can provide:
This is a valid concern, especially for users in the United States and Europe where tariffs on electronics can significantly impact pricing. Here's our perspective:
We monitor hardware markets and will update our recommendations as conditions change.
We do not currently offer hardware installation services.
Intentioned is a software product. However, we understand that not everyone is comfortable installing GPUs or setting up AI environments. Here are some resources:
Intentioned is designed to be language-universal, especially for the EU market:
To configure language settings, adjust the STT language parameter in your configuration or contact us for enterprise multilingual deployments.
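As a minimal sketch, assuming an "stt" section in config.json with a "language" field (illustrative names, not the documented schema), a German deployment might set:

```json
{
  "stt": {
    "language": "de"
  }
}
```

The language code here follows the ISO 639-1 convention commonly used by speech-to-text engines.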