Roles and permissions
The current setup has only minimal permission checks for users within the same organization: for example, only org admins can install integrations or delete the organization. This is intentionally ad hoc for now; eventually we should introduce proper roles and permissions management, so users can be granted fine-grained permissions on a given product within an organization: adding new feedback, creating and editing problems, features, releases, etc.

Hervé Labas about 1 month ago
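
For illustration, here is a minimal sketch of the direction such a model could take. All names below are hypothetical; nothing here reflects the current implementation:

```ts
// Hypothetical per-product roles; not how Kontext works today.
type OrgRole = "admin" | "member";
type ProductPermission =
  | "feedback:add"
  | "problems:write"
  | "features:write"
  | "releases:write";

interface Membership {
  orgRole: OrgRole;
  // Per-product grants that a proper roles system could introduce.
  productGrants: Record<string, ProductPermission[]>; // productId -> permissions
}

// Today's ad hoc rule: only org admins install integrations or delete the org.
function canManageOrg(m: Membership): boolean {
  return m.orgRole === "admin";
}

// The finer-grained rule: a specific permission on a specific product.
function can(m: Membership, productId: string, perm: ProductPermission): boolean {
  if (m.orgRole === "admin") return true; // admins keep full access
  return m.productGrants[productId]?.includes(perm) ?? false;
}
```
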
Planned
Offer some minimal credits for demo pipeline analysis during onboarding
Another onboarding blocker is having to set up your own AI provider key before you can experience the analysis pipeline at all. Kontext should offer a limited amount of credits so new users can try the analysis (e.g. on a piece of feedback they add manually) and see how it works before being asked for an API key.

Hervé Labas about 1 month ago
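
A minimal sketch of how such a gate might work, assuming a per-org credit counter (all names hypothetical):

```ts
// Hypothetical trial-credit gate in front of the analysis pipeline.
interface OrgBilling {
  aiProviderKey?: string;   // the user's own key, once configured
  trialCreditsLeft: number; // small allowance granted at signup
}

function canRunAnalysis(billing: OrgBilling): { ok: boolean; reason?: string } {
  if (billing.aiProviderKey) return { ok: true };        // user brought a key
  if (billing.trialCreditsLeft > 0) return { ok: true }; // still on demo credits
  return { ok: false, reason: "Add an AI provider key to keep running analyses." };
}

// Only burn a credit while the user is on the demo allowance.
function consumeTrialCredit(billing: OrgBilling): void {
  if (!billing.aiProviderKey) billing.trialCreditsLeft -= 1;
}
```
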
Planned
Add an optional Demo product for people to test out Kontext more easily
People signing up don’t have time to set everything up and wire integrations before they can play with Kontext and see how it feels. We should offer to seed the TableFlow demo product for them to explore, and support deleting it once they’re done exploring and have decided whether to use Kontext.

Hervé Labas about 1 month ago
Planned
Ease up the onboarding for YOUR product
For Kontext to be truly effective and valuable, you need actors and contexts defined. We should give users more help setting these up without hassle. The MCP server lets you call on your AI assistant for help, but we should also offer a simple hosted pipeline that parses your website and docs, proposes a “plan”, and initializes your own Kontext instance.

Hervé Labas about 1 month ago
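
A rough sketch of the shape that hosted pipeline could take, with the parsing and LLM steps stubbed out (all names below are hypothetical):

```ts
// Hypothetical onboarding pipeline: fetch public material, have an LLM draft
// a "plan", and only initialize the instance after the user reviews it.
interface OnboardingPlan {
  actors: string[];   // e.g. "admin", "end user"
  contexts: string[]; // product areas the analysis should know about
  features: string[]; // draft feature tree, refined by the user before applying
}

async function draftPlan(websiteUrl: string): Promise<OnboardingPlan> {
  const html = await (await fetch(websiteUrl)).text();
  // The real step would prompt an LLM with the parsed site and docs and ask
  // it to extract actors, contexts, and a feature tree.
  return extractWithLlm(html);
}

// Stub standing in for the LLM extraction call.
async function extractWithLlm(_text: string): Promise<OnboardingPlan> {
  return { actors: [], contexts: [], features: [] };
}
```
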
Completed
Public API documentation
Published the OpenAPI spec at https://app.getkontext.io/openapi.json and documentation at https://getkontext.io/docs/api-reference/overview, making it easier to interface with Kontext and access your Kontext data from outside the app.

Hervé Labas about 1 month ago
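
As a quick example, you can pull the published spec and enumerate the available operations; only the spec URL below comes from this post, the rest is generic OpenAPI handling:

```ts
// List the operations exposed by the published OpenAPI spec.
const spec = await (await fetch("https://app.getkontext.io/openapi.json")).json();

for (const [path, methods] of Object.entries(spec.paths ?? {})) {
  for (const method of Object.keys(methods as object)) {
    console.log(`${method.toUpperCase()} ${path}`);
  }
}
```
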
Suggest improvements to the analysis pipeline prompt after evals
Once enough evals have been collected, we should suggest prompt changes that might correct the mistakes the evals surfaced, then allow re-running an eval and comparing results to help improve the pipeline.

Hervé Labas about 1 month ago
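
One possible shape for that suggest-and-compare loop, sketched with hypothetical function types (none of this exists in Kontext today):

```ts
// Hypothetical suggest-and-compare loop over eval runs.
interface EvalRun {
  prompt: string;
  mistakes: string[]; // mistakes the eval surfaced for this prompt
  score: number;      // fraction of examples handled correctly
}

async function improvePrompt(
  baseline: EvalRun,
  suggestFix: (prompt: string, mistakes: string[]) => Promise<string>,
  runEval: (prompt: string) => Promise<EvalRun>,
): Promise<EvalRun> {
  // Ask an LLM for a revision targeting the surfaced mistakes...
  const candidatePrompt = await suggestFix(baseline.prompt, baseline.mistakes);
  // ...re-run the eval with it, and keep whichever prompt scores better.
  const candidate = await runEval(candidatePrompt);
  return candidate.score > baseline.score ? candidate : baseline;
}
```
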
Planned
Clean up and improve UX of the Analysis pipeline eval screen
The current screen does not properly render some details of the eval results, which sometimes makes them hard to understand:
- Clarify what’s expected versus what was actually produced, shown only when a mistake is flagged: no repetition on success, so the focus stays on the mistakes worth analyzing.
- Fix the fact that a pure failure on the LLM side leads to false positives, i.e. if an example is supposed to detect nothing, we flag a success even when the pipeline itself failed (see the sketch below).

Hervé Labas about 1 month ago
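
A sketch of the grading fix from the second point, assuming a result type that distinguishes pipeline errors from a genuine empty detection (types and names are illustrative):

```ts
// A pipeline error must never count as a pass, even when the expected
// result is "detect nothing".
type PipelineResult =
  | { status: "ok"; detections: string[] }
  | { status: "error"; message: string };

function grade(expected: string[], result: PipelineResult): "pass" | "fail" | "errored" {
  // Previously an LLM-side failure looked like an empty detection list, so it
  // scored as a pass whenever `expected` was empty. Surface it instead.
  if (result.status === "error") return "errored";
  const got = [...result.detections].sort().join("|");
  const want = [...expected].sort().join("|");
  return got === want ? "pass" : "fail";
}
```
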
Planned
Flag mishaps in the analysis to feed benchmarks
Enable users to flag any part of a completed analysis where the LLM misinterpreted something and an adjustment is needed. Flagged examples are fed into the benchmark samples, letting you build your own verified dataset over time and giving you tools to fine-tune the analysis pipeline and improve its quality.

Hervé Labas about 1 month ago
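
A hypothetical shape for such a flagged sample, just to make the data flow concrete:

```ts
// Hypothetical shape of a flagged mishap becoming a benchmark sample.
interface BenchmarkSample {
  feedbackText: string;   // the input the pipeline analyzed
  pipelineOutput: string; // what the LLM produced
  expectedOutput: string; // the human-verified correction
  note?: string;          // why the original output was wrong
}

const verifiedDataset: BenchmarkSample[] = [];

// Called when a user flags a misinterpretation; each corrected example grows
// the verified dataset that later evals run against.
function flagMishap(sample: BenchmarkSample): void {
  verifiedDataset.push(sample);
}
```
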
Planned
Plain Integration
As with Crisp, integrate your Plain workspace into Kontext to ingest support conversations and analyze them to detect problems mentioned by customers.

Hervé Labas about 1 month ago
MCP UI
Experiment with MCP UI to allow your AI Assistant to render some Kontext widgets based on what you’re asking: trends, stats, anything that benefits from being visually rendered.

Hervé Labas about 1 month ago
Low Priority
Completed
Fathom Integration
Connect your Fathom workspace to generate feedback items from call transcripts and detect what your customers or prospects are telling you about your product. You can wire specific teams to specific Kontext Products, or route them all to a single one; it’s up to you.

Hervé Labas about 1 month ago
In Progress
Crisp Integration
Ingest support conversations from Crisp into Kontext. Map your inbox to a Kontext Product, optionally filtering on segment tags. Any update to a conversation updates the corresponding feedback item and triggers a new analysis pass. Requires validation from the Crisp team before the production version can be published.

Hervé Labas about 1 month ago
High Priority
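
A hedged sketch of that ingestion path; the event and field names below are illustrative, not the actual Crisp webhook schema:

```ts
// Illustrative event shape; not the actual Crisp webhook payload.
interface CrispEvent {
  websiteId: string;  // Crisp inbox/site identifier
  sessionId: string;  // conversation identifier
  segments: string[]; // segment tags on the conversation
  text: string;       // latest message content
}

// Which Crisp inbox feeds which Kontext Product, optionally restricted to tags.
const inboxMap: Record<string, { productId: string; segmentFilter?: string[] }> = {};

async function onCrispEvent(ev: CrispEvent): Promise<void> {
  const mapping = inboxMap[ev.websiteId];
  if (!mapping) return; // inbox not wired to any Product
  if (mapping.segmentFilter && !ev.segments.some((s) => mapping.segmentFilter!.includes(s))) {
    return; // filtered out by segment tags
  }
  // Upsert keyed on the conversation: any update refreshes the feedback item
  // and queues a fresh analysis pass.
  await upsertFeedback(mapping.productId, ev.sessionId, ev.text);
  await queueAnalysis(mapping.productId, ev.sessionId);
}

// Stubs standing in for Kontext internals.
async function upsertFeedback(_productId: string, _convId: string, _text: string) {}
async function queueAnalysis(_productId: string, _convId: string) {}
```
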
In Progress
MCP Server
https://mcp.getkontext.io lets you connect your favorite AI assistant to Kontext and ask questions about detected problems, trends, feedback, etc. We’re starting with an initial set of tools meant to facilitate onboarding: building a solid feature tree, listing releases, and so on, so you can take your own product docs and get started with your AI agent.

Hervé Labas about 1 month ago
High Priority
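
If you want to poke at the server programmatically rather than through an assistant, something like this should work with the MCP TypeScript SDK (assuming the server speaks the streamable HTTP transport; only the URL comes from this post):

```ts
// Connect and list the exposed tools; auth requirements, if any, are omitted.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const transport = new StreamableHTTPClientTransport(new URL("https://mcp.getkontext.io"));
const client = new Client({ name: "kontext-demo", version: "1.0.0" });

await client.connect(transport);
console.log(await client.listTools()); // see which Kontext tools are available
```
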