
feat: Add Context7 integration configuration #1450

Status: Open. Wants to merge 2 commits into main.

Conversation

@leej3 leej3 commented Jul 30, 2025

Add Context7 Configuration for NeuroConv

This pull request adds first‑class Context7 support for NeuroConv so that AI coding assistants (Cursor, Copilot Chat, Zed, etc.) can access version‑specific conversion guidance directly in prompts.

Benefits for users

  • Accurate version-specific code examples – agents retrieve current NeuroConv APIs instead of hallucinating.
  • Format‑aware recommendations – rules instruct models to pick the right *Interface and to validate with check_read().
  • Reproducibility – version history lets downstream projects pin documentation to a specific tag.

Details on the approach

I have tried to follow the best practices outlined in the Context7 documentation. Some of the highlights:

| Item | Purpose |
| --- | --- |
| `context7.json` | A config file in the project source that controls and optimizes how the MCP server provides help for NeuroConv. |
| Docs-only crawl | Source code, tests, CI files, and build artifacts are excluded, keeping responses focused and fast. |
| Schema validation | The `$schema` key enables auto-completion in IDEs and ensures the file adheres to the Context7 JSON Schema. |
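As an illustration, a configuration along these lines might look as follows. This is a sketch, not the actual file in this PR: the field names follow the published Context7 schema as I understand it, and all values are placeholders.

```json
{
  "$schema": "https://context7.com/schema/context7.json",
  "projectTitle": "NeuroConv",
  "description": "Convert neurophysiology data in proprietary formats to NWB.",
  "excludeFolders": ["src", "tests", ".github"],
  "rules": [
    "Pick the *Interface class that matches the source data format.",
    "Call get_metadata() and enrich the returned dict before running conversion."
  ],
  "previousVersions": [
    { "tag": "v0.6.7", "title": "version 0.6.7" }
  ]
}
```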

leej3 and others added 2 commits July 30, 2025 12:12
- Add context7.json to enable AI coding assistant integration
- Configure documentation parsing for 40+ neurophysiology formats
- Include best practices for NWB conversion workflows
- Support for multiple recent versions (v0.6.7 - v0.7.5)
- Exclude source code and focus on user-facing documentation

This enables developers to get up-to-date NeuroConv documentation
directly in AI coding assistants like Cursor and Claude.
@h-mayorquin h-mayorquin (Collaborator) left a comment

Another similar solution is this:
https://github.com/ref-tools/ref-tools-mcp

On one hand, this is something extra to maintain but the document is really small and not very costly. Maybe we should just have it for the sake of experimentation. I am OK with it but we should improve the rules a bit.

@bendichter has been thinking hard about AI tools and I think he will be interested in this. Maybe @luiztauffer will be interested as well.

Quoted from the proposed rules in context7.json:

"Call get_metadata() and enrich the returned dict before running conversion.",
"Use run_conversion() with backend compression settings for large recordings.",
"Leverage chunking options to keep NWB files manageable.",
"Run check_read() on every interface before conversion to catch I/O issues early.",
A collaborator left a review comment on the rules quoted above:

The rules from here to the end seem more dubious. I don't think we have a check_read, for example.

leej3 commented Jul 31, 2025

> Another similar solution is this: https://github.com/ref-tools/ref-tools-mcp

Thanks, good to know. No doubt there will be others soon too.

> On one hand, this is something extra to maintain but the document is really small and not very costly. Maybe we should just have it for the sake of experimentation. I am OK with it but we should improve the rules a bit.

Projects can be added directly to context7.com (by "community members"), but in that case the configuration is detected automatically and errors are more likely to occur. Given that there are competing projects in this area, there does seem to be a risk of over-committing on support. The downside of skipping the experimentation you propose is that end users would assume NeuroConv is supported, while the efficiency and functionality of a well-specified set of rules would be missing from the MCP server.

I don't have enough experience with the project to specify the rules (clearly, given I didn't pick up on the hallucinations!). I hope we can get something along these lines merged though. If I can help just let me know.

> @bendichter has been thinking hard about AI tools and I think he will be interested in this. Maybe @luiztauffer will be interested as well.

Excellent.
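The concern above about auto-detected configurations suggests that a quick local sanity check could catch obvious mistakes before submission. A minimal sketch using only the Python standard library; the set of expected keys is my assumption for illustration, not the official Context7 schema:

```python
import json

# Keys we expect a context7.json to carry (an assumption for illustration;
# the authoritative list lives in the Context7 JSON Schema).
EXPECTED_KEYS = {"$schema", "projectTitle", "description", "rules"}


def lint_context7(text: str) -> list[str]:
    """Return a list of problems found in a context7.json document."""
    problems = []
    try:
        config = json.loads(text)
    except json.JSONDecodeError as err:
        return [f"invalid JSON: {err}"]
    # Report any expected top-level keys that are absent.
    missing = EXPECTED_KEYS - config.keys()
    problems.extend(f"missing key: {key}" for key in sorted(missing))
    # Rules must be a flat list of strings for the MCP server to use them.
    rules = config.get("rules", [])
    if not isinstance(rules, list) or not all(isinstance(r, str) for r in rules):
        problems.append("rules must be a list of strings")
    return problems
```

Something along these lines could run in CI so that a malformed config never reaches the crawler.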

@bendichter bendichter (Contributor) commented
@leej3 very cool that you are thinking about agentic coding for NWB conversions! As @h-mayorquin mentioned, we have been looking pretty deeply into this. I agree that some of the rules here are great, but some of them are not really right, and I think we are going to need a lot more to make actually good conversions.

While I am excited about embracing AI to lower the energy barrier of open data, I worry about an agent like this creating and publishing technically correct NWB files that do not have the appropriate data: for example, having the neurophysiology signal but missing key (or all) stimulus and behavioral information. This is already a common problem in the DANDI Archive, and I think an agent like this might make it worse if we don't think very carefully about how we guide it. Would you be open to a meeting to discuss this?
