
[FEAT]: Perplexica Integration #2568

Open
CrazyAce25 opened this issue Nov 2, 2024 · 1 comment
Comments


CrazyAce25 commented Nov 2, 2024

What would you like to see?

Just wanted to start by saying I love what you all are building, and the integration with LocalAI is wonderful. One of the things I currently run alongside LocalAI, and love, is Perplexica (https://github.com/ItzCrazyKns/Perplexica). While you have built-in support for Perplexity, Perplexica doesn't enjoy the same kind of turn-key support. I would appreciate Perplexica support being added so I could easily make use of it within AnythingLLM, as I'm not a fan of the centralized nature of Perplexity. Below are some benefits of integrating it, including the ability for custom agents to easily make use of the system and the ability to toggle its use alongside the RAG system on and off, which I thought would be quite useful.

Benefits of Perplexica Integration
Enhanced Search Capabilities
Perplexica is known for its robust search engine capabilities, which could complement AnythingLLM's RAG (Retrieval-Augmented Generation) features. Integrating Perplexica would allow AnythingLLM to leverage Perplexica's advanced search functionality, potentially improving the accuracy and relevance of the responses generated by the LLM.
Toggle Button for RAG
Adding a toggle button to use Perplexica with or without RAG would give users more flexibility. This feature would let users choose between combining Perplexica's full search capabilities with RAG, or opting for simpler, faster response generation without RAG. The toggle would be particularly useful when fast responses matter in some scenarios and detailed, context-aware responses in others.
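As a rough sketch of how such a toggle might be modeled internally (the type and setting names below are invented for illustration and are not AnythingLLM's actual settings schema):

```typescript
// Hypothetical per-workspace search-mode setting. The names here are
// assumptions for illustration only, not AnythingLLM's real schema.
type SearchMode = "rag-only" | "perplexica-only" | "perplexica-with-rag";

interface WorkspaceSearchSettings {
  mode: SearchMode;
  perplexicaBaseUrl?: string; // e.g. a locally hosted Perplexica instance
}

// Decide which retrieval steps to run for a workspace, based on the toggle.
function retrievalPlan(settings: WorkspaceSearchSettings): string[] {
  const steps: string[] = [];
  if (settings.mode !== "rag-only") steps.push("perplexica-web-search");
  if (settings.mode !== "perplexica-only") steps.push("vector-db-rag");
  return steps;
}
```

For example, `retrievalPlan({ mode: "perplexica-with-rag" })` would run both the web search and the vector-database retrieval, while `"rag-only"` skips Perplexica entirely.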
Document Handling and Context
Perplexica excels at handling large volumes of documents and providing context-aware responses. Integrating this functionality into AnythingLLM would allow users to upload and manage a large number of documents, including PDFs, Word documents, and other formats, which could then be used as context for the LLM. This would be especially beneficial for users who need to work with extensive knowledge bases or large collections of scripts and code files.
Other Ways Perplexica Could Benefit AnythingLLM
Vector Database Integration
Perplexica pairs the SearXNG metasearch engine with embedding-based similarity search to retrieve and rerank results (SearXNG itself is a metasearch engine, not a vector database). Integrating these capabilities into AnythingLLM could complement its existing vector database support (e.g., LanceDB, Pinecone, Milvus) by providing additional retrieval options and potentially better performance in certain scenarios.
Customization and Flexibility
The integration would allow for more customization options, enabling users to choose between different search engines and retrieval methods. This flexibility would be beneficial for developers who want to tailor the system to their specific needs, whether it's for personal use or within an organization.
Multi-User and Multi-Agent Support
AnythingLLM's support for multi-user instances and multiple agents would be further enhanced by Perplexica's search capabilities. Each agent could be configured to use Perplexica's search engine, allowing for more specialized and efficient information retrieval across different workspaces and agents.
Community and Developer Support
Integrating Perplexica into AnythingLLM could also attract more developers and users who are familiar with Perplexica's features. This could lead to a broader community and more contributions to the project, enhancing its overall ecosystem and support.
Practical Implementation
To implement this integration, AnythingLLM could use its existing API and developer tools to connect with Perplexica's services. This would involve setting up API keys, configuring the search engine settings, and integrating the Perplexica UI elements into the AnythingLLM interface.
The issue on GitHub regarding Langchain SDK integration highlights the interest in integrating AnythingLLM with other tools and frameworks. A similar approach could be taken to integrate Perplexica, leveraging the Langchain SDK or other open standards to ensure seamless interaction between the two systems.
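For concreteness, here is a minimal sketch of what a Perplexica connector could look like. The `/api/search` endpoint path, the request fields (`query`, `focusMode`), and the `message` response field are assumptions about Perplexica's self-hosted HTTP API and should be verified against the installed version:

```typescript
// Hypothetical Perplexica connector sketch. Endpoint path, payload shape,
// and response fields are assumptions; check your Perplexica version's API.

interface PerplexicaSearchRequest {
  query: string;
  focusMode: string; // e.g. "webSearch" (assumed focus-mode name)
  history: [string, string][]; // prior (role, message) conversation turns
}

// Build the JSON body for a search call; kept separate so the payload
// shape can be tested and adjusted without any network access.
function buildSearchRequest(query: string): PerplexicaSearchRequest {
  return { query, focusMode: "webSearch", history: [] };
}

// POST the query to a locally hosted Perplexica instance and return
// the answer text from its (assumed) `message` response field.
async function perplexicaSearch(baseUrl: string, query: string): Promise<string> {
  const res = await fetch(`${baseUrl}/api/search`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildSearchRequest(query)),
  });
  if (!res.ok) throw new Error(`Perplexica request failed: ${res.status}`);
  const data = await res.json();
  return data.message;
}
```

A toggle like the one described above would then decide whether `perplexicaSearch` is called at all before the normal RAG retrieval step runs.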
In summary, integrating Perplexica with AnythingLLM would significantly enhance the latter's capabilities by adding robust search engine functionality, improved document handling, and increased customization options. The ability to toggle between using RAG and not using it would provide users with greater flexibility, making the system more versatile and powerful.

CrazyAce25 added the enhancement and feature request labels Nov 2, 2024
timothycarambat (Member) commented

@CrazyAce25 please let me know if my understanding is incorrect, but isn't Perplexica an AI tool suite and UI around SearXNG? We currently support SearXNG as a search engine provider for @agents.

Is this feature request for adding Perplexica as an LLM Provider? If that is the case, that makes a ton of sense to me. This issue is just a wall of text and I'm a bit lost as to what the concise ask is 😅

Just want to make sure that if we ship something for this, it's what you wanted; the benefits are clear to me.
