Updating source path, and file path in SharePoint loader #73
Conversation
Added a minor comment, the rest LGTM
@@ -82,6 +82,7 @@ def lazy_load(self) -> Iterator[Document]:
                 auth_identities = self.authorized_identities(file_id)
                 if self.load_extended_metadata is True:
                     extended_metadata = self.get_extended_metadata(file_id)
+                    extended_metadata.update({"source_full_url": target_folder.web_url})
How do you handle this when `folder_id` or `object_ids` is passed instead of `folder_path`, or when none of these is passed (i.e., `root` by default)?
Added for `folder_id` and for when none is passed (i.e., `root` by default). Not yet for `object_ids`.
… to api (langchain-ai#24707) The `embed_query` func of DashScope embeddings sends a `str` param, and the `embed_with_retry` func will send erroneous content to the API.
…triever.from_llm` construct (langchain-ai#24712) **Description:** Avoid a ValueError when constructing the retriever via the `from_llm()` method.
…-ai#24016) Description: add an optional relevance score threshold to select only coherent documents, complementing top_n. Discussion: add relevance score threshold in flashrank_rerank document compressors langchain-ai#24013 Dependencies: no dependencies --------- Co-authored-by: Benjamin BERNARD <[email protected]> Co-authored-by: Bagatur <[email protected]> Co-authored-by: Chester Curme <[email protected]>
…in-ai#24686) Add how-to guide for rate limiting a chat model.
Rate limit the integration tests to avoid getting 429s.
Apply rate limiter in integration tests
…LangChainException instead of bare Exception (langchain-ai#23935) Raise `LangChainException` instead of `Exception`. This alleviates the need for library users to use bare try/except to handle exceptions raised by `AzureSearch`. Co-authored-by: Eugene Yurtsev <[email protected]>
Thank you for contributing to LangChain! - [x] **PR title**: "community: add Yi LLM", "docs: add Yi Documentation" - **Description:** This PR adds support for the Yi model to LangChain. - **Dependencies:** langchain_core, requests, contextlib, typing, logging, json, langchain_community - **Twitter handle:** 01.AI - [x] **Add tests and docs**: I've added the corresponding documentation to the relevant paths --------- Co-authored-by: Bagatur <[email protected]> Co-authored-by: isaac hershenson <[email protected]>
) **Description:** Add NVIDIA NIMs to Model Tab and LLM Feature Table --------- Co-authored-by: Hayden Wolff <[email protected]> Co-authored-by: Erick Friis <[email protected]> Co-authored-by: Erick Friis <[email protected]>
Release anthropic, openai, groq, mistralai, robocorp
…chain (langchain-ai#24687) Description: OutputFixingParser.from_llm() creates a retry chain that returns a Generation instance, when it should actually just return a string. Issue: langchain-ai#24600 Twitter handle: scribu --------- Co-authored-by: isaac hershenson <[email protected]>
…#24730) Correct the documentation string.
**Description:** Just a missing "r" in metric. **Dependencies:** N/A
Bug in mypy 1.11.0 blocking CI, see example: https://github.com/langchain-ai/langchain/actions/runs/10127096903/job/28004492692?pr=24641
Add a class attribute `base_url` for ChatOllama to allow users to choose a different URL to connect to. Fixes langchain-ai#24555
…rl. (langchain-ai#24747) community: Add support for specifying the document_loaders.firecrawl api url. This is mainly to support the [self-hosting](https://github.com/mendableai/firecrawl/blob/main/SELF_HOST.md) option firecrawl provides. E.g., now I can specify localhost:.... The corresponding firecrawl class already provides functionality to pass the argument. See here: https://github.com/mendableai/firecrawl/blob/4c9d62f6d3c6cb7bd13590a8149e70dd81d8e282/apps/python-sdk/firecrawl/firecrawl.py#L29 --------- Co-authored-by: Chester Curme <[email protected]>
…ngchain-ai#24721) **Description:** In the `ChatFireworks` class definition, the Field() call for the "stop" ("stop_sequences") parameter is missing the "default" keyword. **Issue:** Type checker reports "stop_sequences" as a missing arg (not recognizing the default value is None) **Dependencies:** None **Twitter handle:** None
…#24347) Thank you for contributing to LangChain! - [x] **PR title**: "Add documentation on InMemoryVectorStore driver for MemoryDB to langchain-aws" - **Description:** Added documentation on the InMemoryVectorStore driver to aws.mdx and a usage example on a MemoryDB cluster - [x] **Add tests and docs**: Added the MemoryDB notebook to the docs/docs/integrations/ folder - [x] **Lint and test**: Ran `make format`, `make lint` and `make test` from the root of the modified package(s).
) - [ ] **PR title**: "langchain-openai: openai proxy added to base embeddings" - [ ] **PR message**: - **Description:** Dear langchain developers, You've already supported proxy for ChatOpenAI implementation in your package. At the same time, if somebody needed to use proxy for chat, it also could be necessary to be able to use it for OpenAIEmbeddings. That's why I think it's important to add proxy support for OpenAI embeddings. That's what I've done in this PR. @baskaryan --------- Co-authored-by: karpov <[email protected]> Co-authored-by: Bagatur <[email protected]>
Since right now you can't use the nice injected-arg syntax directly with model.bind_tools()
Force-pushed from a1daa7b to 82ee99d (Compare)
Closing this PR; raised new PR #77.
No description provided.