Description
We attempted to set up Hugging Face AI Sheets to run inference with local Large Language Models (LLMs) via Ollama (specifically llama3 and phi3:mini), aiming for data privacy and potential offline use. However, extensive testing revealed that the application does not behave as documented in CUSTOMIZATION.md.
Observed Behavior:
- Hard dependency on an internet connection (the application fails entirely when offline).
- Inference is not routed to the local Ollama server, even when online.
- Misleading/problematic UI: a valid HF_TOKEN is still required despite the local endpoint configuration.

Steps to Reproduce:
- Clone the AI Sheets repository: git clone https://github.com/huggingface/aisheets.git
- Set the environment variables:
  HF_TOKEN="hf_YOUR_VALID_HUGGINGFACE_TOKEN"
  MODEL_ENDPOINT_URL="http://localhost:11434"
  MODEL_ENDPOINT_NAME="llama3" # Or "phi3:mini" for testing the other model
- Build the application: pnpm run build
- Start the server: pnpm serve (ensure internet connection is active for this step).
- In the AI Sheets UI, load a CSV file (e.g., reviews.csv with texto_resena in column2).
- Add a new column using a prompt (e.g., sentiment analysis on {{column2}}).
- Observe: Check the ollama serve terminal. No new log entries will appear, indicating no request reached the local server.
- Test offline: disconnect from the internet and restart pnpm serve. The application fails to load with an ENOTFOUND huggingface.co error.
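To confirm that the silent `ollama serve` log is AI Sheets' fault and not Ollama's, the local server can be exercised directly against Ollama's default REST API. The sketch below (Python, illustrative only; not AI Sheets code) builds the same kind of POST to /api/generate that AI Sheets would need to issue; the port and model name mirror the configuration above. Actually sending the request is left commented out, since it requires a running `ollama serve`:

```python
import json
import urllib.request

# Values mirror the configuration above: Ollama's default port and the
# model under test.
OLLAMA_URL = "http://localhost:11434"
MODEL = "llama3"  # or "phi3:mini"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST against Ollama's /api/generate endpoint.

    If AI Sheets were routing inference locally, requests like this one
    would show up as new entries in the `ollama serve` terminal.
    """
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request(MODEL, "Sentiment of: 'great product!'")
print(req.full_url)  # http://localhost:11434/api/generate
# To actually send it (requires `ollama serve` to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

A manual request like this succeeds against our local server, which is how we ruled out Ollama itself as the cause.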
Expected Behavior:
- The application should load and function without an internet connection when configured for local LLMs.
- Inference requests should be routed to the configured local Ollama endpoint.
- Documentation should accurately describe local/offline capabilities (it is currently misleading on this point).
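For context on the ENOTFOUND failure above: in Node.js, ENOTFOUND is reported when a DNS lookup fails, which is exactly what happens to huggingface.co while offline, whereas localhost still resolves with no network at all. A small Python sketch of that distinction (illustrative only, not AI Sheets code) shows why a purely local setup should still be able to start:

```python
import socket

def can_resolve(host: str, port: int = 443) -> bool:
    """Return True if `host` resolves to an address.

    A failed lookup here is the same condition Node.js reports as
    ENOTFOUND. `localhost` resolves even with no network connection,
    so a startup path that only touches the local endpoint would not
    hit this error.
    """
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False

print(can_resolve("localhost"))  # True, even offline
```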
- Operating System: Windows (win32)