
Local Inference (Ollama) and Offline Functionality Not Working as Expected #440

@0utKast

Description


We attempted to set up Hugging Face AI Sheets to perform inference using local Large Language Models (LLMs) via Ollama
(specifically llama3 and phi3:mini), aiming for data privacy and potential offline use. However, extensive testing
revealed that the application does not behave as described in the CUSTOMIZATION.md documentation.
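
For completeness, the Ollama side of the setup follows the standard CLI workflow. The sketch below assumes stock Ollama defaults (the server listens on http://localhost:11434); nothing here is specific to AI Sheets:

    # Start the local server (the Ollama desktop app does this automatically on Windows)
    ollama serve
    # In another terminal: pull the two models used for testing and verify they are available
    ollama pull llama3
    ollama pull phi3:mini
    ollama list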

Observed Behavior:

  1. Hard dependency on an internet connection (fails when offline).
  2. Inference is not routed to the local Ollama server, even when online.
  3. Misleading/problematic UI.
  4. HF_TOKEN is still required.

Steps to Reproduce:

  1. Clone the AI Sheets repository: git clone https://github.com/huggingface/aisheets.git
  2. Configure the environment variables:
    HF_TOKEN="hf_YOUR_VALID_HUGGINGFACE_TOKEN"
    MODEL_ENDPOINT_URL="http://localhost:11434"
    MODEL_ENDPOINT_NAME="llama3" # or "phi3:mini" to test the other model
  3. Build the application: pnpm run build
  4. Start the server: pnpm serve (an internet connection must be active for this step).
  5. In the AI Sheets UI, load a CSV file (e.g., reviews.csv with texto_resena in column2).
  6. Add a new column using a prompt (e.g., sentiment analysis on {{column2}}).
  7. Observe: check the ollama serve terminal. No new log entries appear, indicating that no request reached the local server (see the verification sketch after this list).
  8. Test offline: disconnect from the internet and restart pnpm serve. The application fails to load with an ENOTFOUND
    huggingface.co error.
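
To confirm that the silence in the ollama serve terminal is not a problem with the local server itself, the Ollama endpoint can be probed directly while AI Sheets is running. This is a minimal sketch using Ollama's standard HTTP API (/api/tags and /api/generate); the prompt text is arbitrary:

    # Both requests produce log entries in the `ollama serve` terminal,
    # confirming the server is reachable at the configured MODEL_ENDPOINT_URL.
    curl http://localhost:11434/api/tags
    curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Say hello", "stream": false}'
    # Adding a column in AI Sheets immediately afterwards produces no comparable entries.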

Expected Behavior:

  • The application should be able to load and function without an internet connection when configured for local LLMs.
  • The documentation should accurately describe the local/offline capabilities; the current CUSTOMIZATION.md wording is misleading in this regard.

Environment:

  • Operating System: Windows (win32)
