This is a [LlamaIndex](https://www.llamaindex.ai/) project using [Next.js](https://nextjs.org/) bootstrapped with [`create-llama`](https://github.com/run-llama/LlamaIndexTS/tree/main/packages/create-llama).

## Getting Started

First, install the dependencies:

```
npm install
```

Second, generate the embeddings of the documents in the `./data` directory (if this folder exists; otherwise, skip this step):

```
npm run generate
```
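The "skip if the folder is missing" note above can be scripted. A minimal sketch (the directory check simply mirrors the prose; `npm run generate` is the project's own script):

```shell
# Run the embedding step only when ./data exists, per the note above.
if [ -d ./data ]; then
  npm run generate
else
  echo "skipping embedding generation: no ./data folder"
fi
```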

Third, run the development server:

```
npm run dev
```

Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.

You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.

This project uses [`next/font`](https://nextjs.org/docs/basic-features/font-optimization) to automatically optimize and load Inter, a custom Google Font.
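Both `npm run generate` and the server read configuration from a `.env` file in the project root (the same file that is mounted into the container in the Docker steps below). Assuming the default OpenAI-backed setup — your project may be wired to a different model provider — a minimal `.env` could look like:

```
OPENAI_API_KEY=<your_openai_api_key>
```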

## Using Docker

1. Build an image for the Next.js app:

```
docker build -t <your_app_image_name> .
```

2. Generate embeddings:

Parse the data and generate the vector embeddings if the `./data` folder exists; otherwise, skip this step. The `.env` and `config` mounts supply the environment variables and configuration from your file system, and the `cache` mount stores the vector database on your file system:

```
docker run \
  --rm \
  -v $(pwd)/.env:/app/.env \
  -v $(pwd)/config:/app/config \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/cache:/app/cache \
  <your_app_image_name> \
  npm run generate
```

3. Start the app. As before, the `.env` and `config` mounts supply the environment variables and configuration, and the `cache` mount stores the vector database on your file system:

```
docker run \
  --rm \
  -v $(pwd)/.env:/app/.env \
  -v $(pwd)/config:/app/config \
  -v $(pwd)/cache:/app/cache \
  -p 3000:3000 \
  <your_app_image_name>
```
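Once the container is up, you can sanity-check it from another terminal (assumes `curl` is installed; this prints 200 when the app is serving, and 000 if nothing is listening on port 3000 yet):

```shell
# Print only the HTTP status code returned by the app's root page.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000 || true
```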

## Learn More

To learn more about LlamaIndex, take a look at the following resources:

- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex (Python features).
- [LlamaIndexTS Documentation](https://ts.llamaindex.ai) - learn about LlamaIndex (TypeScript features).

You can check out [the LlamaIndexTS GitHub repository](https://github.com/run-llama/LlamaIndexTS) - your feedback and contributions are welcome!