Build a streamlined Streamlit application to analyze your CSV data. This guide walks you through setting up and running the application.
- Streamlit: For a smooth web application interface.
- Langchain: Integrated for advanced functionalities.
- AWS Services: Harness the power of Amazon's cloud services.
- Amazon Bedrock: A fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities for building generative AI applications while maintaining privacy and security.
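To give a concrete sense of how CSV rows end up in front of a foundation model, here is a minimal standard-library sketch of a prompt-building helper. The function name, row limit, and prompt wording are illustrative assumptions, not the application's actual code; the real application routes the result through Langchain and Amazon Bedrock.

```python
import csv
import io


def build_analysis_prompt(csv_text: str, question: str, max_rows: int = 20) -> str:
    """Illustrative helper: embed a CSV sample and a user question in a prompt.

    This sketch only shows the prompt-construction step; model invocation
    (Langchain + Bedrock) is handled elsewhere in the application.
    """
    reader = csv.reader(io.StringIO(csv_text))
    # Keep only the first max_rows rows so the prompt stays small.
    rows = [", ".join(row) for _, row in zip(range(max_rows), reader)]
    sample = "\n".join(rows)
    return (
        "You are a data analyst. Given this CSV sample:\n"
        f"{sample}\n\n"
        f"Answer the question: {question}"
    )
```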
Use `virtualenv` to create an isolated Python environment:

- Install `virtualenv`:

  ```bash
  pip install virtualenv
  ```

- Navigate to the directory where you cloned the repository.

- Initialize the virtual environment:

  ```bash
  virtualenv da-env
  ```

- Activate the environment:

  ```bash
  source da-env/bin/activate
  ```
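Once activated, you can confirm that the interpreter really runs inside the environment with a quick standard-library check (a sketch; older `virtualenv` releases set `sys.real_prefix`, while modern ones use `sys.base_prefix` like the built-in `venv`):

```python
import sys


def in_virtual_environment() -> bool:
    # Inside a virtualenv/venv, sys.prefix points at the environment
    # directory, while sys.base_prefix (or sys.real_prefix on older
    # virtualenv releases) still points at the system installation.
    return (
        hasattr(sys, "real_prefix")
        or sys.prefix != getattr(sys, "base_prefix", sys.prefix)
    )


print("virtual environment active:", in_virtual_environment())
```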
With your virtual environment active, install the necessary packages:

```bash
pip install -r requirements.txt
```

This command installs all dependencies from the `requirements.txt` file into your `da-env` environment.
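The actual `requirements.txt` ships with the repository. As an unverified illustration only, a file for this stack would typically list packages along these lines (every package name below is an assumption, not a statement of the project's real dependencies):

```text
streamlit        # web application interface
langchain        # LLM orchestration
boto3            # AWS SDK, used to reach Amazon Bedrock
psycopg2-binary  # PostgreSQL driver for the Aurora cluster
pgvector         # client-side support for the vector type
python-dotenv    # loads the .env file
```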
To launch the application:

- Ensure the dependencies are installed and your `.env` file has the Aurora PostgreSQL DB details.

- Install the `pgvector` extension on your Aurora PostgreSQL DB cluster:

  ```sql
  CREATE EXTENSION vector;
  ```

- Launch the application using Streamlit:

  ```bash
  streamlit run app.py
  ```

- Your default web browser will open, showcasing the application interface.

- Follow the on-screen instructions to load your CSV data (sample data).
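The steps above assume a `.env` file with Aurora PostgreSQL connection details. The exact variable names depend on what `app.py` reads; a hypothetical example (all names and values below are placeholders, not the project's actual keys):

```ini
# Hypothetical .env -- match the variable names app.py actually expects.
PGHOST=your-aurora-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com
PGPORT=5432
PGDATABASE=postgres
PGUSER=your_db_user
PGPASSWORD=your_db_password
```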