The LLM Survey Bot is a web application built with FastAPI that leverages a language model (LLM) to facilitate surveys and gather user feedback. The application allows users to interact with the LLM, submit responses, and have the LLM evaluate the quality of those responses.
This is an exercise in prompt engineering: llm.py contains hard-coded guidelines the LLM is asked to follow when responding to user answers to survey questions. The variable init_content gives the LLM background on its role and how to interact with the user, and guidelines is fed to the LLM along with the user's content when the user answers a question.
An API key is needed for openai/gpt-3.5-turbo (or another LLM you may have access to); see llm.py.
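As a rough sketch of that flow, the snippet below shows how init_content and guidelines might be combined with a user's answer and sent to the model. The function name, prompt wording, environment variable names, and the assumption that llm.py calls an OpenAI-compatible LiteLLM endpoint over httpx are all illustrative, not the repository's actual code:

```python
# Sketch only: assumed names and endpoint; see llm.py for the real implementation.
import os
import httpx

LITE_LLM_MODEL = "openai/gpt-3.5-turbo"

init_content = (
    "You are a survey assistant. Ask the user each question, judge their answer "
    "against the question's quality guideline, and ask them to retry if it falls short."
)

def evaluate_answer(question: str, guideline: str, user_answer: str) -> str:
    """Send the guideline and the user's answer to the LLM and return its reply."""
    messages = [
        {"role": "system", "content": init_content},
        {
            "role": "user",
            "content": f"Question: {question}\nQuality guideline: {guideline}\nUser answer: {user_answer}",
        },
    ]
    response = httpx.post(
        f"{os.getenv('LITELLM_API_BASE')}/chat/completions",
        headers={"Authorization": f"Bearer {os.getenv('LITELLM_API_KEY')}"},
        json={"model": LITE_LLM_MODEL, "messages": messages},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```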
The questions preloaded into the database are simple questions, each with a quality guideline attached. The LLM uses the quality guideline to determine whether the user's answer is satisfactory; if not, the LLM may prompt the user to try again.
Q1: What is your name?
Quality Guideline: Is the name submitted likely to be a real name?
Example User response: I went to the park.
LLM response: That doesn’t sound like a name. Could you please enter your name?
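For context, a preloaded question and its guideline could be modelled along the lines below; the class and column names here are assumptions for illustration, not the repository's actual schema:

```python
# Illustrative only: assumed table and column names.
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Question(Base):
    __tablename__ = "questions"

    id = Column(Integer, primary_key=True)
    text = Column(String, nullable=False)               # e.g. "What is your name?"
    quality_guideline = Column(String, nullable=False)  # e.g. "Is the name submitted likely to be a real name?"
```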
To run this application, you need Python 3.7 or higher installed. Once requirements.txt is downloaded, you can install the required packages using the following command:
pip install -r requirements.txt

Alternatively, use:

pip install fastapi==0.68.1 uvicorn==0.15.0 sqlalchemy==1.4.23 pydantic==1.8.2 httpx==0.19.0 python-dotenv==0.19.0 jinja2==3.0.1
- Clone the Repository:

  git clone https://github.com/yourusername/llm_survey_bot.git
  cd llm_survey_bot

  or use the GitHub website tools to clone the repository.
- Set Up Environment Variables: Create a .env file in the root directory and add your environment variables (e.g., the LiteLLM API key and API base URL); a sketch of what this might look like follows the list below.
- Run the Application: Use Uvicorn to run the FastAPI application:

  uvicorn LLM_Survey_Bot_App.main:app --reload

  or just run main.py using your preferred IDE.
- Access the Application: Open your web browser and navigate to http://localhost:8000 to access the application.
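For the environment variables step, a minimal sketch is shown below; the variable names LITELLM_API_KEY and LITELLM_API_BASE are assumptions, so match them to whatever llm.py actually reads:

```python
# .env (assumed variable names):
#   LITELLM_API_KEY=sk-...
#   LITELLM_API_BASE=https://your-litellm-endpoint.example.com
from dotenv import load_dotenv
import os

load_dotenv()  # python-dotenv reads the .env file from the project root
api_key = os.getenv("LITELLM_API_KEY")
api_base = os.getenv("LITELLM_API_BASE")
```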
Access the Application Documentation
Open your web browser and navigate to http://localhost:8000/docs to access the interactive API documentation, where you can add questions and edit the database.
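As an illustration, a question could also be added from a small script rather than the Swagger UI; the /questions path and field names below are hypothetical, so check the /docs page for the actual endpoints and schema:

```python
# Hypothetical endpoint and payload fields; confirm against http://localhost:8000/docs.
import httpx

new_question = {
    "text": "What is your favourite colour?",
    "quality_guideline": "Is the answer a colour?",
}

response = httpx.post("http://localhost:8000/questions", json=new_question, timeout=10)
print(response.status_code, response.json())
```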
Changing the model used

Open the llm.py file and change the "LITE_LLM_MODEL" variable to the model you want to use. Possible models are:
- openai/gpt-4o
- anthropic/claude-3-sonnet-20240229
- xai/grok-2-latest
- vertex_ai/gemini-1.5-pro
- huggingface/WizardLM/WizardCoder-Python-34B-V1.0
- ollama/llama2
- openrouter/google/palm-2-chat-bison
More models are offered by each provider; read more here: https://docs.litellm.ai/docs/providers
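For example, switching to Claude 3 Sonnet is just a one-line edit in llm.py (surrounding code omitted):

```python
# In llm.py
LITE_LLM_MODEL = "anthropic/claude-3-sonnet-20240229"
```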