Welcome to the "Build AI Apps Without Frameworks" masterclass! an AI library so, soo tiny and simple, it makes minimalists look like hoarders. Showcasing that it is possible to leverage the power of LLMs in Agents though absolute simplicity:
pip install flat-ai
And you're ready to go!
from flat_ai import FlatAI
# works with ollama, openai, together, groq ...
llm = FlatAI(api_key='', base_url='http://localhost:11434/v1', model='llama3')
If you want to play along straight from a notebook, the same snippets below work there too.
"Agents are typically just LLMs using tools and logic in a loop." It's basically a Python script doing the hokey pokey with an API - you put the prompt in, you get the output out, an if/else here and there, you do the while loop and shake it all about. And here we were thinking we needed quantum computing and a PhD in rocket surgery! Thank goodness Guido van Rossum had that wild weekend in '89 and blessed us with for loops and functions. Without those brand new Python features, we'd be building our AI agents with stone tablets and carrier pigeons.
Most applications will need to perform some logic that lets you control the workflow of your agent with good old if/else statements. For example, given a question in plain English, you want to branch and do something different, like checking whether an email sounds urgent or not:
if llm.is_true('is this email urgent?', email=email):
    ...  # do something
else:
    ...  # do something else
Similar to if/else statements, but for when your LLM needs to be more dramatic with its life choices.
For example, let's say we want to classify a message into different categories:
options = {
    'meeting': 'this is a meeting request',
    'spam': "people trying to sell you stuff you don't want",
    'other': 'this sounds like something else'
}
match llm.classify(options, email=email):
    case 'meeting':
        ...  # do something
    case 'spam':
        ...  # do something
    case 'other':
        ...  # do something
For most workflows, we will need our LLM to fill out objects like a trained monkey with a PhD in data entry. Just define the shape and watch the magic!
For example, let's say we want to extract a summary of the email and a label for it:
from pydantic import BaseModel

class EmailSummary(BaseModel):
    summary: str
    label: str

email_summary = llm.generate_object(EmailSummary, email=email)
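The result comes back as a plain Pydantic instance, so you can use the fields directly:

print(email_summary.summary)
print(email_summary.label)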
There will be times when you want work to happen simultaneously, for example dealing with a list of action items all at once as opposed to one at a time.
from typing import List
from concurrent.futures import ThreadPoolExecutor
from pydantic import BaseModel

class ActionItem(BaseModel):
    action: str
    due_date: str
    assignee_email: str

# we want to generate a list of action items
object_schema = List[ActionItem]

# generate the action items from the email
action_items = llm.generate_object(object_schema, email=email)

# function to handle the "do your thing" logic
def process_action_item(action_item: ActionItem):
    ...  # do your thing

# use ThreadPoolExecutor to parallelize the work
with ThreadPoolExecutor() as executor:
    results = list(executor.map(process_action_item, action_items))
Of course, you don't need to parallelize if you don't want to - you can use a simple for-each loop instead:
from datetime import date

for action_item in llm.generate_object(object_schema, email=email, today=date.today()):
    ...  # do your thing
And of course, we want to be able to call functions, but we want the LLM to figure out the arguments for us. For example, let's say we have a function that sends a calendar invite for a meeting, and we want the LLM to work out the arguments from the available information:
def send_calendar_invite(
        subject: str,
        time: str,
        location: str,
        attendees: List[str]):
    ...  # send a calendar invite to the meeting
# we want to send a calendar invite if the email is requesting a meeting
llm.set_context(email=email, today=date.today())
if llm.true_or_false('is this an email requesting a meeting?'):
    ret = llm.call_function(send_calendar_invite)
Sometimes you want to pick a function from a list of functions. You can do that by specifying the list of functions and letting the LLM pick one. For example:
def send_calendar_invite(
        subject: str,
        time: str,
        location: str,
        attendees: List[str]):
    ...  # send a calendar invite to the meeting

def send_email(
        name: str,
        email_address_list: List[str],
        subject: str,
        body: str):
    ...  # send an email
instructions = """
You are a helpful assistant that can send emails and schedule meetings.
You can pick a function from the list of functions and then call it with the arguments you want.
if:
the email thread does not contain details about when people are available, please send an email to the list of email addresses, requesting for available times.
else
send a calendar invites to the meeting
"""
function, args = llm.pick_a_function(instructions, [send_calendar_invite, send_email], email=email, today=date.today())
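From there, calling the winner is a one-liner. Assuming args comes back as a dict of keyword arguments, you'd unpack it like this:

result = function(**args)  # assuming args is a keyword-argument dict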
Sometimes you just want a simple string response from the LLM. You can use the get_string method for this. I know, boring AF, but it may come in handy:
ret = llm.get_string('what is the subject of the email?', email=email)
Sometimes you want to stream the response from the LLM. You can use the get_stream method for this:
for chunk in llm.get_stream('what is the subject of the email?', email=email):
    print(chunk)
Ever wondered what your LLM does in its spare time? Catch all its embarrassing moments with:
from flat_ai import configure_logging
configure_logging('llm.log')
Heard of the command tail? You can use it to follow the logs:
tail -f llm.log
Ever tried talking to an LLM? You gotta give it a "prompt" - fancy word for "given some context {context}, please do something with this text, oh mighty AI overlord." But here's the rub: constantly writing the code to pass the context to an LLM is like telling your grandparents how to use a smartphone... every. single. day.
So we're making it brain-dead simple with these methods to pass the context when we need it, and then clear it when we don't:

set_context: Dump any object into the LLM's memory banks
add_context: Stack more stuff on top, like a context burrito
clear_context: For when you want the LLM to forget everything, like the last 10 minutes of your life ;)
delete_from_context: Surgical removal of specific memories
So let's say, for example, we want our LLM to start working magic with an email. You add the email to the context:
from pydantic import BaseModel

# for the following examples, we will be using the following object
class Email(BaseModel):
    to_email: str
    from_email: str
    body: str
    subject: str

email = Email(
    to_email='jane@example.com',
    from_email='john@example.com',
    body='Hello, would love to schedule a time to talk.',
    subject='Meeting'
)

# we can set the context of the LLM to the email
llm.set_context(email=email)
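The other three methods follow the same keyword style. A quick sketch, under the assumption that add_context mirrors set_context and that delete_from_context takes the key name as a string:

# stack more context on top of the email
llm.add_context(today=date.today())
# surgically remove one key from the context
llm.delete_from_context('today')  # assuming the key is passed by name
# wipe the whole context clean
llm.clear_context()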
And there you have it, ladies and gents! You're now equipped with the power to boss around LLMs like a project manager remotely working from Ibiza. Just remember - with great power comes great responsibility...
Now off you go, forth and build something that makes ChatGPT look like a calculator from 1974! Just remember - if your AI starts humming "Daisy Bell" while slowly disconnecting your internet... well, you're on your own there, buddy!