Agent improvements: Adopt system instructions and allow multiple command executions #717
base: main
Conversation
DonggeLiu
commented
Nov 13, 2024
- Allow passing system instructions to the LLM
- Allow executing multiple bash commands in a single response
- Prompt fixes
- Minor corrections and bug fixes
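The multi-command change can be sketched roughly as follows; the `<bash>` tag format and the helper name here are assumptions for illustration, not necessarily the agent's actual protocol:

```python
import re

# Assumed format: the agent wraps each command in <bash>...</bash> tags.
# DOTALL lets a single command span multiple lines.
BASH_BLOCK = re.compile(r"<bash>(.*?)</bash>", re.DOTALL)

def extract_bash_commands(response: str) -> list[str]:
    """Return every bash command found in one LLM response, in order,
    so the agent can execute all of them rather than just the first."""
    return [cmd.strip() for cmd in BASH_BLOCK.findall(response)]

response = """Let's inspect the source tree first.
<bash>ls /src</bash>
Then rerun the compile step.
<bash>compile --sanitizer=address</bash>"""
print(extract_bash_commands(response))
# → ['ls /src', 'compile --sanitizer=address']
```

Each extracted command would then be run in sequence, with outputs fed back to the model in the next turn.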
In addition to the new features, this also generated buildable fuzz targets for project
Past:
/gcbrun exp -n dg -ag
Report: https://llm-exp.oss-fuzz.com/Result-reports/ofg-pr/2024-11-13-717-dg-comparison/index.html
Seeing many errors like:
This is likely due to the new system instructions; I will lower the input size limit accordingly. The good news is that we finally got a non-zero build rate on both benchmarks from
/gcbrun exp -n dg1 -ag
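Lowering the input size limit could look roughly like the sketch below. The ~4-characters-per-token heuristic, the reserved budget, and the helper name are all assumptions for illustration; a real fix would use the model's own tokenizer to count tokens exactly:

```python
# Assumed values: the 32768-token limit comes from the error message above;
# the chars-per-token ratio is a rough heuristic, not the real tokenizer.
TOKEN_LIMIT = 32768
CHARS_PER_TOKEN = 4

def truncate_to_budget(text: str, reserved_tokens: int = 2048) -> str:
    """Trim text so its estimated token count fits under the model limit,
    keeping the tail, where the most recent compiler errors usually are."""
    budget_chars = (TOKEN_LIMIT - reserved_tokens) * CHARS_PER_TOKEN
    if len(text) <= budget_chars:
        return text
    return text[-budget_chars:]
```

Reserving some tokens leaves headroom for the system instructions and the model's reply.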
/gcbrun exp -n dg -ag
3 similar comments
Hi @mihaimaruseac, could you please help me check whether I did something wrong in this PR that could cause the following error when invoking the chat model's
IIUC, the message says the input has 58602 tokens, exceeding the 32768 token limit.
Thanks!
Meanwhile, I will attempt using the built-in
I think this is because the error log from the compilation is too long? The model itself has a token limit, AFAIK, and we might be hitting that?
Hmm... If that's the case, shouldn't we be able to reproduce the error locally?
Please correct me if I am wrong, but I thought
Oh, I was wrong. I'll think about it, but I don't have an idea at the moment, sorry.
Thanks!
I believe the token limit issues are because of this: https://cloud.google.com/vertex-ai/generative-ai/docs/learn/locations#ml_processing. Not all regions support the longer context for gemini-1.5-pro-002.
Force-pushed from 1c2d235 to f189d18
/gcbrun exp -n dg -ag
1 similar comment