
Commit 50acc69

Version 2.3.0: Added Deepseek V3/R1 and OpenAI O1 model support

1 parent 26b01fb commit 50acc69

12 files changed (+152, -75 lines)

README.md

Lines changed: 10 additions & 2 deletions
@@ -99,8 +99,9 @@ export HUGGINGFACE_API_KEY="Your HuggingFace API Key"
 export PALM_API_KEY="Your Google Palm API Key"
 export GEMINI_API_KEY="Your Google Gemini API Key"
 export OPENAI_API_KEY="Your OpenAI API Key"
-export GROQ_API_KEY="Your Groq AI API Key"
-export ANTHROPIC_API_KEY="Your Anthropic AI API Key"
+export GROQ_API_KEY="Your Groq API Key"
+export ANTHROPIC_API_KEY="Your Anthropic API Key"
+export DEEPSEEK_API_KEY="Your Deepseek API Key"
 ```
 
 # Offline models setup.</br>
@@ -169,6 +170,11 @@ To use Code-Interpreter, use the following command options:
 - List of all **models** are (**Contribute - MORE**): </br>
 - `gpt-3.5-turbo` - Generates code using the GPT 3.5 Turbo model.
 - `gpt-4` - Generates code using the GPT 4 model.
+- `o1-mini` - Generates code using the OpenAI o1-mini model.
+- `o1-preview` - Generates code using the OpenAI o1-preview model.
+- `deepseek-chat` - Generates response using the Deepseek chat model.
+- `deepseek-coder` - Generates code using the Deepseek coder model.
+- `deepseek-reasoner` - Generates code using the Deepseek reasoner model.
 - `gemini-pro` - Generates code using the Gemini Pro model.
 - `palm-2` - Generates code using the PALM 2 model.
 - `claude-2` - Generates code using the AnthropicAI Claude-2 model.
@@ -337,6 +343,8 @@ If you're interested in contributing to **Code-Interpreter**, we'd love to have
 - Resolved pip package installation issues for smoother and more reliable setup.
 - **v2.2.1** - Fixed **No Content/Response from LLM** Bug, Fixed _Debug Mode_ with **Logs**.
 
+- **v2.3.0** - Added Deepseek V3 and R1 models support now. Added OpenAI o1 Models support.
+
 ---
 
 ## 📜 **License**
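As context for the environment-variable block above, the interpreter reads provider keys (including the new `DEEPSEEK_API_KEY`) from the process environment; a minimal stdlib-only sketch (the placeholder value is hypothetical, real keys belong in your `.env` file):

```python
import os

# Hypothetical placeholder for illustration; in practice the key is set in your
# shell or loaded from the .env file with python-dotenv's load_dotenv().
os.environ.setdefault("DEEPSEEK_API_KEY", "sk-example-deepseek")

deepseek_key = os.getenv("DEEPSEEK_API_KEY")
if not deepseek_key:
    raise SystemExit("DEEPSEEK_API_KEY not found in .env file.")
```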

configs/deepseek-chat.config

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
+temperature=0.1
+max_tokens=1024
+start_sep=```
+end_sep=```
+skip_first_line=False
+HF_MODEL = deepseek-chat

configs/deepseek-coder.config

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
+temperature=0.1
+max_tokens=1024
+start_sep=```
+end_sep=```
+skip_first_line=False
+HF_MODEL = deepseek-coder

configs/deepseek-reasoner.config

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
+temperature=0.1
+max_tokens=1024
+start_sep=```
+end_sep=```
+skip_first_line=False
+HF_MODEL = deepseek-reasoner
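The three Deepseek config files above share one key=value layout. A hypothetical parser sketch (not the project's actual config loader) illustrating how such a file can be read:

```python
def parse_config(text: str) -> dict:
    """Parse key=value lines into a dict, skipping blanks and comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Split on the first '=' only, so values may themselves contain '='.
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

sample = """temperature=0.1
max_tokens=1024
start_sep=```
end_sep=```
skip_first_line=False
HF_MODEL = deepseek-reasoner"""

cfg = parse_config(sample)
```

Note that values stay strings here; a real loader would still cast `temperature` and `max_tokens` to numbers.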

configs/gpt-o1-mini.config

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+
+# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
+temperature = 0.1
+
+# The maximum number of new tokens that the model can generate.
+max_tokens = 1024
+
+# The start separator for the generated code.
+start_sep = ```
+
+# The end separator for the generated code.
+end_sep = ```
+
+# If True, the first line of the generated text will be skipped.
+skip_first_line = True
+
+# The model used for generating the code.
+HF_MODEL = o1-mini

configs/gpt-o1-preview.config

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+
+# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
+temperature = 0.1
+
+# The maximum number of new tokens that the model can generate.
+max_tokens = 1024
+
+# The start separator for the generated code.
+start_sep = ```
+
+# The end separator for the generated code.
+end_sep = ```
+
+# If True, the first line of the generated text will be skipped.
+skip_first_line = True
+
+# The model used for generating the code.
+HF_MODEL = o1-preview
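Unlike the Deepseek configs, the two o1 configs set `skip_first_line = True`. A sketch of how that flag plausibly interacts with `start_sep`/`end_sep` when pulling code out of a model reply (the helper name and logic are assumptions, not the project's implementation):

```python
def extract_code(text: str, start_sep: str = "```", end_sep: str = "```",
                 skip_first_line: bool = True) -> str:
    # Grab the content between the first start separator and the next end separator.
    start = text.index(start_sep) + len(start_sep)
    end = text.index(end_sep, start)
    code = text[start:end].strip("\n")
    if skip_first_line:
        # Drop the language-tag line (e.g. "python") that models often emit first.
        code = code.split("\n", 1)[1] if "\n" in code else ""
    return code

reply = "Here you go:\n```python\nprint('hi')\n```"
code = extract_code(reply)
```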

interpreter

Lines changed: 51 additions & 50 deletions
@@ -13,7 +13,7 @@ Command line arguments:
 --display_code, -dc: Displays the generated code in the output.
 
 Author: HeavenHM
-Date: 2023/12/01
+Date: 2025/01/01
 """
 
 from libs.interpreter_lib import Interpreter
@@ -25,57 +25,58 @@ from libs.markdown_code import display_markdown_message
 from libs.utility_manager import UtilityManager
 
 # The main version of the interpreter.
-INTERPRETER_VERSION = "2.1.3"
+INTERPRETER_VERSION = "2.3.0"
+
 
 def main():
-    parser = argparse.ArgumentParser(description='Code - Interpreter')
-    parser.add_argument('--exec', '-e', action='store_true', default=False, help='Execute the code')
-    parser.add_argument('--save_code', '-s', action='store_true', default=False, help='Save the generated code')
-    parser.add_argument('--mode', '-md', choices=['code', 'script', 'command','vision','chat'], help='Select the mode (`code` for generating code, `script` for generating shell scripts, `command` for generating single line commands) `vision` for generating text from images')
-    parser.add_argument('--model', '-m', type=str, default='code-llama', help='Set the model for code generation. (Defaults to gpt-3.5-turbo)')
-    parser.add_argument('--version', '-v', action='version', version='%(prog)s '+ INTERPRETER_VERSION)
-    parser.add_argument('--lang', '-l', type=str, default='python', help='Set the interpreter language. (Defaults to Python)')
-    parser.add_argument('--display_code', '-dc', action='store_true', default=False, help='Display the code in output')
-    parser.add_argument('--history', '-hi', action='store_true', default=False, help='Use history as memory')
-    parser.add_argument('--upgrade', '-up', action='store_true', default=False, help='Upgrade the interpreter')
-    parser.add_argument('--file', '-f', type=str, nargs='?', const='prompt.txt', default=None, help='Sets the file to read the input prompt from')
-    args = parser.parse_args()
+    parser = argparse.ArgumentParser(description='Code - Interpreter')
+    parser.add_argument('--exec', '-e', action='store_true', default=False, help='Execute the code')
+    parser.add_argument('--save_code', '-s', action='store_true', default=False, help='Save the generated code')
+    parser.add_argument('--mode', '-md', choices=['code', 'script', 'command', 'vision', 'chat'], help='Select the mode (`code` for generating code, `script` for generating shell scripts, `command` for generating single line commands) `vision` for generating text from images')
+    parser.add_argument('--model', '-m', type=str, default='code-llama', help='Set the model for code generation. (Defaults to gpt-3.5-turbo)')
+    parser.add_argument('--version', '-v', action='version', version='%(prog)s ' + INTERPRETER_VERSION)
+    parser.add_argument('--lang', '-l', type=str, default='python', help='Set the interpreter language. (Defaults to Python)')
+    parser.add_argument('--display_code', '-dc', action='store_true', default=False, help='Display the code in output')
+    parser.add_argument('--history', '-hi', action='store_true', default=False, help='Use history as memory')
+    parser.add_argument('--upgrade', '-up', action='store_true', default=False, help='Upgrade the interpreter')
+    parser.add_argument('--file', '-f', type=str, nargs='?', const='prompt.txt', default=None, help='Sets the file to read the input prompt from')
+    args = parser.parse_args()
 
-    # Check if only the application name is passed
-    if len(sys.argv) <= 1:
-        parser.print_help()
-        return
-
-    warnings.filterwarnings("ignore")  # To ignore all warnings
+    # Check if only the application name is passed
+    if len(sys.argv) <= 1:
+        parser.print_help()
+        return
+
+    warnings.filterwarnings("ignore")  # To ignore all warnings
 
-    # Upgrade the interpreter if the --upgrade flag is passed.
-    if args.upgrade:
-        UtilityManager.upgrade_interpreter()
-        return
-
-    # Create an instance of the Interpreter class and call the main method.
-    interpreter = Interpreter(args)
-    interpreter.interpreter_main(INTERPRETER_VERSION)
-
-if __name__ == "__main__":
-    try:
-        main()
-    except SystemExit:
-        pass  # Ignore the SystemExit exception caused by --version argument
-    except Exception as exception:
+    # Upgrade the interpreter if the --upgrade flag is passed.
+    if args.upgrade:
+        UtilityManager.upgrade_interpreter()
+        return
+
+    # Create an instance of the Interpreter class and call the main method.
+    interpreter = Interpreter(args)
+    interpreter.interpreter_main(INTERPRETER_VERSION)
 
-        # Print a meaningful error message if the interpreter is not setup properly.
-        if ".env file" in str(exception):
-            display_markdown_message("Interpreter is not setup properly. Please follow these steps \
-                to setup the interpreter:\n\
-                1. Create a .env file in the root directory of the project.\n\
-                2. Add the following line to the .env file:\n\
-                GEMINI_API_KEY=<your api key>\n\
-                OPENAI_API_KEY=<your api key>\n\
-                ANTHROPIC_API_KEY=<your api key>\n\
-                3. Replace <your api key> with your OpenAI/Gemini API key.\n\
-                4. Run the interpreter again.")
-
-        else:
-            display_markdown_message(f"An error occurred: {exception}")
-            traceback.print_exc()
+
+if __name__ == "__main__":
+    try:
+        main()
+    except SystemExit:
+        pass  # Ignore the SystemExit exception caused by --version argument
+    except Exception as exception:
+        # Print a meaningful error message if the interpreter is not setup properly.
+        if ".env file" in str(exception):
+            display_markdown_message("Interpreter is not setup properly. Please follow these steps \
+                                     to setup the interpreter:\n\
+                                     1. Create a .env file in the root directory of the project.\n\
+                                     2. Add the following line to the .env file:\n\
+                                     GEMINI_API_KEY=<your api key>\n\
+                                     OPENAI_API_KEY=<your api key>\n\
+                                     ANTHROPIC_API_KEY=<your api key>\n\
+                                     3. Replace <your api key> with your OpenAI/Gemini API key.\n\
+                                     4. Run the interpreter again.")
+
+        else:
+            display_markdown_message(f"An error occurred interpreter main: {exception}")
+            traceback.print_exc()
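The argument parser in the diff above can be exercised in isolation; a reduced sketch with two of its flags, using the defaults shown in the diff:

```python
import argparse

parser = argparse.ArgumentParser(description='Code - Interpreter')
parser.add_argument('--exec', '-e', action='store_true', default=False, help='Execute the code')
parser.add_argument('--model', '-m', type=str, default='code-llama', help='Set the model for code generation.')

# Parse a hypothetical invocation selecting the new Deepseek chat model.
args = parser.parse_args(['--exec', '--model', 'deepseek-chat'])
```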

interpreter.py

Lines changed: 10 additions & 10 deletions
@@ -1,3 +1,4 @@
+# -*- coding: utf-8 -*-*
 """
 This is the main file for the Code-Interpreter.
 It handles command line arguments and initializes the Interpreter.
@@ -24,7 +25,7 @@
 from libs.utility_manager import UtilityManager
 
 # The main version of the interpreter.
-INTERPRETER_VERSION = "2.2.1"
+INTERPRETER_VERSION = "2.3.0"
 
 
 def main():
@@ -64,18 +65,17 @@ def main():
     except SystemExit:
         pass  # Ignore the SystemExit exception caused by --version argument
     except Exception as exception:
-
         # Print a meaningful error message if the interpreter is not setup properly.
         if ".env file" in str(exception):
             display_markdown_message("Interpreter is not setup properly. Please follow these steps \
-                to setup the interpreter:\n\
-                1. Create a .env file in the root directory of the project.\n\
-                2. Add the following line to the .env file:\n\
-                GEMINI_API_KEY=<your api key>\n\
-                OPENAI_API_KEY=<your api key>\n\
-                ANTHROPIC_API_KEY=<your api key>\n\
-                3. Replace <your api key> with your OpenAI/Gemini API key.\n\
-                4. Run the interpreter again.")
+                                     to setup the interpreter:\n\
+                                     1. Create a .env file in the root directory of the project.\n\
+                                     2. Add the following line to the .env file:\n\
+                                     GEMINI_API_KEY=<your api key>\n\
+                                     OPENAI_API_KEY=<your api key>\n\
+                                     ANTHROPIC_API_KEY=<your api key>\n\
+                                     3. Replace <your api key> with your OpenAI/Gemini API key.\n\
+                                     4. Run the interpreter again.")
 
         else:
             display_markdown_message(f"An error occurred interpreter main: {exception}")

libs/code_interpreter.py

Lines changed: 1 addition & 0 deletions
@@ -216,3 +216,4 @@ def execute_command(self, command:str):
     except Exception as exception:
         self.logger.error(f"Error in executing command: {str(exception)}")
         raise exception
+

libs/interpreter_lib.py

Lines changed: 18 additions & 9 deletions
@@ -25,7 +25,6 @@
 from dotenv import load_dotenv
 import shlex
 import shutil
-import logging
 
 class Interpreter:
     logger = None
@@ -100,9 +99,10 @@ def initialize(self):
             self.logger.error("Exception on initializing readline history")
 
     def initialize_client(self):
-        load_dotenv()
+        load_dotenv(dotenv_path=os.path.join(os.getcwd(), "../.env"), override=True)
         self.logger.info("Initializing Client")
-
+        config_file_name: str = ""
+
         self.logger.info(f"Interpreter model selected is '{self.INTERPRETER_MODEL}'")
         if self.INTERPRETER_MODEL is None or self.INTERPRETER_MODEL == "":
             self.logger.info("HF_MODEL is not provided, using default model.")
@@ -126,7 +126,7 @@ def initialize_client(self):
             self.logger.info("Using local API key from environment variables.")
 
             if api_key is None:
-                load_dotenv(dotenv_path=os.path.join(os.getcwd(), ".env"))
+                load_dotenv(dotenv_path=os.path.join(os.getcwd(), "../.env"), override=True)
                 api_key = os.getenv('OPENAI_API_KEY')
                 if api_key is None:
                     self.logger.info("Setting default local API key for local models.")
@@ -141,6 +141,7 @@ def initialize_client(self):
             "claude": {"key_name": "ANTHROPIC_API_KEY", "prefix": "sk-ant-"},
             "palm": {"key_name": "PALM_API_KEY", "prefix": None, "length": 15},
             "gemini": {"key_name": "GEMINI_API_KEY", "prefix": None, "length": 15},
+            "deepseek": {"key_name": "DEEPSEEK_API_KEY", "prefix": None, "length": 10},
             "default": {"key_name": "HUGGINGFACE_API_KEY", "prefix": "hf_"}
         }
 
@@ -149,7 +150,7 @@ def initialize_client(self):
         api_key_name = api_key_info["key_name"]
         api_key = os.getenv(api_key_name)
         if api_key is None:
-            load_dotenv(dotenv_path=os.path.join(os.getcwd(), ".env"))
+            load_dotenv(dotenv_path=os.path.join(os.getcwd(), "../.env"), override=True)
             api_key = os.getenv(api_key_name)
             if not api_key:
                 raise Exception(f"{api_key_name} not found in .env file.")
@@ -168,8 +169,8 @@ def initialize_mode(self):
         if not self.SCRIPT_MODE and not self.COMMAND_MODE and not self.VISION_MODE and not self.CHAT_MODE:
             self.CODE_MODE = True
 
-    def get_prompt(self, message: str, chat_history: List[dict]) -> str:
-        system_message = None
+    def get_prompt(self, message: str, chat_history: List[dict]) -> List[dict] | str:
+        system_message: str = ""
 
         if self.CODE_MODE:
             system_message = self.system_message
@@ -357,6 +358,14 @@ def generate_content(self, message, chat_history: list[tuple[str, str]], tempera
                 raise Exception("Exception api base not set for custom model")
             self.logger.info("Response received from completion function.")
 
+        # Check if the model is Deepseek
+        elif 'deepseek' in self.INTERPRETER_MODEL:
+            self.logger.info("Model is Deepseek.")
+            # Ensure the model string is prefixed with "deepseek/"
+            if not self.INTERPRETER_MODEL.startswith("deepseek/"):
+                self.INTERPRETER_MODEL = "deepseek/" + self.INTERPRETER_MODEL
+            response = litellm.completion(self.INTERPRETER_MODEL, messages=messages, temperature=temperature, max_tokens=max_tokens)
+            self.logger.info("Response received from Deepseek completion.")
 
         # Check if model are from Hugging Face.
         else:
@@ -421,7 +430,7 @@ def get_command_prompt(self, task, os_name):
             "NOTE: Ensure the command is compatible with the specified OS and version.\n"
             "Output should only contain the command, with no additional text."
         )
-        self.logger.info(f"Command Prompt: {prompt}")
+        self.logger.info("Command Prompt: {prompt}")
         return prompt
 
     def handle_vision_mode(self, task):
@@ -764,7 +773,7 @@ def interpreter_main(self, version):
             continue
 
         fix_prompt = f"Fix the errors in {self.INTERPRETER_LANGUAGE} language.\nCode is \n'{code_snippet}'\nAnd Error is \n'{code_error}'\n"
-        f"give me output only in code and no other text or explanation. And comment in code where you fixed the error.\n"
+        "give me output only in code and no other text or explanation. And comment in code where you fixed the error.\n"
 
         # Start the LLM Request.
         self.logger.info(f"Fix Prompt: {fix_prompt}")
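The new Deepseek branch in `generate_content` normalizes the model name before calling LiteLLM, which expects provider-prefixed names such as `deepseek/deepseek-chat`. The prefix logic in isolation (extracted as a standalone function for illustration):

```python
def normalize_deepseek_model(model: str) -> str:
    # LiteLLM routes on the provider prefix, so "deepseek-chat" must become
    # "deepseek/deepseek-chat"; already-prefixed or non-Deepseek names pass through.
    if 'deepseek' in model and not model.startswith("deepseek/"):
        model = "deepseek/" + model
    return model

model = normalize_deepseek_model("deepseek-chat")
```

Note that in the diff the prefix is written back to `self.INTERPRETER_MODEL`, so subsequent calls see the already-prefixed name and skip the concatenation.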

libs/logger.py

Lines changed: 3 additions & 3 deletions
@@ -1,14 +1,14 @@
 import logging
 from logging.handlers import RotatingFileHandler
-
+from typing import Optional
 
 class Logger:
-    _logger = None
+    _logger: Optional[logging.Logger] = None
     _file_handler = None
     _console_handler = None
 
     @staticmethod
-    def initialize(filename: str):
+    def initialize(filename: str) -> logging.Logger:
         if Logger._logger is None:
             Logger._logger = logging.getLogger(filename)
             Logger._logger.setLevel(logging.DEBUG)
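The `libs/logger.py` change adds an `Optional[logging.Logger]` annotation and a return type to the existing singleton pattern; a runnable sketch of that pattern with the handler setup trimmed:

```python
import logging
from typing import Optional

class Logger:
    _logger: Optional[logging.Logger] = None

    @staticmethod
    def initialize(filename: str) -> logging.Logger:
        # Create the logger once; later calls return the same instance.
        if Logger._logger is None:
            Logger._logger = logging.getLogger(filename)
            Logger._logger.setLevel(logging.DEBUG)
        return Logger._logger

a = Logger.initialize("interpreter.log")
b = Logger.initialize("interpreter.log")
```

The `Optional` annotation lets static checkers verify that `_logger` is checked for `None` before use.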

requirements.txt

Lines changed: 5 additions & 1 deletion
@@ -1,5 +1,6 @@
 # Libraries for code interpretation and execution
-litellm
+litellm>=1.60.0
+litellm[proxy]
 Pygments
 pyreadline
 python-dotenv
@@ -15,3 +16,6 @@ plotly
 
 # Libraries for standard libries
 stdlib_list
+
+# Legacy libraries
+legacy-cgi
