This application provides an AI-powered solution for generating different types of technical content, such as blog posts, README files, code improvements, and video walkthroughs, based on provided inputs. It can process regular text, local files, or GitHub repositories to serve as context for content generation. The purpose of this tool is to streamline the content creation process using the power of AI models.
I have a YouTube video, "How I Use Generative AI to Create Technical Content", that also walks through this project.
I also wrote a blog post about it; make sure to check out "Supercharge Your Technical Content Creation with Gemini"!
The current iteration of this application leverages Gemini through the Google AI Studio and Vertex AI APIs to generate content.
Supported input types
- Blog post
- Code
Supported output types
- Blog post
- GitHub README.md file
- Code base
- Code improvement
- Video walkthrough
The application provides a user-friendly interface built with Gradio. You can input text, a file or a repository path, specify the input and output content types, and provide additional instructions. The application then uses the Gemini AI model to generate the desired content.
- Select the type of the input content in the "Input type" dropdown.
- Choose the desired output content type from the "What kind of content would you like to create?" dropdown.
- Provide an input of one of the supported options below.
- Providing a GitHub Repository: Enter the URL or local path of a Git repository in the "Provide a URL or path to a local repository" textbox.
- Providing text: Enter text in the "Provide a text content as input" textbox.
- Providing a File: Upload a single file using the "Select a file to upload" section.
- If a repository is provided, optionally add parameters on how to parse the repository.
- If a repository is provided, the application parses the repository structure, summarizes its contents, and extracts individual file contents.
- Optionally, add specific instructions or context in the "Additional prompt information" field.
- Click the "Generate content" button.
- The generated content will be displayed in the "Generated content" textbox.
- You can further refine the generated content by adding instructions in the "Keep iterate over the content" field and clicking "Iterate over the content".
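The repository-parsing step described above can be made concrete with a rough sketch. This is an illustrative, stdlib-only example of building a tree summary plus a file-content map from a local repository; the actual parser in this project may work differently:

```python
from pathlib import Path

# Hypothetical sketch; the app's real parser may differ.
SKIP_DIRS = {".git", "__pycache__", ".venv"}

def parse_repository(root: str) -> tuple[str, dict[str, str]]:
    """Return an indented tree summary and a {relative_path: content} map."""
    root_path = Path(root)
    tree_lines: list[str] = []
    contents: dict[str, str] = {}
    for path in sorted(root_path.rglob("*")):
        if any(part in SKIP_DIRS for part in path.parts):
            continue
        rel = path.relative_to(root_path)
        indent = "  " * (len(rel.parts) - 1)
        tree_lines.append(f"{indent}{path.name}")
        if path.is_file():
            try:
                contents[str(rel)] = path.read_text(encoding="utf-8")
            except UnicodeDecodeError:
                pass  # skip binary files
    return "\n".join(tree_lines), contents
```

The tree summary gives the model a cheap overview of the project layout, while the per-file map lets the prompt include only the files that matter.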
- Clone the repository:
git clone https://github.com/dimitreOliveira/content_creator-ai_tools.git
cd content_creator-ai_tools
- Create a virtual environment (recommended):
python -m venv content_creator_ai_tools
source content_creator_ai_tools/bin/activate
- Install the dependencies:
make build
Alternatively, you can use pip:
pip install -r requirements.txt
- Set up the local permissions:
If you are using Google AI Studio as the provider:
- Obtain an API key from Google AI studio.
- Set the GEMINI_API_KEY environment variable with your API key. You can do this by adding the following line to your .bashrc, .zshrc, or similar shell configuration file:
export GEMINI_API_KEY="YOUR_API_KEY"
Or, you can create a .env file in the root directory with the following content (recommended):
GEMINI_API_KEY=YOUR_API_KEY
If using a .env file, ensure you have python-dotenv installed (it should be if you ran make build).
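python-dotenv handles the .env file automatically, but to make the mechanism concrete, here is a minimal stdlib-only sketch of what loading a .env file into the process environment amounts to (illustrative only; use python-dotenv in practice):

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader (python-dotenv does this more robustly)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and malformed entries.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Existing environment variables take precedence.
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

After loading, the application can read the key with `os.environ["GEMINI_API_KEY"]` as if it had been exported in the shell.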
If you are using Vertex AI as the provider:
Make sure that your project supports Vertex AI, then log in with the local SDK:
gcloud auth application-default login
The Makefile provides convenient commands for common tasks:
- Runs the Gradio application; the access link will be printed in the logs so you can open it in your web browser.
make app
- Installs the required dependencies to run the app.
make build
- Runs linting and formatting tools (isort, black, flake8, mypy) to ensure code quality and consistency.
make lint
- Select "Code" as the input type.
- Select "Blog post" as the output type.
- Upload a Python script containing code.
- Provide additional instructions such as "Summarize the code and describe its functionality" in the prompt section.
- Click on "Generate content".
- Select "Code base" as the input type.
- Select "GitHub README.md file" as the output type.
- Enter the URL or local path of the GitHub repository to parse.
- Click on "Parse GitHub repository" to fetch repository summary, tree structure, and file content.
- Provide additional instructions such as "Explain how to set up the environment and run the app" in the prompt section.
- Click on "Generate content".
- Select "Blog post" as the input type.
- Select "Video walkthrough" as the output type.
- Enter the blog post in the "Provide a text content as input" textbox.
- Provide additional instructions such as "The video walkthrough must be engaging and suited for short content" in the prompt section.
- Click on "Generate content".
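The "Iterate over the content" flow mentioned in the usage steps is essentially prompt accumulation: the previously generated content is fed back to the model together with the new instructions. A hypothetical sketch (function name and prompt wording are assumptions, not this repo's actual code):

```python
def build_iteration_prompt(previous_content: str, new_instructions: str) -> str:
    """Combine the last generated content with follow-up instructions
    so the model refines the draft instead of regenerating from scratch."""
    return (
        "Here is the previously generated content:\n\n"
        f"{previous_content}\n\n"
        "Revise it according to these instructions:\n"
        f"{new_instructions}"
    )
```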
The application's behavior can be configured using the configs.yaml file.
The application supports both AI Studio and Vertex AI. Edit the configs.yaml file to select your preferred provider.
generate_public_url: false # Set to true to generate a public shareable link
llm_model_configs:
  provider: ai_studio # One of [ai_studio, vertex_ai]
  model_id: gemini-2.0-pro-exp-02-05 # Or any other supported model
  generation_config:
    temperature: 0.7
    top_p: 0.95
    top_k: 40
    max_output_tokens: 10000
  vertex_ai: # Only needed if provider is "vertex_ai"
    project: "{your-gcp-project-id}"
    location: "{your-gcp-project-location}"
- generate_public_url: Whether the app will generate a public shareable link. If true, check the logs for the URL (Gradio public URLs expire after 72 hours).
- llm_model_configs:
  - provider: Specifies the Gemini API provider: "ai_studio" or "vertex_ai".
  - model_id: The ID of the Gemini model to use (e.g., "gemini-2.0-flash-exp").
  - generation_config: Parameters to control the content generation process.
    - temperature: Controls the randomness of the output (0.0 is deterministic, 1.0 is most random).
    - top_p: Controls the diversity of the output (nucleus sampling).
    - top_k: Controls the diversity of the output (top-k sampling).
    - max_output_tokens: The maximum number of tokens to generate.
  - vertex_ai: (Only required if provider is "vertex_ai")
    - project: Your Google Cloud project ID.
    - location: The Google Cloud region (e.g., "us-central1").
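As a sketch of how these generation_config values might be read and validated before being handed to the model client, here is an illustrative example; the class and function names are assumptions, not necessarily this repo's code, and the temperature bounds follow this README's description:

```python
from dataclasses import dataclass

@dataclass
class GenerationConfig:
    # Defaults mirror the configs.yaml example above.
    temperature: float = 0.7
    top_p: float = 0.95
    top_k: int = 40
    max_output_tokens: int = 10000

    def __post_init__(self) -> None:
        if not 0.0 <= self.temperature <= 1.0:
            raise ValueError("temperature must be between 0.0 and 1.0")
        if not 0.0 < self.top_p <= 1.0:
            raise ValueError("top_p must be in (0.0, 1.0]")

def config_from_dict(raw: dict) -> GenerationConfig:
    """Build a GenerationConfig from an llm_model_configs-style mapping,
    falling back to defaults for any missing keys."""
    return GenerationConfig(**raw.get("generation_config", {}))
```

Validating early like this surfaces configuration mistakes at startup instead of as an opaque API error.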
Contributions to this project are welcome! Feel free to fork the repository, make changes, and submit a pull request. Before submitting, please ensure your code passes the linting checks by running:
make lint
- Google Cloud credits are provided for this project as part of the #VertexAISprint
- This application utilizes external APIs (like Google Gemini) which may have their own terms of service and usage limitations.
- The quality of the generated content depends on the input provided and the capabilities of the underlying AI model.
- Ensure you have the necessary permissions and comply with the terms of service for the underlying AI model.
- The application is for informational and creative purposes. Always review and verify the generated content before using it.
- Add support for adding multiple files to the prompt in any order (e.g. [file, text], [file, text, file, file], etc.)
- Add support for local open-source models
- Add support for TTS audio generation
- Add support for image generation
  - Create a set X of optional illustrations for the content
- Add support for video generation
  - Create a set X of optional videos for the content