LogicLink is a conversational AI chatbot developed by Kratu Gautam (AIML Engineer). Powered by the TinyLlama-1.1B-Chat-v1.0 model, it provides an interactive interface for engaging conversations, query resolution, and task assistance. Version 5 features streaming responses, conversation management, and a sleek GUI.
- ✨ Key Features
- 📸 GUI Display
- 🛠️ Installation
- 💬 Usage
- ⚙️ Technical Architecture
- 🧪 Troubleshooting Guide
- 🚀 Future Roadmap
- 📄 License
| Feature | Description | Benefit |
|---|---|---|
| 🤖 Conversational AI | TinyLlama-1.1B-Chat-v1.0 powered responses | Natural, engaging dialogue |
| ⚡ Streaming Responses | Real-time token generation with `TextIteratorStreamer` | Smooth user experience |
| 🎨 Customizable GUI | Red/blue/black theme with Gradio & ModelScope Studio | Professional interface |
| 🗂️ Conversation Management | New chat, clear history, delete conversations | Full control over interactions |
| ⏱️ Single Timestamp | Regex-cleaned response timing *(4.50s)* | Consistent performance metrics |
| 🚀 CUDA Support | Automatic GPU detection with CPU fallback | Optimized performance |
| 🛡️ Error Handling | Graceful failure for memory/input issues | Robust user experience |
LogicLink engaging in a complete dialogue, handling multiple turns seamlessly.
This demonstrates its ability to maintain context, respond naturally, and adapt to user intent across an extended session.
LogicLink generating a structured coding solution.
Notice how it explains the reasoning step-by-step, making the output not just correct but also educational.
A continuation of the coding workflow, where LogicLink refines and expands on its earlier solution.
This shows its iterative reasoning ability: improving code quality when prompted.
A snapshot of LogicLink delivering a core logical explanation.
This highlights its strength in breaking down abstract queries into clear, actionable insights.
The system mid-inference, showing its real-time feedback loop.
This reassures users that LogicLink is actively working on their request.
A side-by-side comparison of LogicLink's performance with and without LOTB (Latest Output Text Box).
The difference illustrates how LOTB enhances reasoning depth and response clarity.
The footer view of the interface, where conversation summaries and quick actions are displayed.
This ties the user experience together, making LogicLink feel like a polished, end-to-end assistant.
- Python 3.8+
- CUDA-enabled GPU (recommended)
- Dependencies:
```bash
pip install gradio torch transformers modelscope-studio
```
- Clone repository:
```bash
git clone https://github.com/Kratugautam99/LogicLink-Project.git
cd LogicLink-Project
```
- Install dependencies:
```bash
pip install -r requirements.txt
```
- Run application:
```bash
python app.py
```
```
LogicLink-Project/
├── LogicLinkVersion5.ipynb
├── README.md
├── app.py
├── config.py
├── .gitattributes
├── requirements.txt
├── assets/
├── Documents/
├── Screenshots/
├── ui_components/
└── Different Versions of LogicLink/ (not expanded)
```
```
# Sample interaction flow
user >> "Who are you?"
LogicLink >> "I'm LogicLink V5, created by Kratu Gautam. How can I assist you today? *(4.50s)*"
```
Interface Controls:
- 💬 Input field: Type queries
- ➕ New Chat: Start fresh conversation
- 🧹 Clear History: Reset current chat
- 🗑️ Delete: Remove conversations from sidebar
Performance Metrics:
- ⏱️ Response time: 3-5s (GPU), 5-8s (CPU)
- 💾 RAM usage: 2-3GB (CPU), ~1.5GB (GPU)
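The automatic GPU detection with CPU fallback described above can be sketched as follows. `pick_device` is a hypothetical helper name, not the project's actual function; the dtype choice mirrors the model-loading code shown later (float16 on GPU to halve memory, float32 on CPU).

```python
import torch

def pick_device():
    """Prefer a CUDA GPU when one is visible; otherwise fall back to CPU.

    float16 roughly halves GPU memory use; float32 stays numerically
    safe on CPU, where half precision is poorly supported.
    """
    if torch.cuda.is_available():
        return torch.device("cuda"), torch.float16
    return torch.device("cpu"), torch.float32

device, dtype = pick_device()
```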
```python
# Core model parameters
model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    torch_dtype=torch.float16 if cuda else torch.float32
)

# Generation settings
generation_kwargs = {
    "max_new_tokens": 1024,
    "temperature": 0.7,
    "top_k": 50,
    "top_p": 0.95,
    "num_beams": 1
}
```
Prompt Engineering:
```
<|system|>You are LogicLink V5 created by Kratu Gautam</s> <|user|>{user_input}</s> <|assistant|>
```
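For illustration, the template above can be assembled with a small helper. `build_prompt` is a hypothetical name, and the single-space separation between segments follows the template as printed here; the exact whitespace TinyLlama's tokenizer expects is an assumption.

```python
def build_prompt(user_input: str) -> str:
    # Fill in the LogicLink chat template: system persona,
    # then the user turn, then the assistant cue the model completes.
    system = "You are LogicLink V5 created by Kratu Gautam"
    return f"<|system|>{system}</s> <|user|>{user_input}</s> <|assistant|>"

prompt = build_prompt("Who are you?")
```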
Streaming Pipeline:
```mermaid
graph LR
  A[User Input] --> B(Tokenizer)
  B --> C{TextIteratorStreamer}
  C --> D[Model Generation]
  D --> E[Real-time Output]
  E --> F[Regex Cleaner]
  F --> G[Timestamp Append]
```
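In the real pipeline, `model.generate` runs in a background thread and pushes decoded text into a `transformers.TextIteratorStreamer`, which the GUI thread iterates as tokens arrive. The underlying thread-plus-queue pattern can be sketched without loading the model; `TinyStreamer` and `fake_generate` below are stand-ins for illustration, not project code.

```python
import threading
import queue

class TinyStreamer:
    """Minimal stand-in for transformers.TextIteratorStreamer."""
    _END = object()  # sentinel marking the end of generation

    def __init__(self):
        self._queue = queue.Queue()

    def put(self, text):   # called by the generation thread
        self._queue.put(text)

    def end(self):         # called once generation finishes
        self._queue.put(self._END)

    def __iter__(self):    # consumed by the GUI thread
        while True:
            chunk = self._queue.get()
            if chunk is self._END:
                return
            yield chunk

def fake_generate(streamer):
    # Stands in for model.generate(..., streamer=streamer).
    for token in ["Logic", "Link ", "V5"]:
        streamer.put(token)
    streamer.end()

streamer = TinyStreamer()
worker = threading.Thread(target=fake_generate, args=(streamer,))
worker.start()
response = "".join(streamer)  # chunks arrive as they are produced
worker.join()
```

The GUI can update the chat display on each yielded chunk instead of waiting for the full `join`, which is what makes the output feel live.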
GUI Components:
- `pro.Chatbot`: Conversation display
- `antdx.Sender`: Input field
- `antdx.Conversations`: Sidebar manager
- `antd.Button`: Action controls
| Issue | Solution |
|---|---|
| Double timestamps | Verify regex: `re.sub(r'\*\(\d+\.\d+s\)\*', '', response)` |
| Slow responses | Enable CUDA, reduce `max_new_tokens` to 512 |
| GUI rendering issues | Update packages: `pip install --upgrade gradio modelscope-studio` |
| Delete button failure | Check `menu_click` event binding in JS |
| Model loading errors | Validate RAM ≥3GB, test with minimal example |
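The double-timestamp fix can be checked in isolation. `clean_response` and `stamp` are hypothetical helper names wrapping the regex from the table; the idea is to strip any timestamp already embedded in the text before appending exactly one.

```python
import re

# Matches an embedded timing marker such as *(4.50s)*
TIMESTAMP = re.compile(r'\*\(\d+\.\d+s\)\*')

def clean_response(response: str) -> str:
    # Remove any timestamps the model output already contains.
    return TIMESTAMP.sub('', response).strip()

def stamp(response: str, seconds: float) -> str:
    # Append exactly one timestamp after cleaning.
    return f"{clean_response(response)} *({seconds:.2f}s)*"
```

For example, `stamp("Hello! *(9.99s)*", 4.5)` produces a response ending in a single `*(4.50s)*` marker rather than two.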
Minimal Test Script:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
inputs = tokenizer(["Test input"], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0]))
```

- Persistent Storage: SQLite conversation history
- Multimodal Support: Image/text inputs
- Enhanced Prompting: Context-aware responses
- Deployment Options: Docker containerization
- Performance: Quantization for CPU optimization
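One plausible route for the quantization item above is PyTorch's dynamic quantization, which stores `Linear` weights as int8 and quantizes activations on the fly for CPU inference. The toy model below is purely illustrative, not LogicLink's architecture, and this is a sketch of the technique rather than a committed design.

```python
import torch
import torch.nn as nn

# Toy stand-in for a transformer's dense layers.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8))

# Dynamic quantization: Linear weights become int8,
# shrinking memory and often speeding up CPU inference.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

out = qmodel(torch.randn(1, 64))
```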
MIT License - See LICENSE







