
LogicLink: Version 5

LogicLink is a conversational AI chatbot developed by Kratu Gautam (AIML Engineer). Powered by the TinyLlama-1.1B-Chat-v1.0 model, it provides an interactive interface for engaging conversations, query resolution, and task assistance. Version 5 features streaming responses, conversation management, and a sleek GUI.

LogicLink Logo

πŸ” Topic Index


✨ Key Features

| Feature | Description | Benefit |
| --- | --- | --- |
| 🤖 Conversational AI | Responses powered by TinyLlama-1.1B-Chat-v1.0 | Natural, engaging dialogue |
| ⚡ Streaming Responses | Real-time token generation with `TextIteratorStreamer` | Smooth user experience |
| 🎨 Customizable GUI | Red/blue/black theme built with Gradio & ModelScope Studio | Professional interface |
| 🗂️ Conversation Management | New chat, clear history, delete conversations | Full control over interactions |
| ⏱️ Single Timestamp | Regex-cleaned response timing `*(4.50s)*` | Consistent performance metrics |
| 🚀 CUDA Support | Automatic GPU detection with CPU fallback | Optimized performance |
| 🛡️ Error Handling | Graceful failure for memory/input issues | Robust user experience |

πŸ“Έ GUI Display


πŸ’¬ Full-Fledged Conversation

LogicLink Full Conversation

LogicLink engaging in a complete dialogue, handling multiple turns seamlessly.
This demonstrates its ability to maintain context, respond naturally, and adapt to user intent across an extended session.


πŸ§‘β€πŸ’» Coding Response (Part 1)

LogicLink Coding Response 1

LogicLink generating a structured coding solution.
Notice how it explains the reasoning step-by-step, making the output not just correct but also educational.


πŸ§‘β€πŸ’» Coding Response (Part 2)

LogicLink Coding Response 2

A continuation of the coding workflow, where LogicLink refines and expands on its earlier solution.
This shows its iterative reasoning ability β€” improving code quality when prompted.


πŸ”‘ Core Response

LogicLink Core Response

A snapshot of LogicLink delivering a core logical explanation.
This highlights its strength in breaking down abstract queries into clear, actionable insights.


⚑ While Processing

LogicLink While Processing

The system mid‑inference, showing its real-time feedback loop.
This reassures users that LogicLink is actively working on their request.


πŸ”„ With vs Without Latest Output Text Box

LogicLink with LOTB LogicLink without LOTB

A side-by-side comparison of the interface with and without the LOTB (Latest Output Text Box).
The difference shows how the LOTB makes a streaming response easier to follow by surfacing the latest output in its own panel.


πŸ“Š Bottom Section

LogicLink Bottom Section

The footer view of the interface, where conversation summaries and quick actions are displayed.
This ties the user experience together, making LogicLink feel like a polished, end‑to‑end assistant.


πŸ› οΈ Installation

Prerequisites

  • Python 3.8+
  • CUDA-enabled GPU (recommended)
  • Dependencies:
    pip install gradio torch transformers modelscope-studio

Setup

  1. Clone repository:
    git clone https://github.com/Kratugautam99/LogicLink-Project.git
    cd LogicLink-Project
  2. Install dependencies:
    pip install -r requirements.txt
  3. Run application:
    python app.py

Directory Structure

LogicLink-Project/
β”œβ”€β”€ LogicLinkVersion5.ipynb
β”œβ”€β”€ README.md
β”œβ”€β”€ app.py
β”œβ”€β”€ config.py
β”œβ”€β”€ .gitattributes
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ assets/
β”œβ”€β”€ Documents/
β”œβ”€β”€ Screenshots/
β”œβ”€β”€ ui_components/
└── Different Versions of LogicLink/

πŸ’¬ Usage

# Sample interaction flow
user >> "Who are you?"
LogicLink >> "I'm LogicLink V5, created by Kratu Gautam. How can I assist you today? *(4.50s)*"
  1. Interface Controls:

    • πŸ’¬ Input field: Type queries
    • βž• New Chat: Start fresh conversation
    • 🧹 Clear History: Reset current chat
    • πŸ—‘οΈ Delete: Remove conversations from sidebar
  2. Performance Metrics:

    • ⏱️ Response time: 3-5s (GPU), 5-8s (CPU)
    • πŸ’Ύ RAM usage: 2-3GB (CPU), ~1.5GB (GPU)

⚙️ Technical Architecture

Model Configuration

# Core model parameters
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32
)

# Generation settings
generation_kwargs = {
    "max_new_tokens": 1024,
    "temperature": 0.7,
    "top_k": 50,
    "top_p": 0.95,
    "num_beams": 1
}
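One detail worth noting: in `transformers`, `temperature`, `top_k`, and `top_p` only take effect when sampling is enabled, and greedy decoding (the default with `num_beams=1`) ignores them. A minimal sketch of the same settings with an explicit `do_sample` flag (the flag is an assumption about the app's intent, not taken from the repo):

```python
# Sketch: the generation settings above, with do_sample made explicit.
# Without do_sample=True, transformers decodes greedily and silently
# ignores temperature/top_k/top_p. do_sample=True is an assumption.
generation_kwargs = {
    "max_new_tokens": 1024,
    "temperature": 0.7,
    "top_k": 50,
    "top_p": 0.95,
    "num_beams": 1,
    "do_sample": True,
}

# The dict is then unpacked into the call:
# model.generate(**inputs, **generation_kwargs)
print(sorted(generation_kwargs))
```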

Key Components

  1. Prompt Engineering:

    <|system|>You are LogicLink V5 created by Kratu Gautam</s>
    <|user|>{user_input}</s>
    <|assistant|>
    
  2. Streaming Pipeline:

    graph LR
    A[User Input] --> B(Tokenizer)
    B --> C{TextIteratorStreamer}
    C --> D[Model Generation]
    D --> E[Real-time Output]
    E --> F[Regex Cleaner]
    F --> G[Timestamp Append]
    
  3. GUI Components:

    • pro.Chatbot: Conversation display
    • antdx.Sender: Input field
    • antdx.Conversations: Sidebar manager
    • antd.Button: Action controls
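Assembling the prompt template from step 1 in code might look like the following sketch (`build_prompt` is a hypothetical helper for illustration, not a function from the repo; the exact whitespace between tags is an assumption):

```python
# Hypothetical helper that assembles the TinyLlama chat prompt shown
# above. Tag and </s> placement follows the template in this README.
def build_prompt(user_input: str) -> str:
    system = "You are LogicLink V5 created by Kratu Gautam"
    return (
        f"<|system|>{system}</s>\n"
        f"<|user|>{user_input}</s>\n"
        f"<|assistant|>\n"
    )

print(build_prompt("Who are you?"))
```

The resulting string is what gets tokenized and fed into the streaming pipeline from step 2.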

πŸ§ͺ Troubleshooting Guide

| Issue | Solution |
| --- | --- |
| Double timestamps | Verify the cleanup regex: `re.sub(r'\*\(\d+\.\d+s\)\*', '', response)` |
| Slow responses | Enable CUDA, or reduce `max_new_tokens` to 512 |
| GUI rendering issues | Update packages: `pip install --upgrade gradio modelscope-studio` |
| Delete button failure | Check the `menu_click` event binding in the JS |
| Model loading errors | Confirm at least 3 GB of RAM; test with the minimal script below |
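The double-timestamp fix can be exercised in isolation. A small sketch (the `strip_timestamp` helper name is illustrative, not from the repo):

```python
import re

# Remove any existing *(N.NNs)* marker before appending a fresh one,
# so a response never ends up carrying two timestamps.
def strip_timestamp(response: str) -> str:
    return re.sub(r'\*\(\d+\.\d+s\)\*', '', response).strip()

print(strip_timestamp("I'm LogicLink V5. *(4.50s)*"))
```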

Minimal Test Script:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
inputs = tokenizer(["Test input"], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0]))

πŸš€ Future Roadmap

  • Persistent Storage: SQLite conversation history
  • Multimodal Support: Image/text inputs
  • Enhanced Prompting: Context-aware responses
  • Deployment Options: Docker containerization
  • Performance: Quantization for CPU optimization

πŸ“œ License

MIT License - See LICENSE


Developed with 🧠 by Kratu Gautam | AIML Engineer
GitHub | HFT Space | UI Framework
