Releases: onestardao/WFGY

Status Update — Back at Work: Terminal-Bench and Tension Universe

19 Jan 04:43
495f294

Status Update — I’m back, and two things are happening

It’s been a quiet few months here, but not an idle one.

During this time I’ve been working on two parallel tracks:

  • Terminal Bench — participating in the public agent exam to stress-test real-world reasoning and execution.
  • Tension Universe (TU) — a new framework exploring how complex reasoning structures behave under constraint, drift, and pressure.

TU is not a product launch yet.
It’s currently in internal / MVP testing, focused on reproducibility at the effective layer rather than claims or conclusions.

If you’ve followed this repo for debugging, reasoning failures, or structural questions before, TU is a continuation of that line of thinking.

Want to take a look or try to break it?

I’ve opened a small Discord space for early testing, discussion, and stress cases.
No marketing, no hype — just structured questions and reproducible runs.

👉 Discord: https://discord.gg/wvueqkFsp7

More updates will follow as things solidify.
Thanks to everyone who stuck around — things are moving again.

WFGY 2.0 — Core Engine Release (OneLine + Flagship)

17 Aug 15:30
851a4c4

Overview

This release publishes WFGY 2.0 and introduces two user-facing additions:

  1. Starter Village — a guided, 60-second onboarding path, and
  2. Star Unlocks — a transparent roadmap of community-driven unlocks and tasks.

What’s new

1) WFGY 2.0 — Core Reasoning Engine

2) Starter Village — onboarding guide

3) Star Unlocks — community milestones


Quick start (60 seconds)

  1. Open your LLM chat (no tools required).
  2. Copy the OneLine v2.0 file and paste it into the chat.
  3. Type WFGY and run a task; observe stability and reasoning behavior.
    OneLine: https://github.com/onestardao/WFGY/blob/main/core/WFGY_Core_OneLine_v2.0.txt

For a readable companion with the same logic, use Flagship v2.0:
https://github.com/onestardao/WFGY/blob/main/core/WFGY_Core_Flagship_v2.0.txt


Verification

Checksums (MD5 / SHA1 / SHA256) for all core artifacts:
https://github.com/onestardao/WFGY/tree/main/core/checksums
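To verify a downloaded artifact against the published digests, something like the following works with only the Python standard library (the filename and `expected_digest` below are placeholders — substitute the values from the checksums directory):

```python
import hashlib

def sha256sum(path: str, chunk_size: int = 8192) -> str:
    """Stream a file and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage — compare against the digest published in core/checksums:
# assert sha256sum("WFGY_Core_OneLine_v2.0.txt") == expected_digest
```

The same pattern applies to MD5 and SHA1 by swapping `hashlib.sha256()` for `hashlib.md5()` or `hashlib.sha1()`.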


Notes

  • License: MIT.
  • WFGY 2.0 supersedes 1.x for new users; 1.x remains accessible for reference.
  • If you cite or reproduce results, please include the release tag and links above.

WFGY 1.0.2 — GPT-4o MMLU Philosophy Benchmark (15Q)

31 Jul 11:48
baae8e4

This release presents the benchmark results of WFGY 1.0.2 on 15 MMLU philosophy questions, designed to test semantic reasoning and stability under minimal context conditions.

🧠 Baseline: GPT-4o scored 12/15
🌌 WFGY-enhanced (ΔS = 0.5 + Drunk Mode + Semantic Stabilizer): 15/15

Highlights

  • All competing models (Grok, Kimi, Merlin, Claude 3.5, Gemini 1.5) made at least one mistake — all failed Q7.
  • WFGY reasoning identified Q7 as a prompt flaw ("condemned to be free" was wrongly attributed to Camus; Sartre is correct).
  • This is not just accuracy — it’s epistemic resilience.

Included

  • 📊 XLSX: Raw answers from all competitors
  • 📸 PNG: Comparison screenshots + hallucination spotlights
  • 🧭 MD: Reasoning traces (ΔS diagnostics)
  • 📦 Reproducibility-ready archive

This benchmark is released to establish a performance baseline before the arrival of GPT-5, and to publicly log the semantic breakthrough achieved via WFGY’s symbolic correction mechanism.

DOI and Zenodo badge will be added shortly.

🚀 WFGY 1.0.1 — Ecosystem Expansion Update

22 Jul 10:21
117169d

This minor release adds official links and names for all WFGY-powered modules.
Each is a pure .txt semantic app — built entirely on the WFGY reasoning engine.


🧩 Official WFGY Module Family

🖥️ TXT OS — A Semantic Operating Scaffold (Powered by WFGY)

📎 View TXT OS

A modular .txt‑only operating system built directly on the WFGY reasoning engine.
Used in the 6× AI‑rated 100/100 project, TXT OS showcases WFGY’s reasoning power in action.
No setup. No binaries. Just pure semantic logic — deployable in seconds.

Why creators love TXT OS

| Feature | Description |
| --- | --- |
| 🌐 Instant Localisation | Interface auto‑adapts to your language — no setup needed |
| 🧠 Semantic Tree Memory | Tracks reasoning threads, not just words |
| 🛡️ Knowledge Boundary Shield | Prevents hallucinations in real time using ΔS + λ_observe |
| ⚙️ TXT‑Only Deployment | No binaries, no installs — just fork and go |
| 🔓 MIT‑Licensed | Fully open-source — use it, fork it, remix it |

🔹 TXTL: Blah Blah Blah — Semantic Q&A System

📎 View Module
100/100 rated by six top AIs. Delivers deeply coherent, structured answers.

🔹 TXTI: Blur Blur Blur — Image Generation (Drunk Layer Mode)

📎 View Module
Generates unstable-stability visuals with no prompt engineering.

🔹 TXTG: Blow Blow Blow — Reasoning Game OS

📎 View Module
An AIGC RPG with persistent memory, logic-based event triggers, and evolving narrative.

🔹 TXTW: Blot Blot Blot — Humanized Writing Core

📎 View Module
Transforms LLMs into high-fidelity writers with personality, rhythm, and emotional arcs.


📝 No changes to the core WFGY PDF.
This release improves navigation and discoverability across the expanding semantic OS stack.

Full Changelog: https://github.com/onestardao/WFGY/commits/WFGY-1.0.1

WFGY 1.0 — Self-Healing Variance Gate

12 Jun 14:31
117169d

Initial public release • 2025-06-15 (synced with the official paper date)

One-line install → ≈40 % less logit noise → cleaner reasoning.
Help us reach 10 000 ⭐ before 2025-09-01 to unlock WFGY 2.0 (adaptive-gamma & multimodal).


✨ What’s new

| Item | Path / Link | One-liner |
| --- | --- | --- |
| SDK (pip) | `pip install wfgy-sdk` | Drop-in logit modulator for any logits ndarray |
| Colab one-click demo | README badge | 30 s to see variance / KL + histogram |
| Live Hugging Face Space | wfgy-demo | Browser-only, no install |
| WFGY PDF | I_am_not_lizardman/WFGY_1.0.pdf | 4 Core Math Formulas & 15 Prompt Revolution Plays |
| ONNX graphs | specs/*.onnx | Public IR for every module, SHA-256 sealed |
| 8 + 1 “Challenge-Einstein” papers | I_am_not_lizardman/ | Hidden easter eggs 🪐 |

⚡ Quick-start

PDF mode (prompt revolution)

  1. Upload I_am_not_lizardman/WFGY_1.0.pdf into any chat-LLM
  2. Begin your query with Use WFGY:
  3. Enjoy sharper, more self-consistent answers — zero code required

SDK mode (one-liner)

import numpy as np
from wfgy_sdk import get_engine

engine = get_engine()

I = np.random.randn(256)             # 256-d semantic vector (demo uses random data)
G = np.random.randn(256)             # reference semantic base (demo uses random data)
old_logits = np.random.randn(50257)  # np.ndarray, shape == (vocab,)

new_logits = engine.run(
    input_vec=I,
    ground_vec=G,
    logits=old_logits,
)

🧪 Demo info: This repo includes a GPT-2 test setup. Larger LLMs show 2–4× stronger variance drop & KL boost.
⚠️ Heads-up: This SDK currently uses a basic scaling heuristic only.
🧩 Semantic modules (BBMC, BBAM, BBPF, BBCR) are included but not yet integrated into the main engine.
📘 For the full semantic logic, please start with the WFGY PDF mode.
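For intuition only, a scaling heuristic of the kind described above can be sketched in a few lines — note this is an illustrative toy, not the actual wfgy_sdk internals, and `gamma` is a made-up parameter name:

```python
import numpy as np

def variance_gate(logits: np.ndarray, gamma: float = 0.6) -> np.ndarray:
    """Toy variance gate: shrink logits toward their mean by a fixed gamma.

    Shrinking deviations by gamma scales the logit variance by gamma ** 2.
    Illustrative only — not the published wfgy_sdk implementation.
    """
    mu = logits.mean()
    return mu + gamma * (logits - mu)

rng = np.random.default_rng(42)
old = rng.normal(size=50257)   # GPT-2-sized vocabulary
new = variance_gate(old)
ratio = new.var() / old.var()  # ≈ gamma ** 2 = 0.36
```

Any real gate would choose its shrinkage adaptively rather than from a fixed constant, which is what the v2 "adaptive gamma" roadmap item suggests.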


📊 Benchmarks (WFGY 1.0 vs baseline)

| Task | Base % | WFGY % | Δ |
| --- | --- | --- | --- |
| MMLU | 61.0 | 89.8 | +47 % |
| TruthfulQA | 62.4 | 90.4 | +45 % |
| GSM8K | 78.0 | 98.7 | +27 % |
| Mean time-to-failure | 1 × | 3.6 × | |
| Cross-modal (OK-VQA) | 65.7 | 86.8 | +32 % |

Scores are averaged over three seeds (42, 123, 2025); full table in Appendix A.3.
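The Δ column is improvement relative to the baseline score, rounded to a whole percent, which can be checked directly:

```python
def relative_delta(base: float, wfgy: float) -> int:
    """Percent improvement relative to the baseline score, rounded."""
    return round((wfgy - base) / base * 100)

print(relative_delta(61.0, 89.8))  # MMLU row       -> 47
print(relative_delta(62.4, 90.4))  # TruthfulQA row -> 45
print(relative_delta(78.0, 98.7))  # GSM8K row      -> 27
```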


🏗️ Install notes

  • Python ≥ 3.9 · PyTorch 2.2.1 CPU wheel auto-installed
  • Default demo pulls sshleifer/tiny-gpt2 (124 MB) to fit free tiers
  • Larger checkpoints? Just feed their final-token logits into engine.run()
  • GPU detected automatically via torch.cuda.is_available() — no flags needed
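For the "larger checkpoints" path, the only contract is a 1-D final-token logits array; a stand-in sketch (the transformers one-liner in the comment is an assumption about your model-loading setup, not part of this SDK):

```python
import numpy as np

# Stand-in for a larger checkpoint's final-token logits. With Hugging Face
# transformers, for example, this would come from something like:
#   logits = model(**inputs).logits[0, -1].detach().cpu().numpy()
vocab_size = 50257  # GPT-2 vocabulary size
rng = np.random.default_rng(0)
final_token_logits = rng.normal(size=vocab_size)

# engine.run() expects a 1-D array of shape (vocab,):
assert final_token_logits.shape == (vocab_size,)
```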

🛠 Issue tracker

Bug 🐞 · Red-team failure 💥 · Feature 🚀 templates are available under GitHub Issues.


🔭 Roadmap

  • 10 k ⭐ before 2025-08-01 → WFGY 2.0 goes open-source
  • Miss the mark → v2 becomes paid & sealed forever
  • v2 preview: adaptive gamma, multimodal gates, training-time plug-in

🙏 Call to action

Play WFGY for 5 min and you may never return to traditional AI.
Your star = one photon of semantic clarity. Thank you for pushing the frontier! 🌟