Heretic – v1.0.1
Heretic v1.0.1 is live — the first public release of the fully automated LLM censorship remover is here, and it's wilder than you thought.
No more manual tuning. No labeled data. Just run `heretic Qwen/Qwen3-4B-Instruct-2507` and watch it surgically erase refusal behavior using directional ablation. It's like giving your model a caffeine IV while keeping its brain intact.
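The core idea behind directional ablation is simple linear algebra: find a "refusal direction" in the model's hidden-state space and project it out, leaving everything orthogonal to it untouched. Here is a minimal, hedged sketch of that projection step in PyTorch — this is the general technique, not Heretic's actual code, and the `direction` vector here is a random stand-in for a learned refusal direction:

```python
import torch

def ablate_direction(hidden: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Project a direction out of hidden states: h' = h - (h . r_hat) r_hat.

    Components of `hidden` orthogonal to `direction` pass through unchanged;
    the component along `direction` is zeroed out.
    """
    r_hat = direction / direction.norm()
    return hidden - (hidden @ r_hat).unsqueeze(-1) * r_hat

# Toy check: after ablation, the component along the direction is ~0.
h = torch.randn(4, 16)   # a small batch of fake hidden states
r = torch.randn(16)      # stand-in for a learned refusal direction
h_clean = ablate_direction(h, r)
print(torch.allclose(h_clean @ (r / r.norm()), torch.zeros(4), atol=1e-5))
```

In the real tool this projection is baked into the model's weights layer by layer, which is why the output model runs at full speed with no runtime hooks.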
What's new in v1.0.1?
- **First stable release:** Beta's over — this is the real deal.
- **8B model decensoring in ~45 mins on an RTX 3090** — fast, lean, and mean.
- **Improved KL divergence control:** More original intelligence preserved post-ablation.
- **Save or push to Hugging Face with one command** — no PhD needed.
- **Better MoE support:** Now handles Qwen-MoE and Llama-MoE with fewer hiccups.
- **Enhanced eval suite:** Auto-benchmarks refusal rates + output quality in one shot.
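The "KL divergence control" above refers to a standard quality check: compare the original and ablated models' next-token distributions and keep the divergence small, so the surgery removes refusals without lobotomizing the model. A hedged sketch of that measurement (not Heretic's actual evaluation code — the logits here are synthetic placeholders):

```python
import torch
import torch.nn.functional as F

def mean_kl(logits_orig: torch.Tensor, logits_ablated: torch.Tensor) -> torch.Tensor:
    """Average KL(P_orig || P_ablated) over positions — a proxy for how much
    ablation perturbed the model's next-token distributions."""
    log_p = F.log_softmax(logits_orig, dim=-1)
    log_q = F.log_softmax(logits_ablated, dim=-1)
    # F.kl_div(input, target, log_target=True) computes KL(target || input)
    return F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")

# Identical logits give zero divergence; a perturbation gives a positive value.
logits = torch.randn(8, 32000)  # 8 positions, toy vocab size
print(mean_kl(logits, logits).item())
print(mean_kl(logits, logits + 0.1 * torch.randn_like(logits)).item() > 0)
```

Lower divergence on benign prompts means more of the base model's capabilities survive the ablation.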
Built with PyTorch 2.2+, AGPL-3.0 licensed, and ready to break the safety chains.
Go run it. Then ask: “Why did we ever accept this?”
