MLX-LM – test_data

🚀 MLX LM just dropped 8 new optimized LLMs—fully tuned for Apple Silicon!

Say hello to:

  • Qwen1.5-0.5B-Chat
  • Mistral-7B-v0.2 & v0.3
  • DeepSeek-Coder-V2-Lite-Instruct (MLX-native 🎯)
  • Phi-3.5-mini-instruct
  • Llama-3.2-1B-Instruct
  • Falcon3-7B-Instruct
  • Qwen3-4B

✅ All models 4-bit quantized.

✅ Only `.safetensors`, tokenizer, and Jinja chat templates: zero bloat.

✅ New lean download for Qwen1.5-0.5B: just the model weights.

✅ Zipped and ready to drop into your MLX pipeline.

No GPU? No problem. M-series chips are now LLM powerhouses.

Grab `test_data.zip` and start whispering to LLMs at near-native speed. 🍏⚡
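Before wiring the archive into a pipeline, it can help to sanity-check that only the lean artifacts are inside. A minimal sketch, using an in-memory stand-in archive since the exact file layout of `test_data.zip` is an assumption (the directory and file names below are illustrative, not confirmed by the release):

```python
import io
import zipfile

# Build a stand-in archive in memory. The real test_data.zip bundles,
# per the release notes, only weights, tokenizer, and Jinja templates;
# the specific paths here are hypothetical placeholders.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("Qwen1.5-0.5B-Chat/model.safetensors", b"\x00")
    zf.writestr("Qwen1.5-0.5B-Chat/tokenizer.json", "{}")
    zf.writestr("Qwen1.5-0.5B-Chat/chat_template.jinja", "{{ messages }}")

# Inspect without extracting: every entry should be one of the lean
# artifact types -- no optimizer states, no training checkpoints.
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    assert all(
        n.endswith((".safetensors", ".json", ".jinja")) for n in names
    ), "unexpected file in archive"
    print(sorted(names))
```

The same `namelist()` check works unchanged on the downloaded `test_data.zip` by opening it with `zipfile.ZipFile("test_data.zip")` instead of the in-memory buffer.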

🔗 View Release