Jackrong/Qwen3.5-9B-DeepSeek-V4-Flash-GGUF

🌟 Qwen3.5-9B-DeepSeek-V4-Flash

💡 Model Overview & Design

> [!NOTE]
> Qwen3.5-9B-DeepSeek-V4-Flash is an efficient reasoning model distilled using high-quality data from DeepSeek-V4.

  • By leveraging the dataset Jackrong/DeepSeek-V4-Distill-8000x, this model successfully transfers the advanced structured reasoning and multi-step problem-solving capabilities of the DeepSeek-V4 architecture into the highly efficient Qwen3.5-9B parameter space.

  • This model was trained in an Unsloth environment, prioritizing stable gradient propagation and rigorous data curation to ensure the distillation process avoids merely learning "hollow chain-of-thought" and instead captures genuine logical generalization.

Designed for:

  • 🧩 Structured Reasoning: Inheriting DeepSeek-V4's deep logic capabilities.
  • ⚑ Flash Inference: Maintaining the token-efficiency and speed of the 9B parameter size.
  • πŸ”§ Tool-augmented Workflows: Reliable agentic action generation.

🍎 About the Teacher Model: DeepSeek-V4

[Figure: DeepSeek-V4 benchmark performance]

DeepSeek-V4 is the latest flagship open-source model series from DeepSeek, engineered for extreme efficiency, 1M-token long context, and advanced agentic workflows. As the source for this distillation, DeepSeek-V4 provides the high-fidelity reasoning signals necessary to push a 9B model beyond its architectural limits.

Key Technical Strengths of the Teacher Model:

  • πŸ† World-Class Reasoning & Coding: DeepSeek-V4 demonstrates elite performance in mathematics (MATH-500), STEM subjects, and real-world software engineering (SWE-bench). Its "Think" modes provide the sophisticated Long-CoT (Chain-of-Thought) traces that define this model's logic.
  • 🧠 Architectural Innovation: * Hybrid Attention & DSA: Features Token-level compression and DeepSeek Sparse Attention, which reduces KV Cache memory overhead by up to 90%, allowing for highly efficient long-context processing.
    • Engram Memory & mHC: Utilizes Manifold-constrained Hyper-connections to decouple factual knowledge retrieval from dynamic logical reasoning, ensuring exceptional stability and generalization.
  • πŸ€– Agent-Centric Design: Specifically optimized for multi-step tool calling and complex environment interaction, ensuring that the distilled knowledge includes reliable "how-to-act" procedures, not just "how-to-talk."

By distilling from DeepSeek-V4-Flash, we have successfully mapped the high-density logic of a trillion-parameter class model onto the agile and high-speed Qwen3.5-9B framework.


🤝 Collaboration & Training Details

This model is the result of a close collaboration with hardware engineer Kyle Hessling. He generously provided the crucial compute equipment and managed both the rigorous post-training testing and continuous server maintenance. I want to express my gratitude to Kyle for his invaluable support! You can find him on X/Twitter here: @KyleHessling1

Training Infrastructure & Configuration:

  • πŸ–₯️ Hardware: NVIDIA DGX
  • πŸ’Ύ Training Data: DeepSeek-V4-Distill-8000x
  • πŸ§ͺ Training Method: Distillation

🎯 Motivation & Distillation Insights

  • 🧠 Latent Knowledge Activation: DeepSeek-V4's reasoning traces help the Qwen3.5-9B model activate its existing latent knowledge more effectively.
  • πŸ—οΈ Learning Procedures: The model learns actual problem-solving procedures, not just the output format.
  • πŸš€ Efficiency: The 8000x dataset provides a dense signal, allowing the 9B model to converge on reasoning tasks much faster than traditional large-scale SFT.

🔬 Supporting Evidence

Recent work and empirical tests support this distillation approach:

Ren et al., 2026: Rethinking Generalization in Reasoning SFT (arXiv:2604.06628)

The paper suggests that generalization in reasoning SFT is conditional. Key takeaways:

  • High-quality long-CoT data from DeepSeek-V4 enables cross-domain transfer.
  • Optimization Discipline: Short, highly-curated distillation (8000 examples) prevents the model from overfitting to the teacher's stylistic quirks while preserving the core reasoning engine.

πŸ› οΈ Best Practices

For optimal performance, we recommend the following generation parameters:

  • temperature=0.7 to 1.0 (Use lower temperature for strict coding tasks, higher for creative reasoning)
  • top_p=0.95

When interacting with the model, using a structured prompt template or standard ChatML format will yield the best reasoning results.
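
As one concrete way to apply these settings to the GGUF weights, here is a minimal inference sketch using llama-cpp-python, whose create_chat_completion call applies the model's chat template for you. The quantization filename and context size are assumptions; substitute whichever GGUF file you actually downloaded.

```python
# Minimal inference sketch for the GGUF weights via llama-cpp-python.
# The filename and n_ctx are assumptions; adjust to your local setup.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3.5-9B-DeepSeek-V4-Flash.Q4_K_M.gguf",  # hypothetical quant
    n_ctx=8192,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful reasoning assistant."},
        {"role": "user", "content": "Prove that the sum of two odd integers is even."},
    ],
    temperature=0.7,  # recommended range: 0.7 to 1.0
    top_p=0.95,       # recommended nucleus sampling value
)
print(response["choices"][0]["message"]["content"])
```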


📚 Resources & Guides

👉 GitHub Repository: Jackrong-llm-finetuning-guide. Visit the repository to dive into the codebase and reproduce the results locally or on Colab.

📥 Core Technical Document

🔗 Complete Fine-Tuning Guide (PDF)

A Note: My goal isn't just to detail a workflow, but to demystify LLM training. Beyond the social media hype, fine-tuning isn't an unattainable ritual; often, all you need is a Google account, a standard laptop, and relentless curiosity. All training and testing for this project were self-funded. If you find this model or guide helpful, a Star ⭐️ on GitHub would be the greatest encouragement. Thank you! 🙏


⚠️ Limitations

  • Parameter Constraints: While enhanced by DeepSeek-V4 distillation, the model is still bound by the 9B parameter limits and may struggle with extremely obscure knowledge.
  • Over-reasoning: On very simple queries, the model might still attempt to produce a lengthy reasoning chain due to the SFT bias.
  • Safety Trade-offs: Asymmetric gains mean that while reasoning improves, certain alignment-sensitive behaviors might regress.

πŸ™ Acknowledgements

Special thanks to:

  • DeepSeek Team for the foundational advancements in the V4 architecture.
  • Unsloth for efficient fine-tuning frameworks.
  • Open-source datasets and community contributors.
  • Researchers exploring reasoning SFT and distillation.

📖 Citation

```bibtex
@misc{jackrong_qwen35_9b_deepseek_v4_flash,
  title        = {Qwen3.5-9B-DeepSeek-V4-Flash},
  author       = {Jackrong},
  year         = {2026},
  publisher    = {Hugging Face}
}
```