🤖 Huihui4-8B-A4B-GGUF

📌 Overview

Huihui4-8B-A4B is a lightweight MoE (Mixture of Experts) conversational model optimized from Google's gemma-4-26B-A4B-it architecture. Through expert pruning and supervised fine-tuning on high-quality dialogue data, this model significantly reduces computational overhead while preserving core reasoning and interaction capabilities. It is designed for deployment on consumer-grade hardware and for code-focused conversational tasks.

This model is not an ablation variant.

Ollama

Please use the latest version of Ollama.

You can run huihui_ai/huihui-4:8b directly:

ollama run huihui_ai/huihui-4:8b
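
If you prefer to call the model programmatically rather than through the interactive CLI, Ollama also exposes a local HTTP API. Below is a minimal sketch, assuming Ollama is serving on its default port (11434) and the huihui_ai/huihui-4:8b tag has already been pulled:

```python
# Minimal sketch: chat with the model through Ollama's local HTTP API.
# Assumes `ollama serve` is running on the default port and the tag is pulled.
import json
import urllib.request

payload = {
    "model": "huihui_ai/huihui-4:8b",
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a linked list."}
    ],
    "stream": False,  # return one JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())

print(reply["message"]["content"])
```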

🧱 Architecture & Configuration

| Parameter | Description |
| --- | --- |
| Base Model | google/gemma-4-26B-A4B-it |
| Total MoE Experts | 32 (pruned from the original 128) |
| Active Experts per Token | 8 (maintaining the A4B activation scale) |
| Model Positioning | Lightweight MoE conversational base / consumer-hardware friendly |
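
To make the pruning described above concrete, here is a toy sketch of the general technique: score experts by how often the router selects them on calibration data, keep the top 32 of 128, and slice the router to match. The class and function names (ToyMoELayer, prune_experts) are invented for this illustration; this is not the actual pruning code used to produce Huihui4-8B-A4B.

```python
# Toy illustration of MoE expert pruning: keep the most-used experts and
# shrink the router to match. Not the actual script used for Huihui4-8B-A4B.
import torch
import torch.nn as nn


class ToyMoELayer(nn.Module):
    """A minimal MoE layer: a linear router plus a list of expert MLPs."""

    def __init__(self, hidden: int, num_experts: int, top_k: int):
        super().__init__()
        self.router = nn.Linear(hidden, num_experts, bias=False)
        self.experts = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(num_experts))
        self.top_k = top_k

    def routing_counts(self, x: torch.Tensor) -> torch.Tensor:
        """How often each expert lands in the top-k for the tokens in x."""
        top = self.router(x).topk(self.top_k, dim=-1).indices          # (tokens, top_k)
        return torch.bincount(top.flatten(), minlength=len(self.experts))


def prune_experts(layer: ToyMoELayer, calib_tokens: torch.Tensor, keep: int) -> ToyMoELayer:
    """Keep the `keep` most frequently routed experts and rebuild the layer."""
    counts = layer.routing_counts(calib_tokens)
    keep_ids = counts.topk(keep).indices.sort().values                 # stable expert order

    pruned = ToyMoELayer(layer.router.in_features, keep, layer.top_k)
    pruned.router.weight.data = layer.router.weight.data[keep_ids]     # slice router rows
    pruned.experts = nn.ModuleList(layer.experts[i] for i in keep_ids.tolist())
    return pruned


layer = ToyMoELayer(hidden=64, num_experts=128, top_k=8)
calib = torch.randn(1024, 64)                  # stand-in for calibration activations
small = prune_experts(layer, calib, keep=32)
print(len(small.experts), small.router.weight.shape)   # 32, torch.Size([32, 64])
```

In a real checkpoint the same slicing has to be applied to every MoE layer, and the pruned router typically needs recalibration, which is part of what the SFT stage described below compensates for.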

📊 Training Data & Methodology

  • Data Source: 500+ high-quality dialogue samples carefully extracted from code preference data.
  • Training Method: Supervised Fine-Tuning (SFT); a generic sketch of the loss setup follows this list.
  • Optimization Goal: Maintain semantic coherence, instruction-following capability, and code context understanding post-pruning.
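
The bullets above describe the data and the objective; the sketch below shows the standard causal-LM SFT loss setup with Hugging Face Transformers, masking the prompt tokens so only the assistant reply contributes to the loss. The checkpoint path and the dialogue sample are placeholders, and the real run of course used a proper trainer, batching, and the full dataset.

```python
# Minimal SFT loss sketch: next-token prediction on a chat-formatted sample,
# with prompt tokens masked out so only the assistant reply is learned.
# Model path and data are placeholders, not the actual training setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./Huihui4-8B-A4B"  # hypothetical local path to the pruned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16)

prompt = "User: How do I reverse a string in Python?\nAssistant: "
answer = "Use slicing: s[::-1]."

prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids

labels = full_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100      # ignore prompt tokens in the loss

out = model(input_ids=full_ids, labels=labels)
out.loss.backward()                          # an optimizer step would follow here
```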

📈 Evaluation & Performance

  • Evaluation Tool: Quantitative perplexity assessment using the calculate_perplexity script (a generic sketch of the computation follows this list).
  • Test Results: Preliminary dialogue tests indicate smooth interactions and stable logic. The model performs reliably in daily conversations and code-assistance tasks, with no significant performance degradation observed after pruning.
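
The calculate_perplexity script itself is not reproduced here, but the number it reports is the standard quantity: perplexity is exp of the average per-token negative log-likelihood. A generic sketch of that computation with Transformers, using a placeholder model path and evaluation texts:

```python
# Generic perplexity computation: exp(mean negative log-likelihood per token).
# Not the project's calculate_perplexity script, just the standard recipe.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./Huihui4-8B-A4B"              # hypothetical local checkpoint
texts = ["def add(a, b):\n    return a + b", "Explain list comprehensions."]

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16).eval()

total_nll, total_tokens = 0.0, 0
with torch.no_grad():
    for text in texts:
        ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=8192).input_ids
        out = model(input_ids=ids, labels=ids)   # loss = mean NLL over predicted tokens
        n = ids.shape[1] - 1                     # number of predicted tokens
        total_nll += out.loss.item() * n
        total_tokens += n

avg_loss = total_nll / total_tokens
print(f"Average Loss: {avg_loss:.4f}  Perplexity: {math.exp(avg_loss):.4f}")
```

For reference, the results table in the Notes section follows the same relationship: each Perplexity value is exp of the corresponding Average Loss (e.g. exp(0.4678) ≈ 1.5964).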

💻 Inference & Deployment Recommendations

  • Recommended Frameworks: vLLM / llama.cpp / Hugging Face Transformers (a minimal llama.cpp example follows this list)
  • VRAM Requirements:
    • FP16: < 18 GB
    • INT4/INT8 quantized: roughly 6–9 GB (fits on a single mainstream consumer GPU)
  • Use Cases: Code conversation assistants, lightweight task planning, local deployment prototyping, and baseline validation for MoE pruning/merging techniques.
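
Since this repository ships GGUF files, a llama.cpp-based runtime is the most direct way to stay inside the quoted VRAM budget. Here is a minimal sketch using the llama-cpp-python bindings; the GGUF file name below is a placeholder for whichever quantization you download.

```python
# Minimal sketch of local GGUF inference with llama-cpp-python.
# The GGUF file name is a placeholder; use the quant you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./Huihui4-8B-A4B-Q4_K_M.gguf",  # hypothetical file name
    n_ctx=8192,          # context window
    n_gpu_layers=-1,     # offload all layers to the GPU if VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Sketch a plan for a CLI todo app in Python."}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```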

πŸ—ΊοΈ Roadmap

  1. Multi-Domain Fine-Tuning: Further SFT on four distinct datasets to enhance the generalization capabilities of this 32-expert model.
  2. Expert Merging Validation: Experiment with merging the four independently fine-tuned models back into a 128-expert architecture, validating the feasibility of a "prune → fine-tune → merge" pipeline (a conceptual sketch follows this list).
  3. Core Objective: Ultimately verify the engineering viability of training and iterating on large-scale MoE models using only consumer-grade hardware.
  4. If you're interested, feel free to fine-tune this model on your own datasets. We plan to merge all resulting models into a unified version at the end.
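
Roadmap step 2 (merging four independently fine-tuned 32-expert models back into a 128-expert architecture) can be pictured, per MoE layer, as stacking the four routers' rows and concatenating the four expert lists. The sketch below illustrates only that shape-level idea with stand-in tensors; a real merge would also have to reconcile the shared non-expert weights (attention, embeddings), which is not shown.

```python
# Conceptual sketch: rebuild a 128-expert router/expert set from four 32-expert
# fine-tuned variants by stacking router rows and concatenating expert weights.
# Real merging would also have to reconcile the shared (non-expert) weights.
import torch
import torch.nn as nn

hidden, experts_per_model, num_models = 64, 32, 4

# Stand-ins for the four fine-tuned 32-expert models' MoE parameters.
routers = [torch.randn(experts_per_model, hidden) for _ in range(num_models)]
expert_sets = [
    nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(experts_per_model))
    for _ in range(num_models)
]

# Merge: 4 x 32 router rows -> one 128-row router; experts are concatenated.
merged_router = nn.Linear(hidden, experts_per_model * num_models, bias=False)
merged_router.weight.data = torch.cat(routers, dim=0)
merged_experts = nn.ModuleList(e for s in expert_sets for e in s)

print(merged_router.weight.shape, len(merged_experts))  # torch.Size([128, 64]) 128
```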

πŸ“ Notes

  • This model represents the initial pruned and fine-tuned iteration of the Huihui series. Future updates will involve multi-dataset integration and expert merging.
  • Perplexity and average loss were measured with the calculate_perplexity evaluation script.
  • Evaluation results (command and output below):
python evaluate_perplexity_final.py --model_path ./google/gemma-4-26B-A4B-it

Model Path     : ./google/gemma-4-26B-A4B-it
Eval Samples   : 100
Max Length     : 8192
| Model | Fine-tuning steps | num_experts | Perplexity | Average Loss |
| --- | --- | --- | --- | --- |
| gemma-4-26B-A4B-it | 0 | 128 | 1.5964 (+0) | 0.4678 (+0) |
| gemma-4-26B-A4B-it-Pruned-32 | 0 | 32 | 2.4826 (+0.8862) | 0.9093 (+0.4415) |
| gemma-4-26B-A4B-it-Pruned-32-sft-750 | 750 | 32 | 1.3827 (-0.2137) | 0.3240 (-0.1438) |
| gemma-4-26B-A4B-it-Pruned-32-sft-1350 | 1350 | 32 | 1.2374 (-0.359) | 0.2130 (-0.2548) |
| gemma-4-26B-A4B-it-Pruned-32-sft-1800 | 1800 | 32 | 1.1724 (-0.424) | 0.1590 (-0.3088) |
| gemma-4-26B-A4B-it-Pruned-32-sft-2950 | 2950 | 32 | 1.0924 (-0.504) | 0.0883 (-0.3795) |
| gemma-4-26B-A4B-it-Pruned-32-sft-3550 | 2950 | 32 | 1.0645 (-0.5319) | 0.0625 (-0.4053) |
| gemma-4-26B-A4B-it-Pruned-32-sft-4150 | 4150 | 32 | 1.0532 (-0.5432) | 0.0518 (-0.416) |
| gemma-4-26B-A4B-it-Pruned-32-sft-4700 | 4700 | 32 | 1.0411 (-0.5553) | 0.0403 (-0.4275) |
| gemma-4-26B-A4B-it-Pruned-32-sft-7800 | 7800 | 32 | 1.0088 (-0.5876) | 0.0088 (-0.459) |
| gemma-4-26B-A4B-it-Pruned-32-sft-10900 | 10900 | 32 | 1.0035 (-0.5929) | 0.0035 (-0.4643) |

Citation

@misc{huihui4-8b-a4b,
      title  = {{Huihui4-8B-A4B}: A lightweight MoE (Mixture of Experts) conversational model},
      author = {Huihui-ai},
      year   = {2026},
      url    = {https://hf.co/huihui-ai/Huihui4-8B-A4B}
}

Contact

If you have any questions, please raise an issue or contact us at support@huihui.ai.
