huihui-ai/Huihui4-8B-A4B-GGUF
📌 Overview
Huihui4-8B-A4B is a lightweight Mixture-of-Experts (MoE) conversational model derived from Google's gemma-4-26B-A4B-it architecture. Through expert pruning and supervised fine-tuning on high-quality dialogue data, it significantly reduces computational overhead while preserving core reasoning and interaction capabilities. It is designed for deployment on consumer-grade hardware and for code-related conversational tasks.
This model is not an abliterated variant.
ollama
Please use the latest version of ollama. You can run `huihui_ai/huihui-4:8b` directly:
ollama run huihui_ai/huihui-4:8b
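If you prefer calling the model from code instead of the CLI, ollama also exposes a local REST API (`/api/chat` on port 11434). Below is a minimal Python sketch; the prompt is just an example.

```python
# Minimal sketch: chatting with the model through ollama's local REST API.
# Assumes the ollama server is running and the model has already been pulled.
import requests

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "huihui_ai/huihui-4:8b",
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."}
        ],
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["message"]["content"])
```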
🧱 Architecture & Configuration
| Parameter | Value |
|---|---|
| Base Model | google/gemma-4-26B-A4B-it |
| Total MoE Experts | 32 (pruned from the original 128) |
| Active Experts per Token | 8 (maintaining the A4B activation scale) |
| Model Positioning | Lightweight MoE conversational base / Consumer-hardware friendly |
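For readers curious about what a 128 → 32 expert reduction looks like mechanically, here is a minimal, hypothetical sketch of pruning one MoE layer by routing frequency. It is not the exact procedure used for this model, and the module/attribute names (`experts`, `router`, `num_experts`) are stand-ins rather than the real architecture's layout.

```python
# Illustrative sketch of expert pruning for a single MoE layer (not the exact
# procedure used for this model). Class and attribute names are hypothetical.
import torch
import torch.nn as nn


def prune_moe_layer(moe_block, expert_usage_counts, keep: int = 32):
    """Keep the `keep` most frequently routed experts and shrink the router.

    moe_block           : MoE layer with `.experts` (nn.ModuleList) and a
                          `.router` linear layer of shape [num_experts, hidden]
    expert_usage_counts : tensor of per-expert routing counts collected on a
                          calibration set
    """
    keep_idx = torch.topk(expert_usage_counts, k=keep).indices.sort().values

    # Drop the rarely used expert FFNs.
    moe_block.experts = nn.ModuleList(
        [moe_block.experts[i] for i in keep_idx.tolist()]
    )

    # Slice the router so it only scores the remaining experts.
    old_router = moe_block.router
    new_router = nn.Linear(old_router.in_features, keep, bias=False)
    new_router.weight.data = old_router.weight.data[keep_idx].clone()
    moe_block.router = new_router
    moe_block.num_experts = keep
    return moe_block
```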
📚 Training Data & Methodology
- Data Source: 500+ high-quality dialogue samples carefully extracted from code preference data.
- Training Method: Supervised Fine-Tuning (SFT).
- Optimization Goal: Maintain semantic coherence, instruction-following capability, and code context understanding post-pruning.
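As a rough illustration of the SFT step, the sketch below fine-tunes a pruned checkpoint on a chat-formatted JSONL file with the standard Transformers `Trainer`. The checkpoint path, dataset file, and hyperparameters are placeholders, not the actual training recipe.

```python
# Minimal SFT sketch with Hugging Face Transformers. Paths, hyperparameters,
# and the dataset file are illustrative placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_path = "./gemma-4-26B-A4B-it-Pruned-32"  # pruned checkpoint (placeholder path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="bfloat16")

raw = load_dataset("json", data_files="dialogues.jsonl", split="train")

def to_text(example):
    # Render the chat turns with the model's own chat template.
    return {"text": tokenizer.apply_chat_template(example["messages"], tokenize=False)}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=8192)

dataset = raw.map(to_text).map(tokenize, remove_columns=raw.column_names + ["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./sft-out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=1e-5,
        num_train_epochs=2,
        logging_steps=50,
        bf16=True,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```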
📊 Evaluation & Performance
- Evaluation Tool: Quantitative perplexity assessment using the `calculate_perplexity` script.
- Test Results: Preliminary dialogue tests indicate smooth interactions and stable logic. The model performs reliably in daily conversations and code-assistance tasks, with no significant performance degradation observed after pruning.
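For reference, the sketch below shows one straightforward way to compute perplexity and average loss with Transformers; it does not reproduce the interface of the `calculate_perplexity` script itself.

```python
# Minimal perplexity sketch over a list of texts (illustrative only; not the
# actual calculate_perplexity script shipped with this project).
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def perplexity(model_path: str, texts, max_length: int = 8192):
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForCausalLM.from_pretrained(
        model_path, torch_dtype="bfloat16", device_map="auto"
    )
    model.eval()

    total_loss, total_tokens = 0.0, 0
    with torch.no_grad():
        for text in texts:
            enc = tokenizer(
                text, return_tensors="pt", truncation=True, max_length=max_length
            ).to(model.device)
            out = model(**enc, labels=enc["input_ids"])
            n = enc["input_ids"].size(1) - 1  # loss is averaged over shifted tokens
            total_loss += out.loss.item() * n
            total_tokens += n

    avg_loss = total_loss / total_tokens
    return math.exp(avg_loss), avg_loss
```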
💻 Inference & Deployment Recommendations
- Recommended Frameworks: `vLLM` / `llama.cpp` / `Hugging Face Transformers` (see the loading sketch after this list).
- VRAM Requirements: FP16: < 18 GB; INT4/INT8 quantized: roughly 6~9 GB (compatible with mainstream single consumer GPUs).
- Use Cases: Code conversation assistants, lightweight task planning, local deployment prototyping, and baseline validation for MoE pruning/merging techniques.
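A minimal local-inference sketch with `llama-cpp-python` is shown below; the GGUF filename is a placeholder for whichever quantization you download from this repository.

```python
# Minimal sketch of local inference on a quantized GGUF file with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./Huihui4-8B-A4B-Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a mixture-of-experts layer is."}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```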
🗺️ Roadmap
- Multi-Domain Fine-Tuning: Further SFT on four distinct datasets to enhance the generalization capabilities of this 32-expert model.
- Expert Merging Validation: Experiment with merging the four independently fine-tuned models back into a 128-expert architecture, validating the feasibility of a "prune → fine-tune → merge" pipeline (a conceptual sketch follows this list).
- Core Objective: Ultimately verify the engineering viability of training and iterating on large-scale MoE models using only consumer-grade hardware.
- If you're interested, feel free to fine-tune this model on your own datasets. We plan to merge all resulting models into a unified version at the end.
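To make the merging idea concrete, here is a purely conceptual sketch of stitching the experts of several 32-expert checkpoints back into one state dict. The parameter key names are hypothetical, and a real merge would also have to reconcile the shared (non-expert) weights, which is glossed over here.

```python
# Purely conceptual sketch of the "prune -> fine-tune -> merge" idea: concatenate
# the experts of several independently fine-tuned checkpoints for one MoE layer.
# Key names are hypothetical placeholders.
import torch


def merge_expert_state_dicts(state_dicts, layer: int):
    """Concatenate router rows and renumber expert weights for one MoE layer."""
    merged = {}
    router_key = f"model.layers.{layer}.mlp.router.weight"  # hypothetical key
    merged[router_key] = torch.cat([sd[router_key] for sd in state_dicts], dim=0)

    offset = 0
    for sd in state_dicts:
        expert_keys = [k for k in sd if f"layers.{layer}.mlp.experts." in k]
        n_experts = len({k.split("experts.")[1].split(".")[0] for k in expert_keys})
        for k in expert_keys:
            idx = int(k.split("experts.")[1].split(".")[0])
            new_k = k.replace(f"experts.{idx}.", f"experts.{idx + offset}.")
            merged[new_k] = sd[k]
        offset += n_experts
    return merged
```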
📝 Notes
- This model represents the initial pruned and fine-tuned iteration of the `Huihui` series. Future updates will involve multi-dataset integration and expert merging.
- Evaluation results (from the `calculate_perplexity` evaluation script):
python evaluate_perplexity_final.py --model_path ./google/gemma-4-26B-A4B-it
Model Path : ./google/gemma-4-26B-A4B-it
Eval Samples : 100
Max Length : 8192
| model | Fine-tuning steps | num_experts | Perplexity | Average Loss |
|---|---|---|---|---|
| gemma-4-26B-A4B-it | 0 | 128 | 1.5964 (+ 0 ) | 0.4678 (+ 0 ) |
| gemma-4-26B-A4B-it-Pruned-32 | 0 | 32 | 2.4826 (+ 0.8862) | 0.9093 (+ 0.4415) |
| gemma-4-26B-A4B-it-Pruned-32-sft-750 | 750 | 32 | 1.3827 (- 0.2137) | 0.3240 (- 0.1438) |
| gemma-4-26B-A4B-it-Pruned-32-sft-1350 | 1350 | 32 | 1.2374 (- 0.359 ) | 0.2130 (- 0.2548) |
| gemma-4-26B-A4B-it-Pruned-32-sft-1800 | 1800 | 32 | 1.1724 (- 0.424 ) | 0.1590 (- 0.3088) |
| gemma-4-26B-A4B-it-Pruned-32-sft-2950 | 2950 | 32 | 1.0924 (- 0.504 ) | 0.0883 (- 0.3795) |
| gemma-4-26B-A4B-it-Pruned-32-sft-3550 | 3550 | 32 | 1.0645 (- 0.5319) | 0.0625 (- 0.4053) |
| gemma-4-26B-A4B-it-Pruned-32-sft-4150 | 4150 | 32 | 1.0532 (- 0.5432) | 0.0518 (- 0.416 ) |
| gemma-4-26B-A4B-it-Pruned-32-sft-4700 | 4700 | 32 | 1.0411 (- 0.5553) | 0.0403 (- 0.4275) |
| gemma-4-26B-A4B-it-Pruned-32-sft-7800 | 7800 | 32 | 1.0088 (- 0.5876) | 0.0088 (- 0.459 ) |
| gemma-4-26B-A4B-it-Pruned-32-sft-10900 | 10900 | 32 | 1.0035 (- 0.5929) | 0.0035 (- 0.4643) |
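As a sanity check on the table, perplexity is simply the exponential of the average loss:

```python
# Perplexity = exp(average loss); e.g. the baseline row: exp(0.4678) ≈ 1.596.
import math
print(math.exp(0.4678))  # ~1.5965, matching the table's 1.5964 up to rounding
```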
Citation
@misc{huihui4-8b-a4b,
title = {{Huihui4-8B-A4B}: A lightweight MoE (Mixture of Experts) conversational model},
author = {Huihui-ai},
year = {2026},
url = {https://hf.co/huihui-ai/Huihui4-8B-A4B}
}
Contact
If you have any questions, please raise an issue or contact us at support@huihui.ai.