huihui-ai/Huihui4-8B-A4B-v2
Huihui4-8B-A4B-v2 Model Card
Overview
Huihui4-8B-A4B-v2 is a lightweight MoE (Mixture of Experts) conversational model optimized from Google's gemma-4-26B-A4B-it architecture. Through expert pruning and supervised fine-tuning on high-quality dialogue data, the model significantly reduces computational overhead while preserving core reasoning and interaction capabilities. The fine-tuning dataset adopts the GLM-5.1-style thinking format, so that in thinking mode the model more closely mirrors the thinking behavior of GLM-5.1. It is specifically designed for deployment on consumer-grade hardware and for code-related conversational tasks.
This model is not an ablation variant.
Architecture & Configuration
| Parameter | Value |
|---|---|
| Base Model | google/gemma-4-26B-A4B-it |
| Total MoE Experts | 32 (pruned from the original 128) |
| Active Experts per Token | 8 (maintaining the A4B activation scale) |
| Model Positioning | Lightweight MoE conversational base / Consumer-hardware friendly |
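
The criterion used to prune the original 128 experts down to 32 is not documented in this card. Purely as an illustration of one common approach, the sketch below ranks experts by how often the router activates them on a calibration batch and keeps the most-used ones; the frequency-based criterion and all names (`router_logits`, `select_experts_to_keep`) are assumptions, not the authors' method.

```python
import torch

def select_experts_to_keep(router_logits, num_keep=32, top_k=8):
    """Hypothetical frequency-based pruning criterion for one MoE layer.

    router_logits: [num_tokens, num_experts] raw router scores collected
    while running calibration prompts through the unpruned model.
    Returns the indices of the num_keep most frequently activated experts.
    """
    num_experts = router_logits.shape[-1]
    # Experts the router would actually place in its top-k for each token.
    topk_idx = router_logits.topk(top_k, dim=-1).indices        # [num_tokens, top_k]
    # How many tokens route to each expert across the calibration batch.
    counts = torch.bincount(topk_idx.flatten(), minlength=num_experts)
    # Keep the most-used experts, sorted by original index.
    return counts.topk(num_keep).indices.sort().values

# Toy example: fake router scores for 1,000 tokens over 128 experts.
logits = torch.randn(1000, 128)
keep = select_experts_to_keep(logits, num_keep=32, top_k=8)
print(keep.shape)  # torch.Size([32])
```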
Training Data & Methodology
- Data Source: huihui-ai/GLM-5.1-Multilingual-STEM, carefully extracted from code-preference data.
- Training Method: Supervised Fine-Tuning (SFT); a minimal training sketch follows this list.
- Optimization Goal: Maintain semantic coherence, instruction-following capability, and code context understanding post-pruning.
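
The actual training script and hyperparameters are not published in this card. The snippet below is only a minimal SFT sketch using TRL's `SFTTrainer`; the dataset split, the assumption that it carries a chat-formatted `messages` column, the output path, and every hyperparameter are placeholders rather than the authors' settings.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumed to expose a chat-formatted "messages" column; adjust to the real schema.
dataset = load_dataset("huihui-ai/GLM-5.1-Multilingual-STEM", split="train")

config = SFTConfig(
    output_dir="huihui4-8b-a4b-sft",      # illustrative output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=1e-5,                   # placeholder hyperparameters
    num_train_epochs=1,
    bf16=True,
)

trainer = SFTTrainer(
    model="huihui-ai/Huihui4-8B-A4B-v2",  # or the pruned, not-yet-tuned checkpoint
    train_dataset=dataset,
    args=config,
)
trainer.train()
```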
Evaluation & Performance
- Evaluation Tool: Quantitative perplexity assessment using the `calculate_perplexity` script (a minimal perplexity sketch follows this list).
- Test Results: Preliminary dialogue tests indicate smooth interactions and stable logic. The model performs reliably in daily conversations and code-assistance tasks, with no significant performance degradation observed after pruning.
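
The `calculate_perplexity` script itself is not reproduced in this card. As a stand-in, the snippet below measures whole-sequence perplexity with Transformers (exp of the model's own cross-entropy loss, no sliding window); it illustrates the metric, not the authors' exact script.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Huihui4-8B-A4B-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

def perplexity(text: str) -> float:
    # With labels equal to input_ids, the returned loss is the mean
    # next-token cross-entropy, so exp(loss) is the perplexity.
    enc = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))
```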
Inference & Deployment Recommendations
- Recommended Frameworks: vLLM / llama.cpp / HuggingFace Transformers (a minimal loading and generation sketch follows this list)
- VRAM Requirements: FP16: < 18 GB; INT4/INT8 quantized: 6~9 GB (compatible with mainstream single consumer GPUs)
- Use Cases: Code conversation assistants, lightweight task planning, local deployment prototyping, and baseline validation for MoE pruning/merging techniques.
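
A minimal loading and generation sketch with HuggingFace Transformers is shown below. The 4-bit load via bitsandbytes targets the single-consumer-GPU budget above; the prompt, the sampling settings, and the assumption that the tokenizer ships a chat template are illustrative choices, not documented defaults.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "huihui-ai/Huihui4-8B-A4B-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# INT4 weights via bitsandbytes to fit the ~6-9 GB VRAM budget mentioned above.
quant = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant, device_map="auto"
)

messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```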
Roadmap
- Multi-Domain Fine-Tuning: Further SFT on four distinct datasets to enhance the generalization capabilities of this 32-expert model.
- Expert Merging Validation: Experiment with merging the four independently fine-tuned models back into a 128-expert architecture, validating the feasibility of a "prune → fine-tune → merge" pipeline (a hypothetical merging sketch follows this list).
- Core Objective: Ultimately verify the engineering viability of training and iterating on large-scale MoE models using only consumer-grade hardware.
- If you're interested, feel free to fine-tune this model on your own datasets. We plan to merge all resulting models into a unified version at the end.
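
Expert merging is still future work and no merging code has been released. Purely to illustrate the idea, the sketch below re-indexes expert weights from several independently fine-tuned 32-expert checkpoints into one wider state dict, assuming expert parameters live under keys of the form `...experts.<i>...`; the real parameter layout and merging strategy may well differ.

```python
import re

def merge_expert_checkpoints(state_dicts, experts_per_model=32):
    """Hypothetical sketch of the final "prune -> fine-tune -> merge" step.

    Stitches several pruned MoE checkpoints into one wider model by
    re-indexing expert weights: '...experts.5.w1.weight' from checkpoint 2
    becomes '...experts.69.w1.weight' (2 * 32 + 5). Non-expert weights are
    taken from the first checkpoint unchanged.
    """
    merged = {k: v for k, v in state_dicts[0].items() if ".experts." not in k}
    pattern = re.compile(r"(.*\.experts\.)(\d+)(\..*)")
    for model_idx, sd in enumerate(state_dicts):
        for key, value in sd.items():
            match = pattern.match(key)
            if match is None:
                continue
            new_idx = model_idx * experts_per_model + int(match.group(2))
            merged[f"{match.group(1)}{new_idx}{match.group(3)}"] = value
    return merged
```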
Notes
- This model represents the initial pruned and fine-tuned iteration of the Huihui series. Future updates will involve multi-dataset integration and expert merging.
Citation
@misc{huihui4-8b-a4b-v2,
title = {{Huihui4-8B-A4B-v2}: A lightweight MoE (Mixture of Experts) conversational model},
author = {Huihui-ai},
year = {2026},
url = {https://hf.co/huihui-ai/Huihui4-8B-A4B-v2}
}
Contact
If you have any questions, please raise an issue or contact us at support@huihui.ai.