
πŸ€– Huihui4-8B-A4B-v2 Model Card

πŸ“Œ Overview

Huihui4-8B-A4B-v2 is a lightweight MoE (Mixture of Experts) conversational model optimized from Google's gemma-4-26B-A4B-it architecture. Through expert pruning and supervised fine-tuning on high-quality dialogue data, it significantly reduces computational overhead while preserving core reasoning and interaction capabilities. The fine-tuning data adopts the GLM-5.1-style thinking format, so that in thinking mode the model more closely mirrors the reasoning behavior of GLM-5.1. It is specifically designed for deployment on consumer-grade hardware and for code-related conversational tasks.

This model is not an ablation variant.

🧱 Architecture & Configuration

  • Base Model: google/gemma-4-26B-A4B-it
  • Total MoE Experts: 32 (pruned from the original 128)
  • Active Experts per Token: 8 (maintaining the A4B activation scale)
  • Model Positioning: Lightweight MoE conversational base / consumer-hardware friendly
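As a quick sanity check of the configuration above, the pruned expert counts can be read from the model config. The sketch below uses HuggingFace Transformers; the attribute names (num_experts, num_experts_per_tok) are assumptions and may differ depending on the model's architecture class.

```python
# Minimal sketch: inspect the pruned MoE configuration.
# Attribute names are assumptions; actual keys depend on the architecture class.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("huihui-ai/Huihui4-8B-A4B-v2")
print(getattr(config, "num_experts", None))          # expected: 32 after pruning
print(getattr(config, "num_experts_per_tok", None))  # expected: 8 active per token
```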

πŸ“Š Training Data & Methodology

  • Data Source: huihui-ai/GLM-5.1-Multilingual-STEM, with code-preference dialogue data carefully extracted for training.
  • Training Method: Supervised Fine-Tuning (SFT).
  • Optimization Goal: Maintain semantic coherence, instruction-following capability, and code context understanding post-pruning.
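For reference, a minimal SFT sketch with TRL is shown below; this is not the authors' training recipe. The dataset ID comes from this card, while the split name, the data fields expected by SFTTrainer, and the hyperparameters are illustrative assumptions.

```python
# Minimal SFT sketch with TRL; illustrative only, not the authors' training script.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset ID from this card; split name and expected chat fields are assumptions.
dataset = load_dataset("huihui-ai/GLM-5.1-Multilingual-STEM", split="train")

trainer = SFTTrainer(
    model="huihui-ai/Huihui4-8B-A4B-v2",
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="huihui4-8b-a4b-v2-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```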

πŸ“ˆ Evaluation & Performance

  • Evaluation Tool: Quantitative perplexity assessment using the calculate_perplexity script.
  • Test Results: Preliminary dialogue tests indicate smooth interactions and stable logic. The model performs reliably in daily conversations and code-assistance tasks, with no significant performance degradation observed after pruning.
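The calculate_perplexity script is part of the authors' tooling; as an illustration of the same kind of check, the sketch below computes perplexity on a short code snippet with plain Transformers. The text sample and dtype are arbitrary assumptions.

```python
# Illustrative perplexity check; not the repo's calculate_perplexity script.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Huihui4-8B-A4B-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

text = "def fibonacci(n):\n    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)"
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    loss = model(**inputs, labels=inputs["input_ids"]).loss
print(f"Perplexity: {torch.exp(loss).item():.2f}")
```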

πŸ’» Inference & Deployment Recommendations

  • Recommended Frameworks: vLLM / llama.cpp / HuggingFace Transformers
  • VRAM Requirements:
    • FP16: < 18 GB
    • INT4/INT8 Quantized: roughly 6~9 GB (fits on a mainstream single consumer GPU)
  • Use Cases: Code conversation assistants, lightweight task planning, local deployment prototyping, and baseline validation for MoE pruning/merging techniques.
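A local inference sketch with HuggingFace Transformers is shown below; the chat message and generation settings are illustrative, not tuned recommendations. With vLLM, serving the model is typically a one-liner such as `vllm serve huihui-ai/Huihui4-8B-A4B-v2`.

```python
# Local inference sketch with Transformers; generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Huihui4-8B-A4B-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```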

πŸ—ΊοΈ Roadmap

  1. Multi-Domain Fine-Tuning: Further SFT on four distinct datasets to enhance the generalization capabilities of this 32-expert model.
  2. Expert Merging Validation: Experiment with merging the four independently fine-tuned models back into a 128-expert architecture, validating the feasibility of a "prune β†’ fine-tune β†’ merge" pipeline.
  3. Core Objective: Ultimately verify the engineering viability of training and iterating on large-scale MoE models using only consumer-grade hardware.
  4. If you're interested, feel free to fine-tune this model on your own datasets. We plan to merge all resulting models into a unified version at the end.

πŸ“ Notes

  • This model represents the initial pruned and fine-tuned iteration of the Huihui series. Future updates will involve multi-dataset integration and expert merging.

Citation

@misc{huihui4-8b-a4b-v2,
      title  = {{Huihui4-8B-A4B-v2}: A lightweight MoE (Mixture of Experts) conversational model},
      author = {Huihui-ai},
      year   = {2026},
      url    = {https://hf.co/huihui-ai/Huihui4-8B-A4B-v2}
}

Contact

If you have any questions, please raise an issue or contact us at support@huihui.ai.
