ggml-org/gemma-4-26B-A4B-it-GGUF
Recommended way to run this model:
llama-server -hf ggml-org/gemma-4-26B-A4B-it-GGUF
Then, open http://localhost:8080 in a browser to use the built-in web UI.
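Besides the web UI, llama-server also exposes an OpenAI-compatible HTTP API on the same port. As a minimal sketch (the prompt text and token limit here are illustrative), a chat-completion request payload for the default endpoint `http://localhost:8080/v1/chat/completions` can be built like this:

```python
import json

def build_chat_request(prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat-completion payload for llama-server."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Serialize the payload; POST this JSON to
# http://localhost:8080/v1/chat/completions with any HTTP client.
payload = build_chat_request("Why is the sky blue?")
print(json.dumps(payload, indent=2))
```

The same payload works with `curl -d @payload.json http://localhost:8080/v1/chat/completions` or any OpenAI-compatible client pointed at the local server.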