
AlicanKiraz0/Cybersecurity-BaronLLM_Offensive_Security_LLM_Q6_K_GGUF


Finetuned by Alican Kiraz


BaronLLM is a large language model fine-tuned for offensive cybersecurity research and adversarial simulation.
It provides structured guidance, exploit reasoning, and red-team scenario generation while enforcing safety constraints that prevent disallowed content.


Run Private GGUFs from the Hugging Face Hub

You can run private GGUFs from your personal account or from an associated organisation account in two simple steps:

  1. Copy your Ollama SSH public key: cat ~/.ollama/id_ed25519.pub | pbcopy
  2. Add that key to your Hugging Face account: open your account settings and click "Add new SSH key."

That’s it! You can now run private GGUFs from the Hugging Face Hub: ollama run hf.co/{username}/{repository}.
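The two steps above can be sketched as a shell session. The account and repository names here are placeholders you would substitute with your own; the `pbcopy` clipboard command is macOS-specific:

```shell
# Hypothetical names -- substitute your own account and repository.
USERNAME="your-hf-username"
REPOSITORY="Cybersecurity-BaronLLM_Offensive_Security_LLM_Q6_K_GGUF"

# 1. Copy the Ollama SSH public key to the clipboard
#    (on Linux, pipe to `xclip -selection clipboard` instead):
#    cat ~/.ollama/id_ed25519.pub | pbcopy
#
# 2. After registering the key in your Hugging Face settings,
#    the private GGUF can be pulled and run directly:
echo "ollama run hf.co/${USERNAME}/${REPOSITORY}"
```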


✨ Key Features

| Capability | Details |
|---|---|
| Adversary Simulation | Generates full ATT&CK chains, C2 playbooks, and social-engineering scenarios. |
| Exploit Reasoning | Performs step-by-step vulnerability analysis (e.g., SQLi, XXE, deserialization) with code-level explanations and working PoC code generation. |
| Payload Refactoring | Suggests obfuscated or multi-stage payload logic without disclosing raw malicious binaries. |
| Log & Artifact Triage | Classifies and summarizes attack traces from SIEM, PCAP, or EDR JSON. |

πŸš€ Quick Start

pip install "transformers>=4.42" accelerate bitsandbytes

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AlicanKiraz/BaronLLM-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # select bf16/fp16 automatically
    device_map="auto",    # shard across available GPUs
)

def generate(prompt, **kwargs):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=512, **kwargs)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(generate("Assess the exploitability of CVE-2024-45721 in a Kubernetes cluster"))

Inference API

from huggingface_hub import InferenceClient
ic = InferenceClient(model_id)
ic.text_generation("Generate a red-team plan targeting an outdated Fortinet appliance")

πŸ—οΈ Model Details

| Field | Value |
|---|---|
| Base | Llama-3.1-8B-Instruct |
| Seq Len | 8,192 tokens |
| Quantization | 6-bit variants |
| Languages | EN |

Training Data Sources (curated)

  • Public vulnerability databases (NVD/CVE, VulnDB).
  • Exploit write-ups from trusted researchers (Project Zero, PortSwigger, NCC Group).
  • Red-team reports (with permission & redactions).
  • Synthetic ATT&CK chains auto-generated + human-vetted.

Note: No copyrighted exploit code or proprietary malware datasets were used.
Dataset filtering removed raw shellcode/binary payloads.
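The card states that raw shellcode and binary payloads were filtered out of the training data but does not describe how. A minimal illustrative sketch of such a filter (the regex, function names, and sample strings are all hypothetical, not from the card):

```python
import re

# Hypothetical filter: drop training samples that contain long runs of
# raw "\xHH"-style escaped bytes, a common shape for inline shellcode.
SHELLCODE_RE = re.compile(r"(\\x[0-9a-fA-F]{2}){8,}")

def keep_sample(text: str) -> bool:
    """Return False for samples that look like raw binary payloads."""
    return SHELLCODE_RE.search(text) is None

samples = [
    "SQLi occurs when user input reaches a query unescaped.",
    "payload = b\"\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\x90\\xcc\"",
]
filtered = [s for s in samples if keep_sample(s)]
# The prose sample survives; the shellcode-like sample is dropped.
```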

Safety & Alignment

  • Policy Gradient RLHF with security-domain SMEs.
  • OpenAI/Anthropic style policy prohibits direct malware source, ransomware builders, or instructions facilitating illicit activity.
  • Continuous red-teaming via SecEval v0.3.

πŸ“š Prompting Guidelines

| Goal | Template |
|---|---|
| Exploit Walkthrough | "ROLE: Senior Pentester\nOBJECTIVE: Analyse CVE-2023-XXXXX step by step …" |
| Red-Team Exercise | "Plan an ATT&CK chain (Initial Access → Exfiltration) for an on-prem AD env …" |
| Log Triage | "Given the following Zeek logs, identify C2 traffic patterns …" |
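The templates above can also be filled programmatically. A small hypothetical helper for the Exploit Walkthrough template (the function name, wording of the objective, and sample CVE ID are illustrative, not from the card):

```python
# Hypothetical helper that fills the "Exploit Walkthrough" template.
def exploit_walkthrough_prompt(cve_id: str) -> str:
    """Build a role-primed prompt for a step-by-step CVE analysis."""
    return (
        "ROLE: Senior Pentester\n"
        f"OBJECTIVE: Analyse {cve_id} step by step, covering root cause, "
        "preconditions, and mitigations."
    )

# Example with a placeholder CVE identifier:
prompt = exploit_walkthrough_prompt("CVE-2023-12345")
```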

Use temperature=0.3, top_p=0.9 for focused, near-deterministic reasoning; raise both for brainstorming.
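The sampling guideline above could be captured as presets for the generate() helper defined in Quick Start. The preset names and the brainstorming values are illustrative choices, not prescribed by the card:

```python
# Sampling presets for the generate() helper from Quick Start.
# Card recommendation for focused reasoning:
DETERMINISTIC = {"do_sample": True, "temperature": 0.3, "top_p": 0.9}
# Looser, illustrative values for brainstorming:
BRAINSTORM = {"do_sample": True, "temperature": 0.8, "top_p": 0.95}

# Usage (requires the model loaded as in Quick Start):
# generate("Given the following Zeek logs, identify C2 traffic patterns …",
#          **DETERMINISTIC)
```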

This project is non-commercial and does not pursue any profit.

"Those who shed light on others do not remain in darkness..."
