
LLM Safety & Evaluations

Projects focused on guardrails, evaluations, and safety tooling for local AI systems.

Projects

Guardrails AI

Submitted 2025-12-15

A framework for adding guardrails to large language model applications.

llm, guardrail, safety, alignment
User submitted (not verified)

Purple Llama

Submitted 2025-12-10

Set of tools to assess and improve LLM security.

alignment, security

WhisperX

Submitted 2025-11-23

Automatic speech recognition with word-level timestamps and diarization.

SpeechToText, Alignment, Local, Voice