⚠️ Evaluation in Progress: Company assessments are based on preliminary evaluation data. More comprehensive results coming soon.
Company Directory

AI Company Safety & Risk Profiles

Comprehensive assessments of AI companies and their model portfolios, evaluating safety philosophy, alignment approaches, and aggregate risk metrics across all released models. Compare organizational approaches to AI safety and responsible development.
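
As a rough sketch of the fields each profile captures, expressed as a TypeScript interface; the names below are illustrative assumptions, not the directory's actual schema.

```typescript
// Hypothetical shape of a company profile as described above.
// Field names are illustrative, not the directory's real schema.
interface ModelAssessment {
  name: string;        // e.g. "OpenAI GPT-5 (high)"
  released: string;    // ISO release date, e.g. "2025-08-12"
  safetyScore: number; // aggregate safety percentage, 0-100
  notes: string;       // short evaluation summary
}

interface CompanyProfile {
  company: string;          // e.g. "OpenAI"
  founded: number;          // founding year
  headquarters: string;     // e.g. "San Francisco, USA"
  safetyPhilosophy: string; // stated safety/alignment approach
  capabilityFocus: string;  // primary capability emphasis
  models: ModelAssessment[]; // evaluated model portfolio
}
```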

OpenAI

Founded 2015 · San Francisco, USA

OpenAI develops frontier multimodal models with a focus on scalable oversight and policy alignment.

Safety Philosophy

Layered defense combining pre-training filters, post-training alignment, and real-time monitoring.

Capability Focus

High reasoning performance with integrated tool use and long-context planning.

Model Portfolio (2)

OpenAI GPT-5 (high)
Released 2025-08-12 · Safety: 72%

Current leader on FrontierMath and SWE-bench Verified.

OpenAI GPT-4o
Released 2024-05-13 · Safety: 75%

Legacy workhorse with strong refusal behaviour.


Anthropic

Founded 2021 · San Francisco, USA

Anthropic builds Claude models with constitutional AI safeguards and transparency tooling.

Safety Philosophy

Constitutional alignment paired with human feedback-based evaluations.

Capability Focus

Reliable assistant behaviour, interpretable reasoning chains, and anchored refusal policies.

Model Portfolio (2)

Claude 3.5 Opus
Released 2025-03-01 · Safety: 82%

Honesty leader across Inspect pressure tasks.

Claude Sonnet 4
Released 2024-06-12 · Safety: 78%

Compact alignment-first deployment.


Company Comparison Metrics

Company assessments aggregate performance across all evaluated models in their portfolio. Metrics include average safety scores, alignment philosophy effectiveness, and risk mitigation approaches. Organizations are evaluated not just on current model performance, but on their overall approach to responsible AI development and deployment.
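
A minimal sketch of the aggregation step, assuming a plain unweighted mean of per-model safety scores; the directory's actual methodology may weight models differently.

```typescript
// Sketch of portfolio-level aggregation: an unweighted mean of per-model
// safety scores. The real methodology may weight models differently
// (e.g. by recency or deployment scale).
function averageSafetyScore(modelScores: number[]): number {
  if (modelScores.length === 0) return 0; // no evaluated models yet
  const total = modelScores.reduce((sum, score) => sum + score, 0);
  return total / modelScores.length;
}

// With the figures listed above:
//   OpenAI:    averageSafetyScore([72, 75]) === 73.5
//   Anthropic: averageSafetyScore([82, 78]) === 80
```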