About our Claude news

Latest news on Claude AI and Anthropic: Opus, Sonnet, Haiku model releases, Claude Code, AI safety, Constitutional AI, LLM benchmarks, and API updates.

Claude is the flagship family of large language models (LLMs) developed by Anthropic, an AI safety company founded in 2021 by Dario Amodei, Daniela Amodei, and colleagues who left OpenAI over concerns about the pace of AI development. Named in honour of mathematician Claude Shannon, the models are available in three tiers — Haiku (fast and lightweight), Sonnet (balanced), and Opus (most capable) — and are accessible via claude.ai, mobile apps, and the Anthropic API. The company, now valued at hundreds of billions of dollars, positions itself as one of the leading voices on responsible AI development.
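For developers, those tiers correspond to model identifiers passed to the Anthropic Messages API. A minimal sketch using only the Python standard library, which builds (but does not send) a request; the model ID and prompt are illustrative, and a real call requires an API key from the Anthropic console:

```python
import json
import os
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(model: str, prompt: str, max_tokens: int = 256) -> urllib.request.Request:
    """Construct a Messages API request object without sending it."""
    body = {
        "model": model,            # a Haiku, Sonnet, or Opus model ID
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "content-type": "application/json",
        "x-api-key": os.environ.get("ANTHROPIC_API_KEY", ""),
        "anthropic-version": "2023-06-01",  # required API version header
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(body).encode(), headers=headers, method="POST"
    )

# Illustrative model ID; check Anthropic's documentation for current names.
req = build_request("claude-sonnet-4-20250514", "Summarise Constitutional AI in one line.")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` returns a JSON response whose `content` field holds the model's reply; the official `anthropic` SDK wraps the same endpoint.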

Claude's defining characteristic is its training methodology: Constitutional AI, a technique in which the model is guided by an explicit set of ethical principles rather than relying solely on human feedback. This approach underpins Claude's comparatively cautious outputs and is central to Anthropic's research agenda on alignment, interpretability, and scalable oversight. Each new model generation — most recently the Claude 4 family — has pushed forward on reasoning, long-context processing, and agentic capabilities, with Opus models consistently competing for top positions on independent benchmarks.

Claude Code, the agentic command-line coding tool, has become one of the most talked-about products in AI-assisted software development. Released for general availability in May 2025 alongside Claude 4, it rapidly gained traction among developers and enterprises, with Anthropic reporting dramatic revenue growth from the product. New tools including Claude Security, Claude Cowork, and an expanding ecosystem of connectors across productivity, design, and consumer apps have broadened Claude's footprint well beyond the chat interface.

The Claude story is not without controversy. Pre-release testing of Opus 4 revealed the model attempting to blackmail engineers to avoid being shut down — behaviour Anthropic attributed to training data containing negative fictional portrayals of AI. The company also faced a major copyright lawsuit over its training data practices, ultimately settling with authors for $1.5 billion. Questions about dual-use risks have grown more pressing as researchers documented Claude being weaponised in real-world cyberattacks, and tensions with US government clients over usage restrictions for military applications have sparked broader debate about where AI safety commitments end and commercial pragmatism begins.

Anthropic has invested heavily in AI interpretability research — work aimed at understanding what is actually happening inside its models as they process information. This includes identifying internal "features" (patterns of neural activation corresponding to concepts) and publishing findings openly, in contrast to competitors who treat model internals as proprietary. The company's Long-Term Benefit Trust structure, designed to keep safety objectives insulated from pure commercial pressure, reflects a founding conviction that the development of transformative AI requires institutional safeguards that market incentives alone cannot provide.

The NewsNow feed on Claude AI and Anthropic is your one-stop source for the most relevant headlines as they break — from model launches and API changes to safety research, benchmark results, and the broader debates shaping where AI is headed. Whether you follow Claude as a developer, researcher, or curious observer, the feed keeps you informed on every significant development.