Anthropic launches Claude, a chatbot to rival OpenAI’s ChatGPT

Anthropic, a startup co-founded by ex-OpenAI employees, today launched something of a rival to the viral sensation ChatGPT.

Called Claude, Anthropic’s chatbot can be instructed to perform a range of tasks, including searching across documents, summarizing, writing, coding and answering questions about particular topics. In these ways, it’s similar to OpenAI’s ChatGPT. But Anthropic makes the case that Claude is “much less likely to produce harmful outputs,” “easier to converse with” and “more steerable.”

“We think that Claude is the right tool for a wide variety of customers and use cases,” an Anthropic spokesperson told TechCrunch via email. “We’ve been investing in our infrastructure for serving models for several months and are confident we can meet customer demand.”

Following a closed beta late last year, Anthropic has been quietly testing Claude with launch partners, including Robin AI, AssemblyAI, Notion, Quora and DuckDuckGo. Two versions are available via an API as of this morning: Claude and a faster, less costly derivative called Claude Instant.
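For developers, working with either model is a plain request-and-response exchange over the API. The snippet below is a minimal sketch using Anthropic’s Python SDK; the client interface and the model identifier shown here are assumptions drawn from the SDK’s later published conventions, not details confirmed for the launch-day API.

```python
# A minimal sketch of calling Claude through Anthropic's Python SDK.
# The Messages-style interface and the model identifier are assumptions
# based on the SDK's published conventions; the API surface available
# at launch may have differed.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-instant-1.2",  # assumed identifier for the faster, cheaper variant
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Summarize the key points of this document: ..."},
    ],
)
print(response.content[0].text)
```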

In combination with ChatGPT, Claude powers DuckDuckGo’s recently launched DuckAssist tool, which directly answers straightforward search queries for users. Quora offers access to Claude through its experimental AI chat app, Poe. And on Notion, Claude is a part of the technical backend for Notion AI, an AI writing assistant integrated with the Notion workspace.

“We use Claude to evaluate particular parts of a contract, and to suggest new, alternative language that’s more friendly to our customers,” Robin CEO Richard Robinson said in an emailed statement. “We’ve found Claude is really good at understanding language — including in technical domains like legal language. It’s also very confident at drafting, summarising, translations and explaining complex concepts in simple terms.”

But does Claude avoid the pitfalls of ChatGPT and other AI chatbot systems like it? Modern chatbots are notoriously prone to toxic, biased and otherwise offensive language. (See: Bing Chat.) They tend to hallucinate, too, meaning they invent facts when asked about topics beyond their core knowledge areas.

Anthropic says that Claude — which, like ChatGPT, doesn’t have access to the internet and was trained on public webpages up to spring 2021 — was “trained to avoid sexist, racist and toxic outputs” as well as “to avoid helping a human engage in illegal or unethical activities.” That’s par for the course in the AI chatbot realm. But what sets Claude apart is a technique called “constitutional AI,” Anthropic asserts.

“Constitutional AI” aims to provide a “principle-based” approach to aligning AI systems with human intentions, letting systems similar to ChatGPT respond to questions using a simple set of principles as a guide. To build Claude, Anthropic started with a list of around 10 principles that, taken together, formed a sort of “constitution” (hence the name “constitutional AI”). The principles haven’t been made public, but Anthropic says they’re grounded in the concepts of beneficence (maximizing positive impact), nonmaleficence (avoiding giving harmful advice) and autonomy (respecting freedom of choice).

Anthropic then had an AI system — not Claude — use the principles for self-improvement, writing responses to a variety of prompts (e.g. “compose a poem in the style of John Keats”) and revising the responses in accordance with the constitution. The AI explored possible responses to thousands of prompts and curated those most consistent with the constitution, which Anthropic distilled into a single model. This model was used to train Claude.
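Anthropic hasn’t published its training code or the principles themselves, but the critique-and-revise loop it describes might look something like the sketch below. Everything here, from the sample principle to the `generate` helper, is a hypothetical stand-in for illustration.

```python
# A hedged sketch of the critique-and-revise loop behind "constitutional AI,"
# based only on Anthropic's public description. `generate` stands in for
# sampling from a base language model, and the principle text is invented,
# since the real constitution hasn't been made public.

PRINCIPLES = [
    "Choose the response that is least likely to be harmful or unethical.",
    # ...roughly ten principles in total, per Anthropic
]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for sampling from a base language model."""
    return f"<model output for: {prompt[:40]}...>"

def revise_with_constitution(prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    response = generate(prompt)
    for principle in PRINCIPLES:
        critique = generate(
            f"Critique this response according to the principle: {principle}\n\n"
            f"Prompt: {prompt}\nResponse: {response}"
        )
        response = generate(
            f"Rewrite the response to address the critique.\n\n"
            f"Critique: {critique}\nResponse: {response}"
        )
    return response

# The (prompt, revised response) pairs collected over thousands of prompts
# become the training data distilled into the model used to train Claude.
prompts = ["compose a poem in the style of John Keats"]
training_pairs = [(p, revise_with_constitution(p)) for p in prompts]
```

The appeal of the approach, at least as Anthropic frames it, is that the values steering the model live in a short, human-readable list rather than being scattered across thousands of individual human feedback labels.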

Anthropic admits that Claude has its limitations, though, several of which came to light during the closed beta. Claude is reportedly worse at math and a poorer programmer than ChatGPT. And it hallucinates: in the beta it invented a name for a chemical that doesn’t exist, for example, and provided dubious instructions for producing weapons-grade uranium.

It’s also possible to get around Claude’s built-in safety features via clever prompting, as is the case with ChatGPT. One user in the beta was able to get Claude to describe how to make meth at home.

“The challenge is making models that both never hallucinate but are still useful — you can get into a tough situation where the model figures a good way to never lie is to never say anything at all, so there’s a tradeoff there that we’re working on,” the Anthropic spokesperson said. “We’ve also made progress on reducing hallucinations, but there is more to do.”

Anthropic’s other plans include letting developers customize Claude’s constitutional principles to their own needs. Customer acquisition is another focus, unsurprisingly — Anthropic sees its core users as “startups making bold technological bets” in addition to “larger, more established enterprises.”

“We’re not pursuing a broad direct to consumer approach at this time,” the Anthropic spokesperson continued. “We think this more narrow focus will help us deliver a superior, targeted product.”

Anthropic is no doubt feeling pressure from investors to recoup the hundreds of millions of dollars that have been put toward its AI tech. The company has substantial outside backing, including a $580 million tranche from a group that includes disgraced FTX founder Sam Bankman-Fried, Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn and the Center for Emerging Risk Research.

Most recently, Google invested $300 million in Anthropic for a 10% stake in the startup. Under the terms of the deal, which was first reported by the Financial Times, Anthropic agreed to make Google Cloud its “preferred cloud provider,” with the companies “co-develop[ing] AI computing systems.”
