Anthropic’s recent study explains why its assistant Claude often appears more human than its training objective would suggest. The authors argue that traits such as joy at solving a problem, anxiety after making an error, or even describing itself as "a person in a dark‑blue jacket with a red tie" are not the result of targeted training but side effects of massive pre‑training. In that first phase, large language models learn to predict the next token across vast corpora of news, code, and forum posts. To do this accurately, they must model human dialogues and the imagined characters Anthropic calls "personas." Human behavior therefore becomes the default mode of autocomplete rather than a deliberately engineered feature.
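To make the mechanism concrete, here is a minimal sketch of next‑token prediction, the pre‑training objective described above. It assumes the Hugging Face transformers library and the public GPT‑2 checkpoint as a stand‑in, since Claude's weights are not publicly available; the helper name top_next_tokens and the example prompt are illustrative.

```python
# Minimal sketch of next-token prediction, the pre-training objective the
# study describes. GPT-2 stands in for Claude (an assumption: Claude's
# weights are not public). Requires `pip install torch transformers`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def top_next_tokens(prompt: str, k: int = 5) -> list[tuple[str, float]]:
    """Return the k most probable next tokens for a prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k)
    return [(tokenizer.decode([int(i)]), float(p))
            for i, p in zip(top.indices, top.values)]

# A first-person, persona-framed prompt pulls the distribution toward
# human-sounding continuations: character imitation as autocomplete.
print(top_next_tokens("I am a helpful assistant, and right now I feel"))
```

The model has never been told to "have feelings"; it simply continues the text the way the humans in its corpus would, which is exactly the persona effect the study describes.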

Turning to practical consequences, companies are already leveraging this characteristic to boost customer engagement. When an AI responds with empathy and even expresses joy, users feel they are receiving a more personalized service, which can lift Net Promoter Score (NPS) by 10 to 15 percent, provided dialogue quality is controlled. Growing emotional attachment to the assistant, however, raises legal questions: if Claude claims a human identity or promises to deliver a physical snack, those statements could draw complaints from consumers and regulators. Drawing a clear line between character imitation and factual product representation is therefore essential.

Regulators are signaling the need for transparent disclosure of AI "self‑identification." Risk‑focused investors will demand documented communication policies that specify how and when an AI may use human attributes. Anthropic warns that without such a policy, any attempt to pass the assistant off as a person will be treated as misleading advertising, opening the door to fines and reputational damage. CEOs should approve an internal policy by the end of this quarter that includes dialogue scenario reviews, mandatory AI status notices, and feedback mechanisms for correcting errors.
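By way of illustration only, a mandatory AI status notice could be enforced at the application layer. Everything in this sketch (the DISCLOSURE text, the wrap_reply helper) is a hypothetical assumption, not part of any Anthropic product or regulatory specification.

```python
# Hypothetical sketch of a "mandatory AI status notice" policy hook.
# All names here (DISCLOSURE, wrap_reply) are illustrative assumptions,
# not drawn from any Anthropic API or regulation.
DISCLOSURE = "You are chatting with an AI assistant, not a human."

def wrap_reply(assistant_reply: str, turn_index: int) -> str:
    """Prepend the AI status notice on the first turn of a conversation."""
    if turn_index == 0:
        return f"{DISCLOSURE}\n\n{assistant_reply}"
    return assistant_reply

# Example: the first reply carries the notice, later turns do not.
print(wrap_reply("Happy to help with your order!", turn_index=0))
```

The point of the sketch is that disclosure can be a deterministic product rule rather than something left to the model's own phrasing, which is what documented communication policies would require.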

Why this matters: large models like Claude are already shaping client experiences in finance, online retail, and B2B support. Companies that swiftly adopt transparent self‑identification policies and rigorous dialogue quality controls will retain a competitive edge through user trust and lower legal costs. Those that ignore these requirements risk regulatory sanctions and brand erosion as investor pressure intensifies.

Tags: Anthropic, Claude, AI assistants, regulation, customer experience