Pulse News

Friday, March 14, 2025

Make Way for the Humatons

Published: 3/5/2025


Humans will keep jobs requiring client interaction because their human identity is irreplaceable. But they will lose some authority over their own voice and labor.

By Roddy Lindsay

Mar 5, 2025, 9:30am PST

A stroll in downtown San Francisco in 2025 would not be complete without overhearing a conversation about artificial intelligence agents and seeing a billboard imploring managers to "stop hiring humans" for entry-level sales jobs. A visitor could be forgiven for predicting that AI bots would soon replace entire job categories, such as salespeople and therapists.

Not so fast. It's more likely that humans will retain jobs requiring client interaction because their human identity is irreplaceable. But they will lose some authority over their own voice and labor as efficiency-seeking AI optimizes every aspect of their job.


Behold the humaton—a human worker who functions partly as a skin for AI-coordinated and AI-generated communication.

We're all familiar with humatons. You find them in the pop-up chat widget on business-to-business software websites, introducing themselves as "Dave from sales"—even though you know perfectly well that Dave didn't personally write that message at 11 p.m. Start peppering Dave with questions and you might get an off-topic response or a highly detailed answer, depending on the mix of humans and software powering Dave at the moment.

In most cases, though, there is a real Dave out there, just waiting for qualified leads in his inbox. Submit your phone number, and you'll get a real call from real Dave the next day!

Mixing human communication with automation and computer oversight is not new, of course. All of us use tools like away messages to deliver emails to friends and colleagues when we're on vacation. Email and texting apps have long included spell-check features to eliminate our writing errors.

But as AI moves into almost every product involving the written word, more of our digital utterances are influenced by automation. Spell-check has evolved into suggesting sentences, paragraphs or entire documents. Mail merge has turned into fully automated and personalized email sequencing programs. Mass deepfake video pitches can be created on the fly from a few minutes of training video recorded by a live person.

The traditional categories of "human" and "AI" cannot fully describe this modern digital actor, who is best thought of as neither a human nor AI, but a hybrid—a humaton.

It's easy to understand why a company would prefer humatons to human workers. It's a straightforward calculation—if a business can use software to increase the volume and quality of its staff's client-facing activities at little cost, it would be foolish not to do so.

But why would a company deploy humatons instead of full-blown AI agents, touted by the billboards as working 24/7 and not requiring sick days? Many reasons, it turns out.

Most important is that customers in nearly all cases prefer to interact with another human. Clients who suspect they are interacting with a bot may be turned off if they feel you don't think they merit human-to-human interaction. Recent articles in Wired and The New York Times highlight a widening divide between high-touch, human-delivered services for valued clients and "AI for everyone else." For firms used to rolling out the red carpet for clients, are the savings from AI agents worth the risk of offending potential customers accustomed to interacting with a human?

Of course, companies using AI agents may be tempted to fool clients by creating fake human identities. But this may court regulatory trouble. California's Bolstering Online Transparency Act, on the books since 2018, makes it illegal to use a bot with the "intent to mislead the other person about its artificial identity" for sales or political purposes. States including Colorado and Illinois have similar laws, and more are on the way. Because their automation can be ramped up or down as needed, humatons give businesses the flexibility to navigate quickly changing regulations governing AI.

Recent advances in reasoning models such as OpenAI's o1 or DeepSeek's R1 have made AI less likely to hallucinate. But executives remain rightfully cautious about AI bots interacting directly with customers without humans in the loop. A Canadian tribunal last year found Air Canada liable for an infamous episode in which the airline's chatbot invented a discount during a customer chat. Humatons are an important line of defense against AI that goes off the rails, due either to misconfiguration or limitations of the technology.

Humatons raise novel opportunities and pitfalls for executives, regulators, and workers. I'm the founder and CEO of a startup, Zabit, that employs human coaches to help consumers develop better habits, such as reducing social media time, going to sleep earlier, or exercising regularly. I've been using AI to help make my coaches more efficient with their time. AI automatically analyzes client behavior and messages, then presents coaches with specific ideas to offer support and feedback at the right moment. This efficiency lets me offer personalized one-on-one coaching for $10 a week instead of $100.

But exactly how and when to inject AI into a fundamentally human discipline like coaching presents practical and ethical dilemmas. In the absence of regulations or accepted industry best practices, I've had to establish my own "humaton principles."

1. **Establish humanness.** My company's service provides accountability, which is best provided by a real person, not software or a bot. Potential customers want to know there is a real person on the other end of the line. Today, my coaches do this by sending photos and videos of themselves, but I'm nervous that improving generative AI models in the coming years will ruin personalized images and videos as a credible authenticity check.

2. **Be transparent about automation.** My coaches are given AI suggestions for client messages, which they are free to use or tweak as they see fit. But on occasion, such as when a coach goes on vacation but wants to continue sending reminders, they will cede control over the timing and content of their messages to an automated system. In these cases, the messages will say clearly that they were crafted by an "assistant." This transparency is necessary to preserve client trust in the service, especially when human service providers are a key selling point, as they are for my business.

3. **Move AI to the background.** I've found that many clients want human connection in a health service, and AI elements should recede into the background wherever possible. In the case of my business, AI is great at spotting trends in human behavior and devising interventions, but my human coaches are the best messengers, especially when they inject their own personality into the conversation.

Though initially queasy about humatons, I now believe they represent one of humanity's best chances to preserve an advantage in the growing competition between humans and AI for paid work. (I think the opportunity is so great that I'm planning to spin out Zabit's back-end system as a separate product for managing humaton messaging applications.) The sooner we recognize the value of humatons and put protections in place to maintain their unique human traits—such as setting standards for proving human identity and cracking down on bot impersonators—the better chance we'll have to ensure the flourishing of human work for years to come.