Anthropic's research (March 2026) shows AI already covers up to 75% of tasks in several occupations, yet real-world usage is just a third of its potential. Youth hiring into exposed roles has slowed by 14% — an early sign the market is shifting. Whoever builds AI agent infrastructure first wins. Directual provides everything you need without code, and its free course gets you started in an hour.
Introduction: Why This Article Isn’t Just Another AI Hype Piece
Every day, dozens of articles about artificial intelligence hit the web. Most of them are either breathless predictions along the lines of “robots will replace everyone” or dismissive hand-waves of “it’s just a chatbot.” The truth, as usual, is more complex and far more interesting.
On March 5, 2026, Anthropic — one of the world’s leading developers of large language models (LLMs) and the creator of Claude — published a major study on AI’s impact on the labor market. This isn’t a futurist’s forecast or a marketing brochure. It’s an academic paper backed by real data: an analysis of hundreds of occupations, millions of AI conversations, and cross-referencing with official US employment statistics.
[Image: chart from Anthropic’s research article]
The headline finding can be summarized like this: AI already covers a significant share of tasks across dozens of professions, and that share is only growing. There’s no mass unemployment yet — but early signals of a hiring slowdown among young workers in the most “exposed” occupations have already been detected.
What does this mean for business? It means the window of opportunity is wide open right now. Companies that are already building AI agents for their processes are gaining a competitive edge. Those who wait risk ending up like the businesses that ignored the internet in the late ’90s.
In this article, we’ll break down the key findings of Anthropic’s research, explain why AI agents (not just “chatting with an AI”) are what businesses actually need, and show how Directual lets you build these agents without writing code — quickly, reliably, and at scale.
What Anthropic’s Research Found: Key Takeaways
A New Metric: “Observed Exposure”
Anthropic’s researchers introduced a fundamentally new approach to measuring AI’s impact on the labor market. Previous models assessed theoretical feasibility: “Could an AI, in principle, speed up this task?” The new metric — observed exposure — factors in real-world AI usage in professional contexts.
The metric combines three data sources. First, the O*NET database, which describes tasks for approximately 800 occupations in the US. Second, Anthropic’s own data on how people actually use Claude. Third, theoretical estimates of which tasks LLMs could, in principle, make at least twice as fast.
A critical distinction: the new metric gives full weight to automated usage (via API, without human involvement) and only half weight to “augmentative” usage (where a human interacts with the AI in a conversational mode). This makes sense: if a task is fully automated via API, it’s closer to genuine replacement of a work function.
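As an illustration of the weighting idea (the paper’s exact formula may differ, so treat this as a sketch, not Anthropic’s methodology):

```python
def observed_exposure(automated_share: float, augmented_share: float) -> float:
    """Toy version of the weighting: automated (API) usage counts in full,
    conversational 'augmentative' usage at half weight."""
    return automated_share + 0.5 * augmented_share

# A task done 20% via API and 40% conversationally scores 0.40, not 0.60:
print(observed_exposure(0.20, 0.40))
```

The point of the discount is visible in the numbers: heavy conversational use alone cannot push a task into the high-exposure zone the way full automation can.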
The Gap Between Capability and Reality
One of the study’s most important findings: AI is far from reaching its theoretical potential. For example, in the “Computer & Math” occupational category, up to 94% of tasks could theoretically be automated. In reality, Claude covers just 33%.
This doesn’t mean AI is weak. It means there’s a massive untapped potential. And that’s precisely where the opportunity for business lies. Whoever closes this gap in their industry first gains a decisive advantage.
The reasons for this theory-to-practice gap are varied: legal constraints, the need for specialized software, human verification requirements, business process inertia. But the technological barriers are falling with every passing month.
Top Occupations at Risk
The study identified ten occupations with the highest level of observed exposure. Computer Programmers top the list at 75% task coverage. Customer Service Representatives come second. Data Entry Keyers rank third at 67%. Financial analysts, accountants, technical writers, and other white-collar specialists follow.
At the same time, 30% of workers have zero coverage — their tasks appeared too infrequently in the AI usage data. These include cooks, mechanics, lifeguards, bartenders, and other predominantly physical occupations.
Notice the pattern: the workers most exposed to AI impact are highly paid and highly educated. People with graduate degrees are four times more prevalent in the “exposed” group than in the “unexposed” one, and average earnings in the at-risk group are 47% higher.
No Catastrophe Yet — But Early Warning Signs Are Here
The good news: there’s no mass unemployment from AI so far. Unemployment rates among the most “exposed” occupations remain roughly at the same level as before ChatGPT launched.
However, there’s a troubling signal. Among young workers (ages 22–25), there’s been a roughly 14% decline in the rate of hiring into exposed occupations. In low-exposure occupations, youth hiring remains stable. No such effect exists for workers over 25.
What does this mean? Companies aren’t yet laying off current employees because of AI, but they’re already slowing down the hiring of new ones for positions where AI can handle a significant share of the work. It’s the classic “canary in the coal mine” — an early harbinger of more sweeping changes.
Why AI Agents, Not Just “A Neural Network”
The Difference Between a Chatbot and an Agent
When most people think about AI in business, they picture a chatbot on a website or an assistant like ChatGPT. These are useful tools, but they operate in a “question-and-answer” mode: the human asks, the AI responds. Initiative always rests with the human.
An AI agent is a fundamentally different level. An agent can autonomously execute a sequence of actions to achieve a goal. It can query databases, call external service APIs, make decisions based on intermediate results, process incoming events, and even escalate non-standard situations to a human.
A simple example: a chatbot can answer a customer’s question about order status if the customer initiates contact. An agent, on the other hand, can monitor all orders, proactively detect delivery issues, notify customers preemptively, generate claims to logistics partners, and track their resolution — all without any human involvement.
Anthropic’s Data Confirms: Automation Is the Key Trend
Anthropic’s research distinguishes between two types of AI usage: augmentation and automation. With augmentation, a human works alongside AI — editing text, asking questions, using it as an assistant. With automation, the AI performs the task independently, typically via API.
The observed exposure metric deliberately gives full weight to automated usage and half weight to augmentation. The researchers explain that automation is closest to actually replacing work functions. And automation is precisely what AI agents do.
The trend is clear: business is moving from “let’s give employees access to ChatGPT” to “let’s automate entire workflows with AI agents.” Anthropic’s data confirms this is exactly the trajectory the market is following.
Agents Close the “Potential Gap”
Remember that gap between AI’s theoretical capabilities (94% of tasks in IT) and actual coverage (33%)? A significant portion of this gap is explained not by limitations of the models themselves, but by the lack of infrastructure to deploy them.
An LLM by itself is a brain without a body. It can think, analyze, and generate text. But it can’t log into your CRM, check an order status in your ERP system, send an email to a customer, update a database record, or trigger a third-party webhook. All of that requires infrastructure — the body, hands, and feet for the AI brain.
That’s exactly what an AI agent is: an LLM plus tools through which the model interacts with the real world. And building that infrastructure is the key challenge for any business that wants to use AI not as a toy, but as a competitive advantage.
Labor Market Projections: What the Statistics Say
Anthropic’s study compared its observed exposure metric with the US Bureau of Labor Statistics (BLS) employment projections for 2024–2034. The result: occupations with higher observed AI exposure show weaker projected employment growth.
Specific numbers: every additional 10 percentage points of AI task coverage corresponds to a 0.6 percentage point decline in projected employment growth. This may seem small, but at the scale of entire industries, it translates to millions of jobs.
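A quick back-of-the-envelope illustration of that linear relationship (a simple extrapolation of the stated slope, not a figure from the study):

```python
# Each extra 10 percentage points (pp) of AI task coverage corresponds to
# a 0.6 pp decline in projected employment growth.
SLOPE = 0.6 / 10  # pp of projected growth lost per pp of coverage

def growth_penalty(coverage_pp: float) -> float:
    return coverage_pp * SLOPE

# An occupation at 75% coverage would project roughly 4.5 pp lower growth
print(growth_penalty(75))
```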
An important detail: the correlation was found specifically with the observed exposure metric based on real usage data. The purely theoretical metric (can AI in principle speed up the task?) showed no such correlation. This suggests that actual AI deployment, not just potential capabilities, is what affects the labor market.
For business, the conclusion is clear: the market is already restructuring. Companies using AI agents are optimizing their processes and staffing needs. Those that don’t implement AI automation will be competing against those who already have — and losing on cost, speed, and quality.
Why You Need to Build AI Agents Right Now
The Window of Opportunity
Anthropic’s data paints a clear picture: we’re at the beginning of an S-curve adoption pattern. The theoretical potential is enormous, actual coverage is still modest, but it’s growing every quarter. The Anthropic Economic Index, published regularly, tracks the steady expansion of tasks performed by AI in professional settings.
Right now is the ideal time to get in. The entry barrier is dropping: no-code and low-code platforms make building AI agents accessible to non-developers. Models are getting cheaper and more powerful with every update. API ecosystems are maturing. Integration standards (MCP, tool use) are becoming robust.
But the window won’t stay open forever. Once observed exposure hits critical mass in your industry, the first-mover advantage evaporates. Instead of “we deployed AI agents and gained a competitive edge,” it becomes “we’re forced to deploy AI agents just to keep up.” As the history of technology waves shows — from electricity to the internet to mobile apps — latecomers always pay more.
Young Workers Are Already Feeling the Pressure
The 14% decline in youth hiring into AI-exposed occupations isn’t just a statistic. It means companies are already reassessing their need for junior roles. Where a department of 10 used to hire 2–3 interns a year to handle routine tasks, AI is now taking on a portion of that workload.
This creates a paradox: on one hand, the need for people who can manage AI agents is growing. On the other, the need for people who perform tasks accessible to AI is declining. Businesses must recognize this shift and restructure their processes: not just “give people AI tools,” but build a full-fledged AI infrastructure with agents, data processing pipelines, and monitoring systems.
Regulatory Pressure Is Mounting
Research like Anthropic’s is drawing attention from regulators worldwide. The European AI Act is already taking effect, and AI legislation in other jurisdictions is evolving. The sooner a business starts building its AI infrastructure on transparent, auditable platforms, the easier it will be to meet regulatory requirements.
Ad hoc integrations through scripts and homegrown API wrappers are extremely difficult to audit. Platform-based solutions, by contrast, provide logging, versioning, access control, and other essential compliance elements out of the box.
What You Need to Build AI Agents: An Infrastructure Checklist
Anthropic’s research implicitly but convincingly demonstrates that bridging the gap between AI’s theoretical potential and real-world usage requires infrastructure. Not just access to an LLM’s API, but a full-fledged environment for creating, deploying, and managing AI agents. Let’s break down what that looks like.
1. LLM Provider Connectivity
An agent needs to be able to call language models: OpenAI, Anthropic Claude, Google Gemini, Mistral, and others. Ideally, with the ability to switch between providers without rewriting business logic.
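As a sketch, provider switching boils down to hiding per-provider request formats behind one function, so business logic never changes when you swap models. The endpoint URLs and payload shapes below follow the public OpenAI and Anthropic HTTP APIs, but treat them as illustrative assumptions rather than a tested client:

```python
def build_request(provider: str, model: str, prompt: str) -> dict:
    """Return the URL and JSON body for a chat call to the given provider."""
    if provider == "openai":
        return {
            "url": "https://api.openai.com/v1/chat/completions",
            "json": {"model": model,
                     "messages": [{"role": "user", "content": prompt}]},
        }
    if provider == "anthropic":
        return {
            "url": "https://api.anthropic.com/v1/messages",
            "json": {"model": model, "max_tokens": 1024,
                     "messages": [{"role": "user", "content": prompt}]},
        }
    raise ValueError(f"unknown provider: {provider}")

# The calling code is identical either way; only the string changes:
req = build_request("openai", "gpt-4o-mini", "Summarize this order history")
```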
2. Tools and Integrations
An LLM without tools is a brain without hands. An agent needs tools for working with databases, calling external APIs, sending notifications, processing files, handling webhooks, and more. It’s the tools that transform a language model from a “smart chatbot” into a fully autonomous worker.
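Conceptually, a tool set is just a registry of named functions the agent is allowed to call. A minimal sketch, with the tool bodies stubbed out (in production they would hit your database, ERP, or messenger integration):

```python
TOOLS = {}

def tool(name):
    """Register a function under a name the agent can reference."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("get_order_status")
def get_order_status(order_id: str) -> str:
    # Stub: a real tool would query the orders database here.
    return f"order {order_id}: shipped"

@tool("notify_customer")
def notify_customer(email: str, text: str) -> bool:
    # Stub: a real tool would call an email or messenger API.
    return True

# The agent invokes tools by name, never by hard-coded function calls:
status = TOOLS["get_order_status"]("A-102")
```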
3. Orchestration and Workflows
An agent isn’t a single API call. It’s a chain of actions: receive input data, process it, call the LLM, invoke a tool, process the result, potentially call the LLM again with new context, make a decision, execute an action. You need an orchestration system to manage this cycle.
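The cycle just described can be compressed into a few lines of Python, with the model stubbed out so the control flow is visible (fake_llm and the lookup tool are placeholders, not a real API):

```python
def fake_llm(context):
    # Stand-in for a model call: requests a tool once, then finishes.
    if not any(m["role"] == "tool" for m in context):
        return {"action": "tool", "tool": "lookup", "arg": "order-7"}
    return {"action": "final", "text": "Order-7 is delayed; customer notified."}

def run_agent(task, tools, max_steps=5):
    context = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = fake_llm(context)
        if decision["action"] == "final":
            return decision["text"]
        result = tools[decision["tool"]](decision["arg"])    # invoke a tool
        context.append({"role": "tool", "content": result})  # feed result back
    return "escalated to a human: step budget exhausted"

answer = run_agent("check order-7", {"lookup": lambda oid: f"{oid}: delayed"})
```

Note the max_steps budget: a production orchestrator needs exactly this kind of guard so an agent cannot loop forever.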
4. Data and Context Storage
An agent needs “memory”: a database for storing intermediate results, interaction histories, and user data. For RAG systems (Retrieval-Augmented Generation), vector stores for semantic search across documents are essential.
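At the core of that semantic search is vector similarity. A toy version, with hand-made three-dimensional vectors standing in for real embeddings (which would come from an embedding model and live in a vector store):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Two "documents" with toy embedding vectors
DOCS = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
}

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query vector close to "refund policy" retrieves that document:
print(retrieve([0.85, 0.2, 0.0]))
```

The retrieved text is then pasted into the LLM prompt as context, which is the whole trick behind RAG.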
5. Monitoring and Logging
When an AI agent operates autonomously, it’s critical to see what it’s doing: what requests it sends to the LLM, what responses it receives, which tools it invokes, what errors arise. Without a monitoring and logging system, an agent is a black box you can’t trust.
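A minimal sketch of what such logging looks like: every step is wrapped so its name, outcome, and latency land in a structured log (here a list; in production, a database or log service):

```python
import time

LOG = []  # stand-in for a persistent log store

def logged(step_name, fn):
    """Wrap an agent step so every call is recorded, including failures."""
    def wrapper(*args, **kwargs):
        t0 = time.time()
        try:
            out = fn(*args, **kwargs)
            LOG.append({"step": step_name, "ok": True,
                        "ms": round((time.time() - t0) * 1000)})
            return out
        except Exception as exc:
            LOG.append({"step": step_name, "ok": False, "error": str(exc)})
            raise
    return wrapper

# Wrap a stubbed classification step and call it
classify = logged("classify_request", lambda text: "complaint")
classify("my parcel never arrived")
```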
6. Scaling and Fault Tolerance
A single agent handling 10 requests a day is a simple task. Dozens of agents processing thousands of requests per hour is an engineering challenge. You need cloud infrastructure with auto-scaling, task queues, error handling, and retries.
7. Security
An agent works with company and customer data. Access control, encryption, action auditing, and prompt injection protection aren’t “nice to have” — they’re mandatory requirements for any production solution.
Assembling all of this from separate components is possible but expensive: you’d need a team of developers, months of work, and ongoing maintenance. That’s why a platform-based approach is the only sensible choice for 90% of companies.
Directual: The Infrastructure for Building AI Agents
Directual is a cloud-based no-code/low-code platform originally designed for building complex backend applications. Today, it provides everything you need to build full-fledged AI agents — from LLM connectivity to workflow orchestration and data storage.
Why Directual, Not “Just Another Bot Builder”
The market is flooded with tools for building chatbots and simple AI automations. Most of them work on an “if the user says X, respond with Y” principle. That’s not agents. That’s decision trees with an LLM wrapper.
Directual offers a fundamentally different approach. The platform provides a full-fledged environment for building backend logic: databases, API endpoints, data processing workflows, external service integrations, and access control systems. These are the building blocks from which you can assemble an agent of any complexity.
Directual’s Key Capabilities for AI Agents
Connect to Any LLM
Directual lets you integrate with any LLM provider via HTTP requests: OpenAI, Anthropic Claude, Google Gemini, Mistral, and any other model accessible through an API. You’re not locked into a single provider and can switch between models based on the task and budget.
Workflows as an Agentic Loop
Directual’s backend scenarios are a visual business logic builder. You construct chains: trigger (webhook, schedule, database change) → data processing → LLM call → response analysis → tool invocation → re-call to LLM if needed → execute action. This is the agentic loop, implemented visually.
Built-in Database
Every Directual project comes with a built-in database. An agent can store conversation histories, analysis results, customer data, and intermediate states — all in one place. For RAG systems, you can connect vector stores.
API Out of the Box
Directual automatically generates REST APIs for every data structure. This means your agent can not only process incoming requests but also expose APIs for external systems. You can embed an agent into any existing business process via standard API calls.
Integrations and Webhooks
Directual supports integrations with hundreds of services: CRM systems, messengers (Telegram, WhatsApp), payment systems, marketplaces, and email services. An agent can interact with the outside world through HTTP requests and webhooks, responding to events in real time.
Cloud Infrastructure
Directual runs in the cloud — you don’t need to worry about servers, scaling, or fault tolerance. The platform provides auto-scaling, monitoring, and logging. You focus on the agent’s business logic, not DevOps.
Security and Access Control
The built-in roles and permissions system lets you control which data the agent can access and which users can interact with it. All actions are logged. This is critical for enterprise use cases and regulatory compliance.
Real-World Use Cases
Customer Service
Remember that Customer Service Representatives are the second most exposed occupation in Anthropic’s study? With Directual, you can build an agent that processes incoming customer inquiries: classifies the request, searches a knowledge base (RAG), formulates a personalized response, and escalates to a human when needed. All without a single line of code.
Marketplace Review Management
For e-commerce sellers on Amazon, eBay, Etsy, and other marketplaces, an agent can monitor new reviews, analyze sentiment, generate responses tailored to the product context and platform policies, and classify reviews by actionability. All in fully automatic mode.
Financial Analysis
Financial analysts are among the top AI-exposed occupations. With Directual, you can build an agent that aggregates data from multiple sources, prepares analytical summaries, detects anomalies, and generates reports. The human focuses on strategic decisions while the agent handles routine data processing.
HR and Recruiting
An agent can pre-screen resumes, conduct initial candidate screening, compose personalized outreach messages, and schedule interviews. Given that hiring is already slowing in exposed occupations, optimizing HR processes through AI agents is becoming a critical necessity.
Internal Operations
Document processing, report preparation, request approvals, KPI monitoring — all tasks that an AI agent on Directual can handle. Every automated process frees up your employees’ hours for tasks requiring creative thinking and expertise.
Free Course: Start Building AI Agents Today
We understand that the leap from “interesting, but unclear” to “building my first agent” is a big step. That’s exactly why Directual offers a free course on building AI agents and RAG systems.
What You’ll Learn
The course consists of three video lessons totaling about 60 minutes. The first lesson covers the fundamentals: what LLMs are, how they work, their strengths and weaknesses, and how to add AI to your product without writing code.
The second lesson dives into RAG — Retrieval-Augmented Generation. You’ll learn how to create an AI assistant that answers based on your own data rather than “hallucinating” from the model’s general knowledge. Vector embeddings, semantic search, knowledge base integration — all hands-on inside Directual.
The third lesson covers advanced techniques: structured output (JSON), chain-of-thought reasoning, hallucination detection via logprobs, and running open-source models on your own hardware.
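To give a flavor of one of those techniques, here is a hedged sketch of structured output: the model is prompted to reply only in JSON matching a small schema, and the agent validates the reply before acting on it. The schema and the sample reply are illustrative, not taken from the course:

```python
import json

# Fields the agent requires, with expected Python types
REQUIRED = {"sentiment": str, "needs_human": bool}

def parse_structured(raw: str) -> dict:
    """Parse and validate an LLM reply; raise if it doesn't fit the schema."""
    data = json.loads(raw)  # raises on non-JSON, which counts as a failure
    for key, typ in REQUIRED.items():
        if not isinstance(data.get(key), typ):
            raise ValueError(f"missing or mistyped field: {key}")
    return data

# A well-formed model reply passes validation:
reply = parse_structured('{"sentiment": "negative", "needs_human": true}')
```

Validation like this is what lets downstream workflow steps trust the model’s output instead of parsing free-form prose.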
The course is designed for entrepreneurs, product managers, marketers, analysts — anyone who wants to understand how to build AI agents but doesn’t have deep programming expertise. All you need is a basic understanding of Directual (there are separate introductory courses in the academy for that).
After completing the course, you’ll be able to independently build a working AI agent: from designing a RAG system to deploying the agent in production.
AI Agent Deployment Strategy: A Step-by-Step Plan
Anthropic’s data suggests a sensible strategy: start with tasks that already show high observed exposure, and gradually expand coverage.
Step 1: Process Audit
Map out your company’s tasks. Identify which ones fall into high observed exposure categories: text processing, data analysis, customer service, report generation, document handling. That’s where AI agents will deliver maximum impact.
Step 2: Pilot Project
Choose one specific task and build a prototype agent on Directual. This could be auto-responding to routine inquiries, generating analytical summaries, or processing incoming documents. Key requirement: the task should be routine enough for automation to be noticeable, but not so critical that agent errors would cause serious consequences.
Step 3: Iterate and Expand
Based on pilot results, refine the agent: improve prompts, expand the knowledge base, add edge case handling. Then move to the next task. Directual lets you build agents incrementally: each new scenario is added as a separate module without touching existing logic.
Step 4: Scale
Once pilot agents prove their effectiveness, scale the solution: connect new channels (messengers, email, marketplaces), expand the number of scenarios handled, and integrate agents with core business systems. Directual’s cloud infrastructure handles scaling without needing a DevOps team.
Step 5: Monitor and Optimize
Regularly analyze agent performance: what percentage of requests are handled without human involvement, how accurate are the responses, where do errors occur. Use this data for continuous improvement. The Anthropic Economic Index is updated regularly — track how the AI automation landscape evolves in your industry.
AI Agent Myths and Misconceptions: An Honest Look
Despite the compelling data from Anthropic, plenty of myths and misconceptions about AI agents persist. Let’s address the most common ones and explain why they don’t hold up.
Myth 1: “AI Agents Are Just Hype — People Will Forget in a Year”
This argument was made about the internet in 1998, smartphones in 2008, and cloud computing in 2010. The skeptics were wrong every time — not because the technologies were perfect from the start, but because they solved real business problems.
Anthropic’s research shows that AI already covers a measurable share of tasks in real occupations. This isn’t a forecast — it’s a fact backed by usage data. 97% of tasks that people actually perform with Claude fall into categories assessed as theoretically feasible for AI back in 2023. Theory is becoming practice, and the process is accelerating.
Moreover, trillions of dollars in investment stand behind AI agents: from Microsoft and Google to Anthropic and dozens of startups. Infrastructure is developing rapidly. Models are getting cheaper and better every quarter. Opting out of this trend is like deciding in 2005 that “the internet is a bubble” and continuing to operate without a website.
Myth 2: “Our Company Is Too Small for AI”
Quite the opposite. Anthropic’s research shows that AI primarily automates tasks typical of white-collar work: text processing, customer service, data analysis, report preparation. These are tasks that exist in companies of every size.
For small businesses, the effect can be even more dramatic. If you have 5 employees and one of them spends 60% of their time answering routine customer questions, an AI agent can effectively free up nearly an entire headcount. For a company of 1,000, the same effect gets diluted across dozens of departments.
No-code platforms like Directual make building agents accessible without hiring an AI engineer at $100K+ per year. You can build a working agent in days, not months, and start seeing returns almost immediately.
Myth 3: “AI Is Unreliable and Hallucinates — You Can’t Trust It”
This is a valid concern — but it’s not an argument against AI agents. It’s an argument for proper architecture. A well-designed AI agent isn’t a “bare” language model set loose. It’s a system with clear rules, constraints, and control mechanisms.
First, RAG technology (Retrieval-Augmented Generation) enables the agent to answer based on your specific data rather than hallucinating from general knowledge. Second, the agent can be configured to escalate: at any sign of uncertainty, it passes the task to a human. Third, all agent actions are logged, and you can check what it did and why at any time.
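The escalation rule from the second point can be sketched in a few lines; the thresholds are illustrative, and in practice you would tune them against real traffic:

```python
def decide(retrieval_score: float, model_confident: bool) -> str:
    """Route to a human whenever the evidence or the model's confidence is weak."""
    if retrieval_score < 0.6 or not model_confident:
        return "escalate"   # a person reviews the case
    return "answer"         # the agent replies on its own

# Strong retrieval + confident model → the agent answers itself:
route = decide(retrieval_score=0.82, model_confident=True)
```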
Directual provides all of these mechanisms: a built-in database for RAG, workflows with escalation conditions, and full action logging. You control the agent, not the other way around.
Myth 4: “You Need to Be a Programmer to Build an AI Agent”
This was true two years ago. Not anymore. No-code and low-code platforms have radically lowered the barrier to entry. With Directual, you build agents visually: drag blocks, configure connections, set prompts and conditions. All you need is an understanding of your business process and basic logic.
Directual’s free AI agent course takes just 60 minutes — and afterward, you’ll be able to build a working agent with a RAG system. This isn’t marketing hyperbole — it’s the actual training duration, accessible to anyone with a technical comfort level of “I know how to use a spreadsheet.”
Myth 5: “Better to Wait Until the Technology Matures”
Anthropic’s data dispels this myth directly. The hiring slowdown for young workers in exposed occupations is already real. Companies that have deployed AI are already optimizing their processes. Every month you wait is a month your competitors are building their advantage.
Early adoption also has a cumulative effect. An agent working today accumulates data: customer question patterns, common issues, effective response formulations. That data becomes a competitive advantage in its own right. The earlier you start, the more data you’ll collect.
You don’t need to wait for the technology: it’s already here. Claude, GPT, and Gemini are good enough for the vast majority of business tasks. Platforms like Directual are mature enough. The only thing missing is your decision to start.
The Economics of AI Agents: Running the Numbers
Cost of Building an Agent: Three Approaches
Let’s honestly compare three approaches to building an AI agent — by cost, timeline, and outcome.
Approach 1: Build from Scratch
You’ll need a backend developer, a frontend developer (if you need an interface), a DevOps engineer for deployment and maintenance, and an ML or prompt engineer for the AI components. Minimum team: 2–3 people. Realistic timeline from idea to production: 3–6 months. Cost: $50K to $200K+, not counting ongoing support.
You’ll end up with a custom solution that you must maintain and evolve yourself. If the key developer leaves, you risk being stuck with an undocumented codebase that’s impossible to hand off quickly.
Approach 2: No-Code Automation Platforms
Automation platforms let you quickly assemble simple chains: trigger → LLM call → action. This works for primitive scenarios but breaks down when you try to build a real agent: there’s no built-in database (or only a primitive one), no proper API, and no mechanism for implementing an agentic loop with tool use.
Automation platforms are great for linear workflows: “if an email arrives, post to Slack.” But an AI agent isn’t a linear workflow. It’s a loop: the model thinks, calls tools, analyzes results, thinks again, and makes a decision. That requires a different architecture.
Approach 3: Directual
Directual combines the best of both worlds. As a platform, it provides the speed and accessibility of no-code. As a backend framework, it delivers a fully featured database, APIs, workflows with arbitrary logic, and HTTP integrations. Building an agent from idea to MVP takes days, not months. Cost: an order of magnitude lower than custom development.
Importantly, Directual isn’t a toy builder. Production systems with thousands of users run on it. The cloud infrastructure ensures scalability, while the visual interface makes logic transparent to all team members, not just developers.
AI Agent ROI: A Customer Service Example
Let’s take a typical scenario: an e-commerce seller receives 100 reviews per day. An employee spends an average of 5 minutes on each response: read the review, understand the context, compose the reply, send it. That’s 500 minutes — over 8 hours, a full headcount.
An AI agent on Directual can handle 80–90% of these reviews automatically: standard thank-yous, typical complaints, information requests. Only 10–20% remain for manual processing — non-standard situations requiring human judgment. Savings: 6–7 hours per day, or 130–150 working hours per month.
At a fully loaded employee cost of $25–$40/hour, the savings come to $3,250–$6,000 per month. The cost of AI inference using efficient models (Claude Haiku, GPT-4o-mini) is $50–$100 per month for 100 reviews/day. Net of inference costs, you keep well over 90% of those savings.
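A quick sanity check of that arithmetic, with the automation rate and hourly cost picked from the ranges above:

```python
# Inputs from the example: 100 reviews/day, 5 minutes each
reviews_per_day = 100
minutes_per_review = 5
automation_rate = 0.80       # lower bound of the 80–90% range
hourly_cost = 30             # within the $25–$40 range
work_days_per_month = 22

hours_saved_daily = reviews_per_day * minutes_per_review / 60 * automation_rate
monthly_savings = hours_saved_daily * work_days_per_month * hourly_cost

# ~6.7 hours/day and ~147 hours/month saved, worth about $4,400
print(round(hours_saved_daily, 1), round(monthly_savings))
```

Even at the conservative end of every range, the monthly savings exceed the inference bill by more than an order of magnitude.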
And this is just one scenario. When the agent also handles buyer questions, marketplace chat conversations, and review analytics, the economic impact multiplies.
Non-Obvious Benefits
Beyond direct headcount savings, AI agents deliver several non-obvious advantages. Response speed: an agent replies in seconds, not hours. On marketplaces, fast review responses are a ranking factor and loyalty driver. 24/7 availability: an agent doesn’t take lunch breaks, sick days, or vacations.
Quality consistency: a human writes worse responses on Friday evening than Monday morning. An agent performs at the same quality level 24/7. Scalability: if tomorrow you get 1,000 reviews instead of 100, the agent handles it — you just need to increase the inference budget by a few hundred dollars.
Data: every agent-customer interaction is structured data you can analyze. What questions come up most often? Which products generate the most complaints? Which response formulations improve ratings? This analytics is invaluable to a business, and it emerges as a byproduct of the agent’s work.
Industry Applications: Where AI Agents Deliver Maximum Impact
E-Commerce and Marketplaces
This is one of the most mature industries for AI agents. Automating review and question responses, generating product listing content, monitoring competitor pricing, managing ad campaigns, and sales analytics — all tasks that AI agents perform today.
According to Anthropic, customer service is one of the occupations with the highest observed AI exposure. For marketplace sellers, this means automating buyer interactions isn’t a question of “should we?” but “when?” And the answer is: better now.
Finance and Accounting
Financial analysts are in the top 10 occupations by observed exposure. Processing source documents, reconciling data, preparing reports, and monitoring budget variances are all routine tasks that consume time better spent on strategic analysis.
An AI agent can process incoming invoices, classify expenses, prepare summary reports, and flag anomalies. On Directual, such an agent connects to your accounting system via API and operates in fully automatic mode.
HR and Recruiting
Pre-screening resumes, writing job descriptions, answering typical candidate questions, and generating hiring funnel reports are all tasks ideally suited for AI agents. Given that Anthropic’s research specifically identifies changes in hiring patterns, optimizing HR processes through agents is a strategically critical direction.
Legal Services
Contract analysis, precedent research, preparing standard documents, and monitoring legislative changes are tasks where an AI agent with a RAG system delivers enormous time savings. The final decision always stays with the lawyer — the agent handles the preparatory work that used to take hours.
Education and Training
Personalized learning programs, automated assignment checking, test generation, and answering student questions based on course materials — an AI agent can become a personal tutor for every learner. A RAG system ensures the agent responds strictly based on specific course materials rather than making things up.
Technical Support
Ticket classification, knowledge base search, step-by-step user instructions, and automatic escalation of complex cases — this is AI agents’ bread and butter. Anthropic’s research confirms that technical support tasks are already being actively covered by AI. Companies that have deployed agents in tech support report a 40–60% reduction in first-line workload.
The Future: What’s Coming
Anthropic’s research points unambiguously in one direction: AI task coverage will grow. Models are getting more powerful, integrations are getting simpler, inference costs are falling. The red zone on Anthropic’s charts (actual usage) will steadily approach the blue zone (theoretical potential).
But this isn’t a story about “robots taking all jobs.” It’s a story about transformation. Jobs won’t disappear — they’ll change. People who know how to work with AI agents, manage them, and design them will be worth their weight in gold. Companies that have built AI infrastructure will dominate their niches.
The youth hiring slowdown data is the first alarm bell. In 2–3 years, as models improve further and the habit of automation takes root, changes will become far more visible. And those who started building infrastructure today will be ready.
Anthropic explicitly states it intends to regularly update its research as new data emerges. This means observed exposure will keep growing — and each update will serve as a reminder that inaction is getting more expensive by the day.
Trends That Will Amplify the Role of AI Agents
Multimodality: modern models already understand not just text but images, audio, and video. This opens new scenarios: agents will be able to process defect photos from customers, analyze error screenshots, and perform visual quality control. Directual already supports multimodal LLM requests through standard HTTP integrations.
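As an illustration of what such an HTTP integration sends, here is a sketch of a multimodal request body in the OpenAI-style chat format (text plus a base64-encoded image). The model name is an assumption and no request is actually sent; the point is the payload shape a generic HTTP step would POST.

```python
import base64
import json

# Sketch of a multimodal chat payload: a question plus a photo,
# encoded as a data URL. Model name is illustrative.

def build_defect_report_payload(image_bytes: bytes, question: str) -> dict:
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4o",  # assumed model name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{encoded}"}},
            ],
        }],
    }

payload = build_defect_report_payload(
    b"\xff\xd8fake-jpeg-bytes", "Describe the defect in this photo."
)
body = json.dumps(payload)  # ready to POST from an HTTP integration step
```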
Multi-agent systems: instead of one “super-agent,” businesses will use multiple specialized agents working in concert. One agent classifies the request, a second searches the knowledge base, a third formulates the response, a fourth runs quality control. Directual’s flexible workflow system and internal APIs are ideal for orchestrating these architectures.
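A toy version of that four-agent pipeline looks like this. Each "agent" is a plain function standing in for a separate LLM-backed workflow; the topics and knowledge-base entries are invented for illustration.

```python
# Four-stage agent pipeline: classify -> retrieve -> respond -> QA.
# Each function stands in for a specialized LLM-backed agent.

def classifier(ticket: str) -> str:
    return "billing" if "invoice" in ticket.lower() else "general"

def retriever(topic: str) -> str:
    kb = {"billing": "Invoices are issued on the 1st of each month.",
          "general": "Support replies within 24 hours."}
    return kb[topic]

def responder(ticket: str, context: str) -> str:
    return f"Re: {ticket}\n{context}"

def quality_check(reply: str) -> bool:
    # Trivial guardrail: never ship an empty or one-line reply.
    return len(reply.splitlines()) >= 2

def pipeline(ticket: str) -> str:
    topic = classifier(ticket)           # agent 1: classify the request
    context = retriever(topic)           # agent 2: search the knowledge base
    reply = responder(ticket, context)   # agent 3: draft the answer
    assert quality_check(reply)          # agent 4: quality control
    return reply

print(pipeline("Where is my invoice for October?"))
```

The value of splitting the work this way is that each stage can be tested, swapped, or scaled independently — exactly what an orchestration layer with workflows and internal APIs provides.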
Integration standardization: Anthropic’s MCP protocol, OpenAI’s function calling, tool use as a common pattern — the AI industry is converging on standard ways to connect tools to language models. This means the ecosystem of ready-made integrations will grow exponentially, and platforms with open API architectures will reap the most benefit.
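The converging pattern is simple: the model receives a JSON schema describing a tool, replies with a structured "call", and the host executes it. The schema below follows the OpenAI function-calling shape; the order-status tool itself is a made-up example.

```python
import json

# Tool-use sketch: a JSON schema describes the tool to the model,
# and a dispatcher executes whatever call the model returns.

ORDER_STATUS_TOOL = {
    "name": "get_order_status",
    "description": "Look up the shipping status of an order",
    "parameters": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}

def get_order_status(order_id: str) -> str:
    # Stub for a real database or API lookup.
    return f"Order {order_id}: shipped"

TOOLS = {"get_order_status": get_order_status}

def dispatch(tool_call: dict) -> str:
    """Execute the tool the model asked for with its arguments."""
    args = json.loads(tool_call["arguments"])
    return TOOLS[tool_call["name"]](**args)

# A model's tool-call reply looks roughly like this:
model_reply = {"name": "get_order_status",
               "arguments": '{"order_id": "A-1042"}'}
print(dispatch(model_reply))  # → Order A-1042: shipped
```

Because the host side is just "schema in, dispatch out", any platform that can serve and consume JSON over HTTP can plug into this ecosystem.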
Inference cost reduction: the cost of processing a single LLM request falls severalfold each year. Tasks that were economically unfeasible to automate a year ago are profitable today. This expands the range of tasks worth delegating to AI agents and makes the technology accessible to businesses of every scale — from freelancers to enterprises.
Growing trust and regulatory clarity: as successful AI agent deployments accumulate, the trust barrier drops. Companies see pilot project results from competitors and partners. Regulators are forming clear rules of the game. All of this creates an environment where deploying AI agents becomes not a risky experiment but a sound business practice.
Conclusion: Time to Act
Let’s recap.
Anthropic’s research has scientifically confirmed what many intuitively sensed: AI is already having a real impact on the labor market. Not in theory, not “someday,” but right now. Up to 75% of programmers’ tasks, along with a significant share of customer service, financial analysis, and data entry work, are already being covered by AI in professional settings.
Yet we’re still at the very beginning. The gap between potential and reality is enormous. This means a window of opportunity is wide open for business: those who build AI infrastructure now gain a head start over their competitors.
AI agents aren’t hype or a buzzword. They’re a practical tool for business process automation, validated by data from one of the world’s leading AI developers. And you don’t need a team of programmers to build them — you need the right platform.
Directual is the infrastructure for building AI agents. Database, APIs, processing workflows, integrations, cloud, security — all in one place, no code required. Start today — take the free course and build your first agent.