I thought this was a good article on AI:
A Robopocalypse for Jobs? Not Today, and Probably Not Tomorrow, Either
www.aei.org
Imagine a world of human-level artificial intelligence. The job market would probably be a lot different. There’s the “robots take all the jobs” scenario, I suppose. A few folks own all the machines with the rest of us on the dole. Not sure that is the most likely outcome, however, even if artificial general intelligence proves possible.
Here’s another scenario: According to economist Pascual Restrepo of Yale University in his paper “We Won’t be Missed: Work and Growth in the AGI World,” if AGI could perform all economically valuable tasks, work would shift from economic necessity to personal meaning: art, teaching, and personal care. Wages might become pegged to the cost of AI compute. Labor’s share of income could shrink even as absolute living standards rise dramatically.
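The compute-pegged wage idea follows from a simple arbitrage argument. Here is a minimal sketch of that logic in Python; it is my illustration, not Restrepo’s actual model, and the dollar figures are hypothetical.

```python
# Toy arbitrage illustration of "wages pegged to the cost of AI compute."
# Illustrative only -- not the model in Restrepo's paper; dollar figures are hypothetical.

def human_wage_ceiling(compute_cost_per_task: float) -> float:
    """If an AGI can do the task for this much compute, no employer pays a human more."""
    return compute_cost_per_task

# As compute gets cheaper, the ceiling on human pay for automatable tasks falls with it.
for compute_cost in [10.00, 1.00, 0.10]:   # hypothetical $ of compute per task
    print(f"compute ${compute_cost:.2f}/task -> human wage capped near "
          f"${human_wage_ceiling(compute_cost):.2f}/task")
```

That falling ceiling, alongside machine-produced output that keeps growing, is how labor’s share of income can shrink even as absolute living standards rise.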
Sounds better! But where are we right now? Early, early days, I think. Economists usually think about automation not as whole jobs disappearing, but as specific tasks within jobs being replaced or assisted by technology. The economics team at Goldman Sachs, a bank, assumed in a 2023 analysis that generative AI can handle moderately difficult tasks across hundreds of occupations. It estimated that about two-thirds of US jobs are partly exposed, with AI able to ultimately automate roughly one-quarter to one-half of the tasks within those roles. Globally, that translates to about 18 percent of total work being automatable. Even so, Goldman expects AI to replace only about 7 percent of US jobs while enhancing 63 percent—a reshaping of work, not a mass wipeout.
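To see how those task-level numbers roll up, here is a quick back-of-the-envelope sketch in Python using only the figures quoted above; the arithmetic is my illustration of the task-based framing, not Goldman’s actual methodology.

```python
# Back-of-the-envelope rollup of the task-based estimates quoted above.
# Illustrative arithmetic only -- not Goldman Sachs' actual methodology.

exposed_job_share = 2 / 3                      # share of US jobs at least partly exposed
task_share_low, task_share_high = 0.25, 0.50   # automatable task share within exposed jobs

# Implied share of all work that is automatable (treating non-exposed jobs as ~0)
low = exposed_job_share * task_share_low
high = exposed_job_share * task_share_high

print(f"Implied automatable share of total work: {low:.0%} to {high:.0%}")
# -> roughly 17% to 33%; the ~18% global figure cited above sits at the low end of that band.
```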
A different approach is taken in the new paper “Remote Labor Index: Measuring AI Automation of Remote Work.” The authors, from the Center for AI Safety and Scale AI, decided to treat AI systems as if they were freelance workers on real jobs. They took 240 genuine Upwork-style projects—everything from data dashboards and 3D product designs to marketing videos—and provided the same briefs, files, and deliverables to both humans and AI models such as GPT-5, Claude Sonnet 4.5, and Gemini 2.5 Pro. Human evaluators then judged whether the AI’s submissions would be acceptable to a paying client.
The result: Almost never, with a tiny 2.5 percent success rate “revealing a stark gap between progress on computer use evaluations and the ability to perform real and economically valuable work,” the paper concludes. Even the top-performing model, Chinese AI agent Manus, “earned” only about $1,700 out of $144,000 worth of human labor. Here’s why most of the AI outputs were rejected:
Rejections predominantly cluster around the following primary categories of failure:
- Technical and File Integrity Issues: Many failures were due to basic technical problems, such as producing corrupt or empty files, or delivering work in incorrect or unusable formats.
- Incomplete or Malformed Deliverables: Agents frequently submitted incomplete work, characterized by missing components, truncated videos, or absent source assets.
- Quality Issues: Even when agents produce a complete deliverable, the quality of the work is frequently poor and does not meet professional standards.
- Inconsistencies: Especially when using AI generation tools, the AI work often shows inconsistencies between deliverable files.
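Those headline numbers are easy to sanity-check. Here is a minimal sketch of the arithmetic, using only the figures quoted above; the “value automated” framing and the rounding are mine, not necessarily the paper’s exact metric definitions.

```python
# Sanity check of the Remote Labor Index headline figures quoted above.
# The framing here is illustrative, not necessarily the paper's exact metric definitions.

total_projects = 240
success_rate = 0.025                  # ~2.5% of project submissions judged acceptable

top_model_earnings = 1_700            # approximate $ "earned" by the best model (Manus)
total_project_value = 144_000         # total $ value of the human-completed work

projects_accepted = round(total_projects * success_rate)
value_share_automated = top_model_earnings / total_project_value

print(f"Projects accepted: roughly {projects_accepted} of {total_projects}")
print(f"Share of project value 'earned' by the top model: {value_share_automated:.1%}")
# -> about 6 of 240 projects, and a bit over 1% of the dollar value of the work.
```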
The failure rate cuts sharply against the dystopian headlines predicting a white-collar wipeout just around the corner. Far from replacing designers, coders, or analysts, today’s AIs are still fumbling the basics. Models can edit text or generate images in seconds, but they crumble when asked to manage complex, multi-step work. That gives policymakers and firms time to adapt through training, not panic through bans or basic-income schemes. And as AI gets better, we’ll be ready.
Learn more: America’s Self-Driving Test of Faith | Anxious About AI? We’ve Been Here Before | How Environmental Virtue Signaling Starves the Poor | AI Ban Backers Risk Freezing Progress