A recent study from Stanford University has sounded a wake-up call for AI entrepreneurs. Surveying 1,500 workers across 104 occupations, researchers found that "41.0% of Y Combinator company-task mappings are in the 'Low Priority' and 'Automation Red Light' zones", meaning roughly two in five of the automation tools being built target tasks that employees don't actually want automated. Despite the fervent competition for AI talent, founders are often aiming at problems that workers consider unimportant or even sensitive.
Making use of the U.S. Department of Labor's O*NET task database and a novel Human Agency Scale, the Stanford team unearthed a huge disconnect: venture-backed startups are pouring billions into automating creative strategy and managerial tasks, while workers cry out for help with mundane tasks. This misalignment represents billions of dollars in poorly allocated capital and a major missed opportunity to improve real workplace productivity.
Conversely, workers are asking for AI to tackle the drudgery of their day-to-day jobs, not to replace the parts they actually find interesting. The Stanford study shows that the tasks employees most want automated are routine and repetitive: processing expense reports, inputting data, running reports and fixing IT issues. Similarly, a UiPath survey of 6,400 workers found that respondents ranked mundane tasks like data analysis (52%), data entry/creating datasets (50%), resolving IT issues (49%), and running reports (48%) as the top tasks they want AI to help with. ProcessMaker research concurs: a typical office worker spends 10% of their time on manual data entry and more than 50% creating or updating documents. Workers see AI as a way to "free up time for high-value work" (69.4% of pro-automation survey responses cited this motivation), not as a replacement for the fun or creative parts of their job.
Perversely, startup builders often seem to flip this preference on its head. Instead of the tedious tasks, many AI tools target high-level strategy, marketing or client-relationship work - exactly the tasks that employees want to keep human. The result is bordering on farcical: employees groaning under 1,000 weekly copy-pastes and waiting in line for expense approvals, while startups chase sexy use cases. In fact, some studies point to around 26% of an office worker's time being wasted on pointless tasks - roughly 76 days per year per employee - and nearly half of executives believe better tech would improve productivity. And yet, 41% of YC-backed AI projects focus on what workers rank lowest in automation priority.
Venture capital has fueled this mismatch. In Q1 2025, a record $73.1 billion - 57.9% of all VC funding - went into AI and machine learning startups as part of the global AI race. Investors are chasing "10x" disruption and headline-grabbing technology, not incremental office productivity. Longtime AI investor Mark Goldberg of Index Ventures told the Wall Street Journal that after the arrival of ChatGPT, investors fell into "magical thinking" and hoped AI's value "would materialize at light speed". In reality, that high-octane funding spree has meant founders often prioritize flashy demos over user needs.
The data paints a stark picture: AI startups face astronomical odds of failure. Some estimates warn that over 90% of AI startups fail within five years, and recent research finds 42% of companies have scrapped most of their AI projects due to poor alignment with real-world problems. In other words, by ignoring what workers actually need, many startups are betting on flashy tech that never pays off. Meanwhile, the basics go unaddressed: the AI boom has ironically left the "boring" but essential work untouched.
Perhaps the most revealing finding is how workers want to engage with AI: as partners, not replacements. Stanford introduced a "Human Agency Scale" running from H1 (full automation) to H5 (full human control) and found a clear pattern: in 47 of 104 occupations, the dominant worker preference is H3, an equal partnership between human and AI. In other words, even if AI could do a task alone, workers often prefer to stay involved. Overall, workers prefer higher levels of human oversight than AI experts think necessary for 47.5% of tasks.
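For readers who like to see the mechanics, here is a minimal sketch of how such a scale and a "dominant preference" tally might be represented in code. It is illustrative only: the labels for H2 and H4 are paraphrased assumptions, and the sample responses and tallying logic are not the study's actual data or methodology.

```python
from collections import Counter
from enum import IntEnum

# Simplified encoding of the study's five-point Human Agency Scale.
# Only H1, H3 and H5 are described in this article; H2 and H4 are paraphrased.
class HumanAgency(IntEnum):
    H1 = 1  # full automation
    H2 = 2  # mostly AI, light human oversight (assumed label)
    H3 = 3  # equal human-AI partnership
    H4 = 4  # mostly human, AI assists (assumed label)
    H5 = 5  # full human control

def dominant_preference(responses: list[HumanAgency]) -> HumanAgency:
    """Return the most common agency preference among workers in one occupation."""
    return Counter(responses).most_common(1)[0][0]

# Hypothetical responses for a single occupation, for illustration only.
sample = [HumanAgency.H3, HumanAgency.H3, HumanAgency.H2, HumanAgency.H4, HumanAgency.H3]
print(dominant_preference(sample).name)  # prints "H3" -> equal partnership
```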
This insight echoes Google CEO Sundar Pichai's famous mantra: "The future of AI is not about replacing humans, it's about augmenting human capabilities." Employees evidently see AI as a way to remove grunt work, not to steal their spotlight. Edward Houghton of the UK's Chartered Institute of Personnel & Development underscores this: "Automation definitely gives more opportunity to add human value and enables the organisation to make the most of the human elements of work, such as building relationships that are very hard to automate." In practice, companies whose AI tools enhance, rather than supplant, human effort may see greater adoption and success.
Even when AI is helpful, many workers remain wary. The Stanford team found the top reason for resisting AI (45%) is lack of trust in system accuracy or reliability. Fear of job loss (23%) and loss of human touch (16%) are also common concerns. This "trust gap" shows up in surveys: UiPath found that only 55% of employees express confidence in responsible AI deployment compared to 62% of executives. And mounting evidence suggests that some workers actively push back on AI initiatives.
In a recent report, 31% of employees (41% of Gen Z) admit to "sabotaging" company AI efforts by refusing to use the tools. Kevin Chung, Writer's chief strategy officer, explains: "Two years ago, nine times out of 10 it was about 'why am I training the robot that's going to take my job away from me?'... Now that they've had a chance to play with it, [many employees] are quite disappointed in the results they've seen, and that's why they are disillusioned by it."
In other words, employees aren't rebelling because they love their admin and repetitive tasks; they're fed up with underperforming AI. Addressing this requires transparency and involvement; if workers feel excluded, burnout and distrust become self-fulfilling barriers. As Bain's Sarah Elk says, "If it feels like a black box that's sprung on the frontline worker, they probably won't trust it... You'll get results far faster if you take the time up front to engage in a thoughtful process with the people who will be impacted."
Having established where we shouldn't be focusing AI investment, let's now consider the potential savings of automating the right tasks. Take expense reports and routine admin: around 26% of office time is wasted on "pointless" work, with each employee spending an average of 42 minutes on needless tasks and 24 minutes on expenses every day. In a 20-person operations team, the 1,000 weekly copy-pastes cited earlier add up to over a million copy-paste actions per year. Automating even a fraction of these tasks yields enormous potential savings.
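As a rough back-of-the-envelope check, here is a short sketch of how those numbers compound. The per-employee figures come from the surveys quoted above; the 250 working days per year and 50% automation rate are assumptions added for illustration.

```python
# Back-of-the-envelope savings estimate using figures quoted in this article.
# Assumptions (not from the article): ~250 working days/year, 50% of this work automated.

TEAM_SIZE = 20
MINUTES_ADMIN_PER_DAY = 42      # needless admin per employee per day (article figure)
MINUTES_EXPENSES_PER_DAY = 24   # expense handling per employee per day (article figure)
COPY_PASTES_PER_WEEK = 1_000    # per employee (article figure)
WORKING_DAYS_PER_YEAR = 250     # assumption
WEEKS_PER_YEAR = 52
AUTOMATION_RATE = 0.5           # assumption: "even a fraction" of tasks automated

wasted_minutes_per_year = (
    TEAM_SIZE * (MINUTES_ADMIN_PER_DAY + MINUTES_EXPENSES_PER_DAY) * WORKING_DAYS_PER_YEAR
)
wasted_hours_per_year = wasted_minutes_per_year / 60
copy_pastes_per_year = TEAM_SIZE * COPY_PASTES_PER_WEEK * WEEKS_PER_YEAR

print(f"Copy-paste actions per year: {copy_pastes_per_year:,}")           # ~1,040,000
print(f"Hours lost to admin per year: {wasted_hours_per_year:,.0f}")      # ~5,500
print(f"Hours recoverable at {AUTOMATION_RATE:.0%} automation: "
      f"{wasted_hours_per_year * AUTOMATION_RATE:,.0f}")                  # ~2,750
```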
These figures underline the opportunity: the biggest gains are in "unsexy" areas. Instead of chasing the next viral chatbot, entrepreneurs might find more wins by tackling the boring, time-sucking tasks employees hate. For businesses, that means turning lost hours into productive ones, along with huge cost savings.
The Stanford findings point to a simple truth: successful AI means starting with human needs, not technology for its own sake. Executives should take Sundar Pichai's advice to heart, focusing on partnership, not replacement, and designing systems that workers trust. Companies can do this by involving employees in the AI design process, being transparent about capabilities, and training staff for new human-centric roles.
Venture capitalists and founders, meanwhile, should temper the "light speed" hype and ask: are we solving a real pain point? A shift toward human-centered design - building agents that assist with tedious tasks - could unlock vast new markets. It might not be flashy, but it's what workers actually want.
The Stanford study exposes a costly blind spot: the AI industry has been asking "can we do it?" when it should be asking "should we do it - and do our users want it?"
41% of Y Combinator company-task mappings fall into the low-priority or "red light" zones because they ignore user needs. The companies that succeed will be those that listen to workers and target the monotonous and the mundane, not the moonshots. In the race toward AI-driven automation for productivity, the biggest winners will be the ones building tools that employees actually ask for - turning that 41% liability into opportunity.