easy alpha

the automation gap is not about models; it’s about the boring stuff nobody wants to own

26-Jan-26

There is a surprising amount of easy-win AI automation sitting untouched in most industries. Not frontier research. Not novel architectures. Straightforward workflow automation that would take a competent builder a week to ship and save 20 hours of manual work every month. Nobody is picking it up.

I spent the better part of two years trying to bring forecasting, ML, and agentic tooling into large industrial organizations. The gap between what is technically possible and what organizations actually deploy is where the alpha lives.

the real bottleneck

Every AI conference starts with models. Transformer architectures, fine-tuning strategies, context windows, agents. The vendor pitch is always “our model is better.” In practice, the model is almost never the bottleneck.

The bottleneck is three things, and they are all boring.

Clean, accessible data. Not “big data.” Just data you can actually reach without three months of IT tickets and a governance committee. A scheduling system with an API. A CRM that exports properly. Inventory records in a format a machine can read.

Clear end-to-end ownership of the business problem. Not a steering committee. Not a cross-functional working group. One person who can say: this is what we are solving, this is how we know it worked, and I can approve changes by Tuesday.

The ability to test and iterate quickly. Ship something, measure it, improve it next week. Not a 14-month roadmap with a go-live date that everyone knows will slip.

Teams that do those three things consistently compound an advantage. Teams that do none of them spend two years on a “digital transformation” and have a PowerPoint to show for it.

cheap primitives, expensive excuses

The primitives for building bespoke automation for almost any digital workflow are now cheap and widely available. API calls cost fractions of a cent. Open-source models run on commodity hardware. Orchestration frameworks let you chain tools, data sources, and decision logic in an afternoon.

If you can define a workflow clearly, you can often ship a useful first version fast. Not a demo. Not a proof of concept living in a Jupyter notebook that someone presents once and never touches again. A working system that handles real inputs and delivers real outputs. Then you improve it weekly.
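
To make "a useful first version" concrete, here is a minimal sketch of the chain described above: fetch records from a data source, apply decision logic, deliver an output a human can act on. Everything here is hypothetical and stdlib-only; the inline export, the field names, and the escalation rule stand in for a real scheduling system's API.

```python
import json
from datetime import date

# Hypothetical export from a scheduling system. In a real deployment this
# would come from an API call, not an inline string.
EXPORT = json.dumps([
    {"job": "site-12 mowing", "due": "2026-01-20", "assigned": None},
    {"job": "site-07 pruning", "due": "2026-02-03", "assigned": "crew-2"},
])

def fetch_jobs(raw: str) -> list[dict]:
    """Step 1: pull records from the data source."""
    return json.loads(raw)

def flag_at_risk(jobs: list[dict], today: date) -> list[dict]:
    """Step 2: decision logic -- overdue or unassigned jobs need attention."""
    return [
        j for j in jobs
        if j["assigned"] is None or date.fromisoformat(j["due"]) < today
    ]

def report(flagged: list[dict]) -> str:
    """Step 3: produce a real output someone can act on."""
    lines = [f"ATTENTION: {j['job']} (due {j['due']})" for j in flagged]
    return "\n".join(lines) or "All jobs on track."

if __name__ == "__main__":
    flagged = flag_at_risk(fetch_jobs(EXPORT), today=date(2026, 1, 26))
    print(report(flagged))
```

The point is not the code; it is that the whole loop, source to output, fits in a page and can be improved the week after it ships.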

The barrier to entry collapsed. The barrier to execution did not. That is where the alpha is. The gap between “this is technically possible for almost nothing” and “our organization still cannot ship it” is enormous, and it is not closing.

why large orgs stall

I tried to make inroads in large organizations. Oil and gas, industrial, the sectors I know best. The problems were real: forecasting accuracy that left money on the table, manual reporting that consumed entire teams, operational decisions made on stale data. Every one of those is a solvable problem with available tools.

The organizations were not solvable. Large orgs tend to be too slow and too political to capture easy wins. The “AI enablement” lead got the role through seniority or a prior internal home run, and now spends their time fighting organizational friction and trying to translate value across layers. IT owns the infrastructure but does not think in products. Business owners have the problems but not the technical intuition. Nobody has end-to-end ownership.

The AI conversation inside these organizations runs in three parallel tracks, and none of them talk to each other.

Camp one is still stuck on “AI is scary.” Job replacement, the environment, the usual fears. Most of these people have never actually used the tools they are afraid of.

Camp two is leadership defaulting to whatever Microsoft is selling, because nobody gets fired for buying Azure.

Camp three saw a demo go well once and now thinks everything should ship in a week. They want the speed without understanding what made the thing fast in the first place.

All three camps have the same blind spot. They are reacting to AI as something that happens to them instead of something they can point at a problem.

Speed expectations and understanding what is now possible are two completely different forces. One compresses timelines. The other changes the org chart. When the inertia breaks top-down through impatience, you get cost-cutting theater. When it breaks through someone actually building, you get transformation. But that second path requires someone willing to be wrong publicly inside an organization that punishes exactly that.

The part that keeps bugging me: the people who actually know where the work breaks down, the operators, the process people, do not have the authority to change anything. The people with the authority do not know what is broken. So nothing moves.

The people best positioned to close this gap are the ones who have been in the weeds across multiple levels of an organization: close enough to the data to know what is broken, close enough to operations to know what matters, close enough to leadership to know what will actually get approved. Forecasting background, ML chops, business context. That profile exists. Large orgs just do not know how to hire for it, because the role sits between IT, operations, and strategy, and nobody owns that intersection.

The result: millions spent on strategy, architecture reviews, vendor evaluations, and governance frameworks. The automation that would save a team 20 hours a week sits unbuilt because nobody can say “yes, build it, ship it this week.”

When the approval chain for a workflow automation is longer than the build time, something structural is broken.

what I built instead

I started building for small operators partly out of frustration. AI enablement roles at large organizations exist, but they are political, not product-oriented, and I am a forecasting and ML person at heart. The gap between what I could build and what those organizations could absorb was not closing on any timeline I was willing to accept.

So I started building where things move fast. Small and mid-sized businesses where the decision-maker is in the room and the feedback loop is a week, not a quarter.

The pattern repeats across industries that have nothing in common except the same automation gap. A landscape management company running on manual scheduling and email chains. A lighting business with inventory logic trapped in someone’s head. A dental clinic where 40% of admin time is copy-paste between systems. Wildly different sectors, nearly identical bottlenecks. That is how you know the gap is structural, not industry-specific.

In every case, the same three things: define the workflow, build the automation, iterate weekly.

I built infrastructure that turns repetitive work into always-on automation capacity. The unit economics work because the primitives are cheap and the workflows, once defined, are stable enough to compound.

This is what Decarb Desk does. We ship AI-enabled automation for digital workflows: scheduling, document processing, customer communications, data extraction, reporting. The kind of work everyone knows should be automated but nobody has gotten around to building.

deployment as a muscle

The organizations that will win this cycle are not the ones with the best models or the biggest budgets. They are the ones that treat deployment as a product muscle rather than a one-off project.

Ship something small. Measure whether it works. Fix what breaks. Ship again next week. That rhythm, boring as it sounds, is the entire competitive advantage. Every iteration tightens the workflow, catches an edge case, adds a step that used to require a human. Over months, the gap between organizations that do this and organizations that do not becomes structural.

Most companies treat AI like a capital project: plan for a year, build for a year, launch, declare victory. That is how you build something obsolete by the time it ships. The alternative is to treat it like product development. Small bets. Fast cycles. Real users from day one.

Nimble organizations that build this muscle will compound an advantage that gets harder to replicate with every cycle. The large incumbents running approval chains will keep commissioning strategy decks about why they need to “accelerate AI adoption.”

where this is going

The real alpha in AI automation is not inside large organizations fighting their own bureaucracy. It is in smaller, nimble operations that can define a problem Monday and ship a solution Friday. The people who will capture it are not pure ML researchers or pure business strategists. They are the cross-functional builders who have sat at enough levels to see where the seams are.

Oil and gas producers, the companies that actually operate assets and make daily decisions under uncertainty: those are the organizations positioned to move fast if they choose to. The ones with a clear owner, a product mindset, and business-level accountability for what gets shipped.

If AI enablement at your organization sits under IT without a product owner and without business-level accountability, that is a missed opportunity. Not because IT is bad at technology. Because deployment is a product problem, not an infrastructure problem.

The shift will not come from inside these companies. It will come from someone outside who spent enough time inside to know where the real friction is. Someone who builds the thing that makes the old workflow look embarrassing. The gap is not technical. Any decent engineering team can build now. The gap is translational: knowing what to aim the build at. That is the role nobody is hiring for and everybody needs.

The alpha is not in the models. It is in the execution layer: clean data, clear ownership, fast iteration. The primitives are available. The question is whether your organization can deploy them, or whether it is still writing the strategy deck.