
Scope Creep Never Ships: Why My Trading Bot Has One Job and Corporate Software Has Forty

TL;DR / Key Takeaways

  • Corporate software expands because nobody is accountable for the cost of complexity. Trading bots have a ruthless cost accountant: the P&L ledger.
  • Every feature added to a production system is a failure mode added to a production system.
  • The hardest engineering discipline is not building more. It is deciding what to leave out permanently.
  • The Predict & Profit bot has five configurable thresholds and one job. That is not a limitation. That is the whole point.

I spent most of my career building things that were originally simple.

A data pipeline to move records from one database to another. A reporting tool that displayed three numbers. An API that did one thing. Somewhere between the requirements document and the fifth sprint review, every one of them became something else. Something larger, slower, harder to reason about, and more expensive to maintain than anyone had planned.

This is not a failure of execution. It is a structural feature of how corporate software is built.

How scope creep actually works

It is rarely one big decision. It is a hundred small ones, each individually reasonable.

The VP of Finance asks if the pipeline can also include the secondary revenue table. Of course it can. The product manager asks if the reporting tool can show a trend line. Sure. The account manager asks if the API can return metadata from the legacy system. Why not.

Each request sounds small. Each one adds maybe a day of work. What nobody prices in is the permanent cost: more code paths, more test cases, more surface area for bugs, more cognitive load for the next engineer who has to modify the system. You never add a feature and then remove the complexity it brought. Complexity is one-directional in enterprise environments.

After thirty years of this, I have a theory. The reason scope creep is so hard to stop in large organizations is that nobody is actually accountable for the total cost. The engineer who adds the feature takes the credit. The future engineer who debugs the failure at 2 AM pays the price. Those are rarely the same person. The incentive structure rewards adding, not refusing.

What happens when accountability is direct and immediate

I built Predict & Profit in my own time, on my own hardware, with my own capital at stake. The accountability structure is completely different.

Every decision about what the bot does has a direct cost I bear immediately. Adding a feature that introduces a bug means lost trades. Adding complexity that makes the code harder to reason about means I miss a logic error and the bot buys a contract I should not have touched. Adding a parameter I do not fully understand means I cannot debug the failure when it happens at 3 AM.

There is no committee to share the blame. There is no sprint retrospective to process the incident. There is a SQLite log, a P&L number, and a mistake I will not make twice.

That accountability changes everything about the engineering decisions.

The bot has five thresholds and one job

The core of the Predict & Profit weather bot is not complicated. It is deliberately, carefully not complicated.

It pulls GFS ensemble weather data from the Open-Meteo API. It scores each Kalshi temperature market on four factors: ensemble spread, ensemble confidence, model-to-market gap, and fee efficiency. It filters using five minimum thresholds. If a trade clears all five filters and the ensemble does not contradict the direction, the order goes in. Everything gets logged.

That is it.
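The filter logic described above is easy to picture in code. Here is a minimal sketch in Python. The post names four of the five thresholds (spread, confidence, model-to-market gap, fee efficiency); the fifth shown here, a liquidity floor, is invented for illustration, and all field and threshold names are assumptions, not the bot's actual configuration keys.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Thresholds:
    """Illustrative threshold names; the real config may differ."""
    min_edge: float = 0.05            # model-to-market gap
    min_confidence: float = 0.70      # ensemble agreement on direction
    max_spread: float = 3.0           # ensemble spread, degrees F
    min_fee_efficiency: float = 0.60  # edge retained after Kalshi fees
    min_volume: int = 100             # hypothetical liquidity floor

def passes_filters(candidate: dict, t: Thresholds) -> bool:
    """A candidate trade must clear every threshold; one failure vetoes it."""
    checks = [
        candidate["edge"] >= t.min_edge,
        candidate["confidence"] >= t.min_confidence,
        candidate["spread"] <= t.max_spread,
        candidate["fee_efficiency"] >= t.min_fee_efficiency,
        candidate["volume"] >= t.min_volume,
    ]
    return all(checks)
```

The point of the structure is the veto: there is no weighting or override, so a marginal trade cannot sneak through on one strong factor.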

I have been asked why the bot does not also trade precipitation markets, or city-by-city wind contracts, or economic event markets. Why not add a sentiment layer from financial news? Why not integrate a secondary model as a tiebreaker?

The answer is that each of those is an additional failure mode. Each one requires its own data source, its own error handling, its own edge case logic. Each one makes the system harder to verify, harder to debug, and harder to be confident in.

The bot has one job. Find mispriced weather contracts. Execute when the edge is real. Stay out when it is not.

Thirty years of enterprise engineering taught me that the hardest discipline is not building more. It is having a clear enough answer to "what is this system for?" that you can say no to the ninety percent of requests that are outside that answer.

The cost of "just one more thing"

Here is something that is not theoretical. Earlier this year I was upgrading the ensemble source from 31 GFS members to the 62-member HGEFS hybrid ensemble. The change improved forecast accuracy on high-uncertainty days. It was a meaningful upgrade with a measurable effect on edge scores.

While I was in that part of the codebase, it was tempting to also add a second scoring dimension I had been thinking about: a momentum factor based on how quickly the market probability had moved in the last few hours. It would have been maybe four hours of work.

I did not add it.

Not because it was a bad idea. Because I was in the middle of validating the ensemble upgrade and mixing two changes at once would have made it impossible to isolate which change caused what. If edge scores went up, was it the ensemble improvement or the momentum factor? If something broke, which change was responsible?

One change at a time. Measure. Then decide if the next change is worth the complexity it adds.

That is not how most enterprise software gets built. It is how reliable systems get built.

Lean scope is not the same as a simple system

The bot is not simple. It handles API authentication with RSA-PSS signatures, exponential backoff on rate-limit responses, partial fill detection, fee efficiency calculations, SQLite logging with thread-safe connection management, and a nightly data pull that has to complete before the morning trading window opens.

That is a real system with real complexity. The complexity is there because it has to be there, not because someone asked for it in a meeting.
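One item on that list, exponential backoff on rate-limit responses, is worth a sketch because it shows what "complexity that earns its place" looks like. This is an illustrative pattern, not the bot's actual retry code; the function names and the five-retry limit are assumptions.

```python
import random
import time

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Seconds to wait before retry `attempt` (0-indexed): 1, 2, 4, 8... capped."""
    return min(cap, base * (2 ** attempt))

def with_retries(fetch, is_rate_limited, max_retries: int = 5):
    """Call fetch() until it succeeds or retries are exhausted.

    `is_rate_limited(exc)` decides whether an exception is a 429-style
    rate limit worth retrying. Anything else propagates immediately:
    an auth failure or a malformed response should stop the run, not loop.
    """
    for attempt in range(max_retries):
        try:
            return fetch()
        except Exception as exc:
            if not is_rate_limited(exc) or attempt == max_retries - 1:
                raise
            # Jitter keeps a restarted fleet of clients from retrying in lockstep.
            time.sleep(backoff_delay(attempt) + random.random())
```

Note what is deliberately absent: no retry on unknown errors. Retrying everything is the kind of "helpful" feature that masks real failures.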

The discipline is knowing the difference between complexity that earns its place and complexity that exists because nobody was willing to say no.

In corporate environments, that distinction is genuinely hard to maintain. The people who benefit from adding requirements are rarely the people who pay the cost. The meetings, the roadmap reviews, the "voice of the customer" sessions all push in one direction: more.

Running your own system puts the full ledger in front of you. What does this feature cost to build? What does it cost to maintain? What does it cost if it fails? If those numbers do not justify the benefit, the feature does not ship.

That is not minimalism as an aesthetic. It is engineering as economics.

What this means if you are thinking about building your own system

The technology to build an automated trading bot is accessible. Python, the Kalshi API, Open-Meteo for weather data, a cheap Ubuntu VM. The real cost is not the code. It is making and living with the decisions the code represents.

What markets does the bot trade? What does it refuse? What edge is real and what is noise? What failure should stop the system versus what failure should be logged and retried?

Answering those questions with specificity, and then holding the line on the answers when the temptation to expand shows up, is the actual work. It is also the work that corporate engineering environments make structurally difficult.
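The last of those questions, which failures halt the system versus which get logged and retried, eventually has to become code. A sketch of one way to hold that line, with an entirely hypothetical error taxonomy:

```python
# Hypothetical error codes -- your system's taxonomy will differ.
RETRYABLE = {"RATE_LIMITED", "TIMEOUT", "PARTIAL_FILL"}
FATAL = {"AUTH_FAILED", "INSUFFICIENT_FUNDS", "SCHEMA_MISMATCH"}

def should_halt(error_code: str) -> bool:
    """Unknown errors halt the system: when capital is at stake, fail closed."""
    if error_code in RETRYABLE:
        return False
    return True
```

The design choice worth copying is the default. An error you did not anticipate stops trading until you have looked at it, which is exactly the accountability structure the rest of this post is about.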

Scope creep in enterprise software is a governance problem. In your own system, it is a clarity problem. Do you know precisely what your system is for?

If you do, it is a lot easier to keep it doing that one thing well.


The Predict & Profit bot source code is available at the link below. It is a complete, documented Python system built for one specific purpose. If you want to extend it, you can. But it ships lean.

Get the Source Code — $67 — use code REDDIT for 15% off

Read the Ebook
