
The Alignment Meeting That Made Me Finally Ship Something

TL;DR / Key Takeaways

  • Solo automation replaces alignment meetings with direct technical feedback from logs, trades, and model output.
  • The Kalshi bot runs unattended, pulls ensemble data, scores markets, and records decisions without a stakeholder process.
  • System upgrades can be tested and shipped when the data supports them rather than when a committee approves them.
  • The article frames automation as a way for experienced engineers to recover fast build-measure-learn cycles.

I have been in software engineering for over 30 years. I have built data pipelines, ETL systems, real-time streaming platforms, and enterprise data warehouses. I have worked at companies with 50 people and companies with 50,000. The technology changes. The meeting culture does not.

There is a specific type of meeting that I think breaks engineers. Not the sprint review or the postmortem. Those are useful. The one that breaks you is the alignment meeting.

The alignment meeting exists to ensure that all stakeholders are aligned on the direction of the initiative before proceeding to the next phase of planning. That is a real sentence that real humans say with serious faces in corporate buildings.

What it actually means is: nothing is allowed to be built until everyone has agreed on what might be built, and that agreement must be documented, reviewed, distributed, and then re-aligned upon in a follow-up meeting because someone missed the first one.

I am not dramatizing. In one quarter at a previous employer I counted three separate alignment meetings whose purpose was to align on the roadmap for the upcoming roadmap planning session. No code was written. No user saw anything new. The Jira board moved cards between states and everyone wrote updates about the updates.

The system that does not need to be aligned

I built Predict & Profit in my spare time while working a full-time job as a Senior Data Engineer. It is a Python-based automated trading bot. It pulls 62-member ensemble weather model data from NOAA and Open-Meteo, scores Kalshi prediction market contracts for mispricing, and places trades automatically 24/7 via the Kalshi API.

It does not have stakeholders. It does not have a roadmap review cycle. It does not ask for permission.

At 2 AM while I am asleep, the bot wakes up, pulls fresh GFS ensemble data, scores 14 weather markets against ensemble probability thresholds, rejects the ones that do not meet minimum edge requirements, and places trades on the ones that do. Then it logs everything to PostgreSQL and goes back to waiting.
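The shape of that cycle is simple enough to sketch. The helper names below (fetch_ensemble, place_order) are illustrative stand-ins, not the actual Predict & Profit internals, and the minimum edge is an assumed placeholder:

```python
# A minimal sketch of one unattended cycle. fetch_ensemble() and
# place_order() are hypothetical stand-ins for the real API clients.
from dataclasses import dataclass
from datetime import datetime, timezone

MIN_EDGE = 0.05  # assumed minimum edge before fees; tune to your fee model

@dataclass
class Market:
    ticker: str       # Kalshi contract ticker
    threshold: float  # e.g. "high temp above 42F" -> 42.0
    yes_price: float  # current YES price as a probability, 0..1

def fetch_ensemble(ticker: str) -> list[float]:
    """Placeholder: return the 62 member forecasts for this market's variable."""
    raise NotImplementedError

def place_order(ticker: str, side: str, count: int) -> None:
    """Placeholder: submit an order through the Kalshi API."""
    raise NotImplementedError

def run_cycle(markets: list[Market], log) -> None:
    for m in markets:                                  # e.g. 14 weather markets
        members = fetch_ensemble(m.ticker)
        p_model = sum(x > m.threshold for x in members) / len(members)
        edge = p_model - m.yes_price                   # model prob minus market-implied prob
        decision = "trade" if edge >= MIN_EDGE else "reject"
        if decision == "trade":
            place_order(m.ticker, side="yes", count=10)
        # Log every decision, traded or not, for later diagnosis.
        log(datetime.now(timezone.utc), m.ticker, p_model, edge, decision)
```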

The outcome is objective and measured daily. It wins or it loses. The number in the account either goes up or it does not. There is no ambiguity, no narrative, no spin. You cannot alignment-meet your way to a better P&L.

That simplicity is not just appealing. It is the whole point.

What corporate engineering actually costs

I want to be careful here because I do not hate my day job. The people are fine. The work is technically interesting. But there is a real cost to spending your best engineering hours inside institutional systems that are optimized for risk avoidance rather than value creation.

The cost is not just time. It is the gradual erosion of your ability to just build something. After enough years of feature flagging, approval chains, architecture review boards, and quarterly OKR alignment, you forget what it feels like to see an idea and turn it into a working system in the same week.

I built the first working version of the Predict & Profit trading engine in about three weeks. One Saturday I wrote the Open-Meteo API client. The next weekend I wired up the Kalshi API with RSA-PSS authentication. The third weekend I built the edge scoring logic and connected a PostgreSQL database for trade logging. At the end of week three, the bot placed its first live trade.
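The RSA-PSS part sounds exotic but it is a few lines. A sketch of the signing scheme as Kalshi documents it, signing the timestamp plus method plus path with your API private key via the cryptography library; verify the header names and message format against the current API docs before relying on it:

```python
import base64
import time

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def kalshi_auth_headers(private_key_pem: bytes, key_id: str,
                        method: str, path: str) -> dict:
    """Build auth headers per Kalshi's documented RSA-PSS scheme (check docs)."""
    key = serialization.load_pem_private_key(private_key_pem, password=None)
    ts = str(int(time.time() * 1000))            # millisecond timestamp
    message = (ts + method + path).encode()      # e.g. "1700000000000GET/trade-api/v2/markets"
    signature = key.sign(
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.DIGEST_LENGTH),
        hashes.SHA256(),
    )
    return {
        "KALSHI-ACCESS-KEY": key_id,
        "KALSHI-ACCESS-TIMESTAMP": ts,
        "KALSHI-ACCESS-SIGNATURE": base64.b64encode(signature).decode(),
    }
```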

That happened because I was the architect, the engineer, the product manager, the QA, and the deployment pipeline. The iteration cycle was measured in hours. When something was wrong I fixed it immediately. When I had a better idea I implemented it the same day.

No ticket. No sprint ceremony. No dependency on a team I had to coordinate with across three time zones.

The upgrade cycle looks nothing like enterprise software

| Upgrade path | Validation method | Approval dependency | Shipping criterion |
| --- | --- | --- | --- |
| Enterprise roadmap | Proposal, review, stakeholder alignment | High | Consensus and prioritization |
| 31-member GFS bot | Live data checks and historical calibration | Low | Stable ingestion and positive edge score |
| 62-member HGEFS bot | Dual-ensemble agreement and forward testing | Low | Better uncertainty filtering with fewer marginal trades |

When I decided to upgrade from 31-member GFS ensembles to a 62-member HGEFS hybrid that combines GFS with NOAA's new AI ensemble, I did not write a proposal. I did not convene a working group. I read the NOAA documentation, tested the AWS S3 ingestion pipeline on a weekend, validated the combined ensemble math, and shipped the upgrade.
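The combined ensemble math is the easy part to sketch: pool both member sets, read the probability off the empirical distribution, and use cross-ensemble agreement as a filter. A toy version with synthetic stand-ins for what the ingestion pipeline produces; the agreement tolerance is an assumed illustration, not the bot's actual setting:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for two 31-member forecasts of a daily high (F);
# in production these come from the GFS and AI ensemble ingestion.
gfs_members = rng.normal(41.0, 2.0, size=31)
ai_members = rng.normal(40.5, 1.5, size=31)

combined = np.concatenate([gfs_members, ai_members])  # 62-member pool
threshold = 42.0                                      # contract strike, "high above 42F"

p_yes = (combined > threshold).mean()                 # empirical probability
spread = combined.std(ddof=1)                         # disagreement across members

# One way to filter marginal trades: require the two ensembles to agree.
p_gfs = (gfs_members > threshold).mean()
p_ai = (ai_members > threshold).mean()
agree = abs(p_gfs - p_ai) < 0.15  # assumed agreement tolerance

print(f"P(yes)={p_yes:.2f}, spread={spread:.1f}F, ensembles agree: {agree}")
```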

That is not recklessness. I test things. I run the system locally before it goes live. I check the P&L carefully. But the decision loop is mine and the cycle time is days, not quarters.

In a corporate environment, a change of equivalent technical scope — swapping out a core data source from one model to a hybrid two-source ensemble with new ingestion logic — would require a design document, an architecture review, a security review, a data governance review, and at least one alignment meeting with stakeholders who do not fully understand the technical change they are being asked to align on.

By the time that process completed, the market conditions that made the upgrade interesting would have moved on.

Losing is part of it. The math does not care about your narrative.

I want to be clear about something. This system loses. Not constantly, but it loses. The first batch of live trades on the old 31-member configuration went 1 and 5. Six trades, five losses, one win. I published the results.

In corporate life, a 17% success rate on an initiative would trigger a retrospective, a blame-distribution exercise, and a re-alignment on strategy going forward. It would also probably mean those results never got published externally where anyone could see them.

In a systematic trading context, six trades is not a statistically significant sample. The expected value of each trade at execution time was positive. Short-term outcomes in small samples are noise. The right response is to continue operating according to the system's rules while accumulating enough trades to measure real performance.
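You can put a number on that. Suppose each trade wins 45% of the time at prices that still give positive expected value; that figure is assumed purely for illustration. Going 1-and-5 or worse across six independent trades then happens about one time in six:

```python
from math import comb

p = 0.45  # assumed per-trade win probability at positive-EV prices
n = 6     # trades in the sample

# P(at most 1 win in 6 independent trades) under a binomial model
p_at_most_one = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(2))
print(f"{p_at_most_one:.1%}")  # roughly 16%
```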

That distinction, the separation of process quality from short-term outcome variance, is something corporate culture is structurally incapable of internalizing. The quarterly earnings pressure, the stakeholder communication cycles, the performance review optics: all of these push organizations toward optimizing for the appearance of success rather than the mathematical expected value of their choices.

My bot does not have a performance review. The PostgreSQL logs do not ask how I feel about the results. The number is the number.

Who this is for

I built this for myself first. The fact that it is now a product — source code available, ebook on Amazon — is a function of people asking what I was doing and whether they could replicate it.

I am not going to tell you this will replace your income in 30 days. It might not replace your income at all. Prediction markets carry real risk, the Kalshi fee structure is punishing on small edges, and running an algo trading system requires genuine technical competence to do correctly.

What I will tell you is this. If you are a developer who has been sitting in alignment meetings for years wondering when you are going to build something for yourself, the barrier is lower than you think. Public weather model data is free. Open-Meteo has no rate limits on the ensemble endpoint. Kalshi has a functional paper trading environment. Python does everything you need.
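To make "lower than you think" concrete, here is roughly what the first API call looks like against Open-Meteo's ensemble endpoint. The model name and member-key suffixes reflect my reading of their docs; check a live response for the exact keys:

```python
import requests

# Fetch GFS ensemble temperature forecasts for Central Park from
# Open-Meteo's free ensemble endpoint.
resp = requests.get(
    "https://ensemble-api.open-meteo.com/v1/ensemble",
    params={
        "latitude": 40.78,
        "longitude": -73.97,
        "hourly": "temperature_2m",
        "models": "gfs_seamless",
        "temperature_unit": "fahrenheit",
        "forecast_days": 3,
    },
    timeout=30,
)
resp.raise_for_status()
hourly = resp.json()["hourly"]

# Member series arrive as separate keys, e.g. temperature_2m_member01.
member_keys = [k for k in hourly if k.startswith("temperature_2m")]
print(f"{len(member_keys)} member series, {len(hourly['time'])} timesteps each")
```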

The first step is not the technical part. The first step is deciding that what you build outside of work is allowed to matter as much as what you build inside it.

I made that decision at 60. It should not have taken that long.


The Predict & Profit bot is live and trading. The source code is available if you want to run your own version.

Get the Source Code — $67

Read the Ebook — $9.99 on Amazon

Frequently Asked Questions

Q: What makes the bot different from an enterprise software project?

A: The bot has a direct measure of correctness: market decisions, fills, settlement outcomes, and logs. There is no separate alignment process needed to decide whether the system worked.

Q: How was the HGEFS upgrade validated technically?

A: The upgrade was tested by ingesting NOAA data, validating the combined ensemble math, and comparing forecast behavior before production trading. The decision depended on measured pipeline behavior rather than proposal approval.

Q: Why are losses still useful feedback?

A: A loss can reveal whether the model was wrong, whether variance explains the result, or whether a filter allowed a weak setup. Logged trade context makes that diagnosis possible.
