
SQLite Thread Locks in a Dual-Bot Python Architecture: What Breaks and How to Fix It

TL;DR / Key Takeaways

  • SQLite's default check_same_thread=True will crash your bot the moment a background thread tries to log a trade.
  • Sharing a single connection object across threads is not safe even with check_same_thread=False unless you add explicit locking.
  • The correct pattern for multi-threaded bot logging is a per-thread connection via threading.local() or a dedicated writer thread with a queue.
  • The dual-bot architecture (Kalshi weather bot + Alpaca stock bot) surfaces this problem faster than any single-bot setup because write contention is nearly constant.

I run two automated trading bots from the same Ubuntu server at home. The Kalshi weather bot scans GFS ensemble data every cycle and logs trade decisions and P&L to a local database. The Alpaca stock bot runs on a separate schedule, pulls Finnhub sentiment data and technical indicators, runs them through the Kronos AI model, and also logs everything.

When I migrated both bots from PostgreSQL to SQLite for the local dev and staging environment, I expected a straightforward swap. SQLite is simpler, there is no server to manage, and for a two-bot setup generating a few dozen write operations per hour it should be plenty.

What I did not account for is that "simpler" does not mean "without edge cases." SQLite has one specific threading constraint that will cause your bot to crash in ways that are annoying to diagnose because the error does not appear on every run. It only appears when two threads happen to collide on a write at the same moment. In a production bot that runs 24/7, that collision will happen.

The Error

ProgrammingError: SQLite objects created in a thread can only be used in that same thread.

This fires when a sqlite3.Connection object is created in one thread and then accessed from a different thread. Python's sqlite3 module enforces the restriction through the check_same_thread parameter of connect(), which defaults to True.

The Kalshi weather bot uses a background scheduler thread to trigger scans. The Alpaca bot uses a mix of main-thread execution and background threads for heartbeat logging and connection health checks. In both cases, the connection was being instantiated at startup in the main thread and then passed around to whatever function needed it, including functions called from threads that were not the main thread.

Most of the time this worked fine. The write happened to land on the main thread, no collision, no error. But occasionally, two things tried to log at the same time, the write came from a background thread, and the crash appeared.
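Stripped of the bot code, the failure is easy to reproduce. The sketch below assumes nothing beyond the standard library; the table and values are placeholders.

# Minimal repro sketch: table and values are placeholders
import sqlite3
import threading

conn = sqlite3.connect(":memory:")  # created in the main thread
conn.execute("CREATE TABLE events (note TEXT)")

def background_log() -> None:
    # Touching the main thread's connection from this worker thread raises
    # sqlite3.ProgrammingError: SQLite objects created in a thread can only
    # be used in that same thread.
    conn.execute("INSERT INTO events (note) VALUES (?)", ("heartbeat",))
    conn.commit()

t = threading.Thread(target=background_log)
t.start()
t.join()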

The Wrong Fix

The first thing most people try is setting check_same_thread=False on the connection:

# predictandprofit.io — don't do this without locking
conn = sqlite3.connect("trades.db", check_same_thread=False)

This silences the error but does not make SQLite thread-safe. All it does is tell Python's sqlite3 module to stop checking which thread is calling. SQLite's file-level locking is still there, but it protects the database file, not the in-memory connection object. If two threads write through the same connection at the same moment, their statements and transactions can interleave and corrupt the connection's internal state. The writes may succeed individually but leave the connection in a state where the next operation behaves unexpectedly or raises a DatabaseError with no clear cause.

I spent about two hours debugging a case where the trade count in the database was correct but a specific row had a null timestamp that should not have been possible. That was the shared-connection problem manifesting quietly instead of loudly.
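If you genuinely must share one connection, the bare-minimum mitigation is the explicit locking mentioned in the takeaways: every touch of the connection goes through a threading.Lock. A rough sketch, assuming you accept serializing every write, and not the pattern I ended up using:

# Sketch: one shared connection guarded by an explicit lock
import sqlite3
import threading

conn = sqlite3.connect("trades.db", check_same_thread=False)
_db_lock = threading.Lock()

def locked_write(sql: str, params: tuple) -> None:
    # Only one thread at a time may touch the shared connection.
    with _db_lock:
        conn.execute(sql, params)
        conn.commit()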

The Pattern That Actually Works

There are two approaches worth considering. The right one depends on your write frequency.

Option 1: Per-thread connections via threading.local()

This is the cleanest approach when writes are infrequent and happen from many different threads unpredictably:

# predictandprofit.io
import sqlite3
import threading

_local = threading.local()  # each thread sees its own attributes on this object

def get_connection(db_path: str) -> sqlite3.Connection:
    # Lazily open one connection per thread the first time that thread asks.
    if not hasattr(_local, "conn") or _local.conn is None:
        _local.conn = sqlite3.connect(db_path)
        _local.conn.row_factory = sqlite3.Row
    return _local.conn

def log_trade(db_path: str, trade: dict) -> None:
    conn = get_connection(db_path)
    cursor = conn.cursor()
    cursor.execute(
        "INSERT INTO trades (ticker, direction, price, quantity, timestamp) VALUES (?, ?, ?, ?, ?)",
        (trade["ticker"], trade["direction"], trade["price"], trade["quantity"], trade["timestamp"])
    )
    conn.commit()

Every thread that calls get_connection() gets its own connection. No sharing, no locking needed, no collisions. The tradeoff is that each connection carries its own transaction and write-ahead log state, so if one thread needs to immediately see rows that another thread has just written, this pattern does not guarantee it.

For trade logging specifically, this is fine. Each thread writes its own events. The read path (P&L queries, reporting) runs from the main thread at the end of a cycle, by which point all background writes have completed.
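For illustration, the split looks roughly like this; on_scan_complete and report_pnl are hypothetical names, not functions from the bots, and they lean on the get_connection and log_trade helpers above.

# Sketch: background threads write, the main thread reads after the cycle ends
def on_scan_complete(trade: dict) -> None:
    # Runs on the scheduler thread, which gets (or reuses) its own connection.
    log_trade("trades.db", trade)

def report_pnl() -> list:
    # Runs on the main thread once the cycle's background writes are done.
    conn = get_connection("trades.db")
    return conn.execute("SELECT ticker, price, quantity FROM trades").fetchall()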

Option 2: A dedicated writer thread with a queue

This is the right pattern when write frequency is higher or when you need guaranteed write ordering:

# predictandprofit.io
import sqlite3
import threading
import queue

# Every write in the system goes onto this queue; only the writer thread touches the DB.
_write_queue: queue.Queue = queue.Queue()
_writer_thread: threading.Thread | None = None

def _writer_loop(db_path: str) -> None:
    # The connection is created and used only inside this thread.
    conn = sqlite3.connect(db_path)
    while True:
        item = _write_queue.get()
        if item is None:  # shutdown sentinel
            _write_queue.task_done()
            break
        sql, params = item
        try:
            conn.execute(sql, params)
            conn.commit()
        except sqlite3.Error as e:
            print(f"DB write error: {e}")
        finally:
            _write_queue.task_done()
    conn.close()

def start_writer(db_path: str) -> None:
    global _writer_thread
    _writer_thread = threading.Thread(target=_writer_loop, args=(db_path,), daemon=True)
    _writer_thread.start()

def enqueue_write(sql: str, params: tuple) -> None:
    _write_queue.put((sql, params))

Any thread in the system calls enqueue_write(). The single writer thread is the only thing that ever touches the connection, so the one-thread, one-connection rule that keeps SQLite safe holds by construction. The queue serializes all writes automatically.

This is what I landed on for the dual-bot architecture. Both the Kalshi bot and the Alpaca bot call enqueue_write(). The writer thread processes them in order. No collisions, no shared state, and the queue gives you a natural audit trail of what was written and when.
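Wiring it in is a few lines. The shutdown path uses the None sentinel the writer loop already checks for; the values in the INSERT are placeholders.

# Sketch: startup, logging from any thread, and a clean shutdown
def main() -> None:
    start_writer("trades.db")

    # Either bot, from any thread:
    enqueue_write(
        "INSERT INTO trades (ticker, direction, price, quantity, timestamp) VALUES (?, ?, ?, ?, ?)",
        ("EXAMPLE", "buy", 1.23, 10, "2025-01-01T00:00:00Z"),
    )

    # On shutdown: drain pending writes, then tell the writer loop to exit.
    _write_queue.join()
    _write_queue.put(None)
    if _writer_thread is not None:
        _writer_thread.join()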

The Dual-Bot Specific Problem

Running two separate bots that share a single SQLite file introduces a different class of issue: file-level locking.

SQLite allows multiple processes to read from the same database file simultaneously. It does not allow concurrent writes across processes. One process holds the write lock at a time. The other process blocks until the lock is released.

In practice this means that if the Kalshi bot is mid-transaction when the Alpaca bot tries to log a trade, the Alpaca bot waits. Python's sqlite3 gives up after 5 seconds by default (the timeout parameter on connect()). If the lock is not released within that window, you get:

OperationalError: database is locked

The fix is straightforward:

# predictandprofit.io
conn = sqlite3.connect(db_path, timeout=30)

Set the timeout to something reasonable for your system. On my home server, 30 seconds is more than enough. The Kalshi bot's transactions complete in under 100 milliseconds, so for a 30-second timeout to expire the write lock would have to be held through roughly 300 back-to-back transactions (300 × 100 ms = 30 s), and that will not happen in normal operation.

If you are seeing database-locked errors on a single-process setup, the timeout is not your problem. You have a threading issue, not a process contention issue. Start there.

What the Migration Taught Me

Moving from PostgreSQL to SQLite is not just a configuration change. PostgreSQL handles concurrency at the server level. It is built for it. SQLite is a file-based library that gives you just enough concurrency to work in most situations, and just enough footguns to cause problems in the specific situations where automated trading bots operate: long-running processes, multiple threads, scheduled tasks firing on overlapping intervals.

The three rules I follow now:

  1. One connection per thread, always. Never share a connection object.
  2. For multi-process setups, use the writer-thread queue pattern to serialize writes within each process, and set a generous timeout for cross-process locking.
  3. Test under concurrent load. Single-threaded tests will not surface threading bugs. Spin up a quick script that hammers the write path from 10 threads simultaneously and watch what happens.
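A throwaway version of that script, assuming the Option 1 log_trade helper and a trades table matching its INSERT, can be this small:

# Concurrency smoke test: hammers log_trade from 10 threads at once
import sqlite3
import threading
import time

DB = "load_test.db"

def setup() -> None:
    conn = sqlite3.connect(DB)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS trades "
        "(ticker TEXT, direction TEXT, price REAL, quantity INTEGER, timestamp TEXT)"
    )
    conn.commit()
    conn.close()

def hammer(thread_id: int) -> None:
    for _ in range(50):
        log_trade(DB, {
            "ticker": f"TEST-{thread_id}",
            "direction": "buy",
            "price": 1.0,
            "quantity": 1,
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        })

setup()
threads = [threading.Thread(target=hammer, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("done: 10 threads x 50 writes; check above for tracebacks")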

The bugs that appear randomly in production almost always look deterministic in retrospect. This one was. I just had to make the conditions obvious to see it.

The full Predict & Profit bot source code, including the dual-bot logging architecture and SQLite integration, is available at predictandprofit.gumroad.com.