
Alpaca Fractional Shares and Dynamic Position Sizing: How the Bot Decides How Much to Buy

TL;DR / Key Takeaways

  • Alpaca supports fractional share orders via the notional parameter, which lets you buy an exact dollar amount of any stock regardless of share price — no minimum share count required.
  • Signal-proportional position sizing — stronger signals get more capital — outperforms flat bet sizing over a full trading session, but requires a hard portfolio-level exposure cap enforced before every order submission.
  • Minimum notional constraints ($1 for most Alpaca tickers) interact with your signal floor. Orders below that threshold need to be filtered out before they hit the API, or Alpaca will reject them with a validation error.
  • Tracking live open position exposure in SQLite before each order prevents the bot from stacking concurrent positions and breaching your total capital limit during a volatile session.

A trade signal is only half the problem. The other half is: given that the signal is valid, how much money should the bot put behind it?

This question does not have one right answer. But it does have a wrong answer, and most beginner automated trading implementations fall into it: flat bet sizing. Every valid signal gets the same dollar amount regardless of how strong the signal is. It is simple to implement and it leaves a lot of edge on the table.

The Alpaca stock bot I run uses a composite signal built from three inputs: technical indicators, Finnhub age-weighted news sentiment, and Kronos AI time-series predictions. Each of those components returns a score. The bot combines them into a single composite score between 0 and 1. A score of 0.95 is not the same opportunity as a score of 0.62. The position sizing logic should reflect that difference.
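The post does not show how the three components are combined; a minimal sketch, assuming a simple weighted average (the weights, helper name, and clamp are illustrative assumptions, not the bot's actual values):

```python
# Illustrative sketch of the combination step. The weights and the
# clamp are assumptions, not the bot's actual values.

def composite_score(technical: float, sentiment: float, kronos: float) -> float:
    """Blend three per-ticker component scores (each in [0, 1]) into one composite."""
    weights = {"technical": 0.35, "sentiment": 0.25, "kronos": 0.40}  # assumed
    score = (
        weights["technical"] * technical
        + weights["sentiment"] * sentiment
        + weights["kronos"] * kronos
    )
    return max(0.0, min(1.0, score))  # defensive clamp to [0, 1]
```

Any convex combination of scores in [0, 1] stays in [0, 1]; the clamp only matters if the weights drift from summing to one.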

How Alpaca Handles Fractional Shares

Before getting into the math, the mechanics: Alpaca's API supports fractional share orders for most equities. You pass a notional parameter instead of a qty parameter, and Alpaca fills as many shares, including a fractional remainder, as your dollar amount buys at the current market price.

# predictandprofit.io
from alpaca.trading.client import TradingClient
from alpaca.trading.models import Order
from alpaca.trading.requests import MarketOrderRequest
from alpaca.trading.enums import OrderSide, TimeInForce

client = TradingClient(api_key=API_KEY, secret_key=SECRET_KEY, paper=False)

def place_fractional_order(symbol: str, notional: float, side: OrderSide) -> Order:
    """Place a fractional share market order by dollar amount."""
    if notional < 1.0:
        raise ValueError(f"Notional {notional:.2f} below Alpaca minimum of $1.00 for {symbol}")

    order = MarketOrderRequest(
        symbol=symbol,
        notional=round(notional, 2),
        side=side,
        time_in_force=TimeInForce.DAY,
    )
    return client.submit_order(order_data=order)

The time_in_force=DAY setting is intentional. The bot does not hold fractional positions overnight. Every position opened during the session is closed before market close or on the next signal reversal. This keeps the risk profile clean and avoids complications with after-hours price swings on illiquid names.

The minimum notional is $1.00 for most Alpaca-supported tickers. Below that threshold, Alpaca returns a validation error. Duplicating the floor check in the order function, in addition to the sizing logic, means the constraint is enforced at the last gate before the API, regardless of how many signal paths feed into it.

The Position Sizing Formula

The core sizing formula scales the notional amount linearly with signal strength, bounded by a per-trade maximum and a portfolio-level exposure cap.

# predictandprofit.io
def calculate_notional(
    signal_score: float,
    max_per_trade: float,
    min_signal_threshold: float = 0.55,
    min_notional: float = 1.00,
) -> float:
    """
    Convert a composite signal score [0, 1] into a dollar notional.

    signal_score: composite score from Kronos + technicals + sentiment
    max_per_trade: maximum dollars to deploy on any single trade
    min_signal_threshold: scores below this do not trade
    min_notional: Alpaca's minimum order size
    """
    if signal_score < min_signal_threshold:
        return 0.0  # no trade

    # scale linearly from min_signal_threshold to 1.0
    scaled = (signal_score - min_signal_threshold) / (1.0 - min_signal_threshold)
    notional = scaled * max_per_trade

    if notional < min_notional:
        return 0.0  # too small to submit

    return round(notional, 2)

With max_per_trade=500 and min_signal_threshold=0.55:

  • Signal 0.55 maps to $0 (at threshold, no trade)
  • Signal 0.70 maps to $167 (about one third of max)
  • Signal 0.85 maps to $333 (about two thirds of max)
  • Signal 1.00 maps to $500 (full size)

The scaling is linear rather than exponential by design. An exponential curve sounds more sophisticated but it concentrates too much capital in the highest-signal events, which are also the noisiest. The linear model keeps sizing predictable and auditable. When I look at the SQLite log and see a $287 order, I can immediately infer the approximate signal score. That transparency matters more than theoretical optimality.
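To make the linear-vs-exponential tradeoff concrete, here is a side-by-side of the linear fraction against a hypothetical squared curve. The squared variant is illustrative only; the bot uses the linear model.

```python
# Compare the linear scaling fraction with a hypothetical convex
# (squared) alternative across a few sample signal scores.

def scaled_fraction(score: float, threshold: float = 0.55) -> float:
    """Fraction of max_per_trade deployed under the linear model."""
    if score < threshold:
        return 0.0
    return (score - threshold) / (1.0 - threshold)

def squared_fraction(score: float, threshold: float = 0.55) -> float:
    """Hypothetical convex alternative: square the linear fraction."""
    return scaled_fraction(score, threshold) ** 2

MAX_PER_TRADE = 500.0
for s in (0.62, 0.70, 0.85, 0.95):
    linear = MAX_PER_TRADE * scaled_fraction(s)
    convex = MAX_PER_TRADE * squared_fraction(s)
    print(f"score {s:.2f}: linear ${linear:7.2f}   squared ${convex:7.2f}")
```

Under the squared curve a 0.70 signal drops from roughly $167 to about $56, while a 0.95 signal only drops from about $444 to $395: capital piles into the top of the range, which is exactly the concentration the linear model avoids.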

Portfolio-Level Exposure Cap

Signal scoring happens per ticker. But the bot watches multiple tickers concurrently. Without a portfolio-level cap, strong signals on five tickers at once could trigger five max-size orders simultaneously. That is not position sizing. That is concentration risk.

# predictandprofit.io
import sqlite3
from datetime import date

def get_open_exposure(db_path: str) -> float:
    """Sum the current notional of all open positions in the SQLite log."""
    with sqlite3.connect(db_path) as conn:
        cursor = conn.execute(
            """
            SELECT COALESCE(SUM(notional), 0.0)
            FROM positions
            WHERE status = 'open'
              AND trade_date = ?
            """,
            (date.today().isoformat(),),
        )
        return cursor.fetchone()[0]

def get_available_capital(
    total_capital: float,
    max_exposure_pct: float,
    db_path: str,
) -> float:
    """Return how many dollars the bot is allowed to deploy right now."""
    max_exposure = total_capital * max_exposure_pct
    current_exposure = get_open_exposure(db_path)
    available = max_exposure - current_exposure
    return max(available, 0.0)

Before every order, the bot calls get_available_capital and clamps the calculated notional to whatever headroom remains.

# predictandprofit.io
def size_and_submit_order(
    symbol: str,
    signal_score: float,
    side: OrderSide,
    config: dict,
) -> dict | None:
    """Full sizing pipeline with portfolio exposure check."""
    raw_notional = calculate_notional(
        signal_score=signal_score,
        max_per_trade=config["max_per_trade"],
        min_signal_threshold=config["min_signal_threshold"],
    )

    if raw_notional == 0.0:
        return None  # signal too weak

    available = get_available_capital(
        total_capital=config["total_capital"],
        max_exposure_pct=config["max_exposure_pct"],
        db_path=config["db_path"],
    )

    final_notional = min(raw_notional, available)

    if final_notional < 1.0:
        return None  # no room left in portfolio

    return place_fractional_order(symbol=symbol, notional=final_notional, side=side)

In practice max_exposure_pct is set to 0.40 for the current bot configuration. At any given moment, no more than 40% of total capital is deployed across open positions. The rest sits in cash as a buffer against rapid signal reversals or volatility events.

Why Fractional Sizing Changes the Math

Without fractional shares, position sizing rounds down to whole shares. A $287 signal on a $340 stock buys zero shares. That trade never happens. With fractional shares, it buys 0.844 shares and the signal is exercised. Over a session with dozens of signals in that sub-one-share zone, the difference in deployed capital is material.

On a day where the bot generates 40 valid signals and the average signal score is around 0.72, the non-fractional bot might execute 12 trades (the ones where the notional comfortably covers a full share). The fractional bot executes all 40. Same signal quality, significantly more capital deployed.

This is not about taking more risk. The position sizing formula caps the notional regardless. It is about not leaving valid signals unexecuted because of an arbitrary share-count rounding constraint.
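The rounding effect from the example above can be sketched in a few lines. The prices and notional are the ones from the paragraph; the helper names are hypothetical:

```python
# Whole-share vs fractional sizing for the example above:
# a $287 notional on a $340 stock.

def shares_whole(notional: float, price: float) -> int:
    """Whole-share sizing rounds down; sub-one-share signals buy nothing."""
    return int(notional // price)

def shares_fractional(notional: float, price: float) -> float:
    """Fractional sizing deploys the full dollar amount."""
    return round(notional / price, 6)

print(shares_whole(287.00, 340.00))       # 0 shares: the trade never happens
print(shares_fractional(287.00, 340.00))  # 0.844118 shares: signal exercised
```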

What the SQLite Log Looks Like

Every order submission — whether it executes, is rejected, or falls below the sizing floor — gets a row in the orders table:

trade_date   | symbol | signal_score | raw_notional | final_notional | status   | alpaca_order_id
2026-05-01   | AAPL   | 0.81         | 288.89       | 288.89         | filled   | abc123
2026-05-01   | MSFT   | 0.71         | 177.78       | 122.34         | filled   | def456
2026-05-01   | TSLA   | 0.5508       | 0.89         | 0.00           | skipped  | null
2026-05-01   | NVDA   | 0.91         | 400.00       | 0.00           | no_room  | null

The raw_notional vs final_notional difference on the MSFT row shows the portfolio cap kicking in. The available capital headroom was $122.34, so the $177.78 raw signal got trimmed. The TSLA row shows the sizing floor in action — signal was valid but the notional was below $1. The NVDA row shows the cap already exhausted by prior trades.

All four outcomes are logged. You can only diagnose position sizing behavior if you can see the cases where the bot did not trade and why.

A Note on Slippage and Fractional Orders

Fractional market orders on Alpaca execute at the NBBO at the time of submission. For liquid names like AAPL or MSFT, slippage is negligible. For smaller-cap or less-liquid names, a market order on a fractional amount can experience meaningful slippage. The current bot restricts its ticker universe to S&P 500 constituents with average daily volume above 1M shares. Below that threshold, the fractional sizing logic is not relevant because the bot will not trade the ticker at all.

If you are building on this pattern for a wider ticker universe, consider switching to limit orders for anything with spread above a threshold. The notional parameter does not work with limit orders — you will need to calculate share count from current price — but for illiquid names that tradeoff is worth making.
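A sketch of that share-count conversion plus a simple spread gate. The threshold, rounding precision, and helper names are assumptions, not part of the bot:

```python
def spread_pct(bid: float, ask: float) -> float:
    """Quoted spread as a fraction of the midpoint."""
    mid = (bid + ask) / 2.0
    return (ask - bid) / mid

def qty_for_limit(notional: float, limit_price: float,
                  whole_shares: bool = True) -> float:
    """Convert a dollar notional into a share count for a limit order.
    Rounding down to whole shares may undershoot the target notional."""
    if whole_shares:
        return float(int(notional // limit_price))
    return round(notional / limit_price, 4)

# Hypothetical gate: market order for tight spreads, limit otherwise.
SPREAD_LIMIT = 0.002  # 0.2%, an assumed threshold
use_limit = spread_pct(99.80, 100.20) >= SPREAD_LIMIT
```

Note the same rounding-down problem from the fractional-shares discussion resurfaces here: with whole shares, a notional smaller than one share's price converts to a qty of zero and the order should be skipped.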


The full Alpaca bot source code, including the position sizing module, composite signal pipeline, and SQLite logging layer, is part of the Predict & Profit dual-bot package.


Frequently Asked Questions

Q: Why not use Kelly Criterion for position sizing?

A: Kelly Criterion is theoretically optimal for maximizing log wealth but it requires an accurate estimate of win probability and expected payoff, both of which shift constantly in live markets. The linear scaling model I use is simpler, more conservative, and easier to audit. It underperforms full Kelly in ideal conditions and outperforms it when your probability estimates are even slightly wrong, which they almost always are.

Q: What happens if Alpaca rejects the order after the exposure is already logged?

A: The position is logged with status = 'pending' on submission and updated to filled or rejected based on the Alpaca API response. If the order is rejected, the exposure is not counted against the portfolio cap for subsequent trades. The watchdog process that monitors order status handles the status update within 30 seconds of submission.
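A minimal sketch of that reconciliation step, assuming the positions table carries a hypothetical alpaca_order_id column (the actual watchdog is not shown in this post):

```python
import sqlite3

def reconcile_order_status(db_path: str, order_id: str, alpaca_status: str) -> None:
    """Fold Alpaca's terminal order status back into the local positions log.
    Anything other than a fill is marked closed so it stops counting
    toward open exposure. Assumes a hypothetical alpaca_order_id column."""
    position_status = "open" if alpaca_status == "filled" else "closed"
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "UPDATE positions SET status = ? WHERE alpaca_order_id = ?",
            (position_status, order_id),
        )
```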

Q: Does this work for short positions?

A: The sizing logic is side-agnostic. The side parameter is passed through to Alpaca and the notional calculation is identical. Alpaca requires margin approval for short selling, so fractional shorts are only available on margin accounts. The current bot only takes long positions.
