Can AI predict my performance trends?
Short answer: yes, within limits. AI can spot patterns in your training data and forecast where key metrics like FTP, fatigue, and power-duration may be heading. It won't tell you exactly how your next race will unfold, but it can guide smarter decisions about training load, recovery, and when to push for breakthroughs.
All models are wrong; some are useful. Use AI to inform choices, not to replace them.
What AI can and canât predict
Modern models learn from your ride files (power, heart rate, cadence), training load (TSS), wellness notes, and sometimes HRV, sleep, and temperature. With enough consistent data, they can produce practical forecasts.
Useful predictions most riders can expect:
- FTP trend: Direction and magnitude over 2–8 weeks based on recent load, intensity distribution, and recovery.
- Power–duration curve changes: Likely gains or losses at key durations (1, 5, 20, 60 minutes) and time to exhaustion at a target wattage.
- Readiness/fatigue: Probability you'll hit targets in tomorrow's workouts given acute training load, HRV, sleep, and previous session response.
- Plateau detection: Early flags that your current stimulus is no longer producing adaptation.
- Taper response: Expected performance lift after reducing load before an event.
Things AI will not reliably predict:
- Race outcomes shaped by tactics, weather, crashes, and competition.
- Sudden illness or life stress unless captured by your inputs (symptoms, sleep, HRV).
- Equipment or environmental shocks like power meter drift, heat waves, or altitude unless you tag them.
Think in probabilities and trends, not certainties. A good model might say, "70% chance your 20-minute power improves 2–4% in the next 21 days if you follow Plan A." That's actionable without pretending to be exact.
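Several of the forecasts above (readiness, fatigue, taper response) build on rolling load metrics. A common approximation is the Performance Management Chart math: CTL (fitness) and ATL (fatigue) are exponentially weighted averages of daily TSS with roughly 42- and 7-day time constants, and TSB (form) is their difference. The daily TSS values below are invented for illustration:

```python
# Sketch of the rolling-load math behind fatigue/readiness forecasts.
# CTL and ATL are exponentially weighted means of daily TSS; the daily
# TSS values here are made up.

def ewma_update(prev, tss, time_constant):
    """One day's exponentially weighted update toward today's TSS."""
    return prev + (tss - prev) / time_constant

ctl, atl = 60.0, 60.0                       # starting fitness/fatigue
for tss in [80, 0, 120, 90, 0, 150, 60]:    # one hypothetical week
    ctl = ewma_update(ctl, tss, 42)         # fitness: ~42-day constant
    atl = ewma_update(atl, tss, 7)          # fatigue: ~7-day constant
tsb = ctl - atl                             # form: negative = carrying fatigue
print(f"CTL {ctl:.1f}, ATL {atl:.1f}, TSB {tsb:+.1f}")
# → CTL 61.8, ATL 69.2, TSB -7.4
```

A hard week pushes ATL up much faster than CTL, so TSB goes negative: exactly the signal a readiness model would weigh against tomorrow's interval targets.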
The models behind the predictions
Coaching platforms mix several machine learning approaches. You don't need to code to use them, but knowing what's under the hood helps you trust (and question) the outputs.
| Model | Typical use | Data needed | Pros / watch-outs |
|---|---|---|---|
| Regularized regression or mixed-effects | FTP and power-duration trends; response to volume/intensity | Power, heart rate, training load, zones, blocks | Interpretable; needs clean inputs; may miss nonlinear effects |
| Gradient boosting trees | Workout completion probability; readiness classification | Training history, HRV/sleep, prior RPE, context tags | Strong accuracy; less transparent; sensitive to data drift |
| Time-series forecasting | CTL/ATL/TSB, seasonal patterns, taper effects | Daily load and wellness over months | Captures seasonality; assumes consistent logging |
| Sequence models (RNN/Transformer) | Short-term performance forecasting from sequences of workouts | High-frequency data, longer history | Powerful with lots of data; can overfit; harder to explain |
| Bayesian hierarchical | Personalized priors, credible intervals for FTP change | Your data + population data | Honest uncertainty; slower; requires calibration |
| Anomaly detection | Flags unusual fatigue, decoupling, meter drift | Power/HR, calibration notes | Great for data hygiene; raises false alarms if context missing |
Most tools ensemble several models and then layer simple rules (e.g., training zones, recovery days) to convert predictions into plan changes.
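The first row of the table, a regression on recent training history, is simple enough to sketch by hand. This is an illustrative least-squares trend fit on weekly best 20-minute power (an FTP proxy); the wattage data is invented, and real platforms fit far richer, regularized models on many more features:

```python
# Illustrative sketch: fit a linear trend to weekly 20-min power to
# estimate the direction and magnitude of an FTP proxy. Data is made up.

def linear_trend(values):
    """Ordinary least-squares slope and intercept for y over x = 0..n-1."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    cov = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
    var = sum((x - x_mean) ** 2 for x in xs)
    slope = cov / var
    return slope, y_mean - slope * x_mean

weekly_20min_watts = [288, 290, 289, 293, 295, 296]  # hypothetical tests
slope, intercept = linear_trend(weekly_20min_watts)
forecast = intercept + slope * (len(weekly_20min_watts) + 3)  # 4 weeks out
print(f"trend: {slope:+.1f} W/week, 4-week forecast: {forecast:.0f} W")
# → trend: +1.7 W/week, 4-week forecast: 303 W
```

Even this toy version shows why clean inputs matter: one mislabeled indoor ride or drifting meter shifts the slope, and the forecast with it.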
How to use AI predictions in your training
AI helps most when your inputs are consistent and you translate probabilities into clear decisions.
1) Clean inputs win
- Calibrate and label: Zero-offset your power meter, note firmware changes, and tag rides with heat, altitude, or illness.
- Use stable training zones: Update FTP/CP after valid tests or via model-verified breakthroughs; don't jump zones on one noisy day.
- Log RPE and context: A 6/10 at 260 watts on a cool day is not the same effort as a 6/10 at 260 watts in 35°C heat. Record sleep, HRV if available, and key life stressors.
- Guard against drift: Compare indoor vs outdoor and dual-record occasionally to catch meter deviations.
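The dual-recording check in the last bullet can be automated. A minimal sketch, assuming you have two paired power streams (e.g., power meter vs smart trainer); the 3% tolerance and wattage samples are illustrative choices, not platform defaults:

```python
# Hedged sketch: flag possible power-meter drift by comparing two
# simultaneously recorded streams. Tolerance and data are illustrative.

def drift_flag(stream_a, stream_b, tolerance_pct=3.0):
    """Mean absolute percent difference between paired samples,
    plus a boolean flag when it exceeds the tolerance."""
    diffs = [abs(a - b) / a * 100 for a, b in zip(stream_a, stream_b) if a > 0]
    mapd = sum(diffs) / len(diffs)
    return mapd, mapd > tolerance_pct

meter   = [250, 255, 248, 252, 251]   # hypothetical watts, device A
trainer = [241, 246, 240, 243, 242]   # hypothetical watts, device B
mapd, drifting = drift_flag(meter, trainer)
print(f"mean abs diff: {mapd:.1f}% -> {'check calibration' if drifting else 'ok'}")
# → mean abs diff: 3.5% -> check calibration
```

Run a check like this occasionally rather than every ride; a persistent offset (not one noisy interval) is the signal to zero-offset or re-tag files.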
2) Build prediction-to-decision rules
Turn model outputs into actions you can follow.
- If 7-day success probability for VO2max intervals < 50%, replace one session with tempo/threshold and add an extra easy day.
- If 21-day FTP improvement forecast ≥ 3%, schedule an assessment (20–30 min TT, ramp, or modeled CP) after a mini-taper.
- If fatigue risk is high (e.g., negative TSB plus low HRV), cap endurance at Zone 2 and prioritize sleep and fueling.
- If plateau is flagged (no gains at 5–20 min for 4–6 weeks), shift intensity distribution: e.g., reduce threshold, add two VO2max sessions for 2 weeks, maintain weekly kJ.
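Rules like these are easy to encode so you apply them consistently instead of renegotiating with yourself each week. A sketch under the thresholds listed above; the field names are hypothetical stand-ins for whatever your platform exports:

```python
# Prediction-to-decision rules from the list above, as a simple
# priority-ordered check. Field names and thresholds are illustrative.

def weekly_decision(f):
    """Map a dict of model outputs to one concrete training action."""
    if f["vo2_success_prob"] < 0.50:
        return "swap a VO2max session for tempo/threshold; add an easy day"
    if f["ftp_gain_21d_pct"] >= 3.0:
        return "schedule an assessment after a mini-taper"
    if f["tsb"] < 0 and f["hrv_low"]:
        return "cap rides at Zone 2; prioritize sleep and fueling"
    if f["plateau_weeks"] >= 4:
        return "shift intensity distribution for a 2-week VO2max block"
    return "continue current plan"

forecasts = {"vo2_success_prob": 0.42, "ftp_gain_21d_pct": 1.0,
             "tsb": 5, "hrv_low": False, "plateau_weeks": 0}
print(weekly_decision(forecasts))
# → swap a VO2max session for tempo/threshold; add an easy day
```

The ordering is itself a coaching decision: here, protecting workout quality outranks chasing a test, which outranks plateau surgery.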
3) Check the model, not just the mirror
Evaluate whether the predictions are actually helping you.
- Segment your calendar: Keep 2–4 weeks as a rolling "holdout." Don't let the model train on it; use it to verify forecasts.
- Track simple metrics: Mean absolute error (MAE) for FTP/power predictions; Brier score or calibration for probabilities.
- Recalibrate monthly: Update thresholds, re-tag bad files, and retrain if your setup changed (new meter, big altitude shift).
```
# Simple weekly check (pseudocode)
errors = []
for each week:
    predicted = model.predict(next_week_20min_power)
    actual = best_20min_power(next_week)
    errors.append(abs(predicted - actual))
mae = mean(errors)
if mae > 10-15 watts for 3 consecutive weeks:
    audit data (meter drift, heat), retrain, simplify decisions
```
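The loop above scores point forecasts; probability forecasts (e.g., "will I complete tomorrow's VO2max session?") are better scored with the Brier score mentioned earlier. A minimal sketch with invented probabilities and outcomes:

```python
# Brier score: mean squared error between a predicted probability and
# the 0/1 outcome. Probabilities and outcomes below are made up.

def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

predicted = [0.9, 0.7, 0.4, 0.8]   # model's session-completion probabilities
completed = [1,   1,   0,   1]     # what actually happened
score = brier_score(predicted, completed)
print(f"Brier score: {score:.3f} (0 = perfect; always guessing 0.5 scores 0.25)")
# → Brier score: 0.075 (0 = perfect; always guessing 0.5 scores 0.25)
```

Track this monthly alongside MAE: a model can have decent MAE on power while its readiness probabilities are badly calibrated, and vice versa.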
Fueling and recovery still set the ceiling. Even the best algorithm can't overcome chronic under-fueling, poor sleep, or erratic training zones. Use AI to place the right sessions on the right days, then execute with good nutrition, pacing, and patience.
Bottom line: AI can reliably map performance direction when you give it clean data and apply clear decision rules. Treat predictions as probabilities, verify regularly, and keep the coach's eye, yours or someone else's, on the bigger picture.