Stock Market Analysis & Forecast
Karim K. Kardous

High Level Findings

In this piece, I looked at 6 stocks over a 3-month period (July-September 2025) using multiple forecasting approaches to identify trading patterns and evaluate predictive model performance. The analysis examined Apple (AAPL), Google (GOOGL), Microsoft (MSFT), Tesla (TSLA), Netflix (NFLX), and Nvidia (NVDA), comparing naive baselines against machine learning models to determine which approaches best capture short-term price movements.

  • Tesla exhibits a consistent Friday effect with positive returns occurring 65% more frequently on Fridays compared to other weekdays across July-September 2025

  • Day-of-week seasonality dominates individual stock characteristics: Seasonal Naive or Naive Baseline models outperformed more sophisticated machine learning approaches for 4 out of 6 stocks, indicating that weekly trading patterns are predictable enough for simpler models to be the best tool for the job, at least in this analysis

  • End-of-week bias spans the entire tech portfolio: Apple, Tesla, and Nvidia show systematically higher returns on Fridays, suggesting sector-wide behavioral trading patterns rather than stock-specific anomalies

  • Tesla and Nvidia demonstrate roughly 3x larger daily price swings (±3%+ moves) than more mature tech stocks like Microsoft and Google, creating distinct forecasting challenges. Note that Google is still the most volatile when volatility is measured as the standard deviation of its closing price relative to its mean price (rather than as day-over-day % changes in price).

  • Put differently, when the standard deviation of each stock's closing price is scaled by its mean price, Google is the most volatile (~13% coefficient of variation), followed by Tesla (~12%) and Apple (~7%), with Microsoft the least volatile (~2%); a sketch of this computation follows below
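
For reference, the coefficient-of-variation figures come from a computation like this (a minimal sketch, assuming the pl_results frame assembled in the Initial Setup section further down):

Show the code
# coefficient of variation: std of Close scaled by mean Close, per ticker
cv_by_ticker = (
    pl_results
    .group_by('ticker')
    .agg((pl.col('Close').std() / pl.col('Close').mean()).alias('coef_of_variation'))
    .sort('coef_of_variation', descending=True)
)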

Initial Setup

After setting up the .venv with uv init, we start by importing the packages we need and setting up the rest of the environment.

Show the code
import yfinance as yf
import pandas as pd
import polars as pl
import polars.selectors as cs
import great_tables as gt
from great_tables import GT, md, style, loc

import matplotlib.pyplot as plt

from datetime import datetime, timedelta, date
import numpy as np
import calendar

import sys
import logging
import pprint
from IPython.display import display, HTML

from itertools import starmap

Here I set up custom HTML-styled {logging} for enhanced visual output in Quarto. While logging is typically most valuable in automated, scheduled jobs that need timestamped error and event tracking, here it provides highly customizable, visually appealing messages compared to ‘basic’ print() statements, taking advantage of Quarto’s ability to render custom CSS/HTML.

Show the code
# custom handler that outputs styled HTML
class StyledJupyterHandler(logging.StreamHandler):
    def __init__(self):
        super().__init__(sys.stdout)
    
    def emit(self, record):
        timestamp = datetime.now().strftime('%H:%M:%S')
        level = record.levelname
        message = record.getMessage()
        
        # style based on log level
        if level == 'INFO':
            color = '#28a745'  # green
            icon = 'ℹ️'
        elif level == 'WARNING':
            color = '#ffc107'  # yellow
            icon = '⚠️'
        elif level == 'ERROR':
            color = '#dc3545'  # red
            icon = '❌'
        else:
            color = '#6c757d'  # gray
            icon = '•'
        
        html_output = f"""
        <div style="
            background-color: {color}15;
            border-left: 4px solid {color};
            padding: 8px 12px;
            margin: 4px 0;
            border-radius: 4px;
            font-family: 'Monaco', 'Menlo', 'Ubuntu Mono', monospace;
            font-size: 13px;">
            <span style="color: {color}; font-weight: bold;">
                {icon} {level}
            </span>
            <span style="color: #6c757d; margin: 0 8px;">
                {timestamp}
            </span>
            <span style="color: #333;">
                {message}
            </span>
        </div>
        """
        
        display(HTML(html_output))

# set up logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# clear existing handlers and add only one; otherwise messages can repeat
if logger.handlers:
    logger.handlers.clear()
logger.addHandler(StyledJupyterHandler())

Fetching stock data from the Yahoo Finance API.

Show the code
# define the watchlist
tickers = ['AAPL', 'GOOGL', 'MSFT', 'TSLA', 'NFLX', 'NVDA']
period = "3mo"  # 3 months of data

# download one ticker's history; mapped over all tickers below
def download_stocks_data(ticker):
  try:
    stock_data = yf.download(ticker, start='2025-07-01', end='2025-09-26', progress=False) # fixed date bounds so the analysis stays reproducible and keeps matching these outputs months from now
    logger.info(f'Downloaded {ticker}: {len(stock_data)} days')
    return (ticker, stock_data)
  except Exception as e:
     logger.error(f'Failed to download {ticker}: {e}')
     return (ticker, None)
   
results = list(map(download_stocks_data, tickers)) 
ℹ️ INFO 10:31:41 Downloaded AAPL: 61 days
ℹ️ INFO 10:31:41 Downloaded GOOGL: 61 days
ℹ️ INFO 10:31:42 Downloaded MSFT: 61 days
ℹ️ INFO 10:31:43 Downloaded TSLA: 61 days
ℹ️ INFO 10:31:43 Downloaded NFLX: 61 days
ℹ️ INFO 10:31:44 Downloaded NVDA: 61 days

Here I call the yfinance api to pull stock prices for six major technology stocks: Apple (AAPL), Google (GOOGL), Microsoft (MSFT), Tesla (TSLA), Netflix (NFLX), and Nvidia (NVDA).
Note that this analysis is based on data pulled for July-September 2025. Results will differ if the code is re-run with different or dynamic time periods; the point here is to keep a fixed historical snapshot, since all my comments and analyses refer to that period.
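
If you instead wanted a window that stays current on every render, a dynamic alternative (deliberately not used here, to preserve the fixed snapshot) might compute the bounds on the fly; the 90-day span below is an illustrative choice:

Show the code
# hypothetical dynamic alternative (not used): a rolling ~3-month window
# that would change on every render and break reproducibility of the commentary
end_date = date.today()
start_date = end_date - timedelta(days=90)
dynamic_pull = yf.download('AAPL', start=start_date.isoformat(), end=end_date.isoformat(), progress=False)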

Display of data pull using great-tables

Here I decide to use {great-tables}, after running uv add great-tables in the terminal and adding import great_tables as gt to the script. It is a great way to display summary tables. But first, let’s convert the results from pandas to polars.

Show the code
def convert_to_polars(result):
  ticker, stock_data = result
  if stock_data is not None and not stock_data.empty:
    if isinstance(stock_data.columns, pd.MultiIndex):
      stock_data.columns = stock_data.columns.droplevel(1)  # remove the ticker level/unnest the MultiIndex from the yahoo download
    pl_stock = pl.from_pandas(stock_data.reset_index())
    pl_stock = pl_stock.with_columns(pl.lit(ticker).alias('ticker'))
    return pl_stock
  return None
  
# loop thru all data
all_data = list(map(convert_to_polars, results))

# filter out any None/Nulls
complete_data = [data for data in all_data if data is not None]

# concatenate all data together into a polars object
pl_results = pl.concat(complete_data, how="vertical")

# rearrange columns and sort descending dates and ticker; also adding new column dollar_volume
pl_results = (
  pl_results
    .select(['Date', 'ticker', 'Close', 'High', 'Low', 'Open', 'Volume'])
    .sort(['Date', 'ticker'], descending = [True, False])
    .with_columns( (pl.col('Volume') * pl.col('Close') ).alias('$volume'))
)

# build a gt function that formats numeric data, cleans column names, adds a title/subtitle (if provided) and customizes the overall theme, similar to the NYT style
def gt_nyt_custom(x, title='', subtitle='', first_10_rows_only=True):
    
    import polars as pl
    from great_tables import GT, md, style, loc
    
    # clean column names to title case (similar to clean_names)
    x = x.rename({col: col.replace('_', ' ').title() for col in x.columns})
    
    # identify numeric columns (float and integer)
    numeric_cols = [col for col in x.columns if x[col].dtype in [pl.Float64, pl.Float32]]
    integer_cols = [col for col in x.columns if x[col].dtype in [pl.Int64, pl.Int32, pl.Int16, pl.Int8]]
    
    # handle currency columns - check if specific columns exist
    currency_cols = []
    volume_cols = []
    date_cols = []
    
    for col in numeric_cols:
        if 'volume' in col.lower():  # matches both 'Volume' and '$Volume'
            volume_cols.append(col)
        else:
            currency_cols.append(col)
    
    # check for date columns
    for col in x.columns:
        if 'date' in col.lower() or x[col].dtype == pl.Date:
            date_cols.append(col)
    
    # format title and subtitle
    title_fmt = f"**{title}**" if title != "" else ""
    subtitle_fmt = f"*{subtitle}*" if subtitle != "" else ""
    
    # apply first_10_rows_only filter
    if first_10_rows_only:
        x = x.head(10)
    
    # create gt table and apply styling
    gt_table = (
        GT(x)
        .tab_header(
            title=md(title_fmt),
            subtitle=md(subtitle_fmt)
        )
        .tab_style(
            style=style.text(color='#333333'),
            locations=loc.body()
        )
        .tab_style(
            style=style.text(color='#CC6600'),
            locations=loc.column_labels()
        )
        .tab_options(
            table_font_names=['Merriweather', 'Georgia', 'serif'],
            table_font_size='14px',
            heading_title_font_size='18px',
            heading_subtitle_font_size='14px',
            column_labels_font_weight='bold',
            column_labels_background_color='#eeeeee',
            table_border_top_color='#dddddd',
            table_border_bottom_color='#dddddd',
            data_row_padding='6px',
            row_striping_include_table_body=True,
            row_striping_background_color='#f9f9f9',
        )
    )
    
    # conditionally apply formatting based on column existence
    if currency_cols:
        gt_table = gt_table.fmt_currency(
            columns=currency_cols,
            decimals=1,
            currency='USD'
        )
    
    if volume_cols:
        gt_table = gt_table.fmt_currency(
            columns=volume_cols,
            decimals=1,
            currency='USD',
            compact=True
        )
    
    if integer_cols:
        gt_table = gt_table.fmt_number(
            columns=integer_cols, 
            decimals=0
        )
    
    if date_cols:
        gt_table = gt_table.fmt_date(
            columns=date_cols, 
            date_style='year.mn.day'
        )
    
    return gt_table
  

styled_table = (
    gt_nyt_custom(
        pl_results,
        title = "Stock Market Data", 
        subtitle = "3 Month Pull (Only 10 records shown)",
        first_10_rows_only=True
    )
    .tab_style(
        style=style.text(align='left'),
        locations=loc.column_labels()
    )
)
    
styled_table
Stock Market Data
3 Month Pull (Only 10 records shown)
Date Ticker Close High Low Open Volume $Volume
2025/09/25 AAPL $256.9 $257.2 $251.7 $253.2 55,202,100 $14.2B
2025/09/25 GOOGL $245.8 $246.5 $240.7 $244.4 31,020,400 $7.6B
2025/09/25 MSFT $507.0 $510.0 $505.0 $508.3 15,786,500 $8.0B
2025/09/25 NFLX $1,208.2 $1,216.8 $1,191.5 $1,203.1 1,997,800 $2.4B
2025/09/25 NVDA $177.7 $180.3 $173.1 $174.5 191,586,700 $34.0B
2025/09/25 TSLA $423.4 $435.4 $419.1 $435.2 96,746,400 $41.0B
2025/09/24 AAPL $252.3 $255.7 $251.0 $255.2 42,303,700 $10.7B
2025/09/24 GOOGL $247.1 $252.4 $246.4 $251.7 28,201,000 $7.0B
2025/09/24 MSFT $510.1 $512.5 $506.9 $510.4 13,533,700 $6.9B
2025/09/24 NFLX $1,203.9 $1,221.5 $1,194.2 $1,218.6 2,773,100 $3.3B

Teasing out seasonality

The goal now is to generate a set of calendar plots where daily price changes drive the value/color gradient; this should make any day-of-week seasonality (if there is any) visually obvious. We also need to account for markets being closed on Saturdays and Sundays, and for months that don't start on a Monday.

Show the code
# add more date level and price level computations
calendar_data = (
    pl_results
    .sort(["ticker", "Date"])
    .with_columns([
        pl.col('Date').dt.year().alias('year'),           
        pl.col('Date').cast(pl.Date).alias('Date'),     
        pl.col('Date').dt.weekday().alias('day_of_week'),
        pl.col('Date').dt.day().alias('day_of_month'),
        pl.col('Date').dt.week().alias('week_of_year'),
        pl.col('Date').dt.month().alias('month'),
        pl.col('Date').dt.strftime('%B').alias('month_name'),
        # price-related calculations
        (pl.col('Close') - pl.col('Close').shift(1).over('ticker')).alias('daily_change_abs'),
        (
            (pl.col('Close') - pl.col('Close').shift(1).over('ticker'))
            / pl.col('Close').shift(1).over('ticker')
        ).alias('daily_change_perc')
    ])
)

# build calendar plot input: convert to pandas (to_pandas keeps the column names intact)
calendar_pd = calendar_data.to_pandas()
Show the code
def create_calendar_heatmap_mpl(df, stocks, months, vmin=-5, vmax=5, show_disclaimer=False):
    """
    create calendar heatmap using matplotlib
    """
    # convert to pandas 
    if hasattr(df, "to_pandas"):
        df = df.to_pandas()
    
    # filter the data on ticker/stock and month
    dff = df[(df['ticker'].isin(stocks)) & (df['month'].isin(months))].copy()
    
    # add day, week, and values columns
    dff['Date'] = pd.to_datetime(dff['Date'])
    dff['day'] = dff['Date'].dt.day
    dff['weekday'] = dff['Date'].dt.dayofweek
    dff['values'] = dff['daily_change_perc'] * 100   # convert to percentage
    
    # filter out weekends
    dff = dff[dff['weekday'] < 5]
    
    # unique months sorted
    unique_months = sorted(dff['month'].unique())
    ncols = len(unique_months)
    
    # create subplot grid - months as columns (reduced width)
    month_names = {7: 'Jul', 8: 'Aug', 9: 'Sep'}
    
    fig, axes = plt.subplots(1, ncols, figsize=(10, 3.5))
    if ncols == 1:
        axes = [axes]
    
    stock_name_map = {
        'AAPL': 'Apple', 'GOOGL': 'Google', 'MSFT': 'Microsoft',
        'NFLX': 'Netflix', 'NVDA': 'Nvidia', 'TSLA': 'Tesla'
    }
    
    # add one heatmap per month
    for idx, m in enumerate(unique_months):
        dmonth = dff[dff['month'] == m]
        
        if len(dmonth) > 0:
            # create proper calendar layout
            first_day = pd.Timestamp(year=dmonth['Date'].dt.year.iloc[0], month=m, day=1)
            first_weekday = first_day.weekday()
            
            # create calendar grid
            calendar_grid = {}
            for _, day_data in dmonth.iterrows():
                day = day_data['day']
                weekday = day_data['weekday']
                week = ((first_weekday + day - 1) // 7)
                calendar_grid[(week, weekday)] = day_data['values']
            
            # create arrays for heatmap
            max_weeks = max([key[0] for key in calendar_grid.keys()]) + 1 if calendar_grid else 1
            z_data = np.full((max_weeks, 5), np.nan)
            text_data = np.full((max_weeks, 5), '', dtype=object)
            
            # fill arrays - no inversion
            for (week, weekday), value in calendar_grid.items():
                if weekday < 5:
                    z_data[week, weekday] = value
                    if not np.isnan(value):
                        text_data[week, weekday] = f'{value:.1f}'
            
            # plot heatmap
            im = axes[idx].imshow(z_data, cmap='cividis_r', aspect='auto', vmin=vmin, vmax=vmax)
            
            # add text annotations
            for week in range(max_weeks):
                for day in range(5):
                    if text_data[week, day]:
                        value = z_data[week, day]
                        text_color = 'white' if value > 3.5 else 'black'
                        axes[idx].text(
                            day, week, text_data[week, day],
                            ha='center', va='center', fontsize=11, color=text_color
                        )
            
            # set axis labels
            axes[idx].set_xticks(range(5))
            axes[idx].set_xticklabels(['Mon', 'Tue', 'Wed', 'Thu', 'Fri'], fontsize=10)
            axes[idx].set_yticks([])
            axes[idx].set_title(month_names[m], fontsize=12, pad=8, fontweight='normal')
            axes[idx].grid(False)
            
            # remove borders/spines
            for spine in axes[idx].spines.values():
                spine.set_visible(False)
            
            axes[idx].set_facecolor((0.973, 0.973, 0.973))
    
    # add colorbar (top centered, horizontal)
    cbar = fig.colorbar(
      im, 
      ax=axes,
      orientation='horizontal',
      fraction=0.05, 
      pad=0.35,
      aspect=25,
      location='top'
      )
    
    colorbar_title = (
        "Note that the US stock exchanges closed on Jul 3 (early) & Sep 1; due to Independence & Labor Day, respectively\n% Change in Daily Stock Price" 
        if show_disclaimer 
        else "% Change in Daily Stock Price"
    )
    cbar.set_label(colorbar_title, fontsize=12, weight='bold')
    cbar.ax.xaxis.set_label_position('top')
    
    # set main title (left aligned)
    title = ", ".join(f"{stock_name_map[ticker]}" for ticker in stocks)
    fig.text(0.125, 0.85, title, fontsize=16, fontweight='bold',
             ha='left', va='bottom')
    
    fig.patch.set_facecolor((0.973, 0.973, 0.973))
    plt.tight_layout(rect=[0, 0, 1, 0.78])
    return fig
  
  
# loop through stocks; only show the disclaimer for apple
for ticker in tickers:
    show_disclaimer = (ticker == 'AAPL')
    fig = create_calendar_heatmap_mpl(
        calendar_data, [ticker], [7, 8, 9],
        vmin=-5, vmax=5, show_disclaimer=show_disclaimer
    )
    plt.show()

From the calendar plots above, a few patterns stand out:

  • Tesla shows a notable Friday effect: consistent positive performance (darker blueish tiles) appearing frequently on Fridays across multiple months

  • End-of-week effects across portfolio: Apple, Tesla, and Nvidia show more frequent positive returns on Fridays compared to other weekdays

  • High volatility stocks identified: Tesla and Nvidia exhibit frequent extreme daily moves (±3%+) while Netflix, Microsoft and Google show more stable patterns

  • Patterns persist across time: Friday effects (upticks in stock price) remain fairly consistent across July-September; a quick numeric check of these patterns follows below
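
To put numbers on what the calendar tiles suggest, a minimal sketch like the following (assuming the calendar_data frame built above; note that polars' dt.weekday() maps Monday to 1 and Friday to 5) aggregates the average daily move and the share of positive days per weekday:

Show the code
# average % change and share of positive days, by ticker and weekday
weekday_summary = (
    calendar_data
    .drop_nulls('daily_change_perc')
    .group_by(['ticker', 'day_of_week'])
    .agg([
        (pl.col('daily_change_perc').mean() * 100).alias('avg_pct_change'),
        (pl.col('daily_change_perc') > 0).mean().alias('share_positive')
    ])
    .sort(['ticker', 'day_of_week'])
)
# Fridays only, to eyeball the end-of-week effect
weekday_summary.filter(pl.col('day_of_week') == 5)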

Building the Forecasts

Show the code
# chunk 1: building forecast frameworks
from pathlib import Path
import joblib
import json
# import statements for all our models
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
import xgboost as xgb

def prepare_forecasting_data(pl_results):
    """
    convert polars data to pandas and prepare for forecasting
    """
    # convert to pandas for easier time series handling
    df = pl_results.to_pandas()
    
    # ensure Date is datetime
    df['Date'] = pd.to_datetime(df['Date'])
    
    # sort by ticker and date
    df = df.sort_values(['ticker', 'Date']).reset_index(drop=True)
    
    # add day of week features (for seasonality)
    df['day_of_week'] = df['Date'].dt.day_name()
    df['is_friday'] = (df['Date'].dt.weekday == 4).astype(int)
    df['is_monday'] = (df['Date'].dt.weekday == 0).astype(int)
    
    # calculate daily returns for analysis
    df['daily_return'] = df.groupby('ticker')['Close'].pct_change() * 100
    
    return df

def create_train_test_split(df, train_months=['July', 'August'], test_month='September'):
    """
    split data into training and testing sets
    """
    # create month names
    df['month_name'] = df['Date'].dt.strftime('%B')
    
    # split the data
    train_data = df[df['month_name'].isin(train_months)].copy()
    test_data = df[df['month_name'] == test_month].copy()
    
    # logger.info(f"training data: {train_data['Date'].min()} to {train_data['Date'].max()}")
    # logger.info(f"test data: {test_data['Date'].min()} to {test_data['Date'].max()}")
    logger.info(f"training observations by ticker:")
    for ticker, count in train_data.groupby('ticker').size().items():
        logger.info(f"  {ticker}: {count} observations")
    
    return train_data, test_data

class StockForecastingFramework:
    """
    comprehensive stock price forecasting framework with multiple models
    """
    
    def __init__(self, prediction_log_dir='predictions_log'):
        self.prediction_log_dir = Path(prediction_log_dir)
        self.prediction_log_dir.mkdir(exist_ok=True)
        
        # initialize model registry
        self.models = {}
        self.model_configs = {
            'naive_baseline': {'description': 'Previous day closing price'},
            'seasonal_naive': {'description': 'Same weekday last week price'},
            'linear_trend': {'description': 'Linear regression with day-of-week features'},
            'xgboost': {'description': 'XGBoost with engineered features'}
        }
        
        # prediction storage
        self.predictions_df = pd.DataFrame()
        self.model_performance = {}
        
        logger.info("StockForecastingFramework initialized")
        logger.info(f"prediction logs will be saved to: {self.prediction_log_dir}")
        
    def prepare_features(self, df, ticker):
        """
        create features for machine learning models
        """
        ticker_data = df[df['ticker'] == ticker].copy().sort_values('Date')
        
        # technical indicators
        ticker_data['sma_5'] = ticker_data['Close'].rolling(5).mean()
        ticker_data['sma_20'] = ticker_data['Close'].rolling(20).mean()
        ticker_data['volatility_5'] = ticker_data['Close'].rolling(5).std()
        
        # lag features
        for lag in [1, 2, 3, 5]:
            ticker_data[f'close_lag_{lag}'] = ticker_data['Close'].shift(lag)
            ticker_data[f'return_lag_{lag}'] = ticker_data['daily_return'].shift(lag)
        
        # day of week dummies
        ticker_data['monday'] = (ticker_data['Date'].dt.weekday == 0).astype(int)
        ticker_data['tuesday'] = (ticker_data['Date'].dt.weekday == 1).astype(int)
        ticker_data['wednesday'] = (ticker_data['Date'].dt.weekday == 2).astype(int)
        ticker_data['thursday'] = (ticker_data['Date'].dt.weekday == 3).astype(int)
        ticker_data['friday'] = (ticker_data['Date'].dt.weekday == 4).astype(int)
        
        # tesla friday effect (special feature based on the calendar analysis above)
        if ticker == 'TSLA':
            ticker_data['tesla_friday_effect'] = ticker_data['friday']
        else:
            ticker_data['tesla_friday_effect'] = 0
            
        return ticker_data.dropna()
    
    def log_prediction(self, prediction_date, target_date, ticker, model, 
                      predicted_price, actual_price=None):
        """
        log prediction to tracking system
        """
        timestamp = datetime.now()
        
        # calculate errors if actual price is available
        absolute_error = None
        percentage_error = None
        direction_correct = None
        
        if actual_price is not None:
            absolute_error = abs(predicted_price - actual_price)
            percentage_error = (absolute_error / actual_price) * 100
            # directional accuracy would need the prior close to compare
            # predicted vs actual move direction; left as None here
        
        # create prediction record
        prediction_record = {
            'timestamp': timestamp,
            'prediction_date': prediction_date,
            'target_date': target_date,
            'ticker': ticker,
            'model': model,
            'predicted_price': predicted_price,
            'actual_price': actual_price,
            'absolute_error': absolute_error,
            'percentage_error': percentage_error,
            'direction_correct': direction_correct
        }
        
        # add to internal storage
        self.predictions_df = pd.concat([
            self.predictions_df, 
            pd.DataFrame([prediction_record])
        ], ignore_index=True)
        
        # log to file
        log_file = self.prediction_log_dir / 'stock_predictions.csv'
        pd.DataFrame([prediction_record]).to_csv(
            log_file, mode='a', header=not log_file.exists(), index=False
        )
        
        # logger.info(f"logged prediction: {ticker} {model} -> ${predicted_price:.2f} for {target_date}")
    
    def evaluate_model_performance(self, model_name, predictions, actuals):
        """
        calculate comprehensive performance metrics
        """
        # convert to numpy arrays
        predictions = np.array(predictions)
        actuals = np.array(actuals)
        
        mae = mean_absolute_error(actuals, predictions)
        rmse = np.sqrt(mean_squared_error(actuals, predictions))
        mape = np.mean(np.abs((actuals - predictions) / actuals)) * 100
        
        self.model_performance[model_name] = {
            'MAE': mae,
            'RMSE': rmse,
            'MAPE': mape,
            'n_predictions': len(predictions)
        }
        
        logger.info(f"{model_name} performance - MAE: ${mae:.2f}, RMSE: ${rmse:.2f}, MAPE: {mape:.1f}%")
        
        return self.model_performance[model_name]

# initialize the framework
def setup_forecasting_framework():
    """
    setup the forecasting environment
    """
    logger.info("setting up forecasting framework...")
    
    # initialize framework
    framework = StockForecastingFramework()
    
    # prepare data (assuming forecast_df from previous step)
    logger.info("preparing forecasting data...")
    
    return framework

# chunk 2: baseline models (naive and seasonal naive forecasters)
class BaselineModels:
    """
    naive and seasonal naive forecasting models that serve as benchmarks
    """
    
    def __init__(self, framework):
        self.framework = framework
        logger.info("initializing baseline models...")
    
    def naive_forecast(self, train_data, test_dates, ticker):
        """
        naive forecast: next day price = today's price
        """
        ticker_train = train_data[train_data['ticker'] == ticker].sort_values('Date')
        
        if len(ticker_train) == 0:
            logger.warning(f"no training data found for {ticker}")
            return {}
        
        # get last known price
        last_price = ticker_train['Close'].iloc[-1]
        last_date = ticker_train['Date'].iloc[-1]
        
        predictions = {}
        for target_date in test_dates:
            predictions[target_date] = last_price
            
            # log the prediction
            self.framework.log_prediction(
                prediction_date=last_date,
                target_date=target_date,
                ticker=ticker,
                model='naive_baseline',
                predicted_price=last_price
            )
        
        return predictions
    
    def seasonal_naive_forecast(self, train_data, test_dates, ticker):
        """
        seasonal naive: next monday = last monday's price, etc.
        """
        ticker_train = train_data[train_data['ticker'] == ticker].sort_values('Date')
        
        if len(ticker_train) == 0:
            logger.warning(f"no training data found for {ticker}")
            return {}
        
        predictions = {}
        
        for target_date in test_dates:
            target_weekday = target_date.weekday()
            
            # find most recent day with same weekday
            same_weekday_data = ticker_train[
                ticker_train['Date'].dt.weekday == target_weekday
            ].sort_values('Date')
            
            if len(same_weekday_data) > 0:
                # use most recent same weekday price
                seasonal_price = same_weekday_data['Close'].iloc[-1]
                seasonal_date = same_weekday_data['Date'].iloc[-1]
            else:
                # fallback to naive if no same weekday found
                seasonal_price = ticker_train['Close'].iloc[-1]
                seasonal_date = ticker_train['Date'].iloc[-1]
            
            predictions[target_date] = seasonal_price
            
            # log the prediction
            self.framework.log_prediction(
                prediction_date=seasonal_date,
                target_date=target_date,
                ticker=ticker,
                model='seasonal_naive',
                predicted_price=seasonal_price
            )
        
        return predictions
    
    def run_baseline_forecasts(self, train_data, test_data):
        """
        run both baseline models for all tickers
        """
        logger.info("running baseline forecasts for all tickers...")
        
        tickers = train_data['ticker'].unique()
        test_dates = sorted(test_data['Date'].unique())
        
        all_predictions = {}
        
        for ticker in tickers:
            # run naive forecast
            naive_preds = self.naive_forecast(train_data, test_dates, ticker)
            
            # run seasonal naive forecast
            seasonal_preds = self.seasonal_naive_forecast(train_data, test_dates, ticker)
            
            all_predictions[ticker] = {
                'naive_baseline': naive_preds,
                'seasonal_naive': seasonal_preds
            }
        
        return all_predictions

def run_baseline_evaluation(framework, baseline_models, train_data, test_data):
    """
    evaluate baseline model performance on test data
    """
    logger.info("evaluating baseline model performance...")
    
    # run predictions
    predictions = baseline_models.run_baseline_forecasts(train_data, test_data)
    
    # evaluate against actual test data
    tickers = test_data['ticker'].unique()
    
    for ticker in tickers:
        ticker_test = test_data[test_data['ticker'] == ticker].sort_values('Date')
        
        if ticker not in predictions:
            continue
            
        for model_name in ['naive_baseline', 'seasonal_naive']:
            model_preds = predictions[ticker][model_name]
            
            # align predictions with actual test dates
            pred_values = []
            actual_values = []
            
            for _, row in ticker_test.iterrows():
                test_date = row['Date']
                actual_price = row['Close']
                
                if test_date in model_preds:
                    pred_values.append(model_preds[test_date])
                    actual_values.append(actual_price)
            
            if len(pred_values) > 0:
                # evaluate performance
                framework.evaluate_model_performance(
                    f"{model_name}_{ticker}",
                    pred_values,
                    actual_values
                )

def setup_and_run_baselines(framework, train_data, test_data):
    """
    setup and run baseline models
    """
    logger.info("setting up baseline models...")
    
    # initialize baseline models
    baseline_models = BaselineModels(framework)
    
    # run baseline evaluation
    run_baseline_evaluation(framework, baseline_models, train_data, test_data)
    
    logger.info("chunk 2 complete - baseline models evaluated")
    return baseline_models

# chunk 3: statistical models (linear regression and xgboost)
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

class StatisticalModels:
    """
    linear regression and xgboost models with feature engineering
    """
    
    def __init__(self, framework):
        self.framework = framework
        self.models = {}
        self.scalers = {}
        self.train_data = None
        logger.info("initializing statistical models...")
        
    def prepare_model_features(self, data, ticker, is_training=True):
        """
        prepare feature matrix for ml models
        """
        if is_training:
            # for training, use all available data for feature engineering
            ticker_data = self.framework.prepare_features(data, ticker)
            self.train_data = data
            # logger.info(f"training feature data for {ticker}: {len(ticker_data)} rows after dropna")
        else:
            # for testing, we simulate real forecasting - no future prices known
            test_ticker_data = data[data['ticker'] == ticker].sort_values('Date')
            train_ticker_data = self.train_data[self.train_data['ticker'] == ticker].sort_values('Date')
            
            # get the last known values from training data
            last_train_close = train_ticker_data['Close'].iloc[-1]
            last_train_return = train_ticker_data['daily_return'].iloc[-1] if 'daily_return' in train_ticker_data.columns else 0
            
            feature_rows = []
            
            for _, test_row in test_ticker_data.iterrows():
                test_date = test_row['Date']
                
                feature_dict = {
                    'Date': test_date,
                    'Close': last_train_close,  
                    'ticker': ticker,
                    'daily_return': 0,  
                    
                    # indicators based on last known training data
                    'sma_5': last_train_close,
                    'sma_20': last_train_close,
                    'volatility_5': abs(last_train_return) if last_train_return else 1.0,
                    
                    # lag features use historical data only
                    'close_lag_1': last_train_close,
                    'close_lag_2': last_train_close,
                    'close_lag_3': last_train_close,
                    'close_lag_5': last_train_close,
                    'return_lag_1': last_train_return,
                    'return_lag_2': last_train_return,
                    'return_lag_3': last_train_return,
                    'return_lag_5': last_train_return,
                    
                    # day of week features 
                    'monday': 1 if test_date.weekday() == 0 else 0,
                    'tuesday': 1 if test_date.weekday() == 1 else 0,
                    'wednesday': 1 if test_date.weekday() == 2 else 0,
                    'thursday': 1 if test_date.weekday() == 3 else 0,
                    'friday': 1 if test_date.weekday() == 4 else 0,
                    
                    # tesla friday effect
                    'tesla_friday_effect': 1 if (ticker == 'TSLA' and test_date.weekday() == 4) else 0
                }
                
                feature_rows.append(feature_dict)
            
            ticker_data = pd.DataFrame(feature_rows)
            # logger.info(f"test feature data for {ticker}: {len(ticker_data)} rows created")
        
          
        if len(ticker_data) < 1:
            logger.warning(f"insufficient feature data for {ticker}: only {len(ticker_data)} rows")
            return None, None, None, None
        
        # define feature columns
        feature_cols = [
            'sma_5', 'sma_20', 'volatility_5',
            'close_lag_1', 'close_lag_2', 'close_lag_3', 'close_lag_5',
            'return_lag_1', 'return_lag_2', 'return_lag_3', 'return_lag_5',
            'monday', 'tuesday', 'wednesday', 'thursday', 'friday',
            'tesla_friday_effect'
        ]
        
        # create feature matrix
        X = ticker_data[feature_cols].values
        y = ticker_data['Close'].values
        dates = ticker_data['Date'].values
        
        return X, y, dates, feature_cols
    
    def train_linear_model(self, train_data, ticker):
        """
        train linear regression model with day-of-week features
        """
        X_train, y_train, train_dates, feature_cols = self.prepare_model_features(train_data, ticker, is_training=True)
        
        if X_train is None or len(X_train) == 0:
            logger.warning(f"no training data for linear model: {ticker}")
            return None
        
        # scale features
        scaler = StandardScaler()
        X_train_scaled = scaler.fit_transform(X_train)
        
        # train linear regression
        model = LinearRegression()
        model.fit(X_train_scaled, y_train)
        
        # store model and scaler
        model_key = f"linear_{ticker}"
        self.models[model_key] = model
        self.scalers[model_key] = scaler
        
        # logger.info(f"trained linear regression for {ticker} with {len(X_train)} samples")
        return model, scaler
    
    def train_xgboost_model(self, train_data, ticker):
        """
        train xgboost model with advanced features
        """
        X_train, y_train, train_dates, feature_cols = self.prepare_model_features(train_data, ticker, is_training=True)
        
        if X_train is None or len(X_train) == 0:
            logger.warning(f"no training data for xgboost model: {ticker}")
            return None
        
        # xgboost parameters
        params = {
            'objective': 'reg:squarederror',
            'max_depth': 4,
            'learning_rate': 0.1,
            'n_estimators': 50,
            'random_state': 42
        }
        
        # train xgboost
        model = xgb.XGBRegressor(**params)
        model.fit(X_train, y_train)
        
        # store model
        model_key = f"xgboost_{ticker}"
        self.models[model_key] = model
        
        # logger.info(f"trained xgboost for {ticker} with {len(X_train)} samples")
        return model
    
    def predict_linear(self, test_data, ticker):
        """
        make predictions using linear regression
        """
        model_key = f"linear_{ticker}"
        
        if model_key not in self.models:
            logger.warning(f"no trained linear model for {ticker}")
            return {}
        
        model = self.models[model_key]
        scaler = self.scalers[model_key]
        
        # prepare test features 
        X_test, y_test, test_dates, feature_cols = self.prepare_model_features(test_data, ticker, is_training=False)
        
        if X_test is None or len(X_test) == 0:
            logger.warning(f"no test features for linear model: {ticker}")
            return {}
        
        # scale and predict
        X_test_scaled = scaler.transform(X_test)
        predictions = model.predict(X_test_scaled)
        
        # create prediction dictionary
        pred_dict = {}
        for i, date in enumerate(test_dates):
            pred_dict[pd.to_datetime(date)] = predictions[i]
            
            # log prediction
            self.framework.log_prediction(
                prediction_date=test_dates[-1] if len(test_dates) > 0 else date,
                target_date=pd.to_datetime(date),
                ticker=ticker,
                model='linear_trend',
                predicted_price=predictions[i]
            )
        
        # logger.info(f"linear model predictions for {ticker}: {len(predictions)} forecasts")
        return pred_dict
    
    def predict_xgboost(self, test_data, ticker):
        """
        make predictions using xgboost
        """
        model_key = f"xgboost_{ticker}"
        
        if model_key not in self.models:
            logger.warning(f"no trained xgboost model for {ticker}")
            return {}
        
        model = self.models[model_key]
        
        # prepare test features
        X_test, y_test, test_dates, feature_cols = self.prepare_model_features(test_data, ticker, is_training=False)
        
        if X_test is None or len(X_test) == 0:
            logger.warning(f"no test features for xgboost model: {ticker}")
            return {}
        
        # predict
        predictions = model.predict(X_test)
        
        # create prediction dictionary
        pred_dict = {}
        for i, date in enumerate(test_dates):
            pred_dict[pd.to_datetime(date)] = predictions[i]
            
            # log prediction
            self.framework.log_prediction(
                prediction_date=test_dates[-1] if len(test_dates) > 0 else date,
                target_date=pd.to_datetime(date),
                ticker=ticker,
                model='xgboost',
                predicted_price=predictions[i]
            )
        
        # logger.info(f"xgboost predictions for {ticker}: {len(predictions)} forecasts")
        return pred_dict
    
    def run_statistical_forecasts(self, train_data, test_data):
        """
        train and run both statistical models for all tickers
        """
        logger.info("running statistical forecasts for all tickers...")
        
        tickers = train_data['ticker'].unique()
        all_predictions = {}
        
        for ticker in tickers:
            # logger.info(f"processing statistical models for {ticker}...")
            
            # train models
            linear_model = self.train_linear_model(train_data, ticker)
            xgb_model = self.train_xgboost_model(train_data, ticker)
            
            # make predictions
            linear_preds = self.predict_linear(test_data, ticker)
            xgb_preds = self.predict_xgboost(test_data, ticker)
            
            all_predictions[ticker] = {
                'linear_trend': linear_preds,
                'xgboost': xgb_preds
            }
        
        logger.info("statistical forecasts completed for all tickers")
        return all_predictions

def run_statistical_evaluation(framework, statistical_models, train_data, test_data):
    """
    evaluate statistical model performance
    """
    logger.info("evaluating statistical model performance...")
    
    # run predictions
    predictions = statistical_models.run_statistical_forecasts(train_data, test_data)
    
    # evaluate against actual test data
    tickers = test_data['ticker'].unique()
    
    for ticker in tickers:
        ticker_test = test_data[test_data['ticker'] == ticker].sort_values('Date')
        
        if ticker not in predictions:
            continue
            
        for model_name in ['linear_trend', 'xgboost']:
            model_preds = predictions[ticker][model_name]
            
            # align predictions with actual test dates
            pred_values = []
            actual_values = []
            
            for _, row in ticker_test.iterrows():
                test_date = pd.to_datetime(row['Date'])
                actual_price = row['Close']
                
                if test_date in model_preds:
                    pred_values.append(model_preds[test_date])
                    actual_values.append(actual_price)
            
            if len(pred_values) > 0:
                # evaluate performance
                framework.evaluate_model_performance(
                    f"{model_name}_{ticker}",
                    pred_values,
                    actual_values
                )

def setup_and_run_statistical_models(framework, train_data, test_data):
    """
    setup and run statistical models
    """
    logger.info("setting up statistical models...")
    
    # initialize statistical models
    statistical_models = StatisticalModels(framework)
    
    # run evaluation
    run_statistical_evaluation(framework, statistical_models, train_data, test_data)
    
    logger.info("chunk 3 complete - statistical models evaluated")
    return statistical_models

# performance summary function
def get_performance_df(framework):
    """
    extract performance data from framework and return as polars dataframe
    """
    if not framework.model_performance:
        logger.warning("no model performance data found")
        return None
    
    # convert performance dict to list
    performance_data = []
    
    for model_ticker, metrics in framework.model_performance.items():
        # parse model name and ticker (format: "model_name_TICKER")
        parts = model_ticker.split('_')
        ticker = parts[-1]  
        model = '_'.join(parts[:-1]).replace('_', ' ').title()
      
        performance_data.append({
            'ticker': ticker,
            'model': model,
            'mae_dollars': metrics['MAE'],
            'rmse_dollars': metrics['RMSE'], 
            'mape_percent': metrics['MAPE'] / 100,
            'accuracy': 1 - metrics['MAPE'] / 100,
            'predictions': metrics['n_predictions']
        })
    
    # convert to polars and sort
    performance_df = pl.DataFrame(performance_data).sort(['ticker', 'model'])
    
    return performance_df

# main execution
logger.info("starting forecasting pipeline...")

# setup framework
framework = setup_forecasting_framework()

# prepare data
forecast_df = prepare_forecasting_data(pl_results)
train_data, test_data = create_train_test_split(forecast_df)

# run baseline models
baseline_models = setup_and_run_baselines(framework, train_data, test_data)

# run statistical models
statistical_models = setup_and_run_statistical_models(framework, train_data, test_data)

# show results
performance_df = get_performance_df(framework)
ℹ️ INFO 10:31:50 starting forecasting pipeline...
ℹ️ INFO 10:31:50 setting up forecasting framework...
ℹ️ INFO 10:31:50 StockForecastingFramework initialized
ℹ️ INFO 10:31:50 prediction logs will be saved to: predictions_log
ℹ️ INFO 10:31:50 preparing forecasting data...
ℹ️ INFO 10:31:50 training observations by ticker:
ℹ️ INFO 10:31:50 AAPL: 43 observations
ℹ️ INFO 10:31:50 GOOGL: 43 observations
ℹ️ INFO 10:31:50 MSFT: 43 observations
ℹ️ INFO 10:31:50 NFLX: 43 observations
ℹ️ INFO 10:31:50 NVDA: 43 observations
ℹ️ INFO 10:31:50 TSLA: 43 observations
ℹ️ INFO 10:31:50 setting up baseline models...
ℹ️ INFO 10:31:50 initializing baseline models...
ℹ️ INFO 10:31:50 evaluating baseline model performance...
ℹ️ INFO 10:31:50 running baseline forecasts for all tickers...
ℹ️ INFO 10:31:50 naive_baseline_AAPL performance - MAE: $9.39, RMSE: $12.11, MAPE: 3.8%
ℹ️ INFO 10:31:50 seasonal_naive_AAPL performance - MAE: $10.71, RMSE: $13.58, MAPE: 4.3%
ℹ️ INFO 10:31:50 naive_baseline_GOOGL performance - MAE: $29.60, RMSE: $31.31, MAPE: 12.1%
ℹ️ INFO 10:31:50 seasonal_naive_GOOGL performance - MAE: $32.93, RMSE: $34.63, MAPE: 13.4%
ℹ️ INFO 10:31:50 naive_baseline_MSFT performance - MAE: $4.96, RMSE: $6.08, MAPE: 1.0%
ℹ️ INFO 10:31:50 seasonal_naive_MSFT performance - MAE: $5.72, RMSE: $6.71, MAPE: 1.1%
ℹ️ INFO 10:31:50 naive_baseline_NFLX performance - MAE: $19.50, RMSE: $25.52, MAPE: 1.6%
ℹ️ INFO 10:31:50 seasonal_naive_NFLX performance - MAE: $20.03, RMSE: $22.18, MAPE: 1.6%
ℹ️ INFO 10:31:50 naive_baseline_NVDA performance - MAE: $3.80, RMSE: $4.26, MAPE: 2.2%
ℹ️ INFO 10:31:50 seasonal_naive_NVDA performance - MAE: $6.22, RMSE: $7.12, MAPE: 3.6%
ℹ️ INFO 10:31:50 naive_baseline_TSLA performance - MAE: $54.71, RMSE: $67.36, MAPE: 13.2%
ℹ️ INFO 10:31:50 seasonal_naive_TSLA performance - MAE: $47.72, RMSE: $58.50, MAPE: 11.5%
ℹ️ INFO 10:31:50 chunk 2 complete - baseline models evaluated
ℹ️ INFO 10:31:50 setting up statistical models...
ℹ️ INFO 10:31:50 initializing statistical models...
ℹ️ INFO 10:31:50 evaluating statistical model performance...
ℹ️ INFO 10:31:50 running statistical forecasts for all tickers...
ℹ️ INFO 10:31:51 statistical forecasts completed for all tickers
ℹ️ INFO 10:31:51 linear_trend_AAPL performance - MAE: $9.69, RMSE: $12.44, MAPE: 3.9%
ℹ️ INFO 10:31:51 xgboost_AAPL performance - MAE: $12.80, RMSE: $15.46, MAPE: 5.2%
ℹ️ INFO 10:31:51 linear_trend_GOOGL performance - MAE: $27.49, RMSE: $29.08, MAPE: 11.2%
ℹ️ INFO 10:31:51 xgboost_GOOGL performance - MAE: $31.22, RMSE: $32.99, MAPE: 12.7%
ℹ️ INFO 10:31:51 linear_trend_MSFT performance - MAE: $7.08, RMSE: $8.30, MAPE: 1.4%
ℹ️ INFO 10:31:51 xgboost_MSFT performance - MAE: $5.98, RMSE: $7.53, MAPE: 1.2%
ℹ️ INFO 10:31:51 linear_trend_NFLX performance - MAE: $69.47, RMSE: $72.53, MAPE: 5.7%
ℹ️ INFO 10:31:51 xgboost_NFLX performance - MAE: $17.93, RMSE: $20.80, MAPE: 1.5%
ℹ️ INFO 10:31:51 linear_trend_NVDA performance - MAE: $13.74, RMSE: $14.38, MAPE: 7.8%
ℹ️ INFO 10:31:51 xgboost_NVDA performance - MAE: $3.82, RMSE: $5.24, MAPE: 2.2%
ℹ️ INFO 10:31:51 linear_trend_TSLA performance - MAE: $52.33, RMSE: $64.97, MAPE: 12.6%
ℹ️ INFO 10:31:51 xgboost_TSLA performance - MAE: $58.70, RMSE: $71.03, MAPE: 14.2%
ℹ️ INFO 10:31:51 chunk 3 complete - statistical models evaluated

On the choice of Forecasting Models

I always prefer to fit multiple nested models per series (per stock, in this case). This gives me a baseline against which contending models can be run and compared.
In this case, four forecasting approaches were tested: Naive Baseline (last known price), Seasonal Naive (same-weekday historical price), Linear Regression with day-of-week features, and XGBoost with engineered technical indicators (mainly revolving around rolling n-day averages).
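
All model comparisons below lean on MAPE (mean absolute percentage error), which evaluate_model_performance computes above as:

$$\mathrm{MAPE} = \frac{100}{n}\sum_{t=1}^{n}\left|\frac{A_t - F_t}{A_t}\right|$$

where $A_t$ is the actual closing price and $F_t$ the forecasted price on test day $t$; MAE and RMSE are reported in dollars alongside it.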

Simple models dominate across the portfolio: Naive Baseline achieved the lowest error rates for Apple (3.80% MAPE), Microsoft (0.98% MAPE), and Nvidia (2.18% MAPE), while Seasonal Naive performed best for Tesla (11.54% MAPE).

Machine learning shows limited advantage: XGBoost outperformed the simpler methods only for Netflix (1.46% MAPE), and even there Naive Baseline was a very close second (1.57% MAPE).
This suggests that complex feature engineering and modeling provide minimal forecasting improvement for most stocks in this timeframe.

Volatility drives prediction difficulty: High-volatility stocks like Tesla show consistently poor performance across all models (11.5-14.2% MAPE), while stable stocks like Microsoft achieve sub-1% error rates even with basic approaches.

Day-of-week seasonality often proves more valuable than technical indicators: The success of Seasonal Naive models validates the calendar analysis findings, suggesting that weekly trading patterns can carry more predictive signal than moving averages or lag features for short-term forecasting.

Show the code
# check comprehensive performance across all models
performance_df = get_performance_df(framework)
# sort on stock and mape (ascending)
performance_df = performance_df.sort(['ticker', 'mape_percent'])

(
  gt_nyt_custom(performance_df, title='Complete Model Performance By Stock', first_10_rows_only=False)
  .fmt_percent(['Mape Percent', 'Accuracy'])
  .cols_hide('Predictions')
)
Complete Model Performance By Stock
Ticker Model Mae Dollars Rmse Dollars Mape Percent Accuracy
AAPL Naive Baseline $9.4 $12.1 3.80% 96.20%
AAPL Linear Trend $9.7 $12.4 3.92% 96.08%
AAPL Seasonal Naive $10.7 $13.6 4.34% 95.66%
AAPL Xgboost $12.8 $15.5 5.20% 94.80%
GOOGL Linear Trend $27.5 $29.1 11.19% 88.81%
GOOGL Naive Baseline $29.6 $31.3 12.05% 87.95%
GOOGL Xgboost $31.2 $33.0 12.71% 87.29%
GOOGL Seasonal Naive $32.9 $34.6 13.43% 86.57%
MSFT Naive Baseline $5.0 $6.1 0.98% 99.02%
MSFT Seasonal Naive $5.7 $6.7 1.13% 98.87%
MSFT Xgboost $6.0 $7.5 1.19% 98.81%
MSFT Linear Trend $7.1 $8.3 1.39% 98.61%
NFLX Xgboost $17.9 $20.8 1.46% 98.54%
NFLX Naive Baseline $19.5 $25.5 1.57% 98.43%
NFLX Seasonal Naive $20.0 $22.2 1.63% 98.37%
NFLX Linear Trend $69.5 $72.5 5.65% 94.35%
NVDA Naive Baseline $3.8 $4.3 2.18% 97.82%
NVDA Xgboost $3.8 $5.2 2.23% 97.77%
NVDA Seasonal Naive $6.2 $7.1 3.60% 96.40%
NVDA Linear Trend $13.7 $14.4 7.81% 92.19%
TSLA Seasonal Naive $47.7 $58.5 11.54% 88.46%
TSLA Linear Trend $52.3 $65.0 12.58% 87.42%
TSLA Naive Baseline $54.7 $67.4 13.18% 86.82%
TSLA Xgboost $58.7 $71.0 14.19% 85.81%

The results reveal that sophisticated machine learning models fairly consistently under-perform simpler approaches across most stocks. Notably, XGBoost achieves the lowest error rate only for Netflix (1.46% MAPE), while the basic Naive Baseline or Seasonal Naive models lead for Apple, Microsoft, Nvidia (where XGBoost is a very close second), and Tesla.

This pattern suggests that for short-term daily price forecasting, the day-of-week seasonality captured by simple historical patterns often provides more predictive value than complex feature engineering.

The exceptions are Google, where Linear Trend slightly outperforms the baseline methods, and Tesla, where all models struggle with high volatility (11.5-14.2% MAPE), indicating that Tesla's erratic price movements are inherently difficult to forecast regardless of modeling approach.
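
To pull these per-stock winners out of the performance table programmatically rather than by eye, a small sketch over the performance_df built above could look like this:

Show the code
# lowest-MAPE model per ticker: sort by error, then keep each ticker's first row
best_per_ticker = (
    performance_df
    .sort('mape_percent')
    .group_by('ticker', maintain_order=True)
    .first()
    .sort('ticker')
)
best_per_ticker.select(['ticker', 'model', 'mape_percent'])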

I wanted to include even more complex forecasting methods such as Prophet (from Meta) or LSTMs (Long Short-Term Memory networks), but seeing that simple models already achieve high accuracy, I decided against them. In real-world, day-to-day work, every additional line of code or model is additional maintenance, and that comes with its own set of risks and pain points.

Thanks for taking the time to read this; hopefully, if nothing else, you have enjoyed it. More to come!