For a while now, I’ve maintained a daily journal in a simple text file. It’s a straightforward system: I write the date, then list my activities under WORK: and LIFE: headers. This approach is simple and low-friction, but as the file grew, its limitations became obvious. Searching for specific activities was a grep-and-pray operation, and any form of analysis was purely manual. The data had no structure.
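The layout described above (a date line, then WORK:/LIFE: headers, then items) is regular enough to parse into real structure. A minimal sketch, assuming ISO-style date lines; the function name and exact format are illustrative, not the journal’s actual conventions:

```python
import re
from collections import defaultdict

def parse_journal(text):
    """Parse a plain-text journal into {date: {"WORK": [...], "LIFE": [...]}}.
    Assumes each entry starts with a YYYY-MM-DD date on its own line,
    followed by WORK: and LIFE: section headers."""
    entries = defaultdict(lambda: {"WORK": [], "LIFE": []})
    date, section = None, None
    for line in text.splitlines():
        line = line.strip()
        if re.fullmatch(r"\d{4}-\d{2}-\d{2}", line):
            date, section = line, None          # new entry begins
        elif line in ("WORK:", "LIFE:"):
            section = line.rstrip(":")          # switch active section
        elif line and date and section:
            entries[date][section].append(line) # activity under current section
    return dict(entries)
```

With entries in a dict like this, “find every day I mentioned X” becomes a lookup instead of a grep-and-pray.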
Containerization: A Case Study in Environment and Configuration Management
Today’s objective was to containerize the SEC filing analysis application to ensure a consistent and reproducible runtime environment. This process involved not only creating the necessary Docker artifacts but also refactoring the application’s configuration to adhere to best practices for handling secrets and environment-specific variables.
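The configuration side of that refactor follows a common pattern: read secrets and environment-specific values from environment variables so the same container image runs anywhere. A sketch of the pattern, with placeholder variable names (SEC_API_KEY, DB_PATH, LOG_LEVEL) rather than the app’s actual ones:

```python
import os

def load_config():
    """Pull settings from environment variables instead of hard-coded values,
    so the same Docker image works in any environment."""
    api_key = os.environ.get("SEC_API_KEY")
    if not api_key:
        # fail fast if the secret is missing, e.g. forgot `docker run -e SEC_API_KEY=...`
        raise RuntimeError("SEC_API_KEY is not set")
    return {
        "api_key": api_key,
        "db_path": os.environ.get("DB_PATH", "/data/cache.db"),  # sensible default
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }
```

Secrets are injected at run time (via `-e` flags or an env file) and never baked into the image.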
The Illusion of Quarterly Data: Correctly Calculating Financials from SEC Filings
When building financial analysis tools, one of the most common and dangerous assumptions is that the financial data you receive—whether from an API or directly from SEC filings—represents discrete, isolated time periods. A “Q2” report, for instance, should contain data only for the second quarter. Right?
Unfortunately, this is often not the case. Raw SEC filings, specifically the quarterly 10-Q and annual 10-K reports, follow reporting rules that can be misleading if taken at face value. In this post, I’ll walk through the challenges of parsing these documents and present a robust Python solution to derive true, discrete quarterly financial figures.
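The core arithmetic involved can be sketched in two steps: 10-Qs often report cumulative year-to-date figures that must be differenced into discrete quarters, and there is no standalone Q4 filing, so Q4 falls out of the 10-K’s annual total. Function names here are illustrative:

```python
def discrete_quarters(ytd_values):
    """Convert cumulative year-to-date figures [YTD_Q1, YTD_Q2, YTD_Q3]
    into discrete per-quarter figures by differencing successive values."""
    discrete, prev = [], 0
    for v in ytd_values:
        discrete.append(v - prev)
        prev = v
    return discrete

def derive_q4(annual_total, q1_to_q3):
    """A 10-K covers the full fiscal year and no standalone Q4 10-Q exists,
    so Q4 = annual total minus the sum of the first three discrete quarters."""
    return annual_total - sum(q1_to_q3)
```

For example, year-to-date revenue of 90, 190, and 295 becomes discrete quarters of 90, 100, and 105; with an annual total of 400, Q4 is 105.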
The Mess in SEC Financial Filings: A New Challenge in Data Extraction
In the world of financial data extraction, the challenge of parsing and interpreting complex documents is ever-present. As I continue to refine my financial analysis tool, I’ve encountered a new set of challenges that highlight the messy nature of financial filings. This post delves into these issues and how they impact the accuracy and reliability of data extraction.
Reading Notes: Thinking, Fast and Slow by Daniel Kahneman
This week I started my second book of 2025, after finishing “The Psychology of Money” by Morgan Housel: “Thinking, Fast and Slow” by Daniel Kahneman. The book delves into the dual systems of thought that govern our decision-making processes.
Refactoring for Resilience: Introducing a Database Caching Layer
For our financial analysis tool, the latest series of updates focuses on an architectural enhancement: the integration of a persistent database layer for caching, performance tracking, and data retention. This post details the changes, the rationale behind them, and how they set the stage for future development.
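The heart of a persistent caching layer like this is a get-or-compute lookup: check the database first, and only run the expensive operation on a miss. A minimal sketch using SQLite; the schema and function name are illustrative, not the tool’s actual design:

```python
import json
import sqlite3

def get_or_compute(db_path, key, compute_fn):
    """Look the key up in a SQLite cache table; on a miss, call the
    (expensive) compute_fn and persist its JSON-serializable result."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, value TEXT)")
    row = conn.execute("SELECT value FROM cache WHERE key = ?", (key,)).fetchone()
    if row is not None:
        conn.close()
        return json.loads(row[0])   # cache hit: no recomputation
    result = compute_fn()           # cache miss: do the expensive work once
    conn.execute("INSERT OR REPLACE INTO cache (key, value) VALUES (?, ?)",
                 (key, json.dumps(result)))
    conn.commit()
    conn.close()
    return result
```

Because the cache lives in a database file rather than in memory, results survive restarts, which also makes performance tracking and data retention possible on top of it.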
Seeing is Believing: A Tale of Two Demos (and the Power of Caching)
In this post, I want to show you the demo for the project after making the caching work. Below are two demos of the exact same analysis request. The only difference is that one is a cold start, and the other benefits from the now-functional cache.
From Sequential to Supersonic: A Developer's Journey into Parallel LLM Queries
When I first started building this application, my focus was on a simple goal: use a Large Language Model (LLM) to read dense SEC filings and extract structured, easy-to-digest insights. The initial prototype was magical. I could feed it the “Business” section of a 10-K filing, and it would return a beautiful JSON object with competitive advantages, key products, and more.
But then I noticed that each analysis took a long time, especially when I wanted to analyze multiple sections such as Business, Management’s Discussion and Analysis (MD&A), Risk Factors, and Financials. Each of these sections required a separate LLM API call, and I was making those calls one after another in a synchronous loop.
That’s when I hit the wall, compounded by the previous ‘cache’ implementation that wasn’t caching anything. The user experience was far from ideal, and I knew I had to do something about it. In this post, I’ll show how I transformed a sequential script into a multi-layered concurrent application that feels responsive and powerful, reducing the wait time from a couple of minutes to just seconds.
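The essential move from sequential to concurrent can be sketched with asyncio: fire off all section analyses at once and await them together, so total wall time is roughly the slowest call rather than the sum of all of them. The stand-in below uses a sleep in place of a real async LLM client call:

```python
import asyncio

async def analyze_section(name):
    """Stand-in for one LLM API call; a real async client call
    (e.g. an async SDK request) would go here."""
    await asyncio.sleep(0.1)  # simulate network latency
    return {"section": name, "insights": "..."}

async def analyze_filing(sections):
    """Run all section analyses concurrently instead of in a loop."""
    return await asyncio.gather(*(analyze_section(s) for s in sections))

results = asyncio.run(
    analyze_filing(["Business", "MD&A", "Risk Factors", "Financials"])
)
```

With four sections, the sequential version would take four times the per-call latency; `asyncio.gather` brings it down to roughly one, while preserving the order of results.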
Debugging the Engine: Fixing a Broken Filing Cache
After running a few analyses, I noticed that despite the existing edgar caching strategy, the app was still making HTTP requests; my terminal logs showed it re-downloading the same SEC filings over and over.
The caching layer, which was supposed to prevent this, was clearly not working. This post is a chronicle of a classic developer experience: realizing a core feature isn’t working as intended and diving in to fix it.
Measuring What Matters: Building an Evaluation and Cost-Tracking Framework for my AI Financial Analyst
In this post, we will dive a bit into model evaluation. Building with Large Language Models (LLMs) presents two major challenges:
- How do you know if the output is any good?
- How do you prevent API costs from spiraling out of control?
Today, I tackled both of these head-on by building a robust evaluation and cost-tracking framework. It was yet another interesting learning journey, and an important step in moving from a fun prototype to a reliable tool.
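The cost-tracking half usually reduces to per-call token accounting. A minimal sketch; the per-1K-token prices below are placeholders, since real values depend on the model and provider:

```python
def estimate_cost(prompt_tokens, completion_tokens,
                  input_price_per_1k=0.003, output_price_per_1k=0.015):
    """Estimate the dollar cost of one LLM call from its token counts.
    Prices are illustrative defaults, not any specific provider's rates."""
    return (prompt_tokens / 1000) * input_price_per_1k \
         + (completion_tokens / 1000) * output_price_per_1k
```

Summing these estimates across every call in an analysis run gives a running total, which is what keeps API spend from spiraling out of control unnoticed.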