This week I started reading my second book of 2025, after finishing “The Psychology of Money” by Morgan Housel. The new book is “Thinking, Fast and Slow” by Daniel Kahneman, which delves into the dual systems of thought that govern our decision-making.
Refactoring for Resilience: Introducing a Database Caching Layer
For our financial analysis tool, the latest series of updates focuses on an architectural enhancement: the integration of a persistent database layer for caching, performance tracking, and data retention. This post details the changes, the rationale behind them, and how they set the stage for future development.
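To give a concrete (if simplified) picture of what such a layer can look like, here is a minimal sketch using SQLite. The table name, columns, and retention window are illustrative only, not the project’s actual schema.

```python
import sqlite3
import json
import time

# Illustrative schema: one table keyed by ticker + section,
# storing the analysis result as JSON plus a timestamp for retention.
def init_cache(path: str = "analysis_cache.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS analysis_cache (
               ticker TEXT,
               section TEXT,
               result_json TEXT,
               created_at REAL,
               PRIMARY KEY (ticker, section)
           )"""
    )
    return conn

def get_cached(conn, ticker: str, section: str, max_age_s: float = 86400):
    # Return a cached result if it exists and is fresh enough, else None.
    row = conn.execute(
        "SELECT result_json, created_at FROM analysis_cache WHERE ticker=? AND section=?",
        (ticker, section),
    ).fetchone()
    if row and time.time() - row[1] < max_age_s:
        return json.loads(row[0])
    return None  # cache miss: caller runs the analysis and stores it

def put_cached(conn, ticker: str, section: str, result: dict) -> None:
    conn.execute(
        "INSERT OR REPLACE INTO analysis_cache VALUES (?, ?, ?, ?)",
        (ticker, section, json.dumps(result), time.time()),
    )
    conn.commit()
```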
Seeing is Believing: A Tale of Two Demos (and the Power of Caching)
In this post, I want to show the project in action now that caching works. Below are two demos of the exact same analysis request; the only difference is that one is a cold start, and the other benefits from the now-functional cache.
From Sequential to Supersonic: A Developer's Journey into Parallel LLM Queries
When I first started building this application, my focus was on a simple goal: use a Large Language Model (LLM) to read dense SEC filings and extract structured, easy-to-digest insights. The initial prototype was magical. I could feed it the “Business” section of a 10-K filing, and it would return a beautiful JSON object with competitive advantages, key products, and more.
But then I discovered that each analysis takes time, especially when I wanted to analyze multiple sections such as Business, Management’s Discussion and Analysis (MD&A), Risk Factors, and Financials. Each of these sections required a separate LLM API call, and I was making those calls one after another in a synchronous loop.
That’s when I hit the wall, compounded by the previous ‘cache’ implementation that wasn’t caching anything. The user experience was far from ideal, and I knew I had to do something about it. So in this post, I will show how I transformed a sequential script into a multi-layered, concurrent application that feels responsive and powerful, cutting the wait time from a couple of minutes to just seconds.
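To make the concurrency idea concrete before diving into the details, here is a minimal sketch of the pattern, assuming the OpenAI Python SDK’s async client; the section names, prompt, and model are placeholders, not the project’s actual code.

```python
import asyncio
from openai import AsyncOpenAI  # assumes the OpenAI SDK; any async-capable LLM client works

client = AsyncOpenAI()
SECTIONS = ["Business", "MD&A", "Risk Factors", "Financials"]  # placeholder section names

async def analyze_section(name: str, text: str) -> dict:
    # One LLM call per filing section; these used to run one after another.
    resp = await client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": f"Extract structured insights from the {name} section:\n{text}"}],
    )
    return {"section": name, "analysis": resp.choices[0].message.content}

async def analyze_filing(sections: dict) -> list:
    # asyncio.gather fires all section analyses concurrently instead of awaiting each in turn.
    tasks = [analyze_section(name, sections[name]) for name in SECTIONS if name in sections]
    return await asyncio.gather(*tasks)

# results = asyncio.run(analyze_filing(sections_by_name))
```

With gather, the total wait is roughly the duration of the slowest single call rather than the sum of all of them, which is where the minutes-to-seconds improvement comes from.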
Debugging the Engine: Fixing a Broken Filing Cache
After running a few analyses, I noticed that despite the existing edgar caching strategy, the app still made HTTP requests: my terminal logs showed it re-downloading the same SEC filings over and over.
The caching layer, which was supposed to prevent this, was clearly not working. This post is a chronicle of a classic developer experience: realizing a core feature isn’t working as intended and diving in to fix it.
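As a rough illustration of what a working filing cache needs to do (not the actual implementation in the app), a disk-backed download cache keyed on the request URL looks something like this; the cache directory and User-Agent string are placeholders.

```python
import hashlib
from pathlib import Path
import requests

CACHE_DIR = Path(".filing_cache")  # illustrative location
CACHE_DIR.mkdir(exist_ok=True)

def fetch_filing(url: str) -> str:
    # Key the cache on the URL so the same filing is downloaded at most once.
    key = hashlib.sha256(url.encode()).hexdigest()
    cached = CACHE_DIR / f"{key}.html"
    if cached.exists():
        return cached.read_text()
    # SEC's servers expect a descriptive User-Agent identifying the caller.
    resp = requests.get(url, headers={"User-Agent": "ExampleApp you@example.com"})
    resp.raise_for_status()
    cached.write_text(resp.text)
    return resp.text
```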
Measuring What Matters: Building an Evaluation and Cost-Tracking Framework for my AI Financial Analyst
In this post, we will dive a bit into model evaluation. Building with Large Language Models (LLMs) presents two major challenges:
- How do you know if the output is any good?
- How do you prevent API costs from spiraling out of control?
Today, I tackled both of these head-on by building a robust evaluation and cost-tracking framework. It was (once again) an interesting learning journey, and an important step in moving from a fun prototype to a reliable tool.
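To give a flavour of the cost-tracking half, here is a minimal sketch of a usage tracker; the class, model name, and per-token prices are placeholders, and the real prices depend on your provider’s current rates.

```python
from dataclasses import dataclass, field

# Placeholder prices per 1K tokens; check your provider's pricing page for real values.
PRICES = {"gpt-4o-mini": {"input": 0.00015, "output": 0.0006}}

@dataclass
class CostTracker:
    total_cost: float = 0.0
    calls: list = field(default_factory=list)

    def record(self, model: str, input_tokens: int, output_tokens: int) -> float:
        # Convert token counts into dollars and keep a per-call log for later inspection.
        price = PRICES[model]
        cost = (input_tokens / 1000) * price["input"] + (output_tokens / 1000) * price["output"]
        self.total_cost += cost
        self.calls.append(
            {"model": model, "in": input_tokens, "out": output_tokens, "cost": cost}
        )
        return cost

# tracker = CostTracker()
# tracker.record("gpt-4o-mini", input_tokens=1200, output_tokens=350)
```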
Anatomy of an AI Financial Analyst: From Raw Filing to Interactive Dashboard
In the last post, I briefly outlined the project for an AI-powered tool to analyze SEC filings. In this blog, I’m diving deep into the architecture and core components that form the engine of this financial analyst.
We’ll dissect the journey of a single user request, from a ticker symbol entered into a form all the way to a rich, interactive dashboard filled with AI-generated insights.
LLM Learning Journey: Building an LLM-powered Financial Analyst with an AI Coding Partner
It’s been a while, but I’m happy to be back to blogging (helped by my AI assistants) with a new project and a new series! This is a dual experiment:
- Can we build an AI-powered tool that automates the analysis of complex financial documents, while learning the latest LLM techniques along the way?
- How far can a developer push a project by truly partnering with AI coding assistants?
I’ll be sharing the journey right here.
The Problem: Information Overload in Finance
Before this project, I didn’t have much experience with financial documents, especially company filings; I had only a basic understanding of their importance and relevance. After spending some time digging into them, I realized the challenge: you’re faced with mountains of dense, jargon-filled documents like 10-Ks and 10-Qs. Buried within these SEC filings is a goldmine of information about a company’s performance, strategy, and risks. But extracting and understanding it is a tedious, manual process, not to mention that the filings themselves are not standardized.
Many websites, such as Yahoo Finance and Google Finance, offer information about companies. But to build a platform with LLMs that can truly understand and analyze these documents, it’s essential to work directly with the raw data. This is where the SEC’s EDGAR database comes in, providing access to all public filings.
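For anyone curious what “working directly with the raw data” looks like, here is a minimal sketch that lists a company’s recent 10-K filings via the SEC’s public submissions endpoint; the helper name and User-Agent string are placeholders (EDGAR asks you to identify yourself in that header).

```python
import requests

HEADERS = {"User-Agent": "ResearchApp contact@example.com"}  # placeholder identity

def latest_10k_accessions(cik: str) -> list:
    # The submissions endpoint lists a company's recent filings, keyed by a zero-padded CIK.
    url = f"https://data.sec.gov/submissions/CIK{int(cik):010d}.json"
    data = requests.get(url, headers=HEADERS).json()
    recent = data["filings"]["recent"]
    return [
        accession
        for accession, form in zip(recent["accessionNumber"], recent["form"])
        if form == "10-K"
    ]

# print(latest_10k_accessions("320193"))  # Apple's CIK
```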
Inomad Diary-04-Infrastructure-02-Implementations-03-Create Resources
🔎 Intro
In this post, I will start creating resources in Azure using Pulumi. The full process is quite complex, so I will break it down into smaller parts and highlight the key points from my implementation.
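As a preview, and assuming Python as the Pulumi language with the pulumi_azure_native provider (the actual project setup may differ), a first resource definition can look as simple as this; the resource names and region are illustrative.

```python
import pulumi
from pulumi_azure_native import resources, storage

# Illustrative names and location; the real stack follows the project's own naming conventions.
rg = resources.ResourceGroup("inomad-rg", location="westeurope")

account = storage.StorageAccount(
    "inomadsa",
    resource_group_name=rg.name,
    sku=storage.SkuArgs(name=storage.SkuName.STANDARD_LRS),
    kind=storage.Kind.STORAGE_V2,
)

# Exports make key outputs visible via `pulumi stack output`.
pulumi.export("resource_group", rg.name)
pulumi.export("storage_account", account.name)
```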
Inomad Diary-04-Infrastructure-02-Implementations-02-Create a Pulumi Project
🔎 Intro
In the last post, I discussed some points about setting up Azure for Inomad’s infrastructure. In this post, I will create a Pulumi project and discuss some interesting points about the initial setup and the considerations behind it.