<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Feature Store on StackSimplify | DevOps &amp; Cloud Education by Kalyan Reddy</title><link>https://stacksimplify.com/tags/feature-store/</link><description>Recent content in Feature Store on StackSimplify | DevOps &amp; Cloud Education by Kalyan Reddy</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Tue, 14 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://stacksimplify.com/tags/feature-store/index.xml" rel="self" type="application/rss+xml"/><item><title>Feature Stores: The Package Registry for ML Features</title><link>https://stacksimplify.com/blog/feature-stores-ml/</link><pubDate>Tue, 14 Apr 2026 00:00:00 +0000</pubDate><guid>https://stacksimplify.com/blog/feature-stores-ml/</guid><description>Your training pipeline computes &amp;ldquo;average transaction amount&amp;rdquo; as the mean of the last 30 days. Your inference API computes it as the mean of the last 7 days.
Same feature name. Different values. Your model is silently wrong.
This is training-serving skew. The number one silent killer of ML models in production.
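To make the skew concrete, here is a minimal sketch of the scenario above. The data, function names, and amounts are all hypothetical; the point is that two independently written implementations of the "same" feature quietly disagree:

```python
from datetime import date, timedelta

# Hypothetical transaction history: (day, amount) pairs for the last 60 days.
today = date(2026, 4, 14)
transactions = [(today - timedelta(days=d), 100.0 + d) for d in range(60)]

def training_avg_txn_amount(txns, as_of):
    """Training pipeline: mean over the last 30 days (batch job)."""
    window = [amt for day, amt in txns if (as_of - day).days in range(30)]
    return sum(window) / len(window)

def serving_avg_txn_amount(txns, as_of):
    """Inference API: mean over the last 7 days, written separately."""
    window = [amt for day, amt in txns if (as_of - day).days in range(7)]
    return sum(window) / len(window)

# Same feature name, different windows, silently different values.
train_val = training_avg_txn_amount(transactions, today)  # 114.5
serve_val = serving_avg_txn_amount(transactions, today)   # 103.0
```

The model was trained on the 30-day value but scores requests with the 7-day value, so every prediction sees a feature distribution it never learned.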
The Problem
ML features get computed in two places:
Context  | How Features Are Computed                  | Problem
Training | Batch job on historical data, saved to CSV | Code written by data scientist
Serving  | API computes on the fly per request        | Different code, different logic
Two separate implementations.</description></item></channel></rss>