Sitemap
A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.
Pages
Posts
ML4LM — Speculative Decoding — from where we left off
Published:
Most blogs stop at the basics and skip the real details. I break down what’s usually missing: batching, accept/reject checks, and fallbacks.
ML4LM — Profiling torch.compile on DenseNet-121 Inference (GTX 1650) [medium]
Published:
Introduction
ML4LM — Guards vs Graph Breaks in PyTorch: What You Need to Know [medium]
Published:
Guards vs Graph Breaks in PyTorch torch.compile
ML4LM — A tiny Triton primer (toy example) [medium]
Published:
A tiny Triton primer (toy example)
ML4LM — PyTorch — What Not to Do in PyTorch Models for Better Performance (dynamo) [medium]
Published:
PyTorch — What Not to Do in PyTorch Models for Better Performance (dynamo)
Knowledge Distillation at a Low Level [medium]
Published:
Knowledge Distillation at a Low Level
ML4LM — Fine-Tune Smarter, Not Harder: Discover LoRA for LLMs [medium]
Published:
Fine-Tune Smarter, Not Harder: Discover LoRA for LLMs
Mastering Anomaly Detection in Production: When to use and when not to use [medium]
Published:
Anomaly Detection with Isolation Forest
ML4LM-Content Based Recommendation Systems [medium]
Published:
Understanding Content-Based Recommendation Systems
Making sense of Bellman Equation — RL — ML4LM [medium]
Published:
Making sense of Bellman Equation — RL — ML4LM
ML4LM — MLE vs Bayesian intuitive Insights, No Math! [medium]
Published:
MLE vs Bayesian intuitive Insights, No Math!
ML4LM - Vanishing Gradient Problem? [medium]
Published:
Ever noticed that while training neural networks, the loss stops decreasing, and weights don’t get updated after a certain point? Understanding this hitch involves looking at how we optimize loss using gradient descent, adjusting weights to find the lowest loss.
ML4LM- How does Lasso bring sparsity? [medium]
Published:
Many of us have heard about Lasso and its ability to bring sparsity to models, but not everyone understands the nitty-gritty of how it actually works. In a nutshell, Lasso is like a superhero for overfitting problems, tackling them through a technique called regularization. If you’re not familiar with regularization and how it fights overfitting, I’d recommend checking that out first. For now, let’s dive into the magic of how Lasso brings sparsity.
ML4LM — What are Derivatives? [medium]
Published:
Back in my school days up to the 10th grade, I had a genuine love for math. Whether it was tackling geometry, diving into trigonometry, or exploring progressions, I felt pretty confident in my abilities. But then came derivatives, and suddenly everything took a sharp turn. Instead of visualizing and understanding the beauty of math, I found myself stuck in a maze of formulas and differentiation problem-solving.
ML4LM-Feature Scaling- Normalization [medium]
Published:
Ever wondered how data gets its makeover before revealing its insights? Enter the battleground of data refinement, where normalization and standardization go head-to-head. Think of it as a compelling tale of two methods, each with its unique charm.
ML4LM— Cleaning the Data [medium]
Published:
Cleaning data for Machine Learning is like preparing for a road trip where your model is the driver, and your data is the map. However, the map is a mishmash of routes, some as straightforward as a highway, while others resemble a convoluted maze that even a GPS would find confusing.
portfolio
Portfolio item number 1
Short description of portfolio item number 1
Portfolio item number 2
Short description of portfolio item number 2 
publications
Paper Title Number 1
Published in Journal 1, 2009
This paper is about the number 1. The number 2 is left for future work.
Recommended citation: Your Name, You. (2009). "Paper Title Number 1." Journal 1. 1(1).
Download Paper | Download Slides
Paper Title Number 2
Published in Journal 1, 2010
This paper is about the number 2. The number 3 is left for future work.
Recommended citation: Your Name, You. (2010). "Paper Title Number 2." Journal 1. 1(2).
Download Paper | Download Slides
Paper Title Number 3
Published in Journal 1, 2015
This paper is about the number 3. The number 4 is left for future work.
Recommended citation: Your Name, You. (2015). "Paper Title Number 3." Journal 1. 1(3).
Download Paper | Download Slides
Paper Title Number 4
Published in GitHub Journal of Bugs, 2024
This paper is about fixing template issue #693.
Recommended citation: Your Name, You. (2024). "Paper Title Number 4." GitHub Journal of Bugs. 1(4).
Download Paper
talks
Talk 1 on Relevant Topic in Your Field
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Conference Proceeding talk 3 on Relevant Topic in Your Field
Published:
This is a description of your conference proceedings talk. Note the different value in the type field; you can put anything in this field.
teaching
Teaching experience 1
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Teaching experience 2
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.
