Point Estimation
1. The Generating Process
Before we estimate, we must model. In this project, we assume our data follows a Gamma Distribution.
In the real world, Gamma models waiting times—the time until the next packet arrives at a server, or the time until a component fails.
It has two levers: Alpha (Shape) and Lambda (Rate). Try adjusting them to see how they warp the probability landscape.
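To make the levers concrete, here is a minimal Python sketch (not the page's actual widget code) that evaluates the Gamma density for a few $(\alpha, \lambda)$ pairs. Note that SciPy parameterizes by shape and scale, so we pass scale = 1/λ for a rate parameterization:

```python
# A minimal sketch of how alpha (shape) and lambda (rate) reshape the Gamma
# density. SciPy's gamma uses shape `a` and `scale`, where scale = 1 / rate.
import numpy as np
from scipy import stats

x = np.linspace(0.01, 10, 500)
for alpha, lam in [(1, 1), (2, 1), (5, 1), (2, 2)]:  # illustrative pairs
    pdf = stats.gamma.pdf(x, a=alpha, scale=1 / lam)
    peak = x[np.argmax(pdf)]
    print(f"alpha={alpha}, lambda={lam}: density peaks near x = {peak:.2f}")
```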
2. The Law of Large Numbers
A good estimator must be Consistent. This means that as we collect more data (n → ∞), our estimate converges in probability to the true parameter.
Here, we simulate the MLE estimator for Lambda derived from the log-likelihood function. Treating the shape $\alpha$ as known, maximizing $\ell(\lambda) = n\alpha \log \lambda - \lambda \sum_i x_i$ gives the estimator $\hat{\lambda} = \alpha / \bar{x}$.
Watch the blue line. At low n, it is chaotic. At high n, it locks onto the truth.
Consistency Visualizer
Watch the estimator converge to the true value as $n \to \infty$.
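A minimal console version of this experiment, assuming the shape $\alpha$ is known and using the MLE $\hat{\lambda} = \alpha/\bar{x}$ derived above (the parameter values here are illustrative):

```python
# Consistency sketch: the MLE of the rate locks onto the truth as n grows.
import numpy as np

rng = np.random.default_rng(42)
alpha, lam_true = 2.0, 1.5                    # illustrative true parameters
for n in [10, 100, 1_000, 10_000, 100_000]:
    sample = rng.gamma(shape=alpha, scale=1 / lam_true, size=n)
    lam_hat = alpha / sample.mean()           # MLE of the rate, shape known
    print(f"n={n:>6}: lambda_hat = {lam_hat:.4f} (truth = {lam_true})")
```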
3. Efficiency & Cramér-Rao
How 'good' is our estimator? We measure this with Mean Squared Error (MSE), which combines variance and bias:

$$\mathrm{MSE}(\hat{\lambda}) = \mathbb{E}\big[(\hat{\lambda} - \lambda)^2\big] = \mathrm{Var}(\hat{\lambda}) + \mathrm{Bias}(\hat{\lambda})^2$$
Since the MLE is asymptotically unbiased, the MSE converges to the variance. The Theoretical Bar below represents the Cramér-Rao Lower Bound—the smallest variance any unbiased estimator can achieve.
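For our setting (shape $\alpha$ known, rate $\lambda$ estimated), the bound follows from the Fisher information of the log-likelihood above; this is a standard derivation, sketched here rather than quoted from the project:

$$I(\lambda) = -\mathbb{E}\left[\frac{\partial^2 \ell}{\partial \lambda^2}\right] = \frac{n\alpha}{\lambda^2} \quad\Longrightarrow\quad \mathrm{Var}(\hat{\lambda}) \ \ge\ \frac{1}{I(\lambda)} = \frac{\lambda^2}{n\alpha}$$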
The MSE Race
Theoretical Limit vs. Simulation
4. Interval Estimation
A single number (Point Estimate) is never enough; we also need a range of plausible values. In this project, we use the Pivot Method to construct Confidence Intervals.
Notice the 'Funnel Effect' below. As n increases, the interval width collapses. Data buys precision.
Precision vs. Sample Size
As $n$ increases, our uncertainty collapses.
The interval tightens around the true parameter.
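One way such a pivot interval can be computed, assuming the shape $\alpha$ is known and using the standard pivot $2\lambda \sum_i X_i \sim \chi^2_{2n\alpha}$ (the project's exact pivot may differ):

```python
# Pivot-method CI sketch: if X_i ~ Gamma(alpha, rate=lambda), then
# 2 * lambda * sum(X) follows a chi-square law with 2*n*alpha degrees
# of freedom, which we can invert to bracket lambda.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, lam_true, level = 2.0, 1.5, 0.95       # illustrative settings
for n in [10, 100, 1_000]:
    total = rng.gamma(shape=alpha, scale=1 / lam_true, size=n).sum()
    df = 2 * n * alpha                        # chi-square degrees of freedom
    lo = stats.chi2.ppf((1 - level) / 2, df) / (2 * total)
    hi = stats.chi2.ppf((1 + level) / 2, df) / (2 * total)
    print(f"n={n:>5}: 95% CI = [{lo:.3f}, {hi:.3f}], width = {hi - lo:.3f}")
```

Watch the printed widths: each tenfold increase in n shrinks the interval by roughly a factor of $\sqrt{10}$, which is the Funnel Effect in numbers.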
5. The Simulation Engine
This is the raw simulation code (translated from R to Python) that powers the insights above.
Execute the kernel below to verify the theoretical MSE against the simulated results in real time.
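A minimal, self-contained version of such an engine might look like the following: it estimates $\lambda$ by MLE across many replicates and compares the simulated MSE with the CRLB $\lambda^2/(n\alpha)$ (assuming $\alpha$ known; all constants are illustrative, not the project's actual settings):

```python
# Simulation engine sketch: for each sample size n, run many replicates,
# compute the MLE of the rate in each, and compare the empirical MSE
# against the Cramer-Rao lower bound lambda^2 / (n * alpha).
import numpy as np

rng = np.random.default_rng(7)
alpha, lam_true, reps = 2.0, 1.5, 5_000       # illustrative settings

for n in [20, 50, 200, 1_000]:
    samples = rng.gamma(shape=alpha, scale=1 / lam_true, size=(reps, n))
    lam_hats = alpha / samples.mean(axis=1)   # MLE per replicate
    mse = np.mean((lam_hats - lam_true) ** 2) # simulated MSE
    crlb = lam_true**2 / (n * alpha)          # theoretical lower bound
    print(f"n={n:>5}: simulated MSE = {mse:.5f}, CRLB = {crlb:.5f}")
```

At small n the simulated MSE sits visibly above the bound (the MLE carries finite-sample bias); as n grows, the two columns converge, which is exactly the efficiency story from Section 3.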