> field manual · why your dashboard lies

The Speed Test Lie: Why Your 300 Mbps Connection Still Feels Slow

Ookla says 312 Mbps down. Fast.com agrees. Then your video call freezes, a game server kicks you for lag, and a blog post takes six seconds to render. The number was real. The number was also useless. Here's the gap between what one-shot speed tests measure and what your connection actually does when you live on it.

PUBLISHED 2026-04-19 · ~8 MIN READ · FIELD NOTES FROM Y2KDASH

1. A speed test is not a latency test

The public conception of "internet speed" is a single number in megabits per second. Marketing departments love this because it's a bigger-is-better benchmark you can slap on a billboard. Every major speed test you've used — Ookla, Fast.com, speed.cloudflare.com — rounds your entire network experience down to that number and one or two sidekicks (ping, jitter) that most users don't look at.

But throughput is the easy part. A 40 GB game download will happily eat 300 Mbps for twenty minutes. That's one scenario. Meanwhile, every interactive thing you do all day — typing into a web form, talking on Zoom, clicking a link, opening a Slack message, joining a deathmatch — depends on round-trip latency, and it depends on how that latency behaves when the pipe is actually being used.

Ookla measures latency for about one second, at idle, before it starts the download. Then it ignores latency for the rest of the test. So if your link is a Ferrari that pulls into a parking lot every time a packet has to squeeze past someone else's download, the final score still says Ferrari.

A one-shot speed test is a dump truck clocking 60 mph on an empty road. Your actual experience is a bike messenger trying to pass the dump truck.

2. What Ookla / Fast.com / Cloudflare actually do

To understand the blind spot, look at the shape of the test. Every major speed test follows the same three-step structure, and the limits of that structure are the limits of what it can tell you.

  1. A quick flight of pings — a handful of ICMP or HTTP probes to the nearest server, reported as "ping" or "idle latency." Takes under a second.
  2. A saturating download — one or several concurrent streams pulling bytes as fast as they'll come. Usually 10–15 seconds, sometimes less. The peak throughput during that window is reported as "download speed."
  3. A saturating upload — same idea in reverse. Usually shorter than the download.

That's the entire test. Notice what's missing: any probe for how the connection behaves between those phases, any measurement of what happens to latency during the download, any coverage of the long tail of drops and spikes that dominate real experience.

Also missing: time. A ten-second dump is over before a lot of the interesting pathology even starts. Your router's queueing behavior, your ISP's traffic shaping, your Wi-Fi's retry loops, the upstream carrier's congestion — those take longer than ten seconds to bite. A speed test that ends before the problem starts can't catch the problem.
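The three-phase shape above is easiest to see in the report it produces. A minimal sketch (the function name and field names are illustrative, not any vendor's API): note which field is missing.

```python
# Sketch of what a one-shot test reports from its phases:
# idle pings taken before the transfer, and per-second throughput
# samples taken during the saturating download.

def one_shot_report(idle_pings_ms, download_mbps_per_sec):
    """Reduce a test run to the headline numbers a dashboard shows."""
    return {
        "ping_ms": min(idle_pings_ms),                 # best idle probe
        "download_mbps": max(download_mbps_per_sec),   # peak in the window
        # no "loaded_latency_ms" field: latency during the
        # transfer is simply never sampled
    }

report = one_shot_report([18, 21, 19], [120, 260, 305, 312, 308])
# report == {"ping_ms": 18, "download_mbps": 312}
```

The idle pings and the throughput samples never overlap in time, so the number that predicts real-world feel has nowhere to appear.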

3. Bufferbloat: the hidden variable

The biggest thing one-shot tests don't see is bufferbloat, and bufferbloat is usually the reason "fast" connections feel slow.

When data arrives at your router or modem faster than it can send it out, the excess piles up in a buffer. Buffers are necessary — they smooth out bursts — but device manufacturers tend to install enormous buffers, because "never drop a packet" sounds great in a marketing test. The problem is that when the buffer is full of download traffic, every other packet has to wait in line behind it. Your click, your keystroke, your voice packet — all stuck behind a queue that might be hundreds of milliseconds long.

You see this as: the page half-loads instantly, then stalls for a beat. The Zoom call garbles for two seconds every time someone uploads a photo. Your game ping doubles the moment a family member starts a Netflix stream. None of these show up on a speed test, because the test runs in its own protected window.

To catch bufferbloat, you measure latency during a saturated download. That number — called loaded latency — is the single most predictive metric for real-world feel. The delta between your idle latency and your loaded latency is the bufferbloat grade.

bufferbloat delta   grade   what it feels like
< 30 ms             A       calls, games, everything stays smooth under load
30–100 ms           B / C   noticeable lag on calls when uploads happen
100–300 ms          D       calls garble, games rubber-band, pages stall
> 300 ms            F       the link is unusable for anything interactive while saturated
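The grading reduces to a small function. Thresholds follow the table above; the function name and the combined "B/C" band are illustrative:

```python
# Map an idle→loaded latency delta (ms) to the bufferbloat grade
# from the table above.

def bufferbloat_grade(delta_ms: float) -> str:
    """Grade the extra latency a saturated link adds over idle."""
    if delta_ms < 30:
        return "A"      # smooth under load
    if delta_ms < 100:
        return "B/C"    # noticeable lag on calls during uploads
    if delta_ms < 300:
        return "D"      # garbled calls, rubber-banding games
    return "F"          # unusable for interactive traffic while saturated

# idle 22 ms, loaded 180 ms → delta 158 ms → grade "D"
print(bufferbloat_grade(180 - 22))
```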

4. Why five seconds isn't enough

There's another reason short tests lie: TCP slow-start. TCP doesn't just fire at full speed the instant a connection opens. It starts small, doubles its congestion window every round-trip, and keeps doubling until it sees packet loss. Only then does it know the ceiling.

On a fast link with even modest latency, slow-start can take three or four seconds to reach the real capacity. A test that ends at 10 seconds spends a third of its time ramping up, which drags the reported number below the actual peak. A test that ends at 2 seconds — and some free speed-test widgets are this short — sometimes doesn't reach the ceiling at all.

The fix is simple: measure for at least five seconds of saturated transfer, and ideally average across a rolling window so the ramp is diluted. That's what a continuous monitor does. That's what a one-shot "tap the button once, show a big number" UI can't do.
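A toy model makes the ramp's cost concrete. The parameters are assumptions, not measurements: the rate starts at 1 Mbps and doubles once per RTT until it hits the ceiling. Real TCP is messier, and on real links the ramp can take the several seconds described above, but the shape of the error is the same:

```python
# Toy slow-start model: average reported throughput over a test of a
# given length, on a link whose rate doubles each RTT up to a ceiling.
# Parameters (1 Mbps start, 50 ms RTT, 300 Mbps ceiling) are assumed
# for illustration only.

def mean_throughput(test_seconds: float, rtt_s: float = 0.05,
                    ceiling_mbps: float = 300.0,
                    start_mbps: float = 1.0) -> float:
    t, total, rate = 0.0, 0.0, start_mbps
    while t < test_seconds:
        total += rate * rtt_s               # megabits moved this RTT
        rate = min(ceiling_mbps, rate * 2)  # slow-start doubling, capped
        t += rtt_s
    return total / test_seconds             # average Mbps over the test

short = mean_throughput(2.0)   # ramp dominates a short test
long_ = mean_throughput(10.0)  # ramp is diluted over a longer one
# short < long_ < 300: both under-report, the short test far more
```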

5. The metrics a real test measures

If throughput is just one dimension, what are the others? Here's the honest list of what affects how a connection feels, ranked roughly by how much it matters for interactive use:

  1. Loaded latency: round-trip time while the link is busy. The single most predictive number for real-world feel.
  2. Jitter: the variation in latency from sample to sample. Calls and games degrade when it climbs, even if the average stays low.
  3. Packet loss: drops force retransmits and stalls; even a fraction of a percent is felt on interactive traffic.
  4. Idle latency: the baseline round trip to the nearest server. The "ping" number tests already give you.
  5. Sustained throughput: download and upload, the two numbers a speed test leads with. They matter; they're just last.

6. Why one-shot dashboards can't catch it

The structural problem with a test you have to trigger is that you only run it when you're paying attention. You run it because something feels off — the video froze, the game hitched — and by the time you click the button, whatever caused it is probably gone. You measure the recovery, not the event.

The other half is that an active test tells the network, "I am about to pour traffic through you, please behave." The router empties its queues. Other devices back off. Your ISP's QoS kicks in. The number is flattering because the test is conspicuous.

A continuous monitor measures while nothing else is going on, while you're browsing, while you're uploading, while someone else in the house is streaming. It catches the thing that caused the hitch fifteen minutes ago, because it was already sampling when the hitch happened.

The difference between a benchmark and a monitor is the difference between a dentist's x-ray and a pulse oximeter. Both are useful. One of them tells you what's happening right now.

7. How to read an ambient monitor

Y2KDASH is a continuous monitor. It samples once per minute in the background, hits Cloudflare's public speed endpoints, and plots everything on a rolling window. You leave it open on a second monitor and look at it the way you look at a weather map — for patterns, not spot readings.
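A minimal sketch of that kind of sampling loop, assuming a `probe()` callable you supply that returns `(latency_ms, mbps)`. Y2KDASH's actual internals aren't shown here, so every name below is illustrative:

```python
# Continuous-monitor sketch: sample once per interval, keep a rolling
# window, derive jitter (stdev of latency) from the window.
import time
from collections import deque
from statistics import pstdev

def monitor(probe, samples=60, interval_s=60.0, sleep=time.sleep):
    """Yield one reading per sample; `probe` returns (latency_ms, mbps)."""
    window = deque(maxlen=samples)      # rolling window, e.g. one hour
    for _ in range(samples):
        latency_ms, mbps = probe()
        window.append((latency_ms, mbps))
        lat = [l for l, _ in window]
        yield {
            "latency_ms": latency_ms,
            "mbps": mbps,
            "jitter_ms": pstdev(lat) if len(lat) > 1 else 0.0,
        }
        sleep(interval_s)
```

The `sleep` parameter is injected so the loop can be driven instantly in tests; a real deployment would plot each yielded reading instead of printing it.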

A few things worth looking for:

  1. Time-of-day patterns: congestion that only appears in the evening will never show up in a one-off afternoon test.
  2. Latency spikes that line up with throughput: the signature of bufferbloat is the latency trace jumping whenever someone uploads or streams.
  3. Packet-loss bursts: short clusters of drops that explain the garbled call from fifteen minutes ago.
  4. The idle-to-loaded delta over a full day: that delta is your bufferbloat grade, and it's worst exactly when the house is busiest.

8. What a good connection actually looks like

A healthy residential broadband link, measured continuously, looks like this:

metric                 good            acceptable   broken
download (sustained)   > 80% of plan   60–80%       < 50%
upload (sustained)     > 80% of plan   60–80%       < 50%
idle latency           < 25 ms         25–60 ms     > 80 ms
loaded latency         < 80 ms         80–180 ms    > 250 ms
jitter (stdev)         < 5 ms          5–20 ms      > 30 ms
packet loss (24h)      < 0.1%          0.1–1%       > 1%

Most residential connections pass download and upload and fail loaded latency. That's the industry's dirty secret. The pipe is fat, the queueing is terrible, and nobody tests for it because the tests people use don't measure it. The fix is almost always router-side — enabling a proper queue management algorithm like CAKE, CoDel, or FQ-CoDel, or replacing the ISP's rental box with something that runs one. Some operators ship with it on by default. Most don't.
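Those thresholds can be applied mechanically. A hedged sketch (the dict layout and function name are mine; the table leaves a gap between its "acceptable" and "broken" bands, e.g. 180–250 ms loaded latency, which this sketch bins as broken for simplicity):

```python
# Classify a measured metric against the latency/loss thresholds in
# the table above. Cutoffs come from the article; everything else is
# illustrative.

THRESHOLDS = {
    # metric: (upper bound for "good", upper bound for "acceptable")
    "idle_latency_ms":   (25.0, 60.0),
    "loaded_latency_ms": (80.0, 180.0),
    "jitter_ms":         (5.0, 20.0),
    "packet_loss_pct":   (0.1, 1.0),
}

def classify(metric: str, value: float) -> str:
    good, acceptable = THRESHOLDS[metric]
    if value < good:
        return "good"
    if value <= acceptable:
        return "acceptable"
    return "broken"          # anything past the acceptable band

print(classify("loaded_latency_ms", 140.0))  # → "acceptable"
```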

9. FAQ

Why is my speed test fast but my internet feels slow?

Speed tests measure a short, saturated download to a nearby server. They do not measure latency under load, jitter, or packet loss over time. Most "fast but slow" connections have a bufferbloat problem: their raw throughput is fine, but their latency balloons whenever the link is busy — which is exactly when you are using it for a video call, a game, or a page load.

What is bufferbloat?

Extra latency caused by oversized packet buffers in routers, modems, or ISP equipment. When a download saturates the pipe, every other packet (a click, a voice frame, a game move) queues behind it. The download finishes fine; everything interactive feels awful until it ends.

How is loaded latency different from ping?

Ping measures round-trip time when the link is idle. Loaded latency measures round-trip time while the link is fully busy with a download or upload. Loaded latency is almost always much higher; the difference is the bufferbloat grade. Healthy connections show a delta under 30 ms; bad ones show 200 ms or more.

Why do short speed tests under-report my real speed?

TCP uses a ramp-up algorithm called slow-start. It begins at a small rate and doubles every round-trip until packet loss appears. On a fast connection, slow-start can take 3–4 seconds to reach your real ceiling. A speed test that ends at 2–3 seconds never hits the real number.

Does Ookla or Fast.com measure bufferbloat?

Neither one measures loaded latency well. Ookla reports ping only before the test begins, so it never sees bufferbloat. Fast.com is similar. Waveform measures loaded latency specifically. Y2KDASH measures it continuously over time.

What is a good loaded-latency number?

Under 80 ms total loaded latency (idle ping + bufferbloat delta combined). Delta alone should be under 30 ms. Above 150 ms total, video calls and games become notably worse. Above 300 ms, they become unusable while the connection is saturated.

How does a continuous speed test differ from a one-shot test?

A one-shot test measures the network in isolation, at the one moment when everything is on its best behavior. A continuous test samples in the background once a minute, recording the connection's behavior while you actually use it. Continuous monitoring is the only reliable way to catch intermittent bufferbloat, packet-loss bursts, or ISP congestion that only appears at certain times of day.

10. The short version

A one-shot speed test is a trust fall. It tells you your link can do X megabits, once, for a few seconds, when conditions are ideal. It does not tell you what your link does when you actually use it.

If your speed test keeps agreeing with your plan but your experience keeps disagreeing with the speed test, the speed test isn't wrong. It's just answering a different question. Measure loaded latency, jitter, and packet loss over time, and the "why does my internet feel slow" mystery usually solves itself inside a day.

> field probe
Run the ambient monitor

Y2KDASH measures everything above continuously and plots it. Leave it open on a second monitor. Come back in an hour and read the ceiling, the floor, and everything in between.

> LAUNCH Y2KDASH →