John Carmack tweeted,

I can send an IP packet to Europe faster than I can send a pixel to the screen. How f’d up is that?

And if this weren’t John Carmack, I’d file it under “the interwebs being silly”.

But this is John Carmack.

How can this be true?

To avoid discussions about what exactly is meant in the tweet, this is what I would like to get answered:

How long does it take, in the best case, to get a single IP packet sent from a server in the US to somewhere in Europe, measured from the time software triggers the packet to the point where it's received by software above the driver level?

How long does it take, in the best case, for a pixel to be displayed on the screen, measured from the point where software above the driver level changes that pixel's value?

Even assuming that the transatlantic connection is the finest fibre optics cable that money can buy, and that John is sitting right next to his ISP, the data still has to be encoded in an IP packet, travel from main memory across to his network card, go from there through a cable in the wall into another building, probably hop across a few servers (but let's assume it just needs a single relay), be photonized across the ocean, be converted back into an electrical impulse by a photosensor, and finally be interpreted by another network card. Let's stop there.

As for the pixel, this is a simple machine word that gets sent across the PCI Express bus, written into a buffer, which is then flushed to the screen. Even accounting for the fact that "single pixels" probably result in the whole screen buffer being transmitted to the display, I don't see how this can be slower: it's not like the bits are transferred "one by one" – rather, they are consecutive electrical impulses which are transferred without latency between them (right?).
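For reference, the physical floor for the network leg can be estimated from the speed of light in fiber. This is only a back-of-envelope sketch; the ~6,000 km route length and the 0.7c fiber factor are rough assumptions, not measurements:

```python
# Lower bound on one-way transatlantic propagation delay.
# Both figures below are rough assumptions.
SPEED_OF_LIGHT_KM_S = 300_000   # in vacuum
FIBER_FACTOR = 0.7              # light in glass travels at roughly 0.7c
ROUTE_KM = 6_000                # approximate Boston-to-London cable route

one_way_ms = ROUTE_KM / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000
print(f"one-way propagation floor: {one_way_ms:.0f} ms")  # ≈ 29 ms
```

No switching, queuing, or protocol overhead is included, so a real packet can only be slower than this.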

Konrad Rudolph
  • Either he's crazy or this is an unusual situation. Due to the speed of light in fiber, you cannot get data from the US to Europe in less than about 60 milliseconds one way. Your video card puts out an entire new screen of pixels every 17 milliseconds or so. Even with double buffering, you can still beat the packet by quite a bit. – David Schwartz May 01 '12 at 09:38
  • @DavidSchwartz: You're thinking of the GPU in isolation. Yes, the GPU can do a whole lot of work in less than 60ms. But John is complaining about the entire chain, which involves the monitor. Do you know how much latency is involved from when the image data is transmitted to the monitor until it is shown on the screen? The 17ms figure is meaningless and irrelevant. Yes, the GPU prepares a new image every 17 ms, and yes, the screen displays a new image every 17 ms. But that says nothing about how long the image has been en route before it was displayed – jalf May 01 '12 at 09:59
  • @user1203: That's why I said, "even with double buffering". – David Schwartz May 01 '12 at 10:30
  • He's a game programmer, and he said *faster than **I** can send a pixel to the screen*... so perhaps account for 3D graphics rendering delay? Though that should be quite low in most video games; they optimise for performance, not quality. And of course, there's the *very* high chance he's just exaggerating (there, I stated the obvious, happy?). – Bob May 01 '12 at 10:51
  • Go to Best Buy some time and watch all the TV sets, where they have them all tuned to the same in-house channel. Even apparently identical sets will have a noticeable (perhaps quarter-second) lag relative to each other. But beyond that there's having to implement the whole "draw" cycle inside the UI (which may involve re-rendering several "layers" of the image). And, of course, if 3-D rendering or some such is required that adds significant delay. – Daniel R Hicks May 01 '12 at 11:43
  • There is a lot of room for speculation in this question. I don't think there is a perfect answer unless you know what J. Carmack was really talking about. Maybe his tweet was just some offhand comment on a situation he encountered. – Baarn May 01 '12 at 12:09
  • @Walter True. I asked the question because a lot of people retweeted it, suggesting some deep insight. Or not. I’d still be interested in a calculation comparing the two raw operations. As such, I don’t think the question is “not constructive”, as at least two people seem to think. – Konrad Rudolph May 01 '12 at 12:24
  • I think this question is very interesting, too. If an answer adding up all possible delays in modern hardware is acceptable for you, I don't see a problem. – Baarn May 01 '12 at 13:22
  • @slhck So far there’s only *one* answer, which isn’t speculating at all. But I’ll edit the question to make it clearer. **EDIT** Updated. Please consider all other discussions about the meaning of the tweet as off-topic. – Konrad Rudolph May 01 '12 at 14:01
  • Reminds me of the discussion on neutrinos being faster than light. http://news.sciencemag.org/scienceinsider/2012/02/breaking-news-error-undoes-faster.html No potential measurement errors anywhere? –  May 01 '12 at 15:41
  • Of course. But reading John’s answer the measuring is pretty straightforward. There are plenty of opportunities for errors to creep in, but not so much in his measurements … – Konrad Rudolph May 01 '12 at 15:44
  • @DavidSchwartz double buffering still causes buffer dead-locks. You can only eliminate the deadlock using a triple buffer... – Breakthrough May 01 '12 at 16:22
  • @DavidSchwartz - distance Boston to London ~5,000 km; add ~1,000 km for a non-direct route to a server on the other side; you get 6,000 km / (300,000 km/s) = 20 ms one-way travel time at the speed of light, as roughly the lower limit. – dr jimbob May 01 '12 at 17:06
  • Note that a ping, an ICMP Echo Request, may be handled by software at the driver level or immediately above it at the bottom of the networking stack. – Tommy McGuire May 01 '12 at 19:40
  • The point is not that it was a very fast packet, but a very slow pixel. – Crashworks May 01 '12 at 23:53
  • @drjimbob the speed of light in fiber is a bit slower than in vacuum, it's just ~ 200 000 km/s. So the rough lower limit is ~60ms for a two-way trip. – kolinko May 02 '12 at 08:34
  • @Merlin - Completely agree; which is why I presented it as a lower limit (and was doing one-way trip). Note that while optical fiber/coax-cable/ethernet cable is ~0.7 c (200 000 km/s), there are a couple of ways you could send an IP packet one-way significantly faster -- say transmission by satellite/radio (~.99c) or a ladder-line (~0.95c). – dr jimbob May 02 '12 at 13:41
  • Couldn't the ping actually be served by a cache from the ISP? Isn't a traceroute pretty much the only way to tell if it's actually making it across the ocean? – Michael Frederick May 02 '12 at 20:31
  • @Neutrino http://slatest.slate.com/posts/2012/02/23/cern_neutrinos_two_errors_to_blame_for_faster_than_light_results.html – rickyduck May 03 '12 at 14:23
  • @rickyduck You should have read the article linked by Neutrino. He’s saying the same as you. – Konrad Rudolph May 03 '12 at 16:06
  • @drjimbob, transmission by satellite is even slower since the signal has much further to go. Typical satellite ping times are more like 200-300 ms. – psusi May 03 '12 at 18:07
  • @MichaelFrederick, no, there is no such thing as caching for pings. Traceroute uses the same underlying packet, it just sets a short TTL and increases it by one until it gets the echo from the destination. – psusi May 03 '12 at 18:08
  • @psusi - Yes; but that's because most satellites you would use in practice would be in a geosynchronous orbit (orbital period = earth rotation period), so they are always visible to you at the same location in the sky (~36,000 km above the earth's surface, plus farther as it's not necessarily directly above you). Granted, if you had a relay satellite in a [low-earth orbit at ~600 km](http://en.wikipedia.org/wiki/Geocentric_orbit#Earth_orbits) above the Earth's surface, which orbits the earth every ~100 minutes, visible to antennas following it in Boston/London, you could send a one-way IP packet in ~20 ms. – dr jimbob May 03 '12 at 18:40
  • @psusi - By my calculations, as long as the satellite is halfway between Boston/London; the earth is a perfect sphere; and the satellite is at a height d >= (sec θ - 1)·R = 521 km (where R is the radius of the Earth ~6400 km and θ ~ 2500km/6400km ~ 0.4 rads is the angle between Boston/satellite, also the same between satellite/London), then the satellite can be seen by both, with a lower limit on total travel distance of 2·sqrt((R+d)^2 - R^2) = 5270 km and a one-way travel time of ~18 ms. I use c to state the lower limit, as methods faster than 0.7c are feasible - though not in practice. – dr jimbob May 03 '12 at 18:47
  • Today people are actually learning that electronics underlie what makes programming work. Programming is accessible to everyone, but designing something like an entire computer is not, and it has big repercussions in terms of cost and manufacturability. Graphics chips are very different from other chips, and data still has to go through the screen hardware. Technology and physics are not as simple as programming, and they cost money. Deal with it, people. But it'd still be quite cool if Carmack could change things like he did for graphics cards! – jokoon May 03 '12 at 19:40
  • @KonradRudolph I was just adding to the conversation, my article claimed that it was two errors, it was more of a reference than a reply – rickyduck May 04 '12 at 08:12
  • Transatlantic cables, see the CANTAT-3 cable in http://en.wikipedia.org/wiki/Transatlantic_communications_cable. Time for light from Nova Scotia to Iceland (part of Europe) in fiber is 16.7 ms, see http://www.wolframalpha.com/input/?i=distance+halifax%2C+canada+iceland – FredrikD Sep 13 '12 at 09:35
  • Apparently you can do a transatlantic ping faster, but that also means you wouldn't see it on the screen ;) – Stormenet Oct 12 '12 at 06:32
  • This complaint is spurious. It's not a problem, and furthermore it makes complete sense. Because (unless the person plugging the desktop monitor into the VGA / HDMI / DVI port has very specialized requirements and is also an idiot) that "screen" he's talking about is meant to be processed by the human visual system. Which processes frames at ~30 fps. Network packets are used, among other things, to sync clocks. Human eyes aren't getting any better, nor is our optical cortex getting any faster, so why should our screens update more often? Is he trying to embed subliminal messages in his games? – Parthian Shot Jul 13 '14 at 05:20
  • So I suppose my parenthetical answer to your question "How can this be true?" is "There is no logical reason for people to pour resources into one over the other". At the moment, output frame rates on normal display devices are far faster than the human eye can detect. They're better than they need to be already. Networking, however, allows for distributed processing; it is what drives supercomputers. It still needs work. – Parthian Shot Jul 13 '14 at 05:27
  • @Parthian There’s nothing “spurious” here, because your reasoning contains two errors. The first error is that even with high latency you can presumably develop protocols to update clocks. In fact, when I ping a site in the US, the latency is three times too high for 30 FPS (~100 ms). Second of all, your fancy reasoning simply ignores hard constraints placed by physics: due to the speed of light, the *minimum* ping we can hope to attain is 32 ms, which is the same as the human eye’s refresh rate, and this ignores lots of fancy signal processing on the way. – Konrad Rudolph Jul 13 '14 at 10:06
  • @Parthian To make the signal processing point more salient: read John’s answer about the latencies inherent in display hardware, and then his statement that “[t]he bad performance on the Sony is due to poor software engineering”. On the network side, the signal needs to cross (at the least) through the network card, the router, a server this side of the atlantic, and all this twice. And you are saying that all this can be done *trivially* (because, hey, my question is spurious) in <1 ms, whereas the video system has higher latencies than this for several of its steps (see John’s answer again). – Konrad Rudolph Jul 13 '14 at 10:12
  • @KonradRudolph "even with high latency you can presumably develop protocols to update clocks" I didn't say "with high latency", and there is such a protocol. It's called NTP, and it's used pretty much everywhere. "when I ping a site in the US, the latency is three times too high for 30 FPS" You're making my point; namely, that network speed needs to improve, but display technology doesn't. So OF COURSE more research needs to go into networks. – Parthian Shot Jul 14 '14 at 15:30
  • @KonradRudolph "your fancy reasoning simply ignores hard constraints placed by physics" I'm a computer engineer. So, yes, I've taken some special relativity. That's kind of orthogonal to my point. "you are saying that all this can be done trivially" I'm not. What I'm saying is that people have put way more effort into making it faster because it needs to be faster, but no one puts effort into display technology because it doesn't. Hence, one is much faster; not because it's easier, but because people have worked way harder on it. – Parthian Shot Jul 14 '14 at 15:32
  • @ParthianShot I *know* that there is such a protocol. From your comment it appeared as if you didn’t. – To your overall point: you claim that my question is moot because of reasons, but I’ve shown that these reasons are simply not a sufficient argument, and partially false. And when you say “you’re making my point” – no, I’ve contradicted it. To make it blindingly obvious: the *best* ping we can hope for *under ideal conditions* is just barely on par with adequate (not great) display speed, so there’s no reason to assume it should be faster. – Konrad Rudolph Jul 14 '14 at 15:40
  • @KonradRudolph "the best ping we can hope for under ideal conditions is just barely on par with adequate (not great) display speed" ...Okay, I think you don't get the point I'm trying to make, because I agree with that. "so there’s no reason to assume it should be faster" And I agree with that. What I'm saying is, while there's no physical reason display devices would need to be slow, there's no financial reason for them to be fast. Physically, there's no reason there can't be a nine-ton pile of mashed potatoes in the middle of Idaho. And that would be way easier than going to the moon. – Parthian Shot Jul 14 '14 at 18:00
  • @KonradRudolph But we've been to the moon, and there isn't an enormous pile of mashed potatoes at the center of Idaho, because no one cares enough to build or pay for such a pile. In the same way that no one cares enough to make affordable and widespread display technology that updates more than adequately. Because adequate is... adequate. – Parthian Shot Jul 14 '14 at 18:01
  • My ping time to Google is 10 ms and my screen is 60 Hz (16 ms frame time). Just normal ADSL internet and Wireless-N – Suici Doga Sep 10 '16 at 03:10
  • You are all drowning in a glass of water! There are many factors involved that constantly create random latency. Think about it. – Joe R. May 22 '17 at 00:32
  • @FrankR. I think we’re *all* very well aware of that. The question is simply what the upper bound on these latencies is; and they can be quantified, and meaningfully compared, as the answers show. – Konrad Rudolph May 22 '17 at 12:05

3 Answers


The time to send a packet to a remote host is half the time reported by ping, which measures a round trip time.

The display I was measuring was a Sony HMZ-T1 head mounted display connected to a PC.

To measure display latency, I have a small program that sits in a spin loop polling a game controller, doing a clear to a different color and swapping buffers whenever a button is pressed. I video record showing both the game controller and the screen with a 240 fps camera, then count the number of frames between the button being pressed and the screen starting to show a change.

The game controller updates at 250 Hz, but there is no direct way to measure the latency on the input path (I wish I could still wire things to a parallel port and use in/out instructions). As a control experiment, I do the same test on an old CRT display with a 170 Hz vertical retrace. Aero and multiple monitors can introduce extra latency, but under optimal conditions you will usually see a color change starting at some point on the screen (vsync disabled) two 240 Hz frames after the button goes down. It seems there is 8 ms or so of latency going through the USB HID processing, but I would like to nail this down better in the future.

It is not uncommon to see desktop LCD monitors take 10+ 240 Hz frames to show a change on the screen. The Sony HMZ averaged around 18 frames, or 70+ total milliseconds.

This was in a multimonitor setup, so a couple frames are the driver's fault.
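Converting those camera frame counts into wall-clock time is simple arithmetic; here is a quick sketch using the figures from the answer (not part of the original measurement code):

```python
# Each frame of the 240 fps camera corresponds to 1/240 s of latency.
CAMERA_FPS = 240
frame_ms = 1000 / CAMERA_FPS      # ≈ 4.2 ms per camera frame

best_case_ms = 2 * frame_ms       # CRT, vsync disabled: 2 frames
sony_hmz_ms = 18 * frame_ms       # Sony HMZ average: ~18 frames
print(f"best case ≈ {best_case_ms:.1f} ms, Sony HMZ ≈ {sony_hmz_ms:.0f} ms")
```

That is where the "70+ total milliseconds" figure comes from: 18 frames at 240 Hz is 75 ms.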

Some latency is intrinsic to a technology. LCD panels take 4-20 milliseconds to actually change, depending on the technology. Single chip LCoS displays must buffer one video frame to convert from packed pixels to sequential color planes. Laser raster displays need some amount of buffering to convert from raster return to back and forth scanning patterns. A frame-sequential or top-bottom split stereo 3D display can't update mid frame half the time.

OLED displays should be among the very best, as demonstrated by an eMagin Z800, which is comparable to a 60 Hz CRT in latency, better than any other non-CRT I tested.

The bad performance on the Sony is due to poor software engineering. Some TV features, like motion interpolation, require buffering at least one frame, and may benefit from more. Other features, like floating menus, format conversions, content protection, and so on, could be implemented in a streaming manner, but the easy way out is to just buffer between each subsystem, which can pile up to a half dozen frames in some systems.
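The way per-subsystem buffering piles up can be sketched as follows. The subsystem names and frame counts here are illustrative assumptions, not figures from the answer; the point is only that each buffered frame costs a full refresh interval:

```python
# At 60 Hz, every frame buffered between subsystems adds one refresh interval.
REFRESH_MS = 1000 / 60                 # one 60 Hz video frame ≈ 16.7 ms
buffered_frames = {                    # illustrative pipeline, not from the answer
    "motion interpolation": 1,         # needs at least one whole frame
    "scaler / format conversion": 1,
    "on-screen menu compositing": 1,
    "panel driver": 1,
}
total_frames = sum(buffered_frames.values())
print(f"{total_frames} buffered frames ≈ {total_frames * REFRESH_MS:.0f} ms added")
```

With half a dozen such buffers, as mentioned above, the total reaches roughly 100 ms before the panel even starts to change.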

This is very unfortunate, but it is all fixable, and I hope to lean on display manufacturers more about latency in the future.

John Carmack
  • I'd like to not have to lock this answer for excessive off-topic comments. We're all thrilled that John provided this answer, but we don't need 25 comments all expressing their gratitude, disbelief, or excitement. Thank you. – nhinkle May 02 '12 at 08:48
  • Your USB trigger is probably running as a Low speed USB device (bus frames at 125usec) causing a minimal 8ms delay (hardware issue). Maybe try a PS2 keyboard instead ? – Boris May 02 '12 at 09:10
  • It would help if the timing you got for the monitor was more clearly expressed. Had to hunt a bit to find 70ms in your (otherwise well written) answer. :) – Macke May 03 '12 at 06:12
  • @Marcus Lindblom by hunt for, you mean read? I think in this case, *how* he got to his number is just as important as the number - the skepticism regarding the tweet is not going to be addressed by citing another number. Also the context helps - he was most directly annoyed by this specific monitor with its sub-optimal software. – Jeremy May 03 '12 at 11:54
  • It sounds like you are saying that when LCD makers claim say, a 5ms response time, that may be the time it takes the raw panel to change, but the monitor adds quite a bit more time buffering and processing the signal before it actually drives the LCD. Doesn't that mean the manufacturers are publishing false/misleading specs? – psusi May 03 '12 at 18:19
  • @psusi http://doubledeej.blogspot.com/2009/07/lies-damn-lies-and-hdtv-spec-numbers.html http://www.zdnet.com/blog/ou/how-lcd-makers-lie-to-you-about-viewing-angles/930 http://gizmodo.com/5669331/why-most-hardware-specs-are-total-bullshit http://www.maximumpc.com/article/features/display_myths_shattered – Dan Is Fiddling By Firelight May 04 '12 at 12:48
  • Hopefully in the future, direct-view LED displays will be readily available. Sony has announced one that will be coming out within the next year or two, and I actually had an opportunity to look at one and talk to one of the engineers behind it. I specifically asked about latency, and he said it was on the order of nanoseconds. Plus a 60" screen was razor-thin, lightweight, and took something like 20 watts to operate, so I mean, how is this NOT a winning technology? – fluffy May 14 '12 at 15:18
  • Here's how I measure display latency: Most chipsets provide some GPIO pins, which you can toggle with an `outp` instruction (your program must run very privileged of course for this to work). Then clone the screen on a digital and an analogue connection. The display goes to digital. Put a photodiode on the display and hook up the analogue video and the photodiode to an oscilloscope, and the scope's external trigger to the GPIO. Now you can use the GPIO for triggering and accurately measure the time it takes for the signal to appear on the line and the display. – datenwolf May 22 '12 at 09:56

Some monitors can have significant input lag

Accounting for an awesome internet connection compared to a crappy monitor and video card combo, it's possible


Console Gaming: The Lag Factor • Page 2

So, at 30FPS we get baseline performance of eight frames/133ms, but in the second clip where the game has dropped to 24FPS, there is a clear 12 frames/200ms delay between me pulling the trigger, and Niko beginning the shotgun firing animation. That's 200ms plus the additional delay from your screen. Ouch.

A display can add another 5–10 ms

So, a console can have up to 210 ms of lag

And, as per David's comment, the best case should be about 70 ms for sending a packet
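Summing the figures quoted above (all approximate), the comparison in this answer comes down to:

```python
# Lag figures as quoted in this answer (all approximate).
render_ms = 200    # trigger-to-animation delay in the second clip (12 frames)
display_ms = 10    # upper end of the display's additional lag
packet_ms = 70     # rough best case for sending a packet, per David's comment

total_lag_ms = render_ms + display_ms
print(total_lag_ms)                 # 210 ms of button-to-screen lag
print(total_lag_ms > packet_ms)     # True: the packet wins comfortably
```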

  • -1 I don't think that John Carmack uses a crappy monitor or video card. Please reference your claim with credible sources. – Baarn May 01 '12 at 10:41
  • @WalterMaier-Murdnelch added source. It's a console, but I imagine a PC would have similar lag – Akash May 01 '12 at 10:48
  • Sorry but I still don’t see this really answering the question. The quote tells about “pulling the trigger” and this implies much more work, as in input processing, scene rendering etc., than just sending a pixel to the screen. Also, human reaction speed is relatively lousy compared to modern hardware performance. The time between the guy *thinking* he pulled the trigger, and actually pulling it, could well be the bottleneck. – Konrad Rudolph May 01 '12 at 10:57
  • The linked article shows that the author of this analysis purchased a special device that can show you exactly when the button was pressed, so I don't think they're just winging the numbers. – Melikoth May 01 '12 at 13:40
  • @KonradRudolph: Perception is pretty weird stuff. I read an article a while ago about an experimental controller that read impulses directly off the spinal cord. People would feel that the computer was acting before they had clicked, even though it was their own nerve command to click it was reacting to. – Zan Lynx May 01 '12 at 16:48
  • @Zan Lynx: This is a known effect. Google for "Benjamin Libet's Half Second Delay". Human consciousness requires significant processing time. Everything that you think is happening now actually happened in the past. All your senses are giving you an "integrated multi-media experience" of an event from half a second ago. Furthermore, events appear to be "time stamped" by the brain. A direct brain stimulation has to be delayed relative to a tactile stimulation in order for the subject to report the sensations as simultaneous! – Kaz May 01 '12 at 21:24

It is very simple to demonstrate input lag on monitors: just stick an LCD next to a CRT, show a clock or an animation filling the screen, and record it. One can be a second or more behind. It is something that LCD manufacturers have tightened up on since gamers and others have noticed it more.

E.g. YouTube video: Input Lag Test Vizio VL420M
