At a past job, I was in charge of investigations as to why latency had changed in a high frequency trading system.
There was a dedicated team of folks who had built a random forest model to predict what the latency SHOULD be based on features like:
- trading team
- exchange
- order volume
- time of day
- etc
If that system detected a change, unexpected spike, etc., it would fire off an alert, and then it was my job, as part of the trade support desk, to go investigate why: e.g. was it a different trading pattern, did the exchange modify something, etc.
One day, we get an alert for IEX (of Flash Boys fame). I end up on the phone with one of our network engineers and, from IEX, one of their engineers and their sales rep for our company.
We are describing the change in latency and the sales rep drops his voice and says:
"Bro, I've worked at other firms and totally get why you care about latency. Other exchanges also track their internal latencies for just this type of scenario so we can compare and figure out the the issue with the client firm. That being said, given who we are and our 'founding story', we actually don't track out latencies so I have to just go with your numbers."
foobar10000 7 hours ago [-]
You generally want the following in trading:
* mean/med/p99/p999/p9999/max over day, minute, second, 10ms
* software timestamps off the rdtsc counter for interval measurements - am17an says why below (see the sketch after this list)
* all of that not just on a timer - but also for each event - order triggered for send, cancel sent, etc - for ease of correlation to markouts.
* hw timestamps off some sort of port replicator that has under 3ns jitter - and a way to correlate to above.
* network card timestamps for similar - Solarflare cards (AMD now) support start-of-frame to start-of-Ethernet-frame measurements.
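For the software-timestamp piece, a minimal sketch of rdtsc-based interval measurement might look like the following (x86-64 with GCC/Clang intrinsics assumed; the calibration and event names are purely illustrative, not how any particular shop does it):

    // Minimal sketch of rdtsc-based interval timestamps (x86-64, invariant TSC assumed).
    #include <x86intrin.h>   // __rdtsc, __rdtscp
    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <thread>

    static inline uint64_t ts_now() {
        unsigned int aux;
        return __rdtscp(&aux);   // waits for prior instructions before reading the TSC
    }

    // Rough calibration: how many TSC ticks elapse per nanosecond.
    static double ticks_per_ns() {
        auto t0 = std::chrono::steady_clock::now();
        uint64_t c0 = __rdtsc();
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        uint64_t c1 = __rdtsc();
        auto t1 = std::chrono::steady_clock::now();
        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
        return (c1 - c0) / ns;
    }

    int main() {
        double tpn = ticks_per_ns();
        uint64_t a = ts_now();
        // ... event of interest, e.g. "order triggered for send" ...
        uint64_t b = ts_now();
        std::printf("interval: %.1f ns\n", (b - a) / tpn);
    }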
malwrar 4 hours ago [-]
> mean/med/p99/p999/p9999/max over day, minute, second, 10ms
So basically you’re taking 18 measurements and checking if they’re <10ms? Is that your time budget to make a trading decision, or is that also counting the trip to the exchange? How frequently are these measurements taken, and how do HFT folks handle breaches of this quota?
Not in finance, but I operate a high-load, low latency service and I’ve always been curious about how y’all think about latency.
elteto 5 hours ago [-]
How does rdtsc behave in the presence of multiple cores? As in: first time sample is taken on core P, process is pre-empted, then picked up again by core Q, second time sample is taken on core Q. Assume x64 Intel/AMD etc.
khold_stare 4 hours ago [-]
In trading all threads are pinned to a core, so that scenario doesn't happen.
foobar10000 4 hours ago [-]
Yeap!
Also - all modern systems in active use have an invariant TSC. So even for migrating threads, you're OK.
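As a rough sketch of both points (Linux and x86-64 assumed; the core number and error handling are illustrative), pinning a thread and checking the invariant-TSC CPUID bit could look like:

    // Sketch: pin the current thread to one core, and check CPUID.80000007H:EDX[8].
    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE            // for pthread_setaffinity_np
    #endif
    #include <pthread.h>
    #include <sched.h>
    #include <cpuid.h>
    #include <cstdio>

    static bool pin_to_core(int core) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        return pthread_setaffinity_np(pthread_self(), sizeof(set), &set) == 0;
    }

    static bool has_invariant_tsc() {
        unsigned eax, ebx, ecx, edx;
        if (!__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx)) return false;
        return (edx >> 8) & 1;     // invariant TSC bit
    }

    int main() {
        std::printf("pinned to core 2: %d, invariant TSC: %d\n",
                    pin_to_core(2), has_invariant_tsc());
    }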
tombert 16 hours ago [-]
I remember in 2017, I was trying to benchmark some highly concurrent code in F# using the async monad.
I was using timers, and I was getting insanely different times for the same code, going anywhere from 0ms to 20ms without any obvious changes to the environment or anything.
I was banging my head against it for hours, until I realized that async code is weird. Async code isn’t directly “run”, it’s “scheduled” and the calling thread can yield until we get the result. By trying to do microbenchmarks, I wasn’t really testing “my code”, I was testing the .NET scheduler.
It was my first glimpse into seeing why benchmarking is deceptively hard. I think about it all the time whenever I have to write performance tests.
sfn42 6 hours ago [-]
Isn't that part of the point? If the code runs in the scheduler then its performance is relevant. Same with garbage collection, if the garbage collector slows your algorithm down then you usually want to know, you can try to avoid allocations and such to improve performance, and measure it using your benchmarks.
Maybe you don't always want to include this; I can see how it might be challenging to isolate just the code itself. It might be possible to swap out the scheduler, synchronization context, etc. for implementations more suited to that kind of benchmark?
tombert 3 hours ago [-]
Yes, but it was initially leading to some incorrect conclusions on my end, about certain things being “slow”.
For example, because I was trying to use fine-grained timers for everything async, I thought the JSON parsing library we were using was a bottleneck, because I saw some numbers like 30ms to parse a simple thing. I wasn’t measuring total throughput, I was measuring individual items for parts of the flow and incorrectly assumed that that applied to everything.
You just have to be a bit more careful than I was with using timers. Either make sure that your timer isn’t going across any kind of yield points, or only use timers in a more “macro” sense (e.g. measure total throughput). Otherwise you risk misleading numbers and bad conclusions.
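A C++ analogue of the same pitfall (not the original F# code; std::async here is just a stand-in for any async dispatch): timing across the dispatch point measures thread startup and scheduling latency, not the work itself.

    #include <chrono>
    #include <future>
    #include <cstdio>

    static long parse_like_work() {          // stand-in for e.g. JSON parsing
        long s = 0;
        for (int i = 0; i < 1'000'000; ++i) s += i;
        return s;
    }

    int main() {
        using clk = std::chrono::steady_clock;

        // Timed region includes thread creation + scheduling, not just the work.
        auto t0 = clk::now();
        auto fut = std::async(std::launch::async, parse_like_work);
        long r1 = fut.get();
        double across = std::chrono::duration<double, std::micro>(clk::now() - t0).count();

        // Timed region contains only the work, no yield/dispatch point inside it.
        auto t1 = clk::now();
        long r2 = parse_like_work();
        double work = std::chrono::duration<double, std::micro>(clk::now() - t1).count();

        std::printf("across dispatch: %.1f us, work only: %.1f us (results %ld %ld)\n",
                    across, work, r1, r2);
    }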
sfn42 1 hour ago [-]
I would highly recommend using a specialized library like BenchmarkDotNet. It's more relevant for microbenchmarks but can be used for less micro benchmarks as well.
It will do things like force you to build in Release mode to avoid debug overhead, do warmup cycles and other measures to avoid various pitfalls related to how .NET works - JIT, runtime optimization and stuff like that, and it will output nicely formatted statistics at the end. Rolling your own benchmarks with simple timers and stuff can be very unreliable for many reasons.
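For a C++ reader, Google Benchmark plays a similar role; a minimal sketch (the workload and names below are made up, and it links against -lbenchmark) looks like:

    #include <benchmark/benchmark.h>
    #include <string>

    static std::size_t parse_message(const std::string& s) {   // stand-in workload
        std::size_t n = 0;
        for (char c : s) if (c == ',') ++n;
        return n;
    }

    static void BM_ParseMessage(benchmark::State& state) {
        std::string msg(1024, ',');
        for (auto _ : state) {                     // the library picks iteration counts,
            benchmark::DoNotOptimize(parse_message(msg));   // does warmup, reports stats
        }
    }
    BENCHMARK(BM_ParseMessage);
    BENCHMARK_MAIN();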
tombert 1 hours ago [-]
Oh no argument on this at all, though I haven't touched .NET in several years, since I no longer have a job doing F# (though if anyone here is hiring for it please contact me!).
Even still I don't know that a benchmarking tool would be helpful in this particular case, at least at a micro level; I think you'd mostly be benchmarking the scheduler more than your actual code. At a more macro scale, however, like benchmarking the processing of 10,000 items it would probably still be useful.
bob1029 5 hours ago [-]
> Isn't that part of the point? If the code runs in the scheduler then its performance is relevant.
That's the entire point.
Finding out you have tens of milliseconds of slop because of TPL should instantly send you down a warpath to use threads directly, not encourage you to find a way to cheat the benchmarking figures.
Async/await for mostly CPU-bound workloads can be measured in terms of 100-1000x latency overhead. Accepting the harsh reality at face value is the best way to proceed most of the time.
Async/await can work on the producer side of an MPSC queue, but it is pretty awful on the consumer side. There's really no point in yielding every time you finish a batch. Your whole job is to crank through things as fast as possible, usually at the expense of energy efficiency and other factors.
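A sketch of that consumer-side pattern (boost::lockfree::queue used purely for brevity; real systems typically roll their own ring buffer, and the sizes here are arbitrary):

    #include <boost/lockfree/queue.hpp>
    #include <atomic>
    #include <thread>
    #include <vector>
    #include <cstdio>
    #include <immintrin.h>   // _mm_pause

    int main() {
        boost::lockfree::queue<int> q(4096);
        std::atomic<bool> done{false};
        std::atomic<long> consumed{0};

        // Consumer: never yields, just spins draining the queue as fast as possible.
        std::thread consumer([&] {
            int item;
            while (!done.load(std::memory_order_acquire) || !q.empty()) {
                while (q.pop(item)) consumed.fetch_add(1, std::memory_order_relaxed);
                _mm_pause();                 // be polite to the sibling hyperthread
            }
        });

        // A few producers; in an async/await world these would be the awaiting tasks.
        std::vector<std::thread> producers;
        for (int p = 0; p < 3; ++p)
            producers.emplace_back([&] {
                for (int i = 0; i < 100000; ++i)
                    while (!q.push(i)) { /* retry if push fails */ }
            });

        for (auto& t : producers) t.join();
        done.store(true, std::memory_order_release);
        consumer.join();
        std::printf("consumed %ld items\n", consumed.load());
    }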
weinzierl 13 hours ago [-]
Is your code really fast if you haven't measured it properly? I'd say measuring is hard but a prerequisite for writing fast code, so truly fast code is harder.
The number one mistake I see people make is measuring one time and taking the results at face value. If you do nothing else, measure three times and you will at least have a feeling for the variability of your data. If you want to compare two versions of your code with confidence there is usually no way around proper statistical analysis.
Which brings me to the second mistake. When measuring runtime, taking the mean is not a good idea. Runtime measurements usually skew heavily towards a theoretical minimum which is a hard lower bound. The distribution is heavily lopsided with a long tail. If your objective is to compare two versions of some code, the minimum is a much better measure than the mean.
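A minimal sketch of that comparison (the workload and run count are arbitrary): run the code N times, then look at the minimum next to the mean.

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    static void code_under_test() {              // stand-in workload
        volatile long s = 0;
        for (int i = 0; i < 200000; ++i) s += i;
    }

    int main() {
        using clk = std::chrono::steady_clock;
        std::vector<double> us;
        for (int run = 0; run < 101; ++run) {
            auto t0 = clk::now();
            code_under_test();
            auto t1 = clk::now();
            us.push_back(std::chrono::duration<double, std::micro>(t1 - t0).count());
        }
        double mn   = *std::min_element(us.begin(), us.end());
        double mean = std::accumulate(us.begin(), us.end(), 0.0) / us.size();
        std::printf("min: %.2f us  mean: %.2f us\n", mn, mean);   // mean sits above min
    }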
bostik 10 hours ago [-]
> The distribution is heavily lopsided with a long tail.
You'll see this in any properly active online system. Back at a previous job we had to drill it into teams that mean() was never an acceptable latency measurement. For that reason the telemetry agent we used provided out-of-the-box p50 (median), p90, p95, p99 and max values for every timer measurement window.
The difference between p99 and max was an incredibly useful indicator of poor tail latency cases. After all, every one of those max figures was an occurrence of someone or something experiencing the long wait.
These days, if I had the pleasure of dealing with systems where individual nodes handled thousands of messages per second, I'd add p999 to the mix.
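For illustration only (a real telemetry agent would use a streaming structure like an HDR histogram rather than sorting raw samples), extracting those percentiles from one window of samples might look like:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Nearest-rank percentile over a copy of the window; p is in [0, 100].
    static double percentile(std::vector<double> v, double p) {
        std::sort(v.begin(), v.end());
        size_t idx = static_cast<size_t>(p / 100.0 * (v.size() - 1));
        return v[idx];
    }

    int main() {
        std::vector<double> window_us = {   // latency samples from one timer window
            110, 95, 102, 3400, 99, 101, 97, 5200, 100, 98 };
        for (double p : {50.0, 90.0, 95.0, 99.0})
            std::printf("p%.0f: %.0f us\n", p, percentile(window_us, p));
        std::printf("max: %.0f us\n",
                    *std::max_element(window_us.begin(), window_us.end()));
    }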
ethan_smith 8 hours ago [-]
For comparing HFT implementations, the 99th percentile is often more practical than minimum values since it accounts for tail latency while excluding extreme outliers caused by GC pauses or OS scheduling.
Leszek 12 hours ago [-]
Fast code isn't a quantum effect; it doesn't wait for a measurement to collapse the wave function into being fast. The _assertion_ that a certain piece of code is fast probably requires a measurement (maybe you can get away with reasoning, e.g. algorithmic complexity or counting instructions; each has its flaws, but so does measurement).
sfn42 6 hours ago [-]
If you're serious about performance you generally want to use a benchmark library like JMH for Java or BenchmarkDotNet for .NET. At least for those kinds of languages, where there's garbage collection, just-in-time compilation, runtime optimization and all that stuff, there's a lot to consider, and these libraries help you get accurate results.
am17an 15 hours ago [-]
Typically you want to measure both things - time it takes to send an order and time it takes to calculate the decision to send an order. Both are important choke points, one for latency and the other for throughput (in case of busy markets, you can spend a lot of time deciding to send an order, creating backpressure)
The other thing is that L1/L2 switches provide this functionality, of taking switch timestamps and marking them, which is the true test of e2e latency without any clock drift etc.
Also, fast code is actually really really hard; you just need to create the right test harness once
auc 15 hours ago [-]
Yeah definitely. Don’t want to have an algo that makes money when times are slow but then blows up/does nothing when market volume is 10x
omgtehlion 11 hours ago [-]
In HFT context (as in the article) measurement is quite easy: you tap incoming and outgoing network fibers and measure this time. Also you can do this in production, as this kind of measurement does not impact latency at all
dmurray 7 hours ago [-]
The article also touches on some reasons this isn't enough. You might want to test outside of production, you might want to measure the latency when you decide to send no order, and you might want to profile your code at a more granular level than the full lifecycle of market data to order.
omgtehlion 4 hours ago [-]
All internal parts are usually measured by low-overhead logger (which materializes log messages in a separate thread, and uses rdtsc in the hot path to record timestamps)
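A sketch of that idea (single producer assumed; buffer size, overflow handling and output format are illustrative): the hot path records only an event id and an rdtsc value, and a background thread materializes the text later.

    #include <x86intrin.h>
    #include <atomic>
    #include <cstdint>
    #include <cstdio>
    #include <thread>

    struct Record { uint32_t event; uint64_t tsc; };

    constexpr size_t kCap = 1 << 16;              // power of two; overflow not handled here
    Record ring[kCap];
    std::atomic<uint64_t> head{0};                // written by the hot path
    std::atomic<uint64_t> tail{0};                // written by the logger thread

    // Hot path: a couple of stores and one rdtsc, no formatting, no syscalls.
    inline void log_event(uint32_t event) {
        uint64_t h = head.load(std::memory_order_relaxed);
        ring[h & (kCap - 1)] = Record{event, __rdtsc()};
        head.store(h + 1, std::memory_order_release);
    }

    int main() {
        std::atomic<bool> done{false};

        std::thread writer([&] {                  // materializes messages off the hot path
            while (!done.load(std::memory_order_acquire) ||
                   tail.load(std::memory_order_relaxed) < head.load(std::memory_order_acquire)) {
                uint64_t t = tail.load(std::memory_order_relaxed);
                uint64_t h = head.load(std::memory_order_acquire);
                for (; t < h; ++t) {
                    const Record& r = ring[t & (kCap - 1)];
                    std::printf("event=%u tsc=%llu\n", r.event, (unsigned long long)r.tsc);
                }
                tail.store(t, std::memory_order_release);
            }
        });

        for (uint32_t i = 0; i < 1000; ++i) log_event(i);   // pretend hot path
        done.store(true, std::memory_order_release);
        writer.join();
    }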
nine_k 17 hours ago [-]
Fast code is easy. But slow code is equally easy, unless you keep an eye on it and measure.
And measuring is hard. This is why consistently fast code is hard.
In any case, adding some crude performance testing into your CI/CD suite, and signaling a problem if a test ran for much longer than it used to, is very helpful at quickly detecting bad performance regressions.
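A crude sketch of such a check (the baseline file and the 1.5x tolerance are made-up conventions, not a standard): time the scenario, compare against a stored baseline, and fail the job if it regressed.

    #include <chrono>
    #include <cstdio>
    #include <fstream>

    static void scenario() {                    // stand-in for the code path under test
        volatile long s = 0;
        for (int i = 0; i < 5'000'000; ++i) s += i;
    }

    int main() {
        using clk = std::chrono::steady_clock;
        auto t0 = clk::now();
        scenario();
        double ms = std::chrono::duration<double, std::milli>(clk::now() - t0).count();

        double baseline_ms = 0;
        std::ifstream in("baseline.txt");
        if (!(in >> baseline_ms)) {               // first run: record a baseline
            std::ofstream("baseline.txt") << ms;
            std::printf("baseline recorded: %.2f ms\n", ms);
            return 0;
        }
        if (ms > baseline_ms * 1.5) {
            std::fprintf(stderr, "PERF REGRESSION: %.2f ms vs baseline %.2f ms\n",
                         ms, baseline_ms);
            return 1;                             // fail the CI job
        }
        std::printf("ok: %.2f ms (baseline %.2f ms)\n", ms, baseline_ms);
    }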
mattigames 16 hours ago [-]
Exactly, another instance where perfect can be the enemy of good. Many times you are better off deploying something to prod, having a fairly good logging system, and whenever you see a spike in slowness you try to replicate the conditions that made it slow and debug from there, instead of expecting to have the impossible perfect measuring system that can detect even missing atoms in networking cables.
auc 15 hours ago [-]
Agreed, not worth making a huge effort toward an advanced system for measuring an ATS until you’ve really built out at scale
iammabd 15 hours ago [-]
Yeah, most people write for the happy path...
Few obsess over the runtime behavior under stress.
webdevver 8 hours ago [-]
Sadly it's usually cheaper to just kick the server once a week than to spend $$$xN dev hours doing the right thing and making it work well
...unless you're FAANG and can amortize the costs across your gigafleet
Attummm 10 hours ago [-]
The title is clickbait, unfortunately.
The article states the opposite.
> Writing fast algorithmic trading system code is hard. Measuring it properly is even harder.