In most cases only one string is sent per message and no pointer
tracking is needed.
This is only plumbing work; no changes to the messages have been made yet.
This enables better bindings in languages that do not have 0-terminated
strings for source/function name. It does not introduce any additional
overhead in languages that do use 0-terminated strings, either, but it
_is_ a breaking API change.
Fixes https://github.com/wolfpld/tracy/issues/53
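For illustration, the shift looks roughly like this on the C++ side (these
signatures are invented for the example and are not Tracy's actual API):

    #include <cstddef>
    #include <cstring>

    // Before: the source/function name had to be 0-terminated.
    void emit_source_location( const char* function );

    // After: an explicit size travels with the pointer, so bindings in
    // languages without 0-terminated strings can pass a slice directly.
    void emit_source_location( const char* function, size_t size );

    // Callers that already have 0-terminated strings pay at most one
    // strlen(), e.g.:
    //   emit_source_location( name, strlen( name ) );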
But rdtscp is serializing!
No, it's not. Quoting the Intel Instruction Set Reference:
"The RDTSCP instruction is not a serializing instruction, but it does
wait until all previous instructions have executed and all previous
loads are globally visible. But it does not wait for previous stores to
be globally visible, and subsequent instructions may begin execution
before the read operation is performed." And:
"The RDTSC instruction is not a serializing instruction. It does not
necessarily wait until all previous instructions have been executed
before reading the counter. Similarly, subsequent instructions may begin
execution before the read operation is performed."
So, the difference is in waiting for prior instructions to finish
executing. Notice that even in the rdtscp case, execution of the
following instructions may commence before the time measurement is
finished, and data stores may still be pending.
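For reference, a sketch of both reads using the standard compiler
intrinsics (x86 only; the MSVC equivalents live in <intrin.h>):

    #include <x86intrin.h>

    // rdtsc: no ordering guarantees whatsoever.
    unsigned long long read_tsc()
    {
        return __rdtsc();
    }

    // rdtscp: waits for prior instructions to execute and prior loads to
    // become globally visible, but prior stores may still be in flight and
    // later instructions may start before the counter read completes.
    unsigned long long read_tscp()
    {
        unsigned int aux;   // receives IA32_TSC_AUX (typically cpu/node id)
        return __rdtscp( &aux );
    }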
But, you may say, Intel in its "How to Benchmark Code Execution Times"
document shows that using rdtscp is superior to rdtsc. Well, not
exactly. What they do show is that when a *single function* is
considered, there are ways to measure its execution time with little to
no error.
This is not what Tracy is doing.
In our case there is no way to determine absolute "this is before" and
"this is after" points of a zone, as we probably already are inside
another zone. Stopping the CPU execution, so that a deeply nested zone
may be measured with great precision, will skew the measurements of all
parent zones.
And this is not what we want to measure, anyway. We are not interested
in how a *single function* behaves, but how a *whole program* behaves.
The out-of-order CPU behavior may influence the measurements? Good! We
are interested in that. We want to see *how* the code is really
executed. How is *stopping* the CPU to make a timer read an appropriate
thing to do, when we want to see how a program is performing?
At least that's the theory.
And besides all that, the profiling overhead is now reduced.
The monotonic raw clock has the same accuracy as reading cntvct registers, but
using clock_gettime() has a measurable impact on queueing time (135 us vs
83 us).
This change is needed to enable ftrace time readings on ARM Linux, which
doesn't provide any way to get raw cntvct readings, the way x86-tsc does
on x86.
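A sketch of the two readout paths being traded off here, assuming
AArch64 (the cntvct read is a standard system-register access; the
helper names are mine):

    #include <cstdint>
    #include <time.h>

    // What this change switches to: the monotonic raw clock via the vDSO.
    int64_t read_monotonic_raw()
    {
        timespec ts;
        clock_gettime( CLOCK_MONOTONIC_RAW, &ts );
        return int64_t( ts.tv_sec ) * 1000000000ll + ts.tv_nsec;
    }

    // What it switches away from: a direct read of the virtual counter,
    // a single instruction.
    int64_t read_cntvct()
    {
        int64_t t;
        asm volatile( "mrs %0, cntvct_el0" : "=r"( t ) );
        return t;
    }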
This change exploits the fact that events are processed in batches
originating from a single thread. A single message announcing a thread
context change is enough to cover multiple subsequent messages, as
opposed to including a thread identifier in each message.
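Sketched in pseudo-wire-format terms (the message shapes below are
invented for this example, not Tracy's actual protocol), the consumer
only needs the thread id once per batch:

    #include <cstdint>

    struct ThreadContext { uint64_t thread; };    // sent once per batch
    struct ZoneBegin     { int64_t timestamp; };  // no thread field needed
    struct ZoneEnd       { int64_t timestamp; };

    struct Receiver
    {
        uint64_t currentThread = 0;

        // All zone events are attributed to the most recently announced
        // thread, until the next ThreadContext message arrives.
        void handle( const ThreadContext& m ) { currentThread = m.thread; }
        void handle( const ZoneBegin& m )     { /* begin on currentThread */ }
        void handle( const ZoneEnd& m )       { /* end on currentThread */ }
    };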
Long-lived zones could send their end events without begin events in the
following scenario:
1. On-demand connection is made.
2. Zone begin is emitted, m_active is set to true.
3. Connection is terminated.
4. A new connection is made.
5. Zone end is emitted, because m_active is true.
Up to this point it was assumed that all zone end events would happen
before a new connection is made, but that's not necessarily true.
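One way to plug this hole (a sketch under my own assumptions, not
necessarily the actual fix) is to stamp each zone with the connection
generation it began under and suppress the end event after a reconnect:

    #include <atomic>
    #include <cstdint>

    std::atomic<uint32_t> g_connectionGen{ 0 };  // bumped on each new connection

    // Hypothetical emit helpers standing in for the real queue writes.
    inline void emit_zone_begin() {}
    inline void emit_zone_end() {}

    struct ScopedZone
    {
        bool     m_active;
        uint32_t m_gen;

        ScopedZone()
            : m_active( true )
            , m_gen( g_connectionGen.load( std::memory_order_relaxed ) )
        {
            emit_zone_begin();
        }

        ~ScopedZone()
        {
            // Step 5 of the scenario above: only emit the end event if no
            // reconnect (steps 3-4) happened since the begin event.
            if( m_active && m_gen == g_connectionGen.load( std::memory_order_relaxed ) )
                emit_zone_end();
        }
    };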
This should prevent a race condition that would result in an invalid
last element of the queue, in case a frozen thread had already acquired
the queue item, but hadn't yet written to it (or hadn't written it
fully).
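In essence this is a publish-after-write problem; a minimal
single-producer sketch of the pattern (the queue layout here is assumed
for illustration only):

    #include <atomic>
    #include <cstdint>

    struct Item { int64_t data; };

    struct Queue
    {
        Item item[1024];
        std::atomic<uint32_t> tail{ 0 };  // index of the published end

        // Producer thread: fill the slot first, publish it second.
        void enqueue( int64_t data )
        {
            const uint32_t idx = tail.load( std::memory_order_relaxed );
            item[idx % 1024].data = data;                      // 1. write
            tail.store( idx + 1, std::memory_order_release );  // 2. publish
        }
    };

    // If the producer is frozen between steps 1 and 2, a consumer doing an
    // acquire-load of 'tail' never observes the half-written slot as the
    // last element of the queue.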