Extend `DIBasicType` and `DISubroutineType` with an additional
`annotations` field, e.g. as below:
```
!5 = !DIBasicType(name: "int", size: 32, encoding: DW_ATE_signed, annotations: !6)
!6 = !{!7}
!7 = !{!"btf:type_tag", !"tag1"}
```
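For completeness, a hypothetical sketch of how the same field could look on a
`DISubroutineType` (node ids and the tag string are made up for illustration):
```
!8 = !DISubroutineType(types: !9, annotations: !10)
!9 = !{null}
!10 = !{!11}
!11 = !{!"btf:type_tag", !"tag2"}
```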
The field would be used by the BPF backend to generate DWARF entries
corresponding to `btf_type_tag` type attributes, e.g.:
```
0x00000029: DW_TAG_base_type
              DW_AT_name        ("int")
              DW_AT_encoding    (DW_ATE_signed)
              DW_AT_byte_size   (0x04)

0x0000002d:   DW_TAG_LLVM_annotation
                DW_AT_name        ("btf:type_tag")
                DW_AT_const_value ("tag1")
```
Such DWARF entries would be used to generate BTF definitions by tools
like [pahole](https://github.com/acmel/dwarves).
Note: fields with the same purpose are already present in
`DIDerivedType` and `DICompositeType`.
Currently, `btf_type_tag` attributes are represented in debug information
as 'annotations' fields on `DIDerivedType` nodes with the
`DW_TAG_pointer_type` tag. The annotation on a pointer corresponds to the
pointee carrying the attribute in the final BTF.
The discussion in this
[thread](https://lore.kernel.org/bpf/87r0w9jjoq.fsf@oracle.com/) came to
the conclusion that such annotations should apply to the annotated type
itself, hence the need to extend `DIBasicType` and `DISubroutineType`
with an 'annotations' field to represent cases like the one below:
```
int __attribute__((btf_type_tag("foo"))) bar;
```
This was previously tracked as differential revision:
https://reviews.llvm.org/D143966
Remove support for the icmp and fcmp constant expressions.
This is part of:
https://discourse.llvm.org/t/rfc-remove-most-constant-expressions/63179
As usual, many of the updated tests will no longer test what they were
originally intended to -- this is hard to preserve when constant
expressions get removed, and in many cases just impossible as the
existence of a specific kind of constant expression was the cause of the
issue in the first place.
A cycle profile of a thin link showed a lot of time spent in `sort` called
from the BitcodeWriter, which was being used to compute the unique
references to stack ids in the summaries emitted for each backend in a
distributed ThinLTO build. We were also frequently invoking `lower_bound`
to locate stack id indices in the resulting vector when writing out the
referencing MemProf records.
Change this to use a map to uniquify the references, and to hold the
index of the corresponding stack id in the StackIds vector, which is
now populated at the same time.
This reduced the time of a large thin link by about 10%.
PR #92983 accidentally changed the buffer in `llvm::WriteBitcodeToFile` to
be allocated on the stack, which is problematic given it's a large-ish
buffer (256K).
- added unittests for the raw_fd_stream output case.
- the `BitstreamWriter` ctor was confusing: the relationship between the buffer and the file stream wasn't clear, and in fact there was a potential bug in `BitcodeWriter` in the Mach-O case, because that code assumed in-buffer-only serialization. The incremental flushing behavior of flushing at end-of-block boundaries was an implementation detail, meaning serializers not using blocks (for example) would need to know to check the buffer and flush. The bug was latent, in the sense that, today, because the stream being passed was not a `raw_fd_stream`, incremental buffering never kicked in.
The new design moves the responsibility of flushing to the `BitstreamWriter`, and makes it work with any `raw_ostream` (but incrementally flushes only in the `raw_fd_stream` case). If the `raw_ostream` is over a buffer, i.e. a `raw_svector_ostream`, then it's equivalent to today's buffer case. For all other `raw_ostream` cases, buffering is an implementation detail. In all cases, the buffer is flushed (well, in the buffer case, that's a moot statement).
This simplifies the state and state transitions the user has to track: you have a `raw_ostream`, you write a bitstream into it, you destroy the writer, and the bitstream is completely written to your `raw_ostream`. The "buffer" case and the "raw_fd_stream" case become optimizations rather than imposing state transition concerns on the user.
This defines a new kind of IR Constant that represents a ptrauth signed
pointer, as used in AArch64 PAuth.
It allows representing most kinds of signed pointer constants used thus
far in the LLVM ptrauth implementations, notably those used in the
Darwin and ELF ABIs being implemented for C/C++. These signed pointer
constants are then lowered to ELF/Mach-O relocations.
These can be simply thought of as a constant `llvm.ptrauth.sign`, with
the interesting addition of discriminator computation: the `ptrauth`
constant can also represent a combined blend, when both address and
integer discriminator operands are used. Both operands are otherwise
optional, with default values 0/null.
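As a rough sketch of the textual form (operand order assumed to be pointer,
key, integer discriminator, address discriminator; names are illustrative and
the exact syntax should be taken from the LangRef):
```
declare void @f()
@addr_disc = global i64 0
; sign @f with key 0 and a discriminator blended from @addr_disc and 1234
@signed_fptr = constant ptr ptrauth (ptr @f, i32 0, i64 1234, ptr @addr_disc)
; discriminator operands omitted: the 0/null defaults apply
@plain_signed_fptr = constant ptr ptrauth (ptr @f, i32 0)
```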
This implements the `nusw` and `nuw` flags for `getelementptr` as
proposed at
https://discourse.llvm.org/t/rfc-add-nusw-and-nuw-flags-for-getelementptr/78672.
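In textual IR, the flags appear directly on the instruction, e.g. (value
names are illustrative):
```
%p1 = getelementptr nusw i8, ptr %base, i64 %off      ; no unsigned signed wrap
%p2 = getelementptr nuw i8, ptr %base, i64 %off       ; no unsigned wrap
%p3 = getelementptr nusw nuw i8, ptr %base, i64 %off  ; both flags
```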
The three possible flags are encapsulated in the new `GEPNoWrapFlags`
class. Currently this class has a ctor from bool, interpreted as the
InBounds flag. This ctor should be removed in the future, as code gets
migrated to handle all flags.
There are a few places annotated with `TODO(gep_nowrap)`, where I've had
to touch code but opted to not infer or precisely preserve the new
flags, so as to keep this as NFC as possible and make sure any changes
of that kind get test coverage when they are made.
For distributed ThinLTO, the LTO indexing step generates a combined
summary for each module, and the postlink pipeline reads the combined
summary, which stores the information for link-time optimization.
This patch populates the 'import type' of a summary in bitcode, and
updates the bitcode reader to parse the bit correctly.
Reverts llvm/llvm-project#90692
This was breaking PPC buildbots. The bots are not meant to test LLD, but
are running a test that uses an old version of LLD without the change
(so it is incompatible). Revert until a fix is found.
This reverts commit 2aabfc811670beb843074c765c056fff4a7b443b.
Add fixes to LLD and Gold tests missed in original change.
Co-authored-by: Jan Voung <jvoung@google.com>
GUIDs often have content in the higher bits of a 64-bit entry, so using
the unabbrev encoding is inefficient (lots of VBR control bits).
Instead, use an abbrev with two 32-bit fixed-width chunks.
The abbrev also helps encode the "count" in one place instead of
in every record.
Reduces size of distributed backend summary files by 8.7% in one
example app.
Co-authored-by: Jan Voung <jvoung@google.com>
The motivating use case is to support importing function declarations
across modules to construct call graph edges for indirect calls [1]
when importing the function definition costs too much compile time
(e.g., the function is too large or has no `noinline` attribute).
1. Currently, when the compiled IR module doesn't have a function
definition but its postlink combined summary contains the function
summary or a global alias summary with this function as aliasee, the
function definition will be imported from the source module by IRMover.
The implementation is in `FunctionImporter::importFunctions` [2].
2. In order for FunctionImporter to import a declaration of a function,
both function summary and alias summary need to carry the def / decl
state. Specifically, all existing summary fields don't differ across
importing modules, but the def / decl state is decided by
`<ImportModule, Function>`.
This change encodes the def/decl state in `GlobalValueSummary::GVFlags`.
In the subsequent changes:
1. The indexing step `computeImportForModule` [3]
will compute the set of definitions and the set of declarations for each
module, and pass the information on to the bitcode writer.
2. The bitcode writer will look up the def/decl state and set it when it
writes out the flag value. This is demonstrated in
https://github.com/llvm/llvm-project/pull/87600
3. The function importer will read the def/decl state when reading the
combined summary to figure out the two sets of global values, and IRMover
will be updated to import the declaration (aka `linkGlobalValuePrototype` [4])
into the destination module.
- The next change is https://github.com/llvm/llvm-project/pull/87600
[1] mentioned in rfc https://discourse.llvm.org/t/rfc-for-better-call-graph-sort-build-a-more-complete-call-graph-by-adding-more-indirect-call-edges/74029#support-cross-module-function-declaration-import-5
[2] 3b337242ee/llvm/lib/Transforms/IPO/FunctionImport.cpp (L1608-L1764)
[3] 3b337242ee/llvm/lib/Transforms/IPO/FunctionImport.cpp (L856)
[4] 3b337242ee/llvm/lib/Linker/IRMover.cpp (L605)
Add annotated vtable GUIDs as referenced variables in the per-function
summary, and update the bitcode writer to create value ids for these
referenced vtables.
- This is part 3 of the type profiling work, described in the "Virtual
Table Definition Import" [1] section of the RFC.
[1] https://github.com/llvm/llvm-project/pull/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
Another trivial rename patch, the last big one for now, renaming
DPMarkers to DbgMarkers. This required the field `DbgMarker` in
`Instruction` to be renamed to `DebugMarker` to avoid a clash, but
otherwise it was a simple string substitution of `s/DPMarker/DbgMarker/`
and a manual renaming of `DPM` to `DM` in the few places where that
acronym was used for debug markers.
This patch renames DPLabel to DbgLabelRecord, in accordance with the
ongoing DbgRecord rename. This rename was fairly trivial, since DPLabel
isn't as widely used as DPValue and has no real conflicts in either its
full or abbreviated name. As usual, the entire replacement was done
automatically, with `s/DPLabel/DbgLabelRecord/` and `s/DPL/DLR/`.
As part of the migration to ptradd
(https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699),
we need to change the representation of the `inrange` attribute, which
is used for vtable splitting.
Currently, inrange is specified as follows:
```
getelementptr inbounds ({ [4 x ptr], [4 x ptr] }, ptr @vt, i64 0, inrange i32 1, i64 2)
```
The `inrange` keyword is placed on a GEP index, and all accesses must be "in
range" of that index. The new representation is as follows:
```
getelementptr inbounds inrange(-16, 16) ({ [4 x ptr], [4 x ptr] }, ptr @vt, i64 0, i32 1, i64 2)
```
This specifies which offsets are "in range" of the GEP result. The new
representation will continue working when canonicalizing to ptradd
representation:
```
getelementptr inbounds inrange(-16, 16) (i8, ptr @vt, i64 48)
```
The inrange offsets are relative to the return value of the GEP. An
alternative design could make them relative to the source pointer
instead. The result-relative format was chosen on the off-chance that we
want to extend support to non-constant GEPs in the future, in which case
this variant is more expressive.
This implementation "upgrades" the old inrange representation in bitcode
by simply dropping it. This is a very niche feature, and I don't think
trying to upgrade it is worthwhile. Let me know if you disagree.
This is the major rename patch that prior patches have built towards.
The DPValue class is being renamed to DbgVariableRecord, which reflects
the updated terminology for the "final" implementation of the RemoveDI
feature. This is a pure string substitution + clang-format patch. The
only manual component of this patch was determining where to perform
these string substitutions: `DPValue` and `DPV` are almost exclusively
used for DbgRecords, *except* for:
- llvm/lib/Target, where 'DP' is used to mean double-precision, and so
appears as part of .td files and in variable names. NB: There is a
single existing use of `DPValue` here that refers to debug info, which
I've manually updated.
- llvm/tools/gold, where 'LDPV' is used as a prefix for symbol
visibility enums.
Outside of these places, I've applied several basic string
substitutions, with the intent that they only affect DbgRecord-related
identifiers; I've checked them as I went through to verify this, with
reasonable confidence that there are no unintended changes that slipped
through the cracks. The substitutions applied are all case-sensitive,
and are applied in the order shown:
```
DPValue -> DbgVariableRecord
DPVal -> DbgVarRec
DPV -> DVR
```
Following the previous rename patches, it should be the case that there
are no instances of any of these strings that are meant to refer to the
general case of DbgRecords, or anything other than the DPValue class.
The idea behind this patch is therefore that pure string substitution is
correct in all cases as long as these assumptions hold.
Reland #82363 after fixing build failure
https://lab.llvm.org/buildbot/#/builders/5/builds/41428.
MemorySanitizer detects usage of the `RawData` union member, which is not
filled directly. Instead, the code relies on filling the `Data` union
member, which is a struct consisting of signing schema parameters.
According to https://en.cppreference.com/w/cpp/language/union, this is
UB:
"It is undefined behavior to read from the member of the union that
wasn't most recently written".
Instead of relying on the compiler allowing us to do dirty things, do not
use a union and only store `RawData`. Particular ptrauth parameters are
obtained on demand via bit operations.
Original PR description below.
Emit `__ptrauth`-qualified types as `DIDerivedType` metadata nodes in IR
with tag `DW_TAG_LLVM_ptrauth_type`, baseType referring to the type
which has the qualifier applied, and the following parameters
representing the signing schema:
- `ptrAuthKey` (integer)
- `ptrAuthIsAddressDiscriminated` (boolean)
- `ptrAuthExtraDiscriminator` (integer)
- `ptrAuthIsaPointer` (boolean)
- `ptrAuthAuthenticatesNullValues` (boolean)
Co-authored-by: Ahmed Bougacha <ahmed@bougacha.org>
Reapplying after revert in #85382 (861ebe6446296c96578807363aa292c69d827773).
Fixed an intermittent test failure by avoiding piping output in some RUN lines.
If --write-experimental-debuginfo-iterators-to-bitcode is true (default false)
and --experimental-debuginfo-iterators is also true, then the new debug info
format (non-instruction records) is written to bitcode directly.
Added the following records:
FUNC_CODE_DEBUG_RECORD_LABEL
FUNC_CODE_DEBUG_RECORD_VALUE
FUNC_CODE_DEBUG_RECORD_DECLARE
FUNC_CODE_DEBUG_RECORD_ASSIGN
FUNC_CODE_DEBUG_RECORD_VALUE_SIMPLE
The last one has an abbrev in FUNCTION_BLOCK BLOCK_INFO. Incidentally, this uses
the last value available without widening the code-length for FUNCTION_BLOCK
from 4 to 5 bits.
Records are formatted as follows:
All DbgRecords start with:
1. DILocation
FUNC_CODE_DEBUG_RECORD_LABEL
2. DILabel
DPValues then share common fields:
2. DILocalVariable
3. DIExpression
FUNC_CODE_DEBUG_RECORD_VALUE
4. Location Metadata
FUNC_CODE_DEBUG_RECORD_DECLARE
4. Location Metadata
FUNC_CODE_DEBUG_RECORD_VALUE_SIMPLE
4. Location Value (single)
FUNC_CODE_DEBUG_RECORD_ASSIGN
4. Location Metadata
5. DIAssignID
6. DIExpression (address)
7. Location Metadata (address)
Encoding the DILocation metadata reference directly appeared to yield smaller
bitcode files than encoding the operands separately (as is done with instruction
DILocations).
FUNC_CODE_DEBUG_RECORD_VALUE_SIMPLE is by far the most common DbgRecord record
in optimized code (on the order of 5x-10x more than other kinds). Unoptimized
code should only contain FUNC_CODE_DEBUG_RECORD_DECLARE.
NOTE: For brevity the following talks about ConstantInt but
everything extends to cover ConstantFP as well.
Whilst ConstantInt::get() supports the creation of vectors whereby
each lane has the same value, it achieves this via other constants:
* ConstantVector for fixed-length vectors
* ConstantExprs for scalable vectors
However, ConstantExprs are being deprecated and ConstantVector is
not space efficient for larger vector types. By extending ConstantInt
we can represent vector splats by only storing the underlying scalar
value.
More specifically:
* ConstantInt gains an ElementCount variant of get().
* LLVMContext is extended to map <EC,APInt>->ConstantInt.
* BitcodeReader/Writer support is extended to allow vector types.
Whilst this patch adds the base support, more work is required
before it's production ready. For example, there are likely to be
many places where isa<ConstantInt> assumes a scalar type. Accordingly,
the default behaviour of ConstantInt::get() remains unchanged, but a
set of flags is added to allow wider testing and thus help with the
migration:
--use-constant-int-for-fixed-length-splat
--use-constant-fp-for-fixed-length-splat
--use-constant-int-for-scalable-splat
--use-constant-fp-for-scalable-splat
NOTE: No change is required to the bitcode format because types and
values are handled separately.
NOTE: For similar reasons as above, code generation doesn't work
out of the box.
If tail call optimization was not disabled for the profiled binary, the
call contexts will be missing frames for tail calls. Handle this by
performing a limited search through tail call edges for the profiled
callee when a discontinuity is detected. The search depth is adjustable
but defaults to 5.
If we are able to identify a short sequence of tail calls, update the
graph for those calls. In the case of ThinLTO, synthesize the necessary
CallsiteInfos for carrying the cloning information to the backends.
Add the `dead_on_unwind` attribute, which states that the caller will
not read from this argument if the call unwinds. This allows eliding
stores that could otherwise be visible on the unwind path, for example:
```
declare void @may_unwind()

define void @src(ptr noalias dead_on_unwind %out) {
  store i32 0, ptr %out
  call void @may_unwind()
  store i32 1, ptr %out
  ret void
}

define void @tgt(ptr noalias dead_on_unwind %out) {
  call void @may_unwind()
  store i32 1, ptr %out
  ret void
}
```
The optimization is not valid without `dead_on_unwind`, because the `i32
0` value might be read if `@may_unwind` unwinds.
This attribute is primarily intended to be used on sret arguments. In
fact, I previously wanted to change the semantics of sret to include
this "no read after unwind" property (see D116998), but based on the
feedback there it is better to keep these attributes orthogonal (sret is
an ABI attribute, dead_on_unwind is an optimization attribute). This is
a reboot of that change with a separate attribute.
This adds support for a HasTailCall flag on function call edges in the
ThinLTO summary. It is intended to aid discovery of missing frames from
tail calls in profiled call stacks for MemProf, for profiled binaries
that did not disable tail call elimination. A follow-on change will add
the use of this new flag during MemProf context disambiguation.
The new flag is encoded in the bitcode along with either the hotness
flag from the profile, or the relative block frequency under the
-write-relbf-to-summary flag when there is no profile data.
Because we now will always have some additional call edge information, I
have removed the non-profile function summary record format, and we
simply encode the tail call flag along with a hotness type of none when
there is no profile information or relative block frequency. The change
of record format and name caused most of the test case changes.
I have added explicit testing of generation of the new tail call flag
into the bitcode and IR assembly format as part of the changes to
llvm/test/Bitcode/thinlto-function-summary-refgraph.ll. I have also
added round trip testing through assembly and bitcode to
llvm/test/Assembler/thinlto-summary.ll.
The `disjoint` flag on `or` indicates that, for each bit position, at least
one of the inputs is known to be zero at that position. This allows the Or
to be treated as an Add, since there is no possibility of a carry from any
bit.
If the flag is present and this property does not hold, the result is
poison.
This makes it easier to reverse the InstCombine transform that turns Add
into Or.
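A small example (names illustrative):
```
%x = or disjoint i32 %a, %b   ; poison if %a and %b share a set bit
; ...which can then be treated as:
%y = add i32 %a, %b
```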
This is inspired by a comment here
https://github.com/llvm/llvm-project/pull/71955#discussion_r1391614578
Discourse thread
https://discourse.llvm.org/t/rfc-add-or-disjoint-flag/75036
This reverts commit 0fd5dc94380d5fe666dc6c603b4bb782cef743e7.
The original commit removed DIArgLists from being in an MDNode map, but did
not insert a new `delete` in the LLVMContextImpl destructor. This
reapply adds that `delete` call, preventing a memory leak.
This patch changes the `DIArgList` class's inheritance from `MDNode` to
`Metadata, ReplaceableMetadataImpl`, and ensures that it is always
unique, i.e. a distinct DIArgList should never be produced.
This should not result in any changes to IR or bitcode parsing and
printing, as the format for DIArgList is unchanged, and the order in which it
appears should also be identical. As a minor note, this patch also fixes
a gap in the verifier, where the ValueAsMetadata operands to a DIArgList
would not be visited.
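For reference, a `DIArgList` appears inline as an argument to the debug
intrinsics, e.g. (the variable and expression operands here are illustrative):
```
call void @llvm.dbg.value(metadata !DIArgList(i32 %a, i32 %b), metadata !20,
    metadata !DIExpression(DW_OP_LLVM_arg, 0, DW_OP_LLVM_arg, 1, DW_OP_plus, DW_OP_stack_value))
```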
Close https://github.com/llvm/llvm-project/issues/56980.
This patch tries to introduce a lightweight optimization attribute for
coroutines which are guaranteed to be destroyed only after they have
reached the final suspend point.
The rationale behind the patch is simple. See the example:
```C++
A foo() {
  dtor d;
  co_await something();
  dtor d1;
  co_await something();
  dtor d2;
  co_return 43;
}
```
Generally, the generated .destroy function may look like:
```C++
void foo.destroy(foo.Frame *frame) {
  switch(frame->suspend_index()) {
  case 1:
    frame->d.~dtor();
    break;
  case 2:
    frame->d.~dtor();
    frame->d1.~dtor();
    break;
  case 3:
    frame->d.~dtor();
    frame->d1.~dtor();
    frame->d2.~dtor();
    break;
  default: // coroutine completed or hasn't started
    break;
  }
  frame->promise.~promise_type();
  delete frame;
}
```
This is because the compiler needs to be ready for all the cases in which
the coroutine may be destroyed in a valid state.
However, from the user's perspective, we can understand that certain
coroutine types may only be destroyed after they have reached the final
suspend point. And we need a method to teach the compiler about this.
Hence this patch. After the compiler recognizes that the
coroutine can only be destroyed after completion, it can optimize the
above example to:
```C++
void foo.destroy(foo.Frame *frame) {
  frame->promise.~promise_type();
  delete frame;
}
```
I spent a lot of time experimenting with this downstream, and the numbers
are really good. In a real-world coroutine-heavy workload, the size of
the build dir (including .o files) is reduced by 14%, and the size of the
final libraries (excluding the .o files) is reduced by 8% in Debug mode
and 1% in Release mode.
This adds a writable attribute, which in conjunction with
dereferenceable(N) states that a spurious store of N bytes may be
introduced on function entry. This implies that this many bytes
are writable without trapping or introducing data races. See
https://llvm.org/docs/Atomics.html#optimization-outside-atomic for
why the second point is important.
This attribute can be added to sret arguments. I believe Rust will
also be able to use it for by-value (moved) arguments. Rust likely
won't be able to use it for &mut arguments (tree borrows does not
appear to allow spurious stores).
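A minimal sketch of the attribute on a parameter (function and names are
hypothetical):
```
; writable + dereferenceable(4) => the first 4 bytes of %out may be stored to
; speculatively, without trapping or introducing a data race.
define void @init(ptr writable dereferenceable(4) %out) {
  store i32 0, ptr %out
  ret void
}
```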
In this patch the new attribute is only used by LICM scalar promotion.
However, the actual motivation for this is to fix a correctness issue
in call slot optimization, which needs this attribute to avoid
optimization regressions.
Followup to the discussion on D157499.
Differential Revision: https://reviews.llvm.org/D158081
Add an nneg flag to the zext instruction, which specifies that the
argument is non-negative. Otherwise, the result is a poison value.
The primary use-case for the flag is to preserve information when sext
gets replaced with zext due to range-based canonicalization. The nneg
flag allows us to convert the zext back into an sext later. This is
useful for some optimizations (e.g. a signed icmp can fold with sext but
not zext), as well as some targets (e.g. RISCV prefers sext over zext).
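For example (names illustrative):
```
%wide = zext nneg i32 %x to i64   ; poison if %x is negative, otherwise equivalent to sext
```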
Discourse thread: https://discourse.llvm.org/t/rfc-add-zext-nneg-flag/73914
This patch is based on https://reviews.llvm.org/D156444 by
@Panagiotis156, with some implementation simplifications and additional
tests.
---------
Co-authored-by: Panagiotis K <karouzakispan@gmail.com>
This patch adds a new fn attribute, `optdebug`, that specifies that
optimizations should make decisions that prioritize debug info quality,
potentially at the cost of runtime performance.
This patch does not add any functional changes triggered by this
attribute, only the attribute itself. A subsequent patch will use this
flag to disable the post-RA scheduler.
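For illustration, the attribute is spelled like other optimization-level
function attributes (function name hypothetical):
```
define void @f() optdebug {
  ret void
}
```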
* Remove if its sole use is to support an unnecessary ptr-to-ptr bitcast
(remove the bitcast as well)
* Replace with use of other APIs.
NFC opaque pointer cleanup effort.
The module paths string table mapped to both an id sequentially assigned
during LTO linking, and the module hash. The former is a leftover from
before the module hash was added for caching and subsequently replaced
use of the module id when renaming promoted symbols (to avoid effects
due to link order changes). The sequentially assigned module id was not
removed, however, as it was still a convenience when serializing to/from
bitcode and assembly.
This patch removes the module id from this table, since it isn't
strictly needed and can lead to confusion about when it is appropriate to
use (e.g. see the fix in D156525). It also incurs a (likely not
significant) amount of overhead. Where an integer module id is needed
(e.g. bitcode writing), one is assigned on the fly.
There are a couple of test changes since the paths are now sorted
alphanumerically when assigning ids on the fly during assembly writing,
in order to ensure deterministic behavior.
Differential Revision: https://reviews.llvm.org/D156730