Since `raw_string_ostream` doesn't own the string buffer, it is
desirable (in terms of memory safety) for users to directly reference
the string buffer rather than use `raw_string_ostream::str()`.
This works towards the TODO item to remove `raw_string_ostream::str()`.
P.S. Also remove some unneeded/dead code.
This adds a language standard mode for the latest C standard. While
WG14 is hoping for a three-year cycle, it is not clear that the next
revision of C will be in 2026 and so a flag was not created for c26
specifically.
Virtual function pointer entries in v-tables are signed with address
discrimination in addition to declaration-based discrimination, where an
integer discriminator is the string hash (see
`ptrauth_string_discriminator`) of the mangled name of the overridden
method. This notably provides diversity based on the full signature of
the overridden method, including the method name and parameter types.
This patch introduces ItaniumVTableContext logic to find the original
declaration of the overridden method.
On AArch64, these pointers are signed using the `IA` key (the
process-independent code key.)
V-table pointers can be signed with either no discrimination, or a
similar scheme using address and decl-based discrimination. In this
case, the integer discriminator is the string hash of the mangled
v-table identifier of the class that originally introduced the vtable
pointer.
On AArch64, these pointers are signed using the `DA` key (the
process-independent data key.)
Not using discrimination allows attackers to simply copy valid v-table
pointers from one object to another. However, using a uniform
discriminator of 0 does have positive performance and code-size
implications on AArch64, and diversity for the most important v-table
access pattern (virtual dispatch) is already better assured by the
signing schemas used on the virtual functions. It is also known that
some code in practice copies objects containing v-tables with `memcpy`,
and while this is not permitted formally, it is something that may be
invasive to eliminate.
This is controlled by:
```
-fptrauth-vtable-pointer-type-discrimination
-fptrauth-vtable-pointer-address-discrimination
```
In addition, this provides fine-grained controls in the
ptrauth_vtable_pointer attribute, which allows overriding the default
ptrauth schema for vtable pointers on a given class hierarchy, e.g.:
```
[[clang::ptrauth_vtable_pointer(no_authentication, no_address_discrimination,
no_extra_discrimination)]]
[[clang::ptrauth_vtable_pointer(default_key, default_address_discrimination,
custom_discrimination, 0xf00d)]]
```
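For example, the second override above applied to a class hierarchy could
look like this (a minimal illustration; `Base`/`Derived` are made-up names,
and the attribute is meant to apply to a class hierarchy as described above):
```
// Minimal sketch: override the default v-table pointer schema for this
// hierarchy to use a custom constant discriminator (0xf00d).
class [[clang::ptrauth_vtable_pointer(default_key,
                                      default_address_discrimination,
                                      custom_discrimination, 0xf00d)]] Base {
public:
  virtual void visit();
};

class Derived : public Base {
public:
  void visit() override; // v-table pointer signed with the overridden schema
};
```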
The override is then mangled as a parametrized vendor extension:
```
"__vtptrauth" I
<key>
<addressDiscriminated>
<extraDiscriminator>
E
```
To support this attribute, this patch adds a small extension to the
attribute-emitter tablegen backend.
Note that there are known areas where signing is either missing
altogether or can be strengthened. Some will be addressed in later
changes (e.g., member function pointers, some RTTI).
`dynamic_cast` in particular is handled by emitting an artificial
v-table pointer load (in a way that always authenticates it) before the
runtime call itself, as the runtime doesn't have enough information
today to properly authenticate it. Instead, the runtime is currently
expected to strip the v-table pointer.
---------
Co-authored-by: John McCall <rjmccall@apple.com>
Co-authored-by: Ahmed Bougacha <ahmed@bougacha.org>
libstdc++ requires this define to match what is predefined in GCC for
the ABI of this platform; GCC hardcodes this define for all mingw
configurations in gcc/config/i386/cygming.h.
(It also defines __GXX_MERGED_TYPEINFO_NAMES=0, but that happens to
match the defaults in libstdc++ headers, so there's no similar need to
define it in Clang.)
This fixes a Clang/libstdc++ interop issue discussed at
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110572.
The two variables using ManagedStatic that are exported by this
header are not actually used anywhere -- they are used through
SubCommand::getTopLevel() and SubCommand::getAll() instead.
Drop the extern declarations and the include.
This commit implements the entirety of the now-accepted
[N3017 - Preprocessor Embed](https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3017.htm)
and its sister C++ paper [p1967](https://wg21.link/p1967). It implements
everything in the specification and includes an implementation that
drastically improves the time it takes to embed data in specific
scenarios (the initialization of character type arrays). The mechanisms
used to do this fall under the "as-if" rule; in general, when the system
cannot detect that it is initializing an array object in a variable
declaration, it will generate an EmbedExpr AST node to be expanded by
AST consumers (CodeGen or constant expression evaluators), or it will
expand the embed directive as a comma expression.
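For reference, a minimal use of the directive looks like this (`data.bin` is a
placeholder resource name), with the array initialization being exactly the
fast path described above:
```
#include <cstddef>

// The directive expands to the comma-separated bytes of the resource.
static const unsigned char Data[] = {
#embed "data.bin"
};

std::size_t dataSize() { return sizeof(Data); }
```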
This reverts commit
682d461d5a.
---------
Co-authored-by: The Phantom Derpstorm <phdofthehouse@gmail.com>
Co-authored-by: Aaron Ballman <aaron@aaronballman.com>
Co-authored-by: cor3ntin <corentinjabot@gmail.com>
Co-authored-by: H. Vetinari <h.vetinari@gmx.com>
Currently we can create a LocalDeclID directly from an integer without
verifying it. This may make it hard to refactor if we want to change the
way we serialize DeclIDs (see https://github.com/llvm/llvm-project/pull/95897).
It is also hard to debug if someone someday constructs a LocalDeclID
with an incorrect value.
So in this patch, I tried to unify the way we construct a LocalDeclID in
ASTReader, where we construct the LocalDeclID from the serialized data.
With the new interface, we can also verify the constructed LocalDeclID
sooner.
This commit implements the entirety of the now-accepted
[N3017 - Preprocessor Embed](https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3017.htm)
and its sister C++ paper [p1967](https://wg21.link/p1967). It implements
everything in the specification and includes an implementation that
drastically improves the time it takes to embed data in specific
scenarios (the initialization of character type arrays). The mechanisms
used to do this fall under the "as-if" rule; in general, when the system
cannot detect that it is initializing an array object in a variable
declaration, it will generate an EmbedExpr AST node to be expanded by
AST consumers (CodeGen or constant expression evaluators), or it will
expand the embed directive as a comma expression.
---------
Co-authored-by: Aaron Ballman <aaron@aaronballman.com>
Co-authored-by: cor3ntin <corentinjabot@gmail.com>
Co-authored-by: H. Vetinari <h.vetinari@gmx.com>
Co-authored-by: Podchishchaeva, Mariya <mariya.podchishchaeva@intel.com>
Before C++23, we would check a constexpr function body to diagnose if
the function can never be evaluated in a constant expression context.
This was previously required standards behavior, but C++23 relaxed the
restrictions with P2448R2. While this checking is useful, it is also
quite expensive, especially in pathological cases (see #92924 for an
example), because it means the mere presence of a constexpr function
definition will require constant evaluation even if the function is not
used within the TU.
Clang suppresses diagnostics in system headers by default and system
headers (like STL implementations) can be full of constexpr function
bodies. Now we suppress the check for a diagnostic if the function
definition is in a system header or if the `-Winvalid-constexpr`
diagnostic is disabled. This should have some mild compile time
performance improvements.
Also, the previous implementation would disable the diagnostic in C++23
mode entirely. Due to the benefit of the check, this patch now makes it
possible to enable the diagnostic explicitly in C++23 mode.
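For illustration, this is the kind of definition the check flags (a minimal
sketch; under this patch it is only diagnosed when -Winvalid-constexpr is in
effect and the definition is not in a system header):
```
extern int RuntimeValue;

// No call to this function can ever be a constant expression, since it reads
// a non-constexpr global, so Clang can warn at the definition even if the
// function is never used.
constexpr int neverConstant() { return RuntimeValue; }
```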
This commit fixes https://github.com/llvm/llvm-project/issues/88896 by
passing LangOpts from the CompilerInstance to
DependencyScanningWorker so that the original LangOpts are
preserved/respected.
This makes for more accurate parsing/lexing when certain language
versions or features specific to versions are to be used.
When we generate the reduced BMI on the fly, the order of the emitting
phases differs between `-emit-obj` and `-emit-module-interface`.
Although this is meant to be fine, we observed in
https://github.com/llvm/llvm-project/issues/93859 that the different
phase order may cause problems. It also turned out that the fundamental
cause there was unrelated to the order. But it should still be good to
emit the reduced BMI first in both cases to avoid such confusion in the
future.
This commit adds the `/Zc:__STDC__` argument from MSVC, which defines
`__STDC__`.
This means, alongside stronger feature parity with MSVC, that things
that rely on `__STDC__`, such as autoconf, can work.
Link to MSVC documentation of this flag:
https://learn.microsoft.com/en-us/cpp/build/reference/zc-stdc?view=msvc-170
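As a rough sketch, configure-style code such as the following (the
`HAVE_STANDARD_C` name is made up for illustration) starts taking the first
branch under clang-cl once the flag defines `__STDC__`:
```
#ifdef __STDC__
#define HAVE_STANDARD_C 1
#else
#define HAVE_STANDARD_C 0
#endif
```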
This tries to fix all of the places where a diagnostic message starts
with a capital letter (other than acronyms or proper nouns) or ends
with punctuation (other than a question mark).
This is in support of a planned change to tablegen to start diagnosing
incorrect diagnostic message styles.
When using other specific exception options in Clang, such as
`-fseh-exceptions` or `-fsjlj-exceptions`, Clang defines a corresponding
preprocessor macro, such as `__USING_SJLJ_EXCEPTIONS__`. Emscripten does
that in our own build system:
7dcd7f4074/tools/system_libs.py (L1577-L1578)
But to make Wasm EH usable in non-Emscripten toolchains, this has to be
defined somewhere else. This PR makes Wasm EH consistent with the other
exception schemes by letting Clang define the macro depending on the
exception option.
We have been using `__USING_WASM_EXCEPTIONS__` in our current library
code, but this changes it to `__WASM_EXCEPTIONS__` for conciseness, and
I will update other parts of LLVM as follow-ups. This does not break
anything currently working, because we have not been defining anything
in Clang so far.
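For illustration, library code can then detect the scheme without any help
from the build system (a sketch; `LIB_USES_WASM_EH` is a made-up name):
```
// Clang defines __WASM_EXCEPTIONS__ when Wasm EH is the active exception
// scheme, analogous to __USING_SJLJ_EXCEPTIONS__ for -fsjlj-exceptions.
#if defined(__WASM_EXCEPTIONS__)
#define LIB_USES_WASM_EH 1
#else
#define LIB_USES_WASM_EH 0
#endif
```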
This patch continues previous efforts to split `Sema` up, this time
covering code completion.
Context can be found in #84184.
Dropping `Code` prefix from function names in `SemaCodeCompletion` would
make sense, but I think this PR has enough changes already.
As usual, formatting changes are done as a separate commit. Hopefully
this helps with the review.
In ASTBitCodes.h, there are two type aliases for the ID type of
identifiers with the same underlying type, which is confusing. This
patch merges `IdentID` into `IdentifierID` to erase such confusion.
I'm planning to remove StringRef::equals in favor of
StringRef::operator==.
- StringRef::operator==/!= outnumber StringRef::equals by a factor of
24 under clang/ in terms of their usage.
- The elimination of StringRef::equals brings StringRef closer to
std::string_view, which has operator== but not equals.
- S == "foo" is more readable than S.equals("foo"), especially for
!Long.Expression.equals("str") vs Long.Expression != "str".
Depends on #87545
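For illustration, the mechanical replacement looks like:
```
#include "llvm/ADT/StringRef.h"

bool isFoo(llvm::StringRef S) {
  // Before: return S.equals("foo");
  return S == "foo";
}
```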
Emit PAuth ABI compatibility tag values as llvm module flags:
- `aarch64-elf-pauthabi-platform`
- `aarch64-elf-pauthabi-version`
For platform 0x10000002 (llvm_linux), the version value bits correspond
to the following LangOptions defined in #85232:
- bit 0: `PointerAuthIntrinsics`;
- bit 1: `PointerAuthCalls`;
- bit 2: `PointerAuthReturns`;
- bit 3: `PointerAuthAuthTraps`;
- bit 4: `PointerAuthVTPtrAddressDiscrimination`;
- bit 5: `PointerAuthVTPtrTypeDiscrimination`;
- bit 6: `PointerAuthInitFini`.
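As a sketch (not the actual Clang code), the version value is the OR of those
bits:
```
// Illustrative only: build the aarch64-elf-pauthabi-version value from the
// LangOptions bits listed above.
unsigned computePAuthABIVersion(bool Intrinsics, bool Calls, bool Returns,
                                bool AuthTraps, bool VTPtrAddrDiscr,
                                bool VTPtrTypeDiscr, bool InitFini) {
  return (Intrinsics << 0) | (Calls << 1) | (Returns << 2) |
         (AuthTraps << 3) | (VTPtrAddrDiscr << 4) | (VTPtrTypeDiscr << 5) |
         (InitFini << 6);
}
```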
---------
Co-authored-by: Ahmed Bougacha <ahmed@bougacha.org>
When multiple actions are specified, the last one is used and others are
overridden. This might lead to confusion if the user is used to the
driver's `-S -emit-llvm` behavior.
```
%clang_cc1 -S -emit-llvm a.c # -S is overridden
%clang_cc1 -emit-llvm -S a.c # -emit-llvm is overridden
%clang_cc1 -fsyntax-only -S a.c # -fsyntax-only is overridden
```
However, we want to continue supporting overriding the driver action
with -Xclang:
* `clang -c -Xclang -ast-dump a.c` (`%clang -cc1 -emit-obj ...
-main-file-name a.c ... -ast-dump`)
* `clang -c -xc++ -Xclang -emit-module stl.modulemap`
As an exception, we allow -ast-dump* options to be composed together
(e.g. `-ast-dump -ast-dump-lookups` in AST/ast-dump-lookups.cpp).
This relands 6c31104.
The patch was reverted due to incorrectly introduced alignment, and was
re-committed after fixing the alignment issue.
Following is the original message:
This is part of the "no transitive change" patch series, namely "no
transitive source location change". I discussed this with @Bigcheese at
the Tokyo WG21 meeting.
The idea comes from what @jyknight posted on LLVM discourse. Consider:
```
// A.cppm
export module A;
...
// B.cppm
export module B;
import A;
...
//--- C.cppm
export module C;
import B;
```
Almost every time A.cppm changes, we need to recompile `B`, because we
consider source locations significant to the semantics. But it would be
good if we could avoid recompiling `C` when the change in `A` doesn't
change the BMI of `B`.
This patch only cares about source locations, so let's focus on a source
location example. We can see the full example in the attached test.
```
//--- A.cppm
export module A;
export template <class T>
struct C {
T func() {
return T(43);
}
};
export int funcA() {
return 43;
}
//--- A.v1.cppm
export module A;
export template <class T>
struct C {
T func() {
return T(43);
}
};
export int funcA() {
return 43;
}
//--- B.cppm
export module B;
import A;
export int funcB() {
return funcA();
}
//--- C.cppm
export module C;
import A;
export void testD() {
C<int> c;
c.func();
}
```
Here the only difference between `A.cppm` and `A.v1.cppm` is that
`A.v1.cppm` has an additional blank line. Then the test shows that the
two BMIs of `B.cppm`, one built with `-fmodule-file=A=A.pcm` and the
other with `-fmodule-file=A=A.v1.pcm`, should have bit-wise identical
contents.
However, it is a different story for C, since C instantiates templates
from A, and the instantiation records source information from module A,
which differs between `A` and `A.v1`; so it is expected that the BMIs
`C.pcm` and `C.v1.pcm` can and should differ.
To fully understand the patch, we need to understand how we encode
source locations and how we serialize and deserialize them.
For source locations, we encode them as:
```
|
|
| _____ base offset of an imported module
|
|
|
|_____ base offset of another imported module
|
|
|
|
| ___ 0
```
As the diagram shows, we encode the local (unloaded) source locations
from 0 toward higher offsets, and we allocate the space for source
locations from the loaded modules from the high end toward 0. The source
locations from the loaded modules are then mapped into our source
location space according to the allocated offsets.
For example, for,
```
// a.cppm
export module a;
...
// b.cppm
export module b;
import a;
...
```
Assume the offset of a source location (let's call it `S`) in a.cppm is
45; we record the value `45` in the BMI `a.pcm`. Then in b.cppm, when we
import a, the source manager allocates a space for module 'a' (according
to the recorded number of source locations) as the base offset of module
'a' in the current source location space. Let's assume the allocated
base offset is 90 in this example. Then when we want to get the location
of `S` in the current source location space, we simply add `45` to `90`,
giving `135`. So the source location of `S` in module `b` is `135`.
And when we want to write module `b`, we also write the source location
of `S` as `135` directly in the BMI. And to clarify that the location
`S` comes from module `a`, we also record the base offset of module `a`,
90, in the BMI of `b`.
Then the problem comes. Since the base offset of module 'a' is computed
from the number of source locations in module 'a', the base offset of
module 'a' recorded in module 'b' changes every time the number of
source locations in module 'a' increases or decreases. In other words,
the contents of the BMI of `b` change every time the number of locations
in module 'a' changes. This is pretty sensitive, as almost every change
changes the number of locations. So this is the problem this patch wants
to solve.
Let's continue with the existing design to understand what's going on.
Another interesting case is:
```
// c.cppm
export module c;
import whatever;
import a;
import b;
...
```
In `c.cppm`, when we import `a`, we still need to allocate a base
location offset for it; let's say the value happens to be `200` this
time. Then when we reach the location `S` recorded in module `b`, we
need to translate it into the current source location space. The
solution is quite simple: we can get it by `135 + (200 - 90) = 245`. In
other words, the offset of a source location in the current module can
be computed as `Recorded Offset + Base Offset of its module file -
Recorded Base Offset`.
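As a small sketch of that formula (illustrative names, not the actual Clang
code):
```
// Translate an offset recorded in an imported BMI into the current source
// location space, e.g. 135 + (200 - 90) == 245 in the example above.
unsigned translateOffset(unsigned RecordedOffset,
                         unsigned BaseOffsetInCurrentModule,
                         unsigned RecordedBaseOffset) {
  return RecordedOffset + BaseOffsetInCurrentModule - RecordedBaseOffset;
}
```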
That is roughly how we handle the offsets of source locations in the
serializers.
At the abstract level, what we want to do is to remove the hardcoded
base offset of imported modules while retaining the ability to calculate
the source location in a new module unit. To achieve this, we need to be
able to find the module file owning a source location from the encoding
of the source location.
So in this patch, for each source location, we store the local offset
of the location and the module file index. For the above example, the
source location of `S` used to be recorded in `b.pcm` as `135` directly;
in the new design, it is recorded as `<1, 45>`. Here `1` stands for the
module file index of `a` in module `b`, and `45` is the offset of `S`
from the base offset of module `a`.
So the trade-off here is that, to make the BMI more independent, we
need to record more abstract information, and I feel that is worth it.
The recompilation problem of modules is really annoying and people still
complain about it. But if we can make this work (including stopping
other changes from propagating transitively), I think it may be a killer
feature for modules. And per @Bigcheese, this should be helpful for
clang explicit modules too.
On the benchmarking side, I tested this patch against
https://github.com/alibaba/async_simple/tree/CXX20Modules. There was no
significant change in compilation time. The size of the .pcm files grows
from 200M to 204M. I think the trade-off is pretty fair.
I didn't use another slot to record the module file index. Instead, I
use the higher 32 bits of the existing source location encoding to store
that information. This design should be safe: we use `unsigned` to store
source locations in memory but `uint64_t` in serialization, and
`unsigned` is 32 bits wide on most platforms, so the bits now used for
the module file index were not used before. The new encoding is:
```
|-----------------------|-----------------------|
| A | B | C |
* A: 32 bit. The index of the module file in the module manager + 1. The +1
  here is necessary since we wish 0 stands for the current module file.
* B: 31 bit. The offset of the source location to the module file containing it.
* C: The macro bit. We rotate it to the lowest bit so that we can save some
  space in case the index of the module file is 0.
```
(B and C together are the existing raw encoding for source locations.)
Another reason to reuse the same slot for the source location is to
reduce the impact of the patch, since there are a lot of places assuming
we can store and fetch a source location from a single slot. If I tried
to add another slot, a lot of code would break, and I don't feel it is
worth it.
Another impact of this decision is that the existing small
optimizations for encoding source locations may be invalidated. The key
to those optimizations is turning large values into small values so that
the VBR6 format can reduce the size. But if we put the module file index
into the higher bits, they may simply not work anymore. An example is
the `SourceLocationSequence` optimization.
This should only affect the size of on-disk .pcm files. I don't expect
it to impact the speed or memory use of compilations, and given my small
experiments above, I feel this trade-off is worth it.
The mental model for handling source location offsets is not that
complex, and I believe we can handle it by adding the module file index
to each stored source location.
On the practical side, since source locations are pretty sensitive, and
the patch passes all the in-tree tests as well as some small-scale
projects, I feel it should be correct.
I'll continue to work on "no transitive decl change" and "no transitive
identifier change" (if it matters) to achieve the goal of stopping the
propagation of unnecessary changes. But all of this depends on this
patch, since, clearly, the source locations are the most sensitive
part.
---
The release notes and documentation will be added separately.
…te module file for C++20 modules instead of PCHGenerator
Previously we were re-using PCHGenerator to generate the module file
for C++20 modules, but this is slightly odd. This patch tries to use a
new class 'CXX20ModulesGenerator' to generate the module file for C++20
modules.
This is part of the "no transitive change" patch series, namely "no
transitive source location change". I discussed this with @Bigcheese at
the Tokyo WG21 meeting.
The idea comes from what @jyknight posted on LLVM discourse. Consider:
```
// A.cppm
export module A;
...
// B.cppm
export module B;
import A;
...
//--- C.cppm
export module C;
import B;
```
Almost every time A.cppm changes, we need to recompile `B`, because we
consider source locations significant to the semantics. But it would be
good if we could avoid recompiling `C` when the change in `A` doesn't
change the BMI of `B`.
# Motivation Example
This patch only cares about source locations, so let's focus on a source
location example. We can see the full example in the attached test.
```
//--- A.cppm
export module A;
export template <class T>
struct C {
T func() {
return T(43);
}
};
export int funcA() {
return 43;
}
//--- A.v1.cppm
export module A;
export template <class T>
struct C {
T func() {
return T(43);
}
};
export int funcA() {
return 43;
}
//--- B.cppm
export module B;
import A;
export int funcB() {
return funcA();
}
//--- C.cppm
export module C;
import A;
export void testD() {
C<int> c;
c.func();
}
```
Here the only difference between `A.cppm` and `A.v1.cppm` is that
`A.v1.cppm` has an additional blank line. Then the test shows that the
two BMIs of `B.cppm`, one built with `-fmodule-file=A=A.pcm` and the
other with `-fmodule-file=A=A.v1.pcm`, should have bit-wise identical
contents.
However, it is a different story for C, since C instantiates templates
from A, and the instantiation records source information from module A,
which differs between `A` and `A.v1`; so it is expected that the BMIs
`C.pcm` and `C.v1.pcm` can and should differ.
# Internal perspective of status quo
To fully understand the patch, we need to understand how we encode
source locations and how we serialize and deserialize them.
For source locations, we encode them as:
```
|
|
| _____ base offset of an imported module
|
|
|
|_____ base offset of another imported module
|
|
|
|
| ___ 0
```
As the diagram shows, we encode the local (unloaded) source locations
from 0 toward higher offsets, and we allocate the space for source
locations from the loaded modules from the high end toward 0. The source
locations from the loaded modules are then mapped into our source
location space according to the allocated offsets.
For example, for,
```
// a.cppm
export module a;
...
// b.cppm
export module b;
import a;
...
```
Assume the offset of a source location (let's call it `S`) in a.cppm is
45; we record the value `45` in the BMI `a.pcm`. Then in b.cppm, when we
import a, the source manager allocates a space for module 'a' (according
to the recorded number of source locations) as the base offset of module
'a' in the current source location space. Let's assume the allocated
base offset is 90 in this example. Then when we want to get the location
of `S` in the current source location space, we simply add `45` to `90`,
giving `135`. So the source location of `S` in module `b` is `135`.
And when we want to write module `b`, we also write the source location
of `S` as `135` directly in the BMI. And to clarify that the location
`S` comes from module `a`, we also record the base offset of module `a`,
90, in the BMI of `b`.
Then the problem comes. Since the base offset of module 'a' is computed
from the number of source locations in module 'a', the base offset of
module 'a' recorded in module 'b' changes every time the number of
source locations in module 'a' increases or decreases. In other words,
the contents of the BMI of `b` change every time the number of locations
in module 'a' changes. This is pretty sensitive, as almost every change
changes the number of locations. So this is the problem this patch wants
to solve.
Let's continue with the existing design to understand what's going on.
Another interesting case is:
```
// c.cppm
export module c;
import whatever;
import a;
import b;
...
```
In `c.cppm`, when we import `a`, we still need to allocate a base
location offset for it; let's say the value happens to be `200` this
time. Then when we reach the location `S` recorded in module `b`, we
need to translate it into the current source location space. The
solution is quite simple: we can get it by `135 + (200 - 90) = 245`. In
other words, the offset of a source location in the current module can
be computed as `Recorded Offset + Base Offset of its module file -
Recorded Base Offset`.
That is roughly how we handle the offsets of source locations in the
serializers.
# The high level design of current patch
At the abstract level, what we want to do is to remove the hardcoded
base offset of imported modules while retaining the ability to calculate
the source location in a new module unit. To achieve this, we need to be
able to find the module file owning a source location from the encoding
of the source location.
So in this patch, for each source location, we store the local offset
of the location and the module file index. For the above example, the
source location of `S` used to be recorded in `b.pcm` as `135` directly;
in the new design, it is recorded as `<1, 45>`. Here `1` stands for the
module file index of `a` in module `b`, and `45` is the offset of `S`
from the base offset of module `a`.
So the trade-off here is that, to make the BMI more independent, we
need to record more abstract information, and I feel that is worth it.
The recompilation problem of modules is really annoying and people still
complain about it. But if we can make this work (including stopping
other changes from propagating transitively), I think it may be a killer
feature for modules. And per @Bigcheese, this should be helpful for
clang explicit modules too.
On the benchmarking side, I tested this patch against
https://github.com/alibaba/async_simple/tree/CXX20Modules. There was no
significant change in compilation time. The size of the .pcm files grows
from 200M to 204M. I think the trade-off is pretty fair.
# Some low level details
I didn't use another slot to record the module file index. Instead, I
use the higher 32 bits of the existing source location encoding to store
that information. This design should be safe: we use `unsigned` to store
source locations in memory but `uint64_t` in serialization, and
`unsigned` is 32 bits wide on most platforms, so the bits now used for
the module file index were not used before. The new encoding is:
```
|-----------------------|-----------------------|
| A | B | C |
* A: 32 bit. The index of the module file in the module manager + 1. The +1
here is necessary since we wish 0 stands for the current module file.
* B: 31 bit. The offset of the source location to the module file containing it.
* C: The macro bit. We rotate it to the lowest bit so that we can save some
space in case the index of the module file is 0.
```
(B and C together are the existing raw encoding for source locations.)
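A rough sketch of how a reader could split the new 64-bit encoding
(illustrative only, not the actual serialization code):
```
#include <cstdint>
#include <utility>

// Upper 32 bits: module file index + 1 (0 means the current module file).
// Lower 32 bits: the existing raw encoding (offset plus the rotated macro bit).
std::pair<uint32_t, uint32_t> splitRawSourceLocation(uint64_t Raw) {
  return {static_cast<uint32_t>(Raw >> 32), static_cast<uint32_t>(Raw)};
}
```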
Another reason to reuse the same slot for the source location is to
reduce the impact of the patch, since there are a lot of places assuming
we can store and fetch a source location from a single slot. If I tried
to add another slot, a lot of code would break, and I don't feel it is
worth it.
Another impact of this decision is that the existing small
optimizations for encoding source locations may be invalidated. The key
to those optimizations is turning large values into small values so that
the VBR6 format can reduce the size. But if we put the module file index
into the higher bits, they may simply not work anymore. An example is
the `SourceLocationSequence` optimization.
This should only affect the size of on-disk .pcm files. I don't expect
it to impact the speed or memory use of compilations, and given my small
experiments above, I feel this trade-off is worth it.
# Correctness
The mental model for handling source location offsets is not that
complex, and I believe we can handle it by adding the module file index
to each stored source location.
On the practical side, since source locations are pretty sensitive, and
the patch passes all the in-tree tests as well as some small-scale
projects, I feel it should be correct.
# Future Plans
I'll continue to work on "no transitive decl change" and "no transitive
identifier change" (if it matters) to achieve the goal of stopping the
propagation of unnecessary changes. But all of this depends on this
patch, since, clearly, the source locations are the most sensitive
part.
---
The release notes and documentation will be added separately.
Close https://github.com/llvm/llvm-project/issues/75057
Previously, I incorrectly thought the diagnostic mappings were not
meaningful with modules. This problem got revealed by another change
recently. So this patch partially reverts the previous "optimization".
and "[NFC] [C++20] [Modules] Use new class CXX20ModulesGenerator to
generate module file for C++20 modules instead of PCHGenerator"
This reverts commit fb21343473e33e9a886b42d2fe95d1cec1cd0030 and commit
18268ac0f48d93c2bcddb69732761971669c09ab.
It looks like there are some problems with linking the compiler.
Previously we were re-using PCHGenerator to generate the module file
for C++20 modules, but this is slightly odd. This patch tries to use a
new class 'CXX20ModulesGenerator' to generate the module file for C++20
modules.
Introduce just the option definitions and support for their existence
at a few different points in the frontend. This will be followed soon by
functionality that uses them.
Reviewers: bcardosolopes, jansvoboda11, AaronBallman, erichkeane, MaskRay
Reviewed By: erichkeane
Pull Request: https://github.com/llvm/llvm-project/pull/89030
These macros are used by STL implementations to support implementation
of std::hardware_destructive_interference_size and
std::hardware_constructive_interference_size
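For example, the C++17 library feature these macros back is used like this
(where the standard library supports it):
```
#include <new>

// Pad to a destructive interference boundary to avoid false sharing.
struct alignas(std::hardware_destructive_interference_size) Counter {
  long Value;
};
```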
Fixes #60174
---------
Co-authored-by: Louis Dionne <ldionne.2@gmail.com>
The preprocessor define `__HLSL_ENABLE_16_BIT` should be set to 1 if
native 16-bit types are enabled and not set if they are not.
Previously we were setting the value to match the HLSL active language
version, and we had no test coverage verifying the value was set and
not set as expected.
Fixes #89787
Fixes https://github.com/llvm/llvm-project/issues/88522
This PR introduces a new diagnostic to report applying AST consumer
actions on LLVM IR.
---------
Signed-off-by: yronglin <yronglin777@gmail.com>
Discovered from
d86cc73bbf.
There is a potential issue with using DeclID in ASTUnit. ASTUnit may
record the declaration ID from ASTWriter, and after loading the
preamble, ASTUnit may consume the recorded declaration ID directly in
ExternalASTSource. This is not good. According to the design, every
local declaration ID consumed in ASTReader needs to be translated by
`ASTReader::getGlobalDeclID()`.
This will be problematic if we change the encoding of declaration IDs or
make the preamble work in more complex ways.
This patch tries to remove all direct uses of DeclID except for the
real low-level reading and writing. All uses of DeclID are converted to
LocalDeclID or GlobalDeclID. This helps increase readability and type
safety.
This patch tries to remove all direct uses of DeclID except for the
real low-level reading and writing. All uses of DeclID are converted to
LocalDeclID or GlobalDeclID. This helps increase readability and type
safety.
Previously, DeclID was defined in serialization/ASTBitCodes.h under the
clang::serialization namespace. However, DeclID is not used purely in
the serialization part; it is already widely used in the AST and all
around the clang project, via classes like `LazyDeclPtr` or calls to
`ExternalASTSource::GetExternalDecl()`. All such uses go through the raw
underlying type of `DeclID`, `uint32_t`, which is not great.
This patch moves the DeclID class family to a new header, `AST/DeclID.h`,
so that the whole project can use the wrapped classes `DeclID`,
`GlobalDeclID` and `LocalDeclID` instead of the raw underlying type.
This improves readability and type safety.
Fix issue https://github.com/llvm/llvm-project/issues/54624
Add front end flags -gtemplate-alias (also a cc1 flag) and -gno-template-alias
to enable/disable usage of the feature. It's enabled by default in the front
end for SCE debugger tuning only.
GCC emits DW_TAG_typedef for template aliases (as does Clang with this feature
disabled), and GDB and LLDB appear not to support DW_TAG_template_alias.
The -Xclang option -gsimple-template-names=mangled is treated the same as
=full, which is not a regression from current behaviour for template
aliases.
The current implementation omits defaulted arguments and as a consequence
also omits empty parameter packs that come after defaulted arguments. Again,
this isn't a regression as the DW_TAG_typedef name doesn't contain defaulted
arguments.
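For reference, the construct affected is an alias template such as
(illustrative names):
```
// With -gtemplate-alias, the debug info for WrapperAlias<int> is emitted as
// DW_TAG_template_alias rather than DW_TAG_typedef.
template <typename T>
struct Wrapper { T Value; };

template <typename T>
using WrapperAlias = Wrapper<T>;

WrapperAlias<int> W;
```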
LLVM support added in https://github.com/llvm/llvm-project/pull/88943
Added template-alias.cpp - Check the metadata construction & interaction with
-gsimple-template-names.
Added variadic-template-alias.cpp - Check template parameter packs work.
Added defaulted-template-alias.cpp - Check defaulted args (don't) work.
Modified debug-options.c - Check Clang generates correct cc1 flags.
Reduced BMI
Mitigate https://github.com/llvm/llvm-project/issues/61447
The root cause of the above problem is that when we write a
declaration, we need to look up all the redeclarations in the imported
modules, which is pretty slow if there are too many redeclarations in
different modules. This patch doesn't solve that problem.
What the patch mitigates is: when writing a named module, we no longer
write declarations from the GMF if they are unreferenced **in the
current module unit**. The difference here is that, if a declaration is
used in the imported modules, we used to emit it as an update; we
definitely want to avoid that after this patch.
For that reproducer in
https://github.com/llvm/llvm-project/issues/61447, it used to take 2.5s
to compile and now it only takes 0.49s to compile, which is a big win.
This is the driver part of
https://github.com/llvm/llvm-project/pull/75894.
This patch introduces '-fexperimental-modules-reduced-bmi' to enable
generating the reduced BMI.
This patch did:
- When `-fexperimental-modules-reduced-bmi` is specified but
`--precompile` is not specified for a module unit, we'll skip the
precompile phase to avoid unnecessary two-phase compilation phases. Then
if `-c` is specified, we will generate the reduced BMI in CodeGenAction
as a by-product.
- When `-fexperimental-modules-reduced-bmi` is specified and
`--precompile` is specified, we will generate the reduced BMI in
GenerateModuleInterfaceAction as a by-product.
- When `-fexperimental-modules-reduced-bmi` is specified for a
non-module unit, we don't do anything, nor do we try to emit a warning.
This is more user friendly, so that end users can test and experiment
with the feature without asking the build systems for help.
The core design idea is that users should be able to enable this easily
with the existing cmake mechanisms.
The future plan for the flag is:
- Add this to clang19 and make it opt-in for 1~2 releases. How long it
stays opt-in depends on the testing feedback.
- Then we can announce that the existing BMI generation may be
deprecated and suggest that people (end users or build systems) enable
this for 1~2 releases.
- Finally we will enable this by default. When that time comes, the term
`BMI` will refer to today's reduced BMI, and the existing BMI will only
be meaningful to build systems that prefer to support two-phase
compilation.
I'll send release notes and documentation in separate commits after
this gets landed.