1203 Commits

Author SHA1 Message Date
Pedro Lobo
98e747ba56
[clang] Use poison instead of undef as the placeholder when creating a new vector [NFC] (#117064)
Call `@llvm.vector.insert` with a `poison` vector when coercing a fixed
vector to a scalable vector with the same element type.
2024-12-02 09:00:39 +00:00
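For illustration, a minimal source-level sketch (mine, not from the patch) of code that takes this coercion path, assuming an AArch64 target with fixed-length SVE (e.g. `-msve-vector-bits=512`):

```c
#include <arm_sve.h>

// Hedged sketch: converting a fixed-length SVE vector to the sizeless
// type with the same element type goes through @llvm.vector.insert;
// after this patch the placeholder destination is poison, not undef.
typedef svint32_t fixed_int32 __attribute__((arm_sve_vector_bits(512)));

svint32_t to_scalable(fixed_int32 v) {
  return v;  // fixed -> scalable coercion, same element type
}
```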
Benjamin Maxwell
db6f627f3f
[clang][SME] Ignore flatten/clang::always_inline statements for callees with mismatched streaming attributes (#116391)
If `__attribute__((flatten))` is used on a function, or
`[[clang::always_inline]]` on a statement, don't inline any callees with
incompatible streaming attributes. Without this check, clang may produce
incorrect code when these attributes are used in code with streaming
functions.

Note: The docs for flatten say it can be ignored when inlining is
impossible: "causes calls within the attributed function to be inlined
unless it is impossible to do so".

Similarly, the (clang-only) `[[clang::always_inline]]` statement
attribute is more relaxed than the GNU `__attribute__((always_inline))`
(which says it should error if it can't inline), saying only "If a
statement is marked [[clang::always_inline]] and contains calls, the
compiler attempts to inline those calls." The docs also go on to show
an example of where `[[clang::always_inline]]` has no effect.
2024-11-26 14:26:34 +00:00
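A minimal sketch (mine, not from the patch) of the situation being guarded against, assuming an AArch64 target with SME enabled so that `__arm_streaming` is available:

```c
// Hedged sketch: flatten asks to inline the calls in the caller, but
// inlining a streaming callee into a non-streaming caller would be
// incorrect, so with this patch the request is ignored for this call.
void streaming_callee(void) __arm_streaming {}

__attribute__((flatten))
void normal_caller(void) {
  streaming_callee();  // stays a call: mismatched streaming attributes
}
```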
Kazu Hirata
e8a6624325
[CodeGen] Remove unused includes (NFC) (#116459)
Identified with misc-include-cleaner.
2024-11-16 07:37:13 -08:00
joaosaffran
481bce018e
Adding splitdouble HLSL function (#109331)
- Adding hlsl `splitdouble` intrinsics
- Adding DXIL lowering
- Adding SPIRV lowering
- Adding test

Fixes: #108901

---------

Co-authored-by: Joao Saffran <jderezende@microsoft.com>
2024-10-28 13:26:59 -07:00
Momchil Velikov
53f7f8ecca
[Clang][AArch64] Fix Pure Scalables Types argument passing and return (#112747)
Pure Scalable Types are defined in AAPCS64 here:

https://github.com/ARM-software/abi-aa/blob/main/aapcs64/aapcs64.rst#pure-scalable-types-psts

And should be passed according to Rule C.7 here:

https://github.com/ARM-software/abi-aa/blob/main/aapcs64/aapcs64.rst#682parameter-passing-rules

This part of the ABI is completely unimplemented in Clang; instead it
treats PSTs sometimes as HFAs/HVAs, sometimes as general composite types.

This patch implements the rules for passing PSTs by employing the
`CoerceAndExpand` method and extending it to:
* allow array types in the `coerceToType`; now only `[N x i8]` are
considered padding.
* allow a mismatch between the elements of the `coerceToType` and the
elements of the `unpaddedCoerceToType`; AArch64 uses this to map
fixed-length vector types to SVE vector types.

Correctly passing a PST argument needs a decision in Clang about whether
to pass it in memory or registers or, equivalently, whether to use the
`Indirect` or `Expand/CoerceAndExpand` method. It was considered
relatively hard (or not practically possible) to make that decision in
the AArch64 backend.
Hence this patch implements the register counting from AAPCS64 (cf.
`NSRN`, `NPRN`) to guide the Clang's decision.
2024-10-28 15:43:14 +00:00
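For reference, a sketch (mine) of what a Pure Scalable Type looks like in source, assuming a clang that accepts SVE types as aggregate members, as this patch presumes:

```c
#include <arm_sve.h>

// Hedged sketch of a PST: an aggregate whose members are all scalable
// vectors or predicates.
struct pst {
  svfloat32_t v0;  // each vector member consumes a Z register (NSRN)
  svfloat32_t v1;
  svbool_t p0;     // each predicate member consumes a P register (NPRN)
};

// Passed in Z/P registers per rule C.7 while NSRN/NPRN permit;
// otherwise the whole argument is passed indirectly, in memory.
void consume(struct pst x);
```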
Kiran
a96c14eeb8 [Clang] Always forward sret parameters to musttail calls
If a call using the musttail attribute returns its value through an
sret argument pointer, we must forward an incoming sret pointer to it,
instead of creating a new alloca. This is always possible because the
musttail attribute requires the caller and callee to have the same
argument and return types.
2024-10-25 09:34:08 +01:00
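A sketch (mine; `Big` is a hypothetical type large enough to be returned via sret):

```cpp
// Hedged sketch: the musttail call must reuse the caller's incoming sret
// pointer for its result instead of creating a new alloca.
struct Big { long v[8]; };  // returned indirectly through sret

Big callee(int x);

Big caller(int x) {
  [[clang::musttail]] return callee(x);  // forwards the incoming sret pointer
}
```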
Jay Foad
4dd55c567a
[clang] Use {} instead of std::nullopt to initialize empty ArrayRef (#109399)
Follow up to #109133.
2024-10-24 10:23:40 +01:00
Jonas Paulsson
14120227a3
Target ABI: improve call parameters extensions handling (#100757)
For the purpose of verifying proper argument extensions per the target's ABI,
introduce the NoExt attribute, which may be used by a target when neither sign
nor zero extension is required (e.g. with a struct in a register). The purpose of
doing so is to be able to verify that one of these attributes is always
present, thereby detecting cases where sign/zero extension is actually
missing.

As a first step, this patch has the verification step done for the SystemZ
backend only, but left off by default until all known issues have been
addressed.

Other targets/front-ends can now also add NoExt attribute where needed and do
this check in the backend.
2024-09-19 16:59:31 +02:00
Chris B
89fb8490a9
[HLSL] Implement output parameter (#101083)
HLSL output parameters are denoted with the `inout` and `out` keywords
in the function declaration. When an argument is bound to an output
parameter, a temporary value is constructed for the argument.

For `inout` parameters the argument is initialized via copy-initialization
from the argument lvalue expression to the parameter type. For `out`
parameters the argument is not initialized before the call.

In both cases on return of the function the temporary value is written
back to the argument lvalue expression through an implicit assignment
binary operator with casting as required.

This change introduces a new HLSLOutArgExpr AST node which represents
the output argument behavior. The OutArgExpr has three defined children:
- An OpaqueValueExpr of the argument lvalue expression.
- An OpaqueValueExpr of the copy-initialized parameter.
- A BinaryOpExpr assigning the first with the value of the second.

Fixes #87526

---------

Co-authored-by: Damyan Pepper <damyanp@microsoft.com>
Co-authored-by: John McCall <rjmccall@gmail.com>
2024-08-31 10:59:08 -05:00
Kiran
c50d11e6d9 Revert "[ARM] musttail fixes"
committed by accident, see #104795

This reverts commit a2088a24dad31ebe44c93751db17307fdbe1f0e2.
2024-08-27 11:17:17 +01:00
Kiran
ad468da038 Revert "Separate frontend changes, add debug directives, remove redundant stuff from tests"
This reverts commit 1a908c6be3317bbbac73e6a6fc52cabefbdebf7d.
2024-08-27 10:46:18 +01:00
Kiran
1a908c6be3 Separate frontend changes, add debug directives, remove redundant stuff from tests 2024-08-27 10:44:06 +01:00
Kiran
a2088a24da [ARM] musttail fixes
Backend:
- Caller and callee arguments no longer have to match, only to take up the same space, as they can be changed before the call
- Allowed tail calls if caller and callee both (or neither) use sret, whereas before it would be disallowed if either used sret
- Allowed tail calls if byval args are used
- Added debug trace for IsEligibleForTailCallOptimisation

Frontend (clang):
- Do not generate extra alloca if sret is used with musttail, as the space for the sret is allocated already

Change-Id: Ic7f246a7eca43c06874922d642d7dc44bdfc98ec
2024-08-27 10:44:06 +01:00
eddyz87
64e464349b
[BPF] introduce __attribute__((bpf_fastcall)) (#105417)
This commit introduces attribute bpf_fastcall to declare BPF functions
that do not clobber some of the caller-saved registers (R0-R5).

The idea is to generate code complying with the generic BPF ABI,
but allow a compatible Linux kernel to remove unnecessary spills and
fills of non-scratched registers (given some compiler assistance).

For such functions, register allocation is done as if caller-saved
registers were not clobbered, but the calls are later wrapped with
spill and fill patterns that are simple to recognize in the kernel.

For example for the following C code:

    #define __bpf_fastcall __attribute__((bpf_fastcall))

    void bar(void) __bpf_fastcall;
    void buz(long i, long j, long k);

    void foo(long i, long j, long k) {
      bar();
      buz(i, j, k);
    }

First allocate registers as if:

    foo:
      call bar    # note: no spills for i,j,k (r1,r2,r3)
      call buz
      exit

And later insert spills/fills in the peephole phase:

    foo:
      *(u64 *)(r10 - 8) = r1;  # Such call pattern is
      *(u64 *)(r10 - 16) = r2; # correct when used with
      *(u64 *)(r10 - 24) = r3; # old kernels.
      call bar
      r3 = *(u64 *)(r10 - 24); # But also allows new
      r2 = *(u64 *)(r10 - 16); # kernels to recognize the
      r1 = *(u64 *)(r10 - 8);  # pattern and remove spills/fills.
      call buz
      exit

The offsets for generated spills/fills are picked as minimal stack
offsets for the function. Allocated stack slots are not used for any
other purposes, in order to simplify in-kernel analysis.
2024-08-22 03:40:56 +03:00
Vassil Vassilev
6c62ad446b
[clang-repl] [codegen] Reduce the state in TBAA. NFC for static compilation. (#98138)
In incremental compilation clang works with multiple `llvm::Module`s.
Our current approach is to create a CodeGenModule entity for every new
module request (via StartModule). However, some of the state, such as the
mangle context, needs to be preserved to keep the original semantics in
the ever-growing TU.

Fixes: llvm/llvm-project#95581.

cc: @jeaye
2024-08-21 07:22:31 +02:00
Eduard Zingerman
5e8f4618ce Revert "[BPF] introduce __attribute__((bpf_fastcall)) (#101228)"
This reverts commit e9b2e16dc98345bb1b91b1a6dacb3cec85f49e31.
Reverting because of the test failure:
https://lab.llvm.org/buildbot/#/builders/187/builds/509
2024-08-19 11:30:27 -07:00
eddyz87
e9b2e16dc9
[BPF] introduce __attribute__((bpf_fastcall)) (#101228)
This commit introduces attribute bpf_fastcall to declare BPF functions
that do not clobber some of the caller-saved registers (R0-R5).

The idea is to generate code complying with the generic BPF ABI, but
allow a compatible Linux kernel to remove unnecessary spills and fills of
non-scratched registers (given some compiler assistance).

For such functions, register allocation is done as if caller-saved
registers were not clobbered, but the calls are later wrapped with spill
and fill patterns that are simple to recognize in the kernel.

For example for the following C code:

     #define __bpf_fastcall __attribute__((bpf_fastcall))

     void bar(void) __bpf_fastcall;
     void buz(long i, long j, long k);

     void foo(long i, long j, long k) {
       bar();
       buz(i, j, k);
     }

First allocate registers as if:

    foo:
      call bar    # note: no spills for i,j,k (r1,r2,r3)
      call buz
      exit

And later insert spills/fills in the peephole phase:

    foo:
      *(u64 *)(r10 - 8) = r1;  # Such call pattern is
      *(u64 *)(r10 - 16) = r2; # correct when used with
      *(u64 *)(r10 - 24) = r3; # old kernels.
      call bar
      r3 = *(u64 *)(r10 - 24); # But also allows new
      r2 = *(u64 *)(r10 - 16); # kernels to recognize the
      r1 = *(u64 *)(r10 - 8);  # pattern and remove spills/fills.
      call buz
      exit

The offsets for generated spills/fills are picked as minimal stack
offsets for the function. Allocated stack slots are not used for any
other purposes, in order to simplify in-kernel analysis.

Corresponding functionality has been merged in the Linux kernel as
[this](https://lore.kernel.org/bpf/172179364482.1919.9590705031832457529.git-patchwork-notify@kernel.org/)
patch set (the patch assumed that the `no_caller_saved_registers`
attribute would be used by LLVM; the naming does not matter for the kernel).
2024-08-19 19:49:11 +03:00
Daniel Kiss
9e9fa00dcb
[Arm][AArch64][Clang] Respect function's branch protection attributes. (#101978)
Default attributes are assigned to all functions according to the
command-line parameters. Some functions might have their own attributes,
and we need to set or remove attributes accordingly.
Tests are updated to cover these scenarios too.
2024-08-09 17:51:38 +02:00
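For illustration (my sketch; assumes an AArch64 target where the `target("branch-protection=...")` attribute is accepted):

```c
// Hedged sketch: built with e.g. -mbranch-protection=standard, the second
// function must have the default protection attributes removed, not
// inherited from the command line.
void uses_default(void) {}  // gets the command-line branch protection

__attribute__((target("branch-protection=none")))
void opts_out(void) {}      // per-function attribute overrides the default
```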
Jeremy Morse
92aec5192c
[DebugInfo][RemoveDIs] Use iterator-inserters in clang (#102006)
As part of the LLVM effort to eliminate debug-info intrinsics, we're
moving to a world where only iterators should be used to insert
instructions. This isn't a problem in clang when instructions get
generated before any debug-info is inserted, however we're planning on
deprecating and removing the instruction-pointer insertion routines.

Scatter some calls to getIterator in a few places, remove a
deref-then-addrof on another iterator, and add an overload for the
createLoadInstBefore utility. Some callers pass a null insertion
point, which we need to handle explicitly now.
2024-08-09 10:17:48 +01:00
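A sketch (mine, against the LLVM C++ API; the helper name is hypothetical) of the iterator-based insertion style this commit moves toward:

```cpp
#include "llvm/IR/Instructions.h"

// Hedged sketch: pass the insertion point as an iterator rather than as a
// raw Instruction*, the form the RemoveDIs work standardizes on.
llvm::LoadInst *insertReloadBefore(llvm::Instruction *InsertPt,
                                   llvm::Type *Ty, llvm::Value *Ptr) {
  return new llvm::LoadInst(Ty, Ptr, "reload", InsertPt->getIterator());
}
```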
Eli Friedman
1762e01cca
Fix codegen of consteval functions returning an empty class, and related issues (#93115)
If a class is empty, don't store it to memory: the store might overwrite
useful data. Similarly, if a class has tail padding that might overlap
other fields, don't store the tail padding to memory.

The problem here turned out to be a bit more general than I initially thought:
basically all uses of EmitAggregateStore were broken. Call lowering had
a method that did mostly the right thing, though: CreateCoercedStore.
Adapt CreateCoercedStore so it always does the conservatively right
thing, and use it for both calls and ConstantExpr.

Also, along the way, fix the "overlap" bit in AggValueSlot: the bit was
set incorrectly for empty classes in some cases.

Fixes #93040.
2024-08-01 16:18:20 -07:00
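A sketch (mine, with hypothetical types) of the kind of code affected:

```cpp
// Hedged sketch: storing the "value" of an empty class must not write any
// bytes, or it can clobber data sharing those bytes.
struct Empty {};
consteval Empty make_empty() { return {}; }

struct Overlap {
  [[no_unique_address]] Empty e;  // may share its byte with `tag`
  char tag = 'x';
};

char f() {
  Overlap o;
  o.e = make_empty();  // must not overwrite `tag`
  return o.tag;        // expected: 'x'
}
```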
darkbuck
fa84297002
[clang][CUDA] Add 'noconvergent' function and statement attribute
- For languages following the SPMD/SIMT programming model, functions and
  call sites are marked 'convergent' by default. 'noconvergent' is added
  in this patch to allow developers to remove that 'convergent'
  attribute when it's safe.

Reviewers:
nhaehnle, Sirraide, yxsamliu, Artem-B, ilovepi, jayfoad, ssahasra, arsenm

Reviewed By: arsenm

Pull Request: https://github.com/llvm/llvm-project/pull/100637
2024-07-31 11:30:48 -04:00
Qiu Chaofan
20957d2091
[AIX] Add -msave-reg-params to save arguments to stack (#97524)
In the PowerPC ABI, the first few arguments are passed in registers,
but their places in the parameter save area are reserved; arguments
passed in memory go after the reserved locations.

For debugging purposes, we may want to save copies of the pass-by-register
arguments to the correct places on the stack. The new option achieves this
by adding a new function-level attribute and making the argument-lowering
code aware of it.
2024-07-24 20:58:37 +08:00
Oliver Hunt
4dcd91aea3
[PAC] Implement authentication for C++ member function pointers (#99576)
Introduces type-based signing of member function pointers. To support
this discrimination scheme we no longer emit member function pointers to
virtual methods as indices into a vtable, but migrate to using thunks.
This does mean member function pointers are no longer necessarily
directly comparable; however, as such comparisons are UB, this is
acceptable.

We derive the discriminator from the C++ mangling of the type of the
pointer being authenticated.

Co-Authored-By: Akira Hatanaka ahatanaka@apple.com
Co-Authored-By: John McCall rjmccall@apple.com
Co-authored-by: Ahmed Bougacha <ahmed@bougacha.org>
2024-07-22 18:29:06 -07:00
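For illustration (my sketch):

```cpp
// Hedged sketch: with pointer authentication, &S::vf is emitted as a
// signed pointer to a thunk rather than a bare vtable index, and the
// discriminator is derived from the mangled pointer type void (S::*)().
struct S {
  virtual void vf();
  void nf();
};

void (S::*to_virtual)() = &S::vf;  // thunk; signed
void (S::*to_direct)() = &S::nf;   // direct function pointer; signed
```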
Mariya Podchishchaeva
9ad72df55c
[clang] Use different memory layout type for _BitInt(N) in LLVM IR (#91364)
There are two problems with _BitInt prior to this patch:
1. For at least some values of N, we cannot use LLVM's iN for the type
of struct elements, array elements, allocas, global variables, and so
on, because the LLVM layout for that type does not match the high-level
layout of _BitInt(N).
Example: currently, for i128:128 targets, a correct implementation is
possible either for __int128 or for _BitInt(129+) with lowering to iN,
but not both, since we now have a correct implementation of __int128 in
place after a21abc7.
When this happens, opaque [M x i8] types are used, where M =
sizeof(_BitInt(N)).
2. LLVM doesn't guarantee any particular extension behavior for integer
types that aren't a multiple of 8. For this reason, all _BitInt types
now have an in-memory representation that is a whole number of bytes.
For example, _BitInt(17) will now have memory layout type i32.

This patch also introduces the concept of a load/store type and adds an API
to CodeGenTypes that returns the IR type that should be used for load and
store operations. This is particularly useful for the case when a
_BitInt ends up having an array of bytes as its memory layout type. For
_BitInt(N), let M = sizeof(_BitInt(N)), and let BITS = M * 8. Loads and
stores of iBITS would both (1) produce far better code from the backends
and (2) be far more optimizable by IR passes than loads and stores of [M
x i8].

Fixes https://github.com/llvm/llvm-project/issues/85139
Fixes https://github.com/llvm/llvm-project/issues/83419

---------

Co-authored-by: John McCall <rjmccall@gmail.com>
2024-07-15 09:40:39 +02:00
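A sketch (mine) of the new rule; sizes assume a typical 64-bit target:

```c
// Hedged sketch: _BitInt(17) now occupies a whole number of bytes in
// memory (layout type i32 here); loads and stores use i32 while
// arithmetic is still done on i17.
_BitInt(17) counter;

int layout_bytes(void) {
  return (int)sizeof(_BitInt(17));  // 4 on typical 64-bit targets
}
```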
Daniel Kiss
7d1b6b2c32
[Clang][ARM][AArch64] Add branch protection attributes to the defaults. (#83277)
These attributes are no longer inherited from the module flags and
therefore need to be added for synthetic functions.
2024-07-12 20:52:56 +02:00
Chen Zheng
afd0e6d06b
[PowerPC] Diagnose musttail instead of crash inside backend (#93267)
musttail often cannot be generated on PPC targets because, when calling
a function defined in another module, PPC needs to restore the TOC
pointer. To restore the TOC pointer, the compiler needs to emit a nop
after the call to let the linker generate code to restore the TOC
pointer. A tail call cannot produce the expected call sequence for this
case.

To avoid the crash inside the compiler backend, a diagnostic is added in
the frontend.

Fixes #63214
2024-07-08 09:30:01 +08:00
Ahmed Bougacha
e23250ecb7
[clang] Implement function pointer signing and authenticated function calls (#93906)
The functions are currently always signed/authenticated with zero
discriminator.

Co-Authored-By: John McCall <rjmccall@apple.com>
2024-06-21 10:20:15 -07:00
Mariya Podchishchaeva
6d973b4548
[clang][CodeGen] Return RValue from EmitVAArg (#94635)
This should simplify handling of resulting value by the callers.
2024-06-17 13:29:20 +02:00
beetrees
db3a47c810
Fix silent truncation of inline ASM srcloc cookie when going through a DiagnosticInfoSrcMgr (#84559)
The size of the inline ASM `srcloc` cookie was changed from 32 bits to
64 bits in [D105491](https://reviews.llvm.org/D105491). However, that
commit only updated the size of the cookie in `DiagnosticInfoInlineAsm`,
meaning that inline ASM diagnostics that are instead represented with a
`DiagnosticInfoSrcMgr` have their cookies truncated to 32 bits. This PR
replaces the remaining uses of `unsigned` to represent the cookie with
`uint64_t`, allowing the cookie to make it all the way to the diagnostic
handler without being truncated.
2024-06-14 15:05:57 +01:00
Oliver Stannard
1a5239251e
[ARM] r11 is reserved when using -mframe-chain=aapcs (#86951)
When using the -mframe-chain=aapcs or -mframe-chain=aapcs-leaf options,
we cannot use r11 as an allocatable register, even if
-fomit-frame-pointer is also used. This is so that r11 will always point
to a valid frame record, even if we don't create one in every function.
2024-06-07 10:58:10 +01:00
Lukacma
375761bcab
[Clang] Emit lifetime markers for non-aggregate temporary allocas (#90849)
This patch extends https://reviews.llvm.org/D68611 and emits lifetime
markers for temporary allocas of non-aggregate types as well.
2024-05-21 11:07:29 +01:00
Ahmed Bougacha
3575d23ca8
[clang][CodeGen] Remove unused LValue::getAddress CGF arg. (#92465)
This is in effect a revert of f139ae3d93797, as we have since gained a
more sophisticated way of doing extra IRGen with the addition of
RawAddress in #86923.
2024-05-20 10:23:04 -07:00
Nathan Gauër
e08f1fda75
[clang][SPIR-V] Always add convergence intrinsics (#88918)
PR #80680 added bits in the codegen to lazily add convergence intrinsics
when required. This logic relied on the LoopStack. The issue is that when
parsing the condition, the LoopStack doesn't yet reflect the correct
values, as expected since we are not yet in the loop.

However, convergence tokens should sometimes already be available. The
solution which seemed the simplest is to greedily generate the tokens
when we generate SPIR-V.

Fixes #88144

---------

Signed-off-by: Nathan Gauër <brioche@google.com>
2024-05-14 17:00:40 +02:00
ostannard
1fd196c8df
[AArch64] Diagnose more functions when FP not enabled (#90832)
When using a hard-float ABI for a target without FP registers, it's not
possible to correctly generate code for functions with arguments which
must be passed in floating-point registers. This is diagnosed in CodeGen
instead of Sema, to more closely match GCC's behaviour around inline
functions, which is relied on by the Linux kernel.

Previously, this only checked function signatures as they were
code-generated, but this missed some cases:
* Calls to functions not defined in this translation unit.
* Calls through function pointers.
* Calls to variadic functions, where the variadic arguments have a
floating-point type.

This adds checks to function calls, as well as definitions, so that
these cases are correctly diagnosed.
2024-05-07 09:17:05 +01:00
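A sketch (mine) covering the three newly diagnosed cases listed above; assumes a hard-float ABI on a target built without FP registers:

```c
// Hedged sketch: each call below needs FP argument registers the target
// lacks, so each is now diagnosed at the call, not only at definitions.
double external_fn(double);     // defined in another translation unit
int printf(const char *, ...);  // variadic

void f(double d, double (*fp)(double)) {
  external_fn(d);   // call to a function not defined in this TU
  fp(d);            // call through a function pointer
  printf("%f", d);  // variadic call with a floating-point argument
}
```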
Utkarsh Saxena
d72146f471
Re-apply "Emit missing cleanups for stmt-expr" and other commits (#89154)
Latest diff:
f1ab4c2677..adf9bc902b

We address two additional bugs here: 

### Problem 1: Deactivated normal cleanup still runs, leading to
double-free
Consider the following:
```cpp

struct A { };

struct B { B(const A&); };

struct S {
  A a;
  B b;
};

int AcceptS(S s);

void Accept2(int x, int y);

void Test() {
  Accept2(AcceptS({.a = A{}, .b = A{}}), ({ return; 0; }));
}
```
We add cleanups as follows:
1. push dtor for field `S::a`
2. push dtor for temp `A{}` (used by `B(const A&)` in `.b = A{}`)
3. push dtor for field `S::b`
4. Deactivate 3 `S::b` -> This pops the cleanup.
5. Deactivate 1 `S::a` -> Does not pop the cleanup as *2* is top. Should
create _active flag_!!
6. push dtor for `~S()`.
7. ...

It is important to deactivate **5** using active flags. Without the
active flags, the `return` would fall through it and run both `~S()`
and the dtor of `S::a`, leading to a **double free** of `A`.
In this patch, we unconditionally emit active flags while deactivating
normal cleanups. These flags are deleted later by the `AllocaTracker` if
the cleanup is not emitted.

### Problem 2: Missing cleanup for conditional lifetime extension
We push 2 cleanups for each lifetime-extended entity. The first cleanup is
useful if we exit from the middle of the expression (stmt-expr/coro
suspensions). This is deactivated after full-expr, and a new cleanup is
pushed, extending the lifetime of the temporaries (to the scope of the
reference being initialized).
If this lifetime extension happens to be conditional, then we use active
flags to remember whether the branch was taken and if the object was
initialized.
Previously, we used a **single** active flag, which was used by both
cleanups. This is wrong because the first cleanup will be forced to
deactivate after the full-expr and therefore this **active** flag will
always be **inactive**. The dtor for the lifetime extended entity would
not run as it always sees an **inactive** flag.

In this patch, we solve this using two separate active flags for both
cleanups. Both of them are activated if the conditional branch is taken,
but only one of them is deactivated after the full-expr.

---

Fixes https://github.com/llvm/llvm-project/issues/63818
Fixes https://github.com/llvm/llvm-project/issues/88478

---

Previous PR logs:
1. https://github.com/llvm/llvm-project/pull/85398
2. https://github.com/llvm/llvm-project/pull/88670
3. https://github.com/llvm/llvm-project/pull/88751
4. https://github.com/llvm/llvm-project/pull/88884
2024-04-29 12:33:46 +02:00
Hugo Melder
3dcd2cca77
Fix Objective-C++ Sret of non-trivial data types on Windows ARM64 (#88671)
Linked to https://github.com/gnustep/libobjc2/pull/289.

More information can be found in issue: #88273.

My solution involves creating a new message-send function for this
calling convention when targeting MSVC. Additional information is
available in the libobjc2 pull request.

I am unsure whether we should check for a runtime version where
objc_msgSend_stret2_np is guaranteed to be present or leave it as is,
considering it remains a critical bug. What are your thoughts about this
@davidchisnall?
2024-04-25 19:51:52 +01:00
Timm Baeder
3d56ea05b6
[clang][NFC] Fix FieldDecl::isUnnamedBitfield() capitalization (#89048)
We always capitalize bitfield as "BitField".
2024-04-18 07:39:29 +02:00
Harald van Dijk
60de56c743
[ValueTracking] Restore isKnownNonZero parameter order. (#88873)
Prior to #85863, the required parameters of llvm::isKnownNonZero were
Value and DataLayout. After, they are Value, Depth, and SimplifyQuery,
where SimplifyQuery is implicitly constructible from DataLayout. The
change to move Depth before SimplifyQuery needed callers to be updated
unnecessarily, and as commented in #85863, we actually want Depth to be
after SimplifyQuery anyway so that it can be defaulted and the caller
does not need to specify it.
2024-04-16 15:21:09 +01:00
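The resulting call shape, as a sketch (mine) against the LLVM API:

```cpp
#include "llvm/Analysis/ValueTracking.h"

// Hedged sketch: SimplifyQuery directly follows Value again, and Depth
// trails with a default, so typical callers need not spell it.
bool nonZero(const llvm::Value *V, const llvm::SimplifyQuery &Q) {
  return llvm::isKnownNonZero(V, Q);  // Depth defaults to 0
}
```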
Utkarsh Saxena
9d8be24087
Revert "[codegen] Emit missing cleanups for stmt-expr and coro suspensions" and related commits (#88884)
The original change caused widespread breakages in msan/ubsan tests and
caused `use-after-free` errors. Most likely we are adding more cleanups than
necessary.
2024-04-16 15:30:32 +02:00
Utkarsh Saxena
5a46123ddf
Fix missing dtor in function calls accepting trivial ABI structs (#88751)
Fixes https://github.com/llvm/llvm-project/issues/88478

Promoting the `EHCleanup` to `NormalAndEHCleanup` in `EmitCallArgs`
surfaced another bug with deactivation of normal cleanups. Here we
missed emitting CPP scope ends for deactivated normal cleanups. This
patch also fixes that bug.

We missed emitting CPP scope ends because we remove the `fallthrough`
(which clears the insertion point) before deactivating normal cleanups.
This is to make the emitted "normal" cleanup code unreachable. But we
still need to emit CPP scope ends in the original basic block even for a
deactivated normal cleanup.
(This worked correctly before, when we did not remove the `fallthrough`
for `EHCleanup`s.)
2024-04-16 11:01:03 +02:00
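For context, a sketch (mine) of the trivial-ABI shape involved:

```cpp
// Hedged sketch: with [[clang::trivial_abi]], the argument is passed in
// registers and destroyed inside the callee; the missed caller-side
// cleanup bookkeeping for the argument temporary is what this fixes.
struct [[clang::trivial_abi]] Handle {
  int *fd;
  ~Handle();  // non-trivial dtor, yet the type is passed by value
};

void take(Handle h);  // h's destructor runs in the callee
```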
Yingwei Zheng
e0a628715a
[ValueTracking] Convert isKnownNonZero to use SimplifyQuery (#85863)
This patch converts `isKnownNonZero` to use SimplifyQuery. Then we can
use the context information from `DomCondCache`.

Fixes https://github.com/llvm/llvm-project/issues/85823.
Alive2: https://alive2.llvm.org/ce/z/QUvHVj
2024-04-12 23:47:20 +08:00
Eli Friedman
71097e9271
[ARM64EC] Add support for parsing __vectorcall (#87725)
MSVC doesn't support generating __vectorcall calls in Arm64EC mode, but
it does treat it as a distinct type. The Microsoft STL depends on this
functionality. (Not sure if this is intentional.) Add support for
parsing it the same way as MSVC, and add some checks to ensure we don't try
to actually generate code.

The error handling in CodeGen is ugly, but I can't think of a better way
to do it.
2024-04-09 19:53:56 -07:00
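For illustration (my sketch; assumes `--target=arm64ec-windows-msvc`):

```cpp
// Hedged sketch: the calling convention parses and produces a distinct
// function type, but actually emitting such a call is rejected in
// CodeGen on Arm64EC.
void __vectorcall vc(float);

using VectorCallFn = void(__vectorcall *)(float);  // distinct pointer type
// Calling vc(1.0f) would be diagnosed when clang tries to generate code.
```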
Sam McCall
7ef602b58c
Reapply "[clang][nullability] allow _Nonnull etc on nullable class types (#82705)" (#87325)
This reverts commit 28760b63bbf9e267713957105a8d17091fb0d20e.

The last commit was missing the new testcase, now fixed.
2024-04-02 13:48:45 +02:00
Chris B
9434c08347
[HLSL] Implement array temporary support (#79382)
HLSL constant sized array function parameters do not decay to pointers.
Instead constant sized array types are preserved as unique types for
overload resolution, template instantiation and name mangling.

This implements the change by adding a new `ArrayParameterType` which
represents a non-decaying `ConstantArrayType`. The new type behaves the
same as `ConstantArrayType` except that it does not decay to a pointer.

Values of `ConstantArrayType` in HLSL decay during overload resolution
via a new `HLSLArrayRValue` cast to `ArrayParameterType`.

`ArrayParameterType` values are passed indirectly by value to functions
in IR generation, resulting in callee-generated memcpy instructions.

The behavior of HLSL function calls is documented in the [draft language
specification](https://microsoft.github.io/hlsl-specs/specs/hlsl.pdf)
under the Expr.Post.Call heading.

Additionally the design of this implementation approach is documented in
[Clang's
documentation](https://clang.llvm.org/docs/HLSL/FunctionCalls.html)

Resolves #70123
2024-04-01 12:10:10 -05:00
dyung
28760b63bb
Revert "Reapply "[clang][nullability] allow _Nonnull etc on nullable class types (#82705)"" (#87041)
This reverts commit bbbcc1d99d08855069f4501c896c43a6d4d7b598.

This change is causing the following build bots to fail due to a missing
header file:
- https://lab.llvm.org/buildbot/#/builders/188/builds/43765
- https://lab.llvm.org/buildbot/#/builders/176/builds/9428
- https://lab.llvm.org/buildbot/#/builders/187/builds/14696
- https://lab.llvm.org/buildbot/#/builders/186/builds/15551
- https://lab.llvm.org/buildbot/#/builders/182/builds/9413
- https://lab.llvm.org/buildbot/#/builders/245/builds/22507
- https://lab.llvm.org/buildbot/#/builders/258/builds/16026
- https://lab.llvm.org/buildbot/#/builders/249/builds/17221
- https://lab.llvm.org/buildbot/#/builders/38/builds/18566
- https://lab.llvm.org/buildbot/#/builders/214/builds/11735
- https://lab.llvm.org/buildbot/#/builders/231/builds/21947
- https://lab.llvm.org/buildbot/#/builders/230/builds/26675
- https://lab.llvm.org/buildbot/#/builders/57/builds/33922
- https://lab.llvm.org/buildbot/#/builders/124/builds/10311
- https://lab.llvm.org/buildbot/#/builders/109/builds/86173
- https://lab.llvm.org/buildbot/#/builders/280/builds/1043
- https://lab.llvm.org/buildbot/#/builders/283/builds/440
- https://lab.llvm.org/buildbot/#/builders/247/builds/16034
- https://lab.llvm.org/buildbot/#/builders/139/builds/62423
- https://lab.llvm.org/buildbot/#/builders/216/builds/36718
- https://lab.llvm.org/buildbot/#/builders/259/builds/2039
- https://lab.llvm.org/buildbot/#/builders/36/builds/44091
- https://lab.llvm.org/buildbot/#/builders/272/builds/12629
- https://lab.llvm.org/buildbot/#/builders/271/builds/6020
- https://lab.llvm.org/buildbot/#/builders/236/builds/10319
2024-03-29 00:50:11 -07:00
Sam McCall
bbbcc1d99d Reapply "[clang][nullability] allow _Nonnull etc on nullable class types (#82705)"
This reverts commit ca4c4a6758d184f209cb5d88ef42ecc011b11642.

This was intended not to introduce new consistency diagnostics for
smart pointer types, but failed to ignore sugar around types when
detecting this.
Fixed and test added.
2024-03-28 23:57:09 +01:00
Nathan Gauër
0f61051f54
[clang][HLSL][SPIR-V] Add convergence intrinsics (#80680)
HLSL has wave operations and other kinds of functions which require the
control flow either to be converged, or to respect certain constraints
as to where and how to re-converge.

At the HLSL level, the convergence is mostly obvious: the control flow
is expected to re-converge at the end of a scope.
Once translated to IR, HLSL scopes disappear. This means we need a way to
communicate convergence restrictions down to the backend.

For this, the SPIR-V backend uses convergence intrinsics. So this commit
adds some code to generate convergence intrinsics when required.

---------

Signed-off-by: Nathan Gauër <brioche@google.com>
2024-03-28 17:18:05 +01:00
Akira Hatanaka
84780af4b0
[CodeGen][arm64e] Add methods and data members to Address, which are needed to authenticate signed pointers (#86923)
To authenticate pointers, CodeGen needs access to the key and
discriminators that were used to sign the pointer. That information is
sometimes known from the context, but not always, which is why `Address`
needs to hold that information.

This patch adds methods and data members to `Address`, which will be
needed in subsequent patches to authenticate signed pointers, and uses
the newly added methods throughout CodeGen. Although this patch isn't
strictly NFC as it causes CodeGen to use different code paths in some
cases (e.g., `mergeAddressesInConditionalExpr`), it doesn't cause any
changes in functionality as it doesn't add any information needed for
authentication.

In addition to the changes mentioned above, this patch introduces class
`RawAddress`, which contains a pointer that we know is unsigned, and
adds several new functions for creating `Address` and `LValue` objects.

This reapplies d9a685a9dd589486e882b722e513ee7b8c84870c, which was
reverted because it broke ubsan bots. There seems to be a bug in
coroutine code-gen, which is causing EmitTypeCheck to use the wrong
alignment. For now, pass alignment zero to EmitTypeCheck so that it can
compute the correct alignment based on the passed type (see function
EmitCXXMemberOrOperatorMemberCallExpr).
2024-03-28 06:54:36 -07:00
Akira Hatanaka
f75eebab88
Revert "[CodeGen][arm64e] Add methods and data members to Address, which are needed to authenticate signed pointers (#86721)" (#86898)
This reverts commit d9a685a9dd589486e882b722e513ee7b8c84870c.

The commit broke ubsan bots.
2024-03-27 18:14:04 -07:00
Akira Hatanaka
d9a685a9dd
[CodeGen][arm64e] Add methods and data members to Address, which are needed to authenticate signed pointers (#86721)
To authenticate pointers, CodeGen needs access to the key and
discriminators that were used to sign the pointer. That information is
sometimes known from the context, but not always, which is why `Address`
needs to hold that information.

This patch adds methods and data members to `Address`, which will be
needed in subsequent patches to authenticate signed pointers, and uses
the newly added methods throughout CodeGen. Although this patch isn't
strictly NFC as it causes CodeGen to use different code paths in some
cases (e.g., `mergeAddressesInConditionalExpr`), it doesn't cause any
changes in functionality as it doesn't add any information needed for
authentication.

In addition to the changes mentioned above, this patch introduces class
`RawAddress`, which contains a pointer that we know is unsigned, and
adds several new functions for creating `Address` and `LValue` objects.

This reapplies 8bd1f9116aab879183f34707e6d21c7051d083b6. The commit
broke msan bots because LValue::IsKnownNonNull was uninitialized.
2024-03-27 12:24:49 -07:00