With gcc 15 we end up emitting a reference to the
std::__glibcxx_assert_fail function because of this change:
361d230fd7
combined with assertion checks in the std::atomic implementation.
This reference is undefined with dfsan, causing the test to fail. Fix it
by defining the macro that disables assertions.
Pull Request: https://github.com/llvm/llvm-project/pull/153873
This makes the tests pass on my AArch64 machine with 16K pages.
Not sure why some of the AArch64-specific test failures don't seem to
occur on sanitizer-aarch64-linux. I could also reproduce them by running
buildbot_cmake.sh on my machine.
Pull Request: https://github.com/llvm/llvm-project/pull/153860
For x86, the halt instruction is defined as a terminator instruction.
When building the CFG, the instruction sequence following the hlt
instruction is treated as an independent MBB. Since there is no jump
information, the predecessor of this MBB cannot be identified, and it is
considered an unreachable MBB that will be removed.
With this fix, the instruction sequences before and after hlt are no
longer placed in different blocks.
…210)"
This reverts commit 9a14b1d254a43dc0d4445c3ffa3d393bca007ba3.
Revert "RuntimeLibcalls: Return StringRef for libcall names (#153209)"
This reverts commit cb1228fbd535b8f9fe78505a15292b0ba23b17de.
Revert "TableGen: Emit statically generated hash table for runtime
libcalls (#150192)"
This reverts commit 769a9058c8d04fc920994f6a5bbb03c8a4fbcd05.
Reverted three changes because of a CMake error while building llvm-nm
as reported in the following PR:
https://github.com/llvm/llvm-project/pull/150192#issuecomment-3192223073
Setting LLVM_LIT_ARGS to include --quiet and then running check-mlir in
a standard checkout will otherwise cause test failures here because
LLVM_LIT_ARGS gets propagated into this project.
If the copyable schedule data is created and the user is used several
times in the user node, there is no need to count the same data for the
same user several times; it should be included only once.
Fixes #153754
We just need to use a BinOpFrag to share the patterns. This also moves
UABDL to where it belongs alongside similar instructions, and removes some
patterns that are now handled by abd nodes. This is mostly NFC except
for GISel, which will catch back up when it handles abd nodes in the
same way.
https://github.com/llvm/llvm-project/pull/152131 added a few tests that
depend on `mlir/test/Target/Wasm/inputs/*`, e.g.
`mlir/test/Target/Wasm/import.mlir` reads `inputs/import.yaml.wasm`.
These inputs should be included as data dependencies.
Update printf format string to match argument list
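As a generic illustration of this class of fix (the actual format string
and arguments from the patch are not shown here), the conversion
specifier must match the type of the corresponding argument:
```cpp
#include <cstddef>
#include <cstdio>

int main() {
  std::size_t n = 42;
  // Mismatched: %d expects an int, but n is a size_t.
  // std::printf("count = %d\n", n);
  // Matched: %zu is the correct specifier for size_t.
  std::printf("count = %zu\n", n);
  return 0;
}
```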
---------
Co-authored-by: Joachim <protze@rz.rwth-aachen.de>
Co-authored-by: Joachim Jenke <jenke@itc.rwth-aachen.de>
The current implementation of
CPlusPlusLanguage::SymbolNameFitsToLanguage will return true if the
symbol is mangled for any language that lldb knows about.
Now that "vibe coding" is a thing, ignore the documentation artifacts
that coding assistants, like Claude and Gemini, use to retain coding
workflows and other metadata.
This PR reapplies https://github.com/llvm/llvm-project/pull/149461
In the original `combineVectorSizedSetCCEquality`, the result of setcc
is being negated by returning setcc with the same cond code, leading to
wrong logic.
For example, with
```llvm
%cmp_16 = call i32 @memcmp(ptr %a, ptr %b, i32 16)
%res = icmp eq i32 %cmp_16, 0
```
the original PR produces all_true and then also compares the result
equal to 0 (using the same SETEQ in the returning setcc), meaning that
semantically it effectively implements icmp ne.
Instead, the PR should have used SETNE in the returning setcc: this way,
all_true returns 1, which is then compared not-equal to 0, which is
equivalent to icmp eq.
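A minimal sketch of the underlying boolean reasoning (not the
WebAssembly lowering itself): an all_true over the lane-wise equality of
two 16-byte values is 1 exactly when memcmp returns 0, so the all_true
result must be compared not-equal to 0 to implement `icmp eq`:
```cpp
#include <cassert>
#include <cstring>

// Hypothetical helper: returns 1 when every byte lane of a and b is
// equal, mirroring an all_true over a lane-wise compare.
static int allTrueLaneEq(const unsigned char *a, const unsigned char *b) {
  for (int i = 0; i < 16; ++i)
    if (a[i] != b[i])
      return 0;
  return 1;
}

int main() {
  unsigned char a[16] = {1, 2, 3};
  unsigned char b[16] = {1, 2, 3};
  // icmp eq (memcmp == 0) corresponds to (all_true != 0), i.e. SETNE.
  assert((std::memcmp(a, b, 16) == 0) == (allTrueLaneEq(a, b) != 0));
  b[5] = 9;
  assert((std::memcmp(a, b, 16) == 0) == (allTrueLaneEq(a, b) != 0));
  return 0;
}
```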
The purpose of this fence is to ensure that any `dataSubmit`s inserted
into a queue before a `dataFence` finish before any `dataSubmit`s
inserted after it begin.
This is a no-op for most queues, since they are in-order, and by design
any operations inserted into them occur in order.
But the interface is supposed to be functional for out-of-order queues.
The addition of the interface means that any operations that rely on
such ordering (like ATTACH map-type support in #149036) can invoke it,
without worrying about whether the underlying queue is in-order or
out-of-order.
Once a plugin supports out-of-order queues, the plugin can implement
this function, without requiring any change at the libomptarget level.
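A minimal sketch of the intended semantics, using a hypothetical
out-of-order queue (the `Queue`, `submit`, and `fence` names below are
invented for illustration and are not the libomptarget interface):
```cpp
#include <future>
#include <vector>

struct Queue {
  std::vector<std::future<void>> InFlight;

  // Launch work that may run out of order relative to other submissions.
  template <class Fn> void submit(Fn F) {
    InFlight.push_back(std::async(std::launch::async, F));
  }

  // Fence: everything submitted before this point must finish before
  // anything submitted afterwards is allowed to start.
  void fence() {
    for (auto &F : InFlight)
      F.wait();
    InFlight.clear();
  }
};

int main() {
  Queue Q;
  Q.submit([] { /* e.g. a dataSubmit before the fence */ });
  Q.fence(); // everything above has finished once this returns
  Q.submit([] { /* a dataSubmit after the fence */ });
  Q.fence(); // drain before the futures are destroyed
  return 0;
}
```
For an in-order queue the fence can simply be a no-op, matching the
description above.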
---------
Co-authored-by: Alex Duran <alejandro.duran@intel.com>
If the copyable schedule data is created and the user is used several
times in the user node, there is no need to count the same data for the
same user several times; it should be included only once.
Fixes #153754
The 'firstprivate' clause requires that we do a 'copy' operation, so
this patch creates some AST nodes from which we can generate the copy
operation, including a 'temporary' and array init. For the most part
this is pretty similar to what 'private' does, other than the fact that
the source is copy-initialized (not default-initialized!) and that
there is a temporary from which to copy.
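A conceptual sketch of the semantics being modeled (standard OpenACC
source, not the generated AST): with 'firstprivate' each private copy is
copy-initialized from the original value, whereas 'private' would leave
it default-initialized:
```cpp
void saxpy(int N, float *X, float *Y) {
  float Scale = 2.0f;
  // Each gang/worker gets its own copy of Scale, initialized to 2.0f.
  // With private(Scale) instead, the copies would start uninitialized.
#pragma acc parallel loop firstprivate(Scale)
  for (int I = 0; I < N; ++I)
    Y[I] += Scale * X[I];
}
```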
---------
Co-authored-by: Andy Kaylor <akaylor@nvidia.com>
GFX1250 SPG says: S_GETREG_B32 does not wait for idle before executing.
The user must S_WAIT_ALU 0 before S_GETREG_B32 on:
STATUS, STATE_PRIV, EXCP_FLAG_PRIV, or EXCP_FLAG_USER.
Fixes #153012
As we tolerate unfoldable constant expressions in `scalarizeOpOrCmp`, we
may fold
```llvm
define void @bug(ptr %ptr1, ptr %ptr2, i64 %idx) #0 {
entry:
  %158 = insertelement <2 x i64> <i64 5, i64 ptrtoint (ptr @val to i64)>, i64 %idx, i32 0
  %159 = or disjoint <2 x i64> splat (i64 2), %158
  store <2 x i64> %159, ptr %ptr2
  ret void
}
```
to
```llvm
define void @bug(ptr %ptr1, ptr %ptr2, i64 %idx) {
entry:
  %.scalar = or disjoint i64 2, %idx
  %0 = or <2 x i64> splat (i64 2), <i64 5, i64 ptrtoint (ptr @val to i64)>
  %1 = insertelement <2 x i64> %0, i64 %.scalar, i64 0
  store <2 x i64> %1, ptr %ptr2, align 16
  ret void
}
```
It would then be folded back in `foldInsExtBinop`, resulting in an
infinite loop.
This patch performs the scalarization only if InstSimplify can fold the
constant expression.
We've backported a lot more features from C to previous C standards than
we were documenting. I took a pass over the c_status page for Clang and
pulled more entries to add to our documentation.
Fixes #139023.
This PR essentially removes unused global variables:
- Restores the `GlobalDCE` legacy pass and adds it to the DirectX
backend after the finalize linkage pass
- Converts external global variables with no uses to internal linkage
in the finalize linkage pass (so they can be removed by `GlobalDCE`)
- Makes the `dxil-finalize-linkage` pass usable with the new pass
manager flag syntax
- Adds tests to `finalize_linkage.ll` that make sure unused global
variables are removed
- Adds a use for variable `@CBV` in `opaque-value_as_metadata.ll` so it
isn't removed
- Changes the `scalar-data.ll` run command to avoid removing its global
variables
---------
Co-authored-by: Farzon Lotfi <farzonlotfi@microsoft.com>
We cannot actually retire an infinite number of uops per cycle. This
patch adds an RCU to the Skylake scheduling model to fix this. I'm
purposefully using a loose upper bound here. We're unlikely to actually
get four fused uops per cycle, but this is better than not setting
anything. Most realistic code I've put through uiCA will retire up to ~6
uops per cycle.
Information taken from
https://en.wikichip.org/wiki/intel/microarchitectures/skylake_(client).
This requires modification of the two zero idiom tests because we do not
currently model the CPU frontend which would likely be the actual
bottleneck in that case.
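As a rough illustration of why modeling the retire width matters (the
value 4 below is only a placeholder, not necessarily the width used in
the patch): with a finite retire width, N uops need at least
ceil(N / width) cycles, whereas leaving retirement unmodeled means it
can never show up as a bottleneck.
```cpp
#include <cstdio>

int main() {
  // Placeholder numbers for illustration only.
  const unsigned TotalUops = 100;
  const unsigned RetireWidth = 4; // finite retire width per cycle
  // Lower bound on cycles imposed purely by retirement.
  unsigned MinCycles = (TotalUops + RetireWidth - 1) / RetireWidth;
  std::printf("at least %u cycles to retire %u uops\n", MinCycles,
              TotalUops);
  return 0;
}
```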
Related to #153747.
Similar to `IntegerRelation::addLocalFloorDiv`, this adds a utility
`IntegerRelation::addLocalModulo` that adds a local variable equal to
the remainder of an affine function of the variables modulo some
constant modulus. The function returns the absolute index of the new
var in the relation.
This is computed by first finding the floordiv `q = exprs // modulus`
and then computing the remainder `result = exprs - q * modulus`.
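A small sketch of the arithmetic only (not the MLIR API), assuming a
positive constant modulus: the floor division is taken first and the
remainder is derived from it, which keeps the result in `[0, modulus)`
even for negative values of the expression.
```cpp
#include <cstdint>
#include <cstdio>

// Floor division: q = floor(e / m). C++ integer division truncates
// toward zero, so adjust when the signs differ and the remainder is
// nonzero.
static int64_t floorDiv(int64_t E, int64_t M) {
  int64_t Q = E / M;
  if ((E % M != 0) && ((E < 0) != (M < 0)))
    --Q;
  return Q;
}

int main() {
  int64_t E = -7, M = 3;
  int64_t Q = floorDiv(E, M);  // -3
  int64_t Result = E - Q * M;  // 2, always in [0, M) for M > 0
  std::printf("q = %lld, result = %lld\n", (long long)Q,
              (long long)Result);
  return 0;
}
```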
Signed-off-by: Asra Ali <asraa@google.com>
When matching integers, `m_ConstantInt` is a convenient alternative to
`m_APInt` for matching unsigned 64-bit integers, allowing one to
simplify
```cpp
const APInt *IntC;
if (match(V, m_APInt(IntC))) {
  if (IntC->ule(UINT64_MAX)) {
    uint64_t Int = IntC->getZExtValue();
    // ...
  }
}
```
to
```cpp
uint64_t Int;
if (match(V, m_ConstantInt(Int))) {
  // ...
}
```
However, this simplification is only valid if `V` has a scalar type.
Specifically, `m_APInt` also matches integer splats, but `m_ConstantInt`
does not.
This patch ensures that the matching behaviour of `m_ConstantInt`
parallels that of `m_APInt`, and also incorporates it in some obvious
places.
This modifies InjectAnonymousStructOrUnionMembers to inject an
IndirectFieldDecl and mark it invalid even if its name conflicts with
another name in the scope.
This resolves a crash in a later diagnostic,
diag::err_multiple_mem_union_initialization, which (via
findDefaultInitializer) relies on these declarations being present.
Fixes #149985
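A hedged guess at the shape of code that exercises this path (the actual
reproducer in #149985 may differ): the anonymous union member's name
collides with another declaration in the enclosing scope, and the union
also has more than one default member initializer, so the later
diagnostic needs the injected declaration to be present.
```cpp
// Intentionally ill-formed: meant to be rejected with diagnostics
// rather than crash the compiler.
struct S {
  int a;
  union {
    int a = 1; // hypothetical conflict with S::a; the IndirectFieldDecl
               // is still injected (marked invalid) so later checks can
               // find it
    int b = 2; // a second default initializer in the same union should
               // hit err_multiple_mem_union_initialization, not a crash
  };
};
```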