…types usi… (#144676)"
This reverts commit 68471d29eed2c49f9b439e505b3f24d387d54f97.
IntegralAP contains a union:
union {
  uint64_t *Memory = nullptr;
  uint64_t Val;
};
On 64-bit systems, both Memory and Val have the same size. However, on
32-bit systems, Val is 64 bits wide while Memory is only 32 bits, which
means the default initializer for Memory only zeroes half of Val. We
fixed this by zero-initializing Val explicitly in the
IntegralAP(unsigned BitWidth) constructor.
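A minimal sketch of the situation (with a hypothetical IntegralAPLike
standing in for the real class):
```c++
#include <cstdint>

// On a 32-bit target, Memory is only 4 bytes wide, so its default
// initializer would leave the upper half of the 8-byte Val
// indeterminate. Zero-initializing Val covers the whole union.
struct IntegralAPLike {
  union {
    uint64_t *Memory;
    uint64_t Val;
  };
  explicit IntegralAPLike(unsigned /*BitWidth*/) : Val(0) {}
};
```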
See also the discussion in
https://github.com/llvm/llvm-project/pull/144246
Both `APInt` and `APFloat` will heap-allocate memory themselves using
the system allocator when the size of their data exceeds 64 bits.
This is why clang has `APNumericStorage`, which allocates its memory
using an allocator (via `ASTContext`) instead. Calling `getValue()` on
such an AST node will then create a new `APInt`/`APFloat`, which will
copy the data (in the `APFloat` case, we even copy it twice).
That's sad but whatever.
In the bytecode interpreter, we have a similar problem. Large integers
and floating-point values are placement-new allocated into the
`InterpStack` (or into the bytecode, which is a `vector<std::byte>`).
When we later interrupt interpretation, we don't run the destructors of
the items on the stack, which means we leak the memory of the
`APInt`/`APFloat` that backs the `IntegralAP`/`Floating` values the
interpreter uses.
Fix this by using an approach similar to the one used in the AST. Add an
allocator to `InterpState`, which is used for temporaries and local
values. Those values will be freed at the end of interpretation. For
global variables, we need to promote the values to global lifetime,
which we do via `InitGlobal` and `FinishInitGlobal` ops.
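A rough sketch of the allocator idea (hypothetical names; the actual
`InterpState` interface differs):
```c++
#include <cstdint>
#include "llvm/Support/Allocator.h"

// Temporaries and locals get their payload from a bump allocator owned
// by the interpreter state. Nothing needs to be destroyed individually,
// so an interrupted evaluation cannot leak; everything is released
// together when interpretation ends.
class TemporaryAllocator {
  llvm::BumpPtrAllocator Alloc;

public:
  uint64_t *allocateWords(unsigned NumWords) {
    return Alloc.Allocate<uint64_t>(NumWords);
  }
  // Releases all allocations at once.
  void reset() { Alloc.Reset(); }
};
```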
Interestingly, this results in a slight _improvement_ in compile times:
https://llvm-compile-time-tracker.com/compare.php?from=6bfcdda9b1ddf0900f82f7e30cb5e3253a791d50&to=88d1d899127b408f0fb0f385c2c58e6283195049&stat=instructions:u
(but don't ask me why).
Fixes https://github.com/llvm/llvm-project/issues/139012
Rename isConstexpr to isValid; the former was always a bad name. Save a
constexpr bit in Function so we don't have to access the decl in
CheckCallable.
CheckStore is for assignments, but we're constructing something here, so
pass AK_Construct instead. We already diagnosed the test case, but as an
assignment.
For
```c++
struct S {
  constexpr S(int = 0) : i(1) {}
  int i;
};
constexpr volatile S vs;
```
reading from `vs.i` is not allowed, even though `i` is not
volatile-qualified. Propagate the IsVolatile bit down the hierarchy, so
we know that reading from `vs.i` is a volatile read.
This should be fine as long as we're not reading from it.
Note that this regresses
CXX/special/class.init/class.inhctor.init/p1.cpp, which used to work
fine with the bytecode interpreter.
That's because this code now fails:
```c++
struct Param;
struct A {
  constexpr A(Param);
  int a;
};
struct B : A { B(); using A::A; int b = 2; };
struct Wrap1 : B { constexpr Wrap1(); };
struct Wrap2 : Wrap1 {};
extern const Wrap2 b;
struct Param {
  constexpr Param(int c) : n(4 * b.a + b.b + c) {}
  int n;
};
```
and reports that the Param() constructor is never a valid constant
expression. But that's true, and the current interpreter should report
that as well. It also fails when called at compile time.
Differentiate between a volatile read via an lvalue-to-rvalue cast of a
volatile-qualified subexpression and a read from a pointer with a
volatile base object.
Fix comparing type id pointers, add more info when print()ing them, use
the most derived type in GetTypeidPtr() and the canonically unqualified
type when we know the type statically.
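For illustration (not a test from the patch; the constexpr comparison
needs C++23), the kind of code this affects:
```c++
#include <typeinfo>

struct Base { virtual ~Base() = default; };
struct Derived final : Base {};

constexpr bool sameType() {
  Derived D;
  Base &B = D;
  // The type id of B must be that of its most derived type, Derived.
  return typeid(B) == typeid(Derived);
}
static_assert(sameType());
```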
This is a basic implementation of P2719: "Type-aware allocation and
deallocation functions" described at http://wg21.link/P2719
The proposal includes some more details but the basic change in
functionality is the addition of support for an additional implicit
parameter in operators `new` and `delete` to act as a type tag. The tag
is of type `std::type_identity<T>`, where T is the concrete type being
allocated. So, for example, a custom type-specific allocator for `int`
can be provided by declaring
void *operator new(std::type_identity<int>, size_t, std::align_val_t);
void operator delete(std::type_identity<int>, void*, size_t, std::align_val_t);
However, this becomes more powerful when specifying templated
declarations, for example
template <typename T> void *operator new(std::type_identity<T>, size_t, std::align_val_t);
template <typename T> void operator delete(std::type_identity<T>, void*, size_t, std::align_val_t);
Here the operators are resolved with T being the concrete type being
operated on (NB: a completely unconstrained global definition as above
is not recommended, as it triggers many problems similar to a general
override of the global operators).
These type-aware operators can be declared either as free functions or
in-class, and can be specified with or without the other implicit
parameters, with overload resolution performed according to the existing
standard parameter prioritisation, except that type-parameterised
operators have higher precedence than non-type-aware operators. The only
exception is destroying delete, for which, for the reasons discussed in
the paper, we do not support type-aware variants by default.
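A minimal sketch of an in-class type-aware pair under the proposed
semantics (illustrative only, not taken from the patch's tests):
```c++
#include <cstddef>
#include <new>
#include <type_traits> // std::type_identity

struct Widget {
  // The implicit type tag names the concrete allocated type, Widget.
  void *operator new(std::type_identity<Widget>, std::size_t Sz,
                     std::align_val_t Al) {
    return ::operator new(Sz, Al);
  }
  void operator delete(std::type_identity<Widget>, void *P, std::size_t Sz,
                       std::align_val_t Al) {
    ::operator delete(P, Sz, Al);
  }
};
```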
The Pointer class already has the capability to be a function pointer,
but we still classified function pointers as PT_FnPtr/FunctionPointer.
This means that when converting from a Pointer to a FunctionPointer, we
lost the information about what the original Pointer pointed to.
When revisiting a variable, we do that by simply calling visitDecl() for
it, which means it will end up with the same EvalID as the rest of the
evaluation - but this way we end up allowing reads from mutable
variables. Disallow that.
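An example of the kind of read that must stay diagnosed (illustrative):
```c++
struct M {
  mutable int m = 1;
  int x = 2;
};
constexpr M gm;
constexpr int ok = gm.x;     // fine
// constexpr int bad = gm.m; // ill-formed: read of a mutable member of
//                           // an object created outside this evaluation
```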
This returns the type of data in the Block, which might be different
than the type of the expression or declaration we created the block for.
This lets us remove some special cases from CheckNewDeleteForms() and
CheckNewTypeMismatch().
Create the Function* handles for all functions we see, but delay the
actual compilation until we really call the function. This speeds up
compile times with the new interpreter a bit.
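A rough sketch of the lazy-compilation pattern (hypothetical types; the
real `Function` class is more involved):
```c++
#include <cstddef>
#include <vector>

// The handle is created eagerly and is cheap; the bytecode body is only
// produced the first time the interpreter actually calls the function.
struct Function {
  bool Compiled = false;
  std::vector<std::byte> Code;
};

struct Compiler {
  void ensureCompiled(Function &F) {
    if (F.Compiled)
      return;
    // ... generate bytecode into F.Code ...
    F.Compiled = true;
  }
};
```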
Use the regular code paths for interpreting.
Add new instructions: `StartSpeculation` will reset the diagnostics
pointers to `nullptr`, which will keep us from reporting any diagnostics
during speculation. `EndSpeculation` will undo this.
The rest depends on what `Emitter` we use.
For `EvalEmitter`, we have no bytecode, so we implement `speculate()` by
simply visiting the first argument of `__builtin_constant_p`. If the
evaluation fails, we push a `0` on the stack, otherwise a `1`.
For `ByteCodeEmitter`, add another instruction called `BCP` that
interprets all the instructions following it until the next
`EndSpeculation` instruction. If any of those instructions fails, we
jump to the `EndLabel`, which brings us right before the
`EndSpeculation`. We then push the result on the stack.
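For illustration, the kind of expression that exercises this path (not
taken from the patch's tests):
```c++
constexpr int Known = 42;
static_assert(__builtin_constant_p(Known) == 1);

// Dereferencing a null pointer is not a constant expression, but inside
// __builtin_constant_p the failure is swallowed by the speculation
// machinery: no diagnostic is emitted and the builtin just yields 0.
constexpr int *Null = nullptr;
static_assert(__builtin_constant_p(*Null) == 0);
```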