228 Commits

Author SHA1 Message Date
Kazu Hirata
ae372bfca8
[CodeGen] Use range-based for loops (NFC) (#145142) 2025-06-21 08:20:57 -07:00
Timm Baeder
cfe26358e3
Reapply "[clang] Avoid re-evaluating field bitwidth" (#122289) 2025-01-11 07:12:37 +01:00
Reid Kleckner
26aa20a3dd
Use range-based for to iterate over fields in record layout, NFCI (#122029)
Move the common case of FieldDecl::getFieldIndex() inline to mitigate
the cost of removing the extra `FieldNo` induction variable.

Also rename isNoUniqueAddress parameter to isNonVirtualBaseType, which
appears to be more accurate. I think the current name is just a
consequence of autocomplete gone wrong.
2025-01-09 11:21:03 -08:00
Timm Bäder
59bdea24b0 Revert "[clang] Avoid re-evaluating field bitwidth (#117732)"
This reverts commit 81fc3add1e627c23b7270fe2739cdacc09063e54.

This breaks some LLDB tests, e.g.
SymbolFile/DWARF/x86/no_unique_address-with-bitfields.cpp:

lldb: ../llvm-project/clang/lib/AST/Decl.cpp:4604: unsigned int clang::FieldDecl::getBitWidthValue() const: Assertion `isa<ConstantExpr>(getBitWidth())' failed.
2025-01-08 15:09:52 +01:00
Timm Baeder
81fc3add1e
[clang] Avoid re-evaluating field bitwidth (#117732)
Save the bitwidth value as a `ConstantExpr` with the value set. Remove
the `ASTContext` parameter from `getBitWidthValue()`, so the latter
simply returns the value from the `ConstantExpr` instead of
constant-evaluating the bitwidth expression every time it is called.
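As a hedged sketch of what a call site looks like after this change (the helper and variable names below are illustrative, not from the patch):
```
#include "clang/AST/Decl.h"
#include "llvm/ADT/SmallVector.h"

// Collect the widths of all bit-fields in a record. After this change the
// width is read straight from the stored ConstantExpr, so no ASTContext is
// needed (previously this was FD->getBitWidthValue(Ctx)).
llvm::SmallVector<unsigned, 8> collectBitFieldWidths(const clang::RecordDecl *RD) {
  llvm::SmallVector<unsigned, 8> Widths;
  for (const clang::FieldDecl *FD : RD->fields())
    if (FD->isBitField())
      Widths.push_back(FD->getBitWidthValue());
  return Widths;
}
```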
2025-01-08 14:45:19 +01:00
Michael Buch
4497ec293a
[clang][CGRecordLayout] Remove dependency on isZeroSize (#96422)
This is a follow-up from the conversation starting at
https://github.com/llvm/llvm-project/pull/93809#issuecomment-2173729801

The root problem that motivated the change is external AST sources that
compute `ASTRecordLayout`s themselves instead of letting Clang compute
them from the AST. One such example is LLDB using DWARF to get the
definitive offsets and sizes of C++ structures. Such layouts should be
considered correct (modulo buggy DWARF), but various assertions and
lowering logic around the `CGRecordLayoutBuilder` rely on the AST
having `[[no_unique_address]]` attached to the relevant fields. This is a
layout-altering attribute which is not encoded in DWARF. This causes
LLDB to trip over the various LLVM<->Clang layout consistency checks.
There has been precedent for avoiding such layout-altering attributes
from affecting lowering with externally-provided layouts (e.g., packed
structs).

This patch proposes to replace the `isZeroSize` checks in
`CGRecordLayoutBuilder` (which roughly means "empty field with
[[no_unique_address]]") with checks for
`CodeGen::isEmptyField`/`CodeGen::isEmptyRecord`.

**Details**
The main strategy here was to change the `isZeroSize` check in
`CGRecordLowering::accumulateFields` and
`CGRecordLowering::accumulateBases` to use the `isEmptyXXX` APIs
instead, preventing empty fields from being added to the `Members` and
`Bases` structures. The rest of the changes fall out from here, to
prevent lookups into these structures (for field numbers or base
indices) from failing.

Added `isEmptyRecordForLayout` and `isEmptyFieldForLayout` (open to
better naming suggestions). The main difference from the existing
`isEmptyRecord`/`isEmptyField` APIs is that the `isEmptyXXXForLayout`
counterparts don't have special treatment for unnamed bit-fields/arrays
and also treat fields of empty types as if they had
`[[no_unique_address]]` (i.e., just like the `AsIfNoUniqueAddr` in
`isEmptyField` does).
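For context, a minimal illustration of the layout-altering attribute in question (hypothetical types; the size claim assumes an Itanium-ABI target, where Clang honors the attribute):
```
struct Empty {};

struct S {
  [[no_unique_address]] Empty E; // empty field; takes no storage with the attribute
  int X;
};

// With the attribute, E shares its address with X and S stays int-sized; without
// it (which is effectively how a DWARF-derived AST looks, since DWARF does not
// encode the attribute), E would occupy its own byte plus padding.
static_assert(sizeof(S) == sizeof(int),
              "empty [[no_unique_address]] field adds no size on Itanium-ABI targets");
```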
2024-07-16 04:59:51 +01:00
Mariya Podchishchaeva
9ad72df55c
[clang] Use different memory layout type for _BitInt(N) in LLVM IR (#91364)
There are two problems with _BitInt prior to this patch:
1. For at least some values of N, we cannot use LLVM's iN for the type
of struct elements, array elements, allocas, global variables, and so
on, because the LLVM layout for that type does not match the high-level
layout of _BitInt(N).
Example: Currently, for i128:128 targets a correct implementation is
possible either for __int128 or for _BitInt(129+) with lowering to iN,
but not both, since we now have a correct implementation of __int128 in
place after a21abc7.
When this happens, opaque [M x i8] types are used, where M =
sizeof(_BitInt(N)).
2. LLVM doesn't guarantee any particular extension behavior for integer
types whose width isn't a multiple of 8. For this reason, all _BitInt types
now have an in-memory representation that is a whole number of bytes.
For example, _BitInt(17) now has memory layout type i32.

This patch also introduces concept of load/store type and adds an API to
CodeGenTypes that returns the IR type that should be used for load and
store operations. This is particularly useful for the case when a
_BitInt ends up having array of bytes as memory layout type. For
_BitInt(N), let M = sizeof(_BitInt(N)), and let BITS = M * 8. Loads and
stores of iBITS would both (1) produce far better code from the backends
and (2) be far more optimizable by IR passes than loads and stores of [M
x i8].
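As an illustrative sketch of the new in-memory widths (Clang's _BitInt extension; the exact sizes assume a typical 64-bit target such as x86-64 and are ABI-dependent):
```
// _BitInt(17) is no longer stored as a raw i17: its memory layout type is a
// whole number of bytes (an i32-sized slot here). For wider _BitInts whose
// layout type is an [M x i8] array, loads and stores use the iBITS load/store
// type instead of the byte array.
struct Sensor {
  _BitInt(17) Reading;
  unsigned _BitInt(7) Flags;
};

static_assert(sizeof(_BitInt(17)) == 4, "an i32-sized slot on x86-64");
static_assert(sizeof(unsigned _BitInt(7)) == 1, "a single byte on x86-64");
```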

Fixes https://github.com/llvm/llvm-project/issues/85139
Fixes https://github.com/llvm/llvm-project/issues/83419

---------

Co-authored-by: John McCall <rjmccall@gmail.com>
2024-07-15 09:40:39 +02:00
Nathan Sidwell
1d6bf0ca29
[clang][NFC] Remove class layout scissor (#89055)
PR #87090 amended `accumulateBitfields` to do the correct clipping. The scissor is no longer necessary and  `checkBitfieldClipping` can compute its location directly when needed.
2024-05-12 09:45:00 -04:00
Nathan Sidwell
69d861e132
[clang] Move tailclipping to bitfield allocation (#87090)
Move bitfield access clipping to bitfield access computation.
2024-04-15 16:55:58 -04:00
Nathan Sidwell
ee99475068
[clang] Fix bitfield access unit for vbase corner case (#87238)
This fixes #87227: a vbase can be placed below nvsize when empty members and/or bases are in play. We must account for that.
2024-04-01 15:41:38 -04:00
Nathan Sidwell
49839f97d2 [clang] Better SysV bitfield access units (#65742)
Reimplement bitfield access unit computation for SysV ABIs. Considers
expense of unaligned and non-power-of-2 accesses.
2024-03-29 09:35:31 -04:00
Nathan Sidwell
cb2dd0282c
[clang][NFC] constify or staticify some CGRecordLowering fns (#82874)
Some CGRecordLowering functions either do not need the object or do not mutate it. Mark them static or const as appropriate.
2024-02-26 04:44:03 -05:00
Kazu Hirata
f3dcc2351c
[clang] Use StringRef::{starts,ends}_with (NFC) (#75149)
This patch replaces uses of StringRef::{starts,ends}with with
StringRef::{starts,ends}_with for consistency with
std::{string,string_view}::{starts,ends}_with in C++20.

I'm planning to deprecate and eventually remove
StringRef::{starts,ends}with.
2023-12-13 08:54:13 -08:00
JOE1994
204883623e [NFC] Replace uses of Type::getPointerTo
Replace some uses of `Type::getPointerTo` via 2 ways
* Remove entirely if it's only used to support an unnecessary bitcast
  (remove the bitcast as well).
* Replace with `PointerType::get`/`PointerType::getUnqual`

NFC opaque pointer clean-up effort.
2023-09-29 21:38:53 -04:00
Bjorn Pettersson
fd05c34b18 Stop using legacy helpers indicating typed pointer types. NFC
Since we no longer support typed LLVM IR pointer types, the code can
be simplified, for example by using PointerType::get directly instead
of Type::getInt8PtrTy, Type::getInt32PtrTy, etc.

Differential Revision: https://reviews.llvm.org/D156733
2023-08-02 12:08:37 +02:00
Vladislav Dzhidzhoev
af6c0b6d8c [clang][CodeGen] Use base subobject type layout for potentially-overlapping fields
RecordLayoutBuilder takes the size of a potentially-overlapping
class/struct field with non-zero size to be the size of the base subobject
type corresponding to the field type.
Make CGRecordLayoutBuilder acknowledge that in order to avoid incorrect
padding insertion.
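An illustrative example of a potentially-overlapping field (hypothetical types; the sizes assume the Itanium C++ ABI):
```
struct A {
  int I;
  char C;
}; // sizeof(A) == 8, but its base subobject (dsize/nvsize) is only 5 bytes

struct B {
  [[no_unique_address]] A Inner; // potentially-overlapping: only the base
  char Tail;                     // subobject size is reserved, so Tail can be
};                               // placed into A's tail padding at offset 5

static_assert(sizeof(B) == 8, "Tail reuses A's tail padding on Itanium-ABI targets");
```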

Differential Revision: https://reviews.llvm.org/D139741
2023-02-17 15:11:42 +01:00
Guillaume Chatelet
bf5c17ed0f [clang][NFC] Remove dependency on DataLayout::getPrefTypeAlignment 2023-01-13 15:01:29 +00:00
Vladislav Dzhidzhoev
19d300fb41 Revert "[clang][CodeGen] Use base subobject type layout for potentially-overlapping fields"
This reverts commit 731abdfdcc33d813e6c3b4b89eff307aa5c18083.

This commit breaks some tests in libcxx, e.g.
`std/utilities/expected/expected.expected/ctor/ctor.inplace.pass.cpp`
2022-12-15 18:12:26 +03:00
Vladislav Dzhidzhoev
731abdfdcc [clang][CodeGen] Use base subobject type layout for potentially-overlapping fields
RecordLayoutBuilder takes the size of a potentially-overlapping field
with non-zero size to be the size of the base subobject type corresponding
to the field type.
Make CGRecordLayoutBuilder acknowledge that in order to avoid incorrect
padding insertion.

Differential Revision: https://reviews.llvm.org/D139741
2022-12-15 15:10:41 +03:00
Kazu Hirata
17d4bd3d78 [clang] Fix bugprone argument comments (NFC)
Identified with bugprone-argument-comment.
2022-01-09 00:19:49 -08:00
Bjorn Pettersson
72f863fd37 [CodeGen] Use getCharWidth() more consistently in CGRecordLowering. NFC
When using getByteArrayType the requested size is calculated in
char units, but the type used for the array was hardcoded to
Int8Ty. This patch uses getCharWidth a bit more consistently,
using getIntNTy in combination with getCharWidth instead
of explicitly using getInt8Ty.
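A hedged sketch of the idea, not the exact in-tree code (the helper name and parameters are illustrative):
```
#include "clang/AST/CharUnits.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Type.h"

// Build a padding type that is Size chars wide, sizing each element by the
// target's char width instead of hard-coding 8-bit bytes via getInt8Ty.
llvm::Type *byteArrayTypeSketch(llvm::LLVMContext &Ctx, clang::CharUnits Size,
                                unsigned CharWidthInBits) {
  llvm::Type *CharTy = llvm::Type::getIntNTy(Ctx, CharWidthInBits);
  if (Size.getQuantity() == 1)
    return CharTy;
  return llvm::ArrayType::get(CharTy, Size.getQuantity());
}
```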

Reviewed By: rjmccall

Differential Revision: https://reviews.llvm.org/D94977
2021-01-22 21:12:17 +01:00
Bevin Hansson
101309fe04 [AST] Change return type of getTypeInfoInChars to a proper struct instead of std::pair.
Followup to D85191.

This changes getTypeInfoInChars to return a TypeInfoChars
struct instead of a std::pair of CharUnits. This lets the
interface match getTypeInfo more closely.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D86447
2020-10-13 13:26:56 +02:00
Ties Stuij
208987844f [ARM] Follow AAPCS standard for volatile bit-fields access width
This patch resumes the work of D16586.
According to the AAPCS, volatile bit-fields should
be accessed using containers of the width of their
declared type. In such a case:
```
struct S1 {
  short a : 1;
};
```
should be accessed using loads and stores of that width
(sizeof(short)), whereas currently the compiler only loads
the minimum required width (char in this case).
However, as discussed in D16586,
that could overwrite non-volatile bit-fields, which
conflicted with the C and C++ object models by creating
data races on data that is not part of the bit-field,
e.g.
```
struct S2 {
  short a;
  int  b : 16;
};
```
Accessing `S2.b` would also access `S2.a`.

The AAPCS Release 2020Q2
(https://documentation-service.arm.com/static/5efb7fbedbdee951c1ccf186?token=)
section 8.1 Data Types, page 36, "Volatile bit-fields -
preserving number and width of container accesses" has been
updated to avoid conflict with the C++ Memory Model.
Now it reads in the note:
```
This ABI does not place any restrictions on the access widths of bit-fields where the container
overlaps with a non-bit-field member or where the container overlaps with any zero length bit-field
placed between two other bit-fields. This is because the C/C++ memory model defines these as being
separate memory locations, which can be accessed by two threads simultaneously. For this reason,
compilers must be permitted to use a narrower memory access width (including splitting the access into
multiple instructions) to avoid writing to a different memory location. For example, in
struct S { int a:24; char b; }; a write to a must not also write to the location occupied by b, this requires at least two
memory accesses in all current Arm architectures. In the same way, in struct S { int a:24; int:0; int b:8; };,
writes to a or b must not overwrite each other.
```

I've updated patch D16586 to follow such behavior by verifying that we
only change the volatile bit-field access when:
 - it won't overlap with any other non-bit-field member,
 - we only access memory inside the bounds of the record, and
 - it doesn't overlap zero-length bit-fields.

Preserving the number of memory accesses will
be implemented by D67399.

Reviewed By: ostannard

Differential Revision: https://reviews.llvm.org/D72932
2020-10-13 10:31:48 +01:00
Ties Stuij
d6f3f61231 Revert "[ARM] Follow AAPCS standard for volatile bit-fields access width"
This reverts commit 514df1b2bb1ecd1a33327001ea38a347fd2d0380.

Some of the buildbots got llvm-lit errors on CodeGen/volatile.c
2020-09-08 18:46:27 +01:00
Ties Stuij
514df1b2bb [ARM] Follow AAPCS standard for volatile bit-fields access width
This patch resumes the work of D16586.
According to the AAPCS, volatile bit-fields should
be accessed using containers of the width of their
declared type. In such a case:
```
struct S1 {
  short a : 1;
};
```
should be accessed using loads and stores of that width
(sizeof(short)), whereas currently the compiler only loads
the minimum required width (char in this case).
However, as discussed in D16586,
that could overwrite non-volatile bit-fields, which
conflicted with the C and C++ object models by creating
data races on data that is not part of the bit-field,
e.g.
```
struct S2 {
  short a;
  int  b : 16;
};
```
Accessing `S2.b` would also access `S2.a`.

The AAPCS Release 2020Q2
(https://documentation-service.arm.com/static/5efb7fbedbdee951c1ccf186?token=)
section 8.1 Data Types, page 36, "Volatile bit-fields -
preserving number and width of container accesses" has been
updated to avoid conflict with the C++ Memory Model.
Now it reads in the note:
```
This ABI does not place any restrictions on the access widths of bit-fields where the container
overlaps with a non-bit-field member or where the container overlaps with any zero length bit-field
placed between two other bit-fields. This is because the C/C++ memory model defines these as being
separate memory locations, which can be accessed by two threads simultaneously. For this reason,
compilers must be permitted to use a narrower memory access width (including splitting the access into
multiple instructions) to avoid writing to a different memory location. For example, in
struct S { int a:24; char b; }; a write to a must not also write to the location occupied by b, this requires at least two
memory accesses in all current Arm architectures. In the same way, in struct S { int a:24; int:0; int b:8; };,
writes to a or b must not overwrite each other.
```

Patch D16586 was updated to follow such behavior by verifying that we
only change the volatile bit-field access when:
 - it won't overlap with any other non-bit-field member,
 - we only access memory inside the bounds of the record, and
 - it doesn't overlap zero-length bit-fields.

Preserving the number of memory accesses will
be implemented by D67399.

Differential Revision: https://reviews.llvm.org/D72932

The following people contributed to this patch:
- Diogo Sampaio
- Ties Stuij
2020-09-08 17:49:49 +01:00
Alex Bradbury
3dcfd482cb [CodeGen] Increase applicability of ffine-grained-bitfield-accesses for targets with limited native integer widths
As pointed out in PR45708, -ffine-grained-bitfield-accesses doesn't
trigger in all cases you think it might for RISC-V. The logic in
CGRecordLowering::accumulateBitFields checks that OffsetInRecord is a legal
integer according to the data layout. RISC targets will typically only
have the native width as a legal integer type so this check will fail
for OffsetInRecord of 8 or 16 when you would expect the transformation
is still worthwhile.

This patch changes the logic to check for an OffsetInRecord of at
least 1 byte that fits in a legal integer and is a power of 2. We
would prefer to query whether native load/store operations are
available, but I don't believe that is possible.
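A hedged sketch of the relaxed condition described above (the function and parameter names are illustrative, not the in-tree code):
```
#include "llvm/IR/DataLayout.h"
#include "llvm/Support/MathExtras.h"
#include <cstdint>

// An access unit qualifies when it is at least one byte, a power of two, and
// fits in some legal integer type, rather than requiring the width itself to
// be a legal integer.
static bool qualifiesAsSingleFieldRun(const llvm::DataLayout &DL,
                                      uint64_t OffsetInRecordBits) {
  return OffsetInRecordBits >= 8 &&
         llvm::isPowerOf2_64(OffsetInRecordBits) &&
         DL.fitsInLegalInteger(OffsetInRecordBits);
}
```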

Differential Revision: https://reviews.llvm.org/D79155
2020-06-12 10:33:47 +01:00
David Blaikie
9b77242c9a CodeGenTypes::CGRecordLayouts: Use unique_ptr to simplify memory management 2020-04-28 22:31:16 -07:00
Erich Keane
5f0903e9be Reland Implement _ExtInt as an extended int type specifier.
I fixed the LLDB issue, so re-applying the patch.

This reverts commit a4b88c044980337bb14390be654fe76864aa60ec.
2020-04-17 10:45:48 -07:00
Sterling Augustine
a4b88c0449 Revert "Implement _ExtInt as an extended int type specifier."
This reverts commit 61ba1481e200b5b35baa81ffcff81acb678e8508.

I'm reverting this because it breaks the lldb build with
incomplete switch coverage warnings. I would fix it forward,
but am not familiar enough with lldb to determine the correct
fix.

lldb/source/Plugins/TypeSystem/Clang/TypeSystemClang.cpp:3958:11: error: enumeration values 'DependentExtInt' and 'ExtInt' not handled in switch [-Werror,-Wswitch]
  switch (qual_type->getTypeClass()) {
          ^
lldb/source/Plugins/TypeSystem/Clang/TypeSystemClang.cpp:4633:11: error: enumeration values 'DependentExtInt' and 'ExtInt' not handled in switch [-Werror,-Wswitch]
  switch (qual_type->getTypeClass()) {
          ^
lldb/source/Plugins/TypeSystem/Clang/TypeSystemClang.cpp:4889:11: error: enumeration values 'DependentExtInt' and 'ExtInt' not handled in switch [-Werror,-Wswitch]
  switch (qual_type->getTypeClass()) {
2020-04-17 10:29:40 -07:00
Erich Keane
61ba1481e2 Implement _ExtInt as an extended int type specifier.
Introduction/Motivation:
LLVM-IR supports integers of non-power-of-2 bitwidth, in the iN syntax.
Integers of non-power-of-two width aren't particularly interesting or useful
on most hardware, so much so that no language in Clang has been
motivated to expose them before.

However, in the case of FPGA hardware, normal integer types where the
full bitwidth isn't used are extremely wasteful and have severe
performance/space concerns. Because of this, Intel has introduced this
functionality in the High Level Synthesis compiler[0]
under the name "Arbitrary Precision Integer" (ap_int for short). This
has been extremely useful and effective for our users, permitting them
to optimize their storage and operation space on an architecture where
both can be extremely expensive.

We are proposing upstreaming a more palatable version of this to the
community, in the form of this proposal and accompanying patch.  We are
proposing the syntax _ExtInt(N).  We intend to propose this to the WG14
committee[1], and the underscore-capital seems like the active direction
for a WG14 paper's acceptance.  An alternative that Richard Smith
suggested on the initial review was __int(N), however we believe that
is much less acceptable by WG14.  We considered _Int, however _Int is
used as an identifier in libstdc++ and there is no good way to fall
back to an identifier (since _Int(5) is indistinguishable from an
unnamed initializer of a template type named _Int).

[0]https://www.intel.com/content/www/us/en/software/programmable/quartus-prime/hls-compiler.html)
[1]http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2472.pdf
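For reference, a minimal usage sketch of the proposed specifier (a Clang extension; the variable names are illustrative, and _ExtInt was later superseded by C23's _BitInt):
```
_ExtInt(13) Counter = 100;         // 13-bit signed integer
unsigned _ExtInt(4) Nibble = 0xF;  // 4-bit unsigned integer
_ExtInt(256) Accumulator = 0;      // widths well beyond the native word size
```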

Differential Revision: https://reviews.llvm.org/D73967
2020-04-17 07:10:57 -07:00
Richard Smith
78b239ea67 P0840R2: support for [[no_unique_address]] attribute
Summary:
Add support for the C++2a [[no_unique_address]] attribute for targets using the Itanium C++ ABI.

This depends on D63371.

Reviewers: rjmccall, aaron.ballman

Subscribers: dschuff, aheejin, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D63451

llvm-svn: 363976
2019-06-20 20:44:45 +00:00
Fangrui Song
899d13926d Use llvm::stable_sort
llvm-svn: 359098
2019-04-24 14:43:05 +00:00
Chandler Carruth
2946cd7010 Update the file headers across all of the LLVM projects in the monorepo
to reflect the new license.

We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.

Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.

llvm-svn: 351636
2019-01-19 08:50:56 +00:00
Richard Trieu
6368818fd5 Move CodeGenOptions from Frontend to Basic
Basic uses CodeGenOptions and should not depend on Frontend.

llvm-svn: 348827
2018-12-11 03:18:39 +00:00
Fangrui Song
6907ce2f8f Remove trailing space
sed -Ei 's/[[:space:]]+$//' include/**/*.{def,h,td} lib/**/*.{cpp,h}

llvm-svn: 338291
2018-07-30 19:24:48 +00:00
George Karpenkov
39e5137f43 [AST] Add a convenient getter from QualType to RecordDecl
Differential Revision: https://reviews.llvm.org/D49951

llvm-svn: 338187
2018-07-28 02:16:13 +00:00
Strahinja Petrovic
0f274c0111 This patch provides that bitfields are splitted even in case
when current field is not legal integer type.

Differential Revision: https://reviews.llvm.org/D39053

llvm-svn: 331979
2018-05-10 12:31:12 +00:00
Adrian Prantl
9fc8faf9e6 Remove \brief commands from doxygen comments.
This is similar to the LLVM change https://reviews.llvm.org/D46290.

We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers into our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.

Patch produced by

for i in $(git grep -l '\@brief'); do perl -pi -e 's/\@brief //g' $i & done
for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done

Differential Revision: https://reviews.llvm.org/D46320

llvm-svn: 331834
2018-05-09 01:00:01 +00:00
Erich Keane
cf3c4a9f24 [NFC] Fix terrible formatting of CGRecordLowering constructor.
llvm-svn: 329952
2018-04-12 20:46:31 +00:00
Alexander Kornienko
2a8c18d991 Fix typos in clang
Found via codespell -q 3 -I ../clang-whitelist.txt
Where whitelist consists of:

  archtype
  cas
  classs
  checkk
  compres
  definit
  frome
  iff
  inteval
  ith
  lod
  methode
  nd
  optin
  ot
  pres
  statics
  te
  thru

Patch by luzpaz! (This is a subset of D44188 that applies cleanly with a few
files that have dubious fixes reverted.)

Differential revision: https://reviews.llvm.org/D44188

llvm-svn: 329399
2018-04-06 15:14:32 +00:00
Richard Smith
866dee4ea0 Add helper to determine if a field is a zero-length bitfield.
llvm-svn: 328999
2018-04-02 18:29:43 +00:00
George Burgess IV
00f70bd933 Remove redundant casts. NFC
So I wrote a clang-tidy check to lint out redundant `isa`, `cast`, and
`dyn_cast`s for fun. This is a portion of what it found for clang; I
plan to do similar cleanups in LLVM and other subprojects when I find
time.

Because of the volume of changes, I explicitly avoided making any change
that wasn't highly local and obviously correct to me (e.g. we still have
a number of foo(cast<Bar>(baz)) that I didn't touch, since overloading
is a thing and the cast<Bar> did actually change the type -- just up the
class hierarchy).

I also tried to leave the types we were cast<>ing to somewhere nearby,
in cases where it wasn't locally obvious what we were dealing with
before.

llvm-svn: 326416
2018-03-01 05:43:23 +00:00
Akira Hatanaka
fc681efde4 [CodeGen] Fix an assertion failure in CGRecordLowering.
This patch fixes a bug in CGRecordLowering::accumulateBitFields where it
unconditionally starts a new run and emits a storage field when it sees
a zero-sized bitfield, which causes an assertion in insertPadding to
fail when -fno-bitfield-type-align is used.

It shouldn't emit new storage if UseZeroLengthBitfieldAlignment and
UseBitFieldTypeAlignment are both false.
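An illustrative shape of the kind of record involved (hypothetical fields; not the actual test case from the patch):
```
// Compiled with -fno-bitfield-type-align, the zero-sized bit-field below must
// not force a new storage unit, since neither zero-length bit-field alignment
// nor bit-field type alignment applies.
struct S {
  char A : 3;
  int    : 0; // zero-sized bit-field
  char B : 5;
};
```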

rdar://problem/36762205

llvm-svn: 323943
2018-02-01 03:04:15 +00:00
Wei Mi
9b3d627280 [Bitfield] Add an option to access bitfield in a fine-grained manner.
Currently all consecutive bitfields are wrapped as a large integer unless there is an
unnamed zero-sized bitfield in between. The patch provides an alternative mode which
makes a bitfield be accessed as a separate memory location if it has a legal integer
width and is naturally aligned. Such a separate bitfield may split the original
consecutive bitfields into subgroups of consecutive bitfields, and each subgroup will
be wrapped as an integer. This is all controlled by the option
-ffine-grained-bitfield-accesses. This alternative access mode can improve the access
efficiency of those bitfields with legal width that are naturally aligned, but may
reduce the chance of load/store combining of other bitfields, so choosing when to use
the option depends on how the bitfields are defined and actually accessed. For now the
option is off by default.
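An illustrative layout that the option affects (hypothetical struct; the actual split depends on target legality and alignment):
```
// Without the option, A, B, and C are accumulated into one wide storage unit.
// With -ffine-grained-bitfield-accesses, B (a legal 32-bit width, naturally
// aligned at byte offset 4) can get its own storage unit, splitting A and C
// into separate groups.
struct Flags {
  unsigned A : 4;
  unsigned B : 32;
  unsigned C : 4;
};
```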

Differential revision: https://reviews.llvm.org/D36562

llvm-svn: 315915
2017-10-16 16:50:27 +00:00
Saleem Abdulrasool
10a4972a8d revert SVN r265702, r265640
Revert the two changes to thread CodeGenOptions into the TargetInfo allocation
and to fix the layering violation by moving CodeGenOptions into Basic.
Code Generation is arguably not particularly "basic".  This addresses Richard's
post-commit review comments.  This change purely does the mechanical revert and
will be followed up with an alternate approach to thread the desired information
into TargetInfo.

llvm-svn: 265806
2016-04-08 16:52:00 +00:00
Saleem Abdulrasool
94cfc603d1 Basic: move CodeGenOptions from Frontend
This is a mechanical move of CodeGenOptions from libFrontend to libBasic.  This
fixes the layering violation introduced earlier by threading CodeGenOptions into
TargetInfo.  It should also fix the modules based self-hosting builds.  NFC.

llvm-svn: 265702
2016-04-07 17:49:44 +00:00
Yaron Keren
cdae941e03 Annotate dump() methods with LLVM_DUMP_METHOD, addressing Richard Smith's r259192 post-commit comment.
llvm-svn: 259232
2016-01-29 19:38:18 +00:00
Rui Ueyama
83aa97941f Update for LLVM function name change.
llvm-svn: 257802
2016-01-14 21:00:27 +00:00
David Majnemer
78945d0721 [MS ABI] Don't crash when inheriting from base with trailing empty array member
We got this right for Itanium but not MSVC because CGRecordLayoutBuilder
was checking if the base's size was zero when it should have been
checking the non-virtual size.

This fixes PR21040.

llvm-svn: 251036
2015-10-22 18:04:22 +00:00
Ulrich Weigand
03ce2a16bf Respect alignment of nested bitfields
tools/clang/test/CodeGen/packed-nest-unpacked.c contains this test:

struct XBitfield {
  unsigned b1 : 10;
  unsigned b2 : 12;
  unsigned b3 : 10;
};
struct YBitfield {
  char x;
  struct XBitfield y;
} __attribute((packed));
struct YBitfield gbitfield;

unsigned test7() {
  // CHECK: @test7
  // CHECK: load i32, i32* getelementptr inbounds (%struct.YBitfield, %struct.YBitfield* @gbitfield, i32 0, i32 1, i32 0), align 4
  return gbitfield.y.b2;
}

The "align 4" is actually wrong.  Accessing all of "gbitfield.y" as a single
i32 is of course possible, but that still doesn't make it 4-byte aligned as
it remains packed at offset 1 in the surrounding gbitfield object.

This alignment was changed by commit r169489, which also introduced changes
to bitfield access code in CGExpr.cpp.  Code before that change used to take
into account *both* the alignment of the field to be accessed within the
current struct, *and* the alignment of that outer struct itself; this logic
was removed by the above commit.

Neglecting to consider both values can cause incorrect code to be generated
(I've seen an unaligned access crash on SystemZ due to this bug).

In order to always use the best known alignment value, this patch removes
the CGBitFieldInfo::StorageAlignment member and replaces it with a
StorageOffset member specifying the offset from the start of the surrounding
struct to the bitfield's underlying storage.  This offset can then be combined
with the best-known alignment for a bitfield access lvalue to determine the
alignment to use when accessing the bitfield's storage.

Differential Revision: http://reviews.llvm.org/D11034

llvm-svn: 241916
2015-07-10 17:30:00 +00:00