0.9.9 API documentation
Include <glm/integer.hpp> to use these core features.
Functions

template<typename genType>
GLM_FUNC_DECL int bitCount(genType v)
    Returns the number of bits set to 1 in the binary representation of value.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, int, Q> bitCount(vec<L, T, Q> const& v)
    Returns the number of bits set to 1 in the binary representation of value.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> bitfieldExtract(vec<L, T, Q> const& Value, int Offset, int Bits)
    Extracts bits [offset, offset + bits - 1] from value, returning them in the least significant bits of the result.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> bitfieldInsert(vec<L, T, Q> const& Base, vec<L, T, Q> const& Insert, int Offset, int Bits)
    Returns the insertion of the bits least-significant bits of insert into base.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> bitfieldReverse(vec<L, T, Q> const& v)
    Returns the reversal of the bits of value.

template<typename genIUType>
GLM_FUNC_DECL int findLSB(genIUType x)
    Returns the bit number of the least significant bit set to 1 in the binary representation of value.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, int, Q> findLSB(vec<L, T, Q> const& v)
    Returns the bit number of the least significant bit set to 1 in the binary representation of value.

template<typename genIUType>
GLM_FUNC_DECL int findMSB(genIUType x)
    Returns the bit number of the most significant bit in the binary representation of value.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, int, Q> findMSB(vec<L, T, Q> const& v)
    Returns the bit number of the most significant bit in the binary representation of value.

template<length_t L, qualifier Q>
GLM_FUNC_DECL void imulExtended(vec<L, int, Q> const& x, vec<L, int, Q> const& y, vec<L, int, Q>& msb, vec<L, int, Q>& lsb)
    Multiplies 32-bit signed integers x and y, producing a 64-bit result.

template<length_t L, qualifier Q>
GLM_FUNC_DECL vec<L, uint, Q> uaddCarry(vec<L, uint, Q> const& x, vec<L, uint, Q> const& y, vec<L, uint, Q>& carry)
    Adds 32-bit unsigned integers x and y, returning the sum modulo pow(2, 32).

template<length_t L, qualifier Q>
GLM_FUNC_DECL void umulExtended(vec<L, uint, Q> const& x, vec<L, uint, Q> const& y, vec<L, uint, Q>& msb, vec<L, uint, Q>& lsb)
    Multiplies 32-bit unsigned integers x and y, producing a 64-bit result.

template<length_t L, qualifier Q>
GLM_FUNC_DECL vec<L, uint, Q> usubBorrow(vec<L, uint, Q> const& x, vec<L, uint, Q> const& y, vec<L, uint, Q>& borrow)
    Subtracts the 32-bit unsigned integer y from x, returning the difference if non-negative, or pow(2, 32) plus the difference otherwise.
Include <glm/integer.hpp> to use these core features.
These all operate component-wise. The description is per component. The notation [a, b] means the set of bits from bit-number a through bit-number b, inclusive. The lowest-order bit is bit 0.
GLM_FUNC_DECL int glm::bitCount(genType v)

Returns the number of bits set to 1 in the binary representation of value.

Template Parameters:
    genType: Signed or unsigned integer scalar types.
GLM_FUNC_DECL vec<L, int, Q> glm::bitCount(vec<L, T, Q> const& v)

Returns the number of bits set to 1 in the binary representation of value.

Template Parameters:
    L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
    T: Signed or unsigned integer scalar types.
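A short usage sketch covering both overloads; the expected results are shown in the comments:

    #include <glm/glm.hpp> // pulls in <glm/integer.hpp> together with the vector types

    int s = glm::bitCount(7);                             // 0b111 has 3 bits set: s = 3
    glm::ivec3 v = glm::bitCount(glm::ivec3(0, 255, -1)); // v = (0, 8, 32); -1 has all 32 bits set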
GLM_FUNC_DECL vec<L, T, Q> glm::bitfieldExtract(vec<L, T, Q> const& Value, int Offset, int Bits)

Extracts bits [offset, offset + bits - 1] from value, returning them in the least significant bits of the result.

For unsigned data types, the most significant bits of the result will be set to zero. For signed data types, the most significant bits will be set to the value of bit offset + bits - 1.

If bits is zero, the result will be zero. The result will be undefined if offset or bits is negative, or if the sum of offset and bits is greater than the number of bits used to store the operand.

Template Parameters:
    L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
    T: Signed or unsigned integer scalar types.
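A sketch of the extraction (the <glm/glm.hpp> include from the bitCount example above is assumed):

    glm::uvec2 Value(0xABu, 0x0Fu);
    glm::uvec2 Result = glm::bitfieldExtract(Value, 4, 4); // bits [4, 7] of each component: Result = (0xAu, 0x0u)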
GLM_FUNC_DECL vec<L, T, Q> glm::bitfieldInsert(vec<L, T, Q> const& Base, vec<L, T, Q> const& Insert, int Offset, int Bits)

Returns the insertion of the bits least-significant bits of insert into base.

The result will have bits [offset, offset + bits - 1] taken from bits [0, bits - 1] of insert, and all other bits taken directly from the corresponding bits of base. If bits is zero, the result will simply be base. The result will be undefined if offset or bits is negative, or if the sum of offset and bits is greater than the number of bits used to store the operand.

Template Parameters:
    L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
    T: Signed or unsigned integer scalar types.
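For example (same include assumption as above):

    glm::uvec2 Base(0x00000000u, 0x0000FFFFu);
    glm::uvec2 Insert(0xFu, 0x0u);
    glm::uvec2 Result = glm::bitfieldInsert(Base, Insert, 4, 4);
    // Bits [4, 7] come from Insert, everything else from Base: Result = (0xF0u, 0xFF0Fu)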
GLM_FUNC_DECL vec<L, T, Q> glm::bitfieldReverse(vec<L, T, Q> const& v)

Returns the reversal of the bits of value.

The bit numbered n of the result will be taken from bit (bits - 1) - n of value, where bits is the total number of bits used to represent value.

Template Parameters:
    L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
    T: Signed or unsigned integer scalar types.
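A sketch of the reversal on 32-bit components:

    glm::uvec2 v(1u, 0xF0000000u);
    glm::uvec2 Result = glm::bitfieldReverse(v); // bit n taken from bit 31 - n: Result = (0x80000000u, 0x0000000Fu)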
GLM_FUNC_DECL int glm::findLSB(genIUType x)

Returns the bit number of the least significant bit set to 1 in the binary representation of value.

If value is zero, -1 will be returned.

Template Parameters:
    genIUType: Signed or unsigned integer scalar types.
GLM_FUNC_DECL vec<L, int, Q> glm::findLSB(vec<L, T, Q> const& v)

Returns the bit number of the least significant bit set to 1 in the binary representation of value.

If value is zero, -1 will be returned.

Template Parameters:
    L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
    T: Signed or unsigned integer scalar types.
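Both overloads, with the zero case included:

    int s = glm::findLSB(0x10);                        // 0b10000: lowest set bit is bit 4
    glm::ivec3 r = glm::findLSB(glm::ivec3(1, 12, 0)); // r = (0, 2, -1); 12 = 0b1100, zero yields -1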
GLM_FUNC_DECL int glm::findMSB(genIUType x)

Returns the bit number of the most significant bit in the binary representation of value.

For positive integers, the result will be the bit number of the most significant bit set to 1. For negative integers, the result will be the bit number of the most significant bit set to 0. For a value of zero or negative one, -1 will be returned.

Template Parameters:
    genIUType: Signed or unsigned integer scalar types.
GLM_FUNC_DECL vec<L, int, Q> glm::findMSB(vec<L, T, Q> const& v)

Returns the bit number of the most significant bit in the binary representation of value.

For positive integers, the result will be the bit number of the most significant bit set to 1. For negative integers, the result will be the bit number of the most significant bit set to 0. For a value of zero or negative one, -1 will be returned.

Template Parameters:
    L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
    T: Signed or unsigned integer scalar types.
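Both overloads, including the special cases for zero and negative one:

    int s = glm::findMSB(0x10);                             // 0b10000: highest set bit is bit 4
    glm::ivec4 r = glm::findMSB(glm::ivec4(1, 256, 0, -1)); // r = (0, 8, -1, -1)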
GLM_FUNC_DECL void glm::imulExtended(vec<L, int, Q> const& x, vec<L, int, Q> const& y, vec<L, int, Q>& msb, vec<L, int, Q>& lsb)

Multiplies 32-bit signed integers x and y, producing a 64-bit result.

The 32 least-significant bits are returned in lsb. The 32 most-significant bits are returned in msb.

Template Parameters:
    L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
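A sketch showing the 64-bit product split across msb and lsb, including the sign-extended negative case:

    glm::ivec2 x(0x40000000, -2);
    glm::ivec2 y(4, 3);
    glm::ivec2 msb, lsb;
    glm::imulExtended(x, y, msb, lsb);
    // 0x40000000 * 4 = 2^32:                   msb.x = 1,  lsb.x = 0
    // -2 * 3 = -6, sign-extended to 64 bits:   msb.y = -1, lsb.y = -6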
GLM_FUNC_DECL vec<L, uint, Q> glm::uaddCarry(vec<L, uint, Q> const& x, vec<L, uint, Q> const& y, vec<L, uint, Q>& carry)

Adds 32-bit unsigned integers x and y, returning the sum modulo pow(2, 32).

The value carry is set to 0 if the sum was less than pow(2, 32), or to 1 otherwise.

Template Parameters:
    L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
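A minimal sketch of the wrap-around behaviour:

    glm::uvec2 x(0xFFFFFFFFu, 1u);
    glm::uvec2 y(1u, 1u);
    glm::uvec2 carry;
    glm::uvec2 sum = glm::uaddCarry(x, y, carry); // sum = (0u, 2u), carry = (1u, 0u)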
GLM_FUNC_DECL void glm::umulExtended(vec<L, uint, Q> const& x, vec<L, uint, Q> const& y, vec<L, uint, Q>& msb, vec<L, uint, Q>& lsb)

Multiplies 32-bit unsigned integers x and y, producing a 64-bit result.

The 32 least-significant bits are returned in lsb. The 32 most-significant bits are returned in msb.

Template Parameters:
    L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
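The unsigned counterpart of the imulExtended example above:

    glm::uvec2 x(0xFFFFFFFFu, 2u);
    glm::uvec2 y(2u, 3u);
    glm::uvec2 msb, lsb;
    glm::umulExtended(x, y, msb, lsb);
    // 0xFFFFFFFF * 2 = 0x1FFFFFFFE: msb.x = 1u, lsb.x = 0xFFFFFFFEu
    // 2 * 3 = 6:                    msb.y = 0u, lsb.y = 6u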
GLM_FUNC_DECL vec<L, uint, Q> glm::usubBorrow(vec<L, uint, Q> const& x, vec<L, uint, Q> const& y, vec<L, uint, Q>& borrow)

Subtracts the 32-bit unsigned integer y from x, returning the difference if non-negative, or pow(2, 32) plus the difference otherwise.

The value borrow is set to 0 if x >= y, or to 1 otherwise.

Template Parameters:
    L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
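A sketch of both the in-range and the wrapped case:

    glm::uvec2 x(5u, 3u);
    glm::uvec2 y(3u, 5u);
    glm::uvec2 borrow;
    glm::uvec2 diff = glm::usubBorrow(x, y, borrow);
    // 5 - 3 = 2:               diff.x = 2u,          borrow.x = 0u
    // 3 - 5 wraps to 2^32 - 2: diff.y = 0xFFFFFFFEu, borrow.y = 1u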