0.9.9 API documentation
These all operate component-wise.
Functions
template<typename genType>
GLM_FUNC_DECL int bitCount (genType v)
    Returns the number of bits set to 1 in the binary representation of value.
template<length_t L, typename T, qualifier P>
GLM_FUNC_DECL vec<L, int, P> bitCount (vec<L, T, P> const &v)
    Returns the number of bits set to 1 in the binary representation of value.
template<length_t L, typename T, qualifier P>
GLM_FUNC_DECL vec<L, T, P> bitfieldExtract (vec<L, T, P> const &Value, int Offset, int Bits)
    Extracts bits [offset, offset + bits - 1] from value, returning them in the least significant bits of the result.
template<length_t L, typename T, qualifier P>
GLM_FUNC_DECL vec<L, T, P> bitfieldInsert (vec<L, T, P> const &Base, vec<L, T, P> const &Insert, int Offset, int Bits)
    Returns the result of inserting the bits least-significant bits of insert into base.
template<length_t L, typename T, qualifier P>
GLM_FUNC_DECL vec<L, T, P> bitfieldReverse (vec<L, T, P> const &v)
    Returns the reversal of the bits of value.
template<typename genIUType>
GLM_FUNC_DECL int findLSB (genIUType x)
    Returns the bit number of the least significant bit set to 1 in the binary representation of value.
template<length_t L, typename T, qualifier P>
GLM_FUNC_DECL vec<L, int, P> findLSB (vec<L, T, P> const &v)
    Returns the bit number of the least significant bit set to 1 in the binary representation of value.
template<typename genIUType>
GLM_FUNC_DECL int findMSB (genIUType x)
    Returns the bit number of the most significant bit in the binary representation of value.
template<length_t L, typename T, qualifier P>
GLM_FUNC_DECL vec<L, int, P> findMSB (vec<L, T, P> const &v)
    Returns the bit number of the most significant bit in the binary representation of value.
template<length_t L, qualifier P>
GLM_FUNC_DECL void imulExtended (vec<L, int, P> const &x, vec<L, int, P> const &y, vec<L, int, P> &msb, vec<L, int, P> &lsb)
    Multiplies 32-bit signed integers x and y, producing a 64-bit result.
template<length_t L, qualifier P>
GLM_FUNC_DECL vec<L, uint, P> uaddCarry (vec<L, uint, P> const &x, vec<L, uint, P> const &y, vec<L, uint, P> &carry)
    Adds 32-bit unsigned integers x and y, returning the sum modulo pow(2, 32).
template<length_t L, qualifier P>
GLM_FUNC_DECL void umulExtended (vec<L, uint, P> const &x, vec<L, uint, P> const &y, vec<L, uint, P> &msb, vec<L, uint, P> &lsb)
    Multiplies 32-bit unsigned integers x and y, producing a 64-bit result.
template<length_t L, qualifier P>
GLM_FUNC_DECL vec<L, uint, P> usubBorrow (vec<L, uint, P> const &x, vec<L, uint, P> const &y, vec<L, uint, P> &borrow)
    Subtracts the 32-bit unsigned integer y from x, returning the difference if non-negative, or pow(2, 32) plus the difference otherwise.
These all operate component-wise.
The description is per component. The notation [a, b] means the set of bits from bit-number a through bit-number b, inclusive. The lowest-order bit is bit 0.
GLM_FUNC_DECL int glm::bitCount(genType v)
Returns the number of bits set to 1 in the binary representation of value.
genType: Signed or unsigned integer scalar or vector types.
GLM_FUNC_DECL vec<L, int, P> glm::bitCount(vec<L, T, P> const &v)
Returns the number of bits set to 1 in the binary representation of value.
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
T: Signed or unsigned integer scalar or vector types.
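A minimal usage sketch covering both overloads; the input values are illustrative only, not part of the reference:

    #include <glm/glm.hpp>

    int n = glm::bitCount(0xF0u);                              // 4: four bits set
    glm::ivec3 c = glm::bitCount(glm::uvec3(0u, 0xFFu, ~0u));  // (0, 8, 32)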
GLM_FUNC_DECL vec<L, T, P> glm::bitfieldExtract(vec<L, T, P> const &Value, int Offset, int Bits)
Extracts bits [offset, offset + bits - 1] from value, returning them in the least significant bits of the result.
For unsigned data types, the most significant bits of the result will be set to zero. For signed data types, the most significant bits will be set to the value of bit offset + bits - 1.
If bits is zero, the result will be zero. The result will be undefined if offset or bits is negative, or if the sum of offset and bits is greater than the number of bits used to store the operand.
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
T: Signed or unsigned integer scalar types.
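A short sketch, assuming <glm/glm.hpp> is included; the input values are illustrative:

    glm::uvec2 v(0xABCD1234u, 0xFF00FF00u);
    glm::uvec2 r = glm::bitfieldExtract(v, 8, 8);  // bits [8, 15] of each component
    // r == uvec2(0x12u, 0xFFu)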
GLM_FUNC_DECL vec<L, T, P> glm::bitfieldInsert(vec<L, T, P> const &Base, vec<L, T, P> const &Insert, int Offset, int Bits)
Returns the result of inserting the bits least-significant bits of insert into base.
The result will have bits [offset, offset + bits - 1] taken from bits [0, bits - 1] of insert, and all other bits taken directly from the corresponding bits of base. If bits is zero, the result will simply be base. The result will be undefined if offset or bits is negative, or if the sum of offset and bits is greater than the number of bits used to store the operand.
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
T: Signed or unsigned integer scalar or vector types.
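A short sketch, assuming <glm/glm.hpp> is included; the input values are illustrative:

    glm::uvec2 base(0xFFFFFFFFu, 0x00000000u);
    glm::uvec2 ins (0x00000000u, 0x0000000Fu);
    glm::uvec2 r = glm::bitfieldInsert(base, ins, 8, 8);  // overwrite bits [8, 15] of base
    // r == uvec2(0xFFFF00FFu, 0x00000F00u)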
GLM_FUNC_DECL vec<L, T, P> glm::bitfieldReverse(vec<L, T, P> const &v)
Returns the reversal of the bits of value.
The bit numbered n of the result will be taken from bit (bits - 1) - n of value, where bits is the total number of bits used to represent value.
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
T: Signed or unsigned integer scalar or vector types.
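A short sketch, assuming <glm/glm.hpp> is included; the input values are illustrative:

    glm::uvec2 v(0x00000001u, 0x80000000u);
    glm::uvec2 r = glm::bitfieldReverse(v);
    // r == uvec2(0x80000000u, 0x00000001u): for 32-bit components, bit n moves to bit 31 - n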
GLM_FUNC_DECL int glm::findLSB(genIUType x)
Returns the bit number of the least significant bit set to 1 in the binary representation of value.
If value is zero, -1 will be returned.
genIUType: Signed or unsigned integer scalar types.
GLM_FUNC_DECL vec<L, int, P> glm::findLSB(vec<L, T, P> const &v)
Returns the bit number of the least significant bit set to 1 in the binary representation of value.
If value is zero, -1 will be returned.
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
T: Signed or unsigned integer scalar types.
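A minimal usage sketch covering both overloads, assuming <glm/glm.hpp> is included; the values are illustrative:

    int s = glm::findLSB(8u);                                      // 3: bit 3 is the lowest set bit
    glm::ivec3 v = glm::findLSB(glm::uvec3(0u, 1u, 0x80000000u));  // (-1, 0, 31)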
GLM_FUNC_DECL int glm::findMSB(genIUType x)
Returns the bit number of the most significant bit in the binary representation of value.
For positive integers, the result will be the bit number of the most significant bit set to 1. For negative integers, the result will be the bit number of the most significant bit set to 0. For a value of zero or negative one, -1 will be returned.
genIUType: Signed or unsigned integer scalar types.
GLM_FUNC_DECL vec<L, int, P> glm::findMSB(vec<L, T, P> const &v)
Returns the bit number of the most significant bit in the binary representation of value.
For positive integers, the result will be the bit number of the most significant bit set to 1. For negative integers, the result will be the bit number of the most significant bit set to 0. For a value of zero or negative one, -1 will be returned.
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
T: Signed or unsigned integer scalar types.
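A minimal usage sketch covering both overloads, assuming <glm/glm.hpp> is included; the values are illustrative:

    int m = glm::findMSB(8u);                                   // 3
    glm::ivec4 v = glm::findMSB(glm::ivec4(0, -1, 256, -256));  // (-1, -1, 8, 7)
    // for -256 (0xFFFFFF00) the most significant 0 bit is bit 7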
GLM_FUNC_DECL void glm::imulExtended(vec<L, int, P> const &x, vec<L, int, P> const &y, vec<L, int, P> &msb, vec<L, int, P> &lsb)
Multiplies 32-bit signed integers x and y, producing a 64-bit result.
The 32 least-significant bits are returned in lsb. The 32 most-significant bits are returned in msb.
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
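A short sketch, assuming <glm/glm.hpp> is included; the values are illustrative:

    glm::ivec2 x(100000, -3), y(100000, 7), msb, lsb;
    glm::imulExtended(x, y, msb, lsb);
    // 100000 * 100000 = 10000000000 = 2 * 2^32 + 1410065408
    //   msb.x == 2, lsb.x == 1410065408
    // -3 * 7 = -21: msb.y == -1, lsb.y == -21 (the two halves of the 64-bit two's-complement result)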
GLM_FUNC_DECL vec<L, uint, P> glm::uaddCarry(vec<L, uint, P> const &x, vec<L, uint, P> const &y, vec<L, uint, P> &carry)
Adds 32-bit unsigned integers x and y, returning the sum modulo pow(2, 32).
The value carry is set to 0 if the sum was less than pow(2, 32), or to 1 otherwise.
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
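A short sketch, assuming <glm/glm.hpp> is included; the values are illustrative:

    glm::uvec2 x(0xFFFFFFFFu, 10u), y(1u, 20u), carry;
    glm::uvec2 sum = glm::uaddCarry(x, y, carry);
    // sum   == uvec2(0u, 30u)   : result reduced modulo 2^32
    // carry == uvec2(1u, 0u)    : 1 where the exact sum did not fit in 32 bits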
GLM_FUNC_DECL void glm::umulExtended(vec<L, uint, P> const &x, vec<L, uint, P> const &y, vec<L, uint, P> &msb, vec<L, uint, P> &lsb)
Multiplies 32-bit unsigned integers x and y, producing a 64-bit result.
The 32 least-significant bits are returned in lsb. The 32 most-significant bits are returned in msb.
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
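A short sketch, assuming <glm/glm.hpp> is included; the values are illustrative:

    glm::uvec2 x(0xFFFFFFFFu, 2u), y(2u, 3u), msb, lsb;
    glm::umulExtended(x, y, msb, lsb);
    // 0xFFFFFFFF * 2 = 0x1FFFFFFFE: msb.x == 1u, lsb.x == 0xFFFFFFFEu
    // 2 * 3 = 6:                    msb.y == 0u, lsb.y == 6u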
GLM_FUNC_DECL vec<L, uint, P> glm::usubBorrow(vec<L, uint, P> const &x, vec<L, uint, P> const &y, vec<L, uint, P> &borrow)
Subtracts the 32-bit unsigned integer y from x, returning the difference if non-negative, or pow(2, 32) plus the difference otherwise.
The value borrow is set to 0 if x >= y, or to 1 otherwise.
L: An integer between 1 and 4, inclusive, that specifies the dimension of the vector.
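A short sketch, assuming <glm/glm.hpp> is included; the values are illustrative:

    glm::uvec2 x(5u, 3u), y(3u, 5u), borrow;
    glm::uvec2 diff = glm::usubBorrow(x, y, borrow);
    // diff   == uvec2(2u, 0xFFFFFFFEu) : the second component is 2^32 + (3 - 5)
    // borrow == uvec2(0u, 1u)          : 1 where x < y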