Search Results

Search found 181 results on 8 pages for 'bitwise'.

Page 1/8 | 1 2 3 4 5 6 7 8  | Next Page >

  • Bitwise AND, Bitwise Inclusive OR question, in Java

    - by Dave
    I've a few lines of code within a project that I can't see the value of... buffer[i] = (currentByte & 0x7F) | (currentByte & 0x80); It reads the file buffer from a file, stored as bytes, and then transfers them to buffer[i] as shown, but I can't understand what the overall purpose is. Any ideas? Thanks

    Read the article
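
    A quick way to see what the expression in the question above does: (x & 0x7F) keeps the low seven bits and (x & 0x80) keeps the top bit of the byte, so ORing them back together is just (x & 0xFF). When a signed byte is promoted to a wider signed integer, this strips the sign-extension bits and leaves the unsigned 0-255 value. A minimal C sketch of the same identity (the Java snippet behaves the same way, since Java bytes are signed):

        #include <stdio.h>

        int main(void)
        {
            signed char currentByte = (signed char)0xA9;  /* a "negative" byte */
            int promoted = currentByte;                   /* sign-extended int */

            int masked = (promoted & 0x7F) | (promoted & 0x80);

            /* Both lines print a9: the two-step mask is equivalent to & 0xFF. */
            printf("%x\n", masked);
            printf("%x\n", promoted & 0xFF);
            return 0;
        }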

  • C question: Padding bits in unsigned integers and bitwise operations (C89)

    - by Anonymous Question Guy
    I have a lot of code that performs bitwise operations on unsigned integers. I wrote my code with the assumption that those operations were on integers of fixed width without any padding bits. For example, an array of 32-bit unsigned integers in which all 32 bits are available for each integer. I'm looking to make my code more portable and I'm focused on making sure I'm C89 compliant (in this case). One of the issues that I've come across is possible padded integers. Take this extreme example, taken from the GMP manual: However on Cray vector systems it may be noted that short and int are always stored in 8 bytes (and with sizeof indicating that) but use only 32 or 46 bits. The nails feature can account for this, by passing for instance 8*sizeof(int)-INT_BIT. I've also read about this type of padding in other places. I actually read a post on SO last night (forgive me, I don't have the link and I'm going to cite something similar from memory) where if you have, say, a double with 60 usable bits, the other 4 could be used for padding and those padding bits could serve some internal purpose so they cannot be modified. So let's say for example my code is compiled on a platform where an unsigned int type is sized at 4 bytes, each byte being 8 bits, but the most significant 2 bits are padding bits. Would UINT_MAX in that case be 0x3FFFFFFF (1073741823)? #include <stdio.h> #include <stdlib.h> /* padding bits represented by underscores */ int main( int argc, char **argv ) { unsigned int a = 0x2AAAAAAA; /* __101010101010101010101010101010 */ unsigned int b = 0x15555555; /* __010101010101010101010101010101 */ unsigned int c = a ^ b; /* ?? __111111111111111111111111111111 */ unsigned int d = c << 5; /* ?? __111111111111111111111111100000 */ unsigned int e = d >> 5; /* ?? __000001111111111111111111111111 */ printf( "a: %X\nb: %X\nc: %X\nd: %X\ne: %X\n", a, b, c, d, e ); return 0; } Is it safe to XOR two integers with padding bits? Wouldn't I XOR whatever the padding bits are? I can't find this behavior covered in C89. Furthermore, is the c var guaranteed to be 0x3FFFFFFF, or, if for example the two padding bits were both on in a or b, would c be 0xFFFFFFFF? Same question with d and e. Am I manipulating the padding bits by shifting? I would expect to see this below, assuming 32 bits with the 2 most significant bits used for padding, but I want to know if something like this is guaranteed: a: 2AAAAAAA b: 15555555 c: 3FFFFFFF d: 3FFFFFE0 e: 01FFFFFF Also, are padding bits always the most significant bits, or could they be the least significant bits? Thanks guys EDIT 12/19/2010 5PM EST: Christoph has answered my question. Thanks! I had also asked (above) whether padding bits are always the most significant bits. This is cited in the rationale for the C99 standard, and the answer is no. I am playing it safe and assuming the same for C89. Here is specifically what the C99 rationale says for §6.2.6.2 (Representation of Integer Types): Padding bits are user-accessible in an unsigned integer type. For example, suppose a machine uses a pair of 16-bit shorts (each with its own sign bit) to make up a 32-bit int and the sign bit of the lower short is ignored when used in this 32-bit int. Then, as a 32-bit signed int, there is a padding bit (in the middle of the 32 bits) that is ignored in determining the value of the 32-bit signed int. But, if this 32-bit item is treated as a 32-bit unsigned int, then that padding bit is visible to the user’s program.
The C committee was told that there is a machine that works this way, and that is one reason that padding bits were added to C99. Footnotes 44 and 45 mention that parity bits might be padding bits. The committee does not know of any machines with user-accessible parity bits within an integer. Therefore, the committee is not aware of any machines that treat parity bits as padding bits. EDIT 12/28/2010 3PM EST: I found an interesting discussion on comp.lang.c from a few months ago. Bitwise Operator Effects on Padding Bits (VelocityReviews reader) Bitwise Operator Effects on Padding Bits (Google Groups alternate link) One point made by Dietmar which I found interesting: Let's note that padding bits are not necessary for the existence of trap representations; combinations of value bits which do not represent a value of the object type would also do.

    Read the article
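
    One portable way to act on the concern raised above: the number of value bits in unsigned int can be counted at run time by probing UINT_MAX, without assuming anything about padding. A minimal C89-style sketch (the helper name is made up for illustration):

        #include <limits.h>
        #include <stdio.h>

        /* Count the value bits of unsigned int by shifting UINT_MAX down to zero.
           If the type has padding bits, this count is smaller than
           CHAR_BIT * sizeof(unsigned int). */
        static int uint_value_bits(void)
        {
            int bits = 0;
            unsigned int max = UINT_MAX;
            while (max != 0) {
                bits++;
                max >>= 1;
            }
            return bits;
        }

        int main(void)
        {
            printf("value bits: %d, storage bits: %d\n",
                   uint_value_bits(), (int)(CHAR_BIT * sizeof(unsigned int)));
            return 0;
        }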

  • Is it possible to implement bitwise operators using integer arithmetic?

    - by Statement
    Hello World! I am facing a rather peculiar problem. I am working on a compiler for an architecture that doesn't support bitwise operations. However, it handles signed 16-bit integer arithmetic, and I was wondering if it would be possible to implement bitwise operations using only: Addition (c = a + b) Subtraction (c = a - b) Division (c = a / b) Multiplication (c = a * b) Modulus (c = a % b) Minimum (c = min(a, b)) Maximum (c = max(a, b)) Comparisons (c = (a < b), c = (a == b), c = (a <= b), etc.) Jumps (goto, for, etc.) The bitwise operations I want to be able to support are: Or (c = a | b) And (c = a & b) Xor (c = a ^ b) Left Shift (c = a << b) Right Shift (c = a >> b) (All integers are signed so this is a problem) Signed Shift (c = a >>> b) One's Complement (a = ~b) (Already found a solution, see below) Normally the problem is the other way around; how to achieve arithmetic optimizations using bitwise hacks. However, not in this case. Writable memory is very scarce on this architecture, hence the need for bitwise operations. The bitwise functions themselves should not use a lot of temporary variables. However, constant read-only data & instruction memory is abundant. A side note here also is that jumps and branches are not expensive and all data is readily cached. Jumps cost half the cycles that arithmetic (including load/store) instructions do. In other words, all of the above supported functions cost twice the cycles of a single jump. Some thoughts that might help: I figured out that you can do one's complement (negate bits) with the following code: // Bitwise one's complement b = ~a; // Arithmetic one's complement b = -1 - a; I also remember the old shift hack when dividing with a power of two, so the bitwise shift can be expressed as: // Bitwise left shift b = a << 4; // Arithmetic left shift b = a * 16; // 2^4 = 16 // Signed right shift b = a >>> 4; // Arithmetic right shift b = a / 16; For the rest of the bitwise operations I am slightly clueless. I wish the architects of this architecture had supplied bit operations. I would also like to know if there is a fast/easy way of computing a power of two (for shift operations) without using a memory data table. A naive solution would be to jump into a field of multiplications: b = 1; switch (a) { case 15: b = b * 2; case 14: b = b * 2; // ... exploiting fallthrough (instruction memory is magnitudes larger) case 2: b = b * 2; case 1: b = b * 2; } Or a Set & Jump approach: switch (a) { case 15: b = 32768; break; case 14: b = 16384; break; // ... exploiting the fact that a jump is faster than one additional mul // at the cost of doubling the instruction memory footprint. case 2: b = 4; break; case 1: b = 2; break; }

    Read the article
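
    For the AND/OR/XOR part of the question above, the usual fallback is to peel bits off with % 2 and / 2 and rebuild the result with additions. A rough C sketch (assuming non-negative operands and an int wide enough to hold the 16-bit result; the other two operators only change the per-bit condition):

        /* Bitwise AND using only +, -, *, /, %, comparisons and a loop. */
        int arith_and(int a, int b)
        {
            int result = 0;
            int weight = 1;      /* value of the current bit position */
            int i;

            for (i = 0; i < 16; i++) {
                /* AND: both low bits set; OR: either one; XOR: exactly one. */
                if (a % 2 == 1 && b % 2 == 1)
                    result = result + weight;
                a = a / 2;
                b = b / 2;
                weight = weight * 2;
            }
            return result;
        }

    A constant right shift can then be built the same way from repeated division by 2, matching the a / 16 trick already noted in the question.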

  • Bitwise Interval Arithmetic

    - by KennyTM
    I've recently read an interesting thread on the D newsgroup, which basically asks, Given two signed integers a ∈ [amin, amax], b ∈ [bmin, bmax], what is the tightest interval of a | b? I'm wondering whether interval arithmetic can be applied to general bitwise operators (assuming infinite bits). The bitwise-NOT and shifts are trivial since they just correspond to −1 − x and 2^n · x. But bitwise-AND/OR are a lot trickier, due to the mix of bitwise and arithmetic properties. Is there a polynomial-time algorithm to compute the intervals of bitwise-AND/OR? Note: Assume all bitwise operations run in linear time (in the number of bits), and that testing/setting a bit is constant time. The brute-force algorithm runs in exponential time. Because ~(a | b) = ~a & ~b and a ^ b = (a | b) & ~(a & b), solving the bitwise-AND and -NOT problem implies bitwise-OR and -XOR are done. Although the content of that thread suggests min{a | b} = max(amin, bmin), it is not the tightest bound. Just consider [2, 3] | [8, 9] = [10, 11].

    Read the article
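
    For the AND/OR case raised above there is in fact a linear-time (in the number of bits) answer: Hacker's Delight gives routines that propagate bounds through | and &. Below is a sketch of its minOR, adapted here for non-negative bounds held in fixed 32-bit unsigned intervals [a, b] and [c, d]; maxOR, minAND and maxAND follow the same pattern.

        #include <stdio.h>

        /* Smallest possible value of x | y with x in [a, b] and y in [c, d],
           after Hacker's Delight: scan from the top bit down; where one lower
           bound has a 0 and the other a 1, try rounding the 0-side bound up to
           that bit and clearing everything below it, so the low bits drop out
           of the OR. */
        static unsigned minOR(unsigned a, unsigned b, unsigned c, unsigned d)
        {
            unsigned m = 0x80000000u, temp;
            while (m != 0) {
                if (~a & c & m) {
                    temp = (a | m) & (0u - m);   /* 0u - m == -m mod 2^32 */
                    if (temp <= b) { a = temp; break; }
                } else if (a & ~c & m) {
                    temp = (c | m) & (0u - m);
                    if (temp <= d) { c = temp; break; }
                }
                m >>= 1;
            }
            return a | c;
        }

        int main(void)
        {
            /* The example from the question: [2, 3] | [8, 9] has minimum 10. */
            printf("%u\n", minOR(2u, 3u, 8u, 9u));
            return 0;
        }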

  • What kind of specific projects can I do to master bitwise operations in C++? Also is there a canonical book? [closed]

    - by Ford
    I don't use C++ or bitwise operations at my current job but I'm thinking of applying to companies where it is a requirement to be fluent with them (on their tests anyway). So my question is: Can anyone suggest a project which will require gaining a fluency in bitwise operations to complete? On a side note, is there a canonical book on optimization techniques using bitwise operations since that seems to be an important use of them?

    Read the article

  • 48-bit bitwise operations in Javascript?

    - by randomhelp
    I've been given the task of porting Java's java.util.Random to JavaScript, and I've run across a huge performance hit/inaccuracy using bitwise operators in JavaScript on sufficiently large numbers. Some cursory research states that "bitwise operators in JavaScript are inherently slow," because internally it appears that JavaScript will cast all of its double values into signed 32-bit integers to do the bitwise operations (see https://developer.mozilla.org/En/Core_JavaScript_1.5_Reference/Operators/Bitwise_Operators for more on this.) Because of this, I can't do a direct port of the Java random number generator, and I need to get the same numeric results as java.util.Random(). Writing something like this.next = function(bits) { if (!bits) { bits = 48; } this.seed = (this.seed * 25214903917 + 11) & ((1 << 48) - 1); return this.seed >>> (48 - bits); }; (which is an almost-direct port of the java.util.Random() code) won't work properly, since JavaScript can't do bitwise operations on an integer that size. I've figured out that I can just make a seedable random number generator in 32-bit space using the Lehmer algorithm, but the trick is that I need to get the same values as I would with java.util.Random(). What should I do to make a faster, functional port?

    Read the article
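
    For reference, the behaviour the port above has to reproduce, sketched in C with a 64-bit integer so the 48-bit state fits exactly; a JavaScript version has to produce the same values using only doubles, for example by splitting the state into two 24-bit halves or by replacing the mask and shift with modulo and floor-division by powers of two.

        #include <stdint.h>

        /* Linear congruential step used by java.util.Random: 48-bit state,
           multiplier 0x5DEECE66D, increment 0xB, top bits returned. */
        static uint64_t seed;

        static uint32_t next(int bits)
        {
            seed = (seed * UINT64_C(0x5DEECE66D) + 0xB) & ((UINT64_C(1) << 48) - 1);
            return (uint32_t)(seed >> (48 - bits));
        }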

  • Using bitwise operators on > 32 bit integers

    - by dqhendricks
    I am using bitwise operations in order to represent many access control flags within one integer. ADMIN_ACCESS = 1; EDIT_ACCOUNT_ACCESS = 2; EDIT_ORDER_ACCESS = 4; var myAccess = 3; // ie: ( ADMIN_ACCESS | EDIT_ACCOUNT_ACCESS ) if ( myAccess & EDIT_ACCOUNT_ACCESS ) { // check for correct access // allow for editing of account } Most of this is occurring on the PHP side of my project. There is one piece however where JavaScript is used to join several access flags using | when saving someone's access level. This works fine to a point. I have found that once an integer (flag) gets too large (> 32 bits), it no longer works correctly with bitwise operators in JavaScript. For instance: alert( 4294967296 | 1 ); // equals 1, but should equal 4294967297 I am trying to find a workaround for this so that I do not have to limit my number of access control flags to 32. Each access control flag is two times the previous control flag so that each control flag will not interfere with other control flags. dec(4) = bin(100) dec(8) = bin(1000) dec(16) = bin(10000) I have noticed that when adding two of these flags together with a simple +, it seems to come out with the same answer as a bitwise OR operation, but I am having trouble wrapping my head around whether this is a simple substitution, or if there might be problems with doing this. Can anyone comment on the validity of this workaround? Example: (4294967296 | 262144 | 524288) == (4294967296 + 262144 + 524288)

    Read the article
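
    The workaround in the question above is valid exactly when the flags never share a set bit: if (a & b) == 0, then a + b and a | b are the same number, so distinct powers of two can safely be combined with + (although above 2^53 even + stops being exact in JavaScript doubles). A small C sketch of the identity using 64-bit integers:

        #include <stdint.h>
        #include <inttypes.h>
        #include <stdio.h>

        int main(void)
        {
            uint64_t a = UINT64_C(4294967296);   /* 2^32 */
            uint64_t b = UINT64_C(262144);       /* 2^18 */
            uint64_t c = UINT64_C(524288);       /* 2^19 */

            /* No two of these share a bit, so OR and + agree. */
            printf("%" PRIu64 " %" PRIu64 "\n", a | b | c, a + b + c);
            printf("disjoint: %d\n", ((a & b) | (a & c) | (b & c)) == 0);
            return 0;
        }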

  • Bitwise Shifting in C

    - by user313943
    I've recently decided to undertake an SMS project for sending and receiving SMS through a mobile. The data is sent in PDU format - I am required to change ASCII characters to 7-bit GSM alphabet characters. To do this I've come across several examples, such as http://www.dreamfabric.com/sms/hello.html This example shows the rightmost bits of the second septet being inserted into the first septet to create an octet. Bitwise shifts alone do not cause this to happen, as >> inserts zeros on the left and << inserts zeros on the right. As I understand it, I need some kind of bitwise rotate to create this - can anyone tell me how to move bits from the right-hand side and insert them on the left? Thanks,

    Read the article
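
    What the linked example describes is not a rotate but the standard GSM 03.38 septet packing: each output octet takes what is left of the current septet in its low bits and borrows the low bits of the next septet for its high bits. A sketch in C (the function name is made up; for instance, "hello" packs to E8 32 9B FD 06):

        #include <stddef.h>

        /* Pack n 7-bit characters into octets; returns the number of octets
           written. Every 8th septet is absorbed entirely into earlier octets. */
        size_t gsm_pack7(const unsigned char *septets, size_t n, unsigned char *out)
        {
            size_t i, olen = 0;
            int shift = 0;

            for (i = 0; i < n; i++) {
                unsigned int cur  = septets[i] & 0x7F;
                unsigned int next = (i + 1 < n) ? (septets[i + 1] & 0x7F) : 0;

                if (shift == 7) {        /* this septet was already carried over */
                    shift = 0;
                    continue;
                }
                out[olen++] = (unsigned char)(((cur >> shift) | (next << (7 - shift))) & 0xFF);
                shift++;
            }
            return olen;
        }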

  • bitwise OR on strings

    - by mr.bio
    How can I do a bitwise OR on strings? A: 10001 01010 ------ 11011 Why on strings? The bit strings can have a length of 40-50 bits. Maybe this could be problematic with int? Any ideas?

    Read the article
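
    If the values really do live in strings of '0'/'1' characters, the OR can simply be done character by character; alternatively, 40-50 bits fit comfortably in a 64-bit integer type. A minimal C sketch of the string approach (assuming both inputs have the same length):

        #include <stddef.h>

        /* OR two equal-length strings of '0'/'1' characters into out,
           which must have room for the result plus a terminating NUL. */
        void or_bitstrings(const char *a, const char *b, char *out)
        {
            size_t i;
            for (i = 0; a[i] != '\0' && b[i] != '\0'; i++)
                out[i] = (a[i] == '1' || b[i] == '1') ? '1' : '0';
            out[i] = '\0';
        }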

  • PHP bitwise left shifting 32 spaces problem and bad results with large numbers arithmetic operations

    - by Victor Stanciu
    Hello, I have the following problems: First: I am trying to do a 32-place bitwise left shift on a large number, and for some reason the number is always returned as-is. For example: echo(516103988<<32); // echoes 516103988 Because shifting the bits to the left one place is the equivalent of multiplying by 2, I tried multiplying the number by 2^32, and it works; it returns 2216649749795176448. Second: I have to add 9379 to the number from the above point: printf('%0.0f', 2216649749795176448 + 9379); // prints 2216649749795185920 Should print: 2216649749795185827

    Read the article
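
    Both symptoms share a root cause: on a 32-bit PHP build the native integer is only 32 bits wide (so shifting by the full width is meaningless), and once the value is promoted to a float, a 64-bit double cannot represent 2216649749795185827 exactly, so the low digits are rounded away. With a genuine 64-bit integer type both steps are exact; a small C illustration:

        #include <stdint.h>
        #include <inttypes.h>
        #include <stdio.h>

        int main(void)
        {
            uint64_t x = (uint64_t)516103988 << 32;      /* 2216649749795176448 */

            printf("%" PRIu64 "\n", x + 9379);           /* 2216649749795185827 */
            printf("%.0f\n", (double)(x + 9379));        /* rounded: ...185920  */
            return 0;
        }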

  • Algorithm for bitwise fiddling

    - by EquinoX
    If I have a 32-bit binary number and I want to replace the lower 16 bits with a 16-bit number that I have, keeping the upper 16 bits to produce a new binary number, how can I do this using simple bitwise operators? For example, the 32-bit binary number is: 1010 0000 1011 1111 0100 1000 1010 1001 and the lower 16 bits I have are: 0000 0000 0000 0001 so the result is: 1010 0000 1011 1111 0000 0000 0000 0001 How can I do this?

    Read the article
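
    A sketch of the standard answer to the question above: mask away the low half with AND, then drop the new half in with OR (values taken from the example, written in C):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint32_t x   = 0xA0BF48A9;  /* 1010 0000 1011 1111 0100 1000 1010 1001 */
            uint32_t low = 0x0001;      /* new lower 16 bits */

            uint32_t result = (x & 0xFFFF0000u) | (low & 0xFFFFu);

            printf("%08X\n", result);   /* A0BF0001 */
            return 0;
        }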

  • Bitwise OR of constants

    - by ryyst
    While reading some documentation here, I came across this: unsigned unitFlags = NSYearCalendarUnit | NSMonthCalendarUnit | NSDayCalendarUnit; NSDateComponents *comps = [gregorian components:unitFlags fromDate:date]; I have no idea how this works. I read up on the bitwise operators in C, but I do not understand how you can fit three (or more!) constants inside one int and later being able to somehow extract them back from the int? Digging further down the documentation, I also found this, which is probably related: typedef enum { kCFCalendarUnitEra = (1 << 1), kCFCalendarUnitYear = (1 << 2), kCFCalendarUnitMonth = (1 << 3), kCFCalendarUnitDay = (1 << 4), kCFCalendarUnitHour = (1 << 5), kCFCalendarUnitMinute = (1 << 6), kCFCalendarUnitSecond = (1 << 7), kCFCalendarUnitWeek = (1 << 8), kCFCalendarUnitWeekday = (1 << 9), kCFCalendarUnitWeekdayOrdinal = (1 << 10), } CFCalendarUnit; How do the (1 << 3) statements / variables work? I'm sorry if this is trivial, but could someone please enlighten me by either explaining or maybe posting a link to a good explanation? Thanks! -- ry

    Read the article
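
    The trick behind the snippet above is that each constant occupies its own bit: 1 << 3 is simply the number with only bit 3 set, so ORing constants together packs several yes/no answers into one integer, and ANDing with a constant later asks whether that particular bit is set. A small C sketch with made-up flag names:

        #include <stdio.h>

        enum {
            UNIT_YEAR  = 1 << 0,   /* bit 0 */
            UNIT_MONTH = 1 << 1,   /* bit 1 */
            UNIT_DAY   = 1 << 2    /* bit 2 */
        };

        int main(void)
        {
            unsigned flags = UNIT_YEAR | UNIT_DAY;      /* pack: bits 0 and 2 set */

            if (flags & UNIT_DAY)                       /* extract: is bit 2 set? */
                printf("day requested\n");
            if (!(flags & UNIT_MONTH))
                printf("month not requested\n");
            return 0;
        }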

  • Inherited variables are not reading correctly when using bitwise comparisons

    - by Shawn B
    Hey, I have a few classes set up for a game, with XMapObject as the base, and XEntity, XEnviron, and XItem inheriting it. MapObjects have a number of flags, one of them being MAPOBJECT_SOLID. My problem is that XEntity is the only class that correctly detects MAPOBJECT_SOLID. Both Items and Environs are always considered solid by the game, regardless of the flag's state. What is important is that Environs and Items should almost never be solid. Here are the relevant code samples: XMapObject: class XMapObject : public XObject { public: Uint8 MapObjectType,Location[2],MapObjectFlags; XMapObject *NextMapObject,*PrevMapObject; XMapObject(); void CreateMapObject(Uint8 MapObjectType); void SpawnMapObject(Uint8 MapObjectLocation[2]); void RemoveMapObject(); void DeleteMapObject(); void MapObjectSetLocation(Uint8 Y,Uint8 X); void MapObjectMapLink(); void MapObjectMapUnlink(); }; XEntity: class XEntity : public XMapObject { public: Uint8 Health,EntityFlags; float Speed,Time; XEntity *NextEntity,*PrevEntity; XItem *IventoryList; XEntity(); void CreateEntity(Uint8 EntityType,Uint8 EntityLocation[2]); void DeleteEntity(); void EntityLink(); void EntityUnlink(); Uint8 MoveEntity(Uint8 YOffset,Uint8 XOffset); }; XEnviron: class XEnviron : public XMapObject { public: Uint8 Effect,TimeOut; void CreateEnviron(Uint8 Type,Uint8 Y,Uint8 X,Uint8 TimeOut); }; XItem: class XItem : public XMapObject { public: void CreateItem(Uint8 Type,Uint8 Y,Uint8 X); }; And lastly, the entity move code. Only entities are capable of moving themselves. Uint8 XEntity::MoveEntity(Uint8 YOffset,Uint8 XOffset) { Uint8 NewY = Location[0] + YOffset, NewX = Location[1] + XOffset; if((NewY >= 0 && NewY < MAPY) && (NewX >= 0 && NewX < MAPX)) { XTile *Tile = GetTile(NewY,NewX); if(Tile->MapList != NULL) { XMapObject *MapObject = Tile->MapList; while(MapObject != NULL) { if(MapObject->MapObjectFlags & MAPOBJECT_SOLID) { printf("solid\n"); return 0; } MapObject = MapObject->NextMapObject; } } if(Tile->Flags & TILE_SOLID && EntityFlags & ENTITY_CLIPPING) { return 0; } this->MapObjectSetLocation(NewY,NewX); return 1; } return 0; } What is weird is that the bitwise operator always returns true when the MapObject is an Environ or an Item, but it works correctly for Entities. For debug I am using the printf "Solid", and also a printf containing the value of the flag for both Environs and Items. Any help is greatly appreciated, as this is a major bug for the small game I am working on.

    Read the article

  • Bit Reversal using bitwise

    - by Yongwei Xing
    Hi all I am trying to do bit reversal in a byte. I use the code below static int BitReversal(int n) { int u0 = 0x55555555; // 01010101010101010101010101010101 int u1 = 0x33333333; // 00110011001100110011001100110011 int u2 = 0x0F0F0F0F; // 00001111000011110000111100001111 int u3 = 0x00FF00FF; // 00000000111111110000000011111111 int u4 = 0x0000FFFF; // 00000000000000001111111111111111 int x, y, z; x = n; y = (x >> 1) & u0; z = (x & u0) << 1; x = y | z; y = (x >> 2) & u1; z = (x & u1) << 2; x = y | z; y = (x >> 4) & u2; z = (x & u2) << 4; x = y | z; y = (x >> 8) & u3; z = (x & u3) << 8; x = y | z; y = (x >> 16) & u4; z = (x & u4) << 16; x = y | z; return x; } It can reverse the bits (on a 32-bit machine), but there is a problem. For example, the input is 10001111101, I want to get 10111110001, but this method would reverse the whole word including the leading 0s. The output is 10111110001000000000000000000000. Is there any method to only reverse the actual number? I do not want to convert it to a string, reverse it, then convert back. Is there any pure math method or bit operation method? Best Regards,

    Read the article
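
    For the actual question above (reverse only the significant bits, so 10001111101 becomes 10111110001 rather than picking up 21 leading zeros), a simple pure-bit-operation loop works: peel bits off the low end and push them into the result until the input is exhausted. A sketch in C (note that trailing zeros of the input become leading zeros of the result and so disappear from its printed form):

        #include <stdint.h>

        /* Reverse the bits of n up to and including its highest set bit. */
        static uint32_t ReverseSignificantBits(uint32_t n)
        {
            uint32_t result = 0;
            while (n != 0) {
                result = (result << 1) | (n & 1);
                n >>= 1;
            }
            return result;   /* e.g. 0x47D (10001111101) -> 0x5F1 (10111110001) */
        }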

  • Bitwise operation on void* in C#

    - by code poet
    So I am Reflector-ing some framework 2.0 code and end up with the following deconstruction fixed (void* voidRef3 = ((void*) & _someMember)) { ... } This won't compile due to 'The right hand side of a fixed statement assignment may not be a cast expression' I understand that Reflector can only approximate and generally I can see a clear path but this is a bit outside my experience. Question: what is Reflector trying to describe to me? Update: Am also seeing the following fixed (IntPtr* ptrRef3 = ((IntPtr*) & this._someMember))

    Read the article

  • Most common C# bitwise operations

    - by steffenj
    For the life of me, I can't remember how to set, delete, toggle or test a bit in a bitfield. Either I'm unsure or I mix them up because I rarely need these. So a "bit-cheat-sheet" would be nice to have. For example: flags = flags | FlagsEnum.Bit4; // Set bit 4. or if ((flags & FlagsEnum.Bit4) == FlagsEnum.Bit4) // Is there a less verbose way? Can you give examples of all the other common operations, preferably in C# syntax using a [Flags] enum?

    Read the article
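
    The four idioms the question asks for are the same in C# as in C; a compact cheat sheet (the flag name is illustrative, and in C# the flags would come from a [Flags] enum instead of a macro):

        #include <stdio.h>

        #define BIT4 (1u << 4)

        int main(void)
        {
            unsigned flags = 0;

            flags |= BIT4;                       /* set bit 4    */
            flags ^= BIT4;                       /* toggle bit 4 */
            flags &= ~BIT4;                      /* clear bit 4  */
            printf("%s\n", (flags & BIT4) ? "set" : "clear");   /* test bit 4 */
            return 0;
        }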

  • Bitwise Operations -- Arithmetic Operations..

    - by RBA
    Hi, Can you please explain the lines below, with some good examples? A left arithmetic shift by n is equivalent to multiplying by 2^n (provided the value does not overflow), while a right arithmetic shift by n of a two's complement value is equivalent to dividing by 2^n (2 to the power n) and rounding toward negative infinity. If the binary number is treated as ones' complement, then the same right-shift operation results in division by 2^n and rounding toward zero. Thanks.

    Read the article
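
    The difference between the two rounding rules in the quoted passage is easiest to see on a small negative two's complement value: the arithmetic shift floors toward negative infinity, while integer division truncates toward zero. A short C illustration (right-shifting a negative int is implementation-defined in C, but on common two's complement platforms it behaves as the arithmetic shift described above):

        #include <stdio.h>

        int main(void)
        {
            int a = -7;

            /* -7 / 2 = -3.5: the shift rounds down to -4, the division
               truncates toward zero to -3 (while 7 >> 1 == 7 / 2 == 3). */
            printf("%d %d\n", a >> 1, a / 2);   /* typically prints: -4 -3 */
            return 0;
        }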

1 2 3 4 5 6 7 8  | Next Page >