Search Results

Search found 1486 results on 60 pages for 'unsigned'.

Page 12/60 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • BitLocker To Go on a fixed drive

    - by Unsigned
    Scenario Two drives are connected to a computer: one via a SATA-to-USB interface, the other directly via a SATA-to-eSATA cable. The drive on USB appears as a removable drive, the drive on eSATA appears as a fixed drive. Both use NTFS. The USB drive offers BitLocker To Go, the eSATA drive only offers BitLocker. Question It is my understanding that drives encrypted with BitLocker To Go include an app to allow Windows XP read-only access to the volume. Is this the only difference, and is there a way to use BitLocker To Go on the eSATA drive? Update Another difference is found here: "The recovery key is required when a BitLocker-protected fixed data drive configured for automatic unlocking is moved to another computer."[1] I assume that does not apply to removable drives.

    Read the article

  • Remote Desktop doesn't recognize username change

    - by Unsigned
    There are two active user accounts on the Windows 7 Professional server, Owner, and Guest. Owner is an Administrator with a password. Guest is the default Guest account with no password, but has been added to Remote Desktop Users. When attempting to connect to the server via a Windows 7 Professional client, Guest accepts RD connections fine, however, Owner throws an error "Unable to connect to Local Security Authority." I created a new Administrator account, named Remote, with the same password as Owner. Remote Desktop worked perfectly. I then deleted Owner, and renamed Remote to Owner. Now, Remote Desktop gives the same error ("Unable to connect to Local Security Authority") when attempting to log into the new Owner. However, attempting to log into Remote (even though it was renamed to Owner), works. Completely at a loss here, what is going on? Why won't Owner work, and why does Remote Desktop still use the old name on the renamed account?

    Read the article

  • Does BitLocker reduce write reliability?

    - by Unsigned
    For the purposes of this question, BitLocker refers to the BitLocker-to-go variety on a disk with write-caching disabled. NTFS supports metadata journaling, which, although not completely failsafe, does mitigate certain types of potential filesystem errors. Assuming an NTFS volume is protected with BitLocker, does this reduce the failure tolerance? Would a power failure during a write to an NTFS volume, that's protected with BitLocker, be more prone to corruption than on an unencrypted NTFS volume?

    Read the article

  • Dissect System Restore snapshots

    - by Unsigned
    Is there any way to map the A000????.??? filenames in the System Volume Information to their original names, without restoring them? The reason I ask is that several files in one user's System Volume Information RP1 were infected by a rootkit. Although they've been removed, I'd like to be able to figure out what they were originally. A0001253.sys and A0001211.sys are not very helpful names. :) It happened on two systems, one XP SP2, the other XP SP3.

    Read the article

  • Scenario - NTFS Symbolic Link or Junction?

    - by Unsigned
    Differences

                      Absolute   Relative   File   Directory   UNC
      Symbolic link       ✓          ✓        ✓        ✓         ✓
      Junction            ✓          x        x        ✓         x

    Scenario Let's assume we're creating a reparse point for the redirect C:\SomeDir => D:\SomeDir. Since this scenario only requires local, absolute paths, either a junction or a symlink would work. In this situation, is there any advantage to using one or the other? Assume Windows 7 for the OS, disregarding backward compatibility (prior to Vista, symlinks are not supported). Update I have found another difference. Symbolic link - the link's permissions only affect delete/rename operations on the link itself; read/write access (to the target) is governed by the target's permissions. Junction - the junction's permissions affect enumeration; revoking permissions on the junction will deny file listing through that junction, even if the target folder has more permissive ACLs. The permissions make it interesting, as symlinks can allow legacy applications to access configuration files in UAC-restricted areas (such as %ProgramFiles%) without changing existing access permissions, by storing the files in a non-restricted location and creating symlinks in the restricted directory.
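
    For reference, a minimal sketch of creating the C:\SomeDir => D:\SomeDir redirect as a directory symbolic link from code, assuming Windows Vista or later and that the caller holds SeCreateSymbolicLinkPrivilege (mklink /D does the same from an elevated prompt). A junction has no single-call Win32 API; it is normally created with mklink /J or via FSCTL_SET_REPARSE_POINT.

      #define _WIN32_WINNT 0x0600   // CreateSymbolicLinkW is Vista+
      #include <windows.h>
      #include <iostream>

      int main()
      {
          // Create a directory symbolic link: C:\SomeDir -> D:\SomeDir
          if (!CreateSymbolicLinkW(L"C:\\SomeDir",            // link to create
                                   L"D:\\SomeDir",            // existing target
                                   SYMBOLIC_LINK_FLAG_DIRECTORY))
              std::cerr << "CreateSymbolicLinkW failed: " << GetLastError() << "\n";
          return 0;
      }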

    Read the article

  • Thunderbird message sorting - wrong dates

    - by Unsigned
    Thunderbird 3 is having a problem with one of my IMAP accounts. When sorted by date, it displays 10-15 messages at the top (newest) when in fact they are very old messages (pre-2010, over a year old, some up to 6-7 years old). It's not game-breaking, but it is annoying, as I have to look past those to check for "new" messages. Is there any way to fix this? I'd considered re-downloading all the messages via IMAP, but that would take weeks (50,000 messages). When I open one of these messages, it displays the correct date, and when I use "View Source" the Date: header has the correct date (2009, 2006, etc.). Is there any way to make Thunderbird realize that they should be sorted down with the rest of the 2006/2009 messages? None of my other IMAP accounts (many on the same host server) have this issue.

    Read the article

  • Prevent applications from being run as administrator

    - by Unsigned
    Background Most installation toolkits have the ability to launch, automatically or otherwise, external programs after installation. This often appears in the installer via options such as "Show readme" or "Start program". Issue The problem is, many of these installers are poorly coded and do not drop permissions appropriately; for example, opening the application's homepage launches the browser with the installer's Administrative privileges, or a "High" UAC integrity level. This has the potential to open up security breaches, by leaving a web page (and possibly browser add-ons) running with elevated permissions. Question The question is: Is there a way to prevent certain applications (such as a web browser) from ever being launched with Administrative privileges, i.e., an automatic drop-privilege?

    Read the article

  • Prevent specific applications from being run as administrator

    - by Unsigned
    Background Most installation toolkits have the ability to launch, automatically or otherwise, external programs after installation. This often appears in the installer via options such as "Show readme" or "Start program". Issue The problem is, many of these installers are poorly coded, and do not drop permissions appropriately. For example, starting the application automatically, or opening the application's homepage in the browser, often results in launching the application or browser with the installer's Administrative privileges, or a "High" UAC integrity level! This has the potential to open up security breaches, by leaving the installed application, or a web page (and possibly browser add-ons), running with elevated permissions. (This is the reason I strongly recommend never choosing auto-launch options when installing software.) Question The question is: Is there a way to prevent certain applications (such as a web browser) from ever being launched with Administrative privileges, i.e., an automatic drop-privilege?

    Read the article

  • Large recovery partitions

    - by Unsigned
    Is there any good reason as to why factory restore partitions are generally much larger than they need to be? Examples I have found in my own experience:

      Dell XPS laptop        Partition: 13.67 GB   Used: 6.68 GB
      Dell Inspiron laptop   Partition: 14.7 GB    Used: 7.2 GB
      Toshiba laptop         Partition: 15.3 GB    Used: 9 GB

    In all cases, shrinking the partition to only slightly more than the Used space had no ill effects on future factory restorations. Why the exorbitant amount of extra space, given that none of the three computers ever writes any data to the recovery partition? Is there a good reason I'm overlooking?

    Read the article

  • Qt qUncompress gzip data

    - by talei
    Hello, I stumbled upon a problem and can't find a solution. What I want to do is uncompress data in Qt, using qUncompress(QByteArray), sent from the web in gzip format. I used Wireshark to determine that this is a valid gzip stream, and also tested it with zip/rar; both can uncompress it. The code so far is like this:

      static const char dat[40] = {
          0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03,
          0xaa, 0x2e, 0x2e, 0x49, 0x2c, 0x29, 0x2d, 0xb6, 0x4a, 0x4b,
          0xcc, 0x29, 0x4e, 0xad, 0x05, 0x00, 0x00, 0x00, 0xff, 0xff,
          0x03, 0x00, 0x2a, 0x63, 0x18, 0xc5, 0x0e, 0x00, 0x00, 0x00
      }; // this data contains the string {status:false}, in gzip format
      QByteArray data;
      data.append( dat, sizeof(dat) );
      unsigned int size = 14; // expected uncompressed size, reconstructed big-endian
      // prepend the expected uncompressed size; the last 4 bytes in dat are 0x0e = 14
      QByteArray dataPlusSize;
      dataPlusSize.append( (unsigned int)((size >> 24) & 0xFF));
      dataPlusSize.append( (unsigned int)((size >> 16) & 0xFF));
      dataPlusSize.append( (unsigned int)((size >> 8) & 0xFF));
      dataPlusSize.append( (unsigned int)((size >> 0) & 0xFF));
      QByteArray uncomp = qUncompress( dataPlusSize );
      qDebug() << uncomp;

    The uncompression fails with: qUncompress: Z_DATA_ERROR: Input data is corrupted. AFAIK gzip consists of a 10-byte header, the DEFLATE payload, and a 12-byte trailer (8-byte CRC32 + 4-byte ISIZE - uncompressed data size). Stripping the header and trailer should leave me with the DEFLATE data stream, but qUncompress yields the same error. I checked with a data string compressed in PHP, like this: $stringData = gzcompress( "{status:false}", 1); and qUncompress uncompresses that data. (I didn't see a gzip header there though, i.e. ID1 = 0x1f, ID2 = 0x8b.) I checked the above code in the debugger, and the error occurs at:

      if (
      #endif
          ((BITS(8) << 8) + (hold >> 8)) % 31) { // here is the error, WHY? long unsigned int hold = 35615
          strm->msg = (char *)"incorrect header check";
          state->mode = BAD;
          break;
      }

    (inflate.c line 610.) I know that qUncompress is simply a wrapper around zlib, so I suppose it should handle gzip without any problem. Any comments are more than welcome. Best regards
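
    For comparison, a minimal sketch of inflating the gzip stream with zlib directly (which Qt already links against), assuming the data really is gzip: qUncompress expects zlib-format data with a 4-byte big-endian size prefix, whereas passing 16 + MAX_WBITS to inflateInit2() tells zlib to parse the gzip header and trailer itself.

      #include <QByteArray>
      #include <zlib.h>
      #include <cstring>

      // Minimal gunzip helper; returns an empty QByteArray on error.
      QByteArray gunzip(const QByteArray &gz)
      {
          z_stream strm;
          std::memset(&strm, 0, sizeof(strm));
          if (inflateInit2(&strm, 16 + MAX_WBITS) != Z_OK)   // 16 + MAX_WBITS = gzip decoding
              return QByteArray();

          QByteArray out;
          char buf[4096];
          strm.next_in  = reinterpret_cast<Bytef *>(const_cast<char *>(gz.constData()));
          strm.avail_in = gz.size();
          int rc;
          do {
              strm.next_out  = reinterpret_cast<Bytef *>(buf);
              strm.avail_out = sizeof(buf);
              rc = inflate(&strm, Z_NO_FLUSH);
              if (rc != Z_OK && rc != Z_STREAM_END) {
                  inflateEnd(&strm);
                  return QByteArray();
              }
              out.append(buf, int(sizeof(buf) - strm.avail_out));
          } while (rc != Z_STREAM_END);
          inflateEnd(&strm);
          return out;    // "{status:false}" for the sample data above
      }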

    Read the article

  • C++ _beginthreadex in Cygwin

    - by Hugh
    Hello all, I am aware that _beginthreadex is part of the MSVCRT functions and therefore not accessible via Cygwin/MinGW:

      uintptr_t _beginthreadex( void *security, unsigned stack_size, unsigned ( __stdcall *start_address )( void * ), void *arglist, unsigned initflag, unsigned *thrdaddr );

    However, _beginthreadex does call upon CreateThread():

      HANDLE CreateThread( LPSECURITY_ATTRIBUTES secAttr, SIZE_T stackSize, LPTHREAD_START_ROUTINE threadFunc, LPVOID param, DWORD flags, LPDWORD threadID );

    Does anyone have a wrapper, or a URL to a library, that is compatible with Cygwin/MinGW? Or can anyone offer some advice? This is the last little piece of moving from a Visual Studio project over to makefiles for Windows/Darwin/Linux. Thanks.
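
    One possible direction, sketched under the assumption that the threads involved do not depend on per-thread CRT state (which the real _beginthreadex initializes): a thin wrapper over CreateThread() with a _beginthreadex-like signature. my_beginthreadex is a made-up name, not a library function.

      #include <windows.h>
      #include <stdint.h>

      typedef unsigned (__stdcall *thread_start_fn)(void *);

      uintptr_t my_beginthreadex(void *security, unsigned stack_size,
                                 thread_start_fn start_address, void *arglist,
                                 unsigned initflag, unsigned *thrdaddr)
      {
          DWORD tid = 0;
          HANDLE h = CreateThread(static_cast<LPSECURITY_ATTRIBUTES>(security),
                                  stack_size,
                                  reinterpret_cast<LPTHREAD_START_ROUTINE>(start_address),
                                  arglist,
                                  initflag,   // 0 or CREATE_SUSPENDED, same values as _beginthreadex
                                  &tid);
          if (thrdaddr)
              *thrdaddr = static_cast<unsigned>(tid);
          return reinterpret_cast<uintptr_t>(h);   // 0 on failure, thread handle otherwise
      }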

    Read the article

  • Conversion from iPhone Core Surface RGB frame into ffmpeg AVFrame

    - by Sridhar
    Hello, I am trying to convert Core Surface RGB frame buffer(Iphone) to ffmpeg Avfarme to encode into a movie file. But I am not getting the correct video output (video showing colors dazzling not the correct picture) I guess there is something wrong with converting from core surface frame buffer into AVFrame. Here is my code : Surface *surface = [[Surface alloc]initWithCoreSurfaceBuffer:coreSurfaceBuffer]; [surface lock]; unsigned int height = surface.height; unsigned int width = surface.width; unsigned int alignmentedBytesPerRow = (width * 4); if (!readblePixels) { readblePixels = CGBitmapAllocateData(alignmentedBytesPerRow * height); NSLog(@"alloced readablepixels"); } unsigned int bytesPerRow = surface.bytesPerRow; void *pixels = surface.baseAddress; for (unsigned int j = 0; j < height; j++) { memcpy(readblePixels + alignmentedBytesPerRow * j, pixels + bytesPerRow * j, bytesPerRow); } pFrameRGB->data[0] = readblePixels; // I guess here is what I am doing wrong. pFrameRGB->data[1] = NULL; pFrameRGB->data[2] = NULL; pFrameRGB->data[3] = NULL; pFrameRGB->linesize[0] = pCodecCtx->width; pFrameRGB->linesize[1] = 0; pFrameRGB->linesize[2] = 0; pFrameRGB->linesize[3] = 0; sws_scale (img_convert_ctx, pFrameRGB->data, pFrameRGB->linesize, 0, pCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize); Please help me out. Thanks, Raghu
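
    A hedged guess at the fix, assuming the surface is 32-bit BGRA (as is typical on the iPhone) and that img_convert_ctx was created with a matching source pixel format such as PIX_FMT_BGRA: the linesize entries must be row sizes in bytes, not widths in pixels, so dazzling colors usually mean a wrong stride or wrong source format. A sketch continuing the snippet above, with the same variables:

      pFrameRGB->data[0]     = (uint8_t *)readblePixels;
      pFrameRGB->linesize[0] = alignmentedBytesPerRow;   // width * 4 bytes per row, not 'width'
      pFrameRGB->data[1]     = pFrameRGB->data[2]     = pFrameRGB->data[3]     = NULL;
      pFrameRGB->linesize[1] = pFrameRGB->linesize[2] = pFrameRGB->linesize[3] = 0;

      sws_scale(img_convert_ctx,
                pFrameRGB->data, pFrameRGB->linesize,
                0, height,
                pFrameYUV->data, pFrameYUV->linesize);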

    Read the article

  • Vector is pointing to uninitialized bytes when used in recvfrom call

    - by Adam A.
    In a function that I am writing I am trying to return a pointer to a vector of unsigned chars. The relevant code is below. std::vector<unsigned char> *ret = new std::vector<unsigned char>(buffSize,'0'); int n = recvfrom(fd_, &((*ret)[0]) ,buffSize, &recvAddress, &sockSize); //This should work too... recvfrom(fd_, ret ,buffSize, &recvAddress, &sockSize); // display chars somehow just for testing for(std::vector<unsigned char>::iterator it=ret->begin(); it<it->end();i++) { std::cout<<*it; } std::cout<<std::endl; ... return ret; When I run this through valgrind I get errors talking about how the buffer in recvfrom is pointing to uninitialized bytes. I've narrowed this down to the vector since I swapped it out for an unsigned char array and everything works fine. Any suggestions?
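
    A minimal sketch of the usual pattern, assuming the POSIX recvfrom(2) signature: pass the vector's storage and its real size, then shrink the vector to the bytes actually received, since reading the untouched tail of the buffer is exactly what valgrind flags as uninitialized.

      #include <sys/socket.h>
      #include <vector>

      // Receive one datagram into a vector sized to the bytes actually received.
      std::vector<unsigned char> receive_datagram(int fd_, std::size_t buffSize)
      {
          std::vector<unsigned char> buf(buffSize);        // value-initialized storage
          sockaddr_storage recvAddress;
          socklen_t sockSize = sizeof(recvAddress);
          ssize_t n = recvfrom(fd_, &buf[0], buf.size(), 0,
                               reinterpret_cast<sockaddr *>(&recvAddress), &sockSize);
          buf.resize(n > 0 ? static_cast<std::size_t>(n) : 0);   // drop the unfilled tail
          return buf;                                            // no 'new', nothing to leak
      }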

    Read the article

  • SIMPLE OpenSSL RSA Encryption in C/C++ is causing me headaches

    - by Josh
    Hey guys, I'm having some trouble figuring out how to do this. Basically I just want a client and server to be able to send each other encrypted messages. This is going to be incredibly insecure because I'm trying to figure this all out so I might as well start at the ground floor. So far I've got all the keys working but encryption/decryption is giving me hell. I'll start by saying I am using C++ but most of these functions require C strings so whatever I'm doing may be causing problems. Note that on the client side I receive the following error in regards to decryption. error:04065072:rsa routines:RSA_EAY_PRIVATE_DECRYPT:padding check failed I don't really understand how padding works so I don't know how to fix it. Anywho here are the relevant variables on each side followed by the code. Client: RSA *myKey; // Loaded with private key // The below will hold the decrypted message unsigned char* decrypted = (unsigned char*) malloc(RSA_size(myKey)); /* The below holds the encrypted string received over the network. Originally held in a C-string but C strings never work for me and scare me so I put it in a C++ string */ string encrypted; // The reinterpret_cast line was to get rid of an error message. // Maybe the cause of one of my problems? if(RSA_private_decrypt(sizeof(encrypted.c_str()), reinterpret_cast<const unsigned char*>(encrypted.c_str()), decrypted, myKey, RSA_PKCS1_OAEP_PADDING)==-1) { cout << "Private decryption failed" << endl; ERR_error_string(ERR_peek_last_error(), errBuf); printf("Error: %s\n", errBuf); free(decrypted); exit(1); } Server: RSA *pkey; // Holds the client's public key string key; // Holds a session key I want to encrypt and send //The below will hold the encrypted message unsigned char *encrypted = (unsigned char*)malloc(RSA_size(pkey)); // The reinterpret_cast line was to get rid of an error message. // Maybe the cause of one of my problems? if(RSA_public_encrypt(sizeof(key.c_str()), reinterpret_cast<const unsigned char*>(key.c_str()), encrypted, pkey, RSA_PKCS1_OAEP_PADDING)==-1) { cout << "Public encryption failed" << endl; ERR_error_string(ERR_peek_last_error(), errBuf); printf("Error: %s\n", errBuf); free(encrypted); exit(1); } Let me once again state, in case I didn't before, that I know my code sucks but I'm just trying to establish a framework for understanding this. I'm sorry if this offends you veteran coders. Thanks in advance for any help you guys can provide!
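
    A hedged sketch of the length handling, assuming pkey/myKey are valid RSA* keys as above and that the ciphertext travels as raw bytes together with its length: sizeof(key.c_str()) is the size of a pointer rather than the message length, and RSA_private_decrypt needs the full RSA_size() bytes of ciphertext.

      #include <openssl/rsa.h>
      #include <string>
      #include <vector>

      // Server side: encrypt the session key with the client's public key.
      std::vector<unsigned char> encrypt_session_key(const std::string &key, RSA *pkey)
      {
          std::vector<unsigned char> enc(RSA_size(pkey));
          int n = RSA_public_encrypt(static_cast<int>(key.size()),          // real message length
                                     reinterpret_cast<const unsigned char *>(key.data()),
                                     &enc[0], pkey, RSA_PKCS1_OAEP_PADDING);
          enc.resize(n > 0 ? n : 0);                                        // n == RSA_size(pkey) on success
          return enc;
      }

      // Client side: decrypt it again with the private key.
      std::string decrypt_session_key(const std::vector<unsigned char> &enc, RSA *myKey)
      {
          std::vector<unsigned char> dec(RSA_size(myKey));
          int n = RSA_private_decrypt(static_cast<int>(enc.size()),         // full ciphertext length
                                      &enc[0], &dec[0], myKey, RSA_PKCS1_OAEP_PADDING);
          return n > 0 ? std::string(reinterpret_cast<char *>(&dec[0]), n) : std::string();
      }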

    Read the article

  • Is it fair for us to conclude XOR string encryption is less secure than well-known encryption (say, Blowfish)?

    - by Yan Cheng CHEOK
    I was wondering, is it fair to conclude that XOR string encryption is less secure than another encryption method, say Blowfish? This is because for both methods the inputs are an unencrypted string and a secret key string.

      string XOR(string value,string key) { string retval(value); short unsigned int klen=key.length(); short unsigned int vlen=value.length(); short unsigned int k=0; short unsigned int v=0; for(v;v<vlen;v++) { retval[v]=value[v]^key[k]; k=(++k<klen?k:0); } return retval; }

    Is there any proof that the XOR encryption method is easier to "break" than Blowfish if the same key is chosen?
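
    As a small illustration of why repeating-key XOR is considered weaker: a single known plaintext/ciphertext pair leaks the key directly, because ciphertext XOR plaintext equals the repeated key. A minimal sketch (the xor_string helper mirrors the XOR() routine above):

      #include <iostream>
      #include <string>
      using namespace std;

      // Same repeating-key XOR as the XOR() routine in the question.
      string xor_string(const string &value, const string &key)
      {
          string out(value);
          for (string::size_type i = 0; i < value.size(); ++i)
              out[i] = value[i] ^ key[i % key.size()];
          return out;
      }

      int main()
      {
          string key    = "blowfish";
          string plain  = "attack at dawn!!";          // plaintext the attacker knows or guesses
          string cipher = xor_string(plain, key);

          // "Known-plaintext attack": XOR the ciphertext with the known plaintext.
          string leaked = xor_string(cipher, plain);
          cout << leaked << "\n";                      // prints "blowfishblowfish": the key, repeated
          return 0;
      }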

    Read the article

  • 32-bit to 64-bit inline assembly porting

    - by Simone Margaritelli
    I have a piece of C++ code (compiled with g++ under a GNU/Linux environment) that loads a function pointer (how it does that doesn't matter), pushes some arguments onto the stack with some inline assembly and then calls that function. The code is like: unsigned long stack[] = { 1, 23, 33, 43 }; /* save all the registers and the stack pointer */ unsigned long esp; asm __volatile__ ( "pusha" ); asm __volatile__ ( "mov %%esp, %0" :"=m" (esp)); for( i = 0; i < sizeof(stack); i++ ){ unsigned long val = stack[i]; asm __volatile__ ( "push %0" :: "m"(val) ); } unsigned long ret = function_pointer(); /* restore registers and stack pointer */ asm __volatile__ ( "mov %0, %%esp" :: "m" (esp) ); asm __volatile__ ( "popa" ); I'd like to add some sort of #ifdef _LP64 // 64bit inline assembly #else // 32bit version as above example #endif but I don't know inline assembly for 64-bit machines. Could anyone help me? Thanks

    Read the article

  • Python TEA implementation

    - by Gaks
    Anybody knows proper python implementation of TEA (Tiny Encryption Algorithm)? I tried the one I've found here: http://sysadminco.com/code/python-tea/ - but it does not seem to work properly. It returns different results than other implementations in C or Java. I guess it's caused by completely different data types in python (or no data types in fact). Here's the code and an example: def encipher(v, k): y=v[0];z=v[1];sum=0;delta=0x9E3779B9;n=32 w=[0,0] while(n>0): y += (z << 4 ^ z >> 5) + z ^ sum + k[sum & 3] y &= 4294967295L # maxsize of 32-bit integer sum += delta z += (y << 4 ^ y >> 5) + y ^ sum + k[sum>>11 & 3] z &= 4294967295L n -= 1 w[0]=y; w[1]=z return w def decipher(v, k): y=v[0] z=v[1] sum=0xC6EF3720 delta=0x9E3779B9 n=32 w=[0,0] # sum = delta<<5, in general sum = delta * n while(n>0): z -= (y << 4 ^ y >> 5) + y ^ sum + k[sum>>11 & 3] z &= 4294967295L sum -= delta y -= (z << 4 ^ z >> 5) + z ^ sum + k[sum&3] y &= 4294967295L n -= 1 w[0]=y; w[1]=z return w Python example: >>> import tea >>> key = [0xbe168aa1, 0x16c498a3, 0x5e87b018, 0x56de7805] >>> v = [0xe15034c8, 0x260fd6d5] >>> res = tea.encipher(v, key) >>> "%X %X" % (res[0], res[1]) **'70D16811 F935148F'** C example: #include <unistd.h> #include <stdio.h> void encipher(unsigned long *const v,unsigned long *const w, const unsigned long *const k) { register unsigned long y=v[0],z=v[1],sum=0,delta=0x9E3779B9, a=k[0],b=k[1],c=k[2],d=k[3],n=32; while(n-->0) { sum += delta; y += (z << 4)+a ^ z+sum ^ (z >> 5)+b; z += (y << 4)+c ^ y+sum ^ (y >> 5)+d; } w[0]=y; w[1]=z; } int main() { unsigned long v[] = {0xe15034c8, 0x260fd6d5}; unsigned long key[] = {0xbe168aa1, 0x16c498a3, 0x5e87b018, 0x56de7805}; unsigned long res[2]; encipher(v, res, key); printf("%X %X\n", res[0], res[1]); return 0; } $ ./tea **D6942D68 6F87870D** Please note, that both examples were run with the same input data (v and key), but results were different. I'm pretty sure C implementation is correct - it comes from a site referenced by wikipedia (I couldn't post a link to it because I don't have enough reputation points yet - some antispam thing)

    Read the article

  • Boost.MultiIndex: Is there a way to share an object between two processes?

    - by Arman
    Hello, I have a big Boost.MultiIndex array, about 10 GB. In order to reduce the reading I thought there should be a way to keep the data in memory so that other client programs will be able to read and analyse it. What is the proper way to organize it? The array looks like:

      struct particleID
      {
        int ID;           // real ID for particle from Gadget2 file "ID" block
        unsigned int IDf; // position in the file
        particleID(int id,const unsigned int idf):ID(id),IDf(idf){}
        bool operator<(const particleID& p)const { return ID<p.ID;}
        unsigned int getByGID()const {return (ID&0x0FFF);};
      };
      struct ID{};
      struct IDf{};
      struct IDg{};
      typedef multi_index_container<
        particleID,
        indexed_by<
          ordered_unique< tag<IDf>, BOOST_MULTI_INDEX_MEMBER(particleID,unsigned int,IDf)>,
          ordered_non_unique< tag<ID>,BOOST_MULTI_INDEX_MEMBER(particleID,int,ID)>,
          ordered_non_unique< tag<IDg>,BOOST_MULTI_INDEX_CONST_MEM_FUN(particleID,unsigned int,getByGID)>
        >
      > particlesID_set;

    Any ideas are welcome. Kind regards, Arman.
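
    One possible direction, sketched rather than tested: Boost.MultiIndex containers accept an allocator, so the container can live in a Boost.Interprocess managed mapped file and be opened by other processes. A minimal sketch reusing the particleID/ID/IDf definitions above; the file name "particles.bin" is made up.

      #include <boost/interprocess/managed_mapped_file.hpp>
      #include <boost/interprocess/allocators/allocator.hpp>
      #include <boost/multi_index_container.hpp>
      #include <boost/multi_index/ordered_index.hpp>
      #include <boost/multi_index/member.hpp>
      #include <boost/multi_index/tag.hpp>

      namespace bip = boost::interprocess;
      using namespace boost::multi_index;

      typedef bip::allocator<particleID, bip::managed_mapped_file::segment_manager> shm_alloc;

      typedef multi_index_container<
        particleID,
        indexed_by<
          ordered_unique< tag<IDf>, BOOST_MULTI_INDEX_MEMBER(particleID,unsigned int,IDf)>,
          ordered_non_unique< tag<ID>, BOOST_MULTI_INDEX_MEMBER(particleID,int,ID)>
        >,
        shm_alloc
      > shared_particles_set;

      int main()
      {
          // Writer: back the container with a large mapped file instead of the heap.
          bip::managed_mapped_file seg(bip::open_or_create, "particles.bin", 12ULL << 30);
          shared_particles_set *particles =
              seg.find_or_construct<shared_particles_set>("particles")(
                  shared_particles_set::ctor_args_list(),
                  shared_particles_set::allocator_type(seg.get_segment_manager()));

          particles->insert(particleID(42, 7));

          // Reader processes map the same file and call:
          //   seg.find<shared_particles_set>("particles").first
          return 0;
      }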

    Read the article

  • boost::asio buffer impossible to convert parameter from char to const mutable_buffer&

    - by Ekyo777
    visual studio tells me "error C2664: 'boost::asio::mutable_buffer::mutable_buffer(const boost::asio::mutable_buffer&)': impossible to convert parameter 1 from 'char' to 'const boost::asio::mutable_buffer&' at line 163 of consuming_buffers.hpp" I am unsure of why this happen nor how to solve it(otherwise I wouldn't ask this ^^') but I think it could be related to those functions.. even tough I tried them in another project and everything worked fine... but I can hardly find what's different so... here comes code that could be relevant, if anything useful seems to be missing I'll be glad to send it. packets are all instances of this class. class CPacketBase { protected: const unsigned short _packet_type; const size_t _size; char* _data; public: CPacketBase(unsigned short packet_type, size_t size); ~CPacketBase(); size_t get_size(); const unsigned short& get_type(); virtual char* get(); virtual void set(char*); }; this sends a given packet template <typename Handler> void async_write(CPacketBase* packet, Handler handler) { std::string outbuf; outbuf.resize(packet->get_size()); outbuf = packet->get(); boost::asio::async_write( _socket , boost::asio::buffer(outbuf, packet->get_size()) , handler); } this enable reading packets and calls a function that decodes the packet's header(unsigned short) and resize the buffer to send it to another function that reads the real data from the packet template <typename Handler> void async_read(CPacketBase* packet, Handler handler) { void (CTCPConnection::*f)( const boost::system::error_code& , CPacketBase*, boost::tuple<Handler>) = &CTCPConnection::handle_read_header<Handler>; boost::asio::async_read(_socket, _buffer_data , boost::bind( f , this , boost::asio::placeholders::error , packet , boost::make_tuple(handler))); } and this is called by async_read once a packet is received template <typename Handler> void handle_read_header(const boost::system::error_code& error, CPacketBase* packet, boost::tuple<Handler> handler) { if (error) { boost::get<0>(handler)(error); } else { // Figures packet type unsigned short packet_type = *((unsigned short*) _buffer_data.c_str()); // create new packet according to type delete packet; ... // read packet's data _buffer_data.resize(packet->get_size()-2); // minus header size void (CTCPConnection::*f)( const boost::system::error_code& , CPacketBase*, boost::tuple<Handler>) = &CTCPConnection::handle_read_data<Handler>; boost::asio::async_read(_socket, _buffer_data , boost::bind( f , this , boost::asio::placeholders::error , packet , handler)); } }

    Read the article

  • GCC, -O2, and bitfields - is this a bug or a feature?

    - by Rooke
    Today I discovered alarming behavior when experimenting with bit fields. For the sake of discussion and simplicity, here's an example program: #include <stdio.h> struct Node { int a:16 __attribute__ ((packed)); int b:16 __attribute__ ((packed)); unsigned int c:27 __attribute__ ((packed)); unsigned int d:3 __attribute__ ((packed)); unsigned int e:2 __attribute__ ((packed)); }; int main (int argc, char *argv[]) { Node n; n.a = 12345; n.b = -23456; n.c = 0x7ffffff; n.d = 0x7; n.e = 0x3; printf("3-bit field cast to int: %d\n",(int)n.d); n.d++; printf("3-bit field cast to int: %d\n",(int)n.d); } The program is purposely causing the 3-bit bit-field to overflow. Here's the (correct) output when compiled using "g++ -O0": 3-bit field cast to int: 7 3-bit field cast to int: 0 Here's the output when compiled using "g++ -O2" (and -O3): 3-bit field cast to int: 7 3-bit field cast to int: 8 Checking the assembly of the latter example, I found this: movl $7, %esi movl $.LC1, %edi xorl %eax, %eax call printf movl $8, %esi movl $.LC1, %edi xorl %eax, %eax call printf xorl %eax, %eax addq $8, %rsp The optimizations have just inserted "8", assuming 7+1=8 when in fact the number overflows and is zero. Fortunately the code I care about doesn't overflow as far as I know, but this situation scares me - is this a known bug, a feature, or is this expected behavior? When can I expect gcc to be right about this? Edit (re: signed/unsigned) : It's being treated as unsigned because it's declared as unsigned. Declaring it as int you get the output (with O0): 3-bit field cast to int: -1 3-bit field cast to int: 0 An even funnier thing happens with -O2 in this case: 3-bit field cast to int: 7 3-bit field cast to int: 8 I admit that attribute is a fishy thing to use; in this case it's a difference in optimization settings I'm concerned about.
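
    A small sketch of a workaround, assuming the same struct: make the wrap-around explicit so there is no bit-field overflow for the optimizer to constant-fold away.

      // Increment the 3-bit field with an explicit mask instead of relying on
      // bit-field overflow semantics: 7 + 1 wraps to 0 by construction.
      n.d = (n.d + 1u) & 0x7u;
      printf("3-bit field cast to int: %d\n", (int)n.d);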

    Read the article

  • Concurrent Generation of Sequential Keys

    - by GenTiradentes
    I'm working on a project which generates a very large number of sequential text strings, in a very tight loop. My application makes heavy use of SIMD instruction set extensions like SSE and MMX, in other parts of the program, but the key generator is plain C++. The way my key generator works is I have a keyGenerator class, which holds a single char array that stores the current key. To get the next key, there is a function called "incrementKey," which treats the string as a number, adding one to the string, carrying where necessary. Now, the problem is, the keygen is somewhat of a bottleneck. It's fast, but it would be nice if it were faster. One of the biggest problems is that when I'm generating a set of sequential keys to be processed using my SSE2 code, I have to have the entire set stored in an array, which means I have to sequentially generate and copy 12 strings into an array, one by one, like so: char* keys[12]; for(int i = 0; i < 12; i++) { keys[i] = new char[16]; strcmp(keys[i], keygen++); } So how would you efficiently generate these plaintext strings in order? I need some ideas to help move this along. Concurrency would be nice; as my code is right now, each successive key depends on the previous one, which means that the processor can't start work on the next key until the current one has been completely generated. Here is the code relevant to the key generator: KeyGenerator.h class keyGenerator { public: keyGenerator(unsigned long long location, characterSet* charset) : location(location), charset(charset) { for(int i = 0; i < 16; i++) key[i] = 0; charsetStr = charset->getCharsetStr(); integerToKey(); } ~keyGenerator() { } inline void incrementKey() { register size_t keyLength = strlen(key); for(register char* place = key; place; place++) { if(*place == charset->maxChar) { // Overflow, reset char at place *place = charset->minChar; if(!*(place+1)) { // Carry, no space, insert char *(place+1) = charset->minChar; ++keyLength; break; } else { continue; } } else { // Space available, increment char at place if(*place == charset->charSecEnd[0]) *place = charset->charSecBegin[0]; else if(*place == charset->charSecEnd[1]) *place = charset->charSecBegin[1]; (*place)++; break; } } } inline char* operator++() // Pre-increment { incrementKey(); return key; } inline char* operator++(int) // Post-increment { memcpy(postIncrementRetval, key, 16); incrementKey(); return postIncrementRetval; } void integerToKey() { register unsigned long long num = location; if(!num) { key[0] = charsetStr[0]; } else { num++; while(num) { num--; unsigned int remainder = num % charset->length; num /= charset->length; key[strlen(key)] = charsetStr[remainder]; } } } inline unsigned long long keyToInteger() { // TODO return 0; } inline char* getKey() { return key; } private: unsigned long long location; characterSet* charset; std::string charsetStr; char key[16]; // We need a place to store the key for the post increment operation. 
char postIncrementRetval[16]; }; CharacterSet.h struct characterSet { characterSet() { } characterSet(unsigned int len, int min, int max, int charsec0, int charsec1, int charsec2, int charsec3) { init(length, min, max, charsec0, charsec1, charsec2, charsec3); } void init(unsigned int len, int min, int max, int charsec0, int charsec1, int charsec2, int charsec3) { length = len; minChar = min; maxChar = max; charSecEnd[0] = charsec0; charSecBegin[0] = charsec1; charSecEnd[1] = charsec2; charSecBegin[1] = charsec3; } std::string getCharsetStr() { std::string retval; for(int chr = minChar; chr != maxChar; chr++) { for(int i = 0; i < 2; i++) if(chr == charSecEnd[i]) chr = charSecBegin[i]; retval += chr; } return retval; } int minChar, maxChar; // charSec = character set section int charSecEnd[2], charSecBegin[2]; unsigned int length; };
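
    One way to get concurrency, sketched under the assumption of a plain contiguous charsetStr: if the n-th key can be computed directly from the integer n (the same bijective numbering integerToKey uses), each thread can fill its own disjoint block of keys with no dependency on the previous key.

      #include <cstring>
      #include <string>

      // Write the n-th key (bijective base-k numbering, least significant
      // character first, as in integerToKey above) into a 16-byte buffer.
      void index_to_key(unsigned long long n, const std::string &charsetStr, char key[16])
      {
          const unsigned long long base = charsetStr.size();
          std::memset(key, 0, 16);
          std::size_t pos = 0;
          ++n;
          while (n && pos < 15) {
              --n;
              key[pos++] = charsetStr[static_cast<std::size_t>(n % base)];
              n /= base;
          }
      }

      // Thread t of T threads can then generate keys [t*block, (t+1)*block)
      // independently: for (i = t*block; i < (t+1)*block; ++i) index_to_key(i, cs, buf);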

    Read the article

  • Optimizing for speed - 4 dimensional array lookup in C

    - by Tiago
    I have a fitness function that is scoring the values on an int array based on data that lies on a 4D array. The profiler says this function is using 80% of CPU time (it needs to be called several million times). I can't seem to optimize it further (if it's even possible). Here is the function: unsigned int lookup_array[26][26][26][26]; /* lookup_array is a global variable */ unsigned int get_i_score(unsigned int *input) { register unsigned int i, score = 0; for(i = len - 3; i--; ) score += lookup_array[input[i]][input[i + 1]][input[i + 2]][input[i + 3]]; return(score) } I've tried to flatten the array to a single dimension but there was no improvement in performance. This is running on an IA32 CPU. Any CPU specific optimizations are also helpful. Thanks
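
    One possible optimization, sketched on the assumption that the input values are in 0..25 and len >= 4: consecutive 4-grams share three of their four indices, so a flattened index can be updated incrementally instead of being recomputed from four table subscripts each iteration.

      // lookup_array flattened to one contiguous block: index = ((a*26 + b)*26 + c)*26 + d
      static unsigned int flat_lookup[26 * 26 * 26 * 26];

      unsigned int get_i_score_flat(const unsigned int *input, unsigned int len)
      {
          const unsigned int B = 26, B3 = 26 * 26 * 26;
          unsigned int idx = ((input[0] * B + input[1]) * B + input[2]) * B + input[3];
          unsigned int score = flat_lookup[idx];
          for (unsigned int i = 4; i < len; i++) {
              idx = (idx % B3) * B + input[i];   // drop the oldest letter, append the new one
              score += flat_lookup[idx];
          }
          return score;
      }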

    Read the article

  • x86 Assembly Question about outputting

    - by jdea
    My code looks like this:

      _declspec(naked) void f(unsigned int input,unsigned int *output)
      {
          __asm{
              push dword ptr[esp+4]
              call factorial
              pop ecx
              mov [output], eax //copy result
              ret
          }
      }
      __declspec(naked) unsigned int factorial(unsigned int n)
      {
          __asm{
              push esi
              mov esi, dword ptr [esp+8]
              cmp esi, 1
              jg RECURSE
              mov eax, 1
              jmp END
          RECURSE:
              dec esi
              push esi
              call factorial
              pop esi
              inc esi
              mul esi
          END:
              pop esi
              ret
          }
      }

    It's a factorial function, and I'm trying to output the answer after it recursively calculates the factorial of the number that was passed in. But what I get back as output is the same large number I keep getting. Not sure what is wrong with my output, but I also see this error: CXX0030: Error: expression cannot be evaluated. Thanks!
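
    A hedged reading of the failure: mov [output], eax does not store through the pointer, and in a __declspec(naked) function the compiler has no stack frame with which to resolve the name output in the first place, which is probably also why the debugger reports CXX0030. A minimal sketch of the store-through-pointer version:

      _declspec(naked) void f(unsigned int input, unsigned int *output)
      {
          __asm{
              push dword ptr [esp+4]   ; push 'input' for factorial
              call factorial           ; result comes back in EAX
              pop  ecx                 ; discard the pushed argument
              mov  ecx, [esp+8]        ; ECX = the 'output' pointer (second parameter)
              mov  [ecx], eax          ; *output = result
              ret
          }
      }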

    Read the article

  • why is my 3n+1 problem solution wrong?

    - by nunos
    I have recently started reading the "Programming Challenges" book by S. Skiena and, believe it or not, I am kind of stuck on the very first problem. Here's a link to the problem: 3n+1 problem. Here's my code:

      #include <iostream>
      #include <vector>
      #include <algorithm>
      using namespace std;

      unsigned long calc(unsigned long n);

      int main()
      {
          int i, j, a, b, m;
          vector<int> temp;
          while (true) {
              cin >> i >> j;
              if (cin.fail()) break;
              if (i < j) { a = i; b = j; }
              else       { a = j; b = i; }
              temp.clear();
              for (unsigned int k = a; k != b; k++) {
                  temp.push_back(calc(k));
              }
              m = *max_element(temp.begin(), temp.end());
              cout << i << ' ' << j << ' ' << m << endl;
          }
      }

      unsigned long calc(unsigned long n)
      {
          unsigned long ret = 1;
          while (n != 1) {
              if (n % 2 == 0) n = n/2;
              else n = 3*n + 1;
              ret++;
          }
          return ret;
      }

    I know the code is inefficient and I should not be using vectors to store the data. Anyway, I would still like to know what's wrong with this, since, for me, it makes perfect sense, even though I am getting WA (Wrong Answer) at the Programming Challenges judge and RE (Runtime Error) at the UVa judge. Thanks.
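
    A sketch of the likely fix: the judge expects both endpoints to be considered, but for (unsigned int k = a; k != b; k++) stops before b, and it never runs at all when a == b, which makes *max_element on the empty vector undefined. Reusing calc() from above (<algorithm> is already included):

      // Include both endpoints a and b; no vector needed just to take a maximum.
      unsigned long best = 0;
      for (unsigned long k = a; k <= b; k++)
          best = std::max(best, calc(k));
      cout << i << ' ' << j << ' ' << best << endl;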

    Read the article

  • Audit Table using Triggers

    - by Jose
    DROP TABLE IF EXISTS `actividades`.`act_actividad_audit`;
    CREATE TABLE `actividades`.`act_actividad_audit` (
      `fe_creacion` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
      `usr_digitador` char(10) NOT NULL,
      `ip_digitador` char(15) NOT NULL,
      `id_act_actividad` int(10) unsigned NOT NULL,
      `titulo` char(64) NOT NULL,
      `act_prioridad_id` int(10) unsigned NOT NULL,
      `act_motivo_id` int(10) unsigned NOT NULL,
      `detalle` text,
      `detalle_tecnico` text,
      `hostname_id` int(10) unsigned NOT NULL,
      `hostname_nombre` char(50) NOT NULL,
      `es_SMOP` tinyint(1) NOT NULL,
      `url_SMOP` text,
      `es_tecnico` tinyint(1) NOT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COMMENT='Auditoria Actividad General';

    I want to populate that audit table with a trigger, but how can I send or fill the values for usr_digitador or ip_digitador if those values are on the client side? Please help

    Read the article
