Search Results

Search found 1466 results on 59 pages for 'sizeof'.


  • How to store and remove dynamically and automatically allocated variables of a generic data type in a custom list

    - by Vineel Kumar Reddy
    Hi, I have created a list data structure for a generic data type, with each node declared as follows:

        struct Node {
            void *data;
            ...
        };

    Each node in my list holds a pointer to the actual data (generic, could be anything) stored in the list. I have the following signature for adding a node to the list:

        AddNode(struct List *list, void *eledata);

    The problem: when I remove a node, I also want to free the data block pointed to by the *data pointer inside the node structure being freed. At first, freeing the data block seems straightforward:

        free(data); // forget about the syntax...

    If data points to a block created by malloc, the above call is fine, and we can free that block with free:

        int *x = (int*) malloc(sizeof(int));
        *x = 10;
        AddNode(list, (void*)x);   // x can be freed, as it was created using malloc

    But what if a node is created as follows?

        int x = 10;
        AddNode(list, (void*)&x);  // x cannot be freed, as it was not created using malloc

    Here we cannot call free on the variable x! How do I implement this so it works for both dynamically allocated variables and automatic ones passed to my list? Thanks in advance.
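    One common approach, sketched below with hypothetical names: the list cannot discover how a pointer was allocated, so the caller passes a deleter (or NULL for automatic storage) alongside the data, and the list invokes it on removal.

        struct Node {
            void *data;
            void (*dtor)(void *);   /* NULL => node does not own the data */
            struct Node *next;
        };

        void AddNode(struct List *list, void *eledata, void (*dtor)(void *));

        /* on removal: */
        if (node->dtor)
            node->dtor(node->data); /* e.g. free, or a custom destructor */

    Calling AddNode(list, x, free) for malloc'd data and AddNode(list, &x, NULL) for stack data keeps ownership explicit at every call site.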


  • How to remove subsets from a given text file

    - by user324887
    I have a problem like this. Given the input:

        10 20 30 40 70
        20 30 70
        30 40
        10 20
        29 70 80 90
        20 30 40
        40 45 65
        10 20 80
        45 65 20

    I want to remove all subset transactions from this file. The output file should be as follows:

        10 20 30 40 70
        29 70 80 90
        20 30 40
        40 45 65
        10 20 80

    Records like "20 30 70", "30 40", "10 20" and "45 65 20" are removed because they are subsets of other records. I am using a set for this, but I am not able to create one set per line. Can anybody tell me how to do this? Please help me. Here is my code:

        #include <cstdio>
        #include <iostream>
        #include <sstream>
        #include <set>
        #include <string>
        using namespace std;

        set<string> s1;

        int main()
        {
            FILE *fp = fopen("abc.txt", "r");
            if (fp != NULL)
            {
                char line[128]; /* or other suitable maximum line size */
                while (fgets(line, sizeof line, fp) != NULL) /* read a line */
                {
                    istringstream iss(line);
                    do {
                        string sub;
                        iss >> sub;
                        s1.insert(sub);
                    } while (iss);
                    for (set<string>::const_iterator p = s1.begin(); p != s1.end(); ++p)
                        cout << *p << endl;
                }
            }
        }
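    A sketch of one way to do the subset removal itself, assuming each line is first parsed into its own set<int>: keep a record unless its elements are contained in some other record. std::includes tests containment between two sorted ranges, which is exactly what set iteration provides.

        #include <algorithm>
        #include <set>
        #include <vector>

        // Keep records[i] unless it is a subset of some other record.
        // Note: exact duplicate records remove each other; add a tie-break if that matters.
        std::vector<std::set<int> > removeSubsets(const std::vector<std::set<int> >& records)
        {
            std::vector<std::set<int> > kept;
            for (std::size_t i = 0; i < records.size(); ++i) {
                bool subset = false;
                for (std::size_t j = 0; j < records.size() && !subset; ++j) {
                    if (i != j && records[j].size() >= records[i].size())
                        subset = std::includes(records[j].begin(), records[j].end(),
                                               records[i].begin(), records[i].end());
                }
                if (!subset) kept.push_back(records[i]);
            }
            return kept;
        }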


  • CUDA - multiple kernels to compute a single value

    - by Roger
    Hey, I'm trying to write a kernel that essentially does the following in C:

        float sum = 0.0;
        for (int i = 0; i < N; i++) {
            sum += valueArray[i] * valueArray[i];
        }
        sum += sum / N;

    At the moment I have this inside my kernel, but it is not giving correct values:

        int i0 = blockIdx.x * blockDim.x + threadIdx.x;
        for (int i = i0; i < N; i += blockDim.x * gridDim.x) {
            *d_sum += d_valueArray[i] * d_valueArray[i];
        }
        *d_sum = __fdividef(*d_sum, N);

    The code used to call the kernel is:

        kernelName<<<64, 128>>>(N, d_valueArray, d_sum);
        cudaMemcpy(&sum, d_sum, sizeof(float), cudaMemcpyDeviceToHost);

    I think each thread is calculating a partial sum, but the final divide statement does not take into account the accumulated value from all of the threads. Is every thread producing its own final value for d_sum? Does anyone know how I could go about doing this efficiently, maybe using shared memory between threads? I'm very new to GPU programming. Cheers.
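    A minimal fix, sketched under the assumption that correctness matters more than speed here and that the device is compute capability 2.0 or newer (which supports atomicAdd on float): the unsynchronised *d_sum += ... is a data race across threads, and the per-thread divide runs many times. atomicAdd serialises the accumulation; the division is best done once on the host.

        __global__ void sumSquares(int N, const float *d_valueArray, float *d_sum)
        {
            int i0 = blockIdx.x * blockDim.x + threadIdx.x;
            for (int i = i0; i < N; i += blockDim.x * gridDim.x)
                atomicAdd(d_sum, d_valueArray[i] * d_valueArray[i]);
            // zero *d_sum with cudaMemset before launch;
            // divide by N on the host after the cudaMemcpy, not per-thread
        }

    A shared-memory tree reduction per block (then one atomicAdd per block) is the faster classic pattern, but the sketch above is the smallest change that gives a correct answer.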


  • How can variadic char template arguments from user defined literals be converted back into numeric types?

    - by Pubby
    This question is being asked because of this one. C++11 allows you to define literals like this for numeric literals:

        template<char...> OutputType operator "" _suffix();

    which means that 503_suffix becomes <'5','0','3'>. This is nice, although it isn't very useful in the form it's in. How can I transform this back into a numeric type? That would turn <'5','0','3'> into a constexpr 503. Additionally, it must also work on floating-point literals: <'5','.','3'> would turn into int 5 or float 5.3. A partial solution was found in the previous question, but it doesn't work on non-integers:

        template <typename t>
        constexpr t pow(t base, int exp) {
            return (exp > 0) ? base * pow(base, exp - 1) : 1;
        }

        template <char...> struct literal;

        template <> struct literal<> {
            static const unsigned int to_int = 0;
        };

        template <char c, char ...cv> struct literal<c, cv...> {
            static const unsigned int to_int =
                (c - '0') * pow(10, sizeof...(cv)) + literal<cv...>::to_int;
        };

        // use: literal<...>::to_int
        // literal<'1','.','5'>::to_int doesn't work
        // literal<'1','.','5'>::to_float not implemented
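    A sketch of one way to handle the fractional case, assuming C++14 relaxed constexpr is acceptable (loops are allowed inside constexpr functions there, which avoids the recursive template machinery entirely):

        template <char... Cs>
        constexpr double parse_literal()
        {
            constexpr char s[] = { Cs..., '\0' };
            double result = 0.0;
            int i = 0;
            for (; s[i] != '\0' && s[i] != '.'; ++i)       // integer part
                result = result * 10.0 + (s[i] - '0');
            if (s[i] == '.') {                              // fractional part
                double place = 0.1;
                for (++i; s[i] != '\0'; ++i, place /= 10.0)
                    result += (s[i] - '0') * place;
            }
            return result;
        }

        template <char... Cs>
        constexpr double operator"" _lit() { return parse_literal<Cs...>(); }

        static_assert(5.3_lit > 5.29 && 5.3_lit < 5.31, "parses fractions");

    For a strictly C++11 solution, the same split (digits before the '.' versus after) has to be done with partial specialisations of the literal<> template, one pass computing the integer part and another the fractional scale.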


  • Structures with strings and input

    - by Beginnernato
    So I have the following structure and a function that adds nodes to it:

        struct scoreentry_node {
            struct scoreentry_node *next;
            int score;
            char *name;
        };

        typedef struct scoreentry_node *score_entry;

        score_entry add(int in, char *n, score_entry en) {
            score_entry r = malloc(sizeof(struct scoreentry_node));
            r->score = in;
            r->name = n;
            r->next = en;
            return r;
        }

    I take input in the following main file:

        int score;
        char name[];

        int main(void) {
            score_entry readin = NULL;
            while (1) {
                scanf("%s%d", name, &score);
                readin = add(score, name, readin);
                // blah blah
            }
        }

    I don't know why, but when I input a name it gets added to readin, yet when I input another name all the names in readin take on the new name. For example:

        input: bob 10
        readin = 10 bob NULL
        input: jill 20
        readin = 20 jill 10 jill NULL

    I don't know why bob disappears... any idea why it does that?
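    What this looks like from the outside: every node stores the same pointer to the one name buffer, so each scanf overwrites the name that every node sees. A sketch of the usual fix: give each node its own copy of the string (strdup is POSIX; the malloc+strcpy pair below works everywhere).

        #include <stdlib.h>
        #include <string.h>

        score_entry add(int in, const char *n, score_entry en) {
            score_entry r = malloc(sizeof(struct scoreentry_node));
            r->score = in;
            r->name  = malloc(strlen(n) + 1);  /* private copy for this node */
            strcpy(r->name, n);
            r->next  = en;
            return r;
        }
        /* remember to free(node->name) when freeing a node */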


  • Reverse reading WORD from a binary file?

    - by Angel
    Hi, I have a structure:

        struct JFIF_HEADER {
            WORD marker[2];     // = 0xFFD8FFE0
            WORD length;        // = 0x0010
            BYTE signature[5];  // = "JFIF\0"
            BYTE versionhi;     // = 1
            BYTE versionlo;     // = 1
            BYTE xyunits;       // = 0
            WORD xdensity;      // = 1
            WORD ydensity;      // = 1
            BYTE thumbnwidth;   // = 0
            BYTE thumbnheight;  // = 0
        };

    This is how I read it from the file:

        HANDLE file = CreateFile(filename, GENERIC_READ, FILE_SHARE_READ, NULL,
                                 OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
        DWORD tmp = 0;
        DWORD size = GetFileSize(file, &tmp);
        BYTE *DATA = new BYTE[size];
        ReadFile(file, DATA, size, &tmp, 0);
        JFIF_HEADER header;
        memcpy(&header, DATA, sizeof(JFIF_HEADER));

    This is how the beginning of my file looks in a hex editor:

        0xFF 0xD8 0xFF 0xE0 0x00 0x10 0x4A 0x46 0x49 0x46 0x00 0x01 0x01 0x00 0x00 0x01

    When I print header.marker, it shows exactly what it should (0xFFD8FFE0). But when I print header.length, it shows 0x1000 instead of 0x0010. The same happens with xdensity and ydensity. Why do I get wrong data when reading a WORD? Thank you.
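    A sketch of the likely explanation: JFIF stores multi-byte fields big-endian, while x86 is little-endian, so the bytes 0x00 0x10 land in a WORD as 0x1000. Swapping each WORD after the memcpy recovers the intended values.

        WORD swap16(WORD v) {
            return (WORD)((v >> 8) | (v << 8));  // big-endian file -> little-endian host
        }

        header.length   = swap16(header.length);
        header.xdensity = swap16(header.xdensity);
        header.ydensity = swap16(header.ydensity);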


  • Universal OAuth class for Objective-C?

    - by phpnerd211
    I have an app that connects to 6+ social networks via APIs. What I want to do is move my OAuth calls so they are made directly from the phone (not from the server). Here's what I have (for Tumblr):

        // Set some variables
        NSString *consumerKey = CONSUMER_KEY_HERE;
        NSString *sharedSecret = SHARED_SECRET_HERE;
        NSString *callToURL = @"https://tumblr.com/oauth/access_token";
        NSString *thePassword = PASSWORD_HERE;
        NSString *theUsername = USERNAME_HERE;

        // Calculate nonce & timestamp
        NSString *nonce = [[NSString stringWithFormat:@"%d", arc4random()] retain];
        time_t t;
        time(&t);
        mktime(gmtime(&t));
        NSString *timestamp = [[NSString stringWithFormat:@"%d", (int)(((float)([[NSDate date] timeIntervalSince1970])) + 0.5)] retain];

        // Generate signature
        NSString *baseString = [NSString stringWithFormat:@"GET&%@&%@", [callToURL urlEncode], [[NSString stringWithFormat:@"oauth_consumer_key=%@&oauth_nonce=%@&oauth_signature_method=HMAC-SHA1&oauth_timestamp=%@&oauth_version=1.0&x_auth_mode=client_auth&x_auth_password=%@&x_auth_username=%@", consumerKey, nonce, timestamp, thePassword, theUsername] urlEncode]];
        NSLog(@"baseString: %@", baseString);

        const char *cKey = [sharedSecret cStringUsingEncoding:NSASCIIStringEncoding];
        const char *cData = [baseString cStringUsingEncoding:NSASCIIStringEncoding];
        unsigned char cHMAC[CC_SHA256_DIGEST_LENGTH];
        CCHmac(kCCHmacAlgSHA256, cKey, strlen(cKey), cData, strlen(cData), cHMAC);
        NSData *HMAC = [[NSData alloc] initWithBytes:cHMAC length:sizeof(cHMAC)];
        NSString *signature = [HMAC base64EncodedString];

        NSString *theUrl = [NSString stringWithFormat:@"%@?oauth_consumer_key=%@&oauth_nonce=%@&oauth_signature=%@&oauth_signature_method=HMAC-SHA1&oauth_timestamp=%@&oauth_version=1.0&x_auth_mode=client_auth&x_auth_password=%@&x_auth_username=%@", callToURL, consumerKey, nonce, signature, timestamp, thePassword, theUsername];

    From Tumblr, I get this error:

        oauth_signature does not match expected value

    I've done some forum scouring, and none of the OAuth-for-Objective-C classes worked for what I want to do. I also don't want to have to download and implement 6+ social API classes into my project and do it that way.
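    One discrepancy worth checking, stated as an observation rather than a guaranteed fix: the base string declares oauth_signature_method=HMAC-SHA1, but the digest is computed with kCCHmacAlgSHA256. OAuth 1.0 also specifies the HMAC key as the consumer secret and token secret joined by '&' (the trailing '&' stays even when there is no token secret), and the resulting signature must be percent-encoded before it goes into the URL. A sketch of the SHA-1 variant:

        #include <CommonCrypto/CommonHMAC.h>

        // key = "<consumer_secret>&<token_secret>"; token secret empty for xAuth here
        unsigned char mac[CC_SHA1_DIGEST_LENGTH];
        CCHmac(kCCHmacAlgSHA1, cKey, strlen(cKey), cData, strlen(cData), mac);
        // then base64-encode mac, and percent-encode that string in the query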


  • LNK2019 linker error in Visual Studio

    - by Corrie Duck
    Hey, I hope someone can tell me how to fix this issue I am having. I keep getting an LNK2019 error from Visual Studio for the following file. Most of the functions have been removed, so excuse the empty variables etc.

        Error LNK2019: unresolved external symbol "void * __cdecl OpenOneDevice(void *,struct _SP_DEVICE_INTERFACE_DATA *,char *)" (?OpenOneDevice@@YAPAXPAXPAU_SP_DEVICE_INTERFACE_DATA@@PAD@Z) referenced in function _wmain c:\Users\K\documents\visual studio 2010\Projects\test2\test2\test2.obj test2

        #include "stdafx.h"
        #include <windows.h>
        #include <setupapi.h>

        SP_DEVICE_INTERFACE_DATA deviceInfoData;
        HDEVINFO hwDeviceInfo;
        HANDLE hOut;
        char *devName;

        //
        HANDLE OpenOneDevice(IN HDEVINFO hwDeviceInfo, IN PSP_DEVICE_INTERFACE_DATA DeviceInfoData, IN char *devName);

        //
        HANDLE OpenOneDevice(IN HDEVINFO HardwareDeviceInfo, IN PSP_DEVICE_INTERFACE_DATA DeviceInfoData, IN char *devName)
        {
            PSP_DEVICE_INTERFACE_DETAIL_DATA functionClassDeviceData = NULL;
            ULONG predictedLength = 0, requiredLength = 0;
            HANDLE hOut = INVALID_HANDLE_VALUE;

            SetupDiGetDeviceInterfaceDetail(HardwareDeviceInfo, DeviceInfoData, NULL, 0, &requiredLength, NULL);
            predictedLength = requiredLength;
            functionClassDeviceData = (PSP_DEVICE_INTERFACE_DETAIL_DATA)malloc(predictedLength);
            if (NULL == functionClassDeviceData) {
                return hOut;
            }
            functionClassDeviceData->cbSize = sizeof(SP_DEVICE_INTERFACE_DETAIL_DATA);
            if (!SetupDiGetDeviceInterfaceDetail(HardwareDeviceInfo, DeviceInfoData, functionClassDeviceData, predictedLength, &requiredLength, NULL)) {
                free(functionClassDeviceData);
                return hOut;
            }
            //strcpy(devName, functionClassDeviceData->DevicePath);
            hOut = CreateFile(functionClassDeviceData->DevicePath,
                              GENERIC_READ | GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE,
                              NULL, OPEN_EXISTING, 0, NULL);
            free(functionClassDeviceData);
            return hOut;
        }

        //
        int _tmain(int argc, _TCHAR* argv[])
        {
            hOut = OpenOneDevice(hwDeviceInfo, &deviceInfoData, devName);
            if (hOut != INVALID_HANDLE_VALUE) {
                // error report
            }
            return 0;
        }

    This has been driving me mad for hours. Any help appreciated.

    SOLVED, THANKS TO CHRIS :-) Add:

        #pragma comment(lib, "Setupapi.lib")

    Thanks.


  • Generating an authentication header for an Azure table through Objective-C

    - by user923370
    I'm fetching data from the cloud, and for that I need to generate an authorization header (Azure table storage). I used the code below, and it generates the headers, but when I use them in my project the service responds with "make sure that the value of Authorization header is formed correctly including the signature". I googled a lot and tried many code samples, but in vain. Can anyone kindly help me with where I'm going wrong in this code?

        - (id)generat {
            NSString *messageToSign = [NSString stringWithFormat:@"%@/%@/%@",
                dateString, AZURE_ACCOUNT_NAME, tableName];
            NSString *key = @"asasasasasasasasasasasasasasasasasasasasas==";
            const char *cKey = [key cStringUsingEncoding:NSUTF8StringEncoding];
            const char *cData = [messageToSign cStringUsingEncoding:NSUTF8StringEncoding];
            unsigned char cHMAC[CC_SHA256_DIGEST_LENGTH];
            CCHmac(kCCHmacAlgSHA256, cKey, strlen(cKey), cData, strlen(cData), cHMAC);
            NSData *HMAC = [[NSData alloc] initWithBytes:cHMAC length:sizeof(cHMAC)];
            NSString *hash = [Base64 encode:HMAC];
            NSLog(@"Encoded hash: %@", hash);

            NSURL *url = [NSURL URLWithString:@"http://my url"];
            NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
            [request addValue:[NSString stringWithFormat:@"SharedKeyLite %@:%@",
                AZURE_ACCOUNT_NAME, hash] forHTTPHeaderField:@"Authorization"];
            [request addValue:dateString forHTTPHeaderField:@"x-ms-date"];
            [request addValue:@"application/atom+xml, application/xml" forHTTPHeaderField:@"Accept"];
            [request addValue:@"UTF-8" forHTTPHeaderField:@"Accept-Charset"];
            NSLog(@"Headers: %@", [request allHTTPHeaderFields]);
            NSLog(@"URL: %@", [[request URL] absoluteString]);
            return request;
        }

        - (NSString *)rfc1123String:(NSDate *)date {
            static NSDateFormatter *df = nil;
            if (df == nil) {
                df = [[NSDateFormatter alloc] init];
                df.locale = [[[NSLocale alloc] initWithLocaleIdentifier:@"en_US"] autorelease];
                df.timeZone = [NSTimeZone timeZoneWithAbbreviation:@"GMT"];
                df.dateFormat = @"EEE',' dd MMM yyyy HH':'mm':'ss 'GMT'";
            }
            return [df stringFromDate:date];
        }
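    Two things worth checking, offered as assumptions based on the Table-service SharedKeyLite scheme rather than a verified diagnosis: the string-to-sign is the x-ms-date value and the canonicalized resource joined by a newline (not slashes), and the account key must be base64-decoded before it is used as the HMAC key. A sketch of the string construction, with dateString/accountName/tableName as placeholder names:

        // string-to-sign for Table SharedKeyLite:
        //   <x-ms-date value> "\n" "/" <account> "/" <table>
        std::string stringToSign = dateString + "\n" + "/" + accountName + "/" + tableName;
        // HMAC-SHA256 key: base64-DECODE the account key first,
        // not the raw characters of the base64 string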


  • rand() with exclusion of an already generated random number?

    - by Stefan
    Hey, I have a function which fetches a user's associated users from a table. The function then uses rand() to choose five randomly selected user IDs from the array. However, when a user doesn't have many associated users but is above the minimum (if the array has fewer than five entries it is just returned as-is), this gives bad results due to repeated random numbers. How can I overcome this, or exclude a previously selected random number from the next rand() call? Here is the section of code doing the work. Bear in mind this must be highly efficient, as this script is used everywhere.

        $size = sizeof($users) - 1;
        $nusers[0] = $users[rand(0, $size)];
        $nusers[1] = $users[rand(0, $size)];
        $nusers[2] = $users[rand(0, $size)];
        $nusers[3] = $users[rand(0, $size)];
        $nusers[4] = $users[rand(0, $size)];
        return $nusers;

    Thanks in advance! Stefan
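    The general technique is sampling without replacement: shuffle (or partially shuffle) and take the first five, rather than drawing five independent indices. In PHP, array_rand($users, 5) expresses the same idea in one call. Sketched here in C++ for illustration:

        #include <algorithm>
        #include <random>
        #include <vector>

        std::vector<int> pickFive(std::vector<int> users)   // by value: we may reorder
        {
            if (users.size() <= 5) return users;            // too few: return as-is
            std::mt19937 rng(std::random_device{}());
            std::shuffle(users.begin(), users.end(), rng);  // no repeats by construction
            users.resize(5);
            return users;
        }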


  • Using std::ifstream to load an array of structs into a std::vector

    - by Sent1nel
    I am working on a bitmap loader in C++, and when moving from a C-style array to std::vector I have run into an unusual problem that Google does not seem to have the answer to. 8-bit and 4-bit bitmaps contain a colour palette. The colour palette has blue, green, red and reserved components, each 1 byte in size:

        // Colour palette
        struct BGRQuad {
            UInt8 blue;
            UInt8 green;
            UInt8 red;
            UInt8 reserved;
        };

    The problem I am having is that when I create a vector of the BGRQuad structure, I can no longer use the ifstream read function to load data from the file directly into the BGRQuad vector:

        // This code throws an assert failure!
        std::vector<BGRQuad> quads;
        if (coloursUsed) // colour table available
        {
            // read in the colours
            quads.reserve(coloursUsed);
            inFile.read(reinterpret_cast<char*>(&quads[0]),
                        coloursUsed * sizeof(BGRQuad));
        }

    Does anyone know how to read directly into the vector without having to create a C array and copy the data into the BGRQuad vector?
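    A sketch of the usual fix: reserve only allocates capacity, it does not change the vector's size, so quads[0] indexes an empty vector and trips the debug assert. resize, or the sized constructor, creates the elements first and then reading into them is fine:

        std::vector<BGRQuad> quads(coloursUsed);   // size == coloursUsed, elements exist
        inFile.read(reinterpret_cast<char*>(quads.data()),   // &quads[0] pre-C++11
                    coloursUsed * sizeof(BGRQuad));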


  • C++ arrays as parameters, subscript vs. pointer

    - by awshepard
    Alright, I'm guessing this is an easy question, so I'll take the knocks, but I'm not finding what I need on Google or SO. I'd like to create an array in one place and populate it inside a different function. I define a function:

        void someFunction(double results[])
        {
            for (int i = 0; i < 100; ++i)
            {
                for (int n = 0; n < 16; ++n) // note this iteration limit
                {
                    results[n] += i * n;
                }
            }
        }

    That's an approximation of what my code is doing, but regardless, it shouldn't be running into any overflow or out-of-bounds issues or anything. I generate an array:

        double result[16];
        for (int i = 0; i < 16; i++)
        {
            result[i] = -1;
        }

    Then I want to pass it to someFunction:

        someFunction(result);

    When I set breakpoints and step through the code, upon entering someFunction, results is set to the same address as result, and the value there is -1.000000 as expected. However, when I start iterating through the loop, results[n] doesn't seem to resolve to *(results+n) or *(results+n*sizeof(double)); it just seems to resolve to *(results). What I end up with is that instead of populating my result array, I just get one value. What am I doing wrong?
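    One observation, offered as a likely explanation rather than a diagnosis: inside someFunction the parameter has decayed to a plain double*, so a debugger watch on results displays only the first element even while results[n] is updating all sixteen (and results[n] does mean *(results + n); the compiler scales by sizeof(double) for you). If you want the compiler and the debugger to keep the array's size, pass a reference to the array:

        void someFunction(double (&results)[16])   // the size is part of the type now
        {
            for (int i = 0; i < 100; ++i)
                for (int n = 0; n < 16; ++n)
                    results[n] += i * n;
        }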


  • Problem with kCFSocketReadCallBack

    - by zp26
    Hello. I have a problem with my program. I created a socket with kCFSocketReadCallBack. My intention was to call AcceptCallback only when a string arrives on the socket. Instead my program does not just accept the connection: it always goes into startReceive, stops there, and sometimes the program crashes. Can anybody help? Thanks.

        readSocket = CFSocketCreateWithNative(NULL, fd, kCFSocketReadCallBack, AcceptCallback, &context);

        static void AcceptCallback(CFSocketRef s, CFSocketCallBackType type, CFDataRef address, const void *data, void *info)
        // Called by CFSocket when someone connects to our listening socket.
        // This implementation just bounces the request up to Objective-C.
        {
            ServerVistaController *obj;
        #pragma unused(address)
            // assert(address == NULL);
            assert(data != NULL);
            obj = (ServerVistaController *)info;
            assert(obj != nil);
        #pragma unused(s)
            assert(s == obj->listeningSocket);
            if (type & kCFSocketAcceptCallBack) {
                [obj acceptConnection:*(int *)data];
            }
            if (type & kCFSocketAcceptCallBack) {
                [obj startReceive:*(int *)data];
            }
        }

        - (void)startReceive:(int)fd
        {
            CFReadStreamRef readStream = NULL;
            CFIndex bytes;
            UInt8 buffer[MAXLENGTH];

            CFStreamCreatePairWithSocket(kCFAllocatorDefault, fd, &readStream, NULL);
            if (!readStream) {
                close(fd);
                [self updateLabel:@"No readStream"];
            }
            CFReadStreamOpen(readStream);
            [self updateLabel:@"OpenStream"];
            bytes = CFReadStreamRead(readStream, buffer, sizeof(buffer));
            if (bytes < 0) {
                [self updateLabel:(NSString *)buffer];
                close(fd);
            }
            CFReadStreamClose(readStream);
        }
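    A sketch of one likely mismatch, assuming this socket is meant to be the listening socket: accept events are only delivered when the socket is created with kCFSocketAcceptCallBack. With kCFSocketReadCallBack, the data argument is not a new connection's native handle, so reading it as *(int *)data is undefined, which would explain the crashes.

        // listening socket: ask CFSocket for accept events, not raw read events
        readSocket = CFSocketCreateWithNative(NULL, fd, kCFSocketAcceptCallBack,
                                              AcceptCallback, &context);
        // in the callback, data is then a CFSocketNativeHandle* for the
        // accepted connection, which startReceive: can read from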


  • Self referencing userdata and garbage collection

    - by drtwox
    Because my userdata objects reference themselves, I need to delete and nil a variable for the garbage collector to work. Lua code:

        obj = object:new()
        --
        -- Some time later
        obj:delete()  -- Removes the self reference
        obj = nil     -- Ready for collection

    C code:

        typedef struct {
            int self; // Reference to the object
            // Other members and function references removed
        } Object;

        // Called from Lua to create a new object
        static int object_new(lua_State *L)
        {
            Object *obj = lua_newuserdata(L, sizeof(Object));
            // Create the 'self' reference; userdata is on the stack top
            obj->self = luaL_ref(L, LUA_REGISTRYINDEX);
            // Put the userdata back on the stack before returning
            lua_rawgeti(L, LUA_REGISTRYINDEX, obj->self);
            // The object pointer is also stored outside of Lua for processing in C
            return 1;
        }

        // Called by Lua to delete an object
        static int object_delete(lua_State *L)
        {
            Object *obj = lua_touserdata(L, 1);
            // Remove the object's self reference
            luaL_unref(L, LUA_REGISTRYINDEX, obj->self);
            return 0;
        }

    Is there some way I can set the object to nil in Lua and have the delete() method called automatically? Alternatively, can the delete method nil all variables that reference the object? Can the self reference be made 'weak'?
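    Two notes, sketched under the assumption of Lua 5.1 or newer: a __gc metamethod on the userdata's metatable runs automatically when the userdata is collected, which covers the "called automatically" part. But a strong luaL_ref in the registry keeps the userdata alive forever, so for collection to ever happen the self reference has to live in a weak-valued table (one whose metatable has __mode = "v") rather than directly in the registry.

        /* Give each new userdata a metatable whose __gc runs the cleanup.
           Call with the userdata at the top of the stack. */
        static void set_gc_metatable(lua_State *L)
        {
            luaL_newmetatable(L, "Object");     /* creates or fetches the metatable */
            lua_pushcfunction(L, object_delete);
            lua_setfield(L, -2, "__gc");        /* mt.__gc = object_delete */
            lua_setmetatable(L, -2);            /* setmetatable(userdata, mt) */
        }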


  • How to properly recreate a BITMAP that was previously shared by CreateFileMapping()?

    - by zim22
    Dear friends, I need your help. I need to send a .bmp file to another process (a dialog box) and display it there, using an MMF (memory-mapped file). But the problem is that the image displays with reversed colours and upside down. In the first application I open a picture from the HDD and link it to the named MMF "Gigabyte_picture":

        HANDLE hFile = CreateFile("123.bmp", GENERIC_READ, FILE_SHARE_READ, NULL,
                                  OPEN_EXISTING, 0, NULL);
        CreateFileMapping(hFile, NULL, PAGE_READONLY, 0, 0, "Gigabyte_picture");

    In the second application I open the mapped bmp file, and at the end I display m_HBitmap on the static control using the SendMessage function:

        HANDLE hMappedFile = OpenFileMapping(FILE_MAP_READ, FALSE, "Gigabyte_picture");
        PBYTE pbData = (PBYTE) MapViewOfFile(hMappedFile, FILE_MAP_READ, 0, 0, 0);

        BITMAPINFO bmpInfo = { 0 };
        LONG lBmpSize = 60608; // size of the bmp file in bytes
        bmpInfo.bmiHeader.biBitCount = 32;
        bmpInfo.bmiHeader.biHeight = 174;
        bmpInfo.bmiHeader.biWidth = 87;
        bmpInfo.bmiHeader.biPlanes = 1;
        bmpInfo.bmiHeader.biSizeImage = lBmpSize;
        bmpInfo.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);

        UINT *pPixels = 0;
        HDC hDC = CreateCompatibleDC(NULL);
        HBITMAP m_HBitmap = CreateDIBSection(hDC, &bmpInfo, DIB_RGB_COLORS,
                                             (void **)&pPixels, NULL, 0);
        SetBitmapBits(m_HBitmap, lBmpSize, pbData);
        SendMessage(gStaticBox, STM_SETIMAGE, (WPARAM)IMAGE_BITMAP, (LPARAM)m_HBitmap);

        /////////////
        HWND gStaticBox = CreateWindowEx(0, "STATIC", "",
            SS_CENTERIMAGE | SS_REALSIZEIMAGE | SS_BITMAP | WS_CHILD | WS_VISIBLE,
            10, 10, 380, 380, myDialog, (HMENU)-1, NULL, NULL);
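    A sketch of two likely causes, assuming a standard .bmp layout: the mapped view starts at the BITMAPFILEHEADER, not at the pixel array, and a positive biHeight declares a bottom-up DIB, so top-down copying shows the image upside down. Offsetting by bfOffBits and negating the height addresses both; if the file is actually 24-bit rather than 32-bit, biBitCount and the row stride must be matched to the file's info header as well.

        BITMAPFILEHEADER *bfh = (BITMAPFILEHEADER *)pbData;
        BYTE *pixels = pbData + bfh->bfOffBits;   // skip file + info headers
        bmpInfo.bmiHeader.biHeight = -174;        // negative => top-down DIB
        SetBitmapBits(m_HBitmap, lBmpSize - bfh->bfOffBits, pixels);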


  • C: socket connection timeout

    - by The.Anti.9
    I have a simple program to check if a port is open, but I want to shorten the timeout on the socket connection because the default is far too long. I'm not sure how to do this though. Here's the code:

        #include <sys/socket.h>
        #include <sys/time.h>
        #include <sys/types.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <netdb.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            u_short port;                /* user specified port number */
            char addr[1023];             /* will be a copy of the address entered by user */
            struct sockaddr_in address;  /* the libc network address data structure */
            short int sock = -1;         /* file descriptor for the network socket */

            if (argc != 3) {
                fprintf(stderr, "Usage %s <port_num> <address>", argv[0]);
                return EXIT_FAILURE;
            }

            address.sin_addr.s_addr = inet_addr(argv[2]); /* assign the address */
            address.sin_port = htons(atoi(argv[2]));      /* translate int2port num */

            sock = socket(AF_INET, SOCK_STREAM, 0);
            if (connect(sock, (struct sockaddr *)&address, sizeof(address)) == 0) {
                printf("%i is open\n", port);
            }
            close(sock);
            return 0;
        }
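    The usual technique, sketched with a hypothetical 3-second limit: put the socket in non-blocking mode, start the connect (it returns immediately with errno EINPROGRESS), then wait for writability with select; if select times out, abandon the attempt. SO_ERROR distinguishes success from refusal afterwards.

        #include <sys/select.h>

        fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);
        connect(sock, (struct sockaddr *)&address, sizeof(address)); /* EINPROGRESS */

        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(sock, &wfds);
        struct timeval tv = { 3, 0 };                /* 3-second timeout */
        if (select(sock + 1, NULL, &wfds, NULL, &tv) > 0) {
            int err; socklen_t len = sizeof(err);    /* check the real result */
            getsockopt(sock, SOL_SOCKET, SO_ERROR, &err, &len);
            printf(err == 0 ? "open\n" : "closed\n");
        } else {
            printf("timed out\n");
        }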


  • How can I get SWIG to wrap a linked-list-type structure?

    - by bk
    Here's what I take to be a pretty standard header for a list. Because the struct points to itself, we need this two-part declaration. Call it listicle.h:

        typedef struct _listicle listicle;
        struct _listicle {
            int i;
            listicle *next;
        };

    I'm trying to get SWIG to wrap this, so that the Python user can make use of the listicle struct. Here's what I have in listicle.i right now:

        %module listicle
        %{
        #include "listicle.h"
        %}

        %include listicle.h

        %rename(listicle) _listicle;

        %extend listicle {
            listicle() { return malloc(sizeof(listicle)); }
        }

    As you can tell by my being here asking, it doesn't work. All the various combinations I've tried each fail in their own special way. (This one: "%extend defined for an undeclared class listicle". Change it to %extend _listicle (and fix the constructor), and loading in Python gives "type object '_listicle' has no attribute '_listicle_swigregister'". And so on.) Suggestions?
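    A sketch of one arrangement that tends to work, offered as an assumption-laden starting point rather than a verified fix: directives like %rename only affect declarations SWIG has not yet seen, so the rename has to come before the %include, and %extend then refers to the struct's tag name. Inside %extend, a "constructor" is a function that returns a pointer to the struct.

        %module listicle
        %{
        #include "listicle.h"
        %}

        %rename(listicle) _listicle;   /* must precede the declaration */
        %include "listicle.h"

        %extend _listicle {
            _listicle() {              /* SWIG wraps this as the constructor */
                return (listicle *)calloc(1, sizeof(listicle));
            }
            ~_listicle() { free($self); }
        }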


  • RegQueryValueEx not working with a Release version but working fine with Debug

    - by Nux
    Hi. I'm trying to read some ODBC details from the registry, and for that I use RegQueryValueEx. The problem is that when I compile the release version, it simply cannot read any registry values. The code is:

        CString odbcFuns::getOpenedKeyRegValue(HKEY hKey, CString valName)
        {
            CString retStr;
            char *strTmp = (char *)malloc(MAX_DSN_STR_LENGTH * sizeof(char));
            memset(strTmp, 0, MAX_DSN_STR_LENGTH);
            DWORD cbData;
            long rret = RegQueryValueEx(hKey, valName, NULL, NULL, (LPBYTE)strTmp, &cbData);
            if (rret != ERROR_SUCCESS) {
                free(strTmp);
                return CString("?");
            }
            strTmp[cbData] = '\0';
            retStr.Format(_T("%s"), strTmp);
            free(strTmp);
            return retStr;
        }

    I've found a workaround for this: I disabled optimization (/Od), but it seems strange that I needed to do that. Is there some other way? I use Visual Studio 2005. Maybe it's a bug in VS? Almost forgot: the error code is 2 (as if the key wouldn't be found).
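    One concrete defect worth noting, independent of the optimizer: cbData is passed uninitialised, but RegQueryValueEx reads it on input as the size of the output buffer. Debug builds fill the stack with a large fill pattern so the call happens to succeed; release builds leave garbage there, so behaviour differs between configurations. A sketch of the fix:

        DWORD cbData = MAX_DSN_STR_LENGTH - 1;  /* in: buffer size; out: bytes written */
        long rret = RegQueryValueEx(hKey, valName, NULL, NULL, (LPBYTE)strTmp, &cbData);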


  • Help C++ifying this C-style code

    - by Flamewires
    Hey, I'm used to developing in C and I would like to use C++ in a project. Can anyone give me an example of how I would translate this C-style code into C++? I know it should compile under a C++ compiler, but I'm talking about using C++ techniques (i.e. classes, RAII).

        typedef struct Solution Solution;
        struct Solution {
            double x[30];
            int itt_found;
            double value;
        };

        Solution *NewSolution()
        {
            Solution *S = (Solution *)malloc(sizeof(Solution));
            for (int i = 0; i <= 30; i++) {
                S->x[i] = 0;
            }
            S->itt_found = -1;
            return S;
        }

        void FreeSolution(Solution *S)
        {
            if (S != NULL)
                free(S);
        }

        int main()
        {
            Solution *S = NewSolution();
            S->value = eval(S->x); // eval is another function that returns a double
            S->itt_found = 0;
            FreeSolution(S);
            return EXIT_SUCCESS;
        }

    Ideally I would like to be able to do something like the following in main, but I'm not sure exactly how to create the class. I've read a lot of stuff, but incorporating it all together correctly seems a little hard at the moment.

        Solution S(30); // constructor that takes as an argument the size of the double array
        S.eval();       // a method that runs eval on S.x[] and stores the result in S.value
        cout << S.value << endl;

    Ask if you need more info, thanks.
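    A sketch of one idiomatic translation, assuming eval keeps an existing signature over a double array: std::vector handles the allocation and cleanup (RAII), so there is no Free counterpart at all.

        #include <cstddef>
        #include <iostream>
        #include <vector>

        double eval(const double *xs); // existing function, assumed declared elsewhere

        class Solution {
        public:
            explicit Solution(std::size_t n) : x(n, 0.0), itt_found(-1), value(0.0) {}

            void eval() { value = ::eval(x.data()); itt_found = 0; }

            std::vector<double> x; // freed automatically when the Solution dies
            int itt_found;
            double value;
        };

        int main() {
            Solution S(30);
            S.eval();
            std::cout << S.value << std::endl;
        } // S's destructor runs here; no manual free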


  • Generic that takes only numeric types (int double etc)?

    - by brandon
    In a program I'm working on, I need to write a function that takes any numeric type (int, short, long, etc.) and shoves it into a byte array at a specific offset. There is a BitConverter.GetBytes() method that takes a numeric type and returns it as a byte array, and this method only takes numeric types. So far I have:

        private void AddToByteArray<T>(byte[] destination, int offset, T toAdd) where T : struct
        {
            Buffer.BlockCopy(BitConverter.GetBytes(toAdd), 0, destination, offset, sizeof(toAdd));
        }

    So basically my goal is that, for example, a call to AddToByteArray(array, 3, (short)10) would take 10 and store it in the 4th slot of array. The explicit cast exists because I know exactly how many bytes I want it to take up. There are cases where I want a number small enough to fit in a short to still take up 4 bytes, and on the flip side, times when I want an int crunched down to just a single byte. I'm doing this to create a custom network packet, if that makes any ideas pop into your heads. If the where clause of a generic supported something like "where T : int || long || etc." I would be OK. (No need to explain why they don't support that; the reason is fairly obvious.) Any help would be greatly appreciated!

    Edit: I realize that I could just do a bunch of overloads, one for each type I want to support... but I'm asking this question because I want to avoid precisely that :)


  • Finding N contiguous zero bits in an integer to the left of the MSB of another

    - by James Morris
    First we find the MSB of the first integer, and then try to find a region of N contiguous zero bits within the second number which is to the left of the MSB of the first integer. Here is the C code for my solution:

        typedef unsigned int t;
        unsigned const t_bits = sizeof(t) * CHAR_BIT;

        _Bool test_fit_within_left_of_msb(unsigned width, t val1, t val2,
                                          unsigned *offset_result)
        {
            unsigned offbit = 0;
            unsigned msb = 0;
            t mask;
            t b;

            while (val1 >>= 1)
                ++msb;

            while (offbit + width < t_bits - msb)
            {
                mask = (((t)1 << width) - 1) << (t_bits - width - offbit);
                b = val2 & mask;
                if (!b) {
                    *offset_result = offbit;
                    return true;
                }
                if (offbit++) /* this conditional bothers me! */
                    b <<= offbit - 1;
                while (b <<= 1)
                    offbit++;
            }
            return false;
        }

    Aside from faster ways of finding the MSB of the first integer, the commented test for a zero offbit seems extraneous, but it is necessary to skip the highest bit of type t if it is set. I have also implemented similar algorithms working to the right of the MSB of the first number, and they don't require this seemingly extra condition. How can I get rid of this extra condition, or are there far more optimal solutions?
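    For comparison, a sketch that trades the skip-ahead optimisation (and with it the awkward conditional) for a plain linear scan; it assumes 0 < width < t_bits so the mask shift stays defined:

        #include <limits.h>
        #include <stdbool.h>

        bool fit_left_of_msb(unsigned width, unsigned val1, unsigned val2,
                             unsigned *offset_result)
        {
            const unsigned bits = sizeof(unsigned) * CHAR_BIT;
            unsigned msb = 0;
            while (val1 >>= 1)
                ++msb;
            for (unsigned offbit = 0; offbit + width < bits - msb; ++offbit) {
                unsigned mask = ((1u << width) - 1u) << (bits - width - offbit);
                if ((val2 & mask) == 0) {      /* window of zeros found */
                    *offset_result = offbit;
                    return true;
                }
            }
            return false;
        }

    The skip logic in the original only pays off when val2 is dense with set bits; measuring both against real inputs would settle which is actually faster.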


  • C++ connect() keeps returning WSAETIMEDOUT over the internet but not locally

    - by KaiserJohaan
    Hello, for some reason my chat application always gets WSAETIMEDOUT when trying to connect to another person over the internet:

        int len_ip = GetWindowTextLength(GetDlgItem(hWnd, ID_EDIT_IP));
        char ipBuffer[16];
        SendMessage(GetDlgItem(hWnd, ID_EDIT_IP), WM_GETTEXT, 16, (LPARAM)ipBuffer);
        long host_ip = inet_addr(ipBuffer);

        int initializeConnection(long host_ip, HWND hWnd)
        {
            // initialize winsock
            WSADATA wdata;
            int result = WSAStartup(MAKEWORD(2, 2), &wdata);
            if (result != 0) {
                return 0;
            }

            // setup socket
            tcp_sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
            if (tcp_sock == INVALID_SOCKET) {
                return 0;
            }

            // setup socket address
            SOCKADDR_IN tcp_sock_addr;
            tcp_sock_addr.sin_family = AF_INET;
            tcp_sock_addr.sin_port = SERVER_TCP_PORT;
            tcp_sock_addr.sin_addr.s_addr = host_ip;

            // connect to server
            if (connect(tcp_sock, (SOCKADDR*)&tcp_sock_addr, sizeof(tcp_sock_addr)) == SOCKET_ERROR) {
                return 0;
            }
            HRESULT hr = WSAGetLastError();

            // set socket in asynchronous mode
            if (WSAAsyncSelect(tcp_sock, hWnd, SOCKET_TCP,
                               FD_READ | FD_WRITE | FD_CONNECT | FD_CLOSE) == SOCKET_ERROR) {
                return 0;
            }
            return 1;
        }

    For some reason it works perfectly fine on the local network between computers, but totally screws up over the internet. WSAETIMEDOUT is always returned (not connection refused, so it's not a port problem). It makes me believe something is wrong with the IP, but why on earth can it work on local addresses (like 192.168.2.4)? Any ideas? Cheers.
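    Two suspects worth checking, offered as likely causes rather than certainties. First, sin_port must be in network byte order; assigning SERVER_TCP_PORT directly means you are actually connecting to a byte-swapped port, which a lenient LAN test setup can mask:

        tcp_sock_addr.sin_port = htons(SERVER_TCP_PORT); // ports are big-endian on the wire

    Second, a peer behind a home router has a private address; incoming connections from the internet only reach it if the router forwards the port to that machine (NAT port forwarding). Works-on-LAN but times-out-on-WAN is exactly the NAT pattern, since a silently dropped SYN produces a timeout rather than "connection refused".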


  • waveInProc / Windows audio question...

    - by BTR
    I'm using the Windows API to get audio input. I've followed all the steps on MSDN and managed to record audio to a WAV file. No problem; I'm using multiple buffers and all that. I'd like to do more with the buffers than simply write to a file, so now I've got a callback set up. It works great and I'm getting the data, but I'm not sure what to do with it once I have it. Here's my callback... everything here works:

        // Media API callback
        void CALLBACK AudioRecorder::waveInProc(HWAVEIN hWaveIn, UINT uMsg,
            DWORD dwInstance, DWORD dwParam1, DWORD dwParam2)
        {
            // Data received
            if (uMsg == WIM_DATA)
            {
                // Get wav header
                LPWAVEHDR mBuffer = (WAVEHDR *)dwParam1;

                // Now what?
                for (unsigned i = 0; i != mBuffer->dwBytesRecorded; ++i)
                {
                    // I can see the chars; how do I get them into my file and audio buffers?
                    cout << mBuffer->lpData[i] << "\n";
                }

                // Re-use buffer
                mResultHnd = waveInAddBuffer(hWaveIn, mBuffer, sizeof(mInputBuffer[0])); // mInputBuffer is a const WAVEHDR *
            }
        }

        // waveInOpen cannot use an instance method as its callback,
        // so we create a static method which calls the instance version
        void CALLBACK AudioRecorder::staticWaveInProc(HWAVEIN hWaveIn, UINT uMsg,
            DWORD_PTR dwInstance, DWORD_PTR dwParam1, DWORD_PTR dwParam2)
        {
            // Call instance version of method
            reinterpret_cast<AudioRecorder *>(dwParam1)->waveInProc(hWaveIn, uMsg, dwInstance, dwParam1, dwParam2);
        }

    Like I said, it works great, but I'm trying to do the following:

        1. Convert the data to short and copy it into an array
        2. Convert the data to float and copy it into an array
        3. Copy the data to a larger char array which I'll write into a WAV
        4. Relay the data to an arbitrary output device

    I've worked with FMOD a lot and I'm familiar with interleaving and all that. But FMOD dishes everything out as floats. In this case, I'm going the other way. I guess I'm basically just looking for resources on how to go from LPSTR to short, float, and unsigned char. Thanks much in advance!
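    A sketch of the conversions, assuming 16-bit PCM was requested in the WAVEFORMATEX passed to waveInOpen (the usual case; 8-bit PCM would be unsigned bytes instead): the buffer is just interleaved little-endian samples, so reinterpret and scale. This fragment goes inside the WIM_DATA branch and needs <vector>.

        LPWAVEHDR hdr = (WAVEHDR *)dwParam1;
        const short *samples = reinterpret_cast<const short *>(hdr->lpData);
        size_t count = hdr->dwBytesRecorded / sizeof(short);

        std::vector<short> asShort(samples, samples + count);   // 1. raw 16-bit samples
        std::vector<float> asFloat(count);
        for (size_t i = 0; i < count; ++i)
            asFloat[i] = samples[i] / 32768.0f;                 // 2. normalise to [-1, 1)
        // 3./4. append hdr->lpData (dwBytesRecorded bytes) to the big char
        //        buffer or the output-device queue as-is; it is already PCM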


  • C++: figuring out the memory layout of members programmatically

    - by anon
    Suppose in one program I'm given:

        class Foo {
        public:
            int x;
            double y;
            char z;
        };

        class Bar {
        public:
            Foo f1;
            int t;
            Foo f2;
        };

        int main() {
            Bar b;
            b.f1.z = 'h';
            b.f2.z = 'w';
            // ... some crap setting the value of b;
            FILE *f = fopen("dump", "wb"); // C-style file
            fwrite(&b, sizeof(Bar), 1, f);
        }

    Suppose in another program I have:

        int main() {
            FILE *f = fopen("dump", "rb");
            std::string Foo = "int x; double y; char z;";
            std::string Bar = "Foo f1; int t; Foo f2;";
            // now, given this, is it possible to read out
            // the value of b.f1.z and b.f2.z set earlier?
        }

    What I'm asking is: given the types of a class, can I figure out how C++ lays it out?
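    There is no portable way to parse those description strings at runtime; layout is up to the implementation. But for the same compiler, flags, and declarations the layout is fixed, and offsetof reads it out. A sketch, assuming the reading program re-declares the same standard-layout Foo and Bar:

        #include <cstddef>   // offsetof
        #include <cstdio>

        int main() {
            FILE *f = fopen("dump", "rb");
            unsigned char buf[sizeof(Bar)];
            if (f && fread(buf, sizeof(Bar), 1, f) == 1) {
                // same compiler + same declarations => same offsets
                char f1z = buf[offsetof(Bar, f1) + offsetof(Foo, z)];
                char f2z = buf[offsetof(Bar, f2) + offsetof(Foo, z)];
                printf("%c %c\n", f1z, f2z);   // h w
            }
        }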


  • Help using NetUserAdd() and NetLocalGroupAddMembers() in C++

    - by Brett Powell
    So I think I almost have it. I create my dummy account with one function, and wrote a second function to add it to the Remote Desktop group. The problem is that the Administrator account is the one logged in, so I am not sure how to specify which account to add to the group. Here is my code... The user is being created properly:

        void AddRDPUser()
        {
            USER_INFO_1 ui;
            DWORD dwLevel = 1;
            DWORD dwError = 0;
            NET_API_STATUS nStatus;

            ui.usri1_name = L"BrettXFactor";
            ui.usri1_password = L"XfactorsServer96";
            ui.usri1_priv = USER_PRIV_USER;
            ui.usri1_home_dir = NULL;
            ui.usri1_comment = NULL;
            ui.usri1_flags = UF_SCRIPT;
            ui.usri1_script_path = NULL;

            nStatus = NetUserAdd(NULL, dwLevel, (LPBYTE)&ui, &dwError);
        }

    But I don't know how to specify which account to add to the group, since that account is not the one logged in. Any help would be appreciated.

        void AddToGroup()
        {
            LOCALGROUP_MEMBERS_INFO_3 lgmi3;
            DWORD dwLevel = 3;
            DWORD totalEntries = 1;
            NET_API_STATUS nStatus;
            LPCWSTR TargetGroup = L"Remote Desktop Users";

            LPSTR sBuffer = NULL;
            memset(sBuffer, 0, 255);
            DWORD nBuffSize = sizeof(sBuffer);
            if (GetUserNameEx(NameDnsDomain, sBuffer, &nBuffSize) == 0) {
                Msg("Failed to add User to Group\n");
                return;
            }
            LPWSTR user_name = (LPWSTR)sBuffer;
            lgmi3.lgrmi3_domainandname = user_name;

            nStatus = NetLocalGroupAddMembers(NULL, TargetGroup, 3, (LPBYTE)&lgmi3, totalEntries);
        }
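    A sketch of the direct route, assuming the goal is to add the account created above: GetUserNameEx returns the caller (Administrator), which is not the account you want, and memset on the NULL sBuffer would crash anyway. Pass the new account's name straight into the membership struct.

        void AddToGroup()
        {
            LOCALGROUP_MEMBERS_INFO_3 lgmi3;
            lgmi3.lgrmi3_domainandname = (LPWSTR)L"BrettXFactor"; // account NetUserAdd created
            NET_API_STATUS nStatus = NetLocalGroupAddMembers(
                NULL, L"Remote Desktop Users", 3, (LPBYTE)&lgmi3, 1);
        }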

