Search Results

Search found 23480 results on 940 pages for '32 bit'.


  • Setting enum values to 4-byte strings - why?

    - by psychotik
    I saw code similar to this in the Mac OS SDK: enum { kAudioFileStreamProperty_ReadyToProducePackets = 'redy', kAudioFileStreamProperty_FileFormat = 'ffmt', kAudioFileStreamProperty_DataFormat = 'dfmt', kAudioFileStreamProperty_FormatList = 'flst', kAudioFileStreamProperty_MagicCookieData = 'mgic', kAudioFileStreamProperty_AudioDataByteCount = 'bcnt', kAudioFileStreamProperty_AudioDataPacketCount = 'pcnt', kAudioFileStreamProperty_MaximumPacketSize = 'psze', kAudioFileStreamProperty_DataOffset = 'doff', kAudioFileStreamProperty_ChannelLayout = 'cmap', kAudioFileStreamProperty_PacketToFrame = 'pkfr', kAudioFileStreamProperty_FrameToPacket = 'frpk', kAudioFileStreamProperty_PacketToByte = 'pkby', kAudioFileStreamProperty_ByteToPacket = 'bypk', kAudioFileStreamProperty_PacketTableInfo = 'pnfo', kAudioFileStreamProperty_PacketSizeUpperBound = 'pkub', kAudioFileStreamProperty_AverageBytesPerPacket = 'abpp', kAudioFileStreamProperty_BitRate = 'brat' }; It's the first time I've seen this - I assume the compiler assigns the 32-bit integer equivalent of the strings to the enum values. I cannot think of a single good reason why this might be preferred over using simple integers. It looks hideous in a debugger (how do you tell which of these values corresponds to 1919247481?) and makes debugging harder in general. So, is there any reason why assigning such strings to enum values actually makes sense?
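
    These four-character literals are multi-character character constants (FourCC codes): each one packs its four ASCII bytes into a single 32-bit integer. A minimal sketch of decoding one back into text for debugging; the fourccToString helper is illustrative, not part of the SDK:

      #include <cstdint>
      #include <cstdio>
      #include <string>

      // Illustrative helper: turn a 32-bit FourCC back into readable text,
      // e.g. for making sense of enum values seen in a debugger.
      std::string fourccToString(uint32_t v) {
          std::string s(4, '?');
          for (int i = 0; i < 4; ++i)   // most significant byte first
              s[i] = static_cast<char>((v >> (8 * (3 - i))) & 0xFF);
          return s;
      }

      int main() {
          uint32_t ready = 'redy';   // multi-character constant; value is implementation-defined
          std::printf("%u -> %s\n", ready, fourccToString(ready).c_str());
          // With the common packing (first character in the most significant byte),
          // 1919247481 == 0x72656479 == "redy".
      }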


  • How to convert char * to uchar16_t in JNI C++

    - by Sagar Hatekar
    Hello, here's what I am trying to do: typedef uint16_t uchar16_t; uchar16_t buf[32]; // buf will contain timezone information like GMT-6, Eastern Daylight Time, etc char * str = "Test"; for (int i = 0; i <= strlen(str); i++) buf[i] = str[i]; I guess that's not correct, since uchar16_t contains 2 bytes while each element of str is 1 byte. What am I supposed to do?
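
    For plain 7-bit ASCII content the element-by-element copy shown above does work, because each char zero-extends into a uint16_t. A minimal sketch of that widening copy with bounds checking (the helper name asciiToUtf16 is illustrative); anything beyond ASCII needs a real UTF-8 to UTF-16 conversion:

      #include <cstdint>
      #include <cstring>

      typedef uint16_t uchar16_t;

      // Sketch: widen a NUL-terminated ASCII string into a 16-bit buffer.
      // Only valid for 7-bit ASCII input; real multi-byte text needs a proper
      // UTF-8 -> UTF-16 conversion.
      void asciiToUtf16(const char* src, uchar16_t* dst, size_t dstLen) {
          size_t n = std::strlen(src);
          if (n >= dstLen) n = dstLen - 1;                  // leave room for the terminator
          for (size_t i = 0; i < n; ++i)
              dst[i] = static_cast<unsigned char>(src[i]);  // zero-extend each byte
          dst[n] = 0;
      }

      int main() {
          uchar16_t buf[32];
          asciiToUtf16("Test", buf, 32);
      }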


  • Why can't I set attribute "TYPE" of LI element in IE?

    - by Petr Urban
    Hello, I've just come across an unusual behavior of Internet Explorer (v8.0.6001.18904). When I try to set the "type" attribute of any <LI> element, it results in an error. I used jQuery (v1.32): $("<li>").attr("type", "test"); The same thing works for DIV. The LI element does not seem to have a "type" attribute reserved by the HTML or XHTML definitions. It also might be a jQuery issue. The solution is simple - just use another attribute name :-) But is there someone out there who knows WHY this error occurs? Could it happen with other attribute names? Why does the error occur with the LI element only?


  • What's a good way to add a large number of small floats together?

    - by splicer
    Say you have 100000000 32-bit floating point values in an array, and each of these floats has a value between 0.0 and 1.0. If you tried to sum them all up like this result = 0.0; for (i = 0; i < 100000000; i++) { result += array[i]; } you'd run into problems as result gets much larger than 1.0. So what are some of the ways to more accurately perform the summation?
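
    One standard answer is compensated (Kahan) summation, which carries a running correction term so that small addends are not swallowed once the accumulator grows large; pairwise summation or simply accumulating in double are other options. A minimal sketch (names are illustrative):

      #include <cstdio>
      #include <vector>

      // Kahan (compensated) summation: c accumulates the low-order bits that
      // would otherwise be lost when adding a small value to a large sum.
      double kahanSum(const std::vector<float>& values) {
          double sum = 0.0, c = 0.0;
          for (float v : values) {
              double y = v - c;
              double t = sum + y;   // low-order bits of y are lost here...
              c = (t - sum) - y;    // ...and recovered into c for the next iteration
              sum = t;
          }
          return sum;
      }

      int main() {
          std::vector<float> values(10000000, 0.1f);   // smaller than the 100M in the question
          std::printf("%f\n", kahanSum(values));       // ~1000000.0
      }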


  • What is the most efficient method to find x contiguous values of y in an array?

    - by Alec
    Running my app through callgrind revealed that this line dwarfed everything else by a factor of about 10,000. I'm probably going to redesign around it, but it got me wondering: is there a better way to do it? Here's what I'm doing at the moment: int i = 1; while ( ( (*(buffer++) == 0xffffffff && ++i) || (i = 1) ) && i < desiredLength + 1 && buffer < bufferEnd ); It's looking for the offset of the first chunk of desiredLength 0xffffffff values in a 32-bit unsigned int array. It's significantly faster than any implementation I could come up with involving an inner loop. But it's still too damn slow.
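
    For comparison, the same scan written as a single pass with an explicit run counter; this is a sketch with illustrative names (findRun), not the poster's final redesign, and it touches each element exactly once:

      #include <cstddef>
      #include <cstdint>

      // Return the offset of the first run of desiredLength consecutive
      // 0xFFFFFFFF values, or -1 if no such run exists.
      std::ptrdiff_t findRun(const uint32_t* buffer, const uint32_t* bufferEnd,
                             std::size_t desiredLength) {
          std::size_t run = 0;
          for (const uint32_t* p = buffer; p != bufferEnd; ++p) {
              run = (*p == 0xFFFFFFFFu) ? run + 1 : 0;   // reset on any non-matching word
              if (run == desiredLength)                  // found a full run
                  return (p - buffer) - static_cast<std::ptrdiff_t>(desiredLength - 1);
          }
          return -1;
      }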


  • Perl: Why does "use strict" not let me pass a parameter hash?

    - by Thariama
    I have a Perl subroutine where I would like to pass parameters as a hash (the aim is to include a CSS file depending on the parameter 'iconsize'). I am using the call: get_function_bar_begin('iconsize' => '32'); for the subroutine get_function_bar_begin: use strict; ... sub get_function_bar_begin { my $self = shift; my %template_params = %{ shift || {} }; return $self->render_template('global/bars/tmpl_incl_function_bar_begin.html', %template_params); } Why does this yield the error message: Error executing run mode 'start': undef error - Can't use string ("iconsize") as a HASH ref while "strict refs" in use at CheckBar.pm at line 334 Am I doing something wrong here? Is there another way to submit my data ('iconsize') as a hash?


  • What's the reason behind the jumping GeneratedValue(strategy=GenerationType.TABLE) when not specifying allocationSize?

    - by joeduardo
    Why do I need to add allocationSize=1 when using the @TableGenerator to ensure that the id doesn't jump from 1, 2,... to 32,xxx, 65,xxx,... after a JVM restart? Is there a design reason for the need to specify the allocationSize? This snippet produces the jumping ids: @Id @GeneratedValue(strategy = GenerationType.TABLE) private Long id; Here's the modified snippet that produces the properly sequenced ids: @Id @GeneratedValue(strategy = GenerationType.TABLE, generator = "account_generator") @TableGenerator(name = "account_generator", initialValue = 1, allocationSize = 1) private Long id;


  • Send JSON object via GET and POST in PHP without having to wrap it in another object literal

    - by Kucebe
    My site does some short ajax calls in JSON format, using jQuery. On the client side I'd like to send the object by just passing it to the ajax function, without being forced to wrap it in an object literal like this: {'person' : person}. For the same reason, on the server side I'd like to manage objects without the binding of $_GET['person'] or $_POST['person']. For example: var person = { 'name' : 'John', 'lastName' : 'Doe', 'age' : 32, 'married' : true } sendAjaxRequest(person); In PHP, using: $person = json_decode(file_get_contents("php://input")); I can easily get the object, but only with POST, not with GET. Any suggestions?


  • Under Windows CE, how can I check which RAM based DLLs are loaded in virtual memory space?

    - by Michal Drozdowicz
    I have a problem with loading a DLL under Windows Mobile 5.0. I'm pretty confident that this is caused by running out of the application virtual memory (the 32 MB slot of the process, as explained in Windows CE .NET Advanced Memory Management). I'm looking for a way to actually make sure that this is the issue and investigate whether my efforts bring expected results. Do you know of a way to check the contents of the virtual memory application slot? Any applications that can help me with this task?


  • Swap bytes 2 and 4 of an integer

    - by czar x
    I had this interview question - swap byte 2 and byte 4 within an integer sequence. The integer is 4 bytes wide, i.e. 32 bits. My approach was to use a char pointer and a temp char to swap the bytes. For clarity I have broken out the steps; otherwise a character array could be used. unsigned char *b2, *b4, tmpc; int n = 0xABCD; b2 = &n; b2++; b4 = &n; b4 += 3; ///swap the values; tmpc = *b2; *b2 = *b4; *b4 = tmpc; Any other methods?
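
    A pointer-free alternative is to do it with masks and shifts; a minimal sketch, assuming byte 1 means the least significant byte (the question doesn't say which numbering the interviewer intended):

      #include <cstdint>
      #include <cstdio>

      // Swap byte 2 and byte 4 of a 32-bit value with masks and shifts.
      // Byte 1 is taken here to be the least significant byte (an assumption).
      uint32_t swapBytes2And4(uint32_t n) {
          uint32_t b2 = (n >> 8)  & 0xFFu;     // extract byte 2
          uint32_t b4 = (n >> 24) & 0xFFu;     // extract byte 4
          n &= 0x00FF00FFu;                    // clear bytes 2 and 4
          return n | (b2 << 24) | (b4 << 8);   // reinsert them swapped
      }

      int main() {
          std::printf("0x%08X\n", swapBytes2And4(0x0000ABCDu));  // prints 0xAB0000CD
      }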


  • TSQL to find the start and end date (set based)

    - by priyanka.sarkar_2
    I have the data below Name Date A 2011-01-01 01:00:00.000 A 2011-02-01 02:00:00.000 A 2011-03-01 03:00:00.000 B 2011-04-01 04:00:00.000 A 2011-05-01 07:00:00.000 The desired output being Name StartDate EndDate ------------------------------------------------------------------- A 2011-01-01 01:00:00.000 2011-04-01 04:00:00.000 B 2011-04-01 04:00:00.000 2011-05-01 07:00:00.000 A 2011-05-01 07:00:00.000 NULL How do I achieve this using TSQL with a set-based approach? The DDL is as follows: DECLARE @t TABLE(PersonName VARCHAR(32), [Date] DATETIME) INSERT INTO @t VALUES('A', '2011-01-01 01:00:00') INSERT INTO @t VALUES('A', '2011-01-02 02:00:00') INSERT INTO @t VALUES('A', '2011-01-03 03:00:00') INSERT INTO @t VALUES('B', '2011-01-04 04:00:00') INSERT INTO @t VALUES('A', '2011-01-05 07:00:00') Select * from @t


  • mysql++ compile error

    - by rizzo0917
    When I compile code that includes the MySQL headers I get the following errors: c:\qt\2010.03\mingw\bin../lib/gcc/mingw32/4.4.0/../../../../include/stdint.h:27: error: 'int8_t' has a previous declaration as 'typedef signed char int8_t' c:\qt\2010.03\mingw\bin../lib/gcc/mingw32/4.4.0/../../../../include/stdint.h:31: error: 'int32_t' has a previous declaration as 'typedef int int32_t' c:\qt\2010.03\mingw\bin../lib/gcc/mingw32/4.4.0/../../../../include/stdint.h:32: error: 'uint32_t' has a previous declaration as 'typedef unsigned int uint32_t' Literally all I do is this: #include <cppconn/driver.h> #include <cppconn/exception.h> #include <cppconn/resultset.h> #include <cppconn/statement.h> #include Now I can go into the file and comment out the lines that give me errors //typedef signed char int8_t; //typedef int int32_t; //typedef unsigned uint32_t; It compiles, but when I try to run the mysql code: sql::Driver *driver; driver = get_driver_instance(); I get this output: test.exe exited with code -1073741515 Any ideas?


  • C newbie, ASCII control function

    - by user570607
    Hey there, I have written a program in C that works well, converting non-printable ASCII characters to their control names. I would appreciate it if a C master would show me a better way of doing it than I have currently done, mainly this section: if (isascii(ch)) { switch (ch) { case 0: printControl("NUL"); break; case 1: printControl("SOH"); break; .. etc (32 in total) default: putchar(ch); break; } } Is it normal to make a switch that big? Or should I be using some other method (a lookup from an ASCII table?)
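
    A lookup table indexed by the character code is the usual alternative to the 32-case switch; a minimal sketch (printControl and the surrounding loop are assumed to match the poster's program; the helper below just prints the name directly):

      #include <cstdio>

      // Names of the ASCII control characters 0-31 (DEL, 127, handled separately).
      static const char* kControlNames[32] = {
          "NUL","SOH","STX","ETX","EOT","ENQ","ACK","BEL",
          "BS", "HT", "LF", "VT", "FF", "CR", "SO", "SI",
          "DLE","DC1","DC2","DC3","DC4","NAK","SYN","ETB",
          "CAN","EM", "SUB","ESC","FS", "GS", "RS", "US"
      };

      // Table lookup replaces the big switch.
      void printChar(int ch) {
          if (ch >= 0 && ch < 32)
              std::printf("<%s>", kControlNames[ch]);
          else if (ch == 127)
              std::printf("<DEL>");
          else
              std::putchar(ch);
      }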


  • What .NET UnmanagedType is Unicode (UTF-16)?

    - by Pat
    I am packing bytes into a struct, and some of them correspond to a Unicode string. The following works fine for an ASCII string: [StructLayout(LayoutKind.Sequential)] private struct PacketBytes { [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 64)] public string MyString; } I assumed that I could do [StructLayout(LayoutKind.Sequential)] private struct PacketBytes { [MarshalAs(UnmanagedType.LPWStr, SizeConst = 32)] public string MyString; } to make it Unicode, but that didn't work. (Since this field is part of a struct with other fields, which I've omitted for clarity, I can't simply change the CharSet of the containing struct.) Any idea what I'm doing wrong?


  • JPA GeneratedValue with GenerationType.TABLE does a big jump after jvm restart

    - by joeduardo
    When I start my server and add an entry, the generated id will start with 1, 2, so on and so forth. After a restart, adding an entry would generate an id like 32,xxx. Another restart and adding of entry would generate an id like 65,xxx. I don't know why this is happening. Here's a snippet of the annotation I'm using for my id. I'm using Hibernate. @Id @GeneratedValue(strategy = GenerationType.TABLE) private Long id;


  • Best way to have unique key over 500M varchar(255) records in mysql/innodb?

    - by taw
    I have a url column with a unique key over it - but its performance on updates is absolutely atrocious. I suspect that's because the index doesn't all fit in memory. So I was thinking, how about adding a column of md5(url) with 16 bytes of binary data and unique-keying that instead? What would be the best datatype for that? I'd love to be able to just see the 32-character hex hash, while mysql converts it to/from 16 binary bytes and indexes that, as programs using the database might have some trouble with arbitrary binary data that I'd rather avoid if possible (also I'm a bit afraid that mysql might get some strange ideas about character sets and, for example, overallocate storage by 3:1 because it thinks it might need utf8 - how do I avoid that for sure?).


  • ANSI C++: Differences between delete and delete[]

    - by Sunscreen
    I was looking at a snippet of code: int* ip; ip = new int[100]; delete ip; The example above states that: "This code will work with many compilers, but it should instead read:" int* ip; ip = new int[100]; delete [] ip; Is this indeed the case? I use the compiler "Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 11.00.7022 for 80x86" and it does not complain (first example) while compiling. At runtime the pointer is set to NULL. Do other compilers behave differently? Can a compiler not complain and issues still appear at runtime? Thanks, Sun
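
    For reference, the mismatch is undefined behavior even when a particular compiler appears to accept it; a minimal sketch of the correct pairings (with std::vector as the usual way to sidestep the question entirely):

      #include <vector>

      int main() {
          int* single = new int(42);
          delete single;               // scalar new pairs with scalar delete

          int* many = new int[100];
          delete[] many;               // array new pairs with delete[]; anything else is UB

          std::vector<int> safer(100); // no manual delete needed at all
      }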


  • Windows 7 (windows-system32-systemproperties.exe) "needs program elevation" message

    - by mohammedjas
    Hi, I have an issue with Windows 7 32-bit Professional. Since this is a network computer, when I download or install something it asks for the admin password; I give the password, and then it shows "program needs elevation". After I go to My Computer - Properties - Advanced tab, the same message displays: windows-system32-systempropertiesadvanced.exe needs program elevation. The same message shows everywhere, e.g. if I click to install something: wind/sys32/isyspropertiesins.exe program needs elevation. I was also not able to add or change anything in Computer Management (users or groups); it gives an error, even though I logged in as admin. Please help me out with a good solution; I am looking forward to a reply as soon as possible. Regards, mohmmed


  • 16 millisecond quantization when sending/receiving TCP packets

    - by MKZ
    Hi, I have a C++ application running on a Windows XP 32-bit system, sending and receiving short TCP/IP packets. Measuring the arrival time accurately, I see a quantization of the arrival time to 16 millisecond units (meaning all arriving packets are separated from each other by 16 x N milliseconds). To avoid packet aggregation I tried to disable the Nagle algorithm by setting the TCP_NODELAY option at the IPPROTO_TCP level on the socket, but it did not help. I suspect that the problem is related to the Windows scheduler, which also has a 16 millisecond clock. Any idea of a solution to this problem? Thanks
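
    One thing worth ruling out is the default system timer resolution (roughly 15.6 ms on XP), which quantizes both timestamps and thread wake-ups; it can be raised temporarily with the multimedia timer API. A minimal sketch - whether this actually removes the 16 ms steps depends on how the arrival times are being measured:

      #include <windows.h>
      #include <mmsystem.h>               // timeBeginPeriod / timeEndPeriod
      #pragma comment(lib, "winmm.lib")

      int main() {
          timeBeginPeriod(1);             // request 1 ms timer resolution (system-wide)

          // ... run the send/receive measurement here ...

          timeEndPeriod(1);               // always pair with timeBeginPeriod
          return 0;
      }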


  • How to find the largest power of 2 less than the given number

    - by nazar_art
    I need to find the largest power of 2 less than the given number, and I'm stuck and can't find any solution. Code: public class MathPow { public int largestPowerOf2 (int n) { int res = 2; while (res < n) { res = (int)Math.pow(res, 2); } return res; } } This doesn't work correctly. Testing output: Arguments Actual Expected ------------------------- 9 16 8 100 256 64 1000 65536 512 64 256 32 How do I solve this issue?
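
    The loop squares res on every pass (Math.pow(res, 2)) instead of doubling it, and it also returns a value at or above n rather than below it. A minimal sketch of the corrected logic, written here in C++ (the same change applies to the Java original):

      #include <cstdio>

      // Largest power of 2 strictly less than n (assumes n > 1): keep doubling
      // while the next doubling would still stay below n.
      int largestPowerOf2(int n) {
          int res = 1;
          while (res * 2 < n)
              res *= 2;
          return res;
      }

      int main() {
          std::printf("%d %d %d %d\n",
                      largestPowerOf2(9),      // 8
                      largestPowerOf2(100),    // 64
                      largestPowerOf2(1000),   // 512
                      largestPowerOf2(64));    // 32
      }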


  • Can I run Visual Studio 2008 x86 on Windows Vista x64?

    - by TheCodeJunkie
    Hi, Is it possible to run the 32-bit version of Visual Studio 2008 Professional on a Windows Vista 64-bit system? Are there any known caveats that I would need to be aware of? Would I have to install the x64 version of the .NET Framework? Would there be any issues building software targeted at x86? Would there be any (justifiable) arguments for getting the x64 version of VS2008 instead of reusing the current x86 license? Quite tempted to get an x64 Vista rig to be able to take advantage of more RAM :)


  • Linking error building a 64-bit Qt app on a 32-bit XP machine

    - by photo_tom
    I'm trying to build a 64-bit version of my application (and yes, I really do need the memory) on my 32-bit XP dev box, for production testing on our Vista64 server. Previously, I built the Qt 4.6.2 DLLs in 64-bit mode without any errors; that step went very smoothly. Just to get started on the production build, I'm trying to rebuild Qt's Star Delegate demo in 64-bit mode. I converted the app from 32-bit to 64-bit by changing the application configuration and adjusting the libraries to the 64-bit versions. Now, when I go to link, I get the following error: 1>------ Build started: Project: stardelegate, Configuration: Release x64 ------ 1>Linking... 1>MSVCRT.lib(crtexew.obj) : error LNK2001: unresolved external symbol WinMain 1>release64\stardelegate.exe : fatal error LNK1120: 1 unresolved externals Suggestions? Edit - after some more searching, I discovered that if I link as a console app it will work and run, but not as a windows app. And I don't have this problem in 32-bit mode.


  • Permission error while trying to access a (server) program started by a Java program

    - by Zardoz
    I am starting a server application (normally to be started from the Unix command line) by using Runtime.getRuntime().exec("path/mmserver"). My problem is that as long as my Java program, which started that server, is running, the server is correctly accessible (from the command line and other programs). But when my Java program exits, the server is not accessible anymore (the process of the server is still running). I just get such an error message when trying to access the server: "Error: permission_error(flush_output(user_output),write,stream,user_output,errno(32))". The server is a black box for me. I am just looking for other ways to start a new process. And maybe someone has a hint why I get that permission error (even if one doesn't know exactly what that server is ... you probably won't know it).


  • Android App crashing on one device only

    - by Daniel1402
    I am working on a new game that works perfectly on my test devices, 7-inch tablets and smartphones, but it crashes on my Galaxy Tab 2 10-inch tablet with an Out of memory error. It always crashes when I start to play a second game! I have spent a full week checking the code and I cannot figure out what is wrong. When I play from the menu screen, everything works fine. When I want to replay a game level from the level screen, the game will crash on the second launch. The level screen is made of 3 fragments, each with 32 buttons (4 kB in size). I tried to keep only one fragment in memory with viewPager.setOffscreenPageLimit(1); but it does not solve the problem. Could someone steer me in some direction as to where to look for the potential problem? Why is the 10-inch tablet the only one to crash? Thanks.


  • C program giving incorrect output for simple math!

    - by DuffDuff
    (all are declared as ints, none are initialized to anything beforehand. I have included math.h and am compiling with -lm) cachesize = atoi(argv[1]); blocksize = atoi(argv[3]); setnumber = (cachesize/blocksize); printf("setnumber: %d\n", setnumber); setbits = (log(setnumber))/(log(2)); printf("sbits: %d\n", setbits); when given cachesize as 1024 and blocksize as 16 the output is as follows: setnumber: 64 sbits: 5 but log(64)/log(2) = 6 ! It works correctly when given cachesize 512 and blocksize 32. I can't seem to win. I'm really hoping that it's a stupid mistake on my part, and I'd be grateful if anyone could point out what it is! Thank you! PS: I posted this in Yahoo Answers first but that was probably silly. Won't be doing that again.
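
    The likely culprit is floating-point truncation rather than the math library: log(64)/log(2) can evaluate to 5.999999..., and assigning that to an int truncates toward zero, giving 5. A minimal sketch of a fix - round the quotient (or use log2) before converting:

      #include <cmath>
      #include <cstdio>

      int main() {
          int cachesize = 1024, blocksize = 16;
          int setnumber = cachesize / blocksize;   // 64, as in the question
          // log(64)/log(2) may come out as 5.999999...; rounding avoids the
          // truncation to 5 that happens on plain int conversion.
          int setbits = (int)std::lround(std::log((double)setnumber) / std::log(2.0));
          std::printf("setnumber: %d\nsbits: %d\n", setnumber, setbits);  // 64 and 6
      }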

