Search Results

Search found 23480 results on 940 pages for '32 bit'.

  • ClassCastException when casting custom View subclass

    - by Jens Jacob
    Hi, I've run into an early problem with developing for Android. I've made my own custom View (which works well). In the beginning I just added it to the layout programmatically, but I figured I could try putting it into the XML layout instead (for consistency). So what I've got is this:

    main.xml:
        [...]
        <sailmeter.gui.CompassView android:id="@+id/compassview1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_below="@id/widget55" android:background="@color/white" />
        [...]

    CompassView.java:
        public class CompassView extends View { }

    SailMeter.java (activity class):
        public class SailMeter extends Activity implements PropertyChangeListener {
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                compassview = (CompassView) findViewById(R.id.compassview1);
                [...]
            }
        }
    (There's obviously more, but you get the point.)

    Now, this is the stack trace:

        05-23 16:32:01.991: ERROR/AndroidRuntime(10742): Uncaught handler: thread main exiting due to uncaught exception
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): java.lang.RuntimeException: Unable to start activity ComponentInfo{sailmeter.gui/sailmeter.gui.SailMeter}: java.lang.ClassCastException: android.view.View
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2596)
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2621)
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): at android.app.ActivityThread.access$2200(ActivityThread.java:126)
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1932)
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): at android.os.Handler.dispatchMessage(Handler.java:99)
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): at android.os.Looper.loop(Looper.java:123)
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): at android.app.ActivityThread.main(ActivityThread.java:4595)
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): at java.lang.reflect.Method.invokeNative(Native Method)
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): at java.lang.reflect.Method.invoke(Method.java:521)
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): at dalvik.system.NativeStart.main(Native Method)
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): Caused by: java.lang.ClassCastException: android.view.View
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): at sailmeter.gui.SailMeter.onCreate(SailMeter.java:51)
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047)
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2544)
        05-23 16:32:02.051: ERROR/AndroidRuntime(10742): ... 11 more

    Why can't I cast my custom view? I need it to be that type, since it has a few extra methods in it that I want to access. Should I restructure it and have another class handle the logic, so the view is just a plain View? Thanks for any help.

    Read the article

  • Deploying 32 and 64 bit COM objects on 64 bit machine from one VS setup project MSI.

    - by hooligan
    I have a Shell Namespace Extension C++ COM DLL that must have both a 32 bit and a 64 bit version installed on a 64 bit machine, because when 32 bit applications open a file-open dialog, the dialog that is presented is a 32 bit shell. The problem is that both my 32 bit and 64 bit COM objects have the same progid, and the VS setup project will throw an error when including two files with the same progid. How do I get around this issue if I want to maintain the same code for both 32 and 64 bit? Currently I just have two different MSIs (32 and 64), and they both must be run on the 64 bit machine.

    Read the article

  • Perl: How do I extract certain bits from a byte and then convert these bits to a hex value?

    - by Siegfried Hepp
    I need to extract certain bits of a byte and convert the extracted bits back to a hex value. Example (the value of the byte is 0xD2):

        76543210   bit position
        11010010   is 0xD2

    Bits 0-3 define the channel, which is 0010b, i.e. 0x2. Bits 4-5 define the controller, which is 01b, i.e. 0x1. Bits 6-7 define the port, which is 11b, i.e. 0x3. I somehow need to get from the byte 0xD2 to channel 0x2, controller 0x1, port 0x3. I googled a lot and found the functions pack/unpack, vec and sprintf, but I'm scratching my head over how to use these functions to achieve this. Any idea how to achieve this in Perl?
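
    For reference, here is a minimal sketch of the masking and shifting involved, written in C for concreteness (Perl's & and >> operators behave the same way on integers, and sprintf("%X", ...) formats the result as hex); the field layout is the one given in the question:

        #include <stdio.h>

        int main(void) {
            unsigned byte = 0xD2;                        /* 1101 0010 */

            unsigned channel    =  byte        & 0x0Fu;  /* bits 0-3 -> 0x2 */
            unsigned controller = (byte >> 4)  & 0x03u;  /* bits 4-5 -> 0x1 */
            unsigned port       = (byte >> 6)  & 0x03u;  /* bits 6-7 -> 0x3 */

            printf("channel=0x%X controller=0x%X port=0x%X\n",
                   channel, controller, port);
            return 0;
        }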

    Read the article

  • Invert 1 bit in C#

    - by Matt Jacobsen
    I have 1 bit in a byte (always in the lowest order position) that I'd like to invert, i.e. given 00000001 I'd like to get 00000000, and with 00000000 I'd like 00000001. I solved it like this: bit > 0 ? 0 : 1; I'm curious to see how else it could be done.
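
    A common alternative is XOR with 1, which flips only the lowest bit; the same expression works in C#. A minimal sketch in C:

        #include <stdio.h>

        int main(void) {
            unsigned b = 0x01;   /* 00000001 */
            b ^= 1u;             /* XOR with 1 flips only bit 0 -> 00000000 */
            printf("%u\n", b);
            b ^= 1u;             /* flips it back -> 00000001 */
            printf("%u\n", b);
            return 0;
        }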

    Read the article

  • Add and Subtract 128 Bit Integers in C(++)

    - by Billy ONeal
    Hello :) I'm writing a compressor for a long stream of 128 bit numbers. I would like to store the numbers as differences: storing only the difference between consecutive numbers rather than the numbers themselves, because the differences are smaller and pack into fewer bytes. However, for compression I need to subtract these 128 bit values, and for decompression I need to add them. The maximum integer size for my compiler is 64 bits wide. Anyone have any ideas for doing this efficiently? Billy3
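
    One portable approach is to keep each 128-bit value as two 64-bit halves and propagate the carry/borrow by hand (GCC and Clang also offer an unsigned __int128 extension on 64-bit targets, which hides this bookkeeping). A minimal C sketch, with a hypothetical u128 helper type:

        #include <stdint.h>
        #include <stdio.h>

        typedef struct { uint64_t lo, hi; } u128;

        static u128 add128(u128 a, u128 b) {
            u128 r;
            r.lo = a.lo + b.lo;
            /* carry out of the low word iff the unsigned sum wrapped around */
            r.hi = a.hi + b.hi + (r.lo < a.lo);
            return r;
        }

        static u128 sub128(u128 a, u128 b) {
            u128 r;
            r.lo = a.lo - b.lo;
            /* borrow iff the minuend's low word was smaller */
            r.hi = a.hi - b.hi - (a.lo < b.lo);
            return r;
        }

        int main(void) {
            u128 a = { 0x0000000000000001ULL, 0x0000000000000001ULL }; /* 2^64 + 1 */
            u128 b = { 0xFFFFFFFFFFFFFFFFULL, 0x0000000000000000ULL }; /* 2^64 - 1 */
            u128 s = add128(a, b);   /* 2^65 -> hi=2, lo=0 */
            u128 d = sub128(a, b);   /* 2    -> hi=0, lo=2 */
            printf("sum: hi=%llu lo=%llu  diff: hi=%llu lo=%llu\n",
                   (unsigned long long)s.hi, (unsigned long long)s.lo,
                   (unsigned long long)d.hi, (unsigned long long)d.lo);
            return 0;
        }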

    Read the article

  • Getting the Leftmost Bit

    - by James
    I have a 5 bit integer that I'm working with. Is there a native function in Objective-C that will tell me which bit is the leftmost one that is set? E.g. given 01001 it would return 8, or that bit's position. Thanks
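
    Objective-C can use plain C here, so one option is simply to scan down from the top bit of the 5-bit value (GCC/Clang also provide a __builtin_clz builtin for full words, though it is undefined for 0). A minimal C sketch using the 01001 example from the question:

        #include <stdio.h>

        int main(void) {
            unsigned v = 0x09;          /* 01001 in a 5-bit field */
            unsigned mask = 1u << 4;    /* start at the top bit of the 5-bit value */
            int pos = 4;

            /* walk down until a set bit is found (mask becomes 0 if v == 0) */
            while (mask && !(v & mask)) {
                mask >>= 1;
                --pos;
            }
            printf("leftmost bit value=%u position=%d\n", mask, pos);  /* 8, 3 */
            return 0;
        }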

    Read the article

  • Media Information for Constant and Variable bit rate of Video files

    - by cpx
    What is this "Maximum bit rate" for a .mp4 file whose bit rate mode is Constant? Media information displayed for the MP4 (using the MediaInfo tool):

        ID : 1
        Format : AVC
        Format/Info : Advanced Video Codec
        Format profile : [email protected]
        Format settings, CABAC : No
        Format settings, ReFrames : 1 frame
        Codec ID : avc1
        Codec ID/Info : Advanced Video Coding
        Bit rate mode : Constant
        Bit rate : 1 500 Kbps
        Maximum bit rate : 3 961 Kbps
        Display aspect ratio : 4:3
        Frame rate mode : Constant
        Frame rate : 29.970 fps
        Color space : YUV
        Chroma subsampling : 4:2:0
        Bit depth : 8 bits
        Scan type : Progressive
        Bits/(Pixel*Frame) : 0.163

    And in the following case, where the bit rate mode is set to Variable, is the "Bit rate" field, shown as 309, its average bit rate? Media information displayed for the M4V (using the MediaInfo tool):

        ID : 1
        Format : AVC
        Format/Info : Advanced Video Codec
        Format profile : [email protected]
        Format settings, CABAC : No
        Format settings, ReFrames : 1 frame
        Codec ID : avc1
        Codec ID/Info : Advanced Video Coding
        Bit rate mode : Variable
        Bit rate : 309 Kbps
        Display aspect ratio : 16:9
        Frame rate mode : Variable
        Frame rate : 23.976 fps
        Minimum frame rate : 23.810 fps
        Maximum frame rate : 24.390 fps
        Color space : YUV
        Chroma subsampling : 4:2:0
        Bit depth : 8 bits
        Scan type : Progressive
        Bits/(Pixel*Frame) : 0.229
        Writing library : x264 core 120

    Read the article

  • Optimizing Solaris 11 SHA-1 on Intel Processors

    - by danx
    SHA-1 is a "hash" or "digest" operation that produces a 160 bit (20 byte) checksum value on arbitrary data, such as a file. It is intended to uniquely identify text and to verify it hasn't been modified. Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques. This code has been incorporated into Solaris 11 and is available in the Solaris Crypto Framework via libmd(3LIB), the industry-standard libpkcs11(3LIB) library, and the Solaris kernel module sha1. The optimized code is used automatically on systems with an x86 CPU supporting SSSE3 (Intel Supplemental SSSE3). Intel microprocessor architectures that support SSSE3 include the Nehalem, Westmere, and Sandy Bridge microprocessor families. Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge).

    Although SHA-1 is considered obsolete because of weaknesses found in the algorithm (NIST recommends using at least SHA-256), SHA-1 is still widely used and will be with us for a while longer. Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort. SHA-1 is used heavily in SSL/TLS, for example. And SHA-1 is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS.

    Optimizations Review

    SHA-1 operates by reading an arbitrary amount of data. The data is read in 512 bit (64 byte) blocks (the last block is padded in a specific way to ensure it's a full 64 bytes). Each 64 byte block has 80 "rounds" of calculations (consisting of a mixture of "ROTATE-LEFT", "AND", and "XOR") applied to it. Each round produces a 32-bit intermediate result, called W[i]. Here's how each round operates: The first 16 rounds, rounds 0 to 15, read the 512 bit block 32 bits at a time; those 32 bits are used as input to the round. The remaining rounds, rounds 16 to 79, use the results of previous rounds as input. Specifically, round i XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates the result left 1 bit. The remaining calculations for the round are a series of AND, XOR, and ROTATE-LEFT operations on the 32-bit input and some constants. The 32-bit result is saved as W[i] for round i. After the final round (round 79), the five 32-bit state words are combined into the 160-bit SHA-1 checksum.

    Optimization: Vectorization

    The first 16 rounds can be vectorized (computed in parallel) because they don't depend on the output of a previous round. As for the remaining rounds: because of step 2 above, computing round i depends on the result of round i-3, W[i-3], so one can vectorize 3 rounds at a time. Max Locktyukhin found, through simple factoring explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced instead with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing the intermediate result W[i] with:

        W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1

    initialize W[i] as follows:

        W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2

    That means 6 rounds can be vectorized at once, with no additional calculations, instead of just 3! This optimization is independent of Intel or any other microprocessor architecture (although the microprocessor has to support vectorization to use it), and it exploits one of the weaknesses of SHA-1.

    Optimization: SSSE3

    Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide. The 4 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], W[i-32], all fit in one %xmm register. The following code snippet, from Max Locktyukhin's article, converted to ATT assembly syntax, computes 4 rounds in parallel with just a dozen or so SSSE3 instructions:

        movdqa W_minus_04, W_TMP
        pxor W_minus_28, W              // W equals W[i-32:i-29] before XOR
                                        // W = W[i-32:i-29] ^ W[i-28:i-25]
        palignr $8, W_minus_08, W_TMP   // W_TMP = W[i-6:i-3], combined from
                                        // W[i-4:i-1] and W[i-8:i-5] vectors
        pxor W_minus_16, W              // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
        pxor W_TMP, W                   // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3])
        movdqa W, W_TMP                 // 4 dwords in W are rotated left by 2
        psrld $30, W                    // rotate left by 2: W = (W >> 30) | (W << 2)
        pslld $2, W_TMP
        por W, W_TMP
        movdqa W_TMP, W                 // four new W values W[i:i+3] are now calculated
        paddd (K_XMM), W_TMP            // adding 4 current round's values of K
        movdqa W_TMP, (WK(i))           // storing for downstream GPR instructions to read

    A window of the 32 previous results, W[i-1] to W[i-32], is saved in memory on the stack. This is best illustrated with a chart. Without vectorization, the rounds are computed one after another (each "R" represents 1 round of SHA-1 computation):

        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR

    With vectorization, 4 rounds can be computed in parallel:

        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR

    Optimization: AVX

    The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization. SSSE3 instructions have two operands, an input and an output. AVX allows three operands: two inputs and an output. In many cases two SSSE3 instructions can be combined into one AVX instruction. The difference is best illustrated with an example. Consider these two instructions from the snippet above:

        pxor W_minus_16, W              // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
        pxor W_TMP, W                   // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3])

    With AVX they can be combined into one instruction:

        vpxor W_minus_16, W, W_TMP      // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3])

    This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet. As an exercise for the reader: AVX also has 256-bit media registers, %ymm0 - %ymm15 (a superset of the 128-bit %xmm0 - %xmm15). Can the %ymm registers be used to parallelize the code even more?

    Optimization: Solaris-specific

    In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code:
    - Increased the digest(1) and mac(1) commands' buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1). This size is well suited for ZFS file systems, but helps for other file systems as well.
    - Optimized the encode functions, which byte-swap the input and output data, to copy/byte-swap 4 or 8 bytes at a time instead of 1 byte at a time.
    - Enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (mdb "$x" command). Previously they only displayed the first 8 that are available in 32-bit mode. Can't optimize if you can't debug :-).
    - Changed the SHA-1 code to allow processing in "chunks" greater than 2 Gigabytes (64-bits).

    Performance

    I measured performance on a Sun Ultra 27 (which has a Nehalem-class Xeon 5500 Intel W3570 microprocessor @3.2GHz). Turbo mode is disabled for consistent performance measurement. Graphs are better than words and numbers, so here is what they show:
    - The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB). I ran the digest command on a half-GByte file in swapfs (/tmp), and execution time decreased from 1.35 seconds to 0.98 seconds.
    - The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library. The operations are on a 128 byte buffer with 10,000 iterations. The results show operations increased from 320,000 to 416,000 operations per second.
    - Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module. The operations are on a 64 Kbyte buffer with 100 iterations. The results show that for 1 kernel thread, operations increased from 410 to 600 MBytes/second. For 8 kernel threads, operations increased from 1540 to 1940 MBytes/second.

    Availability

    This code is in Solaris 11 FCS. It is available in the 64-bit libmd(3LIB) library for 64-bit programs and in the Solaris kernel. You must be running hardware that supports Intel's SSSE3 instructions (for example, the Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures). The easiest way to determine if SSSE3 is available is with the isainfo(1) command. For example:

        nehalem $ isainfo -v
        64-bit amd64 applications
        sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
        32-bit i386 applications
        sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

    If the output also shows "avx", Solaris executes the even-more-optimized 3-operand AVX instructions for SHA-1 mentioned above:

        sandybridge $ isainfo -v
        64-bit amd64 applications
        avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
        32-bit i386 applications
        avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

    No special configuration or setup is needed to take advantage of this code. The Solaris libraries and kernel automatically determine whether they are running on an SSSE3- or AVX-capable machine and execute the correctly tuned code for that microprocessor.

    Summary

    The Solaris 11 Crypto Framework, via the sha1 kernel module and the libmd(3LIB) and libpkcs11(3LIB) libraries, incorporates a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors. As with other Solaris optimizations, it comes automatically "under the hood" with the current Solaris release.

    References
    - "Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010): the source for the SHA-1 optimizations used in Solaris.
    - "SHA-1", Wikipedia: a good overview of SHA-1.
    - FIPS 180-1, the SHA-1 standard (FIPS, 1995).
    - NIST Comments on Cryptanalytic Attacks on SHA-1 (2005, revised 2006).
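
    For readers who prefer code to assembly, here is a minimal scalar C sketch (not from the article) of the message-schedule identity described above. It builds the 80-word schedule for an arbitrary block both ways and checks that they agree; the factored form only applies from word 32 onward, so the first 32 words are computed with the original recurrence:

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        static uint32_t rotl(uint32_t x, unsigned n) {
            return (x << n) | (x >> (32 - n));
        }

        int main(void) {
            uint32_t w1[80], w2[80];

            /* W[0..15] are the 16 words of an (arbitrary) 512-bit input block. */
            for (int i = 0; i < 16; i++)
                w1[i] = w2[i] = (0x01234567u * (uint32_t)(i + 1)) ^ 0x89ABCDEFu;

            /* Original schedule: W[i] = (W[i-3] ^ W[i-8] ^ W[i-14] ^ W[i-16]) rol 1 */
            for (int i = 16; i < 80; i++)
                w1[i] = rotl(w1[i-3] ^ w1[i-8] ^ w1[i-14] ^ w1[i-16], 1);

            /* Factored schedule: same up to word 31, then
               W[i] = (W[i-6] ^ W[i-16] ^ W[i-28] ^ W[i-32]) rol 2,
               whose nearest dependency is 6 words back. */
            for (int i = 16; i < 32; i++)
                w2[i] = rotl(w2[i-3] ^ w2[i-8] ^ w2[i-14] ^ w2[i-16], 1);
            for (int i = 32; i < 80; i++)
                w2[i] = rotl(w2[i-6] ^ w2[i-16] ^ w2[i-28] ^ w2[i-32], 2);

            printf("schedules %s\n", memcmp(w1, w2, sizeof w1) == 0 ? "match" : "differ");
            return 0;
        }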

    Read the article

  • How to solve 'Connection refused' errors in SSH connection?

    - by frbry
    I have an Ubuntu Server 10.10 32-bit machine at home. I make SSH connections to it from my PC via PuTTY. The problem is, sometimes I'm able to log in seamlessly, but sometimes it gives me an error like this: Network error: Connection refused. Then, without changing anything, I try to log in a few more times, wait a while, and try again. Sometimes I can log in, sometimes I cannot. It seems pretty random to me. What can I do to solve this?

    Edit: And sometimes PuTTY gives a "Network error: Software caused connection abort" error after displaying the "login as:" text. Here is the ping -t output:

        Pinging 192.168.2.254 with 32 bytes of data:
        Reply from 192.168.2.254: bytes=32 time=6ms TTL=64
        Reply from 192.168.2.254: bytes=32 time=65ms TTL=6
        Reply from 192.168.2.254: bytes=32 time=88ms TTL=6
        Reply from 192.168.2.254: bytes=32 time=1ms TTL=64
        Reply from 192.168.2.254: bytes=32 time=3ms TTL=64
        Reply from 192.168.2.254: bytes=32 time=1ms TTL=64
        Reply from 192.168.2.254: bytes=32 time=1ms TTL=64
        Reply from 192.168.2.254: bytes=32 time=1ms TTL=64
        Reply from 192.168.2.254: bytes=32 time=1ms TTL=64

    Read the article

  • Can I delete the Generic kernel if I use Lowlatency?

    - by user206049
    I currently can't update my release, as there is not enough space on /boot. I have just the one kernel version there, but I seem to have both the Generic and the Lowlatency builds of it. uname -r just shows 3.8.0-32-lowlatency. ls -lah /boot shows:

        -rw-r--r-- 1 root root 899K Oct 2 00:00 abi-3.8.0-32-generic
        -rw-r--r-- 1 root root 899K Oct 7 09:27 abi-3.8.0-32-lowlatency
        -rw-r--r-- 1 root root 152K Oct 2 00:00 config-3.8.0-32-generic
        -rw-r--r-- 1 root root 152K Oct 7 09:27 config-3.8.0-32-lowlatency
        drwxr-xr-x 3 root root 2.0K Jan 1 1970 efi
        drwxr-xr-x 5 root root 1.0K Oct 22 10:05 grub
        -rw-r--r-- 1 root root 32M Oct 22 09:51 initrd.img-3.8.0-32-generic
        -rw-r--r-- 1 root root 32M Oct 22 10:05 initrd.img-3.8.0-32-lowlatency
        drwxr-xr-x 2 root root 12K Feb 25 2013 lost+found
        -rw-r--r-- 1 root root 173K Dec 5 2012 memtest86+.bin
        -rw-r--r-- 1 root root 175K Dec 5 2012 memtest86+_multiboot.bin
        -rw------- 1 root root 3.0M Oct 2 00:00 System.map-3.8.0-32-generic
        -rw------- 1 root root 3.0M Oct 7 09:27 System.map-3.8.0-32-lowlatency
        -rw------- 1 root root 5.2M Oct 2 00:00 vmlinuz-3.8.0-32-generic
        -rw------- 1 root root 5.2M Oct 7 09:27 vmlinuz-3.8.0-32-lowlatency

    So what can I do to allow me to update? Apparently I need 174M on /boot and am 40M short.

    Read the article

  • Can a 10-bit monitor connection preserve all tones in 8-bit sRGB gradients on a wide-gamut monitor?

    - by hjb981
    This question is about color management and the use of a higher color depth, 10 bits per channel (30 bits in total, resulting in 1.07 billion colors, or 1024 shades of gray, sometimes referred to as "deep color") compared to the standard of 8 bits per channel (24 bits in total, 16.7 million colors, 256 shades of gray, sometimes referred to as "true color"). Do not confuse with "32 bit color", which usually refers to standard 8 bit color with an extra channel ("alpha channel") for transparency (used to achieve effects like semi-transparent windows etc). The following can be assumed to be in place: 1: A wide-gamut monitor that supports 10-bit input. Further, it can be assumed that the monitor has been calibrated to its native gamut and that an ICC color profile has been created. 2: A graphics card that supports 10-bit output (and is connected to the monitor via DisplayPort). 3: Drivers for the graphics card that support 10-bit output. If applications that support 10-bit output and color profiles would be used, I would expect them to display images that were saved using different color spaces correctly. For example, both an sRGB and an adobeRGB image should be displayed correctly. If an sRGB image was saved using 8 bits per channel (almost always the case), then the 10-bit signal path would ensure that no tonal gradients were lost in the conversion from the sRGB of the image to the native color space of the monitor. For example: If the image contains a pixel that is pure red in 8 bits (255,0,0), the corresponding value in 10 bits would be (1023,0,0). However, since the monitor has a larger color space than sRGB, sending the signal (1023,0,0) to the monitor would result in a red that was too saturated. Therefore, according to the ICC color profile, the signal would be transformed into a different value with less red saturation, for example (987,0,0). Since there are still plenty of levels left between 0 and 987, all 256 values (0-255) for red in the sRGB color space of the file could be uniquely mapped to color-corrected 10-bit values in the monitor's native color space. However, if the conversion was done in 8 bits, (255,0,0) would be translated to (246,0,0), and there would now only be 247 available levels for the red channel instead of 256, degrading the displayed image quality. My question is: how does this work on Ubuntu? Let's say that I use Firefox (which is color-aware and uses ICC color profiles). Would I get 10-bit processing, thus preserving all levels of an 8-bit picture? What is the situation like for other applications, especially photo applications like Shotwell, Rawtherapee, Darktable, RawStudio, Photivo etc? Does Ubuntu differ from other operating systems (Linux and others) on this point?
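
    As an illustration of the quantization argument in the question (using its example figures of full red mapping to 246 in an 8-bit pipeline and to 987 in a 10-bit one; these are illustrative numbers, not real profile data), this small C sketch counts how many distinct output codes the 256 sRGB input levels survive as in each case. With the 8-bit pipeline fewer than 256 remain; with the 10-bit one all 256 do:

        #include <stdio.h>
        #include <stdbool.h>
        #include <math.h>

        /* Apply a gamut-correction scale to all 256 sRGB levels and count how
           many distinct output codes result at the given output precision. */
        static int distinct_levels(int out_max) {
            bool seen[1024] = { false };
            int count = 0;
            for (int in = 0; in < 256; in++) {
                int out = (int)lround(in * (out_max / 255.0));
                if (!seen[out]) { seen[out] = true; count++; }
            }
            return count;
        }

        int main(void) {
            printf("8-bit pipeline : %d distinct levels\n", distinct_levels(246));
            printf("10-bit pipeline: %d distinct levels\n", distinct_levels(987));
            return 0;
        }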

    Read the article

  • Running 32 bit assembly code on a 64 bit Linux & 64 bit Processor : Explain the anomaly.

    - by claws
    Hello, I'm in an interesting problem.I forgot I'm using 64bit machine & OS and wrote a 32 bit assembly code. I don't know how to write 64 bit code. This is the x86 32-bit assembly code for Gnu Assembler (AT&T syntax) on Linux. //hello.S #include <asm/unistd.h> #include <syscall.h> #define STDOUT 1 .data hellostr: .ascii "hello wolrd\n"; helloend: .text .globl _start _start: movl $(SYS_write) , %eax //ssize_t write(int fd, const void *buf, size_t count); movl $(STDOUT) , %ebx movl $hellostr , %ecx movl $(helloend-hellostr) , %edx int $0x80 movl $(SYS_exit), %eax //void _exit(int status); xorl %ebx, %ebx int $0x80 ret Now, This code should run fine on a 32bit processor & 32 bit OS right? As we know 64 bit processors are backward compatible with 32 bit processors. So, that also wouldn't be a problem. The problem arises because of differences in system calls & call mechanism in 64-bit OS & 32-bit OS. I don't know why but they changed the system call numbers between 32-bit linux & 64-bit linux. asm/unistd_32.h defines: #define __NR_write 4 #define __NR_exit 1 asm/unistd_64.h defines: #define __NR_write 1 #define __NR_exit 60 Anyway using Macros instead of direct numbers is paid off. Its ensuring correct system call numbers. when I assemble & link & run the program. $cpp hello.S hello.s //pre-processor $as hello.s -o hello.o //assemble $ld hello.o // linker : converting relocatable to executable Its not printing helloworld. In gdb its showing: Program exited with code 01. I don't know how to debug in gdb. using tutorial I tried to debug it and execute instruction by instruction checking registers at each step. its always showing me "program exited with 01". It would be great if some on could show me how to debug this. (gdb) break _start Note: breakpoint -10 also set at pc 0x4000b0. Breakpoint 8 at 0x4000b0 (gdb) start Function "main" not defined. Make breakpoint pending on future shared library load? (y or [n]) y Temporary breakpoint 9 (main) pending. Starting program: /home/claws/helloworld Program exited with code 01. (gdb) info breakpoints Num Type Disp Enb Address What 8 breakpoint keep y 0x00000000004000b0 <_start> 9 breakpoint del y <PENDING> main I tried running strace. This is its output: execve("./helloworld", ["./helloworld"], [/* 39 vars */]) = 0 write(0, NULL, 12 <unfinished ... exit status 1> Explain the parameters of write(0, NULL, 12) system call in the output of strace? What exactly is happening? I want to know the reason why exactly its exiting with exitstatus=1? Can some one please show me how to debug this program using gdb? Why did they change the system call numbers? Kindly change this program appropriately so that it can run correctly on this machine. EDIT: After reading Paul R's answer. I checked my files claws@claws-desktop:~$ file ./hello.o ./hello.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped claws@claws-desktop:~$ file ./hello ./hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped All of my questions still hold true. What exactly is happening in this case? Can someone please answer my questions and provide an x86-64 version of this code?
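
    One commonly cited explanation for this exact symptom (treat it as a hypothesis to verify, not a definitive answer) is that int $0x80 always enters the 32-bit system-call table, even from a 64-bit binary, while syscall.h in a 64-bit build supplies the 64-bit numbers: SYS_write (1 in the 64-bit table) is then dispatched as the 32-bit exit call, and since %ebx held 1 (the STDOUT value), the process exits with status 1; strace, decoding with 64-bit conventions, reads the arguments from %rdi/%rsi/%rdx, which is why it prints write(0, NULL, 12). A C sketch that sidesteps the mismatch by letting libc pair the numbers with the calling convention:

        /* hello_syscall.c - uses libc's syscall(2) wrapper so the SYS_* numbers
           and the calling convention always come as a matched pair. */
        #define _GNU_SOURCE
        #include <unistd.h>
        #include <sys/syscall.h>

        int main(void) {
            static const char msg[] = "hello world\n";
            syscall(SYS_write, STDOUT_FILENO, msg, sizeof msg - 1);
            syscall(SYS_exit, 0);
            return 0;   /* not reached */
        }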

    Read the article

  • Huh? JDK not found? (on Windows 7 64-bit)

    - by Android Eve
    I am setting up a development environment for the latest Android 2.3 on a fresh install of Windows 7 64-bit. I first installed the 64-bit JDK 6 (jdk-6u23-windows-x64.exe). Then I installed 64-bit Eclipse Classic 3.6 (eclipse-SDK-3.6.1-win32-x86_64.zip). Then I proceeded to install the Android SDK Starter Package: installer_r08-windows.exe. But... upon start it says: "Java SE Development Kit (JDK) not found." Why? I just installed it. Is this a mismatch between 32-bit and 64-bit? How do I solve this?

    Update (1): I tried setting the %JAVA_HOME% environment variable, as well as setting the Installed JREs in Eclipse, as suggested below. Neither of these solved the problem. It appears that I am not the only one experiencing the problem, as this thread suggests: http://stackoverflow.com/questions/1919340/android-sdk-setup-under-windows-7-pro-64-bit I wonder whether there is a 64-bit version of the Android SDK.

    Update (2): I used the zip version instead (android-sdk_r08-windows.zip), ran android.bat, updated all SDK packages, and installed the ADT plugin (8.0.1), not before having to check 'Contact all update sites during install to find required software'. We'll see how this goes...

    Update (3): It worked! (going to accept @bubu's answer shortly) -- but why doesn't the emulator include the HelloAndroid app when I run it (Ctrl+F11) from Eclipse?

    Read the article

  • Intel announces a 32-core server chip, part of its upcoming Knights product line

    Update of 02.06.2010 by Katleen: Intel announces a 32-core server chip, part of its upcoming Knights product line. Intel has just announced a server chip made up of 32 cores clocked at 1.2 GHz, built on an architecture that mixes x86 cores with other, specialized cores to address the specific needs of high-performance servers. Named Knights Ferry, this processor is "the fastest, able to process more than 500 gigaflops of data", according to its manufacturer. It marks the first steps of a server-oriented product line (Knights), which is based on a MIC (Many Integrated Cores) architecture. The proce...

    Read the article

  • ubuntu 13.04 upgrade to 64 bit

    - by harlie
    I have Ubuntu 13.04 dual-booting with MS Windows. It is a 32 bit version, but the PC is 64 bit. When I use the 64 bit install DVD it sees the two main partitions and gives several options, but I can't find a way to replace the 32 bit Ubuntu with the 64 bit version without chopping the hard drive into little pieces or formatting the whole drive. I don't want to do this, and I don't recognise any of the partitions shown when I go to the "do something else" menu.

    Read the article

  • 40% of BlackBerry owners would trade for an iPhone and 32% for a Nexus One, according to the latest Crowd Science survey

    40% of BlackBerry users would trade theirs for an iPhone and 32% for a Nexus One, according to the latest Crowd Science survey. A survey conducted by Crowd Science shows that 40% of BlackBerry users are ready to switch to the iPhone the next time they change smartphones, and 32% would trade theirs for the Nexus One. [IMG]http://djug.developpez.com/rsc/Blackberry-vs-n1-iphone.jpg[/IMG] According to John Martin, CEO of Crowd Science, these figures can be explained by the impatience of BlackBerry users, who have not seen their platform evolve since the release of the iPhone. The survey shows that 33% of iPhone owners and 16% of Bl...

    Read the article

  • Find Window At Location Using Carbon And Carbon Problems In 64-Bit Applications

    - by JxXx
    As I said in some questions today, I'm looking for a way to get window or windowPart references at a certain location. Although I know I could use Cocoa for this purpose (I don't know how to do it yet), I prefer (and probably need) to do this using Carbon, because the entire application that needs this functionality is written in C++, but I've found many problems trying it. Does anyone get a valid windowPtr or windowRef using one of the following functions: FindWindow, MacFindWindow, HIWindowFindAtLocation or FindWindowOfClass? I always get 0 as the windowRef or windowPtr that I'm looking for. What am I doing wrong? Any ideas? Is it true that if you want to create a 64-bit application for Mac OS X, you now need to use Cocoa to implement its user interface, because some APIs commonly used by Carbon applications are not available in 64-bit applications? Thank you. JxXx

    Read the article

  • Java playback of 24 bit audio is incorrect

    - by Paul Hampson
    I am using the javax sound API to implement a simple console playback program based on http://www.jsresources.org/examples/AudioPlayer.html. Having tested it using a 24 bit ramp file (each sample is the last sample plus 1 over the full 24 bit range) it is evident that something odd is happening during playback. The recorded output is not the contents of the file (I have a digital loopback to verify this). It seems to be misinterpreting the samples in some way that causes the left channel to look like it is having some gain applied to it and the right channel looks like it is being attenuated. I have looked into whether the PAN and BALANCE controls need setting but these aren't available and I have checked the windows xp sound system settings. Any other form of playback of this ramp file is fine. If I do the same test with a 16bit file it performs correctly with no corruption of the stream. So does anyone have any idea why the Java Sound API is modifying my audio stream?

    Read the article

  • Standard (cross-platform) way for bit manipulation

    - by Kiril Kirov
    As there are different binary representations of numbers (for example, big/little endian), is this cross-platform?

        some_unsigned_type variable = some_number;
        // set the n-th bit, starting from 1,
        // right-to-left (least significant to most significant)
        variable |= ( 1 << ( n - 1 ) );
        // clear the same bit:
        variable &= ~( 1 << ( n - 1 ) );

    In other words, does the compiler always take care of the different binary representations of unsigned numbers, or is it platform-specific? And what if variable is a signed integral type (for example, int) and its value is zero, positive or negative? What does the Standard say about this? P.S. And yes, I'm interested in both - C and C++; please don't tell me they are different languages, because I know this :) I can paste a real example if needed, but the post would become too long.
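
    The shifts and masks here operate on integer values, not on their in-memory byte layout, so endianness does not change the result. The portability risks are different: shifting by an amount greater than or equal to the type's width, and shifting into the sign bit of a signed type, are undefined in both C and C++. A small sketch of the usual unsigned helpers (keeping the question's 1-based, right-to-left numbering):

        #include <stdio.h>

        /* Unsigned operand keeps the shifts well defined. */
        #define BIT(n)          (1u << ((n) - 1))
        #define SET_BIT(v, n)   ((v) |=  BIT(n))
        #define CLEAR_BIT(v, n) ((v) &= ~BIT(n))
        #define TEST_BIT(v, n)  (((v) & BIT(n)) != 0)

        int main(void) {
            unsigned x = 0;
            SET_BIT(x, 3);                           /* x == 0b100 == 4 */
            printf("%u %d\n", x, TEST_BIT(x, 3));    /* 4 1 */
            CLEAR_BIT(x, 3);                         /* back to 0 */
            printf("%u\n", x);
            return 0;
        }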

    Read the article

  • Bit conversion operations in PHP

    - by Goro
    Hello, I find myself in need of performing bit-level conversion on variables in PHP. In more detail, I have a bit stream that is read as an integer by hardware, and I need to do some operations on the bits to turn it into what it's actually supposed to be (a float). I have to do this a few times for different formats, and the functionality I need is: being able to select and move individual bits in a variable, and being able to statically cast one type of variable to the other (i.e. int to float). I know PHP natively supports bitwise AND, OR, etc., and shift operations, but I was wondering whether there may already be a library in PHP that does this sort of thing, or whether I would be better off delegating the calculations to some other language. Thanks,
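
    The reinterpretation step itself (raw 32 bits -> IEEE-754 float) is what the pack/unpack pairing mentioned in the question accomplishes in PHP; for concreteness, here is the same idea as a minimal C sketch using memcpy, with a well-known bit pattern as the test value:

        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>

        /* Reinterpret 32 raw bits as an IEEE-754 float. memcpy is the
           well-defined way to do this in C; the integer's byte order must
           already match the order the float was written in. */
        static float bits_to_float(uint32_t bits) {
            float f;
            memcpy(&f, &bits, sizeof f);
            return f;
        }

        int main(void) {
            uint32_t raw = 0x40490FDBu;           /* bit pattern of pi as a float */
            printf("%f\n", bits_to_float(raw));   /* prints 3.141593 */
            return 0;
        }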

    Read the article

  • How to produce 64 bit masks?

    - by egiakoum1984
    Based on the following simple program, the bitwise left shift operator seems to work only on 32 bits. Is that true?

        #include <iostream>
        #include <stdlib.h>
        using namespace std;

        int main(void)
        {
            long long currentTrafficTypeValueDec;
            int input;
            cout << "Enter input:" << endl;
            cin >> input;
            currentTrafficTypeValueDec = 1 << (input - 1);
            cout << currentTrafficTypeValueDec << endl;
            cout << (1 << (input - 1)) << endl;
            return 0;
        }

    The output of the program:

        Enter input: 30
        536870912
        536870912
        Enter input: 62
        536870912
        536870912

    How could I produce 64-bit masks?
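
    The shift itself is the problem: the literal 1 is an int, so 1 << (input - 1) is evaluated at int width before it is ever assigned to the long long, and shifting an int by 32 or more is undefined. Making the left operand 64 bits wide first fixes it; a minimal sketch in C:

        #include <stdio.h>
        #include <stdint.h>

        int main(void) {
            int input = 62;

            /* Use a 64-bit left operand so the shift happens in 64 bits. */
            uint64_t mask = UINT64_C(1) << (input - 1);   /* or 1ULL << (input - 1) */

            printf("%llu\n", (unsigned long long)mask);   /* 2305843009213693952 */
            return 0;
        }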

    Read the article

  • What's up with this reversing bit order function?

    - by MattyW
    I'm rather ashamed to admit that I don't know as much about bits and bit manipulation as I probably should. I tried to fix that this weekend by writing some 'reverse the order of bits' and 'count the ON bits' functions. I took an example from here, but when I implemented it as below I found I had to loop while < 29. If I loop while < 32 (as in the example), then when I try to print the integer (using a printBits function I've written) I seem to be missing the first 3 bits. This makes no sense to me; can someone help me out?

        int reverse(int n)
        {
            int r = 0;
            int i = 0;
            for(i = 0; i < 29; i++)
            {
                r = (r << 1) + (n & 1);
                n >>=1;
            }
            return r;
        }
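
    For comparison, here is a sketch that mirrors the question's loop but uses an unsigned type and all 32 iterations; with unsigned there is no sign bit to worry about when shifting left into the top bit or when shifting right (it is independent of the printBits function mentioned in the question):

        #include <stdio.h>

        /* Reverse all 32 bits of an unsigned value. */
        unsigned reverse32(unsigned n) {
            unsigned r = 0;
            for (int i = 0; i < 32; i++) {
                r = (r << 1) | (n & 1u);   /* shift result up, append lowest bit of n */
                n >>= 1;
            }
            return r;
        }

        int main(void) {
            unsigned v = 0x00000001u;
            printf("0x%08X\n", reverse32(v));   /* 0x80000000 */
            return 0;
        }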

    Read the article
