Search Results

Search found 259224 results on 10369 pages for 'hardware enablement stack'.


  • CPU Usage at 100% with "Hardware Interrupts"

    - by eventualEntropy
    After turning on my desktop one day, I found that my CPU usage was maxed out at 100%, with 99% of that going to hardware "Interrupts". I tried enabling/disabling all my devices one by one through the Device Manager, and found that I could get the CPU usage attributed to interrupts down to 50% by disabling all devices labelled "USB Host Controller" (except the ones for the mouse/keyboard). I found that I also got 10-20% more from disabling "High Definition Audio Controller". Following the tutorial at http://www.msfn.org/board/topic/140263-how-to-get-the-cause-of-high-cpu-usage-by-dpc-interrupt/ led me to similar conclusions (that is, that the culprit is mostly the "USB Host Controller"). I've tried updating my ASUS motherboard drivers and my video card driver. This is on Windows 7 64-bit. I've spent hours trying to figure this out and I'm running out of ideas short of formatting (which might still not fix it!).

    Read the article

  • Virtual server hardware to simulate 3-4 node web farm

    - by frankadelic
    I would like to get a dedicated server to run VMware, VirtualBox, or similar. On this box, I would like to host 3-4 virtual instances of Linux to act as nodes in a web farm. Performance is not that important; this would only be for testing and experimenting. I need something sub-$1,000 (including tax/shipping). Can someone recommend a pre-built server that would do the trick? I am pretty ignorant of hardware, so building one is not going to work for me. Also, would I need multiple network cards to simulate a web farm, or can the virtualization software handle that for me? Thanks

    Read the article

  • How to Diagnose a Pre-Operating System Load or Hardware Issue

    - by soandos
    How can I find out if my problem is hardware-based? If it is, how can I figure out which component is to blame? How can I fix other pre-operating-system issues? As an aside, what are all of these components responsible for, and if they break, what can go wrong? (This question comes up frequently, and the suggested solutions are usually the same. This community wiki is an attempt to serve as the definitive, most comprehensive answer possible. Feel free to add your contributions via edits.)

    Read the article

  • Hardware asset management systems

    - by Dave
    I need to track a bunch of specialized testing tools. They are hardware devices used for testing other equipment. Each device has a serial number and is sent out for use in testing. Occasionally they break and have to be sent to the manufacturer for repair. I'm looking for an open-source application (preferably a web app) to help manage them. Right now we're using Excel, and it's not scaling as we get more tools. They aren't computers, so all the standard IT asset management systems don't really fit the bill. I found h-harmony, but that project seems to be dead.

    Read the article

  • VirtualBox won't use any kind of hardware acceleration

    - by burnersk
    I'm seeing a problem with VirtualBox's hardware acceleration functionality. My system configuration: MSI PH67A-C43 (B3) (BIOS 2.70), Intel Core i5-2400, Windows 7 Professional SP1 64-bit, Oracle VirtualBox 4.1.10, Oracle VirtualBox Extension Pack 4.1.10. I can select the individual acceleration options such as PAE/NX, VT-x/AMD-V or Nested Paging within the VM settings, but when I start the VM the accelerations are reported as disabled in the CPU status tooltip (right next to shared folders). Each acceleration shows "Disabled". Does this sound familiar to anybody? How can I solve this?

    Read the article

  • Rules to choose hardware for OLTP systems (sql server)

    - by Roman Pokrovskij
    OK. We know the database size, the number of concurrent users and the number of transactions per minute, and we have to choose the number of processors, the RAID level, the RAM, mirroring and clustering. There is no exact rule... but maybe there are no rules at all? In my practice, in every case I have a "legacy" system, and after some inspection and interviews I can form an opinion on how the hardware and design can be improved. But every time I meet an "absolutely" new system (I guess there are no truly new systems, but such tasks do come up), I can't say anything trustworthy. So I'm interested in how people deal with such tasks: do they map the task onto their experience, or do they have some base formulas?
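
    One rule-of-thumb approach is to turn transactions per minute into an I/O budget and size the spindle count (and, by the same pattern, RAM and CPU) from that. The sketch below is only a back-of-envelope heuristic, not anyone's formal sizing method; the I/O-per-transaction, cache-hit and per-disk IOPS figures are assumptions you would replace with measurements from your own workload:

        import math

        # Assumed workload figures -- replace with measured values.
        tpm = 30000              # transactions per minute
        ios_per_txn = 4          # logical I/Os per transaction (assumption)
        read_ratio = 0.7         # fraction of I/O that is reads (assumption)
        cache_hit = 0.9          # fraction of reads served from RAM (assumption)
        disk_iops = 150          # IOPS one 15k spindle sustains (assumption)
        raid_write_penalty = 2   # RAID 1/10 writes hit two disks; RAID 5 would be ~4

        iops = tpm / 60.0 * ios_per_txn
        reads = iops * read_ratio * (1 - cache_hit)   # reads that miss the buffer cache
        writes = iops * (1 - read_ratio)
        backend_iops = reads + writes * raid_write_penalty
        print("backend IOPS:", round(backend_iops))
        print("disks needed:", math.ceil(backend_iops / disk_iops))

    The shape of the calculation (requirement divided by per-unit capacity, plus a safety margin) is the useful part; the constants are always workload-specific.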

    Read the article

  • Server 2008 R2 Datacenter (and all other version) not detecting hardware

    - by Mitchell Skurnik
    I upgraded my ASUS KFN32-D SLI/SAS motherboard to the latest BIOS version and swapped out the 2 dual-core procs with 2 quad core procs. After installing Server 2008 Datacenter onto the system I noticed that it was not connected to the network. I open device manager and see no network adapters. I have to go up to View - Show hidden devices to even see the virtual WAN Miniport adapters. This problem happens in Vista and Windows 7 (all x64). What is even stranger is it all works in x86 mode. What in x64 could be causing windows to not even detect hardware that works in x86? Is the boot manager possibly the culprit?

    Read the article

  • Windows 2008 Server on VMWare (hardware)

    - by Bill
    I want to set up a single server to run a few virtual servers for our datacenter. I do not have a lot of money to spend, so I am trying to get the most bang for the buck. My budget is around $2,000. So I was thinking about building the following as the VMware physical server:
    - Intel Core i7-950 (LGA1366, 4 cores, 8 threads)
    - Gigabyte GA-X58-USB3 LGA 1366 X58 ATX Intel motherboard
    - 24 GB of Viper II Series, Sector 7 Edition, Extreme Performance DDR3-1600 (PC3-12800) CL9 triple-channel memory
    - VelociRaptor 300GB 10,000 RPM SATA 3.0Gb/s 3.5" internal hard drive
    I am planning on running the newest version of VMware ESXi (64-bit). On this I am planning on running a few various servers:
    - Windows 2008 Server R2 w/ IIS (several custom-built ASP.NET apps)
    - Windows 2008 Server R2 w/ MS SQL 2008 database server
    - Linux web server w/ several WordPress blogs (XAMPP?)
    - Windows 2008 Server R2 w/ IIS (DEV ENVIRONMENT)
    - Windows 2008 Server R2 w/ MS SQL 2008 database server (DEV ENVIRONMENT)
    In your opinion, will this hardware be sufficient to run the above load with room for possibly 2-3 more virtual machines (probably lightweight web servers)?
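
    Whether 24 GB is enough mostly comes down to adding up the RAM you plan to give each guest and leaving headroom for the hypervisor. The per-VM figures below are illustrative assumptions, not recommendations:

        # Hypothetical memory plan for the VMs listed above (all sizes in GB are assumptions).
        vms = {
            "web-prod": 4, "sql-prod": 6, "wordpress": 2,
            "web-dev": 2, "sql-dev": 4,
        }
        future_vms = 3 * 2        # 2-3 more lightweight web servers at ~2 GB each
        hypervisor_overhead = 2   # reserved for ESXi itself (assumption)

        total = sum(vms.values()) + future_vms + hypervisor_overhead
        print("planned allocation: %d GB of 24 GB" % total)

    If the total creeps past physical RAM you are relying on overcommit and ballooning, which is usually tolerable for the dev guests but risky for the SQL Server ones.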

    Read the article

  • Hardware specs for web cache

    - by Raj
    I am looking for recommendations for hardware specs for a server that needs to be a web cache for a user population of about 2,000 concurrent connections. The clients are viewing segmented HTTP video at bitrates ranging from 150 Kbps to 2 Mbps. Most video is "live", meaning segments of 2-10 seconds each, of which 100 or so are maintained at a time. There are also some pre-recorded, fixed-length videos. How would I go about doing the provisioning calculation for such a server: what kind of HDD (SSD?), how many NICs, how much RAM, etc.? I am thinking of using Varnish on Linux, all the RAM I can get my hands on, and 2 CPUs with 6-8 cores each.
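
    A rough way to start the provisioning calculation is to work out the peak egress bandwidth and the size of the "hot" working set. Everything below is a back-of-envelope sketch with assumed averages (average bitrate, number of channels and renditions), not a measured profile:

        concurrent = 2000
        avg_bitrate_mbps = 1.0          # assumed average across 150 Kbps - 2 Mbps clients
        egress_gbps = concurrent * avg_bitrate_mbps / 1000.0

        # live working set: ~100 segments of ~10 s kept per rendition
        channels = 20                   # assumed number of live channels
        renditions = 6                  # assumed bitrate variants per channel
        seg_seconds, segs_kept = 10, 100
        seg_mb = 2.0 * seg_seconds / 8  # one segment at the top (2 Mbps) bitrate, in MB
        hot_set_gb = channels * renditions * segs_kept * seg_mb / 1024

        print("peak egress: %.1f Gbit/s -> size the NICs for this" % egress_gbps)
        print("hot set: ~%.0f GB -> ideally fits in RAM for Varnish" % hot_set_gb)

    If the hot set fits in RAM, the disk subsystem matters much less; SSDs mainly become interesting for the long tail of pre-recorded videos.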

    Read the article

  • Does programmable hardware exist to allow hardware to be programmed by computers?

    - by agentbanks217
    I am a programmer and I have never really dealt with the hardware side of anything, only software. I want to start building things that I can control from my computer using programming. My question is: are there devices on the market that have a programmable interface or API? For example, I want to build an automated window-blinds opening/closing device, and I would like to be able to control it from my computer, e.g. by writing an app or some code to schedule when they open and close. I would like to know if there are any devices that can be programmed to do that (the computer part). Thanks!
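
    Yes: the usual entry point is a microcontroller board (an Arduino-style board, for example) that shows up as a USB serial port; you define a small command protocol on the device and drive it from any language. The sketch below is hypothetical: the port name, baud rate and the OPEN/CLOSE commands are made up for illustration, and it assumes pyserial is installed and the firmware on the board understands those commands.

        import time
        import serial  # pyserial (pip install pyserial)

        PORT = "/dev/ttyUSB0"   # or "COM3" on Windows -- assumption

        def send(cmd):
            # open the port, send one command line, read one reply line
            with serial.Serial(PORT, 9600, timeout=2) as ser:
                time.sleep(2)                  # many boards reset when the port opens
                ser.write((cmd + "\n").encode())
                return ser.readline().decode().strip()

        # naive scheduler: open the blinds at 07:30, close them at 21:00
        while True:
            now = time.strftime("%H:%M")
            if now == "07:30":
                print(send("OPEN"))
            elif now == "21:00":
                print(send("CLOSE"))
            time.sleep(60)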

    Read the article

  • ext4 filesystem corruption -- maybe hardware error?

    - by pts
    I'm getting these errors in dmesg about half an hour after I turn on the computer:

        [ 1355.677957] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318420: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251700offset=0(0), inode=1802725748, rec_len=179136, name_len=32
        [ 1355.677973] Aborting journal on device sda2-8.
        [ 1355.678101] EXT4-fs (sda2): Remounting filesystem read-only
        [ 1355.690144] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318416: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251699offset=0(0), inode=2194783952, rec_len=53280, name_len=152
        [ 1356.864720] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1312795: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251176offset=1460(13748), inode=1432317541, rec_len=208208, name_len=119

    /dev/sda is an SSD, and it's using the noop scheduler. /etc/fstab entry:

        UUID=acb4eefa-48ff-4ee1-bb5f-2dccce7d011f / ext4 errors=remount-ro,noatime,discard,user_xattr 0 1

    System information:

        $ cat /proc/mounts | grep /dev/sd
        /dev/sda1 /boot ext2 rw,noatime,errors=continue 0 0
        $ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04.3 LTS"
        $ uname -a
        Linux leetpad 2.6.35-30-generic-pae #61~lucid1-Ubuntu SMP Thu Oct 13 21:14:29 UTC 2011 i686 GNU/Linux

    I've run memtest for 7 hours; it didn't find any memory errors. Any obvious ideas about what could be going wrong in this case? The most reasonable thing I can imagine is that the SSD is silently dropping some write requests, which eventually leads to an EXT4 filesystem inconsistency (but no disk I/O errors). How can this happen? Is there a relevant configuration option I should make sure is set correctly? What tools should I use to diagnose the hardware failures? Would it be possible to diagnose the SSD failure without overwriting data?

    Read the article

  • linux hardware raid 10 / lvm / virtual machine partition alignment and filesystem optimization

    - by Jason Ward
    I've been reading everything I can find about partition alignment and filesystem optimization (ext4 and xfs) but still don't know enough to be confident in setting up my current configuration. My remaining confusion comes from the LVM layer and whether I should use RAID parameters on the filesystems in the guest OSes. My main questions are:
    1. When I use 'pvcreate --dataalignment', do I use the stripe-width as calculated for a filesystem on RAID (128kB for ext4 in my situation), the stripe size of the RAID set (256kB), something else altogether, or do I not need this at all?
    2. When I create ext2/3/4 or xfs filesystems in guests on the Logical Volumes, should I add the settings for the underlying RAID (e.g. mkfs.ext4 -b 4096 -E stride=64,stripe-width=128)?
    3. Does anyone see any glaring errors in my setup below?
    I'm running some benchmarks now but haven't done enough to start comparing results. I have four drives in RAID 10 on a 3ware 9750-4i controller (more details on the settings below), giving me a 6.0TB device at /dev/sda. Here is my partition table:

        Model: LSI 9750-4i DISK (scsi)
        Disk /dev/sda: 5722024MiB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt

        Number  Start      End         Size        File system     Name      Flags
         1      1.00MiB    257MiB      256MiB      ext4            BOOTPART  boot
         2      257MiB     4353MiB     4096MiB     linux-swap(v1)
         3      4353MiB    266497MiB   262144MiB   ext4
         4      266497MiB  4460801MiB  4194304MiB

    Partition 1 is to be the /boot partition for my Xen host. Partition 2 is swap. Partition 3 is to be the root (/) for my Xen host. Partition 4 is to be the (only) physical volume used by LVM (for those who are counting, I left about 1.2TB unallocated for now). For my Xen guests, I usually create a Logical Volume of the needed size and present it to the guests for them to partition as needed. I know there are other ways of handling that, but this method works best for my situation. Here's the hardware of interest on my CentOS 6.3 Xen host:
    - 4x Seagate Barracuda 3TB ST3000DM001 drives (sector size: 512 logical / 4096 physical)
    - 3ware 9750-4i w/BBU (sector size reported: 512 logical / 512 physical)
    - All four drives make up a RAID 10 array. Stripe: 256kB. Write cache: enabled. Read cache: intelligent. StoreSave: Balance.
    Thanks!
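
    For what it's worth, the stride/stripe-width numbers quoted above can be derived mechanically from the stripe (chunk) size and the number of data-bearing disks. The little sketch below just shows that arithmetic; the mkfs/pvcreate invocations it prints mirror the ones already mentioned in the question and are illustrative starting points, not a verified recipe:

        stripe_kib = 256          # RAID stripe (chunk) size on the 3ware controller
        block_kib = 4             # ext4 block size (4096 bytes)
        data_disks = 2            # RAID 10 over 4 drives -> 2 data-bearing disks

        stride = stripe_kib // block_kib            # 64 filesystem blocks
        stripe_width = stride * data_disks          # 128 filesystem blocks
        full_stripe_kib = stripe_kib * data_disks   # 512 KiB

        print("mkfs.ext4 -b 4096 -E stride=%d,stripe-width=%d ..." % (stride, stripe_width))
        print("pvcreate --dataalignment %dk ..." % full_stripe_kib)  # align PV data to a full stripe

    Aligning the PV to the full stripe (rather than to the single-disk chunk or the filesystem stripe-width) is a common choice, but treat it as something to benchmark rather than gospel.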

    Read the article

  • Screen Flickering: Hardware or Software?

    - by Wesley
    I have a Samsung N120 netbook (upgraded to 2GB DDR2 RAM) and there has been a screen flickering issue for some time now. However, I have not been able to accurately determine whether it is a software or hardware issue. Here are some of the symptoms: The flicker is white-colored and shows up as vertical lines. Flickering or not, there may be occasionally some random blue patterns (no image distortion) The screen tends to flicker more when the screen is not tilted back all the way. When tilting the screen back and forth, the screen will usually flicker. Some images on the screen may randomly distort without full-on flickering. The screen will flicker only on certain websites, but not on others. A certain part of a webpage may constantly be distorted randomly, even when scrolling. While flickering, the mouse will not move though I'm moving my finger along the touchpad. A connected external monitor does not have any problems. The flickering is completely random and does not seem to follow any CPU/GPU usage trends. Flickering usually gets worse when the screen brightness is turned higher. There will be flickering on battery and while plugged in. Search up "Samsung N120 - Screen Flickering" on YouTube for an idea of what the flickering looks like. However, there is no visible distortions and the flickering seems to stop when the screen has dimmed. Since the problems started, I tried formatting and using Windows 7, then formatted again and went back to Windows XP. The screen was also replaced sometime during this past summer. The uninstallation of the Samsung Battery Manager (on the original install of XP) seemed to reduce the flicker partially, but eventually got worse. So, what could possibly be the problem?

    Read the article

  • Piecing together low-powered hardware for an RS-232 terminal server

    - by Fred
    I'm working on reconstructing my Cisco lab for training/educational purposes and I found that the actual terminal server I have is dead. I have a couple of 8-port PCI serial cards, which would be more than ample for my lab, but I don't want to leave my personal computer running just to be able to access the console ports. Ideally I would access the terminal server remotely, either by SSH/RDP to the box (depending on what OS I go with) or by installing a software package that allows me to telnet directly to a serial port. I know I've found a program that does this under Linux in the past, but its name escapes me at the moment. I'm thinking about scavenging for some old hardware, on eBay or something, to put together a low-powered PC. It needs to be something that:
    - has low power consumption,
    - has at least 2 PCI slots (though I certainly wouldn't complain about having more),
    - has onboard Ethernet (or, if not, another PCI or ISA slot (not shared)),
    - can run headless once an OS is installed (probably Linux).
    I'm currently leaning towards an old-fashioned Pentium (sub-133 MHz era), but I am wondering if anybody else knows of another platform/mobo that would suit these needs. Alternatively, I've been considering buying a Raspberry Pi and a big USB hub along with a bunch of USB-serial adapters, but this sounds like it would get messy quickly with cables and adapters all over the place, and I may not even have the same ttyS#'s between boots.

    Read the article

  • Inexpensive (used) hardware for Xen virtualization test?

    - by Jason Antman
    Virtualization is one of the areas where I could really use some experience. I also run quite a few services (web, mail, dns, etc.) out of my home. Since most of my hardware is getting a bit old (I'm running on stuff that was surplused years ago...) I decided that it's about time I start renewing some things, and also play around with virtualization a bit more. My plan is to setup a SAN box (simple iSCSI target, relatively inexpensive gigE switch), get a pair (for starters) of new servers, and start building some new stuff with Xen, specifically planning on playing with live migration and full virtualization. Does anyone have recommendations for used, older "servers" (really anything in a rack-mount form factor, I'm not too worried about things like iLO/iLOM for the test nodes) that support VT-x/AMD-V? I'm biased to HP, but it looks like they didn't make Proliants with VT-x/Vanderpool processors until G6 (for the DL360) or so, which is way out of my price range. I'm looking in the sub-$300 range (or less, if possible), used, probably Ebay. Any recommendations are greatly appreciated. Edit:And, to catch this before the comments start coming - these are personal systems. I have first-generation Proliants still in use (I got them as corporate surplus in 05, they've been running since then, and probably were running since 01 or 02 prior to being sold). I don't need anything shiny and new - I've got a bunch of old boxes, at least one complete replacement for every model in use, and that's fine for me (and easy on the wallet).

    Read the article

  • Reliable way of generating unique hardware ID

    - by mr.b
    Question: what's the best way to accomplish the following? I have to come up with a unique ID for each networked client, such that:
    - it should persist once the client software is installed on the target computer, and should continue to persist if the software is re-installed on the same computer and the same OS installation,
    - it should not change if the hardware configuration is modified in most ways (except changing the motherboard),
    - when a hard drive with the client software installed is cloned to another computer with an identical (or as similar as possible) hardware configuration, the client software should be aware of that change.
    A little bit of explanation and some back-story: this is basically an age-old question that also touches on the topic of software copy protection, since some of the mechanisms used in that area are mentioned here. I should be clear at this point that I'm not looking for a copy-protection scheme. Please, read on. :) I'm working on client-server software that is supposed to work in a local network. One of the problems I have to solve is to identify each unique client in the network (not so much of a problem), so that I can apply certain attributes to every specific client and retain and enforce those attributes during the deployment lifetime of that client. While I was looking for a solution, I was aware of the following:
    - the Windows activation system uses some kind of heavy fingerprinting mechanism that is extremely sensitive to hardware modifications,
    - disk-imaging software copies along all Volume IDs (tied to each partition when formatted) as well as any custom, uniquely generated ID created during installation, during first run, or in any other way that is strictly software in nature and stored in the registry or on the hard drive, so it's very easy to confuse the two.
    The obvious choice for this kind of problem would be to read the BIOS identifiers (not 100% sure whether these are unique across identical motherboard models, though), as that's the only thing I can rely on that isn't duplicated or transferred by cloning and that can't be changed (at least not by a user-space program). Everything else fails as either unreliable (MAC cloning, anyone?) or too demanding (in the sense that it's too sensitive to configuration changes). Am I missing something obvious here? A sub-question I'd like to ask is: am I doing this correctly, architecture-wise? Perhaps there is a better tool for the task I have to accomplish... Another approach I had in mind is something similar to a handshake mechanism, where the server maintains an internal lookup table of connected client IDs (which can even be completely software-based and non-unique at any given moment) and tells a client to come up with a different ID during the handshake if a duplicate ID is provided upon connection. That approach, unfortunately, doesn't play nicely with the requirement to tie attributes to a specific client over its lifetime.
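
    In practice the SMBIOS/DMI identifiers (system UUID, baseboard serial) are about the closest thing to a motherboard-tied ID that survives reinstalls and cloning. Below is a hedged sketch of reading them: it shells out to wmic on Windows and dmidecode on Linux (both real tools, though wmic is deprecated on recent Windows and dmidecode needs root), and some boards ship blank or all-FF values, so a fallback is still needed.

        import platform
        import subprocess

        def smbios_uuid():
            """Best-effort read of the SMBIOS system UUID (may be blank on some boards)."""
            try:
                if platform.system() == "Windows":
                    out = subprocess.check_output(["wmic", "csproduct", "get", "UUID"], text=True)
                    lines = [l.strip() for l in out.splitlines() if l.strip()]
                    return lines[1] if len(lines) > 1 else None
                else:
                    return subprocess.check_output(
                        ["dmidecode", "-s", "system-uuid"], text=True).strip()
            except (OSError, subprocess.CalledProcessError):
                return None

        print(smbios_uuid())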

    Read the article

  • What is a hardware-id?

    - by Rob
    Some forums that I regularly visit sell premium programs, and to prevent them from being leaked they use hardware-id authentication. That is, first they send you a program to run to grab your HWID, you tell them your HWID, they store it in a database, then they send you the actual program. If your HWID isn't in the database, the program won't run. So what is Hardware-ID, and how is it generated? Why is it that my HWID is different depending on the programmer that sends me a HWID-grabber?
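
    There is no single standard "hardware ID"; each grabber typically collects whatever component identifiers its author chose (CPU description, MAC address, volume serial, BIOS serial, and so on) and hashes them together, which is exactly why different tools report different values for the same machine. A toy illustration follows; the component selection here is arbitrary, not any particular vendor's scheme:

        import hashlib
        import platform
        import uuid

        # Pick some machine-specific strings (each HWID tool picks its own set).
        components = [
            platform.node(),                 # hostname
            platform.machine(),              # architecture
            platform.processor(),            # CPU description
            format(uuid.getnode(), "x"),     # a MAC address
        ]
        hwid = hashlib.sha1("|".join(components).encode()).hexdigest()[:16].upper()
        print(hwid)   # changes whenever any chosen component changes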

    Read the article

  • Simple hardware RNG

    - by roygbiv
    I made a tongue-in-cheek comment on this question about making a hardware RNG. Does anyone know of any simple plans, or can anyone describe a simple hardware-based RNG and the software to drive it? Go to Radio Shack. Buy a diode, an NTR resistor, a capacitor and a serial cable. Cut off the end of the serial cable that does not fit on your computer. Solder the diode and resistor in series between the DTR and DSR pins of the cable. Solder the capacitor between the DSR and TXD pins. Write a small C program to do the following:
    1. Set DTR to 1.
    2. Start a timer.
    3. Monitor DSR until it goes to 1.
    4. Stop the timer.
    5. Calculate the resistance from the elapsed time.
    6. Retrieve several bits from that value to use as part of a random number.
    7. Repeat until enough bits have accumulated.
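
    A rough software side for the loop described above, written in Python with pyserial rather than C purely for brevity; the port name and timing thresholds are assumptions, and the pin behaviour will depend on the exact RC values soldered in:

        import time
        import serial  # pyserial 3.x

        ser = serial.Serial("/dev/ttyS0")   # assumption: first serial port

        def one_sample():
            ser.dtr = False                 # let the capacitor discharge
            time.sleep(0.05)
            ser.dtr = True                  # start charging through diode + resistor
            start = time.perf_counter()
            while not ser.dsr:              # wait for DSR to read as 1
                if time.perf_counter() - start > 1.0:   # give up after 1 s
                    break
            return time.perf_counter() - start

        bits = ""
        while len(bits) < 32:
            t = one_sample()
            bits += str(int(t * 1e6) & 1)   # keep the least-significant microsecond bit
        print(int(bits, 2))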

    Read the article

  • Developer oriented hardware benchmarks?

    - by Promit
    Perhaps I'm looking in the wrong places, but every hardware benchmark I've found, for nearly any component, is oriented towards gamers and/or workstations (video editing etc). Is there anyone doing benchmarks that are relevant to software developers? For example, take SSDs. I don't care how fast Crysis loads off an SSD -- that is completely worthless information. What I want to know is, which drive yields the quickest build times? What about Intellisense and refactoring operations? What RAID configuration has the biggest benefit? I could probably come up with more examples, but you get the point. Long story short, where are the benchmarks that tell me which hardware will be most effective in helping me be a productive software developer?

    Read the article

  • I'm interested in checking out a stack-oriented programming language. Which one would you recommend?

    - by Anto
    I'm interested in learning a stack-oriented programming language (such as Forth); which one would you recommend? The qualities I want are:
    - You should be able to develop non-trivial software in it, but it needn't be a great language for that, as I want to learn the language so I can try out a new paradigm (that is, not because I think I will have great use for it). The reason I want to learn another paradigm is that I want to broaden my views on different approaches (learn to think in new ways, different from OOP, functional and structured). The language should let me do that (learn to think differently).
    - The language should have good, readily available resources to learn from. The resources should also approach stack-oriented programming in a way that helps you understand the paradigm (after all, I'm doing this for the paradigm).
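
    If it helps to see what the paradigm feels like before picking a language: in a stack-oriented language every word consumes and produces values on an implicit stack, so "3 4 + 2 *" means push 3, push 4, add, push 2, multiply. A ten-line toy evaluator, in Python only to illustrate the flavor, not as a substitute for learning a real Forth:

        def rpn(program):
            """Evaluate a whitespace-separated postfix program, Forth-style."""
            stack = []
            ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
                   "*": lambda a, b: a * b, "/": lambda a, b: a / b}
            for word in program.split():
                if word in ops:
                    b, a = stack.pop(), stack.pop()
                    stack.append(ops[word](a, b))
                else:
                    stack.append(float(word))
            return stack

        print(rpn("3 4 + 2 *"))   # [14.0]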

    Read the article

  • Fusion Applications Enablement Toolkit: the Partner's single place of information for all OPN Fusion Apps resources

    - by Richard Lefebvre
    Take a look, and then come back regularly, at https://blogs.oracle.com/opnenablement/resource/fusion_applications.html: a microsite designed to give our EMEA Fusion Partners all the critical Fusion enablement information (key links, events, materials, etc.) that they need to achieve specialization. This site will be updated on a regular basis, especially for OPN events and training sessions.

    Read the article

  • How to use Hardware RAID in Ubuntu Server

    - by user2071938
    I have an Adaptec RAID controller and created a RAID-1 (mirroring) array successfully. Now I have installed Ubuntu Server 12.04.3. When I type fdisk -l I get this output:

        bf@fileserver:~$ sudo fdisk -l

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000
        Disk /dev/sda doesn't contain a valid partition table

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000
        Disk /dev/sdb doesn't contain a valid partition table

        Disk /dev/sdc: 80.0 GB, 80026361856 bytes
        255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0004c454

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1   *        2048      499711      248832   83  Linux
        /dev/sdc2          501758   156301311    77899777    5  Extended
        /dev/sdc5          501760   156301311    77899776   8e  Linux LVM

        Disk /dev/mapper/fileserver--vg-root: 75.6 GB, 75606523904 bytes
        255 heads, 63 sectors/track, 9191 cylinders, total 147668992 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/mapper/fileserver--vg-root doesn't contain a valid partition table

        Disk /dev/mapper/ddf1_Data: 1000.1 GB, 1000065728512 bytes
        255 heads, 63 sectors/track, 121584 cylinders, total 1953253376 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000
        Disk /dev/mapper/ddf1_Data doesn't contain a valid partition table

        Disk /dev/mapper/fileserver--vg-swap_1: 4160 MB, 4160749568 bytes
        255 heads, 63 sectors/track, 505 cylinders, total 8126464 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/mapper/fileserver--vg-swap_1 doesn't contain a valid partition table

    The 80 GB HDD is for the system. The 1000.2 GB HDD should be for my data. But I'm a bit confused because two 1000.2 GB HDDs are listed; due to the hardware RAID, shouldn't there be only one HDD visible to the OS? (I have two 1000.2 GB HDDs in a RAID-1 array.) dmraid gives me:

        bf@fileserver:~$ sudo dmraid -r
        /dev/sdb: ddf1, ".ddf1_disks", GROUP, ok, 1953253376 sectors, data@ 0
        /dev/sda: ddf1, ".ddf1_disks", GROUP, ok, 1953253376 sectors, data@ 0

    So it seems to be OK? But how do I partition these disks, and which one should I mount (sdb or sda)? Hope you can help me. Thx, Florian

    Read the article

  • VERY strange stack overflow in C++ program

    - by mav
    Hello, I wrote a program some time ago (Mac OS X, C++, SDL, FMOD) and it performed rather well. But lately I wanted to extend its functionality and added some more code to it. And now, when I run it and try to test the new functionality, the program crashes with SIGABRT. Looking in the debugger, on the function stack I see:

        _kill
        kill$UNIX2003
        raise
        __abort
        __stack_chk_fail
        odtworz   <-- my function that was modified

    As far as I know, "__stack_chk_fail" indicates a stack overflow. But that's not the weirdest thing about it. In the function "odtworz" I have some code like this:

        ...
        koniec = 0;
        while ( koniec == 0 )
        {
            ...
            if (mode == 1)
            {
                ...
            }
            else if (mode == 2)
            {
                ...
            }
            else if (mode == 3)
            {
                ...
            }
        }

    mode is a global variable and is set to the value 2 in a function called earlier. And now imagine: if I delete the third if statement (mode == 3), which never gets executed in this mode, the program doesn't crash! Deleting code that doesn't even get executed helps the situation! Now, I don't want to delete this code because it's for another mode of my program, and it works fine there. So, any hints on where I can search? What could possibly be wrong with this?

    Read the article

  • Dynamic stack allocation in C++

    - by Poni
    I want to allocate memory on the stack. I've heard of _alloca / alloca and I understand that they are compiler-specific, which I don't like. So I came up with my own solution (which might have its own flaws) and I want you to review/improve it so that, once and for all, we'll have this code working:

        /*#define allocate_on_stack(pointer, size) \
            __asm \
            { \
                mov [pointer], esp; \
                sub esp, [size]; \
            }*/

        /*#define deallocate_from_stack(size) \
            __asm \
            { \
                add esp, [size]; \
            }*/

        void test()
        {
            int buff_size = 4 * 2;
            char *buff = 0;

            __asm
            {
                // allocate
                mov [buff], esp;
                sub esp, [buff_size];
            }

            // playing with the stack-allocated memory
            for(int i = 0; i < buff_size; i++)
                buff[i] = 0x11;

            __asm
            {
                // deallocate
                add esp, [buff_size];
            }
        }

        void main()
        {
            __asm int 3h;
            test();
        }

    Compiled with VC9. What flaws do you see in it? I, for example, am not sure that subtracting from ESP is the solution for "any kind of CPU". Also, I'd like to make the commented-out macros work, but for some reason I can't.

    Read the article

  • IBM Keynote: (hardware,software)–>{IBM.java.patterns}

    - by Janice J. Heiss
    On Sunday evening, September 30, 2012, Jason McGee, IBM Distinguished Engineer and Chief Architect Cloud Computing, along with John Duimovich IBM Distinguished Engineer and Java CTO, gave an information- and idea-rich keynote that left Java developers with much to ponder.Their focus was on the challenges to make Java more efficient and productive given the hardware and software environments of 2012. “One idea that is very interesting is the idea of multi-tenancy,” said McGee, “and how we can move up the spectrum. In traditional systems, we ran applications on dedicated middleware, operating systems and hardware. A lot of customers still run that way. Now people introduce hardware virtualization and share the hardware. That is good but there is a lot more we can do. We can share middleware and the application itself.” McGee challenged developers to better enable the Java language to function in these higher density models. He spoke about the need to describe patterns that help us grasp the full environment that an application needs, whether it’s a web or full enterprise application. Developers need to understand the resources that an application interacts with in a way that is simple and straightforward. The task is to then automate that deployment so that the complexity of infrastructure can be by-passed and developers can live in a simpler world where the cloud can automatically configure the needed environment. McGee argued that the key, something IBM has been working on, is to use a simpler pattern that allows a cloud-based architecture to embrace the entire infrastructure required for an application and make it highly available, scalable and able to recover from failure. The cloud-based architecture would automate the complexity of setting up and managing the infrastructure. IBM has been trying to realize this vision for customers so they can describe their Java application environment simply and allow the cloud to automate the deployment and management of applications. “The point,” explained McGee, “is to package the executable used to describe applications, to drop it into a shared system and let that system provide some intelligence about how to deploy and manage those applications.”John Duimovich on Improvements in JavaMcGee then brought onstage IBM’s Distinguished Engineer and CTO for Java, John Duimovich, who showed the audience ways to deploy Java applications more efficiently.Duimovich explained that, “When you run lots of copies of Java in the cloud or any hypervisor virtualized system, there are a lot of duplications of code and jar files. IBM has a facility called ‘shared classes’ where we put shared code, read only artefacts in a cache that is sharable across hypervisors.” By putting JIT code in ahead of time, he explained that the application server will use 20% less memory and operate 30% faster.  He described another example of how the JVM allows for the maximum amount of sharing that manages the tenants and file sockets and memory use through throttling and control. Duimovich touched on the “thin is in” model and IBM’s Liberty Profile and lightweight runtime for the cloud, which allows for greater efficiency in interacting with the cloud.Duimovich discussed the confusion Java developers experience when, for example, the hypervisor tells them that that they have 8 and then 4 and then 16 cores. “Because hypervisors are virtualized, they can change based on resource needs across the hypervisor layer. 
    You may have 10 instances of an operating system and you may need to reallocate memory," explained Duimovich. He showed how to resize LPARs, reallocate CPUs and migrate applications as needed. He explained how application servers can resize thread pools and make better use of resources based on information from the hypervisors.

    Java Challenges in Hardware and Software

    McGee ended the keynote with a summary of upcoming hardware and software challenges for the Java platform. He noted that one reason developers love Java is that it allows them to ignore differences in hardware. He stated that the most important things happening in hardware were in network and storage – developments such as the speed of SSDs, the exploitation of high-speed, low-latency networking, and recent advances such as storage-class memory and non-volatile main memory. "So we are challenged to maintain the benefits of Java and the abstraction it provides from hardware while still exploiting the new innovations in hardware," said McGee. McGee discussed transactional messaging applications, where developers send messages and transactionally persist each message to storage, something traditionally done by backing messages onto spinning disks and now mostly outdated. "Now," he pointed out, "we would use SSD and store it in Flash and get 70,000 messages a second. If we stored it using a PCI Express-based flash memory device, it is still Flash but put on a PCI Express bus on a card closer to the CPU. This way I get 300,000 messages a second and a 25% improvement in latency." McGee's central point was that hardware has a huge impact on the performance and scalability of applications. New technologies are enabling developers to build classes of Java applications previously unheard of. "We need to be able to balance these things in Java – we need to maintain the abstraction but also be able to exploit the evolution of hardware technology," said McGee. According to McGee, IBM's current focus is on systems wherein hardware and software are shipped together in what are called Expert Integrated Systems – systems that are pre-optimized and pre-integrated. McGee closed IBM's engaging and thought-provoking keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them – JavaScript, JRuby, Scala, Python and so forth. Java developers must now understand the strengths and weaknesses of such newcomers, as applications increasingly involve a complex interconnection of languages.

    Read the article
