Search Results

Search found 6387 results on 256 pages for 'cpu allocation'.

Page 135/256 | < Previous Page | 131 132 133 134 135 136 137 138 139 140 141 142  | Next Page >

  • Linux Kernel - Slab Allocator Question

    - by Drex
    I am playing around with the kernel and am looking at the kmem_cache files_cachep belonging to fork.c. It is created with sizeof(files_struct). My question is this: I have altered files_struct and added an rb_root (red/black tree root) using the built-in functionality in linux/rbtree.h. I can properly insert values into this tree. However, at some point a segfault occurs, and GDB gives the following backtrace:

      (gdb) backtrace
      #0 0x08066ad7 in page_ok (page=) at arch/um/os-Linux/sys-i386/task_size.c:31
      #1 0x08066bdf in os_get_top_address () at arch/um/os-Linux/sys-i386/task_size.c:100
      #2 0x0804a216 in linux_main (argc=1, argv=0xbfb05f14) at arch/um/kernel/um_arch.c:277
      #3 0x0804acdc in main (argc=1, argv=0xbfb05f14, envp=0xbfb05f1c) at arch/um/os-Linux/main.c:150

    I have spent many hours trying to figure out why there is a segfault, given that the red/black tree inserts properly. I'm thinking it's a memory allocation issue with new processes made by fork() of a parent process. Could this be the case, and could it have something to do with kmem_cache files_cachep?
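
    A plausible thing to check (an assumption, not a confirmed diagnosis): nothing in the fork path will initialize a field added to files_struct unless dup_fd() is taught to do so, so the child's rb_root is either uninitialized garbage or an alias of the parent's tree, and either one corrupts memory later. A minimal user-space C++ sketch of the aliasing hazard and its fix; the struct and names are illustrative, not the kernel's:

      #include <cstring>

      // Stand-ins for files_struct and an added tree root; illustrative only.
      struct Node { int key; Node* next; };
      struct FilesStruct {
          int   count;
          Node* tree_root;   // plays the role of the added rb_root
      };

      // fork()-style duplication: a raw copy leaves tree_root pointing at the
      // parent's nodes, so parent and child would mutate one another's tree.
      // Resetting the root in the copy (the analogue of initializing the new
      // rb_root to RB_ROOT inside dup_fd()) removes the aliasing.
      FilesStruct dup_files(const FilesStruct& parent) {
          FilesStruct child;
          std::memcpy(&child, &parent, sizeof child);  // the shallow copy
          child.tree_root = nullptr;                   // the easy-to-forget reinit
          return child;
      }

      int main() {
          Node n{42, nullptr};
          FilesStruct parent{1, &n};
          FilesStruct child = dup_files(parent);
          return child.tree_root == nullptr ? 0 : 1;   // child starts empty
      }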

    Read the article

  • Desktop interface crashes after software updates

    - by N.C. Weber
    Recently, after installing Ubuntu software updates on the evening of December 7th, 2012, my desktop interface crashes regularly, leaving me with a command-line screen showing a long string of automated messages (I assume what goes on behind the pretty desktop). At first I thought it was only crashing whenever I played DirectX games in WINE, but now it crashes if I open the native Firefox browser, or even while sitting idle. Apport attempts to report the bugs after restart, but often crashes as well. I've run a SMART check on the hard drive and everything reports OK: no read errors, no bad sectors. I am using an Acer Extensa 4620Z:
    Memory: 2.0 GiB
    Processor: Intel Pentium Dual CPU T2370 @ 1.73GHz x 2
    Graphics: Intel 965GM x86/MMX/SSE2
    OS: Ubuntu 12.10 32-bit
    Disk: 116.0 GB with 33.4 GB available

    Read the article

  • Ubuntu 14 painfully slow on Dell R200

    - by sirmonkey
    I didn't notice it at first. The machines (there are 20 plus) are to be used as simple file servers. It wasn't until Samba just wouldn't act right that I installed a desktop GUI and started diagnosing the problem further, and caught the slow performance... I've tested 4 servers; they all suck. And Windows 7 runs fantastic on them. I have Googled and searched, but found nothing to explain this. The easy test: dmesg output scrolls so slowly you can almost read it. I'm guessing it's an APIC or CPU power-management issue. What output would you all like? It is a Core 2 machine with 4 GB of RAM. On board data.

    Read the article

  • Slashdotted web site seeks new home

    - by Arthur Edelstein
    I am maintaining a website that contains mostly simple HTML (just a little PHP). Normally the site receives only 4,000 hits per month, but it was recently slashdotted by the New York Times (30,000 visitors and 30 GB in a day), and the web host provider (Bluehost) throttled the CPU in response. This slowed down the website considerably. What web host providers would offer a more scalable solution? Ideally I would like a high-quality host that charges by the GB and can handle bandwidth expanding during sudden slashdotting episodes without a reduction in performance.

    Read the article

  • Scrambled screen on 12.04 with Radeon HD 7670M/2GB when scrolling the page

    - by Mihkel
    I have Ubuntu 12.04 LTS 64-bit and have installed the proprietary drivers for my Radeon HD 7670M with 2GB of memory. But if I scroll a page or do anything like moving a window, I get a blurred (or rather scrambled) screen for a second, and if I try to take a PrtScr of it, it goes back to normal. I have tried other drivers and they do not solve my problem. And I do not want to switch to 32-bit Ubuntu, because I have 6 GB of RAM and would lose much of it. Also, if it helps, my processor is an Intel® Core™ i5-3210M CPU @ 2.50GHz × 4.

    Read the article

  • Firefox 4: beta 12 released, with improvements to Flash support and hardware acceleration

    Firefox 4: beta 12 released. Improvements to Flash support and hardware acceleration. Update of 28/02/11. The twelfth, and apparently final, beta of Firefox 4 was released this weekend. It fixes 7,000 bugs and improves (Flash) video playback. The integration of hardware acceleration (handing specific computation tasks to the GPU rather than the CPU) has also been reworked, all of which makes the browser more stable. Unfortunately, it does not yet include the "miracle" patches that halve its startup time (read elsewhere…

    Read the article

  • Lens showing only music after zeitgeist removal

    - by chris
    I can't get anything to show up in the Dash (lens?) other than music: no applications, no files. This began when I removed zeitgeist. I've uninstalled and reinstalled it, but it's still not working. I've also installed unity-place-files and unity-place-applications, as suggested elsewhere. Under running processes I don't see zeitgeist (the original reason I wiped it out was because it was sucking up CPU). Ubuntu 12.04. Thanks in advance.

    Read the article

  • Wacom Bamboo Connect CTL470 "no tablet detected..."

    - by LAS
    Wacom Bamboo Connect CTL470: "no tablet detected ..." I downloaded and attempted to install the drivers and software. I am relatively new to Ubuntu; I downloaded, extracted, and ran them in a terminal, but could not successfully install the drivers and software for this device. All of this took up a good deal of space on the drive. Manual compilation failed. I need some help. How do I install so as to use this device? Or please direct me to a suitable (relative beginner) "how to". I have downloaded several packages, but the install fails in Software Center and Synaptic. System: HP a1220n, Intel Pentium 4 CPU 2.93 GHz, 32-bit, Ubuntu 11.10

    Read the article

  • Is there a way to log when a particular memory location gets written and by which function?

    - by rusbi
    I'm having a bug in my C++ program which happens very rarely but crashes the program. It seems I have a buffer overflow problem or something similar. I find these types of bugs the most difficult to track down. My program always crashes because of the same corrupted memory location. I'm wondering if there is some debugging tool which could detect when a particular memory location gets written and log the function which does it. I'm using VLD (Visual Leak Detector) for my memory-leak hunting and it works great: it substitutes the original mallocs with its own and logs every allocation. I was wondering if there is something similar for memory writes? I know that something like that would cripple a program, but it could be really helpful.
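
    Two possible avenues. Visual Studio's native debugger can set hardware data breakpoints on an address (x86 exposes four debug registers), which is the direct form of "break when this location gets written". As a cruder software fallback, one can bracket the suspect location with canary words and check them at strategic points to narrow down when the stomp happens; a minimal C++ sketch, with illustrative names:

      #include <cstdint>
      #include <cstdio>

      // Canary guard: surround the suspect field with known patterns. If an
      // overflow from a neighbouring buffer tramples the field, a canary is
      // very likely trampled too, and check() names the call site that saw it.
      struct GuardedInt {
          uint32_t front = 0xDEADBEEF;
          int      value = 0;
          uint32_t back  = 0xDEADBEEF;
          bool intact() const { return front == 0xDEADBEEF && back == 0xDEADBEEF; }
      };

      void check(const GuardedInt& g, const char* where) {
          if (!g.intact())
              std::fprintf(stderr, "canary trampled, detected at %s\n", where);
      }

      int main() {
          GuardedInt counter;
          counter.value = 41;
          check(counter, "after init");      // sprinkle checks along the code path
          ++counter.value;
          check(counter, "after increment"); // narrows down *when* the stomp happens
          return 0;
      }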

    Read the article

  • KVM guest disk performance

    - by Alex
    My KVM guest tops out at 200 MB/s, although the host easily does 700 MB/s (RAID 0 with 4 SSDs). Configuration: file-based storage (raw), cache=none. Host: 24 cores, 96 GB RAM, Ubuntu 12.04.1 LTS and virt-manager. I suspect the CPU to be the bottleneck (one core maxes out during hdparm). Has anyone experienced the same, or does anyone have an explanation? Edit: one more piece of info: the guest is the same as the host (Ubuntu 12). The same poor disk performance was observed with Windows 2008 R2 and SUSE Enterprise Linux (9 or 10, I think). At most 1 guest running.

    Read the article

  • Performance triage

    - by Dave
    Folks often ask me how to approach a suspected performance issue. My personal strategy is informed by the fact that I work on concurrency issues. (When you have a hammer everything looks like a nail, but I'll try to keep this general.) A good starting point is to ask yourself if the observed performance matches your expectations. Expectations might be derived from known system performance limits, prototypes, and other software or environments that are comparable to your particular system-under-test. Some simple comparisons and microbenchmarks can be useful at this stage. It's also useful to write some very simple programs to validate some of the reported or expected system limits. Can that disk controller really tolerate and sustain 500 reads per second? To reduce the number of confounding factors it's better to try to answer that question with a very simple targeted program. And finally, nothing beats having familiarity with the technologies that underlie your particular layer.

    On the topic of confounding factors: as our technology stacks become deeper and less transparent, we often find our own technology working against us in some unexpected way, choking performance rather than simply running into some fundamental system limit. A good example is the warm-up time needed by just-in-time compilers in Java Virtual Machines. I won't delve too far into that particular hole except to say that it's rare to find good benchmarks and methodology for Java code. Another example is power management on x86. Power management is great, but it can take a while for the CPUs to throttle up from low(er) frequencies to full throttle. And while I love "turbo" mode, it makes benchmarking applications with multiple threads a chore, as you have to remember to turn it off and then back on; otherwise short single-threaded runs may look abnormally fast compared to runs with higher thread counts. In general, for performance characterization I disable turbo mode and fix the power governor at the "performance" state. Another source of complexity is the scheduler, which I've discussed in prior blog entries.

    Let's say I have a running application and I want to better understand its behavior and performance. We'll presume it's warmed up, is under load, and is in an execution mode representative of what we think the norm would be. It should be in steady-state, if a steady-state mode even exists. On Solaris the very first thing I'll do is take a set of "pstack" samples. Pstack briefly stops the process and walks each of the stacks, reporting symbolic information (if available) for each frame. For Java, pstack has been augmented to understand Java frames, and even report inlining. A few pstack samples can provide powerful insight into what's actually going on inside the program. You'll be able to see calling patterns, which threads are blocked on what system calls or synchronization constructs, memory allocation, etc. If your code is CPU-bound then you'll get a good sense of where the cycles are being spent. (I should caution that normal C/C++ inlining can diffuse an otherwise "hot" method into other methods. This is a rare instance where pstack sampling might not immediately point to the key problem.) At this point you'll need to reconcile what you're seeing with pstack against your mental model of what you think the program should be doing. They're often rather different. And generally if there's a key performance issue, you'll spot it with a moderate number of samples.

    I'll also use OS-level observability tools to look for bottlenecks where threads contend for locks, other situations where threads are blocked, and the distribution of threads over the system. On Solaris some good tools are mpstat and, to a lesser degree, vmstat. Try running "mpstat -a 5" in one window while the application program runs concurrently. One key measure is the voluntary context switch rate, "vctx" or "csw", which reflects threads descheduling themselves. It's also good to look at the user, system, and idle CPU percentages. This can give a broad but useful understanding of whether your threads are mostly parked or mostly running. For instance, if your program makes heavy use of malloc/free, it might be the case that you're contending on the central malloc lock in the default allocator. In that case you'd see malloc calling lock in the stack traces, observe a high csw/vctx rate as threads block for the malloc lock, and your "usr" time would be less than expected.

    Solaris dtrace is a wonderful and invaluable performance tool as well, but in a sense you have to frame and articulate a meaningful and specific question to get a useful answer, so I tend not to use it for first-order screening of problems. It's also most effective for OS- and software-level performance issues, as opposed to HW-level issues. For that reason I recommend mpstat & pstack as the first step in performance triage. If some other OS-level issue is evident, then it's good to switch to dtrace to drill more deeply into the problem. Only after I've ruled out OS-level issues do I switch to using hardware performance counters to look for architectural impediments.
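
    The "write a very simple targeted program" advice is easy to act on. Below is a minimal POSIX C++ sketch of a random-read microbenchmark for the "can that disk controller really sustain 500 reads per second?" question; the file path and read count are illustrative, and the OS page cache will flatter the numbers unless the file is much larger than RAM (or is opened with O_DIRECT and aligned buffers):

      #include <fcntl.h>
      #include <unistd.h>
      #include <chrono>
      #include <cstdio>
      #include <cstdlib>
      #include <vector>

      int main(int argc, char** argv) {
          const char* path = argc > 1 ? argv[1] : "/tmp/testfile.bin"; // illustrative
          int fd = open(path, O_RDONLY);
          if (fd < 0) { std::perror("open"); return 1; }

          const off_t  size   = lseek(fd, 0, SEEK_END);
          const size_t block  = 4096;
          const long   blocks = size / block;
          if (blocks <= 0) { std::fprintf(stderr, "file too small\n"); return 1; }

          std::vector<char> buf(block);
          const int reads = 5000;
          auto t0 = std::chrono::steady_clock::now();
          for (int i = 0; i < reads; ++i) {
              // Random 4K-aligned offset to defeat sequential readahead.
              off_t off = (std::rand() % blocks) * static_cast<off_t>(block);
              if (pread(fd, buf.data(), block, off) < 0) { std::perror("pread"); return 1; }
          }
          auto t1 = std::chrono::steady_clock::now();
          double secs = std::chrono::duration<double>(t1 - t0).count();
          std::printf("%.0f random 4K reads/sec\n", reads / secs);
          close(fd);
          return 0;
      }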

    Read the article

  • How to make a bash script run with a latency (i.e., wait 1 sec at each iteration)?

    - by user2413
    I have this bash script:

      for (( i = 1 ; i <= 160 ; i++ )); do
          qsub mycomputations"${i}".pbs
      done

    Basically, I would prefer if there was a 1-second delay between iterations. The reason is that each iteration sends the program file mycomputations"${i}".pbs to a core node for solving. Solving in this instance involves the use of pseudo-random numbers. I suspect the RNG I use (R's) uses CPU time as its seed, because as things are now I get repeating pseudo-random numbers (at a rate of approximately 1 out of 100). So how do you ask bash to do something like this?

      for (( i = 1 ; i <= 160 ; i++ )); do
          sleep 1    # wait 1 second between submissions
          qsub mycomputations"${i}".pbs
      done

    Read the article

  • Design a Distributed System

    - by Bonton255
    I am preparing for an interview on distributed systems. I have gone through a lot of text and understand the basics of the area. However, I need some examples of discussions on designing a distributed system given a scenario. For example, if I were to design a distributed system to calculate whether a number N is prime or not, what would the design of the system be, and what would be the impact of network latency, CPU performance, node failure, addition of nodes, time synchronization, etc.? If you could present your in-depth thoughts on this example, or point me to a similar discussion, that would be really helpful.
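
    As one way to seed such a discussion: the classic design splits the divisor range [2, sqrt(N)] into slices and assigns one slice per node; node failure then means reassigning a slice, added nodes mean smaller slices, and latency matters mainly for distributing work and broadcasting "a factor was found". A single-machine C++ sketch of that partitioning, with threads standing in for nodes and an atomic flag for the broadcast (names and the node count are illustrative):

      #include <algorithm>
      #include <atomic>
      #include <cmath>
      #include <cstdint>
      #include <iostream>
      #include <thread>
      #include <vector>

      // Each "node" (here: a thread) tests one slice of the divisor range
      // [2, sqrt(N)]. The atomic flag plays the role of the cancellation
      // message a coordinator would broadcast to real nodes.
      bool isPrimeDistributed(std::uint64_t n, unsigned nodes = 4) {
          if (n < 2) return false;
          if (n < 4) return true;                  // 2 and 3 are prime
          const std::uint64_t limit =
              static_cast<std::uint64_t>(std::sqrt(static_cast<double>(n)));
          std::atomic<bool> composite{false};
          const std::uint64_t span = (limit - 1) / nodes + 1;
          std::vector<std::thread> workers;
          for (unsigned k = 0; k < nodes; ++k) {
              const std::uint64_t lo = 2 + k * span;
              const std::uint64_t hi = std::min(lo + span, limit + 1);
              workers.emplace_back([=, &composite] {
                  for (std::uint64_t d = lo; d < hi && !composite.load(); ++d)
                      if (n % d == 0) composite.store(true);   // "broadcast"
              });
          }
          for (auto& w : workers) w.join();
          return !composite.load();
      }

      int main() {
          std::cout << isPrimeDistributed(1000000007ULL) << "\n";  // prints 1
      }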

    Read the article

  • Dimming the backlight is irreversible on a Samsung Q210 notebook, what do I do?

    - by user27304
    I'm new to the community, although I have been using Ubuntu since 2010. I have a Samsung Q210 notebook. Specs:
    Processor: Intel® Core™2 Duo CPU P8400 @ 2.26GHz × 2
    Memory: 4 GB RAM
    Graphics: Nvidia 9200M GS (although System Information in Ubuntu doesn't know it)
    Disk: 194 GB HD
    OS: Ubuntu 11.10, kernel 3.0.0-12-generic-pae
    Although Samsung seems to be infamous for problems with Ubuntu, after upgrading to Oneiric the Fn brightness buttons are finally recognized. The only problem is that after dimming the backlight a fixed number of steps (3 or 4; I dare not count now, because that would mean rebooting, since I can't see anything), the display goes completely dark and using the Fn buttons to brighten the backlight no longer works (before reaching that threshold, going brighter after dimming works). Now what do I do? File a bug report? If not, what then? If yes, how? Not sure... guess I should ask here first. Thanks in advance for answering.

    Read the article

  • Strange profiling results: definitely non-bottleneck method pops up

    - by jkff
    I'm profiling a program using sampling profiling in YourKit and JProfiler, and also "manually" (I launch it and press Ctrl-Break several times to get thread dumps). All three methods give me extremely strange results: tens of percent of the time spent in a 3-line method that does not even do any allocation or synchronization, and doesn't have loops, etc. Moreover, after I made this method into a NOP and even removed its invocation completely, the observable program performance didn't change at all (although it got a negligible memory leak, since it was a method for freeing a cheap resource). I'm thinking that this might be because of the constraints that the JVM puts on the moments at which a thread's stacktrace may be taken, and it somehow turns out that in my program these are exactly the moments where this method is invoked, although there is absolutely nothing special about it or the context in which it is invoked. What could be the explanation for this phenomenon? What are the aforementioned constraints? What further measurements can I take to clarify the situation?

    Read the article

  • How to implement arrays in an interpreter?

    - by Ray
    I have managed to write several interpreters, including tokenizing, parsing (including more complicated expressions such as ((x+y)*z)/2), building bytecode from syntax trees, and actual bytecode execution. What I didn't manage: implementation of dictionaries/lists/arrays. I always got stuck on getting multiple values into one variable. My value structure (used for all values passed around, including variables) looks like this:

      class Value {
      public:
          ValueType type;
          int integerValue;
          string stringValue;
      };

    It works fine with integers and strings, but how could I implement arrays? (From now on, by "array" I mean arrays in my experimental language, not in C++.) How can I fit the array concept into the Value class above? Is it possible? How should I make arrays able to be passed around just as you can pass around integers and strings in my language, using the class above? Accessing array elements or memory allocation wouldn't be the problem; I just don't know how to store them.
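
    One conventional answer is to make arrays just another tagged case of Value, stored behind a pointer to a vector of Values. A minimal C++ sketch under that assumption (the field names mirror the question; shared_ptr gives arrays reference semantics on assignment, which is what most scripting languages do, and it also sidesteps Value containing a vector of its own, still-incomplete type):

      #include <memory>
      #include <string>
      #include <vector>

      enum class ValueType { Integer, String, Array };

      struct Value {
          ValueType type = ValueType::Integer;
          int integerValue = 0;
          std::string stringValue;
          // Only meaningful when type == Array. shared_ptr means assigning a
          // Value copies a handle, not the elements: two variables can alias
          // one array, just like passing around an integer or string Value.
          std::shared_ptr<std::vector<Value>> arrayValue;
      };

      Value makeInt(int i) {
          Value v;
          v.integerValue = i;
          return v;
      }

      Value makeArray() {
          Value v;
          v.type = ValueType::Array;
          v.arrayValue = std::make_shared<std::vector<Value>>();
          return v;
      }

      int main() {
          Value a = makeArray();
          a.arrayValue->push_back(makeInt(1));
          Value b = a;                       // b aliases the same array
          b.arrayValue->push_back(makeInt(2));
          return a.arrayValue->size() == 2 ? 0 : 1;  // both see two elements
      }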

    Read the article

  • FAQ: Creating a new LDOM domain

    - by Owen Allen
    I got a question about creating LDOM domains: "I have a Server Pool set up, and I need to create a secondary LDom domain on a machine in the pool. When I click on the machine, though, the 'create logical domain' command is grayed out. The machine still has available CPU threads and free RAM. What's going on?" This one has an easy answer. In a Server Pool, the Create Logical Domain action is under the pool's actions, rather than the individual machine's actions. This is because the Server Pool decides where to put the new domain based on the Server Pool's placement policy. So, in this case, you need to select the Server Pool in the Assets section, and then create the new domain from there.

    Read the article

  • Uncontrolled Fan and Crash

    - by RobotbeatsHuman
    I don't have the sensors to properly run lm-sensors. The computer will turn on, but shortly thereafter all the fans in it speed way up. It stays like this for a few minutes and then the computer shuts off. I tried resetting the BIOS. I went to try installing a BIOS update, but it won't stay on long enough for me to do that or a clean install. Could this be the motherboard dying? It's mainly the CPU fan that ends up going to max after a few minutes. I checked the PSU. The machine is a Dell Inspiron 580; if you need more system specs, just let me know.

    Read the article

  • Redundant code in exception handling

    - by Nicola Leoni
    Hi, I have a recurring problem: I can't find an elegant way to avoid duplicating resource-cleanup code:

      // resource allocation
      try {
          f();
      } catch (...) {
          // resource cleaning code
          throw;
      }
      // resource cleaning code (duplicated)
      return rc;

    I know I can write a temporary class with a cleanup destructor, but I don't really like it, because it breaks the code flow and I need to hand the class references to all the stack variables it must clean up; a helper function has the same problem. I can't figure out how no elegant solution to such a recurring problem exists.
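
    For what it's worth, the "temporary class with a cleanup destructor" the question dismisses is the idiomatic C++ answer (RAII), and a generic scope guard removes the per-resource boilerplate: a lambda captures exactly the stack variables it needs, so nothing has to be handed to a class by hand. A minimal sketch (ScopeGuard and the resource functions are illustrative; class template argument deduction needs C++17):

      #include <cstdio>
      #include <utility>

      template <class F>
      class ScopeGuard {
          F fn_;
      public:
          explicit ScopeGuard(F fn) : fn_(std::move(fn)) {}
          ~ScopeGuard() { fn_(); }                 // runs on return AND on throw
          ScopeGuard(const ScopeGuard&) = delete;
          ScopeGuard& operator=(const ScopeGuard&) = delete;
      };

      struct Resource { int id; };
      Resource* acquire()       { return new Resource{1}; }  // illustrative
      void release(Resource* r) { std::puts("released"); delete r; }
      void f()                  { throw 42; }      // the call that may throw

      int g() {
          Resource* r = acquire();
          ScopeGuard cleanup([&] { release(r); }); // cleanup written exactly once
          f();                                     // if this throws, ~ScopeGuard fires
          return 0;
      }

      int main() {
          try { g(); } catch (int) { /* "released" was printed during unwinding */ }
          return 0;
      }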

    Read the article

  • Clock drift even though NTPD running

    - by droffo
    I'm having a problem with the clock drifting on my PC. I'm running Ubuntu 10.10 on a somewhat crusty IBM e-server (1.5 GB RAM, 2.4 GHz CPU). ntpd is running (started at run level 2) and servers are defined:

      server 1.us.pool.ntp.org
      server 2.us.pool.ntp.org
      server 3.us.pool.ntp.org
      server time.nrc.ca
      server ntp1.cmc.ec.gc.ca
      server ntp2.cmc.ec.gc.ca
      server wuarchive.wustl.edu
      server clock.psu.edu

    Looking at the log file, it would seem that the ntp daemon is running, but the system clock never seems to get set. If I manually set the time from a Casio "atomic" watch, the date/time displayed by the Clock applet drifts out of sync over time. Looking at the log file (below) it would seem the ntp daemon started OK and is running, so I am totally flummoxed right now :-( Here's a copy of my ntp.log file.

    Read the article

  • How can I make KDE faster in Ubuntu 12.04? It's very slow

    - by Rizwan Rifan
    I installed the kubuntu-desktop package in Ubuntu 12.04 LTS, but the problem is that KDE responds very slowly. If I click an application's icon to run it, it appears after 10 seconds, and sometimes does not appear at all. It hangs all the time. The cursor is almost impossible to follow because of the lag. I have read on the Internet that Unity uses more memory and CPU than KDE, but on my PC Unity runs smoothly and KDE does not. So what should I do to make KDE as fast, responsive and smooth as Unity? My specifications are as follows:
    RAM: 1.5 GB (DDR2)
    Processor: 3 GHz dual core
    Graphics card: Intel HD Graphics with 256 MB memory

    Read the article

  • Can the JVM recover from an OutOfMemoryError without a restart

    - by askullhead
    Can the JVM recover from an OutOfMemoryError without a restart if it gets a chance to run the GC before more object allocation requests come in? Do the various JVM implementations differ in this aspect? EDIT: My question is about the JVM recovering, not the user program trying to recover by catching the error. In other words: if an OOME is thrown in an application server (JBoss/WebSphere/...), do I have to restart it? Or can I let it run if further requests seem to work without a problem? Sorry if that wasn't clear.

    Read the article

  • Ikoula offers 1,000 new dedicated virtual Flex'Servers, free for one month for the TechDays

    Ikoula offers 1,000 new dedicated virtual Flex'Servers, free for one month, for the TechDays. Ikoula had already launched a promotion on its Flex'Server offer by giving away 500 free servers. Today, the company repeats the operation for the TechDays and relaunches its offer, which now includes new resources at preferential prices. Four configurations are available: from ½ to 4 CPUs, from 256 MB to 2 GB of RAM, and from 10 to 80 GB of disk. For the TechDays, 1,000 Flex'Servers are offered free for one month. After the first month, these dedicated virtual servers are billed from €5.99 excl. tax/mo…

    Read the article

  • File system layout for multiple build targets

    - by Yttrill
    I am seeking some ideas for how to build and install software with several parameters, including target OS, target-platform CPU details, debugging variant, etc. Some parts of the install are shared, such as documentation and many platform-independent files; others are not, such as 64- and 32-bit libraries, when these are separated rather than together in a multi-arch library. On big networked platforms one often has multiple computers sharing some large server space, so there is actually cause to have even Windows and Unix binaries on the same disk. My product has already fixed an install philosophy of $INSTALL_ROOT/genericname/version/, so that multiple versions can coexist. The question is: how to manage the layout of all the other stuff?

    Read the article

  • Microsoft Developers Development Laptops [closed]

    - by FidEliO
    Possible Duplicate: What should I be focusing on when building a development PC? I am a Microsoft developer working on SharePoint and ASP.NET. I am trying to buy a new laptop, since the one I have is old. From my point of view, Microsoft development tools are becoming more and more resource-consuming (I don't find a suitable reason for it, though). So I thought I would go for a Lenovo U260 i7. I do not know whether it will meet my requirements, so that is why I wanted to ask Microsoft developers specifically about the specification of CPU, RAM, and storage disk. Thanks in advance

    Read the article
