Search Results

Search found 17045 results on 682 pages for 'high cpu usage'.


  • Is there a difference between multi-tasking and time-sharing?

    - by Dummy Derp
    Just going over my school notes: my teacher identifies a multi-tasking OS and a time-sharing OS as two different things, but I really don't see a difference between the two. MULTI-TASKING: you load a number of programs into memory and execute them. You switch to another program if the time quantum allocated to the current program expires, OR if it goes off to do I/O and gives up the CPU, OR if it finishes execution. TIME-SHARING: the same, again. The same applies to serial processing and batch processing. Although they are the same, I guess the only difference would be the way control information is passed to the CPU. Maybe, and again MAYBE, in serial processing you need to provide the punch cards with the control information for every process, while in batch processing the entire batch uses the same set of control information, like all the print jobs sharing the same control information.
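    Editorially, both terms are usually taken to describe the same quantum-driven dispatching the question outlines; a minimal C sketch of such a dispatch loop is below (the task table, quantum length, and tick bookkeeping are illustrative assumptions, not something from the question):

      #include <stdbool.h>
      #include <stdio.h>

      /* Hypothetical task states and task table -- purely illustrative. */
      enum state { READY, FINISHED };

      struct task {
          const char *name;
          enum state  state;
          int         remaining_ticks;   /* work left, in timer ticks */
      };

      #define QUANTUM 4                  /* assumed time quantum, in ticks */

      /* Run one task until its quantum expires or it finishes; a real kernel
       * would also stop here when the task blocks on I/O.  Both "multi-tasking"
       * and "time-sharing" describe exactly this kind of switch. */
      static void run_with_quantum(struct task *t)
      {
          for (int tick = 0; tick < QUANTUM && t->state == READY; tick++) {
              if (--t->remaining_ticks == 0)
                  t->state = FINISHED;
          }
      }

      int main(void)
      {
          struct task tasks[] = {
              { "editor",  READY, 6 },
              { "printer", READY, 3 },
          };
          bool any_ready = true;

          while (any_ready) {             /* round-robin over the task table */
              any_ready = false;
              for (size_t i = 0; i < sizeof tasks / sizeof tasks[0]; i++) {
                  if (tasks[i].state != READY)
                      continue;
                  run_with_quantum(&tasks[i]);
                  printf("%s ran for one quantum (ticks left: %d)\n",
                         tasks[i].name, tasks[i].remaining_ticks);
                  any_ready |= (tasks[i].state == READY);
              }
          }
          return 0;
      }

    Either term, as used in the notes above, maps onto this same loop; the names differ more by historical emphasis than by mechanism.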

    Read the article

  • Please Help to bring back power to my machine

    - by Acess Denied
    I have a Samsung N150 Plus netbook that I have been using for a while now. I left it on and plugged into a wall outlet and went to bed. I dual boot Ubuntu and Windows 7. I tried to update Windows 7 to SP1 and dozed off. When I woke up, the machine had been booted into Ubuntu and logged in as guest, which I take to mean one of my room mates tried to use the machine, though they have all denied it. I tried to reboot into Windows, and now there appears to be no CPU, hard disk, or CPU fan activity. Only one LED comes on when I plug it in: the LED that indicates the machine is powered on lights steadily. I really can't afford to buy a new machine now, and I need this one to complete my final-year project at school. Help please.

    Read the article

  • Ubuntu gets slower by the day

    - by Doug
    I've noticed that Ubuntu has been getting slower and slower to boot, launch programs, etc. I installed 12.04 about 4 months ago, now 12.10, running on a quad-core Intel Q8300, 4GB RAM, and an 80GB WD IDE drive. For some reason (ever since 11.04), I've noticed that right after installation the speed is good, but the longer I have the OS installed, the slower every boot gets, launching programs gets slower, and frame rates change radically (the onboard GeForce 9400 drops from around 60fps down to 12 in the worst cases). I would think maybe the hard drive is the issue; however, I installed 11.10 on a 160GB SATA drive and the same thing occurred. Looking at system resources, I'm holding steady at 1GB memory usage (I have 4GB, but it actually shows 3.6GB, I don't know why), no swap usage, and CPU usage right around 4% at the moment. Hard drive capacity is only 28% used. Has anyone else run into this issue? I love Ubuntu to death, but with distros other than Ubuntu I don't have this problem.

    Read the article

  • How to render Minecraft on the GPU?

    - by l0b0
    Hardware: Intel i7, AMD Radeon HD 6970, SSD with plenty of space, 6 GB RAM. Software: OpenJDK 6, 7, and Oracle Java 7 (reproducible with all three); AMD Catalyst 12.8 and the open source driver (reproducible with both); Ubuntu 12.04 x86_64 and older; Minecraft 1.3.2 vanilla and older. On this setup I am getting rubbish frame rates after a short while of playing, dropping from about 45-55 to 15 in a couple of minutes. CPU use is 40-45% even when rendering the opening screen at 1920x1280, and gameRenderer uses about 90% of a CPU when playing. Rather than trying to eke a few more FPS out of an obviously broken rendering pipeline, I really hope to find a way to make the GPU render Minecraft.

    Read the article

  • JavaOne Session Report - Java ME SDK 3.2

    - by Janice J. Heiss
    Oracle Product Manager for Java ME SDK, Sungmoon Cho, presented a session, "Developing Java Mobile and Embedded Applications with Java ME SDK 3.2," in which he covered the main new features of the Java ME Platform SDK 3.2, a state-of-the-art toolbox for developing mobile and embedded applications. The session began with a summary of the four main components of the Java ME SDK: a device emulator that lets developers quickly run and test applications before commercialization (it supports CLDC/MIDP, CLDC/IMP.NG, and CLDC/AGUI); a development environment that assists with writing, running, debugging, and deploying, and enables on-device debugging; samples that provide developers with useful code and frameworks; and IDE plugins for NetBeans and Eclipse that equip developers with a CPU Profiler, Memory Monitor, Network Monitor, and Device Selector, so manual integration is no longer necessary.

    Cho then talked about the Java ME SDK's on-device tooling architecture:

    * Java ME SDK provides an architecture ideal for on-device debugging.
    * Device Manager plays the central role by managing different devices, whether the emulator, a device that Oracle provides or recommends, or a third-party device, as long as the device has a Java runtime that supports the designated protocol.
    * The Emulator provides accurate emulation, since it uses the same code base used in Oracle's Java ME runtime.
    * The Universal Emulator Interface (UEI) makes it easy for IDEs to detect the platform.

    He then focused on the Java ME SDK release highlights, which include:

    * Implementation and support for the new Oracle Java Wireless Client 3.2 runtime and the Oracle Java ME Embedded runtime, with full emulation provided for the runtime.
    * Support for JSR 228, the Information Module Profile-Next Generation API (IMP-NG), a new profile for embedded devices.
    * A new Custom Device Skin Creator.
    * An Eclipse plugin for CLDC/MIDP.
    * Profiling, network monitoring, and memory monitoring, now integrated with the NetBeans profiling tools.
    * The Java ME SDK Update Center.

    Cho summarized the main features. IDE integration (NetBeans and Eclipse) enables developers to write, run, profile, and debug their applications in their favorite IDE. The CPU Profiler lets developers detect hot spots more quickly and see where CPU time is being spent; they can double-click a method to jump directly into the source code. The Memory Monitor lets developers watch objects and memory usage in real time. The debugger runs on both the emulator and the device, so developers can step through their applications and inspect variables to pinpoint a problem. Embedded application development is now available, with IMP-NG, Device Access, Logging, and AMS API support. On-device tooling means you can connect your device to your computer and run and debug the application right on the device. The Custom Device Skin Creator lets you define your own device and test in an environment that is as close as possible to your target device. The informative session concluded with a demo that showed more concretely how to apply the new features in Java ME SDK 3.2.

    Read the article

  • VMWare Player pauses often

    - by pascal
    I'm using 64-bit Windows 8 inside VMware Player, with 2 virtual processor cores; the virtual hard disk resides on a fast local disk and is not preallocated; the host CPU is an Intel i7 3770, which should be capable of hardware virtualisation, but I don't know if VMware uses it; NAT networking; sound card connected, USB connected, accelerated 3D graphics (NVIDIA 313.30 on the host). My problem is that the VM often pauses for a few seconds and then speeds up for a few seconds to catch up with real time again. Time in the VM actually moves faster after the pause; for example, all animations using timers speed up. When running, the vmware-vmx process shows ~150% CPU usage in top, but 0% when pausing (and a D state, i.e. waiting for I/O). iotop shows normal disk writes from the vmware-vmx threads, but during pauses the flush kernel thread uses 99%. Are there some options to try so that VMware doesn't wait for I/O? I've tried a few things available from the GUI but the issue never went away…

    Read the article

  • Make public webcam. Which protocol, which codec. (Using VLC)

    - by gsedej
    Hi! I want to use my old (1GHz) PC as a webcam video stream server (like those road cameras you can see online). I thought of using VLC and already tried HTTP output, but it was not really good: too CPU hungry, too large a stream (kBps), not stable... I have been reading the VLC how-tos, but there is still a question: which output should I use, HTTP, RTSP, or UDP? I want to serve more than one computer at the same time (multicast). Which codec would be good? The PC is not so fast, so it shouldn't be a too CPU-hungry codec: MPEG-2, MPEG-4, Xvid? How much video buffer should I use (vb=?)? What about setting IPs and ports? So I need some help with ideas, but if someone can put together a VLC command line, even better :) Oh, the computer has a direct internet connection and its own IP.

    Read the article

  • Macbook 8.1 overheating

    - by timse201
    I have a MacBook 8.1 with Ubuntu 12.04 installed, but the CPU is getting very hot. Under Mac OS X the CPU sits at 50-60°C. Under Ubuntu the machine also runs at about 60°C, but with a minimum fan speed of 3000rpm instead of the 2000rpm minimum under Mac OS X, and the fan gets very loud, around 4500rpm, when I'm browsing (without Flash) or doing something else. I set the minimum to 3000rpm because that is less annoying than the fan ramping up from a 2000rpm minimum, but that's not what I expected. What I've done so far: installed lm-sensors to see the temperatures and ran sensors-detect; installed macfanctld, Jupiter, the newest drivers from x-updates, and the i965-va-driver; installed mesa (with the default version my Sandy Bridge graphics showed up as unknown); added GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=force drm.vblankoffdelay=1 pcie_aspm=force drm.vblankoffdelay=1 i915.semaphores=1 i915.i915_enable_rc6=1 i915.i915_enable_fbc=1"; and added rfkill block bluetooth to /etc/rc.local to switch off Bluetooth by default on boot. The Mac is not as noisy as before, but it is still noisy and sometimes very hot. I hope you can help me.

    Read the article

  • Oracle Solaris Preflight Applications Checker 11.2 now available

    - by CarylTakvorian-Oracle
    ISV Engineering is happy to announce the release of the latest version of our Solaris Preflight Checker tool, supporting Solaris 11.2, which is now available for download. The Solaris Preflight Checker enables a developer to determine the Oracle Solaris 11.2 readiness of an application by analyzing a working application on Oracle Solaris 10. A successful check with this tool is a strong indicator that an application will run unmodified on the latest Oracle Solaris 11. This release includes: an updated symbol database, which will help migration from Solaris 10 to Solaris 11.2; kernel binary and source scanners that now detect usage of data structures changed between Solaris 10 and Solaris 11.2; an application analyzer, which looks for usage of specific Solaris features and recommends better ways of implementing the same on Solaris 11.2 (e.g. suitability of high-performance libraries shipped with Solaris, crypto offload for Java and C based applications, etc.); and bug fixes.

    Read the article

  • VirtualService for ESB

    This article describes the design, implementation, and usage of a VirtualService for the Enterprise Service Bus, using Microsoft .NET Framework 3.5 technology.

    Read the article

  • How to debug slow session start of Gnome 3?

    - by user65521
    After upgrading from 11.10 to 12.04, the login process of GNOME 3 is extremely slow: it takes on the order of 60 seconds, where it took on the order of a few seconds before the upgrade (and the hard disk is an SSD!). Running top in a VT shows that gnome-shell is producing about 90% CPU load while dbus-daemon is taking roughly 10%. The moment when gnome-shell's CPU load drops to normal levels (around 2-3%) corresponds to the time the login process finishes and the desktop is displayed. Deactivating the four gnome-shell extensions I have installed (Alternative Status Menu, Quit Button, Remove Accessibility, system-monitor) has no effect on session start-up time. Logging in to GNOME Classic does not show the slow session start. The system logs do not show anything suspicious. So, what is the best way to identify the underlying problem?

    Read the article

  • How can I improve the battery life under 12.04 on my Inspiron 14z? [duplicate]

    - by cfogelberg
    This question already has an answer here: Tips to extend battery life for laptops and notebooks. How do I improve the battery life of my Inspiron 14z under Ubuntu 12.04? This laptop gets 4-5 hours of battery life using Windows (e.g. here). I've removed Windows and installed Ubuntu 12.04; the initial battery life was only 2 hours, and with some tweaks (described below) it's still only ~2.5 hours. For reference, the laptop is the latest model of the 14z: i5-3337U processor, 32GB mSATA SSD, 500GB HDD (5400rpm), AMD Radeon HD 7570M graphics card. I have put ext4 partitions on both the SSD and the HDD, and have mounted / on the SSD and /home on the HDD. I also put a 24GB Linux swap partition at the start of the HDD, though I figure it won't be used much (the laptop has 8GB of RAM). After googling around and reading Ask Ubuntu and other sites extensively, I have taken the following steps, and they have improved battery life by roughly 30 minutes (the exact improvement is not clear, but battery life is still nowhere near 4-5 hours): installed Jupiter (and set Performance to "Power Saving"); installed laptop-mode-tools (cat /proc/sys/vm/laptop_mode now outputs 5, where previously it output 0, though it's not clear this will help: AskUbuntu question); and turned my screen brightness down from full to 1/3. Other things I have heard about but have not tried, for fear of frying the laptop or my Linux install: adding "pcie_aspm=force" at the end of the line with "quiet splash" in /boot/grub/grub.cfg; enabling ALPM (though it may already be enabled in 12.04?); enabling i915 framebuffer compression; using a proprietary driver for the graphics card; turning off the discrete graphics card (what would happen if I relied on the integrated Intel graphics?); using TLP; and spinning down the HDD more aggressively (howto, but I think laptop-mode-tools does this already). The only other thing I've noticed is that the plastic just above the F5, F6 and F7 keys gets really hot. According to Jupiter my CPU temperature is only 69°C, and the System Monitor shows CPU load at 7%, so I don't think it's the CPU. Maybe it's the graphics card? Also, I've set up MongoDB and LAMP on the machine. When I run powertop, MongoDB is high in the list, but I'm not sure that's relevant to battery life because I'm not actually doing anything with MongoDB most of the time.

    Edit - additional info as requested:

    $ lspci -nnk | grep -iEA3 "(graphics|vga)"
    00:02.0 VGA compatible controller [0300]: Intel Corporation Ivy Bridge Graphics Controller [8086:0166] (rev 09)
            Subsystem: Dell Device [1028:057f]
            Kernel driver in use: i915
            Kernel modules: i915
    --
    02:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Thames [Radeon 7500M/7600M Series] [1002:6841]
            Subsystem: Dell Device [1028:057f]
            Kernel driver in use: radeon
            Kernel modules: radeon

    Read the article

  • Logic - Time measurement

    - by user73384
    I want to measure the following for a set of tasks: the last execution time and the maximum execution time of each task; the CPU load/time consumed by each task over a period defined by the application at run time; and the maximum CPU load consumed by each task. The tasks have the following characteristics: the first task runs in the background, and event information is available for entry only; the second task is periodic, with event information for entering and exiting the task; the third task is an interrupt that can start at any time, and no information is available from it; the fourth task is the highest-priority interrupt, which can also start at any time, with event information for entering and exiting the task. The solution should use as little execution time and memory as possible, and a 32-bit incrementing timer is available for time counting. Let's prepare and discuss the logic; it's OK to have limitations! Questions about the problem statement are welcome.
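    As a starting point, here is a minimal C sketch of what the entry/exit hooks could look like, assuming the free-running 32-bit timer is readable through a hypothetical read_timer32() and that unsigned wrap-around arithmetic is acceptable (all names and the stats layout are illustrative, not taken from the question):

      #include <stdint.h>

      /* Hypothetical free-running 32-bit tick counter (hardware specific). */
      extern uint32_t read_timer32(void);

      struct task_stats {
          uint32_t enter_tick;   /* timestamp of the most recent entry       */
          uint32_t last_exec;    /* last measured execution time, in ticks   */
          uint32_t max_exec;     /* worst-case execution time seen so far    */
          uint64_t total_exec;   /* accumulated ticks, for CPU-load reports  */
      };

      /* Call from the task-entry event. */
      static inline void task_enter(struct task_stats *s)
      {
          s->enter_tick = read_timer32();
      }

      /* Call from the task-exit event.  The unsigned subtraction stays
       * correct across a single wrap of the 32-bit timer. */
      static inline void task_exit(struct task_stats *s)
      {
          uint32_t elapsed = read_timer32() - s->enter_tick;

          s->last_exec = elapsed;
          if (elapsed > s->max_exec)
              s->max_exec = elapsed;
          s->total_exec += elapsed;   /* divide by the window length for % load */
      }

    CPU load over an application-defined window would then be the total_exec accumulated during that window divided by the window length, and the task that only reports an entry event (the background task) can only be estimated indirectly, as whatever time remains after the instrumented tasks are accounted for.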

    Read the article

  • Is it better to use preprocessor directive or if(constant) statement?

    - by MByD
    Let's say we have a codebase that is used for many different customers, and some code in it is relevant only for customers of type X. Is it better to use a preprocessor directive to include this code only for customers of type X, or to use an if statement? To be more clear:

    // some code
    #if TYPE_X_COSTUMER == 1
    // do some things
    #endif
    // rest of the code

    or

    if (TYPE_X_COSTUMER) {
        // do some things
    }

    The arguments I can think of are: the preprocessor directive results in a smaller code footprint and fewer branches (on non-optimizing compilers); if statements result in code that always compiles, so if someone makes a mistake that breaks code that is irrelevant to the project he is working on, the error will still appear and he will not corrupt the code base, whereas otherwise he would not be aware of the corruption; and I have always been told to prefer the compiler over the preprocessor (if that is an argument at all...). What is preferable when talking about a code base for many different customers?
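    For illustration, a self-contained sketch of the two variants side by side (the macro value, the build flag, and the feature function are assumptions made for the example, not taken from the poster's code base):

      #include <stdio.h>

      /* Normally supplied by the build system, e.g. -DTYPE_X_COSTUMER=1 */
      #ifndef TYPE_X_COSTUMER
      #define TYPE_X_COSTUMER 0
      #endif

      static void type_x_feature(void)
      {
          puts("running the type-X-only feature");
      }

      int main(void)
      {
          /* Variant 1: preprocessor.  The block disappears entirely from other
           * customers' binaries, but it is not even parsed for them, so errors
           * inside it go unnoticed until a type-X build is made. */
      #if TYPE_X_COSTUMER == 1
          type_x_feature();
      #endif

          /* Variant 2: plain if on a constant.  Always compiled (and usually
           * optimized away when the constant is 0), so breakage is caught in
           * every build, at the cost of keeping the code visible to the
           * compiler and linker for every customer. */
          if (TYPE_X_COSTUMER) {
              type_x_feature();
          }

          return 0;
      }

    Built with -DTYPE_X_COSTUMER=1 both variants run the feature; built without it, variant 1 is stripped at preprocessing time, while variant 2 relies on the optimizer to drop the dead branch.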

    Read the article

  • WS-Eventing for WCF (Indigo)

    This article describes the design, implementation, and usage of WS-Eventing for distributed applications built on the new Microsoft communication model, WCF (Windows Communication Foundation).

    Read the article

  • NullTransport for WCF

    This article describes the design, implementation, and usage of a custom in-process transport for the Microsoft Windows Communication Foundation (WCF) model.

    Read the article

  • Are we queueing and serializing properly?

    - by insta
    We process messages through a variety of services (one message will touch probably 9 services before it's done, each doing a specific IO-related function). Right now we have a combination of the worst case (XML data contract serialization) and best case (in-memory MSMQ) for performance. The nature of the message means that our serialized data ends up at about 12-15 kilobytes, and we process about 4 million messages per week. Persistent messages in MSMQ were too slow for us, and as the data grows we are feeling the pressure from MSMQ's memory-mapped files. The server is at 16GB of memory usage and growing, just for queueing, and performance also suffers when memory usage is high, as the machine starts swapping. We're already using MSMQ's self-cleanup behavior. I feel like there's a part we're doing wrong here. I tried using RavenDB to persist the messages and just queue an identifier, but the performance there was very slow (1,000 messages per minute, at best). I'm not sure if that's a result of using the development version or what, but we definitely need higher throughput[1]. The concept worked very well in theory, but the performance was not up to the task. The usage pattern has one service acting as a router, which does all the reads. The other services attach information based on their third-party hook and forward it back to the router. Most objects are touched 9-12 times, although about 10% are forced to loop around in this system for a while until the third parties respond appropriately. The services account for this and have appropriate sleeping behaviors; we use the priority field of the message for this reason. So my question is: what is an ideal stack for message passing between discrete-but-LAN'ed machines in a C#/Windows environment? I would normally start with BinaryFormatter instead of XML serialization, but that's a rabbit hole if the better way is to offload serialization to a document store. Hence my question. [1]: The nature of our business means the sooner we process messages, the more money we make. We've empirically proven that processing a message later in the week means we are less likely to make that money. While a rate of "1,000 per minute" sounds plenty fast, we really need that number upwards of 10k/minute. Just because I'm giving numbers in messages per week doesn't mean we have a whole week to process those messages.

    Read the article
