Search Results

Search found 10285 results on 412 pages for 'cpu architecture'.

  • Printer Brother DCP-110C Linux 64-bit drivers

    - by Ondra Žižka
    Hi, I need a 64-bit Linux driver for the DCP-110C (for Ubuntu 10.04 64-bit); I found only a 32-bit one here: http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/index.html I've tried to follow those instructions. During the installation I got this:

        ondra@ondra-doma:~/Downloads$ sudo dpkg -i --force-all dcp110clpr-1.0.2-1.i386.deb
        dpkg: warning: overriding problem because --force enabled:
         package architecture (i386) does not match system (amd64)
        (Reading database ... 257283 files and directories currently installed.)
        Preparing to replace dcp110clpr 1.0.2-1 (using dcp110clpr-1.0.2-1.i386.deb) ...
        Unpacking replacement dcp110clpr ...
        Setting up dcp110clpr (1.0.2-1) ...
        ln: creating symbolic link `/usr/lib/libbrcompij2.so.1.0': File exists
        ln: creating symbolic link `/usr/lib/libbrcompij2.so.1': File exists
        ln: creating symbolic link `/usr/lib/libbrcompij2.so': File exists

    After installation the printer is listed on the CUPS server, but it does not work: no command has any effect on the printer (which is, of course, on and connected). Has anyone found a working solution? Thanks, Ondra
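
    One workaround commonly used on 64-bit Ubuntu 10.04 (a sketch, assuming the driver's own binaries are the only 32-bit pieces involved; the package file name is the one from Brother's download page):

        # Install the 32-bit compatibility libraries the i386 driver binaries need
        sudo apt-get install ia32-libs

        # Re-install the Brother LPR driver, telling dpkg to accept the i386 architecture
        sudo dpkg -i --force-architecture dcp110clpr-1.0.2-1.i386.deb

        # Restart CUPS and re-check the print queue afterwards
        sudo /etc/init.d/cups restart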

  • Exchange 2010 SP2 OWA performance

    - by Frederik Nielsen
    How do I increase performance in OWA 2010 SP2? I am running the CAS role on a separate installation, which has 8GB RAM and 4 CPU cores, running virtualized in a VMware environment. However, the load times are pretty bad, so is there any way to improve them? I am thinking of installing a Linux caching server in front of the OWA, but will that work? And how should it be done? Update: Alright, I "fixed" it; it was just a temporary issue. Thanks for your replies.

  • Clicking hyperlinks in Email messages becomes painfully slow

    - by Joel Spolsky
    Running Windows 7 (RC, 64 bit). Suddenly, today, after months without a problem, clicking on links has become extremely slow. I've noticed this in two places. (1) clicking hyperlinks in Outlook email messages, which launches Firefox, takes around a minute. Launching Firefox by itself is instantaneous - I have an SSD drive and a very fast CPU. (2) opening Word documents attached to Outlook email messages also takes a surprisingly long time. The only thing these two might have in common is that they use the DDE mechanism, if I'm not mistaken, to send a DDE open command to the application. Under Windows XP this problem could sometimes be fixed by unchecking the "Use DDE" checkbox in the file type mapping, however, I can't find any equivalent under Windows 7. See here for someone else having what I believe is the same problem. See here for more evidence that it's DDE being super-super-slow.

  • Asus z53 laptop overheating problem

    - by Tiberiu Hajas
    Hi all, has anyone encountered overheating on an Asus laptop, especially the Z53 model? The right side of the laptop and the vent in the upper corner blow hot air even under minimal load; the CPU temperature can easily reach 65-70C and the GPU is even above 80C. I'm using NHC (notebook health control) to set a more conservative power-consumption profile, but that only helps a bit. Has anyone opened up the case? I'm wondering whether it needs a dust clean, etc. I still have some warranty on it. Thanks.

  • My KDE is very slow in certain operations

    - by Pietro
    I have a problem with my Linux installation. It seems that the KDE code that deals with directory windows is extremely slow (in both Dolphin and Konqueror). This happens both when I click on a directory icon and when I want to open/save a file from many KDE applications. The window can take a minute or more to open. The same happens when I right-click on an icon. CPU usage during this is very low (less than 10%). Am I the only one with this problem, or is it well known and maybe already fixed? Consider that I cannot update to a more recent version of openSUSE. Thank you, Pietro

    Configuration:
        Linux version: openSUSE 11.4
        KDE 4.6.0
        System: DELL Precision T3500 - Intel Xeon
        Home directory mounted on a remote drive  <-- could this be the reason?

  • Setting affinity on Windows Server 2003

    - by Samuel
    I have a program that by default only runs on one CPU. I have tried using the "start /affinity x notepad.exe" style of batch command, but I can't get it to run my program: it changes the title of the command-line window but doesn't execute the program. The start command does work for notepad, so it might just be a problem with the software. I have set the affinity manually via Task Manager, so I know that works. I am not the programmer of this software, so changing it is not an option.
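
    One detail worth checking (a sketch with a hypothetical path, not a confirmed fix): start treats the first quoted argument as the window title, which would explain a retitled console window with no program launched. Passing an empty title first avoids that:

        rem START takes the first quoted string as the window title, so give it an
        rem empty title and then the (quoted) program path
        start "" /affinity 1 "C:\Program Files\MyApp\myapp.exe"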

  • Chassis fans and power LEDs still work in Hibernate

    - by Jaded
    I have an ASRock Z68 Extreme3 Gen3 motherboard, recently updated to BIOS version 2.20. The OS is Windows 7 x64. The problem is that after that update, full hibernation (by that I mean full system power-off) stopped working, although everything was fine before. Now when I press hibernate, sleep is initiated as usual: the monitor goes to sleep, and the HDD and CPU fan stop spinning, but the chassis fans (I have a Gigabyte Aurora 3D 570 case with two rear fans and one front fan) keep running. The power LEDs also stay lit as if the computer were turned on. I've tried changing various UEFI settings related to sleep mode, and none of them changes the behaviour described above. I have "Deep Sleep" (Advanced > South Bridge Configuration) set to "Enabled in S4-S5", "Suspend to RAM" (Advanced > ACPI Configuration) set to "Auto", and all fan settings in "H/W Monitor" set to "Auto".

  • Unnecessary Java context switches

    - by Paul Morrison
    I have a network of Java Threads (Flow-Based Programming) communicating via fixed-capacity channels, running under Windows XP. What we expected, based on our experience with "green" (non-preemptive) threads, would be that threads would switch context less often (thus reducing CPU time) if the channels were made bigger. However, we found that increasing channel size does not make any difference to the run time. What seems to be happening is that Java decides to switch threads even though channels aren't full or empty (i.e. even though a thread doesn't have to suspend), which costs CPU time for no apparent advantage. Also, changing Thread priorities doesn't make any observable difference. My question is whether there is some way of persuading Java not to make unnecessary context switches, but hold off switching until it is really necessary to switch threads - is there some way of changing Java's dispatching logic? Or is it reacting to something I didn't pay attention to?! Or are there other asynchronism mechanisms, e.g. Thread factories, Runnable(s), maybe even daemons (!). The answer appears to be non-obvious, as so far none of my correspondents has come up with an answer (including most recently two CS profs). Or maybe I'm missing something that's so obvious that people can't imagine my not knowing it... In case you are wondering, I thought the goLock logic in 'send' might be causing the problem, but removing it temporarily didn't make any difference. I've added the send and receive code here - not very elegant, but it seems to work... ;-)

        public synchronized Packet receive() {
            if (isDrained()) {
                return null;
            }
            while (isEmpty()) {
                try {
                    wait();
                } catch (InterruptedException e) {
                    close();
                    return null;
                }
                if (isDrained()) {
                    return null;
                }
            }
            if (isDrained()) {
                return null;
            }
            if (isFull()) {
                notifyAll(); // notify other components waiting to send
            }
            Packet packet = array[receivePtr];
            array[receivePtr] = null;
            receivePtr = (receivePtr + 1) % array.length;
            // notifyAll(); // only needed if it was full
            usedSlots--;
            packet.setOwner(receiver);
            if (null == packet.getContent()) {
                traceFuncs("Received null packet");
            } else {
                traceFuncs("Received: " + packet.toString());
            }
            return packet;
        }

        synchronized boolean send(final Packet packet, final OutputPort op) {
            sender = op.sender;
            if (isClosed()) {
                return false;
            }
            while (isFull()) {
                try {
                    wait();
                } catch (InterruptedException e) {
                    indicateOneSenderClosed();
                    return false;
                }
                sender = op.sender;
            }
            if (isClosed()) {
                return false;
            }
            try {
                receiver.goLock.lockInterruptibly();
            } catch (InterruptedException ex) {
                return false;
            }
            try {
                packet.clearOwner();
                array[sendPtr] = packet;
                sendPtr = (sendPtr + 1) % array.length;
                usedSlots++; // move this to here
                if (receiver.getStatus() == StatusValues.DORMANT
                        || receiver.getStatus() == StatusValues.NOT_STARTED) {
                    receiver.activate(); // start or wake up if necessary
                } else {
                    notifyAll(); // notify receiver
                    // other components waiting to send to this connection may also
                    // get notified, but this is handled by the while statement
                }
                sender = null;
                Component.network.active = true;
            } finally {
                receiver.goLock.unlock();
            }
            return true;
        }

  • First requests are painfully slow

    - by winSharp93
    I am running Redmine under IIS using Zoo. Installation was done using the Web Platform Installer and the default configuration has not been touched. However, when using the application, the first requests take very long to complete (sometimes more than one minute). During that time, ruby.exe causes some CPU load (about 15%). According to the log files, it's mainly the views that take so long to render:

        Started GET "/redmine/login" for IP at 2012-09-04 09:54:08 +0200
        Processing by AccountController#login as HTML
        Rendered account/login.html.erb within layouts/base (42150.5ms)
        Completed 200 OK in 43508ms (Views: 43008.5ms | ActiveRecord: 0.0ms)

        Rendered account/login.html.erb within layouts/base (42435.1ms)
        Completed 200 OK in 44100ms (Views: 43523.3ms | ActiveRecord: 0.0ms)

    After the initial delay, further request times are totally acceptable. Any ideas on how to speed up the warmup time?

  • Slow Web Performance on two Windows 2008 R2 Terminal Servers

    - by Frank Owen
    We have two Windows 2008 R2 servers that our agents log into to access our customers' systems. Saturday morning we received complaints that the web is running horribly slow on both servers. This happens on all websites, and the majority of the time the website times out while trying to load. Other users located at the same site but using their desktop machines do not see any issue. We have rebooted the boxes and checked settings and cannot find the cause. CPU, memory, network and disk-space use on the servers is very low. I thought it might have been an MS update causing the issue, but it appears the last update was applied in January. We have rebooted both boxes and I am in the process of trying a different browser. Any ideas what could be causing this?

  • stdout and key press

    - by Jack
    Hi, when I'm in a console and press a key, some interrupt controller sends the code of that key to the CPU, which looks into some table and then represents that keypress by printing some character to stdout. But is the keyboard sending an ASCII code for that key, or just some standardised code? Since there are so many languages and extra characters, the OS must further translate that code into a character according to the user-selected scheme, I guess. I ask because I am from the Czech Republic, and we use some characters that do not exist in standard ASCII. So I was thinking: if I enter such a character in a console and then print it, let's say in C++ using cin and cout, and I have set the locale to Czech, stdin must actually send some non-ASCII code for the character I pressed to the input stream. Am I right?

  • Solaris TCP/IP performance tuning

    - by Andy Faibishenko
    I am trying to tune a high-message-traffic system running on Solaris. The architecture is a large number (600) of clients which connect via TCP to a big Solaris server and then send/receive relatively small messages (0.5 to 1K payload) at high rates. The goal is to minimize the latency of each message processed. I suspect that the TCP stack of the server is getting overwhelmed by all the traffic. What are some commands/metrics that I can use to confirm this, and if it is true, what is the best way to alleviate this bottleneck? PS: I posted this on Stack Overflow originally. One person suggested snoop and dtrace. dtrace seems pretty general - are there any additional pointers on how to use it to diagnose TCP issues?
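
    As a starting point (a sketch only; exact counter names and flags vary between Solaris releases, so treat these as examples rather than a recipe), the kernel's cumulative TCP counters will show whether connections are being dropped or segments retransmitted under load:

        # Per-protocol counters; listen-queue drops and retransmits growing over time
        # suggest the stack (or the accepting application) is falling behind
        netstat -s -P tcp | egrep -i 'listendrop|retrans'

        # Per-connection view, to spot send/receive queues backing up on the busy port
        netstat -an -f inet | grep <server-port>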

  • What are the specifications on the RAM inside my computer?

    - by Faken
    I have a Dell Dimension 9200 that I bought 4 years ago. I want to find out the exact specifications of its memory (manufacturer, speed, timings, etc.). Is there a way to get this specific info without having to open up the PC? (It's buried in and under a bunch of furniture; I'd prefer not to have to dig it out.) All I know about it right now is that it is 4 1GB sticks of DDR2 RAM at 667 MHz. It is the standard RAM that shipped with the computer 4 years ago (from Dell). Does anyone know the specifications of the RAM that Dell used in this particular model of computer 4 years ago? Note: I've done my research before coming here. CPU-Z, EVEREST, and AIDA32 have all been unable to give me any more information other than 4 x 1GB @ 667 MHz. I can't find any specifications in the Dell online manuals either (at least not as specific as I want). Thanks -Faken

  • What hardware makes a good MongoDB server? Where to get it?

    - by João Pinto Jerónimo
    Suppose you're on dell.com right now and you're buying a server to run the MongoDB database for your small startup. You will have to handle literally tens of thousands of writes and reads per minute (but small objects). Would you go for 2 processors? Invest more in RAM? I've heard (correct me if I'm wrong) that MongoDB handles as much as it can in RAM and then flushes everything to disk; in that case I should invest in a CPU with a large L2 cache, probably 40GB of RAM and a solid-state drive... right? Would I be better off with one high-end server (~$11,309, 2 expensive processors, 96GB of RAM) or 2x (~$6,419, 2 expensive processors, 12GB of RAM) servers? Is Dell OK or do you have better suggestions? (I'm outside the US, in Portugal.)

  • Need to upgrade DDR2 RAM on HP Desktop

    - by jds
    I have this HP Pavilion desktop. As you can see, that page says the memory speed supported is PC2-4200. It currently has a 512 MB stick - CPU-Z screenshots: hxxp://i41.tinypic.com/j5clj6.jpg and hxxp://i39.tinypic.com/20tldlc.jpg However, a crucial.com scan gives a slightly different report - hxxp://crucial.com/systemscanner/viewscanbyid.aspx?id=5718CFE831D926C3 - it says the system can support PC2-5300 memory. So my question is: which one should I trust? I want to upgrade the computer's RAM to 2 GB (the maximum supported), because XP Media Center is giving me problems and I will install Windows 7 on it. PC2-6400 is the most common DDR2 memory I have been able to find here in the market. Will it cause any problems if I install 2 × 1 GB PC2-6400 DDR2 memory sticks (in dual channel) in this computer (AFAIK, they will just run at the lower speed of 533 MHz, or whatever the motherboard supports), or do I absolutely need to get PC2-4200 sticks?

  • Which JMX statistics to watch out for in Catalina/Tomcat?

    - by geoaxis
    I have configured OpenNMS to collect all kinds of numeric data coming out of tomcat7's JMX interface - there are a lot of things. I am interested in monitoring this Tomcat instance so that I can avoid downtime and lockups. What metrics should I be watching out for? I am already monitoring things like CPU, memory and network via SNMP. With this JMX connection, the things that I find interesting so far are:

        Catalina:type=GlobalRequestProcessor,name="ajp-bio-/a.b.c.d-XXXX" (RequestsCount)
        Catalina:type=Manager,context=/myApp,host=localhost (active sessions and their maximum)

  • Is an ext3 Linux filesystem byte-order independent?

    - by Lothar
    I have a good old HP C3700 workstation with a PA-RISC CPU here that I would like to use as a Subversion server for a very large repository. I just worry about what happens if the workstation dies (everybody who knows this machine knows that it runs like an Abrams tank, so that is unlikely to happen in the next decade). I'm using Debian Linux on this system. If the mainboard dies, can I just plug the SCSI drive into a normal Intel Linux PC and read the files there? Which software RAID levels would be safe in that respect?

  • Virtual machine files on a ramdisk don't run faster than on a physical disk

    - by Landy
    I installed 36GB of memory in total (4x8GB + 2x2GB) in the host (Windows 7) and used ImDisk to create a 32GB ramdisk, formatted with the NTFS file system. Then I copied the virtual machine folder (in VMware Workstation format, including the vmx, vmdk, etc.) to the newly created ram disk and tried to power it on in VMware Workstation. What surprised me is that the performance is no better than before: it takes almost the same time to power on the Windows 7 VM. I checked Resource Monitor on the Windows 7 host, and the CPU, disk and network statistics look normal. Memory reported 3000+ hard faults/sec while the guest OS booted, then dropped to 0 after the guest had powered on. Any idea about this issue? I had thought the performance of a ramdisk would be better than a physical disk in this case. Am I wrong? Thanks.

  • How many VPSes can I create on my server?

    - by user197692
    I need to create as many VPSes as possible on my dedicated server (KVM or OpenVZ) in order to sell them, but I really don't know the answer. The RAM calculation is simple; it's more about CPU resources and how many VPSes the machine can hold. I'm talking about an Intel i7-2600 (4 cores, 8 threads), and I need to deploy as many VPSes as I can. Is it all about the number of threads, i.e. 8 threads = a maximum of 8 x 1 vCPU, or a maximum of 4 x 2 vCPU? I'm planning to use 1GB and 2GB of memory on each VPS; the server has 16GB (but I can add RAM if needed). So, can I create 8 KVM VPSes with 4 vCPUs and 2GB RAM each? How about 20 VPSes with 1GB RAM and 4 vCPUs each? How is this decision affected by the hypervisor (KVM, OpenVZ, VMware)?

  • XenServer/Center: Shared SRs for hosts not in same pool?

    - by 3molo
    I would like to use the same SRs on XenServer hosts that cannot be part of the same pool (because they don't have the exact same CPU feature set, if I understand it correctly), in order to share templates, be able to (manually) start a guest on another node, back up running guests on other hardware, etc. The technology for the SR can be any of iSCSI, NFS or CIFS; iSCSI would obviously be preferred. Trying to add an iSCSI volume produces "This LUN is already in use as SR iSCSI - Shared Storage on pool xxxxxx." Adding an NFS share on one XS host, creating a template there and then checking another XS host reveals they don't agree on used space etc. Coming from a vSphere world this is quite baffling, but if these are limitations then I will have to rethink some of the concepts for this low-budget project.

  • Does Windows 7 support multiple simultaneous nVidia graphics cards with different drivers?

    - by mckenzieg1
    On one of my dev machines at work (currently running XP), I have two nVidia graphics cards:

        Quadro NVS 440 (my original card, for my three primary monitors)
        GeForce GTX 275 (just added, for CUDA development)

    I can get both cards to work OK by installing the latest GeForce drivers, but I get some annoying-but-not-crippling display artifacts on the Quadro's screens (mostly scattered black rectangles where repainting fails for a few bits of UI in certain applications). Under XP, this seems to be the best I can do. I can use Device Manager to supposedly install different nVidia drivers for the two cards (the latest Quadro drivers for the NVS, the latest GeForce drivers for the GTX), but I actually end up with the same driver for both, because the driver DLLs all have the same names and get installed on top of one another in the system directory. I have read that Win7 has a new video driver architecture that better supports multiple heterogeneous cards. Does anyone know if that will handle my scenario? If so, it will give me a compelling reason to get that machine onto Win7 ASAP.

  • How can I create multiple identical AWS EC2 server instances with large amounts of persistent data?

    - by mojones
    I have a CPU-intensive data-processing application that I want to run across many (~100,000) input files. The application needs a large (~20GB) data file in order to run. What I would like to do is:

        create an EC2 machine image that has my application and associated data files installed
        boot up a large number (e.g. 100) of instances of this image
        split my input files up into 100 batches and send one batch to be processed on each instance

    I am having trouble figuring out the best way to ensure that each instance has access to the large data file. The data file is too big to fit on the root filesystem of an AMI. I could use Block Storage, but a given Block Storage volume can only be attached to a single instance, so I would need 100 clones. Is there some way to create a custom image that has more space on the root filesystem so that I can include my large data file? Or is there a better way to tackle this problem?
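
    One pattern that fits the "100 clones" idea (a sketch with hypothetical IDs, using today's EC2 command-line tools; the data could equally be pulled from S3 at boot): put the 20GB file on one EBS volume, snapshot it once, and create a per-instance volume from that snapshot, since a snapshot can be turned into any number of volumes.

        # One-time: snapshot the volume that already holds the 20GB data file
        aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "shared 20GB data file"

        # Per instance: create a volume from the snapshot and attach it
        aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
        aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/sdf

    Each instance then mounts its own copy read-only, so no volume ever has to be shared between instances.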

  • Open a screen session for a certain user on boot (Ubuntu Server Linux)

    - by Pez Cuckow
    I currently have a private server on which I test my web apps, running Ubuntu Server 10.04. I also host a few game servers on it (rather than have wasted CPU time :-D) for some of my friends. These game servers I run under the game user account, and each one has its own screen session (so friends can ssh in and reboot the game server etc.). For example, screen -R l4d2 runs ./start in the L4D2 folder. However, if I reboot the server (which I have to do occasionally), all these sessions close and I have to manually recreate the screen sessions and run the required games within them. Is there a way to set these screen sessions up as daemons or services, or just start them on boot, so they restart themselves when the server reboots? I hope I have made my question easy to understand, but feel free to ask questions! Many thanks,
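
    A minimal sketch of one way to do this, reusing the l4d2 example from the question (the game directory path is an assumption): an @reboot line in the game user's crontab that starts each server in a detached screen session.

        # In the 'game' user's crontab (crontab -e as that user):
        # start the L4D2 server in a detached screen session named "l4d2" at boot
        @reboot /usr/bin/screen -dmS l4d2 bash -c 'cd "$HOME/L4D2" && ./start'

    Friends can then reattach with screen -r l4d2 as before; an init script (or an Upstart job on 10.04) would be the more formal way to manage them as services.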

  • Clear / Flush cached memory

    - by TheDave
    I have a small VPS with 6GB RAM hosting a couple of websites. Recently I have noticed that my cached memory size is quite high - see below:

        Cpu(s):  0.1%us,  0.1%sy,  0.0%ni, 99.1%id,  0.0%wa,  0.2%hi,  0.4%si,  0.0%st
        Mem:   6113256k total,  5949620k used,   163636k free,   398584k buffers
        Swap:  1048564k total,      104k used,  1048460k free,  3586468k cached

    While investigating whether there is some way to have this flushed or cleared, I stumbled upon this command:

        sync; echo 3 > /proc/sys/vm/drop_caches

    I read that it could be useful to add this as a cron task/job. Is this method recommended, or could it lead to problems? The only concern I have is that I run one Magento installation on Memcached - could this have any negative effects on it? I am certainly not a pro, so I would very much appreciate some expert advice. PS: My VPS runs CentOS 5 x64 and I have WHM + NGINX installed.
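
    For reference, the cron form alluded to in the question would look roughly like this (a sketch for root's crontab; note that the kernel reclaims page cache automatically under memory pressure, so a high "cached" figure is normal and scheduling this is rarely necessary - it also does not release Memcached's own memory, which is ordinary process memory rather than page cache):

        # root's crontab: sync dirty pages, then drop page cache, dentries and inodes at 03:00
        0 3 * * * /bin/sync && /bin/echo 3 > /proc/sys/vm/drop_caches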

  • MBP becomes very hot after using Xcode

    - by Globalhawk
    Hardware: MBP early 2011 version
    OS: Mountain Lion
    App: Xcode 4.5.2

    Problem: Every time I start Xcode, 2 or 3 processes called "git" start running. But when I quit Xcode, the "git" processes don't quit and keep using a lot of CPU. The computer then becomes quite hot and the battery drains very quickly. If I manually kill these processes, the problem goes away. I have tried reinstalling Xcode several times, but the problem comes back every time. It's driving me crazy. Any help will be appreciated!
