Search Results

Search found 6355 results on 255 pages for 'slow downs'.

Page 102/255

  • Trackpad and USB mouse - different settings

    - by soupagain
    On my laptop I use both the trackpad and an external USB mouse, and I'd like different settings for each device. For example, the mouse is high resolution, so I prefer a slow pointer speed for it, while for the trackpad I prefer the "Enhance pointer precision" option. But I only seem to be able to set these options for both devices at once, not individually. Can this be done? [Windows 7]
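
    As far as I know, Windows 7 keeps a single global pointer-speed setting, so one workaround is to switch profiles yourself whenever you change devices. A minimal sketch using Python and the Win32 SystemParametersInfo call (the speed values and the two-profile idea are my own illustration, not a built-in feature):

        import ctypes

        SPI_SETMOUSESPEED = 0x0071
        SPIF_UPDATEINIFILE = 0x01
        SPIF_SENDCHANGE = 0x02

        def set_pointer_speed(speed):
            """speed: 1 (slowest) .. 20 (fastest); 10 is the Windows default."""
            ctypes.windll.user32.SystemParametersInfoA(
                SPI_SETMOUSESPEED, 0, speed,
                SPIF_UPDATEINIFILE | SPIF_SENDCHANGE)

        set_pointer_speed(6)    # illustrative: slower profile for the high-res mouse
        # set_pointer_speed(10) # ...and back to the default for the trackpad

    A small tray utility could call set_pointer_speed() when it sees the mouse arrive or leave (WM_DEVICECHANGE), which is essentially what third-party per-device tools do.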

    Read the article

  • D-Link DIR-300 slows down / loses network

    - by basic6
    There are 2 buildings (A and B). In bldg A is an open WLAN (which I'm allowed to use, btw). In bldg B is a computer that I want to connect to that network. So I flashed an old D-Link DIR-300 AP with DD-WRT, mounted it to the wall in bldg B near a window, attached a 13 dBi directional antenna (pointing at bldg A) and configured it as a client of that wireless network. Another AP, connected to the D-Link, acts as a standard access point that the computer connects to. That's basically working so far, but:

    Every now and then the connection is lost. Not the connection between the computer and the D-Link (I can still access the DD-WRT admin page) or the connection between the D-Link and the WLAN (under Status - Wireless it says it's still connected to the network), but when I try to open a web page (which only works when connected to the wireless network from bldg A), Firefox hangs at "Looking for..." (name resolution) without finding anything. When I reset the D-Link (power off, power on) in this situation, everything works fine again (Internet access) after a few moments. I have no idea why this happens, but it's usually at most every few weeks (mostly when nobody was using the computer, so no traffic).

    Compared to the connection speed when I connect directly to the WLAN in bldg A (laptop), the speed in bldg B is rather slow, and I have the impression the difference has gotten worse in the last few days. A few minutes ago I measured 582 KB/s down and 911 KB/s up in bldg A (directly/laptop) versus 84 KB/s down and 9 KB/s up in bldg B. The speed in bldg B used to be much higher (I remember 200 KB/s up) while the actual network speed in bldg A was lower than it is now (close to those 200). I'm aware that the wireless hop between the buildings should slow things down, but I'm wondering why the difference has become this extreme. Thanks for any tips...

    Update: I currently want to upload a large file (1.5 GB) via FTP (FileZilla). Since that caused the D-Link to disconnect (as described above), I took my laptop to bldg A, connected directly to the original WLAN (bypassing my D-Link) and tried the same upload. Guess what - same issue: at some point the connection is dead (at this point I would have reset my D-Link if I were connected to it). Just like the D-Link, my laptop is still connected, but not even name resolution works ("Looking for..." in Firefox). After reconnecting, it works again. Maybe my D-Link isn't the problem at all...
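
    Since the failure mode is "associated but names won't resolve", it can help to log whether DNS and raw-IP connectivity fail together or separately the next time it happens. A minimal sketch to run on the affected computer (the test hostname and IP are illustrative):

        import socket, time

        TEST_NAME = 'www.google.com'
        TEST_IP = '8.8.8.8'   # a reachable IP, probed without any DNS lookup

        while True:
            try:
                socket.gethostbyname(TEST_NAME)
                dns_ok = True
            except socket.error:
                dns_ok = False
            try:
                s = socket.create_connection((TEST_IP, 53), timeout=5)
                s.close()
                ip_ok = True
            except socket.error:
                ip_ok = False
            print('%s  DNS %-4s  raw-IP %-4s'
                  % (time.strftime('%H:%M:%S'),
                     'ok' if dns_ok else 'FAIL', 'ok' if ip_ok else 'FAIL'))
            time.sleep(60)

    If the raw-IP probe keeps succeeding while the lookup fails, the bridge is still passing traffic and the problem is upstream (the WLAN's DNS), which would match the laptop in bldg A showing the same symptom.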

    Read the article

  • Improve file transfer speed between Windows PCs and servers

    - by Geotarget
    I've set up a server that multiple PCs in my workplace connect to. Sadly, data transfer speeds max out at about 3 MB/s per connection, which is slow for file transfers, especially large files. I'm using Windows file sharing; the server runs Windows Server 2008 (2 GHz CPU, 1 GB RAM) and the client PCs mostly run Windows 7. How can I detect bottlenecks in my network and improve file sharing speed within the network?
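
    A first step is to separate the network from the file-sharing protocol: measure raw TCP throughput between a client and the server and compare it with the 3 MB/s you see over SMB. A minimal sketch (the port and transfer size are arbitrary); run it with "server" on one box and "client <server-ip>" on the other:

        import socket, sys, time

        PORT = 5001                 # arbitrary test port (assumption)
        CHUNK = 64 * 1024
        TOTAL = 256 * 1024 * 1024   # send 256 MB

        if sys.argv[1] == 'server':
            srv = socket.socket()
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(('', PORT))
            srv.listen(1)
            conn, addr = srv.accept()
            received, t0 = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            secs = time.time() - t0
            print('%.1f MB in %.1f s = %.1f MB/s'
                  % (received / 1e6, secs, received / 1e6 / secs))
        else:
            cli = socket.create_connection((sys.argv[2], PORT))
            buf = b'\0' * CHUNK
            sent = 0
            while sent < TOTAL:
                cli.sendall(buf)
                sent += len(buf)
            cli.close()

    If the raw socket also tops out near 3 MB/s, look at NICs, duplex settings and cabling; if it is much faster, the bottleneck is SMB itself (or the server's 1 GB of RAM).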

    Read the article

  • Change TCP wait for ACK timeouts in Win7/WinServer

    - by maseth
    Is there any way to change the default wait-for-ACK timeout for TCP on Windows 7 or Windows Server? I'm using a very slow network (1200 bps) and want to tweak TCP accordingly. With the default parameters the connection stalls on repeated retransmissions. If I could change the ACK timeout and the transmit window size, I think it would work. On Windows XP this was possible, but I can't find any documentation for Windows 7 and Windows Server.
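
    I'm not aware of a supported knob for the retransmission timeout on Windows 7 (the XP-era registry values such as TcpWindowSize are reportedly ignored from Vista onwards), but an application that owns the socket can at least cap how much unacknowledged data is in flight by shrinking its own buffers. A hedged sketch; buffer sizes and the peer address are illustrative:

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Shrink the buffers before connecting: the advertised receive window
        # roughly follows SO_RCVBUF, and SO_SNDBUF caps how much data we queue
        # for transmit, so little sits un-ACKed on the 1200 bps link.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 512)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 512)
        s.connect(('192.0.2.10', 9000))  # placeholder peer address/port
        print('effective SO_SNDBUF:',
              s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))

    This only helps for applications you control, and the OS may round the values up to a floor - hence the getsockopt check.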

    Read the article

  • DVD Flick audio out of sync problem?

    - by DvDfLiCk
    When I burn an .avi or any other type of file, sometimes the audio is out of sync. How can I fix this problem - should I burn at a slower or faster speed? Please leave a couple of suggestions on how to fix this. And before you ask: the files were already in sync before burning with DVD Flick.

    Read the article

  • iTunes Over the Air Sync

    - by aceinthehole
    Is there any software or hack in existence that will allow iTunes to sync wirelessly with my iPhone or iPod touch? I'd like the iPhone to be constantly synced over the 802.11 connection without having to plug it into USB at my computer, or, even better, over 3G when I'm not at home. I'd heard that it might be possible (albeit slow) but haven't been able to find any software or specific steps anywhere that let you do it.

    Read the article

  • Install MinGW and MSYS to run C programs on Windows

    - by Hesham Abouelsoaod
    I installed MinGW using mingw-get-inst-20111118.exe and it works, but it is very slow! I don't want to install it online. I remember that I previously installed MinGW and MSYS from two files, mingw.exe and msys.exe, without using the internet, and it worked great, but now I can't repeat what I did and can't find a link to mingw.exe! I'd like simple steps for a better offline installation. Thanks.

    Read the article

  • Linux: find out what process is using all the RAM?

    - by Timur
    Before actually asking, just to be clear: yes, I know about disk cache, and no, it is not my case :) Sorry for this preamble :)

    I'm using CentOS 5. Every application in the system is swapping heavily, and the system is very slow. When I do free -m, here is what I get:

                     total   used   free  shared  buffers  cached
        Mem:          3952   3929     22       0        1      18
        -/+ buffers/cache:   3909     42
        Swap:        16383     46  16337

    So, I actually have only 42 MB to use! As far as I understand, -/+ buffers/cache doesn't count the disk cache, so I really do have only 42 MB, right? I thought I might be wrong, so I tried to switch off the disk caching, and it had no effect - the picture remained the same.

    So, I decided to find out who is using all my RAM, and I used top for that. But, apparently, it reports that no process is using my RAM. The only notable process in my top is MySQL, but it is using 0.1% of RAM and 400 MB of swap. Same picture when I try to run other services or applications - everything goes to swap, and top shows that MEM is not used (0.1% maximum for any process).

        top - 15:09:00 up 2:09, 2 users, load average: 0.02, 0.16, 0.11
        Tasks: 112 total, 1 running, 111 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 4046868k total, 4001368k used, 45500k free, 748k buffers
        Swap: 16777208k total, 68840k used, 16708368k free, 16632k cached

         PID  USER   PR  NI  VIRT   RES   SHR  S  %CPU  %MEM  TIME+    SWAP  COMMAND
        3214  ntp    15   0  23412  5044  3916 S  0.0   0.1   0:00.00  17m   ntpd
        2319  root    5 -10  12648  4460  3184 S  0.0   0.1   0:00.00  8188  iscsid
        2168  root   RT   0  22120  3692  2848 S  0.0   0.1   0:00.00  17m   multipathd
        5113  mysql  18   0  474m   2356   856  S  0.0   0.1   0:00.11  472m  mysqld
        4106  root   34  19  251m   1944  1360 S  0.0   0.0   0:00.11  249m  yum-updatesd
        4109  root   15   0  90152  1904  1772 S  0.0   0.0   0:00.18  86m   sshd
        5175  root   15   0  90156  1896  1772 S  0.0   0.0   0:00.02  86m   sshd

    Restarting doesn't help and, by the way, is very slow, which I wouldn't normally expect on this machine (4 cores, 4 GB RAM, RAID1). So I'm pretty sure it is not the disk cache that is using the RAM, because normally the cache would shrink to let other processes use RAM rather than pushing them to swap. So, finally, the question: does anyone have any ideas how to find out what process is actually using the memory so heavily?
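
    When top shows no process owning the memory, the usual suspect is the kernel itself (slab cache, page tables, huge pages, or a leaking driver), which per-process tools never attribute to anyone. A small sketch that compares /proc/meminfo with the sum of per-process resident sets, to show how much RAM is unaccounted for:

        import os

        def meminfo():
            info = {}
            with open('/proc/meminfo') as f:
                for line in f:
                    key, value = line.split(':', 1)
                    info[key] = int(value.split()[0])  # kB
            return info

        def total_process_rss_kb():
            total = 0
            for pid in filter(str.isdigit, os.listdir('/proc')):
                try:
                    with open('/proc/%s/status' % pid) as f:
                        for line in f:
                            if line.startswith('VmRSS:'):
                                total += int(line.split()[1])
                                break
                except IOError:  # process exited while we were reading
                    pass
            return total

        mi = meminfo()
        rss = total_process_rss_kb()
        print('MemTotal:      %8d kB' % mi['MemTotal'])
        print('MemFree:       %8d kB' % mi['MemFree'])
        print('Sum of VmRSS:  %8d kB' % rss)
        print('Slab:          %8d kB' % mi.get('Slab', 0))
        print('Unaccounted:   %8d kB' % (mi['MemTotal'] - mi['MemFree'] - rss))

    On CentOS 5, slabtop (or grep Slab /proc/meminfo) shows the same thing: a huge Slab value, or a huge unaccounted remainder, points at kernel memory rather than any userland process.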

    Read the article

  • How would I recognize the "spoon-feeding problem" on a dynamic webapp server?

    - by Don Spaulding
    The "spoon-feeding problem", as it was recently explained to me, happens when connections to your application server are tied up feeding data across slow network connections to your clients. This makes sense to me and now I understand the importance of putting a highly-concurrent proxy in front of my app servers. My question is, how did the first person to recognize this problem figure it out? What *nix tools and troubleshooting techniques would help me to recognize this problem if I hadn't had it explained to me?

    Read the article

  • How to optimally configure memcached on a 16-core, 144 GB RAM server?

    - by Ivko Maksimovic
    Memcached is the only important app running on the server:

    - Server has 16 cores and 144 GB RAM
    - Memcached is given 135 GB
    - Memcached runs with 32 threads
    - Gigabit network; tests show at least 300 Mbit/s available on the network port
    - 600 connections
    - 3000 requests per second
    - Say that memcached (memory) usage is at 50% - it's definitely not full

    As we increase the number of requests towards the server, requests slow down (from 8 ms to 100 ms per request) but server load remains 0.00. We suspect this can be solved by adjusting the configuration, but we don't understand many of the configuration parameters (besides, maybe, the number of threads). Any ideas?
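
    Since load stays at 0.00 while latency climbs, the delay looks like queueing somewhere other than the CPU. One way to characterize it is to measure latency against client concurrency; a sketch using the pymemcache client (host, key size and request counts are illustrative):

        import time
        import threading
        from pymemcache.client.base import Client

        HOST = ('127.0.0.1', 11211)  # assumption: replace with your server
        REQUESTS_PER_THREAD = 1000

        def worker(latencies):
            client = Client(HOST)
            client.set('bench_key', b'x' * 100)
            for _ in range(REQUESTS_PER_THREAD):
                t0 = time.time()
                client.get('bench_key')
                latencies.append(time.time() - t0)
            client.close()

        for concurrency in (1, 8, 32, 64):
            latencies = []
            threads = [threading.Thread(target=worker, args=(latencies,))
                       for _ in range(concurrency)]
            for t in threads: t.start()
            for t in threads: t.join()
            latencies.sort()
            p99 = latencies[int(len(latencies) * 0.99)]
            print('%3d clients: avg %.2f ms, p99 %.2f ms'
                  % (concurrency, 1000 * sum(latencies) / len(latencies),
                     1000 * p99))

    If p99 climbs with concurrency while the server sits idle, the wait is queueing rather than processing - worth checking client connection churn (a reconnect per request is expensive), NIC interrupt affinity, and the listen backlog, none of which show up in load average.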

    Read the article

  • MicrosoftOnline Migration - Why do I have to wait several minutes before I can click "Finish"?

    - by Giffyguy
    I'm using the MicrosoftOnline Internet E-Mail Mailbox Migration Wizard. I'm moving my email from several GMail accounts to my Microsoft Exchange Online mailboxes. Every time I migrate all or part of a GMail mailbox, I have to wait about five minutes after it completes migration before the Finish button becomes available. What is going on during this time? Is it something I am doing wrong, or is the system just slow?

    Read the article

  • SSD performance

    - by Tom
    I recently upgraded to a Kingston HyperX 120 GB SSD. When I run CrystalDiskMark my scores look really slow. My motherboard (a Gigabyte socket 775 board) does not have an option for AHCI in the BIOS; I'm wondering if that's an issue. The scores were (MB/s):

        Seq:      read 233   write 176.8
        512K:     read 224   write 175.8
        4K:       read 25    write 80
        4K QD32:  read 23    write 102

    This drive is rated for over 500 MB/s. Any help or input would be greatly appreciated.
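
    Without AHCI the drive typically runs in IDE-compatibility mode, which disables NCQ and caps throughput (the near-identical 4K and 4K QD32 reads above are consistent with that). On Windows 7 you can at least check whether the AHCI driver is enabled; a minimal sketch reading the Windows 7-era msahci service key (later Windows versions use storahci instead):

        try:
            import winreg
        except ImportError:
            import _winreg as winreg  # Python 2

        # Start = 0 means "load at boot"; 3 or 4 mean the controller is
        # almost certainly running in IDE mode, which throttles the SSD.
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                             r'SYSTEM\CurrentControlSet\Services\msahci')
        start, _ = winreg.QueryValueEx(key, 'Start')
        winreg.CloseKey(key)
        print('msahci Start = %d (%s)'
              % (start, 'AHCI driver active at boot' if start == 0
                        else 'likely IDE mode - AHCI driver not loaded'))

    Also note that a SATA-2 (3 Gbit/s) port tops out around 250-280 MB/s regardless, so a socket-775-era board may never reach the drive's rated 500 MB/s even with AHCI enabled.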

    Read the article

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client for whom I've built a website which essentially allows users to view their web cameras remotely. The current flow of data is as follows:

    - User opens page to view web camera image.
    - JavaScript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    - An FTP connection is enabled for the camera's FTP user.
    - Web camera opens an FTP connection to the server.
    - Web camera begins taking photos.
    - Web camera sends each photo to the FTP server.

    On image URL request:

    - Server reads the latest image for that camera (uploaded via FTP) from the hard drive.
    - Server deletes any older images from the server.

    This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach.

    My original plan was that, instead of reading the files from the local disk, the web servers would open an FTP connection to the server holding the images and read the latest images directly from there, meaning we could scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly because PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight for reading from the hard drive.

    The firmware provider for the cameras states they're able to build an HTTP client which, instead of uploading the image via FTP, could POST the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack: the web camera POSTs its latest image to Nginx/PHP, and the latest image for that camera is stored in Redis. The clients can then pull the latest image from Redis, which should be extremely quick as the images will always be in memory. The data flow would then become:

    - User opens page to view web camera image.
    - JavaScript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    - Camera is sent an HTTP request to start posting images to a provided URL.
    - Web camera begins taking photos.
    - Web camera sends POST requests to the server as fast as it can.

    On image URL request:

    - Server reads the latest image from Redis.
    - Server tells Redis to delete the older image.

    My questions are (see the sketch below):

    - Are there any greater overheads to transferring images via HTTP instead of FTP?
    - Is there a simple way to calculate how many cameras we could potentially have streaming at once?
    - Is there any way to prevent potentially DoS'ing our own servers with web camera requests?
    - Is Redis a good solution to this problem?
    - Should I abandon the PHP/Nginx combination and go for something else?
    - Is this proposed solution actually any good?
    - Will adding HTTPS to the mix make posting the image too slow?

    Thanks in advance, Alan
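
    A minimal sketch of the proposed flow, in Python rather than PHP for brevity: cameras POST JPEG frames, browsers GET the latest one straight out of Redis. It assumes Flask and redis-py with Redis on localhost; all names and routes are illustrative:

        from flask import Flask, request, Response, abort
        import redis

        app = Flask(__name__)
        r = redis.StrictRedis(host='localhost', port=6379)
        FRAME_TTL = 10  # seconds; a camera that goes quiet simply ages out

        @app.route('/camera/<cam_id>/frame', methods=['POST'])
        def push_frame(cam_id):
            # Overwrites the previous frame, so nothing needs deleting.
            r.setex('camera:%s:latest' % cam_id, FRAME_TTL, request.data)
            return 'ok'

        @app.route('/camera/<cam_id>/frame', methods=['GET'])
        def latest_frame(cam_id):
            frame = r.get('camera:%s:latest' % cam_id)
            if frame is None:
                abort(404)  # camera offline or frame expired
            return Response(frame, mimetype='image/jpeg')

        if __name__ == '__main__':
            app.run(port=8080)

    Using one per-camera key with a TTL (SETEX) sidesteps the delete step entirely. For the "how many cameras" question, each frame is one Redis SET plus one GET per viewer per second, and Redis comfortably handles tens of thousands of such small operations per second on a single core, so the web tier is likely to saturate first.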

    Read the article

  • Server-to-server replication, CPU and 32K/corrupt docs

    - by nick wall
    Summary: if a database contains a doc with the 32K issue, or a corrupt doc, server-to-server replication causes a marked increase in CPU in the nserver.exe task, which effectively causes our server(s) to slow right down.

    We have a 5-server cluster (1 "hub" and 4 HTTP servers accessed via reverse proxy and SSO for load balancing and redundancy). All are physically located next to each other on the network; they don't have dedicated network ports for cluster or replication traffic. I realise the IBM recommendation is a dedicated port for clustering. Cluster queues are in tolerance, and under heavy application user load (i.e. the maximum number of documents being created, edited and deleted) the replication times between servers are negligible. Normally, all is well. Of the servers in the cluster, one is considered the "hub" and initiates a PUSH-PULL replication with its cluster mates every 60 minutes, so that the replication load is taken by the hub and not the cluster mates.

    The problem we have: every now and then we get a slow replication from the hub to a cluster mate, sometimes up to 30 minutes. This maxes out the nserver.exe task on the cluster mate, which causes it to respond to HTTP requests very slowly. In the past we have found that a corrupt document in the DB can have this effect, but on those occasions the server log shows the corrupt doc's noteID; we run fixup and all is well. But we are not now seeing any record of corrupt docs. What we have noticed is that if a doc with the 32K issue is present, the same thing can happen. Our only solution in that case is to run fixup mydb.nsf -V, which shows it purging a 32K doc. Luckily we run a reverse proxy, so we can shut HTTP servers down without users noticing - but users certainly notice when a server has the problem!

    Has anyone else seen this occur? I have set up DDM event handlers for many of the replication events, and I have set the replication time-out limit to 5 minutes (the max we usually see under full user load is 0.1 min) to prevent it replicating for 30 minutes as before. This is a temporary workaround. Does anyone know of a DDM event to trap the 32K issue? We could at least then send an alert.

    Regarding the 32K issue: this probably needs another thread, but we are finding it relatively hard to track down the source because the 32K event is fairly rare. Our app is fairly complex, interacting with various external web services with 2-way data transfer. When we do encounter a 32K doc we can't look at its field properties, so we can't work out which field has the issue, which would give us a clue as to which process is the culprit. As above, we run fixup -V. Any help/comments on this would be gratefully received.

    Read the article

  • Xubuntu running slower than Windows, with constant freezing

    - by Joseph
    So I switched my computer to Ubuntu after some issues with Windows, then switched to Xubuntu after I noticed Ubuntu and Unity were painfully slow. I'm running an i5 with 8 GB of RAM, and I still experience at least one full freeze (where I can't do anything and have to pull the plug) per week. REISUB does nothing and never has. I am constantly running VirtualBox because I still need Windows. Any thoughts on preventing this annoying freezing?

    Read the article

  • Varnish: send to client while caching

    - by osmano807
    I want Varnish to download the file into the cache while it is sending it to the client. From what I'm seeing, it first downloads the whole file and only then sends it, which with very large files is slow. (Sorry for my English; I'm using an online translator.)

    Read the article

  • Windows 7 freezing after returning from idle.

    - by myname
    If I return to my Windows 7 x64 computer after it has been idle for a couple of minutes, it will respond for about 3 seconds, then everything freezes for around 5-30 seconds, and then the computer starts working normally again. The freezing time varies, and sometimes it even gets stuck forever, requiring a hard reboot. This is very annoying, as you can imagine; it's almost as slow as resuming from hibernation every time you leave it idle long enough for the monitor to turn off.

    Read the article

  • Is there such a thing as a persistent RAM drive?

    - by Linus
    I have a laptop with a LAMP setup. The HDD is slow, which makes my unit tests run slowly. I was wondering whether I could mount the web root and the MySQL database on some kind of RAM disk. From what I have read, RAM disks are non-persistent. Is there any way to create a RAM disk that writes its changes to an area of the HDD when shutting down and re-mounts them on boot?
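
    There's no standard "persistent RAM disk" device that I know of, but the usual recipe is exactly what you describe: a tmpfs mount seeded from disk at boot and flushed back at shutdown (and perhaps periodically from cron, in case of power loss). A sketch of the two halves, with illustrative paths, meant to be wired into your init scripts:

        import subprocess
        import sys

        RAMDISK = '/mnt/ramdisk'          # tmpfs mount point (assumption)
        BACKING = '/var/ramdisk-backing'  # persistent copy on the HDD (assumption)

        def restore():
            """At boot: mount tmpfs and copy the last saved state into it."""
            subprocess.check_call(['mount', '-t', 'tmpfs',
                                   '-o', 'size=2g', 'tmpfs', RAMDISK])
            subprocess.check_call(['rsync', '-a', BACKING + '/', RAMDISK + '/'])

        def save():
            """At shutdown (or from cron, to limit loss on power failure)."""
            subprocess.check_call(['rsync', '-a', '--delete',
                                   RAMDISK + '/', BACKING + '/'])

        if __name__ == '__main__':
            restore() if sys.argv[1:] == ['restore'] else save()

    Anything written between the last flush and a crash is lost, so for MySQL it's safest to keep binary logs on the real disk or treat the RAM-disk copy as disposable test data.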

    Read the article

  • How to make sure all mails using Postfix?

    - by Ngompol2
    I'm using Ubuntu 10.10 32-bit. This is a new server with nginx, php-fpm and PHP 5.3, and I will be installing Postfix. Currently the server can send mail (perhaps through sendmail), but it is very slow - slow enough to hit the PHP timeout. To install, I will run:

        sudo apt-get install php-pear
        sudo pear install mail
        sudo pear install Net_SMTP
        sudo pear install Auth_SASL
        sudo pear install mail_mime
        sudo apt-get install postfix

    But after Postfix is installed, how do I make sure all mail goes through Postfix?
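
    PHP's mail() shells out to the sendmail binary by default; since you're installing PEAR's Mail with Net_SMTP, the usual approach is to point its 'smtp' backend at localhost so every message is submitted to Postfix on port 25. A quick Python sketch to verify that the MTA answering locally really is Postfix and that it accepts a message (the addresses are placeholders):

        import socket, smtplib

        # Read the greeting banner first: Postfix announces itself as
        # "220 <hostname> ESMTP Postfix".
        s = socket.create_connection(('localhost', 25), timeout=10)
        print(s.recv(1024).decode().strip())
        s.close()

        # Then submit a test message the way PEAR Mail's smtp backend would:
        server = smtplib.SMTP('localhost', 25, timeout=10)
        server.sendmail('test@example.com', ['you@example.com'],
                        'Subject: Postfix test\r\n\r\nDelivered via localhost:25\n')
        server.quit()
        print('message accepted by the local MTA')

    Also note that the Ubuntu postfix package installs its own /usr/sbin/sendmail compatibility binary, so even plain mail() calls end up in Postfix's queue (mailq will show them).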

    Read the article
