Search Results

Search found 7473 results on 299 pages for 'usage statistics'.


  • Flash stream makes my Internet slow and my CPU spike

    - by user1225840
    When I try to watch a live Flash stream, my CPU usage goes up to 75% and my Internet speed drops. If I run a speed test before the stream, I get around 40/10 Mbps; during the stream it drops to 0.1-0.5 Mbps. The stream is laggy and I can only watch one or two seconds at a time: start/stop/start/stop. I have cleared my history, cache, cookies, temp files, and so on. I have scanned for malware and removed what was found. I have updated my drivers, reinstalled Flash and tried everything else I can think of, but it remains slow. I had this problem once before and it simply started working normally again from one day to the next. Could it be a hardware problem?

    Read the article

  • central apache log analysis of many hosts

    - by Jason Antman
    We have 30+ apache httpd servers, and are looking to perform analysis on the logs both for historical trending and near "real time" monitoring/alerting. I'm mainly interested in things like error rates (4xx/5xx), response time, overall request rate, etc., but it would also be very useful to pull out more compute-intensive statistics like unique client IPs and user agents per unit of time. I'm leaning towards building this as a centralized collector/server/storage, and am also considering the possibility of storing non-apache logs (i.e. general syslog, firewall logs, etc.) in the same system. Obviously a large part of this will probably have to be custom (at least the connection between pieces and the parsing/analysis we do), but I haven't been able to find much information on people who have done stuff like this, at least at shops smaller than Google/Facebook/etc. who can throw their log data into a hundred-node compute cluster and run Map/Reduce on it. The main things I'm looking for are:

    - All open source
    - Some way of collecting logs from apache machines that isn't too resource-intensive, and transports them relatively quickly over the network
    - Some way of storing them (NoSQL? key-value store?) on the backend, for a given amount of time (and then rolling them up into historical averages)
    - In the middle of this, a way of graphing in near-real-time (probably also with some statistical analysis on it) and hopefully alerting off of those graphs

    Any suggestions/pointers/ideas, to either "products"/projects or descriptions of how other people do this would be greatly helpful. Unfortunately, we're not exactly a new-age-y devops shop: lots of old stuff, homogeneous infrastructure, and strained boxes.
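
    For the near-real-time error-rate piece, a minimal sketch (assuming the default combined log format, and assuming the access logs already reach the collector somehow, e.g. via rsyslog) that counts requests and 4xx/5xx responses per minute with awk:

      # per-minute request totals and 4xx/5xx counts from a combined-format access log
      # field 4 holds "[dd/Mon/yyyy:HH:MM:SS", field 9 holds the HTTP status code
      awk '{
          minute = substr($4, 2, 17)            # dd/Mon/yyyy:HH:MM
          total[minute]++
          if ($9 ~ /^[45]/) errors[minute]++
      } END {
          for (m in total)
              printf "%s total=%d errors=%d\n", m, total[m], errors[m]
      }' /var/log/apache2/access.log | sort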

    Read the article

  • What is a safe way to dispose of personal info on an old laptop and what to do with said laptop?

    - by MikeN
    I have an old laptop someone gave me that only has 64 MB of RAM and runs Windows XP. I wanted to wipe the drive clean by installing Ubuntu Desktop, both to remove any shred of personal information on it and to make it useful to someone else. But the Ubuntu installer keeps failing because there is not enough RAM. Is there another version of Linux that would easily install on a system with 64 MB of RAM? Second part of the question: what do I do with this old laptop? It doesn't have a battery anymore and has to be plugged into the wall to run. Assuming I can install a good Linux distro on it, who do I give it to? The Salvation Army? I'm looking to just have it be useful to someone or some organization, for spare parts or some basic computer usage.
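
    If wiping the personal data matters more than reusing the machine, a lightweight live CD (or pulling the drive and attaching it to another PC) plus a single overwrite pass is enough for most purposes. A minimal sketch, assuming the laptop's disk shows up as /dev/sda (confirm with fdisk -l first):

      # confirm which device is the internal drive before touching anything
      fdisk -l

      # one pass of zeros over the whole disk
      dd if=/dev/zero of=/dev/sda bs=1M

      # or several random passes, if shred is available on the live system
      shred -v -n 3 /dev/sda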

    Read the article

  • What software works well for viewing massive TIFF images on Windows 7?

    - by nhinkle
    Today I saw an article about a half-gigabyte, 24,000-pixel-square high-res composite image of the moon. (This is a much smaller version of the image.) I find astronomy interesting, so I thought I'd download it and take a look. With 4 GB of RAM and an i5 processor, I figured my computer could handle it. Unfortunately, the built-in Windows Picture Viewer didn't do such a great job. While it opened the file without a problem, zooming in was ineffective: the zoomed-out image loaded, but zooming in just showed a scaled-up version of it, not any detail. Closing the picture viewer also took a very long time, and the whole process used up much more RAM than the 500 MB of the picture (usage went from 1.3 GB to 3.8 GB). What other software would work better for this? I would prefer something that is free and fairly simple. I don't really want to use an editor (like Photoshop or GIMP), just a nice lightweight viewer. Any suggestions?

    Read the article

  • Home movie video browser

    - by Jim Hunziker
    I have a bunch of home movies that don't have useful filenames because they came straight off the camera. (I'm using Vista 64, by the way.) Picasa is pretty good for browsing through them and watching them, but it doesn't use my video card for rendering the videos. My CPU gets pegged at max usage, and full screen barely works. Windows Media Player or Quicktime works fine. Is there another application (like Picasa) that can be used for browsing through movies that both uses my video card and shows thumbnails of all the movies in my collection? I'd rather have something nicer than Windows Explorer. (The movies are h.264 AAC 1280x720.)

    Read the article

  • Linux Software Raid runs checkarray on the First Sunday of the Month? Why?

    - by mgjk
    It looks like Debian has a default to run checkarray on the first Sunday of the month. This causes massive performance problems and heavy disk usage for 12 hours on my 2 TB mirror. Doing this "just in case" seems bizarre to me. Discovering data out of sync between the two disks, without a quorum, would be a failure anyway: this massive check could only tell me that I have an unrecoverable drive failure and corrupt data. Which is nice, but not all that helpful. Is it necessary? Given that I have no disk errors and no reason to believe my disks have failed, why is this check necessary? Should I take it out of my cron?

      /etc/cron.d# tail -1 /etc/cron.d/mdadm
      57 0 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet

    Thanks for any insight,
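
    For what it's worth, Debian's mdadm package exposes a switch for this, so the cron entry itself doesn't have to be hacked; a sketch, assuming a stock /etc/default/mdadm (if the AUTOCHECK variable isn't present in that file, commenting out the line in /etc/cron.d/mdadm has the same effect):

      # turn off the monthly redundancy check entirely
      sed -i 's/^AUTOCHECK=true/AUTOCHECK=false/' /etc/default/mdadm

      # or keep the check but move it to a quieter window by editing /etc/cron.d/mdadm,
      # e.g. 03:00 on the first Sunday instead of 00:57:
      # 0 3 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet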

    Read the article

  • sysbench memory test on ec2 small instance

    - by caribio
    I'm seeing a problem with the sysbench memory test (the default version that's compiled in). This is on Ubuntu Maverick, sysbench installed via apt-get install sysbench. Running the same thing on Ubuntu @ Rackspace worked just as expected. While the CPU and I/O tests worked fine on EC2 servers, the memory test just runs without doing anything (notice the 0M in the test results). The instance used was the publicly available 'stock' Ubuntu image with no changes to it:

      ./ec2-run-instances ami-ccf405a5 --instance-type m1.small --region us-east-1 --key mykey

    Supplying more arguments (such as: --memory-block-size=1K --memory-total-size=102400M) didn't help. What am I doing wrong? Thanks.

      sysbench --num-threads=4 --test=memory run
      sysbench 0.4.12:  multi-threaded system evaluation benchmark

      Running the test with following options:
      Number of threads: 4

      Doing memory operations speed test
      Memory block size: 1K

      Memory transfer size: 0M

      Memory operations type: write
      Memory scope type: global
      Threads started!
      Done.

      Operations performed: 0 ( 0.00 ops/sec)

      0.00 MB transferred (0.00 MB/sec)

      Test execution summary:
          total time:                              0.0003s
          total number of events:                  0
          total time taken by event execution:     0.0000
          per-request statistics:
               min:                            18446744073709.55ms
               avg:                                  0.00ms
               max:                                  0.00ms

      Threads fairness:
          events (avg/stddev):           0.0000/0.00
          execution time (avg/stddev):   0.0000/0.00

    Read the article

  • Is there a free tool/package that can monitor web traffic and display URLs accessed? [closed]

    - by Anthony
    I couldn't find a similar question, but then maybe I am searching for the wrong terms. A few years ago I used a router-like device, I'm pretty sure it was a SonicWall, that did this on a client's site. Basically all traffic would be routed through this device, and it allowed the manager/administrator to inspect the web usage of the workers, determine how often certain resources were accessed, and block them if necessary (much like a content filter). It showed reports based on the domain names reached, e.g. Facebook.com, Bebo.com and so on. It also displayed the usual IP traffic information; it was a UTM as well. I have tried Endian Firewall, with its NTOP install, but I don't think that will show the URLs browsed. Maybe I just haven't found it in NTOP yet? I need this to troubleshoot connection and traffic issues at my home, with about twenty devices/users, so I didn't want to buy a dedicated solution; I have spare hardware to use for a community product.
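
    As a stopgap that needs nothing but the Linux box already routing the traffic, requested hostnames can be pulled straight off the wire; a rough sketch, assuming the LAN-facing interface is eth0 and the traffic of interest is plain HTTP on port 80 (HTTPS will only show up by destination IP, not by URL):

      # log HTTP Host: headers as they pass through the router
      tcpdump -i eth0 -s 0 -A 'tcp port 80' 2>/dev/null \
        | grep --line-buffered -o 'Host: .*' \
        | tee -a /var/log/visited-hosts.log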

    Read the article

  • How much space do NTFS hardlinks/symlinks occupy?

    - by Felix Dombek
    Well, I guess it must be something proportional to the original filename plus the new filename for symlinks, and only the new filename for hardlinks, but how does this affect the disk space exactly? I just made a folder with about a hundred thousand symbolic links in it, and the folder still reported 0 bytes usage. I may be mistaken, but I even think the free capacity of the drive remained the same. Then I permanently deleted the folder and the sizes still stayed the same. Could I fill up a hard disk just with symlinks? Or does NTFS have limitations in that no more than x symlinks are allowed on one drive/in one folder, so the capacity of the drive cannot be reached?

    Read the article

  • NTFS write speed really slow (<15MB/s)

    - by Zulakis
    I got a new Seagate 4 TB hard drive and formatted it with NTFS using parted:

      parted /dev/sda
      > mklabel gpt
      > mkpart pri 1 -1
      mkfs.ntfs /dev/sda1

    When copying files or testing write speed with dd, the maximum write speed I can get is about 12 MB/s. The hard drive should be capable of at least 100 MB/s. top shows high CPU usage for the mount.ntfs process. The system has an AMD dual-core CPU. This is the output of parted /dev/sda unit s print:

      Model: ATA ST4000DM000-1F21 (scsi)
      Disk /dev/sda: 7814037168s
      Sector size (logical/physical): 512B/4096B
      Partition Table: gpt

      Number  Start  End          Size         File system  Name  Flags
       1      2048s  7814035455s  7814033408s               pri

    The kernel in use is 3.5.0-23-generic. The ntfs-3g versions I tried are ntfs-3g 2012.1.15AR.1 (the Ubuntu 12.04 default) and the newest version, ntfs-3g 2013.1.13AR.2. When formatted with ext4 I get good write speeds of about 140 MB/s. How can I fix the write speed?
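
    One thing worth trying before blaming the drive: ntfs-3g pushes small writes through FUSE by default, and its big_writes mount option (available in the ntfs-3g releases mentioned above, as far as I know) usually brings throughput much closer to the disk's native speed. A sketch, assuming the filesystem is mounted at /mnt/data:

      # remount with big_writes so ntfs-3g passes larger blocks through FUSE
      umount /dev/sda1
      mount -t ntfs-3g -o big_writes /dev/sda1 /mnt/data

      # re-test; conv=fdatasync makes dd wait for the data to reach the disk
      dd if=/dev/zero of=/mnt/data/testfile bs=1M count=2048 conv=fdatasync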

    Read the article

  • Not able to access Silverlight.net and ONLY Silverlight.net - All other domains work!

    - by Sootah
    Alrighty folks, I have an extremely odd problem. I am able to surf the web fine with one odd (and really annoying at the moment) exception: Microsoft's Silverlight.net. Every other site that I go to works just fine. This is quite frustrating because I'm in the middle of programming a web app in Silverlight 4.0, and whenever I do a search for any code examples, tutorials, or whatnot, at least 50% of the results are hosted on the silverlight.net forums. The error message that I get is:

      Oops! Google Chrome could not find www.silverlight.net

    It doesn't work in my other browsers either (both IE and Firefox). What's odd is that while the error message would lead me to assume it's a DNS error, I can ping the URL just fine:

      C:\Users\The Doot>ping silverlight.net

      Reply from 206.72.125.201: bytes=32 time=106ms TTL=106
      Reply from 206.72.125.201: bytes=32 time=106ms TTL=106
      Reply from 206.72.125.201: bytes=32 time=106ms TTL=106
      Reply from 206.72.125.201: bytes=32 time=106ms TTL=106

      Ping statistics for 206.72.125.201:
          Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
      Approximate round trip times in milli-seconds:
          Minimum = 103ms, Maximum = 110ms, Average = 106ms

    I've checked my HOSTS file, and there's nothing that refers to ANY Microsoft URL in there. What could be causing this!?? More importantly, how do I fix it? Just for kicks, I've even included the results of a traceroute here for your enjoyment. OS: Windows 7 Ultimate. Thanks in advance! -Sootah

    Read the article

  • SSH reverse tunnel to monitor and manage remote devices

    - by acid_crucifix
    I have a distributed set of devices running Ubuntu 12.04 that I am distributing to clients. I would like to manage them remotely. They may not have fixed IPs and will potentially be behind firewalls. What I am planning to do is have the devices (permanently connected to the net) poll a request URL and, based on the response, open a reverse tunnel to my server, so that I can access them via that tunnel. Most of what I read about reverse tunnels over SSH covers single-use cases, and very little covers heavy production usage. Is there some reason for this? Security issues, or stability? Any help would be much obliged.
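
    For the tunnel itself, a minimal sketch of what each device could run once the poll tells it to connect (the host name, ports and the use of autossh are illustrative assumptions, not a finished design):

      # on the device: expose its local sshd as port 20022 on the management server;
      # autossh restarts the tunnel if it drops, the ServerAlive* options detect dead links
      autossh -M 0 -N \
        -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
        -R 20022:localhost:22 tunneluser@mgmt.example.com

      # on the management server: reach that device through the forwarded port
      ssh -p 20022 deviceuser@localhost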

    Read the article

  • Deployment from OVA format

    - by Manvendra Bele
    I am deploying a VM from an OVA package. The size of the OVA is 57 GB. Current free space on my datastore is 388 GB. At the "Disk Format" step it shows me in red that the disk size required is 1 TB, so I cannot select thick provisioning. Therefore, I selected thin provisioning. With thin provisioning I am shown an estimated disk usage of 112 GB, which is less than the free space available. But even after selecting thin provisioning, the deployment throws an error that it cannot create the disk because the size of the disk is larger than the maximum specified limit. My block size is 1 MB. Pasting my exact error here:

      Failed to deploy OVF package: File [datastore1] IMS Tester 1/IMS Tester1_2.vmdk is larger than maximum size supported by datastore 'datastore1

    Read the article

  • Using VLC to Unicast High Definition Webcam over local gigabit LAN with low/zero delay

    - by Robin Day
    We're setting up a webcam "window" between two offices in the same building. The two PCs are connected to the same gigabit switch. We're using VLC to stream the webcam over HTTP using the following commands:

      vlc dshow:// :dshow-caching="0" :dshow-size="640x480" :sout=#transcode{vcodec=h264,vb=0,scale=0}:http{mux=ffmpeg{mux=flv},dst=:8080/} :no-sout-rtp-sap :no-sout-standard-sap :ttl=1 :sout-keep

      vlc http://192.168.0.1:8080 :http-caching="0"

    Even with the caching set to zero, the delay in the image is a good 2-3 seconds. The CPU usage of each PC is also maxed out. I'm guessing it's the transcoding that's causing much of the delay. Can anyone give me some changes to these command lines that will reduce the transcoding load, send the webcam over a different protocol, or anything else that will reduce the delay of the cameras? Bandwidth is not an issue at all, as the PCs can be connected to a dedicated switch/VLAN if required.
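
    Much of that delay is likely buffering in the encoder and the receiving player rather than the network. A hedged variant of the same pipeline (assuming a VLC build whose transcode module accepts x264 options via venc=x264{...}) that trades compression efficiency for latency:

      # sender: low-latency x264 settings, still streamed over HTTP
      vlc dshow:// :dshow-caching=0 :dshow-size="640x480" \
        ":sout=#transcode{vcodec=h264,venc=x264{preset=ultrafast,tune=zerolatency,keyint=30},vb=800}:http{mux=ffmpeg{mux=flv},dst=:8080/}" \
        :no-sout-rtp-sap :no-sout-standard-sap :sout-keep

      # receiver: drop the input buffer as low as it will go
      vlc http://192.168.0.1:8080 :network-caching=0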

    Read the article

  • Force Firefox 14 to free memory when opening/closing lots of popups

    - by aknghiem
    I'm currently trying to run some tests on a web application using Selenium IDE with Firefox 14. The tests mainly consist of loading a page containing thousands of links and clicking on each of those links. Of course, each time a popup shows, I tell Selenium to close it and proceed with the remaining links. However, it seems that even if I close the popups, Firefox is not freeing memory. Usually, I end up with Firefox crashing after opening about 1500 popups (around 2.5 GB of memory usage). Is there any way to force the browser to free memory? Maybe something I should set in about:config? Or is this a flaw in Selenium? Thanks.

    Read the article

  • Web Farm Application deployment best practices

    - by rauts
    Hi all, we have a web farm which hosts multiple ASP.NET applications. We typically have 4 servers on the farm. The dilemma I am having is about the capacity of the farm. Let's say I currently have 200 apps in total. Should I deploy all 200 apps on all 4 servers (i.e. all the servers in the farm are identical), or should I split the applications between 2 sets of servers and create 2 smaller farms, so that I can then manage the applications based on their criticality, usage, etc.? Any best practices in this area would be highly appreciated. Thanks, Rauts

    Read the article

  • Writing to external drive runs out of space prematurely

    - by steve
    I have a USB 2.0, 500 GB HDD. I am writing a bunch of data to it that I previously recovered from the drive. I have formatted the drive as exFAT, since the drive will be used with Windows and OS X. At first, I tried using Windows Explorer to move the files over to the drive (about 160 GiB worth), but after copying about 30% of the data (according to TeraCopy), Windows Explorer reported the drive as out of space and completely full. WinDirStat only shows the size of the data that has actually been copied over. Where did this extra space go? Why is there a 300+ GiB discrepancy between the usage reported by the files and what Explorer sees?

    Read the article

  • Apache crashing at random intervals. Cannot find a reason in the log files

    - by Nick Downton
    We are having an issue with a VPS running Plesk 9.5 on Ubuntu 8.04. At seemingly random intervals, Apache will disappear and needs to be started manually. I have checked the Apache error log, /var/log/messages, and the individual virtual hosts' Apache error files, and cannot find anything that coincides with the time of the failure. dmesg is empty, which is a bit odd. We have also had the psa service go down for no apparent reason while Apache stayed up. I'm at a loss to diagnose this, really, because all the log files I can find do not point to any issues. Are there any others I can look at? Memory usage sits at about 55% (out of 400 MB) and it isn't a particularly high-traffic server. Any pointers as to where else I can find out what is going on would be very much appreciated. Nick

    Read the article

  • Outlook Express hangs when selecting multiple emails

    - by Javier Badia
    I'm using Outlook Express (6, I think) on Windows XP. Lately, it has been hanging. Sometimes this happens at startup (right after the main window with all the panes loads) and sometimes when selecting many emails (sometimes as low as three emails at once, sometimes at ten, it's not a fixed number). When this happens, msimn.exe starts to use 98-100% CPU and RAM usage shoots up very quickly, reaching hundreds of megabytes in half a minute. The message pane goes gray instead of showing the message contents. As I said, this sometimes happens right after the main window loads, sometimes when selecting many emails at once. I tried backing up everything, deleting the identities, creating a new one and restoring, but this still happens.

    Read the article

  • Strange network connectivity problem

    - by Marc
    Here is my network connectivity:

      cable modem
       |
       |(WAN)
      wrt54g (default gateway, 192.168.1.1) -- earth
       |(LAN)
       |
      Simple Switch1
       |    |    |    |    |
      SimpleSwitch2- neptune
       |    |    |    |
      mars mercury
       |
       |- venus
       |
       |- laptop
       |
      saturn (Windows AD DC)

    simpleSwitch2 was hanging off the wrt54g. I moved it to SW1 during troubleshooting. Nothing described below was any different. earth is connected via wireless to the wrt54g. I can ping from laptop to mars, neptune & mercury. I can ping from earth to venus, saturn & laptop. However, pinging mars, mercury or neptune from earth gives the following result:

      Pinging mars.XXX.XXX [192.168.1.105] with 32 bytes of data:

      Reply from 192.168.1.122: Destination host unreachable.
      Reply from 192.168.1.122: Destination host unreachable.
      Reply from 192.168.1.122: Destination host unreachable.
      Reply from 192.168.1.122: Destination host unreachable.

      Ping statistics for 192.168.1.105:
          Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),

    .122 is the address of the machine from which I am pinging. earth is a Vista machine. Windows firewall is off. saturn is my DNS & DHCP server. Can anyone give me any ideas what the h*ll is going on? Clearly the topology is a factor. And yes, I am a space geek.

    Read the article

  • How to select the account on the login screen of Windows 7 by starting to type the name?

    - by akira
    When Mac OS boots up and the user is prompted to select the account he or she wants to log into, the user can either click the name/icon of the account with the mouse or just start typing the name of the account. I want to do the same at the login screen of Windows 7: the login screen pops up, I start to type my account name, I select the account with Enter, and then I type the password and press Enter again. No usage of the mouse involved. (I am aware of tab-cycling and the hard-to-follow, almost-invisible marker of where the focus is right now.)

    Read the article

  • Can't pipe echo to netcat?

    - by user1641300
    I have the following command:

      echo 'HTTP/1.1 200 OK\r\n' | nc -l -p 8000 -c

    but when I curl localhost:8000 I am not seeing HTTP/1.1 200 .. being printed. I am on Mac OS X with netcat 0.7.1. Any ideas?

      #!/bin/bash

      trap 'my_exit; exit' SIGINT SIGQUIT

      my_exit()
      {
        echo "you hit Ctrl-C/Ctrl-\, now exiting.."
        # cleanup commands here if any
      }

      if test $# -eq 0 ; then
        echo "Usage: $0 PORT"
        echo ""
        exit 1
      fi

      while true
      do
        echo "HTTP/1.1 200 OK\r\n" | nc -l -p ${1} -c
      done

    Testing with:

      curl localhost:8000
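
    One likely culprit is the echo itself: without -e (or printf), the \r\n goes out as literal characters, and an HTTP response needs a blank line after the status line/headers before curl will treat it as complete. A minimal sketch of the loop using printf instead, still assuming a netcat that accepts -l -p together (GNU netcat 0.7.x does; the BSD nc bundled with OS X wants just nc -l 8000):

      #!/bin/bash
      # serve a bare 200 response; printf emits real CRLFs plus the blank line
      # that ends the header block, unlike echoing a literal "\r\n"
      # (-c closes the connection on EOF, as in the original command)
      while true
      do
        printf 'HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n' | nc -l -p "${1:-8000}" -c
      done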

    Read the article

  • Monitoring instantaneous network throughput at one second intervals?

    - by Shaddi
    For a testing setup I have, I need to monitor the throughput through a "router"* at regular intervals of around 5 seconds or less (sub-second intervals would be very nice, but are not required). Ideally, I would be able to generate a file containing both the number of bytes and the number of packets seen during each interval. I will eventually be generating a time series of throughput from this data. On a previous setup using an older version of FreeBSD, there was a tool called "bpfmon" which gave me this information. However, I need to do this under a modern version of Linux (namely, Ubuntu 11.04). I have looked at both iptraf and iftop, but these do not appear to provide the resolution I need, nor do they seem to easily allow scraping the data I want. I understand iptables statistics may be able to give me what I'm after, but the examples I've seen of this seem to rely on repeatedly reading and resetting traffic counters, which seems like it could give inaccurate results, since read/reset is not an atomic operation. I already capture a tcpdump trace of the traffic I'm interested in on the link I want to monitor, so I am open to approaches which simply parse that. I feel like this must be a common problem, though, so I am hoping there is a standard "best practice" tool for accomplishing it. *I say "router" in quotes because I am really talking about a machine with two bridged NICs through which all the traffic I'm interested in passes.
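
    One low-overhead approach that avoids the read/reset race entirely is to sample the kernel's cumulative per-interface counters and take deltas; a minimal sketch reading /proc/net/dev once a second (assuming the bridge interface is br0; substitute whatever interface the traffic actually crosses, and note that this counts received bytes/packets only):

      #!/bin/bash
      # print received bytes/packets per interval as deltas of cumulative counters
      IFACE=${1:-br0}
      INTERVAL=1
      prev_bytes=0; prev_pkts=0
      while true; do
        # strip the leading "iface:" prefix so the first two fields are RX bytes and RX packets,
        # which also copes with kernels that print the byte count flush against the colon
        read -r bytes pkts < <(awk -v i="$IFACE" 'sub("^ *" i ":", "") {print $1, $2}' /proc/net/dev)
        if [ "$prev_bytes" -gt 0 ]; then
          echo "$(date +%s) bytes=$((bytes - prev_bytes)) packets=$((pkts - prev_pkts))"
        fi
        prev_bytes=$bytes; prev_pkts=$pkts
        sleep "$INTERVAL"
      done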

    Read the article

  • Is there a utility to visualise / isolate and watch application calls

    - by MyStream
    Note: I'm not sure what to search for, so guidance on that may be just as valuable as an answer. I'm looking for a way to visually compare the activity of two applications (in this case a webserver with PHP communicating with the system, MySQL, network devices, etc.) such that I can compare the performance at a glance. I know there are tools to generate data dumps from benchmarks for Apache, and some are available for PHP for tracing, which you can dump and analyse, but what I'm looking for is something that can report performance metrics visually from data on calls (what called what, how long did it take, how much memory did it consume, how can that be represented visually in a call stack) and present it graphically, as if it were a topology or layered visual with different elements of system calls occupying different layers. A typical visual may consist of (e.g. using swim diagrams as just one analogy):

      Network (details here relevant to network diagnostics)
        |  ^ back out
        v  |
      Linux (details here related to firewall/routing diagnostics)
        |  ^ back to network
        v  |                ^ back to system
      Apache (details here related to web request)
        |  ^ response to apache
        v  |
      PHP (etc)
      PHP ----------> other accesses to php files/resources -----
        |  ^
        v  |
      MySQL (total time)
      MySQL: each call listed + time + tables hit/record returned

    My aim would be to be able to 'inspect' a request (or a range of requests over a period of time) to see what constituted the activity at that point in time and trace it from beginning to end as a diagnostic tool. Is there any work in this direction? I realise it would be intensive on the server, but the intention is to benchmark and analyse processes against each other for both educational and professional reasons, and a visual aid is a great eye-opener compared to raw statistics or dozens of discrete activity-vs-time graphs. It's hard to show the full cycle. Any pointers welcome. Thanks!

    FROM COMMENTS:
    > XHProf in conjunction with other programs such as Percona Toolkit
    > (percona.com/doc/percona-toolkit/2.0/pt-pmp.html) for MySQL; run apache
    > with httpd -X & (single-threaded debug mode, in the background), then
    > attach with strace -> KCachegrind
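
    Following the pointer in the comments, a rough sketch of the single-process Apache plus strace step (the binary may be apache2 rather than httpd depending on the distro, and the output file name is just an example):

      # run apache in single-process/debug mode, in the background
      httpd -X &
      APACHE_PID=$!

      # follow forks (-f), timestamp each call (-tt), write the trace to a file
      strace -f -tt -o /tmp/apache-trace.txt -p "$APACHE_PID"

      # or get a per-syscall time summary instead of a full trace
      strace -c -p "$APACHE_PID"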

    Read the article
