Search Results

Search found 19055 results on 763 pages for 'high performance'.


  • new xp install, but it moves slow

    - by doug
    Hi there. I just installed Windows XP on an old laptop and applied all the updates I was prompted for. I also installed the latest drivers from the laptop manufacturer's official site. Now, when I use the laptop to talk on Yahoo! Messenger, the sound quality is very bad and I can barely hear what the other person is saying. Before I reinstalled XP, the laptop was working fine. Do you have any tips for me? Which software utilities should I try in order to improve its performance, and which should I install in order to test its performance?

  • Does a bad Internet connection increase bandwidth usage?

    - by Synetech
    My (Rogers) cable connection has been pretty bad recently (channels 3 and 10 are particularly fuzzy; it's analog, not digital cable). Not surprisingly, this has caused my cable modem to drop out and reestablish its connection a couple of times since the trouble started. The poor connection means more corrupted (not necessarily dropped) packets, which forces the TCP/IP stack to retransmit packets more often. Reduced throughput aside, I got to wondering whether this increases actual bandwidth usage. That is, if a high error rate on the line causes packets to be retransmitted: Does this increase a bandwidth monitoring program's numbers? Does the ISP count the retransmitted packets toward the monthly cap? Based on what I remember from my university networking courses and common sense, I have a feeling the answer to both questions is yes, but I cannot reliably measure the first, and have no authoritative answer for the second. I'm wondering if maybe the retransmitted packets are recognized as duplicates and thus not counted somewhere along the line.
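
    For the measurement half, the kernel keeps retransmit counters you can read directly; a minimal sketch on Linux (Windows has the same counters under netstat -s, with slightly different wording):

        # Retransmission counters since boot (exact wording varies by OS)
        netstat -s | grep -i retrans
        # Compare against the total segment counters
        netstat -s | grep -i segments

    Comparing the two over a monitoring interval gives the retransmit overhead. And since the ISP meters raw bytes passing the modem interface, retransmitted packets do count toward a monthly cap.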

  • How do ulimit -n and /proc/sys/fs/file-max differ?

    - by bantic
    I notice that on a new CentOS image I just booted up on EC2, the default ulimit is 1024 open files, but /proc/sys/fs/file-max is set to 761,408, and I'm wondering how these two limits work together. I'm guessing that ulimit -n is a per-user limit on the number of file descriptors, while /proc/sys/fs/file-max is system-wide? If that's the case, say I've logged in twice as the same user: does each login session have a 1024 limit on open files, or is it a limit of 1024 open files combined between the sessions? And is there much performance impact to setting your max file descriptors to a very high number, if your system isn't ever opening very many files?
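
    For what it's worth, ulimit -n is enforced per process (each login shell and its children get their own allowance), while file-max caps handles across the whole system. A quick way to inspect the relevant numbers side by side (a minimal sketch; these paths are standard on Linux):

        # Per-process soft and hard limits on open file descriptors
        ulimit -Sn
        ulimit -Hn

        # System-wide ceiling on file handles
        cat /proc/sys/fs/file-max

        # Current system-wide usage: allocated, unused, max
        cat /proc/sys/fs/file-nr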

  • SSRS Errors "Use Local", even though I am

    - by Corey Coogan
    I am at a loss. I posted this on SO, but think this is probably a better place. I have searched high and low and don't know what to do. I am running SQL Server Web Edition on Server 2008, which only supports local databases. I am trying to connect to localhost, but when I test my connection, I get this error. The feature: "The edition of Reporting Services that you are using requires that you use local SQL Server relational databases for report data sources and the report server database." is not supported in this edition of Reporting Services. The DB was upgraded from SQL Express and when I select @@version, it says it's Web Edition. I've tried rebooting and that seemed to fix it, but only for a little while.

  • Advice for migrating email server

    - by Chris Adams
    Hi there, I'm planning to migrate a Zimbra server with about 200 GB of data from a server hosted in an office into a datacentre, to increase uptime (we've had a couple of outages when our network here started flaking out, and we have people in other countries relying on this server too). However, I'm not sure how best to migrate the data without rendering the connection unusable during office hours, because there's far too much to send overnight over the two-megabit upstream connection we have here. I'm familiar with using tools like nice to stop a long-running process from degrading machine performance. Is there a simple way to throttle a connection during office hours, so the long-running transfer doesn't block the pipe, but then open it up outside office hours to make the most of the bandwidth? I'm aware the alternative here is to simply mail a hard drive to the datacentre, but I'd like to avoid that if I can. We're using CentOS Linux for our servers, in the office and the datacentre, so extra points for an open source Linux answer.
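
    One sketch of the throttled-by-schedule idea, assuming the bulk of the data can be pre-seeded incrementally with rsync (the host name and paths are hypothetical, and you'd still want a final sync with Zimbra stopped before cutting over): rsync's --bwlimit takes a cap in KB/s, --partial lets an interrupted transfer resume where it left off, and cron swaps the cap at the office-hours boundaries.

        # /etc/cron.d/zimbra-seed  (illustrative only)
        # 08:00 weekdays: restart the sync capped at ~100 KB/s so the pipe stays usable
        0 8  * * 1-5  root  pkill -x rsync ; rsync -az --partial --bwlimit=100 /opt/zimbra/ backup@dc.example.com:/opt/zimbra/
        # 18:00 weekdays: restart it uncapped to use the full upstream overnight
        0 18 * * 1-5  root  pkill -x rsync ; rsync -az --partial /opt/zimbra/ backup@dc.example.com:/opt/zimbra/

    Note that pkill -x rsync kills any running rsync, which is fine only if this transfer is the sole rsync on the box.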

  • How to make the jump from consumer support to enterprise support?

    - by Zac Cramer
    I am currently a high-level consumer break/fix technician responsible for about 300-400 repairs a month. I am good at my job, but bored, and I want to move into the enterprise side of my company, dealing with Server 2008 R2 and Exchange and switches and routers that cost more than I make in a month. How do I make this transition? What's the best thing to learn first? Is there a standard trajectory for making this leap from consumer to business? I am employed full time, so going back to school is not a great option, but I have no life, so spending my nights and weekends reading and practicing is totally within my realm. I am basically overwhelmed by the number of things to learn, and looking for any advice you may have on the best way to proceed. PS - I apologize if this is not quite the right forum for this; I know it's not exactly a technical question, but I also know the sorts of people I want to answer it are reading this website.

  • Looking for a good Web Server that is cheap

    - by SoLoGHoST
    I am a Project Manager, and former Lead Developer, for a software portal system that requires forum software to run. I need hosting that is cheap, reliable, and supports the latest PHP (5.2+), MySQL, unlimited e-mail accounts (preferably), cPanel, and multiple subdomains (at least 3). Currently I am paying $34.95 USD/month (approx. $420 USD/year). This is too high for me to keep the site running. I just recently became Project Manager, in charge of finances, and I'm extremely concerned for the future of Dream Portal; at those prices I'm not sure I'll be able to keep it running for long. Can someone recommend a good host that meets all the requirements listed above and is cheaper on a yearly basis? Note: we are currently on a dedicated server with disk space limited to 15000 MB (15 GB), monthly bandwidth of 500000 MB, a 50 e-mail account limit, a 20 subdomain limit, 30 FTP accounts, and 25 SQL databases.

  • Debian is equal to Ubuntu

    - by rkmax
    The title of the question is confusing and does not explain my point well. I've used Ubuntu Server since version 10.04 and never had a problem. Now I have 4 machines with Ubuntu 12.04.1 LTS installed on them, and I've found that under any kind of high load they throw errors and crash constantly; the most common is CPU#X stuck for Ns! Now I wonder whether administering Debian is the same as administering Ubuntu with regard to services, packages, and folder structure. For example, I would like to know whether services are managed in the same manner using invoke-rc.d, and who handles additional security, so that I'm not stumbling around blindly. I've been looking for a comparison chart between Debian 6.0.6 and Ubuntu 12.04 but haven't found anything yet, including the most common hiccups when you install the system.

  • Add "secure" in cookie by httpd server

    - by Abhishek
    How do I configure my httpd server to add the "Secure" flag to its cookies? I tried the approach in the link below, http://blog.modsecurity.org/2008/12/fixing-both-missing-httponly-and-secure-cookie-flags.html but it did not seem to work. I inspected the cookies via Firebug and found that they have "HttpOnly" but not "Secure". I double-checked the configuration and it is the same as mentioned in the link. I also noticed that server response time goes up a bit when doing it through mod_security. Is there a better way to do it? Any ideas or pointers to configurations would be helpful.
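
    If the goal is simply to stamp the flags onto outgoing cookies, mod_headers can do it without involving mod_security; a minimal sketch (assumes mod_headers is loaded, and note that browsers only honour Secure on HTTPS responses):

        # httpd.conf / vhost: append both flags to every Set-Cookie header
        Header edit Set-Cookie ^(.*)$ "$1; HttpOnly; Secure"

    One caveat: this blindly appends the flags even to cookies that already carry them, so verify the result in Firebug just as before.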

  • Knife leaves stray processes on my system

    - by Leons
    I'm seeing stray knife processes on my system. I have an automated Ruby script that runs bundle exec knife bootstrap against various nodes. Most of the time the knife process completes and goes away, but sometimes it stays for days; I'm still noticing it in ps aux days later. I think it's related to the target node being down when knife runs. The Chef server timeout is high, so the action completes eventually when the node comes back up, but I think knife may give up or hang somehow during the wait. Is there something I can do about the stray knife processes? Does knife have timeout settings separate from the Chef server's timeout settings?
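
    As a stopgap, the calling script can bound the whole bootstrap from the outside with coreutils timeout, regardless of what knife itself does; a sketch (the two-hour budget and the node name are arbitrary placeholders):

        # Give the bootstrap 2 hours; escalate to SIGKILL 60s after SIGTERM
        timeout --kill-after=60 2h bundle exec knife bootstrap node1.example.com -x ubuntu --sudo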

  • replace controller, add second controller, or use expander?

    - by longneck
    I have a ProLiant DL380 G6 that I am repurposing as a Hyper-V host for a new, off-site data center that will host our DR services. The server currently has a P410i controller with the 512MB BBWC module. The drives installed are SFF 6G 10K drives. I plan to add the HP 516914-B21 drive cage, which gives me 8 more SFF drives, bringing the total to 16. To get the additional 8 drives connected, I have one of three choices: 1) install a new controller that can support 16 drives; 2) install a second controller; or 3) install a SAS expander, such as the HP 468406-B21 recommended by HP's spec sheet for my server. My question is: how do I know if I'm going to hit a performance ceiling by putting 16 drives on the P410i or using the expander? And if I am, how do I select an appropriate controller? I'm not sure what specifications I should be looking at.

  • why is there so much variance in prices for a 2-bay NAS?

    - by jcollum
    I'm considering buying a 2-bay NAS for media storage, and I'm perplexed by the variety of prices: they go from about $115 to $1200. The only things I could see that differentiated the high-end unit were encryption and dual gigabit Ethernet ports, and I don't understand how that can add up to an $800+ difference. Clearly I should understand this price variance before buying. Newegg link to 2 Bay NAS Should I move this question to Server Fault?

  • Sound card problem, no audio device detected

    - by Paul
    I bought a new sound card because my built-in sound card did not function. When I open YouTube, Media Player, or anything else that can produce sound, my computer hangs, and sometimes when I start my computer it hangs when the Windows XP startup sound plays. Update: My computer has no audio. It says NO AUDIO DEVICE. I already installed the Realtek AC97 and Realtek High Definition Audio drivers, and I also copied stream.dll into the Windows and system32 folders and restarted my computer, but it still says NO AUDIO DEVICE. Please help me. Thanks

  • I have to shard a mysql database. I want to start with 12 shards on 2 machines. What is the best way?

    - by Tim
    All tables are InnoDB. I would rather not use mysqldump, because the shard sizes will be about 200 GB (about 700 million rows), and that would take too long. I was hoping to just stop MySQL for an hour, copy the data files to a new machine, and start back up, but you can't do this with InnoDB, as some data lives in the shared tablespace even when the innodb_file_per_table option is set. This is not a website but a custom application, used by tens of thousands of people right now, so uptime and performance are important. I suppose I could add logic to my server application to allow for gradual rebalancing/moving of a shard. Does anyone have a better idea?
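
    Two physical-copy routes worth weighing: Percona XtraBackup can take a hot copy of a running InnoDB instance, and MySQL 5.6+ adds transportable tablespaces, which let individual .ibd files move between servers. A sketch of the latter, per table (the table name and paths are hypothetical, and each table must have been created with innodb_file_per_table on):

        -- On the source: quiesce the table and write its .cfg metadata file
        FLUSH TABLES shard7.orders FOR EXPORT;
        -- (now copy orders.ibd and orders.cfg out of the shard7 database directory)
        UNLOCK TABLES;

        -- On the destination: create the identical table, then swap the file in
        ALTER TABLE shard7.orders DISCARD TABLESPACE;
        -- (place the copied .ibd/.cfg into the destination's shard7 directory)
        ALTER TABLE shard7.orders IMPORT TABLESPACE;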

  • Ubuntu Linux: Process swap memory and memory usage

    - by David Halter
    My Ubuntu eats more memory than the task manager is showing:

        sudo ps -e --format rss | awk 'BEGIN{c=0} {c+=$1} END{print c/1024}'
        1043.84

        free -m
                     total       used       free     shared    buffers     cached
        Mem:          3860       1878       1982          0         20        679
        -/+ buffers/cache:       1178       2681
        Swap:         2729       1035       1693

    That's strange. Can someone explain this difference? But what is more important: I'd like to know how much memory a process is really using. I don't want the virtual memory size, but rather the resident memory plus the swapped-out portion of a process. I have also tried the 'sz' format param of ps, but its sum is too high (5450 MB), and the 'size' param gives 8323.45 MB. Are there any other options? I really want to use this to determine which programs/processes are eating too much memory (and swap), so I can kill them, because hibernation might not work if the swap partition is too small.
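
    Newer kernels (roughly 2.6.34+) expose VmSwap next to VmRSS in /proc/<pid>/status, which together are exactly resident plus swapped. A minimal sketch that sums the two per process and prints the worst offenders (assumes those fields exist on your kernel):

        # RSS + swap per process, in MB, largest first
        for d in /proc/[0-9]*; do
            awk -v pid="${d#/proc/}" '
                /^Name:/   { name = $2 }
                /^VmRSS:/  { rss  = $2 }
                /^VmSwap:/ { swap = $2 }
                END { if (rss || swap) printf "%8.1f MB  %6s  %s\n", (rss + swap) / 1024, pid, name }
            ' "$d/status" 2>/dev/null
        done | sort -rn | head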

  • Areca 1880ix RAID hangs

    - by Dave
    My Areca RAID controller, an ARC-1880ix-12 (firmware 1.50), hangs under high load. My setup is: Chenbro 3U chassis, Intel S5500BC mainboard, Xeon 5603 CPU, 16GB of RAM, 12 Seagate SAS drives (ST32000645SS; 2 of them as hot spares, 10 as RAID10), and a Mellanox InfiniBand HBA card. This server works as external InfiniBand storage for Xen VMs. When load gets quite heavy, Areca's firmware hangs: it becomes unreachable even through Areca's Ethernet adapter. After power-cycling the server it returns to normal operation. While the Areca is hung I can confirm that it is powered (the Ethernet link is active) and that the InfiniBand HBA works OK. Thanks in advance for any idea or suggestion about where the problem might be!

  • How do ISPs/Colocation Facilities limit bandwidth for Ethernet Drops?

    - by Kyle Brandt
    I have switched providers and have run into some problems with bandwidth limitations. I have more bandwidth than before, but there are performance issues. The router is connected to a 100 Mbit port, but they limit it to arbitrary settings (in software, I imagine). It seems that when I go above the limit, the provider starts to drop packets beyond the limit (this is what they said they do as well). Is it possible the previous provider did something like queuing packets above the limit before dropping them? Is anyone aware not only of what can be done, but of what is typical? Also, is there anything I can do on my Cisco router to help this situation? It would seem I am pretty helpless if the packets are dropped before they reach my interface (the traffic that is high is inbound to my network).
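
    On your own router the main lever is outbound: shape your transmit rate to just under the provider's cap so bursts queue locally instead of being dropped upstream. An illustrative IOS MQC sketch (the interface name and the 90 Mbps figure are placeholders; it cannot fix inbound drops, which happen before your interface):

        policy-map SHAPE-TO-CAP
         class class-default
          ! queue above ~90 Mbps rather than let the provider drop it
          shape average 90000000
        interface FastEthernet0/0
         service-policy output SHAPE-TO-CAP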

  • Static Content Caching in Sticky Session ?

    - by Ravi
    Can we cache static content on servers behind sticky sessions? We use SqlStateServer to store user sessions. Right now we are doing performance tuning on our application, so we decided to cache the static content (images, CSS, JS) so that it loads faster. Is it OK to cache static content in a sticky-session setup? If so, can anyone give me some links where I can read about it? Right now I have the following settings in my web.config file:

        <staticContent>
          <clientCache cacheControlCustom="public" cacheControlMode="UseMaxAge" cacheControlMaxAge="500.00:00:00" />
        </staticContent>

    Is this the right configuration, and will it affect our sticky-session environment? My goal is to cache the static images, CSS, and JS for the application.

  • Dell PowerEdge 6850 Degraded HDD

    - by Matt
    Good morning. We have a Dell PowerEdge 6850 with a degraded drive in the RAID array. I have never had to recover from such an issue, so any help or advice would be welcome. It hasn't affected the server at the operating-system level, but it has slowed down performance. I have a replacement drive in hand, but as this is our main database server I want to proceed with extreme caution. My options as I see them are: 1) hot-swap the degraded drive with the new one, and the data automatically re-syncs and we are back to normal (presumably this depends on the current RAID configuration?); or 2) reading various comments online, I may need to reconfigure the RAID array and rebuild onto the new drive, which screams disaster to me, the main worry being that I wipe other data. Option 1 would of course make my day. Thanks in advance

  • Monitor ESXi hosts with Nagios

    - by Kyle Brandt
    Does anyone recommend any methods for monitoring ESXi 4.1 hosts with Nagios? I have looked into SNMP, but it seems to be in a pretty sorry state: Net-SNMP does not seem to be included, and while there is a built-in SNMP daemon that I set up, the standard MIBs only really offer network interface counters, and the VMware MIBs seem quite useless. Right now I am considering SNMP for the interface speed and trying the plugins listed at http://unimpressed.org/post/96949609/monitoring-esxi-performance-through-nagios . Anyone have a better idea? I would like to monitor the hosts directly, not through something like vCenter.
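
    For the interface counters at least, the stock check_snmp plugin pointed at the host's built-in SNMP daemon works without vCenter; a sketch (the OID index, community string, and host name are placeholders to adjust for your environment):

        define command {
            command_name  check_esxi_ifspeed
            command_line  $USER1$/check_snmp -H $HOSTADDRESS$ -C public -o IF-MIB::ifSpeed.2
        }

        define service {
            use                  generic-service
            host_name            esxi01
            service_description  vmnic0 link speed
            check_command        check_esxi_ifspeed
        }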

  • Disable RAID Controller

    - by B.Mr.W.
    I have some decent HP ProLiant servers that come with the "HP Smart Array P410i Controller" enabled. I am using these boxes to set up a Hadoop cluster, and I know RAID is a no-no for Hadoop: the application itself takes care of data redundancy, so the extra intelligence provided by RAID won't help and might even hurt performance. I tried to disable the controller in the BIOS, and the box couldn't even access the disks afterwards. So I am assuming the controller sits between the disks and the motherboard, and we have to turn it on and configure it to RAID 0 or something like that. I am wondering what I should do to "disable" the RAID functionality so it will fit the Hadoop environment.
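
    The P410i has no true pass-through/JBOD mode, so the usual workaround is exactly what you guessed: one single-drive RAID 0 logical drive per physical disk. A sketch with HP's hpacucli tool (the slot number and drive addresses are examples; list yours first):

        # Show the controller, its slot number, and the physical drive addresses
        hpacucli ctrl all show config

        # Create a single-drive RAID 0 logical drive for each physical disk
        hpacucli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
        hpacucli ctrl slot=0 create type=ld drives=1I:1:2 raid=0
        # ...repeat for each remaining drive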

  • Intel HD 4000 and Nvidia GT 650 working together on laptop

    - by Juan
    My new Windows 7 Acer notebook has an i5 CPU with Intel HD 4000 graphics and an Nvidia GT 650 GPU. Obviously the monitor is plugged into the Intel HD. In the Nvidia control panel I can configure PhysX, but that doesn't help. The Windows system rating shows a high gaming-experience score and an average/low Windows Aero score. What does that mean? Does my laptop use the Nvidia for games/3D apps and the Intel HD 4000 for Aero? Should I disable the Intel HD in the BIOS, and if so, how do I attach the monitor to the Nvidia? Or should I leave everything as it is, because everything is working as it's supposed to? Here is an image capture of some states: http://oi47.tinypic.com/34p0qp4.jpg

  • How do I lower idle cpu usage in ubuntu linux? Gnome or KDE Variants

    - by Jasen
    My question comes from a KDE desktop currently, but it also happens with the GNOME instance. When just sitting there, with only the CPU monitor widget running, no open windows, and no background processes other than the desktop, my CPU is at ~20%. I want to know how to fix this, and possibly get better performance out of the machine. When running the Windows side, the CPU sits at zero, and I generally load new programs about 400 ms faster. With Windows 7 being as slow as it is, this is not acceptable. The widget is only set to check every 500 ms, so I'm almost completely sure it's not the widget. My system is a Gateway NV53: a 2.0 GHz AMD Turion with 4 GB of installed RAM and a 500 GB HD. Both Linux and Windows are 64-bit. Average RAM use on either system is about 1.4 GB for just the OS.
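
    A first step is to find out which processes are actually accumulating the CPU time; a minimal sketch with the standard procps tools:

        # Top 10 processes by current CPU share
        ps -eo pcpu,pid,user,comm --sort=-pcpu | head -n 11

        # One batch-mode snapshot of top, handy for logging
        top -b -n 1 | head -n 20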

  • NTFS write speed really slow (<15MB/s)

    - by Zulakis
    I got a new Seagate 4TB hard drive and formatted it with NTFS using:

        parted /dev/sda
        > mklabel gpt
        > mkpart pri 1 -1
        mkfs.ntfs /dev/sda1

    When copying files or testing write speed with dd, the maximum write speed I can get is about 12 MB/s; the hard drive should be capable of at least 100 MB/s. top shows high CPU usage for the mount.ntfs process. The system has an AMD dual-core CPU. This is the output of parted /dev/sda unit s print:

        Model: ATA ST4000DM000-1F21 (scsi)
        Disk /dev/sda: 7814037168s
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start  End          Size         File system  Name  Flags
         1      2048s  7814035455s  7814033408s               pri

    The kernel is 3.5.0-23-generic. The ntfs-3g versions I tried are ntfs-3g 2012.1.15AR.1 (the Ubuntu 12.04 default) and the newest version, ntfs-3g 2013.1.13AR.2. When formatted with ext4 I get good write speeds of about 140 MB/s. How can I fix the write speed?
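
    The high mount.ntfs CPU usage points at the FUSE write path; the classic fix is the driver's big_writes mount option, which lets ntfs-3g accept writes in chunks larger than 4 KB. A sketch (the mount point is a placeholder):

        # Remount with larger FUSE write requests; noatime avoids extra metadata writes
        umount /dev/sda1
        mount -t ntfs-3g -o big_writes,noatime /dev/sda1 /mnt/data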

  • Funnelling http traffic

    - by spencer p
    I have a situation where a large batch of servers (X) need to request data on demand from a smaller set of web servers (Y). The worst-case scenario is all the servers in X sending different requests to one server in Y at once: that would be X connections, which could be a very large burst of traffic. The best-case scenario is one server in X hitting one server in Y in tandem. Life does not work like this. One idea to entertain is placing a proxy, similar to Squid, between X and Y. All of the X servers could connect to this proxy, which would hold a few persistent (HTTP keep-alive) connections to Y. If the few were, say, 3 or 4, then it would funnel. If we could then rate-limit those connections, and traffic decided to spike unusually high, we wouldn't hurt anyone but ourselves. Thoughts?
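
    nginx as the intermediary can do both the funnelling and the rate limiting; an illustrative config fragment (host names and all the numbers are hypothetical):

        http {
            upstream y_pool {
                server y1.example.com;
                server y2.example.com;
                keepalive 4;    # cap idle persistent connections to Y per worker
            }
            limit_req_zone $binary_remote_addr zone=xreq:10m rate=50r/s;

            server {
                listen 8080;
                location / {
                    limit_req zone=xreq burst=100;
                    proxy_http_version 1.1;
                    proxy_set_header Connection "";  # needed for upstream keepalive
                    proxy_pass http://y_pool;
                }
            }
        }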
