Search Results

Search found 20904 results on 837 pages for 'disk performance'.

  • KVM error with device pass through

    - by javano
    I am running the following command, booting a Debian live CD and passing a host PCI device to the guest as a test, and KVM errors out:

        kvm -m 512 -boot c -net none -hda /media/AA502592502565F3/debian.iso -device pci-assign,host=07:00.0
        PCI region 1 at address 0xf7920000 has size 0x80, which is not a multiple of 4K. You might experience some performance hit due to that.
        No IOMMU found. Unable to assign device "(null)"
        kvm: -device pci-assign,host=07:00.0: Device 'pci-assign' could not be initialized

        lspci | grep 07
        07:00.0 Ethernet controller: 3Com Corporation 3c905C-TX/TX-M [Tornado] (rev 74)

    I shoved an old spare NIC into my motherboard to test PCI pass-through. I have searched the Internet with Google and found that errors relating to "No IOMMU found" often mean the PCI device is not supported by KVM. Does KVM have to support the device being passed through? I thought the point was to pass the device through and let the guest worry about it? Ultimately I want to pass through a PCI random number generator; is this not going to be possible with KVM? Thank you.
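
    The "No IOMMU found" message usually points at the host rather than at the card: legacy pci-assign needs the CPU/chipset IOMMU (Intel VT-d or AMD-Vi), which is often disabled in the BIOS and off by default in the kernel. A quick check, as a sketch assuming an Intel board:

        # Look for IOMMU/DMAR initialisation in the kernel log
        dmesg | grep -i -e dmar -e iommu

        # If nothing appears, enable VT-d in the BIOS/EFI and boot the host
        # kernel with IOMMU support enabled, e.g. by adding to the kernel
        # command line in your GRUB config:
        #   intel_iommu=on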

  • Defrag starting when not scheduled. What is triggering the defrag?

    - by leroyclark
    I have a file server that is starting a defrag around 2:00 PM every day. This is killing performance, as it runs for hours because this is a file server with multiple drives. All scheduled tasks regarding defrag have been disabled, and I have verified that it is accessing the data drives (using SysInternals tools). The reason I might have thought otherwise is that the event log has multiple entries about defragging a db file related to shadow copies. Yes, these drives take shadow copy snapshots multiple times per day, but their times don't coincide with the defrag task. There is nothing in the event logs regarding defrag except those noted above in relation to shadow copies. I'm out of ideas looking for what is starting these jobs. One possibility is that the drives are not being defragmented, but analyzed to determine whether they need to be defragmented. I manually ran an analysis, and the CPU usage (by dfrgntfs.exe) seems similar to what I'm seeing every day while the defrag process is running. However, I've found no setting that schedules this analysis.
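
    One way to rule out a task that merely doesn't have "defrag" in its name is to dump every scheduled task with full details and search the output; a sketch for the Windows command line:

        schtasks /query /fo LIST /v > C:\tasks.txt
        findstr /i "defrag dfrg" C:\tasks.txt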

  • Why does this preseed for gitolite fail?

    - by troutwine
    I'm installing gitolite on a Debian Squeeze box with the following preseed:

        gitolite gitolite/gituser string git
        gitolite gitolite/adminkey string ssh-rsa AAAAB3ECT
        gitolite gitolite/gitdir string /var/lib/git

    On installation:

        # debconf-set-selections /var/cache/debconf/gitolite.preseed
        # apt-get install gitolite
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          git-daemon-run gitweb
        The following NEW packages will be installed:
          gitolite
        0 upgraded, 1 newly installed, 0 to remove and 26 not upgraded.
        Need to get 0 B/114 kB of archives.
        After this operation, 348 kB of additional disk space will be used.
        Preconfiguring packages ...
        Selecting previously deselected package gitolite.
        (Reading database ... 24715 files and directories currently installed.)
        Unpacking gitolite (from .../gitolite_1.5.4-2+squeeze1_all.deb) ...
        Setting up gitolite (1.5.4-2+squeeze1) ...
        adduser: The home dir must be an absolute path.
        dpkg: error processing gitolite (--configure):
          subprocess installed post-installation script returned error exit status 1
        configured to not write apport reports
        Errors were encountered while processing:
          gitolite
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Why? The preseed was extracted from a manually configured installation, per here, and works without issue on another machine.
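
    Since the postinst is complaining about the home dir, it can help to confirm exactly which answers debconf actually recorded before adduser ran; a quick sketch (debconf-get-selections is in the debconf-utils package):

        # Show the answers stored for the gitolite package
        debconf-show gitolite

        # Or grep the whole debconf database
        debconf-get-selections | grep gitolite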

  • Scriptable BitTorrent clients?

    - by James McMahon
    In an effort to further automate all the little computer housekeeping tasks that waste my time, I am looking into BitTorrent clients that can script common tasks. I've done some Googling, and it looks like Transmission might have some such capabilities, but their site wasn't very clear on the details. Things I am looking to do:

        - Prioritize and label torrents based on trackers
        - Set seed length based on trackers and file size
        - Set additional seed time when a torrent's seed time expires, based on a number of factors like time spent seeding, remaining disk space and ratio
        - Move torrents to appropriate places post-seeding, based on labels and tracker

    Basically, while I could write Python or Bash scripts for things like moving torrents around and other simple actions, I need a way to talk to the client to find out things like a torrent's seed time, tracker, labels, file size, etc. Is there any client out there that would allow me to do all, or a subset, of these actions? I have access to Linux, Mac and Windows and am not tied to any particular torrent client. I am a programmer, so I have no problem writing scripts, but examples of torrent scripting would also be helpful.
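
    For what it's worth, Transmission ships a command-line RPC client, transmission-remote, that covers a good part of this; a minimal sketch (host, torrent IDs and paths are illustrative):

        # List torrents with ID, progress, ratio and status
        transmission-remote localhost:9091 -l

        # Full details (trackers, seeding time, ratio) for torrent 5
        transmission-remote localhost:9091 -t 5 -i

        # Move torrent 5's data once it has finished seeding
        transmission-remote localhost:9091 -t 5 --move /srv/archive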

  • Does btrfs balance also defragment files?

    - by pauldoo
    When I run btrfs filesystem balance, does this implicitly defragment files? I could imagine that balance simply reallocates each file extent separately, preserving the existing fragmentation. There is an FAQ entry, 'What does "balance" do?', which is unclear on this point:

        btrfs filesystem balance is an operation which simply takes all of the data and metadata on the filesystem, and re-writes it in a different place on the disks, passing it through the allocator algorithm on the way. It was originally designed for multi-device filesystems, to spread data more evenly across the devices (i.e. to "balance" their usage). This is particularly useful when adding new devices to a nearly-full filesystem. Due to the way that balance works, it also has some useful side-effects: If there is a lot of allocated but unused data or metadata chunks, a balance may reclaim some of that allocated space. This is the main reason for running a balance on a single-device filesystem. On a filesystem with damaged replication (e.g. a RAID-1 FS with a dead and removed disk), it will force the FS to rebuild the missing copy of the data on one of the currently active devices, restoring the RAID-1 capability of the filesystem.
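
    Worth noting that btrfs has a separate, explicit defragmentation command, which suggests balance is not meant to double as one; a sketch (the recursive -r flag needs a reasonably recent btrfs-progs):

        # Defragment one file
        btrfs filesystem defragment /mnt/data/bigfile.img

        # Recursively defragment a directory tree, verbosely
        btrfs filesystem defragment -r -v /mnt/data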

  • How to back up data from a machine that keeps hanging

    - by Amit Phatarphekar
    Hello - I have a storage server running OpenSolaris, but lately it's been acting up: it hangs at random times due to some SCSI/ATA related error messages. I've tried to fix it without any progress, so I'm giving up now. The machine keeps hanging every 30 minutes or 1 hour, sometimes after 4 hours; it's very unpredictable. So I've decided to just reformat the storage server and start from scratch - maybe I'll just not use Solaris and install something else, since the errors are related to Solaris running on ATA HDDs or something. Question: before I reformat it, I want to back up some of the important data on it. It has a VM with 200 GB disk files, a whole bunch of ISOs stored on it, etc. I'm using a simple scp to copy the files over to a different machine. My issue is that, because the machine hangs, sometimes my file copy is incomplete and I have to start all over again. Let's say I'm trying to copy a 200 GB file, which takes around 4 hours; if the machine hangs before the whole file is copied over, I have to recopy the file from scratch. Is there a solution for copying the files over such that if the machine hangs or the network goes down, the copying can resume from where it left off? Like if 50 GB of a 200 GB file was copied and the machine hung, next time it would just continue copying the rest, instead of starting all over again. Thanks, Amit
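
    rsync is the usual tool for this, since it can keep and resume partially transferred files; a minimal sketch, assuming rsync is installed on both ends (paths and host are illustrative):

        # --partial keeps interrupted files; --append-verify (rsync >= 3.0)
        # resumes them by appending and re-checksumming the existing part
        until rsync -av --partial --append-verify /tank/vms/disk.img user@backuphost:/backups/; do
            echo "transfer interrupted, retrying in 60s..."
            sleep 60
        done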

  • Best way to integrate applications into a Windows 7 install.wim image

    - by cyph3r
    I currently have unmodified .iso images of the Windows 7 32-bit and 64-bit installation disks, and I need to integrate some applications (Office, Adobe Reader, etc.) and Windows updates into them, so that when Windows is installed the above applications/updates are already installed and working. Requirements: my output has to be an install.wim image containing the new/improved Windows installation files, because deployment is done via a PXE server and a custom WindowsPE environment. The procedure to create the install.wim has to be as automatic as possible; I can't create it manually every time I want to incorporate a new Windows or application update into the image. The image will be installed on 100+ computers, so it needs to be 'generic'. I've never done something like this before, but from what I've found a possible solution would be:

        - Create a reference installation (preferably on a VM, so I can take snapshots) complete with its applications/updates/settings.
        - After the complete setup, take a snapshot of the installation.
        - Run C:\Windows\System32\sysprep\sysprep.exe /oobe /generalize /shutdown to sysprep the machine.
        - Boot into a WindowsPE environment and capture the .wim image using gimagex.
        - Deploy the .wim and enjoy the rapid installation times. :D

    Does that sound OK? Would you recommend anything else? Right now the applications are installed after the installation of Windows is complete, so the total installation time is quite long; that's why I need a different approach.
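
    That is essentially the standard sysprep-and-capture workflow; as an alternative to gimagex for the capture step, the WAIK's imagex can write the install.wim directly from WinPE and append later revisions to the same file, which helps with keeping the process scriptable (a sketch; drive letters and image names are illustrative):

        rem Capture the sysprepped volume C: into install.wim on a data drive
        imagex /capture C: D:\install.wim "Windows 7 Reference" /compress fast

        rem Append an updated build as a second image in the same .wim
        imagex /append C: D:\install.wim "Windows 7 Reference v2"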

  • Can MS Services for Unix be deployed and accessed from a shared drive?

    - by Ian C.
    I'm interested in experimenting with replacing our dependency on MKS with MS' Services for Unix toolset, and I was wondering if anyone has experience with deploying SFU on a shared drive? We like to, wherever possible, host our dev tools on one central NAS and call to the NAS to access the tools, instead of rolling stuff out to each and every desktop. I'm not interested in the NFS support or ActiveState Perl; really, none of the daemon technology is required here. I'm looking for replacements for the coreutils/binutils stuff you find in Linux (and MKS on Windows): sed, awk, csh, bash, grep, ls, find - the meat-and-potatoes command line apps that our build and test scripts are built around. If I limit the install to just the Interix GNU Components (and maybe the Remote Connectivity components), will it run nicely from a shared location? To head off some questions: yes, I've looked at Cygwin. Unfortunately its performance in our build and test environment is poor - it runs considerably slower than MKS - and it's not a direct drop-in replacement for MKS (thanks to its internal pathing and limitations with commands like 'ps'), so it's a tougher sell. Yes, I'm looking at the MinGW offering in parallel to this.

  • Collect temperature and fan speed with munin from Windows 7 PC?

    - by mfn
    Hi, I'm quite fond of munin and use it at home to monitor my PCs. What was super-duper easy under Linux is pretty much unsolvable for me under Windows: I'd like to monitor CPU and motherboard temperatures as well as fan speed. On Linux I'm using lm-sensors, and the plugin for munin was basically there. I already access some information from my Windows machine via SNMP (disk space, CPU usage, memory usage); the graphs are simple, as is the information exposed via SNMP, but they do their job. But when it comes to temperature and fan speed I'm running against a wall. My research so far suggests that Windows does not provide any out-of-the-box ability to retrieve temperature/fan speed data; third-party applications are necessary which know how to communicate with the motherboard chips. The best I came up with is that SpeedFan exposes a shared memory interface, and there exists a library which hooks into the Windows SNMP facility and bridges over to SpeedFan's shared memory interface; it's called SFSNMP (site currently down). Unfortunately the library doesn't work; there's a bug report open about it at SpeedFan, but it's currently not moving (although the SFSNMP author is active there). So, unless that's going to work anytime soon, are there any alternatives? I'm not fond of buying software to get this feature, given that I take it for granted that my system exposes the information needed to properly monitor it, but please don't let that stop you from answering.
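
    If the SFSNMP bridge ever starts working, the munin side could stay as simple as the existing SNMP graphs; a rough sketch of a shell munin plugin (the OID here is entirely hypothetical - the real one would come from SFSNMP's MIB):

        #!/bin/sh
        # Hypothetical munin plugin polling a SpeedFan value via SNMP
        if [ "$1" = "config" ]; then
            echo "graph_title CPU temperature"
            echo "temp.label degrees C"
            exit 0
        fi
        echo "temp.value $(snmpget -v 2c -c public -Oqv windowsbox .1.3.6.1.4.1.99999.1.1)"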

  • No apparent reason for high load average

    - by Oz.
    We have several web servers running on Amazon (EC2) c1.xlarge, over the Amazon AMI. The servers are duplicates of each other, running the exact same hardware and software. Each server's spec is:

        7 GB of memory
        20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
        1690 GB of instance storage
        64-bit platform
        I/O Performance: High
        API name: c1.xlarge

    A couple of weeks ago we ran a yum upgrade on one of the servers. Since this upgrade, the upgraded server has been showing a high load average. Needless to say, we did not update the other servers, and we cannot do so until we understand the reason for this behavior. The strange thing is that when we compare the servers using top or iostat, we cannot find the reason for the high load. Note that we have moved traffic from the "problematic" server to the others, which has made the "problematic" server less crowded in terms of requests, and still its load is higher. Do you have any idea what it could be, or where else we can check? Many thanks for the help! Oz.

        #
        # proper server
        # w command
        #
        00:42:26 up 2 days, 19:54, 2 users, load average: 0.41, 0.48, 0.49
        USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
        pts/1 82.80.137.29 00:28 14:05 0.01s 0.01s -bash
        pts/2 82.80.137.29 00:38 0.00s 0.02s 0.00s w

        #
        # proper server
        # iostat command
        #
        Linux 3.2.12-3.2.4.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu: %user %nice %system %iowait %steal %idle
                  9.03  0.02    4.26    0.17   0.13 86.39
        Device:  tps  Blk_read/s Blk_wrtn/s Blk_read  Blk_wrtn
        xvdap1   1.63   1.50      55.00       367236  13444008
        xvdfp1   4.41  45.93      70.48     11227226  17228552
        xvdfp2   2.61   2.01      59.81       491890  14620104
        xvdfp3   8.16  14.47      94.23      3536522  23034376
        xvdfp4   0.98   0.79      45.86       192818  11209784

        #
        # problematic server
        # w command
        #
        00:43:26 up 2 days, 21:52, 2 users, load average: 1.35, 1.10, 1.17
        USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
        pts/0 82.80.137.29 00:28 15:04 0.02s 0.02s -bash
        pts/1 82.80.137.29 00:38 0.00s 0.05s 0.00s w

        #
        # problematic server
        # iostat command
        #
        Linux 3.2.20-1.29.6.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu: %user %nice %system %iowait %steal %idle
                  7.97  0.04    3.43    0.19   0.07 88.30
        Device:  tps  Blk_read/s Blk_wrtn/s Blk_read  Blk_wrtn
        xvdap1   2.10   1.49      76.54       374660  19253592
        xvdfp1   5.64  40.98      85.92     10308946  21612112
        xvdfp2   3.97   4.32      93.18      1087090  23439488
        xvdfp3  10.87  30.30     115.14      7622474  28961720
        xvdfp4   1.12   0.28      65.54        71034  16487112
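
    Since load average counts uninterruptible (D-state) processes as well as runnable ones, it is worth checking whether the newer kernel is simply keeping more tasks in D state rather than burning CPU; a quick sketch:

        # Any processes currently runnable (R) or in uninterruptible sleep (D)?
        ps -eo state,pid,comm | awk '$1 ~ /^[DR]/'

        # Per-process CPU and disk activity over time (pidstat is in sysstat)
        pidstat -u 5 3
        pidstat -d 5 3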

  • Error during Time Machine backups on OS X Lion

    - by user92401
    After I turn on my machine, the first couple of Time Machine backups seem to go OK, but after about an hour I get this error:

        Unable to complete backup. An error occurred while creating the backup folder.
        Latest successful backup: 7/31/11 at 12:32 PM

    I'm running 10.7. Time Machine is backing up an internal HD to an external USB HD. I've already run Disk Utility to repair the Time Machine partition. It's a relatively new hard drive and didn't have any issues. Here's what I've found in the Console's log filtered for backupd:

        7/31/11 12:31:21.223 PM com.apple.backupd: Starting standard backup
        7/31/11 12:31:21.447 PM com.apple.backupd: Backing up to: /Volumes/MyMac TM Backup/Backups.backupdb
        7/31/11 12:31:29.146 PM com.apple.backupd: 983.7 MB required (including padding), 391.90 GB available
        7/31/11 12:32:19.471 PM com.apple.backupd: Copied 3156 files (36.0 MB) from volume Macintosh HD.
        7/31/11 12:32:20.017 PM com.apple.backupd: Copied 3173 files (36.0 MB) from volume LI.
        7/31/11 12:32:20.136 PM com.apple.backupd: 934.8 MB required (including padding), 391.86 GB available
        7/31/11 12:32:54.755 PM com.apple.backupd: Copied 916 files (117.8 MB) from volume Macintosh HD.
        7/31/11 12:32:54.894 PM com.apple.backupd: Copied 933 files (117.8 MB) from volume LI.
        7/31/11 12:32:55.937 PM com.apple.backupd: Starting post-backup thinning
        7/31/11 12:32:55.937 PM com.apple.backupd: No post-back up thinning needed: no expired backups exist
        7/31/11 12:32:55.960 PM com.apple.backupd: Backup completed successfully.
        7/31/11 1:21:28.624 PM com.apple.backupd: Starting standard backup
        7/31/11 1:21:28.631 PM com.apple.backupd: Backing up to: /Volumes/MyMac TM Backup/Backups.backupdb
        7/31/11 1:21:28.682 PM com.apple.backupd: Error: (22) setxattr for key:com.apple.backupd.HostUUID path:/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro size:37
        7/31/11 1:21:28.683 PM com.apple.backupd: Error: (22) setxattr for key:com.apple.backupd.HostUUID path:/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro size:37
        7/31/11 1:21:38.694 PM com.apple.backupd: Backup failed with error: 2
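
    Error (22) is EINVAL from the setxattr call, so one quick test is whether the backup volume will accept extended attributes at all; a sketch (the attribute name is arbitrary):

        touch "/Volumes/MyMac TM Backup/xattr-test"
        xattr -w com.example.test hello "/Volumes/MyMac TM Backup/xattr-test"
        xattr -l "/Volumes/MyMac TM Backup/xattr-test"
        rm "/Volumes/MyMac TM Backup/xattr-test"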

  • Malware Defense Shows Up in PlayOn Settings/Logs Although System Has Been Thoroughly Cleaned

    - by nicorellius
    I was hit really hard by some nasty malware: Malware Defense. I was doing something I should not have been doing when I got it (surfing Pirate Bay for TV shows). It locked up my system, and I had to reboot in safe mode. I was able to shut down the process and remove it using a malware killer tool. Then, after my machine was cleaned up a bit, I installed Clamwin, Malwarebytes, and another AV tool, and cleaned the heck out of my system. Simultaneously, while this was going on, I was having trouble with my media server, PlayOn. This tool is great but has some bugs; one in particular is that it will not function well with AV software running. I found a way to let the new AV software run while using PlayOn, but it still says I have Malware Defense on. Firstly, Malware Defense is long gone: I cleaned all remnants from my registry and scoured my system with the above tools multiple times. PlayOn is getting some indication that I have this crap installed on my system, but it's not there. The system runs OK, but not optimally, and I have a feeling it is causing my streaming to be interrupted sometimes. How is it that I can't find Malware Defense on my system no matter how hard I try, yet somehow PlayOn is picking up a fingerprint of it somewhere? I have gone back and forth with MediaMall to no avail. I kind of just gave up, because the streaming works OK. BTW, I also uninstalled/reinstalled PlayOn several times, reverted back to previous versions, etc. The only thing I haven't done is reformat my disk and reinstall Windows. I really don't want to do this if there is another way to remove this little fingerprint. Any ideas?
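
    One thing worth trying before a reformat is a full registry search from the command line, since leftover keys are the usual way another program "detects" old software; a sketch (this can take a while to run):

        rem Search both hives for any remaining references
        reg query HKLM /f "Malware Defense" /s
        reg query HKCU /f "Malware Defense" /s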

  • Sony PMB causing failure to load Windows 7 Pro 64-bit normally or even Safe Mode

    - by Wesley
    After installing Sony's Picture Motion Browser on my desktop with Windows 7 Pro x64, the machine always goes into Startup Repair because Windows 7 fails to start. This happens every time I try to install it, even with all unnecessary programs closed and all disk drives and unnecessary USB ports empty. I don't know exactly what is causing the problem. Any ideas? My desktop is an HP m8530f: http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01469325&tmp_task=prodinfoCategory&lc=en&dlc=en&cc=us&product=3740333&lang=en The only upgrades are an HD4350 and a 500W PSU. EDIT: Windows 7 cannot start now; I'm currently running diagnostic tests from the BIOS. EDIT: Here are the problem details:

        Problem Signature:
        Problem Event Name: StartupRepairOffline
        Problem Signature 01: 6.1.7600.16385
        Problem Signature 02: 6.1.7600.16385
        Problem Signature 03: unknown
        Problem Signature 04: 21201022
        Problem Signature 05: AutoFailover
        Problem Signature 06: 8
        Problem Signature 07: CorruptFile
        OS Version: 6.1.7600.2.0.0.256.1
        Locale ID: 1033

    CONCLUSION: I think Sony PMB may have caused some sort of corruption in the system files. So if you have Windows 7 and plan on installing Sony PMB, find a Vista or XP machine to install it on.
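
    Given the CorruptFile signature, running the system file checker is a cheap way to confirm or repair damaged protected files; a sketch (the offline variant is for when the machine will only boot into the recovery environment):

        rem From an elevated prompt inside Windows
        sfc /scannow

        rem From the Windows 7 recovery environment, against the offline install
        sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows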

  • 13" MacBook Pro with Win 7 and External VGA gets 640x480

    - by Jim McKeeth
    I have a brand new 13" MacBook Pro - 2.26 GHz with the NVIDIA 9400M video card. I installed Windows 7 (final) in Boot Camp and booted into Windows 7, installed all the drivers from the Apple disk, and it was working great. Then I attached the external VGA adapter (from Apple) to connect to a projector, and it dropped down to 640x480 resolution. No matter what I did, it wouldn't let me change to a higher resolution while the external VGA was connected; once it is disconnected, it goes back to the normal resolution. If I am booted into Snow Leopard, it works fine. I tried updating the NVIDIA drivers, and it behaved exactly the same. Ultimately I want to get 1024x768 or better resolution when connected to an external display. If it isn't fixable, then I am curious whether anyone else has seen this, whether it is a known issue, and who to contact for support (Apple, Microsoft or NVIDIA?). Update: just attaching the Mini-DVI to VGA adapter kicks it into 640x480; no projector is required. I tried forcing the display driver from Generic PnP Monitor to one that supported 1024x768, and that didn't work either.

  • Web server maxes CPU when Apache and MySQL are run together

    - by Tim
    This website has been running fine without issues; recently it went down. After some investigation, it looks like the combination of MySQL and Apache brings the box to its knees. Apache runs fine serving static web pages, and MySQL runs fine on its own when the website isn't running. As soon as the website is enabled with SQL running, the CPU on the box stays at 100%. Picture of the usage: http://i.stack.imgur.com/GG2NC.png I've checked the SQL database for errors and tried tuning nearly every performance parameter in Apache's and MySQL's conf files. The server is a Red Hat based box running the latest software packages. Any help/suggestions are welcome. Doing an strace on a high-CPU Apache process, I see the following:

        read(14, "", 8192) = 0
        close(14) = 0
        socket(PF_FILE, SOCK_STREAM, 0) = 14
        fcntl64(14, F_SETFL, O_RDONLY) = 0
        fcntl64(14, F_GETFL) = 0x2 (flags O_RDWR)
        connect(14, {sa_family=AF_FILE, path="/var/lib/mysql/mysql.sock"...}, 110) = 0
        setsockopt(14, SOL_SOCKET, SO_RCVTIMEO, "\2003\341\1\0\0\0\0", 8) = 0
        setsockopt(14, SOL_SOCKET, SO_SNDTIMEO, "\2003\341\1\0\0\0\0", 8) = 0
        setsockopt(14, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
        setsockopt(14, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0

    Here is what I see from a mysql process:

        futex(0x86fc9a4, FUTEX_WAIT_PRIVATE, 39, NULL) = 0
        futex(0x86fc734, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
        futex(0x86fc734, FUTEX_WAKE_PRIVATE, 1) = 0
        gettimeofday({1301465020, 141613}, NULL) = 0
        clock_gettime(CLOCK_REALTIME, {1301465020, 141699633}) = 0
        futex(0x8707a64, FUTEX_WAIT_PRIVATE, 1, {4, 999913367}) = 0
        futex(0x8707a40, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
        futex(0x8707a40, FUTEX_WAKE_PRIVATE, 1) = 0
        exit_group(0) = ?
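
    When the CPU only pegs with the site live, the next step is usually to capture what queries the web app is actually issuing; a sketch:

        # Snapshot of currently running queries
        mysql -e 'SHOW FULL PROCESSLIST;'

        # Log statements slower than 1 second (MySQL 5.1+; add to my.cnf to persist)
        mysql -e "SET GLOBAL slow_query_log = ON; SET GLOBAL long_query_time = 1;"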

  • NetInstall working on some systems, not working on others

    - by cduruk
    Hi, I'm having an issue where my NetInstall setup works on some computers and fails on others, and I am not able to diagnose the issue. I created an image of a Mac Mini and then created a NetRestore image using the System Image Utility found in Snow Leopard Server. NetBoot and NFS all seem to be working fine on the server, which is an Xserve. I then select the NetInstall image from the Startup Disk on a machine. On some of the machines, the process works as expected; on some of them, I see the globe icon blink a few times and then the system boots to the regular hard drive. I have captured the tracedump and the system.log logs from the server in both cases, where NetInstall seems to work and where it fails. Here is the link that has all the logs: http://gist.github.com/232232 The gist of the failure seems to be the lack of a BSDP DISCOVER in the failing case, but I'm not able to identify why exactly that is happening. I'd really appreciate any help on this issue.
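
    Since BSDP is carried over the DHCP ports, sniffing those on the server while a failing client tries to NetBoot will show whether the DISCOVER ever leaves the client or is dropped in between; a sketch (the interface name is illustrative):

        # Watch BOOTP/DHCP (and therefore BSDP) traffic on the server
        tcpdump -i en0 -n -v port 67 or port 68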

  • SBS 2003 boot stalls at acpitabl.dat

    - by John
    I have an SBS 2003 server that ran for 3 years without any problems, and a few days ago it started freezing during boot. The system is using two 500 GB drives in RAID 1 (Intel Matrix 7.5). After trying to load in safe mode, boot stops on acpitabl.dat. My first idea was that there is a problem with the RAID, although disk status was OK and RAID status was Rebuild. I tried to boot from each drive separately; one gives me the same problem, and the other fails to load at all. I took both drives out and checked them on a different machine: one drive is dead, the other is without any problems. I returned the good drive to the SBS 2003 box, with the RAID status changed to Degraded, but the problem is still the same. I also have a clean copy of SBS 2003 installed on this drive (a previous installation), which loads smoothly and quickly, so I believe the main problem is with this particular SBS 2003 installation. I did not make any hardware changes and did not make any updates (not sure about automatic Windows updates lately). Since there are tons of posts about this problem and no clear solution, I am trying to figure out how to repair the SBS 2003 installation, since it has some installed programs which I cannot reinstall without additional issues.
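
    Before attempting a full in-place repair, it may be worth ruling out plain file-system damage from the Recovery Console on the surviving drive; a sketch with standard Recovery Console commands:

        chkdsk C: /r
        fixboot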

  • Disc drive busy on MacBook, with disc stuck inside.

    - by ayaz
    I have a white MacBook running Snow Leopard, 10.6.3, with the latest updates. I popped in a DVD that the system failed to mount; I did not see any conspicuous errors. As a result of this failure, the DVD got stuck in the drive. Neither pressing the eject button on the keyboard nor running the diskutil eject command caused the DVD to come out, and the commands drutil eject and drutil tray open could not get the DVD to budge at all. The 'mount' and 'eject' buttons in the Disk Utility window are dimmed out, and Disk Utility reports that that particular disc drive is busy. This is not the first time this has happened to me. I know that I will ultimately have to resort to rebooting the system and holding down the eject button to get the DVD to come out, but is there any workaround that does not involve rebooting the system or prying the disc out? The drive on this MacBook does not have a needle-pin reset hole - at least, I couldn't find one anywhere. Any help will be greatly appreciated. Many thanks.
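
    Since "busy" normally means some process still holds the device open, finding and quitting that process can sometimes free the disc without a reboot; a rough sketch:

        # Which processes have the optical device (or something under it) open?
        sudo lsof | grep -i /dev/disk

        # Then retry the eject
        drutil eject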

  • OC'ing a Sapphire 7950, problems with memory clock

    - by Cliff
    I bought a new rig with a Sapphire Radeon 7950 FleX with Boost and got good scores initially in 3DMark 11, around 7300 stock. The problem is, as soon as I start overclocking, issues arise. The core clock is 860 MHz but rises to 925 MHz with boost applied under load, so all the OC tools show 925 as the base clock for some reason. Overclocking the core clock has been no problem, though; I got up to 1200 pretty stable, but with only an incremental increase in 3DMark 11, to 7600 from 7300, which is worrying. The real trouble starts when I touch the memory clock: as soon as I raise it even by 1 point, to 1251 MHz, performance goes way down. Suddenly I score under 5000 graphics in 3DMark 11, no matter whether it's at 1251 MHz or 1500 MHz. I've tried adjusting every other parameter and different tools (Sapphire TriXX, Catalyst, Afterburner), all with the same result. I've tried upping the power, still the same. Where is the issue here?

  • Walkthrough/guide for building an application server for a multi-tenant web app [on hold]

    - by Khalid Adisendjaja
    The web app will detect a subdomain such as tenant1.app.com, tenant2.app.com, etc. to identify the tenant environment. Each tenant environment will have different database credentials (port, db name, etc.) but still connect to the same database server. Each tenant should use app.com as their main domain; using their own domain is prohibited. Each tenant will have their own REST API endpoint, such as tenant1.app.com/api/v1/xxxx, tenant2.app.com/api/v1/xxxx, tenant3.app.com/api/v1/xxxx. I've come to a simple solution by setting a wildcard subdomain (*.app.com) in the Apache/Nginx vhost configuration file on the web server. I have googled many concepts for building a multi-tenant app server but still don't understand how it is really done, what the right way to do it is, and what is actually required for this task. So I've come to these questions:

        - Do I need a proxy server, DNS masking, etc.?
        - How do I monitor each tenant's activity?
        - What about server performance, load balancing, and scalability?
        - How do I set up an SSL certificate for each tenant?
        - What about an application cache for each tenant?
        - Is it reliable to use this setup for production?
        - etc...

    I have very little experience with server infrastructure, so I'm looking for a DIY walkthrough, a step-by-step guide, or a sophisticated solution ready to be implemented for production.
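
    On the web-server side, the wildcard vhost can also hand the tenant name straight to the app; a minimal nginx sketch (names and the upstream port are illustrative, and serving SSL for every tenant would need a wildcard or SAN certificate for *.app.com):

        server {
            listen 80;
            # Capture the tenant from the subdomain into $tenant
            server_name ~^(?<tenant>[a-z0-9-]+)\.app\.com$;

            location / {
                # Pass the tenant to the application as a request header
                proxy_set_header X-Tenant $tenant;
                proxy_set_header Host $host;
                proxy_pass http://127.0.0.1:8080;
            }
        }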

  • SQL Server replication and load balance

    - by Ahmed Galal
    I'm running a web service that serves a mobile app, on IIS 8 and SQL Server 2014. The service is under massive load, and I'm trying to improve performance; most of the load is on SQL Server. I don't think I have a bottleneck elsewhere - my processor and RAM are up to the max, and I think my code is not that bad; I'm already using memcached and other things to avoid hitting SQL too much. I know I can always upgrade the server hardware, but I already have a spare server that I would like to use, so I was thinking of splitting the SQL load across the 2 servers. What I was thinking of is to set up replication on the other server and do some load balancing, but I'm not sure how to do the load balancing. I know I can adjust my code to hit the other server for some queries, but I was hoping to find a solution that avoids changing my code. So my question is: what are the ways of doing load balancing between 2 SQL servers? I would appreciate suggestions, best practices, or some directions. Thanks.
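
    For what it's worth, on SQL Server 2014 the closest thing to code-free read load balancing is an AlwaysOn availability group with read-only routing: the secondary stays in sync, and connections that declare read intent are routed to it automatically. Clients opt in through the connection string alone (a sketch; the listener and database names are illustrative, and availability groups require Enterprise Edition):

        Server=tcp:ag-listener,1433;Database=AppDb;Integrated Security=SSPI;ApplicationIntent=ReadOnly;MultiSubnetFailover=True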

  • Stop the constant random reboots of my GIGABYTE GA-B75M-D3V

    - by Frederic
    I've got some issues with a new system: it's rebooting constantly. The system consists of (brand new):

        Gigabyte GA-B75M-D3V with F9 BIOS (latest)
        Intel Core i5-3470 Ivy Bridge
        2x 8GB G.SKILL Ripjaws 1600MHz memory (memtest86-tested)

    Coming from a stable system:

        Creative X-Fi Titanium sound card
        Asus Radeon HD4850
        OCZ Vertex 3 120GB SSD, SATA 3
        1TB hard disk, SATA 2
        ASUS Blu-ray drive
        400W PSU

    Connected peripherals:

        Toshiba TV (DisplayPort on DVI of MB or HD4850)
        Wired mouse, wireless keyboard (Logitech)
        Azio Bluetooth USB key

    The main problem: it's not possible to read error codes from the motherboard - nothing in the manual, nothing on the Internet. At the beginning I received a board with graphics problems on top of the rebooting problem, and I RMA'd it. The new one doesn't have any graphics problems, but it's still constantly rebooting. I removed everything except the HD, the sound card, the Blu-ray drive and the wireless keyboard; it still reboots unexpectedly. I'm now running a test with just the motherboard and the HD, and I will update this text after the test. I've got some questions: does somebody have an idea for a test? Could the PSU cause this problem? I used it for years with the stable system. Update 1: BTW, if anyone has the same problem: the manual won't say it, but you'll need to reset the BIOS between two tests (the screwdriver on the two pins) if you suspect a compatibility problem.

  • No clue about the high load average in top

    - by Oz.
    We have several machines on Amazon (EC2) of the type c1.xlarge with 16 CPUs, running the Amazon AMI. Details on the machine:

        7 GB of memory
        20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
        1690 GB of instance storage
        64-bit platform
        I/O Performance: High
        API name: c1.xlarge

    One of the machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that the top command shows no hint of the cause of the load: CPUs are 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st (see below), and about 1.5 GB of memory is free. Any idea what it could be, or where else we can check? Many thanks for the help.

        #
        # top
        #
        top - 07:57:42 up 4:18, 1 user, load average: 1.36, 1.45, 1.47
        Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
        Cpu(s): 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 7120092k total, 5644920k used, 1475172k free, 532888k buffers
        Swap: 0k total, 0k used, 0k free, 3463936k cached

        PID   USER   PR NI VIRT  RES  SHR  S %CPU %MEM TIME+    COMMAND
        1557  mysql  20 0  1829m 374m 6448 S 14.3 5.4  11:15.09 mysqld
        6655  apache 20 0  416m  49m  3744 S 9.3  0.7  0:04.85  httpd
        27683 apache 20 0  421m  54m  3708 S 9.0  0.8  0:00.99  httpd
        6682  apache 20 0  424m  57m  3788 S 8.3  0.8  0:03.81  httpd
        16816 apache 20 0  419m  51m  3760 S 4.3  0.7  0:04.09  httpd
        22182 apache 20 0  417m  50m  3756 S 1.7  0.7  0:06.34  httpd
        219   root   20 0  0     0    0    S 0.3  0.0  0:00.34  kworker/7:1
        699   root   20 0  0     0    0    S 0.3  0.0  0:00.40  kworker/3:1
        1     root   20 0  19376 1508 1212 S 0.0  0.0  0:00.29  init
        2     root   20 0  0     0    0    S 0.0  0.0  0:00.00  kthreadd
        3     root   20 0  0     0    0    S 0.0  0.0  0:00.71  ksoftirqd/0

  • CentOS vps is randomly rebooting

    - by develroot
    I have a CentOS VPS (Parallels Virtuozzo container) which had been running for months. However, a few days ago it started to randomly reboot itself, and I can't find out why. The biggest thing I don't understand is that it takes 40 minutes to reboot (as far as I can see in the logs):

        root ~ # cat /var/log/messages | grep shutdown
        Oct 11 13:52:11 vps27 shutdown[23968]: shutting down for system halt
        Oct 14 14:55:17 vps27 shutdown[30662]: shutting down for system halt
        Oct 15 06:21:23 vps27 shutdown[20157]: shutting down for system halt

    And notice the time difference between the shutdown and xinetd's start:

        Oct 15 06:21:23 vps27 shutdown[20157]: shutting down for system halt
        Oct 15 06:21:24 vps27 init: Switching to runlevel: 0
        Oct 15 06:21:27 vps27 saslauthd[30614]: server_exit : master exited: 30614
        Oct 15 06:21:38 vps27 named[30661]: shutting down
        Oct 15 06:21:47 vps27 exiting on signal 15
        Oct 15 07:04:34 vps27 syslogd 1.4.1: restart.
        Oct 15 07:05:06 vps27 xinetd[1471]: xinetd Version 2.3.14 started with libwrap loadavg labeled-networking options compiled in.
        Oct 15 07:05:06 vps27 xinetd[1471]: Started working: 0 available services

    And here's what Parallels Power Panel says in terms of status changes:

        Time                      | Old Status | Status Obtained
        Oct 15, 2011 06:23:46 AM  | Mounted    | Down
        Oct 15, 2011 06:22:31 AM  | Running    | Mounted
        Oct 14, 2011 03:06:48 PM  | Starting   | Running
        Oct 14, 2011 03:06:23 PM  | Down       | Starting
        Oct 14, 2011 03:06:08 PM  | Mounted    | Down
        Oct 14, 2011 02:58:24 PM  | Running    | Mounted

    For some reason it's going into Mounted mode and then restarting itself. The only problem I can imagine is disk space utilization, which is now 84%, but can that be a reason for a system halt?

        Time                      | Category | Details                                                                                                              | Type        | Parameter
        Oct 15, 2011 07:08:33 AM  | Resource | Resource counter_disk_share_used yellow alert on environment vps27 current value: 82 soft limit: 85 hard limit: 95   | Yellow zone | counter_disk_share_used
        Oct 15, 2011 06:27:23 AM  | Resource | Resource counter_disk_share_used yellow alert on environment vps27 current value: 82 soft limit: 85 hard limit: 95   | Yellow zone | counter_disk_share_used
        Oct 15, 2011 06:23:50 AM  | Resource | Resource counter_disk_share_used green alert on environment vps27 current value: 0 soft limit: hard limit: 0         | Green zone  | counter_disk_share_used
        Oct 14, 2011 03:06:24 PM  | Resource | Resource counter_disk_share_used yellow alert on environment vps27 current value: 83 soft limit: 85 hard limit: 95   | Yellow zone | counter_disk_share_used
        Oct 14, 2011 03:05:50 PM  | Resource | Resource counter_disk_share_used green alert on environment vps27 current value: 0 soft limit: hard limit: 0         | Green zone  | counter_disk_share_used
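
    To tell clean shutdowns apart from crashes, and to see what happened right before each halt, the wtmp history and the surrounding log lines are worth pulling; a quick sketch:

        # Reboot/shutdown/runlevel history from wtmp
        last -x shutdown reboot runlevel | head -20

        # The 50 lines logged before each "shutting down" message
        grep -B50 'shutting down for system halt' /var/log/messages | less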

  • How many sites can IIS 6 handle?

    - by Sarah Nasir
    Is there a limit on the number of sites you can create in IIS? I have searched, and some forum discussions say there is no limit; someone mentioned that he has created up to 100,000 sites in IIS 6, but I don't know his server specs. Personally, I feel that whatever the limit of IIS, the resources will run out well before the limit is reached. How do big sites like Blogger and WordPress handle a huge number of sites on their servers? Questions:

        1) Is there an upper limit for IIS 6.0? If yes, then what is it?
        2) What would be a good number of requests for IIS to serve on a decent server? (I am not talking about dynamic requests on the server, or logs.)
        3) Is there a way I can do a test run on my cloud to test the capability of my server? What factors should I keep in view - db requests, page size, disk reads/writes, etc.?

    Responses will be highly appreciated.
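
    For question 3, a crude capacity test is simply to script site creation with the bundled iisweb.vbs and watch memory, handle counts and metabase size as the number grows; a sketch (paths and host headers are illustrative):

        rem Create 1,000 host-header sites from a command prompt
        for /L %i in (1,1,1000) do (mkdir C:\sites\site%i & iisweb /create C:\sites\site%i "Site%i" /b 80 /d site%i.example.com)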
