Search Results

Search found 20904 results on 837 pages for 'disk performance'.

  • Should we install the OS on an SSD or not when running virtual machines?

    - by Raghu Dodda
    I have a new Dell Mobile Precision M6500 laptop with 8 GB RAM. It has two hard drives: a 500 GB drive at 7200 RPM and a 128 GB SSD. The main purpose of this laptop is software development in virtual machines. The plan is to install the base OS (Windows 7) and all the programs on the 500 GB drive, and let the SSD hold only the virtual machine images. It is my understanding that we get the most performance from the virtual machines if the images are on a separate hard drive from the base OS. Is this the way to go, or should I install the OS on the SSD as well? What are the pros and cons? The virtual machine images would be between 20 and 30 GB, and I might run 1 or 2 at a time.

  • Partition and mount my secondary hard drive on CentOS 5.5 64bit?

    - by Andrew Fashion
    I am trying to prepare my second hard drive for user image uploads. Here is the current layout:

        # sudo parted /dev/sda print
        Model: ATA WDC WD2500KS-00M (scsi)
        Disk /dev/sda: 250GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type      File system  Flags
         1      32.3kB  107MB   107MB   primary   ext3         boot
         2      107MB   8595MB  8488MB  primary   linux-swap
         3      8595MB  10.7GB  2147MB  primary   ext3
         4      10.7GB  250GB   239GB   extended
         5      10.7GB  250GB   239GB   logical   ext3

        Information: Don't forget to update /etc/fstab, if necessary.

    I am assuming #4 is my secondary drive? How do I partition and mount it so I can begin using it? And how do I add it to fstab? I understand if that's too many questions in one; just help me with whatever you can, I guess :) Thank you for any help!
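
    A minimal sketch of the usual steps, assuming the second drive actually shows up as /dev/sdb (the device name and mount point here are placeholders, not taken from the question):

        # label the disk, create one partition spanning it, and put ext3 on it
        sudo parted /dev/sdb mklabel msdos
        sudo parted /dev/sdb mkpart primary ext3 1MB 100%
        sudo mkfs.ext3 /dev/sdb1

        # mount it now, and persist the mount across reboots
        sudo mkdir -p /var/uploads
        sudo mount /dev/sdb1 /var/uploads
        echo "/dev/sdb1  /var/uploads  ext3  defaults  0 2" | sudo tee -a /etc/fstab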

  • What's better for deploying a website + DB on EC2: 2 small VM or a large one?

    - by devguy
    I'm planning the deployment of a mid-sized website with a SQL Server Standard DB, and I've chosen Amazon EC2 to deploy it. I now have to choose between these 2 options:

    1) Two small instances (1 core each, 1.7 GB of RAM each): one for the IIS front-end, one for running the DB. Note: these "small instances" can only run the 32-bit version of Win2008 Server.
    2) A single large instance (4 cores, 7.5 GB of RAM) where I'd install both IIS and SQL Server. Note: this large instance can only run the 64-bit version of Win2008 Server.

    Which is better in terms of performance, scalability, ease of management (launching a new instance while I back up the principal one), etc.? All suggestions and points of view are welcome!

  • How to Increase Memory Allocated to IIS .NET Application?

    - by Mark Hansen
    We are using Windows 2008 R2 and IIS 7 running on Amazon EC2. IIS is running a single .NET application written in C#. We are having performance issues, and I want to give the application more memory, but I cannot figure out how to do it. How do I control the amount of memory that the CLR gets? I'm a total newbie with IIS, .NET, and the CLR. If I were working with Java, I would just use the -Xmx flag to increase the memory available to the JVM (e.g., -Xmx3000m for 3 GB), but I cannot seem to figure out how to do the equivalent in the Windows world.
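
    For context: the CLR has no -Xmx-style heap cap; it requests memory from the OS as needed. What IIS does expose are per-application-pool recycling thresholds, which cap memory by recycling the pool rather than granting more. A hedged sketch using appcmd (the pool name is a placeholder):

        :: set the private-memory recycling threshold for the pool, in KB (here ~3 GB)
        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /recycling.periodicRestart.privateMemory:3145728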

  • Encrypt backups with GPG to multiple tapes

    - by Dan
    Currently, I use tar to write my backups (ntbackup files) to a tape drive fed by an autoloader, e.g.:

        tar -F /root/advancetape -cvf /dev/st0 *.bkf

    (/root/advancetape just has the logic to advance to the next tape if one is available, or to notify me to swap the tapes out.) I was recently handed the requirement to encrypt our tape backups. I can easily encrypt the data with no problems using GPG. The problem is: how do I write this to multiple tapes with the same logic tar uses to advance the tapes once the current one is filled? I cannot write the encrypted file to disk first (2+ TB). As far as I can tell, tar will not accept binary input from stdin (it's looking for file names). Any ideas? :(
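
    One hedged workaround (not from the question): keep tar in charge of volume spanning, but feed it already-encrypted members. This assumes there is scratch space for one encrypted copy at a time, that each plaintext .bkf can be removed once encrypted, and that the recipient key is a placeholder; note that GNU tar's -F requires multi-volume mode (-M):

        # encrypt each backup file individually, dropping the plaintext as we go
        for f in *.bkf; do
            gpg --batch --yes --encrypt --recipient "backup@example.com" --output "$f.gpg" "$f" && rm "$f"
        done

        # tar still handles tape advancement, now over the encrypted files
        tar -M -F /root/advancetape -cvf /dev/st0 *.bkf.gpg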

  • ESXi with non-standard hardware HDD issues

    - by Hurricanepkt
    I have 3 very underutilized servers that I am condensing onto one of those shuttle PCs with VMware ESXi. The HDD seems to be the bottleneck right now (the activity light is almost always solid); currently I have a single 1 TB Seagate 7200.11 connected by SATA. VMware ESXi cannot detect it when running in AHCI mode, but does when running in IDE mode. I have read that IDE mode can cost about 5% in performance, which might give me enough breathing room. However, I am open to setting up external eSATA or some sort of RAID to gain more than just that 5%. I am just wary of sinking money into hardware without knowing whether it will work. Does anyone know of resources or procedures for getting this working?

  • Best practice for administering a (hadoop) cluster

    - by Alex
    Dear all, I've recently been playing with Hadoop. I have a six-node cluster up and running with HDFS, and I have run a number of MapReduce jobs. So far, so good. However, I'm now looking to do this more systematically and with a larger number of nodes. Our base system is Ubuntu, and the current setup has been administered using apt (to install the correct Java runtime) and ssh/scp (to propagate the various conf files). This is clearly not scalable over time. Does anyone have any experience of good systems for administering (possibly slightly heterogeneous: different disk sizes, different numbers of CPUs on each node) Hadoop clusters automagically? I would consider diskless boot, but imagine that with a large cluster, getting it up and running might be bottlenecked on the machine serving the OS. Or some form of distributed Debian apt to keep each machine's native environment synchronised? And how do people successfully manage the conf files over a number of (potentially heterogeneous) machines? Thanks very much in advance, Alex
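
    Short of a full configuration-management system, one low-tech sketch for the conf-file part (the node list and paths are made up for illustration):

        # push the Hadoop conf directory to every node listed in nodes.txt
        # assumes passwordless ssh from the admin box
        while read host; do
            rsync -az --delete /etc/hadoop/conf/ "$host:/etc/hadoop/conf/"
        done < nodes.txt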

  • Linux USB device to work as a CD-ROM on a Mac

    - by user157483
    I am working on driver development for Linux USB modules. I have written a USB mass-storage driver, and it works as a CD-ROM on a Windows machine:

    1) I made the first partition FAT32 and ran "modprobe g_hidmass file=/dev/mmcblk0p1 cdrom=1 stall=0 removable=1"; this works fine in Windows.
    2) I made the first partition HFS and ran the same "modprobe g_hidmass file=/dev/mmcblk0p1 cdrom=1 stall=0 removable=1", but on a Mac I get the error "The disk you inserted was not readable by this computer". In Disk Utility it shows up as a CD-ROM, but the file system is not read.

    Please help me understand how I can overcome this error.

  • Create and copy a Windows Mobile 5.0 operating system image

    - by user20119
    We have several dozen Windows Mobile 5.0 devices (Symbol MC7095 handhelds equipped with embedded Verizon WLAN, if that matters) that all need the same software and configuration. We connect each device via a USB cradle to add software through Microsoft ActiveSync, and then make several configuration changes directly on the handhelds themselves, in the OS. That process takes 30 minutes or more per device. Is there any way to set up one device and take a 'disk image' of the entire OS and software, such that it could then be copied quickly and easily to the other devices? Is such a thing possible with Windows Mobile devices?

  • List of recent motherboards with BIOS / without UEFI [on hold]

    - by jmn
    I am building a new desktop PC and I want to have full-disk encryption on it. TrueCrypt doesn't support UEFI as of now. Are there still recent motherboards out there without UEFI? I didn't find any list, and I am afraid I will have to study each potential candidate's technical sheet before purchase. I want to buy 2 or 3 of the same model to be future-proof. Newegg links will not help, as I don't live in the USA... which means this post is a legitimate target for PRISM ;-) Thanks for your help.

  • When adding second processor to SQL Server, will it automatically balance the load?

    - by ddavis
    We have SQL Server 2008 R2 (10.5) on a dedicated box with a single 2.4 GHz processor, which regularly runs at 70-80% CPU. We are going to be adding a significant number of users to the application and therefore want to add a second processor to the box (scale up). Will SQL Server automatically use the second processor to balance threads, or is there additional configuration that will need to be done? In other words, will adding the second processor drop my CPU usage to 35-40% per CPU, automatically balancing the load? Based on what I read here, it seems that it will: http://msdn.microsoft.com/en-us/library/ms181007.aspx However, I've read elsewhere that CPU performance gains can be made by assigning database tables to different filegroups, but I'm not sure we want to get that complicated at this point.
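
    SQL Server's scheduler does spread work across all visible CPUs by default, as the linked article describes. A hedged way to verify the balance after the upgrade (a sketch via sqlcmd; the server name is a placeholder):

        # roughly equal task counts per scheduler means the load is balancing
        sqlcmd -S localhost -Q "SELECT scheduler_id, cpu_id, current_tasks_count, runnable_tasks_count FROM sys.dm_os_schedulers WHERE scheduler_id < 255;"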

  • XP Laptop Restarting

    - by Liath
    I have a Dell 1525 on which I reinstalled XP recently due to a hard disk problem. Apparently I've missed something: it's taken to restarting for no apparent reason (I believe it's too frequent to be Windows Updates, which was my first thought). Last night it was particularly bad; it restarted 5 times in 2 hours while playing an online game. I can't actually think of a time it's happened when it's not running something in full-screen mode. I can't see anything in the event log apart from the usual start-up details. How can I find and fix this issue?

  • How can I deactivate the gnome desktop of my ubuntu server?

    - by 19 Lee
    I'm running a home server on my old laptop (Atom CPU). I installed Ubuntu 12.04 Server Edition, but I also installed ubuntu-desktop, so when I turn it on, the Ubuntu desktop is shown. I sometimes use the GUI, but I want to turn the ubuntu-desktop (GNOME desktop) off when I'm not using it. I think I can save resources by turning off the GUI, which matters since my laptop's performance is not very good and it often gets very hot. I guess I can start the desktop from my terminal with the "startx" command, but I don't know how to turn the X session off for a while. Anybody have an idea? Thanks in advance.
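
    A hedged sketch for 12.04, assuming the stock display manager is LightDM (swap in gdm if that is what actually boots):

        # stop the display manager (and the GNOME session it spawned)
        sudo service lightdm stop

        # bring the GUI back when needed
        sudo service lightdm start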

  • Windows doesn't recognise my USB key anymore (it used to work)

    - by dominicbri7
    I use my friend's USB flash drive (Corsair Flash Voyager 16 GB) to transfer files from my laptop to my desktop computer. However, a few days ago my laptop stopped recognizing the USB key, while there are still no problems with any other computer. I use Windows 7 64-bit, if that helps. I tried uninstalling the driver, rebooting, and all those kinds of tricks, but it won't work. When I connect it and open the "My computer" window, I see "Removable Disk (G:)" for a moment, then it disappears... then it reappears, and it keeps doing that periodically. I can't even right-click and hit "Properties" because it disappears. As I recall, it DOES work on every other computer. I think it has to do with the driver, but what can I do?

  • Replicate a big, dense Windows volume over a WAN -- too big for DFS-R

    - by Jesse
    I've got a server with a LOT of small files: many millions of files, and over 1.5 TB of data. I need a decent backup strategy. Any filesystem-based backup takes too long; just enumerating which files need to be copied takes a day. Acronis can do a disk image in 24 hours, but fails when it tries to do a differential backup the next day. DFS-R won't replicate a volume with this many files. I'm starting to look at Double Take, which seems to be able to do continuous replication. Are there other solutions that can do continuous replication at a block or sector level, not file-by-file, over a WAN?

  • How (much) is virtualization used today?

    - by BLAKE
    I know that where I have worked, I have pushed a lot for virtualizing our servers. I think they are much easier to implement and maintain than physical servers. I have been using Microsoft's Virtual Server 2005 R2 since it was released. Right now at my workplace we have 12 VM hosts that hold about 55 VMs, plus 6 other servers that we have been unable to convert to VMs. I want to know how other people in our field view virtualization. I have had developers dislike the notion of VMs, claiming major performance hits. What do other sysadmins think about virtualized servers?

  • How can I use rsync with a FAT file system?

    - by Kim
    I would like to write a simple backup script that saves some data to a FAT drive. Should I reformat the drive and use a better file system, or is it possible to use rsync with FAT? If so, what problems might I run into? Would performance be a lot worse? EDIT: This is on Linux; I didn't even know there was an rsync for Windows. The sources are various file systems (it's a mess), and the destination is currently formatted with FAT32. Thank you for your answers; I'll probably go for a reformat, since I'm not completely sure about the file sizes we'll have.
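
    If the drive stays FAT32, rsync can be made to cope with its quirks (no ownership or permissions, 2-second timestamp resolution, 4 GB file-size cap). A hedged sketch with the paths as placeholders:

        # -rt instead of -a: FAT can't store owners, groups, permissions, or symlinks
        # --modify-window=1 absorbs FAT's 2-second timestamp granularity
        rsync -rtv --modify-window=1 /home/kim/data/ /mnt/fatdrive/backup/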

  • Can NFS be forced to refresh stale files/directories when not using noac on the mount?

    - by johnnycrash
    We mount without using noac. I have a file that I append to once every 20 minutes; it is then read with mmap about 5,000 times a minute. We only mmap a couple of blocks for each read. Needless to say, noac just kills the read performance, so we don't use it. I add data to the end of the file using a mount with noac and read from a mount without noac. The mounts that are reading are not seeing the new data. I want to know if there is a function I can call from C to refresh the attributes of a path and all its files. EDIT: I should add that we cannot mount and unmount, since there are 16 servers running on each system and they are constantly accessing the files. Well... maybe we could mount and unmount if each server used its own mount; I'd like to avoid that if possible. Thanks!

  • Windows XP + 7 on C:/D: - moving the System Partition

    - by user938921
    I had installed Windows 7 on a separate 20 GB partition, and I'm absolutely loving it! Plus I can now dual-boot, with my original WinXP residing on the C: drive. I was running out of disk space on D:, so I shrank C: and expanded D:. But now I would like to make D: not just a boot partition but the active system partition, without losing my ability to boot into Windows 7 (since it was created on the separate D: partition, not the currently active system partition on C:). Any advice?
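
    A hedged sketch of the usual Windows 7 sequence: copy the boot files onto D: with bcdboot, then mark that partition active with diskpart. The disk and partition numbers below are placeholders; back up the BCD store before trying this:

        :: put a boot environment and BCD store on D:
        bcdboot D:\Windows /s D:

        :: mark D:'s partition active via a diskpart script
        echo select disk 0 > active.txt
        echo select partition 2 >> active.txt
        echo active >> active.txt
        diskpart /s active.txt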

  • How to measure whether a host is good for users in Egypt?

    - by Sherif Buzz
    Hi all, I currently have a site that's hosted in Texas. The majority of my users are from Egypt, and I'm a bit concerned that the current hosting is not optimal in terms of performance. The site is not slow, but how can I know if, for example, hosting it in Europe or Asia would be better? To clarify: is there a way to test different hosting options, for example the average response time between Egypt and a host in Texas versus the average response time between Egypt and a host in the UK?
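
    From a Linux machine located in Egypt (or a VPS there), two quick hedged measurements; the hostnames are placeholders:

        # network round-trip time to each candidate host
        ping -c 10 host-in-texas.example.com
        ping -c 10 host-in-uk.example.com

        # full HTTP timing breakdown: connect time, time-to-first-byte, total
        curl -o /dev/null -s -w "connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n" http://host-in-texas.example.com/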

  • Mounting Solaris UFS partition on Debian(with FreeBSD kernel)

    - by hayalci
    I have some disks that were being used on a Solaris system. The disks are formatted as UFS. I attached them to a Debian system (with a FreeBSD kernel, i.e. Debian/kFreeBSD), but I cannot mount them:

        $ mount -t ufs /dev/da2s1 /mnt/diska
        mount: /dev/da2s1 : Invalid argument

    tunefs.ufs does not work either:

        $ tunefs.ufs -p /dev/da2s1
        tunefs.ufs: /dev/da2s1: could not read superblock to fill out disk

    Is there an incompatibility between FreeBSD UFS and Solaris UFS? Is it possible to mount one under the other OS? Note: tunefs.ufs works on the root partition:

        $ tunefs.ufs -p /dev/da7s2
        tunefs.ufs: ACLs: (-a)                                         disabled
        tunefs.ufs: MAC multilabel: (-l)                               disabled
        tunefs.ufs: soft updates: (-n)                                 disabled
        tunefs.ufs: gjournal: (-J)                                     disabled
        tunefs.ufs: maximum blocks per file in a cylinder group: (-e)  2048
        tunefs.ufs: average file size: (-f)                            16384
        tunefs.ufs: average number of files in a directory: (-s)       64
        tunefs.ufs: minimum percentage of free space: (-m)             8%
        tunefs.ufs: optimization preference: (-o)                      time
        tunefs.ufs: volume label: (-L)

  • How to automate slipstream?

    - by Gregory MOUSSAT
    For years I have used slipstreamed Windows installations. This works very well, but preparing them is tedious:

    1 - install Windows from the last slipstreamed version we have (automated install)
    2 - check Windows Update to see what's new, and take note
    3 - download each new update available
    4 - go to step 2 until no new update is available
    5 - slipstream the updates into the last version we have (I have already automated this step)

    I'd like a way to automate parts or all of this. Maybe a program able to know which updates are installed (I already saw one, I don't remember which, and I know PowerShell can do this)... and able to download them? Or to get them from the local disk? So the steps would become:

    1 - install Windows from the last slipstreamed version we have (automated install)
    2 - run Windows Update until no new update is available (any way to automate this?)
    3 - use the magic program
    4 - slipstream
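
    For the "which updates are installed" step, a hedged PowerShell sketch; note it lists installed hotfixes and will not cover updates that don't register as hotfixes:

        # enumerate installed updates, newest last
        Get-HotFix | Sort-Object InstalledOn | Format-Table HotFixID, Description, InstalledOn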

  • Problems with the backup

    - by marcodv
    I have a script that runs around 4 o'clock in the morning and backs up all the MySQL databases and the config files for 250 Linux VMs. The problem is that it takes ages to complete, and more than 50% of these VMs need over 8 hours to finish. More or less all the VMs have the same configuration, I mean:

    - the same amount of RAM
    - the same amount of disk space
    - the same number of CPUs
    - Debian 6.0.5

    I am saving these backups on Amazon S3, because it is the cheapest solution I've found. Now my question is: does anyone have solutions or suggestions? On one blog I've read that a combination of ionice and nice could be a good workaround. Any thoughts?
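
    On the ionice/nice idea, a hedged sketch of what the dump step could look like so it yields CPU and disk to everything else on the box (paths are placeholders, and ionice classes only take effect with the CFQ I/O scheduler):

        # best-effort I/O class at lowest priority (-c2 -n7), maximum CPU niceness
        nice -n 19 ionice -c2 -n7 \
            mysqldump --all-databases | gzip > "/backup/$(hostname)-$(date +%F).sql.gz"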

  • limit the speed of writing files to NFS

    - by xgwang
    CentOS 5.6. An NFS share is mounted on the server for backup disk space. When the backup job starts, it can reach 80 MB/s, and we really don't expect it to take so much bandwidth, so I need to find a way to limit the speed of writing to NFS. I tried rsync with --bwlimit=5000; however, while it did limit the reading speed, the accumulated data was still written out at 80 MB/s, with no writing activity for seconds in between. Is there any way to limit the writing speed to NFS?
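
    One hedged explanation: the client's page cache absorbs rsync's paced writes and then flushes them to the server in bursts. Mounting the share synchronously makes --bwlimit govern the actual wire speed, at the cost of slower writes overall (the server and paths are placeholders):

        # with -o sync, each write reaches the server before returning,
        # so rsync's --bwlimit paces the network traffic too
        mount -t nfs -o sync backupserver:/export/backup /mnt/backup
        rsync -av --bwlimit=5000 /data/ /mnt/backup/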

  • Missing 16:10 resolutions with Nvidia drivers (Can't add resolutions)

    - by Wuinny
    I have a laptop with an Nvidia 9650M GT and used the drivers that Windows 7 installed for me. They worked fine, but Metro 2033 told me I had to upgrade my drivers to play the game, so I did. But since I did a clean install of the new Nvidia drivers, I only have 1440x900 or 4:3 resolutions. I usually played at 1280x800, or 1184x740 for performance reasons. With the "old" drivers I was able to create a custom resolution (1184x740) in the Nvidia control panel, but now when I try, it tells me that "my monitor cannot support this resolution". When I insist, it works, but as soon as I shut down my computer I have to recreate it. Does anyone have a fix?
