Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

  • Hyper-V on 2012 R2: starting a gen1 VM causes the host to freeze

    - by sputnik
    I've searched a lot to resolve the following issue, but nothing has helped. My problem is that starting up a first-generation VM locks up the whole host; only a hard reset helps. A second-generation VM starts and runs perfectly. The freezes happened on 3 different VMs (FreeBSD, Ubuntu, Windows Server 2008 R2), while Windows 8.1 on a second-gen config works perfectly. I'm using this PC mainly as a workstation. No event log errors or dumps are generated. My system:

        Windows Server 2012 R2
        FX-8350, not overclocked
        ASRock 870 Extreme R2 (a crappy board, IMHO)
        32GB DDR3 1866 running at 1600 (despite the motherboard's claimed "support" for 1866, the RAM won't run at full speed)
        120GB SSD
        4.5TB Storage Spaces device

    I don't think it's due to my system, because VMware Workstation ran without problems. Did I forget to configure something? Any help is appreciated. P.S.: Even deactivating C1E, C6 and Cool'n'Quiet didn't help. P.P.S.: With no virtual network adapter set, the system still locks up. Creating a first-gen VM without any HDDs or network and launching it works; attaching a boot DVD causes the host to freeze. The host freezes as the gen1 VM begins to boot, no matter whether it boots from DVD or HDD.

  • Cannot install CentOS 6.5 using UEFI usb boot

    - by Vaindil
    I am trying to dual-boot CentOS 6.5 on my desktop that is currently running Windows 8.1. I have two storage devices: an SSD that has my Windows installation, and an HDD that has all of my data. Both are formatted using GPT, and Windows boots using UEFI. I used the CentOS 6.5 live DVD (CentOS-6.5-x86_64-LiveDVD.iso) to create an EFI-bootable flash drive (it does boot properly in EFI mode). I receive an error, however, when CentOS is booting (error is below). I have a 6.4 boot DVD which boots as expected, but it does not boot in UEFI mode and therefore doesn't play nicely with my Windows installation (I have no way to access it, even using rEFInd or any other similar tools). What do I need to do to get the device to boot properly in UEFI mode?

        Kernel panic - not syncing: Attempted to kill init!
        Pid: 1, comm: init Not tainted 2.6.32-431.el6.x86_64 #1
        Call Trace:
        [<ffffffff815271fa>] ? panic+0xa7/0x16f
        [<ffffffff81077622>] ? do_exit+0x862/0x870
        [<ffffffff8118a865>] ? fput+0x25/0x30
        [<ffffffff81077688>] ? do_group_exit+0x58/0xd0
        [<ffffffff81077717>] ? sys_exit_group+0x17/0x20
        [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
        drm_kms_helper: panic occurred, switching back to text console

  • Xen domain migration locking problem

    - by brodie
    I am trying to live migrate a VM (domain) between two Xen servers. I have Xen locking (xend-domain-lock = yes) configured, with OCFS2 shared storage between them. The locking is working fine: if I try to start the VM on the secondary server, it refuses to start (which is correct). The problem I am having is with live migration, where it seems to be trying to remove the lock twice. The first lock it removes is for "domain test"; the second is for "migrating-test", which does not exist. Should there be a lock for this "migrating-test" VM? These are the relevant options in the Xen config file:

        (xend-relocation-server yes)
        (xend-relocation-port 8002)
        (xend-relocation-address '')
        (xend-relocation-hosts-allow '')
        (xend-domain-lock yes)
        (xend-domain-lock-path /var/lib/xen/lock)

    This is the relevant section of the log:

        [2010-06-10 10:45:57 14488] DEBUG (XendDomainInfo:4054) Releasing lock for domain test
        [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) SUSPEND shinfo 000c6ceb
        [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) delta 21ms, dom0 95%, target 0%, sent 57Mb/s, dirtied 173Mb/s 111 pages
        [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) 4: sent 111, skipped 0, delta 6ms, dom0 100%, target 0%, sent 606Mb/s, dirtied 606Mb/s 111 pages
        [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) Total pages sent= 131295 (0.99x)
        [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) (of which 0 were fixups)
        [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) All memory is saved
        [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) Save exit rc=0
        [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:123) Domain 22 suspended.
        [2010-06-10 10:45:57 14488] DEBUG (XendDomainInfo:2757) XendDomainInfo.destroy: domid=22
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2227) Destroying device model
        [2010-06-10 10:45:58 14488] INFO (image:567) migrating-test device model terminated
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2234) Releasing devices
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vif/0
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vif, device = vif/0
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vkbd/0
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vkbd, device = vkbd/0
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing console/0
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = console, device = console/0
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vbd/51712
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/51712
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vfb/0
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vfb, device = vfb/0
        [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:4054) Releasing lock for domain migrating-test
        [2010-06-10 10:45:59 14488] ERROR (XendDomainInfo:4070) Failed to remove unmanaged directory /var/lib/xen/lock/b01515ae-9173-03cb-0cb7-06f3dfbede8b.

  • Can a pool of memcache daemons be used to share sessions more efficiently?

    - by Tom
    We are moving from a one-webserver setup to a two-webserver setup, and I need to start sharing PHP sessions between the two load-balanced machines. We already have memcached installed (and started), so I was pleasantly surprised that I could accomplish session sharing between the new servers by changing only a couple of lines in php.ini (session.save_handler and session.save_path). I replaced:

        session.save_handler = files

    with:

        session.save_handler = memcache

    Then on the master webserver I set the session.save_path to point to localhost:

        session.save_path = "tcp://localhost:11211"

    and on the slave webserver I set the session.save_path to point to the master:

        session.save_path = "tcp://192.168.0.1:11211"

    Job done; I tested it and it works. But... obviously using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes. I'm a little concerned by this, but I am more worried about the network traffic between the two webservers (especially as we scale up), because whenever someone is load-balanced to the slave webserver, their session will be fetched across the network from the master webserver. I was wondering if I could define two save_paths so the machines look in their own session storage before using the network. For example:

        Master: session.save_path = "tcp://localhost:11211, tcp://192.168.0.2:11211"
        Slave:  session.save_path = "tcp://localhost:11211, tcp://192.168.0.1:11211"

    Would this successfully share sessions across the servers AND help performance, i.e. save network traffic 50% of the time? Or is this technique only for failover (e.g. when one memcache daemon is unreachable)? Note: I'm not really asking specifically about memcache replication; it's more about whether the PHP memcache client can peek inside each memcache daemon in a pool, return a session if it finds one, and only create a new session if it doesn't find one in any of the stores. As I'm writing this, I'm thinking I'm asking a bit much from PHP, lol... Assume: no sticky sessions, round-robin load balancing, LAMP servers.
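
    A typical memcache client pool doesn't check the "local" daemon first: it hashes each key over the whole pool, so any given session always lives on the one server its ID hashes to, regardless of which webserver asks. Listing both daemons in session.save_path therefore spreads sessions across the pair (with failover as an option) rather than keeping them local. A minimal Python sketch of that selection logic, as an illustration of the concept rather than the PHP extension's actual internals:

        # Pick a server for each key by hashing over the pool. As long as
        # every client configures the same pool in the same order, they all
        # agree on where a given session lives.
        import hashlib

        SERVERS = ["localhost:11211", "192.168.0.1:11211"]  # hypothetical pool

        def server_for(key: str) -> str:
            digest = hashlib.md5(key.encode()).hexdigest()
            return SERVERS[int(digest, 16) % len(SERVERS)]

        if __name__ == "__main__":
            for sid in ("sess_abc123", "sess_def456", "sess_ghi789"):
                print(sid, "->", server_for(sid))

    On average, half the lookups still cross the network, so the scheme buys redundancy and load spreading rather than locality.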

  • How can I change the default location/action of 'Open Outlook Data File' in Outlook 2010?

    - by Chadddada
    I have recently deployed a Remote Desktop Host server that functions as a remote Microsoft Office 2010 workspace for users. As part of locking down this server, I have installed all programs on the D: drive and, through the use of Group Policy, hidden all the drives on the server from standard users. In addition to hiding these drives, I am not allowing users to save anything locally (on the server) or open Libraries. However, one of the functions of the server is to provide the Outlook client, and users will often have their .PST file stored on a network location and want to open it in Outlook. Can I change the default action or location that File > Open > Open Outlook Data File looks in or tries to pull the file from? The default location seems to be under Users / Libraries, and when clicking 'Open' you get a warning:

        This operation has been cancelled due to restrictions in effect on this computer.

    Clicking OK drops the user into a small menu that shows attached network drives under Computer. Can I instead have the 'Open' click drop the users into a defined network drive, or just open Computer and allow them to select a share? I don't want them to see the error message. A solution that looks to have been used for Office 2000/03 is:

        Key: HKEY_CURRENT_USER\Software\Microsoft\Office\<version>\Outlook
        Value name: ForceOSTPath
        Value type: REG_EXPAND_SZ
        Value: path to your storage folder

    I am not sure if there is a better way to do this now, or if this even works with Office 2010.
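
    For what it's worth, Office 2010's version key is 14.0, so if ForceOSTPath carries over, it would live under HKCU\Software\Microsoft\Office\14.0\Outlook. A sketch of setting it with Python's stdlib winreg module; the share path is hypothetical, and whether Outlook 2010 honors the value for the Open dialog (rather than just for where new data files are created) is exactly the open question above:

        # Hypothetical: set ForceOSTPath for Outlook 2010 (Office version 14.0).
        import winreg

        OUTLOOK_KEY = r"Software\Microsoft\Office\14.0\Outlook"
        PATH = r"\\fileserver\outlook$\%USERNAME%"  # hypothetical network share

        with winreg.CreateKey(winreg.HKEY_CURRENT_USER, OUTLOOK_KEY) as key:
            # REG_EXPAND_SZ so %USERNAME% expands per user at read time.
            winreg.SetValueEx(key, "ForceOSTPath", 0, winreg.REG_EXPAND_SZ, PATH)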

  • Explorer.exe keeps crashing during login

    - by asif
    I have got a weird problem. My Windows 7 machine has two user accounts (both administrators). I can log in to one account and do all sorts of work, but whenever I try to log in to the other account, it shows a blank screen and a message box pops up with "Windows Explorer has stopped working". The options available are:

        Close the program
        Check online for a solution and close the program

    The problem signature is as follows:

        Problem Event Name: InPageError
        Error Status Code: c000009c
        Faulting Media Type: 00000003
        OS Version: 6.1.7601.2.1.0.256.1
        Locale ID: 1033
        Additional Information 1: 0a9e
        Additional Information 2: 0a9e372d3b4ad19135b953a78882e789
        Additional Information 3: 0a9e
        Additional Information 4: 0a9e372d3b4ad19135b953a78882e789

    If I press Ctrl+Alt+Del and then select Start Task Manager, it also crashes. I cannot run any program using the runas command (from the good profile) either. Task Manager and runas both show the same problem signature. I read the similar question and followed all the steps, but no luck. Later, I viewed the event log and found that explorer.exe could not access a file. I checked the location, but the file is there. The actual message is:

        Windows cannot access the file C:\Users\testuser\AppData\Local\Microsoft\Windows\Caches\{AFBF9F1A-8EE8-4C77-AF34-C647E37CA0D9}.1.ver0x0000000000000020.db for one of the following reasons: there is a problem with the network connection, the disk that the file is stored on, or the storage drivers installed on this computer; or the disk is missing. Windows closed the program Windows Explorer because of this error.

    The question is: how can I resolve this issue? Should I just delete the file, or replace it with another one, to stop explorer.exe from crashing? Offtopic: what is the content of this file, and why is it necessary?

  • Drobo-like Linux file server - how do I do it?

    - by John Hunt
    I've been pondering for a long time about how I can set up a server which operates much like the Drobo storage thing. The reason I don't actually want a Drobo is that I've heard scare stories, plus I'd like to do this on the cheap. Ideally I'm looking for something like LVM, so I can create a logical volume that spans many hard disks of varying sizes... though obviously that only offers redundancy if I put the LV on a RAID array (as far as I know). I have, however, been reading about technologies such as Microsoft's Drive Extender, which duplicates files at the filesystem level and makes sure that the mirrored files are on different physical disks. Does anyone know of, or recommend, a filesystem or method like this? It'll hopefully make much better use of the available space than RAID ever could. Performance isn't an issue; I'd just really like to make the most of the hard disks I have lying around whilst having a bit of redundancy in case a disk dies. I understand full well that this is no replacement for a backup, but I'll only be storing files of medium importance and using the NAS itself as a backup of my main PC and other systems. Thanks in advance! I'm hoping ZFS or btrfs or something can do something clever for me :)
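
    btrfs's raid1 profile is one candidate here: it keeps two copies of every block on two different member devices and copes with mixed-size disks. The Drive Extender idea itself is also simple enough to sketch; a toy Python version of the placement logic, with hypothetical mount points, illustrating the policy rather than a production storage layer:

        # Toy "drive extender": mirror each file onto the two member disks
        # with the most free space, so the copies land on different drives.
        import os
        import shutil

        DISKS = ["/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]  # mixed sizes are fine

        def free_bytes(path: str) -> int:
            st = os.statvfs(path)
            return st.f_bavail * st.f_frsize

        def store(src: str) -> list[str]:
            # Pick the two emptiest disks and write one copy to each.
            targets = sorted(DISKS, key=free_bytes, reverse=True)[:2]
            copies = []
            for disk in targets:
                dest = os.path.join(disk, os.path.basename(src))
                shutil.copy2(src, dest)
                copies.append(dest)
            return copies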

  • Can I trick Carbonite into backing up an external hard drive?

    - by Brian
    I use Carbonite to back up my PC (Windows XP). We were running low on disk space on our home PC (down to 15 GB), so I went out and purchased an external hard drive. However, Carbonite will not back it up. Is it possible to set up Carbonite to back up an external hard drive? I just want the external drive to be extra disk space. From their FAQ:

        The current version of Carbonite backs up only the files that reside on permanent hard drives on your computer. It will not back up network drives, external drives, and NAS (network accessed storage) drives. If there are files on a remote drive that you wish to include in your Carbonite backup, you should copy the files to a folder on your local hard drive. If the files are on a shared network drive, you could install Carbonite on the computer on which the network shared drive physically exists, and back the files up directly from that computer. Check back soon for a Carbonite service plan that will allow you to back up your external drives.
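
    The FAQ's workaround (copy the files onto a local drive so Carbonite sees them) can be automated. A sketch in Python, with hypothetical drive letters; note that the mirror consumes local disk space, which may defeat the point of buying the external drive:

        # Mirror the external drive into a local folder that Carbonite
        # backs up, copying only new or changed files on each run.
        import filecmp
        import os
        import shutil

        SRC = "E:\\"                     # external drive (not backed up)
        DST = r"C:\CarboniteMirror"      # local folder Carbonite can see

        for root, _dirs, files in os.walk(SRC):
            target_dir = os.path.join(DST, os.path.relpath(root, SRC))
            os.makedirs(target_dir, exist_ok=True)
            for name in files:
                src = os.path.join(root, name)
                dst = os.path.join(target_dir, name)
                if not os.path.exists(dst) or not filecmp.cmp(src, dst, shallow=True):
                    shutil.copy2(src, dst)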

  • MySQL Config on Large Machine

    - by Jonathon
    We have a Windows 2003 Enterprise Edition server (64-bit) running only MySQL 5.1.45 64-bit. It has 16G RAM and 10T of hard-drive space in RAID 10. We are having horrible performance from mysqld (85-100% CPU utilization). We were running a smaller machine with better performance, so I am assuming our my.ini file is not correct for our current machine. The my.ini file is as follows:

        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        [mysqld]
        port=3306
        basedir="D:/MySQL/"
        datadir="D:/MySQL/data"
        default-character-set=latin1
        default-storage-engine=MYISAM
        sql-mode=""
        skip-innodb
        skip-locking
        max_allowed_packet = 1M
        max_connections=800
        myisam_max_sort_file_size=5G
        myisam_sort_buffer_size=500M
        table_open_cache = 512
        table_cache=8000
        tmp_table_size=30M
        query_cache_size=50M
        thread_cache_size=128
        key_buffer_size=3072M
        read_buffer_size=2M
        read_rnd_buffer_size=16M
        sort_buffer_size=2M

        #replication settings (this is the master)
        log-bin=log
        server-id = 1

    Does anyone see anything wrong with this setup? For a machine with this much RAM, why in the world would mysqld eat up so much CPU? I know we can optimize some queries, etc., but it did run okay on a smaller machine, so I am pretty sure it is the config. Thanks in advance for any help.

  • Debian: What are these files in /sys/devices/pci0000:00/ for?

    - by muhuk
    I am running Debian Squeeze on an MSI M670 laptop. I have the following files on my root drive, each 256MB:

        /sys/devices/pci0000:00/0000:00:05.0/resource1
        /sys/devices/pci0000:00/0000:00:05.0/resource1_wc

    Here is my lspci output:

        muhuk@debian:~$ lspci
        00:00.0 RAM memory: nVidia Corporation C51 Host Bridge (rev a2)
        00:00.2 RAM memory: nVidia Corporation C51 Memory Controller 1 (rev a2)
        00:00.3 RAM memory: nVidia Corporation C51 Memory Controller 5 (rev a2)
        00:00.4 RAM memory: nVidia Corporation C51 Memory Controller 4 (rev a2)
        00:00.5 RAM memory: nVidia Corporation C51 Host Bridge (rev a2)
        00:00.6 RAM memory: nVidia Corporation C51 Memory Controller 3 (rev a2)
        00:00.7 RAM memory: nVidia Corporation C51 Memory Controller 2 (rev a2)
        00:03.0 PCI bridge: nVidia Corporation C51 PCI Express Bridge (rev a1)
        00:05.0 VGA compatible controller: nVidia Corporation C51 [GeForce Go 6100] (rev a2)
        00:09.0 RAM memory: nVidia Corporation MCP51 Host Bridge (rev a2)
        00:0a.0 ISA bridge: nVidia Corporation MCP51 LPC Bridge (rev a3)
        00:0a.1 SMBus: nVidia Corporation MCP51 SMBus (rev a3)
        00:0a.3 Co-processor: nVidia Corporation MCP51 PMU (rev a3)
        00:0b.0 USB Controller: nVidia Corporation MCP51 USB Controller (rev a3)
        00:0b.1 USB Controller: nVidia Corporation MCP51 USB Controller (rev a3)
        00:0d.0 IDE interface: nVidia Corporation MCP51 IDE (rev a1)
        00:0e.0 IDE interface: nVidia Corporation MCP51 Serial ATA Controller (rev a1)
        00:0f.0 IDE interface: nVidia Corporation MCP51 Serial ATA Controller (rev a1)
        00:10.0 PCI bridge: nVidia Corporation MCP51 PCI Bridge (rev a2)
        00:10.1 Audio device: nVidia Corporation MCP51 High Definition Audio (rev a2)
        00:14.0 Bridge: nVidia Corporation MCP51 Ethernet Controller (rev a3)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        04:04.0 FireWire (IEEE 1394): O2 Micro, Inc. Firewire (IEEE 1394) (rev 02)
        04:04.2 SD Host controller: O2 Micro, Inc. Integrated MMC/SD Controller (rev 01)
        04:04.3 Mass storage controller: O2 Micro, Inc. Integrated MS/xD Controller (rev 01)
        04:09.0 Network controller: RaLink RT2561/RT61 rev B 802.11g

    I am speculating these have something to do with the shared RAM my GPU is using. But why a file on disk? And why two of them?
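
    Those files aren't on disk at all: /sys is sysfs, a virtual filesystem, and resource1/resource1_wc are mmap-able windows onto PCI BAR 1 of the GeForce Go 6100 (likely the 256MB graphics memory aperture, which fits the shared-RAM speculation); the _wc variant is the same BAR mapped write-combining. Their apparent size is simply the BAR's size, and they occupy no storage. A small sketch that decodes the device's BAR layout from the sibling 'resource' file:

        # Decode a PCI device's BARs from its sysfs 'resource' file; each
        # line is "start end flags" in hex. Run against the device above to
        # see which BAR accounts for the 256 MiB resource1 file.
        DEVICE = "/sys/devices/pci0000:00/0000:00:05.0"

        with open(f"{DEVICE}/resource") as f:
            for bar, line in enumerate(f):
                start, end, flags = (int(tok, 16) for tok in line.split())
                if start or end:
                    size_mib = (end - start + 1) / 2**20
                    print(f"BAR{bar}: {size_mib:.0f} MiB at {start:#x} (flags {flags:#x})")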

  • No clue about high load average in top

    - by Oz.
    We have several machines on Amazon (EC2) of type c1.xlarge, running the Amazon AMI. Details on the machine:

        7 GB of memory
        20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
        1690 GB of instance storage
        64-bit platform
        I/O Performance: High
        API name: c1.xlarge

    One of the machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that the top command shows no hint of the cause of the load: the CPUs are 94.1% idle and about 1.5GB of memory is free (see below). Any idea what it could be, or where else we can check? Many thanks for the help.

        # top
        top - 07:57:42 up 4:18, 1 user, load average: 1.36, 1.45, 1.47
        Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
        Cpu(s): 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 7120092k total, 5644920k used, 1475172k free, 532888k buffers
        Swap: 0k total, 0k used, 0k free, 3463936k cached

          PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
         1557 mysql  20  0 1829m 374m 6448 S 14.3  5.4 11:15.09 mysqld
         6655 apache 20  0  416m  49m 3744 S  9.3  0.7  0:04.85 httpd
        27683 apache 20  0  421m  54m 3708 S  9.0  0.8  0:00.99 httpd
         6682 apache 20  0  424m  57m 3788 S  8.3  0.8  0:03.81 httpd
        16816 apache 20  0  419m  51m 3760 S  4.3  0.7  0:04.09 httpd
        22182 apache 20  0  417m  50m 3756 S  1.7  0.7  0:06.34 httpd
          219 root   20  0     0    0    0 S  0.3  0.0  0:00.34 kworker/7:1
          699 root   20  0     0    0    0 S  0.3  0.0  0:00.40 kworker/3:1
            1 root   20  0 19376 1508 1212 S  0.0  0.0  0:00.29 init
            2 root   20  0     0    0    0 S  0.0  0.0  0:00.00 kthreadd
            3 root   20  0     0    0    0 S  0.0  0.0  0:00.71 ksoftirqd/0
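
    One thing top's CPU percentages won't show: on Linux, the load average also counts tasks blocked in uninterruptible sleep (state D, typically waiting on I/O), so load can sit near 1.4 while the CPUs are 94% idle. A quick sketch to list such tasks:

        # List processes in uninterruptible sleep ("D"); these inflate the
        # load average without using any CPU time.
        import glob

        for path in glob.glob("/proc/[0-9]*/stat"):
            try:
                with open(path) as f:
                    data = f.read()
            except FileNotFoundError:
                continue  # process exited while scanning
            # comm is parenthesized and may contain spaces; parse around it.
            rparen = data.rfind(")")
            comm = data[data.find("(") + 1:rparen]
            state = data[rparen + 2]
            if state == "D":
                print(path.split("/")[2], comm)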

  • Anonymous access to SMB share hosted on Server 2008 R2 Enterprise

    - by bwerks
    Hi all, first off: I have read through this post and a whole slew of non-SF posts which seem to address the same or a similar problem, but I was still unable to fix mine. I've got three machines in this situation:

        a domain-joined server running Server 2008 R2 Enterprise ("share server")
        a domain-joined machine running XP Pro SP3 ("workstation")
        a domain-unjoined machine running Server 2003 R2 SP2 ("test server")

    The share server exposes a share on the network that the test server must access; it's a Source/Symbol Server share for our debugging purposes. I believe Visual Studio simply accesses the share with its own credentials in this case, meaning that the share must be accessible anonymously, since the test server isn't joined to the domain and there's no opportunity to supply domain authentication. I've attempted a lot of things to avoid the authentication window when accessing the share:

        I've enabled the Guest account on the share server and given Guest full sharing/NTFS permissions for the share.
        I've given ANONYMOUS LOGON full sharing/NTFS permissions for the share.
        I've added my share to "Network access: Shares that can be accessed anonymously" in Local Security Policy (LSP).
        I've disabled "Network access: Restrict anonymous access to Named Pipes and Shares" in LSP.
        I've enabled "Network access: Let Everyone permissions apply to anonymous users" in LSP.
        I've added ANONYMOUS LOGON to "Access this computer from the network" in LSP.
        I've added the Guest account to "Access this computer from the network" in LSP.
        I've attempted to provision the share using the Share and Storage Management MMC snap-in.

    Unfortunately, when I attempt to access the share from the test server, I still see the prompt and am forced to enter "Guest" manually. I also tried this workflow using the local administrator account on the workstation, and the same thing happens both with and without XP Simple File Sharing enabled. Any idea why I'm getting these results, or what I should have done differently?

  • What is the best way to connect 3 switches to a router?

    - by Carlos Morales
    Hello everyone, I'm rebuilding the network at my work, and I've been thinking about the best way to connect three switches and a router. The router has 4 ports, so I thought I would connect two of the switches to the router (each switch connected with 2 cables) and then connect the third switch to one of the others with two cables. So it's like this: two cables from switch 1 to the router, two cables from switch 2 to the router, and two cables from switch 3 to switch 1 or 2. My questions are:

        Is it better to connect the router to each switch with a single cable, or is it a case of the more cables the better?
        If I connect switch 3 to switch 1 or 2, is it better to connect it with one cable, or do you get better performance with more cables?

    If I'm wrong and there is a better or more efficient way to connect them, please let me know. The router is a Netgear RP114 (I'll upgrade it to a SonicWall NSA 240), switch 1 is a Netgear GS748T, switch 2 is a Cisco Catalyst 2924-XL and switch 3 is a D-Link DGS-1024D. Thank you very much.

  • Understanding Red Hat's recommended tuned profiles

    - by espenfjo
    We are going to roll out tuned (and numad) on ~1000 servers, the majority of them being VMware guests on either NetApp or 3Par storage. According to Red Hat's documentation, we should choose the virtual-guest profile; what it does can be seen in tuned.conf. We are changing the I/O scheduler to NOOP, as both VMware and the NetApp/3Par should do sufficient scheduling for us. However, after investigating a bit, I am not sure why the profile increases vm.dirty_ratio and kernel.sched_min_granularity_ns. As far as I understand, increasing vm.dirty_ratio to 40% means that for a server with 20GB RAM, 8GB can be dirty at any given time, unless vm.dirty_writeback_centisecs is hit first. And while flushing these 8GB, all I/O for the application will be blocked until the dirty pages are freed. Increasing the dirty_ratio probably means higher write performance at peaks, as we now have a larger cache, but then again, when the cache fills, I/O will be blocked for a considerably longer time (several seconds). The other question is why the profile increases sched_min_granularity_ns. If I understand it correctly, increasing this value decreases the number of time slices per epoch (sched_latency_ns), meaning that running tasks get more time to finish their work. I can understand this being a very good thing for applications with very few threads, but for e.g. Apache, or other processes with a lot of threads, would this not be counter-productive?
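
    The dirty-page arithmetic above is easy to make concrete. A small sketch comparing the stock ratio with the virtual-guest profile's value (the 20 GB box is the example from the question; 20 is assumed as the stock default on RHEL 6):

        # vm.dirty_ratio is the percentage of memory that may hold dirty
        # pages before writers are forced into synchronous writeback.
        def dirty_gb(total_gb: float, dirty_ratio: int) -> float:
            return total_gb * dirty_ratio / 100

        for ratio in (20, 40):  # assumed stock default vs. tuned virtual-guest
            print(f"dirty_ratio={ratio}: up to {dirty_gb(20, ratio):.0f} GB dirty")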

  • Homebrew large data cluster access for 2 user levels?

    - by Yegor
    The title probably makes little sense, so here is an example. I have a file-hosting site that serves a large amount of semi-randomly accessed files. The setup is as follows:

        A high-horsepower front-end + DB server, which also does encoding for files that need it.
        A fresh-file server, which stores newly uploaded content that is probably (and usually) rapidly accessed. It has 500GB of RAIDed SSD storage and can push over 3GBit of traffic.
        3 cheap node servers, each containing 2 x 750GB SATA drives in RAID 1, to which files older than 2 weeks are archived from the SSD server mentioned above.

    Files on each server are accessed via subdomains (via modsec) in a straightforward fashion (server1.domain.com, server2.domain.com, etc.). Here is where I have the problem. I introduced a "premium" service where people pay a small fee every month and get ad-free, quick access to stuff on the site. Once they are logged in, they access the same files via premium.server1.domain.com, through a different modsec script with a different passphrase. That all works fine and dandy... except the cheap node servers are all IO-bound, so accessing the files on them via a different, unsaturated network makes no difference, since the disks cannot be read fast enough. What would be a good way to make files on the site accessible via 2 different network routes, one of which will be saturated (the "free" network), while all other files are on an unsaturated "premium" network?

  • What is the replacement for the floppy?

    - by alexanderpas
    While CD (and to a lesser extent DVD) disks have reached the price point of the floppy, they have one significant downside: they are WORM (Write-Once Read-Many) media, allowing each disc to be used only a single time, and you need to be explicit in writing the data to the actual media (you need to burn it). While CD-RW solves the "use only once" problem, it is still EWORM (Erasable Write-Once Read-Many) media, which means you still need to be explicit in writing the data to the actual media (you still need to burn it), and you also need to be very explicit in erasing it (a simple delete is not possible). Okay, we can use a CD-RW in packet-writing mode; however, the downside is that this mode is not very universal, and also not the native mode of the media. Now, while USB sticks and SD cards may not have the problems of the CD, they have a whole other kind of problem: their price! USB sticks and SD cards are generally 10 to 100 times as expensive per piece as diskettes. SD cards have an added problem, because they need a reader to operate; while readers are very standard, they are not default equipment on the computer like the CD drive or USB port (or, historically, the diskette drive). You wouldn't give out a USB stick or SD card with a 100 kB text file, not caring whether you got it back or not. So, to recap:

        CDs & DVDs are basically WORM media.
        SD cards and USB sticks are relatively expensive.
        SD cards also need special readers.
        Diskettes have a very low data rate.
        Diskettes have a very low storage capacity.

    Now, is there a medium out there that solves all these problems, or is there a way to get (very) small USB sticks or SD cards for a very low price (as they're the closest thing to the diskette)?

  • Show full URI/URL in Chrome's developer tools Network tab

    - by Lev
    When using Chrome to debug, I find it incredibly difficult to be efficient, because I can't see how to force the "Network" tab of the developer tools to show the full request URI. It will show the full URI if you hover over the entry and wait a second, but this is incredibly counterproductive. All of my AJAX requests are sent to ajax.php and handled by query-string arguments, like:

        ajax.php?do=profile-set
        ajax.php?do=game-save
        ... etc.

    Since I use AJAX extensively, my Network tab is filled with "ajax.php", and I have to manually hover over each and every entry to find the request I am looking for. Surely there has got to be another way!? I am constantly fed up by something new in Firefox and immediately force myself back into Chrome, but it is always the developer tools in Chrome that keep me from using it for an extended period of time. Hopefully I can find out how to do this so I can continue using Chrome as my numero uno. I've provided a screenshot to show you where I mean.

  • VMWare Setup with 2 Servers and a DAS (DELL MD3220)

    - by Kumala
    I am planning a VMware-based setup consisting of two VMware servers (2 CPUs, 256GB memory) and a DAS (Dell MD3220 with 24 x 900GB disks). Half of the virtual machines will run MS SQL databases (application, SharePoint, BI); the other half will be file services and IIS. To enhance the storage capacity, we'll be adding an MD1220 enclosure with another 24 x 900GB disks to the MD3220. Both DAS will have 2 controllers. Our current measured load is 1000 IOPS average, 7000 IOPS peak (peaks happen maybe twice per hour). We are in the planning phase now and are looking at the proper layout of the disks. The intention is to set up one of the DAS with RAID 10 only and the other with RAID 5; that will let us put each application on the DAS that best suits its performance needs. The question is how best to partition the two DAS to get the best possible IOPS/MBps; each DAS will have to have 2 hot spares.

    For the RAID 5 setup: generally speaking, would it be better to have one single disk group across all 22 disks (24 minus 2 hot spares) with both controllers assigned to the one disk group, or is it better to have 2 disk groups of 11 disks each, one assigned to each of the two controllers?

    The same question for the RAID 10 setup. The plan is: 2 disks for logs (RAID 1), 2 hot spares, and 20 disks for RAID 10.

        Option 1: 5 x 4-disk RAID 10 groups, with two groups assigned to one controller and three groups to the other.
        Option 2: one large RAID 10 across all the disks, with both controllers assigned to the same group.

    I assume there is no right or wrong and that it all depends very much on the specific application behaviour, so I am looking for some general ideas on the pros and cons of the different options. If there are other meaningful options, feel free to propose them.
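
    For rough sizing, the classic write-penalty model helps frame the RAID 5 vs. RAID 10 choice: every host write costs 4 disk I/Os on RAID 5 but only 2 on RAID 10. A sketch, where the 175 IOPS per 10k SAS disk and the 70% read mix are assumed illustrative figures, not measurements from this setup:

        # Back-of-envelope effective IOPS with the standard write-penalty
        # model: raw = disks * per_disk; host_iops * (r + (1-r)*penalty) = raw.
        def effective_iops(disks: int, per_disk: float,
                           read_frac: float, penalty: int) -> float:
            raw = disks * per_disk
            return raw / (read_frac + (1 - read_frac) * penalty)

        for name, disks, penalty in (("RAID 10, 20 disks", 20, 2),
                                     ("RAID 5,  22 disks", 22, 4)):
            print(f"{name}: ~{effective_iops(disks, 175, 0.70, penalty):.0f} "
                  f"IOPS at 70% reads")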

  • mysqld drives CPU load up, and it drops only after flush-tables

    - by mirage
    Please advise on the following issue. The normal load on the CPU is 20-30% (us + sy). After restoring the database files from the slave server (same version), a periodic problem began: mysql starts to load the CPU at 100% (us + sy grow proportionally), the queue grows, and everything slows down. But running mysqladmin flush-tables returns things to normal for a few hours. This is a dedicated Linux server running MySQL: 2 x E5506, 24GB RAM, database size 50GB.

        [OK] Currently running supported MySQL version 5.0.51a-24+lenny4-log
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics ---------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 33G (Tables: 1474)
        [--] Data in InnoDB tables: 1G (Tables: 4)
        [--] Data in MEMORY tables: 120K (Tables: 3)
        [--] Reads / Writes: 91% / 9%
        [--] Total buffers: 12.8M per thread and 7.1G global
        [OK] Maximum possible memory usage: 15.8G (66% of installed RAM)

    The server handles 4000-5500 requests per second. Relevant settings:

        key_buffer = 1536M
        max_allowed_packet = 2M
        table_cache = 4096
        sort_buffer_size = 409584
        read_buffer_size = 128K
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 500
        query_cache_size = 100M
        thread_concurrency = 24
        max_connections = 700
        tmp_table_size = 4096M
        join_buffer_size = 4M
        max_heap_table_size = 4096M
        query_cache_limit = 1M
        low_priority_updates = 1
        concurrent_insert = 2
        wait_timeout = 30
        server-id = 1
        log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        innodb_buffer_pool_size = 1536M
        innodb_log_buffer_size = 4M
        innodb_flush_log_at_trx_commit = 2

    How can I solve this problem?
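
    Since flush-tables temporarily cures it, one cheap check is whether the table cache is thrashing: watch the Opened_tables counter between episodes. A sketch using mysqladmin extended-status (credentials are hypothetical, and the pipe-delimited output parsing assumes the standard tabular format):

        # Poll Opened_tables once a minute; a counter that climbs steadily
        # under load suggests table_cache is too small for 1474 MyISAM tables.
        import subprocess
        import time

        def opened_tables() -> int:
            out = subprocess.run(
                ["mysqladmin", "-uroot", "-psecret", "extended-status"],
                capture_output=True, text=True, check=True).stdout
            for line in out.splitlines():
                if "Opened_tables" in line:
                    return int(line.split("|")[2])
            raise RuntimeError("Opened_tables not found")

        prev = opened_tables()
        while True:
            time.sleep(60)
            cur = opened_tables()
            print(f"Opened_tables grew by {cur - prev} in the last minute")
            prev = cur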

  • Free software for backing up an attached network drive

    - by Richard
    My wireless router comes with a USB connector which allows me to plug in an external hard drive, which then acts as Network Attached Storage. The problem is that I want to back up this hard drive to the external drive of another computer, so that if the NAS drive fails, I don't lose everything. However, Windows 7 Backup refuses to include the NAS as a location to back up, and I can't fool it by mapping the NAS to a drive letter either. Google presents lots of pages on how to back up files to a NAS, but not the other way around. Can anyone advise me on free software which can do incremental backups of a NAS drive to an external drive attached to the computer it is running on? I'm aware of this question, but the top answers have one or more of the following issues:

        They aren't free.
        The free version cannot back up a NAS.
        They cannot do incremental backups.
        They're just a script, and therefore have limited other functionality (e.g. disk-space management, scheduling, compression, etc.).

  • Configuring ZenOSS to monitor CPU load

    - by Tom
    I'm testing out monitoring tools for a network at work with a coworker, but neither of us has ever used any sort of monitoring tool before. Currently we are experimenting with Zenoss and having some difficulties. We want to populate our CPU load graphs, because that is one of the primary features we are looking for in a monitoring tool, but we have been unable to populate the graphs with data. So far we have installed the wmiperformance, sqldatasource, wmidatasource and snmpperformance (simple) ZenPacks, and the machine we are trying to monitor is running Windows XP. We have tried to model the device, and everything seems to run; we've tried to add data points to graphs, but the only options we receive for graphs are CPU and Memory. We are able to monitor services; Zenoss recognizes the make and model of the processor, RAM and hard drive, and is even giving us metrics on available storage. But again, we are looking for performance metrics such as CPU load and memory utilization. I realize I probably didn't provide a lot of information, but that is because we don't have a very good idea of what we are doing, and we can't find instructions on the Zenoss homepage or forums for monitoring CPU load. If someone could give us step-by-step instructions on how to set up CPU load monitoring, that would probably be more beneficial to us than a diagnosis of our current setup; but regardless, if I left out any important information that you need to answer the question, please let me know. Thank you.

  • Why is it bad to map network drives in Windows?

    - by Beeblebrox
    There has been some spirited discussion within our IT department about mapping network drives. In particular, it has been said that mapping network drives is A Bad Thing, and that adding DFS paths or network shares to your (Windows Explorer/Libraries) Favourites is a far better solution. Why is this the case? Personally I find the convenience of z:\folder better than \\server\path\folder, particularly on the command line and in scripting (of course I'm not talking about hard-coded links, naturally!). I have tried searching for the pros and cons of mapped network drives, but I haven't seen anything other than "should the network go down, the drive will be unavailable", and that is a limitation of any network-accessed storage. I have also been told that mapped network drives poll the network when the network resource is unavailable; however, I haven't found more information on this. Wouldn't this still be an issue with other network-access mechanisms (that is, mapped Favourites) whenever Windows tries to enumerate the file system (for example, when a file/folder picker dialog is opened)? In short: do network drives poll the network any more than a Windows Explorer library/favourite?

  • Is there a way to exclude a specific drive vdi from "snapshots" in VirtualBox?

    - by Graza
    ...or is there another space-efficient way of dealing with the page/swap file of the guest OS? I've realised that it's quite possible/likely that one of the things which bloats the snapshot/diff VDIs when a snapshot is taken is the guest operating system's page file. For example, say I have a 2GB swap file in a Windows guest OS, and over the course of a few weeks the usage of the swap file has gone over 1GB a couple of times. When I next create a snapshot, it seems likely that I'd be almost guaranteed around 1GB of space taken up in the new differencing disk just because of changes in the swap file. Obviously (provided I never did "live" snapshots on running or paused machines, and only ever did them when the machine was shut down), I would not need any of the information in the swap file to be saved, so this would simply be a waste of 1GB. I'm wondering if there's a way to attach a VDI to a VM and flag it as "exclude from snapshots", which would mean I could put the swap file on a different VDI that would never be included in a snapshot. Or, if anyone has any other suggestions, or an explanation of why it might not be an issue, that would help too. I could obviously delete and recreate a swap-drive VDI every time I did a snapshot to achieve the same effect, but this is a little more effort than simply clicking "create snapshot"...
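
    VirtualBox's "writethrough" medium type behaves this way: writethrough disks are not included in snapshots (their contents are neither saved nor reverted), which fits a guest swap disk. A sketch of flagging an existing swap VDI via VBoxManage, with hypothetical VM, controller and file names:

        # Re-attach the swap disk with --mtype writethrough so future
        # snapshots skip it. Equivalent one-off CLI:
        #   VBoxManage storageattach DevGuest --storagectl SATA --port 1 \
        #     --device 0 --type hdd --mtype writethrough --medium swap.vdi
        import subprocess

        VM = "DevGuest"                      # hypothetical VM name
        SWAP_VDI = "/vms/DevGuest/swap.vdi"  # hypothetical disk path

        subprocess.run(["VBoxManage", "storageattach", VM,
                        "--storagectl", "SATA", "--port", "1", "--device", "0",
                        "--type", "hdd", "--mtype", "writethrough",
                        "--medium", SWAP_VDI], check=True)

    One caveat: since a writethrough disk isn't rolled back with the rest of the VM, the guest may need to reinitialise that swap volume after a snapshot restore.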

  • HD video editing system with Truecrypt

    - by Rob
    I'm looking to do high-definition video editing and transcoding on an unencrypted standard partition, with TrueCrypt on the system partition for sensitive data. I'm aiming to keep certain data private but still have performance where needed. Goals:

        Maximum, unimpacted performance for HD video editing; encryption of the video itself is not required.
        Encrypt the system partition using TrueCrypt, for privacy of web/email data etc. in the event of loss.

    In other words, I want to selectively encrypt the hard drive, i.e. encrypt the system partition without impacting the maximum performance that would otherwise be available to me for HD video editing. The thinking is to use an unencrypted partition for the video and point the video applications at it. Assuming they use that partition only for their workspace, and not the encrypted system partition, I should expect no performance hit. Would I be correct? I guess it might depend on the application: whether the app is hard-wired to always use the system partition for temporary storage during editing and transcoding, or whether it must be installed on the C: system partition. So some real data on how various apps behave in this respect would be useful, e.g. Adobe, CyberLink, Nero, etc. I have read the excellent posting about the general performance impact of TrueCrypt, but the benchmarks weren't specific enough for my needs, where I'm dealing with HD video and using a non-encrypted partition to maintain maximum performance. My machine: an HP laptop with an Intel i7 quad-core (8 threads) at 1.6GHz (up to 2.8GHz with Turbo Boost), 4GB RAM, a 7200rpm SATA drive and Nvidia graphics.
