Search Results

Search found 6317 results on 253 pages for 'persistent storage'.

Page 173/253

  • Routing traffic to a specific NIC in Windows

    - by Stoicpoet
    I added a 10 GbE NIC to a SQL Server machine that is connected to back-end storage over iSCSI. I would like to force traffic going to a certain IP address/host to use the 10 GbE NIC, while all other traffic continues to use the 1 GbE NIC. The 10 GbE NIC is configured on a private network. So far I have added an entry in the hosts file for the host I want to reach over the private network, and when I ping the host it does return the private IP, but I'm still seeing traffic go down the 1 GbE pipe. How can I force all traffic to this host to use the 10 GbE interface? Would the best approach be a static route? 160.205.2.3 is the host's 1 GbE IP; I actually want traffic to it to route over the interface assigned 172.31.3.2, which is also defined as interface 22. That said, would this work? route add 160.205.2.3 mask 255.255.255.255 172.31.3.2 if 22
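
    A hedged sketch of the static-route approach (addresses and interface index are taken from the question; the -p flag is an addition so the route survives a reboot):

        :: Persistent host route forcing traffic to 160.205.2.3 out the 10 GbE interface
        route -p add 160.205.2.3 mask 255.255.255.255 172.31.3.2 if 22

        :: Verify which gateway/interface the destination now resolves to
        route print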

    Read the article

  • HTTPS Proxy which answers CONNECT with own certificate

    - by user1109542
    I'm configuring a DMZ with the following scheme: Internet - Server A - Security Appliance - Server B - Intranet. In this DMZ I need a proxy server for HTTP(S) connections from the intranet to the Internet. The problem is that all traffic should be scanned by the security appliance. For this I have to terminate the SSL connection at Server B, proxy it as plain HTTP to Server A through the security appliance, and then send it onward as HTTPS to the Internet. Encryption is then maintained between the client and Server B, and between the target server and Server A; the hop between Server A and Server B is unencrypted. I know about the security risks, and that the client will see a warning about the unknown CA of Server B's certificate. As software I want to use Apache web servers on Server A and Server B. As a first step I tried to configure Server B so that it serves as the endpoint for the SSL encryption, establishing the encryption with the client (answering HTTP CONNECT):

        Listen 8443
        <VirtualHost *:8443>
            ProxyRequests On
            ProxyPreserveHost On
            AllowCONNECT 443

            # SSL
            ErrorLog logs/ssl_error_log
            TransferLog logs/ssl_access_log
            LogLevel debug
            SSLProxyEngine on
            SSLProxyMachineCertificateFile /etc/pki/tls/certs/localhost_private_public.crt

            <Proxy *>
                Order deny,allow
                Deny from all
                Allow from 192.168.0.0/22
            </Proxy>
        </VirtualHost>

    With this proxy, only the CONNECT request is passed through, and an encrypted connection is established between the client and the target. Unfortunately there is no way to configure mod_proxy_connect to decrypt the SSL connection. Is there any way to accomplish this kind of proxying with Apache?
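
    For what it's worth, a minimal sketch of the usual alternative: terminate TLS on Server B with mod_ssl and relay the decrypted request as plain HTTP, rather than trying to decrypt CONNECT tunnels (certificate paths and the Server A hostname are placeholders; this assumes clients are pointed at Server B as an HTTPS endpoint instead of issuing CONNECT):

        Listen 8443
        <VirtualHost *:8443>
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/serverb.crt
            SSLCertificateKeyFile /etc/pki/tls/private/serverb.key

            # Relay the now-plaintext request to Server A through the appliance
            ProxyRequests Off
            ProxyPass        / http://server-a.internal/
            ProxyPassReverse / http://server-a.internal/
        </VirtualHost>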

    Read the article

  • Can a RAID 0 disk/config be rebuilt?

    - by Rogue
    Recently one of the hard drives in my RAID 0 configuration reported an error. What do I do now? I'm hoping that I can replace the faulty disk with a new hard drive and that the RAID can rebuild itself (using Intel Matrix Storage Console). Is this possible? Though I doubt it. Is there any way that I can rebuild the RAID, or have I lost all the data on it? TECH INFO: I have a software RAID on an Intel DG965WH motherboard and the current operating system is Windows.

    Read the article

  • GlassFish timeout

    - by Stefano
    Environment: Windows 2008 Server Edition, NetBeans 6.7.1, GlassFish 2.1, Apache 2.2.15 for win32. Original problem (almost fixed): an HTTP/1.1 GET used to send data fails if I wait more than 30 seconds. What I did: I added these lines to Apache's httpd.conf:

        #
        # Timeout: The number of seconds before receives and sends time out.
        #
        Timeout 9000
        #
        # KeepAlive: Whether or not to allow persistent connections (more than
        # one request per connection). Set to "Off" to deactivate.
        #
        KeepAlive On

    Then I went to the GlassFish admin panel (localhost:4848), under Configuration > HTTP Service, and set the request timeout to 9000 seconds (it was 30) and the standby time to -1 (it was 30 seconds). Problem: I am not able to give GlassFish a timeout bigger than 2 minutes for a GET request. I found an article about GlassFish settings, but I'm not able to find WHERE I should put those parameters, or whether they would work. Can anybody help me set this timeout to a higher limit? Maybe it's even a different setting? New attempted solution: I went to the admin panel, Configuration > Thread Pools > "thread-pool-name", and changed the idle timeout from 120 seconds to 1200 seconds. Then I restarted the GlassFish service (both from Administrative Tools and from asadmin), but it still goes idle after 120 seconds. I even tried restarting the whole server; still no results. Maybe some setting in Postgres? Or the connection of NetBeans to Postgres through GlassFish?
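
    If the admin console doesn't expose the right knob, the same settings can be tried from the command line. A hedged sketch -- these dotted property names follow GlassFish v2's domain.xml schema but may differ by release, so list them first and adjust:

        # List the HTTP service settings to find the exact property names
        asadmin get "server.http-service.*"

        # Raise the request-processing timeout (GlassFish v2 property path; verify above first)
        asadmin set server.http-service.request-processing.request-timeout-in-seconds=9000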

    Read the article

  • Hyper-V VSS writer not making current copies [migrated]

    - by Martinnj
    I'm using DiskShadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again. The first script -- the one that runs correctly but fails to do what it's supposed to:

        # DiskShadow script file to backup VM from a Hyper-V host
        # First, delete any shadow copies of the drives. System drives need to be included.
        Delete Shadows volume C:
        Delete Shadows volume D:
        Delete Shadows volume E:

        # Ensure that shadow copies will persist after DiskShadow has run
        set context persistent

        # make sure the path already exists
        set verbose on

        begin backup
        add volume D: alias VirtualDisk
        add volume C: alias SystemDrive

        # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
        # NOTE: The writer GUID is exclusive to this install/machine; it must be changed on other machines!
        writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}

        create
        end backup

        # Backup is exposed as drive X: -- make sure drive letter X is not in use
        Expose %VirtualDisk% X:
        Exit

    The next script is just a robocopy, and then an unexpose. Now, when I run the above script I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay, because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct. During the time when the Hyper-V writer becomes active, two things happen on the guest machines: the Debian/Linux machine goes to a saved state and restores when done, all fine; the Windows guests report "creating VSS snapshot sets" or something similar. Then X: gets exposed and I can copy the .vhd files over. The problem is that, for some reason, the VHD files I copy over seem to be old copies: they are missing files, users and updates that exist on the actual machines. I also tried putting the machines into a saved state manually; it didn't change the outcome. I hope someone here has an idea of how to solve this.
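
    One check worth running before the snapshot (standard Windows tooling, not part of the original scripts): a writer left in a failed state from a previous run can silently produce stale component data, so confirm the Hyper-V writer reports "Stable" with no last error:

        :: Confirm the Hyper-V VSS writer is present and healthy
        vssadmin list writers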

    Read the article

  • create symlink to another machine

    - by microchasm
    Hi, I have 2 machines, both running CentOS. Box1 is a web server with Apache and PHP. Box2 runs MySQL and holds file storage. The files will only be accessible from Box1, within the web app. I'd like to create a symlink or something similar on Box1 to a folder on Box2 where uploaded files can be stored and retrieved. With security in mind, what would be the best way to link these two boxes up in a way that is transparent to Apache? NB: the boxes are connected directly to each other via a crossover cable; no LAN access to Box2. Much thanks!
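
    A minimal sketch of the common answer, an NFS export over the crossover link (IPs, paths, and export options are assumptions):

        # On Box2, /etc/exports: export the upload folder to Box1's crossover IP only
        /srv/uploads  10.0.0.1(rw,sync,no_subtree_check)

        # On Box2: apply the export
        exportfs -ra

        # On Box1: mount it where Apache expects the files
        mount -t nfs 10.0.0.2:/srv/uploads /var/www/uploads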

    Read the article

  • Motherboard: Intel S5520HCR s1366 SSI EEB

    - by Crazy_Bash
    I'm building a storage server for online video streaming. I thought about adding two SSD drives for the OS. The other 15 drives (12 SATA & 3 SSD) I want to build with aufs + XFS, on a 4GB/sec Ethernet network. But I'm a little confused: the S5520HCR board supports 6x SATA/300 with RAID 0, 1, 10 (Intel ICH10R). Does that mean I can use SATA III HDDs? I'm planning on buying the Seagate SV35 series (3.5", 3 TB, 64 MB, SATA III-600). Also, my chassis supports up to 16 SATA drives and the motherboard only 6 -- what kind of SATA controller should I use? What's better in terms of performance, socket 1366 or 2011? My server so far: AIC RSC-3EG-80R-SA1S-2 3U; Motherboard: Intel S5520HCR s1366 SSI EEB; Kingston DDR3 8192MB PC3-10600 1333MHz (KVR1333D3N9/8G); Seagate 3000GB 64MB 3.5" 7200rpm SATA III (ST3000DM001); Kingston 480GB SSD 2.5" SATA III; Intel E1G44HTBLK; Intel Xeon E5606 2133MHz/L3-8192Kb/QPI s1366 tray; SERVER ACC CARD SAS PCIE 16P HBA 9201-16I LSI00244 SGL LSI

    Read the article

  • Server OS: put it on a separate drive? Yes, no, or depends on the situation?

    - by captainentropy
    Hi, I would like opinions, or facts, preferably both, on whether it's OK to install a server's OS on the RAID array or not. I would predict that installation on separate drives is best, but I'm interested in the performance. The server in question will have 8 cores (2.4GHz each), 24GB RAM, and ~16TB of usable space of server-class drives in RAID10. There is also a subsystem of roughly equivalent size for backup. I will be running CPU/memory-intensive applications on this server, in addition to it being file storage for my work (research lab). If I install the OS (haven't decided which one; probably Ubuntu, Fedora, or some other good Linux distro) on separate drives, will there be any performance problems if they aren't configured in RAID10? If it is better to have the OS on separate drives, should I go for 150GB VelociRaptors in RAID1 or smallish SSD drives in RAID1? Money is unfortunately a factor, as I think I'm close to maxing my budget as it is. Thanks!

    Read the article

  • Can a S3 mount be used as the document root for Apache?

    - by Hesse
    Has anyone been successful in having their DocumentRoot reside on an S3 mount (using s3fs)? I currently have a bucket mounted at /mnt/s3. I can read and write files to it, no problem. In my httpd.conf I have DocumentRoot "/mnt/s3". When I restart Apache I get the error "DocumentRoot must be a directory". Has anyone tried something similar? My goal is to have shared storage space so my nodes can scale easily and access the same document root. Thanks
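
    A hedged sketch of the usual culprit and fix, assuming the FUSE mount is only visible to the user who mounted it, so the apache user cannot even see the directory (bucket name and paths are placeholders):

        # Remount so processes other than the mounting user can traverse the mount
        s3fs mybucket /mnt/s3 -o allow_other

        # allow_other also requires this line uncommented in /etc/fuse.conf
        user_allow_other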

    Read the article

  • PXELinux and compressed kernels/images

    - by Yvan JANSSENS
    Is it possible to boot compressed kernels with a compressed initrd with PXELinux? First, a little background: we created a custom Linux distro for diskless OpenCL computing nodes. We want those nodes to fetch their OS from the network. Our distro is composed of a kernel (duh) and a large initrd, which is loaded into RAM and everything is executed from there. We chose to run everything off the initrd for two reasons: NFS was not an option to serve the filesystem's extra contents, and file access from RAM is fast. No persistent storage is needed; data and config are pulled dynamically through a SOAP service. Now, our initrd is about 450M in size. At our network speeds, it takes about two to three minutes to load a single client. Will compression speed up the downloading, and if yes, which one should be used? Is LZMA supported by PXELinux, or do we need to stick to bzip2 or gzip? Because of the 2-3 minute loading time, booting 15 nodes over the same network link takes quite a lot of time. We decided not to use hard drives or CD/DVD drives for financial reasons (the cheapest HDD @ €30, times 15, is a lot of money saved ;-) ). So, our question is: what compression options are available for this setup? And how do we do this? Thank you for your time! Yvan Janssens
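
    A sketch of the gzip/xz route (filenames are placeholders): PXELINUX just hands the initrd blob to the kernel, so the decompressor that matters is the kernel's own -- the chosen algorithm must be compiled in (e.g. CONFIG_RD_GZIP, CONFIG_RD_XZ):

        # Compress the initramfs; xz needs --check=crc32 for the kernel to accept it
        xz -9 --check=crc32 initrd.img

        # pxelinux.cfg/default entry
        LABEL opencl-node
            KERNEL vmlinuz
            APPEND initrd=initrd.img.xz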

    Read the article

  • Replicated filesystem and EC2 MySQL

    - by El Yobo
    I'm currently investigating migrating our infrastructure over to run on Amazon's EC2 and am trying to figure out the best way to set up a MySQL service. I'm leaning towards running our own MySQL instances rather than going with Amazon's RDS, but am still considering the best approach for performance and cost on the instance itself. In order to have persistent data, the MySQL data needs to be on an EBS volume (with some form of striped RAID, e.g. RAID0 or RAID10) to improve persistence. However, EBS IO is limited by the network interface (gigabit, so a theoretical maximum of 128 MB/s), while the ephemeral volumes have no such problem. I did see a suggestion for running two MySQL servers on an instance, with a master running on the ephemeral disk (which we would also RAID) and a slave storing changes to an EBS volume, but this has some additional overhead and complexity (two servers). What I was imagining is some form of replicated filesystem, such that I could have:

    - a filesystem on top of a RAID0 of ephemeral volumes, to maximise performance
    - all changes from the above immediately replicated to another RAID1 volume backed by multiple EBS volumes, to ensure no data loss

    The advantages of this would be:

    - best possible IO performance for the DB server; no network delay in IO
    - decreased IO on EBS volumes (as all read IO will be done on the ephemeral volumes), so decreased cost
    - good data security, as it's backed onto redundant EBS volumes

    However, I haven't seen an appropriate system to replicate all changes from one volume to the other; is there a filesystem, or any other approach, which will do this? The distributed filesystems, e.g. GlusterFS, DRBD etc., seem to focus on replicating disks between servers; can they be set up to do what I'm interested in here? I also haven't seen anything about others taking this approach. Do I have a solution in need of a problem here (i.e. is performance good enough, so this whole idea is redundant)? Is there some flaw in the plan?
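
    One way to get roughly this at the block layer rather than via a replicated filesystem (a sketch, not from the original post; device names are placeholders): mdadm's write-mostly flag mirrors the fast ephemeral side against an EBS-backed side while keeping all reads on the ephemeral device:

        # /dev/md0 = RAID0 over ephemeral disks (fast side), /dev/xvdf = EBS volume (durable side)
        # --write-behind lets writes to the EBS side lag; it requires a write-intent bitmap
        mdadm --create /dev/md1 --level=1 --raid-devices=2 \
              --bitmap=internal --write-behind=256 \
              /dev/md0 --write-mostly /dev/xvdf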

    Read the article

  • How to install Windows on a laptop with no CDROM drive?

    - by Jason Kester
    I have an old ThinkPad X60 that I'd like to wipe clean and rebuild. Seeing as this machine doesn't have an optical drive, what's the easiest way of installing Windows XP? I have an external USB hard drive available. Would it be possible to run the install from that instead? Otherwise, what options do I have? Edit: assuming we're using a USB mass storage device... Is there a BIOS setting that I would need to change, or will it configure itself automatically? Would the USB drive need to be configured in any special manner, or would simply having a copy of the Windows CD files in a directory there be sufficient? Since the first couple of answers that came in were basically "yes", I guess I didn't phrase my question correctly. I'm asking for detailed instructions on how to do this, not just a sanity check that I'm headed in the right direction. Thanks!
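
    For reference, a sketch of the generic USB-stick preparation with diskpart (standard commands; note that XP's installer, unlike Vista and later, generally needs a helper tool such as WinSetupFromUSB rather than a bare copy of the CD files):

        diskpart
        list disk
        rem select the USB device -- double-check the number from "list disk"
        select disk 1
        clean
        create partition primary
        active
        format fs=fat32 quick
        assign
        exit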

    Read the article

  • How to configure a crontab entry with a conditional statement for checks

    - by chz
    We'd like to monitor the NAS storage mounted on a Linux box, and only be notified via mail when the usage exceeds a certain number, say 80. We have only seen Linux books where shell scripts are called at certain times. How do we write a crontab entry that only mails us if usage exceeds 80? The usual example is:

        2 2 * * * /home/someUser/script.sh 2>&1 | mail [email protected]

    We're looking for a solution like:

        2 2 * * * if [ someNumber > "80" ]; then /home/someUser/script.sh | mail [email protected]; fi

    Sincerely
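
    A sketch of the usual pattern (mount point, threshold, and paths are assumptions): keep the test inside the script so the crontab line stays simple, and print output only when the threshold is crossed -- cron mails a job's output to the crontab's MAILTO (or the owner), so a silent run sends nothing:

        #!/bin/sh
        # check_nas.sh -- warn only when usage of the NAS mount exceeds 80%
        MOUNT=/mnt/nas
        THRESHOLD=80
        USAGE=$(df -P "$MOUNT" | awk 'NR==2 { gsub(/%/, ""); print $5 }')
        if [ "$USAGE" -gt "$THRESHOLD" ]; then
            echo "NAS usage on $MOUNT is at ${USAGE}%"
        fi

    The crontab line then becomes simply: 2 2 * * * /home/someUser/check_nas.sh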

    Read the article

  • Raid-3 like software backup tool

    - by Chronial
    I have a lot of data (about 7 TB), stored across multiple hard drives of varying sizes. I would like to have a backup of that data to be safe against drive failure. A RAID is not a good option for me, as I want to keep my costs low and be able to easily extend the storage capacity of my setup by buying an additional HD. I remember seeing a piece of software that generates parity data over all drives and stores it on an extra drive. That solution protects the setup from hard drive failure and works with varying drive sizes (as long as the parity drive is the biggest one). But I can't seem to find that software again. Does anybody know what I'm talking about, or have any other solution for my situation?
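
    This description matches snapshot-parity tools such as SnapRAID (naming it is an inference from the description, not something in the original post). A minimal snapraid.conf sketch with placeholder paths:

        # Parity file on the largest drive
        parity /mnt/parity/snapraid.parity

        # Content (metadata) files, kept on more than one drive
        content /mnt/disk1/snapraid.content
        content /mnt/disk2/snapraid.content

        # Data drives of varying sizes
        disk d1 /mnt/disk1/
        disk d2 /mnt/disk2/

    Running "snapraid sync" then (re)generates the parity.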

    Read the article

  • Using the same local folder for Dropbox and Skydrive

    - by roryok
    Ever since I switched to Windows Phone I've sorely missed having an official Dropbox app. Recently I've toyed with the idea of moving all my crucial files to SkyDrive instead. I have more storage on SkyDrive, and the WP SkyDrive integration is very handy. I'm thinking about having both cloud sync services point to the same local folder for the first few weeks. That way, if I want to go back, it's an easy task (and I can keep using Dropbox's superior public folder). Has anyone else done this? Are there any potential issues (permissions, conflicts, etc.)?
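
    One way people wire this up (a sketch; paths are assumptions, and sync clients are not guaranteed to follow junctions cleanly, so test with throwaway files first): leave Dropbox on its default folder and point the SkyDrive folder at it with a directory junction:

        :: Make SkyDrive's folder an alias for the Dropbox folder
        mklink /J "C:\Users\rory\SkyDrive" "C:\Users\rory\Dropbox"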

    Read the article

  • How to copy photos on an iPad to my PC?

    - by davidshen84
    Hi, I used iTunes to sync my photos to my iPad, but then I lost the copies of the photos on my PC, so I want to restore them from my iPad. In the storage folder that the iPad exposes, I cannot find my photos, and I am not sure whether the photo sync function in iTunes can sync the photos on my iPad back to my PC; it seems it can only sync from PC to iPad. I am not sure whether jailbreaking the iPad would help me.

    Read the article

  • SQL queries break our game! (Back-end server is at capacity)

    - by TimH
    We have a Facebook game that stores all persistent data in a MySQL database running on a large Amazon RDS instance. One of our tables is 2GB in size. If I run any queries on that table that take more than a couple of seconds, any SQL actions performed by our game will fail with the error: HTTP/1.1 503 Service Unavailable: Back-end server is at capacity. This obviously brings down our game! I've monitored CPU usage on the RDS instance during these periods, and though it does spike, it doesn't go much over 50%. Previously we were on a smaller instance size and it did hit 100%, so I'd hoped just throwing more CPU capacity at the problem would solve it. I now think it's an issue with the number of open connections. However, I've only been working with SQL for 8 months or so, so I'm no expert on MySQL configuration. Is there perhaps some configuration setting I can change to prevent these queries from overloading the server, or should I just not be running them whilst our game is up? I'm using MySQL Workbench to run the queries. Here's an example:

        SELECT *
        FROM BlueBoxEngineDB.Transfer
        WHERE Amount = 1000
          AND FromUserId = 4
          AND Status = 'Complete';

    As you can see, it's not overly complex. There are only 5 columns in the table. Any help would be very much appreciated -- thanks!
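
    A sketch of the usual first fix (column names are from the query; the index name and column order are assumptions): without an index on those columns, that WHERE clause scans the whole 2GB table, holding its connection long enough to exhaust the pool behind the load balancer:

        -- Let MySQL satisfy the filter without a full table scan
        CREATE INDEX idx_transfer_from_status_amount
            ON BlueBoxEngineDB.Transfer (FromUserId, Status, Amount);

        -- Confirm the query now uses the index
        EXPLAIN SELECT * FROM BlueBoxEngineDB.Transfer
        WHERE Amount = 1000 AND FromUserId = 4 AND Status = 'Complete';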

    Read the article

  • Automatically make or update a copy in real time on another hard drive volume whenever files are saved to a particular folder

    - by mrblint
    Whenever I save or update a file in a particular designated folder on my C: drive, I would like to make or update a copy on my network-attached storage device, ideally saving the copy to the NAS as a version rather than overwriting the copy there, if possible. I have Windows 7 x64 Ultimate. Is there any built-in feature that can accomplish this? It has to be a real copy, not merely a pointer. I'm trying to achieve some redundancy for especially critical documents (in a variety of formats) that change frequently throughout the day. P.S. I am looking for folder-level granularity; I wouldn't want this to happen for every file on the C: volume.
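
    Not versioned, but for the plain mirroring half of this, robocopy (built into Windows 7) can watch a folder and recopy on change -- a sketch with placeholder paths:

        :: Re-run the copy whenever at least 1 change is detected in the source
        robocopy "C:\CriticalDocs" "\\NAS\Backup\CriticalDocs" /E /MON:1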

    Read the article

  • Does the Dell Inspiron 1501 handle more than 4 Gb of RAM?

    - by zillion
    After the following comment on my last question, I'm thinking about upgrading my RAM: "I got a 160 GB Scorpio Blue a couple months ago for my 1501. It's nice. That + 2 GB Crucial RAM have rather revived my notebook (meaning a very nice speed and storage boost). I was outgrowing it." -- Nathaniel. What would be the best choice to add more RAM? I've already got 2 GB, but I'm not sure what its speed is. What are the size, type and speed limitations for RAM on my particular laptop?

    Read the article

  • How to delete removed devices from a mdadm RAID1?

    - by Kabuto
    I had to replace two hard drives in my RAID1. After adding the two new partitions, the old ones still show up as removed, while the new ones are only added as spares. I've had no luck removing the partitions marked as removed. Here's the RAID in question; note the two devices (0 and 1) with state "removed":

        $ mdadm --detail /dev/md1
        mdadm: metadata format 00.90 unknown, ignored.
        mdadm: metadata format 00.90 unknown, ignored.
        /dev/md1:
                Version : 00.90
          Creation Time : Thu May 20 12:32:25 2010
             Raid Level : raid1
             Array Size : 1454645504 (1387.26 GiB 1489.56 GB)
          Used Dev Size : 1454645504 (1387.26 GiB 1489.56 GB)
           Raid Devices : 3
          Total Devices : 3
        Preferred Minor : 1
            Persistence : Superblock is persistent

            Update Time : Tue Nov 12 21:30:39 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 3
         Failed Devices : 0
          Spare Devices : 2

                   UUID : 10d7d9be:a8a50b8e:788182fa:2238f1e4
                 Events : 0.8717546

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       0        0        1      removed
               2       8       34        2      active sync   /dev/sdc2

               3       8       18        -      spare   /dev/sdb2
               4       8        2        -      spare   /dev/sda2

    How do I get rid of these devices and add the new partitions as active RAID devices?

    Update: I seem to have gotten rid of them. My RAID is resyncing, but the two drives are still marked as spares and are numbers 3 and 4, which looks wrong. I'll have to wait for the resync to finish. All I did was fix the metadata error by editing my mdadm.conf and rebooting. I had tried rebooting before, but this time it worked, for whatever reason.

            Number   Major   Minor   RaidDevice State
               3       8        2        0      spare rebuilding   /dev/sda2
               4       8       18        1      spare rebuilding   /dev/sdb2
               2       8       34        2      active sync   /dev/sdc2
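
    For reference, a sketch of the commands usually involved here (device names from the question; the "detached"/"failed" keywords need a reasonably recent mdadm):

        # Drop slots whose underlying devices are gone
        mdadm /dev/md1 --remove detached
        mdadm /dev/md1 --remove failed

        # Watch the spares take over the freed slots
        cat /proc/mdstat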

    Read the article

  • Move /var directories to to /mnt on an EC2 instance

    - by Geoff Lanotte
    I am trying to work out a standard configuration for a set of EC2 instances running Ubuntu 12.04. These servers are going to be primarily web servers for a Ruby on Rails application. When you configure a new large instance, you are given a primary volume of 8GB and ephemeral storage of 400 GB mounted at /mnt. It seems logical to me to move some directories with growth potential off to the /mnt directory; I was specifically thinking of /var/www and /var/log. My question is two-fold: Is this a good idea, or are there pitfalls that I cannot see? If it is a good idea, how should I go about configuring this? I do have the ability to configure new instances and bring down our old instances. My concern is, over the long term, doing this in a way that prevents downtime. I am a developer with some experience in devops, but mounting drives is something I have not faced before, so explicit directions would be greatly appreciated.
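
    A sketch of the usual bind-mount approach (paths from the question; the rsync step and service handling are generic), with one caveat worth stressing: /mnt is ephemeral and is wiped on stop/start, so only put data there that can be regenerated or redeployed:

        # With the web server stopped, copy the data across, then bind-mount
        sudo rsync -a /var/www/ /mnt/var/www/
        sudo mv /var/www /var/www.old
        sudo mkdir /var/www
        sudo mount --bind /mnt/var/www /var/www

        # /etc/fstab entry so the bind survives a reboot
        /mnt/var/www  /var/www  none  bind  0  0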

    Read the article

  • Building a home server with a NAS appliance [closed]

    - by user51666
    Possible Duplicate: Best way to build home NAS with redundancy. I was hoping to get some ideas from folks here. I'm interested in building a home web server with a NAS appliance. It would be used primarily for storing pictures and video. I want a networked storage device so I can have multiple devices access it wirelessly as needed from within our home, and I also want the option to access it from outside the house using login/password access. I'm also interested in customizing and building my own web pages as well, preferably with Apache. Any preferences? Does anyone have an interesting, neat setup they can share? Thanks!

    Read the article

  • Server 2003 slow share.

    - by G V
    I am running an '03 box with shares active. When uploading to the share, the speed is average, about 15-20 Mbps. But when you think about it, that's bad, because it is a direct connection to a couple of machines. When uploading to another server, the connection speed is twice that of the direct storage. When uploading a massive folder (250 GB), the upload starts at normal speed, but as it progresses it drops; now it is sitting at around 2-7 Mbps. Any ideas on how I can boost the transfer rate? On a side note, the download speed is great -- the speed you would expect from this setup. The main problem is uploading, and whatever is causing the extreme slowness there. Any help would be great.

    Read the article

  • Moving software RAID to Linux

    - by terman
    I'm using a RAID 1 (mirrored pair) configuration in my media center/NAS system. Currently it's running Windows 8 (yeah, big mistake, I know) and I'm regretting it (did it for the games; not worth it). I have two software RAID 1s (3TB + 2TB) configured with Storage Spaces and, unfortunately, formatted with NTFS. Now I would like to switch to Fedora (or maybe Ubuntu, if there are advantages) for good. Is there a way I could continue using the disks as they are, without the need to format them with ext or something? I'm glad for every tip. Oh, the system disk is of course not in a RAID configuration.

    Read the article
