Search Results

Search found 12497 results on 500 pages for 'linked servers'.

Page 107/500

  • When a server IP changes, do existing TCP (e.g. http/mysql) connections remain running?

    - by Luke Cousins
    We have some PHP-FPM servers and when they need a database connection, they connect to an HAProxy server which selects them a database server to use and the connection opens. When we then want to perform some maintenance on the HAProxy servers (such as config changes requiring an HAProxy restart), the process is as follows:

    1. Shut down Keepalived on the HAProxy server
    2. Wait for the floating IP to transfer to the backup HAProxy server (also running Keepalived)
    3. Wait until HAProxy stats reports just one connection (us checking how many connections there are)
    4. Restart HAProxy
    5. Restart Keepalived

    As step 2 occurs, what will happen to the open mysql connections at that point? According to this TCP Sessions and IP Changes question the connections will be dropped. Is this really the case? If so, what, if anything, can be done to prevent this happening? Can the connection be in some way forced to use the main (non-floating) IP of the server? We also have a similar setup with two Nginx servers with Keepalived running on them and we were planning on doing the equivalent process. If we do, the same question applies - what happens to the existing http connections when the IP moves to the other server? I appreciate your help.
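
    Step 3 above is easier to script than to eyeball. A hedged sketch of two ways to count the open connections from the HAProxy box itself (the MySQL port and the stats socket path are assumptions):

        # count established client connections towards the MySQL backends
        ss -tn state established '( dport = :3306 )' | wc -l

        # or, if a stats socket is configured in haproxy.cfg, ask HAProxy directly
        echo "show stat" | socat stdio unix-connect:/var/run/haproxy.sock | cut -d, -f1,2,5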

    Read the article

  • How do you create an ssh key for the apache user on Redhat?

    - by Josh Smeaton
    As the question asks, how do I generate an ssh key for the user apache on Redhat? My use case is that we have a mercurial server running under the apache user. We also have several web servers clustered that we need to log on to manually and do pulls from. Ideally, what we'd like to do is have the mercurial server push all changes to all the webservers in the cluster. To do this, we want to use ssh, as setting up http mercurial servers on each of the web servers seems like too much work, and far too heavy. What I've tried to do is the following:

        > sudo mkdir /var/www/.ssh
        > sudo chown -R apache:nobody /var/www/.ssh
        > su - apache -c "ssh-keygen -t rsa"
        This account is currently not available.

    I found the above commands elsewhere, but I can only assume that Redhat has differences to whatever distro was used for the above. Is there a way I can generate an ssh key for the apache user?
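
    The "This account is currently not available" message comes from the apache account's login shell being set to /sbin/nologin, which su honours. A hedged sketch of two ways around it without changing the account (the key path and empty passphrase are assumptions):

        # run ssh-keygen as apache without going through a login shell
        sudo -u apache ssh-keygen -t rsa -f /var/www/.ssh/id_rsa -N ""

        # or hand su an explicit shell just for this one command
        su -s /bin/bash apache -c 'ssh-keygen -t rsa -f /var/www/.ssh/id_rsa -N ""'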

    Read the article

  • What is the best private cloud storage setup

    - by vdrmrt
    I need to create a private cloud and I'm searching for the best setup. These are my 2 most important requirements:

    1. Disk and system redundancy
    2. Price / GB as low as possible

    The system is going to be used as a backup setup which will receive data 24/7 over SFTP and rsync. High throughput is not that important. I'm planning to use glusterfs and consumer-grade 4TB hard drives. I have worked out 3 possible setups:

    1. 3 servers with 11 4TB HDDs each: set up a replica 3 glusterfs and set up each hard drive as a separate ext4 brick. Total capacity: 44TB. HDD / TB ratio of 0.75 (33 HDD / 44TB).
    2. 2 servers with 11 4TB HDDs each: the 11 hard drives are combined in a RAIDZ3 ZFS storage pool, with a replica 2 gluster setup. Total capacity: 32TB (+ zfs compression). HDD / TB ratio of 0.68 (22 HDD / 32TB).
    3. 3 servers with 11 4TB consumer hard drives each: set up a replica 3 glusterfs, set up each hard drive as a separate zfs storage pool and export each pool as a brick. Total capacity: 32TB (+ zfs compression). HDD / TB ratio of 0.68 (22 HDD / 32TB). (Cheapest)

    My remarks and concerns: If a hard drive fails, which setup will recover the quickest? In my opinion setups 1 and 3, because there only the contents of 1 hard drive need to be copied over the network, whereas in setup 2 the hard drive needs to be reconstructed by reading the parity of all the other hard drives in the system. Will a zfs pool on 1 hard drive give me extra protection against, for example, bit rot? With setups 1 and 3 I can lose 2 systems and still be up and running; with setup 2 I can only lose 1 system. When I use ZFS I can enable compression which will give me some extra storage.
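
    For the replica 3 variants, volume creation is a single command that lists the bricks in replica-sized groups. A rough sketch of what setup 1 could look like, shown for just the first two disks per server (the volume name, hostnames and brick paths are made up):

        gluster volume create backupvol replica 3 \
            srv1:/bricks/disk01 srv2:/bricks/disk01 srv3:/bricks/disk01 \
            srv1:/bricks/disk02 srv2:/bricks/disk02 srv3:/bricks/disk02
        gluster volume start backupvol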

    Read the article

  • Recommended programming language for linux server management and web ui integration.

    - by Brendan Martens
    I am interested in making an in-house web UI to ease some of the management tasks I face with administrating many servers; think Canonical's Landscape. This means doing things like applying package updates simultaneously across servers, perhaps installing a custom .deb (I use ubuntu/debian), reviewing server logs, executing custom scripts, and viewing status information for all my servers. I hope to be able to reuse existing command line tools instead of rewriting the exact same operations in a different language myself. I really want to develop something that allows me to continue managing on the ssh level but offers the power of a web interface for easily applying the same infrastructure-wide changes. They should not be mutually exclusive. What are some recommended programming languages to use for doing this kind of development and tying it into a web UI? Why do you recommend the language(s) you do? I am not an experienced programmer, but view this as an opportunity to scratch some of my own itches as well as become a better programmer. I do not care specifically if one language is harder than another, but am more interested in picking the best tools for the job from the beginning. Feel free to recommend any existing projects except Landscape (not free), Ebox (not entirely free, and more than I am looking for) and webmin (I don't like it, feels clunky and does not integrate well with the "debian way" of maintaining a server, imo). Thanks for any ideas!
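
    Whatever language ends up behind the web UI, the operations themselves can stay thin wrappers around ssh and the existing command-line tools. A minimal sketch of the "apply the same change everywhere" idea (hostnames and the command are placeholders):

        #!/bin/bash
        # run one maintenance command on every managed host, serially
        HOSTS="web01 web02 db01"
        for h in $HOSTS; do
            echo "== $h =="
            ssh "$h" 'sudo apt-get update -qq && sudo apt-get -y upgrade'
        done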

    Read the article

  • choosing hosting for custom ecommerce site, shared, dedicated, what to look for?

    - by spirytus
    Hi, I have (almost) developed a website for my client and now need to decide on hosting. Most of the users of the site will be located in Australia, and so am I and my client. Now, I want to consider everything before deciding on a host, and a few questions come to mind: I cannot afford the website being down, and all hosts say something like "99% uptime guaranteed". Should just that be enough, or shall I ask hosts for some stats maybe? Does it make any difference if the servers and the whole hosting company are located in Australia or outside? I've been hosting a few sites with JustHost.com on shared hosting (cheapest plan, servers in the US I believe) and never seen any delays, but could that be an issue? I would prefer an Australian company so I can actually go to them and give them a piece of my mind if something goes wrong, but US servers seem cheaper. Would shared hosting do? It's a custom-built ecommerce PHP application; I know there are security issues with sessions etc. on shared hosting. I will take precautions of course, but could shared hosting be an issue? Would dedicated be a worthy option considering that my knowledge of servers is very limited? I need to run php/mysql, with preferably unlimited bandwidth, as with my experience I cannot tell what amount of traffic would be sufficient. Please let me know if I didn't provide you with enough information so you could answer my questions, will gladly explain further. In advance thanks for any answers :)

    Read the article

  • glusterfs mounts get unmounted when 1 of the 2 bricks goes offline

    - by Shiquemano
    I have an odd case where 1 of the 2 replicated glusterfs bricks will go offline and take all of the client mounts down with it. As I understand it, this should not be happening. It should fail over to the brick that is still online, but this hasn't been the case. I suspect that this is due to a configuration issue. Here is a description of the system:

    - 2 gluster servers on dedicated hardware (gfs0, gfs1)
    - 8 client servers on vms (client1, client2, client3, ..., client8)

    Half of the client servers are mounted with gfs0 as the primary, and the other half are pointed at gfs1. Each of the clients is mounted with the following entry in /etc/fstab:

        /etc/glusterfs/datavol.vol /data glusterfs defaults 0 0

    Here is the content of /etc/glusterfs/datavol.vol:

        volume datavol-client-0
          type protocol/client
          option transport-type tcp
          option remote-subvolume /data/datavol
          option remote-host gfs0
        end-volume

        volume datavol-client-1
          type protocol/client
          option transport-type tcp
          option remote-subvolume /data/datavol
          option remote-host gfs1
        end-volume

        volume datavol-replicate-0
          type cluster/replicate
          subvolumes datavol-client-0 datavol-client-1
        end-volume

        volume datavol-dht
          type cluster/distribute
          subvolumes datavol-replicate-0
        end-volume

        volume datavol-write-behind
          type performance/write-behind
          subvolumes datavol-dht
        end-volume

        volume datavol-read-ahead
          type performance/read-ahead
          subvolumes datavol-write-behind
        end-volume

        volume datavol-io-cache
          type performance/io-cache
          subvolumes datavol-read-ahead
        end-volume

        volume datavol-quick-read
          type performance/quick-read
          subvolumes datavol-io-cache
        end-volume

        volume datavol-md-cache
          type performance/md-cache
          subvolumes datavol-quick-read
        end-volume

        volume datavol
          type debug/io-stats
          option count-fop-hits on
          option latency-measurement on
          subvolumes datavol-md-cache
        end-volume

    The config above is the latest attempt at making this behave properly. I have also tried the following entry in /etc/fstab:

        gfs0:/datavol /data glusterfs defaults,backupvolfile-server=gfs1 0 0

    This was the entry for half of the clients, while the other half had:

        gfs1:/datavol /data glusterfs defaults,backupvolfile-server=gfs0 0 0

    The results were exactly the same as the above configuration. Both configs connect everything just fine, they just don't fail over. Any help would be appreciated.
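
    Before digging further into the volfile, a few hedged checks on the servers usually make it clear whether the surviving brick is actually healthy when the other goes away (assuming the standard gluster CLI is available on this release):

        gluster peer status               # both peers should show State: Peer in Cluster (Connected)
        gluster volume status datavol     # every brick process should be listed as online
        gluster volume heal datavol info  # entries still pending heal after the brick returns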

    Read the article

  • Apache2: Limit simultaneous requests & throttle bandwidth per IP/client?

    - by xentek
    I want to limit simultaneous requests & throttle bandwidth per IP/client on a single apache vhost. In other words, I want to ensure that this site, which hosts large media files, doesn't get hammered by someone trying to download everything all at once (just happened the other night). I'd like to limit the outgoing transfer speed overall for this site, as well as limit the number of connections a single IP can make to the server to a sane default (i.e. within normal browser limits for multiple requests so page loads aren't affected too much). Bonus points if I can actually scope it to file types (i.e. leave web files alone, but apply these rules to just the media files). We're running Ubuntu 9.04 on all the servers, and have two apache/php servers being load balanced via Round Robin by a squid proxy server. MySQL is running on its own box as well. We've got plenty of bandwidth to give them, so I don't really want overall caps, but just want to throttle the amount of memory/CPU it takes to serve this site. There are other sites on these servers that we don't want to apply these rules to; we just want to keep this one from hogging all the resources. Let me know if you need more info! Thanks in advance for your suggestions!
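
    If an upgrade from Apache 2.2 to 2.4 is ever on the table, the bundled mod_ratelimit covers the per-connection bandwidth half and can be scoped to just the media files (the value is in KiB/s); per-IP connection limits on 2.2 usually mean a third-party module such as mod_limitipconn. A hedged sketch of the ratelimit part, with the extensions and rate as assumptions:

        <IfModule mod_ratelimit.c>
            <FilesMatch "\.(mp4|mov|zip|iso)$">
                SetOutputFilter RATE_LIMIT
                SetEnv rate-limit 400
            </FilesMatch>
        </IfModule>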

    Read the article

  • What are good systems for managing PHP/MySQL infrastructure?

    - by sbrattla
    I work in a company which is about to migrate most applications from in-house custom-built Java/Tomcat applications to Drupal. Due to company policies, applications and websites need to run on in-house servers. This means that we need infrastructure for Drupal (PHP/MySQL) applications. This must have been solved a million times already; I believe this is what web-hosting companies do every day. Even though we work on a much smaller scale than web-hosting companies, I assume it would make sense to look at the task as if we're going to have an internal small-scale web-hosting company. This means that the guys in IT operations could be "responsible" for "offering" web-hosting, while developers could use these "services". We have three environments: dev(elopment), test and prod(uction). It would make sense that developers could log in to a system and create/edit/delete dev and test sites as they'd like. Production sites should be available through the same system, but only available to IT ops. We need to work with clusters of web servers, meaning that an administration system should be capable of creating/editing/deleting sites across multiple servers. I know there's no "this is it" answer to my question; but what would be a good place to start to get going with this? Apart from the actual hardware, what would be a good administration system for this?

    Read the article

  • Offloading backups to secondary network

    - by user1467163
    I'm trying to solve a problem. Currently, we are constantly backing up and have no budget for additional servers. Our production network is still 10/100 and handles VoIP, SQL plus our backup traffic, and I'd like to offload the backup traffic onto a secondary network. All of our servers have secondary NICs that are not in use, and all support gigabit (our switching hardware does not, a topic for another day). I'd like to move my backups off the production network, but I am having a hard time getting the computers to communicate. I am using a Netgear GS724T switch for the backup network, chosen for cost and because I have used them extensively on networks saturated with ghosting traffic, so I know it's up to the task. I have defined a VLAN, with ports that are not members of any other VLAN. All traffic is untagged on the VLAN. I have set the servers with 192.168.1.10 and 192.168.1.11 addresses, 255.255.255.0 netmask, and I have tried a blank gateway, using the server's own 192.168.1.x address as the gateway, and using the switch's production-side IP as the gateway. The machines cannot find each other. DNS addresses are blank because I am going purely by IP for now... Any ideas how to get these machines to talk? They are Windows machines, running Server 2008 R2 and 2003 R2. Thanks!
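
    For a flat, non-routed backup segment like this, the secondary NICs normally need only an address and netmask, no gateway at all. A hedged example of setting one up and sanity-checking the link from the Windows command line (the adapter name is an assumption):

        netsh interface ip set address name="Backup NIC" static 192.168.1.10 255.255.255.0
        ping 192.168.1.11
        arp -a

    If the peer's MAC address shows up in the arp output, the VLAN and cabling are fine and the problem is higher up; note that Windows Firewall on 2008 R2 drops ICMP echo by default, so a failed ping alone proves little.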

    Read the article

  • WSUS performance for unneeded updates

    - by mhouston100
    We have a WSUS server serving around 300 PCs and a couple of dozen servers, and a discussion came up at work as to what products to include. We have a single SQL 2005 instance on one of the servers and it has NEVER been updated. My first thought was to just tick the box for SQL 2005 and let WSUS do its thing to upgrade to the latest service pack at least. One of the other guys here has the opinion that having updates that are relevant to only a small selection of hosts would affect the performance of WSUS as a whole, claiming that each update does a 'check' against all the hosts or something similar. My argument is that manually updating these servers is obviously not working, as the admins are not paying attention to what is needed. So my question is: Do updates that only affect a subset of the hosts affect the overall performance of the WSUS server in relation to ALL the hosts? (disk space is not an issue at this point) Is there any performance justification for or against manually updating small amounts of products? Basically I'm needing a rebuttal against his argument and I'm unable to find any concrete documentation to prove him wrong.

    Read the article

  • Is there a limit to how many sites can be hosted on a single IP address when using HTTP Host Headers on Windows 2008?

    - by Kev
    For reasons that are lost in the mists of time, our older Windows (2000, 2003) servers have been configured with an "Administrative" IP address and three further "Hosting" IP addresses. There are also additional IPs for sites with SSL certificates. The "Administrative" IP address is where all our internal provisioning, monitoring and other such apps are bound to. We lock this down and don't permit access to it from the outside world (other than over our VPN). The three "Hosting" IP addresses are used for IIS website hosting (in conjunction with host headers). Historically, new site IP address allocations have been rotated through these three IP addresses. I'm not really sure why. I'm building a new batch of servers and I'm considering just having a single hosting IP address. Our servers can host up to 1200 sites on a single machine. Is there a technical limit to the number of IIS sites that can bind to a single IP address? Our Linux platform seems to do just fine with just a single shared IP + host headers. I initially thought this might be an SEO thing, but given that IPv4 address space conservation is paramount I hardly think Google or other search engines could reasonably penalise site rankings just because hundreds of sites hang off the same IP.
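
    For reference, on IIS 7 each extra host-header site on the shared IP is just another binding string. A hedged appcmd sketch of what adding one looks like (site name, path and IP are invented):

        appcmd add site /name:"customer0001" /physicalPath:"D:\sites\customer0001" ^
            /bindings:http/203.0.113.10:80:www.customer0001.example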

    Read the article

  • Only one domain is not resolving via Windows DNS server at multiple locations, but is at others

    - by Brett G
    I'm having quite a weird issue. We had mail delivery issues to a specific domain. After looking closer, I realized that the DNS for that domain isn't resolving via the in-house Windows 2003 SP2 DNS server.

        C:\>nslookup foodmix.net
        Server:  DC.DOMAIN.com
        Address:  10.1.1.1

        DNS request timed out.
            timeout was 2 seconds.
        DNS request timed out.
            timeout was 2 seconds.
        *** Request to DC.DOMAIN.com timed-out

    (DC.DOMAIN.com and 10.1.1.1 are generic values to replace the actual ones.) Even if I run this nslookup from the DC.DOMAIN.com server, I get the same result. However, all other requests are working as they should. I had a sysadmin friend try this DNS lookup on servers at several companies that he consults for (which are also Windows 2003 AD servers). The weird thing is some of these were having the same exact issue. However, using public DNS servers works. I have tried clearing the DNS cache, restarting the server, restarting the services, etc. Nothing has worked. One weird event I noticed in the DNS Server Event Logs that might be related is an event ID of 5504 with the following description:

        The DNS server encountered an invalid domain name in a packet from 192.33.4.12. The packet will be rejected. The event data contains the DNS packet. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    In the data section below, I can see the following mentioned: ns2.webhostingstar.com. Which happens to be the nameserver for the domain in question. Several discussion threads and a MS KB have pointed to disabling EDNS. I have done this via "dnscmd /config /enableednsprobes 0" and it has not fixed the issue.
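
    A couple of hedged follow-up queries can narrow down whether the problem sits between this DNS server and the domain's authoritative servers or is something more general (run from the DC):

        REM ask the domain's own nameserver directly, bypassing the local DNS server
        nslookup foodmix.net ns2.webhostingstar.com
        REM compare against a public resolver
        nslookup foodmix.net 8.8.8.8
        REM check which NS records the local server can actually retrieve
        nslookup -type=ns foodmix.net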

    Read the article

  • 100% uptime for a web application

    - by Chris Lively
    We received an interesting "requirement" from a client today. They want 100% uptime with off-site failover on a web application. From our web application's viewpoint, this isn't an issue. It was designed to be able to scale out across multiple database servers, etc. However, from a networking issue I just can't seem to figure out how to make it work. In a nutshell, the application will live on servers within the client's network. It is accessed by both internal and external people. They want us to maintain an off-site copy of the system that in the event of a serious failure at their premises would immediately pick up and take over. Now we know there is absolutely no way to resolve it for internal people (carrier pigeon?), but they want the external users to not even notice. Quite frankly, I haven't the foggiest idea of how this might be possible. It seems that if they lose Internet connectivity then we would have to do a DNS change to forward traffic to the external machines... Which, of course, takes time. Ideas? UPDATE I had a discussion with the client today and they clarified on the issue. They stuck by the 100% number, saying the application should stay active even in the event of a flood. However, that requirement only kicks in if we host it for them. They said they would handle the uptime requirement if the application lives entirely on their servers. You can guess my response.

    Read the article

  • How to secure a group of Amazon EC2 instances

    - by ks78
    I have several Amazon EC2 instances running Ubuntu 10.04 and I've recently started using Amazon's Route 53 as my DNS. The purpose of doing that was to allow the instances to refer to each other by name rather than private IP (which can change). I've pointed my domain name (via GoDaddy) to Amazon's name servers, allowing me to access my EC2 webservers. However, I noticed I can now access the EC2 instances which I don't want to be public, such as the dedicated MySQL Server. I was thinking Amazon's Security Groups would still be in effect when using Route 53, but that doesn't seem to be the case. Before I started using Route 53, I was thinking of having one instance run a reverse proxy, which would help protect the web servers behind it. Then IP-restrict all the other instances. I know IP restricting can be done using the firewall within each instance, but should I ever need to access them from another IP address, I'd need a way in. Amazon's control panel made it a breeze to open a port when necessary. Does anyone have any suggestions for keeping EC2 instances secure, but also accessible to their administrator? Also, what's the best topology for a group of EC2 instances, consisting of web servers and a dedicated database server, from a security perspective? Does having a reverse proxy server even make sense?
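
    One hedged note on the security-group side: groups are applied to the instance's traffic regardless of which DNS name was used to reach it, so the usual pattern is to keep the database port closed to the world and open it only to the web servers' group. A sketch using the current AWS CLI (group IDs are placeholders, and the tooling available when this setup was built may have differed):

        # allow MySQL only from instances that belong to the web tier's security group
        aws ec2 authorize-security-group-ingress \
            --group-id sg-0db0000000000000 \
            --protocol tcp --port 3306 \
            --source-group sg-0web000000000000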

    Read the article

  • Can't delete ntuser.dat file to remove profiles after reboot

    - by Matrix Mole
    I've run into an issue where some servers will not release the handle on the ntuser.dat file even after a reboot. Or quite possibly, after the reboot, the ntuser.dat file is getting re-loaded into memory. The user accounts are definitely not being accessed (some of them belong to users that have not been with the company in over a year). It seems to be on Windows 2003 servers, but I can't be 100% certain that there aren't some 2000 servers showing this issue as well. When I try to use Process Explorer or handle.exe from Sysinternals to kill the handle on these ntuser.dat files, the handle remains open and connected. Handle.exe even reports that the handle was broken while it remains in use. I've even taken ownership of the file and tried to kill the handle to no effect (Windows shows I have ownership of the file, but still refuses to release the handle). I have looked into the registry to see if I can discover where the files may be getting loaded. Unfortunately, the username is appearing in too many places for me to be certain which one is actually loading their reg file into memory. Any suggestions on how I can either break the handle on the files, or prevent them from getting re-loaded after a reboot?
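
    A hedged way to see which profile hives are actually loaded after a reboot, and to map them back to accounts, from a command prompt (the SID below is a placeholder):

        REM one subkey per loaded ntuser.dat
        reg query HKU
        REM translate a suspect SID into an account name
        wmic useraccount where sid="S-1-5-21-1111111111-2222222222-3333333333-1045" get domain,name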

    Read the article

  • Hard Disk based storage library

    - by Ryan M.
    We have a Tandberg T24 tape device to handle all of our long-term backups right now. We decided that we're not backing up nearly everything that we would like to and that we still have a lot of vulnerabilities. To get to where we want to be, we're going to have to back up a lot more servers than we're currently doing. All of our internal servers have some sort of directly attached drive (i.e. a LaCie RAID box or a simple portable hard drive) doing backups, but what we want to do is get those backups off-site. The current tape drive is directly attached via SCSI to a Windows Server 2008 File Server. So to back up anything to tape, it has to be funneled through the File Server. With the current increase that we have planned, I don't think that funneling everything through the File Server is the right course of action, and I'm thinking that maybe a second backup device would be more appropriate. I would like your input on a couple of ideas.

    1) Doing HDD instead of tape. Tape is hard to deal with. We have a regular rotation cycle, so they don't need years and years of shelf life, so I'm wondering if something HDD-based would be better.
    2) Something accessible over the network. Instead of having the device directly attached to one specific machine, have it available to all the servers over the network. Our File Server is a 12-disk RAID 6 setup. I was thinking something like that, but with no RAID involved; all disks are standalone so they can be used/installed/removed on an individual basis. Does any such thing exist?

    Thanks for your ideas. I'm really interested to hear about some of the solutions you guys are using.

    Read the article

  • ssh hangs on "Last login" line

    - by Pavel H
    This happened for the first time three days ago - I ssh to the server, authenticate using a password, get the welcome message but it remains hanging on the "Last login:..." line. The command line doesn't show and the server doesn't react to my input. Other services on the server keep working ok (apache, tomcat, database, ..). The box has an out-of-band management using which I was able to restart it. After the restart the ssh worked ok again and I didn't find anything suspicious in the logs. Three days later the same problem occurs on this box again, and newly on yet another server in the cluster - 100% same symptoms. Both servers have about 2 month old installation of Debian Squeeze (6.0.2) and the problem never occurred before despite frequent ssh-ing, so it should not be a problem of settings. We haven't been installing anything new for quite some time now. I also made sure there is enough disk space on both servers. Since it started to happen all of a sudden on two servers at about the same time, I suspect some bug may have been introduced via Debian updates, yet I haven't been able to find anyone with the same problem. Most similar issues I have found: ssh freezes at the "Last Login Line" - in our case everything worked fine until recently, so nothing related to settings should be our problem. Diskspace checked, I couldn't check the memory but I would expect something would be in the logs if the system had been running out of it. Remote Fedora system unresponsive, odd but consistent behavior when trying to log in - problem with high load on the server; unlike in this case, nothing changes even if I wait for 10+ minutes
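
    Since the hang happens after authentication but before the prompt, a few hedged client-side checks can separate "sshd is stuck" from "something sourced at login is stuck" (a stale NFS mount or a slow PAM module, for example):

        # if this returns promptly, sshd and authentication are fine
        ssh user@server /bin/true && echo ok
        # get a shell while skipping profile and rc files
        ssh -t user@server /bin/bash --noprofile --norc
        # verbose client log showing exactly where the session stalls
        ssh -vvv user@server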

    Read the article

  • connecting to a network using route command

    - by ami
    I have a computer with an external IP (192.168.223.220) and also an internal address (10.1.1.20), in order to connect to some servers that don't have external addresses, only 10.1.1.xx ones. In order to connect to these servers from other machines I used the following command:

        route ADD 10.1.1.0 MASK 255.255.255.0 192.168.223.220

    and then I was able to connect to the servers using their 10.1.1.xx addresses. The problem is that the hard disk of the main server (192.168.223.220) died and was replaced, and after that I am not able to connect to the servers as before; the route command succeeds and I can ping 10.1.1.20 but not the other servers. Thanks. I am using Windows XP and the printouts are:

        D:\AurosHome\Scripts>ipconfig /all

        Windows IP Configuration
           Host Name . . . . . . . . . . . . : N100-master
           Primary Dns Suffix  . . . . . . . :
           Node Type . . . . . . . . . . . . : Unknown
           IP Routing Enabled. . . . . . . . : No
           WINS Proxy Enabled. . . . . . . . : No

        Ethernet adapter Local Area Connection 3:
           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : Intel(R) PRO/1000 EB Network Connection with I/O Acceleration #2
           Physical Address. . . . . . . . . : 00-30-48-34-BA-B9
           Dhcp Enabled. . . . . . . . . . . : No
           IP Address. . . . . . . . . . . . : 192.168.225.180
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . : 192.168.225.254
           DNS Servers . . . . . . . . . . . : 192.168.225.2

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : Intel(R) PRO/1000 EB Network Connection with I/O Acceleration
           Physical Address. . . . . . . . . : 00-30-48-34-BA-B8
           Dhcp Enabled. . . . . . . . . . . : No
           IP Address. . . . . . . . . . . . : 10.1.1.20
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . :

        Ethernet adapter Local Area Connection 2:
           Media State . . . . . . . . . . . : Media disconnected
           Description . . . . . . . . . . . : Mellanox IPoIB Adapter
           Physical Address. . . . . . . . . : 00-02-C9-25-34-0D

        D:\AurosHome\Scripts>route print

        Interface List
        0x1 ........................... MS TCP Loopback interface
        0x2 ...00 30 48 34 ba b9 ...... Intel(R) PRO/1000 EB Network Connection with I/O Acceleration #2 - Packet Scheduler Miniport
        0x3 ...00 30 48 34 ba b8 ...... Intel(R) PRO/1000 EB Network Connection with I/O Acceleration - Packet Scheduler Miniport
        0x10005 ...00 02 c9 25 34 0d ...... Mellanox IPoIB Adapter - Packet Scheduler Miniport
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0  192.168.225.254  192.168.225.180     10
                 10.1.1.0    255.255.255.0        10.1.1.20        10.1.1.20     10
                10.1.1.20  255.255.255.255        127.0.0.1        127.0.0.1     10
           10.255.255.255  255.255.255.255        10.1.1.20        10.1.1.20     10
                127.0.0.0        255.0.0.0        127.0.0.1        127.0.0.1      1
            192.168.225.0    255.255.255.0  192.168.225.180  192.168.225.180     10
          192.168.225.180  255.255.255.255        127.0.0.1        127.0.0.1     10
          192.168.225.255  255.255.255.255  192.168.225.180  192.168.225.180     10
                224.0.0.0        240.0.0.0        10.1.1.20        10.1.1.20     10
                224.0.0.0        240.0.0.0  192.168.225.180  192.168.225.180     10
          255.255.255.255  255.255.255.255        10.1.1.20        10.1.1.20      1
          255.255.255.255  255.255.255.255        10.1.1.20            10005      1
          255.255.255.255  255.255.255.255  192.168.225.180  192.168.225.180      1
        Default Gateway:   192.168.225.254
        Persistent Routes:
          None
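
    One detail that stands out in the output above is "IP Routing Enabled : No". For other machines to reach the 10.1.1.x network through this dual-homed box, Windows has to forward packets between its two NICs, and that setting may simply not have been restored on the rebuilt disk. A hedged sketch of re-enabling it (this is the standard registry value; a reboot is required afterwards):

        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter /t REG_DWORD /d 1 /f
        REM after the reboot, "ipconfig /all" should show IP Routing Enabled : Yes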

    Read the article

  • Server 2012 - transparent SMB failover without shared discs, possible?

    - by TomTom
    here is the scenario - there is a small set (around 200GB) of data that I HAVE to keep available. Those are basically shared VHD images that serve as master images for a lot of our VMs - they then run in differencing disks off those. The whole set is "mostly read only". In more detail: A file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but a file that is there once gets read protection set and that's how it stays until it is retired. Obviously, I need as much uptime as possible. SO FAR we run that by having this directory local on every Hyper-V server. Now I am thinking of moving this into our storage fabric. Due to the "it HAS to be there" I pretty much want a share-nothing architecture. DFS would be perfect for this - a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and all would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers - we try to move into a scenario where the storage is more centralized. Server 2012 supports always-on shares, but it seems that this only works with a clustered disk behind it. Is there any way around this for read-only file stores? All documentation points to stuff like a shared JBOD - but that would leave me open to file system corruption. I really plan to go quite separately here, vertically - 2 servers, both with SSD only for this, both with their own separate 2000W UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking this is 10G - this would be SLOW and EXPENSIVE compared to a nice Infiniband backbone). The real crux is that this is an edge case obviously - as the files are read only once in use.

    Read the article

  • How do I configure IIS to allow access to network resources for PHP scripts?

    - by Dereleased
    I am currently working on a PHP front-end that joins together a series of applications running on separate servers; many of these applications generate files that I need access to, but these files (for various reasons) reside on their parent servers. If I, from the command line, issue a bit of script such as:

        <?php
        var_dump(glob("\\\\machine-name\\some\\share\\*"));

    I will get the full contents of that directory, proving that there's no problem programmatically with PHP reading the contents of a UNC share. However, if I try to execute the same script from the web server, I get an empty array -- more specifically, if I explicitly use functions designed to "open" a directory like it was a file, I get access errors. I believe this to be a permissions issue, but I am not a server/network administrator type, so I'm not sure what I need to do to correct this and get my script running, and the links I've checked out have not been a terrible amount of help, perhaps due to my background, or lack thereof as far as IIS is concerned, coupled with the fact that we are not actually using .NET for this. Relevant stats:

        Windows Server 2008 Standard SP2
        IIS 7.0
        PHP 5.2.9

    I will be connecting to two types of servers: a few other nearly-identical Server 2008 machines, and a machine running embedded XP. Links that have not been particularly helpful, but maybe I am just misreading:

        http://support.microsoft.com/?id=306158
        http://support.microsoft.com/kb/207671/EN-US/
        http://support.microsoft.com/kb/280383/
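
    The usual root cause on IIS is that the application pool runs as a local identity (NETWORK SERVICE by default on IIS 7.0) that means nothing to the remote machines, so the UNC access is denied even though the same script works for a logged-on user. One hedged fix, sketched with appcmd (the pool name and domain account are placeholders, and that account also needs share and NTFS permissions on the remote side):

        %windir%\system32\inetsrv\appcmd set apppool "PHP-AppPool" ^
            /processModel.identityType:SpecificUser ^
            /processModel.userName:DOMAIN\svc-php ^
            /processModel.password:SecretHere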

    Read the article

  • Replicated filesystem and EC2 MySQL

    - by El Yobo
    I'm currently investigating migrating our infrastructure over to run on Amazon's EC2 and am trying to figure out the best way to set up a MySQL service. I'm leaning towards running our own MySQL instances, rather than going with Amazon's RDS, but am still considering the best approach for performance and cost on the instance itself. In order to have persistent data, the MySQL data needs to be on an EBS volume (with some form of striped RAID, e.g. RAID0 or RAID10) to improve persistence. However, EBS IO is limited by the network interface (gigabit, so a theoretical maximum of 128 MB/s), while the ephemeral volumes have no such problem. I did see a suggestion for running two MySQL servers on an instance, with a master running on the ephemeral disk (which we would also RAID) and a slave storing changes to an EBS volume, but this has some additional overhead and complexity (two servers). What I was imagining is using some form of replicated file system such that I could have:

    - a filesystem on top of a RAID0 of ephemeral volumes to maximise performance
    - all changes from the above immediately replicated to another RAID1 volume backed by multiple EBS volumes to ensure no data loss

    The advantages of this would be:

    - best possible IO performance for the DB server; no network delay in IO
    - decreased IO on EBS volumes (as all read IO will be done on the ephemeral volumes), so decreased cost
    - good data security, as it's backed onto redundant EBS volumes

    However, I haven't seen an appropriate system to replicate all changes from one volume to the other; is there a filesystem, or any other approach, which will do this? The distributed file systems, e.g. GlusterFS, DRBD etc., seem to focus on replicating disks between servers; can they be set up to do what I'm interested in here? I also haven't seen anything about others taking this approach. Do I have a solution in need of a problem here (i.e. is performance good enough, so this whole idea is redundant)? Is there some flaw in the plan?
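
    One hedged way to express the "read from ephemeral, write to both" idea with stock tools is a mixed md RAID1, marking the EBS-backed member write-mostly so reads are served by the ephemeral side; whether its durability guarantees are strong enough here is a separate question, and the device names below are assumptions:

        # md1 = RAID0 over the ephemeral disks, md2 = array over EBS volumes (both built beforehand)
        mdadm --create /dev/md0 --level=1 --raid-devices=2 \
              /dev/md1 --write-mostly /dev/md2
        mkfs.ext4 /dev/md0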

    Read the article

  • Hostname vs webpage domain.

    - by Mark
    Hi All, I'm just starting to look at deploying a webpage and getting into the joys of DNS etc., and I'm wondering how you set up multiple web servers, all with their own hostnames/public IP addresses, and yet have them serve up a webpage from one domain. For example, let's say you have a website example.com, and an A record in DNS that points at its IP address of 1.2.3.4. You want to have two servers, prod1 and prod2, with some kind of load balancer in front of them for failover reasons. The way I see it, you would want to have the hostnames of these servers as prod1.example.com and prod2.example.com and perhaps loadb.example.com. How would you set up the DNS so this would all work, i.e. you could ssh to any of the server domains (prod1.example.com, prod2.example.com or loadb.example.com) and also just use the www.example.com url to go to the website? And would all these server names be resolvable from the public internet, and is that safe? This would be a linux environment, for argument's sake ubuntu, a django framework dynamic website, running in apache 2.2. Cheers Mark
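
    A hedged sketch of how the records could be laid out in the example.com zone (addresses are invented), with the site name pointing at the balancer and each box keeping its own name for ssh and monitoring:

        prod1           IN  A       203.0.113.11
        prod2           IN  A       203.0.113.12
        loadb           IN  A       203.0.113.10
        www             IN  CNAME   loadb.example.com.
        example.com.    IN  A       203.0.113.10

    Whether prod1 and prod2 should resolve publicly at all is a separate choice; keeping them in an internal-only zone and exposing just www and loadb is a common compromise.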

    Read the article

  • Email Proxy Ideas

    - by jtnire
    Hi Everyone, I wish to host some managed email servers for some customers. Each customer will have their own email server, which will be an all-in-one virtual machine running postfix, dovecot and some webmail suite. Even though each customer will have their own server, I do not wish to give each email server its own public-facing IP. I wish to make use of proxy servers so all customers use the same public IP. As for the "smtp-in" from the public internet, this isn't a problem as I can set up many MX servers (using postfix) which will store-and-forward the mail to the correct server (using transport maps). As for the IMAP access from the customer, I was thinking of using perdition, which is an IMAP proxy - I believe that this will suit my needs. I am confused however on what to use for the "smtp-out" proxy. The customers will have to authenticate with their respective email server, however they will have to go via a proxy of some sort as they won't have direct access to their server instance. It probably can't be a store-and-forward proxy either. Does anyone have any idea on what I could use here? Many Thanks
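
    For the "smtp-in" part described above, the per-customer routing on the store-and-forward MX boxes is typically just a postfix transport map; a hedged sketch with placeholder domains and internal addresses:

        # /etc/postfix/transport
        customer1.example    smtp:[10.0.1.11]:25
        customer2.example    smtp:[10.0.1.12]:25

        # compile the map and point postfix at it
        postmap /etc/postfix/transport
        postconf -e "transport_maps = hash:/etc/postfix/transport"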

    Read the article

  • Having an issue trying to get Gigabit speed across my network (Ubuntu Server)

    - by user94217
    I've just started looking into the network speeds at my office; the entire network is set up to be "Gigabit". This includes Gb switches, Gb network cards and Cat 5e cabling. I'm not expecting the full speed, I just want more than ~90 Mb/s. I've been running some tests with the Linux tool iperf and checking the hardware with ethtool. I have 3 servers, and when doing my checks/tests I discovered that the two backup servers can access each other at around 450 Mb/s, but when using either one of them to connect and test the main server, I only get 90 Mb/s even though ethtool shows the network card running at 1000/Full. The only difference between all the servers/network cards is the "Port" which ethtool shows. On the two backup servers the "Port" is shown as MII, yet on the other it's shown as "Twisted Pair". When using ethtool -s to manually set the "Port" to MII on the main server, it loses all connectivity and does not show "Speed" or "Duplex". Anyway, am I doing something wrong? Is there a specific reason my main server cannot use Gb when there appears to be no difference except the "Port"?
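
    For the record, these are the kinds of hedged checks being described, run between the main server and one backup server (the interface name is an assumption, and iperf here is the classic v2 syntax):

        # negotiated speed, duplex, port type and link partner info for the suspect NIC
        ethtool eth0
        # raw TCP throughput: start the listener on the backup server first
        iperf -s
        # then from the main server
        iperf -c backup1 -t 30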

    Read the article

  • IPTables Reroute SSH based on Connection string?

    - by senrabdet
    We are using a cloud server (Debian Squeeze) where public ports on a public IP route traffic to internal servers. We are looking for a way to use iptables and ssh where, based on some part of the ssh connection string (or something along these lines), iptables will reroute the ssh connection to the "right" internal server. This would allow us to use one common public port, and then re-route ssh connections to individual servers. So, for example, we hope to do something like the following:

    - the user issues an ssh connection (public key encryption) such as ssh -X -v -p xxx [email protected], but maybe adds something into the string for iptables to use
    - iptables uses some part of that string, or some other means, to re-route the connection to an internal server using something like:

        iptables -t nat -A PREROUTING ! -s xxx.xxx.xxx.0/24 -m tcp -p tcp --dport $EXTPORT -j DNAT --to-destination $HOST:$INTPORT

    ...where $HOST is the internal IP of a server, $EXTPORT is the common public facing port and $INTPORT is the internal server port. It appears that the "string" aspect of iptables does not do what we want. We can currently route based on the iptables syntax we're using, but rely on having a separate public port for each server, and are hoping to use one common public port and then re-route to specific internal servers based on some part of the ssh connection string or some other means. Any suggestions? Thanks!
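
    For context, the per-port DNAT currently in use looks roughly like the sketch below (addresses, ports and interface are placeholders). One caveat worth noting: netfilter only sees packet headers and payload, and the ssh username is exchanged after encryption has been negotiated, so there is nothing in the connection string for iptables to match on at this layer.

        # external port 2201 -> internal host A, 2202 -> internal host B
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2201 -j DNAT --to-destination 10.0.0.11:22
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2202 -j DNAT --to-destination 10.0.0.12:22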

    Read the article
