Search Results

Search found 17856 results on 715 pages for 'setup py'.

  • 2 Server FC SAN Configuration

    - by BSte
    I have 2 identical servers:

    - 48 GB RAM
    - 8x GigE NICs
    - 2x FC NICs
    - 2x 72 GB hard drives in RAID 1
    - Server 2008 R2 host

    I also have a Fibre Channel SAN:

    - 16x 146 GB hard drives in RAID 10
    - 2x dual-port FC controllers (controllers A and B both have ports 1 and 2)
    - Server 1 has fiber to ports A1 and B1; Server 2 has fiber to ports A2 and B2
    - I kept the default config with 1 virtual disk and 1 volume
    - The default mappings show ports A1, A2, B1, B2 on LUN 0 with read-write

    My goals are:

    - 2x VMs with IIS and guest-level failover
    - 2x VMs with SQL 2008 Enterprise using a single DB and guest-level failover
    - 1x VM as an application server, preferably with host failover. From what I've read, this will also need AD for clustering to work.
    - At least 1 VM always running for IIS and the SQL DB. This covers both hardware failover and application-level events (e.g. rebooting a VM for critical updates).

    I was told I could install the VMs and run them from the SAN, and this is what I've tried:

    - Installed MPIO and Hyper-V on Server 1 and Server 2
    - Added the SAN as disk E: on both servers, made it GPT, and formatted it NTFS
    - Configured Hyper-V on both servers to use E:\VD and E:\VHD

    On Server 1, I was able to install 3 VMs on the SAN and all worked well. On Server 2, I would start installing the other 2 VMs, but at some point the VMs would always get a corrupt .VHD message (on either server). Everything I found about that message typically related to antivirus, so I removed all antivirus from both host servers (now running only 2008 R2). I reformatted drive E: (the SAN), recreated the VHD and VD directories, installed 3 VMs on Server 1, and then hit the same issue when installing VMs on Server 2. Obviously something is wrong, but I'm not certain what exactly. My questions:

    1) Are my goals possible with this hardware setup? I've read that 2008 R2 supports FC SANs, but a lot of articles seem to give examples only with iSCSI setups.
    2) What would be the suggested route for setting up the SAN (disks, volumes, LUNs)?

    I've worked with Hyper-V on a single machine before and never had issues. Actual experience working with SANs and clustering is new to me. Any suggestions or recommendations to get me pointed in the right direction would be much appreciated.

  • authbind, privbind or iptables REDIRECT (port 80 to 8080)?

    - by chris_l
    Hi, I'd like to run Glassfish v3 as a non-privileged user on Linux (Debian), but make it available on port 80. I'm currently doing this with iptables:

        iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 80 -j REDIRECT --to-port 8080

    This works, but I wonder:

    - if this has any significant performance impact compared to binding directly to port 80
    - if I could make a similar setup also work for HTTPS (or if that must run on 443)
    - if there's a way to keep other users from binding to port 8080 (in case my server crashes); maybe block that port permanently for other users somehow?
    - ...or if I should use authbind/privbind instead?

    Problem: I couldn't make it work with authbind or privbind so far. For authbind, I edited asadmin's last line to:

        exec authbind --deep "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...

    For privbind:

        exec privbind -u glassfish "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...

    (Only) with these settings, I can successfully perform a create-domain --domainport 80. This proves that authbind and privbind actually work (the authbind version of the script is called by the glassfish user; the privbind version is called by root, of course). However, in both cases I get the following exception when starting the domain (start-domain):

        [#|2010-03-20T13:25:21.925+0100|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=11;_ThreadName=FelixStartLevel;|Shutting down v3 due to startup exception : Permission denied: 80=com.sun.enterprise.v3.services.impl.monitor.MonitorableSelectorHandler@1fc25e5|#]

    I haven't found a solution for that yet (after searching the web, it seems this isn't so easy?). But maybe the solution with iptables is good enough; what do you think? Thanks, Chris
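
    A capabilities-based alternative that sometimes comes up for this situation is to let the JVM binary itself bind low ports, so neither a redirect nor a wrapper is needed. This is only a sketch under assumptions: the java path below is an example, and the capability applies to anyone who can execute that binary.

        # Hedged sketch: allow a specific JVM binary to bind ports < 1024
        # without root (on Debian, setcap is in the libcap2-bin package).
        # The java path is an assumed example; point it at the JVM that
        # Glassfish actually launches.
        sudo setcap 'cap_net_bind_service=+ep' /usr/lib/jvm/java-6-sun/bin/java

        # Verify the capability took effect
        getcap /usr/lib/jvm/java-6-sun/bin/java

    The trade-off is that the privilege is attached to the binary, not to the glassfish user, so any account that can run that java binary can bind privileged ports too.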

  • Virtual Machine loses network connectivity on Hyper-V Cluster

    - by Chris W
    We're running a number of VMs on a 6-node failover cluster of blades using Hyper-V. We have an intermittent issue (every few days, at different times; there is no fixed frequency) of VMs losing network connectivity. Console access to the VM suggests all is fine, and the underlying blade has normal connectivity. To resolve the problem we either have to restart the VM or, more usually, we do a live migration to another blade, which brings connectivity back, and we then migrate it back to the original blade.

    I've had 3 instances of this happen with a specific VM running on a particular blade; however, it has happened once with a different VM running on a different blade. All VMs and blades have the same basic setup and are running Windows 2008 R2. Any ideas where I should be looking to diagnose the possible causes of this problem, given that the event logs provide no help?

    Edit: I've checked that each blade is running the latest NIC drivers and all seem to be fine. Something that is confusing me: a failover or restart of the VM resolves the issue. While I need to work out the underlying issue that is causing the NICs to hang, I'm also concerned that the VM didn't fail over to another node, which would have resolved the outage for me. Is there a way to configure the cluster so that it can tell that the VM guest has lost connectivity and fail it over? As things stand, the cluster assumes the VM is running happily, since I presume Hyper-V reports that everything is fine even though there is a problem.

  • PAM Winbind Expired Password

    - by kernelpanic
    We've got Winbind/Kerberos set up on RHEL for AD authentication. It's working fine; however, I noticed that when a password has expired, we get a warning but shell access is still granted. What's the proper way of handling this? Can we tell PAM to close the session once it sees the password has expired? Example:

        login as: ad-user
        [email protected]'s password:
        Warning: password has expired.
        [ad-user@server ~]$

    Contents of /etc/pam.d/system-auth:

        auth        required      pam_env.so
        auth        sufficient    pam_unix.so nullok try_first_pass
        auth        requisite     pam_succeed_if.so uid >= 500 quiet
        auth        sufficient    pam_krb5.so use_first_pass
        auth        sufficient    pam_winbind.so use_first_pass
        auth        required      pam_deny.so

        account     [default=2 success=ignore] pam_succeed_if.so quiet uid >= 10000000
        account     sufficient    pam_succeed_if.so user ingroup AD_Admins debug
        account     requisite     pam_succeed_if.so user ingroup AD_Developers debug
        account     required      pam_access.so
        account     required      pam_unix.so broken_shadow
        account     sufficient    pam_localuser.so
        account     sufficient    pam_succeed_if.so uid < 500 quiet
        account     [default=bad success=ok user_unknown=ignore] pam_krb5.so
        account     [default=bad success=ok user_unknown=ignore] pam_winbind.so
        account     required      pam_permit.so

        password    requisite     pam_cracklib.so try_first_pass retry=3
        password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
        password    sufficient    pam_krb5.so use_authtok
        password    sufficient    pam_winbind.so use_authtok
        password    required      pam_deny.so

        session     [default=2 success=ignore] pam_succeed_if.so quiet uid >= 10000000
        session     sufficient    pam_succeed_if.so user ingroup AD_Admins debug
        session     requisite     pam_succeed_if.so user ingroup AD_Developers debug
        session     optional      pam_mkhomedir.so umask=0077 skel=/etc/skel
        session     optional      pam_keyinit.so revoke
        session     required      pam_limits.so
        session     optional      pam_mkhomedir.so
        session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
        session     required      pam_unix.so
        session     optional      pam_krb5.so

  • Installer not being updated (probably because of Windows 7 file cache)

    - by Sithu Kyaw
    I'm creating an installer for my Visual FoxPro application using ISTool and Inno Setup. It worked fine the first time. But then I updated my code, re-built the EXE file, and compiled the installer again. I found that my update was not compiled into the installer, and I did not see the update in my running application. I noticed that the EXE file, which was built by VFP, was updated properly; it seems the installation script did not output the updated file. But when I changed folder names, it did work. I don't want to change folder names whenever I run that installation script; that is not a good idea, actually. I think it is because of the Windows 7 cache system. Mine is Windows 7 Home Premium Service Pack 1.

    For example, my previous output file is located at C:\path\to\myinstaller.exe. When I compile the installation script, the output file there should be overwritten, but it was not overwritten as expected. Even after I deleted the file, it did not work. When I changed the output file path to C:\newpath\to\myinstaller.exe, I got the fix, but that is not the solution I'm looking for. Does anyone know how to fix this?

    [Edit] I found that the installed directory was not updated properly either. For example, I installed the program to C:\Program Files\MyInstalledApp. When I run the installer again, that installation directory should be overwritten, but this fails. Thus, I have to uninstall the app before I re-install it. Is there any fix for this?

  • Windows XP SP3 Keyboard stops working

    - by Kevin K
    Here's the strangest thing I have yet to see in 20+ years of computer repairs. My in-laws' Windows XP SP3 machine has stopped recognizing keyboards. The keyboards work fine in the BIOS, during the boot selection to boot normally, etc., but once Windows comes up it will not recognize any USB keyboard. The USB mouse works fine. I have tried different USB ports, different keyboards, etc.; nothing works. I can log into the machine via VNC and use the remote keyboard just fine, but not a keyboard connected locally. I tried a system restore; it says nothing changed.

    I am about to just re-install Windows at this point, except I am afraid it will happen again. I have googled for this and it is not unheard of, but I have not found any solution other than nuking it. Anyone have any ideas? I have re-installed the USB drivers for the motherboard, gone into Device Manager and deleted the devices for a re-install, etc. The keyboard works off a Linux live boot CD and in the BIOS setup, so it is not a hardware issue, and I have tried a few keyboards, all of which I know are good and work fine on other systems.

  • Weird mouse/keyboard freezups when using PowerPoint 2007 with IBM/Lenovo docking station

    - by DanM
    I'm not sure what part of my system is responsible for this, but when using PowerPoint, I have problems when trying to resize drawing objects. I'll be dragging the handle and suddenly the object will deselect, and whatever is behind the object will get selected and start moving around. The next thing I know, the keyboard won't type anymore, and the only way to fix it is to unplug the USB and plug it back in. In case it's hardware-related: I'm using an IBM ThinkPad T60p in a docking station, my keyboard is a Microsoft Natural Keyboard Pro, and my OS is Windows XP SP3. I've never noticed this happening in anything besides PowerPoint, and I don't know anyone else who has this problem (even people with similar setups). Any ideas what it could be?

    Edit: Well, it looks like I only get the problem if I plug the mouse into my docking station's USB. If I plug directly into the laptop's USB, everything works fine. And, again, this problem occurs only with PowerPoint; I tried playing with some drawing objects in Word and had no issue no matter where my mouse was plugged in. I should also mention I tried a different mouse (a standard Microsoft corded mouse instead of my Logitech trackball), but that made no difference, so I don't think it's anything specific to the trackball or the trackball's driver. I tried searching Google but came up empty, so I'm guessing this problem is something unique to my setup. If you have any thoughts or ideas to try, I'd love to hear them.

  • Windows Server (SBS) 2008 - Telephony service won't start (missing permissions)

    - by Uri
    I am running an SBS 2008 server, set up as the domain controller for the network. After a reboot, the Telephony service (and all services that depend on it) refuses to start under the Network Service account. The error given is:

        Error 1297: A privilege that the service requires to function properly
        does not exist in the service account configuration. You may use the
        Services Microsoft Management Console (MMC) snap-in (services.msc) and
        the Local Security Settings MMC snap-in (secpol.msc) to view the
        service configuration and the account configuration.

    This has made all the network services inaccessible, e.g. Terminal Services, VPN (RRAS), and the SQL Server instances. The SSH daemon I have running on the box will accept connections only from localhost and won't respond on the network.

    After searching around, the only advice I could find was to grant the Network Service account these permissions:

    - Adjust memory quotas for a process
    - Replace a process level token

    I set those permissions on both the Default Domain Policy and the Default Domain Controller Policy, but it seemingly had no effect. Most of the services will start if I change them to run under the Local System account, but that didn't make them accessible on the network. I even tried removing the Routing and Remote Access Services feature, rebooting, and reinstalling it, but the issue remains. Any ideas?

  • Vim: Custom Folding function done, custom highlighting required

    - by sixtyfootersdude
    I have defined a function in vim to properly indent folds, i.e. so they look like this:

    Unfolded:

        this is text
        also text
            indented text
            indented text
        not indented text

    Folded with the default function:

        this is text
        also text
        +-- 2 lines: indented text ----------------------------
        not indented text

    Folded with my new function:

        this is text
        also text
            ++- 2 lines: indented text ----------------------------
        not indented text

    The only problem is that the highlighting still looks like this (highlighting shown with a tag):

        this is text
        also text
        <hi>    ++- 2 lines: indented text ----------------------------</hi>
        not indented text

    I would like the highlighting to start at the ++ and not at the beginning of the line. I have looked in the vim manual but could not find anything like that. One so-so solution I found was to make the background black:

        highlight Folded ctermbg=black ctermfg=white cterm=bold

    But this makes folds less visible. I have tried several variations of:

        syn keyword Folded lines
        syn region Folded ...

    But I don't think that this is the way that folds are highlighted. Can anyone offer a suggestion? By the way, this is my function to indent the folds:

        set foldmethod=indent
        function! MyFoldText()
            let lines = 1 + v:foldend - v:foldstart
            let ind = indent(v:foldstart)
            let spaces = ''
            let i = 0
            while i < ind
                let i = i + 1
                let spaces = spaces . ' '
            endwhile
            let linestxt = 'lines'
            if lines == 1
                let linestxt = 'line'
            endif
            return spaces . '+' . v:folddashes . ' ' . lines . ' ' . linestxt . ': ' . getline(v:foldstart)
        endfunction
        au BufWinEnter,BufRead,BufNewFile * set foldtext=MyFoldText()

    By the way, thanks to njd for helping me get this function set up.

  • Bash script 'while read' loop causes 'broken pipe' error when run with GNU Parallel

    - by Joe White
    According to the GNU Parallel mailing list this is not a GNU Parallel-specific problem, and they suggested that I post my problem here. The error I'm getting is a "broken pipe" error, but I feel I should first explain the context of my problem and what causes this error. It happens when trying to use any bash script containing a 'while read' loop in GNU Parallel. I have a basic bash script like this:

        #!/bin/bash
        # linkcheck.sh
        while read domain
        do
            host "$domain"
        done

    Assume that I want to pipe in a large list (say 250 MB):

        cat urllist | ./linkcheck.sh

    Running the host command on 250 MB worth of URLs is rather slow. To speed things up I want to break the input into chunks before piping it, and then run multiple jobs in parallel. GNU Parallel is capable of doing this:

        cat urllist | parallel --pipe -j0 parallel ./linkcheck.sh {}

    {} is substituted by the contents of urllist, line by line. Assume that my system's default setup is capable of running about 500 jobs per instance of parallel. To get around this limitation we can parallelize Parallel itself:

        cat urllist | parallel -j10 --pipe parallel -j0 ./linkcheck.sh {}

    This will run around 5000 jobs. It will also, sadly, cause the "broken pipe" error (see the bash FAQ). Yet the script starts to work if I remove the while read loop and take input directly from whatever is fed into {}, e.g.:

        #!/bin/bash
        # linkchecker.sh
        domain="$1"
        host "$1"

    Why won't it work with a while read loop? Is it safe to just turn off the SIGPIPE signal to stop the "broken pipe" message, or will that have side effects such as data corruption? Thanks for reading.
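
    One pattern sometimes suggested for jobs like this (a sketch, not a tested fix for this exact setup) is to drop the stdin-reading wrapper entirely and let parallel pass each line to host as an argument, so no while read loop is left competing for the pipe:

        # Sketch: one `host` lookup per input line, many at a time.
        # -j100 is an arbitrary example concurrency level.
        parallel -j100 host {} < urllist

    Each input line becomes a command-line argument rather than data on a shared pipe, which sidesteps the question of which process is still reading when an upstream writer exits.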

  • Internal but no external Citrix Access?

    - by leeand00
    We recently had to reload our configuration of Citrix on our server Server1, and since then we can access Citrix internally, but not externally. Normally we access Citrix from http://remote.xyz.org/Citrix/XenApp, but since the configuration was reloaded we are met with a Service Unavailable message. Accessing the Citrix web application internally from http://localhost/Citrix/XenApp/ on Server1 works, as does access from machines on our local network via http://Server1/Citrix/XenApp/.

    I have gone into the Citrix Access Management Console and, in the tree pane on the left, clicked on Citrix Access Management Console->Citrix Resources->Configuration Tools->Web Interface->http://remote.xyz.org/Citrix/PNAgent and Citrix Access Management Console->Citrix Resources->Configuration Tools->Web Interface->http://remote.xyz.org/Citrix/XenApp, which in both cases displays a screen that reads Secure client access. Here it offers me several options: Direct, Alternate, Translated, Gateway Direct, Gateway Alternate, Gateway Translated. I know that I can change the method of use by clicking Manage secure client access->Edit secure client access settings, which opens a window that reads "Specify Access Methods", and below that, "Specify details of the DMZ settings, including IP address, mask, and associated access method". I don't know what the original settings were, and I also don't know how our DMZ is configured, so I can't specify the correct settings to give our external users access to the http://remote.xyz.org/Citrix/XenApp site.

    We have a vendor who set up our DMZ and does not allow us access to the gateway to see these settings. What sorts of questions should I ask them to restore remote access?

  • Auto load balancing a two-node Hyper-V 2008 R2 Enterprise cluster?

    - by Kristofer O
    My setup is a 2-node cluster with 72 GB of RAM each and a ~10 TB MD3000i iSCSI SAN. I have about 30 VMs running and keep about 15 on each server. I do a live migration to the other server if I need to run updates or whatever. Either one of the servers is capable of running all the VMs if needed, but the CPU load gets pretty high. Here are my issues:

    I know Hyper-V has a limit of a single live migration at a time, but why doesn't it queue them up to move one at a time? If I multi-select, I don't get the option to live migrate them one at a time, and if I'm in the process of migrating one it will give me an error that it's currently migrating a VM. Is there a button I missed that will tell a node to migrate all its VMs elsewhere?

    Another question: does anyone know the best way to load balance VMs based on CPU and/or network utilization? I have some VMs that don't do much and some that thrash the CPU or network, and I'd like to balance the load across both servers if at all possible. Is there any way to automate it?

    Last question: if I overcommit my cluster, is there a way to tell the cluster which VMs I want to keep running and to save-state other VMs based on available system resources? Say my one node blue-screens and the other node begins starting the VMs up: I want the unimportant ones to shut down or save state so the important ones can stay running or come back online.

    Thanks just for reading all that. Any help would be great.

  • fail2ban Error Gentoo

    - by Mark Davidson
    Hi all, I've recently set up a new VPS running Gentoo (my first time using the distro, so please forgive me if this is a really easy one) and, as I've done with other servers, installed fail2ban, setting it up to block hosts via iptables after too many unsuccessful SSH logins. However, I'm getting a strange error that I can't quite solve. When I start fail2ban I get these lines in the error log:

        2009-11-13 18:02:01,290 fail2ban.jail   : INFO   Jail 'ssh-iptables' started
        2009-11-13 18:02:01,480 fail2ban.actions.action: ERROR  iptables -N fail2ban-SSH
        iptables -A fail2ban-SSH -j RETURN
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH returned 100

    If I try to force a ban, these errors show up in the log and the host is not banned:

        2009-11-13 11:23:26,905 fail2ban.actions: WARNING [ssh-iptables] Ban XXX.XXX.XXX.XXX
        2009-11-13 11:23:26,929 fail2ban.actions.action: ERROR  iptables -n -L INPUT | grep -q fail2ban-SSH returned 100
        2009-11-13 11:23:26,930 fail2ban.actions.action: ERROR  Invariant check failed. Trying to restore a sane environment
        2009-11-13 11:23:27,007 fail2ban.actions.action: ERROR  iptables -N fail2ban-SSH
        iptables -A fail2ban-SSH -j RETURN
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH returned 100
        2009-11-13 11:23:27,016 fail2ban.actions.action: ERROR  iptables -n -L INPUT | grep -q fail2ban-SSH returned 100
        2009-11-13 11:23:27,016 fail2ban.actions.action: CRITICAL Unable to restore environment

    My versions are as follows:

        Linux masked 2.6.18-xen-r12 #2 SMP Wed Mar 4 11:45:03 GMT 2009 x86_64 Intel(R) Xeon(R) CPU E5504 @ 2.00GHz GenuineIntel GNU/Linux
        net-analyzer/fail2ban-0.8.4
        net-firewall/iptables-1.4.3.2

    If anyone could shed some light on these errors that would be great. I did wonder if it was a problem with iptables or some kernel modules, but I can block an IP if I run:

        iptables -I INPUT -s 25.55.55.55 -j DROP

    so that makes me think it's something a bit more unusual. Thanks a lot in advance.
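
    One way to narrow this down (a diagnostic sketch; the chain name matches the one in the log) is to run the same commands fail2ban's action runs, by hand, and look at the real error message. The "returned 100" in the log only says that one of these exited non-zero:

        # Run the jail's setup commands manually and watch for errors
        iptables -N fail2ban-SSH
        iptables -A fail2ban-SSH -j RETURN
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH
        echo "exit status: $?"

        # The invariant check fail2ban performs before each ban
        iptables -n -L INPUT | grep -q fail2ban-SSH \
            && echo "chain referenced" || echo "chain missing"

    If the chain creation or the INPUT insert fails here, the error text usually names the missing piece; on a stripped-down Xen DomU kernel like this one, missing netfilter options are a plausible suspect.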

  • ISA Server 2006 SSL Certificate Dilemma

    - by JohnyD
    I'm making great headway in offering our services over HTTPS with help from a Go Daddy certificate, later to be upgraded to Thawte SSL123 certs. But I've just run into one whopper of a problem. Here's my setup: I run an ISA 2006 firewall. Our web services are distributed over 2 servers; one is Windows 2000 (www.domain.com) and the other is Windows 2003 (services.domain.com). So I'll need to purchase 2 certs for both www and services, import them into IIS 6 on their respective machines, then export them with the private key (making sure to "Include all certificates in the certification path if possible"... that had me stumped for a while), and then finally import them into ISA's local computer Personal store.

    The problem I've just run into is that I have separate firewall rules for services.domain.com and www.domain.com, because requests need to be forwarded to different web servers. Each of these firewall rules uses the same HTTP listener. I have just found out that you can only use one certificate per listener, and to make matters worse you can only have a single listener per IP/port. Is this correct? Can I really only use a single certificate for a single IP address? This would seem to be a severe limitation. Am I wrong? If I'm not, then I've got a whole lot more work ahead of me, as I'll have to set up extra IPs, add them to the firewall's network interface, create new listeners using those IPs, etc. Can someone please confirm whether I'm doing this correctly or incorrectly? Once I got my head wrapped around it all it seemed easy... then this. Thanks in advance.

  • Pure-FTPD accounts and permissions for websites

    - by EddyR
    I'm having trouble setting up the appropriate Pure-FTPd accounts and permissions. I have the following sites set up on my Debian server:

        /var/www/site1
        /var/www/site2
        /var/www/wordpress

    The permissions are 775 for folders and 664 for files, and the owner is currently admin:ftpgroup. WordPress also requires special permissions for file uploads in /var/www/wordpress/wp-content/uploads.

    What I need is:

    - a general admin group with access to /var/www
    - a group for each site (site1, site2, wordpress)
    - a group or user, not www-data (?), with permissions to write files to the WordPress upload folder

    I ask because the restrictions on Linux groups (you can't have groups in groups) make this a little confusing, and also because many of the tutorial sites have conflicting information; for example, some recommend the use of www-data and some don't. Also, I'm not sure I understand how Pure-FTPd is supposed to work exactly. I create a Pure-FTPd account and assign it a directory (/var/www) and a system user (ftpuser) and group (ftpgroup):

    - Can I assign more than one path? For example, if a user requires access to two sites.
    - Is it better to assign ftpgroup to all FTP locations and let Pure-FTPd manage account access?
    - Why would anyone have more than one ftpuser or ftpgroup? (Doesn't that mean users would have access to everyone else's files if they could get there?)

    Sorry for so many questions at once. I've been reading lots of tutorials, but I think they've ended up making me more confused!
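
    For reference, Pure-FTPd virtual users are normally managed with the pure-pw tool, which maps many FTP logins onto one system user/group and jails each login in its own directory. A minimal sketch (the login name and path are examples, not the poster's actual accounts):

        # Create a virtual FTP user mapped to the system account
        # ftpuser:ftpgroup, jailed to one site's docroot.
        # -m rebuilds the user database immediately.
        pure-pw useradd site1owner -u ftpuser -g ftpgroup -d /var/www/site1 -m

        # Inspect what was created
        pure-pw list
        pure-pw show site1owner

    Because each virtual user gets exactly one home directory, a login that needs two sites generally means either a shared parent directory or a second login, which is one reason the tutorials disagree so much.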

  • Nagios Apache Config with PHP-FPM downloading cgi files

    - by tubaguy50035
    I'm trying to set up Nagios 3 under Apache 2.4 with PHP-FPM. I've run into a couple of problems I could use help with. The PHP side of things seems to be working; I can see the home page and the sidebar. But all of the CGI files are downloading instead of executing, and when I try to click on "Read What's New In Nagios Core 3", I get an error that /nagios3/docs/whatsnew.html was not found on this server. Below is my vhost config for Nagios:

        <VirtualHost *:300>
            # apache configuration for nagios 3.x
            ScriptAlias /cgi-bin/nagios3 /usr/lib/cgi-bin/nagios3
            ScriptAlias /nagios3/cgi-bin /usr/lib/cgi-bin/nagios3

            # Where the stylesheets (config files) reside
            Alias /nagios3/stylesheets /etc/nagios3/stylesheets

            # Where the HTML pages live
            Alias /nagios3 /usr/share/nagios3/htdocs

            ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9001/usr/share/nagios3/htdocs/$1

            <DirectoryMatch (/usr/share/nagios3/htdocs|/usr/lib/cgi-bin/nagios3|/etc/nagios3/stylesheets)>
                Options FollowSymLinks ExecCGI
                AllowOverride AuthConfig
                Order Allow,Deny
                Allow From All
                AuthName "Nagios Access"
                AuthType Basic
                AuthUserFile /etc/nagios3/htpasswd.users
                require valid-user
            </DirectoryMatch>

            <Directory /usr/share/nagios3/htdocs>
                Options +ExecCGI
            </Directory>
        </VirtualHost>

    I also added this to my global Apache config:

        AddHandler cgi-script .cgi

    Any help or instructions you can give me would be much appreciated. If more information is needed, let me know.
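
    CGI files downloading instead of executing usually points at the CGI module rather than the aliases. As a sketch (assuming a Debian-style Apache 2.4 layout; paths and service names may differ), it is worth confirming a CGI module is actually loaded, since AddHandler does nothing without one:

        # Apache 2.4 with the event or worker MPM wants mod_cgid
        # rather than mod_cgi
        a2enmod cgid

        # Check what is loaded, then validate and reload
        apachectl -M | grep -i cgi
        apachectl configtest && service apache2 reload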

  • Which apache/mysql/php package is best for windows?

    - by crosenblum
    I have tried AppServNetwork, which was the best so far, but I haven't seen them do an update in ages, and EasyPHP is just slow to load, always. WAMP and XAMPP both put in their descriptions that they are not for production. I do not plan to host this site, or the sites I am working on, publicly; I just want a fast-loading Apache/MySQL/PHP server for development purposes. I used to really like WLMP, which is Lighttpd for Windows, but that project seems unupdated or abandoned. I refuse to use IIS, but I have no desire to get into any wars over it.

    I run Windows XP SP3 on my home PC. I will need a web server set up for professional work, as well as for some fun websites I am working on. I just want it fast enough that I can run it via localhost and it doesn't take forever to load in the browser. Thank you... I plan to mostly do PHP programming, and perhaps ColdFusion, via this.

  • Two hosting providers running simultaneously... possible / not possible? good practice / unnecessary?

    - by user29600
    For the sake of their reputations, I won't mention the names, so I'll just use:

    - Business I worked for previously: ABC Web Dev
    - Hosting company they used: XYZ Hosting

    I recently found out that XYZ Hosting had some sort of incident where they ended up losing a lot of their clients' data, including ABC Web Dev's. ABC Web Dev was able to recover some of their customers' websites after pulling them from their local development computers and putting them up on another hosting provider. They ended up losing a lot of clients because of it, and their reputation was ruined.

    I'm starting my own web dev company and I don't want to run into this same issue. I'm planning on using Rackspace but, although they are a great company, according to Wikipedia they have still had downtime in their past. I thought it might be a good idea to try to run two providers at once, to ensure that if anything happened to one, the websites would still be live because of the other. I know the websites would have to be served from one server at a time, but if there's a way to redirect requests to the second server when the first one is down, that would solve my issue. As a note, we will have a staging environment set up locally, which will allow for quick recovery if a provider did have any issues; however, I'd like to avoid any downtime at all if possible. So my questions are:

    - Has anyone tried running two providers simultaneously?
    - Would this be considered good practice, or am I going too far?
    - Is there really any way to run two simultaneously, where one server acts as a backup?

  • VMware Player 3.0 - cannot ping 32-bit guest from 64-bit (guest or host)

    - by npmj
    I'm stuck with what seems to be a bug in VMware Player (build 203739). I'm using Windows 7 Ultimate 64-bit as the host and have a CentOS 5.4 (64-bit) guest and a Windows XP Professional SP3 (32-bit) guest. From the 64-bit machines (the host and the Linux guest) I cannot ping the Windows XP guest. Of course, I already turned off the Windows firewall in the guest and also on the host. The network is pretty basic: I'm using VMnet8 (NAT), with DHCP and port forwarding (to the Windows XP guest's IP). Everything is working OK; I have internet access from the host and from both guests, and port forwarding to the XP guest works too. The only problem is that I cannot access the XP guest through VMnet8.

    I monitored the traffic using Wireshark (on the host and on the Windows guest). If I try to ping the XP guest from the host, what I see is the ARP request leaving the host, being answered by the guest, and after that there is no echo request leaving the host. The same occurs if I try to ping the XP guest from the CentOS guest. From the Windows XP guest I can ping both the host and the CentOS guest, and from the XP guest I can access the host's shares. Obviously, from the host I cannot see the XP shares (as I cannot even ping the guest). I want to maintain this setup (using NAT to share the host's internet connection). Any suggestions?
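
    From the CentOS guest, the same ARP-versus-ICMP split can be checked on the command line. A diagnostic sketch (the address and interface below are placeholders for the actual VMnet8 subnet; arping may need to be installed first):

        # Confirm ARP resolution for the XP guest's address
        arping -I eth0 -c 3 192.168.78.130

        # In a second terminal, watch whether echo requests actually
        # leave the interface while pinging the XP guest
        tcpdump -n -i eth0 arp or icmp

    If the ARP entry resolves but no echo request appears on the wire, that matches the Wireshark observation and points at the virtual switch rather than at either guest.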

  • Where should CentOS users get /usr/share/virtio-win/drivers for virt-v2v?

    - by Philip Durbin
    I need to migrate a number of virtual machines from VMware ESX to CentOS 6 KVM hypervisors. Ultimately, I wrote an RPM spec file that solved my problem at https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec, but I'm not sure if there's another RPM in base CentOS or EPEL (something standard) I should be using instead.

    Originally, I was getting this "No root device found in this operating system image" error when attempting to migrate a Windows 2008 VM:

        [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
            -os transferimages --network default my-vm
        virt-v2v: No root device found in this operating system image.

    I solved this with a simple yum install libguestfs-winsupport, since the docs say:

        If you attempt to convert a virtual machine using NTFS without the
        libguestfs-winsupport package installed, the conversion will fail.

    Next I got an error about missing drivers for Windows 2008:

        [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
            -os transferimages --network default my-vm
        my-vm_my-vm: 100% [====================================]D
        virt-v2v: Installation failed because the following files referenced in
        the configuration file are required, but missing:
        /usr/share/virtio-win/drivers/amd64/Win2008

    I resolved this by grabbing an ISO from Fedora at http://alt.fedoraproject.org/pub/alt/virtio-win/latest/, as recommended by http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers, and building an RPM from it with this spec file: https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec. Now virt-v2v exits without error:

        [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
            -os transferimages --network default my-vm
        my-vm_my-vm: 100% [====================================]D
        virt-v2v: my-vm configured with virtio drivers.
        [root@kvm01b ~]#

    Now, my question is: rather than the virtio-win RPM from the spec file I wrote, is there some other, more standard RPM in base CentOS or EPEL that will resolve the error above? Here's a bit more detail about my setup:

        [root@kvm01b ~]# cat /etc/redhat-release
        CentOS release 6.2 (Final)
        [root@kvm01b ~]# rpm -q virt-v2v
        virt-v2v-0.8.3-5.el6.x86_64

    See also Bug 605334 - VirtIO driver for windows does not show specific OS: Windows 7, Windows 2003
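
    If all virt-v2v needs is the directory tree under /usr/share/virtio-win, one lower-tech check before committing to a packaging approach (a sketch; the ISO filename is an assumption based on the Fedora download page) is to loop-mount the ISO and compare its layout against the path in the error:

        # Mount the Fedora virtio-win ISO and inspect the driver layout
        mkdir -p /mnt/virtio-win
        mount -o loop virtio-win.iso /mnt/virtio-win
        find /mnt/virtio-win -maxdepth 3 -type d

        # Compare against the path virt-v2v complained about
        ls /usr/share/virtio-win/drivers/amd64/ 2>/dev/null \
            || echo "driver tree not installed"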

  • cpanel dns only / rdns questions

    - by Clear.Cache
    I started getting IPs from ARIN directly, instead of from the data center I'm colocated at. Now I have to apply rDNS for my clients upon request myself, instead of having the NOC at the DC do it. That is to be expected, since I am in full control of the IP delegation and therefore have nameserver authority. The question is: how do I create PTR/rDNS records for my clients? My current server uses cPanel/WHM with ns1/ns2.mycompany.com, and I applied those as the DNS nameservers in the ARIN IPs' whois record.

    - How do I create rDNS for my clients? Should I install cPanel DNS Only on an entirely separate server and use that method instead (http://layer1.cpanel.net/)? If so, how can I seamlessly transition the DNS records over to that new DNS server, retaining my ns1/ns2.mycompany.com and their ns1 and ns2 IP addresses?
    - Even more important: I have to change the ns1/ns2 IPs to the new ones I received from ARIN. How can this be done while avoiding downtime during the DNS transition?
    - On a side note, would it be easier to just install cPanel DNS Only on a dedicated server and use dns1.mycompany.com and dns2.mycompany.com with their own dedicated ns1/ns2 IPs from ARIN, and utilize this DNS server for customers who request rDNS? Would this be a more viable solution than using our current ns1/ns2.mycompany.com nameservers?
    - Is cPanel DNS Only standalone software that does not require cPanel/WHM on another server? Is it possible to have redundant DNS servers set up using this software alone, ns1 on one server and ns2 on another?

    Thanks.
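
    Whatever box ends up hosting the zones, reverse DNS comes down to an in-addr.arpa zone full of PTR records, delegated by ARIN to the nameservers listed on the allocation. A quick verification sketch (192.0.2.10 is a documentation address standing in for a real client IP):

        # Which nameservers is the reverse zone delegated to?
        dig NS 2.0.192.in-addr.arpa +short

        # Does an individual PTR record resolve?
        dig -x 192.0.2.10 +short

    If the delegation query already returns ns1/ns2.mycompany.com, adding a client's rDNS is just a matter of adding a PTR record to that zone on whichever server is authoritative.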

  • Openfiler iSCSI performance

    - by Justin
    Hoping someone can point me in the right direction with some iSCSI performance issues I'm having. I'm running Openfiler 2.99 on an older ProLiant DL360 G5: dual Xeon processors, 6 GB ECC RAM, an Intel Gigabit Server NIC, and a SAS controller with 3x 10K SAS drives in a RAID 5. When I run a simple write test on the box directly, the performance is very good:

        [root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000
        1000+0 records in
        1000+0 records out
        1048576000 bytes (1.0 GB) copied, 4.64468 s, 226 MB/s

    So I created a LUN, attached it to another box I have running ESXi 5.1 (Core i7 2600K, 16 GB RAM, Intel Gigabit Server NIC) and created a new datastore. Once I created the datastore I was able to create and start a VM running CentOS with 2 GB of RAM and 16 GB of disk space. The OS installed fine and I'm able to use it, but when I ran the same test inside the VM I got dramatically different results:

        [root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000
        1000+0 records in
        1000+0 records out
        1048576000 bytes (1.0 GB) copied, 26.8786 s, 39.0 MB/s

    Both servers have brand new Intel Server NICs and I have jumbo frames enabled on the switch, on the Openfiler box, and on the VMkernel adapter on the ESXi box. I can confirm this is set up properly by using the vmkping command from the ESXi host:

        ~ # vmkping 10.0.0.1 -s 9000
        PING 10.0.0.1 (10.0.0.1): 9000 data bytes
        9008 bytes from 10.0.0.1: icmp_seq=0 ttl=64 time=0.533 ms
        9008 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.736 ms
        9008 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.570 ms

    The only thing I haven't tried as far as networking goes is bonding two interfaces together. I'm open to trying that down the road, but for now I am trying to keep things simple. I know this is a pretty modest setup and I'm not expecting top-notch performance, but I would like to see 90-100 MB/s. Any ideas?
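
    One way to split the problem (a sketch, assuming iperf can be installed on both ends; any similar tool works) is to measure the raw TCP path separately from the disks, and to take the guest's page cache out of the dd test:

        # On the Openfiler box: run an iperf server
        iperf -s

        # On a client on the same network as the iSCSI traffic:
        iperf -c 10.0.0.1 -t 30

        # Inside the VM: repeat the write test bypassing the page cache,
        # so the figure reflects the actual storage path
        dd if=/dev/zero of=tmpfile bs=1M count=1000 oflag=direct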

  • VSFTP Users and Directories

    - by Mathew
    I'm stuck. I've been working all day trying to figure out what I'm doing wrong, and I've hit wall after wall. What I'm trying to do: set up FTP in such a way that certain users have access only to their own directory, but higher-level users have access to all directories.

    What I've Googled so far: I started with this, but that didn't do what I needed it to. I then used this, but once I created one user, it wouldn't let me create another one. Finally, I decided to follow this, but it wouldn't let me create even one user. I'm using Ubuntu 10. I can log in to FTP as the root user and it takes me to the home directory. If I try to log in using the user I created in the tutorial, it says:

        Status:    Connection established, waiting for welcome message...
        Response:  220 (vsFTPd 2.2.2)
        Command:   USER mathew
        Response:  331 Please specify the password.
        Command:   PASS ****
        Response:  530 Login incorrect.
        Error:     Critical error
        Error:     Could not connect to server
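
    For the per-user jail part of this, vsftpd's usual mechanism is chroot_local_user plus an exemption list for the higher-level accounts. A sketch of the relevant directives (a starting point under assumptions; it does not address the 530 error itself, which is an authentication failure):

        # Ubuntu path assumed; back up vsftpd.conf before editing
        printf '%s\n' \
            'chroot_local_user=YES' \
            'chroot_list_enable=YES' \
            'chroot_list_file=/etc/vsftpd.chroot_list' \
            >> /etc/vsftpd.conf

        # Accounts listed here escape the jail and see all directories
        echo admin >> /etc/vsftpd.chroot_list
        service vsftpd restart

    With chroot_local_user=YES and chroot_list_enable=YES, the list names the users who are NOT jailed, which is exactly the "higher-level users" case described above.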

  • Mounting Replicated Gluster Multi-AZ Storage

    - by Roman Newaza
    I have replicated Gluster storage which is used by auto-scaling servers. Both the auto-scaling servers and the storage are allocated across two availability zones. Gluster:

        Number of Bricks: 4 x 2 = 8
        Transport-type: tcp
        Bricks:
        Brick1: gluster01:/storage/1a  # Zone A
        Brick2: gluster02:/storage/1b  # Zone B
        Brick3: gluster03:/storage/2a  # Zone A
        Brick4: gluster04:/storage/2b  # Zone B
        Brick5: gluster01:/storage/3a  # Zone A
        Brick6: gluster02:/storage/3b  # Zone B
        Brick7: gluster03:/storage/4a  # Zone A
        Brick8: gluster04:/storage/4b  # Zone B

    I used round-robin DNS for the Gluster entry point, so the DNS name resolves to all of the storage server addresses, which are returned in a different order each time:

        # host storage.domain.com
        storage.domain.com has address xx.xx.xx.x1
        storage.domain.com has address xx.xx.xx.x2
        storage.domain.com has address xx.xx.xx.x3
        storage.domain.com has address xx.xx.xx.x4

    The storage is mounted with the native Gluster client:

        # grep storage /etc/fstab
        storage.domain.com:/storage /storage glusterfs defaults,log-level=WARNING,log-file=/var/log/gluster.log 0 0

    I have heard Gluster might be mounted with the first server's IP, after which it will fetch its configuration covering the rest of the servers. Personally, I have never tested a single-server mount setup and I don't know how Gluster handles this. On EC2, traffic within a single availability zone is free and traffic between different zones is not. When a client in zone A writes to storage and the IP of a storage server in zone B is returned, it will cost me twice as much for data transfer: Client (Zone A) -> Storage Server (Zone B) -> Replication to Storage Server (Zone A).

    Question: Would it be better to mount a storage server in the same zone, so that data transfer charges apply only to replication (A -> A -> B)?
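
    With the native client, the server named in the mount command is only used to fetch the volume file; after that the client talks to all bricks directly, so the mount source mostly matters for availability at mount time. A sketch of pinning the mount to a same-zone server with a fallback (hostnames follow the naming above; backupvolfile-server is the usual mount option for avoiding a single point of failure here):

        # From a client in Zone A: fetch the volume file from a Zone A
        # server, falling back to another if it is down at mount time.
        mount -t glusterfs \
            -o backupvolfile-server=gluster03,log-level=WARNING,log-file=/var/log/gluster.log \
            gluster01:/storage /storage

    Note that with the native client's replication, the client itself writes to both replicas, so same-zone mounting does not by itself eliminate cross-zone write traffic.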

  • Traffic Shaping using tc

    - by Simon
    Hi guys, I have a 1.5 Mbit/s link that I want to share with 150 users. My setup is the following: a Linux box with 3 NICs:

        eth0 - public IP
        eth1 - subnet A - 50 users (static IPs)
        eth2 - subnet B - 100 users (via DHCP)

    I am using Squid as a transparent proxy on port 3128, and a DHCP server using ports 67 and 68. This is the script I was creating, but I think packets are not going to the right queues:

        #!/bin/bash
        DEV=eth0
        RATE_MAIN=2048kbit
        CEIL_MAIN=2048kbit
        BURST=1b
        CBURST=1b

        RATE_DEFAULT=1024kbit
        CEIL_DEFAULT=$CEIL_MAIN
        PRIO_DEFAULT=3

        RATE_P2P=1024Kbit
        CEIL_P2P=$CEIL_MAIN
        PRIO_P2P=4

        RATE_IND=32kbit
        CEIL_IND=$CEIL_DEFAULT

        tc qdisc del dev $DEV root
        tc qdisc add dev $DEV root handle 1: htb default 30
        tc class add dev $DEV parent 1: classid 1:1 htb rate $RATE_MAIN ceil $CEIL_MAIN
        tc class add dev $DEV parent 1:1 classid 1:10 htb rate $RATE_DEFAULT ceil $CEIL_MAIN burst $BURST cburst $CBURST prio $PRIO_WEB

        ## some other sub class for p2p other traffic
        tc class add dev $DEV parent 1:1 classid 1:20 htb rate $RATE_P2P ceil $CEIL_P2P burst $BURST cburst $CBURST prio $PRIO_P2P

        $IPS_NET1=50
        $IPS_NET2=100
        let $IPS=$IPS_NET1+$IPS_NET2

        for ((i=1; i<= $IPS; i++))
        do
            let CLASSID=($i+100)
            let HANDLE=($i+100)
            tc class add dev $DEV parent 1:10 classid 1:$CLASSID htb rate $RATE_IND ceil $CEIL_IND
            tc qdisc add dev $DEV parent 1:$CLASSID handle $HANDLE: sfq perturb 10
        done

        ## Generate IP addresses ##
        IP_ADDRESSES=""

        # Subnet A
        BASE_IP=10.10.10.
        for ((i=2; i<=$IPS_NET1+1; i++))
        do
            TEMP="$BASE_IP$i"
            IP=ADDRESSES="$IP_ADDRESSES $TEMP"
        done

        # Subnet B
        BASE_IP=192.168.0.
        for ((i=2; i<=$IPS_NET2+1; i++))
        do
            TEMP="$BASE_IP$i"
            IP_ADDRESSES="$IP_ADDRESSES $TEMP"
        done

        ## FILTERS ##
        j=1
        U32="tc filter add dev $DEV protocol ip parent 1:0 prio $PRIO_DEFAULT u32"
        for NET in $IP_ADDRESSES; do
            let CLASSID=($j+100)
            $U32_DEFAULT match ip src $NET/32 flowid 1:$CLASSID
            $U32_DEFAULT match ip dst $NET/32 flowid 1:$CLASSID
            let j=j+1
        done

    Can you guys help me figure out what's wrong with it? Basically I want my classes to be:

        1:1  (1.5 Mbit)
        1:10 (1024 kbit)
        1:20 (1024 kbit)

    with 200 IPs, each at 32 kbit.
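
    Since the question is what's wrong with the script, it may help to spell out how bash parses a few of these lines: assignments must not carry $ on the left-hand side, and a stray = in the middle of a word assigns to a different variable than intended. A hedged before/after sketch of just those fragments (the tc commands themselves are left alone):

        # As written, these lines do not do what they look like:
        #   $IPS_NET1=50           -> expands the (empty) variable, then
        #                             tries to run "=50" as a command
        #   IP=ADDRESSES="..."     -> assigns to IP, not IP_ADDRESSES
        #   $U32_DEFAULT match ... -> $U32_DEFAULT is never defined; the
        #                             variable created above is $U32

        # Plain-bash equivalents of what was probably intended:
        IPS_NET1=50
        IPS_NET2=100
        IPS=$((IPS_NET1 + IPS_NET2))

        IP_ADDRESSES="$IP_ADDRESSES $TEMP"

        $U32 match ip src "$NET"/32 flowid 1:$CLASSID

    With those assignments empty, the per-IP loops build no classes or filters at all, which would explain every packet falling through to the default class instead of the per-user queues.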
