Search Results

Search found 19788 results on 792 pages for 'remote host'.


  • .htaccess issue on Apache Web Server in Ubuntu VM

    - by Neon Flash
    I just installed the Apache web server on Ubuntu 11.04 in VMware Workstation. I created a basic HTML page, named it index.html and placed it in the /var/www directory (the document root). I can access this page from my host OS (Windows 7) by pointing the browser to http://192.168.2.2/index.html, where 192.168.2.2 is the IP address of the Ubuntu VM. Next, to test various .htaccess configurations, I created a new directory in /var/www called members. Inside this directory I placed a .htaccess file with the following configuration:

        AuthUserFile /www/Neon/auth/.htpasswd
        AuthName "neon's home"
        AuthType Basic
        require valid-user
        IndexIgnore */*

    I created the directory path /var/www/Neon/auth/ and placed a .htpasswd file inside it. To fill it, I created the username "neon", calculated the DES hash of a password, and stored it in the format username:hash. Now, when I try to access http://192.168.2.2/members/, it does not prompt me for a username and password with a popup box. Instead it just displays the index.html that is placed inside the members directory. I would like to get this configuration working :)
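    A minimal sketch of the two pieces that usually have to agree for Basic auth to kick in, assuming the password file really lives under /var/www/Neon/auth/ and the stock Ubuntu Apache 2.2 layout is in use:

        # /var/www/members/.htaccess
        AuthType Basic
        AuthName "neon's home"
        AuthUserFile /var/www/Neon/auth/.htpasswd   # absolute filesystem path, not a URL path
        Require valid-user

        # /etc/apache2/sites-available/default -- .htaccess is only consulted when
        # AllowOverride permits it; the Ubuntu default is AllowOverride None
        <Directory /var/www/>
            AllowOverride AuthConfig
        </Directory>

    Reload Apache afterwards (sudo /etc/init.d/apache2 reload) so the vhost change takes effect.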

  • Openvpn - stuck on Connecting

    - by user224277
    I've got a problem with an OpenVPN server... every time I try to connect to the VPN I get a window with login and password boxes, so I type my login and password (login = the Common Name (user1), and the password is the challenge password from the client certificate).

    Logs:

        Jun 7 17:03:05 test ovpn-openvpn[5618]: Authenticate/Decrypt packet error: packet HMAC authentication failed
        Jun 7 17:03:05 test ovpn-openvpn[5618]: TLS Error: incoming packet authentication failed from [AF_INET]80.**.**.***:54179

    Client.ovpn:

        client
        #dev tap
        dev tun
        #proto tcp
        proto udp
        remote [Server IP] 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert user1.crt
        key user1.key
        <tls-auth>
        -----BEGIN OpenVPN Static key V1-----
        d1e0...
        -----END OpenVPN Static key V1-----
        </tls-auth>
        ns-cert-type server
        cipher AES-256-CBC
        comp-lzo yes
        verb 0
        mute 20

    My openvpn.conf:

        port 1194
        #proto tcp
        proto udp
        #dev tap
        dev tun
        #dev-node MyTap
        ca /etc/openvpn/keys/ca.crt
        cert /etc/openvpn/keys/VPN.crt
        key /etc/openvpn/keys/VPN.key
        dh /etc/openvpn/keys/dh2048.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        #push "route 192.168.5.0 255.255.255.0"
        #push "route 192.168.10.0 255.255.255.0"
        keepalive 10 120
        tls-auth /etc/openvpn/keys/ta.key 0
        #cipher BF-CBC        # Blowfish
        #cipher AES-128-CBC   # AES
        #cipher DES-EDE3-CBC  # Triple-DES
        comp-lzo
        #max-clients 100
        #user nobody
        #group nogroup
        persist-key
        persist-tun
        status openvpn-status.log
        #log openvpn.log
        #log-append openvpn.log
        verb 3

    sysctl:

        net.ipv4.ip_forward=1
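    One detail worth checking, sketched below: the server uses "tls-auth ... 0", and an inline <tls-auth> block on the client does not imply a key direction, which is a common cause of "packet HMAC authentication failed". This assumes the inline key is byte-identical to the server's ta.key:

        # client.ovpn (sketch) -- declare an explicit direction for the inline block
        key-direction 1
        <tls-auth>
        -----BEGIN OpenVPN Static key V1-----
        d1e0...
        -----END OpenVPN Static key V1-----
        </tls-auth>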

  • How to display/define Mirror/Striping pairs with mdadm

    - by Chris
    I want to build a standard Linux software RAID 10 over 4 HDDs. The server has 4 HDDs, 2 pairs from different vendors, in order to avoid batch problems. I want the mirrors to span the two vendors, and then the stripe to run over the mirror pairs. I could do that by manually creating RAID 1/0, but mdadm supports RAID level 10 directly. I just can't figure out how that RAID 10 is handled and how the data is distributed.

        mdadm --detail /dev/md10
        /dev/md10:
          Version : 1.2
          Creation Time : Wed May 28 11:06:23 2014
          Raid Level : raid10
          Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
          Used Dev Size : 976630272 (931.39 GiB 1000.07 GB)
          Raid Devices : 4
          Total Devices : 4
          Persistence : Superblock is persistent
          Update Time : Wed May 28 11:06:23 2014
          State : clean, resyncing (PENDING)
          Active Devices : 4
          Working Devices : 4
          Failed Devices : 0
          Spare Devices : 0
          Layout : near=2
          Chunk Size : 512K
          Name : pdwhost:10 (local to host pdwhost)
          UUID : a3de0ad5:9e694ee1:addc6786:c4449e40
          Events : 0

          Number Major Minor RaidDevice State
             0     8     1      0       active sync   /dev/sda1
             1     8    81      1       active sync   /dev/sdf1
             2     8    97      2       active sync   /dev/sdg1
             3     8   113      3       active sync   /dev/sdh1

    does not really give any information about that. How it should be: RAID 1 (mirror) over /dev/sda1 + /dev/sdf1 and over /dev/sdg1 + /dev/sdh1, then RAID 0 over the two RAID 1 pairs. Is it possible to do that with the built-in "level=10", and how can I see which pairs are mirrored? Thanks a lot for your help.
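    For what it's worth, with the near layout (near=2) mdadm forms the mirror halves from adjacent devices in the order given at creation time, so a sketch like the following would give sda1+sdf1 and sdg1+sdh1 as the mirrored pairs, striped together:

        # Sketch -- the device order decides the pairing with --layout=n2
        mdadm --create /dev/md10 --level=10 --layout=n2 --raid-devices=4 \
            /dev/sda1 /dev/sdf1 /dev/sdg1 /dev/sdh1

        # Reading the pairing back: RaidDevice 0 and 1 mirror each other, as do 2 and 3
        mdadm --detail /dev/md10
        cat /proc/mdstat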

  • How to limit reverse SSH tunnelling ports?

    - by funktku
    We have a public server which accepts SSH connections from multiple clients behind firewalls. Each of these clients creates a reverse SSH tunnel using the ssh -R command from their web server's port 80 to our public server. The destination port (at the client side) of the reverse SSH tunnel is 80 and the source port (at the public server side) depends on the user. We are planning on maintaining a map of port numbers for each user. For example, client A would tunnel their web server at port 80 to our port 8000, client B from 80 to 8001, and client C from 80 to 8002.

        Client A: ssh -R 8000:internal.webserver:80 clienta@publicserver
        Client B: ssh -R 8001:internal.webserver:80 clientb@publicserver
        Client C: ssh -R 8002:internal.webserver:80 clientc@publicserver

    Basically, what we are trying to do is bind each user to a port and not allow them to tunnel to any other ports. If we were using the forward tunneling feature of SSH with ssh -L, we could restrict which port may be tunneled with the permitopen=host:port option. However, there seems to be no equivalent for reverse SSH tunnels. Is there a way of restricting reverse tunneling ports per user?
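    If upgrading the server's OpenSSH is an option, newer releases (7.8 and later) have a direct counterpart to permitopen for -R; a sketch of both forms, reusing the usernames from the example above:

        # /etc/ssh/sshd_config
        Match User clienta
            PermitListen 8000
        Match User clientb
            PermitListen 8001
        Match User clientc
            PermitListen 8002

        # or per key in ~/.ssh/authorized_keys:
        # permitlisten="8000",no-pty,no-X11-forwarding ssh-rsa AAAA... clienta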

  • VirtualBox doesn't see raw partitions

    - by smbear
    What I want to achieve is to set up a virtual machine with VirtualBox. The host OS is Windows 7 Home Premium; the guest will be (K)Ubuntu 12.04 on a raw partition. The first problem is that when I issue the following command:

        VBoxManage.exe internalcommands listpartitions -rawdisk \\.\PhysicalDrive0

    I get the following result:

        Number  Type   StartCHS     EndCHS       Size (MiB)  Start (Sect)
        1       0xee   0  /0 /1     1023/254/63      715404  1

    I'm guessing that VirtualBox is unable to see my partitions. If I use the diskpart tool, all partitions are listed correctly (output translated from the Polish language version of Windows):

        DISKPART> select disk 0
        Disk 0 is now the selected disk.

        DISKPART> list partition
        Partition ###  Type       Size     Offset
        -------------  ---------  -------  -------
        Partition 1    System      200 MB  1024 KB
        Partition 2    Reserved    128 MB   201 MB
        Partition 3    Primary     139 GB   329 MB
        Partition 5    Unknown    4883 KB   140 GB
        Partition 6    Primary      50 GB   140 GB
        Partition 7    Primary     484 GB   190 GB
        Partition 4    Recovery     24 GB   674 GB

    Additional note: my PC uses EFI to boot the OS. Based on the results listed above, I believe that either I messed up my partition table, or something is wrong with VirtualBox. Can anyone help with this issue?
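    The single 0xee entry is just the protective MBR that every GPT disk carries, so listpartitions appears to be parsing only the MBR wrapper here. A sketch of the raw-disk VMDK commands (run from an elevated prompt; whether -partitions can address GPT partitions depends on the VirtualBox version, so the whole-disk form is the safer first test, and the partition numbers are examples):

        REM whole disk
        VBoxManage internalcommands createrawvmdk -filename C:\VMs\ubuntu-raw.vmdk ^
            -rawdisk \\.\PhysicalDrive0

        REM selected partitions only (here 6 and 7 as the intended Linux partitions)
        VBoxManage internalcommands createrawvmdk -filename C:\VMs\ubuntu-parts.vmdk ^
            -rawdisk \\.\PhysicalDrive0 -partitions 6,7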

  • Proper Outlook Free/Busy status when working from home

    - by rwmnau
    Our office (pretty large - about 200 people) has recently started part-time telecommuting. It's only one day a week for now, but it has already raised some questions about availability, so I wanted to ask the users here, some of whom I'm sure telecommute to a corporate job, how they set their out-of-office status. Outlook has four statuses, and here's what I (and most others?) take them to mean:

        Free: I'm available for meetings
        Busy: I'm in a meeting or otherwise occupied, and unavailable
        Tentative: Shy away from scheduling over this, but I'm available if needed
        Out of Office: I'm on vacation and unavailable

    However, I don't travel for work - do people tend to use the last status to mean they're remote, but available for a phone call/bridge? As we begin to telecommute, I'll be available by phone for meetings, but not in person - any meeting can have a conference bridge, but some meetings just need to be in person. I'd like to send the right message about my status - people can schedule meetings with me on my telecommute days, but they should expect me to be on a conference bridge when they do. What status do people use? Does "Out of Office" correctly reflect that you're working from home, even though I perceive it to mean that somebody is on vacation? Maybe I'm the only one confused here, but as a company that has never before done telecommuting of any kind, we're in the dark about standard practices. Thanks for the insight! Though this isn't a technical question directly, I'm hoping it's still applicable to the group and constructive - if it's not, please close it and accept my apology.

  • High availability virtual machines

    - by Jeremy
    I've been reading a lot about high-availability virtualization, either via Hyper-V or VMware. In that context, high availability essentially means that the VM is hosted by a cluster of physical servers (nodes), so if one of the physical servers goes down, the VM can still be served by the other physical servers. So far so good: the physical cluster and the VM itself are highly available. However, the service being provided - let's say SQL Server, MSDTC, or any other service - is actually provided by the VM image and the virtualized operating system. So I imagine that there is still a point of failure at the virtual layer that isn't accounted for. Something could happen within the virtual machine itself that the physical cluster cannot account for, correct? In that instance the physical failover cluster (Hyper-V) or VMware host cannot fail over, because the issue is not with one of the servers in the physical cluster - failing over a physical node would not do any good. Does this necessitate building a virtual failover cluster on top of the physical one, or is this not necessary? Alternatively, I suppose you could skip the physical clustering and just cluster at the virtual layer (child-based failover clustering), because that should still survive a physical failure. See the image below showing parent-based (left), child-based (right) and a combination (center). Is parent-based as far as you need to go, or is child-based more appropriate?

  • IIS FTP 7.5 Data Channel Problem (SSL)

    - by user59050
    Hey there, I wonder if anyone can point me in the right direction. I am setting up both an FTPS client and server, the FTPS server being Microsoft's IIS FTP 7.5. The client side will be running on Linux and I am using M2Crypto for the OpenSSL wrapping (Python). I suspect the problem is on the server side (IIS 7.5) due to the following discovery: if I host using FileZilla with BOTH the control and data channel forced to be encrypted, it works 100% (100% file transmission); if I use IIS as the server, everything works up to the point when the data channel takes over... i.e. all data of the retrieved file is already received correctly in my basket! The FTP server just won't send the final '226 Transfer complete.' on the command socket. Why? If I force the client or server to close the connection, the file is 100% intact. If I use IIS 7.5 with forced encryption on the control channel only, everything works 100% as long as I don't force the data channel... Here are some screenshots (client view after killing the client) to demonstrate this: pics @ http://forums.iis.net/p/1172936/1960994.aspx#1960994 Summary: we can establish the connection, do directory listings and start the upload, and see the file (0 bytes) created on the server, but then the client hangs. If we terminate the client, the uploaded file on the server suddenly jumps up to full size.
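    Two server-side settings are commonly involved when the FTPS control channel works but the data channel stalls on IIS 7.5; a sketch (the port range is an example and must also be opened in the firewall):

        netsh advfirewall set global StatefulFtp disable
        %windir%\system32\inetsrv\appcmd set config /section:system.ftpServer/firewallSupport ^
            /lowDataChannelPort:50000 /highDataChannelPort:50100

    The stateful FTP helper cannot inspect an encrypted data channel, which is why disabling it is the usual first step for FTPS.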

  • Send command through PuTTY automatic login

    - by Arthur
    I am using the following to log in automatically to a remote server and then run the commands listed in commands.txt, like this:

        C:\path to\putty.exe -ssh adreese.ip -l user -pw Password -m C:\Path to\command.txt

    commands.txt contains the following:

        wakeonlan -i broadcast-address MAC-address

    However, when I do this a new PuTTY window appears, but it closes and exits instantly after login. As a result, I cannot see the output of the command(s). After several tests, it appears that the command is not executed, because my computer doesn't wake on LAN. I don't understand what's going on here. I cannot use the plink.exe program because I cannot make the connection with a public key (there are too many remote sites to register keys for in PuTTY). Can someone help me with this? Or can I use another program to make the SSH connection and send a command from a script on a Windows OS?

    Edit: I also tried putting the same command in a bash script on the remote server and executing it from the session like this:

        C:\path to\putty.exe -ssh adreese.ip -l user -pw Password \home\user\script.sh

    I have the same problem... Need help please :/
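    A sketch worth trying: plink (shipped alongside putty.exe) does not require key authentication; it accepts -pw and prints the remote command's output in the same console, which makes the result visible. The broadcast and MAC values below are placeholders:

        "C:\path to\plink.exe" -ssh adreese.ip -l user -pw Password ^
            "wakeonlan -i 192.168.1.255 00:11:22:33:44:55"

    The first run still has to accept the server's host key once (run PuTTY interactively once, or answer plink's prompt).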

  • Measure Upload Speed between a client and our server

    - by tresstylez
    We host a SAAS application specially customized for multiple clients. One customer in particular is reporting sporadic performance issues from various locations on their network, in particular when UPLOADING documents through a form on our website. The client claims they have "bandwidth to spare" and that utilization of their "pipe" is so low that it MUST be our application, but our application has MANY clients and all features are working fine for all other clients. Interestingly enough, DOWNLOADS (i.e. just accessing the website, or downloading documents) work fine. A speed test shows that they should get 1.2 Mbps up, so a 3 MB file should take about 20 seconds to upload. It takes 60+ seconds on their network. Sometimes even small files take OVER 10 minutes to upload, or they time out. Pings and traceroutes don't show any abnormally long hops or response times. They claim other SAAS applications they use allow them to upload just fine. Both IT teams are working together to resolve this issue. What kind of data can I request from the client to begin ruling things out? It seems like we need to somehow measure the LATENCY of the networks involved, or even at the switch level understand whether packets are getting dropped somewhere and why. Where should I start? Any help is appreciated. I'll provide more info upon request.
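    A sketch of measurements that separate the raw network path from the application, so both IT teams compare the same numbers (the host names and file name below are placeholders):

        # raw TCP upload throughput from the client's LAN towards our side
        iperf -s                           # on a host in our datacentre
        iperf -c our.server.example -t 30  # on a workstation at the client site

        # per-phase timing of a real HTTP upload to the application
        curl -o /dev/null -F "file=@3MB-testfile.bin" \
            -w "dns:%{time_namelookup} connect:%{time_connect} total:%{time_total}\n" \
            https://app.example.com/upload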

  • Fetch new Mails (Also from Subfolders) from another IMAP server as new Mail in Postfix

    - by Tobi
    Hi everyone. I have installed Postfix on a server, with aliases and domains from a MySQL database. It is configured to forward some addresses to other mail accounts and also delivers some mail into local mailboxes that are accessed over a Dovecot IMAP server. For this example let there be two users: [email protected], a user whose mail just gets forwarded to, let's say, [email protected], and [email protected], a user who accesses their mail from the local IMAP server. Now, I want to fetch some mail from another mail server and handle it as if it had been sent to a user of my mail server. Let's say these correlations exist: [email protected] has two external accounts, [email protected] and [email protected]; [email protected] also has one external account, [email protected]. The problem is that the new mail on that other mail server is not always in the inbox; it might be in subfolders such as mailinglists/all or mailinglists/it, but also in mailinglists/some-other-department, which is not interesting and should not be delivered. I already found a program called fetchmail, but I cannot find out how to fetch subfolders or decide which subfolders are fetched.
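    fetchmail can restrict a poll to named IMAP folders; a sketch of a ~/.fetchmailrc entry (host name, credentials and the local user are placeholders, and "smtphost localhost" hands the mail to Postfix as if it had arrived normally):

        poll imap.firm.com protocol IMAP
            user "bar-external" password "secret"
            is bar here
            folder "INBOX", "mailinglists/all", "mailinglists/it"
            smtphost localhost
            ssl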

  • Server 2008 RAID 5 Write Speeds

    - by Solipsism
    I recently configured a RAID 5 partition in Server 2008 with 4 RAID 5 disks. These disks are connected through a SATA expansion card that uses PCIe. This morning I checked and they had finally finished synchronizing, so I tried to do some speed tests. Copying off the disks started pretty much fine - speeds began at 125 MB/s, then trailed down to about 70 MB/s, which I found odd but not worrying. Writing TO the disks, however, is a completely different story. I attempted to copy some of my VM host ISOs onto the disks (~2-4 GB apiece) and this resulted in speeds of approximately 10 MB/s. I tried copying both from a local disk (connected directly to the motherboard) and from another server over the gigabit network, and the results were the same. I checked Performance Monitor while transferring the files and the only thing that stuck out was that my memory hard faults shot up to 6,000 per minute (spiking around 200/s), caused by explorer.exe. The system is running 2 GB of DDR667 ECC RAM and a quad-core 2.3 GHz Opteron. Is there anything I can do to fix this performance issue (buy more RAM? move the drives to a faster box? etc.), or am I just screwed so long as I stick to Windows?

  • I need to preserve a tape using Symantec Backup Exec. I'm having trouble doing so

    - by MrVimes
    Please forgive me if this is the wrong Stack Exchange site; please suggest which one I should post this to if it is. There's an automatic tape machine running in a remote location, with software (Symantec Backup Exec 11d). Recently one of the servers being backed up had problems with its RAID controller, so one of the drives has become invisible. I need to preserve the last good backup of that drive, so I am trying to swap the tape holding the most recent backup of that drive with one of the scratch (blank) tapes present in the machine. I've tried the following:

        1. Associate the blank media with the media set in question (Wednesday).
        2. For the existing media (the tape with the data I want to keep), click 'move to vault' and move it to the offline vault.
        3. Associate it with something other than 'Wednesday' (a media set called 'keep data infinitely...').
        4. Run an inventory on that slot.

    The steps above are, I'm led to believe, supposed to put the fresh tape in the slot that held the tape I want to keep. But after the inventory (and refreshing the device tree) the slot keeps showing up as containing the tape I want to keep. I am a complete newbie with this software. Can you tell me what I'm doing wrong, and/or how to achieve my goal? Edit: just want to point out that I did try to get help directly from Symantec with this, but having jumped through countless hoops to create an account and a support ticket, my progress was halted at the final step by a required field called a 'technical contact ID', with no explanation of what it is or how to get one.

  • IP addresses not listed for IIS website bindings

    - by Svinn
    I recently purchased a Windows cloud server from GoDaddy. I installed IIS 7 and all the other required software. I have 50.62.1.89 and 2 more public IPs, and I also have a private IP, 10.1.0.2. The problem is that I am unable to access any website through any public IP: all my public IPs open only the default website. Also, I can't see the public IPs in the IIS website bindings; only my private IP is listed there. On the server itself the public IPs likewise open only the default website, but I am able to open the websites using the private IP. The public IP addresses are pointed at my server correctly: I can reach the server over Remote Desktop using a public IP, and as I said, the public IPs do serve the default website from IIS without problems. Please help me; I've been confused for the last 2 days.
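    Since only the private address is bound to the NIC (the public IPs are NATed onto it by the provider), the sites have to be bound to the private IP or to "All Unassigned" and told apart by host header. A sketch with appcmd, where the site name and host name are examples:

        %windir%\system32\inetsrv\appcmd set site /site.name:"MySite" ^
            /+bindings.[protocol='http',bindingInformation='10.1.0.2:80:www.example.com']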

  • Bandwidth monitoring with iptables for non-router machine

    - by user1591276
    I came across a tutorial that describes how to monitor bandwidth using iptables. I wanted to adapt it for a non-router machine, so I want to know how much data is going in/coming out and not passing through. Here are the rules I added:

        iptables -N ETH0_IN
        iptables -N ETH0_OUT
        iptables -I INPUT -i eth0 -j ETH0_IN
        iptables -I OUTPUT -o eth0 -j ETH0_OUT

    And here is a sample of the output:

        user@host:/tmp$ sudo iptables -x -vL -n
        Chain INPUT (policy ACCEPT 1549 packets, 225723 bytes)
            pkts   bytes  target    prot opt in    out   source      destination
             199   54168  ETH0_IN   all  --  eth0  *     0.0.0.0/0   0.0.0.0/0

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
            pkts   bytes  target    prot opt in    out   source      destination

        Chain OUTPUT (policy ACCEPT 1417 packets, 178128 bytes)
            pkts   bytes  target    prot opt in    out   source      destination
             201   19597  ETH0_OUT  all  --  *     eth0  0.0.0.0/0   0.0.0.0/0

        Chain ETH0_IN (1 references)
            pkts   bytes  target    prot opt in    out   source      destination

        Chain ETH0_OUT (1 references)
            pkts   bytes  target    prot opt in    out   source      destination

    As seen above, there are no packet and byte values for ETH0_IN and ETH0_OUT, which is not the same result as in the tutorial I referenced. Is there a mistake I made somewhere? Thanks for your time.
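    The byte counts are actually on the two jump rules shown under INPUT and OUTPUT (54168 and 19597 bytes in the sample); the ETH0_IN/ETH0_OUT chains print nothing because they contain no rules of their own to carry counters. A sketch of one way to make the chains themselves readable:

        # give each accounting chain a rule whose counters can be read and zeroed
        iptables -A ETH0_IN  -j RETURN
        iptables -A ETH0_OUT -j RETURN

        iptables -L ETH0_IN  -v -x -n   # bytes column of the RETURN rule = inbound total
        iptables -L ETH0_OUT -v -x -n
        iptables -Z ETH0_IN             # reset after taking a sample
        iptables -Z ETH0_OUT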

  • Assign individual NIC to KVM guest

    - by Bin S
    I have a server with 6 NICs installed, running Ubuntu 12.04 LTS. I want to set up 4 guest VMs using KVM. I want to assign 2 NICs to the host (1 public IP and 1 private IP), and 1 NIC each to the 4 guest VMs (all private IPs). How do I do this? I am having trouble with my /etc/network/interfaces configuration file, shown below:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 192.168.1.109
            netmask 255.255.255.0
            gateway 192.168.1.5

        auto eth1
        iface eth1 inet static
            address 192.168.1.117
            netmask 255.255.255.0

        auto eth2
        iface eth2 inet manual

        auto br0
        iface br0 inet static
            address 192.168.1.118
            netmask 255.255.255.0
            bridge_ports eth2
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

        auto eth3
        iface eth3 inet manual

        auto br1
        iface br1 inet static
            address 192.168.1.119
            netmask 255.255.255.0
            bridge_ports eth3
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

        auto eth4
        iface eth4 inet manual

        auto br2
        iface br2 inet static
            address 192.168.1.123
            netmask 255.255.255.0
            bridge_ports eth4
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

        auto eth5
        iface eth5 inet manual

        auto br3
        iface br3 inet static
            address 192.168.1.124
            netmask 255.255.255.0
            bridge_ports eth5
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off
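    Once the bridges br0-br3 exist, each guest just needs its virtual NIC attached to "its" bridge; a sketch of the per-guest libvirt definition (guest names are examples, MAC addresses omitted):

        <!-- virsh edit guest1 : one <interface> per VM, pointing at br0..br3 -->
        <interface type='bridge'>
          <source bridge='br0'/>
          <model type='virtio'/>
        </interface>

        <!-- or when creating the guest: -->
        <!-- virt-install ... --network bridge=br0,model=virtio -->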

  • Why not block ICMP?

    - by Agvorth
    I think I almost have my iptables setup complete on my CentOS 5.3 system. Here is my script:

        # Establish a clean slate
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -F   # Flush all rules
        iptables -X   # Delete all chains

        # Disable routing. Drop packets if they reach the end of the chain.
        iptables -P FORWARD DROP

        # Drop all packets with a bad state
        iptables -A INPUT -m state --state INVALID -j DROP

        # Accept any packets that have something to do with ones we've sent on outbound
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

        # Accept any packets coming or going on localhost (this can be very important)
        iptables -A INPUT -i lo -j ACCEPT

        # Accept ICMP
        iptables -A INPUT -p icmp -j ACCEPT

        # Allow ssh
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT

        # Allow httpd
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT

        # Allow SSL
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT

        # Block all other traffic
        iptables -A INPUT -j DROP

    For context, this machine is a Virtual Private Server Web app host. In a previous question, Lee B said that I should "lock down ICMP a bit more." Why not just block it altogether? What would happen if I did that (what bad thing would happen)? If I need to not block ICMP, how could I go about locking it down more?
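    For reference, a sketch of the middle ground usually meant by "lock it down": keep the ICMP types the stack relies on and rate-limit pings, instead of the blanket accept (note that the earlier RELATED,ESTABLISHED rule already admits ICMP errors tied to existing connections):

        # instead of: iptables -A INPUT -p icmp -j ACCEPT
        iptables -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT   # path-MTU discovery
        iptables -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT             # traceroute replies
        iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 1/s -j ACCEPT
        # everything else ICMP falls through to the final DROP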

  • Routing and authenticating all access through squid

    - by Knight Samar
    Hi, I want to route all Internet access in my network through a Squid proxy server and authenticate and log all users. I want this to be a client-independent setting so that no one needs to do anything in their browsers or on their machines. I have set my network gateway as the proxy server so that all traffic is sent to it; I did this using options in the DHCP server. I first tried running Squid as a transparent proxy, but it won't authenticate in that mode. I tried using iptables to route all traffic to port 3128, but it won't pop up Squid's authentication dialog box. I then tried telling DHCP to hand out WPAD to all clients by placing a WPAD file on a web server, for automatic proxy configuration on the clients. Changes in dhcpd.conf:

        option wpad code 252 =test;
        option wpad "\n\000";
        option wpad "http://192.168.1.5/wpad.dat\n";

    The WPAD file:

        function FindProxyForURL(url, host) {
            return "PROXY squid-server-ip-address:3128; DIRECT";
        }

    But the browsers (different versions of Firefox and IE) seem to ignore it. :( What should I do?
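    Two notes, sketched below: an intercepted ("transparent") connection can never trigger the browser's proxy-login prompt, so WPAD plus authentication is the workable combination; and browsers only fetch wpad.dat when set to auto-detect (Firefox in particular only tries the DNS name wpad.<domain>, not DHCP option 252). A cleaner form of the DHCP option, plus the MIME type the web server must send:

        # dhcpd.conf -- define the option once, with a type, and hand out only the URL
        option wpad code 252 = text;
        option wpad "http://192.168.1.5/wpad.dat";

        # Apache on 192.168.1.5 needs:
        # AddType application/x-ns-proxy-autoconfig .dat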

  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate 2 virtual machines that are domain controllers between 2 datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration, to avoid USN rollback and other replication issues. These are the steps I was planning to perform:

        1. Shut down both DCs.
        2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCenters by IP address instead of hostname).
        3. Power them up at the new datacentre.
        4. Configure the network interface/DNS/DHCP for both DCs in the new datacentre.

    I chose Veeam FastSCP rather than VMware Standalone Converter because it copies rather than converts. Someone also suggested that I use a backup-and-restore app like Veeam Backup and Replication. It sounds like a simple job, but after shutting down both DCs the transfer rate using FastSCP was so slow it registered only 1 KB/s, as opposed to the normal 1 MB/s (or more). When that transfer attempt failed, I tried to cold clone both DCs, which resulted in both ESX hosts getting disconnected. I tried troubleshooting by referring to this: VMware KB - Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter. It seems that DNS being down was the cause of all the unusual behaviour; the moment I powered the DCs up via the VMware console command, the ESX hosts were able to connect to vCenter again. How can I avoid such a pitfall again? Am I doing it correctly? Any help would be greatly appreciated. Thank you.

  • Does ICS modify windows firewall policies in the registry?

    - by insipid
    I had a host machine I wanted to enable ICS on. First I realized that doing so was not possible until I enabled the Windows Firewall. Once I enabled the firewall and set up ICS, I noticed that due to group policy I could no longer disable the firewall. Also, any ports I tried to open seemed to be ignored. Although nothing seemed to be configured when I used the MMC snap-in to view local computer policy, when I checked the registry I noticed several policies set there in HKLM (such as disabling AllowLocalPolicyMerge). I was able to remove the policies from the registry and my open ports worked, but they were eventually re-added without my input. The network I am sharing the Internet from is an "unsecured" wireless network with an authentication page; is it possible that this is causing those policies to be set? Did ICS set those policies? When you go to the properties of the ICS-enabled adapter and then to the ICS settings, it takes you to a tab called Services where you can add and remove "services running on your network that internet users can access". Is this related to the Windows Firewall?

  • Moving a lot of small files between servers using rsync

    - by Adirael
    Hello guys, I'm moving a lot of files (about 2 million) between two servers in different locations using rsync over SSH. It seems to work fine, but I just realised I'm losing some files in the process. I have server 1, with the original data, and server 2, with the copy. Server 1 runs CentOS 5 and server 2 runs Ubuntu 10. I'm doing the transfer from server 2's command line like this:

        rsync -e ssh -avzn usr@server1:/remote/path /local/path

    The first file movement I did using tar, but I didn't think of piping it through ssh and it failed because the disk on server 1 was almost full, so I transferred it anyway (it was about 200 GB) and got about 80% of the files. Then I piped another tar with the rest of the files (they're in folders; I have 100 folders with about 30 subfolders each, with files inside) and now I have everything on server 2. I wanted to be sure, so my two options were getting the md5sum of all the files and checking them, or running an rsync on server 2 against server 1, which is what I did. It picked up some missing stuff and now it says there's nothing more to do (DRY RUN). But at least two files are still missing inside a subfolder. I ran that same rsync on that folder, but it's still a dry run. Am I doing something wrong? Thanks, and sorry for the wall of text.
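    One thing to double-check, sketched here: the n in -avzn is --dry-run, so that invocation only reports what it would transfer. A verification pass that actually copies and compares file contents rather than size+mtime might look like this (note the trailing slash on the source, so directories are not nested one level deeper):

        rsync -e ssh -avz --checksum --itemize-changes \
            usr@server1:/remote/path/ /local/path/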

  • Hosting multiple websites from home

    - by dean nolan
    I have just been accepted into Microsoft's Website Spark program, which I mainly joined for the tools: Visual Studio and Blend. I also have a few of my own websites, personal and a couple of business ones, and I work freelance, so sometimes I would like a place to just put up a demo of a client's project. The websites I currently have are all on different hosting providers and domain registrars. Website Spark comes with Windows Server 2008 and SQL Server 2008, so it would be really advantageous for me to have all of this in one place, with complete control over the database and the environment. I am therefore thinking, over the next 4-6 months, of migrating all of this to my own server that I will host from home, or maybe set up at home and then move into a proper datacentre. I was wondering what steps I should take and what to be aware of, specifically: 1) having all these different websites on one computer and having each URL go to the proper place; 2) cost effectiveness of having the server at home as opposed to in a datacentre - most solutions I see charge over £1000 a month for a machine in a datacentre. This is mostly for my own ease of management; the shared hosting I currently have is very limited configuration-wise. Would getting a server in-house be beneficial for later upgrading to the cloud? What measures should I take with my ISP? I know this is a lot to ask, but even links to good articles would be appreciated. Thanks.

  • What is going on when I can't access an SMB server share (not accessible error) until I run cmdkey to delete the credential?

    - by Warren P
    I have a network share connection issue. The first connection works, and seems to stay connected for at least a few hours. However, after each time my Windows 7 PC reboots, it can no longer connect to or browse the shared folder until I not only unmap and remap the mapped drive, but also use cmdkey to delete the stored credentials, like this:

        cmdkey /delete:Domain:target=HOSTNAME

    My work PC is on a domain and I am not the IT administrator, but I'm curious whether there is anything I can do to investigate this issue. Are there any settings in the registry or group policy that I could examine to see why the first connection works, but each subsequent attempt (once a stored credential exists) to browse or use the connection fails with an error saying it is "not accessible"? I do not even get an error until several minutes go by: the first thing I see is a frozen, empty window, and then the error appears. This has happened when connecting to a share on a Drobo device, and to a share that is not on the domain but was on a Microsoft Home Server. I wonder if there's something broken in Windows 7 Professional with regard to connecting to non-domain shares when an Active Directory domain controller exists and the workstation is joined to a domain. The problem only occurs if I click "remember credentials". It is not fixed by any amount of working with net use; using cmdkey to delete all stored credentials for the host is the only way to get back in, and it affects all non-domain shared folders. Update: I'm hoping there are some registry locations I could check that could be misconfigured in some way that might explain why SMB/CIFS stored credentials for non-domain systems seem to be auto-invalidated in this weird way. Knowing how whacko Microsoft Windows domain and security handling is sometimes, this could be some kind of stupid "feature".

  • Nginx + PHPBB3 reverse proxy images problem

    - by siberiano
    Hello all. I have a problem with my nginx frontend + Apache2 backend + phpBB3 setup. It doesn't load the CSS or the images, and I get constant errors like these:

        2010/04/14 16:57:25 [error] 13365#0: *69 open() "/var/www/foo/styles/styles/coffee_time/theme/large.css"
        failed (2: No such file or directory), client: 83.44.175.237, server: www.foo.com,
        request: "GET /styles/coffee_time/theme/large.css HTTP/1.1", host: "www.foo.com",
        referrer: "http://www.foo.com/viewforum.php?f=43"

    This is my config for the site:

        server {
            listen 80;
            server_name www.foo.com;
            access_log /var/log/nginx/foo.access.log;

            # serve static files directly
            location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ {
                access_log off;
                expires 30d;
                root /var/www/trasteando/;
            }

            location / {
                root /var/www/foo/;
                index /var/www/foo/index.php;
            }

            # proxy the PHP scripts to the predefined upstream "apache"
            location ~ .php$ {
                proxy_pass http://apache;
            }

            location /styles/ {
                root /var/www/foo/styles/;
            }
        }
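    With the root directive nginx appends the full request URI to the given path, which is exactly how /styles/... becomes .../styles/styles/... in the error above. A sketch of the two static locations pointed at the same document root as the site:

        location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
            root /var/www/foo;
            expires 30d;
            access_log off;
        }

        location /styles/ {
            root /var/www/foo;
        }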

  • Setting up Web server so it is easy to migrate

    - by Nyxynyx
    Hi, I am about to move my site from a VPS to another host's dedicated server. One of my concerns is scaling the site in the future in a way that involves another change of server. Since I am starting the dedicated server from scratch with only the OS, I need to install the whole web server stack, including Apache and its mods, PHP, MySQL, PostgreSQL, Tomcat, Solr and a few other pieces of software like ImageMagick and git. Question: is there a way for me to set up this new dedicated server such that I can easily migrate the entire site, both the technology stack and the code, to a newer server (an upgrade from this new dedicated server) without reinstalling and reconfiguring everything? The code for the website is handled by git and GitHub, so that's not a problem; I'm more concerned about the rest of the required software. Side question: the current VPS uses CentOS with cPanel, and it seems that many packages are outdated in yum and that cPanel interferes with the installation of many packages. Which OS should I go with for the new server? Ubuntu?
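    One low-tech way to keep the stack reproducible is to keep its installation as a script (or a Puppet/Chef manifest) in the same git repository as the site, so moving to the next box is mostly a matter of re-running it. A sketch for Ubuntu (the package names are era-appropriate examples, and Solr would still be dropped into Tomcat by hand):

        #!/bin/bash
        # provision.sh -- rough sketch of the stack install, kept under version control
        set -e
        apt-get update
        apt-get install -y apache2 libapache2-mod-php5 php5-mysql php5-pgsql \
            mysql-server postgresql tomcat6 imagemagick git-core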
