Search Results

Search found 50510 results on 2021 pages for 'static files'.


  • How to configure DNS so that www.example.com goes to one server, *.example.com to another

    - by fishwebby
    I'm trying to set up my domain as follows, but I'm not actually sure if it's possible. I have a domain where I would like the base and www addresses to go to my static site, but all others to go to my application server. My domain is registered with Dreamhost, and my application is on a VPS at Webbynode. I've set up the domain in Dreamhost to use Webbynode's nameservers:

        ns1.dnswebby.com
        ns2.dnswebby.com
        ns3.dnswebby.com

    And in Webbynode I've set up a wildcard A record to point to the IP address of my VPS:

        *    1.2.3.4    A

    This works nicely: if I go to app.example.com it resolves to my application server at Webbynode. However, what I'd like is for example.com and www.example.com to go to my static site, hosted back at Dreamhost, whilst still having any other subdomain go to my app. To try and achieve this I set up these DNS "NS" entries at Webbynode, trying to get Dreamhost to resolve those names:

        (empty)    ns1.dreamhost.com    NS
        (empty)    ns2.dreamhost.com    NS
        (empty)    ns3.dreamhost.com    NS
        www        ns1.dreamhost.com    NS
        www        ns2.dreamhost.com    NS
        www        ns3.dreamhost.com    NS

    (I don't have a fixed IP address at Dreamhost, so I can't just set up simple A records.) However, this doesn't work... does anyone have any idea if this is possible, and if so, how it could be done?

    Update: I've got this working now, as above for the domain (i.e. registered with Dreamhost, but using Webbynode's nameservers). To delegate the DNS for www.example.com to Dreamhost, I've got the following DNS entries set up (note the full stops at the end):

        www.example.com.    ns1.dreamhost.com.    NS
        www.example.com.    ns2.dreamhost.com.    NS
        www.example.com.    ns3.dreamhost.com.    NS

    And to get example.com to resolve to my static site, I set up a CNAME record:

        example.com.    www.example.com.    CNAME

    So now example.com and www.example.com go to my static site on Dreamhost (and if Dreamhost change the IP address of my shared hosting it won't affect me), and all other subdomains go to my application server. This seems to work nicely, but if anyone knows a better way to do it I'd be happy to hear it. Thanks to all who replied.
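
    A quick way to verify this kind of split delegation from the outside (plain dig queries; nothing here is specific to Dreamhost or Webbynode):

        dig NS www.example.com +short   # should list the ns*.dreamhost.com servers
        dig www.example.com +short      # should resolve via Dreamhost to the static site
        dig app.example.com +short      # should hit the wildcard A record, 1.2.3.4

    One aside: a CNAME at the zone apex (example.com.) coexists uneasily with the SOA/NS records that must also live there, which the DNS specs forbid, so if odd resolver behaviour ever shows up, that record is the first thing to suspect.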

    Read the article

  • (Serving PHP) Does Apache2 create a new thread for every connection?

    - by apasajja
    Based on many online sources, when serving static files Apache2 creates a new thread for every connection, which makes it resource-hungry. But what about serving PHP through Apache2 (mod_php, MPM worker, etc.)? Will Apache also open a new thread per connection, as it does when serving static files? (AFAIK, with nginx and php-fpm we can set the maximum number of worker threads, but I don't know how many connections each thread handles.) I'm planning to use Apache2 to serve PHP, and I hope it will match nginx with PHP-FPM, or even beat it, in resource usage and performance.
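
    For what it's worth, with the worker MPM the unit of concurrency is a thread inside a long-lived child process, not a fresh process per connection; mod_php, on the other hand, is usually paired with the prefork MPM (one process per connection) because many PHP builds are not thread-safe. A minimal worker tuning sketch (standard Apache 2.2 directives; the numbers are illustrative assumptions, not recommendations):

        # worker MPM: a few child processes, each serving many connections via threads
        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadsPerChild      25     # threads (concurrent connections) per child process
            MaxClients          150     # cap on simultaneous connections overall
            MaxRequestsPerChild   0     # never recycle children
        </IfModule>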

    Read the article

  • Cisco IPSec, nat, and port forwarding don't play well together

    - by Alan
    I have two Cisco ADSL modems configured conventionally to NAT the inside traffic to the ISP. That works. I have two port forwards on one of them, for SMTP and IMAP from the outside to the inside; this provides external access to the mail server. This works. The modem doing the port forwarding also terminates PPTP VPN traffic. There are two DNS servers: one inside the office, which resolves mail to the local address, and one outside the office, which resolves mail for the rest of the world to the external interface. That all works.

    I recently added an IPSec VPN between the two modems, and that works for everything EXCEPT connections over the IPSec VPN to the mail server on port 25 or 143 from workstations on the remote LAN. It would seem that the modem with the port forwards is confusing traffic from the mail server destined for a machine on the other side of the IPSec VPN with traffic that should go back to a port-forwarded connection. PPTP VPN traffic to the mail server is fine. Is this a scenario anybody is familiar with, and are there any suggestions on how to work around it? Many thanks, Alan.

    But wait, there is more... These are the strategic parts of the NAT config. A route map is used to exclude the LANs that are reachable via the IPSec tunnels from being NATed:

        int ethernet0
         ip nat inside
        int dialer1
         ip nat outside
        ip nat inside source route-map nonat interface Dialer1 overload
        route-map nonat permit 10
         match ip address 105
        access-list 105 remark *** Traffic to NAT
        access-list 105 deny ip 192.168.1.0 0.0.0.255 192.168.9.0 0.0.0.255
        access-list 105 deny ip 192.168.1.0 0.0.0.255 192.168.48.0 0.0.0.255
        access-list 105 permit ip 192.168.1.0 0.0.0.255 any
        ip nat inside source static tcp 192.168.1.241 25 interface Dialer1 25
        ip nat inside source static tcp 192.168.1.241 143 interface Dialer1 143

    At the risk of answering my own question, I resolved this outside the Cisco realm: I bound a secondary IP address to the mail server, 192.168.1.244, changed the port forwards to use it while leaving all the local and IPSec traffic on 192.168.1.241, and the problem was solved. New port forwards:

        ip nat inside source static tcp 192.168.1.244 25 interface Dialer1 25
        ip nat inside source static tcp 192.168.1.244 143 interface Dialer1 143

    Obviously this is a messy solution, and being able to fix this in the Cisco would be preferable.
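
    For anyone debugging the same symptom, two standard IOS show commands make the confusion visible:

        show ip nat translations    ! is VPN-bound traffic from .241:25/.241:143 being rewritten by the static NAT?
        show crypto ipsec sa        ! are the encap/decap counters climbing for the mail-server flows?

    The underlying issue is that a plain "ip nat inside source static tcp" entry matches all traffic from that host and port, with no route-map exemption the way the dynamic NAT above has, so the IPSec-bound replies get translated too; the secondary-address trick sidesteps exactly that.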

    Read the article

  • Windows XP installation problems

    - by Samurai Waffle
    I recently asked a question on here and thought I had it working... here is a link to it: Windows XP Installation problems. So basically I'm having trouble getting XP installed. To sum it up: a computer I have had a boot-sector virus, and I used Darik's Boot and Nuke to wipe the hard drive clean, so the hard drive has nothing on it. I had to try to install Windows through a DOS prompt, because for some reason it won't boot off the DVD. The UBCD is able to look at the files located on the DVD I have in, but I can't boot from it for some reason. So I extracted it to a USB drive, booted to DOS and started the setup process.

    Here's the weird thing with DOS: it can only find the C: drive, and the C: drive in DOS is the flash drive that I have in, running DOS. I can't find the hard drive anywhere! So anyway, after starting the setup process, it copied the files over to the "hard drive" (which took 16 hours, because the version of DOS I ran couldn't run smartdrv.exe), and then it said the computer had to reboot. So I let it reboot, and it stopped and said there is no boot device. I popped in the UBCD that I have installed on a flash drive, and discovered that setup had copied the Windows files over to the flash drive and not the hard drive. It never asked where it should extract the files... So I toyed around with UBCD, ran a diagnostic test on the hard drive to make sure it was fine, and it came out clean.

    So I'm stuck now. How can I get this installed? Writing this, I came up with an idea: if I copy the DOS startup files over to the hard drive, would I be able to start DOS from it? If so, I believe that could fix my problems. Any help is greatly appreciated, because I am running out of ideas and am at my wits' end with this computer.

    Read the article

  • lighttpd config and rewriting/disabling attempts to access favicon.ico

    - by Kyle
    I've got lighttpd and Apache working together on an app I'm building; lighty is serving the static content. However, each time a static asset is requested, I see a "not found: favicon.ico" message in the logs. I have added the following URL rewrite:

        url.rewrite-once = ( "^/favicon.ico$" => "/assets/images/favicon.png" )

    But to no avail; I'm still getting the message. Any ideas?
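
    Two cheap things to check first (a sketch, not a confirmed fix): in a regex an unescaped dot matches any character, and the rewrite only helps if the target actually exists under the document root lighty serves:

        # escape the literal dot, otherwise "/faviconXico" would match as well
        url.rewrite-once = ( "^/favicon\.ico$" => "/assets/images/favicon.png" )

    Then confirm that server.document-root plus /assets/images/favicon.png is a real file; if the "not found" line persists, it may be Apache rather than lighty logging it, for requests that never pass through the rewrite.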

    Read the article

  • debian gateway using iptables

    - by meijuh
    I am having problems setting up a Debian gateway server. My goal: eth1 is the WAN interface, eth0 is the LAN interface, and ports 22 (SSH) and 80 (HTTP) on the gateway are reachable from the outside world (SSH and HTTP run on this server). What I did was the following.

    First, I created a file /etc/iptables.rules with these contents:

        *nat
        -A POSTROUTING -o eth1 -j MASQUERADE
        COMMIT

        *filter
        -A INPUT -i lo -j ACCEPT
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -i eth1 -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -i eth1 -j DROP
        COMMIT

    Then I edited /etc/network/interfaces as follows:

        # The loopback network interface
        auto lo
        iface lo inet loopback
            pre-up iptables-restore < /etc/iptables.rules

        auto eth0
        allow-hotplug eth0
        iface eth0 inet dhcp

        #auto eth1
        #allow-hotplug eth1
        #iface eth1 inet dhcp
        allow-hotplug eth1
        iface eth1 inet static
            address 217.119.224.51
            netmask 255.255.255.248
            gateway 217.119.224.49
            dns-nameservers 217.119.226.67 217.119.226.68

    Finally, I uncommented the line net.ipv4.ip_forward=1 in /etc/sysctl.conf to allow packet forwarding. The static settings for eth1, such as the IP address, I got from my router (which I want to replace); I simply copied these. I have a (Windows) DNS + DHCP server on IP address 10.180.1.10, which assigns IP address 10.180.1.44 to eth0. What this server does is not really interesting; it only maps domain names on our local network and assigns one static IP to the gateway.

    What works: on the gateway itself I can ping 8.8.8.8 and google.nl. So that is OK. What does not work: (1) machines connected to eth0 (indirectly, via a switch) cannot ping any IP or domain, so I guess the gateway cannot be found; and (2) when I configure my Linux laptop to use a static IP 10.180.1.41, a mask and a gateway (10.180.1.44), I cannot ping any IP or domain either. This means that maybe my iptables rules are incorrect or not loaded correctly, or maybe I have to reconfigure the DNS/DHCP on my Windows machine. I have not reset the Windows machine's network or restarted the DNS/DHCP services; should I do this? I did not install dnsmasq as described here: http://blog.noviantech.com/2010/12/22/debian-router-gateway-in-15-minutes/. I don't think this is necessary?
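
    A quick checklist for this kind of setup (standard commands; the interface names follow the question):

        sysctl net.ipv4.ip_forward             # must print 1; "sysctl -p" reloads /etc/sysctl.conf without a reboot
        iptables -t nat -L POSTROUTING -v -n   # do the MASQUERADE packet counters increase when a LAN host pings 8.8.8.8?
        tcpdump -n -i eth0 icmp                # do the LAN pings reach the gateway at all?
        tcpdump -n -i eth1 icmp                # do they leave masqueraded, and do replies come back?

    If the counters stay at zero, the LAN hosts are not actually sending through 10.180.1.44 (wrong gateway, or DHCP still handing out the old router); if packets leave eth1 but nothing returns, the forwarding/NAT side needs a second look.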

    Read the article

  • CDN vs own apache servers?

    - by ajsie
    I know that a CDN is just for static content, but then I'd still have to spread my Apache servers out to all corners of the world, right? So once I have done that, why don't I just set up some dedicated Apache servers serving only static content, just like a CDN? Are there real benefits to using a CDN compared to that scenario?

    Read the article

  • Which web server architecture do you think is better?

    - by ngache
    Option 1: use Apache to serve the dynamic requests that need to be processed by PHP, and use nginx to serve static files. Option 2: use nginx to serve all requests. So the key point is: which of them is more efficient at serving dynamic requests? (We have no doubt that nginx is much better than Apache at serving static files.)
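
    A minimal sketch of the first architecture, assuming Apache with mod_php listens on 127.0.0.1:8080 (the port and paths are illustrative, not prescribed):

        server {
            listen 80;
            root /var/www/site;

            # nginx answers everything that exists on disk
            location / {
                try_files $uri $uri/ @apache;
            }

            # PHP (and anything not on disk) is handed to Apache
            location ~ \.php$ {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
            location @apache {
                proxy_pass http://127.0.0.1:8080;
            }
        }

    The trade-off in option 2 is swapping Apache/mod_php for nginx plus an external PHP process manager (php-fpm), so the comparison is really mod_php behind a proxy versus FastCGI workers.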

    Read the article

  • Hell: NTFS "Restore previous versions"...

    - by ttsiodras
    The hell I have experienced these last 24h: my Windows 7 installation was hosed after a bluetooth driver install. Attempting to recover using restore points via "Repair" on the bootable Win7 installation CD: going back one day in the restore points, no joy; two days, no joy; one week, still no joy. Windows won't boot. Apparently something is REALLY hosed.

    And then it hits me. PANIC. The restore points somehow reverted DATA files to their older versions! Word, Powerpoint, SPSS, etc. document versions are all one week old now, even using the "freshest" restore point (it failed to restore yesterday's restore point, so I am stuck at old versions of the data).

    Booting KNOPPIX and mounting the NTFS partition read-only under KNOPPIX: nope, the data files are still the week-old versions. Booting the Win7 CD, Recovery console, cmd prompt, navigating: yep, data files are still one week old. Removing the drive and mounting it under another Win7 installation: still old data. Running NTFS undelete on the drive (read-only scan), searching for a file created yesterday: not found. Despair.

    At this point, an idea: I install a brand-new Windows installation, keeping the old one in Windows.old (the default behaviour of Windows installs). I boot the new install, go to my C:\Data\ folder, choose "Restore previous versions", click on yesterday's date, and click Open... YES! It works! I can see the latest versions of my files (e.g. from yesterday). Thank God.

    And then I try to view the files under the "yesterday" snapshot version of C:\Users\MyAccount\Desktop... and I get "Permission Denied" as soon as I try to open Users\MyAccount. I make sure I am an administrator. No joy. Apparently the new Windows installation does not have access to read the "NTFS snapshots" or "Volume Shadow Copy" snapshots of my old Windows account. Cross-installation permissions? I need to somehow tell the new Windows install that I am the same "old" user, so that I will be able to access the Users\MyAccount folder of the snapshot of my old user account. Help?
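
    One approach sometimes suggested for this exact cross-install ACL wall (a sketch, not a verified fix): expose the shadow copy as a folder and copy out of it with robocopy in backup mode, which uses the SeBackupPrivilege to read past NTFS ACLs. From an elevated command prompt in the new install, something along these lines:

        vssadmin list shadows
        :: note the "Shadow Copy Volume" device for yesterday's snapshot, then expose it
        :: (the trailing backslash on the device path matters)
        mklink /d C:\snap \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\
        :: /B = backup mode (requires admin), /E = recurse including empty dirs
        robocopy C:\snap\Users\MyAccount D:\Rescue /B /E

    The snapshot index (HarddiskVolumeShadowCopy1) is whatever vssadmin reports; taking ownership of the live Windows.old\Users\MyAccount tree via the folder's Security tab is the blunter alternative, but it does not help for the read-only snapshots themselves.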

    Read the article

  • Blocking ICMP outgoing requests only on eth1

    - by Raj
    I am creating a NAT gateway with iptables. Computer A: eth0 (DHCP) + eth1 (static IP 192.168.0.1, the LAN gateway). Computer B: eth1 (static IP 192.168.0.2, using Computer A as its gateway). I know how to block outgoing ICMP echo requests (-A OUTPUT -p icmp --icmp-type echo-request -j DROP), but that only blocks ICMP requests from Computer A, not from Computer B (in fact, it works only for Computer A; Computer B can keep sending them). I tried the same command with -o eth1 added, but then it does not block anything at all. Any idea?
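
    A sketch of the likely missing piece: packets that Computer B sends through the gateway traverse the FORWARD chain, never OUTPUT (OUTPUT only sees traffic the gateway itself generates), which would also explain why adding -o eth1 to the OUTPUT rule matched nothing:

        # pings that Computer A itself sends out
        iptables -A OUTPUT -p icmp --icmp-type echo-request -j DROP
        # pings forwarded on behalf of LAN hosts such as Computer B (they arrive on eth1)
        iptables -A FORWARD -i eth1 -p icmp --icmp-type echo-request -j DROP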

    Read the article

  • logrotate isn't rotating a particular log file (and i think it should be)

    - by Max Williams
    Hi all. For a particular app, I have log files in two places. One of the places has just one log file that I want to use with logrotate; for the other location, I want to use logrotate on all log files in that folder. I've set up an entry called millionaire-staging in /etc/logrotate.d and have been testing it by calling logrotate -f millionaire-staging. Here's my entry:

        # /etc/logrotate.d/millionaire-staging
        compress
        rotate 1000
        dateext
        missingok
        sharedscripts
        copytruncate

        /var/www/apps/test.millionaire/log/staging.log {
            weekly
        }

        /var/www/apps/test.millionaire/shared/log/*log {
            size 40M
        }

    So, for the first folder I want to rotate weekly (this seems to have worked fine). For the other, I want to rotate only when the log files get bigger than 40 meg. When I look in that folder (using the same glob as in the logrotate config), I can see a file in there that's 54M and which hasn't been rotated:

        $ ls -lh /var/www/apps/test.millionaire/shared/log/*log
        -rw-r--r-- 1 www-data root  33M 2010-12-29 15:00 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.access-log
        -rw-r--r-- 1 www-data root  54M 2010-09-10 16:57 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.debug-log
        -rw-r--r-- 1 www-data root  53K 2010-12-14 15:48 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.error-log
        -rw-r--r-- 1 www-data root 3.8M 2010-12-29 14:30 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.ssl.access-log
        -rw-r--r-- 1 www-data root  16K 2010-12-17 15:00 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.ssl.error-log
        -rw-r--r-- 1 deploy  deploy   0 2010-12-29 14:49 /var/www/apps/test.millionaire/shared/log/unicorn.stderr.log
        -rw-r--r-- 1 deploy  deploy   0 2010-12-29 14:49 /var/www/apps/test.millionaire/shared/log/unicorn.stdout.log

    Some of the other log files in that folder have been rotated, though:

        $ ls -lh /var/www/apps/test.millionaire/shared/log
        total 91M
        -rw-r--r-- 1 www-data root  33M 2010-12-29 15:05 test.millionaire.charanga.com.access-log
        -rw-r--r-- 1 www-data root  54M 2010-09-10 16:57 test.millionaire.charanga.com.debug-log
        -rw-r--r-- 1 www-data root  53K 2010-12-14 15:48 test.millionaire.charanga.com.error-log
        -rw-r--r-- 1 www-data root 3.8M 2010-12-29 14:30 test.millionaire.charanga.com.ssl.access-log
        -rw-r--r-- 1 www-data root  16K 2010-12-17 15:00 test.millionaire.charanga.com.ssl.error-log
        -rw-r--r-- 1 deploy  deploy    0 2010-12-29 14:49 unicorn.stderr.log
        -rw-r--r-- 1 deploy  deploy  41K 2010-12-29 11:03 unicorn.stderr.log-20101229.gz
        -rw-r--r-- 1 deploy  deploy    0 2010-12-29 14:49 unicorn.stdout.log
        -rw-r--r-- 1 deploy  deploy 1.1K 2010-10-15 11:05 unicorn.stdout.log-20101229.gz

    I think what might have happened is that I first ran this config with a pattern matching *.log, and that only rotated the two files ending in .log (as opposed to -log). Then, when I changed the config and ran it again, it won't do any more, since it thinks it's already had its weekly run, or something. Can anyone see what I'm doing wrong? Is it to do with those top folders being owned by root rather than deploy, do you think? Thanks, Max
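
    Two standard ways to see what logrotate thinks it has already done (paths as on Debian/Ubuntu):

        # dry run in debug mode: prints every file the globs match and why it would or wouldn't rotate
        logrotate -d -f /etc/logrotate.d/millionaire-staging
        # the state file records the last rotation date per log file; logrotate skips
        # anything it believes it handled recently, so stale entries explain "nothing happens"
        grep millionaire /var/lib/logrotate/status

    If the earlier *.log run did leave entries behind, deleting just those lines from the status file (or waiting out the interval) makes logrotate treat the -log files as fresh candidates; ownership by root vs deploy matters less here than the state file and the 40M threshold actually being exceeded at run time.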

    Read the article

  • Has anyone achieved true differential sync with rsync in ESXi?

    - by Julius
    Berate me later on the fact that I'm using the service console to do anything in ESXi... I've got a working rsync binary (v3.0.4) that I can use in ESXi 4.1U1. I tend to use rsync over cp when copying VMs or backups from one local datastore to another. I've used rsync to copy data from one ESXi box to another, but that was just for small files. I'm now trying to do true differential syncs of backups taken via ghettoVCB between my primary ESXi machine and a secondary one. But even when I do this locally (one datastore to another on the same ESXi machine), rsync appears to copy the files in their entirety. I've got two VMDKs totalling 80GB in size, and rsync still takes anywhere between 1 and 2 hours, but the VMDKs aren't growing that much daily.

    Below is the rsync command I'm executing. I am copying locally because ultimately these files will get copied onto a datastore created from a LUN on a remote system; it's not an rsync that will be serviced by an rsync daemon on a remote system.

        rsync -avPSI VMBACKUP_2011-06-10_02-27-56/* VMBACKUP_2011-06-01_06-37-11/ --stats --itemize-changes --existing --modify-window=2 --no-whole-file

        sending incremental file list
        >f..t...... VM-flat.vmdk
          42949672960 100%   15.06MB/s    0:45:20 (xfer#1, to-check=5/6)
        >f..t...... VM.vmdk
                  556 100%    4.24kB/s    0:00:00 (xfer#2, to-check=4/6)
        >f..t...... VM.vmx
                 3327 100%   25.19kB/s    0:00:00 (xfer#3, to-check=3/6)
        >f..t...... VM_1-flat.vmdk
          42949672960 100%   12.19MB/s    0:56:01 (xfer#4, to-check=2/6)
        >f..t...... VM_1.vmdk
                  558 100%    2.51kB/s    0:00:00 (xfer#5, to-check=1/6)
        >f..t...... STATUS.ok
                   30 100%    0.02kB/s    0:00:01 (xfer#6, to-check=0/6)

        Number of files: 6
        Number of files transferred: 6
        Total file size: 85899350391 bytes
        Total transferred file size: 85899350391 bytes
        Literal data: 2429682778 bytes
        Matched data: 83469667613 bytes
        File list size: 129
        File list generation time: 0.001 seconds
        File list transfer time: 0.000 seconds
        Total bytes sent: 2432530094
        Total bytes received: 5243054

        sent 2432530094 bytes  received 5243054 bytes  295648.92 bytes/sec
        total size is 85899350391  speedup is 35.24

    Is this because ESXi is itself making so many changes to the VMDKs that, as far as rsync is concerned, the entire file has to be retransmitted? Has anyone actually achieved a true differential sync with ESXi?
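
    Reading the --stats block closely suggests the delta algorithm is in fact working: literal data is ~2.4GB against ~83.5GB of matched data (hence "speedup is 35.24"), so the hours are going into reading both 40GB flat files end to end, not into retransmission. Also worth noting: the -I (--ignore-times) flag forces every file to be re-examined even when size and mtime match, so nothing is ever skipped quickly. A commonly suggested variation for large VMDKs (a sketch using the same paths as above) is to update the destination in place, so rsync patches the existing file instead of writing out a full temporary copy:

        rsync -avPSI --inplace --existing --modify-window=2 --no-whole-file \
            VMBACKUP_2011-06-10_02-27-56/* VMBACKUP_2011-06-01_06-37-11/

    Both files must still be read in full for checksumming, so on the service console's limited I/O this will never be fast; it mainly cuts the write side roughly in half.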

    Read the article

  • SQL Server database filled the hard drive and freeing up space isn't possible

    - by Jon
    I have a database in SQL Server 2008 on a 1TB hard drive, and it filled the drive; there are only 4KB free. The MDF file is 323GB and the LDF is 653GB. The disk this DB is on has no other files on it, other than the MDF and LDF, so it's impossible to free up any space on the drive. The main hard disk is smaller, but there is enough room to transfer the MDF to that drive, in case that helps. This server is overseas at a customer site, and it's not possible at the moment to add more disk space to the server. It's also not possible to delete any records, because the DB is in a failed mode (due to no disk space) and doesn't respond to most commands.

    The DB is currently in full recovery mode, which is why the LDF file is so large. This DB really doesn't need to be in full recovery, so going forward we plan on switching it to simple mode, which will save us a lot of space. I also don't care about losing the LDF file, but I need all of the data. I've spent a lot of time looking for a way out of this problem, but everything I've found involves either freeing up disk space or adding more disk space, neither of which is an option at this time. I'm stuck, and any help would be greatly appreciated. I get the following log when trying to switch the DB to online mode:

        Msg 945, Level 14, State 2, Line 3
        Database 'DBNAME' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
        Msg 5069, Level 16, State 1, Line 3
        ALTER DATABASE statement failed.
        Msg 1101, Level 17, State 12, Line 3
        Could not allocate a new page for database 'DBNAME' because of insufficient disk space in filegroup 'DEFAULT'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    I've found the following solutions, but none of them work, because there is no disk space on that drive, and since the DB is in a failed state I can't run most commands: DBCC SHRINKFILE (can't be run, because doing a 'use DBNAME' fails), and detaching the DB and then changing the location of the MDF/LDF files (this fails because the DB is in an offline mode, so you can't run detach). I'm at a loss about what else to try. Thanks.
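
    One avenue that fits these constraints (a sketch; the logical file name below is a guess, so read the real one from sys.master_files first): the file locations live in the master catalog and can be repointed while the database is offline, which needs no free space on the full drive:

        -- find the logical names and current paths (works even when the DB itself won't open)
        SELECT name, physical_name, state_desc
        FROM sys.master_files
        WHERE database_id = DB_ID('DBNAME');

        ALTER DATABASE DBNAME SET OFFLINE;
        -- copy the 323GB MDF to the other drive at the OS level, then:
        ALTER DATABASE DBNAME MODIFY FILE (NAME = DBNAME_Data, FILENAME = 'D:\Data\DBNAME.mdf');
        ALTER DATABASE DBNAME SET ONLINE;

    With the MDF off the full drive, the usual sequence (switch to SIMPLE recovery, then DBCC SHRINKFILE on the log) becomes possible; whether SET OFFLINE succeeds in the current failed state is the part to test first.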

    Read the article

  • Cygwin's RSYNC for large data transfer

    - by Tim Brigham
    I'm using rsync from Cygwin to do a large-scale data transfer from an aging HP MSA 1000 to a new DAS attached to a different server. I have a daemon running on the remote server in read-only mode and a local copy writing the files to disk.

    One of my servers is an image repository with over a million files spread across about 300 directories, each file averaging only a couple hundred kilobytes. More so than any other box, this one is proving problematic: the rsync process will work for a while (sometimes 20 minutes, sometimes an hour) and then simply quit and sit idle at a given file name. I have verified that the file isn't corrupt on the remote server and that the file is successfully created on the local drive. I ran the rsync client in -vv mode, which returns nothing. I checked the logs created by the daemon. I looked at the network utilization on the interface, which is sitting idle. I looked at the AV settings to see if anything could pose a problem there. I even updated to the latest release of Cygwin. What do I need to do in order to keep this connection up?

    EDIT: The client system is using the command:

        rsync.exe server::Drives/f/Repo/ /cygdrive/T/Repo --archive -P -vv

    The server is using the command:

        rsync.exe --daemon --no-detach --config "rsyncd.conf"

    The contents of rsyncd.conf:

        use chroot = false
        strict modes = false
        hosts allow = 192.168.100.9
        log file = c:/rsyncd.log
        uid = 0
        gid = 0

        [Drives]
        path = /cygdrive
        read only = yes

    EDIT: The file server is 2003, the disk type on the array is GPT, and the size of the array is about 4TB.

    EDIT: Stranger... it looks like the process reliably errors out at about 175,000 files. rsync runs fine when I pick the directories it has problems with one at a time.

    EDIT:

        rsync  version 3.0.9  protocol version 30
        Copyright (C) 1996-2011 by Andrew Tridgell, Wayne Davison, and others.
        Web site: http://rsync.samba.org/
        Capabilities:
            64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints,
            no socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
            append, ACLs, xattrs, iconv, symtimes

    A similar failure occurred when going from the same set of files with Cygwin to a Linux install; it didn't happen until several hours later than normal, however.
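
    Given that per-directory runs succeed, one workaround sketch (bash under Cygwin; the module name and paths are copied from the question) is to drive one rsync per top-level directory, so no single process ever has to hold the whole million-file list and a hang costs one directory rather than the whole run:

        #!/bin/bash
        # dirs.txt: one top-level directory name per line (about 300 entries),
        # e.g. written once by hand or from the output of "rsync server::Drives/f/Repo/"
        while read -r d; do
            rsync --archive -P "server::Drives/f/Repo/$d/" "/cygdrive/T/Repo/$d/" \
                || echo "FAILED: $d" >> failed.txt   # note failures, keep going
        done < dirs.txt

    The consistent ~175,000-file ceiling smells like a per-process resource limit (memory for the in-core file list on the 2003 box, or a Cygwin limit), which batching also sidesteps.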

    Read the article

  • Debian package version conventions

    - by JackWu
    I'm using Debian/Ubuntu, and I get confused about the versions of packages. When using the dpkg -l command, I get:

        ii  vim                2:7.3.429-2ubuntu2.1     Vi IMproved - enhanced vi editor
        ii  vim-common         2:7.3.429-2ubuntu2.1     Vi IMproved - Common files
        ii  vim-runtime        2:7.3.429-2ubuntu2.1     Vi IMproved - Runtime files
        ii  vim-tiny           2:7.3.429-2ubuntu2.1     Vi IMproved - enhanced vi editor - compact version
        ii  virt-what          1.11-1                   detect if we are running in a virtual machine
        ii  w3m                0.5.3-5ubuntu1           WWW browsable pager with excellent tables/frames support
        ii  watershed          6                        reduce superfluous executions of idempotent command
        ii  wget               1.13.4-2ubuntu1          retrieves files from the web
        ii  whiptail           0.52.11-2ubuntu10        Displays user-friendly dialog boxes from shell scripts
        ii  whoopsie           0.1.33                   Ubuntu crash database submission daemon
        ii  wimlib9            1.5.0-1~webupd8~precise  Library to extract, create, modify, and mount WIM files
        ii  wimtools           1.5.0-1~webupd8~precise  Tools to extract, create, modify, and mount WIM files
        ii  wireless-tools     30~pre9-5ubuntu2         Tools for manipulating Linux Wireless Extensions
        ii  wpasupplicant      0.7.3-6ubuntu2.1         client support for WPA and WPA2 (IEEE 802.11i)
        ii  x11-common         1:7.6+12ubuntu2          X Window System (X.Org) infrastructure
        ii  x11-utils          7.6+4ubuntu0.1           X11 utilities
        ii  xauth              1:1.0.6-1                X authentication utility
        ii  xbitmaps           1.1.1-1                  Base X bitmaps
        ii  xclip              0.12-1                   command line interface to X selections
        ii  xfonts-encodings   1:1.0.4-1ubuntu1         Encodings for X.Org fonts
        ii  xfonts-utils       1:7.6+1                  X Window System font utility programs
        ii  xkb-data           2.5-1ubuntu1.3           X Keyboard Extension (XKB) configuration data
        ii  xml-core           0.13                     XML infrastructure and XML catalog file support
        rc  xpdf               3.02-21build1            Portable Document Format (PDF) reader
        ii  xterm              271-1ubuntu2.1           X terminal emulator
        ii  xz-lzma            5.1.1alpha+20110809-3    XZ-format compression utilities - compatibility commands
        ii  xz-utils           5.1.1alpha+20110809-3    XZ-format compression utilities
        ii  zabbix-agent       1:1.8.11-1               network monitoring solution - agent
        ii  zlib1g             1:1.2.3.4.dfsg-3ubuntu4  compression library - runtime
        ii  zlib1g-dev         1:1.2.3.4.dfsg-3ubuntu4  compression library - development
        ii  zsh                4.3.17-1ubuntu1          shell with lots of features

    The third column is the version, but the packages seem to follow totally different naming conventions. Here are the major questions:

    1. Why do some versions have "ubuntu" in them and some not?
    2. What do the special characters -, ~ and + mean?
    3. What are "alpha", "build" and "dfsg"? Can I just use them casually?
    4. vim and other packages have "2:" in front; what does that mean?
    5. How does version comparison work, given that the schemes can be so different?

    Can anyone please explain this to me, or tell me where I can find an official document? Thanks in advance.
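
    On the comparison question, dpkg can be asked directly; its ordering rules are: the epoch before the ":" wins first, then the upstream part, then the Debian revision after the last "-", with "~" sorting before everything, even the empty string (which is why 1.0~rc1 precedes 1.0):

        # epochs dominate everything after the colon: 2:1.0 is newer than 1:9.9
        dpkg --compare-versions "2:1.0-1" gt "1:9.9-1" && echo newer
        # tilde sorts low: pre-releases precede the final version
        dpkg --compare-versions "1.0~rc1" lt "1.0" && echo older

    The full rules live in the "Version" section of Debian Policy; suffixes like "ubuntu2.1", "dfsg" and "~webupd8~precise" are just conventions embedded in those fields (distributor revisions, repackaged-for-DFSG sources, third-party PPA builds) rather than separate syntax.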

    Read the article

  • Email hosting on a home Windows Server 2003 machine

    - by klay
    Hi guys, I am new to server management. I have a static IP address, and I recently bought a domain name, which I configured to point at my IP address. I am running Windows Server 2003 Standard. What are the steps to host my own email addresses? Do I need to buy anything else, or is what I have (static IP address, domain name, Windows Server 2003, Exchange Server 2003) enough? Thanks, guys.
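
    The pieces listed are enough in principle; the one thing the domain still needs is mail-specific DNS, since an A record alone doesn't tell other mail servers where to deliver. A sketch in zone-file notation, with example.com and 203.0.113.10 standing in for the real domain and static IP:

        mail.example.com.    IN A      203.0.113.10       ; the static IP where Exchange listens
        example.com.         IN MX 10  mail.example.com.  ; deliver mail for the domain here

    Beyond that, port 25 must be reachable from outside (some residential ISPs block it), and a matching reverse-DNS (PTR) entry from the ISP makes a big difference to whether other servers accept the outgoing mail.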

    Read the article

  • Facebook, Twitter, Yahoo don't work. CDN problem. Akamai?

    - by Toktik
    Some sites don't work normally: they open without CSS or images, and with JavaScript errors. Facebook gets stuck on static.ak.fbcdn.net, Twitter on a1.twimg.com, and Yahoo on l.yimg.com. In Firefox I'm left with "Waiting for..." (any of those hosts). I can access Facebook only over SSL, e.g. https://facebook.com. When I ping those hosts, I only get "Request timed out". Update: when I ping static.ak.fbcdn.net it resolves to a749.g.akamai.net, and when I ping that server I also get "Request timed out".
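
    All three stuck names are CDN hosts (the Facebook and Yahoo ones resolve through Akamai), so a first test worth running (plain Windows tools; nothing assumed beyond the hostnames in the question) is whether a different resolver hands back a reachable edge node:

        :: via the current resolver
        nslookup static.ak.fbcdn.net
        :: via Google DNS: does it hand back different edge IPs?
        nslookup static.ak.fbcdn.net 8.8.8.8
        :: where along the path do packets stop?
        tracert a749.g.akamai.net

    Note that many CDN edges drop ICMP on purpose, so "Request timed out" on ping alone proves little; the interesting signals are whether port 80 to those IPs works, and whether switching resolvers changes which edge you are sent to. HTTPS working while plain HTTP hangs would also fit an ISP-side proxy or filter mangling HTTP to the CDN ranges.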

    Read the article

  • Event ID 17890 (A significant part... paged out.) with SQL Server 2008

    - by Godeke
    I have a machine that has SQL Server 2008 Standard installed. Periodically (about once an hour) I am getting Event ID 17890 several times in a row. An example:

        6:28:54  A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 0 seconds. Working set (KB): 10652, committed (KB): 628428, memory utilization: 1%.
        6:34:27  A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 332 seconds. Working set (KB): 169780, committed (KB): 546124, memory utilization: 31%.
        6:38:55  A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 600 seconds. Working set (KB): 245068, committed (KB): 546124, memory utilization: 44%.

    This pattern repeated at 7:26-7:37, 8:26-8:36, 9:24-9:35 and so on, with the same increasing working-set and memory-utilization pattern. I don't have any (known) background tasks running at this time; backups run at 2:00. It subsided from 11:00 at night until it resumed at 4:00 in the morning, and the intermittent 10-minute glitch periods have continued since.

    As this server has plenty of RAM (the commit charge has peaked at 2,871,564 of 4,194,012 physical), I disabled the paging files after reading several items I dug up searching Google, none of which changed the situation. The pattern I have documented is from after removing the paging files, so I'm not even sure where the SQL process's memory could be paging to. I also changed the SQL process memory to have a minimum of 500MB and a maximum of 2GB of RAM (this is a light-duty database server serving only a small workgroup).

    Has anyone encountered this? Prior to disabling the page files, this error would cause 5 minutes of disk thrashing that blocked access to the databases, files, IIS webs and so on. Since disabling the page files it just logs strange things, but at least I'm not seeing a performance drop. Any suggestions would be welcome.
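
    For reference, the memory cap described above can be set (or verified) in T-SQL; a sketch mirroring the 500MB/2GB values already chosen:

        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'min server memory (MB)', 500;  RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 2048; RECONFIGURE;

    The other standard lever for 17890 is granting the SQL Server service account the "Lock pages in memory" user right (gpedit.msc, Local Policies > User Rights Assignment), which stops Windows from paging the buffer pool out wholesale; note that 2008 Standard edition only honors this right from certain cumulative updates onward (with a trace flag), so that caveat is worth checking against the installed build.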

    Read the article

  • Cobbler 2.2.2 problems

    - by Peter
    I have set up a dedicated LAN for Cobbler tests. My setup: the Cobbler server runs openSUSE 12.3 with cobbler 2.2.2 (from the openSUSE repos); the imported distros are CentOS 6.5, Red Hat 6.5, Red Hat 7.0 and openSUSE 13.1; and the target machines are VMs in a Windows 7 VirtualBox. System provisioning works OK, but I have two problems.

    The first is that cobbler does not honor the "pxe_just_once: 1" setting: when the setup of the target OS is finished, after the reboot the target system continues to PXE boot! The second problem is that the target server is not configured correctly. See my setup:

        cobbler system report --name=test
        Name                            : test
        TFTP Boot Files                 : {}
        Comment                         :
        Fetchable Files                 : {}
        Gateway                         : 192.168.0.1
        Hostname                        : testcob1.example.com
        Image                           :
        IPv6 Autoconfiguration          : False
        IPv6 Default Device             :
        Kernel Options                  : {}
        Kernel Options (Post Install)   : {}
        Kickstart                       : <<inherit>>
        Kickstart Metadata              : {}
        LDAP Enabled                    : False
        LDAP Management Type            : authconfig
        Management Classes              : []
        Management Parameters           : <<inherit>>
        Monit Enabled                   : False
        Name Servers                    : ['192.168.0.1', '8.8.8.8']
        Name Servers Search Path        : []
        Netboot Enabled                 : False
        Owners                          : ['admin']
        Power Management Address        :
        Power ID                        :
        Power Password                  :
        Power Management Type           : ipmitool
        Power Username                  :
        Profile                         : RHEL-6.5-x86_64
        Proxy                           : <<inherit>>
        Red Hat Management Key          : <<inherit>>
        Red Hat Management Server       : <<inherit>>
        Repos Enabled                   : False
        Server Override                 : <<inherit>>
        Status                          : testing
        Template Files                  : {}
        Virt Auto Boot                  : <<inherit>>
        Virt CPUs                       : <<inherit>>
        Virt Disk Driver Type           : <<inherit>>
        Virt File Size(GB)              : <<inherit>>
        Virt Path                       : <<inherit>>
        Virt RAM (MB)                   : <<inherit>>
        Virt Type                       : <<inherit>>

        Interface =====                 : eth0
        Bonding Opts                    :
        Bridge Opts                     :
        DHCP Tag                        :
        DNS Name                        :
        Master Interface                :
        Interface Type                  :
        IP Address                      : 192.168.0.200
        IPv6 Address                    :
        IPv6 Default Gateway            :
        IPv6 MTU                        :
        IPv6 Secondaries                : []
        IPv6 Static Routes              : []
        MAC Address                     :
        Management Interface            : True
        MTU                             :
        Subnet Mask                     : 255.255.255.0
        Static                          : True
        Static Routes                   : []
        Virt Bridge                     :

    So, although I have set up the hostname and the network interface of the target system, after the setup the hostname is set to localhost.localdomain and eth0 is configured as DHCP, not static! How can I find the problem and fix it? Note that I have synced and restarted cobbler a couple of times, but the problems persist.
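
    On the static-network problem, one thing worth checking (an assumption, since the kickstart isn't shown): the per-system hostname and interface data only reach the installer if the kickstart template renders them. Cobbler's stock RHEL templates do that with the network_config snippet, and a template that hard-codes DHCP would produce exactly this symptom:

        # in the kickstart template the RHEL-6.5-x86_64 profile uses:
        # a hard-coded line like
        #   network --bootproto=dhcp
        # ignores the system record entirely; the stock approach is
        $SNIPPET('network_config')

    On the PXE loop, "Netboot Enabled : False" in the report suggests pxe_just_once did flip the flag, which normally makes the PXE menu chain to local boot; if the VM reinstalls instead, the default pxelinux entry and VirtualBox's boot order are the places to look.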

    Read the article

  • routing through multiple subinterfaces in debian

    - by Kstro21
    My question is as simple as the title. I have a Debian 6 box with 2 NICs and 3 different subnets on a single interface, just like this:

        auto eth0
        iface eth0 inet static
            address 192.168.106.254
            netmask 255.255.255.0

        auto eth0:0
        iface eth0:0 inet static
            address 172.19.221.81
            netmask 255.255.255.248

        auto eth0:1
        iface eth0:1 inet static
            address 192.168.254.1
            netmask 255.255.255.248

        auto eth1
        iface eth1 inet static
            address 172.19.216.3
            netmask 255.255.255.0
            gateway 172.19.216.13

    eth0 is connected to a switch with 3 different VLANs; eth1 is connected to a router. There are no iptables DROP rules, so all traffic is allowed. Now, passing traffic through eth0 is OK, and passing traffic through eth0:0 is OK, but passing traffic through eth0:1 is not working. I can ping the IP address of that subinterface from a PC where this IP is the default gateway, but I can't get to servers in the subnet of the eth1 interface. The traffic is not passing; even when I set iptables to log all the traffic in the FORWARD chain I can see the traffic there, but it is not really passing.

    And the funny thing is, I can do anything the other way around: from eth1 to eth0:1, RDP, telnet, ping, etc. all work. Doing some work with iptables, I managed to pass some traffic from eth0:1 to eth1; the rules look like this:

        iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p tcp -m multiport --dports 25,110,5269 -j DNAT --to-destination 172.19.216.1
        iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p udp -m udp --dport 53 -j DNAT --to-destination 172.19.216.9
        iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p tcp -m tcp --dport 21 -j DNAT --to-destination 172.19.216.11
        iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -d 172.19.221.80/29 -j SNAT --to-source 172.19.221.81
        iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -d 192.168.254.0/29 -j SNAT --to-source 192.168.254.1
        iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -o eth0 -j SNAT --to-source 192.168.106.254

    Doing this works, but it is really a headache to have to map each port to a server; imagine if I move the service to another server. So now I have doubts: can Debian route through multiple subinterfaces? Is there a limit? If not, what am I doing wrong, when the same setup with other subnets works OK? Without the iptables rules in the nat table it doesn't work. Thanks, and I hope for good comments/answers.
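
    A sketch of one likely cause, given that eth1-to-eth0:1 works but the reverse needs NAT: the servers in 172.19.216.0/24 use 172.19.216.13 as their default gateway, and if that router has no route for 192.168.254.0/29 pointing back at 172.19.216.3, their replies never return. That would explain exactly why the DNAT/SNAT rules "fix" it, since SNAT rewrites everything into addresses the servers already know how to reach. If the router can take a static route, the missing piece would be something like:

        # on the 172.19.216.13 router (Linux syntax; use the equivalent static route on other platforms)
        ip route add 192.168.254.0/29 via 172.19.216.3

    Debian itself has no special limit on the number of subinterfaces it will route between, so once the return path exists the whole DNAT/SNAT layer should become unnecessary.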

    Read the article

  • freebsd-update from 8.3-RELEASE to 9.0-RELEASE: How to deal with dozens of diffs?

    - by Stefan Lasiewski
    I am upgrading a FreeBSD 8.3-RELEASE system to FreeBSD 9.0-RELEASE using freebsd-update. This is my first time performing a major version upgrade in FreeBSD. At one point in the process, freebsd-update performs a diff on files which differ from what 9.0-RELEASE expects, comparing the current version on the system with the new changes added in 9.0-RELEASE. There are dozens of files in the list, so I am presented with dozens and dozens of diffs, each opening in a vi window and looking like this:

        The following file could not be merged automatically: /etc/ntp.conf
        Press Enter to edit this file in vi and resolve the conflicts manually...

        ### vi window opens
        <<<<<<< current version
        driftfile /etc/ntp/drift
        =======
        #
        # $FreeBSD: release/9.0.0/etc/ntp.conf 195652 2009-07-13 05:51:33Z dwmalone $
        #
        # Default NTP servers for the FreeBSD operating system.
        #
        # Don't forget to enable ntpd in /etc/rc.conf with:
        # ntpd_enable="YES"
        #
        # The driftfile is by default /var/db/ntpd.drift, check
        # /etc/defaults/rc.conf on how to change the location.
        #
        >>>>>>> 9.0-RELEASE
        restrict default notrust nomodify ignore

    And so on. This requires that I manually edit each file and remove strings like <<<<<<< current version, >>>>>>> 9.0-RELEASE, and =======. As I discovered afterwards, if I don't remove these strings, they end up in the resulting file. There are dozens of files which differ between 8.3 and 9.0, and I have a dozen local modifications myself. It appears that freebsd-update is using a diff, sdiff or mergemaster function of some sort, but I can't tell exactly what it is doing. Processing these files is tedious. Is there a way that I can just say "accept the new version", "keep the old version", or "your merge is correct"? There has got to be an easier way to deal with these files. I must be missing something. This isn't a huge problem for one machine, but eventually I'll be doing this dozens of times and I want to find an easier way.
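
    One knob that may help (hedged: confirm against freebsd-update.conf(5) on the system before relying on it): freebsd-update only attempts interactive merges for paths listed under the MergeChanges directive in /etc/freebsd-update.conf. Trimming that list makes the upgrade install the new release's files outright for everything else, so only genuinely hand-edited configs need a merge:

        # /etc/freebsd-update.conf -- the stock file carries roughly:
        MergeChanges /etc/ /var/named/etc/
        # narrowing it (or commenting it out entirely) skips the vi sessions;
        # local edits must then be re-applied from a backup afterwards

    The conflict markers themselves are the standard merge(1)/RCS style, so pairing this with a diff of /etc against a pre-upgrade backup is the usual way to keep a dozen local modifications survivable across many machines.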

    Read the article

  • Deployment/provisioning tool for commercial applications (not developed in-house)

    - by mfinni
    I help manage a few hosted commercial applications, and we have a lot of manual processes involved when doing new customer-instance deployments into the shared (multitenant) environment. Allow me to describe the most relevant features, and then we can talk about the tools.

    We have an application on AIX that requires dozens of changes to config files (some plain text, some XML), as well as a good number of commands to be run on multiple servers: some to start the new instance, some to restart our shared authentication and reporting engines, and so on. The config changes follow templates, of course. The servers in question also depend on the initial conditions specified by the implementer/deployer: we may choose to deploy a given customer to our servers in Europe, or one set of servers may be active-active whereas a different set is active-passive. In short, there are a lot of complications.

    We have another application that runs on IIS 6 and SQL. The DBAs don't want any automation of the SQL components, and that's fine with me, but automating the IIS bit would be great. For a new customer instance, we make a filesystem copy of a template virtual-directory target named after the new customer, make a new AppPool to match, edit a VirDir template .xml file to replace the file paths and AppPool names with the new ones, and then create a new VirDir from the modified template XML, pointing at the new filesystem folder and app pool.

    For the first case, something like ControlTier or Chef might be good. For the second, the new(ish) Web Deploy from MS would probably do a good job. Has anyone used these tools or others to do something similar for applications? More of a nice-to-have than a fixed requirement: has anyone used anything that works on both platforms? I'm looking for something free, because the official word is that within a year we will have whatever HP has renamed the OpsWare suite, which should be able to do stuff like this.

    Edit: based on someone's suggestion I looked at CFEngine for the AIX application, but it doesn't seem to address my pain. The problem isn't keeping a given config synced across dozens of servers; we have rsync for that. The problem is that onboarding a new customer instance touches dozens of files, putting pieces of the same or similar information into them: some are new stanzas in existing files, some are new files, and some are new directories. This is a several-hours-long process that is also error-prone, because it's mostly done by hand. I guess I'm looking for config-file generation and management. I have built a small Perl script to do something similar for a much smaller case: it binds a CSV file into variables and then does a copy-and-search-and-replace on a set of template config files. I could probably do the same here.
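
    For the IIS 6 piece specifically, the manual VirDir/AppPool steps can in principle be scripted with the admin scripts that ship on Server 2003; a sketch with placeholder names and paths (worth testing on a scratch site first, since exact script locations and metabase paths vary by install):

        rem create the app pool, create the virtual directory, then bind the two
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs CREATE W3SVC/AppPools/CustomerX IIsApplicationPool
        iisvdir /create "Default Web Site" CustomerX D:\Sites\CustomerX
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/1/ROOT/CustomerX/AppPoolId "CustomerX"

    Fed from the same kind of CSV the Perl templating script uses, this turns the onboarding into one data row per customer, which is most of what the heavyweight tools would buy here anyway.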

    Read the article

  • UNIX questions to be answered??? [closed]

    - by Nits
    1. Create a tree structure named 'training' in which there are 3 subdirectories: 'level1', 'level2' and 'cep'. Each one is again further divided into 3; 'level1' is divided into 'sdp', 're' and 'se'. From the subdirectory 'se', how can one reach the home directory in one step, and also how does one navigate to the subdirectory 'sdp' in one step? Give the commands which do the above actions.
    2. How will you copy a directory structure dir1 to dir2 (with all the subdirectories)?
    3. How can you find out if you have the permission to send a message?
    4. Find the space occupied (in bytes) by the /home directory, including all its subdirectories.
    5. What is the command for printing the current time in 24-hour format?
    6. What is the command for printing the year, month, and date with a horizontal tab between the fields?
    7. Create the following files: chapa, chapb, chapc, chapd, chape, chapA, chapB, chapC, chapD, chapE, chap01, chap02, chap03, chap04, chap05, chap11, chap12, chap13, chap14, and chap15.
    8. With reference to question 7, what is the command for listing all files ending in small letters?
    9. With reference to question 7, what is the command for listing all files ending in capitals?
    10. With reference to question 7, what is the command for listing all files whose last but one character is 0?
    11. With reference to question 7, what is the command for listing all files which end in small letters but not 'a' and 'c'?
    12. In an organisation one wants to know how many programmers there are. The employee data is stored in a file called 'personnel', with one record per employee; every record has a field for designation. How can grep be used for this purpose?
    13. In the organisation mentioned in question 12, how can sed be used to print only the records of all employees who are programmers?
    14. In the organisation mentioned in question 12, how can sed be used to change the designation 'programmer' to 'software professional' everywhere in the 'personnel' file?
    15. Find out about the sleep command and start five jobs in the background, each one sleeping for 10 minutes.
    16. How do you get the status of all the processes running on the system, i.e. using what option?

    Read the article
