Search Results

Search found 5793 results on 232 pages for 'ftp sync'.


  • Address (url) forwarding with Vyatta

    - by Trikks
    A bit of a noob question, I suppose. I have a very basic network setup and need help setting up some address forwarding. All traffic enters via the eth0 interface (85.123.32.23), and the external DNS points every hostname at this IP as well. Now, how do I route the incoming requests to each internal box? The IPs are static, and I do not want to solve this by assigning tons of ports. In my wishful thinking, something like this would be nice :)

        set service nat rule 10 type destination
        set service nat rule 10 inbound-interface eth0
        set service nat rule 10 destination address ftp.myhost.com
        set service nat rule 10 inside-address address 192.168.100.20

    This way ALL traffic to the address ftp.myhost.com (arriving at eth0) would be routed to the internal IP 192.168.100.20. Is there anyone who could point me in the right direction? Maybe it's wrong to use NAT at all? Please help me! :)
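
    A hedged sketch of how destination NAT is usually expressed in the older Vyatta "service nat" syntax: the rule matches on an IP address and, optionally, a protocol/port rather than a hostname, so traffic for each internal box has to be told apart by port instead of by name. The port and internal address below are illustrative assumptions, not a verified fix.

        # sketch: forward inbound FTP (tcp/21) on eth0 to the internal FTP box
        # note: 'destination address' expects an IP/prefix; hostnames are not valid here
        set service nat rule 10 type destination
        set service nat rule 10 inbound-interface eth0
        set service nat rule 10 protocol tcp
        set service nat rule 10 destination port 21
        set service nat rule 10 inside-address address 192.168.100.20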

    Read the article

  • Centos IPTables configuration for external firewall

    - by user137974
    Current setup: a CentOS box acting as web, mail (Postfix/Dovecot) and FTP server plus gateway, with a public IP and a private IP (for the LAN gateway role). We are planning to put an external firewall box in front of it and move the server onto the LAN, and I could use guidance on configuring iptables. At the moment we are unable to receive mail, and outgoing mail sits in the Postfix queue and is only sent after a delay. The local IP of the server is 192.168.1.220.

        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT DROP

        # incoming HTTP/HTTPS
        iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT

        # outgoing HTTP
        iptables -A OUTPUT -o eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT

        # FTP
        iptables -A INPUT -p tcp -s 0/0 --sport 1024:65535 -d 192.168.1.220 --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp -s 192.168.1.220 --sport 21 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -p tcp -s 0/0 --sport 1024:65535 -d 192.168.1.220 --dport 1024:65535 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A OUTPUT -p tcp -s 192.168.1.220 --sport 1024:65535 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

        # SMTP
        iptables -A INPUT -p tcp -s 0/0 --sport 1024:65535 -d 192.168.1.220 --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp -s 192.168.1.220 --sport 25 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp -s 192.168.1.220 --sport 1024:65535 -d 0/0 --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -p tcp -s 0/0 --sport 25 -d 192.168.1.220 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

        # POP3
        iptables -A INPUT -p tcp -s 0/0 --sport 1024:65535 -d 192.168.1.220 --dport 110 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp -s 192.168.1.220 --sport 110 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
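
    For comparison, a much shorter stateful ruleset usually achieves the same thing. This is only a minimal sketch (it assumes eth0 is the only interface, an open OUTPUT policy, and that the listed ports are the only exposed services), not a drop-in replacement for the rules above:

        # default-deny inbound, allow replies to our own outbound traffic
        iptables -P INPUT DROP
        iptables -P OUTPUT ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # new connections to the services this box offers
        iptables -A INPUT -i eth0 -p tcp -m multiport --dports 21,25,80,110,443 -m state --state NEW -j ACCEPT
        # passive FTP data connections need the conntrack helper
        modprobe nf_conntrack_ftp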

    Read the article

  • VSFTP Users and Directories

    - by Mathew
    I'm stuck. I've been working all day trying to figure out what I'm doing wrong, and I've hit wall after wall. What I'm trying to do: set up FTP so that certain users have access only to their own directory, while higher-level users have access to all directories. What I've Googled so far: I started with one tutorial, but it didn't do what I needed. I then used another, but once I created one user it wouldn't let me create a second. Finally, I followed a third, but it wouldn't even let me create one user. I'm using Ubuntu 10. I can log in over FTP as the root user and it takes me to the home directory. If I try to log in as the user I created in the tutorial, it says:

        Status:   Connection established, waiting for welcome message...
        Response: 220 (vsFTPd 2.2.2)
        Command:  USER mathew
        Response: 331 Please specify the password.
        Command:  PASS ****
        Response: 530 Login incorrect.
        Error:    Critical error
        Error:    Could not connect to server
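
    A hedged vsftpd.conf sketch for the per-user jail part of this (local users locked to their home directory, with a small exception list for the higher-level accounts). The directives are standard vsftpd options, but treat the block as an assumption to adapt, not a known fix for the 530 error above.

        # /etc/vsftpd.conf (fragment)
        local_enable=YES
        write_enable=YES
        chroot_local_user=YES          # everyone is confined to their home dir...
        chroot_list_enable=YES         # ...except users listed in the file below
        chroot_list_file=/etc/vsftpd.chroot_list
        pam_service_name=vsftpd        # a 530 with a correct password often points at PAM or /etc/shells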

    Read the article

  • How can a CentOS 6 guest running in VirtualBox be configured as a LAMP server that can be accessed from the Windows host?

    - by jtt89
    I was able to connect CentOS 6 on VirtualBox to Windows (I can ping in both directions) using a Host-only Adapter (for the connection between the two) and a NAT Adapter (so the Linux guest can reach the Internet). I want to set up httpd, mysql and vsftpd servers and, in the end, easily reach httpd from a Windows-based browser and the FTP server with a Windows-based client as well. I would also like SSH access. I have a general idea of the steps involved, but there is also some configuration I am not sure about at this point. Let's say I follow these steps:

        yum install httpd
        yum install php php-pear php-mysql
        yum install mysql-server
        mysql_secure_installation
        yum install vsftpd
        yum install mod_ssl

    Technically I have everything installed, but what are the next steps I need to take (from the networking point of view, so to speak) to get it all working? I know I need to configure at least Apache and the FTP server, but I am not sure how it is going to work: where will I be uploading the sites (I know this can vary), how will I know what address to use in a browser if I want to go to website x, y or z on that installation, and so on. This sounds like I need some kind of DNS setup, and that is where I am stuck. If somebody could give me a general outline of the things that need to be done, that would be great (I have looked at a lot of websites and I know about /etc/sysconfig/network, httpd.conf (not too much about it on Apache's site), hostname, hostname -f etc., but it is hard to piece it all together at this point). I have also been looking at books, but they do not always reflect the setup I have (VirtualBox). Thank you.
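
    As a hedged sketch of the "networking point of view" part on CentOS 6: start the services, make them survive a reboot, and open the ports in the guest's own firewall; from Windows you would then browse to the IP of the guest's host-only adapter (whatever the guest reports for that interface, often something like 192.168.56.101). The port list is an assumption.

        # start services now and enable them at boot
        service httpd start   && chkconfig httpd on
        service mysqld start  && chkconfig mysqld on
        service vsftpd start  && chkconfig vsftpd on
        service sshd start    && chkconfig sshd on

        # open the ports in the CentOS 6 firewall and persist the rules
        iptables -I INPUT -p tcp -m multiport --dports 21,22,80,443 -j ACCEPT
        service iptables save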

    Read the article

  • Broken UAC, can't edit files/folders or change settings in user account

    - by Antoros
    It appears that UAC is broken: I can't move or delete some files/folders. When asked for administrative rights (and I say yes), it shows the progress bar and then nothing gets moved or deleted, with no error whatsoever. I opened Control Panel > User Accounts, and when I click on any option with the shield (administrative rights) the cursor changes to loading and then goes back to normal, without opening any menu or showing any error. I have already run sfc /scannow: no errors found. I have already used Microsoft's Fix It: the Recycle Bin was reported broken and repaired, but the error is still the same. I used the Microsoft Accounts troubleshooter, and these are the errors I got:
    - Problem with Microsoft account policy <- this is the problem (it didn't fix it)
    - Trust this PC <- a loop of redirections, can't get to "trust this PC" (only one with W8)
    - Problems with system registration <- I think this is because of the soft system reset
    - Some settings have sync turned off <- I never configured anything to sync
    - Root causes found and logs created <- I would like to know where the logs are saved...
    I had to use a ".reg" file to change the UAC setting to "never notify", thinking it would fix this. It didn't, though it did stop asking; I can still open a cmd with administrative rights, but I can't access the UAC settings. I enabled the built-in Administrator account (net user administrator /active:yes) and even with that account I couldn't change any settings. So there it is; I don't know what else to do at the moment (this PC broke with the 8.1 update and was restored to factory configuration, which broke several drivers and kept most of the registry entries; I can't find the cause of this problem). Other info: I tried to delete a file in a program folder and couldn't, so I downloaded Unlocker to check first whether it was a permissions issue, but no: it showed a message saying there wasn't any lock and asking if I would like to delete the file; I clicked yes and it did delete it. What puzzles me is that I can't do this without that tool, not even using the take-ownership feature. Edit: wow, on a PC that isn't crappy chkdsk is fast; it completed with no errors found :/

    Read the article

  • ProFTPD / PAM issues with new centos/virtualmin install

    - by iamthewit
    I just installed CentOS 5.4 on a Rackspace cloud server and installed Virtualmin, which all seemed to go fine. The only problem I have is that I cannot access the virtual servers' directories via FTP. I get the following from FileZilla:

        Status:   Connecting to 1.1.1.1:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220 FTP Server ready.
        Command:  USER username
        Response: 331 Password required for username.
        Command:  PASS ***************
        Response: 230 User username logged in.
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/" is current directory.
        Command:  TYPE I
        Response: 200 Type set to I
        Command:  PASV
        Response: 227 Entering Passive Mode (1,1,1,1,216,214)
        Command:  LIST
        Error:    Connection timed out
        Error:    Failed to retrieve directory listing

    and I get this in my /var/log/secure file:

        Sep 22 19:40:42 stickeeserver proftpd: pam_unix(proftpd:session): session opened for user username by (uid=0)
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - USER nastypasty: Login successful.
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - Preparing to chroot to directory '/home/username'
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - mod_delay/0.5: delaying for 728 usecs
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - error setting IPV6_V6ONLY: Protocol not available

    Any help would be greatly appreciated. I'm not totally new to Linux, but it's not my strongest subject. I do like to know exactly why problems occur and how exactly to fix them, so the more detail the better! Cheers.
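
    The login succeeds and the session only stalls after PASV, which usually points at passive-mode data connections being blocked. A hedged proftpd.conf sketch of the usual remedy: pin the passive port range (and open the same range in the firewall). The range below is an arbitrary assumption.

        # /etc/proftpd.conf (fragment)
        PassivePorts 49152 50000          # pick any free high range and allow it in iptables
        # if the server is behind NAT, also advertise the public address:
        # MasqueradeAddress 1.1.1.1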

    Read the article

  • Trying to configure DNS on a Godaddy Virtual Dedicated host, Mediatemple Domain Registration [closed]

    - by dclowd9901
    A client of mine purchased VD hosting with GoDaddy and a domain name with Mediatemple. I've never configured DNS from scratch, and I'm finding it very difficult to find any explanation of how to go about it. As of right now, Mediatemple is pointing to GoDaddy's ns1.domaincontrol.com and ns2.domaincontrol.com nameservers. The VD hosting on GoDaddy (via their Simple Control Panel) has an option to "Add a new domain", which walks you through a wizard that asks whether the domain has already been registered (yes), what it is (dclowd9901.com for this example), asks you to create a system username and password for it (with checkboxes for SSH and FTP access), which level of user can administer it, and whether a mail account should be set up. When complete, it also creates a zone file. In this zone file, the primary nameserver is ns1.dclowd9901.com; the records are as follows (where 12.23.12.34 is the presumed host):

        @     A     12.23.12.34
        @     NS    ns1
        @     NS    ns2
        ns1   A     12.23.12.34
        ns2   A     12.23.12.34
        @     MX    mail
        www   A     12.23.12.34
        ftp   A     12.23.12.34
        ssh   A     12.23.12.34
        mail  A     12.23.12.34

    If anyone can shed any light on this for me, and explain the interaction between the registrar and the host and so on, I'd be very grateful. Thanks in advance for the help.
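
    Since the domain is delegated to ns1/ns2.domaincontrol.com, the zone that actually answers queries is the one hosted on those nameservers, not the file the Simple Control Panel wizard generates on the VPS. A hedged sketch of the minimal record set to enter in that hosted zone, assuming 12.23.12.34 really is the server's public IP:

        @      A      12.23.12.34      ; web/FTP/SSH all resolve to the VPS
        www    CNAME  @
        ftp    CNAME  @
        mail   A      12.23.12.34
        @      MX 10  mail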

    Read the article

  • Finding a backup and synchronization solution

    - by Andrea Zilio
    I'm having difficulty finding a backup and synchronization solution with the following characteristics:
    - Cross-platform: Windows, Linux, Mac
    - Offsite backup (so Internet backup)
    - Data deduplication
    - Transfers only the new/modified bits of modified files
    - Secure: data encrypted before leaving the computer
    - Maintains multiple versions of files (even deleted files)
    - Folder synchronization integrated with backup and across multiple computers connected to the internet (not necessarily on the same LAN)

    I think the folder-sync feature needs a better explanation. The use case is this: you have a desktop PC and a laptop. The desktop PC contains a folder with some files, and this folder is part of the backup (so it was selected to be backed up). The laptop does not contain that folder or those files at all. Then you're abroad with your laptop and you need that folder. So you want to be able to open the backup program, select that folder from the backup and download it to your laptop, keeping it synchronized with the backed-up version. When you then come back home and switch on your desktop PC, you want the folder we're talking about to be updated on the desktop PC too. Does anyone know of a service with all these features? I've only found SpiderOak to support everything I've mentioned, but I'm not completely satisfied with the time taken to complete a backup. Sometimes it seems to hang for minutes for no reason at all, and folder synchronization occurs only after all files are backed up (instead, folder sync should have a separate queue independent of other backup operations, and synchronization should occur frequently, for example every 5 minutes or less, independently of the frequency of normal backup operations).

    Read the article

  • Proftpd on Debian ignoring umask setting

    - by sodan
    I have since found a solution to my problem. This is what I did: I added the following to my /etc/proftpd/proftpd.conf:

        <Limit SITE_CHMOD>
          DenyAll
        </Limit>

    The original problem: when I upload files to my FTP server, the umask I set is totally ignored and all files end up with permissions 644. I use Debian 5.0.3 as the operating system and proftpd 1.3.1 as the FTP server. The user logging in is called mug and he is a local user (not a virtual user), chrooted to the home directory /home/mug/. I tried the following things:

    1. Set the umask in /etc/proftpd/proftpd.conf:

        Umask 000 000

    This should result in 777 for directories and 666 for files, since the directory umask is applied to 777 and the file umask is applied to 666. After that I of course restarted proftpd to be sure the config was reloaded.

    2. Set the umask for the user in /home/mug/.bashrc. I added the following to the user's .bashrc:

        umask 0000

    After that I reloaded the .bashrc (source /home/mug/.bashrc) and also checked the umask for the user by switching to him and running:

        su mug
        umask

    The result was a umask of 0000, so this part worked. But all my uploaded files still end up with 644 permissions :( What am I doing wrong?

    Read the article

  • Time not propagating to machines on Windows domain

    - by rbeier
    We have a two-domain Active Directory forest: ourcompany.com at the root, and prod.ourcompany.com for production servers. Time is propagating properly through the root domain, but servers in the child domain are unable to sync via NTP, so the time on these servers is starting to drift, since they're relying only on the hardware clock. When I type "net time" on one of the production servers, I get the following error:

        Could not locate a time-server.
        More help is available by typing NET HELPMSG 3912.

    When I type "w32tm /resync", I get the following:

        Sending resync command to local computer
        The computer did not resync because no time data was available.

    "w32tm /query /source" shows the following:

        Free-running System Clock

    We have three domain controllers in the prod.ourcompany.com subdomain (overkill, but the result of a migration; we haven't gotten rid of one of the old ones yet). To complicate matters, the domain controllers are all virtualized, running on two different physical hosts. But the time on the domain controllers themselves is accurate; the servers that aren't DCs are the ones having problems. Two of the DCs are running Server 2003, including the PDC emulator. The third DC is running Server 2008. (I could move the PDC emulator role to the 2008 machine if that would help.) The non-DC servers are all running Server 2008. All other Active Directory functionality works fine in the production domain; we're only seeing problems with NTP. I can manually sync each machine to the time source (the PDC emulator) by doing the following:

        net time \\dc1.prod.ourcompany.com /set /y

    But this is just a one-off, and it doesn't make automated time syncing start working. I guess I could create a scheduled task that runs the above command periodically, but I'm hoping there's a better way. Does anyone have any ideas as to why this isn't working, and what we can do to fix it? Thanks for your help, Richard
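
    A hedged sketch of the usual way to point a domain member back at the domain hierarchy with w32tm (rather than scripting net time); it assumes the Windows Time service configuration is the only thing wrong, which the question doesn't establish. Run on one of the affected non-DC servers:

        w32tm /config /syncfromflags:domhier /update
        net stop w32time
        net start w32time
        w32tm /resync /rediscover
        rem afterwards, "w32tm /query /source" should name a DC instead of "Free-running System Clock"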

    Read the article

  • Dell PE2950 - slow IO rates for writing and reading locally

    - by OrenM
    I'm having a serious issue with a Dell PE2950 server. The server has really slow IO rates, so slow that I'm not able to use it anymore. I tried a few things to solve this: changing the disks to new disks (configured as RAID 1), changing the PERC card and PERC cables, reinstalling the OS (had to anyway because of the disk change; CentOS 5.5 x64), and firmware updates to everything. Virtual disk policy: No Read Ahead, Write Back, disk cache policy disabled. OpenManage doesn't alert about anything; I also ran Dell's diagnostic tests and everything passed, and Dell didn't see anything in the DSET log. Dell suggested reseating everything, including the CPU; we did that as well, and the IO rates are still slow. I have several PE2950 servers and have never seen this on any of the others. All have similar or identical hardware to this one, all configured the same, with the same OS (CentOS 5.5 x64), same disks, same RAID, same policy. Just for comparison, the problematic PE2950 server:

        [root@bad ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
        200000+0 records in
        200000+0 records out
        1638400000 bytes (1.6 GB) copied, 27.7946 seconds, 58.9 MB/s

        real    0m33.968s
        user    0m0.531s
        sys     0m26.000s

    A good PE2950 server (with the exact same hardware):

        [root@good ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
        200000+0 records in
        200000+0 records out
        1638400000 bytes (1.6 GB) copied, 3.19999 seconds, 512 MB/s

        real    0m7.694s
        user    0m0.053s
        sys     0m4.057s

    Hopefully you will have an idea of what could cause the problem.
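
    Since OpenManage is installed, one hedged check worth adding to the good/bad comparison is the controller's battery and effective cache policy: a PERC that silently falls back to Write-Through (for example because the BBU is degraded) produces exactly this kind of order-of-magnitude write slowdown. The controller index below is an assumption.

        omreport storage battery controller=0
        omreport storage vdisk controller=0     # compare the current Write Policy on the good and bad box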

    Read the article

  • Options for synchronizing Palm Desktop calendars

    - by Al Everett
    My wife and I each have Palm Centro phones and very full calendars. We've been using Palm products for years, are happy with them, and are not looking to switch (we don't have the budget for it even if we wanted to). What is a viable way to synchronize our calendars? Back when we had Palm Z22s we used AirSet, and it worked great for synchronizing our desktop calendars. Unfortunately, their sync software does not support Palm Desktop v6 and there is nothing in the pipeline to support it (the third-party vendor is apparently not interested in updating for the newer Palm Desktop). I would love to get back to having each other's appointments appear on the other's calendar. What can we do? Some limitations:
    - A data plan is not an option (so this precludes over-the-air synchronization)
    - No Microsoft Outlook
    Edit: Syncing my Palm to my Palm Desktop is not the issue. Being able to sync, in some way, the Palm calendar databases of my wife's and my calendars is what's desired.

    Read the article

  • Opening and Testing Ports on Modem > Router Connection

    - by JakeTheSnake
    Working off my last question: I can access my server's FTP over the LAN but not over the internet. I'm using FileZilla on port 666. My router/modem configuration is as follows (similar to the other post):
    1) Modem connects to the WAN
    2) WAN port on the modem connects to a LAN port on the router
    3) Modem internal IP address is 192.168.0.254
    4) Router internal IP address is 192.168.0.1
    5) Modem has DHCP turned OFF
    6) Router has DHCP turned ON
    7) Router is running Tomato firmware and is set as 'Router' (not 'Gateway')
    8) The internet is working (just had to say that)
    I've set up port forwarding both on the modem and the router; both route port 666 (TCP) to 192.168.0.3, which is the IP address of the server running FileZilla. I don't know if that's hindering anything, but I've also tried it with just the modem and just the router, with the same result. I've also tried setting the server as the DMZ host (both on the router and the modem). Neither the router nor the modem has anything in its logs about denying inbound traffic on port 666, so my ability to troubleshoot stops there. I've tried contacting my ISP (Telus, on a mobility plan; it's a "Smart" Hub), but they weren't much help. They said they only block ports 25 and 80 and maybe a few others, but not most ports. I test whether or not the port is open by going to canyouseeme.org; I don't know whether that would produce a 'connection refused' result just because the FTP server requires a login, as I'm not well versed in this. FWIW, sometimes I get a 'connection refused' error on canyouseeme.org, but mostly it's 'connection timed out'. I don't know what else to do at this point.
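
    A hedged first check from the server itself (assuming it runs Windows, since FileZilla Server is mentioned): confirm something is actually listening on port 666 and note which address it is bound to, before blaming the forwarding chain.

        netstat -ano | findstr :666
        rem a LISTENING line bound to 0.0.0.0:666 (or 192.168.0.3:666) means the server side is fine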

    Read the article

  • rsync not copying hard links

    - by A.Ellett
    I have two computers (both MacBook Airs) for which I sync one directory tree on both, but not the entire hard drive or any other directories. Let's say on computer A the directory is /Users/aellett/projects and on computer B it is /Users/bellett/projects. Generally, I'll log into computer B and then remotely connect to computer A as user 'aellett'. As super user I sync the two project directories as follows:

        rsync -av /Volumes/aellett/projects/ /Users/bellett/projects/

    and this works as expected. On both computers I have another file, letter.txt, in a different directory which is not getting synced. Let's say on computer A the file is found in /Users/aellett/letters and on computer B it is found in /Users/bellett/correspondence. Generally, I don't want to share what's not included in /Users/<username>/projects, but I do want to share this particular file. So on both computers I made a correspondence directory inside projects and then made hard links as follows.

    On computer A:

        ln /Users/aellett/letters/letter.txt /Users/aellett/projects/correspondence/letter.txt

    On computer B:

        ln /Users/bellett/correspondence/letter.txt /Users/bellett/projects/correspondence/letter.txt

    The next time I synced the two computers I did the following:

        rsync -av -H /Volumes/aellett/projects/ /Users/bellett/projects/

    When I checked on computer B, /Users/bellett/projects/correspondence/letter.txt was correctly synced. But the hard link to /Users/bellett/correspondence/letter.txt was no longer there. In other words, /Users/bellett/projects/correspondence/letter.txt was identical to /Users/aellett/projects/correspondence/letter.txt, but it differed from /Users/bellett/correspondence/letter.txt. Since these two files were hard linked on both computers, I expected them to still be hard linked. Why are my hard links not being preserved?
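
    For what it's worth, rsync's -H only recreates hard links between paths that are both inside the transferred tree; a link partner outside /projects is invisible to it, and when rsync replaces a changed file it writes a new file and renames it into place, which leaves the old local link pointing at the old inode. A hedged way to see this on computer B is to compare inode numbers before and after a sync:

        # identical first column (inode number) = still hard linked; different = the link was broken
        ls -li /Users/bellett/correspondence/letter.txt /Users/bellett/projects/correspondence/letter.txt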

    Read the article

  • ownCloud WebDAV interface seems to be broken

    - by Nobleleader13245
    I've been trying to host ownCloud on my server, but every time I try it tells me this: "Your web server is not yet properly setup to allow files synchronization because the WebDAV interface seems to be broken. Please double check the installation guides." This is my setup:
    - Windows Server 2012 R2
    - IIS 8.5
    - PHP 5.5.11
    - ownCloud 6.0.3
    - MySQL 5.6.17
    I tried googling the error but I can't seem to find anything useful. Some say I should check whether this works: https://cloud.mcsoftworks.net/remote.php/webdav/ and yes, I can navigate to this folder and open files from there. The calendar works and I can also just upload files from https://cloud.mcsoftworks.net/; the only thing that doesn't work is the sync client. The sync client doesn't say anything, it just doesn't connect (screenshot: http://prntscr.com/3p2apz). This is the error log:

        Warning  core    isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception)    2014-06-02T19:56:00+00:00
        Warning  core    isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception)    2014-06-02T19:55:47+00:00
        Warning  core    isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception)    2014-06-02T19:55:34+00:00
        Warning  core    isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception)    2014-06-02T19:55:34+00:00
        Fatal    webdav  Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected    2014-06-02T19:54:37+00:00
        Fatal    webdav  Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected    2014-06-02T19:54:36+00:00
        Fatal    webdav  Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected    2014-06-02T19:54:36+00:00
        Fatal    webdav  Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected    2014-06-02T19:54:36+00:00
        Warning  core    isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception)    2014-06-02T19:51:24+00:00

    This is my php.ini: http://pastebin.com/es3MB8Uh. Does anyone have any idea how I should get this to work? I've been trying for about 14 days now and it's starting to annoy me =P
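
    The recurring line in that log is the server failing to resolve its own public hostname (cURL error 6), so ownCloud's self-check never reaches WebDAV at all. A hedged workaround sketch, assuming the name resolves fine from outside but not from the 2012 R2 box itself: map it locally in the hosts file and re-run the check.

        rem C:\Windows\System32\drivers\etc\hosts  (edit as Administrator)
        127.0.0.1    cloud.mcsoftworks.net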

    Read the article

  • PHP/MySQL Performance Testing with Just PHP

    - by Mike Gifford
    I'm trying to diagnose a server where the website is loading very slowly, but unfortunately my client has only provided me with FTP access. With FTP access I can upload PHP scripts, but I can't set up any other server-side tools. I have access to phpMyAdmin, but not direct access to the MySQL server. It is also, unfortunately, a Windows server (and we've been a Linux shop for over a decade now). So, if I want to evaluate MySQL and disk-speed performance through PHP alone on a generic server, what is the best way to do this? There are already tools like https://github.com/raphaelm/php-benchmark and https://github.com/InfinitySoft/php-benchmark, but I'm surprised there isn't something that someone has already set up and configured to just run through some basic tests of a server's responsiveness. Every time we evaluate a new server environment it's handy to be able to compare it to an existing one quickly to see if there are any anomalies. I guess I'd just hoped that someone else had written up a script to do this already. I know I have, but that was before GitHub, when there was a handy place to post scraps of code like this. Originally posted at http://stackoverflow.com/questions/12321498/php-mysql-performance-testing-with-just-php, but it was recommended that I re-post it here.
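
    A minimal hedged sketch of the kind of script being asked for: a single PHP file you could drop on the server over FTP that times a disk write and a trivial MySQL round trip with microtime(). The credentials are placeholders and this is a rough probe rather than a real benchmark.

        <?php
        // crude disk-write probe: write ~8 MB next to the script and time it
        $payload = str_repeat('x', 8 * 1024 * 1024);
        $t = microtime(true);
        file_put_contents(__DIR__ . '/bench.tmp', $payload);
        printf("disk write: %.3f s\n", microtime(true) - $t);
        unlink(__DIR__ . '/bench.tmp');

        // crude MySQL probe: connect and run a trivial query a few hundred times
        $db = new mysqli('localhost', 'DB_USER', 'DB_PASS', 'DB_NAME');   // placeholders
        $t = microtime(true);
        for ($i = 0; $i < 500; $i++) {
            $db->query('SELECT 1');
        }
        printf("500 queries: %.3f s\n", microtime(true) - $t);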

    Read the article

  • Refresh file access time under Linux / Discard disk read cache

    - by calandoa
    I am using access times to analyse a build process, but it is not working the way I want: the access time is updated the first time I read the file, then it stays the same for a long while, or until the next reboot. For instance:

        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 10:03 some_file
        $ grep abcdef some_file
        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 11:24 some_file
        # The access time is updated

        # waiting a few minutes...
        $ grep abcdef some_file
        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 11:24 some_file
        # The access time has not been updated :(

    I suppose the file is buffered by Linux in free memory, and only this cached copy is accessed on subsequent reads, for speed. A solution would be to discard the buffers in memory. After searching some forums, I found:

        sync
        echo 1 > /proc/sys/vm/drop_caches
        echo 2 > /proc/sys/vm/drop_caches
        echo 3 > /proc/sys/vm/drop_caches

    But it is not working; it seems to only flush the write buffers, not the read ones. Maybe it is due to some custom kernel configuration in my distro (Fedora 9)? Or am I missing something here? Is there a way to force this access-time refresh? Note also that I do not want to simulate writes across my entire file tree: because I am using a makefile-based build system, that would cause the entire project to be rebuilt.
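
    One hedged thing to rule out before blaming the page cache: distros of that era commonly mount filesystems with the relatime (or noatime) option, which deliberately skips most atime updates regardless of what is cached. Checking the mount flags is a one-liner; the mount point below is an assumption, and remounting with plain atime semantics would confirm or rule this out.

        # look for 'relatime' or 'noatime' in the options of the filesystem holding the files
        grep ' / ' /proc/mounts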

    Read the article

  • Rsync: General file/folder synchronization

    - by Rey Leonard Amorato
    I have a file server which is in charge of pulling a folder tree from multiple workstations on a daily basis. My current method for this is rsync, which works pretty well provided directory names and/or files remain the same. However, when files are renamed or moved about within subdir1, rsync copies them over to the server again, creating duplicates. I then have to manually find and delete the extraneous files/folders that were left on the server by previous syncs. Note that I cannot use rsync's --delete flag, because a sync from any one workstation would then mirror that workstation's folder tree onto the server instead of merging them all. Visual diagram (one column per machine):

        Server:            Workstation1:      Workstation2:      Workstation(n):
        Folder*            Folder*            Folder*            Folder*
         -subdir1           -subdir1           -subdir1           -subdir(n)
          -file1             -file1             -file2             -file(n)
          -file2             -file(n)

    Is there a simple script (preferably in bash, nothing fancy) that can delete the extraneous files/folders when a file is renamed or moved to a different subdir? Is there a different program, much like rsync, that can accomplish this task autonomously and more simply? I have looked at unison, but I did not like the fact that it keeps a local database of the syncing info. Any tips at all as to how I am supposed to tackle this? Thank you in advance for your help. EDIT: I have tried unison recently and I can safely say it is out of the question now. Unison is a bi-directional synchronization tool, and from my testing it mirrors the files existing on the server back to all workstations. This is unwanted; preferably, I would want files/folders to stay within their respective workstations and just merge to the server, i.e. uni-directional sync, but with renames/moves propagated to the server. I might have to look into Git/Mercurial/Bazaar as mentioned by Kyle, but I am still unsure whether they are fit for the job.
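
    For reference, the daily pull described above presumably looks something like the sketch below (paths and hostnames are assumptions). The duplicates come from running it without --delete; adding --delete would handle renames but would also turn each run into a full mirror of that single workstation, which is exactly the trade-off the question describes.

        # one pull per workstation, all merged into the same tree on the server
        rsync -av workstation1:/path/to/Folder/ /srv/Folder/
        rsync -av workstation2:/path/to/Folder/ /srv/Folder/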

    Read the article

  • Allow access from outside network with dmz and iptables

    - by Ivan
    I'm having a problem with my home network. My setup is like this: on my router (Ubuntu desktop 11.04) I installed Squid as a transparent proxy. I would like to use DynDNS for my home network so I can reach my server from the internet, and I have also installed a CCTV camera that I would like to be able to watch from the internet. The problem is that I cannot access either from outside the network. I have already set the DMZ on my modem to my router's IP. My first guess is that this is because I'm using iptables to redirect the whole inside network through Squid and I'm not allowing traffic from outside into my inside network. Here is my iptables script:

        #!/bin/sh
        # squid server IP
        SQUID_SERVER="192.168.5.1"
        # Interface connected to Internet
        INTERNET="eth0"
        # Interface connected to LAN
        LAN_IN="eth1"
        # Squid port
        SQUID_PORT="3128"

        # Clean old firewall
        iptables -F
        iptables -X
        iptables -t nat -F
        iptables -t nat -X
        iptables -t mangle -F
        iptables -t mangle -X

        # Load IPTABLES modules for NAT and IP conntrack support
        modprobe ip_conntrack
        modprobe ip_conntrack_ftp
        # For win xp ftp client
        #modprobe ip_nat_ftp
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # Setting default filter policy
        iptables -P INPUT DROP
        iptables -P OUTPUT ACCEPT

        # Unlimited access to loop back
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT

        # Allow UDP, DNS and Passive FTP
        iptables -A INPUT -i $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT

        # set this system as a router for Rest of LAN
        iptables --table nat --append POSTROUTING --out-interface $INTERNET -j MASQUERADE
        iptables --append FORWARD --in-interface $LAN_IN -j ACCEPT

        # unlimited access to LAN
        iptables -A INPUT -i $LAN_IN -j ACCEPT
        iptables -A OUTPUT -o $LAN_IN -j ACCEPT

        # DNAT port 80 request comming from LAN systems to squid 3128 ($SQUID_PORT) aka transparent proxy
        iptables -t nat -A PREROUTING -i $LAN_IN -p tcp --dport 80 -j DNAT --to $SQUID_SERVER:$SQUID_PORT

        # if it is same system
        iptables -t nat -A PREROUTING -i $INTERNET -p tcp --dport 80 -j REDIRECT --to-port $SQUID_PORT

        # DROP everything and Log it
        iptables -A INPUT -j LOG
        iptables -A INPUT -j DROP

    If you can see where I went wrong, please advise me. Thanks for all your help, I really appreciate it.
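
    A hedged sketch of the piece this kind of script is missing when you want to expose an internal service from the WAN side: a PREROUTING DNAT rule for the published port plus a matching FORWARD accept. The camera's address and port here are pure assumptions for illustration, and the variables are the ones defined in the script above.

        # forward tcp/8080 arriving on the internet interface to the CCTV box on the LAN
        iptables -t nat -A PREROUTING -i $INTERNET -p tcp --dport 8080 -j DNAT --to-destination 192.168.5.50:8080
        iptables -A FORWARD -i $INTERNET -o $LAN_IN -p tcp -d 192.168.5.50 --dport 8080 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT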

    Read the article

  • Duplicity on a ReadyNAS

    - by Jason Swett
    Has anyone here run Duplicity on a ReadyNAS? I'm trying, but here's what I get:

        duplicity full --encrypt-key="ABC123" /home/jason/ scp://[email protected]//gob
        Invalid SSH password
        Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' failed (attempt #1)

    I've also found a post that says the "Invalid SSH password" message doesn't actually mean an invalid SSH password. That would make sense, because I'm not using an SSH password; I'm using a public key. I can ssh, ftp, sftp and rsync into my ReadyNAS just fine. (Actually, to be more accurate, I can get past authentication with ssh, ftp and sftp but I can't actually do anything past that. Regardless, that's enough to tell me that "Invalid SSH password" is bogus. Rsync works with no problems.) The post I found says the command will work as soon as the directory at the end of the scp URL exists, but I don't know how to check that. I know the share gob exists on my ReadyNAS and I know it's writable, because I'm writing to it with rsync. Also, here is the verbose output:

        Using archive dir: /home/jason/.cache/duplicity/3bdd353b29468311ffa8485160da6873
        Using backup name: 3bdd353b29468311ffa8485160da6873
        Import of duplicity.backends.rsyncbackend Succeeded
        Import of duplicity.backends.sshbackend Succeeded
        Import of duplicity.backends.localbackend Succeeded
        Import of duplicity.backends.botobackend Succeeded
        Import of duplicity.backends.cloudfilesbackend Succeeded
        Import of duplicity.backends.giobackend Succeeded
        Import of duplicity.backends.hsibackend Succeeded
        Import of duplicity.backends.imapbackend Succeeded
        Import of duplicity.backends.ftpbackend Succeeded
        Import of duplicity.backends.webdavbackend Succeeded
        Import of duplicity.backends.tahoebackend Succeeded
        Main action: full
        ================================================================================
        duplicity 0.6.10 (September 19, 2010)
        Args: /usr/bin/duplicity full --encrypt-key=ABC123 -v9 /home/jason/ scp://[email protected]//gob
        Linux gob 2.6.35-22-generic #33-Ubuntu SMP Sun Sep 19 20:34:50 UTC 2010 i686
        /usr/bin/python 2.6.6 (r266:84292, Sep 15 2010, 15:52:39) [GCC 4.4.5]
        ================================================================================
        Using temporary directory /tmp/duplicity-cridGi-tempdir
        Registering (mkstemp) temporary file /tmp/duplicity-cridGi-tempdir/mkstemp-ztuF5P-1
        Temp has 86334349312 available, backup will use approx 34078720.
        Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' (attempt #1)
        State = sftp, Before = '[email protected]'s'
        State = sftp, Before = ''
        Invalid SSH password
        Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' failed (attempt #1)

    Any ideas as to what's going wrong?
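
    A hedged variant worth trying: duplicity 0.6.x drives a real sftp process (as the log shows), and an unexpected prompt, banner, or missing key on that session tends to get reported as the misleading "Invalid SSH password". Forcing the sftp backend and passing the key path through explicitly at least removes one variable; the key path is an assumption.

        duplicity full --encrypt-key="ABC123" \
          --ssh-options="-oIdentityFile=/home/jason/.ssh/id_rsa" \
          /home/jason/ sftp://[email protected]//gob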

    Read the article

  • Diagnosing linux issues with ipod syncing in Ubuntu

    - by alexpotato
    Issue: I am currently using Ubuntu 9.10 with a 5th-generation 60 GB black iPod video. In general, Ubuntu always detects the iPod's USB storage and displays it on my desktop, but only some applications detect the iPod itself (e.g. Rhythmbox and gtkpod do, Banshee does not). I narrowed the Banshee issue down to a bug that requires Nautilus to be restarted (although it would be nice not to have to do this). Also, whenever I sync with these applications, everything appears to work during the sync, but when I disconnect the iPod and browse it, all of the songs seem to be there while the playlists are not. If I reconnect the iPod, Banshee in particular reports the space usage as "other". What I am looking for is some way to at least understand what is and is not working, or directions to somewhere that can help me learn what's going on. I have already tried:
    - IRC: either the channel is too general (e.g. #ubuntu) or no one is ever on (e.g. #banshee)
    - The web: most of what I've found is either too specific to one particular bug or too general
    Any thoughts?

    Read the article

  • configure Squid3 proxy server on Ubuntu with caching and logging

    - by Panshul
    I have an Ubuntu 11.10 machine with Squid3 installed. When I configure Squid with http_access allow all, everything works fine. My current configuration (mostly default) is as follows:

        2012/09/10 13:19:57| Processing Configuration File: /etc/squid3/squid.conf (depth 0)
        2012/09/10 13:19:57| Processing: acl manager proto cache_object
        2012/09/10 13:19:57| Processing: acl localhost src 127.0.0.1/32 ::1
        2012/09/10 13:19:57| Processing: acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        2012/09/10 13:19:57| Processing: acl SSL_ports port 443
        2012/09/10 13:19:57| Processing: acl Safe_ports port 80 # http
        2012/09/10 13:19:57| Processing: acl Safe_ports port 21 # ftp
        2012/09/10 13:19:57| Processing: acl Safe_ports port 443 # https
        2012/09/10 13:19:57| Processing: acl Safe_ports port 70 # gopher
        2012/09/10 13:19:57| Processing: acl Safe_ports port 210 # wais
        2012/09/10 13:19:57| Processing: acl Safe_ports port 1025-65535 # unregistered ports
        2012/09/10 13:19:57| Processing: acl Safe_ports port 280 # http-mgmt
        2012/09/10 13:19:57| Processing: acl Safe_ports port 488 # gss-http
        2012/09/10 13:19:57| Processing: acl Safe_ports port 591 # filemaker
        2012/09/10 13:19:57| Processing: acl Safe_ports port 777 # multiling http
        2012/09/10 13:19:57| Processing: acl CONNECT method CONNECT
        2012/09/10 13:19:57| Processing: http_access allow manager localhost
        2012/09/10 13:19:57| Processing: http_access deny manager
        2012/09/10 13:19:57| Processing: http_access deny !Safe_ports
        2012/09/10 13:19:57| Processing: http_access deny CONNECT !SSL_ports
        2012/09/10 13:19:57| Processing: http_access allow localhost
        2012/09/10 13:19:57| Processing: http_access deny all
        2012/09/10 13:19:57| Processing: http_port 3128
        2012/09/10 13:19:57| Processing: coredump_dir /var/spool/squid3
        2012/09/10 13:19:57| Processing: refresh_pattern ^ftp: 1440 20% 10080
        2012/09/10 13:19:57| Processing: refresh_pattern ^gopher: 1440 0% 1440
        2012/09/10 13:19:57| Processing: refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        2012/09/10 13:19:57| Processing: refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
        2012/09/10 13:19:57| Processing: refresh_pattern . 0 20% 4320
        2012/09/10 13:19:57| Processing: http_access allow all
        2012/09/10 13:19:57| Processing: cache_mem 512 MB
        2012/09/10 13:19:57| Processing: logformat squid3 %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru
        2012/09/10 13:19:57| Processing: access_log /home/panshul/squidCache/log/access.log squid3

    The problem starts when I enable the following line:

        access_log /home/panshul/squidCache/log/access.log

    I then start getting a "proxy server is refusing connections" error in the browser; when I comment that line out again, things go back to normal. The second problem starts when I add the following line to my config:

        cache_dir ufs /home/panshul/squidCache/cache 100 16 256

    With that, the Squid server fails to start at all. Any suggestions as to what I am missing in the config? Please help!
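
    A hedged guess at a common cause for both symptoms: on Ubuntu squid3 runs as an unprivileged user (proxy) that usually cannot create or write files under another user's home directory, so a log or cache_dir placed there makes it die at startup. A sketch of the usual remedy, with the paths taken from the question but the ownership assumption unverified:

        sudo mkdir -p /home/panshul/squidCache/log /home/panshul/squidCache/cache
        sudo chown -R proxy:proxy /home/panshul/squidCache
        sudo squid3 -z            # build the cache_dir structure
        sudo service squid3 restart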

    Read the article

  • Exchange 2003 ActiveSync problem

    - by colemanm
    We're having problems getting iPhones to sync properly with SBS 2003 Exchange. When you add a new Exchange ActiveSync account on an iPhone and enter all the pertinent information, it shows a "Verifying Exchange account info" message for a minute or so, then says everything's verified and asks what you want to sync: Mail, Contacts, Calendars... so it looks like it's working. However, when you go to the Mail app and select the Exchange email account, it just shows an "Inbox" folder with nothing in it. When you try refreshing, it attempts for a second, then says "Last Updated" with a timestamp, as if it worked, but there's no mail and no error message or feedback at all. I think I've narrowed it down to some sort of certificate issue, but I'm having trouble finding out where to go from here. I ran Microsoft's Exchange connectivity testing tool (the results were posted as a screenshot). Our cert was purchased from Network Solutions, and I'd already added it to the IIS Default Website for OWA purposes, but the connectivity report makes it look like the cert is somehow problematic. I don't know what to do now... A screenshot of the cert details was attached as well, just in case.

    Read the article

  • Unable to get squid working for remote users

    - by Sean
    I am trying to set up Squid 3.2.4, but I have not been able to get it working for remote users; it works fine locally. I am unable to figure out what I am doing wrong. Here is my configuration:

        http_port 3128 transparent ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/usr/share/ssl-cert/myCA.pem

        refresh_pattern ^ftp: 1440 20% 10080
        refresh_pattern ^gopher: 1440 0% 1440
        refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        refresh_pattern . 0 20% 4320

        acl localnet src 10.0.0.0/8     # RFC 1918 possible internal network
        acl localnet src 172.16.0.0/12  # RFC 1918 possible internal network
        acl localnet src 192.168.0.0/16 # RFC 1918 possible internal network
        acl localnet src fc00::/7       # RFC 4193 local private network range
        acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT

        http_access allow manager localhost
        http_access deny manager
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow localhost
        http_access allow localnet
        http_access allow all

        cache deny all
        via off
        forwarded_for off

        header_access From deny all
        header_access Server deny all
        header_access WWW-Authenticate deny all
        header_access Link deny all
        header_access Cache-Control deny all
        header_access Proxy-Connection deny all
        header_access X-Cache deny all
        header_access X-Cache-Lookup deny all
        header_access Via deny all
        header_access Forwarded-For deny all
        header_access X-Forwarded-For deny all
        header_access Pragma deny all
        header_access Keep-Alive deny all

        acl ip1 localip 1.1.1.90
        acl ip2 localip 1.1.1.91
        acl ip3 localip 1.1.1.92
        acl ip4 localip 1.1.1.93
        acl ip5 localip 1.1.1.94

        tcp_outgoing_address 1.1.1.90 ip1
        tcp_outgoing_address 1.1.1.91 ip2
        tcp_outgoing_address 1.1.1.92 ip3
        tcp_outgoing_address 1.1.1.93 ip4
        tcp_outgoing_address 1.1.1.94 ip5

    Read the article

  • VPS with Plesk, one IP, and GoDaddy (definitely need help)

    - by Francesco
    Hi there, here's my situation: I have Plesk 8.3.0 with one IP, and I registered my domains at godaddy.com. My problem: I cannot figure out how to configure Plesk and GoDaddy so that my domains (6 of them) work properly on the VPS. I have only one IP, so I can't run my own nameservers and need to use GoDaddy's. But how do I set all of this up? I made an attempt, but it's not working. Please take a look; this is an example of how the domain I'm currently working on is configured.

    On Plesk:

        Host                   Record type   Value
        1.2.3.4 / 24           PTR           mydomain.com.
        ftp.mydomain.com.      CNAME         mydomain.com.
        mail.mydomain.com.     A             1.2.3.4
        ns.mydomain.com.       A             1.2.3.4
        mydomain.com.          NS            ns.mydomain.com.
        mydomain.com.          A             1.2.3.4
        mydomain.com.          MX (10)       mail.mydomain.com.
        webmail.mydomain.com.  A             1.2.3.4
        www.mydomain.com.      CNAME         mydomain.com.

    On GoDaddy (Total DNS Control), for the same domain I have this setup:

        A (Host)
        Host         Points To                                     TTL
        *            1.2.3.4                                       1 Hour

        CNAMEs (Aliases)
        Host         Points To                                     TTL
        e            email.secureserver.net                        1 Hour
        email        email.secureserver.net                        1 Hour
        ftp          @                                             1 Hour
        imap         imap.secureserver.net                         1 Hour
        mail         pop.secureserver.net                          1 Hour
        mobilemail   mobilemail-v01.prod.mesa1.secureserver.net    1 Hour
        pda          mobilemail-v01.prod.mesa1.secureserver.net    1 Hour
        pop          pop.secureserver.net                          1 Hour
        smtp         smtp.secureserver.net                         1 Hour
        webmail      webmail.secureserver.net                      1 Hour
        www          @                                             1 Hour

        MX (Mail Exchange)
        Priority     Host   Goes To                                TTL
        10           @      mailstore1.secureserver.net            1 Hour
        0            @      smtp.secureserver.net

        Nameservers
        Host         Points To
        @            ns53.domaincontrol.com
        @            ns54.domaincontrol.com

    What should I correct? Thanks for helping me, Francesco

    Read the article
