Search Results

Search found 20946 results on 838 pages for 'at command'.

  • smtpd_helo_restrictions = ..., reject_unknown_helo_hostname occasionally rejects mail I care about, how to handle?

    - by lkraav
    I have configured my Postfix as follows:

        smtpd_helo_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unknown_helo_hostname

    This works well because most spambots don't seem to have correct reverse lookups. But every once in a while, mail I care about gets rejected because the admin of the sending server doesn't care about configuring it correctly. For example, here the server introduces itself as "srv1.xbmc.org", which has no DNS record and fails my basic check:

        Jan 6 04:42:36 mail postfix/smtpd[660]: connect from xbmc.org[205.251.128.242]
        Jan 6 04:42:37 mail postfix/smtpd[660]: NOQUEUE: reject: RCPT from xbmc.org[205.251.128.242]: 450 4.7.1 <srv1.xbmc.org>: Helo command rejected: Host not found; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<srv1.xbmc.org>

    I have tried to contact the server admin several times, but there is no response. What is the optimal way to handle this from my side? Is adding these "special" hosts to mynetworks my only option? Or is my whole smtpd_helo_restrictions setup wrong in some significant way?
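
    A common way to keep reject_unknown_helo_hostname while sparing known-bad-but-wanted senders is a check_helo_access whitelist consulted before the reject; this is a minimal sketch, and the map path is only a suggested name:

        # /etc/postfix/main.cf
        smtpd_helo_restrictions = permit_sasl_authenticated,
            permit_mynetworks,
            check_helo_access hash:/etc/postfix/helo_whitelist,
            reject_unknown_helo_hostname

        # /etc/postfix/helo_whitelist
        srv1.xbmc.org   OK

        # build the lookup table and reload
        postmap /etc/postfix/helo_whitelist
        postfix reload

    Unlike adding the host to mynetworks, this only skips the HELO check for the listed names and leaves the other restrictions in force.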

  • FTP FileWatcher

    - by Meiscooldude
    So, I am in this little predicament where I am stuck watching a few FTP folders to see if they have new files added to them. If they do, it needs to throw an event with the file name, thereby telling something else to download that file. This is a pretty simple object to make; I was just curious if anyone knew how expensive this operation would be.

    I plan on using the command NLIST because I don't need file size information, and there will be no sub-directories in the folder. Each file in the folder will have exactly 25 characters in its name. There could be anywhere from 10 to maybe a couple thousand (max around 2000) files per folder (usually on the lower end, 100-300, but currently growing). The files are anywhere from 250kb to a very, VERY unlikely 10mb (usually within the 250kb to 4mb range). There could be up to a few hundred folders (in which case I could change the watch frequency depending on the number of folders), but currently there are only a few (6-10ish). There would also be multiple logins for the FTP server; different logins would have access to different folders.

    I am not asking for an implementation, just whether anyone has some first- or second-hand knowledge about FTP and how this could affect my network. I am not opposed to putting in file retention times or changing the frequency with which I check for new files.
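
    For a sense of scale, an NLST reply for 2000 names of 25 characters is on the order of 50 KB per poll, which is small next to the file transfers themselves. A rough polling sketch (host, credentials, and interval are placeholders) that diffs successive listings and prints newly appeared names:

        #!/bin/sh
        # poll one FTP folder via NLST and report new file names
        URL="ftp://ftp.example.com/watched/"    # placeholder
        CREDS="user:password"                   # placeholder
        PREV=/tmp/ftp_prev.txt
        CUR=/tmp/ftp_cur.txt

        touch "$PREV"
        while true; do
            # curl --list-only issues NLST and returns bare names
            curl -s --list-only --user "$CREDS" "$URL" | sort > "$CUR"
            comm -13 "$PREV" "$CUR"             # names only in the new listing
            mv "$CUR" "$PREV"
            sleep 60
        done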

  • SVN hangs on commit - any suggestions for troubleshooting?

    - by Richard Beier
    We're having a problem with SVN... Subversion clients such as TortoiseSVN hang when we commit any more than a few files at a time to our server. Everything appears to actually be committed successfully to the repository, but the client hangs after all the data has been transmitted.

    We're using version 1.4.4 of the SVN server, and we use the svn:// protocol rather than http to connect. We've reproduced this problem with several clients: TortoiseSVN (1.6.10), AnkhSVN (2.1), and the Silk command-line client (1.6.12). This is happening for everyone on the team, though some people seem to be more affected than others. If someone commits only a few files, it often works; but with more than half a dozen files, it usually hangs.

    Does anyone have troubleshooting suggestions? This has been happening sporadically for a while, but it's become pretty consistent lately. We've been working around the issue by killing the hung SVN client, doing "svn cleanup", and then doing "svn up"; but sometimes that causes tree conflicts. Another workaround is to blow away the workspace and check it out again after every commit, but of course that's pretty annoying.

    Are there any diagnostics that could help us troubleshoot this? We're considering upgrading to an SVN 1.6 server and installing the server on a new machine, but we're wondering if there's an easier solution. Thanks for your help, Richard
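
    A few generic server-side checks that might narrow down where the hang happens (the repository path is a placeholder for your setup):

        # rule out repository corruption
        svnadmin verify /var/svn/repos

        # while a commit is hung, see whether the svn:// connection is still open
        netstat -an | grep 3690

        # attach to the server process and watch what it is blocked on
        strace -p $(pidof svnserve)

    One classic cause of exactly this symptom is a slow post-commit hook: the revision is already in, but the client waits for the hook to finish before the server closes out the commit.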

  • converting a png with an ICC profile?

    - by jedierikb
    I can convert a JPG from one ICC profile to another:

        convert rgb_image.jpg -profile USCoat.icm cmyk_image.jpg

    Or I can convert a JPG with no ICC profile to another profile:

        convert rgb_image.jpg +profile icm \
            -profile sRGB.icc -profile USCoat.icm cmyk_image.jpg

    But how do I convert a PNG's pixels into the gamut described by an ICC profile? I understand I cannot embed the profile into the image file, but I would at least like to convert the colors. When I reuse the above commands, the colors come out wrong (different from the colors in the JPG when converted).

    This is the source image: http://alumni.media.mit.edu/~erikb/tmp/RED_JPG.jpg

    Here is what I am trying:

        convert RED_JPG.jpg +profile icm -profile sRGB_v4_ICC_preference.icc -profile USWebUncoated.icc CMYK_PNG.png

    and this is what I am getting: http://alumni.media.mit.edu/~erikb/tmp/CMYK_PNG.png

    I was hoping to get an image with the same colors as a JPEG run through the same command:

        convert RED_JPG.jpg +profile icm -profile sRGB_v4_ICC_preference.icc -profile USWebUncoated.icc CMYK_JPG.jpg

    resulting in: http://alumni.media.mit.edu/~erikb/tmp/CMYK_JPG.jpg

    This image, CMYK_JPG.jpg, is what I am trying to reproduce pixel by pixel in a PNG file. Any suggestions? Original (unanswered) post here.
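
    PNG has no CMYK mode, so when the final profile leaves the pixels in CMYK, ImageMagick has to coerce them back to RGB on write, and that step is where the colors diverge. One hedged workaround is to make the return trip explicit: push the pixels through the CMYK profile and then back to sRGB yourself, so the PNG stores the gamut-clipped result:

        convert RED_JPG.jpg +profile icm \
            -profile sRGB_v4_ICC_preference.icc \
            -profile USWebUncoated.icc \
            -profile sRGB_v4_ICC_preference.icc \
            CMYK_PNG.png

    This should approximate the CMYK_JPG.jpg colors as closely as the round trip through US Web Uncoated allows.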

  • Postfix misconfigured? 550 Sender rejected from receiving server

    - by wnstnsmth
    We use Postfix on our CentOS 6 machine with the following configuration, and we use PHP's mail() function to send rudimentary password reset emails, but there is a problem. As you will see, mydomain and myhostname are correctly set, afaik.

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        inet_protocols = all
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        mydomain = ***.ch
        myhostname = test.***.ch
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550

    Now this is what /var/log/maillog shows upon sending an email to ***.***@***.ch, with ***.ch being the same domain our sending server test.***.ch is on:

        Dec 13 16:55:06 R12X0210 postfix/pickup[6831]: E6D6311406AB: uid=48 from=<apache>
        Dec 13 16:55:06 R12X0210 postfix/cleanup[6839]: E6D6311406AB: message-id=<20121213155506.E6D6311406AB@test.***.ch>
        Dec 13 16:55:07 R12X0210 postfix/qmgr[6832]: E6D6311406AB: from=<apache@test.***.ch>, size=1276, nrcpt=1 (queue active)
        Dec 13 16:55:52 R12X0210 postfix/smtp[6841]: E6D6311406AB: to=<***.***@***.ch>, relay=mail.***.ch[**.**.249.3]:25, delay=46, delays=0.18/0/21/24, dsn=5.0.0, status=bounced (host mail.***.ch[**.**.249.3] said: 550 Sender Rejected (in reply to RCPT TO command))
        Dec 13 16:55:52 R12X0210 postfix/cleanup[6839]: 8562C11406AC: message-id=<20121213155552.8562C11406AC@test.***.ch>
        Dec 13 16:55:52 R12X0210 postfix/bounce[6848]: E6D6311406AB: sender non-delivery notification: 8562C11406AC
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: 8562C11406AC: from=<>, size=3065, nrcpt=1 (queue active)
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: E6D6311406AB: removed
        Dec 13 16:55:52 R12X0210 postfix/local[6850]: 8562C11406AC: to=<root@test.***.ch>, orig_to=<apache@test.***.ch>, relay=local, delay=0.13, delays=0.07/0/0/0.05, dsn=2.0.0, status=sent (delivered to mailbox)
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: 8562C11406AC: removed

    So the receiving server rejects the sender (fourth line of the log output). We have tested it with one other recipient and it worked, so this problem might be completely unrelated to our settings and instead related to the recipient. Still, with this question, I want to make sure we're not making an obvious misconfiguration on our side.
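
    The envelope sender in that transaction is apache@test.***.ch, which the remote relay may reject simply because it is not a deliverable address. If that turns out to be the cause, Postfix's generic mapping can rewrite local-only senders on outbound SMTP; a sketch, with both addresses below standing in for your real ones:

        # /etc/postfix/main.cf
        smtp_generic_maps = hash:/etc/postfix/generic

        # /etc/postfix/generic  (replacement address is a placeholder)
        apache@test.example.ch    noreply@example.ch

        # build the map and reload
        postmap /etc/postfix/generic
        postfix reload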

  • Synergy Key Mapping

    - by Tauren
    I'm running a Synergy server on Ubuntu and a Synergy+ client on OS X. The server has a standard Windows keyboard with shift, ctrl, windows, and alt keys. My MacBook Pro has shift, fn, control, alt/option, and command keys.

    When I press ctrl-c, ctrl-v, etc., the appropriate copy/paste action doesn't happen on the Mac, but it does in Ubuntu. If I'm controlling the Mac and press alt-c, alt-v, then I get the copy/paste action. So I played around with key mapping in synergy.conf and found that the following allows me to do copy/paste with ctrl-c/ctrl-v:

        section: screens
            godzilla:
            mbp.local:
                ctrl = alt
                alt = ctrl
        end

    Is this all that I need to do? Or are there other mappings that will help as well? The Synergy configuration page refers to the following key mappings. What are the equivalent keys for each of these on the Windows keyboard and Mac keyboard? What is a meta or super key?

        shift = {shift|ctrl|alt|meta|super|none}
        ctrl = {shift|ctrl|alt|meta|super|none}
        alt = {shift|ctrl|alt|meta|super|none}
        meta = {shift|ctrl|alt|meta|super|none}
        super = {shift|ctrl|alt|meta|super|none}

    Thanks!
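
    For what it's worth, "super" generally corresponds to the Windows key on a PC keyboard and to the Command key on a Mac, while "meta" is a legacy modifier (from Lisp-machine and Sun keyboards) with no dedicated key on most modern boards. Under that assumption, a variant worth trying is to swap ctrl and super only on the Mac screen, so that ctrl-c/ctrl-v arrive as cmd-c/cmd-v:

        section: screens
            godzilla:
            mbp.local:
                ctrl = super
                super = ctrl
        end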

  • ffmpeg conversion problem

    - by user33126
    I installed ffmpeg and it shows the version and everything correctly, but even the plain info command

        ffmpeg -i Alice_In_Wonderland.mp4

    gives a message like:

        FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --extra-cflags=-fPIC --enable-libamr-nb --enable-libamr-wb --enable-libdirac --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-x11grab
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 0 / 52.20. 0
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libswscale     0. 7. 1 /  0. 7. 1
          libpostproc   51. 2. 0 / 51. 2. 0
          built on Nov 6 2009 19:11:04, gcc: 4.1.2 20080704 (Red Hat 4.1.2-46)
        Seems stream 1 codec frame rate differs from container frame rate: 49.93 (9986/200) -> 49.92 (599/12)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Alice_In_Wonderland.mp4':
          Duration: 00:01:39.65, start: 0.000000, bitrate: 542 kb/s
            Stream #0.0(und): Audio: aac, 44100 Hz, stereo, s16
            Stream #0.1(und): Video: h264, yuv420p, 480x270, 49.92 tbr, 24.96 tbn, 49.93 tbc
        At least one output file must be specified

    Please tell me what the problem is.
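
    That last line is the actual complaint: with only -i, ffmpeg prints the stream information and then stops because it was never told what to produce. Adding any output file turns it into a conversion, for example (output name and format are arbitrary):

        ffmpeg -i Alice_In_Wonderland.mp4 Alice_In_Wonderland.avi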

  • How to install the MySQL Ruby Gem on Ubuntu 9.10?

    - by misbehavens
    I am having a problem installing the Ruby gem for MySQL. This is the command that I am running:

        sudo gem install mysql

    and this is the output that I'm getting:

        Building native extensions. This could take a while...
        ERROR: Error installing mysql:
        ERROR: Failed to build gem native extension.
        /usr/bin/ruby1.8 extconf.rb
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lm... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lz... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lsocket... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lnsl... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lmygcc... no
        checking for mysql_query() in -lmysqlclient... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers. Check the mkmf.log file for more
        details. You may need configuration options.

        Provided configuration options:
            --with-opt-dir --without-opt-dir
            --with-opt-include --without-opt-include=${opt-dir}/include
            --with-opt-lib --without-opt-lib=${opt-dir}/lib
            --with-make-prog --without-make-prog
            --srcdir=. --curdir
            --ruby=/usr/bin/ruby1.8
            --with-mysql-config --without-mysql-config
            --with-mysql-dir --without-mysql-dir
            --with-mysql-include --without-mysql-include=${mysql-dir}/include
            --with-mysql-lib --without-mysql-lib=${mysql-dir}/lib
            --with-mysqlclientlib --without-mysqlclientlib
            --with-mlib --without-mlib
            --with-mysqlclientlib --without-mysqlclientlib
            --with-zlib --without-zlib
            --with-mysqlclientlib --without-mysqlclientlib
            --with-socketlib --without-socketlib
            --with-mysqlclientlib --without-mysqlclientlib
            --with-nsllib --without-nsllib
            --with-mysqlclientlib --without-mysqlclientlib
            --with-mygcclib --without-mygcclib
            --with-mysqlclientlib --without-mysqlclientlib

        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/mysql-2.8.1 for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/mysql-2.8.1/ext/mysql_api/gem_make.out

    What do I need to do in order to get this to install?
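
    Every "checking for mysql_query() in -lmysqlclient... no" line means the build can't find the MySQL client library and headers, which on Ubuntu live in a separate development package. Installing it first usually lets the gem build:

        sudo apt-get install libmysqlclient-dev
        sudo gem install mysql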

  • Blink build with Xcode failed

    - by Merci
    I found a GPL-ed SIP client for Mac, Blink. I'd like to build it from source since the binaries are only available as a paid download. Just FYI, I'm studying programming at university but have no experience in building complex applications from source.

    After downloading the content of the repository, I opened the Xcode project and tried to build on OS X 10.7 with Xcode 4.2.1. Unfortunately the build fails with 1 error and many warnings. Most of the warnings are like this:

        Attribute Unavailable: Custom Identifiers in Interface Builder versions prior to 3.2

    The error message is:

        Apple Mach-O Linker (ld) Error
        Command /Developer/usr/bin/clang failed with exit code 1

    preceded by the warning:

        Apple Mach-O Linker (ld) Warning
        directory not found for option '-L/Users/Sergio/Downloads/Blink/devel.ag-projects.com/repositories/public/blink-cocoa/Distribution/Frameworks'

    I notice that in the list of required files I have these files missing:

        Dependencies/Frameworks
            libgcrypt.11.6.0.dylib
            libgcrypt.11.dylib
            libgnutls-extra.26.dylib
            libgnutls.26.dylib
            libgpg-error.0.dylib
            libintl.8.dylib
            liblzo.1.dylib
            libtasn1.3.dylib
        Dependencies/Resources
            lib
        Frameworks/Linked Frameworks
            Sparkle.framework
        Products
            Blink.app

    It should be possible to download these files somewhere. Unfortunately googling did not help. There's no documentation on the project site.

  • Postfix: How do I Make Email Aliases Work?

    - by Nick
    The documentation claims that I can add aliases in a file (like /etc/postfix/virtusertable) and then use the "virtual_maps" directive to point to it. This does not appear to be working, however. My mail is bouncing with:

        Recipient address rejected: User unknown in local recipient table;

    If I mail the user from the server using the mail command, it works:

        mail myuser

    The message goes through Postfix and inserts itself in the Cyrus inbox correctly. When I use fetchmail to get the user's messages off a POP3 server, Postfix fails. The user's email is "[email protected]", but it doesn't seem to be mapping correctly to "myuser", the Cyrus mailbox name.

    /etc/postfix/main.cf:

        myhostname = localhost
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        mailbox_transport = lmtp:unix:/var/run/cyrus/socket/lmtp
        #lmtp:unix:/var/run/lmtp
        virtual_alias_domains = mydomain.com
        virtual_maps = hash:/etc/postfix/virtusertable

    /etc/fetchmailrc:

        set syslog;
        set daemon 20;
        poll "mail.pop3server.com" with protocol pop3
            user "[email protected]" password "12345" is "myuser" fetchall keep

    /etc/postfix/virtusertable:

        [email protected] myuser

    postconf -n:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        mailbox_size_limit = 0
        mailbox_transport = lmtp:unix:/var/run/cyrus/socket/lmtp
        mydestination = localhost
        myhostname = localhost
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        virtual_alias_domains = mydomain.com

    Why is it ignoring my alias?
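
    Two things worth ruling out first: hash: tables are only read from the compiled .db file, which postmap has to (re)build after every edit, and on current Postfix the parameter is spelled virtual_alias_maps (virtual_maps is the legacy name, and notably it does not appear in your postconf -n output at all). A hedged checklist:

        postmap /etc/postfix/virtusertable
        postconf -e 'virtual_alias_maps = hash:/etc/postfix/virtusertable'
        postfix reload

        # confirm the lookup actually resolves
        postmap -q "[email protected]" hash:/etc/postfix/virtusertable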

  • Sending email with exim and external sender address

    - by Tronic
    I have the following problem: I want to send emails from a Rails webapp. I set up an Exim server, and when looking into the logs the sending works, but the emails aren't really sent. I had the same problem with another ISP. The sender address is hosted on another mail server, at another ISP. I think the problem is that sending doesn't work because the sender address isn't hosted on the same server. Do you have any advice on this?

    The Exim logs tell me the following:

        2011-01-01 14:38:06 1PZ1eo-0000Ga-38 <= <> R=1PZ1eo-0000GY-1p U=Debian-exim P=local S=1778
        2011-01-01 14:38:08 1PZ1eo-0000Ga-38 => [email protected] R=dnslookup T=remote_smtp H=mx1.emailsrvr.com [98.129.184.131] X=TLS1.0:RSA_AES_256_CBC_SHA1:32 DN="C=US,O=mx1.emailsrvr.com,OU=GT21850092,OU=See www.geotrust.com/resources/cps (c)08,OU=Domain Control Validated - QuickSSL(R),CN=mx1.emailsrvr.com"
        2011-01-01 14:38:08 1PZ1eo-0000Ga-38 Completed

    ([email protected] is the external sender address!)

    Edit with more details: when sending a mail from the command line with

        echo "Test" | mail -s Testmail [email protected]

    the log says

        2011-01-01 20:45:24 1PZ7OG-0001Vp-Rx <= root@gustav U=root P=local S=360
        2011-01-01 20:45:26 1PZ7OG-0001Vp-Rx => [email protected] R=dnslookup T=remote_smtp H=gmail-smtp-in.l.google.com [209.85.229.27] X=TLS1.0:RSA_ARCFOUR_MD5:16 DN="C=US,ST=California,L=Mountain View,O=Google Inc,CN=mx.google.com"
        2011-01-01 20:45:26 1PZ7OG-0001Vp-Rx Completed

    and I get the mail on my Gmail account. But when sending via the webapp (when testing locally with sendmail it works fine), I only get this log output:

        2011-01-01 20:50:08 1PZ7Sq-0001X9-L4 <= <> R=1PZ7Sq-0001X7-Jo U=Debian-exim P=local S=1780
        2011-01-01 20:50:11 1PZ7Sq-0001X9-L4 => [email protected] R=dnslookup T=remote_smtp H=mx1.emailsrvr.com [98.129.184.3] X=TLS1.0:RSA_AES_256_CBC_SHA1:32 DN="C=US,O=mx1.emailsrvr.com,OU=GT21850092,OU=See www.geotrust.com/resources/cps (c)08,OU=Domain Control Validated - QuickSSL(R),CN=mx1.emailsrvr.com"
        2011-01-01 20:50:11 1PZ7Sq-0001X9-L4 Completed

    Thank you!
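
    One detail that stands out: the webapp deliveries start with "<= <>", an empty envelope sender, which is what bounce messages use and which many receiving servers discard or junk, whereas the working command-line test has a real sender (root@gustav). A hedged way to check whether the envelope sender is the difference is to set it explicitly from the command line (addresses are placeholders):

        /usr/sbin/sendmail -f sender@example.com recipient@example.com <<EOF
        Subject: Testmail

        Test
        EOF
        tail -f /var/log/exim4/mainlog

    If that arrives, pointing the webapp's mailer at the same envelope sender is the next thing to try.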

  • Scripting an 'empty' password in /etc/shadow

    - by paddy
    I've written a script to add CVS and SVN users on a Linux server (Slackware 14.0). This script creates the user if necessary, and either copies the user's SSH key from an existing shell account or generates a new SSH key. Just to be clear, the accounts are specifically for SVN or CVS, so the entry in /home/${username}/.ssh/authorized_keys begins with (using CVS as an example):

        command="/usr/bin/cvs server",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa ....etc...etc...etc...

    Actual shell access will never be allowed for these users - they are purely there to provide access to our source repositories via SSH.

    My problem is that when I add a new user, they get an empty password in /etc/shadow by default. It looks like:

        paddycvs:!:15679:0:99999:7:::

    If I leave the shadow file as is (with the !), SSH authentication fails. To enable SSH, I must first run passwd for the new user and enter something. I have two issues with doing that. First, it requires user input, which I can't allow in this script. Second, it potentially allows the user to login at the physical terminal (if they have physical access, which they might, and know the secret password -- okay, so that's unlikely).

    The way I normally prevent users from logging in is to set their shell to /bin/false, but if I do that then SSH doesn't work either! Does anyone have a suggestion for scripting this? Should I simply use sed or something and replace the relevant line in the shadow file with a preset encrypted secret password string? Or is there a better way? Cheers =)
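
    One non-interactive option is to set the password field to "*" instead of "!": "*" can never match any password, so password logins stay impossible, but the account is not flagged as locked, and public-key SSH logins are still allowed. A minimal sketch for the script:

        # set an unmatchable (but not "locked") password, no prompting involved
        usermod -p '*' "$username"

        # sanity-check the resulting shadow entry
        grep "^${username}:" /etc/shadow

    This avoids hand-editing /etc/shadow with sed and leaves no working password for the physical console.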

  • DRBD setup problem

    - by cuthieu
    I'm new to DRBD; please help me fix the problem below. My drbd.conf is enclosed. Many thanks.

        [root@skonkwerks1 ~]# drbdadm create-md all
        open(/dev/hdb3) failed: No such file or directory
        Command 'drbdmeta /dev/drbd0 v08 /dev/hdb3 internal create-md' terminated with exit code 20
        drbdsetup exited with code 20

        [root@skonkwerks1 ~]# vi /etc/drbd.conf
        global { usage-count no; }
        resource repdata {
            protocol C;
            startup { wfc-timeout 0; degr-wfc-timeout 120; }
            disk { on-io-error detach; } # or panic #
            net { cram-hmac-alg "hdd1"; shared-secret "testing"; }
            syncer { rate 10M; }
            on skonkwerks1 {
                device /dev/drbd0;
                disk /dev/hdb1;
                address 172.29.156.1:7788;
                meta-disk internal;
            }
            on skonkwerks2 {
                device /dev/drbd0;
                disk /dev/hdb1;
                address 172.29.156.2:7788;
                meta-disk internal;
            }
        }
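
    Two observations, offered tentatively: the error complains about /dev/hdb3 while the config shown uses /dev/hdb1, so drbdadm may be reading a different (older) drbd.conf than the one in the listing; and cram-hmac-alg expects a digest name such as "sha1", not "hdd1". Some quick checks:

        # does the backing partition exist at all?
        fdisk -l /dev/hdb

        # what configuration is drbdadm actually parsing?
        drbdadm dump all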

  • Why does this rsnapshot exclude not work?

    - by bstpierre
    Rsnapshot passes excludes directly to rsync, but rsync's behavior appears inconsistent. I've simplified my rsnapshot backup test to the following directory tree (this tree will be backed up):

        gorilla:~# find /tmp/snaptest -exec file {} \;
        /tmp/snaptest: directory
        /tmp/snaptest/SKIPTHIS: directory
        /tmp/snaptest/SKIPTHIS/xyz: directory
        /tmp/snaptest/SKIPTHIS/xyz/testing: ASCII text
        /tmp/snaptest/SKIPTHIS/bar: ASCII text
        /tmp/snaptest/SKIPTHIS/foo: ASCII text
        /tmp/snaptest/SKIPTHIS.txt: ASCII text

    My config file:

        config_version  1.2
        snapshot_root   /tmp/backup-media
        no_create_root  1
        cmd_cp          /bin/cp
        cmd_rm          /bin/rm
        cmd_rsync       /usr/bin/rsync
        cmd_ssh         /usr/bin/ssh
        cmd_logger      /usr/bin/logger
        cmd_du          /usr/bin/du
        interval        hourly  6
        interval        daily   7
        interval        weekly  4
        interval        monthly 3
        verbose         3
        loglevel        3
        logfile         /media/maxtor-one-touch/rsnapshot.log
        lockfile        /media/maxtor-one-touch/backups/.rsnapshot.pid
        rsync_short_args    -a
        rsync_long_args     --delete --numeric-ids --relative --delete-excluded
        exclude         "SKIPTHIS/**"
        link_dest       1
        backup          /tmp/snaptest   snaptest

    The result:

        gorilla:~# rsnapshot -c /tmp/snaptest.conf hourly
        echo 12638 > /media/maxtor-one-touch/backups/.rsnapshot.pid
        mkdir -m 0755 -p /tmp/backup-media/hourly.0/
        /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
            --exclude="SKIPTHIS/**" /tmp/snaptest \
            /tmp/backup-media/hourly.0/snaptest
        touch /tmp/backup-media/hourly.0/
        rm -f /media/maxtor-one-touch/backups/.rsnapshot.pid

        gorilla:~# find /tmp/backup-media/ -exec file {} \;
        /tmp/backup-media/: directory
        /tmp/backup-media/hourly.0: directory
        /tmp/backup-media/hourly.0/snaptest: directory
        /tmp/backup-media/hourly.0/snaptest/tmp: sticky directory
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest: directory
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS: directory
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/xyz: directory
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/xyz/testing: ASCII text
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/bar: ASCII text
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS/foo: ASCII text
        /tmp/backup-media/hourly.0/snaptest/tmp/snaptest/SKIPTHIS.txt: ASCII text

    My confusion stems from the fact that if I copy-paste the rsync command echoed by rsnapshot, the SKIPTHIS directory is excluded! (I've tested with various other SKIPTHIS patterns with the same results.) Any idea what's going on?
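
    The quoting is the likely culprit: rsnapshot hands config values to rsync verbatim, without a shell in between, so the pattern rsync receives still includes the literal double quotes and therefore matches nothing. Pasting the echoed command into a shell works precisely because the shell strips those quotes before rsync sees them. The hedged fix is to drop them in the config:

        exclude SKIPTHIS/**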

  • iptables to block non-VPN-traffic if not through tun0

    - by dacrow
    I have a dedicated webserver running Debian 6 and some Apache, Tomcat, Asterisk and mail stuff. Now we needed to add VPN support for a special program, so we installed OpenVPN and registered with a VPN provider. The connection works well and we have a virtual tun0 interface for tunneling.

    To achieve the goal of tunneling only a single program through the VPN, we start the program with

        sudo -u username -g groupname command

    and added an iptables rule to mark all traffic coming from groupname:

        iptables -t mangle -A OUTPUT -m owner --gid-owner groupname -j MARK --set-mark 42

    Afterwards we tell iptables to do some SNAT and tell ip route to use a special routing table for marked traffic packets.

    Problem: if the VPN fails, there is a chance that the special to-be-tunneled program communicates over the normal eth0 interface.

    Desired solution: all marked traffic should not be allowed to go directly through eth0; it has to go through tun0 first.

    I tried the following commands, which didn't work:

        iptables -A OUTPUT -m owner --gid-owner groupname ! -o tun0 -j REJECT
        iptables -A OUTPUT -m owner --gid-owner groupname -o eth0 -j REJECT

    It might be that the above iptables rules didn't work due to the fact that the packets are first marked, then put into tun0, and then transmitted by eth0 while they are still marked. I don't know how to de-mark them after they're in tun0, or how to tell iptables that all marked packets may pass eth0 if they were in tun0 before or if they are going to the gateway of my VPN provider. Does someone have any idea for a solution?

    Some config info:

        iptables -nL -v --line-numbers -t mangle

        Chain OUTPUT (policy ACCEPT 11M packets, 9798M bytes)
        num   pkts bytes target   prot opt in  out  source     destination
        1     591K   50M MARK     all  --  *   *    0.0.0.0/0  0.0.0.0/0   owner GID match 1005 MARK set 0x2a
        2    82812 6938K CONNMARK all  --  *   *    0.0.0.0/0  0.0.0.0/0   owner GID match 1005 CONNMARK save

        iptables -nL -v --line-numbers -t nat

        Chain POSTROUTING (policy ACCEPT 393 packets, 23908 bytes)
        num   pkts bytes target   prot opt in  out  source     destination
        1       15  1052 SNAT     all  --  *   tun0 0.0.0.0/0  0.0.0.0/0   mark match 0x2a to:VPN_IP

        ip rule add from all fwmark 42 lookup 42

        ip route show table 42
        default via VPN_IP dev tun0
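
    A hedged sketch of a fail-closed setup: the OpenVPN process itself runs under a different user, so the encrypted copies leaving eth0 are not marked, meaning a rule that rejects marked packets whose routing decision did not pick tun0 should only ever hit leaked traffic:

        # refuse marked traffic unless routing chose tun0
        iptables -A OUTPUT -m mark --mark 42 ! -o tun0 -j REJECT

        # make table 42 fail closed: when OpenVPN's default route vanishes,
        # the prohibit route takes over instead of falling back to eth0
        ip route add prohibit default table 42 metric 1000

    This assumes the marked program's packets are routed through table 42 (per the fwmark rule above) and that OpenVPN does not itself run under gid 1005.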

  • will heavy network traffic affect other connections on HP ProCurve V1810-48G?

    - by nn4l
    I have an HP ProCurve V1810-48G switch with a few servers connected to it (everything in one rack). The switch is practically in its default configuration. During the copying of a few hundred GB of data from server_a to server_b (using tar cf - data | ssh server_b 'cd myhome; tar xf -'), essentially saturating the network capacity between those two servers, I noticed network-related error messages on the console of server_c - as if server_c were no longer able to send/receive traffic to server_d. After canceling the copy command, everything was normal again.

    I would understand this if the network connection used a shared resource; for example, if server_a and server_c were in one datacenter, server_b and server_d in another, and both datacenters were connected with a 100 Mbit line. But all of the mentioned servers are connected to the same switch and are located in the same IP network. I always thought that a connection between two servers on one switch would not affect any other server connected to the switch. It is also possible that the network-related error messages are caused by something else - but I can't risk a network problem for any other system on this switch. Please advise.

  • Deleting "undeletable" files in Vista

    - by Nik Reiman
    I recently upgraded my workstation from XP SP3 to Vista Business, and during the upgrade Windows moved my old C:\Windows directory to C:\Windows.old. I got all of the stuff I needed out of that folder, but there are six "undeletable" files there, so I cannot remove it. They are:

        Windows.old\Program1\Adobe\Reader 9.0\Resource\CMap\Identity-H
        Windows.old\Program1\Adobe\Reader 9.0\Resource\CMap\Identity-V
        Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelper.dll
        Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll
        Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroPDF.dll
        Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\pdfshell.dll

    Whenever I try to delete the files, either through Explorer or a command line, I get a permission-denied error. I have tried to grant myself full permission on the files, but again, permission denied. I don't even have Acrobat installed on my Vista machine, and I uninstalled Adobe Updater. However, I still can't manage to get rid of these files. How do I nuke them for good?

    Edit: I was able to take ownership of the files, but I still can't delete them. Renaming them did not work, as I was denied permission to do that as well. I'll try booting up in safe mode and getting rid of them there.

    Edit II: Booting up into safe mode did not allow me to delete the files. Bummer.
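
    Since ownership alone wasn't enough, one thing still worth trying from an elevated command prompt (Run as administrator) is explicitly granting Administrators full control down the whole tree and then removing the directory in one pass; takeown, icacls, and rd are all standard Vista tools, with the path taken from the question:

        takeown /f C:\Windows.old /r /d y
        icacls C:\Windows.old /grant Administrators:F /t
        rd /s /q C:\Windows.old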

  • Mac OS X: Open up 3 terminals, run different commands in each of them, to set up a development environment

    - by taelor
    I'm a Ruby on Rails web developer, and there is a lot of repetition I go through to start up my development environment. I was wondering if there is any way I can remove some of this repetition by writing a script, or using a program (like Quicksilver) or something, to get my work environment going.

    I know how to use Quicksilver to open up Terminal, and I even have a saved window group to get my 3 or 4 panes open. The next thing I would love to happen automatically is getting all three to go to a certain directory and each run different commands. One will start the local server, and in another tab, start a background process. Another would open TextMate and then start a console session, while the last one runs an svn (or git) status. Oh yeah, and I would love to go ahead and open Firefox with a few tabs going to a couple of locations.

    Does anyone have any suggestions on how I could make all this happen in one Quicksilver command, or a double click on some type of script on my desktop?
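
    A plain shell script saved on the Desktop (or triggered from Quicksilver) can drive most of this through AppleScript and open; a minimal sketch, with the project path, commands, and URLs as placeholders:

        #!/bin/sh
        # open three Terminal windows, each running one command
        osascript -e 'tell application "Terminal" to do script "cd ~/myapp && script/server"'
        osascript -e 'tell application "Terminal" to do script "cd ~/myapp && script/console"'
        osascript -e 'tell application "Terminal" to do script "cd ~/myapp && svn status"'

        # open the project in TextMate and Firefox with a couple of tabs
        open -a TextMate ~/myapp
        open -a Firefox http://localhost:3000 http://localhost:3000/admin

    Marking the script executable (chmod +x) and saving it with a .command extension lets a double click in Finder run it.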

  • Upstart can't determine my process' pid

    - by sirlark
    I'm writing an upstart script for a small service I've written for my colleagues. My upstart job can start the service, but when it does it only outputs "queryqueue start/running"; note the lack of a pid, as given for other services.

        #/etc/init/queryqueue.conf
        description "Query Queue Daemon"
        author "---"

        start on started mysql
        stop on stopping mysql
        expect fork
        env DEFAULTFILE=/etc/default/queryqueue
        umask 007
        kill timeout 30

        pre-start script
            #if [ -f "$DEFAULTFILE" ]; then
            #    . "$DEFAULTFILE"
            #fi
            [ ! -f /var/run/queryqueue.sock ] || rm -rf /var/run/queryqueue.sock
            #exec /usr/local/sbin/queryqueue -s /var/run/queryqueue.sock -d -l /tmp/upstart.log -p $PIDFILE -n $NUM_WORKERS $CLEANCACHE $FLUSHCACHE $CACHECONN
        end script

        script
            # Originally this stanza didn't exist at all
            if [ -f "$DEFAULTFILE" ]; then
                . "$DEFAULTFILE"
            fi
            exec /usr/local/sbin/queryqueue -s /var/run/queryqueue.sock -d -l /tmp/upstart.log -p $PIDFILE -n $NUM_WORKERS $CLEANCACHE $FLUSHCACHE $CACHECONN
        end script

        post-start script
            for i in `seq 1 5` ; do
                [ -S /var/run/queryqueue.sock ] && exit 0
                sleep 1
            done
            exit 1
        end script

    The service in question is a Python script which, when run without error, forks using the code below right after checking command line options and basic environmental sanity, so I tell upstart to expect fork:

        pid = os.fork()
        if pid != 0:
            sys.exit(0)

    The script is executable and has a Python shebang. I can send the TERM signal to the process manually, and it quits gracefully. But running "stop queryqueue" claims "queryqueue stop/waiting" while the process is still alive and well. Also, its logs indicate it never received the kill signal. I'm guessing this is because upstart doesn't know which pid it has. I've also tried "expect daemon" and leaving the expect clause out entirely, but there's no change in behaviour. How can I get upstart to determine the pid of the exec'd process?
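
    A hedged alternative that sidesteps pid tracking entirely: run the service in the foreground and drop the expect stanza, so upstart supervises the very pid it exec'd. This assumes adding a flag to the Python script - called --foreground here, purely hypothetical - that skips the os.fork() step:

        # /etc/init/queryqueue.conf fragment - note: no "expect" line
        script
            [ -f "$DEFAULTFILE" ] && . "$DEFAULTFILE"
            exec /usr/local/sbin/queryqueue --foreground -s /var/run/queryqueue.sock -l /tmp/upstart.log -n $NUM_WORKERS $CLEANCACHE $FLUSHCACHE $CACHECONN
        end script

    With exactly one fork in the daemon, "expect fork" should in principle track the child, but any extra fork (from a library, say) throws the count off, which is why the foreground approach tends to be more robust.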

  • How do I set up a Windows NFS share so that I can view its contents on Linux?

    - by hewhocutsdown
    My NFS server is a Windows XP SP3 box with Microsoft Windows Services for UNIX installed. I have a share configured under C:\NFS with the share name NFS and ANSI encoding. Anonymous access is enabled, with the anon UID/GID set to 0/0. Additionally, I've set ALL MACHINES to Read-Write and checked the checkbox to Allow root access.

    My first NFS client is a Ubuntu 10.04 box with nfs-common installed. Running

        sudo mount -t nfs 1.1.1.1:/NFS /home/user/NFS

    succeeds, but when I attempt to view the folder (even as root), it tells me that I do not have the permissions necessary to view the contents of the folder.

    My second NFS client is an IBM iSeries box running OS/400 V5R3. I used the mount command below:

        MOUNT TYPE(*NFS) MFS('1.1.1.1:/NFS') MNTOVRDIR('/PARENT/NFS')
            OPTIONS('rw,nosuid,retry=5,rsize=8096,wsize=8096,timeo=20,retrans=2,acregmin=30,acregmax=60,acdirmin=30,acdirmax=60,soft')
            CODEPAGE(*BINARY *ASCII)

    which also mounts successfully. Attempting to WRKLNK '/PARENT/NFS' and use Option 5 to enter the directory yields a "Not authorized to object" error - even though I am a security officer with the *ALLOBJ special authority.

    My gut says that it's a problem with the Windows share, but I don't know what it could be. Do you have any suggestions?
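
    Two quick checks from the Linux side might show whether the export and the identity mapping are behaving as configured:

        # what is the Windows box actually exporting?
        showmount -e 1.1.1.1

        # after mounting, what numeric owners/permissions do the entries report?
        ls -ln /home/user/NFS

    If ls -ln shows owners other than 0/0 or modes without read bits, the anonymous-UID mapping on the Services for UNIX side is the first thing to revisit.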

  • How do you do a keyword search in the Services.msc (MMC) window in Windows 7?

    - by Warren P
    When you want to run a service, you have very limited capabilities, in all current Windows versions, as far as I can tell. I usually start Services by typing "services.msc" into the Start-Run box; on most versions of Windows, this works.

    I know how to click the "Name" column in the MMC view of Windows services. If you know the first few characters of a service name, you can usually sort by name and type the prefix to scroll the list down (to find Windows Search, for example). This seems pretty weak to me, so I spent some time searching the interwebs for tools that do a better job of managing services.

    Usually I have a keyword - I know "fooWare" might be the keyword - and I need to find the (usually badly named) service and start and stop it. This is often WAY too hard. The best I could do is "NET SERVICES" from the command line, maybe with a grep added, but that doesn't list every service, only a few of them. The MMC snap-in in Win7 now has an Export List button for exporting to a CSV text file, which I have used from time to time to export and then search.

    I have thought of writing my own tool, but I'm hoping a better "service manager" utility already exists out there that sysadmins use. I'd like a search box at the top right corner, kind of the same way that the Add/Remove Programs dialog in Win7 and Vista has a search facility. Does such a services utility exist?
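
    For the keyword-search half of the problem, the PowerShell that ships with Windows 7 can already filter services by display name and drive them, no extra tooling required ("fooWare" below is the example keyword from the question):

        Get-Service -DisplayName "*fooWare*"

        # start or stop whatever matched
        Get-Service -DisplayName "*fooWare*" | Start-Service
        Get-Service -DisplayName "*fooWare*" | Stop-Service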

  • How to make Synaptics touchpad work better on Linux?

    - by whitequark
    I have Debian Squeeze currently installed on a Samsung N250 netbook with a Synaptics touchpad. These touchpads are generally good, and everything works perfectly on Windows. The support is extremely sucky on Linux, though. Of course it has all the cool features like two-finger scrolling, but the cursor (or whatever is a replacement for the cursor when scrolling) trembles awfully. It trembles when I just keep a finger on the touchpad, it shakes awfully if the finger is close to the top of the touchpad, and when I'm scrolling with it (no matter whether with two fingers or one), the page shakes a lot too. None of this behavior is observed even in Windows XP with just the default drivers installed.

    Here's the Xorg version:

        $ Xorg -version

        X.Org X Server 1.7.7
        Release Date: 2010-05-04
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 2.6.32-5-686 i686 Debian
        Current Operating System: Linux mannaz 2.6.32-5-686 #1 SMP Fri Dec 10 16:12:40 UTC 2010 i686
        Kernel command line: BOOT_IMAGE=/vmlinuz-2.6.32-5-686 root=/dev/mapper/mannaz-root ro quiet splash
        Build Date: 02 December 2010 01:08:37AM
        xorg-server 2:1.7.7-10 (Julien Cristau <[email protected]>)
        Current version of pixman: 0.16.4

    and here is the synclient -l output: http://pastebin.com/Eqa6hGXP
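
    Driver parameters can be tested live with synclient before writing anything into xorg.conf; which knobs exist varies by synaptics driver version, so check your own synclient -l output first. As one hedged example, raising the finger-pressure thresholds sometimes calms jitter from a lightly resting finger (the values are starting points, not known-good settings):

        synclient FingerLow=30 FingerHigh=40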

  • "Security Warning" comes up when I run via another program

    - by Alexander Bird
    If I execute vmmap from the command line, it works fine. However, if I call some other program and pass vmmap as a parameter for this other program to start, then I get this "security error" popup - which makes it hard to automate scripts.

    In other words, I want to wrap vmmap via another program. In my case, I want to do this because whenever vmmap runs, it brings a window up momentarily which then disappears, so I try passing vmmap as an argument to another program which will start it "headlessly". I tried two such wrapper programs, and in both cases I get the same popup, which defeats the purpose of automation.

    Why does this happen when the program isn't run directly? Does anyone know the internals of what this warning is? And, ultimately, is there a way to stop this from happening, but only for this instance? I don't want to disable this warning system on my whole computer.

    EDIT: I am using Windows Server 2003, and I don't necessarily need solutions for other platforms, but I would like to know what they are if they are platform-dependent solutions.
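
    One guess worth testing: the prompt is often the Attachment Execution Service reacting to the Zone.Identifier alternate data stream that Windows attaches to files downloaded from the Internet, and different launch paths honor it differently. Removing the stream from just that one binary, with Sysinternals' streams.exe, keeps the warning system intact everywhere else:

        streams -d vmmap.exe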

  • Service nginx reload: unexpected error

    - by Anna
    I'm trying to install wordpress on my nginx server by following this tutorial: http://premium.wpmudev.org/blog/how-to-setup-your-own-nginx-powered-wordpress-server/

    However, the last command at step 7 gave me a strange error:

        service nginx reload

    A copy-paste from my terminal:

        root@server:~# service nginx reload
        Reloading nginx configuration: nginx: [emerg] unexpected "o" in /etc/nginx/sites-enabled/wordpress:7
        nginx: configuration file /etc/nginx/nginx.conf test failed

    When I nano into sites-enabled/wordpress, on the 7th line I can't find anything strange:

        <!DOCTYPE html>
        <html class=" ">
        <head prefix="og: http://ogp.me/ns# fb: http://ogp.me/ns/fb# object: http://ogp.me/ns/object# article: http://ogp.me/ns/article# profile: http://ogp.me/ns/profile#">
        <meta charset='utf-8'>
        <meta http-equiv="X-UA-Compatible" content="IE=edge">

    Also, I don't see any obvious errors in my nginx.conf file, but maybe I'm not checking something? The first couple of lines of the nginx config file:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

    Any help is appreciated, thanks a lot in advance!
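
    Those lines starting with <!DOCTYPE html> are HTML, not nginx directives: the sites-enabled/wordpress file seems to contain a saved copy of a web page rather than the server block the tutorial describes, and that is exactly what nginx is refusing to parse. A minimal replacement sketch in the spirit of that tutorial (server name, web root, and the PHP-FPM socket path are assumptions for your setup):

        server {
            listen 80;
            server_name example.com;
            root /var/www/wordpress;
            index index.php;

            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;  # assumed socket
            }
        }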
