Search Results

Search found 24933 results on 998 pages for 'arch linux'.


  • extract recipient addresses from mailer-daemon messages

    - by Frank Peixoto
    I need to generate, and email to a mailing list administrator, a list of email addresses that have generated a bounceback message, to allow him to clean up the mailing list. A separate mailbox will be dedicated to receiving those messages, so I need to extract the recipients that have bounced. What's the easiest way to extract those from the mail file for that special mailbox?
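
    A minimal sketch of one approach (untested; it assumes the dedicated mailbox is a plain mbox file at /var/mail/bounces and that the bounces are standard delivery status notifications carrying a Final-Recipient: header):

        grep -h '^Final-Recipient:' /var/mail/bounces \
            | sed 's/^Final-Recipient:[^;]*;[[:space:]]*//' \
            | sort -u > bounced-addresses.txt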

    Read the article

  • SSHing through an HTTP proxy

    - by Siler
    Typical scenario: I'm trying to SSH through a corporate HTTP proxy to a remote machine using corkscrew, and I get:

        ssh_exchange_identification: Connection closed by remote host

    Obviously, there are a lot of reasons this might be happening - the proxy might not allow this, the remote box might not be running sshd, etc. So, I tried to tunnel manually via telnet:

        $ telnet proxy.evilcorporation.com 82
        Trying XX.XX.XX.XX...
        Connected to proxy.evilcorporation.com.
        Escape character is '^]'.
        CONNECT myremotehost.com:22 HTTP/1.1

        HTTP/1.1 200 Connection established

    So, unless I'm mistaken... it looks like the connection is working. Why, then, doesn't it work via corkscrew?

        ssh -vvv [email protected] -p 22 -o "ProxyCommand corkscrew proxy.evilcorporation.com 82 myremotehost.com 22"
        OpenSSH_6.6, OpenSSL 1.0.1f 6 Jan 2014
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Executing proxy command: exec corkscrew proxy.evilcorporation.com 82 myremotehost.com 22
        debug1: permanently_set_uid: 0/0
        debug1: permanently_drop_suid: 0
        debug1: identity file /root/.ssh/id_rsa type -1
        debug1: identity file /root/.ssh/id_rsa-cert type -1
        debug1: identity file /root/.ssh/id_dsa type -1
        debug1: identity file /root/.ssh/id_dsa-cert type -1
        debug1: identity file /root/.ssh/id_ecdsa type -1
        debug1: identity file /root/.ssh/id_ecdsa-cert type -1
        debug1: identity file /root/.ssh/id_ed25519 type -1
        debug1: identity file /root/.ssh/id_ed25519-cert type -1
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.6p1 Ubuntu-2ubuntu1
        ssh_exchange_identification: Connection closed by remote host
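
    One way to narrow this down (an untested sketch): replicate the CONNECT request non-interactively and watch what comes back, since some proxies behave differently when the request arrives quickly or lacks certain headers:

        printf 'CONNECT myremotehost.com:22 HTTP/1.0\r\n\r\n' \
            | nc proxy.evilcorporation.com 82
        # If the SSH banner (SSH-2.0-...) follows the 200 response, the tunnel
        # itself works and the problem is on the corkscrew side; if not, the
        # proxy is closing the tunnel.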

    Read the article

  • ProFTPd server on Ubuntu getting access denied message when successfully authenticated?

    - by exxoid
    I have an Ubuntu box with a ProFTPD 1.3.4a server. When I log in via my FTP client, I cannot do anything, as it does not allow me to list directories; I have tried logging in as root and as a regular user, and tried accessing different paths within the FTP server. The error I get in my FTP client is:

        Status: Retrieving directory listing...
        Command: CDUP
        Response: 250 CDUP command successful
        Command: PWD
        Response: 257 "/var" is the current directory
        Command: PASV
        Response: 227 Entering Passive Mode (172,16,4,22,237,205).
        Command: MLSD
        Response: 550 Access is denied.
        Error: Failed to retrieve directory listing

    Any idea? Here is the config of my proftpd:

        #
        # /etc/proftpd/proftpd.conf -- This is a basic ProFTPD configuration file.
        # To really apply changes, reload proftpd after modifications, if
        # it runs in daemon mode. It is not required in inetd/xinetd mode.
        #

        # Includes DSO modules
        Include /etc/proftpd/modules.conf

        # Set off to disable IPv6 support which is annoying on IPv4 only boxes.
        UseIPv6 off
        # If set on you can experience a longer connection delay in many cases.
        IdentLookups off

        ServerName "Drupal Intranet"
        ServerType standalone
        ServerIdent on "FTP Server ready"
        DeferWelcome on

        # Set the user and group that the server runs as
        User nobody
        Group nogroup

        MultilineRFC2228 on
        DefaultServer on
        ShowSymlinks on

        TimeoutNoTransfer 600
        TimeoutStalled 600
        TimeoutIdle 1200

        DisplayLogin welcome.msg
        DisplayChdir .message true
        ListOptions "-l"

        DenyFilter \*.*/

        # Use this to jail all users in their homes
        # DefaultRoot ~

        # Users require a valid shell listed in /etc/shells to login.
        # Use this directive to release that constrain.
        # RequireValidShell off

        # Port 21 is the standard FTP port.
        Port 21

        # In some cases you have to specify passive ports range to by-pass
        # firewall limitations. Ephemeral ports can be used for that, but
        # feel free to use a more narrow range.
        # PassivePorts 49152 65534

        # If your host was NATted, this option is useful in order to
        # allow passive tranfers to work. You have to use your public
        # address and opening the passive ports used on your firewall as well.
        # MasqueradeAddress 1.2.3.4

        # This is useful for masquerading address with dynamic IPs:
        # refresh any configured MasqueradeAddress directives every 8 hours
        <IfModule mod_dynmasq.c>
        # DynMasqRefresh 28800
        </IfModule>

        # To prevent DoS attacks, set the maximum number of child processes
        # to 30. If you need to allow more than 30 concurrent connections
        # at once, simply increase this value. Note that this ONLY works
        # in standalone mode, in inetd mode you should use an inetd server
        # that allows you to limit maximum number of processes per service
        # (such as xinetd)
        MaxInstances 30

        # Set the user and group that the server normally runs at.
        # Umask 022 is a good standard umask to prevent new files and dirs
        # (second parm) from being group and world writable.
        Umask 022 022

        # Normally, we want files to be overwriteable.
        AllowOverwrite on

        # Uncomment this if you are using NIS or LDAP via NSS to retrieve passwords:
        # PersistentPasswd off

        # This is required to use both PAM-based authentication and local passwords
        AuthPAMConfig proftpd
        AuthOrder mod_auth_pam.c* mod_auth_unix.c

        # Be warned: use of this directive impacts CPU average load!
        # Uncomment this if you like to see progress and transfer rate with ftpwho
        # in downloads. That is not needed for uploads rates.
        # UseSendFile off

        TransferLog /var/log/proftpd/xferlog
        SystemLog /var/log/proftpd/proftpd.log

        # Logging onto /var/log/lastlog is enabled but set to off by default
        #UseLastlog on

        # In order to keep log file dates consistent after chroot, use timezone info
        # from /etc/localtime. If this is not set, and proftpd is configured to
        # chroot (e.g. DefaultRoot or <Anonymous>), it will use the non-daylight
        # savings timezone regardless of whether DST is in effect.
        #SetEnv TZ :/etc/localtime

        <IfModule mod_quotatab.c>
        QuotaEngine off
        </IfModule>

        <IfModule mod_ratio.c>
        Ratios off
        </IfModule>

        # Delay engine reduces impact of the so-called Timing Attack described in
        # http://www.securityfocus.com/bid/11430/discuss
        # It is on by default.
        <IfModule mod_delay.c>
        DelayEngine on
        </IfModule>

        <IfModule mod_ctrls.c>
        ControlsEngine off
        ControlsMaxClients 2
        ControlsLog /var/log/proftpd/controls.log
        ControlsInterval 5
        ControlsSocket /var/run/proftpd/proftpd.sock
        </IfModule>

        <IfModule mod_ctrls_admin.c>
        AdminControlsEngine off
        </IfModule>

        #
        # Alternative authentication frameworks
        #
        #Include /etc/proftpd/ldap.conf
        #Include /etc/proftpd/sql.conf

        #
        # This is used for FTPS connections
        #
        #Include /etc/proftpd/tls.conf

        #
        # Useful to keep VirtualHost/VirtualRoot directives separated
        #
        #Include /etc/proftpd/virtuals.con

        # A basic anonymous configuration, no upload directories.
        # <Anonymous ~ftp>
        #   User ftp
        #   Group nogroup
        #   # We want clients to be able to login with "anonymous" as well as "ftp"
        #   UserAlias anonymous ftp
        #   # Cosmetic changes, all files belongs to ftp user
        #   DirFakeUser on ftp
        #   DirFakeGroup on ftp
        #
        #   RequireValidShell off
        #
        #   # Limit the maximum number of anonymous logins
        #   MaxClients 10
        #
        #   # We want 'welcome.msg' displayed at login, and '.message' displayed
        #   # in each newly chdired directory.
        #   DisplayLogin welcome.msg
        #   DisplayChdir .message
        #
        #   # Limit WRITE everywhere in the anonymous chroot
        #   <Directory *>
        #     <Limit WRITE>
        #       DenyAll
        #     </Limit>
        #   </Directory>
        #
        #   # Uncomment this if you're brave.
        #   # <Directory incoming>
        #   #   # Umask 022 is a good standard umask to prevent new files and dirs
        #   #   # (second parm) from being group and world writable.
        #   #   Umask 022 022
        #   #   <Limit READ WRITE>
        #   #     DenyAll
        #   #   </Limit>
        #   #   <Limit STOR>
        #   #     AllowAll
        #   #   </Limit>
        #   # </Directory>
        # </Anonymous>

        # Include other custom configuration files
        Include /etc/proftpd/conf.d/

        UseReverseDNS off

        <Global>
          RootLogin on
          UseFtpUsers on
          ServerIdent on
          DefaultChdir /var/www
          DeleteAbortedStores on
          LoginPasswordPrompt on
          AccessGrantMsg "You have been authenticated successfully."
        </Global>

    Any idea what could be wrong? Thanks for your help!
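
    One hedged first step (a diagnostic, not a definitive fix): raise the log verbosity and reproduce the failing MLSD, so the 550 can be traced to a filesystem permission or a <Limit> block. DebugLevel is a standard ProFTPD directive:

        # temporarily, in /etc/proftpd/proftpd.conf
        DebugLevel 9
        # then reload and watch the log while the client retries MLSD:
        #   tail -f /var/log/proftpd/proftpd.log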

    Read the article

  • Windows clients not using NTP server provided via DHCP

    - by gencha
    I have a network consisting mostly of Windows Vista and 7 clients and an Ubuntu server. The server provides both the DHCP and NTP services through dhcp3-server and openntpd. In my dhcpd.conf, the subnet is declared as follows:

        subnet 10.10.10.0 netmask 255.255.255.0 {
          range 10.10.10.10 10.10.10.200;
          option broadcast-address 10.10.10.255;
          option routers 10.10.10.1;
          option ntp-servers 10.10.10.1;
        }

    The clients don't seem to be using the NTP server, though. When I capture the network traffic with Wireshark during the DHCP process, I also see no mention of the NTP option in the DHCP offer message. I am not quite sure whether the clients have to specifically request that option to receive it, or whether I have to make another configuration change to offer the option.
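
    For what it's worth, a DHCP server only sends options that the client asks for in its Parameter Request List, and Windows clients normally do not request option 042 (ntp-servers), so the offer will not carry it no matter what dhcpd.conf says. The usual workaround is to point the Windows time service at the server directly; an untested sketch, run from an elevated prompt on a client:

        w32tm /config /manualpeerlist:10.10.10.1 /syncfromflags:manual /update
        w32tm /resync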

    Read the article

  • How to set up an FTP user on an Ubuntu 9 server using the vsftpd utility?

    - by Pavel
    Hi guys. I'm kinda new to this, so bear with me. I've set up a server and now I need to create an FTP user for it. I'm doing this by typing:

        useradd pavel
        passwd pavel

    And then I'm running:

        iptables -I INPUT 1 -p tcp --dport 21 -j ACCEPT
        iptables-save > /etc/iptables.rules

    in order to open the FTP port, and lastly I'm changing the user's shell with:

        usermod -s /bin/sh pavel

    So now tell me - what am I doing wrong here? I just want to connect using the FTP protocol. Please help...
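
    For the server side, a minimal sketch of the relevant vsftpd settings (untested; both options appear in the stock configuration file). Local logins are what let a system user such as pavel connect over FTP:

        # /etc/vsftpd.conf (or /etc/vsftpd/vsftpd.conf, depending on the package)
        local_enable=YES     # allow local system users to log in
        write_enable=YES     # allow uploads and modifications
        # then restart the daemon, e.g.:
        #   sudo service vsftpd restart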

    Read the article

  • Simultaneous NX client connections

    - by Ja Sam
    I would like to have two separate NX connections for the same username going from one dual-monitor client to an NX server. The idea is to get two separate KDE sessions for the same username. When I try to do it the most obvious way (by starting one NX client on each monitor), the second connection just takes over the first one. Any idea how to do this?

    Read the article

  • postfix: Temporary lookup failure for FQDN

    - by Thufir
    I'm using the FQDN dur.bounceme.net, which I want to resolve to localhost. That is, I want mail to [email protected] to be delivered to user@localhost. I've tried following the Ubuntu guide on this and seem to be going in circles a bit.

        root@dur:~# postfix stop
        postfix/postfix-script: stopping the Postfix mail system
        root@dur:~# postfix start
        postfix/postfix-script: starting the Postfix mail system
        root@dur:~# telnet dur.bounceme.net 25
        Trying 127.0.1.1...
        telnet: Unable to connect to remote host: Connection refused
        root@dur:~# telnet localhost 25
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        220 dur.bounceme.net ESMTP Postfix (Ubuntu)
        ehlo dur
        250-dur.bounceme.net
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
        mail from:[email protected]
        250 2.1.0 Ok
        rcpt to:[email protected]
        451 4.3.0 <[email protected]>: Temporary lookup failure
        rcpt to:thufir@localhost
        451 4.3.0 <thufir@localhost>: Temporary lookup failure
        quit
        221 2.0.0 Bye
        Connection closed by foreign host.

    The mail log shows the rejections:

        root@dur:~# grep telnet /var/log/mail.log
        Aug 28 00:24:45 dur postfix/smtpd[18256]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<dur>
        Aug 28 00:24:58 dur postfix/smtpd[18256]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<dur>
        Aug 28 00:54:55 dur postfix/smtpd[18825]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<dur>
        Aug 28 00:55:08 dur postfix/smtpd[18825]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<dur>

    And the non-default configuration:

        root@dur:~# postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        default_transport = smtp
        home_mailbox = Maildir/
        inet_interfaces = loopback-only
        mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}"
        mailbox_size_limit = 0
        mailman_destination_recipient_limit = 1
        mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost
        myhostname = dur.bounceme.net
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        readme_directory = no
        recipient_delimiter = +
        relay_domains = lists.dur.bounceme.net
        relay_transport = relay
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/dovecot-auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_tls_auth_only = yes
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key
        smtpd_tls_mandatory_ciphers = medium
        smtpd_tls_mandatory_protocols = SSLv3, TLSv1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport
        root@dur:~#
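
    One untested line of attack: a 451 "Temporary lookup failure" usually means Postfix cannot open one of its lookup tables, rather than being a DNS problem, and every hash: table named in postconf -n needs a matching .db file. A sketch for checking and rebuilding them:

        postmap /etc/postfix/transport           # rebuilds transport.db
        newaliases                               # rebuilds /etc/aliases.db
        ls -l /var/lib/mailman/data/aliases.db   # must exist and be readable by postfix
        tail -f /var/log/mail.log                # look for "open database" errors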

    Read the article

  • Upgraded from fc10 to fc12, now I have eth0_rename; how do I get back to plain old eth0?

    - by shank
    I upgraded from Fedora 10 to Fedora 12. Unfortunately, my ethernet interface eth0 is now named eth0_rename. I'd like to get back to having it named plain old eth0. I googled a bit but the solution of removing the eth0 entry from /etc/udev/rules.d/70-persistent-net.rules seems to have no effect (I restarted the network service but didn't reboot). The interface works just fine although I could see a script or two having a problem with the format. So, it's more of an inconvenience thing than anything else. Any ideas? Thanks.
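
    A hedged note (untested): the udev rules file is only consulted when the device is (re)discovered, so restarting the network service is not enough. After cleaning the file so that exactly one entry maps the NIC's MAC address to NAME="eth0", either reboot or reload the NIC's driver module (the module name below is an example - check lsmod for the real one):

        sudo vi /etc/udev/rules.d/70-persistent-net.rules
        sudo modprobe -r e1000 && sudo modprobe e1000   # e1000 is an example driver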

    Read the article

  • bash script - spawn, send, interact - commands not found error

    - by Sandeepan Nath
    In my shell script, I am trying to remove the password prompt for an scp command (as given in http://stackoverflow.com/questions/459182/using-expect-to-pass-a-password-to-ssh/459225#459225), and this is what I have so far:

        #!/usr/bin/expect
        spawn scp $DESTINATION_PATH/exam.tar $SSH_CREDENTIALS':/'$PROJECT_INSTALLATION_PATH
        expect "password:"
        send $sshPassword"\n";
        interact

    On running the script, I am getting these errors:

        spawn: command not found
        send: command not found
        interact: command not found

    I was also getting the error "expect: command not found"; then I realised the path to expect was not correct and expect was not installed at all. So I did yum install expect, corrected the path, and that error was gone. But I am still not able to remove the other three errors.
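
    Those three errors are what bash prints when it, not expect, interprets the script - which happens when it is run as sh script.sh instead of ./script.sh (letting the #!/usr/bin/expect shebang take effect) or expect script.sh. A rough expect-flavoured sketch (untested; all values are placeholders, and note that expect sets variables with set rather than shell assignment):

        #!/usr/bin/expect -f
        set destination_path "/path/to/build"     ;# placeholder
        set ssh_credentials  "user@remotehost"    ;# placeholder
        set install_path     "/var/www"           ;# placeholder
        set ssh_password     "secret"             ;# placeholder

        spawn scp $destination_path/exam.tar $ssh_credentials:/$install_path
        expect "password:"
        send "$ssh_password\r"
        interact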

    Read the article

  • How to mirror filesystems with millions of hardlinks?

    - by Thomas Berger
    We have one big problem at the moment: we need to mirror a filesystem for one of our customers. That's usually not really a problem, but here it is: on this filesystem there is one folder with millions of hardlinks (yes, MILLIONS!). rsync requires more than 4 days just to build the file list. We use the following rsync options:

        rsync -Havz --progress serverA:/data/cms /data/

    Does anyone have an idea how to speed up this rsync, or an alternative? We could not use dd, as the target disk is smaller than the source.
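
    One untested alternative for the initial copy: tar streams the tree without building a complete file list up front, and it preserves hardlinks by default, so piping it over ssh sidesteps the 4-day scan (rsync can still handle later incremental runs):

        ssh serverA 'tar -C /data -cf - cms' | tar -C /data -xf -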

    Read the article

  • Permission denied error while importing to a remote repository via svn...

    - by Usman Ajmal
    Hi, I am importing my project to another machine on my LAN, into the directory /srv/svn/repos/my-repo, where my-repo was created via the svnadmin create option. The permissions of /srv/svn/repos/my-repo are:

        drwxr-xr-x 6 svn svn 4096 2010-04-19 17:30 my-repo

    I executed the following command to import myProject files to my-repo on the remote system:

        sudo svn import -m "First import" myProject svn+ssh://[email protected]/srv/svn/repos/my-repo

    This command started 'Adding' files, but gave the following error after 'Adding' 7 files:

        svn: Can't open file '/srv/svn/repos/baltoros-valgrind/db/txn-current-lock': Permission denied

    Any idea what's going on? Thanks a lot
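
    A hedged reading of the error: the listing shows the repository writable only by the svn user, while an svn+ssh:// URL runs svnserve as whatever account sshed in, so that account cannot write to the repository's db/ directory. A common arrangement is a dedicated group with group write and the setgid bit (the group and account names below are examples):

        groupadd svnusers                                      # example group name
        usermod -a -G svnusers youruser                        # example account
        chgrp -R svnusers /srv/svn/repos/my-repo
        chmod -R g+w /srv/svn/repos/my-repo
        find /srv/svn/repos/my-repo -type d -exec chmod g+s {} \;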

    Read the article

  • Install GD Library for PHP on a GoDaddy VPS with CentOS?

    - by Arun David
    When I try to install the php-gd library on my GoDaddy VPS with CentOS, it gives:

        $ yum install php-gd
        Loaded plugins: fastestmirror
        Determining fastest mirrors
        addons                                   | 951 B      00:00
        base                                     | 2.1 kB     00:00
        extras                                   | 2.1 kB     00:00
        update                                   | 1.9 kB     00:00
        Excluding Packages in global exclude list
        Finished
        Setting up Install Process
        No package php-gd available.
        Nothing to do
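
    The line "Excluding Packages in global exclude list" is the clue: hosting images often ship an exclude=php* line in /etc/yum.conf. An untested sketch to confirm and bypass it for one transaction (--disableexcludes is a standard yum option):

        grep '^exclude' /etc/yum.conf
        yum --disableexcludes=main install php-gd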

    Read the article

  • How to recover broken MJPEG AVI file after a camera crash

    - by Jam K
    I was recording a video when my KODAK EASYSHARE M531 Digital Camera fell to the ground and turned off. When I turned it on, nothing had happened to the camera, but the video file is somehow broken. Obviously the camera did not have time to finalize and close it properly.

        $ file myvid.avi
        myvid.AVI: data

    where it should be:

        another_vid.AVI: RIFF (little-endian) data, AVI, 640 x 480, ~30 fps, video: Motion JPEG, audio: uLaw (stereo, 12000 Hz)

    I would like to fix my video, which I believe needs its AVI container rebuilt. Any idea? VLC does not play the file and returns:

        could not determine type of stream
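
    If the MJPEG frames themselves survived the crash, one untested recovery attempt is to have mencoder rebuild the AVI index while copying the audio and video streams unchanged:

        mencoder -idx myvid.avi -ovc copy -oac copy -o myvid-fixed.avi
        # if that fails, force a complete index rebuild:
        mencoder -forceidx myvid.avi -ovc copy -oac copy -o myvid-fixed.avi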

    Read the article

  • How to prevent samba from holding a file lock after a client disconnects?

    - by Jean-Francois Chevrette
    Here I have a Samba server (Debian 5.0) that is configured to host Windows XP profiles. Clients connect to this server and work on their profiles directly on the Samba share (the profile is not copied locally). Every now and then, a client may not shut down properly, and thus Windows does not free the file locks. When looking at the Samba locking table, we can see that many files are still locked even though the client is not connected anymore. In our case, this seems to occur with lockfiles created by Mozilla Thunderbird and Firefox. Here's an example of the Samba locking table:

        # smbstatus -L | grep DENY_ALL | head -n5
        Pid    Uid    DenyMode  Access   R/W   Oplock           SharePath         Name                                       Time
        --------------------------------------------------------------------------------------------------
        15494  10345  DENY_ALL  0x3019f  RDWR  EXCLUSIVE+BATCH  /home/CORP/user1  app.profile/user1.thunderbird/parent.lock  Mon Nov 22 07:12:45 2010
        18040  10454  DENY_ALL  0x3019f  RDWR  EXCLUSIVE+BATCH  /home/CORP/user2  app.profile/user2.thunderbird/parent.lock  Mon Nov 22 11:20:45 2010
        26466  10056  DENY_ALL  0x3019f  RDWR  EXCLUSIVE+BATCH  /home/CORP/user3  app.profile/user3.firefox/parent.lock      Mon Nov 22 08:48:23 2010

    We can see that the files were opened by Windows with a DENY_ALL lock imposed. Now when a client reconnects to this share and tries to open those files, Samba says that they are locked and denies access. Is there any way to work around this situation, or am I missing something? Edit: we would like to avoid disabling file locks on the Samba server, because there are good reasons to have those enabled.
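
    Two smb.conf settings that may help (a sketch, untested here, though both are documented Samba options): keepalive makes smbd probe clients and drop dead sessions, which releases their locks, and reset on zero vc tears down a crashed client's stale connections when the same machine reconnects:

        [global]
            keepalive = 60
            reset on zero vc = yes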

    Read the article

  • DVD ROM is not working

    - by Cyril N.
    (Note: I don't know which StackExchange site to put this question on; I'll thank the moderator who moves it to a more appropriate place, if there is an S.E. available for my question.) I have a DVD RW drive that is listed in the BIOS, and if no disc is inserted, it is also present in the "My Computer" of my Fedora 16. But when I put a disc in it, the icon disappears from "My Computer" and I cannot do anything with it (like erasing a RW disc). I'd like to boot a Fedora 17 Live CD image. I burned it on another computer, but when I try to boot from it, nothing happens and I'm dropped back into GRUB on my hard disk. The command cdrecord -scanbus shows this:

        wodim: Warning: controller returns wrong size for CD capabilities page.
        wodim: Cannot get CD capabilities data.
        6,1,0   601) 'HD-DT%ST' 'DVD%RAM G@22NP20' '1&04' Removable CD-ROM

    And when I try to mount the disc manually, I get this error:

        mount: block device /dev/sr0 is write-protected, mounting read-only
        mount: /dev/sr0: can't read superblock

    Here's a paste of dmesg | grep sr0:

        [    5.161265] sr0: scsi-1 drive
        [    5.161621] sr 6:0:1:0: Attached scsi CD-ROM sr0
        [  834.545978] sr0: Hmm, seems the drive doesn't support multisession CD's
        [  841.731194] sr0: CDROM (ioctl) error, command: Get configuration 46 00 00 00 00 00 00 00 20 00
        [  842.021640] sr 6:0:1:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [  842.021652] sr 6:0:1:0: [sr0] Sense Key : Aborted Command [current]
        [  842.021662] sr 6:0:1:0: [sr0] Add. Sense: No additional sense information
        [  842.021672] sr 6:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00
        [  842.021688] end_request: I/O error, dev sr0, sector 0
        [  842.021697] Buffer I/O error on device sr0, logical block 0
        [  842.023715] sr 6:0:1:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [  843.048203] sr 6:0:1:0: [sr0] Sense Key : Aborted Command [current]
        [  843.048211] sr 6:0:1:0: [sr0] Add. Sense: No additional sense information
        [  843.048219] sr 6:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 01 00
        [  843.048234] end_request: I/O error, dev sr0, sector 0
        [  843.048274] EXT4-fs (sr0): unable to read superblock
        [  843.063155] sr0: CDROM (ioctl) error, command: Get configuration 46 00 00 00 00 00 00 00 20 00
        [  843.075904] sr0: CDROM (ioctl) error, command: Get configuration 46 00 00 00 00 00 00 00 20 00
        [  843.220512] sr 6:0:1:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [  843.220522] sr 6:0:1:0: [sr0] Sense Key : Aborted Command [current]
        [  843.220530] sr 6:0:1:0: [sr0] Add. Sense: No additional sense information
        [  843.220538] sr 6:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 01 00
        [  843.220553] end_request: I/O error, dev sr0, sector 0
        [  843.220609] FAT-fs (sr0): unable to read boot sector

    The lines from "Sense Key" (line 6) to "DRIVER_SENSE" (line 11) repeat a lot. I then swapped the DVD drive for a spare one I had, and the disc still didn't boot. I then changed the IDE cable, but still no success. What can I do to make it work? Thanks for your help.

    Read the article

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot (up to 60 TB) of big files (usually 30 to 40 GB each) to tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, twice for tar'ing) is more or less a necessity to achieve very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I'd need some way to read a file, feeding a checksumming tool on one side and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax, and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
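
    One untested idea that keeps a single disk read per file: tee the tar stream into a second tar that unpacks it on the fly, handing each member to a checksum helper via GNU tar's --to-command (which runs the command once per member, with the content on stdin and the name in $TAR_FILENAME). The helper path is an example; process substitution requires bash:

        # helper script (example path): prints "<md5>  <member name>"
        cat > /usr/local/bin/md5member <<'EOF'
        #!/bin/sh
        sum=$(md5sum); echo "${sum%% *}  $TAR_FILENAME"
        EOF
        chmod +x /usr/local/bin/md5member

        # each file is read from disk once; the second tar parses the stream in parallel
        tar cf - files \
            | tee >(tar xf - --to-command=/usr/local/bin/md5member > checksums.md5) \
            > tarfile.tar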

    Read the article

  • Passing PATH through sudo

    - by whitequark
    In short: how do I make sudo not flush PATH every time? I have some websites deployed on my server (Debian testing) written with Ruby on Rails. I use Mongrel+Nginx to host them, but there is one problem that comes up when I need to restart Mongrel (e.g. after making some changes). All sites are checked into VCS (git, but that is not important) and have owner and group set to my user, whereas Mongrel runs under the, huh, mongrel user, which is severely restricted in its rights. So Mongrel must be started as root (it can automatically change UID) or as mongrel. To manage Mongrel I use the mongrel_cluster gem, because it allows starting or stopping any number of Mongrel servers with just one command. But it needs the directory /var/lib/gems/1.8/bin to be in PATH: it is not enough to start it via an absolute path. Modifying PATH in root's .bashrc changed nothing, and tweaking sudo's env_reset and keepenv didn't either. So the question: how do I add a directory to PATH, or keep the user's PATH, under sudo?
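
    A sketch of the usual mechanism (untested): on Debian, sudo enforces a secure_path that replaces the caller's PATH regardless of env_reset, so the directory has to be added there via visudo:

        # run visudo, then set, for example:
        Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/gems/1.8/bin"
        # or, if secure_path is not enforced, keep the caller's PATH instead:
        Defaults env_keep += "PATH"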

    Read the article

  • Installing GNU scientific library and linking to programme

    - by jack
    I am trying to install a statistical program which requires the GNU Scientific Library (GSL). I have successfully installed GSL through the yum command, but my statistical program gives an error when I try to run make install. I think there is a linking problem. How can I solve it?

        $ sudo yum install gsl.x86_64
        Installed:
          gsl.x86_64 0:1.15-3.fc16
        Dependency Installed:
          atlas.x86_64 0:3.8.4-1.fc16
        $ tar -xvzf prog.tgz
        $ cd prog
        $ make
        gcc -O3 -Wall -Wshadow -pedantic -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -DVER32 -I/opt/local/include/ -L/opt/local/lib/ -c -o prog.o prog.c
        In file included from prog.c:16:0:
        prog.h:7:30: fatal error: gsl/gsl_sf_gamma.h: No such file or directory
        compilation terminated.
        make: *** [prog.o] Error 1
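
    A hedged diagnosis: this is a missing header at compile time rather than a link failure, and on Fedora the GSL headers ship in the separate gsl-devel package; the Makefile's /opt/local paths also look like they were written for MacPorts. An untested sketch (Makefile variable names vary):

        sudo yum install gsl-devel        # provides /usr/include/gsl/gsl_sf_gamma.h
        gsl-config --cflags --libs        # prints the correct -I/-L/-l flags
        make CFLAGS="$(gsl-config --cflags)" LDFLAGS="$(gsl-config --libs)"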

    Read the article

  • unable to run OpenOffice from text mode

    - by dilip
    I have installed OpenOffice on Red Hat 5 in text mode. When I tried to start OpenOffice using the command:

        sudo /usr/lib/openoffice/program/soffice "-accept=socket,host=localhost,port=8100;urp;StarOffice.ServiceManager" -nologo -headless -nofirststartwizard

    it showed an error saying:

        javaldx: Could not find a Java Runtime Environment!

    So I installed a JRE on my system, and then I did not get any error, but OpenOffice does not start. I also checked the process list, but I haven't seen any process related to OpenOffice. Can anyone help me fix this issue?
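
    A hedged check (untested): a headless soffice can exit silently when it cannot create its user profile, which is easy to hit under sudo; forcing a scratch profile makes that visible (-env:UserInstallation is a standard OpenOffice switch). Then confirm the process is actually listening:

        sudo /usr/lib/openoffice/program/soffice \
            -env:UserInstallation=file:///tmp/ooo-profile \
            "-accept=socket,host=localhost,port=8100;urp;" \
            -nologo -headless -nofirststartwizard &
        netstat -tlnp | grep 8100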

    Read the article

  • Motion - can't get streaming working from a webcam

    - by Emmanuel Brunet
    I'm trying to record a video stream from my Tenvis IP camera with motion 3.2.12 on Debian 7.5. I used the standard Debian package, installed with sudo apt-get install motion. Assume my camera's DNS name is webcam, with user admin and password password. Below are my settings in /etc/motion/motion.conf:

        netcam_url http://webcam/videostream.cgi
        netcam_userpass admin:password
        target_dir /media/videos/log/motion
        # The mini-http server listens to this port for requests (default: 0 = disabled)
        webcam_port 1234
        ffmpeg_cap_new on
        ffmpeg_video_codec mpeg4
        output_motion off
        snapshot_interval 0
        # Quality of the jpeg (in percent) images produced (default: 50)
        webcam_quality 50
        # Output frames at 1 fps when no motion is detected and increase to the
        # rate given by webcam_maxrate when motion is detected (default: off)
        webcam_motion on
        # Maximum framerate for webcam streams (default: 1)
        webcam_maxrate 15
        # Restrict webcam connections to localhost only (default: on)
        webcam_localhost on
        # Limits the number of images per connection (default: 0 = unlimited)
        # Number can be defined by multiplying actual webcam rate by desired number of seconds
        # Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate
        webcam_limit 0
        control_port 8080
        control_authentication admin:password

    Issue #1: when I try to display http://localhost:1234, the browser looks frozen; no HTTP 404 is received, but it seems to keep waiting for data.

    Issue #2: in the output directory, motion writes a lot of jpeg snapshots, although I just want several sequenced video files.

    I run motion in interactive mode in a terminal; here is the output:

        root@mercure:/etc/motion# motion -c motion-1.0.conf
        [0] Processing thread 0 - config file motion-1.0.conf
        [0] Motion 3.2.12 Started
        [0] ffmpeg LIBAVCODEC_BUILD 3482368 LIBAVFORMAT_BUILD 3478785
        [0] Thread 1 is from motion-1.0.conf
        [0] motion-httpd/3.2.12 running, accepting connections
        [0] motion-httpd: waiting for data on port TCP 8080
        [1] Thread 1 started
        [1] Resizing pre_capture buffer to 1 items
        [1] Started stream webcam server in port 1234
        [1] avcodec_open - could not open codec: Operation now in progress
        [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165303.avi]: Operation now in progress
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603165303-01.jpg
        [1] Thread exiting
        [1] Calling vid_close() from motion_cleanup
        [1] vid_close: calling netcam_cleanup
        [1] netcam camera handler: finish set, exiting
        [0] Motion thread 1 restart
        [1] Thread 1 started
        [1] Resizing pre_capture buffer to 1 items
        [1] Started stream webcam server in port 1234
        [1] avcodec_open - could not open codec: Resource temporarily unavailable
        [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165329.avi]: Resource temporarily unavailable
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603165329-00.jpg
        [1] Thread exiting
        [1] Calling vid_close() from motion_cleanup
        [1] vid_close: calling netcam_cleanup
        [1] netcam camera handler: finish set, exiting
        [0] Motion thread 1 restart
        [1] Thread 1 started
        [1] Resizing pre_capture buffer to 1 items
        [1] Started stream webcam server in port 1234
        [1] avcodec_open - could not open codec: Connection reset by peer
        [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165355.avi]: Connection reset by peer
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603165355-06.jpg
        [1] Thread exiting
        [1] Calling vid_close() from motion_cleanup
        [1] vid_close: calling netcam_cleanup
        [0] httpd - Finishing
        [0] httpd Closing
        [0] httpd thread exit
        [1] netcam camera handler: finish set, exiting
        [0] Motion thread 1 restart
        [1] Thread 1 started
        [1] Resizing pre_capture buffer to 1 items
        [1] Started stream webcam server in port 1234

    It doesn't find the codec:

        avcodec_open - could not open codec: Operation now in progress

    I've changed ffmpeg_video_codec from mpeg4 to swf and the result is the same. When using the flv format, motion writes a lot of .jpg images, but I still can't see anything at http://localhost:1234:

        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-00.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-01.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-02.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-03.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-04.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-05.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-06.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-00.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-01.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-02.jpg

    Any idea how to get the video stream recorded on my local disk?
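
    Two hedged suggestions for motion 3.2 (both are stock configuration options, untested against this setup): the stray stills come from output_normal, which defaults to on and is independent of output_motion, and the avcodec_open failures sometimes disappear with a different ffmpeg codec:

        # in /etc/motion/motion.conf
        output_normal off            # stop writing a jpeg for every captured frame
        ffmpeg_video_codec msmpeg4   # alternatives to try: swf, flv, ffv1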

    Read the article

  • collectd: Monitoring server not showing clients

    - by Quintin Par
    I have set up a monitoring server with the following setup:

        <Plugin network>
          Listen "0.0.0.0" "25826"
        </Plugin>

    Now my clients are sending data to the monitoring server (verified through tcpdump). The collection folder also shows that the data is being dumped to /var/lib/collectd/rrd:

        [ec2-user@x rrd]$ ll
        total 4
        drwxr-xr-x 11 root root 4096 Nov 20 17:53 x-web-1.y.com

    I have also verified with find . -mmin 1 that it is being constantly updated:

        [ec2-user@x rrd]$ find . -mmin 1
        ./x-web-1.y.com/interface-eth0/if_errors.rrd
        ./x-web-1.y.com/interface-eth0/if_packets.rrd
        ./x-web-1.y.com/interface-eth0/if_octets.rrd
        ./x-web-1.y.com/disk-xvda1/disk_time.rrd
        ./x-web-1.y.com/disk-xvda1/disk_ops.rrd
        ./x-web-1.y.com/disk-xvda1/disk_octets.rrd
        ./x-web-1.y.com/disk-xvda1/disk_merged.rrd

    But when I look it up through collectd-web, I don't see the clients. What might be wrong in my setup?
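
    A hedged place to look (the paths below are common defaults, not verified against this install): the web frontend keeps its own notion of where the RRD files live, and it must match the directory the daemon actually writes:

        grep -ri datadir /etc/collectd/collection.conf /usr/share/collectd-web 2>/dev/null
        # the frontend's DataDir should point at /var/lib/collectd/rrd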

    Read the article

  • No architecture vs architecture-specific binaries

    - by Aaron
    From what I understand, the noarch suffix means that it's architecture independent and should work universally. If this is the case, why should I install architecture-specific packages at all? Why not just go straight for the noarch? Are there optimizations in the x86 or x64 binaries that aren't found in the noarch binaries? What's best for high performance applications? Folding@Home does this with their controller:

    Read the article
