Search Results

Search found 24623 results on 985 pages for 'linux'.

  • ProFTPd server on Ubuntu getting access denied message when successfully authenticated?

    - by exxoid
    I have an Ubuntu box with a ProFTPD 1.3.4a server. When I try to log in via my FTP client I cannot do anything, as it does not allow me to list directories; I have tried logging in as root and as a regular user, and tried accessing different paths within the FTP server. The error I get in my FTP client is:

        Status: Retrieving directory listing...
        Command: CDUP
        Response: 250 CDUP command successful
        Command: PWD
        Response: 257 "/var" is the current directory
        Command: PASV
        Response: 227 Entering Passive Mode (172,16,4,22,237,205).
        Command: MLSD
        Response: 550 Access is denied.
        Error: Failed to retrieve directory listing

    Any idea? Here is the config of my proftpd:

        #
        # /etc/proftpd/proftpd.conf -- This is a basic ProFTPD configuration file.
        # To really apply changes, reload proftpd after modifications, if
        # it runs in daemon mode. It is not required in inetd/xinetd mode.
        #
        # Includes DSO modules
        Include /etc/proftpd/modules.conf
        # Set off to disable IPv6 support which is annoying on IPv4 only boxes.
        UseIPv6 off
        # If set on you can experience a longer connection delay in many cases.
        IdentLookups off
        ServerName "Drupal Intranet"
        ServerType standalone
        ServerIdent on "FTP Server ready"
        DeferWelcome on
        # Set the user and group that the server runs as
        User nobody
        Group nogroup
        MultilineRFC2228 on
        DefaultServer on
        ShowSymlinks on
        TimeoutNoTransfer 600
        TimeoutStalled 600
        TimeoutIdle 1200
        DisplayLogin welcome.msg
        DisplayChdir .message true
        ListOptions "-l"
        DenyFilter \*.*/
        # Use this to jail all users in their homes
        # DefaultRoot ~
        # Users require a valid shell listed in /etc/shells to login.
        # Use this directive to release that constrain.
        # RequireValidShell off
        # Port 21 is the standard FTP port.
        Port 21
        # In some cases you have to specify passive ports range to by-pass
        # firewall limitations. Ephemeral ports can be used for that, but
        # feel free to use a more narrow range.
        # PassivePorts 49152 65534
        # If your host was NATted, this option is useful in order to
        # allow passive tranfers to work. You have to use your public
        # address and opening the passive ports used on your firewall as well.
        # MasqueradeAddress 1.2.3.4
        # This is useful for masquerading address with dynamic IPs:
        # refresh any configured MasqueradeAddress directives every 8 hours
        <IfModule mod_dynmasq.c>
        # DynMasqRefresh 28800
        </IfModule>
        # To prevent DoS attacks, set the maximum number of child processes
        # to 30. If you need to allow more than 30 concurrent connections
        # at once, simply increase this value. Note that this ONLY works
        # in standalone mode, in inetd mode you should use an inetd server
        # that allows you to limit maximum number of processes per service
        # (such as xinetd)
        MaxInstances 30
        # Set the user and group that the server normally runs at.
        # Umask 022 is a good standard umask to prevent new files and dirs
        # (second parm) from being group and world writable.
        Umask 022 022
        # Normally, we want files to be overwriteable.
        AllowOverwrite on
        # Uncomment this if you are using NIS or LDAP via NSS to retrieve passwords:
        # PersistentPasswd off
        # This is required to use both PAM-based authentication and local passwords
        AuthPAMConfig proftpd
        AuthOrder mod_auth_pam.c* mod_auth_unix.c
        # Be warned: use of this directive impacts CPU average load!
        # Uncomment this if you like to see progress and transfer rate with ftpwho
        # in downloads. That is not needed for uploads rates.
        # UseSendFile off
        TransferLog /var/log/proftpd/xferlog
        SystemLog /var/log/proftpd/proftpd.log
        # Logging onto /var/log/lastlog is enabled but set to off by default
        #UseLastlog on
        # In order to keep log file dates consistent after chroot, use timezone info
        # from /etc/localtime. If this is not set, and proftpd is configured to
        # chroot (e.g. DefaultRoot or <Anonymous>), it will use the non-daylight
        # savings timezone regardless of whether DST is in effect.
        #SetEnv TZ :/etc/localtime
        <IfModule mod_quotatab.c>
        QuotaEngine off
        </IfModule>
        <IfModule mod_ratio.c>
        Ratios off
        </IfModule>
        # Delay engine reduces impact of the so-called Timing Attack described in
        # http://www.securityfocus.com/bid/11430/discuss
        # It is on by default.
        <IfModule mod_delay.c>
        DelayEngine on
        </IfModule>
        <IfModule mod_ctrls.c>
        ControlsEngine off
        ControlsMaxClients 2
        ControlsLog /var/log/proftpd/controls.log
        ControlsInterval 5
        ControlsSocket /var/run/proftpd/proftpd.sock
        </IfModule>
        <IfModule mod_ctrls_admin.c>
        AdminControlsEngine off
        </IfModule>
        #
        # Alternative authentication frameworks
        #
        #Include /etc/proftpd/ldap.conf
        #Include /etc/proftpd/sql.conf
        #
        # This is used for FTPS connections
        #
        #Include /etc/proftpd/tls.conf
        #
        # Useful to keep VirtualHost/VirtualRoot directives separated
        #
        #Include /etc/proftpd/virtuals.con
        # A basic anonymous configuration, no upload directories.
        # <Anonymous ~ftp>
        #   User ftp
        #   Group nogroup
        #   # We want clients to be able to login with "anonymous" as well as "ftp"
        #   UserAlias anonymous ftp
        #   # Cosmetic changes, all files belongs to ftp user
        #   DirFakeUser on ftp
        #   DirFakeGroup on ftp
        #
        #   RequireValidShell off
        #
        #   # Limit the maximum number of anonymous logins
        #   MaxClients 10
        #
        #   # We want 'welcome.msg' displayed at login, and '.message' displayed
        #   # in each newly chdired directory.
        #   DisplayLogin welcome.msg
        #   DisplayChdir .message
        #
        #   # Limit WRITE everywhere in the anonymous chroot
        #   <Directory *>
        #     <Limit WRITE>
        #       DenyAll
        #     </Limit>
        #   </Directory>
        #
        #   # Uncomment this if you're brave.
        #   # <Directory incoming>
        #   #   # Umask 022 is a good standard umask to prevent new files and dirs
        #   #   # (second parm) from being group and world writable.
        #   #   Umask 022 022
        #   #   <Limit READ WRITE>
        #   #     DenyAll
        #   #   </Limit>
        #   #   <Limit STOR>
        #   #     AllowAll
        #   #   </Limit>
        #   # </Directory>
        #
        # </Anonymous>
        # Include other custom configuration files
        Include /etc/proftpd/conf.d/
        UseReverseDNS off
        <Global>
        RootLogin on
        UseFtpUsers on
        ServerIdent on
        DefaultChdir /var/www
        DeleteAbortedStores on
        LoginPasswordPrompt on
        AccessGrantMsg "You have been authenticated successfully."
        </Global>

    Any idea what could be wrong? Thanks for your help!
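
    One diagnostic angle (a sketch, not a confirmed fix): the client fails on MLSD specifically, after CDUP/PWD/PASV all succeed, so the directives that shape listings in the config above — ListOptions "-l" and DenyFilter \*.*/ — are worth ruling out one at a time, with the server log open to see which check rejects the command:

        # Watch the ProFTPD system log (path taken from the config above)
        tail -f /var/log/proftpd/proftpd.log &
        # Reproduce with a client that issues a plain LIST instead of MLSD
        # (e.g. the command-line ftp client); if LIST works, the problem is
        # MLSD-specific rather than filesystem permissions
        ftp localhost
        # Then comment out ListOptions / DenyFilter one at a time and reload
        /etc/init.d/proftpd reload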

  • Connect to SVN through Eclipse on Ubuntu

    - by Gene R
    We have a Subversion server running on an internal server. I'm trying to connect to it through Subclipse (Eclipse) on Ubuntu. When I enter the URL svn://servername/site/trunk, as I do from Windows, I get the following error:

        Error validating location: "org.tigris.subversion.javahl.ClientException: svn: Malformed network data"

    Anybody got any ideas?
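
    "Malformed network data" on a svn:// URL usually means the client is not actually talking to an svnserve process — a protocol mismatch (repository really served over HTTP), a port forwarded to the wrong service, or a JavaHL/SVNKit problem inside Subclipse. Testing the same URL with the command-line client takes Eclipse out of the picture; a quick check, assuming the svn CLI is installed:

        # If this fails the same way, the problem is the server/URL, not Subclipse
        svn ls svn://servername/site/trunk
        # Hypothetical alternative: if the repository is actually served over HTTP
        svn ls http://servername/svn/site/trunk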

  • What do I need to do to set my computer as Default Gateway?

    - by Vaibhav
    We are trying to put together a box with dual LAN cards (let's say Outer and Inner), where the Inner LAN card is supposed to act as a default gateway on the network it is connected to. This box is running Ubuntu. The basic purpose of this box is to take messages generated on the inner network, do some work with them, and forward them out the Outer LAN card to a server. The inner network is completely isolated, with simply a regular switch connecting the Inner LAN card with two other boxes. These other boxes either throw out multicast messages (which the Inner LAN card is listening to), or send out unicast messages meant for the server, which is not on this inner network. So we need the Inner LAN card to act as a default gateway, where these unicast messages will then be sent, and the code on the dual-LAN-card box can then intercept and forward these messages to the server. Questions:

    1. How do we set up the LAN card to be the default gateway (does it need some configuration on Ubuntu)?
    2. Once we have this set up, is it a simple matter of listening on the interface to intercept the incoming messages?

    Any help (pointers in the right direction) is appreciated. Thanks.
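
    For the gateway role, the box mostly needs a fixed address on the inner interface and IP forwarding enabled; the inner boxes then point their default route at that address. A sketch under assumed names (eth0 = Outer, eth1 = Inner, inner subnet 192.168.10.0/24 — adjust to your setup):

        # On the gateway box: give the inner NIC a static address
        sudo ip addr add 192.168.10.1/24 dev eth1
        # Enable routing between the two NICs (persist it via /etc/sysctl.conf)
        sudo sysctl -w net.ipv4.ip_forward=1
        # Optional: NAT traffic that leaves through the outer NIC
        sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        # On each inner box: send non-local traffic to the gateway
        sudo ip route add default via 192.168.10.1

    For interception, anything routed through the box can be observed with a packet capture on eth1 (tcpdump, or a pcap/raw socket in your forwarding code), since the unicast packets traverse this host on their way out.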

  • Start multiple instances of Firefox; Xephyr rootless mode

    - by Vi
    How can I have multiple independent instances of Mozilla Firefox 3.5 on the same X server, but started from different user accounts (and consequently with different profiles)? The only limited success I've had was with Xephyr :1 and DISPLAY=:1 /usr/local/bin/firefox, but Xephyr has no equivalent of Cygwin/X's "rootless" mode, so it's not comfortable. The idea is to have one Firefox instance for various "Serious Business" things and the other for regular browsing with dozens of add-ons, securely isolated.
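
    Without a nested X server, the usual ingredients are -no-remote (so a second instance doesn't hand itself off to the first) plus permission for the other account to use your display. A sketch, where "work" is a hypothetical second account and "business" a hypothetical profile name:

        # Allow the local user "work" to connect to the running X server
        xhost +SI:localuser:work
        # Start an independent instance with its own profile on the same display
        sudo -u work env DISPLAY=:0 firefox -no-remote -P business

    The windows then share one X server (no Xephyr), while profiles, caches, and add-ons stay separated by ordinary Unix file permissions.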

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive to tar files a lot (up to 60 TB) of big files (usually 30 to 40 GB each). I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, once for tar'ing) is more or less a necessity to achieve a very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I'd need some way to read a file, feeding a checksumming tool on one side, and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied GNU tar, pax, and star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
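
    One way to keep the single read is to notice that the tar stream already contains every file's bytes: tee the stream both to tape and to a helper that parses tar headers on the fly and hashes each member. The helper named here (tarsum.py) is hypothetical — a small tar-stream parser you would have to supply — but the pipeline shape is the point:

        # bash (process substitution); files are read from disk exactly once,
        # only the *stream* is duplicated
        tar cf - /path/to/files \
            | tee >(python tarsum.py > checksums.md5) \
            | dd of=/dev/nst0 bs=256k

    Whether an interpreted parser sustains 120 MB/s is worth measuring before trusting it in the backup window — hashing usually dominates, and the same stream-parsing idea ports directly to C if it doesn't.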

  • Permission denied error while importing to a remote repository via svn...

    - by Usman Ajmal
    Hi, I am importing my project to another machine on my LAN, into the directory /srv/svn/repos/my-repo, where my-repo was created via the svnadmin create option. The permissions of /srv/svn/repos/my-repo are:

        drwxr-xr-x 6 svn svn 4096 2010-04-19 17:30 my-repo

    I executed the following command to import the myProject files to my-repo on the remote system:

        sudo svn import -m "First import" myProject svn+ssh://[email protected]/srv/svn/repos/my-repo

    This command started 'Adding' files but gave the following error after 'Adding' 7 files:

        svn: Can't open file '/srv/svn/repos/baltoros-valgrind/db/txn-current-lock': Permission denied

    Any idea what's going on...? Thanks a lot
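
    The failure is server-side: over svn+ssh, the ssh login (not a separate svn daemon) must be able to write inside the repository's db/ directory, and my-repo is owned by svn:svn with no group write. (It is also worth noticing that the error names a different repository path — baltoros-valgrind — than the import URL, which may be a clue in itself.) A common arrangement is group-writable repositories; a sketch, where "svnuser" stands in for the redacted ssh login:

        # On the server: let members of the svn group write to the repository
        sudo chmod -R g+w /srv/svn/repos/my-repo/db
        # New files/dirs should inherit the group
        sudo find /srv/svn/repos/my-repo/db -type d -exec chmod g+s {} \;
        # Put the ssh login used in the svn+ssh URL into that group
        sudo usermod -aG svn svnuser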

  • Ubuntu: Unsupported Sources

    - by Scott
    What kind of software is available in that source tree? Also, what is the best way to see what kind of software is available in a repository? I have always been reluctant to enable it because I didn't want to wind up with an unstable system.
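
    Once a repository or component has been enabled and its index fetched with apt-get update, its contents can be listed straight from the index files without installing anything. A sketch (the "multiverse" pattern is an assumption — substitute whichever component you enabled):

        sudo apt-get update
        # List the package names the newly enabled component provides
        grep -h '^Package:' /var/lib/apt/lists/*multiverse*_Packages | sort -u | less

    Merely enabling a source installs nothing, so browsing this way carries no stability risk; instability only comes from the packages you then choose to install.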

  • FreeNX Server w/ nxagent 3.5 not able to create shadow sessions

    - by Jenna Whitehouse
    I am running a FreeNX server on Ubuntu 11.10 and am unable to do session shadowing. I get the authorization prompt, but the shadow client crashes after. The NX server log in the user's .nx directory is as follows:

        Error: Aborting session with 'Server is already active for display 3000
        If this server is no longer running, remove /tmp/.X3000-lock and start again'.
        Session: Aborting session at 'Mon Oct 1 14:26:44 2012'.
        Session: Session aborted at 'Mon Oct 1 14:26:44 2012'.

    This then deletes the lock file, which is the lock file for the initial Unix session, and crashes out. Everything works for a normal session, and shadowing works up to the authorization prompt. I am using this software:

        Ubuntu 11.10
        freenx-server 0.7.3.zgit.120322.977c28d-0~ppa11
        nx-common 0.7.3.zgit.120322.977c28d-0~ppa11
        nxagent 1:3.5.0-1-2-0ubuntu1ppa8
        nxlibs 1:3.5.0-1-2-0ubuntu1ppa8

    Any help is appreciated, thanks!

  • Sharing an HP Deskjet F380 using CUPS via HTTP: driver issues with XP client

    - by ageis23
    Hi, the problem is XP doesn't have built-in drivers for my printer, but Vista does. On Vista it works perfectly without any issues. However, when I try using XP, it insists that I select a driver from the selection XP offers by default. The drivers I've downloaded from HP don't support networking; HP have stated they're non-networkable. Is there anything I can do about this? Any help is greatly appreciated and would save me getting earache!

  • bash script - spawn, send, interact - commands not found error

    - by Sandeepan Nath
    In my shell script, I am trying to remove the password prompt for the scp command (as given in http://stackoverflow.com/questions/459182/using-expect-to-pass-a-password-to-ssh/459225#459225) and this is what I have so far:

        #!/usr/bin/expect
        spawn scp $DESTINATION_PATH/exam.tar $SSH_CREDENTIALS':/'$PROJECT_INSTALLATION_PATH
        expect "password:"
        send $sshPassword"\n";
        interact

    On running the script, I am getting the errors:

        spawn: command not found
        send: command not found
        interact: command not found

    I was also getting the error expect: command not found, then I realised the path to expect was not correct and expect was not installed at all. So I did yum install expect, corrected the path, and that error was gone. But I am still not able to remove the other three errors.
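
    Those three "command not found" messages are what a shell prints when it tries to run expect's commands itself, which usually means the script is being invoked as a shell script (sh script.sh or bash script.sh) rather than through the expect interpreter. Two other details matter: $DESTINATION_PATH-style shell variables aren't visible inside expect except through $env(...), and the script must be executed directly (or via expect) so the shebang takes effect. A corrected sketch — the environment variable names are assumptions carried over from the original:

        #!/usr/bin/expect -f
        # Run as: ./copy.exp   or: expect copy.exp   (not: sh copy.exp)
        # Shell environment variables are reached via $env(...)
        spawn scp $env(DESTINATION_PATH)/exam.tar $env(SSH_CREDENTIALS):/$env(PROJECT_INSTALLATION_PATH)
        expect "password:"
        send "$env(SSH_PASSWORD)\r"
        expect eof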

  • Setup git repository on gentoo server using gitosis & ssh

    - by ikso
    I installed git and gitosis as described in this guide. Here are the steps I took (server: Gentoo; client: Mac OS X):

    1) git install:

        emerge dev-util/git

    2) gitosis install:

        cd ~/src
        git clone git://eagain.net/gitosis.git
        cd gitosis
        python setup.py install

    3) Added git user:

        adduser --system --shell /bin/sh --comment 'git version control' --no-user-group --home-dir /home/git git

    In /etc/shadow now: git:!:14665::::::

    4) On the local computer (Mac OS X; local login is ipx, server login is expert):

        ssh-keygen -t dsa

    got 2 files: ~/.ssh/id_dsa.pub and ~/.ssh/id_dsa

    5) Copied id_dsa.pub onto the server as ~/.ssh/id_dsa.pub and added its content to the file ~/.ssh/authorized_keys:

        cp ~/.ssh/id_dsa.pub /tmp/id_dsa.pub
        sudo -H -u git gitosis-init < /tmp/id_rsa.pub
        sudo chmod 755 /home/git/repositories/gitosis-admin.git/hooks/post-update

    6) Added 2 params to /etc/ssh/sshd_config:

        RSAAuthentication yes
        PubkeyAuthentication yes

    Full sshd_config:

        Protocol 2
        RSAAuthentication yes
        PubkeyAuthentication yes
        PasswordAuthentication no
        UsePAM yes
        PrintMotd no
        PrintLastLog no
        Subsystem sftp /usr/lib64/misc/sftp-server

    7) Local settings in file ~/.ssh/config:

        Host myserver.com.ua
        User expert
        Port 22
        IdentityFile ~/.ssh/id_dsa

    8) Tested: ssh [email protected] -- done!

    9) Next step. There I have the problem:

        git clone [email protected]:gitosis-admin.git
        cd gitosis-admin

    SSH asked for a password for the user git. Why should ssh allow me to log in as the user git? The git user doesn't have a password. The ssh key I created is for the user expert. How is this supposed to work? Do I have to add some params to sshd_config?
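
    Two things stand out in the steps as written. First, step 4 generated ~/.ssh/id_dsa.pub, but step 5 fed gitosis-init the file /tmp/id_rsa.pub — if that file was stale or empty, gitosis never learned the real key. Second, ssh prompting for git's password is exactly what happens when public-key authentication fails and ssh falls back; with gitosis, the git account is only ever entered via the key's forced command. A way to check (a sketch):

        # Re-run gitosis-init with the key that was actually generated
        sudo -H -u git gitosis-init < /tmp/id_dsa.pub
        # Watch the negotiation; the server should accept the DSA key.
        # A working setup typically answers with a gitosis complaint about
        # SSH_ORIGINAL_COMMAND instead of a password prompt.
        ssh -v -i ~/.ssh/id_dsa git@myserver.com.ua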

  • postfix: Temporary lookup failure for FQDN

    - by Thufir
    I'm using the FQDN of dur.bounceme.net, which I want to resolve(?) to localhost. That is, I want mail to [email protected] to get delivered to user@localhost. I've tried following the Ubuntu guide on this and seem to be going in circles a bit.

        root@dur:~# postfix stop
        postfix/postfix-script: stopping the Postfix mail system
        root@dur:~# postfix start
        postfix/postfix-script: starting the Postfix mail system
        root@dur:~# telnet dur.bounceme.net 25
        Trying 127.0.1.1...
        telnet: Unable to connect to remote host: Connection refused
        root@dur:~# telnet localhost 25
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        220 dur.bounceme.net ESMTP Postfix (Ubuntu)
        ehlo dur
        250-dur.bounceme.net
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
        mail from:[email protected]
        250 2.1.0 Ok
        rcpt to:[email protected]
        451 4.3.0 <[email protected]>: Temporary lookup failure
        rcpt to:thufir@localhost
        451 4.3.0 <thufir@localhost>: Temporary lookup failure
        quit
        221 2.0.0 Bye
        Connection closed by foreign host.
        root@dur:~# grep telnet /var/log/mail.log
        Aug 28 00:24:45 dur postfix/smtpd[18256]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<dur>
        Aug 28 00:24:58 dur postfix/smtpd[18256]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<dur>
        Aug 28 00:54:55 dur postfix/smtpd[18825]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<dur>
        Aug 28 00:55:08 dur postfix/smtpd[18825]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<dur>
        root@dur:~# postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        default_transport = smtp
        home_mailbox = Maildir/
        inet_interfaces = loopback-only
        mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}"
        mailbox_size_limit = 0
        mailman_destination_recipient_limit = 1
        mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost
        myhostname = dur.bounceme.net
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        readme_directory = no
        recipient_delimiter = +
        relay_domains = lists.dur.bounceme.net
        relay_transport = relay
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/dovecot-auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_tls_auth_only = yes
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key
        smtpd_tls_mandatory_ciphers = medium
        smtpd_tls_mandatory_protocols = SSLv3, TLSv1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport
        root@dur:~#
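
    A "451 4.3.0 ... Temporary lookup failure" at RCPT time generally means Postfix could not open one of its lookup tables, not that the address is unknown. In the postconf output above, alias_maps points at hash:/var/lib/mailman/data/aliases as well as /etc/aliases; if either .db file is missing or unreadable, every local-recipient lookup errors out this way. A check along these lines (a sketch):

        # Rebuild /etc/aliases.db and confirm the mailman aliases db exists
        sudo newaliases
        ls -l /etc/aliases.db /var/lib/mailman/data/aliases.db
        # If mailman isn't actually set up yet, drop it from alias_maps for now
        sudo postconf -e 'alias_maps = hash:/etc/aliases'
        sudo postfix reload
        # Watch for the lookup error disappearing
        tail -f /var/log/mail.log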

  • Issues while installing NoMachine setup (NX)

    - by TopCoder
    I am trying to connect to the NX server from a Windows client, but it reports the following exception:

        NX 203 NXSSH running with pid: 5404
        NX 285 Enabling check on switch command
        NX 285 Enabling skip of SSH config files
        NX 285 Setting the preferred NX options
        NX 200 Connected to address: 10.43.51.77 on port: 22
        NX 202 Authenticating user: nx
        NX 208 Using auth method: publickey
        NX 204 Authentication failed.

    I have regenerated the default_dsa.key on the server and imported the same into the client, but it is still not working. Any solutions?

  • How to make Shared Keys .ssh/authorized_keys and sudo work together?

    - by farinspace
    I've set up .ssh/authorized_keys and am able to log in with the new "user" using the pub/private key... I have also added "user" to the sudoers list. The problem I have now is that when I try to execute a sudo command, something simple like:

        $ sudo cd /root

    it will prompt me for my password, which I enter, but it doesn't work (I am using the private key password I set). Also, I've disabled the user's password using:

        $ passwd -l user

    What am I missing? Somewhere my initial remarks are being misunderstood... I am trying to harden my system; the ultimate goal is to use pub/private keys to do logins versus simple password authentication. I've figured out how to set all that up via the authorized_keys file. Additionally, I will ultimately prevent server logins through the root account. But before I do that, I need sudo to work for a second user (the user which I will be logging into the system with all the time). For this second user I want to prevent regular password logins and force only pub/private key logins; if I don't lock the user via passwd -l user, then if I don't use a key, I can still get into the server with a regular password. But more importantly, I need to get sudo to work with a pub/private key setup with a user whose password has been disabled.

    Edit: OK, I think I've got it (the solution):

    1) I've adjusted /etc/ssh/sshd_config and set PasswordAuthentication no. This will prevent ssh password logins (be sure to have a working public/private key setup prior to doing this).

    2) I've adjusted the sudoers list via visudo and added:

        root  ALL=(ALL) ALL
        dimas ALL=(ALL) NOPASSWD: ALL

    3) root is the only user account that will have a password. I am testing with two user accounts, "dimas" and "sherry", which do not have a password set (passwords are blank, passwd -d user).

    The above essentially prevents everyone from logging into the system with passwords (a public/private key must be set up). Additionally, users in the sudoers list have admin abilities. They can also su to different accounts. So basically "dimas" can sudo su sherry, however "dimas" can NOT do su sherry. Similarly, any user NOT in the sudoers list can NOT do su user or sudo su user.

    NOTE: The above works but is considered poor security. Any script that is able to run code as the "dimas" or "sherry" users will be able to execute sudo to gain root access. A bug in ssh that allows remote users to log in despite the settings, a remote code execution in something like Firefox, or any other flaw that allows unwanted code to run as the user will now be able to run as root. Sudo should always require a password, or you may as well log in as root instead of some other user.

  • Setting the Timezone with an automated script

    - by Tom
    I'm writing scripts to automate setting up new slicehost installations. In a perfect world, after I started the script, it would just run, with no attention from me. I have succeeded, with one exception. How do I set the timezone, in a permanent (survive reboot) and sane (adjust for standard and daylight savings time, so no just forcing the date) manner that doesn't require input from me? Currently, I'm using:

        dpkg-reconfigure tzdata

    This doesn't seem to have any way to force parameters into it. It demands user input.

    EDIT: I'm editing here, rather than commenting, since comments don't seem to allow code blocks. Here's the actual code I ended up with, based on Rudedog's comment below. I also noticed that this doesn't update /etc/timezone. I'm not certain who uses that, but in case anybody does, I'm setting that too.

        TIMEZONE="America/Los_Angeles"
        echo $TIMEZONE > /etc/timezone
        cp /usr/share/zoneinfo/${TIMEZONE} /etc/localtime   # This sets the time
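
    For the fully unattended case, dpkg-reconfigure can be told not to prompt at all: on Debian/Ubuntu, run with the noninteractive frontend it takes the zone from /etc/timezone, which keeps the change permanent and DST-aware. A sketch of the same fragment using that route (and a symlink rather than a copy, so /etc/localtime keeps tracking future tzdata package updates):

        TIMEZONE="America/Los_Angeles"
        echo "$TIMEZONE" > /etc/timezone
        dpkg-reconfigure -f noninteractive tzdata
        # Manual alternative: link rather than copy the zone file
        # ln -sf /usr/share/zoneinfo/${TIMEZONE} /etc/localtime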

  • Find a process by name and kill it

    - by Fabiano PS
    So, I want to send a kill to a process whose name I know:

        ps -ef | grep '_rails master'
        root      2388     1  0 19:46 ?        00:00:04 unicorn_rails master -c /web/hero/config/unicorn.rb -E production -D
        root      2582  2172  0 20:28 pts/0    00:00:00 grep --color=auto _rails master

    It is unicorn_rails master [..]. How do I kill it? So far I have tried sed and expr, but I can't pass the result as a param to kill.
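
    pgrep and pkill exist for exactly this: they match processes by name or, with -f, by full command line, so there is no ps | grep pipeline to post-process (and no risk of matching the grep itself). A sketch:

        # Show the PID(s) whose command line contains the pattern
        pgrep -f 'unicorn_rails master'
        # Kill them in one step (TERM by default)
        sudo pkill -f 'unicorn_rails master'
        # Unicorn masters conventionally take QUIT for a graceful shutdown
        sudo kill -QUIT $(pgrep -f 'unicorn_rails master')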

  • OpenAFS on Fedora/CentOS

    - by Michael Pliskin
    I am trying to see if OpenAFS fits my needs as a distributed filesystem and am a bit stuck. There are docs, but they're all quite hard to understand, so I'm asking for some expert advice here. My questions:

    - Which version should I install? I need Windows client support, so I need 1.5 - right? But it is not stable... or is it? And I don't see any pre-built rpms for it, so compile from sources?
    - I tried to compile and it worked, but it created a non-"mp" kernel module while my kernel needs an mp one - how do I work around that?
    - Do I really need a new fresh partition to start with, or can I re-use an existing one and just make it available via AFS?
    - Any nice HOWTOs around?

  • Adding a jar file to CLASSPATH is still not executable

    - by Simon O'Hanlon
    Perhaps I just don't understand how the CLASSPATH environment variable works when trying to find .jar files on your system. I thought that if you specified it, you could launch .jar files with java in much the same way that you can launch executables that are on your PATH. I have an executable java archive (.jar file) on my system that I put in /usr/local/bin/gatk/. I added this to my CLASSPATH via:

        export CLASSPATH=/usr/local/bin/gatk/GenomeAnalysisTK.jar

    I thought this would make the .jar file visible to my JVM. When I try to invoke it with:

        java -jar GenomeAnalysisTK.jar
        # Error: Unable to access jarfile .gatk/GenomeAnalysisTK.jar

    I can invoke it using the absolute path, e.g. java -jar /usr/local/bin/gatk/GenomeAnalysisTK.jar, however I'd rather not type the full path each time. I have read many of the linked tutorials, but somehow I don't seem to be getting this right and I can't understand what I am doing wrong.
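
    The catch is that java -jar deliberately ignores CLASSPATH: with -jar, the class path is taken solely from the jar's own manifest, so no environment variable can make a relative jar name resolvable. CLASSPATH matters for java -cp ... SomeClass invocations, not for launching jars. What gives the PATH-like convenience is a wrapper, e.g. in ~/.bashrc (the alias name "gatk" is just an illustrative choice):

        # Shorthand that always expands to the absolute jar path
        alias gatk='java -jar /usr/local/bin/gatk/GenomeAnalysisTK.jar'

    After re-sourcing the file, gatk --help runs the archive from any directory.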

  • How does Kerberos work with SSH?

    - by Phil
    Suppose I have four computers: Laptop (L), Server1 (S1), Server2 (S2), and a Kerberos server (K):

    - I log in using PuTTY or SSH from L to S1, giving my username / password.
    - From S1 I then SSH to S2. No password is needed, as Kerberos authenticates me.

    Describe all the important SSH and KRB5 protocol exchanges: "L sends username to S1", "K sends ... to S1", etc. (This question is intended to be community-edited; please improve it for the non-expert reader.)
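
    As a concrete handle on the client side of those exchanges: the L-to-S1 hop obtains a ticket-granting ticket from K, ssh then requests a service ticket for host/S1, and with credential delegation S1 receives a forwarded TGT it can use to repeat the same dance toward S2 — which is why no password appears on the second hop. A sketch (the realm and principal are made-up placeholders):

        # On L: obtain a TGT from the Kerberos server
        kinit phil@EXAMPLE.COM
        # Authenticate to S1 via GSSAPI and forward (delegate) credentials
        ssh -o GSSAPIAuthentication=yes -o GSSAPIDelegateCredentials=yes s1
        # On S1: the delegated ticket is present, so the next hop needs no password
        klist
        ssh s2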

  • Installing Midnight Commander from sources (no root privileges)

    - by ouroboros
    I tried to configure and build glib-2.26.1/:

        ./configure --prefix=/localfolder
        make
        make install

    but it fails at the make stage. Trying to configure mc-4.6.1/ and make obviously doesn't work either. What are the steps I need to take in order to install Midnight Commander for my local user in a custom folder? make for glib gives me these errors:

        /usr/bin/msgfmt: found 2 fatal errors
        cp: cannot stat `test.mo': No such file or directory
        gmake[4]: *** [test.mo] Error 1
        gmake[4]: Leaving directory `/remote/folder/mc/glib-2.26.1/gio/tests'
        gmake[3]: *** [all-recursive] Error 1
        gmake[3]: Leaving directory `/remote/folder/mc/glib-2.26.1/gio'
        gmake[2]: *** [all] Error 2
        gmake[2]: Leaving directory `/remote/folder/mc/glib-2.26.1/gio'
        gmake[1]: *** [all-recursive] Error 1
        gmake[1]: Leaving directory `/remote/folder/mc/glib-2.26.1'
        gmake: *** [all] Error 2
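
    The usual shape for a no-root install is one shared prefix, with PKG_CONFIG_PATH pointing into it so mc's configure can find the locally built glib. A sketch assuming ~/local as the prefix (note the glib failure above is msgfmt dying, so a broken or ancient gettext on the build host needs sorting out first):

        PREFIX=$HOME/local
        cd glib-2.26.1
        ./configure --prefix=$PREFIX && make && make install
        cd ../mc-4.6.1
        export PKG_CONFIG_PATH=$PREFIX/lib/pkgconfig
        export LD_LIBRARY_PATH=$PREFIX/lib
        ./configure --prefix=$PREFIX && make && make install
        # Run the result from the custom prefix
        $PREFIX/bin/mc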

  • bash code in rc.local not executing after bootup

    - by mrTomahawk
    Does anyone know why a system would not execute the script code within rc.local on bootup? I have a post-configuration bash script that I want to run after the initial install of VMware ESX (Red Hat), and for some reason it doesn't seem to execute. I set it up to log its start of execution and even its progress, so that I can see how far it gets in case it fails at some point, but when I look at that log, I find it didn't even start executing the script code. I already checked that the script has execute permissions (755); what else should I be looking at? Here are the first few lines of my code:

        #!/bin/sh
        echo >> /tmp/configLog ""
        echo >> /tmp/configLog "Entering maintenance mode"
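
    On Red Hat-style systems (which ESX's service console resembles), /etc/rc.local is a symlink to /etc/rc.d/rc.local, and it only fires if the real file is executable and the runlevel's S99local link exists — so those two things, plus simply running the file by hand, are the first checks. A sketch:

        # The execute bit must be on the real file, not just the symlink
        ls -l /etc/rc.local /etc/rc.d/rc.local
        chmod +x /etc/rc.d/rc.local
        # Confirm the runlevel hook that triggers it is present
        ls /etc/rc3.d/ | grep -i local
        # Sanity check: does it work when run directly?
        /etc/rc.d/rc.local && cat /tmp/configLog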

  • lighttpd: weird behavior on multiple rewrite rule matches

    - by netmikey
    I have a 20-rewrite.conf for my php application looking like this:

        $HTTP["host"] =~ "www.mydomain.com" {
            url.rewrite-once += (
                "^/(img|css)/.*" => "$0",
                ".*" => "/my_app.php"
            )
        }

    I want to be able to put the webserver in kind of a "maintenance" mode while I update my application from scm. To do this, my idea was to enable an additional rewrite configuration file before this one. The 16-rewrite-maintenance.conf file looks like this:

        url.rewrite-once += (
            "^/(img|css)/.*" => "$0",
            ".*" => "/maintenance_app.php"
        )

    Now, on the maintenance page, I have a logo that doesn't get loaded. I get a 404 error. Lighttpd debug says the following:

        2012-12-13 20:28:06: (response.c.300) -- splitting Request-URI
        2012-12-13 20:28:06: (response.c.301) Request-URI : /img/content/logo.png
        2012-12-13 20:28:06: (response.c.302) URI-scheme : http
        2012-12-13 20:28:06: (response.c.303) URI-authority: localhost
        2012-12-13 20:28:06: (response.c.304) URI-path : /img/content/logo.png
        2012-12-13 20:28:06: (response.c.305) URI-query :
        2012-12-13 20:28:06: (response.c.300) -- splitting Request-URI
        2012-12-13 20:28:06: (response.c.301) Request-URI : /img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (response.c.302) URI-scheme : http
        2012-12-13 20:28:06: (response.c.303) URI-authority: localhost
        2012-12-13 20:28:06: (response.c.304) URI-path : /img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (response.c.305) URI-query :
        2012-12-13 20:28:06: (response.c.349) -- sanatising URI
        2012-12-13 20:28:06: (response.c.350) URI-path : /img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (mod_access.c.135) -- mod_access_uri_handler called
        2012-12-13 20:28:06: (response.c.470) -- before doc_root
        2012-12-13 20:28:06: (response.c.471) Doc-Root : /www
        2012-12-13 20:28:06: (response.c.472) Rel-Path : /img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (response.c.473) Path :
        2012-12-13 20:28:06: (response.c.521) -- after doc_root
        2012-12-13 20:28:06: (response.c.522) Doc-Root : /www
        2012-12-13 20:28:06: (response.c.523) Rel-Path : /img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (response.c.524) Path : /www/img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (response.c.541) -- logical -> physical
        2012-12-13 20:28:06: (response.c.542) Doc-Root : /www
        2012-12-13 20:28:06: (response.c.543) Rel-Path : /img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (response.c.544) Path : /www/img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (response.c.561) -- handling physical path
        2012-12-13 20:28:06: (response.c.562) Path : /www/img/content/logo.png, /img/content/logo.png
        2012-12-13 20:28:06: (response.c.618) -- file not found
        2012-12-13 20:28:06: (response.c.619) Path : /www/img/content/logo.png, /img/content/logo.png

    Any clue on why lighttpd matches both rules (from my application rewrite config and from my maintenance rewrite config) and concatenates them with a comma - that doesn't seem to make any sense?! Shouldn't it stop after the first match with rewrite-once?
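
    The comma is the giveaway: two matching rewrite lists end up active — the maintenance rules appended in the global context and the application rules appended inside the $HTTP["host"] conditional — and their results get joined instead of one rule winning; rewrite-once only short-circuits within a single list, not across contexts. One way out (a sketch of one approach, not the only one) is to scope the maintenance rules to the same conditional and assign with = so that, while the maintenance file is enabled, its catch-all lives in the same merged list and matches first:

        # 16-rewrite-maintenance.conf -- same conditional scope, loaded first
        $HTTP["host"] =~ "www.mydomain.com" {
            url.rewrite-once = (
                "^/(img|css)/.*" => "$0",
                ".*" => "/maintenance_app.php"
            )
        }

    With url.rewrite-once, the first matching rule in the merged list applies, so the maintenance catch-all takes effect ahead of the /my_app.php rule appended by 20-rewrite.conf.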

  • E: Internal Error, Could not perform immediate configuration (2) on libattr1 in Ubuntu

    - by user32178
    Hi, I am working with the latest version of Ubuntu. While installing via apt-get install, I tried to abort by pressing Ctrl+Z. It terminated "successfully" ;). But the next time I tried to use apt-get, I got errors about a "lock" and "temporarily unavailable", something like that, and I unfortunately deleted the /var/lib/dpkg folder. After that I can't install anything from apt-get; I get the error:

        E: Internal Error, Could not perform immediate configuration (2) on libattr1

    So how can I solve this issue?
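
    Deleting /var/lib/dpkg removes the database that both dpkg and apt depend on, which is why every apt-get now fails. Ubuntu keeps nightly copies of the package status file under /var/backups, so a minimal skeleton can often be rebuilt from there. A recovery sketch — the paths are the standard dpkg ones, but this is a best-effort repair, so back up what remains first:

        # Recreate the directory layout dpkg expects
        sudo mkdir -p /var/lib/dpkg/{info,updates,alternatives,triggers,parts}
        sudo touch /var/lib/dpkg/available
        # Restore the most recent backup of the package status database
        sudo cp /var/backups/dpkg.status.0 /var/lib/dpkg/status
        # Let apt rebuild its lists and verify the result
        sudo apt-get update
        sudo apt-get check

    The per-package metadata in /var/lib/dpkg/info is gone for good; reinstalling packages over time regenerates it.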
