Search Results

Search found 6637 results on 266 pages for 'usr'.


  • installing latest apache on centos

    - by fivelitresofsoda
    Hi, I'm trying to install the newest version of Apache on my CentOS server. I did the following:

        # Download
        $ wget http://httpd.apache.org/path/to/latest/version/
        # Extract
        $ gzip -d httpd-2_0_NN.tar.gz
        $ tar xvf httpd-2_0_NN.tar
        # Configure
        $ ./configure
        # Compile
        $ make
        # Install
        $ make install
        # Test
        $ PREFIX/bin/apachectl start

    That all worked except the last step: when I type apachectl start it says 'command not found'. I ran this command from /usr/local/apache2/bin/, where it is installed, but no luck. Any idea what I am doing wrong? Thanks.
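
    A sketch of what is probably going on, assuming the default --prefix of /usr/local/apache2: PREFIX in the docs is just a placeholder, and the shell does not search the current directory, so a bare apachectl is never found even from inside the bin directory.

        # Call the script by its full (or explicit relative) path:
        /usr/local/apache2/bin/apachectl start
        # ...or, from inside /usr/local/apache2/bin:
        ./apachectl start

        # Optionally put it on the PATH for future shells:
        export PATH=/usr/local/apache2/bin:$PATH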

    Read the article

  • curl XPUT returning HTTP 500 error message

    - by pradeepchhetri
    I have added the following to my nginx configuration:

        server {
            listen 8080;
            root /usr/share/nginx/www;
            client_body_temp_path /tmp/;
            dav_methods PUT DELETE MKCOL COPY MOVE;
            create_full_put_path on;
            dav_access user:rw group:rw all:rw;
        }

    nginx is built with --with-http_dav_module. But when I run the command

        $ curl -XPUT http://172.16.31.127:8080/test.html -d 'test'

    I get a 500 Internal Server Error. Can anyone help me solve this?
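
    A couple of things worth checking, as a sketch (it assumes the stock www-data worker user and the default error log location; the error log will name the real cause):

        # The 500 usually comes with a precise reason in the error log:
        tail -n 20 /var/log/nginx/error.log

        # The worker user must be able to write both the DAV root and the temp path:
        chown -R www-data:www-data /usr/share/nginx/www

        # -T is the usual way to PUT a file body with curl:
        curl -T test.html http://172.16.31.127:8080/test.html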

    Read the article

  • Is there a way to control two instantiated systemd services as a single unit?

    - by rascalking
    I've got a couple python web services I'm trying to run on a Fedora 15 box. They're being run by paster, and the only difference in starting them is the config file they read. This seems like a good fit for systemd's instantiated services, but I'd like to be able to control them as a single unit. A systemd target that requires both services seems like the way to approach that. Starting the target does start both services, but stopping the target leaves them running. Here's the service file:

        [Unit]
        Description=AUI Instance on Port %i
        After=syslog.target

        [Service]
        WorkingDirectory=/usr/local/share/aui
        ExecStart=/opt/cogo/bin/paster serve --log-file=/var/log/aui/%i deploy-%i.ini
        Restart=always
        RestartSec=2
        User=aui
        Group=aui

        [Install]
        WantedBy=multi-user.target

    And here's the target file:

        [Unit]
        Description=AUI
        [email protected]
        [email protected]
        After=syslog.target

        [Install]
        WantedBy=multi-user.target

    Is this kind of grouping even possible with systemd?
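
    One direction that may work, sketched under the assumption that the installed systemd is new enough to know the PartOf= dependency (the systemd shipped with Fedora 15 may be too old for it): declare each instance to be part of the target, so that stopping or restarting the target propagates to the instances. The target name aui.target is hypothetical.

        # aui@.service
        [Unit]
        Description=AUI Instance on Port %i
        After=syslog.target
        PartOf=aui.target

        # both instances then follow the target:
        #   systemctl start aui.target
        #   systemctl stop aui.target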

    Read the article

  • Nginx, logrotate and empty files

    - by tzulberti
    I have a problem with nginx/logrotate. Nginx logs access to two files (main and data). I have the following crontab entry:

        0 * * * * /usr/sbin/logrotate -f /home/orwell/orwell-setup/bin/logrotate-nginx

    And the file "logrotate-nginx" has the following content:

        /tmp/data.log {
            rotate 90
            daily
            missingok
            notifempty
            size 1
            sharedscripts
            postrotate
                [ ! -f /tmp/nginx.pid ] || kill -USR1 `cat /tmp/nginx.pid`
                MORE THINGS
            endscript
        }

        /tmp/main.log {
            rotate 90
            daily
            missingok
            notifempty
            size 1
            sharedscripts
            postrotate
                [ ! -f /tmp/nginx.pid ] || kill -USR1 `cat /tmp/nginx.pid`
                MORE THINGS
            endscript
        }

    Both files get rotated, but then nginx stops logging into them: the new files are created, but they stay empty. Any ideas why nginx stops logging to both files?
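
    A sketch of the usual cause, assuming the pid file is not really at /tmp/nginx.pid: the guard in postrotate then silently skips the USR1 signal, nginx keeps its open descriptors on the rotated files, and the newly created logs stay empty.

        # Compare the pid file logrotate checks with the one nginx actually writes
        # (the `pid` directive in nginx.conf):
        grep -n "pid" /etc/nginx/nginx.conf
        ls -l /tmp/nginx.pid

        # Sending the reopen signal by hand should make the empty files start growing:
        kill -USR1 "$(cat /path/to/real/nginx.pid)"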

    Read the article

  • Unknown/unsupported storage engine: InnoDB | Mysql Ubuntu

    - by Kayle
    I recently upgraded from the previous LTS Ubuntu to Precise and now MySQL refuses to start. It complains of the following when I attempt to start it:

        $ sudo service mysql restart
        stop: Unknown instance:
        start: Job failed to start

    And this shows up in /var/log/mysql/error.log:

        120415 23:01:09 [Note] Plugin 'InnoDB' is disabled.
        120415 23:01:09 [Note] Plugin 'FEDERATED' is disabled.
        120415 23:01:09 [ERROR] Unknown/unsupported storage engine: InnoDB
        120415 23:01:09 [ERROR] Aborting
        120415 23:01:09 [Note] /usr/sbin/mysqld: Shutdown complete

    I've checked permissions on all the MySQL directories to make sure it has ownership, and I also renamed the previous ib_logfiles so that it could recreate them. I'm just getting nowhere with this issue right now, after looking at Google results for two hours.
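
    A possible line of attack, sketched with the stock Ubuntu paths: "Plugin 'InnoDB' is disabled" usually means something in the configuration is switching the engine off, rather than a data-file problem.

        # Look for a leftover skip-innodb (or similar) anywhere MySQL reads config:
        grep -Rni "skip-innodb\|ignore-builtin-innodb\|innodb" /etc/mysql/

        # If the config is clean, mismatched redo logs can also keep InnoDB from
        # starting; move them aside and let mysqld recreate them at the right size:
        sudo mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.bak
        sudo mv /var/lib/mysql/ib_logfile1 /var/lib/mysql/ib_logfile1.bak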

    Read the article

  • Git can no longer open emacs as its editor

    - by mwilliams
    I'm running Git version 1.7.3.2 that I built from source, zsh is my shell, and emacs is my editor. Recently I started seeing the following:

        /usr/local/Cellar/git/1.7.3.2/libexec/git-core/git-sh-setup: line 106: emacs: command not found
        Could not execute editor

    My zshrc looks like the following so I can use the Cocoa build and the console binary provided with it:

        EMACS_HOME="/Applications/Emacs.app/Contents/MacOS"
        function e() { PATH=$EMACS_HOME/bin:$PATH $EMACS_HOME/Emacs -nw $@ }
        function ec() { PATH=$EMACS_HOME/bin:$PATH emacsclient -t $@ }
        function es() { e --daemon=$1 && ec -s $1 }
        function el() { ps ax|grep Emacs }
        function ek() { $EMACS_HOME/bin/emacsclient -e '(kill-emacs)' -s $1 }
        function ecompile() {
            e -eval "(setq load-path (cons (expand-file-name \".\") load-path))" \
                -batch -f batch-byte-compile $@
        }
        alias emacs=e
        alias emacsclient=ec

    And I also have export EDITOR="emacs" and have tried adding export GIT_EDITOR="emacs" (and swapping that out with "e"). But whatever I try, I can't get git to open emacs whenever I need to do a commit or an interactive rebase, etc.
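
    A sketch of a likely fix: zsh functions and aliases are not inherited by the non-interactive shell that git spawns, so "emacs" resolves to nothing there. Pointing git at the real binary (path taken from EMACS_HOME above) sidesteps that.

        git config --global core.editor "/Applications/Emacs.app/Contents/MacOS/Emacs -nw"
        # or, equivalently, in the shell startup file:
        export GIT_EDITOR="/Applications/Emacs.app/Contents/MacOS/Emacs -nw"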

    Read the article

  • How can I persist certificates in Java's cacerts?

    - by Alan Spark
    We need to have a certificate in Java's cacerts keystore for one of our servers that is authenticated by LDAP. We are using Ubuntu server. We have successfully done this by updating the cacerts file in /usr/lib/jvm/java-6-openjdk-amd64/jre/lib/security but occasionally a Java update is installed and the cacerts file seems to be getting replaced by a default one that doesn't contain our changes. This doesn't happen very often but it is becoming a bit of a pain when it does happen. Is there a better way of adding things to cacerts so that they don't get lost when a Java update happens? Thanks, Alan
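
    One approach that tends to survive JRE updates on Ubuntu, sketched on the assumption that the ca-certificates-java package is installed: add the certificate to the system CA store and let its hook rebuild the Java trust store (kept at /etc/ssl/certs/java/cacerts) instead of editing the JRE's own cacerts file.

        # the certificate must be PEM-encoded and end in .crt; the file name is hypothetical
        sudo cp ldap-server.crt /usr/local/share/ca-certificates/ldap-server.crt
        sudo update-ca-certificates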

    Read the article

  • emails getting sent with wrong "from" address

    - by Errol Gongson
    I have a postfix/dovecot system set up on Ubuntu 10.04, and it sends/receives email fine, but when I send emails they are all from [email protected]. For example, I have a user called "info", and when I try to send an email using mutt from the mailbox "/home/vmail/mydomain.com/info/Maildir", the email sends fine but it is from "[email protected]" and not "[email protected]". I have 3 mailboxes (/home/vmail/mydomain.com/root/Maildir, /home/vmail/mydomain.com/root/postmaster, and /home/vmail/mydomain.com/root/info) and they all send and receive emails. I am new to postfix and dovecot... can someone who knows what they are doing help me out on this one? The relevant part of main.cf:

        myhostname = mail.mydomain.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = mydomain.com   # have tried setting myorigin = mail.mydomain.com and still same problem
        mydestination = mail.mydomain.com, localhost, localhost.localdomain
        relayhost =
        mynetworks = 127.0.0.0/8
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        html_directory = /usr/share/doc/postfix/html
        message_size_limit = 30720000
        virtual_alias_domains =

    This is from the aliases file:

        postmaster: root
        root: [email protected]
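
    A sketch of two places to look, with the address below standing in for the redacted one: mutt builds the From: header from the login account unless it is told which identity to use, and postfix can also rewrite senders if that turns out to be needed.

        # per-mailbox ~/.muttrc for the "info" account (address assumed):
        set use_from = yes
        set from = "info@mydomain.com"
        set realname = "Info"

        # or rewrite senders in postfix main.cf:
        # sender_canonical_maps = hash:/etc/postfix/sender_canonical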

    Read the article

  • How to whitelist a user agent for nginx?

    - by djb
    I'm trying to figure out how to whitelist a user agent in my nginx conf. All other agents should be shown a password prompt. In my naivety, I tried to put the following in before deny all:

        if ($http_user_agent ~* SpecialAgent) {
            allow;
        }

    but I'm told the "allow" directive is not allowed here (!). How can I make it work? A chunk of my config file:

        server {
            server_name site.com;
            root /var/www/site;
            auth_basic "Restricted";
            auth_basic_user_file /usr/local/nginx/conf/htpasswd;
            allow 123.456.789.123;
            deny all;
            satisfy any;
            # other stuff...
        }

    Thanks for any help.
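
    A commonly suggested sketch, assuming an nginx build whose auth_basic accepts a variable (older versions may reject this, in which case another approach is needed): map the user agent to either the realm string or "off".

        # at http{} level:
        map $http_user_agent $auth_realm {
            default         "Restricted";
            ~*SpecialAgent  off;
        }

        server {
            server_name site.com;
            root /var/www/site;
            auth_basic $auth_realm;
            auth_basic_user_file /usr/local/nginx/conf/htpasswd;
        }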

    Read the article

  • Forcing Acrobat Reader font

    - by Jack
    I have a netbook with Linpus Linux and I'm trying to open automatically generated documents with Acrobat Reader; they use Verdana, but without having it embedded inside the PDF file. Linpus doesn't natively ship any Verdana font, so I installed the fonts inside /usr/share/fonts/ and ran mkfontdir and fc-cache to force a recache of the fonts. Since then I've been able to select Verdana inside other programs (e.g. OpenOffice), but I'm still unable to open these PDFs; it seems that Acrobat cannot find the font. Since I have no control over how these PDFs are generated, is there a way to force Acrobat to use a specific font if the one it needs is not found? Or does Acrobat need a different kind of font configuration on Linux? Thanks in advance.

    Read the article

  • LPR command won't recognize CUPS printer

    - by Datapimp23
    I have a CUPS server with one shared printer configured on it. It prints test pages without problems:

        printername (Idle, Accepting Jobs, Shared)
        Description: desc
        Location:
        Driver: Zebra ZPL Label Printer (grayscale, 2-sided printing)
        Connection: socket://172.20.50.26
        Defaults: job-sheets=none, none media=oe_w288h432_4x6in sides=one-sided

    This is the output from lpstat -t; it shows that the printer is idle and accepting requests:

        admin@SERVER:~$ lpstat -t
        scheduler is running
        no system default destination
        device for printername: socket://172.20.50.26
        printername accepting requests since Thu 26 Jan 2012 01:29:35 PM CET
        printer printername is idle.  enabled since Thu 26 Jan 2012 01:29:35 PM CET

    Now when I want to send a print job to it via an lpr command, it won't recognize the printer:

        /usr/bin/lpr -P printername test.pdf

    Result:

        lpr: ttn_seg_zebra1: unknown printer

    What am I missing here?
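
    A sketch of two things worth checking. Note that the error names ttn_seg_zebra1 even though the queue above is shown as printername, so the -P argument may simply not match the real CUPS queue name; a non-CUPS lpr binary would also behave this way.

        # Which lpr is actually being called, and which queues does CUPS know about?
        which lpr
        lpstat -p -d

        # If lpr comes from LPRng/BSD printing tools rather than the cups-bsd /
        # cups-client package, it will not see CUPS queues at all.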

    Read the article

  • I don't see the running guest in virsh

    - by Louise Hoffman
    Using CentOS 5 with KVM. I have downloaded a KVM appliance, and when unzipped it is just a .img file; no XML file is supplied. I can start the guest with

        /usr/libexec/qemu-kvm -hda /data/kvm/slash.img -m 512

    and it works. Now I would like to make a config file for the guest. The problem is that when I do

        # virsh -c qemu:///system list
         Id Name                 State
        ----------------------------------

        #

    I don't see the guest as expected. Does anyone know what is wrong?
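
    A sketch of one way to bring the guest under libvirt's control, assuming a virt-install (python-virtinst) recent enough to know --import: guests launched directly with qemu-kvm are never visible to virsh, so the image has to be imported as a defined domain first.

        virt-install --name slash --ram 512 \
            --disk path=/data/kvm/slash.img \
            --import --noautoconsole
        virsh -c qemu:///system list --all

        # If the CentOS 5 virt-install is too old for --import, a minimal domain XML
        # written by hand and loaded with `virsh define slash.xml` achieves the same.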

    Read the article

  • Apache: tmp is not writable

    - by Patrick
    Hi, I've installed Drupal on a new webserver and I get the following errors:

        warning: is_writable() [function.is-writable]: open_basedir restriction in effect.
        File(/tmp) is not within the allowed path(s):
        (/customers/rollergirl.ch/rollergirl.ch:/var/www/diagnostics:/usr/share/php) in
        /customers/rollergirl.ch/rollergirl.ch/httpd.www/drupal/sites/all/modules/imagecache/imagecache.install on line 37.
        ImageCache Temp Directory /tmp is not writeable by the webserver.

    I guess this happens because the server is not configured with a writable /tmp folder. I don't have access to the Apache configuration file (I only know for sure that it is Apache). Could you suggest what I should do? Is contacting the hosting service my only option? Thanks
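
    A sketch that avoids /tmp entirely, assuming a Drupal 6-era site (imagecache): point Drupal's temporary directory at a folder that lies inside the allowed open_basedir paths.

        mkdir -p /customers/rollergirl.ch/rollergirl.ch/httpd.www/drupal/sites/default/files/tmp
        chmod 770 /customers/rollergirl.ch/rollergirl.ch/httpd.www/drupal/sites/default/files/tmp
        # then set "Temporary directory" to sites/default/files/tmp under
        # Administer > Site configuration > File system, or in settings.php:
        # $conf['file_directory_temp'] = 'sites/default/files/tmp';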

    Read the article

  • The 'which' command returns nothing via cron, but works via console

    - by Zárate
    Hi there, I've written a little utility in haXe + Neko that needs to execute some git commands. To avoid hardcoding the path to the git executable I'd like to use the which command to find out where it is. Everything works as expected when running manually from the console, but not when the app runs from a cron job. I'm aware of the restricted environment (here or here) when you run a script using cron, but I'm still surprised this doesn't work:

        /usr/bin/which git >> /home/user/git.txt

    The text file is created but the content is empty. Again, when run from the console it works as expected. Any ideas? I'm running OS X Leopard, if that helps. Thanks : ) Juan
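
    A sketch of the usual fix: cron's PATH is typically just /usr/bin:/bin, so which finds nothing if git lives in /usr/local/bin (common on OS X). Declaring PATH at the top of the crontab keeps the path out of the code.

        PATH=/usr/local/bin:/usr/bin:/bin
        * * * * * /usr/bin/which git >> /home/user/git.txt 2>&1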

    Read the article

  • Mysql migrate huge db from innodb to ndbcluster Err: the table is full

    - by Nguyen Trong Nhan
    I'm trying to migrate an old database to a MySQL Cluster (4 data nodes) by using the command

        ALTER TABLE sample ENGINE=NDBCLUSTER;

    but I'm getting the following error:

        The table '#sql-7ff3_3' is full

    There are approximately 300 million rows in this table. Here are my config files:

        # /mysql-cluster/config.ini
        [NDBD DEFAULT]
        NoOfReplicas=2
        DataDir=/data/mysql-cluster/ndb/
        BackupDataDir=/data/mysql-cluster/backup/
        DataMemory=10G
        IndexMemory=5G
        TimeBetweenLocalCheckpoints=6
        FragmentLogFileSize=256MB
        NoOfFragmentLogFiles=50
        MaxNoOfOrderedIndexes=8000
        MaxNoOfConcurrentOperations=100000
        MaxNoOfTables = 10000
        RedoBuffer=128M
        MaxNoOfAttributes=5000
        MaxNoOfUniqueHashIndexes=1024

        # /etc/my.cnf
        [mysqld]
        basedir=/usr/local/mysql
        datadir=/data/mysql-cluster/mysqld/
        event_scheduler=on
        default-storage-engine=ndbcluster
        ndbcluster
        ndb-connectstring=192.168.x.x,192.168.x.x
        innodb_file_per_table
        innodb_buffer_pool_size = 512MB
        key_buffer = 512M
        key_buffer_size = 512M
        sort_buffer_size = 512M
        table_cache = 1024
        read_buffer_size = 512M
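
    A first check worth making, as a sketch: "table is full" in NDB almost always means DataMemory or IndexMemory ran out while the copying ALTER built the new table, so watching memory usage during the copy shows whether the 10G/5G above are enough for ~300 million rows.

        # from the management node, while the ALTER is running:
        ndb_mgm -e "ALL REPORT MEMORYUSAGE"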

    Read the article

  • after install python 2.7.3 yum is broken

    - by user468587
    I installed libxml2-2.9.0 and libxslt-1.1.27, and then yum broke. Any yum command I run gives:

        There was a problem importing one of the Python modules
        required to run yum. The error leading to this problem was:

           No module named yum

        Please install a package which provides this module, or
        verify that the module is installed correctly.

        It's possible that the above module doesn't match the
        current version of Python, which is:
        2.4.3 (#1, Jan 21 2009, 01:11:33)
        [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)]

        If you cannot solve this problem yourself, please go to
        the yum faq at:
          http://yum.baseurl.org/wiki/Faq

    I then thought the Python version was way too old and installed Python 2.7.3 from scratch. After some wrong trials it got worse and worse: now when I run 'python -V' I get 'Python 2.7.3'; when I run '/usr/bin/python -V', it returns 'python-2.4.3-24.el5'; and no matter what I do, yum is still broken with that message. How can I get yum back? My OS is: Linux 2.6.18-164.11.1.el5 x86_64 GNU/Linux
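
    A sketch of how to untangle it, on the assumption that the yum Python modules are simply no longer importable by the interpreter yum runs under: CentOS 5's yum must keep running on the system Python 2.4, and a source-built 2.7 should only be installed via make altinstall so it does not shadow it.

        # Which interpreter does yum use, and is the yum module still there for it?
        head -1 /usr/bin/yum
        ls /usr/lib/python2.4/site-packages/yum/ | head
        echo "$PYTHONPATH"          # must not point at 2.7's site-packages

        # If the module tree is intact, forcing the shebang back to 2.4 is often enough:
        sed -i '1s|.*|#!/usr/bin/python2.4|' /usr/bin/yum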

    Read the article

  • what is the best setting for using lighttpd on 8G ram?

    - by user39639
    I have a system with 8 GB RAM and 8 x Xeon 3361 CPUs. What is the best setting for handling simultaneous connections? What is the maximum? Is a setting like this correct?

        server.max-keep-alive-requests = 0
        server.max-keep-alive-idle = 10
        server.max-read-idle = 60
        server.max-write-idle = 60
        server.event-handler = "linux-sysepoll"
        server.max-fds = 2048

        fastcgi.server = ( ".php" =>
            ( "localhost" =>
                ( "socket" => "/tmp/php-fastcgi.socket",
                  "bin-path" => "/usr/bin/php-cgi",
                  "max-procs" => 20,
                  "bin-environment" => (
                      "PHP_FCGI_CHILDREN" => "40",
                      "PHP_FCGI_MAX_REQUESTS" => "800"
                  ),
                  "broken-scriptfilename" => "enable"
                )
            )
        )

    Please help me!
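
    One thing worth reconsidering in that config: max-procs times PHP_FCGI_CHILDREN is the number of PHP processes that may exist at once, and 20 x 40 = 800 processes at a typical 20-50 MB each is far more than 8 GB can hold. A more conservative (still hypothetical) fragment for the fastcgi.server block:

        # roughly 4 * 16 = 64 PHP workers; scale up only while memory allows
        "max-procs" => 4,
        "bin-environment" => ( "PHP_FCGI_CHILDREN" => "16",
                               "PHP_FCGI_MAX_REQUESTS" => "1000" )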

    Read the article

  • Testing php mail() in localhost problem.

    - by Samir Ghobril
    Hey guys, recently I installed msmtp on Linux and I even sent a mail from the terminal, and it worked:

        echo -e "Subject: Test Mail\r\n\r\nThis is a test mail" | msmtp --debug --from=default -t [email protected]

    But in PHP, after editing php.ini to have this:

        sendmail_path = '/usr/bin/msmtp -t'

    and using this piece of code:

        <?php
        if ( mail( '[email protected]', 'Test mail from localhost', 'Working Fine.' ) ) {
            echo 'Mail sent';
        } else {
            echo 'Error. Please check error log.';
        }
        ?>

    I get the "Mail sent" message but don't receive a message in my inbox, not even in the spam folder. Is there anything wrong with what I'm doing?
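
    A sketch of the most common culprit: the terminal test used your own ~/.msmtprc, but PHP runs msmtp as the web-server user, which has no config of its own (and cannot read yours). Pointing PHP at a world-readable system config and adding a log file makes failures visible; the paths and account name are assumptions.

        ; php.ini
        sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a default --logfile /var/log/msmtp.log -t"
        ; /var/log/msmtp.log must be writable by the web-server user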

    Read the article

  • Force ntpd to make changes in smaller steps

    - by David Wolever
    The NTP documentation says:

        Under ordinariy conditions, ntpd adjusts the clock in small steps so that the
        timescale is effectively continuous and without discontinuities.
        - http://doc.ntp.org/4.1.0/ntpd.htm

    However, this is not at all what I have noticed in practice. If I manually change the system time backwards or forwards 5 or 10 seconds then start ntpd, I notice that it adjusts the clock in one shot. For example, with this code:

        #!/usr/bin/env python
        import time

        last = time.time()
        while True:
            time.sleep(1)
            print time.time() - last
            last = time.time()

    When I first change the time, I'll notice something like:

        1.00194311142
        8.29711604118
        1.0010509491

    Then when I start ntpd, I'll see something like:

        1.00194311142
        -8.117301941
        1.0010509491

    Is there any way to force ntpd to make the adjustments in smaller steps?
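
    For reference, ntpd only slews offsets below its step threshold (128 ms by default); a 5-10 second offset is therefore corrected with a single step. Two hedged ways to force slewing instead:

        # raise the step threshold to 600 s for this run:
        ntpd -x

        # or never step at all (in /etc/ntp.conf); slewing a 10 s offset can take hours:
        tinker step 0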

    Read the article

  • Cron Jobs unable to deliver email error report

    - by root
    I am sure I have the right syntax, yet I am still unable to receive report emails at my email address. My OS is CentOS 6.4. My crontab is:

        MAILTO="[email protected]"
        * * * * * /usr/bin/php5 /home/myusername/public_html/cron.php /post/find_submit_test/1/

    The email address [email protected] is working fine, and I tested sendmail from SSH, which is also working fine, but the cron reports are not being delivered. I checked WHM's notification settings and couldn't find anything relevant there. Please advise me how to fix this. Thanks
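
    A sketch of how to narrow it down: cron only mails when a job writes to stdout/stderr, and the message still has to get through the local MTA, so both ends are worth checking (log paths are the CentOS defaults).

        # did cron run the job, and did the MTA accept/deliver the report?
        grep cron.php /var/log/cron
        tail -n 50 /var/log/maillog

        # a job that always produces output is a quick MAILTO test:
        * * * * * echo "cron mail test from $(hostname)"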

    Read the article

  • Allowing outbound traffic with APF/iptables for OpenVZ container

    - by David
    I have APF installed in an OpenVZ container (Proxmox 2.1). The config is pretty much vanilla and things are working: my external services like SSH and HTTP work. My problem is that all outbound traffic on http/https is blocked. How do I allow all outbound traffic for http/https? If I change EGF to 1 like this, all inbound and outbound traffic gets blocked:

        EGF="1"
        EG_TCP_CPORTS="21,25,80,443,43,53"
        EG_UDP_CPORTS="20,21,53"
        EG_ICMP_TYPES="all"

    I opened a single outbound rule with the following:

        # /usr/local/sbin/apf -a downloads.wordpress.org

    How do I allow all outbound traffic on http/https without blocking all traffic? Why would I allow all inbound ssh/http traffic and block all outbound traffic?
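
    A debugging sketch rather than a definitive answer (APF inside an OpenVZ container depends on which iptables modules the host exposes to it): with EGF="1" the EG_*_CPORTS lists are destination ports, so 80/443 outbound should already be allowed, and the drops APF logs usually reveal what is really being refused.

        /usr/local/sbin/apf -r        # reload the rules after changing conf.apf
        tail -f /var/log/apf_log      # watch for the outbound drops
        iptables -L -n -v | less      # confirm the egress chains look sane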

    Read the article

  • Why isn't this smbmount attempt working?

    - by Max Williams
    I can successfully access one of our local Samba shares, which is on a Windows PC (called marina), as follows:

        $ sudo /usr/bin/smbclient \\\\marina\\resource_library <my password>
        Domain=[MARINA] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager]
        smb: \>

    So that works. I'm now trying to mount the above location (the resource_library folder on marina) to /mnt/resource_library (as a read-only folder), but it keeps failing. I've tried a few variations of specifying the location:

        $ sudo smbmount \\\\marina\\resource_library /mnt/resource_library -o username=max,password=<my password>,r
        mount error: could not resolve address for marina: No address associated with hostname
        No ip address specified and hostname not found

    and

        $ sudo smbmount //marina/resource_library /mnt/resource_library -o username=max,password=<my password>,r
        mount error: could not resolve address for marina: No address associated with hostname
        No ip address specified and hostname not found

    and both of the above with MARINA instead of marina. It's bound to be some dumb mistake I'm making; can anyone see it? Cheers, Max
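
    A sketch of the likely difference: smbclient can fall back to NetBIOS name lookup, while smbmount/mount.cifs relies on ordinary DNS, so "marina" never resolves. Passing the address explicitly (the IP below is a placeholder) or adding the host to /etc/hosts usually gets past it; note also that the read-only option is spelled "ro".

        sudo smbmount //marina/resource_library /mnt/resource_library \
            -o username=max,ip=192.168.1.50,ro
        # or: echo "192.168.1.50  marina" | sudo tee -a /etc/hosts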

    Read the article

  • Yum install packages to an alternate directory, without chroot?

    - by Stefan Lasiewski
    I would like to use the Foswiki yum repository to install Foswiki (296 packages). The default installation path is /var/lib/, but I want to install it to an alternate location at /opt/www/. In the future, I still want to use yum to check for and apply updates to the packages. Is it possible to use yum to install packages to a location other than the default location provided by the RPMs? Does yum provide anything similar to ./configure --prefix=/usr/local/ or rpm --install --prefix=/opt/local? Yum provides the --installroot option, but that appears to be primarily for chroot environments.
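
    For what it's worth, a sketch of the available options: yum has no --prefix, --installroot is chroot-style as noted, and rpm-level relocation only works when the packages are built relocatable. A symlink keeps yum updates working while the data actually lives in /opt; the package and directory names below are hypothetical.

        # check whether the package is relocatable at all:
        rpm -qpi foswiki.rpm | grep -i relocations

        # otherwise install normally, then move and link:
        mv /var/lib/foswiki /opt/www/foswiki
        ln -s /opt/www/foswiki /var/lib/foswiki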

    Read the article

  • postfix: force server to send mail outside of localhost

    - by LoneWolfPR
    I have a PHP file that sends mail using the mail() function. The problem is that one of the forms sends to a domain that is registered on my server while its mail is handled on a different server. Postfix looks locally only; when it doesn't find the email address, it rejects the message. How can I configure postfix to send mail to all domains through the internet and not locally?

    Update: OK, so it wasn't a postfix issue at all. I simply needed to turn off mail handling for that domain from the command line. For anyone who needs that command, it is (at least on my system):

        /usr/local/psa/bin/domain --update example.com -mail_service false
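
    For anyone hitting this outside a control-panel tool like the one above, a generic postfix sketch of the same idea: the domain must not be listed as local, after which postfix relays to the domain's MX instead of looking up the mailbox locally.

        postconf mydestination virtual_alias_domains virtual_mailbox_domains
        # remove the domain from whichever of those lists it, then:
        postfix reload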

    Read the article

  • Why are my log in times taking so long in Linux?

    - by Jamie
    In recent weeks, login times on my Ubuntu server have started timing out, both through SSH and on the local command-line console. Examination of /var/log/auth.log yields nothing interesting. How can I diagnose long login times on my Ubuntu server? I should mention that no updates have been performed since the problem started, and that the /, /boot and /usr filesystems are mounted read-only.

    [Edit] This is a standalone machine, so it doesn't authenticate against Active Directory, LDAP, etc. Also, the login prompt is responsive, as is the password prompt. Upon typing the password and pressing Enter, I'll time out. After four or five tries, I will be able to log in, although I'm worried this will start taking longer.
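
    A diagnostic sketch (the user, host and client IP below are placeholders): seeing where the handshake stalls, and whether reverse DNS or a PAM module is the slow part, usually narrows this down quickly.

        # from a client, watch where ssh pauses:
        ssh -vvv user@server

        # on the server: is reverse DNS of the client slow?
        time getent hosts 192.0.2.10

        # and which PAM modules run at login (pam_motd, pam_lastlog etc. need to
        # write state, which may matter with read-only filesystems):
        grep -v '^#' /etc/pam.d/sshd /etc/pam.d/login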

    Read the article
