Search Results

Search found 832 results on 34 pages for 'apr'.

Page 26 of 34

  • Apache (Solaris 10): 2 symlinks to the same file, one works the other doesn't

    - by justcatchingrye
     I'm seeing a strange issue with Apache. I have a system that pulls a configuration file from a web server. I want to use a symlink with the name 'ocds-dpsarch01a.rules'. This doesn't work. However, if I change one character in that name and link it to the same file, it works - see below. I can't think of any reason why one symlink would work when another doesn't. I would have thought either the Apache configuration is right and all symlinks work, or it isn't and no symlinks work(?) Any thoughts welcome.

     ls -l /REMOVED/apache2/htdocs/rules/syslog/*cds-dpsarch01a.rules
     lrwxrwxrwx 1 root root 62 May 13 13:55 ocds-dpsarch01a.rules -> /REMOVED/apache2/htdocs/templates/syslog/DCM_SST_DPST_01.rules
     lrwxrwxrwx 1 root root 62 May 13 13:52 xcds-dpsarch01a.rules -> /REMOVED/apache2/htdocs/templates/syslog/DCM_SST_DPST_01.rules

     1) Application starting and successfully reading configuration from the web server:
     13/05/2010 13:56:37: Information: Connecting ...
     13/05/2010 13:56:37: Debug: Reading REMOVED://REMOVED/rules/syslog/xcds-dpsarch01a.rules
     13/05/2010 13:56:37: Debug: HTTP response: HTTP/1.1 200 OK Date: Thu, 13 May 2010 13:56:34 GMT Server: Apache Last-Modified: Fri, 09 Apr 2010 12:28:26 GMT ETag: "5073-a744-ee92ae80" Accept-Ranges: bytes Content-Length: 42820 Cache-Control: max-age=5 Expires: Thu, 13 May 2010 13:56:39 GMT NL7C-Filtered: Content-Type: text/plain Connection: close
     13/05/2010 13:56:37: Debug: Plain text rules file detected.

     2) Application starting and failing to read configuration from the web server:
     13/05/2010 13:56:55: Information: Connecting ...
     13/05/2010 13:56:55: Debug: Reading REMOVED://REMOVED/rules/syslog/ocds-dpsarch01a.rules
     13/05/2010 13:56:55: Debug: HTTP response: HTTP/1.1 403 Forbidden Date: Wed, 12 May 2010 15:25:11 GMT Server: Apache Vary: accept-language,accept-charset Accept-Ranges: bytes Connection: close Content-Type: text/html; charset=iso-8859-1 Content-Language: en Expires: Wed, 12 May 2010 15:25:11 GMT
     13/05/2010 13:56:55: Error: HTTP: HTTP/1.1 403 Forbidden Date: Wed, 12 May 2010 15:25:11 GMT Server: Apache Vary: accept-language,accept-charset Accept-Ranges: bytes Connection: close Content-Type: text/html; charset=iso-8859-1 Content-Language: en Expires: Wed, 12 May 2010 15:25:11 GMT
     13/05/2010 13:56:55: Error: HTTP GET failed
     13/05/2010 13:56:55: Error: Failed to open Rules file: REMOVED://REMOVED/rules/syslog/ocds-dpsarch01a.rules
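     One thing worth ruling out (a sketch, not a confirmed fix for this case) is how symlinks are allowed for that directory in the Apache configuration; with SymLinksIfOwnerMatch in effect, a link whose ownership differs from its target's is refused with a 403:

         <Directory "/REMOVED/apache2/htdocs/rules">
             # Follow symlinks unconditionally rather than only when owners match
             Options +FollowSymLinks
             # Options +SymLinksIfOwnerMatch   # stricter variant that can yield 403s
         </Directory>

     It is also worth noting that the 403 response carries a Date header from the previous day, which hints at a cached response (note the NL7C-Filtered header in the working reply) rather than a fresh decision by Apache.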

    Read the article

  • Simple end-to-end load and bottleneck monitoring for DB-based web sites

    - by T.J. Crowder
     What tools do you use / would you recommend for monitoring a Linux-based, DB-based website's servers for bottlenecks and load? The obvious goal being to know when growth has gotten to the point where it's necessary to scale up (or out) one or more of the bits and pieces because the current system won't be managing the load if an observed trend continues. I'm looking for general recommendations based on standard Linux load metrics, disk I/O metrics, network I/O metrics, etc., but if specifics are helpful: It'll be Tomcat6 using APR (possibly with a Varnish or similar caching and balancing front-end), MySQL, and either Ubuntu 8.04 LTS or 10.04 LTS depending on timing. I know about top, vmstat, iostat, bwmon and the like that collect and parse info from the /proc file system (et al.); and obviously MySQL provides a lot of queryable performance information. I could use those directly, probably automating periodic monitoring logs with scripts and such. But I have a suspicion that I'd be reinventing the wheel... For example, Hyperic HQ seems to be along the lines of what I'm looking for. Others? Meta: I tend to think of "recommendation" questions as needing to be CW because there's no one right answer, but I see a lot of these here that aren't CWs, so I haven't marked it as one. I'll happily do so if enough people think I should.
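     If rolling something lightweight first, one low-effort baseline (a sketch; assumes the sysstat package, which provides sar) is to let sar collect the standard CPU/disk/network counters every few minutes and review them later:

         # Debian/Ubuntu: install and enable periodic collection
         sudo apt-get install sysstat
         sudo sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
         # later, review e.g. CPU and per-device disk utilisation for the day
         sar -u
         sar -d -p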

    Read the article

  • Ubuntu-VirtualBox-LikeWiseOpen network disaster

    - by Sergio
    I've a virtual machine on VirtualBox 4.1.4 with Ubuntu 11.04. It was working perfectly, but after a reboot something really wrong happened: I wasn't able to connect to the internal network (same for NAT). $ sudo dhclient -v Internet Systems Consortium DHCP Client 4.1.1-P1 Copyright 2004-2010 Internet System Consortium. All rights reserved. For info, please visit https://www.isc.org/software/dhcp/ Error creating socket to list interfaces; Permission denied Can't get list of interfaces. The network interface is PCnet-FAST III. Additional information: $ uname -a Linux LinuxFileServer 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 03:31:50 UTC 2011 i686 i686 i386 GNU/Linux Any ideas? Thanks EDIT: $ sudo ifconfig -a eth1 Link encap:Ethernet HWaddr 08:00:27:af:f2:c7 indirizzo inet6: fe80::a00:27ff:feaf:f2c7/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:16 errors:0 dropped:0 overruns:0 carrier:0 collisioni:0 txqueuelen:1000 Byte RX:0 (0 B) Byte TX:3870 (3.8 KB) Interrupt:10 lo Link encap:Loopback locale indirizzo inet:127.0.0.1 Maschera:255.0.0.0 indirizzo inet6: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:16 errors:0 dropped:0 overruns:0 frame:0 TX packets:16 errors:0 dropped:0 overruns:0 carrier:0 collisioni:0 txqueuelen:0 Byte RX:960 (960.0 B) Byte TX:960 (960.0 B)

    Read the article

  • VNC error: "Could not connect to session bus: Failed to connect to socket"

    - by GJ
    I started a vncserver on display :1 on an ubuntu machine. When I connect to it, I get a grey X window with an error message Could not connect to session bus: Failed to connect to socket. The vnc log is: Xvnc Free Edition 4.1.1 - built Apr 9 2010 15:59:33 Copyright (C) 2002-2005 RealVNC Ltd. See http://www.realvnc.com for information on VNC. Underlying X server release 40300000, The XFree86 Project, Inc Sun Mar 20 15:33:59 2011 vncext: VNC extension running! vncext: Listening for VNC connections on port 5901 vncext: created VNC server for screen 0 error opening security policy file /etc/X11/xserver/SecurityPolicy Could not init font path element /usr/X11R6/lib/X11/fonts/Type1/, removing from list! Could not init font path element /usr/X11R6/lib/X11/fonts/Speedo/, removing from list! Could not init font path element /usr/X11R6/lib/X11/fonts/misc/, removing from list! Could not init font path element /usr/X11R6/lib/X11/fonts/75dpi/, removing from list! Could not init font path element /usr/X11R6/lib/X11/fonts/100dpi/, removing from list! cat: /var/run/gdm/auth-for-link2-eGnVvf/database: No such file or directory gnome-session[24880]: WARNING: Could not make bus activated clients aware of DISPLAY=:1.0 environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused gnome-session[24880]: WARNING: Could not make bus activated clients aware of GNOME_DESKTOP_SESSION_ID=this-is-deprecated environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused gnome-session[24880]: WARNING: Could not make bus activated clients aware of SESSION_MANAGER=local/dell:@/tmp/.ICE-unix/24880,unix/dell:/tmp/.ICE-unix/24880 environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused Sun Mar 20 15:34:10 2011 Connections: accepted: 0.0.0.0::51620 SConnection: Client needs protocol version 3.8 SConnection: Client requests security type VncAuth(2) VNCSConnST: Server default pixel format depth 16 (16bpp) little-endian rgb565 VNCSConnST: Client pixel format depth 16 (16bpp) little-endian rgb565 gnome-session[24880]: Gtk-CRITICAL: gtk_main_quit: assertion `main_loops != NULL' failed gnome-session[24880]: CRITICAL: dbus_g_proxy_new_for_name: assertion `connection != NULL' failed Any ideas how to fix it?
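     The warnings suggest gnome-session inside the VNC display cannot reach a D-Bus session bus. A common workaround (a sketch, assuming the dbus-x11 package is installed) is to launch the session through dbus-launch from ~/.vnc/xstartup:

         #!/bin/sh
         # hypothetical ~/.vnc/xstartup for the :1 desktop
         unset DBUS_SESSION_BUS_ADDRESS    # drop any stale bus address inherited from the login session
         exec dbus-launch --exit-with-session gnome-session

     After editing, restart the server (vncserver -kill :1 && vncserver :1) so the new xstartup takes effect.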

    Read the article

  • Sun-JRE on CentOS-4.8 RPM error: post-install scriptlet failed, exit status 5

    - by Emyr
     I have a server with CentOS 4.8 installed. The provider is rubbish, but there are only a few months left, and they're busy being sued by Chase bank, so I doubt I can get CentOS 5. I wiped the server clean using Virtuozzo, and found that the default image is VERY empty. I even had to install yum myself. I've reached the point where I want to install Tomcat. I downloaded the Sun JRE as a .rpm.bin file, did chmod a+x and ran it. That produced a .rpm file, which I tried installing: [root@host java]# rpm -Uvh jre-6u20-linux-i586.rpm Preparing... ########################################### [100%] 1:jre ########################################### [100%] Unpacking JAR files... rt.jar... jsse.jar... charsets.jar... localedata.jar... plugin.jar... javaws.jar... deploy.jar... error: %post(jre-1.6.0_20-fcs.i586) scriptlet failed, exit status 5 [root@host java]# rpm -qi jre Name : jre Relocations: /usr/java Version : 1.6.0_20 Vendor: Sun Microsystems, Inc. Release : fcs Build Date: Mon Apr 12 19:34:13 2010 Install Date: Thu May 6 06:36:17 2010 Build Host: jdk-lin-1586 Group : Development/Tools Source RPM: jre-1.6.0_20-fcs.src.rpm Size : 50708634 License: Sun Microsystems Binary Code License (BCL) Signature : (none) Packager : Java Software <[email protected]> URL : http://java.sun.com/ Summary : Java(TM) Platform Standard Edition Runtime Environment Description : The Java Platform Standard Edition Runtime Environment (JRE) contains everything necessary to run applets and applications designed for the Java platform. This includes the Java virtual machine, plus the Java platform classes and supporting files. The JRE is freely redistributable, per the terms of the included license. [root@host java]# I couldn't find any results on Google for any parts of that error message, and I have very little experience of rpm (I usually use Debian). Is this a broken package, or am I missing something or some setting?
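     If the package itself is intact and only its %post scriptlet is misbehaving (on stripped-down images it is often just trying to register browser plugins that aren't there), one hedged workaround is to install while skipping scriptlets and then verify the payload landed:

         # sketch: skip the failing %post scriptlet, then confirm the files
         rpm -Uvh --noscripts jre-6u20-linux-i586.rpm
         rpm -V jre
         /usr/java/jre1.6.0_20/bin/java -version   # path is an assumption based on the /usr/java relocation prefix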

    Read the article

  • [Ubuntu 10.04] mdadm - Can't get RAID5 Array To Start

    - by Matthew Hodgkins
    Hello, after a power failure my RAID array refuses to start. When I boot I have to sudo mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 to get mdadm to notice the array. Here are the details (after I force assemble). sudo mdadm --misc --detail /dev/md0: /dev/md0: Version : 00.90 Creation Time : Sun Apr 25 01:39:25 2010 Raid Level : raid5 Used Dev Size : 1465135872 (1397.26 GiB 1500.30 GB) Raid Devices : 6 Total Devices : 6 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Thu Jun 17 23:02:38 2010 State : active, Not Started Active Devices : 6 Working Devices : 6 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 128K UUID : 44a8f730:b9bea6ea:3a28392c:12b22235 (local to host hodge-fs) Events : 0.1249691 Number Major Minor RaidDevice State 0 8 65 0 active sync /dev/sde1 1 8 81 1 active sync /dev/sdf1 2 8 97 2 active sync /dev/sdg1 3 8 49 3 active sync /dev/sdd1 4 8 33 4 active sync /dev/sdc1 5 8 17 5 active sync /dev/sdb1 mdadm.conf: # by default, scan all partitions (/proc/partitions) for MD superblocks. # alternatively, specify devices to scan, using wildcards if desired. DEVICE partitions /dev/sdb1 /dev/sdb1 # auto-create devices with Debian standard permissions CREATE owner=root group=disk mode=0660 auto=yes # automatically tag new arrays as belonging to the local system HOMEHOST <system> # definitions of existing MD arrays ARRAY /dev/md0 level=raid5 num-devices=6 UUID=44a8f730:b9bea6ea:3a28392c:12b22235 Any help would be appreciated.
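     Nothing in the detail output looks degraded, so the assembly problem at boot may simply be that the mdadm.conf DEVICE line only lists /dev/sdb1 (twice) and the initramfs never sees the other members. A possible adjustment (a sketch, not a verified fix):

         # /etc/mdadm/mdadm.conf - let mdadm scan every partition for superblocks
         DEVICE partitions
         ARRAY /dev/md0 level=raid5 num-devices=6 UUID=44a8f730:b9bea6ea:3a28392c:12b22235

         # then rebuild the initramfs so the change is used at boot (Ubuntu)
         sudo update-initramfs -u

     If the array still reports "active, Not Started" after assembly, explicitly starting it with sudo mdadm --run /dev/md0 is another thing to try.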

    Read the article

  • CentOS networking BNX2

    - by james moore
     Having some trouble with my NICs. The server starts fine and I can wget/ping etc. However, when I run /etc/init.d/networking restart I then receive the following error: Bringing up interface eth0: bnx2: fw sync timeout, reset code = 1030009 SIOCSIFFLAGS: Device or resource busy Consequently, the task fails. I have searched around on Google; users suggest disabling PnP in the BIOS, but I see no such option. Here is some system information: $ ethtool -i eth0 driver: bnx2 version: 2.0.8-rh firmware-version: bc 2.9.1 $ uname -a Linux host 2.6.18-238.9.1.el5 #1 SMP Tue Apr 12 18:10:13 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux $ /sbin/lspci | grep Broadcom 04:00.0 PCI bridge: Broadcom EPB PCI-Express to PCI-X Bridge (rev c3) 05:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5700 Gigabit Ethernet (rev 12) 08:00.0 PCI bridge: Broadcom EPB PCI-Express to PCI-X Bridge (rev c3) 09:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5700 Gigabit Ethernet (rev 12) $ lsmod | grep bnx2 bnx2i 81704 0 cnic 109512 1 bnx2i libiscsi2 77765 6 be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi_tcp scsi_transport_iscsi2 73945 8 be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi2 bnx2 224780 0 scsi_mod 199001 15 mpt2sas,scsi_transport_sas,mptctl,be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi2,scsi_transport_iscsi2,scsi_dh,sg,libata,megaraid_sas,sd_mod $ rmmod bnx2; modprobe bnx2 PCI: Enabling device 0000:05:00.0 (0158 -> 015a) PCI: Enabling device 0000:09:00.0 (0158 -> 015a) bnx2: fw sync timeout, reset code = 10300003 Any help would be appreciated as I am at a loss.

    Read the article

  • Postfix relay gives error 450 while it should be 550

    - by dieter-be
     Hi, we use postfix to do relaying. We get several messages like the following in /var/log/mail (slightly edited) Apr 13 13:30:29 linserver postfix/smtpd[1064]: NOQUEUE: reject: RCPT from unknown[$ip]: 450 4.1.1 <[email protected]>: Recipient address rejected: undeliverable address: host domain.be [$ip] said: 550 <[email protected]>: Recipient address rejected: User unknown in virtual mailbox table (in reply to RCPT TO command); from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<BLUESTREAK.domain.local> Now, when the master mail server gives a 550, claiming that the user does not exist, I want the relay to also give a 550 back. What happens now is that it seems to return a 450, causing clients to keep messages queued, keep trying and only notify users after a certain period has passed. According to what I could find, the soft_bounce setting could cause this. But we have not enabled this option (and by default it's off according to the Postfix docs). It might also have something to do with the *_reject_code postconf values. Especially since the log message complains about the unknown IP. But as you can see in the postconf output below, smtpd_sender_restrictions and smtpd_client_restrictions are empty. So even if it would try to do any restrictions there, 550 is the "worst" error going on, so that's what I expect to be returned to the client. postconf: http://sprunge.us/JYgB Thanks, Dieter
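     The log line has the shape of reject_unverified_recipient: Postfix probed the downstream server, the probe was refused with a 550, but the verification reject code itself defaults to 450. If that is what is happening here, the relevant knob (a sketch, worth checking against the postconf output) would be:

         # return a hard 550 instead of the default soft 450 when a recipient
         # fails address verification
         postconf -e 'unverified_recipient_reject_code = 550'
         postfix reload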

    Read the article

  • Can't install PHP after apt-get dist-upgrade

    - by WASD42
     I had a server with a classical LAMP installation on Ubuntu 8.04 that had been running perfectly for months: Linux localhost 2.6.24-23-generic #1 SMP Wed Apr 1 21:47:28 UTC 2009 i686 GNU/Linux DISTRIB_ID=Ubuntu DISTRIB_RELEASE=8.04 DISTRIB_CODENAME=hardy DISTRIB_DESCRIPTION="Ubuntu 8.04.4 LTS" Don't know why I've started apt-get update, apt-get upgrade but everything ended with apt-get dist-upgrade :) Everything went alright... But now I can't start either Apache or PHP, because PHP was simply deleted. When I'm trying to install it: > apt-get install php5 <...> The following packages have unmet dependencies: php5: Depends: libapache2-mod-php5 (>= 5.2.4-2ubuntu5.17) but it is not going to be installed or php5-cgi (>= 5.2.4-2ubuntu5.17) but it is not going to be installed E: Broken packages When I'm trying to install libapache2-mod-php5: The following packages have unmet dependencies: libapache2-mod-php5: Depends: php5-common (= 5.2.4-2ubuntu5.17) but 5.3.6-6~dotdeb.1 is to be installed E: Broken packages I don't know what 5.3.6-6~dotdeb.1 is or where this package comes from, because I've already removed the dotdeb repository from APT sources :/ Tried to do apt-get update, apt-get upgrade, apt-get install php5 php5-common php5-cli with no success... Don't know what to try next :(
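     A leftover dotdeb build of php5-common (5.3.6-6~dotdeb.1) is still being selected even though the repository was removed. One hedged way out is to force the Ubuntu version of that package back in and then retry the rest:

         # sketch: downgrade php5-common to the hardy version the other packages expect
         sudo apt-get update
         sudo apt-get install php5-common=5.2.4-2ubuntu5.17
         sudo apt-get install php5 libapache2-mod-php5

     apt-cache policy php5-common is a quick way to confirm which versions and origins apt still knows about.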

    Read the article

  • How to enable error log in lighttpd properly?

    - by Tomaszs
     I have a CentOS 5 system with Lighttpd and fastcgi enabled. It does log access but does not log errors. I get an Internal Server Error 500 with no info in the log, and when I try to open a non-existing file there is also no info in the error log. How do I enable it properly? Below is the list of modules that I've enabled: server.modules = ( "mod_rewrite", "mod_redirect", "mod_alias", # "mod_access", # "mod_cml", # "mod_trigger_b4_dl", # "mod_auth", "mod_status", "mod_setenv", "mod_fastcgi", # "mod_webdav", # "mod_proxy_core", # "mod_proxy_backend_fastcgi", # "mod_proxy_backend_scgi", # "mod_proxy_backend_ajp13", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) Here are the debugging settings: ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" debug.log-file-not-found = "enable" #debug.log-condition-handling = "enable" Paths to the error and access logs: ## where to send error-messages to server.errorlog = "/home/lxadmin/httpd/lighttpd/error.log" #### accesslog module accesslog.filename = "/home/lxadmin/httpd/lighttpd/ligh.log" FastCGI settings: fastcgi.debug = 1 fastcgi.server = ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/tmp/php.socket", "max-procs" => 12, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "2", "PHP_FCGI_MAX_REQUESTS" => "500" ) ))) And in an included config file I have: server.errorlog = "/home/httpd/mywebsite.com/stats/mywebsite.com-error_log" As for the log files: /home/httpd/mywebsite.com/stats/ -rw-r--r-- 1 apache apache 5173239 May 16 11:34 mywebsite.com-custom_log -rwxrwxrwx 1 root root 0 Mar 27 2009 mywebsite.com-error_log /home/lxadmin/httpd/lighttpd/ -rwxrwxrwx 1 apache apache 2184 Apr 22 22:59 error.log -rwxrwxrwx 1 apache apache 6088621 May 16 11:26 ligh.log I gave the error logs chmod 777 to check whether that was the issue, but apparently it's not. So my question is: what do I need to do to get the error log enabled?

    Read the article

  • Error upgrading Ubuntu server from Intrepid to Jaunty

    - by Martin
    I'm trying to upgrade an old ubuntu server from 8.10 (Intrepid) to 9.04 (Jaunty). But it fails. root@server1:/# do-release-upgrade Checking for a new ubuntu release Failed Upgrade tool signature Failed Upgrade tool Done downloading extracting 'jaunty.tar.gz' Failed to extract Extracting the upgrade failed. There may be a problem with the network or with the server. Does anyone have an idea why I get this error and how to fix it? UPDATE: I think i might have tracked the problem down. My /etc/update-manager/meta-release looks like this: [METARELEASE] URI = http://changelogs.ubuntu.com/meta-release URI_LTS = http://changelogs.ubuntu.com/meta-release-lts URI_UNSTABLE_POSTFIX = -development URI_PROPOSED_POSTFIX = -proposed If i go to http://changelogs.ubuntu.com/meta-release it has this info for Jaunty: Dist: jaunty Name: Jaunty Jackalope Version: 9.04 Date: Thu, 23 Apr 2009 12:00:00 UTC Supported: 0 Description: This is the 9.04 release Release-File: http://archive.ubuntu.com/ubuntu/dists/jaunty/Release ReleaseNotes: http://changelogs.ubuntu.com/EOLReleaseAnnouncement UpgradeTool: http://archive.ubuntu.com/ubuntu/dists/jaunty-proposed/main/dist-upgrader-all/0.111.8/jaunty.tar.gz UpgradeToolSignature: http://archive.ubuntu.com/ubuntu/dists/jaunty-proposed/main/dist-upgrader-all/0.111.8/jaunty.tar.gz.gpg Those links starting with archive.ubuntu.com are broken since jaunty is EOL. I guess i could fix this by copying this file, replacing "archive" with "old-releases", host the modified file somewhere and change the url in the meta-release file. Is this a good solution or will it make me run into worse problems?
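     Since Jaunty's files have moved off archive.ubuntu.com, that plan (pointing the upgrader at old-releases.ubuntu.com) is the usual route for EOL-to-EOL upgrades. A sketch of the idea, with the hosting location left as an assumption:

         # grab the meta-release file, rewrite the dead archive URLs, and host the copy
         wget http://changelogs.ubuntu.com/meta-release -O meta-release.fixed
         sed -i 's|archive.ubuntu.com|old-releases.ubuntu.com|g' meta-release.fixed
         # upload meta-release.fixed somewhere reachable, point URI in
         # /etc/update-manager/meta-release at that URL, then re-run:
         sudo do-release-upgrade

     Switching /etc/apt/sources.list on the Intrepid box to old-releases.ubuntu.com first is usually needed as well, since Intrepid itself is also EOL.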

    Read the article

  • How to make ssh connection between servers using public-key authentication

    - by Rafael
     I am setting up a continuous integration (CI) server and a test web server. I would like the CI server to access the web server with public-key authentication. On the web server I have created a user and generated the keys sudo useradd -d /var/www/user -m user sudo passwd user sudo su user ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/var/www/user/.ssh/id_rsa): Created directory '/var/www/user/.ssh'. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /var/www/user/.ssh/id_rsa. Your public key has been saved in /var/www/user/.ssh/id_rsa.pub. However, on the other side, the CI server copies the key to the host but it still asks for a password ssh-copy-id -i ~/.ssh/id_rsa.pub user@webserver-address user@webserver-address's password: Now try logging into the machine, with "ssh 'user@webserver-address'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. I checked on the web server and the CI server's public key has been copied to the web server's authorized_keys, but when I connect, it asks for a password. ssh 'user@webserver-address' user@webserver-address's password: If I use the root user rather than my created user (both users have the copied public keys), it connects with the public key ssh 'root@webserver-address' Welcome to Ubuntu 11.04 (GNU/Linux 2.6.18-274.7.1.el5.028stab095.1 x86_64) * Documentation: https://help.ubuntu.com/ Last login: Wed Apr 11 10:21:13 2012 from ******* root@webserver-address:~#
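     sshd silently ignores authorized_keys when the target user's home directory or ~/.ssh is too permissive, and a home of /var/www/user may well be group-writable. A sketch of the usual checks on the web server (paths taken from the question):

         chown -R user:user /var/www/user/.ssh
         chmod go-w /var/www/user              # the home must not be group/world-writable
         chmod 700 /var/www/user/.ssh
         chmod 600 /var/www/user/.ssh/authorized_keys
         # watching the server side while retrying names the exact refusal:
         sudo tail -f /var/log/auth.log

     On Ubuntu the refusal typically shows up in auth.log as "Authentication refused: bad ownership or modes for directory ...".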

    Read the article

  • How can I configure apache to cache the images that it is serving? Right now it is giving headers t

    - by Tchalvak
    Serving up images that don't seem to cache There's a LAPP (postgresql instead of mysql) running over on http://ninjawars.net. I just recently noticed that images don't seem to be caching with any kind of good frequency as I was reloading a page with a few images on it here: http://www.ninjawars.net/attack_player.php Here is an example image (they're probably all being served exactly the same): http://www.ninjawars.net/images/characters/fighter.png Checking the header, it seems that the caching is set to: Cache-Control:max-age=0 (the full header for this image-like-all-the-others is... Request URL:http://www.ninjawars.net/images/characters/fighter.png Request Method:GET Status Code:200 OK Request Headers Accept:application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,/;q=0.5 Cache-Control:max-age=0 Referer:http://www.ninjawars.net/images/characters/fighter.png User-Agent:Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.3 Safari/533.4 Response Headers Accept-Ranges:bytes Content-Length:938 Content-Type:image/png Date:Thu, 13 May 2010 21:24:07 GMT ETag:"ffd4d-3aa-4837efc120540" Last-Modified:Mon, 05 Apr 2010 15:28:45 GMT Server:Apache ) So what modules or config or htaccess or whatever do I change to have it cache images, e.g. for 24 hours?
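     One detail worth noting: the Cache-Control: max-age=0 shown is in the request headers (a browser reload), while the response simply carries no caching directives at all, so the browser revalidates every time. A hedged sketch for Apache, assuming mod_expires can be enabled:

         # enable the module (Debian/Ubuntu layout): a2enmod expires
         <IfModule mod_expires.c>
             ExpiresActive On
             ExpiresByType image/png  "access plus 24 hours"
             ExpiresByType image/jpeg "access plus 24 hours"
             ExpiresByType image/gif  "access plus 24 hours"
         </IfModule>

     Placed in the server config or an .htaccess covering /images/, this should make responses include Expires and Cache-Control: max-age headers.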

    Read the article

  • Symlinks are inaccessible by their full path on OS X

    - by Computer Guru
     Hi, I have symlinks pointing to applications placed in /usr/local/bin which is in the path. However, I can't run these applications from other folders. Even weirder, I can't access them by the full path to the symlink. [mqudsi@iqudsi:Desktop/EasyBCD]$ echo $path (03-26 13:42) /opt/local/bin /opt/local/sbin /usr/local/bin /usr/local/sbin/ /usr/local/CrossPack-AVR/bin /usr/bin /bin /usr/sbin /sbin /usr/local/bin /usr/X11/bin [mqudsi@iqudsi:local/bin]$ ls -l /usr/local/bin (03-26 13:47) total 24280 -rwxr-xr-x 1 mqudsi wheel 18464 May 14 2009 ascii-xfr -rwxr-xr-x 1 mqudsi wheel 12567 Mar 25 04:50 brew -rwxr-xr-x 1 mqudsi wheel 17768 Dec 11 12:41 bsdiff -rwxr-xr-x 1 mqudsi wheel 43024 Mar 28 2009 dumpsexp -rwxr-xr-x 1 mqudsi wheel 280 Sep 10 2009 easy_install -rwxr-xr-x 1 mqudsi wheel 288 Sep 10 2009 easy_install-2.6 -rwxr-xr-x 1 mqudsi wheel 39696 Apr 5 2009 fuse_wait lrwxr-xr-x 1 mqudsi wheel 29 Mar 25 04:53 git -> ../Cellar/git/1.7.0.3/bin/git [mqudsi@iqudsi:local/bin]$ /usr/local/bin/git (03-26 13:47) zsh: no such file or directory: /usr/local/bin/git Clearly the link is there, but I'm not able to get to it :S
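     "no such file or directory" when executing a symlink usually means the target of the link is missing, not the link itself; the git link here points at a relative Cellar path. A quick hedged check:

         readlink /usr/local/bin/git                    # shows the relative target
         ls -l /usr/local/Cellar/git/1.7.0.3/bin/git    # does the target actually exist?
         # if the target is gone, re-creating the links is usually enough, e.g.:
         brew link git

     (brew link assumes the Cellar still contains a git install; otherwise brew install git would be the next step.)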

    Read the article

  • dhclient requests filling memory?

    - by shanethehat
     Dammit Jim, I'm a web developer, not a sys admin. With that out of the way, my client has a CentOS server (6.2) that is only serving a single Magento site (and the associated MySQL server) and it is frequently running out of memory, despite the site only currently being open to 5 users. I'm investigating the logs to try to figure out why the memory usage is so high, but I don't really know what I'm looking at. It seems that there are a lot of entries in /var/log/messages concerning DHCP requests, approximately one every 15 seconds, that look like this: Apr 7 14:23:06 s15940039 dhclient[815]: DHCPREQUEST on eth0 to 172.30.102.85 port 67 (xid=0x6b5cd2a7) Is this normal? I don't see anything else in here that I don't recognise, but then I'm not sure I'd know the problem if I did see it. 4 days ago the server ran out of memory completely and locked up, requiring a restart. The DHCP messages did not start up again for 23 hours, but then carried on as before. I have read this question which describes the same issue, but in my case a fresh DHCP lease does not ever seem to be issued. Is this something I should push back to the hosting provider, or have I not yet found the source of the memory problem?
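     Those repeated DHCPREQUESTs suggest the client never receives an acknowledgement, which is worth raising with the provider, but dhclient itself is unlikely to be what consumes the memory. A small sketch to see what does:

         # largest resident processes first
         ps aux --sort=-rss | head -15
         free -m
         # MySQL and PHP/Apache workers are the usual suspects on a Magento box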

    Read the article

  • Lost Permission on Files using wrong chmod syntax Centos 5.5

    - by alloutfallout
    Hello, I was trying to remove write permissions on an entire directory, and I used the incorrect command: chmod 644 -r sites/default I meant to type chmod -R 644 sites/default The result was this: chmod: cannot access `644': No such file or directory $ ls -als sites total 24 4 drwxr-xr-x 5 user group 4096 Jan 11 10:54 . 4 drwxrwxr-x 14 user group 4096 Jan 11 10:11 .. 4 drwxr-xr-x 4 user group 4096 Jan 5 01:25 all 4 d-w------- 3 user group 4096 Jan 11 10:43 default 4 -rw-r--r-- 1 user group 1849 Apr 15 2010 example.sites.php I fixed the permissions on the default folder with $ chmod 644 sites/default But, the following ls shows a all the files with red backgrounds and question marks. I can't access any files unless I am root. $ ls -als sites/default total 0 ? ?--------- ? ? ? ? ? . ? ?--------- ? ? ? ? ? .. ? ?--------- ? ? ? ? ? default.settings.php ? ?--------- ? ? ? ? ? files ? ?--------- ? ? ? ? ? settings.php When I log in as root, I can edit all of the files, and their permissions appear correctly. I do not know how to undo the damage caused by using -r with chmod instead of -R. Any Suggestions?
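     The question marks appear because sites/default lost its execute (search) bit, and chmod 644 on a directory still leaves it unsearchable. As root, a hedged way to put things back (owner/group taken from the ls output):

         chmod 755 sites/default
         # reapply sane defaults recursively: directories 755, files 644
         find sites/default -type d -exec chmod 755 {} \;
         find sites/default -type f -exec chmod 644 {} \;
         # restore the original owner if needed
         chown -R user:group sites/default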

    Read the article

  • Can I use CNAME with ip address? Why If works (sometimes)?

    - by Maciek Sawicki
     I believe that the easiest answer to the first question is "No, you have "A" for this", but I accidentally set up a subdomain using a CNAME pointing to an IP address and it worked on a few computers in my office. I wonder how that was possible. Now, when I check it from home I get the following error: beast:~ viroos$ host somesubdomain.somedomain.com Host somesubdomain.somedomain.com not found: 3(NXDOMAIN) I'm 100% sure it used to work at my office (currently it looks like it doesn't, but I'm checking on a different machine). Therefore I'm not 100% sure whether it worked due to some special network setup or because I tested it just after adding the DNS entry. I know this story sounds a little crazy/incredible, but can someone help me solve this puzzle? //edit: I'm adding dig output ; <<>> DiG 9.6-ESV-R4-P3 <<>> somesubdomain.somedomain.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 60224 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;somesubdomain.somedomain.com. IN A ;; ANSWER SECTION: somesubdomain.somedomain.com. 67 IN CNAME xxx.xxx.xxx.xx1. ;; AUTHORITY SECTION: . 1800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2012040901 1800 900 604800 86400 ;; Query time: 72 msec ;; SERVER: 8.8.8.8#53(8.8.8.8) ;; WHEN: Tue Apr 10 00:11:01 2012 ;; MSG SIZE rcvd: 136
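     The dig output shows the answer is a CNAME whose target is the IP literal treated as a hostname ("xxx.xxx.xxx.xx1."), so whether it resolves depends entirely on the client's resolver quirks. The dependable fix is an A record; a sketch of the zone line (placeholder IP from the documentation range):

         somesubdomain.somedomain.com.  3600  IN  A  192.0.2.10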

    Read the article

  • Requesting better explanation for expires headers

    - by syn4k
    I have successfully implemented expires headers however, for several days I have been stumped by one thing. This article: http://www.tipsandtricks-hq.com/how-to-add-far-future-expires-headers-to-your-wordpress-site-1533 states Keep in mind that when you use expires header the files are cached in the browser until it expires so do not use this on files that changes frequently. Other sites indicate the same in my reading. But this doesn't seem to be true. I have updated an image, using the same name, several times. Each time I update and refresh my browser, the new image (with the same name) displays. I understand from this article that the old image should display unless I use a new name. Do you happen to know where the misunderstanding is? I have verified that the image in question has expires headers set on it: Request Headers: Host domain.com User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.28) Gecko/20120306 Firefox/3.6.28 FirePHP/0.5 Accept image/png,image/*;q=0.8,*/*;q=0.5 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Referer http://domain.com/index.php Cookie __utma=1.61479883.1332439113.1332783348.1332796726.4; __utmz=1.1332439113.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none);PHPSESSID=lv2hun9klt2nhrdkdbqt8abug7; __utmb=1.33.10.1332796726; __utmc=1; ck_authorized=true x-insight activate If-Modified-Since Mon, 26 Mar 2012 21:55:33 GMT Cache-Control max-age=0 Response Headers: Date Mon, 26 Mar 2012 22:06:50 GMT Server Apache/2.2.3 (CentOS) Connection close Expires Wed, 25 Apr 2012 22:06:50 GMT Cache-Control max-age=2592000

    Read the article

  • SSH works in putty but not terminal

    - by Ryan Naddy
    When I try to ssh this in a terminal: ssh [email protected] I get the following error: Connection closed by 69.163.227.82 When I use putty, I am able to connect to the server. Why is this happening, and how can I get this to work in a terminal? ssh -v [email protected] OpenSSH_6.0p1 (CentrifyDC build 5.1.0-472) (CentrifyDC build 5.1.0-472), OpenSSL 0.9.8w 23 Apr 2012 debug1: Reading configuration data /etc/centrifydc/ssh/ssh_config debug1: /etc/centrifydc/ssh/ssh_config line 52: Applying options for * debug1: Connecting to sub.domain.com [69.163.227.82] port 22. debug1: Connection established. debug1: identity file /home/ryannaddy/.ssh/id_rsa type -1 debug1: identity file /home/ryannaddy/.ssh/id_rsa-cert type -1 debug1: identity file /home/ryannaddy/.ssh/id_dsa type -1 debug1: identity file /home/ryannaddy/.ssh/id_dsa-cert type -1 debug1: identity file /home/ryannaddy/.ssh/id_ecdsa type -1 debug1: identity file /home/ryannaddy/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5 debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH_5* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.0 debug1: Miscellaneous failure Cannot resolve network address for KDC in requested realm debug1: Miscellaneous failure Cannot resolve network address for KDC in requested realm debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP Connection closed by 69.163.227.82
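     The verbose output shows this is Centrify's OpenSSH build and that it stumbles over a GSSAPI/KDC lookup before the connection drops. Two hedged things to try from the same terminal, simply to narrow down whether the client build or GSSAPI is at fault:

         # skip GSSAPI so the KDC lookup cannot interfere
         ssh -o GSSAPIAuthentication=no user@sub.domain.com
         # if a stock OpenSSH client is installed alongside Centrify's, try it directly
         /usr/bin/ssh -v user@sub.domain.com

     (user@sub.domain.com stands in for the real account and host from the question.)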

    Read the article

  • How to disable mod_security2 rule (false positive) for one domain on centos 5

    - by nicholas.alipaz
    Hi I have mod_security enabled on a centos5 server and one of the rules is keeping a user from posting some text on a form. The text is legitimate but it has the words 'create' and an html <table> tag later in it so it is causing a false positive. The error I am receiving is below: [Sun Apr 25 20:36:53 2010] [error] [client 76.171.171.xxx] ModSecurity: Access denied with code 500 (phase 2). Pattern match "((alter|create|drop)[[:space:]]+(column|database|procedure|table)|delete[[:space:]]+from|update.+set.+=)" at ARGS:body. [file "/usr/local/apache/conf/modsec2.user.conf"] [line "352"] [id "300015"] [rev "1"] [msg "Generic SQL injection protection"] [severity "CRITICAL"] [hostname "www.mysite.com"] [uri "/node/181/edit"] [unique_id "@TaVDEWnlusAABQv9@oAAAAD"] and here is /usr/local/apache/conf/modsec2.user.conf (line 352) #Generic SQL sigs SecRule ARGS "((alter|create|drop)[[:space:]]+(column|database|procedure|table)|delete[[:space:]]+from|update.+set.+=)" "id:1,rev:1,severity:2,msg:'Generic SQL injection protection'" The questions I have are: What should I do to "whitelist" or allow this rule to get through? What file do I create and where? How should I alter this rule? Can I set it to only be allowed for the one domain, since it is the only one having the issue on this dedicated server or is there a better way to exclude table tags perhaps? Thanks guys
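     Since the log names the rule id (300015), one approach (a sketch for mod_security 2; place it in the vhost for the affected domain so other sites keep the rule) is to drop that id only for the URL that trips it:

         <LocationMatch "^/node/[0-9]+/edit$">
             SecRuleRemoveById 300015
         </LocationMatch>

     Removing the id server-wide, or rewriting the regex in modsec2.user.conf, are the blunter alternatives.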

    Read the article

  • Jobs with anacron won't run

    - by mareser
    I would like to run two bash scripts daily using anacron in order to backup some data. Unfortunately I can't figure out why said scripts are not executed. For test purposes I let cron execute the scripts and it worked fine. cat /etc/anacrontab gives # /etc/anacrontab: configuration file for anacron # See anacron(8) and anacrontab(5) for details. SHELL=/bin/sh PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin # These replace cron's entries 1 5 cron.daily nice run-parts --report /etc/cron.daily 7 10 cron.weekly nice run-parts --report /etc/cron.weekly @monthly 15 cron.monthly nice run-parts --report /etc/cron.monthly 1 5 TB_bak /bin/sh /home/vasco2/Dropbox/Scripts/backup_TB.sh 1 5 key_db_bak /bin/sh /home/vasco2/Dropbox/Scripts/bak_key_db.sh The output of ls ~/Dropbox/Scripts/ is backup_TB.sh bak_key_db.sh I use Linux Mint Katya. uname -a gives Linux vasco2 2.6.38-8-generic-pae #42-Ubuntu SMP Mon Apr 11 05:17:09 UTC 2011 i686 i686 i386 GNU/Linux I would be very happy if somebody could point me in the right direction on why those scripts won't get executed. P.S.: There is no anacron tag on superuser.com. Maybe somebody wants to change that.
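     Two quick hedged checks: run the jobs in the foreground so any error is printed, and look at the timestamp files anacron keeps, since a job only runs again once its period (here 1 day) has elapsed since the recorded date:

         sudo anacron -f -n -d     # force all jobs now, stay in the foreground, log to stderr
         ls -l /var/spool/anacron/

     It is also worth remembering that anacron runs as root with the PATH from anacrontab, so anything the scripts assume from the user environment (a Dropbox-mounted home, extra PATH entries) has to be spelled out inside the scripts themselves.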

    Read the article

  • Why Wireshark does not recognize this HTTP response?

    - by Alois Mahdal
    I have a trivial CGI script that outputs simple text content. It's written in Perl and using CGI module and it specifies only the most basic headers: print $q->header( -type => 'text/plain', -Content_length => $length, ); print $stuff; There's no apparent issue with functionality, but I'm confused about the fact that Wireshark does not recognize the HTTP response as HTTP--it's marked as TCP. Here is request and response: GET /cgi-bin/memfile/memfile.pl?mbytes=1 HTTP/1.1 Host: 10.6.130.38 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: cs,en-us;q=0.7,en;q=0.3 Accept-Encoding: gzip, deflate Connection: keep-alive HTTP/1.1 200 OK Date: Thu, 05 Apr 2012 18:52:23 GMT Server: Apache/2.2.15 (Win32) mod_ssl/2.2.15 OpenSSL/0.9.8m Content-length: 1048616 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: text/plain; charset=ISO-8859-1 XXXXXXXX... And here is the packet overview (Full packet is here on pastebin) No. Time Source srcp Destination dstp Protocol Info tcp.stream abstime 5 0.112749 10.6.130.38 80 10.6.130.53 48072 TCP [TCP segment of a reassembled PDU] 0 20:52:23.228063 Frame 5: 1514 bytes on wire (12112 bits), 1514 bytes captured (12112 bits) Ethernet II, Src: Dell_97:29:ac (00:1e:4f:97:29:ac), Dst: Dell_3b:fe:70 (00:24:e8:3b:fe:70) Internet Protocol Version 4, Src: 10.6.130.38 (10.6.130.38), Dst: 10.6.130.53 (10.6.130.53) Transmission Control Protocol, Src Port: http (80), Dst Port: 48072 (48072), Seq: 1, Ack: 330, Len: 1460 Now when I see this in Wireshark: there's usual TCP handshake then the GET request shown as HTTP with preview then the next packet contains the response, but is not marked as an HTTP response--just a generic "[TCP segment of a reassembled PDU]", and is not caught by "http.response" filter. Can somebody explain why Wireshark does not recognize it? Is there something wrong with the response?

    Read the article

  • chrooted sftp user with write permissions to /var/www

    - by matthew
    I am getting confused about this setup that I am trying to deploy. I hope someone of you folks can lend me a hand: much much appreciated. Background info Server is Debian 6.0, ext3, with Apache2/SSL and Nginx at the front as reverse proxy. I need to provide sftp access to the Apache root directory (/var/www), making sure that the sftp user is chrooted to that path with RWX permissions. All this without modifying any default permission in /var/www. drwxr-xr-x 9 root root 4096 Nov 4 22:46 www Inside /var/www -rw-r----- 1 www-data www-data 177 Mar 11 2012 file1 drwxr-x--- 6 www-data www-data 4096 Sep 10 2012 dir1 drwxr-xr-x 7 www-data www-data 4096 Sep 28 2012 dir2 -rw------- 1 root root 19 Apr 6 2012 file2 -rw------- 1 root root 3548528 Sep 28 2012 file3 drwxr-x--- 6 www-data www-data 4096 Aug 22 00:11 dir3 drwxr-x--- 5 www-data www-data 4096 Jul 15 2012 dir4 drwxr-x--- 2 www-data www-data 536576 Nov 24 2012 dir5 drwxr-x--- 2 www-data www-data 4096 Nov 5 00:00 dir6 drwxr-x--- 2 www-data www-data 4096 Nov 4 13:24 dir7 What I have tried created a new group secureftp created a new sftp user, joined to secureftp and www-data groups also with nologin shell. Homedir is / edited sshd_config with Subsystem sftp internal-sftp AllowTcpForwarding no Match Group <secureftp> ChrootDirectory /var/www ForceCommand internal-sftp I can login with the sftp user, list files but no write action is allowed. Sftp user is in the www-data group but permissions in /var/www are read/read+x for the group bit so... It doesn't work. I've also tried with ACL, but as I apply ACL RWX permissions for the sftp user to /var/www (dirs and files recursively), it will change the unix permissions as well which is what I don't want. What can I do here? I was thinking I could enable the user www-data to login as sftp, so that it'll be able to modify files/dirs that www-data owns in /var/www. But for some reason I think this would be a stupid move securitywise.
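     One way to square "chroot to /var/www", "RWX for the sftp user" and "don't touch the existing permissions" is to leave /var/www alone and expose it through a bindfs remount that remaps ownership inside the chroot. A sketch (bindfs availability and the mount point are assumptions):

         apt-get install bindfs
         mkdir -p /srv/sftp-www
         bindfs --force-user=sftpuser --force-group=www-data --create-for-user=www-data \
                /var/www /srv/sftp-www
         # then set ChrootDirectory /srv/sftp-www (root-owned, not group-writable) in sshd_config

     The alternative of adding group-write bits or ACLs directly on /var/www conflicts with the "no permission changes" requirement, as noted in the question.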

    Read the article

  • Better logging for cronjob output using /usr/bin/logger

    - by Stefan Lasiewski
     I am looking for a better way to log cronjobs. Most cronjobs tend to spam email or the console, get ignored, or create yet another logfile. In this case, I have a Nagios NSCA script which sends data to a central Nagios server. This send_nsca script also prints a single status line to STDOUT, indicating success or failure. 0 * * * * root /usr/local/nagios/sbin/nsca_check_disk This emails the following message to root@localhost, which is then forwarded to my team of sysadmins. Spam. forwarded nsca_check_disk: 1 data packet(s) sent to host successfully. I'm looking for a log method which: Doesn't spam the messages to email or the console. Doesn't create yet another crufty logfile which requires cleanup months or years later. Captures the log information somewhere, so it can be viewed later if desired. Works on most Unixes. Fits into an existing log infrastructure. Uses common syslog conventions like 'facility'. Some of these are third-party scripts, and don't always do logging internally. UPDATE 2010-04-30 In the process of writing this question, I think I have answered myself. So I'll answer myself "Jeopardy-style". Is there any problem with this method? The following will send any Cron output to /usr/bin/logger, which will send to syslog, with a 'tag' of 'nsca_check_disk'. Syslog handles it from there. My systems (CentOS and FreeBSD) already handle log rotation. */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 |/usr/bin/logger -t nsca_check_disk /var/log/messages now has one additional message which says this: Apr 29, 17:40:00 192.168.6.19 nsca_check_disk: 1 data packet(s) sent to host successfully. I like /usr/bin/logger, because it works well with an existing syslog configuration and infrastructure, and is included with most Unix distros. Most *nix distributions already do log rotation, and do it well.

    Read the article

  • How do I deliver mail for wildcard addresses to a particular user/alias/program?

    - by David M
    I need to configure sendmail so that mail delivered for wildcard addresses is accepted for delivery and then delivered to a user, alias, or directly to a script. I can rewrite the envelope/headers any number of ways, but I don't know how to accept the wildcard address when it's provided in RCPT TO: Everything I've tried so far winds up with a 550 user unknown error. So here's a specific example: I want to be able to handle any address that consists of a series of digits followed by a dot followed by a word, then pipe that to a script. If the headers get rewritten, that's OK, but I need the envelope to contain the actual Delivered-To address. Here's the sort of SMTP session I need: 220 blah.foo.com ESMTP server ready; Thu, 22 Apr 2010 20:41:08 -0700 (PDT) HELO blort.foo.com 250 blah.foo.com Hello blort.foo.com [10.1.2.3], pleased to meet you MAIL FROM: <[email protected]> 250 2.1.0 <[email protected]>... Sender ok RCPT TO: <[email protected]> 250 2.1.5 <[email protected]>... Recipient ok I tried some stuff with regex maps, but I never got past 550 user unknown.

    Read the article
