Search Results

Search found 24814 results on 993 pages for 'linux distro'.

Page 398/993 | < Previous Page | 394 395 396 397 398 399 400 401 402 403 404 405  | Next Page >

  • Lighttpd mod_accesslog not logging fastcgi requests

    - by zepatou
    I recently installed lighttpd to serve a Python script via mod_fastcgi. Everything works fine except that the requests handled by mod_fastcgi are not logged in the access.log file (requests on port 80 are logged, though). My lighttpd version is 1.4.28 on Debian 6.0. The same configuration worked on an Ubuntu 10.04 server with lighttpd 1.4.26. Here is my config:

    lighttpd.conf

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_accesslog",
            "mod_compress",
        )
        server.document-root = "/var/www/"
        server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
        server.errorlog = "/home/log/lighttpd/error.log"
        index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" )
        accesslog.filename = "/home/log/lighttpd/access.log"
        url.access-deny = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
        server.pid-file = "/var/run/lighttpd.pid"
        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

    conf-enabled/10-fastcgi.conf

        server.modules += ( "mod_fastcgi" )
        fastcgi.server = (
            "/" => (
                (
                    "min-procs" => 1,
                    "check-local" => "disable",
                    "host" => "127.0.0.1", # local
                    "port" => 3000
                ),
            )
        )

    Any idea?
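
    A hedged diagnostic sketch (an assumption to test, not a confirmed fix): lighttpd calls its modules in load order, so it is worth loading mod_fastcgi explicitly in lighttpd.conf ahead of mod_accesslog instead of appending it in the include, and enabling request-handling debug output to see how the fastcgi requests are dispatched:

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_fastcgi",    # loaded here instead of via += in 10-fastcgi.conf ...
            "mod_accesslog",  # ... so the access logger registers after it
            "mod_compress",
        )
        debug.log-request-handling = "enable"  # dispatch decisions go to error.log

    If the debug output shows fastcgi requests completing without ever reaching the accesslog stage, that narrows the problem to module order rather than to the fastcgi.server block.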

    Read the article

  • Server not responding to SSH and HTTP but ping works

    - by yes123
    Hello guys, I requested a hard reboot because neither SSH nor HTTP worked; ping worked normally. Which logs should I check to understand what the problem was? Thanks! (Debian 6 with LAMP.) Edit: my memory and swap:

        Mem:  4040068k total, 1114920k used, 2925148k free, 109212k buffers
        Swap: 1051384k total,       0k used, 1051384k free, 283820k cached

    4 GB of RAM (and more than 1 TB of HDD). The cause dates from 2 days ago: look how swap usage climbed more than 60% in less than 10 hours. My control panel's top-5 memory usage list shows apache2 processes. If every apache2 process is 190 MB, that is bad, because when I run top I see 262 sleeping processes, most of them apache2! My Apache mpm_prefork settings are:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            ServerLimit        1500
            MaxClients         1500
            MaxRequestsPerChild 2000
        </IfModule>
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 4
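
    A hedged sizing sketch: under prefork, worst-case Apache memory use is roughly MaxClients times per-child RSS. The settings above allow 1500 x ~190 MB, about 285,000 MB, against 4 GB of physical RAM, so a traffic spike can push the box so deep into swap that SSH and HTTP stop responding while ping (answered in the kernel) still works. A rough cap for this machine, with the per-child figure treated as an assumption to verify against actual RSS:

        # ~3 GB budget for Apache / ~190 MB per child ~= 15 children
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            ServerLimit          15
            MaxClients           15
            MaxRequestsPerChild 2000
        </IfModule>

    Note that the 190 MB figure usually includes memory shared between children, so the real safe MaxClients may be somewhat higher; measuring with ps or smem before committing is worthwhile.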

    Read the article

  • How Do I Use Multiple Versions of OpenSSL ... One for Apache and one for PHP

    - by Ken S.
    I have an Apache 2.2 (self-compiled) server that is getting dinged during a PCI scan because it does not support TLS 1.1 or 1.2 ciphers. After some digging I found that the installed version of OpenSSL (0.9.8e) does not contain the newest TLS support. So I downloaded and compiled the latest version of OpenSSL (1.0.1c) and installed it in an alternate location under /opt so it wouldn't interfere with the installed version. What I would like to do is compile Apache against the 1.0.1 libraries and keep the system-installed libraries for use with PHP, cURL, openssh, etc. I'm hoping that doing it this way will let Apache use the newest TLS without breaking any other programs that require the old libraries. I thought I could do this by adding an entry into /etc/ld.so.conf that points to the new libraries, but I think this would conflict with the existing ones; i.e., two references to libcrypto could cause everything to have issues. The main reason for doing it this way is that PHP cURLing to external servers has issues with the latest OpenSSL libs, which would require edits to our PHP code. Would love some guidance on how best to accomplish this.
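
    A hedged build sketch: linking Apache's mod_ssl against the alternate OpenSSL and baking the library path into the binary with an rpath avoids touching /etc/ld.so.conf at all, so PHP, cURL and openssh keep resolving the system libraries. The paths below are assumptions based on the /opt install mentioned above:

        # configure httpd against the self-built OpenSSL in /opt
        ./configure --enable-ssl \
            --with-ssl=/opt/openssl-1.0.1c \
            LDFLAGS="-Wl,-rpath,/opt/openssl-1.0.1c/lib"
        make && make install

        # verify which libssl/libcrypto the new httpd actually resolves
        ldd /usr/local/apache2/bin/httpd | grep -E 'libssl|libcrypto'

    Because the path is embedded in the httpd binary itself, no other program on the system changes its library resolution.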

    Read the article

  • how do I set up a virtual host (it's not working, and I've done everything right)

    - by piratepartypumpkin
    My router redirects port 80 to port 8080. My router works fine and my domain name is routed properly. This is my virtual hosts file:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot /home/admins/lampstack-5.3.16-0/apps/wordpress
            ServerName example.com
            ServerAlias www.example.com
        </VirtualHost>

    I can access my website by entering "mywebsite.com:8080" but I cannot access it by entering "mywebsite.com". For further information, this is a part of my httpd.conf:

        Listen 8080
        ServerName localhost:8080
        DocumentRoot "/home/admins/lampstack-5.3.16-0/apache2/htdocs"
        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>
        <Directory "/home/admins/lampstack-5.3.16-0/apache2/htdocs">
            Options FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>
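
    A hedged sketch of the likely mismatch: Apache is listening on 8080, but the NameVirtualHost and VirtualHost stanzas bind to *:80, so requests arriving on 8080 never match the vhost and only the bare DocumentRoot answers. Aligning the vhost with the Listen port (assuming the router keeps forwarding external 80 to internal 8080) would look like:

        Listen 8080
        NameVirtualHost *:8080
        <VirtualHost *:8080>
            DocumentRoot /home/admins/lampstack-5.3.16-0/apps/wordpress
            ServerName example.com
            ServerAlias www.example.com
        </VirtualHost>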

    Read the article

  • Why does sshd give a different identification when connecting through netcat?

    - by Robbie Mckennie
    I have been attempting to create a way to ssh into a machine hiding behind a firewall. I set up my ssh client with the option ProxyCommand /usr/bin/ncat -l 2000, and then I connect it to sshd with ncat <client> 2000 -c "sshd -i" on the server. It works, in that I can get a shell on the server, but the server sends a different key than when I use normal ssh. So the question is: why? Is the key different when sshd is called in this unusual way?
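
    A hedged diagnostic sketch: one thing worth ruling out is that the manually invoked sshd reads a different configuration, and therefore different HostKey files, than the init-managed daemon. Comparing fingerprints directly and running the inetd-style instance with debug output and a pinned config would show whether the keys really differ (paths are the usual defaults, so treat them as assumptions):

        # fingerprint of a host key the system daemon should be using
        ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub

        # fingerprint actually presented by the running daemon
        ssh-keyscan -t rsa localhost

        # netcat-spawned instance with debug output and an explicit config
        ncat <client> 2000 -c "/usr/sbin/sshd -i -d -f /etc/ssh/sshd_config"

    The -d output lists exactly which host key files are loaded, which usually settles the question.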

    Read the article

  • Different external ip addresses from different sites

    - by user630286
    My router is ClearOS 6 (CentOS 6). In my router I have two external (internet) network connections from two ISPs. The primary connection is eth2, connected to a cable modem, and the second one is ppp0, connected to a DSL modem. I have assigned eth2 as the primary connection (with a high metric value); in fact this is done through ClearOS's multiwan web interface. I have a test in my Nagios to monitor whether the primary connection is the one in use. This check is based on the result of curl ifconfig.me. But it seems that ifconfig.me always gives the IP address of my secondary connection. I tested it through a browser: yes, ifconfig.me gives the secondary internet's (ppp0) IP address, but whatismyipaddress.[com|org] gives my primary IP address (eth2). I checked the default route on the router with ip route list 0/0, which also shows the primary connection (eth2) as the default route. traceroute www.google.com and traceroute ifconfig.me both seem to trace through the primary connection (eth2). As our secondary internet connection has only a limited download allowance, I don't want to end up having to pay a large sum at the end of the month. Does somebody have an idea why ifconfig.me shows my secondary address? What is the best way to ensure that my router (and thus the LAN) uses the right internet connection?
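
    A hedged diagnostic sketch: multiwan setups usually steer traffic with policy routing rather than the main table, so ip route list 0/0 alone does not tell the whole story. Checking the rule set, asking the kernel which route a specific destination takes, and forcing curl out of each interface can pinpoint which link ifconfig.me really sees:

        ip rule list                       # policy-routing rules multiwan installed
        ip route get $(dig +short ifconfig.me | head -1)
        curl --interface eth2 ifconfig.me  # what the primary link reports
        curl --interface ppp0 ifconfig.me  # what the secondary link reports

    If ip route get shows ifconfig.me's address matching a rule that sends it via ppp0, the multiwan load-balancing rules, not the default route, explain the result.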

    Read the article

  • Cron ignoring an update to crontab

    - by GJ
    I've commented out a line in the crontab on a Debian server, a line which I guess was there by default yet was causing me to get error emails every hour:

        # m h dom mon dow user  command
        17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly

    However, the error emails keep coming in as if it hadn't been commented out. The error emails:

        Subject: Cron <root@(none)> root cd / && run-parts --report /etc/cron.hourly (failed)

        /bin/sh: root: not found

    Any ideas?
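
    A hedged diagnostic sketch: the error mail's command line starts with root cd /, and /bin/sh: root: not found means something is executing root as if it were the command. That is the signature of a six-field system-crontab line (which has a user column) pasted into a five-field per-user crontab, where root gets parsed as the program to run. Checking root's personal crontab, as distinct from /etc/crontab, would confirm it:

        crontab -l -u root    # per-user crontab: five time fields, no user column
        cat /etc/crontab      # system crontab: six fields including the user

    If the run-parts line shows up in crontab -l -u root, removing it there (crontab -e -u root) should stop the mails, regardless of what is commented out in /etc/crontab.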

    Read the article

  • What's going on with traceroute?

    - by Kevin
    The following is what happens when I run traceroute from a certain location:

        # traceroute google.com
        traceroute to google.com (74.125.227.39), 30 hops max, 60 byte packets
         1  gateway.local.enactpc.com (10.0.0.1)  0.138 ms  0.101 ms  0.084 ms
         2  * * *
         3  * * *
         ... (every remaining hop through 30 is likewise * * *)
        30  * * *

    Absolutely nothing of interest... Now, originally I thought this was just a fact of the location's network setup. (I assume they block pings or something...) However, watch what happens when I use nmap to run a traceroute:

        # nmap -sP --traceroute google.com
        Starting Nmap 5.21 ( http://nmap.org ) at 2012-09-25 22:18 CDT
        Nmap scan report for google.com (74.125.227.40)
        Host is up (0.034s latency).
        Hostname google.com resolves to 11 IPs. Only scanned 74.125.227.40
        rDNS record for 74.125.227.40: dfw06s06-in-f8.1e100.net

        TRACEROUTE (using proto 1/icmp)
        HOP RTT      ADDRESS
        1   0.19 ms  gateway.local.enactpc.com (10.0.0.1)
        2   1.93 ms  99-20-92-1.lightspeed.austtx.sbcglobal.net (99.20.92.1)
        3   25.61 ms 99-20-92-2.lightspeed.austtx.sbcglobal.net (99.20.92.2)
        4 ... 6
        7   23.68 ms 12.83.68.137
        8   31.30 ms gar23.dlstx.ip.att.net (12.122.85.73)
        9 ...
        10  31.82 ms 72.14.233.65
        11  32.27 ms 209.85.250.77
        12  32.98 ms dfw06s06-in-f8.1e100.net (74.125.227.40)

        Nmap done: 1 IP address (1 host up) scanned in 3.29 seconds

    When using nmap I get A LOT more results than with traceroute. Why? Note: I checked, and the difference in target IP addresses is not related.
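
    A hedged explanation with a sketch to test it: Linux traceroute sends UDP probes to high ports by default, which many networks filter, while the nmap run above used ICMP echo (its output even says proto 1/icmp). If that is what is happening here, plain traceroute should produce the full path as soon as it is switched to ICMP or TCP probes:

        traceroute -I google.com   # ICMP echo probes, like nmap used
        traceroute -T google.com   # TCP SYN probes (port 80 by default)
        traceroute google.com      # default UDP probes, the filtered case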

    Read the article

  • Only tunnel certain applications via OpenVPN

    - by jinjin
    Hi, I've purchased a VPN solution. It works correctly when I have "redirect-gateway def1" in the configuration file (routing all traffic through the VPN). However, when I remove that line, I am still able to ping out of the machine (ping -I tap0), but I cannot ping the IP assigned to the machine (it's a public IP); I get the error: Destination Host Unreachable. I only want certain applications (e.g. ZNC, irssi) to send traffic through the VPN tunnel, and I can select which IP each of them uses. However, they can't receive any data, making the tunnel essentially useless to me when redirect-gateway is disabled. Any ideas on how to let specific applications use the tunnel, without forcing everything to go through it? My configuration file is as follows:

        dev tap
        remote #.#.#.#
        float #.#.#.#
        port 5129
        comp-lzo
        ifconfig #.#.#.# 255.255.255.128
        route-gateway #.#.#.#
        #redirect-gateway def1
        secret key.txt
        cipher AES-128-CBC

    The output of ifconfig -a when the tunnel is connected:

        tap0  Link encap:Ethernet  HWaddr 00:ff:47:d3:6d:f3
              inet addr:#.#.#.#  Bcast:#.#.#.#  Mask:255.255.255.255
              inet6 addr: <snip> Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:612 errors:0 dropped:0 overruns:0 frame:0
              TX packets:35 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:100
              RX bytes:25704 (25.1 KiB)  TX bytes:6427 (6.2 KiB)

    EDIT: the Bcast:#.#.#.# (ifconfig) is different from route-gateway #.#.#.# (openvpn), if that makes any difference.
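
    A hedged source-routing sketch: since replies must come back to the tunnel's public IP, the usual fix is a second routing table so anything sourced from that IP (the address the applications bind to) leaves via tap0, while the main table stays untouched. A minimal version, with VPN_IP and VPN_GW standing in for the tunnel address and the route-gateway above:

        # one-time: name routing table 100 "vpn"
        echo "100 vpn" >> /etc/iproute2/rt_tables

        # route traffic sourced from the tunnel IP through the tunnel
        ip route add default via $VPN_GW dev tap0 table vpn
        ip rule add from $VPN_IP/32 table vpn

    Applications such as ZNC or irssi then simply bind to $VPN_IP and their reply traffic follows table vpn; everything else keeps using the default route.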

    Read the article

  • Debian: SSH: "PermitRootLogin=forced-commands-only" stopped working

    - by Brent
    I have several servers running Debian Lenny. Just recently I discovered the PermitRootLogin=forced-commands-only directive for ssh, which allows me to run a scripted rsync as root with an SSH key, without enabling more general root ssh access. However, last week this stopped working, apparently on all of my servers, and I can't figure out why. Everything continues to work fine with PermitRootLogin=yes, but I would prefer to block root logins, especially via passwords. The day it stopped working, we reconfigured some of the ports on one of our switches (which we later reverted), but I can't see that affecting this, since it still works with PermitRootLogin set to yes. How can I diagnose why the forced-commands-only directive has apparently stopped working?
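
    A hedged debugging sketch: running a second sshd in the foreground on a spare port, with a verbose client against it, shows exactly how the key and the forced command are evaluated, without touching the production daemon (port 2222 is an arbitrary choice):

        # on the server: one-shot daemon with debug output
        /usr/sbin/sshd -d -p 2222

        # from the client: verbose connection with the rsync key
        ssh -v -p 2222 -i /path/to/rsync_key root@server

    The -d output will say whether the key matched and whether a command= restriction from authorized_keys was applied, which usually narrows down what changed.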

    Read the article

  • Accidentally dd'ed an image to wrong drive / overwrote partition table + NTFS partition start

    - by Kento Locatelli
    I screwed up and set the wrong output for dd when trying to copy a FreeNAS ISO, overwriting the wrong external hard drive. Ironically, I was trying to set up a FreeNAS server for data backup...

    The external drive is only used for data storage; the system is entirely intact.
    The drive had a single NTFS partition filling the entire device (2 TB WD Elements).
    The drive originally had an MBR partition table; it now shows as having a GPT, presumably from the FreeNAS image.
    The drive was mounted at the time, with maybe a couple of kB of data written/read after running dd.
    The drive is just a few months old and healthy (regular SMART / fs checks).
    I have not rebooted the OS (CrunchBang).
    /proc/partitions still holds the correct information (and has been stored).
    I have dd's output (records in / out / bytes).
    testdisk did not find any partitions on quick or deep search.
    I am running photorec to recover the more important data (a couple of recent plaintext files that hadn't been backed up yet). The vast majority of the disk content (over 80%) is unnecessary media files.

    My current plan is to let photorec do its thing, then recreate the MBR with gparted and use cfdisk to create another NTFS partition using the sector information from /sys/block/.../. Is that a good course of action (that is, does it have a chance of success)? Or is there anything else I should try first? Possibly relevant information:

        dd if=FreeNAS-8.0.4-RELEASE-p3-x86.iso of=/dev/sdc:
        194568+0 records in
        194568+0 records out
        99618816 bytes (100 MB) copied

        grep . /sys/block/sdc/sdc*/{start,size}:
        /sys/block/sdc/sdc1/start:2048
        /sys/block/sdc/sdc1/size:3907022848

        cat /proc/partitions:
        major minor   #blocks  name
        ** snipped **
        8     32   1953512448 sdc
        8     33   1953511424 sdc1

        current fdisk -l output:
        WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
        Disk /dev/sdc: 2000.4 GB, 2000396746752 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/sdc doesn't contain a valid partition table
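
    A hedged recovery sketch building on the plan above: the recorded numbers are self-consistent (start 2048 + size 3907022848 sectors = 3907024896 sectors, exactly the 2000396746752-byte disk), so the MBR entry can be recreated byte-identically. Only the first ~100 MB of the partition were overwritten; the NTFS backup boot sector at the end of the volume may have survived, which testdisk can use afterwards. The sfdisk syntax below is version-dependent, so treat this as a sketch to adapt, and image the drive first:

        # image the drive first so every step is reversible (needs 2 TB of space)
        dd if=/dev/sdc of=/mnt/spare/sdc.img bs=1M conv=noerror,sync

        # recreate the single NTFS (type 7) entry at the recorded sector offsets
        echo "2048,3907022848,7" | sfdisk -uS --force /dev/sdc

        # then let testdisk re-examine the partition and its backup boot sector
        testdisk /dev/sdc

    Writing only the partition table is non-destructive to the data area, so this does not conflict with letting photorec finish first.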

    Read the article

  • Accidentally deleted /opt/local/bin without backup. Any help?

    - by Aaron
    Hi all, I'm on Mac OS X 10.5.8. I was recently uninstalling mysql5 from /opt/local/bin. I typed:

        rm -rf /opt/local/bin mysql*

    instead of:

        rm -rf /opt/local/bin/mysql*

    This deleted my entire /opt/local/bin directory, which puts me in a bit of a bind. Is there any way to recover these files? If not, I have a friend who is using a similar set of programs; would it be possible to use the contents of his folder? If I end up needing to reinstall everything in this folder, what is the best way to go about doing this? Thanks in advance!
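
    A hedged MacPorts-specific sketch: /opt/local is MacPorts territory, and installed ports keep an image of their files under /opt/local/var/macports, so force-deactivating and reactivating each port can restore the deleted binaries without rebuilding. Note that the port command itself lived in /opt/local/bin, so MacPorts base would need reinstalling first (installer package or source build); the loop below is a starting point, not a tested recipe:

        # after reinstalling MacPorts base itself:
        for p in $(port -q echo active | awk '{print $1}'); do
            sudo port -f deactivate "$p"
            sudo port activate "$p"
        done

    Copying a friend's /opt/local/bin would leave the MacPorts registry out of sync with the files on disk, which tends to cause trouble during later upgrades.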

    Read the article

  • Ubuntu can't install an older version of a package

    - by Trevor Newhook
    When I try to do an apt-get install, I keep getting an error:

        Depends: libgtk-3-common (= 3.4.1-0ubuntu1) but 3.4.2-0ubuntu0.4 is to be installed

    When I run sudo apt-get -f install, I get several warnings of the form:

        dpkg: warning: files list file for package 'XXX' missing, assuming package has no files currently installed.

    then:

        Preparing to replace libgtk-3-bin 3.4.1-0ubuntu1 (using .../libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb) ...
        Adding 'diversion of /usr/sbin/update-icon-caches to /usr/sbin/update-icon-caches.gtk2 by libgtk-3-bin'
        dpkg-divert: error: rename involves overwriting `/usr/sbin/update-icon-caches.gtk2' with different file `/usr/sbin/update-icon-caches', not allowed
        dpkg: error processing /var/cache/apt/archives/libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb (--unpack):
         subprocess new pre-installation script returned error exit status 2
        Errors were encountered while processing:
         /var/cache/apt/archives/libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I'm not sure why it's complaining about a newer version of a package, but any help would be appreciated.
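
    A hedged untangling sketch: the hard failure is the dpkg-divert rename clash, where a stale copy of update-icon-caches.gtk2 already exists and differs from the file being diverted. One cautious way through, after backing the files up and treating the diversion's state as something to inspect first, is:

        # inspect the existing diversion and the files involved
        dpkg-divert --list | grep update-icon-caches
        ls -l /usr/sbin/update-icon-caches*

        # back up the stale diverted copy, then clear the diversion
        sudo mv /usr/sbin/update-icon-caches.gtk2 /root/update-icon-caches.gtk2.bak
        sudo dpkg-divert --remove --rename /usr/sbin/update-icon-caches

        # retry, pulling the whole GTK 3 set to matching versions
        sudo apt-get -f install
        sudo apt-get install libgtk-3-common libgtk-3-bin libgtk-3-0

    The "files list file missing" warnings suggest a damaged /var/lib/dpkg/info as well, which is worth investigating separately.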

    Read the article

  • How to maintain an SSD drive on Ubuntu?

    - by Julien Nicoulaud
    I am running Ubuntu 10.04 on an Intel X25-M Postville 160 GB SSD drive. How can I tell if there's something wrong? What should/can I do to maintain its performance and health? Should I use TRIM, and how often? This may look like a duplicate of this question, but I am asking more in terms of good practices and learning how to use this new technology the right way...
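
    A hedged maintenance sketch: on later Ubuntu releases the standard pieces are an occasional fstrim run (or the discard mount option) plus mount options that avoid needless writes; whether they apply to 10.04's kernel and util-linux is worth verifying, since mainstream TRIM support landed around that era:

        # /etc/fstab -- example options for an SSD-backed root (assumes ext4)
        UUID=xxxx-xxxx  /  ext4  noatime,errors=remount-ro  0  1

        # periodic TRIM where supported (weekly is a common cadence)
        sudo fstrim -v /

        # health: SMART attributes such as reallocated sectors / wearout
        sudo smartctl -a /dev/sda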

    Read the article

  • Iptables NAT logging

    - by Gerard
    I have a box set up as a router using iptables (MASQUERADE), logging all network traffic. The problem: connections from LAN IPs to the WAN show fine, i.e.

        SRC=192.168.32.10 - DST=60.242.67.190

    but for traffic coming from the WAN to the LAN it will show the WAN IP as the source with the router's IP as the destination, then the router's IP to the LAN IP, i.e.

        SRC=60.242.67.190 - DST=192.168.32.199
        SRC=192.168.32.199 (router) - DST=192.168.32.10

    How do I configure it so that it logs the conversations correctly?

        SRC=192.168.32.10 - DST=60.242.67.190
        SRC=60.242.67.190 - DST=192.168.32.10

    Any help appreciated, cheers.
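
    A hedged placement sketch: with NAT, what a LOG rule sees depends on which chain it sits in; PREROUTING logs before DNAT has mapped the destination, and POSTROUTING logs after SNAT has rewritten the source. Logging in the FORWARD chain, where DNAT has already been applied but SNAT has not, shows the real LAN and WAN addresses in both directions:

        # log routed traffic between the two rewrite steps
        iptables -A FORWARD -j LOG --log-prefix "FWD: "

    Placement matters more than the rule itself; the same LOG target in nat/POSTROUTING would again show translated addresses.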

    Read the article

  • crontab: question about a special case of the dash character in the time field spec

    - by mdpc
    In the SuSE /etc/crontab, the entry to run the cron.{hourly,daily,monthly,weekly} scripts is coded as:

        -*/15 * * * *   root  test -x /usr/lib/cron/run-crons && /usr/lib/cron/run-crons >/dev/null 2>&1

    Notice that the very first character of the specification is a dash character (-), and this is NOT a typo. Can somebody explain what the time spec '-*/15' means? BTW, everything seems to be running fine. Thanks

    Read the article

  • Cannot boot from Yumi multiboot USB stick

    - by Amator
    I've just created a multiboot USB stick using YUMI. I tried to boot my notebook (Asus K70IO) from it, but all I see is a black screen with a blinking underscore, even after waiting for minutes. If I remove the USB stick during this time, I get the message: "Operating system load error". How do I properly boot my YUMI USB stick and use it? I've tried formatting the stick as FAT32 using YUMI's checkbox, but it didn't help. I then tried SARDU 2.0.5 and hit the same problem: a black screen and a blinking underscore, and if I remove the stick I see "Operating system load error" and my OS starts to boot. At the same time, a bootable USB stick created from an ISO with UltraISO boots smoothly.

    Read the article

  • How to disable or tune filesystem cache sharing for OpenVZ?

    - by gertvdijk
    For OpenVZ, an example of container-based virtualization, it seems that the host and all guests share the filesystem cache. This sounds paradoxical when talking about virtualization, but it is actually a feature of OpenVZ, and it makes sense too: because only one kernel is running, it's possible to benefit from sharing the same pages of filesystem cache in memory. And while that sounds beneficial, I think a setup here actually suffers in performance because of it. Here's why I think so: my machines aren't actually sharing any files on disk, so I can't benefit from this feature in OpenVZ. Several OpenVZ machines are running MySQL with MyISAM tables. MyISAM relies on the system's filesystem cache for caching of data files, unlike InnoDB's buffer pool. Also, some virtual machines are known to do heavy and large I/O operations on the same filesystem in the host. For example, when running cat *.MYD > /dev/null on some large database in one machine, I saw the filesystem cache shrink in another, monitored with htop. This essentially flushes all the useful filesystem cache in the guests (FIFO), and so it flushes the MySQL caches in the guests. Now users are complaining that MySQL is very slow, and it is: some simple SELECT queries take several seconds at times when disk I/O is heavily used by other machines. So, simply put: is there a way to avoid the filesystem cache being wiped out by other virtual machines in container-based virtualization?

    Some thoughts:

    Choosing the algorithm for flushing the filesystem cache in the kernel (possible? how?).
    Reserving a certain amount of pages for a single VM (reading man vzctl, there seems to be no option for filesystem-cache pages).
    Will running MySQL on another filesystem get me anywhere?

    If not, I think my alternatives are:

    Use KVM for the VMs running MySQL-MyISAM. KVM actually assigns memory to the VM and does not allow swapping out caches unless using a balloon driver.
    Move to InnoDB and tune the buffer pools, dirty pages, etc. This is for now considered 'nice to have' on the long term, as not everyone responsible for administration of the system understands InnoDB.

    More suggestions welcome. System software: Proxmox (now 1.9, could be upgraded to 2.x). One big LV assigned for the VMs.
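
    A hedged mitigation sketch for the immediate symptom: bulk reads that evict everyone's cache can be made cache-neutral by bypassing the page cache with O_DIRECT, so a maintenance job like the cat *.MYD example stops flushing the MyISAM caches in neighbouring containers (paths are illustrative):

        # cache-polluting version: fills and evicts the shared page cache
        cat /var/lib/mysql/db/*.MYD > /dev/null

        # cache-neutral version: O_DIRECT reads bypass the page cache
        for f in /var/lib/mysql/db/*.MYD; do
            dd if="$f" iflag=direct of=/dev/null bs=1M
        done

    This only helps for jobs you control; it does not stop an arbitrary guest's workload from competing for the shared cache.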

    Read the article

  • Can a named (bind) crash make a server unreachable?

    - by giorgio79
    My server recently became unreachable, and after a restart a named error was the last line I found in /var/log/messages before the reboot:

        Jun 26 00:15:06 host named[1303]: error (network unreachable) resolving 'dlv.isc.org/DNSKEY/IN': 2001:500:71::29#53
        Jun 26 06:38:55 host kernel: imklog 5.8.10, log source = /proc/kmsg started.
        Jun 26 06:38:55 host rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="1294" x-info="http://www.rsyslog.com"] start
        Jun 26 06:38:55 host kernel: Initializing cgroup subsys cpuset

    Can a named crash make a server unreachable? I doubt it, as I assume I should still be able to log in with ssh via IP, but the server did not respond... So I am making heavy guesses here.
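
    A hedged checklist sketch: the six-hour silence between the named message and the boot lines suggests the box stopped logging entirely, which points more toward memory exhaustion or a kernel problem than toward named itself (a dead named alone would not block SSH by IP). Some places to look:

        # out-of-memory kills around that window
        grep -iE 'out of memory|oom|killed process' /var/log/messages /var/log/kern.log

        # reboot/shutdown history, to see whether it was a crash or a clean restart
        last -x | head

        # anything else logged near the last timestamp
        grep 'Jun 26 0[0-6]:' /var/log/messages /var/log/syslog 2>/dev/null | tail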

    Read the article

  • Changing MAC address on the Fly on old kernel 2.6.x

    - by shaiss
    I'm doing some research on old kernels, and running the following on 2.6.7 or 2.6.8 gives a resource-busy error, while on 2.6.28 the command works as expected:

        ip link set dev <interface> addr <MAC address>

    How would I determine which kernel first allowed this command to change the MAC address on the fly? Thank you!
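
    A hedged workaround sketch, separate from the history question: on older kernels (and with many drivers) the address can only be changed while the interface is down, so the resource-busy error often disappears with a down/up sequence (interface name and MAC are placeholders):

        ip link set eth0 down
        ip link set eth0 addr 00:11:22:33:44:55
        ip link set eth0 up

    For pinning down when live (interface-up) changes became possible, walking the kernel history between the two versions would be the systematic route, e.g. git log v2.6.8..v2.6.28 -- net/core/dev.c plus the relevant driver directory.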

    Read the article

  • 2 Printers 1 Queue

    - by Shazburg
    My issue: when an order is processed, the same document needs to be printed on two printers.

    My proposed solution: create a single queue in CUPS with a backend script that spits the job out to the two real printer queues.

    My problem: documentation. Maybe I'm looking at every ring around the bullseye, but I can't find anything that lays out the rules for writing a CUPS backend script. In the end, I have several questions:

    Is there already an option to do this in CUPS that I've missed?
    The line I use to add my queue is lpadmin -p MultiPass -E -v multipass -P "Generic PostScript Printer". But the DeviceURI is rejected unless I specify a directory, like -v multipass:/tmp. Why is this?
    For testing, my script does nothing but capture ARGV and write it out to a text file, one line per argument. Problem is, I'm getting nothing. Logs show the job as successful, but I'm pretty sure my meager attempt at a backend isn't even being run.

    I've tried to keep this question brief, so please ask for more info, as I'm sure I've left out the most important part in all this. Honestly, I'm just done chasing my own tail. Thank you for your time.
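
    A hedged backend sketch: CUPS invokes a backend with argv of job-id, user, title, copies, options, and optionally a filename, with the job data on stdin when no filename is given; it must also answer a no-argument invocation with a discovery line. A minimal tee-style backend along those lines (queue names printer1/printer2 are assumptions), installed executable as /usr/lib/cups/backend/multipass:

        #!/bin/sh
        # multipass - forward one job to two real CUPS queues (sketch)

        # no arguments: CUPS is probing for devices; advertise ourselves
        if [ $# -eq 0 ]; then
            echo 'direct multipass "Unknown" "MultiPass tee backend"'
            exit 0
        fi

        copies="$4"
        file="$6"
        if [ -z "$file" ]; then
            file=$(mktemp /tmp/multipass.XXXXXX)
            cat > "$file"            # job data arrives on stdin
            trap 'rm -f "$file"' EXIT
        fi

        # lp copies the data into each queue's spool before returning
        lp -d printer1 -n "$copies" "$file"
        lp -d printer2 -n "$copies" "$file"
        exit 0

    One caveat worth testing: backends typically run unprivileged and with a restricted environment, which is a common reason a test script appears to never run; writing the debug file somewhere world-writable like /tmp helps rule that out.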

    Read the article
