Search Results



  • Static IPv6 address in Windows unused for outgoing connections

    - by Luc
    I'm running a Windows server and trying to get it to use a static IPv6 address for outgoing connections to other IPv6 hosts (such as Gmail). I need this because Gmail requires a PTR record, and I can't set one for random addresses. The static address is configured on the host, but the host also has a temporary privacy address as well as a random address assigned by the router. By default Windows uses the privacy address; this seems to be the expected behavior (and it makes perfect sense for users who did not set a static address, but I did!).

    I've tried disabling the privacy address with:

        netsh int ipv6 set privacy disabled

    This indeed gets rid of the privacy address, but I still have the random address that the router assigned. To disable this, I was told I needed to disable "router discovery" using this command:

        netsh interface ipv6 set interface 14 routerdiscovery=disabled

    Upon doing this, all IPv6 connectivity is lost. If I do this while pinging Gmail, ping reports "Destination host unreachable" as soon as I enter the command. In the static IPv6 configuration I did configure the default gateway and prefix length, so I don't see why it's unable to connect. It probably has something to do with the lack of ARP in IPv6 and the host somehow being unable to resolve the router's MAC address, but I wouldn't know how to fix this.

    Finally, I've tried disabling the DHCPv6 lease with these commands:

        netsh interface ipv6 set interface "IDMZ Team" managedaddress=disabled
        netsh interface ipv6 set interface "IDMZ Team" otherstateful=disabled

    This was to no avail; the host continues to obtain and use the router-assigned IPv6 address. The router is a FritzBox 7340, which shows me all the IPv4 and IPv6 addresses that the host (identified by MAC) utilizes, but I'm unable to change the assigned address. Maybe this could be done over the telnet interface of the router somehow, but again, I wouldn't know how to do this even if it's the way to go.

    In short, any of the following would probably solve my problem:

    - Change Windows' source address selection behavior.
    - Have Windows neither get an address from the router nor generate a privacy address.
    - Have the router hand out a static address and make Windows use that as the source address.
    - Recover connectivity after disabling router discovery on Windows.

    Alternatively I might use some (batch, Perl, ...) script to throw away all IPv6 addresses except the desired one, as sketched below, but this feels rather hacky. If it's the only way (or less hacky than another hacky solution), it might be an option though. Thanks!
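
    A minimal sketch of that clean-up script, reusing the interface name from above but with placeholder addresses — the real address list should be checked first with "netsh interface ipv6 show addresses":

        :: run as administrator; the two addresses below are examples, not real ones
        netsh interface ipv6 set privacy state=disabled store=persistent
        :: delete each unwanted autoconfigured address, keeping only the static one
        netsh interface ipv6 delete address interface="IDMZ Team" address=2001:db8::5ab3
        netsh interface ipv6 delete address interface="IDMZ Team" address=2001:db8::9f21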


  • Windows 2008 R2 CA and auto-enrollment: how to get rid of >100,000 issued certificates?

    - by HopelessN00b
    The basic problem I'm having is that I have 100,000 useless machine certificates cluttering up my CA, and I'd like to delete them without deleting all certs, or time-jumping the server ahead and invalidating some of the useful certs on there.

    This came about as a result of accepting a couple of defaults with our Enterprise Root CA (2008 R2) and using a GPO to auto-enroll client machines for certificates to allow 802.1x authentication to our corporate wireless network. It turns out that the default Computer (Machine) certificate template will happily allow machines to re-enroll instead of directing them to use the certificate they already have. This is creating a number of problems for the guy (me) who was hoping to use the Certificate Authority as more than a log of every time a workstation's been rebooted. (The scroll bar on the side is lying; if you drag it to the bottom, the screen pauses and loads the next few dozen certs.)

    Does anyone know how to DELETE 100,000 or so time-valid, existing certificates from a Windows Server 2008 R2 CA? When I go to delete a certificate now, I get an error that it cannot be deleted because it's still valid. So, ideally, I need some way to temporarily bypass that error, as Mark Henderson's provided a way to delete the certificates with a script once that hurdle is cleared. (Revoking them is not an option, as that just moves them to Revoked Certificates, which we need to be able to view, and they can't be deleted from the revoked "folder" either.)

    Update: I tried the site @MarkHenderson linked, which is promising and offers much better certificate manageability, but it still doesn't quite get there. The rub in my case seems to be that the certificates are still "time-valid" (not yet expired), so the CA doesn't want to let them be deleted from existence — and this applies to revoked certs as well, so revoking them all and then deleting them won't work either. I've also found this TechNet blog with my Google-fu, but unfortunately they only had to delete a very large number of certificate requests, not actual certificates.

    Finally, for now, time-jumping the CA forward so the certificates I want to get rid of expire, and therefore can be deleted with the tools at the site Mark linked, is not a great option, as it would expire a number of valid certificates we use that have to be manually issued. So it's a better option than rebuilding the CA, but not a great one.
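
    For reference, the bulk-deletion pattern usually suggested for pruning a CA database is certutil's -deleterow verb. It only removes rows already past the given date (which is why the time-jump comes up at all) and gives up every few thousand rows, hence the loop. A hedged sketch — the date is a placeholder, and the "more rows pending" exit code should be verified against your own CA before relying on it:

        @echo off
        :: delete issued-certificate rows that expired before the given date;
        :: certutil stops after each batch, so loop until no more rows are pending
        :again
        certutil -deleterow 01/01/2013 Cert
        if %ERRORLEVEL% EQU -939523027 goto again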


  • Backup of KVM VMs running on Ubuntu 12.04.1 (Precise) from a remote machine

    - by Dr. Death
    I am creating a library API which will take backups of all the VMs running on a KVM hypervisor. My VMs can be of any type. I am taking this backup from a remote machine and need to put the backup on a remote server. I have KVM and libvirt installed on my system. Some of my VMs are LVM-based and some are normal VMs running on KVM. I researched and found an excellent Perl script for taking the backup (http://pof.eslack.org/2010/12/23/best-solution-to-fully-backup-kvm-virtual-machines/), but since I am developing this library in C++ I cannot use it; it has, however, given me a good understanding of how this will work.

    One thing I was not able to sort out: if my VMs were not created using virt-manager but with some other tool, then the virsh list command does not show them among the running VMs, even though they are running perfectly on my KVM server. Is there a way to get these VMs listed anyhow?

    Secondly, when I am taking the backup from the remote machine, I drop out of my ssh session as soon as each libvirt command finishes, so for every command I need to ssh again. Is there a way to avoid ssh-ing each and every time? I have already set up an RSA key for ssh, but once my command finishes, control moves back to the remote machine, which then tries to find the source VM location on its own local drives, and that in turn fails. This is the main problem I am facing.

    Also, for the LVM-based VMs I am able to take a live backup, but for the non-LVM-based ones my machines get suspended, so I am not able to take a live backup of them. Since my library will work on the remote machine only, I might not know the VMs' configuration on the KVM server, so I need to make this consistent for all the VMs. Please share anything related to this issue so that I may be able to take live backups of the non-LVM VMs as well. I'll keep you all updated on my progress and any research findings. Thanks in advance for your suggestions.
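
    Two notes that may help here. First, virsh can only ever list domains that were defined through libvirt; guests started directly with qemu-kvm are invisible to it regardless of how it is invoked. Second, the per-command ssh round-trips can be avoided by pointing virsh at the hypervisor through a remote connection URI, so everything runs from the backup machine — a sketch, with host and domain names as placeholders:

        # list all libvirt-defined domains on the KVM host, run from the remote machine
        virsh -c qemu+ssh://root@kvm-host/system list --all
        # fetch a domain's XML definition the same way
        virsh -c qemu+ssh://root@kvm-host/system dumpxml my-vm > my-vm.xml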


  • How to add an iptables rule with source IP address

    - by ???
    I have a bash script that starts with this:

        if [[ $EUID -ne 0 ]]; then
            echo "Permission denied (are you root?)."
            exit 1
        elif [ $# -ne 1 ]; then
            echo "Usage: install-nfs-server <client network/CIDR>"
            echo "$ bash install-nfs-server 192.168.1.1/24"
            exit 2
        fi

    I then try to add the iptables rules for NFS as follows:

        iptables -A INPUT -i eth0 -p tcp -s $1 --dport 111 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p udp -s $1 --dport 111 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        service iptables save
        service iptables restart

    I get the error:

        Try `iptables -h' or 'iptables --help' for more information.
        Bad argument `111'
        Try `iptables -h' or 'iptables --help' for more information.
        Bad argument `111'
        Saving firewall rules to /etc/sysconfig/iptables:          [  OK  ]
        Flushing firewall rules:                                   [  OK  ]
        Setting chains to policy ACCEPT: filter                    [  OK  ]
        Unloading iptables modules:                                [  OK  ]
        Applying iptables firewall rules:                          [  OK  ]
        Loading additional iptables modules: ip_conntrack_netbios_ns [  OK  ]

    When I open /etc/sysconfig/iptables, these are the rules:

        # Generated by iptables-save v1.3.5 on Mon Mar 26 08:00:42 2012
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [466:54208]
        :RH-Firewall-1-INPUT - [0:0]
        -A INPUT -j RH-Firewall-1-INPUT
        -A FORWARD -j RH-Firewall-1-INPUT
        -A OUTPUT -o eth0 -p tcp -m tcp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        -A OUTPUT -o eth0 -p udp -m udp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        -A OUTPUT -o eth0 -p tcp -m tcp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        -A OUTPUT -o eth0 -p udp -m udp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -i lo -j ACCEPT
        -A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT
        -A RH-Firewall-1-INPUT -p esp -j ACCEPT
        -A RH-Firewall-1-INPUT -p ah -j ACCEPT
        -A RH-Firewall-1-INPUT -d 224.0.0.251 -p udp -m udp --dport 5353 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
        COMMIT
        # Completed on Mon Mar 26 08:00:42 2012

    I've also tried:

        iptables -I RH-Firewall-1-INPUT 1 -m state --state NEW -m tcp -p tcp --source $1 --dport 111 -j ACCEPT
        iptables -I RH-Firewall-1-INPUT 2 -m udp -p udp --source $1 --dport 111 -j ACCEPT
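
    One pattern worth noting, offered as a hedged reading of the output above: "Bad argument `111'" is exactly what iptables prints when $1 expands to nothing, because -s then swallows --dport and 111 is left dangling — and in the saved rules only the two "-s $1" INPUT rules are missing, while every OUTPUT rule was added twice. Quoting and validating the variable defends against that:

        # fail fast if the CIDR argument is missing, and always quote it
        CLIENT_NET="$1"
        if [ -z "$CLIENT_NET" ]; then
            echo "Usage: install-nfs-server <client network/CIDR>" >&2
            exit 2
        fi
        iptables -A INPUT -i eth0 -p tcp -s "$CLIENT_NET" --dport 111 -m state --state NEW,ESTABLISHED -j ACCEPT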


  • Server load spikes several times a day, load average for the past month is 5 times the load average all year

    - by AMF
    My Munin notifications set up for our (Debian) LAMP cluster have been alerting me continuously that the load on our production machine is at dangerous levels. While the average load all year typically runs between 2 and 8, the load in the past month — and only the past month — has been skyrocketing to 10, 18, and occasionally even 50-60. The spikes last only 5-10 minutes at a time and occur about every 2-3 hours. The only reason the spikes do not affect performance is that I have a script that sends traffic off our server to a mirror CDN when the load goes above 10.

    I've looked for cron jobs that correlate with this timeframe, but there is nothing I can see that would cause this. Site traffic is also normal (we receive about 200K visits per day). I'm also trying to think of anything I've changed around the time this problem began, and I really cannot think of anything.

    This is probably not much to go on. Maybe there is a clue in the top printout (below) that I'm not seeing. How do I proceed to find the cause?

    Typical top output when the load is NOT spiking:

        top - 11:13:09 up 472 days, 25 min, 1 user, load average: 6.08, 4.29, 3.80
        Tasks: 105 total, 1 running, 104 sleeping, 0 stopped, 0 zombie
        Cpu(s): 41.2%us, 5.8%sy, 0.0%ni, 49.5%id, 2.7%wa, 0.1%hi, 0.7%si, 0.0%st
        Mem: 3369592k total, 2166980k used, 1202612k free, 559504k buffers
        Swap: 2650684k total, 1892k used, 2648792k free, 1129116k cached

          PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
        32046 apache 15  0 36300  12m 9828 S   20  0.4 0:01.97 apache2
        32679 apache 15  0 36568  13m  10m S   19  0.4 0:01.69 apache2
        31441 apache 15  0 36616  13m  10m S   19  0.4 0:04.13 apache2
        31477 apache 15  0 36596  13m 9.8m S   15  0.4 0:01.99 apache2
        31993 apache 15  0 36876  16m  12m S   12  0.5 0:02.01 apache2
        31782 apache 15  0 36836  14m  10m S    8  0.4 0:02.17 apache2
        32198 apache 15  0 36536  13m  10m S    7  0.4 0:01.59 apache2
          880 apache 15  0 36508 9708 6236 S    7  0.3 0:00.42 apache2
        31945 apache 17  0 36876  16m  13m S    5  0.5 0:03.17 apache2
        32197 apache 16  0 36636  10m 7504 S    5  0.3 0:02.70 apache2
        32326 apache 15  0 37024  11m 7632 S    5  0.3 0:02.15 apache2
        32565 apache 15  0 37280  13m 9.8m S    5  0.4 0:03.75 apache2
        32676 apache 15  0 36896  16m  12m S    4  0.5 0:00.95 apache2
        32678 apache 15  0 36536  12m 9692 S    4  0.4 0:02.27 apache2
          974 apache 16  0 37064 9888 6016 D    4  0.3 0:00.13 apache2
        32150 apache 16  0 36832  13m  10m S    3  0.4 0:01.74 apache2
        31780 apache 16  0 36848  11m 7660 S    3  0.3 0:02.87 apache2
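
    One way to catch the culprit in the act — a hedged sketch for a cron job run every minute that records the process table whenever the 1-minute load crosses the same threshold the CDN script uses (threshold and log path are arbitrary choices):

        #!/bin/sh
        # snapshot top whenever load1 exceeds 10, so each spike leaves evidence behind
        LOAD=$(cut -d' ' -f1 /proc/loadavg)
        THRESHOLD=10
        if [ "$(echo "$LOAD > $THRESHOLD" | bc)" -eq 1 ]; then
            { date; top -b -n1 | head -40; } >> /var/log/load-spikes.log
        fi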


  • Secure LDAP problem

    - by neverland
    Hi there. I have tried to configure my OpenLDAP server for secure connections using OpenSSL on Debian 5. Along the way, I ran into trouble with the command below:

        ldap:/etc/ldap# slapd -h 'ldap:// ldaps://' -d1
        >>> slap_listener(ldaps://)
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_read(15): unable to get TLS client DN, error=49 id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        ber_get_next
        ber_get_next on fd 15 failed errno=0 (Success)
        connection_closing: readying conn=7 sd=15 for close
        connection_close: conn=7 sd=15

    I have searched for "unable to get TLS client DN, error=49 id=7", but nowhere seems to have a good solution to this yet. Please help. Thanks.

    Well, I tried to fix something to get it to work, but now I get this:

        ldap:~# slapd -d 256 -f /etc/openldap/slapd.conf
        @(#) $OpenLDAP: slapd 2.4.11 (Nov 26 2009 09:17:06) $
                root@SD6-Casa:/tmp/buildd/openldap-2.4.11/debian/build/servers/slapd
        could not stat config file "/etc/openldap/slapd.conf": No such file or directory (2)
        slapd stopped.
        connections_destroy: nothing to destroy.

    What should I do now? Log:

        ldap:~# /etc/init.d/slapd start
        Starting OpenLDAP: slapd - failed.
        The operation failed but no output was produced. For hints on what went wrong
        please refer to the system's logfiles (e.g. /var/log/syslog) or try running the
        daemon in Debug mode like via "slapd -d 16383" (warning: this will create copious output).

        Below, you can find the command line options used by this script to run slapd.
        Do not forget to specify those options if you want to look to debugging output:
        slapd -h 'ldaps:///' -g openldap -u openldap -f /etc/ldap/slapd.conf

        ldap:~# tail /var/log/messages
        Feb 8 16:53:27 ldap kernel: [ 123.582757] intel8x0_measure_ac97_clock: measured 57614 usecs
        Feb 8 16:53:27 ldap kernel: [ 123.582801] intel8x0: measured clock 172041 rejected
        Feb 8 16:53:27 ldap kernel: [ 123.582825] intel8x0: clocking to 48000
        Feb 8 16:53:27 ldap kernel: [ 131.469687] Adding 240932k swap on /dev/hda5. Priority:-1 extents:1 across:240932k
        Feb 8 16:53:27 ldap kernel: [ 133.432131] EXT3 FS on hda1, internal journal
        Feb 8 16:53:27 ldap kernel: [ 135.478218] loop: module loaded
        Feb 8 16:53:27 ldap kernel: [ 141.348104] eth0: link up, 100Mbps, full-duplex
        Feb 8 16:53:27 ldap rsyslogd: [origin software="rsyslogd" swVersion="3.18.6" x-pid="1705" x-info="http://www.rsyslog.com"] restart
        Feb 8 16:53:34 ldap kernel: [ 159.217171] NET: Registered protocol family 10
        Feb 8 16:53:34 ldap kernel: [ 159.220083] lo: Disabled Privacy Extensions
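
    One detail visible in the output above: the init script runs slapd with -f /etc/ldap/slapd.conf, while the manual attempt pointed at /etc/openldap/slapd.conf, which doesn't exist on Debian — that explains the "could not stat config file" failure. A hedged sketch of the TLS options that would live in the Debian file (paths are placeholders); as far as I can tell, TLSVerifyClient never also stops slapd asking clients for a certificate, which is what produces the mostly-harmless "unable to get TLS client DN" message:

        # /etc/ldap/slapd.conf — TLS section (example paths, adjust to your certs)
        TLSCACertificateFile  /etc/ldap/ssl/ca.crt
        TLSCertificateFile    /etc/ldap/ssl/server.crt
        TLSCertificateKeyFile /etc/ldap/ssl/server.key
        TLSVerifyClient       never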


  • VMware Tools not installing, with an error

    - by JDS
    VMware Tools is not installing on Ubuntu 12.04. I'm using Chef to manage the installation, but the apt commands fail even when run manually. I'm using the VMware Tools Debian repo. Example:

        $ cat /etc/apt/sources.list.d/vmware-tools-source.list
        deb http://packages.vmware.com/tools/esx/5.0u2/ubuntu precise main

    When trying to install, most packages seem to go OK, but one, "vmware-tools-foundation", does not. Example:

        $ apt-get -q -y install vmware-tools-esx-nox=8.6.10-1.precise
        Reading package lists...
        Building dependency tree...
        Reading state information...
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         vmware-tools-esx-kmods-3.2.0-23-generic : Depends: vmware-tools-foundation (>= 8.6.10) but it is not going to be installed
         vmware-tools-esx-nox : Depends: ...snip list of deps...
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

        $ apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          vmware-tools-foundation
        The following NEW packages will be installed:
          vmware-tools-foundation
        0 upgraded, 1 newly installed, 0 to remove and 118 not upgraded.
        7 not fully installed or removed.
        Need to get 0 B/5,886 B of archives.
        After this operation, 86.0 kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 103499 files and directories currently installed.)
        Unpacking vmware-tools-foundation (from .../vmware-tools-foundation_8.6.10-1.precise_all.deb) ...
        VMware Tools cannot install because it appears that another installation of VMware Tools
        is already present. Please remove the previous installation and then attempt to install
        this copy of VMware Tools again.
        dpkg: error processing /var/cache/apt/archives/vmware-tools-foundation_8.6.10-1.precise_all.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        Errors were encountered while processing:
         /var/cache/apt/archives/vmware-tools-foundation_8.6.10-1.precise_all.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The key seems to be this error: "VMware Tools cannot install because it appears that another installation of VMware Tools is already present. Please remove the previous installation and then attempt to install this copy of VMware Tools again."

    However, I've tried removing and purging and can't seem to "trick" VMware Tools into thinking the packages are gone. Apt thinks they are gone. Is there some service/file/cache/lock left behind that makes VMware Tools think it is still installed? I've googled and googled, but there is no answer to this question with my particular circumstances on the interwebs. VMware's documentation of this error is minimal.
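
    A hedged guess at what the preinstall check is finding: the legacy tarball installer (vmware-install.pl) keeps its state outside dpkg's view, so apt remove/purge never touches it. Worth checking before deleting anything:

        # if a tarball install ever happened, its own uninstaller should still exist
        sudo /usr/bin/vmware-uninstall-tools.pl
        # otherwise, look for the usual leftovers the installer checks for
        ls /etc/vmware-tools /usr/lib/vmware-tools 2>/dev/null
        # then retry the dependency fix-up
        sudo apt-get -f install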


  • How to solve/disable spam sending with my Postfix server on Linux

    - by Dukla
    I am quite new to setting up an e-mail server on Linux — I barely got the whole thing working and connected it with my domain and a PHP script which uses PHPMailer 5.2.1. In my setup I am using the SMTP server from my web provider (domain), and all e-mails to undefined addresses (trash) are sent to one simple address, like [email protected]. So when somebody sends e-mail to [email protected], it will be forwarded again to [email protected], even in case of failure. I am receiving e-mails like:

        Hi. This is the qmail-send program at comercio.interone.com.br.
        I'm afraid I wasn't able to deliver your message to the following addresses.
        This is a permanent error; I've given up. Sorry it didn't work out.

        <[email protected]>:
        Sorry, no mailbox here by that name. (#5.1.1)

        --- Below this line is a copy of the message.

        Return-Path: <[email protected]>
        Received: (qmail 49156 invoked from network); 25 Jun 2012 07:34:57 -0300
        Received: from unknown (HELO S0106602ad08df877.no.shawcable.net) (70.66.34.103)
          by hosting.interone.com.br with SMTP; 25 Jun 2012 07:34:57 -0300
        Message-Id: <20120625034039.B45C12DCC3B13A22F261@GANDERTO-015445>
        From: Ezra Whitehead <[email protected]>
        To: toa <[email protected]>
        Reply-To: Jamey Mcconnell <[email protected]>
        Subject: Welcome toa
        Mime-Version: 1.0
        Content-Type: text/plain; charset=utf-8
        Content-Transfer-Encoding: 7bit

        Visit our shop http://44090.medicneed.ru/
        113B726C73560AA41A68163AA474D5F1476 0225770686522678

    As you can see, there is the line "From: Ezra Whitehead <[email protected]>". I am sure I did not send this e-mail from my domain.com with some Davis8FB name and some Russian page. And this is just one of many — these are only the NOT-delivered e-mails; there could be many more that were sent successfully. What do I have wrong in my settings? How can I make it right? What should I do to prevent these messages from being sent? Where should I look? Thank you all.
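
    Worth noting from the headers above, offered as a reading rather than a certainty: the message was handed to the remote qmail host directly from a cable-modem IP (70.66.34.103, shawcable.net) — it never passed through this Postfix server. The From address was simply forged, and the catch-all is collecting the backscatter bounces. Publishing an SPF record lets receiving servers reject such forgeries; a hedged example, assuming mail for the domain only ever originates from its MX hosts:

        ; DNS TXT record for domain.com (zone-file syntax; adapt to your DNS panel)
        domain.com.  IN  TXT  "v=spf1 mx -all"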


  • Ubuntu Server 10.04 Heavy Network Traffic causes disconnect

    - by K Vaughan
    I'm currently running a headless Ubuntu 10.04 server. Installed are the LAMP stack, Joomla, VirtualBox, phpvirtualbox, Webmin and ProFTPD. It resolves its IP address using ddclient so I can access it remotely (either the Apache webserver or the FTP). Any packages installed have been installed using apt-get. Webmin, although discouraged on Ubuntu Server, is used mostly to administer the webserver aspect. This issue also appeared when I was using Ubuntu Server 10.10.

    After periods of heavy network traffic, whether local or remote, the connection drops. I'm talking specifically about the transfer of files via FTP, SCP or Samba (the latter of which I seldom use). There is no response to ping or ssh; I can't FTP to the server, nor can I load the website. There are times when the server has been on for a few days and everything runs fine, because I haven't accessed it much, if at all (thus not much network traffic).

    I've gone through a few hardware changes, although I don't believe this caused the issue: it was happening long before I made any changes. At first I thought my ISP-provided router was blocking the traffic because of some kind of misconfiguration (perhaps assuming it was some kind of DoS attack); I've changed routers and still found no success. I've checked syslog, dmesg and kern.log for warnings but have uncovered none. I've run memtest via the GRUB2 menu at boot, and once it turned up 4 errors; I ran it again with individual sticks of RAM in various slots, and everything turned up fine. I've looked through the BIOS settings and everything looks fine. I've tried unplugging unnecessary pieces of hardware (other internal hard drives, CD drives, floppy, PCI cards, etc).

    Any help or tips on how I can even begin to troubleshoot this would be very much appreciated. Please note that I've only started playing with servers as a hobby, so my knowledge isn't the most refined. I'm comfortable with the command line and have the initiative to look up anything I can't do. Unfortunately I can't seem to find any issues like this.

    Additionally: if a solution can't be found, some assistance writing a script that reboots the server automatically if it gets no ping response from somewhere like Google for x minutes would help (see the sketch below). Admittedly that's not the cleanest solution should my internet connection go down, but I can't think of what else to do.
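
    A minimal sketch of that watchdog, assuming it is run from root's crontab every few minutes — the target host, retry count and timeout are arbitrary choices, not requirements:

        #!/bin/sh
        # reboot if a well-known host stops answering pings
        if ! ping -c 3 -W 5 8.8.8.8 > /dev/null 2>&1; then
            logger "net-watchdog: no ping response, rebooting"
            /sbin/shutdown -r now
        fi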


  • How to stop a random ramp in FCGI processes from killing the server

    - by Andy Main
    So I got the below earlier today. Around that time the logs show a ramp in processes (600) and associated memory (1.2 GB), with a CPU load average of 80, until the server gave out. The server had to be hard reset by the host, as there was no ssh or Plesk panel access. FastCGI is configured as below and is set up for one high-use site. As I understand it, FcgidMaxProcesses 20 should protect against what happened, but it has not. I've read many forums with differing answers and references to many different fcgid directives, but have found nothing conclusive. Has anyone got some definitive answers on how to stop this sort of server process ramping and the subsequent server failure? If you need more info, let me know. Cheers, Andy

    From /var/log/apache2/error_log:

        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17651 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17650 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17649 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17644 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17643 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17638 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17633 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17627 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17622 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17674 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17673 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17672 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17667 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17666 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17665 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17664 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17659 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17658 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17657 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17656 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17651 graceful kill fail, sending SIGKILL

    Screenshots:

        https://docs.google.com/a/thesugarrefinery.com/open?id=0B_XbpWChge0VRmFLWEZfR2VBb2M
        https://docs.google.com/a/thesugarrefinery.com/open?id=0B_XbpWChge0VWTcwZEhoV2Fqejg
        https://docs.google.com/a/thesugarrefinery.com/open?id=0B_XbpWChge0VUUtVWWFINHZjZ0U
        https://docs.google.com/a/thesugarrefinery.com/open?id=0B_XbpWChge0VZEVMclh6ZUdaOUE

    The fcgid configuration:

        <IfModule mod_fcgid.c>
            <IfModule !mod_fastcgi.c>
                AddHandler fcgid-script fcg fcgi fpl
            </IfModule>
            FcgidIPCDir /var/lib/apache2/fcgid/sock
            FcgidProcessTableFile /var/lib/apache2/fcgid/shm
            FcgidIdleTimeout 40
            FcgidProcessLifeTime 30
            FcgidMaxProcesses 20
            FcgidMaxProcessesPerClass 20
            FcgidMinProcessesPerClass 0
            FcgidConnectTimeout 30
            FcgidIOTimeout 120
            FcgidInitialEnv RAILS_ENV production
            FcgidIdleScanInterval 10
            FcgidMaxRequestLen 1073741824
        </IfModule>
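
    Two mod_fcgid directives that, as far as I understand them, target exactly this kind of pile-up of stuck handlers — the values below are guesses to be tuned, not recommendations:

        # reap request handlers stuck longer than 5 minutes instead of letting them accumulate
        FcgidBusyTimeout 300
        # how often the process manager scans for busy-timeout offenders
        FcgidBusyScanInterval 120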


  • Microphones not working on Apple MacBook Air 1,1 (Early 2008) under Linux

    - by jj_p
    I'm running Linux on an MBA. I can't make the microphones (neither external nor internal) work. I test using alsamixer and

        arecord -d 5 test-mic.wav
        aplay test-mic.wav

    It seems there is a problem with the kernel trying to decipher Apple's (intentionally) corrupted 'BIOS'; in particular, the mic pins are wrongly assigned. As far as we are concerned here, is there any difference between using EFI and BIOS-compatibility mode? (See https://wiki.archlinux.org/index.php/MacBook, where they claim to have everything working out of the box on the MBA 1,1.)

    A nice proposal would be to compile the latest Linux kernel and run hda-jack-retask to find the right configuration (in the case of the Realtek codec, the missing things I'm supposed to check are either some vendor-specific COEF verbs, EAPD, or GPIO setup), and then come up with a kernel patch to address the issue. Since I'm not that familiar with this last part of the story, can anyone help me through this process?

    Some useful data:

    - The output from the alsa-info script run as root: http://www.alsa-project.org/db/?f=adae8ebee1007043fe83414ac4972319e02255fa
    - The command hda-jack-sense-test -a (with an external mic plugged in):

        Pin 0x14 (Internal Speaker): present = No
        Pin 0x15 (Green HP Out): present = Yes
        Pin 0x16 (Not connected): present = No
        Pin 0x17 (Not connected): present = No
        Pin 0x18 (Not connected): present = No
        Pin 0x19 (Not connected): present = No
        Pin 0x1a (Not connected): present = No
        Pin 0x1b (Not connected): present = No
        Pin 0x1c (Not connected): present = No
        Pin 0x1d (Not connected): present = No
        Pin 0x1e (Not connected): present = No
        Pin 0x1f (Not connected): present = No

    - Most likely the chip is a Realtek ALC885 (compare also the ALC889A): http://guide-images.ifixit.net/igi/bBTSqaeK5JpQ1AWe.large — although at the moment ALSA reads it as an ALC889A.
    - Takashi Iwai's tutorial: https://www.kernel.org/doc/Documentation/sound/alsa/HD-Audio.txt
    - Some people researched the original files from a running OS X installation on this same model (I think the relevant files are AppleHDA.kext/Contents/MacOS/AppleHDA, AppleHDA.kext/Contents/PlugIns/AppleHDAHardwareConfigDriver.kext/Contents/Info.plist, AppleHDA.kext/Contents/Resources/layout12.xml.zlib and AppleHDA.kext/Contents/Resources/Platforms.xml.zlib): http://www.insanelymac.com/forum/topic/220090-alc889a-pin-configuration/#entry1554954
    - Datasheet: http://www.realtek.info/pdf/ALC885_1-1.pdf (from the same Realtek site one can also try to download a Linux driver, but this is just taken from the ALSA project, as stated in the readme file).
    - Compare with this Arch user's data: http://www.alsa-project.org/db/?f=3ca8243c0626844f0264a3faad0aa72018bc14f4
    - Here, for the first time, support for audio (except mics) on the MBA 1,2 (which is morally the same as the 1,1) was patched into the kernel: http://www.alsa-project.org/pipermail/alsa-devel/2010-February/025511.html
    - The same jack supposedly works both for headphones and an external mic; I think it's called TRRS, and it's the same as the one used e.g. for iPhones.
    - This person might have done a similar job, though for a more recent model and for sound globally, not just the mics: http://blogs.aerys.in/jeanmarc-leroux/2013/09/15/fixing-2013-macbook-air-ubuntu-sound-issue/

    (This is a mirror of http://unix.stackexchange.com/questions/73044/microphones-not-working-on-apple-macbook-air-1-1-early-2008-under-linux)
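
    For probing the codec before writing any patch, the hda-verb tool from alsa-tools can send single verbs to test a theory interactively — a heavily hedged example (device path, node ID and payload here are illustrative guesses, to be cross-checked against the ALC885 datasheet linked above, not known-good values for this machine):

        # experiment: toggle the external amplifier (EAPD) on a candidate pin node
        sudo hda-verb /dev/snd/hwC0D0 0x15 SET_EAPD_BTLENABLE 0x2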


  • Starting/Stopping Custom PHP Chat Server Linux Service (CentOS)

    - by chad
    I have been trying all night to get this service working properly. I created this script from a template and am very new to bash coding. I wrote a fully functioning chat server in PHP which runs endlessly, but I now want to make it a dedicated service, so that it starts on server boot and comes back up when the server recovers from any downtime. The catch is that it needs to run in a detached screen so that I can monitor packet data or send server commands via SSH when need be. The main problem is that it needs its own PID when it starts so that I can stop/restart it when needed. I am the type who grinds on code until I figure it out, but this is so new to me that the learning curve seems very steep and frustrating. Below is my code; if anybody can please help me with this one — I've gotten so tired I can't even concentrate any more :(

        #!/bin/sh
        #
        # chatserver
        #
        # chkconfig: 345 20 90
        # description: chatServer Linux Service Daemon \
        #              for general server handling
        ### BEGIN INIT INFO
        # Provides: chatserver
        # Required-Start: $local_fs $network $named $syslog
        # Required-Stop: $local_fs $syslog
        # Default-Start: 3 4 5
        # Default-Stop: 0 1 2 6
        # Short-Description: This service maintains the chatServer
        # Description: chatServer Linux Service Daemon
        #              for general server handling
        ### END INIT INFO

        # Source function library.
        . /etc/rc.d/init.d/functions

        exec="screen php -q /var/www/html/chatServer.php"
        prog="chatserver"
        config="/etc/sysconfig/$prog"
        pidfile="/var/run/chatserver.pid"

        [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

        lockfile=/var/lock/subsys/$prog

        start() {
            #$exec || exit 5
            echo -n $"Starting $prog: "
            daemon $exec --name=$exec --pidfile=$pidfile
            retval=$?
            echo
            [ $retval -eq 0 ] && touch $lockfile
            return $retval
        }

        stop() {
            echo -n $"Stopping $prog: "
            killproc -p $pidfile
            rm -f $pidfile
            retval=$?
            echo
            [ $retval -eq 0 ] && rm -f $lockfile
            return $retval
        }

        restart() {
            stop
            start
        }

        reload() {
            restart
        }

        force_reload() {
            restart
        }

        rh_status() {
            # run checks to determine if the service is running or use generic status
            status $prog
        }

        rh_status_q() {
            rh_status >/dev/null 2>&1
        }

        case "$1" in
            start)
                rh_status_q && exit 0
                $1
                ;;
            stop)
                rh_status_q || exit 0
                $1
                ;;
            restart)
                $1
                ;;
            reload)
                rh_status_q || exit 7
                $1
                ;;
            force-reload)
                force_reload
                ;;
            status)
                rh_status
                ;;
            condrestart|try-restart)
                rh_status_q || exit 0
                restart
                ;;
            *)
                echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
                exit 2
        esac
        exit $?
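
    For the detached-screen part, a sketch of how start() might launch the server and still record a usable PID — the session name is arbitrary, and this is untested against this exact init template:

        start() {
            echo -n $"Starting $prog: "
            # -dmS: start detached with a fixed session name we can find again later
            screen -dmS chatsrv php -q /var/www/html/chatServer.php
            # screen lists sessions as "<pid>.<name>"; extract the pid for the pidfile
            pid=$(screen -ls | awk -F. '/chatsrv/ {print $1}' | tr -d '[:space:]')
            echo "$pid" > "$pidfile"
            [ -n "$pid" ] && touch "$lockfile" && success || failure
            echo
        }

    With a real pidfile in place, stop() can then use killproc -p "$pidfile" "$prog" as the template intends, and "screen -r chatsrv" still attaches for monitoring over SSH.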


  • Why did I fail to build rtorrent?

    - by hugemeow
    I am not root, so I have to build rtorrent from source and hope to install it in my home directory — but the build fails. Why?

        [mirror@hugemeow rtorrent]$ ls
        AUTHORS autogen.sh ChangeLog configure.ac COPYING doc INSTALL Makefile.am NEWS rak README scripts src test
        [mirror@hugemeow rtorrent]$ ./autogen.sh
        aclocal...
        aclocal:configure.ac:7: warning: macro `AM_PATH_CPPUNIT' not found in library
        autoheader...
        libtoolize... using libtoolize
        automake...
        configure.ac: installing `./install-sh'
        configure.ac: installing `./missing'
        src/Makefile.am: installing `./depcomp'
        autoconf...
        configure.ac:7: error: possibly undefined macro: AM_PATH_CPPUNIT
              If this token and others are legitimate, please use m4_pattern_allow.
              See the Autoconf documentation.

    Though autogen failed, a configure script is created:

        [mirror@hugemeow rtorrent]$ ls
        aclocal.m4 autogen.sh ChangeLog config.h.in configure COPYING doc install-sh Makefile.am missing rak scripts test
        AUTHORS autom4te.cache config.guess config.sub configure.ac depcomp INSTALL ltmain.sh Makefile.in NEWS README src

    Running configure then fails with a syntax error near the unexpected token `1.9.6'. What's wrong? What should I do to build rtorrent for my CentOS box?

        [mirror@hugemeow rtorrent]$ ./configure
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... gawk
        checking whether make sets $(MAKE)... yes
        ./configure: line 2016: syntax error near unexpected token `1.9.6'
        ./configure: line 2016: `AM_PATH_CPPUNIT(1.9.6)'

        [mirror@hugemeow rtorrent]$ git branch
        * master
        [mirror@hugemeow rtorrent]$ git branch -a
        * master
          remotes/origin/HEAD -> origin/master
          remotes/origin/c++11
          remotes/origin/master
        [mirror@hugemeow rtorrent]$ git tag
        0.9.0
        0.9.1
        [mirror@hugemeow rtorrent]$ git clean -dfx
        Removing Makefile.in
        Removing aclocal.m4
        Removing autom4te.cache/
        Removing config.guess
        Removing config.h.in
        Removing config.log
        Removing config.sub
        Removing configure
        Removing depcomp
        Removing doc/Makefile.in
        Removing install-sh
        Removing ltmain.sh
        Removing missing
        Removing src/Makefile.in
        Removing src/core/Makefile.in
        Removing src/display/Makefile.in
        Removing src/input/Makefile.in
        Removing src/rpc/Makefile.in
        Removing src/ui/Makefile.in
        Removing src/utils/Makefile.in
        Removing test/Makefile.in

    Edit 1: details about libtoolize and libtool:

        [mirror@hugemeow rtorrent]$ libtoolize --version
        libtoolize (GNU libtool) 1.5.22
        Copyright (C) 2005 Free Software Foundation, Inc.
        This is free software; see the source for copying conditions. There is NO
        warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

        [mirror@hugemeow rtorrent]$ libtool --version
        ltmain.sh (GNU libtool) 1.5.22 (1.1220.2.365 2005/12/18 22:14:06)
        Copyright (C) 2005 Free Software Foundation, Inc.
        This is free software; see the source for copying conditions. There is NO
        warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
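
    The failing macro, AM_PATH_CPPUNIT, ships with cppunit (in cppunit.m4), so autoconf leaves it unexpanded — and configure later chokes on the literal `AM_PATH_CPPUNIT(1.9.6)` line — unless aclocal can see that file. A hedged non-root sketch, assuming cppunit gets built into $HOME/local first; if autogen.sh doesn't honor the ACLOCAL override, running aclocal -I by hand before it should have the same effect:

        # inside the cppunit source tree
        ./configure --prefix=$HOME/local && make && make install
        # back in the rtorrent tree, point aclocal at cppunit's m4 directory
        cd ~/rtorrent
        ACLOCAL="aclocal -I $HOME/local/share/aclocal" ./autogen.sh
        ./configure --prefix=$HOME/local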


  • Active directory integration not working properly with winbind and samba

    - by tubaguy50035
    I'm trying to get my Linux box to use Active Directory authentication. I believe I have almost everything set up correctly: I'm able to issue wbinfo -g and wbinfo -u and see all the groups and users respectively.

    A brief intro to my setup: the username I use on my Linux box to do admin things is nick. My Active Directory username is nwalke. They have two different passwords. I am able to log in to the box as nick with that user's password, and I'm also able to log in as nwalke with nwalke's password.

    The curious bit: upon creating the Active Directory user's home directory, I run a script that requires root access, to set up some system-wide things like a Samba share for the user. When I log in as nwalke, I enter my nwalke password and it succeeds. I'm then greeted with "[sudo] password for nick:". If I enter my nwalke password there, it says "Sorry, try again." If I enter nick's password, it says "Sorry, user nick is not allowed to execute scriptname as root." If I run groups as nwalke, I see that my user has magically been given the group nick.

    Now, I had accidentally thought that nick had a UID of 100, not 1000, so originally in my smb.conf I had "idmap uid = 1000-10000". The only thing I can think of is that I logged in as nwalke while that was still set, and I'm now being presented with a UID of 1000, forcing Linux to think I'm nick. I'm not really sure where to go from here. Like I said, I'm fairly certain Active Directory is communicating with my server properly, but something must not be mapped right on the Linux side. Any thoughts?

    Here is my smb.conf:

        [global]
        security = ads
        netbios name = hostname
        realm = COMPANY.COM
        password server = adshost.company.com
        workgroup = COMPANY
        idmap uid = 10000-90000
        idmap gid = 10000-90000
        winbind separator = +
        winbind enum users = no
        winbind enum groups = no
        winbind use default domain = yes
        template homedir = /home/%D/%U
        template shell = /bin/bash
        client use spnego = yes
        domain master = no
        load printers = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes

    Let me know if more information about something is required.
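
    A couple of diagnostics that should confirm or kill the stale-mapping theory. Winbind persists UID mappings (typically in winbindd_idmap.tdb), so changing the range in smb.conf does not rewrite entries created under the old 1000-10000 range. These commands only read state, except the cache flush — back up /var/lib/samba before clearing anything:

        # which UID does the domain account actually resolve to right now?
        getent passwd nwalke
        wbinfo -i nwalke
        # if it still shows 1000 from the old range, flush winbind's caches and retest
        sudo net cache flush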


  • Nginx + PHP - No input file specified

    - by F21
    I am running Ubuntu Desktop 12.04 with nginx 1.2.6. PHP is PHP-FPM 5.4.9. This is the relevant part of my nginx.conf:

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            root /www;
            keepalive_timeout 65;

            server {
                server_name testapp.com;
                root /www/app/www/;
                index index.php index.html index.htm;

                location ~ \.php$ {
                    fastcgi_intercept_errors on;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }
            }

            server {
                listen 80 default_server;
                index index.html index.php;

                location ~ \.php$ {
                    fastcgi_intercept_errors on;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }
            }
        }

    In my hosts file, I redirect two domains, testapp.com and test.com, to 127.0.0.1. My web files are all stored in /www. With the above settings, if I visit test.com/phpinfo.php and test.com/app/www, everything works as expected and I get output from PHP. However, if I visit testapp.com, I get the dreaded "No input file specified." error.

    So, at this point, I pull out the log files and have a look:

        2012/12/19 16:00:53 [error] 12183#0: *17 FastCGI sent in stderr: "Unable to open primary script: /www/app/www/index.php (No such file or directory)" while reading response header from upstream, client: 127.0.0.1, server: testapp.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "testapp.com"

    This baffles me, because I have checked again and again and /www/app/www/index.php definitely exists! This is also validated by the fact that test.com/app/www/index.php works, which means the file exists and the permissions are correct. Why is this happening, and what are the root causes of things breaking for just the testapp.com v-host?
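
    One quick check, offered as a hedged diagnostic: the "No such file or directory" verdict comes from PHP-FPM, not nginx, so it's worth confirming the path from the FPM worker's point of view — a chroot/chdir in the pool config, or a missing execute bit on any directory in the chain, would produce exactly this even though the file exists (the pool path and user below are guesses for a stock setup):

        # show permissions along the whole path, as the php-fpm user would see it
        sudo -u www-data namei -l /www/app/www/index.php
        # look for chroot/chdir directives in the pool definition
        grep -E 'chroot|chdir' /etc/php5/fpm/pool.d/*.conf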


  • GPO Software Uninstall Not Taking Place

    - by burmat
    I am having some trouble with my software GPOs and can't seem to find any answers using Google. I successfully deployed software using my policy, but when I delete another, the uninstallation of the software does not take place.

    What I did:

    1. Deployed software using a GPO, used gpupdate /force on the workstation to update, rebooted, and installed the software.
    2. Deleted another software installation by: Right-Click > All Tasks > Remove > 'Immediately uninstall the software from users and computers'.

    From there, I did another gpupdate /force to try to get the GPO to refresh and uninstall the software on the workstation. This did not work. I then forced replication between my domain controllers and ran another gpupdate /force on the workstation, and this did not uninstall the software either. There are no error logs or indications in the event viewer that the uninstall is being triggered, and I know for a fact that the policy is working in other respects.

    So my question is: where do I look next to find the answer as to why GPO software deployments are working but uninstallations are not, based on what I have already tried? Thank you in advance.

    Update: After using gpresult /z, there is no indication of a pending uninstallation or removal of software. Under the section entitled "Software Installations", the software I am trying to uninstall is not listed; there is no other indication that it even exists. I also turned on RSoP logging and did (yet another) gpupdate /force, which yielded no blatant results. There is no indication that an uninstall event was even triggered, let alone failed or was unable to run. Although I am sure I marked it to uninstall in both cases (falling out of the scope of management, as well as removal of the entry), I am beginning to think the entry just never triggered something that should have been triggered.

    Update #2: After troubleshooting this (frustrating) application assignment, I have chalked it up as a fluke. I have tested with other software to make sure that the uninstall of other application assignments actually works, so I am assuming it is something related to the package directly. There is the possibility that my problem resides in something related to what @joeqwerty linked in a comment below, but because I can't go back in time, I don't think I will be able to prove it. I will probably be running a script via another GPO to guarantee the uninstallation of leftover package installs (a sketch follows below). For now, Evan Anderson is getting the answer because of the debugging information I was able to put to good use. Thank you to everyone who helped contribute so far!
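
    A minimal sketch of that clean-up-by-script fallback, assuming the package is MSI-based — the product GUID below is a placeholder, to be looked up under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall on an affected machine:

        :: deploy as a computer startup script via GPO; silently removes the orphaned package
        msiexec /x {00000000-0000-0000-0000-000000000000} /qn /norestart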


  • Changing user/group to allow PHP to chmod/rename and move_uploaded_file()

    - by moe
    Hi there. It seems like I cannot do anything with my PHP scripts on my VPS: PHP returns 'Permission denied' when I try to upload something to a directory. Yes, I have changed the permissions to 777, and that works, but I do not like the insecurity.

    Running the command:

        ps axu | grep apache | grep -v grep

    returns:

        nobody  7689 0.1 3.8 50604 20036 ? S  21:38 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        root   13600 0.0 3.8 50304 20348 ? Ss Jun06 0:46 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 15733 0.1 3.8 50700 20156 ? S  21:39 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 15818 0.1 3.8 51492 20180 ? S  21:39 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 23843 0.1 3.7 51336 19592 ? S  21:40 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 30335 0.0 3.5 50436 18496 ? S  21:41 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 30406 0.0 3.5 50444 18544 ? S  21:41 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 30407 0.0 3.5 50556 18696 ? S  21:41 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 30472 0.0 3.6 50828 19348 ? S  21:41 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 30474 0.0 3.5 50668 18868 ? S  21:41 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 30476 0.0 3.6 50532 19064 ? S  21:41 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 30501 0.0 3.8 50556 20080 ? S  21:36 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32341 0.0 3.5 50444 18492 ? S  21:41 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32370 0.0 3.5 50444 18476 ? S  21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32414 0.1 3.7 51336 19524 ? S  21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32416 0.1 3.5 50668 18816 ? S  21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32457 0.1 3.6 50828 19320 ? S  21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32458 0.1 3.6 50772 19276 ? S  21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32459 0.0 3.5 50444 18504 ? S  21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32460 0.2 3.6 50828 19320 ? S  21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32463 0.0 3.5 50444 18472 ? S  21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32466 0.0 3.4 50436 17960 ? S  21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL

    The owner of the directory is 'user [505]' and the group is 'user [508]' (as seen in WinSCP). What can I do to align the Apache handler with the right owner and group so that my PHP scripts work?

    P.S. PHP is not set to safe mode, and open_basedir is set to no value.
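
    Since Apache (and therefore mod_php) runs as nobody here, one narrower alternative to 777 is giving that user ownership of just the directory uploads land in — a sketch, with the path as a placeholder:

        # let the webserver user write only where uploads go, nowhere else
        chown nobody:nobody /home/user/public_html/uploads
        chmod 755 /home/user/public_html/uploads

    (The longer-term fix would be running PHP as the site user via suEXEC/FastCGI or suPHP, so created files stay owned by 'user'.)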


  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by rbeier
    Hi, we're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed: these sites all run the same code and run in the same app pool. Roughly once a month over the past few months, all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool; after that, the sites start working again. This only ever affects this one app pool — never any others on the same server.

    A couple of times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule. Which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous.

    Does anyone have any idea as to what might be causing this? There are several unusual things about the way these sites are set up:

    - As I mentioned, they all run the same code — multiple sites point at the same physical directory. The only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works.
    - In several of these sites, the same physical directory is mapped at two levels: as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem.

    If we can't find the cause, we're thinking of a few workarounds we could try:

    1. Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then, in the web.config shared by all the other sites, remove the WindowsAuthenticationModule, so there should be no possibility of a hang within it (see the sketch below).
    2. Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server...
    3. (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck.

    What do you think? Thanks for your help, Richard
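
    For workaround 1, the module removal would look roughly like this in the shared web.config — a sketch for the integrated pipeline, using the standard IIS7 module name:

        <system.webServer>
          <modules>
            <!-- the anonymous sites never need Windows auth, so unload the module entirely -->
            <remove name="WindowsAuthentication" />
          </modules>
        </system.webServer>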


  • Ubuntu unattended-upgrades stops apache

    - by Robbie
    This morning I was alerted to the fact that both Apache instances serving my app were not responding to requests from my load balancer. I attempted apachectl restart and it said Apache was not running, so I started Apache on both instances and got the service up again. I then followed the logs and worked out that both had performed upgrades via the unattended-upgrades package moments before they stopped responding.

    From /var/log/unattended-upgrades/unattended-upgrades.log:

        2013-07-02 06:30:51,875 INFO Starting unattended upgrades script
        2013-07-02 06:30:51,875 INFO Allowed origins are: ['o=Ubuntu,a=precise-security']
        2013-07-02 06:33:57,771 INFO Packages that are upgraded: accountsservice apache2
         apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common apparmor apport
         apt apt-transport-https apt-utils bind9-host binutils dbus dnsutils gnupg gpgv
         isc-dhcp-client isc-dhcp-common krb5-locales libaccountsservice0 libapt-inst1.4
         libapt-pkg4.12 libbind9-80 libc-bin libc-dev-bin libc6 libc6-dev libcurl3-gnutls
         libdbus-1-3 libdbus-glib-1-2 libdns81 libdrm-intel1 libdrm-nouveau1a libdrm-radeon1
         libdrm2 libexpat1 libfreetype6 libgc1c2 libgnutls-dev libgnutls-openssl27
         libgnutls26 libgnutlsxx27 libisc83 libisccc80 libisccfg82 liblwres80 libruby1.8
         libx11-6 libx11-data libxcb1 libxext6 libxml2 linux-firmware linux-image-virtual
         linux-libc-dev linux-virtual multiarch-support openssl perl perl-base perl-modules
         python-apport python-crypto python-keyring python-problem-report
         python-software-properties ri1.8 ruby1.8 ruby1.8-dev sudo tzdata update-manager-core
        2013-07-02 06:33:57,772 INFO Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg_2013-07-02_06:33:57.772399.log'
        2013-07-02 06:36:10,584 INFO All upgrades installed

    I'm running Ubuntu 12.04 on Amazon EC2 servers. I have unattended-upgrades installed and configured as follows:

        # /etc/apt/apt.conf.d/50unattended-upgrades
        // Automatically upgrade packages from these (origin:archive) pairs
        Unattended-Upgrade::Allowed-Origins {
                "${distro_id}:${distro_codename}-security";
        //      "${distro_id}:${distro_codename}-updates";
        //      "${distro_id}:${distro_codename}-proposed";
        //      "${distro_id}:${distro_codename}-backports";
        };

        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        };

        # /etc/apt/apt.conf.d/20auto-upgrades
        APT::Periodic::Update-Package-Lists "1";
        APT::Periodic::Unattended-Upgrade "1";

    I've struggled to find documentation about what happens to running processes during an upgrade.

    - Is this expected behaviour? Or should unattended-upgrades restart Apache after upgrading it?
    - What can I do to ensure Apache is restarted correctly? Should I just blacklist the apache package?
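
    If blacklisting turns out to be the answer, the change is just package names in the existing (currently empty) stanza — a sketch, noting the trade-off that Apache security updates then have to be applied by hand:

        // /etc/apt/apt.conf.d/50unattended-upgrades
        Unattended-Upgrade::Package-Blacklist {
                "apache2";
                "apache2.2-bin";
        };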


  • Apache + Tomcat error 120006 Using mod_proxy_ajp for Load Balance

    - by Wakaru44
    I have an Apache 2 frontend with two nodes, and a backend with two Tomcat 6 instances balanced with mod_proxy_ajp. The database is on a separate machine. All machines run RHEL: 6.2 on the frontend, 5.5 on the backend. The infrastructure is virtualized using VMware.

        # This is the apache config in one of the virtualHosts.
        ProxyPreserveHost On
        ProxyPass / balancer://liferay/
        <Proxy balancer://liferay>
            BalancerMember ajp://lrab:8009 route=liferaya
            BalancerMember ajp://lrbb:8009 route=liferayb status=+H
            ProxySet lbmethod=byrequests nofailover=on
        </Proxy>

    The connector in Tomcat is currently configured like this:

        <!-- Define an AJP 1.3 Connector on port 8009 -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
                   URIEncoding="UTF-8" enableLookups="false" allowTrace="true" />

    Do you think it could be useful to set a maxThreads parameter, like in this post? In that case, how can I determine a proper number of threads? From time to time, we get errors like this:

        [Tue Sep 18 17:57:02 2012] [error] ajp_read_header: ajp_ilink_receive failed
        [Tue Sep 18 17:57:02 2012] [error] (120006)APR does not understand this error code: proxy: read response failed from 192.168.1.104:8009 (lrab)

    ... and Apache switches to the passive node (if it's active) or fails with a 503.

    Some things I have tried so far:

    - I think I have some performance issues with one of the applications; here you can see a thread dump. But I'm not quite sure about it.
    - I also started to monitor the network connection. I have noticed that some pings get lost when I run "ping -f"; the success rate is 100%, so the lost packets are only a few among the flood, but maybe — I don't know — enough to break the link between Apache and Tomcat. I wrote a Python script to check connectivity with timestamps on the pings, so I can know when the network fails.
    - After sniffing the network, I can also see some RST packets, but I don't know whether that is normal behaviour (some applications do that to end a network communication).
    - I have also noticed that the applications have problems communicating with the database, but I'm not even sure whether this is related. If you think so, I can post more info about it.
    - I changed the connector on the Tomcats to use the native one, but still the same.

    I don't even need a full solution to this, just some guidance on how to troubleshoot it better: analyze threads, monitor MySQL performance, sniff the traffic between the Apaches and the Tomcats? Ultimately, all I need is to balance the Tomcat instances in active/passive mode, so if there is another way to do it, I could give it a try.
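
    On the maxThreads question, a common sizing rule (offered as a starting point, not a verdict) is to give each Tomcat at least as many AJP threads as the Apache tier can open connections toward it — roughly MaxClients per frontend node, times the two frontends here. A hedged sketch, with the numbers as placeholders to be tuned:

        <!-- AJP pool sized to absorb both Apache nodes sending their full load -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
                   URIEncoding="UTF-8" enableLookups="false"
                   maxThreads="512" connectionTimeout="600000" />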


  • Apache + PHP in paths with accented letters

    - by Álvaro G. Vicario
    I'm not able to run a PHP-enabled web site under Apache on Windows XP if the path to DOCUMENT_ROOT contains accented letters. I'm not referring to the script file names themselves, but to any folder in the path components. I have this virtual host definition:

        <VirtualHost *:80>
            ServerName foo.local
            DocumentRoot "E:/gonzález/sites/foo"
            ErrorLog logs/foo.local-error.log
            CustomLog logs/foo.local-access.log combined
            <Directory "E:/gonzález/sites/foo">
                AllowOverride All
                Options Indexes FollowSymLinks
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    If I save the file in ANSI, I get a syntax error:

        DocumentRoot must be a directory

    If I save the file in Unicode, I get another syntax error (it looks like Apache is complaining about the BOM):

        Invalid command '\xff\xfe#', perhaps misspelled or defined by a module not included in the server configuration

    If I save the file in BOM-less UTF-8, Apache works fine and serves static files with no apparent issue... however, PHP complains when loading any *.php file (even an empty one):

        Warning: Unknown: failed to open stream: No such file or directory in Unknown on line 0
        Fatal error: Unknown: Failed opening required 'E:/gonzález/sites/foo/vacio.php' (include_path='.;C:\Archivos de programa\PHP\pear') in Unknown on line 0

    I decided to try the 8.3 short name of the directory (just a test — I don't want to use such a workaround):

        <VirtualHost *:80>
            ServerName foo.local
            DocumentRoot "E:/GONZLE~1/sites/foo"
            ErrorLog logs/foo.local-error.log
            CustomLog logs/foo.local-access.log combined
            <Directory "E:/GONZLE~1/sites/foo">
                AllowOverride All
                Options Indexes FollowSymLinks
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    But I get the same behaviour:

        Warning: Unknown: failed to open stream: No such file or directory in Unknown on line 0
        Fatal error: Unknown: Failed opening required 'E:/gonzález/sites/foo/vacio.php' (include_path='.;C:\Archivos de programa\PHP\pear') in Unknown on line 0

    While there are obvious workarounds (use plain ASCII in all directory names, or create NTFS junctions to hide the actual names), I can't believe that this cannot be done. Do you have more information about the subject? My specs are 32-bit Windows XP Professional SP3, Apache 2.2.13 and PHP 5.2.11 running as an Apache module (but I've noticed the same issue on another box with Windows Vista and PHP 5.3.1).
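
    For the record, the junction workaround mentioned above would look like this on XP using the Sysinternals junction tool (on Vista and later, mklink /J does the same job natively) — the target is the accented directory from the question:

        :: create an ASCII-only alias pointing at the real directory
        junction.exe E:\gonzalez E:\gonzález
        :: then the vhost can use: DocumentRoot "E:/gonzalez/sites/foo"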

    Read the article

  • iCloud stuff stops working while connected to OpenVPN [closed]

    - by Taco Bob
    I have a fairly simple OpenVPN setup on an OpenVZ VPS with Ubuntu 11.10. The client is the Viscosity client on Mac OS X 10.8.2, and after some testing, we can rule out the client as being part of the problem. Everything has been working fine except for Apple's iCloud stuff. Web surfing, email, FTP, NNTP, and Skype are all working as expected. It's ONLY the iCloud services that cease to function. If I connect to the VPN, iCloud stuff stops working. I no longer get anything in Messages, Calendar items don't get updated, and Notifications stop working. If I disconnect, the iCloud stuff all starts working. Connect again, iCloud stops working. Here's the server.conf: status openvpn-status.log log /var/log/openvpn.log verb 4 port 1194 proto udp dev tun ca /etc/openvpn/ca.crt cert /etc/openvpn/server.crt key /etc/openvpn/server.key dh /etc/openvpn/dh1024.pem server 10.9.8.0 255.255.255.0 ifconfig-pool-persist ipp.txt push "redirect-gateway def1" push "dhcp-option DNS 10.9.8.1" keepalive 10 120 duplicate-cn cipher BF-CBC comp-lzo user nobody group nogroup persist-key persist-tun tun-mtu 1500 mssfix 1400 I'm using iptables in a script, and it's also fairly simplistic. iptables -F iptables -t nat -F iptables -t mangle -F iptables -A FORWARD -i tun0 -o venet0 -j ACCEPT iptables -A FORWARD -i venet0 -o tun0 -j ACCEPT iptables -A INPUT -p tcp --dport 22 -j ACCEPT iptables -A INPUT -p tcp --dport 1194 -j ACCEPT iptables -A INPUT -p udp --dport 1194 -j ACCEPT iptables -t nat -A POSTROUTING -s 10.9.8.0/24 -j SNAT --to-source <server's public ip> echo 1 > /proc/sys/net/ipv4/ip_forward I tried forwarding ports as well, with no success. iptables -A FORWARD -p tcp -d 10.9.8.0/24 --dport 5222:5230 -j ACCEPT iptables -t nat -A PREROUTING -p tcp --dport 5222:5230 -j DNAT --to-destination 10.9.8.6 I am also sometimes behind a double-NAT situation that I have no control over. Client -> work VPN -> my OpenVPN box -> Internet. Client -> Airport Express -> ISP (which is doing NAT) -> my OpenVPN box -> Internet. Those two situations are just a fact of life where I am, and I cannot change them. I do have full control over my client and the OpenVPN server. I am completely out of ideas. I have posted a similar query at the OpenVPN forums, but it hasn't appeared yet and still seems to be in their moderation queue. I tried the Freenode IRC channels, but nobody is awake, so here I am. I have Googled extensively for this and can find nothing related. Help me get iCloud stuff working again!
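    A hedged place to start (an assumption, not a confirmed diagnosis): Apple's push connections (long-lived TCP to port 5223) are notoriously fragile over tunnels with path-MTU problems, and this setup stacks OpenVPN on top of one or two NAT layers. Two cheap experiments:

        # server.conf: try a more conservative clamp than tun-mtu 1500 / mssfix 1400
        mssfix 1300

        # on the Mac, while connected, check whether apsd's port-5223 connection
        # is actually ESTABLISHED or stuck in SYN_SENT:
        lsof -nP -i :5223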

    Read the article

  • Convert mkv/h264 video so it can be played on a "mid-range" Sony Ericsson phone. (using Ubuntu).

    - by Johan
    Hi. As a little experiment I'm thinking of converting some video/movies/TV series into a format that could be playable on my K850, but to be a little more generic in this question, let's say a "mid-range Sony Ericsson" phone, since they all more or less behave the same and have the same screen resolution (240 x 320). I am looking for command-line based tools (for Ubuntu), since I am thinking about writing a "convert and move" script later if it is successful. A lot of the video I have is encoded in mkv/h264, but since that is not supported by the phone, I guess I need to convert it into some low-quality mp4/mpeg4 video. After some googling it seems like a good candidate for the job is ffmpeg, but that seems to be a very versatile tool with a lot of magic tricks. Am I on the right track? And if so, how do I use ffmpeg to do this? Thanks Johan Update: After playing a little with ffmpeg I noticed that it only uses 1 of my 4 cores, so the transcoding takes forever. I found an arg called -threads, but that did not change much; maybe I got it wrong. I also found that something like this plays on the phone. ffmpeg -i Mythbusters\ S1D1_1.mkv -threads 4 -t 180 -vcodec mpeg4 -r 15 -s 320x240 Mythbusters\ S1D1_1_mini.mp4 It was possible to use 3gp/h263, but the quality was really useless. ffmpeg -i Mythbusters\ S1D1_1.mkv -t 180 -vcodec h263 -acodec libfaac -s cif Mythbusters\ S1D1_1_cif.3gp And it seems like mp4/h264 is also possible and the result is OK, thanks to this question; this one seems to use more than one core as well, so it was a little faster for me. ffmpeg -i Mythbusters_S1D1_1.mkv -t 180 -acodec libfaac -ab 60k -s 320x240 -vcodec libx264 -b 500k -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -flags2 +mixed_refs -me_method umh -subq 6 -trellis 1 -refs 5 -coder 0 -me_range 16 -g 250 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -bt 500k -maxrate 768k -bufsize 2M -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 13 -threads 0 -f mp4 Mythbusters_S1D1_1_qvga.mp4 Update: I have tried HandBrakeCLI and it is no problem to create a new file that seems to be the same as the one created with ffmpeg, with something like this. HandBrakeCLI -i Mythbusters_S1D1_1.mkv --size 100 -E faac -B 60 --maxHeight 240 -r 15 -e x264 -o Mythbusters_S1D1_1_hand.mp4 But that one did not play on the phone... I found this in the official manual: If you transfer video clips using another program than Media Go™, we recommend that you select H.264 Baseline profile video, up to QVGA at 30 fps, VBR 384 kbps (max 768 kps) with AAC+ audio at 128 kbps (max 255 kbps), 48 kHz and stereo audio in mp4 file format. So the idea to use H264 seems to be correct.
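    Since the last ffmpeg command above plays on the phone, the "convert and move" wrapper can be a short loop. A hedged sketch (the directories are illustrative assumptions; the flags are a working subset of the ones above, minus the -t 180 test-clip limit):

        #!/bin/bash
        # transcode every mkv in a folder to a phone-sized mp4, then move it over
        for f in "$HOME"/videos/*.mkv; do
            out="${f%.mkv}_qvga.mp4"
            ffmpeg -i "$f" -acodec libfaac -ab 60k -s 320x240 \
                   -vcodec libx264 -b 500k -threads 0 -f mp4 "$out" \
                && mv "$out" /media/K850/Video/
        done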

    Read the article

  • Hylafax / Capi4hylafax: faxgetty does not recognize number of lines

    - by Wrikken
    We've got a T.30 card with 30 working lines on it, but for some reason, if I add more than 30 faxes to the queue at any time (and we're busy enough at peak times that this happens a lot), faxgetty sends faxes to non-existent lines and they appear in the error queue as a 'busy' signal on the line, which results in a lot of failed faxes because the counter of max 3 tries increases rapidly. This is using faxgetty (USE_FAXGETTY="y" in /etc/default/hylafax). I've inherited this thing, so I'm not entirely sure how faxgetty is supposed to know the number of lines. However, if I switch the script to faxmodem (USE_FAXGETTY="n" in /etc/default/hylafax and manually enabling 30 modems), this behavior goes away (new faxes 'wait' for a line to be available before trying to send, so each try/fail is a valid one on a working line, greatly decreasing the number of failed faxes). However, when researching this, almost everyone talks about faxgetty being the preferred, more robust method, and on top of that, for some unexplained reason all FIFOs disappeared after several errorless hours with faxmodem, forcing a HylaFAX restart using faxgetty until we figure out why the faxmodem solution failed (which is another question, and somewhat out of scope here). Environment: Debian 2.6.26-2-amd64 capi4hylafax 1:01.03.00.99.svn.300-12 hylafax-client 2:4.4.4-10.1 hylafax-server 2:4.4.4-10.1 Config --hfaxd.conf-- LogFacility: daemon ServerTracing: 0x1ff --hyla.conf-- Host: localhost Verbose: No VRes: 196 TimeZone: local DialRules: "/etc/hylafax/dialrules.europe" --/etc/hylafax/config -- InternationalPrefix: 00 LongDistancePrefix: 0 AreaCode: 99999 CountryCode: 31 DialStringRules: "etc/dialrules.europe" ModemGroup: any:faxCAPI SendFaxCmd: "/usr/bin/wrapc2faxsend" --/etc/hylafax/config.faxCAPI -- SpoolDir: /var/spool/hylafax FaxRcvdCmd: /var/spool/hylafax/bin/faxrcvd PollRcvdCmd: /var/spool/hylafax/bin/pollrcvd FaxReceiveUser: uucp FaxReceiveGroup: dialout LogFile: /var/spool/hylafax/log/capi4hylafax #no, checking this log did not yield anything interesting LogTraceLevel: 4 LogFileMode: 0600 ModemGroup: any:faxCAPI #repeats for faxCAPI2 through faxCAPI30, with of course another device name/local ident: { HylafaxDeviceName: faxCAPI RecvFileMode: 0600 FAXNumber: ****redacted**** LocalIdentifier: ****some-ident-per-device*** MaxConcurrentRecvs: 0 OutgoingController: 1 OutgoingMSN: SuppressMSN: 0 NumberPrefix: NumberPlusReplacer: "00" UseISDNFaxService: 0 RingingDuration: 0 { Controller: 1 AcceptSpeech: 0 UseDDI: 0 DDIOffset: DDILength: 0 IncomingDDIs: IncomingMSNs: AcceptGlobalCall: 1 } } So, in short: how does faxgetty determine the number of lines available? (The man page isn't terribly revealing, and I can't find an appropriate setting in hylafax-config.) And how can I get a capi4hylafax/hylafax setup which correctly queues more faxes than there are lines available, without immediately incrementing the fail count? We will not be receiving any faxes on this machine, by the way. As I said, I've inherited this thing, so if there are important configuration options I'm not including, please let me know.
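    If faxmodem turns out to be the workable mode, the 30 lines don't have to be enabled by hand. A hedged sketch (untested; it assumes the faxCAPI, faxCAPI2 ... faxCAPI30 device-naming scheme from config.faxCAPI):

        #!/bin/sh
        # register each CAPI pseudo-modem with the HylaFAX scheduler
        faxmodem faxCAPI
        for i in $(seq 2 30); do
            faxmodem "faxCAPI$i"
        done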

    Read the article

  • Excluding child processes from ps

    - by stefpet
    Background: To reload app configuration I need to kill -HUP the parent processes' PIDs. To find the PIDs I currently use ps auxf | grep gunicorn with the following example output: $ ps auxf | grep gunicorn stpe 4222 0.0 0.2 64524 11668 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4225 0.0 0.4 76920 16332 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4226 0.0 0.4 76932 16340 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4227 0.0 0.4 76940 16344 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4228 0.0 0.4 76948 16344 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4229 0.0 0.4 76960 16356 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4230 0.0 0.4 76972 16368 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4231 0.0 0.4 78856 18644 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4232 0.0 0.4 76992 16376 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 5685 0.0 0.0 22076 908 pts/1 S+ 11:50 0:00 | \_ grep --color=auto gunicorn stpe 5012 0.0 0.2 64512 11656 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5021 0.0 0.4 77656 17156 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5022 0.0 0.4 77664 17156 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5023 0.0 0.4 77672 17164 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5024 0.0 0.4 77684 17196 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5025 0.0 0.4 77692 17200 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5026 0.0 0.4 77700 17208 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5027 0.0 0.4 77712 17220 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5028 0.0 0.4 77720 17220 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py Based on the above, I see that it is 4222 and 5012 that I need to HUP. Question: How can I exclude the child processes and get only the parent processes (note, however, that the processes I want also have a parent of their own, e.g. bash, that I'm not interested in)? Using a regexp with grep on how much indentation there is in the ASCII tree feels dirty. Is there a better way? Example: The desired output would be something like this. stpe 4222 0.0 0.2 64524 11668 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 5012 0.0 0.2 64512 11656 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py This would be easily parseable, making it possible to automatically find the PIDs in a script that does the HUPing, which is the goal.
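    One grep-free approach: test each gunicorn PID's parent; a worker's parent is itself gunicorn, while a master's parent is something else (bash here). A sketch assuming GNU pgrep/ps; adjust the pattern to taste:

        #!/bin/sh
        # HUP only gunicorn processes whose parent is not gunicorn (i.e. the masters)
        for pid in $(pgrep -f '/usr/local/bin/gunicorn'); do
            ppid=$(ps -o ppid= -p "$pid")
            # $ppid is unquoted on purpose so the shell strips ps's padding
            if ! ps -o cmd= -p $ppid | grep -q gunicorn; then
                kill -HUP "$pid"
            fi
        done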

    Read the article
