Search Results

Search found 14422 results on 577 pages for 'fix'.


  • Diagnosing packet loss / high latency in Ubuntu

    - by Sam Gammon
    We have a Linux box (Ubuntu 12.04) running Nginx (1.5.2), which acts as a reverse proxy/load balancer to some Tornado and Apache hosts. The upstream servers are physically and logically close (same DC, sometimes same-rack) and show sub-millisecond latency between them: PING appserver (10.xx.xx.112) 56(84) bytes of data. 64 bytes from appserver (10.xx.xx.112): icmp_req=1 ttl=64 time=0.180 ms 64 bytes from appserver (10.xx.xx.112): icmp_req=2 ttl=64 time=0.165 ms 64 bytes from appserver (10.xx.xx.112): icmp_req=3 ttl=64 time=0.153 ms We receive a sustained load of about 500 requests per second, and are currently seeing regular packet loss / latency spikes from the Internet, even from basic pings: sam@AM-KEEN ~> ping -c 1000 loadbalancer PING 50.xx.xx.16 (50.xx.xx.16): 56 data bytes 64 bytes from loadbalancer: icmp_seq=0 ttl=56 time=11.624 ms 64 bytes from loadbalancer: icmp_seq=1 ttl=56 time=10.494 ms ... many packets later ... Request timeout for icmp_seq 2 64 bytes from loadbalancer: icmp_seq=2 ttl=56 time=1536.516 ms 64 bytes from loadbalancer: icmp_seq=3 ttl=56 time=536.907 ms 64 bytes from loadbalancer: icmp_seq=4 ttl=56 time=9.389 ms ... many packets later ... Request timeout for icmp_seq 919 64 bytes from loadbalancer: icmp_seq=918 ttl=56 time=2932.571 ms 64 bytes from loadbalancer: icmp_seq=919 ttl=56 time=1932.174 ms 64 bytes from loadbalancer: icmp_seq=920 ttl=56 time=932.018 ms 64 bytes from loadbalancer: icmp_seq=921 ttl=56 time=6.157 ms --- 50.xx.xx.16 ping statistics --- 1000 packets transmitted, 997 packets received, 0.3% packet loss round-trip min/avg/max/stddev = 5.119/52.712/2932.571/224.629 ms The pattern is always the same: things operate fine for a while (<20ms), then a ping drops completely, then three or four high-latency pings (1000ms), then it settles down again. Traffic comes in through a bonded public interface (we will call it bond0) configured as such: bond0 Link encap:Ethernet HWaddr 00:xx:xx:xx:xx:5d inet addr:50.xx.xx.16 Bcast:50.xx.xx.31 Mask:255.255.255.224 inet6 addr: <ipv6 address> Scope:Global inet6 addr: <ipv6 address> Scope:Link UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1 RX packets:527181270 errors:1 dropped:4 overruns:0 frame:1 TX packets:413335045 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:240016223540 (240.0 GB) TX bytes:104301759647 (104.3 GB) Requests are then submitted via HTTP to upstream servers on the private network (we can call it bond1), which is configured like so: bond1 Link encap:Ethernet HWaddr 00:xx:xx:xx:xx:5c inet addr:10.xx.xx.70 Bcast:10.xx.xx.127 Mask:255.255.255.192 inet6 addr: <ipv6 address> Scope:Link UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1 RX packets:430293342 errors:1 dropped:2 overruns:0 frame:1 TX packets:466983986 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:77714410892 (77.7 GB) TX bytes:227349392334 (227.3 GB) Output of uname -a: Linux <hostname> 3.5.0-42-generic #65~precise1-Ubuntu SMP Wed Oct 2 20:57:18 UTC 2013 x86_64 GNU/Linux We have customized sysctl.conf in an attempt to fix the problem, with no success. 
Output of /etc/sysctl.conf (with irrelevant configs omitted): # net: core net.core.netdev_max_backlog = 10000 # net: ipv4 stack net.ipv4.tcp_ecn = 2 net.ipv4.tcp_sack = 1 net.ipv4.tcp_fack = 1 net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_tw_recycle = 0 net.ipv4.tcp_timestamps = 1 net.ipv4.tcp_window_scaling = 1 net.ipv4.tcp_no_metrics_save = 1 net.ipv4.tcp_max_syn_backlog = 10000 net.ipv4.tcp_congestion_control = cubic net.ipv4.ip_local_port_range = 8000 65535 net.ipv4.tcp_syncookies = 1 net.ipv4.tcp_synack_retries = 2 net.ipv4.tcp_thin_dupack = 1 net.ipv4.tcp_thin_linear_timeouts = 1 net.netfilter.nf_conntrack_max = 99999999 net.netfilter.nf_conntrack_tcp_timeout_established = 300 Output of dmesg -d, with non-ICMP UFW messages suppressed: [508315.349295 < 19.852453>] [UFW BLOCK] IN=bond1 OUT= MAC=<mac addresses> SRC=118.xx.xx.143 DST=50.xx.xx.16 LEN=68 TOS=0x00 PREC=0x00 TTL=51 ID=43221 PROTO=ICMP TYPE=3 CODE=1 [SRC=50.xx.xx.16 DST=118.xx.xx.143 LEN=40 TOS=0x00 PREC=0x00 TTL=249 ID=10220 DF PROTO=TCP SPT=80 DPT=53817 WINDOW=8190 RES=0x00 ACK FIN URGP=0 ] [517787.732242 < 0.443127>] Peer 190.xx.xx.131:59705/80 unexpectedly shrunk window 1155488866:1155489425 (repaired) How can I go about diagnosing the cause of this problem, on a Debian-family Linux box?
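A quick way to make spikes like these easier to correlate with dmesg and interface error counters is to keep a timestamped latency log rather than eyeballing a single ping run. Below is a minimal sketch (assuming a Linux host with the standard ping utility and Python 3.7+; the target address is a placeholder) that records one RTT per second:

```python
#!/usr/bin/env python3
# Minimal latency logger: ping the target once per second and append a
# timestamped RTT (or LOST) to a file, so spikes can be lined up against
# dmesg timestamps and interface error counters afterwards. Assumes a
# Linux host with the standard ping utility; the target is a placeholder.
import re
import subprocess
import time

TARGET = "50.xx.xx.16"     # placeholder: the load balancer's public IP
LOGFILE = "latency.log"

def ping_once(host):
    """Return the RTT in ms, or None on loss/timeout."""
    out = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                         capture_output=True, text=True).stdout
    m = re.search(r"time=([\d.]+) ms", out)
    return float(m.group(1)) if m else None

with open(LOGFILE, "a") as log:
    while True:
        rtt = ping_once(TARGET)
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        log.write("%s %s\n" % (stamp, "LOST" if rtt is None else "%.3f" % rtt))
        log.flush()
        time.sleep(1)
```

Lining the LOST and spike entries up against dmesg -T output and the error counters on the bond slaves from the same moments usually narrows things down to either the NIC/bond side or something upstream.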

    Read the article

  • Chrome does not re-draw properly on Windows 8

    - by Akshat Mittal
    There are a lot of problems with Chrome (24.0.1312.14 beta || But all this happened before update also) on Windows 8. Problems and explanations are listed below: Google Chrome re-draw time: When I switch tabs, the window retains the content of the previous tab and displays that even if I move my mouse, if only refreshes (re-draws) when there is a change on the webpage (like on hover) or I do a select all (or scroll). One thing to note is that the hover and select happens on the real page and not the retained image-like thing of the older webpage. Chrome is slow and laggy: Websites such as Facebook and Twitter (and more) have gone extremely laggy on Chrome (Win 8). When I was using Windows 7, I never experienced a lag or something. Also when using HTML-5 Websites, the transition (the -webkit-transition in CSS) goes extremely slow at times. Plugins Crash: Plugins like Flash Player, Shockwave Player, and more that are in-built into Chrome Crashes a lot, even when doing simple tasks like playing YouTube Videos, displaying ads or something. Chrome Crashes: Chrome has crashed over 100 times in the past month. Google Chrome just crashes randomly or I don't know the reason. Random Page crashes: Chrome results chrome://crash/(Copy-Paste this in address bar) on random pages even when the page is just loaded, I understand that this can happen on heavy HTML5 or JS websites but what about HTML only websites? Computer Freeze: Chrome sometimes, randomly, freezes my computer. Freeze in the sense, none of the other apps are also working. It's like the whole system freezes, I can not even switch to other apps. I am sure that this is because of Chrome since this happens only when Chrome is active. Most of the things above happens on Super User also, Super User never had any problem when using Chrome on Windows 7. UPDATE 1: @magicandre1981 Commented for trying to disable Hardware Acceleration. I tried it, it somewhat solved the problem but din't fix it. I am still experiencing all the above issues but less frequently (maybe because Chrome Restarted Completely) UPDATE 2: @avirk asked me to try a Stable Version of Chrome and Firefox, I din't experience any lag in Firefox, a little (negligible) lag in Chrome 22 (Maybe because its a new copy of Chrome, I haven't used it much). UPDATE 3: @NothingsImpossible said that He is also experiencing the same problem on Windows Server 2008! This seams to be a major issue now. He also said that GPU load is also high at the same time! Even I saw the same thing. UPDATE 4: Recently, Chrome updated to v24 Stable (I am using stable from a long time now). I was experiencing this problem a lot less in Chrome 23, but this is back in Chrome 24. Seams like Chrome 24 is the most affected from this bug, as this same problem was high in Chrome 24 beta also. UPDATE 5: Chrome was updated to v25 Stable. This problem is 99% Gone, it is still there in 1% of the cases. One such example is when I leave chrome inactive for a while with a few tabs open, the tabs go black and no activity can get them back to active state. If I open a new tab, the new tab is OK but the others are still black, I need to close all those tabs. UPDATE 6: Chrome updated to v27 Stable channel, this problem is nearly gone. This does happen occasionally, but not as frequent as in earlier versions of Chrome. UPDATE 7: I am on Chrome v35.0.1916.114 Stable, Windows 8.1 Pro Update 1. Some of the other problems appears to be back. Chrome is slow and laggy again. Re-draw time is getting worse. 
Is anybody else experiencing such issues? Does anybody have a solution to any of these?

    Read the article

  • Extreme headache from ASSP Extreme Ban

    - by Chase Florell
    I've got a local user on my server that as of today cannot send email from any of their devices. Only Webmail (which doesn't touch any of their devices) works. Here are the various email failures I'm receiving in the logs. Dec-04-12 19:52:47 75966-05166 [SpoofedSender] 111.111.111.111 <[email protected]> to: [email protected] [scoring:20] -- No Spoofing Allowed -- [Test]; Dec-04-12 19:52:47 75966-05166 [Extreme] 111.111.111.111 <[email protected]> to: [email protected] [spam found] -- score for 111.111.111.111 is 1980, surpassing extreme level of 500 -- [Test] -> spam/Test__1.eml; Dec-04-12 19:52:48 75968-05169 111.111.111.111 <[email protected]> to: [email protected] [scoring:10] -- IP in HELO does not match connection: '[192.168.0.10]' -- [Re Demo Feedbacks for End of November Sales]; Dec-04-12 19:52:48 75968-05169 [SpoofedSender] 111.111.111.111 <[email protected]> to: [email protected] [scoring:20] -- No Spoofing Allowed -- [Re Demo Feedbacks for End of November Sales]; Dec-04-12 19:52:48 75968-05169 [Extreme] 111.111.111.111 <[email protected]> to: [email protected] [spam found] -- score for 111.111.111.111 is 2020, surpassing extreme level of 500 -- [Re Demo Feedbacks for End of November Sales] ->spam/Re_Demo_Feedbacks_for_End_of_N__2.eml; Dec-04-12 19:52:57 75977-05179 [SpoofedSender] 111.111.111.111 <[email protected]> to: [email protected] [scoring:20] -- No Spoofing Allowed -- [test]; Dec-04-12 19:52:57 75977-05179 [Extreme] 111.111.111.111 <[email protected]> to: [email protected] [spam found] -- score for 111.111.111.111 is 2040, surpassing extreme level of 500 -- [test] -> spam/test__3.eml; ……………. Dec-04-12 19:55:35 76135-05338 [SpoofedSender] 111.111.111.111 <[email protected]> to: [email protected] [scoring:20] -- No Spoofing Allowed -- [test]; Dec-04-12 19:55:35 76135-05338 [MsgID] 111.111.111.111 <[email protected]> to: [email protected] [scoring] (Message-ID not valid: 'E8472A91545B44FBAE413F6D8760C7C3@bts'); Dec-04-12 19:55:35 76135-05338 [InvalidHELO] 111.111.111.111 <[email protected]> to: [email protected] [spam found] -- Invalid HELO: 'bts' -- [test] -> discarded/test__4.eml; note: 111.111.111.111 is a replacement for the users home IP address Here is the headers of one of the messages X-Assp-Score: 10 (HELO contains IP: '[192.168.0.10]') X-Assp-Score: 10 (IP in HELO does not match connection: '[192.168.0.10]') X-Assp-Score: 20 (No Spoofing Allowed) X-Assp-Score: 10 (bombSubjectRe: 'sale') X-Assp-Score: 20 (blacklisted HELO '[192.168.0.10]') X-Assp-Score: 45 (DNSBLcache: failed, 111.111.111.111 listed in safe.dnsbl.sorbs.net) X-Assp-DNSBLcache: failed, 174.0.35.31 listed in safe.dnsbl.sorbs.net X-Assp-Received-SPF: fail (cache) ip=174.0.35.31 [email protected] helo=[192.168.0.10] X-Assp-Score: 10 (SPF fail) X-Assp-Envelope-From: [email protected] X-Assp-Intended-For: [email protected] X-Assp-Version: 1.7.5.7(1.0.07) on ASSP.nospam X-Assp-ID: ASSP.nospam (77953-07232) X-Assp-Spam: YES X-Assp-Original-Subject: Re: Demo Feedbacks for End of November Sales X-Spam-Status:yes X-Assp-Spam-Reason: MessageScore (125) over limit (50) X-Assp-Message-Totalscore: 125 Received: from [192.168.0.10] ([111.111.111.111] helo=[192.168.0.10]) with IPv4:25 by ASSP.nospam; 4 Dec 2012 20:25:52 -0700 Content-Type: multipart/alternative; boundary=Apple-Mail-40FE7453-4BE7-4AD6-B297-FB81DAA554EC Content-Transfer-Encoding: 7bit Subject: Re: Demo Feedbacks for End of November Sales References: <003c01cdd22e$eafbc6f0$c0f354d0$@com> From: Some User <[email protected]> In-Reply-To: 
<003c01cdd22e$eafbc6f0$c0f354d0$@com> Message-Id: <[email protected]> Date: Tue, 4 Dec 2012 19:32:28 -0700 To: External User <[email protected]> Mime-Version: 1.0 (1.0) X-Mailer: iPhone Mail (10A523) Why is it that a local sender has been banned on our local server, and how can I fix this?
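Before whitelisting anything, it can help to see exactly which ASSP rules are stacking up against the home IP. This is a rough log-triage sketch, not an ASSP fix: it assumes the maillog lines look like the excerpts above and simply totals the [scoring:NN] penalties per source IP, listing the reasons (SpoofedSender, invalid HELO, SPF fail, and so on). The log path is an assumption; point it at your ASSP maillog.

```python
# Rough triage of the ASSP maillog: total the "[scoring:NN]" penalties
# per source IP and list the reasons, so you can see which rules push a
# local sender past the extreme ban threshold. Log path and line format
# are assumptions based on the excerpts in the question.
import re
from collections import defaultdict

LOG = "/var/log/assp/maillog.txt"   # adjust to your ASSP install

line_re = re.compile(
    r"^\S+ \S+ \S+ (?:\[[^\]]+\] )?(?P<ip>\d+\.\d+\.\d+\.\d+) "
    r".*\[scoring:(?P<score>\d+)\] -- (?P<reason>.+?) --"
)

totals = defaultdict(int)
reasons = defaultdict(set)
with open(LOG) as fh:
    for line in fh:
        m = line_re.search(line)
        if not m:
            continue
        totals[m.group("ip")] += int(m.group("score"))
        reasons[m.group("ip")].add(m.group("reason").strip())

for ip, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print("%-15s total=%-5d %s" % (ip, score, ", ".join(sorted(reasons[ip]))))
```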

    Read the article

  • Varnish cached 'MISS status' object?

    - by Hesey
    My site uses nginx, varnish, jboss. And some url will be cached by varnish, it depends a response header from jboss. The first time, jboss tells varnish doesn't cache this url. Then the second request, jboss tells varnish to cache, but varnish won't cache it. I used varnishstat and found that 1 object is cached in Varnish, is that the 'MISS status' object? I remove grace code and the problem still exists. When I PURGE this url, varnish works fine and cache the url then. But I can't PURGE so much urls every startup time, how can I fix this? The configuration: acl local { "localhost"; } backend default { .host = "localhost"; .port = "8080"; .probe = { .url = "/preload.htm"; .interval = 3s; .timeout = 1s; .window = 5; .threshold = 3; } } sub vcl_deliver { if (req.request == "PURGE") { remove resp.http.X-Varnish; remove resp.http.Via; remove resp.http.Age; remove resp.http.Content-Type; remove resp.http.Server; remove resp.http.Date; remove resp.http.Accept-Ranges; remove resp.http.Connection; set resp.http.keeplive="true"; } else { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } } } sub vcl_recv { if(req.url ~ "/check.htm"){ error 404 "N"; } if( req.http.host ~ "store." || req.request == "POST"){ return (pipe); } if (req.backend.healthy) { set req.grace = 30s; } else { set req.grace = 10m; } set req.http.x-cacheKey = "0"; if(req.url ~ "/shop/view_shop.htm" || req.url ~ "/shop/viewShop.htm" || req.url ~ "/index.htm"){ if(req.url ~ "search=y"){ set req.http.x-cacheKey = req.http.host + "/search.htm"; }else if(req.url !~ "bbs=y" && req.url !~ "shopIntro=y" && req.url !~ "shop_intro=y"){ set req.http.x-cacheKey = req.http.host + "/index.htm"; } }else if(req.url ~ "/search"){ set req.http.x-cacheKey = req.http.host + "/search.htm"; } if( req.http.x-cacheKey == "0" && req.url !~ "/i/"){ return (pipe); } if (req.request == "PURGE") { if (client.ip ~ local) { return (lookup); } else { error 405 "Not allowed."; } } if (req.url ~ "/i/") { set req.http.x-shop-url = req.original_url; }else { unset req.http.cookie; } } sub vcl_fetch { set beresp.grace = 10m; #unset beresp.http.x-cacheKey; if (req.url ~ "/i/" || req.url ~ "status" ){ set beresp.ttl = 0s; /* ttl=0 for dynamic content */ } else if(beresp.http.x-varnish-cache != "1"){ set beresp.do_esi = true; /* Do ESI processing */ set beresp.ttl = 0s; unset beresp.http.set-cookie; } else { set beresp.do_esi = true; /* Do ESI processing */ set beresp.ttl = 1800s; unset beresp.http.set-cookie; } } sub vcl_hash { hash_data(req.http.x-cacheKey); return (hash); } sub vcl_error { if (req.request == "PURGE") { return (deliver); } else { set obj.http.Content-Type = "text/html; charset=gbk"; synthetic {"<!--ve-->"}; return (deliver); } } sub vcl_hit { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged."; } } sub vcl_miss { if (req.request == "PURGE") { error 404 "N"; } }
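One way to confirm whether the second request really becomes cacheable is to probe Varnish directly and watch the X-Cache and Age headers that this vcl_deliver sets. A small sketch follows; the host and path are placeholders, so point it at a URL that maps to one of the x-cacheKey rules above:

```python
# A probe, not a VCL change: request the same URL twice through Varnish
# and print the X-Cache / Age headers set in vcl_deliver above, to see
# whether the second request really becomes a HIT once the backend says
# the response is cacheable. Host and path are placeholders.
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

URL = "http://localhost/shop/view_shop.htm"   # placeholder test URL

def probe(url):
    resp = urlopen(url)
    print(resp.getcode(),
          "X-Cache=%s" % resp.headers.get("X-Cache"),
          "Age=%s" % resp.headers.get("Age"))

probe(URL)   # expected MISS while the backend still says "don't cache"
probe(URL)   # should flip to HIT once the backend marks it cacheable
```

If the second probe still reports MISS with Age: 0, that would suggest the object stored after the first pass is the zero-TTL variant from vcl_fetch, which matches the behaviour described.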

    Read the article

  • Coda 2 and SCP uploading files with the wrong permission

    - by Tom Black
    Currently I have a basic Ubuntu server running a website. The website is for a few students learning HTML/PHP and each student has their own account with a symbolic link to the shared website folder. Since the students are working on the website together, each user needs to be able to modify all the files (index.html for example). So I created a Webdev group containing all of the students with the default umask of 0002 set in their .bashrc (This allows newly created files to be 774). The shared folder is owned by the group Webdev with a chmod g+s so that new files/folders also belong to the group Webdev. The problem is that the students are using an IDE (Coda 2) and when they create a new file or folder using the IDE the file has the permissions of 644 on the server (not group writable). However when I make a new file through connecting with Cyberduck (SFTP client) the file permissions are 664 (as they should be). So I don't understand why Coda would be any different. However, after some trial and error I believe that Coda is first creating the file on local disk and then uploading that file to the server. On a mac by default a newly created file is 644. When the client uploads a file that's already 644 it stays 644 on the server side (umask is kind of useless in this situation). I've also tried creating ACL permissions for that folder but an uploaded file from my mac via SCP doesn't get the default ACL permissions. In Coda there is an option to change file permissions on a transfer. However this option seems to apply a chmod to all files being uploaded or saved. When one of students is modifying a file created by someone else when they try to upload the file or save it Coda tries to also do a chmod but fails because that user isn't the owner of the file. My current solution is using bindfs... I mount the shared web folder and bindfs sets permissions and group ownership of newly created files. However, bindfs seems to be a bit slow and I'm sure there is a better solution. Even if the students ditched Coda 2 and used Mac vim with scp the newly created files on the server would behave the same (644) which is default on the mac. Other options... 1) Either I teach the students to use (ssh/chmod) with their IDE to change their own file permissions when uploading. 2) I make all the students' Macs have the default umask of 0002 which would upload files with the right permissions. 3) Write a corn script to fix the file permissions every 5 to 15 minutes... (This option I think is the worst if students are working together at the same time). Is there any way that I could make all files that are uploaded via SCP have the default file permissions of 664 even though the uploaded file has a lower permission? (After hours of searching I don't think this is possible) I guess a corn script is my best option for novice users. How do web developers work together on larger sites? similar to this: http://serverfault.com/questions/283492/how-to-specify-file-permission-when-putting-a-file-using-openssh-sftp-command Also similar: http://serverfault.com/questions/395418/managing-linux-directory-permissions-sftp
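If the cron-based fallback (option 3 above) ends up being the answer, it does not have to be heavy. Here is a sketch of such a sweep, assuming the shared site lives under a single root and the group is called webdev; both are placeholders. Run it from root's crontab every few minutes:

```python
#!/usr/bin/env python
# Sketch of a periodic permission sweep: force group ownership of the
# shared web root to the webdev group and add group read/write (plus
# setgid and group-execute on directories). Paths and group name are
# assumptions; run it from root's crontab.
import grp
import os
import stat

WEBROOT = "/var/www/shared"     # placeholder: the shared site folder
GROUP = "webdev"
GID = grp.getgrnam(GROUP).gr_gid

for dirpath, dirnames, filenames in os.walk(WEBROOT):
    for name in dirnames + filenames:
        path = os.path.join(dirpath, name)
        st = os.lstat(path)
        if stat.S_ISLNK(st.st_mode):
            continue                       # leave symlinks alone
        if st.st_gid != GID:
            os.chown(path, -1, GID)        # keep the owner, fix the group
        mode = st.st_mode
        extra = stat.S_IRGRP | stat.S_IWGRP
        if stat.S_ISDIR(mode):
            extra |= stat.S_IXGRP | stat.S_ISGID   # traversable + setgid
        if (mode & extra) != extra:
            os.chmod(path, stat.S_IMODE(mode) | extra)
```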

    Read the article

  • Blank black screen with cursor after login -- RHEL5

    - by Sean O.
    I have a RHEL 5 machine here which is a Dell Precision T3500. I'm an Ubuntu guy, but I'm having a heck of a time with this machine. After processing its first security update, we cannot log in via the gdm greeter. A new kernel was installed; then I installed the nVidia drivers for our Quadro NVS 295. I know the X configuration is valid because the gdm greeter does display; however, upon login all we can get is a blank, black screen with a cursor. I thought perhaps our python installation was corrupted but a reinstall via yum has not helped. I have searched & googled extensively for a potential fix for this and can find nothing. Below are outputs from uname, a tail of an error in /var/log/messages, and the Xorg.conf. Can anyone suggest a course of action? [sean@cheetah ~]$ uname -a Linux cheetah.*.* 2.6.18-308.8.1.el5 #1 SMP Fri May 4 16:43:02 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux [sean@cheetah ~]$ sudo tail /var/log/messages Jun 5 15:03:04 cheetah gconfd (sean-4592): Resolved address "xml:readonly:/etc/gconf/gconf.xml.defaults" to a read-only configuration source at position 2 Jun 5 15:03:05 cheetah hcid[3855]: Default passkey agent (:1.8, /org/bluez/applet) registered Jun 5 15:03:05 cheetah pcscd: winscard.c:304:SCardConnect() Reader E-Gate 0 0 Not Found Jun 5 15:03:05 cheetah last message repeated 2 times Jun 5 15:03:06 cheetah gconfd (sean-4592): Resolved address "xml:readwrite:/home/sean/.gconf" to a writable configuration source at position 0 Jun 5 15:03:06 cheetah setroubleshoot: [program.ERROR] exception ImportError: /usr/lib/libatk-1.0.so.0: undefined symbol: g_assertion_message_expr Traceback (most recent call last): File "/usr/bin/sealert", line 952, in ? from setroubleshoot.gui_utils import * File "/usr/lib/python2.4/site-packages/setroubleshoot/gui_utils.py", line 26, in ? import gtk File "/usr/lib64/python2.4/site-packages/gtk-2.0/gtk/__init__.py", line 48, in ? from gtk import _gtk ImportError: /usr/lib/libatk-1.0.so.0: undefined symbol: g_assertion_message_expr Jun 5 15:03:07 cheetah setroubleshoot: [program.ERROR] exception ImportError: /usr/lib/libatk-1.0.so.0: undefined symbol: g_assertion_message_expr Traceback (most recent call last): File "/usr/bin/sealert", line 952, in ? from setroubleshoot.gui_utils import * File "/usr/lib/python2.4/site-packages/setroubleshoot/gui_utils.py", line 26, in ? import gtk File "/usr/lib64/python2.4/site-packages/gtk-2.0/gtk/__init__.py", line 48, in ? 
from gtk import _gtk ImportError: /usr/lib/libatk-1.0.so.0: undefined symbol: g_assertion_message_expr Jun 5 15:03:08 cheetah pcscd: winscard.c:304:SCardConnect() Reader E-Gate 0 0 Not Found Jun 5 15:07:01 cheetah ntpd[4114]: synchronized to 64.16.211.38, stratum 3 Jun 5 15:07:01 cheetah ntpd[4114]: kernel time sync enabled 0001 [sean@cheetah ~]$ cat /etc/X11/xorg.conf # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 295.53 ([email protected]) Sat May 12 00:34:20 PDT 2012 # Xorg configuration created by system-config-display Section "ServerLayout" Identifier "single head configuration" Screen 0 "Screen0" 0 0 InputDevice "Mouse0" "CorePointer" InputDevice "Keyboard0" "CoreKeyboard" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/input/mice" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" Identifier "Keyboard0" Driver "kbd" Option "XkbModel" "pc105" Option "XkbLayout" "us" EndSection Section "Monitor" ### Comment all HorizSync and VertSync values to use DDC: ### Comment all HorizSync and VertSync values to use DDC: Identifier "Monitor0" ModelName "LCD Panel 1600x1200" HorizSync 31.5 - 74.7 VertRefresh 56.0 - 65.0 Option "dpms" EndSection Section "Device" Identifier "Videocard0" Driver "nvidia" EndSection Section "Screen" Identifier "Screen0" Device "Videocard0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection
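Whether or not it is the cause of the blank desktop, the ImportError in the log is the most concrete lead and is worth ruling out. A diagnostic sketch you can run over ssh reproduces the same import outside of X and lists which libatk/libglib copies actually get mapped, since a mismatched pair (for example a libatk newer than the installed glib) would explain the undefined-symbol error:

```python
# A diagnostic sketch, run over ssh on the affected box: perform the
# same "import gtk" that sealert does, then list which libatk / libglib
# copies the process actually mapped, to spot a mismatched pair behind
# the "undefined symbol: g_assertion_message_expr" error in the log.
import sys
import traceback

try:
    import gtk   # the same import that fails in setroubleshoot/gui_utils.py
    print("import gtk succeeded on Python %s" % sys.version.split()[0])
except Exception:
    traceback.print_exc()

# List the atk/glib libraries mapped by this process so far.
seen = set()
with open("/proc/self/maps") as maps:
    for line in maps:
        if "libatk" in line or "libglib" in line:
            seen.add(line.split()[-1])
for path in sorted(seen):
    print(path)
```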

    Read the article

  • Google Apps e-mail being rejected from some domains

    - by Paul J. Lucas
    I'm migrating e-mail for my domains to Google Apps' e-mail. Most everything seems to work except e-mail sent to any user at (at least) sonic.net is rejected with a message of the form (where any-address has been substituted for my friend's address): From: Mail Delivery Subsystem <[email protected]> Date: March 11, 2010 10:04:48 AM PST To: [email protected] Subject: Delivery Status Notification (Failure) Delivered-To: [email protected] Received: by 10.229.194.26 with SMTP id dw26cs8717qcb; Thu, 11 Mar 2010 10:04:48 -0800 (PST) Received: by 10.223.68.143 with SMTP id v15mr3841599fai.62.1268330688325; Thu, 11 Mar 2010 10:04:48 -0800 (PST) Received: by 10.223.68.143 with SMTP id v15mr5119424fai.62; Thu, 11 Mar 2010 10:04:48 -0800 (PST) Mime-Version: 1.0 Return-Path: <> X-Failed-Recipients: [email protected] Message-Id: <[email protected]> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Delivery to the following recipient failed permanently: [email protected] Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 5.1.1 <[email protected]>... No such user here (state 13). And here are the headers from the message it bounces back: Received: by 10.101.90.7 with SMTP id s7mr2515885anl.176.1267979929490; Sun, 07 Mar 2010 08:38:49 -0800 (PST) Return-Path: <[email protected]> Received: from [10.0.1.203] (adsl-76-201-171-194.dsl.pltn13.sbcglobal.net [76.201.171.194]) by mx.google.com with ESMTPS id 4sm1046550yxd.70.2010.03.07.08.38.48 (version=TLSv1/SSLv3 cipher=RC4-MD5); Sun, 07 Mar 2010 08:38:49 -0800 (PST) From: "Paul J. Lucas" <[email protected]> Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable Subject: Some fascinating subject Date: Sun, 7 Mar 2010 08:38:46 -0800 References: <[email protected]> To: [email protected] Message-Id: <[email protected]> Mime-Version: 1.0 (Apple Message framework v1077) X-Mailer: Apple Mail (2.1077) However, I am able to send mail to a user at sonic.net using my old e-mail account. Also, my company uses Google Apps for e-mail and I can send e-mail to a user at sonic.net from my company. The differences between my personal e-mail and my company's are: My company's domain has no SPF record whereas mine does. My company's domain has an A record whereas mine does not. My SPF record initially was as prescribed by Google here. However, this guy claims Google is wrong and gives a fix. I've tried it both ways with no difference. My SPF record is currently: v=spf1 mx include:aspmx.googlemail.com include:_spf.google.com ~all As for the lack of an A record, you wouldn't think that a mail host would care about that so long as mx records are defined. However, the funny thing is that if you look at the error message, why does Google state that the recipient's domain stated that there is "No such user here" for my address? That makes no sense. Of course there is no user having my address at sonic.net. Also, I assume that I just discovered that I can't send mail to users at sonic.net by accident and that there are probably other domains I can't send e-mail to. So... anybody have any idea what's going on? And how I can get mail to users at sonic.net?
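For the DNS side of this, it is worth checking what the outside world actually sees for the domain right now, since registrar DNS changes can lag behind or differ from what you think you published. A small sketch, assuming dig is installed; substitute your own domain for the placeholder:

```python
# A small external-view DNS check, assuming `dig` is installed: print
# the MX, A and TXT records currently published for the domain, so the
# SPF string sonic.net evaluates can be compared with the one you think
# you set. The domain is a placeholder.
import subprocess

DOMAIN = "example.org"   # substitute the domain whose mail bounces

def dig(rtype, name):
    out = subprocess.check_output(["dig", "+short", rtype, name],
                                  universal_newlines=True).strip()
    return out or "(no %s record)" % rtype

for rtype in ("MX", "A", "TXT"):
    print("%s %s:\n%s\n" % (DOMAIN, rtype, dig(rtype, DOMAIN)))
```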

    Read the article

  • Apache doesn't run multiple requests

    - by Reinderien
    I'm currently running this simple Python CGI script to test rudimentary IPC: #!/usr/bin/python -u import cgi, errno, fcntl, os, os.path, sys, time print("""Content-Type: text/html; charset=utf-8 <!doctype html> <html lang="en"> <head> <meta charset="utf-8" /> <title>IPC test</title> </head> <body> """) ftempname = '/tmp/ipc-messages' master = not os.path.exists(ftempname) if master: fmode = 'w' else: fmode = 'r' print('<p>Opening file</p>') sys.stdout.flush() ftemp = open(ftempname, fmode) print('<p>File opened</p>') if master: print('<p>Operating as master</p>') sys.stdout.flush() for i in range(10): print('<p>' + str(i) + '</p>') sys.stdout.flush() time.sleep(1) ftemp.close() os.remove(ftempname) else: print('<p>Operating as a slave</p>') ftemp.close() print(""" </body> </html>""") The 'server-push' portion works; that is, for the first request, I do see piecewise updates. However, while the first request is being serviced, subsequent requests are not started, only to be started after the first request has finished. Any ideas on why, and how to fix it? Edit: I see the same non-concurrent behaviour with vanilla PHP, running this: <!doctype html> <html lang="en"> <!-- $Id: $--> <head> <meta charset="utf-8" /> <title>IPC test</title> </head> <body> <p> <?php function echofl($str) { echo $str . "</b>\n"; ob_flush(); flush(); } define('tempfn', '/tmp/emailsync'); if (file_exists(tempfn)) $perms = 'r+'; else $perms = 'w'; assert($fsync = fopen(tempfn, $perms)); assert(chmod(tempfn, 0600)); if (!flock($fsync, LOCK_EX | LOCK_NB, $wouldblock)) { assert($wouldblock); $master = false; } else $master = true; if ($master) { echofl('Running as master.'); assert(fwrite($fsync, 'content') != false); assert(sleep(5) == 0); assert(flock($fsync, LOCK_UN)); } else { echofl('Running as slave.'); echofl(fgets($fsync)); } assert(fclose($fsync)); echofl('Done.'); ?> </p> </body> </html>
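Before digging into Apache's MPM settings, it is worth ruling out the client: browsers frequently serialize two requests to the identical URL, which looks exactly like the server refusing to run them concurrently. Here is a small sketch that fetches the same CGI URL from two threads and reports when the first bytes arrive (the URL is a placeholder):

```python
# Client-side check, independent of Apache: fetch the same CGI URL from
# two threads at once and print how long each waits for its first bytes.
# Browsers often serialize requests to an identical URL, so this rules
# the client in or out before blaming the server. URL is a placeholder.
import threading
import time

try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

URL = "http://localhost/cgi-bin/ipc-test.py"   # placeholder path

def fetch(tag):
    start = time.time()
    resp = urlopen(URL)
    resp.read(64)                        # wait for the first chunk only
    print("%s: first bytes after %.2fs" % (tag, time.time() - start))
    resp.read()                          # drain the rest

threads = [threading.Thread(target=fetch, args=("req-%d" % i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If both threads report their first bytes almost immediately while one of them keeps streaming for ten seconds, Apache is handling the requests in parallel and the serialization is happening in the browser.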

    Read the article

  • Using Upstart to manage Unicorn w/ rbenv + bundler binstubs w/ ruby-local-exec shebang

    - by codykrieger
    Alright, this is melting my brain. It might have something to do with the fact that I don't understand Upstart as well as I should. Sorry in advance for the long question. I'm trying to use Upstart to manage a Rails app's Unicorn master process. Here is my current /etc/init/app.conf: description "app" start on runlevel [2] stop on runlevel [016] console owner # expect daemon script APP_ROOT=/home/deploy/app PATH=/home/deploy/.rbenv/shims:/home/deploy/.rbenv/bin:$PATH $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production # >> /tmp/upstart.log 2>&1 end script # respawn That works just fine - the Unicorns start up great. What's not great is that the PID detected is not of the Unicorn master, it's of an sh process. That in and of itself isn't so bad, either - if I wasn't using the automagical Unicorn zero-downtime deployment strategy. Because shortly after I send -USR2 to my Unicorn master, a new master spawns up, and the old one dies...and so does the sh process. So Upstart thinks my job has died, and I can no longer restart it with restart or stop it with stop if I want. I've played around with the config file, trying to add -D to the Unicorn line (like this: $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production -D) to daemonize Unicorn, and I added the expect daemon line, but that didn't work either. I've tried expect fork as well. Various combinations of all of those things can cause start and stop to hang, and then Upstart gets really confused about the state of the job. Then I have to restart the machine to fix it. I think Upstart is having problems detecting when/if Unicorn is forking because I'm using rbenv + the ruby-local-exec shebang in my $APP_ROOT/bin/unicorn script. Here it is: #!/usr/bin/env ruby-local-exec # # This file was generated by Bundler. # # The application 'unicorn' is installed as part of a gem, and # this file is here to facilitate running it. # require 'pathname' ENV['BUNDLE_GEMFILE'] ||= File.expand_path("../../Gemfile", Pathname.new(__FILE__).realpath) require 'rubygems' require 'bundler/setup' load Gem.bin_path('unicorn', 'unicorn') Additionally, the ruby-local-exec script looks like this: #!/usr/bin/env bash # # `ruby-local-exec` is a drop-in replacement for the standard Ruby # shebang line: # # #!/usr/bin/env ruby-local-exec # # Use it for scripts inside a project with an `.rbenv-version` # file. When you run the scripts, they'll use the project-specified # Ruby version, regardless of what directory they're run from. Useful # for e.g. running project tasks in cron scripts without needing to # `cd` into the project first. set -e export RBENV_DIR="${1%/*}" exec ruby "$@" So there's an exec in there that I'm worried about. It fires up a Ruby process, which fires up Unicorn, which may or may not daemonize itself, which all happens from an sh process in the first place...which makes me seriously doubt the ability of Upstart to track all of this nonsense. Is what I'm trying to do even possible? From what I understand, the expect stanza in Upstart can only be told (via daemon or fork) to expect a maximum of two forks.
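Whatever expect stanza you end up with, it helps to verify directly whether Upstart's notion of the job still matches the live Unicorn master, especially after a -USR2 re-exec. A monitoring-style sketch; the job name and pid file path are assumptions, so point them at your app and at the pid file from config/unicorn.rb:

```python
# Sketch of the underlying mismatch: compare the PID Upstart tracks for
# the job with the PID in Unicorn's pid file. After a USR2 re-exec they
# diverge, which is exactly when stop/restart break. The job name and
# pid file path are assumptions - point them at your app.
import re
import subprocess

JOB = "app"
PIDFILE = "/home/deploy/app/tmp/pids/unicorn.pid"   # placeholder

status = subprocess.check_output(["initctl", "status", JOB],
                                 universal_newlines=True)
m = re.search(r"process (\d+)", status)
upstart_pid = int(m.group(1)) if m else None

with open(PIDFILE) as fh:
    unicorn_pid = int(fh.read().strip())

print("Upstart tracks :", upstart_pid)
print("Unicorn master :", unicorn_pid)
if upstart_pid != unicorn_pid:
    print("MISMATCH - Upstart will mishandle stop/restart for this job")
```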

    Read the article

  • Consistent PHP _SERVER variables between Apache and nginx?

    - by Alix Axel
    I'm not sure if this should be asked here or on ServerFault, but here it goes... I am trying to get started on nginx with PHP-FPM, but I noticed that the server block setup I currently have (gathered from several guides including the nginx Pitfalls wiki page) produces $_SERVER variables that are different from what I'm used to seeing in Apache setups. After spending the last evening trying to "fix" this, I decided to install Apache on my local computer and gather the variables that I'm interested in under different conditions so that I could try and mimic them on nginx. The Apache setup I've on my computer has only one mod_rewrite rule: RewriteEngine On RewriteCond %{SCRIPT_FILENAME} !-f RewriteCond %{SCRIPT_FILENAME} !-d RewriteRule ^(.*)$ /index.php/$1 [L] And these are the values I get for different request URIs (left is Apache, right is nginx): localhost/ - http://www.mergely.com/GnzBHRV1/ localhost/foo/bar/baz/?foo=bar - http://www.mergely.com/VwsT8oTf/ localhost/index.php/foo/bar/baz/?foo=bar - http://www.mergely.com/VGEFehfT/ What configuration directives would allow me to get similar values on requests handled by nginx? My current configuration in nginx is: server { listen 80; listen 443 ssl; server_name default; ssl_certificate /etc/nginx/certificates/dummy.crt; ssl_certificate_key /etc/nginx/certificates/dummy.key; root /var/www/default/html; index index.php index.html; autoindex on; location / { try_files $uri $uri/ /index.php; } location ~ /(?:favicon[.]ico|robots[.]txt)$ { log_not_found off; } location ~* [.]php { #try_files $uri =404; include fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_split_path_info ^(.+[.]php)(/.+)$; } location ~* [.]ht { deny all; } } And my fastcgi_params file looks like this: fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $https; I know that the try_files $uri =404; directive is commented and that it is a security vulnerability but, if I uncomment it, the third request (localhost/index.php/foo/bar/baz/?foo=bar) will return a 404. It's also worth noting that my PHP cgi.fix_pathinfo in On (contrary to what some of the guides recommend), if I try to set it to Off, I'm presented with a "Access denied." message on every PHP request. I'm running PHP 5.4.8 and nginx/1.1.19. I don't know what else to try... Help?
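One practical way to chip away at the differences is to regenerate the comparison on demand while adjusting fastcgi_params. Assuming you drop a one-line probe script (<?php echo json_encode($_SERVER);) on both the Apache and the nginx vhost, a sketch like this fetches it through each server with the same request URI and prints only the keys that differ; the hostnames and probe path are placeholders:

```python
# Comparison harness, assuming a probe script that echoes
# json_encode($_SERVER) exists on both servers: fetch it through Apache
# and through nginx with the same request URI and print only the
# $_SERVER keys whose values differ. Hostnames and path are placeholders.
import json

try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

APACHE = "http://apache.local/probe.php/foo/bar/baz/?foo=bar"
NGINX = "http://nginx.local/probe.php/foo/bar/baz/?foo=bar"
KEYS = ("SCRIPT_NAME", "SCRIPT_FILENAME", "PATH_INFO", "PATH_TRANSLATED",
        "REQUEST_URI", "DOCUMENT_URI", "QUERY_STRING", "PHP_SELF")

def fetch(url):
    return json.loads(urlopen(url).read().decode("utf-8"))

a, n = fetch(APACHE), fetch(NGINX)
for key in KEYS:
    if a.get(key) != n.get(key):
        print("%-16s apache=%r nginx=%r" % (key, a.get(key), n.get(key)))
```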

    Read the article

  • How should I ask for help in getting my emails to stop bouncing?

    - by Gregg Williams
    For several months, people have been telling me that emails they sent to me have been bouncing back, marked as undeliverable. The bounce message would contain portions like this: Final-Recipient: rfc822;[email protected] Action: failed Status: 5.7.1 Diagnostic-Code: smtp;550 5.7.1 <[email protected]>... Recipient declines email from 69.64.159.2, <spamhaus-xbl>, Ref: http://www.spamhaus.org/query/bl?ip=69.64.159.2 Clicking the link on the last line, the destination page told me that "this IP address is infected with/emitting spamware/spamtrojan traffic and needs to be fixed." I could temporarily de-list this node by clicking a link on that page, but it would get back on the list and more emails to me to bounce. I own a domain, innerpaths.net, and I normally use [email protected] for my email. I have my domain registrar, namecheap.com, forward all email from innerpaths.net to the email account [email protected]. (BTW, I had this same problem at a former registrar. I changed registrars, hoping that would fix the problem. It didn't.) Trying to isolate the problem, I asked namecheap.com what I should do. Their answer, though substantial, left me scratching my head: We have received feedback from our upstream provider which informed us that the mail server that you are trying to email subscribes to a 3rd party blacklist service which they appear to be listed on at the present time and is causing destination mail server to reject the messages. Being blocked with one of these services can happen to anyone for many reasons and is something that is beyond our control. 3rd party blacklist services require companies whose mail servers they have blacklisted, pay fees in order to be removed from their lists. As we cannot pay fees to blacklist services which require them for removal, you should contact your email provider and have them whitelist our mail server IP address: 69.64.157.73. My best guess is that I should email my ISP, sonic.net, tell them what is going on and ask them to whitelist the IP address 69.64.157.73. (If not, please let me know.) But I want to know what is going on and how email works. I understand that there's a device at location 69.64.159.2 that is doing something bad that causes the "destination mail server [sonic.net's, I assume --gw] to reject the messages." I know that email is sent through multiple devices in a way that eventually gets it to its destination. Beyond that, here are my questions: 1) I thought the Internet "routed around damage." Why does email starting at namecheap.com always (or is it 'sometimes'?) go through 69.64.159.2? 2) Who is the "upstream provider" that the namecheap.com representative mentions, and what is their role? 3) How does having sonic.net's whitelisting namecheap.com's mail server prevent my email being bounced by 69.64.159.2? I've searched the Internet for answers but have found nothing useful. Thanks for whatever answers you can provide.
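The DNSBL part is easy to check yourself, because the lookup the receiving server performs is just a DNS query with the IP's octets reversed. A small sketch using only the standard library; the two IPs are the ones from the bounce and from the registrar's reply, and the zone list is an assumption:

```python
# DNSBL spot check: reverse the octets of each relay IP and resolve it
# against a couple of blacklist zones - the same lookup the receiving
# mail server performs before bouncing. The zone list is an assumption.
import socket

IPS = ["69.64.159.2", "69.64.157.73"]
ZONES = ["zen.spamhaus.org", "bl.spamcop.net"]

for ip in IPS:
    reversed_ip = ".".join(reversed(ip.split(".")))
    for zone in ZONES:
        query = "%s.%s" % (reversed_ip, zone)
        try:
            answer = socket.gethostbyname(query)
            print("%s listed in %s (%s)" % (ip, zone, answer))
        except socket.gaierror:
            print("%s not listed in %s" % (ip, zone))
```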

    Read the article

  • Unmountable boot volume blue screen, what should I do?

    - by Josh
    I was trying to install an update from NVIDIA for my GTX 560, but while it was installing, my computer shut off. After a few minutes, I turned it back on. It got to the Windows boot screen and then had a blue screen error and if left on it would just keep doing that. A few details about my PC: I haven't added any new hardware or software, I'm running Windows XP Professional 32 bit and Windows XP Professional 64 bit on the same hard drive for about 2 years now. I have 2 other hard drives also, but I don't have one large enough to hold everything from my main hard drive, so formatting isn't an option. Now, as for what I've done so far: I've scanned the RAM with "memtest - 86 v3.4" and it said that it was good. I scanned the hard drive in question with chkdsk /r and it gets to 50% and tells me something along the lines of "the drive has one or more unrepairable problems". I also tried to use chkdsk on the drive I installed the new copy of Windows XP on and it got to 75% then jumped back down to 50% and stayed there (I had to reboot the pc). So, after that, I turned off auto reboot and got to read the blue screen error code and I looked it up only to find that nobody seems to have this problem, just problems close to it. The error code is 0x000000ed and I've seen a lot of these online but none that matched the detailed part of the code UNMOUNTABLE_BOOT_VOLUME 0x000000ed (0xfffffadf513c19a0, 0xffffffffc0000006, 0, 0) So, I have installed another copy of Windows XP Professional 32 bit on one of my other hard drives in hopes of accessing the data on the drive in question and when it booted it asked if I wanted chkdsk to scan the drive in question and this is what it found: file record segments 12740, 12741, 12742 and 12743 were reported unreadable. Then it says "recovering lost files" but it sits there for a few seconds and then just boots to Windows. I can't access the drive in question from Windows as far as I can tell, it just says "drive not accessible" and when I go to properties it says that the drive has 100% free space. So, after that failed I didn't give up, I looked for another way to access the drive in question. I used a Ubuntu bootable disk and was able to access the drive in question without any problems. However, I can't access the registry editor because it's a .exe file and that won't load from Ubuntu. I made a copy of the "Windows" folder and put it on one of my other drives and that's where I'm stuck at now. I'm sure my drive works fine, I know chkdsk can't fix the problem with it and I know what caused the problem in the first place for the most part, but I don't know what to do about it. I have a laptop that I can use to download and burn disks if needed and I also have the other copy of Windows XP Professional 32 bit that I can use that's installed on the computer in question (so I know it's not a hardware issue). I'm pretty sure it's a driver issue or the update was editing the registry when it shut off and left me when a broken registry. I've tried accessing C:\Windows\System32\CONFIG only to find that the Windows XP disk repair option can't even access the files on the drive in question. It seems I'll need to be able to do everything from Ubuntu unless there is something I haven't tried with the Windows XP disk. I didn't install the update on Windows XP 64 bit but yet it also has the same blue screen error (that's where the error code above came from but I haven't checked to see if they are the same). 
They both stopped working at the same time, so I assume it's one problem causing both to not work.

    Read the article

  • Python in command line runs the wrong version?

    - by Deflect
    I have several versions of Python installed on a Windows 7 computer. I want to run Python 2.7 by default, but for whatever reason, typing python in the command line runs Python version 2.4.5. I've tried adding C:\Python27 to my system path variable as per this question, and manually combed my path variable it to make sure Python 2.4.5 wasn't tossed in there by mistake, but that didn't fix the issue. I have to type in C:\Python27\python.exe every time I want to access the correct version of python I want. What other places can I check? How can I make the command line use the correct version of python? I also found this but it's not for windows. [EDIT] My path (separated by semicolons): C:\Program Files\Common Files\Microsoft Shared\Windows Live; C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live; C:\Windows\system32; C:\Windows; C:\Windows\System32\Wbem; C:\Windows\System32\WindowsPowerShell\v1.0\; C:\Program Files\Dell\DW WLAN Card\Driver; C:\Program Files (x86)\Common Files\Roxio Shared\DLLShared\; C:\Program Files (x86)\Windows Live\Shared; c:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\; c:\Program Files\Microsoft SQL Server\100\Tools\Binn\; c:\Program Files\Microsoft SQL Server\100\DTS\Binn\; C:\Program Files\TortoiseGit\bin; C:\Program Files\Java\jdk1.6.0_26\bin; C:\Program Files\Java\jdk1.6.0_21 ; C:\Program Files\IVI Foundation\VISA\Win64\Bin\; C:\Program Files (x86)\IVI Foundation\VISA\WinNT\Bin\; C:\Program Files (x86)\IVI Foundation\VISA\WinNT\Bin; C:\Program Files\WPIJavaCV\OpenCV_2.2.0\bin; C:\Program Files (x86)\LilyPond\usr\bin; C:\Program Files\TortoiseSVN\bin; C:\Program Files (x86)\doxygen\bin; C:\Program Files (x86)\Graphviz 2.28\bin; C:\Users\Michael\bin\Misc\cppcheck\; C:\Program Files (x86)\Git\cmd; C:\Python27\python.exe; C:\Ruby192\bin; C:\Users\Michael\AppData\Roaming\cabal\bin; C:\Python27\; [EDIT 2] Running python spews this out: 'import site' failed; used -v for traceback Python 2.4.5 (#1, Jul 22 2011, 02:01:04) [GCC 4.1.1] on mingw32 Type "help", "copyright", "credits" or "license" for more information. >>> ...and running python --version (as suggested below) seems to be an unrecognized option. (I also tried running python -v, and it appears that Python 2.4 is trying to import libraries from C:\Python27\Lib, and failed due to a syntax error when it encountered a with statement, which was added in later version, I think) Also, I'm not sure if it's significant or not, but the above python version says something about GCC and mingw32, while running C:\python27\python.exe shows this: Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>>>
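Two things stand out in the PATH above: C:\Python27\python.exe is listed as if it were a directory (PATH entries must be directories, so that entry can never match), and C:\Python27\ only appears at the very end, after everything else. A short sketch that walks PATH in order and reports every entry that would satisfy a bare python command; run it explicitly with C:\Python27\python.exe so the report itself comes from a known interpreter:

```python
# A quick "which python" for Windows: walk PATH in order and report
# every entry that could satisfy the bare `python` command, so you can
# see which directory is shadowing C:\Python27.
import os

NAMES = ("python.exe", "python.bat", "python.cmd")

for i, entry in enumerate(os.environ.get("PATH", "").split(os.pathsep)):
    entry = entry.strip().strip('"')
    if not entry:
        continue
    if entry.lower().endswith(".exe"):
        print("%2d. %s  <- a file, not a directory; never matched" % (i, entry))
        continue
    found = [n for n in NAMES if os.path.isfile(os.path.join(entry, n))]
    if found:
        print("%2d. %s -> %s" % (i, entry, ", ".join(found)))
```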

    Read the article

  • BSOD & System Failure after trying to install a new RAM

    - by Praveen Kumar
    I have updated the question with sections, so that people won't find it difficult to read. Basic System Information Let me give a basic introduction on my system. I have a system of following configuration: Processor: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz 3.40GHz RAM: Corsair Vengeance - 4GB Single Module DDR3 Memory Kit (CMZ4GX3M1A1600C9) x 2 OS: Windows 7 Ultimate, SP1 Build 7601 HDD: 1 TB Seagate 7200 RPM The Problem It was working fine for about an year. Yesterday I planned to increase my RAM to 16 GB by putting another set of two Corsair Vengeance - 4GB Single Module DDR3 Memory Kit (CMZ4GX3M1A1600C9). I got it from an authorized reseller and also, the RAM was fitted by a service engineer only. After the RAM was fit (all the four), the system failed to start, with an error code of 0x000000f4. The complete information of it is: Problem signature: Problem Event Name: BlueScreen OS Version: 6.1.7601.2.1.0.256.1 Locale ID: 16393 Additional information about the problem: BCCode: f4 BCP1: 0000000000000003 BCP2: FFFFFA8008A39060 BCP3: FFFFFA8008A39340 BCP4: FFFFF800037C8510 OS Version: 6_1_7601 Service Pack: 1_0 Product: 256_1 Files that help describe the problem: C:\Windows\Minidump\093012-13041-01.dmp C:\Users\Praveen Kumar\AppData\Local\Temp\WER-30716-0.sysdata.xml Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409 If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-US\erofflps.txt Another Problem We first thought that it was the RAM, which caused the issue. So I returned the RAMs and now my computer configuration is exactly how it was the previous day. But, following the removal of the RAM, I also had several crashes after that. One suspicious thing was with an error code c0000134: STOP: c0000135 The program can’t start because %hs is missing from your computer . Try resintalling the program to fix this problem. After reading contents from this, this and this, which were never my case, they didn't help me. But I didn't receive any more STOP c0000134 messages. But this 0x000000f4 keeps on coming. I am writing from the same system and it allows me to work for say, half an hour max. Then I hear a device disconnect sound, the one you hear in Windows 7, when a USB Mass Storage Device is plugged out. Immediately following that, my screen goes blank and I get 0x000000f4 blue screen. Okay, now I am really concerned about my Hard Disk data, but I have no clue if there is a problem with the HDD. My Question What all files do I need to submit for your reference? Can this issue be fixed? I am getting more time if I remove my RAM, clean it and then put it back. Weird! Hope I have given the necessary information to help you guys. Thanks in advance. Minidumps I have uploaded all the Minidump DMP files from C:\Windows\Minidump folder here: http://www.praveen-kumar.com/Minidumps.zip Let me know if you face any issues in accessing it. Will be able to share elsewhere. Updates 30-Sep-2012 10:15 AM IST: When I keep the system cover opened, pressed the HDD Cable well, it is allowing me to be on for about half an hour, I guess? Also, I feel that the CPU fan speed is kind of slow. It rotates at around 900 RPM, but the CPU Temperature is not more than 70° C. 30-Sep-2012 10:30 AM IST: My Modem (Beetel 220BX ADSL2+ Router) failed. I have no idea how it is related to this issue, but I thought that I need to document this too. I really have a bad day here. 
30-Sep-2012 11:00 AM IST: The system is still running fine, with the cabinet cover open, now for about an hour. 30-Sep-2012 12:00 PM IST: I shut down the system and closed the cabinet. When I started it again, it hung after I entered the password, and after a few minutes I got the same 0x000000f4 error. So, with the cabinet in the upright position, I re-seated the hard disk cable and now it boots fine. Waiting for more observations and answers.

    Read the article

  • Making sense of S.M.A.R.T

    - by James
    First of all, I think everyone knows that hard drives fail a lot more than the manufacturers would like to admit. Google did a study that indicates that certain raw data attributes that the S.M.A.R.T status of hard drives reports can have a strong correlation with the future failure of the drive. We find, for example, that after their first scan error, drives are 39 times more likely to fail within 60 days than drives with no such errors. First errors in re- allocations, offline reallocations, and probational counts are also strongly correlated to higher failure probabil- ities. Despite those strong correlations, we find that failure prediction models based on SMART parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed drives have shown no SMART error signals whatsoever. Seagate seems like it is trying to obscure this information about their drives by claiming that only their software can accurately determine the accurate status of their drive and by the way their software will not tell you the raw data values for the S.M.A.R.T attributes. Western digital has made no such claim to my knowledge but their status reporting tool does not appear to report raw data values either. I've been using HDtune and smartctl from smartmontools in order to gather the raw data values for each attribute. I've found that indeed... I am comparing apples to oranges when it comes to certain attributes. I've found for example that most Seagate drives will report that they have many millions of read errors while western digital 99% of the time shows 0 for read errors. I've also found that Seagate will report many millions of seek errors while Western Digital always seems to report 0. Now for my question. How do I normalize this data? Is Seagate producing millions of errors while Western digital is producing none? Wikipedia's article on S.M.A.R.T status says that manufacturers have different ways of reporting this data. Here is my hypothesis: I think I found a way to normalize (is that the right term?) the data. Seagate drives have an additional attribute that Western Digital drives do not have (Hardware ECC Recovered). When you subtract the Read error count from the ECC Recovered count, you'll probably end up with 0. This seems to be equivalent to Western Digitals reported "Read Error" count. This means that Western Digital only reports read errors that it cannot correct while Seagate counts up all read errors and tells you how many of those it was able to fix. I had a Seagate drive where the ECC Recovered count was less than the Read error count and I noticed that many of my files were becoming corrupt. This is how I came up with my hypothesis. The millions of seek errors that Seagate produces are still a mystery to me. Please confirm or correct my hypothesis if you have additional information. 
Here is the smart status of my western digital drive just so you can see what I'm talking about: james@ubuntu:~$ sudo smartctl -a /dev/sda smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Device Model: WDC WD1001FALS-00E3A0 Serial Number: WD-WCATR0258512 Firmware Version: 05.01D05 User Capacity: 1,000,204,886,016 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Thu Jun 10 19:52:28 2010 PDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0027 179 175 021 Pre-fail Always - 4033 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 270 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0 9 Power_On_Hours 0x0032 098 098 000 Old_age Always - 1468 10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 262 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 46 193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 223 194 Temperature_Celsius 0x0022 105 102 000 Old_age Always - 42 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0
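The hypothesis is easy to put numbers on with smartmontools. A sketch, assuming smartctl is installed and that attribute 1 (Raw_Read_Error_Rate) and attribute 195 (Hardware_ECC_Recovered, on the Seagate drives) carry the raw counts being compared; it prints the difference, which under the hypothesis should line up with what a Western Digital drive reports as its plain read-error count:

```python
# Putting numbers on the hypothesis, assuming smartmontools is installed:
# read the raw values for attribute 1 (Raw_Read_Error_Rate) and 195
# (Hardware_ECC_Recovered) and print their difference, i.e. the errors
# the drive could NOT transparently fix, for comparison with a WD drive.
import re
import subprocess

DEVICE = "/dev/sda"   # run once per drive

out = subprocess.check_output(["smartctl", "-A", DEVICE],
                              universal_newlines=True)

raw = {}
for line in out.splitlines():
    m = re.match(r"\s*(\d+)\s+\S+\s+.*?(\d+)\s*$", line)
    if m:
        raw[int(m.group(1))] = int(m.group(2))

read_errors = raw.get(1, 0)
ecc_recovered = raw.get(195, 0)
print("raw read errors        :", read_errors)
print("hardware ECC recovered :", ecc_recovered)
print("uncorrected (approx.)  :", read_errors - ecc_recovered)
```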

    Read the article

  • Apache reverse proxy POST 403

    - by qkslvrwolf
    I am trying to get Jira and Stash to talk to each other via a Trusted Application link. The setup, currently, looks like this: Jira - http - Jira Proxy -https- stash proxy -http- stash. Jira and the Jira proxy are on the same machine. The Jira Proxy is showing 403 Forbidden for POST requests from the stash server. It works (or seems to ) for everything else. I contend that since we're seeing 403 forbiddens in the access log for apache, Jira is never seeing the request. Why is apache forbidding posts,and how do I fix it? Note that the IPs for both Stash and the Stash Proxy are in the "trusted host" section. My config: LogLevel info CustomLog "|/usr/sbin/rotatelogs /var/log/apache2/access.log 86400" common ServerSignature off ServerTokens prod Listen 8443 <VirtualHost *:443> ServerName jira.company.com SSLEngine on SSLOptions +StrictRequire SSLCertificateFile /etc/ssl/certs/server.cer SSLCertificateKeyFile /etc/ssl/private/server.key SSLProtocol +SSLv3 +TLSv1 SSLCipherSuite DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:AES256-SHA:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA:AES128-SHA:EDH-RSA-DES-CBC3-SHA:EDH-DSS-DES-CBC3-SHA:DES-CBC3-SHA # If context path is not "/wiki", then send to /jira. RedirectMatch 301 ^/$ https://jira.company.com/jira RedirectMatch 301 ^/gsd(.*)$ https://jira.company.com/jira$1 ProxyRequests On ProxyPreserveHost On ProxyVia On ProxyPass /jira http://localhost:8080/jira ProxyPassReverse /jira http://localhost:8080/jira <Proxy *> Order deny,allow Allow from all </Proxy> RewriteEngine on RewriteLog "/var/log/apache2/rewrite.log" RewriteLogLevel 2 # Disable TRACE/TRACK requests, per security. RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK) RewriteRule .* - [F] DocumentRoot /var/www DirectoryIndex index.html <Directory /var/www> Options FollowSymLinks AllowOverride None Order deny,allow Allow from all </Directory> <LocationMatch "/"> Order deny,allow Deny from all allow from x.x.71.8 allow from x.x.8.123 allow from x.x.120.179 allow from x.x.120.73 allow from x.x.120.45 satisfy any SetEnvif Remote_Addr "x.x.71.8" TRUSTED_HOST SetEnvif Remote_Addr "x.x.8.123" TRUSTED_HOST SetEnvif Remote_Addr "x.x.120.179" TRUSTED_HOST SetEnvif Remote_Addr "x.x.120.73" TRUSTED_HOST SetEnvif Remote_Addr "x.x.120.45" TRUSTED_HOST </LocationMatch> <LocationMatch ^> SSLRequireSSL AuthType CompanyNet PubcookieInactiveExpire -1 PubcookieAppID jira.company.com require valid-user RequestHeader set userid %{REMOTE_USER}s </LocationMatch> </VirtualHost> # Port open for SSL, non-pubcookie access. Used to access APIs with Basic Auth. <VirtualHost *:8443> SSLEngine on SSLOptions +StrictRequire SSLCertificateFile /etc/ssl/certs/server.cer SSLCertificateKeyFile /etc/ssl/private/server.key SSLProtocol +SSLv3 +TLSv1 SSLCipherSuite DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:AES256-SHA:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA:AES128-SHA:EDH-RSA-DES-CBC3-SHA:EDH-DSS-DES-CBC3-SHA:DES-CBC3-SHA ProxyRequests On ProxyPreserveHost On ProxyVia On ProxyPass /jira http://localhost:8080/jira ProxyPassReverse /jira http://localhost:8080/jira <Proxy *> Order deny,allow Allow from all </Proxy> RewriteEngine on RewriteLog "/var/log/apache2/rewrite.log" RewriteLogLevel 2 # Disable TRACE/TRACK requests, per security. 
RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK) RewriteRule .* - [F] DocumentRoot /var/www DirectoryIndex index.html <Directory /var/www> Options FollowSymLinks AllowOverride None Order deny,allow Allow from all </Directory> </VirtualHost> <VirtualHost jira.company.com:80> ServerName jira.company.com RedirectMatch 301 /(.*)$ https://jira.company.com/$1 RewriteEngine on RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK) RewriteRule .* - [F] </VirtualHost> <VirtualHost *:80> ServerName go.company.com RedirectMatch 301 /(.*)$ https://jira.company.com/$1 RewriteEngine on RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK) RewriteRule .* - [F] </VirtualHost>
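To see which layer returns the 403 (the pubcookie-protected LocationMatch on :443 versus Jira itself), a probe run from the Stash host that sends the same URL as GET and as POST and prints the status plus a few response headers can help. This is a sketch only; the URL is a placeholder and certificate verification is disabled solely because of the internal certificate:

```python
# Probe from the Stash host: send the same URL as GET and as POST and
# print what comes back, to tell whether the 403 carries Apache's or
# Jira's response headers. URL is a placeholder; TLS verification is
# disabled only because of the internal certificate.
import ssl

try:
    from urllib.request import Request, urlopen
    from urllib.error import HTTPError
except ImportError:                                # Python 2
    from urllib2 import Request, urlopen, HTTPError

URL = "https://jira.company.com/jira/"             # placeholder
CTX = ssl._create_unverified_context()

for method, body in (("GET", None), ("POST", b"x=1")):
    try:
        resp = urlopen(Request(URL, data=body), context=CTX)
    except HTTPError as err:
        resp = err
    print("%-4s -> %s" % (method, resp.getcode()))
    for header in ("Server", "Content-Type", "WWW-Authenticate"):
        print("       %s: %s" % (header, resp.headers.get(header)))
```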

    Read the article

  • DNS lookups failing somewhere between firewall and router

    - by TessellatingHeckler
    we have a setup of ADSL line - Cisco 837 ADSL router - Zyxel ZyWall 35 firewall/NAT - Switch == Intel load balanced NICS in a server. It has been fine for years, suddenly DNS resolution stopped working on the server. No changes that I know of, so I can't work backwards from there. It was configured with the ISP's DNS servers, neither network device does DNS relaying. Wireshark shows the request go out but nothing comes back. The server networking stack seems OK though, because if we query an internal DNS server on a remote site, that works. I can logon to the Cisco, and DNS resolves OK from the command line. I can logon to the ZyWall, and DNS does not resolve from the command line. So the problem seems to be the firewall, patch cable or router, yes? On the router: interface Ethernet0 ip address aaa.bbb.ccc.ddd 255.255.255.ddd ip tcp adjust-mss 1450 hold-queue 100 out On the firewall: DNS server set to 8.8.8.8 (Google's), DNS traffic allowed LAN-WAN. What else should I look for? Update: Following This guide I've got traffic logging on the Cisco. I have also got access to a public DNS server which I can run tcpdump on to see things from the other side. And as per the below comments, I've tested with Dig and see that DNS over TCP works, and over UDP does not. Currently: DNS request from the server using TCP shows up in the firewall log, and in the Cisco log, and in tcpdump on the DNS server, the answer comes back, it works fine. DNS request from the server using UDP shows up in the firewall log, and in the Cisco log, does NOT show in tcpdump on the DNS server, times out. DNS request from the cisco (using UDP) does show up in tcpdump on the DNS server, answer received, works fine. Ping requests from the server and the cisco to the DNS server show up in tcpdump on the DNS server. DNS request from the server using UDP does show up on the firewall. Summary: TCP seems fine throughought. UDP works over the ADSL and to the Cisco, and it works from the server to the Cisco, but it doesn't cross the Cisco properly, it seems. I did see the Cisco showing as connected at 10Mb/full-duplex internally, and the firewall showing as 100Mb/full-duplex externally. I have forced the firewall to 10Mb and rebooted both devices. That seemed to help get UDP traffic (server-firewall-cisco) instead of (server-firewall), but did not fix it. Update: Sanitized Cisco config: version 12.2 no service pad service timestamps debug datetime msec service timestamps log datetime msec service password-encryption ! hostname cisco ! logging queue-limit 100 enable secret 5 {password} enable password 7 {password} ! ip subnet-zero ip domain name example.org ip name-server {nameserver_IP} ! ! ip audit notify log ip audit po max-events 100 no ftp-server write-enable ! interface Ethernet0 ip address {Inside_public_IP} 255.255.255.248 ip tcp adjust-mss 1460 hold-queue 100 out ! interface ATM0 no ip address no atm ilmi-keepalive pvc 0/38 encapsulation aal5mux ppp dialer dialer pool-member 1 ! dsl operating-mode auto ! interface Dialer1 ip unnumbered Ethernet0 encapsulation ppp dialer pool 1 dialer idle-timeout 0 dialer persistent no cdp enable ppp chap hostname {ADSL_Username} ppp chap password 7 {ADSL_Password} ! ip classless ip route 0.0.0.0 0.0.0.0 Dialer1 no ip http server no ip http secure-server ! access-list 23 permit {IP} dialer-list 1 protocol ip permit no cdp run snmp-server enable traps tty ! {con, vty} end
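
    Given that TCP lookups survive the whole path and UDP ones die between the firewall and the Cisco, two usual suspects are a UDP/DNS inspection feature on one of the devices and fragmentation of larger EDNS0 answers. A hedged way to separate those cases, assuming the tests run from the server or any Linux host on the LAN (the interface name is a placeholder):

        # Small answer that fits in a single UDP datagram
        dig @8.8.8.8 example.org A +notcp +bufsize=512 +time=3 +tries=1

        # Larger advertised buffer; a big answer may come back as IP fragments
        dig @8.8.8.8 example.org ANY +notcp +bufsize=4096 +time=3 +tries=1

        # Is it DNS-specific, or is all outbound UDP affected? NTP is an easy second test
        # (only if ntpdate is installed)
        ntpdate -q pool.ntp.org

        # Capture on the server while testing, to confirm whether any reply arrives at all
        sudo tcpdump -ni eth0 'udp port 53 or udp port 123'

    If only the large-buffer query fails, fragment handling or an MTU mismatch on the Ethernet0/Dialer1 side is the more likely cause; if both fail, look at UDP session handling or an ALG on the ZyWall and at any inspection configured on the 837.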

    Read the article

  • bind9 DNS Ubuntu names pingable on server, but not on Windows Machines?

    - by leeand00
    I setup a DNS server today on Ubuntu, following this tutorial. My intent was to setup my network for dns-name resolving on the private LAN within a single zone (nothing fancy I just want name resolution). I've tested the setup on the DNS server machine itself, and I can ping all the machines listed in the configuration file. I've also configured the Windows Machines on my network, and for some reason they are incapable of pinging by names as was possible on the DNS Server itself. I've tried running nslookup on the Windows DNS clients and I receive and error mentioning the address of the DNS server. DNS forwarding works fine, I'm not having any trouble accessing the internet, the problem only lies within accessing names within the private LAN. Here are my configuration files: options { directory "/var/cache/bind"; // If there is a firewall between you and nameservers you want // to talk to, you may need to fix the firewall to allow multiple // ports to talk. See http://www.kb.cert.org/vuls/id/800113 // If your ISP provided one or more IP addresses for stable // nameservers, you probably want to use them as forwarders. // Uncomment the following block, and insert the addresses replacing // the all-0's placeholder. // forwarders { // 0.0.0.0; // }; forwarders { 8.8.8.8; 8.8.8.4; 74.242.0.12; //68.87.76.178; }; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; /etc/bind/named.conf.options zone "leerdomain.local" { type master; file "/etc/bind/zones/leerdomain.local.db"; notify no; }; zone "2.168.192.in-addr.arpa" { type master; file "/etc/bind/zones/rev.2.168.192.in-addr.arpa"; notify no; }; /etc/bind/named.conf.local Lookup: $TTL 3D @ IN SOA ns.leerdomain.local. admin.leerdomain.local. ( 2010011001 28800 3600 604800 38400 ); leerdomain.local. IN NS ns.leerdomain.local. ns IN A 192.168.2.9 asus IN A 192.168.2.254 www IN CNAME asus vaio IN A 192.168.2.253 iptouch IN A 192.168.2.252 toshiba IN A 192.168.2.251 gw IN A 192.168.2.1 TXT "Network Gateway" /etc/bind/zones/leerdomain.local.db (Validates fine with named-checkzone when validating zone leerdomain.local) Reverse Lookup: $TTL 3D @ IN SOA ns.leerdomain.local. admin.leerdomain.local. ( 201001101 28800 604800 604800 86400 ) IN NS ns.leerdomain.local. 1 IN PTR gw.leerdomain.local. 254 IN PTR asus.leerdomain.local. 253 IN PTR vaio.leerdomain.local. 252 IN PTR iptouch.leerdomain.local. 251 IN PTR toshiba.leerdomain.local. /etc/bind/zones/rev.2.168.192.in-addr.arpa *(Does not validate with named-checkzone when validating zone leerdomain.local gives an error of: zone leerdomain.local/IN: NS 'ns.leerdomain.local' has no address records (A or AAAA) zone leerdomain.local/IN: not loaded due to errors. * Despite not validating bind9 starts without errors in /var/log/syslog I've also configured a few of the windows machines on my network to have the static ip as specified in the lookup and reverse lookup config files. i.e. Using nslookup yields the following results: C:\Users\leeand00>nslookup ns Server: UnKnown Address: 192.168.2.9 *** UnKnown can't find ns: Non-existent domain C:\Users\leeand00>nslookup gw Server: UnKnown Address: 192.168.2.9 Name: gw. Additionally trying to ping by name also fails on machines that are not the DNS Server. Is there something wrong with my configuration of either the nameserver or the Windows Boxes that is keeping me from accessing other machines using names?
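
    Two details in the output above point at likely causes: nslookup prints "Server: UnKnown" because 192.168.2.9 has no PTR record in the reverse zone shown (only .1 and .251-.254 do), and plain-name queries like "ns" or "gw" only work if the Windows clients append leerdomain.local as a DNS suffix. A hedged set of checks from the server side:

        # Does the zone answer for fully-qualified names?
        dig @192.168.2.9 ns.leerdomain.local A +short
        dig @192.168.2.9 asus.leerdomain.local A +short

        # Reverse lookup of the server itself; if this fails, Windows nslookup shows "UnKnown"
        dig @192.168.2.9 -x 192.168.2.9 +short

        # Is named listening on the LAN address, not only 127.0.0.1?
        sudo netstat -tulpn | grep ':53 '

        # Did both zones actually load? Look for "loaded serial" or errors at startup
        sudo grep named /var/log/syslog | tail -20

    On the Windows side, either query fully-qualified names (ping gw.leerdomain.local) or add leerdomain.local to the connection's DNS suffix search list; adding a PTR record for 9 (the ns host) to the reverse zone should also clear the "UnKnown" noise in nslookup.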

    Read the article

  • How to restore Linode to Vagrant VM?

    - by Iain Elder
    I'm trying to set up a Linux development environment so I can safely make changes to my website without breaking the live site. Linode hosts my live site. A simple solution would be to host my development server on Linode as well, but I want to avoid doubling my hosting costs. The cheapest way I see is to use Vagrant on my Windows workstation to host my development environment. After I attempt to restore the backup to Vagrant and reboot the VM, I can no longer ssh into the Vagrant host. It's probably because by restoring the backup I overwrite some special Vagrant configuration, but I'm not sure how to avoid that. How do I make this approach work? If my approach is fundamentally wrong, can you suggest an alternative? Creating the backup: On the Linode I used these commands to create a compressed copy of the entire filesystem, while ignoring things that shouldn't be included in the backup: $ sudo rsync -ahvz --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/backup/*} /* /backup/2 $ sudo tar -czf /backup/2.gz /backup/2 The backup file is called 2.gz because this is the second backup. The first backup is called 1.gz. I use WinSCP to copy the backup file to my Windows workstation. Setting up the Vagrant host: I need a Vagrant box that matches my Linode operating system (Ubuntu 12.04.3 LTS, kernel 3.9.3). I selected the closest match from vagrantbox.es: Ubuntu Server Precise 12.04.3 amd64 (kernel is ready for Docker; Docker not included). On my workstation I ran these commands to add the box and initialize and boot an instance: $ vagrant box add ubuntu-precise http://nitron-vagrant.s3-website-us-east-1.amazonaws.com/vagrant_ubuntu_12.04.3_amd64_virtualbox.box $ mkdir linode-test $ cd linode-test $ vagrant init ubuntu-precise $ vagrant up Now Vagrant is running a machine with SSH on port 2222. The operating system version is the same. The kernel version is 3.8.0. Sounds close enough. Restoring the backup: With WinSCP I copied the backup file 2.gz to /home/vagrant/2.gz on the Vagrant box. With PuTTY I connected via ssh to my new Vagrant box. On the box, move the backup to the filesystem root: $ sudo mv 2.gz / Extract the archive to the filesystem root: $ sudo tar -xvpz -f 2.gz -C / --strip-components=2 (I discovered I need to use strip-components because all files in the archive have the prefix backup/2/. I'll fix this for the next backup.) After the tar command completes, I log out of the box. Testing the backup: When I try to log in again, it doesn't let me log in as vagrant with a password any more. It does let me log in as iain, my user on the live Linode, with a password. That surprised me because I disabled password authentication on my live Linode. I figured that I had to restart the ssh service for the change to take effect. Instead of restarting just ssh, I chose to restart the whole system. Now I can't even get to the login screen. PuTTY says "connection refused" when I try to connect. What went wrong?
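
    Restoring the entire root filesystem over the Vagrant box overwrites exactly the pieces Vagrant and the guest need to stay reachable and bootable: the vagrant user and its insecure SSH key, /etc/network/interfaces and /etc/fstab, and the kernel modules under /lib/modules (the backup came from a 3.9.3 kernel, the box runs 3.8.0), which matches both symptoms. A hedged sketch of a restore that leaves those alone; the exclude list is illustrative, not exhaustive:

        # On the Vagrant box: unpack the backup into a staging directory first
        sudo mkdir -p /restore
        sudo tar -xpzf /2.gz -C /restore --strip-components=2

        # Sync it over / while excluding what keeps the VM bootable and reachable
        sudo rsync -aHv \
            --exclude=/etc/fstab \
            --exclude=/etc/network/ \
            --exclude=/etc/hostname \
            --exclude=/boot/ \
            --exclude=/lib/modules/ \
            --exclude=/home/vagrant/ \
            --exclude=/root/.ssh/ \
            /restore/ /

        # Keep password logins usable until your own key is in place,
        # since the restored sshd_config disables them
        sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
        sudo service ssh restart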

    Read the article

  • How do you re-mount an ext3 fs read-write after it gets mounted read-only from a disk error?

    - by cagenut
    Its a relatively common problem when something goes wrong in a SAN for ext3 to detect the disk write errors and remount the filesystem read-only. Thats all well and good, only when the SAN is fixed I can't figure out how to re-re-mount the filesystem read-write without rebooting. Behold: [root@localhost ~]# multipath -ll mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400 [size=1.1T][features=1 queue_if_no_path][hwhandler=0][rw] \_ round-robin 0 [prio=2][active] \_ 1:0:0:1 sdb 8:16 [active][ready] \_ 2:0:0:1 sdc 8:32 [active][ready] [root@localhost ~]# mount /dev/mapper/mpath0 /mnt/foo [root@localhost ~]# touch /mnt/foo/blah All good, now I yank the LUN out from under it. [root@localhost ~]# touch /mnt/foo/blah [root@localhost ~]# touch /mnt/foo/blah touch: cannot touch `/mnt/foo/blah': Read-only file system [root@localhost ~]# tail /var/log/messages Mar 18 13:17:33 localhost multipathd: sdb: tur checker reports path is down Mar 18 13:17:34 localhost multipathd: sdc: tur checker reports path is down Mar 18 13:17:35 localhost kernel: Aborting journal on device dm-2. Mar 18 13:17:35 localhost kernel: Buffer I/O error on device dm-2, logical block 1545 Mar 18 13:17:35 localhost kernel: lost page write due to I/O error on dm-2 Mar 18 13:17:36 localhost kernel: ext3_abort called. Mar 18 13:17:36 localhost kernel: EXT3-fs error (device dm-2): ext3_journal_start_sb: Detected aborted journal Mar 18 13:17:36 localhost kernel: Remounting filesystem read-only It only thinks its read-only, in reality its not even there. [root@localhost ~]# multipath -ll sdb: checker msg is "tur checker reports path is down" sdc: checker msg is "tur checker reports path is down" mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400 [size=1.1T][features=0][hwhandler=0][rw] \_ round-robin 0 [prio=0][enabled] \_ 1:0:0:1 sdb 8:16 [failed][faulty] \_ 2:0:0:1 sdc 8:32 [failed][faulty] [root@localhost ~]# ll /mnt/foo/ ls: reading directory /mnt/foo/: Input/output error total 20 -rw-r--r-- 1 root root 0 Mar 18 13:11 bar How it still remembers that 'bar' file being there... mystery, but not important right now. Now I re-present the LUN: [root@localhost ~]# tail /var/log/messages Mar 18 13:23:58 localhost multipathd: sdb: tur checker reports path is up Mar 18 13:23:58 localhost multipathd: 8:16: reinstated Mar 18 13:23:58 localhost multipathd: mpath0: queue_if_no_path enabled Mar 18 13:23:58 localhost multipathd: mpath0: Recovered to normal mode Mar 18 13:23:58 localhost multipathd: mpath0: remaining active paths: 1 Mar 18 13:23:58 localhost multipathd: dm-2: add map (uevent) Mar 18 13:23:58 localhost multipathd: dm-2: devmap already registered Mar 18 13:23:59 localhost multipathd: sdc: tur checker reports path is up Mar 18 13:23:59 localhost multipathd: 8:32: reinstated Mar 18 13:23:59 localhost multipathd: mpath0: remaining active paths: 2 Mar 18 13:23:59 localhost multipathd: dm-2: add map (uevent) Mar 18 13:23:59 localhost multipathd: dm-2: devmap already registered [root@localhost ~]# multipath -ll mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400 [size=1.1T][features=1 queue_if_no_path][hwhandler=0][rw] \_ round-robin 0 [prio=2][enabled] \_ 1:0:0:1 sdb 8:16 [active][ready] \_ 2:0:0:1 sdc 8:32 [active][ready] Great right? It says [rw] right there. 
Not so fast: [root@localhost ~]# touch /mnt/foo/blah touch: cannot touch `/mnt/foo/blah': Read-only file system OK, doesn't do it automatically, I'll just give it a little push: [root@localhost ~]# mount -o remount /mnt/foo mount: block device /dev/mapper/mpath0 is write-protected, mounting read-only Noooooooooo. I have tried all sorts of different mount/tune2fs/dmsetup commands and I cannot figure out how to get it to un-flag the block device as write-protected. Rebooting will fix it, but I'd much rather do it on-line. An hour of googling has gotten me nowhere either. Save me ServerFault.
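
    One sequence that sometimes clears this without a reboot, hedged because it depends on the kernel, multipath and ext3 versions involved: check whether the device-mapper node itself is still flagged read-only, clear that flag, and let ext3 replay the aborted journal before asking for a rw mount again.

        # 1 means the block device itself is read-only, independent of the mount flags
        blockdev --getro /dev/mapper/mpath0

        # Clear the read-only flag on the device if it is set
        # (it may also need clearing on the underlying paths, /dev/sdb and /dev/sdc)
        blockdev --setrw /dev/mapper/mpath0

        # The aborted journal normally has to be replayed before ext3 will go rw again
        umount /mnt/foo
        fsck.ext3 -y /dev/mapper/mpath0
        mount -o rw /dev/mapper/mpath0 /mnt/foo

        # If unmounting is not an option, a plain rw remount is worth retrying
        # once the device-level flag has been cleared
        mount -o remount,rw /mnt/foo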

    Read the article

  • How to set up Linux permissions for the WWW folder?

    - by Xeoncross
    Updated Summary: The /var/www directory is owned by root:root, which means that no one can use it and it's entirely useless. Since we all want a web server that actually works (and no-one should be logging in as "root"), we need to fix this. Only two entities need access: (1) PHP/Perl/Ruby/Python, which all need access to the folders and files since they create many of them (i.e. /uploads/); these scripting languages should be running under nginx or apache (or even something else like FastCGI for PHP); and (2) the developers. How do they get access? I know that someone, somewhere has done this before. With however-many billions of websites out there you would think that there would be more information on this topic. I know that 777 is full read/write/execute permission for owner/group/other, so that doesn't seem to be needed, as it leaves random users full permissions. What permissions need to be used on /var/www so that source control like git or svn, users in a group like "websites" (or even added to "www-data"), servers like apache or lighttpd, and PHP/Perl/Ruby can all read, create, and run files (and directories) there? If I'm correct, Ruby and PHP scripts are not "executed" directly but passed to an interpreter, so there is no need for execute permission on files in /var/www...? Therefore, it seems like the correct permission would be chmod -R 1660, which would make all files shareable by these four entities, make all files non-executable by mistake, block everyone else from the directory entirely, and set the permission mode to "sticky" for all future files. Is this correct? Update: I just realized that files and directories might need different permissions - I was talking about files above, so I'm not sure what the directory permissions would need to be. Update 2: The folder structure of /var/www changes drastically, as one of the four entities above is always adding (and sometimes removing) folders and sub-folders many levels deep. They also create and remove files that the other three entities might need read/write access to. Therefore, the permissions need to do the four things above for both files and directories. Since none of them should need execute permission (see the question about Ruby/PHP above), I would assume that rw-rw-r-- permission would be all that is needed and completely safe, since these four entities are run by trusted personnel (see #2) and all other users on the system only have read access. Update 3: This is for personal development machines and private company servers. No random "web customers" like a shared host. Update 4: This article by Slicehost seems to be the best at explaining what is needed to set up permissions for your www folder. However, I'm not sure what user or group apache/nginx with PHP or svn/git run as, and how to change them. Update 5: I have (I think) finally found a way to get this all to work (answer below). However, I don't know if this is the correct and SECURE way to do this. Therefore I have started a bounty. The person that has the best method of securing and managing the www directory wins.
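
    One common arrangement on Debian/Ubuntu-style systems, offered as a sketch rather than the one true answer: keep the web server's user as the owner, put the developers in a shared group, use setgid directories so new files inherit that group, and give directories 2775 and files 664. The group name "webdev" and the user "alice" below are placeholders.

        # Shared group for the humans (www-data already exists on Debian/Ubuntu)
        sudo groupadd webdev
        sudo usermod -aG webdev alice        # placeholder developer account

        # Web server owns the tree, developers share it through the group
        sudo chown -R www-data:webdev /var/www

        # Directories: rwx for owner and group, setgid so new files inherit 'webdev'
        sudo find /var/www -type d -exec chmod 2775 {} +

        # Files: read/write for owner and group, read-only for others, no execute bit
        sudo find /var/www -type f -exec chmod 664 {} +

        # Optional: a default ACL on directories keeps future files group-writable regardless of umask
        sudo find /var/www -type d -exec setfacl -d -m g:webdev:rwX {} +

    Scripts served through mod_php, FastCGI or Passenger do not need the execute bit, which matches the reasoning in the question; only CGI-style scripts do.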

    Read the article

  • rsyslogd not monitoring all files

    - by Tom O'Connor
    So.. I've installed Logstash, and instead of using the logstash shipper (because it needs the JVM and is generally massive), I'm using rsyslogd with the following configuration. # Use traditional timestamp format $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat $IncludeConfig /etc/rsyslog.d/*.conf # Provides kernel logging support (previously done by rklogd) $ModLoad imklog # Provides support for local system logging (e.g. via logger command) $ModLoad imuxsock # Log all kernel messages to the console. # Logging much else clutters up the screen. #kern.* /dev/console # Log anything (except mail) of level info or higher. # Don't log private authentication messages! *.info;mail.none;authpriv.none;cron.none;local6.none /var/log/messages # The authpriv file has restricted access. authpriv.* /var/log/secure # Log all the mail messages in one place. mail.* -/var/log/maillog # Log cron stuff cron.* /var/log/cron # Everybody gets emergency messages *.emerg * # Save news errors of level crit and higher in a special file. uucp,news.crit /var/log/spooler # Save boot messages also to boot.log local7.* /var/log/boot.log In /etc/rsyslog.d/logstash.conf there are 28 file monitor blocks using imfile $ModLoad imfile # Load the imfile input module $ModLoad imklog # for reading kernel log messages $ModLoad imuxsock # for reading local syslog messages $InputFileName /var/log/rabbitmq/startup_err $InputFileTag rmq-err: $InputFileStateFile state-rmq-err $InputFileFacility local6 $InputRunFileMonitor .... $InputFileName /var/log/some.other.custom.log $InputFileTag cust-log: $InputFileStateFile state-cust-log $InputFileFacility local6 $InputRunFileMonitor .... *.* @@10.90.0.110:5514 There are 28 InputFileMonitor blocks, each monitoring a different custom application logfile.. 
If I run [root@secret-gm02 ~]# lsof|grep rsyslog rsyslogd 5380 root cwd DIR 253,0 4096 2 / rsyslogd 5380 root rtd DIR 253,0 4096 2 / rsyslogd 5380 root txt REG 253,0 278976 1015955 /sbin/rsyslogd rsyslogd 5380 root mem REG 253,0 58400 1868123 /lib64/libgcc_s-4.1.2-20080825.so.1 rsyslogd 5380 root mem REG 253,0 144776 1867778 /lib64/ld-2.5.so rsyslogd 5380 root mem REG 253,0 1718232 1867780 /lib64/libc-2.5.so rsyslogd 5380 root mem REG 253,0 23360 1867787 /lib64/libdl-2.5.so rsyslogd 5380 root mem REG 253,0 145872 1867797 /lib64/libpthread-2.5.so rsyslogd 5380 root mem REG 253,0 85544 1867815 /lib64/libz.so.1.2.3 rsyslogd 5380 root mem REG 253,0 53448 1867801 /lib64/librt-2.5.so rsyslogd 5380 root mem REG 253,0 92816 1868016 /lib64/libresolv-2.5.so rsyslogd 5380 root mem REG 253,0 20384 1867990 /lib64/rsyslog/lmnsd_ptcp.so rsyslogd 5380 root mem REG 253,0 53880 1867802 /lib64/libnss_files-2.5.so rsyslogd 5380 root mem REG 253,0 23736 1867800 /lib64/libnss_dns-2.5.so rsyslogd 5380 root mem REG 253,0 20768 1867988 /lib64/rsyslog/lmnet.so rsyslogd 5380 root mem REG 253,0 11488 1867982 /lib64/rsyslog/imfile.so rsyslogd 5380 root mem REG 253,0 24040 1867983 /lib64/rsyslog/imklog.so rsyslogd 5380 root mem REG 253,0 11536 1867987 /lib64/rsyslog/imuxsock.so rsyslogd 5380 root mem REG 253,0 13152 1867989 /lib64/rsyslog/lmnetstrms.so rsyslogd 5380 root mem REG 253,0 8400 1867992 /lib64/rsyslog/lmtcpclt.so rsyslogd 5380 root 0r REG 0,3 0 4026531848 /proc/kmsg rsyslogd 5380 root 1u IPv4 1200589517 0t0 TCP 10.10.10.90 t:40629->10.10.10.90:5514 (ESTABLISHED) rsyslogd 5380 root 2u IPv4 1200589527 0t0 UDP *:45801 rsyslogd 5380 root 3w REG 253,3 17999744 2621483 /var/log/messages rsyslogd 5380 root 4w REG 253,3 13383 2621484 /var/log/secure rsyslogd 5380 root 5w REG 253,3 7180 2621493 /var/log/maillog rsyslogd 5380 root 6w REG 253,3 43321 2621529 /var/log/cron rsyslogd 5380 root 7w REG 253,3 0 2621494 /var/log/spooler rsyslogd 5380 root 8w REG 253,3 0 2621495 /var/log/boot.log rsyslogd 5380 root 9r REG 253,3 1064271998 2621464 /var/log/custom-application.monolog.log rsyslogd 5380 root 10u unix 0xffff81081fad2e40 0t0 1200589511 /dev/log You can see that there are nowhere near 28 logfiles actually being read. I really had to get one file monitored, so I moved it to the top, and it picked it up, but I'd like to be able to monitor all 28+ files, and not have to worry. OS is Centos 5.5 Kernel 2.6.18-308.el5 rsyslogd 3.22.1, compiled with: FEATURE_REGEXP: Yes FEATURE_LARGEFILE: Yes FEATURE_NETZIP (message compression): Yes GSSAPI Kerberos 5 support: Yes FEATURE_DEBUG (debug build, slow code): No Atomic operations supported: Yes Runtime Instrumentation (slow code): No Questions: Why is rsyslogd only monitoring a very small subset of the files? How can I fix this so that all the files are monitored?
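
    A few things worth checking with this rsyslog 3.22 setup, phrased as a hedged checklist rather than a definitive fix: imfile inputs are registered in config-file order and each one needs its own unique $InputFileStateFile in a writable work directory, the $ModLoad lines for imfile/imklog/imuxsock should appear only once across all config files, and running rsyslogd in the foreground with debugging shows exactly which monitors were accepted.

        # Stop the service and run rsyslogd in the foreground with debug output
        sudo service rsyslog stop
        sudo rsyslogd -c3 -dn 2>&1 | tee /tmp/rsyslog-debug.log
        # (Ctrl-C when done, then restart the service)

        # Which imfile monitors actually registered?
        grep -i imfile /tmp/rsyslog-debug.log | less

        # Duplicate module loads and state-file collisions are easy to spot this way
        grep -rhE 'ModLoad|InputFileStateFile|WorkDirectory' /etc/rsyslog.conf /etc/rsyslog.d/ | sort | uniq -c | sort -rn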

    Read the article

  • Moved DNS and Email Hosting, Now Can't Send/Receive To/From Domains Hosted on Previous Host

    - by maxfinis
    Our company had 4 domains whose emails and DNS were hosted by one company, and then we moved the email and DNS hosting for 3 of the 4 domains to a new company. Now, the 3 domains that were moved can't send or receive emails to and from the one domain still left on the old server. All other email functions work fine for all 4 domains. There are no bouncebacks, error messages, or emails stuck in queue, and no evidence of these missing emails hitting the new servers. The new hosting company confirms that everything is fine on their end, and assures me that it's most likely an old zone file still remaining on the old nameserver, and so the emails sent from the old host is routed to what it believes is still the authoritative nameserver. Because the old zone file's MX records still contain the old resource, the requests never leave the old nameserver to go online to do a fresh search for the real (new) authoritative nameserver. The compounding problem is that the old company is rather inept and doesn't seem to have the technical expertise to identify the problem, much less fix it. (I know, I know.) Is the problem truly that this old zone file just needs to be deleted from the old company's nameserver? If so, what's the best way for me to describe this to them? If not, what do you think could be the issue? Any help is much appreciated. I'm not in IT, so all this is new to me. I know it seems weird for me (the client) to have to do this legwork, but I just want to get this resolved. Here's what I've done: Ran dig to verify that the old server's MX records still point to the old authoritative server, instead of going online to do a fresh search: ~$ dig @old.nameserver.com domainthatwasmoved.com mx ; << DiG 9.6.0-APPLE-P2 << @old.nameserver.com domainThatWasMoved.com mx ; (1 server found) ;; global options: +cmd ;; Got answer: ;; -HEADER<<- opcode: QUERY, status: NOERROR, id: 61227 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; QUESTION SECTION: ;domainthatwasmoved.com. IN MX ;; ANSWER SECTION: domainthatwasmoved.com. 3600 IN MX 10 mail.oldmailserver.com. ;; ADDITIONAL SECTION: mail.oldmailserver.com. 3600 IN A 65.198.191.5 ;; Query time: 29 msec ;; SERVER: 65.198.191.5#53(65.198.191.5) ;; WHEN: Sun Dec 26 16:59:22 2010 ;; MSG SIZE rcvd: 88 Ran dig to try to see where the new hosting company's servers look when emails are sent from the 3 domains that were moved, and got refused: ~$ dig @new.nameserver.net domainStillAtOldHost.com mx ; << DiG 9.6.0-APPLE-P2 << @new.nameserver.net domainStillAtOldHost.com mx ; (1 server found) ;; global options: +cmd ;; Got answer: ;; -HEADER<<- opcode: QUERY, status: REFUSED, id: 31599 ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; WARNING: recursion requested but not available ;; QUESTION SECTION: ;domainStillAtOldHost.com. IN MX ;; Query time: 31 msec ;; SERVER: 216.201.128.10#53(216.201.128.10) ;; WHEN: Sun Dec 26 17:00:14 2010 ;; MSG SIZE rcvd: 34
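
    The stale-zone theory is easy to confirm from outside, and if the answers below differ the fix really is on the old host's side: delete (or correct) the leftover zone so that host's resolver follows the public delegation instead of answering from its own copy. Domain and server names below are the placeholders used in the question.

        # What the old provider's nameserver still answers from its local zone copy
        dig +short MX domainthatwasmoved.com @old.nameserver.com

        # What the rest of the internet sees (should be the new provider's mail hosts)
        dig +short MX domainthatwasmoved.com @8.8.8.8

        # Which nameservers are actually delegated for the domain now
        dig +short NS domainthatwasmoved.com @8.8.8.8
        whois domainthatwasmoved.com | grep -i 'name server'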

    Read the article

  • Install Intel USB 3.0 eXtensible Host Controller Driver for Windows Server 2008 R2 x64

    - by ffrugone
    According to Intel and Dell, by board is technically a 'desktop' board and they therefore do not support Intel USB 3.0 eXtensible Host Controller drivers for Windows Server 2008 (R2 x64). I'm trying to find a workaround. I found an entry on someone who tried to tackle this, but I can't make his fix work for me. Below, I have copied both his entry, and my reply. I'm a loyal stackoverflow user, and hopefully the people here at serverfault can help me: anyforumuser Re: GA-Z77X-UD5H USB3 Drivers not installing? « Reply #6 on: July 05, 2012, 04:12:59 am » Thanks to JoeMiner , his process for the network drivers gave me the clues to figure out to get the USB3 drivers working. I have got the intel USB3 drivers working at full speed in win server 2008r2 you have to edit the following file : 1. mup.xml in change the "Windows7" to "W2K8" 2. in setup.if2 under [groups] line starting with "HSCSDRIVER " change the "IsOS( ... )" entry to "IsOS(WIN2008_R2,WIN2008_R2_MAXSP)" inf files for all copy the content of the [Intel.NTAMD64.6.1] group to the [Intel.NTAMD64.6.2] group driver folders. here i am not entirely sure which is correct so there are some double up's. in the drivers folder copy the "Win7" folder to "win2008" , "win2008_r2" and "x64" ie your drivers folder should now contain the "win2008" , "win2008_r2" and "x64" folders and they contain contents of the win7 folder (the inf files should of already been fixed) Run install , It should install properly and work now. You will have to reboot If it doesn't work remove the intel usb3 controllers from device manager and get it to "scan for hardware changes" Good luck !!! benevida Re: GA-Z77X-UD5H Intel Network Drivers not installing? « Reply #7 on: August 13, 2012, 02:21:14 pm » Thank you anyforumuser! A process for getting this driver installed was exactly what I needed. However, I've hit a snag. I believe I've followed every step exactly as written, but I'm getting an error during installation. I get the message "One or more files that are required for installation are either missing or corrupted. Setup will exit." Behind the error, the 'Setup Progress' shows the current step as "Copying File: C:\Program Files (x86)\Intel\Intel(R) USB 3.0 eXtensible Host Controller Driver\Drivers\iusb3xhc.man". I've checked the installation files, and iusb3xhc.man seems to be a viable file in all of the Windows 2008 sub-directories of the Drivers folder. Therefore I don't see how the file could be missing and I doubt that it is corrupted, (although it does NOT exist in the \Drivers\HCSwitch folder). I opened 'Setup.if2', and there are two aspects to the step of copying iusb3xhc.man that caught my eye. First, the steps immediately preceding are set to 'error=ignore'. If they hadn't completed successfully, this is the first step where we'd hear about it. Second, this is the first step where the relative path '%source%\drivers\%_os%\%_ia%\' is used. If I haven't named the Windows 2008 sub-directories correctly, I could see where things are fouling up. In any event, if someone could take a look and make suggestions I'd appreciate it. Thank you.

    Read the article

  • How to install php-devel under CentOS 6.3 x64?

    - by Jeremy Dicaire
    I'm trying to install php-devel on my CentOS 6.3 VPS and get a failed dependencies test. From phpinfos(): SYSTEM Linux 2.6.32-279.5.2.el6.x86_64 #1 x86_64 NTS error: Failed dependencies: php(x86-64) = 5.4.6-1.el6.remi is needed by php-devel-5.4.6-1.el6.remi.x86_64 I've tried the following RPM packages: php54w-devel-5.4.6-1.w6.x86_64.rpm php-devel-5.4.6-1.el6.remi.i686.rpm php-devel-5.4.6-1.el6.remi.x86_64.rpm One of the above package gave me this: root@sv1 [/tmp]# rpm -Uvh php-devel-5.4.6-1.el6.remi.i686.rpm warning: php-devel-5.4.6-1.el6.remi.i686.rpm: Header V3 DSA/SHA1 Signature, key ID 00f97f56: NOKEY error: Failed dependencies: php(x86-32) = 5.4.6-1.el6.remi is needed by php-devel-5.4.6-1.el6.remi.i686 libbz2.so.1 is needed by php-devel-5.4.6-1.el6.remi.i686 libcom_err.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686 libcrypto.so.10 is needed by php-devel-5.4.6-1.el6.remi.i686 libedit.so.0 is needed by php-devel-5.4.6-1.el6.remi.i686 libgmp.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686 libgssapi_krb5.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686 libk5crypto.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686 libkrb5.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686 libncurses.so.5 is needed by php-devel-5.4.6-1.el6.remi.i686 libssl.so.10 is needed by php-devel-5.4.6-1.el6.remi.i686 libstdc++.so.6 is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.4.30) is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.5.2) is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.6.0) is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.6.11) is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.6.5) is needed by php-devel-5.4.6-1.el6.remi.i686 libz.so.1 is needed by php-devel-5.4.6-1.el6.remi.i686 I don't know how to fix this error and download all the dependencies. Thank you. Edit 1 (for quanta): Here is "yum repolist": root@sv1 [/tmp]# yum repolist Loaded plugins: fastestmirror, presto Loading mirror speeds from cached hostfile * base: mirror.atlanticmetro.net * epel: mirror.cogentco.com * extras: mirror.atlanticmetro.net * rpmforge: mirror.us.leaseweb.net * updates: centos.mirror.choopa.net repo id repo name status base CentOS-6 - Base 5,980+366 epel Extra Packages for Enterprise Linux 6 - x86_64 6,493+1,272 extras CentOS-6 - Extras 4 rpmforge RHEL 6 - RPMforge.net - dag 2,123+2,310 updates CentOS-6 - Updates 499+29 repolist: 15,099 root@sv1 [/tmp]# rpm -qa | grep php didn't return any result. 
I forgot to mention I'm using cPanel/WHM Edit 2 after adding the Remi repo: >root@sv1 [/etc/yum.repos.d]# yum clean all Loaded plugins: fastestmirror, presto Cleaning repos: base epel extras remi remi-test rpmforge updates Cleaning up Everything Cleaning up list of fastest mirrors 1 delta-package files removed, by presto >root@sv1 [/etc/yum.repos.d]# yum repolist Loaded plugins: fastestmirror, presto Determining fastest mirrors epel/metalink | 12 kB 00:00 * base: centos.mirror.nac.net * epel: mirror.symnds.com * extras: centos.mirror.choopa.net * remi: remi-mirror.dedipower.com * remi-test: remi-mirror.dedipower.com * rpmforge: mirror.us.leaseweb.net * updates: centos.mirror.nac.net base | 3.7 kB 00:00 base/primary_db | 4.5 MB 00:00 epel | 4.3 kB 00:00 epel/primary_db | 4.7 MB 00:00 extras | 3.0 kB 00:00 extras/primary_db | 6.3 kB 00:00 remi | 2.9 kB 00:00 remi/primary_db | 330 kB 00:00 remi-test | 2.9 kB 00:00 remi-test/primary_db | 85 kB 00:00 rpmforge | 1.9 kB 00:00 rpmforge/primary_db | 2.5 MB 00:00 updates | 3.5 kB 00:00 updates/primary_db | 2.3 MB 00:00 repo id repo name status base CentOS-6 - Base 5,980+366 epel Extra Packages for Enterprise Linux 6 - x86_64 6,493+1,272 extras CentOS-6 - Extras 4 remi Les RPM de remi pour Enterprise Linux 6 - x86_64 96+564 remi-test Les RPM de remi en test pour Enterprise Linux 6 - x86_64 25+139 rpmforge RHEL 6 - RPMforge.net - dag 2,123+2,310 updates CentOS-6 - Updates 499+29 repolist: 15,220 >root@sv1 [/etc/yum.repos.d]# yum install php-devel Loaded plugins: fastestmirror, presto Loading mirror speeds from cached hostfile * base: centos.mirror.nac.net * epel: mirror.symnds.com * extras: centos.mirror.choopa.net * remi: remi-mirror.dedipower.com * remi-test: remi-mirror.dedipower.com * rpmforge: mirror.us.leaseweb.net * updates: centos.mirror.nac.net Setting up Install Process No package php-devel available. Error: Nothing to do >root@sv1 [/etc/yum.repos.d]#
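
    Two things stand out here, both hedged guesses from the output above: rpm -qa | grep php returning nothing on a cPanel/WHM box usually means PHP was compiled by EasyApache rather than installed from RPMs, so an RPM php-devel has no matching php package to depend on; and cPanel commonly adds an exclude=php* ... line to /etc/yum.conf, which would also explain "No package php-devel available" even with the remi repo enabled. A few checks, plus the RPM route for a non-cPanel PHP:

        # Where does the running PHP actually come from? cPanel builds live under /usr/local
        php -v
        which php phpize php-config

        # If phpize/php-config exist, the EasyApache build already ships the devel headers
        php-config --include-dir 2>/dev/null

        # Does cPanel's yum exclude list hide php packages from yum?
        grep -i '^exclude' /etc/yum.conf

        # RPM route (only sensible when PHP itself is the remi RPM): install the pair together
        sudo yum --enablerepo=remi install php php-devel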

    Read the article
