Search Results

Search found 127626 results on 5106 pages for 'http status code 408'.

Page 537 of 5106

  • In Exim, is RBL spam rejected prior to being scanned by SpamAssassin?

    - by user955664
    I've recently been battling spam issues on our mail server. One account in particular was getting hammered with incoming spam. SpamAssassin's memory use is one of our concerns. What I've done is enable RBLs in Exim. I now see many rejection notices in the Exim log based on the various RBLs, which is good. However, when I run Eximstats, the numbers seem to be the same as they were prior to the enabling of the RBLs. I am assuming because the email is still logged in some way prior to the rejection. Is that what's happening, or am I missing something else? Does anyone know if these emails are rejected prior to being processed by SpamAssassin? Or does anyone know how I'd be able to find out? Is there a standard way to generate SpamAssassin stats, similar to Eximstats, so that I could compare the numbers? Thank you for your time and any advice. Edit: Here is the ACL section of my Exim configuration file ###################################################################### # ACLs # ###################################################################### begin acl # ACL that is used after the RCPT command check_recipient: # to block certain wellknown exploits, Deny for local domains if # local parts begin with a dot or contain @ % ! / | deny domains = +local_domains local_parts = ^[.] : ^.*[@%!/|] # to restrict port 587 to authenticated users only # see also daemon_smtp_ports above accept hosts = +auth_relay_hosts condition = ${if eq {$interface_port}{587} {yes}{no}} endpass message = relay not permitted, authentication required authenticated = * # allow local users to send outgoing messages using slashes # and vertical bars in their local parts. # Block outgoing local parts that begin with a dot, slash, or vertical # bar but allows them within the local part. # The sequence \..\ is barred. The usage of @ % and ! is barred as # before. The motivation is to prevent your users (or their virii) # from mounting certain kinds of attacks on remote sites. deny domains = !+local_domains local_parts = ^[./|] : ^.*[@%!] : ^.*/\\.\\./ # local source whitelist # accept if the source is local SMTP (i.e. not over TCP/IP). # Test for this by testing for an empty sending host field. accept hosts = : # sender domains whitelist # accept if sender domain is in whitelist accept sender_domains = +whitelist_domains # sender hosts whitelist # accept if sender host is in whitelist accept hosts = +whitelist_hosts accept hosts = +whitelist_hosts_ip # envelope senders whitelist # accept if envelope sender is in whitelist accept senders = +whitelist_senders # accept mail to postmaster in any local domain, regardless of source accept local_parts = postmaster domains = +local_domains # accept mail to abuse in any local domain, regardless of source accept local_parts = abuse domains = +local_domains # accept mail to hostmaster in any local domain, regardless of source accept local_parts = hostmaster domains =+local_domains # OPTIONAL MODIFICATIONS: # If the page you're using to notify senders of blocked email of how # to get their address unblocked will use a web form to send you email so # you'll know to unblock those senders, then you may leave these lines # commented out. However, if you'll be telling your senders of blocked # email to send an email to [email protected], then you should # replace "errors" with the left side of the email address you'll be # using, and "example.com" with the right side of the email address and # then uncomment the second two lines, leaving the first one commented. 
# Doing this will mean anyone can send email to this specific address, # even if they're at a blocked domain, and even if your domain is using # blocklists. # accept mail to [email protected], regardless of source # accept local_parts = errors # domains = example.com # deny so-called "legal" spammers" deny message = Email blocked by LBL - to unblock see http://www.example.com/ # only for domains that do want to be tested against RBLs domains = +use_rbl_domains sender_domains = +blacklist_domains # deny using hostname in bad_sender_hosts blacklist deny message = Email blocked by BSHL - to unblock see http://www.example.com/ # only for domains that do want to be tested against RBLs domains = +use_rbl_domains hosts = +bad_sender_hosts # deny using IP in bad_sender_hosts blacklist deny message = Email blocked by BSHL - to unblock see http://www.example.com/ # only for domains that do want to be tested against RBLs domains = +use_rbl_domains hosts = +bad_sender_hosts_ip # deny using email address in blacklist_senders deny message = Email blocked by BSAL - to unblock see http://www.example.com/ domains = +use_rbl_domains senders = +blacklist_senders # By default we do NOT require sender verification. # Sender verification denies unless sender address can be verified: # If you want to require sender verification, i.e., that the sending # address is routable and mail can be delivered to it, then # uncomment the next line. If you do not want to require sender # verification, leave the line commented out #require verify = sender # deny using .spamhaus deny message = Email blocked by SPAMHAUS - to unblock see http://www.example.com/ # only for domains that do want to be tested against RBLs domains = +use_rbl_domains dnslists = sbl.spamhaus.org # deny using ordb # deny message = Email blocked by ORDB - to unblock see http://www.example.com/ # # only for domains that do want to be tested against RBLs # domains = +use_rbl_domains # dnslists = relays.ordb.org # deny using sorbs smtp list deny message = Email blocked by SORBS - to unblock see http://www.example.com/ # only for domains that do want to be tested against RBLs domains = +use_rbl_domains dnslists = dnsbl.sorbs.net=127.0.0.5 # Next deny stuff from more "fuzzy" blacklists # but do bypass all checking for whitelisted host names # and for authenticated users # deny using spamcop deny message = Email blocked by SPAMCOP - to unblock see http://www.example.com/ hosts = !+relay_hosts domains = +use_rbl_domains !authenticated = * dnslists = bl.spamcop.net # deny using njabl deny message = Email blocked by NJABL - to unblock see http://www.example.com/ hosts = !+relay_hosts domains = +use_rbl_domains !authenticated = * dnslists = dnsbl.njabl.org # deny using cbl deny message = Email blocked by CBL - to unblock see http://www.example.com/ hosts = !+relay_hosts domains = +use_rbl_domains !authenticated = * dnslists = cbl.abuseat.org # deny using all other sorbs ip-based blocklist besides smtp list deny message = Email blocked by SORBS - to unblock see http://www.example.com/ hosts = !+relay_hosts domains = +use_rbl_domains !authenticated = * dnslists = dnsbl.sorbs.net!=127.0.0.6 # deny using sorbs name based list deny message = Email blocked by SORBS - to unblock see http://www.example.com/ domains =+use_rbl_domains # rhsbl list is name based dnslists = rhsbl.sorbs.net/$sender_address_domain # accept if address is in a local domain as long as recipient can be verified accept domains = +local_domains endpass message = "Unknown User" verify = recipient # 
accept if address is in a domain for which we relay as long as recipient # can be verified accept domains = +relay_domains endpass verify=recipient # accept if message comes for a host for which we are an outgoing relay # recipient verification is omitted because many MUA clients don't cope # well with SMTP error responses. If you are actually relaying from MTAs # then you should probably add recipient verify here accept hosts = +relay_hosts accept hosts = +auth_relay_hosts endpass message = authentication required authenticated = * deny message = relay not permitted # default at end of acl causes a "deny", but line below will give # an explicit error message: deny message = relay not permitted # ACL that is used after the DATA command check_message: accept
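
    The ACL above is the RCPT-time ACL, so a dnslists deny there rejects the message before the DATA phase ever starts; SpamAssassin-based scanning normally happens later, either from the DATA-time ACL or after acceptance, which means RBL-rejected mail never reaches it. For illustration only, a hedged sketch of how SpamAssassin is commonly wired into the DATA ACL via Exim's content-scanning "spam" condition - this is not taken from the configuration above (whose DATA ACL, check_message, currently just accepts) and assumes Exim was built with content scanning and that spamd is running:

        # illustrative sketch only -- assumes exiscan/content-scanning support and a running spamd;
        # referenced from the main section via: acl_smtp_data = check_message
        check_message:
          # runs after the RCPT-time RBL denies above, so RBL-rejected mail never reaches SpamAssassin
          deny  message = Message classified as spam (score $spam_score)
                spam    = nobody/defer_ok
          accept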

    Read the article

  • Pressing an accent key (´ / ˇ) adds the character twice in most programs

    - by sh code
    This might seem like a duplicate of Pressing key with accent character makes it enter twice, but it is not: the issue started about three days ago. I am using the Slovak (Slovakia) keyboard layout, and the dead key for adding accents/diacritics (just left of Backspace) acts strangely. It should do nothing when pressed, and then, when a character key is pressed right after it, output that character with the accent applied. However, in most programs, pressing it once immediately outputs the accent character itself twice (which under normal circumstances only happens when you press the key twice). The issue doesn't happen in Visual Studio, in any program I build with it, or in Pidgin; it seems to happen everywhere else (Firefox, Notepad, MS OneNote, Windows Explorer itself). I am not aware of any software installations, driver changes or system updates between the time it worked properly and the time it broke. What could be the problem and how do I solve it?

    Read the article

  • HAProxy NGInx SSL setup

    - by Niclas
    I've been looking at different setups for a server cluster supporting SSL and I would like to sanity-check my idea with you.
    Requirements:
    - All servers in the cluster should sit under the same fully qualified domain name (http and https).
    - Routing to subsystems is done on URI matching in HAProxy.
    - Every URI must also be reachable over SSL.
    Wish: centralised routing rules.

                ---<----http-----<--
                |                  |
        Inet -->HA--+---https--->NGInx_SSL_1..N
                    |
                    +---http---> Apache_1..M
                    |
                    +---http---> NodeJS

    Idea: configure HAProxy to route all SSL traffic (mode=tcp, algorithm=source) to an NGInx cluster that terminates HTTPS into plain HTTP, then pass that HTTP traffic from NGInx back to HAProxy for normal load balancing according to the HAProxy config. My question is simply: is this the best way to configure things given the requirements above?
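
    For illustration, a minimal HAProxy sketch of the layout described (TCP-mode SSL passthrough to an nginx tier that terminates TLS, with the decrypted traffic re-entering an HTTP-mode frontend for URI-based routing); all names, ports, addresses and ACLs below are assumptions, not a recommendation:

        # sketch only -- frontend/backend names, addresses and the /api ACL are invented
        frontend fe_https
            bind *:443
            mode tcp
            default_backend be_nginx_ssl

        backend be_nginx_ssl
            mode tcp
            balance source
            server nginx1 10.0.0.11:443 check
            server nginx2 10.0.0.12:443 check

        frontend fe_http                  # nginx proxies the decrypted traffic back to this listener
            bind *:80
            mode http
            acl is_node path_beg /api
            use_backend be_node if is_node
            default_backend be_apache

        backend be_apache
            mode http
            balance roundrobin
            server apache1 10.0.0.21:80 check

        backend be_node
            mode http
            server node1 10.0.0.31:3000 check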

    Read the article

  • EC2-like virtualisation platform with non-RDP access

    - by code'
    I'm looking into setting up a few small VMs. The trouble is the software I intend to install on them (Cisco VPN Client) blocks out networking (other than to the target VPN destination...) with no workaround. This means that Remote Desktop or other methods of connecting to VMs that go via the Internet (e.g. GoToMyPC, LogMeIn) are a non-starter. What I'm really looking for is an EC2-like platform but which gives direct access to the VM through (for example) Hyper-V Manager. Sadly the only way they all seem to offer to remote control the VMs is direct access via Remote Desktop, whereas I need to be one layer above that (if that makes sense). A viable alternative would be to run virtualisation software within a Windows EC2 instance; obviously hardware virtualisation is impossible but I wonder if there are any software virtualisation platforms that could be run and that would work. Does anyone know if something like this exists/is possible? Thanks! C

    Read the article

  • Troubleshooting an NFS server hanging after authenticated mount request

    - by Christoph
    I need some advice on troubleshooting an NFS server problem on Scientific Linux (RHEL) 6.1. The log on the server shows that an authenticated mount request was made: Jan 13 16:30:02 ??? rpc.mountd[3996]: authenticated mount request from ????:784 for /shared-storage/cm/shared (/shared-storage/cm/shared) But after that, it does not continue. On the client, it is also hanging. The interesting thing now is that I have two NFS servers, which should be identical, and the one is working perfectly, but the other exhibits the above mentioned behaviour. The problem is also not completely persistent, i. e. sometimes the mount request succeeds. I assume that the problem must be related to the server rather than to the client, because it is working perfectly on the other server. My question is where I should search the problem. I have already re-created the exports using exportfs -r, I have restarted the NFS server, I have compared the rpcinfo outputs of both server - no success. The problem even survives a reboot. Any other ideas are appreciated. As answer to Tim's question: I have sporadically the following in dmesg, but do not know whether it is related e1000e 0000:0c:00.0: eth4: Detected Hardware Unit Hang: TDH <24> TDT <25> next_to_use <25> next_to_clean <24> buffer_info[next_to_clean]: time_stamp <1c3d12940> next_to_watch <24> jiffies <1c3d12940> next_to_watch.status <0> MAC Status <80383> PHY Status <792d> PHY 1000BASE-T Status <7800> PHY Extended Status <3000> PCI Status <10> Further edit: The problem above does not occur on the machine that is working, so it probably is related. Again an edit: The error is not on the (software) device that is used for NFS, but on another one. The NFS mount also does not trigger the message.
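
    A few commands that are often useful for narrowing this kind of hang down on the server side - a hedged sketch assuming a stock RHEL/SL 6 layout, with the interface name taken from the dmesg excerpt above:

        # sketch of the usual suspects on the misbehaving server
        rpcinfo -p                       # are mountd, nfs and nlockmgr all registered?
        exportfs -v                      # do the exports really match the working server?
        nfsstat -s                       # per-call server statistics
        cat /proc/fs/nfsd/threads        # enough nfsd threads configured?
        ethtool -S eth4 | grep -i err    # the e1000e "unit hang" NIC from dmesg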

    Read the article

  • Integrating Nagios with a ticketing system/incident management system

    - by sektor
    Is there a free ticketing system/incident management system which will help me in achieving the following? 1) If a service goes down then Nagios alerts the on-duty staff and pushes the status to some backend or DB as a ticket, say the initial status is "New". 2) The on-duty staff logs in through a frontend and acknowledges the new ticket by marking it as "In progress", so now the status of the ticket changes from "New" to "In progress". 3) If even after "n" number of minutes no person from on-duty staff has changed the ticket status to "In progress" then Nagios alerts the next level of contacts. Although if the on-duty staff has acknowledged the ticket then there is no need to alert the next level. 4) When the service comes up Nagios closes the ticket by marking it "Closed" Now I already have Nagios monitoring set up and currently it alerts by sending text messages and mails, what I'm looking for is some framework which only escalates the issue(alerts the second level) if the first level(on-duty staff) fails to respond to the initial alert. By "responding to the alert" I mean, the on-duty staff can login via some frontend and basically change the status to something like "Acknowledged" or "In progress".
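
    For point 3, Nagios itself can handle the escalation without an external tool: acknowledging a problem from the web UI suppresses further problem notifications, and a serviceescalation object re-routes notifications to the next level after a set number of unacknowledged notifications. A hedged sketch follows (host, service and contact group names are placeholders); the ticket creation and closure in points 1 and 4 are usually bolted on through notification commands or event handlers for whichever ticketing system is chosen:

        # sketch only -- object names are placeholders
        define serviceescalation {
            host_name               web01
            service_description     HTTP
            first_notification      3       ; escalate from the 3rd notification onward
            last_notification       0       ; ... until the problem is resolved
            notification_interval   15
            contact_groups          second-level-oncall
        }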

    Read the article

  • How to manage preventive maintenance planning for external IT support?

    - by code-gijoe
    I am a bit puzzled about how to handle server upgrade planning for software we maintain on remote sites. This is my case: I work for a software company that has many external clients. We are trying to be more Agile in our development, so we plan to release small improvements every quarter, and we want to keep our clients informed of maintenance schedules. Instead of having angry clients who believe the ROI of our support plan is low, we want to be more proactive. Let's say we have 100 machines to take care of: is there some tool to assist me in planning the maintenance with clients? Right now I get a call from an unhappy client requesting that we upgrade them, and that is when we go into panic mode and start making calls. That is when I need to check my calendar, coordinate with the other guys, call a few times, and change the date again and again until everyone is happy. Can this be done better?

    Read the article

  • Using Default Document with Forms Authentication

    - by John Rabotnik
    I have a site hosted on IIS7 with a default document specified as default.aspx. This works fine but my app uses Forms Authentication and I want to disable Anonymous Authentication completely. When I do disable anonymous authentication for everything except the login page, everything works fine but the default document setting stops working. With Anonymous authentication switched on if I visit http://mysite I get passed to http://mysite/default.aspx (which then redirects to the login page if the user hasn't already logged in) If I disable anonymous authentication (leaving only forms based auth enabled) and I visit http://mysite I get a permission denied page from IIS. Yet, if I visit http://mysite/default.aspx directly then the site works fine. I just want to disable anonymous authentication and have http://mysite go to http://mysite/default.aspx. Any ideas would be greatly appreciated.
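
    One common pattern, sketched below, is to leave anonymous authentication enabled at the IIS level and let ASP.NET Forms Authentication plus URL authorization do the gatekeeping, so the default-document lookup for http://mysite still resolves to default.aspx before the user is bounced to the login page. The page names here are assumptions:

        <!-- sketch only: login page name and defaultUrl are assumptions -->
        <configuration>
          <system.web>
            <authentication mode="Forms">
              <forms loginUrl="~/login.aspx" defaultUrl="~/default.aspx" />
            </authentication>
            <authorization>
              <deny users="?" />   <!-- anonymous users are redirected to the login page -->
            </authorization>
          </system.web>
        </configuration>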

    Read the article

  • How to find source of 301/302 redirect loop? Heroku GoDaddy Zerigo

    - by user179288
    this should be a relatively simple problem but I'm having trouble.I hope this is the right forum to post on as I've seen people get booted off stack-overflow for this sort of thing. I've setup a web app on heroku (cedar stack) at my-web-app.herokuapp.com and I'm trying to direct my-domain.com and www.my-domain.com to it. As per instructions on the heroku documentation, I've set my-domain.com to redirect (forwarding) to www.my-domain.com and then set a C-Name from www.my-domain.com to my-web-app.herokuapp.com. But the C-Name doesn't seem to be working right and is sending back to my-domain.com, causing a loop and I can't work out why. I first configured these setting at GoDaddy.com where I registered the domain but then tried to avoid the problem by using Heroku's Zerigo DNS add-on, setting the nameservers on GoDaddy to the ones given for Zerigo. However the problem remains. Here is the output from dig for my-domain.com ("drop-circles.com"): ; <<>> DiG 9.3.2 <<>> any drop-circles.com ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 671 ;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 5 ;; QUESTION SECTION: ;drop-circles.com. IN ANY ;; ANSWER SECTION: drop-circles.com. 433 IN NS b.ns.zerigo.net. drop-circles.com. 433 IN NS d.ns.zerigo.net. drop-circles.com. 433 IN NS e.ns.zerigo.net. drop-circles.com. 433 IN NS a.ns.zerigo.net. drop-circles.com. 433 IN NS c.ns.zerigo.net. drop-circles.com. 433 IN SOA a.ns.zerigo.net. hostmaster.zerigo.com. 1372250760 10800 3600 604800 900 drop-circles.com. 433 IN A 64.27.57.29 drop-circles.com. 433 IN A 64.27.57.24 ;; ADDITIONAL SECTION: d.ns.zerigo.net. 68935 IN A 174.36.24.250 e.ns.zerigo.net. 69015 IN A 72.26.219.150 a.ns.zerigo.net. 72602 IN A 64.27.57.11 c.ns.zerigo.net. 69204 IN A 109.74.192.232 b.ns.zerigo.net. 70549 IN A 174.37.229.229 ;; Query time: 15 msec ;; SERVER: 194.168.4.100#53(194.168.4.100) ;; WHEN: Wed Jun 26 14:29:07 2013 ;; MSG SIZE rcvd: 293 Here is the output from dig for www.my-domain.com ("www.drop-circles.com"): ; <<>> DiG 9.3.2 <<>> any www.drop-circles.com ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1608 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;www.drop-circles.com. IN ANY ;; ANSWER SECTION: www.drop-circles.com. 407 IN CNAME drop-circles-website.herokuapp.com. 
;; Query time: 19 msec ;; SERVER: 194.168.4.100#53(194.168.4.100) ;; WHEN: Wed Jun 26 14:29:15 2013 ;; MSG SIZE rcvd: 83 And from Fiddler if I use the inspector when I try either address I get a series of requests, with the my-domain.com ("drop-circles.com") looking like this: Request: GET http://drop-circles.com/ HTTP/1.1 Accept: text/html, application/xhtml+xml, */* Accept-Language: en-gb User-Agent: Opera/9.80 (Windows NT 5.1; U; Edition IBIS; Trident/5.0) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: drop-circles.com Response: HTTP/1.1 302 Found Server: nginx/0.8.54 Date: Wed, 26 Jun 2013 13:26:55 GMT Content-Type: text/html;charset=utf-8 Connection: keep-alive Status: 302 Found Location: http://www.drop-circles.com/ Content-Length: 113 <html><body>Redirecting to <a href="http://www.drop-circles.com/">http://www.drop-circles.com/</a></body></html> And the www.my-domain.com ("www.drop-circles.com") looking like this: Request: GET http://www.drop-circles.com/ HTTP/1.1 Accept: text/html, application/xhtml+xml, */* Accept-Language: en-gb User-Agent: Opera/9.80 (Windows NT 5.1; U; Edition IBIS; Trident/5.0) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: www.drop-circles.com Response: HTTP/1.1 301 Moved Permanently Content-Type: text/html Date: Wed, 26 Jun 2013 13:26:56 GMT Location: http://drop-circles.com/ Vary: Accept X-Powered-By: Express Content-Length: 104 Connection: keep-alive <p>Moved Permanently. Redirecting to <a href="http://drop-circles.com/">http://drop-circles.com/</a></p> Any and all help would be greatly appreciated. If it is not at all obvious from these readouts what it might be could someone at least tell me which company GoDaddy, Zerigo or Heroku should I go to for support since I don't really know enough to be able to say where the problem lies. Thank you.
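
    The captures above already bracket the loop: the 302 from drop-circles.com is served by nginx/0.8.54, i.e. whichever forwarding/redirect service the apex A records point at (sending the apex to www), while the 301 back from www.drop-circles.com advertises X-Powered-By: Express, i.e. it comes from the application running on Heroku itself. A quick, hedged way to watch each hop from a shell (assumes curl is installed):

        # sketch: show the status line and redirect target for each hostname, one hop at a time
        curl -sI http://drop-circles.com/     | grep -iE '^(HTTP|Location)'
        curl -sI http://www.drop-circles.com/ | grep -iE '^(HTTP|Location)'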

    Read the article

  • Where is the read/unread field in Outlook 2010 custom search?

    - by Ken
    I am trying to create a custom search folder in Outlook 2010, using the "Advanced" tab in the "Search Folder Criteria" dialog. One of the criteria I need is the read/unread status of a message, but the "Field" dropdown does not contain a field corresponding to read/unread status. This is odd because the read/unread status is available in the "More Choices" tab, yet seemingly not in the "Advanced" tab. How do I create an advanced search folder criterion that incorporates the read/unread status of a message?

    Read the article

  • Adaptec 5805 not recognized after reboot

    - by Rakedko ShotGuns
    After rebooting the system, the controller is not recognized. It only works if the computer is shut down and turned off. I have recently updated the firmware to "Adaptec RAID 5805 Firmware Build 18948". How do I fix the problem? Configuration summary --------------------------- 1. Server name.....................raid_test Adaptec Storage Manager agent...7.31.00 (18856) Adaptec Storage Manager console.7.31.00 (18856) Number of controllers...........1 Operating system................Windows Configuration information for controller 1 ------------------------------------------------------- Type............................Controller Model...........................Adaptec 5805 Controller number...............1 Physical slot...................2 Installed memory size...........512 MB Serial number...................8C4510C6C9E Boot ROM........................5.2-0 (18948) Firmware........................5.2-0 (18948) Device driver...................5.2-0 (16119) Controller status...............Optimal Battery status..................Charging Battery temperature.............Normal Battery charge amount (%).......37 Estimated charge remaining......0 days, 16 hours, 12 minutes Background consistency check....Disabled Copy back.......................Disabled Controller temperature..........Normal (40C / 104F) Default logical drive task priority High Performance mode................Dynamic Number of logical devices.......1 Number of hot-spare drives......0 Number of ready drives..........0 Number of drive(s) assigned to MaxCache cache0 Maximum drives allowed for MaxCache cache8 MaxCache Read Cache Pool Size...0 GB NCQ status......................Enabled Stay awake status...............Disabled Internal drive spinup limit.....0 External drive spinup limit.....0 Phy 0...........................No device attached Phy 1...........................No device attached Phy 2...........................No device attached Phy 3...........................1.50 Gb/s Phy 4...........................No device attached Phy 5...........................No device attached Phy 6...........................No device attached Phy 7...........................No device attached Statistics version..............2.0 SSD Cache size..................0 Pages on fetch list.............0 Fetch list candidates...........0 Candidate replacements..........0 69319...........................31293 Logical device..................0 Logical device name............. 
RAID level......................Simple volume Data space......................148,916 GB Date created....................09/19/2012 Interface type..................Serial ATA State...........................Optimal Read-cache mode.................Enabled Preferred MaxCache read cache settingEnabled Actual MaxCache read cache setting Disabled Write-cache mode................Enabled (write-back) Write-cache setting.............Enabled (write-back) Partitioned.....................Yes Protected by hot spare..........No Bootable........................Yes Bad stripes.....................No Power Status....................Disabled Power State.....................Active Reduce RPM timer................Never Power off timer.................Never Verify timer....................Never Segment 0.......................Present: controller 1, connector 0, device 0, S/N 9RX3KZMT Overall host IOs................99075 Overall MB......................4411203 DRAM cache hits.................71929 SSD cache hits..................0 Uncached IOs....................29239 Overall disk failures...........0 DRAM cache full hits............71929 DRAM cache fetch / flush wait...0 DRAM cache hybrid reads.........3476 DRAM cache flushes..............-- Read hits.......................0 Write hits......................0 Valid Pages.....................0 Updates on writes...............0 Invalidations by large writes...0 Invalidations by R/W balance....0 Invalidations by replacement....0 Invalidations by other..........0 Page Fetches....................0 0...............................0 73..............................10822 8...............................3 46138...........................4916 27184...........................15226 20875...........................323 16982...........................1771 1563............................5317 1948............................2969 Serial attached SCSI ----------------------- Type............................Disk drive Vendor..........................Unknown Model...........................ST3160815AS Serial Number...................9RX3KZMT Firmware level..................3.AAD Reported channel................0 Reported SCSI device ID.........0 Interface type..................Serial ATA Size............................149,05 GB Negotiated transfer speed.......1.50 Gb/s State...........................Optimal S.M.A.R.T. error................No Write-cache mode................Write back Hardware errors.................0 Medium errors...................0 Parity errors...................0 Link failures...................0 Aborted commands................0 S.M.A.R.T. 
warnings.............0 Solid-state disk (non-spinning).false MaxCache cache capable..........false MaxCache cache assigned.........false NCQ status......................Enabled Phy 0...........................1.50 Gb/s Power State.....................Full rpm Supported power states..........Full rpm, Powered off 0x01............................113 0x03............................98 0x04............................99 0x05............................100 0x07............................83 0x09............................75 0x0A............................100 0x0C............................99 0xBB............................100 0xBD............................100 0xBE............................61 0xC2............................39 0xC3............................69 0xC5............................100 0xC6............................100 0xC7............................200 0xC8............................100 0xCA............................100 Aborted commands................0 Link failures...................0 Medium errors...................0 Parity errors...................0 Hardware errors.................0 SMART errors....................0 End of the configuration information for controller 1

    Read the article

  • Wake up and record in Windows Media Center on a Mac Mini

    - by Sir Code-A-Lot
    I'm currently considering buying a Mac Mini to use as a media center. I plan to install Windows 7 (or 8) on it, using Boot Camp. Will it be able to go into standby or hibernate (S3, S4?) and wake up to record TV scheduled in Windows Media Center? I haven't been able to find concrete information on supported standby types when running Windows under boot camp, and if Windows will even be able to wake when a recording should start. I just want to be clear on any limitations in this area before I buy anything.

    Read the article

  • SQL Server 2005: Rename DB Server Instance Name?

    - by Code Sherpa
    Hi, can somebody tell me how to rename the DB server instance name and a DB name in SQL Server 2005? Right now I have SERVER/OLDNAME -- oldnameDB, and I want to change the server instance and also change the db name. I have tried: EXEC sp_renamedb 'oldName', 'newName' and that has changed the db name as it appears in the object tree. But when I do "select @@servername" it still returns the old name. Also, the MDF and LDF files still have the old name. How do I change the instance and db names as a clean sweep across the server? Thanks.
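
    sp_renamedb only renames the database; the value returned by @@SERVERNAME is separate server metadata, and the physical MDF/LDF file names are separate again. A hedged sketch of the usual sequence (names and paths below are placeholders, and note that the instance portion of a named instance cannot be renamed this way - that requires a reinstall):

        -- sketch only: OLDNAME/NEWNAME, logical file names and paths are placeholders
        EXEC sp_dropserver 'OLDNAME';
        EXEC sp_addserver  'NEWNAME', 'local';
        -- restart the SQL Server service, then verify:
        SELECT @@SERVERNAME;

        -- point the database at renamed physical files (take the DB offline, rename the
        -- files in the OS, then bring it back online)
        ALTER DATABASE newName MODIFY FILE (NAME = 'oldName',     FILENAME = 'D:\Data\newName.mdf');
        ALTER DATABASE newName MODIFY FILE (NAME = 'oldName_log', FILENAME = 'D:\Data\newName_log.ldf');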

    Read the article

  • What Are All the Variables Necessary to Create Blackbox Logs for Nginx?

    - by Alan Gutierrez
    There's an article out there, Profiling LAMP Applications with Apache's Blackbox Logs, that describes how to create a log that records a lot of detailed information missing in the common and combined log formats. This information is supposed to help you resolve performance issues. As the author notes "While the common log-file format (and the combined format) are great for hit tracking, they aren't suitable for getting hardcore performance data." The article describes a "blackbox" log format, like a blackbox flight recorder on an aircraft, that gathers information used to profile server performance, missing from the hit tracking log formats: Keep alive status, remote port, child processes, bytes sent, etc. LogFormat "%a/%S %X %t \"%r\" %s/%>s %{pid}P/%{tid}P %T/%D %I/%O/%B" blackbox I'm trying to recreate as much of the format for Nginx, and would like help filling in the blanks. Here's what Nginx blackbox format would look like, the unmapped Apache directives have question marks after their names. access_log blackbox '$remote_addr/$remote_port X? [$time_local] "$request"' 's?/$status $pid/0 T?/D? I?/O?/B?' Here's a table of the variables I've been able to map from the Nginx documentation. %a = $remote_addr - The IP address of the remote client. %S = $remote_port - The port of the remote client. %X = ? - Keep alive status. %t = $time_local - The start time of the request. %r = $request - The first line of request containing method verb, path and protocol. %s = ? - Status before any redirections. %>s = $status - Status after any redirections. %{pid}P = $pid - The process id. %{tid}P = N/A - The thread id, which is non-applicable to Nignx. %T = ? - The time in seconds to handle the request. %D = ? - The time in milliseconds to handle the request. %I = ? - The count of bytes received including headers. %O = ? - The count of bytes sent including headers. %B = ? - The count of bytes sent excluding headers, but with a 0 for none instead of '-'. Looking for help filling in the missing variables, or confirmation that the missing variables are in fact, unavailable in Nginx.
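
    A hedged log_format sketch assembled from the table above: $request_length, $request_time, $bytes_sent and $body_bytes_sent are standard nginx variables, $connection_requests needs a reasonably recent nginx, and there is no direct counterpart for Apache's %X (connection status) or the pre-redirect %s, so those fields are approximated or dropped.

        # sketch only -- the field-for-field mapping is approximate, see the notes above
        log_format blackbox '$remote_addr/$remote_port $connection_requests [$time_local] "$request" '
                            '$status $pid/0 $request_time $request_length/$bytes_sent/$body_bytes_sent';
        access_log /var/log/nginx/blackbox.log blackbox;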

    Read the article

  • Creating IIS Rewrite Rules

    - by Tom Bell
    I'm having a hard time converting old .htaccess rewrite rules to new IIS ones so I was wondering if anyone could point me in the right direction. Below are some example URLs I would like rewriting. http://example.org.uk/about/ Rewrites to http://example.org.uk/about/about.html ----------- http://example.org.uk/blog/events/ Rewrites to http://example.org.uk/blog/events.html ----------- http://example.org.uk/blog/2010/11/foo-bar Rewrites to http://example.org.uk/blog/2010/11/foo-bar.html The directories and file names are generic and could be anything. Any help would be greatly appreciated.
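
    Without the original .htaccess rules the exact patterns are guesswork, but with the IIS URL Rewrite module the general shape is one <rule> per pattern in web.config. A hedged sketch covering only the third example URL (the rule name and regex are assumptions):

        <!-- sketch only: rule name and regex pattern are assumptions -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Blog permalink to html" stopProcessing="true">
                <match url="^blog/(\d{4})/(\d{2})/([^/.]+)$" />
                <action type="Rewrite" url="blog/{R:1}/{R:2}/{R:3}.html" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>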

    Read the article

  • Allowing non-admin users to unstick the print spooler

    - by Reafidy
    I currently have an issue where the print queue gets stuck on a central print server (Windows Server 2008). Using the "Clear all documents" function does not clear it and gets stuck too. I need non-admin users to be able to clear the print queue from their workstations. I have tried the following WinForms program, which I created; it lets a user stop the print spooler, delete the spooled files in the "C:\Windows\System32\spool\PRINTERS" folder and then start the print spooler again, but this requires the program to be run as an administrator. How can I allow my normal users to execute this program without giving them admin privileges? Or is there another way I can let normal users clear the print queue on the server?

        Imports System.ServiceProcess

        Public Class Form1

            Private Sub Button1_Click(sender As System.Object, e As System.EventArgs) Handles Button1.Click
                ClearJammedPrinter()
            End Sub

            Public Sub ClearJammedPrinter()
                Dim tspTimeOut As TimeSpan = New TimeSpan(0, 0, 5)
                Dim controllerStatus As ServiceControllerStatus = ServiceController1.Status
                Try
                    ' Stop the spooler service if it is not already stopped
                    If ServiceController1.Status <> ServiceProcess.ServiceControllerStatus.Stopped Then
                        ServiceController1.Stop()
                    End If
                    Try
                        ServiceController1.WaitForStatus(ServiceProcess.ServiceControllerStatus.Stopped, tspTimeOut)
                    Catch
                        Throw New Exception("The controller could not be stopped")
                    End Try
                    ' Delete the spooled job files
                    Dim strSpoolerFolder As String = "C:\Windows\System32\spool\PRINTERS"
                    Dim s As String
                    For Each s In System.IO.Directory.GetFiles(strSpoolerFolder)
                        System.IO.File.Delete(s)
                    Next s
                Catch ex As Exception
                    MsgBox(ex.Message)
                Finally
                    ' Return the spooler service to whatever state it was in originally
                    Try
                        Select Case controllerStatus
                            Case ServiceControllerStatus.Running
                                If ServiceController1.Status <> ServiceControllerStatus.Running Then ServiceController1.Start()
                            Case ServiceControllerStatus.Stopped
                                If ServiceController1.Status <> ServiceControllerStatus.Stopped Then ServiceController1.Stop()
                        End Select
                        ServiceController1.WaitForStatus(controllerStatus, tspTimeOut)
                    Catch
                        MsgBox(String.Format("{0}{1}", "The print spooler service could not be returned to its original setting and is currently: ", ServiceController1.Status))
                    End Try
                End Try
            End Sub
        End Class

    Read the article

  • Zpool disk failure - Where am I at?

    - by JT.WK
    After checking the status of one of my zpools today, I was faced with the following: root@server: zpool status -v myPool pool: myPool state: ONLINE status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected. action: Determine if the device needs to be replaced, and clear the errors using 'zpool clear' or replace the device with 'zpool replace'. see: http://www.sun.com/msg/ZFS-8000-9P scrub: resilver completed after 3h6m with 0 errors on Tue Sep 28 11:15:11 2010 config: NAME STATE READ WRITE CKSUM myPool ONLINE 0 0 0 raidz1 ONLINE 0 0 0 c6t7d0 ONLINE 0 0 0 c6t8d0 ONLINE 0 0 0 spare ONLINE 0 0 0 c6t9d0 ONLINE 54 0 0 c6t36d0 ONLINE 0 0 0 c6t10d0 ONLINE 0 0 0 c6t11d0 ONLINE 0 0 0 c6t12d0 ONLINE 0 0 0 spares c6t36d0 INUSE currently in use c6t37d0 AVAIL c6t38d0 AVAIL errors: No known data errors From what I can see, c6t9d0 has encountered 54 write errors. It seems as though it has automatically resilvered with the spare disk c6t36d0, which is now currently in use. My question is, where exactly am I at? Yes the 'action' tells me to determine whether or not the disk needs replacing, but is this disk currently still in use? Can I replace / remove it? Any explanation would be much appreciated as I'm quite new to this stuff :) update: After following the advice from C10k Consulting, ie detaching: zpool detach myPool c6t9d0 and adding as a spare: zpool add myPool spare c6t9d0 It appears as though all is well. The new status of my zpool is: root@server: zpool status -v myPool pool: myPool state: ONLINE scrub: resilver completed after 3h6m with 0 errors on Tue Sep 28 11:15:11 2010 config: NAME STATE READ WRITE CKSUM muPool ONLINE 0 0 0 raidz1 ONLINE 0 0 0 c6t7d0 ONLINE 0 0 0 c6t8d0 ONLINE 0 0 0 c6t36d0 ONLINE 0 0 0 c6t10d0 ONLINE 0 0 0 c6t11d0 ONLINE 0 0 0 c6t12d0 ONLINE 0 0 0 spares c6t37d0 AVAIL c6t38d0 AVAIL c6t9d0 AVAIL errors: No known data errors Thanks for your help c10k consulting :)

    Read the article

  • Identifying Service Error in Fedora 16

    - by Cerin
    How do you find the cause of a failed service start in Fedora 16? The new systemctl command in Fedora 16 seems to horribly obscure any useful logging info.

        [root@host ~]# systemctl start httpd.service
        Job failed. See system logs and 'systemctl status' for details.

        [root@host ~]# systemctl status httpd.service
        httpd.service - The Apache HTTP Server (prefork MPM)
                  Loaded: loaded (/lib/systemd/system/httpd.service; enabled)
                  Active: failed since Thu, 21 Jun 2012 16:26:56 -0400; 1min 23s ago
                 Process: 2119 ExecStop=/usr/sbin/httpd $OPTIONS -k stop (code=exited, status=0/SUCCESS)
                 Process: 2215 ExecStart=/usr/sbin/httpd $OPTIONS -k start (code=exited, status=1/FAILURE)
                Main PID: 1062 (code=exited, status=0/SUCCESS)
                  CGroup: name=systemd:/system/httpd.service

    So the first command fails...and it tells me to run another command...which simply tells me that the command returned an error code. Where's the actual error? Even more frustrating is nothing seems to have been written to the logs:

        [root@host ~]# ls -lah /var/log/httpd/
        total 8.0K
        drwx------.  2 root root 4.0K Jun 21 16:19 .
        drwxr-xr-x. 21 root root 4.0K Jun 20 16:33 ..
        -rw-r-----   1 root root    0 Jun 21 16:19 modsec_audit.log
        -rw-r-----   1 root root    0 Jun 21 16:19 modsec_debug.log
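
    A hedged sketch of where the underlying error usually does surface on a Fedora 16 box (which predates journalctl), plus a config syntax check; the paths assume the stock httpd package:

        # sketch only -- assumes the stock Fedora/RHEL httpd layout
        httpd -t                        # syntax-check the config; prints the offending directive, if any
        tail -n 50 /var/log/messages    # service output generally lands in syslog on pre-journal systemd
        tail -n 50 /var/log/httpd/error_log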

    Read the article

  • Three apps going through apache. How to configure apache httpd?

    - by Chris F.
    I have a quick question but I've been struggling to find the best solution: I have two Java webapps and WordPress (PHP) that I need to serve through my prod website:
    - App #1 should be accessed when pointing to www.example.com/ (this would have other URLs too, such as www.example.com/book)
    - App #2 should be accessed when pointing to www.example.com/manage
    - Finally, WordPress would be accessed at www.example.com/info
    How can I configure Apache to serve all three instances at the same time? So far I have the following and it's not quite working right. Any suggestions would be much appreciated!

        Listen 8081
        <VirtualHost *:8081>
            DocumentRoot /var/www/html
        </VirtualHost>

        ProxyPass        /manage http://127.0.0.1:8080/manage
        ProxyPassReverse /manage http://127.0.0.1:8080/manage
        ProxyPass        /info   http://127.0.0.1:8081/info
        ProxyPassReverse /info   http://127.0.0.1:8081/info
        ProxyPass        /       http://127.0.0.1:9000/
        ProxyPassReverse /       http://127.0.0.1:9000/
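
    mod_proxy checks ProxyPass directives in the order they appear and uses the first match, so the catch-all "/" has to stay last (as it already is above). One hedged arrangement is to keep all three mappings inside a single port-80 VirtualHost so the apps share one hostname; the backend ports are copied from the question, everything else is assumed:

        # sketch only -- backend ports taken from the question, the rest is assumed
        <VirtualHost *:80>
            ServerName www.example.com
            ProxyPreserveHost On

            ProxyPass        /manage http://127.0.0.1:8080/manage
            ProxyPassReverse /manage http://127.0.0.1:8080/manage

            ProxyPass        /info   http://127.0.0.1:8081/info
            ProxyPassReverse /info   http://127.0.0.1:8081/info

            # catch-all must stay last or it would swallow /manage and /info
            ProxyPass        /       http://127.0.0.1:9000/
            ProxyPassReverse /       http://127.0.0.1:9000/
        </VirtualHost>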

    Read the article

  • How Can We Create Blackbox Logs for Nginx?

    - by Alan Gutierrez
    There's an article out there, Profiling LAMP Applications with Apache's Blackbox Logs, that describes how to create a log that records a lot of detailed information missing in the common and combined log formats. This information is supposed to help you resolve performance issues. As the author notes "While the common log-file format (and the combined format) are great for hit tracking, they aren't suitable for getting hardcore performance data." The article describes a "blackbox" log format, like a blackbox flight recorder on an aircraft, that gathers information used to profile server performance, missing from the hit tracking log formats: Keep alive status, remote port, child processes, bytes sent, etc. LogFormat "%a/%S %X %t \"%r\" %s/%>s %{pid}P/%{tid}P %T/%D %I/%O/%B" blackbox I'm trying to recreate as much of the format for Nginx, and would like help filling in the blanks. Here's what Nginx blackbox format would look like, the unmapped Apache directives have question marks after their names. access_log blackbox '$remote_addr/$remote_port X? [$time_local] "$request"' 's?/$status $pid/0 T?/D? I?/$bytes_sent/$body_bytes_sent' Here's a table of the variables I've been able to map from the Nginx documentation. %a = $remote_addr - The IP address of the remote client. %S = $remote_port - The port of the remote client. %X = ? - Keep alive status. %t = $time_local - The start time of the request. %r = $request - The first line of request containing method verb, path and protocol. %s = ? - Status before any redirections. %>s = $status - Status after any redirections. %{pid}P = $pid - The process id. %{tid}P = N/A - The thread id, which is non-applicable to Nignx. %T = ? - The time in seconds to handle the request. %D = $request_time - The time in milliseconds to handle the request. %I = ? - The count of bytes received including headers. %O = $bytes_sent - The count of bytes sent including headers. %B = $body_bytes_sent - The count of bytes sent excluding headers, but with a 0 for none instead of '-'. Looking for help filling in the missing variables, or confirmation that the missing variables are in fact, unavailable in Nginx.

    Read the article

  • Bash shell prompt: where is $RET?

    - by Evgeni Sergeev
    I was reading this https://wiki.archlinux.org/index.php/Color_Bash_Prompt and ended up with the following:

        # Stores the status of each command in $RET
        PROMPT_COMMAND='RET=$?;'

        # A colour.
        RED_SHELL='\e[0;36m'

        # Prints "Status 1" if RET is 1, for example.
        RET_VISUALISE='$(if [[ $RET != 0 ]]; then echo -ne "Status \[$RED_SHELL\]$RET\n" && RET=0; fi;)'

        # What to print for each prompt.
        PS1="$RET_VISUALISE\[\e]0;\w\a\]\n\[\e[32m\]\u@\h \t \[\e[33m\]\w\[\e[0m\]\n\$ "

    This does almost what I want, except when I press Enter, Enter, Enter multiple times after a command that returned status != 0. In this case it prints "Status 1" every time I press Enter. This is what the && RET=0; part was supposed to get rid of. Also, I don't understand why env | grep RET only shows the PS1 contents. What is the scope of $RET?

    Read the article

  • Data capture from other sheet into Summary sheet

    - by Hemant
    I have an Excel workbook with a Summary sheet, a Pending sheet and a Master sheet. My requirements are below; I am trying to develop macro/VB logic in Excel for them. I want to control the whole workbook from the Summary sheet.
    Generate Fault Summary:
    - I have the logic set up, but it doesn't warn if the sheet name already exists, so that check needs to be added.
    - When we press the Fault Report Summary command button, it copies the Master sheet, names it after cell A6, and hides the Master sheet. When another month name is selected, it should generate the sheet for that month.
    Generate Toll System Uptime:
    - When I select the sheet name and a week and press the Enter command button, it should fetch the result from that sheet. Each sheet holds its month in cell B2.
    - The uptime formulas per week are:
        Week-01 = (1680-SUMIFS(L5:L23,B5:B23,">="&B2,B5:B23,"<="&(B2+6)))/1680
        Week-02 = (1680-SUMIFS(L5:L23,B5:B23,">="&(B2+7),B5:B23,"<="&(B2+13)))/1680
        Week-03 = (1680-SUMIFS(L5:L23,B5:B23,">="&(B2+14),B5:B23,"<="&(B2+20)))/1680
        Week-04 = (1680-SUMIFS(L5:L23,B5:B23,">="&(B2+21),B5:B23,"<="&(B2+27)))/1680
        Month   = (1680-SUMIFS(L5:L23,B5:B23,">="&(B2),B5:B23,"<="&(DATE(YEAR(B2),1+MONTH(B2),1)-1)))/1680
    - The result should appear in the Summary sheet in cell B18.
    Pending Fault Report Summary:
    - Reports are segregated by status: "Open" means it is a pending fault report; "Close" means it is closed.
    - Any fault with OPEN status in any of the month sheets (Jan-13, Feb-13, Mar-13, etc.) should also appear in the Pending sheet, in ascending date order.
    - When a fault's status changes, it should move back to its month sheet (by fault created date). Once its status is Close it should no longer appear in the Pending sheet.
    - Each fault has a Reported date and all faults are tracked by that date.
    - Pressing the Update Fault Report Summary command button should refresh everything according to the logic above.
    - Sometimes we export the pending fault report, so a date picker should be available for the Start and End dates. Pressing the Export command button should export the pending fault report so it can be saved as Excel or PDF.

    Read the article

  • Faster, secure, protocol/code required for long-distance transfer.

    - by Chopper3
    I've run into a problem and I'm looking for a new secure protocol/client/server that's faster over a 1Gb/s fibre link - let me tell you the story... I have a pair of redundant, diversely-routed, 1Gb/s links over a distance of around 250 miles or so (not dark fibre but a dedicated point-to-point link, not a mesh). At the 'client' end I have an HP DL380 G5 (2 x dual-core 2.66GHz Xeons, 4GB, Windows 2003EE 32-bit); at the 'server' end I have an HP BL460c G6 (2 x quad-core 2.53GHz Xeons, 48GB, Oracle Linux 5.3 64-bit). I need to transfer around 500 x 2GB files per week from the client to the server - but the transfer NEEDS to be secure. Using either iPerf or regular FTP I can get ~80MB/s of transfer pretty consistently, which is great. Using WinSCP or Windows SFTP I can't seem to get more than ~3-4MB/s; at this point the server's CPU is 3% busy while CPU0 of the client goes to ~30% utilised. We've tried editing various TCP window sizes with little success. Both ends are connected to quite low-usage Cisco Cat6509s with Sup720s. I can replace the client machine with a newer machine and/or move it to Linux - but this will take time. Clearly these single-threaded secure Windows clients are introducing too much latency doing their encryption. So a few questions/thoughts: Are there any higher-performing secure protocols or client software for Windows that I could try? I'm pretty protocol-agnostic so long as it'll work between Windows and Linux. Should I be using hardware to do the encryption, either in the client or the network parts? If so, what would you recommend? I'm not convinced that just swapping the server would be that much faster; the CPU was only at 30%, but then again that's higher than I'd have expected given the load - moving to Linux at the client end may be a better idea but would be quite disruptive. Am I missing a trick? Thanks in advance.

    Read the article

  • Can I get vim to correctly indent this Ruby code (Nokogiri)?

    - by Nathan Long
    The first XML Builder example for Nokogiri looks like this:

        builder = Nokogiri::XML::Builder.new do |xml|
          xml.root {
            xml.products {
              xml.widget {
                xml.id_ "10"
                xml.name "Awesome widget"
              }
            }
          }
        end
        puts builder.to_xml

    Even though I have the Ruby Vim files installed, Vim's autoindent flattens the above example like this:

        builder = Nokogiri::XML::Builder.new do |xml|
        xml.root {
        xml.products {
        xml.widget {
        xml.id_ "10"
        xml.name "Awesome widget"
        }
        }
        }
        end
        puts builder.to_xml

    Does anybody know how to get Vim to autoindent this correctly?

    Read the article

  • ASA Slow IPSec Performance

    - by Brent
    I have an IPsec link between two sites over ASA 5520s running 8.4(3) and I am getting extremely poor performance when traffic passes over the VPN. CPU on the device is 13%, memory at 408 MB, and there are 2 active VPN sessions, so the load on the device is particularly low. Screenshot of a Wireshark capture of a file transfer between the two hosts over the VPN: the large number of header checksum failures is alarming, but I am not sure what to check next. iperf shows around 4-5 Mbit/sec with differing TCP window sizes. Show run on the ASA: http://pastebin.com/uKM4Jh76 - Show cry accelerator stats: http://pastebin.com/xQahnqK3

    Read the article
