Search Results

Search found 5793 results on 232 pages for 'requests'.

  • Conditionally permitting HTTP-only requests to Tomcat?

    - by Mike
    I have 2 versions of a system:

    - Version 1: a Tomcat webserver.
    - Version 2: an Nginx reverse proxy sitting in front of a Tomcat webserver.

    In version 2, nginx only ever talks to Tomcat over HTTP. A user can configure the system so that only HTTPS requests are allowed. If the user does this in version 1, the XML configuration files for Tomcat take care of it; in version 2, nginx takes care of it. The problem is this: I cannot force a user to update their Tomcat XML config files when they upgrade from version 1 to version 2 (it will be recommended that they do so), because the upgrade is done as part of a larger process. This means that if they upgrade and don't update the Tomcat config, an HTTPS request will arrive at nginx, which will proxy it over HTTP to Tomcat, which will then reject the request because it is not HTTPS. So I can't force an update to the Tomcat XML, and I have to use HTTP between nginx and Tomcat. Any ideas? Is there some way I can affect how Tomcat reads its config in version 2 so that it ignores the HTTPS-only section?
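    (An aside, not from the original question: if the Tomcat version involved ships RemoteIpValve, one hedged workaround is to have nginx forward the original scheme and let Tomcat trust it, so the HTTPS-only constraint is satisfied without touching the security section at all. This assumes nginx is configured with "proxy_set_header X-Forwarded-Proto $scheme;"; a server.xml sketch:)

        <!-- sketch: treat proxied requests as HTTPS when nginx says so -->
        <Valve className="org.apache.catalina.valves.RemoteIpValve"
               remoteIpHeader="X-Forwarded-For"
               protocolHeader="X-Forwarded-Proto"
               protocolHeaderHttpsValue="https" />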

  • IIS 7.5 stops serving requests for no apparent reason

    - by Steffen
    We're running a site on 4 virtual Win 2008 R2 64-bit servers. This is the only site on the IIS, and we use Windows Network Load Balancing to share the load between our 4 virtual servers. We've used these virtual servers for approximately a week, and we're starting to see some issues. For no apparent reason, IIS stops serving pages and doesn't even respond with an error. Upon requesting a page from the server, the browser just waits indefinitely (or until it decides to give up client-side). Sometimes an iisreset fixes the issue; other times we have to reboot the entire virtual server. There are no traces in the event log of why this happens, and there is nothing in our application's exception log either. Furthermore, this happens even when there is very little load on the server, so it doesn't seem to be because it's flooded with requests. So frankly I'm at a loss here - I have no idea where to start debugging this issue :-( I'm quite certain we never had these issues on our physical servers; however, they were running Win 2003 32-bit, so there are quite a few differences between them and the virtual ones. (Which obviously makes it difficult to tell what exactly causes this.)
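    (Not part of the original question, but a common first step when an IIS 7.5 worker hangs like this is to list the currently executing requests and see where they are stuck; assuming the default install path:)

        %windir%\system32\inetsrv\appcmd.exe list requests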

  • Change Outlook's default calendar to iCloud (meeting requests end up in wrong calendar)

    - by flohei
    Following scenario: I've got a main computer (Windows 7, Office 2010) which is used to manage contacts, meetings, etc. in Outlook. Now I've added an iPad and an iPhone that sync using iCloud. I moved all appointments and contacts from the old PST file to the iCloud file, and all the data syncs nicely. The email account I'm using in Outlook is an IMAP account, which opens up another data file, bringing us to a total of three data files in Outlook's sidebar. The problem: when one of our clients sends us a meeting request via email, it shows up in the IMAP inbox. When we open it, it automatically gets added to Outlook's default calendar (the one in the original PST). Is there any way to have it added to the iCloud calendar instead? Basically, we could get rid of the original PST completely, since we don't use it at all anymore, but the settings do not allow me to remove this PST file and set the iCloud one as the default. Thanks!

  • 100% CPU when doing 4 or more concurrent requests with Magento

    - by pancake
    Currently I'm having trouble with a server running Magento; it's unbelievably slow. It's a VPS with a few Magento installations on it used for development, so I'm the only one using them. When I make 4 requests, each 2 seconds after the other, I'm finished in 10 seconds. Slow, but still within the limits of my patience. When I make 4 "concurrent" requests, however (opening 4 tabs in a row, very quickly), all four cores go to 100% and stay there for about a minute. How is this possible? I know that there are a lot of possibilities here, so any tips on how to make an Apache/PHP server go faster are also welcome. It used to go a lot faster before, and I've also tried APC, but it kept causing problems (PHP errors, something with memory pools), so I've disabled it. By the way, the Magento cache is off and compiling is also off. I know this makes Magento slower than usual, but I don't think a 60-second response time is normal for any Magento installation.

    Virtual hardware:
    - 4 cores and 4096 MB RAM
    - swap is never used (checked with htop)
    - 100 GB disk space, of which 10% is in use

    Software:
    - Debian 6
    - DirectAdmin and Apache custombuild
    - PHP 5.2.17 (CLI)

    If you need more info, please tell me how to get it, because I probably don't know how. I do know how to use the Linux command line and quite a few commands, but my experience with managing a server is limited.
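    (Not from the original question: this kind of concurrency is easier to reproduce repeatably with ApacheBench than with browser tabs - e.g. 4 concurrent requests against a hypothetical dev URL:)

        ab -n 4 -c 4 http://dev.example.com/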

  • Tunnel network requests with Windows 7

    - by mark
    I have a Windows 7 64-bit Pro client in a private LAN behind a Netgear wgr614v7 router, and a remote Debian server machine outside. I'd like to tunnel all traffic (or that of specified ports/protocols) over this outside server, so that when I'm on the Windows machine and request serverfault.com, the request appears to come not from the wgr614v7's public IP but from the server. It's not only about HTTP traffic; it's about basically everything: other TCP ports, even UDP, etc. It must be transparent to the applications, i.e. they shouldn't be aware of it - all their requests just appear as coming from the server, and the tunnel between them takes care of the packets. I'm aware of e.g. PuTTY and forwarding individual ports or using it as a SOCKS proxy, but not many applications support this, and support in Windows itself looks non-existent to me. I might add it should be something "reasonably" easy to set up. I've heard about PPTP, but I'm unsure about its security implications (by design). Should I go for a VPN? There seem to be two common solutions for Linux (Openswan and strongSwan); why would I pick one over the other? I also fear that setting up a VPN might be quite complex; OTOH maybe it's the only sane way to do things right? Or is OpenVPN sufficient? I'm looking for open (source) solutions; what other options do I have, or which direction should I head in?
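    (An aside, not from the original question: OpenVPN does cover the "everything, transparently" requirement - the client-side piece that routes all traffic through the tunnel is a redirect-gateway directive. A minimal client config sketch; the server setup, certificates, and hostname are assumed placeholders:)

        # client.ovpn - hedged sketch, assumes an OpenVPN server on the Debian box
        client
        dev tun
        proto udp
        remote debian-server.example.net 1194
        redirect-gateway def1   # send all client traffic through the tunnel
        resolv-retry infinite
        nobind
        persist-key
        persist-tun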

  • One single page showing 3 requests (also printing the headers)

    - by Korcholis
    Someone in my studio designed a webpage some years ago, and now the client decided to change the server (he moved to a Linux Apache server running Gen2 SMP, 64 bits, PHP version 5.3.8, standard MySQL version 5). It suddenly started to do weird things. When clicking on a link that requires login, the page redirects you to the login page using PHP's header() function. Curiously, the page shows this:

        OK
        The server encountered an internal error or misconfiguration and was unable to complete your request.
        Please contact the server administrator, [no address given] and inform them of the time the error occurred, and anything you might have done that may have caused the error.
        More information about this error may be available in the server error log.

        HTTP/1.1 200 OK
        Date: Mon, 15 Oct 2012 17:27:32 GMT
        Server: Apache/2.2.22 (Unix) FrontPage/5.0.2.2635
        X-Powered-By: PHP/5.3.8
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Keep-Alive: timeout=5, max=399
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html

        232c

    Then the page itself, and then another header:

        0

        1f4
        OK
        The server encountered an internal error or misconfiguration and was unable to complete your request.
        Please contact the server administrator, [no address given] and inform them of the time the error occurred, and anything you might have done that may have caused the error.
        More information about this error may be available in the server error log.

        0

    What's most intriguing is that if you refresh the page or hit enter on the URL, it loads correctly. I've been checking the logs, and the only complaint there is about a missing favicon. I also checked the .htaccess; everything was correct (RewriteBase was / as intended, and the only other thing there is a rule that rewrites ^en/ requests to request?lang=en). Has anyone faced something like this?

    Edit: IE doesn't trigger these two headers. This is getting weirder.

  • Rails App hangs after few requests

    - by Paddy
    I have the Bitnami Rails stack installed on my Mac. To better explain my problem, I created a simple scaffold-based Rails app with MySQL as the backend. I can perform simple POST and GET requests for a while, and then after a few requests the app just hangs indefinitely. No exception is caught, and there is nothing worthwhile in the development log to explain this strange behavior. This is the last bit from the development log before the app froze:

        Processing WritedatasController#index (for 127.0.0.1 at 2010-03-30 20:38:51) [GET]
          Writedata Load (0.7ms)  SELECT * FROM `writedatas`
        Rendering template within layouts/application
        Rendering writedatas/index
          Writedata Columns (2.9ms)  SHOW FIELDS FROM `writedatas`
        Completed in 99ms (View: 88, DB: 4) | 200 OK [http://localhost/writedatas]
          SQL (0.2ms)  SET NAMES 'utf8'
          SQL (0.1ms)  SET SQL_AUTO_IS_NULL=0
        Processing WritedatasController#new (for 127.0.0.1 at 2010-03-30 20:38:52) [GET]
          Writedata Columns (2.0ms)  SHOW FIELDS FROM `writedatas`
        Rendering template within layouts/application
        Rendering writedatas/new
        Rendered writedatas/_form (5.9ms)
        Completed in 34ms (View: 25, DB: 2) | 200 OK [http://localhost/writedatas/new]
          SQL (0.4ms)  SET NAMES 'utf8'
          SQL (0.1ms)  SET SQL_AUTO_IS_NULL=0
        Processing WritedatasController#index (for 127.0.0.1 at 2010-03-30 20:39:17) [GET]
          Writedata Load (0.7ms)  SELECT * FROM `writedatas`
        Rendering template within layouts/application
        Rendering writedatas/index
          Writedata Columns (2.6ms)  SHOW FIELDS FROM `writedatas`
        Completed in 101ms (View: 90, DB: 4) | 200 OK [http://localhost/writedatas]

    It just hung at this point. And after this happens, I have to restart the server, only for it to hang again after a few requests. This is the weirdest problem I have faced, and I am truly stumped.

  • HttpWebRequest Timeouts After Ten Consecutive Requests

    - by Bob Mc
    I'm writing a web crawler for a specific site. The application is a VB.Net Windows Forms application that does not use multiple threads - each web request is consecutive. However, after ten successful page retrievals, every successive request times out. I have reviewed the similar questions already posted here on SO, and have implemented the recommended techniques in my GetPage routine, shown below:

        Public Function GetPage(ByVal url As String) As String
            Dim result As String = String.Empty
            Dim uri As New Uri(url)
            Dim sp As ServicePoint = ServicePointManager.FindServicePoint(uri)
            sp.ConnectionLimit = 100
            Dim request As HttpWebRequest = WebRequest.Create(uri)
            request.KeepAlive = False
            request.Timeout = 15000
            Try
                Using response As HttpWebResponse = DirectCast(request.GetResponse, HttpWebResponse)
                    Using dataStream As Stream = response.GetResponseStream()
                        Using reader As New StreamReader(dataStream)
                            If response.StatusCode <> HttpStatusCode.OK Then
                                Throw New Exception("Got response status code: " & response.StatusCode.ToString())
                            End If
                            result = reader.ReadToEnd()
                        End Using
                    End Using
                    response.Close()
                End Using
            Catch ex As Exception
                Dim msg As String = "Error reading page """ & url & """. " & ex.Message
                Logger.LogMessage(msg, LogOutputLevel.Diagnostics)
            End Try
            Return result
        End Function

    Have I missed something? Am I not closing or disposing of an object that I should be? It seems strange that it always happens after ten consecutive requests.

    Notes:
    - In the constructor of the class in which this method resides, I have the following: ServicePointManager.DefaultConnectionLimit = 100
    - If I set KeepAlive to true, the timeouts begin after five requests.
    - All the requests are for pages in the same domain.

    EDIT: I added a delay of between two and seven seconds between each web request, so that I do not appear to be "hammering" the site or attempting a DoS attack. However, the problem still occurs.

  • Does mod_php honor HEAD requests properly?

    - by rkulla
    The HTTP/1.1 RFC stipulates: "The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response." I know Apache honors the RFC, but modules don't have to. My question is: does mod_php5 honor this? The reason I ask is that I just came across an article saying that PHP developers should check this themselves with:

        if (stripos($_SERVER['REQUEST_METHOD'], 'HEAD') !== FALSE) {
            exit();
        }

    but seeing as browsers send HEAD requests for cache checking, it seems unlikely to me that no book, docs, etc. would advise PHP developers to do this check if it were really needed. I googled for a second and not much turned up, other than some people saying they tried strange things like mod_rewrite/redirects after getting HEAD requests, and some old bug ticket from around 2002 claiming that mod_php still executed the rest of the script by default. So I just ran a quick test by using PECL::HTTP to run

        http_head('http://mysite.com/test-head-request.php');

    while having

        <?php error_log('REST OF SCRIPT STILL RAN'); ?>

    in test-head-request.php, to see if the rest of the script still executed - and it didn't. I figure that should be enough to settle it, but I want to get more feedback and maybe help clear up the confusion for anyone else who has wondered about this. So if anyone knows off the top of their head (no pun intended) - or has any conventions they use for receiving HEAD requests - that'd be great. Otherwise, I'll grep the C source later and respond in a comment with my findings. Thanks.
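    (For anyone reproducing this without PECL::HTTP - not part of the original post - curl can issue the same request, since its -I flag switches the method to HEAD:)

        curl -I http://mysite.com/test-head-request.php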

  • Servlet requests are executed sequentially for no apparent reason in Glassfish v3

    - by Fabien Benoit
    Hi, I'm using the GlassFish 3 Web profile and can't get the HTTP workers to execute requests on a servlet concurrently. This is how I observed the problem. I made a very simple servlet that writes the current thread name to standard output and sleeps for 10 seconds:

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            System.out.println(Thread.currentThread().getName());
            try {
                Thread.sleep(10000); // 10 sec
            } catch (InterruptedException ex) {
            }
        }

    And when I run several simultaneous requests, I clearly see in the logs that the requests are executed sequentially (one trace every 10 seconds):

        INFO: http-thread-pool-8080-(2)
        (10 seconds later...)
        INFO: http-thread-pool-8080-(1)
        (10 seconds later...)
        INFO: http-thread-pool-8080-(2)
        etc.

    All my GF settings are untouched - it's the out-of-the-box config (the default thread pool is 2 threads min, 5 max, if I recall properly). I really don't understand why the sleep() blocks all the other worker threads. Any insight would be greatly appreciated! Thanks, Fabien

  • Block Google requests to 16k using pf firewall

    - by atmosx
    I'd like to block access to Google search using PF after a threshold of 17500 requests (connections established) in 24h, from a host running FreeBSD 9. What I came up with, after reading the pf-faq, is this rule:

        pass out on $net proto tcp from any to 'www.google.com' port www \
            flags S/SA keep state (max-src-conn 200, max-src-conn-rate 17500/86400)

    NOTE: 86400 is 24h in seconds.

    The rule should work, but PF is smart enough to know that www.google.com resolves to 5 different IPs. So my pfctl -sr output gives me this:

        pass out on vte0 inet proto tcp from any to 173.194.44.81 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.82 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.83 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.80 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.84 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)

    PF creates 5 different rules, one for each IP that Google resolves to. However, I have the sense - without being 100% sure, I haven't had the chance to test it - that the limit of 17500/86400 applies to each IP separately. If that's the case - please confirm - then it's not what I want. In the pf-faq there's another option called source-track global:

        source-track
            This option enables the tracking of number of states created per
            source IP address. This option has two formats:

            source-track rule - The maximum number of states created by this
                rule is limited by the rule's max-src-nodes and max-src-states
                options. Only state entries created by this particular rule
                count toward the rule's limits.

            source-track global - The number of states created by all rules
                that use this option is limited. Each rule can specify
                different max-src-nodes and max-src-states options, however
                state entries created by any participating rule count towards
                each individual rule's limits.

            The total number of source IP addresses tracked globally can be
            controlled via the src-nodes runtime option.

    I tried to apply source-track global in the above rule without success. How can I use this option in order to achieve my goal? Any thoughts or comments are more than welcome, since I'm an amateur and don't fully understand PF yet. Thanks.
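    (A hedged, untested sketch: as I read pf.conf(5), source-track global is written inside the state options, so the rule would look like the following - whether the rate limit is then actually shared across the five expanded rules is exactly the open question above:)

        pass out on $net proto tcp from any to 'www.google.com' port www \
            flags S/SA keep state (source-track global, max-src-conn 200, \
            max-src-conn-rate 17500/86400)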

  • All internet requests in Windows time out

    - by Brandon
    So, I've run into a very strange problem with my home wireless network. Previously, at seemingly random times, the router seemed to disconnect all wireless hosts and cause all of the wired hosts to have a "limited connection" according to Windows. In order to fix this, I had to unplug all of the wired hosts from the router, unplug the modem from the router, and power cycle the router. This seemed to solve the problem for a while, until the exact same thing happened a day later and I had to go through the same process again. That's when I noticed something weird happening. There was one wireless host (a Windows Vista laptop) that seemed to be causing the router to disconnect the other hosts whenever it connected. When this happened, only that laptop was able to use the wireless from the router. I disconnected it from the wireless (by disabling the wireless adapter), then reconnected it (by re-enabling it), and now it, like the other hosts, couldn't connect. I've never really seen anything this strange happen on our network before. So I restored the router to factory settings, and the problem seems to have vanished - except for one crucial problem. There's another host (a Windows 7 laptop) that was perfectly able to connect before all of the router issues, and even in between the crashing and power-cycling events, but now says it's connected and able to reach the Internet, yet all requests time out. In any browser I've tried, the tab says "connecting to [site]..." for a solid minute and then tells me the request timed out. When I try to ping google.com in cmd, it also says the request timed out. In frustration, I booted into a dual-boot Ubuntu installation on the Windows 7 host, and the connection works fine, to my surprise; Ubuntu is where I am now typing this rather long question. I haven't looked through the event log in Windows but will post anything I find in an edit. I haven't tried connecting (in Windows 7) to any other wireless network. The fact that it works in Ubuntu suggests it's Windows and not the router, but I didn't change any wireless settings in Windows between it being able to reach the Internet and not. Does anyone have any clue what could have happened? I'm open to buying another router, as this one is almost a year old :) but I would like to know what's going on here. Thanks in advance! P.S. Sorry for how long my question is, I'm a little anxious (:
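    (An aside, not from the original question: the standard first remedies when Windows claims a connection but every request times out are a Winsock and TCP/IP stack reset from an elevated command prompt, followed by a reboot:)

        netsh winsock reset
        netsh int ip reset
        ipconfig /flushdns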

  • Apache doesn't run multiple requests

    - by Reinderien
    I'm currently running this simple Python CGI script to test rudimentary IPC:

        #!/usr/bin/python -u
        import cgi, errno, fcntl, os, os.path, sys, time

        print("""Content-Type: text/html; charset=utf-8

        <!doctype html>
        <html lang="en">
        <head>
            <meta charset="utf-8" />
            <title>IPC test</title>
        </head>
        <body>
        """)

        ftempname = '/tmp/ipc-messages'
        master = not os.path.exists(ftempname)
        if master:
            fmode = 'w'
        else:
            fmode = 'r'

        print('<p>Opening file</p>')
        sys.stdout.flush()
        ftemp = open(ftempname, fmode)
        print('<p>File opened</p>')

        if master:
            print('<p>Operating as master</p>')
            sys.stdout.flush()
            for i in range(10):
                print('<p>' + str(i) + '</p>')
                sys.stdout.flush()
                time.sleep(1)
            ftemp.close()
            os.remove(ftempname)
        else:
            print('<p>Operating as a slave</p>')
            ftemp.close()

        print("""
        </body>
        </html>""")

    The 'server-push' portion works; that is, for the first request, I do see piecewise updates. However, while the first request is being serviced, subsequent requests are not started; they only start after the first request has finished. Any ideas on why, and how to fix it?

    Edit: I see the same non-concurrent behaviour with vanilla PHP, running this:

        <!doctype html>
        <html lang="en">
        <!-- $Id: $-->
        <head>
            <meta charset="utf-8" />
            <title>IPC test</title>
        </head>
        <body>
        <p>
        <?php
        function echofl($str)
        {
            echo $str . "</b>\n";
            ob_flush();
            flush();
        }

        define('tempfn', '/tmp/emailsync');

        if (file_exists(tempfn))
            $perms = 'r+';
        else
            $perms = 'w';

        assert($fsync = fopen(tempfn, $perms));
        assert(chmod(tempfn, 0600));

        if (!flock($fsync, LOCK_EX | LOCK_NB, $wouldblock)) {
            assert($wouldblock);
            $master = false;
        }
        else
            $master = true;

        if ($master) {
            echofl('Running as master.');
            assert(fwrite($fsync, 'content') != false);
            assert(sleep(5) == 0);
            assert(flock($fsync, LOCK_UN));
        }
        else {
            echofl('Running as slave.');
            echofl(fgets($fsync));
        }

        assert(fclose($fsync));
        echofl('Done.');
        ?>
        </p>
        </body>
        </html>

  • Incorrect gzipping of http requests, can't find who's doing it

    - by Ned Batchelder
    We're seeing some very strange mangling of HTTP responses, and we can't figure out what is doing it. We have an app server handling JSON requests. Occasionally, the response is returned gzipped, but with incorrect headers that prevent the browser from interpreting it correctly. The problem is intermittent, and its behavior changes over time. Yesterday morning it seemed to fail 50% of the time and, in fact, seemed tied to one of our two load-balanced servers. Later in the afternoon, it was failing only 20 times out of 1000 and didn't correlate with either app server. The two app servers are running Apache 2.2 with mod_wsgi and a Django app stack. They have identical Apache configs and source trees, and even identical packages installed on Red Hat. There's a hardware load balancer in front; I don't know the make or model. Akamai is also part of the food chain, though we removed Akamai and still had the problem. Here's a good request and response:

        * Connected to example.com (97.7.79.129) port 80 (#0)
        > POST /claim/ HTTP/1.1
        > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
        > Host: example.com
        > Accept: */*
        > Referer: http://example.com/apps/
        > Accept-Encoding: gzip,deflate
        > Content-Length: 29
        > Content-Type: application/x-www-form-urlencoded
        >
        } [data not shown]
        < HTTP/1.1 200 OK
        < Server: Apache/2
        < Content-Language: en-us
        < Content-Encoding: identity
        < Content-Length: 47
        < Content-Type: application/x-javascript
        < Connection: keep-alive
        < Vary: Accept-Encoding
        <
        { [data not shown]
        * Connection #0 to host example.com left intact
        * Closing connection #0
        {"msg": "", "status": "OK", "printer_name": ""}

    And here's a bad one:

        * Connected to example.com (97.7.79.129) port 80 (#0)
        > POST /claim/ HTTP/1.1
        > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
        > Host: example.com
        > Accept: */*
        > Referer: http://example.com/apps/
        > Accept-Encoding: gzip,deflate
        > Content-Length: 29
        > Content-Type: application/x-www-form-urlencoded
        >
        } [data not shown]
        < HTTP/1.1 200 OK
        < Server: Apache/2
        < Content-Language: en-us
        < Content-Encoding: identity
        < Content-Type: application/x-javascript
        < Content-Encoding: gzip
        < Content-Length: 59
        < Connection: keep-alive
        < Vary: Accept-Encoding
        < X-N: S
        <
        { [data not shown]
        * Connection #0 to host example.com left intact
        * Closing connection #0
        ?V?-NW?RPR?QP*.I,)-???A??????????T??Z? ??/

    There are two things to notice about the bad response:
    - It has two Content-Encoding headers, and the browsers seem to use the first. So they see an identity encoding header and gzipped content, and can't interpret the response.
    - It has an extra "X-N: S" header. Perhaps if I could find out what intermediary adds "X-N: S" headers to responses, I could track down the culprit...

  • Excessive denied requests for port 58322 in syslog

    - by Nathan C.
    My iptables is set up to block all unneeded ports, as it should be, but while checking my syslog for clues about these random but all-too-frequent apache2 crashes, I noticed a lot of requests like the ones below. They appear in all my archived syslogs, from different IP addresses. There is a similar question with an accepted answer here: What service uses UDP port 60059?

        Jun 4 06:49:27 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=218.7.74.50 DST=MY.SERVER.IP.HERE LEN=129 TOS=0x00 PREC=0x00 TTL=115 ID=27636 PROTO=UDP SPT=9520 DPT=58322 LEN=109
        Jun 4 06:49:31 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=95.160.226.177 DST=MY.SERVER.IP.HERE LEN=131 TOS=0x00 PREC=0x00 TTL=116 ID=31468 PROTO=UDP SPT=47642 DPT=58322 LEN=111
        Jun 4 06:49:54 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=78.137.36.10 DST=MY.SERVER.IP.HERE LEN=131 TOS=0x00 PREC=0x00 TTL=118 ID=21872 PROTO=UDP SPT=57872 DPT=58322 LEN=111
        Jun 4 06:50:14 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=111.253.217.11 DST=MY.SERVER.IP.HERE LEN=131 TOS=0x00 PREC=0x00 TTL=116 ID=28882 PROTO=UDP SPT=51826 DPT=58322 LEN=111
        Jun 4 06:51:02 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=189.45.114.173 DST=MY.SERVER.IP.HERE LEN=131 TOS=0x16 PREC=0x00 TTL=113 ID=19985 PROTO=UDP SPT=41087 DPT=58322 LEN=111
        Jun 4 06:51:09 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=87.89.202.28 DST=MY.SERVER.IP.HERE LEN=131 TOS=0x00 PREC=0x00 TTL=116 ID=7874 PROTO=UDP SPT=17524 DPT=58322 LEN=111
        Jun 4 06:51:20 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=24.44.124.35 DST=MY.SERVER.IP.HERE LEN=131 TOS=0x00 PREC=0x00 TTL=118 ID=12978 PROTO=UDP SPT=45596 DPT=58322 LEN=111
        Jun 4 06:51:22 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=81.174.48.236 DST=MY.SERVER.IP.HERE LEN=93 TOS=0x00 PREC=0x00 TTL=48 ID=0 DF PROTO=UDP SPT=21352 DPT=58322 LEN=73
        Jun 4 06:51:23 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=124.107.61.84 DST=MY.SERVER.IP.HERE LEN=131 TOS=0x00 PREC=0x00 TTL=114 ID=13038 PROTO=UDP SPT=14357 DPT=58322 LEN=111
        Jun 4 06:51:30 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=88.8.23.200 DST=MY.SERVER.IP.HERE LEN=123 TOS=0x00 PREC=0x00 TTL=117 ID=21062 PROTO=UDP SPT=4291 DPT=58322 LEN=103
        Jun 4 06:51:54 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=80.202.244.234 DST=MY.SERVER.IP.HERE LEN=129 TOS=0x00 PREC=0x00 TTL=114 ID=339 PROTO=UDP SPT=14020 DPT=58322 LEN=109

    I'm not overly experienced with server configuration and debugging, so I only just installed logcheck after reading that question. I guess my question is what steps I should take after reading this log info to 1) further protect myself, 2) understand whether this could be causing any other problems with my VPS, and 3) use this data to help others?

  • BIND no longer responds to AXFR Requests

    - by djsumdog
    Recently we moved our primary external DNS server. It has three caching DNS slaves in front of it, provided by our ISP. They've told us they've started getting access-denied responses when doing zone transfers (AXFR). If I add my own IPs to the allow-transfer list, I also get "transfer failed" when using dig with the AXFR argument. Here is what my BIND configuration looks like:

        options {
            directory "/var/lib/named";
            dump-file "/var/log/named_dump.db";
            zone-statistics yes;
            statistics-file "/var/log/named.stats";
            listen-on-v6 { any; };
            notify-source 10.19.0.68 port 53;
            querylog yes;
            notify yes;
            allow-transfer {
                127.0.0.1;  // localhost
                1.1.1.1;    // public dns slave 1
                2.2.2.2;    // public dns slave 2
                3.3.3.3;    // public dns slave 3
            };
            also-notify {
                1.1.1.1;    // public dns slave 1
                2.2.2.2;    // public dns slave 2
                3.3.3.3;    // public dns slave 3
            };
            include "/etc/named.d/forwarders.conf";
        };

        logging {
            channel simple_log {
                file "/var/log/bind.log" versions 10 size 3m;
                severity info;
                print-time yes;
                print-severity yes;
                print-category yes;
            };
            category default { simple_log; };
            channel log_zone_transfers {
                file "/var/log/axfr.log" versions 10 size 3m;
                print-time yes;
                print-category yes;
                print-severity yes;
            };
            category xfer-out { log_zone_transfers; };
            channel log_notify {
                file "/var/log/notify.log" versions 10 size 3m;
                print-time yes;
                print-category yes;
                print-severity yes;
            };
            category notify { log_notify; };
            channel queries {
                file "/var/log/queries.log" versions 10 size 30m;
                print-time yes;
                severity info;
                print-category yes;
                print-severity yes;
            };
            category queries { queries; };
        };

        zone "." in {
            type hint;
            file "root.hint";
        };

        zone "localhost" in {
            type master;
            file "localhost.zone";
        };

        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "127.0.0.zone";
        };

        include "/etc/named.conf.include";

        zone "example.net " {
            type master;
            file "/var/lib/named/master/example.net.hosts";
        };

        zone "example.com " {
            type master;
            file "/var/lib/named/master/example.com.hosts";
        };

        ## -- other master files --

    And the errors in the xfer log look like the following:

        29-Oct-2012 14:20:02.806 xfer-out: info: client 1.1.1.1#59069: bad zone transfer request: 'example.com./IN': non-authoritative zone (NOTAUTH)

    I've tried adding allow-transfer parameters directly on the zone files and still get failed transfers. Any idea what I'm doing wrong?
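    (For reference - not in the original post - the transfer can be exercised against the master with dig from one of the permitted slave addresses; the server name below is a placeholder:)

        dig @master-dns.example.net example.com AXFR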

  • Good bug tracking with Sharepoint?

    - by torbengb
    At my place of work, it has been decided to move many processes to Sharepoint. I'm now looking into how Sharepoint can be used for bug tracking (à la Mantis, FogBugz, etc., but within Sharepoint). Specifically, we're using a collaboration room and the solution must work inside that. I know that I can create lists using an "Issue tracker" template, but it lacks workflow, integrated correspondence (like FogBugz), and an audit log (any user can edit any field at any time, without it being noted anywhere). That's not sufficient, so I am looking for "bigger" solutions but haven't yet found anything at all. This question is similar but aims at helpdesk use; we aim at bug tracking and change requests to a system. I'm open to suggestions! As I'm not an administrator, I can't just grab a Sharepoint component and install it for testing. I'm looking for experiences, documentation, white papers, screenshots -- the actual downloadables will be relevant later. Ideally, some of these matters should be covered:

    - Support for different ticket types (bug, feature, inquiry, internal task).
    - Configurable workflow per ticket type, no fixed number of steps.
    - Configurable read/write permissions per field and per workflow status.
    - Configurable dashboard for managers with nice charts.
    - Configurable email notifications.
    - Correspondence à la FogBugz. (Challenge: we use Notes, not Exchange.)

  • ISA 2006 refuses VPN DHCP requests as spoofing

    - by Daniel
    I'm running ISA 2006 with a PPTP VPN for my AD-controlled network. DHCP is located on the ISA server itself, and authentication is done by RADIUS (NPS) located on the DC. Right now my VPN clients can connect, access local DNS, and ping ISA, the DC, and other clients. Here's where it gets weird. I noticed that despite all this, ipconfig shows the following:

        PPP adapter North Horizon VPN:

           Connection-specific DNS Suffix . :
           Description . . . . . . . . . . . : North Horizon VPN
           Physical Address. . . . . . . . . :
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
           IPv4 Address. . . . . . . . . . . : 10.42.4.7(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.255
           Default Gateway . . . . . . . . . : 0.0.0.0
           DNS Servers . . . . . . . . . . . : 10.42.1.10
           NetBIOS over Tcpip. . . . . . . . : Enabled

    So I went over and checked my ISA logs for both DHCP requests and replies, only to find out that my VPN clients are being denied because ISA thinks it's spoofing. Here's some relevant information from the log (the VPN subnet is 10.42.4.0/24):

        Client IP: 10.42.4.6
        Destination: 255.255.255.255:67
        Client Username: (blank)
        Protocol: DHCP (request)
        Action: Denied
        Connection Rule: (blank)
        Source Network: VPN Clients
        Destination Network: Local Host
        Result Code: 0xc0040014 FWX_E_FWE_SPOOFING_PACKET_DROPPED
        Network Interface: 10.42.4.11
        ---------------------------------------------------------
        Original Client IP: 10.42.4.6
        Destination: 10.42.1.1
        Client Username: (valid user)
        Protocol: PING
        Action: Initiated
        Connection Rule: Allow PING to ISA
        Source Network: VPN Clients
        Destination Network: Local Host
        Result Code: 0x0 ERROR_SUCCESS
        Network Interface: (blank)

    I wasn't sure what this 10.42.4.11 network interface was - it certainly wasn't something I had set up - until I saw it in Routing and Remote Access under IP Routing > General as an interface called "Internal", bound to the same IP address. I also noticed that since ISA takes blocks of 10 IP addresses from DHCP for the VPN, it had reserved 10.42.4.2-11. I'm not sure if that means anything, though. Thanks for your help.

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous, unrelated request has finished. The following code is broken in this way only on Nginx; it runs perfectly on Apache. This call starts a background process and waits for it to complete so it can display the final result:

        $.ajax({
            type: 'GET',
            async: true,
            url: $(this).data('route'),
            data: $('input[name=data]').val(),
            dataType: 'json',
            success: function (data) { /* do stuff */ },
            error: function (data) { /* handle errors */ }
        });

    The below is called after the above; on Apache it requires 100ms to execute and repeats itself, showing progress for the data being written in the background:

        checkStatusInterval = setInterval(function () {
            $.ajax({
                type: 'GET',
                async: false,
                cache: false,
                url: '/process-status?process=' + currentElement.attr('id'),
                dataType: 'json',
                success: function (data) { /* update progress bar and status message */ }
            });
        }, 1000);

    Unfortunately, when this script is run on nginx, the progress polling never finishes even a single request until the first AJAX request that sent the data is done. If I change async to true in the above, one executes every interval, but none of them complete until that very first AJAX request finishes.

    Here is the main nginx conf file:

        #user nobody;
        worker_processes 1;

        #error_log logs/error.log;
        #error_log logs/error.log notice;
        #error_log logs/error.log info;

        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            server_names_hash_bucket_size 64;

            # configure temporary paths
            # nginx is started with param -p, setting nginx path to serverpack installdir
            fastcgi_temp_path temp/fastcgi;
            uwsgi_temp_path temp/uwsgi;
            scgi_temp_path temp/scgi;
            client_body_temp_path temp/client-body 1 2;
            proxy_temp_path temp/proxy;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            #access_log logs/access.log main;

            # Sendfile copies data between one FD and other from within the kernel.
            # More efficient than read() + write(), since that requires transferring data to and from the user space.
            sendfile on;

            # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet,
            # instead of using partial frames. This is useful for prepending headers before calling sendfile,
            # or for throughput optimization.
            tcp_nopush on;

            # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time.
            tcp_nodelay on;

            types_hash_max_size 2048;

            # Timeout for keep-alive connections. Server will close connections after this time.
            keepalive_timeout 90;

            # Number of requests a client can make over the keep-alive connection. This is set high for testing.
            keepalive_requests 100000;

            # allow the server to close the connection after a client stops responding. Frees up socket-associated memory.
            reset_timedout_connection on;

            # send the client a "request timed out" if the body is not loaded by this time. Default 60.
            client_header_timeout 20;
            client_body_timeout 60;

            # If the client stops reading data, free up the stale client connection after this much time. Default 60.
            send_timeout 60;

            # Size Limits
            client_body_buffer_size 64k;
            client_header_buffer_size 4k;
            client_max_body_size 8M;

            # FastCGI
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout 120;
            fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value
            fastcgi_buffer_size 64k;
            fastcgi_buffers 4 64k;
            fastcgi_busy_buffers_size 128k;
            fastcgi_temp_file_write_size 128k;

            # Caches information about open FDs, frequently accessed files.
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # Turn on gzip output compression to save bandwidth.
            # http://wiki.nginx.org/HttpGzipModule
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            gzip_http_version 1.1;
            gzip_vary on;
            gzip_proxied any;
            #gzip_proxied expired no-cache no-store private auth;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

            # show all files and folders
            autoindex on;

            server {
                # access from localhost only
                listen 127.0.0.1:80;
                server_name localhost;
                root www;

                # the following default "catch-all" configuration, allows access to the server from outside.
                # please ensure your firewall allows access to tcp/port 80. check your "skype" config.
                # listen 80;
                # server_name _;

                log_not_found off;
                charset utf-8;
                access_log logs/access.log main;

                # handle files in the root path /www
                location / {
                    index index.php index.html index.htm;
                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html
                #
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root www;
                }

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
                #
                location ~ \.php$ {
                    try_files $uri =404;
                    fastcgi_pass 127.0.0.1:9100;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }

                # add expire headers
                location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ {
                    expires 30d;
                }

                # deny access to .htaccess files (if Apache's document root concurs with nginx's one)
                # deny access to git & svn repositories
                location ~ /(\.ht|\.git|\.svn) {
                    deny all;
                }
            }

            # include config files of "enabled" domains
            include domains-enabled/*.conf;
        }

    Here is the enabled domain conf file:

        access_log off;
        access_log C:/server/www/test.dev/logs/access.log;
        error_log C:/server/www/test.dev/logs/error.log;

        # HTTP Server
        server {
            listen 127.0.0.1:80;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;

            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
        }

        # HTTPS server
        server {
            listen 443 ssl;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;

            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
            include domains-common/ssl.conf;
        }

    Contents of ssl.conf:

        # OpenSSL for HTTPS connections.
        ssl on;
        ssl_certificate C:/server/bin/openssl/certs/cert.pem;
        ssl_certificate_key C:/server/bin/openssl/certs/cert.key;
        ssl_session_timeout 5m;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_param HTTPS on;
            fastcgi_pass 127.0.0.1:9100;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Contents of location.conf:

        # Remove trailing slash to please Laravel routing system.
        if (!-d $request_filename) {
            rewrite ^/(.+)/$ /$1 permanent;
        }

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # We don't need .ht files with nginx.
        location ~ /(\.ht|\.git|\.svn) {
            deny all;
        }

        # Added cache headers for images.
        location ~* \.(png|jpg|jpeg|gif)$ {
            expires 30d;
            log_not_found off;
        }

        # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks.
        location ~* \.(js|css)$ {
            expires 3h;
            log_not_found off;
        }

        # Add expire headers.
        location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ {
            expires 30d;
        }

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri /index.php =404;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9100;
        }

    Any ideas where this is going wrong?

  • NSMutableURLRequest returns null on real device, while returning image on simulator

    - by Yanchi
    I was testing my app, which I've been working on for the past 2 months. Basically, it requests JSON that contains info about items. One field of the JSON is image_url. When I want to display this image, I need to download it from another server that needs additional credentials. So it goes like this - in my cellForRowAtIndexPath I'm doing:

        NSDictionary *aucdict = [jsonAukResults objectAtIndex:indexPath.row];
        NSURL *imageURL = [NSURL URLWithString:[aucdict objectForKey:@"img_url"]];
        NSString *authPString = [[[NSString stringWithFormat:@"login:password"]
            dataUsingEncoding:NSUTF8StringEncoding] base64EncodedString];
        NSString *verifPString = [NSString stringWithFormat:@"Image %@", authPString];
        NSMutableURLRequest *Prequest = [[NSMutableURLRequest alloc] initWithURL:imageURL];
        [Prequest setValue:verifPString forHTTPHeaderField:@"Authorization"];
        NSError *error = nil;
        NSURLResponse *resp = nil;
        NSData *picresult = [NSURLConnection sendSynchronousRequest:Prequest
                                                  returningResponse:&resp
                                                              error:&error];
        UIImage *imageLoad = [[UIImage alloc] initWithData:picresult];

    Now, I just obscured the credentials (they are not actually login:password :)). My problem is that right now I get 3 items. All 3 have images on the same server. I can get two of them with this code, no problem. However, the third one is problematic: I always get a (null) imageLoad. On my simulator, everything works fine; I get all 3 pictures. On a real device I get an error. I tried NSURLConnection with error and response so I could debug better. This is what I got in my error:

        Printing description of error:
        Error Domain=NSURLErrorDomain Code=-1202 "The certificate for this server is invalid.
        You might be connecting to a server that is pretending to be “server name” which could
        put your confidential information at risk." UserInfo=0x1e5a3080
        {NSErrorFailingURLStringKey=pictureLink.jpg,
        NSLocalizedRecoverySuggestion=Would you like to connect to the server anyway?,
        NSErrorFailingURLKey=pictureLink.jpg,
        NSLocalizedDescription=The certificate for this server is invalid. You might be
        connecting to a server that is pretending to be “server name” which could put your
        confidential information at risk.,
        NSUnderlyingError=0x1e5a30e0 "The certificate for this server is invalid. You might be
        connecting to a server that is pretending to be “server name” which could put your
        confidential information at risk.",
        NSURLErrorFailingURLPeerTrustErrorKey=}

    I don't use SSL, so I'm really confused as to what could cause this error. Btw, everything worked fine until now (this is my initial screen, so it's been done for a good month and a half). Now I started to do the graphics and this problem popped up :(

  • Logging ASMX Requests and Responses from Client

    - by John
    Hi, I've got a C# web application whose code I can't easily update. However, I can make configuration changes to the application. The application calls out to a third-party ASMX web service, and I really need (if at all possible) to log the full XML requests and responses. I have no control over the web service, so I have to do it from the client. I'm not using WCF - these are standard ASMX web service calls. Is there any way I can log the XML requests and responses from the client web app without having to redeploy the code? Thanks in advance, John
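    (Not from the original question, but since configuration-only changes are allowed: .NET's built-in System.Net tracing can capture the client's outgoing HTTP traffic, including payload data at the Verbose level, with nothing but a web.config change. A hedged sketch - the log path is a placeholder and the app pool account needs write access to it:)

        <system.diagnostics>
          <trace autoflush="true" />
          <sources>
            <source name="System.Net">
              <listeners>
                <add name="weblog" />
              </listeners>
            </source>
          </sources>
          <switches>
            <add name="System.Net" value="Verbose" />
          </switches>
          <sharedListeners>
            <add name="weblog"
                 type="System.Diagnostics.TextWriterTraceListener"
                 initializeData="C:\logs\System.Net.trace.log" />
          </sharedListeners>
        </system.diagnostics>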

  • Log axis2 client requests and responses

    - by Manuel Darveau
    I would like to log all requests/responses made by an Axis2 client. I tried to create a file called client-config.wsdd as described in http://code.google.com/support/bin/answer.py?hl=en&answer=15137, but without success (I don't get a log file). Requests are made over HTTPS and I am not sure if that matters. I tried

        <transport name="http" pivot="java:org.apache.axis.transport.http.HTTPSender"/>

    and

        <transport name="https" pivot="java:org.apache.axis.transport.http.HTTPSender"/>

    without success.
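    (An aside, not from the original post: Axis2's HTTP transport of that era is built on Commons HttpClient, whose wire-level logging can usually be switched on purely via log4j configuration - worth trying before debugging the wsdd route. E.g. in log4j.properties:)

        log4j.logger.httpclient.wire=DEBUG
        log4j.logger.org.apache.commons.httpclient=DEBUG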

  • Decrease DB requests number from Django templates

    - by Andrew
    I publish discount offers for my city. Offer models are passed to the template (~15 offers per page). Every offer has lots of items (every item has an FK to its offer), thus I end up making a huge number of DB requests from the template:

        {% for item in offer.1 %}
            {{ item.descr }}
            {{ item.start_date }}
            {{ item.price|floatformat }}
            {% if not item.tax_included %}{% trans "Without taxes" %}{% endif %}
            <a href="{{ item.offer.wwwlink }}">{% trans "Buy now!" %}</a>
            </div>
            <div class="clear"></div>
        {% endfor %}

    So there are ~200-400 DB requests per page, which is abnormal, I expect. In Django view code it is possible to use select_related to prepopulate needed values; how can I decrease the number of requests made from the template?
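    (A hedged sketch, not from the original post: the usual fix is to batch the per-offer item queries in the view rather than the template. On newer Django versions, prefetch_related follows the reverse FK in a single extra query; the model names and the default "item_set" accessor below are assumptions:)

        # views.py sketch - assumes models Offer and Item with an Item.offer FK
        # (default reverse accessor "item_set"): one query for the offers,
        # one query for all their items
        offers = Offer.objects.prefetch_related('item_set')[:15]

    (Also, {{ item.offer.wwwlink }} walks back to the offer once per item; referring to the enclosing offer from the outer loop instead avoids those lookups entirely.)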

  • Capturing SOAP requests to an ASP.NET ASMX web service

    - by pcampbell
    Consider the requirement to log incoming SOAP requests to an ASP.NET ASMX web service. The task is to capture the raw XML being sent to the web service. The incoming message needs to be logged for debug inspection. The application already has its own logging library in use, so the ideal usage would be something like this:

        // string or XML, it doesn't matter.
        string incomingSoapRequest = GetSoapRequest();
        Logger.LogMessage(incomingSoapRequest);

    Are there any easy solutions to capture the raw XML of the incoming SOAP requests? Which events would you handle to get access to this object and the relevant properties?
