Search Results

Search found 5793 results on 232 pages for 'requests'.

Page 2/232 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • HTTP requests and Apache modules: Creative attack vectors

    - by pinkgothic
    Slightly unorthodox question here: I'm currently trying to break an Apache with a handful of custom modules. What spawned the testing is that Apache internally forwards requests that it considers too large (e.g. 1 MB trash) to modules hooked in appropriately, forcing them to deal with the garbage data - and lack of handling in the custom modules caused Apache in its entirety to go up in flames. Ouch, ouch, ouch. That particular issue was fortunately fixed, but the question's arisen whether or not there may be other similar vulnerabilities. Right now I have a tool at my disposal that lets me send a raw HTTP request to the server (or rather, raw data through an established TCP connection that could be interpreted as an HTTP request if it followed the form of one, e.g. "GET ...") and I'm trying to come up with other ideas. (TCP-level attacks like Slowloris and Nkiller2 are not my focus at the moment.) Does anyone have a few nice ideas how to confuse the server and/or its modules to the point of self-immolation? Broken UTF-8? (Though I doubt Apache cares about encoding - I imagine it just juggles raw bytes.) Stuff that is only barely too long, followed by a 0-byte, followed by junk? et cetera I don't consider myself a very good tester (I'm doing this by necessity and lack of manpower; I unfortunately don't even have a more than basic grasp of Apache internals that would help me along), which is why I'm hoping for an insightful response or two or three. Maybe some of you have done some similar testing for your own projects? (If stackoverflow is not the right place for this question, I apologise. Not sure where else to put it.)

    Read the article

  • Apache: Limit the Number of Requests/Traffic per IP?

    - by Ian Kern
    I would like to allow one IP to use only up to, say, 1GB of traffic per day, and if that limit is exceeded, all requests from that IP are then dropped until the next day. However, a simpler solution where the connection is dropped after a certain number of requests would suffice. Is there already some sort of module that can do this? Or perhaps I can achieve this through something like iptables? Thanks
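
    One approach (not from the original question, just a hedged sketch): the simpler "drop after too many requests/connections" variant can be handled in front of Apache with iptables. The connlimit and hashlimit matches below are standard netfilter modules; the port, thresholds and rate are illustrative assumptions, and a true per-IP daily byte quota (the 1GB case) would still need separate accounting, e.g. a bandwidth-limiting Apache module or a log-driven script.

      # Cap concurrent connections from any single client IP (here: 20 on port 80).
      iptables -A INPUT -p tcp --syn --dport 80 -m connlimit \
          --connlimit-above 20 --connlimit-mask 32 -j REJECT --reject-with tcp-reset

      # Drop new connections from IPs exceeding roughly 200 new connections/hour (burst 50).
      iptables -A INPUT -p tcp --syn --dport 80 -m hashlimit \
          --hashlimit-name http_per_ip --hashlimit-mode srcip \
          --hashlimit-above 200/hour --hashlimit-burst 50 -j DROP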

    Read the article

  • Amazon Web Services Free Trial: query about get and put requests

    - by abel
    Amazon recently introduced a free tier for its cloud offering. I signed up for AWS and while signing up for the free tier of S3, I found this: As part of AWS Free Usage Tier, you can get started with Amazon S3 for free. Upon sign-up, new AWS customers receive 5 GB of Amazon S3 storage, 20,000 Get Requests, 2,000 Put Requests, 15GB of bandwidth in and 15GB of bandwidth out each month for one year. (source: aws.amazon.com, emphasis mine). 20,000 GET requests & 2,000 PUTs mean 20,000 page views (max) and 2,000 file uploads per month. Isn't that lower than what App Engine offers, 43,200,000 requests per day? Am I missing something? Please help.

    Read the article

  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
    Over the last few weeks I’ve been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is to use its built-in capture tool which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web Browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence’s awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy to use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content! For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request – URL, headers and any content posted by the client. The result of what I ended up creating is this semi-generic capture form: In this post I’m going to demonstrate how easy it is to use FiddlerCore to build this HTTP Capture Form.  If you want to jump right in here are the links to get Telerik’s Fiddler Core and the code for the demo provided here. FiddlerCore Download FiddlerCore on NuGet Show me the Code (WebSurge Integration code from GitHub) Download the WinForms Sample Form West Wind Web Surge (example implementation in live app) Note that FiddlerCore is bound by a license for commercial usage – see license.txt in the FiddlerCore distribution for details. Integrating FiddlerCore FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:       PM> Install-Package FiddlerCore The library consists of the FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I’ll have more on SSL captures and certificate installation later in this post. But first let’s see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form. Capturing HTTP Content Once the library is installed it’s super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as actual start monitoring HTTP URLs. In the following code directly lifted from WebSurge, I configure a few filter options on Form level object, from the user inputs shown on the form by assigning it to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one time values initialized and set on the form. 
Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed and start up the Proxy service:void Start() { if (tbIgnoreResources.Checked) CaptureConfiguration.IgnoreResources = true; else CaptureConfiguration.IgnoreResources = false; string strProcId = txtProcessId.Text; if (strProcId.Contains('-')) strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim(); strProcId = strProcId.Trim(); int procId = 0; if (!string.IsNullOrEmpty(strProcId)) { if (!int.TryParse(strProcId, out procId)) procId = 0; } CaptureConfiguration.ProcessId = procId; CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text; FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete; FiddlerApplication.Startup(8888, true, true, true); } The key lines for FiddlerCore are just the last two lines of code that include the event hookup code as well as the Startup() method call. Here I only hook up to the AfterSessionComplete event but there are a number of other events that hook various stages of the HTTP request cycle you can also hook into. Other events include BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data and I actually have several options to capture this data. AfterSessionComplete is the last event that fires in the request sequence and it’s the most common choice to capture all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look both at the request and response data, so this will be the most common place to hook into if you’re capturing content. The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers and it looks something like this:private void FiddlerApplication_AfterSessionComplete(Session sess) { // Ignore HTTPS connect requests if (sess.RequestMethod == "CONNECT") return; if (CaptureConfiguration.ProcessId > 0) { if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId) return; } if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain)) { if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower()) return; } if (CaptureConfiguration.IgnoreResources) { string url = sess.fullUrl.ToLower(); var extensions = CaptureConfiguration.ExtensionFilterExclusions; foreach (var ext in extensions) { if (url.Contains(ext)) return; } var filters = CaptureConfiguration.UrlFilterExclusions; foreach (var urlFilter in filters) { if (url.Contains(urlFilter)) return; } } if (sess == null || sess.oRequest == null || sess.oRequest.headers == null) return; string headers = sess.oRequest.headers.ToString(); var reqBody = sess.GetRequestBodyAsString(); // if you wanted to capture the response //string respHeaders = session.oResponse.headers.ToString(); //var respBody = session.GetResponseBodyAsString(); // replace the HTTP line to inject full URL string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion; int at = headers.IndexOf("\r\n"); if (at < 0) return; headers = firstLine + "\r\n" + headers.Substring(at + 1); string output = headers + "\r\n" + (!string.IsNullOrEmpty(reqBody) ? 
reqBody + "\r\n" : string.Empty) + Separator + "\r\n\r\n"; BeginInvoke(new Action<string>((text) => { txtCapture.AppendText(text); UpdateButtonStatus(); }), output); } The code starts by filtering out some requests based on the CaptureOptions I set before the capture is started. These options/filters are applied when requests actually come in. This is very useful to help narrow down the requests that are captured for playback based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as filtering out some request types like static resources – images, css, scripts etc. This is of course optional, but I think it’s a common scenario and WebSurge makes good use of this feature. AfterSessionComplete like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I’m interested in the raw request headers and body only, as you can see in the commented code you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI they have to be marshaled back the UI thread with BeginInvoke, which here simply takes the generated headers and appends it to the existing textbox test on the form. As each request is processed, the headers are captured and appended to the bottom of the textbox resulting in a Session HTTP capture in the format that Web Surge internally supports, which is basically raw request headers with a customized 1st HTTP Header line that includes the full URL rather than a server relative URL. When the capture is done the user can either copy the raw HTTP session to the clipboard, or directly save it to file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data. While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasonsof why FiddlerCore is so powerful. You get to choose what content you want to look up as part of your own application logic and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or in the the case of WebSurge, save it to disk and automatically open the captured session as a new load test. Stopping the FiddlerCore Proxy Finally to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.ShutDown() method:void Stop() { FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete; if (FiddlerApplication.IsStarted()) FiddlerApplication.Shutdown(); } As you can see, adding HTTP capture functionality to an application is very straight forward. FiddlerCore offers tons of features I’m not even touching on here – I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore’s simple API interface. Sky’s the limit! The source code for this sample capture form (WinForms) is provided as part of this article. 
Adding Fiddler Certificates with FiddlerCore One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler and have HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring to have Fiddler installed and then using a separate application to configure the SSL functionality isn’t ideal. Fortunately FiddlerCore actually includes the tools to register the Fiddler Certificate directly using FiddlerCore. Why does Fiddler need a Certificate in the first Place? Fiddler and FiddlerCore are essentially HTTP proxies which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forward the HTTP data to the original client. Fiddler injects itself as the system proxy in using the WinInet Windows settings  which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system level Proxy settings before establishing new HTTP connections and that’s why most clients automatically work once Fiddler – or FiddlerCore/WebSurge are running. For plain HTTP requests this just works – Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically but it could be any port). For SSL however, this is not quite as simple – Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it has to also act as an SSL server and provide a certificate that the client trusts. This won’t be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence’s blog post on Certificates. If you’re using the desktop version of Fiddler you can install a local certificate into the Windows certificate store. Fiddler proper does this from the Options menu: This operation does several things: It installs the Fiddler Root Certificate It sets trust to this Root Certificate A new client certificate is generated for each HTTPS site monitored Certificate Installation with FiddlerCore You can also provide this same functionality using FiddlerCore which includes a CertMaker class. 
Using CertMaker is straight forward to use and it provides an easy way to create some simple helpers that can install and uninstall a Fiddler Root certificate:public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } return true; } InstallCertificate() works by first checking whether the root certificate is already installed and if it isn’t goes ahead and creates a new one. The process of creating the certificate is a two step process – first the actual certificate is created and then it’s moved into the certificate store to become trusted. I’m not sure why you’d ever split these operations up since a cert created without trust isn’t going to be of much value, but there are two distinct steps. When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you’re about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root since you are essentially installing a man in the middle certificate. It’s quite safe to use this generated root certificate, because it’s been specifically generated for your machine and thus is not usable from external sources, the only way to use this certificate in a trusted way is from the local machine. IOW, unless somebody has physical access to your machine, there’s no useful way to hijack this certificate and use it for nefarious purposes (see Eric’s post for more details). Once the Root certificate has been installed, FiddlerCore/Fiddler create new certificates for each site that is connected to with HTTPS. You can end up with quite a few temporary certificates in your certificate store. To uninstall you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the remove Fiddler certificates button, or you can use FiddlerCore’s CertMaker.removeFiddlerGeneratedCerts() which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application’s operation by doing so as well. When to check for an installed Certificate Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that even on a fast machine takes a few seconds. Further the trust operation pops up a message box so you probably don’t want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code – in which case the certificate installation only triggers when a certificate is in fact not installed. Personally I like to make certificate installation explicit – just like Fiddler does, so in WebSurge I use a small drop down option on the menu to install or uninstall the SSL certificate:   This code calls the InstallCertificate and UnInstallCertificate functions respectively – the experience with this is similar to what you get in Fiddler with the extra dialog box popping up to prompt confirmation for installation of the root certificate. 
Once the cert is installed you can then capture SSL requests. There’s a gotcha however… Gotcha: FiddlerCore Certificates don’t stick by Default When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate and immediately after installation was able to capture HTTPS requests. Then I would exit the application and come back in and try the same HTTPS capture again and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate a new certificate would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck? CertMaker and BcMakeCert create non-sticky Certificates It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler Root certificate. FiddlerCore however installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. The assemblies provide support for non-Windows operation for Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption. The bottom line is that the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, as the certificates created with them are not cached as they are in Fiddler proper. To get certificates to ‘stick’ you have to explicitly cache the certificates in Fiddler’s internal preferences. A cache-aware version of InstallCertificate looks something like this:public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; App.Configuration.UrlCapture.Cert = FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null); App.Configuration.UrlCapture.Key = FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null); } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } App.Configuration.UrlCapture.Cert = null; App.Configuration.UrlCapture.Key = null; return true; } In this code I store the Fiddler cert and private key in application configuration settings that are stored with the application settings (the App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler’s internal preferences store, which is set after a new certificate has been created. Likewise I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used you have to also load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made. 
I do this in the capture form’s constructor:public FiddlerCapture(StressTestForm form) { InitializeComponent(); CaptureConfiguration = App.Configuration.UrlCapture; MainForm = form; if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert)) { FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key", App.Configuration.UrlCapture.Key); FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert", App.Configuration.UrlCapture.Cert); }} This is kind of a drag to do and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore. MakeCert provides sticky Certificates and the same functionality as Fiddler But there’s actually an easier way. If you want to skip the above Fiddler preference configuration code in your application you can choose to distribute MakeCert.exe instead of CertMaker.dll and BCMakeCert.dll. When you use MakeCert.exe, the certificate settings are stored in Windows so they are available without any custom configuration inside of your application. It’s easier to integrate, and as long as you run on Windows and don’t need to support iOS or Android devices it’s simply easier to deal with. To integrate it into your project, you can remove the reference to CertMaker.dll (and the BCMakeCert.dll assembly) from your project. Instead copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None, and Copy to Output Directory to Copy if newer. Note that the CertMaker.dll reference has been removed from the project, and that the CertMaker.dll and BCMakeCert.dll files are no longer on disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed, FiddlerCore checks for it first before using the assemblies, so as long as MakeCert.exe exists it’ll be used for certificate creation (at least on Windows). Summary FiddlerCore is a pretty sweet tool, and it’s absolutely awesome that we get to plug most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up because it’s a big job to get HTTP right – especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily. The only downside is FiddlerCore’s documentation for more advanced features like certificate installation, which is pretty sketchy. While for the most part FiddlerCore’s feature set is easy to work with without any documentation, advanced features are often not intuitive to glean by just using IntelliSense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn’t much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble the forum is probably the first place to look, and then ask a question if you can’t find the answer. The best documentation you can find is Eric’s Fiddler Book, which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler’s feature set as well as providing great insights into the HTTP protocol. 
The second half of the book that gets into the innards of HTTP is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP – it’s well worth the read. While the book has tons of information in a very readable format, it’s unfortunately not a great reference as it’s hard to find things in the book, and because it’s not available online you can’t electronically search for the great content in it. But it’s hard to complain about any of this given the obvious effort and love that’s gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time and keeping it free for all. Kudos! Resources: FiddlerCore Download, FiddlerCore NuGet, Fiddler Capture Sample Form, Fiddler Capture Form in West Wind WebSurge (GitHub), Eric Lawrence’s Fiddler Book. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in .NET, HTTP.

    Read the article

  • Splitting big request in multiple small ajax requests

    - by Ionut
    I am unsure regarding the scalability of the following model. I have no experience at all with large systems, big numbers of requests and so on, but I'm trying to build some features considering scalability first. In my scenario there is a user page which contains data for: User's details (name, location, workplace ...) User's activity (blog posts, comments...) User statistics (rating, number of friends...) In order to show all this on the same page, for a request there will be at least 3 different database queries on the back-end. In some cases, I imagine that those queries will be running quite a while, and therefore the user experience may suffer while waiting between requests. This is why I decided to run only step 1 (User's details) as a normal request. After the response is received, two ajax requests are sent for steps 2 and 3. When those responses are received, I only place the data in the destination wrappers. For me at least this makes more sense. However there are 3 requests instead of one for every user page view. Will this affect the system in the long term? I'm assuming that this kind of approach requires more resources, but is this trade of UX for resources a good deal, or should I stick to one plain big request?

    Read the article

  • How to keep background requests in sequence

    - by Jason Lewis
    I'm faced with implementing interfaces for some rather archaic systems, for handling online deposits to stored value accounts (think campus card accounts for students). Here's my dilemma: stage 1 of the process involves passing the user off to a third-party site for the credit card transaction, like old-school PayPal. Step two involves using a proprietary protocol for communicating with a legacy system for conducting the actual deposit. Step two requires that each transaction have a unique sequence number, and that the requests' seqnums are in order. Since we're logging each transaction in Postgres, my first thought was to take a number from a sequence in the DB, guaranteeing uniqueness. But since we're dealing with web requests that might come in near-simultaneously, and since latency with the return from the off-site payment processor is beyond our control, there's always the chance for a race condition in the order of requests passed back to the proprietary system, and if the seqnums are out of order, the request fails silently (brilliant, right?). I thought about enqueuing the requests in Redis and using Resque workers to process them (single worker, single process, so they are processed in order), but we need to be able to give the user feedback as to whether the transaction was processed successfully, so this seems less feasible to me. I've tried to make this application handle concurrency well (as much as possible for a Ruby on Rails app), but now we're in a situation where we have to interact with a system that is designed to be single process, single threaded, and sequential. If it at least gave an "out of order" error, I could just increment (or take the next value off the sequence), but it's designed to fail silently in the event of ANY error. We are handling timeouts in a way that blocks on I/O, but since the application uses multiple workers (Unicorn), that's no guarantee. Any ideas/suggestions would be appreciated.
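
    One hedged idea (an assumption, not something from the original post): since the legacy system only checks that sequence numbers arrive in order, the "take the next seqnum and send the request" step can be made a single serialized critical section, for example with a Postgres advisory lock, so that no two workers can interleave between drawing a number and transmitting it. The lock key and sequence name below are made up for illustration:

      -- Serialize assign-and-send across all web workers (sketch).
      SELECT pg_advisory_lock(42);      -- blocks until this worker holds the application-defined lock
      SELECT nextval('deposit_seq');    -- the seqnum drawn here is guaranteed to be the newest issued
      -- ... call the proprietary deposit protocol here, while still holding the lock ...
      SELECT pg_advisory_unlock(42);    -- release so the next deposit can proceed

    The obvious cost is that holding the lock across the legacy call serializes deposits, but that matches the single-process, sequential design of the downstream system, and the web request can still report success or failure to the user synchronously.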

    Read the article

  • Python requests SSL version

    - by Aaron Schif
    I am using the Python requests module on Ubuntu 13.04. I keep getting the error: requests.exceptions.SSLError: [Errno 1] _ssl.c:504: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure When I use curl, it fails by default but succeeds with the -3 option. curl https://username:Password@helloworldurl -3 This leads me to believe that it is the SSL version, which, while searching the error, I found may be badly supported on Ubuntu. Sooo. How do I change or check the SSL version using Python, preferably with requests? Note: the URL is private and cannot be given out. Sorry.
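
    A common fix (sketched here under the assumption that a specific SSL/TLS protocol version needs to be forced, mirroring curl's -3 behaviour) is to mount a custom transport adapter on a requests Session and pin the ssl_version handed to urllib3; the URL below is the question's own placeholder:

      import ssl
      import requests
      from requests.adapters import HTTPAdapter
      from requests.packages.urllib3.poolmanager import PoolManager

      class ForcedSSLAdapter(HTTPAdapter):
          """Transport adapter that pins the SSL/TLS protocol version."""
          def __init__(self, ssl_version, **kwargs):
              self.ssl_version = ssl_version
              super(ForcedSSLAdapter, self).__init__(**kwargs)

          def init_poolmanager(self, connections, maxsize, block=False):
              self.poolmanager = PoolManager(num_pools=connections, maxsize=maxsize,
                                             block=block, ssl_version=self.ssl_version)

      print(ssl.OPENSSL_VERSION)  # quick way to check which OpenSSL the Python build is linked against

      session = requests.Session()
      # PROTOCOL_TLSv1 is an assumption; PROTOCOL_SSLv3 would mirror curl -3 if the server truly needs SSLv3.
      session.mount('https://', ForcedSSLAdapter(ssl.PROTOCOL_TLSv1))
      response = session.get('https://helloworldurl/')  # placeholder URL from the question
      print(response.status_code)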

    Read the article

  • Log incoming requests

    - by Maxim Eliseev
    We have Tomcat running on an Ubuntu server. It runs a web service, open to the internet. Sometimes it has a sudden spike of traffic and goes down. There is nothing unusual in the Tomcat access logs. I guess that's because some of the requests are so 'heavy' that they never finish and hence are never recorded in the Tomcat access logs. Is there a way to configure Ubuntu to log incoming requests in the following format? Date, Time, URL (with query string params), IP address (of client) There should be one line per request. Each request should be logged before it is executed. Only incoming requests to ports 80 and 443 should be logged.
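
    A hedged partial answer (the rule and interface names below are illustrative): iptables can log the timestamp and client IP of every new connection to ports 80/443 before Tomcat touches it, but the URL and query string only exist inside the HTTP payload, so capturing those independently of Tomcat needs a packet-level tool or a lightweight reverse proxy in front that logs on arrival. For port 443 the payload is encrypted, so URLs there can only be logged by whatever terminates SSL.

      # Connection-level log line (timestamp + source IP) written to the kernel log for every new connection.
      iptables -I INPUT -p tcp -m multiport --dports 80,443 -m state --state NEW \
          -j LOG --log-prefix "INCOMING-HTTP: "

      # Request lines (method, URL, query string) for plain HTTP, captured as they arrive
      # even if Tomcat never finishes the request.
      tcpdump -i eth0 -s 0 -A 'tcp dst port 80'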

    Read the article

  • Huge difference between Facebook Ad Click figures and Apache log requests

    - by Gearóid
    We're running a Facebook ad campaign for our business, but there seems to be a huge discrepancy between the number of clicks registered and the number of requests made with "facebook.com" in the HTTP referrer. The difference can be anything between 40-80 clicks/requests. I understand why the Google Analytics figures would be off and I understand that the figures shouldn't be exactly the same, but surely if 100 people click the ad then I should be seeing at least 90 requests for the homepage with facebook.com as the referrer? Can anybody provide any insight into why this may be happening?

    Read the article

  • Monitoring JSON requests sent/received from the browser?

    - by Uwe Keim
    Having a website that generates and receives JSON requests via AJAX, I failed to find a tool that shows me the communication live, including the content of the JSON calls. I thought that the Google Chrome developer tools or the IE 9 developer tools had such a feature, but again, I failed. Searching Google, I failed too. So my question is: Is there a client-side tool to monitor the content of JSON requests that a website sends to the server?

    Read the article

  • Windows 2008 R2 Servers Sending Arp Requests for IPs outside Subnet

    - by Kyle Brandt
    By running a packet capture on my routers I see some of my servers sending ARP requests for IPs that exist outside of their network. For example, if my network is: Network: 8.8.8.0/24 Gateway: 8.8.8.1 (MAC: 00:21:9b:aa:aa:aa) Example Server: 8.8.8.20 (MAC: 00:21:9b:bb:bb:bb) By running a capture on the interface that has 8.8.8.1 I see requests like: Sender MAC: 00:21:9b:bb:bb:bb Sender IP: 8.8.8.20 Target MAC: 00:21:9b:aa:aa:aa Target IP: 69.63.181.58 Anyone seen this behavior before? My understanding of ARP is that requests should only go out for IPs within the subnet... Am I confused in my understanding of ARP? If I am not confused, anyone seen this behavior? Also, these seem to happen in bursts and it doesn't happen when I do something like ping an IP outside of the network. Update: In response to Ian's questions. I am not running anything like Hyper-V. I have multiple interfaces but only one is active (using BACS failover teaming). The subnet mask is 255.255.255.0 (even if it were something different it wouldn't explain an IP like 69.63.181.58). When I run MS Network Monitor or Wireshark I do not see these ARP requests. What happens is that on the router capture I see a burst of about 10 requests for IPs outside of the network from the host machine. On the machine itself, using Wireshark or NetMon, I see a flood of ARP responses for all the machines on the network. However, I don't see any requests in the capture asking for those responses. So it seems like maybe it is refreshing the ARP cache but including IPs that are outside of the network. Also, when it does this, why doesn't NetMon show the ARP requests?

    Read the article

  • Enabling SSL Requests on Jdev's Integrated Weblogic

    - by Christian David Straub
    Oftentimes you will want to enable SSL access for such things as secure login or secure signup. By default, the integrated WLS that ships with JDev does not listen for SSL requests. However, this is easily fixed. Just navigate to http://127.0.0.1:7101/console. This will deploy the console app where you can configure WLS. By default the login credentials are: username: weblogic, password: weblogic1. Then go to Environment -> Servers -> DefaultServer. Check the "SSL Listen Port Enabled" box and your server will now listen for SSL requests (just make sure to use the listen port that is specified). For added security, you can always check while processing your request that it is going through an SSL connection by first checking HttpServletRequest.isSecure().

    Read the article

  • Receiving requests where absolute URL on page are morphed to relative URLs

    - by Jacob
    In our web pages, we have a hyperlink with an href to an absolute URL: https://some.other.host.com/blah.aspx?var1=val1&var2=val2 For some reason, in our logs, we see a lot of requests to URLs of this format: http://our.site.com/https:/some.other.host.com/blah.aspx?var1=val1&var2=val2 We don't have any JavaScript that would request that URL; it only appears inside of a hyperlink. Is there some sort of known bot, browser plugin, bug, etc. that could be responsible for these requests being made?

    Read the article

  • How to publicize new Android's HTTP requests library

    - by Yaniv
    I don't know if this really belongs here, but I developed an open-source HTTP requests library for Android called Unite. This library was built mainly to significantly facilitate the work and coding time, and makes it easy to create and work with HTTP requests. I think a big advantage of this library is that it is open-source, so everyone can contribute to make it even better. I started this project for personal use, and I really like the result. What is the right and proper way to publicize the project, I do think it will be handy to Android developers. So how can I make developers know this library exist?

    Read the article

  • Apache2 Unwantingly Allowing Proxy Requests

    - by Kevin
    I'm not sure if this is the right location, but this is fairly urgent. I have completely removed all traces of mod_proxy and the other mod_proxy mods, although the Apache server continues to allow proxy requests. I have restarted numerous times, and have shut down until I can find an answer. I've noticed lots of requests from IPs in and around China to external sites such as free movie downloads and such. I'd like to prevent this from happening. I'll be grateful for any help I get.
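
    A hedged sketch of one common mitigation (this assumes the logged entries are open-proxy probes sending absolute URLs and arbitrary Host headers, which Apache then answers from the default virtual host): make the first, catch-all virtual host refuse everything, so only requests addressed to your real ServerName are served. The names and paths are placeholders; if mod_proxy is ever re-enabled, ProxyRequests Off should also be set explicitly.

      # Catch-all default vhost (loaded first) that rejects unknown Host headers / proxy probes.
      <VirtualHost *:80>
          ServerName default.invalid
          <Location />
              Order allow,deny
              Deny from all
          </Location>
      </VirtualHost>

      # The real site follows and is matched by its Host header.
      <VirtualHost *:80>
          ServerName www.example.com
          DocumentRoot /var/www/example
      </VirtualHost>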

    Read the article

  • IP/PORT forward requests to another server

    - by DT.DTDG
    I have the following listening PORT:IP set up on my Ubuntu server. 12.345.67.890:3636 It receives requests perfectly; however, I would now like to forward any requests to that IP:PORT to another IP:PORT, i.e.: 09.876.54.321:3636 Essentially I want to do a request forward 12.345.67.890:3636 -> 09.876.54.321:3636. How can I go about it in the terminal, and if I wanted to change it back, how can I go about that too? Is there also a way to test that the data is forwarding and it is set up properly? Thanks! Edit: Can I just do as follows? Just wondering how I would go about testing it and how I could disable it. sysctl net.ipv4.ip_forward=1 iptables -t nat -A PREROUTING -p tcp --dport 3636 -j DNAT --to-destination 09.876.54.321:3636 iptables -t nat -A POSTROUTING -j MASQUERADE
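
    To the edit: those three commands are the usual recipe. A hedged sketch of testing and undoing them (standard iptables options; the IPs are the placeholders from the question):

      # See the NAT rules with their positions, so they can be deleted individually later.
      iptables -t nat -L PREROUTING --line-numbers -n
      iptables -t nat -L POSTROUTING --line-numbers -n

      # Undo: delete by rule number (here assuming each was the first rule in its chain),
      # then turn packet forwarding back off.
      iptables -t nat -D PREROUTING 1
      iptables -t nat -D POSTROUTING 1
      sysctl net.ipv4.ip_forward=0

      # Quick functional test from another machine: connect to the original address and
      # check that the service running on 09.876.54.321 answers.
      nc -vz 12.345.67.890 3636

    Note that the sysctl change is not persistent across reboots unless it is also written to /etc/sysctl.conf, and forwarded traffic additionally has to be allowed through the FORWARD chain if its policy is DROP.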

    Read the article

  • Significant number of non-HTTP requests hitting my site

    - by Mark Westling
    I'm seeing a significant number of non-HTTP requests hitting a site I just launched. They show up in the server (nginx) logs as non-ASCII and get rejected (correctly) with a 400 status. Here are some lines from the log: 95.132.198.189 - - [09/Jan/2011:13:53:30 -0500] "œ$A\x10õœ²É9J" 400 173 "-" "-" 79.100.145.126 - - [09/Jan/2011:13:57:42 -0500] "#§i²¸oYi á¹„\x13VJ—x·—œ\x04N \x1DÔvbÛè½\x10§¬\x1E0œ_^¼+\x09ÜÅ\x08DÌÃiJeT€¿æ]œr\x1EëîyIÐ/ßýúê5Ǹ" 400 173 "-" "-" 79.100.145.126 - - [09/Jan/2011:13:58:33 -0500] "¯Ú%ø=Œ›D@\x12¼\x1C†ÄÀe\x015mˆàd˜Û%pÛÿ" 400 173 "-" "-" What should I make of this? Is this some sort of scripted attack? Or could these be correct requests that have somehow been garbled? They're not affecting the performance of the site and I'm not seeing any other signs of attacks (e.g., no strange POSTs) so at this point I'm more curious than afraid.

    Read the article

  • Lots of http HEAD requests originating from porn sites

    - by Don Corley
    My access log on my web server has a ton of http HEAD requests coming from porn sites. What are HEAD requests and are they doing something bad with my site? Here is an excerpt from my log: (valid request) 96.251.177.249 - - [02/Jan/2011:23:42:25 -0800] "POST /ajax HTTP/1.1" 200 0 "http://www.mywebsite.com/abc.html" "Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.44 Safari/534.7" 80.153.114.208 - - [02/Jan/2011:23:43:11 -0800] "HEAD / HTTP/1.0" 302 185 "http://www.somepornsite.com" "Mozilla/5.0 (X11; U; Linux i686; it-IT; rv:1.9.0.2) Gecko/2008092313 Ubuntu/9.25 (jaunty) Firefox/3.8" 80.153.114.208 - - [02/Jan/2011:23:43:11 -0800] "HEAD /tourappxsl HTTP/1.0" 200 16871 "http://www.somepornsite.com" "Mozilla/5.0 (X11; U; Linux i686; it-IT; rv:1.9.0.2) Gecko/2008092313 Ubuntu/9.25 (jaunty) Firefox/3.8" I changed only the web addresses in this log. Thanks for any ideas, Don
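
    For context (hedged, reusing the redacted hostnames from the log excerpt): a HEAD request is simply a GET that asks for the response headers without the body, so these clients are most likely just probing whether the pages exist, often to validate harvested links or referrer-spam targets. The exchange is easy to reproduce:

      # -I makes curl send a HEAD request; only the status line and headers come back.
      curl -I -e "http://www.somepornsite.com" http://www.mywebsite.com/

    On its own a HEAD request is harmless; if the traffic is unwanted it can be dropped by referrer or source IP at the web server or firewall level.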

    Read the article

  • Adsense: You have rejected ad requests, which will result in lost revenue

    - by Chankey Pathak
    Got an alert on my adsense account which says You have rejected ad requests, which will result in lost revenue. The following ad units have made ad requests with incorrect site information. This occurs when the URL of the server from which the ad unit has been served differs from the URL of the actual page where the ad will be displayed. Learn how to fix these errors. So the solution is that I'll have to use "google_page_url = "http://myurl.com/fullpath";" I'm using wordpress, what should be the URL for google_page_url? For example my website is www.technostall.com. Should I put www.technostall.com there or should I give the path of each post? That is not good because I'm using a sidebar widget for sidebar ad unit. I can't change google_page_url for each page. What should I do? This error is appearing only on my sidebar/navigation ad units. Is using google_page_url = document.location; fine?

    Read the article

  • Windows Metro Requests

    - by Scott Dorman
    Windows 8 and Windows Metro style apps have a lot of potential, but only if application vendors realize there is a demand to see their app as a Metro style app and not just as a desktop app (or worse, only as an Android or iOS app). As consumers, the only thing we can do is be vocal about our desire to see these apps on Windows 8 as a Metro style app. In an effort to raise awareness, I just launched WinMetro Requests. This is our opportunity to request Windows Metro style apps  and show those companies just how much interest there is for seeing their app as a Metro style app. This site is running on UserVoice, so it allows you to easily submit application requests, add comments, and, more importantly, vote for your favorite applications to come to Windows as a Metro style app! As I find out the status of requested applications, I will update the status of the request. If you know and have official communication from one of the companies indicating they will be or are working on a Windows Metro style app, please let me know and I'll update the status of the request after verifying (or at least trying to verify) the information.

    Read the article

  • Requests per second slower when using nginx for load balancing

    - by Ed Eliot
    I've set up nginx as a load balancer that reverse proxies requests to 2 Apache servers. I've benchmarked the setup with ab and am getting approx 35 requests per second with requests distributed between the 2 backend servers (not using ip_hash). What is confusing me is that if I query either of the backend servers directly via ab I get around 50 requests per second. I've experimented with a number of different values in ab the most common being 1000 requests with 100 concurrent connections. Any idea why traffic distributed across 2 servers would result in fewer requests per second than hitting either directly? Additional info: I've experimented with worker_processes values of between 1 and 8, worker_connections between 1024 and 8092 and have also tried keepalive 0 and 65. My main conf currently looks like this: user www-data; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; worker_rlimit_nofile 8192; events { worker_connections 2048; use epoll; } http { include /etc/nginx/mime.types; sendfile on; keepalive_timeout 0; tcp_nodelay on; gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } I've got one virtual host (in sites available) that redirects everything under / to 2 backends across a local network.
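
    One plausible culprit (an assumption based on the posted config, not a confirmed diagnosis): with keepalive_timeout 0 and no upstream keepalive, every benchmarked request pays for a fresh TCP connection both to nginx and from nginx to Apache, so the proxied path can easily come out slower than hitting a backend directly. A sketch of reusing upstream connections, with placeholder backend addresses (the upstream keepalive directive requires a reasonably recent nginx):

      upstream apache_backends {
          server 192.168.0.10:80;
          server 192.168.0.11:80;
          keepalive 64;                        # pool of idle connections held open to the backends
      }

      server {
          listen 80;
          location / {
              proxy_pass http://apache_backends;
              proxy_http_version 1.1;          # upstream keepalive needs HTTP/1.1
              proxy_set_header Connection "";  # drop "Connection: close" so backend connections are reused
          }
      }

    Restoring a non-zero keepalive_timeout for client connections is worth testing for the same reason.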

    Read the article

  • System Requirements of a write-heavy applications serving hundreds of requests per second

    - by Rolando Cruz
    NOTE: I am a self-taught PHP developer who has little to none experience managing web and database servers. I am about to write a web-based attendance system for a very large userbase. I expect around 1000 to 1500 users logged-in at the same time making at least 1 request every 10 seconds or so for a span of 30 minutes a day, 3 times a week. So it's more or less 100 requests per second, or at the very worst 1000 requests in a second (average of 16 concurrent requests? But it could be higher given the short timeframe that users will make these requests. crosses fingers to avoid 100 concurrent requests). I expect two types of transactions, a local (not referring to a local network) and a foreign transaction. local transactions basically download userdata in their locality and cache it for 1 - 2 weeks. Attendance equests will probably be two numeric strings only: userid and eventid. foreign transactions are for attendance of those do not belong in the current locality. This will pass in the following data instead: (numeric) locality_id, (string) full_name. Both requests are done in Ajax so no HTML data included, only JSON. Both type of requests expect at the very least a single numeric response from the server. I think there will be a 50-50 split on the frequency of local and foreign transactions, but there's only a few bytes of difference anyways in the sizes of these transactions. As of this moment the userid may only reach 6 digits and eventid are 4 to 5-digit integers too. I expect my users table to have at least 400k rows, and the event table to have as many as 10k rows, a locality table with at least 1500 rows, and my main attendance table to increase by 400k rows (based on the number of users in the users table) a day for 3 days a week (1.2M rows a week). For me, this sounds big. But is this really that big? Or can this be handled by a single server (not sure about the server specs yet since I'll probably avail of a VPS from ServInt or others)? I tried to read on multiple server setups Heatbeat, DRBD, master-slave setups. But I wonder if they're really necessary. the users table will add around 500 1k rows a week. If this can't be handled by a single server, then if I am to choose a MySQL replication topology, what would be the best setup for this case? Sorry, if I sound vague or the question is too wide. I just don't know what to ask or what do you want to know at this point.

    Read the article

  • How do I analyze an Apache Bench result?

    - by Alan Hoffmeister
    I need some help with analyzing a log from Apache Bench: Benchmarking texteli.com (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Completed 600 requests Completed 700 requests Completed 800 requests Completed 900 requests Completed 1000 requests Finished 1000 requests Server Software: Server Hostname: texteli.com Server Port: 80 Document Path: /4f84b59c557eb79321000dfa Document Length: 13400 bytes Concurrency Level: 200 Time taken for tests: 37.030 seconds Complete requests: 1000 Failed requests: 0 Write errors: 0 Total transferred: 13524000 bytes HTML transferred: 13400000 bytes Requests per second: 27.01 [#/sec] (mean) Time per request: 7406.024 [ms] (mean) Time per request: 37.030 [ms] (mean, across all concurrent requests) Transfer rate: 356.66 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 27 37 19.5 34 319 Processing: 80 6273 1673.7 6907 8987 Waiting: 47 3436 2085.2 3345 8856 Total: 115 6310 1675.8 6940 9022 Percentage of the requests served within a certain time (ms) 50% 6940 66% 6968 75% 6988 80% 7007 90% 7025 95% 7078 98% 8410 99% 8876 100% 9022 (longest request) What this results can tell me? Isn't 27 rps too slow?
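
    For reference, a report like this comes from an invocation along these lines (standard ab options; the request count and concurrency are read straight from the output above):

      ab -n 1000 -c 200 http://texteli.com/4f84b59c557eb79321000dfa

    The headline numbers are related: Requests per second = 1000 requests / 37.030 s ≈ 27, and the mean "Time per request" of roughly 7406 ms is just the concurrency level (200) divided by that throughput, i.e. each of the 200 in-flight requests waits about 7.4 s. Whether 27 req/s is too slow depends on what the page does; the small Connect times next to the large Processing/Waiting times suggest the server, not the network, is the bottleneck.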

    Read the article

  • Using Url Rewrite to Block Page Requests

    - by The Official Microsoft IIS Site
    The other day I was checking the traffic stats for my WordPress blog to see which of my posts were the most popular. I was a little concerned to see that wp-login.php was in the Top 5 total requests almost every month. Since I’m the only author on my blog my logins could not possibly account for the traffic hitting that page. The only explanation could be that the additional traffic was coming from automated hacking attempts. Any server administrator concerned about security knows that “ footprinting...(read more)
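
    As a hedged illustration of the kind of rule the post goes on to describe (the rule name, pattern and response code here are assumptions, placed in web.config under system.webServer/rewrite):

      <rewrite>
        <rules>
          <!-- Reject requests for the WordPress login page before they reach the application. -->
          <rule name="Block wp-login requests" stopProcessing="true">
            <match url="^wp-login\.php$" />
            <action type="CustomResponse" statusCode="403"
                    statusReason="Forbidden" statusDescription="Request blocked by rewrite rule" />
          </rule>
        </rules>
      </rewrite>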

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >