Search Results

Search found 5793 results on 232 pages for 'requests'.

Page 147/232

  • Odd Series of Packets: How would I reproduce this behavior?

    - by JustSmith
    I recorded a series of HTTP packets that I can't programmatically recreate. The series of packets goes like this:

    HTTP GET /axis-cgi/admin/param.cgi?action=list&group=Network.eth0.MACAddress,Properties.System.SerialNumber,DVTelTest,SightLogix.ProdShortName HTTP/1.1
    HTTP HTTP/1.1 200 OK (text/plain)
    HTTP GET /axis-cgi/admin/param.cgi?action=list&group=Properties.Image.Resolution HTTP/1.1
    HTTP HTTP/1.1 200 OK (text/plain)
    HTTP GET /axis-cgi/admin/param.cgi?action=update&Network.RTSP.ProtViewer=password HTTP/1.1
    HTTP GET /axis-cgi/admin/param.cgi?action=list&group=Event HTTP/1.1
    HTTP HTTP/1.1 200 OK (text/plain)
    HTTP GET /axis-cgi/admin/param.cgi?action=list&group=ImageSource.I0.Sensor HTTP/1.1
    HTTP HTTP/1.1 200 OK (text/plain)

    Notice the two GETs followed by one response. I thought the two GETs were going out at the same time, but there is no corresponding number of responses. Also, when trying to reproduce this pattern as the server, if I abort the first GET request the client waits until it times out and then starts the request over without sending any other requests. What is happening here? How can I reproduce it?

    Read the article

  • PCAP Web Service Usage Logging for Dummies

    - by nick
    I've been assigned the task (for work) of working with PCAP for the first time in my life. I've read through the tutorials and have hacked together a really simple capture program which, it turns out, isn't that hard. However, making use of the data is more difficult. My goal is to log incoming and outgoing web service requests. Are there libraries (C or C++) that stitch together the packets from PCAP and would make reporting on this simple? Barring that, is there something short of reading all of the RFCs from soup to nuts that will allow me to have an "ah-ha!" moment? (All of the tutorials seem to stop at the raw packet level, which isn't useful for me.) It looks like Perl has a library that may do this, and I may eventually attempt to reverse-engineer it from Perl. Nota bene: web server logs aren't acceptable here, as I will be intercepting on a routing device. If I had access to those I'd be done and happy... I don't.

    Read the article

  • Geolocation under Firefox 3.6 requires Proxy Authentication?

    - by prem
    I am trying to share my location on geolocation-enabled pages from Firefox 3.6, but I don't seem to get any kind of success or failure. When I wrote my custom JS containing navigator.geolocation.getCurrentPosition(func1,func2), the success callback isn't called at all. When I tamper with the HTTP requests in Firefox, the request to https://www.google.com/loc/json returns with status 407 [Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to the Web Proxy filter is denied. )]... Yes, my network is behind a proxy server, but the same thing works in Chrome. I haven't tried other browsers yet.
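
    One way to at least surface the failure is to pass an error callback plus a timeout, so getCurrentPosition reports a PositionError instead of silently hanging. A minimal sketch (the timeout value and log messages are arbitrary, not from the question):

        navigator.geolocation.getCurrentPosition(
            function (pos) {
                console.log("lat/lon:", pos.coords.latitude, pos.coords.longitude);
            },
            function (err) {
                // err.code: 1 = PERMISSION_DENIED, 2 = POSITION_UNAVAILABLE, 3 = TIMEOUT
                console.log("geolocation failed:", err.code, err.message);
            },
            { enableHighAccuracy: false, timeout: 15000, maximumAge: 0 }
        );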

    Read the article

  • Design Patterns: What's the antithesis of Front Controller?

    - by Brian Lacy
    I'm familiar with the Front Controller pattern, in which all events/requests are processed through a single centralized controller. But what would you call it when you wish to keep the various parts of an application separate at the presentation layer as well? My first thought was "Facade", but it turns out that's something entirely different. In my particular case, I'm converting an application from a sprawling procedural mess to a clean MVC architecture, but it's a long-term process -- we need to keep things separated as much as possible to facilitate a slow integration with the rest of the system. Our application is web-based, built in PHP, so for instance we have an "index.php" and an IndexController, an "account.php" and an AccountController, a "dashboard.php" and a DashboardController, and so on.

    Read the article

  • Reuse implemented business classes in a web application

    - by Mohammad
    I have implemented my domain layer classes and I have used them in a Java application. Now I want to use the same classes in a Java web application, but I don't know how to do it. In the Java application we create and run some objects in main (class and method) and use them while the program is running -- for example, an object that holds a collection of data that will be needed for all user requests. My question is: how can I create and hold such objects and data so that they are available to all users and clients?
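
    A minimal sketch of one common approach in a servlet container: build the shared objects once at web-application startup in a ServletContextListener and publish them via the ServletContext, where every request thread can read them (the attribute name and the map contents below are only illustrative):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;

        public class AppBootstrapListener implements ServletContextListener {

            @Override
            public void contextInitialized(ServletContextEvent sce) {
                // Build the shared, read-mostly domain data once; a concurrent map
                // keeps it safe to read from many request threads at the same time.
                Map<String, String> shared = new ConcurrentHashMap<String, String>();
                shared.put("exampleKey", "exampleValue");
                sce.getServletContext().setAttribute("sharedData", shared);
            }

            @Override
            public void contextDestroyed(ServletContextEvent sce) {
                sce.getServletContext().removeAttribute("sharedData");
            }
        }

    Register the listener in web.xml with a <listener> element, and read the attribute from any servlet with getServletContext().getAttribute("sharedData").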

    Read the article

  • MySQL: First and last record of a grouped record (aggregate functions)

    - by Jimmy
    I am trying to fetch the first and the last record of a 'grouped' set of records. More precisely, I am doing a query like this:

    SELECT MIN(low_price), MAX(high_price), open, close
    FROM symbols
    WHERE date BETWEEN(.. ..)
    GROUP BY YEARWEEK(date)

    but I'd like to get the first and the last record of each group. It could be done with tons of separate requests, but I have a quite large table. Is there a (low-processing-time, if possible) way to do this with MySQL?
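
    A rough sketch of one common MySQL workaround: take the first element of a GROUP_CONCAT ordered by date, so the week's open comes from the earliest row and the close from the latest (the date range below is only a placeholder):

        SELECT YEARWEEK(date) AS yw,
               MIN(low_price) AS week_low,
               MAX(high_price) AS week_high,
               SUBSTRING_INDEX(GROUP_CONCAT(open ORDER BY date ASC), ',', 1)   AS week_open,
               SUBSTRING_INDEX(GROUP_CONCAT(close ORDER BY date DESC), ',', 1) AS week_close
        FROM symbols
        WHERE date BETWEEN '2010-01-01' AND '2010-12-31'
        GROUP BY YEARWEEK(date);

    (GROUP_CONCAT is subject to the group_concat_max_len limit, so for very large groups the other usual route is a self-join against MIN(date)/MAX(date) per YEARWEEK.)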

    Read the article

  • Cross-domain REST proxy with Javascript, HTML5

    - by Bosh
    I'm writing a service (say, service.com) that provides a REST API to external apps running inside of IFrames. (These apps are hosted on domains outside service.com.) I'm planning a JavaScript client library for the apps to make pure-JavaScript requests to the service.com REST API -- basically using postMessage and some ad-hoc encapsulation of my API calls to get messages back and forth across frames (from the outside-app.com IFrame to the service.com REST API, and back to the IFrame with a response). My question: is there any robust, general-purpose JavaScript library to accomplish the kind of cross-domain REST request proxying I need, or should I just hack it from scratch?
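
    For reference, a rough sketch of the kind of ad-hoc postMessage bridge described above, assuming the parent page lives on service.com and the app sits in an iframe on another domain (the message shape, origins and paths are made up for illustration):

        // Inside the app iframe: wrap one REST call in a postMessage round trip.
        function callService(method, path, body, onDone) {
            var id = Math.random().toString(36).slice(2);   // correlate request/response
            function onMessage(event) {
                if (event.origin !== "https://service.com") return;
                var msg = JSON.parse(event.data);
                if (msg.id !== id) return;
                window.removeEventListener("message", onMessage, false);
                onDone(msg.status, msg.body);
            }
            window.addEventListener("message", onMessage, false);
            parent.postMessage(
                JSON.stringify({ id: id, method: method, path: path, body: body }),
                "https://service.com");
        }

        // On the service.com parent page: perform the same-origin XHR and reply.
        window.addEventListener("message", function (event) {
            // In practice, check event.origin against the list of allowed app domains here.
            var msg = JSON.parse(event.data);
            var xhr = new XMLHttpRequest();
            xhr.open(msg.method, "/api" + msg.path, true);
            xhr.onreadystatechange = function () {
                if (xhr.readyState !== 4) return;
                event.source.postMessage(
                    JSON.stringify({ id: msg.id, status: xhr.status, body: xhr.responseText }),
                    event.origin);
            };
            xhr.send(msg.body || null);
        }, false);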

    Read the article

  • Can EC2 instances be set up to come from different IP ranges?

    - by Joshua Frank
    I need to run a web crawler and I want to do it from EC2 because I want the HTTP requests to come from different IP ranges so I don't get blocked. So I thought distributing this on EC2 instances might help, but I can't find any information about what the outbound IP range will be. I don't want to go to the trouble of figuring out the extra complexity of EC2 and distributed data, only to find that all the instances use the same address block and I get blocked by the server anyway. NOTE: This isn't for a DoS attack or anything. I'm trying to harvest data for a legitimate business purpose, I'm respecting robots.txt, and I'm only making one request per second, but the host is still shutting me down. Edit: Commenter Paul Dixon suggests that the act of blocking even my modest crawl indicates that the host doesn't want me to crawl them and therefore that I shouldn't do it (even assuming I can work around the blocking). Do people agree with this?

    Read the article

  • URL Rewrite in htaccess problem

    - by davykiash
    I'm rather new to this world of .htaccess redirects. I'm trying to force all requests in my Zend MVC application to HTTPS, but I get a "requested URL not found" error on requests that don't go through the index controller. For example, https://www.example.com/auth/register gives a "requested URL /auth/register not found" error. However, if I remove the HTTPS redirect rule it works fine over HTTP. If I adjust the URL to https://www.example.com/index.php/auth/register it works fine. The URL https://www.example.com/index/faq works just fine, since it goes through the index controller. My .htaccess file looks like this:

    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L]

    RewriteCond %{REQUEST_FILENAME} -s [OR]
    RewriteCond %{REQUEST_FILENAME} -l [OR]
    RewriteCond %{REQUEST_FILENAME} -d
    RewriteRule ^.*$ - [NC,L]
    RewriteRule ^.*$ index.php [NC,L]

    <IfModule mod_gzip.c>
      mod_gzip_on Yes
      mod_gzip_dechunk Yes
      mod_gzip_item_include file \.(html?|txt|css|js|php|pl)$
      mod_gzip_item_include handler ^cgi-script$
      mod_gzip_item_include mime ^text/.*
      mod_gzip_item_include mime ^application/x-javascript.*
      mod_gzip_item_exclude mime ^image/.*
      mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
    </IfModule>

    What do I need to adjust to get the URL https://www.example.com/auth/register working?

    Read the article

  • Create a SOCKS Proxy that does nothing special

    - by rwired
    I am trying to create a SOCKS proxy in C++ that runs as a background process on localhost. If the user's browser is configured to use the proxy, I want all HTTP requests to be passed along through the normal TCP/IP stack. i.e. The browser will behave exactly as it normally would. Eventually I will add another layer which will check to see if the requested resource matches certain criteria, and if so will handle the request differently. But for now I'm just trying to solve the basic problem... how to create a SOCKS proxy that doesn't change anything?

    Read the article

  • Lightweight web browser for testing

    - by Ghostrider
    I have a very specific test setup in mind. I would like to start a web browser that understands JavaScript and can use an HTTP proxy, point it to a URL (ideally by specifying it on the command line along with the proxy config), wait for the page to load while listening (in the proxy) to the requests that are generated as the web page is rendered and its JavaScript is executed, then kill the whole thing and restart. I don't care about how the page renders graphically at all. Which browser or tool should I use for this? Ideally it should be something self-contained that doesn't require installation (just an EXE file that runs from the command line). Lynx would have been ideal, except that it doesn't support JS. It should have as small a memory footprint as possible.

    Read the article

  • What is cached on a client machine when using HTTPS?

    - by TroyP
    I have an application that is working on HTTPS for everybody, and on HTTP for all but two users. The two users get a JavaScript error when trying to "edit" a page while on HTTP, but can edit the page on HTTPS. The problem occurs in both IE6 and FF3.6 for one of these users; others have no problem using any browser. I have used Charles Proxy to look at the server responses: no request is made to HTTPS while on HTTP, and all browser requests return successfully. I have cleared all caches known to me on the clients (browser, JVM). Are HTTP and HTTPS caches stored in different locations on the clients' computers? Could a cached encrypted file be being read on the unencrypted port?

    Read the article

  • How to programmatically record the start and end of an MVC FileResult for logging of incomplete downloads

    - by Richard
    Hi, I have been investigating how my ASP.NET MVC application can log unsuccessful / incomplete downloads. I have a controller handling the file requests which currently returns a FileResult. What I need to record is the IP address, the filename and when the download started, then the same data and when it completed. I have looked at intercepting the request start and end with an HttpModule, and also at IIS 7 failed request tracing, but I'm not sure which route would be best, and I have the feeling that maybe I am missing an obvious answer to the problem. Does anyone have any suggestions or know of any alternatives? This seems like something many people would want to know from their web server. Thanks for your help.

    Read the article

  • Storing millions of URLs in a database for fast pattern matching

    - by Paras Chopra
    I am developing a web-analytics kind of system which needs to log the referring URL, landing page URL and search keywords for every visitor on the website. What I want to do with this collected data is to allow the end user to query it, such as "Show me all visitors who came from Bing.com searching for a phrase that contains 'red shoes'" or "Show me all visitors who landed on a URL that contained 'campaign=twitter_ad'", etc. Because this system will be used on many big websites, the amount of data that needs to be logged will grow really, really fast. So, my questions: a) what would be the best strategy for logging so that scaling the system doesn't become a pain; b) how do I use that architecture for rapid querying of arbitrary requests? Is there a special method of storing URLs so that querying them gets faster? In addition to the MySQL database that I use, I am exploring (and open to) other alternatives better suited for this task.

    Read the article

  • RESTful, statelessness and sessions

    - by Per Arneng
    A RESTful service has a rule that it should be stateless. Being stateless, it does not allow a session to be created and maintained by sending a session key between the client and the server and then holding session state on the server. If I look at the definition of a stateless server on Wikipedia (http://en.wikipedia.org/wiki/Stateless_server): "A stateless server is a server that treats each request as an independent transaction that is unrelated to any previous request". It states that a request should be unrelated to any previous request. In practice this means that any type of authentication will compare the credentials of a user against state on the server that was created by a previous operation. So a service called login is related to and dependent on the state that has been created by previous requests (e.g. create_user and/or change_password). In my view you are breaking statelessness by doing authentication. My point is that people complain that having sessions in a RESTful service breaks statelessness, but doing authentication breaks the same rule. What do you think?

    Read the article

  • How to assign permissions for Copy/Paste on Windows

    - by jalchr
    Well, as everyone knows, there is no way you can assign permissions for copy/paste of files on the Windows platform. I need to control the copy process from a central file server, in a way that lets me know:

    - which user performed the copy
    - which files were copied
    - where he pasted them
    - the total size of the data copied
    - the time of the copy operation

    If the user exceeds the allowed "copy limit", a dialog box asks him to enter administrative credentials or denies him (as configured). All this data should be stored in a file for later review or sent by email. I need to collect this data by putting a utility program on the server itself, without any other installation on client computers. I know about monitoring the Clipboard, but which clipboard would it be: the user's clipboard or the server's clipboard? And what about drag-and-drop operations, which don't even pass through the clipboard? Any knowledge of whether SystemFileWatcher is useful in such a case? Any ideas?

    Read the article

  • What's the best way to stop this app crash when buttons are rapidly pressed? (iPhone, Obj-C)

    - by dubbeat
    Hi, I'm not entirely sure why this crash is happening and I'd like to get advice on the best way to deal with it. My app has 3 buttons. Each button requests a different XML file from a server, which is used to populate a table view. If I rapidly press the buttons in sequence (button 1, button 2, button 3, button 1, button 2, button 3, button 1, button 2, button 3) the application quits. What could be causing this? Would it be on the NSURLRequest side of things or the table-view population side? How would you suggest I stop this behaviour? I was going to just set a boolean "isRequesting" to true when a button is pressed and set it to false when the table view has finished populating. If any button is pressed while isRequesting is true, it does nothing. Does that sound wise, or is there a better way? Thanks, dub
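
    For what it's worth, a minimal sketch of the guard-flag idea proposed in the question (the class, property and method names are invented; the actual download and parsing code is elided):

        #import <UIKit/UIKit.h>

        @interface FeedViewController : UIViewController
        @property (nonatomic, assign, getter=isRequesting) BOOL requesting;
        - (IBAction)feedButtonTapped:(UIButton *)sender;
        - (void)feedDidFinishLoading;
        @end

        @implementation FeedViewController

        - (IBAction)feedButtonTapped:(UIButton *)sender {
            if (self.isRequesting) {
                return;                          // ignore taps while a download is in flight
            }
            self.requesting = YES;
            // ... start the NSURLRequest for the feed that belongs to sender.tag ...
        }

        // Call this once the XML has been parsed and the table view reloaded.
        - (void)feedDidFinishLoading {
            self.requesting = NO;
        }

        @end

    Disabling the buttons (button.enabled = NO) while the request runs achieves the same thing and gives the user visual feedback.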

    Read the article

  • Performance monitor shows 4294967293 sessions active

    - by TGnat
    I have an ASP.NET 3.5 website running in IIS 6 on Windows Server 2003 R2. It is a relatively small internal application that probably serves fewer than ten users at any given time. The server has 4 GB of memory and shows that 3+ GB are available while the site is active. Just minutes after restarting the web application, Performance Monitor shows a whopping 4,294,967,293 sessions active! I am fairly certain that this number is incorrect; at the time of this reading there had been only 100 requests to the website. Has anyone else experienced this kind of odd behavior from PerfMon? Any ideas on how to get an accurate reading? UPDATE: After running for about an hour, the number of active sessions has dropped by 4, so it does seem to be responding to sessions timing out.

    Read the article

  • Detect aborted connection during ASIO request

    - by Tim Sylvester
    Is there an established way, in the Asio framework, to determine whether the other end of a TCP connection is closed without sending any data? Using Boost.Asio for a server process, if the client times out or otherwise disconnects before the server has responded to a request, the server doesn't find this out until it has finished the request and generated a response to send, at which point the send immediately generates a connection-aborted error. For some long-running requests, this can lead to clients canceling and retrying over and over, piling up many instances of the same request running in parallel, making them take even longer and "snowballing" into an avalanche that makes the server unusable. Essentially, hitting F5 over and over becomes a denial-of-service attack. Unfortunately I can't start sending a response until the request is complete, so "streaming" the result out is not an option; I need to be able to check at key points during the request processing and stop that processing if the client has given up.
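
    One approach that is often suggested (sketched below, not a drop-in implementation): while the long-running work executes, keep a speculative async_read_some pending on the client socket. A well-behaved client sends nothing after its request, so the read only completes when the peer closes or resets the connection, at which point a flag is set that the request handler can poll at key points. Whether this fits depends on how the long-running work is threaded relative to the io_service.

        #include <boost/asio.hpp>
        #include <array>
        #include <atomic>
        #include <cstddef>

        // Minimal sketch: not a full server, just the disconnect probe.
        class request_guard {
        public:
            explicit request_guard(boost::asio::ip::tcp::socket& socket)
                : socket_(socket), client_gone_(false)
            {
                socket_.async_read_some(
                    boost::asio::buffer(probe_),
                    [this](const boost::system::error_code& ec, std::size_t /*bytes*/) {
                        // eof / connection_reset here means the client went away;
                        // unexpected extra data (no error) leaves the flag untouched.
                        if (ec) client_gone_ = true;
                    });
            }

            bool client_gone() const { return client_gone_; }

        private:
            boost::asio::ip::tcp::socket& socket_;
            std::array<char, 1> probe_;
            std::atomic<bool> client_gone_;
        };

        // Inside the long-running handler, at convenient checkpoints:
        //     if (guard.client_gone()) { /* abandon the work */ }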

    Read the article

  • Multiple Broadcast Messages with Less Data, or Fewer Broadcast Messages with More Data?

    - by niko
    Hi, I am developing an application which communicates with a server. The application can perform log-in and get different parameters from the server. The application consists of a RESTful client (a custom class for making requests), a Communication Service (the service which runs in the background) and the main activity. For now I have created multiple broadcast messages and multiple broadcast receivers in the main activity, so when the application performs the login operation a receiver (loginBroadcastReceiver) in the main activity receives a message, and when another parameter is received from the server a different message is broadcast and another receiver handles it. This way, however, the application's performance is poor, but I am not sure whether that is due to the multiple broadcast receivers. Does anyone know the best way to exchange data between a service and the main activity: is it better to create a single broadcast receiver and retrieve all parameters from the message, or to register multiple broadcast receivers for multiple parameters? I would appreciate any useful resources on the topic, because I'm writing my thesis and it would be good if the solution could be explained.
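
    A rough sketch of the single-receiver variant, where the background service broadcasts one Intent action and packs the parameter name/value into extras, so the activity dispatches internally instead of registering one receiver per parameter (the action string and extra keys below are invented):

        import android.app.Activity;
        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;
        import android.content.IntentFilter;

        public class MainActivity extends Activity {

            private final BroadcastReceiver serviceReceiver = new BroadcastReceiver() {
                @Override
                public void onReceive(Context context, Intent intent) {
                    String param = intent.getStringExtra("param_name");
                    String value = intent.getStringExtra("param_value");
                    if ("login".equals(param)) {
                        // update the login UI with `value`
                    } else {
                        // update whatever view is bound to `param`
                    }
                }
            };

            @Override
            protected void onResume() {
                super.onResume();
                registerReceiver(serviceReceiver, new IntentFilter("com.example.SERVICE_RESULT"));
            }

            @Override
            protected void onPause() {
                unregisterReceiver(serviceReceiver);
                super.onPause();
            }
        }

    A handful of receivers versus one is unlikely to dominate performance by itself; the single receiver mostly simplifies the bookkeeping.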

    Read the article

  • IIS 7.0 - Every site suddenly redirecting root request to forms authentication

    - by Pittsburgh DBA
    Suddenly, IIS 7.0 is redirecting every request for the root of any domain hosted on the box to ~/Account/Logon, which is our Forms Authentication redirect. Additionally, some JavaScript and image requests are being similarly redirected, but not other aspx pages. This is not desirable. Nobody will admit to changing anything. Any ideas? EDIT: It turns out that something has gone wrong with the disk permissions. Can anyone point me to the way things are supposed to be in Windows Server 2008 for a standard ASP.Net installation? The disk permissions are out of whack now.

    Read the article

  • Is it better to echo JavaScript in raw format with PHP, or echo a script include that has been minified?

    - by Scarface
    Hey guys, quick question: I am currently echoing a lot of JavaScript that is conditional on login status and other variables. I was wondering if it would be better to simply echo a script include like <script type="text/javascript" src="javascript/openlogin.js"></script> that has been run through a minifying program and gzipped, or to echo the full script in raw format. The latter is messier to me, but it reduces HTTP requests, while the former would probably be smaller but take more CPU. Just wondering what some other people think. Thanks in advance for any advice.

    Read the article

  • Please suggest some alternatives to Drupal

    - by abovesun
    Drupal proposes a completely different approach to web development (compared with RoR-like frameworks) and it is extremely good from a development-speed perspective. For example, it is quite easy to clone 90% of Stack Overflow's functionality using Drupal. But it has several big drawbacks:

    - it is f''cking slow (100-400 requests per page)
    - the DB structure is very complicated: you need at least 2 tables for an easy content (entity) type, and CCK fields very easily generate tons of new DB tables
    - it is anti-object-oriented, rather aspect-oriented
    - bad "view" layer implementation, no straightforward layouts, and so on

    After all these items I can say I like Drupal, but I would like something similar, more elegant and more object-oriented. Probably something like http://drupy.net/ - a Drupal emulation on top of Django. P.S. I did not write this question to start a new holy war; just write if you know an alternative that takes a similar approach.

    Read the article

  • Any way to use the JavaScript API via iOS? And a problem with FQL query responses

    - by Assaf b
    Hi, I'm developing an iPhone application with FB Connect. The JavaScript API includes really powerful methods like wait.on for combining requests... Is there any way to use those API methods via iOS and Xcode? About the FQL responses: I'm using both the request:didReceiveResponse: and request:didLoad: methods. All the FQL queries I send trigger didReceiveResponse, but not all of them trigger the second one (didLoad).

    @"SELECT uid,eid FROM event_member WHERE uid in (select uid2 from friend where uid1=%d limit 100)", userID

    When the limit is 1-2 it triggers them all; when it grows to 100 (friends to fetch) it triggers only the first. Does anyone know this problem? Thanks!

    Read the article

  • So does Apple recommend not using predicates and sort descriptors in an NSFetchRequest?

    - by dontWatchMyProfile
    From the docs: "To summarize, though, if you execute a fetch directly, you should typically not add Objective-C-based predicates or sort descriptors to the fetch request. Instead you should apply these to the results of the fetch. If you use an array controller, you may need to subclass NSArrayController so you can have it not pass the sort descriptors to the persistent store and instead do the sorting after your data has been fetched." I don't get it. What's wrong with using them on fetch requests? Isn't it stupid to get back a whole big bunch of managed objects just to pick out 1% of them in memory, leaving the other 99% floating around as garbage? Isn't it much better to fetch from the persistent store only what you really need, in the order you need it? Probably I got that wrong...

    Read the article
