Search Results

Search found 65392 results on 2616 pages for 'http service'.

Page 78/2616 | < Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >

  • How to launch a Windows service network process to listen to a port on a localhost socket that is visible to logged-in users

    - by rwired
    Here's the code (in a standard TService in Delphi):

      const
        ProcessExe = 'MyNetApp.exe';

      function RunService: Boolean;
      var
        StartInfo: TStartupInfo;
        ProcInfo: TProcessInformation;
        CreateOK: Boolean;
      begin
        CreateOK := False;
        FillChar(StartInfo, SizeOf(TStartupInfo), #0);
        FillChar(ProcInfo, SizeOf(TProcessInformation), #0);
        StartInfo.cb := SizeOf(TStartupInfo);
        CreateOK := CreateProcess(nil, PChar(ProcessEXE), nil, nil, False,
          CREATE_NEW_PROCESS_GROUP + NORMAL_PRIORITY_CLASS,
          nil, PChar(InstallDir), StartInfo, ProcInfo);
        CloseHandle(ProcInfo.hProcess);
        CloseHandle(ProcInfo.hThread);
        Result := CreateOK;
      end;

      procedure TServicel.ServiceExecute(Sender: TService);
      const
        IntervalsBetweenRuns = 4; // number of IntTimes between checks
        IntTime = 250;            // ms
      var
        Count: SmallInt;
      begin
        Count := IntervalsBetweenRuns; // first time, run immediately
        while not Terminated do
        begin
          Inc(Count);
          if Count >= IntervalsBetweenRuns then
          begin
            Count := 0;
            // We check to see if the process is running; if not, we run it.
            // That's all there is to it. If ProcessEXE crashes, this service
            // host will just rerun it.
            if processExists(ProcessEXE) = 0 then
              RunService;
          end;
          Sleep(IntTime);
          ServiceThread.ProcessRequests(False);
        end;
      end;

    MyNetApp.exe is a SOCKS5 proxy listening on port 9870. Users configure their browser to use this proxy, which acts as a secure tunnel/anonymizer. All works perfectly fine on 2000/XP/2003, but on Vista/Win7 with UAC the service runs in Session 0 under LocalSystem, and port 9870 doesn't show up in netstat for the logged-in user or Administrator. It seems UAC is getting in my way. Is there something I can do with SECURITY_ATTRIBUTES or CreateProcess, or with CreateProcessAsUser or impersonation, to ensure that a network socket opened by a service is available to logged-in users on the system? (Note: this app is for mass deployment; I don't have access to user credentials, and I require the user to elevate their privileges to install the service on Vista/Win7.)
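    A quick way to tell whether the listener is actually up (as opposed to merely hidden from a per-session netstat view) is to try a TCP connection from the logged-in user's session. Below is a minimal sketch in Python, assuming the proxy binds to 127.0.0.1:9870 as described above; it is only a diagnostic, not a fix for the Session 0 question itself:

      import socket

      def port_is_open(host: str = "127.0.0.1", port: int = 9870, timeout: float = 2.0) -> bool:
          """Return True if a TCP listener accepts connections on host:port."""
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  return True
          except OSError:
              return False

      if __name__ == "__main__":
          print("proxy reachable" if port_is_open() else "proxy not reachable")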

    Read the article

  • How can I debug a Windows service that crashes?

    - by Christopher
    I have a .NET Windows service that appears to be crashing due to 0xC0000005 (an access violation, according to Dr. Watson). When I attach the VS debugger to it (whether I build with or without symbols), the debugging session simply ends when the service crashes, instead of breaking in to give me a chance to do any investigation. Is that to be expected, or am I doing something wrong? Will using WinDbg let me do something more in real time (obviously, WinDbg lets me do crash dump analysis)? Thanks!

    Read the article

  • ASP.NET: How to enforce the HTTP GET URL format?

    - by Hamish Grubijan
    [Sorry about a messy question. I believe I am targeting .NET 2.0 (for now)] Hi, I am an ASP.NET noob. For starters I am building a page that parses a URL string and populates a table in a database. I want that string to be strictly of the form: http://<server>:<port>/PageName.aspx?A=1&B=2&C=3&D=4&E=5 The order of the arguments does not matter; I just do not want any of them missing, or any extras. Here is what I tried (yes, it is ugly; I just want to get it to work first):

      #if (DEBUG)
      // Maps parameter names to their human readable names.
      // Used for error checking.
      private static Dictionary<string, string> paramNameToDisplayName = new Dictionary<string, string>
      {
          { A, "a" },
          { B, "b" },
          { C, "c" },
          { D, "d" },
          { E, "e" },
          { F, "f" },
      };

      [Conditional("DEBUG")]
      private void validateRequestParameters(HttpRequest request)
      {
          bool endResponse = false;

          // Check that every expected parameter is present.
          foreach (string expectedParameterName in paramNameToDisplayName.Keys)
          {
              if (request[expectedParameterName] == null)
              {
                  Response.Write(String.Format(
                      "No parameter \"{0}\", aka {1} was passed to the configuration generator. Check your URL string / cookie.",
                      expectedParameterName, paramNameToDisplayName[expectedParameterName]));
                  endResponse = true;
              }
          }

          // Check that no unexpected parameters were passed.
          foreach (string actualParameterName in request.Params)
          {
              if (!paramNameToDisplayName.ContainsKey(actualParameterName))
              {
                  Response.Write(String.Format(
                      "The parameter \"{0}\", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.",
                      actualParameterName));
                  endResponse = true;
              }
          }

          if (endResponse)
          {
              Response.End();
          }
      }
      #endif

    and it works ok, except that it complains about all sorts of other stuff: http://localhost:1796/AddStatusUpdate.aspx?X=0 No parameter "A", aka a was passed to the configuration generator. Check your URL string / cookie.No parameter "B", aka b was passed to the configuration generator. Check your URL string / cookie.No parameter "C", aka c was passed to the configuration generator. Check your URL string / cookie.No parameter "D", aka d was passed to the configuration generator. Check your URL string / cookie.No parameter "E", aka e was passed to the configuration generator. Check your URL string / cookie.No parameter "F", aka f was passed to the configuration generator. Check your URL string / cookie.The parameter "X", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "ASP.NET_SessionId", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "ALL_HTTP", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "ALL_RAW", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "APPL_MD_PATH", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "APPL_PHYSICAL_PATH", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "AUTH_TYPE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "AUTH_USER", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "AUTH_PASSWORD", was passed to the configuration generator, but it was not expected. 
Check your URL string / cookie.The parameter "LOGON_USER", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "REMOTE_USER", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_COOKIE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_FLAGS", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_ISSUER", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_KEYSIZE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_SECRETKEYSIZE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_SERIALNUMBER", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_SERVER_ISSUER", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_SERVER_SUBJECT", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CERT_SUBJECT", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CONTENT_LENGTH", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "CONTENT_TYPE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "GATEWAY_INTERFACE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTPS", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTPS_KEYSIZE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTPS_SECRETKEYSIZE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTPS_SERVER_ISSUER", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTPS_SERVER_SUBJECT", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "INSTANCE_ID", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "INSTANCE_META_PATH", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "LOCAL_ADDR", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "PATH_INFO", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "PATH_TRANSLATED", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "QUERY_STRING", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "REMOTE_ADDR", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "REMOTE_HOST", was passed to the configuration generator, but it was not expected. 
Check your URL string / cookie.The parameter "REMOTE_PORT", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "REQUEST_METHOD", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "SCRIPT_NAME", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "SERVER_NAME", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "SERVER_PORT", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "SERVER_PORT_SECURE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "SERVER_PROTOCOL", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "SERVER_SOFTWARE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "URL", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_CACHE_CONTROL", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_CONNECTION", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_ACCEPT", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_ACCEPT_CHARSET", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_ACCEPT_ENCODING", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_ACCEPT_LANGUAGE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_COOKIE", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_HOST", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.The parameter "HTTP_USER_AGENT", was passed to the configuration generator, but it was not expected. Check your URL string / cookie.Thread was being aborted. Is there some way for me to separate the implicit and the explicit parameters, or is it not doable? Should I even bother? Perhaps the philosophy of get is to just throw away that what is not needed. Thanks!
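    The underlying issue is that ASP.NET's Request.Params is a combined collection: it merges the query string with form fields, cookies and server variables, which is where ALL_HTTP, REMOTE_ADDR and friends come from; validating only the query-string keys (Request.QueryString) avoids those false positives. As a language-neutral sketch of that exact-match check (in Python, reusing the parameter names from the question), the logic looks like this:

      from urllib.parse import urlsplit, parse_qs

      EXPECTED = {"A", "B", "C", "D", "E", "F"}  # parameter names from the question

      def validate_query(url: str):
          """Return complaints; an empty list means the query string is exactly the expected set."""
          params = parse_qs(urlsplit(url).query, keep_blank_values=True)
          missing = sorted(EXPECTED - params.keys())
          extra = sorted(params.keys() - EXPECTED)
          errors = [f'No parameter "{name}" was passed.' for name in missing]
          errors += [f'The parameter "{name}" was not expected.' for name in extra]
          return errors

      print(validate_query("http://localhost:1796/AddStatusUpdate.aspx?X=0"))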

    Read the article

  • Solution 6: Kill a Non-Clustered Process during Two-Node Cluster Failover

    - by StanleyGu
    Using Visual Studio 2008 and C#, I developed a Windows service A and deployed it to two nodes of a Windows Server 2008 failover cluster. Service A is part of the failover cluster service, which means that when a failover occurs at node 1, the cluster service fails service A over from node 1 to node 2. One of the tasks implemented by service A is to start, monitor or kill a process B. Process B is installed on both nodes but is not part of the failover cluster service. When a failover occurs at node 1, the cluster service does not fail process B over from node 1 to node 2, and process B continues running at node 1. The requirement is: when a failover occurs at node 1, we want the process B running at node 1 to be killed, but we do not want process B to be part of the failover cluster service. The first idea that pops up immediately is to put some code in an event handler triggered by the failover in service A. The effect of a failover on service A is similar to using Task Manager to kill the service's process, but there is no Windows service event that is triggered by killing the service's process. The events related to terminating a Windows service are OnStop and OnShutdown, but killing the process of service A triggers neither of them; the OnStop event can only be triggered by stopping the service through the Services Control Manager or the Services management console. Apparently, the first idea is not feasible. The second idea is to put code into the OnStart event handler of service A. When a failover occurs at node 1, service A is killed at node 1 and started at node 2. During startup, service A at node 2 kills the process B that is still running at node 1. It is a workaround, and it works very well. The C# implementation within the OnStart event handler does the following:
    1. Capture the server names of the two nodes from App.config.
    2. Determine the server name of the remote node.
    3. Kill the process B running on the remote node.
    Check here for sample code.
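    A rough sketch of step 3 (terminating a process on the other node), written here in Python rather than C# and assuming the service account has admin rights on the remote node so the built-in Windows taskkill tool can reach it; the node and process names are placeholders:

      import subprocess

      def kill_remote_process(remote_node: str, process_name: str) -> bool:
          """Ask remote_node to force-terminate every instance of process_name via taskkill."""
          result = subprocess.run(
              ["taskkill", "/S", remote_node, "/IM", process_name, "/F"],
              capture_output=True, text=True,
          )
          return result.returncode == 0  # 0 means the kill request succeeded

      # Hypothetical node and process names, for illustration only.
      if kill_remote_process("NODE2", "ProcessB.exe"):
          print("remote process terminated")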

    Read the article

  • Google Compute Engine Office Hours: August 22, 2012

    Google Compute Engine Office Hours: August 22, 2012 Office hours with the Google Compute Engine Team on August 22, 2012. The slides can be viewed here: goo.gl The tech talk portion of this session was about OAuth and Service Accounts, an area which the Google Compute Engine team has done a great job simplifying. From: GoogleDevelopers Views: 80 7 ratings Time: 52:42 More in Science & Technology

    Read the article

  • No updates in my Raring

    - by zatloukal-frantisek
    Since upgrading from Quantal to Raring I am not receiving any updates. For example, the firefox package: I have version 17 installed, and apt-get update && apt-get upgrade does not find any updates. Output from apt-show-versions:

      fanys@fanys-netbook:~$ apt-show-versions firefox
      firefox 17.0+build2-0ubuntu0.12.10.1 newer than version in archive
      fanys@fanys-netbook:~$ apt-show-versions unity
      unity/raring uptodate 6.12.0-0ubuntu1

    I tried removing the contents of /var/lib/apt/lists/ and redoing the package refresh (apt-get update), but the issue remains. /etc/apt/sources.list contents:

      # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
      # newer versions of the distribution.
      deb http://cz.archive.ubuntu.com/ubuntu/ raring main restricted
      deb-src http://cz.archive.ubuntu.com/ubuntu/ raring main restricted
      ## Major bug fix updates produced after the final release of the
      ## distribution.
      deb http://cz.archive.ubuntu.com/ubuntu/ raring-updates main restricted
      deb-src http://cz.archive.ubuntu.com/ubuntu/ raring-updates main restricted
      ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
      ## team. Also, please note that software in universe WILL NOT receive any
      ## review or updates from the Ubuntu security team.
      deb http://cz.archive.ubuntu.com/ubuntu/ raring universe
      deb-src http://cz.archive.ubuntu.com/ubuntu/ raring universe
      deb http://cz.archive.ubuntu.com/ubuntu/ raring-updates universe
      deb-src http://cz.archive.ubuntu.com/ubuntu/ raring-updates universe
      ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
      ## team, and may not be under a free licence. Please satisfy yourself as to
      ## your rights to use the software. Also, please note that software in
      ## multiverse WILL NOT receive any review or updates from the Ubuntu
      ## security team.
      deb http://cz.archive.ubuntu.com/ubuntu/ raring multiverse
      deb-src http://cz.archive.ubuntu.com/ubuntu/ raring multiverse
      deb http://cz.archive.ubuntu.com/ubuntu/ raring-updates multiverse
      deb-src http://cz.archive.ubuntu.com/ubuntu/ raring-updates multiverse
      ## N.B. software from this repository may not have been tested as
      ## extensively as that contained in the main release, although it includes
      ## newer versions of some applications which may provide useful features.
      ## Also, please note that software in backports WILL NOT receive any review
      ## or updates from the Ubuntu security team.
      deb http://security.ubuntu.com/ubuntu raring-security main restricted
      deb-src http://security.ubuntu.com/ubuntu raring-security main restricted
      deb http://security.ubuntu.com/ubuntu raring-security universe
      deb-src http://security.ubuntu.com/ubuntu raring-security universe
      deb http://security.ubuntu.com/ubuntu raring-security multiverse
      deb-src http://security.ubuntu.com/ubuntu raring-security multiverse
      ## Uncomment the following two lines to add software from Canonical's
      ## 'partner' repository.
      ## This software is not part of Ubuntu, but is offered by Canonical and the
      ## respective vendors as a service to Ubuntu users.
      deb http://archive.canonical.com/ubuntu raring partner
      deb-src http://archive.canonical.com/ubuntu raring partner
      ## This software is not part of Ubuntu, but is offered by third-party
      ## developers who want to ship their latest software.
      deb http://extras.ubuntu.com/ubuntu raring main
      deb-src http://extras.ubuntu.com/ubuntu raring main
      deb http://cz.archive.ubuntu.com/ubuntu/ raring-proposed main universe restricted multiverse
      deb http://cz.archive.ubuntu.com/ubuntu/ raring-backports main universe restricted multiverse

    I have had no updates from dist-upgrade for 4 days. There is one package kept at its current version: libexttextcat-data. Thanks in advance

    Read the article

  • Google I/O 2012 - Getting Started with Google+ History API [CONF]

    Google I/O 2012 - Getting Started with Google+ History API [CONF] Timothy Jordan, Daniel Dulitz Google+ history presents new opportunities to increase traffic to your site and engagement with your content by allowing users to connect their Google profile to your site. This session will explore the value of Google+ history and review basic implementation. Special guests will be on hand to describe their early success with this new service. For all I/O 2012 sessions, go to developers.google.com From: GoogleDevelopers Views: 92 6 ratings Time: 33:56 More in Science & Technology

    Read the article

  • Getting a WCF service hosted in IIS 7.0

    - by gregarobinson
    This was not as easy as I thought it would be... lots of errors. These links saved me:
    http://blah.winsmarts.com/2008-4-Host_a_WCF_Service_in_IIS_7_-and-amp;_Windows_2008_-_The_right_way.aspx
    http://blog.donnfelker.com/2007/03/26/iis-7-this-configuration-section-cannot-be-used-at-this-path/

    Read the article

  • Building a Charts Dashboard with Google Apps Script

    Building a Charts Dashboard with Google Apps Script In this Google Developers Live show, join Kalyan Reddy as he talks about how to build a Charts dashboard in Google Apps Script. We'll be talking about the Charts Service and how to wire this up to data that's dynamically coming in from Google Spreadsheets and other sources. From: GoogleDevelopers Views: 97 7 ratings Time: 44:17 More in Science & Technology

    Read the article

  • Chrome Apps Office Hours: Building Apps with Web Intents

    Chrome Apps Office Hours: Building Apps with Web Intents Ask and vote for questions at: goo.gl Web Intents are the core mechanism for building interconnected apps on the Chrome platform. Join Paul Kinlan and Paul Lewis next week as we show you how to build client apps that send data to other web apps, and a service app that will receive input from any intent invocation. From: GoogleDevelopers Views: 0 0 ratings Time: 00:00 More in Science & Technology

    Read the article

  • Google I/O 2012 - Google Cloud Messaging for Android

    Google I/O 2012 - Google Cloud Messaging for Android Francesco Nerieri Cloud-to-device-messaging (C2DM) is coming out of beta and getting a new name: Google Cloud Messaging for Android. GCM for Android incorporates the lessons we learned in the C2DM beta, many of which take the form of new features. This session will cover the new service end-to-end and in detail. For all I/O 2012 sessions, go to developers.google.com From: GoogleDevelopers Views: 419 11 ratings Time: 52:11 More in Science & Technology

    Read the article

  • Google Maps API Round-up

    Google Maps API Round-up This week, Mano Marks and Paul Saxman go over recent launches and things you might have missed with the Google Maps APIs, including the new Google Time Zone API, traffic estimates with the Directions API (for enterprise customers), and the Places Autocomplete API query results and data service enhancements. From: GoogleDevelopers Views: 0 0 ratings Time: 00:00 More in Education

    Read the article

  • How do I make an HTTP POST request to a URL from within a C++ DLL?

    - by tjorriemorrie
    Hi, there's an open-source application I need to use. It allows you to use a custom DLL. Unfortunately, I can't code C++; I don't like it and I don't want to learn it. I do know PHP very well, however, so I'd rather do my logic within a PHP application. I'm therefore thinking about posting the data from the C++ DLL to a URL on my localhost (I have my local server set up; that's not the problem). I need to post a large number of variables (hence a POST and not a GET request). The return value will only be one (int) variable, either 0, 1 or 2. So I need a C++ function that: 1) posts the variables to a URL, and 2) waits for and receives the answer. The data can be in XML, SOAP, JSON, whatever; it doesn't matter. Is there anyone who can write a little C++ HTTP function for me? Pretty please? ;)
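    For reference, the round trip the DLL has to perform is small; here is a sketch of the same request/response contract in Python (the endpoint URL and field names are hypothetical placeholders), which can serve as a spec for whichever C++ HTTP library ends up being used (WinHTTP, WinINet, libcurl, and so on):

      from urllib.parse import urlencode
      from urllib.request import urlopen

      def post_and_get_code(url: str, fields: dict) -> int:
          """POST form-encoded fields and return the single integer the PHP side answers with."""
          body = urlencode(fields).encode("utf-8")          # application/x-www-form-urlencoded
          with urlopen(url, data=body, timeout=5) as resp:  # passing data= makes this a POST
              return int(resp.read().decode("utf-8").strip())

      # Hypothetical endpoint and field names, for illustration only.
      code = post_and_get_code("http://localhost/handler.php", {"var1": "a", "var2": "b"})
      print(code)  # expected: 0, 1 or 2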

    Read the article

  • POST and PUT requests – is it just the convention?

    - by bckpwrld
    I've read quite a few articles on the difference between POST and PUT and on when the two should be used. But there are still a few things confusing me (hopefully the questions will make some sense):

    1) We should use PUT to create resources when we want clients to specify the URI of the newly created resource, and we should use POST to create resources when we let the service generate the URI of the newly created resource.

    a) Is it just by convention that a POST create request doesn't contain the URI of the newly created resource, or is it that a POST create request actually can't contain the URI of the newly created resource?

    b) PUT has idempotent semantics and thus can be safely used for absolute updates (i.e. we send the entire state of the resource to the server), but not for relative updates (i.e. we send just the changes to the resource state), since that would violate its semantics. But I assume it's still possible for PUT to send relative updates to the server; it's just that in that case the PUT update won't be idempotent?

    2) I've read somewhere that we should "use POST to append a resource to a collection identified by a service-generated URI".

    a) What exactly does that mean? That if the URIs for existing resources were generated by the server (thus the resources were created via POST), then ALL subsequent resources should also be created via POST? Thus, in such a situation no resource should be created via PUT?

    b) If my assumption under a) is correct, could you elaborate on why we shouldn't create some resources via POST and some via PUT (assuming the server already contains a collection of resources created via POST)?

    REPLY:

    1) Please correct me if I'm wrong, but from your post and from the link you've posted, it seems:

    a) The Request-URI in POST is interpreted by the server as the URI of the service. Thus, it could just as easily be interpreted as the URI of a newly created resource, if the server code was written to recognize the Request-URI as such.

    b) Similarly, PUT is able to send relative updates; it's just that service code is usually written such that it will complain if PUT updates are relative.

    2) Usually, create has fallen into the POST camp, because of the idea of "appending to a collection." It's become the way to append a resource to a list of resources. I don't quite understand the reasoning behind the idea of "appending to a collection" and why this idea prefers POST for create. Namely, if we create 10 resources via PUT, then the server will contain a collection of 10 resources, and if we then create another resource, the server will append this resource to that collection (which will now contain 11 resources)?! Uh, this is kinda confusing. Thank you
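    To make 1) concrete, here is a short sketch of the two creation styles being contrasted (Python with the third-party requests library; the example.org URIs are placeholders):

      import requests

      # PUT: the client chooses the URI of the new resource and sends its full state.
      # Repeating the same PUT leaves the server in the same state (idempotent).
      requests.put(
          "https://api.example.org/users/alice",
          json={"name": "Alice", "email": "alice@example.org"},
      )

      # POST: the client appends to a collection; the server chooses the new URI
      # and typically returns it in the Location header of a 201 Created response.
      resp = requests.post(
          "https://api.example.org/users",
          json={"name": "Bob", "email": "bob@example.org"},
      )
      print(resp.status_code, resp.headers.get("Location"))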

    Read the article

  • Google I/O Sandbox Case Study: Assistly

    Google I/O Sandbox Case Study: Assistly We interviewed Assistly at the Google I/O Sandbox on May 11, 2011. They explained to us the benefits of building on Google Apps. Assistly is a customer management system that helps companies deliver top-quality customer service. For more information about developing with Google Apps, visit: code.google.com For more information on Assistly, visit: www.assistly.com From: GoogleDevelopers Views: 21 0 ratings Time: 01:29 More in Science & Technology

    Read the article

  • How can I build and parse HTTP URLs / URIs / paths in Perl?

    - by Robert S. Barnes
    I have a wget-like script which downloads a page and then retrieves all the files linked in IMG tags on that page. Given the URL of the original page and the link extracted from the IMG tag in that page, I need to build the URL for the image file I want to retrieve. Currently I use a function I wrote:

      sub build_url {
          my ( $base, $path ) = @_;

          # if the path is absolute just prepend the domain to it
          if ($path =~ /^\//) {
              ($base) = $base =~ /^(?:http:\/\/)?(\w+(?:\.\w+)+)/;
              return "$base$path";
          }

          my @base = split '/', $base;
          my @path = split '/', $path;

          # remove a trailing filename
          pop @base if $base =~ /[[:alnum:]]+\/[\w\d]+\.[\w]+$/;

          # check for relative paths
          my $relcount = $path =~ /(\.\.\/)/g;
          while ( $relcount-- ) {
              pop @base;
              shift @path;
          }

          return join '/', @base, @path;
      }

    The thing is, I'm surely not the first person to solve this problem, and in fact it's such a general problem that I assume there must be some better, more standard way of dealing with it, using either a core module or something from CPAN - although a core module would be preferable. I was thinking about File::Spec but wasn't sure if it has all the functionality I would need.
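    For comparison, this is exactly the kind of resolution that standard URL libraries already implement (in Perl, the URI module from CPAN handles it); here is a sketch of the same cases using Python's urllib:

      from urllib.parse import urljoin

      base = "http://example.com/photos/2010/index.html"  # page the IMG tag came from

      # Absolute path, "../" relative path, and bare filename all resolve correctly.
      print(urljoin(base, "/images/logo.png"))   # http://example.com/images/logo.png
      print(urljoin(base, "../thumbs/cat.jpg"))  # http://example.com/photos/thumbs/cat.jpg
      print(urljoin(base, "banner.gif"))         # http://example.com/photos/2010/banner.gif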

    Read the article

  • How to enforce that an HTTP client uses conditional requests for updates?

    - by Day
    In a (proper RMM level 3) RESTful HTTP API, I want to enforce the fact that clients should make conditional requests when updating resources, in order to avoid the lost update problem. What would be an appropriate response to return to clients that incorrectly attempt unconditional PUT requests? I note that the (abandoned?) mod_atom returns a 405 Method Not Allowed with an Allow header set to GET, HEAD (view source) when an unconditional update is attempted. This seems slightly misleading - to me it implies that PUT is never a valid method to attempt on the resource. Perhaps the response just needs an entity body explaining that If-Match or If-Unmodified-Since must be used to make the PUT request conditional, in which case it would be allowed? Or perhaps a 400 Bad Request with a suitable explanation in the entity body would be a better solution? But again, this doesn't feel quite right, because it's using a 400 response for a violation of application-specific semantics when RFC 2616 says (my emphasis): The request could not be understood by the server due to malformed syntax. But then again, I think that using 400 Bad Request for application-specific semantics is becoming a widely accepted pragmatic solution (citation needed!), and I'm just being overly pedantic.
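    For reference, the conditional update the server wants to require looks like this from the client side (a sketch using Python's third-party requests library; the URI is a placeholder, and 412 Precondition Failed is the standard response when the If-Match precondition no longer holds):

      import requests

      url = "https://api.example.org/articles/42"  # placeholder resource

      # Fetch the current representation and remember its validator.
      current = requests.get(url)
      etag = current.headers["ETag"]

      # Conditional update: applied only if the resource is unchanged on the server.
      resp = requests.put(
          url,
          json={"title": "Updated title"},
          headers={"If-Match": etag},
      )

      if resp.status_code == 412:
          print("Precondition failed: the resource changed since we fetched it.")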

    Read the article

  • Over 200 active requests like "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"

    - by Stefan Lasiewski
    Some details: Webserver: Apache/2.2.13 (FreeBSD) mod_ssl/2.2.13 OpenSSL/0.9.8e. OS: FreeBSD 7.2-RELEASE (this is a FreeBSD jail). I believe I use the Apache 'prefork' MPM (I run the default for FreeBSD) and the default value for MaxClients (256). I have enabled mod_status, with "ExtendedStatus On". When I view /server-status, I see a handful of regular requests. I also see over 230 requests from the localhost, like these:

      37-0 - 0/0/1 . 0.00 1510 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
      38-0 - 0/0/1 . 0.00 1509 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
      39-0 - 0/0/3 . 0.00 1482 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
      40-0 - 0/0/6 . 0.00 1445 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0

    I also saw about 2417 requests yesterday from the localhost, like these:

      Apr 14 11:16:40 192.168.16.127 httpd[431]: www.example.gov 127.0.0.2 - - [15/Apr/2010:11:16:40 -0700] "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"

    The page at http://wiki.apache.org/httpd/InternalDummyConnection says "These requests are perfectly normal and you do not, in general, need to worry about them", but I'm not so sure. Why are there over 230 of these? Are these active connections? If I have "MaxClients 256" and over 230 of these connections, it seems that my webserver is dangerously close to running out of available connections. It also seems like Apache should only need a handful of these "internal dummy connections". We actually had two unexplained outages last night, and I am wondering if these "internal dummy connections" caused us to run out of available connections.

    Read the article

  • Apache web logs "GET http/1.1" 200 error?

    - by songdogtech
    I've got some puzzling (at least to me) errors that show up in my webserver logs. What's puzzling is that they show my error page being referenced, but that error page doesn't show up as being displayed in hits on statcounter.com. Can someone tell me the difference between these two kinds of errors? Or is only one a "real" error? This is in WordPress, and I have the 404.php file in my theme redirecting to a static WordPress page at mydomain.com/error/

    These first two log entries show /error/, but a "hit" on my /error/ page doesn't show in my web stats. The page referenced - /2009/08/delete-preference-files/ - exists, and shows a "hit" at StatCounter.

      92.17.232.242 - - [13/Mar/2010:01:45:43 -0800] "GET /error/ HTTP/1.1" 200 4012 "http://markratledge.com/2009/08/delete-preference-files/" "Mozilla/5.0 (Windows;U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)" 732 4432
      92.17.232.242 - - [13/Mar/2010:01:47:11 -0800] "GET /error/ HTTP/1.1" 200 4012 "http://markratledge.com/wp-content/themes/default/style.css" "Mozilla/5.0(Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)" 861 4432

    These next two are error entries that do display my /error/ page, because /error/ gets a "hit" in StatCounter. The page /cookies/ doesn't exist.

      72.174.16.202 - - [13/Mar/2010:10:53:37 -0800] "GET /cookies HTTP/1.1" 302 21 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729)" 394 642
      72.174.16.202 - - [13/Mar/2010:10:53:38 -0800] "GET /error/ HTTP/1.1" 200 4012 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729)" 393 4432

    Read the article

  • IIS 7 + Tomcat 7 - how to reach http://localhost:8080/my_app under e.g. http://my_app.local

    - by Sk8erPeter
    In brief: IIS 7 + Apache Tomcat 7 + isapi_redirect.dll. I have a deployed and working Tomcat application available under http://localhost:8080/my_app. I would like to see the same content under http://my_app.local (and NOT the default Tomcat site [which you can see below]). How can I do that?

    Longer explanation: I have IIS 7 (7.5.7600.16385) and Apache Tomcat/7.0.22 installed. I deployed an application (let's call it "my_app") in Tomcat, which can now be reached at http://localhost:8080/my_app and works fine. I added a new web site in the IIS panel with the path of the Tomcat-deployed my_app, which looks like this: "c:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\my_app". I bound the host name my_app.local. After that, I configured isapi_redirect.dll like this (or that). Now, when I open http://my_app.local, I can see the default Tomcat site (see below). BUT under http://my_app.local I would like to see the same content as under http://localhost:8080/my_app. How can I do that? Thank you very much in advance!!

    My config files: isapi_redirect.properties (I made a directory junction to c:\tomcat, so this also works :) ), workers.properties, uriworkermap.properties, rewrites.properties (empty)

    Read the article

  • haproxy + nginx: https trailing slashes redirected to http

    - by user1719907
    I have a setup where HTTP(S) traffic goes from HAProxy to nginx:

              HAProxy      nginx
      HTTP  ----> :80  ----> :9080
      HTTPS ----> :443 ----> :9443

    I'm having trouble with implicit redirects caused by trailing slashes going from https to http, like this:

      $ curl -k -I https://www.example.com/subdir
      HTTP/1.1 301 Moved Permanently
      Server: nginx/1.2.4
      Date: Thu, 04 Oct 2012 12:52:39 GMT
      Content-Type: text/html
      Content-Length: 184
      Location: http://www.example.com/subdir/

    The reason obviously is that HAProxy works as the SSL unwrapper, so nginx sees only http requests. I've tried setting X-Forwarded-Proto to https in the HAProxy config, but it does nothing. My nginx setup is as follows:

      server {
          listen 127.0.0.1:9443;
          server_name www.example.com;
          port_in_redirect off;
          root /var/www/example;
          index index.html index.htm;
      }

    And the relevant parts from the HAProxy config:

      frontend https-in
          bind *:443 ssl crt /etc/example.pem prefer-server-ciphers
          default_backend nginxssl

      backend nginxssl
          balance roundrobin
          option forwardfor
          reqadd X-Forwarded-Proto:\ https
          server nginxssl1 127.0.0.1:9443

    Read the article

  • Request sent with version HTTP/0.9

    - by user143224
    I am using Commons HttpClient version 3.1. All my requests have the correct (default) HTTP version in the request line, i.e. HTTP/1.1, except for one request. The following POST request gets the request line as HTTP/0.9:

      server:port/cas/v1/tickets/TGT-1-sUqenNbqUzvkGSWW25lcbaJc0OEcJ6wg5DOj3XDMSwoIBf6s7i-cas-1
      Body: service=*

    I debugged through the HttpClient code and saw that the request line is set to HTTP/1.1, but on the server I see the request coming in as HTTP/0.9. I tried to set the HTTP version explicitly using HttpMethodParams, but it does not help. Does someone have any idea what could be wrong?

      HttpClient client = new HttpClient();
      HostConfiguration hc = client.getHostConfiguration();
      hc.setHost(new URI(url, false));
      PostMethod method = new PostMethod();
      method.setURI(new URI(url, false));
      method.getParams().setUriCharset("UTF-8");
      method.getParams().setHttpElementCharset("UTF-8");
      method.getParams().setContentCharset("UTF-8");
      method.getParams().setVersion(HttpVersion.HTTP_1_1);
      method.addParameter("service", URLEncoder.encode(service, "UTF-8"));
      method.setPath(contextPath + "/tickets/" + tgt);
      String respBody = null;
      int statusCode = client.executeMethod(method);
      respBody = method.getResponseBodyAsString();

    Read the article
