Search Results

Search found 14226 results on 570 pages for 'feature requests'.

Page 322/570 | < Previous Page | 318 319 320 321 322 323 324 325 326 327 328 329  | Next Page >

  • C# GUI: changing a ListBox from another class

    - by SlowForce
    I've written a multithreaded server that uses a TcpListener and a client handler class that controls input and output. I also have a GUI chat client. The chat client works fine and the console version of the server also works well. I have a Start() method in the partial Form class, which I run from a new thread when I click a button; it starts the TcpListener and loops, accepting socket requests. For every request a new ClientHandler object is created and the socket is passed to this object before being used in a new handler thread. The ClientHandler is a different class from the Form, and I'm having real problems writing data to the ListBox in the Form class from within the ClientHandler class. I've tried a few different ways of doing this, but none of them work because they involve creating a new Form instance inside the ClientHandler. Any help or advice on what I should be reading would be really appreciated.
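
    One commonly suggested pattern (a sketch under assumptions, not the poster's code: the form, ListBox, and method names below are illustrative) is to hand the already-running form to each ClientHandler and marshal updates onto the UI thread with InvokeRequired/BeginInvoke instead of constructing a new form:

        // Illustrative sketch: assumes a WinForms form with a ListBox named lstMessages.
        public partial class ServerForm : Form
        {
            // Thread-safe method the ClientHandler can call from its worker thread.
            public void AppendMessage(string message)
            {
                if (lstMessages.InvokeRequired)
                {
                    // Marshal the call back onto the UI thread.
                    lstMessages.BeginInvoke(new Action<string>(AppendMessage), message);
                }
                else
                {
                    lstMessages.Items.Add(message);
                }
            }
        }

        public class ClientHandler
        {
            private readonly ServerForm form;   // the existing form, passed in

            public ClientHandler(ServerForm form, Socket socket)
            {
                this.form = form;
                // ... keep the socket, start the handler thread, etc.
            }

            private void OnDataReceived(string data)
            {
                form.AppendMessage(data);       // no new form is created here
            }
        }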

    Read the article

  • Decorators vs. classes in Python web development

    - by Tristan
    I've noticed three main ways Python web frameworks deal with request handling: decorators, controller classes with methods for individual requests, and request handler classes with methods for GET/POST. I'm curious about the virtues of these three approaches. Are there major advantages or disadvantages to any of them? To fix ideas, here are three examples. Bottle uses decorators:

        @route('/')
        def index():
            return 'Hello World!'

    Pylons uses controller classes:

        class HelloController(BaseController):
            def index(self):
                return 'Hello World'

    Tornado uses request handler classes with methods for the request types:

        class MainHandler(tornado.web.RequestHandler):
            def get(self):
                self.write("Hello, world")

    Which style is the best practice?

    Read the article

  • C# Sending cookie in an HttpWebRequest which is redirected

    - by Nir
    I'm looking for a way to work with an API that requires a login and then redirects to another URL. So far I've only come up with a way that makes two HTTP requests for each action I want to perform: first get the cookie with AllowAutoRedirect = false, then make a second request to the actual URI with that cookie:

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(sUrl);
        request.AllowAutoRedirect = false;
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        string redirectedUrl = response.Headers["Location"];
        if (!String.IsNullOrEmpty(redirectedUrl))
        {
            redirectedUrl = "http://www.ApiUrlComesHere.com/" + redirectedUrl;
            HttpWebRequest authenticatedRequest = (HttpWebRequest)WebRequest.Create(redirectedUrl);
            authenticatedRequest.Headers["Cookie"] = response.Headers["Set-Cookie"];
            response = (HttpWebResponse)authenticatedRequest.GetResponse();
        }

    It seems terribly inefficient. Is there another way? Thanks!
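
    Depending on how the API sets its cookie, a single request may be enough if the redirect is followed automatically with a CookieContainer attached, so the Set-Cookie from the first response is carried into the redirected request. A minimal sketch (assuming the cookie and the redirect target live on the same host; not tested against this particular API):

        var cookies = new CookieContainer();

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(sUrl);
        request.CookieContainer = cookies;   // captures Set-Cookie headers along the way
        request.AllowAutoRedirect = true;    // let HttpWebRequest follow the Location header itself

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // response is the final, redirected page; 'cookies' now holds the session
            // cookie and can be reused (request.CookieContainer = cookies) for later calls.
        }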

    Read the article

  • MPMoviePlayerController seems to make 2 calls for each movie

    - by user76328
    I seem to have an issue where an iPhone app using MPMoviePlayerController makes two calls to the server for each video it wants to play back. This occurs with the iPhone OS 3.x libraries but not with 2.x. I know that the iPhone does progressive download and will make multiple 206 requests, etc., but as far as our back end is concerned the player appears to create two separate sessions. This only appears to be an issue with native iPhone apps, not with videos played through Safari. Additional info from Apple: iPhone OS 3.0 added support for streaming audio and video over HTTP, and MPMoviePlayerController must validate the media before playback to determine whether it is streaming content or progressively downloaded content. This is the delay you are experiencing. On a fast network, the delay should be minimized. Is this validation check causing two sessions to be created for each video request? Is anyone else seeing the same issue? Is there a remedy?

    Read the article

  • ASP.NET request extension type

    - by Krishna
    Hello, I am working on a large web application from which I recently removed tons of .aspx pages. To avoid "page not found" errors, I added entries for these pages to an XML file; they came to around 300+ in total. I wrote an HTTP module that checks the request URL against the XML entries, and if a match is found, the module redirects the request to the corresponding new page. Everything works great, but the collection is iterated for every request, including every .jpg, .css, .js, .ico, .pdf, etc. Is there any object or property in .NET that can tell me the type of resource the user requested, something like HttpContext.Request.Type, so that I can avoid checking the request for all the unwanted file types?
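
    As far as I know there is no single HttpContext.Request.Type property, but the requested extension can be read from the URL, so the lookup can be skipped for static resources. A sketch of the idea inside a BeginRequest handler (the module wiring and the redirect lookup are assumed, not shown); storing the 300+ entries in a Dictionary keyed by URL would also avoid iterating the whole collection:

        // Sketch: skip the redirect lookup for anything that is not an .aspx request.
        private void OnBeginRequest(object sender, EventArgs e)
        {
            var app = (HttpApplication)sender;
            string extension = Path.GetExtension(app.Context.Request.Path);

            if (!string.Equals(extension, ".aspx", StringComparison.OrdinalIgnoreCase))
                return;   // .jpg, .css, .js, .ico, .pdf, etc. fall out here

            // ... look the URL up in the (ideally Dictionary-based) redirect map
            // and call app.Context.Response.Redirect(newUrl) when a match is found.
        }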

    Read the article

  • Response.Redirect will not redirect in Internet Explorer

    - by Amit
    Hi, I am using Response.Redirect("someurl", true); in the Page_PreInit event to redirect all the requests that come to a page. It works fine in Firefox, but if I access the page from Internet Explorer 7/8, it says the page cannot be found and will not redirect to the new URL. Any idea why this happens? Update: I tried giving a random URL in the redirect, such as google.com, and it works fine. The URL I am actually trying to redirect to is not accessible from my machine; it is on another VPN. I guess IE will not change the URL in the address bar if it cannot access the URL, whereas Firefox changes the address in the address bar regardless.

    Read the article

  • How to tell the parent that a thread is done in C++ using pthreads?

    - by milleroff
    Hi. I have a TCP server application that serves each client in a new thread using POSIX threads and C++. The server calls listen on its socket, and when a client connects it creates a new object of class Client. The new object runs in its own thread and processes the client's requests. When a client disconnects, I want some way to tell my main() thread that this thread is done, so that main() can delete the object and log something like "Client disconnected". My question is: how do I tell the main thread that a thread is done?

    Read the article

  • Ruby w/ Postgres & Sinatra - Query won't order right with parameter??

    - by alleywayjack
    So I set a variable in the main Ruby file that handles all my POST and GET requests, and then use ERB templates to actually render the pages. I pass the database handle itself into the ERB templates and then run a query in the template to get all (for this example) grants. In my main Ruby file:

        grants_main_order = "id_num"
        get '/grants' do
          erb :grants, :locals => {:db => db, :order => grants_main_order, :message => params[:message]}
        end

    In the ERB template:

        db = locals[:db]
        getGrants = db.exec("SELECT * FROM grants ORDER BY $1", [locals[:order]])

    This produces some very random ordering; however, if I replace the $1 with id_num directly, it works as it should. Is this a typing issue? How can I fix it? Using string interpolation with #{locals[:order]} also gives funky results.

    Read the article

  • Can a proxy server cache SSL GETs? If not, would response body encryption suffice?

    - by Damian Hickey
    Can a (or any) proxy server cache content that is requested by a client over HTTPS? Since the proxy server can't see the query string or the HTTP headers, I reckon it can't. I'm considering a desktop application, run by a number of people behind their company's proxy. This application may access services across the internet, and I'd like to take advantage of the built-in internet caching infrastructure for 'reads'. If caching proxy servers can't cache SSL-delivered content, would simply encrypting the content of a response be a viable option? I am considering having all GET requests that we wish to be cacheable requested over HTTP with the body encrypted using asymmetric encryption, where each client has the decryption key. Any time we wish to perform a GET that is not cacheable, or a POST operation, it would be performed over SSL.

    Read the article

  • How to limit the number of connections to a SQL Server instance from my Tomcat-deployed Java application

    - by CJ
    I have an application that is deployed on Tomcat on server A and sends queries to a huge variety of SQL Server databases on server B. I am concerned that my application could overload this SQL Server database server, and I would like some way of preventing it from making requests to connect to any database on that server if some arbitrary number of connections is already open and unclosed. I am looking at using connection pooling, but I am under the impression that this will only pool connections to a specific database on the SQL Server instance; I want to control the combined total of connections across many different databases (incidentally, I can only find out the names of the individual databases dynamically, as they change from day to day). Will connection pooling take care of this for me, or am I looking at this from the wrong perspective? I have no access to the configuration of the SQL Server machine. Links to tutorials or working examples of your suggested solution are most welcome!

    Read the article

  • Rails: getting logic to run at end of request, regardless of filter chain aborts?

    - by JSW
    Is there a reliable mechanism discussed in the Rails documentation for calling a function at the end of a request, regardless of filter chain aborts? It's not after filters, because after filters don't get called if any prior filter redirected or rendered. For context, I'm trying to put some structured profiling/reporting information into the app log at the end of every request. This information is collected throughout the request lifetime via instance variables wrapped in custom controller accessors, and dumped at the end in a JSON blob for use by a post-processing script. My end goal is to generate reports about my application's logical query distribution (things that depend on controller logic, not just request URIs and parameters), performance profile (time spent in specific DB queries or blocked on web services), failure rates (including invalid incoming requests that get rejected by before_filter validation rules), and a slew of other things that cannot really be parsed from the basic information in the application and Apache logs. At a higher level, is there a different "Rails way" that solves my app profiling goal?

    Read the article

  • PHP/JS/jQuery: smarter method for auto-checking/updating a points status

    - by Azzyh
    Hello. Hi everyone. So right now I am making use of this:

        function checkpoints() {
            var postThis = 'checker.php?userid=' + $('#user_id_points').val();
            $.post(postThis, function(data){
                $(".vispoints").html(data).find(".vispoints1").fadeIn("slow")
            });
            setTimeout(checkpoints, 5000);
        }

    This function repeats every 5 seconds (sending a request each time) and runs checker.php each time to show how many points you have (checker.php echoes out how many points you've got, in a span with class vispoints1). Isn't there a smarter method than sending requests like this all the time? I mean, sites like Facebook don't do this to check whether you have, e.g., a new friend request. Hope you can help me find a better method; examples would be good too.

    Read the article

  • WordPress .htaccess in root overriding .htaccess in subdomain; subdomain app not working now

    - by revive
    Hello, We have a WordPress install in the root of our server and it's running great, but we just installed another app in a subdomain. Now I can view the index.php of that app but cannot do anything with it: the .htaccess rules in the root (from the base WordPress install) are affecting the requests. So, how do I stop the WordPress .htaccess file from affecting the subdomain? Here are the .htaccess contents for the root (WordPress install):

        <IfModule mod_rewrite.c>
        RewriteEngine On
        # BEGIN WordPress
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        # END WordPress
        </IfModule>

    And for the .htaccess in the subdomain:

        RewriteEngine on
        RewriteCond $1 !^(index\.php|css|stylesheets|js|images|user_guide|favicon\.ico|robots\.txt)
        RewriteRule ^(.*)$ /index.php/$1 [L]

    I've searched everywhere online and tried a couple of samples I found; nothing has worked. Any help is greatly appreciated! Thanks

    Read the article

  • Can't set element ID in JavaScript, it's always undefined

    - by Dylaan Alith
    Hi, I want a function that generates IDs on the fly for a given jQuery object, if it doesn't have one already. These IDs should then be used in future requests. I came up with the code below, but it doesn't work: the IDs are never set. The commented-out alert statement always returns undefined. I always pass something like $(this) or $(options.el) as the parameter substituted for 'el'. Initially, the elements do not have an ID explicitly set in the HTML. Any help would be appreciated; here's the code:

        getElementId: function(el) {
            if (undefined == el.attr('id')) {
                el.attr('id', "anim-" + Math.random().toString().substr(2));
            }
            // alert(el.attr('id'));
            return el.attr('id');
        },

    Read the article

  • RequireHttpsAttribute and Encrypted Request Data

    - by goatshepard
    I have a controller action that accepts sensitive data:

        public ActionResult TakeSensitiveData(SensitiveData data){ data.SaveSomewhere(); }

    To ensure the data is secure, I want to be certain that requests are made over HTTPS (SSLv3, TLS 1). One of the approaches I've considered is using the RequireHttpsAttribute on my action:

        [RequireHttps]
        public ActionResult TakeSensitiveData(SensitiveData data){ data.SaveSomewhere(); }

    However, upon testing this, Fiddler revealed that an HTTP request made to the action is 302-redirected to HTTPS. My question is this: if my request is 302-redirected to HTTPS, haven't I already sent the sensitive data over HTTP before the redirect?
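
    One mitigation that is often suggested (a sketch, assuming ASP.NET MVC 3 or later so HttpStatusCodeResult is available; it is not part of the original post) is to subclass RequireHttpsAttribute and reject plain-HTTP requests outright instead of redirecting, so the action never even appears to accept data over HTTP:

        public class RequireHttpsNoRedirectAttribute : RequireHttpsAttribute
        {
            protected override void HandleNonHttpsRequest(AuthorizationContext filterContext)
            {
                // Return 403 instead of the default 302 redirect to the HTTPS URL.
                filterContext.Result = new HttpStatusCodeResult(403, "HTTPS is required.");
            }
        }

        [RequireHttpsNoRedirect]
        public ActionResult TakeSensitiveData(SensitiveData data)
        {
            data.SaveSomewhere();
            return new EmptyResult();
        }

    Note that by the time any server-side filter runs, a request sent over plain HTTP has already crossed the wire unencrypted, so the attribute can only refuse such requests; the client still has to be pointed at the HTTPS URL in the first place.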

    Read the article

  • Slow Client connection blocks Mongrel

    - by Sanjay
    I have an Apache + HAProxy + Mongrel setup for my Rails application. When I hit a particular server page, Mongrel takes around 100 ms to process the request, but I get the page in around 5 seconds due to the data transmission time on my slow home connection. I can see that during these 5 seconds of data transmission, Mongrel does not serve any other request. I am surprised, as that means Mongrel is serving the response HTML to the client and is blocked until the client receives it. Shouldn't serving the response be the job of Apache? This puts a serious bottleneck on the number of requests Mongrel can serve, since that now depends on the speed of the client connection. Is there any way to have the HTML generated by Mongrel served by Apache/HAProxy or another web server like nginx? I wonder how other high-traffic sites manage this.

    Read the article

  • The risk of granting IUSR_* NTFS permissions on a folder on the server

    - by vtortola
    I have two web applications that must share a file in the server's file system. Both apps are inside "Inetpub\wwwroot". The file cannot be accessed freely from outside, so it is in a folder outside "Inetpub". I have granted full NTFS permissions on that folder to the user "IUSR_whatever" (the account IIS uses for anonymous requests). The folder contains only that file and has no other use. It works so far :) But what is the risk? What should I be afraid of? As I see it, as long as the folder is outside "Inetpub" it cannot be browsed, and as long as the apps don't have any security flaw like path traversal or server-side code injection, it should be safe enough... But I'm always keen to be proven wrong :) What do you think? Could the file, or even the server itself, be compromised because of this? Thanks.

    Read the article

  • Ninject caching an injected DataContext? Lifecycle Management?

    - by awrigley
    I had a series of very bizarre errors being thrown in my repositories: "Row not found or changed", "1 of 2 updates failed"... Nothing made sense. It was as if my DataContext instance was being cached, and I was considering a career move. I then noticed that the DataContext instance was passed in using dependency injection, with Ninject (this is the first time I have used DI). I ripped out the dependency injection, and everything went back to normal. Instantly. So dependency injection was the issue, but I still don't know why. I am speculating that Ninject was caching the injected DataContext. Is this correct? Is there a way of configuring the lifecycle management of injected parameters? If so, what would be the best configuration to make the DataContext behave like a normal DataContext, i.e., with no caching across requests?
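
    If the binding really is the culprit, the fix usually lives in the binding's scope rather than in the DataContext itself. A sketch of what the binding might look like (this assumes Ninject 2.x with the web extension loaded so that InRequestScope is available, and MyDataContext is a placeholder for the injected LINQ to SQL DataContext type; treat the exact syntax as illustrative):

        // One DataContext per HTTP request instead of a long-lived, effectively cached instance.
        kernel.Bind<MyDataContext>()
              .ToSelf()
              .InRequestScope();

        // Or, if a fresh instance on every resolution is acceptable:
        kernel.Bind<MyDataContext>()
              .ToSelf()
              .InTransientScope();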

    Read the article

  • What is the proper way to handle a fully qualified domain in a GET request?

    - by Mark P Neyer
    I'm writing a proxy server. When I use curl to fetch a page, say http://www.foo.com/pants, curl makes the following request:

        GET /pants HTTP/1.1

    When I have curl send that request through my local proxy, curl changes the request line to:

        GET http://www.foo.com/pants HTTP/1.1

    This change causes the foo.com server to return a 404. Is foo.com broken? Or is the fully qualified URL only meaningful to proxy servers? Should I always strip http://domain from the requests I send out? Thanks!
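
    If the upstream server does choke on the absolute form, one workaround on the proxy side is to rewrite the request target to origin form before forwarding. A sketch of that normalization (illustrative only; it does not settle whether foo.com is within its rights to reject the absolute form):

        string requestTarget = "http://www.foo.com/pants";   // as received from the client
        string originForm = requestTarget;
        string hostHeader = null;

        Uri uri;
        if (Uri.TryCreate(requestTarget, UriKind.Absolute, out uri))
        {
            originForm = uri.PathAndQuery;   // "/pants" -- goes on the forwarded request line
            hostHeader = uri.Authority;      // "www.foo.com" -- goes in the Host header instead
        }

        // Forward as: "GET {originForm} HTTP/1.1" with "Host: {hostHeader}"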

    Read the article

  • Terminate an inactive socket connection from TIdTCPServer

    - by A.J.
    We have an application which listens for incoming TCP requests using the Indy 10.1.1 components that ship with Delphi 2007. Occasionally we receive incoming connections which are not from our client application. Typically, one of two things happens: 1) the connection is terminated by the client before any data is received, or 2) data is received which we're not expecting and we manually terminate the connection. However, we've also received connections where no data is ever received and the connection appears to persist until the client terminates it from their end. Is there a way to terminate such a connection from the server if no data is received after a specified amount of time?

    Read the article

  • How do I restrict the WCF service called by an ASP.NET AJAX page to only allow calls for that page?

    - by NovaJoe
    I have an AjaxControlToolkit DynamicPopulate control that is updated by calls to a WCF service. I know I can check the HttpContext in the service request to see whether a user of the page (and thus the control) is authenticated. However, I don't want anyone clever to be able to call the service directly, even if they're logged in. I want access to the service to be allowed ONLY for requests made from the page. Mainly, I don't want anyone to be able to programmatically make a large number of calls and then reverse-engineer the algorithm that sits behind the service. Any clever ideas on how this can be done? Maybe I'm over-thinking this? Thanks in advance.
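
    One commonly suggested mitigation (purely a sketch; the token scheme, names, and helper below are assumptions, not part of DynamicPopulate or the original post) is to issue a per-session token when the page renders, fold it into the context key the extender sends, and have the service reject calls whose token does not match. It raises the bar but does not make the service truly uncallable by a logged-in user with browser tooling. This also assumes aspNetCompatibilityEnabled="true" in web.config so the service can see the ASP.NET session:

        [ServiceContract(Namespace = "")]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
        public class PopulateService
        {
            [OperationContract]
            public string GetContent(string contextKey)
            {
                // contextKey is assumed to arrive as "<token>|<realKey>", composed by the page.
                string[] parts = (contextKey ?? "").Split('|');
                var session = HttpContext.Current != null ? HttpContext.Current.Session : null;

                if (session == null || parts.Length != 2 ||
                    !string.Equals(session["populateToken"] as string, parts[0]))
                {
                    throw new SecurityException("Call did not originate from the page.");
                }

                return BuildHtmlFor(parts[1]);   // hypothetical helper that builds the popup content
            }
        }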

    Read the article

  • Pushing notifications to JavaScript?

    - by Michael Stum
    I'm just wondering if there is a way to have a server push information to a JavaScript function. Essentially I have a dashboard-type page with a JavaScript function that polls the server for updates and refreshes the dashboard. I would like my server to be able to "ping" the JS instead. I don't even know how that could be possible (I'm guessing Twitter and Facebook use polling?), but I thought I'd ask. I've heard of Comet, but I don't know whether that works with a plain standard IIS 7 installation (it's a SharePoint 2010 site, if that matters in any way). If I understand it correctly, Comet is essentially a constantly open connection, so it seems like it's actually the opposite of what I want (reducing the number of requests and therefore load).

    Read the article

  • How can I intercept a Tomcat request at socket level?

    - by Miguel Pardal
    Hi, I'm doing a performance study for a web application framework running on Apache Tomcat 6. I'm trying to measure the time overhead of handling HTTP requests. What I would like to do is:

        // just before the first request byte is read
        long t1 = System.nanoTime();

        // request is processed...

        // just after the final byte is written to the response
        long t2 = System.nanoTime();

    Then I would compute the total time (t2 - t1). Is there a way to do this? Thanks for your help!

    Read the article

  • Win32 script environment for testing http redirects?

    - by Anders Lindahl
    The past few days I've been working on setting up an Apache server on Windows. The server is supposed to host several .htaccess files, each redirecting (or, in some cases, proxying) to different hosts. I want to create tests for these redirections, and the solution I'm currently considering is a CGI script running on the same server that sends GET requests and verifies that it gets the correct redirection headers back. A scripting solution (VBScript/JScript) seems worth exploring, but so far I've only managed to rule out Microsoft.XMLHTTP because it follows the redirect "behind the scenes". Are there any libraries or other solutions already present on a reasonably standard Windows Server that can do this kind of low-level HTTP work? If not, any other suggestions for simple environments to set up for verifying redirects?
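
    If .NET is acceptable on the box, one simple check (a sketch, not specific to any library mentioned above; the URL is a placeholder) is an HttpWebRequest with automatic redirects turned off, so the 301/302 status and Location header come back for inspection rather than being followed:

        var request = (HttpWebRequest)WebRequest.Create("http://localhost/some-redirecting-path");
        request.AllowAutoRedirect = false;   // keep the 3xx response instead of following it

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine((int)response.StatusCode);        // e.g. 301 or 302
            Console.WriteLine(response.Headers["Location"]);    // the target to verify against
        }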

    Read the article

  • What's the best way to call jQuery scripts: in an HTML/PHP template, or a separate js file?

    - by j-man86
    StackOverflow is telling me this is a subjective question, but I think it's a matter of fact! I have a number of scripts that I'm using on different parts of my site. In terms of making fewer HTTP requests, I know it's better to combine all of these scripts into one .js file. However, isn't it a waste for a page to load a .js file full of 10 or 15 different functions when it's only using one? The other method I am using is PHP conditional statements:

        <?php if ( is_page() ) { ?>
        $(document).ready(function(){
            ...
        });
        <?php } ?>

    What's the best method, or combination of these methods?

    Read the article
