Search Results

Search found 62215 results on 2489 pages for 'http basic authentication'.


  • Server.Transfer("error_404.aspx") in Application_Error returns a blank page

    - by qntmfred
    I look for HttpExceptions in the Application_Error sub of my Global.asax:

        Sub Application_Error(ByVal sender As Object, ByVal e As EventArgs)
            Dim ex As Exception = HttpContext.Current.Server.GetLastError()
            If ex IsNot Nothing Then
                If TypeOf (ex) Is HttpUnhandledException Then
                    If ex.InnerException Is Nothing Then
                        Server.Transfer("error.aspx", False)
                    End If
                    ex = ex.InnerException
                End If
                If TypeOf (ex) Is HttpException Then
                    Dim httpCode As Integer = CType(ex, HttpException).GetHttpCode()
                    If httpCode = 404 Then
                        Server.ClearError()
                        Server.Transfer("error_404.aspx", False)
                    End If
                End If
            End If
        End Sub

    I can step through this code and confirm it does hit Server.Transfer("error_404.aspx"), as well as the Page_Load of error_404.aspx, but all it shows is a blank page.

    Read the article

  • Why can't I set a cookie and redirect?

    - by Damian
    I'm having a problem setting a cookie and doing a 302 redirect. In Chrome the cookie is not being set (I haven't tested Safari); in other browsers I was having the same problem until I added Path=/ to the cookie, and now it works. This is how the response header looks; the status is 302 Found:

        Content-Type: text/html; charset=iso-8859-1
        Expires: Thu, 01 Jan 1970 00:00:00 GMT
        Set-Cookie: alasca-flash=error-Message<Required<error-Name<Required<error-Sex<Required<error-Age<Required<;Path=/
        Location: /messages/sdf
        Content-Length: 0
        Server: Jetty(6.1.x)

    Any idea why the cookie is not set? Or any workaround? Thanks!
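
    For reference, a minimal sketch in Python of what the working header amounts to (the cookie name is taken from the post; the value is illustrative). The relevant behaviour: without an explicit Path attribute, browsers scope the cookie to the directory of the URL that set it, so a cookie set while redirecting from a deeper path may never be sent back.

        from http.cookies import SimpleCookie

        cookie = SimpleCookie()
        cookie["alasca-flash"] = "error-Message..."   # illustrative value
        cookie["alasca-flash"]["path"] = "/"          # scope to the whole site
        print(cookie.output())
        # Set-Cookie: alasca-flash=error-Message...; Path=/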

    Read the article

  • CodeIgniter, GoDaddy, .htaccess: passing a variable through the URL, HMVC

    - by user1492738
    I have a new controller in a module created with HMVC (admin); I also tried it without HMVC, with the same issue. I am trying to pass a variable to index or another method of the controller, but when I access it I receive a 404 Not Found error. I am new to CodeIgniter. I have been trying to resolve this since yesterday, please help me :) Thank you!

    Routes:

        $route['default_controller'] = 'pages/view/home';     // working
        $route['(:any)'] = "pages/view/$1";                   // working
        $route['404_override'] = '';                          // working
        $route['admin'] = 'admin/index';                      // working
        $route['admin/list'] = 'admin/list_pages/index';      // working
        $route['admin/edit/(:any)'] = 'admin/edit/index/$1';  // this is the problem; the other rules are working

    .htaccess:

        RewriteEngine on
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?$1 [L]

    Controller:

        class Edit extends CI_Controller {
            function __construct() {
                parent::__construct();
                $this->load->model('pages_model');
            }

            function index($id = 0) {
                $data['page'] = $this->pages_model->get_pages_by_id($id);
                $this->load->view('header', $data);
                $this->load->view('edit', $data);
                $this->load->view('footer', $data);
            }
        }

    Config:

        $config['base_url'] = '';
        $config['index_page'] = 'index.php?';
        $config['uri_protocol'] = 'QUERY_STRING';

    Read the article

  • Is Transport security a bad practice for the WCF service over the Internet?

    - by Sergey
    Hello, I have a WCF service accessible over the Internet. It has the wsHttpBinding binding and Message security mode, with username credentials to authenticate clients. MSDN says we should use Message security for Internet scenarios because it provides end-to-end security, whereas Transport security only provides point-to-point security. What if I use Transport security for a WCF service over the Internet? Is it bad practice? Could my data be seen by malicious users? Thanks, Sergey

    Read the article

  • How could I embed an HTML/CSS/JS view in a Web Start application?

    - by phmr
    I would like to use an HTML/CSS/JS view in my Web Start project without requesting all permissions. I figured out that I could use the Java HTTPServer to process the requests, but I need a way to avoid using real sockets, so that instantiating the HTTPServer doesn't ask for a permission. Do you know any projects that achieve that? And if not, what should I do to get an HTTPServer working completely locally (without hitting sandbox boundaries)? Edit: maybe an HTTPServer is too much; I may only need an HttpHandler.

    Read the article

  • IE and Content-disposition inline vs. extension-token

    - by pinkgothic
    Preamble

    So IE does MIME-type sniffing. That part's old news. Suggestions for combating it tend to be along the lines of 'supply a content type IE trusts' (i.e. anything that isn't text/plain or application/octet-stream) or 'add extraneous data at the start of the file that is definitely of the type you're serving'. Now, I'm working on an application that has to allow message attachments (like in e-mails), and we want to close up XSS vectors. IE's MIME sniffing is one of those vectors: a text/plain file with HTML content will trigger as HTML. Recoding isn't an option at this point; changing the attachments the user has provided can only happen if there is absolutely no doubt about the maliciousness of the file, and someone might want to send HTML as text. Now, Microsoft's MSDN article implies the situation might be easier to fix than advertised: "If Internet Explorer knows the Content-Type specified and there is no Content-Disposition data, Internet Explorer performs a 'MIME sniff,' [...]" Great! Except I don't have IE, nor currently the means to reliably install it (I realise this is a fairly sad state for a web developer to be in; I hope to fix it soon), and this is grey theory that I can't quite seem to get confirmed one way or the other. Local sources say that line is hogwash: IE will MIME sniff anything that is Content-Disposition: inline / <default> and not specific enough for its tastes in -Type. But what about x-* ('extension-token' in the RFC)? Trying to google how browsers handle Content-Disposition: <extension-token> hasn't yielded anything (though I may just be doing it wrong; my understanding of Google is seriously slipping lately). I found one question that looked promising, but it turned out to be a misunderstanding on the part of the thread author, meaning the train of thought was never actually addressed there.

    Question(s)

    Does IE really MIME sniff if you expressly pass Content-Disposition: inline? If so: does anyone here know how browsers handle Content-Disposition: <extension-token>? If they handle it in a way that is for my purposes benign, by presuming it to be synonymous with the default (effectively 'inline', though I hear it's not defined anywhere?), is it specific enough for IE not to MIME sniff? Or am I actually shooting myself in the foot by thinking of pursuing this avenue?
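
    For concreteness, a sketch (in Python, values illustrative) of the header combination the MSDN quote is about: an explicit Content-Type plus explicit Content-Disposition data. The X-Content-Type-Options line is not from the post; it is the documented IE8+ opt-out of MIME sniffing, included here only as an aside.

        # Response headers under discussion, as a plain dict (illustrative values)
        headers = {
            "Content-Type": "text/plain; charset=utf-8",
            "Content-Disposition": 'inline; filename="attachment.txt"',
            # Not discussed in the post: IE8+ honours this opt-out of sniffing
            "X-Content-Type-Options": "nosniff",
        }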

    Read the article

  • What is the best way to post data from a web browser to a server?

    - by Kronass
    Hi, I want to know the best way to send data from a web browser to a server using the POST method. I've seen a practice where all the element data is wrapped in XML, converted into a Base64 string, and then posted to the server (via Ajax or a hidden field). This approach will not work if JavaScript is disabled, but suppose I ignore that. Is it good practice to wrap elements in XML (or create my own custom wrapper in general) and post them to the server, on the grounds that it enhances the maintainability of the code? Or should I just stick with the classical way and not add unnecessary text to the post?
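
    For illustration, the two alternatives sketched in Python with the requests library (the endpoint URL and field names are hypothetical): a plain form post, which also works from a script-free <form>, versus a wrapped structured payload, which needs client-side script to build.

        import requests

        # Classical form-encoded POST: works from a plain <form>, no JavaScript needed
        requests.post("https://example.com/save",
                      data={"name": "Ben", "city": "Perth"})

        # Wrapped POST: one structured JSON body, built by script on the client
        requests.post("https://example.com/save",
                      json={"name": "Ben", "city": "Perth"})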

    Read the article

  • How to decompress/inflate an XML response from ASP

    - by krisg
    Can anyone provide some insight into how I'd go about decompressing an XML response in classic ASP? We've been handed some code and asked to get it working:

        Set oXMLHttp = Server.CreateObject("MSXML2.ServerXMLHTTP")
        URL = HttpServer + re_domain + ".do;jsessionid=" + ue_session + "?" + data
        oXMLHttp.setTimeouts 5000, 60000, 1200000, 1200000
        oXMLHttp.open "GET", URL, false
        oXMLHttp.setRequestHeader "Accept-Encoding", "gzip"
        oXMLHttp.send()
        if oXMLHttp.status = 200 Then
            if oXMLHttp.responseText = "" then
                htmlrequest_get = "Empty Response from Server"
            else
                htmlrequest_get = oXMLHttp.responseText
            end if
        else
            ...

    Apparently, now that the response is compressed using gzip, we have to decompress the XML response before we can start to work with the data. How should I go about this?
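
    The question's premise is that the body arrives still gzip-compressed and has to be inflated before parsing. For reference, the same dance sketched in Python (URL hypothetical): request gzip, then inflate the body only if the server says it actually compressed it.

        import gzip
        import urllib.request

        req = urllib.request.Request("https://example.com/app.do",
                                     headers={"Accept-Encoding": "gzip"})
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
            # Only decompress when the server honoured the request
            if resp.headers.get("Content-Encoding") == "gzip":
                body = gzip.decompress(body)
        xml_text = body.decode("utf-8")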

    Read the article

  • Upload large files via a webpage

    - by Hultner
    What is the best way to let users upload large files from their web browser to a server? I'm talking 200 MB+, possibly up to a few gigabytes. I have been thinking of a few possible solutions to the problem (I haven't tried them yet), and this is basically what I came up with. Server download speed will not be a problem, but the user's connection possibly could be.

    1. Have some sort of applet on the client side, written in Java or Flash, which sends the file in parts (is this possible with an applet?) to a PHP or other script on the server, together with a checksum and some other info about the file. On the server, all the parts and the info file are saved in a temporary directory which has a unique name based on the checksum of the file and the IP of the user. When the last chunk is sent, the applet sends a signal to the server saying it's finished, and the server puts the file together in the right location. If a chunk doesn't match the checksum for that part, the server sends a response to the applet telling it to re-upload that chunk. I don't know how important the checksum checking is, since it's all TCP packets; someone with more insight might be able to answer that.

    2. This is probably the worst way: change the settings on your server to allow huge file uploads via an input field, and do it like a normal transfer.

    3. Use an upload manager which does pretty much the same thing as the applet I mentioned above.

    The pros of the first are that it would most likely be rather secure, you could show progress as well, possibly resume an upload if the IP hasn't changed, and do a threaded upload of the chunks. The cons of the first are that the user will need Flash or Java for it to work. The pros of the second are that it will pretty much work for everyone, but the cons are big: there's no way to resume an interrupted upload, and if something goes wrong the whole file has to be re-uploaded, to name a few. For the third, the pros are pretty much the same as for the first, but the cons are that the user would have to download an application to their computer and run it, and the application has to be compatible with their computer and OS. Another way may be a combination of the two: say, an applet for bigger or more files, and a simple input field restricted to maybe 10-20 MB max, for smaller files and compatibility. There are probably other, much smarter ways to tackle this, and that's why I'm asking for advice here on SO.
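
    For the first approach, a minimal client-side sketch in Python (the endpoint, header names, and chunk size are all hypothetical; an applet would do the same thing): read fixed-size chunks and send each with its index and checksum, so the server can detect a bad part and ask for just that chunk again.

        import hashlib
        import requests

        CHUNK = 4 * 1024 * 1024  # 4 MB per part (illustrative)

        with open("big_file.iso", "rb") as f:
            index = 0
            while True:
                chunk = f.read(CHUNK)
                if not chunk:
                    break
                requests.post("https://example.com/upload",          # hypothetical
                              data=chunk,
                              headers={"X-Chunk-Index": str(index),
                                       "X-Chunk-MD5": hashlib.md5(chunk).hexdigest()})
                index += 1
        requests.post("https://example.com/upload/finish")            # hypothetical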

    Read the article

  • Need to check uptime on a large file being hosted

    - by trustfundbaby
    I have a dynamically generated RSS feed that is about 150 MB in size (don't ask). The problem is that it keeps crapping out sporadically, and there is no way to monitor it without downloading the entire feed to get a 200 status. Pingdom times out on it and returns a 'down' error. So my question is: how do I check that this thing is up and running?
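
    One way to probe a large resource without pulling the whole body, sketched in Python (URL hypothetical): a HEAD request, or a one-byte Range GET for servers that mishandle HEAD. Note this only confirms the endpoint responds; it cannot prove the full 150 MB would stream to completion.

        import requests

        FEED = "https://example.com/feed.rss"   # hypothetical

        # Cheapest check: headers only
        up = requests.head(FEED, timeout=15).status_code == 200

        # Fallback: ask for a single byte (range-aware servers answer 206)
        r = requests.get(FEED, headers={"Range": "bytes=0-0"}, timeout=15)
        up = up or r.status_code in (200, 206)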

    Read the article

  • Removing X-Powered-By

    - by Castor
    How can I remove the X-Powered-By header in PHP? I am on an Apache server and I use PHP 5.2.1. I can't use the header_remove() function in PHP, as it's not supported before 5.3. I used Header unset X-Powered-By in Apache; it worked on my local machine, but not on my production server. If PHP doesn't support header_remove() for versions < 5.3, is there an alternative?

    Read the article

  • Best format to submit an autocomplete?

    - by Keyo
    I'm seeking advice on best practice when submitting a varying number of POST variables. Do I use JSON or some character-separated list to merge all the values into one field, or do I use a sequence of fields like 'autocomplete1', 'autocomplete2', and so on? Thanks in advance, Ben
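
    Both options sketched in Python with requests (endpoint and values hypothetical). Rather than numbered field names, repeated keys are the usual form-encoded idiom; PHP-style frameworks expand autocomplete[] into an array, and JSON keeps everything in one structured field.

        import requests

        # Repeated fields: arrives server-side as an array in most frameworks
        requests.post("https://example.com/submit",
                      data=[("autocomplete[]", "alpha"),
                            ("autocomplete[]", "beta"),
                            ("autocomplete[]", "gamma")])

        # One field, structured payload
        requests.post("https://example.com/submit",
                      json={"autocomplete": ["alpha", "beta", "gamma"]})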

    Read the article

  • handling filename* parameters with spaces via RFC 5987 results in '+' in filenames

    - by Peter Friend
    I have some legacy code I am dealing with (so no, I can't just use a URL with an encoded filename component) that allows a user to download a file from our website. Since our filenames are often in many different languages, they are all stored as UTF-8. I wrote some code to handle the RFC 5987 conversion to a proper filename* parameter. This works great until I have a filename with non-ASCII characters and spaces. Per the RFC, the space character is not part of attr-char, so it gets encoded as %20. I have new versions of Chrome as well as Firefox, and they all convert %20 to + on download. I have tried not encoding the space and putting the encoded filename in quotes, with the same result. I have sniffed the response coming from the server to verify that the servlet container wasn't mucking with my headers, and they look correct to me. The RFC even has examples that contain %20. Am I missing something, or do all of these browsers have a bug related to this? Many thanks in advance. The code I use to encode the filename is below. Peter

        public static boolean bcsrch(final char[] chars, final char c) {
            final int len = chars.length;
            int base = 0;
            int last = len - 1; /* Last element in table */
            int p;
            while (last >= base) {
                p = base + ((last - base) >> 1);
                if (c == chars[p])
                    return true; /* Key found */
                else if (c < chars[p])
                    last = p - 1;
                else
                    base = p + 1;
            }
            return false; /* Key not found */
        }

        public static String rfc5987_encode(final String s) {
            final int len = s.length();
            final StringBuilder sb = new StringBuilder(len << 1);
            final char[] digits = {'0','1','2','3','4','5','6','7','8','9',
                                   'A','B','C','D','E','F'};
            final char[] attr_char = {'!','#','$','&','\'','+','-','.',
                                      '0','1','2','3','4','5','6','7','8','9',
                                      'A','B','C','D','E','F','G','H','I','J','K','L','M',
                                      'N','O','P','Q','R','S','T','U','V','W','X','Y','Z',
                                      '^','_',
                                      'a','b','c','d','e','f','g','h','i','j','k','l','m',
                                      'n','o','p','q','r','s','t','u','v','w','x','y','z',
                                      '|','~'};
            for (int i = 0; i < len; ++i) {
                final char c = s.charAt(i);
                if (bcsrch(attr_char, c))
                    sb.append(c);
                else {
                    final char[] encoded = {'%', 0, 0};
                    encoded[1] = digits[0x0f & (c >>> 4)];
                    encoded[2] = digits[c & 0x0f];
                    sb.append(encoded);
                }
            }
            return sb.toString();
        }

    Update: here is a screenshot of the download dialog I get for a file with Chinese characters and spaces, as mentioned in my comment.
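
    As an aside (not part of the original post): the usual defensive header carries both a plain-ASCII filename fallback and the RFC 5987 filename* form, so a browser that mishandles one form can fall back on the other. Sketched as a Python string with illustrative values:

        # A percent-encoded UTF-8 name via filename*, plus an ASCII fallback
        disposition = ('attachment; filename="fallback.txt"; '
                       "filename*=UTF-8''%E4%B8%AD%E6%96%87%20file.txt")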

    Read the article

  • HTML validation error for the property attribute

    - by Castor
    I am using a few Facebook social plugins, and I am using the meta header. When validating the page, the W3C validator throws the error: Error: there is no attribute "property". I am using the XHTML Transitional doctype. Please suggest whether I have to change the doctype to something else.

    Read the article

  • Opening two connections to my Apache server?

    - by Ron Meretns
    Hi guys. I have an Apache server running PHP. Everything works fine, but if I have a script that runs for a long time and I try to open another page on the same server, the second page waits until the first page finishes. This behavior happens to me on both Linux and Windows. I'm running Apache 2.2.9 and PHP 5.2.6. I'm not sure why this happens... is this normal behavior? Ron

    Read the article

  • Setting Curl's Timeout in PHP

    - by Moki
    I'm running a curl request on an eXist database through PHP. The dataset is very large, and as a result the database consistently takes a long time to return an XML response. To fix that, we set up a curl request with what is supposed to be a long timeout:

        $ch = curl_init();
        $headers["Content-Length"] = strlen($postString);
        $headers["User-Agent"] = "Curl/1.0";
        curl_setopt($ch, CURLOPT_URL, $requestUrl);
        curl_setopt($ch, CURLOPT_HEADER, false);
        curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_USERPWD, 'admin:');
        curl_setopt($ch, CURLOPT_TIMEOUT, 1000);
        $response = curl_exec($ch);
        curl_close($ch);

    However, the curl request consistently ends before the request is completed (in under 1000 seconds when requested via a browser). Does anyone know if this is the proper way to set timeouts in curl?

    Read the article

  • Location-detection techniques for IP addresses

    - by ilhan
    What are the location-detection techniques for IP addresses? I know of:

    - $_SERVER['HTTP_ACCEPT_LANGUAGE'] (not accurate, but mostly useful for detecting location: for example, if an IP range's users set French in their browsers, then that range belongs to France)
    - gethostbyaddr($_SERVER['REMOTE_ADDR']) (to look at the country-code top-level domain), and then maybe a whois on gethostbyaddr($_SERVER['REMOTE_ADDR'])
    - sometimes $HTTP_USER_AGENT (Firefox's user-agent string has a language code; not accurate, but it can mostly be used to detect the location)

    But what about cities?
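
    City-level lookups generally go through a GeoIP database rather than headers or reverse DNS. A minimal sketch with MaxMind's geoip2 Python reader (an assumption, not something the post names; it requires a downloaded GeoLite2-City database file):

        import geoip2.database

        with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
            record = reader.city("203.0.113.7")   # illustrative address
            print(record.country.iso_code, record.city.name)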

    Read the article

  • Why will IIS 6 not serve my custom 404 page when I set the URL in 'Custom Errors'?

    - by Glenn Slaven
    I've got an ASP.NET MVC site, and I've got an Errors controller with a NotFound action, which works great for 404 errors that pass through .NET. But for stuff that doesn't (like static files), I've set the Custom Errors value for 404 to URL with a value of /Errors/NotFound. When I do this and hit a non-existent page, the site just gives me this: "The system cannot find the path specified." Is this because it's a dynamic URL? Can IIS not redirect 404 requests to dynamic URLs, or have I screwed up the config somewhere?

    Read the article

  • Sending a form in libcurl

    - by sfactor
    I need to send a file to a web server using libcurl. I saw one of the examples on the curl website and am trying to implement it: the postit2.c example. Can someone tell me how I might extend this to be able to send a username and password as well?

    Read the article

  • Why is curl in Ruby slower than command-line curl?

    - by Stiivi
    I am trying to download more than 1M pages (URLs ending in a sequence ID). I have implemented a kind of multi-purpose download manager with a configurable number of download threads and one processing thread. The downloader downloads files in batches:

        curl = Curl::Easy.new
        batch_urls.each { |url_info|
            curl.url = url_info[:url]
            curl.perform
            file = File.new(url_info[:file], "wb")
            file << curl.body_str
            file.close
            # ... some other stuff
        }

    I have tried to download an 8000-page sample. Using the code above, I get 1000 pages in 2 minutes. When I write all the URLs into a file and do this in a shell:

        cat list | xargs curl

    I get all 8000 pages in two minutes. The thing is, I need to have it in Ruby code, because there is other monitoring and processing code. I have tried:

    - Curl::Multi - it is somehow faster, but misses 50-90% of the files (it does not download them and gives no reason/code)
    - multiple threads with Curl::Easy - around the same speed as single-threaded

    Why is a reused Curl::Easy slower than subsequent command-line curl calls, and how can I make it faster? Or what am I doing wrong? I would prefer to fix my download-manager code over downloading in a different way for this case. Before this, I was calling command-line wget, which I provided with a file containing the list of URLs. However, not all errors were handled, and it was also not possible to specify a separate output file for each URL when using a URL list. Now it seems to me that the best way would be to use multiple threads with a system call to the 'curl' command. But why, when I can use Curl directly in Ruby? The code for the download manager is here, if it might help: Download Manager (I have played with timeouts, from not setting them to various values; it did not seem to help). Any hints appreciated.
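
    Not the author's Ruby manager: a minimal multi-threaded batch downloader sketched in Python (the URLs, file names, and worker count are illustrative). The relevant detail is one keep-alive session per worker, so connections are reused instead of being re-established for every page.

        import concurrent.futures
        import threading

        import requests

        local = threading.local()

        def fetch(job):
            url, path = job
            if not hasattr(local, "session"):
                local.session = requests.Session()   # keep-alive, reused per worker
            with open(path, "wb") as f:
                f.write(local.session.get(url, timeout=60).content)

        jobs = [(f"https://example.com/pages/{i}", f"page_{i}.html")
                for i in range(8000)]
        with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
            list(pool.map(fetch, jobs))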

    Read the article
