Search Results

Search found 55308 results on 2213 pages for 'http max age'.


  • What HTTP redirect status should I use to link offsite?

    - by Matthew Scharley
    While randomly browsing my website via Google, I noticed that it was showing remote redirects as local files. Now this can be both good and bad, but I'm wondering how I can fix this so it doesn't happen? I'm currently using PHP and header('Location: ...'), which sends a 302 redirect. Looking over the list of HTTP status codes, I'd guess that I should be using 303 redirects to redirect offsite. Is anyone able to help me here, and either confirm/deny this, or tell me what I should be doing instead? Obviously, since I can't tell Google to reindex my site on command, there are issues with being able to test this myself...
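    For what it's worth, PHP's header() accepts an explicit status code as its third argument - header('Location: https://example.com/', true, 303) - so switching codes is a one-line change. Below is a minimal Python sketch of the same idea, a handler that sends an explicit 303 instead of the default 302; the port and target URL are placeholders:

        from http.server import BaseHTTPRequestHandler, HTTPServer

        class OffsiteRedirect(BaseHTTPRequestHandler):
            def do_GET(self):
                # 303 'See Other' instead of the default 302 'Found'
                self.send_response(303)
                self.send_header('Location', 'https://example.com/')
                self.end_headers()

        HTTPServer(('', 8000), OffsiteRedirect).serve_forever()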

    Read the article

  • ASP.Net: HTTP 400 Bad Request error when trying to process http://localhost:5957/http://yahoo.com

    - by mat3
    I'm trying to create something similar to the DiggBar: http://digg.com/http://cnn.com I'm using Visual Studio 2010 and the ASP.NET development server. However, I can't get the ASP dev server to handle the request, because it contains "http:" in the path. I've tried to create an HttpModule to rewrite the URL in BeginRequest, but the event handler doesn't get called when the URL is http://localhost:5957/http://yahoo.com. The event handler does get called if the URL is http://localhost:5957/http/yahoo.com. To summarize: http://localhost:5957/http/yahoo.com works; http://localhost:5957/http//yahoo.com does not work; http://localhost:5957/http://yahoo.com does not work; http://localhost:5957/http:/yahoo.com does not work. Any ideas?
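    One common workaround (an assumption on my part, not something the asker tried): percent-encode the embedded URL so the path no longer contains a raw ':' or '//', then decode it server-side. A sketch in Python, with the port taken from the question:

        from urllib.parse import quote, unquote

        target = "http://yahoo.com"

        # Percent-encode the embedded URL so the path has no raw ':' or '/'
        encoded = quote(target, safe="")          # 'http%3A%2F%2Fyahoo.com'
        framed = "http://localhost:5957/" + encoded

        # Server side: decode the path segment back into the original URL
        recovered = unquote(encoded)              # 'http://yahoo.com'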

    Read the article

  • Amazon S3 Tips: Quickly Add/Modify HTTP Headers To All Files Recursively

    - by Gopinath
    Amazon S3 is a dead-cheap cloud storage service that offers unlimited storage in a pay-as-you-use model. Recently we moved all the images and other static files (scripts & CSS) of Tech Dreams to Amazon S3 to reduce load on our VPS server. Amazon S3 is cheap, but the monthly bill will shoot up if the images/static files of the blog are not cached properly (more details). By adding the caching HTTP headers Cache-Control or Expires to all the files hosted on Amazon S3 we reduced the monthly bills and also the load time of blog pages. Let's see how to add custom headers to files stored on the Amazon S3 service. Updating HTTP headers of one file at a time: the web interface of the Amazon S3 Management Console allows adding custom HTTP headers to one file at a time through the "Properties" window (to access properties, right-click on a file and select the Properties menu). So if you have to add headers to hundreds of files, this is not the way to go! Updating HTTP headers of multiple files of a folder recursively: to update HTTP headers of multiple files in a folder recursively, we can use the CloudBerry Explorer freeware or Bucket Explorer trialware applications. CloudBerry is my favourite as it's freeware and it has an excellent interface for accessing Amazon S3 from desktops. Adding HTTP headers with the CloudBerry application is straightforward – right-click on the required folders and choose the option "Set HTTP Headers". Download CloudBerry Explorer. This article, titled Amazon S3 Tips: Quickly Add/Modify HTTP Headers To All Files Recursively, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.
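    If you'd rather script it than click through a GUI, the same in-place header rewrite can be done with the AWS SDK. A sketch using Python's boto3 (the bucket name and prefix are placeholders; CopyObject onto the same key with MetadataDirective="REPLACE" changes metadata without re-uploading the body):

        import boto3

        s3 = boto3.client("s3")
        bucket = "my-bucket"                      # hypothetical bucket

        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix="images/"):
            for obj in page.get("Contents", []):
                key = obj["Key"]
                head = s3.head_object(Bucket=bucket, Key=key)
                s3.copy_object(
                    Bucket=bucket,
                    Key=key,
                    CopySource={"Bucket": bucket, "Key": key},
                    MetadataDirective="REPLACE",
                    CacheControl="public, max-age=31536000",
                    ContentType=head["ContentType"],  # keep the original type
                )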

    Read the article

  • Preserving case in HTTP headers with Ruby's Net::HTTP

    - by emh
    Although the HTTP spec says that headers are case-insensitive, PayPal, with their new Adaptive Payments API, requires their headers to be case-sensitive. Using the PayPal adaptive payments extension for ActiveMerchant (http://github.com/lamp/paypal_adaptive_gateway) it seems that although the headers are set in all caps, they are sent in mixed case. Here is the code that sends the HTTP request:

        headers = {
          "X-PAYPAL-REQUEST-DATA-FORMAT" => "XML",
          "X-PAYPAL-RESPONSE-DATA-FORMAT" => "JSON",
          "X-PAYPAL-SECURITY-USERID" => @config[:login],
          "X-PAYPAL-SECURITY-PASSWORD" => @config[:password],
          "X-PAYPAL-SECURITY-SIGNATURE" => @config[:signature],
          "X-PAYPAL-APPLICATION-ID" => @config[:appid]
        }
        build_url action
        request = Net::HTTP::Post.new(@url.path)
        request.body = @xml
        headers.each_pair { |k,v| request[k] = v }
        request.content_type = 'text/xml'
        proxy = Net::HTTP::Proxy("127.0.0.1", "60723")
        server = proxy.new(@url.host, 443)
        server.use_ssl = true
        server.start { |http| http.request(request) }.body

    (I added the proxy line so I could see what was going on with Charles - http://www.charlesproxy.com/) When I look at the request headers in Charles, this is what I see:

        X-Paypal-Application-Id ...
        X-Paypal-Security-Password ...
        X-Paypal-Security-Signature ...
        X-Paypal-Security-Userid ...
        X-Paypal-Request-Data-Format XML
        X-Paypal-Response-Data-Format JSON
        Accept */*
        Content-Type text/xml
        Content-Length 522
        Host svcs.sandbox.paypal.com

    I verified that it is not Charles doing the case conversion by running a similar request using curl. In that test the case was preserved.

    Read the article

  • 3D Studio Max and 2+ CPUs - core limit?

    - by FreekOne
    Hi guys, I am scouting for parts to put in a new machine, and in the process, while looking at different benchmarks, I stumbled upon this benchmark and it got me a bit worried. Quote from it: Noticably absent from this review is an old-time favorite, 3ds Max. I did attempt to run our custom 3ds Max benchmark on both the 2009 and 2010 versions of the software, but the application would simply not load on the Westmere box with hyper-threading enabled. Evidently Autodesk didn't plan far enough ahead to write their software for more than 16 threads. Once there is an update that addresses this issue, I will happily add 3ds Max back into the benchmarking mix. Since I was looking at dual hexa-core Xeons (X5650), that would put my future machine at 24 logical cores, which (duh) is well over 16 cores, and since I'm mostly building this for 3DS Max work, you can see how this would seriously spoil my plans. I tried looking for additional information on this potential issue, but the above article seems to be the only one that mentions it. Could anyone who has access to a 16-core machine or in-depth knowledge about 3DS Max please confirm this? Any help would be much appreciated!

    Read the article

  • HTTP 2.0: Microsoft proposes "HTTP Speed+Mobility" to speed up the Web

    HTTP 2.0: Microsoft proposes "HTTP Speed+Mobility" to speed up the Web. Microsoft wants to make the Web faster and has submitted to the IETF (the Internet Engineering Task Force, the body in charge of Internet standardization) elements for the HTTP 2.0 protocol. After Google and its SPDY project, which aims to double the speed of the Web by layering adjustments over the HTTP protocol, it is now the Redmond firm's turn to show its interest in the future of the Web. In a recently published blog post, the firm presents its proposal, "HTTP Speed+Mobility", which will be submitted to the HTTPbis working group.

    Read the article

  • What reasons are there to reduce the max-age of a logo to just 8 days? [closed]

    - by callum
    Most websites set max-age=31536000 (1 year) on the Cache-Control headers of static assets such as logo images. Examples: YouTube, Yahoo, Twitter, the BBC. But there is a notable exception: Google's logo has max-age=691200 (8 days). I've checked the headers on the Google logo in the past, and it definitely used to be 1 year. (Also, it used to be part of a sprite, and now it is a standalone logo image, but that's probably another question...) What could be valid technical reasons why they would want to reduce its cache lifetime to just 8 days? Google's homepage is one of the most carefully optimised pages in the world, so I imagine there's a good reason. Edit: Please make sure you understand these points before answering:

    1. Nobody uses short max-age lifetimes to allow modifying a static asset in future. When you modify it, you just serve it at a different URL. So no, it's nothing to do with Google doodles. Think about it: even if Google didn't understand this basic trick of HTTP, 8 days still wouldn't be appropriate, as only those users who don't have the original logo cached would see the doodle on doodle-day – and then that group of users would go on seeing the doodle for the following 8 days after Google changed it back :)

    2. Web servers do not worry about "filling up" the caches of clients (or proxies). The client manages this by itself – when it hits its own storage limit, it just starts dropping the lowest priority items to make space for new items. The priority score is based on the question "How likely am I to benefit from having cached this URL?", which is nothing to do with what max-age value the server sent when the URL was originally requested; it's a heuristic based on the "frecency" of requests for that URL. The max-age simply lets the server set a cut-off point – the time at which the client is supposed to discard the item regardless of how often it's being re-used. It would be very nice and trusting of a downstream client/proxy to rely on all origin servers "holding back" from filling up their caches, but I don't think we live in that world ;)
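    Anyone can reproduce the observation in a few lines of Python; a sketch with the requests library (the logo URL is a stand-in for whatever Google serves today):

        import requests

        # HEAD fetch: same headers as a GET, no body
        resp = requests.head("https://www.google.com/images/srpr/logo3w.png")
        print(resp.headers.get("Cache-Control"))   # e.g. 'private, max-age=691200'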

    Read the article

  • HTTP header 304 and caching?

    - by Royi Namir
    Our company uses these settings (don't ask me why) - for every request they want a new request to the server. This is an intranet system which uses only IE. They defined it in the browser settings (the screenshots that accompanied this question are not preserved here). We also have Windows authentication (NTLM) in IIS 7. I have 2 questions, please. Question #1: when the browser makes a request for a CSS file (leave the 401 response for now - this is how NTLM works), it requests it with an If-Modified-Since header. Why is it adding this header? How can I configure it? Why doesn't it use the settings from IE and try to download the file each time, as I showed in the first picture? Question #2: the response (after NTLM negotiation) for that was a Not Modified response, which is a 304 header, and I assume it's because we sent the request with the If-Modified-Since header. But there is a problem: it is actually telling me to load from my cache. But I told it explicitly in the IE settings not to load from cache. What am I missing here? Thanks a lot.
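    For reference, the conditional-request round trip itself is easy to watch outside IE; a small Python sketch (the asset URL is hypothetical):

        import requests

        url = "http://intranet/site.css"          # hypothetical asset
        first = requests.get(url)
        stamp = first.headers.get("Last-Modified")

        # Replaying with If-Modified-Since is exactly what the browser does;
        # 304 means 'use the copy you already have' - no body is re-sent.
        second = requests.get(url, headers={"If-Modified-Since": stamp})
        print(second.status_code)                 # 304 if unchanged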

    Read the article

  • Lots of HTTP HEAD requests originating from porn sites

    - by Don Corley
    My access log on my web server has a ton of HTTP HEAD requests coming from porn sites. What are HEAD requests, and are they doing something bad with my site? Here is an excerpt from my log:

        (valid request)
        96.251.177.249 - - [02/Jan/2011:23:42:25 -0800] "POST /ajax HTTP/1.1" 200 0 "http://www.mywebsite.com/abc.html" "Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.44 Safari/534.7"
        80.153.114.208 - - [02/Jan/2011:23:43:11 -0800] "HEAD / HTTP/1.0" 302 185 "http://www.somepornsite.com" "Mozilla/5.0 (X11; U; Linux i686; it-IT; rv:1.9.0.2) Gecko/2008092313 Ubuntu/9.25 (jaunty) Firefox/3.8"
        80.153.114.208 - - [02/Jan/2011:23:43:11 -0800] "HEAD /tourappxsl HTTP/1.0" 200 16871 "http://www.somepornsite.com" "Mozilla/5.0 (X11; U; Linux i686; it-IT; rv:1.9.0.2) Gecko/2008092313 Ubuntu/9.25 (jaunty) Firefox/3.8"

    I changed only the web addresses in this log. Thanks for any ideas, Don
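    Background, for anyone landing here: a HEAD request returns the same status line and headers a GET would, but no body - crawlers and link checkers use it to probe URLs cheaply. A Python sketch of what those clients are doing (the URL mirrors the sanitized one in the log):

        import requests

        resp = requests.head("http://www.mywebsite.com/", allow_redirects=False)
        # Status and headers only; resp.content will be empty
        print(resp.status_code, resp.headers.get("Content-Length"))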

    Read the article

  • Mod Rewrite - directing HTTP/HTTPS traffic to the appropriate virtual hosts

    - by kce
    I have an Apache2 web server (v. 2.2.16) running on Debian, hosting three virtual hosts. The first two hosts are HTTP-only (server1 and server2). The last host is HTTPS-only (server3). My virtual host configuration files can be found at pastebin. I would like to use mod_rewrite to get the following behavior: any request for http://server3 is redirected to https://server3, and any request for either https://server1 or https://server2 is redirected to http://server1 or http://server2 as appropriate. Currently, requesting http://server3 gives you a 403 because indexing is disabled for that host, and a request for https://server1 or https://server2 will resolve as https://server3 (as it's the only virtual host running SSL). This behavior is not desirable. So far I have added a rewrite rule to the central configuration file (myServerWideConfs.conf), unfortunately with no effect. I was under the impression that this rule (or something similar) should rewrite all https:// requests for server1 and server2 to the proper http:// request:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^server3 [NC]
        RewriteRule (.*) http://%{HTTP_HOST}

    My question is two-fold: What mod_rewrite rules should I use to accomplish this, and where should they go? Debian's packaging of Apache has a pretty granular (i.e., fractured) configuration file layout; should my rewrite rules go in /etc/apache2/apache2.conf, /etc/apache2/conf.d/myServerWideConfs.conf, or the individual virtual host files? And is mod_rewrite the right tool to accomplish this, or am I missing something in my greater Apache configuration?
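    A sketch of one common arrangement (untested, and the host names are assumptions based on the question): put the redirects in the individual virtual hosts rather than the server-wide file, because Apache picks the matching vhost before mod_rewrite runs.

        # In a plain-HTTP (*:80) vhost for server3: force HTTPS
        RewriteEngine On
        RewriteRule ^(.*)$ https://server3$1 [R=301,L]

        # In the SSL (*:443) vhost, which catches all HTTPS traffic:
        # bounce any host other than server3 back to plain HTTP
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^server3 [NC]
        RewriteRule ^(.*)$ http://%{HTTP_HOST}$1 [R=301,L]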

    Read the article

  • http to https upgrade -- SEO troubles

    - by SLIM
    I upgraded my site so that all pages have gone from using HTTP to HTTPS. I didn't consider that Google treats HTTPS pages differently than HTTP. I re-created my sitemap so that all links now reflect the new HTTPS URLs and let it be for a few days. (Whoops!) Google is now re-indexing all the HTTPS pages. I have about 19k pages on the site, and Google has already indexed about 8k of the new HTTPS ones. The problem is that Google sees all of these as brand-new pages, when many of them have a long HTTP history. Of course most of you will recognize the problem: I didn't set up a 301 from the old HTTP to the new HTTPS. Is it too late to do this? Should I switch my sitemap back to HTTP and then 301 to the new HTTPS? Or should I leave the sitemap as is and set up 301 redirects anyway? I'm not even sure if Google is trying to reach the HTTP site anymore. Currently the site is doing 303 redirects (from HTTP to HTTPS), although I haven't figured out why yet. Thanks for any suggestions you can offer.
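    For reference, a widely used Apache recipe for the blanket HTTP-to-HTTPS 301 (a sketch to adapt; this is not taken from the asker's configuration):

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]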

    Read the article

  • Getting the index of the returned max or min item using max()/min() on a list

    - by KevinGriffin
    I'm using Python's max and min functions on lists for a minimax algorithm, and I need the index of the value returned by max() or min(). In other words, I need to know which move produced the max (at a first player's turn) or min (second player) value.

        for i in range(9):
            newBoard = currentBoard.newBoardWithMove([i / 3, i % 3], player)
            if newBoard:
                temp = minMax(newBoard, depth + 1, not isMinLevel)
                values.append(temp)
        if isMinLevel:
            return min(values)
        else:
            return max(values)

    I need to be able to return the actual index of the min or max value, not just the value.
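    Two idiomatic ways to get the winning index (a sketch; assumes values is non-empty, and note that index() returns the first match on ties):

        values = [3, 7, 2, 9, 4]

        # Option 1: find the value, then look up its position
        i_min = values.index(min(values))

        # Option 2: compare by value but return the index directly
        i_max = max(range(len(values)), key=lambda i: values[i])

        print(i_min, i_max)   # 2 3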

    Read the article

  • Age verification forms and crawlers

    - by user333763
    I have created a website for a beer brand and had to include an age verification page. The verification script is written in PHP and uses sessions to store the verification variable. The script works so that no matter which link you use to enter the website, it takes you to the verification page first. The verification is very simple. There are 2 buttons: "I'm under 21" and "I'm over 21". If you click the latter, you can browse the website. After some time I discovered that web crawlers are not able to get past the verification page. I checked the website in Google Webmaster Tools and the only text content scanned was from the verification page. I read somewhere that crawlers are not able to submit form buttons - is it true? Considering the fact that age verification pages are useless anyway, maybe I should just leave it as a starting page but not forbid going around it, e.g. from links to the subpages?

    Read the article

  • Fluid CSS: floating column with max-width and overflow

    - by Ates Goral
    I'm using a fluid layout in the new theme that I'm working on for my blog. I often blog about code and include <pre> blocks within the posts. The float: left column for the content area has a max-width so that the column stops at a certain maximum width and can also be shrunk: +----------+ +------+ | text | | text | | | | | | | | | | | | | | | | | | | | | +----------+ +------+ max shrunk What I want is for the <pre> elements to be wider than the text column so that I can fit 80-character-wrapped code without horizontal scroll bars. But I want the <pre> elements to overflow from the content area, without affecting its fluidity: +----------+ +------+ | text | | text | | | | | +----------+--+ +------+------+ | code | | code | +----------+--+ +------+------+ | | | | +----------+ +------+ max shrunk But, max-width stops being fluid once I insert the overhanging <pre> in there: the width of the column remains at the specified max-width even when I shrink the browser beyond that width. I've reproduced the issue with this bare-minimum scenario: <div style="float: left; max-width: 460px; border: 1px solid red"> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit</p> <pre style="max-width: 700px; border: 1px solid blue"> function foo() { // Lorem ipsum dolor sit amet, consectetur adipisicing elit } </pre> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit</p> </div> I noticed that doing either of the following brings back the fluidity: Remove the <pre> (doh...) Remove the float: left The workaround I'm currently using is to insert the <pre> elements into "breaks" in the post column, so that the widths of the post segments and the <pre> segments are managed mutually exclusively: +----------+ +------+ | text | | text | +----------+ +------+ +-------------+ +-------------+ | code | | code | +-------------+ +-------------+ +----------+ +------+ +----------+ +------+ max shrunk But this forces me to insert additional closing and opening <div> elements into the post markup which I'd rather keep semantically pristine. Admittedly, I don't have a full grasp of how the box model works with floats with overflowing content, so I don't understand why the combination of float: left on the container and the <pre> inside it cripple the max-width of the container. I'm observing the same problem on Firefox/Chrome/Safari/Opera. IE6 (the crazy one) seems happy all the time. This also doesn't seem dependent on quirks/standards mode. Update I've done further testing to observe that max-width seems to get ignored when the element has a float: left. I glanced at the W3C box model chapter but couldn't immediately see an explicit mention of this behaviour. Any pointers?

    Read the article

  • Keep-alive for long-lived HTTP session (not persistent HTTP)

    - by stackoverflowuser2010
    At work, we have a client-server system where clients submit requests to a web server through HTTP. The server-side processing can sometimes take more than 60 seconds, which is the proxy timeout value set by our company's IT staff and cannot be changed. Is there a way to keep the HTTP connection alive for longer than 60 seconds (preferably for an arbitrarily long period of time), either by heartbeat messages from the server or the client? I know there are HTTP 1.1 persistent connections, but that is not what I want. Does HTTP have a keep-alive capability, or would this have to be done at the TCP level through some sort of socket option?
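    TCP-level keepalive can be enabled per socket, independent of HTTP; a Python sketch (the interval values are arbitrary, and note that an HTTP proxy may still enforce its own idle timeout on the request itself):

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

        # Linux-specific knobs; not available on every platform
        if hasattr(socket, "TCP_KEEPIDLE"):
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)   # idle secs before probing
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # secs between probes
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before drop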

    Read the article

  • Why is IIS 7.5 seeing some requests as HTTP/1.0?

    - by Zhaph - Ben Duguid
    While trying to work out why Static File Compression wasn't working on one of our IIS servers, the error was coming back as "NO_COMPRESSION_10", which translates to: Server not configured to compress 1.0 requests. Looking at the requests in Fiddler, I can see that I'm requesting HTTP 1.1, but everything is being sent back as HTTP 1.0. Request (from Chrome, captured via Fiddler):

        GET /css/reset.css HTTP/1.1
        Host: [-----].com
        Connection: keep-alive
        Cache-Control: max-age=0
        If-Modified-Since: Tue, 16 Oct 2012 15:04:34 GMT
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11
        Accept: text/css,*/*;q=0.1
        Referer: http://[-----].com/
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-GB,en;q=0.8,en-US;q=0.6
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    Response from IIS:

        HTTP/1.0 200 OK
        Cache-Control: no-cache, no-store
        Pragma: no-cache
        Content-Type: text/html; charset=utf-8
        Expires: -1
        Server: Microsoft-IIS/7.5
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Tue, 11 Dec 2012 11:57:03 GMT
        Connection: close
        Content-Length: 108837

    Other servers with the same host that I'm running this site on all respond with HTTP/1.1. How can I persuade IIS to respond with HTTP/1.1 rather than HTTP/1.0? Edit to add: Digging deeper, I can see that some responses from the server are indeed being returned compressed, so I guess really I'm trying to work out why talking to this particular server from our office seems to result in it seeing 1.0 requests, while other servers at the same co-loc don't?

    Read the article

  • Mathematically Find Max Value without Conditional Comparison

    - by Cnich
    ---------- Updated ----------

    codymanix and moonshadow have been a big help thus far. I was able to solve my problem using the equations, and instead of using right shift I divided by 29. Because with 32 bits signed, 2^31 overflows, so 29 works! Prototype in PHP:

        $r = $x - (($x - $y) & (($x - $y) / (29)));

    Actual code for LEADS (you can only do one math function PER LINE!!! AHHHH!!!):

        DERIVED1 = IMAGE1 - IMAGE2;
        DERIVED2 = DERIVED1 / 29;
        DERIVED3 = DERIVED1 AND DERIVED2;
        MAX = IMAGE1 - DERIVED3;

    ---------- Original Question ----------

    I don't think this is quite possible with my application's limitations, but I figured it's worth a shot to ask. I'll try to make this simple. I need to find the max value between two numbers without being able to use an IF or any conditional statement. In order to find the MAX value I can only perform the following functions: Divide, Multiply, Subtract, Add, NOT, AND, OR. Let's say I have two numbers:

        A = 60;
        B = 50;

    Now if A were always greater than B, it would be simple to find the max value:

        MAX = (A - B) + B;  // ex. 10 = (60 - 50); 10 + 50 = 60 = MAX

    The problem is that A is not always greater than B. I cannot perform ABS, MAX, MIN or conditional checks with the scripting application I am using. Is there any way, using only the limited operations above, to find a value VERY close to the max?
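    For reference, the classic branchless form this converges on (a sketch; it relies on an arithmetic right shift, so in fixed-width languages the difference must fit in a signed 32-bit value):

        def branchless_max(x, y):
            d = x - y
            # d >> 31 is 0 when d >= 0 and all-ones (-1) when d < 0,
            # so the mask selects either 0 or the whole difference.
            return x - (d & (d >> 31))

        print(branchless_max(60, 50))   # 60
        print(branchless_max(50, 60))   # 60

    (In Python, >> sign-extends arbitrary-precision ints, so this happens to work for any integers; in C-like languages it is exact only within the 32-bit range.)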

    Read the article

  • How do I determine if my controller is in IDE or AHCI mode in Linux?

    - by philcolbourn
    I have an old MacBook Pro 4,1 (early 2008) - but I suspect an answer would apply to many MacBook Pros. It has an Intel IDE/SATA controller (ICH8M/ICH8M-E). I have installed a patched MBR that is supposed to put my controller into AHCI mode. It does this by setting some controller port value that I don't understand. This seems to work, as I get this from lspci:

        00:1f.1 IDE interface: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 03)
        00:1f.2 IDE interface: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode] (rev 03)

    Now most, perhaps all, sites that provide a solution (enabling AHCI) suggest that after a sleep/wake cycle the controller will revert to IDE mode due to how Apple supports Windows. They recommend disabling sleep. From the author of patchedcode.bin: I think Enabling AHCI for Windows on MacBooks. NB: I do not have Boot Camp installed and I do not have Windows installed. Is there a way to prove that my controller is in IDE or AHCI mode? Background data: using the patchedcode.bin MBR I get this in syslog:

        Jun 12 22:33:22 max kernel: [ 1.860955] ahci 0000:00:1f.2: version 3.0
        Jun 12 22:33:22 max kernel: [ 1.861052] ahci 0000:00:1f.2: irq 45 for MSI/MSI-X
        Jun 12 22:33:22 max kernel: [ 1.861117] ahci 0000:00:1f.2: AHCI 0001.0100 32 slots 3 ports 1.5 Gbps 0x1 impl SATA mode
        Jun 12 22:33:22 max kernel: [ 1.861120] ahci 0000:00:1f.2: flags: 64bit ncq sntf pm led clo pio slum part ccc ems
        Jun 12 22:33:22 max kernel: [ 1.861130] ahci 0000:00:1f.2: setting latency timer to 64
        Jun 12 22:33:22 max kernel: [ 1.880880] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no)
        Jun 12 22:33:22 max kernel: [ 1.880983] scsi2 : ahci
        Jun 12 22:33:22 max kernel: [ 1.884552] scsi3 : ahci
        Jun 12 22:33:22 max kernel: [ 1.886932] scsi4 : ahci
        Jun 12 22:33:22 max kernel: [ 1.886998] ata3: SATA max UDMA/133 abar m2048@0xdb504000 port 0xdb504100 irq 45
        Jun 12 22:33:22 max kernel: [ 1.887000] ata4: DUMMY
        Jun 12 22:33:22 max kernel: [ 1.887002] ata5: DUMMY
        Jun 12 22:33:22 max kernel: [ 2.204103] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
        Jun 12 22:33:22 max kernel: [ 2.204656] ata3.00: ATA-8: FUJITSU MHY2200BH, 0081000D, max UDMA/100
        Jun 12 22:33:22 max kernel: [ 2.204662] ata3.00: 390721968 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
        Jun 12 22:33:22 max kernel: [ 2.205324] ata3.00: configured for UDMA/100
        Jun 12 22:33:22 max kernel: [ 2.205554] scsi 2:0:0:0: Direct-Access ATA FUJITSU MHY2200B 0081 PQ: 0 ANSI: 5

    Using my original MBR I get this from syslog:

        Jun 13 18:07:13 max kernel: [ 0.622861] ata_piix 0000:00:1f.1: version 2.13
        Jun 13 18:07:13 max kernel: [ 0.622869] ata_piix 0000:00:1f.1: power state changed by ACPI to D0
        Jun 13 18:07:13 max kernel: [ 0.622924] ata_piix 0000:00:1f.1: setting latency timer to 64
        Jun 13 18:07:13 max kernel: [ 0.623339] scsi0 : ata_piix
        Jun 13 18:07:13 max kernel: [ 0.623730] scsi1 : ata_piix
        Jun 13 18:07:13 max kernel: [ 0.623765] ata1: PATA max UDMA/100 cmd 0x8108 ctl 0x811c bmdma 0x80e0 irq 21
        Jun 13 18:07:13 max kernel: [ 0.623767] ata2: PATA max UDMA/100 cmd 0x8100 ctl 0x8118 bmdma 0x80e8 irq 21
        Jun 13 18:07:13 max kernel: [ 0.623810] ata_piix 0000:00:1f.2: MAP [
        Jun 13 18:07:13 max kernel: [ 0.623811] P0 -- -- -- ]
        Jun 13 18:07:13 max kernel: [ 0.623866] ata_piix 0000:00:1f.2: setting latency timer to 64
        Jun 13 18:07:13 max kernel: [ 0.624241] scsi2 : ata_piix
        Jun 13 18:07:13 max kernel: [ 0.624558] scsi3 : ata_piix
        Jun 13 18:07:13 max kernel: [ 0.624862] ata3: SATA max UDMA/133 cmd 0x80f8 ctl 0x8114 bmdma 0x8020 irq 18
        Jun 13 18:07:13 max kernel: [ 0.624865] ata4: SATA max UDMA/133 cmd 0x80f0 ctl 0x8110 bmdma 0x8028 irq 18
        Jun 13 18:07:13 max kernel: [ 1.208879] ata3.00: ATA-8: FUJITSU MHY2200BH, 0081000D, max UDMA/100
        Jun 13 18:07:13 max kernel: [ 1.208882] ata3.00: 390721968 sectors, multi 16: LBA48 NCQ (depth 0/32)
        Jun 13 18:07:13 max kernel: [ 1.208961] ata1.01: ATAPI: MATSHITA DVD+/-RW UJ-867S, 1.00, max UDMA/33
        Jun 13 18:07:13 max kernel: [ 1.216186] ata3.00: configured for UDMA/100
        Jun 13 18:07:13 max kernel: [ 1.224396] ata1.01: configured for UDMA/33

    Read the article

  • HTTP Error: 400 when sending MSMQ message over HTTP

    - by dontera
    I am developing a solution which will use MSMQ to transmit data between two machines. Due to the separation of said machines, we need to use HTTP transport for the messages. In my test environment I am using a Windows 7 x64 development machine, which is attempting to send messages using a homebrew app to any of several test machines I have control over. All machines are either Windows Server 2003 or Server 2008 with MSMQ and MSMQ HTTP support installed. For any test destination, I can use the following queue path name with success:

        FORMATNAME:DIRECT=TCP:[machine_name_or_ip]\private$\test_queue

    But for any test destination, the following always fails:

        FORMATNAME:DIRECT=HTTP://[machine_name_or_ip]/msmq/private$/test_queue

    I have used all permutations of machine names/IPs available. I have created mappings using the method described at this blog post. All result in the same HTTP Error: 400. The following is the code used to send messages:

        MessageQueue mq = new MessageQueue(queuepath);
        System.Messaging.Message msg = new System.Messaging.Message
        {
            Priority = MessagePriority.Normal,
            Formatter = new XmlMessageFormatter(),
            Label = "test"
        };
        msg.Body = txtMessageBody.Text;
        msg.UseDeadLetterQueue = true;
        msg.UseJournalQueue = true;
        msg.AcknowledgeType = AcknowledgeTypes.FullReachQueue | AcknowledgeTypes.FullReceive;
        msg.AdministrationQueue = new MessageQueue(@".\private$\Ack");
        if (SendTransactional)
            mq.Send(msg, MessageQueueTransactionType.Single);
        else
            mq.Send(msg);

    Additional information: in the IIS logs on the destination machines I can see each message I send being recorded as a POST with a status code of 200. I am open to any suggestions.

    Read the article

  • W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise-updates/main i386 Packages

    - by Harbhag
    I keep getting this warning whenever I try to run sudo apt-get update:

        W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise-updates/main i386 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise-updates_main_binary-i386_Packages)
        W: You may want to run apt-get update to correct these problems

    Below is the output from the /etc/apt/sources.list file:

        deb http://archive.ubuntu.com/ubuntu precise main restricted
        deb-src http://archive.ubuntu.com/ubuntu precise main restricted
        deb http://archive.ubuntu.com/ubuntu precise-updates main restricted
        deb-src http://archive.ubuntu.com/ubuntu precise-updates main restricted
        deb http://archive.ubuntu.com/ubuntu precise universe
        deb-src http://archive.ubuntu.com/ubuntu precise universe
        deb http://archive.ubuntu.com/ubuntu precise-updates universe
        deb-src http://archive.ubuntu.com/ubuntu precise-updates universe
        deb http://archive.ubuntu.com/ubuntu precise multiverse
        deb-src http://archive.ubuntu.com/ubuntu precise multiverse
        deb http://archive.ubuntu.com/ubuntu precise-updates multiverse
        deb-src http://archive.ubuntu.com/ubuntu precise-updates multiverse
        deb http://archive.ubuntu.com/ubuntu precise-security main restricted
        deb-src http://archive.ubuntu.com/ubuntu precise-security main restricted
        deb http://archive.ubuntu.com/ubuntu precise-security universe
        deb-src http://archive.ubuntu.com/ubuntu precise-security universe
        deb http://archive.ubuntu.com/ubuntu precise-security multiverse
        deb-src http://archive.ubuntu.com/ubuntu precise-security multiverse

    Read the article

  • ASP.NET, HTTP 404 and SEO

    - by paxer
    The other day our SEO Manager told me that he is not happy about the way ASP.NET applications return HTTP response codes for the Page Not Found (404) situation. I started researching and found interesting things, which could probably help others in a similar situation.

    1) By default an ASP.NET application handles a 404 error by using these web.config settings:

        <customErrors defaultRedirect="GenericError.htm" mode="On">
          <error statusCode="404" redirect="404.html"/>
        </customErrors>

    However this approach has a problem, and this is actually what our SEO manager was talking about. So first of all it returns an HTTP 302 Redirect code and then an HTTP 200 - OK code. The problem: we need to have the HTTP 404 response code at the end of the response, for SEO purposes.

    Solution 1: Let's change our web.config settings a bit to handle the 404 error not on a static html page but on an .aspx page:

        <customErrors defaultRedirect="GenericError.htm" mode="On">
          <error statusCode="404" redirect="404.aspx"/>
        </customErrors>

    And now let's add these lines to the Page_Load event on the 404.aspx page:

        protected void Page_Load(object sender, EventArgs e)
        {
            Response.StatusCode = 404;
        }

    Now let's run our test again. It has got better: the last HTTP response code is 404, but my SEO manager still was not happy, because we still have a 302 code before it, and as he said this is bad for Google search optimization. So we need to have only the 404 HTTP code, alone.

    Solution 2: Let's comment out our web.config settings:

        <!--<customErrors defaultRedirect="GenericError.htm" mode="On">
          <error statusCode="404" redirect="404.html"/>
        </customErrors>-->

    Now let's open our Global.asax file, or if it does not exist in your project - add it. Then we need to add logic which will detect if the server error code is 404 (Page Not Found) and handle it:

        protected void Application_Error(object sender, EventArgs e)
        {
            Exception ex = Server.GetLastError();
            if (ex is HttpException)
            {
                if (((HttpException)(ex)).GetHttpCode() == 404)
                    Server.Transfer("~/404.html");
            }
            // Code that runs when an unhandled error occurs
            Server.Transfer("~/GenericError.htm");
        }

    Cool, now let's start our test again... Yehaa, looks like now we have only the 404 HTTP response code. The SEO manager and Google are happy, and so am I :) Hope this helps!

    Read the article

  • HTTP Cache Control max-age, must-revalidate

    - by nyb
    I have a couple of queries related to Cache-Control. If I specify Cache-Control "max-age=3600, must-revalidate" for a static html/js/images/css file, with a Last-Modified header defined in the HTTP response: a. Does a browser/proxy cache (like Squid/Akamai) go all the way to the origin server to validate before max-age expires, or will it serve content from cache until max-age expires? b. After max-age expiry (that is, expiry from cache), is there an IMS (If-Modified-Since) check, or is content re-downloaded from the origin server without an IMS check?
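    A sketch of the freshness rule these directives set up (per the HTTP caching model; the 3600 comes from the question):

        import time

        def may_serve_from_cache(stored_at, max_age=3600):
            # Within max-age a cache may answer without contacting the origin.
            # Once stale, must-revalidate forbids serving the entry until a
            # conditional request (If-Modified-Since / If-None-Match) gets a
            # 304 back from the origin.
            return (time.time() - stored_at) < max_age

        stored_at = time.time() - 4000          # cached 4000s ago
        print(may_serve_from_cache(stored_at))  # False -> must revalidate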

    Read the article

  • Regex age verification

    - by KevinDeus
    Anyone have a good regex for age verification? I have a date field where I am validating that what the user enters is a date. Basically I want to figure out whether that date is valid, and then further constrain the date to be within x number of years. Perhaps this may be too complicated to tack onto what I have so far, but I figured it wouldn't be.

        ^([1][012]|[0]?[1-9])[/-]([3][01]|[12]\d|[0]?[1-9])[/-](\d{4}|\d{2})$
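    A regex alone can check the shape of a date but not ranges like "at least 18 years ago"; the usual two-step is parse, then compare. A Python sketch (the MM/DD/YYYY format and the threshold of 18 are assumptions):

        from datetime import date, datetime

        def age_ok(text, min_age=18):
            try:
                born = datetime.strptime(text, "%m/%d/%Y").date()
            except ValueError:
                return False          # not a real calendar date
            today = date.today()
            # Subtract one if this year's birthday hasn't happened yet
            age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
            return age >= min_age

        print(age_ok("12/31/1990"))   # True for any date 18+ years back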

    Read the article

  • How to send a HTTP "Bad Request" Response

    - by michael
    Hi, I am writing a C program which needs to send back an HTTP Bad Request response. This is what I write to the socket:

        HTTP/1.1 400 Bad Request\r\n
        Connection: close\r\n
        \r\n

    My question is: why is the browser still spinning, as if it is still loading something? Am I missing a header in the HTTP response, or am I missing something else? Thank you.
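    A likely cause (an assumption, since the asker's full code isn't shown) is that the browser can't tell the response is complete: with neither a Content-Length nor an actual connection close, it keeps waiting. A minimal complete 400, sketched in Python for brevity:

        import socket

        RESPONSE = (
            "HTTP/1.1 400 Bad Request\r\n"
            "Content-Type: text/plain\r\n"
            "Content-Length: 11\r\n"        # body size lets the browser stop
            "Connection: close\r\n"
            "\r\n"
            "Bad Request"
        )

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("", 8080))
        srv.listen(1)
        conn, _ = srv.accept()
        conn.recv(4096)                      # read (and ignore) the request
        conn.sendall(RESPONSE.encode())
        conn.close()                         # actually closing matters too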

    Read the article

  • MatheMagics - Guess My Age using cards 1-63

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved.

    MatheMagics - Guess My Age By Using Cards 1-63. This is an interesting game. If you are under 63 - yes, we discriminate - look at the 6 cards and click the check-box underneath each card where your age is listed. When this is done you have 2 options: 1) let the Mathemagician tell you your age, or 2) click Submit. To get to the game, go to www.mgsltns.com/games.htm and then click on the link to Guess My Age 1-63, or go directly to http://www.mgsltns.com/GMAJS.htm. If you want to learn how it works, click the How It Works link at the bottom-right of the game page. That's All Folks
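    The classic version of this six-card trick is plain binary: card k lists every number from 1 to 63 whose k-th bit is set, so adding up the smallest number (a power of two) on each chosen card reconstructs the age. A sketch, assuming the game follows that scheme:

        def guess_age(checked_cards):
            # Card k holds all n in 1..63 with bit k set; its smallest entry is 2**k
            return sum(2 ** k for k in checked_cards)

        # Someone aged 42 (binary 101010) would tick cards 1, 3 and 5
        print(guess_age([1, 3, 5]))   # 42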

    Read the article
