Search Results

Search found 20904 results on 837 pages for 'disk performance'.

Page 644/837

  • Enable Session state in SharePoint 2010

    - by Albert D. Kallal
    I set up a test box with Server 2008 (Standard edition, not R2 and not the Hyper-V edition). I then installed SharePoint 2010. I was amazed at how easily the whole setup went (the prerequisites setup on the SharePoint disk made this process painless; a great install system). This test box is being used for testing Access web services. I am able to publish Access applications to this test server, and they publish and run just fine on the SharePoint site through a web browser. However, the only thing that does not work is launching an Access report. The error message I get back is: "This report failed to load because session state is not turned on." I can't seem to find the setting anywhere to turn session state on. Any hints or links on how to enable session state in SharePoint 2010 would be most appreciated.

    Read the article

  • Best practices for combining Lucene.NET and a relational database?

    - by FlySwat
    I'm working on a project where I will have a LOT of data, and it will be searchable by several forms that are very efficiently expressed as SQL queries, but it also needs to be searched via natural language processing. My plan is to build an index using Lucene for this form of search. My question is: if I do this and perform a search, Lucene will return the IDs of matching documents in the index, and I then have to look up these entities from the relational database. This could be done in two ways (that I can think of so far): N separate queries (horrible), or passing all the IDs to a stored procedure at once (perhaps as a comma-delimited parameter). The latter has the downside of being limited by the max parameter size, and the slow performance of a UDF to split the string into a temporary table. I'm almost tempted to mirror everything into Lucene's index, so that I can periodically regenerate the index from the backing store but only need to access it for the frontend. Advice?
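
    One hedged middle ground between those two options (a sketch only; MyDataContext, Document, and the batch size are illustrative names, not from the question): fetch the Lucene hit IDs in fixed-size batches and let LINQ translate Contains into an IN clause, so there are neither N round trips nor one oversized delimited-string parameter.

        // Sketch: load entities for Lucene hit IDs in batches.
        // Requires System.Linq and System.Collections.Generic; MyDataContext and
        // Document are hypothetical names for the data layer.
        static List<Document> LoadByIds(MyDataContext db, IList<int> luceneIds)
        {
            const int batchSize = 500;                          // tune to taste
            var results = new List<Document>(luceneIds.Count);
            for (int i = 0; i < luceneIds.Count; i += batchSize)
            {
                var batch = luceneIds.Skip(i).Take(batchSize).ToList();
                // Enumerable.Contains is translated to "WHERE Id IN (...)" by LINQ to SQL / EF.
                results.AddRange(db.Documents.Where(d => batch.Contains(d.Id)));
            }
            return results;
        }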

    Read the article

  • Transfer iPod files to Computer/iTunes/iPod [closed]

    - by lucarn23
    Hi, for all iPod fans, I want to share with you a smart tool called iPod Transfer. It can help: (1) transfer iPod music, video, photos, podcasts and TV shows to iTunes or your local computer; (2) transfer iPod files to another iPod/iPhone; (3) transfer local files from your computer to your iPod without iTunes; (4) manage your iPod as a hard disk instead of just syncing it with iTunes. Download it free here to try it; it is easy to use. http://www.macdvdrippersoft.com/ipod-transfer-avcware.html

    Read the article

  • ASP.NET Validator Controls Slowing Down Page

    - by Calvin Nguyen
    Hi all, I have an UpdatePanel that has user controls dynamically added to it. There can be a few dozen user controls at times. The page / UpdatePanel slows down big time on each postback as more user controls are added. After some digging, I was surprised to find the cause is the various CompareValidator, CustomValidator, RegularExpressionValidator and RequiredFieldValidator controls that exist on each user control. Does anyone have suggestions? It strikes me as very peculiar that inclusion of these ASP.NET controls could have such a horrible effect on performance. Thanks, Calvin
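
    One thing worth isolating (a hedged sketch, not a confirmed fix): each validator registers client-side script and expando attributes that are re-emitted on every partial postback, and with dozens of dynamically added controls that adds up. Turning client script off for the validators in the dynamic controls keeps server-side validation intact and shows whether that registration is the bottleneck. The helper below is illustrative.

        // Sketch: disable client-side validation script for every validator under a
        // root control (requires System.Web.UI and System.Web.UI.WebControls).
        static void DisableClientValidation(Control root)
        {
            foreach (Control child in root.Controls)
            {
                var validator = child as BaseValidator;
                if (validator != null)
                    validator.EnableClientScript = false;   // server-side Validate() still runs
                DisableClientValidation(child);
            }
        }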

    Read the article

  • Image Application in WPF and Performance

    - by Harsha
    Hello all, I am planning to build an image processing application using WPF. Brightness/contrast and histogram are the main operations of this application. I have downloaded the application "Foundations: Bitmaps and Pixel Bits" from http://msdn.microsoft.com/en-us/magazine/cc534995.aspx. But when I try to open images larger than 1200x1600, it is very slow. How do I increase the performance? Has anyone worked on image processing in WPF? Please suggest how to solve this performance issue in WPF for images (more than 1600x1200). Thank you, Harsha
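
    One hedged starting point (a sketch, assuming the images are displayed rather than edited at full resolution): ask WPF to decode the bitmap near its display size instead of at the full 1600x1200+, and freeze it so the framework stops tracking changes. The file path and target width below are illustrative.

        // Sketch: decode a large image at reduced resolution (System.Windows.Media.Imaging).
        var bmp = new BitmapImage();
        bmp.BeginInit();
        bmp.UriSource = new Uri(@"C:\images\sample.jpg");   // hypothetical path
        bmp.CacheOption = BitmapCacheOption.OnLoad;          // read the file once, then release it
        bmp.DecodePixelWidth = 1024;                         // decode smaller than the source
        bmp.EndInit();
        bmp.Freeze();                                        // immutable: cheaper to render and share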

    Read the article

  • Alternative to Nested Loop For Comparison

    - by KGVT
    I'm currently writing a program that needs to compare each file in an ArrayList of variable size. Right now, the way I'm doing this is through a nested code loop:

        if (tempList.size() > 1) {
            for (int i = 0; i <= tempList.size() - 1; i++)          // Nested loops. I should feel dirty?
                for (int j = i + 1; j <= tempList.size() - 1; j++) {
                    // *Gets sorted.
                    System.out.println(checkBytes(tempList.get(i), tempList.get(j)));
                }
        }

    I've read a few differing opinions on the necessity of nested loops, and I was wondering if anyone had a more efficient alternative. At a glance, each comparison is going to need to be done either way, so the performance should be fairly steady, but I'm moderately convinced there's a cleaner way to do this. Any pointers?

    Read the article

  • Custom broadcast events in AS3?

    - by Ender
    In Actionscript 3, most events use the capture/target/bubble model, which is pretty popular nowadays: When an event occurs, it moves through the three phases of the event flow: the capture phase, which flows from the top of the display list hierarchy to the node just before the target node; the target phase, which comprises the target node; and the bubbling phase, which flows from the node subsequent to the target node back up the display list hierarchy. However, some events, such as the Sprite class's enterFrame event, do not capture OR bubble - you must subscribe directly to the target to detect the event. The documentation refers to these as "broadcast events." I assume this is for performance reasons, since these events will be triggered constantly for each sprite on stage and you don't want to have to deal with all that superfluous event propagation. I want to dispatch my own broadcast events. I know you can prevent an event from bubbling (Event.bubbles = false), but can you get rid of capture as well?

    Read the article

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times, and I'm just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyper-threaded cores? Is there a big difference in time between i7 and Xeon? Etc., etc. I just need advice from people who've put together kick-a build clusters. We've got a majority of the normal things to speed up builds in place (precompiled headers, ccache, local gigabit connections between the hosts, tons of RAM, etc.), so please just give advice on the best processor to use. Money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay

    Read the article

  • Cast to generic type in C#

    - by Andrej
    I have a Dictionary to map a certain type to a certain generic object for that type. For example: typeof(LoginMessage) maps to MessageProcessor<LoginMessage>. Now the problem is to retrieve this generic object at runtime from the Dictionary, or, to be more specific, to cast the retrieved object to the specific generic type. I need it to work something like this: Type key = message.GetType(); MessageProcessor<key> processor = messageProcessors[key] as MessageProcessor<key>; Hope there is an easy solution to this. Edit: I do not want to use ifs and switches. Due to performance issues I cannot use reflection of some sort either.
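
    A sketch of one workaround commonly used for this shape of problem (type and member names here are illustrative, not from the question): since the generic type argument isn't known at compile time at the lookup site, give every MessageProcessor<T> a non-generic base interface and dispatch through that, so no cast to MessageProcessor<key> is needed at all.

        // Sketch: non-generic base interface + generic implementation.
        interface IMessageProcessor
        {
            void Process(object message);
        }

        class MessageProcessor<T> : IMessageProcessor
        {
            public void Process(object message)
            {
                Process((T)message);            // single well-known cast, inside the processor
            }

            public void Process(T message)
            {
                // strongly typed handling here
            }
        }

        // The lookup side stays reflection-free:
        //   Dictionary<Type, IMessageProcessor> messageProcessors = ...;
        //   messageProcessors[message.GetType()].Process(message);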

    Read the article

  • Entity Framework, full-text search and temporary tables

    - by markus
    I have a LINQ-2-Entity query builder, nesting different kinds of Where clauses depending on a fairly complex search form. Works great so far. Now I need to use a SQL Server fulltext search index in some of my queries. Is there any chance to add the search term directly to the LINQ query, and have the score available as a selectable property? If not, I could write a stored procedure to load a list of all row IDs matching the full-text search criteria, and then use a LINQ-2-Entity query to load the detail data and evaluate other optional filter criteria in a loop per row. That would be of course a very bad idea performance-wise. Another option would be to use a stored procedure to insert all row IDs matching the full-text search into a temporary table, and then let the LINQ query join the temporary table. Question is: how to join a temporary table in a LINQ query, as it cannot be part of the entity model?
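
    One hedged alternative worth considering (a sketch only; the SQL, the DocumentHit class and the context member names are illustrative and assume an EF 4-style ObjectContext): let SQL Server do the full-text ranking and the join in a single round trip, materializing the result directly, so neither the temp table nor the per-row loop is needed.

        // Sketch: full-text search plus join executed server-side, mapped onto a plain
        // DTO whose property names match the selected columns (DocumentHit is hypothetical).
        var sql = @"SELECT d.*, ft.[RANK] AS Score
                    FROM CONTAINSTABLE(Documents, SearchText, @term) AS ft
                    JOIN Documents d ON d.Id = ft.[KEY]
                    ORDER BY ft.[RANK] DESC";

        var hits = context.ExecuteStoreQuery<DocumentHit>(sql, new SqlParameter("@term", searchTerm))
                          .ToList();

    The remaining optional filters could then be applied either inside the SQL itself or in memory over the materialized list, depending on how selective they are.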

    Read the article

  • Can JAXB Incrementally Marshall An Object?

    - by Intellectual Tortoise
    I've got a fairly simple, but potentially large structure to serialize. Basically the structure of the XML will be: <simple_wrapper> <main_object_type> <sub_objects> </main_object_type> ... main_object_type repeats up to 5,000 times ... </simple_wrapper> The main_object_type can have a significant amount of data. On my first 3,500-record extract, I had to give the JVM way more memory than it should need. So, I'd like to write out to disk after each (or a bunch of) main_object_type. I know that setting Marshaller.JAXB_FRAGMENT would allow fragments, but I lose the outer XML document tags and the <simple_wrapper>. Any suggestions?

    Read the article

  • PHP/MySQL database connection priority?

    - by Josh
    Hello, I have a production database that holds usage statistics for a service we're working on. I use PHP to periodically roll up different resolutions (day, week, month, year) of interesting statistics into buckets dictated by the resolution. The PHP application I've written "completes" its data when it's run, such that it will calculate all the rolled-up statistics for the resolutions and periods since it was last run. This is useful if we want to turn it off to debug database performance issues, because I can turn it back on and have it complete its data set. The problem I have is that the calculations are fairly intensive and drive up the QPS of the production database server. Is there a way to set a "priority" on a particular database connection so that it will only use "off-cycles" to do these calculations? Maybe a proper response would be to replicate the tables I'm working on into a different stats database, but unfortunately I don't have the resources in place to attempt such a thing (yet). Thanks in advance for any help, Josh

    Read the article

  • How can I turn on DynamicCompression feature of IIS programmatically?

    - by LockeVN
    I'm making an installer program for my web application. My web application uses CSS and JS heavily, so I want to enable both static and dynamic HTTP compression for IIS 7/7.5. It needs 2 steps: I can modify the web.config and put in the <httpCompression> tag; that part is OK. But DynamicContentCompression must be turned on in Windows Features to make httpCompression work. Static compression is enabled by default in IIS 7 and IIS 7.5, but dynamic compression is not enabled by default (although it's available). I can do it manually by turning on Start/Control Panel/Programs and Features/Turn Windows Features on or off/IIS/WWW Services/Performance Features/Dynamic Content Compression, but how can I programmatically turn that Windows feature on? I can use PowerShell or C# in my installer. Any idea how I might be able to do this? Thanks.
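
    One hedged route from C# (a sketch, not a verified installer step): shell out to DISM, which can enable optional Windows features from the command line. The feature name below is an assumption and should be confirmed on the target OS with "dism /online /get-features" before shipping.

        // Sketch: enable the Dynamic Content Compression feature via DISM (System.Diagnostics).
        // "IIS-HttpCompressionDynamic" is assumed; verify the exact feature name first.
        var psi = new ProcessStartInfo
        {
            FileName = "dism.exe",
            Arguments = "/online /enable-feature /featurename:IIS-HttpCompressionDynamic /norestart",
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var dism = Process.Start(psi))
        {
            dism.WaitForExit();
            if (dism.ExitCode != 0)
                throw new InvalidOperationException("DISM failed with exit code " + dism.ExitCode);
        }

    The installer would still write the <httpCompression> settings into web.config or applicationHost.config afterwards, as described above.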

    Read the article

  • eclipse plugin not loading dll due to long path

    - by user113018
    I am building an Eclipse plugin (a Notes plugin, but it's an Eclipse plugin in the end). One of the plugins my plugin depends on needs to load a native DLL. The problem is that this fails depending on where on the disk the DLL is. If the path is longer than a certain threshold I get the error below:

        java.lang.UnsatisfiedLinkError: nlsxbe (The filename or extension is too long.)
            at java.lang.ClassLoader.loadLibraryWithPath(ClassLoader.java:952)
            at java.lang.ClassLoader.loadLibraryWithClassLoader(ClassLoader.java:921)
            at java.lang.System.loadLibrary(System.java:452)
            at lotus.domino.NotesThread.load(Unknown Source)
            at lotus.domino.NotesThread.checkLoaded(Unknown Source)
            at lotus.domino.NotesThread.sinitThread(Unknown Source)
            at com.atempo.adam.lotus.plugin.views.TopicView.createPartControl(TopicView.java:609)

    I have added the path to the Path environment variable, and also registered the DLL, to no avail. My environment is MS Vista Professional, Java 1.5, Eclipse 3.4 (and Lotus 8). Anyone out there have a clue? Many thanks in advance.

    Read the article

  • Can I draw Qt objects directly to Win32 DC (Device Context)?

    - by Kevin
    I can draw Qt objects to a QImage and then draw the image to an HDC or CDC. This may hurt our application's performance. It would be great if I could draw Qt objects directly to a Win32 HDC or MFC CDC. I expect that there is a class, say QWin32Image for clarity, that I could use in this way: QWin32Image image(hdc, 100, 100, Format_ARGB32_Premultiplied); QPainter painter(&image); painter.drawText(....); Is something like this possible? Or is there a better way to do it?

    Read the article

  • How can a Delphi TForm / TPersistent object calculate its own deserialization time?

    - by mjustin
    For performance tests I need a way to measure the time needed for a form to load its definition from the DFM. All existing forms inherit from a custom form class. To capture the current time, this base class needs overridden methods as "extension points": the start of the deserialization process; after the deserialization (can be implemented by overriding the Loaded procedure); and the moment just before the execution of the OnFormCreate event. So the log for TMyForm.Create(nil) could look like: 00.000 instance created, 00.010 before deserialization, 01.823 after deserialization, 02.340 before OnFormCreate. Which TObject (or TComponent) methods are best suited? Maybe there are other extension points in the form creation process; please feel free to make suggestions.

    Read the article

  • Query optimization

    - by Lijo
    Hi team, I am trying to learn SQL query optimization using SQL Server 2005. However, I have not found any practical example where I can improve performance by tweaking the query alone. Can you please list some example queries, before and after optimization? It has to be query tuning – not adding an index, not creating a covering index, not making any alteration to the table. No change is supposed to be made to tables or indexes. [I understand that indexes are important. This is only for learning purposes.] It would be great if you could explain using a temp table instead of referring to a sample database like AdventureWorks. Expecting your support... Thanks, Lijo

    Read the article

  • PHP on Windows: Apache or IIS7?

    - by josecortesp
    Here's the thing: due to a new project, I have to learn PHP from scratch. I'm on Windows now and I DON'T want to change any time soon, so don't tell me to change. I need to set up a Dreamweaver CS4 / PHP 5 development environment, and I don't have a clue whether to use Apache or IIS7. I just need some advice on this. Remarks: I really don't care about performance (yet), nor portability, and so on. I just need to set it up quickly and easily. Thanks in advance

    Read the article

  • iPhone "slide to unlock" animation

    - by Russ
    Any ideas as to how Apple implemented the "slide to unlock" animation ("slide to power off" is another, identical example)? I thought about some sort of animating mask, but masking is not available on the iPhone OS for performance reasons. Is there some private API effect (like SuckEffect) that they might have used? A spotlight type of effect? Some Core Animation thing? Edit: It's definitely not a series of stills. I've seen examples of editing a plist value or something to customize the string on jailbroken iPhones.

    Read the article

  • Programmatically detect number of physical processors/cores or if hyper-threading is active on Windows

    - by HTASSCPP
    I have a multithreaded C++ application that runs on Windows, Mac and a few Linux flavours. To make a long story short: in order for it to run at maximum efficiency, I have to be able to instantiate a single thread per physical processor/core. Creating more threads than there are physical processors/cores degrades the performance of my program considerably. I can already correctly detect the number of logical processors/cores on all three of these platforms. To be able to detect the number of physical processors/cores correctly, I'll have to detect whether hyper-threading is supported AND active. My question therefore is whether there is a way to detect if hyper-threading is supported AND ENABLED? If so, how exactly?
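
    On the Windows side, one hedged way to cross-check this (shown as a C# / WMI sketch purely to illustrate the query, not the asker's C++; the native route would be GetLogicalProcessorInformation, or the same WMI class queried from C++): compare the physical core count with the logical processor count and treat "logical > cores" as hyper-threading being enabled.

        // Sketch: compare cores vs. logical processors via WMI
        // (add a reference to System.Management).
        int cores = 0, logical = 0;
        var query = "SELECT NumberOfCores, NumberOfLogicalProcessors FROM Win32_Processor";
        foreach (ManagementObject cpu in new ManagementObjectSearcher(query).Get())
        {
            cores   += Convert.ToInt32(cpu["NumberOfCores"]);
            logical += Convert.ToInt32(cpu["NumberOfLogicalProcessors"]);
        }
        bool hyperThreadingActive = logical > cores;   // more logical CPUs than cores => HT/SMT on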

    Read the article

  • Core Data iPad/iPhone BLOBS vs File system for 20k PDFs

    - by jamone
    I'm designing an iPad/iPhone app using Core Data. The main focus of the app is sorting and viewing up to 20,000 PDFs; they are ~200KB each. Typically it's best not to store BLOBs in a DB, but for desktop systems I've typically seen it said that if the BLOBs are < 1 MB then it's fine to use the DB. Any considerations I should take into account? If I store them in the file system, can I store them all in one directory and not have performance issues (I won't ever need to get a directory listing, since I'd store each file's path in the DB)? Should I divide them among a handful of directories? If so, is there a good rule on the number of files per directory?

    Read the article

  • best practices for setting development environment

    - by Sharique
    I use Linux as my primary OS. I need some suggestions on how I should set up my desktop and development environment. I work mostly on .NET and Drupal, but spend some time on other LAMP products and on C/C++ and Qt. I'm also interested in mobile (Android, etc.) and embedded development. Currently I install everything on my main OS, even if I use it only a little, and I use VMs a little. Should I use a separate VM for each kind of development (like one for .NET/Mono, another for C++, one for mobile, one for the DB only, one for xyz things, etc.), or keep the primary development environment on the main OS and move the others into VMs? The main OS should not get messed up, things should be easy to organize (a must), and performance should be optimal.

    Read the article

  • Efficiently Combine MatchCollections in .Net Regex

    - by Laramie
    In the simplified example, there are 2 regular expressions, one case sensitive, the other not. The idea would be to efficiently create an IEnumerable collection (see "combined" below) combining the results.

        string test = "abcABC";
        string regex = "(?<grpa>a)|(?<grpb>b)|(?<grpc>c)";
        Regex regNoCase = new Regex(regex, RegexOptions.IgnoreCase);
        Regex regCase = new Regex(regex);
        MatchCollection matchNoCase = regNoCase.Matches(test);
        MatchCollection matchCase = regCase.Matches(test);

        // Combine matchNoCase and matchCase into an IEnumerable
        IEnumerable<Match> combined = null;
        foreach (Match match in combined)
        {
            // Use the Index and (successful) Groups properties
            // of the match in another operation
        }

    In practice, the MatchCollections might contain thousands of results and be run frequently using long, dynamically created regexes, so I'd like to shy away from copying the results to arrays, etc. I am still learning LINQ and am fuzzy on how to go about combining these, or what the performance hit to an already sluggish process will be.
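
    A hedged LINQ sketch for the "combined" placeholder above (this assumes deferred, streaming enumeration is acceptable; nothing is copied into arrays): MatchCollection only implements the non-generic IEnumerable, so Cast<Match>() is needed before the LINQ operators can apply.

        // Sketch: lazily concatenate the two collections (System.Linq).
        IEnumerable<Match> combined = matchNoCase.Cast<Match>()
                                                 .Concat(matchCase.Cast<Match>());

        // Optional, only if a single position-ordered sequence is needed:
        // combined = combined.OrderBy(m => m.Index);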

    Read the article

  • Visual Studio Database Professional GDR R2 takes a long time to serialize DBMDL file

    - by Nicolas Webb
    The amount of time it takes to completely serialize the DBMDL (to finish "Your project will be available after 10000 operations are completed") is becoming a hindrance to productivity. I've done what I can to optimize disk activity (excluding my personal TEMP folder from the virus scanner, along with my local source repository). Short of getting an SSD, I'm not sure what else I can do along those lines. I believe it has something to do with how the project is organized. The finished DBMDL file is roughly 150MB. Others throughout our organization do not seem to have this issue. Anyone had to deal with this?

    Read the article

  • How does mod_cache work with "must-revalidate" and "max-age"?

    - by Dmitriy Sosunov
    Quick question before I explain my flow: can mod_cache perform revalidation with If-None-Match only when max-age has expired, when it is configured in reverse proxy mode? My goal is to reduce the number of revalidation requests to the origin server. For instance: the first request goes to the origin server, and mod_cache saves the response into the cache according to the Cache-Control: max-age header. Only when max-age has expired should mod_cache revalidate with If-None-Match. Currently, mod_cache revalidates on every request, regardless of whether max-age is defined. My configuration is for Apache 2.4.3 (Windows); on Linux I see the same behavior that I show below.

        ServerName proxy.lo
        ProxyRequests Off
        ProxyPreserveHost Off
        Header set Vary "Accept, Content-Type, Content-Encoding, Accept-Language"
        RequestHeader set X-Forwarded-Proto "http"
        # modify header for user agents
        Header set Cache-Control "private, no-cache, no-store, no-transform"
        CacheQuickHandler off
        CacheDefaultExpire 300
        # the origin server does not provide Last-Modified
        CacheIgnoreNoLastMod On
        CacheIgnoreCacheControl On
        # the origin server defines cache-control: private, no-store only for user agents,
        # therefore I would like to ignore those headers on the proxy server
        CacheStorePrivate On
        CacheStoreNoStore On
        CacheEnable disk /
        CacheRoot "C:/Apache.Cache"
        CacheDirLevels 5
        CacheDirLength 4
        CacheMinExpire 15
        CacheDetailHeader on
        CacheHeader on
        KeepAlive Off
        ProxyPass / http://origin.lo/
        ProxyPassReverse / http://origin.lo/

    I have also turned on debug log level to see how mod_cache handles the content for caching. I provide this to show that mod_proxy always decides the content isn't fresh. Why? max-age was provided (see below).

        [Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] cache_storage.c(624): [client 192.168.1.100:63741] AH00698: cache: Key for entity /testpage?(null) is http://proxy.lo/testpage?
        [Sun Nov 04 11:58:42.899890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(569): [client 192.168.1.100:63741] AH00709: Recalled cached URL info header http://proxy.lo/testpage?
        [Sun Nov 04 11:58:42.899890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(865): [client 192.168.1.100:63741] AH00720: Recalled headers for URL http://proxy.lo/testpage?
        [Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] cache_storage.c(320): [client 192.168.1.100:63741] AH00695: Cached response for /testpage isn't fresh. Adding/replacing conditional request headers.
        [Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(414): [client 192.168.1.100:63741] AH00757: Adding CACHE_SAVE filter for /testpage
        [Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(448): [client 192.168.1.100:63741] AH00759: Adding CACHE_REMOVE_URL filter for /testpage
        [Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] mod_proxy.c(1068): [client 192.168.1.100:63741] AH01143: Running scheme http handler (attempt 0)
        [Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(1976): AH00942: HTTP: has acquired connection for (origin.lo)
        [Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2029): [client 192.168.1.100:63741] AH00944: connecting http://origin.lo/testpage to origin.lo:80
        [Sun Nov 04 11:58:42.901890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2151): [client 192.168.1.100:63741] AH00947: connected /testpage to origin.lo:80
        [Sun Nov 04 11:58:42.901890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2554): AH00962: HTTP: connection complete to 192.168.1.100:80 (origin.lo)
        [Sun Nov 04 11:58:42.903890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(1991): AH00943: http: has released connection for (origin.lo)
        [Sun Nov 04 11:58:42.903890 2012] [headers:debug] [pid 6492:tid 1400] mod_headers.c(800): AH01502: headers: ap_headers_output_filter()
        [Sun Nov 04 11:58:42.903890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(1190): [client 192.168.1.100:63741] AH00769: cache: Caching url: /testpage
        [Sun Nov 04 11:58:42.903890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(1196): [client 192.168.1.100:63741] AH00770: cache: Removing CACHE_REMOVE_URL filter.
        [Sun Nov 04 11:58:42.904890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(1318): [client 192.168.1.100:63741] AH00737: commit_entity: Headers and body for URL http://proxy.lo/testpage? cached.

    The first request to the origin server, without mod_proxy, to http://origin.lo/:

        GET http://origin.lo/testpage HTTP/1.1
        Host: origin.lo
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
        Accept: application/json
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    The first response from the origin, without mod_proxy:

        HTTP/1.1 200 OK
        Cache-Control: must-revalidate, proxy-revalidate, max-age=30
        Content-Type: application/json; charset=utf-8
        ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
        Server: Microsoft-IIS/7.5
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Sun, 04 Nov 2012 10:11:01 GMT
        Content-Length: 1877

    So I assumed that revalidation should occur only 30 seconds after the successful response. Isn't that right? Let's check it :) Within 30 seconds, Google Chrome did not perform any request to the origin server to revalidate and returned the response from its local cache. When max-age expired, Google Chrome performed a request to revalidate:

        GET http://origin.lo/testpage HTTP/1.1
        Host: origin.lo
        Connection: keep-alive
        Cache-Control: max-age=0
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
        Accept: application/xml
        If-None-Match: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    and the response:

        HTTP/1.1 304 Not Modified
        Cache-Control: must-revalidate, proxy-revalidate, max-age=30
        ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
        Server: Microsoft-IIS/7.5
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Sun, 04 Nov 2012 10:16:20 GMT

    As you can see, everything works as expected: the user agent revalidates only when max-age has expired. Now let's perform the same flow through mod_proxy (see configuration above). The first request:

        GET http://proxy.lo/testpage HTTP/1.1
        Host: proxy.lo
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
        Accept: application/json
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    and the response was:

        HTTP/1.1 200 OK
        Date: Sun, 04 Nov 2012 10:23:36 GMT
        Server: Apache
        Cache-Control: private, no-cache, no-store, no-transform
        Content-Type: application/json; charset=utf-8
        ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
        Content-Length: 1932
        Vary: Accept,Content-Type,Content-Encoding,Accept-Language
        X-Cache: MISS from proxy.lo
        X-Cache-Detail: "cache miss: attempting entity save" from proxy.lo
        Connection: close

    OK, let's look at the disk cache and see how the request and response were stored (binary data cut):

        http://proxy.lo/testpage?
        Cache-Control: private, no-cache, no-store, no-transform
        Content-Type: application/json; charset=utf-8
        ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
        Date: Sun, 04 Nov 2012 10:27:15 GMT
        Content-Length: 1932
        Vary: Accept, Content-Type, Content-Encoding, Accept-Language
        Host: proxy.lo
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
        Accept: application/json
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        X-Forwarded-Proto: http
        Cache-Control: max-age=300, must-revalidate
        X-Forwarded-For: 192.168.1.100
        X-Forwarded-Host: proxy.lo
        X-Forwarded-Server: origin.lo

    OK, what do we see? We see that the first request was performed with max-age=300 and must-revalidate. Looks good to me, so let's perform the next call:

        GET http://proxy.lo/testpage HTTP/1.1
        Host: proxy.lo
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
        Accept: application/json
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    and the second response from mod_proxy:

        HTTP/1.1 200 OK
        Date: Sun, 04 Nov 2012 10:31:58 GMT
        Server: Apache
        Cache-Control: private, no-cache, no-store, no-transform
        ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
        Content-Length: 1932
        Vary: Accept,Content-Type,Content-Encoding,Accept-Language
        X-Cache: REVALIDATE from proxy.lo
        X-Cache-Detail: "conditional cache hit: entity refreshed" from proxy.lo
        Connection: close
        Content-Type: application/json; charset=utf-8

    So, my question is: why does mod_proxy perform revalidation on each request, regardless of max-age being defined? N.B. Apache 2.4.3. Thanks, I would be grateful for any help.

    Read the article
