Search Results

Search found 3752 results on 151 pages for 'offline caching'.

Page 4 of 151

  • Caching Profiles web.config vs IIS

    - by Lieven Cardoen
    What is the difference between configuring a caching profile in Web.config and configuring it in IIS? If you have this in Web.config:

        <caching>
          <outputCache enableOutputCache="true" />
          <outputCacheSettings>
            <outputCacheProfiles>
              <add duration="14800" enabled="true" varyByParam="*" name="AssetCacheProfile" />
            </outputCacheProfiles>
          </outputCacheSettings>
        </caching>

    and nothing configured in IIS under Output Caching, will it work? And what changes if you add all the extensions I use to Output Caching in IIS? It's an .aspx page, RetrieveBlob.aspx, that uses this caching profile:

        <%@ OutputCache CacheProfile="AssetCacheProfile" %>
        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="RetrieveBlob.aspx.cs" Inherits="RetrieveBlob" %>
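    For comparison, the IIS-side counterpart of such a profile lives under system.webServer rather than system.web. A minimal sketch (the extension is illustrative; 04:06:40 is the question's 14800 seconds expressed as a time span):

        <system.webServer>
          <caching enabled="true">
            <profiles>
              <add extension=".aspx" policy="CacheForTimePeriod"
                   duration="04:06:40" varyByQueryString="*" />
            </profiles>
          </caching>
        </system.webServer>

    The system.web profile is applied by ASP.NET's output-cache module when a page declares CacheProfile="AssetCacheProfile"; the system.webServer profile is applied by IIS itself, per file extension, so the two are configured and matched independently.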

    Read the article

  • WCF Caching Solution - Need Advice

    - by Brandon
    The company I work for is looking to implement a caching solution. We have several WCF web services hosted, and we need to cache certain values that can be persisted and fetched regardless of a client's session to a service. I am looking at the following technologies:

    - Caching Application Block 4.1
    - WCF TCP service using HttpRuntime caching
    - Memcached Win32 and client
    - Microsoft AppFabric Caching Beta 2

    Our test server is Windows Server 2003 with IIS 6, but our production server is Windows Server 2008, so any of the above options would work (except for AppFabric Caching on our test server). Does anyone have any experience with any of these? This caching solution will not be used to store a lot of data, but it will need to be read frequently and fast. Thanks in advance.
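    For reference, a minimal sketch of the "WCF TCP service using HttpRuntime caching" option (the ServiceCache class and GetOrAdd helper are hypothetical names, not part of any listed product):

        using System;
        using System.Web;
        using System.Web.Caching;

        public static class ServiceCache
        {
            // Returns the cached value for key, or loads and caches it.
            // HttpRuntime.Cache is per-application, so entries persist
            // across client sessions, as the question requires.
            public static T GetOrAdd<T>(string key, Func<T> loader, TimeSpan ttl)
                where T : class
            {
                var cached = HttpRuntime.Cache[key] as T;
                if (cached != null)
                    return cached;

                T value = loader();
                HttpRuntime.Cache.Insert(key, value, null,
                    DateTime.UtcNow.Add(ttl), Cache.NoSlidingExpiration);
                return value;
            }
        }

    A service operation would then call something like ServiceCache.GetOrAdd("rates", LoadRates, TimeSpan.FromMinutes(10)) instead of hitting the database on every request.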

    Read the article

  • "Always available offline" option missing on one network drive in Windows 7

    - by Rynardt
    My network setup at home has two network storage devices: one holds media content on a Popcorn Hour A-110, and the other is a D-Link DNS-320 in RAID 1 configuration for business files. When I access these network drives and right-click a folder, an "Always available offline" entry appears in the context menu for the A-110 device, but not for the D-Link. I have tested this in both Windows 7 32-bit and Windows Vista 64-bit. In both instances the "Always available offline" option is only available for the A-110 storage device, and not for the D-Link. How do I get this option for the D-Link? Any advice or ideas are welcome.

    Read the article

  • Offline productivity

    - by Frank Meulenaar
    On some days I'm commuting 2 hours (one way) on the train. I don't have any mobile internet, nor is there always WiFi service on the train. For security reasons I can't do any work on the train, so I'm trying to work on my geek time. I'm looking for general solutions for how to do this (I'm on Firefox/Windows, but I don't think it matters). Email works perfectly with Gmail offline: it syncs directly when online and remembers complicated stuff. So far I have used the ScrapBook plugin to store a website. It works well, but I have to download my favourite news page again every day - I want it to sync as soon as possible. It would be even better if I could click a page on my desktop and my laptop would sync as soon as it has the chance (edit: maybe the autosave plugin for ScrapBook can do this). Similarly, I use the DownloadHelper plugin to download YouTube videos, but I'd like something that automatically downloads videos from a given channel. Any tips are welcome. So far my early morning schedule is: wake up, power on laptop, make coffee, power off laptop and leave within 10 minutes (enough time for Gmail to sync), but I can imagine a system where my laptop stays on during the night (or boots before I wake (and makes me coffee :])).

    Read the article

  • Taking web sites offline for demonstration

    While working in software development in general, and in web development for a couple of customers, it is quite common that you need to provide a test bed where the client can get an image - or better said, a feeling - for the visions and ideas you are talking about. Usually here at IOS Indian Ocean Software Ltd. we set up a demo web site on one of our staging servers and provide credentials to the customer to access and review our progress and work ad hoc. This gives us the highest flexibility on both sides, as the test bed is simply online and available 24/7. We can update the structure, the UI and the data at any time, and the client is able to view it whenever it suits her/him best.

    Limited or lack of online connectivity

    But what is going to happen when your client is not able to be online - no matter for what reason? Here are some of the more obvious ones:

    - No internet connection (permanently or temporarily)
    - Expensive connection, e.g. a mobile data package, a stay at a hotel, etc.
    - Presentation devices at an exhibition, e.g. tablets or iPads
    - Being abroad for a certain time, and only occasionally online
    - No network coverage, especially on mobile
    - Bad infrastructure, as in some Third World countries
    - Providing a catalogue on CD or USB pen drive

    Anyway, it doesn't really matter why. We should be able to provide a solution for the circumstances of our customers.

    Presentation during an exhibition

    Recently, we had the following request from a customer: "Is it possible to let us have a desktop version of ResortWork.co.uk that we can use for demo purposes at the forthcoming Ski Shows? It would allow us to let stand visitors browse the sites on an iPad to view jobs and training directory course listings." Yes, sure, we can do that. You might wonder why they don't simply use 3G-enabled iPads for that purpose. As stated above, there can be several reasons - low coverage, expensive data packages, etc. Anyway, it is not a question of how to circumvent the request but of delivering a solution for it.

    Possible solutions... or not?

    We have built offline websites before, and have even established complete mirrors of one or two web sites on our systems. There are actually several ways to handle this kind of request, and the choice mainly depends on the system or device the offline site should be available on. Here it is clearly stated that we have to address this on an Apple iPad - well, actually, I think they'd like to use multiple devices during their exhibitions. The following is an overview of possible solutions depending on the technology or device in use:

    - Replication of source files and database: The web site in question runs on ASP.NET, IIS and SQL Server. If a laptop or slate runs a Windows OS, the easiest way would be to take a snapshot of the source files and database and transfer them as a local installation to those Windows machines. This approach would be fully operational on the local machine.
    - Saving pages for offline usage: A quite tedious job, but still practicable for small web sites.
    - Tool-based approach to 'harvest' the web site: There are quite some tools in the wild that can handle this job, namely wget, HTTrack, web copier, etc.
    - Screenshots bundled as a PDF document: Not really... ;-)
    - Creating a screencast or video: Simply navigate through your website and record your desktop session. Actually, we use this kind of approach to track down difficult problems, in order to see and understand exactly what the user was doing to cause an error.

    Of course, this list isn't complete and I'd love to get more of your ideas in the comments section below the article.

    Preparations for offline browsing

    The original website is dynamic and data-driven by ASP.NET (screenshot omitted). As we have to put the result onto iPads, we chose the tool-based approach to 'download' the whole web site for offline usage. Again, depending on the complexity of your web site, you may have to check which of the applications produces the best results for you. My usual choice is wget, but in this case we ran into problems related to the rewriting of hyperlinks. As a consequence, we opted for HTTrack. HTTrack comes in different flavours: as a console application, but also as a GUI (WinHTTrack on Windows) or a web client (WebHTTrack on Linux/Unix/BSD). Here's a brief description taken from the original website about HTTrack:

    "HTTrack is a free (GPL, libre/free software) and easy-to-use offline browser utility. It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the 'mirrored' website in your browser, and you can browse the site from link to link, as if you were viewing it online."

    There is extensive documentation for all options and switches online; the general recommendation is to go through the HTTrack Users Guide by Fred Cohen. It covers all the initial steps you need to get up and running. Be aware that it will take quite some time to get all the necessary resources down to your machine. Actually, for our customer we ran the tool directly on their web server to avoid unnecessary traffic and bandwidth. After a couple of runs and some additional fine-tuning - explicit inclusion or exclusion of various externally linked web sites - we finally had a more or less complete offline version available.

    A very handy feature of HTTrack is the error/warning log it produces after completing the download. It contains detailed information about errors that appeared on the pages and in the links within the pages that were processed:

        Error: "Bad Request" (400) at link www.resortwork.co.uk/job-details_Ski_hire:tech_or_mgr_or_driver_37854.aspx (from www.resortwork.co.uk/Jobs_A_to_Z.aspx)
        Error: "Not Found" (404) at link www.247recruit.net/images/applynow.png (from www.247recruit.net/css/global.css)
        Error: "Not Found" (404) at link www.247recruit.net/activate.html (from www.247recruit.net/247recruit_tefl_jobs_network.html)

    In our situation, we took the records of HTTP 400/404 errors and passed them to the web development department. Improvements are to be expected soon. ;-)

    Quality assurance on the full-featured desktop

    Unfortunately, the generated output of HTTrack was still incomplete, but luckily only images were missing. Being directly on the web server, we simply copied the missing images from the original source folder into our offline version. After that, we created an archive and transferred the file securely to our local workspace for further review and checks. From that point on, it wasn't necessary to get any more files from the original web server, and we could focus completely on browsing and navigating through the offline version to isolate visual differences and functional problems.

    As said, the original web site runs on ASP.NET Web Forms and uses postback calls for interactions like search and pagination, and partly for navigation; this is the main area for improving the offline experience. Of course, as with standard web development, it is advisable to test with various browsers, and strangely we discovered that the offline version looked pretty good in Firefox, Chrome and Safari, but not in Internet Explorer. A quick look at the HTML source shed some light on this: there are conditional CSS inclusions based on the user agent. HTTrack does not identify itself as Internet Explorer, so we didn't have the necessary overrides for this browser. Not problematic in our case after all, but you might have to pay attention to this and fetch the IE-specific files explicitly. And while looking at the source code, we also found out that HTTrack actually modifies the generated HTML output: on several occasions we discovered that <div> elements had been converted into <table> constructs for no obvious reason, even in nested structures.

    Search 'e'nd destroy - sed (or Notepad++) to the rescue

    During our intensive root cause analysis of a couple of HTML/CSS problems that needed some extra attention, it was very helpful to be familiar with an editor that allows search and replace over multiple files, like sed - the stream editor for filtering and transforming text on Linux - or my personal favourite, Notepad++ on Windows. This allowed us to quickly fix a lot of anchors with onclick attributes and JavaScript code that still addressed the ASP.NET files instead of their generated HTML counterparts, like so:

        grep -lr -e '\.aspx' * | xargs sed -i -e 's/\.aspx/.html?/g'

    The additional question mark after the HTML extension helps to separate the query string from the actual target, and solved all our missing hyperlinks very quickly. The same can be done in Notepad++ on Windows, too: just use the 'Replace in Files' feature and you are settled, especially in combination with regular expressions (regex).

    Landscape of browsers

    Okay, after several runs of HTML/CSS code analysis, and searching and replacing some strings in a pool of more than 4,000 files, we finally had a very good match of an offline browsing experience in Firefox and Chrome on Linux. Next, we transferred that modified set of files to a Windows 8 machine for review in Firefox, Chrome and Internet Explorer 7 to 10, and to a Mac mini running Mac OS X 10.7 to check the output in Safari and, again, in Chrome. Apart from IE, for the reasons already mentioned above, the results were identical. And last but not least, it was time to check the web site on tablets. Please continue with the follow-up articles: Taking web sites offline for demonstration on Galaxy Tablet, and Taking web sites offline for demonstration on iPad.
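    The article doesn't show the exact HTTrack invocation; a hedged sketch of a comparable console run (the output directory and filter patterns are assumptions, not the author's actual command) would be:

        httrack "http://www.resortwork.co.uk/" -O ./resortwork-offline \
                "+*.resortwork.co.uk/*" "+*.247recruit.net/*" -v

    Here -O sets the local mirror directory, the +pattern arguments explicitly include the linked external site that shows up in the error log, and -v prints progress; the fine-tuning of inclusions and exclusions described above happens by adding or removing such filters between runs.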

    Read the article

  • How to distribute an offline cube for excel

    - by Mike M
    I have the following scenario: a cube created in SSAS 2008. I can connect to this cube via Excel. I can create an offline cube file. I can connect to this offline cube file. Now, say I want to email this Excel file along with the cube file so that another user can view it. I run into the problem that the connection path to the offline cube is hard-coded into the Excel file. It's the same problem this person had: http://stackoverflow.com/questions/1253950/opening-offline-cube-from-another-machine. Their solution was to just make sure the other user saved the cube in the same directory structure. I don't love that solution. I also came across this idea: http://www.pcreview.co.uk/forums/thread-948974.php. I tried that, it errored out, but I am not an Excel VBA programmer and really have no idea if I even put the code in the right place. So anyway, does anyone out there have any ideas about how to do this? If the VBA solution is the best, could someone give me some tips on where to actually put that code?
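    On the "where to put the code" question: macros that should run when the file opens belong in the ThisWorkbook module (in the VBA editor, double-click ThisWorkbook under the workbook's project). A hedged sketch of such a macro (the cube file name and provider string are assumptions; the real connection string should be copied from the workbook's existing connection):

        Private Sub Workbook_Open()
            ' Repoint any offline-cube pivot cache at a .cub file that
            ' travels in the same folder as this workbook.
            Dim pc As PivotCache
            For Each pc In Me.PivotCaches
                If InStr(1, pc.Connection, ".cub", vbTextCompare) > 0 Then
                    pc.Connection = "OLEDB;Provider=MSOLAP;Data Source=" & _
                                    Me.Path & "\MyCube.cub"
                End If
            Next pc
        End Sub

    The idea is that the recipient only has to keep the .cub file next to the workbook, in any directory, instead of recreating your exact directory structure.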

    Read the article

  • Extending ASP.NET Output Caching

    One of the most sure-fire ways to improve a web application's performance is to employ caching. Caching takes some expensive operation and stores its results in a quickly accessible location. Since its inception, ASP.NET has offered two flavors of caching:

    - Output Caching - caches the entire rendered markup of an ASP.NET page or User Control (http://www.asp101.com/lessons/usercontrols.asp) for a specified duration.
    - Data Caching - an API for caching objects. Using the data cache you can write code to add, remove, and retrieve items from the cache.

    Until recently, the underlying functionality of these two caching mechanisms was fixed - both cached data...
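    The extensibility point the article is leading up to is ASP.NET 4's OutputCacheProvider base class. A minimal in-memory sketch (a real provider would write to disk, a database, or a distributed cache; the class name is illustrative):

        using System;
        using System.Collections.Concurrent;
        using System.Web.Caching;

        public class InMemoryOutputCacheProvider : OutputCacheProvider
        {
            private class Entry { public object Value; public DateTime Expires; }

            private readonly ConcurrentDictionary<string, Entry> _store =
                new ConcurrentDictionary<string, Entry>();

            public override object Get(string key)
            {
                Entry e;
                if (_store.TryGetValue(key, out e) && e.Expires > DateTime.UtcNow)
                    return e.Value;
                _store.TryRemove(key, out e);   // expired or absent
                return null;
            }

            public override object Add(string key, object entry, DateTime utcExpiry)
            {
                var existing = Get(key);
                if (existing != null)
                    return existing;            // Add must not overwrite
                Set(key, entry, utcExpiry);
                return entry;
            }

            public override void Set(string key, object entry, DateTime utcExpiry)
            {
                _store[key] = new Entry { Value = entry, Expires = utcExpiry };
            }

            public override void Remove(string key)
            {
                Entry e;
                _store.TryRemove(key, out e);
            }
        }

    The provider is then registered under <system.web><caching><outputCache defaultProvider="..."> in web.config.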

    Read the article

  • Client side caching in GWT

    - by Silfverstrom
    We have a GWT client which receives quite a lot of data from our servers. Logically, I want to cache the data on the client side, sparing the server from unnecessary requests. As of today I have left it up to my models to handle the caching of data, which doesn't scale very well. It has also become a problem that different developers in our team develop their own "caching" functionality, which floods the project with duplication. I'm thinking about how one could implement a "single point of entry" that handles all the caching, leaving the models clueless about how the caching is handled. Does anyone have any experience with client-side caching in GWT? Is there a standard approach that can be implemented?
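    For illustration, a minimal sketch of such a single point of entry wrapped around GWT's AsyncCallback (the ClientCache and Loader names are hypothetical):

        import java.util.HashMap;
        import java.util.Map;
        import com.google.gwt.user.client.rpc.AsyncCallback;

        public class ClientCache {

            /** Issues the real RPC when the cache has no entry. */
            public interface Loader<T> {
                void load(AsyncCallback<T> callback);
            }

            private final Map<String, Object> entries = new HashMap<String, Object>();

            @SuppressWarnings("unchecked")
            public <T> void get(final String key, Loader<T> loader,
                                final AsyncCallback<T> callback) {
                if (entries.containsKey(key)) {
                    callback.onSuccess((T) entries.get(key));  // cache hit
                    return;
                }
                loader.load(new AsyncCallback<T>() {           // cache miss
                    @Override
                    public void onSuccess(T result) {
                        entries.put(key, result);
                        callback.onSuccess(result);
                    }
                    @Override
                    public void onFailure(Throwable caught) {
                        callback.onFailure(caught);
                    }
                });
            }
        }

    Models then go through ClientCache.get(...) and stay clueless about how, or whether, a value was cached; eviction and invalidation policies can later be added in this one class.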

    Read the article

  • Alternative to distributed caching

    - by Chen
    Hi, there is a technical requirement to scale a new system easily. This new system consists of three tiers of applications (as batch processors). Each tier will contain at least 2 servers with the same application residing on each server. So, when one of the tiers reaches peak load, we can extend scalability easily by adding a new server with the same application to off-load some of the processing. The problem is that one or two of the three tiers require heavy caching (about 3 million records and increasing). I'm thinking of using a distributed caching system to overcome this, but a distributed caching system will mean an additional point of failure, as applications now need to interact with an additional caching system for processing. I'm currently looking at NCache, but I'm wondering if there are alternatives, or any other comparable distributed caching system that may be similar or better than NCache and provides enterprise support too? Thanks, Chen

    Read the article

  • Rails 3 functional optionally testing caching

    - by Stephan
    Generally, I want my functional tests to not perform action caching. Rails seems to be on my side, defaulting to config.action_controller.perform_caching = false in environments/test.rb. This leads to normal functional tests not testing the caching. So how do I test caching in Rails 3? The solutions proposed in this thread seem rather hacky or tailored towards Rails 2: How to enable page caching in a functional test in rails? I want to do something like:

        test "caching of index method" do
          with_caching do
            get :index
            assert_template 'index'
            get :index
            assert_template ''
          end
        end

    Maybe there is also a better way of testing that the cache was hit?
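    A sketch of what that with_caching helper could look like in test_helper.rb (assuming Rails 3; the helper name matches the wished-for usage above):

        def with_caching(on = true)
          old_perform = ActionController::Base.perform_caching
          old_store   = ActionController::Base.cache_store
          ActionController::Base.perform_caching = on
          # a fresh in-memory store so each test starts with a cold cache
          ActionController::Base.cache_store =
            ActiveSupport::Cache::MemoryStore.new
          yield
        ensure
          ActionController::Base.perform_caching = old_perform
          ActionController::Base.cache_store     = old_store
        end

    Restoring the old values in the ensure block keeps the caching behaviour from leaking into the other functional tests, which should keep running with perform_caching = false.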

    Read the article

  • VLC Caching levels

    - by Svish
    When I open the Preferences of VLC and go to Input & Codecs, there is a setting called Default Caching Level. I can choose between:

    - Custom
    - Lowest latency
    - Low latency
    - Normal
    - High latency
    - Higher latency

    I'm used to caching being set in seconds or something like that, so more seconds/a bigger buffer means less chance of buffer underrun while streaming. What is latency? What does it mean to set it lower or higher? In what cases should I go in which direction? If I'm struggling with buffer underruns, should I set it to lower or higher latency?

    Read the article

  • Implementing Model-level caching

    - by Byron
    I was posting some comments in a related question about MVC caching, and some questions about actual implementation came up. How does one implement a Model-level cache that works transparently, without the developer needing to cache manually, yet still remains efficient?

    "I would keep my caching responsibilities firmly within the model. It is none of the controller's or view's business where the model is getting data. All they care about is that when data is requested, data is provided - this is how the MVC paradigm is supposed to work." (Source: post by Jarrod)

    The reason I am skeptical is that caching should usually not be done unless there is a real need, and shouldn't be done for things like search results. So somehow the model itself has to know whether the SELECT statement being issued to it is worthy of being cached. Wouldn't the model have to be astronomically smart, and/or store statistics of what is most often queried over a long period of time, in order to make an accurate decision? And wouldn't the overhead of all this make the caching useless anyway?

    Also, how would you uniquely identify a query from another query (or more accurately, a result set from another result set)? What if you're using prepared statements, with only the parameters changing according to user input? Another poster said this: "I would suggest using the md5 hash of your query combined with a serialized version of your input arguments." This would require twice the number of serialization operations. I was under the impression that serialization was quite expensive, and for large inputs this might be even worse than just re-querying. And is the minuscule chance of collision worth worrying about?

    Conceptually, caching in the Model seems like a good idea to me, but it seems that in practice the developer should have direct control over caching and write it into the controller. Thoughts/ideas? Edit: I'm using PHP and MySQL if that helps to narrow your focus.
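    Since the question names PHP and MySQL, a minimal sketch of the md5(query + serialized arguments) idea (the function name is illustrative, and the cache here is a Memcached instance; any key-value store would do):

        <?php
        function cached_query(PDO $db, Memcached $cache,
                              $sql, array $params, $ttl = 300)
        {
            // one key per (statement, bound parameters) pair
            $key = 'q:' . md5($sql . '|' . serialize($params));

            $rows = $cache->get($key);
            if ($rows !== false) {
                return $rows;                       // cache hit
            }

            $stmt = $db->prepare($sql);             // prepared statement
            $stmt->execute($params);
            $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

            $cache->set($key, $rows, $ttl);         // store on miss
            return $rows;
        }

    Serializing $params is usually cheap because user inputs are small; the backend's own serialization of the stored rows is the larger cost the question worries about. Whether a given model method routes its reads through cached_query or straight to PDO is exactly the control-point question raised above.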

    Read the article

  • Rails' page caching vs. HTTP reverse proxy caches

    - by John Topley
    I've been catching up with the Scaling Rails screencasts. Episode 11, which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid etc.), recommends only considering a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc., but that's not relevant to this question). What I can't quite understand is how an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume that I'm talking about a single host here. This is my understanding of how both techniques work (maybe I'm wrong):

    - With page caching, the Rails process is hit initially and then generates a static HTML file that is served directly by the web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired, then Rails is hit again and the static file is regenerated with the updated content, ready for the next request.
    - With an HTTP reverse proxy cache, the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag, Last-Modified etc. If the content is fresh, then Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale, then Rails serves the updated content to the proxy, which caches it and then serves it to the browser.

    If my understanding is correct, then doesn't page caching result in fewer hits to the Rails process? There isn't all that back and forth to determine whether the content is stale, meaning better performance than reverse proxy caching. Why might you use both techniques in conjunction?
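    For concreteness, the revalidation half of that flow is what Rails' fresh_when expresses in a controller (a sketch in Rails 3 syntax; the Post model is hypothetical):

        def show
          @post = Post.find(params[:id])
          # sends ETag/Last-Modified; replies 304 Not Modified when the
          # proxy's copy is still fresh, skipping the render entirely
          fresh_when :etag => @post, :last_modified => @post.updated_at,
                     :public => true
        end

    Even on a 304, Rails runs the action up to that check, which is the extra "back and forth" the question is weighing against page caching's zero Rails involvement.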

    Read the article

  • Max size iPad / iPhone Offline Application Cache

    - by Rien
    Does anyone know the maximum size of Safari's 'Offline Application Cache' on the iPad and iPhone? It looks like it's 5 MB. Is there any way to enlarge this? Offline application cache docs: http://developer.apple.com/safari/library/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/OfflineApplicationCache/OfflineApplicationCache.html

    Read the article

  • Exporting SQL Server Databases for offline use

    - by WedTM
    I have a desktop application (C# .NET 3.5) that uses SQL Server for its database. I have had a request from the client, however, to make it possible to export the database as it stands and use it on a laptop without connectivity. They understand that updates to the parent server will not be reflected in these offline clients. Is there a way I can just save the DataSets to a binary form, write them to disk and send those files to the offline clients?
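    A minimal sketch of that round trip with the built-in DataSet serialization (the file name is arbitrary; WriteSchema embeds the table definitions so the offline client needs no database at all):

        using System.Data;

        public static class Snapshot
        {
            public static void Export(DataSet ds, string path)
            {
                // data plus schema in one self-describing file
                ds.WriteXml(path, XmlWriteMode.WriteSchema);
            }

            public static DataSet Import(string path)
            {
                var ds = new DataSet();
                ds.ReadXml(path, XmlReadMode.ReadSchema);
                return ds;
            }
        }

    XML is not binary, but it is the simplest built-in persistence; if size matters, setting DataSet.RemotingFormat to SerializationFormat.Binary and running the DataSet through a binary serializer is the usual compact alternative.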

    Read the article

  • HTML5 offline cache google font api

    - by Bala Clark
    I'm trying to create an offline HTML5 test application, and am playing with the new Google Fonts API at the same time. Does anyone have any ideas on how to cache the remote fonts? Simply putting the API call in the cache manifest doesn't work; I assume this is because the API actually loads other files (ttf, eot, etc). Any ideas whether using the Font API offline is possible? For reference, this is the call I am making: http://fonts.googleapis.com/css?family=IM+Fell+English|Molengo|Reenie+Beanie
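    A cache manifest only stores the exact URLs it lists, and the CSS returned by the fonts API references font files at URLs that vary per browser, so the usual workaround is to download the font files once, self-host them, and list those copies (the paths here are hypothetical):

        CACHE MANIFEST
        # v1 - self-hosted copies of the three fonts
        css/fonts.css
        fonts/IMFellEnglish.ttf
        fonts/Molengo.ttf
        fonts/ReenieBeanie.ttf

    with css/fonts.css declaring the matching @font-face rules instead of the fonts.googleapis.com call.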

    Read the article

  • Using "offline files sync" to sync with a local resource [closed]

    - by Kije
    Possible Duplicate: Which is the best application to sync two folders?

    I have been trying to get my machine (XP Pro SP3) to sync files to a local USB drive in the same way as I can with mapped network drives. I particularly want the sync to happen automatically when the USB drive is connected, in the same way that offline files sync when the network drive comes online. I can get a folder on the USB drive mounted as a network drive, but cannot get XP to offer the offline files functionality. I have tried MS's SyncToy; it works as advertised, and will do scheduled and ad hoc syncs, but does not seem to offer automatic syncs. I suspect I could do this by plugging my USB drive into another machine on my network, but that seems heavy-handed. All insight appreciated. If you know this cannot be done, please say so.

    Read the article

  • offline undeploy war from glassfish

    - by andrej
    I've got Glassfish 2.1.1 here, and I need to undeploy a WAR application. The problem is that the application is corrupted and prevents the Glassfish server from starting, so I need to undeploy it while the server is down. asadmin undeploy needs a connection to a running server... So the question is: how do I undeploy from an offline Glassfish server?
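    A common workaround is to remove the application by hand from the domain directory and from domain.xml while the server is down. A sketch, assuming the default domain1 layout of Glassfish v2 and a hypothetical application name myapp (back up domain.xml first):

        # remove the exploded application from the domain
        rm -rf $GLASSFISH_HOME/domains/domain1/applications/j2ee-modules/myapp

        # then edit the domain configuration and delete the elements
        # referring to the application, typically
        #   <web-module ... name="myapp" ...>
        #   <application-ref ref="myapp" ...>
        vi $GLASSFISH_HOME/domains/domain1/config/domain.xml

    With both the module directory and its registration gone, the domain should start cleanly, and a normal asadmin undeploy is no longer necessary.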

    Read the article

  • Offline editable and mergeable wiki

    - by Zed
    I'm looking for wiki software which allows me to:

    - store wiki pages on a server (basic wiki functionality)
    - store a local copy on my computer
    - edit my local content, and then merge my changes back to the server (version control for conflicting updates)

    Is there a wiki, or a set of software, which allows me to do this? The primary platform is Linux, but I wouldn't mind if it also ran on Windows...

    Read the article

  • Offline translator for the iPhone?

    - by jonathanconway
    Is there a translator app for the iPhone that can be downloaded and run without an internet connection? I'd especially like an English-Vietnamese one, since it's hard to get WiFi in the village area where I'm staying. I would happily pay up to $15 for this. If one hasn't been made, maybe I should develop it myself, since I could possibly make some good money off it!

    Read the article

  • set all apache sites offline with temporary static cached original pages

    - by rubo77
    I would like to take all virtual hosts on my server down for maintenance for some time. The temporary page should contain something like: "Sorry, the page www.xxx.com is down for maintenance. You can see the cached version here: ..." Then the trick: the user should see the cached version of the requested page, from a cache like Google's cache or similar, for as long as the server is down. This would show the correct content for pages that are static anyway, and give the visitor the needed content in many cases, while I can shut down MySQL and other services that would usually be needed to render those pages. How can I set up a global page on all virtual hosts that passes the originally requested URL through to PHP?
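    One hedged sketch of that setup (assuming mod_rewrite is enabled, a maintenance.php reachable from every document root, and Google's publicly known cache URL format, which may change):

        # in a conf file included by every vhost
        RewriteEngine On
        RewriteCond %{REQUEST_URI} !^/maintenance\.php
        RewriteRule ^ /maintenance.php [L]

        # maintenance.php
        <?php
        $url   = 'http://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
        $cache = 'http://webcache.googleusercontent.com/search?q=cache:'
               . urlencode($url);
        echo 'Sorry, the page ' . htmlspecialchars($url)
           . ' is down for maintenance. You can see the cached version '
           . '<a href="' . htmlspecialchars($cache) . '">here</a>.';

    The internal rewrite keeps the original request URI intact, so PHP sees it in $_SERVER['REQUEST_URI'] and can build the per-page cache link; no database or application service is needed to serve this.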

    Read the article

  • Installing php on Centos 6 offline

    - by adstan
    I have just installed CentOS 6 on an internal network with no Internet access. Not being very familiar with Linux/CentOS, I managed to get the httpd service started, and I believe PHP wasn't installed. As the machine is not connected to the Internet, it will not be able to use yum directly. How do I go about installing PHP and its dependencies? Or is the only way to download the source code and build it on my own?
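    A sketch of the usual route (assuming access to a second CentOS 6 machine that is online; the package names are illustrative): fetch the RPMs plus dependencies there, carry them over, and install locally.

        # on a connected CentOS 6 machine
        yum install yum-plugin-downloadonly
        yum install --downloadonly --downloaddir=/tmp/php-rpms php php-cli php-common

        # copy /tmp/php-rpms to the offline server (USB stick, scp over
        # the internal network, etc.), then on the offline server:
        rpm -ivh /tmp/php-rpms/*.rpm

    The install DVD/ISO is the other offline-friendly source: it can be mounted and configured as a local yum repository, which avoids chasing dependencies by hand.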

    Read the article
