Search Results

Search found 11531 results on 462 pages for 'cpu cache'.


  • Upgrading MacBook Pro

    - by moray95
    I'm using a Late 2011 13" MacBook Pro with an Intel i5 @ 2.4 GHz and 4 GB of 1333 MHz RAM. The computer has started to show its age. I was going to upgrade the RAM, but once Mavericks came out the RAM problem went away; now it has just been getting slower and slower. So I am thinking of upgrading to at least 8 GB of RAM, and maybe the CPU as well. I have two questions about that. As I have 1333 MHz RAM installed by default, the motherboard presumably does not support 1600 MHz RAM. But can I use 1600 MHz modules, and if I can, will they make any difference? Also, is it possible to upgrade the CPU of my computer? If yes, how can I find a CPU compatible with the other components?

    Read the article

  • SSD Performance for PHP?

    - by Andrew Fashion
    My programmer just built an application in PHP using the Doctrine ORM (it will be a high-traffic social networking website), and it's very heavy on PHP/Apache and CPU. The queries are wonderfully fast and MySQL is barely using any CPU; it's just Apache. I was curious whether an SSD would help speed up PHP/Apache, because I know the bottleneck is PHP reading multiple files, class files, and loading up a bunch of data. So common sense makes me think that if PHP is reading multiple PHP files, an SSD would only help as far as reads/writes go. I was thinking of putting the PHP application on a high-performance SSD, but for user image uploads I would just continue using a 15k SAS drive. Are there any performance issues with using an SSD in this kind of situation? And would it prove to help speed up PHP/Apache and help the CPU problem?
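    A quick way to test whether file reads are actually the bottleneck before buying hardware, assuming a Linux host and a representative entry script (the path is illustrative):

        # Summarize file-related syscalls for one request; heavy open/stat/read
        # counts point at include overhead rather than CPU-bound PHP
        strace -c -f -e trace=open,stat,read php /var/www/index.php > /dev/null

    If the counts are dominated by class-file opens, an opcode cache such as APC typically removes most of those reads entirely, which narrows what an SSD could still buy you.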

    Read the article

  • further troubleshooting on a p8z77 motherboard

    - by Journeyman Geek
    I just bought a brand new ASUS P8Z77 motherboard with an Intel 3770. It's currently not booting: the system powers up for half a second, then powers down, and the CPU error light shines. I've tried switching RAM between slots, switching PSUs, and updating the BIOS (which can be done sans processor or RAM on this model). Power: the 8-pin power connector is definitely in place correctly, and I've tried swapping PSUs between a new Seasonic M12 and a known-good cheapie PSU. RAM: the RAM is in the recommended slot for single-stick operation, and I've tried swapping between the two sticks of DDR3 I have. Processor: it's installed correctly as far as I can tell, with no obvious bent pins on the motherboard. At this point I'm guessing I'll need to RMA something. Are there any 'definitive' tests I can try, short of swapping the CPU and motherboard, that would tell me it's the CPU? Can I actually trust the error light on the motherboard?

    Read the article

  • how to obtain the relative path of a resource in a j2ee project

    - by Neeraj
    I have a Dynamic Web Project containing a flat (text) file. I have created a servlet in which I need to use this file. My code is as follows:

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            String resource = request.getParameter("json");
            if (resource != null && !resource.equals("")) {
                // use getResourceAsStream() to properly get the file
                InputStream is = getServletContext().getResourceAsStream("rateJSON");
                if (is != null) { // the resource exists
                    response.setContentType("application/json");
                    response.setHeader("Pragma", "No-cache");
                    response.setDateHeader("Expires", 0);
                    response.setHeader("Cache-Control", "no-cache");
                    StringWriter sw = new StringWriter();
                    for (int c = is.read(); c != -1; c = is.read()) {
                        sw.write(c);
                    }
                    PrintWriter out = response.getWriter();
                    out.print(sw.toString());
                    out.flush();
                }
            }
        }

    The problem is that the InputStream is comes back null. I'm not sure how to get the correct relative path. I'm using JBoss as the app server, and I have added the resource file to the WebContent directory of the Dynamic Web Project. As a different approach, I tried this:

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            ServletConfig config = getServletConfig();
            String contextName = config.getInitParameter("ApplicationName");
            System.out.println("context name" + contextName);
            String contextPath = config.getServletContext().getRealPath(contextName);
            System.out.println("context Path" + contextPath);
            // contextPath = contextPath.substring(0, contextPath.indexOf(contextName));
            contextPath += "\\rateJSON.txt";
            System.out.println(contextPath);
            String resource = request.getParameter("json");
            System.out.println("Hi there1" + resource);
            if (resource != null && !resource.equals("")) {
                System.out.println("Hi there");
                // InputStream is = getServletContext().getResourceAsStream(resource);
                InputStream is = getServletConfig().getServletContext()
                                     .getResourceAsStream(contextPath);
                if (is != null) { // the resource exists
                    System.out.println("Hi there2");
                    response.setContentType("application/json");
                    response.setHeader("Pragma", "No-cache");
                    response.setDateHeader("Expires", 0);
                    response.setHeader("Cache-Control", "no-cache");
                    StringWriter sw = new StringWriter();
                    for (int c = is.read(); c != -1; c = is.read()) {
                        sw.write(c);
                        System.out.println(c);
                    }
                    PrintWriter out = response.getWriter();
                    out.print(sw.toString());
                    System.out.println(sw.toString());
                    out.flush();
                }
            }
        }

    The value of contextPath is now:

        C:\JBOSS\jboss-5.0.1.GA\server\default\tmp\4p72206b-uo5r7k-g0vn9pof-1-g0vsh0o9-b7\Nationwide.war\WEB-INF\rateJSON

    but the rateJSON file is not at that location. It seems JBoss is not putting this file in the .war, or isn't deploying it. Could someone please help me?
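    For what it's worth, a minimal sketch of the likely fix, assuming the file sits in the web content root and really is named rateJSON.txt (the exact name is the asker's):

        // ServletContext.getResourceAsStream() takes a path relative to the
        // web app root, and it must begin with "/". A bare "rateJSON" with no
        // leading slash (or the wrong extension) returns null.
        InputStream is = getServletContext().getResourceAsStream("/rateJSON.txt");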

    Read the article

  • Mac OS X Lion (10.7.3) Virtual Machine?

    - by Ben Hooper
    I have been looking into this for a while and have attempted quite a few "solutions" (hackintosh boot images, universal unlockers, etc.) before I gave in and asked for help. I know this is extremely difficult to accomplish, especially with an AMD CPU, but it has been done, and it can't hurt to ask.
    Question: does anyone know of any way to actually get Mac OS X Lion (10.7.3) to boot in VMware Workstation 8.0.2? I know that Mac OS X is heavily dependent on hardware configuration, so I will post my PC's hardware below in case it helps. As far as I know it's only reliant on the CPU, but I will post it all just in case.
    PC hardware:
    - Motherboard: ASUS M4A77T
    - CPU: AMD Phenom II x4 955 Black Edition
    - Graphics card: Palit Sonic Platinum nVIDIA GeForce GTX 460
    - Memory: G-Skill [RipjawsX F3-12800CL9D-8GBXL] 8GB
    - PSU: Arctic Power 700(W)
    - Hard drive: SAMSUNG HD204UI 2TB
    Thanks in advance. :)

    Read the article

  • I have a problem with my PC: random reboots with a POST "hypertransport sync flood" error

    - by user29867
    I have a problem with my PC: random reboots with a POST "hypertransport sync flood" error. It happens completely at random; sometimes while I watch a movie, sometimes when I play a game, even when I just browse the internet. My specs are as follows:
    - MB: ASUS M4A78-E
    - CPU: Phenom II 940
    - GPU: Sapphire Radeon HD 4850 Vapor-X
    - RAM: OCZ DDR2 PC2-8500 kit, 2x2GB
    - PSU: CM Real Power M520W
    - Case: CM Centurion 590
    Temperatures of the MB, CPU and 4850 are in the normal range. The MB runs hotter, around 60-70°C under load; the Radeon is 60-65°C under load (playing a game for a couple of hours). The CPU does not pass 50-55°C. So I don't think it's a cooling problem; the CM case is pretty good and has lots of fans. I also tried with this memory: TWIN2X4096-6400C5DHX. Same problem.

    Read the article

  • Dell PowerEdge 2950 III running XenServer with 2 VMs gets sluggish after a week and needs a reboot?

    - by Joshua Rountree
    It has weird hangs and then random CPU spikes where a ton happens at once. While remoted into the VMs, I get an update all at once, then it hangs for another 20 seconds; when it lets go, I get a CPU spike. Basic specs for the HW node: 8 CPUs, 16 GB RAM, 1 TB HDD total, PERC 6/i, RAID 10. The VMs are barely used, but I have them spec'd at: VM1: 4 CPUs, 4 GB RAM; VM2: 4 CPUs, 6 GB RAM. The HW node currently reports total CPU usage of 11% and used memory at 63% of 16 GB. I'm new to this stuff, so I'm not sure what to look at; I just recently installed this and set it all up. Please advise if you can!

    Read the article

  • Configuring memcached for a particular scenario

    - by pradeepchhetri
    I have a web application that queries an opentsdb server (which in the backend uses an HBase cluster) for the datapoints of different metrics, and plots those metrics using the dygraph JavaScript graphing library. Since fetching all the datapoints of the past day from opentsdb for a single metric itself takes nearly 2 seconds, my application, which plots nearly 25 metrics, is becoming very slow. To reduce this latency, I am thinking of using the memcached module of php5 to cache all the queries. But I have a few questions regarding memcached:
    1. Is there any way to configure memcached to keep updating its cache in the background, by running some command-line queries at a set interval?
    2. Is there any way to configure memcached to always answer a query from the cache instead of updating the cache first? My application just plots datapoints for the past day, and missing some datapoints is not that critical.
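    On the first question, memcached itself never refreshes anything; the usual pattern is an out-of-band warmer (cron, for instance) that rewrites the keys on a schedule while the page only ever reads. A sketch using the php5 memcached extension; fetch_datapoints() and the metric names are hypothetical stand-ins for the real opentsdb query:

        <?php
        // warm_cache.php -- run from cron every few minutes
        $metrics = array('cpu.user', 'proc.loadavg.1min');   // hypothetical names

        // Hypothetical stand-in for the real opentsdb HTTP query
        function fetch_datapoints($metric) {
            return "datapoints for $metric";
        }

        $m = new Memcached();
        $m->addServer('127.0.0.1', 11211);
        foreach ($metrics as $metric) {
            $data = fetch_datapoints($metric);
            $m->set("tsdb:$metric", $data, 0);   // TTL 0: never expires, cron overwrites
        }

    The plotting page then only calls $m->get("tsdb:$metric") and never touches opentsdb inline, which also answers the second question: with no TTL-driven expiry there is no "update first" step to wait on.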

    Read the article

  • How can I tell if my live web-server is overloaded?

    - by Nick G
    We have a live webserver which doesn't seem to be performing all that well. It's a Dell PowerEdge machine, a few years old (dual core, 4GB) which is hosting about 20 low-traffic websites. However it doesn't seem to be as fast as it used to be. How can we determine the cause of this? If it's website traffic, I would be expecting high CPU but CPU usage is quite low and hovers around the 15-30% mark except for very brief periods. I'm wondering perhaps, if rather than CPU performance being a problem, perhaps it's disk thrashing due to the constant read/writes of all the small web files and database queries. It has 4x 7200 RPM SATA drives in RAID 5. So is there a way to check that it's not disk thrashing?
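    Assuming the box runs Linux, iostat from the sysstat package answers the thrashing question directly (on Windows, the PhysicalDisk counters in Performance Monitor play the same role):

        # Extended per-device stats every 5 seconds: sustained high %util and
        # await mean the disks, not the CPU, are the bottleneck
        iostat -x 5

        # Cross-check: the "b" (blocked) and "wa" (I/O wait) columns in vmstat
        vmstat 5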

    Read the article

  • How to combine wildcards and spaces (quotes) in an Windows command?

    - by Jan Fabry
    I want to remove directories of the following format: C:\Program Files\FogBugz\Plugins\cache\[email protected]_NN NN is a number, so I want to use a wildcard (this is part of a post-build step in Visual Studio). The problem is that I need to combine quotes around the path name (for the space in Program Files) with a wildcard to match the end of the path. I already found out that rd is the remove command that accepts wildcards, but where do I put the quotes? I have tried no ending quote (works for dir), ...example.com*", ...example.com"*, ...example.com_??", ...cache\"[email protected]*, ...cache"\[email protected]*, but none of them work. (How many commands to remove a file/directory are there in Windows anyway? And why do they all differ in capabilities?)
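    One workaround that sidesteps the quoting rules entirely: let for /d expand the wildcard and hand rd one fully quoted directory at a time. A sketch, keeping the asker's path as written (use %d instead of %%d outside a .bat file):

        for /d %%d in ("C:\Program Files\FogBugz\Plugins\cache\[email protected]_??") do rd /s /q "%%d"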

    Read the article

  • Limit a process's relative (not absolute) processor consumption in Linux

    - by BobBanana
    What is the standard way in Linux to enforce a system policy that limits the relative CPU use of a single process? That is, on a quad-core machine, I never want a process to use more than 2 CPUs at once, even if the process creates more threads. I do not want an absolute time limit, just a relative limit so that one task cannot dominate the machine. This is also different from renice, which allows a process to use all the resources but politely step aside if others need them too. ulimit is the usual resource-limiting tool, but it does not allow such CPU restrictions: it can limit the number of processes per user, or absolute CPU time, but not the maximum number of active threads of a single process. I've found a couple of user-level tools, like cpulimit, but no system-level tool or setting. Does such a standard resource controller exist in Linux (Red Hat Enterprise, if it matters)? If there is such a limit imposed, how would a user identify it?
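    The closest thing to a system-level control is CPU affinity: pin the process, threads and all, to a fixed set of cores. A sketch; taskset ships with util-linux, and the cg* commands assume the libcgroup package on RHEL:

        # Ad hoc: confine a new or an already-running process to CPUs 0 and 1
        taskset -c 0,1 ./heavy_task
        taskset -pc 0,1 12345              # 12345 is a placeholder PID

        # As reusable policy: a cpuset cgroup
        cgcreate -g cpuset:/twocores
        cgset -r cpuset.cpus=0-1 twocores
        cgset -r cpuset.mems=0 twocores    # cpusets also require a memory node
        cgexec -g cpuset:twocores ./heavy_task

    A user can then identify the restriction with taskset -p <pid> or by reading /proc/<pid>/status (the Cpus_allowed fields).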

    Read the article

  • High-End Gaming Desktop PC in France [closed]

    - by Lerikunus
    Hello, I was looking to buy an Alienware Area-51, as my desktop crashed some days ago, but the preliminary ship date is 03.01.11 and I need the new PC by Friday next week. Does someone know a(n) (online) store where build and shipping time is low enough that I can get it by next Friday? I am living in France (Nice). Or any place I can ask for this kind of advice? This is my desired configuration:
    - Overclocked Intel® Core™ i7 980X Extreme Six Core Processor (4.0GHz, 12MB Cache)
    - 12GB Triple Channel 1600MHz DDR3
    - Dual 2GB ATI Radeon™ HD 6950
    - 2TB RAID 1+0 (4x 1TB SATA-II, 7,200 RPM, 32MB Cache HDDs)
    - 2TB RAID 1 (2x 2TB SATA-II, 7,200 RPM, 32MB Cache HDDs)
    Thank you very much!

    Read the article

  • What is excessive swapping?

    - by amateur barista
    This post led me to ask that question:

        Cache contention
        On a large site, if you are using MyISAM, contention occurs in the database tables when the cache is forced to clear after a node or a comment is added. With tens of thousands of filter text snippets needing to be deleted, the table will be locked for a long period, and any access to it will be queued pending the purge of the data in it. The same is true for the page cache as well. This often causes a "site hang" for a minute or two. During that time new requests keep piling up, and if you do not have the MaxClients parameter in Apache set up correctly, the system can go into thrashing because of excessive swapping.
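    The MaxClients arithmetic behind that advice is simple: cap Apache so worst-case resident memory fits in physical RAM. A hedged example for the prefork MPM (Apache 2.2-era directive names; 2.4 calls it MaxRequestWorkers), assuming roughly 50 MB per child and 2 GB set aside for Apache; measure your own numbers:

        <IfModule mpm_prefork_module>
            # ~2048 MB budget / ~50 MB per child = ~40
            MaxClients          40
            ServerLimit         40
            MaxRequestsPerChild 1000   # recycle children so leaks cannot compound
        </IfModule>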

    Read the article

  • Override `drop` for a custom sequence

    - by Bruno Reis
    In short: in Clojure, is there a way to redefine a function from the standard sequence API (which is not defined on any interface like ISeq, IndexedSeq, etc.) for a custom sequence type I wrote?

    1. Huge data files

    I have big files in the following format:
    - a long (8 bytes) containing the number n of entries
    - n entries, each one composed of 3 longs (i.e., 24 bytes)

    2. Custom sequence

    I want to have a sequence over these entries. Since I cannot usually hold all the data in memory at once, and I want fast sequential access to it, I wrote a class similar to the following:

        (deftype DataSeq [id ^long cnt ^long i cached-seq]
          clojure.lang.IndexedSeq
          (index [_] i)
          (count [_] (- cnt i))
          (seq   [this] this)
          (first [_] (first cached-seq))
          (more  [this] (if-let [s (next this)] s '()))
          (next  [_] (if (not= (inc i) cnt)
                       (if (next cached-seq)
                         (DataSeq. id cnt (inc i) (next cached-seq))
                         (DataSeq. id cnt (inc i)
                                   (with-open [f (open-data-file id)]
                                     ;; open a memory-mapped byte array on the file
                                     ;; seek to the exact position to begin reading
                                     ;; decide on an optimal amount of data to read
                                     ;; eagerly read and return that amount of data
                                     ))))))

    The main idea is to read ahead a bunch of entries into a list and then consume from that list. Whenever the cache is completely consumed, if there are remaining entries, they are read from the file into a new cache list. Simple as that. To create an instance of such a sequence, I use a very simple function like:

        (defn ^DataSeq load-data [id]
          (next (DataSeq. id (count-entries id) -1 [])))
        ;; count-entries is a trivial "open file and read a long", memoized

    As you can see, the format of the data allowed me to implement count very simply and efficiently.

    3. drop could be O(1)

    In the same spirit, I'd like to reimplement drop. The format of these data files allows me to reimplement drop in O(1) (instead of the standard O(n)), as follows:
    - if dropping fewer than the remaining cached items, just drop the same amount from the cache and be done;
    - if dropping more than cnt, just return the empty list;
    - otherwise, figure out the position in the data file, jump right to that position, and read data from there.

    My difficulty is that drop is not implemented in the same way as count, first, seq, etc. The latter functions call a similarly named static method in RT which, in turn, calls my implementation above, while the former, drop, does not check whether the instance of the sequence it is being called on provides a custom implementation. Obviously, I could provide a function named anything but drop that does exactly what I want, but that would force other people (including my future self) to remember to use it instead of drop every single time, which sucks. So, the question is: is it possible to override the default behaviour of drop?

    4. A workaround (I dislike)

    While writing this question, I figured out a possible workaround: make the reading even lazier. The custom sequence would just keep an index and postpone the reading operation, which would happen only when first was called. The problem is that I'd need some mutable state: the first call to first would cause some data to be read into a cache, and all subsequent calls would return data from this cache. There would be similar logic in next: if there's a cache, just next it; otherwise, don't bother populating it, as that will be done when first is called again. This would avoid unnecessary disk reads. However, this is still less than optimal: it is still O(n), and it could easily be O(1).
    Anyway, I don't like this workaround, and my question is still open. Any thoughts? Thanks.
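    For what it's worth, a sketch of the named-function workaround built on the asker's own constructor trick: build the sequence one slot before the target with an empty cache and let next fault in fresh data. It always re-reads (a real version would first try the cached list), and it is not wired into clojure.core/drop:

        (defn drop-fast
          "Hypothetical O(1) positioning for DataSeq; a stand-in for drop."
          [^long n ^DataSeq s]
          (let [i      (.index s)
                cnt    (+ i (.count s))
                target (+ i n)]
            (if (>= target cnt)
              '()
              (next (DataSeq. (.id s) cnt (dec target) [])))))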

    Read the article

  • Run script when Varnish starts

    - by kipusoep
    I'd like to run a script when Varnish starts. This script should make a web request to a web server (Varnish's backend), which then makes sure Varnish's cache gets filled with all the pages residing on that server. So the script ensures everything is in Varnish's cache whenever Varnish (re)starts, because we're using Varnish as both a cache and a fail-over (the web server should be able to be down for, say, a week without any consequences). What are the possibilities here? We can't just edit /etc/init.d/varnish and /usr/sbin/varnishd, because they can get overwritten when updating Varnish. Thanks!
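    One low-touch approach: leave the packaged scripts alone and wrap them, so package updates can overwrite /etc/init.d/varnish without losing anything. A sketch, assuming a urls.txt list of pages and Varnish answering on those URLs:

        #!/bin/sh
        # start-and-warm.sh -- bring Varnish up, then prime its cache through it
        /etc/init.d/varnish start
        sleep 2
        while read url; do
            curl -s -o /dev/null "$url"
        done < /etc/varnish/urls.txt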

    Read the article

  • Can I extend my total RAM by buying more, and what kind do I need to buy

    - by Xeon06
    I currently have 4 GB of RAM in total and I would like to get more, to bring it to 8 GB. Is it possible to simply buy another 4 GB and bring it to 8? If so, what kind should I be buying? There are a lot of different possibilities: DDR3, DDR2, clock speed, etc. I am kind of lost among all this. My current setup:
    - ACER EG43M mainboard
    - Intel(R) Core(TM)2 Quad CPU Q8200 @ 2.33GHz
    - 4 total RAM slots, 2 occupied by 2 GB sticks
    - According to CPU-Z, my memory type is DDR3 (not sure how reliable that is)
    - Full CPU-Z dump
    - Windows 7 64-bit
    So basically, I want to know whether it's possible to extend my current RAM to 8 GB total by buying another 4, and if so, what kind of RAM do I need? Note that I am not looking for shopping recommendations; I'm worried about hardware compatibility.

    Read the article

  • Varnishhist x- and y-axis

    - by pst
    In varnishhist the x-axis shows the time Varnish took between getting the request from the kernel and sending it back to the kernel. The y-axis shows the number of requests.

        |  =>  cache hit
        #  =>  cache miss

    That is what I understood from the man page; correct me if I'm wrong. Yet there is one thing I'm unsure about: the pipes (|) on the far left side. Do they also stand for cache hits, or are they just there to print the y-axis? I'm voting for the latter, but would like to be sure.

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property. While applying GZip and Deflate behavior is pretty easy, there are a few caveats that you have to watch out for, as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats, let's review GZip implementation for ASP.NET.

    ASP.NET GZip/Deflate Basics

    Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream, which, if a filter is attached, is also written through the filter stream's interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages, .NET includes the GZipStream and DeflateStream stream classes, which can be readily assigned to the Response.OutputStream. With these two stream classes in place it's almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods – IsGZipSupported and GZipEncodePage – that check whether the client supports GZip encoding and then actually encode the current output (note that although the method includes 'Page' in its name, this code will work with any ASP.NET output).

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter.
        /// IMPORTANT: You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;
            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (AcceptEncoding.Contains("deflate"))
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(
                        Response.Filter, System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.GZipStream(
                        Response.Filter, System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "gzip");
                }
            }
        }

    As you can see, the actual assignment of the Filter is as simple as:

        Response.Filter = new DeflateStream(Response.Filter,
            System.IO.Compression.CompressionMode.Compress);

    which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding, and that any pages cached in proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding).

    To use this utility function now is trivially easy: in any ASP.NET code that wants to compress its Response output you simply use:

        protected void Page_Load(object sender, EventArgs e)
        {
            WebUtils.GZipEncodePage();

            Entry = WebLogFactory.GetEntry();
            var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount,
                "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries");
            if (entries == null)
                throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage);

            this.repEntries.DataSource = entries;
            this.repEntries.DataBind();
        }

    Here I use an ASP.NET page, but the above WebUtils.GZipEncodePage() method call will work in any ASP.NET application type, including HTTP handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation, by default output over a certain size is GZip encoded. The output that is generated is JSON or XML, and if the output is over 5k in size I apply WebUtils.GZipEncodePage():

        if (sbOutput.Length > GZIP_ENCODE_TRESHOLD)
            WebUtils.GZipEncodePage();

        Response.ContentType = ControlResources.STR_JsonContentType;
        HttpContext.Current.Response.Write(sbOutput.ToString());

    Ok, so you probably get the idea: encoding GZip/Deflate content is pretty easy.

    Hold on there Hoss – Watch your Caching

    Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The first issue is that you need to deal with the fact that some clients don't support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic HTTP client accessing your content, GZip/Deflate support is by no means guaranteed. For example, WinInet HTTP clients don't support GZip out of the box; it has to be explicitly implemented. Other low-level HTTP clients on other platforms too don't support GZip out of the box. The problem is that your application, your Web server, and proxy servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or, worse, the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients*, which is also very hard to debug and fix once in production.

    You already saw the issue of proxy servers addressed in the GZipEncodePage() function:

        // Allow proxy servers to cache encoded and unencoded versions separately
        Response.AppendHeader("Vary", "Content-Encoding");

    This ensures that any proxy servers also check the Content-Encoding HTTP header to cache their content, not just the URL. The same thing applies if you do output caching in your own ASP.NET code. If you generate output for GZip on an OutputCached page, the GZipped content will be cached (either by ASP.NET's cache or in some cases by the IIS kernel cache). But what if the next client doesn't support GZip? She'll get served a cached GZip page that won't decode, and she'll get a page full of garbage. Wholly undesirable. To fix this you need to add some custom OutputCache rules by way of the GetVaryByCustom() HttpApplication method in your global.asax file:

        public override string GetVaryByCustomString(HttpContext context, string custom)
        {
            // Override caching for compression
            if (custom == "GZIP")
            {
                string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"];
                if (string.IsNullOrEmpty(acceptEncoding))
                    return "";
                else if (acceptEncoding.Contains("gzip"))
                    return "GZIP";
                else if (acceptEncoding.Contains("deflate"))
                    return "DEFLATE";
                return "";
            }
            return base.GetVaryByCustomString(context, custom);
        }

    In a page that uses output caching you then specify:

        <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %>

    to use that custom rule.

    It's all Fun and Games until ASP.NET throws an Error

    Ok, so you're up and running with GZip, you have your caching squared away, and the pages you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page. What's happened here is that I have WebUtils.GZipEncodePage() applied to my page, but there's an error in the page. The error falls back to the ASP.NET error handler, and the error handler removes all existing output (good) and removes all the custom HTTP headers I've set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncodePage) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here's what Fiddler says about the raw HTTP header output when an error occurs while GZip encoding was applied:

        HTTP/1.1 500 Internal Server Error
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Date: Sat, 30 Apr 2011 22:21:08 GMT
        Content-Length: 2138
        Connection: close

        ?`I?%&/m?{J?J??t??` … binary output stripped here

    Notice: no Content-Encoding header, and that's why we're seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up, and in this case I've been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top:

        protected void Application_Error(object sender, EventArgs e)
        {
            // Remove any special filtering, especially GZip filtering
            Response.Filter = null;
            ...
        }

    And voila, I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for page-level errors handled in Page_Error or ASP.NET MVC error handling methods in a controller.

    Another and possibly even better solution is to check whether a filter is attached just before the headers are sent to the client, as pointed out by Adam Schroeder in the comments:

        protected void Application_PreSendRequestHeaders()
        {
            // Ensure that if GZip/Deflate encoding is applied the headers are set.
            // Also works when an error occurs and the filters are still active.
            HttpResponse response = HttpContext.Current.Response;
            if (response.Filter is GZipStream &&
                response.Headers["Content-encoding"] != "gzip")
                response.AppendHeader("Content-encoding", "gzip");
            else if (response.Filter is DeflateStream &&
                     response.Headers["Content-encoding"] != "deflate")
                response.AppendHeader("Content-encoding", "deflate");
        }

    This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the content accordingly. This is actually a better solution since it is generic: it'll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or short error display might get changed and the filter not cleared, and this code actually handles that. Sweet, thanks Adam.

    It's unfortunate that ASP.NET doesn't natively clear out Response.Filter when an error occurs, just as it clears the Response and headers. I can't see where leaving a filter in place in an error situation would make any sense, but hey, this is what it is, and it's easy enough to fix as long as you know where to look. Riiiight!

    IIS and GZip

    I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code, by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic caching is also supported but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I'll post another entry next with some more info on IIS compression, since information on it seems to be a bit hard to come by.

    Related Content
    Built-in GZip/Deflate Compression in IIS 7.x
    HttpWebRequest and GZip Responses

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in ASP.NET  IIS7

    Read the article

  • Is recursion really bad?

    - by dotneteer
    After my previous post about stack space, the feedback suggests a perception that recursion is bad and we should avoid deep recursion. After writing a compiler, I know that modern computers and compilers are complex enough that one cannot automatically assume hand-crafted code will out-perform the compiler's optimization. The only way to know is to prototype and find out.

    So why might recursive code not perform as well? Compilers place frames on a stack. In addition to arguments and local variables, compilers also need to place frame and program pointers on the frame, resulting in overhead.

    And why might hand-crafted code not perform as well? The stack used by a compiler is a simple data structure that can grow and shrink cleanly. To replace recursion with our own stack, our stack must be allocated on the heap, which is far more complicated to manage. There can be overhead as well if the compiler needs to mark objects for garbage collection, and the compiler also needs to worry about memory fragmentation. Then there is additional complexity: CPUs have registers and multiple levels of cache. Register access is a few times faster than in-CPU cache access, which in turn is a few tens of times faster than on-board memory access. So it is up to the OS and compiler to maximize the use of registers and in-CPU cache.

    For my particular problem, I did an experiment rewriting my C# recursive code with a loop-and-stack approach. Here are the outcomes of the two approaches:

                                          Recursive call   Loop and stack
        Lines of code for the algorithm   17               46
        Speed                             baseline         3% faster
        Readability                       clean            far more complex

    So in the end, I was able to achieve 3% better performance, with other drawbacks. My message: never assume a sophisticated approach will automatically work out better than a simpler approach on a modern computer with a modern compiler. Gauge carefully before committing to the more complex approach.
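    For concreteness, here is the shape of the two versions; not the author's actual algorithm, just a generic tree sum written both ways:

        using System.Collections.Generic;

        class Node { public long Value; public Node Left, Right; }

        static class TreeSum
        {
            // Recursive: the runtime's call stack carries the pending nodes.
            public static long Sum(Node n)
            {
                if (n == null) return 0;
                return n.Value + Sum(n.Left) + Sum(n.Right);
            }

            // Loop and stack: a heap-allocated Stack<T> replaces call frames,
            // trading stack-depth limits for allocation and GC overhead.
            public static long SumIterative(Node root)
            {
                long total = 0;
                var stack = new Stack<Node>();
                if (root != null) stack.Push(root);
                while (stack.Count > 0)
                {
                    Node n = stack.Pop();
                    total += n.Value;
                    if (n.Left != null) stack.Push(n.Left);
                    if (n.Right != null) stack.Push(n.Right);
                }
                return total;
            }
        }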

    Read the article

  • Machine Check Exception

    - by Karl Entwistle
    When trying to install ubuntu-12.04-desktop-amd64.iso from USB, I get one of the following errors. http://en.wikipedia.org/wiki/Machine_Check_Exception states the error can occur due to:
    - a poorly fitted heatsink or computer fans (the same problem can happen with excessive dust in the CPU fan)
    - an overloaded internal or external power supply (fixable by upgrading)
    So I tried the following:
    - Used rubbing alcohol to remove all the thermal paste from the CPU and heatsink, then reseated the CPU after checking all the pins on the MOBO; everything seems fine.
    - Booted without the GPU to see if the PSU was being overstressed.
    - Removed all RAM apart from one stick and ran Memtest86, which it passed.
    - Tried Ubuntu 10.04.4 Desktop 64-bit (different USB slots and USB sticks).
    - Tried Ubuntu 12.04 Desktop 64-bit (different USB slots and USB sticks).
    - Reset the BIOS using the Clear CMOS jumper.
    - Removed all HD power cables and SATA cables.
    - Updated the BIOS from F2 to F6.
    My PC is using the following parts:
    - Gigabyte GA-Z77-DS3H (F6 BIOS)
    - Intel Core i7 3770K 3.5GHz Socket 1155
    - G-Skill 8GB (2x4GB) DDR3 1600MHz RipjawsX Memory Kit CL9 (9-9-9-24) 1.5V
    - Be Quiet Shadow Rock Pro
    - Be Quiet Pure Power 730W Modular PSU
    - Sapphire HD 6870 1GB GDDR5 DVI HDMI DisplayPort PCI-E Graphics Card
    Any ideas?

    Read the article

  • In Which We Demystify A Few Docupresentment Settings And Learn the Ethos of the Author

    - by Andy Little
    It's no secret that Docupresentment (part of the Oracle Documaker suite) is a powerful tool for integrating on-demand and interactive publishing applications with the Oracle Documaker framework. It's also no secret that there are many details with respect to the configuration of Docupresentment that can elude even the most erudite of techies. To be sure, Docupresentment will work for you right out of the box, and in most cases will suit your needs without toying with a configuration file. But where's the adventure in that?

    With this inaugural post to That's The Way, I'm going to introduce myself and what my aim is with this blog. If you didn't figure it out already by checking out my profile, my name is Andy and I've been with Oracle (nee Skywire Software nee Docucorp nee Formmaker) since the formative years of 1998. Strangely, it doesn't seem that long ago, but it's certainly a lifetime in the age of technology. I recall running a BBS from my parents' basement on a 1200 baud modem, and the trepidation and sweaty-palmed excitement of upgrading to the power and speed of 2400 baud! Fine, I'll admit that perhaps I'm inflating the experience a bit, but I was a kid! This is the stuff of War Games and King's Quest I and the demise of the TI-99/4A. Exciting times.

    So fast-forward a bit, and I'm 12 years into a career in the world of document automation and publishing, working for the best (IMHO) software company on the planet. With That's The Way I hope to shed a little light on, and peek under the covers of, some of the more interesting aspects of implementations involving the tech space within the Oracle Insurance Global Business Unit (IGBU), which includes Oracle Documaker, Rating & Underwriting, and Policy Administration, to name a few. I may veer off course a bit, and you'll likely get a dose of humor (at least in my mind), but I hope you'll glean at least a tidbit of usefulness with each post. Feel free to comment, as I'm a fairly conversant guy and happy to talk; it's stopping the talking that's the hard part...

    So, back to our regularly scheduled post, already in progress. By this time you've visited Oracle's E-Delivery site and acquired your properly licensed version of Oracle Documaker. Wait, you didn't find it? Understandable; navigating the voluminous download library within Oracle can be a daunting task. It's pretty simple once you've done it a few times. Log in to the e-delivery site and accept the license terms and restrictions. Then you'll be able to select the Oracle Insurance Applications product pack and your appropriate platform. Click Go and you'll see a list of applicable products, where you'll click on Oracle Documaker Media Pack (as I went to press with this article the version is 11.4). Finally, click the Download button next to Docupresentment (again, the version at press time is 2.2 p5). This should give you a ZIP file that contains the installation packages for the Docupresentment Server and Client, cryptically named IDSServer22P05W32.exe and IDSClient22P05W32.exe.

    At this time, I'd like to take a little detour and explain that the world of Oracle, like most technical companies, is rife with acronyms. One of the reasons Skywire Software was appealing to Oracle was our use of many acronyms, including the occasional use of multiple acronyms with the same meaning. I apologize in advance and will try to point these out along the way.
    Here's your first sticky note to go along with that: IDS = Internet Document Server = Docupresentment.

    Once you've completed the installation, you'll have a shiny new Docupresentment server and client, and if you installed to the default location it will be living in c:\docserv. Unix users, I'm one of you! You'll find it by default in ~/docupresentment/docserv. Forging onward, the meat of this post is learning about some special configuration options. By now you've read the documentation included with the download (specifically ids_book.pdf), which goes into some detail on the rubric of the configuration file; in fact, there's even a handy utility that provides an interface to the configuration file (see Running IDSConfig in the documentation). But who wants to deal with a configuration utility when we have the tools and technology to edit the file <gasp> by hand!

    I shall now proceed with the standard Information Technology Under the Hood Disclaimer: please remember to back up any files before you make changes. I am not responsible for any havoc you may wreak!

    Go to your installation directory and locate your docserv.xml file. Open it in your favorite XML editor. I happen to be fond of Notepad++ with the XML Tools plugin. Almost immediately you will behold the splendor of the configuration file. Just take a moment and let that sink in. Ok, moving on. If you reviewed the documentation, you know that inside the root <configuration> node there are multiple <section> nodes, each containing a specific group of settings. Let's take a look at <section name="DocumentServer">. There are a few entries I'd like to discuss.

    First, <entry name="StartCommand">. This should be pretty self-explanatory; it's the name of the executable that's run when you fire up Docupresentment. Immediately following that is <entry name="StartArguments">, and as you might imagine these are the arguments passed to the executable. A few things to point out:
    - The -Dids.configuration=docserv.xml parameter specifies the name of your configuration file.
    - The -Dlogging.configuration=logconf.xml parameter specifies the name of your logging configuration file (this uses log4j, so bone up on that before you delve here).
    - The -Djava.endorsed.dirs=lib/endorsed parameter specifies the path where 3rd-party Java libraries can be located for use with Docupresentment. More on that in another post.

    The <entry name="Instances"> setting allows you to specify the number of instances of Docupresentment that will be started. By default this is two, and generally two instances per CPU is adequate; however, you will always need to perform load testing to determine the sweet spot based on your hardware and types of transactions. You may have many, many more instances than 2.

    Time for a sidebar on instances. An instance is nothing more than a separate process of Docupresentment. The Docupresentment service that you fire up with docserver.bat or docserver.sh actually starts a watchdog process, which is then responsible for starting up the actual Docupresentment processes. Each of these acts independently of the others, so if one crashes, it does not affect any others. In the case of a crashed process, the watchdog will start up another instance so the configured number of instances is always running. Bottom line: instance = Docupresentment process.

    And now, finally, to the settings which gave me pause on a not-too-long-ago implementation!
    Docupresentment includes a feature that watches configuration files (such as docserv.xml and logconf.xml) and will automatically restart its instances to load the changes. You can configure the time Docupresentment waits between checks of these files using the setting <entry name="FileWatchTimeMillis">. By default the number is 12000 ms, or 12 seconds. You can save yourself a few CPU cycles by extending this time, or disable the check altogether by setting the value to 0. This may or may not be appropriate for your environment; if you have 100% uptime requirements then you probably don't want to bring down an entire set of processes just to accept a new configuration value, so it's best to leave this somewhere between 12 seconds and a few minutes. Another point to keep in mind: if you are using Documaker real-time processing under Docupresentment, the Master Resource Library (MRL) files and INI options are cached, and if you need to effect a change, you'll have to "restart" Docupresentment. Touching the docserv.xml file is an easy way to do this (other methods include using the RSS request, but that's another post).

    The next item up: <entry name="FilePurgeTimeSeconds">. You may already know that the Docupresentment system can generate many temporary files based on certain request types that are processed through the system. What you may not know is how those files are cleaned up. There are many rules in Docupresentment that cause the creation of temporary files. When these files are created, Docupresentment writes an entry into a properties file called the file cache. This file contains the name, creation date, and expiration time of each temporary file created by each instance of Docupresentment. Periodically Docupresentment will check the file cache to determine if there are files past their expiration time, not unlike that block of cheese festering away in the back of my refrigerator. However, unlike my 'fridge-cleaning tendencies, Docupresentment is quick to remove files that are past their expiration time. You, my friend, have the power to control how often Docupresentment inspects the file cache: simply set the value of <entry name="FilePurgeTimeSeconds"> to the number of seconds appropriate for your requirements and you're set. Note that file purging happens on a separate thread from normal request processing, so this shouldn't interfere with response times unless the CPU happens to be really taxed at the point of cache processing.

    Finally, after all of this, we get to the last setting I'm going to address in this post: <entry name="FilePurgeList">. The default is "filecache.properties". This establishes the root name for the Docupresentment file cache mentioned previously. Docupresentment creates a separate cache file for each instance based on this setting. If you have two instances, you'll see two files created: filecache.properties.1 and filecache.properties.2. Feel free to open these up and check them out.

    I hope you've enjoyed this first foray into the configuration file of Docupresentment. If you did enjoy it, feel free to drop a comment; I welcome feedback. If you have ideas for other posts you'd like to see, please do let me know. You can reach me at [email protected]. 'Til next time! ###
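    Pulling the settings above into one place, a hypothetical DocumentServer section might look like this; the values shown are the defaults discussed, except FilePurgeTimeSeconds, which is site-specific, and the executable name, which is illustrative:

        <section name="DocumentServer">
          <entry name="StartCommand">java</entry>
          <entry name="StartArguments">-Dids.configuration=docserv.xml
              -Dlogging.configuration=logconf.xml
              -Djava.endorsed.dirs=lib/endorsed</entry>
          <!-- roughly two instances per CPU is a starting point; load-test for real numbers -->
          <entry name="Instances">2</entry>
          <!-- 12000 ms default; 0 disables the config-file watch entirely -->
          <entry name="FileWatchTimeMillis">12000</entry>
          <!-- how often each instance sweeps its file cache for expired temp files -->
          <entry name="FilePurgeTimeSeconds">60</entry>
          <!-- root name of the per-instance cache: filecache.properties.1, .2, ... -->
          <entry name="FilePurgeList">filecache.properties</entry>
        </section>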

    Read the article

  • unmet dependencies and broken count>0 problem

    - by Simon
    I tried installing fbreader, following all the steps, but ended up with unmet dependencies. I also think a file is referenced in two locations at once, which is killing it. Any ideas how I can fix it? I've done a lot of research and tried:

        simon@simon-Studio-1558:~$ sudo apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following packages were automatically installed and are no longer required:
          dkms patch
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          libzlcore0.12
        The following NEW packages will be installed:
          libzlcore0.12
        0 upgraded, 1 newly installed, 0 to remove and 61 not upgraded.
        6 not fully installed or removed.
        Need to get 0 B/270 kB of archives.
        After this operation, 811 kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 179860 files and directories currently installed.)
        Unpacking libzlcore0.12 (from .../libzlcore0.12_0.12.10dfsg-4_i386.deb) ...
        dpkg: error processing /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb (--unpack):
         trying to overwrite '/usr/lib/libzlcore.so.0.12.10', which is also in package libzlcore 0.12.10-1
        No apport report written because MaxReports is reached already
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Errors were encountered while processing:
         /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    What it basically isn't liking is:

        dpkg: error processing /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb (--unpack):
         trying to overwrite '/usr/lib/libzlcore.so.0.12.10', which is also in package libzlcore 0.12.10-1

    Any ideas? Also, I don't care about keeping the program, but the error is stopping sudo apt-get remove fbreader from working too.
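    The standard escape hatches for a "trying to overwrite" conflict are to remove the older package that owns the file, or to let dpkg overwrite it explicitly; the .deb path here is the one from the error above:

        # Option 1: drop the conflicting older libzlcore, then let apt finish
        sudo apt-get remove libzlcore
        sudo apt-get -f install

        # Option 2: force the overwrite of the shared file in place
        sudo dpkg -i --force-overwrite /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb
        sudo apt-get -f install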

    Read the article

  • Flash Technology Can Revolutionize your IT Infrastructure

    - by kimberly.billings
    A recent article in the Data Center Journal written by Mark Teter outlines how flash is becoming a disruptive technology in the data center and how it will soon replace HDDs in the storage hierarchy. As Teter explains, the drivers behind this trend are lower cost/performance and power savings; flash is over 100x faster for reads than the fastest HDD, and while it is expensive, it can produce dramatic reductions in the cost of performance as measured in Input/Outputs per second (IOPS). What's more, flash consumes 1/5th the power of HDD, so it's faster AND greener. Teter writes, "when appropriately used, flash turns the current economics of IT performance on its head. That's disruptive." Exadata Smart Flash Cache in the Sun Oracle Database Machine makes intelligent use of flash storage to deliver extreme performance for OLTP and mixed workloads. It intelligently caches data from the Oracle Database, replacing slow mechanical I/O operations to disk with very rapid flash memory operations. Exadata Smart Flash Cache is the fundamental technology of the Sun Oracle Database Machine that enables the processing of up to 1 million random I/O operations per second (IOPS), and the scanning of data within Exadata storage at up to 50 GB/second. Are you incorporating flash into your storage strategy? Let us know! Read more: "Flash technology can revolutionize your IT infrastructure", The Data Center Journal, March 30, 2010. Exadata Smart Flash Cache and the Sun Oracle Database Machine white paper

    Read the article

  • Tuning Red Gate: #5 of Multiple

    - by Grant Fritchey
    In the Tuning Red Gate series I've shown you how to look at the current load on the system and how to drill down into a historical analysis of the system. I've also shown how you can see the top queries and other information from the current status of the system. I have one more thing I can show you before we need to start fixing things and showing how that affects the data collected: historical moments in time. For example, back in post #3 I was looking at some spikes in some of the monitored resources that had taken place a couple of weeks back. Once I identify a moment in time that I'm interested in, I can go back to the first page of Monitor, Global Overview, and click on the icon. From there you can select the date and time you're interested in. For example, I saw some serious CPU queues last week. This rolls back the time for all the information available in the Global Overview and in the drill-down to the server and the SQL Server instance there. That then allows me to look at the top queries running at that point, sort them by CPU, and identify what was potentially the query causing the problem right when I saw the CPU queuing. This ability to correlate a moment in time with the information available to you in the Analysis window makes for an excellent tool for investigating your systems backwards in time. It really makes a huge difference in your knowledge. It's not enough to know that something happened at a particular time; you need to know what it was that was occurring. Remember, the key to tuning your systems is having enough knowledge about them. I'll post more on Tuning Red Gate as soon as I can get some queries rewritten. I'm working on that.

    Read the article

  • Oracle invites you to a hands-on Oracle Coherence discovery workshop: a presentation of the product and its concepts, followed by practical exercises

    - by mseika
    Oracle invites you to a hands-on Oracle Coherence discovery workshop consisting of a presentation of the product and its concepts, followed by practical exercises.

    Objectives: This workshop is intended for architects, developers, and project managers. The one-day format will let you measure what Oracle Coherence can bring to your company or your clients through a few exercises. This hands-on day will help you better understand:
    - the positioning of Oracle Coherence across the different use cases encountered in the French market
    - the technical concepts of Oracle Coherence
    - creating a distributed data grid
    - inserting and reading data in a distributed cache
    - running a query against a distributed cache
    - running an aggregation over a distributed cache
    - etc.
    Depending on your level, there will always be an extra exercise to tackle...

    Prerequisites:
    Hardware: each participant must bring a laptop with at least 4 GB of RAM (ideally Windows XP or 7), with the following already installed: a JDK 6, a recent version of Eclipse, and Coherence 3.7.
    Skills: Eclipse and beginner-level Java programming (you should be comfortable creating a Java project, using libraries, compiling, running, creating Eclipse launch configurations, etc.).
    Duration: 1 day.
    The Oracle France Enablement team.
    NB: Please plan for lunch costs, which are not covered by Oracle.
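    For a taste of the hands-on portion, the insert-and-read basics against a distributed cache look roughly like this in Coherence 3.7 (the cache name is arbitrary):

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class CoherenceHello {
            public static void main(String[] args) {
                // Joins (or starts) a cluster and obtains a named distributed cache
                NamedCache cache = CacheFactory.getCache("workshop");
                cache.put("key-1", "hello");             // write into the data grid
                System.out.println(cache.get("key-1"));  // read back, possibly from another node
                CacheFactory.shutdown();
            }
        }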

    Read the article
