Search Results

Search found 415 results on 17 pages for 'bottleneck'.

Page 8/17 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Bundling in Visual Studio 2012 for web optimization

    - by Jalpesh P. Vadgama
    I have been writing a series of posts about Visual Studio 2012 features, and this post is also part of that series. As we know, these days web applications and sites provide more and more features, and because of that we have to include lots of JavaScript and CSS files in our web applications. So once the site loads, all of those JavaScript and CSS files are loaded in the browser, and if you have lots of JavaScript files, the browser spends a lot of time requesting them. The following image shows this situation. Here you can see a total of 25 files loaded, amounting to more than 1MB in total size. Since we need our web application or site to be very responsive and high-performance, this becomes a performance bottleneck for our site. In situations like this, the bundling feature of Visual Studio 2012 and ASP.NET 4.5 comes in very handy. With the help of this feature we can optimize and increase the performance of our application. To enable this feature in Visual Studio 2012 we just set debug="false" in the web.config of our application, like the following. Now once you enable this feature and run the application in the browser to inspect your traffic, it will have fewer items, like the following. As you can see in the above image there are only 8 items. So after enabling bundling, the JS and CSS files are automatically combined into bundled requests. Isn't that a cool feature? This feature will surely have a great impact on performance. Hope you like it. Stay tuned for more.. Till then happy programming!!

    Read the article

  • What is the fastest CPU my laptop can support?

    - by Dave
    I have a Dell Latitude D830 laptop and would like to speed up compile times on it. I have confirmed that it is, indeed, processor time that is the bottleneck. How can I tell which processors are compatible with the motherboard so I can pick the best one available? I dual-boot Ubuntu Maverick and Windows 7. lshw tells me that my motherboard is OHN338 from Dell, Inc. If anyone has a generic solution, i.e. "For motherboard X, here is how you find out what processors are supported," that would make this question much more useful to future visitors. But if you also know of a way to find out specific to my model, that would be great as well.

    Read the article

  • How can I throttle the bandwidth consumed by Windows Automatic Updates?

    - by eleven81
    We have many Windows XP computers sharing one connection to the internet. These machines are set to download all available automatic updates and then prompt the user to install them. Whenever Patch Tuesday rolls around, our internet usage pegs out, and remains that way for most of the day, and sometimes into the following Wednesday. This hurts! I still want the machines to start to download the updates as soon as they are available, but if it takes until Thursday or Friday before the last updates are downloaded, that's still better than the latency and dropped connections we are seeing now as a result of the internet connection bottleneck. What can I do to throttle back how rapidly each machine downloads the updates, while still having them all start the download process as soon as the updates are available? I have no desire to run a WSUS server. Also, the internet connection is more than enough, whenever there are no updates to download.

    Read the article

  • Throttling apache downloads selectively

    - by Synchro
    I have a linux box running Debian Sarge (old I know) and apache 2.0.54. It serves two kinds of files - regular web pages and small images, and a lot of large podcast mp3s. The podcast downloads swamp the connection and make the rest of the site unresponsive, so I'm looking to throttle the data transfer rate (not the request rate) of just the podcasts. I've set up haproxy using this technique which does what it says it will, but solves a different problem - even only 5 simultaneous podcast downloads is enough to saturate the link. In a perfect world, haproxy would support per-connection throttling, but it doesn't. So far I've looked at mod_bw (won't compile for me, seems unsupported), mod_cband (unsupported, widely reported as problematic) and iptables using tc. The iptables approach would allow me to throttle things, but would not be at all selective, slowing down everything on the server, not just the podcasts, so would just move the bottleneck without changing overall behaviour. Ideas?

    Read the article

  • How do I pick a motherboard?

    - by EpsilonVector
    When building your own computer, one part was always a mystery to me: the motherboard. Picking a CPU/GPU/memory is easy: you just figure out where the various chips sit on the low-end to high-end scale, do a little market research on what current games demand, and pick the parts from their respective continuums of low-to-high-end models. A motherboard is more complicated, though. Its features are not as obvious as "this motherboard is faster than that motherboard". Now you need to deal with part compatibility, bus speeds, maybe power management stuff, etc. I'm interested in a short guide to selecting a motherboard, especially what pitfalls to avoid (for example, can bus speed become a bottleneck?). To clarify: I'm not looking for motherboard recommendations. I'm looking for guidance on how to evaluate the fitness of a motherboard given the rest of the computer parts.

    Read the article

  • Caching DNS server (bind9.2) CPU usage is so so so high

    - by Gk.
    I have a caching-only DNS server which gets ~3k queries per second. Here are the specs: Xeon dual-core 2.8GHz, 4GB of RAM, CentOS 5.x (kernel 2.6.18-164.15.1.el5PAE), BIND 9.4.2. rndc status: recursive clients: 666/4900/5000. About 300 new queries (not in cache) per second. With a single-threaded config, BIND always uses 100% of one core. After I recompiled it as multi-threaded, it uses nearly 200% across two cores :( No iowait, only sys and user. I searched around but didn't see any info about how BIND uses CPU. Why does it become the bottleneck? One more thing, here is the RAM usage: cat /proc/meminfo MemTotal: 4147876 kB MemFree: 1863972 kB Buffers: 143632 kB Cached: 372792 kB SwapCached: 0 kB Active: 1916804 kB Inactive: 276056 kB I've set max-cache-size to 0 to make sure BIND can use as much RAM as it wants, but it always stops at ~2GB. Since we get uncached queries every second, theoretically the cache should keep growing until RAM is exhausted, but it doesn't. Do you have any idea? TIA, -Gk

    Read the article

  • Is a PCIE X1 card for USB 3.0 worth it?

    - by lelouch
    I have a portable HDD on USB 3.0, and soon I'll have a 32GB flash drive on USB 3.0 as well. My motherboard is pretty old, and the next time I'll upgrade it is when I get a new CPU/mobo/RAM. Slots on my mobo: one PCIe x16 (for my GTX 260), two PCI, and one PCIe x1. I'm considering purchasing a PCI Express x1 card like this: http://www.ebay.com.au/itm/PC-Laptop-2-USB3-0-Ports-PCI-E-PCI-Express-X1-Card-New-/280772000446?pt=AU_Networking&hash=item415f50babe Now the question: what realistic (read/write) speeds will I get in this setup, assuming I have one USB 3.0 device connected at a time and the bottleneck is the PCIe link?
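
    For a rough, hedged sanity check (the exact chipset isn't stated): a PCIe 1.x x1 link carries about 2.5 Gbit/s x 8/10 ≈ 250 MB/s of usable bandwidth per direction, a PCIe 2.0 x1 link about 5 Gbit/s x 8/10 ≈ 500 MB/s, and USB 3.0 signals at 5 Gbit/s but delivers roughly 400 MB/s in practice. A portable spinning drive typically reads at well under 150 MB/s, so the x1 link is unlikely to be the limit for the HDD, though a fast flash drive could bump into the ceiling of a PCIe 1.x slot.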

    Read the article

  • Is it conceivable to have millions of lists of data in memory in Python?

    - by Codemonkey
    I have over the last 30 days been developing a Python application that utilizes a MySQL database of information (specifically about Norwegian addresses) to perform address validation and correction. The database contains approximately 2.1 million rows (43 columns) of data and occupies 640MB of disk space. I'm thinking about speed optimizations, and I've got to assume that when validating 10,000+ addresses, each validation running up to 20 queries to the database, networking is a speed bottleneck. I haven't done any measuring or timing yet, and I'm sure there are simpler ways of speed optimizing the application at the moment, but I just want to get the experts' opinions on how realistic it is to load this amount of data into a row-of-rows structure in Python. Also, would it even be any faster? Surely MySQL is optimized for looking up records among vast amounts of data, so how much help would it even be to remove the networking step? Can you imagine any other viable methods of removing the networking step? The location of the MySQL server will vary, as the application might well be run from a laptop at home or at the office, where the server would be local.
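
    A very rough back-of-the-envelope figure (an assumption about CPython object overheads, not a measurement): 2.1 million rows x 43 fields is roughly 90 million values, and if each value becomes its own Python object carrying a few tens of bytes of interpreter overhead, a naive list-of-rows lands in the multi-gigabyte range, several times the 640MB the table occupies on disk. Keeping only the columns the validation actually touches, or packing rows into tuples of shared values, shrinks that estimate considerably.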

    Read the article

  • Slow write speeds on new Gigabit home file server

    - by Ryan Holder
    So I finally got all my parts delivered to set up a home file/backup server this week. It's currently running Ubuntu Server and I'm using Samba to share files on my network. The server currently has a 2TB WD Green drive in it, connected to an Asus M5A78L-M. This is then connected via CAT6a to my new Gigabit switch (TP-Link TL-SG1005D). My home desktop is also connected to this switch, again through CAT6a cable. Currently when transferring files I get a perfect 100MB/s read from the server to my Windows machine. When copying from my Windows machine to the server I get around 30-38MB/s. I know this drive is capable of faster speeds, so would anybody have an idea of where the bottleneck is? Any help would be greatly appreciated :) EDIT: I have found that FTP's write speed is much closer to my Samba read speed, so my guess is that this is a software problem rather than a hardware one.

    Read the article

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: Refined question to focus on the core issue. Context: I have historical data about property (house) sales collected from various sources in a centralized/cloud data source (assume info collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source. Example queries: Simple: for a given XYZ postcode, what is the average price of a 3-bedroom house? Complex: what is the estimated price for a house at "DD, Some Street, XYZ Post Code" (worked out from average values of historical data filtered by various characteristics of the house: postcode, number of bedrooms, total area, and deeper attributes like building type, year built, and features)? In addition to the average price, the application should support other property metrics such as maximum or minimum price, and trends (graphs) of a selected property attribute over a period of time. Hence, the queries should not force the search to be based on a primary key or a few fixed fields. In other words, queries can be: what is the change in 3-bedroom house prices (irrespective of location) over the last 30 days? What kind of properties can we get for price X (irrespective of location or house type)? The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interface, DW-related, or something else) this problem (dynamic queries on historical data) belongs to, so that I can explore further. My findings so far (I could be wrong on the following, so please correct me if you think so): I briefly read about BI/data analytics; I think it is a heavyweight solution for my problem and has scalability issues. DB design: as I understand it, an RDBMS works well if you know the data model at design time, but I expect the attributes about a property or other entities (users) that I am going to bring in to evolve quickly, so maintenance would be an issue; and as multiple users will be executing queries at the same time, performance would be a bottleneck. Other options like graph DBs (http://www.tinkerpop.com/) seem a bit complex (they are good, but using such general-purpose tools for my problem feels like assembly programming). Big-data solutions are aimed at analysing data from multiple unrelated domains. So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience of a back-end for property listings or similar portals.)
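
    To make the "dynamic query" shape concrete, here is a minimal, hypothetical sketch (attribute names and values invented for illustration, and nothing here endorses a particular back-end): each sale is a bag of attributes, and the filter is assembled at runtime from whatever criteria arrive.

        import java.util.*;
        import java.util.function.Predicate;

        public class DynamicQueryDemo {
            public static void main(String[] args) {
                // Each sale is just a bag of attributes, so new characteristics can be added
                // later without changing a schema.
                List<Map<String, Object>> sales = List.of(
                    Map.of("postcode", "XYZ", "bedrooms", 3, "price", 250_000, "soldDaysAgo", 12),
                    Map.of("postcode", "XYZ", "bedrooms", 3, "price", 270_000, "soldDaysAgo", 40),
                    Map.of("postcode", "ABC", "bedrooms", 2, "price", 180_000, "soldDaysAgo", 5));

                // The filter is assembled at runtime from whatever criteria the user supplied.
                List<Predicate<Map<String, Object>>> criteria = List.of(
                    s -> "XYZ".equals(s.get("postcode")),
                    s -> Integer.valueOf(3).equals(s.get("bedrooms")));
                Predicate<Map<String, Object>> query =
                    criteria.stream().reduce(s -> true, Predicate::and);

                // "Average price for a 3-bedroom house in XYZ"
                OptionalDouble avg = sales.stream()
                    .filter(query)
                    .mapToInt(s -> (Integer) s.get("price"))
                    .average();
                System.out.println(avg.orElse(Double.NaN));
            }
        }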

    Read the article

  • monitoring TCP/IP performance on Solaris

    - by Andy Faibishenko
    I am trying to tune a high message traffic system running on Solaris. The architecture is a large number (600) of clients which connect via TCP to a big Solaris server and then send/receive relatively small messages (.5 to 1K payload) at high rates. The goal is to minimize the latency of each message processed. I suspect that the TCP stack of the server is getting overwhelmed by all the traffic. What are some commands/metrics that I can use to confirm this, and in case this is true, what is the best way to alleviate this bottleneck?

    Read the article

  • linux: upload / download difference on network shares

    - by Batsu
    I have a Red Hat Enterprise Linux 6 machine (with SELinux) which shows a significant difference in speed between download and upload (the latter significantly slower) of files shared over the LAN. The bottleneck seems to be the output of the Linux machine: I get a rate of around 1Mb/s when WinXP machines download files shared (via Samba) by the RHEL machine, and when uploading files from the RHEL machine to a WinXP shared folder; whereas uploading from the XP machines to the Linux shares, downloading the XP machines' shares on the RHEL machine, and any transfer between Windows machines all run smoothly (around 50Mb/s). Since the upload from the RHEL machine to the WinXP share is slow too, I would rule out an issue in the Samba configuration. What could possibly determine this limit on the upload speed? Update: iptables doesn't show any output rules, and disabling it makes no noticeable difference, so I would rule that out too.

    Read the article

  • SSD Performance for PHP?

    - by Andrew Fashion
    My programmer just built an application with PHP using Doctrine ORM (it will be a high-traffic social networking website), and it's very heavy on PHP/Apache and CPU. The queries are wonderfully fast, and MySQL is barely using any CPU; it's just Apache. I was curious whether an SSD would help speed up PHP/Apache, because I know the bottleneck is PHP reading multiple files, class files, and loading up a bunch of data. So common sense makes me think that if PHP is reading multiple PHP files, an SSD would only help as far as read/write speed goes? I was thinking of using a high-performance SSD for the PHP application, but for user image uploads I would just continue using 15k SAS drives. Are there any performance issues with using an SSD in this kind of situation? And would it help speed up PHP/Apache and help with the CPU problem?

    Read the article

  • High CPU Steal percentage on Amazon EC2 Instance

    - by Aditya Patawari
    I am experiencing a high CPU steal percentage on an Amazon EC2 large instance. I know it means that my virtual CPU is waiting on the real CPU of the machine for time. My question is: what can I do to reduce this percentage and get the maximum out of the CPU? The steal percentage is consistently at 20%, and system load crosses 10 when this happens. I have checked memory and network and I am sure that they are not the bottleneck. Is that normal for such an environment? Also, are there any system-level optimization techniques for reducing the steal percentage from within the virtual instance? avg-cpu: %user %nice %system %iowait %steal %idle 52.38 0.00 8.23 0.00 21.21 18.18

    Read the article

  • How to build a "traffic AI"?

    - by Lunikon
    A project I am working on right now features a lot of "traffic" in the sense of cars moving along roads, aircraft moving around an apron, etc. As of now the available paths are precalculated, so nodes are generated automatically for crossings, which themselves are interconnected by edges. When a character/agent spawns into the world it starts at some node and finds a path to a target node by means of a simple A* algorithm. The agent follows the path and ultimately reaches its destination. No problem so far. Now I need to enable the agents to avoid collisions and to handle complex traffic situations. Since I'm new to the field of AI I looked up several papers/articles on steering behavior but found them to be too low-level. My problem consists less of the actual collision avoidance (which is rather simple in this case because the agents follow strictly defined paths) and more of situations like one agent leaving a dead end while another one wants to enter exactly the same one, or two agents meeting at a bottleneck which only allows one agent to pass at a time, where both need to pass it (according to the optimal route found before) and have to find a way to let the other one pass first. So basically the main aspect of the problem would be predicting traffic movement to avoid deadlocks. Difficult to describe, but I guess you get what I mean. Do you have any recommendations on where to start looking? Any papers, sample projects or similar things that could get me started? I appreciate your help!
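
    For reference, a minimal sketch of the precalculated-graph A* step described above, with straight-line distance as the heuristic; node and class names are illustrative, and the deadlock-avoidance layer being asked about would sit on top of this, not inside it.

        import java.util.*;

        class Node {
            final String id;
            final double x, y;
            final Map<Node, Double> edges = new HashMap<>(); // neighbour -> traversal cost
            Node(String id, double x, double y) { this.id = id; this.x = x; this.y = y; }
            double distanceTo(Node o) { return Math.hypot(x - o.x, y - o.y); } // admissible heuristic
        }

        public class AStarDemo {
            static List<Node> findPath(Node start, Node goal) {
                Map<Node, Double> g = new HashMap<>();      // best known cost from start
                Map<Node, Node> cameFrom = new HashMap<>(); // for path reconstruction
                PriorityQueue<Node> open = new PriorityQueue<>(
                    Comparator.comparingDouble((Node n) -> g.get(n) + n.distanceTo(goal)));
                g.put(start, 0.0);
                open.add(start);

                while (!open.isEmpty()) {
                    Node current = open.poll();
                    if (current == goal) { // walk the cameFrom chain back to the start
                        LinkedList<Node> path = new LinkedList<>();
                        for (Node n = goal; n != null; n = cameFrom.get(n)) path.addFirst(n);
                        return path;
                    }
                    for (Map.Entry<Node, Double> e : current.edges.entrySet()) {
                        double tentative = g.get(current) + e.getValue();
                        if (tentative < g.getOrDefault(e.getKey(), Double.POSITIVE_INFINITY)) {
                            g.put(e.getKey(), tentative);
                            cameFrom.put(e.getKey(), current);
                            open.remove(e.getKey()); // re-queue so the new priority takes effect
                            open.add(e.getKey());
                        }
                    }
                }
                return List.of(); // no route between the two nodes
            }

            public static void main(String[] args) {
                Node gate = new Node("gate", 0, 0), taxi = new Node("taxiway", 1, 0), runway = new Node("runway", 2, 1);
                gate.edges.put(taxi, 1.0);   taxi.edges.put(gate, 1.0);
                taxi.edges.put(runway, 1.5); runway.edges.put(taxi, 1.5);
                for (Node n : findPath(gate, runway)) System.out.println(n.id);
            }
        }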

    Read the article

  • Could 11.5 million 401s be causing bottlenecks?

    - by roviuser
    I'm going to preface this with a warning: my knowledge about servers and networking is VERY limited, and if you provide me with technical answers, I probably won't understand much until I research your answer further. I'm trying to expand my knowledge and learn about it, though. If the information that I am able to provide in this question is insufficient to answer it, I understand, and it can be closed. We have a SharePoint 2007 system that is extremely slow, mostly from huge amounts of use. We've been told that the main speed bottleneck is access to the SQL databases. However, they do provide a statistics dashboard, so I did some poking around and noticed that we have 11.5 million or more 401 (access denied) errors every month. Could this be causing major speed/performance decreases? Authentication for SharePoint uses Active Directory.

    Read the article

  • PHP's page generation time is 0.01s, and 1/0.01 = 100; however, I'm having problems reaching that number of requests per second. Why?

    - by cedivad
    On average, my PHP page generation time is 10ms. So I should be able to execute 100 requests one after another per second (using a single core on the server, since PHP is not multithreaded). However, I'm having problems reaching 50 pages per second; as of now I average 25, with a medium load. The application is really light: it consists of a small read (<5KB) from a pool of SSDs and some read queries resolved by indexes. Where should I look to solve this bottleneck?
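
    The implicit arithmetic, for what it's worth: at 10 ms of PHP time per request, one worker can serve at most 1 / 0.010 = 100 requests per second, and n workers at most n x 100, but only if those 10 ms are the whole per-request cost. Any time spent outside the measured generation (accepting the connection, reading the request, waiting on the SSD pool) comes straight off that ceiling, which is one way a 100 req/s paper figure turns into 25 in practice.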

    Read the article

  • What is the preferred pattern when attaching a 'runtime object'?

    - by sebf
    In my application I have the following: public class NeatObject { /* lots of static data, and configuration flags */ } public class NeatObjectConsumer { void DoCleverStuffWithObjectOnGPU(NeatObject obj); } where NeatObject and its consumer are used to control the GPU. The idea is that the configuration of an instance of NeatObject and its members defines how the consumer instance behaves. The object can be passed around, edited, and most importantly serialised/deserialised by the application, with and without knowledge of NeatObjectConsumer, then provided back to the consumer to do something else. The purpose of this separation is twofold: the consumer manages hardware resources, which change depending on the computer, and even on the execution of the application, making it difficult to preserve the state of an object which does everything; and it avoids circular references if the assembly that contains the consumer needs to reference one that only needs to know about NeatObject. However, there is a complication in that the consumer creates hardware resources and needs to associate them with NeatObject. These don't need to be preserved, but they still need to be retrieved. DoCleverStuffWithObjectOnGPU() will be called many, many times during execution, so any bottleneck is a concern, and therefore I would like to avoid dictionary lookups. What is the preferred method of attaching this information to NeatObject? By preferred, I mean intuitive (other coders can see immediately what is going on) and robust (the method doesn't invite playing with the resources or present them in such a way as to make them easily corruptible). Essentially, I want to add my own metadata; how should I do it? Try to use 'actual metadata' functionality like reflection? A member of the type of an abstract class? Unmanaged pointers? If you took on a project that used this pattern, what would you have liked the previous developer to do?

    Read the article

  • Webserver - Memory-bound or CPU-bound? [closed]

    - by JJP
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Web Sites. I'm installing a social network using Zend Framework & MySQL, with lots of plugins & queries. I want the web server & SQL server on one box. I'm trying to choose between two machines (on hetzner.de): A) Intel i7-2600, 3.4 GHz, 16 GB DDR3 RAM; B) Intel i7-920, 2.6 GHz, 24 GB DDR3 RAM. B has 50% more RAM but a roughly 30% slower clock speed. The question is: is it obvious where the bottleneck will be? Would I ever need 24 GB of RAM, even with lots of concurrent users?

    Read the article

  • Effective optimization strategies on modern C++ compilers

    - by user168715
    I'm working on scientific code that is very performance-critical. An initial version of the code has been written and tested, and now, with profiler in hand, it's time to start shaving cycles from the hot spots. It's well-known that some optimizations, e.g. loop unrolling, are handled these days much more effectively by the compiler than by a programmer meddling by hand. Which techniques are still worthwhile? Obviously, I'll run everything I try through a profiler, but if there's conventional wisdom as to what tends to work and what doesn't, it would save me significant time. I know that optimization is very compiler- and architecture-dependent. I'm using Intel's C++ compiler targeting the Core 2 Duo, but I'm also interested in what works well for gcc, or for "any modern compiler." Here are some concrete ideas I'm considering: Is there any benefit to replacing STL containers/algorithms with hand-rolled ones? In particular, my program includes a very large priority queue (currently a std::priority_queue) whose manipulation is taking a lot of total time. Is this something worth looking into, or is the STL implementation already likely the fastest possible? Along similar lines, for std::vectors whose needed sizes are unknown but have a reasonably small upper bound, is it profitable to replace them with statically-allocated arrays? I've found that dynamic memory allocation is often a severe bottleneck, and that eliminating it can lead to significant speedups. As a consequence I'm interested in the performance tradeoffs of returning large temporary data structures by value vs. returning by pointer vs. passing the result in by reference. Is there a way to reliably determine whether or not the compiler will use RVO for a given method (assuming the caller doesn't need to modify the result, of course)? How cache-aware do compilers tend to be? For example, is it worth looking into reordering nested loops? Given the scientific nature of the program, floating-point numbers are used everywhere. A significant bottleneck in my code used to be conversions from floating point to integers: the compiler would emit code to save the current rounding mode, change it, perform the conversion, then restore the old rounding mode --- even though nothing in the program ever changed the rounding mode! Disabling this behavior significantly sped up my code. Are there any similar floating-point-related gotchas I should be aware of? One consequence of C++ being compiled and linked separately is that the compiler is unable to do what would seem to be very simple optimizations, such as moving calls like strlen() out of the termination condition of a loop. Are there any optimizations like this one that I should look out for because they can't be done by the compiler and must be done by hand? On the flip side, are there any techniques I should avoid because they are likely to interfere with the compiler's ability to automatically optimize code? Lastly, to nip certain kinds of answers in the bud: I understand that optimization has a cost in terms of complexity, reliability, and maintainability. For this particular application, increased performance is worth these costs. I understand that the best optimizations are often to improve the high-level algorithms, and this has already been done.

    Read the article

  • Using Javascript to call the Azure Blob Storage REST API

    - by user350829
    I'm developing a Flash app that saves files to the Azure Blob Storage API. I've learned that you should use the REST API directly rather than a go-between WCF service, as this is the most efficient approach (using a web role is a bottleneck). The problem is that Flash can't do PUT or DELETE methods over HTTP and has to use external JavaScript. This is not an area that I'm familiar with, and I need some advice/links to examples of using JavaScript to work with the Storage API (I've obviously Googled this to no avail). Is this even possible? The JavaScript would be hosted in a web role on the same domain. Many thanks, Ed

    Read the article

  • Enhance localization performance? (ComponentResourceManager.ApplyResources)

    - by Srodriguez
    Dear all, after experiencing some performance issues on my client side, we decided to try some of the performance profilers to find the bottleneck or identify the guilty parts of the code. Of course, as with many performance investigations, the problems come from various things, but something I found out is that the ComponentResourceManager.ApplyResources of my user controls takes way too much time in the construction of my forms: more than 24% of the construction time is spent in ApplyResources inside InitializeComponent(). This seems rather a lot for only "finding a resource string and putting it in its container". What exactly is done in ComponentResourceManager.ApplyResources? I guess more than just looking up the string, or it wouldn't take that long. Is there a way to improve the performance of localization? Our software is localized in several languages, so we do need to keep this multilingual feature. Any recommendations regarding this issue? Thanks! PS: We are coding in C#, .NET 3.5 SP1.

    Read the article

  • Android: Filtering a SimpleCursorAdapter ListView

    - by Diego Tori
    Right now, I'm running into issues trying to implement a FilterQueryProvider in my custom SimpleCursorAdapter, since I'm unsure of what to do in the FilterQueryProvider's runQuery function. In other words, since the query that populates my ListView basically gets the rowID, name, and a third column from my database's table, I want to be able to filter the cursor based on the partial value of the name column. However, I am uncertain whether I can do this directly from runQuery without expanding my DB class, since I want to filter the existing cursor, or whether I will have to create a new query function in my DB class that partially searches my name column; and if so, how would I go about building that query while using the CharSequence constraint argument in runQuery? I am also concerned about the performance issues associated with running multiple queries based on partial text, since the DB table in question has about 1300-1400 rows. In other words, would I run into a bottleneck trying to filter the cursor?
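
    A minimal sketch of the usual shape of runQuery, with the table name, column names and dbHelper as assumptions for illustration: rather than filtering the existing cursor, the provider runs a fresh LIKE query on the name column and returns the new cursor for the adapter to swap in.

        import android.database.Cursor;
        import android.database.sqlite.SQLiteDatabase;
        import android.database.sqlite.SQLiteOpenHelper;
        import android.widget.FilterQueryProvider;
        import android.widget.SimpleCursorAdapter;

        public class NameFilterSetup {
            // Hypothetical schema: a "names" table with _id, name and third_column.
            static void attachFilter(final SimpleCursorAdapter adapter, final SQLiteOpenHelper dbHelper) {
                adapter.setFilterQueryProvider(new FilterQueryProvider() {
                    @Override
                    public Cursor runQuery(CharSequence constraint) {
                        SQLiteDatabase db = dbHelper.getReadableDatabase();
                        String prefix = (constraint == null) ? "" : constraint.toString();
                        // Issue a fresh query instead of filtering the old cursor; the adapter
                        // replaces its cursor with whatever this returns.
                        return db.query("names",
                                new String[] { "_id", "name", "third_column" }, // _id is required by CursorAdapter
                                "name LIKE ?",
                                new String[] { prefix + "%" },                  // prefix match on the name column
                                null, null, "name ASC");
                    }
                });
            }
        }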

    Read the article

  • Robust and fast checksum algorithm?

    - by bene
    Which checksum algorithm can you recommend for the following use case? I want to generate checksums of small JPEG files (~8 kB each) to check whether the content has changed. Using the filesystem's date modified is unfortunately not an option. The checksum need not be cryptographically strong, but it should robustly indicate changes of any size. The second criterion is speed, since it should be possible to process at least hundreds of images per second (on a modern CPU). The calculation will be done on a server with several clients. The clients send the images over Gigabit TCP to the server, so there's no disk I/O as a bottleneck.
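
    For a sense of the trade-off, a throwaway comparison of the non-cryptographic checksums in java.util.zip (CRC-32, Adler-32) against MD5 on an 8 kB buffer might look like the sketch below; the timing loop is deliberately crude, so treat the numbers as indicative only.

        import java.security.MessageDigest;
        import java.util.Random;
        import java.util.zip.Adler32;
        import java.util.zip.CRC32;

        public class ChecksumBench {
            public static void main(String[] args) throws Exception {
                byte[] image = new byte[8 * 1024];   // stand-in for one ~8 kB JPEG
                new Random(42).nextBytes(image);
                int rounds = 100_000;

                long t0 = System.nanoTime();
                CRC32 crc = new CRC32();
                for (int i = 0; i < rounds; i++) { crc.reset(); crc.update(image); }
                long crcNs = System.nanoTime() - t0;

                t0 = System.nanoTime();
                Adler32 adler = new Adler32();
                for (int i = 0; i < rounds; i++) { adler.reset(); adler.update(image); }
                long adlerNs = System.nanoTime() - t0;

                t0 = System.nanoTime();
                MessageDigest md5 = MessageDigest.getInstance("MD5");
                for (int i = 0; i < rounds; i++) { md5.digest(image); } // digest() resets the instance
                long md5Ns = System.nanoTime() - t0;

                System.out.printf("CRC-32:  %d ms for %d x 8 kB%n", crcNs / 1_000_000, rounds);
                System.out.printf("Adler32: %d ms%n", adlerNs / 1_000_000);
                System.out.printf("MD5:     %d ms%n", md5Ns / 1_000_000);
            }
        }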

    Read the article

  • Ever any performance difference between Java >> and >>> right shift operators?

    - by Sean Owen
    Is there ever reason to think the >> (signed) and >>> (unsigned) right bit-shift operators in Java would perform differently? I can't detect any difference on my machine. This is purely an academic question; it's never going to be the bottleneck, I'm sure. I know: it's best to write what you mean foremost; use >> for division by 2, for example. I assume it comes down to which architectures have which operations implemented as an instruction.
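
    Both shifts typically compile down to a single machine instruction (for example sar versus shr on x86), which is consistent with seeing no difference. A throwaway loop like the sketch below can confirm it on a given JVM, though a proper harness such as JMH gives far more trustworthy numbers.

        public class ShiftBench {
            public static void main(String[] args) {
                int[] data = new int[1 << 20];
                java.util.Random r = new java.util.Random(1);
                for (int i = 0; i < data.length; i++) data[i] = r.nextInt();

                long sink = 0; // accumulate results so the JIT cannot discard the loops
                for (int warm = 0; warm < 10; warm++)          // warm-up before timing
                    for (int v : data) sink += (v >> 3) + (v >>> 3);

                long t0 = System.nanoTime();
                for (int pass = 0; pass < 100; pass++)
                    for (int v : data) sink += v >> 3;         // arithmetic (signed) shift
                long signedNs = System.nanoTime() - t0;

                t0 = System.nanoTime();
                for (int pass = 0; pass < 100; pass++)
                    for (int v : data) sink += v >>> 3;        // logical (unsigned) shift
                long unsignedNs = System.nanoTime() - t0;

                System.out.println(">>  : " + signedNs / 1_000_000 + " ms");
                System.out.println(">>> : " + unsignedNs / 1_000_000 + " ms");
                System.out.println("(checksum, ignore) " + sink);
            }
        }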

    Read the article

< Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >