Search Results

Search found 97855 results on 3915 pages for 'code performance'.

  • Preventing a heavy process from sinking in the swap file

    - by eran
    Our service tends to fall asleep during the night on our client's server, and then has a hard time waking up. What seems to happen is that the process heap, which is sometimes several hundred MB, is moved to the swap file. This happens at night, when our service is not used and others are scheduled to run (DB backups, AV scans etc). When this happens, after a few hours of inactivity the first call to the service takes up to a few minutes (subsequent calls take seconds). I'm quite certain it's an issue of virtual memory management, and I really hate the idea of forcing the OS to keep our service in physical memory. I know doing that will hurt other processes on the server and decrease the overall server throughput. That said, our clients just want our app to be responsive; they don't care if nightly jobs take longer. I vaguely remember there's a way to force Windows to keep pages in physical memory, but I really hate that idea. I'm leaning more towards some internal or external watchdog that will initiate higher-level functionality (there is already an internal scheduler that does very little, and it makes no difference). If there were a 3rd party tool that provided that kind of service, it would be just as good. I'd love to hear any comments, recommendations and common solutions to this kind of problem. The service is written in VC2005 and runs on Windows servers.
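    A minimal sketch of that watchdog idea, written here in Python and assuming the service exposes some cheap call that touches its main data structures (the HTTP endpoint below is hypothetical, not part of the original service):

        import time
        import urllib.request

        SERVICE_URL = "http://localhost:8080/keepalive"  # hypothetical endpoint
        INTERVAL_SECONDS = 300  # tune against the observed page-out behaviour

        def main():
            # Periodically exercise the service so its heap pages stay warm.
            # This only helps if the call really touches the large structures.
            while True:
                try:
                    urllib.request.urlopen(SERVICE_URL, timeout=60).read()
                except OSError:
                    pass  # service may be busy or restarting; retry next round
                time.sleep(INTERVAL_SECONDS)

        if __name__ == "__main__":
            main()

    The same loop could equally be a scheduled task that runs only during the nightly backup window, which is when the paging actually happens.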

  • .NET WebService IPC - Should it be done to minimise some expensive operations?

    - by Kyle
    I'm looking at a few different approaches to a problem: a client requests work, some stuff gets done, and a result (ok/error) is returned. A .NET web service definitely seems like the way to go; my only issue is that the "stuff" will involve building up and tearing down a session for each request. Does abstracting the "stuff" out to an app (which would keep a single session active and process requests from the web service) seem like the right way to go? (And if so, what communication method?) The work time is negligible; my concern is the hammering the transaction servers in question will probably get if I create/drop a session for each job. Is some form of IPC or socket-based communication a feasible solution here? Thoughts/comments/experiences much appreciated. Edit: After a bit more research, it seems like hosting a WCF service in a Windows Service is probably a better way to go...
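    The stack in question is .NET/WCF, but the shape of the single-long-lived-session idea is easy to sketch; the Python below is an illustration only, and open_expensive_session is a stand-in for whatever builds the costly transaction-server session:

        import queue
        import threading

        def open_expensive_session():
            # Stand-in for the costly session setup against the transaction server.
            class Session:
                def process(self, request):
                    return ("ok", request)
            return Session()

        jobs = queue.Queue()

        def worker():
            session = open_expensive_session()  # built once, reused for every job
            while True:
                request, reply = jobs.get()
                reply.put(session.process(request))

        threading.Thread(target=worker, daemon=True).start()

        def handle_web_request(request):
            # Called per web-service request; the session is never rebuilt.
            reply = queue.Queue(maxsize=1)
            jobs.put((request, reply))
            return reply.get(timeout=30)

    In the WCF version the same effect comes from hosting the service in a long-running Windows Service so the session outlives individual requests, which is where the poster's edit points.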

  • One bidirectional TCP socket or two unidirectional ones? (Linux, high volume, low latency)

    - by osgx
    Hello. I need to interchange a high volume of data periodically, with the lowest possible latency, between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Is it faster to use one TCP socket (for both send and recv) or two unidirectional TCP sockets? The test for this task is very much like the NetPIPE network benchmark: measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received at least 3 times (in the real task the number of sends is greater; both processes will be sending and receiving, like ping-pong, maybe). The benefit of 2 unidirectional connections comes from the Linux kernel: http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847

        /*
         * TCP receive function for the ESTABLISHED state.
         *
         * It is split into a fast path and a slow path. The fast path is
         * disabled when:
         * ...
         * - Data is sent in both directions. Fast path only supports pure senders
         *   or pure receivers (this means either the sequence number or the ack
         *   value must stay constant)
         * ...
         *
         * When these conditions are not satisfied it drops into a standard
         * receive procedure patterned after RFC793 to handle all cases.
         * The first three cases are guaranteed by proper pred_flags setting,
         * the rest is checked inline. Fast processing is turned on in
         * tcp_data_queue when everything is OK.
         */

    All the other conditions for disabling the fast path are false in our case; only a socket that is not unidirectional keeps the kernel off the receive fast path.
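    A minimal sketch of that ping-pong measurement in Python, for the single bidirectional-socket case (the two-socket variant would open one connection per direction); the host, port and Python 3.8+ are assumptions here:

        import socket
        import time

        HOST, PORT = "127.0.0.1", 9000          # placeholder address
        SIZES = [2 ** n for n in range(1, 14)]  # 2 .. 8192 bytes, as in NetPIPE
        REPS = 3

        def recv_exact(sock, n):
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed")
                buf += chunk
            return buf

        def pong(sock):
            # Passive side: echo every message back on the same socket.
            for size in SIZES:
                for _ in range(REPS):
                    sock.sendall(recv_exact(sock, size))

        def ping(sock):
            # Active side: send, wait for the echo, record the round trip.
            for size in SIZES:
                for _ in range(REPS):
                    t0 = time.perf_counter()
                    sock.sendall(b"x" * size)
                    recv_exact(sock, size)
                    rtt = time.perf_counter() - t0
                    print(f"{size:5d} bytes  rtt {rtt * 1e6:9.1f} us")

        if __name__ == "__main__":
            import sys
            if sys.argv[1:] == ["serve"]:
                conn, _ = socket.create_server((HOST, PORT)).accept()
                conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
                pong(conn)
            else:
                conn = socket.create_connection((HOST, PORT))
                conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
                ping(conn)

    TCP_NODELAY matters for a test like this: with Nagle enabled, the small ping-pong writes would be buffered and the numbers would measure Nagle's delays rather than the path.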

  • Django: Update order attribute for objects in a queryset

    - by lazerscience
    I have an attribute on my model that allows the user to order the objects. I have to update the elements' order based on a list that contains the objects' ids in the new order; right now I'm iterating over the whole queryset and saving one object after the other. What would be the easiest/fastest way to do the same with the whole queryset?

        def update_ordering(model, order):
            """order is in the form [id, id, id, id], for example: [8, 4, 5, 1, 3]"""
            id_to_order = dict((order[i], i) for i in range(len(order)))
            for x in model.objects.all():
                x.order = id_to_order[x.id]
                x.save()
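    For comparison, a hedged sketch of the same update with fewer queries. bulk_update only exists in Django 2.2+, so that is an assumption about the Django version; the second variant at least batches all the saves into a single transaction:

        from django.db import transaction

        def update_ordering_bulk(model, order):
            """order is a list of ids in the new order, e.g. [8, 4, 5, 1, 3]."""
            id_to_order = {obj_id: pos for pos, obj_id in enumerate(order)}
            objs = list(model.objects.filter(id__in=order))
            for obj in objs:
                obj.order = id_to_order[obj.id]
            model.objects.bulk_update(objs, ["order"])  # one batched round trip

        def update_ordering_tx(model, order):
            # Pre-2.2 fallback: still N saves, but one commit instead of N.
            id_to_order = {obj_id: pos for pos, obj_id in enumerate(order)}
            with transaction.atomic():
                for obj in model.objects.filter(id__in=order):
                    obj.order = id_to_order[obj.id]
                    obj.save()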

  • Integer division in PHP

    - by oezi
    Hi guys, I'm looking for the fastest way to do an integer division in PHP. For example, 5 / 2 should be 2 | 6 / 2 should be 3, and so on. If I simply do this, PHP will return 2.5 in the first case. The only solution I could find was using intval($my_number/2), which isn't as fast as I want it to be (but gives the expected results). Can anyone help me out with this?

  • XSLT 1.0: restrict entries in a nodeset

    - by Mike
    Hi, being relatively new to XSLT, I have what I hope is a simple question. I have some flat XML files, which can be pretty big (e.g. 7MB), that I need to make 'more hierarchical'. For example, the flat XML might look like this: <D0011> .... .... and it should end up looking like this: <D0011> .... .... I have a working XSLT for this, and it essentially gets a nodeset of all the b elements and then uses the 'following-sibling' axis to get a nodeset of the nodes following the current b node (i.e. following-sibling::*[position() = $nodePos]). Then recursion is used to add the siblings into the result tree until another b element is found (I have parameterised it, of course, to make it more generic). I also have a solution that just sends the position in the XML of the next b node and selects the nodes after that one, one after the other (using recursion), via a *[position() = $nodePos] selection. The problem is that the time to execute the transformation increases unacceptably with the size of the XML file. Looking into it with XML Spy, it seems that it is the 'following-sibling' and 'position()=' parts that take the time in the two respective methods. What I really need is a way of restricting the number of nodes in the above selections, so fewer comparisons are performed: every time the position is tested, every node in the nodeset is tested to see if its position is the right one. Is there a way to do that? Any other suggestions? Thanks, Mike
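    What the recursive sibling-walk is fighting for is a single linear pass: each b element starts a group and every following sibling lands in the current group. For intuition only (the real transform is XSLT 1.0, where an xsl:key-based lookup typically restores this linear behaviour), here is that pass sketched in Python:

        # Build the hierarchy in one pass over the flat children: each header
        # ('b' element) opens a new group; other nodes join the current group.
        def group_flat(children, is_header):
            groups = []
            current = None
            for node in children:
                if is_header(node):
                    current = {"header": node, "items": []}
                    groups.append(current)
                elif current is not None:
                    current["items"].append(node)
            return groups

        flat = ["b1", "x", "y", "b2", "z"]
        print(group_flat(flat, lambda n: n.startswith("b")))
        # [{'header': 'b1', 'items': ['x', 'y']}, {'header': 'b2', 'items': ['z']}]

    The position()-based variants re-scan the whole sibling set on every test, which is what turns this linear job quadratic.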

  • Database structure for ecommerce site

    - by imanc
    Hey guys, I have been tasked with designing an ecommerce solution. The aspect that is causing me the most problems is the database. Currently the site consists of 10+ country-based shops, each with its own database (all residing on the same MySQL instance). For the new site I'd rather all these shop databases were merged into one database, so that all tables (products, orders, customers etc.) have a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases. Currently the entire site generates about 120k orders a year, but it is experiencing fairly heavy growth and we need to design a solution that will scale. In 5 years there may be more than a million orders per year and a database that contains 5 years of order history (archiving may be a solution here). The question is: do we use a single database, or do we keep the database-per-shop structure? I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop database structure, because they believe it will allow the sites to scale. But my argument is that the shops' databases probably won't get so busy over the next few years that they exceed the capacity of a MySQL database on a "no expenses spared" hardware set-up. I am wondering if anyone has any advice either way? Does anyone have experience with websites/ecommerce sites that have tables containing millions of records? I know there is probably not a clear answer here, but at what stage do we have too many records or too-large table files to have a fast-loading site? Also, if anyone has any advice on sources of information - books, websites, etc. - where I can do further research, it would be highly appreciated! Cheers, imanc

  • JMeter HTTP Cache Manager: unable to cache everything that the browser caches

    - by chinmay brahma
    I used the HTTP Cache Manager to cache the files that are cached by the browser, and I succeeded for some of the pages: the number of files cached in JMeter equals the number of files cached by the browser. But in some cases I found that fewer files are cached: using JMeter I found only 5 files being cached, but the real browser caches 12 files. Thanks in advance

  • Creating a C++ client app for an abstract Windows server - how to manage the TCP connection speed to the server?

    - by Kabumbus
    So we have a server with some address, port and IP, and we are developing that server, so we can implement on it whatever we need to help. What are standard/best practices for data transfer speed management between a C++ Windows client app and the server (also C++)? My main question is how to find out how much data can be uploaded/downloaded from/to the client over his low-speed network to my relatively super fast server (I need it to set the bitrate of his live audio/video stream). My attempt at explaining point 3: we do not care how fast our server is; it is always faster than needed. We care about the client trying to stream his media out to our server. He streams encoded (via ffmpeg) live video data to our server, but he has, say, ADSL with 500 kb/s of outgoing traffic. He also uses ICQ or whatever, so he has less than 500 kb/s available. And he wants to stream live video! So we need to set up our ffmpeg to encode video with respect to the bitrate the user can actually provide. We develop both the server side and the client side. We need a way of finding out how much the user can upload per second at the moment (so the value can change dynamically over time).
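    A rough sketch of such a probe in Python, under stated assumptions: the server exposes a sink that reads a fixed number of bytes and answers with a 1-byte ack (the address and the framing are both made up for illustration):

        import socket
        import time

        PROBE_HOST, PROBE_PORT = "stream.example.com", 9100  # hypothetical endpoint
        PROBE_BYTES = 256 * 1024  # big enough to get past TCP slow start

        def estimate_upload_kbps():
            # Push PROBE_BYTES to the server, wait for its one-byte ack, and
            # derive an effective upload rate to feed the encoder bitrate.
            with socket.create_connection((PROBE_HOST, PROBE_PORT), timeout=30) as sock:
                t0 = time.perf_counter()
                sock.sendall(b"\0" * PROBE_BYTES)
                sock.recv(1)  # server acks once it has consumed everything
                elapsed = time.perf_counter() - t0
            return PROBE_BYTES * 8 / 1000 / elapsed

        # Re-run periodically and smooth the samples: the available rate keeps
        # changing as the client's other applications use the link.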

  • C#/WPF FileSystemWatcher on every extension on every path

    - by BlueMan
    I need a FileSystemWatcher that can observe several specific paths and specific extensions. But there could be dozens, hundreds or maybe thousands of paths (hope not :P), and the same goes for extensions. The paths and extensions are added by the user. Creating hundreds of FileSystemWatchers is not a good idea, is it? So - how to do it? Is it possible to watch/observe every device (HDDs, SD flash, pendrives, etc.)? Would it be efficient? I don't think so... Every change to a Windows log file, every file scanned by an antivirus program - it could really slow down my program with FileSystemWatcher :(

  • Advice/guidance on how to form this jQuery script for a show/hide fade element

    - by Rick
    Hey guys.. I basically have several links on the left side of the screen and a preview window on the right. Below the preview window is another box for the affiliate link code. What I am trying to do is create an affiliate page where you choose the banner size on the left by clicking its link, and on the right you see the preview dynamically change to that banner size, with the code changing accordingly as well. So far I have the following code and it works, but it seems very cumbersome and bloated. Can you see if I can trim this down?

        jQuery(".banner-style li").click(function() {
            jQuery(".banner-style li").removeClass("selected");
            jQuery(this).addClass("selected");
            var $banner = jQuery(this).attr("class");
            $banner = $banner.replace(" selected", "");
            jQuery(".preview img").fadeOut('fast', function() {
                jQuery(".preview img")
                    .attr("src", "http://localhost/site/banners/" + $banner + ".jpg")
                    .fadeIn('slow');
            });
            jQuery(".code p").removeClass('hide').hide();
            jQuery(".code p." + $banner).show();
        });

    Also of note: funnily, in FF, when you click any link for the first time, the original image on the right fades out and back in really quickly, and only then loads the "clicked" image. This does not happen in other browsers...

  • How to increase query speed without using full-text search?

    - by andre matos
    This is my simple query; by searching for "selectnothing" I'm sure I'll have no hits.

        SELECT nome_t FROM myTable WHERE nome_t ILIKE '%selectnothing%';

    This is the EXPLAIN ANALYZE VERBOSE output:

        Seq Scan on myTable  (cost=0.00..15259.04 rows=37 width=29) (actual time=2153.061..2153.061 rows=0 loops=1)
          Output: nome_t
          Filter: (nome_t ~~* '%selectnothing%'::text)
        Total runtime: 2153.116 ms

    myTable has around 350k rows and the table definition is something like:

        CREATE TABLE myTable (
            nome_t text NOT NULL
        );

    I have an index on nome_t as stated below:

        CREATE INDEX idx_m_nome_t ON myTable USING btree (nome_t);

    Although this is clearly a good candidate for full-text search, I would like to rule that option out for now. This query is meant to be run from a web application and it currently takes around 2 seconds, which is obviously too much. Is there anything I can do, like using other index methods, to improve the speed of this query?

  • Good Open Source Code to learn Web Programming

    - by Prabu
    Hi, can someone point me to some good open source code for learning web programming (language doesn't matter)? I'm looking for the source code of web applications, not frameworks. I'm not a beginner; I can code to some extent. I want to know how things are done in real-world applications.

  • Reasonably faster way to traverse a directory tree in Python?

    - by Sridhar Ratnakumar
    Assuming that the given directory tree is of reasonable size - say an open source project like Twisted or Python - what is the fastest way to traverse and iterate over the absolute path of all files/directories inside that directory? I want to do this from within Python (subprocess is allowed). os.path.walk is slow, so I tried ls -lR and tree -fi. For a project with about 8337 files (including tmp, pyc, test and .svn files):

        $ time tree -fi > /dev/null
        real    0m0.170s
        user    0m0.044s
        sys     0m0.123s

        $ time ls -lR > /dev/null
        real    0m0.292s
        user    0m0.138s
        sys     0m0.152s

        $ time find . > /dev/null
        real    0m0.074s
        user    0m0.017s
        sys     0m0.056s

    tree appears to be faster than ls -lR (though ls -R is faster than tree, it does not give full paths). find is the fastest. Can anyone think of a faster and/or better approach? On Windows, I may simply ship a 32-bit binary tree.exe or ls.exe if necessary. Update 1: Added find.
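    The question predates it, but for reference: a pure-Python traversal sketch assuming Python 3.6+, where os.scandir avoids most of the per-file stat() calls that make os.path.walk slow (on most filesystems the directory entry already carries the file type):

        import os

        def iter_paths(root):
            # Yield absolute paths of everything under root. os.scandir reuses
            # directory entry data instead of stat()ing every single name.
            stack = [os.path.abspath(root)]
            while stack:
                path = stack.pop()
                yield path
                try:
                    with os.scandir(path) as it:
                        for entry in it:
                            if entry.is_dir(follow_symlinks=False):
                                stack.append(entry.path)
                            else:
                                yield entry.path
                except (NotADirectoryError, PermissionError):
                    pass

        if __name__ == "__main__":
            import time
            t0 = time.perf_counter()
            n = sum(1 for _ in iter_paths("."))
            print(n, "paths in", time.perf_counter() - t0, "seconds")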

  • Spring web application: internal visual monitoring of heap space and permgen

    - by Veniamin
    If I have created a web application using the Spring framework (built with Maven), can I include some module/dependency/plugin in my app to monitor heap space, permgen, etc.? It would be great to get some chart output like in VisualVM, for example having http://localhost:8080/monitoring include something like VisualVM. I have found the following dependency, without a link to a repo:

        <dependency>
            <groupId>com.sun.tools.visualvm</groupId>
            <artifactId>core</artifactId>
            <version>1.3.3</version>
        </dependency>

    How do I use it?

  • How to find the worst performing queries in MS SQL Server 2008?

    - by Thomas Bratt
    How to find the worst performing queries in MS SQL Server 2008? I found the following example but it does not seem to work:

        SELECT TOP 5
            obj.name, max_logical_reads, max_elapsed_time
        FROM sys.dm_exec_query_stats a
        CROSS APPLY sys.dm_exec_sql_text(sql_handle) hnd
        INNER JOIN sys.sysobjects obj ON hnd.objectid = obj.id
        ORDER BY max_logical_reads DESC

    Taken from: http://www.sqlservercurry.com/2010/03/top-5-costly-stored-procedures-in-sql.html

  • SQL Server uncorrelated subquery very slow

    - by brianberns
    I have a simple, uncorrelated subquery that performs very poorly on SQL Server. I'm not very experienced at reading execution plans, but it looks like the inner query is being executed once for every row in the outer query, even though the results are the same each time. What can I do to tell SQL Server to execute the inner query only once? The query looks like this:

        SELECT *
        FROM Record record0_
        WHERE record0_.RecordTypeFK = 'c2a0ffa5-d23b-11db-9ea3-000e7f30d6a2'
            AND record0_.EntityFK IN (
                SELECT record1_.EntityFK
                FROM Record record1_
                JOIN RecordTextValue textvalues2_
                    ON record1_.PK = textvalues2_.RecordFK
                    AND textvalues2_.FieldFK = '0d323c22-0ec2-11e0-a148-0018f3dde540'
                    AND (textvalues2_.Value LIKE 'O%' ESCAPE '~')
            )

  • Tracking down slow managed DLL loading

    - by Alex K
    I am faced with the following issue, and at this point I feel like I'm severely lacking some sort of tool; I just don't know what that tool is, or what exactly it should be doing. Here is the setup: I have a 3rd party DLL that has to be registered in the GAC. This all works fine and well on pretty much every machine our software was deployed on before. But now we have got 2 machines, seemingly identical to the ones we know work (they are cloned from the same image and stuffed with the same hardware, so pretty much the only difference is software settings, which I have gone over and over, and they seem fine). Now the problem: the DLL in the GAC takes a very long time to load. At least I believe this is the issue; what I can say definitively is that instantiating a single class from that DLL is the slow part. Once it is loaded, things fly as they always have. But while on known-good machines the DLL loads so fast that a timestamp in the log doesn't even change, on these 2 machines it takes over 1 minute to load. Knowns: I have no access to the source, so I can't debug through the DLL. Our app is the only one that uses it (so there shouldn't be simultaneous access issues). There is only one version of this DLL in existence, so it shouldn't be a matter of version conflict. The GAC reference is being used (if I uninstall the DLL from the GAC, an exception is thrown about the missing GAC reference). Could someone with greater skill in debug-fu suggest what I can do to track down the root cause of this issue?

  • What's the good of the IDE's auto-generated @Override annotation?

    - by Tony
    I am using Eclipse. When I use the shortcut to generate override implementations, there is an @Override annotation on top of each method. I am using JDK 6, where this is all right, but under JDK 5 this annotation can cause an error, so I want to ask: is this annotation completely useless? Will the compiler do some kind of optimization using this annotation?
