Search Results

Search found 20904 results on 837 pages for 'disk performance'.

Page 201/837 | < Previous Page | 197 198 199 200 201 202 203 204 205 206 207 208  | Next Page >

  • How to test my GAE site for performance

    - by Sergey Basharov
    I am building a GAE site that uses AJAX/JSON for almost all of its tasks, including building the UI elements, all interactions, and client-server requests. What is a good way to load-test it so that I can get statistics on how many resources 1000 average users would consume over a given period of time? I think I can write some Python functions for this purpose. What can you advise? Thanks.
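
    A common starting point is a small script that fires concurrent requests at the AJAX/JSON endpoints and records latencies; dedicated tools like Apache JMeter or ab do this off the shelf, but a minimal Python sketch (the URL and the user/request counts below are made-up assumptions) looks like:

        import time
        from concurrent.futures import ThreadPoolExecutor
        from urllib.request import urlopen

        URL = "https://your-app.appspot.com/api/data"  # hypothetical endpoint

        def one_request(_):
            """Time a single GET against the endpoint."""
            start = time.time()
            urlopen(URL).read()
            return time.time() - start

        # ~50 concurrent "users" issuing 1000 requests in total
        with ThreadPoolExecutor(max_workers=50) as pool:
            latencies = sorted(pool.map(one_request, range(1000)))

        print("median %.3fs, 95th percentile %.3fs"
              % (latencies[500], latencies[950]))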

    Read the article

  • How to avoid sub-queries to gain performance

    - by chun
    Hi, I have a reporting query which has 2 long sub-queries:

        SELECT r1.code_centre, r1.libelle_centre, r1.id_equipe, r1.equipe,
               r1.id_file_attente, r1.libelle_file_attente, r1.id_date, r1.tranche,
               r1.id_granularite_de_periode, r1.granularite,
               r1.ContactsTraites, r1.ContactsenParcage, r1.ContactsenComm,
               r1.DureeTraitementContacts, r1.DureeComm, r1.DureeParcage,
               r2.AgentsConnectes, r2.DureeConnexion, r2.DureeTraitementAgents,
               r2.DureePostTraitement
        FROM (
            SELECT cc.id_centre_contact, cc.code_centre, cc.libelle_centre,
                   a.id_equipe, a.equipe, a.id_file_attente, f.libelle_file_attente,
                   a.id_date, g.tranche, g.id_granularite_de_periode, g.granularite,
                   sum(Nb_Contacts_Traites) as ContactsTraites,
                   sum(Nb_Contacts_en_Parcage) as ContactsenParcage,
                   sum(Nb_Contacts_en_Communication) as ContactsenComm,
                   sum(Duree_Traitement/1000) as DureeTraitementContacts,
                   sum(Duree_Communication/1000 + Duree_Conference/1000
                       + Duree_Com_Interagent/1000) as DureeComm,
                   sum(Duree_Parcage/1000) as DureeParcage
            FROM agr_synthese_activite_media_fa_agent a, centre_contact cc,
                 direction_contact dc, granularite_de_periode g, media m,
                 file_attente f
            WHERE m.id_media = a.id_media
              AND cc.id_centre_contact = a.id_centre_contact
              AND a.id_direction_contact = dc.id_direction_contact
              AND dc.direction_contact = 'INCOMING'
              AND a.id_file_attente = f.id_file_attente
              AND m.media = 'PHONE'
              AND ( (g.valeur_min = date_format(a.id_date,'%d/%m') AND g.granularite = 'Jour')
                 OR (g.granularite = 'Heure' AND a.id_th_heure = g.id_granularite_de_periode) )
            GROUP BY cc.id_centre_contact, a.id_equipe, a.id_file_attente,
                     a.id_date, g.tranche, g.id_granularite_de_periode
        ) r1,
        (
            (SELECT cc.id_centre_contact, cc.code_centre, cc.libelle_centre,
                    a.id_equipe, a.equipe, a.id_date, g.tranche,
                    g.id_granularite_de_periode, g.granularite,
                    count(distinct a.id_agent) as AgentsConnectes,
                    sum(Duree_Connexion/1000) as DureeConnexion,
                    sum(Duree_en_Traitement/1000) as DureeTraitementAgents,
                    sum(Duree_en_PostTraitement/1000) as DureePostTraitement
             FROM activite_agent a, centre_contact cc, granularite_de_periode g
             WHERE (g.valeur_min = date_format(a.id_date,'%d/%m') AND g.granularite = 'Jour')
               AND cc.id_centre_contact = a.id_centre_contact
             GROUP BY cc.id_centre_contact, a.id_equipe, a.id_date,
                      g.tranche, g.id_granularite_de_periode)
            UNION
            (SELECT cc.id_centre_contact, cc.code_centre, cc.libelle_centre,
                    a.id_equipe, a.equipe, a.id_date, g.tranche,
                    g.id_granularite_de_periode, g.granularite,
                    count(distinct a.id_agent) as AgentsConnectes,
                    sum(Duree_Connexion/1000) as DureeConnexion,
                    sum(Duree_en_Traitement/1000) as DureeTraitementAgents,
                    sum(Duree_en_PostTraitement/1000) as DureePostTraitement
             FROM activite_agent a, centre_contact cc, granularite_de_periode g
             WHERE (g.granularite = 'Heure' AND a.id_th_heure = g.id_granularite_de_periode)
               AND cc.id_centre_contact = a.id_centre_contact
             GROUP BY cc.id_centre_contact, a.id_equipe, a.id_date,
                      g.tranche, g.id_granularite_de_periode)
        ) r2
        WHERE r1.id_centre_contact = r2.id_centre_contact
          AND r1.id_equipe = r2.id_equipe
          AND r1.id_date = r2.id_date
          AND r1.tranche = r2.tranche
          AND r1.id_granularite_de_periode = r2.id_granularite_de_periode
        GROUP BY r1.id_centre_contact, r1.id_equipe, r1.id_file_attente,
                 r1.id_date, r1.tranche, r1.id_granularite_de_periode
        ORDER BY r1.code_centre, r1.libelle_centre, r1.equipe,
                 r1.libelle_file_attente, r1.id_date,
                 r1.id_granularite_de_periode, r1.tranche

    The EXPLAIN shows:

        | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
        1, PRIMARY, <derived3>, ALL, NULL, NULL, NULL, NULL, 2520, Using temporary; Using filesort
        1, PRIMARY, <derived2>, ALL, NULL, NULL, NULL, NULL, 4378, Using where; Using join buffer
        3, DERIVED, a, ALL, fk_Activite_Agent_centre_contact, NULL, NULL, NULL, 83433, Using temporary; Using filesort
        3, DERIVED, g, ref, Index_granularite,Index_Valeur_min, Index_Valeur_min, 23, func, 1, Using where
        3, DERIVED, cc, ALL, PRIMARY, NULL, NULL, NULL, 6, Using where; Using join buffer
        4, UNION, g, ref, PRIMARY,Index_granularite, Index_granularite, 23, '', 24, Using where; Using temporary; Using filesort
        4, UNION, a, ref, fk_Activite_Agent_centre_contact,fk_activite_agent_TH_heure, fk_activite_agent_TH_heure, 5, reporting_acd.g.Id_Granularite_de_periode, 2979, Using where
        4, UNION, cc, ALL, PRIMARY, NULL, NULL, NULL, 6, Using where; Using join buffer
        NULL, UNION RESULT, <union3,4>, ALL, NULL, NULL, NULL, NULL, NULL, ''
        2, DERIVED, g, range, PRIMARY,Index_granularite,Index_Valeur_min, Index_granularite, 23, NULL, 389, Using where; Using temporary; Using filesort
        2, DERIVED, a, ALL, fk_agr_synthese_activite_media_fa_agent_centre_contact,fk_agr_synthese_activite_media_fa_agent_direction_contact,fk_agr_synthese_activite_media_fa_agent_file_attente,fk_agr_synthese_activite_media_fa_agent_media,fk_agr_synthese_activite_media_fa_agent_th_heure, NULL, NULL, NULL, 20903, Using where; Using join buffer
        2, DERIVED, cc, eq_ref, PRIMARY, PRIMARY, 4, reporting_acd.a.Id_Centre_Contact, 1, ''
        2, DERIVED, f, eq_ref, PRIMARY, PRIMARY, 4, reporting_acd.a.Id_File_Attente, 1, ''
        2, DERIVED, dc, eq_ref, PRIMARY, PRIMARY, 4, reporting_acd.a.Id_Direction_Contact, 1, Using where
        2, DERIVED, m, eq_ref, PRIMARY, PRIMARY, 4, reporting_acd.a.Id_Media, 1, Using where

    I don't understand it very clearly, but I think the problem is that it does full scans. I changed all the sub-queries to views (CREATE VIEW ... AS SELECT sub-query), and the result was the same. Thanks for any advice.

    Read the article

  • Efficient file buffering & scanning methods for large files in Python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: What is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bound N, e.g. 36) while throwing out newline characters?

    I am writing a module which parses files in the FASTA ASCII-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module.

    Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8:

        import cStringIO
        example_file = cStringIO.StringIO("""\
        header
        CAGTcag
        TFgcACF
        """)
        for read in parse(example_file):
            print read

    which prints:

        CAGTCAGTF
        AGTCAGTFG
        GTCAGTFGC
        TCAGTFGCA
        CAGTFGCAC
        AGTFGCACF

    The function that I found had the absolute best performance from the methods I could think of is this:

        def parse(file):
            size = 8  # of course in my code this is a function argument
            file.readline()  # skip past the header
            buffer = ''
            for line in file:
                buffer += line.rstrip().upper()
                while len(buffer) >= size:
                    yield buffer[:size]
                    buffer = buffer[1:]

    This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it, as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks!

    Note: this time includes a lot of extra calculation, such as computing the opposing-strand read and doing hashtable lookups on a hash of approximately 5G in size.

    Post-answer conclusion: It turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!
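
    Following the post-answer conclusion above, a minimal sketch of the read-everything-then-scan variant (the function name and default window size are illustrative, not from the original code):

        def parse_whole(path, size=36):
            """Read the whole FASTA file, strip the header and newlines,
            then yield every overlapping substring of length `size`."""
            with open(path) as f:
                f.readline()  # skip past the FASTA header line
                seq = f.read().replace('\n', '').upper()
            for i in range(len(seq) - size + 1):  # one window per start offset
                yield seq[i:i + size]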

    Read the article

  • Performance implications of finalizers on JVM

    - by Alexey Romanov
    According to this post, in .NET:

        Finalizers are actually even worse than that. Besides the fact that they run late (which is indeed a serious problem for many kinds of resources), they are also less powerful because they can only perform a subset of the operations allowed in a destructor (e.g., a finalizer cannot reliably use other objects, whereas a destructor can), and even when writing in that subset, finalizers are extremely difficult to write correctly. And collecting finalizable objects is expensive: each finalizable object, and the potentially huge graph of objects reachable from it, is promoted to the next GC generation, which makes it more expensive to collect by some large multiple.

    Does this also apply to JVMs in general, and to HotSpot in particular?

    Read the article

  • Performance improvement of client-server system

    - by Tanuj
    I have a legacy client-server system where the server maintains a record of some data stored in a SQLite database. The data is related to monitoring access patterns of files stored on the server. The client application is basically a remote viewer of the data. When the client is launched, it connects to the server and gets the data from the server to display in a grid view. The data gets updated in real time on the server, and the view in the client automatically gets refreshed. There are two problems with the current implementation:

    1) When the database gets too big, it takes a lot of time to load the client. What are the best ways to deal with this? One option is to maintain a cache on the client side. How is a cache best implemented?
    2) How can the server maintain a diff so that it only sends the diff during the refresh cycle? There can be multiple clients, and each client needs to display the latest data available on the server.

    The server is a Windows service daemon. Both the client and the server are implemented in C#.
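
    For problem 2, a common scheme is a monotonically increasing version column that the server bumps on every update, so each refresh only ships rows the client hasn't seen yet; a minimal sketch in Python against SQLite (the table and column names are hypothetical):

        import sqlite3

        def fetch_changes(conn, last_seen):
            """Return rows changed since the client's last refresh,
            plus the new high-water mark the client should remember."""
            rows = conn.execute(
                "SELECT id, path, access_count, version "
                "FROM file_stats WHERE version > ?", (last_seen,)).fetchall()
            new_mark = max((r[3] for r in rows), default=last_seen)
            return rows, new_mark

    The same idea also helps with problem 1: the client-side cache is just the last full result set plus the high-water mark, patched with each diff.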

    Read the article

  • Oracle performance report

    - by John
    Hi, is there any way of running $ORACLE_HOME/rdbms/admin/awrrpt.sql so that it doesn't require any input parameters, i.e. so it automatically collects a report for the previous hour? /j

    Read the article

  • UITableView performance degrading after adding subviews to cells (iPhone SDK)

    - by neha
    Hi all, in my application I'm adding a label and two buttons to each cell of a UITableView [I have not created a separate cell class]. I'm adding these to the cell itself, not to cell.contentView. After I run the application on an iPhone, scrolling is very jerky when I move the cells up and down. Is it because I'm not adding the views to cell.contentView, or because I've not created a custom class for the cell? Can anybody please help? Thanks in advance.

    Read the article

  • How large is the performance loss for a 64-bit VirtualBox guest running on a 32-bit host?

    - by IllvilJa
    I have a 64-bit VirtualBox guest running Gentoo Linux (amd64), currently hosted on a 32-bit Gentoo laptop. I've noticed that the performance of the VM is very slow compared to the performance of the 32-bit host itself. Also, when I compare it with another 32-bit Linux VM running on the same host, performance is significantly worse on the 64-bit VM. I know that running a 64-bit VM on a 32-bit host incurs some performance penalty, but does anyone have any deeper knowledge of how large a penalty one might expect in this scenario, roughly speaking? Is a 10% slowdown something to expect, or should it be a slowdown in the 90% range (running at 1/10 the normal speed)? To phrase it another way: would it be reasonable to expect the performance improvement for the 64-bit VM to be so large that it is worth reinstalling the host machine to run 64-bit Gentoo instead? I'm currently seriously considering that upgrade, but am curious about other people's experience of this scenario. I am aware that the host OS will require more RAM when running in 64-bit, but that's OK for me. Also, I do know that one usually doesn't run a 64-bit VM on a 32-bit host (I'm surprised I even got the VM started in the first place), but things turned out that way when I tried to future-proof the VM I was setting up and decided to make it 64-bit anyway.

    Read the article

  • Dynamically generating high performance functions in Clojure

    - by mikera
    I'm trying to use Clojure to dynamically generate functions that can be applied to large volumes of data - i.e. a requirement is that the functions be compiled to bytecode in order to execute fast, but their specification is not known until run time. For example, suppose I specify functions with a simple DSL like:

        (def my-spec [:add [:multiply 2 :param0] 3])

    I would like to create a function compile-spec such that:

        (compile-spec my-spec)

    would return a compiled function of one parameter x that returns 2x+3. What is the best way to do this in Clojure?
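
    The question itself is about Clojure (where evaluating a generated (fn [x] ...) form is the usual route to real bytecode), but the spec-to-closure idea is easy to show in a rough Python analogue; everything below is illustrative only, not the Clojure answer:

        import operator

        OPS = {":add": operator.add, ":multiply": operator.mul}

        def compile_spec(spec):
            """Recursively turn a nested spec into a one-argument closure."""
            if isinstance(spec, list):
                op = OPS[spec[0]]
                args = [compile_spec(s) for s in spec[1:]]
                return lambda x: op(*(a(x) for a in args))
            if spec == ":param0":
                return lambda x: x   # the function's single parameter
            return lambda x: spec    # a literal constant

        f = compile_spec([":add", [":multiply", 2, ":param0"], 3])
        print(f(5))  # 2*5 + 3 == 13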

    Read the article

  • glibc regexp performance

    - by Jack
    Does anyone have experience measuring the glibc regexp functions? Are there any generic tests I need to run to make such measurements (in addition to testing the exact patterns I intend to search for)? Thanks.

    Read the article

  • Ubuntu doesn't mount one of my NTFS disks

    - by Jader Dias
    - There is a mountable /dev/sda, NTFS-formatted (the Windows disk).
    - There is no /dev/sdb when I ls /dev (the NTFS data disk).
    - There is a /dev/sdc, which is another disk of the same model (the Ubuntu disk).
    - I can see that Ubuntu detected the unmountable disk in the Disk Utility.
    - The Disk Utility incorrectly states that it is unpartitioned and a RAID volume (it was previously a RAID0 setup with /dev/sdc, but now it is a simple volume, no RAID whatsoever).
    - When I boot Windows 7, it uses the unmountable disk without a glitch.
    - The problem happens in both IDE and AHCI modes.
    - Ubuntu 10.04 Lucid Lynx.

    Read the article

  • Improve performance writing 10 million records to a text file using a Windows service

    - by user1039583
    I'm fetching more than 10 million records from the database and writing them to a text file. It takes hours to complete this operation. Is there any option to use TPL features here? It would be great if someone could get me started implementing this with the TPL.

        using (FileStream fStream = new FileStream("d:\\file.txt",
            FileMode.OpenOrCreate, FileAccess.ReadWrite))
        {
            BufferedStream bStream = new BufferedStream(fStream);
            TextWriter writer = new StreamWriter(bStream);
            for (int i = 0; i < 100000000; i++)
            {
                writer.WriteLine(i);
            }
            bStream.Flush();
            writer.Flush(); // empty buffer
            fStream.Flush();
        }
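
    For what it's worth, in a job like this the disk write is usually the bottleneck rather than the CPU, so the TPL pattern that tends to help is a producer/consumer pipeline (e.g. a BlockingCollection feeding a single writer task) so that fetching and writing overlap, combined with writing in large batches. A sketch of the shape of it, in Python for illustration only (batch sizes and names are assumptions):

        import queue
        import threading

        DONE = None  # sentinel marking the end of the stream

        def produce(q):
            """Stand-in for the database fetch: push batches of rows."""
            for start in range(0, 10000000, 100000):
                q.put(range(start, start + 100000))
            q.put(DONE)

        def consume(q, path):
            """Single writer: join each batch and write it in one call."""
            with open(path, "w", buffering=1024 * 1024) as f:
                while True:
                    batch = q.get()
                    if batch is DONE:
                        break
                    f.write("\n".join(map(str, batch)) + "\n")

        q = queue.Queue(maxsize=4)  # bounded, so fetching can't race far ahead
        writer = threading.Thread(target=consume, args=(q, "file.txt"))
        writer.start()
        produce(q)
        writer.join()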

    Read the article

  • Why arrayfun does NOT improve my struct array operation performance

    - by HaveF
    Here is the input data:

        % @param Landmarks:
        % Landmarks should be a 1*m struct.
        % m is the number of training sets.
        % Landmark(i).data is a n*2 matrix

    The old function:

        function Landmarks = CenterOfGravity(Landmarks)
        % align center of gravity
        for i = 1:length(Landmarks)
            Landmarks(i).data = Landmarks(i).data - ones(size(Landmarks(i).data,1),1)...
                *mean(Landmarks(i).data);
        end
        end

    The new function, which uses arrayfun:

        function [Landmarks] = center_to_gravity(Landmarks)
        Landmarks = arrayfun(@(struct_data)...
            struct('data', struct_data.data - repmat(mean(struct_data.data), [size(struct_data.data, 1), 1]))...
            , Landmarks);
        end %function center_to_gravity

    When using the profiler, I find the usage of time is NOT what I expected:

        Function             Total Time    Self Time*
        CenterOfGravity      0.011 s       0.004 s
        center_to_gravity    0.029 s       0.001 s

    Can someone tell me why? BTW, I can't add "arrayfun" as a new tag because of my reputation.

    Read the article

  • .NET IsValidXml Extension Method Performance

    - by tyndall
    I have a legacy application that I inherited that passes a lot of XML around as strings. I often need to check whether a string is valid XML. What is the fastest and least expensive way to check whether a string is valid XML in .NET? I'm working in .NET 3.5 and would most likely use this as an extension method (on string) in this one project within the solution.

    Read the article

  • SQL Server performance

    - by Jose
    I know that I can't get a specific answer to my question, but I would like to know if I can find the tools to get to my answer. We have a SQL Server 2008 database that, for the last 4 days, has had moments where it becomes unresponsive for specific queries for 5-20 minutes. For example, the following queries, run simultaneously in different query windows, have the following results:

        SELECT * FROM Assignment -- hangs indefinitely
        SELECT * FROM Invoice    -- works fine

    Many of the tables have non-clustered indexes to help speed up SELECTs. Here's what I know:

    1) The same query will either hang indefinitely or run normally.
    2) In Activity Monitor, in the Processes tab, there are normally around 80-100 processes running.

    I think that what's happening is:

    1) A user updates a table.
    2) This causes one or more indexes to get updated.
    3) Another user issues a SELECT while the index is updating.

    Is there a way I can figure out why, at a specific moment in time, SQL Server is unresponsive for a specific query?

    Read the article

  • Python: Time a code segment for testing performance (with timeit)

    - by Mestika
    Hi, I have a Python script which works just as it should, but I need to record the execution time. I've googled and found that I should use timeit, but I can't seem to get it to work. My Python script looks like this:

        import sys
        import getopt
        import timeit
        import random
        import os
        import re
        import ibm_db
        import time
        from string import maketrans

        myfile = open("results_update.txt", "a")

        for r in range(100):
            rannumber = random.randint(0, 100)
            update = "update TABLE set val = %i where MyCount >= '2010' and MyCount < '2012' and number = '250'" % rannumber
            #print rannumber
            conn = ibm_db.pconnect("dsn=myDB", "usrname", "secretPWD")
            for r in range(5):
                print "Run %s\n" % r
                query_stmt = ibm_db.prepare(conn, update)
                ibm_db.execute(query_stmt)

        myfile.close()
        ibm_db.close(conn)

    What I need is the time it takes to execute the query, written to the file "results_update.txt". The purpose is to test an update statement for my database with different indexes and tuning mechanisms. Sincerely, Mestika
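
    For a script like this, timeit is awkward because the statement carries state (the connection and a fresh random value each run); simply bracketing the execute call with time.time() and writing the delta to the results file is usually enough. A minimal sketch (the helper name is illustrative):

        import time
        import ibm_db

        def timed_execute(conn, sql):
            """Prepare and run one statement, returning elapsed wall time."""
            stmt = ibm_db.prepare(conn, sql)
            start = time.time()
            ibm_db.execute(stmt)
            return time.time() - start

        # inside the inner loop above:
        #   elapsed = timed_execute(conn, update)
        #   myfile.write("Run %s: %.4f s\n" % (r, elapsed))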

    Read the article

  • Entity Framework with LINQ to Entities performance

    - by mare
    If I have a static method like this:

        public static string GetTicClassificationTitle(string classI, string classII, string classIII)
        {
            using (TicDatabaseEntities ticdb = new TicDatabaseEntities())
            {
                var result = from classes in ticdb.Classifications
                             where classes.ClassI == classI
                             where classes.ClassII == classII
                             where classes.ClassIII == classIII
                             select classes.Description;
                return result.FirstOrDefault();
            }
        }

    and use this method in various places, in foreach loops or just calling it numerous times, does it create and open a new connection every time? If so, how can I tackle this? Should I cache the results somewhere; in this case, would I cache the entire Classifications table in a memory cache and then run queries against that cached object? Or should I make the TicDatabaseEntities variable static and initialize it at class level? Should my class be static if it contains only static methods? (Right now it is not.) Also, I've noticed that if I return result.First() instead of FirstOrDefault() and the query does not find a match, it throws an exception (with FirstOrDefault() there is no exception; it returns null). Thank you for clarification.

    Read the article
