Search Results

Search found 20275 results on 811 pages for 'general performance'.

Page 211 of 811

  • Right Language for the Job

    - by Manoj
    Using the right language for the job is key - that's a comment I've read on SO, and I believe it's the right thing to do. Because of this we ended up using different languages for different parts of the project - Perl, VBA (Excel macros), C#, etc. We currently have three to four languages in use inside the project. Using the right language for each job has made it immensely easier to automate things, but lately people are complaining that any new person who takes over the project will have to learn that many languages just to get started, and that such a person is hard to find. Please note that at most one or two people work on the project at any given time. I would like to know whether the approach we are following is right, or whether we should converge on a single language and use it for every job, even though another language might be better suited for some of them. Your experience with this would also help. Languages used and their purposes:
    Perl - processing large text files (log files)
    C# with Silverlight - web-based reporting
    LabVIEW - automation
    Excel macros - processing data in Excel sheets, generating graphs and exporting to PowerPoint

    Read the article

  • In synchronous query calls, one query causes the other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanation for it: I was involved in optimizing an application that performed a large number of sequential SELECT and INSERT statements against a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them some values are mapped via SELECT statements on another table in the same database. For a specific execution it took 90 minutes to run. I used a profiler (JProfiler - the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT method calls and almost 20% on SELECT calls (the rest was distributed across other parts). After some trials I came to this situation: I commented out the INSERT query that took 60% of the time. I expected the total run time to drop to around 35 minutes, since I had removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else) - each SELECT just took longer this time! Everything ran synchronously; there were no async calls, and there was only a single thread of execution. The SELECT and INSERT queries are very simple, have nothing special about them, and hit different tables in the same DB. I tested with the DB both on the application machine and on a remote network machine. I can't think of any explanation for this, since the profiler (an application profiler, not SQL Profiler) reported the change in method call times: after removing the INSERT statements, the SELECT statements took longer to run. Can anyone explain what could have happened? (It can't be a cache / query optimization effect, because the queries ran synchronously in a single thread, and it was far from affecting the cache this much.) I should note that the speed bottleneck was in SQL Server, which was using most of the CPU time.

    Read the article

  • SQL Server 2000 Stored Procedure Prevents Parallelism or something?

    - by user187305
    I have a huge, disgusting stored procedure that wasn't slow a couple of months ago, but now is. I barely know what this thing does and I am in no way interested in rewriting it. I do know that if I take the body of the stored procedure, declare/set the values of the parameters, and run it in Query Analyzer, it runs more than 20x faster. From the internet I've read that this is probably due to a bad cached query plan. So I've tried running the sp with WITH RECOMPILE after the EXEC, and I've also tried putting WITH RECOMPILE inside the sp, but neither helped even a little bit. When I compare the execution plan of the sp with that of the query, the biggest difference is that the sp's plan has Parallelism operations all over the place and the query's plan has none. Can this be the cause of the difference in speed? Thank you, any ideas would be great... I'm stuck.

    Read the article

  • MySQL Insert Query Randomly Takes a Long Time

    - by ShimmerTroll
    I am using MySQL to manage session data for my PHP application. When testing the app, it is usually very quick and responsive. However, seemingly at random, the response will stall before finally completing after a few seconds. I have narrowed the problem down to the session write query, which looks something like this:

        INSERT INTO Session VALUES('lvg0p9peb1vd55tue9nvh460a7', '1275704013', '')
        ON DUPLICATE KEY UPDATE sessAccess='1275704013', sessData='';

    The slow query log has this information:

        Query_time: 0.524446  Lock_time: 0.000046  Rows_sent: 0  Rows_examined: 0

    This happens about 1 out of every 10 times. The query usually takes only ~0.0044 sec. The table is InnoDB with about 60 rows. sessId is the primary key with a BTREE index. Since this query runs on every page view, that is clearly not an acceptable execution time. Why is this happening? Update: the table schema is sessId varchar(32), sessAccess int(10), sessData text.

    Read the article

  • Does using FILE_FLAG_NO_BUFFERING give a noticeable speed gain?

    - by 9dan
    I recently noticed the detailed description of the FILE_FLAG_NO_BUFFERING flag in MSDN, and read several Google search results about unbuffered I/O in Windows. http://msdn.microsoft.com/en-us/library/aa363858(v=vs.85).aspx I'm now wondering: is it really important to consider the unbuffered option in file I/O programming? Because many programs use plain old C stream I/O or C++ iostreams, I never paid any attention to the FILE_FLAG_NO_BUFFERING flag before. Say we are developing a photo explorer program like Picasa. If we implement unbuffered I/O, would thumbnail display speed show a noticeable difference to ordinary users?
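
    For reference, a minimal sketch of what an unbuffered read looks like (my own illustration, not from the question; the file name is hypothetical and the 4096-byte sector size is an assumption - real code should query the volume's actual sector size). With FILE_FLAG_NO_BUFFERING the buffer address, file offset, and read length must all be sector-aligned:

        // Hedged sketch: unbuffered read on Win32. Assumes 4096-byte sectors.
        #include <windows.h>
        #include <malloc.h>
        #include <stdio.h>

        int main() {
            const DWORD SECTOR = 4096;              // assumed sector size
            HANDLE h = CreateFileA("photo.jpg",     // hypothetical file name
                                   GENERIC_READ, FILE_SHARE_READ, NULL,
                                   OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
            if (h == INVALID_HANDLE_VALUE) return 1;

            // Buffer must be sector-aligned and the length a sector multiple.
            void *buf = _aligned_malloc(64 * SECTOR, SECTOR);
            if (!buf) { CloseHandle(h); return 1; }
            DWORD got = 0;
            if (ReadFile(h, buf, 64 * SECTOR, &got, NULL))
                printf("read %lu bytes\n", (unsigned long)got);

            _aligned_free(buf);
            CloseHandle(h);
            return 0;
        }

    In general, skipping the cache tends to pay off mainly for large sequential reads of data that won't be touched again, which is part of why most programs never set this flag.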

    Read the article

  • What is a 'best practice' backup plan for a website?

    - by HollerTrain
    I have a website which is very large and has a large user base. I am trying to think of a 'best practice' way to create a backup or mirror website, so that if something happens on domain.com, I can quickly point the site to backup.domain.com via a 302 redirect. This would give me time to troubleshoot domain.com while everyone is viewing backup.domain.com, none the wiser. Is my method the ideal one, or have you used better methods for creating a backup site? I don't want the site to go down and then get yelled at every minute while I'm trying to fix it. Ideally I would just 'flip the switch' and it would redirect users to the backup. Any insight would be greatly appreciated.

    Read the article

  • What is going to be "the next Hype"? [closed]

    - by kabado
    I know it's a very vague question, but maybe the answers could act like a poll about what could possibly be the next big event in computer science. A new language? A new technology? A new architecture? A new business model? Please provide details: what is it, and why do you think it's going to rule the world?

    Read the article

  • How are files (especially audio files) organized internally?

    - by mystify
    I'm trying to grok this: Apple talks about "packets" in audio files, and there is a fancy function called AudioFileReadPackets which takes a lot of arguments. One of them specifies the "start packet", and another the number of packets you want to read. So I imagine an audio file looking like this internally: it's made up of a lot of packets. If it's an audio file in a variable bit rate format, then every packet may have a different size. If the file is in a constant bit rate format, then every packet is the same size. So an audio file is like a truck full of boxes, and every box contains some interesting stuff. Is that correct? Does it apply to every kind of file? Is this what files actually look like?
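
    As a hedged illustration of why a start-packet/packet-count API is cheap for constant bit rate files (all numbers below are made up, not from any real audio format): with CBR, the byte offset of packet n is pure arithmetic, while VBR needs per-packet size information:

        #include <cstdint>
        #include <cstdio>
        #include <vector>

        int main() {
            // CBR: every packet is the same size, so seeking is arithmetic.
            const uint64_t kDataOffset = 4096; // assumed header size (made up)
            const uint64_t kPacketSize = 512;  // assumed CBR packet size (made up)
            uint64_t n = 1000;                 // the "start packet" argument
            uint64_t cbr = kDataOffset + n * kPacketSize;
            printf("CBR packet %llu starts at byte %llu\n",
                   (unsigned long long)n, (unsigned long long)cbr);

            // VBR: packet sizes differ, so the reader must sum (or index) them.
            std::vector<uint64_t> sizes = {417, 523, 388, 501}; // made up
            uint64_t vbr = kDataOffset;
            for (size_t i = 0; i < 3; i++)     // offset of packet 3
                vbr += sizes[i];
            printf("VBR packet 3 starts at byte %llu\n", (unsigned long long)vbr);
            return 0;
        }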

    Read the article

  • Slowing process creation under Java?

    - by oconnor0
    I have a single, large-heap (up to 240GB, though in the 20-40GB range for most of this phase of execution) JVM [1] running under Linux [2] on a server with 24 cores. We have tens of thousands of objects that have to be processed by an external executable, and the data created by those executables then has to be loaded back into the JVM. Each executable produces about half a megabyte of data (on disk), which is of course larger once read in after the process finishes. Our first implementation had each executable handle only a single object. This meant spawning twice as many processes as we had objects (since we called a shell script that called the executable). Our CPU utilization would start off high, but not necessarily at 100%, and slowly worsen. As we began measuring to see what was happening, we noticed that the process creation time [3] continually slows. While starting at sub-second times, it would eventually grow to take a minute or more. The actual processing done by the executable usually takes less than 10 seconds. Next we changed the executable to take a list of objects to process, in an attempt to reduce the number of processes created. With batch sizes of a few hundred (~1% of our current sample size), the process creation times start out around 2 seconds and grow to around 5-6 seconds. Basically: why is it taking so long to create these processes as execution continues? [1] Oracle JDK 1.6.0_22 [2] Red Hat Enterprise Linux Advanced Platform 5.3, Linux kernel 2.6.18-194.26.1.el5 #1 SMP [3] Creation of the ProcessBuilder object, redirecting the error stream, and starting it.

    Read the article

  • Django: IE doesn't load localhost or loads very SLOWLY

    - by reedvoid
    I'm just starting to learn Django, building a project on my computer running Windows 7 64-bit, Python 2.7, Django 1.3. Basically, whatever I write loads in Chrome and Firefox instantly. But IE (version 9) just stalls and does nothing. I can load http://127.0.0.1:8000 in IE and leave the computer on for hours and it won't load. Sometimes, when I refresh a couple of times or restart IE, it works. If I change something in the code, again Chrome and Firefox reflect the change instantly, whereas IE doesn't - if it loads the page at all. What is going on? I'm losing my mind here....

    Read the article

  • VS2008 is very slow on a specific large C++ solution

    - by VioletRose
    I have a solution with 21 C++ projects and 1 VB.NET project. The IDE responds very slowly when I simply move the caret in a file or try to open a menu. The process seems to consume 50% of the CPU for each movement. It only happens with this solution and only on my machine. The solution has a total of 2380 source and header files, of which 1280 are header files. I tried removing all connections to source control (Perforce) but it didn't help. I also have Visual Assist installed, but even after uninstalling it the same behavior continued. Any ideas?

    Read the article

  • Nested loop traversing arrays

    - by alecco
    There are 2 very big series of elements, the second 100 times bigger than the first. For each element of the first series, there are 0 or more elements in the second series. This can be traversed and processed with 2 nested loops. But the unpredictability of the number of matching elements for each member of the first array makes things very, very slow. The actual processing of the 2nd series of elements involves logical AND (&) and a population count. I couldn't find good optimizations using C, but I am considering doing inline asm, doing rep* mov* or similar for each element of the first series and then doing batch processing of the matching bytes of the second series, perhaps in buffers of 1MB or something. But the code would get quite messy. Does anybody know of a better way? C preferred, but x86 ASM is OK too. Many thanks! Sample/demo code with a simplified problem; the first series are "people" and the second series are "events", for clarity's sake. (The original problem is actually 100m and 10,000m entries!)

        #include <stdio.h>
        #include <stdint.h>

        #define PEOPLE 1000000    // 1m
        struct Person {
            uint8_t age;   // Filtering condition
            uint8_t cnt;   // Number of events for this person in E
        } P[PEOPLE];       // Each has 0 or more bytes with bit flags

        #define EVENTS 100000000  // 100m
        uint8_t P1[EVENTS];       // Property 1 flags
        uint8_t P2[EVENTS];       // Property 2 flags

        void init_arrays() {
            for (int i = 0; i < PEOPLE; i++) { // just some stuff
                P[i].age = i & 0x07;
                P[i].cnt = i % 220;            // assert( sum < EVENTS );
            }
            for (int i = 0; i < EVENTS; i++) {
                P1[i] = i % 7;                 // just some stuff
                P2[i] = i % 9;                 // just some other stuff
            }
        }

        int main(int argc, char *argv[]) {
            uint64_t sum = 0, fcur = 0;
            int age_filter = 7; // just some
            init_arrays();      // Init P, P1, P2
            for (int64_t p = 0; p < PEOPLE; p++)
                if (P[p].age < age_filter)
                    for (int64_t e = 0; e < P[p].cnt; e++, fcur++)
                        sum += __builtin_popcount( P1[fcur] & P2[fcur] );
                else
                    fcur += P[p].cnt; // skip this person's events
            printf("(dummy %ld %ld)\n", sum, fcur );
            return 0;
        }

    Compiled with: gcc -O5 -march=native -std=c99 test.c -o test

    Read the article

  • Update table with index is too slow

    - by pauloya
    Hi, I was watching the Profiler on a live system of our application and I saw that an update statement we run periodically (every second) was quite slow, taking around 400ms every time. The query includes this update (which is the slow part):

        UPDATE BufferTable
        SET LrbCount = LrbCount + 1, LrbUpdated = getdate()
        WHERE LrbId = @LrbId

    This is the table:

        CREATE TABLE BufferTable(
            LrbId [bigint] IDENTITY(1,1) NOT NULL,
            ...
            LrbInserted [datetime] NOT NULL,
            LrbProcessed [bit] NOT NULL,
            LrbUpdated [datetime] NOT NULL,
            LrbCount [tinyint] NOT NULL,
        )

    The table has 2 indexes (non-unique and non-clustered) with the fields in this order:
    * Index1 - (LrbProcessed, LrbCount)
    * Index2 - (LrbInserted, LrbCount, LrbProcessed)

    When I looked at this I thought the problem would come from Index1, since LrbCount changes a lot and changing it changes the order of the data in the index. But after deactivating Index1 I saw the query was taking the same time as initially. Then I rebuilt Index1 and deactivated Index2; this time the query was very fast. It seems to me that Index2 should be faster to update - the order of the data shouldn't change, since the LrbInserted time is not changed. Can someone explain why Index2 is much heavier to update than Index1? Thank you!

    Read the article

  • What is microbenchmarking?

    - by polygenelubricants
    I've heard this term used, but I'm not entirely sure what it means, so:
    * What DOES it mean, and what DOESN'T it mean?
    * What are some examples of what IS and ISN'T microbenchmarking?
    * What are the dangers of microbenchmarking, and how do you avoid it? (Or is it a good thing?)
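
    As one hedged example of the classic danger (a deliberately bad benchmark, sketched in C++): timing a tiny loop in isolation. An optimizing compiler may fold the whole loop into a constant, so the timer ends up measuring almost nothing - and even when the work survives, cache warm-up and timer resolution can dominate the result:

        #include <chrono>
        #include <cstdio>

        int main() {
            using clock = std::chrono::steady_clock;

            auto t0 = clock::now();
            long sum = 0;
            for (int i = 0; i < 10000000; i++)
                sum += i;  // the "work" under test
            auto t1 = clock::now();

            // Printing sum keeps the loop from being removed as dead code,
            // but the compiler can still compute it in closed form, so the
            // measured time may be near zero - a microbenchmark artifact.
            printf("sum=%ld\n", sum);
            long long ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
            printf("%lld ns\n", ns);
            return 0;
        }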

    Read the article

  • C++ Function pointers vs Switch

    - by Perfix
    What is faster: function pointers or a switch? The switch statement would have around 30 cases, consisting of enumerated unsigned ints from 0 to 30. I could do the following:

        class myType {
            FunctionEnum func;
            string argv[123];
            int someOtherValue;
        };

        // In another file:
        myType current;
        // Iterate through a vector containing lots of myTypes
        // ...
        for ( i = 0; i < myVecSize; i++ )
            switch ( current.func ) {
            case 1:
                // ...
                break;
            // ........
            case 30:
                // blah
                break;
            }

    and go through the switch on func every time. A good thing about the switch is that my code stays more organized than with 30 functions. Or I could do this (not so sure about it):

        class myType {
            myReturnType (*func);
            string argv[123];
            int someOtherValue;
        };

    I'd have 30 different functions then, and at the beginning a pointer to one of them is assigned to each myType. Which is probably faster: the switch statement or the function pointer? Calls per second: around 10 million. I can't just test it out - that would require me to rewrite the whole thing. I'm currently using the switch. I'm building an interpreter which I want to be faster than Python & Ruby - every clock cycle matters!
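
    For the second option, a minimal sketch of a dispatch table (my own illustration; the handler names and signature are hypothetical), assuming the enum values are dense in 0..29. Note that a dense switch often compiles to a very similar indirect jump through a jump table, so only measuring the real interpreter loop can settle which is faster:

        #include <cstdio>

        typedef void (*Handler)(void);      // one common signature for all ops

        static void opAdd(void)  { puts("add");  }  // hypothetical handler
        static void opHalt(void) { puts("halt"); }  // hypothetical handler

        // Dense table indexed directly by the instruction's enum value (0..29);
        // a full version would list the remaining 28 handlers here too.
        static Handler dispatch[30] = { opAdd, opHalt /* , ... */ };

        int main() {
            int func = 1;        // would come from the current instruction
            dispatch[func]();    // one indirect call, no comparisons
            return 0;
        }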

    Read the article

  • PHP: flatten array - fastest way?

    - by Industrial
    Is there any fast way to flatten an array and select subkeys ('key' & 'value' in this case) without running a foreach loop, or is the foreach always the fastest way?

        Array
        (
            [0] => Array
                (
                    [key] => string
                    [value] => a simple string
                    [cas] => 0
                )
            [1] => Array
                (
                    [key] => int
                    [value] => 99
                    [cas] => 0
                )
            [2] => Array
                (
                    [key] => array
                    [value] => Array
                        (
                            [0] => 11
                            [1] => 12
                        )
                    [cas] => 0
                )
        )

    To:

        Array
        (
            [int] => 99
            [string] => a simple string
            [array] => Array
                (
                    [0] => 11
                    [1] => 12
                )
        )

    Read the article

  • Fast way to load a big object graph from the DB

    - by Famos
    Hi, I have my own data structure written in C#, like:

        public class ElectricScheme {
            public List<Element> Elements { get; set; }
            public List<Net> Nets { get; set; }
        }

        public class Element {
            public string IdName { get; set; }
            public string Func { get; set; }
            public string Name { get; set; }
            public BaseElementType Type { get; set; }
            public List<Pin> Pins { get; set; }
        }

        public class Pin {
            public string IdName { get; set; }
            public string Name { get; set; }
            public BasePinType PinType { get; set; }
            public BasePinDirection PinDirection { get; set; }
        }

        public class Net {
            public string IdName { get; set; }
            public string Name { get; set; }
            public List<Tuple<Element,Pin>> ConnectionPoints { get; set; }
        }

    where Elements count is ~19000, each element containing =3 Pins, and Nets count is ~20000, each net containing =3 (Element, Pin) pairs. Parsing the txt file (~17 MB) takes 5 minutes. Serialization/deserialization with the default serializer takes ~3 minutes. Loading from the DB has run for 20 minutes and still hasn't finished... I use Entity Framework like this:

        public ElectricScheme LoadScheme(int schemeId)
        {
            var eScheme = (from s in container.ElectricSchemesSet
                           where s.IdElectricScheme.Equals(schemeId)
                           select s).FirstOrDefault();

            if (eScheme == null)
                return null;

            container.LoadProperty(eScheme, "Elements");
            container.LoadProperty(eScheme, "Nets");
            container.LoadProperty(eScheme, "Elements.Pins");

            return eScheme;
        }

    The problem is the dependencies between Element and Pin... (for ~19000 elements, ~95000 pins) Any ideas?

    Read the article

  • SQL Server - Missing Indexes - What would use the index?

    - by BankZ
    I am using SQL Server 2008 and we are using the DMVs to find missing indexes. However, before I create a new index I am trying to figure out which proc/query wants that index. I want as much information as I can get, so I can make an informed decision about my indexes. Sometimes the indexes SQL Server wants don't make sense to me. Does anyone know how I can figure out what wants it?

    Read the article

  • Best way to get distinct values from large table

    - by derivation
    I have a db table with about 10 or so columns, two of which are month and year. The table has about 250k rows now, and we expect it to grow by about 100-150k records a month. A lot of queries involve the month and year columns (e.g., all records from March 2010), so we frequently need to get the available month and year combinations (i.e., do we have records for April 2010?). A coworker thinks we should have a separate table from our main one that contains only the months and years we have data for. We only add records to our main table once a month, so it would just be a small addition at the end of our scripts to add the new entry to this second table. This second table would be queried whenever we need to find the available month/year entries in the first table. This solution feels kludgy to me, and a violation of DRY. What do you think is the correct way of solving this problem? Is there a better way than having two tables?

    Read the article

  • SQL Server - how to determine if indexes aren't being used?

    - by rwmnau
    I have a high-demand transactional database that I think is over-indexed. Originally, it didn't have any indexes at all, so adding some for common processes made a huge difference. However, over time, we've created indexes to speed up individual queries, and some of the most popular tables have 10-15 different indexes on them, and in some cases, the indexes are only slightly different from each other, or are the same columns in a different order. Is there a straightforward way to watch database activity and tell if any indexes are not hit anymore, or what their usage percentage is? I'm concerned that indexes were created to speed up either a single daily/weekly query, or even a query that's not being run anymore, but the index still has to be kept up to date every time the data changes. In the case of the high-traffic tables, that's a dozen times/second, and I want to eliminate indexes that are weighing down data updates while providing only marginal improvement.

    Read the article

  • Does beginTransaction in Hibernate allocate a new DB connection?

    - by illscience
    Hi folks - just wondering whether beginning a new transaction in Hibernate actually allocates a connection to the DB? I'm concerned because our server begins a new transaction for each request received, even if that request doesn't interact with the DB. We're seeing DB connections as a major bottleneck, so I'm wondering if I should take the time to narrow the scope of my transactions. Searched everywhere and haven't been able to find a good answer. The very simple code is here:

        SessionFactory sessionFactory =
            (SessionFactory) Context.getContext().getBean("sessionFactory");
        sessionFactory.getCurrentSession().beginTransaction();
        sessionFactory.getCurrentSession().setFlushMode(FlushMode.AUTO);

    Thanks very much! a

    Read the article
