Search Results for 'pointer speed'

  • How to optimize paging for a large in-memory database

    - by snakefoot
    I have an application where the entire database is implemented in memory, using an STL map for each table in the database. Each item in an STL map is a complex object with references to items in the other maps. The application works with a large amount of data, so it uses more than 500 MB of RAM. Clients can contact the application and get a filtered version of the entire database; this is done by running through the whole database and finding the items relevant to that client. After the application has been running for an hour or so, Windows 2003 SP2 starts to page out parts of its RAM, even though the machine has 16 GB. Once the application has been partly paged out, a client logon takes a very long time (10 minutes), because every pointer lookup in the STL maps now generates a page fault. I can see it is possible to tell Windows to lock memory in RAM, but this is generally only recommended for device drivers, and only for "small" amounts of memory. I guess a poor man's solution would be to loop through the entire in-memory database periodically, telling Windows we are still interested in keeping the data model in RAM. Another poor man's solution would be to disable the page file on Windows completely. The expensive solution would be a SQL database, rewriting the entire application to use a database layer and hoping the database system has implemented means for fast access. Are there other, more elegant solutions?
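
    A minimal sketch of the page-touching idea above, assuming one std::map per table (the Row type and TouchTable name are illustrative, not from the question):

        #include <map>

        struct Row { int id; /* the real objects are complex and cross-reference other tables */ };

        // Walk every node and read one byte from it, so the OS sees the pages
        // as recently used and is less inclined to evict them.
        void TouchTable(const std::map<int, Row>& table)
        {
            volatile char sink = 0;
            for (std::map<int, Row>::const_iterator it = table.begin();
                 it != table.end(); ++it)
            {
                sink ^= *reinterpret_cast<const volatile char*>(&it->second);
            }
            (void)sink; // keeps the reads from being optimized away
        }

    Running something like this from a low-priority background thread is strictly a workaround; VirtualLock on the relevant heap regions would be the heavier-handed variant of the locking approach the question mentions.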


  • SWIG_NewPointerObj and values always being nil

    - by Tom J Nowell
    I'm using SWIG to wrap C++ objects for use in Lua, and I'm trying to pass data to a method in my Lua script, but it always comes out as nil.

        void CTestAI::UnitCreated(IUnit* unit){
            lua_getglobal(L, "ai");
            lua_getfield(L, -1, "UnitCreated");
            swig_module_info *module = SWIG_GetModule( L );
            swig_type_info *type = SWIG_TypeQueryModule( module, module, "IUnit *" );
            SWIG_NewPointerObj(L, unit, type, 0);
            lua_epcall(L, 1, 0);
        }

    Here is the Lua code:

        function AI:UnitCreated(unit)
            if(unit == nil) then
                game:SendToConsole("I CAN HAS nil ?")
            else
                game:SendToConsole("I CAN HAS UNITS!!!?")
            end
        end

    unit is always nil. I have checked, and in the C++ code the unit pointer is never invalid or null. I've also tried:

        void CTestAI::UnitCreated(IUnit* unit){
            lua_getglobal(L, "ai");
            lua_getfield(L, -1, "UnitCreated");
            SWIG_NewPointerObj(L, unit, SWIGTYPE_p_IUnit, 0);
            lua_epcall(L, 1, 0);
        }

    with identical results. Why is this failing? How do I fix it?


  • jQuery: If user clicks out of div

    - by Paul Knopf
    I am creating a popup in jQuery. All is well with the popup on hover. I made it so that a timer starts when the user's mouse leaves the div, to close it; if he enters the div again before the timer fires, the timer is cleared. This is fine, but what if the user clicks outside the div? I want that to close the div too. The only problem is that I don't want to attach click events to the wrapper on the page (because it includes the popout menu). I have seen techniques that wrap the entire screen in a transparent div and then attach a click event to it. Although this may work, it confuses the user if he clicks the hidden div intending to click a link: he has to click twice, once to close the div (hiding the link) and again to actually follow the link. Also, you lose the mouse cursors attached to any masked elements on the page (such as the pointer cursor on links). There has to be a way to check whether the user clicked outside the div without a modal overlay and without attaching events to the entire page (page wrappers and child elements included). onclicksomewhereelse="closeme();" lol.


  • Computation overhead in C# - Using getters/setters vs. modifying arrays directly and casting speeds

    - by Jeffrey Kern
    I was going to write a long-winded post, but I'll boil it down here: I'm trying to emulate the old-school graphical style of the NES via XNA, but my FPS is SLOW when modifying 65K pixels per frame. If I just loop through all 65K pixels and set them to some arbitrary color, I get 64 FPS. With the code I wrote to look up which colors should be placed where, I get 1 FPS. I think it is because of my object-oriented code. Right now I have things divided into about six classes, with getters and setters, so I'm guessing I make at least 360K getter calls per frame, which I think is a lot of overhead. Each class contains 1D and/or 2D arrays holding custom enumerations, int, Color, Vector2D, or byte values. What if I combined all of the classes into just one and accessed the contents of each array directly? The code would look a mess and ditch the concepts of object-oriented coding, but the speed might be much faster. I'm also not concerned about access violations, as all gets and sets on the arrays are done in blocks: e.g., all writing to an array takes place before any data is read from it. As for casting, I stated that I'm using custom enumerations, int, Color, Vector2D, and bytes. Which data types are fastest to use and access in the .NET Framework, XNA, Xbox, and C#? I think constant casting might be a cause of slowdown here. Also, instead of using math to figure out which indexes data should be placed in, I've used precomputed lookup tables so I don't have to do constant multiplication, addition, subtraction, and division per frame. :)


  • Guidance required: First time working with a real high-end database (size = 50GB).

    - by claws
    I got a project designing a database; this is going to be my first big-scale project. The good thing about it is that the information is mostly organized and currently stored in text files. The size of this information is 50GB. There are going to be a few million records in each table, and around 50 tables. I need to provide a web interface for searching and browsing, and I'm going to use the MySQL DBMS. I've never worked with a database of more than 200MB before, so speed and performance were never a concern; I followed things like normalization and indexes, but never used any kind of testing, benchmarking, or query optimization, because I never had to care about them. Here, though, the purpose of the database is to be quickly searchable, so I need to consider all possible aspects of the design. I was browsing the archives and found:

        http://stackoverflow.com/questions/1981526/what-should-every-developer-know-about-databases
        http://stackoverflow.com/questions/621884/database-development-mistakes-made-by-app-developers

    I'm going to keep the points mentioned in the above answers in mind. What else should I know? What else should I keep in mind?


  • SELECT SQL Variable - should I avoid using this syntax and always use SET?

    - by Sholom
    Hi All, This may look like a duplicate of an earlier question, but it's not. I am trying to get at a best practice, not a technical answer (which I already (think I) know). I'm new to SQL Server and trying to form good habits. I found a great explanation of the functional differences between SET @var = and SELECT @var = here: http://vyaskn.tripod.com/differences_between_set_and_select.htm To summarize what each has that the other hasn't (see the source for examples):

    SET:
      - ANSI standard and portable; recommended by Microsoft.
      - SET @var = (SELECT column_name FROM table_name) fails when the SELECT returns more than one value, eliminating the possibility of unpredictable results.
      - SET @var = (SELECT column_name FROM table_name) will set @var to NULL if that's what the SELECT returned, so @var is never silently left at its prior value.

    SELECT:
      - Multiple variables can be set in one statement.
      - Can return multiple system variables set by the prior DML statement.
      - SELECT @var = column_name FROM table_name sets @var to (according to my testing) the last value returned by the SELECT. This could be a feature or a bug; the behavior can be changed with the SELECT @var = (SELECT column_name FROM table_name) syntax.
      - Speed: setting multiple variables with a single SELECT statement, as opposed to multiple SET/SELECT statements, is much quicker. He has a sample test to prove his point; if you can design a test that proves otherwise, bring it on!

    So, what do I do? (Almost) always use SET @var =, since SELECT @var = is messy coding and not standard? Or use SELECT @var = freely, since it can accomplish more, unless the code is likely to be ported to another environment? Thanks


  • Efficient update of SQLite table with many records

    - by blackrim
    I am trying to use SQLite (sqlite3) for a project storing hundreds of thousands of records (I'd like SQLite so users of the program don't have to run a [My]SQL server). I sometimes have to update hundreds of thousands of records to enter left/right values (they are hierarchical), but have found the standard

        update table set left_value = 4, right_value = 5 where id = 12340;

    to be very slow. I have tried surrounding every thousand or so with a transaction:

        begin;
        ....
        update ...
        update table set left_value = 4, right_value = 5 where id = 12340;
        update ...
        ....
        commit;

    but again, very slow. Odd, because when I populate it with a few hundred thousand records (with inserts), it finishes in seconds. I am currently testing the speed in Python (the slowness shows up both at the command line and in Python) before I move it to the C++ implementation, but right now this is way too slow, and I need to find a new solution unless I am doing something wrong. Thoughts? (I would also take an open-source alternative to SQLite that is portable.)
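
    For the C++ implementation, a minimal sketch of the usual fix (one prepared statement reused inside a single transaction; the column names come from the question, while the table name tree and the loop data are illustrative):

        #include <sqlite3.h>
        #include <cstdio>

        // Update many rows with one prepared statement inside one transaction.
        void bulk_update(sqlite3* db)
        {
            sqlite3_exec(db, "BEGIN", 0, 0, 0);

            sqlite3_stmt* stmt = 0;
            sqlite3_prepare_v2(db,
                "UPDATE tree SET left_value = ?, right_value = ? WHERE id = ?",
                -1, &stmt, 0);

            for (int id = 0; id < 100000; ++id) {       // stand-in for the real data
                sqlite3_bind_int(stmt, 1, 2 * id);      // left_value
                sqlite3_bind_int(stmt, 2, 2 * id + 1);  // right_value
                sqlite3_bind_int(stmt, 3, id);          // id
                if (sqlite3_step(stmt) != SQLITE_DONE)
                    std::fprintf(stderr, "%s\n", sqlite3_errmsg(db));
                sqlite3_reset(stmt);                    // rebind and reuse
            }

            sqlite3_finalize(stmt);
            sqlite3_exec(db, "COMMIT", 0, 0, 0);
        }

    Making sure id is indexed (ideally the INTEGER PRIMARY KEY) matters just as much: without an index, every UPDATE is a full table scan, which would explain inserts being fast while updates crawl.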


  • Adding and sorting a linked list in C

    - by user1202963
    In my assignment, I have to write a function that takes as arguments a pointer to an LNode structure and an integer. I have to not only add that integer to the linked list, but also place it so that the list stays in proper ascending order. I've tried several attempts at this, and this is my code as of posting:

        LNode* AddItem(LNode *headPtr, int newItem)
        {
            auto LNode *ptr = headPtr;
            ptr = malloc(sizeof(LNode));

            if (headPtr == NULL) {
                ptr->value = newItem;
                ptr->next = headPtr;
                return ptr;
            } else {
                while (headPtr->value > newItem || ptr->next != NULL) {
                    printf("While\n"); // simply to let me know how many times the loop runs
                    headPtr = headPtr->next;
                }
                ptr->value = newItem;
                ptr->next = headPtr;
                return ptr;
            }
        } // end of "AddItem"

    When I run it and try to insert, say, a 5 and then a 3, the 5 gets inserted, but then the while loop runs once and I get a segmentation fault. Also, I cannot change the arguments, as they are part of the skeleton code for this project. Thanks to anyone who can help. If it helps, this is what the structure looks like:

        typedef struct LNode
        {
            int value;
            struct LNode *next;
        } LNode;
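
    For comparison, a common way to write a sorted insert with this exact signature (a sketch, not the assignment's official solution; it reuses the LNode typedef above):

        #include <stdlib.h>

        LNode* AddItem(LNode *headPtr, int newItem)
        {
            LNode *node = (LNode*)malloc(sizeof(LNode));
            node->value = newItem;

            /* New node belongs at the front: empty list or smallest value so far. */
            if (headPtr == NULL || headPtr->value >= newItem) {
                node->next = headPtr;
                return node;
            }

            /* Otherwise walk to the last node whose value is below newItem
               and splice the new node in after it. */
            LNode *cur = headPtr;
            while (cur->next != NULL && cur->next->value < newItem)
                cur = cur->next;
            node->next = cur->next;
            cur->next = node;
            return headPtr;
        }

    One reason the original crashes: after headPtr = headPtr->next walks off the end of the list, the loop condition reads headPtr->value through a null pointer (and ptr->next is tested before it is ever initialized).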


  • Using Lucene to index private data, should I have a separate index for each user or a single index

    - by Nathan Bayles
    I am developing an Azure-based website and I want to provide search capabilities using Lucene (structured JSON objects would be indexed and stored in Lucene; other content, such as Word documents, would be indexed in Lucene but stored in blob storage). I want the search to be secure, such that one user would never see a document belonging to another user. I want to allow ad-hoc searches as typed by the user. Lastly, I want to query programmatically to return predefined sets of data, such as "all notes for user X". I think I understand how to add properties to each document to achieve these three objectives (I am listing them here so that anyone kind enough to answer will have a better idea of what I am trying to do). My questions revolve around performance and security: Can I improve document security by having a separate index for each user, or is including the user's ID as a parameter in each search sufficient? Can I improve indexing speed and total throughput of the system by having a separate index for each user? My thinking is that separate indexes would let me scale the system by having multiple index writers (perhaps even on different server instances) working at the same time, each on their own index. Any insight would be greatly appreciated. Regards, Nate


  • Automatically converting an A* into a B*

    - by Xavier Nodet
    Hi, Suppose I'm given a class A. I would like to wrap pointers to it in a small class B, some kind of smart pointer, with the constraint that a B* is automatically converted to an A*, so that I don't need to rewrite the code that already uses A*. I would therefore want to modify B so that the following compiles...

        struct A {
            void foo() {}
        };

        template <class K>
        struct B {
            B(K* k) : _k(k) {}
            //operator K*() {return _k;}
            //K* operator->() {return _k;}
        private:
            K* _k;
        };

        void doSomething(A*) {}

        void test() {
            A a;
            A* pointer_to_a (&a);
            B<A> b (pointer_to_a);

            //b->foo();      // I don't need those two...
            //doSomething(b);

            B<A>* pointer_to_b (&b);
            pointer_to_b->foo();       // 'foo' : is not a member of 'B<K>'
            doSomething(pointer_to_b); // 'doSomething' : cannot convert parameter 1 from 'B<K> *' to 'A *'
        }

    Note that B inheriting from A is not an option (instances of A are created in factories out of my control)... Is it possible? Thanks.
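
    For reference, a sketch of the closest thing that does work (an assumption about intent, not a confirmed answer): C++ will never implicitly convert a B<A>* into an A*, but conversion operators on B itself let plain b objects be used where an A* is expected:

        template <class K>
        struct B {
            B(K* k) : _k(k) {}
            operator K*() { return _k; }    // B<A> (by value or reference) converts to A*
            K* operator->() { return _k; }  // b->foo() forwards to the wrapped A
        private:
            K* _k;
        };

        // usage: with the operators uncommented, doSomething(b) and b->foo()
        // both compile; pointer_to_b->foo() still cannot, because operator->
        // only kicks in on B objects, never on raw B* pointers.

    So code that traffics in A* can usually be left alone by passing b (or *pointer_to_b) rather than pointer_to_b.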


  • My java program seems to be skipping over the try{}, executing the catch{} and then throwing a NullPointerException. What should I do?

    - by Matt Bolopue
    I am writing a program that calculates the number of words, syllables, and sentences in any given text file. I don't need help finding those numbers; however, my program (which currently should only find the number of words) will not import the text file, even when I type the name of the file correctly. The text file is in the same folder as the source code. Instead, it tells me every time that what I typed has the wrong file extension (see my catch block) and then proceeds to throw a null pointer. I am at a loss for what could be causing it. Any suggestions?

        import java.io.*;
        import java.util.*;

        public class Reading_Lvl_Calc {

            /**
             * @param args
             */
            public static void main(String[] args) {
                // TODO Auto-generated method stub
                int words = 0;
                String fileName;
                Scanner scan;
                Scanner keyread = new Scanner(System.in);
                System.out.println("Please enter a file name (or QUIT to exit)");
                fileName = keyread.nextLine();
                File doc = new File(fileName);
                //while(scan.equals(null)){
                try {
                    scan = new Scanner(doc);
                } catch (Exception e) {
                    if (fileName.substring(fileName.indexOf(".")) != ".txt")
                        System.out.println("I'm sorry, the file \"" + fileName + "\" has an invalid file extension.");
                    else
                        System.out.println("I am sorry, the file \"" + fileName + " \" cannot be found.\n The file must be in the same directory as this program");
                    scan = null;
                }
                // }
                while (scan.hasNext()) {
                    words++;
                    scan.next();
                }
                System.out.println(words);
            }
        }


  • How to identify multiple identical pairs in two vectors

    - by Sacha Epskamp
    In my graph package (as in graph theory: nodes connected by edges), I have a vector from indicating for each edge the node of origin, a vector to indicating for each edge the node of destination, and a vector curve indicating the curvature of each edge. By default I want edges to have a curve of 0 if there is only one edge between two nodes, and a curve of 0.2 if there are two edges between two nodes. The code I use now is a for-loop, and it is kind of slow:

        curve <- rep(0, 5)
        from <- c(1, 2, 3, 3, 2)
        to <- c(2, 3, 4, 2, 1)
        for (i in 1:length(from)) {
            if (any(from == to[i] & to == from[i])) {
                curve[i] = 0.2
            }
        }

    So basically I look, for each edge (one index in from and one in to), whether any other pair in from and to uses the same two nodes (numbers). What I am looking for are two things:

      1. A way to identify whether any pair of nodes has two edges between them (so I can omit the loop if not)
      2. A way to speed up this loop

    EDIT: To make this a bit clearer, another example:

        from <- c(4L, 6L, 7L, 8L, 1L, 9L, 5L, 1L, 2L, 1L, 10L, 2L, 6L, 7L, 10L, 4L, 9L)
        to   <- c(1L, 1L, 1L, 2L, 3L, 3L, 4L, 5L, 6L, 7L, 7L, 8L, 8L, 8L, 8L, 10L, 10L)
        cbind(from, to)
              from to
         [1,]    4  1
         [2,]    6  1
         [3,]    7  1
         [4,]    8  2
         [5,]    1  3
         [6,]    9  3
         [7,]    5  4
         [8,]    1  5
         [9,]    2  6
        [10,]    1  7
        [11,]   10  7
        [12,]    2  8
        [13,]    6  8
        [14,]    7  8
        [15,]   10  8
        [16,]    4 10
        [17,]    9 10

    In these two vectors, pair 3 is identical to pair 10 (both connect 1 and 7, in different orders) and pairs 4 and 12 are identical (both connect 2 and 8). So I would want curve to become:

         [1,] 0.0
         [2,] 0.0
         [3,] 0.2
         [4,] 0.2
         [5,] 0.0
         [6,] 0.0
         [7,] 0.0
         [8,] 0.0
         [9,] 0.0
        [10,] 0.2
        [11,] 0.0
        [12,] 0.2
        [13,] 0.0
        [14,] 0.0
        [15,] 0.0
        [16,] 0.0
        [17,] 0.0

    (as a vector; I transposed twice to get row numbers).


  • C++ Dynamic Allocation Mismatch: Is this problematic?

    - by acanaday
    I have been assigned to work on some legacy C++ code in MFC. One of the things I am finding all over the place are allocations like the following:

        struct Point
        {
            float x, y, z;
        };

        ...

        void someFunc( void )
        {
            int numPoints = ...;
            Point* pArray = (Point*)new BYTE[ numPoints * sizeof(Point) ];

            ...
            //do some stuff with points
            ...

            delete [] pArray;
        }

    I realize that this code is atrociously wrong on many levels (C-style cast, using new like malloc, confusing, etc.). I also realize that if Point defined a constructor, it would not be called, and weird things would happen at delete [] if a destructor were defined. Question: I am fixing these occurrences wherever they appear as a matter of course; however, I have never seen anything like this before, and it has got me wondering. Does this code have the potential to cause memory leaks or corruption as it stands (no constructor/destructor, but with the pointer type mismatch), or is it safe as long as the array just contains structs/primitive types?
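
    While cleaning these up, the direct replacement is the matching array form of new (a sketch; it assumes Point stays a plain aggregate like the one above):

        void someFunc(void)
        {
            int numPoints = 1024;                 // illustrative
            Point* pArray = new Point[numPoints]; // allocation and deletion types now match

            // ... do some stuff with points ...

            delete [] pArray;                     // same type the array was created with
        }

    Or, more idiomatically, a std::vector<Point>, which removes the delete entirely.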


  • How to access the element of a list/vector that passed by reference in C++

    - by bsoundra
    Hi all, the problem is passing lists/vectors by reference:

        int main() {
            list<int> arr;
            //Adding few ints here to arr
            func1(&arr);
            return 0;
        }

        void func1(list<int> *arr) {
            // How can I print the values here?
            // I tried all the below, but it is erroring out.
            cout << arr[0];    // error
            cout << *arr[0];   // error
            cout << (*arr)[0]; // error
            // How do I modify the value at index 0?
            func2(arr); // Since it is already a pointer, I am passing just the address
        }

        void func2(list<int> *arr) {
            // How do I print and modify the values here? I believe it should be
            // the same as above, but just in case.
        }

    Are vectors any different from lists? Thanks in advance. Any links where these things are explained elaborately would be a great help. Thanks again.
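
    For what it's worth, a sketch of how the access works (std::list has no operator[], which is why every indexed form errors out; this is also where lists and vectors differ):

        #include <iostream>
        #include <list>
        #include <vector>

        void func1(std::list<int>* arr)
        {
            // A list is not indexable: go through an iterator (or front()).
            std::list<int>::iterator it = arr->begin();
            std::cout << *it << std::endl; // print the first element
            *it = 42;                      // modify the first element

            std::vector<int> v(3, 7);
            std::vector<int>* pv = &v;
            std::cout << (*pv)[0] << std::endl; // a vector IS indexable: (*pv)[0] works
        }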


  • Time taken for memcpy decreases after certain point

    - by tss
    I have code which increases the size of a memory block (identified by a pointer) exponentially. Instead of realloc, I use malloc followed by memcpy. Something like this:

        int size = 5, newsize;
        int *c = malloc(size * sizeof(int));
        int *temp;

        while (1) {
            newsize = 2 * size;
            //begin time
            temp = malloc(newsize * sizeof(int));
            memcpy(temp, c, size * sizeof(int));
            //end time
            //print time in milliseconds
            c = temp;
            size = newsize;
        }

    Thus the number of bytes getting copied increases exponentially, and the time required for this task also increases almost linearly with the size. However, after a certain point, the time taken abruptly drops to a very small value and then remains constant. I recorded times for similar code copying data (of my own type):

        5 -> 10              2 ms
        10 -> 20             2 ms
        ...
        2560 -> 5120         5 ms
        ...
        20480 -> 40960      30 ms
        40960 -> 91920      58 ms
        367680 -> 735360     2 ms
        735360 -> 1470720    2 ms
        1470720 -> 2941440   2 ms

    What is the reason for this drop in time? Does a more optimal memcpy method get called when the size is large?
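
    For reproduction, a complete minimal version of the loop with the elided timing filled in (using clock(); like the original, it deliberately never frees the old block):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <time.h>

        int main(void)
        {
            int size = 5, newsize;
            int *c = (int*)malloc(size * sizeof(int));
            int *temp;

            while (size < 50000000) {   /* bounded so the sketch terminates */
                newsize = 2 * size;

                clock_t begin = clock();
                temp = (int*)malloc(newsize * sizeof(int));
                memcpy(temp, c, size * sizeof(int));
                clock_t end = clock();

                printf("%9d -> %9d : %.2f ms\n", size, newsize,
                       1000.0 * (double)(end - begin) / CLOCKS_PER_SEC);

                c = temp;   /* old block intentionally leaked, as in the question */
                size = newsize;
            }
            return 0;
        }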


  • Deleting from 2 arrays of dictionaries

    - by Matt Winters
    I have an array of dictionaries loaded from a plist (below) called arrayHistory.

        <plist version="1.0">
        <array>
            <dict>
                <key>item</key>
                <string>1</string>
                <key>result</key>
                <string>8.1</string>
                <key>date</key>
                <date>2009-12-15T19:36:59Z</date>
            </dict>
            ...
        </array>
        </plist>

    I filter this array based on 'item' so that a second array, arrayHistoryDetail, has the same structure as arrayHistory but only contains, e.g., items equal to '1'. These detail items are successfully displayed in a tableView. I then want to select an item from the tableView and delete it from the tableView data source, arrayHistoryDetail (line 2 in the code below) - works; then I want to delete the item from the tableView itself (line 3 in the code below) - also works. My problem is that I also need to delete it from the original arrayHistory, so I tried the following. I created a temporary dictionary as an ivar:

        NSMutableDictionary *tempDict;
        @property (nonatomic, retain) NSMutableDictionary *tempDict;

    My thinking was then to make a copy in line 1 and remove it from the original array in line 4:

        1 tempDict = [arrayHistoryDetail objectAtIndex: indexPath.row];
        2 [arrayHistoryDetail removeObjectAtIndex: indexPath.row];
        3 [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade];
        4 [arrayHistory removeObject:tempDict];

    Didn't work. Could someone please guide me in the right direction? I'm thinking that tempDict is a pointer and that removeObject needs a copy? I don't know. Thanks.


  • Quickly accessing files in a 'project'

    - by bbbscarter
    Hi all. I'm looking for a way to quickly open files in my project's source tree. What I've been doing so far is adding files to the file-name cache like so:

        (file-cache-add-directory-recursively
          (concat project-root "some/sub/folder") ".*\\.\\(py\\)$")

    after which I can use anything-for-files to access any file in the source tree in about four keystrokes. Unfortunately, this solution started falling over today: I added another folder to the cache, and Emacs has started running out of memory. What's weird is that this folder contains less than 25% of the files I'm adding, and yet Emacs memory use goes up from 20MB to 400MB on adding just this folder. The total number of files is around 2000, so this memory use seems very high. Presumably I'm abusing the file cache. Anyway, what do other people do for this? I like this solution for its simplicity and speed; I've looked at some of the many, many project-management packages for Emacs and none of them really grabbed me... Thanks in advance! Simon


  • local vs core controller

    - by latvian
    Hi, I am adding a new column and action in the local admin grid app/code/local/Mage/Adminhtml/Block/Catalog/Product/Grid.php, which works fine. However, the local controller app/code/local/Mage/Adminhtml/Block/Catalog/Product.php is not being used, i.e. it is not overloading the core one, app/code/core/Mage/Adminhtml/Block/Catalog/Product.php. This is an almost fresh install of Magento 1.4.0.1. I am the only one working on it, so I know it is not overloaded by some custom controller. I have disabled all custom modules, rolled back most of my changes, checked /etc/Modules/Mage_Catalog.xml, refreshed the cache all possible ways, and logged in and out. Nothing... it is still using the core copy. Why? How do you troubleshoot this - at what moment does Magento decide between the core and local copies? It's even stranger because it does not parse the local Adminhtml config.xml, yet it does use the local Adminhtml copy of the blocks. Any pointer would help. I would like to keep everything in local code. Thank you, Margots


  • Return a Const Char* by reading an @property NSString in separate class

    - by Andrew
    I'm probably being an idiot here, but I cannot for the life of me find the answer that I'm looking for. I have an array of CalEvents returned from a CalendarStore query, and for other reasons I am finding the first location of any upcoming event for today that is not an all-day or multi-day event.

        +(const char*) suggestFirstiCalLocation {
            CalCalendarStore *store = [CalCalendarStore defaultCalendarStore];
            NSPredicate *allEventsPredicate = [CalCalendarStore eventPredicateWithStartDate:[NSDate date]
                                                endDate:[[NSDate date] initWithTimeIntervalSinceNow:3600]
                                                calendars:[store calendars]];
            NSArray *currentEventCalendarArray = [store eventsWithPredicate:allEventsPredicate];

            for (int i = 0; i < [currentEventCalendarArray count]; i++) {
                if (![[currentEventCalendarArray objectAtIndex:i] isAllDay]) {
                    //Now that other events are cleared, check for multi-day
                    NSDate *startOnDate = [[currentEventCalendarArray objectAtIndex:i] startDate];
                    NSDate *endOnDate = [[currentEventCalendarArray objectAtIndex:i] endDate];
                    if ([endOnDate timeIntervalSinceDate:startOnDate] < 86400.0) {
                        NSString *iCalLocation = [[currentEventCalendarArray objectAtIndex:i] location];
                        return [iCalLocation UTF8String];
                    }
                }
            }
            return "";
        }

    For other reasons, I am returning a const char* with the value of the location that is found. However, I cannot seem to return "iCalLocation" at all. The compiler fails on the line initializing the "iCalLocation" variable: "Cannot convert to pointer type". Being frank: I am new to Objective-C, and I am still trying to figure out pointers, properties, and such.


  • jQuery slideToggle jump on close

    - by spankmaster79
    Hi, I'm trying to slideToggle a div in a table with jQuery, and it works in FF, Opera, and Chrome. Only IE (6, 7, 8) is giving me problems: it slides down and up perfectly, but after the slide-up animation is finished, the hidden div pops up in full height and then closes. So I guess it must be somewhere in between, when it switches from a minimal height to display:none, that it appears for a short second. The code is built dynamically, but I'll try to give an example:

        <table>
            <tr>
                <td>
                    <script type="text/javascript">
                        function toggleTr_{$dynamicID}() {
                            $('#content_{$dynamicID}').slideToggle('slow');
                            /* DO SOME OTHER STUFF LIKE COLOR CHANGES, CSS CLASS CHANGES */
                        }
                    </script>
                </td>
            </tr>
            <tr id="list_{$dynamicID}" onclick="toggleTr_{$dynamicID}();" style="cursor:pointer;">
                <td>
                    <!-- INFO HEADER -->
                </td>
            </tr>
            <tr>
                <td>
                    <div id="content_{$dynamicID}" style="display:none;">
                        <!-- INFO BODY HIDDEN -->
                    </div>
                </td>
            </tr>
        </table>

    Other slideToggle problems described here were about padding, margin, or problems with the animation itself, but all of that works for me. Help is appreciated. Thx, Spanky


  • How to cast a C struct to another struct type if their memory sizes are equal?

    - by Eonil
    I have two matrix structs that hold equal data but have different forms, like these:

        // Matrix type 1.
        typedef float Scalar;
        typedef struct { Scalar e[4]; } Vector;
        typedef struct { Vector e[4]; } Matrix;

        // Matrix type 2 (you may know this if you're an iPhone developer)
        struct CATransform3D {
            CGFloat m11, m12, m13, m14;
            CGFloat m21, m22, m23, m24;
            CGFloat m31, m32, m33, m34;
            CGFloat m41, m42, m43, m44;
        };
        typedef struct CATransform3D CATransform3D;

    Their memory sizes are equal, so I believe there is a way to convert between these types without any pointer operations or copies, like this:

        // Implemented by an external lib.
        CATransform3D CATransform3DMakeScale (CGFloat sx, CGFloat sy, CGFloat sz);

        Matrix m = (Matrix)CATransform3DMakeScale(1, 2, 3);

    Is this possible? Currently the compiler prints an "error: conversion to non-scalar type requested" message.
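
    A cast expression like (Matrix)... is only legal for scalar types, which is exactly what the compiler error says. A sketch of the usual portable workaround (one memcpy, which compilers routinely optimize away; it assumes the two structs really are the same size, i.e. Scalar and CGFloat have the same width):

        #include <string.h>

        /* Uses the Matrix and CATransform3D definitions above. */
        Matrix MatrixFromCATransform3D(CATransform3D t)
        {
            Matrix m;
            memcpy(&m, &t, sizeof m);   /* bitwise copy; no aliasing violation */
            return m;
        }

    The size assumption is worth a compile-time check, since CGFloat is double on some platforms while Scalar here is float.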


  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on it. We have approximately eight slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server. We just upgraded from a machine with only 4G RAM but similar hard drives, RAID, etc. It also ran Apache, so it was both our DB server and our application server. It was getting a little slow, so we split the DB onto this new machine and kept the application server on the first one. We also distributed the application load among a few of our slave servers, which also run the application. The problem is that on the new DB server, mysqld.exe is consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net covers config files for small machines, so can anyone help me get the my.ini file right for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup. Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actually executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements, if you
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a disk
        # based table. This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition for physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.


  • How to draw a part of a window into a memory device context?

    - by Nell
    I'm using simple statements to keep it, er, simple: The screen goes from 0, 0 to 1000, 1000 (screen coordinates). A window goes from 100, 100 to 900, 900 (screen coordinates). I have a memory device context that goes from 0, 0 to 200, 200 (logical coordinates). I need to send a WM_PRINT message to the window. I can pass the device context to the window via WM_PRINT, but I cannot tell the window which part of itself it should draw into the device context. Is there some way to alter the device context so that the window draws a specific part of itself into it (say, its bottom-right portion from 700, 700 to 900, 900)? (This is all under plain old GDI and in C or C++; any solution must be too.) Please note: this problem is part of a larger solution in which the device context size is fixed and speed is crucial, so I cannot draw the window in full into a separate device context and blit the part I want from the resulting full bitmap into my device context.
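
    One idea worth trying (an assumption, not a verified fix): offset the memory DC's viewport origin before sending WM_PRINT, so the window paints itself shifted and only the wanted region lands inside the DC's 0,0-200,200 area:

        #include <windows.h>

        void PrintBottomRight(HWND hwnd, HDC hdcMem)
        {
            // Shift drawing up/left so the region at window offset 600,600
            // (screen 700,700 given a window origin of 100,100) maps to 0,0.
            POINT old;
            SetViewportOrgEx(hdcMem, -600, -600, &old);

            SendMessage(hwnd, WM_PRINT, (WPARAM)hdcMem,
                        PRF_CLIENT | PRF_NONCLIENT | PRF_CHILDREN);

            SetViewportOrgEx(hdcMem, old.x, old.y, NULL); // restore
        }

    Whether the window honors the shifted origin depends on how it paints (some controls query rectangles and draw absolutely), so this needs testing per window class.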


  • pthread_create followed by pthread_detach still results in possibly lost error in Valgrind.

    - by alesplin
    I'm having a problem with Valgrind telling me I have some memory possibly lost:

        ==23205== 544 bytes in 2 blocks are possibly lost in loss record 156 of 265
        ==23205==    at 0x6022879: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
        ==23205==    by 0x540E209: allocate_dtv (in /lib/ld-2.12.1.so)
        ==23205==    by 0x540E91D: _dl_allocate_tls (in /lib/ld-2.12.1.so)
        ==23205==    by 0x623068D: pthread_create@@GLIBC_2.2.5 (in /lib/libpthread-2.12.1.so)
        ==23205==    by 0x758D66: MTPCreateThreadPool (MTP.c:290)
        ==23205==    by 0x405787: main (MServer.c:317)

    The code that creates these threads (MTPCreateThreadPool) basically gets an index into a block of waiting pthread_t slots and creates a thread with it. TI becomes a pointer to a struct that has a thread index and a pthread_t. Simplified/sanitized:

        for (tindex = 0; tindex < NumThreads; tindex++) {
            int rc;

            TI = &TP->ThreadInfo[tindex];
            TI->ThreadID = tindex;

            rc = pthread_create(&TI->ThreadHandle, NULL, MTPHandleRequestsLoop, TI);
            /* check for non-success that I've omitted */

            pthread_detach(&TI->ThreadHandle);
        }

    Then we have a function MTPDestroyThreadPool that loops through all the threads we created and cancels them (since MTPHandleRequestsLoop doesn't exit):

        for (tindex = 0; tindex < NumThreads; tindex++) {
            pthread_cancel(TP->ThreadInfo[tindex].ThreadHandle);
        }

    I've read elsewhere (including other questions here on SO) that detaching a thread explicitly should prevent this "possibly lost" error, but it clearly isn't doing so here. Any thoughts?
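
    For reference, and assuming the sanitized code matches the real thing: pthread_detach takes the pthread_t by value, not its address, and an alternative that avoids a separate detach step entirely is creating the thread detached from the start (a sketch; names are illustrative):

        #include <pthread.h>

        /* Create a thread that is detached from the start, so no separate
           pthread_detach call (and no joinable bookkeeping) is needed. */
        int create_detached(pthread_t *handle, void *(*fn)(void *), void *arg)
        {
            pthread_attr_t attr;
            int rc;

            pthread_attr_init(&attr);
            pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
            rc = pthread_create(handle, &attr, fn, arg);
            pthread_attr_destroy(&attr);
            return rc;
        }

    Note that glibc caches a thread's TLS block (the allocate_dtv allocation in the trace) for reuse, so Valgrind can report it as "possibly lost" even when nothing is actually leaking.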


  • Should this work?

    - by Noah Roberts
    I am trying to specialize a metafunction on a type that has a function pointer as one of its parameters. The code compiles just fine, but it will simply not match the type.

        #include <iostream>
        #include <boost/mpl/bool.hpp>
        #include <boost/mpl/identity.hpp>

        template < typename CONT, typename NAME, typename TYPE,
                   TYPE (CONT::*getter)() const,
                   void (CONT::*setter)(TYPE const&) >
        struct metafield_fun {};

        struct test_field {};

        struct test
        {
            int testing() const { return 5; }
            void testing(int const&) {}
        };

        template < typename T >
        struct field_writable : boost::mpl::identity<T> {};

        template < typename CONT, typename NAME, typename TYPE,
                   TYPE (CONT::*getter)() const >
        struct field_writable< metafield_fun<CONT, NAME, TYPE, getter, 0> >
            : boost::mpl::false_ {};

        typedef metafield_fun<test, test_field, int, &test::testing, 0> unwritable;

        int main()
        {
            std::cout << typeid(field_writable<unwritable>::type).name() << std::endl;
            std::cin.get();
        }

    The output is always the type passed in, never bool_.

