Search Results

Search found 12988 results on 520 pages for 'performance'.


  • C++: Copy constructor: Use Getters or access member vars directly?

    - by cbrulak
    Have a simple container class:

      class Container {
      public:
          Container() {}

          Container(const Container& cont)      // option 1
          { SetMyString(cont.GetMyString()); }

          // OR

          Container(const Container& cont)      // option 2
          { m_str1 = cont.m_str1; }

          string GetMyString() { return m_str1; }
          void SetMyString(string str) { m_str1 = str; }

      private:
          string m_str1;
      };

    So, would you recommend this method, or accessing the member variables directly? In the example all the code is inline, but in our real code there is no inline code.

    Update (29 Sept 09): Some of these answers are well written, but they seem to miss the point of this question: this is a simple, contrived example to frame the discussion of getters/setters vs. direct member access. Initializer lists and private validator functions are not really part of this question. I'm wondering whether either design makes the code easier to maintain and extend. Some people are focusing on the string in this example, but it is just an example; imagine it is a different object instead. I'm not concerned about performance; we're not programming on the PDP-11.
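    For reference, a minimal compilable sketch of the two options (both constructors cannot coexist in one class, since they have the same signature, so the second is shown as a comment):

      #include <string>

      class Container {
      public:
          Container() = default;

          // Option 1: route the copy through the accessors, so any validation
          // later added to SetMyString also applies to copies.
          Container(const Container& cont) { SetMyString(cont.GetMyString()); }

          // Option 2 (direct member access) would instead be:
          //   Container(const Container& cont) : m_str1(cont.m_str1) {}

          const std::string& GetMyString() const { return m_str1; }
          void SetMyString(const std::string& str) { m_str1 = str; }

      private:
          std::string m_str1;
      };

    One maintainability argument for option 1 is that the setter gives you a single place to attach invariants later; the trade-off is that the copy constructor then only copies state that has accessors.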

    Read the article

  • Data structure choices for high-speed, memory-efficient detection of duplicate strings

    - by Jonathan Holland
    I have an interesting problem that could be solved in a number of ways: I have a function that takes in a string. If the function has never seen the string before, it needs to perform some processing. If it has seen the string before, it skips processing. After a specified amount of time, the function should accept duplicate strings again.

    This function may be called thousands of times per second, and the string data may be very large. This is a highly abstracted explanation of the real application; I'm just trying to get down to the core concept for the purpose of the question.

    The function will need to store state in order to detect duplicates. It will also need to store an associated timestamp in order to expire duplicates. It does NOT need to store the strings; a unique hash of the string would be fine, provided there are no false positives due to collisions (use a perfect hash?) and the hash function is fast enough.

    The naive implementation would simply be (in C#):

      Dictionary<String, DateTime>

    though in the interest of lowering the memory footprint and potentially increasing performance, I'm evaluating custom data structures to handle this instead of a basic hashtable. So, given these constraints, what would you use?

    EDIT, some additional information that might change proposed implementations: 99% of the strings will not be duplicates. Almost all of the duplicates will arrive back to back, or nearly sequentially. In the real world, the function will be called from multiple worker threads, so state management will need to be synchronized.
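    As a baseline, a sketch of the naive approach in C++ (the equivalent of the Dictionary<String, DateTime> idea; single-threaded and storing full strings rather than hashes, so it is a starting point rather than the memory-optimized structure being asked for):

      #include <chrono>
      #include <string>
      #include <unordered_map>

      using Clock = std::chrono::steady_clock;

      class DuplicateFilter {
      public:
          explicit DuplicateFilter(Clock::duration ttl) : ttl_(ttl) {}

          // Returns true if the string should be processed (not a live duplicate).
          bool shouldProcess(const std::string& s) {
              const auto now = Clock::now();
              auto [it, inserted] = seen_.try_emplace(s, now);
              if (inserted) return true;            // never seen before
              if (now - it->second >= ttl_) {       // entry expired: accept again
                  it->second = now;
                  return true;
              }
              return false;                          // live duplicate: skip
          }

          // Usage: DuplicateFilter filter(std::chrono::minutes(5));

      private:
          Clock::duration ttl_;
          std::unordered_map<std::string, Clock::time_point> seen_;
      };

    A real version would also need periodic eviction of expired entries and a lock (or sharded locks) for the multi-threaded case.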

    Read the article

  • Using Lucene to index private data: should I have a separate index for each user, or a single index?

    - by Nathan Bayles
    I am developing an Azure-based website and I want to provide search capabilities using Lucene. (Structured JSON objects would be indexed and stored in Lucene; other content, such as Word documents, would be indexed in Lucene but stored in blob storage.) I want the search to be secure, such that one user would never see a document belonging to another user. I want to allow ad-hoc searches as typed by the user. Lastly, I want to query programmatically to return predefined sets of data, such as "all notes for user X". I think I understand how to add properties to each document to achieve these three objectives. (I am listing them here so that anyone kind enough to answer will have a better idea of what I am trying to do.)

    My questions revolve around performance and security. Can I improve document security by having a separate index for each user, or is including the user's ID as a parameter in each search sufficient? Can I improve indexing speed and total throughput of the system by having a separate index for each user? My thinking is that separate indexes would allow me to scale the system by having multiple index writers (perhaps even on different server instances) working at the same time, each on its own index.

    Any insight would be greatly appreciated. Regards, Nate

    Read the article

  • UDP security and identifying incoming data.

    - by Charles
    I have been creating an application that uses UDP for transmitting and receiving information. The problem I am running into is security. Right now I am using the IP/socket ID to determine what data belongs to whom. However, I have been reading about how people can simply spoof their IP and then send data as a specific IP, so this seems to be the wrong (insecure) way to do it.

    How else am I supposed to identify what data belongs to which users? For instance, say you have 10 users connected, all with specific data. The server would need to match each user to the data it receives. The only way I can see to do this is to use some sort of client/server key system and encrypt the data. I am curious how other applications (or games, since that's what this application is) make sure their data is genuine. Also, there is the fact that encrypted data takes longer to process than unencrypted data, although I am not sure by how much it will affect performance. Any information would be appreciated. Thanks.
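    One common pattern, sketched below with entirely hypothetical names, is to issue each client a random session token over a trusted channel (e.g. a TCP/TLS login) and require it in every datagram; identity is then proven by the token, not by the source IP. A plain token can still be sniffed and replayed, so real deployments typically compute an HMAC over the payload plus a sequence number, or use DTLS, instead:

      #include <array>
      #include <cstdint>
      #include <cstring>
      #include <unordered_map>

      using SessionToken = std::array<std::uint8_t, 16>; // random, issued at login

      struct Datagram {
          std::uint32_t sessionId;   // claimed identity
          SessionToken  token;       // proof of identity
          // ... payload follows
      };

      class SessionTable {
      public:
          void add(std::uint32_t id, const SessionToken& t) { sessions_[id] = t; }

          // Accept a packet only if the token matches the one issued for that
          // session, regardless of the packet's source IP address.
          bool authenticate(const Datagram& d) const {
              auto it = sessions_.find(d.sessionId);
              return it != sessions_.end() &&
                     std::memcmp(it->second.data(), d.token.data(), d.token.size()) == 0;
          }

      private:
          std::unordered_map<std::uint32_t, SessionToken> sessions_;
      };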

    Read the article

  • How much does an InnoDB table benefit from having fixed-length rows?

    - by Philip Eve
    I know that, depending on the storage engine in use, a performance benefit can be had if all of the rows in a table are guaranteed to be the same length (by avoiding nullable columns and not using any VARCHAR, TEXT or BLOB columns). I'm not clear on how far this applies to InnoDB, with its funny table arrangements. Let's give an example: I have the following table

      CREATE TABLE `PlayerGameRcd` (
        `User`            SMALLINT UNSIGNED NOT NULL,
        `Game`            MEDIUMINT UNSIGNED NOT NULL,
        `GameResult`      ENUM('Quit', 'Kicked by Vote', 'Kicked by Admin',
                               'Kicked by System', 'Finished 5th', 'Finished 4th',
                               'Finished 3rd', 'Finished 2nd', 'Finished 1st',
                               'Game Aborted', 'Playing', 'Hide')
                          NOT NULL DEFAULT 'Playing',
        `Inherited`       TINYINT NOT NULL,
        `GameCounts`      TINYINT NOT NULL,
        `Colour`          TINYINT UNSIGNED NOT NULL,
        `Score`           SMALLINT UNSIGNED NOT NULL DEFAULT 0,
        `NumLongTurns`    TINYINT UNSIGNED NOT NULL DEFAULT 0,
        `Notes`           MEDIUMTEXT,
        `CurrentOccupant` TINYINT UNSIGNED NOT NULL DEFAULT 0,
        PRIMARY KEY (`Game`, `User`),
        UNIQUE KEY `PGR_multi_uk` (`Game`, `CurrentOccupant`, `Colour`),
        INDEX `Stats_ind_PGR` (`GameCounts`, `GameResult`, `Score`, `User`),
        INDEX `GameList_ind_PGR` (`User`, `CurrentOccupant`, `Game`, `Colour`),
        CONSTRAINT `Constr_PlayerGameRcd_User_fk`
          FOREIGN KEY `User_fk` (`User`) REFERENCES `User` (`UserID`)
          ON DELETE CASCADE ON UPDATE CASCADE,
        CONSTRAINT `Constr_PlayerGameRcd_Game_fk`
          FOREIGN KEY `Game_fk` (`Game`) REFERENCES `Game` (`GameID`)
          ON DELETE CASCADE ON UPDATE CASCADE
      ) ENGINE=INNODB CHARACTER SET utf8 COLLATE utf8_general_ci

    The only nullable column is Notes, which is MEDIUMTEXT. This table presently has 33097 rows (which I appreciate is small as yet). Of these rows, only 61 have values in Notes. How much of an improvement might I see from, say, adding a new table to store the Notes column in and performing LEFT JOINs when necessary?

    Read the article

  • C++ Array vs vector

    - by blue_river
    When using a C++ vector, the time spent is 718 milliseconds, whereas when I use an array, the time is almost 0 milliseconds. Why such a big performance difference?

      int _tmain(int argc, _TCHAR* argv[])
      {
          const int size = 10000;
          clock_t start, end;

          start = clock();
          vector<int> v(size*size);
          for(int i = 0; i < size; i++) {
              for(int j = 0; j < size; j++) {
                  v[i*size+j] = 1;
              }
          }
          end = clock();
          cout << (end - start) << " milliseconds." << endl; // 718 milliseconds

          int f = 0;
          start = clock();
          int arr[size*size];
          for(int i = 0; i < size; i++) {
              for(int j = 0; j < size; j++) {
                  arr[i*size+j] = 1;
              }
          }
          end = clock();
          cout << (end - start) << " milliseconds." << endl; // 0 milliseconds
          return 0;
      }
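    Two things likely invalidate this comparison: a 10000*10000 int array is roughly 400 MB, far beyond any default stack size, so `int arr[size*size]` is undefined behaviour (and a compiler is free to drop the never-read loop entirely, which would explain the 0 ms); and an unoptimized build pays a function-call cost on vector's operator[]. A sketch of a fairer benchmark, assuming an optimized build, that keeps both buffers on the heap and prevents dead-code elimination:

      #include <chrono>
      #include <iostream>
      #include <memory>
      #include <vector>

      int main() {
          const std::size_t size = 10000;
          const std::size_t n = size * size;

          auto t0 = std::chrono::steady_clock::now();
          std::vector<int> v(n);
          for (std::size_t i = 0; i < n; ++i) v[i] = 1;
          auto t1 = std::chrono::steady_clock::now();

          // 400 MB does not fit on the stack, so heap-allocate the raw array too.
          std::unique_ptr<int[]> arr(new int[n]);
          for (std::size_t i = 0; i < n; ++i) arr[i] = 1;
          auto t2 = std::chrono::steady_clock::now();

          using ms = std::chrono::milliseconds;
          std::cout << "vector: " << std::chrono::duration_cast<ms>(t1 - t0).count() << " ms\n";
          std::cout << "array:  " << std::chrono::duration_cast<ms>(t2 - t1).count() << " ms\n";

          // Print a value so the optimizer cannot discard either loop entirely.
          std::cout << v[n - 1] + arr[n - 1] << '\n';
          return 0;
      }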

    Read the article

  • Best way to update/insert into a table based on a remote table.

    - by martilyo
    I have two very large enterprise tables in an Oracle 10g database. One table keeps the historical information of the other. The problem is that there are now so many records that my insert/update is taking too long and my session is getting killed by the governor.

    Here's pseudocode of my update process:

      sqlsel := 'SELECT col1, col2, col3, sysdate
                 FROM table2@remote_location dpi
                 WHERE (col1, col2, col3) IN
                 (
                   SELECT col1, col2, col3
                   FROM table2@remote_location
                   MINUS
                   SELECT DISTINCT col1, col2, col3
                   FROM table1 mpc
                   WHERE facility = '''||load_facility||'''
                 )';

      EXECUTE IMMEDIATE sqlsel BULK COLLECT INTO table1;

    I've tried the MERGE statement:

      MERGE INTO table1 t1
      USING (
        SELECT col1, col2, col3 FROM table2@remote_location
      ) t2
      ON (
        t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3
      )
      WHEN NOT MATCHED THEN INSERT
        (t1.col1, t1.col2, t1.col3, t1.update_dttm)
      VALUES
        (t2.col1, t2.col2, t2.col3, sysdate);

    But there seems to be a confirmed bug in the MERGE statement on versions prior to Oracle 10.2.0.4 when merging against a remote database. The chance of getting an enterprise upgrade is slim, so is there a way to further optimize my first query, or to write it another way so that it performs better? Thanks.

    Read the article

  • Advice on Minimizing Stored Procedure Parameters

    - by RPM1984
    Hi guys, I have an ASP.NET MVC web application that interacts with a SQL Server 2008 database via Entity Framework 4.0. On a particular page, I call a stored procedure in order to pull back some results based on selections in the UI. Now, the UI has around 20 different input selections, ranging from a textbox to dropdown lists, checkboxes, etc. Each of those inputs is "grouped" into a logical section. Example:

      Search box:  "Foo"
      Checkbox A1: ticked,  Checkbox A2: unticked
      Dropdown A:  option 3 selected
      Checkbox B1: ticked,  Checkbox B2: ticked,  Checkbox B3: unticked

    So I need to call the sproc like this:

      exec SearchPage_FindResults
        @SearchQuery = 'Foo',
        @IncludeA1 = 1,
        @IncludeA2 = 0,
        @DropDownSelection = 3,
        @IncludeB1 = 1,
        @IncludeB2 = 1,
        @IncludeB3 = 0

    The UI is not too important to this question; I just wanted to give some perspective. Essentially, I'm pulling back results for a search query and filtering those results based on a bunch of (optional) selections.

    Now, my questions: What's the best way to pass these parameters to the stored procedure? Are there any tricks or new ways (e.g. SQL Server 2008) to do this? Special "table" parameters/arrays: can we pass through user-defined types? Keep in mind I'm using Entity Framework 4.0, but I could always use classic ADO.NET for this if required. What about XML? What are the serialization/deserialization costs there, and is it worth it? How about a parameter for each logical section, comma-separated perhaps? Just thinking out loud.

    This page is particularly important from a user point of view and needs to perform really well. The stored procedure is already heavy in logic, so I want to minimize the performance implications. With that said, what is the best approach here?

    Read the article

  • Freeing of allocated memory in Solaris/Linux

    - by user355159
    Hi, I have written a small program, compiled under both Solaris and Linux, to measure the performance of applying this approach to my application. The program works as follows: first, using the sbrk(0) system call, I take the base address of the heap. Then I allocate 1.5 GB of memory using malloc and use memcpy to copy 1.5 GB of content into the allocated area. Then I free the allocated memory. After freeing, I call sbrk(0) again to view the heap size.

    This is where I am a little confused. On Solaris, even though I freed the allocated memory (nearly 1.5 GB), the heap size of the process remains huge. But when I run the same application on Linux, after freeing, I find that the heap size of the process equals its size before the 1.5 GB allocation.

    I know Solaris does not free memory immediately, but I don't know how to tune the Solaris kernel to free the memory immediately after the free() call. Also, please explain why the same problem does not occur on Linux. Can anyone help me out with this? Thanks, Santhosh.
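    A likely explanation, offered as an assumption rather than something from the article: glibc on Linux serves requests above M_MMAP_THRESHOLD with mmap and returns them to the OS with munmap the moment they are freed (so the sbrk heap never grows), while the classic Solaris libc malloc carves large blocks out of the sbrk heap and keeps freed space on its free list for reuse. A glibc-only sketch of an experiment; mallopt, malloc_trim and sbrk are all non-standard, so this will not build everywhere:

      #include <cstdlib>
      #include <cstring>
      #include <iostream>
      #include <malloc.h>   // mallopt, malloc_trim (glibc)
      #include <unistd.h>   // sbrk

      int main() {
          // Force glibc to satisfy even huge requests from the sbrk heap,
          // mimicking the Solaris behaviour described above.
          mallopt(M_MMAP_MAX, 0);

          std::cout << "break before malloc: " << sbrk(0) << '\n';
          const std::size_t sz = 1500UL * 1024 * 1024;
          char* p = static_cast<char*>(std::malloc(sz));
          std::memset(p, 1, sz);                        // actually touch the pages
          std::cout << "break after touch:   " << sbrk(0) << '\n';

          std::free(p);
          malloc_trim(0);                               // hand the trimmed heap back to the OS
          std::cout << "break after trim:    " << sbrk(0) << '\n';
          return 0;
      }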

    Read the article

  • Array of Arrays - writing to File problem

    - by iFloh
    Hi, and again my array of arrays... I am trying to improve my app's performance by caching arrays in a file for later reuse. I have an NSMutableArray that contains about 30 NSMutableArrays with NSNumber, NSDate and NSString objects. I try to write the file using this call:

      BOOL result = [myArray writeToFile:[fileMethods getFullPath:
                        [NSString stringWithFormat:@"iEts%@.arr", [aDate shortDateString]]]
                              atomically:NO];

    which gives result == FALSE. The path method is:

      + (NSString *) getFullPath:(NSString *)forFileName {
          NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
          NSString *documentsDirectory = [paths objectAtIndex:0];
          return [documentsDirectory stringByAppendingPathComponent:forFileName];
      }

    and the aDate call returns a shortDateString of the form ddMMyy. Logging the generated path with

      NSLog(@"%@", [fileMethods getFullPath:[NSString stringWithFormat:@"iEts%@.arr", [aDate shortDateString]]]);

    prints:

      /Users/me/Library/Application Support/iPhone Simulator/User/Applications/86729620-EC1D-4C10-A799-0C638BB27933/Documents/iEts010510.arr

    FURTHER: it must have something to do with the array of arrays, since I also write three further simple arrays (containing NSStrings), which all succeed. The array of arrays is built using the addObject method. Any ideas what could cause the trouble?

    Read the article

  • R optimization: How can I avoid a for loop in this situation?

    - by chrisamiller
    I'm trying to do a simple genomic track intersection in R, and running into major performance problems, probably related to my use of for loops. In this situation, I have pre-defined windows at intervals of 100bp and I'm trying to calculate how much of each window is covered by the annotations in mylist. Graphically, it looks something like this:

               0     100   200   300   400   500   600
      windows: |-----|-----|-----|-----|-----|-----|
      mylist:  |-|      |-----------|

    So I wrote some code to do just that, but it's fairly slow and has become a bottleneck in my code:

      ## window for each 100-bp segment
      windows <- numeric(6)

      ## second track
      mylist = vector("list")
      mylist[[1]] = c(1,20)
      mylist[[2]] = c(120,320)

      ## do the intersection
      for(i in 1:length(mylist)){
          st <- floor(mylist[[i]][1]/100)+1
          sp <- floor(mylist[[i]][2]/100)+1
          for(j in st:sp){
              b <- max((j-1)*100, mylist[[i]][1])
              e <- min(j*100, mylist[[i]][2])
              windows[j] <- windows[j] + e - b + 1
          }
      }

      print(windows)
      [1]  20  81 101  21   0   0

    Naturally, this is being used on data sets that are much larger than the example I provide here. Through some profiling, I can see that the bottleneck is in the for loops, but my clumsy attempt to vectorize it using *apply functions resulted in code that runs an order of magnitude more slowly. I suppose I could write something in C, but I'd like to avoid that if possible. Can anyone suggest another approach that will speed this calculation up?

    Read the article

  • Setting pixel values in Nvidia NPP ImageCPU objects?

    - by solvingPuzzles
    In the Nvidia Performance Primitives (NPP) image-processing examples in the CUDA SDK distribution, images are typically stored on the CPU as ImageCPU objects and on the GPU as ImageNPP objects. boxFilterNPP.cpp is an example from the CUDA SDK that uses these ImageCPU and ImageNPP objects.

    When using a filter (convolution) function like nppiFilter, it makes sense to define the kernel as an ImageCPU object. However, I see no clear way of setting the values of an ImageCPU object:

      npp::ImageCPU_32f_C1 hostKernel(3,3); // allocate space for a 3x3 convolution kernel
      // want to set hostKernel to [-1 0 1; -1 0 1; -1 0 1]
      hostKernel[0][0] = -1;    // this doesn't compile
      hostKernel(0,0) = -1;     // this doesn't compile
      hostKernel.at(0,0) = -1;  // this doesn't compile

    How can I manually put values into an ImageCPU object?

    Notes: I didn't actually use nppiFilter in the code snippet; I'm just mentioning nppiFilter as a motivating example for writing values into an ImageCPU object. The boxFilterNPP.cpp example doesn't involve writing directly to an ImageCPU object, because nppiFilterBox is a special case of nppiFilter that uses a built-in smoothing filter (probably something like [1 1 1; 1 1 1; 1 1 1]).
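    A sketch of one way to do it, assuming the UtilNPP C++ wrappers shipped with the SDK samples (ImagesCPU.h), whose ImageCPU types expose data() and pitch(); treat the exact accessor names as an assumption to verify against your SDK version. The pitch is in bytes, so rows are stepped with byte arithmetic:

      #include <ImagesCPU.h>  // UtilNPP helper from the CUDA SDK samples

      // Fill a 3x3 ImageCPU kernel with [-1 0 1; -1 0 1; -1 0 1].
      void fillKernel(npp::ImageCPU_32f_C1& img) {
          const Npp32f values[3][3] = { {-1, 0, 1}, {-1, 0, 1}, {-1, 0, 1} };
          for (int y = 0; y < 3; ++y) {
              // pitch() is the row stride in bytes, which may exceed width*sizeof(Npp32f).
              Npp32f* row = reinterpret_cast<Npp32f*>(
                  reinterpret_cast<unsigned char*>(img.data()) + y * img.pitch());
              for (int x = 0; x < 3; ++x) {
                  row[x] = values[y][x];
              }
          }
      }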

    Read the article

  • How to give new life to a five-year-old, simple but reliable PHP form?

    - by Sam
    Hi all. I have a script running under PHP 5.2. I want to use a simple form, and I found something a programmer made for me about five years ago. When I use it, PHP now outputs an error unless I set register_long_arrays = On; then it works fine. The PHP website, however, says:

      Warning: This feature has been DEPRECATED as of PHP 5.3.0. Relying on this
      feature is highly discouraged. It's recommended to turn them off, for
      performance reasons. Instead, use the superglobal arrays, like $_GET.

    Should I listen to PHP's warning, or just enable the option and keep using my old form happily? If the former, then how and where do I change this simple form so that it no longer relies on the deprecated setting? Your answer is much appreciated.

    form.htm

      <html><body>
      <form method="POST" action="form_sent.php">
      ...
      </form>
      </body></html>

    form_sent.php

      <html><body>
      <?php
      $email = $HTTP_POST_VARS[email];
      $mailto = "[email protected]";
      $mailsubj = "A Form was Sent from Website!";
      $mailhead = "From: $email\n";
      reset($HTTP_POST_VARS);
      $mailbody = "Values submitted from web site form:\n";
      while (list($key, $val) = each($HTTP_POST_VARS)) {
          $mailbody .= "$key : $val\n";
      }
      if (!eregi("\n", $HTTP_POST_VARS[email])) {
          mail($mailto, $mailsubj, $mailbody, $mailhead);
      }
      ?>
      <b>Form Sent. Thank you.</b>
      </body></html>

    Read the article

  • Inline form editing on client side

    - by bykasif
    I see some web sites use dynamic forms (I am not sure what to call them) to edit a group of data. For example: there is a group of data such as name, last name, city, country, etc. When the user clicks an EDIT button, instead of doing a postback, a form consisting of two textboxes and two comboboxes dynamically opens for editing. Then, when you click a Save button, the edit form disappears and all the data updates.

    Now, I know that what happens here is Ajax for the server calls and some JavaScript for the DOM manipulation. I even found some jQuery plugins for textbox editing; however, I could not find anything that is a full implementation with form fields. I therefore implemented it in ASP.NET with jQuery Ajax calls and manual DOM manipulation. Here is my process:

      1) When the Edit button is clicked: make an Ajax call to the server to retrieve the necessary formedit.aspx.
      2) It returns editable form fields with values assigned.
      3) When the Save button is clicked: make an Ajax call to the server to retrieve the formupdateprocess.aspx page. It does the database updates and then returns the DOM snippet (...) to insert into the current page.

    Well, it works, but my problem is performance: the result seems slower than the samples I see on other sites. Is there anything I don't know? A better way to implement this?

    Read the article

  • C++ Virtual Methods for Class-Specific Attributes or External Structure

    - by acanaday
    I have a set of classes which are all derived from a common base class, and I want to use these classes polymorphically. The interface defines a set of getter methods whose return values are constant across a given derived class, but vary from one derived class to another. For example:

      enum AVal { A_VAL_ONE, A_VAL_TWO, A_VAL_THREE };
      enum BVal { B_VAL_ONE, B_VAL_TWO, B_VAL_THREE };

      class Base {
          //...
          virtual AVal getAVal() const = 0;
          virtual BVal getBVal() const = 0;
          //...
      };

      class One : public Base {
          //...
          AVal getAVal() const { return A_VAL_ONE; }
          BVal getBVal() const { return B_VAL_ONE; }
          //...
      };

      class Two : public Base {
          //...
          AVal getAVal() const { return A_VAL_TWO; }
          BVal getBVal() const { return B_VAL_TWO; }
          //...
      };

    etc.

    Is this a common way of doing things? If performance is an important consideration, would I be better off pulling the attributes out into an external structure, e.g.:

      struct Vals {
          AVal a_val;
          BVal b_val;
      };

    storing a Vals* in each instance, and rewriting Base as follows?

      class Base {
          //...
      public:
          AVal getAVal() const { return _vals->a_val; }
          BVal getBVal() const { return _vals->b_val; }
          //...
      private:
          Vals* _vals;
      };

    Is the extra dereference essentially the same cost as the vtable lookup? What is the established idiom for this type of situation? Are both of these solutions dumb? Any insights are greatly appreciated.
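    A third variant worth sketching (my own suggestion, not from the article): keep a single virtual that returns a reference to a per-class static descriptor. Each attribute read then costs one virtual call plus a direct member load, no per-instance pointer needs to be stored, and adding a new attribute touches only the descriptor struct:

      enum AVal { A_VAL_ONE, A_VAL_TWO, A_VAL_THREE };
      enum BVal { B_VAL_ONE, B_VAL_TWO, B_VAL_THREE };

      struct Traits { AVal a_val; BVal b_val; };

      class Base {
      public:
          virtual ~Base() = default;
          AVal getAVal() const { return traits().a_val; }
          BVal getBVal() const { return traits().b_val; }
      private:
          // One virtual per class instead of one per attribute.
          virtual const Traits& traits() const = 0;
      };

      class One : public Base {
          const Traits& traits() const override {
              static const Traits t = { A_VAL_ONE, B_VAL_ONE };
              return t;
          }
      };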

    Read the article

  • Cannot disable index during PL/SQL procedure

    - by nw
    I've written a PL/SQL procedure that would benefit if indexes were first disabled, then rebuilt upon completion. An existing thread suggests this approach:

      alter session set skip_unusable_indexes = true;
      alter index your_index unusable;
      [do import]
      alter index your_index rebuild;

    However, I get the following error on the first alter index statement:

      SQL Error: ORA-14048: a partition maintenance operation may not be combined with other operations
      ORA-06512: [...]
      14048. 00000 - "a partition maintenance operation may not be combined with other operations"
      *Cause:  ALTER TABLE or ALTER INDEX statement attempted to combine a
               partition maintenance operation (e.g. MOVE PARTITION) with some
               other operation (e.g. ADD PARTITION or PCTFREE), which is illegal.
      *Action: Ensure that a partition maintenance operation is the sole
               operation specified in ALTER TABLE or ALTER INDEX statement;
               operations other than those dealing with partitions, default
               attributes of partitioned tables/indices or specifying that a
               table be renamed (ALTER TABLE RENAME) may be combined at will.

    The problem index is defined as follows:

      CREATE INDEX A11_IX1 ON STREETS ("SHAPE")
        INDEXTYPE IS "SDE"."ST_SPATIAL_INDEX"
        PARAMETERS ('ST_GRIDS=890,8010,72090 ST_SRID=2');

    This is a custom index type from a third-party vendor, and it causes chronic performance degradation during high-volume update/insert/delete operations. Any suggestions on how to work around this error? By the way, the error only occurs within a PL/SQL block.

    Read the article

  • Using an embedded DB (SQLite / SQL Compact) for Message Passing within an app?

    - by wk1989
    Hello. Just out of curiosity: for applications that have a fairly complicated module tree, would something like SQLite or SQL Compact Edition work well for message passing? Say I have modules containing data such as:

      \SubsystemA\SubSubSysB\ModuleB\ModuleDataC
      \SubSystemB\SubSubSystemC\ModuleA\ModuleDataX

    Using traditional message passing/routing, you have to go through intermediate modules in order to pass a message to ModuleB to request, say, ModuleDataC. Instead of doing that, if we simply store "\SubsystemA\SubSubSysB\ModuleB\ModuleDataC" in a SQLite database, getting that data is as simple as a SQL query and needs no routing and passing around.

    Has anyone done this before? Even if you haven't, do you foresee any issues or performance impact? The only concern I have right now is the passing of custom types: e.g. if ModuleDataC is a custom data structure or a pointer, I'll need some way of storing the structure, or the pointer, in the DB.

    Thanks, JW

    EDIT: One use case I haven't thought about is sending a message from ModuleA to ModuleB to get ModuleB to do something, rather than just getting/setting data. Is it possible to do this using an embedded DB? I believe a callback from the DB would be needed; how feasible is this?
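    A minimal sketch of the shared "blackboard" idea using the SQLite C API (the table name and payload are hypothetical; error handling omitted). Note that this shares state rather than delivering commands: for the "make ModuleB do something" case, SQLite's sqlite3_update_hook can notify the process that a row changed, but it fires in-process and is not a substitute for a real message queue:

      #include <sqlite3.h>
      #include <iostream>
      #include <string>

      int main() {
          sqlite3* db = nullptr;
          sqlite3_open(":memory:", &db);
          sqlite3_exec(db,
              "CREATE TABLE blackboard (path TEXT PRIMARY KEY, value BLOB);",
              nullptr, nullptr, nullptr);

          // Publish: ModuleB writes its data under its path.
          sqlite3_stmt* put = nullptr;
          sqlite3_prepare_v2(db,
              "INSERT OR REPLACE INTO blackboard VALUES (?1, ?2);", -1, &put, nullptr);
          const std::string path = "\\SubsystemA\\SubSubSysB\\ModuleB\\ModuleDataC";
          sqlite3_bind_text(put, 1, path.c_str(), -1, SQLITE_TRANSIENT);
          sqlite3_bind_blob(put, 2, "payload", 7, SQLITE_TRANSIENT);
          sqlite3_step(put);
          sqlite3_finalize(put);

          // Read: any other module fetches the data by key, with no routing.
          sqlite3_stmt* get = nullptr;
          sqlite3_prepare_v2(db,
              "SELECT value FROM blackboard WHERE path = ?1;", -1, &get, nullptr);
          sqlite3_bind_text(get, 1, path.c_str(), -1, SQLITE_TRANSIENT);
          if (sqlite3_step(get) == SQLITE_ROW)
              std::cout << "found " << sqlite3_column_bytes(get, 0) << " bytes\n";
          sqlite3_finalize(get);
          sqlite3_close(db);
          return 0;
      }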

    Read the article

  • C++ design question, container of instances and pointers

    - by Tom
    Hi all, I'm wondering something. I have a class Polygon, which composes a vector of Line (another class):

      class Polygon {
          std::vector<Line> lines;
      public:
          const_iterator begin() const;
          const_iterator end() const;
      };

    On the other hand, I have a function that calculates a vector of pointers to lines and, based on those lines, should return a pointer to a Polygon:

      Polygon* foo(Polygon& p) {
          std::vector<Line> lines = bar(p.begin(), p.end());
          return new Polygon(lines);
      }

    Here's the question: I can always add a Polygon constructor taking a vector of pointers, but is there a better way than dereferencing each element of the vector and assigning it to the existing vector container?

      // for each line in vector<Line*> v
      // vcopy is an instance of vector<Line>
      vcopy.push_back(*(v.at(i)));

    I think not, but I don't really like that approach. Hopefully I will be able to convince the author of the class to change it, but I can't base my coding right now on that fact (and I'm scared of a performance hit). Thanks in advance.
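    A slightly tidier sketch of the same copy (my own illustration; Line is assumed copyable). It is still one copy per element, which is unavoidable if Polygon must own a vector<Line>:

      #include <algorithm>
      #include <iterator>
      #include <vector>

      struct Line { /* ... */ };

      std::vector<Line> deref(const std::vector<Line*>& src) {
          std::vector<Line> out;
          out.reserve(src.size());  // one allocation instead of repeated growth
          std::transform(src.begin(), src.end(), std::back_inserter(out),
                         [](const Line* p) { return *p; });
          return out;
      }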

    Read the article

  • Parallelizing L2S Entity Retrieval

    - by MarkB
    Assuming a typical domain-entity approach with SQL Server and a DBML/L2S DAL with a logic layer on top of that: in situations where lazy loading is not an option, I have settled on a convention where getting a list of entities does not also get each item's child entities (no loading), but getting a single entity does (eager loading). Since getting a single entity also gets its children, it causes a cascading effect in which each child then gets its children too. This sounds bad, but as long as the model is not too deep, I usually don't see performance problems that outweigh the benefits of the ease of use.

    So if I want to get a list in which each of the items is fully hydrated with children, I combine the GetList and GetItem methods: I get a list and then loop through it, getting each item with the full cascade. Even this is generally acceptable in many of the projects I've worked on, but I have recently encountered situations with larger models and/or more data in which it needs to be more efficient.

    I've found that partitioning the loop and executing it on multiple threads yields excellent results. In my first experiment with a list of 50 items from one particular project, I used 5 threads of 10 items each and got a 3x improvement in time. Of course, the mileage will vary depending on the project, but all else being equal this is clearly a big opportunity. However, before I go further, I was wondering what others who have already been through this have done. What are some good approaches to parallelizing this type of thing?

    Read the article

  • Searching the head of CVS

    - by bobtheowl2
    I'm looking for a 'relatively' easy way to search through CVS for a particular string in the HEAD revisions. I realize the way CVS stores versions makes this difficult, but I'm trying to come up with a script to allow this search (performance is not expected here). Currently this command will output the contents of the head files:

      cvs co -r HEAD -p

      stdout = file contents (to be grepped for the search string)
      stderr = the file name/header info (to be grepped for the line that identifies the file)

    Ideally, I want to grep the contents and display the header plus a few lines before and after the searched item (the output of this likely directed to some file). I found a way to grep stdout and stderr using different patterns, and the resulting stdout/stderr is displayed in the right order, but any attempt to redirect it to a file messes up the order:

      { { cvs co -r HEAD -p myModule 4>&- | grep 'myString' 2>&4 4>&- } 4>&2 2>&1 >&3 3>&- | grep 'Check' >&2 3>&- } 3>&1

    Question 1: Is there an easier way to do this altogether?
    Question 2: If not, how do I get the output of the code above to append to a file in the same order as displayed on the console?

    Read the article

  • Custom ADO.NET provider to intercept and modify SQL queries.

    - by Faisal
    Our client has an application that stores blobs in a database, which has now grown enough to impact the performance of SQL Server. To overcome this issue, we are planning to offload all blobs to the file system and leave the path of each file in a new column in the user's table. For example, if the user has a table docs with columns id, name and content (blob), we would ask him to add a new column 'filepath' to this table. Our client is willing to make this change in the database, but when it comes to changing the SQL queries that read and write this table, they are not ready to accept it. In fact, they don't want any change that results in recompilation and deployment.

    So we are now planning to write a custom ADO.NET provider that will:

      - intercept the select queries
      - add a column 'filepath' at the end of the select statement
      - retrieve the result set and modify the 'content' column value based on the 'filepath' value

    Is there any use case that you think will certainly fail with this approach? I know this sounds dirty, but do we have a better way?

    Read the article

  • What's the best practice for handling system-specific information under version control?

    - by Joe
    I'm new to version control, so I apologize if there is a well-known solution to this. For this problem in particular I'm using git, but I'm curious how to deal with it in all version control systems.

    I'm developing a web application on a development server, and I have defined the absolute path name to the web application (not the document root) in two places. On the production server, this path is different. I'm confused about how to deal with this. I could either:

      1. Reconfigure the development server to share the same path as production, or
      2. Edit the two occurrences each time production is updated.

    I don't like #1 because I'd rather keep the application flexible for any future changes. I don't like #2 because if I start developing on a second development server with a third path, I would have to change this for every commit and update.

    What is the best way to handle this? I thought of:

      - Using custom keywords and variable expansion (such as setting the property $PATH$ in the version control properties and having it expanded in all the files). Git doesn't support this because it would be a huge performance hit.
      - Using post-update and pre-commit hooks. Possibly the likely solution for git, but every time I looked at the status, it would report the two files as changed. Not really clean.
      - Pulling the path from a config file outside of version control. Then I would have to have the config file in the same location on all servers. Might as well just have the same path to begin with.

    Is there an easy way to deal with this? Am I overthinking it?

    Read the article

  • Using an unencoded key vs a real Key, benefits?

    - by user246114
    Hi, I am reading the docs on key generation in App Engine. I'm not sure what effect using a simple String key has compared to a real Key. For example, when my users sign up, they must supply a unique username:

      class User {
          /** Key type = unencoded string. */
          @PrimaryKey
          private String name;
      }

    Now, if I understand the docs correctly, I should still be able to generate named keys and entity groups using this, right?

      // Find an instance of this entity:
      User user = pm.findObjectById(User.class, "myusername");

      // Create a new obj and put it in the same entity group:
      Key key = new KeyFactory.Builder(
              User.class.getSimpleName(), "myusername")
          .addChild(Goat.class.getSimpleName(), "baa").getKey();
      Goat goat = new Goat();
      goat.setKey(key);
      pm.makePersistent(goat);

    The Goat instance should now be in the same entity group as that User, right? I mean, there's no problem with leaving the User's primary key as just the raw String? Is there a performance benefit to using a Key, though? Should I update to:

      class User {
          /** Key type = Key object. */
          @PrimaryKey
          private Key key;
      }

      // Generate like:
      Key key = KeyFactory.createKey(
          User.class.getSimpleName(), "myusername");
      user.setKey(key);

    It's almost the same thing; I'd still just be generating the Key from the unique username anyway. Thanks.

    Read the article
