Search Results

Search found 7551 results on 303 pages for 'pre optimization'.

Page 78/303 | < Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >

  • Which way to store this data is effective?

    - by Tattat
    I am writing a game which needs a map, and I want to store that map. The first thing I can think of is using a 2D array, but the problem is what data I should store in it. The player can tap different places to trigger different reactions, so I am thinking of storing objects in the 2D array: when the player taps some position, I look it up in the array and use the object there to execute a command. But I'm concerned that storing lots of objects may use lots of memory, so I'm also thinking of storing only a char/int. That doesn't seem to be enough for me, though. I want to store data like this: { Type: 1, Color: Green }. No matter what the color is, if two tiles are both type 1 they have the same reaction in the game logic, but the visual effect is based on the color. So it is not easy to store this with a pure char/int, unless I do something like: 1-5 --> all type 1 (1 = green, 2 = red, 3 = yellow ...), 6-10 --> all type 2 (6 = green, 7 = red ...). So, do you have any ideas on how to minimize the RAM use while keeping the data easy to read? Thanks.
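
    A minimal sketch of one common approach (in C, since the question does not name a language): pack the type and color into a single byte per tile, so the 2D array stays tiny while small accessor functions keep the code readable. The field widths, enum values, and map size below are illustrative assumptions, not taken from the question.

    ```c
    #include <stdint.h>

    enum { COLOR_GREEN = 0, COLOR_RED = 1, COLOR_YELLOW = 2 };   /* illustrative values */

    typedef uint8_t Tile;   /* low 4 bits = type, high 4 bits = color */

    static inline Tile tile_make(unsigned type, unsigned color) {
        return (Tile)((color << 4) | (type & 0x0Fu));
    }
    static inline unsigned tile_type(Tile t)  { return t & 0x0Fu; }
    static inline unsigned tile_color(Tile t) { return t >> 4; }

    /* A 64x64 map costs 4 KB instead of 64*64 full objects; game logic
       switches on tile_type(), rendering switches on tile_color(). */
    Tile map[64][64];
    ```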

    Read the article

  • Filtering with joined tables

    - by viraptor
    I'm trying to get some query performance improved, but the generated query does not look the way I expect it to. The results are retrieved using: query = session.query(SomeModel). options(joinedload_all('foo.bar')). options(joinedload_all('foo.baz')). options(joinedload('quux.other')) What I want to do is filter on the table joined via 'first', but this way doesn't work: query = query.filter(FooModel.address == '1.2.3.4') It results in a clause like this attached to the query: WHERE foos.address = '1.2.3.4' Which doesn't do the filtering in a proper way, since the generated joins attach tables foos_1 and foos_2. If I try that query manually but change the filtering clause to: WHERE foos_1.address = '1.2.3.4' AND foos_2.address = '1.2.3.4' It works fine. The question is of course - how can I achieve this with sqlalchemy itself?
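
    One way this is usually handled in SQLAlchemy (a hedged sketch, assuming 'foo' is a relationship from SomeModel to FooModel) is to make the join explicit and tell the eager loader to reuse it with contains_eager, so the filter applies to the same table that populates the relationship rather than to an anonymous foos_1/foos_2 alias; the other joinedload options can stay as they are.

    ```python
    from sqlalchemy.orm import contains_eager

    query = (
        session.query(SomeModel)
        .join(SomeModel.foo)                      # explicit join, no anonymous alias
        .options(contains_eager(SomeModel.foo))   # eager-load from that same join
        .filter(FooModel.address == '1.2.3.4')
    )
    ```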

    Read the article

  • Fast find object by string property

    - by Andrew Kalashnikov
    Hello, colleagues. I've got a task to quickly find an object by its string property. The object: class DicDomain { public virtual string Id { get; set; } public virtual string Name { get; set; } } For storing my objects I currently use a List<T> named dictionary, where T is DicDomain. I've got 5-10 such lists, each containing about 500-20000 items. The task is to find objects by their Name. The code I use now: List<T> entities = dictionary.FindAll(s => s.Name.Equals(word, StringComparison.OrdinalIgnoreCase)); I've got some questions: Is my search speed optimal? I think not. Is List a good data structure for this task, or should I consider a hashtable, a sorted collection, etc.? Regarding the Find method, maybe I should use string interning? I don't have much experience with these tasks. Can you give me good advice for increasing performance? Thanks
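
    A hedged sketch of one common alternative: build a Dictionary keyed by Name (case-insensitively) once, so each lookup is roughly one hash probe instead of a scan of the whole list. Names are assumed not to be unique here, so the value is a list; the dictionary and word identifiers mirror the question.

    ```csharp
    using System;
    using System.Collections.Generic;

    // Build once (or whenever the source list changes).
    var byName = new Dictionary<string, List<DicDomain>>(StringComparer.OrdinalIgnoreCase);
    foreach (DicDomain d in dictionary)
    {
        List<DicDomain> bucket;
        if (!byName.TryGetValue(d.Name, out bucket))
            byName[d.Name] = bucket = new List<DicDomain>();
        bucket.Add(d);
    }

    // Lookup: O(1) on average instead of a full scan.
    List<DicDomain> entities;
    if (!byName.TryGetValue(word, out entities))
        entities = new List<DicDomain>();
    ```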

    Read the article

  • Creating objects makes the VM faster?

    - by Sudhir Jonathan
    Look at this piece of code: MessageParser parser = new MessageParser(); for (int i = 0; i < 10000; i++) { parser.parse(plainMessage, user); } For some reason, it runs SLOWER (by about 100ms) than for (int i = 0; i < 10000; i++) { MessageParser parser = new MessageParser(); parser.parse(plainMessage, user); } Any ideas why? The tests were repeated a lot of times, so it wasn't just random. How could creating an object 10000 times be faster than creating it once?
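
    One thing worth ruling out before drawing conclusions is measurement noise from JIT warm-up and GC: the per-iteration allocation is a good candidate for escape analysis and scalar replacement, and un-warmed timings routinely differ by ~100 ms. A hedged micro-benchmark sketch that warms up both variants before timing; the MessageParser here is a hypothetical stub, not the asker's class.

    ```java
    public class ParserBench {
        // Hypothetical stand-in for the real MessageParser from the question.
        static final class MessageParser {
            int parse(String msg, String user) { return msg.length() + user.length(); }
        }

        static long run(boolean reuse) {
            String plainMessage = "hello", user = "bob";
            MessageParser shared = new MessageParser();
            long start = System.nanoTime();
            int sink = 0;
            for (int i = 0; i < 10000; i++) {
                MessageParser parser = reuse ? shared : new MessageParser();
                sink += parser.parse(plainMessage, user);
            }
            if (sink == -1) System.out.println("unreachable"); // keep the work from being optimized away
            return System.nanoTime() - start;
        }

        public static void main(String[] args) {
            for (int i = 0; i < 5; i++) { run(true); run(false); } // JIT warm-up
            System.out.println("shared instance  : " + run(true)  + " ns");
            System.out.println("new per iteration: " + run(false) + " ns");
        }
    }
    ```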

    Read the article

  • How does a computer multiply 2 numbers?

    - by ckv
    How does a computer perform a multiplication of 2 numbers, say 100 * 55? My guess was that the computer did repeated addition to achieve multiplication. Of course this could be the case for integer numbers. However, for floating point numbers there must be some other logic. Note: This was asked in an interview.
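
    For integers, hardware multipliers are built around binary long multiplication (shift-and-add) rather than repeated addition; floating-point units multiply the significands the same way and add the exponents. A small sketch of the shift-and-add idea in C:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Binary long multiplication: add a shifted copy of a for every 1-bit in b. */
    static uint64_t shift_add_mul(uint32_t a, uint32_t b) {
        uint64_t acc = 0, shifted = a;
        while (b != 0) {
            if (b & 1u)
                acc += shifted;
            shifted <<= 1;
            b >>= 1;
        }
        return acc;
    }

    int main(void) {
        printf("%llu\n", (unsigned long long)shift_add_mul(100, 55));  /* prints 5500 */
        return 0;
    }
    ```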

    Read the article

  • Controlling read and write access width to memory mapped registers in C

    - by srking
    I'm using an x86-based core to manipulate a 32-bit memory-mapped register. My hardware behaves correctly only if the CPU generates 32-bit wide reads and writes to this register. The register is aligned on a 32-bit address and is not addressable at byte granularity. What can I do to guarantee that my C (or C99) compiler will only generate full 32-bit wide reads and writes in all cases? For example, if I do a read-modify-write operation like this: volatile uint32_t* p_reg = (volatile uint32_t*)0xCAFE0000; *p_reg |= 0x01; I don't want the compiler to get smart about the fact that only the bottom byte changes and generate 8-bit wide reads/writes. Since the machine code is often denser for 8-bit operations on x86, I'm afraid of unwanted optimizations. Disabling optimizations in general is not an option.
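
    In practice the usual approach is to access the register only through a volatile uint32_t lvalue and split the read-modify-write into explicit full-word steps; whether a compiler may still narrow a volatile access is ultimately toolchain-specific, so inspecting the generated assembly for the target compiler remains advisable. A hedged sketch (register address taken from the question):

    ```c
    #include <stdint.h>

    #define REG_ADDR 0xCAFE0000u

    static inline uint32_t reg_read(void) {
        return *(volatile uint32_t *)REG_ADDR;      /* intended as one 32-bit load  */
    }

    static inline void reg_write(uint32_t value) {
        *(volatile uint32_t *)REG_ADDR = value;     /* intended as one 32-bit store */
    }

    void set_bit0(void) {
        uint32_t v = reg_read();   /* full-word read into an ordinary variable */
        v |= 0x01u;                /* modify the local copy                    */
        reg_write(v);              /* full-word write back                     */
    }
    ```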

    Read the article

  • Is count(*) really expensive?

    - by Anil Namde
    I have a page with 4 tabs displaying 4 different reports based on different tables. I obtain the row count of each table using a select count(*) from <table> query and display the number of rows available in each table on the tabs. As a result, each page postback causes 5 count(*) queries to be executed (4 to get counts and 1 for pagination) and 1 query for getting the report content. Now my question is: are count(*) queries really expensive -- should I keep the row counts (at least those that are displayed on the tabs) in the view state of the page instead of querying multiple times? How expensive are COUNT(*) queries?
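
    COUNT(*) cost depends heavily on the engine, the table size, and whether a WHERE clause is involved, but one easy win regardless is collapsing the four separate round trips into a single query. A hedged sketch with placeholder table names (not the asker's real tables):

    ```sql
    -- One round trip instead of four; report_a..report_d are placeholders.
    SELECT
        (SELECT COUNT(*) FROM report_a) AS report_a_rows,
        (SELECT COUNT(*) FROM report_b) AS report_b_rows,
        (SELECT COUNT(*) FROM report_c) AS report_c_rows,
        (SELECT COUNT(*) FROM report_d) AS report_d_rows;
    ```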

    Read the article

  • Improving MySQL Update Query Efficiency

    - by Russell C.
    In our database tables we keep a number of counting columns to help reduce the number of simple lookup queries. For example, in our users table we have columns for the number of reviews written, photos uploaded, friends, followers, etc. To help make sure these stay in sync we have a script that runs periodically to check and update these counting columns. The problem is that now that our database has grown significantly, the queries we have been using are taking forever to run since they are totally inefficient. I would appreciate someone with more MySQL knowledge than myself recommending how we can improve its efficiency: update users set photos=(select count(*) from photos where photos.status="A" AND photos.user_id=users.id) where users.status="A"; If this were a select statement I would just use a join, but I'm not sure whether that is possible with an update. Thanks in advance for your help!
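
    MySQL does support multi-table UPDATE syntax, so the correlated subquery can be replaced by a join against a pre-aggregated derived table. A hedged sketch using the column names from the question, with COALESCE added so users who have no active photos are set to 0:

    ```sql
    UPDATE users u
    LEFT JOIN (
        SELECT user_id, COUNT(*) AS photo_count
        FROM photos
        WHERE status = 'A'
        GROUP BY user_id
    ) p ON p.user_id = u.id
    SET u.photos = COALESCE(p.photo_count, 0)
    WHERE u.status = 'A';
    ```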

    Read the article

  • optimize a string.Format + replace.

    - by acidzombie24
    I have this function. The Visual Studio profiler marked the line with string.Format as hot, and it is where I spend much of my time. How can I write this loop more efficiently? public string EscapeNoPredicate(string sz) { var s = new StringBuilder(sz); s.Replace(sepStr, sepStr + sepStr); foreach (char v in IllegalChars) { string s2 = string.Format("{0}{1:X2}", seperator, (Int16)v); s.Replace(v.ToString(), s2); } return s.ToString(); }
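
    Since seperator and IllegalChars presumably do not change per call, one hedged option is to precompute the replacement strings once so the hot string.Format call leaves the per-call loop entirely. Identifier names are kept from the question; this assumes they are accessible when the map is built (otherwise build it lazily on first use).

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Text;

    // Built once: maps each illegal char to its escaped form (separator + hex code).
    static readonly Dictionary<char, string> escapeMap = BuildEscapeMap();

    static Dictionary<char, string> BuildEscapeMap()
    {
        var map = new Dictionary<char, string>();
        foreach (char v in IllegalChars)
            map[v] = string.Format("{0}{1:X2}", seperator, (Int16)v);
        return map;
    }

    public string EscapeNoPredicate(string sz)
    {
        var s = new StringBuilder(sz);
        s.Replace(sepStr, sepStr + sepStr);
        foreach (var kv in escapeMap)          // no string.Format in the hot path
            s.Replace(kv.Key.ToString(), kv.Value);
        return s.ToString();
    }
    ```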

    Read the article

  • Composite primary keys in N-M relation or not?

    - by BerggreenDK
    Let's say we have 3 tables (actually I have 2 at the moment, but this example might illustrate the idea better): [Person] ID: int, primary key; Name: nvarchar(xx) [Group] ID: int, primary key; Name: nvarchar(xx) [Role] ID: int, primary key; Name: nvarchar(xx) [PersonGroupRole] Person_ID: int, PRIMARY COMPOSITE OR NOT? Group_ID: int, PRIMARY COMPOSITE OR NOT? Role_ID: int, PRIMARY COMPOSITE OR NOT? Should any of the 3 IDs in the relation PersonGroupRole be marked as the primary key, or should all 3 be combined into one composite key? What's the real benefit of doing it or not? I can join anyway as far as I know, so Person JOIN PersonGroupRole JOIN Group gives me which persons are in which groups, etc. I will be using LINQ/C#/.NET on top of SQL Express and SQL Server, so if there are any reasons regarding the language/SQL that might make the choice clearer, that's the platform I'm asking about. Looking forward to seeing what answers pop up, as I have thought about these primary keys/indexes many times when making combined ones.
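
    A hedged sketch of the composite-key variant in T-SQL (since SQL Server is mentioned): the three foreign keys together form the primary key, which both prevents duplicate person/group/role rows and gives joins an index whose leading column is Person_ID; a second index is commonly added for lookups that start from the group side. Constraint and index names are illustrative.

    ```sql
    CREATE TABLE PersonGroupRole (
        Person_ID int NOT NULL REFERENCES Person(ID),
        Group_ID  int NOT NULL REFERENCES [Group](ID),
        Role_ID   int NOT NULL REFERENCES Role(ID),
        CONSTRAINT PK_PersonGroupRole
            PRIMARY KEY (Person_ID, Group_ID, Role_ID)
    );

    -- Optional: supports queries that start from the group instead of the person.
    CREATE INDEX IX_PersonGroupRole_Group ON PersonGroupRole (Group_ID, Person_ID);
    ```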

    Read the article

  • Working with PHP and MySQL - need a good and secure design with OO design

    - by Andrew
    I am new to PHP - first-time developer. I am working on my web application and it is nearly done; nevertheless, most of my SQL is done directly in code using direct mysql requests. This is the way I approached it: In classes_db.php I declared the db settings and created methods that I use to open and close DB connections. I declare those objects on my regular pages: class classes_db { public $dbserver = 'server'; public $dbusername = 'user'; public $dbpassword = 'pass'; public $dbname = 'db'; function openDb() { $dbhandle = mysql_connect($this->dbserver, $this->dbusername, $this->dbpassword); if (!$dbhandle) { die('Could not connect: ' . mysql_error()); } $selected = mysql_select_db($this->dbname, $dbhandle) or die("Could not select the database"); return $dbhandle; } function closeDb($con) { mysql_close($con); } } On my regular page, I do this: <?php require 'classes_db.php'; session_start(); //create instance of the DB class $db = new classes_db(); //get dbhandle $dbhandle = $db->openDb(); //process query $result = mysql_query("update user set username = '" . $usernameFromForm . "' where iduser= " . $_SESSION['user']->iduser); //close the connection if (isset($dbhandle)) { $db->closeDb($dbhandle); } ?> My question is: how do I do this right and make it OO and secure? I know that I need to incorporate prepared queries - what is the best way to do it? Please provide some code.
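
    A hedged sketch of the usual direction: wrap a PDO connection in a small class and run every statement that contains user input as a prepared query with bound parameters, which removes the string-concatenation injection risk in the update above. Table and column names are carried over from the question; host, database name, and credentials are placeholders.

    ```php
    <?php
    class Db
    {
        private $pdo;

        public function __construct()
        {
            // Placeholders - substitute your real host/db/user/password.
            $this->pdo = new PDO(
                'mysql:host=server;dbname=db;charset=utf8',
                'user',
                'pass',
                array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION)
            );
        }

        public function updateUsername($username, $iduser)
        {
            // Prepared statement: values are bound, never concatenated into the SQL.
            $stmt = $this->pdo->prepare(
                'UPDATE user SET username = :username WHERE iduser = :iduser'
            );
            $stmt->execute(array(':username' => $username, ':iduser' => $iduser));
            return $stmt->rowCount();
        }
    }

    // Usage on a page:
    session_start();
    $db = new Db();
    $db->updateUsername($usernameFromForm, $_SESSION['user']->iduser);
    ```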

    Read the article

  • Can I optimize this at all?

    - by Moshe
    I'm working on an iOS app and I'm using the following code for one of my tables to return the number of rows in a particular section: return [[kSettings arrayForKey:@"views"] count]; Is there any other way to write that line of code so that it is more memory efficient? EDIT: kSettings = NSUserDefaults standardUserDefaults. Is there any way to rewrite my line of code so that whatever memory it occupies is released sooner than it is released now?

    Read the article

  • How can I optimize this loop?

    - by Moshe
    I've got a piece of code that returns a super-long string that represents "search results". Each result is separated by a double HTML break tag. For example: Result1<br><br>Result 2<br><br>Result3 I've got the following loop that takes each result and puts it into an array, stripping out the break indicator, "kBreakIndicator" (<br><br>). The problem is that this loop takes way too long to execute. With a few results it's fine, but once you hit a hundred results, it's about 20-30 seconds slower. That's unacceptable performance. What can I do to improve performance? Here's my code: content is the original NSString. NSMutableArray *results = [[NSMutableArray alloc] init]; //Loop through the string of results and take each result and put it into an array while(![content isEqualToString:@""]){ NSRange rangeOfResult = [content rangeOfString:kBreakIndicator]; NSString *temp = (rangeOfResult.location != NSNotFound) ? [content substringToIndex:rangeOfResult.location] : nil; if (temp) { [results addObject:temp]; content = [[[content stringByReplacingOccurrencesOfString:[NSString stringWithFormat:@"%@%@", temp, kBreakIndicator] withString:@""] mutableCopy] autorelease]; }else{ [results addObject:[content description]]; content = [[@"" mutableCopy] autorelease]; } } //Do something with the results array. [results release];
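
    The repeated stringByReplacingOccurrencesOfString: makes the loop effectively quadratic, since the remaining string is rebuilt on every pass. A hedged sketch of a single-pass alternative using NSString's componentsSeparatedByString:, which splits on the break indicator in one call; kBreakIndicator and content are the names from the question.

    ```objc
    // One pass over the string instead of rebuilding it per result.
    NSArray *parts = [content componentsSeparatedByString:kBreakIndicator];
    NSMutableArray *results = [[NSMutableArray alloc] initWithCapacity:[parts count]];
    for (NSString *part in parts) {
        if ([part length] > 0) {          // skip empty pieces, e.g. from a trailing <br><br>
            [results addObject:part];
        }
    }
    // ...do something with the results array...
    [results release];   // keeping the question's manual memory management
    ```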

    Read the article

  • Best way to update/insert into a table based on a remote table.

    - by martilyo
    I have two very large enterprise tables in an Oracle 10g database. One table keeps the historical information of the other table. The problem is, I'm getting to the point where there are so many records that my insert/update is taking too long and my session is getting killed by the governor. Here's pseudocode of my update process: sqlsel := 'SELECT col1, col2, col3, sysdate FROM table2@remote_location dpi WHERE (col1, col2, col3) IN ( SELECT col1, col2, col3 FROM table2@remote_location MINUS SELECT DISTINCT col1, col2, col3 FROM table1 mpc WHERE facility = '''||load_facility||''' )'; EXECUTE IMMEDIATE sqlsel BULK COLLECT INTO table1; I've tried the MERGE statement: MERGE INTO table1 t1 USING ( SELECT col1, col2, col3 FROM table2@remote_location ) t2 ON ( t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3 ) WHEN NOT MATCHED THEN INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm ) VALUES (t2.col1, t2.col2, t2.col3, sysdate ) But there seems to be a confirmed bug on versions prior to Oracle 10.2.0.4 in the MERGE statement when merging against a remote database. The chance of getting an enterprise upgrade is slim, so is there a way to further optimize my first query, or to write it in another way so that it runs with the best possible performance? Thanks.
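
    If the MERGE-over-database-link bug is the blocker, one hedged workaround is to split the work: copy the remote keys into a local staging table first (a global temporary table here, as an assumption) and then MERGE purely locally, which also keeps the MINUS/IN work off the link. Table and column names follow the pseudocode in the question; the NUMBER column types are assumptions.

    ```sql
    -- One-time setup: local staging area for the remote keys.
    CREATE GLOBAL TEMPORARY TABLE stage_table2 (
        col1 NUMBER, col2 NUMBER, col3 NUMBER
    ) ON COMMIT PRESERVE ROWS;

    -- Each run: pull the remote rows once, then merge locally.
    INSERT INTO stage_table2 (col1, col2, col3)
    SELECT col1, col2, col3 FROM table2@remote_location;

    MERGE INTO table1 t1
    USING stage_table2 t2
       ON (t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3)
    WHEN NOT MATCHED THEN
      INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm)
      VALUES (t2.col1, t2.col2, t2.col3, SYSDATE);

    COMMIT;
    ```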

    Read the article

  • MySQL query paralyzes site

    - by nute
    Once in a while, at random intervals, our website gets completely paralyzed. Looking at SHOW FULL PROCESSLIST;, I've noticed that when this happens, there is a specific query that is "Copying to tmp table" for a loooong time (sometimes 350 seconds), and almost all the other queries are "Locked". The part I don't understand is that 90% of the time this query runs fine: I see it going through in the process list and it finishes pretty quickly most of the time. The query is issued by an ajax call on our homepage to display product recommendations based on your browsing history (a la amazon). Just sometimes, randomly (but too often), it gets stuck at "copying to tmp table". Here is a captured instance of the query that had been running for 109 seconds when I looked: SELECT DISTINCT product_product.id, product_product.name, product_product.retailprice, product_product.imageurl, product_product.thumbnailurl, product_product.msrp FROM product_product, product_xref, product_viewhistory WHERE ( (product_viewhistory.productId = product_xref.product_id_1 AND product_xref.product_id_2 = product_product.id) OR (product_viewhistory.productId = product_xref.product_id_2 AND product_xref.product_id_1 = product_product.id) ) AND product_product.outofstock='N' AND product_viewhistory.cookieId = '188af1efad392c2adf82' AND product_viewhistory.productId IN (24976, 25873, 26067, 26073, 44949, 16209, 70528, 69784, 75171, 75172) ORDER BY product_xref.hits DESC LIMIT 10 Of course the "cookieId" and the list of "productId" values change dynamically depending on the request. I use PHP with PDO.
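
    The OR across the two join directions often prevents MySQL from using an index on product_xref and pushes the work into a temporary table; a hedged rewrite splits it into two index-friendly halves combined with UNION. hits is added to the select list so the combined result can be ordered; if the same product can match in both directions with different hit counts, an extra GROUP BY or application-side dedupe may be needed. The literals are kept from the captured query only for readability.

    ```sql
    (SELECT p.id, p.name, p.retailprice, p.imageurl, p.thumbnailurl, p.msrp, x.hits AS hits
       FROM product_viewhistory vh
       JOIN product_xref x    ON vh.productId = x.product_id_1
       JOIN product_product p ON x.product_id_2 = p.id
      WHERE vh.cookieId = '188af1efad392c2adf82'
        AND vh.productId IN (24976, 25873, 26067, 26073, 44949,
                             16209, 70528, 69784, 75171, 75172)
        AND p.outofstock = 'N')
    UNION
    (SELECT p.id, p.name, p.retailprice, p.imageurl, p.thumbnailurl, p.msrp, x.hits AS hits
       FROM product_viewhistory vh
       JOIN product_xref x    ON vh.productId = x.product_id_2
       JOIN product_product p ON x.product_id_1 = p.id
      WHERE vh.cookieId = '188af1efad392c2adf82'
        AND vh.productId IN (24976, 25873, 26067, 26073, 44949,
                             16209, 70528, 69784, 75171, 75172)
        AND p.outofstock = 'N')
    ORDER BY hits DESC
    LIMIT 10;
    ```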

    Read the article

  • Using Custom Generic Collection faster with objects than List

    - by Kaminari
    I'm iterating through a List<> to find a matching element. The problem is that the object has only 2 significant values, Name and Link (both strings), but also has some other values which I don't want to compare. I'm thinking about using something like HashSet (which is exactly what I'm searching for -- fast) from .NET 3.5, but the target framework has to be 2.0. There is something called Power Collections here: http://powercollections.codeplex.com/ - should I use that? But maybe there is another way? If not, can you suggest a suitable custom collection?

    Read the article

  • Index question: Select * with WHERE clause. Where and how to create index

    - by Mestika
    Hi, I'm working on optimizing some of my queries and I have a query that states: select * from SC where c_id ="+c_id" The schema of SC looks like this: SC ( c_id int not null, date_start date not null, date_stop date not null, r_t_id int not null, nt int, t_p decimal, PRIMARY KEY (c_id, r_t_id, date_start, date_stop)); My immediate idea of how the index should be created is a covering index in this order: INDEX(c_id, date_start, date_stop, nt, r_t_id, t_p) The reasons I base this order on: The WHERE clause selects on c_id, thus making it the first column in the sort order. Next, date_start and date_stop, to specify a sort of "range" defined by these parameters. Next, nt, because it will be selected. Next, r_t_id, because it is an ID for a specific type in my r_t table. And last, t_p, because it is just information. I don't know if it is necessary at all to order the index in a specific way when it is a SELECT * statement. I should say that SC is not the biggest table. I can't say exactly how many rows it contains, but an estimate could be between <10 and 1000. The next thing to add is that other queries insert data into SC, and I know that indexes on tables with insertions can be cost-ineffective, but can I somehow find a golden middle way to keep this performing well? Don't know if it makes a difference, but I'm using an IBM DB2 version 9.7 database. Sincerely, Mestika
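
    Worth noting before adding anything: the primary key already gives DB2 an index whose leading column is c_id, and an equality predicate on the leading column is exactly what that index serves, so for a table of at most a few thousand rows a separate covering index is unlikely to repay its insert overhead. A hedged sketch, only if the access plan still shows table access as the bottleneck, of an index that covers the remaining columns (index name is illustrative):

    ```sql
    -- Only worthwhile if the optimizer shows data-page access is the real cost.
    CREATE INDEX idx_sc_cid_cover
        ON SC (c_id, r_t_id, date_start, date_stop, nt, t_p);
    ```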

    Read the article

  • Why isn't the copy constructor elided here?

    - by Jesse Beder
    (I'm using gcc with -O2.) This seems like a straightforward opportunity to elide the copy constructor, since there are no side-effects to accessing the value of a field in a bar's copy of a foo; but the copy constructor is called, since I get the output meep meep!. #include <iostream> struct foo { foo(): a(5) { } foo(const foo& f): a(f.a) { std::cout << "meep meep!\n"; } int a; }; struct bar { foo F() const { return f; } foo f; }; int main() { bar b; int a = b.F().a; return 0; }
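
    Copy elision applies when the source of the copy is a temporary or a local about to be returned (RVO/NRVO); F() returns a member, which must stay alive inside b, so that copy is genuinely needed and cannot be elided. A hedged sketch contrasting the cases, reusing the foo from the question:

    ```cpp
    #include <iostream>

    struct foo {
        foo() : a(5) {}
        foo(const foo& f) : a(f.a) { std::cout << "meep meep!\n"; }
        int a;
    };

    struct bar {
        foo f;
        foo by_copy() const { return f; }        // copies the member: cannot be elided
        foo by_value() const {                   // returns a fresh local: NRVO applies,
            foo tmp;                             // typically no "meep meep!" printed
            tmp.a = f.a;
            return tmp;
        }
        const foo& by_ref() const { return f; }  // no copy at all, if a reference suffices
    };

    int main() {
        bar b;
        int x = b.by_copy().a;   // prints "meep meep!"
        int y = b.by_value().a;  // usually silent with -O2
        int z = b.by_ref().a;    // silent
        std::cout << x + y + z << "\n";
        return 0;
    }
    ```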

    Read the article

  • C++ pimpl idiom wastes an instruction vs. C style?

    - by Rob
    (Yes, I know that one machine instruction usually doesn't matter. I'm asking this question because I want to understand the pimpl idiom, and use it in the best possible way; and because sometimes I do care about one machine instruction.) In the sample code below, there are two classes, Thing and OtherThing. Users would include "thing.hh". Thing uses the pimpl idiom to hide its implementation. OtherThing uses a C style – non-member functions that return and take pointers. This style produces slightly better machine code. I'm wondering: is there a way to use C++ style – i.e., make the functions into member functions – and yet still save the machine instruction? I like this style because it doesn't pollute the namespace outside the class. Note: I'm only looking at calling member functions (in this case, calc). I'm not looking at object allocation. Below are the files, commands, and the machine code, on my Mac. thing.hh: class ThingImpl; class Thing { ThingImpl *impl; public: Thing(); int calc(); }; class OtherThing; OtherThing *make_other(); int calc(OtherThing *); thing.cc: #include "thing.hh" struct ThingImpl { int x; }; Thing::Thing() { impl = new ThingImpl; impl->x = 5; } int Thing::calc() { return impl->x + 1; } struct OtherThing { int x; }; OtherThing *make_other() { OtherThing *t = new OtherThing; t->x = 5; return t; } int calc(OtherThing *t) { return t->x + 1; } main.cc (just to test the code actually works...) #include "thing.hh" #include <cstdio> int main() { Thing *t = new Thing; printf("calc: %d\n", t->calc()); OtherThing *t2 = make_other(); printf("calc: %d\n", calc(t2)); } Makefile: all: main thing.o : thing.cc thing.hh g++ -fomit-frame-pointer -O2 -c thing.cc main.o : main.cc thing.hh g++ -fomit-frame-pointer -O2 -c main.cc main: main.o thing.o g++ -O2 -o $@ $^ clean: rm *.o rm main Run make and then look at the machine code. On the Mac I use otool -tv thing.o | c++filt. On Linux I think it's objdump -d thing.o. Here is the relevant output: Thing::calc(): 0000000000000000 movq (%rdi),%rax 0000000000000003 movl (%rax),%eax 0000000000000005 incl %eax 0000000000000007 ret calc(OtherThing*): 0000000000000010 movl (%rdi),%eax 0000000000000012 incl %eax 0000000000000014 ret Notice the extra instruction because of the pointer indirection. The first function looks up two fields (impl, then x), while the second only needs to get x. What can be done?

    Read the article

  • Execute a method less times possible - PHP

    - by serhio
    I have a site in multiple languages. I have a method that returns today's currencies in an array, and I then display those currencies in a table. // --- en/index.php <?php include_once "../exchangeRates.php"; $currencies = ReadExchangeRates(); // --- fr/index.php <?php include_once "../exchangeRates.php"; $currencies = ReadExchangeRates(); ... // somewhere in the page <td><?php echo $currencies["eur"]["today"]; ?></td> So, every time I load en/, fr/ or another language, I request the exchange rates from an external site. Can I optimize this behavior (reading once per day or per session)? Maybe by storing a global variable and checking the update date?
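
    A hedged sketch of a simple file cache inside exchangeRates.php: the external site is contacted only when the cached copy is older than a day, so every language version reuses the same data. ReadExchangeRates() is the function named in the question; the cache path and fetchRatesFromExternalSite() are assumptions standing in for the real fetch.

    ```php
    <?php
    function ReadExchangeRates()
    {
        $cacheFile = __DIR__ . '/exchange_rates.cache';   // assumed location
        $maxAge = 24 * 3600;                              // one day

        // Serve the cached copy if it is fresh enough.
        if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $maxAge) {
            return unserialize(file_get_contents($cacheFile));
        }

        // Otherwise fetch from the external site (placeholder for the real call).
        $rates = fetchRatesFromExternalSite();

        file_put_contents($cacheFile, serialize($rates), LOCK_EX);
        return $rates;
    }
    ```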

    Read the article

  • Counting context switches per thread

    - by Sarmun
    Is there a way to see how many context switches each thread generates? (Both in and out, if possible.) Either in X/s, or to let it run and give aggregated data after some time. (Either on Linux or on Windows.) I have found only tools that give an aggregated context-switch number for the whole OS or per process. My program makes many context switches (50k/s), probably a lot of them unnecessary, but I am not sure where to start optimizing or where most of them happen.
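
    On Linux, per-thread counters are already exposed; a hedged sketch of two ways to read them (replace <pid> with the real process id):

    ```sh
    # Per-thread voluntary/involuntary context-switch rates, sampled every second.
    pidstat -wt -p <pid> 1

    # Cumulative counters per thread, straight from procfs.
    grep -H ctxt_switches /proc/<pid>/task/*/status
    ```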

    Read the article

  • Compile Flex application without debug? Optimisation options for flex compiler?

    - by maoanz
    I have created a simple test application with the following code: var i : int; for (i=0; i<3000000; i++){ trace(i); } When I run the application, it's very slow to load, which means the trace is running. I checked the Flash player by right-clicking; the debugger option is not enabled. So I wonder if there is an option to pass to the compiler to exclude the trace calls. Otherwise, I have to remove all the trace calls in the program manually. Are there any other compiler options to optimize the Flex application as much as possible? Thanks

    Read the article

  • Defined variables and arrays vs functions in php

    - by Frank Presencia Fandos
    Introduction I have some sort of values that I might want to access several times each time a page is loaded. I can take two different approaches for accessing them, but I'm not sure which one is 'better'. Three already-implemented examples are several options for the Language, URI and displayed text that I describe here: Language Right now it is configured in this way: lang() is a function that returns different values depending on the argument. Example: lang("full") returns the current language, "English", while lang() returns the abbreviation of the current language, "en". There are many more options, like lang("select"), lang("selectact"), etc., that return different things. The code is too long and irrelevant for the case, so if anyone wants it just ask for it. Url The $Url array also returns different values depending on the request. The whole array is fully defined at the beginning of the page and used to get shorter but accurate links to the current page. Example: $Url['full'] would return "http://mypage.org/path/to/file.php?page=1" and $Url['file'] would return "file.php". It's useful for action="" within forms and many other things. There are more values for $Url['folder'], $Url['file'], etc. Same thing about the code: if wanted, just request it. Text [You can skip this section] There's another array called $Text that is defined in the same way as $Url. The whole array is defined at the beginning, making a mysql call and defining all $Text[$i] for the current page with a while loop. I'm not sure if this is more efficient than multiple calls for a single mysql cell. Example: $Text['54'] returns "This is just a test array!", which could perfectly well be implemented with a function like text(54). Question With the 3 examples you can see that I use different methods to do almost the same function (no pun intended), but I'm not sure which one should become the standard one for my code. I could create a function called url() and another called text() to output what I want. I think that working with functions in those cases is better, but I'm not sure why. So I'd really appreciate your opinions and advice. Should I mix arrays and functions in the way I described, or should I just use functions? Please base your answer on this: The source needs to be readable and reusable by other developers. Resource consumption (processing, time and memory). The shorter the code the better. The more you explain the reasons the better. Thank you. PS, now I know the differences between $Url and $Uri.

    Read the article
