Search Results

Search found 20224 results on 809 pages for 'query optimization'.


  • Ternary Operators in JavaScript Without an "Else"

    - by Oscar Godson
    I've been using them forever, and I love them. To me they seem cleaner and I can scan them faster, but ever since I've been using them I've always had to put null in the else branch when there's nothing to do. Is there any way around that? E.g.

        condition ? x = true : null;

    Basically, is there a way to do:

        condition ? x = true;

    Right now that shows up as a syntax error.
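
    Two common alternatives, sketched under the assumption that x is already declared; both avoid the dummy null branch:

        // plain statement form
        if (condition) x = true;

        // short-circuit form: the assignment only runs when condition is truthy
        condition && (x = true);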

    Read the article

  • Execute a method as few times as possible - PHP

    - by serhio
    I have a site in multiple languages. I have a method that returns today's exchange rates in an array, and I then display those rates in a table.

        // --- en/index.php
        <?php
        include_once "../exchangeRates.php";
        $currencies = ReadExchangeRates();

        // --- fr/index.php
        <?php
        include_once "../exchangeRates.php";
        $currencies = ReadExchangeRates();

        ...

        // somewhere in the page
        <td><?php echo $currencies["eur"]["today"]; ?></td>

    So every time en/, fr/ or another language loads, I request the exchange rates from an external site. Can I optimize this behavior (reading once per day or per session)? Maybe store a global variable and check the update date?
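
    A minimal sketch of once-per-day file caching; the cache path is an assumption, and ReadExchangeRates() is the function from the question:

        <?php
        function readRatesCached($cacheFile = '/tmp/exchange_rates.cache')
        {
            // reuse the cached copy if it is less than a day old
            if (file_exists($cacheFile) && filemtime($cacheFile) > time() - 86400) {
                return unserialize(file_get_contents($cacheFile));
            }
            $rates = ReadExchangeRates();  // the expensive external request
            file_put_contents($cacheFile, serialize($rates));
            return $rates;
        }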

    Read the article

  • I'm doing a lot of list and dictionary sorting... and this is causing memory errors in my Python website

    - by alex
    I retrieved data from the log table in my database, then started finding unique users, comparing/sorting lists, etc. In the end I got down to this:

        stats = {'2010-03-19': {'date': '2010-03-19', 'unique_users': 312, 'queries': 1465},
                 '2010-03-18': {'date': '2010-03-18', 'unique_users': 329, 'queries': 1659},
                 '2010-03-17': {'date': '2010-03-17', 'unique_users': 379, 'queries': 1845},
                 '2010-03-16': {'date': '2010-03-16', 'unique_users': 434, 'queries': 2336},
                 '2010-03-15': {'date': '2010-03-15', 'unique_users': 390, 'queries': 2138},
                 '2010-03-14': {'date': '2010-03-14', 'unique_users': 460, 'queries': 2221},
                 '2010-03-13': {'date': '2010-03-13', 'unique_users': 507, 'queries': 2242},
                 '2010-03-12': {'date': '2010-03-12', 'unique_users': 629, 'queries': 3523},
                 '2010-03-11': {'date': '2010-03-11', 'unique_users': 811, 'queries': 4274},
                 '2010-03-10': {'date': '2010-03-10', 'unique_users': 171, 'queries': 1297},
                 '2010-03-26': {'date': '2010-03-26', 'unique_users': 299, 'queries': 1617},
                 '2010-03-27': {'date': '2010-03-27', 'unique_users': 323, 'queries': 1310},
                 '2010-03-24': {'date': '2010-03-24', 'unique_users': 352, 'queries': 2112},
                 '2010-03-25': {'date': '2010-03-25', 'unique_users': 330, 'queries': 1290},
                 '2010-03-22': {'date': '2010-03-22', 'unique_users': 329, 'queries': 1798},
                 '2010-03-23': {'date': '2010-03-23', 'unique_users': 329, 'queries': 1857},
                 '2010-03-20': {'date': '2010-03-20', 'unique_users': 368, 'queries': 1693},
                 '2010-03-21': {'date': '2010-03-21', 'unique_users': 329, 'queries': 1511},
                 '2010-03-29': {'date': '2010-03-29', 'unique_users': 325, 'queries': 1718},
                 '2010-03-28': {'date': '2010-03-28', 'unique_users': 340, 'queries': 1815},
                 '2010-03-30': {'date': '2010-03-30', 'unique_users': 329, 'queries': 1891}}

    It's not a big dictionary. But when I try to do one last thing, it craps out on me:

        for k, v in stats:
            mylist.append(v)

        too many values to unpack

    What the heck does that mean??? TOO MANY VALUES TO UNPACK.
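
    For what it's worth, iterating a dict yields only its keys; each key (a string) then fails to unpack into k and v. Iterating the key/value pairs fixes it:

        mylist = []
        for k, v in stats.items():
            mylist.append(v)

        # or simply:
        mylist = list(stats.values())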

    Read the article

  • Fast find object by string property

    - by Andrew Kalashnikov
    Hello, colleagues. My task is to quickly find an object by its string property. The object:

        class DicDomain
        {
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
        }

    For storing the objects I currently use List<T>, where T is DicDomain. I've got 5-10 such lists, each containing about 500-20000 items. The task is to find objects by Name. I use this code now:

        List<T> entities = dictionary.FindAll(s => s.Name.Equals(word, StringComparison.OrdinalIgnoreCase));

    I've got some questions: Is my search speed optimal? I think not. Is List a good data structure for this task? What about a hashtable, or something sorted? With the Find method, maybe I should use string interning? I haven't much experience with these tasks. Can you give me good advice for increasing performance? Thanks.
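
    A sketch of one common approach: index each list once in a case-insensitive dictionary keyed by Name (with list buckets, since names may repeat), so lookups become O(1) on average instead of a full scan:

        Dictionary<string, List<DicDomain>> index =
            new Dictionary<string, List<DicDomain>>(StringComparer.OrdinalIgnoreCase);

        foreach (DicDomain d in dictionary)
        {
            List<DicDomain> bucket;
            if (!index.TryGetValue(d.Name, out bucket))
                index[d.Name] = bucket = new List<DicDomain>();
            bucket.Add(d);
        }

        // per lookup, instead of FindAll's full scan:
        List<DicDomain> entities;
        index.TryGetValue(word, out entities);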

    Read the article

  • How do I know if my PHP application is using too much memory?

    - by John
    I'm working on a PHP web application that lets users network with each other, book events, message, etc. I launched it a few months ago and at the moment there are only about 100 users. I set up the application on a VPS with Ubuntu 9.10, Apache 2, MySQL 5 and PHP 5. Lately, my web application has been experiencing outages due to excessive memory usage. From what I can tell in the error logs, the server automatically kills Apache processes that consume too much memory. As a stop-gap measure, I upgraded the server from 360 MB to 720 MB of RAM a few minutes ago. So my question is: how do I go about resolving these outage issues? How do I know whether my website's need for more memory is due to poor code or is part of the website's natural growth? What's the most efficient way to determine which PHP scripts consume the most memory?
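
    A small sketch for finding the hungry scripts: log the peak memory of every request, then grep the log for the biggest offenders. memory_get_peak_usage() is standard in PHP 5.2+; the log format here is an assumption.

        <?php
        function logPeakMemory()
        {
            error_log(sprintf('%s peak=%.2f MB',
                $_SERVER['REQUEST_URI'],
                memory_get_peak_usage(true) / 1048576));
        }
        register_shutdown_function('logPeakMemory');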

    Read the article

  • Tree iterator, can you optimize this any further?

    - by Ron
    As a follow-up to my original question about a small piece of this code, I decided to ask whether you can do better than what we came up with so far. The code below iterates over a binary tree (left/right = child/next). I do believe there is room for one less conditional in here (the down boolean). The fastest answer wins!

    - The cnt statement can/will be multiple statements, so make sure it appears only once.
    - The child() and next() member functions are about 30x as slow as the hasChild() and hasNext() operations.
    - Keep it iterative <-- dropped this requirement, as the recursive solution presented was faster.
    - This is C++ code.
    - The visit order of the nodes must stay as in the example below (hit parents first, then the children, then the 'next' nodes).
    - BaseNodePtr is a boost::shared_ptr, thus assignments are slow; avoid any temporary BaseNodePtr variables.

    Currently this code takes 5897 ms to visit 62,200,000 nodes in a test tree, calling this function 200,000 times.

        void processTree(BaseNodePtr current, unsigned int & cnt)
        {
            bool down = true;
            while (true)
            {
                if (down)
                {
                    while (true)
                    {
                        cnt++; // this can/will be multiple statements
                        if (!current->hasChild()) break;
                        current = current->child();
                    }
                }
                if (current->hasNext())
                {
                    down = true;
                    current = current->next();
                }
                else
                {
                    down = false;
                    current = current->parent();
                    if (!current) return; // done.
                }
            }
        }
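
    For reference, a sketch of the recursive shape the question alludes to; it preserves the parent-then-children-then-next order and passes the shared_ptr by const reference to avoid refcount churn. The interface (hasChild(), child(), hasNext(), next()) is the question's own; note that a very long 'next' chain will deepen the call stack.

        void processTreeRecursive(const BaseNodePtr & current, unsigned int & cnt)
        {
            cnt++;                                             // visit the parent first
            if (current->hasChild())
                processTreeRecursive(current->child(), cnt);   // then its children
            if (current->hasNext())
                processTreeRecursive(current->next(), cnt);    // then the 'next' chain
        }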

    Read the article

  • How to properly cast a global memory array using the uint4 vector in CUDA to increase memory throughput?

    - by charis
    There are generally two techniques to increase the memory throughput of global memory in a CUDA kernel: coalescing memory accesses, and accessing words of at least 4 bytes. With the first technique, accesses to the same memory segment by threads of the same half-warp are coalesced into fewer transactions; by accessing words of at least 4 bytes, the effective memory segment is increased from 32 bytes to 128. To access 16-byte instead of 1-byte words when unsigned chars are stored in global memory, the uint4 vector is commonly used, by casting the memory array to uint4:

        uint4 *text4 = ( uint4 * ) d_text;
        var = text4[i];

    In order to extract the 16 chars from var, I am currently using bitwise operations. For example:

        s_array[j * 16 + 0] = var.x & 0x000000FF;
        s_array[j * 16 + 1] = (var.x >> 8) & 0x000000FF;
        s_array[j * 16 + 2] = (var.x >> 16) & 0x000000FF;
        s_array[j * 16 + 3] = (var.x >> 24) & 0x000000FF;

    My question is: is it possible to recast var (or, for that matter, *text4) to unsigned char in order to avoid the additional overhead of the bitwise operations?
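
    A sketch of the recast, with a caveat: taking the address of a register variable can force it into local memory, so the bitwise version may well end up faster; profiling both on the target hardware is the only reliable answer.

        uint4 var = text4[i];
        unsigned char *bytes = reinterpret_cast<unsigned char *>(&var);
        #pragma unroll
        for (int b = 0; b < 16; ++b)
            s_array[j * 16 + b] = bytes[b];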

    Read the article

  • Iterative Reduction to Null Matrix

    - by user1459032
    Here's the problem: I'm given a matrix like

    Input:

        1 1 1
        1 1 1
        1 1 1

    At each step, I need to find a "second" matrix of 1's and 0's with no two 1's on the same row or column, then subtract the second matrix from the original matrix. I repeat the process until I get a matrix of all 0's. Furthermore, I need to take the least possible number of steps, and I need to print all the "second" matrices in O(n) time. In the above example I can get to the null matrix in 3 steps by subtracting these three matrices in order:

    Expected output:

        1 0 0
        0 1 0
        0 0 1

        0 0 1
        1 0 0
        0 1 0

        0 1 0
        0 0 1
        1 0 0

    I have coded an attempt in which I find the first maximum value and create the second matrices based on the index of that value, but for the above input I get 4 output matrices, which is wrong:

    My output:

        1 0 0
        0 1 0
        0 0 1

        0 1 0
        1 0 0
        0 0 0

        0 0 1
        0 0 0
        1 0 0

        0 0 0
        0 0 1
        0 1 0

    My solution works for most of the test cases but fails for the one given above. Can someone give me some pointers on how to proceed, or an algorithm that guarantees optimality?

    Test case that works:

    Input:

        0 2 1
        0 0 0
        3 0 0

    Output:

        0 1 0
        0 0 0
        1 0 0

        0 1 0
        0 0 0
        1 0 0

        0 0 1
        0 0 0
        1 0 0
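
    One pointer (a hedged observation, not a full solution): the minimum number of steps equals the maximum row or column sum, so every optimal step must cover all rows and columns that currently attain that maximum. That is exactly where a greedy "first maximum" choice can go wrong. A quick sketch for checking a candidate answer against the lower bound:

        # a sketch: lower bound on the number of steps for a nonnegative matrix
        def min_steps(matrix):
            row_sums = [sum(row) for row in matrix]
            col_sums = [sum(col) for col in zip(*matrix)]
            return max(row_sums + col_sums)

        print(min_steps([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))  # 3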

    Read the article

  • how to avoid temporaries when copying weakly typed object

    - by Truncheon
    Hi. I'm writing a series of classes that inherit from a base class using virtual. They are INT, FLOAT and STRING objects that I want to use in a scripting language. I'm trying to implement weak typing, but I don't want STRING objects to return copies of themselves when used in the following way (instead I would prefer to have a reference returned, which can be used in copying):

        a = "hello ";
        b = "world";
        c = a + b;

    I have written the following code as a mock example:

        #include <iostream>
        #include <string>
        #include <cstdio>
        #include <cstdlib>

        std::string dummy("<int object cannot return string reference>");

        struct BaseImpl
        {
            virtual bool is_string() = 0;
            virtual int get_int() = 0;
            virtual std::string get_string_copy() = 0;
            virtual std::string const& get_string_ref() = 0;
        };

        struct INT : BaseImpl
        {
            int value;

            INT(int i = 0) : value(i)
            {
                std::cout << "constructor called\n";
            }

            INT(BaseImpl& that) : value(that.get_int())
            {
                std::cout << "copy constructor called\n";
            }

            bool is_string() { return false; }
            int get_int() { return value; }

            std::string get_string_copy()
            {
                char buf[33];
                sprintf(buf, "%i", value);
                return buf;
            }

            std::string const& get_string_ref() { return dummy; }
        };

        struct STRING : BaseImpl
        {
            std::string value;

            STRING(std::string s = "") : value(s)
            {
                std::cout << "constructor called\n";
            }

            STRING(BaseImpl& that)
            {
                if (that.is_string())
                    value = that.get_string_ref();
                else
                    value = that.get_string_copy();
                std::cout << "copy constructor called\n";
            }

            bool is_string() { return true; }
            int get_int() { return atoi(value.c_str()); }
            std::string get_string_copy() { return value; }
            std::string const& get_string_ref() { return value; }
        };

        struct Base
        {
            BaseImpl* impl;
            Base(BaseImpl* p = 0) : impl(p) {}
            ~Base() { delete impl; }
        };

        int main()
        {
            Base b1(new INT(1));
            Base b2(new STRING("Hello world"));
            Base b3(new INT(*b1.impl));
            Base b4(new STRING(*b2.impl));

            std::cout << "\n";
            std::cout << b1.impl->get_int() << "\n";
            std::cout << b2.impl->get_int() << "\n";
            std::cout << b3.impl->get_int() << "\n";
            std::cout << b4.impl->get_int() << "\n";

            std::cout << "\n";
            std::cout << b1.impl->get_string_ref() << "\n";
            std::cout << b2.impl->get_string_ref() << "\n";
            std::cout << b3.impl->get_string_ref() << "\n";
            std::cout << b4.impl->get_string_ref() << "\n";

            std::cout << "\n";
            std::cout << b1.impl->get_string_copy() << "\n";
            std::cout << b2.impl->get_string_copy() << "\n";
            std::cout << b3.impl->get_string_copy() << "\n";
            std::cout << b4.impl->get_string_copy() << "\n";

            return 0;
        }

    It was necessary to add an if check in the STRING class to determine whether it's safe to request a reference instead of a copy. Script code:

        a = "test";
        b = a;
        c = 1;
        d = "" + c; /* not safe to request reference by standard */

    C++ code:

        STRING(BaseImpl& that)
        {
            if (that.is_string())
                value = that.get_string_ref();
            else
                value = that.get_string_copy();
            std::cout << "copy constructor called\n";
        }

    I was hoping there's a way of moving that if check to compile time, rather than run time.
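
    One hedged idea: where the static type is known at the call site, an overload picks the reference path at compile time, leaving the BaseImpl& constructor as the dynamic fallback. A sketch of what could sit alongside the existing constructors in STRING:

        STRING(STRING const& that) : value(that.value)   // resolved at compile time
        {
            std::cout << "copy constructor called\n";
        }

        STRING(BaseImpl& that)                           // dynamic fallback
        {
            value = that.is_string() ? that.get_string_ref()
                                     : that.get_string_copy();
            std::cout << "copy constructor called\n";
        }

    The overload only helps when constructing from a STRING directly (not through *b2.impl), so a truly static check would require the interpreter itself to track types at compile time.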

    Read the article

  • Scalably processing large amounts of complicated database data in PHP, many times a day

    - by Eph
    I'm soon to be working on a project that poses a problem for me. It's going to require, at regular intervals throughout the day, processing tens of thousands of records, potentially over a million. Processing is going to involve several (potentially complicated) formulas, the generation of several random factors, writing some new data to a separate table, and updating the original records with some results. This needs to occur for all records, ideally every three hours. Each new user of the site will be adding between 50 and 500 records that need to be processed in such a fashion, so the number will not be steady. The code hasn't been written yet, as I'm still in the design process, mostly because of this issue. I know I'm going to need to use cron jobs, but I'm concerned that processing records at this scale may cause the site to freeze up, perform slowly, or just piss off my hosting company every three hours. I'd like to know if anyone has experience or tips on similar subjects. I've never worked at this magnitude before, and for all I know this will be trivial to the server and not pose much of an issue. As long as ALL records are processed before the next three-hour period occurs, I don't care if they aren't processed simultaneously (though, ideally, all records belonging to a specific user should be processed in the same batch), so I've been wondering if I should process in batches every 5 minutes, 15 minutes, hour, whatever works, and how best to approach this (and make it scalable in a way that is fair to all users)?
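
    A rough sketch of the batching shape, run from cron every few minutes. The table and column names are hypothetical, with a processed_at timestamp serving as the work queue, and $db/processRecord() standing in for whatever database layer and per-record logic the project ends up with:

        <?php
        // pick the stalest batch, grouped by user so a user's records stay together
        $rows = $db->query(
            "SELECT id FROM records
             WHERE processed_at < NOW() - INTERVAL 3 HOUR
             ORDER BY user_id, processed_at
             LIMIT 5000"
        );
        foreach ($rows as $row) {
            processRecord($row['id']);   // the formulas + writes from the question
        }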

    Read the article

  • Oracle SQL: Query results from the previous X ISO weeks (where X might be > 52)

    - by tommy-o-dell
    How could I adapt this query to show the previous 61 weeks (still excluding the current week)? My query currently shows me the total weekly sales for 2010, grouped by ISO week and ISO year (excluding the current week):

        select to_char(order_date,'IYYY') as iso_year,
               to_char(order_date,'IW') as iso_week,
               sum(sale_amount)
        from orders
        where to_char(order_date,'IW') <> to_char(SYSDATE,'IW')
          and to_char(order_date,'IYYY') = 2010
        group by to_char(order_date,'IYYY'), to_char(order_date,'IW')

    I realize I could probably just omit the "2010" requirement, order by desc and limit results to a certain number of rows, but that just doesn't seem right! Much appreciate any help pointing me in the right direction!
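
    A hedged sketch using a date range instead of string comparisons: in Oracle, trunc(date,'IW') gives the first day of a date's ISO week, so the previous 61 complete weeks form a simple half-open interval.

        select to_char(order_date,'IYYY') as iso_year,
               to_char(order_date,'IW')   as iso_week,
               sum(sale_amount)           as weekly_sales
        from orders
        where order_date >= trunc(sysdate,'IW') - 61 * 7   -- start of the week 61 weeks ago
          and order_date <  trunc(sysdate,'IW')            -- excludes the current week
        group by to_char(order_date,'IYYY'), to_char(order_date,'IW')
        order by iso_year, iso_week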

    Read the article

  • Word frequency tally script is too slow

    - by Dave Jarvis
    Background: I created a script to count the frequency of words in a plain text file. The script performs the following steps: count the frequency of words from a corpus, retain each word in the corpus found in a dictionary, and create a comma-separated file of the frequencies. The script is at: http://pastebin.com/VAZdeKXs

    Problem: The following lines continually cycle through the dictionary to match words:

        for i in $(awk '{if( $2 ) print $2}' frequency.txt); do
          grep -m 1 ^$i\$ dictionary.txt >> corpus-lexicon.txt;
        done

    It works, but it is slow, because it scans the entire dictionary for every single word in order to remove any words that are not in the dictionary. (The -m 1 parameter stops the scan when the match is found.)

    Question: How would you optimize the script so that the dictionary is not scanned from start to finish for every single word? The majority of the words will not be in the dictionary. Thank you!
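
    One pass instead of one scan per word: load the dictionary into an awk hash and filter the frequency list against it. A sketch, assuming one word per line in dictionary.txt and the word in field 2 of frequency.txt, as above:

        awk 'NR == FNR { dict[$0]; next }      # first file: build the dictionary hash
             ($2 in dict) { print $2 }         # second file: keep known words only
            ' dictionary.txt frequency.txt > corpus-lexicon.txt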

    Read the article

  • Which is faster in memory, ints or chars? And file-mapping or chunk reading?

    - by Nick
    Okay, so I've written a (rather unoptimized) program before to encode images to JPEGs; however, now I am working with MPEG-2 transport streams and the H.264 video encoded within them. Before I dive into programming all of this, I am curious what the fastest way to deal with the actual file is. Currently I am file-mapping the .mts file into memory to work on it, although I am not sure if it would be faster to (for example) read 100 MB of the file into memory in chunks and deal with it that way. These files require a lot of bit-shifting and such to read flags, so I am wondering: when I reference some of the memory, is it faster to read 4 bytes at once as an integer or 1 byte as a character? I thought I read somewhere that x86 processors are optimized to a 4-byte granularity, but I'm not sure if this is true... Thanks!
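
    For the word-vs-byte part, a small sketch: memcpy into a fixed-width integer is the portable way to read 4 bytes at once, and optimizing compilers lower it to a single load on x86. Whether it beats four byte reads depends on the surrounding code, so it is worth benchmarking both.

        #include <cstdint>
        #include <cstring>

        inline uint32_t read_u32(const unsigned char* p)
        {
            uint32_t v;
            std::memcpy(&v, p, sizeof v);  // one 32-bit load after optimization
            return v;                      // host-endian; MPEG streams are big-endian, so swap as needed
        }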

    Read the article

  • Help with SQL query involving two tables and a max date

    - by wes
    Hello all! Firstly, any help is greatly appreciated! I have been searching for a solution to my problem for a while and haven't found exactly what I am looking for. I have two tables, notifications and mailmessages. notifications has fields (notifytime, notifynumber, accountnumber); mailmessages has fields (id, messagesubject, messagenumber, accountnumber). My goal is to create a single SQL query to retrieve distinct rows from mailmessages WHERE the accountnumber is a specific number AND notifynumber = messagenumber AND ONLY the most recent notifytime from the notifications table, where the accountnumbers match in both tables. I am using SQL Server 2008 Express as a backend to an ASP.NET page. This query should return distinct messages for an account with only the most recent date from the notifications table. Please help! I'll buy you a beer!!!
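
    A hedged sketch for SQL Server 2008; @account is a placeholder parameter, and it assumes "most recent" means the single latest notifytime per account:

        SELECT DISTINCT m.id, m.messagesubject, m.messagenumber,
               m.accountnumber, n.notifytime
        FROM mailmessages m
        JOIN notifications n
          ON n.accountnumber = m.accountnumber
         AND n.notifynumber  = m.messagenumber
        WHERE m.accountnumber = @account
          AND n.notifytime = (SELECT MAX(n2.notifytime)
                              FROM notifications n2
                              WHERE n2.accountnumber = m.accountnumber);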

    Read the article

  • getting a combo box whose row source is a query (which takes data from a form) to update

    - by primus285
    I have a combo box whose row source is based on an SQL query, something like:

        SELECT DISTINCT Database_New.ASEC
        FROM Database_New
        WHERE Database_New.Date >= DateSerial([cboYear], 1, 1)
          AND Database_New.Date <= DateSerial([cboYear], 12, 31);

    The trouble is that if I change the value of cboYear, the values in the cboASEC drop-down do not update. I have to open the query, save it and close it to get the thing to update while I have the form open. Is there a way to get cboASEC to update somehow? Maybe a little tidbit of code in cboYear's AfterUpdate event?
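
    A minimal sketch of exactly that tidbit, assuming both combo boxes live on the same form:

        Private Sub cboYear_AfterUpdate()
            ' re-run cboASEC's row source query with the new year
            Me.cboASEC.Requery
        End Sub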

    Read the article

  • optimize a string.Format + replace.

    - by acidzombie24
    I have this function. The Visual Studio profiler marked the line with string.Format as hot, and it's where I spend much of my time. How can I write this loop more efficiently?

        public string EscapeNoPredicate(string sz)
        {
            var s = new StringBuilder(sz);
            s.Replace(sepStr, sepStr + sepStr);
            foreach (char v in IllegalChars)
            {
                string s2 = string.Format("{0}{1:X2}", seperator, (Int16)v);
                s.Replace(v.ToString(), s2);
            }
            return s.ToString();
        }
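
    Since seperator and IllegalChars presumably don't change between calls, a sketch that formats each escape string only once and reuses it. The names come from the question's code; the cache field is an addition, and IllegalChars is assumed indexable (an array or string):

        private static string[] escapeCache;   // hypothetical one-time cache

        public string EscapeNoPredicate(string sz)
        {
            if (escapeCache == null)
            {
                escapeCache = new string[IllegalChars.Length];
                for (int i = 0; i < IllegalChars.Length; i++)
                    escapeCache[i] = string.Format("{0}{1:X2}", seperator, (Int16)IllegalChars[i]);
            }

            var s = new StringBuilder(sz);
            s.Replace(sepStr, sepStr + sepStr);
            for (int i = 0; i < IllegalChars.Length; i++)
                s.Replace(IllegalChars[i].ToString(), escapeCache[i]);
            return s.ToString();
        }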

    Read the article

  • Overhead of serving pages - JSPs vs. PHP vs. ASPXs vs. C

    - by John Shedletsky
    I am interested in writing my own internet ad server. I want to serve billions of impressions with as little hardware as possible. Which server-side technologies are best suited for this task? I am asking about the relative overhead of serving my ad pages as either pages rendered by PHP, or Java, or .NET, or coding HTTP responses directly in C and writing some multi-socket IO monster to serve requests (I assume this one wins, but if my assumption is wrong, that would actually be most interesting). Obviously all the most effective optimizations are done at the algorithm level, but I figure there have got to be some speed differences at the end of the day that make one method of serving ads better than another. How much overhead does something like Apache or IIS introduce? There's got to be a ton of extra junk in there I don't need. At some point I guess this is more a question of which platform/language combo is best suited; please excuse the awkwardly posed question, and hopefully you understand what I am trying to get at.

    Read the article

  • Has anyone ever successfully made index merge work for MySQL?

    - by user198729
    Setup:

        mysql> create table t(a integer unsigned, b integer unsigned);
        mysql> insert into t(a,b) values (1,2),(1,3),(2,4);
        mysql> create index i_t_a on t(a);
        mysql> create index i_t_b on t(b);
        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL |    3 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Is there something I'm missing?

    Update:

        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL | 1863 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Version:

        mysql> select version();
        +----------------------+
        | version()            |
        +----------------------+
        | 5.1.36-community-log |
        +----------------------+

    Has anyone ever successfully made index merge work for MySQL? I'll be glad to see success stories here :)
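
    If the optimizer won't produce an index_merge plan, a common hedged workaround is rewriting the OR as a UNION so each branch can use its own index:

        select * from t where a = 1
        union
        select * from t where b = 4;

    On the 3-row table above, a full scan is genuinely cheaper, so the ALL plan is expected; index merge only pays off once the table is large and both predicates are selective.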

    Read the article

  • Using Custom Generic Collection faster with objects than List

    - by Kaminari
    I'm iterating through a List<> to find a matching element. The problem is that the object has only 2 significant values, Name and Link (both strings), but also has some other values which I don't want to compare. I'm thinking about using something like HashSet (which is exactly what I'm searching for: fast) from .NET 3.5, but the target framework has to be 2.0. There is something called Power Collections here: http://powercollections.codeplex.com/. Should I use that? But maybe there is another way? If not, can you suggest a suitable custom collection?
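
    A sketch that stays within .NET 2.0: key a Dictionary on the two significant fields and ignore the rest. MyItem and the "\n" key separator are hypothetical stand-ins; the separator is assumed not to occur in either field.

        Dictionary<string, MyItem> byKey = new Dictionary<string, MyItem>();
        foreach (MyItem item in items)
            byKey[item.Name + "\n" + item.Link] = item;

        MyItem found;
        if (byKey.TryGetValue(name + "\n" + link, out found))
        {
            // O(1) average lookup instead of scanning the List<>
        }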

    Read the article

  • How do I select a random record efficiently in MySQL?

    - by user198729
        mysql> EXPLAIN SELECT * FROM urls ORDER BY RAND() LIMIT 1;
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | Extra                           |
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
        |  1 | SIMPLE      | urls  | ALL  | NULL          | NULL | NULL    | NULL | 62228 | Using temporary; Using filesort |
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+

    The above doesn't qualify as efficient. How should I do it properly?
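
    A common hedged alternative, assuming an auto-increment primary key id with few gaps (the distribution skews slightly toward rows that follow a gap). The derived table forces RAND() to be evaluated once:

        SELECT u.*
        FROM urls u
        JOIN (SELECT FLOOR(1 + RAND() * (SELECT MAX(id) FROM urls)) AS rid) r
          ON u.id >= r.rid
        ORDER BY u.id
        LIMIT 1;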

    Read the article
