Search Results

Search found 22292 results on 892 pages for 'image optimization'.


  • Compiler reordering around mutex boundaries?

    - by shojtsy
    Suppose I have my own non-inline functions LockMutex and UnlockMutex, which use a proper mutex (such as boost) internally. How will the compiler know not to reorder other operations with respect to calls to LockMutex and UnlockMutex? It cannot possibly know how I will implement these functions in some other compilation unit.

      void SomeClass::store(int i)
      {
          LockMutex(_m);
          _field = i; // could the compiler move this around?
          UnlockMutex(_m);
      }

    PS: One is supposed to use instances of classes for holding locks to guarantee unlocking. I have left this out to simplify the example.
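
    In practice, a call to an opaque function in another translation unit already acts as a compiler-level barrier: the compiler must assume the callee could read or write any memory reachable from its arguments or globals, so it cannot move such accesses across the call. A minimal sketch of making that barrier explicit, assuming GCC/Clang inline-assembly syntax:

      // Hypothetical explicit compiler barrier (GCC/Clang syntax assumed).
      // An opaque, non-inline call already behaves like one for globally
      // reachable memory.
      inline void CompilerBarrier()
      {
          asm volatile("" ::: "memory"); // emits no instructions, but memory ops may not cross it
      }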

  • Controlling read and write access width to memory mapped registers in C

    - by srking
    I'm using an x86-based core to manipulate a 32-bit memory-mapped register. My hardware behaves correctly only if the CPU generates 32-bit wide reads and writes to this register. The register is aligned on a 32-bit address and is not addressable at byte granularity. What can I do to guarantee that my C (or C99) compiler will only generate full 32-bit wide reads and writes in all cases? For example, if I do a read-modify-write operation like this:

      volatile uint32_t* p_reg = (volatile uint32_t*)0xCAFE0000;
      *p_reg |= 0x01;

    I don't want the compiler to get smart about the fact that only the bottom byte changes and generate 8-bit wide reads/writes. Since the machine code is often denser for 8-bit operations on x86, I'm afraid of unwanted optimizations. Disabling optimizations in general is not an option.
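
    A hedged sketch (the C standard leaves the width of volatile accesses implementation-defined, so this is a convention most compilers honor, not a guarantee): split the read-modify-write into an explicit full-word load and store, so each volatile access covers the whole uint32_t, and verify the generated assembly.

      #include <stdint.h>

      #define REG_ADDR ((volatile uint32_t*)0xCAFE0000u) /* address taken from the question */

      static void reg_set_bits(uint32_t mask)
      {
          uint32_t v = *REG_ADDR;  /* one full 32-bit volatile read */
          v |= mask;               /* modify a plain local, not the register */
          *REG_ADDR = v;           /* one full 32-bit volatile write */
      }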

  • Fastest way to do a weighted tag search in SQL Server

    - by Hasan Khan
    My table is as follows:

      ObjectID  bigint
      Tag       nvarchar(50)
      Weight    float
      Type      tinyint

    I want to search for all objects that have the tag 'big' or 'large', with the ObjectIDs ordered by the sum of the weights (so objects having both tags will be on top):

      select objectid,
             row_number() over (order by sum(weight) desc) as rowid
      from tags
      where tag in ('big', 'large') and type = 0
      group by objectid

    The reason for row_number() is that I want paging over the results. The query in its current form is very slow: it takes a minute to execute over 16 million tags. What should I do to make it faster? I have a non-clustered index on (objectid, tag, type). Any suggestions?
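
    One hedged suggestion (a sketch; the right index depends on the rest of the workload): the existing index leads with objectid, so the filter on tag cannot seek into it. An index that leads with the filtered columns and covers weight lets the whole query run off the index:

      -- Hypothetical covering index: equality/IN columns first,
      -- weight included so SUM(weight) never touches the base table.
      CREATE NONCLUSTERED INDEX IX_tags_tag_type
          ON tags (tag, type)
          INCLUDE (objectid, weight);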

  • Why is this query so slow?

    - by Silver Light
    This query appears in the MySQL slow query log: it takes 11 seconds.

      INSERT INTO record_visits ( record_id, visit_day )
      VALUES ( '567', NOW() );

    The table has 501043 records and its structure looks like this:

      CREATE TABLE IF NOT EXISTS `record_visits` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `record_id` int(11) DEFAULT NULL,
        `visit_day` date DEFAULT NULL,
        `visit_cnt` bigint(20) DEFAULT '1',
        PRIMARY KEY (`id`),
        UNIQUE KEY `record_id_visit_day` (`record_id`,`visit_day`)
      ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    What could be wrong? Why does this INSERT take so long?
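
    A hedged guess at the surrounding issue: on MyISAM, any long-running SELECT holds a table lock that stalls this INSERT, so the slowness may not be the statement itself. Separately, given the visit_cnt column and the unique key, the intended operation looks like a per-day counter, which fits a single upsert statement (a sketch, assuming that intent):

      -- One statement per visit: insert the first hit of the day,
      -- otherwise bump the existing counter via the unique key.
      INSERT INTO record_visits (record_id, visit_day, visit_cnt)
      VALUES (567, CURDATE(), 1)
      ON DUPLICATE KEY UPDATE visit_cnt = visit_cnt + 1;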

  • How can you determine the file size in JavaScript?

    - by Daniel Lew
    I help moderate a forum online, and on this forum we restrict the size of signatures. At the moment we test this via a simple Greasemonkey script I wrote; we wrap all signatures with a <div>, the script looks for them, and then measures the div's height and width. All the script does right now is make sure the signature stays within a particular height/width. I would like to start measuring the file size of the images inside a signature automatically, so that the script can automatically flag users who include huge images in their signature. However, I can't seem to find a way to measure the size of images loaded on the page. I've searched and found a property specific to IE (element.fileSize), but I obviously can't use that in my Greasemonkey script. Is there a way to find out the file size of an image in Firefox via JavaScript?

    Edit: People are misinterpreting the problem. The forums themselves do not host images; we host the BBCode that people enter as their signature. So, for example, people enter this:

      This is my signature, check out my [url=http://google.com]awesome website[/url]!
      This image is cool! [img]http://image.gif[/img]

    I want to be able to check on these images via Greasemonkey. I could write a batch script to scan all of these instead, but I'm just wondering if there's a way to augment my current script.
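
    One hedged possibility: Greasemonkey's GM_xmlhttpRequest is exempt from the same-origin policy, so the script could issue a HEAD request per [img] URL and read the Content-Length header. A sketch, assuming the image hosts answer HEAD requests and send that header:

      // Ask the server for the image's size without downloading the body.
      function checkImageSize(url, maxBytes, onTooBig) {
          GM_xmlhttpRequest({
              method: "HEAD",
              url: url,
              onload: function (resp) {
                  var m = /Content-Length:\s*(\d+)/i.exec(resp.responseHeaders);
                  if (m && parseInt(m[1], 10) > maxBytes) {
                      onTooBig(url, parseInt(m[1], 10)); // flag this signature image
                  }
              }
          });
      }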

  • Initializing a C++ vector to random values... fast

    - by Flamewires
    Hey, I'd like to make this as fast as possible because it gets called A LOT in a program I'm writing, so is there any faster way to initialize a C++ vector to random values than:

      double range; // set to the range of a particular function I want to evaluate
      std::vector<double> x(30, 0.0);
      for (int i = 0; i < x.size(); i++) {
          x.at(i) = (rand()/(double)RAND_MAX)*range;
      }

    EDIT: Fixed x's initializer.
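
    A sketch of a common alternative, assuming C++11 is available: keep one engine alive across calls (construction and seeding are the expensive part), avoid the bounds-checked at(), and skip the zero-fill by writing through a back inserter:

      #include <algorithm>
      #include <iterator>
      #include <random>
      #include <vector>

      std::vector<double> random_vector(std::size_t n, double range)
      {
          static std::mt19937 eng{std::random_device{}()};  // constructed once, reused
          std::uniform_real_distribution<double> dist(0.0, range);
          std::vector<double> x;
          x.reserve(n);                                     // no zero-fill pass
          std::generate_n(std::back_inserter(x), n, [&] { return dist(eng); });
          return x;
      }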

  • Best way to update/insert into a table based on a remote table.

    - by martilyo
    I have two very large enterprise tables in an Oracle 10g database. One table keeps the historical information of the other table. The problem is, I'm getting to the point where there are just too many records: my insert/update is taking too long and my session is getting killed by the governor. Here is pseudocode for my update process:

      sqlsel := 'SELECT col1, col2, col3, sysdate
                 FROM table2@remote_location dpi
                 WHERE (col1, col2, col3) IN
                 (
                     SELECT col1, col2, col3
                     FROM table2@remote_location
                     MINUS
                     SELECT DISTINCT col1, col2, col3
                     FROM table1 mpc
                     WHERE facility = '''||load_facility||'''
                 )';
      EXECUTE IMMEDIATE sqlsel BULK COLLECT INTO table1;

    I've tried the MERGE statement:

      MERGE INTO table1 t1
      USING (
          SELECT col1, col2, col3 FROM table2@remote_location
      ) t2
      ON ( t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3 )
      WHEN NOT MATCHED THEN
          INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm)
          VALUES (t2.col1, t2.col2, t2.col3, sysdate);

    But there seems to be a confirmed bug in versions prior to Oracle 10.2.0.4 in the MERGE statement when merging against a remote database. The chance of getting an enterprise upgrade is slim, so is there a way to further optimize my first query, or to write it another way, to get the best performance? Thanks.
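
    One hedged workaround for the remote-MERGE bug (a sketch, assuming the row set fits a staging table): materialize the remote rows locally first, then run the same MERGE entirely against local tables, so the statement never touches the database link:

      -- Hypothetical staging step; names are placeholders.
      CREATE GLOBAL TEMPORARY TABLE table2_stage
          ON COMMIT PRESERVE ROWS
          AS SELECT col1, col2, col3 FROM table2@remote_location WHERE 1 = 0;

      INSERT /*+ APPEND */ INTO table2_stage
          SELECT col1, col2, col3 FROM table2@remote_location;

      MERGE INTO table1 t1
      USING table2_stage t2
      ON (t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3)
      WHEN NOT MATCHED THEN
          INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm)
          VALUES (t2.col1, t2.col2, t2.col3, sysdate);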

  • How does a computer multiply 2 numbers?

    - by ckv
    How does a computer perform a multiplication of 2 numbers, say 100 * 55? My guess was that the computer does repeated addition to achieve multiplication. Of course this could be the case for integer numbers. However, for floating-point numbers there must be some other logic. Note: This was asked in an interview.
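
    For context: hardware uses a dedicated multiplier circuit rather than repeated addition, and the algorithm it parallels is shift-and-add, which costs at most one addition per bit of the operand (32 steps for 32-bit values, not 100 or 55). Floating point then reduces to the same machinery: multiply the significands as integers and add the exponents. A sketch of shift-and-add:

      #include <stdint.h>

      /* For each set bit i of b, the product gains a * 2^i. */
      uint32_t mul_shift_add(uint32_t a, uint32_t b)
      {
          uint32_t result = 0;
          while (b != 0) {
              if (b & 1u)
                  result += a;  /* this bit of b contributes a shifted copy of a */
              a <<= 1;          /* next bit of b is worth twice as much */
              b >>= 1;
          }
          return result;
      }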

  • Creating objects makes the VM faster?

    - by Sudhir Jonathan
    Look at this piece of code:

      MessageParser parser = new MessageParser();
      for (int i = 0; i < 10000; i++) {
          parser.parse(plainMessage, user);
      }

    For some reason, it runs SLOWER (by about 100 ms) than:

      for (int i = 0; i < 10000; i++) {
          MessageParser parser = new MessageParser();
          parser.parse(plainMessage, user);
      }

    Any ideas why? The tests were repeated many times, so it wasn't just random. How could creating an object 10000 times be faster than creating it once?
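
    A hedged observation: differences like this are often microbenchmark artifacts rather than real effects; without JIT warm-up, whichever variant is measured first pays compilation cost, and a loop-local object that never escapes additionally gives the JIT more freedom to optimize. A minimal warm-up-aware timing sketch, using the same classes as the question:

      // Warm up first so the JIT has already compiled parse(), then time.
      for (int i = 0; i < 20000; i++) {
          new MessageParser().parse(plainMessage, user);
      }
      long t0 = System.nanoTime();
      for (int i = 0; i < 10000; i++) {
          new MessageParser().parse(plainMessage, user);
      }
      System.out.println((System.nanoTime() - t0) / 1e6 + " ms");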

  • How to make if-elif-else statement in python more space-saving?

    - by Neverland
    I have a lot of if-elif-else statements in my code:

      if message == '0' or message == '3' or message == '5' or message == '7':
          ...
      elif message == '1' or message == '2' or message == '4' or message == '6' or message == '8':
          ...
      else:
          ...

    Is it possible to format this in a more space-saving way? I tried it this way:

      if message == '0' or '3' or '5' or '7':
          ...
      elif message == '1' or '2' or '4' or '6' or '8':
          ...
      else:
          ...

    but without success.
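
    The attempted shortcut fails because it parses as (message == '0') or ('3') or ..., and a non-empty string like '3' is always truthy, so the first branch always runs. The idiomatic compression is a membership test:

      # "message is one of these" as a single membership test
      if message in ('0', '3', '5', '7'):
          ...
      elif message in ('1', '2', '4', '6', '8'):
          ...
      else:
          ...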

  • How can I get a COUNT(col) ... GROUP BY to use an index?

    - by thecoop
    I've got a table (col1, col2, ...) with an index on (col1, col2, ...). The table has millions of rows in it, and I want to run this query:

      SELECT col1, COUNT(col2)
      FROM <table>
      WHERE col1 NOT IN (<couple of exclusions>)
      GROUP BY col1

    Unfortunately, this results in a full table scan, which takes upwards of a minute. Is there any way of getting Oracle to use the index on the columns to return the results much faster?
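
    A hedged explanation: an Oracle B-tree index has no entry for a row whose indexed columns are all NULL, so the optimizer may answer from the index alone only when it knows the index sees every row. Declaring col1 NOT NULL, or stating it in the query, typically makes an index fast full scan legal (a sketch, with <table> as the placeholder from the question):

      -- Either constrain the column ...
      ALTER TABLE <table> MODIFY col1 NOT NULL;

      -- ... or state it in the query:
      SELECT col1, COUNT(col2)
      FROM <table>
      WHERE col1 IS NOT NULL
        AND col1 NOT IN (<couple of exclusions>)
      GROUP BY col1;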

  • To create new DB connection or not?

    - by Yeti
    I'm running a cron job (every 15 minutes) which takes about a minute to execute. It makes lots of API calls and stores data to the database. Right now I create a MySQL connection at the beginning and use the same connection throughout the code. Most of the time is spent making the API calls. Would it be more efficient to create a new database connection only when it's time to store the data, like this?

      1. Kill the last connection
      2. Wait for the API call to complete
      3. Create a new DB connection
      4. Execute the query
      5. Go to 1
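
    A sketch of the connect-late pattern (assuming PHP with PDO; the question does not name a language, and all names here are placeholders). Whether it beats one long-lived connection mostly depends on the server's idle-connection limits versus the reconnect cost, which is usually tiny next to slow API calls:

      <?php
      // Open a connection only when there is a row to store, then drop it.
      function store_result($row) {
          $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
          $stmt = $pdo->prepare('INSERT INTO results (payload) VALUES (?)');
          $stmt->execute(array(json_encode($row)));
          $pdo = null; // connection released while the next API call runs
      }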

  • How to create an image from canvas data?

    - by Jakub Hampl
    In my application I am trying to save an arbitrary part of a rendered HTML canvas to an image file. In my JavaScript I call ctx.getImageData(x, y, w, h) and pass the resulting object to my MacRuby code (though if you know a solution in Obj-C I am also very interested). There I'm trying to create an NSBitmapImageRep object so that I can then save to an image format the user desires. This is my code so far (the function gets a WebScriptObject as its argument):

      def setimagedata(d)
        w = d.valueForKey("width").to_i
        h = d.valueForKey("height").to_i
        data = Pointer.new(:char, d.valueForKey("data").valueForKey("length").to_i)
        d.valueForKey("data").valueForKey("length").to_i.times do |i|
          data[i] = d.valueForKey("data").webScriptValueAtIndex(i).to_i
        end
        puts "data complete" # gets called
        @exported_image = NSBitmapImageRep.alloc.initWithBitmapDataPlanes(data,
          pixelsWide: w, pixelsHigh: h, bitsPerSample: 32, samplesPerPixel: 4,
          hasAlpha: true, isPlanar: false,
          colorSpaceName: NSCalibratedRGBColorSpace,
          bitmapFormat: NSAlphaNonpremultipliedBitmapFormat,
          bytesPerRow: 0, bitsPerPixel: 0)
        puts "done" # doesn't get called
      end

    The code doesn't seem to get through the initWithBitmapDataPlanes call but gives no error. My question is: what am I doing wrong? Is this approach reasonable (and if not, what would be better)?
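
    A hedged observation (an educated guess, not a verified fix): canvas ImageData is RGBA with 8 bits per channel, so bitsPerSample should plausibly be 8 rather than 32 (32 bits per sample times 4 samples per pixel would describe a 128-bit pixel). Also, the first argument is expected to be a list of plane pointers, not the data pointer itself. A sketch of the adjusted call:

      # 8 bits per sample * 4 samples = 32 bits per pixel (RGBA8888).
      # With isPlanar: false, only the first plane pointer is used.
      planes = Pointer.new(:pointer, 1)
      planes[0] = data
      @exported_image = NSBitmapImageRep.alloc.initWithBitmapDataPlanes(planes,
        pixelsWide: w, pixelsHigh: h, bitsPerSample: 8, samplesPerPixel: 4,
        hasAlpha: true, isPlanar: false,
        colorSpaceName: NSCalibratedRGBColorSpace,
        bitmapFormat: NSAlphaNonpremultipliedBitmapFormat,
        bytesPerRow: w * 4, bitsPerPixel: 32)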

  • Java - Optimize finding a string in a list

    - by Mark
    I have an ArrayList of objects where each object contains a string 'word' and a date. I need to check whether the date has passed for a list of 500 words. The ArrayList could contain up to a million words and dates. The dates I store as integers, so the problem I have is finding the word I am looking for in the ArrayList. Is there a way to make this faster? In Python I have a dict, and mWords['foo'] is a simple lookup without looping through the whole 1 million items in the mWords array. Is there something like this in Java?

      for (int i = 0; i < mWords.size(); i++) {
          if (word.equals(mWords.get(i).word)) { // note: == would compare references, not contents
              mLastFindIndex = i;
              return mWords.get(i);
          }
      }
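
    The Java analog of a Python dict is java.util.HashMap, giving constant-time expected lookups. A sketch, assuming the list elements are of a hypothetical class Word with public fields word and date:

      import java.util.HashMap;
      import java.util.Map;

      // Build once: word -> entry.
      Map<String, Word> index = new HashMap<String, Word>(mWords.size() * 2);
      for (Word w : mWords) {
          index.put(w.word, w);
      }

      // Each of the 500 checks is then a single lookup instead of a scan:
      Word hit = index.get(word);
      if (hit != null && hit.date < today) {  // 'today' as an int, per the question
          // the date has passed
      }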

  • Where does the compiler store methods for C++ classes?

    - by Mashmagar
    This is more a curiosity than anything else... Suppose I have a C++ class Kitty as follows:

      class Kitty
      {
          void Meow()
          {
              // Do stuff
          }
      };

    Does the compiler place the code for Meow() in every instance of Kitty? Obviously repeating the same code everywhere would require more memory. But on the other hand, branching to a relative location in nearby memory requires fewer assembly instructions than branching to an absolute location on modern processors, so this is potentially faster. I suppose this is an implementation detail, so different compilers may behave differently. Keep in mind, I'm not considering static or virtual methods here.
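
    For what it's worth, typical implementations emit exactly one copy of each member function in the program's code segment; an object stores only its data members (plus a vtable pointer once virtual functions are involved). The usual mental model for a non-virtual method, sketched:

      // Rough equivalence, not literal compiler output:
      struct Kitty { /* data members only; no code per instance */ };

      void Kitty_Meow(Kitty* self)  // one shared copy; 'this' is a hidden argument
      {
          // Do stuff with self->...
      }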

  • Display an ashx image using jQuery?

    - by Abe Miessler
    I've been trying to use the jQuery plugin Colorbox to display images I have in my DB through an ashx file. Unfortunately it just spits a bunch of gibberish at the top of the page and no image. Can this be done? Here is what I have so far:

      $(document).ready(function () {
          $("a[rel='cbImg']").colorbox();
      });
      ...
      <a rel="cbImg" href="HuntImage.ashx?id=15">Click to see image</a>
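
    A hedged guess at the gibberish: Colorbox infers content type from the link's file extension, and .ashx does not look like an image, so the response gets injected as HTML. Colorbox's documented photo option forces image handling:

      // Tell Colorbox the target is an image despite the .ashx extension.
      $(document).ready(function () {
          $("a[rel='cbImg']").colorbox({ photo: true });
      });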

  • C++ DWORD* to BYTE*

    - by NomeSkavinski
    My issue: I am trying to convert an array of dynamic memory of type DWORD to BYTE. Fair enough, I can for-loop through this and convert the DWORD into a BYTE per entry. But is there a faster way to do this? To take a pointer to DWORD data and convert the whole piece of data into a pointer to BYTE data, such as using a memcpy operation? I feel this is not possible. I'm not requesting an answer, just an experienced opinion on my approach, as I have tried testing both approaches but seem to fail getting to a solution on my second one. Thanks for any input; again, no answers, just a point in the right direction. Nor is this a homework question; I felt that had to be mentioned.
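
    An opinion in sketch form: it depends on what "convert" means. Viewing the same memory as bytes costs nothing, while truncating each DWORD to a BYTE necessarily touches every element, and memcpy cannot help there because it copies raw bytes rather than narrowing elements. (dwords, bytes and count are hypothetical variables; the lambda assumes C++11.)

      #include <windows.h>
      #include <algorithm>

      // View the same bytes in place: no copy at all.
      BYTE* view = reinterpret_cast<BYTE*>(dwords);

      // Truncate element-wise: one pass is unavoidable.
      std::transform(dwords, dwords + count, bytes,
                     [](DWORD d) { return static_cast<BYTE>(d); });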

  • Composite primary keys in N-M relation or not?

    - by BerggreenDK
    Let's say we have 3 tables (actually I have 2 at the moment, but this example might illustrate the thought better):

      [Person]
        ID: int, primary key
        Name: nvarchar(xx)

      [Group]
        ID: int, primary key
        Name: nvarchar(xx)

      [Role]
        ID: int, primary key
        Name: nvarchar(xx)

      [PersonGroupRole]
        Person_ID: int, PRIMARY COMPOSITE OR NOT?
        Group_ID: int, PRIMARY COMPOSITE OR NOT?
        Role_ID: int, PRIMARY COMPOSITE OR NOT?

    Should any of the 3 IDs in the relation PersonGroupRole be marked as the PRIMARY key, or should all 3 be combined into one composite? What's the real benefit of doing it or not? I can join either way as far as I know, so Person JOIN PersonGroupRole JOIN Group gives me which persons are in which groups, etc. I will be using LINQ/C#/.NET on top of SQL Express and SQL Server, so if there are any reasons regarding language/SQL that might make the choice clearer, that's the platform I ask about. Looking forward to seeing what answers pop up, as I have thought about these primary keys/indexes many times when making combined ones.
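
    For reference, a sketch of the composite variant in SQL Server syntax: the three-column primary key both prevents duplicate person/group/role rows and gives the junction table a clustered index that joins leading with Person_ID can use. A separate surrogate id column adds little unless other tables must reference these rows directly:

      CREATE TABLE PersonGroupRole (
          Person_ID int NOT NULL REFERENCES Person(ID),
          Group_ID  int NOT NULL REFERENCES [Group](ID),
          Role_ID   int NOT NULL REFERENCES Role(ID),
          PRIMARY KEY (Person_ID, Group_ID, Role_ID)  -- composite key
      );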

  • Execute a method as few times as possible - PHP

    - by serhio
    I have a site in multiple languages. I have a method that returns today's exchange rates in an array, and I display those rates in a table.

      // --- en/index.php
      <?php
      include_once "../exchangeRates.php";
      $currencies = ReadExchangeRates();

      // --- fr/index.php
      <?php
      include_once "../exchangeRates.php";
      $currencies = ReadExchangeRates();

      ...
      // somewhere in the page
      <td><?php echo $currencies["eur"]["today"]; ?></td>

    So every time I load en/, fr/, or another language, I request the exchange rates from an external site. Can I optimize this behavior (reading once per day or session)? Maybe store a global variable and check the last update date?
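
    A sketch of a simple file cache inside exchangeRates.php (the cache path and the remote-fetch function are placeholders): every language front page calls ReadExchangeRates() as before, but the external site is contacted at most once per day:

      <?php
      function ReadExchangeRates() {
          $cache = __DIR__ . '/rates.cache';            // hypothetical cache file
          if (!file_exists($cache) || time() - filemtime($cache) > 86400) {
              $rates = FetchRatesFromExternalSite();    // the existing remote request
              file_put_contents($cache, serialize($rates));
              return $rates;
          }
          return unserialize(file_get_contents($cache));
      }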

  • I'm doing a lot of lists and dictionary sorting... and this is causing memory errors in my Python website

    - by alex
    I retrieved data from the log table in my database. Then I started finding unique users, comparing/sorting lists, etc. In the end I got down to this:

      stats = {'2010-03-19': {'date': '2010-03-19', 'unique_users': 312, 'queries': 1465},
               '2010-03-18': {'date': '2010-03-18', 'unique_users': 329, 'queries': 1659},
               '2010-03-17': {'date': '2010-03-17', 'unique_users': 379, 'queries': 1845},
               '2010-03-16': {'date': '2010-03-16', 'unique_users': 434, 'queries': 2336},
               '2010-03-15': {'date': '2010-03-15', 'unique_users': 390, 'queries': 2138},
               '2010-03-14': {'date': '2010-03-14', 'unique_users': 460, 'queries': 2221},
               '2010-03-13': {'date': '2010-03-13', 'unique_users': 507, 'queries': 2242},
               '2010-03-12': {'date': '2010-03-12', 'unique_users': 629, 'queries': 3523},
               '2010-03-11': {'date': '2010-03-11', 'unique_users': 811, 'queries': 4274},
               '2010-03-10': {'date': '2010-03-10', 'unique_users': 171, 'queries': 1297},
               '2010-03-26': {'date': '2010-03-26', 'unique_users': 299, 'queries': 1617},
               '2010-03-27': {'date': '2010-03-27', 'unique_users': 323, 'queries': 1310},
               '2010-03-24': {'date': '2010-03-24', 'unique_users': 352, 'queries': 2112},
               '2010-03-25': {'date': '2010-03-25', 'unique_users': 330, 'queries': 1290},
               '2010-03-22': {'date': '2010-03-22', 'unique_users': 329, 'queries': 1798},
               '2010-03-23': {'date': '2010-03-23', 'unique_users': 329, 'queries': 1857},
               '2010-03-20': {'date': '2010-03-20', 'unique_users': 368, 'queries': 1693},
               '2010-03-21': {'date': '2010-03-21', 'unique_users': 329, 'queries': 1511},
               '2010-03-29': {'date': '2010-03-29', 'unique_users': 325, 'queries': 1718},
               '2010-03-28': {'date': '2010-03-28', 'unique_users': 340, 'queries': 1815},
               '2010-03-30': {'date': '2010-03-30', 'unique_users': 329, 'queries': 1891}}

    It's not a big dictionary. But when I try to do one last thing... it craps out on me:

      for k, v in stats:
          mylist.append(v)

      too many values to unpack

    What the heck does that mean??? TOO MANY VALUES TO UNPACK.
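
    What it means: iterating a dict yields its keys only, and each key here is a 10-character string, so for k, v tries to unpack those 10 characters into 2 names. Asking for key-value pairs explicitly fixes it:

      # A dict iterates over keys; .items() yields (key, value) pairs.
      mylist = []
      for k, v in stats.items():   # .iteritems() on Python 2 avoids building a list
          mylist.append(v)

      # or simply:
      mylist = list(stats.values())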

  • Anything wrong with this MySQL query? Takes 10+ seconds to load

    - by user345426
    I have a search that is taking 10+ seconds to execute! Keep in mind it is also searching over 200,000 products in the database. I posted the EXPLAIN output and the MySQL query here.

      1 SIMPLE p   ref    PRIMARY,products_status,prod_prodid_status,product...  products_status 1 const 9048  Using where; Using temporary; Using filesort
      1 SIMPLE v   ref    PRIMARY,vendors_id,vendors_vendorid  vendors_vendorid 4 rhinomar_rhinomartnew.p.vendors_id 1
      1 SIMPLE s   ref    products_id  products_id 4 rhinomar_rhinomartnew.p.products_id 1
      1 SIMPLE pd  ref    PRIMARY,products,prod_desc_prodid_prodname  prod_desc_prodid_prodname 4 rhinomar_rhinomartnew.p.products_id 1
      1 SIMPLE p2c ref    PRIMARY,ptc_catidx  PRIMARY 4 rhinomar_rhinomartnew.p.products_id 1  Using where; Using index
      1 SIMPLE c   eq_ref PRIMARY  PRIMARY 4 rhinomar_rhinomartnew.p2c.categories_id 1  Using where

    MySQL query:

      select p.products_id, p.products_image, p.products_price, p.products_weight,
             p.products_unit_quantity, s.specials_new_products_price, s.status,
             pd.products_name, pd.products_img_alt
      from products p
      left join vendors v ON v.vendors_id = p.vendors_id
      left join specials s on s.products_id = p.products_id
      left join products_description pd on pd.products_id = p.products_id
      left join products_to_categories p2c on p2c.products_id = p.products_id
      left join categories c on c.categories_id = p2c.categories_id
      where (
              ( pd.products_name like '%apparel%' )
              or p2c.categories_id IN (773, 132, 135, 136, 119, 122, 124, 125, 126,
                                       1749, 1753, 1747, 123, 127, 130, 131, 178, 137,
                                       140, 164, 165, 166, 167, 168, 169, 832, 2045)
              or p.products_id = 'apparel'
              or p.products_model = 'apparel'
              or CONCAT(v.vendors_prefix, '-') = 'apparel'
              or CONCAT(v.vendors_prefix, '-', p.products_id) = 'apparel'
            )
        and p.products_status = '1'
        and c.categories_status = '1'
      group by p.products_id
      order by pd.products_name
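
    A hedged reading of the plan: the driving table p is filtered only on products_status (9048 rows), and the OR mix, including the leading-wildcard LIKE '%apparel%', cannot use a B-tree index, hence the temporary table and filesort. One common approach for the text predicate is a FULLTEXT index (a sketch, assuming products_description is MyISAM, as FULLTEXT requires on this MySQL vintage):

      ALTER TABLE products_description ADD FULLTEXT (products_name);

      -- Then replace the leading-wildcard predicate
      --   pd.products_name like '%apparel%'
      -- with an indexed full-text lookup:
      --   MATCH (pd.products_name) AGAINST ('apparel')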

  • How to avoid temporaries when copying a weakly typed object

    - by Truncheon
    Hi. I'm writing a series of classes that inherit from a base class using virtual functions. They are INT, FLOAT and STRING objects that I want to use in a scripting language. I'm trying to implement weak typing, but I don't want STRING objects to return copies of themselves when used in the following way (instead I would prefer to have a reference returned, which can be used in copying):

      a = "hello ";
      b = "world";
      c = a + b;

    I have written the following code as a mock example:

      #include <iostream>
      #include <string>
      #include <cstdio>
      #include <cstdlib>

      std::string dummy("<int object cannot return string reference>");

      struct BaseImpl
      {
          virtual ~BaseImpl() {} // needed: Base deletes through a BaseImpl*
          virtual bool is_string() = 0;
          virtual int get_int() = 0;
          virtual std::string get_string_copy() = 0;
          virtual std::string const& get_string_ref() = 0;
      };

      struct INT : BaseImpl
      {
          int value;

          INT(int i = 0) : value(i) { std::cout << "constructor called\n"; }
          INT(BaseImpl& that) : value(that.get_int()) { std::cout << "copy constructor called\n"; }

          bool is_string() { return false; }
          int get_int() { return value; }
          std::string get_string_copy() { char buf[33]; sprintf(buf, "%i", value); return buf; }
          std::string const& get_string_ref() { return dummy; }
      };

      struct STRING : BaseImpl
      {
          std::string value;

          STRING(std::string s = "") : value(s) { std::cout << "constructor called\n"; }
          STRING(BaseImpl& that)
          {
              if (that.is_string())
                  value = that.get_string_ref();
              else
                  value = that.get_string_copy();
              std::cout << "copy constructor called\n";
          }

          bool is_string() { return true; }
          int get_int() { return atoi(value.c_str()); }
          std::string get_string_copy() { return value; }
          std::string const& get_string_ref() { return value; }
      };

      struct Base
      {
          BaseImpl* impl;
          Base(BaseImpl* p = 0) : impl(p) {}
          ~Base() { delete impl; }
      };

      int main()
      {
          Base b1(new INT(1));
          Base b2(new STRING("Hello world"));
          Base b3(new INT(*b1.impl));
          Base b4(new STRING(*b2.impl));

          std::cout << "\n";
          std::cout << b1.impl->get_int() << "\n";
          std::cout << b2.impl->get_int() << "\n";
          std::cout << b3.impl->get_int() << "\n";
          std::cout << b4.impl->get_int() << "\n";

          std::cout << "\n";
          std::cout << b1.impl->get_string_ref() << "\n";
          std::cout << b2.impl->get_string_ref() << "\n";
          std::cout << b3.impl->get_string_ref() << "\n";
          std::cout << b4.impl->get_string_ref() << "\n";

          std::cout << "\n";
          std::cout << b1.impl->get_string_copy() << "\n";
          std::cout << b2.impl->get_string_copy() << "\n";
          std::cout << b3.impl->get_string_copy() << "\n";
          std::cout << b4.impl->get_string_copy() << "\n";

          return 0;
      }

    It was necessary to add an if check in the STRING class to determine whether it is safe to request a reference instead of a copy. Script code:

      a = "test";
      b = a;
      c = 1;
      d = "" + c; /* not safe to request reference by standard */

    C++ code:

      STRING(BaseImpl& that)
      {
          if (that.is_string())
              value = that.get_string_ref();
          else
              value = that.get_string_copy();
          std::cout << "copy constructor called\n";
      }

    I was hoping there's a way of moving that if check into compile time, rather than run time.
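
    A hedged note on the compile-time wish: once a value sits behind a BaseImpl*, its concrete type is erased, so only a runtime test (or virtual dispatch) can distinguish STRING from INT; the check can move to compile time only where the static type is still known, for example via an extra overload. A sketch of that idea, as members of the STRING class above:

      // Exact static type known: overload resolution picks this at compile
      // time, no is_string() test needed.
      STRING(STRING const& that) : value(that.value) {}

      // Erased type: the runtime test remains necessary.
      STRING(BaseImpl& that)
      {
          value = that.is_string() ? that.get_string_ref() : that.get_string_copy();
      }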

  • Calculating Growth-Rates by applying log-differences

    - by mropa
    I am trying to transform my data.frame by calculating the log-differences of each column while controlling for the row id. So basically I would like to calculate the growth rates of each id's variables. Here is a random df with an id column, a time period column p and three variable columns:

      df <- data.frame(id = c("a","a","a","c","c","d","d","d","d","d"),
                       p = c(1,2,3,1,2,1,2,3,4,5),
                       var1 = rnorm(10, 5),
                       var2 = rnorm(10, 5),
                       var3 = rnorm(10, 5))
      df
         id p     var1     var2     var3
      1   a 1 5.375797 4.110324 5.773473
      2   a 2 4.574700 6.541862 6.116153
      3   a 3 3.029428 4.931924 5.631847
      4   c 1 5.375855 4.181034 5.756510
      5   c 2 5.067131 6.053009 6.746442
      6   d 1 3.846438 4.515268 6.920389
      7   d 2 4.910792 5.525340 4.625942
      8   d 3 6.410238 5.138040 7.404533
      9   d 4 4.637469 3.522542 3.661668
      10  d 5 5.519138 4.599829 5.566892

    Now I have written a function which does exactly what I want, BUT I had to take a detour which is possibly unnecessary and can be removed; somehow I am not able to locate the shortcut. Here is the function and the output for the posted data frame:

      fct.logDiff <- function (df) {
          df.log <- dlply(df, "id", function(x) data.frame(p = x$p, log(x[, -c(1,2)])))
          list.nalog <- llply(df.log, function(x) data.frame(p = x$p, rbind(NA, sapply(x[,-1], diff))))
          ldply(list.nalog, data.frame)
      }

      fct.logDiff(df)
         id p        var1        var2        var3
      1   a 1          NA          NA          NA
      2   a 2 -0.16136569  0.46472004  0.05765945
      3   a 3 -0.41216720 -0.28249264 -0.08249587
      4   c 1          NA          NA          NA
      5   c 2 -0.05914281  0.36999681  0.15868378
      6   d 1          NA          NA          NA
      7   d 2  0.24428771  0.20188025 -0.40279188
      8   d 3  0.26646102 -0.07267311  0.47041227
      9   d 4 -0.32372771 -0.37748866 -0.70417351
      10  d 5  0.17405309  0.26683625  0.41891802

    The trouble is due to the added NA rows. I don't want to collapse the frame and reduce it, which would be done automatically by the diff() function. So I had 10 rows in my original frame and am keeping the same number of rows after the transformation. In order to keep the same length I had to add some NAs. I took a detour by transforming the data.frame into a list, adding the NAs, and afterwards transforming the list back into a data.frame. That looks tedious. Any ideas to avoid the data.frame-list-data.frame class transformation and optimize the function?
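
    A sketch of the shortcut, assuming plyr is loaded: ddply() already splits by id and reassembles a data.frame, so the list detour can be dropped by padding with NA inside a single per-group function:

      library(plyr)

      # One pass: log-diff each variable column per id, padding with a
      # leading NA row so every group keeps its original length.
      fct.logDiff2 <- function(df) {
          ddply(df, "id", function(x)
              data.frame(p = x$p, rbind(NA, sapply(log(x[, -c(1, 2)]), diff))))
      }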
