Search Results

Search found 7672 results on 307 pages for 'compiler optimization'.

Page 106/307

  • VS2010 (older) installer project - two or more objects have the same target location.

    - by Hamish Grubijan
    This installer project was created back in 2004 and has been upgraded ever since. There are two offending dll files, which produce a total of 4 errors. I have searched online for this warning message and did not find a permanent fix (I did manage to make it go away once, until I did something like a clean, or built in Release and then in Debug). I also tried cleaning and then refreshing the dependencies; the duplicated entries are still in there. I also did not find a good explanation for what this error means. Additional warnings are of this nature:

        Warning 36: The version of the .NET Framework launch condition '.NET Framework 4' does not match the selected .NET Framework bootstrapper package. Update the .NET Framework launch condition to match the version of the .NET Framework selected in the Prerequisites Dialog Box.

    So, where is this Prerequisites dialog box? I want to make both settings agree on .NET 4.0; I'm just having a hard time locating them.

    Read the article

  • Nested loop traversing arrays

    - by alecco
    There are 2 very big series of elements, the second 100 times bigger than the first. For each element of the first series, there are 0 or more elements in the second series. This can be traversed and processed with 2 nested loops, but the unpredictable number of matching elements for each member of the first array makes things very, very slow. The actual processing of the 2nd series of elements involves a logical and (&) and a population count. I couldn't find good optimizations using C, but I am considering doing inline asm, doing rep* mov* or similar for each element of the first series and then doing batch processing of the matching bytes of the second series, perhaps in buffers of 1MB or something. But the code would get quite messy. Does anybody know of a better way? C preferred but x86 ASM OK too. Many thanks! Sample/demo code with the simplified problem; the first series are "people" and the second series are "events", for clarity's sake. (The original problem is actually 100m and 10,000m entries!)

        #include <stdio.h>
        #include <stdint.h>

        #define PEOPLE 1000000    // 1m
        struct Person {
            uint8_t age;          // Filtering condition
            uint8_t cnt;          // Number of events for this person in E
        } P[PEOPLE];              // Each has 0 or more bytes with bit flags

        #define EVENTS 100000000  // 100m
        uint8_t P1[EVENTS];       // Property 1 flags
        uint8_t P2[EVENTS];       // Property 2 flags

        void init_arrays() {
            for (int i = 0; i < PEOPLE; i++) {  // just some stuff
                P[i].age = i & 0x07;
                P[i].cnt = i % 220;             // assert( sum < EVENTS );
            }
            for (int i = 0; i < EVENTS; i++) {
                P1[i] = i % 7;   // just some stuff
                P2[i] = i % 9;   // just some other stuff
            }
        }

        int main(int argc, char *argv[]) {
            uint64_t sum = 0, fcur = 0;
            int age_filter = 7;  // just some
            init_arrays();       // Init P, P1, P2
            for (int64_t p = 0; p < PEOPLE; p++)
                if (P[p].age < age_filter)
                    for (int64_t e = 0; e < P[p].cnt; e++, fcur++)
                        sum += __builtin_popcount( P1[fcur] & P2[fcur] );
                else
                    fcur += P[p].cnt;  // skip this person's events
            printf("(dummy %ld %ld)\n", sum, fcur );
            return 0;
        }

        gcc -O5 -march=native -std=c99 test.c -o test
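
    One direction worth considering (a minimal sketch, not the poster's code; chunk_popcount is a name introduced here, and it assumes the same P, P1 and P2 layout as the demo above): since each person's events occupy a contiguous range of P1/P2, the inner loop can consume that range 8 bytes at a time with a 64-bit popcount, leaving only a short scalar tail and far fewer hard-to-predict branches.

        #include <string.h>
        #include <stdint.h>

        static inline uint64_t chunk_popcount(const uint8_t *a, const uint8_t *b, uint32_t len)
        {
            uint64_t sum = 0;
            uint32_t i = 0;
            for (; i + 8 <= len; i += 8) {       // 8 event bytes per iteration
                uint64_t x, y;
                memcpy(&x, a + i, 8);            // unaligned-safe 64-bit loads
                memcpy(&y, b + i, 8);
                sum += __builtin_popcountll(x & y);
            }
            for (; i < len; i++)                 // scalar tail (< 8 bytes)
                sum += __builtin_popcount(a[i] & b[i]);
            return sum;
        }

        // Possible use inside the outer loop of the demo program:
        //     if (P[p].age < age_filter)
        //         sum += chunk_popcount(P1 + fcur, P2 + fcur, P[p].cnt);
        //     fcur += P[p].cnt;   // always advance past this person's events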

    Read the article

  • Reflector error or optimisation?

    - by David_001
    Long story short: I used Reflector on the System.Security.Util.Tokenizer class, and there are loads of goto statements in there. Here's a brief example snippet:

        Label_0026:
            if (this._inSavedCharacter != -1) {
                num = this._inSavedCharacter;
                this._inSavedCharacter = -1;
            } else {
                switch (this._inTokenSource) {
                case TokenSource.UnicodeByteArray:
                    if ((this._inIndex + 1) < this._inSize) { break; }
                    stream.AddToken(-1);
                    return;
                case TokenSource.UTF8ByteArray:
                    if (this._inIndex < this._inSize) { goto Label_00CF; }
                    stream.AddToken(-1);
                    return;
                case TokenSource.ASCIIByteArray:
                    if (this._inIndex < this._inSize) { goto Label_023C; }
                    stream.AddToken(-1);
                    return;
                case TokenSource.CharArray:
                    if (this._inIndex < this._inSize) { goto Label_0272; }
                    stream.AddToken(-1);
                    return;
                case TokenSource.String:
                    if (this._inIndex < this._inSize) { goto Label_02A8; }
                    stream.AddToken(-1);
                    return;
                case TokenSource.NestedStrings:
                    if (this._inNestedSize == 0) { goto Label_030D; }
                    if (this._inNestedIndex >= this._inNestedSize) { goto Label_0306; }
                    num = this._inNestedString[this._inNestedIndex++];
                    goto Label_0402;
                default:
                    num = this._inTokenReader.Read();
                    if (num == -1) { stream.AddToken(-1); return; }
                    goto Label_0402;
                }
                num = (this._inBytes[this._inIndex + 1] << 8) + this._inBytes[this._inIndex];
                this._inIndex += 2;
            }
            goto Label_0402;

        Label_00CF:
            num = this._inBytes[this._inIndex++];
            if ((num & 0x80) != 0) {
                switch (((num & 240) >> 4)) {
                case 8: case 9: case 10: case 11:
                    throw new XmlSyntaxException(this.LineNo);
                case 12: case 13:
                    num &= 0x1f;
                    num3 = 2;
                    break;
                case 14:
                    num &= 15;
                    num3 = 3;
                    break;
                case 15:
                    throw new XmlSyntaxException(this.LineNo);
                }
                if (this._inIndex >= this._inSize) {
                    throw new XmlSyntaxException(this.LineNo, Environment.GetResourceString("XMLSyntax_UnexpectedEndOfFile"));
                }
                byte num2 = this._inBytes[this._inIndex++];
                if ((num2 & 0xc0) != 0x80) { throw new XmlSyntaxException(this.LineNo); }
                num = (num << 6) | (num2 & 0x3f);
                if (num3 != 2) {
                    if (this._inIndex >= this._inSize) {
                        throw new XmlSyntaxException(this.LineNo, Environment.GetResourceString("XMLSyntax_UnexpectedEndOfFile"));
                    }
                    num2 = this._inBytes[this._inIndex++];
                    if ((num2 & 0xc0) != 0x80) { throw new XmlSyntaxException(this.LineNo); }
                    num = (num << 6) | (num2 & 0x3f);
                }
            }
            goto Label_0402;

        Label_023C:
            num = this._inBytes[this._inIndex++];
            goto Label_0402;

        Label_0272:
            num = this._inChars[this._inIndex++];
            goto Label_0402;

        Label_02A8:
            num = this._inString[this._inIndex++];
            goto Label_0402;

        Label_0306:
            this._inNestedSize = 0;

    I essentially wanted to know how the class worked, but the number of gotos makes it impossible. Arguably something like a Tokenizer class needs to be heavily optimised, so my question is: is Reflector getting it wrong, or is goto an optimisation for this class?

    Read the article

  • MySQL TEXT field performance

    - by Jonathon
    I have several TEXT and/or MEDIUMTEXT fields in each of our 1000 MySQL tables. I now know that TEXT fields are read from disk rather than held in memory when queried. Is that also true even if that field is not referenced in the query? For example, if I have a table (tbExam) with 2 fields (id int(11) and comment text) and I run SELECT id FROM tbExam, does MySQL still have to go to disk before returning results, or will it run that query in memory? I am trying to figure out if I need to reconfigure our actual db tables to switch to varchar(xxxx) or keep the text fields and reconfigure the queries.

    Read the article

  • Java API Method Run Times

    - by Mike
    Is there a good resource to get run times for standard API functions? It's somewhat confusing when trying to optimize your program. I know Java isn't made to be particularly speedy, but I can't seem to find much info on this at all. Example problem: if I am looking for a certain token in a file, is it faster to scan each line using string.contains(...), or to bring in say 100 or so lines, concatenate them into a local string, and then perform contains on that chunk?

    Read the article

  • How can I speed-up this loop (in C)?

    - by splicer
    Hi! I'm trying to parallelize a convolution function in C. Here's the original function, which convolves two arrays of 64-bit floats:

        void convolve(const Float64 *in1, UInt32 in1Len,
                      const Float64 *in2, UInt32 in2Len,
                      Float64 *results)
        {
            UInt32 i, j;
            for (i = 0; i < in1Len; i++) {
                for (j = 0; j < in2Len; j++) {
                    results[i+j] += in1[i] * in2[j];
                }
            }
        }

    In order to allow for concurrency (without semaphores), I created a function that computes the result for a particular position in the results array:

        void convolveHelper(const Float64 *in1, UInt32 in1Len,
                            const Float64 *in2, UInt32 in2Len,
                            Float64 *result, UInt32 outPosition)
        {
            UInt32 i, j;
            for (i = 0; i < in1Len; i++) {
                if (i > outPosition)
                    break;
                j = outPosition - i;
                if (j >= in2Len)
                    continue;
                *result += in1[i] * in2[j];
            }
        }

    The problem is, using convolveHelper slows down the code about 3.5 times (when running on a single thread). Any ideas on how I can speed up convolveHelper while maintaining thread safety?
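
    One restructuring that often comes up for this shape of problem (a minimal sketch, not the poster's code; convolveRange is a hypothetical helper, and the Float64/UInt32 typedefs are assumed from the snippets above): give each thread a contiguous slice of the output instead of a single output element. Every results[k] is still written by exactly one thread, so no semaphores are needed, but the per-call overhead is amortized over a whole slice and the inner loop stays a plain multiply-accumulate.

        void convolveRange(const Float64 *in1, UInt32 in1Len,
                           const Float64 *in2, UInt32 in2Len,
                           Float64 *results, UInt32 lo, UInt32 hi)
        {
            // accumulate only into output positions k with lo <= k < hi
            for (UInt32 i = 0; i < in1Len; i++) {
                UInt32 jStart = (lo > i) ? lo - i : 0;      // keep i + j >= lo
                UInt32 jEnd   = (hi > i) ? hi - i : 0;      // keep i + j <  hi
                if (jEnd > in2Len) jEnd = in2Len;
                for (UInt32 j = jStart; j < jEnd; j++)
                    results[i + j] += in1[i] * in2[j];
            }
        }

    Each thread would then be handed one [lo, hi) slice of the full output range [0, in1Len + in2Len - 1).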

    Read the article

  • 'umfpack.h' not found, but it's in /opt/local/include/

    - by user2924321
    I'm trying to compile a program called hiQlab on OS X 10.8:

        g++ -g -O2 -I`echo /Users/.../Documents/hiQlab/hiqlab-2006-07-20/tools/`/lua/include -I`echo /Users/.../Documents/hiQlab/hiqlab-2006-07-20/tools/`/tolua++/include -c cscmatrix.cc
        cscmatrix.cc:13:12: fatal error: 'umfpack.h' file not found
        #include "umfpack.h"

    But I just installed SuiteSparse through MacPorts, which includes UMFPACK, and umfpack.h is in fact present in the default directory /opt/local/include/. Do I need to add the path or something? Thanks!
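
    The likely fix (an assumption based on MacPorts installing under the /opt/local prefix, not something stated in the post): /opt/local/include is not on g++'s default header search path, so it has to be added explicitly with -I, and the matching library directory with -L when linking, roughly:

        g++ -g -O2 -I/opt/local/include ... -c cscmatrix.cc
        g++ ... -L/opt/local/lib -lumfpack ...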

    Read the article

  • C: Why does gcc allow char array initialization with string literal larger than array?

    - by Ashwin
        int main()
        {
            char a[7] = "Network";
            return 0;
        }

    A string literal in C is terminated internally with a nul character. So the above code should give a compilation error, since the actual length of the string literal Network is 8 and it cannot fit in a char[7] array. However, gcc (even with -Wall) on Ubuntu compiles this code without any error or warning. Why does gcc allow this and not flag it as a compilation error? gcc only gives a warning (still no error!) when the char array is smaller than the string literal even without the nul terminator. For example, it warns on:

        char a[6] = "Network";

    [Related] Visual C++ 2012 gives a compilation error for char a[7]:

        1>d:\main.cpp(3): error C2117: 'a' : array bounds overflow
        1>    d:\main.cpp(3) : see declaration of 'a'
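
    For what it's worth, this is defined behaviour in C rather than a gcc bug (a small sketch paraphrasing the initialization rule as I understand it: the terminating nul is stored only "if there is room", whereas in C++ the same declaration is ill-formed, which matches the Visual C++ error):

        #include <stdio.h>

        int main(void)
        {
            char a[7] = "Network";   /* legal C: the 7 characters fit, the '\0' is dropped */
            char b[]  = "Network";   /* size deduced as 8, terminator included */

            printf("%zu %zu\n", sizeof a, sizeof b);   /* prints: 7 8 */

            /* a is NOT a nul-terminated string: strlen(a) or printf("%s", a)
               would read past the end of the array. */
            printf("%.7s\n", a);     /* safe: the length is given explicitly */
            return 0;
        }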

    Read the article

  • Minimizing distance to a weighted grid

    - by Andrew Tomazos - Fathomling
    Let's suppose you have a 1000x1000 grid of positive integer weights W. We want to find the cell that minimizes the average weighted distance to each cell. The brute-force way to do this would be to loop over each candidate cell and calculate the distance:

        int best_x, best_y, best_dist;
        for x0 = 1:1000,
            for y0 = 1:1000,
                int total_dist = 0;
                for x1 = 1:1000,
                    for y1 = 1:1000,
                        total_dist += W[x1,y1] * sqrt((x0-x1)^2 + (y0-y1)^2);
                if (total_dist < best_dist)
                    best_x = x0;
                    best_y = y0;
                    best_dist = total_dist;

    This takes ~10^12 operations, which is too long. Is there a way to do this in or near ~10^8 or so operations?
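
    One heuristic that exploits the structure of the objective (a sketch only, with several assumptions not in the post: W is globally visible, the grid is 0-indexed, and hill climbing is taken on faith): the objective is a nonnegative-weighted sum of Euclidean distances and therefore convex in (x0, y0), so descending from the weighted centroid usually walks straight to the minimizer. Each evaluation costs O(N^2) = 10^6 operations and the walk is typically short, so this lands far below the 10^12 brute force, though it is not a guaranteed 10^8 bound.

        #include <math.h>

        #define N 1000
        extern int W[N][N];                 /* the positive integer weights */

        static double total_dist(int x0, int y0)
        {
            double t = 0.0;
            for (int x1 = 0; x1 < N; x1++)
                for (int y1 = 0; y1 < N; y1++)
                    t += W[x1][y1] * sqrt((double)((x0 - x1) * (x0 - x1) +
                                                   (y0 - y1) * (y0 - y1)));
            return t;
        }

        void find_best(int *best_x, int *best_y)
        {
            /* start near the weighted centroid, which is usually close to the optimum */
            double sw = 0, sx = 0, sy = 0;
            for (int x = 0; x < N; x++)
                for (int y = 0; y < N; y++) {
                    sw += W[x][y];
                    sx += (double)W[x][y] * x;
                    sy += (double)W[x][y] * y;
                }
            int cx = (int)(sx / sw), cy = (int)(sy / sw);
            double cur = total_dist(cx, cy);

            for (;;) {                      /* move to the best of the 4 neighbours */
                static const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
                int bx = cx, by = cy;
                double best = cur;
                for (int k = 0; k < 4; k++) {
                    int nx = cx + dx[k], ny = cy + dy[k];
                    if (nx < 0 || nx >= N || ny < 0 || ny >= N) continue;
                    double d = total_dist(nx, ny);
                    if (d < best) { best = d; bx = nx; by = ny; }
                }
                if (bx == cx && by == cy) break;   /* no neighbour improves: stop */
                cx = bx; cy = by; cur = best;
            }
            *best_x = cx;
            *best_y = cy;
        }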

    Read the article

  • How to optimize this MYSQL table?

    - by Lost_in_code
    This is for an upcoming project. I have two tables - the first one keeps track of photos, and the second one keeps track of each photo's rank.

    Photos:

        +-------+-----------+------------------+
        | id    | photo     | current_rank     |
        +-------+-----------+------------------+
        | 1     | apple     | 5                |
        | 2     | orange    | 9                |
        +-------+-----------+------------------+

    The photo rank keeps changing on a regular basis, and this is the table that tracks it:

    Ranks:

        +-------+-----------+----------+-------------+
        | id    | photo_id  | ranks    | timestamp   |
        +-------+-----------+----------+-------------+
        | 1     | 1         | 8        | *           |
        | 2     | 2         | 2        | *           |
        | 3     | 1         | 3        | *           |
        | 4     | 1         | 7        | *           |
        | 5     | 1         | 5        | *           |
        | 6     | 2         | 9        | *           |
        +-------+-----------+----------+-------------+
        * = current timestamp

    Every rank is tracked for reporting/analysis purposes. I talked to someone who has experience in this field and he told me that storing ranks like the above is the way to go. But I'm not so sure yet. The problem here is data redundancy. There are going to be tens of thousands of photos. The photo rank changes on an hourly basis (many times within minutes) for recent photos, but less frequently for older photos. At this rate the table will have millions of records within months. And since I do not have experience in working with large databases, this makes me a little nervous. I thought of this:

    Ranks:

        +-------+-----------+--------------------+
        | id    | photo_id  | ranks              |
        +-------+-----------+--------------------+
        | 1     | 1         | 8:*,3:*,7:*,5:*    |
        | 2     | 2         | 2:*,9:*            |
        +-------+-----------+--------------------+
        * = current timestamp

    That means some extra code in PHP to split the rank/time (and sort them), but that looks OK to me. Is this a correct way to optimize the table for performance? What would you recommend? Any suggestions would be great.

    Read the article

  • which one is a faster/better sql practice?

    - by artsince
    Suppose I have a 2 column table (id, flag) and id is sequential. I expect this table to contain a lot of records. I want to periodically select the first row not flagged and update it. Some of the records on the way may have already been flagged, so I want to skip them. Does it make more sense if I store the last id I flagged and use it in my select statement, like

        select * from mytable where id > my_last_id order by id asc limit 1

    or simply get the first unflagged row, like:

        select * from mytable where flagged = 'F' order by id asc limit 1

    Thank you!

    Read the article

  • Optimizing MySQL queries with IN operator

    - by Arkadiusz Kondas
    I have a MySQL database with a fairly large products table. Each product has its own id, and a categoryId field holding the id of the category the product belongs to. Now I have a query that pulls out products from given categories, such as:

        SELECT * FROM products WHERE categoryId IN ( 1, 2, 3, 4, 5, 34, 6, 7, 8, 9, 10, 11, 12 )

    Of course there is also a WHERE clause and an ORDER BY sort, but they are not the issue here. Let's say there are 250k products and over 100k visits per day. Under these conditions the slow_log records a lot of these queries with large execution times. Do you have any ideas how to optimize this? The table engine is MyISAM.

    Read the article

  • Haskell mutability in compiled state?

    - by pile of junk
    I do not know much about Haskell, but from what I have read about the mutability of computations (e.g. functions returning functions, complex monads and functions, etc.) it seems like you can do a lot of meta-programming, even at runtime. How can Haskell, if things like functions and monads are so complex, compile to machine code and retain all of this?

    Read the article

  • Any tools that can randomly generate source code according to a language grammar?

    - by wbsun
    C program source code can be parsed according to the C grammar (described as a CFG) and eventually turned into ASTs. I am wondering whether a tool exists that can do the reverse: first randomly generate many ASTs, whose tokens don't have concrete string values, just token types, according to the CFG; then generate the concrete tokens according to the tokens' regular-expression definitions. I imagine the first step as an iterative replacement of non-terminals, done randomly and limited to a certain number of iterations. The second step is just generating random strings according to regular expressions. Is there any tool that can do this?
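
    The two-step idea is easy to prototype by hand (a toy sketch for a tiny expression grammar, not an existing tool: non-terminals are expanded at random with a depth cap, then the abstract ID/NUM tokens are given concrete spellings matching simple lexical rules):

        #include <stdio.h>
        #include <stdlib.h>

        #define MAX_DEPTH 6

        /* step 2: concrete lexemes for the abstract token types */
        static void emit_id(void)  { printf("v%d ", rand() % 100); }   /* like [a-z][0-9]* */
        static void emit_num(void) { printf("%d ", rand() % 1000); }   /* like [0-9]+ */

        /* step 1: grammar  expr -> term | term op expr ;  term -> ID | NUM | ( expr ) */
        static void gen_expr(int depth);

        static void gen_term(int depth)
        {
            /* past the depth cap, only terminal alternatives are allowed */
            switch (depth >= MAX_DEPTH ? rand() % 2 : rand() % 3) {
            case 0:  emit_id();  break;
            case 1:  emit_num(); break;
            default:
                printf("( ");
                gen_expr(depth + 1);
                printf(") ");
            }
        }

        static void gen_expr(int depth)
        {
            gen_term(depth);
            if (depth < MAX_DEPTH && rand() % 2) {
                printf("%c ", "+-*/"[rand() % 4]);
                gen_expr(depth + 1);
            }
        }

        int main(void)
        {
            srand(42);
            for (int i = 0; i < 5; i++) {   /* print a few random sentences */
                gen_expr(0);
                printf("\n");
            }
            return 0;
        }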

    Read the article

  • With (or similar) statement in JQuery

    - by Salman A
    Very simple question: I want to optimize the following jQuery code with maximum readability, optimal performance and minimum fuss (fuss = declaring new variables etc):

        $(".addthis_toolbox").append('<a class="addthis_button_delicious"></a>');
        $(".addthis_toolbox").append('<a class="addthis_button_facebook"></a>');
        $(".addthis_toolbox").append('<a class="addthis_button_google"></a>');
        $(".addthis_toolbox").append('<a class="addthis_button_reddit"></a>');
        . . .

    Read the article

  • Create a java executable with Eclipse

    - by Micah
    This is a totally newbie question. I'm running Eclipse on Ubuntu. I created a test project that I want to compile to an executable (whatever the Linux equivalent is of a Windows .exe file). Here's the contents of my program:

        public class MyTest {
            public static void main(String[] args) {
                System.out.println("You passed in: " + args[0]);
            }
        }

    I want to know how to compile it and then how to execute it from the command line. Thanks!

    Read the article

  • Checking for a session variable... returns (Object reference not set to an instance of an object.)

    - by Chocol8
    In Session_Start I have:

        Sub Session_Start(ByVal sender As Object, ByVal e As EventArgs)
            Session("login") = False
        End Sub

    and in a custom .aspx page I included the following:

        Public ReadOnly Property GetLoginState() As Boolean
            Get
                If Current.Session("login") Is Nothing Then
                    Current.Session("login") = False
                End If
                Return Current.Session("login")
            End Get
        End Property

    Why am I getting the error "Object reference not set to an instance of an object." at the line If Current.Session("login") Is Nothing Then while checking the login state as follows?

        If GetLogin = False Then
            'Do something
        End if

    I mean, I have already created the instance in Session_Start... haven't I? What am I doing wrong here?

    Read the article

  • Improve C function performance with cache locality?

    - by Christoper Hans
    I have to find a diagonal difference in a matrix represented as a 2d array, and the function prototype is:

        int diagonal_diff(int x[512][512])

    I have to use a 2d array, and the data is 512x512. This is tested on a SPARC machine: my current timing is 6ms but I need to be under 2ms. Sample data:

        [3][4][5][9]
        [2][8][9][4]
        [6][9][7][3]
        [5][8][8][2]

    The difference is:

        |4-2| + |5-6| + |9-5| + |9-9| + |4-8| + |3-8| = 2 + 1 + 4 + 0 + 4 + 5 = 16

    In order to do that, I use the following algorithm:

        int i, j, result = 0;
        for (i = 0; i < 4; i++)
            for (j = 0; j < 4; j++)
                result += abs(array[i][j] - array[j][i]);
        return result;

    But this algorithm keeps accessing column, row, column, row, and so on, which makes inefficient use of the cache. Is there a way to improve my function?
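
    The usual answer to this is blocking/tiling (a minimal sketch under two assumptions not in the post: N = 512 and a 16x16 tile size): walk the matrix in small tiles so that a tile and its transposed counterpart both stay in cache while they are compared, and visit each (i, j)/(j, i) pair only once.

        #include <stdlib.h>

        #define N 512
        #define B 16                    /* tile size; tune for the target cache */

        int diagonal_diff(int x[N][N])
        {
            int result = 0;
            for (int ii = 0; ii < N; ii += B) {
                for (int jj = ii; jj < N; jj += B) {        /* upper-triangular tiles only */
                    for (int i = ii; i < ii + B; i++) {
                        /* inside a diagonal tile, start above the diagonal */
                        int jstart = (ii == jj) ? i + 1 : jj;
                        for (int j = jstart; j < jj + B; j++)
                            result += abs(x[i][j] - x[j][i]);
                    }
                }
            }
            return result;
        }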

    Read the article

  • Is it faster to compute values in a query, call a Scalar Function (decimal(28,2) datatype) 4 times,

    - by Pulsehead
    I have a handful of queries I need to write in SQL Server 2005. Each query will be calculating 4 unit cost values based on a handful of (up to 11) fields. Any time I want 1 of these 4 unit cost values, I'll want all 4. Which is quicker: computing the values in the SQL query ((a+b+c+d+e+f+g+h+i)/(j+k)), calling ComputeScalarUnitCost(datapoint.ID) 4 times, or joining to ComputeUnitCostTable(datapoint.ID) one time?

    Read the article

  • Does it make sense to replace sub-queries by join?

    - by Roman
    For example, I have a query like this:

        select col1 from t1
        where col2>0 and col1 in (select col1 from t2 where col2>0)

    As far as I understand, I can replace it with the following query:

        select t1.col1 from t1
        join (select col1 from t2 where col2>0) as t2 on t1.col1=t2.col1
        where t1.col2>0

    ADDED: In some answers I see join, in others inner join. Are both right, or are they even identical?

    Read the article

  • Why use a one-dimensional array instead of a two-dimensional array?

    - by user3869145
    I was doing some work handling a lot of information and my partner told me that I was using too many matrices to manipulate the variables of the problem. The idea was to use one-dimensional arrays int a[] instead of the two-dimensional arrays int b[][], to save memory and improve the processing speed of the algorithm. How certain is it that this change will accelerate the speed of execution or compilation of my code in C++?
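
    For context, a fixed-size int b[ROWS][COLS] is already one contiguous block, so flattening it buys little; the gain usually shows up when the "2-D array" is really a pointer-to-pointer structure whose rows are scattered across the heap. A minimal C-style sketch of the flat, row-major layout (names and sizes are illustrative only):

        #include <stdlib.h>

        #define ROWS 1000
        #define COLS 1000

        int main(void)
        {
            /* one contiguous allocation; element (i, j) lives at a[i * COLS + j] */
            int *a = malloc((size_t)ROWS * COLS * sizeof *a);
            if (!a)
                return 1;

            for (int i = 0; i < ROWS; i++)
                for (int j = 0; j < COLS; j++)
                    a[i * COLS + j] = i + j;   /* same access pattern as b[i][j] */

            free(a);
            return 0;
        }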

    Read the article
