Search Results

Search found 7672 results on 307 pages for 'compiler optimization'.

Page 8 of 307

  • C# Compiler should give a warning but doesn't?

    - by Cristi Diaconescu
    Someone on my team tried fixing a 'variable not used' warning in an empty catch clause:

        try { ... }
        catch (Exception ex) { }

    gives a warning about ex not being used. So far, so good. The fix was something like this:

        try { ... }
        catch (Exception ex) { string s = ex.Message; }

    Seeing this, I thought "Just great, so now the compiler will complain about s not being used." But it doesn't! There are no warnings on that piece of code and I can't figure out why. Any ideas? PS. I know catch-all clauses that mute exceptions are a bad thing, but that's a different topic. I also know the initial warning is better removed by doing something like this; that's not the point either:

        try { ... } catch (Exception) { }

    or

        try { ... } catch { }
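
    For reference, a compilable sketch of the two cases being contrasted (the class and helper names here are illustrative, not the poster's); the difference appears to hinge on the 'assigned but its value is never used' warning (CS0219) firing only when the initializer is a compile-time constant:

        using System;

        class WarningDemo
        {
            static void DoWork() { throw new InvalidOperationException(); }

            static void Main()
            {
                try { DoWork(); }
                catch (Exception ex) { }        // CS0168: 'ex' is declared but never used

                try { DoWork(); }
                catch (Exception ex)
                {
                    string s = ex.Message;      // no warning: initializer is not a constant,
                                                // so the compiler assumes it may matter
                    int answer = 42;            // CS0219: assigned a constant, never used
                }
            }
        }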

    Read the article

  • Could replacing statements with expressions using the C++ comma operator allow more compiler optimizations?

    - by Gabriel Cuvillier
    The C++ comma operator is used to chain individual expressions, yielding the value of the last executed expression as the result. For example, the skeleton code (6 statements, 6 expressions):

        step1;
        step2;
        if (condition) { step3; return step4; }
        else return step5;

    may be rewritten as (1 statement, 6 expressions):

        return step1, step2, condition ? (step3, step4) : step5;

    I noticed that it is not possible to perform step-by-step debugging of such code, as the expression chain seems to be executed as a whole. Does that mean the compiler is able to perform special optimizations which are not possible with the traditional statement approach (especially if the steps are const or inline)? Note: I'm not talking about the coding-style merit of this way of expressing a sequence of expressions, just about the possible optimizations allowed by replacing statements with expressions.

    Read the article

  • Code crashing compiler: main() returning a struct instead of an int

    - by AndrejaKo
    Hi! I'm experimenting with a piece of C code. Can anyone tell me why VC 9.0 with SP1 is crashing for me? Oh, and the code is meant to be an example used in a discussion of why something like void main(void) is evil.

        struct foo { int i; double d; }
        main(double argc, struct foo argv)
        {
            struct foo a;
            a.d = 0;
            a.i = 0;
            return a.i;
        }

    If I put return a; instead, the compiler doesn't crash.

    Read the article

  • How do I use compiler intrinsic __fmul_?

    - by Eric Thoma
    I am writing a massively parallel GPU application. I have been optimizing it by hand. I received a 20% performance increase with __fdividef(x, y), and according to the CUDA C Programming Guide (section C.2.1), using similar functions for multiplication and addition is also beneficial. The function is stated as "__fmul_[rn,rz,ru,rd](x,y)", whereas __fdividef(x,y) was not stated with anything in brackets. I was wondering, what are those brackets? If I run the simple code int t = __fmul_(5,4); I get a compiler error about how __fmul_ is undefined. I have the CUDA runtime included, so I don't think it is a setup thing; rather, it is something to do with those square brackets. How do I correctly use this function? Thank you.

    Read the article

  • C++ -malign-double compiler flag

    - by Martin
    I need some help on compiler flags in C++. I'm using a library that is a port to Linux from Windows, which has to be compiled with the -malign-double flag, "for Win32 compatibility". It's my understanding that this means I absolutely have to compile my own code with this flag as well? How about other .so shared libraries, do they have to be recompiled with this flag as well? If so, is there any way around this? I'm a Linux newbie (and a C++ one), so even though I tried to recompile all the libraries I'm using for my project, it was just too complicated to recursively find the source for all the libraries and the libraries they depend on, and recompile everything.

    Read the article

  • .NET Compiler Optimizations

    - by Dested
    I am writing an application that needs to run incredibly fast. The application creates and destroys memory in creative ways throughout its run, and it works just fine. I am wondering what compiler optimizations occur so I can try to build to them. One trick off hand: the CLR handles arrays much faster than lists, so if you need to handle a ton of elements in a List, you may be better off calling ToArray() and working on that rather than calling ElementAt() again and again. I am wondering if there is any sort of comprehensive list for this kind of thing, or maybe the SO community can create one :-)
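
    For illustration, a small sketch of the ToArray() trick described above (the collection and filter are made up for this example); the effect is clearest when the source sequence is not an IList, since ElementAt(i) then has to walk the sequence on every call:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class ToArrayDemo
        {
            static void Main()
            {
                List<int> items = Enumerable.Range(0, 100000).ToList();
                IEnumerable<int> evens = items.Where(x => x % 2 == 0);   // not an IList

                // Repeated ElementAt(i): each call re-walks the filtered sequence.
                long slow = 0;
                for (int i = 0; i < 1000; i++)
                    slow += evens.ElementAt(i);

                // Materialize once, then use cheap array indexing.
                int[] arr = evens.ToArray();
                long fast = 0;
                for (int i = 0; i < 1000; i++)
                    fast += arr[i];

                Console.WriteLine(slow == fast);   // True: same result, very different cost
            }
        }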

    Read the article

  • Possible compiler bug in MSVC12 (VS2013) with designated initializer

    - by diapir
    Using VS2013 Update 2, I've stumbled on a strange error message:

        // test.c
        int main(void)
        {
            struct foo { int i; float f; };
            struct bar { unsigned u; struct foo foo; double d; };

            struct foo some_foo = { .i = 1, .f = 2.0 };

            struct bar some_bar = {
                .u = 3,
                // error C2440 : 'initializing' : cannot convert from 'foo' to 'int'
                .foo = some_foo,
                .d = 4.0
            };

            // Works fine
            some_bar.foo = some_foo;

            return 0;
        }

    Both GCC and Clang accept it. Am I missing something or does this piece of code expose a compiler bug? EDIT: Duplicate of "Initializing struct within another struct using designated initializer causes compile error in Visual Studio 2013".

    Read the article

  • C++ performance, optimizing compiler, empty function in .cpp

    - by Dodo
    I have a very basic class, call it Basic, used in nearly all other files in a bigger project. In some cases there needs to be debug output, but in release mode this should not be enabled and should be a NOOP. Currently there is a define in the header which switches a macro on or off, depending on the setting, so this is definitely a NOOP when switched off. I'm wondering, if I have the following code, whether a compiler (MSVS / gcc) is able to optimize out the function call, so that it is again a NOOP. (By doing that, the switch could be in the .cpp and switching would be much faster, compile/link-time wise.)

        // Header
        void printDebug(const Basic* p);

        class Basic {
            Basic() {
                simpleSetupCode;
                // this should be a NOOP in release,
                // but the constructor could be inlined
                printDebug(this);
            }
        };

        // Source
        // PRINT_DEBUG defined somewhere else or here
        #if PRINT_DEBUG
        void printDebug(const Basic* p) {
            // Lengthy debug print
        }
        #else
        void printDebug(const Basic* p) {}
        #endif

    Read the article

  • Compiler: Translation to assembly

    - by sub
    I've written an interpreter for my experimental language and now I want to move on and write a small compiler for it. It will probably take the source, go through the same steps as the interpreter (tokenizer, parser) and then translate the source to assembly. Now my questions: Can I expect that every command in my language can be translated 1:1 to a bunch of assembly instructions? What I mean is whether I will have to completely rework the whole input program or whether it can just be translated to assembly line by line. Which assembler should I use as the output format?

    Read the article

  • C++0x optimizing compiler quality

    - by aaa
    Hello. I do some heavy number crunching, and for me floating-point performance is very important. I like the performance of the Intel compiler very much and am quite content with the quality of assembly it produces. I am thinking at some point of trying C++0x, mainly for the sugar parts like auto, initializer lists, etc., but also lambdas. At this point I use those features in regular C++ by means of Boost. How good is the assembly code that C++0x compilers generate, specifically the Intel and gcc compilers? Do they produce SSE code? Is the performance comparable to C++? Are there any benchmarks? My Google search did not reveal much. Thank you.

    Read the article

  • C# logic order and compiler behavior

    - by Terrapin
    In C# (and feel free to answer for other languages), what order does the runtime evaluate a logic statement in? Example:

        DataTable myDt = new DataTable();
        if (myDt != null && myDt.Rows.Count > 0)
        {
            //do some stuff with myDt
        }

    Which statement does the runtime evaluate first: myDt != null or myDt.Rows.Count > 0? Is there a time when the compiler would ever evaluate the statement backwards? Perhaps when an "OR" operator is involved?
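
    For reference, a small probe that makes the order visible (the helper methods are purely illustrative); with && the left operand is evaluated first, and the right operand is skipped when the left is false:

        using System;
        using System.Data;

        class ShortCircuitDemo
        {
            static bool NotNull(DataTable t) { Console.WriteLine("checked left");  return t != null; }
            static bool HasRows(DataTable t) { Console.WriteLine("checked right"); return t.Rows.Count > 0; }

            static void Main()
            {
                DataTable myDt = null;

                // Prints only "checked left": && short-circuits, so the right-hand
                // side (which would throw on a null table) is never evaluated.
                if (NotNull(myDt) && HasRows(myDt))
                    Console.WriteLine("has rows");
            }
        }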

    Read the article

  • Source-to-source compiler framework wanted

    - by cheungcc_2000
    Dear all, I used to use OpenC++ (http://opencxx.sourceforge.net/opencxx/html/overview.html) to perform code generation like:

        // Source:
        class MyKeyword A {
        public:
            void myMethod(inarg double x, inarg const std::vector<int>& y, outarg double& z);
        };

        // Generated:
        class A {
        public:
            void myMethod(const string& x, double& y);
            // generated method below:
            void _myMethod(const string& serializedInput, string& serializedOutput) {
                double x;
                std::vector<int> y;
                // deserialize x and y from serializedInput
                double z;
                myMethod(x, y, z);
            }
        };

    This kind of code generation directly matches the use case in the tutorial of OpenC++ (http://www.csg.is.titech.ac.jp/~chiba/opencxx/tutorial.pdf), by writing a meta-level program that handles "MyKeyword", "inarg" and "outarg" and performs the code generation. However, OpenC++ is somewhat out of date and inactive now; my code generator only works on g++ 3.2 and it triggers errors when parsing header files from newer g++ versions. I have looked at VivaCore, but it does not provide the infrastructure for compiling a meta-level program. I'm also looking at LLVM, but I cannot find documentation that walks me through my source-to-source compilation use case. I'm also aware of the ROSE compiler framework, but I'm not sure whether it suits my usage, whether its proprietary C++ front-end binary can be used in a commercial product, or whether a Windows version is available. Any comments and pointers to specific tutorials/papers/documentation are much appreciated.

    Read the article

  • Strange VS2005 compile errors: unable to locate resource file (because the compiler keeps deleting it)

    - by Velika
    I am getting the following error in a very simple class library: Error 1 Unable to copy file "obj\Debug\SMIT.SysAdmin.BusinessLayer.Resources.resources" to "obj\Debug\SMIT.SysAdmin.BusinessLayer.SMIT.SysAdmin.BusinessLayer.Resources.resources". Could not find file 'obj\Debug\SMIT.SysAdmin.BusinessLayer.Resources.resources'. SMIT.SysAdmin.BusinessLayer Going to the Project Properties > Resources tab, I see that I defined no resources. Still, I tried to delete the resource file and recreate it by going to the Resources tab. When I recompile, I still get the same error. Why is it even looking for a resource file? I define no resources on the project properties tab and added no new resource file items. Any suggestions of things to try? Update: I found the missing file in an old backup. I copied it to the location where the compiler expected it, and then successfully recompiled the project that previously had compile-time errors. However, when I rebuild the entire solution, it deletes the file that I previously restored and I'm back where I started. My solution contains several projects (maybe 10 or so). Could VS 2005 be having a problem determining dependencies and the proper order in which to compile these projects?

    Read the article

  • WebSphere Application Server EJB Optimization

    - by Chris Aldrich
    We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with the EJB 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters. Each cluster will have two nodes. Our application, or our system as I should rather say, comes in two or three parts. Part 1: An EAR deployed to one cluster that contains 3rd party vendor code combined with customization code. Their code is EJB 2.0 compliant and has a lot of remote home interfaces. Part 2: An EAR deployed to the same cluster as the first EAR. This EAR contains EJB 3s that make calls into the EJB 2s supplied by the vendor and the custom code. These EJB 3s are used by the JSF UI also packaged with the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients. Part 3: There may be other services that do not depend on our vendor/custom code app. These services will be EJB 3.0s and web services that are deployed to the other cluster. Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI. But if we are going across clusters and/or other cells, then the communication should be web services. That said, some of us are wondering about performance and optimizing communication for speed in our applications that will use our web services and EJBs. Right now most EJBs are exposed as remote (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering if WAS does any optimizations between apps in the same node/cluster space. If two apps are installed in the same area and they call each other via a remote home interface, is WAS smart enough to make it a local call? Are there other optimization techniques? Should we consider them? Should we not? What are the costs/benefits? Here is the question from one of our team members, as sent in their email: Supposing we develop our EJBs as remote EJBs, where our UI controller code is talking to our EXT Java services via EJB 3, what are our options for performance optimization when both the EJB server and client are running in the same container? As one point of reference, Google has given me some very old WebSphere performance tuning documentation from 2000 that explains a tuning configuration you can set to enable call by reference for EJB communication when they're in the same application server JVM. It states the following: Because EJBs are inherently location independent, they use a remote programming model. Method parameters and return values are serialized over RMI-IIOP and returned by value. This is the intrinsic RMI "Call By Value" model. WebSphere provides the "No Local Copies" performance optimization for running EJBs and clients (typically servlets) in the same application server JVM. The "No Local Copies" option uses "Call By Reference" and does not create local proxies for called objects when both the client and the remote object are in the same process. Depending on your workload, this can result in a significant overhead savings. Configure "No Local Copies" by adding the following two command line parameters to the application server JVM:

        -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util
        -Dcom.ibm.CORBA.iiop.noLocalCopies=true

    CAUTION: The "No Local Copies" configuration option improves performance by changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM. One side effect of this is that Java-object-derived (non-primitive) method parameters can actually be changed by the called enterprise bean. We will also be using Process Server 6.2 and WESB 6.2 in the future. Any ideas? Recommendations? Thanks

    Read the article

  • Specific compiler flags for specific files in Xcode

    - by Jasarien
    I've been tasked to work on a project that has some confusing attributes. The project is of the nature that it won't compile for the iPhone Simulator and the iPhone device with the same compile settings. I think it has to do with needing to be specifically compiled for x86 or armv6/7 depending on the target platform. The project's build settings, when viewed in Xcode's Build Settings view, don't let me set specific compiler flags for specific files. However, the previous developer that worked on this project has somehow declared the line:

        CE7FEB5710F09234004DE356 /* MyFile.m in Sources */ = {isa = PBXBuildFile; fileRef = CE7FEB5510F09234004DE356 /* MyFile.m */; settings = {COMPILER_FLAGS = "-fasm-blocks -marm -mfpu=neon"; }; };

    Is there any way to do this without editing the project file by hand? I know that editing the project file can result in breaking it completely, so I'd rather not do that, as I obviously don't know as much as the previous developer. So to clarify, the question is: The build fails when compiling for the simulator unless I remove the -fasm-blocks flag. The build fails when compiling for the device unless I add the -fasm-blocks flag. Is there a way to set this flag per file without editing the project file by hand?

    Read the article

  • convincing C# compiler that execution will stop after a member returns

    - by Sarah Vessels
    I don't think this is currently possible, or even a good idea, but it's something I was thinking about just now. I use MSTest for unit testing my C# project. In one of my tests, I do the following:

        MyClass instance;
        try
        {
            instance = getValue();
        }
        catch (MyException ex)
        {
            Assert.Fail("Caught MyException");
        }
        instance.doStuff(); // Use of unassigned local variable 'instance'

    To make this code compile, I have to assign a value to instance either at its declaration or in the catch block. However, Assert.Fail will never, to the best of my knowledge, allow execution to proceed past it, hence instance will never be used without a value. Why is it then that I must assign a value to it? If I change the Assert.Fail to something like throw ex, the code compiles fine, I assume because the compiler knows the exception will keep execution from reaching a point where instance would be used uninitialized. So is it a case of runtime versus compile-time knowledge about where execution will be allowed to proceed? Would it ever be reasonable for C# to have some way of saying that a member, in this case Assert.Fail, will never allow execution to continue after it is called? Maybe that could be in the form of a method attribute. Would this be useful, or an unnecessary complexity for the compiler?
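
    For illustration, a compilable sketch of the contrast the poster describes (MyClass, MyException and the helpers are stand-ins for the poster's types); definite-assignment analysis only trusts constructs the language itself guarantees cannot fall through, such as a throw at the end of the catch block:

        using System;

        class MyException : Exception { }
        class MyClass { public void DoStuff() { } }

        class Demo
        {
            static MyClass GetValue() { return new MyClass(); }

            static void Main()
            {
                MyClass instance;
                try
                {
                    instance = GetValue();
                }
                catch (MyException)
                {
                    // A call like Assert.Fail(...) here leaves CS0165 in place:
                    // the compiler cannot know the callee always throws.
                    throw; // rethrowing ends this path, so 'instance' is definitely
                           // assigned on every path that reaches the next line
                }
                instance.DoStuff(); // compiles: no path falls through unassigned
            }
        }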

    Read the article

  • How do you determine what is *really* causing your compiler error?

    - by ML
    Hi all, I am porting a very large code base and I am having more difficulty with the old code. For example, this causes a compiler error:

        inline CP_M_ReferenceCounted *
        FrAssignRef(CP_M_ReferenceCounted * & to, CP_M_ReferenceCounted * from)
        {
            if (from) from->AddReference();
            if (to) to->RemoveReference();
            to = from;
            return to;
        }

    The error is: error: expected initializer before '*' token. How do I know what this is? I looked up inline member functions to be sure I understood, and I don't think the inlining is the cause, but I am not sure what is. Another example:

        template <class eachClass>
        eachClass FrReferenceIfClass(FxRC * ptr)
        {
            eachClass getObject = dynamic_cast<eachClass>(ptr);
            if (getObject) getObject->AddReference();
            return getObject;
        }

    The error is: error: template declaration of 'eachClass FrReferenceIfClass'. That is all. How do I decide what this is? I am admittedly rusty with templates.

    Read the article

  • Should a new language compiler target the JVM?

    - by Pindatjuh
    I'm developing a new language. My initial target was to compile to native x86 for the Windows platform, but now I am in doubt. I've seen some new languages target the JVM (most notably Scala and Clojure). Of course it's not possible to port every language easily to the JVM; doing so may lead to small changes to the language and its design. That's the reason behind this doubt, and thus this question: Is targeting the JVM a good idea when creating a compiler for a new language, or should I stick with x86? I have experience in generating JVM bytecode. Are there any workarounds for the JVM's GC? The language has deterministic implicit memory management. How do I produce JIT-compatible bytecode, such that it will get the highest speedup? Is it similar to compiling for IA-32, such as the 4-1-1 µops pattern on the Pentium? I can imagine some advantages (please correct me if I'm wrong): JVM bytecode is easier than x86. Like x86 communicates with Windows, the JVM communicates with the Java Foundation Classes, to provide I/O, threading, GUI, etc. Implementing "lightweight" threads; I've seen a very clever implementation of this at http://www.malhar.net/sriram/kilim/. Most advantages of the Java Runtime (portability, etc.). The disadvantages, as I imagine them: Less freedom? On x86 it'll be easier to create low-level constructs, while the JVM is a higher-level (more abstract) processor. Most disadvantages of the Java Runtime (no native dynamic typing, etc.).

    Read the article

  • How to document and teach others "optimized beyond recognition" computationally intensive code?

    - by rwong
    Occasionally there is the 1% of code that is computationally intensive enough to need the heaviest kind of low-level optimization. Examples are video processing, image processing, and all kinds of signal processing in general. The goals are to document, and to teach, the optimization techniques, so that the code does not become unmaintainable and prone to removal by newer developers. (*) (*) Notwithstanding the possibility that the particular optimization is completely useless in some unforeseeable future CPU, such that the code will be deleted anyway. Considering that software offerings (commercial or open source) retain their competitive advantage by having the fastest code and making use of the newest CPU architecture, software writers often need to tweak their code to make it run faster while getting the same output for a certain task, whilst tolerating a small amount of rounding error. Typically, a software writer can keep many versions of a function as documentation of each optimization / algorithm rewrite that takes place. How does one make these versions available for others to study their optimization techniques?

    Read the article

  • Issues in Convergence of Sequential minimal optimization for SVM

    - by Amol Joshi
    I have been working on Support Vector Machines for about 2 months now. I have coded the SVM myself, and for the optimization problem of the SVM I have used Sequential Minimal Optimization (SMO) by John Platt. Right now I am in the phase where I am going to grid search to find the optimal C value for my dataset. (Please find details of my project application and dataset here: http://stackoverflow.com/questions/2284059/svm-classification-minimum-number-of-input-sets-for-each-class) I have successfully checked my custom implemented SVM's accuracy for C values ranging from 2^0 to 2^6. But now I am having some issues regarding the convergence of the SMO for C = 128. I have tried to find the alpha values for C = 128 and it takes a long time before it actually converges and successfully gives the alpha values. The time taken for the SMO to converge is about 5 hours for C = 100. That is huge, I think (because SMO is supposed to be fast), though I'm getting good accuracy. I am screwed right now because I cannot test the accuracy for higher values of C. I am actually displaying the number of alphas changed in every pass of SMO and getting 10, 13, 8... alphas changing continuously. The KKT conditions assure convergence, so what weird thing is happening here? Please note that my implementation works fine for C <= 100 with good accuracy, though the execution time is long. Please give me inputs on this issue. Thank you and cheers.

    Read the article

  • Am I understanding premature optimization correctly?

    - by Ed Mazur
    I've been struggling with an application I'm writing and I think I'm beginning to see that my problem is premature optimization. The perfectionist side of me wants to make everything optimal and perfect the first time through, but I'm finding this is complicating the design quite a bit. Instead of writing small, testable functions that do one simple thing well, I'm leaning towards cramming in as much functionality as possible in order to be more efficient. For example, I'm avoiding multiple trips to the database for the same piece of information at the cost of my code becoming more complex. One part of me wants to just not worry about redundant database calls. It would make it easier to write correct code and the amount of data being fetched is small anyway. The other part of me feels very dirty and unclean doing this. :-) I'm leaning towards just going to the database multiple times, which I think is the right move here. It's more important that I finish the project and I feel like I'm getting hung up because of optimizations like this. My question is: is this the right strategy to be using when avoiding premature optimization?

    Read the article

  • ASP.NET Web Optimization - confusion about loading order

    - by Ciel
    Using the ASP.NET Web Optimization framework, I am attempting to load some JavaScript files. It works fine, except I am running into a peculiar situation with either the loading order, the loading speed, or the execution. I cannot figure out which. Basically, I am using the Ace code editor for JavaScript, and I also want to include its autocompletion package. This requires two files:

        /ace.js
        /ext-language_tools.js

    This isn't an issue: if I load both of these files the normal way (with <script> tags) it works fine. But when I try to use the Web Optimization bundles, it seems as if something goes wrong. Trying this out...

        bundles.Add(new ScriptBundle("~/bundles/js")
            .Include("~/js/ace.js")
            .Include("~/js/ext-language_tools.js"));

    and then in the view...

        @Scripts.Render("~/bundles/js")

    I get the error "ace is not defined". This means that the ace.js file hasn't run, or hasn't loaded, because if I break it apart into two bundles, it starts working:

        bundles.Add(new ScriptBundle("~/bundles/js")
            .Include("~/js/ace.js"));

        bundles.Add(new ScriptBundle("~/bundles/js/language_tools")
            .Include("~/js/ext-language_tools.js"));

    Can anyone explain why this would behave in this fashion?

    Read the article

  • Memory optimization while downloading

    - by lboregard
    Hello all. I have the following piece of code that I'm looking to optimize, since I'm consuming gobs of memory and this routine is heavily used. The first optimization would be to move the StringBuilder construction out of the download routine and make it a field of the class; then I would clear it inside the routine. Can you please suggest any other optimization, or point me in the direction of some resources that could help me with this (web articles, books, etc.)? I'm thinking about replacing the StringBuilder with a fixed (much larger) size buffer, or perhaps creating a larger-sized StringBuilder. Thanks in advance.

        StreamWriter _writer;
        StreamReader _reader;

        public string Download(string msgId)
        {
            _writer.WriteLine("BODY <" + msgId + ">");
            string response = _reader.ReadLine();
            if (!response.StartsWith("222"))
                return null;

            bool done = false;
            StringBuilder body = new StringBuilder(256 * 1024);
            do
            {
                response = _reader.ReadLine();
                if (OnProgress != null)
                    OnProgress(response.Length);

                if (response == ".")
                {
                    done = true;
                }
                else
                {
                    if (response.StartsWith(".."))
                        response = response.Remove(0, 1);
                    body.Append(response);
                    body.Append("\r\n");
                }
            } while (!done);

            return body.ToString();
        }
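
    For illustration, a minimal sketch of the reuse the poster proposes (the class name and constructor are mine, and the OnProgress callback is omitted for brevity): allocate the StringBuilder once as a field and reset it on each call instead of constructing a new 256 KB buffer per download.

        using System.IO;
        using System.Text;

        class BodyDownloader
        {
            private readonly StreamWriter _writer;
            private readonly StreamReader _reader;

            // Allocated once and reused; each call resets it instead of
            // constructing a fresh 256 KB StringBuilder.
            private readonly StringBuilder _body = new StringBuilder(256 * 1024);

            public BodyDownloader(StreamWriter writer, StreamReader reader)
            {
                _writer = writer;
                _reader = reader;
            }

            public string Download(string msgId)
            {
                _writer.WriteLine("BODY <" + msgId + ">");
                string response = _reader.ReadLine();
                if (response == null || !response.StartsWith("222"))
                    return null;

                _body.Length = 0;   // reset without reallocating the buffer
                while ((response = _reader.ReadLine()) != null && response != ".")
                {
                    if (response.StartsWith(".."))
                        response = response.Remove(0, 1);
                    _body.Append(response).Append("\r\n");
                }
                return _body.ToString();
            }
        }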

    Read the article

  • Optimizing a math computation (multiplication and summing)

    - by wiso
    Suppose you want to compute the sum of the squares of the differences of items: $\sum_{i=1}^{N-1} (x_i - x_{i+1})^2$. The simplest code (the input is std::vector<double> xs, the output is sum2) is:

        double sum2 = 0.;
        double prev = xs[0];
        for (std::vector<double>::const_iterator i = xs.begin() + 1; i != xs.end(); ++i) {
            sum2 += (prev - (*i)) * (prev - (*i)); // only 1 subtraction with compiler optimization
            prev = (*i);
        }

    I hope that the compiler does the optimization in the comment above. If N is the length of xs, you have N-1 multiplications and 2N-3 sums (a sum meaning + or -). Now suppose you already know this variable: $sum = x_1^2 + x_N^2 + 2 \sum_{i=2}^{N-1} x_i^2$. Expanding the binomial square: $\sum_{i=1}^{N-1} (x_i - x_{i+1})^2 = sum - 2 \sum_{i=1}^{N-1} x_i x_{i+1}$, so the code becomes:

        double sum2 = 0.;
        double prev = xs[0];
        for (std::vector<double>::const_iterator i = xs.begin() + 1; i != xs.end(); ++i) {
            sum2 += (*i) * prev;
            prev = (*i);
        }
        sum2 = -sum2 * 2. + sum;

    Here I have N multiplications and N-1 additions. In my case N is about 100. Well, compiling with g++ -O2 I get no speedup (I try calling the inlined function 2M times). Why?
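
    For readability, the intermediate expansion behind the rewrite, restated from the poster's identity:

        \sum_{i=1}^{N-1} (x_i - x_{i+1})^2
          = \sum_{i=1}^{N-1} \left( x_i^2 + x_{i+1}^2 \right) - 2 \sum_{i=1}^{N-1} x_i x_{i+1}
          = \underbrace{x_1^2 + x_N^2 + 2 \sum_{i=2}^{N-1} x_i^2}_{=\, sum} - 2 \sum_{i=1}^{N-1} x_i x_{i+1}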

    Read the article

  • I/O operations in compilers

    - by Aastha
    How are I/O operation constructs handled by a compiler? Like the RTL mapping for memory-related operations, which is done in a compiler at the time of target code generation, where and how exactly is the same done for I/O operations? How do the approaches differ for processors supporting MMIO versus I/O-mapped I/O? Are there any optimizations done for I/O operations in compilers?

    Read the article
