Search Results

Search found 125 results on 5 pages for 'llvm'.

Page 1/5 | 1 2 3 4 5  | Next Page >

  • LLVM-3.1 libLLVMSupport.a undefined reference to `dladdr'

    - by user91387
    I'm trying to compile using the llvm-3.1 package. I'm running 12.04 x64 (3.2.0-26 kernel) and 12.10 (3.5.0-4) x64 with llvm-3.1 backported from quantal, then from Debian experimental. Next I tried 12.10 with the native Ubuntu llvm-3.1 package; this failed as well.

        user@system:/tmp/llvm-test# make
        compiling cpp yacc file: decaf-llvm.y
        output file: decaf-llvm
        bison -b decaf-llvm -d decaf-llvm.y
        /bin/mv -f decaf-llvm.tab.c decaf-llvm.tab.cc
        flex -odecaf-llvm.lex.cc decaf-llvm.lex
        g++ -o ./decaf-llvm decaf-llvm.tab.cc decaf-llvm.lex.cc decaf-stdlib.c `llvm-config --cppflags --ldflags --libs core jit native` -ly -ll
        /usr/lib/llvm-3.1/lib/libLLVMSupport.a(Signals.o): In function `PrintStackTrace(void*)':
        (.text+0x6c): undefined reference to `dladdr'
        /usr/lib/llvm-3.1/lib/libLLVMSupport.a(Signals.o): In function `PrintStackTrace(void*)':
        (.text+0x18f): undefined reference to `dladdr'
        collect2: error: ld returned 1 exit status
        make: *** [decaf-llvm] Error 1

    I know the code works, as I've run it fine on CentOS using llvm-3.1-6.fc18 (rpm). Google was a bit helpful with this: "On some systems, including Ubuntu 11.10, linking may fail with a message that libLLVMSupport.a in function PrintStackTrace(void*) has an undefined reference to dladdr." "Workaround is to compile LLVM with cmake specifying the following variable: -DCMAKE_EXE_LINKER_FLAGS=-ldl" http://svn.dsource.org/projects/bindings/trunk/llvm-3.0/Readme I double-checked my ldflags and everything seems OK:

        llvm-config --ldflags
        -L/usr/lib/llvm-3.1/lib -lpthread -lffi -ldl -lm

    I'm unclear on what to do next; any suggestions?
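
    A common fix for exactly this symptom (a sketch, not verified against this specific setup) is a link-order issue: with static archives, -ldl has to appear after the objects that reference dladdr, so appending it to the end of the g++ line usually resolves the undefined reference:

        g++ -o ./decaf-llvm decaf-llvm.tab.cc decaf-llvm.lex.cc decaf-stdlib.c \
            `llvm-config --cppflags --ldflags --libs core jit native` -ly -ll -ldl

    The -DCMAKE_EXE_LINKER_FLAGS=-ldl workaround quoted above does the same thing when rebuilding LLVM itself.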

    Read the article

  • LLVM: bitcode with llvm-gcc (mingw) for windows

    - by TheShow
    Hi, I'm currently building a small JIT compiler. For the language I need a runtime library for some special math functions. I think the best approach would be to compile the library to bitcode and link it in. The compiler has to be integrated into a product, so it must work under Windows (VC10, 64-bit). Is it possible to build the math library with the MinGW llvm-gcc build and link it later with the JITed code? Or are there problems regarding the portability of bitcode built with llvm-gcc under MinGW? If there are problems, what solution would you suggest?
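
    For reference, the pipeline the question is asking about looks roughly like this (a sketch; mathlib.c and jit_module.bc are hypothetical names, and ABI details such as long double and calling conventions are exactly where MinGW-targeted and MSVC-targeted code can diverge):

        llvm-gcc -emit-llvm -c mathlib.c -o mathlib.bc     # or: clang -emit-llvm -c
        llvm-link jit_module.bc mathlib.bc -o combined.bc  # merge the runtime lib with the JITed module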

    Read the article

  • Build 32-bit with 64-bit llvm-gcc

    - by Jay Conrod
    I have a 64-bit version of llvm-gcc, but I want to be able to build both 32-bit and 64-bit binaries. Is there a flag for this? I tried passing -m32 (which works on the regular gcc), but I get an error message like this:

        [jay@andesite]$ llvm-gcc -m32 test.c -o test
        Warning: Generation of 64-bit code for a 32-bit processor requested.
        Warning: 64-bit processors all have at least SSE2.
        /tmp/cchzYo9t.s: Assembler messages:
        /tmp/cchzYo9t.s:8: Error: bad register name `%rbp'
        /tmp/cchzYo9t.s:9: Error: bad register name `%rsp'
        ...

    This is backwards; I want to generate 32-bit code for a 64-bit processor! I'm running llvm-gcc 4.2, the one that comes with Ubuntu 9.04 x86-64. EDIT: Here is the relevant part of the output when I run llvm-gcc with the -v flag:

        [jay@andesite]$ llvm-gcc -v -m32 test.c -o test.bc
        Using built-in specs.
        Target: x86_64-linux-gnu
        Configured with: ../llvm-gcc4.2-2.2.source/configure --host=x86_64-linux-gnu --build=x86_64-linux-gnu --prefix=/usr/lib/llvm/gcc-4.2 --enable-languages=c,c++ --program-prefix=llvm- --enable-llvm=/usr/lib/llvm --enable-threads --disable-nls --disable-shared --disable-multilib --disable-bootstrap
        Thread model: posix
        gcc version 4.2.1 (Based on Apple Inc. build 5546) (LLVM build)
        /usr/lib/llvm/gcc-4.2/libexec/gcc/x86_64-linux-gnu/4.2.1/cc1 -quiet -v -imultilib . test.c -quiet -dumpbase test.c -m32 -mtune=generic -auxbase test -version -o /tmp/ccw6TZY6.s

    I looked in /usr/lib/llvm/gcc-4.2/libexec/gcc hoping to find another binary, but the only directory there is x86_64-linux-gnu. I will probably look at compiling llvm-gcc from source with appropriate options next.
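
    Worth noting: the configure line above shows --disable-multilib, so this llvm-gcc has no 32-bit runtime support at all. One workaround sketch (assuming a multilib-capable system gcc is installed and the test code doesn't bake 64-bit type sizes in at the frontend stage) is to stop before native code generation and let llc choose the target:

        llvm-gcc -emit-llvm -c test.c -o test.bc   # frontend only, emit bitcode
        llc -march=x86 test.bc -o test.s           # force 32-bit x86 code generation
        gcc -m32 test.s -o test                    # assemble and link with the system gcc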

    Read the article

  • how to compile with llvm and g++?

    - by Sriram
    Hi, I use a Fedora 11 system and recently installed LLVM (sudo yum -y install llvm llvm-docs llvm-devel). When I search for llvm I find the binaries in /usr/bin; some of the links to the binaries are broken (llvm-gcc, llvm-g++, llvm-cpp, etc.). The include files are found under /usr/include/llvm and the libs at /usr/lib/llvm. How do I compile against them using g++? I tried to compile the Kaleidoscope code given in the tutorial (http://llvm.org/docs/tutorial/LangImpl3.html) as directed, but it fails to compile. I get this:

        toy.cpp:5:30: error: llvm/LLVMContext.h: No such file or directory
        toy.cpp:352: error: ‘getGlobalContext’ was not declared in this scope
        toy.cpp: In member function ‘virtual llvm::Value* NumberExprAST::Codegen()’:
        toy.cpp:358: error: ‘getGlobalContext’ was not declared in this scope
        toy.cpp: In member function ‘virtual llvm::Value* BinaryExprAST::Codegen()’:
        toy.cpp:379: error: ‘getDoubleTy’ is not a member of ‘llvm::Type’
        toy.cpp:379: error: ‘getGlobalContext’ was not declared in this scope
        toy.cpp: In member function ‘llvm::Function* PrototypeAST::Codegen()’:
        toy.cpp:407: error: ‘getDoubleTy’ is not a member of ‘llvm::Type’
        toy.cpp:407: error: ‘getGlobalContext’ was not declared in this scope
        toy.cpp:408: error: ‘getDoubleTy’ is not a member of ‘llvm::Type’
        toy.cpp: In member function ‘llvm::Function* FunctionAST::Codegen()’:
        toy.cpp:454: error: ‘getGlobalContext’ was not declared in this scope
        toy.cpp: In function ‘int main()’:
        toy.cpp:543: error: ‘LLVMContext’ was not declared in this scope
        toy.cpp:543: error: ‘Context’ was not declared in this scope
        toy.cpp:543: error: ‘getGlobalContext’ was not declared in this scope

    I cannot find the LLVMContext.h file either, so I guess this might be a version problem. What should I do to make it work? Some help would be good! Thanks in advance... :)
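
    Two things worth checking (a sketch, not specific to this exact Fedora package): LLVMContext.h and getGlobalContext() only exist in LLVM 2.6 and later, so an older distro package cannot build the current tutorial code; and the compile line should pull its flags from llvm-config rather than hard-coded paths:

        llvm-config --version                                         # check what the package actually provides
        g++ -g -O2 toy.cpp `llvm-config --cxxflags --ldflags --libs core` -o toy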

    Read the article

  • Linking LLVM JIT Code to Static LLVM Libraries?

    - by inflector
    I'm in the process of implementing a cross-platform (Mac OS X, Windows, and Linux) application which will do lots of CPU intensive analysis of financial data. The bulk of the analysis engine will be written in C++ for speed reasons, with a user-accessible scripting engine interfacing with the C++ testing engine. I want to write several scripting front-ends over time to emulate other popular software with existing large user bases. The first front-end will be a VisualBasic-like scripting language. I'm thinking that LLVM would be perfect for my needs. Performance is very important because of the sheer amount of data; it can take hours or days to run a single run of tests to get an answer. I believe that using LLVM will also allow me to use a single back-end solution while I implement different front-ends for different flavors of the scripting language over time.

    The testing engine itself will be separated from the interface, and testing will even take place in a separate process with progress and results being reported to the testing management interface. Tests will consist of scripting code integrated with the testing engine code. In a previous implementation of a similar commercial testing system I wrote, I built a fast interpreter which easily interfaced with the testing library because it was written in C++ and linked directly to the testing engine library. Callbacks from scripting code to testing library objects involved translating between the formats with significant overhead. I'm imagining that with LLVM, I could implement the callbacks into C++ directly so that I could make the scripting code work almost as if it had been written in C++. Likewise, if all the code was compiled to LLVM byte-code format, it seems like the LLVM optimizers could optimize across the boundaries between the scripting language and the testing engine code that was written in C++. I don't want to have to compile the testing engine every time. Ideally, I'd like to JIT compile only the scripting code. For small tests, I'd skip some optimization passes, while for large tests, I'd perform full optimizations during the link. So is this possible? Can I precompile the testing engine to a .o object file or .a library file and then link in the scripting code using the JIT?

    Finally, ideally, I'd like to have the scripting code implement specific methods as subclasses of a specific C++ class. So the C++ testing engine would only see C++ objects, while the JIT setup code compiled scripting code that implemented some of the methods for the objects. It seems that if I used the right name mangling algorithm it would be relatively easy to set up the LLVM generation for the scripting language to look like a C++ method call which could then be linked into the testing engine. Thus the linking stage would go in two directions: calls from the scripting language into the testing engine objects to retrieve pricing information and test state information, and calls from the testing engine to methods of some particular C++ objects where the code was supplied not from C++ but from the scripting language. In summary:
    1) Can I link in precompiled (either .bc, .o, or .a) files as part of the JIT compilation, code-generation process?
    2) Can I link in code using the process in 1) above in such a way that I am able to create code that acts as if it was all written in C++?
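
    On question 1, a minimal sketch of the usual approach (LLVM 2.x-era C++ API; testengine.bc and the function below are hypothetical names): compile the engine once to bitcode, load it as a second Module, and hand both modules to the same ExecutionEngine so the JIT can resolve script-to-engine calls. For true cross-boundary inlining you would instead merge the modules (e.g. with llvm::Linker) before running the optimizers.

        // Sketch only: load a precompiled engine module and JIT script code against it.
        #include "llvm/Bitcode/ReaderWriter.h"
        #include "llvm/ExecutionEngine/ExecutionEngine.h"
        #include "llvm/ExecutionEngine/JIT.h"
        #include "llvm/LLVMContext.h"
        #include "llvm/Module.h"
        #include "llvm/Support/MemoryBuffer.h"
        #include "llvm/Target/TargetSelect.h"
        using namespace llvm;

        ExecutionEngine *makeEngine(LLVMContext &ctx, Module *scriptModule) {
          InitializeNativeTarget();
          std::string err;
          // testengine.bc: the engine precompiled to bitcode (clang/llvm-gcc -emit-llvm).
          MemoryBuffer *buf = MemoryBuffer::getFile("testengine.bc", &err);
          Module *engineModule = buf ? ParseBitcodeFile(buf, ctx, &err) : 0;
          if (!engineModule) return 0;
          ExecutionEngine *ee = EngineBuilder(scriptModule).setErrorStr(&err).create();
          if (!ee) return 0;
          ee->addModule(engineModule);   // JIT resolves calls from script code into the engine
          return ee;
        }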

    Read the article

  • Why is an array size created by LLVM's own tools not good enough for LLVM?

    - by vava
    I'm getting a strange error message from the following code:

        ArrayType * arrayType = ArrayType::get(Type::getInt32Ty(ctx), 0);
        stack = builder->CreateAlloca(arrayType, ConstantArray::get(arrayType, std::vector<Constant *>()));

    This code compiles fine, but it gives me

        llvm::Value* getAISize(llvm::LLVMContext&, llvm::Value*): Assertion `Amt->getType() == Type::getInt32Ty(Context) && "Malloc/Allocation array size is not a 32-bit integer!"' failed.

    during execution. What's interesting is that I never said what type the array size should be; it was created somewhere inside helper functions. So I naturally see no good reason for the assert to be there. But I must be doing something wrong anyway; can you help me spot the problem? Thanks in advance. PS. LLVM, as good as it is, is lacking documentation tremendously.
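
    For what it's worth, a likely reading of the assertion (a sketch, not a confirmed answer): the second argument to IRBuilder::CreateAlloca is the number of elements to allocate and must be an i32, not an initializer, so the ConstantArray passed there trips the check. Passing an explicit i32 count, or omitting the argument entirely, satisfies the assert:

        ArrayType *arrayType = ArrayType::get(Type::getInt32Ty(ctx), 0);
        // second operand = element count, and it must be an i32 constant
        stack = builder->CreateAlloca(arrayType,
                                      ConstantInt::get(Type::getInt32Ty(ctx), 1));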

    Read the article

  • Help me select a "Simpler" target to create a new language: .NET, LLVM, Go, Own VM

    - by mamcx
    Lets define "Simple". This is my first language. I have no previous experience I will not dedicate +4 years to learn it properly. I'm a professional software [developer], but as an amateur in this area, I want instant gratification. If the idea shows a future, I could rewrite it. I don't want to do everything from scratch. In fact, if there exists a way to get GO (for example), change its syntax, add some sugar, give some extra functions and leave intact everything else, that would be perfect! From the example of coffescript/scala I think is better to build on top of some rich runtime like .NET/GO so I don't need to rewrite everything. HOWEVER, if is better other way, no problem for the first try! I want it in a week. I need it in a week so it will really take a month. Then it truly takes 3 months. But I don't want to put more that 3 months on this. I could reduce the scope of my language, but I hope the tools will help me a lot... I want to build a new language. Similar to python, but typed. I wonder what to build it on top of. I like the idea of building on top of GO. To get their sane (IMHO) OO paradigm (I plan to do the same, using interfaces, not inheritance), get goroutines and some other stuff. In my naive thinking I imagine that spit another language could help me to debug it more easily. However, look like everyone is building on top of something like .NET (don't like Java), LLVM or make it own VM. I read http://createyourproglang.com/ (great!) and the part of the VM look "easy" to me. So, what I need is the proper criteria and question I need to know in advance to have a fair shot at make this.

    Read the article

  • Make LLVM inline a function from a library

    - by capitrane
    I am trying to make LLVM inline a function from a library. I have LLVM bitcode files (manually generated) that I linked together with llvm-link. I also have a library (written in C) compiled into bitcode by clang and archived with llvm-ar. I manage to link everything together and to execute, but I can't get LLVM to inline a function from the library. Any clue about how this should be done?
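
    A sketch of the usual route (file names here are hypothetical): the inliner only works within a single module, so pull the library bitcode into the same module as the program and then run the inlining pass with opt:

        llvm-link program.bc library.bc -o whole.bc          # callee body must be in the same module
        opt -inline -inline-threshold=1000 whole.bc -o whole.opt.bc

    If a particular callee must always be inlined regardless of cost, marking it __attribute__((always_inline)) in the C source and running opt -always-inline is another route.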

    Read the article

  • How to make working binary using llvm

    - by prosseek
    I want to get a working binary out of LLVM, using a step-by-step method. I'm working on Snow Leopard.

        llvm-gcc h.c -emit-llvm -S -o hi.ll    -> hi.ll
        llvm-as hi.ll                          -> hi.bc (jit binary?)
        llc hi.bc                              -> hi.s (assembly code)

    How can I get the binary to run on Mac OS X with hi.bc?
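
    To finish the pipeline, a sketch (assuming the normal system assembler/linker on Snow Leopard): feed the llc output to the regular gcc driver, or skip the detour and let lli execute the bitcode directly.

        llc hi.bc -o hi.s        # bitcode -> native assembly
        gcc hi.s -o hi           # assemble and link with the system toolchain
        ./hi

        lli hi.bc                # alternatively, JIT/interpret the bitcode in place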

    Read the article

  • Where is the VM in LLVM?

    - by anon
    Note: marked as community wiki. Where is the Low Level Virtual Machine in LLVM? I see that we have llvm-g++ and clang, but to me an LLVM is something almost like Valgrind or a simulator, where instructions are executed on it, and I can write programs to instrument the running code / interrupt when certain conditions happen / etc. Where are the tools like this built on LLVM? Thanks!
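
    For concreteness, the piece that behaves most like a classic VM is lli, which interprets or JIT-compiles bitcode; instrumentation is normally done instead by writing passes over the IR. A small sketch (prog.c and the pass name are hypothetical):

        clang -emit-llvm -c prog.c -o prog.bc                      # compile to LLVM bitcode
        lli prog.bc                                                # execute the bitcode (JIT or interpreter)
        opt -load ./MyPass.so -mypass prog.bc -o prog.instr.bc    # run a custom instrumentation pass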

    Read the article

  • How to preserve struct member identifiers when compiling to LLVM IR with Clang?

    - by smokris
    Say I have the following C structure definition: struct stringStructure { char *stringVariable; }; For the above, Clang produces the following LLVM IR: %struct.stringStructure = type { i8* } ...which includes everything in my definition except the variable identifier stringVariable. I'd like to find some way to export the identifier into the generated LLVM IR, so that I can refer to it by name from my application (which uses the LLVM C++ API). I've tried adding the annotate attribute, as follows: char *stringVariable __attribute__((annotate("stringVariable"))); ...but the annotation doesn't seem to make it through (the structure is still just defined as type { i8* }). Any ideas?
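
    One avenue worth checking (an assumption, not a confirmed answer): field names are simply not part of LLVM's type system, so they can only travel as metadata. Compiling with -g keeps them in the debug info, where the C++ API can read them back out of the DW_TAG_member entries:

        clang -g -S -emit-llvm stringStructure.c -o stringStructure.ll
        grep stringVariable stringStructure.ll    # the member name survives in the debug metadata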

    Read the article

  • Creating full, global clang+llvm environment

    - by Griwes
    What is the easiest way to set up full Clang, libc++ and LLVM as the default global toolchain? All of my attempts to build it, in most of the configurations I could think of, resulted in a working Clang, but one that used GCC's default libstdc++ headers instead of the libc++ ones, resulting in numerous faults in incompatible pieces of library code. I would like it to work out of the box, without having to do magic in .bashrc or pass all those -stdlib=libc++ and -lc++ flags to the compiler and linker.
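
    One low-tech sketch (purely an assumption about the setup, not a blessed Clang feature): once Clang and libc++ are installed, put a thin wrapper ahead of the real driver on PATH so every C++ build pairs clang++ with libc++ without per-project flags:

        #!/bin/sh
        # hypothetical /usr/local/bin/c++ wrapper
        exec /usr/local/bin/clang++ -stdlib=libc++ "$@"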

    Read the article

  • Poor LLVM JIT performance

    - by Paul J. Lucas
    I have a legacy C++ application that constructs a tree of C++ objects. I want to use LLVM to call class constructors to create said tree. The generated LLVM code is fairly straightforward and looks like repeated sequences of:

        ; ...
        %11 = getelementptr [11 x i8*]* %Value_array1, i64 0, i64 1
        %12 = call i8* @T_string_M_new_A_2Pv(i8* %heap, i8* getelementptr inbounds ([10 x i8]* @0, i64 0, i64 0))
        %13 = call i8* @T_QueryLoc_M_new_A_2Pv4i(i8* %heap, i8* %12, i32 1, i32 1, i32 4, i32 5)
        %14 = call i8* @T_GlobalEnvironment_M_getItemFactory_A_Pv(i8* %heap)
        %15 = call i8* @T_xs_integer_M_new_A_Pvl(i8* %heap, i64 2)
        %16 = call i8* @T_ItemFactory_M_createInteger_A_3Pv(i8* %heap, i8* %14, i8* %15)
        %17 = call i8* @T_SingletonIterator_M_new_A_4Pv(i8* %heap, i8* %2, i8* %13, i8* %16)
        store i8* %17, i8** %11, align 8
        ; ...

    where each T_ function is a C "thunk" that calls some C++ constructor, e.g.:

        void* T_string_M_new_A_2Pv( void *v_value ) {
          string *const value = static_cast<string*>( v_value );
          return new string( value );
        }

    The thunks are necessary, of course, because LLVM knows nothing about C++. The T_ functions are added to the ExecutionEngine in use via ExecutionEngine::addGlobalMapping(). When this code is JIT'd, the performance of the JIT'ing itself is very poor. I've generated a call graph using kcachegrind. I don't understand all the numbers (and this PDF seems not to include commas where it should), but if you look at the left fork, the bottom two ovals, Schedule... is called 16K times and setHeightToAtLeas... is called 37K times. On the right fork, RAGreed... is called 35K times. Those are far too many calls to anything for what's mostly a simple sequence of call LLVM instructions. Something seems horribly wrong. Any ideas on how to improve the performance of the JIT'ing?
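
    The names in the profile all live inside the code generator (instruction scheduling and register allocation), so one knob to try, a sketch against the JIT-era C++ API rather than a confirmed fix, is to drop the codegen optimization level when building the engine:

        // Sketch: build the execution engine with the cheapest code generation level.
        // (EngineKind and CodeGenOpt come with the ExecutionEngine/TargetMachine headers.)
        ExecutionEngine *EE = EngineBuilder(module)
                                  .setEngineKind(EngineKind::JIT)
                                  .setOptLevel(CodeGenOpt::None)   // fast compile, slower generated code
                                  .create();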

    Read the article

  • Static library not included in resulting LLVM executable

    - by Matthew Glubb
    Hi, I am trying to compile a C program using LLVM and I am having trouble getting some static libraries included. I have successfully compiled those static libraries using LLVM and, for example, libogg.a is present, as is ogg.l.bc. However, when I try to build the final program, it does not include the static ogg library. I've tried various compiler options, with the most notable being:

        gcc oggvorbis.c -O3 -Wall -I$OV_DIR/include -l$OV_DIR/lib/libogg.a -l$OV_DIR/lib/libvorbis.a -o test.exe

    This results in the following output (directories shortened for brevity):

        $OV_DIR/include/vorbis/vorbisfile.h:75: warning: ‘OV_CALLBACKS_DEFAULT’ defined but not used
        $OV_DIR/include/vorbis/vorbisfile.h:82: warning: ‘OV_CALLBACKS_NOCLOSE’ defined but not used
        $OV_DIR/include/vorbis/vorbisfile.h:89: warning: ‘OV_CALLBACKS_STREAMONLY’ defined but not used
        $OV_DIR/include/vorbis/vorbisfile.h:96: warning: ‘OV_CALLBACKS_STREAMONLY_NOCLOSE’ defined but not used
        llvm-ld: warning: Cannot find library '$OV_DIR/lib/ogg.l.bc'
        llvm-ld: warning: Cannot find library '$OV_DIR/lib/vorbis.l.bc'
        WARNING: While resolving call to function 'main' arguments were dropped!

    I find this perplexing because $OV_DIR/lib/ogg.l.bc DOES exist, as does vorbis.l.bc, and they are both readable (as are their containing directories) by everyone. Does anyone have any idea what I am doing wrong? Thanks, Matt
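
    A sketch of a likely fix: -l takes a bare library name that the linker then searches for in -L directories, so passing a full path to -l sends llvm-ld looking for the wrong file. Either give the archives as plain arguments or split the path into -L and -l:

        gcc oggvorbis.c -O3 -Wall -I$OV_DIR/include \
            $OV_DIR/lib/libogg.a $OV_DIR/lib/libvorbis.a -o test.exe
        # or
        gcc oggvorbis.c -O3 -Wall -I$OV_DIR/include \
            -L$OV_DIR/lib -lvorbis -logg -o test.exe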

    Read the article

  • LLVM C++ IDE for windows

    - by osgx
    Hello. Is there a C/C++ IDE for Windows that is integrated with the LLVM compiler (and the Clang C/C++ analyzer), the way modern Xcode is? I have Dev-Cpp (it uses an outdated gcc) and Code::Blocks (with some gcc), but gcc gives me very cryptic error messages. I want to get more user-friendly error messages from the Clang front-end. Yes, Clang used to choke on complex C++ code, but trunk Clang can already compile LLVM itself. So I wonder whether there are any LLVM-based IDEs in development or in beta. Thanks.

    Read the article

  • Xcode 3.2 + LLVM = no local symbols when debugging

    - by glebd
    I have a project for Mac OS X 10.5 that I'm building on 10.6 using Xcode 3.2. When I use GCC 4.2 for the Debug build and hit a breakpoint, the Xcode debugger displays local variable information normally. If I choose LLVM GCC 4.2 or Clang LLVM, when I hit a breakpoint, local symbols are not available, and GDB says No symbol 'self' in current context if I try to print self or any other local symbol. In all cases the Generate debug info option is set. The Debug configuration is set to $(NATIVE_ARCH) and the 10.5 SDK, and the Build active architecture only option is set. When GDB starts, I can see it is being configured as x86_64-apple-darwin. I must be missing something obvious. How do I make GDB show local symbols when using an LLVM compiler?

    Read the article

  • llvm-py questions

    - by DSblizzard
    1) Is it possible to use llvm-py on Windows without Visual Studio 2008? Maybe I can compile the files on another computer and use them on mine? 2) Is llvm-py mature enough, in your opinion? If not, what are the problems?

    Read the article

  • LLVM JIT segfaults. What am I doing wrong?

    - by bugspy.net
    It is probably something basic because I am just starting to learn LLVM. The following creates a factorial function and tries to JIT and execute it (I know the generated function is correct because I was able to compile and execute it statically). But I get a segmentation fault upon execution of the function (in EE->runFunction(TheF, Args)):

        #include <iostream>
        #include "llvm/Module.h"
        #include "llvm/Function.h"
        #include "llvm/PassManager.h"
        #include "llvm/CallingConv.h"
        #include "llvm/Analysis/Verifier.h"
        #include "llvm/Assembly/PrintModulePass.h"
        #include "llvm/Support/IRBuilder.h"
        #include "llvm/Support/raw_ostream.h"
        #include "llvm/ExecutionEngine/JIT.h"
        #include "llvm/ExecutionEngine/GenericValue.h"
        using namespace llvm;

        Module* makeLLVMModule() {
          // Module Construction
          LLVMContext& ctx = getGlobalContext();
          Module* mod = new Module("test", ctx);
          Constant* c = mod->getOrInsertFunction("fact64",
                                                 /*ret type*/ IntegerType::get(ctx,64),
                                                 IntegerType::get(ctx,64),
                                                 /*varargs terminated with null*/ NULL);
          Function* fact64 = cast<Function>(c);
          fact64->setCallingConv(CallingConv::C);

          /* Arg names */
          Function::arg_iterator args = fact64->arg_begin();
          Value* x = args++;
          x->setName("x");

          /* Body */
          BasicBlock* block = BasicBlock::Create(ctx, "entry", fact64);
          BasicBlock* xLessThan2Block= BasicBlock::Create(ctx, "xlst2_block", fact64);
          BasicBlock* elseBlock = BasicBlock::Create(ctx, "else_block", fact64);
          IRBuilder<> builder(block);

          Value *One = ConstantInt::get(Type::getInt64Ty(ctx), 1);
          Value *Two = ConstantInt::get(Type::getInt64Ty(ctx), 2);
          Value* xLessThan2 = builder.CreateICmpULT(x, Two, "tmp");
          //builder.CreateCondBr(xLessThan2, xLessThan2Block, cond_false_2);
          builder.CreateCondBr(xLessThan2, xLessThan2Block, elseBlock);

          /* Recursion */
          builder.SetInsertPoint(elseBlock);
          Value* xMinus1 = builder.CreateSub(x, One, "tmp");
          std::vector<Value*> args1;
          args1.push_back(xMinus1);
          Value* recur_1 = builder.CreateCall(fact64, args1.begin(), args1.end(), "tmp");
          Value* retVal = builder.CreateBinOp(Instruction::Mul, x, recur_1, "tmp");
          builder.CreateRet(retVal);

          /* x<2 */
          builder.SetInsertPoint(xLessThan2Block);
          builder.CreateRet(One);
          return mod;
        }

        int main(int argc, char**argv) {
          long long x;
          if(argc > 1) x = atol(argv[1]);
          else x = 4;

          Module* Mod = makeLLVMModule();
          verifyModule(*Mod, PrintMessageAction);
          PassManager PM;
          PM.add(createPrintModulePass(&outs()));
          PM.run(*Mod);

          // Now we going to create JIT
          ExecutionEngine *EE = EngineBuilder(Mod).create();

          // Call the function with argument x:
          std::vector<GenericValue> Args(1);
          Args[0].IntVal = APInt(64, x);
          Function* TheF = cast<Function>(Mod->getFunction("fact64")) ;

          /* The following CRASHES.. */
          GenericValue GV = EE->runFunction(TheF, Args);
          outs() << "Result: " << GV.IntVal << "\n";

          delete Mod;
          return 0;
        }
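
    A sketch of the most common cause of this particular crash (an assumption, since the question doesn't show the link line): without JIT target initialization, EngineBuilder::create() can return null (or silently fall back to the interpreter), and calling runFunction on a null engine segfaults. Initializing the native target and checking the result usually surfaces the real error:

        // Requires: #include "llvm/Target/TargetSelect.h"
        InitializeNativeTarget();                       // register the host target for the JIT

        std::string errStr;
        ExecutionEngine *EE = EngineBuilder(Mod).setErrorStr(&errStr).create();
        if (!EE) {
          errs() << "Failed to create ExecutionEngine: " << errStr << "\n";
          return 1;
        }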

    Read the article

  • Recompile a x86 code with LLVM to some faster one x86

    - by osgx
    Hello. Is it possible to run the LLVM compiler with 32-bit x86 machine code as input? There is a huge algorithm for which I have no source code, and I want to make it run faster on the same hardware. Can I translate it from x86 back to x86 with optimizations? This code runs for a long time, so I want to do a static recompilation of it. Also, I can do a runtime profile of it and give LLVM hints about which branches are more probable. The original code is written for x86 and uses no SSE/MMX/SSE2. After recompilation it would have a chance to use x86_64 and/or SSE3. Also, the code would be regenerated in a way more optimal for the hardware decoder. Thanks.

    Read the article

  • Functional languages targeting the LLVM

    - by Matthew
    Are there any languages that target the LLVM that:

      - Are statically typed
      - Use type inference
      - Are functional (i.e. lambda expressions, closures, list primitives, list comprehensions, etc.)
      - Have first class object-oriented features (inheritance, polymorphism, mixins, etc.)
      - Have a sophisticated type system (generics, covariance and contravariance, etc.)

    Scala is all of these, but only targets the JVM. F# (and to some extent C#) is most if not all of these, but only targets .NET. What similar language targets the LLVM?

    Read the article

  • llvm clang struct creating functions on the fly

    - by anon
    I'm using LLVM-clang on Linux. Suppose in foo.cpp I have:

        struct Foo { int x, y; };

    How can I create a function "magic" such that:

        typedef (Foo) SomeFunc(Foo a, Foo b);
        SomeFunc func = magic("struct Foo { int x, y; };");

    so that:

        func(SomeFunc a, SomeFunc b); // returns a.x + b.y;

    ? Note: So basically, "magic" needs to take a char*, have LLVM parse it to get how C++ lays out the struct, then create a function on the fly that returns a.x + b.y. Thanks!

    Read the article

  • Static source code analysis with LLVM

    - by Phong
    I recently discovered the LLVM (Low Level Virtual Machine) project, and from what I have heard it can be used to perform static analysis on source code. I would like to know if it is possible to extract the different function calls made through function pointers (find the caller function and the callee function) in a program. I could not find this kind of information on the website, so it would be really helpful if you could tell me whether such a library already exists in LLVM, or point me in the right direction on how to build it myself (existing source code, references, tutorials, examples...). EDIT: With my analysis I actually want to extract caller/callee function calls. In the case of a function pointer, I would like to return the set of possible callees. Both caller and callee must be defined in the source code (this does not include third-party functions in a library).
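
    As a starting point (a hand-rolled sketch, not an existing LLVM library; getting the *set* of possible callees for an indirect call would additionally need a points-to/alias analysis on top of this), a walk over the IR can separate direct call edges from indirect ones:

        // Sketch: print direct call edges; calls through function pointers have a
        // null called function and are flagged as indirect.
        #include "llvm/Module.h"
        #include "llvm/Function.h"
        #include "llvm/Instructions.h"
        #include "llvm/Support/raw_ostream.h"
        using namespace llvm;

        void dumpCallEdges(Module &M) {
          for (Module::iterator F = M.begin(); F != M.end(); ++F)
            for (Function::iterator BB = F->begin(); BB != F->end(); ++BB)
              for (BasicBlock::iterator I = BB->begin(); I != BB->end(); ++I)
                if (CallInst *CI = dyn_cast<CallInst>(&*I)) {
                  if (Function *Callee = CI->getCalledFunction())
                    errs() << F->getName() << " -> " << Callee->getName() << "\n";
                  else
                    errs() << F->getName() << " -> (indirect call via function pointer)\n";
                }
        }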

    Read the article

1 2 3 4 5  | Next Page >