Search Results

Search found 1264 results on 51 pages for 'fat binaries'.

Page 3 of 51

  • Mach-O binaries using FASM

    - by vruz
    Is anybody using FASM to produce Mach-O binaries? It's my assembler of choice, and I thought it would be nice to learn whether that's possible to accomplish and whether somebody is already doing it. Thanks in advance.

    Read the article

  • Refactoring FAT client legacy application

    - by Paul
    I am working on a fat-client legacy C++ application that has a lot of business logic mixed in with the presentation code. I want to clean things up and refactor the code completely so that there is a clear separation of concerns. I am looking at MVC or some other suitable design pattern to achieve this. I would like to get recommendations from people who have walked this road before: do I use MVP or MVC (or another pattern), and what are the best practices (i.e. useful steps and checks) for undertaking something like this?
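
    As a rough illustration of the kind of separation MVP gives you, here is a minimal C++ sketch; the interface and class names (ILoginView, LoginPresenter, ConsoleView) are hypothetical and not taken from the application above.

        #include <iostream>
        #include <string>

        // The view exposes only what the presenter needs, so business logic
        // never touches widget/toolkit types directly.
        struct ILoginView {
            virtual ~ILoginView() = default;
            virtual std::string userName() const = 0;
            virtual void showError(const std::string& msg) = 0;
        };

        // Business rules live in the presenter (or in domain classes it calls),
        // which can be unit-tested with a fake view.
        class LoginPresenter {
        public:
            explicit LoginPresenter(ILoginView& view) : view_(view) {}
            void onLoginClicked() {
                if (view_.userName().empty())
                    view_.showError("User name must not be empty");
                // ...otherwise call into the domain/model layer here...
            }
        private:
            ILoginView& view_;
        };

        // A console stand-in for the real GUI view, useful in tests.
        struct ConsoleView : ILoginView {
            std::string userName() const override { return ""; }
            void showError(const std::string& msg) override { std::cerr << msg << '\n'; }
        };

        int main() {
            ConsoleView view;
            LoginPresenter presenter(view);
            presenter.onLoginClicked();   // prints the error through the view
        }

    The point of the split is that the presenter and everything below it can be tested with a fake view, while the real GUI class only has to implement the narrow view interface.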

    Read the article

  • Eclipse CDT on Snow Leopard cannot find binaries

    - by ejel
    After upgrading to Snow Leopard, I can no longer run my Eclipse CDT project on my computer. While the build process completes without any errors, Eclipse does not recognize the binary file it created. When I try to point to the binary file in the Run Configurations... dialog, it cannot find any binary in the project, though executing the file from Terminal works fine. According to a post on the Eclipse forum, this might be a problem where the Mach-O parser does not recognize 64-bit binaries. Does anyone know of solutions or workarounds to this problem so that I can run/debug my C++ projects on Snow Leopard? UPDATE: The solution suggested by Shane, though it allows the created binary to be recognized, introduces another problem. Since the system libraries in Snow Leopard are all 64-bit, it is no longer possible to link code built with -arch i386 against these libraries, so it is not a feasible solution yet.

    Read the article

  • How to pass binaries built upstream to a remote downstream build slave

    - by sbi
    We're using Hudson on Windows to build a .NET solution and run the unit tests (NUnit). Hudson is used to start batch files that do the actual work. I am now trying to set up a new test that is to run on a build slave and will take a very long time. The test should use the binaries produced by the upstream build. I have searched the Hudson documentation but cannot find out how to pass upstream build artifacts to downstream slaves. How do I do this?

    Read the article

  • Deploying locally compiled binaries on server

    - by nano
    Hi, I have a Xen-based VPS that runs on a dual-core 64-bit AMD Opteron machine. I have some locally developed C++ daemons that I would like to deploy on that machine. My local machine is an Intel Core 2 Duo laptop. Can I execute binaries compiled from source code on my machine directly on the server mentioned above? I am a newbie in this area. It would be great if someone could shed light on the standard practice in this kind of situation. Thanks in advance

    Read the article

  • gcc compiled binaries w/different sizes?

    - by BillTorpey
    If the same code is built at different times with gcc, the resulting binary will have different contents. OK, I'm not wild about that, but that's what it is. However, I've recently run into a situation where the same code, built with the same version of gcc, generates a binary with a different size than a prior build (by about 1900 bytes). Does anyone have any idea what may be causing either of these situations? Is this some kind of ELF issue? Are there any tools out there (other than ldd) that can be used to dump the contents of binaries to see what exactly is different? Thanks in advance.

    Read the article

  • OO Design / Patterns - Fat Model Vs Transaction Script?

    - by ben
    OK, the 'Fat' Model and the Transaction Script both address the design problem of where to keep business logic. I've done some research, and popular opinion says that encapsulating all business logic within the model is the way to go (mainly since a Transaction Script can become really complex and often results in code duplication). However, how does this work if I want to use the TDG of a second model in my business logic? Surely a Transaction Script presents a neater, less coupled solution than using one model inside the business logic of another?

    A practical example: I have two classes, User and Alert. When pushing User instances to the database (e.g. creating new user accounts), there is a business rule that requires inserting some default Alert records too (e.g. a default 'welcome to the system' message, etc.). I see two options here:

    1) Add this rule as a User method, and in the process create a dependency between User and Alert (or, at least, Alert's Table Data Gateway).

    2) Use a Transaction Script, which avoids the dependency between the models. (It also means the business logic is kept in a 'neutral' class and is easily accessible by Alert. That probably isn't too important here, though.)

    User takes responsibility for its own validation etc., but because we're talking about a business rule involving two models, a Transaction Script seems like the better choice to me. Can anyone spot flaws with this approach?
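
    To make option 2 concrete, here is a rough sketch (written in C++ purely for illustration; the gateway and script names are hypothetical) of a transaction script that owns the cross-model rule, so User and Alert never reference each other:

        #include <string>

        struct User { std::string name; };

        struct UserGateway {                     // Table Data Gateway for the users table
            int insert(const User& /*u*/) { /* run the INSERT, return the new id */ return 42; }
        };

        struct AlertGateway {                    // Table Data Gateway for the alerts table
            void insert(int /*userId*/, const std::string& /*msg*/) { /* run the INSERT */ }
        };

        // The transaction script owns the cross-model rule:
        // "creating a user also creates the default welcome alert".
        class RegisterUserScript {
        public:
            RegisterUserScript(UserGateway& users, AlertGateway& alerts)
                : users_(users), alerts_(alerts) {}

            int run(const User& newUser) {
                int id = users_.insert(newUser);              // User still validates itself
                alerts_.insert(id, "Welcome to the system");  // default-alert rule lives here
                return id;
            }

        private:
            UserGateway& users_;
            AlertGateway& alerts_;
        };

        int main() {
            UserGateway users;
            AlertGateway alerts;
            RegisterUserScript script(users, alerts);
            return script.run(User{"ben"}) > 0 ? 0 : 1;
        }

    The trade-off is the usual one: the rule is easy to find and test in isolation, but anyone calling the User gateway directly can bypass it.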

    Read the article

  • Simple problem with mod_rewrite in the Fat Free Framework

    - by ian
    I am trying to set up and learn the Fat Free Framework for PHP (http://fatfree.sourceforge.net/). It is fairly simple to set up, and I am running it on my machine using MAMP. I was able to get the 'hello world' example running just fine:

        require_once 'path/to/F3.php';

        F3::route('GET /','home');
        function home() {
            echo 'Hello, world!';
        }

        F3::run();

    But when I try to add the second part, which has two routes:

        require_once 'F3/F3.php';

        F3::route('GET /','home');
        function home() {
            echo 'Hello, world!';
        }

        F3::route('GET /about','about');
        function about() {
            echo 'About Us.';
        }

        F3::run();

    I get a 404 error if I try the second URL, /about. I am not sure why one of the mod_rewrite rules would be working and not the other. Below is my .htaccess file:

        # Enable rewrite engine and route requests to framework
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-l
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php [L,QSA]

        # Disable ETags
        Header Unset ETag
        FileETag none

        # Default expires header if none specified (stay in browser cache for 7 days)
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresDefault A604800
        </IfModule>

    Read the article

  • Side by side madness - running binaries on different computer (with a twist)

    - by sbk
    Here's my configuration:

    Computer A - Windows 7, MS Visual Studio 2005 patched for Win7 compatibility (8.0.50727.867)
    Computer B - Windows XP SP2, MS Visual Studio 2005 installed (8.0.50727.42)

    My project has some external dependencies (prebuilt DLLs, either built on A or downloaded from the Internet), a couple of DLLs built from sources, and one executable. I am mostly developing on A and all is fine there. At some point I tried to build my project on computer B, copying the prebuilt DLLs to the output folder. Everything builds fine, but trying to start my application I get "The application failed to initialize properly (0xc0150002)...". The event log contains two of these: "Dependent Assembly Microsoft.VC80.CRT could not be found" and "Last Error was The referenced assembly is not installed on your system.", plus the slightly more amusing "Generate Activation Context failed for some.dll. Reference error message: The operation completed successfully."

    At this point I tried my Google-Fu, but in vain - virtually all hits are about running binaries on machines without Visual Studio installed. In my case, however, the executables fail to run on the very computer they were built on. The next step was to try Dependency Walker, and it baffled me even more: my DLLs built from sources on the same box cannot find MSVCR80.DLL and MSVCP80.DLL, yet the executable seems to be all right with respect to those two DLLs, i.e. when I open the executable with Dependency Walker it shows that the MSVC?80.DLLs can be found, but when I open one of my DLLs it says they cannot.

    That's where I am completely out of ideas about what to do, so I'm asking you, dear stackoverflow :) I admit I'm a bit blurry on the whole side-by-side thing, so general reading on the topic will also be appreciated.

    Read the article

  • Using 32 bit g++ to build 64bit binaries on AIX

    - by Thumbeti
    I am trying to build a 64-bit binary from C++ code using a 32-bit g++ compiler. I am getting the following errors while building:

        => /usr/local/bin/g++ -shared -maix64 -fPIC -Wl,-bM:SRE -Wl,-bnoentry -Wl,-bE:gcc_shr_lib.so.exp -o gcc_shr_lib.so gcc_shr_lib.o -L/usr/local/lib
        ld: 0711-319 WARNING: Exported symbol not defined: gcc_whereAmI
        ld: 0711-317 ERROR: Undefined symbol: typeinfo for std::bad_alloc
        ld: 0711-317 ERROR: Undefined symbol: __gxx_personality_v0
        ld: 0711-317 ERROR: Undefined symbol: vtable for std::exception
        ld: 0711-317 ERROR: Undefined symbol: vtable for std::bad_alloc
        ld: 0711-317 ERROR: Undefined symbol: .std::ios_base::Init::Init()
        ld: 0711-317 ERROR: Undefined symbol: .std::ios_base::Init::~Init()
        ld: 0711-317 ERROR: Undefined symbol: .operator new(unsigned long)
        ld: 0711-317 ERROR: Undefined symbol: .operator delete(void*)
        ld: 0711-317 ERROR: Undefined symbol: ._Unwind_Resume
        ld: 0711-317 ERROR: Undefined symbol: .__cxa_get_exception_ptr
        ld: 0711-317 ERROR: Undefined symbol: .__cxa_begin_catch
        ld: 0711-317 ERROR: Undefined symbol: std::cout
        ld: 0711-317 ERROR: Undefined symbol: .std::basic_ostream<char, std::char_traits<char> >& std::operator<< <std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*)
        ld: 0711-317 ERROR: Undefined symbol: std::basic_ostream<char, std::char_traits<char> >& std::endl<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&)
        ld: 0711-317 ERROR: Undefined symbol: .std::basic_ostream<char, std::char_traits<char> >::operator<<(std::basic_ostream<char, std::char_traits<char> >& (*)(std::basic_ostream<char, std::char_traits<char> >&))
        ld: 0711-317 ERROR: Undefined symbol: .std::bad_alloc::~bad_alloc()
        ld: 0711-317 ERROR: Undefined symbol: .__cxa_end_catch
        ld: 0711-317 ERROR: Undefined symbol: .__register_frame_info_table
        ld: 0711-317 ERROR: Undefined symbol: .__deregister_frame_info
        ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information.
        collect2: ld returned 8 exit status

    It seems I need a 64-bit libstdc++ available on my build system. Could you please shed some light on how to solve this?

    Q1) Is it OK to build 64-bit binaries using a 32-bit g++ compiler on AIX 5.2?
    Q2) Where should I get a 64-bit libstdc++? Will this 64-bit libstdc++ work with the 32-bit g++ compiler?

    Read the article

  • How can I move a library inside a project's source tree and compile static binaries?

    - by AbrahamVanHelpsing
    How can I move a library inside a project's source tree and compile static binaries? I want to use a particular tool that relies on ANCIENT binaries, without upgrading its API usage. This way I can use the old libraries inside a single static binary without wrecking the local host environment. I am on *nix with NetBeans/Eclipse/Code::Blocks. I don't have a problem reading; I'm just looking for a starting point. Any thoughts?

    Read the article

  • WCF service to send large binaries to server

    - by Ali Shafai
    I need to upload large (100 MB max) binaries to a server using WCF. I followed the instructions from this article: http://www.c-sharpcorner.com/UploadFile/dhananjaycoder/fileuploadsilverlightwcf07142009104020AM/fileuploadsilverlightwcf.aspx. It works for anything less than 50K; going above that I get 415 errors. Any ideas?

    Read the article

  • Project with multiple binaries in Eclipse CDT

    - by Robert Schneider
    I think it is quite normal to have more than one binary in a project. However, with Eclipse CDT I don't know how to set up the IDE to get this done. I know I can create several projects, one per binary, and I know I can set the dependencies per project. However, I cannot treat them as one project in Eclipse. If I'd like to share the code with a version control system (like svn), each developer has to import the projects separately. What I miss is something like the Solution (.sln file) in Visual Studio. Should I create a single project and write the makefiles myself? I haven't tried it out yet, but there is this 'project set' which can be exported and imported. Is this the solution? Can it be put under version control? My goal is to put everything under version control, not only subprojects. I cannot imagine that CDT only makes sense for single-binary applications. How can I work properly?

    Read the article

  • Silverlight binaries: what are .ni.dlls?

    - by BrettRobi
    In browsing around the Silverlight installation directory I see a number of framework DLLs, as expected. But I also see a separate DLL with the same name but with .ni inserted between the DLL name and the extension. For example, there is a System.dll and a System.ni.dll. There appears to be a sister .ni DLL for almost all of the system DLLs. Looking at them quickly in Reflector, they appear to include the same content, but the .ni files are much bigger in binary size. Just out of curiosity, can anyone explain what these are?

    Read the article

  • Treating a fat webservice in .NET 3.5 C#

    - by Chris M
    I'm dealing with an obese third-party web service that returns about 3 MB of data for a simple search; about 50% of the data in that response is junk. Would it make sense, then, to remap this data to my own result object and ditch the response, so that I'm storing 1-2 MB in memory for filtering and sorting rather than keeping the web service's own object and using 2-4 MB? Or am I missing a point?

    So far I've been accessing the web service from a separate project and using a new class to provide the interaction and handle the persistence, so my project looks like this:

        |- Web (mvc2 proj)
        |- DAL (database/storage fluent-nhibernate)
        |- SVCGateway (interaction layer + webservice related models)
        |- Services
        --------------
        |- Tests
        |- Specs

    I'm trying to make the application feel fast, and I also need to store the result set temporarily in case a customer goes to view a product and wants to go back to the results. (The service returns only 500 of a possible 14K results.) So basically I'm looking for confirmation that I'm doing the right thing in pushing the results into my own objects, or whether I'm breaking some rule, or even whether there's a better way of handling it. Thanks

    Read the article

  • What is a good FAT file system for ARM7-TDMI?

    - by Seidleroni
    I'm using the ARM7TDMI-S (an NXP processor) and I need a file system capable of reading from and writing to an SD card. There are so many available - what have people used and been happy with? One that requires the least amount of setup is best, so the less I have to do to get it started (i.e. writing device drivers for NXP's hardware), the better.
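
    For what it's worth, ChaN's FatFs is one commonly used FAT library on small ARM parts. Below is a minimal sketch of what application code looks like once the low-level SD driver has been wired up for the NXP chip; the file name and function are made up, and the exact f_mount/f_open flags vary between FatFs releases (this follows the newer API).

        #include "ff.h"                 // FatFs public API

        static FATFS fs;                // filesystem object; must outlive the mounted volume

        int write_log_entry(const char* text, UINT len)
        {
            if (f_mount(&fs, "", 1) != FR_OK)     // mount the default drive immediately
                return -1;

            FIL fil;
            if (f_open(&fil, "log.txt", FA_WRITE | FA_OPEN_APPEND) != FR_OK)
                return -2;

            UINT written = 0;
            FRESULT rc = f_write(&fil, text, len, &written);
            f_close(&fil);

            return (rc == FR_OK && written == len) ? 0 : -3;
        }

    Most of the porting effort goes into the handful of diskio functions (disk_initialize, disk_read, disk_write, disk_ioctl and friends); the rest of the library is portable C.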

    Read the article

  • Can anyone tell me what the authors mean on this line?

    - by Anirudh Goel
    I was going through this link: FAT16 Basics to Assemble Clusters. I have read the structures involved in defining a directory entry in FAT. In the example given for a FAT16 file, it says the data cluster is 0x03 for the example file MyFile.txt, which means that if we compute the data cluster we can reach the first node, which happens to be cluster number 3. But what I fail to understand is what the author is trying to say in the next line, where it asks "What we can see in the File Allocation Table at this moment?" How did we suddenly reach the File Allocation Table? Weren't we already there when we were going through the information for MyFile.txt? I couldn't find any reason why the author suddenly jumped to offset 00000200 and started identifying the emptiness of the clusters. It would be great if someone could help me understand. Thanks
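
    If it helps, the distinction the article seems to be drawing is between the directory entry (which only records the file's first cluster) and the File Allocation Table itself (which records how clusters chain together). A small C++ sketch, using hypothetical layout values rather than anything taken from the article:

        #include <cstdint>
        #include <cstdio>

        // Hypothetical layout values; real code would read these from the boot sector.
        constexpr uint32_t kBytesPerSector  = 512;   // BPB_BytsPerSec
        constexpr uint32_t kReservedSectors = 1;     // BPB_RsvdSecCnt

        // Byte offset, from the start of the volume, of the FAT16 entry for a cluster.
        // FAT16 entries are 2 bytes wide, so entry N sits N*2 bytes into the first FAT.
        uint32_t fat16_entry_offset(uint32_t cluster)
        {
            uint32_t fat_start = kReservedSectors * kBytesPerSector;  // FAT begins right after the reserved area
            return fat_start + cluster * 2;
        }

        int main()
        {
            // The directory entry for MyFile.txt only says "first cluster = 0x03".
            // To find the rest of the chain you look up entry 0x03 in the FAT:
            std::printf("FAT entry for cluster 0x03 lives at byte offset 0x%08X\n",
                        fat16_entry_offset(0x03));
            // With the values above the FAT starts at byte 0x200 (which would explain
            // the article jumping to offset 00000200), and entry 3 sits at 0x206.
            // A value of 0xFFF8-0xFFFF there marks the last cluster of the file.
            return 0;
        }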

    Read the article
