Search Results

Search found 4423 results on 177 pages for 'compiler'.


  • Commenting out portions of code in Scala

    - by akauppi
    I am looking for a C(++) #if 0-like way of being able to comment out whole pieces of Scala source code, for keeping around experimental or expired code for a while. I tried out a couple of alternatives and would like to hear what you use, and whether you have come up with something better. // Simply block-marking N lines by '//' is one way... // <tags> """ anything My editor makes this easy, but it's not really The Thing. It gets easily mixed with actual one-line comments. Then I figured there's native XML support, so: <!-- ... did not work --> Wrapping in XML works, unless you have <tags> within the block: class none { val a= <ignore> ... cannot have //<tags> <here> (not even in end-of-line comments!) </ignore> } The same with multi-line strings seems kind of best, but there's an awful lot of boilerplate (not fashionable in Scala) to please the compiler (less if you're doing this within a class or an object): object none { val ignore= """ This seems like ... <truly> <anything goes> but three "'s of course """ } The 'right' way to do this might be: /*** /* ... works but not properly syntax highlighted in SubEthaEdit (or StackOverflow) */ ***/ ...but that matches only the /* and */, not /*** to ***/. This means the comments within the block need to be balanced. And the current Scala syntax highlighting mode for SubEthaEdit fails miserably on this. As a comparison, Lua has --[==[ matching ]==] and so forth. I think I'm spoilt? So - is there some useful trick I'm overlooking?


  • C# custom control to get internal text as string

    - by Ed Woodcock
    ok, I'm working on a custom control that can contain some javascript, and read this out of the page into a string field. This is a workaround for dynamic javascript inside an updatepanel. At the moment, I've got it working, but if I try to put a server tag inside the block: <custom:control ID="Custom" runat="server"> <%= ControlName.ClientID %> </custom:control> The compiler does not like it. I know these are generated at runtime, and so might not be compatible with what I'm doing, but does anyone have any idea how I can get that working? EDIT Error message is: Code blocks are not supported in this context EDIT 2 The control: [DataBindingHandler("System.Web.UI.Design.TextDataBindingHandler, System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"), ControlValueProperty("Text"), DefaultProperty("Text"), ParseChildren(true, "Text"), AspNetHostingPermission(SecurityAction.LinkDemand, Level = AspNetHostingPermissionLevel.Minimal), AspNetHostingPermission(SecurityAction.InheritanceDemand, Level = AspNetHostingPermissionLevel.Minimal)] public class CustomControl : Control, ITextControl { [DefaultValue(""), Bindable(true), Localizable(true)] public string Text { get { return (string)(ViewState["Text"] ?? string.Empty); } set { ViewState["Text"] = value; } } }


  • Is a call to the following method considered late binding?

    - by AspOnMyNet
    1) Assume: • B1 defines methods virtualM() and nonvirtualM(), where former method is virtual while the latter is non-virtual • B2 derives from B1 • B2 overrides virtualM() • B2 is defined inside assembly A • Application app doesn’t have a reference to assembly A In the following code application app dynamically loads an assembly A, creates an instance of a type B2 and calls methods virtualM() and nonvirtualM(): Assembly a=Assembly.Load(“A”); Type t= a.GetType(“B2”); B1 a = ( B1 ) Activator.CreateInstance ( “t” ); a.virtualM(); a.nonvirtualM(); a) Is call to a.virtualM() considered early binding or late binding? b) I assume a call to a.nonvirtualM() is resolved during compilation time? 2) Does the term late binding refer only to looking up the target method at run time or does it also refer to creating an instance of given type at runtime? thanx EDIT: 1) A a=new A(); a.M(); As far as I know, it is not known at compile time where on the heap (thus at which memory address ) will instance a be created during runtime. Now, with early binding the function calls are replaced with memory addresses during compilation process. But how can compiler replace function call with memory address, if it doesn’t know where on the heap will object a be created during runtime ( here I’m assuming the address of method a.M will also be at same memory location as a )? 2) The method slot is determined at compile time I assume that by method slot you’re referring to the entry point in V-table?
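
    For comparison, the same distinction can be seen in C++ (a minimal sketch, not the question's .NET code): on typical implementations the virtual call compiles to an indirect call through the object's vtable pointer, so only the slot index is fixed at compile time and the object's heap address never needs to be known; the non-virtual call is bound directly to B1's method.

        #include <iostream>

        struct B1 {
            virtual ~B1() {}
            virtual void virtualM()  { std::cout << "B1::virtualM\n"; }
            void nonvirtualM()       { std::cout << "B1::nonvirtualM\n"; }
        };

        struct B2 : B1 {
            void virtualM() override { std::cout << "B2::virtualM\n"; }
        };

        int main() {
            B1* a = new B2;     // static type B1, dynamic type B2
            a->virtualM();      // late bound: dispatched through the vtable -> B2::virtualM
            a->nonvirtualM();   // early bound: resolved at compile time -> B1::nonvirtualM
            delete a;
        }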


  • Macro to improve callback registration readability

    - by Warren Seine
    I'm trying to write a macro to make a specific usage of callbacks in C++ easier. All my callbacks are member functions and will take this as first argument and a second one whose type inherits from a common base class. The usual way to go is: register_callback(boost::bind(&my_class::member_function, this, _1)); I'd love to write: register_callback(HANDLER(member_function)); Note that it will always be used within the same class. Even if typeof is considered as a bad practice, it sounds like a pretty solution to the lack of __class__ macro to get the current class name. The following code works: typedef typeof(*this) CLASS; boost::bind(& CLASS :: member_function, this, _1)(my_argument); but I can't use this code in a macro which will be given as argument to register_callback. I've tried: #define HANDLER(FUN) \ boost::bind(& typeof(*this) :: member_function, this, _1); which doesn't work for reasons I don't understand. Quoting GCC documentation: A typeof-construct can be used anywhere a typedef name could be used. My compiler is GCC 4.4, and even if I'd prefer something standard, GCC-specific solutions are accepted.
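
    For what it's worth, here is a minimal standalone sketch of the same idea using standard C++11 decltype instead of the GCC-specific typeof, with std::bind/std::placeholders standing in for boost::bind/_1; the class, callback and registration names are made up for illustration:

        #include <functional>
        #include <iostream>
        #include <type_traits>

        // Expands to a callback bound to a member of *this. The remove_reference
        // step is needed because decltype(*this) is a reference type (e.g. Widget&),
        // which cannot appear to the left of ::.
        #define HANDLER(FUN) \
            std::bind(&std::remove_reference<decltype(*this)>::type::FUN, \
                      this, std::placeholders::_1)

        struct Event { int id; };

        class Widget {
        public:
            void install() { register_callback(HANDLER(on_event)); }  // no class name spelled out
            void fire(const Event& e) { if (cb_) cb_(e); }
        private:
            void on_event(const Event& e) { std::cout << "event " << e.id << "\n"; }
            // Stand-in for the registration function from the question.
            void register_callback(std::function<void(const Event&)> cb) { cb_ = cb; }
            std::function<void(const Event&)> cb_;
        };

        int main() {
            Widget w;
            w.install();
            w.fire(Event{42});   // prints "event 42"
        }

    Whether GCC 4.4's typeof accepts the same construction directly is a separate question; with decltype it is the extra remove_reference step that makes the expansion compile.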


  • How do I mock a method with an open array parameter in PascalMock?

    - by Oliver Giesen
    I'm currently in the process of getting started with unit testing and mocking for good and I stumbled over the following method that I can't seem to fabricate a working mock implementation for: function GetInstance(const AIID: TGUID; out AInstance; const AArgs: array of const; const AContextID: TImplContextID = CID_DEFAULT): Boolean; (TImplContextID is just an alias for Integer) I thought it would have to look something like this: function TImplementationProviderMock.GetInstance( const AIID: TGUID; out AInstance; const AArgs: array of const; const AContextID: TImplContextID): Boolean; begin Result := AddCall('GetInstance') .WithParams([@AIID, AContextID]) .ReturnsOutParams([AInstance]) .ReturnValue; end; But the compiler complains about the .ReturnsOutParams([AInstance]) saying "Bad argument type in variable type array constructor.". Also I haven't found a way to specify the open array parameter AArgs at all. Also, is using the @-notation for the TGUID-typed parameter the right way to go? Is it possible to mock this method with the current version of PascalMock at all? Update: I now realize I got the purpose of ReturnsOutParams completely wrong: It's intended to be used for populating the values to be returned when defining the expectations rather than for mocking the call itself. I now think the correct syntax for mocking the out parameter would probably have to look more like this: function TImplementationProviderMock.GetInstance( const AIID: TGUID; out AInstance; const AArgs: array of const; const AContextID: TImplContextID): Boolean; var lCall: TMockMethod; begin lCall := AddCall('GetInstance').WithParams([@AIID, AContextID]); Pointer(AInstance) := lCall.OutParams[0]; Result := lCall.ReturnValue; end; The questions that remain are how to mock the open array parameter AArgs and whether passing the TGUID argument (i.e. a value type) by address will work out...


  • Dynamic stack allocation in C++

    - by Poni
    I want to allocate memory on the stack. I've heard of _alloca / alloca and I understand that these are compiler-specific, which I don't like. So, I came up with my own solution (which might have its own flaws) and I want you to review/improve it so that once and for all we'll have this code working: /*#define allocate_on_stack(pointer, size) \ __asm \ { \ mov [pointer], esp; \ sub esp, [size]; \ }*/ /*#define deallocate_from_stack(size) \ __asm \ { \ add esp, [size]; \ }*/ void test() { int buff_size = 4 * 2; char *buff = 0; __asm { // allocate mov [buff], esp; sub esp, [buff_size]; } // playing with the stack-allocated memory for(int i = 0; i < buff_size; i++) buff[i] = 0x11; __asm { // deallocate add esp, [buff_size]; } } void main() { __asm int 3h; test(); } Compiled with VC9. What flaws do you see in it? I, for example, am not sure that subtracting from ESP is the solution for "any kind of CPU". Also, I'd like to make the commented-out macros work, but for some reason I can't.
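
    For reference, a portable sketch of the usual alternative to alloca-style tricks: a small-buffer helper that keeps the memory in an inline array on the stack when the request fits and quietly falls back to the heap otherwise. This only illustrates the idea, it is not a fix for the asm version above.

        #include <cstddef>
        #include <memory>

        // Small-buffer helper: data() points into the inline stack array when the
        // requested size fits, otherwise into a heap allocation freed on destruction.
        template <std::size_t StackBytes = 256>
        class scratch_buffer {
        public:
            explicit scratch_buffer(std::size_t n)
                : heap_(n > StackBytes ? new char[n] : 0),
                  ptr_(heap_ ? heap_.get() : stack_) {}
            char* data() { return ptr_; }
        private:
            std::unique_ptr<char[]> heap_;
            char  stack_[StackBytes];
            char* ptr_;
        };

        int main() {
            scratch_buffer<> buf(4 * 2);      // fits: no heap allocation happens
            for (int i = 0; i < 8; ++i)
                buf.data()[i] = 0x11;         // playing with the scratch memory
        }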


  • Boost ASIO Headache

    - by bobber205
    Man... thought using ASIO in Boost was going to be easy and intuitive. :P I am starting to get it finally but I am having some trouble. Here's a snippet. I am having several compiler errors on the async_accept line. What am I doing wrong? :P I've based my code off of this page: http://www.boost.org/doc/libs/1_43_0/doc/html/boost_asio/tutorial/tutdaytime3/src.html bool TestSocket::StartListening(int port) { bool didStart = false; if (!this->listening) { //try to listen acceptor = new tcp::acceptor(this->myService, tcp::endpoint(tcp::v4(), port)); didStart = true; //probably change? tcp::socket* tempNewSocket = new tcp::socket(this->myService); acceptor->async_accept(tempNewSocket, boost::bind(&AlexSocket::NewConnection, this, tempNewSocket, boost::asio::placeholders::error) ); } else //already started! return false; this->listening = didStart; return didStart; } void TestSocket::NewConnection(tcp::socket* s, const boost::system::error_code& error) { }
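
    A guess at the compiler errors, for what it's worth: async_accept takes the peer socket by reference rather than by pointer, and the bind expression names AlexSocket where the class is TestSocket. A minimal self-contained sketch of the corrected pattern (io_service-era Boost.Asio assumed; the class and member names here are made up):

        #include <boost/asio.hpp>
        #include <boost/bind.hpp>
        #include <iostream>

        using boost::asio::ip::tcp;

        class Listener {
        public:
            Listener(boost::asio::io_service& svc, int port)
                : svc_(svc), acceptor_(svc, tcp::endpoint(tcp::v4(), port)) {}

            void start() {
                tcp::socket* sock = new tcp::socket(svc_);
                // async_accept wants the peer socket by reference (not a pointer),
                // and the bound member function must belong to this class.
                acceptor_.async_accept(*sock,
                    boost::bind(&Listener::on_accept, this, sock,
                                boost::asio::placeholders::error));
            }

        private:
            void on_accept(tcp::socket* sock, const boost::system::error_code& ec) {
                if (!ec) std::cout << "connection accepted\n";
                delete sock;   // toy example: close and free the socket straight away
            }

            boost::asio::io_service& svc_;
            tcp::acceptor acceptor_;
        };

        int main() {
            boost::asio::io_service svc;
            Listener l(svc, 5555);
            l.start();
            svc.run();   // blocks until the pending accept completes
        }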


  • Jaxb Simplify Plugin

    - by wrm
    I'm trying to use the simplify plugin to simplify the generated code. I have a defined type: <xsd:complexType name="typeWithReferencesProperty"> <xsd:choice maxOccurs="unbounded"> <xsd:annotation> <xsd:appinfo> <simplify:as-element-property/> </xsd:appinfo> </xsd:annotation> <xsd:element name="a" type="AttributeValueIntegerType"/> <xsd:element name="b" type="AttributeValueIntegerType"/> </xsd:choice> </xsd:complexType> but it does not work, as it results in the following error: compiler was unable to honor this as-element-property customization. It is attached to a wrong place, or its inconsistent with other bindings. I used exactly that configuration, and I also have other JAXB plugins which work, so I am not quite sure whether the plugin is broken or something. Has anybody managed to get this running?


  • Visibility of reintroduced constructor

    - by avenmore
    I have reintroduced the form constructor in a base form, but if I override the original constructor in a descendant form, the reintroduced constructor is no longer visible. type TfrmA = class(TForm) private FWndParent: HWnd; public constructor Create(AOwner: TComponent; const AWndParent: Hwnd); reintroduce; overload; virtual; end; constructor TfrmA.Create(AOwner: TComponent; const AWndParent: Hwnd); begin FWndParent := AWndParent; inherited Create(AOwner); end; type TfrmB = class(TfrmA) private public end; type TfrmC = class(TfrmB) private public constructor Create(AOwner: TComponent); override; end; constructor TfrmC.Create(AOwner: TComponent); begin inherited Create(AOwner); end; When creating: frmA := TfrmA.Create(nil, 0); frmB := TfrmB.Create(nil, 0); frmC := TfrmC.Create(nil, 0); // Compiler error My work-around is to override the reintroduced constructor or to declare the original constructor overloaded, but I'd like to understand the reason for this behavior. type TfrmA = class(TForm) private FWndParent: HWnd; public constructor Create(AOwner: TComponent); overload; override; constructor Create(AOwner: TComponent; const AWndParent: Hwnd); reintroduce; overload; virtual; end; type TfrmC = class(TfrmB) private public constructor Create(AOwner: TComponent; const AWndParent: Hwnd); override; end;


  • Why is Delphi unable to infer the type for a parameter TEnumerable<T>?

    - by deepc
    Consider the following declaration of a generic utility class in Delphi 2010: TEnumerableUtils = class public class function InferenceTest<T>(Param: T): T; class function Count<T>(Enumerable: TEnumerable<T>): Integer; overload; class function Count<T>(Enumerable: TEnumerable<T>; Filter: TPredicate<T>): Integer; overload; end; Somehow the compiler type inference seems to have problems here: var I: Integer; L: TList<Integer>; begin TEnumerableUtils.InferenceTest(I); // no problem here TEnumerableUtils.Count(L); // does not compile: E2250 There is no overloaded version of 'Count' that can be called with these arguments TEnumerableUtils.Count<Integer>(L); // compiles fine end; The first call works as expected and T is correctly inferred as Integer. The second call does not work unless I also add <Integer> -- then it works, as can be seen in the third call. Am I doing something wrong, or does the type inference in Delphi just not support this? (I don't think it is a problem in Java, which is why I expected it to work in Delphi, too.)


  • Script to install and compile Python, Django, Virtualenv, Mercurial, Git, LessCSS, etc... on Dreamho

    - by tmslnz
    The Story After cleaning up my Dreamhost shared server's home folder from all the cruft accumulated over time, I decided to start afresh and compile/reinstall Python. All tutorials and snippets I found seemed overly simplistic, assuming (or ignoring) a bunch of dependencies needed by Python to compile all modules correctly. So, starting from http://andrew.io/weblog/2010/02/installing-python-2-6-virtualenv-and-virtualenvwrapper-on-dreamhost/ (so far the best guide I found), I decided to write a set-and-forget Bash script to automate this painful process, including along the way a bunch of other things I am planning to use. The Script I am hosting the script on http://bitbucket.org/tmslnz/python-dreamhost-batch/src/ The TODOs So far it runs fine, and does all it needs to do in about 900 seconds, giving me at the end of the process a fully functional Python / Mercurial / etc... setup without even needing to log out and back in. I though this might be of use for others too, but there are a few things that I think it's missing and I am not quite sure how to go for it, what's the best way to do it, or if this just doesn't make any sense at all. Check for errors and break Check for minor version bumps of the packages and give warnings Check for known dependencies Use arguments to install only some of the packages instead of commenting out lines Organise the code in a manner that's easy to update Optionally make the installers and compiling silent, with error logging to file failproof .bashrc modification to prevent breaking ssh logins and having to log back via FTP to fix it EDIT: The implied question is: can anyone, more bashful than me, offer general advice on the worthiness of the above points or highlight any problems they see with this approach? (see my answer to Ry4an's comment below) The Gist I am no UNIX or Bash or compiler expert, and this has been built iteratively, by trial and error. It is somehow going towards apt-get (well, 1% of it...), but since Dreamhost and others obviously cannot give root access on shared servers, this looks to me like a potentially very useful workaround; particularly so with some community work involved.


  • Compilation errors calling find_if using a functor

    - by Jim Wong
    We are having a bit of trouble using find_if to search a vector of pairs for an entry in which the first element of the pair matches a particular value. To make this work, we have defined a trivial functor whose operator() takes a pair as input and compares the first entry against a string. Unfortunately, when we actually add a call to find_if using an instance of our functor constructed using a temporary string value, the compiler produces a raft of error messages. Oddly (to me, anyway), if we replace the temporary with a string that we've created on the stack, things seem to work. Here's what the code (including both versions) looks like: typedef std::pair<std::string, std::string> MyPair; typedef std::vector<MyPair> MyVector; struct MyFunctor: std::unary_function <const MyPair&, bool> { explicit MyFunctor(const std::string& val) : m_val(val) {} bool operator() (const MyPair& p) { return p.first == m_val; } const std::string m_val; }; bool f(const char* s) { MyFunctor f(std::string(s)); // ERROR // std::string str(s); // MyFunctor f(str); // OK MyVector vec; MyVector::const_iterator i = std::find_if(vec.begin(), vec.end(), f); return i != vec.end(); } And here's what the most interesting error message looks like: /usr/include/c++/4.2.1/bits/stl_algo.h:260: error: conversion from ‘std::pair, std::allocator , std::basic_string, std::allocator ’ to non-scalar type ‘std::string’ requested Because we have a workaround, we're mostly curious as to why the first form causes problems. I'm sure we're missing something, but we haven't been able to figure out what it is.
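
    This looks like the classic "most vexing parse": MyFunctor f(std::string(s)); declares a function f taking a std::string parameter named s, so find_if is handed a function that wants a std::string, which is exactly the pair-to-string conversion the error complains about. A simplified, self-contained sketch of the two usual workarounds (extra parentheses, or passing a temporary directly):

        #include <algorithm>
        #include <string>
        #include <utility>
        #include <vector>

        typedef std::pair<std::string, std::string> MyPair;
        typedef std::vector<MyPair> MyVector;

        struct MyFunctor {
            explicit MyFunctor(const std::string& val) : m_val(val) {}
            bool operator()(const MyPair& p) const { return p.first == m_val; }
            std::string m_val;
        };

        bool contains_key(MyVector& vec, const char* s) {
            // MyFunctor f(std::string(s));        // vexing parse: declares a function!
            MyFunctor f((std::string(s)));         // extra parentheses force an object...
            MyVector::const_iterator i =
                std::find_if(vec.begin(), vec.end(), f);
            // ...or skip the named variable entirely:
            i = std::find_if(vec.begin(), vec.end(), MyFunctor(std::string(s)));
            return i != vec.end();
        }

        int main() {
            MyVector vec;
            vec.push_back(MyPair("key", "value"));
            return contains_key(vec, "key") ? 0 : 1;
        }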


  • Does cout need to be terminated with a semicolon ?

    - by Philippe Harewood
    I am reading Bjarne Stroustrup's Programming: Principles and Practice Using C++. In the drill section for Chapter 2, it talks about various ways to look at typing errors when compiling the hello_world program: #include "std_lib_facilities.h" int main() //C++ programs start by executing the function main { cout << "Hello, World!\n", // output "Hello, World!" keep_window_open(); // wait for a character to be entered return 0; } In particular this section asks: Think of at least five more errors you might have made typing in your program (e.g. forget keep_window_open(), leave the Caps Lock key on while typing a word, or type a comma instead of a semicolon) and try each to see what happens when you try to compile and run those versions. For the cout line, you can see that there is a comma instead of a semicolon. This compiles and runs (for me). Is it making an assumption (like in the JavaScript question: Why use semicolon?) that the statement has been terminated? Because when I try it with keep_terminal_open(); the compiler complains about the missing semicolon.
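
    A plausible explanation rather than an "assumption": with the comma, the two calls become a single expression using the comma operator, which is still a perfectly valid statement, so nothing is left unterminated. A tiny sketch of the comma operator on its own:

        #include <iostream>

        int main() {
            int a = 0;
            // The comma operator evaluates the left operand, discards its value,
            // then evaluates the right operand, which becomes the result.
            a = (std::cout << "Hello, World!\n", 42);
            std::cout << a << "\n";   // prints 42
        }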


  • Using JRE 1.5, still maven says annotation not supported in -source 1.3

    - by Abhijeet
    Hi, I am using JRE 1.5. Still when I try to compile my code it fails by saying to use JRE 1.5 instead of 1.3 C:\temp\SpringExamplemvn -e clean install + Error stacktraces are turned on. [INFO] Scanning for projects... [INFO] ------------------------------------------------------------------------ [INFO] Building SpringExample [INFO] task-segment: [clean, install] [INFO] ------------------------------------------------------------------------ [INFO] [clean:clean {execution: default-clean}] [INFO] Deleting directory C:\temp\SpringExample\target [INFO] [resources:resources {execution: default-resources}] [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] Copying 6 resources [INFO] [compiler:compile {execution: default-compile}] [INFO] Compiling 6 source files to C:\temp\SpringExample\target\classes [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Compilation failure C:\temp\SpringExample\src\main\java\com\mkyong\stock\model\Stock.java:[45,9] annotations are not supported in -source 1.3 (try -source 1.5 to enable annotations) @Override [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.BuildFailureException: Compilation failure C:\temp\SpringExample\src\main\java\com\mkyong\stock\model\Stock.java:[45,9] annotations are not supported in -source 1.3 (try -source 1.5 to enable annotations) @Override at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:715) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:556) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:535) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.CompilationFailureException: Compilation failure C:\temp\SpringExample\src\main\java\com\mkyong\stock\model\Stock.java:[45,9] annotations are not supported in -source 1.3 (try -source 1.5 to enable annotations) @Override at org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:516) at org.apache.maven.plugin.CompilerMojo.execute(CompilerMojo.java:114) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) 
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) ... 17 more [INFO] ------------------------------------------------------------------------ [INFO] Total time: 2 seconds [INFO] Finished at: Wed Dec 22 10:04:53 IST 2010 [INFO] Final Memory: 9M/16M [INFO] ------------------------------------------------------------------------ C:\temp\SpringExamplejavac -version javac 1.5.0_08 javac: no source files


  • Boost link error when using "--layout=system" on VS2005

    - by Kevin
    I'm new to boost, and thought I'd try it out with some realistic deployment scenarios for the .dlls, so I used the following command to compile/install the libraries: .\bjam install --layout=system variant=debug runtime-link=shared link=shared --with-date_time --with-thread --with-regex --with-filesystem --includedir=<my include directory> --libdir=<my bin directory> > installlog.txt That seemed to work, but my simple program (taken right from the "Getting Started" page) fails: #include <boost/regex.hpp> #include <iostream> #include <string> // Place your functions after this line int main() { std::string line; boost::regex pat( "^Subject: (Re: |Aw: )*(.*)" ); while (std::cin) { std::getline(std::cin, line); boost::smatch matches; if (boost::regex_match(line, matches, pat)) std::cout << matches[2] << std::endl; } } This fails with the following linker error: fatal error LNK1104: cannot open file 'libboost_regex-vc80-mt-1_42.lib' I'm sure that both the .lib and the .dlls are in that directory, and named how I want them to be (ie: boost_regex.lib, etc, all unversioned, as the --layout=system says). So why is it looking for the versioned type of it? And how do I get it to look for the unversioned type of the library? I've tried this with more "normal" options, such as below: .\bjam stage --build-type=complete --with-date_time --with-thread --with-filesystem --with-regex > mybuildlog.txt And that works fine. I made sure my compiler saw the "stage\lib" directory, and it compiled and ran fine with nothing beyond having the environment looking into the right lib directory. But when I took those "testing" directories away, and wanted to use these others (unversioned), then it failed. I'm under VS2005 here on XP. Any ideas?
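
    A hedged guess at the cause: on MSVC the Boost headers "auto-link" by embedding a #pragma comment(lib, ...) with the fully decorated library name, and that embedded name knows nothing about --layout=system. One way around it is to switch auto-linking off and name the library yourself; a sketch (the define is the documented BOOST_ALL_NO_LIB switch, the rest mirrors the program from the question):

        #define BOOST_ALL_NO_LIB     // disable MSVC auto-linking (BOOST_REGEX_NO_LIB exists for just regex)
        #include <boost/regex.hpp>
        #include <iostream>
        #include <string>

        int main() {
            std::string line;
            boost::regex pat("^Subject: (Re: |Aw: )*(.*)");
            while (std::getline(std::cin, line)) {
                boost::smatch matches;
                if (boost::regex_match(line, matches, pat))
                    std::cout << matches[2] << std::endl;
            }
        }
        // With auto-linking off, boost_regex.lib then has to be added by hand under
        // Linker -> Input -> Additional Dependencies in the VS2005 project settings.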


  • Writing a VM - well formed bytecode?

    - by David Titarenco
    Hi, I'm writing a virtual machine in C just for fun. Lame, I know, but luckily I'm on SO so hopefully no one will make fun :) I wrote a really quick'n'dirty VM that reads lines of (my own) ASM and does stuff. Right now, I only have 3 instructions: add, jmp, end. All is well and it's actually pretty cool being able to feed lines (doing it something like write_line(&prog[1], "jmp", regA, regB, 0); and then running the program: while (machine.code_pointer <= BOUNDS && DONE != true) { run_line(&prog[machine.cp]); } I'm using an opcode lookup table (which may not be efficient but it's elegant) in C and everything seems to be working OK. My question is more of a "best practices" question but I do think there's a correct answer to it. I'm making the VM able to read binary files (storing bytes in unsigned char[]) and execute bytecode. My question is: is it the VM's job to make sure the bytecode is well formed or is it just the compiler's job to make sure the binary file it spits out is well formed? I only ask this because what would happen if someone would edit a binary file and screw stuff up (delete arbitrary parts of it, etc). Clearly, the program would be buggy and probably not functional. Is this even the VM's problem? I'm sure that people much smarter than me have figured out solutions to these problems, I'm just curious what they are!
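
    In practice most VMs do both: the compiler emits well-formed code, and the VM still refuses to run anything it cannot prove is sane, since the file may have been corrupted or hand-edited. One common shape for such a check, as a purely illustrative C++ sketch (the opcodes and operand widths are made up to mirror the add/jmp/end set above):

        #include <cstdint>
        #include <cstdio>
        #include <vector>

        enum Op : std::uint8_t { OP_ADD = 0, OP_JMP = 1, OP_END = 2 };

        // Pre-execution verification: every opcode must be known, every instruction
        // complete, and every jump target must land on an instruction boundary.
        bool verify(const std::vector<std::uint8_t>& code) {
            std::vector<bool> boundary(code.size(), false);
            std::size_t pc = 0;
            while (pc < code.size()) {               // pass 1: decode lengths
                boundary[pc] = true;
                std::size_t len = 0;
                switch (code[pc]) {
                    case OP_ADD: len = 3; break;     // add dst, src
                    case OP_JMP: len = 2; break;     // jmp target
                    case OP_END: len = 1; break;
                    default: return false;           // unknown opcode
                }
                if (pc + len > code.size()) return false;   // truncated instruction
                pc += len;
            }
            pc = 0;
            while (pc < code.size()) {               // pass 2: check jump targets
                if (code[pc] == OP_JMP) {
                    std::uint8_t target = code[pc + 1];
                    if (target >= code.size() || !boundary[target]) return false;
                }
                pc += (code[pc] == OP_ADD) ? 3 : (code[pc] == OP_JMP) ? 2 : 1;
            }
            return true;
        }

        int main() {
            std::vector<std::uint8_t> ok  = { OP_ADD, 0, 1, OP_END };
            std::vector<std::uint8_t> bad = { OP_JMP, 200, OP_END };   // jump out of range
            std::printf("%d %d\n", verify(ok), verify(bad));           // prints 1 0
        }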


  • Keeping sync in multiplayer RTS game that uses floating point arithmetic

    - by Calmarius
    I'm writing a 2D space RTS game in C#. Single player works. Now I want to add some multiplayer functionality. I googled for it and it seems there is only one way to have thousands of units continuously moving without a powerful net connection: send only the commands through the network while running the same simulation on every player's machine. And now there is a problem: the entire engine uses doubles everywhere. Floating point calculations depend heavily on compiler optimizations and CPU architecture, so it is very hard to keep things synchronized. And it is not grid based at all, and it has a simple physics engine to move the space-ships (space ships have impulse and angular momentum...). So recoding the whole thing to use fixed point would be quite cumbersome (but probably the only solution). So I have 2 options so far: say goodbye to the current code and restart from scratch using integers, or make the game LAN only, where there is enough bandwidth to have 8 players with thousands of units, sending the positions and orientation etc. in (almost) every frame... So I'm looking for better ideas (or even tips on migrating the code to fixed point without messing everything up...)
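
    A toy sketch of what the fixed-point route looks like (written in C++ here purely to illustrate the representation; the idea carries over to C# unchanged): every quantity is an integer holding value * 65536, so all machines produce bit-identical results regardless of FPU behaviour or optimization settings.

        #include <cstdint>
        #include <cstdio>

        // 16.16 fixed point: value = raw / 65536. All arithmetic is integer arithmetic.
        struct Fixed {
            std::int32_t raw;
            static Fixed from_int(int v)           { return Fixed{v << 16}; }
            static Fixed from_raw(std::int32_t r)  { return Fixed{r}; }
            Fixed operator+(Fixed o) const { return Fixed{raw + o.raw}; }
            Fixed operator-(Fixed o) const { return Fixed{raw - o.raw}; }
            Fixed operator*(Fixed o) const {
                // widen to 64 bits so the intermediate product cannot overflow
                return Fixed{static_cast<std::int32_t>(
                    (static_cast<std::int64_t>(raw) * o.raw) >> 16)};
            }
            double to_double() const { return raw / 65536.0; }
        };

        int main() {
            Fixed v  = Fixed::from_raw(3 << 14);   // velocity 0.75
            Fixed dt = Fixed::from_raw(1 << 12);   // timestep 1/16
            Fixed x  = Fixed::from_int(10);
            x = x + v * dt;                        // deterministic position update
            std::printf("%f\n", x.to_double());    // prints 10.046875
        }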


  • not able to run c/cpp execs in eclipse cdt

    - by user1658323
    I installed Eclipse and then CDT on an Ubuntu system recently and was trying to make my first runnable C/C++ project. I installed g++ as well, and then created my first executable C++ 'Hello World' project; some files are created... then some issues... 1) Even though Build Automatically is selected, I have to go to the project and do a Build Project to build it manually, and this I have to do every time I make a change. 2) After building manually, some new folders are created with Binaries and Debug files, and I can see g++ commands being executed in the console. The project binary is output to both the Debug and Binaries folders. But I am not able to run these through the green Play button or any other way in Eclipse. Even Run Configurations does not show any option for a C/C++ project, though I can go to a terminal and run the binary myself through ./ But I want to be able to run and debug this through Eclipse. Please help me fix this problem, as I really love Eclipse and have some C/C++ assignments coming soon. Console info on doing a manual project build - Build of configuration Debug for project qwe ** make all Building file: ../src/qwe.cpp Invoking: GCC C++ Compiler g++ -O0 -g3 -Wall -c -fmessage-length=0 -MMD -MP -MF"src/qwe.d" -MT"src/qwe.d" -o "src/qwe.o" "../src/qwe.cpp" Finished building: ../src/qwe.cpp Building target: qwe Invoking: GCC C++ Linker g++ -o "qwe" ./src/qwe.o Finished building target: qwe Build Finished **


  • Safe and polymorphic toEnum

    - by jetxee
    I'd like to write a safe version of toEnum: safeToEnum :: (Enum t, Bounded t) => Int -> Maybe t A naive implementation: safeToEnum :: (Enum t, Bounded t) => Int -> Maybe t safeToEnum i = if (i >= fromEnum (minBound :: t)) && (i <= fromEnum (maxBound :: t)) then Just . toEnum $ i else Nothing main = do print $ (safeToEnum 1 :: Maybe Bool) print $ (safeToEnum 2 :: Maybe Bool) And it doesn't work: safeToEnum.hs:3:21: Could not deduce (Bounded t1) from the context () arising from a use of `minBound' at safeToEnum.hs:3:21-28 Possible fix: add (Bounded t1) to the context of an expression type signature In the first argument of `fromEnum', namely `(minBound :: t)' In the second argument of `(>=)', namely `fromEnum (minBound :: t)' In the first argument of `(&&)', namely `(i >= fromEnum (minBound :: t))' safeToEnum.hs:3:56: Could not deduce (Bounded t1) from the context () arising from a use of `maxBound' at safeToEnum.hs:3:56-63 Possible fix: add (Bounded t1) to the context of an expression type signature In the first argument of `fromEnum', namely `(maxBound :: t)' In the second argument of `(<=)', namely `fromEnum (maxBound :: t)' In the second argument of `(&&)', namely `(i <= fromEnum (maxBound :: t))' As well as I understand the message, the compiler does not recognize that minBound and maxBound should produce exactly the same type as in the result type of safeToEnum inspite of the explicit type declaration (:: t). Any idea how to fix it?


  • User defined literal arguments are not constexpr?

    - by Pubby
    I'm testing out user defined literals. I want to make _fac return the factorial of the number. Having it call a constexpr function works, however it doesn't let me do it with templates as the compiler complains that the arguments are not and cannot be constexpr. I'm confused by this - aren't literals constant expressions? The 5 in 5_fac is always a literal that can be evaluated during compile time, so why can't I use it as such? First method: constexpr int factorial_function(int x) { return (x > 0) ? x * factorial_function(x - 1) : 1; } constexpr int operator "" _fac(unsigned long long x) { return factorial_function(x); // this works } Second method: template <int N> struct factorial { static const unsigned int value = N * factorial<N - 1>::value; }; template <> struct factorial<0> { static const unsigned int value = 1; }; constexpr int operator "" _fac(unsigned long long x) { return factorial_template<x>::value; // doesn't work - x is not a constexpr }
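
    The usual explanation is that a function parameter is never a constant expression inside the function body, even in a constexpr function, because the same function must also be callable with runtime values. The compile-time route for literals is the literal operator template, where the digits arrive as template arguments; a sketch (names deliberately differ from the question's):

        #include <iostream>

        constexpr int fac(int x) { return x > 0 ? x * fac(x - 1) : 1; }

        // Turn the character pack back into a number at compile time.
        template <char...> struct parser;
        template <> struct parser<> {
            static constexpr int apply(int acc) { return acc; }
        };
        template <char C, char... Rest> struct parser<C, Rest...> {
            static constexpr int apply(int acc) {
                return parser<Rest...>::apply(acc * 10 + (C - '0'));
            }
        };

        // Literal operator *template*: the digits are template arguments, so they
        // are genuine compile-time constants inside the operator.
        template <char... Cs>
        constexpr int operator"" _fac() {
            return fac(parser<Cs...>::apply(0));
        }

        int main() {
            static_assert(5_fac == 120, "evaluated at compile time");
            std::cout << 5_fac << "\n";   // prints 120
        }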


  • Strange inheritance behaviour in Objective-C

    - by Smikey
    Hi all, I've created a class called SelectableObject like so: #define kNumberKey @"Object" #define kNameKey @"Name" #define kThumbStringKey @"Thumb" #define kMainStringKey @"Main" #import <Foundation/Foundation.h> @interface SelectableObject : NSObject <NSCoding> { int number; NSString *name; NSString *thumbString; NSString *mainString; } @property (nonatomic, assign) int number; @property (nonatomic, retain) NSString *name; @property (nonatomic, retain) NSString *thumbString; @property (nonatomic, retain) NSString *mainString; @end So far so good. And the implementation section conforms to the NSCoding protocol as expected. HOWEVER, when I add a new class which inherits from this class, i.e. #import <Foundation/Foundation.h> #import "SelectableObject.h" @interface Pet : SelectableObject <NSCoding> { } @end I suddenly get the following compiler error in the Selectable object class! SelectableObject.h:16: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'interface' This makes no sense to me. Why is the interface declaration for the SelectableObject class suddenly broken? I also import it in a couple of other classes I've written... Any help would be very much appreciated. Thanks! Michael


  • Qt4Dotnet on Mac OS X

    - by Tony
    Hello everyone. I'm using the Qt4Dotnet project in order to port an application originally written in C# to Linux and Mac. The port to Linux hasn't taken much effort and works fine. But Mac (10.4 Tiger) is a bit more stubborn. The problem is: when I try to start my application it throws an exception. The exception states that com.trolltech.qt.QtJambi_LibraryInitializer is unable to find all the necessary libraries. The QtJambi library initializer uses the java.library.path VM environment variable. This variable includes the current working directory. I put all the necessary libraries in the working directory. When I try to run the application from the MonoDevelop IDE, the initializer is able to load one library, but the other libraries are 'missing': An exception was thrown by the type initializer for com.trolltech.qt.QtJambi_LibraryInitializer --- java.lang.RuntimeException: Loading library failed, progress so far: No 'qtjambi-deployment.xml' found in classpath, loading libraries via 'java.library.path' Loading library: 'libQtCore.4.dylib'... - using 'java.library.path' - ok, path was: /Users/chin/test/bin/Debug/libQtCore.4.dylib Loading library: 'libqtjambi.jnilib'... - using 'java.library.path' Both libQtCore.4.dylib and libqtjambi.jnilib are in the same directory. When I try to run it from the command prompt, the initializer is unable to load even libQtCore.4.dylib. I'm using Qt4Dotnet v4.5.0 (currently the latest) with QtJambi v4.5.2 libraries. This might be the source of the problem, but I'm neither able to compile Qt4Dotnet v4.5.2 myself nor to find QtJambi v4.5.0 libraries. The project's page states that some sort of patch should be applied to QtJambi's source code in order to be compatible with the Mono framework, but this patch hasn't been released yet. Without this patch the application crashes in a strange manner (other than the library seek fault). I must note that the original QtJambi loads all the necessary libraries perfectly, so it might be an issue with the IKVM compiler used to translate QtJambi into a .NET library. Any suggestions how I can overcome this problem?


  • Install h5py in Mac OS X 10.6.3

    - by zyq524
    I'm trying to install h5py in Mac OS X 10.6.3. First I installed HDF5 1.8, which used the following commands: ./configure \ --prefix=/Library/Frameworks/Python.framework/Versions/Current \ --enable-shared \ --enable-production \ --enable-threadsafe \ CPPFLAGS=-I/Library/Frameworks/Python.framework/Versions/Current/include \ LDFLAGS=-L/Library/Frameworks/Python.framework/Versions/Current/lib make make check sudo make install Then install h5py: /Library/Frameworks/Python.framework/Versions/Current/bin/python \ setup.py \ build \ --api=18 \ --hdf5=/Library/Frameworks/Python.framework/Versions/Current Then I got the errors: Configure: Autodetecting HDF5 settings... Custom HDF5 dir: /Library/Frameworks/Python.framework/Versions/Current Custom API level: (1, 8) ld: warning: in detect/vers.o, file was built for unsupported file format which is not the architecture being linked (i386) ld: warning: in /Library/Frameworks/Python.framework/Versions/Current/lib/libhdf5.dylib, file was built for unsupported file format which is not the architecture being linked (i386) Undefined symbols: "_main", referenced from: start in crt1.10.5.o ld: symbol(s) not found collect2: ld returned 1 exit status Failed to compile HDF5 test program. Please check to make sure: * You have a C compiler installed * A development version of Python is installed (including header files) * A development version of HDF5 is installed (including header files) * If HDF5 is not in a default location, supply the argument --hdf5=<path> error: command 'cc' failed with exit status 1 I just updated my Xcode, I don't know whether this is because my gcc's default setting. If so, how can I get rid of this error? Thanks.


  • help understanding differences between #define, const and enum in C and C++ on assembly level.

    - by martin
    Recently I have been looking into the assembly code generated for #define, const and enum: C code (#define): 3 #define pi 3 4 int main(void) 5 { 6 int a,r=1; 7 a=2*pi*r; 8 return 0; 9 } assembly code (for lines 6 and 7 in the C code) generated by GCC: 6 mov $0x1, -0x4(%ebp) 7 mov -0x4(%ebp), %edx 7 mov %edx, %eax 7 add %eax, %eax 7 add %edx, %eax 7 add %eax, %eax 7 mov %eax, -0x8(%ebp) C code (enum): 2 int main(void) 3 { 4 int a,r=1; 5 enum{pi=3}; 6 a=2*pi*r; 7 return 0; 8 } assembly code (for lines 4 and 6 in the C code) generated by GCC: 6 mov $0x1, -0x4(%ebp) 7 mov -0x4(%ebp), %edx 7 mov %edx, %eax 7 add %eax, %eax 7 add %edx, %eax 7 add %eax, %eax 7 mov %eax, -0x8(%ebp) C code (const): 4 int main(void) 5 { 6 int a,r=1; 7 const int pi=3; 8 a=2*pi*r; 9 return 0; 10 } assembly code (for lines 7 and 8 in the C code) generated by GCC: 6 movl $0x3, -0x8(%ebp) 7 movl $0x3, -0x4(%ebp) 8 mov -0x4(%ebp), %eax 8 add %eax, %eax 8 imul -0x8(%ebp), %eax 8 mov %eax, 0xc(%ebp) I found that with #define and enum, the assembly code is the same. The compiler uses 3 add instructions to perform the multiplication. However, when const is used, an imul instruction is used. Does anyone know the reason behind that?


  • Why Does Private Access Remain Non-Private in .NET Within a Class?

    - by AMissico
    While cleaning some code today written by someone else, I changed the access modifier from Public to Private on a class variable/member/field. I expected a long list of compiler errors that I could use to "refactor/rework/review" the code that used this variable. Imagine my surprise when I didn't get any errors. After reviewing, it turns out that an instance of the class can access the private members of another instance declared within the class. Totally unexpected. Is this normal? I've been coding in .NET since the beginning and never ran into this issue, nor read about it. I may have stumbled onto it before, but only "vaguely noticed" and moved on. Can anyone explain this behavior to me? Am I doing something wrong? I found this behavior in both C# and VB.NET. The code seems to take advantage of the ability to access private variables. Sincerely, Totally Confused Class Foo Private _int As Integer Private _foo As Foo Private _jack As Jack Private _fred As Fred Public Sub SetPrivate() _foo = New Foo _foo._int = 3 'TOTALLY UNEXPECTED _jack = New Jack '_jack._int = 3 'expected compile error because Foo doesn't know Jack _fred = New Fred '_fred._int = 3 'expected compile error because Fred hides from Foo End Sub Private Class Fred Private _int As Integer End Class End Class Class Jack Private _int As Integer End Class
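
    For comparison, C++ follows the same rule (shown here simply because the behaviour is language-agnostic): access control is per-class, not per-object, so a member function may touch the private fields of another instance of its own class. Copy constructors and equality operators rely on exactly this. A minimal sketch:

        #include <iostream>

        class Foo {
        public:
            void set_other(Foo& other, int v) { other.value_ = v; }  // legal: same class
            int  value() const { return value_; }
        private:
            int value_ = 0;
        };

        int main() {
            Foo a, b;
            a.set_other(b, 3);                 // one instance writes another's private field
            std::cout << b.value() << "\n";    // prints 3
        }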

