Search Results

Search found 5086 results on 204 pages for 'compiler constants'.

Page 57 of 204

  • How to optimise AndEngine's PathModifier (with a singleton or pool)?

    - by Casla
    I am trying to build a game where the character finds and follows a new path when a new destination is issued by the player, kind of like how units in RTS games work. This is done on a TMX map and I am using the A* path finder utilities in AndEngine to do this. David helped me with that: How can I change the path a sprite is following in real time? At the moment, every time a new path is issued, I have to abandon the existing PathModifier and Path instances and create new ones, and from what I have read so far, creating new objects when you could re-use existing ones is a big waste for mobile applications. This is how I have coded it at the moment: private void loadPathFound() { if (mAStarPath != null) { modifierPath = new org.andengine.entity.modifier.PathModifier.Path(mAStarPath.getLength()); /* replace the first node in the path with the player's current position */ modifierPath.to(player.convertLocalToSceneCoordinates(12, 31)[Constants.VERTEX_INDEX_X]-12, player.convertLocalToSceneCoordinates(12, 31)[Constants.VERTEX_INDEX_Y]-31); for (int i=1; i<mAStarPath.getLength(); i++) { modifierPath.to(mAStarPath.getX(i)*TILE_WIDTH, mAStarPath.getY(i)*TILE_HEIGHT); } /* the duration passed in depends on the length of the path, so that the animation has a constant duration for every step */ player.registerEntityModifier(new PathModifier(modifierPath.getLength()/100, modifierPath, null, mIPathModifierListener)); } } The ideal implementation would be to always have just one PathModifier object and simply reset the destination of the path. But I don't know how to apply the singleton pattern to AndEngine's PathModifier; there is no method to reset the attributes of the Path or the PathModifier. So without rewriting the PathModifier and Path classes, or using reflection, is there any other way to implement a singleton PathModifier? Thanks for your help.
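    A minimal sketch of the pooling idea in plain Java (not AndEngine's own pool API; the wrapper you would recycle is hypothetical, since PathModifier and Path expose no reset methods of their own):

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Generic object pool: hand out recycled instances instead of allocating new ones.
        public class SimplePool<T> {
            public interface Factory<T> { T create(); void reset(T item); }

            private final Deque<T> free = new ArrayDeque<T>();
            private final Factory<T> factory;

            public SimplePool(Factory<T> factory) { this.factory = factory; }

            public T obtain() {
                return free.isEmpty() ? factory.create() : free.pop();
            }

            public void recycle(T item) {
                factory.reset(item);   // clear state before the object is reused
                free.push(item);
            }
        }

    Because PathModifier cannot be reset through its public API, in practice the pool would hold your own objects (for example, reusable waypoint arrays) and a fresh PathModifier would still be built per path; the allocations you can realistically avoid are the ones you control.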

    Read the article

  • Default Parameters vs Method Overloading

    - by João Angelo
    With default parameters introduced in C# 4.0 one might be tempted to abandon the old approach of providing method overloads to simulate default parameters. However, you must take into consideration that the two techniques are not interchangeable, since they show different behaviors in certain scenarios. For me the most relevant difference is that default parameters are a compile-time feature while method overloading is a runtime feature. To illustrate these concepts let’s take a look at a complete, although a bit long, example. What you need to retain from the example is that the static method Foo uses method overloading while the static method Bar uses C# 4.0 default parameters. static void CreateCallerAssembly(string name) { // Caller class - Invokes Example.Foo() and Example.Bar() string callerCode = String.Concat( "using System;", "public class Caller", "{", " public void Print()", " {", " Console.WriteLine(Example.Foo());", " Console.WriteLine(Example.Bar());", " }", "}"); var parameters = new CompilerParameters(new[] { "system.dll", "Common.dll" }, name); new CSharpCodeProvider().CompileAssemblyFromSource(parameters, callerCode); } static void Main() { // Example class - Foo uses overloading while Bar uses C# 4.0 default parameters string exampleCode = String.Concat( "using System;", "public class Example", "{{", " public static string Foo() {{ return Foo(\"{0}\"); }}", " public static string Foo(string key) {{ return \"FOO-\" + key; }}", " public static string Bar(string key = \"{0}\") {{ return \"BAR-\" + key; }}", "}}"); var compiler = new CSharpCodeProvider(); var parameters = new CompilerParameters(new[] { "system.dll" }, "Common.dll"); // Build Common.dll with default value of "V1" compiler.CompileAssemblyFromSource(parameters, String.Format(exampleCode, "V1")); // Caller1 built against Common.dll that uses a default of "V1" CreateCallerAssembly("Caller1.dll"); // Rebuild Common.dll with default value of "V2" compiler.CompileAssemblyFromSource(parameters, String.Format(exampleCode, "V2")); // Caller2 built against Common.dll that uses a default of "V2" CreateCallerAssembly("Caller2.dll"); dynamic caller1 = Assembly.LoadFrom("Caller1.dll").CreateInstance("Caller"); dynamic caller2 = Assembly.LoadFrom("Caller2.dll").CreateInstance("Caller"); Console.WriteLine("Caller1.dll:"); caller1.Print(); Console.WriteLine("Caller2.dll:"); caller2.Print(); } And if you run this code you will get the following output: // Caller1.dll: // FOO-V2 // BAR-V1 // Caller2.dll: // FOO-V2 // BAR-V2 You see that even though Caller1.dll runs against the current Common.dll assembly, where method Bar defines a default value of “V2”, the output shows us the default value defined at the time Caller1.dll was compiled against the first version of Common.dll. This happens because the compiler copies the current default value into each method call, much in the same way a constant value (const keyword) is copied into a calling assembly, and changes to its value will only be reflected if you rebuild the calling assembly again. The use of default parameters in public APIs is also discouraged by Microsoft, as stated in the code analysis rule CA1026: Default parameters should not be used.
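    A condensed sketch of the two patterns side by side (names are illustrative, mirroring the article's generated code rather than copied from it):

        public static class Example
        {
            // Overloads: the "default" lives inside Common.dll, so every caller
            // picks up whatever value ships with the currently deployed assembly.
            public static string Foo() { return Foo("V1"); }
            public static string Foo(string key) { return "FOO-" + key; }

            // Default parameter: the compiler bakes "V1" into each call site,
            // so an existing caller keeps the old value until it is recompiled.
            public static string Bar(string key = "V1") { return "BAR-" + key; }
        }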

    Read the article

  • WP E Commerce Safe Mode restriction error [on hold]

    - by Mustafa Kamal
    I have my online shop, created with WP e-Commerce, getting broken after I moved it to another server. I can be sure that the problem comes from WP e-Commerce, because when I disable that plugin everything runs as expected. This is the exact error message: Warning: session_start() [function.session-start]: SAFE MODE Restriction in effect. The script whose uid is 515 is not allowed to access /tmp owned by uid 0 in /home/mikalu/public_html/wp-content/plugins/wp-e-commerce/wpsc-core/wpsc-constants.php on line 17 Fatal error: session_start() [function.session-start]: Failed to initialize storage module: files (path: ) in /home/mikalu/public_html/wp-content/plugins/wp-e-commerce/wpsc-core/wpsc-constants.php on line 17 I've tried to turn off safe mode in my PHP configuration; nothing happens, the error's still there. I thought it was some kind of permission issue, so I tried to change the /tmp permissions to 777. Nothing happens. I googled some more and suspect it might have something to do with the FastCGI configuration and such, which I totally don't understand. My search results mostly suggest consulting the web hosting provider or even moving to another host. But in this case I am the owner of the server (a VPS with cPanel/WHM), and I don't have any idea how to solve this kind of problem.
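    One hedged workaround (an assumption, not a verified fix for this particular server): point PHP's session storage at a directory the site's uid actually owns before the plugin calls session_start(), for example early in wp-config.php:

        <?php
        // Sketch only: /home/mikalu/php-sessions is a hypothetical directory that
        // must exist, be owned by the account's uid (515 here), and be writable by it.
        ini_set('session.save_path', '/home/mikalu/php-sessions');

    The warning itself is safe_mode's uid check on /tmp, so the other route is making sure safe mode is really disabled for the PHP handler this site uses (mod_php, suPHP and FastCGI each read their own php.ini/vhost settings), not just in the global php.ini.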

    Read the article

  • Sales and Procurement Contracts 12.1.3++ Release Information

    - by LuciaC
    New functionality has been released for Sales and Procurement Contracts in a new patch: Contracts 12.1.3++: Patch 13877401: 12.1.3 Rollup for Oracle Contracts Core. The new functionality includes: APIs for Import of Contract Templates, Contract Expert Rules, Questions and Constants: The three APIs are as follows: API for Templates, API for Rules, and API for Questions and Constants. These can be used both to create entities and to update existing templates and rules. The APIs will display error and warning messages which can be processed and analyzed by the customer. Ability to Apply Multiple Templates to a Sourcing, Procurement or Sales Document: The buyer can select and add multiple templates to a quote, sales agreement, sourcing or purchasing document. All the clauses and deliverables from the new templates are synchronized with the document. The Contract Expert rules are from the original template. The buyer can also view the list of templates that are added to any sales or procurement document. Ability to Define Multi-Row Variables: You can create user-defined manual variables that are tables containing one row per line or multiple rows. Contract Preview will print the variable values according to the layout defined for the variable. These variables are not available for Contract Expert rules and Supplier. Enhancement to Suggested Sections for Clauses by Contract Expert: You can associate multiple default sections with a clause. A clause is associated with multiple values of any system variable, and for each such value a section name is associated in the Contract Terms Library. When Contract Expert is run in the contract authoring flow, the clause is automatically placed in the associated section. Plus many more new features. Read the following notes for details on all the new and changed functionality: Oracle Procurement Contracts Release Notes, Release 12.1.3++ (Doc ID 1467140.1) Oracle Sales Contracts Release Notes, Release 12.1.3++ (Doc ID 1467149.1) Oracle E-Business Suite Releases 12.1 and 12.2 Release Content Documents (Doc ID 1302189.1)

    Read the article

  • Error on installing SVN extension with pecl

    - by thedp
    Hello, I'm trying to install the following PHP extension: http://php.net/manual/en/book.svn.php But when I do pecl install svn-beta I receive an error message that it can't locate the svn_client.h file. I searched the net but couldn't find any useful reference to this error. Thank you for your help. Installation result: root@myUbuntu:/home/thedp# pecl install svn-beta downloading svn-0.5.1.tgz ... Starting to download svn-0.5.1.tgz (23,563 bytes) .....done: 23,563 bytes 4 source files, building running: phpize Configuring for: PHP Api Version: 20041225 Zend Module Api No: 20060613 Zend Extension Api No: 220060519 1. Please provide the prefix of Subversion installation : autodetect 1-1, 'all', 'abort', or Enter to continue: 1. Please provide the prefix of the APR installation used with Subversion : autodetect 1-1, 'all', 'abort', or Enter to continue: building in /var/tmp/pear-build-root/svn-0.5.1 running: /tmp/pear/temp/svn/configure --with-svn --with-svn-apr checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for a sed that does not truncate output... /bin/sed checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking whether gcc and cc understand -c and -o together... yes checking for system library directory... lib checking if compiler supports -R... no checking if compiler supports -Wl,-rpath,... yes checking build system type... i686-pc-linux-gnu checking host system type... i686-pc-linux-gnu checking target system type... i686-pc-linux-gnu checking for PHP prefix... /usr checking for PHP includes... -I/usr/include/php5 -I/usr/include/php5/main -I/usr/include/php5/TSRM -I/usr/include/php5/Zend -I/usr/include/php5/ext -I/usr/include/php5/ext/date/lib -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 checking for PHP extension directory... /usr/lib/php5/20060613+lfs checking for PHP installed headers prefix... /usr/include/php5 checking for re2c... no configure: WARNING: You will need re2c 0.12.0 or later if you want to regenerate PHP parsers. checking for gawk... no checking for nawk... nawk checking if nawk is broken... no checking for svn support... yes, shared checking for specifying the location of apr for svn... yes, shared checking for svn includes... configure: error: failed to find svn_client.h ERROR: `/tmp/pear/temp/svn/configure --with-svn --with-svn-apr' failed
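    The configure step is failing because the Subversion and APR development headers are not installed, so svn_client.h is nowhere on the include path. A hedged sketch for Ubuntu (the package names below are the usual ones, but verify them against your release):

        # Install the Subversion and APR development headers, then retry the PECL build.
        sudo apt-get install libsvn-dev libapr1-dev libaprutil1-dev
        sudo pecl install svn-beta
        # Accepting the autodetected prefixes should then find svn_client.h
        # under /usr/include/subversion-1.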

    Read the article

  • Are Java's public fields just a tragic historical design flaw at this point?

    - by Avi Flax
    It seems to be Java orthodoxy at this point that one should basically never use public fields for object state. (I don't necessarily agree, but that's not relevant to my question.) Given that, would it be right to say that from where we are today, it's clear that Java's public fields were a mistake/flaw of the language design? Or is there a rational argument that they're a useful and important part of the language, even today? Thanks! Update: I know about the more elegant approaches, such as in C#, Python, Groovy, etc. I'm not directly looking for those examples. I'm really just wondering if there's still someone deep in a bunker, muttering about how wonderful public fields really are, and how the masses are all just sheep, etc. Update 2: Clearly static final public fields are the standard way to create public constants. I was referring more to using public fields for object state (even immutable state). I'm thinking that it does seem like a design flaw that one should use public fields for constants, but not for state… a language's rules should be enforced naturally, by syntax, not by guidelines.

    Read the article

  • Lost access to the Unity interface, how to fix? (Ubuntu 11.10)

    - by Tal Galili
    OK, this is embarrassing: I installed CompizConfig Settings Manager and tried to tweak it so that the transition time when switching windows (using Alt+Tab) would be shorter. By accident I un-ticked the checkbox on something else, and it asked me about a conflict - I pressed the "x" button to close the window, and as a result I stopped seeing the Unity interface. That is, I cannot see any buttons on the left side. I went to the terminal (Ctrl+Alt+F1) and ran ccsm. As a result I got the following error: $ ccsm /usr/lib/python2.7/site-packages/gtk-2.0/gtk/__init__.py:57: GtkWarning: could not open display warnings.warn(str(e), _gtk.Warning) Traceback (most recent call last): File "/usr/bin/ccsm", line 93, in <module> import ccm File "/usr/lib/python2.7/site-packages/ccm/__init__.py", line 1, in <module> from ccm.Conflicts import * File "/usr/lib/python2.7/site-packages/ccm/Conflicts.py", line 26, in <module> from ccm.Constants import * File "/usr/lib/python2.7/site-packages/ccm/Constants.py", line 29, in <module> CurrentScreenNum = gtk.gdk.display_get_default().get_default_screen().get_number() AttributeError: 'NoneType' object has no attribute 'get_default_screen' What should I do next? Thanks.
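    The traceback only says that ccsm cannot open an X display from the text console. A hedged sketch of the usual recovery steps (the exact reset command varies between Ubuntu releases, so treat these as suggestions):

        # Point ccsm at the running X session instead of the bare console:
        DISPLAY=:0 ccsm
        # Or switch back to the graphical session (Ctrl+Alt+F7), open a terminal
        # there, and reset Unity's Compiz settings:
        unity --reset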

    Read the article

  • Maven error: Unable to get resource / Server redirected too many times

    - by tewe
    Our proxy went down and I tried to update dependencies with maven while it was off. Since then I can't download anything with maven. I get this error for everything. I tried -U option, deleting my local repository and tried different maven version (2.0.9, 2.2.1) but it doesn't work. Any idea how to solve this? Earlier it also said 'repository will be blacklisted' to all of them. Downloading: http://repo1.maven.org/maven2/org/apache/maven/plugins/maven-compiler-plugin/2.1/maven-compiler-plugin-2.1.pom [WARNING] Unable to get resource 'org.apache.maven.plugins:maven-compiler-plugin:pom:2.1' from repository central (http://repo1.maven.org/maven2): Error transferring file: Server redirected too many times (20) org.apache.maven.plugins:maven-compiler-plugin:pom:2.1 from the specified remote repositories: jboss-snapshot (http://snapshots.jboss.org/maven2), central (http://repo1.maven.org/maven2), JBoss Repo (http://repository.jboss.com/maven2), spring-maven-snapshot (http://maven.springframework.org/snapshot), com.springsource.repository.bundles.external (http://repository.springsource.com/maven/bundles/external), com.springsource.repository.bundles.snapshot (http://repository.springsource.com/maven/bundles/snapshot), jboss (http://repository.jboss.com/maven2), com.springsource.repository.bundles.release (http://repository.springsource.com/maven/bundles/release), jboss-snapshot-plugins (http://snapshots.jboss.org/maven2), com.springsource.repository.bundles.milestone (http://repository.springsource.com/maven/bundles/milestone), jboss-plugins (http://repository.jboss.com/maven2) at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:228) at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:90) at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:558) ... 25 more Caused by: org.apache.maven.wagon.ResourceDoesNotExistException: Unable to download the artifact from any repository at org.apache.maven.artifact.manager.DefaultWagonManager.getArtifact(DefaultWagonManager.java:404) at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:216) ... 27 more
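    Since the failures started when the proxy died, the first thing worth checking is the proxy definition Maven is still trying to use. A sketch of the relevant block in ~/.m2/settings.xml (host and port are placeholders):

        <settings>
          <proxies>
            <!-- Either fix host/port to match the restored proxy,
                 or set active to false to connect directly. -->
            <proxy>
              <id>corporate</id>
              <active>true</active>
              <protocol>http</protocol>
              <host>proxy.example.com</host>
              <port>8080</port>
            </proxy>
          </proxies>
        </settings>

    The "Server redirected too many times" message is typically a proxy or captive portal answering with redirects instead of the artifact, so once the proxy entry matches reality the repositories should download again on the next run with -U.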

    Read the article

  • cpptasks ordering of static libraries in the gcc command line

    - by AC
    How do I force cpptasks to move the static libraries to the end of the argument list passed to the compiler? Here is the clause I am using: <cpptasks:cc description="appname" subsystem="console" objdir="obj" outfile="dist/app_test"> <compiler refid="testsslcc" /> <linkerarg value="-L${libdir}" /> <linkerarg value="-L/usr/local/devl/lib" /> <linkerarg value="-Wl,-rpath,../lib" /> <libset libs="unittest ${libs} dsg readline ncurses gcov" /> <fileset dir="test/obj" includes="main.o" /> <fileset dir="." includes="${TCFILES}" /> <fileset dir="../lib" includes="libboost_thread.a libboost_date_time.a" /> </cpptasks:cc> When this executes, libboost_thread.a and libboost_date_time.a are the first files in the argument list passed to the compiler: gcc -ggdb -Wl,-export-dynamic -Wshadow -Wno-format-y2k ../../lib/libboost_date_time.a ../../lib/libboost_thread.a x.cpp ... which causes a compiler error. By manually moving them to the end of the argument list, the application compiles without error: gcc -ggdb -Wl,-export-dynamic -Wshadow -Wno-format-y2k x.cpp ... ../../lib/libboost_date_time.a ../../lib/libboost_thread.a And yes, I have tried changing the order in the XML, and that of course didn't work. For now I am using an exec task to call gcc with the files in the correct order, but this of course is a hack.
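    One possible approach (an assumption that your cpptasks version honours it; linkerarg elements accept a location attribute for placing arguments at the start, middle or end of the command line) is to pass the archives as end-positioned linker arguments instead of a fileset:

        <cpptasks:cc description="appname" subsystem="console" objdir="obj" outfile="dist/app_test">
            <compiler refid="testsslcc" />
            <linkerarg value="-L${libdir}" />
            <linkerarg value="-L/usr/local/devl/lib" />
            <linkerarg value="-Wl,-rpath,../lib" />
            <libset libs="unittest ${libs} dsg readline ncurses gcov" />
            <fileset dir="test/obj" includes="main.o" />
            <fileset dir="." includes="${TCFILES}" />
            <!-- location="end" keeps these after the object files in the final command -->
            <linkerarg value="../lib/libboost_thread.a" location="end" />
            <linkerarg value="../lib/libboost_date_time.a" location="end" />
        </cpptasks:cc>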

    Read the article

  • Titanium won't run iPhone/Android Emulator

    - by BeOliveira
    I just installed Titanium SDK (1.5.1) and all the Android SDKs. Also, I already have iPhone SDK 4.2 installed. I downloaded KitchenSink and imported it into Titanium but whenever I try to run it on iPhone Emulator, I get this error: [INFO] One moment, building ... [INFO] Titanium SDK version: 1.5.1 [INFO] iPhone Device family: iphone [INFO] iPhone SDK version: 4.0 [INFO] Detected compiler plugin: ti.log/0.1 [INFO] Compiler plugin loaded and working for ios [INFO] Performing clean build [INFO] Compiling localization files [INFO] Detected custom font: comic_zine_ot.otf [ERROR] Error: Traceback (most recent call last): File "/Library/Application Support/Titanium/mobilesdk/osx/1.5.1/iphone/builder.py", line 1003, in main execute_xcode("iphonesimulator%s" % iphone_version,["GCC_PREPROCESSOR_DEFINITIONS=LOG__ID=%s DEPLOYTYPE=development TI_DEVELOPMENT=1 DEBUG=1 TI_VERSION=%s" % (log_id,sdk_version)],False) File "/Library/Application Support/Titanium/mobilesdk/osx/1.5.1/iphone/builder.py", line 925, in execute_xcode output = run.run(args,False,False,o) File "/Library/Application Support/Titanium/mobilesdk/osx/1.5.1/iphone/run.py", line 31, in run sys.exit(rc) SystemExit: 1 And for Android, it runs the OS but not the KitchenSink app, here's the log: [INFO] Launching Android emulator...one moment [INFO] Building KitchenSink for Android ... one moment [INFO] plugin=/Library/Application Support/Titanium/plugins/ti.log/0.1/plugin.py [INFO] Detected compiler plugin: ti.log/0.1 [INFO] Compiler plugin loaded and working for android [INFO] Titanium SDK version: 1.5.1 (12/16/10 16:25 16bbb92) [INFO] Waiting for the Android Emulator to become available [ERROR] Timed out waiting for android.process.acore [INFO] Copying project resources.. [INFO] Detected tiapp.xml change, forcing full re-build... [INFO] Compiling Javascript Resources ... [INFO] Copying platform-specific files ... [INFO] Compiling localization files [INFO] Compiling Android Resources... This could take some time Any ideas on how to get Titanium to work?

    Read the article

  • Delegates in .NET: how are they constructed?

    - by Saulius
    While inspecting delegates in C# and .NET in general, I noticed some interesting facts: Creating a delegate in C# creates a class derived from MulticastDelegate with a constructor: .method public hidebysig specialname rtspecialname instance void .ctor(object 'object', native int 'method') runtime managed { } Meaning that it expects the instance and a pointer to the method. Yet the syntax of constructing a delegate in C# suggests that it has a constructor new MyDelegate(int () target) where I can recognise int () as a function instance (int *target() would be a function pointer in C++). So obviously the C# compiler picks out the correct method from the method group defined by the function name and constructs the delegate. So the first question would be: where does the C# compiler (or Visual Studio, to be precise) pick this constructor signature from? I did not notice any special attributes or anything else that would make a distinction. Is this some sort of compiler/Visual Studio magic? If not, is the T (args) target construction valid in C#? I did not manage to get anything with it to compile, e.g.: int () target = MyMethod; is invalid, and so is doing anything with MyMethod, e.g. calling .ToString() on it (well, this does make some sense, since that is technically a method group, but I imagine it should be possible to explicitly pick out a method by casting, e.g. (int())MyFunction). So is all of this purely compiler magic? Looking at the construction through Reflector reveals yet another syntax: Func CS$1$0000 = new Func(null, (IntPtr) Foo); This is consistent with the disassembled constructor signature, yet this does not compile! One final interesting note is that the classes Delegate and MulticastDelegate have yet another set of constructors: .method family hidebysig specialname rtspecialname instance void .ctor(class System.Type target, string 'method') cil managed Where does the transition from an instance and a method pointer to a type and a string method name occur? Can this be explained by the runtime managed keywords in the custom delegate constructor signature, i.e. does the runtime do its job here?
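    For comparison, the reflection API exposes the Type-and-string route directly; a small sketch (the Worker class and its method are made up for illustration):

        using System;

        class Worker
        {
            public int Answer() { return 42; }
        }

        static class Demo
        {
            static void Main()
            {
                var worker = new Worker();

                // Normal C# syntax: the compiler picks Answer out of the method group
                // and emits a call to the (object, native int) runtime-provided constructor.
                Func<int> viaSyntax = worker.Answer;

                // Late-bound route: the runtime resolves the method from a name,
                // which is roughly the path served by the protected (Type/object, string)
                // constructors on Delegate and MulticastDelegate.
                var viaReflection = (Func<int>)Delegate.CreateDelegate(
                    typeof(Func<int>), worker, "Answer");

                Console.WriteLine(viaSyntax() + viaReflection());
            }
        }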

    Read the article

  • Covariance and Contravariance type inference in C# 4.0

    - by devoured elysium
    When we define our interfaces in C# 4.0, we are allowed to mark each of the generic parameters as in or out. If we try to set a generic parameter as out and that would lead to a problem, the compiler raises an error, not allowing us to do that. Question: If the compiler has ways of inferring what are valid uses for both covariance (out) and contravariance (in), why do we have to mark interfaces as such? Wouldn't it be enough to just let us define the interfaces as we always did, and when we tried to use them in our client code, raise an error if we tried to use them in an unsafe way? Example: interface MyInterface<out T> { T abracadabra(); } //works OK interface MyInterface2<in T> { T abracadabra(); } //compiler raises an error. //This makes me think that the compiler is capable //of understanding what situations might generate //run-time problems and then prohibits them. Also, isn't this what Java does in the same situation? From what I recall, you just do something like IMyInterface<? extends whatever> myInterface; //covariance IMyInterface<? super whatever> myInterface2; //contravariance Or am I mixing things up? Thanks
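    A small illustration of why the annotation is a declared promise rather than something inferred per use site (standard BCL interfaces only):

        using System;
        using System.Collections.Generic;

        class VarianceDemo
        {
            static void Main()
            {
                // IEnumerable<out T>: T only flows out, so string -> object is safe.
                IEnumerable<string> strings = new List<string> { "a", "b" };
                IEnumerable<object> objects = strings;        // fine, covariant

                // IList<T> is invariant: if the line below compiled,
                // broken.Add(new object()) could corrupt a List<string>.
                // IList<object> broken = new List<string>();

                // IComparer<in T>: T only flows in, so object -> string is safe.
                IComparer<object> byDefault = Comparer<object>.Default;
                IComparer<string> forStrings = byDefault;     // fine, contravariant

                Console.WriteLine(forStrings.Compare("x", "y"));
                foreach (object o in objects) Console.WriteLine(o);
            }
        }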

    Read the article

  • How to use VC++ intrinsic functions w/o run-time library

    - by Adrian McCarthy
    I'm involved in one of those challenges where you try to produce the smallest possible binary, so I'm building my program without the C or C++ run-time libraries (RTL). I don't link to the DLL version or the static version. I don't even #include the header files. I have this working fine. For some code constructs, the compiler generates calls to memset(). For example: struct MyStruct { int foo; int bar; }; MyStruct blah = {}; // calls memset() Since I don't include the RTL, this results in a missing symbol at link time. I've been getting around this by avoiding those constructs. For the given example, I'll explicitly initialize the struct. MyStruct blah; blah.foo = 0; blah.bar = 0; But memset() can be useful, so I tried adding my own implementation. It works fine in Debug builds, even for those places where the compiler generates an implicit call to memset(). But in Release builds, I get an error saying that I cannot define an intrinsic function. You see, in Release builds, intrinsic functions are enabled, and memset() is an intrinsic. I would love to use the intrinsic for memset() in my release builds, since it's probably inlined and smaller and faster than my implementation. But I seem to be a in catch-22. If I don't define memset(), the linker complains that it's undefined. If I do define it, the compiler complains that I cannot define an intrinsic function. I've tried adding #pragma intrinsic(memset) with and without declarations of memset, but no luck. Does anyone know the right combination of definition, declaration, #pragma, and compiler and linker flags to get an intrinsic function without pulling in RTL overhead? Visual Studio 2008, x86, Windows XP+.
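    A hedged sketch of one way this is commonly split (both pragmas are real MSVC pragmas; whether this satisfies your exact flag combination is an assumption): request the intrinsic from a shared declaration, and keep a single out-of-line fallback definition in its own translation unit compiled under #pragma function so the two never collide.

        // minicrt.h - shared declaration; the intrinsic form is requested here.
        // (unsigned int stands in for size_t on x86, since no CRT headers are included)
        extern "C" void* __cdecl memset(void* dst, int value, unsigned int count);
        #pragma intrinsic(memset)

        // memset_fallback.cpp - the one place a real function body lives, covering the
        // calls the compiler emits implicitly (e.g. MyStruct blah = {};).
        #pragma function(memset)
        extern "C" void* __cdecl memset(void* dst, int value, unsigned int count)
        {
            unsigned char* p = static_cast<unsigned char*>(dst);
            while (count--)
                *p++ = static_cast<unsigned char>(value);
            return dst;
        }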

    Read the article

  • Nested Class member function can't access function of enclosing class. Why?

    - by Rahul
    Please see the example code below: class A { private: class B { public: void foobar(); }; public: void foo(); void bar(); }; Within the implementations of A and B: void A::foo() { //do something } void A::bar() { //some code foo(); //more code } void A::B::foobar() { //some code foo(); //<<compiler doesn't like this } The compiler flags the call to foo() within the method foobar(). Earlier I had foo() as a private member function of class A, but I changed it to public assuming that B's functions couldn't see it. Of course, it didn't help. I am trying to re-use the functionality provided by A's method. Why doesn't the compiler allow this function call? As I see it, they are part of the same enclosing class (A). I thought the accessibility issue for nested class members of the enclosing class was resolved in the C++ standard. How can I achieve what I am trying to do without re-writing the same method (foo()) for B, while keeping B nested within A? I am using the VC++ compiler version 9 (Visual Studio 2008). Thank you for your help.
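    Access is not actually the obstacle here: A::foo() is a non-static member, so B::foobar() needs an A object to call it on. A minimal sketch of the usual fix, giving B a pointer back to its enclosing A (the member names are illustrative):

        #include <iostream>

        class A {
        private:
            class B {
            public:
                explicit B(A* outer) : mOuter(outer) {}
                void foobar() { mOuter->foo(); }   // call through the enclosing instance
            private:
                A* mOuter;                         // back-pointer supplied by A
            };

        public:
            A() : mB(this) {}
            void foo() { std::cout << "A::foo\n"; }
            void bar() { mB.foobar(); }

        private:
            B mB;
        };

        int main() {
            A a;
            a.bar();   // prints "A::foo"
        }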

    Read the article

  • Constructors and Destructors in C++ [Not a question] [closed]

    - by Jack
    I am using gcc. Please tell me if I am wrong - let's say I have two classes A & B: class A { public: A(){cout<<"A constructor"<<endl;} ~A(){cout<<"A destructor"<<endl;} }; class B:public A { public: B(){cout<<"B constructor"<<endl;} ~B(){cout<<"B destructor"<<endl;} }; 1) The first line in B's constructor should be a call to A's constructor (I assume the compiler automatically inserts it). Also the last line in B's destructor will be a call to A's destructor (the compiler does it again). Why was it built this way? 2) When I say A * a = new B(); the compiler creates a new B object and checks to see if A is a base class of B, and if it is, it allows 'a' to point to the newly created object. I guess that is why we don't need any virtual constructors. (with help from @Tyler McHenry, @Konrad Rudolph) 3) When I write delete a the compiler sees that a is of type A*, so it calls A's destructor, leading to a problem which is solved by making A's destructor virtual. As user Little Bobby Tables pointed out to me, all destructors have the same name destroy() in memory so we can implement virtual destructors, and now the call is made to B's destructor and all is well in C++ land. Please comment.
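    A compact sketch of point 3, showing the effect of making the base destructor virtual (same A and B as above):

        #include <iostream>
        using std::cout; using std::endl;

        class A {
        public:
            A() { cout << "A constructor" << endl; }
            virtual ~A() { cout << "A destructor" << endl; }  // virtual: safe to delete through A*
        };

        class B : public A {
        public:
            B() { cout << "B constructor" << endl; }
            ~B() { cout << "B destructor" << endl; }
        };

        int main() {
            A* a = new B();   // prints: A constructor, B constructor
            delete a;         // with virtual ~A: B destructor, then A destructor
        }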

    Read the article

  • C++ Switch won't compile with externally defined variable used as case

    - by C Nielsen
    I'm writing C++ using the MinGW GNU compiler and the problem occurs when I try to use an externally defined integer variable as a case in a switch statement. I get the following compiler error: "case label does not reduce to an integer constant". Because I've defined the integer variable as extern I believe that it should compile, does anyone know what the problem may be? Below is an example: test.cpp #include <iostream> #include "x_def.h" int main() { std::cout << "Main Entered" << std::endl; switch(0) { case test_int: std::cout << "Case X" << std::endl; break; default: std::cout << "Case Default" << std::endl; break; } return 0; } x_def.h extern const int test_int; x_def.cpp const int test_int = 0; This code will compile correctly on Visual C++ 2008. Furthermore a Montanan friend of mine checked the ISO C++ standard and it appears that any const-integer expression should work. Is this possibly a compiler bug or have I missed something obvious? Here's my compiler version information: Reading specs from C:/MinGW/bin/../lib/gcc/mingw32/3.4.5/specs Configured with: ../gcc-3.4.5-20060117-3/configure --with-gcc --with-gnu-ld --with-gnu-as --host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads --disable-nls --enable-languages=c,c++,f77,ada,objc,java --disable-win32-registry --disable-shared --enable-sjlj-exceptions --enable-libgcj --disable-java-awt --without-x --enable-java-gc=boehm --disable-libgcj-debug --enable-interpreter --enable-hash-synchronization --enable-libstdcxx-debug Thread model: win32 gcc version 3.4.5 (mingw-vista special r3)
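    For what it's worth (hedged, based on the C++03 rules): a case label requires an integral constant expression, and an extern const int whose initializer is not visible in the current translation unit does not qualify, so gcc is within its rights to reject it. A sketch of two ways to keep the constant shared while satisfying that rule:

        // x_def.h - option 1: define the constant in the header; a const int at
        // namespace scope has internal linkage, so each translation unit sees the
        // initializer at compile time.
        const int test_int = 0;

        // x_def.h - option 2: an enumerator, which is always a constant expression.
        // enum { test_int = 0 };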

    Read the article

  • Syntactical analysis with Flex/Bison part 2

    - by Imran
    Hello, I need help with Lex/Yacc programming. I wrote a compiler that performs syntactic analysis on inputs consisting of many statements. Now I have a specific problem. For a given input the compiler already produces the right output: which statement is used, constant, operator, or a jmp instruction to the right label. Now, when an if statement comes in, the first command (the one before the else) must be emitted; when the if condition holds, execution must then jump to the end, because the command after the else isn't needed; after this jmp the second command must be emitted. I'll show it in an example so you can see what I mean:

        Input       adr.  Output
        if(x==0)    10    if(x==0)
        Wait 5      20    WAIT 5
        else        30    JMP 50
        Wait 1      40    WAIT 1
        end         50    END

    I have an idea: maybe I can do it with a special if rule like IF exp jmp_stmt_end stmt_seq END, where the compiler recognizes the end of the statement when the if statement appears in the input and, like my existing jmp_stmt (you can download the files from http://bitbucket.org/matrix/changed-tiny), jumps only to the end. I hope you understand my problem. Thanks.

    Read the article

  • How best to deal with warning C4305 when the type could change?

    - by identitycrisisuk
    I'm using both Ogre and NxOgre, which both have a Real typedef that is either float or double depending on a compiler flag. This has resulted in most of our compiler warnings now being: warning C4305: 'argument' : truncation from 'double' to 'Ogre::Real' When initialising variables with 0.1 for example. Normally I would use 0.1f but then if you change the compiler flag to double precision then you would get the reverse warning. I guess it's probably best to pick one and stick with it but I'd like to write these in a way that would work for either configuration if possible. One fix would be to use #pragma warning (disable : 4305) in files where it occurs, I don't know if there are any other more complex problems that can be hidden by not having this warning. I understand I would push and pop these in header files too so that they don't end up spreading across code. Another is to create some macro based on the accuracy compiler flag like: #if OGRE_DOUBLE_PRECISION #define INIT_REAL(x) (x) #else #define INIT_REAL(x) static_cast<float>( x ) #endif which would require changing all the variable initialisation done so far but at least it would be future proof. Any preferences or something I haven't thought of?
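    A lighter-weight variant of the macro idea (my own sketch, not an Ogre facility): a tiny inline conversion helper silences the warning without the preprocessor and reads the same under either precision setting.

        #include <OgrePrerequisites.h>   // Ogre::Real typedef; adjust the include to your setup

        // Explicitly convert the literal to whatever Real currently is.
        inline Ogre::Real real(double v)
        {
            return static_cast<Ogre::Real>(v);
        }

        // usage:
        // Ogre::Real timeout = real(0.1);   // no C4305 in either configuration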

    Read the article

  • Having an issue with bounded wildcards in generics

    - by Sanjiv
    I am new to Java generics and I'm currently experimenting with generic coding. The final goal is to convert old non-generic legacy code to generic code. I have defined two classes with an IS-A relationship, i.e. one is a subclass of the other: public class Parent { private String name; public Parent(String name) { super(); this.name = name; } } public class Child extends Parent { private String address; public Child(String name, String address) { super(name); this.address = address; } } Now I am trying to create a list with a bounded wildcard, and I am getting a compiler error: List<? extends Parent> myList = new ArrayList<Child>(); myList.add(new Parent("name")); // compiler error myList.add(new Child("name", "address")); // compiler error myList.add(new Child("name", "address")); // compiler error I'm a bit confused. Please help me understand what's wrong with this.
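    For contrast, a short sketch of declarations that do accept add() (using the same Parent and Child classes): an upper-bounded wildcard only promises what you can read out of the list, so the compiler rejects every add except null, while a lower bound or a plain List<Parent> permits writing.

        import java.util.ArrayList;
        import java.util.List;

        public class WildcardDemo {
            public static void main(String[] args) {
                // Lower-bounded wildcard: some supertype of Child, so adding a Child is safe.
                List<? super Child> sink = new ArrayList<Parent>();
                sink.add(new Child("name", "address"));   // OK
                // sink.add(new Parent("name"));          // still rejected: the list might be a List<Child>

                // Invariant element type: both Parent and Child can be added.
                List<Parent> parents = new ArrayList<Parent>();
                parents.add(new Parent("name"));
                parents.add(new Child("name", "address"));

                System.out.println(parents.size());       // prints 2
            }
        }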

    Read the article

  • Problem with "moveable-only types" in VC++ 2010

    - by Luc Touraille
    I recently installed Visual Studio 2010 Professional RC to try it out and test the few C++0x features that are implemented in VC++ 2010. I instantiated a std::vector of std::unique_ptr, without any problems. However, when I try to populate it by passing temporaries to push_back, the compiler complains that the copy constructor of unique_ptr is private. I tried inserting an lvalue by moving it, and it works just fine. #include <utility> #include <vector> int main() { typedef std::unique_ptr<int> int_ptr; int_ptr pi(new int(1)); std::vector<int_ptr> vec; vec.push_back(std::move(pi)); // OK vec.push_back(int_ptr(new int(2))); // compiler error } As it turns out, the problem is neither unique_ptr nor vector::push_back but the way VC++ resolves overloads when dealing with rvalues, as demonstrated by the following code: struct MoveOnly { MoveOnly() {} MoveOnly(MoveOnly && other) {} private: MoveOnly(const MoveOnly & other); }; void acceptRValue(MoveOnly && mo) {} int main() { acceptRValue(MoveOnly()); // Compiler error } The compiler complains that the copy constructor is not accessible. If I make it public, the program compiles (even though the copy constructor is not defined). Did I misunderstand something about rvalue references, or is it a (possibly known) bug in VC++ 2010's implementation of this feature?

    Read the article

  • Shaping EF LINQ Query Results Using Multi-Table Includes

    - by sisdog
    I have a simple LINQ EF query below using the method syntax. I'm using my Include statement to join four tables: Event and Doc are the two main tables, EventDoc is a many-to-many link table, and DocUsage is a lookup table. My challenge is that I'd like to shape my results by only selecting specific columns from each of the four tables. But the compiler is giving me the following error: 'System.Data.Objects.DataClasses.EntityCollection' does not contain a definition for 'Doc' and no extension method 'Doc' accepting a first argument of type 'System.Data.Objects.DataClasses.EntityCollection' could be found. I'm sure this is something easy but I'm not figuring it out. I haven't been able to find an example of someone using a multi-table Include while also shaping the projection. Thx, Mark var qry= context.Event .Include("EventDoc.Doc.DocUsage") .Select(n => new { n.EventDate, n.EventDoc.Doc.Filename, //<=COMPILER ERROR HERE n.EventDoc.Doc.DocUsage.Usage }) .ToList(); EventDoc ed; Doc d = ed.Doc; //<=NO COMPILER ERROR SO I KNOW MY MODEL'S CORRECT DocUsage du = d.DocUsage;
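    The error is the compiler's way of saying that Event.EventDoc is a collection, so you cannot dot through it to a single Doc. A hedged sketch of one shape that usually works (property names are taken from the question; whether EF translates it exactly as written depends on your model):

        // Flatten each Event across its EventDoc rows, then project the scalar columns.
        var qry = context.Event
            .SelectMany(e => e.EventDoc, (e, ed) => new
            {
                e.EventDate,
                ed.Doc.Filename,
                ed.Doc.DocUsage.Usage
            })
            .ToList();

    Once you project into an anonymous type the Include call becomes redundant, since EF only materializes the columns named in the projection.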

    Read the article

  • Why doesn't ${locale} resolve in my <compc> Ant task?

    - by user165462
    I've seen a number of examples, e.g. here, where people are including locale resource bundles by referencing the locale attribute in the element. For some reason this doesn't work for me. Here's what I have for the task: <compc output="${deploy.dir}/myfrmwrk.swc" locale="en_US"> <source-path path-element="${basedir}/src/main/flex"/> <include-sources dir="${basedir}/src/main/flex" includes="*" /> <include-libraries file="${basedir}/libs"/> <compiler.external-library-path dir="${FLEX_HOME}/frameworks/libs/player/9" append="true"> <include name="playerglobal.swc"/> </compiler.external-library-path> <compiler.library-path dir="${FLEX_HOME}/frameworks" append="true"> <include name="libs"/> <include name="locale/${locale}"/> </compiler.library-path> <load-config filename="${basedir}/fb3config.xml" /> </compc> This fails with a bunch of errors of the form: [compc] Error: could not find source for resource bundle ... I can make it build with this one change: <include name="locale/en_US"/> The configuration file generated by Flex Builder 3 actually renders this as "locale/{locale}" (notice the $ is missing). I've tried that as well with the same (failing) results. For now, I'm doing OK directly injecting en_US as we won't be doing localization bundles for quite some time, but I will eventually need to get this working. Also, it bugs me that I can't make it work the way that it SHOULD work!
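    One thing worth ruling out (an assumption about this build file): ${locale} in the <compc> task is just an ordinary Ant property, and Ant leaves unknown properties unexpanded, so unless something defines it the library path literally contains "locale/${locale}". A minimal sketch:

        <!-- Define the property once, before the <compc> task runs -->
        <property name="locale" value="en_US"/>

        <compiler.library-path dir="${FLEX_HOME}/frameworks" append="true">
            <include name="libs"/>
            <include name="locale/${locale}"/>
        </compiler.library-path>

    The "locale/{locale}" form that Flex Builder writes is the Flex compiler's own token substitution inside its configuration file, which the Ant task does not perform on ${...} references, so defining the Ant property (or hard-coding en_US, as you did) is the usual workaround.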

    Read the article

  • Was Visual Studio 2008 or 2010 written to use multiple cores?

    - by Erx_VB.NExT.Coder
    Basically I want to know whether the Visual Studio IDE and/or compiler in 2010 was written to make use of a multi-core environment (I understand we can target multi-core environments in 08 and 10, but that is not my question). I am trying to decide between a higher-clocked dual core and a lower-clocked quad core, as I want to figure out which processor will give me the absolute best possible experience with Visual Studio 2010 (the IDE and the background compiler). If the most important work (the background compiler and other IDE tasks) runs on a single core, then that core will hit its ceiling sooner on a lower-clocked quad core; especially if the background compiler is the heaviest task, I imagine it would be difficult to split that across more than one process, so even if VS uses multiple cores you might still be better off with a higher-clocked CPU if the most significant part of the processing is still bound to one core. I am a VB programmer; they've made great performance improvements in Beta 2, congrats, but I would love to be able to use VS seamlessly... anyone have any ideas? Thanks, Erx

    Read the article
