Search Results

Search found 5729 results on 230 pages for 'compiler dependent'.


  • Delphi: How to avoid EIntOverflow underflow when subtracting?

    - by Ian Boyd
    Microsoft already says, in the documentation for GetTickCount, that you should never compare tick counts directly to check whether an interval has passed. e.g.: Incorrect (pseudo-code): DWORD endTime = GetTickCount + 10000; //10 s from now ... if (GetTickCount > endTime) break; The above code is bad because it is susceptible to rollover of the tick counter. For example, assume that the clock is near the end of its range: endTime = 0xfffffe00 + 10000 = 0x00002510; //9,488 decimal Then you perform your check: if (GetTickCount > endTime) which is satisfied immediately, since GetTickCount is larger than endTime: if (0xfffffe01 > 0x00002510) The solution: instead, you should always subtract the two tick counts: DWORD startTime = GetTickCount; ... if ((GetTickCount - startTime) > 10000) //if it's been 10 seconds break; Looking at the same math: if ((GetTickCount - startTime) > 10000) if ((0xfffffe01 - 0xfffffe00) > 10000) if (1 > 10000) This is all well and good in C/C++, where the compiler behaves a certain way. But what about Delphi? When I perform the same math in Delphi, with overflow checking on ({$Q+}, {$OVERFLOWCHECKS ON}), the subtraction of the two tick counts generates an EIntOverflow exception when the TickCount rolls over: if (0x00000100 - 0xffffff00) > 10000 0x00000100 - 0xffffff00 = 0x00000200 What is the intended solution for this problem? Edit: I've tried to temporarily turn off overflow checks: {$OVERFLOWCHECKS OFF} delta := GetTickCount - startTime; {$OVERFLOWCHECKS ON} But the subtraction still throws an EIntOverflow exception. Is there a better solution, involving casts and larger intermediate variable types?
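
    A minimal C++ sketch of the subtraction idiom described above (hypothetical names, with the counter values hard-coded so it stands alone); it illustrates the C/C++ behaviour the question relies on, not the Delphi fix being asked for:

        #include <iostream>

        typedef unsigned int DWORD;   // assumption: a 32-bit unsigned tick counter

        // Unsigned subtraction wraps modulo 2^32, so the elapsed delta is
        // correct even when the counter rolls over between the two samples.
        bool intervalElapsed(DWORD startTime, DWORD now, DWORD intervalMs)
        {
            return (now - startTime) >= intervalMs;
        }

        int main()
        {
            DWORD start = 0xFFFFFE00u;   // near the end of the counter's range
            DWORD now   = 0x00000100u;   // counter has already rolled over
            std::cout << (now - start) << "\n";                      // prints 768
            std::cout << intervalElapsed(start, now, 10000) << "\n"; // prints 0
        }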

    Read the article

  • ojspc always returns 0 on errors

    - by Matt McCormick
    In my Ant build.xml file, I am trying to compile JSPs using ojspc. The files are being compiled; however, the build process still runs to completion when the JSP compilation has errors. This is part of my build.xml: <java fork="true" jar="${env.ORACLE_HOME}\j2ee\home\ojspc.jar" resultproperty="result"> <jvmarg value="-Djava.compiler=NONE"/> <arg value="-extend"/> <arg value="com.orionserver.http.OrionHttpJspPage"/> <arg value="-batchMask"/> <arg value="*.jsp"/> <arg value="${target-directory}/build/target/ear/${module-dir-name}-jsp.war"/> </java> <echo level="info">Result Property: ${result}</echo> I have tried setting the property failonerror="true" but that does not change anything. I receive the following output: [java] Detected archive, now processing contents of ../build/target/ear/web-module-jsp.war... [java] Setting up temp area... [java] Expanding archive in temp area... [java] C:\DOCUME~1\MMCCOR~1\LOCALS~1\Temp\tmp12940\_web_2d_inf\_jsp\_password.java:60: cannot resolve symbol [java] symbol : variable reqvst [java] location: class _web_2d_inf._jsp._password [java] out.print(reqvst.getAttribute("test")); [java] ^ [java] 1 error [java] Creating D:\eclipse-workspace\jdw\build\..\build\target\ear\web-module-jsp.war ... [java] Removing temp area... [echo] Result Property: 0 ...(more commands) BUILD SUCCESSFUL In the password.jsp file, I intentionally introduced an error to test. How can I get the build to fail on an error? At the Ant <java> task page, I am confused by: "By default the return code of a <java> is ignored. Alternatively, you can set resultproperty to the name of a property and have it assigned to the result code (barring immutability, of course). When you set failonerror="true", the only possible value for resultproperty is 0. Any non-zero response is treated as an error and would mean the build exits."

    Read the article

  • Trying to make a plugin system in C++

    - by Pirate for Profit
    I'm making a task-based program that needs to have plugins. Tasks need to have properties which can be easily edited; I think this can be done with Qt's Meta-Object Compiler reflection capabilities (I could be wrong, but I should be able to stick this in a QtPropertyBrowser?) So here's the base: class Task : public QObject { Q_OBJECT public: explicit Task(QObject *parent = 0) : QObject(parent){} virtual void run() = 0; signals: void taskFinished(bool success = true); }; Then a plugin might have this task: class PrinterTask : public Task { Q_OBJECT public: explicit PrinterTask(QObject *parent = 0) : Task(parent) {} void run() { Printer::getInstance()->Print(this->getData()); // fictional emit taskFinished(true); } inline const QString &getData() const; inline void setData(QString data); Q_PROPERTY(QString data READ getData WRITE setData) // for reflection }; In a nutshell, here's what I want to do: // load plugin // find all the Task interface implementations in it // have user able to choose a Task and edit its specific Q_PROPERTY's // run the Task It's important that one .dll has multiple tasks, because I want them to be associated by their module. For instance, "FileTasks.dll" could have tasks for deleting files, making files, etc. The only problem with Qt's plugin setup is that I want to store X amount of Tasks in one .dll module. As far as I can tell, you can only load one interface per plugin (I could be wrong?). If so, the only possible way to accomplish what I want is to create a FactoryInterface with string-based keys which returns the objects (as in Qt's Plug-And-Paint example), which is a lot of boilerplate that I would like to avoid. Does anyone know a cleaner C++ plugin architecture than Qt's to do what I want? Also, am I safely assuming Qt's reflection capabilities will do what I want (i.e. will I be able to edit an unknown dynamically loaded task's properties with the QtPropertyBrowser before dispatching)?
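
    One hedged sketch of a way around the one-root-object-per-plugin limit, using only Qt 4-era API: have the plugin's single root QObject create every Task in the module as one of its children, and let the host enumerate them with findChildren (which works off the Q_OBJECT metadata the Task base already carries). The function name and layout below are assumptions, not an established pattern from the Qt docs:

        #include <QtCore/QList>
        #include <QtCore/QObject>
        #include <QtCore/QPluginLoader>
        #include <QtCore/QString>

        // Host side: load e.g. "FileTasks.dll" and pull out every Task that
        // the plugin's root object created as a child of itself.
        QList<Task *> loadTasks(const QString &path)
        {
            QPluginLoader loader(path);
            QObject *root = loader.instance();      // the plugin's single root object
            if (!root)
                return QList<Task *>();
            return root->findChildren<Task *>();    // every Task subclass, any concrete type
        }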

    Read the article

  • C++ Constructor initialization list strangeness

    - by Andy
    I have always been a good boy when writing my classes, prefixing all member variables with m_: class Test { int m_int1; int m_int2; public: Test(int int1, int int2) : m_int1(int1), m_int2(int2) {} }; void main() { Test t(10, 20); // Just an example } However, recently I forgot to do that and ended up writing: class Test { int int1; int int2; public: // Very questionable, but of course I meant to assign ::int1 to this->int1! Test(int int1, int int2) : int1(int1), int2(int2) {} }; Believe it or not, the code compiled with no errors/warnings and the assignments took place correctly! It was only when doing the final check before checking in my code that I realised what I had done. My question is: why did my code compile? Is something like that allowed in the C++ standard, or is it simply a case of the compiler being clever? In case you were wondering, I was using Visual Studio 2008. Thank you.
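
    For what it's worth, a minimal sketch of why the second version is well-formed: in a mem-initializer such as int1(int1), the name before the parentheses is looked up only as a member of the class, while the name inside the parentheses is ordinary lookup in the constructor's scope, where the parameter hides the member. So the member is initialized from the parameter, which is exactly what was intended:

        #include <iostream>

        class Test {
            int int1;
        public:
            Test(int int1) : int1(int1) {}      // member int1 <- parameter int1
            int value() const { return int1; }
        };

        int main()
        {
            Test t(10);
            std::cout << t.value() << "\n";     // prints 10
        }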

    Read the article

  • Why do .NET developers offer 32-bit/64-bit versions of .NET assemblies?

    - by Tyler
    Every now and then I see both x86 and x64 versions of a .NET assembly. Consider the following web part for SharePoint. Why wouldn't the developer just offer a single version and let the JIT compiler sort out the rest? When I see these kinds of offerings, is it just that the developer decided to create a native image using a tool like ngen in order to avoid the JIT? Someone please help me out here, I feel like I'm missing something of note. Update: From what I got below, both x86 and x64 builds are offered for one or more of the following reasons: The developer wanted to avoid JITing and created a native image of his code, targeting a given architecture using a tool like ngen.exe. The assembly contains platform-specific COM calls, so there is no point building it as AnyCPU. In these cases builds that target different platforms may contain different code. The assembly may contain Win32 calls using P/Invoke, which won't get remapped by the JIT, so the build should target the platform it is bound to.

    Read the article

  • How do I access Dictionary items?

    - by salvationishere
    I am developing a C# VS2008 / SQL Server website app and am new to the Dictionary class. Can you please advise on the best method of accomplishing this? Here is a code snippet: SqlConnection conn2 = new SqlConnection(connString); SqlCommand cmd = conn2.CreateCommand(); cmd.CommandText = "dbo.AppendDataCT"; cmd.CommandType = CommandType.StoredProcedure; cmd.Connection = conn2; SqlParameter p1, p2, p3; foreach (string s in dt.Rows[1].ItemArray) { DataRow dr = dt.Rows[1]; // second row p1 = cmd.Parameters.AddWithValue((string)dic[0], (string)dr[0]); p1.SqlDbType = SqlDbType.VarChar; p2 = cmd.Parameters.AddWithValue((string)dic[1], (string)dr[1]); p2.SqlDbType = SqlDbType.VarChar; p3 = cmd.Parameters.AddWithValue((string)dic[2], (string)dr[2]); p3.SqlDbType = SqlDbType.VarChar; } but this is giving me a compiler error: The best overloaded method match for 'System.Collections.Generic.Dictionary<string,string>.this[string]' has some invalid arguments I just want to access each value from "dic" and load it into these SQL parameters. How do I do this? Do I have to enter the key? The keys are named "col1", "col2", etc., so not the most user-friendly. Any other tips? Thanks!

    Read the article

  • Putting all methods in class definition

    - by Amnon
    When I use the pimpl idiom, is it a good idea to put all the method definitions inside the class definition? For example: // in A.h class A { class impl; boost::scoped_ptr<impl> pimpl; public: A(); int foo(); }; // in A.cpp class A::impl { // method defined in class int foo() { return 42; } // as opposed to only declaring the method, and defining elsewhere: float bar(); }; A::A() : pimpl(new impl) { } int A::foo() { return pimpl->foo(); } As far as I know, the only problems with putting a method definition inside a class definition are that (1) the implementation is visible in files that include the class definition, and (2) the compiler may make the method inline. These are not problems in this case since the class is defined in a private file, and inlining has no effect since the methods are called in only one place. The advantage of putting the definition inside the class is that you don't have to repeat the method signature. So, is this OK? Are there any other issues to be aware of?
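
    For comparison, a small sketch of the "declare in the class, define elsewhere" alternative mentioned above; both forms stay inside A.cpp, so the trade-off is really only about repeating the signature (the return value here is a placeholder):

        // In A.cpp, after the definition of class A::impl:
        float A::impl::bar()
        {
            return 42.0f;
        }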

    Read the article

  • how to filter files from the root "classes" and "test-classes" folders in Eclipse?

    - by Kidburla
    I am using ClearCase in my application, which generates a whole load of ".copyarea.db" files (one in every folder). These cause conflicts when publishing to Tomcat, as Eclipse will bundle the "classes" and "test-classes" folders into one JAR (not sure why it does this - as there is no need to have test classes available on the application server). Any folders with the same names will have a separate .copyarea.db in the classes and test-classes branches. I managed to get around this problem in general by adding ".copyarea.db" to the Filtered resources on the Java->Compiler->Building->Output Folder preference page. This stops the file appearing in source output (package/class folders), which covers the vast majority of cases. However there remains the problem of the root folders, i.e. "target/classes/.copyarea.db" and "target/test-classes/.copyarea.db". These files are not filtered as they are not part of the compile task. Just deleting the files manually doesn't help either, as Eclipse expects to find them and doesn't. How can I exclude these ".copyarea.db" files from the root "classes" and "test-classes" folders?

    Read the article

  • GWT - problems with constants in css

    - by hba
    Hi, I'm new to GWT; I'm building a small sample app. I have several CSS files. I'm able to successfully use the ClientBundle and CssResource to assign styles to the elements defined in my UiBinder script. Now I'd like to take it one step further and introduce CSS constants using the @def css rule. The @def works great when I define a constant and use it in the same CSS file. However I cannot use it in another CSS file. When I try to use the @eval rule to evaluate an existing constant the compiler throws an exception: "cannot make a static reference to the non-static method ". Here is an example of what I'm trying to do: ConstantStyle.css @def BACKGROUND red; ConstantStyle.java package abc; import ...; interface ConstantStyle extends CssResource { String BACKGROUND(); } MyStyle.css @eval BACKGROUND abc.ConstantStyle.BACKGROUND(); .myClass {background-color: BACKGROUND;} MyStyle.java package abc; import ...; interface MyStyle extends CssResource { String myClass(); } MyResources.java package abc; import ...; interface MyResources extends ClientBundle { @Source("ConstantStyle.css") ConstantStyle constantStyle(); @Source("MyStyle.css") MyStyle myStyle(); } Thanks in advance!

    Read the article

  • Freestanding ARM C++ Code - empty .ctors section

    - by Matthew Iselin
    I'm writing C++ code to run in a freestanding environment (basically an ARM board). It's been going well except I've run into a stumbling block - global static constructors. To my understanding the .ctors section contains a list of addresses to each static constructor, and my code simply needs to iterate this list and make calls to each function as it goes. However, I've found that this section in my binary is in fact completely empty! Google pointed towards using ".init_array" instead of ".ctors" (an EABI thing), but that has not changed anything. Any ideas as to why my static constructors don't exist? Relevant linker script and objdump output follows: .ctors : { . = ALIGN(4096); start_ctors = .; *(.init_array); *(.ctors); end_ctors = .; } .dtors : { . = ALIGN(4096); start_dtors = .; *(.fini_array); *(.dtors); end_dtors = .; } -- 2 .ctors 00001000 8014c000 8014c000 00054000 2**2 CONTENTS, ALLOC, LOAD, DATA <snip> 8014d000 g O .ctors 00000004 start_ctors <snip> 8014d000 g O .ctors 00000004 end_ctors I'm using an arm-elf targeted GCC compiler (4.4.1).
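
    For reference, a sketch of the iteration described above, assuming the start_ctors/end_ctors symbols from the linker script bracket an array of parameterless function pointers. (Separately, if the image is linked with --gc-sections, wrapping the input sections in KEEP(*(.init_array)) and KEEP(*(.ctors)) is one common way to stop them from being discarded, which would leave exactly this kind of empty section.)

        typedef void (*constructor_t)();

        extern constructor_t start_ctors[];   // defined by the linker script
        extern constructor_t end_ctors[];

        void run_static_constructors()
        {
            for (constructor_t *ctor = start_ctors; ctor != end_ctors; ++ctor)
                (*ctor)();                     // invoke each global constructor
        }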

    Read the article

  • On C++ global operator new: why it can be replaced

    - by Jimmy
    I wrote a small program in VS2005 to test whether C++ global operator new can be overloaded. It can. #include "stdafx.h" #include "iostream" #include "iomanip" #include "string" #include "new" using namespace std; class C { public: C() { cout<<"CTOR"<<endl; } }; void * operator new(size_t size) { cout<<"my overload of global plain old new"<<endl; // try to allocate size bytes void *p = malloc(size); return (p); } int main() { C* pc1 = new C; cin.get(); return 0; } In the above, my definition of operator new is called. If I remove that function from the code, then operator new in C:\Program Files (x86)\Microsoft Visual Studio 8\VC\crt\src\new.cpp gets called. All is good. However, in my opinion, my implementation of operator new does NOT overload the new in new.cpp; it CONFLICTS with it and violates the one-definition rule. Why doesn't the compiler complain about it? Or does the standard say that since operator new is so special, the one-definition rule does not apply here? Thanks.
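
    The C++ standard carves out the global allocation and deallocation functions as "replaceable": a program is allowed to supply its own definitions, and they take the place of the library versions rather than colliding with them under the one-definition rule. A sketch of what a matched pair might look like (pairing operator delete with the replaced operator new keeps allocation and deallocation in the same heap):

        #include <cstdlib>
        #include <iostream>
        #include <new>

        void *operator new(std::size_t size)
        {
            std::cout << "replacement operator new" << std::endl;
            if (void *p = std::malloc(size ? size : 1))   // never hand back null
                return p;
            throw std::bad_alloc();
        }

        void operator delete(void *p) throw()
        {
            std::cout << "replacement operator delete" << std::endl;
            std::free(p);
        }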

    Read the article

  • Preprocessor "macro function" vs. function pointer - best practice?

    - by Dustin
    I recently started a small personal project (RGB value to BGR value conversion program) in C, and I realised that a function that converts from RGB to BGR can not only perform the conversion but also the inversion. Obviously that means I don't really need two functions rgb2bgr and bgr2rgb. However, does it matter whether I use a function pointer instead of a macro? For example: int rgb2bgr (const int rgb); /* * Should I do this because it allows the compiler to issue * appropriate error messages using the proper function name, * not to mention possible debugging benefits? */ int (*bgr2rgb) (const int bgr) = rgb2bgr; /* * Or should I do this since it is merely a convenience * and they're really the same function anyway? */ #define bgr2rgb(bgr) (rgb2bgr (bgr)) I'm not necessarily looking for a change in execution efficiency as it's more of a subjective question out of curiosity. I am well aware of the fact that type safety is neither lost nor gained using either method. Would the function pointer merely be a convenience or are there more practical benefits to be gained of which I am unaware?
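
    A third option worth weighing, sketched below: a thin wrapper function. It keeps bgr2rgb as a real name for the compiler, debugger, and error messages, costs nothing once inlined, and avoids both the macro's lack of a symbol and the function pointer's extra indirection:

        int rgb2bgr(const int rgb);          /* the existing conversion */

        static inline int bgr2rgb(const int bgr)
        {
            return rgb2bgr(bgr);             /* same byte swap in either direction */
        }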

    Read the article

  • Sharepoint fails to load a C++ dll on windows 2008

    - by Nathan
    I have a SharePoint DLL that does some licensing things, and as part of the code it uses an external C++ DLL to get the serial number of the hard disk. When I run this application on Windows Server 2003 it works fine, but on 2008 the whole site (loaded on load) crashes and resets continually. This is not 2008 R2, and it is the same in 64-bit and 32-bit. If I put a Debugger.Break() before the DLL execution then I see the code get to the point of the break and then never come back into the DLL again. I do get some debug assertion warnings from within the function, again only on 2008, but I'm not sure this is related. I created a console app that runs the C# DLL, which in turn loads the C++ DLL, and this works perfectly on 2008 (although it does show the assertion errors, but I have suppressed these now). The assertion errors are not in my code but within ICtypes.c and are not something I can debug. If I put a breakpoint in the DLL it is never hit, and the debugger says "step in: Stepping over non user code" if I try to debug into the DLL using VS. I have tried wrapping the code used to call the DLL in: SPSecurity.RunWithElevatedPrivileges(delegate() but this also does not help. I have the source code for this DLL so that is not a problem. If I delete the DLL from the directory I get an error about a missing DLL; if I put it back, there is no error or warning, just a complete failure. If I replace this code with a hardcoded string the whole application works fine. Any advice would be much appreciated; I can't understand why it works as a console app yet not when run by SharePoint, and this is with the same user account, on the same machine... This is the code used to call the DLL: [DllImport("idDll.dll", EntryPoint = "GetMachineId", SetLastError = true)] extern static string GetComponentId([MarshalAs(UnmanagedType.LPStr)]String s); public static string GetComponentId() { Debugger.Break(); if (_machine == string.Empty) { string temp = ""; id= ComponentId.GetComponentId(temp); } return id; }
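
    Purely as a guess at a contributing factor: when a DllImport is declared to return a .NET string, the interop marshaller takes ownership of the returned native buffer and frees it, which can corrupt the heap if the C++ side did not allocate it the way the marshaller expects, and behaviour like that can easily differ between a console host and the IIS/SharePoint worker process. Below is a sketch of a more defensive export convention (the names and signature are assumptions, not the real idDll.dll interface); the C# side would then pass a StringBuilder instead of receiving a returned string.

        #include <cstring>

        // extern "C" avoids C++ name mangling; the caller supplies the buffer,
        // so no string memory crosses the native/managed boundary.
        extern "C" __declspec(dllexport)
        int GetMachineId(char *buffer, int bufferLen)
        {
            const char serial[] = "PLACEHOLDER-SERIAL";   // hypothetical value
            if (buffer == 0 || bufferLen < static_cast<int>(sizeof(serial)))
                return 0;                                 // failure
            std::memcpy(buffer, serial, sizeof(serial));
            return 1;                                     // success
        }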

    Read the article

  • Problems with variadic function

    - by morpheous
    I have the following function from some legacy code that I am maintaining. long getMaxStart(long start, long count, const myStruct *s1, ...) { long i1, maxstart; myStruct *s2; va_list marker; maxstart = start; /*BUGFIX: 003 */ /*(va_start(marker, count);*/ va_start(marker, s1); for (i1 = 1; i1 <= count; i1++) { s2 = va_arg(marker, myStruct *); /* <- s2 is assigned null here */ maxstart = MAX(maxstart, s2->firstvalid); /* <- SEGV here */ } va_end(marker); return (maxstart); } When the function is called with only one myStruct argument, it causes a SEGV. The code compiled and ran without crashing on Windows XP when I compiled it using VS2005. I have now moved the code to Ubuntu Karmic and I am having problems with the stricter compiler on Linux. Is anyone able to spot what is causing the parameter not to be read correctly in the va_arg() statement? I am compiling using gcc version 4.4.1. Edit: The statement that causes the SEGV is this one: start = getMaxStart(start, 1, ms1); The variables 'start' and 'ms1' have valid values when the code execution first reaches this line.
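
    One hedged observation about the failing call: getMaxStart(start, 1, ms1) passes count = 1 but nothing after s1, yet the loop pulls count pointers off the va_list, so the first va_arg reads an argument that was never passed. A sketch of a fix, under the assumption that count is meant to include s1 itself:

        /* Requires <stdarg.h> and the MAX macro, as in the original file. */
        long getMaxStart(long start, long count, const myStruct *s1, ...)
        {
            long i1, maxstart = start;
            const myStruct *s2;
            va_list marker;

            if (count >= 1)
                maxstart = MAX(maxstart, s1->firstvalid);   /* use s1 directly */

            va_start(marker, s1);
            for (i1 = 2; i1 <= count; i1++) {               /* only count - 1 varargs */
                s2 = va_arg(marker, const myStruct *);
                maxstart = MAX(maxstart, s2->firstvalid);
            }
            va_end(marker);
            return maxstart;
        }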

    Read the article

  • Syntax Error? When parsing XML value

    - by Ace Munim
    I don't know if I'm making a syntax error, but I am getting TypeError: 'undefined' is not an object (evaluating 'xmlDoc.getElementsByTagName("icon")[i].childNodes') It gives me this problem when I'm parsing the XML from my server. My actual JavaScript code is like this: var xmlDoc = Obj.responseXML; var count = 0; if(xmlDoc){ while(count <= xmlDoc.getElementsByTagName("item").length){ document.getElementById("flow").innerHTML += "<div class='item'><img class='content' src='" + xmlDoc.getElementsByTagName("icon")[i].childNodes[0].nodeValue.replace(/\s+$/g,' ') +"' /></div>"; count++; } }else{ alert("Unable to parse!"); } and my XML goes like this: <feed> <item> <title>Given Title</title> <icon> http://i178.photobucket.com/albums/w255/ace003_album/Logo-ETC-RGB-e1353503652739.jpg </icon> </item> <item>...</item> <item>...</item> <item>...</item> <item>...</item> <item>...</item> <item>...</item> </feed> I just want to parse the image link and show it.

    Read the article

  • different thread accessing MemoryStream

    - by Wayne
    There's a bit of code which writes data to a MemoryStream object directly into its data buffer by calling GetBuffer(). It also uses and updates the Position property and SetLength() appropriately. This code works correctly 99.9999% of the time. Literally. Only once every so many 100,000s of iterations will it barf. The specific problem is that the memory.Position property suddenly returns zero instead of the appropriate value. However, code was added that checks for the 0 and throws an exception which includes a log of the MemoryStream properties like Position and Length, gathered in a separate method. Those return the correct value. Further checks show that when this rare condition occurs, memory.Position is zero only inside this particular method. Okay. Obviously, this must be a threading issue. But this code is well locked. However, the nature of this software is that it's organized by "tasks" with a scheduler, and so any one of several actual O/S threads may run this code at any given time--but never more than one at a time. So it's my guess that ordinarily it so happens that the same thread keeps getting used for this method, and then on a rare occasion a different thread gets used. Then, due to compiler optimizations, the different thread never gets the correct value. It gets a "stale" value. Ordinarily in a situation like this I would apply the "volatile" keyword to the variable in question. But that variable (those variables) is inside the MemoryStream object. Does anyone have any other ideas? Or does this mean we have to implement our own MemoryStream object? (Just like we end up having to do with practically every collection in .NET?) It's a shame to have such an awesome platform as .NET and have virtually the entire system useless as-is for seriously parallelized applications. If I'm wrong or you have other ideas, please advise. Sincerely, Wayne

    Read the article

  • Using child visitor in C#

    - by Thomas Matthews
    I am setting up a testing component and trying to keep it generic. I want to use a generic Visitor class, but not sure about using descendant classes. Example: public interface Interface_Test_Case { void execute(); void accept(Interface_Test_Visitor v); } public interface Interface_Test_Visitor { void visit(Interface_Test_Case tc); } public interface Interface_Read_Test_Case : Interface_Test_Case { uint read_value(); } public class USB_Read_Test : Interface_Read_Test_Case { void execute() { Console.WriteLine("Executing USB Read Test Case."); } void accept(Interface_Test_Visitor v) { Console.WriteLine("Accepting visitor."); } uint read_value() { Console.WriteLine("Reading value from USB"); return 0; } } public class USB_Read_Visitor : Interface_Test_Visitor { void visit(Interface_Test_Case tc) { Console.WriteLine("Not supported Test Case."); } void visit(Interface_Read_Test_Case rtc) { Console.WriteLine("Not supported Read Test Case."); } void visit(USB_Read_Test urt) { Console.WriteLine("Yay, visiting USB Read Test case."); } } // Code fragment USB_Read_Test test_case; USB_Read_Visitor visitor; test_case.accept(visitor); What are the rules the C# compiler uses to determine which of the methods in USB_Read_Visitor will be executed by the code fragment? I'm trying to factor out dependencies of my testing component. Unfortunately, my current Visitor class contains visit methods for classes not related to the testing component. Am I trying to achieve the impossible?

    Read the article

  • Disable ARC with Xcode 5

    - by user2187565
    First, sorry for my bad English; I'm French and 15 years old, but StackOverflow is for me the best forum for developers. So, in previous versions of Xcode, we could disable ARC (Automatic Reference Counting) in the project settings when creating the project. Not so with Xcode 5, and ARC poses a problem for me: with a property list file, in the reading step, Xcode gives me an error: "implicit conversion of 'int' to 'id' is disallowed with ARC". I did not have this problem with the same code in Xcode 4. In my property list file, the keys are numbers, and they are numbers in my viewController.m as well. NIKOS M.: No problem, but I don't see how I can add a compiler flag with the 5th version of Xcode. The code (with French strings...): NSString *error; NSString *rootPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0]; NSString *plistPath = [rootPath stringByAppendingPathComponent:@"Save.plist"]; NSArray *keys = [NSArray arrayWithObjects:@"valeurCompteur1", @"valeurCompteur2", @"valeurCompteur3", @"valeurCompteur4", @"valeurCompteur5", @"nomCompteur1", @"nomCompteur2", @"nomCompteur3", @"nomCompteur4", @"nomCompteur5", nil]; NSArray *objs = [NSArray arrayWithObjects: compteur1, compteur2, compteur3, compteur4, compteur5, nameC1, nameC2, nameC3, nameC4, nameC5, nil]; REVIEW: When I disallow ARC for the target, a warning persists. How can I resolve that, please? Thank you very much.

    Read the article

  • `enable_shared_from_this` has a non-virtual destructor

    - by Shtééf
    I have a pet project with which I experiment with new features of the upcoming C++0x standard. While I have experience with C, I'm fairly new to C++. To train myself in best practices (besides reading a lot), I have enabled some strict compiler parameters (using GCC 4.4.1): -std=c++0x -Werror -Wall -Winline -Weffc++ -pedantic-errors This has worked fine for me. Until now, I have been able to resolve all obstacles. However, I have a need for enable_shared_from_this, and this is causing me problems. I get the following warning (an error, in my case) when compiling my code (probably triggered by -Weffc++): base class ‘class std::enable_shared_from_this<Package>’ has a non-virtual destructor So basically, I'm a bit bugged by this implementation of enable_shared_from_this, because: A destructor of a class that is intended for subclassing should always be virtual, IMHO. The destructor is empty; why have it at all? I can't imagine anyone would want to delete their instance by reference to enable_shared_from_this. But I'm looking for ways to deal with this, so my question really is: is there a proper way to deal with this? And: am I correct in thinking that this destructor is bogus, or is there a real purpose to it?
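
    A small sketch of why the missing virtual destructor is normally harmless with this particular base class: a shared_ptr records, at the point it takes ownership, how to destroy the object it was given, so the derived destructor runs without any virtual dispatch through the enable_shared_from_this base. (Package here is just the name borrowed from the warning above.)

        #include <iostream>
        #include <memory>

        struct Package : std::enable_shared_from_this<Package>
        {
            ~Package() { std::cout << "~Package ran\n"; }
        };

        int main()
        {
            std::shared_ptr<Package> p(new Package);
        }   // prints "~Package ran": the deleter stored in p deletes a
            // Package*, never a pointer to the base class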

    Read the article

  • C#: Need one of my classes to trigger an event in another class to update a text box

    - by Matt
    Total n00b to C# and events although I have been programming for a while. I have a class containing a text box. This class creates an instance of a communication manager class that is receiving frames from the Serial Port. I have this all working fine. Every time a frame is received and its data extracted, I want a method to run in my class with the text box in order to append this frame data to the text box. So, without posting all of my code I have my form class... public partial class Form1 : Form { CommManager comm; public Form1() { InitializeComponent(); comm = new CommManager(); } private void updateTextBox() { //get new values and update textbox } . . . and I have my CommManager class class CommManager { //here we manage the comms, recieve the data and parse the frame } SO... essentially, when I parse that frame, I need the updateTextBox method from the form class to run. I'm guessing this is possible with events but I can't seem to get it to work. I tried adding an event handler in the form class after creating the instance of CommManager as below... comm = new CommManager(); comm.framePopulated += new EventHandler(updateTextBox); ...but I must be doing this wrong as the compiler doesn't like it... Any ideas?!

    Read the article

  • Are you using C++0x today? [closed]

    - by Roger Pate
    This is a question in two parts, the first is the most important and concerns now: Are you following the design and evolution of C++0x? What blogs, newsgroups, committee papers, and other resources do you follow? Even where you're not using any new features, how have they affected your current choices? What new features are you using now, either in production or otherwise? The second part is a follow-up, concerning the new standard once it is final: Do you expect to use it immediately? What are you doing to prepare for C++0x, other than as listed for the previous questions? Obviously, compiler support must be there, but there's still co-workers, ancillary tools, and other factors to consider. What will most affect your adoption? Edit: The original really was too argumentative; however, I'm still interested in the underlying question, so I've tried to clean it up and hopefully make it acceptable. This seems a much better avenue than duplicating—even though some answers responded to the argumentative tone, they still apply to the extent that they addressed the questions, and all answers are community property to be cleaned up as appropriate, too.

    Read the article

  • Problem with inner classes of the same name in Visual C++

    - by starblue
    I have a problem with Visual C++, where apparently inner classes with the same name but in different outer classes are confused. The problem occurs for two layers, where each layer has a listener interface as an inner class. B is a listener of A, and has its own listener in a third layer above it (not shown). The structure of the code looks like this: A.h class A { class Listener { Listener(); virtual ~Listener() = 0; }; [...] }; B.h class B : public A::Listener { class Listener { Listener(); virtual ~Listener() = 0; }; [...] }; B.cpp B::Listener::Listener() {} B::Listener::~Listener() {} I get the error: B.cpp(49) : error C2509: '{ctor}' : member function not declared in 'B' The C++ compiler for Renesas sh2a has no problem with this, but then it is more liberal than Visual C++ in some other respects, too. If I rename the listener interfaces to have different names the problem goes away, but I'd like to avoid that (the real class names instead of A or B are rather long). Is what I'm doing correct C++, or is the complaint by Visual C++ justified? Is there a way to work around this problem without renaming the listener interfaces?
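
    If renaming really is off the table, one possible workaround (a sketch only, and it changes the design slightly) is to keep B::Listener entirely inline by giving it a plain virtual destructor instead of a pure virtual one, so B.cpp no longer needs the out-of-line B::Listener::... definitions that Visual C++ is mis-resolving:

        class B : public A::Listener
        {
        public:
            class Listener
            {
            public:
                Listener() {}
                virtual ~Listener() {}   // defined in-class: nothing left for B.cpp
            };
            // [...]
        };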

    Read the article

  • Why do I get this strange output behavior?

    - by WilliamKF
    I have the following program test.cc: #include <iostream> unsigned char bogus1[] = { // Changing # of periods (0x2e) changes output after periods. 0x2e, 0x2e, 0x2e, 0x2e }; unsigned int bogus2 = 1816; // Changing this value changes output. int main() { std::clog << bogus1; } I build it with: g++ -g -c -o test.o test.cc; g++ -static-libgcc -o test test.o Using g++ version 3.4.6, I run it through valgrind and nothing is reported wrong. However the output has two extra control characters and looks like this: .... That's a control-X and a control-G at the end. If you change the value of bogus2 you get different control characters. If you change the number of periods in the array the issue goes away or changes. I suspect it is a memory corruption bug in the compiler or the iostream package. What is going on here?
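
    A likely explanation, sketched below: operator<< for an unsigned char* prints it as a C string, so it keeps reading past the four 0x2e bytes until it happens to hit a zero byte. With bogus2 == 1816 (0x0718) laid out after the array on a little-endian machine, the next bytes are 0x18 and 0x07, i.e. exactly Ctrl-X and Ctrl-G. Giving the array an explicit terminator makes the output just the four periods, with no memory corruption involved:

        #include <iostream>

        unsigned char bogus1[] = {
            0x2e, 0x2e, 0x2e, 0x2e,
            0x00                      // explicit terminator for operator<<
        };

        int main()
        {
            std::clog << bogus1;      // now stops at the added zero byte
        }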

    Read the article

  • Enumeration trouble: redeclared as different kind of symbol

    - by Matt
    Hello all. I am writing a program that is supposed to help me learn about enumeration data types in C++. The current trouble is that the compiler doesn't like my enum usage when I try to use the new data type as I would other data types. I am getting the error "redeclared as different kind of symbol" when compiling my triangleShape function. Take a look at the relevant code. Any insight is appreciated! Thanks! (All functions are in their own .cpp files.) header file #ifndef HEADER_H_INCLUDED #define HEADER_H_INCLUDED #include <iostream> #include <iomanip> using namespace std; enum triangleType {noTriangle, scalene, isoceles, equilateral}; //prototypes void extern input(float&, float&, float&); triangleType extern triangleShape(float, float, float); /*void extern output (float, float, float);*/ void extern myLabel(const char *, const char *); #endif // HEADER_H_INCLUDED main function //8.1 main // this progam... #include "header.h" int main() { float sideLength1, sideLength2, sideLength3; char response; do //main loop { input (sideLength1, sideLength2, sideLength3); triangleShape (sideLength1, sideLength2, sideLength3); //output (sideLength1, sideLength2, sideLength3); cout << "\nAny more triangles to analyze? (y,n) "; cin >> response; } while (response == 'Y' || response == 'y'); myLabel ("8.1", "2/11/2011"); return 0; } triangleShape function # include "header.h" triangleType triangleShape(sideLenght1, sideLength2, sideLength3) { triangleType triangle; return triangle; }
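
    A sketch of the definition with typed parameters, which is the most likely thing the compiler is objecting to: as posted, the parameter list of triangleShape contains bare names with no types (the body below is only a placeholder):

        #include "header.h"

        triangleType triangleShape(float side1, float side2, float side3)
        {
            triangleType triangle = noTriangle;
            // ... classify the triangle from side1, side2, side3 ...
            return triangle;
        }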

    Read the article

  • Commenting out portions of code in Scala

    - by akauppi
    I am looking for a C(++) #if 0 -like way of being able to comment out whole pieces of Scala source code, for keeping around experimental or expired code for a while. I tried out a couple of alternatives and would like to hear what you use, and whether you have come up with something better. // Simply block-marking N lines by '//' is one way... // <tags> """ anything My editor makes this easy, but it's not really The Thing. It gets easily mixed with actual one-line comments. Then I figured there's native XML support, so: <!-- ... did not work --> Wrapping in XML works, unless you have <tags> within the block: class none { val a= <ignore> ... cannot have //<tags> <here> (not even in end-of-line comments!) </ignore> } The same for multi-line strings seems kind of best, but there's an awful lot of boilerplate (not fashionable in Scala) to please the compiler (less if you're doing this within a class or an object): object none { val ignore= """ This seems like ... <truly> <anything goes> but three "'s of course """ } The 'right' way to do this might be: /*** /* ... works but not properly syntax highlighted in SubEthaEdit (or StackOverflow) */ ***/ ..but that matches only the /* and */, not e.g. /*** to ***/. This means the comments within the block need to be balanced. And - the current Scala syntax highlighting mode for SubEthaEdit fails miserably on this. As a comparison, Lua has --[==[ matching ]==] and so forth. I think I'm spoilt? So - is there some useful trick I'm overlooking?

    Read the article
