Search Results

Search found 5819 results on 233 pages for 'compiler theory'.


  • Can C++ do something like an ML case expression?

    - by Nathan Andrew Mullenax
    So, I've run into this sort of thing a few times in C++ where I'd really like to write something like

        case (a,b,c,d) of
            (true,  true, _, _    ) => expr
          | (false, true, _, false) => expr
          | ...

    But in C++, I invariably end up with something like this:

        bool c11 = color1.count(e.first)>0;
        bool c21 = color2.count(e.first)>0;
        bool c12 = color1.count(e.second)>0;
        bool c22 = color2.count(e.second)>0;

        // no vertex in this edge is colored: requeue
        if( !(c11||c21||c12||c22) )
        {
            edges.push(e);
        }
        // endpoints already same color: failure condition
        else if( (c11&&c12)||(c21&&c22) )
        {
            results.push_back("NOT BICOLORABLE.");
            return true;
        }
        // nothing to do: nodes are already colored and different from one another
        else if( (c11&&c22)||(c21&&c12) )
        {
        }
        // first is c1, second is not set
        else if( c11 && !(c12||c22) )
        {
            color2.insert( e.second );
        }
        // first is c2, second is not set
        else if( c21 && !(c12||c22) )
        {
            color1.insert( e.second );
        }
        // first is not set, second is c1
        else if( !(c11||c21) && c12 )
        {
            color2.insert( e.first );
        }
        // first is not set, second is c2
        else if( !(c11||c21) && c22 )
        {
            color1.insert( e.first );
        }
        else
        {
            std::cout << "Something went wrong.\n";
        }

    I'm wondering if there's any way to clean all of those ifs and elses up, as it seems especially error prone. It would be even better if it were possible to get the compiler to complain, as SML does, when a case expression (or statement in C++) isn't exhaustive. I realize this question is a bit vague. Maybe, in sum: how would one represent an exhaustive truth table with an arbitrary number of variables in C++ succinctly? Thanks in advance.
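    A sketch of one common workaround (not from the original post; the names and case values are illustrative): pack the flags into an integer and switch on it. The default case stands in for SML's wildcard, and giving the mask an enum type lets some compilers warn about unhandled values, though true exhaustiveness checking is still not guaranteed:

        #include <iostream>

        int main()
        {
            bool c11 = false, c21 = true, c12 = true, c22 = false;

            // Pack the four flags into one value: bit 3 = c11, bit 2 = c21,
            // bit 1 = c12, bit 0 = c22.
            unsigned mask = (c11 ? 8u : 0u) | (c21 ? 4u : 0u)
                          | (c12 ? 2u : 0u) | (c22 ? 1u : 0u);

            switch (mask)
            {
            case 0x0:           // no endpoint colored: requeue
                std::cout << "requeue\n";
                break;
            case 0xA: case 0x5: // both endpoints already the same color
                std::cout << "NOT BICOLORABLE.\n";
                break;
            case 0x9: case 0x6: // both colored, different: nothing to do
                break;
            default:            // exactly one endpoint colored
                std::cout << "color the uncolored endpoint\n";
                break;
            }
            return 0;
        }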

    Read the article

  • C++ Constructor initialization list strangeness

    - by Andy
    I have always been a good boy when writing my classes, prefixing all member variables with m_:

        class Test
        {
            int m_int1;
            int m_int2;
        public:
            Test(int int1, int int2) : m_int1(int1), m_int2(int2) {}
        };

        void main()
        {
            Test t(10, 20); // Just an example
        }

    However, recently I forgot to do that and ended up writing:

        class Test
        {
            int int1;
            int int2;
        public:
            // Very questionable, but of course I meant to assign the
            // parameter int1 to this->int1!
            Test(int int1, int int2) : int1(int1), int2(int2) {}
        };

    Believe it or not, the code compiled with no errors/warnings and the assignments took place correctly! It was only when doing the final check before checking in my code that I realised what I had done. My question is: why did my code compile? Is something like that allowed in the C++ standard, or is it simply a case of the compiler being clever? In case you were wondering, I was using Visual Studio 2008. Thank you.
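    For what it's worth, this is well-defined C++ rather than a compiler quirk: in a mem-initializer, the name before the parentheses is looked up in class scope (so it names the member), while the name inside is looked up in the constructor's scope (so it names the parameter). A minimal sketch illustrating the rule (not from the original post):

        #include <iostream>

        class Point
        {
            int x;
            int y;
        public:
            // Outside the parentheses 'x' is the member (class scope);
            // inside the parentheses 'x' is the parameter (constructor scope).
            Point(int x, int y) : x(x), y(y) {}
            void print() const { std::cout << x << ", " << y << "\n"; }
        };

        int main()
        {
            Point p(10, 20);
            p.print(); // prints "10, 20"
            return 0;
        }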

    Read the article

  • Extending the method pool of a concrete class derived from an interface

    - by CelGene
    Hello, I had created an interface to abstract a part of the source for a later extension. But what if I want to extend the derived classes with some special methods? So I have the interface here:

        class virtualFoo
        {
        public:
            virtual ~virtualFoo() { }
            virtual void create() = 0;
            virtual void initialize() = 0;
        };

    and one derived class with an extra method:

        class concreteFoo : public virtualFoo
        {
        public:
            concreteFoo() { }
            ~concreteFoo() { }
            virtual void create() { }
            virtual void initialize() { }
            void ownMethod() { }
        };

    So I try to create an instance of concreteFoo and try to call ownMethod like this:

        void main()
        {
            virtualFoo* ptr = new concreteFoo();
            concreteFoo* ptr2 = dynamic_cast<concreteFoo*>(ptr);
            if(NULL != ptr2)
                ptr2->ownMethod();
        }

    It works but is not really the elegant way. If I try to use ptr->ownMethod() directly, the compiler complains that this method is not part of virtualFoo. Is there a chance to do this without using dynamic_cast? Thanks in advance!
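    Where the program logic guarantees the object really is a concreteFoo, one option (a sketch reusing the classes above, not from the original post) is to keep the pointer's static type concrete and hand it out as virtualFoo* only where the abstraction is needed; where a downcast is unavoidable and the type is known by construction, static_cast avoids the runtime check:

        concreteFoo* foo = new concreteFoo(); // keep the concrete type here
        foo->ownMethod();                     // no cast needed

        virtualFoo* ptr = foo;                // implicit upcast for interface users
        ptr->create();

        // When the dynamic type is known by construction, static_cast suffices:
        static_cast<concreteFoo*>(ptr)->ownMethod();

        delete foo;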

    Read the article

  • Why do .NET developers offer 32-bit/64-bit versions of .NET assemblies?

    - by Tyler
    Every now and then I see both x86 and x64 versions of a .NET assembly. Consider the following web part for SharePoint. Why wouldn't the developer just offer a single version and let the JIT compiler sort out the rest? When I see these kinds of offerings, is it just that the developer decided to create a native image using a tool like ngen in order to avoid a JIT? Someone please help me out here, I feel like I'm missing something of note.

    Updated: From what I got below, both x86 and x64 builds are offered for one or more of the following reasons:

      • The developer wanted to avoid JITing and created a native image of the code, targeting a given architecture using a tool like ngen.exe.
      • The assembly contains platform-specific COM calls, so there is no point building it as AnyCPU. In these cases, builds that target different platforms may contain different code.
      • The assembly may contain Win32 calls using P/Invoke, which won't get remapped by the JIT, so the build should target the platform it is bound to.

    Read the article

  • How do I access Dictionary items?

    - by salvationishere
    I am developing a C# VS2008 / SQL Server website app and am new to the Dictionary class. Can you please advise on the best method of accomplishing this? Here is a code snippet:

        SqlConnection conn2 = new SqlConnection(connString);
        SqlCommand cmd = conn2.CreateCommand();
        cmd.CommandText = "dbo.AppendDataCT";
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Connection = conn2;
        SqlParameter p1, p2, p3;
        foreach (string s in dt.Rows[1].ItemArray)
        {
            DataRow dr = dt.Rows[1]; // second row
            p1 = cmd.Parameters.AddWithValue((string)dic[0], (string)dr[0]);
            p1.SqlDbType = SqlDbType.VarChar;
            p2 = cmd.Parameters.AddWithValue((string)dic[1], (string)dr[1]);
            p2.SqlDbType = SqlDbType.VarChar;
            p3 = cmd.Parameters.AddWithValue((string)dic[2], (string)dr[2]);
            p3.SqlDbType = SqlDbType.VarChar;
        }

    but this is giving me a compiler error:

        The best overloaded method match for 'System.Collections.Generic.Dictionary<string,string>.this[string]' has some invalid arguments

    I just want to access each value from "dic" and load it into these SQL parameters. How do I do this? Do I have to enter the key? The keys are named "col1", "col2", etc., so not the most user-friendly. Any other tips? Thanks!

    Read the article

  • How do I implement configurations and settings?

    - by Malvolio
    I'm writing a system that is deployed in several places, and each site needs its own configurations and settings. A "configuration" is a named value that is necessary to a particular site (e.g., the database URL or S3 bucket name); every configuration is necessary, there is not usually a default, and it's typically string-valued. A "setting" is a named value too, but it just tweaks the behavior of the system; it's often numeric or Boolean, and there's usually some default. So far, I've been using property files or things like them, but it's a terrible solution. Several times, a developer has added a requirement for a configuration but not added the value to the file for the live configuration, so the new release passed all the tests, then failed when released to live. Better, of course, for every file to be compiled — so if there's a missing configuration, or one of the wrong type, it won't get past the compiler — and to inject the site-specific class into the build for each site. As a bonus, a Scala file can easily model more complex values, especially lists, but also maps and tuples. The downside is that the files are sometimes maintained by people who aren't developers, so it has to be pretty self-explanatory, which was the advantage of property files. (Someone explain XML configurations to me: all the complexity of a compilable file but the run-time risk of a property file.) What I'm looking for is an easy pattern for defining a group of required names and allowable values. Any suggestions?

    Read the article

  • final transient fields and serialization

    - by doublep
    Is it possible to have final transient fields that are set to any non-default value after serialization in Java? My use case is a cache variable — that's why it is transient. I also have a habit of making Map fields that won't be changed (i.e., the contents of the map are changed, but the object itself remains the same) final. However, these attributes seem to be contradictory — while the compiler allows such a combination, I cannot have the field set to anything but null after deserialization. I tried the following, without success:

      • simple field initialization (shown in the example): this is what I normally do, but the initialization doesn't seem to happen after deserialization;
      • initialization in the constructor (I believe this is semantically the same as above, though);
      • assigning the field in readObject() — cannot be done since the field is final.

    In the example, cache is public only for testing.

        import java.io.*;
        import java.util.*;

        public class test
        {
            public static void main (String[] args) throws Exception
            {
                X x = new X ();
                System.out.println (x + " " + x.cache);

                ByteArrayOutputStream buffer = new ByteArrayOutputStream ();
                new ObjectOutputStream (buffer).writeObject (x);
                x = (X) new ObjectInputStream (new ByteArrayInputStream (buffer.toByteArray ())).readObject ();
                System.out.println (x + " " + x.cache);
            }

            public static class X implements Serializable
            {
                public final transient Map <Object, Object> cache = new HashMap <Object, Object> ();
            }
        }

    Output:

        test$X@1a46e30 {}
        test$X@190d11 null

    Read the article

  • SharePoint fails to load a C++ DLL on Windows 2008

    - by Nathan
    I have a SharePoint DLL that does some licensing things, and as part of the code it uses an external C++ DLL to get the serial number of the hard disk. When I run this application on Windows Server 2003 it works fine, but on 2008 the whole site (loaded on load) crashes and resets continually. This is not 2008 R2, and it is the same in 64 or 32 bits. If I put a debugger break before the DLL execution, I see the code get to the point of the break and then never come back into the DLL again. I do get some debug assertion warnings from within the function, again only on 2008, but I'm not sure this is related. I created a console app that runs the C# DLL, which in turn loads the C++ DLL, and this works perfectly on 2008 (although it does show the assertion errors, but I have suppressed these now). The assertion errors are not in my code but within ICtypes.c, and not something I can debug. If I put a breakpoint in the DLL it is never hit, and the debugger says "step in: Stepping over non user code" if I try to debug into the DLL using VS. I have tried wrapping the code used to call the DLL in:

        SPSecurity.RunWithElevatedPrivileges(delegate()

    but this also does not help. I have the source code for this DLL, so that is not a problem. If I delete the DLL from the directory I get an error about a missing DLL; if I replace it, back to no error or warning, just a complete failure. If I replace this code with a hardcoded string, the whole application works fine. Any advice would be much appreciated. I can't understand why it works as a console app yet not when run by SharePoint; this is with the same user account, on the same machine... This is the code used to call the DLL:

        [DllImport("idDll.dll", EntryPoint = "GetMachineId", SetLastError = true)]
        extern static string GetComponentId([MarshalAs(UnmanagedType.LPStr)] String s);

        public static string GetComponentId()
        {
            Debugger.Break();
            if (_machine == string.Empty)
            {
                string temp = "";
                id = ComponentId.GetComponentId(temp);
            }
            return id;
        }
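    One classic cause of this kind of intermittent crash (an assumption, not confirmed in the post): a P/Invoke declared with a string return type makes the CLR marshaler free the returned char* with CoTaskMemFree, corrupting the heap if the C++ side allocated the string any other way. A safer shape is a native export that fills a caller-supplied buffer; a minimal C++ sketch (the export name and serial lookup are placeholders):

        #include <cstring>

        // Hypothetical export: the caller owns the buffer, so the CLR
        // marshaler never attempts to free native memory.
        extern "C" __declspec(dllexport) int GetMachineId(char* buffer, int bufferSize)
        {
            const char serial[] = "ABC123";         // stand-in for the real lookup
            if (buffer == NULL || bufferSize < (int)sizeof(serial))
                return -1;                          // buffer too small
            memcpy(buffer, serial, sizeof(serial)); // includes the trailing '\0'
            return 0;
        }

    The managed declaration would then take a StringBuilder with a preallocated capacity instead of returning a string.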

    Read the article

  • Resources and techniques/methods for SCJP preparation?

    - by BenoitParis
    I am taking the SCJP 6 exam in a month. I have the "SCJP Sun Certified Programmer for Java 6 Exam 310-065" book. It seems great for the exam, but I want your advice on this. Getting the closest possible to 100% would be great. I have found a site that answered some of the questions you ask yourself when you go through the book. Here it is: http://www.janeg.ca/java2.html — as you can see, it was written for Java 2. :/ I have written another specific question here on Stack Overflow about the usefulness of the JVM specification and Java compiler code for the SCJP, and will update the results here. Please share the resources you used in preparing for the exam, and specify any resources that you think might help. Any type of resource is welcome: books, code, specs, sites, wikis, papers, online tests, grandmas... Please also share any method/technique that helped you prepare for the exam, and comment on the return you got from the resource and the method (for the learning process and for points in the exam). I'll begin:

      • Book: "SCJP Sun Certified Programmer for Java 6 Exam 310-065". Seems like the official book for the preparation.
      • Technique: writing code in a text editor and compiling it with javac to test a question. No IDEs! It helps you get a straight answer to a question you have, and it helps you pay attention to every word in the code (and this is very important in the SCJP).

    EDIT, added dimension: are there good, up-to-date online tests?

    Read the article

  • Preprocessor "macro function" vs. function pointer - best practice?

    - by Dustin
    I recently started a small personal project (an RGB value to BGR value conversion program) in C, and I realised that a function that converts from RGB to BGR can not only perform the conversion but also the inversion. Obviously that means I don't really need two functions rgb2bgr and bgr2rgb. However, does it matter whether I use a function pointer instead of a macro? For example:

        int rgb2bgr (const int rgb);

        /*
         * Should I do this because it allows the compiler to issue
         * appropriate error messages using the proper function name,
         * not to mention possible debugging benefits?
         */
        int (*bgr2rgb) (const int bgr) = rgb2bgr;

        /*
         * Or should I do this since it is merely a convenience
         * and they're really the same function anyway?
         */
        #define bgr2rgb(bgr) (rgb2bgr (bgr))

    I'm not necessarily looking for a change in execution efficiency, as it's more of a subjective question out of curiosity. I am well aware of the fact that type safety is neither lost nor gained using either method. Would the function pointer merely be a convenience, or are there more practical benefits to be gained of which I am unaware?
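    A third option worth weighing (a sketch, not from the original post): a thin wrapper function. Unlike the function pointer it cannot be reassigned at runtime and calls to it can still be inlined, and unlike the macro it is a real symbol that appears in error messages and backtraces:

        /* Thin wrapper: a real function named bgr2rgb that forwards to rgb2bgr. */
        static inline int bgr2rgb(const int bgr)
        {
            return rgb2bgr(bgr);
        }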

    Read the article

  • Using child visitor in C#

    - by Thomas Matthews
    I am setting up a testing component and trying to keep it generic. I want to use a generic Visitor class, but not sure about using descendant classes. Example:

        public interface Interface_Test_Case
        {
            void execute();
            void accept(Interface_Test_Visitor v);
        }

        public interface Interface_Test_Visitor
        {
            void visit(Interface_Test_Case tc);
        }

        public interface Interface_Read_Test_Case : Interface_Test_Case
        {
            uint read_value();
        }

        public class USB_Read_Test : Interface_Read_Test_Case
        {
            void execute()
            { Console.WriteLine("Executing USB Read Test Case."); }

            void accept(Interface_Test_Visitor v)
            { Console.WriteLine("Accepting visitor."); }

            uint read_value()
            { Console.WriteLine("Reading value from USB"); return 0; }
        }

        public class USB_Read_Visitor : Interface_Test_Visitor
        {
            void visit(Interface_Test_Case tc)
            { Console.WriteLine("Not supported Test Case."); }

            void visit(Interface_Read_Test_Case rtc)
            { Console.WriteLine("Not supported Read Test Case."); }

            void visit(USB_Read_Test urt)
            { Console.WriteLine("Yay, visiting USB Read Test case."); }
        }

        // Code fragment
        USB_Read_Test test_case;
        USB_Read_Visitor visitor;
        test_case.accept(visitor);

    What are the rules the C# compiler uses to determine which of the methods in USB_Read_Visitor will be executed by the code fragment? I'm trying to factor out the dependencies of my testing component. Unfortunately, my current Visitor class contains visit methods for classes not related to the testing component. Am I trying to achieve the impossible?

    Read the article

  • ojspc always returns 0 on errors

    - by Matt McCormick
    In my Ant build.xml file, I am trying to compile JSPs using ojspc. The files are being compiled; however, the build process is still running to completion when the JSP compilation has errors. This is part of my build.xml:

        <java fork="true" jar="${env.ORACLE_HOME}\j2ee\home\ojspc.jar"
              resultproperty="result">
            <jvmarg value="-Djava.compiler=NONE"/>
            <arg value="-extend"/>
            <arg value="com.orionserver.http.OrionHttpJspPage"/>
            <arg value="-batchMask"/>
            <arg value="*.jsp"/>
            <arg value="${target-directory}/build/target/ear/${module-dir-name}-jsp.war"/>
        </java>
        <echo level="info">Result Property: ${result}</echo>

    I have tried setting the property failonerror="true" but that does not change anything. I receive the following output:

        [java] Detected archive, now processing contents of ../build/target/ear/web-module-jsp.war...
        [java] Setting up temp area...
        [java] Expanding archive in temp area...
        [java] C:\DOCUME~1\MMCCOR~1\LOCALS~1\Temp\tmp12940\_web_2d_inf\_jsp\_password.java:60: cannot resolve symbol
        [java] symbol  : variable reqvst
        [java] location: class _web_2d_inf._jsp._password
        [java] out.print(reqvst.getAttribute("test"));
        [java]           ^
        [java] 1 error
        [java] Creating D:\eclipse-workspace\jdw\build\..\build\target\ear\web-module-jsp.war ...
        [java] Removing temp area...
        [echo] Result Property: 0
        ...(more commands)
        BUILD SUCCESSFUL

    In the password.jsp file, I intentionally introduced an error to test. How can I get the build to fail on an error? At the Ant <java> task page, I am confused by:

        By default the return code of a <java> is ignored. Alternatively, you can set resultproperty to
        the name of a property and have it assigned to the result code (barring immutability, of course).
        When you set failonerror="true", the only possible value for resultproperty is 0. Any non-zero
        response is treated as an error and would mean the build exits.

    Read the article

  • GWT - problems with constants in css

    - by hba
    Hi, I'm new to GWT; I'm building a small sample app. I have several CSS files. I'm able to successfully use the ClientBundle and CssResource to assign styles to the elements defined in my UiBinder script. Now I'd like to take it one step further and introduce CSS constants using the @def rule. The @def works great when I define a constant and use it in the same CSS file. However, I cannot use it in another CSS file. When I try to use the @eval rule to evaluate an existing constant, the compiler throws an exception: "cannot make a static reference to the non-static method". Here is an example of what I'm trying to do:

    ConstantStyle.css:

        @def BACKGROUND red;

    ConstantStyle.java:

        package abc;
        import ...;
        interface ConstantStyle extends CssResource
        {
            String BACKGROUND();
        }

    MyStyle.css:

        @eval BACKGROUND abc.ConstantStyle.BACKGROUND();
        .myClass {background-color: BACKGROUND;}

    MyStyle.java:

        package abc;
        import ...;
        interface MyStyle extends CssResource
        {
            String myClass();
        }

    MyResources.java:

        package abc;
        import ...;
        interface MyResources extends ClientBundle
        {
            @Source("ConstantStyle.css")
            ConstantStyle constantStyle();

            @Source("MyStyle.css")
            MyStyle myStyle();
        }

    Thanks in advance!

    Read the article

  • Does it ever make sense to make a fundamental (non-pointer) parameter const?

    - by Scott Smith
    I recently had an exchange with another C++ developer about the following use of const:

        void Foo(const int bar);

    He felt that using const in this way was good practice. I argued that it does nothing for the caller of the function (since a copy of the argument is going to be passed, there is no additional guarantee of safety with regard to overwrite). In addition, doing this prevents the implementer of Foo from modifying their private copy of the argument. So, it both mandates and advertises an implementation detail. Not the end of the world, but certainly not something to be recommended as good practice. I'm curious as to what others think on this issue.

    Edit: OK, I didn't realize that the const-ness of the arguments didn't factor into the signature of the function. So, it is possible to mark the arguments as const in the implementation (.cpp) and not in the header (.h), and the compiler is fine with that. That being the case, I guess the policy should be the same as for making local variables const. One could make the argument that having different-looking signatures in the header and source file would confuse others (as it would have confused me). While I try to follow the Principle of Least Astonishment with whatever I write, I guess it's reasonable to expect developers to recognize this as legal and useful.
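    For reference, a minimal sketch (not from the original post) of the split described in the edit; the top-level const on a by-value parameter is not part of the function type, so both declarations name the same function:

        // Foo.h: callers see no const; it would be ignored here anyway.
        void Foo(int bar);

        // Foo.cpp: the definition adds const to protect its local copy.
        void Foo(const int bar)
        {
            // bar = 42; // error: assignment of read-only parameter
        }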

    Read the article

  • Delphi: How to avoid EIntOverflow underflow when subtracting?

    - by Ian Boyd
    Microsoft already says, in the documentation for GetTickCount, that you should never compare tick counts to check if an interval has passed. For example, this is incorrect (pseudo-code):

        DWORD endTime = GetTickCount() + 10000; // 10 s from now
        ...
        if (GetTickCount() > endTime)
            break;

    The above code is bad because it is susceptible to rollover of the tick counter. For example, assume that the clock is near the end of its range:

        endTime = 0xfffffe00 + 10000
                = 0x00002510; // 9,488 decimal

    Then you perform your check:

        if (GetTickCount() > endTime)

    which is satisfied immediately, since GetTickCount() is larger than endTime:

        if (0xfffffe01 > 0x00002510)

    The solution: instead, you should always subtract the two time intervals:

        DWORD startTime = GetTickCount();
        ...
        if ((GetTickCount() - startTime) > 10000) // if it's been 10 seconds
            break;

    Looking at the same math:

        if ((GetTickCount() - startTime) > 10000)
        if ((0xfffffe01 - 0xfffffe00) > 10000)
        if (1 > 10000)

    This is all well and good in C/C++, where the compiler behaves a certain way. But what about Delphi? When I perform the same math in Delphi, with overflow checking on ({$Q+}, {$OVERFLOWCHECKS ON}), the subtraction of the two tick counts generates an EIntOverflow exception when the tick count rolls over:

        if (0x00000100 - 0xffffff00) > 10000

        0x00000100 - 0xffffff00 = 0x00000200

    What is the intended solution for this problem?

    Edit: I've tried to temporarily turn off OVERFLOWCHECKS:

        {$OVERFLOWCHECKS OFF}
        delta := GetTickCount - startTime;
        {$OVERFLOWCHECKS ON}

    but the subtraction still throws an EIntOverflow exception. Is there a better solution, involving casts and larger intermediate variable types?
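    For reference, the C/C++ idiom works because unsigned arithmetic wraps modulo 2^32; a minimal C++ sketch (not from the original post) showing that the subtraction stays correct across rollover:

        #include <iostream>

        int main()
        {
            // Simulated tick counts straddling the 32-bit rollover point.
            unsigned int startTime = 0xffffff00u;
            unsigned int now       = 0x00000100u;

            // Unsigned subtraction wraps modulo 2^32, so the elapsed time is
            // correct even though 'now' is numerically smaller.
            unsigned int elapsed = now - startTime;
            std::cout << elapsed << "\n"; // prints 512 (0x200)
            return 0;
        }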

    Read the article

  • C++ floating point precision loss: 3015/0.00025298219406977296

    - by SigTerm
    The problem: Microsoft Visual C++ 2005 compiler, 32-bit Windows XP SP3, AMD 64 X2 CPU. Code:

        double a = 3015.0;
        double b = 0.00025298219406977296;
        //*((unsigned __int64*)(&a)) == 0x40a78e0000000000
        //*((unsigned __int64*)(&b)) == 0x3f30945640000000
        double f = a/b; // 3015/0.00025298219406977296

    The result of the calculation (i.e. "f") is 11917835.000000000 (*((unsigned __int64*)(&f)) == 0x4166bb4160000000), although it should be 11917834.814763514 (i.e. *((unsigned __int64*)(&f)) == 0x4166bb415a128aef). I.e. the fractional part is lost. Unfortunately, I need the fractional part to be correct. Questions:

    1) Why does this happen?
    2) How can I fix the problem?

    Additional info:

    0) The result is taken directly from the "watch" window (it wasn't printed, and I didn't forget to set printing precision). I also provided a hex dump of the floating point variable, so I'm absolutely sure about the calculation result.

    1) The disassembly of f = a/b is:

        fld  qword ptr [a]
        fdiv qword ptr [b]
        fstp qword ptr [f]

    2) f = 3015/0.00025298219406977296; yields the correct result (f == 11917834.814763514, *((unsigned __int64*)(&f)) == 0x4166bb415a128aef), but it looks like in this case the result is simply calculated at compile time:

        fld  qword ptr [__real@4166bb415a128aef (828EA0h)]
        fstp qword ptr [f]

    So, how can I fix this problem?

    P.S. I've found a temporary workaround (I need only the fractional part of the division, so I simply use f = fmod(a, b)/b at the moment), but I still would like to know how to fix this problem properly - double precision is supposed to be 16 decimal digits, so such a calculation isn't supposed to cause problems.
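    One plausible explanation (an assumption, not confirmed in the post): the hex dumps of both b and f end in long runs of zero bits, which is the signature of the x87 precision-control field having been dropped to 24-bit (single-precision) rounding by some other component loaded into the process, Direct3D being a common culprit. A minimal sketch of restoring 53-bit precision with MSVC's _controlfp_s before the division:

        #include <float.h>
        #include <cstdio>

        int main()
        {
            // Restore 53-bit (double) rounding on the x87 FPU; _controlfp_s,
            // _PC_53 and _MCW_PC are MSVC-specific, declared in <float.h>.
            unsigned int prev;
            _controlfp_s(&prev, _PC_53, _MCW_PC);

            double a = 3015.0;
            double b = 0.00025298219406977296;
            std::printf("%.9f\n", a / b); // expect 11917834.814763514
            return 0;
        }

    Compiling for SSE2 (/arch:SSE2) sidesteps the x87 precision-control state entirely and is another common fix.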

    Read the article

  • Putting all methods in class definition

    - by Amnon
    When I use the pimpl idiom, is it a good idea to put all the method definitions inside the class definition? For example:

        // in A.h
        class A
        {
            class impl;
            boost::scoped_ptr<impl> pimpl;
        public:
            A();
            int foo();
        };

        // in A.cpp
        class A::impl
        {
            // method defined in class
            int foo()
            {
                return 42;
            }

            // as opposed to only declaring the method, and defining elsewhere:
            float bar();
        };

        A::A() : pimpl(new impl) { }

        int A::foo()
        {
            return pimpl->foo();
        }

    As far as I know, the only problems with putting a method definition inside a class definition are that (1) the implementation is visible in files that include the class definition, and (2) the compiler may make the method inline. These are not problems in this case, since the class is defined in a private file, and inlining has no effect since the methods are called in only one place. The advantage of putting the definition inside the class is that you don't have to repeat the method signature. So, is this OK? Are there any other issues to be aware of?

    Read the article

  • Freestanding ARM C++ Code - empty .ctors section

    - by Matthew Iselin
    I'm writing C++ code to run in a freestanding environment (basically an ARM board). It's been going well, except I've run into a stumbling block: global static constructors. To my understanding, the .ctors section contains a list of addresses of the static constructors, and my code simply needs to iterate this list and make calls to each function as it goes. However, I've found that this section in my binary is in fact completely empty! Google pointed towards using ".init_array" instead of ".ctors" (an EABI thing), but that has not changed anything. Any ideas as to why my static constructors don't exist? The relevant linker script and objdump output follow:

        .ctors :
        {
            . = ALIGN(4096);
            start_ctors = .;
            *(.init_array);
            *(.ctors);
            end_ctors = .;
        }

        .dtors :
        {
            . = ALIGN(4096);
            start_dtors = .;
            *(.fini_array);
            *(.dtors);
            end_dtors = .;
        }

        --

        2 .ctors  00001000 8014c000 8014c000 00054000 2**2
                  CONTENTS, ALLOC, LOAD, DATA
        <snip>
        8014d000 g O .ctors 00000004 start_ctors
        <snip>
        8014d000 g O .ctors 00000004 end_ctors

    I'm using an arm-elf targeted GCC compiler (4.4.1).
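    For reference, a minimal sketch (not from the original post) of the iteration the startup code performs once the section is populated; start_ctors and end_ctors are the symbols from the linker script above. Note also that if the link runs with --gc-sections, the input sections generally must be wrapped in KEEP() to survive garbage collection, which is one common reason the output section ends up empty:

        // Symbols placed by the linker script; their addresses delimit the
        // array of constructor function pointers.
        typedef void (*constructor_t)();
        extern constructor_t start_ctors[];
        extern constructor_t end_ctors[];

        void run_static_constructors()
        {
            for (constructor_t* ctor = start_ctors; ctor != end_ctors; ++ctor)
                (*ctor)(); // invoke each global constructor in order
        }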

    Read the article

  • How to filter files from the root "classes" and "test-classes" folders in Eclipse?

    - by Kidburla
    I am using ClearCase in my application, which generates a whole load of ".copyarea.db" files (one in every folder). These cause conflicts when publishing to Tomcat, as Eclipse will bundle the "classes" and "test-classes" folders into one JAR (not sure why it does this - there is no need to have test classes available on the application server). Any folders with the same names will have a separate .copyarea.db in the classes and test-classes branches. I managed to get around this problem in general by adding ".copyarea.db" to the Filtered resources on the Java->Compiler->Building->Output Folder preference page. This stops the file appearing in source output (package/class folders) in the vast majority of cases. However, there remains the problem of the root folder, i.e., "target/classes/.copyarea.db" and "target/test-classes/.copyarea.db". These files are not filtered, as they are not part of the compile task. Just deleting the files manually doesn't help either, as Eclipse expects to find them and doesn't. How can I exclude these ".copyarea.db" files from the root "classes" and "test-classes" folders?

    Read the article

  • Syntax Error? When parsing XML value

    - by Ace Munim
    I don't know if I'm having a syntax error, but the compiler is giving me

        TypeError: 'undefined' is not an object (evaluating 'xmlDoc.getElementsByTagName("icon")[i].childNodes')

    It gives me this problem when I'm parsing the XML from my server. My actual JavaScript code is like this:

        var xmlDoc = Obj.responseXML;
        var count = 0;
        if (xmlDoc) {
            while (count <= xmlDoc.getElementsByTagName("item").length) {
                document.getElementById("flow").innerHTML +=
                    "<div class='item'><img class='content' src='" +
                    xmlDoc.getElementsByTagName("icon")[i].childNodes[0].nodeValue.replace(/\s+$/g, ' ') +
                    "' /></div>";
                count++;
            }
        } else {
            alert("Unable to parse!");
        }

    and my XML goes like this:

        <feed>
            <item>
                <title>Given Title</title>
                <icon>
                    http://i178.photobucket.com/albums/w255/ace003_album/Logo-ETC-RGB-e1353503652739.jpg
                </icon>
            </item>
            <item>...</item>
            <item>...</item>
            <item>...</item>
            <item>...</item>
            <item>...</item>
            <item>...</item>
        </feed>

    I just want to parse the image link and show it.

    Read the article

  • Problem with inner classes of the same name in Visual C++

    - by starblue
    I have a problem with Visual C++, where apparently inner classes with the same name but in different outer classes are confused. The problem occurs for two layers, where each layer has a listener interface as an inner class. B is a listener of A, and has its own listener in a third layer above it (not shown). The structure of the code looks like this: A.h class A { class Listener { Listener(); virtual ~Listener() = 0; } [...] } B.h class B : public A::Listener { class Listener { Listener(); virtual ~Listener() = 0; } [...] } B.cpp B::Listener::Listener() {} B::Listener::~Listener() {} I get the error B.cpp(49) : error C2509: '{ctor}' : member function not declared in 'B' The C++ compiler for Renesas sh2a has no problem with this, but then it is more liberal than Visual C++ in some other respects, too. If I rename the listener interfaces to have different names the problem goes away, but I'd like to avoid that (the real class names instead of A or B are rather long). Is what I'm doing correct C++, or is the complaint by Visual C++ justified? Is there a way to work around this problem without renaming the listener interfaces?

    Read the article

  • Problems with variadic function

    - by morpheous
    I have the following function from some legacy code that I am maintaining:

        long getMaxStart(long start, long count, const myStruct *s1, ...)
        {
            long i1, maxstart;
            myStruct *s2;
            va_list marker;

            maxstart = start;

            /* BUGFIX: 003 */
            /* va_start(marker, count); */
            va_start(marker, s1);

            for (i1 = 1; i1 <= count; i1++)
            {
                s2 = va_arg(marker, myStruct *);          /* <- s2 is assigned null here */
                maxstart = MAX(maxstart, s2->firstvalid); /* <- SEGV here */
            }
            va_end(marker);
            return (maxstart);
        }

    When the function is called with only one myStruct argument, it causes a SEGV. The code compiled and ran without crashing on Windows XP when I compiled it using VS2005. I have now moved the code to Ubuntu Karmic and am having problems with the stricter compiler on Linux. Is anyone able to spot what is causing the parameter not to be read correctly in the va_arg() statement? I am compiling using gcc version 4.4.1.

    Edit: The statement that causes the SEGV is this one:

        start = getMaxStart(start, 1, ms1);

    The variables 'start' and 'ms1' have valid values when the code execution first reaches this line.
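    A likely reading (an assumption, not confirmed in the post): with count = 1 and no arguments after the named parameter s1, the loop still pulls one value from the va_list, reading past the supplied arguments, which is undefined behavior that merely happened to work on the old platform. A sketch of a version that counts s1 itself as the first struct (reusing MAX and myStruct from the snippet above), so that getMaxStart(start, 1, ms1) reads no varargs at all:

        long getMaxStart(long start, long count, const myStruct *s1, ...)
        {
            long i1, maxstart = start;
            const myStruct *s2 = s1;
            va_list marker;

            va_start(marker, s1);
            for (i1 = 0; i1 < count; i1++)
            {
                maxstart = MAX(maxstart, s2->firstvalid);
                if (i1 + 1 < count) /* fetch the next struct only if one remains */
                    s2 = va_arg(marker, const myStruct *);
            }
            va_end(marker);
            return maxstart;
        }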

    Read the article

  • different thread accessing MemoryStream

    - by Wayne
    There's a bit of code which writes data to a MemoryStream object directly into its data buffer by calling GetBuffer(). It also uses and updates the Position and SetLength() properties appropriately. This code works 99.9999% of the time. Literally. Only every so many 100,000s of iterations will it barf. The specific problem is that the memory.Position property suddenly returns zero instead of the appropriate value. However, code was added that checks for the 0 and throws an exception whose log includes the MemoryStream properties, like Position and Length, gathered in a separate method. Those return the correct value. Further additions show that when this rare condition occurs, memory.Position is zero only inside this particular method. Okay. Obviously, this must be a threading issue. But this code is well locked. However, the nature of this software is that it's organized by "tasks" with a scheduler, and so any one of several actual O/S threads may run this code at any given time, but never more than one at a time. So my guess is that ordinarily it so happens that the same thread keeps getting used for this method, and then on a rare occasion a different thread gets used. Then, due to compiler optimizations, the different thread never gets the correct value. It gets a "stale" value. Ordinarily in a situation like this I would apply a "volatile" keyword to the variable in question. But those variables are inside the MemoryStream object. Does anyone have any other idea? Or does this mean we have to implement our own MemoryStream object? (Just like we end up having to do with practically every collection in .NET?) It's a shame to have such an awesome platform as .NET and have virtually the entire system useless as-is for seriously parallelized applications. If I'm wrong or you have other ideas, please advise. Sincerely, Wayne

    Read the article

  • Disable ARC with Xcode 5

    - by user2187565
    First, sorry for my bad English: I'm French and 15 years old, but Stack Overflow is for me the best forum for developers. In previous versions of Xcode, we could disable ARC (Automatic Reference Counting) in the project settings when creating the project. Not so with Xcode 5, and ARC poses a problem for me: with a property list file, during the reading step, Xcode gives me the error "implicit conversion of 'int' to 'id' is disallowed with ARC". I did not have this problem with the same code under Xcode 4. In my property list file, the keys are numbers, as they are in my viewController.m.

    NIKOS M.: No problem, but I don't see how I can add a compiler flag with the 5th version of Xcode.

    The code (with French strings...):

        NSString *error;
        NSString *rootPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
        NSString *plistPath = [rootPath stringByAppendingPathComponent:@"Save.plist"];
        NSArray *keys = [NSArray arrayWithObjects:@"valeurCompteur1", @"valeurCompteur2", @"valeurCompteur3",
                         @"valeurCompteur4", @"valeurCompteur5", @"nomCompteur1", @"nomCompteur2",
                         @"nomCompteur3", @"nomCompteur4", @"nomCompteur5", nil];
        NSArray *objs = [NSArray arrayWithObjects: compteur1, compteur2, compteur3, compteur4, compteur5,
                         nameC1, nameC2, nameC3, nameC4, nameC5, nil];

    REVIEW: When I disallow ARC for the target, a warning persists. How can I resolve that, please? Thank you very much.

    Read the article

  • Why do I have to specify pure virtual functions in the declaration of a derived class in Visual C++?

    - by neuviemeporte
    Given the base class A and the derived class B:

        class A
        {
        public:
            virtual void f() = 0;
        };

        class B : public A
        {
        public:
            void g();
        };

        void B::g()
        {
            cout << "Yay!";
        }

        void B::f()
        {
            cout << "Argh!";
        }

    I get errors saying that f() is not declared in B while trying to define void B::f(). Do I have to declare f() explicitly in B? I think that if the interface changes, I shouldn't have to correct the declarations in every single class deriving from it. Is there no way for B to get all the virtual functions' declarations from A automatically? EDIT: I found an article that says the inheritance of pure virtual functions is dependent on the compiler: http://www.objectmentor.com/resources/articles/abcpvf.pdf — I'm using VC++2008; I wonder if there's an option for this.
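    For what it's worth, standard C++ does require the declaration: a member function can only be defined out of line if it is declared in its class, and a derived class only overrides a virtual it redeclares. A minimal sketch of the conforming form (not from the original post):

        #include <iostream>
        using std::cout;

        class A
        {
        public:
            virtual void f() = 0;
            virtual ~A() {}
        };

        class B : public A
        {
        public:
            void f(); // the override must be declared before it can be defined
            void g();
        };

        void B::g() { cout << "Yay!"; }
        void B::f() { cout << "Argh!"; }

        int main()
        {
            B b;
            b.f(); // prints "Argh!"
            return 0;
        }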

    Read the article
