Search Results

Search found 5819 results on 233 pages for 'compiler theory'.

Page 202/233

  • Jaxb Simplify Plugin

    - by wrm
    I am trying to use the simplify plugin to simplify the generated code. I have the following type defined:

        <xsd:complexType name="typeWithReferencesProperty">
          <xsd:choice maxOccurs="unbounded">
            <xsd:annotation>
              <xsd:appinfo>
                <simplify:as-element-property/>
              </xsd:appinfo>
            </xsd:annotation>
            <xsd:element name="a" type="AttributeValueIntegerType"/>
            <xsd:element name="b" type="AttributeValueIntegerType"/>
          </xsd:choice>
        </xsd:complexType>

    But it does not work; it results in the following error:

        compiler was unable to honor this as-element-property customization. It is attached to a wrong place, or its inconsistent with other bindings.

    I used exactly this configuration, and I also have other JAXB plugins which work, so I am not quite sure whether the plugin is broken or something else is wrong. Has anybody managed to get this running?

    Read the article

  • AssemblyResolve event is not firing during compilation of a dynamic assembly for an aspx page.

    - by John
    This one is really pissing me off. Here goes: my goal is to load assemblies at run-time that contain embedded aspx, ascx, etc. I would also like to not lock the assembly file on disk, so I can update it at run-time without having to restart the application (I know this will leave the previous version(s) loaded). To that end I have written a virtual path provider that does the trick. I have subscribed to the CurrentDomain.AssemblyResolve event so as to redirect the framework to my assemblies. The problem is that when the framework tries to compile the dynamic assembly for the aspx page I get the following:

        Compiler Error Message: CS0400: The type or namespace name 'Pages' could not be found in the global namespace (are you missing an assembly reference?)

        Source Error:
        public class app_resource_pages__version_1_0_0_0__culture_neutral__publickeytoken_null_default_aspx : global::Pages._Default, System.Web.SessionState.IRequiresSessionState, System.Web.IHttpHandle

    I noticed that if I load the assembly with Assembly.Load(AssemblyName) or Assembly.LoadFrom(filename) I don't get the above error. If I load it with Assembly.Load(byte[]) (so as to not lock it), the exception is thrown, but my AssemblyResolve handler returns the assembly correctly when called (it is called once). So I am guessing that it is called once when the framework parses the asp markup, but not when it tries to create the dynamic assembly for the aspx page.

    Read the article

  • How do I make my ArrayList Thread-Safe? Another approach to the problem in Java?

    - by thechiman
    I have an ArrayList that I want to use to hold RaceCar objects (which extend the Thread class) as soon as they are finished executing. A class called Race handles this ArrayList using a callback method that the RaceCar object calls when it is finished executing. The callback method, addFinisher(RaceCar finisher), adds the RaceCar object to the ArrayList. This is supposed to give the order in which the Threads finish executing. I know that ArrayList isn't synchronized and thus isn't thread-safe. I tried using the Collections.synchronizedCollection(Collection c) method by passing in a new ArrayList and assigning the returned Collection to an ArrayList. However, this gives me a compiler error:

        Race.java:41: incompatible types
        found   : java.util.Collection
        required: java.util.ArrayList
        finishingOrder = Collections.synchronizedCollection(new ArrayList(numberOfRaceCars));

    Here is the relevant code:

        public class Race implements RaceListener {
            private Thread[] racers;
            private ArrayList finishingOrder;

            //Make an ArrayList to hold RaceCar objects to determine winners
            finishingOrder = Collections.synchronizedCollection(new ArrayList(numberOfRaceCars));

            //Fill array with RaceCar objects
            for(int i=0; i<numberOfRaceCars; i++) {
                racers[i] = new RaceCar(laps, inputs[i]);
                //Add this as a RaceListener to each RaceCar
                ((RaceCar) racers[i]).addRaceListener(this);
            }

            //Implement the one method in the RaceListener interface
            public void addFinisher(RaceCar finisher) {
                finishingOrder.add(finisher);
            }

    What I need to know is, am I using a correct approach, and if not, what should I use to make my code thread-safe? Thanks for the help!

    Read the article

  • Java Generics Class Parameter Type Inference

    - by Pindatjuh
    Given the interface:

        public interface BasedOnOther<T, U extends BasedList<T>> {
            public T getOther();
            public void staticStatisfied(final U list);
        }

    The BasedOnOther<T, U extends BasedList<T>> looks very ugly in my use-cases. The ugliness comes from the fact that the T type parameter is already defined in the BasedList<T> part, so T needs to be typed twice. Problem: is it possible to let the Java compiler infer the generic T type from BasedList<T> in a generic class/interface definition? Ultimately, I'd like to use the interface like:

        class X implements BasedOnOther<BasedList<SomeType>> {
            public SomeType getOther() { ... }
            public void staticStatisfied(final BasedList<SomeType> list) { ... }
        } // Does not compile, due to invalid parameter count.

    Instead of:

        class X implements BasedOnOther<SomeType, BasedList<SomeType>> {
            public SomeType getOther() { ... }
            public void staticStatisfied(final BasedList<SomeType> list) { ... }
        }

    Read the article

  • Boost ASIO Headache

    - by bobber205
    Man... I thought using ASIO in Boost was going to be easy and intuitive. :P I am finally starting to get it, but I am having some trouble. Here's a snippet. I am getting several compiler errors on the async_accept line. What am I doing wrong? :P I've based my code off of this page: http://www.boost.org/doc/libs/1_43_0/doc/html/boost_asio/tutorial/tutdaytime3/src.html

        bool TestSocket::StartListening(int port)
        {
            bool didStart = false;
            if (!this->listening)
            {
                //try to listen
                acceptor = new tcp::acceptor(this->myService, tcp::endpoint(tcp::v4(), port));
                didStart = true; //probably change?

                tcp::socket* tempNewSocket = new tcp::socket(this->myService);
                acceptor->async_accept(tempNewSocket,
                    boost::bind(&AlexSocket::NewConnection, this, tempNewSocket,
                                boost::asio::placeholders::error) );
            }
            else //already started!
                return false;

            this->listening = didStart;
            return didStart;
        }

        void TestSocket::NewConnection(tcp::socket* s, const boost::system::error_code& error)
        {
        }
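
    For comparison, here is a minimal sketch of how an accept loop is usually wired up with Boost.Asio (class and handler names here are illustrative, not taken from the question); note in particular that async_accept expects the peer socket by reference, not as a pointer:

        #include <boost/asio.hpp>
        #include <boost/bind.hpp>
        #include <iostream>

        using boost::asio::ip::tcp;

        class Listener
        {
        public:
            Listener(boost::asio::io_service& service, int port)
                : service_(service),
                  acceptor_(service, tcp::endpoint(tcp::v4(), port)) {}

            void StartAccept()
            {
                // The socket must outlive the asynchronous operation;
                // async_accept takes it by reference.
                tcp::socket* socket = new tcp::socket(service_);
                acceptor_.async_accept(*socket,
                    boost::bind(&Listener::HandleAccept, this, socket,
                                boost::asio::placeholders::error));
            }

        private:
            void HandleAccept(tcp::socket* socket, const boost::system::error_code& error)
            {
                if (!error)
                    std::cout << "New connection accepted\n";
                // (the accepted socket is leaked here for brevity; a real program
                //  would hand it off to a connection object that owns it)
                StartAccept();  // keep accepting further connections
            }

            boost::asio::io_service& service_;
            tcp::acceptor acceptor_;
        };

        int main()
        {
            boost::asio::io_service service;
            Listener listener(service, 12345);
            listener.StartAccept();
            service.run();
            return 0;
        }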

    Read the article

  • Why is TRest in Tuple<T1... TRest> not constrained?

    - by Anthony Pegram
    In a Tuple, if you have more than 7 items, you can provide an 8th item that is another tuple and define up to 7 items, and then another tuple as the 8th, and so on down the line. However, there is no constraint on the 8th item at compile time. For example, this is legal code for the compiler:

        var tuple = new Tuple<int, int, int, int, int, int, int, double>
                        (1, 1, 1, 1, 1, 1, 1, 1d);

    Even though the IntelliSense documentation says that TRest must be a Tuple, you do not get any error when writing or building the code; it does not manifest until runtime in the form of an ArgumentException. You can roughly implement a Tuple in a few minutes, complete with a Tuple-constrained 8th item. I just wonder why it was left off the current implementation? Is it possibly a forward-compatibility issue where they could add more elements with a hypothetical C# 5?

    Short version of a rough implementation:

        interface IMyTuple { }

        class MyTuple<T1> : IMyTuple
        {
            public T1 Item1 { get; private set; }
            public MyTuple(T1 item1) { Item1 = item1; }
        }

        class MyTuple<T1, T2> : MyTuple<T1>
        {
            public T2 Item2 { get; private set; }
            public MyTuple(T1 item1, T2 item2) : base(item1) { Item2 = item2; }
        }

        class MyTuple<T1, T2, TRest> : MyTuple<T1, T2> where TRest : IMyTuple
        {
            public TRest Rest { get; private set; }
            public MyTuple(T1 item1, T2 item2, TRest rest) : base(item1, item2) { Rest = rest; }
        }

        ...

        var mytuple = new MyTuple<int, int, MyTuple<int>>(1, 1, new MyTuple<int>(1)); // legal
        var mytuple2 = new MyTuple<int, int, int>(1, 2, 3); // illegal

    Read the article

  • How to implement a SharePoint Lists web service

    - by 1800 INFORMATION
    I want to implement a web service that uses the same interface as the Lists web service in SharePoint. I do not want to run this through SharePoint. What is a good way to get started on this? I have tried to use the wsdl.exe tool to generate some wrapper classes, but the generated wrappers seem to have punted on the structured parameters and just specified them as XML. For example, below is the generated wrapper for GetList - it should return a structure which has the information in the list, but instead it is returning XML. What is going on?

        ...
        /// <remarks/>
        [System.CodeDom.Compiler.GeneratedCodeAttribute("wsdl", "2.0.50727.1432")]
        [System.Web.Services.WebServiceBindingAttribute(Name="ListsSoap", Namespace="http://schemas.microsoft.com/sharepoint/soap/")]
        public interface IListsSoap {

            /// <remarks/>
            [System.Web.Services.WebMethodAttribute()]
            [System.Web.Services.Protocols.SoapDocumentMethodAttribute("http://schemas.microsoft.com/sharepoint/soap/GetList",
                RequestNamespace="http://schemas.microsoft.com/sharepoint/soap/",
                ResponseNamespace="http://schemas.microsoft.com/sharepoint/soap/",
                Use=System.Web.Services.Description.SoapBindingUse.Literal,
                ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)]
            System.Xml.XmlNode GetList(string listName);
        }

    Read the article

  • Does cout need to be terminated with a semicolon?

    - by Philippe Harewood
    I am reading Bjarne Stroustrup's Programming: Principles and Practice Using C++. The drill section for Chapter 2 talks about various ways to look at typing errors when compiling the hello_world program:

        #include "std_lib_facilities.h"

        int main() //C++ programs start by executing the function main
        {
            cout << "Hello, World!\n", // output "Hello, World!"
            keep_window_open();        // wait for a character to be entered
            return 0;
        }

    In particular this section asks: think of at least five more errors you might have made typing in your program (e.g. forget keep_window_open(), leave the Caps Lock key on while typing a word, or type a comma instead of a semicolon) and try each to see what happens when you try to compile and run those versions.

    For the cout line, you can see that there is a comma instead of a semicolon. This compiles and runs (for me). Is it making an assumption (like in the JavaScript question: Why use semicolon?) that the statement has been terminated? Because when I try the same thing with the keep_window_open(); line, the compiler does inform me about the missing semicolon.
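
    The comma version still compiles because of the comma operator: it joins the two calls into a single expression statement, which is then terminated by the semicolon after keep_window_open(). A minimal sketch (illustrative, not from the book; pause_program() is a stand-in for keep_window_open()):

        #include <iostream>

        void pause_program() {}  // stand-in for keep_window_open()

        int main()
        {
            // One statement built from two expressions joined by the comma operator:
            // evaluate the left operand, discard its value, then evaluate the right one.
            std::cout << "Hello, World!\n", pause_program();

            // This would NOT compile: 'return 0;' is a statement, not an expression,
            // so it cannot be the right-hand operand of the comma operator.
            // std::cout << "Hello, World!\n", return 0;

            return 0;
        }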

    Read the article

  • Boost Binary Endian parser not working?

    - by Hai
    I am studying how to use the Boost Spirit Qi binary endian parser. I wrote a small test parser program according to the example here and the basics examples, but it doesn't work properly. It gives me the message "Error: not match!!". Here is my code:

        #include "boost/spirit/include/qi.hpp"
        #include "boost/spirit/include/phoenix_core.hpp"
        #include "boost/spirit/include/phoenix_operator.hpp"
        #include "boost/spirit/include/qi_binary.hpp"

        // parsing binary data in various endianness
        template <typename P, typename T>
        void binary_parser( char const* input, P const& endian_word_type, T& voxel, bool full_match = true)
        {
            using boost::spirit::qi::parse;
            char const* f(input);
            char const* l(f + strlen(f));
            bool result1 = parse(f,l,endian_word_type,voxel);
            bool result2 = ((!full_match) || (f == l));
            if ( result1 && result2)
            {
                //doing nothing, parsed data is passed to voxel already
            }
            else
            {
                std::cerr << "Error: not match!!" << std::endl;
                exit(1);
            }
        }

        typedef boost::uint16_t bs_int16;
        typedef boost::uint32_t bs_int32;

        int main ( int argc, char *argv[] )
        {
            namespace qi = boost::spirit::qi;
            namespace ascii = boost::spirit::ascii;
            using qi::big_word;
            using qi::big_dword;

            boost::uint32_t ui;
            float uf;
            binary_parser("\x01\x02\x03\x04",big_word,ui);
            assert(ui=0x01020304);
            binary_parser("\x01\x02\x03\x04",big_word,uf);
            assert(uf=0x01020304);
            return 0;
        }

    I almost copied the example, so why doesn't this binary parser work? I use Mac OS 10.5.8 and the gcc 4.01 compiler.
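
    For comparison, a minimal sketch of parsing a 32-bit big-endian value with qi::big_dword (this is an assumption about the intended usage, not code from the question):

        #include <boost/spirit/include/qi.hpp>
        #include <boost/spirit/include/qi_binary.hpp>
        #include <boost/cstdint.hpp>
        #include <cassert>

        int main()
        {
            namespace qi = boost::spirit::qi;

            char const input[] = "\x01\x02\x03\x04";
            char const* first = input;
            char const* last  = input + 4;   // explicit length; strlen() would stop at a 0 byte

            boost::uint32_t value = 0;
            bool ok = qi::parse(first, last, qi::big_dword, value);  // 32-bit big-endian parser

            assert(ok && first == last);
            assert(value == 0x01020304);     // == comparison, not assignment
            return 0;
        }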

    Read the article

  • Install h5py in Mac OS X 10.6.3

    - by zyq524
    I'm trying to install h5py on Mac OS X 10.6.3. First I installed HDF5 1.8, using the following commands:

        ./configure \
            --prefix=/Library/Frameworks/Python.framework/Versions/Current \
            --enable-shared \
            --enable-production \
            --enable-threadsafe \
            CPPFLAGS=-I/Library/Frameworks/Python.framework/Versions/Current/include \
            LDFLAGS=-L/Library/Frameworks/Python.framework/Versions/Current/lib
        make
        make check
        sudo make install

    Then I installed h5py:

        /Library/Frameworks/Python.framework/Versions/Current/bin/python \
            setup.py \
            build \
            --api=18 \
            --hdf5=/Library/Frameworks/Python.framework/Versions/Current

    Then I got these errors:

        Configure: Autodetecting HDF5 settings...
        Custom HDF5 dir: /Library/Frameworks/Python.framework/Versions/Current
        Custom API level: (1, 8)
        ld: warning: in detect/vers.o, file was built for unsupported file format which is not the architecture being linked (i386)
        ld: warning: in /Library/Frameworks/Python.framework/Versions/Current/lib/libhdf5.dylib, file was built for unsupported file format which is not the architecture being linked (i386)
        Undefined symbols:
          "_main", referenced from:
              start in crt1.10.5.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status
        Failed to compile HDF5 test program.  Please check to make sure:
        * You have a C compiler installed
        * A development version of Python is installed (including header files)
        * A development version of HDF5 is installed (including header files)
        * If HDF5 is not in a default location, supply the argument --hdf5=<path>
        error: command 'cc' failed with exit status 1

    I just updated my Xcode, and I don't know whether this is because of my gcc's default settings. If so, how can I get rid of this error? Thanks.

    Read the article

  • Keeping sync in multiplayer RTS game that uses floating point arithmetic

    - by Calmarius
    I'm writing a 2D space RTS game in C#. Single player works. Now I want to add some multiplayer functionality. I googled for it, and it seems there is only one way to have thousands of units continuously moving without a powerful net connection: send only the commands through the network while running the same simulation at every player. And now there is a problem: the entire engine uses doubles everywhere, and floating point calculations depend heavily on compiler optimizations and CPU architecture, so it is very hard to keep things synchronized. It is not grid based at all, and it has a simple physics engine to move the space-ships (space ships have impulse and angular momentum...), so recoding the entire thing to use fixed point would be quite cumbersome (but probably the only solution). So I have two options so far (a sketch of the fixed-point idea follows below):

    1. Say bye to the current code and restart from scratch using integers.
    2. Make the game LAN only, where there is enough bandwidth to have 8 players with thousands of units, sending the positions and orientation etc. in (almost) every frame...

    So I am looking for better opinions (or even tips on migrating the code to fixed point without messing everything up...).
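
    For what it's worth, here is a minimal sketch (in C++ rather than the question's C#) of the kind of fixed-point type such a migration usually introduces; because every operation reduces to integer arithmetic, results are bit-identical across compilers and CPUs:

        #include <cstdio>
        #include <cstdint>

        // Minimal 16.16 fixed-point value: raw stores value * 65536.
        struct Fixed {
            std::int32_t raw;

            explicit Fixed(std::int32_t rawValue = 0) : raw(rawValue) {}

            static Fixed fromDouble(double v) { return Fixed(static_cast<std::int32_t>(v * 65536.0)); }

            Fixed operator+(Fixed o) const { return Fixed(raw + o.raw); }
            Fixed operator-(Fixed o) const { return Fixed(raw - o.raw); }
            Fixed operator*(Fixed o) const {
                // widen to 64 bits for the intermediate product, then shift back down
                return Fixed(static_cast<std::int32_t>(
                    (static_cast<std::int64_t>(raw) * o.raw) >> 16));
            }

            double toDouble() const { return raw / 65536.0; }
        };

        int main()
        {
            Fixed speed = Fixed::fromDouble(1.5);
            Fixed dt    = Fixed::fromDouble(0.25);
            Fixed step  = speed * dt;              // exactly 0.375, the same on every machine
            std::printf("%f\n", step.toDouble());
            return 0;
        }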

    Read the article

  • erlide, which eclipse/which packages?

    - by KevinDTimm
    I have downloaded Eclipse 3.4 (Java version) for Mac OS X (Carbon). I have tried to 'update' to erlide, but I see many (duplicated) options (many erlide options, options that say 'only for erl SDK updates', etc.). Sometimes I get 403 errors when attempting to access http://erlide.org/update and http://erlide.sourceforge.net/update. Finally, when I get some set of options installed, I either get errors like:

        Loading of /Users/kevindtimm/Documents/eclipse-java-ganymede-SR2-macosx-carbon/eclipse/plugins/org.erlide.kernel.common_0.8.1.201005250801/ebin/erlide_kernel_common.beam failed: badfile
        (hello_world@ktmac)1>
        =ERROR REPORT==== 24-Nov-2010::19:17:32 ===
        beam/beam_load.c(1768): Error loading function erlide_kernel_common:monitor/0: op put_string u u x: please re-compile this module with an R14B compiler

    or, when I've done different installations of erlide, I get no response in the console to:

        hello:hello().

    Does anybody have a good reference for how to load this plug-in and which items I should install?

        -module(hello).
        -export([hello/0]).

        hello() -> io:write("Hello World\n").

    [edit] I have installed Eclipse 3.6 (C++) as requested below, and the following code still can't find hello:hello().

        %%file_comment
        -module(hello).

        %%
        %% Include files
        %%

        %%
        %% Exported Functions
        %%
        -export([hello/0]).

        %%
        %% API Functions
        %%

        %%
        %% Local Functions
        %%
        hello() -> io:write("Hello World\n").

    [/edit]

    Read the article

  • How to store a file handle in a Perl class

    - by Haiyuan Zhang
    Please look at the following code first:

        #! /usr/bin/perl
        package foo;

        sub new {
            my $pkg = shift;
            my $self = {};
            my $self->{_fd} = undef;
            bless $self, $pkg;
            return $self;
        }

        sub Setfd {
            my $self = shift;
            my $fd = shift;
            $self_->{_fd} = $fd;
        }

        sub write {
            my $self = shift;
            print $self->{_fd} "hello word";
        }

        my $foo = new foo;

    My intention is to store a file handle within a class using a hash. The file handle is undefined at first, but can be initialized afterwards by calling the Setfd function. write can then be called to actually write the string "hello word" to the file indicated by the file handle, assuming that the file handle is the result of a successful open for writing. But the Perl compiler just complains that there is a syntax error in the "print" line. Can anyone tell me what's wrong here? Thanks in advance.

    Read the article

  • Abstract base class puzzle

    - by 0x80
    In my class design I ran into the following problem:

        class MyData
        {
            int foo;
        };

        class AbstraktA
        {
        public:
            virtual void A() = 0;
        };

        class AbstraktB : public AbstraktA
        {
        public:
            virtual void B() = 0;
        };

        template<class T>
        class ImplA : public AbstraktA
        {
        public:
            void A(){ cout << "ImplA A()"; }
        };

        class ImplB : public ImplA<MyData>, public AbstraktB
        {
        public:
            void B(){ cout << "ImplB B()"; }
        };

        void TestAbstrakt()
        {
            AbstraktB *b = (AbstraktB *) new ImplB;
            b->A();
            b->B();
        };

    The problem with the code above is that the compiler will complain that AbstraktA::A() is not defined. Interface A is shared by multiple objects, but the implementation of A depends on the template argument. Interface B is the one seen by the outside world, and needs to be abstract. The reason I would like this is that it would allow me to define object C like this: define the interface C inheriting from abstract A, then define the implementation of C using a different data type for template A. I hope I'm clear. Is there any way to do this, or do I need to rethink my design?
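
    One way this kind of diamond is commonly resolved is virtual inheritance, so that ImplB ends up with a single shared AbstraktA subobject and ImplA<T>::A() becomes the final overrider for both paths. A sketch under that assumption (not necessarily the design the asker wants):

        #include <iostream>

        struct MyData { int foo; };

        class AbstraktA
        {
        public:
            virtual ~AbstraktA() {}
            virtual void A() = 0;
        };

        // AbstraktA is now a *virtual* base, so both inheritance paths below
        // share one AbstraktA subobject inside ImplB.
        class AbstraktB : public virtual AbstraktA
        {
        public:
            virtual void B() = 0;
        };

        template<class T>
        class ImplA : public virtual AbstraktA
        {
        public:
            void A() { std::cout << "ImplA A()\n"; }   // final overrider for AbstraktA::A()
        };

        class ImplB : public ImplA<MyData>, public AbstraktB
        {
        public:
            void B() { std::cout << "ImplB B()\n"; }
        };

        int main()
        {
            AbstraktB *b = new ImplB;   // no cast needed once the hierarchy is unambiguous
            b->A();
            b->B();
            delete b;
            return 0;
        }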

    Read the article

  • ParseKit.framework won't work, Foundation.h not found

    - by Jeremy
    I'm really stumped trying to get the ParseKit.framework (this) to work in general, not even bothering to implement it until it runs the demo app that comes with it. What happens is the compiler can't locate <Foundation/Foundation.h>, which I thought came with the linked framework. Exact error: "Lexical or Preprocessor Issue: 'Foundation/Foundation.h' file not found." Here's the code, just from the ParseKit_Prefix.pch:

        //
        // Prefix header for all source files of the 'ParseKit' target in the 'ParseKit' project.
        //
        #ifdef __OBJC__
            #import <Foundation/Foundation.h>
        #endif

    Nothing unusual about it, so did I mess up the file paths somehow? I've reinstalled Xcode, re-downloaded ParseKit, and nothing is helping. The suggestions here did nothing and it's not this. When I make a new project or use a different project, load the Foundation.framework, and #import the header, it works just fine. If I unlink the framework I can't find it to re-link again. Has anyone else had this kind of problem? Did I download it wrong somewhere? I have a very difficult time finding where exactly the Xcode UI links stuff (Apple must get a kick out of frustrating people), so if anyone has anything they can think of please give me some feedback, I'm horribly confused right now. Thanks.

    Read the article

  • Trying to make a plugin system in C++/Qt

    - by Pirate for Profit
    I'm making a task-based program that needs to have plugins. Tasks need to have properties which can be easily edited; I think this can be done with Qt's Meta-Object Compiler reflection capabilities (I could be wrong, but I should be able to stick this in a QtPropertyBrowser?). So here's the base:

        class Task : public QObject
        {
            Q_OBJECT
        public:
            explicit Task(QObject *parent = 0) : QObject(parent){}
            virtual void run() = 0;

        signals:
            void taskFinished(bool success = true);
        }

    Then a plugin might have this task:

        class PrinterTask : public Task
        {
            Q_OBJECT
        public:
            explicit PrinterTask(QObject *parent = 0) : Task(parent) {}

            void run()
            {
                Printer::getInstance()->Print(this->getData()); // fictional
                emit taskFinished(true);
            }

            inline const QString &getData() const;
            inline void setData(QString data);

            Q_PROPERTY(QString data READ getData WRITE setData) // for reflection
        }

    In a nutshell, here's what I want to do:

        // load plugin
        // find all the Tasks interface implementations in it
        // have user able to choose a Task and edit its specific Q_PROPERTY's
        // run the TASK

    It's important that one .dll has multiple tasks, because I want them to be associated by their module. For instance, "FileTasks.dll" could have tasks for deleting files, making files, etc. The only problem with Qt's plugin setup is that I want to store any number of Tasks in one .dll module. As far as I can tell, you can only load one interface per plugin (I could be wrong?). If so, the only possible way to accomplish what I want is to create a FactoryInterface with string-based keys which return the objects (as in Qt's Plug-And-Paint example, sketched below), which is a terrible boilerplate that I would like to avoid. Anyone know a cleaner C++ plugin architecture than Qt's to do what I want? Also, am I safely assuming Qt's reflection capabilities will do what I want (i.e. able to edit an unknown dynamically loaded task's properties with the QtPropertyBrowser before dispatching)?
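
    For what it's worth, a rough sketch of the factory-interface approach mentioned above (the names and the interface identifier are made up for illustration): one plugin exports one factory, but that factory can enumerate and create any number of tasks, so a single .dll can still bundle a whole module of them.

        #include <QtCore/QObject>
        #include <QtCore/QString>
        #include <QtCore/QStringList>
        #include <QtCore/QtPlugin>

        class Task; // the QObject-based task type from above

        class TaskFactoryInterface
        {
        public:
            virtual ~TaskFactoryInterface() {}

            // e.g. "DeleteFile", "MakeFile" for a hypothetical FileTasks.dll
            virtual QStringList availableTasks() const = 0;

            // create one task by key; the host edits its Q_PROPERTYs, then calls run()
            virtual Task *createTask(const QString &key, QObject *parent = 0) = 0;
        };

        Q_DECLARE_INTERFACE(TaskFactoryInterface, "com.example.TaskFactoryInterface/1.0")

    The host side would then qobject_cast the object returned by QPluginLoader::instance() to TaskFactoryInterface* and iterate over availableTasks().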

    Read the article

  • (Cpp) Linker, Libraries & Directories Information

    - by m00st
    I've finished both my C++ 1/2 classes, and we did not cover anything on linking to libraries or adding additional libraries to C++ code. I've been having a hard time trying to figure this out, and I've been unable to find basic information on linking to objects. Initially I thought the problem was the IDE (NetBeans, and Code::Blocks); however, I've been unable to get wxWidgets and GTKMM set up. Can someone point me in the right direction on the terminology and basic information about #including files and linking files in a C++ application? Basically I want/need to know everything in regard to this process: the difference between .dll, .lib, .o, .lib.a, .dll.a; the difference between a .h and a "library" (.dll, .lib correct?). I understand I need to read the documentation for the compiler I am using; however, all compilers (that I know of) use a linker and headers, so I need to learn this information. Please point me in the right direction! :] Thanks
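
    As a starting point, here is a minimal sketch of how the pieces fit together (file names and commands are illustrative and GCC-flavoured; MSVC spells the same steps differently). The header only declares names so other translation units can compile against them; the definitions live in object files, which get bundled into a library; the linker resolves the names at the final link step.

        // mathutils.h -- the header: DECLARES add() so callers can compile against it
        #ifndef MATHUTILS_H
        #define MATHUTILS_H
        int add(int a, int b);
        #endif

        // mathutils.cpp -- the DEFINITION; compiled into mathutils.o, archived into libmathutils.a
        int add(int a, int b) { return a + b; }

        // main.cpp -- uses the declaration; the linker later resolves add() from the library
        int main() { return add(2, 3); }

        // Typical build steps:
        //   g++ -c mathutils.cpp                      -> mathutils.o    (object file)
        //   ar rcs libmathutils.a mathutils.o         -> libmathutils.a (static library)
        //   g++ -c main.cpp                           -> main.o
        //   g++ main.o -L. -lmathutils -o app         -> link step resolves the add symbol
        // A shared library (.so on Linux, .dll on Windows) plays the same role as the .a,
        // but is loaded at run time instead of being copied into the executable.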

    Read the article

  • Opa app does not load in Internet Explorer when compiled with Opa 1.1.1

    - by Marcin Skórzewski
    I made a minor update to an already working application and then had problems using the new version of the Opa compiler.

    First problem - runtime exception. Since the original deployment Opa 1.1.1 has been released, and it resulted in this error:

        events.js:72
                throw er; // Unhandled 'error' event
                      ^
        Error: listen EADDRINUSE
            at errnoException (net.js:901:11)
            at Server._listen2 (net.js:1039:14)
            at listen (net.js:1061:10)
            at Server.listen (net.js:1127:5)
            at global.BslNet_Http_server_init_server (/opt/mlstate/lib/opa/stdlib/server.opp/serverNodeJsPackage.js:223:1405)
            at global.BslNet_Http_server_init_server_cps (/opt/mlstate/lib/opa/stdlib/server.opp/serverNodeJsPackage.js:226:15)
            at __v1_bslnet_http_server_init_server_cps_b970f080 (/opt/mlstate/lib/opa/stdlib/stdlib.qmljs/stdlib.core.web.server.opx/main.js:1:175)
            at /opt/mlstate/lib/opa/stdlib/stdlib.qmljs/stdlib.core.web.server.opx/main.js:440:106
            at global.execute_ (/opt/mlstate/lib/opa/static/opa-js-runtime-cps/main.js:19:49)
            at /opt/mlstate/lib/opa/static/opa-js-runtime-cps/main.js:17:78

    I decided to build Opa from sources and it helped, but another problem occurred :(

    Second problem - the app stops working in IE. The application stopped working in Internet Explorer. I tried two different machines (Windows XP and 7) with IE 8 and 10. The web page does not load at all (it looks like a network problem, but the same URL works fine in Firefox). I confirmed the same problem with "Hello world" from the Opa tutorial compiled with both Opa stable 1.1.1 and a build from sources. I suspected that the problem is due to the Node.js update (Opa 1.1.1 requires Node 0.10.* - now I am using 0.10.12, but I also tried other 0.10 releases), but "Hello world" from Node's front page works fine. I am running the app on an OS X developer box and a Debian 7.0 Linux server. Any suggestions on what I am doing wrong?

    PS. I have been away for a while. Does anyone know what happened to the Opa forum? Signing in seems not to work.

    Read the article

  • C++ casted realloc causing memory leak

    - by wyatt
    I'm using a function I found here to save a webpage to memory with cURL:

        struct WebpageData {
            char *pageData;
            size_t size;
        };

        size_t storePage(void *input, size_t size, size_t nmemb, void *output) {
            size_t realsize = size * nmemb;
            struct WebpageData *page = (struct WebpageData *)output;
            page->pageData = (char *)realloc(page->pageData, page->size + realsize + 1);

            if(page->pageData) {
                memcpy(&(page->pageData[page->size]), input, realsize);
                page->size += realsize;
                page->pageData[page->size] = 0;
            }
            return realsize;
        }

    and find the line:

        page->pageData = (char *)realloc(page->pageData, page->size + realsize + 1);

    is causing a memory leak of a few hundred bytes per call. The only real change I've made from the original source is casting the line in question to a (char *), which my compiler (gcc, g++ specifically if it's a C/C++ issue, but gcc also wouldn't compile with the uncast statement) insisted upon, but I assume this is the source of the leak. Can anyone elucidate? Thanks
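
    As an aside, the usual realloc pitfall (independent of the cast) is assigning the result straight back to the same pointer, which orphans the old block if realloc returns NULL; whether that is the leak observed here cannot be told from the snippet alone, since the buffer also has to be free()d by whoever owns the WebpageData once the transfer finishes. A defensive version of the grow-and-append step, as a sketch:

        #include <cstdlib>
        #include <cstring>

        // Grow *buffer by realsize bytes and append the new data, keeping the old
        // block intact (and still owned by the caller) if the reallocation fails.
        static bool append_data(char **buffer, size_t *size, const void *input, size_t realsize)
        {
            char *grown = (char *)std::realloc(*buffer, *size + realsize + 1);
            if (grown == NULL)
                return false;            // *buffer is unchanged and must still be freed later

            std::memcpy(grown + *size, input, realsize);
            *size += realsize;
            grown[*size] = '\0';
            *buffer = grown;
            return true;
        }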

    Read the article

  • Adding a virtual function to the end of the class declaration avoids binary incompatibility?

    - by bob
    Could someone explain to me why adding a virtual function to the end of a class declaration avoids binary incompatibility? If I have:

        class A
        {
        public:
            virtual ~A();
            virtual void someFuncA() = 0;
            virtual void someFuncB() = 0;
            virtual void other1() = 0;
        private:
            int someVal;
        };

    and later modify this class to:

        class A
        {
        public:
            virtual ~A();
            virtual void someFuncA();
            virtual void someFuncB();
            virtual void someFuncC();
            virtual void other1() = 0;
        private:
            int someVal;
        };

    I get a core dump from another .so compiled against the previous declaration. But if I put someFuncC() at the end of the class declaration (after "int someVal"), I don't see the core dump anymore. Could someone tell me why this is? And does this trick always work?

    PS. The compiler is gcc; does this work with other compilers?
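
    As background, the usual explanation (for the common Itanium-style C++ ABIs that gcc uses; the C++ standard itself makes no binary-compatibility promises) is that virtual functions get vtable slots in declaration order, so old callers keep working only if the existing slots keep their indices. A sketch of the mechanism:

        // Original layout: each virtual gets a vtable slot in declaration order.
        class A_old {
        public:
            virtual ~A_old();              // destructor slot(s) first
            virtual void someFuncA() = 0;  // slot for someFuncA
            virtual void someFuncB() = 0;  // slot for someFuncB
            virtual void other1()    = 0;  // slot for other1
        private:
            int someVal;
        };

        // someFuncC() inserted in the middle: other1() is pushed down one slot,
        // so an old .so that calls other1() through its previous index now lands
        // on someFuncC() (or worse) and typically crashes.
        class A_middle {
        public:
            virtual ~A_middle();
            virtual void someFuncA();
            virtual void someFuncB();
            virtual void someFuncC();      // takes other1()'s old slot
            virtual void other1() = 0;     // moved to a new slot
        private:
            int someVal;
        };

        // someFuncC() appended at the end: all existing slots keep their indices;
        // the new slot is simply unknown to (and unused by) old code, so old
        // callers continue to hit the right entries.
        class A_end {
        public:
            virtual ~A_end();
            virtual void someFuncA();
            virtual void someFuncB();
            virtual void other1() = 0;
            virtual void someFuncC();      // new slot at the end
        private:
            int someVal;
        };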

    Read the article

  • Derived interface from generic method

    - by Sunit
    I'm trying to do this:

        public interface IVirtualInterface { }

        public interface IFabricationInfo : IVirtualInterface
        {
            int Type { get; set; }
            int Requirement { get; set; }
        }

        public interface ICoatingInfo : IVirtualInterface
        {
            int Type { get; set; }
            int Requirement { get; set; }
        }

        public class FabInfo : IFabricationInfo
        {
            public int Requirement { get { return 1; } set { } }
            public int Type { get { return 1; } set { } }
        }

        public class CoatInfo : ICoatingInfo
        {
            public int Type { get { return 1; } set { } }
            public int Requirement { get { return 1; } set { } }
        }

        public class BusinessObj
        {
            public T VirtualInterface<T>() where T : IVirtualInterface
            {
                Type targetInterface = typeof(T);

                if (targetInterface.IsAssignableFrom(typeof(IFabricationInfo)))
                {
                    var oFI = new FabInfo();
                    return (T)oFI;
                }

                if (targetInterface.IsAssignableFrom(typeof(ICoatingInfo)))
                {
                    var oCI = new CoatInfo();
                    return (T)oCI;
                }

                return default(T);
            }
        }

    But I'm getting a compiler error:

        Cannot convert type 'GenericIntf.FabInfo' to 'T'

    How do I fix this? Thanks, Sunit

    Read the article

  • What is meant by normalization in huge pointers?

    - by wrapperm
    Hi, I have a lot of confusion about the difference between a "far" pointer and a "huge" pointer. I searched all over Google for a solution but could not find one. Can anyone explain the difference between the two? Also, what is the exact normalization concept related to huge pointers?

    Please do not give me the following or any similar answer:

        "The only difference between a far pointer and a huge pointer is that a huge pointer is normalized by the compiler. A normalized pointer is one that has as much of the address as possible in the segment, meaning that the offset is never larger than 15. A huge pointer is normalized only when pointer arithmetic is performed on it. It is not normalized when an assignment is made. You can cause it to be normalized without changing the value by incrementing and then decrementing it. The offset must be less than 16 because the segment can represent any value greater than or equal to 16 (e.g. Absolute address 0x17 in a normalized form would be 0001:0001. While a far pointer could address the absolute address 0x17 with 0000:0017, this is not a valid huge (normalized) pointer because the offset is greater than 0000F.). Huge pointers can also be incremented and decremented using arithmetic operators, but since they are normalized they will not wrap like far pointers."

    Here the normalization concept is not very well explained, or maybe I'm unable to understand it very well. Can anyone try explaining this concept from a beginner's point of view? Thanks, Rahamath
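
    For the arithmetic itself, a small sketch may help (illustrative only, and written with plain integers rather than real-mode pointers): in real-mode x86 the physical address is segment * 16 + offset, and "normalizing" just means pushing as much of that address as possible into the segment so the offset ends up in the range 0..15, giving every physical address exactly one canonical segment:offset form.

        #include <cstdio>
        #include <cstdint>

        int main()
        {
            std::uint16_t segment = 0x0000;
            std::uint16_t offset  = 0x0017;

            // real-mode physical address
            std::uint32_t physical = static_cast<std::uint32_t>(segment) * 16u + offset;   // 0x00017

            // normalized form: offset reduced to 0..15, the remainder moved into the segment
            std::uint16_t normSegment = static_cast<std::uint16_t>(physical / 16u);        // 0x0001
            std::uint16_t normOffset  = static_cast<std::uint16_t>(physical % 16u);        // 0x0007

            std::printf("%04X:%04X -> physical %05X -> normalized %04X:%04X\n",
                        segment, offset, static_cast<unsigned>(physical),
                        normSegment, normOffset);
            return 0;
        }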

    Read the article

  • Problem with Boost::Asio for C++

    - by Martin Lauridsen
    Hi there, for my bachelor's thesis I am implementing a distributed version of an algorithm for factoring large integers (finding the prime factorisation). This has applications in e.g. the security of the RSA cryptosystem. My vision is that clients (Linux or Windows) will download an application and compute some numbers (these are independent, thus suited for parallelization). The numbers (not found very often) will be sent to a master server, which collects them. Once enough numbers have been collected by the master server, it will do the rest of the computation, which cannot be easily parallelized.

    Anyhow, to the technicalities. I was thinking of using Boost::Asio to do a socket client/server implementation for the clients' communication with the master server. Since I want to compile for both Linux and Windows, I thought Windows would be as good a place to start as any. So I downloaded the Boost library and compiled it as it said on the Boost Getting Started page:

        bootstrap
        .\bjam

    It all compiled just fine. Then I tried to compile one of the tutorial examples, client.cpp, from Asio, found (here.. edit: cant post link because of restrictions). I am using the Visual C++ compiler from Microsoft Visual Studio 2008, like this:

        cl /EHsc /I D:\Downloads\boost_1_42_0 client.cpp

    But I get this error:

        /out:client.exe
        client.obj
        LINK : fatal error LNK1104: cannot open file 'libboost_system-vc90-mt-s-1_42.lib'

    Does anyone have any idea what could be wrong, or how I could move forward? I have been trying pretty much all week to get a simple client/server socket program for C++ working, but with no luck. Serious frustration kicking in. Thank you in advance.

    Read the article

  • LuaEdit can't find module when Lua files all in the same folder

    - by joverboard
    I downloaded LuaEdit to use as an IDE and debugging tool; however, I'm having trouble using it for even the simplest things. I've created a solution with 2 files in it, all of which are stored in the same folder. My files are as follows:

        --startup.lua
        require("foo")
        test("Testing", "testing", "one, two, three")

        --foo.lua
        foo = {}
        print("In foo.lua")

        function test(a,b,c)
            print(a,b,c)
        end

    This works fine in my C++ program when accessed through some embedding code; however, when I attempt to use the same code in LuaEdit, it crashes on line 3, require("foo"), with an error stating:

        module 'foo' not found:
            no field package.preload['foo']
            no file 'C:\Program Files (x86)\LuaEdit 2010\lua\foo.lua'
            no file 'C:\Program Files (x86)\LuaEdit 2010\lua\foo\init.lua'
            no file 'C:\Program Files (x86)\LuaEdit 2010\foo.lua'
            no file 'C:\Program Files (x86)\LuaEdit 2010\foo\init.lua'
            no file '.\foo.lua'
            no file 'C:\Program Files (x86)\LuaEdit 2010\foo.dll'
            no file 'C:\Program Files (x86)\LuaEdit 2010\loadall.dll'
            no file '.\battle.dll'

    I have also tried creating these files prior to adding them to a solution and still get the same error. Is there some setting I'm missing? It would be great to have an IDE/debugger, but it's useless to me if it can't run linked functions.

    Read the article

  • Does C# allow method overloading, PHP style (__call)?

    - by mr.b
    In PHP there is a special method named __call($calledMethodName, $arguments), which allows a class to catch calls to non-existing methods and do something about them. Since most classic languages are strongly typed, the compiler won't allow calling a method that does not exist; I'm clear on that part. What I want to accomplish (and I figured this is how I would do it in PHP, but C# is something else) is to proxy calls to a class's methods and log each of these calls. Right now, I have code similar to this:

        class ProxyClass
        {
            static logger;

            public AnotherClass inner { get; private set; }

            public ProxyClass()
            {
                inner = new AnotherClass();
            }
        }

        class AnotherClass
        {
            public void A() {}
            public void B() {}
            public void C() {}
            // ...
        }

        // meanwhile, in happyCodeLandia...
        ProxyClass pc = new ProxyClass();
        pc.inner.A();
        pc.inner.B();
        // ...

    So, how can I proxy calls to an object instance in an extensible way? Extensible meaning that I don't have to modify ProxyClass whenever AnotherClass changes. In my case, AnotherClass can have any number of methods, so it wouldn't be appropriate to overload or wrap all methods to add logging. I am aware that this might not be the best approach for this kind of problem, so if anyone has an idea what approach to use, shoot. Thanks!

    Read the article
