Search Results

Search found 4771 results on 191 pages for 'aspnet compiler'.

Page 166/191 | < Previous Page | 162 163 164 165 166 167 168 169 170 171 172 173  | Next Page >

  • Trying to make a plugin system in C++

    - by Pirate for Profit
    I'm making a task-based program that needs to have plugins. Tasks need to have properties which can be easily edited, I think this can be done with Qt's Meta-Object Compiler reflection capabilities (I could be wrong, but I should be able to stick this in a QtPropertyBrowser?) So here's the base: class Task : public QObject { Q_OBJECT public: explicit Task(QObject *parent = 0) : QObject(parent){} virtual void run() = 0; signals: void taskFinished(bool success = true); } Then a plugin might have this task: class PrinterTask : public Task { Q_OBJECT public: explicit PrinterTask(QObject *parent = 0) : Task(parent) {} void run() { Printer::getInstance()->Print(this->getData()); // fictional emit taskFinished(true); } inline const QString &getData() const; inline void setData(QString data); Q_PROPERTY(QString data READ getData WRITE setData) // for reflection } In a nutshell, here's what I want to do: // load plugin // find all the Tasks interface implementations in it // have user able to choose a Task and edit its specific Q_PROPERTY's // run the TASK It's important that one .dll has multiple tasks, because I want them to be associated by their module. For instance, "FileTasks.dll" could have tasks for deleting files, making files, etc. The only problem with Qt's plugin setup is I want to store X amount of Tasks in one .dll module. As far as I can tell, you can only load one interface per plugin (I could be wrong?). If so, the only possible way to do accomplish what I want is to create a FactoryInterface with string based keys which return the objects (as in Qt's Plug-And-Paint example), which is a terrible boilerplate that I would like to avoid. Anyone know a cleaner C++ plugin architecture than Qt's to do what I want? Also, am I safely assuming Qt's reflection capabilities will do what I want (i.e. able to edit an unknown dynamically loaded tasks' properties with the QtPropertyBrowser before dispatching)?
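
    A sketch of one way around the one-interface-per-plugin limit, assuming Qt 4's plugin macros: expose a single root interface whose only job is to hand back the QMetaObject of every Task type in the module, then let the host build instances through QMetaObject::newInstance() (the Task constructors must be marked Q_INVOKABLE for that to work). TaskCollection, FileTasksPlugin and the task class names below are made-up illustrations, not an established API.

        // Hypothetical root interface: one per plugin, any number of Task types behind it.
        #include <QtCore/QObject>
        #include <QtCore/QList>
        #include <QtCore/QMetaObject>

        class TaskCollection
        {
        public:
            virtual ~TaskCollection() {}
            virtual QList<const QMetaObject*> taskTypes() const = 0;
        };
        Q_DECLARE_INTERFACE(TaskCollection, "com.example.TaskCollection/1.0")

        // Inside FileTasks.dll: DeleteFileTask / MakeFileTask are assumed to be
        // Task subclasses (as in the question) with Q_INVOKABLE constructors.
        class FileTasksPlugin : public QObject, public TaskCollection
        {
            Q_OBJECT
            Q_INTERFACES(TaskCollection)
        public:
            QList<const QMetaObject*> taskTypes() const
            {
                return QList<const QMetaObject*>()
                        << &DeleteFileTask::staticMetaObject
                        << &MakeFileTask::staticMetaObject;
            }
        };
        // Q_EXPORT_PLUGIN2(filetasks, FileTasksPlugin)

        // Host side, after QPluginLoader has produced the TaskCollection*:
        //   Task *t = qobject_cast<Task*>(meta->newInstance());
        //   ...hand t to the QtPropertyBrowser via its meta-object, then call t->run().

    This keeps each .dll free to add or remove Task types without touching the host, and the property browser can still enumerate the Q_PROPERTYs of whatever newInstance() returns.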

    Read the article

  • ISO C90 forbids mixed declarations and code sscanf

    - by Need4Sleep
    I'm getting a strange error attempting to compile my unit test code,. For some reason the compiler treats my sscanf call as a mixed declaration? I don't quite understand, here is the entire error: cc1: warnings being treated as errors /home/brlcad/brlcad/src/libbn/tests/bn_complex.c: In function 'main': /home/brlcad/brlcad/src/libbn/tests/bn_complex.c:53: error: ISO C90 forbids mixed declarations and code make[2]: *** [src/libbn/tests/CMakeFiles/tester_bn_complex.dir/bn_complex.c.o] Error 1 make[1]: *** [src/libbn/tests/CMakeFiles/tester_bn_complex.dir/all] Error 2 make: *** [all] Error 2 int main(int argc, char *argv[]) { double expRe1, expIm2, expSqRe1, expSqIm2; double actRe1, actIm2, actSqRe1, actSqIm2; actRe1 = actIm2 = actSqRe1 = actSqIm2 = expRe1 = expIm2 = expSqRe1 = expSqIm2 = 0.0; bn_complex_t com1,com2; //a struct that holds two doubles if(argc < 5) bu_exit(1, "ERROR: Invalid parameters[%s]\n", argv[0]); sscanf(argv[1], "%lf,%lf", &com1.re, &com1.im); /* Error is HERE */ sscanf(argv[2], "%lf,%lf", &com2.re, &com2.im); sscanf(argv[3], "%lf,%lf", &expRe1, &expIm2); sscanf(argv[4], "%lf,%lf", &expSqRe1, &expSqIm2); test_div(com1, com2, &actRe1, &actIm2); test_sqrt(com1,com2, &actSqRe1, &actSqIm2); if((fabs(actRe1 - expRe1) < 0.00001) || (fabs(actIm2 - expIm2) < 0.00001)){ printf("Division failed...\n"); return 1; } if((fabs(actSqRe1 - expSqRe1) < 0.00001) || (fabs(actSqIm2 - expSqIm2) < 0.00001)){ printf("Square roots failed...\n"); return 1; } return 0; }
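
    For what it's worth, the usual trigger for this diagnostic is a declaration that appears after the first statement of the block - here bn_complex_t com1, com2; follows the chained assignment - since C90 only allows declarations at the top of a block. A minimal sketch of the reordering, with a stand-in typedef so it compiles on its own:

        /* bn_complex_t stand-in; the real one lives in the BRL-CAD headers */
        #include <stdio.h>
        typedef struct { double re, im; } bn_complex_t;

        int main(int argc, char *argv[])
        {
            double expRe1, expIm2;
            bn_complex_t com1;          /* declared BEFORE any statement */

            expRe1 = expIm2 = 0.0;      /* statements only from here on */
            if (argc < 2)
                return 1;
            sscanf(argv[1], "%lf,%lf", &com1.re, &com1.im);
            printf("%f %f %f %f\n", com1.re, com1.im, expRe1, expIm2);
            return 0;
        }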

    Read the article

  • Please help with C++ syntax for const accessor by reference.

    - by Hamish Grubijan
    Right now my implementation returns the thing by value. The member m_MyObj itself is not const - its value changes depending on what the user selects with a Combo Box. I am no C++ guru, but I want to do this right. If I simply stick a & in front of GetChosenThingy in both decl. and impl., I get one sort of compiler error. If I do one but not the other - another error. If I do return &m_MyObj; - yet another. I will not list the errors here for now, unless there is a strong demand for it. I assume that an experienced C++ coder can tell what is going on here. I could omit constness or reference, but I want to make it tight and learn in the process as well. Thanks! // In header file MyObj GetChosenThingy() const; // In Implementation file. MyObj MyDlg::GetChosenThingy() const { return m_MyObj; }
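
    A minimal sketch of the reference-returning version - the & has to appear in both the declaration and the definition, and the body returns the member itself (not &m_MyObj, which would be a pointer):

        class MyObj {};

        class MyDlg
        {
        public:
            const MyObj& GetChosenThingy() const;   // header: const reference return
        private:
            MyObj m_MyObj;
        };

        // implementation file: the signature must match exactly
        const MyObj& MyDlg::GetChosenThingy() const
        {
            return m_MyObj;     // binds the returned reference to the member
        }

    The caller just has to remember that the returned reference stays valid only as long as the dialog it came from.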

    Read the article

  • final transient fields and serialization

    - by doublep
    Is it possible to have final transient fields that are set to any non-default value after serialization in Java? My usecase is a cache variable — that's why it is transient. I also have a habit of making Map fields that won't be changed (i.e. contents of the map is changed, but object itself remains the same) final. However, these attributes seem to be contradictory — while compiler allows such a combination, I cannot have the field set to anything but null after unserialization. I tried the following, without success: simple field initialization (shown in the example): this is what I normally do, but the initialization doesn't seem to happen after unserialization; initialization in constructor (I believe this is semantically the same as above though); assigning the field in readObject() — cannot be done since the field is final. In the example cache is public only for testing. import java.io.*; import java.util.*; public class test { public static void main (String[] args) throws Exception { X x = new X (); System.out.println (x + " " + x.cache); ByteArrayOutputStream buffer = new ByteArrayOutputStream (); new ObjectOutputStream (buffer).writeObject (x); x = (X) new ObjectInputStream (new ByteArrayInputStream (buffer.toByteArray ())).readObject (); System.out.println (x + " " + x.cache); } public static class X implements Serializable { public final transient Map <Object, Object> cache = new HashMap <Object, Object> (); } } Output: test$X@1a46e30 {} test$X@190d11 null

    Read the article

  • Delphi: How to avoid EIntOverflow underflow when subtracting?

    - by Ian Boyd
    Microsoft already says, in the documentation for GetTickCount, that you should never compare tick counts to check if an interval has passed. e.g.: Incorrect (pseudo-code): DWORD endTime = GetTickCount + 10000; //10 s from now ... if (GetTickCount > endTime) break; The above code is bad because it is susceptible to rollover of the tick counter. For example, assume that the clock is near the end of its range: endTime = 0xfffffe00 + 10000 = 0x00002510; //9,488 decimal Then you perform your check: if (GetTickCount > endTime) Which is satisfied immediately, since GetTickCount is larger than endTime: if (0xfffffe01 > 0x00002510) The solution: Instead you should always subtract the two time intervals: DWORD startTime = GetTickCount; ... if (GetTickCount - startTime) > 10000 //if it's been 10 seconds break; Looking at the same math: if (GetTickCount - startTime) > 10000 if (0xfffffe01 - 0xfffffe00) > 10000 if (1 > 10000) Which is all well and good in C/C++, where the compiler behaves a certain way. But what about Delphi? When I perform the same math in Delphi, with overflow checking on ({$Q+}, {$OVERFLOWCHECKS ON}), the subtraction of the two tick counts generates an EIntOverflow exception when the TickCount rolls over: if (0x00000100 - 0xffffff00) > 10000 0x00000100 - 0xffffff00 = 0x00000200 What is the intended solution for this problem? Edit: I've tried to temporarily turn off OVERFLOWCHECKS: {$OVERFLOWCHECKS OFF} delta = GetTickCount - startTime; {$OVERFLOWCHECKS ON} But the subtraction still throws an EIntOverflow exception. Is there a better solution, involving casts and larger intermediate variable types?

    Read the article

  • Creating an object in the loop

    - by Jacob
    std::vector<double> C(4); for(int i = 0; i < 1000;++i) for(int j = 0; j < 2000; ++j) { C[0] = 1.0; C[1] = 1.0; C[2] = 1.0; C[3] = 1.0; } is much faster than for(int i = 0; i < 1000;++i) for(int j = 0; j < 2000; ++j) { std::vector<double> C(4); C[0] = 1.0; C[1] = 1.0; C[2] = 1.0; C[3] = 1.0; } I realize this happens because std::vector is repeatedly being created and instantiated in the loop, but I was under the impression this would be optimized away. Is it completely wrong to keep variables local in a loop whenever possible? I was under the (perhaps false) impression that this would provide optimization opportunities for the compiler. EDIT: I use VC++2005 (release mode) with full optimization (/Ox)
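
    For comparison, a rough sketch of the two usual ways to keep the cost down while staying close to the original code - reuse one vector across iterations, or switch to a stack-only aggregate when the size is a small compile-time constant. std::vector's constructor heap-allocates, which is plausibly what the optimizer was not removing here.

        #include <vector>

        void hoisted()
        {
            std::vector<double> C(4);                     // one allocation, reused
            for (int i = 0; i < 1000; ++i)
                for (int j = 0; j < 2000; ++j)
                {
                    C[0] = C[1] = C[2] = C[3] = 1.0;
                }
        }

        void local_but_cheap()
        {
            for (int i = 0; i < 1000; ++i)
                for (int j = 0; j < 2000; ++j)
                {
                    double C[4] = { 1.0, 1.0, 1.0, 1.0 };  // no heap traffic at all
                    (void)C;                               // silence unused warnings
                }
        }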

    Read the article

  • Does it ever make sense to make a fundamental (non-pointer) parameter const?

    - by Scott Smith
    I recently had an exchange with another C++ developer about the following use of const: void Foo(const int bar); He felt that using const in this way was good practice. I argued that it does nothing for the caller of the function (since a copy of the argument was going to be passed, there is no additional guarantee of safety with regard to overwrite). In addition, doing this prevents the implementer of Foo from modifying their private copy of the argument. So, it both mandates and advertises an implementation detail. Not the end of the world, but certainly not something to be recommended as good practice. I'm curious as to what others think on this issue. Edit: OK, I didn't realize that const-ness of the arguments didn't factor into the signature of the function. So, it is possible to mark the arguments as const in the implementation (.cpp), and not in the header (.h) - and the compiler is fine with that. That being the case, I guess the policy should be the same for making local variables const. One could make the argument that having different looking signatures in the header and source file would confuse others (as it would have confused me). While I try to follow the Principle of Least Astonishment with whatever I write, I guess it's reasonable to expect developers to recognize this as legal and useful.
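
    A small illustration of the point raised in the edit - top-level const on a by-value parameter is not part of the function's type, so the header can omit it while the definition adds it purely for the implementer's benefit:

        #include <iostream>

        void Foo(int bar);              // what callers see, e.g. in the header

        void Foo(const int bar)         // same function; const only constrains the body
        {
            // bar = 7;                 // would not compile: bar is const in here
            std::cout << bar << '\n';
        }

        int main()
        {
            Foo(42);                    // the caller is unaffected either way
        }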

    Read the article

  • ojspc always returns 0 on errors

    - by Matt McCormick
    In my Ant build.xml file, I am trying to compile JSPs using ojspc. The files are being compiled; however, the build process still runs to completion when the JSP compilation has errors. This is part of my build.xml: <java fork="true" jar="${env.ORACLE_HOME}\j2ee\home\ojspc.jar" resultproperty="result"> <jvmarg value="-Djava.compiler=NONE"/> <arg value="-extend"/> <arg value="com.orionserver.http.OrionHttpJspPage"/> <arg value="-batchMask"/> <arg value="*.jsp"/> <arg value="${target-directory}/build/target/ear/${module-dir-name}-jsp.war"/> </java> <echo level="info">Result Property: ${result}</echo> I have tried setting the property failonerror="true" but that does not change anything. I receive the following output: [java] Detected archive, now processing contents of ../build/target/ear/web-module-jsp.war... [java] Setting up temp area... [java] Expanding archive in temp area... [java] C:\DOCUME~1\MMCCOR~1\LOCALS~1\Temp\tmp12940\_web_2d_inf\_jsp\_password.java:60: cannot resolve symbol [java] symbol : variable reqvst [java] location: class _web_2d_inf._jsp._password [java] out.print(reqvst.getAttribute("test")); [java] ^ [java] 1 error [java] Creating D:\eclipse-workspace\jdw\build\..\build\target\ear\web-module-jsp.war ... [java] Removing temp area... [echo] Result Property: 0 ...(more commands) BUILD SUCCESSFUL In the password.jsp file, I intentionally introduced an error to test. How can I get the build to fail on an error? At the Ant Java page, I am confused by: By default the return code of a <java> is ignored. Alternatively, you can set resultproperty to the name of a property and have it assigned to the result code (barring immutability, of course). When you set failonerror="true", the only possible value for resultproperty is 0. Any non-zero response is treated as an error and would mean the build exits.

    Read the article

  • Questions regarding detouring by modifying the virtual table

    - by Elliott Darfink
    I've been practicing detours using the same approach as Microsoft Detours (replace the first five bytes with a jmp and an address). More recently I've been reading about detouring by modifying the virtual table. I would appreciate if someone could shed some light on the subject by mentioning a few pros and cons with this method compared to the one previously mentioned! I'd also like to ask about patched vtables and objects on the stack. Consider the following situation: // Class definition struct Foo { virtual void Call(void) { std::cout << "FooCall\n"; } }; // If it's GCC, 'this' is passed as the first parameter void MyCall(Foo * object) { std::cout << "MyCall\n"; } // In some function Foo * foo = new Foo; // Allocated on the heap Foo foo2; // Created on the stack // Arguments: void ** vtable, uint offset, void * replacement PatchVTable(*reinterpret_cast<void***>(foo), 0, MyCall); // Call the methods foo->Call(); // Outputs: 'MyCall' foo2.Call(); // Outputs: 'FooCall' In this case foo->Call() would end up calling MyCall(Foo * object) whilst foo2.Call() call the original function (i.e Foo::Call(void) method). This is because the compiler will try to decide any virtual calls during compile time if possible (correct me if I'm wrong). Does that mean it does not matter if you patch the virtual table or not, as long as you use objects on the stack (not heap allocated)?
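
    On the last question, a small self-contained check of that assumption (no patching involved): a call on the object itself is bound at compile time and never consults the vtable, while a call through a pointer or reference is normally dispatched through it - unless the optimizer can prove the dynamic type and devirtualizes anyway.

        #include <iostream>

        struct Foo
        {
            virtual void Call() { std::cout << "FooCall\n"; }
        };

        int main()
        {
            Foo foo2;            // on the stack
            foo2.Call();         // static binding: a patched vtable would be ignored
            Foo* p = &foo2;
            p->Call();           // typically virtual dispatch through foo2's vtable
        }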

    Read the article

  • Scala path dependent return type from parameter

    - by Rich Oliver
    The following code uses 2.10.0M3 in Eclipse plugin 2.1.0 for 2.10M3. I'm using the default setting, which targets JVM 1.5. class GeomBase[T <: DTypes] { abstract class NewObjs { def newHex(gridR: GridBase, coodI: Cood): gridR.HexRT } class GridBase { selfGrid => type HexRT = HexG with T#HexTr def uniformRect (init: NewObjs) { val hexCood = Cood(2 ,2) val hex: HexRT = init.newHex(selfGrid, hexCood)// won't compile } } } Error message: Description Resource Path Location Type type mismatch; found: GeomBase.this.GridBase#HexG with T#HexTr required: GridBase.this.HexRT (which expands to) GridBase.this.HexG with T#HexTr GeomBase.scala Why does the compiler think the method returns the type projection GridBase#HexG when it should be this specific instance of GridBase? Edit: transferred to a simpler test class in response to comments; now getting a different error message. package rStrat class TestClass { abstract class NewObjs { def newHex(gridR: GridBase): gridR.HexG } class GridBase { selfGrid => def uniformRect (init: NewObjs) { val hex: HexG = init.newHex(this) //error here } class HexG { val test12 = 5 } } } Error line 11: Description Resource Path Location Type type mismatch; found : gridR.HexG required: GridBase.this.HexG possible cause: missing arguments for method or constructor TestClass.scala /SStrat/src/rStrat line 11 Scala Problem Update: I've switched to 2.10.0M4 and updated the plug-in to the M4 version on a fresh install of Eclipse and switched to JVM 1.6 (and 1.7), but the problems are unchanged.

    Read the article

  • Problems with variadic function

    - by morpheous
    I have the following function from some legacy code that I am maintaining. long getMaxStart(long start, long count, const myStruct *s1, ...) { long i1, maxstart; myStruct *s2; va_list marker; maxstart = start; /*BUGFIX: 003 */ /*(va_start(marker, count);*/ va_start(marker, s1); for (i1 = 1; i1 <= count; i1++) { s2 = va_arg(marker, myStruct *); /* <- s2 is assigned null here */ maxstart = MAX(maxstart, s2->firstvalid); /* <- SEGV here */ } va_end(marker); return (maxstart); } When the function is called with only one myStruct argument, it causes a SEGV. The code compiled and ran without crashing on Windows XP when I compiled it using VS2005. I have now moved the code to Ubuntu Karmic and I am having problems with the stricter compiler on Linux. Is anyone able to spot what is causing the parameter not to be read correctly in the va_arg() statement? I am compiling using gcc version 4.4.1. Edit: The statement that causes the SEGV is this one: start = getMaxStart(start, 1, ms1); The variables 'start' and 'ms1' have valid values when the code execution first reaches this line.
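
    A guess at what is going on, shown as a self-contained sketch: the call getMaxStart(start, 1, ms1) supplies no arguments after s1, yet the loop still pulls count pointers out of the va_list, so it reads past the end of the argument list (which happened to "work" on one ABI and crashes on another). One hedged reading of the intent - count includes s1 itself, so only count - 1 extra pointers are fetched - looks like this; myStruct and MAX are stand-ins for the real definitions:

        #include <stdarg.h>
        #include <stdio.h>

        typedef struct { long firstvalid; } myStruct;     /* stand-in */
        #define MAX(a, b) ((a) > (b) ? (a) : (b))

        long getMaxStart(long start, long count, const myStruct *s1, ...)
        {
            long i, maxstart;
            va_list marker;

            maxstart = MAX(start, s1->firstvalid);
            va_start(marker, s1);
            for (i = 1; i < count; i++)                   /* count includes s1 */
            {
                const myStruct *s2 = va_arg(marker, const myStruct *);
                maxstart = MAX(maxstart, s2->firstvalid);
            }
            va_end(marker);
            return maxstart;
        }

        int main(void)
        {
            myStruct a = { 10 }, b = { 20 };
            printf("%ld\n", getMaxStart(5, 2, &a, &b));   /* prints 20 */
            printf("%ld\n", getMaxStart(5, 1, &a));       /* no varargs read */
            return 0;
        }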

    Read the article

  • Freestanding ARM C++ Code - empty .ctors section

    - by Matthew Iselin
    I'm writing C++ code to run in a freestanding environment (basically an ARM board). It's been going well except I've run into a stumbling block - global static constructors. To my understanding the .ctors section contains a list of addresses to each static constructor, and my code simply needs to iterate this list and make calls to each function as it goes. However, I've found that this section in my binary is in fact completely empty! Google pointed towards using ".init_array" instead of ".ctors" (an EABI thing), but that has not changed anything. Any ideas as to why my static constructors don't exist? Relevant linker script and objdump output follows: .ctors : { . = ALIGN(4096); start_ctors = .; *(.init_array); *(.ctors); end_ctors = .; } .dtors : { . = ALIGN(4096); start_dtors = .; *(.fini_array); *(.dtors); end_dtors = .; } -- 2 .ctors 00001000 8014c000 8014c000 00054000 2**2 CONTENTS, ALLOC, LOAD, DATA <snip> 8014d000 g O .ctors 00000004 start_ctors <snip> 8014d000 g O .ctors 00000004 end_ctors I'm using an arm-elf targeted GCC compiler (4.4.1).
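
    For reference, a sketch of both halves under the stated assumptions. On the linker side, a common reason for the section coming out empty is that the input sections are actually named .init_array.* / .ctors.* (sorted variants) and/or get discarded by --gc-sections, so wrapping the patterns as KEEP(*(.init_array*)) and KEEP(*(.ctors*)) is worth trying. The C++ side then just walks the function pointers between the two symbols:

        // start_ctors / end_ctors come from the linker script shown above.
        typedef void (*constructor_fn)();

        extern "C" constructor_fn start_ctors[];
        extern "C" constructor_fn end_ctors[];

        void run_static_constructors()
        {
            for (constructor_fn* fn = start_ctors; fn != end_ctors; ++fn)
                (*fn)();                 // invoke each global constructor in order
        }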

    Read the article

  • Can't get node.js built on cygwin

    - by mwt
    Following the instructions here: https://github.com/ry/node/wiki/Building-node.js-on-Cygwin-(Windows) I've tried installing on two machines, either of which I'd be happy to get up and running. WinXP On 'make', I get: Build failed: -> task failed <err #2>: {task: libv8.a SConstruct -> libv8.a} According to the instructions, this is caused by having $SHELL set to a Windows style path, but I've set it to /bin/bash and get the same error. Win7 On './configure', I get: $ ./configure Checking for program g++ or c++ : /usr/bin/g++ Checking for program cpp : /usr/bin/cpp Checking for program ar : /usr/bin/ar Checking for program ranlib : /usr/bin/ranlib Checking for g++ : ok Checking for program gcc or cc : /usr/bin/gcc 0 [main] python 1092 C:\bin\python.exe: *** fatal error - unable to remap \\?\C:\lib\python2.6\lib-dynload\_functools.dll to same address as parent: 0x360000 != 0x3E0000 Stack trace: Frame Function Args 002891E8 6102749B (002891E8, 00000000, 00000000, 00000000) 002894D8 6102749B (61177B80, 00008000, 00000000, 61179977) 0028A508 61004AFB (611A136C, 61241CF4, 00360000, 003E0000) End of stack trace 0 [main] python 3536 fork: child 1092 - died waiting for dll loading, errno 11 /Users/Michael/Desktop/node/wscript:177: error: could not configure a c compiler! I've run 'rebaseall' and restarted the machine but still get that error. Edit: Ok, rebaseall was apparently erroring on some mingw stuff, so I edited the rebaseall script to fix that, and now it configures on Win7. The new problem is that it emits the exact same error as my XP machine now when I try to make. This is on tag v0.3.5.

    Read the article

  • different thread accessing MemoryStream

    - by Wayne
    There's a bit of code which writes data to a MemoryStream object directly into its data buffer by calling GetBuffer(). It also uses and updates the Position and SetLength() properties appropriately. This code works 99.9999% of the time. Literally. Only every so many 100,000's of iterations it will barf. The specific problem is that the memory.Position property suddenly returns zero instead of the appropriate value. However, code was added that checks for the 0 and throws an exception which includes a log of the MemoryStream properties like Position and Length, gathered in a separate method. Those return the correct value. Further addition shows that when this rare condition occurs, the memory.Position only has zero inside this particular method. Okay. Obviously, this must be a threading issue. But this code is well locked. However, the nature of this software is that it's organized by "tasks" with a scheduler and so any one of several actual O/S threads may run this code at any given time--but never more than one at a time. So it's my guess that ordinarily it so happens that the same thread keeps getting used for this method and then on a rare occasion a different thread gets used. Then due to compiler optimizations, the different thread never gets the correct value. It gets a "stale" value. Ordinarily in a situation like this I would apply a "volatile" keyword to the variable in question. But that (those) variables are inside the MemoryStream object. Does anyone have any other idea? Or does this mean we have to implement our own MemoryStream object? (Just like we end up having to do with practically every collection in .NET?) It's a shame to have such an awesome platform as .NET and have virtually the entire system useless as-is for seriously parallelized applications. If I'm wrong or you have other ideas, please advise. Sincerely, Wayne

    Read the article

  • Inline function v. Macro in C -- What's the Overhead (Memory/Speed)?

    - by Jason R. Mick
    I searched Stack Overflow for the pros/cons of function-like macros v. inline functions. I found the following discussion: Pros and Cons of Different macro function / inline methods in C ...but it didn't answer my primary burning question. Namely, what is the overhead in c of using a macro function (with variables, possibly other function calls) v. an inline function, in terms of memory usage and execution speed? Are there any compiler-dependent differences in overhead? I have both icc and gcc at my disposal. My code snippet I'm modularizing is: double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3); double RepulsiveTerm = AttractiveTerm * AttractiveTerm; EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm); My reason for turning it into an inline function/macro is so I can drop it into a c file and then conditionally compile other similar, but slightly different functions/macros. e.g.: double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3); double RepulsiveTerm = pow(SigmaSquared/RadialDistanceSquared,9); EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm); (note the difference in the second line...) This function is a central one to my code and gets called thousands of times per step in my program and my program performs millions of steps. Thus I want to have the LEAST overhead possible, hence why I'm wasting time worrying about the overhead of inlining v. transforming the code into a macro. Based on the prior discussion I already realize other pros/cons (type independence and resulting errors from that) of macros... but what I want to know most, and don't currently know is the PERFORMANCE. I know some of you C veterans will have some great insight for me!!
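
    A sketch of the two candidates side by side (the names are invented here). With optimization enabled, both typically compile to identical machine code, so the honest answer to the overhead question is usually "zero - measure it"; the inline function additionally gives type checking and single evaluation of its arguments, which the macro lacks. The pow() call is likely to dominate either way, and spelling pow(x,3) as x*x*x may matter more than the macro/inline choice, so it is worth benchmarking on both icc and gcc.

        #include <math.h>

        #define LJ_CONTRIB(sig2, r2, eps, acc)                         \
            do {                                                       \
                double at_ = pow((sig2) / (r2), 3);                    \
                double rt_ = at_ * at_;                                \
                (acc) += 4 * (eps) * (rt_ - at_);                      \
            } while (0)

        static inline void lj_contrib(double sig2, double r2,
                                      double eps, double *acc)
        {
            double at = pow(sig2 / r2, 3);    /* attractive term  */
            double rt = at * at;              /* repulsive term   */
            *acc += 4 * eps * (rt - at);
        }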

    Read the article

  • cython setup.py gives .o instead of .dll

    - by alok1974
    Hi, I am a newbie to cython, so pardon me if I am missing something obvious here. I am trying to build C extensions to be used in Python for enhanced performance. I have an fc.py module with a bunch of functions and am trying to generate a .dll through cython using distutils and running on win64: c:\python26\python c:\cythontest\setup.py build_ext --inplace I have the distutils.cfg in C:\Python26\Lib\distutils. As required, the distutils.cfg has the following config settings: [build] compiler = mingw32 My setup.py looks like this: from distutils.core import setup from distutils.extension import Extension from Cython.Distutils import build_ext ext_modules = [Extension('fc', [r'C:\cythonTest\fc.pyx'])] setup( name = 'FC Extensions', cmdclass = {'build_ext': build_ext}, ext_modules = ext_modules ) I have the latest version of mingw for target/host amdwin64 type builds. I have the latest version of cython for python26 for win64. Cython does give me an fc.c without errors, only a few warnings for type conversions, which I will handle once I have it right. Further, it produces fc.def and fc.o files instead of giving a .dll. I get no errors. I find on threads that it will create the .so or .dll automatically as required, which is not happening.

    Read the article

  • OCaml delimiters and scopes

    - by Jack
    Hello! I'm learning OCaml and although I have years of experience with imperative programming languages (C, C++, Java) I'm getting some problems with delimiters between declarations or expressions in OCaml syntax. Basically I understood that I have to use ; to concatenate expressions and the value returned by the sequence will be the one of last expression used, so for example if I have exp1; exp2; exp3 it will be considered as an expression that returns the value of exp3. Starting from this I could use let t = something in exp1; exp2; exp3 and it should be ok, right? When am I supposed to use the double semicol ;;? What does it exactly mean? Are there other delimiters that I must use to avoid syntax errors? I'll give you an example: let rec satisfy dtmc state pformula = match (state, pformula) with (state, `Next sformula) -> let s = satisfy_each dtmc sformula and adder a state = let p = 0.; for i = 0 to dtmc.matrix.rows do p <- p +. get dtmc.matrix i state.index done; a +. p in List.fold_left adder 0. s | _ -> [] It gives me syntax error on | but I don't get why.. what am I missing? This is a problem that occurs often and I have to try many different solutions until it suddently works :/ A side question: declaring with let instead that let .. in will define a var binding that lasts whenever after it has been defined? What I basically ask is: what are the delimiters I have to use and when I have to use them. In addition are there differences I should consider while using the interpreter ocaml instead that the compiler ocamlc? Thanks in advance!

    Read the article

  • Sharepoint fails to load a C++ dll on windows 2008

    - by Nathan
    I have a sharepoint DLL that does some licensing things and as part of the code it uses an external C++ DLL to get the serial number of the hardisk. When i run this application on windows server 2003 it works fine, but on 2008 the whole site (loaded on load) crashes and resets continually. This is not 2008 R2 and is the same in 64 or 32 bits. If i put a debugger.break before the dll execution then I see the code get to the point of the break then never come back into the dll again. I do get some debug assertion warnings from within the function, again only in 2008, but im not sure this is related. I created a console app that runs the c# dll, which in turn loads the c++ dll, and this works perfectly on 2008 (although does show the assertion errors, but I have suppressed these now). The assertion errors are not in my code but within ICtypes.c and not something I can debug. If i put a breakpoint in the DLL it is never hit and the compiler says : "step in: Stepping over non user code" if i try to debug into the DLL using VS. I have tried wrapping the code used to call the DLL in: SPSecurity.RunWithElevatedPrivileges(delegate() but this also does not help. I have the sourcecode for this DLL so that is not a problem. If i delete the DLL from the directory I get an error about a missing DLL, if i replace it back to no error or warning just a complete failure. If i replace this code with a hardcoded string the whole application works fine. Any advice would be much appreciated, I can't understand why it works as a console app yet not when run by sharepoint, this is with the same user account, on the same machine... This is the code used to call the DLL: [DllImport("idDll.dll", EntryPoint = "GetMachineId", SetLastError = true)] extern static string GetComponentId([MarshalAs(UnmanagedType.LPStr)]String s); public static string GetComponentId() { Debugger.Break(); if (_machine == string.Empty) { string temp = ""; id= ComponentId.GetComponentId(temp); } return id; }

    Read the article

  • C: Fifo between threads, writing and reading strings

    - by Yonatan
    Hello once more dear internet, I'm writing a small program that, among other things, writes a log file of commands received. To do that, I want to use a thread whose only job is to read from a pipe, while the main thread will write into that pipe whenever it should. Since I don't know the length of each string command, I thought about writing and reading the pointer to the char buf[MAX_MESSAGE_LEN]. Since what I've tried so far doesn't work, I'll post my best effort :P char str[] = "hello log thread 123456789 10 11 12 13 14 15 16 17 18 19\n"; if (pipe(pipe_fd) != 0) return -1; pthread_t log_thread; pthread_create(&log_thread,NULL, log_thread_start, argv[2]); success_write = 0; do { write(pipe_fd[1],(void*)&str,sizeof(char*)); } while (success_write < sizeof(char*)); and the thread does this: char buffer[MAX_MSGLEN]; int success_read; success_read = 0; //while(1) { do { success_read += read(pipe_fd[0],(void*)&buffer, sizeof(char*)); } while (success_read < sizeof(char*)); //} printf("%s",buffer); (Sorry if this doesn't indent, I can't seem to figure out this editor...) Oh, and pipe_fd[2] is a global variable. So, any help with this, either by the way I thought of, or another way I could read strings without knowing the length, would be much appreciated. On a side note, I'm working on Eclipse IDE C/C++, version 1.2.1 and I can't seem to set up the compiler so it will link the pthread library to my project. I've resorted to writing my own Makefile to make it (pun intended :P) work. Does anyone know what to do? I've looked online, but all I find are solutions that are probably good on an older version because the tabs and option keys are different. Anyway, thanks a bunch, internet! Yonatan
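
    One workable convention, sketched under the assumption that a length-prefixed message is acceptable: the writer sends a size_t header followed by that many bytes, so the reader never has to guess the string length (and nothing depends on pushing raw pointers through the pipe). Error handling is trimmed for brevity.

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        static int pipe_fd[2];

        static void write_msg(const char *s)
        {
            size_t len = strlen(s) + 1;               /* include the terminator */
            write(pipe_fd[1], &len, sizeof len);      /* fixed-size header */
            write(pipe_fd[1], s, len);                /* payload */
        }

        static void *log_thread_start(void *unused)
        {
            size_t len;
            (void)unused;
            while (read(pipe_fd[0], &len, sizeof len) == (ssize_t)sizeof len)
            {
                char *buf = (char *)malloc(len);
                size_t got = 0;
                while (got < len)                      /* short reads are possible */
                {
                    ssize_t n = read(pipe_fd[0], buf + got, len - got);
                    if (n <= 0) break;
                    got += (size_t)n;
                }
                printf("%s", buf);                     /* here: append to the log file */
                free(buf);
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t t;
            if (pipe(pipe_fd) != 0) return 1;
            pthread_create(&t, NULL, log_thread_start, NULL);
            write_msg("hello log thread 1 2 3\n");
            close(pipe_fd[1]);                         /* reader's loop then ends */
            pthread_join(t, NULL);
            return 0;
        }

    On the Eclipse side question, adding -lpthread (or -pthread) under the project's linker settings is the usual fix, though the exact menu path varies between CDT versions.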

    Read the article

  • Using static variables for Strings

    - by Vivart
    The content below is taken from Best practice: Writing efficient code, but I didn't understand why private static String x = "example"; is faster than private static final String x = "example";. Can anybody explain this? Using static variables for Strings When you define static fields (also called class fields) of type String, you can increase application speed by using static variables (not final) instead of constants (final). The opposite is true for primitive data types, such as int. For example, you might create a String object as follows: private static final String x = "example"; For this static constant (denoted by the final keyword), each time that you use the constant, a temporary String instance is created. The compiler eliminates "x" and replaces it with the string "example" in the bytecode, so that the BlackBerry® Java® Virtual Machine performs a hash table lookup each time that you reference "x". In contrast, for a static variable (no final keyword), the String is created once. The BlackBerry JVM performs the hash table lookup only when it initializes "x", so access is faster. private static String x = "example"; You can use public constants (that is, final fields), but you must mark variables as private.

    Read the article

  • c++ floating point precision loss: 3015/0.00025298219406977296

    - by SigTerm
    The problem. Microsoft Visual C++ 2005 compiler, 32bit windows xp sp3, amd 64 x2 cpu. Code: double a = 3015.0; double b = 0.00025298219406977296; //*((unsigned __int64*)(&a)) == 0x40a78e0000000000 //*((unsigned __int64*)(&b)) == 0x3f30945640000000 double f = a/b;//3015/0.00025298219406977296; the result of calculation (i.e. "f") is 11917835.000000000 (*((unsigned __int64*)(&f)) == 0x4166bb4160000000) although it should be 11917834.814763514 (i.e. *((unsigned __int64*)(&f)) == 0x4166bb415a128aef). I.e. fractional part is lost. Unfortunately, I need fractional part to be correct. Questions: 1) Why does this happen? 2) How can I fix the problem? Additional info: 0) The result is taken directly from "watch" window (it wasn't printed, and I didn't forget to set printing precision). I also provided hex dump of floating point variable, so I'm absolutely sure about calculation result. 1) The disassembly of f = a/b is: fld qword ptr [a] fdiv qword ptr [b] fstp qword ptr [f] 2) f = 3015/0.00025298219406977296; yields correct result (f == 11917834.814763514 , *((unsigned __int64*)(&f)) == 0x4166bb415a128aef ), but it looks like in this case result is simply calculated during compile-time: fld qword ptr [__real@4166bb415a128aef (828EA0h)] fstp qword ptr [f] So, how can I fix this problem? P.S. I've found a temporary workaround (i need only fractional part of division, so I simply use f = fmod(a/b)/b at the moment), but I still would like to know how to fix this problem properly - double precision is supposed to be 16 decimal digits, so such calculation isn't supposed to cause problems.
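
    A plausible explanation rather than a certain diagnosis: on 32-bit MSVC the division runs on the x87 FPU, and if some other component in the process (Direct3D is a frequent culprit) has lowered the FPU precision-control word to 24 bits, a double divide suddenly carries only about 7 significant digits - which is exactly what 11917835.0 looks like. A small test that forces 53-bit precision around the computation, using the MSVC-specific _controlfp_s from <float.h>:

        #include <float.h>
        #include <stdio.h>

        int main(void)
        {
            unsigned int original = 0, dummy = 0;
            _controlfp_s(&original, 0, 0);               /* read current control word */
            _controlfp_s(&dummy, _PC_53, _MCW_PC);       /* force 53-bit (double) precision */

            {
                double a = 3015.0;
                double b = 0.00025298219406977296;
                printf("%.9f\n", a / b);                 /* expect 11917834.814763514 */
            }

            _controlfp_s(&dummy, original, _MCW_PC);     /* restore previous precision */
            return 0;
        }

    If the value comes out correct with this in place, something in the process is resetting the control word; if it is still truncated, the theory is wrong and the workaround with fmod remains the fallback.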

    Read the article

  • "Function object is unsubscriptable" in basic integer to string mapping function

    - by IanWhalen
    I'm trying to write a function to return the word string of any number less than 1000. Everytime I run my code at the interactive prompt it appears to work without issue but when I try to import wordify and run it with a test number higher than 20 it fails as "TypeError: 'function' object is unsubscriptable". Based on the error message, it seems the issue is when it tries to index numString (for example trying to extract the number 4 out of the test case of n = 24) and the compiler thinks numString is a function instead of a string. since the first line of the function is me defining numString as a string of the variable n, I'm not really sure why that is. Any help in getting around this error, or even just help in explaining why I'm seeing it, would be awesome. def wordify(n): # Convert n to a string to parse out ones, tens and hundreds later. numString = str(n) # N less than 20 is hard-coded. if n < 21: return numToWordMap(n) # N between 21 and 99 parses ones and tens then concatenates. elif n < 100: onesNum = numString[-1] ones = numToWordMap(int(onesNum)) tensNum = numString[-2] tens = numToWordMap(int(tensNum)*10) return tens+ones else: # TODO pass def numToWordMap(num): mapping = { 0:"", 1:"one", 2:"two", 3:"three", 4:"four", 5:"five", 6:"six", 7:"seven", 8:"eight", 9:"nine", 10:"ten", 11:"eleven", 12:"twelve", 13:"thirteen", 14:"fourteen", 15:"fifteen", 16:"sixteen", 17:"seventeen", 18:"eighteen", 19:"nineteen", 20:"twenty", 30:"thirty", 40:"fourty", 50:"fifty", 60:"sixty", 70:"seventy", 80:"eighty", 90:"ninety", 100:"onehundred", 200:"twohundred", 300:"threehundred", 400:"fourhundred", 500:"fivehundred", 600:"sixhundred", 700:"sevenhundred", 800:"eighthundred", 900:"ninehundred", } return mapping[num] if __name__ == '__main__': pass

    Read the article

  • c# Why can't open generic types be passed as parameters?

    - by Rich Oliver
    Why can't open generic types be passed as parameters. I frequently have classes like: public class Example<T> where T: BaseClass { public int a {get; set;} public List<T> mylist {get; set;} } Lets say BaseClass is as follows; public BaseClass { public int num; } I then want a method of say: public int MyArbitarySumMethod(Example example)//This won't compile Example not closed { int sum = 0; foreach(BaseClass i in example.myList)//myList being infered as an IEnumerable sum += i.num; sum = sum * example.a; return sum; } I then have to write an interface just to pass this one class as a parameter as follows: public interface IExample { public int a {get; set;} public IEnumerable<BaseClass> myIEnum {get;} } The generic class then has to be modified to: public class Example<T>: IExample where T: BaseClass { public int a {get; set;} public List<T> mylist {get; set;} public IEnumerable<BaseClass> myIEnum {get {return myList;} } } That's a lot of ceremony for what I would have thought the compiler could infer. Even if something can't be changed I find it psychologically very helpful if I know the reasons / justifications for the absence of Syntax short cuts.

    Read the article

  • Are you using C++0x today? [closed]

    - by Roger Pate
    This is a question in two parts, the first is the most important and concerns now: Are you following the design and evolution of C++0x? What blogs, newsgroups, committee papers, and other resources do you follow? Even where you're not using any new features, how have they affected your current choices? What new features are you using now, either in production or otherwise? The second part is a follow-up, concerning the new standard once it is final: Do you expect to use it immediately? What are you doing to prepare for C++0x, other than as listed for the previous questions? Obviously, compiler support must be there, but there's still co-workers, ancillary tools, and other factors to consider. What will most affect your adoption? Edit: The original really was too argumentative; however, I'm still interested in the underlying question, so I've tried to clean it up and hopefully make it acceptable. This seems a much better avenue than duplicating—even though some answers responded to the argumentative tone, they still apply to the extent that they addressed the questions, and all answers are community property to be cleaned up as appropriate, too.

    Read the article

  • `enable_shared_from_this` has a non-virtual destructor

    - by Shtééf
    I have a pet project with which I experiment with new features of the upcoming C++0x standard. While I have experience with C, I'm fairly new to C++. To train myself into best practices, (besides reading a lot), I have enabled some strict compiler parameters (using GCC 4.4.1): -std=c++0x -Werror -Wall -Winline -Weffc++ -pedantic-errors This has worked fine for me. Until now, I have been able to resolve all obstacles. However, I have a need for enable_shared_from_this, and this is causing me problems. I get the following warning (error, in my case) when compiling my code (probably triggered by -Weffc++): base class ‘class std::enable_shared_from_this<Package>’ has a non-virtual destructor So basically, I'm a bit bugged by this implementation of enable_shared_from_this, because: A destructor of a class that is intended for subclassing should always be virtual, IMHO. The destructor is empty, why have it at all? I can't imagine anyone would want to delete their instance by reference to enable_shared_from_this. But I'm looking for ways to deal with this, so my question is really, is there a proper way to deal with this? And: am I correct in thinking that this destructor is bogus, or is there a real purpose to it?
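
    On the "is it bogus" question, a small self-contained check of why the non-virtual destructor is rarely a practical hazard: nothing is supposed to be deleted through an enable_shared_from_this<T>*, and shared_ptr records how to destroy the object when it is first created, so the right destructor runs regardless. Package is just a stand-in name here, and the claim that this builds under the same -std=c++0x GCC is an assumption worth verifying.

        #include <cstdio>
        #include <memory>

        class Package : public std::enable_shared_from_this<Package>
        {
        public:
            ~Package() { std::puts("~Package"); }          // non-virtual, like the base's
            std::shared_ptr<Package> self() { return shared_from_this(); }
        };

        int main()
        {
            std::shared_ptr<Package> p(new Package);
            std::shared_ptr<Package> q = p->self();        // shares ownership with p
            std::printf("use_count = %ld\n", static_cast<long>(p.use_count()));
        }   // "~Package" is printed exactly once, via the deleter shared_ptr captured

    The warning itself is -Weffc++ being deliberately conservative; many projects simply drop that flag for translation units that use enable_shared_from_this.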

    Read the article
