Search Results

Search found 5819 results on 233 pages for 'compiler theory'.


  • addObjectsFromArray

    - by derek.lo
    I have a question about memory management when using the addObjectsFromArray method. Basically, I have 2 arrays defined in the appDelegate. I need these 2 arrays for the duration of my application's runtime. I therefore release them in my appDelegate's dealloc method. When I go to use these two arrays in a class, I want one array to store the values from the other, so that the other can have its contents removed but still stick around for use. Something like this:

        [appDelegate.arrayTwo addObjectsFromArray:appDelegate.arrayOne];
        [appDelegate.arrayOne removeAllObjects];

    I'm getting EXC_BAD_ACCESS (a runtime crash, not a compiler error). Is it a pointer issue? A retaining issue? Any help would be greatly appreciated! Thanks


  • How to modify TList<record> value?

    - by Astronavigator
    How do I modify a TList<record> value?

        type
          TTest = record
            a, b, c: Integer;
          end;

        var
          List: TList<TTest>;
          A: TTest;
          P: Pointer;

        ...

        List[10] := A;     // OK
        List[10].a := 1;   // Compiler error: "Left side cannot be assigned to"
        P := @List[10];    // Error: "Variable required"


  • Two questions on ensuring EndInvoke() gets called on a list of IAsyncResult objects

    - by RobV
    So this question is regarding the .NET IAsyncResult design pattern and the necessity of calling EndInvoke, as covered in this question.

    Background: I have some code where I'm firing off potentially many asynchronous calls to a particular function and then waiting for all these calls to finish before using EndInvoke() to get back all the results.

    Question 1: I don't know whether any of the calls has encountered an exception until I call EndInvoke(), and in the event that an exception occurs in one of the calls, the entire method should fail and the exception gets wrapped into an API-specific exception and thrown upwards. So my first question is: what's the best way to ensure that the remainder of the async calls get properly terminated? Is a finally block which calls EndInvoke() on the remainder of the unterminated calls (and ignores any further exceptions) the best way to do this?

    Question 2: When I first fire off all my async calls, I then call WaitHandle.WaitAll() on the array of WaitHandle instances that I've got from my IAsyncResult instances. The method which is firing all these async calls has a timeout to adhere to, so I provide this to the WaitAll() method. Then I test whether all the calls have completed; if not, then the timeout must have been reached, so the method should also fail and throw another API-specific exception. So my second question is: what should I do in this case? I need to call EndInvoke() to terminate all these async calls before I throw the error, but at the same time I don't want the code to get stuck, since EndInvoke() is blocking. In theory, at least, if the WaitAll() call times out then all the async calls should themselves have timed out and thrown exceptions (thus completing the call), since they are also governed by a timeout, but that timeout is potentially different from the main timeout.


  • Declaring pointers; asterisk on the left or right of the space between the type and name?

    - by GenTiradentes
    I've seen mixed versions of this in a lot of code. (This applies to C and C++, by the way.) People seem to declare pointers in one of two ways, and I have no idea which one is correct, or if it even matters. The first way is to put the asterisk adjacent to the type name, like so:

        someType* somePtr;

    The second way is to put the asterisk adjacent to the name of the variable, like so:

        someType *somePtr;

    This has been driving me nuts for some time now. Is there any standard way of declaring pointers? Does it even matter how pointers are declared? I've used both declarations before, and I know that the compiler doesn't care which way it is. However, the fact that I've seen pointers declared in two different ways leads me to believe that there's a reason behind it. I'm curious if either method is more readable or logical in some way that I'm missing.
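
    A hedged aside on why the placement can matter in practice: in C and C++ the asterisk binds to the declarator, not to the type, which shows up as soon as several variables are declared in one statement. A minimal sketch:

        #include <cstdio>

        int main() {
            int* a, b;   // reads like two pointers, but only a is a pointer; b is a plain int
            int *c, *d;  // asterisk-per-declarator: both are pointers, and the grouping is visible

            int x = 42;
            a = &x;
            c = &x;
            d = &x;
            b = *a;      // legal precisely because b is an int, not an int*
            std::printf("%d %d %d %d\n", *a, b, *c, *d);
            return 0;
        }

    This is one common argument for the someType *somePtr style; the someType* style reads better to people who think of "pointer to someType" as a single type. Both are accepted by every compiler.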


  • Preventing symbols from being stripped in IBM Visual Age C/C++ for AIX

    - by smountcastle
    I'm building a shared library which I dynamically load (using dlopen) into my AIX application, using IBM's VisualAge C/C++ compiler. Unfortunately, it appears to be stripping out necessary symbols:

        rtld: 0712-002 fatal error: exiting.
        rtld: 0712-001 Symbol setVersion__Q2_3CIF17VersionReporterFRCQ2_3std12basic_stringXTcTQ2_3std11char_traitsXTc_TQ2_3std9allocatorXTc__ was referenced from module ./object/AIX-6.1-ppc/plugins/plugin.so(), but a runtime definition of the symbol was not found.

    Both the shared library and the application which loads it compile/link against the static library which contains the VersionReporter mentioned in the error message. To link the shared library I'm using these options:

        -bM:SRE -bnoentry -bexpall

    To link the application, I'm using this option:

        -brtl

    Is there an option I can use to prevent this symbol from being stripped in the application? I've tried using -nogc as stated in the IBM docs, but that causes either the shared library to be in an invalid format or the application to fail to link (depending on which one I use it with).


  • How to redefine symbol names in objects with RVCT?

    - by Batuu
    I currently develop a small OS for an embedded platform based on an ARM Cortex-M3 microcontroller. The OS provides an API for customer application development. The OS kernel and the API are compiled into a static lib by the ARMCC compiler, and the customer can link his application against it. The lib and the object files it contains expose the complete list of symbols used in the kernel. To "protect" the kernel and its inner state from external hooking into obvious variables and functions, I would like to do some easy obfuscation by renaming the symbols randomly. The GNU binutils seem to support this by calling objcopy with the --redefine-sym flag, but the GNU binutils cannot read ARMCC/RVCT objects. Is there any solution for doing this kind of obfuscation with RVCT?


  • C++ class is not being included properly.

    - by ravloony
    Hello all, I have a problem which is either something I have completely failed to understand, or very strange. It's probably the first one, but I have spent the whole afternoon googling with no success, so here goes... I have a class called Schedule, which has as a member a vector of Room. However, when I compile, using cmake or even by hand, I get the following:

        In file included from schedule.cpp:1:
        schedule.h:13: error: ‘Room’ was not declared in this scope
        schedule.h:13: error: template argument 1 is invalid
        schedule.h:13: error: template argument 2 is invalid
        schedule.cpp: In constructor ‘Schedule::Schedule(int, int, int)’:
        schedule.cpp:12: error: ‘Room’ was not declared in this scope
        schedule.cpp:12: error: expected ‘;’ before ‘r’
        schedule.cpp:13: error: request for member ‘push_back’ in ‘((Schedule*)this)->Schedule::_sched’, which is of non-class type ‘int’
        schedule.cpp:13: error: ‘r’ was not declared in this scope

    Here are the relevant bits of code:

        #include <vector>
        #include "room.h"

        class Schedule
        {
        private:
            std::vector<Room> _sched;  // line 13
            int _ndays;
            int _nrooms;
            int _ntslots;

        public:
            Schedule();
            ~Schedule();
            Schedule(int nrooms, int ndays, int ntslots);
        };

        Schedule::Schedule(int nrooms, int ndays, int ntslots)
            : _ndays(ndays), _nrooms(nrooms), _ntslots(ntslots)
        {
            for (int i = 0; i < nrooms; i++)
            {
                Room r(ndays, ntslots);
                _sched.push_back(r);
            }
        }

    In theory, g++ should compile an included class before the one that includes it. There are no circular dependencies here; it's all straightforward stuff. I am completely stumped on this one, which is what leads me to believe that I must be missing something. :-D
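
    As an aside, this exact symptom (a header is included, yet its class is "not declared in this scope") is often caused by something inside the included header rather than by compilation order, e.g. a header-guard collision or a circular include. Since room.h isn't shown in the question, the following is only a hypothetical sketch of what a well-formed room.h would look like:

        // room.h (hypothetical reconstruction)
        // If this guard macro were accidentally reused by another header, or if
        // room.h included schedule.h back, the Room definition would silently
        // disappear from Schedule's translation unit, producing exactly the
        // "'Room' was not declared in this scope" cascade above.
        #ifndef ROOM_H
        #define ROOM_H

        class Room
        {
        public:
            Room(int ndays, int ntslots);
        };

        #endif // ROOM_H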


  • Are there any C++ tools that detect misuse of static_cast, dynamic_cast, and reinterpret_cast?

    - by chrisp451
    The answers to the following question describe the recommended usage of static_cast, dynamic_cast, and reinterpret_cast in C++: http://stackoverflow.com/questions/332030/when-should-static-cast-dynamic-cast-and-reinterpret-cast-be-used

    Do you know of any tools that can be used to detect misuse of these kinds of casts? Would a static analysis tool like PC-Lint or Coverity Static Analysis do this? The particular case that prompted this question was the inappropriate use of static_cast to downcast a pointer, which the compiler does not warn about. I'd like to detect this case using a tool, and not assume that developers will never make this mistake.
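
    For concreteness, a minimal sketch of the case described (all names hypothetical): a static_cast downcast compiles without any diagnostic even when the object's dynamic type doesn't match, while dynamic_cast performs a runtime check:

        #include <iostream>

        struct Base    { virtual ~Base() {} };
        struct Derived : Base { int extra; };
        struct Other   : Base {};

        int main() {
            Base *b = new Other();                    // really an Other, not a Derived

            Derived *d1 = static_cast<Derived *>(b);  // compiles silently; using d1->extra is undefined behavior
            Derived *d2 = dynamic_cast<Derived *>(b); // checked at runtime; yields a null pointer here

            std::cout << (d2 == 0 ? "dynamic_cast caught it" : "unexpected") << std::endl;
            delete b;
            return 0;
        }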


  • Which format do static library (*.lib) files use? Where can I find "official" specifications of *.lib files?

    - by claws
    Just now I found that static libraries on *nix systems, in other words *.a libraries, are nothing but archives of relocatables (*.o files) in ar format. What about static libraries (*.lib files) on Windows? Which format are they in? I found an article, http://www.microsoft.com/msj/0498/hood0498.aspx, which explains the *.lib file structure. But where can I find "official" specifications of the *.lib file structure/format? Other than ar.exe of MinGW, is there any tool from Microsoft which extracts the relocatable objects from *.lib and *.a files? EDIT: I wonder why I'm unable to get an answer to this question. If there are no official specifications, then how do compiler (or, to be more correct, linker) writers work with *.lib files?


  • performSelector:withObject:afterDelay: not working from scrollViewDidZoom

    - by oldbeamer
    Hi all, I feel like I should know this, but I've been stumped for hours now and I've run out of ideas. The theory is simple: the user manipulates the zoom and positioning in a scroll view using a pinch. If they hold that pinch for a short period of time, the scroll view records the zoom level and content offsets. So I thought I'd start a performSelector:withObject:afterDelay: in the scrollViewDidZoom delegate method. If it expires, I record the settings. I delete the scheduled call every time scrollViewDidZoom is called and when the pinch gesture finishes. This is what I have:

        - (void)scrollViewDidZoom:(UIScrollView *)scrollView {
            NSLog(@"resetting timer");
            [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(positionLock) object:nil];
            [self performSelector:@selector(positionLock) withObject:nil afterDelay:0.4];
        }

        - (void)positionLock {
            NSLog(@"position locked");
        }

        - (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)scale {
            NSLog(@"deleting timer");
            [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(positionLock) object:nil];
        }

    This is the output:

        2010-05-19 22:43:01.931 resetting timer
        2010-05-19 22:43:01.964 resetting timer
        2010-05-19 22:43:02.231 resetting timer
        2010-05-19 22:43:02.253 resetting timer
        2010-05-19 22:43:02.269 resetting timer
        2010-05-19 22:43:02.298 resetting timer
        2010-05-19 22:43:05.399 deleting timer

    As you can see, the delay between the last and second-to-last events should have been more than enough for the delayed selector call to execute. If I replace performSelector:withObject:afterDelay: with plain old performSelector: I get this:

        2010-05-19 23:08:30.333 resetting timer
        2010-05-19 23:08:30.333 position locked
        2010-05-19 23:08:30.366 resetting timer
        2010-05-19 23:08:30.367 position locked
        2010-05-19 23:08:30.688 deleting timer

    Which isn't what I want, but it serves to show that it's only the delay that's causing it to not function, not the selector being undeclared in the header, misspelled, or any other reason. Any ideas as to why it's not working?


  • Opening Large (24 GB) File In C

    - by zacaj
    I'm trying to read in a 24 GB XML file in C, but it won't work. I'm printing out the current position using ftell() as I read it in, but once it gets to a big enough number, it goes back to a small number and starts over, never even getting 20% through the file. I assume this is a problem with the range of the variable that's used to store the position (long), which can go up to about 4,000,000,000 according to http://msdn.microsoft.com/en-us/library/s3f49ktz%28VS.80%29.aspx, while my file is 25,000,000,000 bytes in size. A long long should work, but how would I change what my compiler (Cygwin/mingw32) uses, or get it to have fopen64?
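
    A hedged sketch of the usual fix: switch from ftell/fseek (long offsets) to an explicitly 64-bit API. On MinGW the MSVCRT functions _ftelli64/_fseeki64 are available; on POSIX-style toolchains, building with -D_FILE_OFFSET_BITS=64 widens off_t so ftello/fseeko cover the whole file. Which branch applies depends on the exact toolchain, so treat the details as assumptions to verify:

        #include <stdio.h>

        int main(void) {
            FILE *f = fopen("huge.xml", "rb");  /* hypothetical file name */
            if (!f) { perror("fopen"); return 1; }

            char buf[1 << 16];
            while (fread(buf, 1, sizeof buf, f) == sizeof buf) {
        #ifdef _WIN32
                long long pos = _ftelli64(f);          /* 64-bit position via MSVCRT */
        #else
                long long pos = (long long)ftello(f);  /* 64-bit with -D_FILE_OFFSET_BITS=64 */
        #endif
                if (pos % (1LL << 30) < (long long)sizeof buf)
                    printf("passed %lld bytes\n", pos); /* prints roughly once per GiB */
            }

            fclose(f);
            return 0;
        }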


  • Boost::Container::Vector with Enum Template Argument - Not Legal Base Class

    - by CuppM
    Hi, I'm using Visual Studio 2008 with the Boost v1.42.0 library. If I use an enum as the template argument, I get a compile error when adding a value using push_back(). The compiler error is 'T': is not a legal base class, and the location of the error is move.hpp line 79.

        #include <boost/interprocess/containers/vector.hpp>

        class Test
        {
        public:
            enum Types
            {
                Unknown = 0,
                First = 1,
                Second = 2,
                Third = 3
            };

            typedef boost::container::vector<Types> TypesVector;
        };

        int main()
        {
            Test::TypesVector o;
            o.push_back(Test::First);
            return 0;
        }

    If I use a std::vector instead, it works. And if I resize the Boost version first and then set the values using the [] operator, it also works. Is there some way to make this work using push_back()?
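
    Building on the workaround already observed in the question (resize followed by operator[] works), a hedged generic helper can package it up; this is only a sketch that sidesteps the push_back path, not a fix for the underlying move.hpp issue:

        // Hypothetical helper: append by resize-then-assign, avoiding the
        // push_back overload that trips Boost's move emulation on enum types.
        template <typename Vector>
        void append_value(Vector &v, typename Vector::value_type value)
        {
            v.resize(v.size() + 1);   // default-construct one new slot
            v[v.size() - 1] = value;  // plain assignment, no move emulation involved
        }

        // usage: append_value(o, Test::First);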


  • How many instructions does it take to access a pointer in C?

    - by Derek
    Hi all, I am trying to figure out how many clock cycles or total instructions it takes to access a pointer in C. I don't think I know how to figure out, for example, p->x = d->a + f->b. I would assume two loads per pointer, guessing that there would be a load for the pointer and a load for the value. So in this operation, the pointer resolution would be a much larger factor than the actual addition, as far as trying to speed this code up goes, right? This may depend on the compiler and the architecture implemented, but am I on the right track? I have seen some code where each value used in, say, 3 additions came from a

        f2->sum = p1->p2->p3->x + p1->p2->p3->a + p1->p2->p3->m;

    type of structure, and I am trying to determine how bad this is.
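
    To make the question concrete, a sketch with hypothetical struct names: each -> in the chained expression is a dependent load, so caching the common prefix lets the compiler issue three loads from one base pointer instead of re-walking the chain, assuming aliasing rules let it keep the value in a register:

        /* Hypothetical layout matching the chained expression in the question. */
        struct P3 { int x, a, m; };
        struct P2 { struct P3 *p3; };
        struct P1 { struct P2 *p2; };
        struct F2 { int sum; };

        void add_chained(struct F2 *f2, struct P1 *p1)
        {
            /* re-walks p1->p2->p3 three times: up to nine dependent loads */
            f2->sum = p1->p2->p3->x + p1->p2->p3->a + p1->p2->p3->m;
        }

        void add_hoisted(struct F2 *f2, struct P1 *p1)
        {
            /* walks the chain once, then three loads off one base pointer */
            struct P3 *p = p1->p2->p3;
            f2->sum = p->x + p->a + p->m;
        }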


  • Understanding Ruby Enumerable#map (with more complex blocks)

    - by mstksg
    Let's say I have a function

        def odd_or_even n
          if n%2 == 0
            return :even
          else
            return :odd
          end
        end

    And I had a simple enumerable array

        simple = [1,2,3,4,5]

    And I ran it through map, with my function, using a do-end block:

        simple.map do |n|
          odd_or_even(n)
        end
        # => [:odd,:even,:odd,:even,:odd]

    How could I do this without, say, defining the function in the first place? For example,

        # does not work
        simple.map do |n|
          if n%2 == 0
            return :even
          else
            return :odd
          end
        end
        # Desired result:
        # => [:odd,:even,:odd,:even,:odd]

    is not valid Ruby, and the compiler gets mad at me for even thinking about it. But how would I implement an equivalent sort of thing that works?


  • Extract Generic types from extended Generic

    - by Brigham
    I'm trying to refactor a class and set of subclasses where the M type doesn't extend anything, even though we know it has to be a subclass of a certain type. That type is parametrized, and I would like its parametrized types to be available to subclasses that already have values for M. Is there any way to define this class without having to include the redundant K and V generic types in the parameter list? I'd like to be able to have the compiler infer them from whatever M is mapped to by subclasses.

        public abstract class NewParametrized<K, V, M extends SomeParametrized<K, V>> {
            public void someMethodThatTakesKAndV(K k1, V v1) {
            }
        }

    In other words, I'd like the class declaration to look something like:

        public class NewParametrized<M extends SomeParametrized<K, V>> {

    And K and V's types would be inferred from the definition of M.


  • About enumerations in Delphi and C++ in 64-bit environments

    - by sum1stolemyname
    I recently had to work around the different default sizes used for enumerations in Delphi and C++, since I have to use a C++ DLL from a Delphi application. One function returns an array of structs (or records, in Delphi), the first element of which is an enum. To make this work, I use packed records (or aligned(1) structs). However, since Delphi selects the size of an enum variable dynamically by default and uses the smallest datatype possible (it was a byte in my case), while C++ uses an int for enums, my data was not interpreted correctly. Delphi offers a compiler switch to work around this, so the declaration of the enum becomes:

        {$Z4}
        TTypeofLight = (
          V3d_AMBIENT,
          V3d_DIRECTIONAL,
          V3d_POSITIONAL,
          V3d_SPOT
        );
        {$Z1}

    My questions are: What will become of my structs when they are compiled on/for a 64-bit environment? Does the default C++ int grow to 8 bytes? Are there other memory alignment / data type size modifications (other than pointers)?
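
    On the C++ side, a complementary option exists if a C++11-capable compiler can be assumed (which may not hold for the toolchain in question): pinning the enum's underlying type makes its size independent of both the compiler's defaults and the 32/64-bit target. Note also that on the common 64-bit data models (LP64 and LLP64) a C++ int stays 4 bytes, so the {$Z4} pairing keeps working. A sketch:

        #include <cstdint>

        // Hypothetical C++ mirror of the Delphi declaration, forced to the
        // 4-byte representation that Delphi's {$Z4} directive selects.
        enum TTypeofLight : std::int32_t {
            V3d_AMBIENT,
            V3d_DIRECTIONAL,
            V3d_POSITIONAL,
            V3d_SPOT
        };

        static_assert(sizeof(TTypeofLight) == 4,
                      "enum must stay 4 bytes for the Delphi/C++ ABI to line up");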


  • How to create a new variant in bjam

    - by steve jaffe
    I've tried reading the documentation, but it is rather impenetrable, so I'm hoping someone may have a simple answer. I want to define a new 'variant', based on 'debug', which just adds some macro definitions to the compiler command line, e.g. "-DSOMEMACRO". I think I may be able to do this as a "sub-variant" of debug, or else just define a new variant copying 'debug', but I'm not even sure where to do this. It looks like feature.jam in $BOOST_BUILD_DIR/build may be the place. Perhaps what I really want is simply a new 'feature', but it's still not clear to me exactly what I need to do and where, and I don't know if a 'feature' allows me to direct the build products to a different directory than the 'debug' build. Any suggestions will be appreciated. (In case you're wondering, I have to use bjam since it has been adopted as our corporate standard.)


  • Why might a C++ virtual function defined in a header not be compiled and linked into the vtable?

    - by 0xDEAD BEEF
    The situation is as follows. I have a shared library which contains a class definition:

        class QueueClass : public IClassInterface {
            virtual void LOL() { /* do some magic */ }
        };

    My shared library initializes a class member:

        QueueClass *globalMember = new QueueClass();

    My shared library exports a C function which returns a pointer to globalMember:

        void *getGlobalMember(void) { return globalMember; }

    My application uses globalMember like this:

        ((IClassInterface *)getGlobalMember())->LOL();

    Now the very uber stuff: if I do not reference LOL from the shared library, then LOL is not linked in, and calling it from the application raises an exception. Reason: the vtable contains null in place of the pointer to the LOL() function. When I move the LOL() definition from the .h file to the .cpp file, it suddenly appears in the vtable and everything works just great. What explains this behavior? (gcc compiler + ARM architecture)
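
    For context, a hedged sketch of the usual explanation and fix: GCC emits a class's vtable in the translation unit that defines its first non-inline virtual member function (the "key function"). When every virtual is defined in the header, there is no such anchor; the vtable is emitted weakly wherever it's needed, and an unreferenced copy in the shared library can be discarded. Assuming that is what's happening here, giving the class one out-of-line virtual member pins the vtable in the library:

        // Minimal stand-in for the interface in the question.
        struct IClassInterface {
            virtual void LOL() = 0;
            virtual ~IClassInterface() {}
        };

        // QueueClass.h
        class QueueClass : public IClassInterface {
        public:
            virtual ~QueueClass();              // declared here, defined out of line
            virtual void LOL() { /* magic */ }
        };

        // QueueClass.cpp -- the destructor becomes the key function, so GCC
        // emits the vtable (with LOL's slot filled in) in this translation unit.
        QueueClass::~QueueClass() {}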


  • Do COM DLL References Require Manual Disposal? If So, How?

    - by Drew
    I have written some code in VB that verifies that a particular port in the Windows Firewall is open, and opens one otherwise. The code uses references to three COM DLLs. I wrote a WindowsFirewall class, which Imports the primary namespace defined by the DLLs. Within members of the WindowsFirewall class I construct some of the types defined by the referenced DLLs. The following code isn't the entire class, but demonstrates what I am doing:

        Imports NetFwTypeLib

        Public Class WindowsFirewall
            Public Shared Function IsFirewallEnabled As Boolean
                Dim icfMgr As INetFwMgr
                icfMgr = CType(System.Activator.CreateInstance(Type.GetTypeFromProgID("HNetCfg.FwMgr")), INetFwMgr)

                Dim profile As INetFwProfile
                profile = icfMgr.LocalPolicy.CurrentProfile

                Dim fIsFirewallEnabled As Boolean
                fIsFirewallEnabled = profile.FirewallEnabled

                Return fIsFirewallEnabled
            End Function
        End Class

    I do not reference COM DLLs very often. I have read that unmanaged code may not be cleaned up by the garbage collector, and I would like to know how to make sure that I have not introduced any memory leaks. Please tell me (a) whether I have introduced a memory leak, and (b) how I may clean it up. (My theory is that the icfMgr and profile objects do allocate memory that remains unreleased until after the application closes. I am hopeful that setting their references equal to Nothing will mark them for garbage collection, since I can find no other way to dispose of them. Neither one implements IDisposable, and neither contains a Finalize method. I suspect they may not even be relevant here, and that both of those methods of releasing memory only apply to .NET types.)


  • Is Stream.Write thread-safe?

    - by Mike Spross
    I'm working on a client/server library for a legacy RPC implementation and was running into issues where the client would sometimes hang when waiting to receive a response message to an RPC request message. It turns out the real problem was in my message framing code (I wasn't handling message boundaries correctly when reading data off the underlying NetworkStream), but it also made me suspicious of the code I was using to send data across the network, specifically in the case where the RPC server sends a large amount of data to a client as the result of a client RPC request.

    My send code uses a BinaryWriter to write a complete "message" to the underlying NetworkStream. The RPC protocol also implements a heartbeat algorithm, where the RPC server sends out PING messages every 15 seconds. The pings are sent out by a separate thread, so, at least in theory, a ping can be sent while the server is in the middle of streaming a large response back to a client. Suppose I have a Send method as follows, where stream is a NetworkStream:

        public void Send(Message message)
        {
            // Write the message to a temporary stream so we can send it all at once.
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);

            // Write the serialized message to the stream. The BinaryWriter is a
            // little redundant in this simplified example, but it's here because
            // the production code uses it.
            byte[] data = tempStream.ToArray();
            BinaryWriter bw = new BinaryWriter(stream);
            bw.Write(data, 0, data.Length);
            bw.Flush();
        }

    So the question I have is: is the call to bw.Write (and by implication the call to the underlying Stream's Write method) atomic? That is, if a lengthy Write is still in progress on the sending thread and the heartbeat thread kicks in and sends a PING message, will that thread block until the original Write call finishes, or do I have to add explicit synchronization to the Send method to prevent the two Send calls from clobbering the stream?


  • Problem with array of elements of a structure type

    - by kobac
    I'm writing an app in Visual Studio C++ and I have a problem with assigning values to the elements of an array whose elements are of a structure type. The compiler reports a syntax error for the assignment part of the code. Is it possible to assign to elements of an array of structure type this way?

        typedef struct
        {
            CString x;
            double y;
        } Point;

        Point p[3];
        p[0] = {"first", 10.0};   // syntax error here
        p[1] = {"second", 20.0};
        p[2] = {"third", 30.0};
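
    As a hedged aside on what the language actually permits here: brace-enclosed lists are only valid at the point of definition (before C++11), so either initialize the array in the declaration or assign member by member. A sketch, with CString swapped for const char* purely to keep it self-contained:

        struct Point {
            const char *x;   // stand-in for CString in this sketch
            double y;
        };

        int main() {
            // Option 1: aggregate initialization at the point of definition.
            Point p[3] = { {"first", 10.0}, {"second", 20.0}, {"third", 30.0} };

            // Option 2: member-wise assignment after the definition.
            p[0].x = "first";
            p[0].y = 10.0;
            return 0;
        }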


  • Technology and language for stable Digital Audio Workstation development

    - by Kill KRT
    Hi, I'm designing a cross-platform (Windows/Linux/OS X) application, something like a digital audio workstation. I'd like to create software where users have a fully featured sequencer (multiple tracks with automation) and where it is possible to create instruments using a visual language (as in Pure Data/Max MSP). Ehm... I know that I've already posted a question about a related issue, but in order to decide which technology I should use, I think I'd better investigate more. I'm quite an experienced user of audio trackers (Renoise, Protracker, ...) and sequencers (FL Studio, Cubase 5), but I have never tried to develop even a basic audio tracker. I know just the basic theory of mixing sound and how a DSP basically works. My questions are: Where can I find a good tutorial/guide/book about this topic? Do you think using C# (with NAudio) could dramatically reduce performance? I know C++ would be the best choice, but I find C# so elegant and easy to build and port, while C++ is so powerful and fast, but has too many #defines and bad things for my taste! ;-) Thank you.


  • How well does Scala perform compared to Java?

    - by Teja Kantamneni
    The question actually says it all. The reason behind it is that I am about to start a small side project and want to do it in Scala. I have been learning Scala for the past month and am now comfortable working with it. The Scala compiler itself is pretty slow (unless you use fsc). But how well does the compiled code perform on the JVM? I previously worked with Groovy, and I had sometimes seen it perform better than Java. My question is: how well does Scala perform on the JVM compared to Java? I know Scala has some very good features (FP, a dynamic-language feel, static typing...), but at the end of the day we need performance...


  • Why does calling ISet<dynamic>.Contains() compile, but throw an exception at runtime?

    - by Andrey Breslav
    Please help me explain the following behavior:

        dynamic d = 1;
        ISet<dynamic> s = new HashSet<dynamic>();
        s.Contains(d);

    The code compiles with no errors/warnings, but at the last line I get the following exception:

        Unhandled Exception: Microsoft.CSharp.RuntimeBinder.RuntimeBinderException: 'System.Collections.Generic.ISet<object>' does not contain a definition for 'Contains'
           at CallSite.Target(Closure , CallSite , ISet`1 , Object )
           at System.Dynamic.UpdateDelegates.UpdateAndExecuteVoid2[T0,T1](CallSite site, T0 arg0, T1 arg1)
           at FormulaToSimulation.Program.Main(String[] args) in

    As far as I can tell, this is related to dynamic overload resolution, but the strange things are: (1) if the type of s is HashSet<dynamic>, no exception occurs; (2) if I use a non-generic interface with a method accepting a dynamic argument, no exception occurs. Thus, it looks like this problem is related particularly to generic interfaces, but I could not find out what exactly causes the problem. Is it a bug in the compiler/type system, or legitimate behavior?


  • What should I do if an IOException is thrown?

    - by Roman
    I have the following 3 lines of code:

        ServerSocket listeningSocket = new ServerSocket(earPort);
        Socket serverSideSocket = listeningSocket.accept();
        BufferedReader in = new BufferedReader(new InputStreamReader(serverSideSocket.getInputStream()));

    The compiler complains about all 3 of these lines, and its complaint is the same for each: unreported exception java.io.IOException. In more detail, these exceptions are thrown by new ServerSocket, accept(), and getInputStream(). I know I need to use try ... catch ..., but for that I need to know what these exceptions mean in each particular case (how should I interpret them). When do they happen? I mean, not in general, but in these 3 particular cases.

