Search Results

Search found 9889 results on 396 pages for 'behind the compiler'.


  • question regarding templatization of virtual function

    - by jan
    Hi, I am new to this forum, and sorry if I am repeating this question. I know that you cannot templatize a virtual function, and I do understand the concept behind it. But I still need a way to get around some errors I am getting. I am able to make my stuff work, but it doesn't look right to me. Here's the deal: I have a class called System.

        #include "Vector.h"

        class System
        {
            virtual void VectorToLocal(Vector<T>& global_dir,
                                       const Vector<T>* global_pos = 0) const = 0;
        };

        class UnresolvedSystem : public System
        {
            virtual void VectorToLocal(Vector<T>& global_dir,
                                       const Vector<T>* global_pos = 0) const
            {
                //do something
            }
        };

    In Vector.h:

        template<typename T>
        class Vector
        {
            //some functions
        };

    Now I want to templatize VectorToLocal in System to take just Vector<T>, but I cannot do it as it is a virtual function. I want a workaround. I know I can have VectorToLocal take each concrete Vector type as an argument, but I do not want to do that. Any help would be really appreciated. Thanks in advance, Jan

    Read the article

  • write to fifo/pipe from shell, with timeout

    - by Tim
    I have a pair of shell programs that talk over a named pipe. The reader creates the pipe when it starts, and removes it when it exits. Sometimes, the writer will attempt to write to the pipe between the time that the reader stops reading and the time that it removes the pipe.

        reader: while condition; do read data <$PIPE; do_stuff; done
        writer: echo $data >>$PIPE
        reader: rm $PIPE

    When this happens, the writer will hang forever trying to open the pipe for writing. Is there a clean way to give it a timeout, so that it won't stay hung until killed manually? I know I can do

        #!/bin/sh
        # timed_write <timeout> <file> <args>
        # like "echo <args> >> <file>" with a timeout
        TIMEOUT=$1
        shift;
        FILENAME=$1
        shift;
        PID=$$
        (X=0;
         # don't do "sleep $TIMEOUT", the "kill %1" doesn't kill the sleep
         while [ "$X" -lt "$TIMEOUT" ]; do sleep 1; X=$(expr $X + 1); done;
         kill $PID) &
        echo "$@" >>$FILENAME
        kill %1

    but this is kind of icky. Is there a shell builtin or command to do this more cleanly (without breaking out the C compiler)?

    Read the article

  • Noob question about a statement in a Java program

    - by happysoul
    I am a beginner to Java and was trying out this code puzzle from the book Head First Java, which I solved as follows and got the output correct :D

        class DrumKit
        {
            boolean topHat = true;
            boolean snare = true;

            void playSnare()
            {
                System.out.println("bang bang ba-bang");
            }

            void playTopHat()
            {
                System.out.println("ding ding da-ding");
            }
        }

        public class DrumKitTestDriver
        {
            public static void main(String[] args)
            {
                DrumKit d = new DrumKit();
                if(d.snare == true)
                {
                    d.playSnare();
                }
                d.playTopHat();
            }
        }

    Output is:

        bang bang ba-bang
        ding ding da-ding

    Now, the problem is that the code puzzle includes one code snippet that I did not use. It's as follows:

        d.snare = false;

    Even though I did not write it, I got the same output as the book. I am wondering why there is any need to set its value to false when we know the code is going to run without it. I am wondering what the coder had in mind; I mean, what could be the possible future use and motive behind doing this? I am sorry if it's a dumb question. I just want to know why or why not to include that particular statement. It's not like there's a loop or something that we need to come out of. Why is that statement there?

    Read the article

  • How To Get the Name of the Current Procedure/Function in Delphi (As a String)

    - by Andreas Rejbrand
    Is it possible to obtain the name of the current procedure/function as a string, within a procedure/function? I suppose there would be some "macro" that is expanded at compile-time. My scenario is this: I have a lot of procedures that are given a record and they all need to start by checking the validity of the record, and so they pass the record to a "validator procedure". The validator procedure raises an exception if the record is invalid, and I want the message of the exception to include not the name of the validator procedure, but the name of the function/procedure that called the validator procedure (naturally). That is, I have

        procedure ValidateStruct(const Struct: TMyStruct; const Sender: string);
        begin
          if <StructIsInvalid> then
            raise Exception.Create(Sender + ': Structure is invalid.');
        end;

    and then

        procedure SomeProc1(const Struct: TMyStruct);
        begin
          ValidateStruct(Struct, 'SomeProc1');
          ...
        end;

        ...

        procedure SomeProcN(const Struct: TMyStruct);
        begin
          ValidateStruct(Struct, 'SomeProcN');
          ...
        end;

    It would be somewhat less error-prone if I instead could write something like

        procedure SomeProc1(const Struct: TMyStruct);
        begin
          ValidateStruct(Struct, {$PROCNAME});
          ...
        end;

        ...

        procedure SomeProcN(const Struct: TMyStruct);
        begin
          ValidateStruct(Struct, {$PROCNAME});
          ...
        end;

    and then each time the compiler encounters a {$PROCNAME}, it simply replaces the "macro" with the name of the current function/procedure as a string literal.

    Read the article

  • SFINAE failing with enum template parameter

    - by zeroes00
    Can someone explain the following behaviour? (I'm using Visual Studio 2010.)

    Header:

        #pragma once
        #include <boost\utility\enable_if.hpp>
        using boost::enable_if_c;

        enum WeekDay {MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY};

        template<WeekDay DAY>
        typename enable_if_c< DAY==SUNDAY, bool >::type goToWork() {return false;}

        template<WeekDay DAY>
        typename enable_if_c< DAY!=SUNDAY, bool >::type goToWork() {return true;}

    Source:

        bool b = goToWork<MONDAY>();

    Compiling this gives

        error C2770: invalid explicit template argument(s) for 'enable_if_c<DAY!=6,bool>::type goToWork(void)'
        error C2770: invalid explicit template argument(s) for 'enable_if_c<DAY==6,bool>::type goToWork(void)'

    But if I change the function template parameter from the enum type WeekDay to int, it compiles fine:

        template<int DAY>
        typename enable_if_c< DAY==SUNDAY, bool >::type goToWork() {return false;}

        template<int DAY>
        typename enable_if_c< DAY!=SUNDAY, bool >::type goToWork() {return true;}

    Also, normal function template specialization works fine, no surprises there:

        template<WeekDay DAY> bool goToWork() {return true;}
        template<> bool goToWork<SUNDAY>() {return false;}

    To make things even weirder, if I change the source file to use any other WeekDay than MONDAY or TUESDAY, i.e.

        bool b = goToWork<THURSDAY>();

    the error changes to this:

        error C2440: 'specialization' : cannot convert from 'int' to 'const WeekDay'
        Conversion to enumeration type requires an explicit cast (static_cast, C-style cast or function-style cast)

    Read the article

  • Why doesn't it work to write this NSMutableArray to a plist?

    - by Emil
    Hey, I am trying to write an NSMutableArray to a plist. The compiler does not show any errors, but it does not write to the plist anyway. I have tried this on a real device too, not just the Simulator. Basically, what this code does is that when you click the accessoryView of a UITableViewCell, it gets the indexPath pressed, edits an NSMutableArray and tries to write that NSMutableArray to a plist. It then reloads the arrays mentioned (from multiple plists) and reloads the data in a UITableView from the arrays. Code:

        NSIndexPath *indexPath = [table indexPathForRowAtPoint:[[[event touchesForView:sender] anyObject] locationInView:table]];

        [arrayFav removeObjectAtIndex:[arrayFav indexOfObject:[NSNumber numberWithInt:[[arraySub objectAtIndex:indexPath.row] intValue]]]];

        NSString *rootPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
        NSString *plistPath = [rootPath stringByAppendingPathComponent:@"arrayFav.plist"];
        NSLog(@"%@ - %@", rootPath, plistPath);

        [arrayFav writeToFile:plistPath atomically:YES];

        // Reloads data into the arrays
        [self loadDataFromPlists];

        // Reloads data in tableView from arrays
        [tableFarts reloadData];

    Read the article

  • ASP.Net MVC vs ASP.Net for Complex workflows

    - by Grant Sutcliffe
    I have just become involved in migrating a series of complex workflows with InfoPath UIs to Web-based UIs. I am new to ASP.Net MVC but have started to evaluate it, versus classic ASP.Net, as the technology for the job. As is typical of most workflows, in each state there are a number of business rules that determine (a) who can view what content; (b) who can edit what content; (c) what the user action options might be (Edit; Reject; Approve), etc. In essence, there is a lot of logic that needs to be applied to each request before presenting the appropriate view. Being more experienced in ASP.Net, I know that presenting the form(s) as required can be easily achieved through code-behind pages (enable / disable / hide fields). I have not seen how this can be achieved with ASP.Net MVC (but am realising that new thinking is required of me when working with MVC: ‘give only the content on a particular View, plus limited user action options’). Therefore, if using ASP.Net MVC, it looks like I would need to create a lot of views. Much of the content in each view would be the same. Only field enabled status or buttons would differ in most instances for these views in each state. For example: Step01Initiate (has ‘Save’ button); Step01OriginatorView (has ‘Edit’ button); Step01OriginatorEdit (has ‘Save’ button); Step01Review (has ‘Accept’ / ‘Reject’ buttons); Step01ReviewReject (for reviewer notes; has ‘Save’ / ‘Cancel’ buttons). With workflows of up to six states, this would result in a lot of views. I can see the advantages of choosing ASP.Net MVC: (1) ‘thin’ views in terms of content; and (2) logic consolidation in controllers and different models. Am I thinking along the right lines in terms of applying MVC (‘plenty of views’), or is there a better way to achieve my goal (using ASP.Net MVC or classic ASP.Net)?
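
    A minimal sketch of the usual alternative to one-view-per-state (an illustration, not part of the original question): a single strongly typed view model carries the per-state flags, so one view, or a handful of partial views, can render the right fields and buttons. The class and property names below are hypothetical.

        // Hypothetical view model; the controller applies the business rules for
        // the current workflow state, sets the flags, and returns the same view.
        public class WorkflowStepViewModel
        {
            public string StepName { get; set; }         // e.g. "Step01Review"
            public bool CanEdit { get; set; }            // drives enabled/disabled fields
            public bool ShowSaveButton { get; set; }
            public bool ShowApproveButton { get; set; }
            public bool ShowRejectButton { get; set; }
            public string ReviewerNotes { get; set; }
        }

    The view then only checks the flags, which keeps the number of views close to the number of genuinely different layouts rather than the number of workflow states.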

    Read the article

  • Can't set Visible attribute in ASP.NET Panels

    - by RichW
    I am having trouble with the Visible attribute of an ASP.NET Panel control. I have a page that calls a database table and returns the results in a datagrid. Requirements: if some of the returned values are null, I need to hide the image that's next to it. I am using a Panel to determine whether to hide or show the image, but am having trouble with the statement:

        visible='<%# Eval("addr1") <> DBNull.Value %>'

    I have tried these as well:

        visible='<%# Eval("addr1") <> DBNull.Value %>'
        visible='<%# IIf(Eval("addr1") Is DbNull.Value, "False","True") %>'

    Code is below:

        <asp:TemplateField >
          <ItemTemplate>
            <%# Eval("Name")%>
            <p>
            <asp:Panel runat="server" ID="Panel1" visible='<%# Eval("addr1") <> DBNull.Value %>'>
              <asp:Image Id="imgHouse" runat="server" AlternateText="Address" SkinId="imgHouse"/>
            </asp:Panel>
            <%# Eval("addr1") %><p>
          </ItemTemplate>
        </asp:TemplateField>

    What am I doing wrong?

    Edit: If I use

        visible='<%# IIf(Eval("addr1") Is DbNull.Value, "False","True") %>'

    I get the following error:

        Compiler Error Message: CS1026: ) expected
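
    The CS1026 message is a C# compiler error, while both <> and IIf are VB.NET syntax, so the page language is presumably C# (an assumption; the post does not say). A minimal code-behind sketch of the same check in C#, assuming the grid is a GridView named GridView1 and reusing the Panel1/addr1 names from the question:

        // Sketch only: hides Panel1 in each data row whose addr1 is NULL.
        protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType != DataControlRowType.DataRow) return;

            Panel panel = (Panel)e.Row.FindControl("Panel1");
            object addr1 = DataBinder.Eval(e.Row.DataItem, "addr1");
            panel.Visible = addr1 != null && addr1 != DBNull.Value;
        }

    The inline-markup equivalent in C# would use the conditional (?:) operator rather than VB's IIf.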

    Read the article

  • WordPerfect programmers refusing to use anything but assembler

    - by Totophil
    There is a version of events (popularised by Joel Spolsky) attributing the demise of WordPerfect to a refusal of its programmers to use anything but assembler, which led to the delay of the first WPwin release and, as a result, eventually to losing the all-important battle with Microsoft. There are a few references to programming work being done in assembler in the autobiographical book "Almost Perfect" by W. E. Pete Peterson, who used to have a major influence on running the corporation. But those references go back to the early 80's, when WordPerfect was trying to gain a significant market share by defeating WordStar, and not the early nineties, when the battle with MS took place. I am looking for a second, independent source to confirm the assumption. Maybe someone who worked for WordPerfect Corporation at the time, who was close to the company, or who had a chance to see the source could clarify the issue. Your help is much appreciated, thanks! Please note that this question is not about any other theories or reasons behind WordPerfect's demise. I really just need to clarify whether they used assembler as a primary language for WPwin and (as a bonus really) whether there were discussions held within the corporation about assembler being the right choice. Concisely: Did WPCorp use assembler as a primary language for WPwin? Were discussions held at the time amongst WP Corp staff about assembler being the right choice (was it a management or programmers' decision)?

    Read the article

  • Calling a SLSB with Seam security from a servlet

    - by wilth
    Hello, I have an existing application written in SEAM that uses SEAM Security (http://docs.jboss.org/seam/2.1.1.GA/reference/en-US/html/security.html). In a stateless EJB, I might find something like this:

        @In Identity identity;
        ...
        if(identity.hasRole("admin")) throw new AuthException();

    As far as I understand, Seam injects the Identity object from the SessionContext of the servlet that invokes the EJB (this happens "behind the scenes", since Seam doesn't really use servlets) and removes it after the call. Is this correct? Is it now possible to access this EJB from another servlet (in this case, that servlet is the server side of a GWT application)? Do I have to "inject" the correct Identity instance? If I don't do anything, Seam injects an instance, but doesn't correctly correlate the sessions and instances of Identity (so the instances of Identity are shared between sessions and sometimes calls get new instances etc.). Any help and pointers are very welcome - thanks! Technology: EJB3, Seam 2.1.2. The servlets are actually the server-side of a GWT app, although I don't think this matters much. I'm using JBoss 5.

    Read the article

  • Core Data - How to check if a managed object's properties have been deallocated?

    - by georryan
    I've created a program that uses core data and it works beautifully. I've since attempted to move all my core data methods calls and fetch routines into a class that is self contained. My main program then instantiates that class and makes some basic method calls into that class, and the class then does all the core data stuff behind the scenes. What I'm running into, is that sometimes I'll find that when I grab a managed object from the context, I'll have a valid object, but its properties have been deallocated, and I'll cause a crash. I've played with the zombies and looked for memory leaks, and what I have gathered is it seems that the run loop is probably responsible for deallocating the memory, but I'm not sure. Is there a way to determine if that memory has been deallocated and force the core data to get it back if I need to access it? My managedObjectContext never gets deallocated, and the fetchedResultsController never does, either. I thought maybe I needed to use the [managedObjectContext refreshObject:mergeData:] method, or the [managedObjectContext setRetainsRegisteredObjects:] method. Although, I'm under the impression that last one may not be the best bet since it will be more memory intensive (from what I understand). These errors only popped up when I moved the core data calls into another class file, and they are random when they show up. Any insight would be appreciated. -Ryan

    Read the article

  • Eclipse (Aptana) Typing Lag

    - by Zack
    Hello SO, I've been using Aptana for some time now, and as of recent I've been dealing with files that are really, really big (500+ lines of code, which is huge for me, being a novice developer). Whenever I deal with smaller files, I get that weird sensation that I'm "in front of" what's typing, but now I'm quite sure of it--there is a significant lag between when I type something and when I see the text appear on screen. I don't have this issue with Dreamweaver CS3, so I know my computer has the capability to edit these files without this happening, but Eclipse still lags. I also don't see when something is being deleted if I hold down backspace, I see the first few characters get deleted, but then everything "hangs." Once I release the backspace key, the characters that would've been shown deleting instantly vanish all at once. The same thing happens with the forward delete key. I'm beginning to think this is an issue with Java, since I have the same feeling that everything is slightly "behind me" when I'm using -any- Java application. The computer is an intel Pentium 4 3.2 GHz Prescott, with 2GB's of DDR400 RAM and a Radeon HD3650 graphics card. If anyone knows how to fix this lagging issue, I'm all ears (eyes?); if anyone can recommend a different IDE with capabilities similar to Aptana (I do Python, HTML, CSS and JS; I use Git for SCM), I'd be glad to give it a try. Thanks!

    Read the article

  • Silverlight - Get the ItemsControl of a DataTemplate

    - by user208662
    Hello, I have a Silverlight application that is using a DataGrid. Inside of that DataGrid I have a DataTemplate that is defined like the following:

        <Grid x:Name="myGrid" Tag="{Binding}" Loaded="myGrid_Loaded">
          <ItemsControl ItemsSource="{Binding MyItems}" Tag="{Binding}">
            <ItemsControl.ItemTemplate>
              <DataTemplate>
                <StackPanel Orientation="Horizontal">
                  <StackPanel Orientation="Horizontal" Width="138">
                    <TextBlock Text="{Binding Type}" />
                    <TextBox x:Name="myTextBox" TextChanged="myTextBox_TextChanged" />
                  </StackPanel>
                </StackPanel>
              </DataTemplate>
            </ItemsControl.ItemTemplate>
          </ItemsControl>
        </Grid>

    When a user enters text into the TextBox, the myTextBox_TextChanged event is fired. When that event gets fired, I would like to get the ItemsControl element that is the container for this TextBox. How do I get that ItemsControl from my event handler? Please note: because the ItemsControl is in the DataTemplate of the DataGrid, I don't believe I can just add an x:Name and reference it from my code-behind. Or is there a way to do that? Thank you!
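
    A minimal sketch of one common approach (not part of the original question): walk up the visual tree from the TextBox until an ItemsControl is reached. VisualTreeHelper lives in System.Windows.Media and is available in Silverlight; the handler name matches the XAML above.

        private void myTextBox_TextChanged(object sender, TextChangedEventArgs e)
        {
            // Start at the TextBox that raised the event and climb the visual tree.
            DependencyObject current = (TextBox)sender;
            while (current != null && !(current is ItemsControl))
                current = VisualTreeHelper.GetParent(current);

            ItemsControl itemsControl = current as ItemsControl;
            if (itemsControl != null)
            {
                // itemsControl is the ItemsControl declared in the DataTemplate above.
            }
        }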

    Read the article

  • DAL Layer : EF 4.0 or Normal Data access layer with Stored Procedure

    - by Harryboy
    Hello Experts,

    Application: I am working on one mid-to-large size application which will be used as a product, and we need to decide on our DAL layer. The application UI is in Silverlight and the DAL layer is going to be behind a service layer. We are also moving ahead with a domain model, so our DB tables and domain classes do not have the same structure, so patterns like Data Mapper and Repository will definitely come into the picture. I need to design the DAL layer considering the factors below, in priority order:

    1) Speed of development with above-average performance
    2) Maintenance
    3) Future support and stability of the technology
    4) Performance

    Limitations:

    1) As we need to strictly go ahead with Microsoft, we cannot use NHibernate or any other ORM except EF 4.0.
    2) We can use any code generation tool (it should be open source or very cheap), but it should only generate code in .NET, so there would not be any licensing issue on a per-copy basis.

    Questions: I read so many articles about EF 4.0; at the outset it looks like it is still lacking features compared to NHibernate, but it is considerably better than EF 1.0. So, do you feel that we should go ahead with EF 4.0, or should we stick to ADO.NET and use a code generation tool like CodeSmith or any other you feel is best? Also, I need to estimate what time it will take to port the application from EF 4.0 to ADO.NET if in future we get stuck with EF 4.0 over some feature or hit a serious performance issue. In the reverse case, if we go ahead and choose ADO.NET, what time will it take to switch to EF 4.0? Lastly, as I was going through the articles I found that the code-only approach (with POCO classes) seems best suited to our requirement, as switching from one technology to the other is really easy. Please share your thoughts and please guide me on the above questions.
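
    A minimal sketch of the Repository shape mentioned above (an illustration, not from the post); keeping the service layer against an interface like this is what bounds the cost of a later switch between EF 4.0 and ADO.NET. Customer and the member names are hypothetical.

        using System.Collections.Generic;

        // Hypothetical domain class, independent of the table layout.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // The service layer depends only on this interface; one implementation can
        // wrap an EF 4.0 ObjectContext, another can wrap ADO.NET stored procedures.
        public interface ICustomerRepository
        {
            Customer GetById(int id);
            IList<Customer> GetAll();
            void Add(Customer customer);
            void Remove(Customer customer);
        }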

    Read the article

  • How to force the build to be out of date, when a text file is modified?

    - by demoncodemonkey
    The Scenario: My project has a post-build phase set up to run a batch file, which reads a text file "version.txt". The batch file uses the information in version.txt to inject the DLL with a version block using this tool. The version.txt is included in my project to make it easy to modify. It looks a bit like this:

        @set #Description="TankFace Utility Library"
        @set #FileVersion="0.1.2.0"
        @set #Comments=""

    Basically the batch file renames this file to version.bat, calls it, then renames it back to version.txt afterwards.

    The Problem: When I modify version.txt (e.g. to increment the file version) and then press F7, the build is not seen as out-of-date, so the post-build step is not executed, so the DLL's version doesn't get updated. I really want to include the .txt file as an input to the build, but without anything actually trying to use it. If I #include the .txt file from a CPP file in the project, the compiler fails because it obviously doesn't understand what "@set" means. If I add /* ... */ comments around the @set commands, then the batch file has some syntax errors but eventually succeeds. But this is a poor solution, I think. So... how would you do it?

    Read the article

  • Silverlight databinding error

    - by Petezah
    I found an example online that explains how to perform databinding to a ListBox control using LINQ in WPF. The example works fine, but when I replicate the same code in Silverlight it doesn't work. Is there a fundamental difference between Silverlight and WPF that I'm not aware of? Here is an example of the XAML:

        <ListBox x:Name="listBox1">
          <ListBox.ItemTemplate>
            <DataTemplate>
              <StackPanel>
                <TextBlock Text="{Binding Name}" FontSize="18"/>
                <TextBlock Text="{Binding Role}" />
              </StackPanel>
            </DataTemplate>
          </ListBox.ItemTemplate>
        </ListBox>

    Here is an example of my code-behind:

        private void UserControl_Loaded(object sender, RoutedEventArgs e)
        {
            string[] names = new string[] { "Captain Avatar", "Derek Wildstar", "Queen Starsha" };
            string[] roles = new string[] { "Hero", "Captain", "Queen of Iscandar" };

            listBox1.ItemSource = from n in names
                                  from r in roles
                                  select new { Name = n, Role = r };
        }
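
    A sketch of the difference that usually bites here (not part of the original question): Silverlight's binding engine can only reflect over public types and properties, and anonymous types are internal, so {Binding Name} resolves to nothing. Binding to a small public class works; CrewMember is a made-up name, note the property is spelled ItemsSource in both WPF and Silverlight, and the query needs using System.Linq.

        // Hypothetical public type to replace the anonymous type.
        public class CrewMember
        {
            public string Name { get; set; }
            public string Role { get; set; }
        }

        private void UserControl_Loaded(object sender, RoutedEventArgs e)
        {
            string[] names = { "Captain Avatar", "Derek Wildstar", "Queen Starsha" };
            string[] roles = { "Hero", "Captain", "Queen of Iscandar" };

            listBox1.ItemsSource = (from n in names
                                    from r in roles
                                    select new CrewMember { Name = n, Role = r }).ToList();
        }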

    Read the article

  • Common practice for higher-order-polymorphism in scala

    - by raichoo
    Hi, I'm trying to grasp higher-order polymorphism in Scala by implementing a very basic interface that describes a monad, but I have come across a problem that I don't really understand. I implemented the same with C++ and the code looks like this:

        #include <iostream>

        template <typename T>
        class Value {
        private:
            T value;
        public:
            Value(const T& t) {
                this->value = t;
            }

            T get() {
                return this->value;
            }
        };

        template < template <typename> class Container >
        class Monad {
        public:
            template <typename A>
            Container<A> pure(const A& a);
        };

        template <template <typename> class Container>
        template <typename A>
        Container<A> Monad<Container>::pure(const A& a) {
            return Container<A>(a);
        }

        int main() {
            Monad<Value> m;
            std::cout << m.pure(1).get() << std::endl;
            return 0;
        }

    When trying to do the same with Scala I fail:

        class Value[T](val value: T)

        class Monad[Container[T]] {
          def pure[A](a: A): Container[A] = Container[A](a)
        }

        object Main {
          def main(args: Array[String]): Unit = {
            val m = new Monad[Value]
            m.pure(1)
          }
        }

    The compiler complains about:

        [raichoo@lain:Scala]:434> scalac highorder.scala
        highorder.scala:5: error: not found: value Container
            Container[A](a)
            ^
        one error found

    What am I doing wrong here? There seems to be a fundamental concept I don't seem to understand about Scala type constructors. Regards, raichoo

    Read the article

  • structure inside structure - c++ Error

    - by gamadeus
    First of all, the error I am getting is of the type:

        Request for member 's' of struct1.struct1::struct2, which is of non class type '__u32'

    where:

        struct struct1 {
            struct x struct2;
            struct x struct3;
            struct x struct4;
        };

    The usage is of the form:

        struct struct1 st1;
        st1.struct2.s = Value;

    Now my struct1 is:

        struct ip_mreq_source {
            struct in_addr imr_multiaddr;
            struct in_addr imr_sourceaddr;
            struct in_addr imr_interface;
        };

    struct 'x' is in_addr, where:

        typedef uint32_t in_addr_t;
        struct in_addr {
            in_addr_t s_addr;
        };

    and element 's' is the element s_addr in in_addr. The detailed error coming out of g++ (GCC 4.4.3) from the Android-based compiler:

        arm-linux-androideabi-g++ -MMD -MP -MF groupsock/GroupsockHelper.o.d.org -fpic -ffunction-sections -funwind-tables -fstack-protector -D__ARM_ARCH_5__ -D__ARM_ARCH_5T__ -D__ARM_ARCH_5E__ -D__ARM_ARCH_5TE__ -Wno-psabi -march=armv5te -mtune=xscale -msoft-float -fno-exceptions -fno-rtti -mthumb -Os -fomit-frame-pointer -fno-strict-aliasing -finline-limit=64 -Igroupsock/include -Igroupsock/../UsageEnvironment/include -Iandroid-ndk-r5b/sources/cxx-stl/system/include -Igroupsock -DANDROID -Wa,--noexecstack -DANDROID_NDK -Wall -fexceptions -O2 -DNDEBUG -g -Iandroid-8/arch-arm/usr/include -c groupsock/GroupsockHelper.cpp -o groupsock/GroupsockHelper.o && rm -f groupsock/GroupsockHelper.o.d && mv groupsock/GroupsockHelper.o.d.org groupsock/GroupsockHelper.o.d

        groupsock/GroupsockHelper.cpp: In function 'Boolean socketJoinGroupSSM(UsageEnvironment&, int, netAddressBits, netAddressBits)':
        groupsock/GroupsockHelper.cpp:427: error: request for member 's_addr' in 'imr.ip_mreq_source::imr_multiaddr', which is of non-class type '__u32'
        groupsock/GroupsockHelper.cpp:428: error: request for member 's_addr' in 'imr.ip_mreq_source::imr_sourceaddr', which is of non-class type '__u32'
        groupsock/GroupsockHelper.cpp:429: error: request for member 's_addr' in 'imr.ip_mreq_source::imr_interface', which is of non-class type '__u32'

    I am not sure what is causing the error. Any pointers would be great - no pun intended. Thanks!

    Read the article

  • AJAX - ASP.NET - Timer delay problem

    - by Julian
    Hi, I'm trying to make a web application where you see an AJAX countdown timer. Whenever I push a button, the countdown should go back to 30 and keep counting down. The problem is that whenever I push the button, the timer keeps counting down for a second or two, and most of the time after that the timer stays stuck on 30 for too long.

    WebForm code:

        <asp:UpdatePanel ID="UpdatePanel1" runat="server">
            <ContentTemplate>
                <asp:Label ID="Label1" runat="server" Text="geen verbinding"></asp:Label>
                <br />
                <asp:Button ID="Button1" runat="server" onclick="Button1_Click" Text="Button" />
                <br />
            </ContentTemplate>
            <Triggers>
                <asp:AsyncPostBackTrigger ControlID="Timer1" EventName="Tick" />
            </Triggers>
        </asp:UpdatePanel>
        <asp:Timer ID="Timer1" runat="server" Interval="1000" ontick="Timer1_Tick">
        </asp:Timer>
        </form>

    Code behind:

        static int timer = 30;

        protected void Page_Load(object sender, EventArgs e)
        {
            Label1.Text = timer.ToString();
        }

        protected void Timer1_Tick(object sender, EventArgs e)
        {
            timer--;
        }

        protected void Button1_Click(object sender, EventArgs e)
        {
            timer = 30;
        }

    I hope somebody knows what the problem is and whether there is any way to fix this. Thanks in advance!
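
    Two things commonly produce this kind of symptom, and both are visible in the snippet above: the static field is shared by every user and request, and Page_Load writes the label before Timer1_Tick or Button1_Click run, so the label always lags one step behind. A sketch of one possible rearrangement (an assumption-laden illustration, not a confirmed fix): keep the value per user in Session and update the label inside the handlers. The "countdown" key is made up.

        private int Countdown
        {
            get { return Session["countdown"] == null ? 30 : (int)Session["countdown"]; }
            set { Session["countdown"] = value; }
        }

        protected void Timer1_Tick(object sender, EventArgs e)
        {
            if (Countdown > 0) Countdown = Countdown - 1;   // per-user value, not shared
            Label1.Text = Countdown.ToString();
        }

        protected void Button1_Click(object sender, EventArgs e)
        {
            Countdown = 30;                                  // reset only this user's timer
            Label1.Text = Countdown.ToString();
        }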

    Read the article

  • Eclipse CDT: cannot debug or terminate application

    - by Paul Lammertsma
    I have Eclipse set up fairly nicely to run the G++ compiler through Cygwin. Even the character encoding is set up correctly! There still seems to be something wrong with my configuration: I can't debug. The pause button in the debug view is simply disabled, and no threads appear in my application tree. It seems that gdb is simply not communicating with Eclipse. Presently, I have the debug settings as follows:

        Debugger: "Cygwin gdb Debugger"
        GDB debugger: gdb
        GDB command file: .gdbinit
        Protocol: Default

    I should mention here that I have no idea what .gdbinit does; in my project it is merely an empty file. What is wrong with my configuration?

    Debugging: When attempting to terminate the application in debug mode, Eclipse displays the following error:

        Target request failed: failed to interrupt.

    I can't kill the process, either; I have to kill its parent gdb.exe, which in turn kills my application.

    Running: When running it normally, a bunch of kill.exes are called, doing nothing, while Eclipse displays the following error:

        Terminate failed.

    I can kill FaceDetector.exe from the task manager.

    Process Explorer: This is what it looks like in Process Explorer (debugging left, running right):

    Read the article

  • What does the subversion error "Could not read status line" mean?

    - by Jergason
    Exact duplicate: SVN: Could not read status line: connection was closed by server. This is not an exact duplicate: the other question was asking about getting the error in a specific situation, and the answer was vague at best. This is a fairly basic question, but it is driving me nuts. I have set up a brand new repository at beanstalk.com. They give me the url, http://.svn.beanstalkapp.com/blog. They also automatically create the tag, trunk and branches folders in the repository. I have checked out the trunk folder and used svn add to add the new file. I am trying to do my first commit, but I get this error:

        Commit failed (details follow):
        CHECKOUT of '/foo/!svn/bln/1': Could not read status line: connection was closed by server.
        (http://user_name@my_name.svn.beanstalkapp.com)

    What does this mean, and what causes it? I have googled for a definition of what "Could not read status line" means, but was unable to find anything explaining it.

    Edit: I was getting this error while trying to manipulate my repository from behind a firewall. I still don't know what was causing it, but I don't have this problem at home. Strangeness.

    Read the article

  • How close can I get C# to the performance of C++ for small intensive tasks?

    - by SLC
    I was thinking about the speed difference between C++ and C# being mostly about C# compiling to byte-code that is taken in by the JIT compiler (is that correct?) and all the checks C# does. I notice that it is possible to turn a lot of these off, both in the compile options and possibly through using the unsafe keyword, as unsafe code is not verifiable by the common language runtime. Therefore, if you were to write a simple console application in both languages that flipped an imaginary coin an infinite number of times and displayed the results to the screen every 10,000 or so iterations, how much speed difference would there be? I chose this because it's a very simple program. I'd like to test this, but I don't know C++ or have the tools to compile it. This is my C# version though:

        static void Main(string[] args)
        {
            unsafe
            {
                Random rnd = new Random();
                int heads = 0, tails = 0;
                while (true)
                {
                    if (rnd.NextDouble() > 0.5)
                        heads++;
                    else
                        tails++;

                    if ((heads + tails) % 1000000 == 0)
                        Console.WriteLine("Heads: {0} Tails: {1}", heads, tails);
                }
            }
        }

    Is the difference enough to warrant deliberately compiling sections of code "unsafe" or into DLLs that do not have some of the compile options like overflow checking enabled? Or does it go the other way, where it would be beneficial to compile sections in C++? I'm sure interop speed comes into play too, then. To avoid subjectivity, I reiterate the specific parts of this question as:

    1) Does C# have a performance boost from using unsafe code?
    2) Do the compile options such as disabling overflow checking boost performance, and do they affect unsafe code?
    3) Would the program above be faster in C++ or negligibly different?
    4) Is it worth compiling long intensive number-crunching tasks in a language such as C++, or using /unsafe for a bonus?

    Less subjectively, could I complete an intensive operation faster by doing this?
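
    A small sketch (not part of the original question) of how the C# side can be timed with System.Diagnostics.Stopwatch over a fixed number of flips. Note that the unsafe block above changes nothing for this particular loop: unsafe only relaxes the rules for pointer code, and there is none here. The iteration count is arbitrary.

        using System;
        using System.Diagnostics;

        static class CoinFlipTiming
        {
            static void Main()
            {
                const int flips = 100000000;        // arbitrary fixed workload
                Random rnd = new Random();
                int heads = 0, tails = 0;

                Stopwatch sw = Stopwatch.StartNew();
                for (int i = 0; i < flips; i++)
                {
                    if (rnd.NextDouble() > 0.5) heads++;
                    else tails++;
                }
                sw.Stop();

                Console.WriteLine("Heads: {0} Tails: {1} in {2} ms",
                                  heads, tails, sw.ElapsedMilliseconds);
            }
        }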

    Read the article

  • Moral fits the story or suggest me a nice moral?

    - by Gobi
    A 25-year-old son was sitting beside his old father in a train one day. When the train was about to leave, all the passengers started settling down in their seats. The son was filled with joy and anxiety. He was seated by the window. He put his hand out, felt the breeze and screamed, “Papa, look at all the trees, they are moving behind”. The old father smiled and admired his son’s feelings. Beside the old man, a couple was also travelling and observed this strange behavior. They found something awkward and childish in the behavior of this 25-year-old man. All of a sudden, the son shouted again, “Papa, see! The clouds are moving about; there is a pond down there and many cows are drinking its water”. It soon started drizzling. Once again, the young man felt excited and said, “Papa, I can see and feel the rain drops touching my hand”. The couple, seeing this and feeling concerned, asked the old man, “Why don’t you consult a good doctor and treat your son; don’t you find something abnormally different in him?” The old man replied, “Yes, I have provided the best treatment for my only boy. We are just returning from the hospital. I am happy, for today is the day he has received his sense of sight. It’s the first time my son is seeing and relishing these little wonders which we have been watching and ignoring in our routine life!” The couple had no words to reply and felt sorry for their remarks. Moral of the story: “Don’t judge a book by its cover.” Does this moral fit the story, or can you suggest me a nicer moral for it? :)

    Read the article

  • How do attribute classes work?

    - by AaronLS
    My searches keep turning up only guides explaining how to use and apply attributes to a class. I want to learn how to create my own attribute classes and the mechanics of how they work. How are attribute classes instantiated? Are they instantiated when the class they are applied to is instantiated? Is one instantiated for each instance of the class it is applied to? E.g. if I apply the SerializableAttribute class to a MyData class, and I instantiate 5 MyData instances, will there be 5 instances of the SerializableAttribute class created behind the scenes? Or is there just one instance shared between all of them? How do attribute class instances access the class they are associated with? How does a SerializableAttribute class access the class it is applied to so that it can serialize its data? Does it have some sort of SerializableAttribute.ThisIsTheInstanceIAmAppliedTo property? :) Or does it work in the reverse direction, in that whenever I serialize something, the Serialize function I pass the MyData instance to will reflectively go through the attributes and find the SerializableAttribute instance?
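
    A short sketch (not part of the original question) illustrating the mechanics being asked about: attribute instances are not created when the decorated class is instantiated; they are constructed from metadata each time they are read back through reflection, and the attribute object holds no reference to the type it decorates. The consumer (for example a serializer) reflects over the type and asks for the attribute, not the other way around. AuthorAttribute and MyData are made-up names.

        using System;

        [AttributeUsage(AttributeTargets.Class)]
        public class AuthorAttribute : Attribute
        {
            public string Name { get; private set; }
            public AuthorAttribute(string name) { Name = name; }
        }

        [Author("SomeAuthor")]
        public class MyData { }

        public static class AttributeDemo
        {
            public static void Main()
            {
                // Creating MyData instances never touches AuthorAttribute.
                MyData first = new MyData();
                MyData second = new MyData();

                // The attribute object is constructed here, from the type's metadata;
                // a second GetCustomAttribute call would construct another instance.
                AuthorAttribute attr = (AuthorAttribute)Attribute.GetCustomAttribute(
                    typeof(MyData), typeof(AuthorAttribute));
                Console.WriteLine(attr.Name);           // prints "SomeAuthor"
                Console.WriteLine(first != second);     // two separate MyData objects
            }
        }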

    Read the article

  • question/problem regarding assigning an array of char *

    - by Fantastic Fourier
    Hi, I'm working with C and I have a question about assigning pointers.

        struct foo {
            int _bar;
            char * _car[MAXINT]; // this is meant to be an array of char * so that it can hold pointers to names of cars
        };

        int foofunc (void * arg) {
            int bar;
            char * car[MAXINT];
            struct foo thing = (struct foo *) arg;
            bar = arg->_bar;  // this works fine
            car = arg->_car;  // this gives compiler errors of incompatible types in assignment
        }

    car and _car have the same declaration, so why am I getting an error about incompatible types? My guess is that it has something to do with them being pointers (because they are pointers to arrays of char *, right?) but I don't see why that is a problem. When I declared char * car; instead of char * car[MAXINT]; it compiled fine, but I don't see how that would be useful to me later when I need to access certain info using an index; it would be very annoying to access that info later. In fact, I'm not even sure if I am going about this the right way; maybe there is a better way to store a bunch of strings instead of using an array of char *?

    Read the article
