Search Results

Search found 7666 results on 307 pages for 'pointer to member'.


  • Help matching fields between two classes

    - by Michael
    I'm not too experienced with Java yet, and I'm hoping someone can steer me in the right direction, because right now I feel like I'm just beating my head against a wall...

    The first class is called MeasuredParams, and it has 40+ numeric fields (height, weight, waistSize, wristSize - some int, but mostly double). The second class is a statistical classifier called Classifier, which has been trained on a subset of the MeasuredParams fields. The names of the fields the Classifier was trained on are stored, in order, in an array called reqdFields.

    What I need to do is load a new array, toClassify, with the values stored in the fields from MeasuredParams that match the field list (including order) found in reqdFields. I can make any changes necessary to the MeasuredParams class, but I'm stuck with Classifier as it is.

    My brute-force approach was to get rid of the fields in MeasuredParams and use an ArrayList instead, storing the field names in an enum to act as an index pointer. Then I would loop through the reqdFields list one element at a time, find the matching name in the enum to get the correct position in the ArrayList, load the value stored at that position into toClassify, and then continue on to the next element in reqdFields. I'm not sure how exactly I would search through the enum - it would be a lot easier if the field names were stored in a second ArrayList, but then the index positions between the two would have to stay matched, and I'm back to using an enum. I think.

    I've been running around in circles all afternoon, and I keep thinking there must be an easier way of doing it. I'm just stuck right now and can't see past what I've started. Any help would be GREATLY appreciated. Thanks so much! Michael
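    Reflection may spare the enum bookkeeping entirely, since field names can be resolved against MeasuredParams at runtime. A minimal sketch, assuming the reqdFields strings match the declared field names exactly (Field.getDouble widens int fields to double automatically):

        import java.lang.reflect.Field;

        public class FieldMatcher {
            // Builds toClassify in exactly the order given by reqdFields.
            public static double[] buildInput(MeasuredParams params, String[] reqdFields)
                    throws ReflectiveOperationException {
                double[] toClassify = new double[reqdFields.length];
                for (int i = 0; i < reqdFields.length; i++) {
                    Field f = MeasuredParams.class.getDeclaredField(reqdFields[i]);
                    f.setAccessible(true);               // in case the fields are private
                    toClassify[i] = f.getDouble(params); // works for int and double fields
                }
                return toClassify;
            }
        }

    MeasuredParams keeps its ordinary fields, and nothing has to stay manually synchronized.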

  • javascript class calling XMLHttpRequest internally, then handling onreadystatechange

    - by Radu M
    This thing almost works:

        function myClass(url) {
            this.source = url;
            this.rq = null;
            this.someOtherProperty = "hello";

            // open connection to the ajax server
            this.start = function() {
                if (window.XMLHttpRequest) {
                    this.rq = new XMLHttpRequest();
                    if (this.rq.overrideMimeType)
                        this.rq.overrideMimeType("text/xml");
                } else
                    this.rq = new ActiveXObject("Microsoft.XMLHTTP");
                try {
                    this.rq.onreadystatechange = connectionEvent;
                    this.rq.open("GET", this.source, true);
                    this.rq.send(null);
                    this.state = 1;
                } catch (err) {
                    // some error handler here
                }
            }

            function connectionEvent() {
                alert("i'm here");
                alert("this doesnt work: " + this.someOtherProperty);
            }
        } // myClass

    So it's nothing more than having the XMLHttpRequest object as a member of my class, instead of globally defined, and invoking it in the traditional way. However, inside my connectionEvent callback function, the meaning of "this" is lost, even though the function itself is scoped inside myClass. I also made sure that the object I instantiate from myClass is kept alive long enough (declared global in the script). In all the examples of using JavaScript classes that I've seen, "this" was still available inside the inner functions. For me it is not, even if I take the function outside and make it a myClass.prototype.connectionEvent. What am I doing wrong? Thank you.
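    The callback loses the object because of how it is invoked, not where it is defined: when the XHR fires onreadystatechange, it calls the handler as a plain function, so "this" is the XHR (or the global object), never the myClass instance - lexical nesting doesn't bind "this" in JavaScript. The standard fix is to capture the instance in a variable the callback closes over (a sketch using the question's names):

        function myClass(url) {
            var self = this;                  // captured by the closure below
            this.source = url;
            this.someOtherProperty = "hello";

            function connectionEvent() {
                // "this" is not the myClass instance here; use the captured reference
                alert(self.someOtherProperty);
            }

            // ... assign connectionEvent to this.rq.onreadystatechange as before
        }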

  • realloc() & ARC

    - by RynoB
    How would I be able to rewrite the following utility class - which collects the class name strings for all classes of a specific type, using the Objective-C runtime functions as shown below - so that it works under ARC? The ARC documentation specifically states that realloc should be avoided, and I also get the following compiler error on this line: classList = realloc(classList, sizeof(Class) * numClasses);

        "Implicit conversion of a non-Objective-C pointer type 'void *' to
        '__unsafe_unretained Class *' is disallowed with ARC"

    The code below is based on the original article, which can be found here.

        + (NSArray *)classStringsForClassesOfType:(Class)filterType {
            int numClasses = 0, newNumClasses = objc_getClassList(NULL, 0);
            Class *classList = NULL;
            while (numClasses < newNumClasses) {
                numClasses = newNumClasses;
                classList = realloc(classList, sizeof(Class) * numClasses);
                newNumClasses = objc_getClassList(classList, numClasses);
            }
            NSMutableArray *classesArray = [NSMutableArray array];
            for (int i = 0; i < numClasses; i++) {
                Class superClass = classList[i];
                do {
                    superClass = class_getSuperclass(superClass);
                    if (superClass == filterType) {
                        [classesArray addObject:NSStringFromClass(classList[i])];
                        break;
                    }
                } while (superClass);
            }
            free(classList);
            return classesArray;
        }

    Your help will be much appreciated. Thanks
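    Under ARC, Class is treated as a retainable object-pointer type, so a plain Class * array needs an explicit ownership qualifier. Since class objects are never deallocated, __unsafe_unretained should be safe here, and an explicit cast on the realloc result satisfies the compiler (a sketch of the two changed lines; offered as a workaround I believe is standard, not verified against every clang version):

        // declaration:
        __unsafe_unretained Class *classList = NULL;

        // reallocation, with the ownership spelled out in the cast:
        classList = (__unsafe_unretained Class *)realloc(classList, sizeof(Class) * numClasses);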

  • android bindservice

    - by mnish
    Hi, I get a NullPointerException at the line mService.start() when I try to bind to an already-started service. When I do the same thing from a different activity (the one where the service gets started), everything goes right. All these activities are part of one application. What do you think I'm doing wrong?

        public class RouteOnMap extends MapActivity {
            private static final int NEW_LOCATION = 1;
            private static final int GPS_OFF = 2;
            private MapView mMapView;
            private ILocService mService;
            private boolean mServiceStarted;
            private boolean mBound;
            private Intent mServiceIntent;
            private double mLatitude, mLongitude;

            private ServiceConnection connection = new ServiceConnection() {
                public void onServiceConnected(ComponentName className, IBinder iservice) {
                    mService = ILocService.Stub.asInterface(iservice);
                    mBound = true;
                }

                public void onServiceDisconnected(ComponentName className) {
                    mService = null;
                    mBound = false;
                }
            };

            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.mapview);
                mMapView = (MapView) findViewById(R.id.mapview);
                mMapView.setBuiltInZoomControls(true);
                mServiceIntent = new Intent();
                mLatitude = 0.0;
                mLongitude = 0.0;
                mBound = false;
            }

            @Override
            public void onStart() {
                super.onStart();
                mServiceIntent.setClass(this, LocationService.class);
                //startService(mServiceIntent);
                if (!mBound) {
                    mBound = true;
                    this.bindService(mServiceIntent, connection, Context.BIND_AUTO_CREATE);
                }
            }

            @Override
            public void onResume() {
                super.onResume();
                try {
                    mService.start();
                } catch (RemoteException e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void onPause() {
                super.onPause();
                if (mBound) {
                    this.unbindService(connection);
                }
            }

            @Override
            protected boolean isRouteDisplayed() {
                // TODO Auto-generated method stub
                return false;
            }
        }
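    Note that bindService() is asynchronous: onServiceConnected() fires some time after onStart() returns, so by the time onResume() calls mService.start(), the binder may not have been delivered yet and mService is still null. One way around it (a sketch using the question's names) is to make the first call from the connection callback and null-check elsewhere:

        public void onServiceConnected(ComponentName className, IBinder iservice) {
            mService = ILocService.Stub.asInterface(iservice);
            mBound = true;
            try {
                mService.start();   // safe here: the binder has just arrived
            } catch (RemoteException e) {
                e.printStackTrace();
            }
        }

    This also explains why the other activity works: it presumably reaches its first mService call only after the connection has been established.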

  • Can't send client certificate via SslStream

    - by Jonathan
    I am doing an SSL3 handshake using an SslStream, but, in spite of my best efforts, the SslStream never sends a client certificate on my behalf. Here is the code:

        SSLConnection = new System.Net.Security.SslStream(
            SSLInOutStream,
            false,
            new System.Net.Security.RemoteCertificateValidationCallback(AlwaysValidRemoteCertificate),
            new System.Net.Security.LocalCertificateSelectionCallback(ChooseLocalCertificate));

        X509CertificateCollection CC = new X509CertificateCollection();
        CC.Add(Org.BouncyCastle.Security.DotNetUtilities.ToX509Certificate(MyLocalCertificate));

        SSLConnection.AuthenticateAsClient("test", CC,
            System.Security.Authentication.SslProtocols.Ssl3, false);

    ... and then I have AlwaysValidRemoteCertificate just returning true, and ChooseLocalCertificate returning the zeroth element of the array. The code probably looks a little weird because the project is a little weird, but I think that is beside the point here.

    The SSL handshake completes. The issue is that instead of sending a certificate message on my behalf (in the handshake process) with the ASN.1-encoded certificate (MyLocalCertificate), the SslStream sends an SSL alert number 41 (no certificate) and then carries on. I know this from packet sniffing. After the handshake is completed, the SslStream marks IsAuthenticated as true, IsMutuallyAuthenticated as false, and its LocalCertificate member is null.

    I feel like I'm probably missing something pretty obvious here, so any ideas would be appreciated. I am a novice with SSL, and this project is off the beaten path, so I am kind of at a loss.

    P.S. 1: My ChooseLocalCertificate routine is called twice during the handshake, and returns a valid (as far as I can tell), non-null certificate both times.

    P.S. 2: SSLInOutStream is my own class, not a NetworkStream. Like I said, though, the handshake proceeds mostly normally, so I doubt this is the culprit... but who knows?
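    One thing worth checking (offered as a likely cause, not a certainty): SslStream only sends a client certificate whose private key it can access, and BouncyCastle's DotNetUtilities.ToX509Certificate produces an X509Certificate carrying the public certificate only, with no private key attached. That would match the alert 41 and the null LocalCertificate exactly. A sketch of the usual alternative, loading certificate plus key from a PKCS#12 file (path and password are placeholders):

        using System.Security.Cryptography.X509Certificates;

        // load the certificate *and* its private key from a PKCS#12 container
        var clientCert = new X509Certificate2("client.pfx", "pfx-password");
        // clientCert.HasPrivateKey should report true before the handshake
        var certs = new X509CertificateCollection { clientCert };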

  • What sorting algorithm is this?

    - by Mike
    Hi, I have a sorting algorithm as follows. My question is: which sorting algorithm is this? I thought it was bubble sort, but it does not do multiple passes. Any idea? Thanks!

        // sorting in descending order
        struct node {
            int value;
            node* NEXT;
        };

        // Assume the HEAD pointer denotes the first element in the linked list.
        // Only change the values... we don't have to change the pointers.
        void Sort(node* Head) {
            node *first, *second, *temp;
            first = Head;
            while (first != NULL) {
                second = first->NEXT;
                while (second != NULL) {
                    if (first->value < second->value) {
                        temp = new node();
                        temp->value = first->value;
                        first->value = second->value;
                        second->value = temp->value;
                        delete temp;
                    }
                    second = second->NEXT;
                }
                first = first->NEXT;
            }
        }
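    This reads like selection sort in its "exchange" form rather than bubble sort: each pass of the inner loop compares the fixed first node against every remaining node, so after the pass first holds the largest remaining value, which is selection sort's invariant (bubble sort compares adjacent pairs instead). Incidentally, the heap-allocated temp node isn't needed for a value swap; a plain local does the job (a sketch of the inner exchange):

        if (first->value < second->value) {
            int t = first->value;           // plain value swap, no allocation
            first->value = second->value;
            second->value = t;
        }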

  • Why doesn't my cursor change to an Hourglass in my FindDialog in Delphi?

    - by lkessler
    I am simply opening my FindDialog with:

        FindDialog.Execute;

    In my FindDialog.OnFind event, I want to change the cursor to an hourglass for searches through large files, which may take a few seconds. So in the OnFind event I do this:

        Screen.Cursor := crHourGlass;
        // (code that searches for the text and displays it)
        ...
        Screen.Cursor := crDefault;

    What happens is that while searching for the text, the cursor properly changes to the hourglass (or rotating circle in Vista) and then back to the pointer when the search is completed. However, this only happens on the main form. It does not happen on the FindDialog itself: the default cursor remains on the FindDialog during the search. While the search is happening, if I move the cursor over the FindDialog it changes to the default, and if I move it off and over the main form it becomes the hourglass.

    This does not seem like what is supposed to happen. Am I doing something wrong, or does something special need to be done to get the cursor to be the hourglass on all forms? For reference, I'm using Delphi 2009.
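    I believe what you're seeing is that Screen.Cursor is a VCL mechanism: the VCL applies it when Windows asks one of its own forms to set the cursor, while the find dialog is a native common dialog that sets the cursor for its own window class and never consults Screen.Cursor (offered as a likely explanation, not a verified one). Separately, it's worth wrapping the search in try/finally so the cursor can't get stuck if the search raises an exception:

        Screen.Cursor := crHourGlass;
        try
          // search for the text and display it
        finally
          Screen.Cursor := crDefault;
        end;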

  • Setting ivar in objective-c from child view in the iPhone

    - by Ivan
    Hi there! Maybe a FAQ at this website. I have a TableViewController that holds a form. In that form I have two fields (each in its own cell): one to select who paid (single selection), and another to select the people the expense is paid for (multiple selection). Both fields open a new TableViewController wrapped in a UINavigationController. The single-select field (Paid By) holds a Membership object; the multiple-select field (Paid For) holds an NSMutableArray. Both vars are sent to the new controllers in exactly the same way:

        mySingleSelectController.crSelectedMember = self.crPaidByMember;
        myMultipleSelectController.crSelectedMembers = self.crSelectedMembers;

    In the Paid For controller I use the didSelectRowAtIndexPath: method to build a mutable array of Memberships the expense is paid for:

        if ([[tableView cellForRowAtIndexPath:indexPath] accessoryType] == UITableViewCellAccessoryCheckmark) {
            [self.crSelectedMembers removeObject:[self.crGroupMembers objectAtIndex:indexPath.row]];
            //...
        } else {
            [self.crSelectedMembers addObject:[self.crGroupMembers objectAtIndex:indexPath.row]];
            //...
        }

    So far everything goes well: the mutable array (crSelectedMembers) is perfectly set from the child view. But... I have trouble setting the Membership object. In the Paid By controller I use didSelectRowAtIndexPath: to set the Membership:

        [self setCrSelectedMember:[crGroupMembers objectAtIndex:indexPath.row]];

    By NSLogging crSelectedMember I get the right selected member in self, but in the parent view, whose ivar this is supposed to point at, nothing changes. Am I doing something wrong? Because I CAN call methods on crSelectedMembers, but I can't change the value of crSelectedMember.
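    The asymmetry is the giveaway: both controllers hold pointers to the same NSMutableArray, so addObject:/removeObject: mutate the one shared object, while setCrSelectedMember: merely repoints the child controller's own ivar - the parent's pointer never changes. One common fix (a sketch; property names follow the question, and the parentController reference is an assumption about how you'd wire it up) is to write back through a reference to the parent:

        // in the Paid By controller's didSelectRowAtIndexPath:
        self.parentController.crPaidByMember = [crGroupMembers objectAtIndex:indexPath.row];

    A delegate protocol would achieve the same thing with less coupling.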

  • Linux Kernel - Red/Black Trees

    - by CodeRanger
    I'm trying to implement a red/black tree in Linux, one per task_struct, using code from linux/rbtree.h. I can get a red/black tree inserting properly in a standalone space in the kernel, such as a module, but when I try to get the same code to work with an rb_root declared in either task_struct or task_struct->files_struct, I get a SEGFAULT every time I try an insert.

    Here's some code: in task_struct I create an rb_root struct for my tree (not a pointer). In init_task.h, macro INIT_TASK(tsk), I set it equal to RB_ROOT. To do an insert, I use this code:

        rb_insert(&(current->fd_tree), &rbnode);

    This is where the issue occurs. My insert function is the standard insert documented in all the kernel rbtree documentation:

        int my_insert(struct rb_root *root, struct mytype *data)
        {
            struct rb_node **new = &(root->rb_node), *parent = NULL;

            /* Figure out where to put new node */
            while (*new) {
                struct mytype *this = container_of(*new, struct mytype, node);
                int result = strcmp(data->keystring, this->keystring);

                parent = *new;
                if (result < 0)
                    new = &((*new)->rb_left);
                else if (result > 0)
                    new = &((*new)->rb_right);
                else
                    return FALSE;
            }

            /* Add new node and rebalance tree. */
            rb_link_node(&data->node, parent, new);
            rb_insert_color(&data->node, root);
            return TRUE;
        }

    Is there something I'm missing? Is there some reason this would work fine if I made a tree root outside of task_struct? If I make the rb_root inside a module, this insert works fine. But once I put the actual tree root in task_struct, or even in task_struct->files_struct, I get a SEGFAULT. Can a root node not be added in these structs? Any tips are greatly appreciated. I've tried nearly everything I can think of.
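    One thing worth checking (an assumption, but it would produce exactly this symptom): dup_task_struct() copies the parent's task_struct wholesale on fork, so a child inherits the parent's rb_node pointer instead of an empty tree, and two tasks then rebalance the same nodes concurrently - INIT_TASK only covers the init task itself. Resetting the root for every new task in copy_process() (kernel/fork.c) rules that out:

        /* in copy_process(), after dup_task_struct() has run
           (field name follows the question) */
        p->fd_tree = RB_ROOT;   /* give the child its own empty tree */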

  • Make an image transparent in IE to show non-transparent background

    - by Select0r
    Hi, I'm trying to get this thing to work in IE (any version - it works in FF, Opera, Safari, Chrome...): I have a DIV with a background-image. The DIV also contains an image that should become transparent on mouseover. The expected behaviour is that the DIV background shines through the transparent image (which it does in all browsers but IE). Instead, it looks like the image is made transparent against a white background; I can't see the DIV's background through the image.

    Here's some code:

        <div><a href="#" class="dragItem"><img /></a></div>

    And some CSS:

        .dojoDndItemOver {
            cursor  : pointer;
            filter  : alpha(opacity = 50);
            opacity : 0.5;
            -moz-opacity   : 0.5;
            -khtml-opacity : 0.5;
        }

        .dragItem:hover {
            filter  : alpha(opacity = 30);
            opacity : 0.3;
            -moz-opacity   : 0.3;
            -khtml-opacity : 0.3;
            background : none;
        }

    All of this is embedded in a Dojo drag-and-drop system, so dojoDndItemOver is automatically set on the DIV on mouseover, and dragItem is set on the anchor around the image (using the same class on the image directly doesn't work at all, since IE doesn't support :hover on elements other than links). Any ideas? Or is it an IE speciality to just "simulate" transparency on images by somehow greying them out instead of providing real transparency and showing whatever is beneath?
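    IE only applies its alpha filter to elements that have the proprietary "hasLayout" property, and inline anchors and images don't get it by default, which can produce exactly this "faded against white" look. Forcing layout is a common first thing to try (an assumption here, not a verified fix for this page):

        .dragItem, .dragItem img {
            zoom: 1;                /* proprietary IE property: forces hasLayout */
            display: inline-block;  /* alternative hasLayout trigger */
        }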

  • AST with fixed nodes instead of error nodes in antlr

    - by ahe
    I have an antlr-generated Java parser that uses the C target, and it works quite well. The problem is that I also want it to parse erroneous code and produce a meaningful AST. If I feed it a minimal Java class with one import after which a semicolon is missing, it produces two "Tree Error Node" objects where the "import" token and the tokens for the imported class should be. But since it parses the following code correctly and produces the correct nodes for it, it must recover from the error by adding the semicolon or by resyncing.

    Is there a way to make antlr reflect this fixed input, which it produces internally, in the AST? Or can I at least get the tokens/text that produced the "Tree Error Node" objects somehow? In the C target's antlr3commontreeadaptor.c, around line 200, the following fragment indicates that the C target only creates dummy error nodes so far:

        static pANTLR3_BASE_TREE
        errorNode(pANTLR3_BASE_TREE_ADAPTOR adaptor,
                  pANTLR3_TOKEN_STREAM ctnstream,
                  pANTLR3_COMMON_TOKEN startToken,
                  pANTLR3_COMMON_TOKEN stopToken,
                  pANTLR3_EXCEPTION e)
        {
            // Use the supplied common tree node stream to get another tree from the factory
            // TODO: Look at creating the erronode as in Java, but this is complicated by the
            //       need to track and free the memory allocated to it, so for now, we just
            //       want something in the tree that isn't a NULL pointer.
            //
            return adaptor->createTypeText(adaptor, ANTLR3_TOKEN_INVALID, (pANTLR3_UINT8)"Tree Error Node");
        }

    Am I out of luck here, and would only the error nodes the Java target produces allow me to retrieve the text of the erroneous nodes?

  • Deriving an HTMLElement Object from jQuery Object

    - by Jasconius
    I'm doing a fairly exhaustive series of DOM manipulations where a few elements (specifically form elements) have some events. I am dynamically creating (actually cloning from a source element) several boxes and assigning a change() event to them. The change event executes, and within the context of the event, "this" is the HTMLElement object.

    What I need to do at this point is determine which of my stored jQuery objects corresponds to this HTMLElement object. I have these objects stored as jQuery entities in assorted arrays, but obviously

        [HTMLElement Object] != [Object Object]

    And the trick is that I cannot just compare against $(this), since that would create a new object and the pointer would be different.

    So... I've been banging my head against this for a while. In the past I've been able to circumvent this problem by doing an innerHTML comparison, but in this case the objects I am comparing are 100% identical - there are just lots of them - so I need a solid comparison. This would be easy if I could somehow derive the HTMLElement object from my originating jQuery object. Thoughts, other ideas? Help. :(
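    jQuery keeps the wrapped DOM nodes at numeric indices, so the comparison can be made on the raw elements without constructing a new wrapper - two jQuery objects built from the same element still satisfy jq1[0] === jq2[0]. A sketch (storedSelects stands in for one of the arrays mentioned in the question):

        for (var i = 0; i < storedSelects.length; i++) {
            if (storedSelects[i].get(0) === this) {   // same underlying HTMLElement?
                // storedSelects[i] is the wrapper for the element firing the event
                break;
            }
        }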

  • WinForms databind to collection property (such as count)

    - by Ornus
    I want to databind to a collection property, such as Count. In general, when I data bind, I specify a data member naming a property of the objects in the collection, not one of the properties the collection itself exposes. For example, I have a list of custom objects which I show in a DataGridView, but I also want to show their total count using a separate label. Is there a way to do this through databinding? I imagine that somehow I need to force a PropertyManager to be used instead of the CurrencyManager?

    I get the following exception. Notice that the DataSource is the collection, and it does have a TotalValue property.

        System.ArgumentException: Cannot bind to the property or column TotalValue on the DataSource.
        Parameter name: dataMember
           at System.Windows.Forms.BindToObject.CheckBinding()
           at System.Windows.Forms.Binding.SetListManager(BindingManagerBase bindingManagerBase)
           at System.Windows.Forms.ListManagerBindingsCollection.AddCore(Binding dataBinding)
           at System.Windows.Forms.BindingsCollection.Add(Binding binding)
           at System.Windows.Forms.BindingContext.UpdateBinding(BindingContext newBindingContext, Binding binding)
           at System.Windows.Forms.Binding.SetBindableComponent(IBindableComponent value)
           at System.Windows.Forms.ControlBindingsCollection.AddCore(Binding dataBinding)
           at System.Windows.Forms.BindingsCollection.Add(Binding binding)
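    When the data source is a list, Windows Forms routes the binding through a CurrencyManager, which resolves the data member against the properties of the list's items, not of the list object itself - hence the exception. A workaround sketch (assuming the list is wrapped in a BindingSource; the control names are illustrative): skip the binding for the aggregates and refresh them from the ListChanged event instead:

        bindingSource.ListChanged += (s, e) =>
        {
            countLabel.Text = bindingSource.Count.ToString();      // item count
            totalLabel.Text = myCollection.TotalValue.ToString();  // collection's own property
        };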

  • Boost bind with asio::placeholders::error

    - by Leandro
    Why doesn't it work?

    --- boost_bind.cc ---

        #include <boost/function.hpp>
        #include <boost/bind.hpp>
        #include <asio.hpp>

        void func1 (const int& i) { }

        void func2 (const ::asio::error_code& e) { }

        int main ()
        {
            ::boost::function<void ()> f1 = ::boost::bind (&func1, 1);

            // it doesn't work!
            ::boost::function<void ()> f2 = ::boost::bind (&func2, ::asio::placeholders::error);

            return 0;
        }

    This is the error:

        while_true@localhost:~$ g++ -lpthread boost_bind.cc -o boost_bind
        In file included from boost_bind.cc:2:
        /usr/include/boost/bind.hpp: In member function 'void boost::_bi::list1::operator()(boost::_bi::type, F&, A&, int) [with F = void (*)(const asio::error_code&), A = boost::_bi::list0, A1 = boost::arg (*)()]':
        /usr/include/boost/bind/bind_template.hpp:20:   instantiated from 'typename boost::_bi::result_traits::type boost::_bi::bind_t::operator()() [with R = void, F = void (*)(const asio::error_code&), L = boost::_bi::list1 (*)()]'
        /usr/include/boost/function/function_template.hpp:152:   instantiated from 'static void boost::detail::function::void_function_obj_invoker0::invoke(boost::detail::function::function_buffer&) [with FunctionObj = boost::_bi::bind_t (*)(), R = void]'
        /usr/include/boost/function/function_template.hpp:904:   instantiated from 'void boost::function0::assign_to(Functor) [with Functor = boost::_bi::bind_t (*)(), R = void]'
        /usr/include/boost/function/function_template.hpp:720:   instantiated from 'boost::function0::function0(Functor, typename boost::enable_if_c::type) [with Functor = boost::_bi::bind_t (*)(), R = void]'
        /usr/include/boost/function/function_template.hpp:1040:   instantiated from 'boost::function::function(Functor, typename boost::enable_if_c::type) [with Functor = boost::_bi::bind_t (*)(), R = void]'
        boost_bind.cc:14:   instantiated from here
        /usr/include/boost/bind.hpp:232: error: no match for 'operator[]' in 'a[boost::_bi::storage1 (*)()::a1_ [with int I = 1]]'
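    The placeholder makes the bind expression a unary functor: asio::placeholders::error is only substituted when the caller supplies an error_code argument, so the result cannot be stored in a nullary boost::function - that is what the function0/list0 machinery in the error is complaining about. Declaring the target with the matching signature compiles (a sketch):

        ::boost::function<void (const ::asio::error_code&)> f2 =
            ::boost::bind (&func2, ::asio::placeholders::error);

        f2(::asio::error_code());   // func2 receives the error code passed here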

  • function's return address is different from its supposed value, buffer overflow,

    - by ultrajohn
    Good day everyone! I'm trying to understand how buffer overflows work, for a project in a computer security course I'm taking. Right now I'm in the process of determining the address of a function's return address, which I'm supposed to change to perform a buffer overflow attack.

    I've written a simple program based on an example I read on the Internet. What this program does is create an integer pointer that is made to point at the address of the function's return address on the stack. To do this (granted I understand how a function's variables get organized on the stack), I add 8 to the buffer variable's address and set that as the value of ret. I'm not doing anything here that would change the address contained in the location of func's return address.

    Here's the program: [screenshot of the program]

    Output of the program when it gets executed: [screenshot of the output]

    As you can see, I'm printing the addresses of the variables buffer and ret. I've added an additional statement printing the value of the ret variable (the supposed location of func's return address, so this should print the address of the next instruction to be executed after func returns). Here is the dump showing the supposed address of the instruction to be executed after func returns (underlined in green): [screenshot of the disassembly dump]

    As you can see, that value is way different from the value contained in the variable ret. My question is: why are they different (assuming, of course, that everything I've done is right)? Otherwise, what have I done wrong? Is my understanding of the program's runtime stack wrong? Please help me understand this. My project is due next week and I've barely touched it yet. I'm sorry if I'm being demanding; I badly need your help.
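    For what it's worth, the fixed "+8" is the usual suspect: the distance between a local buffer and the saved return address depends on padding, saved registers, and stack-protector canaries the compiler inserts, so the word at buffer+8 frequently isn't the return address at all. A sketch of how to check the real layout instead of assuming it (GCC and gdb; the flags are an assumption about your toolchain):

        $ gcc -g -O0 -fno-stack-protector -o demo demo.c
        $ gdb ./demo
        (gdb) break func
        (gdb) run
        (gdb) info frame        # the "saved eip" line shows where the return address lives
        (gdb) print &buffer     # compute the actual offset from these two addresses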

  • Help with Android LinearLayout or RelativeLayout

    - by PeEll
    I need to create two views programmatically (because I need to access the onDraw of one of the views). For some reason, no matter what I do to add the views to the content view, they don't show up vertically stacked, one below the other. I can do it just fine in XML using a RelativeLayout and layout positioning, but with XML I can't create a view object and overload the onDraw method. What am I doing wrong with the programmatic approach, and how do I solve this problem?

        LinearLayout mLinearLayout;

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            // Create a LinearLayout in which to add the ImageView
            mLinearLayout = new LinearLayout(this);

            TextView tv = new TextView(this);
            tv.setBackgroundColor(0xff333333);
            tv.setText("Enter your member number:");
            tv.setLayoutParams(new ViewGroup.LayoutParams(
                LayoutParams.FILL_PARENT, LayoutParams.WRAP_CONTENT));

            DrawableView i = new DrawableView(this);
            i.layout(0, 40, 0, 0);
            i.setLayoutParams(new Gallery.LayoutParams(
                LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT));

            mLinearLayout.addView(tv);
            mLinearLayout.addView(i, 300, 300);
            setContentView(mLinearLayout);
        }
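    A LinearLayout defaults to horizontal orientation, so programmatically created children line up side by side unless the orientation is set explicitly - in XML this is usually done with android:orientation="vertical", which is why the XML version behaves. A one-line fix:

        mLinearLayout.setOrientation(LinearLayout.VERTICAL); // stack children top-to-bottom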

  • operator<< overload,

    - by mr.low
        //using namespace std;
        using std::ifstream;
        using std::ofstream;
        using std::cout;

        class Dog {
            friend ostream& operator<< (ostream&, const Dog&);
        public:
            char* name;
            char* breed;
            char* gender;
            Dog();
            ~Dog();
        };

    I'm trying to overload the << operator, and I'm also trying to practice good coding. But my code won't compile unless I uncomment the using namespace std line. I keep getting this error and I don't know why; I'm using the g++ compiler:

        Dog.h:20: error: ISO C++ forbids declaration of 'ostream' with no type
        Dog.h:20: error: 'ostream' is neither function nor member function; cannot be declared friend

    If I add the line using std::cout; then I get this error:

        Dog.h:21: error: ISO C++ forbids declaration of 'ostream' with no type

    Can somebody tell me the correct way to overload the << operator without using namespace std?
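    The root cause is that nothing in the header declares the unqualified name ostream: the using declarations pull in ifstream, ofstream and cout, but not the ostream type itself. Qualifying the name (or adding using std::ostream;) after including <iostream> compiles cleanly without using namespace std (a sketch):

        #include <iostream>

        class Dog {
            friend std::ostream& operator<<(std::ostream& os, const Dog& d);
            // ...
        };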

  • How can I get the correct DisplayMetrics from an AppWidget in Android?

    - by Gary
    I need to determine the screen density at runtime in an Android AppWidget. I've set up an HDPI emulator device (AVD). If I set up a regular executable project and insert this code into the onCreate method:

        DisplayMetrics dm = getResources().getDisplayMetrics();
        Log.d("MyTag", "screen density " + dm.densityDpi);

    ... it outputs "screen density 240" as expected. However, if I set up an AppWidget project and insert this code into the onUpdate method:

        DisplayMetrics dm = context.getResources().getDisplayMetrics();
        Log.d("MyTag", "screen density " + dm.densityDpi);

    ... it outputs "screen density 160". I noticed, hooking up the debugger, that the mDefaultDisplay member of the Resources object is null in the AppWidget case.

    Similarly, if I get a resource at runtime using the Resources object obtained from context.getResources() in the AppWidget, it returns the wrong resource based on screen density. For instance, I have a 60x60px drawable for mdpi and an 80x80 drawable for hdpi. If I get this Drawable object using context.getResources().getDrawable(...), it returns the 60x60 version.

    Is there any way to correctly deal with resources at runtime from the context of an AppWidget? Thanks!
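    As a workaround worth trying (an assumption; I haven't verified it inside an AppWidgetProvider), the metrics can be read from the WindowManager service instead of the widget's Resources object, sidestepping the null mDefaultDisplay:

        WindowManager wm = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
        DisplayMetrics dm = new DisplayMetrics();
        wm.getDefaultDisplay().getMetrics(dm);
        Log.d("MyTag", "screen density " + dm.densityDpi);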

  • WPF binding fails with custom add and remove accessors for INotifyPropertyChanged.PropertyChanged

    - by emddudley
    I have a scenario which is causing strange behavior with WPF data binding and INotifyPropertyChanged. I want a private member of the data binding source to handle the INotifyPropertyChanged.PropertyChanged event. I get some exceptions which haven't helped me debug, even when I have "Enable .NET Framework source stepping" checked in Visual Studio's options:

        A first chance exception of type 'System.ArgumentException' occurred in mscorlib.dll
        A first chance exception of type 'System.ArgumentException' occurred in mscorlib.dll
        A first chance exception of type 'System.InvalidOperationException' occurred in PresentationCore.dll

    Here's the source code.

    XAML:

        <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                x:Class="TestApplication.MainWindow"
                DataContext="{Binding RelativeSource={RelativeSource Self}}"
                Height="100" Width="100">
            <StackPanel>
                <CheckBox IsChecked="{Binding Path=CheckboxIsChecked}" Content="A" />
                <CheckBox IsChecked="{Binding Path=CheckboxIsChecked}" Content="B" />
            </StackPanel>
        </Window>

    The normal implementation works:

        public partial class MainWindow : Window, INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            public bool CheckboxIsChecked
            {
                get { return this.mCheckboxIsChecked; }
                set
                {
                    this.mCheckboxIsChecked = value;
                    PropertyChangedEventHandler handler = this.PropertyChanged;
                    if (handler != null)
                        handler(this, new PropertyChangedEventArgs("CheckboxIsChecked"));
                }
            }

            private bool mCheckboxIsChecked = false;

            public MainWindow()
            {
                InitializeComponent();
            }
        }

    The desired implementation doesn't work:

        public partial class MainWindow : Window, INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged
            {
                add { lock (this.mHandler) { this.mHandler.PropertyChanged += value; } }
                remove { lock (this.mHandler) { this.mHandler.PropertyChanged -= value; } }
            }

            public bool CheckboxIsChecked
            {
                get { return this.mHandler.CheckboxIsChecked; }
                set { this.mHandler.CheckboxIsChecked = value; }
            }

            private HandlesPropertyChangeEvents mHandler = new HandlesPropertyChangeEvents();

            public MainWindow()
            {
                InitializeComponent();
            }

            public class HandlesPropertyChangeEvents : INotifyPropertyChanged
            {
                public event PropertyChangedEventHandler PropertyChanged;

                public bool CheckboxIsChecked
                {
                    get { return this.mCheckboxIsChecked; }
                    set
                    {
                        this.mCheckboxIsChecked = value;
                        PropertyChangedEventHandler handler = this.PropertyChanged;
                        if (handler != null)
                            handler(this, new PropertyChangedEventArgs("CheckboxIsChecked"));
                    }
                }

                private bool mCheckboxIsChecked = false;
            }
        }
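    A likely cause (an assumption, but it fits the ArgumentException): the inner HandlesPropertyChangeEvents object raises PropertyChanged with itself as the sender, while the binding source is the Window, and WPF rejects change notifications whose sender doesn't match the object it bound to. Re-raising the event from the Window keeps the custom accessors but fixes the sender; a sketch:

        private event PropertyChangedEventHandler mOuterChanged;

        public event PropertyChangedEventHandler PropertyChanged
        {
            add { this.mOuterChanged += value; }
            remove { this.mOuterChanged -= value; }
        }

        public MainWindow()
        {
            InitializeComponent();
            // forward inner notifications, substituting the Window as sender
            this.mHandler.PropertyChanged += (s, e) =>
            {
                var h = this.mOuterChanged;
                if (h != null) h(this, e);
            };
        }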

  • STLport crash (race condition, Darwin only?)

    - by Jonas Byström
    When I run STLport on Darwin I get a strange crash. (I haven't seen it anywhere else than on the Mac, but exactly the same thing crashes on both i686 and PowerPC.) This is what it looks like in gdb:

        Program received signal EXC_BAD_ACCESS, Could not access memory.
        Reason: 13 at address: 0x0000000000000000
        [Switching to process 21097]
        0x000000010120f47c in stlp_std::__node_alloc_impl::_M_allocate ()

    It may be some setting in STLport; I noticed that Mac.h and MacOSX.h seemed far behind on features. I also know that it must be some type of race condition, since it doesn't occur just by calling this method (implicitly called). The crash happens mainly when I push the system, running 10 simultaneous threads that do a lot of string handling. Other theories I've come up with have to do with compiler flags (configure script) and g++ 4.2 bugs (it seems 4.4.3 isn't on the Mac yet with Objective-C support, which I need to link with). HELP!!! :)

    Edit: I run unit tests, which do all sorts of things. This problem arises when I start 10 threads that push the system, and it always comes down to std::string::append, which eventually boils down to _M_allocate. Since I can't even get a decent dump of the code that's causing the problem, I figure I'm doing something bad. Could it be that it's trying to execute at instruction pointer 0x000...000? Are dylibs built like DLLs in Windows, with a jump table? Could it perhaps be that such a jump table has been overwritten for some reason? That would probably explain this behavior. (The code is huge; if I run out of other ideas, I'll post a minimal crashing sample here.)
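    Given that the crash is inside the node allocator's free list and only shows up under thread pressure, one configuration worth ruling out (an assumption, not a diagnosis): an STLport build whose pooled node allocator was compiled without thread synchronization. STLport can be told to bypass the pool entirely, which at least isolates the allocator:

        /* in STLport's user config header (or as -D on the compile line),
           before any STLport header is included */
        #define _STLP_USE_MALLOC 1  /* plain malloc/free instead of the node pool */

    If the crashes disappear with that defined, the allocator's thread configuration is the place to dig.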

  • Windows Programming: ID2D1Bitmap Interface - Getting the Bitmap Data

    - by LostInBrackets
    I've been writing my own library of functions to access some of the new Direct2D Windows libraries. In particular, I've been working with the ID2D1Bitmap interface. I wanted to write a function to return a pointer to the start of the bitmap data (for editing particular pixels, or custom encoding, or whatever else I might wish for in the future). Unfortunately... problem ahead... I can't seem to find a way to get access to the raw pixel data from the ID2D1Bitmap interface. Does anyone have an idea how to access this?

    One of my friends suggested drawing the bitmap to a surface and extracting the bitmap data from there. I don't know if this would work; it definitely seems inefficient, and I wouldn't know which kind of surface to use. Any help is appreciated. (C++ in particular, but I assume the code won't be too different between languages.)

    (I know I could just read the data directly from the file, but I'm using the WIC decoders, which means it could be in any number of indecipherable formats.)
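    ID2D1Bitmap deliberately exposes no CPU-read method, so some indirection is unavoidable. Your friend's suggestion corresponds to the classic route: render into a target backed by an IWICBitmap and read the pixels via IWICBitmap::Lock. If the Direct2D 1.1 interfaces are available, there is a more direct path through a CPU-readable staging bitmap; a sketch (ctx and gpuBitmap are placeholder names for an ID2D1DeviceContext and the source bitmap):

        // stage a CPU-readable copy of the GPU bitmap, then map it
        D2D1_BITMAP_PROPERTIES1 props = D2D1::BitmapProperties1(
            D2D1_BITMAP_OPTIONS_CPU_READ | D2D1_BITMAP_OPTIONS_CANNOT_DRAW,
            D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));

        ID2D1Bitmap1* cpuBitmap = nullptr;
        ctx->CreateBitmap(gpuBitmap->GetPixelSize(), nullptr, 0, &props, &cpuBitmap);
        cpuBitmap->CopyFromBitmap(nullptr, gpuBitmap, nullptr);   // GPU -> staging copy

        D2D1_MAPPED_RECT mapped;
        cpuBitmap->Map(D2D1_MAP_OPTIONS_READ, &mapped);
        // mapped.bits points at the first scanline; mapped.pitch is the stride in bytes
        cpuBitmap->Unmap();
        cpuBitmap->Release();

    Either way the pixels are copied once; Direct2D never hands out a live pointer into the GPU-resident bitmap.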

  • vimscript: calling dictionary functions with call()

    - by intuited
    I'm hoping to call a "static" dictionary function using call(). By "static" I mean that the keyword 'dict' is not used in the function's definition. I use this nomenclature in the hope that the effect of this keyword is to declare a static member function, as is possible in Java/C++/etc., i.e. to put the function name in the class namespace but allow it to be called without referencing an object. However, this doesn't seem to work. For example:

        " Setup:
        let testdict = { }
        funct! testdict.funct()
          echo "called"
        endfunct

        " Tests:
        " Following each line is an indented comment
        " containing its output in message land, ie what was echoed.

        call testdict.funct()
          " called
        echo testdict.funct
          " 667
        echo string(testdict.funct)
          " function('667')
        echo function('667')
          " E475: Invalid argument: 667
        echo function('testdict.funct')
          " testdict.funct
        call call(testdict.funct, [ ])
          " E725: Calling dict function without Dictionary: 667

        " Same deal if there's an intermediate variable involved.
        let TestdictDotFunct = testdict.funct
        echo TestdictDotFunct
          " 667
        echo string(TestdictDotFunct)
          " function('667')
        call TestdictDotFunct()
          " E725: Calling dict function without Dictionary: 667

    From the help topic for E725:

        It is also possible to add a function without the "dict" attribute as a
        Funcref to a Dictionary, but the "self" variable is not available then.

    So logic would seem to indicate that if "self" is not available, then it should be possible to call the function referenced by the Funcref without a Dictionary. However, this doesn't seem to be the case. Am I missing something?

    Vim version info:

        $ aptitude show vim-gnome
        Package: vim-gnome
        State: installed
        Automatically installed: no
        Version: 2:7.2.245-2ubuntu2
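    The third, optional argument of call() supplies the dictionary that becomes self, and passing it satisfies E725 even though the function was defined without the dict attribute (a sketch against the question's setup):

        call call(testdict.funct, [], testdict)
        " -> called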

  • How does virtual inheritance solve the diamond problem?

    - by cambr
        class A {
        public:
            void eat() { cout << "A"; }
        };

        class B : virtual public A {
        public:
            void eat() { cout << "B"; }
        };

        class C : virtual public A {
        public:
            void eat() { cout << "C"; }
        };

        class D : public B, public C {
        public:
            void eat() { cout << "D"; }
        };

        int main() {
            A* a = new D();
            a->eat();
        }

    I understand the diamond problem, and the above piece of code does not have that problem. How exactly does virtual inheritance solve the problem?

    What I understand: when I say A *a = new D();, the compiler wants to know whether an object of type D can be assigned to a pointer of type A, but it has two paths it could follow and cannot decide by itself. So how does virtual inheritance resolve the issue (help the compiler make the decision)?
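    For contrast, here is a sketch of the same hierarchy without virtual inheritance, which is where the ambiguity actually appears: D then contains two distinct A subobjects, one via B and one via C, so there is no single A for the pointer to refer to:

        class B : public A { /* ... */ };
        class C : public A { /* ... */ };
        class D : public B, public C { /* ... */ };

        A* a = new D();   // error: 'A' is an ambiguous base of 'D'

    Declaring the inheritance virtual tells the compiler to share one A subobject between B and C, so D holds exactly one A and the conversion has only one possible target - there are no longer two paths to choose between.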

  • argv Memory Allocation

    - by Joshua Green
    I was wondering if someone could tell me what I am doing wrong, given that I get this unhandled exception error message:

        0xC0000005: Access violation reading location 0x0000000c.

    ... with a green pointer pointing at my first line of Prolog code (the fid_t line). Here is my header file:

        class UserTaskProlog {
        public:
            UserTaskProlog(ArRobot* r);
            ~UserTaskProlog();

        protected:
            int cycles;
            char* argv[1];
            term_t tf;
            term_t tx;
            term_t goal_term;
            functor_t goal_functor;
            ArRobot* robot;
            void logTask();
        };

    And here is my main code:

        UserTaskProlog::UserTaskProlog(ArRobot* r) :
            robot(r), robotTaskFunc(this, &UserTaskProlog::logTask)
        {
            cycles = 0;
            argv[0] = "libpl.dll";
            argv[1] = NULL;
            PL_initialise(1, argv);
            PlCall("consult( 'myPrologFile.pl' )");
            robot->addSensorInterpTask("UserTaskProlog", 50, &robotTaskFunc);
        }

        UserTaskProlog::~UserTaskProlog()
        {
            robot->remSensorInterpTask(&robotTaskFunc);
        }

        void UserTaskProlog::logTask()
        {
            cycles++;
            fid_t fid = PL_open_foreign_frame();
            tf = PL_new_term_ref();
            PL_put_integer(tf, 5);
            tx = PL_new_term_ref();
            goal_term = PL_new_term_ref();
            goal_functor = PL_new_functor(PL_new_atom("factorial"), 2);
            PL_cons_functor(goal_term, goal_functor, tf, tx);
            int fact;
            if (PL_call(goal_term, NULL)) {
                PL_get_integer(tx, &fact);
                cout << fact << endl;
            }
            PL_discard_foreign_frame(fid);
        }

        int main(int argc, char** argv)
        {
            ArRobot robot;
            ArArgumentParser argParser(&argc, argv);
            UserTaskProlog talk(&robot);
        }

    Thank you,
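    One out-of-bounds write stands out before anything Prolog-related even runs: argv is declared as char* argv[1], so argv[1] = NULL writes past the end of the array and tramples whichever member follows it in the class (tf, in this declaration order). Sizing the array for the NULL terminator may well make the crash go away (a sketch of the corrected declaration):

        char* argv[2];   // argv[0] = module name, argv[1] = NULL terminator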

  • Call dll - pcshll32.dll using delphi

    - by Davis
    Hi, I need to call the hllapi function of pcshll32.dll using Delphi. It works with IBM Personal Communications. How can I change the code below to Delphi? Thanks!

    The EHLLAPI entry point (hllapi) is always called with the following four parameters:

    - EHLLAPI Function Number (input)
    - Data Buffer (input/output)
    - Buffer Length (input/output)
    - Presentation Space Position (input); Return Code (output)

    The prototype for IBM Standard EHLLAPI is:

        long hllapi (LPWORD, LPSTR, LPWORD, LPWORD);

    The prototype for IBM Enhanced EHLLAPI is:

        long hllapi (LPINT, LPSTR, LPINT, LPINT);

    Each parameter is passed by reference, not by value. Thus each parameter to the function call must be a pointer to the value, not the value itself. For example, the following is a correct example of calling the EHLLAPI Query Session Status function:

        #include "hapi_c.h"

        struct HLDQuerySessionStatus QueryData;
        int Func, Len;
        long Rc;

        memset(&QueryData, 0, sizeof(QueryData));     // Init buffer
        QueryData.qsst_shortname = 'A';               // Session to query
        Func = HA_QUERY_SESSION_STATUS;               // Function number
        Len = sizeof(QueryData);                      // Len of buffer
        Rc = 0;                                       // Unused on input

        hllapi(&Func, (char *)&QueryData, &Len, &Rc); // Call EHLLAPI
        if (Rc != 0) {                                // Check return code
            // ...Error handling
        }

    All the parameters in the hllapi call are pointers, and the return code of the EHLLAPI function is returned in the value of the 4th parameter, not as the value of the function.
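    A Delphi translation would typically load the entry point dynamically. The sketch below assumes the export is named 'hllapi' and uses stdcall, and the function number and buffer values are illustrative only - both points are worth verifying against the Personal Communications headers, since EHLLAPI builds differ:

        { uses Windows }
        type
          THllapi = function(Func: PLongInt; Buffer: PAnsiChar;
                             Len: PLongInt; Rc: PLongInt): LongInt; stdcall;
        var
          Lib: THandle;
          hllapi: THllapi;
          Func, Len, Rc: LongInt;
          Buffer: array[0..255] of AnsiChar;
        begin
          Lib := LoadLibrary('pcshll32.dll');
          if Lib <> 0 then
          begin
            @hllapi := GetProcAddress(Lib, 'hllapi');
            Func := 1;            { illustrative: Connect Presentation Space }
            Buffer[0] := 'A';     { session short name }
            Len := 4;
            Rc := 0;
            hllapi(@Func, @Buffer, @Len, @Rc);
            { inspect Rc here: the return code comes back in the 4th parameter }
            FreeLibrary(Lib);
          end;
        end;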
