Search Results

Search found 26 results on 2 pages for 'sandor drieenhuizen'.

Page 1/2 | 1 2  | Next Page >

  • How do I 'addChild' a DisplayObject3D from another class? (Papervision3D)

    - by Sandor
    Hi all, I'm kind of new to the whole Papervision scene. For a school assignment I'm making a panorama version of my own room using a cube with 6 pictures in it. I created the panorama and it works great. But now I want to add clickable objects to it. One of the requirements is that my code is OOP focused, so that's what I am trying right now. Currently I have two classes - Main.as (here I make the panorama cube as the room) - photoWall.as (here I want to create my first clickable object). Now my problem is: I want to addChild a clickable object from photoWall.as to my panorama room, but it doesn't show up. I think it has something to do with the scenes; I use a new scene in Main.as and in photoWall.as. No errors or warnings are reported. This is the piece in photoWall.as where I want to addChild my object (photoList): private function portret():void { //defining my material for the clickable portrait var material : BitmapFileMaterial = new BitmapFileMaterial('images/room.jpg'); var material_list : MaterialsList = new MaterialsList( { front: material, back: material } ); // I don't know if this is necessary? that's my problem scene = new Scene3D(); material.interactive = true; // make the clickable object as a cube var photoList : DisplayObject3D = new Cube(material_list, 1400, 1400, 1750, 1, 4, 4, 4); // positioning photoList.x = -1400; photoList.y = -280; photoList.z = 5000; //mouse event photoList.addEventListener( InteractiveScene3DEvent.OBJECT_CLICK, onPress); // this is my problem! I cannot see 'photoList' within my scene!!! scene.addChild(photoList); // trace works, so the function must be loaded. trace('function loaded'); } Hope you guys can help me out here. Would really be great! Thanks, Sandor

    Read the article

  • Programming and Ubiquitous Language (DDD) in a non-English domain

    - by Sandor Drieënhuizen
    I know there are some questions already here that are closely related to this subject but none of them take Ubiquitous Language as the starting point so I think that justifies this question. For those who don't know: Ubiquitous Language is the concept of defining a (both spoken and written) language that is equally used across developers and domain experts to avoid inconsistencies and miscommunication due to translation problems and misunderstanding. You will see the same terminology show up in code, conversations between any team member, functional specs and whatnot. So, what I was wondering about is how to deal with Ubiquitous Language in non-English domains. Personally, I strongly favor writing programming code in English completely, including comments but of course excluding constants and resources. However, in a non-English domain, I'm forced to make a decision either to: Write code reflecting the Ubiquitous Language in the natural language of the domain. Translate the Ubiquitous Language to English and stop communicating in the natural language of the domain. Define a table that defines how the Ubiquitous Language translates to English. Here are some of my thoughts based on these options: 1) I have a strong aversion to mixed-language code, that is, code that uses type/member/variable names etc. that are non-English. Most programming languages 'breathe' English to a large extent and most of the technical literature, design pattern names etc. are in English as well. Therefore, in most cases there's just no way of writing code entirely in a non-English language so you end up with mixed languages anyway. 2) This will force the domain experts to start thinking and talking in the English equivalent of the UL, something that will probably not come naturally to them and therefore hinders communication significantly. 3) In this case, the developers communicate with the domain experts in their native language while the developers communicate with each other in English and most importantly, they write code using the English translation of the UL. I'm sure I don't want to go for the first option and I think option 3 is much better than option 2. What do you think? Am I missing other options? UPDATE Today, about a year later, having dealt with this issue on a daily basis, I have to say that option 3 has worked out pretty well for me. It wasn't as tedious as I initially feared and translating in real time while talking to the client wasn't a problem either. I also found the following advantages to be true, based on my experience. Translating the UL makes you pay more attention to defining the UL and even the domain itself, especially when you don't know how to translate a term and you have to start looking through dictionaries etc. This has even caused me to reconsider domain modeling decisions a few times. It helps you make your knowledge of the English language more profound. Obviously, your code is much more pleasant to look at instead of being a mind-boggling obscenity.
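
    A minimal sketch of what option 3 could look like in code, assuming a hypothetical Dutch insurance domain; the terms below are invented for illustration and are not from the original question:

      // Hypothetical Dutch-to-English glossary for the Ubiquitous Language, kept in
      // source control next to the domain model so the translation step stays explicit.
      using System.Collections.Generic;

      public static class UbiquitousLanguageGlossary
      {
          // Key: the term as the domain experts speak it (Dutch);
          // value: the identifier used in code (English).
          public static readonly IDictionary<string, string> DutchToEnglish =
              new Dictionary<string, string>
              {
                  { "polis", "Policy" },
                  { "verzekeringnemer", "PolicyHolder" },
                  { "premie", "Premium" },
                  { "schadeclaim", "Claim" }
              };
      }

    Whether such a glossary lives in code or in the project wiki matters less than having a single agreed place where each term's translation is decided once.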

    Read the article

  • Programming and Ubiquitous Language (DDD) in a non-English domain

    - by Sandor Drieënhuizen
    I know there are some questions already here that are closely related to this subject but none of them take Ubiquitous Language as the starting point so I think that justifies this question. For those who don't know: Ubiquitous Language is the concept of defining a (both spoken and written) language that is equally used across developers and domain experts to avoid inconsistencies and miscommunication due to translation problems and misunderstanding. You will see the same terminology show up in code, conversations between any team member, functional specs and whatnot. So, what I was wondering about is how to deal with Ubiquitous Language in non-English domains. Personally, I strongly favor writing programming code in English completely, including comments but of course excluding constants and resources. However, in a non-English domain, I'm forced to make a decision either to: Write code reflecting the Ubiquitous Language in the natural language of the domain. Translate the Ubiquitous Language to English and stop communicating in the natural language of the domain. Define a table that defines how the Ubiquitous Language translates to English. Here are some of my thoughts based on these options: 1) I have a strong aversion to mixed-language code, that is, code that uses type/member/variable names etc. that are non-English. Most programming languages 'breathe' English to a large extent and most of the technical literature, design pattern names etc. are in English as well. Therefore, in most cases there's just no way of writing code entirely in a non-English language so you end up with mixed languages. 2) This will force the domain experts to start thinking and talking in the English equivalent of the UL, something that will probably not come naturally to them and therefore hinders communication significantly. 3) In this case, the developers communicate with the domain experts in their native language while the developers communicate with each other in English and most importantly, they write code using the English translation of the UL. I'm sure I don't want to go for the first option and I think option 3 is much better than option 2. What do you think? Am I missing other options?

    Read the article

  • ash scripting: space-containing variable refuses to be grepped

    - by Luci Sandor
    I am trying to run the script listed at http://talk.maemo.org/showthread.php?t=70866&page=2 on its intended hardware, a Nokia Linux phone running BusyBox ash. The script receives the name of a WiFi network as a parameter, and tries to connect the phone to it. I suspect the script works, but my SSID, BU (802.1x), has a space and parentheses in it. So when I type at the command prompt autoconnect.sh BU\ \(802.1x\) I get various errors. First, LIST=`iwconfig wlan0 | awk -F":" '/ESSID/{print $2}'` if [ $LIST = "\"$1\"" ]; then ...fails, even though I am connected to the network. The error is not avoided by using single or double quotes instead of escaping characters at the command prompt. Second, if [ -z `iwlist wlan0 scan | grep -m 1 -o \"$1\"` ]; then echo SSID \"$1\" not found; shows that grep does not find the string, although the same grep, typed directly into the command prompt, does find 'BU (802.1x)'. How do I quote $1 in the two circumstances above so that it will work with my network SSID, containing spaces and parentheses? Thank you.

    Read the article

  • How to configure multiple WCF binding configurations for the same scheme

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet. WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser The services are username-authenticated; the account that a client (of my web application) uses to log on ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best, or perhaps just certificate-based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this in my client configuration: <bindings> <netTcpBinding> <binding> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint contract="Server.IService1" binding="netTcpBinding" address="net.tcp://localhost:8081/Service1.svc"/> <endpoint contract="Server.IService2" binding="netTcpBinding" address="net.tcp://localhost:8081/Service2.svc"/> </client> The server configuration is this: <bindings> <netTcpBinding> <binding portSharingEnabled="true"> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> <services> <service name="Service1"> <endpoint contract="Server.IService1, Library" binding="netTcpBinding" address=""/> </service> <service name="Service2"> <endpoint contract="Server.IService2, Library" binding="netTcpBinding" address=""/> </service> </services> <serviceHostingEnvironment> <serviceActivations> <add relativeAddress="Service1.svc" service="Server.Service1"/> <add relativeAddress="Service2.svc" service="Server.Service2"/> </serviceActivations> </serviceHostingEnvironment> The thing is that both bindings don't seem to want to live together in my host. When I remove either of them, all's fine, but together they produce the following exception on the client: The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server). In the server trace log, I find the following exception: Protocol Type application/negotiate was sent to a service that does not support that type of upgrade. Am I looking in the right direction, or is there a better way to solve this?
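
    For what it's worth, the same two-binding setup can be expressed in code, which sometimes makes the "one binding configuration per endpoint" idea easier to see than config does. This is only a self-hosted sketch with placeholder contracts, not the poster's IIS-hosted services, and the UserName credential would additionally need a service certificate configured before Open() succeeds:

      using System;
      using System.ServiceModel;

      [ServiceContract] public interface IService1 { [OperationContract] string Ping(); }
      [ServiceContract] public interface IService2 { [OperationContract] string Ping(); }
      public class Service1 : IService1 { public string Ping() { return "authenticated"; } }
      public class Service2 : IService2 { public string Ping() { return "public"; } }

      class HostSketch
      {
          static void Main()
          {
              // Binding for the username-authenticated services.
              var secure = new NetTcpBinding(SecurityMode.TransportWithMessageCredential);
              secure.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

              // Separate binding for the anonymous "public" logon service.
              var open = new NetTcpBinding(SecurityMode.Transport);
              open.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows;

              // Both endpoints listen on the same port, so port sharing must be enabled
              // (requires the Net.Tcp Port Sharing Service to be running).
              secure.PortSharingEnabled = true;
              open.PortSharingEnabled = true;

              var host1 = new ServiceHost(typeof(Service1), new Uri("net.tcp://localhost:8081/Service1.svc"));
              var host2 = new ServiceHost(typeof(Service2), new Uri("net.tcp://localhost:8081/Service2.svc"));
              host1.AddServiceEndpoint(typeof(IService1), secure, "");
              host2.AddServiceEndpoint(typeof(IService2), open, "");
              host1.Open();
              host2.Open();
              Console.WriteLine("Hosts open; press Enter to stop.");
              Console.ReadLine();
              host1.Close();
              host2.Close();
          }
      }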

    Read the article

  • Configuring multiple WCF binding configurations for the same scheme doesn't work

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet. WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser The services are username-authenticated; the account that a client (of my web application) uses to log on ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best, or perhaps just certificate-based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this in my client configuration: <bindings> <netTcpBinding> <binding> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint contract="Server.IService1" binding="netTcpBinding" address="net.tcp://localhost:8081/Service1.svc"/> <endpoint contract="Server.IService2" binding="netTcpBinding" bindingConfiguration="public" address="net.tcp://localhost:8081/Service2.svc"/> </client> The server configuration is this: <bindings> <netTcpBinding> <binding portSharingEnabled="true"> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> <services> <service name="Service1"> <endpoint contract="Server.IService1, Library" binding="netTcpBinding" address=""/> </service> <service name="Service2"> <endpoint contract="Server.IService2, Library" binding="netTcpBinding" bindingConfiguration="public" address=""/> </service> </services> <serviceHostingEnvironment> <serviceActivations> <add relativeAddress="Service1.svc" service="Server.Service1"/> <add relativeAddress="Service2.svc" service="Server.Service2"/> </serviceActivations> </serviceHostingEnvironment> The thing is that both bindings don't seem to want to live together in my host. When I remove either of them, all's fine, but together they produce the following exception on the client: The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server). In the server trace log, I find the following exception: Protocol Type application/negotiate was sent to a service that does not support that type of upgrade. Am I looking in the right direction, or is there a better way to solve this?

    Read the article

  • How to configure multiple distinct WCF binding configurations for the same scheme

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet. WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser The services are username-authenticated; the account that a client (of my web application) uses to log on ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best, or perhaps just certificate-based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this in my client configuration: <bindings> <netTcpBinding> <binding> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint contract="Server.IService1" binding="netTcpBinding" address="net.tcp://localhost:8081/Service1.svc"/> <endpoint contract="Server.IService2" binding="netTcpBinding" address="net.tcp://localhost:8081/Service2.svc"/> </client> The server configuration is this: <bindings> <netTcpBinding> <binding portSharingEnabled="true"> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> <services> <service name="Service1"> <endpoint contract="Server.IService1, Library" binding="netTcpBinding" address=""/> </service> <service name="Service2"> <endpoint contract="Server.IService2, Library" binding="netTcpBinding" address=""/> </service> </services> <serviceHostingEnvironment> <serviceActivations> <add relativeAddress="Service1.svc" service="Server.Service1"/> <add relativeAddress="Service2.svc" service="Server.Service2"/> </serviceActivations> </serviceHostingEnvironment> The thing is that both bindings don't seem to want to live together in my host. When I remove either of them, all's fine, but together they produce the following exception on the client: The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server). In the server trace log, I find the following exception: Protocol Type application/negotiate was sent to a service that does not support that type of upgrade. Am I looking in the right direction, or is there a better way to solve this?

    Read the article

  • Using multiple distinct TCP security binding configurations in a single WCF IIS-hosted WCF service a

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet. WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser The services are username-authenticated; the account that a client (of my web application) uses to log on ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best, or perhaps just certificate-based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this: <bindings> <netTcpBinding> <binding> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> The thing is that both bindings don't seem to want to live together in my host. When I remove either of them, all's fine, but together they produce the following exception on the client: The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server). In the server trace log, I find the following exception: Protocol Type application/negotiate was sent to a service that does not support that type of upgrade. Am I looking in the right direction, or is there a better way to solve this?

    Read the article

  • Configuring multiple distinct WCF binding configurations causes an exception to be thrown

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet. WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser The services are username-authenticated; the account that a client (of my web application) uses to log on ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best, or perhaps just certificate-based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this: <bindings> <netTcpBinding> <binding> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> The thing is that both bindings don't seem to want to live together in my host. When I remove either of them, all's fine, but together they produce the following exception on the client: The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server). In the server trace log, I find the following exception: Protocol Type application/negotiate was sent to a service that does not support that type of upgrade. Am I looking in the right direction, or is there a better way to solve this?

    Read the article

  • ASP.NET MVC 2 matches correct area route but generates URL to the first registered area instead.

    - by Sandor Drieënhuizen
    I'm working on a S#arpArchitecture 1.5 project, which uses ASP.NET MVC 2. I've been trying to get areas to work properly, but I ran into a problem: The ASP.NET MVC 2 routing engine matches the correct route to my area but then it generates a URL that belongs to the first registered area instead. Here's my request URL: /Framework/Authentication/LogOn?ReturnUrl=%2fDefault.aspx I'm using the Route Tester from Phil Haack and it shows: Matched Route: Framework/{controller}/{action}/{id} Generated URL: /Data/Authentication/LogOn?ReturnUrl=%2FDefault.aspx using the route "Data/{controller}/{action}/{id}" That's clearly wrong; the URL should point to the Framework area, not the Data area. This is how I register my routes, nothing special there IMO. private static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); AreaRegistration.RegisterAllAreas(); routes.MapRoute( "default", "{controller}/{action}/{id}", new { controller = "Home", action = "Index", id = UrlParameter.Optional }); } The area registration classes all look like this. Again, nothing special. public class FrameworkAreaRegistration : AreaRegistration { public override string AreaName { get { return "Framework"; } } public override void RegisterArea(AreaRegistrationContext context) { context.MapRoute( "Framework_default", "Framework/{controller}/{action}/{id}", new { controller = "Home", action = "Index", id = UrlParameter.Optional }); } }
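
    Two things that commonly help in this situation, shown as a sketch rather than a verified fix: constrain each area route to its own controller namespace, and state the area explicitly when generating links. The namespace below is an assumption, not taken from the post:

      using System.Web.Mvc;

      public class FrameworkAreaRegistration : AreaRegistration
      {
          public override string AreaName
          {
              get { return "Framework"; }
          }

          public override void RegisterArea(AreaRegistrationContext context)
          {
              // Limiting the route to the area's own controller namespace keeps the
              // routing engine from considering it for other areas' controllers.
              context.MapRoute(
                  "Framework_default",
                  "Framework/{controller}/{action}/{id}",
                  new { controller = "Home", action = "Index", id = UrlParameter.Optional },
                  new[] { "MyApp.Web.Areas.Framework.Controllers" });
          }
      }

    When generating URLs from shared views or filters, passing the area route value explicitly also removes the ambiguity, for example Url.Action("LogOn", "Authentication", new { area = "Framework" }).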

    Read the article

  • Unit testing ASP.NET MVC 2 routes with areas bails out on AreaRegistration.RegisterAllAreas()

    - by Sandor Drieënhuizen
    I'm unit testing my routes in ASP.NET MVC 2. I'm using MSTest and I'm using areas as well. When I call AreaRegistration.RegisterAllAreas() however, it throws this exception: System.InvalidOperationException: System.InvalidOperationException: This method cannot be called during the application's pre-start initialization stage.. OK, so I reckon I can't call it from my class initializer. But when can I call it? I don't have an Application_Start in my test obviously.
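
    A workaround that is often suggested is to skip RegisterAllAreas() in tests and drive the individual AreaRegistration classes directly against the test's own RouteCollection; a minimal MSTest sketch, reusing the FrameworkAreaRegistration class from the related routing question:

      using System.Web.Mvc;
      using System.Web.Routing;
      using Microsoft.VisualStudio.TestTools.UnitTesting;

      [TestClass]
      public class RouteRegistrationTests
      {
          private static RouteCollection routes;

          [ClassInitialize]
          public static void Initialize(TestContext context)
          {
              routes = new RouteCollection();

              // Instead of AreaRegistration.RegisterAllAreas(), instantiate each area
              // registration by hand and feed it the test's RouteCollection.
              var area = new FrameworkAreaRegistration();
              area.RegisterArea(new AreaRegistrationContext(area.AreaName, routes));

              // Register the non-area routes the same way the application does.
              routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
              routes.MapRoute(
                  "default",
                  "{controller}/{action}/{id}",
                  new { controller = "Home", action = "Index", id = UrlParameter.Optional });
          }

          [TestMethod]
          public void Framework_area_route_is_registered()
          {
              Assert.IsNotNull(routes["Framework_default"]);
          }
      }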

    Read the article

  • Sharing constants across a WCF service

    - by Sandor Davidhazi
    I have certain strings which contain special characters so they can not be shared as enum members across a WCF service. (Actually, they are keys for configuration values.) I want to be able to pass in the keys at client side and get back the config values. If there is a change, I only want to change the config keys at one place. Constants would be ideal, because they can be changed as strong references across the entire solution, and the underlying value could be updated with a service reference update. Currently I can think of two possible solutions: create a shared assembly and place the constants there, or share the constants across the service. The problem is, I can't get the DataContractSerializer to serialize the constants. Is that possible at all? Is the shared assembly the only option I have?
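
    A sketch of the shared-assembly option: constants cannot travel through the DataContractSerializer (it serializes instance data, not compile-time symbols), but a small assembly referenced by both sides keeps the keys strongly typed everywhere. The key names here are invented:

      // Shared.dll - referenced by both the WCF service and its clients.
      namespace MyCompany.Configuration
      {
          public static class ConfigKeys
          {
              // The values may contain special characters freely; only the
              // identifiers need to be valid C# names.
              public const string SmtpServer = "smtp/server (primary)";
              public const string RetryCount = "retry.count:max";
          }
      }

    Client code can then ask the service for a value by key, e.g. something like client.GetConfigValue(ConfigKeys.SmtpServer), where GetConfigValue stands in for whatever operation the service actually exposes; renaming a key member updates every call site at compile time, and changing a key's string value needs only the shared assembly to be rebuilt.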

    Read the article

  • ASP.NET MVC 2 router matches correct area route but generates URL to the first registered area instead

    - by Sandor Drieënhuizen
    I'm working on a S#arpArchitecture 1.5 project, which uses ASP.NET MVC 2. I've been trying to get areas to work properly, but I ran into a problem: The ASP.NET MVC 2 routing engine matches the correct route to my area but then it generates a URL that belongs to the first registered area instead. Here's my request URL: /Framework/Authentication/LogOn?ReturnUrl=%2fDefault.aspx I'm using the Route Tester from Phil Haack and it shows: Matched Route: Framework/{controller}/{action}/{id} Generated URL: /Data/Authentication/LogOn?ReturnUrl=%2FDefault.aspx using the route "Data/{controller}/{action}/{id}" That's clearly wrong; the URL should point to the Framework area, not the Data area. This is how I register my routes, nothing special there IMO. private static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); AreaRegistration.RegisterAllAreas(); routes.MapRoute( "default", "{controller}/{action}/{id}", new { controller = "Home", action = "Index", id = UrlParameter.Optional }); } The area registration classes all look like this. Again, nothing special. public class FrameworkAreaRegistration : AreaRegistration { public override string AreaName { get { return "Framework"; } } public override void RegisterArea(AreaRegistrationContext context) { context.MapRoute( "Framework_default", "Framework/{controller}/{action}/{id}", new { controller = "Home", action = "Index", id = UrlParameter.Optional }); } }

    Read the article

  • CSharpCodeProvider doesn't return compiler warnings when there are no errors

    - by Sandor Drieënhuizen
    I'm using the CSharpCodeProvider class to compile a C# script which I use as a DSL in my application. When there are warnings but no errors, the Errors property of the resulting CompilerResults instance contains no items. But when I introduce an error, the warnings suddenly get listed in the Errors property as well. string script = @" using System; using System; // generate a warning namespace MyNamespace { public class MyClass { public void MyMethod() { // uncomment the next statement to generate an error //int x = 0; } } } "; CSharpCodeProvider provider = new CSharpCodeProvider( new Dictionary<string, string>() { { "CompilerVersion", "v4.0" } }); CompilerParameters compilerParameters = new CompilerParameters(); compilerParameters.GenerateExecutable = false; compilerParameters.GenerateInMemory = true; CompilerResults results = provider.CompileAssemblyFromSource( compilerParameters, script); foreach (CompilerError error in results.Errors) { Console.Write(error.IsWarning ? "Warning: " : "Error: "); Console.WriteLine(error.ErrorText); } So how do I get hold of the warnings when there are no errors? By the way, I don't want to set TreatWarningsAsErrors to true.
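
    Continuing from the snippet above, one thing worth checking as a diagnostic (not a confirmed fix) is the raw compiler output that CodeDOM keeps alongside the Errors collection, plus the warning level on the parameters; whether warnings show up there without an accompanying error is exactly the behaviour in question:

      // Sketch: raise the warning level and dump the unparsed compiler messages in
      // addition to the Errors collection. Assumes the provider/script/compilerParameters
      // variables from the code above.
      compilerParameters.WarningLevel = 4;               // report all warning levels
      compilerParameters.TreatWarningsAsErrors = false;  // keep the original behaviour

      CompilerResults diagnosticResults = provider.CompileAssemblyFromSource(compilerParameters, script);

      foreach (string line in diagnosticResults.Output)  // raw compiler output, if any
      {
          Console.WriteLine(line);
      }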

    Read the article

  • HiLo: how to control Low values

    - by Sandor Drieënhuizen
    I'm using the HiLo generator in my S#arpArchitecture/NHibernate project and I'm performing a large import batch. I've read somewhere about the possibility of predicting the Low values of any new records because they are generated on the client. I figure this means I can control the Low values myself or at least fetch the next Low value from somewhere. The reason I want to use this is that I want to set relations to other entities I'm about to insert. They do not exist yet but will be inserted before the batch transaction completes. However, I cannot find information about how to set the Low values or how to find out which Low value is up next. Any ideas?
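
    For reference, the values a hi/lo generator hands out follow simple arithmetic, so a batch's identifiers can be computed up front once the current hi value and the max_lo setting are known. This is only the general formula, not NHibernate API, and the exact variant depends on the generator and mapping in use:

      using System.Collections.Generic;

      static class HiLoSketch
      {
          // Classic hi/lo: id = hi * (maxLo + 1) + lo, with lo running from 0 to maxLo.
          // Verify against your NHibernate version/mapping before relying on predicted values.
          public static IEnumerable<long> PredictIds(long hi, int maxLo)
          {
              for (int lo = 0; lo <= maxLo; lo++)
              {
                  yield return hi * (maxLo + 1) + lo;
              }
          }
      }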

    Read the article

  • Override the neutral language of a specific resource file within an assembly

    - by Sandor Drieënhuizen
    I have an assembly that contains several resource files. Most of them have the neutral language 'nl' (Dutch, specified on the assembly as the neutral language), so I don't specify the 'nl' in their filenames. However, I'm putting strings in the English language in some other resource files (they are internal error messages) and I will never provide Dutch translations of them. If I name those resource files something like 'Errors.en.resx', no designer class is generated (breaks the build) because there is no 'Errors.resx'. This is annoying because now I have to put 'en' strings into a 'nl'-implied resource file and I really don't want to translate those strings to 'nl' or provide empty strings just to satisfy the compiler. Is there a way to override the neutral language on a specific resource file or perhaps somehow have the 'Errors.en.resx' build a designer class?
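
    For reference, the two moving parts involved here are the assembly-level neutral-language declaration and an explicit-culture lookup; a sketch with an assumed base name ('MyAssembly.Errors'), not a confirmed way around the missing designer class:

      using System.Globalization;
      using System.Reflection;
      using System.Resources;

      // Typically in AssemblyInfo.cs: resources without a culture suffix are treated as Dutch.
      [assembly: NeutralResourcesLanguage("nl")]

      public static class ErrorMessages
      {
          private static readonly ResourceManager Resources =
              new ResourceManager("MyAssembly.Errors", Assembly.GetExecutingAssembly());

          // Read the English strings explicitly, regardless of the current UI culture.
          public static string Get(string key)
          {
              return Resources.GetString(key, CultureInfo.GetCultureInfo("en"));
          }
      }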

    Read the article

  • How to get access to a window that is loaded into a panel

    - by Sandor Drieënhuizen
    I'm loading an external script (that creates a new window component) into a panel, which works fine. Now, I want to access the created window from a callback function to register a closed event handler. I've tried the following: panel.load({ scripts: true, url: '/createWindow', callback: function(el, success, response, options) { panel.findByType("window")[0].on("close", function() { alert("Closed"); }); } }); However, the panel seems to be empty all the time; the findByType method keeps returning an empty collection. I've tried adding event handlers for events like added to the panel but none of them got fired. So the question is: how do I access the window in the panel to register my close event handler on it?

    Read the article

  • WPF dependency property setter not firing when PropertyChanged is fired, but source value is not changed

    - by Sandor Davidhazi
    I have an int dependency property on my custom Textbox, which holds a backing value. It is bound to an int? property on the DataContext. If I raise the PropertyChanged event in my DataContext, and the source property's value is not changed (stays null), then the dependency property's setter is not fired. This is a problem, because I want to update the custom Textbox (clear the text) on PropertyChanged, even if the source property stays the same. However, I didn't find any binding option that does what I want (there is an UpdateSourceTrigger property, but I want to update the target here, not the source). Maybe there is a better way to inform the Textbox that it needs to clear its text, I'm open to any suggestions.
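
    For context, a dependency property's change callback (and the binding target update) only runs when the effective value actually changes, which is why re-raising PropertyChanged while the source stays null is invisible to the control. A minimal sketch of such a registration; the control and property names are assumed, not taken from the question:

      using System.Windows;
      using System.Windows.Controls;

      public class NumericTextBox : TextBox
      {
          public static readonly DependencyProperty BackingValueProperty =
              DependencyProperty.Register(
                  "BackingValue",
                  typeof(int?),
                  typeof(NumericTextBox),
                  new FrameworkPropertyMetadata(null, OnBackingValueChanged));

          public int? BackingValue
          {
              get { return (int?)GetValue(BackingValueProperty); }
              set { SetValue(BackingValueProperty, value); }
          }

          private static void OnBackingValueChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
          {
              // Runs only on a real value change (e.g. null -> 3); raising PropertyChanged
              // for a source value that is still null never reaches this point.
              ((NumericTextBox)d).Text = e.NewValue == null ? string.Empty : e.NewValue.ToString();
          }
      }

    One pragmatic escape is to model the "please clear yourself" signal as something that does change, for instance a separate counter or flag property, instead of reusing the unchanged value property.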

    Read the article

  • How to perform two-way data binding of controls in a user control inside a FormView

    - by Sandor Drieënhuizen
    I'm trying to perform two-way data binding on the controls in my user control, which is hosted inside a FormView template. FormView: <asp:ObjectDataSource runat="server" ID="ObjectDataSource" TypeName="WebApplication1.Data" SelectMethod="GetItem" UpdateMethod="UpdateItem"> </asp:ObjectDataSource> <asp:FormView runat="server" ID="FormView" DataSourceID="ObjectDataSource"> <ItemTemplate> <uc:WebUserControl1 runat="server"></uc:WebUserControl1> </ItemTemplate> <EditItemTemplate> <uc:WebUserControl1 runat="server"></uc:WebUserControl1> </EditItemTemplate> </asp:FormView> User control: <%@ Control Language="C#" ... %> <asp:TextBox runat="server" ID="TitleTextBox" Text='<%# Bind("Title") %>'> </asp:TextBox> The binding works fine when the FormView is in View mode, but when I switch to Edit mode, upon calling UpdateItem on the FormView, the bindings are lost. I know this because the FormView tries to call an update method on the ObjectDataSource that does not have an argument called 'Title'. I tried to solve this by implementing IBindableTemplate to load the controls that are inside my user control, directly into the templates (just as if I had entered them declaratively, like in the code above). However, when calling UpdateItem in edit mode, the container that gets passed into the ExtractValues method of the template does not contain the TextBox anymore. It did in view mode! I have found some questions on SO that relate to this problem but they are rather dated and they don't provide any answers that helped me solve this problem. How do you think I could solve this problem? It seems to be such a simple requirement but apparently it's more like opening a can of worms...

    Read the article

  • Get the compiler-generated delegate for an event

    - by Sandor Davidhazi
    I need to know what handlers are subscribed to the CollectionChanged event of the ObservableCollection class. The only solution I found would be to use Delegate.GetInvocationList() on the delegate of the event. The problem is, I can't get Reflection to find the compiler-generated delegate. AFAIK the delegate has the same name as the event. I used the following piece of code: PropertyInfo notifyCollectionChangedDelegate = collection.GetType().GetProperty("CollectionChanged", BindingFlags.Instance | BindingFlags.Static | BindingFlags.NonPublic | BindingFlags.FlattenHierarchy);
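
    Field-like events are backed by a private instance field (not a property) with the same name as the event, so GetField is the reflection call that normally finds it. A sketch; note that the backing field name is an implementation detail of the BCL and could differ between versions:

      using System;
      using System.Collections.ObjectModel;
      using System.Collections.Specialized;
      using System.Reflection;

      class Program
      {
          static void Main()
          {
              var collection = new ObservableCollection<int>();
              collection.CollectionChanged += (s, e) => Console.WriteLine("changed");

              // The compiler-generated backing store for a field-like event is a
              // private instance field named after the event.
              FieldInfo eventField = typeof(ObservableCollection<int>).GetField(
                  "CollectionChanged",
                  BindingFlags.Instance | BindingFlags.NonPublic);

              var handler = (NotifyCollectionChangedEventHandler)eventField.GetValue(collection);
              if (handler != null)
              {
                  foreach (Delegate d in handler.GetInvocationList())
                  {
                      Console.WriteLine(d.Method.Name);
                  }
              }
          }
      }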

    Read the article

  • How to get hold of the current NHibernate.Cfg.Configuration instance.

    - by Sandor Drieënhuizen
    My C# project has repositories that are instantiated using dependency injection. One of the repository methods needs access to the NHibernate.Cfg.Configuration instance (to generate the database schema) that was returned when initializing NHibernate. I cannot pass the configuration to the repository however, because that would break the persistence ignorance principle -- I really don't want to expose any implementation details through the repository interface. So what I'm looking for is a way of getting hold of the current NHibernate.Cfg.Configuration instance from within my repository. I have no trouble getting hold of the current session, it's just the configuration that I cannot get hold of.
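
    One way to keep the repository persistence-ignorant is to hide the schema work behind a small interface that the infrastructure layer implements against the Configuration captured at startup; then only the abstraction is injected. The interface and class names below are made up for the sketch:

      using NHibernate.Cfg;
      using NHibernate.Tool.hbm2ddl;

      // Abstraction the application layer depends on; it says nothing about NHibernate.
      public interface ISchemaGenerator
      {
          void CreateSchema();
      }

      // Infrastructure implementation, registered in the container in the same
      // bootstrap code that builds the NHibernate Configuration.
      public class NHibernateSchemaGenerator : ISchemaGenerator
      {
          private readonly Configuration configuration;

          public NHibernateSchemaGenerator(Configuration configuration)
          {
              this.configuration = configuration;
          }

          public void CreateSchema()
          {
              // Generates and executes the DDL for all mapped entities.
              new SchemaExport(configuration).Create(false, true);
          }
      }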

    Read the article

  • How do I imply code contracts of chained methods to avoid superfluous checks while chaining?

    - by Sandor Drieënhuizen
    I'm using Code Contracts in C# 4.0. I'm applying the usual static method chaining to simulate optional parameters (I know C# 4.0 supports optional parameters but I really don't want to use them). The thing is that my contract requirements are executed twice (or possibly the number of chained overloads I'd implement) if I call the Init(string , string[]) method -- an obvious effect from the sample source code below. This can be expensive, especially due to relatively expensive requirements like the File.Exists I use. public static void Init(string configurationPath, string[] mappingAssemblies) { // The static contract checker 'makes' me put these here as well as // in the overload below. Contract.Requires<ArgumentNullException>(configurationPath != null, "configurationPath"); Contract.Requires<ArgumentException>(configurationPath.Length > 0, "configurationPath is an empty string."); Contract.Requires<FileNotFoundException>(File.Exists(configurationPath), configurationPath); Contract.Requires<ArgumentNullException>(mappingAssemblies != null, "mappingAssemblies"); Contract.ForAll<string>(mappingAssemblies, (n) => File.Exists(n)); Init(configurationPath, mappingAssemblies, null); } public static void Init(string configurationPath, string[] mappingAssemblies, string optionalArgument) { // This is the main implementation of Init and all calls to chained // overloads end up here. Contract.Requires<ArgumentNullException>(configurationPath != null, "configurationPath"); Contract.Requires<ArgumentException>(configurationPath.Length > 0, "configurationPath is an empty string."); Contract.Requires<FileNotFoundException>(File.Exists(configurationPath), configurationPath); Contract.Requires<ArgumentNullException>(mappingAssemblies != null, "mappingAssemblies"); Contract.ForAll<string>(mappingAssemblies, (n) => File.Exists(n)); //... } If, however, I remove the requirements from that method, the static checker complains that the requirements of the Init(string, string[], string) overload are not met. I reckon that the static checker doesn't understand that the requirements of the Init(string, string[], string) overload implicitly apply to the Init(string, string[]) method as well; something that would be perfectly deducible from the code IMO. This is the situation I would like to achieve: public static void Init(string configurationPath, string[] mappingAssemblies) { // I don't want to repeat the requirements here because they will always // be checked in the overload called here. Init(configurationPath, mappingAssemblies, null); } public static void Init(string configurationPath, string[] mappingAssemblies, string optionalArgument) { // This is the main implementation of Init and all calls to chained // overloads end up here. Contract.Requires<ArgumentNullException>(configurationPath != null, "configurationPath"); Contract.Requires<ArgumentException>(configurationPath.Length > 0, "configurationPath is an empty string."); Contract.Requires<FileNotFoundException>(File.Exists(configurationPath), configurationPath); Contract.Requires<ArgumentNullException>(mappingAssemblies != null, "mappingAssemblies"); Contract.ForAll<string>(mappingAssemblies, (n) => File.Exists(n)); //... } So, my question is this: is there a way to have the requirements of Init(string, string[], string) implicitly apply to Init(string, string[]) automatically?

    Read the article

  • Two-way data binding of controls in a user control inside a FormView

    - by Sandor Drieënhuizen
    I'm trying to perform two-way data binding on the controls in my user control, which is hosted inside a FormView template. FormView: <asp:ObjectDataSource runat="server" ID="ObjectDataSource" TypeName="WebApplication1.Data" SelectMethod="GetItem" UpdateMethod="UpdateItem"> </asp:ObjectDataSource> <asp:FormView runat="server" ID="FormView"> <ItemTemplate> <uc:WebUserControl1 runat="server"></uc:WebUserControl1> </ItemTemplate> <EditItemTemplate> <uc:WebUserControl1 runat="server"></uc:WebUserControl1> </EditItemTemplate> </asp:FormView> User control: <%@ Control Language="C#" ... %> <asp:TextBox runat="server" ID="TitleTextBox" Text='<%# Bind("Title") %>'> </asp:TextBox> The binding works fine when the FormView is in View mode, but when I switch to Edit mode, upon calling UpdateItem on the FormView, the bindings are lost. I know this because the FormView tries to call an update method on the ObjectDataSource that does not have an argument called 'Title'. I tried to solve this by implementing IBindableTemplate to load the controls that are inside my user control, directly into the templates (just as if I had entered them declaratively, like in the code above). However, when calling UpdateItem in edit mode, the container that gets passed into the ExtractValues method of the template does not contain the TextBox anymore. It did in view mode! I have found some questions on SO that relate to this problem but they are rather dated and don't provide straightforward answers. How do you think I could solve this problem? It seems to be such a simple requirement but apparently it's more like opening a can of worms...

    Read the article

  • Monotouch Binding to Linea Pro SDK

    - by jeffrapp
    I'm trying to create a binding to the Linea Pro (it's the barcode scanner they use in the Apple Stores, Lowes) SDK. I'm using David Sandor's bindings as a reference, but the SDK has been updated a few times since January of 2011. I have most everything working, except for the playSound call, which is used to, well, play a sound on the Linea Pro device. The .h file from the SDK has the call as follows: -(BOOL)playSound:(int)volume beepData:(int *)data length:(int)length error:(NSError **)error; I've tried using int[], NSArray, and an IntPtr to the int[], but nothing seems to work. The last unsuccessful iteration of my binding looks like: [Export ("playSound:beepData:length:")] void PlaySound (int volume, NSArray data, int length); Now, this doesn't work at all. Also note that I have no idea what to do with the error:(NSError **)error part, either. I am lacking some serious familiarity with C, so any help would be extremely appreciated.
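
    Going by the Objective-C signature alone, the selector has four parts (including error:), and int* / NSError** typically surface as IntPtr / out NSError in a MonoTouch binding definition. A sketch, unverified against the Linea Pro SDK; the interface name and the length semantics (ints vs. bytes) are assumptions:

      using System;
      using MonoTouch.Foundation;

      // ApiDefinition-style sketch for:
      //   -(BOOL)playSound:(int)volume beepData:(int *)data length:(int)length error:(NSError **)error;
      [BaseType(typeof(NSObject))]
      interface LineaDevice
      {
          // Note the trailing "error:" part of the selector.
          [Export("playSound:beepData:length:error:")]
          bool PlaySound(int volume, IntPtr beepData, int length, out NSError error);
      }

    Calling it would then mean pinning the managed int[] and passing its address, something along these lines (device being an instance of the bound class, and the beep values placeholders):

      int[] beep = { 2730, 150, 0, 30, 2730, 150 };
      GCHandle pin = GCHandle.Alloc(beep, GCHandleType.Pinned);
      try
      {
          NSError error;
          bool ok = device.PlaySound(100, pin.AddrOfPinnedObject(), beep.Length, out error);
      }
      finally
      {
          pin.Free();
      }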

    Read the article

  • How can I set Vim to obey accents of my spoken language?

    - by naxa
    When pressing w or e in sentences with accents (written in my native language), such as the first one (marked **) here: **Éj-mélybol fölzengo** - csing-ling-ling - száncsengo. Száncsengo - csing-ling-ling - tél csendjén halkan ring. [1] the characters o, ö, among others [2], make my gVim think they are word-ends so it stops on them (in Normal mode). gVim stops on the positions marked with _ where it shouldn't: Éj-mélyb_ol f_ölzeng_o. I would like to set gVim so it properly handles words even when they contain accents and other local characters. But where do I set this? I use it on Win32, vim v 7.3.46. [1] - excerpt of a poem by Weöres Sándor [2] - "others", not mentioned here :) like í, u are also a problem. On the other hand, gVim seems to already work with é and á. gVim version info: VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 27 2010 17:59:02) Included patches: 1-46 Compiled by Bram@KIBAALE Big version with GUI. Features included (+) or not (-): +arabic +autocmd +balloon_eval +browse ++builtin_terms +byte_offset +cindent +clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con_gui +diff +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +gettext/dyn -hangul_input +iconv/dyn +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap -lua +menu +mksession +modify_fname +mouse +mouseshape +multi_byte_ime/dyn +multi_lang -mzscheme +netbeans_intg +ole -osfiletype +path_extra +perl/dyn +persistent_undo -postscript +printer -profile +python/dyn +python3/dyn +quickfix +reltime +rightleft +ruby/dyn +scrollbind +signs +smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white +tcl/dyn -tgetent -termresponse +textobjects +title +toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -xfontset -xim -xterm_save +xpm_w32

    Read the article

1 2  | Next Page >