Search Results

Search found 23207 results on 929 pages for 'node form'.

  • How to stop Firefox on an SSD from freezing when using the search box or submitting a form?

    - by sblair
    Firefox usually freezes for about a second whenever I search for something from the toolbar search box, when submitting a form, or when clearing the search box history. I suspect it has something to do with the auto-complete feature. Using Windows 7's Resource Monitor, the problem seems to be from the file: C:\Users\<username>\AppData\Roaming\Mozilla\Firefox\Profiles\<profile>\formhistory.sqlite-journal I believe this is a temporary file which caches database writes. The following screenshot shows the very high response times from six different searches, and that the queue length on drive C shoots off the scale: My Firefox profile is on an Intel X25-M G2 SSD. The problem doesn't seem to occur if I create a new profile on a hard disk drive. However, I'd like to know why the problem exists on the SSD in the first place (because it's an annoying problem which contradicts the reason I bought an SSD, and it might happen with other applications too), and how to prevent it. It still occurs if Firefox is started in safe mode, and with the recent beta versions. Updates: VACUUMing the Firefox profile databases does not help with this problem. The SSD Optimizer in the Intel SSD Toolbox does not help either.

    Read the article

  • Why are ISP's installing routers on my site when the feed is a form of ethernet already?

    - by Cosmin Prund
    I'm connected to 3 ISPs right now. Two of them already have routers at my site, and the third one announced that "they need to install some equipment" when I requested a BGP session. I can only assume they need to install a router, since that connection is already working fine using the usual /30 net block, and the last-mile solution is not going to change since they only installed it last week and BGP was in the contract from the beginning. I simply don't understand this: the "feed" is already a form of Ethernet. Even though they're using different technologies for the last mile, they all enter the ISP router through an RJ45 WAN port. I assume the ISP router does something really important that can't be done by the big router on the other end of the connection. It must also be something that can hurt them if misconfigured, since they don't trust us (the client) to do that configuration on our own router. And I'm not talking cheap throw-away routers here: one of them is a Cisco 2800.
    Edit to add network details: I'm connected to 3 ISPs, two over radio links and one over fiber optic. One of the radio links is going to be dropped, and the other will be turned into fiber sometime next year. The fiber is 20 Mbit, radio 1 is 40 Mbit and radio 2 is 2 Mbit. I've got a /24 of provider-independent address space. I'm not doing anything out of the ordinary with my network; I'm overly connected because my network needs to be "up" all the time.

    Read the article

  • AppEngine BlobStore upload failing when request is programmatic

    - by Joe Ludwig
    I have an AppEngine application that uses the blobstore to store user-provided image data. When I upload images to that application from a form in Chrome it works fine. When I try to upload an image from an Android application it fails. Both methods work fine if I am running against the development server, but the Android upload doesn't work against the live service.
    This is the request from Chrome:
      POST /_ah/upload/?userToken=11001/AMmfu6ZCyMQQ9YdiXal3SmSXIRTQIuSRXkNc-i3JmU0fqx_kJbUJ2OMLcS2lXhVJSK4qs7regViTKzOPz5ejoZYi0nAD5o8vNltiOViQw6DZO7_byZz3Ut0/ALBNUaYAAAAAS_lusgPMAGmpPrg0BuNsJyymX-57ob4i/ HTTP/1.1
      Host: photohuntservice.appspot.com
      Connection: keep-alive
      User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1064 Safari/532.5
      Referer: http://photohuntservice.appspot.com/debug_newpuzzle?userToken=11001
      Content-Length: 60360
      Cache-Control: max-age=0
      Origin: http://photohuntservice.appspot.com
      Content-Type: multipart/form-data; boundary=----WebKitFormBoundarybl05YLmLbFRf2MzN
      Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
      Accept-Encoding: gzip,deflate,sdch
      Accept-Language: en-US,en;q=0.8
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

      ------WebKitFormBoundarybl05YLmLbFRf2MzN
      Content-Disposition: form-data; name="userToken"
      11001
      ------WebKitFormBoundarybl05YLmLbFRf2MzN
      Content-Disposition: form-data; name="img"; filename="Photo_020908_001.jpg"
      Content-Type: image/jpeg
      <image data>
      ------WebKitFormBoundarybl05YLmLbFRf2MzN
      Content-Disposition: form-data; name="longitude"
      -122.084095
      ------WebKitFormBoundarybl05YLmLbFRf2MzN
      Content-Disposition: form-data; name="latitude"
      37.422006
      ------WebKitFormBoundarybl05YLmLbFRf2MzN--
    This is the request from my client (which is written in Java on Android, but I don't think that's relevant):
      POST /_ah/upload/?userToken=11001/AMmfu6Zf9an6AU4lT9UuhIpxOZyOYb1LMwimFpeSh8zr6J1sX9F2ddJW3Qlsw0kwV3oALv-TNPWRQ6g4_Dgwk0UTwF47bbc78Yl44kDeV69MydTuR3N46S4/ALBNUaYAAAAAS_mMr3CYqTg3aVBDjhRxP0DyyRdvotyG/ HTTP/1.1
      Content-Type: multipart/form-data;boundary=----WebKitFormBoundaryhdyNAhmOouRDGErG
      Cache-Control: max-age=0
      Accept: */*
      Origin: http://photohuntservice.appspot.com
      Connection: keep-alive
      Referer: http://photohuntservice.appspot.com/getuploadurl?userToken=11001
      Content-Length: 2638
      Host: photohuntservice.appspot.com
      User-Agent: Apache-HttpClient/UNAVAILABLE (java 1.4)
      Expect: 100-Continue

      ------WebKitFormBoundaryhdyNAhmOouRDGErG
      Content-Disposition: form-data; name="userToken"
      11001
      ------WebKitFormBoundaryhdyNAhmOouRDGErG
      Content-Disposition: form-data; name="img";filename="PhotoHunt.jpg"
      Content-Type: image/jpeg
      <image data>
      ------WebKitFormBoundaryhdyNAhmOouRDGErG
      Content-Disposition: form-data; name="latitude"
      37.422006
      ------WebKitFormBoundaryhdyNAhmOouRDGErG
      Content-Disposition: form-data; name="longitude"
      -122.084095
      ------WebKitFormBoundaryhdyNAhmOouRDGErG--
    In both cases the AppEngine Python code to catch the request is the same:
      class UploadPuzzle(blobstore_handlers.BlobstoreUploadHandler):
          def post(self):
              upload_files = self.get_uploads()
    The problem is that when running on the production AppEngine service self.get_uploads() returns an empty list when the request is made from my client app. Both requests return what I expect (a list with one blob_info in it) on the development server, and Chrome returns what I expect in both cases.
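
    Two differences stand out between the dumps: the Android request appears to be hand-assembled (the boundary mimics WebKit's format even though the User-Agent is Apache HttpClient), and it sends an Expect: 100-Continue header that Chrome does not. One way to take the multipart framing and the Expect header out of the equation is to let httpmime's MultipartEntity build the body. This is only a diagnostic sketch, not a confirmed fix; it assumes the Apache httpmime jar is available to the client, and the class name and command-line arguments are placeholders:
      import java.io.File;
      import org.apache.http.HttpResponse;
      import org.apache.http.client.HttpClient;
      import org.apache.http.client.methods.HttpPost;
      import org.apache.http.entity.mime.MultipartEntity;
      import org.apache.http.entity.mime.content.FileBody;
      import org.apache.http.entity.mime.content.StringBody;
      import org.apache.http.impl.client.DefaultHttpClient;
      import org.apache.http.params.CoreProtocolPNames;
      import org.apache.http.util.EntityUtils;

      public class BlobstoreUploadCheck {
          public static void main(String[] args) throws Exception {
              // args[0]: a fresh upload URL from create_upload_url() -- they are single-use
              // args[1]: path to a local JPEG to send
              HttpPost post = new HttpPost(args[0]);
              // Chrome does not send Expect: 100-Continue; switch it off to remove that difference
              post.getParams().setBooleanParameter(CoreProtocolPNames.USE_EXPECT_CONTINUE, false);

              // MultipartEntity generates its own boundary and closing delimiter,
              // so the framing cannot be off by a missing CRLF or terminator
              MultipartEntity entity = new MultipartEntity();
              entity.addPart("userToken", new StringBody("11001"));
              entity.addPart("img", new FileBody(new File(args[1]), "image/jpeg"));
              entity.addPart("latitude", new StringBody("37.422006"));
              entity.addPart("longitude", new StringBody("-122.084095"));
              post.setEntity(entity);

              HttpClient client = new DefaultHttpClient();
              HttpResponse response = client.execute(post);
              System.out.println(response.getStatusLine());
              System.out.println(EntityUtils.toString(response.getEntity()));
          }
      }
    If this variant is accepted by the live blobstore, the hand-built framing or the Expect header is the likely culprit; if it still produces an empty get_uploads() list, the problem lies elsewhere (for example in how the upload URL is obtained).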

    Read the article

  • How to load and pass a Xforms form in Orbeon (How to Send instance to XForms) ?

    - by Clem
    Hi, I am using the Orbeon Forms solution to generate messages from filled-in web forms. I read different code snippetse in Orbeon's wiki on XForms submission from a pipeline, and I tried different solutions but it doesn't work, and there is no example with a POST from a pipeline, caught by a PFC and sent to an XForms view that receives the posted data (all examples are done in the same page). I have the following pipeline which is received on his instance input: pipelineWrite.xpl <p:config ...> <p:param name="instance" type="input"/> <!-- instance containing the data of the form filled by user --> <p:param name="data" type="output"/> <p:processor name="oxf:java"> <!-- transforms the data into a file --> <p:input name="config"> <config sourcepath="." class="ProcessorWriteCUSDECCD001B"/> </p:input> <p:input name="input" href="#instance"/> <p:output name="output" id="file"/> <!-- XML containing the url of the file --> </p:processor> <p:processor name="oxf:xforms-submission"> <!-- post the XML to the success view --> <p:input name="submission"> <xforms:submission method="post" action="/CUSDECCD001B/success" /> </p:input> <p:input name="request" href="#file"/> <p:output name="response" ref="data"/> </p:processor> </p:config> Then there is the PFC which catch the actions : page-flow.xml <config xmlns="http://www.orbeon.com/oxf/controller"> <page path-info="/CUSDECCD001B/" view="View/ViewForm.xhtml"/> <!-- load the form to be filled in by user --> <page path-info="/CUSDECCD001B/write" model="Controller/PipelineWrite.xpl"/> <!-- send the instance of the form filled to the pipeline above --> <page path-info="/CUSDECCD001B/success" view="View/ViewSuccess.xhtml"/> <!-- send the instance containing the url of the file to the success view --> <epilogue url="oxf:/config/epilogue.xpl"/> </config> Then there is the success view, which is very simple : ViewSuccess.xhtml <html ... > <head> <title>Generation OK</title> <xforms:model> <xforms:instance id="FILE" src="input:instance"> <files xmlns=""> <file mediaType="" filename="" size="" /> </files> </xforms:instance> </xforms:model> </head> <body> Click here to download : <xforms:output ref="//file" appearance="xxforms:download"> <xforms:filename ref="@filename"/> <xforms:mediatype ref="@mediatype"/> <xforms:label>Download</xforms:label> </xforms:output> </body> </html> The problem is that the post is done well, the PFC catches the action well, load the correct view, but the view is loaded with no data (the view doesn't find the data on his instance input). I tried with a GET in the view to retrieve the POST data, and that's the same thing. No data is retrieved. So the download button doesn't work. I hope I'm clear enough to find a solution. Thanks in advance.

    Read the article

  • Fill a list from JSP in Spring

    - by Javi
    Hello, I have something like this in my Spring application:
      public class Book {
          public Book() {
              sheets = new LinkedList<Sheet>();
          }
          protected List<Sheet> sheets;
          // getter and setter
      }
    I add several Sheets to the sheet list and I print a form in a JSP like this:
      <form:form modelAttribute="book" action="${dest_url}" method="POST">
          <c:forEach items="${mybook.sheets}" var="sheet" varStatus="status">
              <form:hidden path="sheet[${status.count -1}].header"/>
              <form:hidden path="sheet[${status.count -1}].footer"/>
              <form:hidden path="sheet[${status.count -1}].operador"/>
              <form:hidden path="sheet[${status.count -1}].number"/>
              <form:hidden path="sheet[${status.count -1}].lines"/>
          </c:forEach>
          ...
      </form:form>
    I need to get this list back in the controller when the form is submitted, so in my controller I have a method with a parameter like this:
      public String myMethod(@ModelAttribute("book") Book book, Model model){ ... }
    The problem is that it doesn't fill the sheets list unless I add as many Sheets in the constructor of Book as I want to get back, and I don't know in advance the number of Sheets the book is going to have. I think the problem is that Book is instantiated with a list of sheets with 0 elements; when binding tries to access sheets[0] the list is empty and it doesn't add a Sheet. I've tried to create a getter method for the list with an index parameter (so it can create the element if it doesn't exist in the list, like in the Struts framework) like this one:
      public Sheet getSheets(int index){
          if(sheets.size() <= index){
              Sheet sheet = new Sheet();
              sheets.add(index, sheet);
          }
          Sheet sheetToReturn = sheets.get(index);
          if(sheetToReturn == null){
              sheetToReturn = new Sheet();
              sheets.add(index, sheetToReturn);
          }
          return sheetToReturn;
      }
    but with this method the JSP doesn't work because sheets has an invalid getter. What's the proper way of filling a list when you don't know the number of items in advance? Thanks
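
    One commonly suggested approach for this kind of indexed binding is to back the property with a list that creates missing elements on demand, so any sheets[i] the form posts can be populated even though the Book starts out empty. A rough sketch, assuming Spring's org.springframework.util.AutoPopulatingList is available in the Spring version in use (the Sheet stub is trimmed to two fields just to keep the example self-contained):
      import java.util.List;
      import org.springframework.util.AutoPopulatingList;

      public class Book {
          // Elements are created via Sheet's no-arg constructor the first time
          // an index is touched, so the posted form can contain any number of rows.
          private List<Sheet> sheets = new AutoPopulatingList<Sheet>(Sheet.class);

          public List<Sheet> getSheets() { return sheets; }
          public void setSheets(List<Sheet> sheets) { this.sheets = sheets; }
      }

      class Sheet {
          private String header;
          private String footer;

          public String getHeader() { return header; }
          public void setHeader(String header) { this.header = header; }
          public String getFooter() { return footer; }
          public void setFooter(String footer) { this.footer = footer; }
      }
    The plain getter/setter pair keeps the JSP form tags happy while the list grows itself, so an indexed getter like the one above is no longer needed. In more recent Spring versions, enabling auto-growing nested paths on the data binder is another option.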

    Read the article

  • Non Document Centric SharePoint Workflow

    - by Dan Revell
    SharePoint workflows are document centric in that the base thing the workflow runs on has to be a concrete item, be it a document or just a list item. The workflow itself is task based, so it is built around stuff a user has to do. Now I can put any sort of code in these tasks that I want to, and even put complex InfoPath forms in for the user to perform the task. This has been fine on all my previous workflows. But what if I want the tasks to be actual official forms themselves? The item that the workflow runs on is just some abstract concept like an event. An example could be that an accident has happened. There isn't a single accident form, but a whole set of forms that need to be completed by different people. Task forms aren't really a nice way to go, because it locks all the forms into the task list. You can only access the forms by not deleting the tasks when complete and going to the workflow summary and following the task links to the InfoPath forms, or by going straight to the tasks list and filtering on particular "accidents". These are official documents, so ideally there would be a library for each type of document and the workflow would orchestrate the completion of the right forms. It would mean each task would have to create a new blank form and then link the user to that form. The user would go complete the form but then have to go back to the task form and click "yes, I've completed it" before the workflow could progress. That is, unless the workflow monitors the forms library form for some completion trigger. But then it all gets messy with the user experience: from clicking the link in the task email, to opening the InfoPath task form, to clicking the link in the subsequent InfoPath library form, and then returning through these forms on completion. It just gets messy trying to retrofit this non document centric sort of workflow into SharePoint. I would really appreciate any input on what might be the best way to do this:
    1. Store the forms as task forms.
    2. Store the forms as library forms and create/link them from the task forms.
    3. Store the forms as different InfoPath views and use a forms library; the workflow would set variables that progress the view the InfoPath form shows.
    4. Use the same form template for both task forms and a forms library, and when a task form is complete, copy the XML into the forms library to have an official record outside of the workflow.
    Thanks

    Read the article

  • MVC Bootstrap: Autocomplete doesn't show properly

    - by kicked11
    I have a MVC website and it had a searchbox with autocomplete, now I changed the layout using bootstrap. But now the autocomplete isn't been shown correctly anymore. See the picture the suggestions are not shown right. the autocomplete goes through the text. I was fine before I used bootstrap. I am using a SQL server to get the data and this is js file (I'm not good at ajax, i took it from a tutorial I followed) $(function () { var ajaxFormSubmit = function () { var $form = $(this); var options = { url: $form.attr("action"), type: $form.attr("method"), data: $form.serialize() }; $.ajax(options).done(function (data) { var $target = $($form.attr("data-aptitude-target")); var $newHtml = $(data); $target.replaceWith($newHtml); $newHtml.show("slide", 200); }); return false; }; var submitAutocompleteForm = function (event, ui) { var $input = $(this); $input.val(ui.item.label); var $form = $input.parents("form:first"); $form.submit(); }; var createAutocomplete = function () { var $input = $(this); var options = { source: $input.attr("data-aptitude-autocomplete"), select: submitAutocompleteForm }; $input.autocomplete(options); }; $("form[data-aptituder-ajax='true']").submit(ajaxFormSubmit); $("input[data-aptitude-autocomplete]").each(createAutocomplete); }); this is the form in my view <form method="get" action="@Url.Action("Index")" data-aptitude-ajax="true" data-aptitude-target="#testList"> <input type="search" name="searchTerm" data-aptitude-autocomplete="@Url.Action("Autocomplete")" /> <input type="submit" value=Search /> And this is part of the controller of the view public ActionResult Autocomplete(string term) { var model = db.tests .Where(r => term == null || r.name.Contains(term)) .Select(r => new { label = r.name }); return Json(model, JsonRequestBehavior.AllowGet); } // // GET: /Test/ public ActionResult Index(string searchTerm = null) { var model = db.tests .Where(r => searchTerm == null || r.name.StartsWith(searchTerm)); if (Request.IsAjaxRequest()) { return PartialView("_Test", model); } return View(model); } I'm new to ajax as well as bootstrap 3. I got the searchfunction and autocomplete from a tutorial I followed. Anybody any idea on how to fix this, because it worked fine? Thanks in advance!

    Read the article

  • jquery buttons icons for dialog

    - by khinester
    I have this code: $(function() { var mainButtons = [ {text: "Invite" , 'class': 'invite-button' , click: function() { // get list of members alert('Invite was clicked...'); } } // end Invite button , {text: "Options" , 'class': 'options-button' , click: function() { alert('Options...'); } } // end Options button ] // end mainButtons , commentButtons = [ {text: "Clear" , 'class': 'clear-button' , click: function() { $('#comment').val('').focus().select(); } } // end Clear button , {text: "Post comment" , 'class': 'post-comment-button' , click: function() { alert('send comment...'); } } // end Post comment ] // end commentButtons $( "#form" ).dialog({ autoOpen: false , height: 465 , width: 700 , modal: true , position: ['center', 35] , buttons: mainButtons }); $( "#user-form" ) .button() .click(function() { $(this).effect("transfer",{ to: $("#form") }, 1500); $( "#form" ).dialog( "open" ); $( ".invite-button" ).button({ icons: {primary:'ui-icon-person',secondary:'ui-icon-triangle-1-s'} }); $( ".options-button" ).button({ icons: {primary:'ui-icon-gear'} }); }); // Add comment... $("#comment, .comment").click(function(){ $('#comment').focus().select(); $("#form").dialog({buttons: commentButtons}); $( ".post-comment-button" ).button({ icons: {primary:'ui-icon-comment'} }); $( ".clear-button" ).button({ icons: {primary:'ui-icon-refresh'} }); }); //Add comment // Bind back the Invite, Options buttons $(".files, .email, .event, .map").click(function(){ $("#form").dialog({buttons: mainButtons}); $( ".invite-button" ).button({ icons: {primary:'ui-icon-person',secondary:'ui-icon-triangle-1-s'} }); $( ".options-button" ).button({ icons: {primary:'ui-icon-gear'} }); }); // Tabs $( "#tabs" ).tabs(); $( ".tabs-bottom .ui-tabs-nav, .ui-tabs-nav > *" ) .removeClass( "ui-widget-header" ) .addClass( "ui-corner-bottom" ); }); ? What is the right way to add the button icons? As in my code I had to add it two times, once: $( "#user-form" ) .button() .click(function() { $(this).effect("transfer",{ to: $("#form") }, 1500); $( "#form" ).dialog( "open" ); ... and $(".files, .email, .event, .map").click(function(){ ... Could this code be improved further? I don't seem to be able to get the "transfer" effect to work correctly in a modal. I added: , close: function() { $(this).effect("transfer",{ to: $("#user-form") }, 1500); } to the $( "#form" ).dialog({ How would you go about in getting the "transfer" to work nicely when you open and close the dialog box?

    Read the article

  • Multiple task in one page?(php - mysql - jquery)

    - by python
    My goal is to build a single page that can perform multiple tasks (CRUD). For example, in the HTML code below there are multiple submit buttons and multiple actions on the same page; after the user submits one of the CRUD actions it should load the result table below. How can I do this in jQuery?
      <script type="text/javascript" src="jquery.js"></script>
      <script>
      $(document).ready(function(){
          $("#button1").click(function(){
              $('form#crudform').attr({action: "script_1.php"});
              $('form#crudform').submit();
          });
          $("#button2").click(function(){
              $('form#crudform').attr({action: "script_2.php"});
              $('form#crudform').submit();
          });
          $("#button3").click(function(){
              $('form#crudform').attr({action: "script_3.php"});
              $('form#crudform').submit();
          });
      });
      </script>
    Form CRUD:
      <form id="crudform" method="post">
          <p>Name: <input type="text" name="name"/></p>
          <p>Age: <input type="text" name="age"/></p>
          <input type="button" id="button1" value="Cancel" />
          <input type="button" id="button2" value="Save" />
          <input type="button" id="button3" value="Update" />
      </form>
    Result:
      <form id="result" method="post">
          <table border="1">
              <tr><td></td><td>Name</td><td>Age</td></tr>
              <tr><td><input type="checkbox" name="name1"></td><td>Name1</td><td>10</td></tr>
              <tr><td><input type="checkbox" name="name1"></td><td>Name2</td><td>15</td></tr>
              <tr><td><input type="checkbox" name="name3"></td><td>Name3</td><td>16</td></tr>
          </table>
          <input type="button" id="button4" value="change" />
          <input type="button" id="button5" value="drop" />
      </form>
    Any tutorials, tips or guides related to these tasks are welcome :)

    Read the article

  • SharpDX: best practice for multiple RenderForms?

    - by Rob Jellinghaus
    I have an XNA app, but I really need to add multiple render windows, which XNA doesn't do. I'm looking at SharpDX (both for multi-window support and for DX11 / Metro / many other reasons). I decided to hack up the SharpDX DX11 MultiCubeTexture sample to see if I could make it work. My changes are pretty trivial. The original sample had: [STAThread] private static void Main() { var form = new RenderForm("SharpDX - MiniCubeTexture Direct3D11 Sample"); ... I changed this to: struct RenderFormWithActions { internal readonly RenderForm Form; // should just be Action but it's not in System namespace?! internal readonly Action RenderAction; internal readonly Action DisposeAction; internal RenderFormWithActions(RenderForm form, Action renderAction, Action disposeAction) { Form = form; RenderAction = renderAction; DisposeAction = disposeAction; } } [STAThread] private static void Main() { // hackity hack new Thread(new ThreadStart(() = { RenderFormWithActions form1 = CreateRenderForm(); RenderLoop.Run(form1.Form, () = form1.RenderAction(0)); form1.DisposeAction(0); })).Start(); new Thread(new ThreadStart(() = { RenderFormWithActions form2 = CreateRenderForm(); RenderLoop.Run(form2.Form, () = form2.RenderAction(0)); form2.DisposeAction(0); })).Start(); } private static RenderFormWithActions CreateRenderForm() { var form = new RenderForm("SharpDX - MiniCubeTexture Direct3D11 Sample"); ... Basically, I split out all the Main() code into a separate method which creates a RenderForm and two delegates (a render delegate, and a dispose delegate), and bundles them all together into a struct. I call this method twice, each time from a separate, new thread. Then I just have one RenderLoop on each new thread. I was thinking this wouldn't work because of the [STAThread] declaration -- I thought I would need to create the RenderForm on the main (STA) thread, and run only a single RenderLoop on that thread. Fortunately, it seems I was wrong. This works quite well -- if you drag one of the forms around, it stops rendering while being dragged, but starts again when you drop it; and the other form keeps chugging away. My questions are pretty basic: Is this a reasonable approach, or is there some lurking threading issue that might make trouble? My code simply duplicates all the setup code -- it makes a duplicate SwapChain, Device, Texture2D, vertex buffer, everything. I don't have a problem with this level of duplication -- my app is not intensive enough to suffer resource issues -- but nonetheless, is there a better practice? Is there any good reference for which DirectX structures can safely be shared, and which can't? It appears that RenderLoop.Run calls the render delegate in a tight loop. Is there any standard way to limit the frame rate of RenderLoop.Run, if you don't want a 400FPS app eating 100% of your CPU? Should I just Thread.Sleep(30) in the render delegate? (I asked on the sharpdx.org forums as well, but Alexandre is on vacation for two weeks, and my sister wants me to do a performance with my app at her wedding in three and a half weeks, so I'm mighty incented here! http://robjsoftware.org for details of what I'm building....)

    Read the article

  • Solving File Upload Cancel Issue

    - by Frank Nimphius
    In Oracle JDeveloper 11g R1 (I did not test 11g R2) the file upload component is submitted even if users click a cancel button with immediate="true" set. Usually, immediate="true" on a command button bypasses all model updates, which would make you think that the file upload isn't processed either. However, using a form like the one shown below, pressing the cancel button does not suppress the file upload.
      <af:form id="f1" usesUpload="true">
        <af:inputFile label="Choose file" id="fileup" clientComponent="true"
                      value="#{FileUploadBean.file}" valueChangeListener="#{FileUploadBean.onFileUpload}">
        </af:inputFile>
        <af:commandButton text="Submit" id="cb1" partialSubmit="true"
                          action="#{FileUploadBean.onInputFormSubmit}"/>
        <af:commandButton text="cancel" id="cb2" immediate="true"/>
      </af:form>
    The solution to this problem is a change of the event root, which you can achieve either by (i) setting partialSubmit="true" on the cancel button, or by (ii) surrounding the form parts that should not be submitted when the cancel button is pressed with an af:subform tag.
    i) partialSubmit solution
      <af:form id="f1" usesUpload="true">
        <af:inputFile .../>
        <af:commandButton text="Submit" .../>
        <af:commandButton text="cancel" immediate="true" partialSubmit="true" .../>
      </af:form>
    ii) subform solution
      <af:form id="f1" usesUpload="true">
        <af:subform id="sf1">
          <af:inputFile ... />
          <af:commandButton text="Submit" .../>
        </af:subform>
        <af:commandButton text="cancel" immediate="true" .../>
      </af:form>
    Note that the af:subform surrounds the input form parts that you want to submit when the submit button is pressed. By default, the af:subform only submits its contained content if the submit is issued from within.

    Read the article

  • Announcing the release of the Windows Azure SDK 2.1 for .NET

    - by ScottGu
    Today we released the v2.1 update of the Windows Azure SDK for .NET.  This is a major refresh of the Windows Azure SDK and it includes some great new features and enhancements. These new capabilities include: Visual Studio 2013 Preview Support: The Windows Azure SDK now supports using the new VS 2013 Preview Visual Studio 2013 VM Image: Windows Azure now has a built-in VM image that you can use to host and develop with VS 2013 in the cloud Visual Studio Server Explorer Enhancements: Redesigned with improved filtering and auto-loading of subscription resources Virtual Machines: Start and Stop VM’s w/suspend billing directly from within Visual Studio Cloud Services: New Emulator Express option with reduced footprint and Run as Normal User support Service Bus: New high availability options, Notification Hub support, Improved VS tooling PowerShell Automation: Lots of new PowerShell commands for automating Web Sites, Cloud Services, VMs and more All of these SDK enhancements are now available to start using immediately and you can download the SDK from the Windows Azure .NET Developer Center.  Visual Studio’s Team Foundation Service (http://tfs.visualstudio.com/) has also been updated to support today’s SDK 2.1 release, and the SDK 2.1 features can now be used with it (including with automated builds + tests). Below are more details on the new features and capabilities released today: Visual Studio 2013 Preview Support Today’s Window Azure SDK 2.1 release adds support for the recent Visual Studio 2013 Preview. The 2.1 SDK also works with Visual Studio 2010 and Visual Studio 2012, and works side by side with the previous Windows Azure SDK 1.8 and 2.0 releases. To install the Windows Azure SDK 2.1 on your local computer, choose the “install the sdk” link from the Windows Azure .NET Developer Center. Then, chose which version of Visual Studio you want to use it with.  Clicking the third link will install the SDK with the latest VS 2013 Preview: If you don’t already have the Visual Studio 2013 Preview installed on your machine, this will also install Visual Studio Express 2013 Preview for Web. Visual Studio 2013 VM Image Hosted in the Cloud One of the requests we’ve heard from several customers has been to have the ability to host Visual Studio within the cloud (avoiding the need to install anything locally on your computer). With today’s SDK update we’ve added a new VM image to the Windows Azure VM Gallery that has Visual Studio Ultimate 2013 Preview, SharePoint 2013, SQL Server 2012 Express and the Windows Azure 2.1 SDK already installed on it.  This provides a really easy way to create a development environment in the cloud with the latest tools. With the recent shutdown and suspend billing feature we shipped on Windows Azure last month, you can spin up the image only when you want to do active development, and then shut down the virtual machine and not have to worry about usage charges while the virtual machine is not in use. You can create your own VS image in the cloud by using the New->Compute->Virtual Machine->From Gallery menu within the Windows Azure Management Portal, and then by selecting the “Visual Studio Ultimate 2013 Preview” template: Visual Studio Server Explorer: Improved Filtering/Management of Subscription Resources With the Windows Azure SDK 2.1 release you’ll notice significant improvements in the Visual Studio Server Explorer. The explorer has been redesigned so that all Windows Azure services are now contained under a single Windows Azure node.  
From the top level node you can now manage your Windows Azure credentials, import a subscription file or filter Server Explorer to only show services from particular subscriptions or regions. Note: The Web Sites and Mobile Services nodes will appear outside the Windows Azure Node until the final release of VS 2013. If you have installed the ASP.NET and Web Tools Preview Refresh, though, the Web Sites node will appear inside the Windows Azure node even with the VS 2013 Preview. Once your subscription information is added, Windows Azure services from all your subscriptions are automatically enumerated in the Server Explorer. You no longer need to manually add services to Server Explorer individually. This provides a convenient way of viewing all of your cloud services, storage accounts, service bus namespaces, virtual machines, and web sites from one location: Subscription and Region Filtering Support Using the Windows Azure node in Server Explorer, you can also now filter your Windows Azure services in the Server Explorer by the subscription or region they are in.  If you have multiple subscriptions but need to focus your attention to just a few subscription for some period of time, this a handy way to hide the services from other subscriptions view until they become relevant. You can do the same sort of filtering by region. To enable this, just select “Filter Services” from the context menu on the Windows Azure node: Then choose the subscriptions and/or regions you want to filter by. In the below example, I’ve decided to show services from my pay-as-you-go subscription within the East US region: Visual Studio will then automatically filter the items that show up in the Server Explorer appropriately: With storage accounts and service bus namespaces, you sometimes need to work with services outside your subscription. To accommodate that scenario, those services allow you to attach an external account (from the context menu). You’ll notice that external accounts have a slightly different icon in server explorer to indicate they are from outside your subscription. Other Improvements We’ve also improved the Server Explorer by adding additional properties and actions to the service exposed. You now have access to most of the properties on a cloud service, deployment slot, role or role instance as well as the properties on storage accounts, virtual machines and web sites. Just select the object of interest in Server Explorer and view the properties in the property pane. We also now have full support for creating/deleting/update storage tables, blobs and queues from directly within Server Explorer.  Simply right-click on the appropriate storage account node and you can create them directly within Visual Studio: Virtual Machines: Start/Stop within Visual Studio Virtual Machines now have context menu actions that allow you start, shutdown, restart and delete a Virtual Machine directly within the Visual Studio Server Explorer. The shutdown action enables you to shut down the virtual machine and suspend billing when the VM is not is use, and easily restart it when you need it: This is especially useful in Dev/Test scenarios where you can start a VM – such as a SQL Server – during your development session and then shut it down / suspend billing when you are not developing (and no longer be billed for it). You can also now directly remote desktop into VMs using the “Connect using Remote Desktop” context menu command in VS Server Explorer.  
Cloud Services: Emulator Express with Run as Normal User Support You can now launch Visual Studio and run your cloud services locally as a Normal User (without having to elevate to an administrator account) using a new Emulator Express option included as a preview feature with this SDK release.  Emulator Express is a version of the Windows Azure Compute Emulator that runs a restricted mode – one instance per role – and it doesn’t require administrative permissions and uses 40% less resources than the full Windows Azure Emulator. Emulator Express supports both web and worker roles. To run your application locally using the Emulator Express option, simply change the following settings in the Windows Azure project. On the shortcut menu for the Windows Azure project, choose Properties, and then choose the Web tab. Check the setting for IIS (Internet Information Services). Make sure that the option is set to IIS Express, not the full version of IIS. Emulator Express is not compatible with full IIS. On the Web tab, choose the option for Emulator Express. Service Bus: Notification Hubs With the Windows Azure SDK 2.1 release we are adding support for Windows Azure Notification Hubs as part of our official Windows Azure SDK, inside of Microsoft.ServiceBus.dll (previously the Notification Hub functionality was in a preview assembly). You are now able to create, update and delete Notification Hubs programmatically, manage your device registrations, and send push notifications to all your mobile clients across all platforms (Windows Store, Windows Phone 8, iOS, and Android). Learn more about Notification Hubs on MSDN here, or watch the Notification Hubs //BUILD/ presentation here. Service Bus: Paired Namespaces One of the new features included with today’s Windows Azure SDK 2.1 release is support for Service Bus “Paired Namespaces”.  Paired Namespaces enable you to better handle situations where a Service Bus service namespace becomes unavailable (for example: due to connectivity issues or an outage) and you are unable to send or receive messages to the namespace hosting the queue, topic, or subscription. Previously,to handle this scenario you had to manually setup separate namespaces that can act as a backup, then implement manual failover and retry logic which was sometimes tricky to get right. Service Bus now supports Paired Namespaces, which enables you to connect two namespaces together. When you activate the secondary namespace, messages are stored in the secondary queue for delivery to the primary queue at a later time. If the primary container (namespace) becomes unavailable for some reason, automatic failover enables the messages in the secondary queue. For detailed information about paired namespaces and high availability, see the new topic Asynchronous Messaging Patterns and High Availability. Service Bus: Tooling Improvements In this release, the Windows Azure Tools for Visual Studio contain several enhancements and changes to the management of Service Bus messaging entities using Visual Studio’s Server Explorer. The most noticeable change is that the Service Bus node is now integrated into the Windows Azure node, and supports integrated subscription management. Additionally, there has been a change to the code generated by the Windows Azure Worker Role with Service Bus Queue project template. This code now uses an event-driven “message pump” programming model using the QueueClient.OnMessage method. 
PowerShell: Tons of New Automation Commands Since my last blog post on the previous Windows Azure SDK 2.0 release, we’ve updated Windows Azure PowerShell (which is a separate download) five times. You can find the full change log here. We’ve added new cmdlets in the following areas: China instance and Windows Azure Pack support Environment Configuration VMs Cloud Services Web Sites Storage SQL Azure Service Bus China Instance and Windows Azure Pack We now support the following cmdlets for the China instance and Windows Azure Pack, respectively: China Instance: Web Sites, Service Bus, Storage, Cloud Service, VMs, Network Windows Azure Pack: Web Sites, Service Bus We will have full cmdlet support for these two Windows Azure environments in PowerShell in the near future. Virtual Machines: Stop/Start Virtual Machines Similar to the Start/Stop VM capability in VS Server Explorer, you can now stop your VM and suspend billing: If you want to keep the original behavior of keeping your stopped VM provisioned, you can pass in the -StayProvisioned switch parameter. Virtual Machines: VM endpoint ACLs We’ve added and updated a bunch of cmdlets for you to configure fine-grained network ACL on your VM endpoints. You can use the following cmdlets to create ACL config and apply them to a VM endpoint: New-AzureAclConfig Get-AzureAclConfig Set-AzureAclConfig Remove-AzureAclConfig Add-AzureEndpoint -ACL Set-AzureEndpoint –ACL The following example shows how to add an ACL rule to an existing endpoint of a VM. Other improvements for Virtual Machine management includes Added -NoWinRMEndpoint parameter to New-AzureQuickVM and Add-AzureProvisioningConfig to disable Windows Remote Management Added -DirectServerReturn parameter to Add-AzureEndpoint and Set-AzureEndpoint to enable/disable direct server return Added Set-AzureLoadBalancedEndpoint cmdlet to modify load balanced endpoints Cloud Services: Remote Desktop and Diagnostics Remote Desktop and Diagnostics are popular debugging options for Cloud Services. We’ve introduced cmdlets to help you configure these two Cloud Service extensions from Windows Azure PowerShell. Windows Azure Cloud Services Remote Desktop extension: New-AzureServiceRemoteDesktopExtensionConfig Get-AzureServiceRemoteDesktopExtension Set-AzureServiceRemoteDesktopExtension Remove-AzureServiceRemoteDesktopExtension Windows Azure Cloud Services Diagnostics extension New-AzureServiceDiagnosticsExtensionConfig Get-AzureServiceDiagnosticsExtension Set-AzureServiceDiagnosticsExtension Remove-AzureServiceDiagnosticsExtension The following example shows how to enable Remote Desktop for a Cloud Service. Web Sites: Diagnostics With our last SDK update, we introduced the Get-AzureWebsiteLog –Tail cmdlet to get the log streaming of your Web Sites. Recently, we’ve also added cmdlets to configure Web Site application diagnostics: Enable-AzureWebsiteApplicationDiagnostic Disable-AzureWebsiteApplicationDiagnostic The following 2 examples show how to enable application diagnostics to the file system and a Windows Azure Storage Table: SQL Database Previously, you had to know the SQL Database server admin username and password if you want to manage the database in that SQL Database server. Recently, we’ve made the experience much easier by not requiring the admin credential if the database server is in your subscription. So you can simply specify the -ServerName parameter to tell Windows Azure PowerShell which server you want to use for the following cmdlets. 
Get-AzureSqlDatabase New-AzureSqlDatabase Remove-AzureSqlDatabase Set-AzureSqlDatabase We’ve also added a -AllowAllAzureServices parameter to New-AzureSqlDatabaseServerFirewallRule so that you can easily add a firewall rule to whitelist all Windows Azure IP addresses. Besides the above experience improvements, we’ve also added cmdlets get the database server quota and set the database service objective. Check out the following cmdlets for details. Get-AzureSqlDatabaseServerQuota Get-AzureSqlDatabaseServiceObjective Set-AzureSqlDatabase –ServiceObjective Storage and Service Bus Other new cmdlets include Storage: CRUD cmdlets for Azure Tables and Queues Service Bus: Cmdlets for managing authorization rules on your Service Bus Namespace, Queue, Topic, Relay and NotificationHub Summary Today’s release includes a bunch of great features that enable you to build even better cloud solutions.  All the above features/enhancements are shipped and available to use immediately as part of the 2.1 release of the Windows Azure SDK for .NET. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • UAT Testing for SOA 10G Clusters

    - by [email protected]
    A lot of customers ask how to verify their SOA clusters and make them production ready. Here is a list that I recommend using for 10G SOA Clusters.
    Test cases for each component - Oracle Application Server 10G
    General Application Server test cases
    This section is going to cover very general test cases to make sure that the Application Server cluster has been set up correctly and that you can start and stop all the components in the server via opmnctl and the AS Console.
    Test Case 1: Check if you can see AS instances in the console
    Implementation: Log on to the AS Console and check whether you can see all the nodes in your AS cluster. You should be able to see all the Oracle AS instances that are part of the cluster. This means that the OPMN clustering worked and the AS instances successfully joined the AS cluster.
    Result: You should be able to see that all the instances in the AS cluster are listed in the EM console. If the instances are not listed, here are the files to check to see if OPMN joined the cluster properly: $ORACLE_HOME\opmn\logs\opmn.log and $ORACLE_HOME\opmn\logs\opmn.dbg. If OPMN did not join the cluster properly, please check the opmn.xml file to make sure the discovery multicast address and port are correct (see this link for opmn documentation). Restart the whole instance using opmnctl stopall followed by opmnctl startall. Log on to the AS console to see if the instance is listed as part of the cluster.
    Test Case 2: Check to see if you can start/stop each component
    Implementation: Check each OC4J component on each AS instance. Start each and every component through the AS console to see if they will start and stop. Do that for each and every instance.
    Result: Each component should start and stop through the AS console. You can also verify that a component started by checking opmnctl status after logging onto each box associated with the cluster.
    Test Case 3: Add/modify a datasource entry through the AS console on a remote AS instance (not on the instance where EM is physically running)
    Implementation: Pick an OC4J instance. Create a new data-source through the AS console. Modify an existing data-source or connection pool (optional).
    Result: Open $ORACLE_HOME\j2ee\<oc4j_name>\config\data-sources.xml to see if the new (and/or the modified) connection details and data-source exist. If they do, then the AS console has successfully updated a remote file and MBeans are communicating correctly.
    Test Case 4: Start and stop AS instances using the opmnctl @cluster command
    Implementation: Go to $ORACLE_HOME\opmn\bin and use opmnctl @cluster to start and stop the AS instances.
    Result: Use opmnctl @cluster status to check for start and stop statuses.
    HTTP server test cases
    This section will deal with use cases to test HTTP server failover scenarios.
In these examples the HTTP server will be talking to the BPEL console (or any other web application that the client wants), so the URL will be _http://hostname:port\BPELConsole Test Case 1  Shut down one of the HTTP servers while accessing the BPEL console and see the requested routed to the second HTTP server in the cluster Implementation Access the BPELConsoleCheck $ORACLE_HOME\Apache\Apache\logs\access_log --> check for the timestamp and the URL that was accessed by the user. Timestamp and URL would look like this 1xx.2x.2xx.xxx [24/Mar/2009:16:04:38 -0500] "GET /BPELConsole=System HTTP/1.1" 200 15 After you have figured out which HTTP server this is running on, shut down this HTTP server by using opmnctl stopproc --> this is a graceful shutdown.Access the BPELConsole again (please note that you should have a LoadBalancer in front of the HTTP server and configured the Apache Virtual Host, see EDG for steps)Check $ORACLE_HOME\Apache\Apache\logs\access_log --> check for the timestamp and the URL that was accessed by the user. Timestamp and URL would look like above Result Even though you are shutting down the HTTP server the request is routed to the surviving HTTP server, which is then able to route the request to the BPEL Console and you are able to access the console. By checking the access log file you can confirm that the request is being picked up by the surviving node. Test Case 2 Repeat the same test as above but instead of calling opmnctl stopproc, pull the network cord of one of the HTTP servers, so that the LBR routes the request to the surviving HTTP node --> this is simulating a network failure. Test Case 3 In test case 1 we have simulated a graceful shutdown, in this case we will simulate an Apache crash Implementation Use opmnctl status -l to get the PID of the HTTP server that you would like forcefully bring downOn Linux use kill -9 <PID> to kill the HTTP serverAccess the BPEL console Result As you shut down the HTTP server, OPMN will restart the HTTP server. The restart may be so quick that the LBR may still route the request to the same server. One way to check if the HTTP server restared is to check the new PID and the timestamp in the access log for the BPEL console. BPEL test cases This section is going to cover scenarios dealing with BPEL clustering using jGroups, BPEL deployment and testing related to BPEL failover. Test Case 1 Verify that jGroups has initialized correctly. There is no real testing in this use case just a visual verification by looking at log files that jGroups has initialized correctly. Check the opmn log for the BPEL container for all nodes at $ORACLE_HOME/opmn/logs/<group name><container name><group name>~1.log. This logfile will contain jGroups related information during startup and steady-state operation. 
    Soon after startup you should find log entries for UDP or TCP.
    Example jGroups log entries for UDP:
      Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
      INFO: sockets will use interface 144.25.142.172
      Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
      INFO: socket information:
      local_addr=144.25.142.172:1127, mcast_addr=228.8.15.75:45788, bind_addr=/144.25.142.172, ttl=32
      sock: bound to 144.25.142.172:1127, receive buffer size=64000, send buffer size=32000
      mcast_recv_sock: bound to 144.25.142.172:45788, send buffer size=32000, receive buffer size=64000
      mcast_send_sock: bound to 144.25.142.172:1128, send buffer size=32000, receive buffer size=64000
      Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
      -------------------------------------------------------
      GMS: address is 144.25.142.172:1127
      -------------------------------------------------------
    Example jGroups log entries for TCP:
      Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
      INFO: server socket created on 144.25.142.172:7900
      Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
      -------------------------------------------------------
      GMS: address is 144.25.142.172:7900
      -------------------------------------------------------
    In the log below, "socket created on" indicates that the TCP socket is established on the node's own IP address and port, while "created socket to" shows that the second node has connected to the first node, matching the log file above with the IP address and port.
      Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
      INFO: server socket created on 144.25.142.173:7901
      Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
      -------------------------------------------------------
      GMS: address is 144.25.142.173:7901
      -------------------------------------------------------
      Apr 3, 2008 6:25:41 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable getConnection
      INFO: created socket to 144.25.142.172:7900
    Result: By reviewing the log files, you can confirm that BPEL clustering at the jGroups level is working and that the jGroups channel is communicating.
    Test Case 2: Test connectivity between BPEL nodes
    Implementation: Test connections between different cluster nodes using ping, telnet, and traceroute. The presence of firewalls and the number of hops between cluster nodes can affect performance, as they have a tendency to take down connections after some time or simply block them. Also reference Metalink Note 413783.1: "How to Test Whether Multicast is Enabled on the Network."
    Result: Using the above tools you can confirm whether multicast is working and whether the BPEL nodes are communicating.
    Test Case 3: Test deployment of a BPEL suitcase to one BPEL node
    Implementation: Deploy a HelloWorld BPEL suitcase (or any other client-specific BPEL suitcase) to only one BPEL instance using ant, JDeveloper or the BPEL console. Log on to the second BPEL console to check if the BPEL suitcase has been deployed.
    Result: If jGroups has been configured and is communicating correctly, BPEL clustering will allow you to deploy a suitcase to a single node, and jGroups will notify the second instance of the deployment. The second BPEL instance will go to the DB and pick up the new deployment after receiving notification. The result is that the new deployment will be "deployed" to each node by only deploying to a single BPEL instance in the BPEL cluster.
    Test Case 4: Test whether the BPEL server fails over and all asynch processes are picked up by the secondary BPEL instance
    Implementation: Deploy 2 asynch processes: a ParentAsynch process which calls a ChildAsynchProcess with a variable telling it how many times to loop or how many seconds to sleep, and a ChildAsynchProcess that loops or sleeps or has an onAlarm. Make sure that the processes are deployed to both servers. Shut down one BPEL server. On the active BPEL server call ParentAsynch a few times (use the load generation page). When you have enough ParentAsynch instances, shut down this BPEL instance and start the other one; please wait until this BPEL instance shuts down fully before starting up the second one. Log on to the BPEL console and see that the instances were picked up by the second BPEL node and completed.
    Result: The BPEL instances will fail over to the secondary node and complete the flow.
    ESB test cases
    This section covers the use cases involved with testing an ESB cluster. For this section please follow Metalink Note 470267.1, which covers the basic tests to verify your ESB cluster.
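
    For the multicast checks referenced in Test Case 2 of the BPEL section, a small standalone sender/receiver can complement ping, telnet and traceroute. This is only a rough sketch: the group address and port in the usage comment are taken from the example log entries above and should be replaced with the discovery address and port from your own opmn.xml/jGroups configuration, and it assumes a JDK is available on each node.
      import java.net.DatagramPacket;
      import java.net.InetAddress;
      import java.net.MulticastSocket;

      public class MulticastCheck {
          // Usage: java MulticastCheck recv 228.8.15.75 45788   (run first, on node A)
          //        java MulticastCheck send 228.8.15.75 45788   (then run on node B)
          public static void main(String[] args) throws Exception {
              String mode = args[0];
              InetAddress group = InetAddress.getByName(args[1]);
              int port = Integer.parseInt(args[2]);

              MulticastSocket socket = new MulticastSocket(port);
              socket.joinGroup(group);
              if ("send".equals(mode)) {
                  byte[] msg = "ping".getBytes("UTF-8");
                  socket.send(new DatagramPacket(msg, msg.length, group, port));
                  System.out.println("sent ping to " + group.getHostAddress() + ":" + port);
              } else {
                  byte[] buf = new byte[256];
                  DatagramPacket packet = new DatagramPacket(buf, buf.length);
                  socket.receive(packet); // blocks until a datagram arrives
                  System.out.println("received \"" + new String(packet.getData(), 0, packet.getLength(), "UTF-8")
                          + "\" from " + packet.getAddress().getHostAddress());
              }
              socket.leaveGroup(group);
              socket.close();
          }
      }
    If the receiver never sees the ping while the sender completes normally, multicast traffic is being dropped between the nodes and the jGroups members are unlikely to discover each other over UDP.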

    Read the article

  • MVVM in Task-It

    As I'm gearing up to write a post about dynamic XAP loading with MEF, I'd like to first talk a bit about MVVM, the Model-View-ViewModel pattern, as I will be leveraging this pattern in my future posts. Download Source Code Why MVVM? Your first question may be, "why do I need this pattern? I've been using a code-behind approach for years and it works fine." Well, you really don't have to make the switch to MVVM, but let me first explain some of the benefits I see for doing so. MVVM Benefits Testability - This is the one you'll probably hear the most about when it comes to MVVM. Moving most of the code from your code-behind to a separate view model class means you can now write unit tests against the view model without any knowledge of a view (UserControl). Multiple UIs - Let's just say that you've created a killer app, it's running in the browser, and maybe you've even made it run out-of-browser. Now what if your boss comes to you and says, "I heard about this new Windows Phone 7 device that is coming out later this year. Can you start porting the app to that device?". Well, now you have to create a new UI (UserControls, etc.) because you have a lot less screen real estate to work with. So what do you do, copy all of your existing UserControls, paste them, rename them, and then start changing the code? Hmm, that doesn't sound so good. But wait, if most of the code that makes your browser-based app tick lives in view model classes, now you can create new view (UserControls) for Windows Phone 7 that reference the same view model classes as your browser-based app. Page state - In Silverlight you're at some point going to be faced with the same issue you dealt with for years in ASP.NET, maintaining page state. Let's say a user hits your Products page, does some stuff (filters record, etc.), then leaves the page and comes back later. It would be best if the Products page was in the same state as when they left it right? Well, if you've thrown away your view (UserControl or Page) and moved off to another part of the UI, when you come back to Products you're probably going to re-instantiate your view...which will put it right back in the state it was when it started. Hmm, not good. Well, with a little help from MEF you can store the state in your view model class, MEF will keep that view model instance hanging around in memory, and then you simply rebind your view to the view model class. I made that sound easy, but it's actually a bit of work to properly store and restore the state. At least it can be done though, which will make your users a lot happier! I'll talk more about this in an upcoming blog post. No event handlers? Another nice thing about MVVM is that you can bind your UserControls to the view model, which may eliminate the need for event handlers in your code-behind. So instead of having a Click handler on a Button (or RadMenuItem), for example, you can now bind your control's Command property to a DelegateCommand in your view model (I'll talk more about Commands in an upcoming post). Instead of having a SelectionChanged event handler on your RadGridView you can now bind its SelectedItem property to a property in your view model, and each time the user clicks a row, the view model property's setter will be called. 
Now through the magic of binding we can eliminate the need for traditional code-behind based event handlers on our user interface controls, and the best thing is that the view model knows about everything that's going on...which means we can test things without a user interface. The brains of the operation So what we're seeing here is that the view is now just a dumb layer that binds to the view model, and that the view model is in control of just about everything, like what happens when a RadGridView row is selected, or when a RadComboBoxItem is selected, or when a RadMenuItem is clicked. It is also responsible for loading data when the page is hit, as well as kicking off data inserts, updates and deletions. Once again, all of this stuff can be tested without the need for a user interface. If the test works, then it'll work regardless of whether the user is hitting the browser-based version of your app, or the Windows Phone 7 version. Nice! The database Before running the code for this app you will need to create the database. First, create a database called MVVMProject in SQL Server, then run MVVMProject.sql in the MVVMProject/Database directory of your downloaded .zip file. This should give you a Task table with 3 records in it. When you fire up the solution you will also need to update the connection string in web.config to point to your database instead of IBM12\SQLSERVER2008. The code One note about this code is that it runs against the latest Silverlight 4 RC and WCF RIA Services code. Please see my first blog post about updating to the RC bits. Beta to RC - Part 1 At the top of this post is a link to a sample project that demonstrates a sample application with a Tasks page that uses the MVVM pattern. This is a simplified version of how I have implemented the Tasks page in the Task-It application. You'll notice that Tasks.xaml has very little code to it. Just a TextBlock that displays the page title and a ContentControl. <StackPanel>     <TextBlock Text="Tasks" Style="{StaticResource PageTitleStyle}"/>     <Rectangle Style="{StaticResource StandardSpacerStyle}"/>     <ContentControl x:Name="ContentControl1"/> </StackPanel> In List.xaml we have a RadGridView. Notice that the ItemsSource is bound to a property in the view model class called Tasks, SelectedItem is bound to a property in the view model called SelectedItem, and IsBusy is bound to a property in the view model called IsLoading. <Grid>     <telerikGridView:RadGridView ItemsSource="{Binding Tasks}" SelectedItem="{Binding SelectedItem, Mode=TwoWay}"         IsBusy="{Binding IsLoading}" AutoGenerateColumns="False" IsReadOnly="True" RowIndicatorVisibility="Collapsed"         IsFilteringAllowed="False" ShowGroupPanel="False">         <telerikGridView:RadGridView.Columns>             <telerikGridView:GridViewDataColumn Header="Name" DataMemberBinding="{Binding Name}" Width="3*"/>             <telerikGridView:GridViewDataColumn Header="Due" DataMemberBinding="{Binding DueDate}" DataFormatString="{}{0:d}" Width="*"/>         </telerikGridView:RadGridView.Columns>     </telerikGridView:RadGridView> </Grid> In Details.xaml we have a Save button that is bound to a property called SaveCommand in our view model. We also have a simple form (I'm using a couple of controls here from Silverlight.FX for the form layout, FormPanel and Label, simply because they make for a clean XAML layout). Notice that the FormPanel is also bound to the SelectedItem in the view model (the same one that the RadGridView is). 
The two form controls (the TextBox and RadDatePicker) are bound to the SelectedItem's Name and DueDate properties. These are properties of the Task object that WCF RIA Services creates. <StackPanel>     <Button Content="Save" Command="{Binding SaveCommand}" HorizontalAlignment="Left"/>     <Rectangle Style="{StaticResource StandardSpacerStyle}"/>     <fxui:FormPanel DataContext="{Binding SelectedItem}" Style="{StaticResource FormContainerStyle}">         <fxui:Label Text="Name:"/>         <TextBox Text="{Binding Name, Mode=TwoWay}"/>         <fxui:Label Text="Due:"/>         <telerikInput:RadDatePicker SelectedDate="{Binding DueDate, Mode=TwoWay}"/>     </fxui:FormPanel> </StackPanel> In the code-behind of the Tasks control, Tasks.xaml.cs, I created an instance of the view model class (TasksViewModel) in the constructor and set it as the DataContext for the control. The Tasks page will load one of two child UserControls depending on whether you are viewing the list of tasks (List.xaml) or the form for editing a task (Details.xaml). // Set the DataContext to an instance of the view model class var viewModel = new TasksViewModel(); DataContext = viewModel;   // Child user controls (inherit DataContext from this user control) List = new List(); // RadGridView Details = new Details(); // Form When the page first loads, the List is loaded into the ContentControl. // Show the RadGridView first ContentControl1.Content = List; In the code-behind we also listen for a couple of the view model's events. The ItemSelected event will be fired when the user clicks on a record in the RadGridView in the List control. The SaveCompleted event will be fired when the user clicks Save in the Details control (the form). Here the view model is in control, and is letting the view know when something needs to change. // Listeners for the view model's events viewModel.ItemSelected += OnItemSelected; viewModel.SaveCompleted += OnSaveCompleted; The event handlers toggle the view between the RadGridView (List) and the form (Details). void OnItemSelected(object sender, RoutedEventArgs e) {     // Show the form     ContentControl1.Content = Details; }   void OnSaveCompleted(object sender, RoutedEventArgs e) {     // Show the RadGridView     ContentControl1.Content = List; } In TasksViewModel, we instantiate a DataContext object and a SaveCommand in the constructor. DataContext is a WCF RIA Services object that we'll use to retrieve the list of Tasks and to save any changes to a task. I'll talk more about this and Commands in a future post, but for now think of the SaveCommand as an event handler that is called when the Save button in the form is clicked. DataContext = new DataContext(); SaveCommand = new DelegateCommand(OnSave); When the TasksViewModel constructor is called we also make a call to LoadTasks. This sets IsLoading to true (which causes the RadGridView's busy indicator to appear) and retrieves the records via WCF RIA Services.         public LoadOperation<Task> LoadTasks()         {             // Show the loading message             IsLoading = true;             // Get the data via WCF RIA Services. When the call has returned, OnTasksLoaded is called.             return DataContext.Load(DataContext.GetTasksQuery(), OnTasksLoaded, false);         } When the data is returned, OnTasksLoaded is called. This sets IsLoading to false (which hides the RadGridView's busy indicator), and fires property changed notifications to the UI to let it know that the IsLoading and Tasks properties have changed. 
This property changed notification basically tells the UI to rebind. void OnTasksLoaded(LoadOperation<Task> lo) {     // Hide the loading message     IsLoading = false;       // Notify the UI that the Tasks and IsLoading properties have changed     this.OnPropertyChanged(p => p.Tasks);     this.OnPropertyChanged(p => p.IsLoading); } Next let's look at the view model's SelectedItem property. This is the one that's bound to both the RadGridView and the form. When the user clicks a record in the RadGridView its setter gets called (set a breakpoint and see what I mean). The other code in the setter lets the UI know that the SelectedItem has changed (so the form displays the correct data), and fires the event that notifies the UI that a selection has occurred (which tells the UI to switch from List to Details). public Task SelectedItem {     get { return _selectedItem; }     set     {         _selectedItem = value;           // Let the UI know that the SelectedItem has changed (forces it to re-bind)         this.OnPropertyChanged(p => p.SelectedItem);         // Notify the UI, so it can switch to the Details (form) page         NotifyItemSelected();     } } One last thing, saving the data. When the Save button in the form is clicked it fires the SaveCommand, which calls the OnSave method in the view model (once again, set a breakpoint to see it in action). public void OnSave() {     // Save the changes via WCF RIA Services. When the save is complete, call OnSaveCompleted.     DataContext.SubmitChanges(OnSaveCompleted, null); } In OnSave, we tell WCF RIA Services to submit any changes, which there will be if you changed either the Name or the Due Date in the form. When the save is completed, it calls OnSaveCompleted. This method fires a notification back to the UI that the save is completed, which causes the RadGridView (List) to show again. public virtual void OnSaveCompleted(SubmitOperation so) {     // Clear the item that is selected in the grid (in case we want to select it again)     SelectedItem = null;     // Notify the UI, so it can switch back to the List (RadGridView) page     NotifySaveCompleted(); }
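Since the excerpt above leans on DelegateCommand without showing it, here is a minimal sketch of the idea — an assumption about the implementation, not the author's actual class (Prism and other MVVM libraries ship richer versions): an ICommand that simply forwards Execute to a delegate supplied by the view model.

using System;
using System.Windows.Input;

// Minimal DelegateCommand sketch; real MVVM libraries add generics and richer CanExecute support.
public class DelegateCommand : ICommand
{
    private readonly Action _execute;
    private readonly Func<bool> _canExecute;

    public DelegateCommand(Action execute) : this(execute, null) { }

    public DelegateCommand(Action execute, Func<bool> canExecute)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        _execute = execute;
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute();
    }

    public void Execute(object parameter)
    {
        _execute();
    }

    // Lets the view model ask the UI to re-query CanExecute (e.g. to disable the Save button).
    public void RaiseCanExecuteChanged()
    {
        EventHandler handler = CanExecuteChanged;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

With something along these lines, SaveCommand = new DelegateCommand(OnSave) binds the view's Save button directly to the view model method, which is what removes the need for a Click handler in the code-behind.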

    Read the article

  • ASP.NET MVC returning ContentResult using Ajax form - how to preserve whitespace?

    - by Ben
In my application users can enter commands that are executed on the server. The results are added to a session object. I then stuff the session object into ViewData and add it to a textarea. When this is done with a standard HTML form, whitespace is preserved. However, when I swap this out for an ajax form (Ajax.BeginForm) and return the result as ContentResult, the whitespace is removed. Controller Action: [HttpPost] public ActionResult Execute(string submitButton, string command) { if (submitButton == "Clear") { this.CurrentConsole = string.Empty; } if (submitButton == "Execute" && !string.IsNullOrEmpty(command)) { var script = new PSScript() { Name = "Ad hoc script", CommandText = command }; this.CurrentConsole += _scriptService.ExecuteScript(script); } if (Request.IsAjaxRequest()) { return Content(this.CurrentConsole, "text/plain"); } return RedirectToAction("Index"); } View: <fieldset> <legend>Shell</legend> <%=Html.TextArea("console", ViewData["console"].ToString(), new {@class = "console", @readonly = "readonly"})%> <% using (Ajax.BeginForm("Execute", new AjaxOptions { UpdateTargetId = "console", OnBegin = "console_begin", OnComplete = "console_complete"})) { %> <input type="text" id="command" name="command" class="commandtext" /> <input type="submit" value="Execute" class="runbutton" name="submitButton" /> <input type="submit" value="Clear" class="runbutton" name="submitButton" /> <%} %> </fieldset> How can I ensure that whitespace is preserved? When I inspect the response in FireBug it looks like the whitespace is transmitted, so I can only assume it has something to do with the way in which the JavaScript handles the response data.

    Read the article

  • How to upload a file from iPhone SDK to an ASP.NET vb.net web form using ASIFormDataRequest

    - by user289348
Download http://allseeing-i.com/ASIHTTPRequest/. This works like a charm, is a good wrapper and handles things nicely. Then make the request this way to the ASP.NET code listed below. Create an ASP.NET webpage that has a FileUpload control. IPHONE CODE: NSURL *url = [NSURL URLWithString:@"http://YourWebSite/Upload.aspx"]; ASIFormDataRequest *request = [ASIFormDataRequest requestWithURL:url]; //These two must be added. ASP.NET looks for them; if //they are not there in the request, the file will not upload [request setPostValue:@"" forKey:@"__VIEWSTATE"]; [request setPostValue:@"" forKey:@"__EVENTVALIDATION"]; [request setFile:@"PATH_TO_Local_File_on_Iphone/file.jpg" forKey:@"fu"]; [request startSynchronous]; This is the website code: <%@ Page Language="VB" AutoEventWireup="false" CodeFile="Upload.aspx.vb" Inherits="Upload" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Untitled Page</title> </head> <body> <form id="form1" runat="server"> <div> <asp:FileUpload ID="fu" runat="server" EnableViewState="False" /> </div> <asp:Button ID="Submit" runat="server" Text="Submit" /> </form> </body> </html> 'Code-behind page Partial Class Upload Inherits System.Web.UI.Page Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load If fu.HasFile = True Then 'fu.PostedFile fu.SaveAs("E:\InetPub\UploadedImage\" & fu.FileName) End If End Sub End Class
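If changing the server side is an option, a generic handler avoids the __VIEWSTATE/__EVENTVALIDATION workaround altogether, because it has no WebForms page lifecycle. This is only a sketch in C# (the original uses a VB code-behind page); the handler name and save path are assumptions, and the file key must match the forKey: value used in ASIFormDataRequest ("fu" above).

<%@ WebHandler Language="C#" Class="UploadHandler" %>

using System.IO;
using System.Web;

public class UploadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // ASIFormDataRequest posts the file under the key passed to setFile:forKey:
        HttpPostedFile file = context.Request.Files["fu"];
        if (file != null && file.ContentLength > 0)
        {
            // Save under the original file name; adjust the target folder to your environment.
            string target = Path.Combine(@"E:\InetPub\UploadedImage", Path.GetFileName(file.FileName));
            file.SaveAs(target);
            context.Response.Write("OK");
        }
        else
        {
            context.Response.StatusCode = 400;
            context.Response.Write("No file received");
        }
    }

    public bool IsReusable { get { return false; } }
}

The iPhone code would then post to http://YourWebSite/UploadHandler.ashx and could drop the two empty setPostValue: calls.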

    Read the article

  • Treeview - Hierarchical Data Template - Binding does not update on source change?

    - by ClearsTheScreen
Greetings! I ran into this problem in my project (Silverlight 3 with C#): I have a TreeView which is data bound to, well, a tree. This TreeView has a HierarchicalDataTemplate in a resource dictionary that defines various controls. Now I want to hide (Visibility.Collapsed) some items depending on whether a node has children or not. Other items shall be visible under the same condition. It works like a charm when I first bind the source tree to the TreeView, but when I change the source tree, the visibility in the TreeView does not change. XAML - page: <controls:TreeView x:Name="SankeyTreeView" ItemContainerStyle="{StaticResource expandedTreeViewItemStyle}" ItemTemplate="{StaticResource SankeyTreeTemplate}"> <controls:TreeViewItem IsExpanded="True"> <controls:TreeViewItem.HeaderTemplate> <DataTemplate> <TextBlock Text="This is just for loading and will be replaced directly after the data becomes available..."/> </DataTemplate> </controls:TreeViewItem.HeaderTemplate> </controls:TreeViewItem> </controls:TreeView> XAML - ResourceDictionary: <!-- Each node in the tree is structurally identical, hence only one Hierarchical Data Template that'll use itself on the children. --> <Data:HierarchicalDataTemplate x:Key="SankeyTreeTemplate" ItemsSource="{Binding Children}"> <Grid Height="24"> <TextBlock x:Name="TextBlockName" Text="{Binding Path=Value.name, Mode=TwoWay}" VerticalAlignment="Center" Foreground="Black"/> <TextBox x:Name="TextBoxFlow" Text="{Binding Path=Value.flow, Mode=TwoWay}" Grid.Column="1" Visibility="{Binding Children, Converter={StaticResource BoxConverter}, ConverterParameter=\{box\}}"/> <TextBlock x:Name="TextBlockThroughput" Text="{Binding Path=Value.throughput, Mode=TwoWay}" Grid.Column="1" Visibility="{Binding Children, Converter={StaticResource BoxConverter}, ConverterParameter=\{block\}}"/> <Button x:Name="ButtonAddNode"/> <Button x:Name="ButtonDeleteNode"/> <Button x:Name="ButtonEditNode"/> </Grid> </Data:HierarchicalDataTemplate> Now, as you can see, the TextBoxFlow and the TextBlockThroughput share the same space. What I aim at: The "Throughput" value of a node is how much of something 'flows' through this node from its children. It can't be changed directly, so I want to display a text block. Only leaf nodes have a TextBox to let someone enter the 'flow' that is generated in this leaf node. (I.E.: Node.Throughput = Node.Flow + Sum(Children.Throughput), where Node.Flow = 0 for each non-leaf.) What the BoxConverter (silly name -.-) does: public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { if ((value as NodeList<TreeItem>).Count > 1) // Node has Children? { if ((parameter as String) == "{box}") { return Visibility.Collapsed; } else if ((parameter as String) == "{block}") { return Visibility.Visible; } } else { /* * As above, just with Collapsed and Visible switched */ } } The structure of the tree that is bound to the TreeView is essentially stolen from Dan Vanderboom (a bit too much to dump the whole code here), except that here I of course use an ObservableCollection for the children and the value items implement INotifyPropertyChanged. I would be very grateful if someone could explain to me why inserting items into the underlying tree does not update the visibility for box and block. Thank you in advance!
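One thing that often explains this (offered as a sketch, not taken from the post — the node class below is hypothetical and the real tree type differs): the BoxConverter only runs again when the bound property itself raises a change notification, and adding children to an ObservableCollection does not raise PropertyChanged for the Children property on the parent node. Exposing a HasChildren flag (or re-raising PropertyChanged for Children) whenever the collection changes gives the Visibility bindings something that actually notifies:

using System.Collections.ObjectModel;
using System.Collections.Specialized;
using System.ComponentModel;

// Hypothetical node type for illustration only.
public class TreeNode<T> : INotifyPropertyChanged
{
    public T Value { get; set; }
    public ObservableCollection<TreeNode<T>> Children { get; private set; }

    public TreeNode()
    {
        Children = new ObservableCollection<TreeNode<T>>();
        // When children are added or removed, re-notify so bindings (and converters) re-evaluate.
        Children.CollectionChanged += OnChildrenChanged;
    }

    // Binding Visibility to this bool (plus a simple bool-to-Visibility converter)
    // tends to be more reliable than converting the collection itself.
    public bool HasChildren
    {
        get { return Children.Count > 0; }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnChildrenChanged(object sender, NotifyCollectionChangedEventArgs e)
    {
        RaisePropertyChanged("Children");
        RaisePropertyChanged("HasChildren");
    }

    private void RaisePropertyChanged(string name)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}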

    Read the article

  • Force Postback from code behind? Or reload JavaScript from an Asynchronous Postback?

    - by sah302
Hi all, I've got a jQuery UI dialog that pops up to confirm the creation of an item after filling out a form. I have the form in an update panel due to various needs of the form, and especially because I want validation done on the form without reloading the page. JavaScript appears to not reload on an asynchronous postback. This means when the form is a success and I change the variable 'formSubmitPass' to true, it does not get passed to the JavaScript via <%= formSubmitPass %>. If I add a trigger to the submit button to do a full postback, it works. However I don't want the submit button to do a full postback, as I said, so I can validate the form within the update panel. How can I have this so my form validates asynchronously, but my JavaScript will properly reload when the form is completed successfully and the item is saved to the database? JavaScript: var formSubmitPass = '<%= formSubmitPass %>'; var redirectUrl = '<%= redirectUrl %>'; function pageLoad() { $('#formPassBox').dialog({ autoOpen: false, width: 400, resizable: false, modal: true, draggable: false, buttons: { "Ok": function() { window.location.href = redirectUrl; } }, open: function(event, ui) { $(".ui-dialog-titlebar-close").hide(); var t = window.setTimeout("goToUrl()", 5000); } }); if(formSubmitPass == 'True') { $('#formPassBox').dialog({ autoOpen: true }); } } So how can I force a postback from the code-behind, or reload the JavaScript on an asynchronous postback, or do this in a way that will work such that I can continue to do async form validation? Edit: I change formSubmitPass at the very end of the code-behind: If errorCount = 0 Then formSubmitPass = True upForm.Update() Else formSubmitPass = False End If So on a full postback, the value does change.
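One approach that usually fits this situation, sketched in C# (the question's code-behind is VB; upForm comes from the snippet above, while the page class and helper names are assumptions): rather than relying on <%= %> blocks, which are evaluated when the page is rendered and not during a partial postback, register the dialog-opening script from the server after a successful save. ScriptManager.RegisterStartupScript runs the script when the UpdatePanel's asynchronous postback completes.

using System;
using System.Web.UI;

public partial class ItemForm : Page   // hypothetical page class
{
    // The UpdatePanel declared in the markup (normally wired up by the designer file;
    // declared here only so the sketch is self-contained).
    protected UpdatePanel upForm;

    protected void Submit_Click(object sender, EventArgs e)
    {
        int errorCount = ValidateForm();   // assumed validation helper

        if (errorCount == 0)
        {
            SaveItem();                    // assumed persistence helper

            // Runs client-side after the async postback finishes and opens the jQuery UI dialog.
            ScriptManager.RegisterStartupScript(
                upForm,
                upForm.GetType(),
                "openPassDialog",
                "$('#formPassBox').dialog('open');",
                true);                     // addScriptTags
        }
    }

    private int ValidateForm() { return 0; }   // placeholder
    private void SaveItem() { }                // placeholder
}

This removes the need for the formSubmitPass flag entirely, since the decision to open the dialog is made on the server and pushed to the client with the partial response.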

    Read the article

  • OpenCV : How to display webcam capture in windows form application?

    - by sneixum
Generally we display webcam or video motion in OpenCV windows with: CvCapture* capture = cvCreateCameraCapture(0); cvNamedWindow( "title", CV_WINDOW_AUTOSIZE ); cvMoveWindow("title",x,y); while(1) { frame = cvQueryFrame( capture ); if( !frame ) { break; } cvShowImage( "title", frame ); char c = cvWaitKey(33); if( c == 27 ) { break; } } I tried to use a pictureBox, which successfully displays an image in a Windows Form with this: pictureBox1->Image = gcnew System::Drawing::Bitmap( image->width, image->height, image->widthStep, System::Drawing::Imaging::PixelFormat::Undefined, ( System::IntPtr ) image->imageData); but when I try to display the captured image from video it won't work; here is the source: CvCapture* capture = cvCreateCameraCapture(0); while(1) { frame = cvQueryFrame( capture ); if( !frame ) { break; } pictureBox1->Image = gcnew System::Drawing::Bitmap( frame->width, frame->height, frame->widthStep, System::Drawing::Imaging::PixelFormat::Undefined, ( System::IntPtr ) frame->imageData); char c = cvWaitKey(33); if( c == 27 ) { break; } } Is there any way to use a Windows Form instead of OpenCV windows to show video or the webcam? Or is there something wrong with my code? Thanks for your help.. :)

    Read the article

  • Overwrite clean method in Django Custom Forms

    - by John
Hi, I have written a custom widget: class AutoCompleteWidget(widgets.TextInput): """ widget to show an autocomplete box which returns a list of nodes available to be tagged """ def render(self, name, value, attrs=None): final_attrs = self.build_attrs(attrs, name=name) if not self.attrs.has_key('id'): final_attrs['id'] = 'id_%s' % name if not value: value = '[]' jquery = u""" <script type="text/javascript"> $("#%s").tokenInput('%s', { hintText: "Enter the word", noResultsText: "No results", prePopulate: %s, searchingText: "Searching..." }); $("body").focus(); </script> """ % (final_attrs['id'], reverse('ajax_autocomplete'), value) output = super(AutoCompleteWidget, self).render(name, "", attrs) return output + mark_safe(jquery) class MyForm(forms.Form): AutoComplete = forms.CharField(widget=AutoCompleteWidget) This widget uses a jQuery function which autocompletes a word based on entries from the database. You can preset its initial values by setting prePopulate to a JSON string in the form ['name': 'some name', 'id': 'some id']. I do this by setting the initial value of the form field to this JSON string: jquery_string = ['name': 'some name', 'id': 'some id'] form = MyForm(initial={'AutoComplete':jquery_string}) When submitting the form, the value of AutoComplete is returned as a comma-separated list of the selected ids, e.g. 12,45,43,66, which is what I want. However, if there is an error in the form, for example a required field has not been entered, the value of the AutoComplete field is now 12,45,43,66 and not the JSON string which it requires. What is the best way to solve this? I was thinking about overwriting the clean method in the form class, but I'm not sure how to find out if any other element has returned an error. e.g. if forms.errors form.cleaned_data['autocomplete'] = json string return form.cleaned_data Thanks

    Read the article

  • Validate MVC 2 form using Data annotations and Linq-to-SQL, before the model binder kicks in (with D

    - by Stefanvds
I'm using LINQ to SQL and MVC 2 with data annotations, and I'm having some problems with validation of some types. For example: [DisplayName("Geplande sessies")] [PositiefGeheelGetal(ErrorMessage = "Ongeldige ingave. Positief geheel getal verwacht")] public string Proj_GeplandeSessies { get; set; } This is an integer, and I'm validating to get a positive number from the form. public class PositiefGeheelGetalAttribute : RegularExpressionAttribute { public PositiefGeheelGetalAttribute() : base(@"\d{1,7}") { } } Now the problem is that when I write text in the input, I don't get to see THIS error, but I get the error message from the model binder saying "The value 'Tomorrow' is not valid for Geplande sessies." The code in the controller: [HttpPost] public ActionResult Create(Projecten p) { if (ModelState.IsValid) { _db.Projectens.InsertOnSubmit(p); _db.SubmitChanges(); return RedirectToAction("Index"); } else { SelectList s = new SelectList(_db.Verbonds, "Verb_ID", "Verb_Naam"); ViewData["Verbonden"] = s; } return View(); } What I want is to be able to run the data annotations before the model binder, but that sounds pretty much impossible. What I really want is that my self-written error messages show up on the screen. I have the same problem with a DateTime, which I want the users to write in the specific form 'dd/MM/yyyy', and I have a regex for that. But again, by the time the data annotations do their job, all I get is a DateTime object, and not the original string. So if the input is not a date, the regex does not even run, because the data annotations just get a null, because the model binder couldn't turn it into a DateTime. Does anyone have an idea how to make this work?
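A workaround that is often suggested for exactly this (a sketch with assumed names — this is not the poster's model, and the view would need to bind to the new type): post to an edit model whose properties are plain strings, so the regular-expression attributes always see the raw text the user typed, and only convert to int/DateTime after ModelState.IsValid.

using System;
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;

// Hypothetical edit model: everything the user types stays a string until it has been validated.
public class ProjectEditModel
{
    [DisplayName("Geplande sessies")]
    [RegularExpression(@"^\d{1,7}$", ErrorMessage = "Ongeldige ingave. Positief geheel getal verwacht")]
    public string GeplandeSessies { get; set; }

    [RegularExpression(@"^\d{2}/\d{2}/\d{4}$", ErrorMessage = "Ongeldige datum. Formaat dd/MM/yyyy verwacht")]
    public string StartDatum { get; set; }
}

public class ProjectenController : Controller
{
    [HttpPost]
    public ActionResult Create(ProjectEditModel input)
    {
        if (!ModelState.IsValid)
        {
            return View(input);   // the attribute messages show up, not the binder's
        }

        // Conversion happens only after the string-level validation has passed.
        int geplandeSessies = int.Parse(input.GeplandeSessies);
        DateTime startDatum = DateTime.ParseExact(input.StartDatum, "dd/MM/yyyy", null);

        // Map onto the LINQ to SQL entity and save (details omitted).
        return RedirectToAction("Index");
    }
}

Because the model binder never has to coerce 'Tomorrow' into an int or a DateTime, it never gets the chance to add its own "The value ... is not valid" message.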

    Read the article

  • installed openstack using devstack install shell script but getting 500 error when i try opening dashboard

    - by Arvind
    I followed the instructions at http://devstack.org/guides/single-machine.html to install OpenStack. I first installed Ubuntu on my Windows 7 PC using the officially supported Windows installer for Ubuntu 12.04 LTS. And after that I followed the instructions at above page to install OpenStack. As per instructions, I should be able to access the dashboard aka Horizon, at http://192.168.1.4/ (thats the IP of the PC on which I installed Ubuntu-OpenStack). However I am getting a 500 error web page when I open that. How do I resolve this error? I want to set up a dev environment for OpenStack. For your ref, the whole error message is given now-- FilterError at / /usr/bin/env: node: No such file or directory Request Method: GET Request URL: http://192.168.1.4/ Django Version: 1.4.2 Exception Type: FilterError Exception Value: /usr/bin/env: node: No such file or directory Exception Location: /usr/local/lib/python2.7/dist-packages/compressor/filters/base.py in input, line 133 Python Executable: /usr/bin/python Python Version: 2.7.3 Python Path: ['/opt/stack/horizon/openstack_dashboard/wsgi/../..', '/opt/stack/python-keystoneclient', '/opt/stack/python-novaclient', '/opt/stack/python-openstackclient', '/opt/stack/keystone', '/opt/stack/glance', '/opt/stack/python-glanceclient/setuptools_git-0.4.2-py2.7.egg', '/opt/stack/python-glanceclient', '/opt/stack/nova', '/opt/stack/horizon', '/opt/stack/cinder', '/opt/stack/python-cinderclient', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PIL', '/usr/lib/python2.7/dist-packages/gst-0.10', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7', '/usr/lib/python2.7/dist-packages/ubuntu-sso-client', '/usr/lib/python2.7/dist-packages/ubuntuone-client', '/usr/lib/python2.7/dist-packages/ubuntuone-control-panel', '/usr/lib/python2.7/dist-packages/ubuntuone-couch', '/usr/lib/python2.7/dist-packages/ubuntuone-storage-protocol', '/opt/stack/horizon/openstack_dashboard'] Server time: Sat, 27 Oct 2012 08:43:29 +0000 Error during template rendering In template /opt/stack/horizon/openstack_dashboard/templates/_stylesheets.html, error at line 3 /usr/bin/env: node: No such file or directory 1 {% load compress %} 2 3 {% compress css %} 4 <link href='{{ STATIC_URL }}dashboard/less/horizon.less' type='text/less' media='screen' rel='stylesheet' /> 5 {% endcompress %} 6 7 <link rel="shortcut icon" href="{{ STATIC_URL }}dashboard/img/favicon.ico"/> 8 Also, the traceback is now given below-- Environment: Request Method: GET Request URL: http://192.168.1.4/ Django Version: 1.4.2 Python Version: 2.7.3 Installed Applications: ('openstack_dashboard', 'django.contrib.contenttypes', 'django.contrib.auth', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.humanize', 'compressor', 'horizon', 'openstack_dashboard.dashboards.project', 'openstack_dashboard.dashboards.admin', 'openstack_dashboard.dashboards.settings', 'openstack_auth') Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'horizon.middleware.HorizonMiddleware', 'django.middleware.doc.XViewMiddleware', 
'django.middleware.locale.LocaleMiddleware') Template error: In template /opt/stack/horizon/openstack_dashboard/templates/_stylesheets.html, error at line 3 /usr/bin/env: node: No such file or directory 1 : {% load compress %} 2 : 3 : {% compress css %} 4 : <link href='{{ STATIC_URL }}dashboard/less/horizon.less' type='text/less' media='screen' rel='stylesheet' /> 5 : {% endcompress %} 6 : 7 : <link rel="shortcut icon" href="{{ STATIC_URL }}dashboard/img/favicon.ico"/> 8 : Traceback: File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py" in get_response 111. response = callback(request, *callback_args, **callback_kwargs) File "/usr/local/lib/python2.7/dist-packages/django/views/decorators/vary.py" in inner_func 36. response = func(*args, **kwargs) File "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/views.py" in splash 38. return shortcuts.render(request, 'splash.html', {'form': form}) File "/usr/local/lib/python2.7/dist-packages/django/shortcuts/__init__.py" in render 44. return HttpResponse(loader.render_to_string(*args, **kwargs), File "/usr/local/lib/python2.7/dist-packages/django/template/loader.py" in render_to_string 176. return t.render(context_instance) File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in render 140. return self._render(context) File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in _render 134. return self.nodelist.render(context) File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in render 823. bit = self.render_node(node, context) File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py" in render_node 74. return node.render(context) File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py" in render 155. return self.render_template(self.template, context) File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py" in render_template 137. output = template.render(context) File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in render 140. return self._render(context) File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in _render 134. return self.nodelist.render(context) File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in render 823. bit = self.render_node(node, context) File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py" in render_node 74. return node.render(context) File "/usr/local/lib/python2.7/dist-packages/compressor/templatetags/compress.py" in render 147. return self.render_compressed(context, self.kind, self.mode, forced=forced) File "/usr/local/lib/python2.7/dist-packages/compressor/templatetags/compress.py" in render_compressed 107. rendered_output = self.render_output(compressor, mode, forced=forced) File "/usr/local/lib/python2.7/dist-packages/compressor/templatetags/compress.py" in render_output 119. return compressor.output(mode, forced=forced) File "/usr/local/lib/python2.7/dist-packages/compressor/css.py" in output 51. ret.append(subnode.output(*args, **kwargs)) File "/usr/local/lib/python2.7/dist-packages/compressor/css.py" in output 53. return super(CssCompressor, self).output(*args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/compressor/base.py" in output 230. content = self.filter_input(forced) File "/usr/local/lib/python2.7/dist-packages/compressor/base.py" in filter_input 192. for hunk in self.hunks(forced): File "/usr/local/lib/python2.7/dist-packages/compressor/base.py" in hunks 167. 
precompiled, value = self.precompile(value, **options) File "/usr/local/lib/python2.7/dist-packages/compressor/base.py" in precompile 210. command=command, filename=filename).input(**kwargs) File "/usr/local/lib/python2.7/dist-packages/compressor/filters/base.py" in input 133. raise FilterError(err) Exception Type: FilterError at / Exception Value: /usr/bin/env: node: No such file or directory

    Read the article

  • MS-Access: What could cause one form with a join query to load right and another not?

    - by Daniel Straight
    Form1 Form1 is bound to Table1. Table1 has an ID field. Form2 Form2 is bound to Table2 joined to Table1 on Table2.Table1_ID=Table1.ID Here is the SQL (generated by Access): SELECT Table2.*, Table1.[FirstFieldINeed], Table1.[SecondFieldINeed], Table1.[ThirdFieldINeed] FROM Table1 INNER JOIN Table2 ON Table1.ID = Table2.[Table1_ID]; Form2 is opened with this code in Form1: DoCmd.RunCommand acCmdSaveRecord DoCmd.OpenForm "Form2", , , , acFormAdd, , Me.[ID] DoCmd.Close acForm, "Form1", acSaveYes And when loaded runs: Me.[Table1_ID] = Me.OpenArgs When Form2 is loaded, fields bound to columns from Table1 show up correctly. Form3 Form3 is bound to Table3 joined to Table2 on Table3.Table2_ID=Table2.ID Here is the SQL (generated by Access): SELECT Table3.*, Table2.[FirstFieldINeed], Table2.[SecondFieldINeed] FROM Table2 INNER JOIN Table3 ON Table2.ID = Table3.[Table2_ID]; Form3 is opened with this code in Form2: DoCmd.RunCommand acCmdSaveRecord DoCmd.OpenForm "Form3", , , , acFormAdd, , Me.[ID] DoCmd.Close acForm, "Form2", acSaveYes And when loaded runs: Me.[Table2_ID] = Me.OpenArgs When Form3 is loaded, fields bound to columns from Table2 do not show up correctly. WHY? UPDATES I tried making the join query into a separate query and using that as my record source, but it made no difference at all. If I go to the query for Form3 and view it in datasheet view, I can see that the information that should be pulled into the form is there. It just isn't showing up on the form.

    Read the article

  • Is it possible to run my Windows Form application in Windows CE platform?

    - by Fakhrul
I am new to Windows CE development and have never done it before, so I need some advice from the experts here. In our current project, we are developing a client-server application. The client side is a Windows Forms application that runs on Windows XP, while the server is a web-based application. This question relates to the client application (Windows Forms). The application uses SQL Server Express Edition for data storage, with the data stored in XML object format. It can also transfer data from client to server via a web service, and it interacts with hardware such as a magnetic stripe reader, a contactless smart card reader, and a thermal printer. Most of the communication between the hardware devices and the system is based on the serial port. It uses the standard app.config for configuration and is a multi-threaded application. There is a new requirement to use a handheld device running the Windows CE platform. This handheld includes the required equipment such as a contactless smart card reader, printer and magnetic stripe reader. Instead of developing a new client application, is it possible for me to convert my current Windows XP-based application to Windows CE? If yes, how can I do that? If not, is there any other brilliant suggestion for doing this? Thanks in advance. Software Engineer
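There is no automatic conversion: a Windows CE handheld runs the .NET Compact Framework, so the client would be rebuilt as a Smart Device project with forms sized for the small screen. What usually can be carried over is the non-UI code, by linking the same source files into both projects. As a rough illustration (the class, port settings and protocol below are assumptions, not from the original project), serial-port code like this compiles on both the full framework and .NET CF 2.0 or later:

using System;
using System.IO.Ports;   // available on the desktop framework and .NET CF 2.0+
using System.Text;

// Hypothetical shared reader for a serial magnetic stripe reader.
public class StripeReader : IDisposable
{
    private readonly SerialPort _port;

    public StripeReader(string portName)
    {
        _port = new SerialPort(portName, 9600, Parity.None, 8, StopBits.One);
        _port.ReadTimeout = 2000;
    }

    public void Open()
    {
        if (!_port.IsOpen) _port.Open();
    }

    // Reads one swipe terminated by a carriage return (illustrative protocol).
    public string ReadSwipe()
    {
        StringBuilder buffer = new StringBuilder();
        int b;
        while ((b = _port.ReadByte()) != '\r')
        {
            buffer.Append((char)b);
        }
        return buffer.ToString();
    }

    public void Dispose()
    {
        _port.Close();
    }
}

Pieces that depend on desktop-only features (SQL Server Express, full app.config support via ConfigurationManager) would need Compact Framework equivalents such as SQL Server Compact, so expect a port rather than a straight recompile.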

    Read the article

< Previous Page | 200 201 202 203 204 205 206 207 208 209 210 211  | Next Page >