Search Results

Search found 19807 results on 793 pages for 'xml binding'.


  • ServletException: Property <variable name> not found

    - by k9yosh
    What i'm trying to do is add a new variable to this previously created Managed Bean Hello.java and use it in my xhtml file binding to a text field. But it seems that it is not being found when i run it on the server. So it throws a "ServeletException" and says that the "property 'lname'(my variable) is not found". How do i solve this and why is this happening? This is my managed bean, package stack.tute.malinda.model; import javax.faces.bean.ManagedBean; import javax.faces.bean.RequestScoped; @ManagedBean @RequestScoped public class Hello { private String fname; private String message; private String lname; //trying to add this new variable and use it in my xhtml file in a text field. public String getLname() { return lname; } public void setLname(String lname) { this.lname = lname; } public String getName() { return fname; } public String createMessage() { message="Hello " + fname + ""+ lname +"!"; return null; } public void setName(String fname) { this.fname=fname; } public String getMessage() { return message; } } This is my xhtml code, <h:body> <fieldset style="padding: 1em; float:left; margin-right:0.5em; padding-top:0.2em; text-align:left; border:1px solid green; font-weight:bold;"> <legend>Personal Details</legend> <h:form> <h:outputLabel for="name" value="First Name :" required="true"/> <h:inputText id="name" value="#{hello.name}"/> <br/> //Trying to access that variable here. <h:outputLabel for="name1" value="Last Name :" required="true"/> <h:inputText id="name1" value="#{hello.lname}"/> <h:message for="name"/> <br/> <h:commandButton value="Say hello" action="#{hello.createMessage}"> <f:ajax execute="@form" render="@form"/> </h:commandButton> <br/> <h:outputText value="#{hello.message}"/> </h:form> </fieldset>

    Read the article

  • Solving a cyclical dependency in Ninject (Compact Framework)

    - by Alex
    I'm trying to use Ninject for dependency injection in my MVP application. However, I have a problem because I have two types that depend on each other, thus creating a cyclic dependency. At first, I understand that it was a problem, because I had both types require each other in their constructors. Therefore, I moved one of the dependencies to a property injection instead, but I'm still getting the error message. What am I doing wrong? This is the presenter: public class LoginPresenter : Presenter<ILoginView>, ILoginPresenter { public LoginPresenter( ILoginView view ) : base( view ) { } } and this is the view: public partial class LoginForm : Form, ILoginView { [Inject] public ILoginPresenter Presenter { private get; set; } public LoginForm() { InitializeComponent(); } } And here's the code that causes the exception: static class Program { /// <summary> /// The main entry point for the application. /// </summary> [MTAThread] static void Main() { // Show the login form Views.LoginForm loginForm = Kernel.Get<Views.Interfaces.ILoginView>() as Views.LoginForm; Application.Run( loginForm ); } } The exception happens on the line with the Kernel.Get<>() call. Here it is: Error activating ILoginPresenter using binding from ILoginPresenter to LoginPresenter A cyclical dependency was detected between the constructors of two services. Activation path: 4) Injection of dependency ILoginPresenter into property Presenter of type LoginForm 3) Injection of dependency ILoginView into parameter view of constructor of type LoginPresenter 2) Injection of dependency ILoginPresenter into property Presenter of type LoginForm 1) Request for ILoginView Suggestions: 1) Ensure that you have not declared a dependency for ILoginPresenter on any implementations of the service. 2) Consider combining the services into a single one to remove the cycle. 3) Use property injection instead of constructor injection, and implement IInitializable if you need initialization logic to be run after property values have been injected. Why doesn't Ninject understand that since one is constructor injection and the other is property injection, this can work just fine? I even read somewhere looking for the solution to this problem that Ninject supposedly gets this right as long as the cyclic dependency isn't both in the constructors. Apparently not, though. Any help resolving this would be much appreciated.
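
    One way around this that keeps the design is to stop asking Ninject to complete the cycle at all. The sketch below is a hedged illustration only (AppModule and the cast to LoginForm are assumptions, not code from the question): resolve the view on its own, then construct the presenter by hand and assign the property, so neither object is required while the other is still being activated.

        static void Main()
        {
            // Hypothetical module that binds ILoginView to LoginForm; no binding for ILoginPresenter is needed here.
            IKernel kernel = new StandardKernel(new AppModule());

            // Activate the view alone; its Presenter property is still null at this point.
            var loginForm = (Views.LoginForm)kernel.Get<Views.Interfaces.ILoginView>();

            // Close the loop manually instead of letting Ninject inject it, which is what triggers the cycle check.
            loginForm.Presenter = new LoginPresenter(loginForm);

            Application.Run(loginForm);
        }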

    Read the article

  • ASP.NET MVC - Kigg: Maintain a User object in HttpContext.Items between requests

    - by Pickels
    Hello, first I want to say that I hope this doesn't look like I am lazy, but I have some trouble understanding a piece of code from the following project: http://kigg.codeplex.com/ I was going through the source code and noticed something that would be useful for my own little project. In their BaseController they have the following code: private static readonly Type CurrentUserKey = typeof(IUser); public IUser CurrentUser { get { if (!string.IsNullOrEmpty(CurrentUserName)) { IUser user = HttpContext.Items[CurrentUserKey] as IUser; if (user == null) { user = AccountRepository.FindByClaim(CurrentUserName); if (user != null) { HttpContext.Items[CurrentUserKey] = user; } } return user; } return null; } } This isn't an exact copy of the code; I adjusted it a little to my needs. This part of the code I still understand: they store their IUser in HttpContext.Items, I guess so that they don't have to call the database each time they need the User object. The part that I don't understand is how they maintain this object between requests. If I understand correctly, HttpContext.Items is a per-request cache. So after some more digging I found the following code: internal static IDictionary<UnityPerWebRequestLifetimeManager, object> GetInstances(HttpContextBase httpContext) { IDictionary<UnityPerWebRequestLifetimeManager, object> instances; if (httpContext.Items.Contains(Key)) { instances = (IDictionary<UnityPerWebRequestLifetimeManager, object>) httpContext.Items[Key]; } else { lock (httpContext.Items) { if (httpContext.Items.Contains(Key)) { instances = (IDictionary<UnityPerWebRequestLifetimeManager, object>) httpContext.Items[Key]; } else { instances = new Dictionary<UnityPerWebRequestLifetimeManager, object>(); httpContext.Items.Add(Key, instances); } } } return instances; } This is the part where some magic happens that I don't understand. I think they use Unity to do some dependency injection on each request? In my project I am using Ninject and I am wondering how I can get the same result. I guess InRequestScope in Ninject is the same as UnityPerWebRequestLifetimeManager? I am also wondering which class/method they are binding to which interface. Since HttpContext.Items gets destroyed after each request, how do they prevent losing their user object? Anyway, it's kind of a long question, so I am grateful for any push in the right direction. Kind regards, Pickels
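
    To be clear about what the Kigg code is doing: nothing survives between requests. HttpContext.Items only prevents repeated repository calls within a single request; on the next request the user is simply looked up again via AccountRepository.FindByClaim using the authenticated user name. The rough Ninject counterpart of UnityPerWebRequestLifetimeManager is indeed InRequestScope. A hedged sketch (it assumes the Ninject.Web.Common extension, which supplies InRequestScope; the interface and class names are illustrative):

        public class RepositoryModule : NinjectModule
        {
            public override void Load()
            {
                // One instance per HTTP request, discarded when the request ends,
                // which is roughly what UnityPerWebRequestLifetimeManager gives Unity.
                Bind<IAccountRepository>().To<AccountRepository>().InRequestScope();
            }
        }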

    Read the article

  • Retain a list of objects and pass it to the create/edit view when validation fails in ASP.NET MVC 2

    - by brainnovative
    I am binding a Foreign key property in my model. I am passing a list of possible values for that property in my model. The model looks something like this: public class UserModel { public bool Email { get; set; } public bool Name { get; set; } public RoleModel Role { get; set; } public IList<RoleModel> Roles { get; set; } } public class RoleModel { public string RoleName { get; set; } } This is what I have in the controller: public ActionResult Create() { IList<RoleModel> roles = RoleModel.FromArray(_userService.GetAllRoles()); UserModel model = new UserModel() { Roles = roles }; return View(model); } In the view I have: <div class="editor-label"> <%= Html.LabelFor(model => model.Role) %> </div> <div class="editor-field"> <%= Html.DropDownListFor(model => model.Role, new SelectList(Model.Roles, "RoleName", "RoleName", Model.Role))%> <%= Html.ValidationMessageFor(model => model.Role)%> </div> What do I need to do to get the list of roles back to my controller to pass it again to the view when validation fails. This is what I need: [HttpPost] public ActionResult Create(UserModel model) { if (ModelState.IsValid) { // insert logic here } //the validation fails so I pass the model again to the view for user to update data but model.Roles is null :( return View(model); } As written in the comments above I need to pass the model with the list of roles again to my view but model.Roles is null. Currently I ask the service again for the roles (model.Roles = RoleModel.FromArray(_userService.GetAllRoles());) but I don't want to add an extra overhead of getting the list from DB when I have already done that.. Anyone knows how to do it?
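
    A hedged sketch of one common approach (GetCachedRoles is a hypothetical helper and the cache duration is arbitrary): the posted form never contains the whole list, so the list has to be rebuilt on a failed POST, but it can come from ASP.NET's cache rather than the database.

        [HttpPost]
        public ActionResult Create(UserModel model)
        {
            if (ModelState.IsValid)
            {
                // insert logic here
                return RedirectToAction("Index");
            }

            // Validation failed: rebuild the dropdown source without another DB round trip.
            model.Roles = GetCachedRoles();
            return View(model);
        }

        private IList<RoleModel> GetCachedRoles()
        {
            var roles = HttpContext.Cache["AllRoles"] as IList<RoleModel>;
            if (roles == null)
            {
                roles = RoleModel.FromArray(_userService.GetAllRoles());
                HttpContext.Cache.Insert("AllRoles", roles, null,
                    DateTime.UtcNow.AddMinutes(10),
                    System.Web.Caching.Cache.NoSlidingExpiration);
            }
            return roles;
        }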

    Read the article

  • Why does the parent page get refreshed when I click the link to open a Thickbox-styled form?

    - by user333205
    Hi all, I'm using Thickbox 3.1 to show a signup form. The form content comes from a jQuery ajax post. The jQuery lib is version 1.4.2. I placed a "signup" link into a div area, which is part of my other large pages, and the whole content of that div area is ajax-posted from my server. To make Thickbox work in the above arrangement, I have modified the Thickbox code a little, like this: //add thickbox to href & area elements that have a class of .thickbox function tb_init(domChunk){ $(domChunk).live('click', function(){ var t = this.title || this.name || null; var a = this.href || this.alt; var g = this.rel || false; tb_show(t,a,g); this.blur(); return false; });} This modification is the only change from the original version. Because the "signup" link is placed in ajaxed content, I use live instead of binding the click event directly. When I tested on my PC, the Thickbox worked well. I could see the signup form quickly, without noticing the content of the parent page (here, the other large pages) get refreshed. But after transferring my site files to the virtual host, when I click the "signup" link the signup form is presented very slowly. The large pages visibly get refreshed, because the browser (IE6) keeps reloading images from the server. These images are set as background images in CSS files. I think that's because of the slow network connection. But why do the parent pages get refreshed? And why does the browser reload those images again? Haven't those images already been saved to the local computer's disk? Is there a way to stop that reloading? Sometimes the signup form can't be displayed at all due to the slow network connection. To see the problem, you can access http://www.juliantec.info/track-the-source.html and click the second link in the left grey area; that is the "signup" link mentioned above. Thanks!

    Read the article

  • jQuery function call with parameters

    - by Kaushik Gopal
    Hi a newb question: I have a table with a bunch of buttons like so: <tr class="hr-table-cell" > <td>REcord 1</td> <td> <INPUT type="button" value="Approve" onclick="" /> <INPUT type="button" value="Reject" onclick="" /> <INPUT type="button" value="Delete" onclick="fnDeletePpAppl(222445,704);" /> </td> </tr> <tr class="hr-table-cell" > <td>REcord 1</td> <td align="center" class="hr-table-bottom-blue-border" valign="middle"> <INPUT type="button" value="Approve" onclick="" /> <INPUT type="button" value="Reject" onclick="" /> <INPUT type="button" value="Delete" onclick="fnDeletePpAppl(237760,776);" /> </td> </tr> I have my jquery like so: <script type="text/javascript"> // JQUERY stuff $(document).ready(function(){ function fnDeletePpAppl(empno, applno) { alert('Entering here'); $("form").get(0).empno.value = empno; $("form").get(0).applNo.value = applno; $("form").get(0).listPageAction.value = "delete"; $("form").get(0).action.value = "pprelreqlist.do"; $("form").get(0).submit(); } }); This doesn't seem to work.I thought this means, the function is ready only after the dom is ready. After the dom is ready and i click the button, why is not recognizing the function declaration within the .ready() function? However if i use the function directly: <script type="text/javascript"> function fnDeletePpAppl(empno, applno) { alert('Entering here'); $("form").get(0).empno.value = empno; $("form").get(0).applNo.value = applno; $("form").get(0).listPageAction.value = "delete"; $("form").get(0).action.value = "pprelreqlist.do"; $("form").get(0).submit(); } This works. I want to get my fundamentals straight here... If i do the declaration without the .ready() , does that mean i'm using plain vanilla jscript? If i were to do this with the document.ready - the usual jquery declaration way, what would i have to change to make it work? I understand there are much better ways to do this like binding with buttons etc, but I want to know why this particular way doesn't seem to be working. Thanks. Cheers. K
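
    The short answer: a function declared inside the $(document).ready() callback is local to that callback's closure, so it never becomes a global, and the inline onclick attribute (which is resolved against the global scope) cannot see it. That is plain JavaScript scoping, not something jQuery adds. Declaring the function outside ready() makes it global again, which is why the second version works. A hedged sketch of the event-binding alternative (the data-empno/data-applno attributes on the buttons are an assumption; .live() is used to stay close to the jQuery 1.4.2 in the question):

        $(document).ready(function () {
            // Assumes the buttons are rewritten without inline onclick, e.g.:
            // <INPUT type="button" class="delete-appl" value="Delete" data-empno="222445" data-applno="704" />
            $('.delete-appl').live('click', function () {
                var form = $('form').get(0);
                form.empno.value = $(this).attr('data-empno');
                form.applNo.value = $(this).attr('data-applno');
                form.listPageAction.value = 'delete';
                form.action.value = 'pprelreqlist.do';   // mirrors the original field usage
                form.submit();
            });
        });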

    Read the article

  • ZF Autoloader to load ancestor and requested class

    - by Pekka
    I am integrating Zend Framework into an existing application. I want to switch the application over to Zend's autoloading mechanism to replace dozens of include() statements. I have a specific requirement for the autoloading mechanism, though. Allow me to elaborate. The existing application uses a core library (independent from ZF), for example: /Core/Library/authentication.php /Core/Library/translation.php /Core/Library/messages.php this core library is to remain untouched at all times and serves a number of applications. The library contains classes like class ancestor_authentication { ... } class ancestor_translation { ... } class ancestor_messages { ... } in the application, there is also a Library directory: /App/Library/authentication.php /App/Library/translation.php /App/Library/messages.php these includes extend the ancestor classes and are the ones that actually get instantiated in the application. class authentication extends ancestor_authentication { } class translation extends ancestor_translation { } class messages extends ancestor_messages { } usually, these class definitions are empty. They simply extend their ancestors and provide the class name to instantiate. $authentication = new authentication(); The purpose of this solution is to be able to easily customize aspects of the application without having to patch the core libraries. Now, the autoloader I need would have to be aware of this structure. When an object of the class authentication is requested, the autoloader would have to: 1. load /Core/Library/authentication.php 2. load /App/Library/authentication.php My current approach would be creating a custom function, and binding that to Zend_Loader_Autoloader for a specific namespace prefix. Is there already a way to do this in Zend that I am overlooking? The accepted answer in this question kind of implies there is, but that may be just a bad choice of wording. Are there extensions to the Zend Autoloader that do this? Can you - I am new to ZF - think of an elegant way, conforming with the spirit of the framework, of extending the Autoloader with this functionality? I'm not necessary looking for a ready-made implementation, some pointers (This should be an extension to the xyz method that you would call like this...) would already be enough.
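
    One possible shape for this, sketched and hedged rather than tested (it assumes PHP 5.3 closures and two hypothetical path constants; the class-to-file naming rule is the simple one implied above): push a custom callback onto Zend_Loader_Autoloader that includes the Core ancestor first and then the App subclass. If your classes gain a namespace prefix later, the same call accepts that prefix as a second argument so the callback only fires for your own classes.

        $autoloader = Zend_Loader_Autoloader::getInstance();
        $autoloader->pushAutoloader(function ($class) {
            // Assumed convention: class "authentication" lives in authentication.php,
            // with its ancestor_authentication parent in the core library file of the same name.
            $file = strtolower($class) . '.php';
            $core = CORE_LIBRARY_PATH . '/' . $file;   // e.g. /Core/Library
            $app  = APP_LIBRARY_PATH . '/' . $file;    // e.g. /App/Library

            if (is_file($core)) {
                require_once $core;   // defines ancestor_<class>
            }
            if (is_file($app)) {
                require_once $app;    // defines <class> extends ancestor_<class>
            }
        });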

    Read the article

  • Passing ViewModel for backbone.js from MVC3 Server-Side

    - by Roman
    In ASP.NET MVC there is Model, View and Controller. The MODEL represents entities which are stored in the database and is essentially all the data used in an application (for example, generated by Entity Framework with the "DB First" approach). Not all data from the model should be shown in the view (for example, password hashes). So you create a VIEW MODEL, one for every strongly-typed Razor view you have in the application. Like this: using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace MyProject.ViewModels.SomeController.SomeAction { public class ViewModel { public ViewModel() { Entities1 = new List<ViewEntity1>(); Entities2 = new List<ViewEntity2>(); } public List<ViewEntity1> Entities1 { get; set; } public List<ViewEntity2> Entities2 { get; set; } } public class ViewEntity1 { //some properties from original DB-entity you want to show } public class ViewEntity2 { } } When you create complex client-side interfaces (I do), you use some pattern for JavaScript on the client, MVC or MVVM (I know only these). So, with MVC on the client you have another model (Backbone.Model for example), which is a third model in the application. That is a bit much. Why don't we use the same ViewModel on the client (in backbone.js or another framework)? Is there a way to transfer a C#-coded model to a JS-coded one? Like in the MVVM pattern with knockout.js, where you can do this: in SomeAction.cshtml: <div style="display: none;" id="view_model">@Json.Encode(Model)</div> and after that in JavaScript code: var ViewModel = ko.mapping.fromJSON($("#view_model").get(0).innerHTML); Now you can extend your ViewModel with some actions, event handlers, etc.: ko.utils.extend(ViewModel, { some_function: function () { //some code } }); So we are not building the same view model on the client again; we are transferring the existing view model from the server. At least the data. But knockout.js is not suitable for me: you can't build a complex UI with it, it is just data binding. I need something more structural, like backbone.js. The only way I can see to build a ViewModel for backbone.js right now is rewriting the same ViewModel in JS from the server by hand. Is there any way to transfer it from the server? To reuse the same view model in the server view and the client view?
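
    A hedged sketch of the same trick aimed at Backbone instead of knockout (the element id and @Json.Encode call are carried over from the example above; everything else is illustrative): serialize the server-side view model once, then hand it to a Backbone model as its initial attributes, so only the behaviour has to be written on the client. The C# class itself still cannot be transferred; only its serialized data can.

        // In SomeAction.cshtml, same markup as the knockout example:
        //   <div style="display: none;" id="view_model">@Json.Encode(Model)</div>

        var serverData = JSON.parse(document.getElementById('view_model').innerHTML);

        var ListViewModel = Backbone.Model.extend({
            some_function: function () {
                // client-side behaviour goes here; the data shape came from the server
            }
        });

        var viewModel = new ListViewModel(serverData);   // Entities1 / Entities2 become attributes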

    Read the article

  • Animate UserControl (When It Gets Collapsed) in WPF

    - by sanjeev40084
    I have two xaml file one is MainWindow.xaml and other is userControl EditTaskView.xaml. In MainWindow.xaml it consists of listbox and when double clicked any item of listbox, it displays edit window (EditView userControl). Whenever edit window gets displayed, it plays an animation (sliding from right to left). The EditView userControl has two buttons 'Save' and 'Cancel'. Now I want to add animation (sliding edit window from left to right) when any of the button (Save or Cancel) button is clicked. When 'Save' or 'Cancel' button is clicked, it Collapse the edit window. Here is the story board which slides window from right to left. <Storyboard x:Key="AnimateEditView"> <ThicknessAnimationUsingKeyFrames Storyboard.TargetProperty="(FrameworkElement.Margin)" Storyboard.TargetName="EditTask" > <EasingThicknessKeyFrame KeyTime="0" Value="100,0,0,0"> <EasingThicknessKeyFrame.EasingFunction> <ExponentialEase EasingMode="EaseOut"/> </EasingThicknessKeyFrame.EasingFunction> </EasingThicknessKeyFrame> <EasingThicknessKeyFrame KeyTime="0:0:1" Value="0,0,0,0"> <EasingThicknessKeyFrame.EasingFunction> <ExponentialEase EasingMode="EaseOut"/> </EasingThicknessKeyFrame.EasingFunction> </EasingThicknessKeyFrame> </ThicknessAnimationUsingKeyFrames> </Storyboard> </Window.Resources> <Window.Triggers> <EventTrigger RoutedEvent="Control.MouseDoubleClick" SourceName="lstBxTask"> <BeginStoryboard Storyboard="{StaticResource AnimateEditView}"/> </EventTrigger> <Window.Triggers> Here is the xaml within MainWindow. <ListBox x:Name="lstBxTask" Style="{StaticResource ListBoxItems}" MouseDoubleClick="lstBxTask_MouseDoubleClick"> <ListBox.ItemTemplate> <DataTemplate> <StackPanel> <Rectangle Style="{StaticResource LineBetweenListBox}"/> <StackPanel Orientation="Horizontal"> <TextBlock Text="{Binding Taskname}" Style="{StaticResource TextInListBox}"/> <Button Name="btnDelete" Style="{StaticResource DeleteButton}" Click="btnDelete_Click"/> </StackPanel> </StackPanel> </DataTemplate> </ListBox.ItemTemplate> </ListBox> <ToDoTask:EditTaskView x:Name="EditTask" Grid.Row="0" Grid.RowSpan="3" Grid.ColumnSpan="2" Visibility="Collapsed" > Any suggestions?
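
    A hedged sketch of one common approach, from code-behind: run a second, mirror-image storyboard when Save or Cancel is clicked, and only collapse the control in the storyboard's Completed handler, otherwise the control disappears before the slide can be seen. The key AnimateEditViewOut is an assumption (a copy of the storyboard above with the keyframe values reversed).

        private void CloseEditView()
        {
            // Clone so a Completed handler can be attached even if the resource is frozen.
            var slideOut = ((Storyboard)FindResource("AnimateEditViewOut")).Clone();
            slideOut.Completed += (s, e) => EditTask.Visibility = Visibility.Collapsed;
            slideOut.Begin(this);
        }

        // Call CloseEditView() from the Save/Cancel click handlers instead of
        // setting EditTask.Visibility = Visibility.Collapsed directly.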

    Read the article

  • Strange lifecycle behaviour with f:ajax and valueChangeListener

    - by gerry
    I want to use the f:ajax tag to update a part of a page with a editor gui, which style depends on a selectOneMenu and its selected item. The problem is, that if the ajax is called the server first renders the editor and then executes the valueChangedListener method. In my JSF2.0 / Facelets app I've the following code: ... <h:selectOneMenu id="typeSelect" validator="#{addEntityBean.checkType}" value="#{addEntityBean.selectedTypeAsString}" valueChangeListener="#{addEntityBean.selectedTypeChanged}"> <f:ajax render="editorGrid"/> <f:selectItems value="#{addEntityBean.entityTypeListAsString}"/> </h:selectOneMenu> ... <h:panelGrid id="editorGrid" columns="2" binding="#{addEntityBean.dynamicEditorGrid}" /> The BackingBean code looks like this: public String getSelectedTypeAsString() { return selectedTypeAsString; } public void setSelectedTypeAsString(String selectedType) { this.selectedTypeAsString = selectedType; } public Class<? extends Entity> getSelectedType() { log.severe("getSelectedType"); Class<? extends Entity> res = null; if(selectedTypeAsString != null){ int index = entityTypeListAsString.indexOf(selectedTypeAsString); res = entityTypeList.get(index); } return res; } public void selectedTypeChanged(ValueChangeEvent event){ setSelectedTypeAsString((String)event.getNewValue()); Class<? extends Entity> clazz = getSelectedType(); if(clazz != null){ try { setEntity(clazz.newInstance()); } catch (Exception e) { log.severe(e); } } else{ setEntity(null); } } public HtmlPanelGrid getDynamicEditorGrid() { HtmlPanelGrid grid = DynamicHtmlComponentCreator.createHtmlPanelGrid(); Entity entity = getEntity(); if(entity != null){ log.severe("getEntity() -->"+entity.getClassName()); grid = (HtmlPanelGrid)buildGui(grid, entity, "entityBean.entity", false); } else log.severe("getEntity() --> null"); return grid; } The problem is, that the server logs show that at first the getDynamicEditorGrid() is executed. And later the selectedTypeChanged()-listener-method. So everytime the selected editor style type is update one selection later. I.e. after a page reload (the type is initally null) the user selects the A, now the getDynamicEditorGrid() is executed again with type null and after that the type is changed to A. Again the user selects now B (after A) and now the getDynamicEditorGrid() is executed with the type A and after that the type is changed to B. What is wrong with my code? How can I fix this really strange behavior...
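
    One commonly suggested change, sketched here with hedging (typeChanged is a hypothetical replacement for the valueChangeListener, taking an AjaxBehaviorEvent): let the ajax behaviour itself run a listener, which fires after the submitted value has been applied to the model, instead of relying on valueChangeListener. The deeper issue is that a component binding getter like getDynamicEditorGrid() runs while the view is being built or restored, before the phases that process the new selection, so building the grid lazily (for example in a preRenderView listener) rather than inside the binding getter tends to be the more reliable fix.

        <h:selectOneMenu id="typeSelect" value="#{addEntityBean.selectedTypeAsString}">
            <f:selectItems value="#{addEntityBean.entityTypeListAsString}"/>
            <!-- the listener runs once the submitted value has been pushed into the model,
                 before editorGrid is re-rendered -->
            <f:ajax listener="#{addEntityBean.typeChanged}" render="editorGrid"/>
        </h:selectOneMenu>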

    Read the article

  • Dojo dgrid tree: subrows in the wrong position

    - by Ventura
    I have a dgrid working with the tree column plugin. Every time the user clicks on the tree, I call the server, fetch the subrows (JSON) and bind them. But when that happens, the subrows are shown in the wrong position, as in the image below. The strangest part is that when I change the pagination and then go back to the first page, the subrows appear in the correct place. (Please tell me if my English is hard to understand, and I can try to improve the text.) My dgrid code: var CustomGrid = declare([OnDemandGrid, Keyboard, Selection, Pagination]); var grid = new CustomGrid({ columns: [ selector({label: "#", disabled: function(object){ return object.type == 'DOCx'; }}, "radio"), {label:'Id', field:'id', sortable: false}, tree({label: "Title", field:"title", sortable: true, indentWidth:20, allowDuplicates:true}), //{label:'Title', field:'title', sortable: false}, {label:'Count', field:'count', sortable: false} ], store: this.memoryStore, collapseOnRefresh:true, pagingLinks: false, pagingTextBox: true, firstLastArrows: true, pageSizeOptions: [10, 15, 25], selectionMode: "single", // for Selection; only select a single row at a time cellNavigation: false // for Keyboard; allow only row-level keyboard navigation }, "grid"); My memory store: loadMemoryStore: function(items){ this.memoryStore = Observable(new Memory({ data: items, getChildren: function(parent, options){ return this.query({parent: parent.id}, options); }, mayHaveChildren: function(parent){ return (parent.count != 0) && (parent.type != 'DOC'); } })); }, This is the moment I bind the subrows: success: function(data){ for(var i=0; i<data.report.length; i++){ this.memoryStore.put({id:data.report[i].id, title:data.report[i].created, type:'DOC', parent:this.designId}); } }, I was thinking that maybe every time I bind the subrows I could do something like a refresh on the grid; maybe that would work. I think the pagination does the same thing. Thanks. Edit: I forgot the question. Well, how can I correct this bug? If a refresh in dgrid would work, how can I do it? Another thing I was thinking is that maybe my getChildren is wrong, but I could not confirm it. Thanks again.
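
    If a refresh is indeed the missing step, a hedged sketch of what that could look like (it assumes the grid instance is reachable from the success callback, for example kept on this): call grid.refresh() once, after all the children have been put into the store, which re-runs the current query in much the same way that paging away and back does.

        success: function (data) {
            for (var i = 0; i < data.report.length; i++) {
                this.memoryStore.put({
                    id: data.report[i].id,
                    title: data.report[i].created,
                    type: 'DOC',
                    parent: this.designId
                });
            }
            // Re-query once after the batch of puts instead of relying on per-put notifications.
            this.grid.refresh();
        }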

    Read the article

  • Force deletion of slot in boost::signals2

    - by villintehaspam
    Hi! I have found that boost::signals2 uses sort of a lazy deletion of connected slots, which makes it difficult to use connections as something that manages lifetimes of objects. I am looking for a way to force slots to be deleted directly when disconnected. Any ideas on how to work around the problem by designing my code differently are also appreciated! This is my scenario: I have a Command class responsible for doing something that takes time asynchronously, looking something like this (simplified): class ActualWorker { public: boost::signals2<void ()> OnWorkComplete; }; class Command : boost::enable_shared_from_this<Command> { public: ... void Execute() { m_WorkerConnection = m_MyWorker.OnWorkDone.connect(boost::bind(&Command::Handle_OnWorkComplete, shared_from_this()); // launch asynchronous work here and return } boost::signals2<void ()> OnComplete; private: void Handle_OnWorkComplete() { // get a shared_ptr to ourselves to make sure that we live through // this function but don't keep ourselves alive if an exception occurs. shared_ptr<Command> me = shared_from_this(); // Disconnect from the signal, ideally deleting the slot object m_WorkerConnection.disconnect(); OnComplete(); // the shared_ptr now goes out of scope, ideally deleting this } ActualWorker m_MyWorker; boost::signals2::connection m_WorkerConnection; }; The class is invoked about like this: ... boost::shared_ptr<Command> cmd(new Command); cmd->OnComplete.connect( foo ); cmd->Execute(); // now go do something else, forget all about the cmd variable etcetera. the Command class keeps itself alive by getting a shared_ptr to itself which is bound to the ActualWorker signal using boost::bind. When the worker completes, the handler in Command is invoked. Now, since I would like the Command object to be destroyed, I disconnect from the signal as can be seen in the code above. The problem is that the actual slot object is not deleted when disconnected, it is only marked as invalid and then deleted at a later time. This in turn appears to depend on the signal to fire again, which it doesn't do in my case, leading to the slot never expiring. The boost::bind object thus never goes out of scope, holding a shared_ptr to my object that will never get deleted. I can work around this by binding using the this pointer instead of a shared_ptr and then keeping my object alive using a member shared_ptr which I then release in the handler function, but it kind of makes the design feel a bit overcomplicated. Is there a way to force signals2 to delete the slot when disconnecting? Or is there something else I could do to simplify the design? Any comments are appreciated!
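
    One alternative that sidesteps the question of when the slot dies, sketched with hedging (it spells the signal type out as boost::signals2::signal<void ()>, which the simplified code above abbreviates): bind the slot to the raw this pointer and let signals2 track the object through a weak_ptr. The slot then never owns the Command, it is disconnected automatically when the Command is destroyed, and whoever initiated the command keeps the shared_ptr alive until OnComplete has fired.

        void Execute()
        {
            typedef boost::signals2::signal<void ()> signal_type;

            m_WorkerConnection = m_MyWorker.OnWorkComplete.connect(
                signal_type::slot_type(&Command::Handle_OnWorkComplete, this)
                    .track(shared_from_this()));   // weak reference: no ownership, auto-disconnect

            // launch asynchronous work here and return
        }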

    Read the article

  • Problems starting a Windows service on Windows XP SP3

    - by Michiel Peeters
    I'm currently facing a problem which I cannot resolve, and I really don't know what to do anymore. When I try to start the service I receive a message saying that the service started but then stopped again, because some services stop automatically if they have nothing to do, for example the Performance Logs and Alerts service. I've looked in the Windows event logs, but nothing is written there that could explain why my service keeps stopping. I've also tried to start the Windows service via the command prompt, which gives me a message saying that the service did not start but also did not return any errors. I've tried to remove all keys that reference my service, which didn't resolve the issue. I've searched on Google (maybe not well enough) to find an answer but didn't find one. I did find some websites which describe what I could do, but none of those suggestions worked. This is quite frustrating because I do not know where to look: I do not have any error message, and I do not have any ID I can use to search on. I really don't know where to start and I hope you guys can help me with this one. Detailed information about the Windows service: OS: Windows XP SP3. .NET Framework: .NET 4.0 Client Profile. Language: C#. Development environment: Visual Studio 2010 Professional (but Visual Studio 2012 RC is installed). Communications: WCF (named pipes), WCF (BasicHttpBinding). Named pipes: I chose this solution because I wanted to communicate from a Windows service to a Windows Forms application. It worked for quite some time, but suddenly my Windows service shut itself down and I couldn't restart it anymore. There are two named pipe services implemented: an event service which sends notifications to the Windows Forms application, and a management service which gives my Windows Forms application the possibility to maintain my Windows service. BasicHttpBinding: the basic HTTP binding makes the connection to a central server. This connection is then used for streaming information from the client to the server. I do not know what additional information you will need, but if you need something I'll try to give it in as much detail as possible. Thank you in advance.
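
    Since nothing is reaching the event log, one practical first step (a hedged, generic sketch; the source name and structure are illustrative) is to wrap the start-up work, opening the two WCF ServiceHost instances, in a try/catch that writes the full exception to the Application log. A service that throws inside OnStart typically produces exactly the "started and then stopped" message with no further detail.

        protected override void OnStart(string[] args)
        {
            try
            {
                // open the named-pipe and BasicHttpBinding ServiceHost instances here
            }
            catch (Exception ex)
            {
                if (!EventLog.SourceExists("MyWindowsService"))
                    EventLog.CreateEventSource("MyWindowsService", "Application");

                EventLog.WriteEntry("MyWindowsService", ex.ToString(), EventLogEntryType.Error);
                throw;   // still let the Service Control Manager see the failure
            }
        }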

    Read the article

  • ListBox, ScrollViewer and CanContentScroll

    - by mrK
    Hi! I have ListBox with ScrollViewer <ScrollViewer Focusable="False" CanContentScroll="True" HorizontalScrollBarVisibility="Hidden" VerticalScrollBarVisibility="Disabled"> <ListBox ItemsSource="{Binding Path=MyItems}" VerticalAlignment="Stretch" Focusable="False"> <ListBox.ItemContainerStyle> <Style TargetType="{x:Type ListBoxItem}"> <Setter Property="OverridesDefaultStyle" Value="true"/> <Setter Property="HorizontalContentAlignment" Value="Stretch"/> <Setter Property="VerticalContentAlignment" Value="Stretch"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type ListBoxItem}"> <Border CornerRadius="3,3,3,3"> <Grid> <myControls:MyControl/> </Grid> </Border> </ControlTemplate> </Setter.Value> </Setter> </Style> </ListBox.ItemContainerStyle> <ListBox.Style> <Style TargetType="ListBox"> <Setter Property="SnapsToDevicePixels" Value="true"/> <Setter Property="OverridesDefaultStyle" Value="true"/> <Setter Property="ScrollViewer.HorizontalScrollBarVisibility" Value="Auto"/> <Setter Property="ScrollViewer.VerticalScrollBarVisibility" Value="Auto"/> <Setter Property="ScrollViewer.CanContentScroll" Value="True"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="ListBox"> <ScrollViewer Focusable="False" CanContentScroll="True"> <Border> <StackPanel Margin="2" Orientation="Horizontal" IsItemsHost="True"/> </Border> </ScrollViewer> </ControlTemplate> </Setter.Value> </Setter> </Style> </ListBox.Style> </ListBox> </ScrollViewer> But CanContentScroll="True" does't work. It's still scrolling in physical units. Whats wrong in my code? Thanks!
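
    CanContentScroll only produces logical (item-by-item) scrolling when the ScrollViewer's direct content implements IScrollInfo. Here that fails twice: the outer ScrollViewer's content is the whole ListBox rather than a panel, and inside the template the ScrollViewer's direct child is the Border, not the items-host StackPanel, so both fall back to pixel scrolling. A hedged, stripped-down sketch without the outer ScrollViewer and the custom template, to show a combination that does scroll by item:

        <ListBox ItemsSource="{Binding Path=MyItems}"
                 ScrollViewer.HorizontalScrollBarVisibility="Hidden"
                 ScrollViewer.VerticalScrollBarVisibility="Disabled"
                 ScrollViewer.CanContentScroll="True">
            <ListBox.ItemsPanel>
                <ItemsPanelTemplate>
                    <!-- VirtualizingStackPanel implements IScrollInfo, so the ListBox's own
                         ScrollViewer can scroll one item at a time -->
                    <VirtualizingStackPanel Orientation="Horizontal"/>
                </ItemsPanelTemplate>
            </ListBox.ItemsPanel>
        </ListBox>

    If the custom template needs to stay, making the items host the ScrollViewer's direct child (removing the Border in between) is the part that matters.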

    Read the article

  • Property trigger in XAML for IsMouseOver fails to set Border.Background

    - by Mal Ross
    I'm currently trying to create a ControlTemplate for the Button class in WPF, replacing the usual visual tree with something that makes the button look similar to the little (X) close icon on Google Chrome's tabs. I decided to use a Path object in XAML to achieve the effect. Using a property trigger, the control responds to a change in the IsMouseOver property by setting the icon's red background. Here's the XAML from a test app: <Window x:Class="Widgets.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="300" Width="300"> <Window.Resources> <Style x:Key="borderStyle" TargetType="Border"> <Style.Triggers> <Trigger Property="IsMouseOver" Value="True"> <Setter Property="Background"> <Setter.Value> <SolidColorBrush Color="#CC0000"/> </Setter.Value> </Setter> </Trigger> </Style.Triggers> </Style> <ControlTemplate x:Key="closeButtonTemplate" TargetType="Button"> <Border Width="12" Height="12" CornerRadius="6" BorderBrush="#AAAAAA" Background="Transparent" Style="{StaticResource borderStyle}" ToolTip="Close"> <Viewbox Margin="2.75"> <Path Data="M 0,0 L 10,10 M 0,10 L 10,0" Stroke="{Binding BorderBrush, RelativeSource={RelativeSource FindAncestor, AncestorType=Border, AncestorLevel=1}}" StrokeThickness="1.8"/> </Viewbox> </Border> </ControlTemplate> </Window.Resources> <Grid Background="White"> <Button Template="{StaticResource closeButtonTemplate}"/> </Grid> </Window> Note that the circular background is always there - it's just transparent when the mouse isn't over it. The problem with this is that the trigger just isn't working. Nothing changes in the button's appearance. However, if I remove the Background="Transparent" value from the Border object in the ControlTemplate, the trigger does work (albeit only when over the 'X'). I really can't explain this. Setters for any other properties placed in the borderStyle resource work fine, but the Background setter fails as soon as the default background is specified in the ControlTemplate. Any ideas why it's happening and how I can fix it? I know I could easily replace this code with, for example, a .PNG-based image, but I want to understand why the current implementation isn't working. Thanks! :)
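
    This is dependency property value precedence at work: a value set directly on the element (Background="Transparent" on the Border) always wins over a value supplied by a Style trigger, so the trigger can never take effect, which is why removing the local value makes it work. The usual fix is to move the default into the Style itself so the trigger can override it:

        <Style x:Key="borderStyle" TargetType="Border">
            <!-- default lives in the Style, so the trigger below can override it -->
            <Setter Property="Background" Value="Transparent"/>
            <Style.Triggers>
                <Trigger Property="IsMouseOver" Value="True">
                    <Setter Property="Background" Value="#CC0000"/>
                </Trigger>
            </Style.Triggers>
        </Style>

    and then drop Background="Transparent" from the Border inside the ControlTemplate.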

    Read the article

  • I know I'm doing something wrong with RaiseCanExecuteChanged and CanExecute

    - by Cowman
    Well after fiddling with MVVM light to get my button to enable and disable when I want it to... I sort of mashed things together until it worked. However, I just know I'm doing something wrong here. I have RaiseCanExecuteChanged and CanExecute in the same area being called. Surely this is not how it's done? Here's my xaml <Button Margin="10, 25, 10, 25" VerticalAlignment="Center" HorizontalAlignment="Center" Width="50" Height="50" Grid.Column="1" Grid.Row="3" Content="Host"> <i:Interaction.Triggers> <i:EventTrigger EventName="Click"> <mvvmLight:EventToCommand Command="{Binding HostChat}" MustToggleIsEnabled="True" /> </i:EventTrigger> </i:Interaction.Triggers> </Button> And here's my code public override void InitializeViewAndViewModel() { view = UnityContainer.Resolve<LoginPromptView>(); viewModel = UnityContainer.Resolve<LoginPromptViewModel>(); view.DataContext = viewModel; InjectViewIntoRegion(RegionNames.PopUpRegion, view, true); viewModel.HostChat = new DelegateCommand(ExecuteHostChat, CanHostChat); viewModel.PropertyChanged += new System.ComponentModel.PropertyChangedEventHandler(ViewModelPropertyChanged); } void ViewModelPropertyChanged(object sender, System.ComponentModel.PropertyChangedEventArgs e) { if (e.PropertyName == "Name" || e.PropertyName == "Port" || e.PropertyName == "Address") { (viewModel.HostChat as DelegateCommand).RaiseCanExecuteChanged(); (viewModel.HostChat as DelegateCommand).CanExecute(); } } public void ExecuteHostChat() { } public bool CanHostChat() { if (String.IsNullOrEmpty(viewModel.Address) || String.IsNullOrEmpty(viewModel.Port) || String.IsNullOrEmpty(viewModel.Name)) { return false; } else return true; } See how these two are together? Surely that can't be right. I mean... it WORKS for me... but something seems wrong about it. Shouldn't RaiseCanExecuteChanged call CanExecute? It doesn't... and so if I don't have that CanExecute in there, my control never toggles its IsEnabled like I need it to. (viewModel.HostChat as DelegateCommand).RaiseCanExecuteChanged(); (viewModel.HostChat as DelegateCommand).CanExecute();

    Read the article

  • WPF: Can't get to original source from ExecutedRoutedEventArgs

    - by Ikhail
    I have a problem getting to the original source of a command using ExecutedRoutedEventArgs. I'm creating a simple splitbutton, in which a menu will appear below a dedicated button, as another button is pressed. When I click a menuitem in the appearing menu a command is fired. This command is registered on the splitbutton. And the idea is to get to the menuitem beeing clicked, by using the ExecutedRoutedEventsArgs. Ok, now the problem. If I choose to have the popup menu shown by default (IsOpen="True") and I click one of the menuitems I can get to the originalsource (thus the menuitem) from the ExecutedRoutedEventArgs - no problem. However, if I first click the button to show the menu and THEN click on a menuitem, the originalsource of the command will be the button instead of the MenuItem! Here's the controltemplate for the splitbutton: <ControlTemplate TargetType="{x:Type usc:SplitButton}"> <StackPanel Orientation="Horizontal"> <Button Name="mybutton"> <StackPanel> <Popup usc:SplitButton.IsPopup="True" IsOpen="True" Name="myPopup" PlacementTarget="{Binding ElementName=mybutton}" StaysOpen="False" Placement="Bottom"> <Border BorderBrush="Beige" BorderThickness="1"> <Menu Width="120"> <MenuItem Header="item1" Command="usc:SplitButton.MenuItemClickCommand" /> <MenuItem Header="item2" /> <MenuItem Header="item3" /> </Menu> </Border> </Popup> <TextBlock Text="MySplitbutton" /> </StackPanel> </Button> <Button Content="OK" Command="usc:SplitButton.ShowMenuCommand" /> </StackPanel> </ControlTemplate> The OK button fires a ShowMenuCommand on the SplitButton, which sets the IsOpen property on the Popup to True. Any ideas why the OK button (after having activated the menu) is the OriginalSource when a menuitem is clicked? Thanks.

    Read the article

  • WP7 Button inside ListBox only "clicks" every other press

    - by Zik
    I have a button defined inside of a DataTemplate for my list box. <phone:PhoneApplicationPage.Resources> <DataTemplate x:Key="ListTemplate"> <Grid Margin="12,12,24,12"> <Grid.RowDefinitions> <RowDefinition Height="Auto" /> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <Button Grid.Column="0" Name="EnableDisableButton" Click="EnableDisableButton_Click" BorderBrush="Transparent"> <Grid> <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> </Grid.RowDefinitions> <Image Grid.Row="0" Source="\Images\img.dark.png" Width="48" Height="48" Visibility="{StaticResource PhoneDarkThemeVisibility}" /> <Image Grid.Row="0" Source="\Images\img.light.png" Width="48" Height="48" Visibility="{StaticResource PhoneLightThemeVisibility}" /> <Rectangle Grid.Row="1" Width="48" Height="8" Fill="{Binding CurrentColor}" RadiusX="4" RadiusY="4" /> </Grid> </Button> <Grid Grid.Column="1"> <... more stuff here ...> </Grid> </Grid> </DataTemplate> </phone:PhoneApplicationPage.Resources> What I'm seeing is that the first time I press the button, the Click event fires. The second time I press it, it does not fire. Third press, fires. Fourth press, does not fire. Etc. Originally I had it bound to a command but that was behaving the same way. (I put a Debug.WriteLine() in the event handler so I know when it fires.) Any ideas? It's really odd that the click event only fires every other time.

    Read the article

  • WiX 3 Tutorial: Generating file/directory fragments with Heat.exe

    - by Mladen Prajdic
    In previous posts I’ve shown you our SuperForm test application solution structure and how the main wxs and wxi include file look like. In this post I’ll show you how to automate inclusion of files to install into your build process. For our SuperForm application we have a single exe to install. But in the real world we have 10s or 100s of different files from dll’s to resource files like pictures. It all depends on what kind of application you’re building. Writing a directory structure for so many files by hand is out of the question. What we need is an automated way to create this structure. Enter Heat.exe. Heat is a command line utility to harvest a file, directory, Visual Studio project, IIS website or performance counters. You might ask what harvesting means? Harvesting is converting a source (file, directory, …) into a component structure saved in a WiX fragment (a wxs) file. There are 2 options you can use: Create a static wxs fragment with Heat and include it in your project. The pro of this is that you can add or remove components by hand. The con is that you have to do the pro part by hand. Automation always beats manual labor. Run heat command line utility in a pre-build event of your WiX project. I prefer this way. By always recreating the whole fragment you don’t have to worry about missing any new files you add. The con of this is that you’ll include files that you otherwise might not want to. There is no perfect solution so pick one and deal with it. I prefer using the second way. A neat way of overcoming the con of the second option is to have a post-build event on your main application project (SuperForm.MainApp in our case) to copy the files needed to be installed in a special location and have the Heat.exe read them from there. I haven’t set this up for this tutorial and I’m simply including all files from the default SuperForm.MainApp \bin directory. Remember how we created a System Environment variable called SuperFormFilesDir? This is where we’ll use it for the first time. The command line text that you have to put into the pre-build event of your WiX project looks like this: "$(WIX)bin\heat.exe" dir "$(SuperFormFilesDir)" -cg SuperFormFiles -gg -scom -sreg -sfrag -srd -dr INSTALLLOCATION -var env.SuperFormFilesDir -out "$(ProjectDir)Fragments\FilesFragment.wxs" After you install WiX you’ll get the WIX environment variable. In the pre/post-build events environment variables are referenced like this: $(WIX). By using this you don’t have to think about the installation path of the WiX. Remember: for 32 bit applications Program files folder is named differently between 32 and 64 bit systems. $(ProjectDir) is obviously the path to your project and is a Visual Studio built in variable. You can view all Heat.exe options by running it without parameters but I’ll explain some that stick out the most. dir "$(SuperFormFilesDir)": tell Heat to harvest the whole directory at the set location. That is the location we’ve set in our System Environment variable. –cg SuperFormFiles: the name of the Component group that will be created. This name is included in out Feature tag as is seen in the previous post. -dr INSTALLLOCATION: the directory reference this fragment will fall under. You can see the top level directory structure in the previous post. -var env.SuperFormFilesDir: the name of the variable that will replace the SourceDir text that would otherwise appear in the fragment file. 
-out "$(ProjectDir)Fragments\FilesFragment.wxs": the full path and name under which the fragment file will be saved. If you have source control you have to include the FilesFragment.wxs into your project but remove its source control binding. The auto generated FilesFragment.wxs for our test app looks like this: <?xml version="1.0" encoding="utf-8"?><Wix xmlns="http://schemas.microsoft.com/wix/2006/wi"> <Fragment> <ComponentGroup Id="SuperFormFiles"> <ComponentRef Id="cmp5BB40DB822CAA7C5295227894A07502E" /> <ComponentRef Id="cmpCFD331F5E0E471FC42A1334A1098E144" /> <ComponentRef Id="cmp4614DD03D8974B7C1FC39E7B82F19574" /> <ComponentRef Id="cmpDF166522884E2454382277128BD866EC" /> </ComponentGroup> </Fragment> <Fragment> <DirectoryRef Id="INSTALLLOCATION"> <Component Id="cmp5BB40DB822CAA7C5295227894A07502E" Guid="{117E3352-2F0C-4E19-AD96-03D354751B8D}"> <File Id="filDCA561ABF8964292B6BC0D0726E8EFAD" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.exe" /> </Component> <Component Id="cmpCFD331F5E0E471FC42A1334A1098E144" Guid="{369A2347-97DD-45CA-A4D1-62BB706EA329}"> <File Id="filA9BE65B2AB60F3CE41105364EDE33D27" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.pdb" /> </Component> <Component Id="cmp4614DD03D8974B7C1FC39E7B82F19574" Guid="{3443EBE2-168F-4380-BC41-26D71A0DB1C7}"> <File Id="fil5102E75B91F3DAFA6F70DA57F4C126ED" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe" /> </Component> <Component Id="cmpDF166522884E2454382277128BD866EC" Guid="{0C0F3D18-56EB-41FE-B0BD-FD2C131572DB}"> <File Id="filF7CA5083B4997E1DEC435554423E675C" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe.manifest" /> </Component> </DirectoryRef> </Fragment></Wix> The $(env.SuperFormFilesDir) will be replaced at build time with the directory where the files to be installed are located. There is nothing too complicated about this. In the end it turns out that this sort of automation is great! There are a few other ways that Heat.exe can compose the wxs file but this is the one I prefer. It just seems the clearest. Play with its options to see what can it do. It’s one awesome little tool.   WiX 3 tutorial by Mladen Prajdic navigation WiX 3 Tutorial: Solution/Project structure and Dev resources WiX 3 Tutorial: Understanding main wxs and wxi file WiX 3 Tutorial: Generating file/directory fragments with Heat.exe

    Read the article

  • VB.NET IF() Coalesce and “Expression Expected” Error

    - by Jeff Widmer
    I am trying to use the equivalent of the C# “??” operator in some VB.NET code that I am working in. This StackOverflow article for “Is there a VB.NET equivalent for C#'s ?? operator?” explains the VB.NET IF() statement syntax which is exactly what I am looking for... and I thought I was going to be done pretty quickly and could move on. But after implementing the IF() statement in my code I started to receive this error: Compiler Error Message: BC30201: Expression expected. And no matter how I tried using the “IF()” statement, whenever I tried to visit the aspx page that I was working on I received the same error. This other StackOverflow article Using VB.NET If vs. IIf in binding/rendering expression indicated that the VB.NET IF() operator was not available until VS2008 or .NET Framework 3.5.  So I checked the Web Application project properties but it was targeting the .NET Framework 3.5: So I was still not understanding what was going on, but then I noticed the version information in the detailed compiler output of the error page: This happened to be a C# project, but with an ASPX page with inline VB.NET code (yes, it is strange to have that but that is the project I am working on).  So even though the project file was targeting the .NET Framework 3.5, the ASPX page was being compiled using the .NET Framework 2.0.  But why?  Where does this get set?  How does ASP.NET know which version of the compiler to use for the inline code? For this I turned to the web.config.  Here is the system.codedom/compilers section that was in the web.config for this project: <system.codedom>     <compilers>         <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">             <providerOption name="CompilerVersion" value="v3.5" />             <providerOption name="WarnAsError" value="false" />         </compiler>     </compilers> </system.codedom> Keep in mind that this is a C# web application project file but my aspx file has inline VB.NET code.  The web.config does not have any information for how to compile for VB.NET so it defaults to .NET 2.0 (instead of 3.5 which is what I need). So the web.config needed to include the VB.NET compiler option.  Here it is with both the C# and VB.NET options (I copied the VB.NET config from a new VB.NET Web Application project file).     <system.codedom>         <compilers>             <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">                 <providerOption name="CompilerVersion" value="v3.5" />                 <providerOption name="WarnAsError" value="false" />             </compiler>       <compiler language="vb;vbs;visualbasic;vbscript" extension=".vb" warningLevel="4" type="Microsoft.VisualBasic.VBCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">         <providerOption name="CompilerVersion" value="v3.5"/>         <providerOption name="OptionInfer" value="true"/>         <providerOption name="WarnAsError" value="false"/>       </compiler>     </compilers>     </system.codedom>   So the inline VB.NET code on my aspx page was being compiled using the .NET Framework 2.0 when it really needed to be compiled with the .NET Framework 3.5 compiler in order to take advantage of the VB.NET IF() coalesce statement.  
Without the VB.NET web.config compiler option, the default is to compile using the .NET Framework 2.0 and the VB.NET IF() coalesce statement does not exist (at least in the form that I want it in).  FYI, there is an older IF statement in VB.NET 2.0 compiler which is why it is giving me the unusual “Expression Expected” error message – see this article for when VB.NET got the new updated version. EDIT (2011-06-20): I had made a wrong assumption in the first version of this blog post.  After a little more research and investigation I was able to figure out that the issue was in the web.config and not with the IIS App Pool.  Thanks to the comment from James which forced me to look into this again.
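
    For reference, a minimal sketch of the If() operator this post depends on (variable names are illustrative); it compiles only once the VB 3.5 compiler settings above are in place:

        ' Binary form: the VB 2008 equivalent of C#'s  displayName = nickName ?? fullName;
        Dim displayName As String = If(nickName, fullName)

        ' Ternary form; unlike the old IIf() function, If() short-circuits and only
        ' evaluates the branch it returns.
        Dim label As String = If(count > 0, "has items", "empty")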

    Read the article

  • Creating an AJAX Accordion Menu

    - by jaullo
    Introduction Ajax is a powerful addition to asp.net that provides new functionality in a simple and agile  way This post is dedicated to creating a menu with ajax accordion type. About the Control The basic idea of this control, is to provide a serie of panels and show and hide information inside these panels. The use is very simple, we have to set each panel inside accordion control and give to each panel a Header and of course, we have to set the content of each panel.  To use accordion control, u need the ajax control toolkit. know the basic propertyes of accordion control:  Before start developing an accordion control, we have to know the basic properties for this control Other accordion propertyes  FramesPerSecond - Number of frames per second used in the transition animations RequireOpenedPane - Prevent closing the currently opened pane when its header is clicked (which ensures one pane is always open). The default value is true. SuppressHeaderPostbacks - Prevent the client-side click handlers of elements inside a header from firing (this is especially useful when you want to include hyperlinks in your headers for accessibility) DataSource - The data source to use. DataBind() must be called. DataSourceID - The ID of the data source to use. DataMember - The member to bind to when using a DataSourceID  AJAX Accordion Control Extender DataSource  The Accordion Control extender of AJAX Control toolkit can also be used as DataBound control. You can bind the data retrieved from the database to the Accordion control. Accordion Control consists of properties such as DataSource and DataSourceID (we can se it above) that can be used to bind the data. HeaderTemplate can used to display the header or title for the pane generated by the Accordion control, a click on which will open or close the ContentTemplate generated by binding the data with Accordion extender. When DataSource is passed to the Accordion control, also use the DataBind method to bind the data. The Accordion control bound with data auto generates the expand/collapse panes along with their headers.  This code represents the basic steps to bind the Accordion to a Datasource Collapse Public Sub getCategories() Dim sqlConn As New SqlConnection(conString) sqlConn.Open() Dim sqlSelect As New SqlCommand("SELECT * FROM Categories", sqlConn) sqlSelect.CommandType = System.Data.CommandType.Text Dim sqlAdapter As New SqlDataAdapter(sqlSelect) Dim myDataset As New DataSet() sqlAdapter.Fill(myDataset) sqlConn.Close() Accordion1.DataSource = myDataset.Tables(0).DefaultView Accordion1.DataBind()End Sub Protected Sub Accordion1_ItemDataBound(sender As Object, _ e As AjaxControlToolkit.AccordionItemEventArgs) If e.ItemType = AjaxControlToolkit.AccordionItemType.Content Then Dim sqlConn As New SqlConnection(conString) sqlConn.Open() Dim sqlSelect As New SqlCommand("SELECT productName " & _ "FROM Products where categoryID = '" + _ DirectCast(e.AccordionItem.FindControl("txt_categoryID"),_ HiddenField).Value + "'", sqlConn) sqlSelect.CommandType = System.Data.CommandType.Text Dim sqlAdapter As New SqlDataAdapter(sqlSelect) Dim myDataset As New DataSet() sqlAdapter.Fill(myDataset) sqlConn.Close() Dim grd As New GridView() grd = DirectCast(e.AccordionItem.FindControl("GridView1"), GridView) grd.DataSource = myDataset grd.DataBind() End If End Sub In the above code, we made two things, first, we made a sql select to database to retrieve all data from categories table, this data will be used to set the header and columns of the accordion.  
Collapse <asp:ScriptManager ID="ScriptManager1" runat="server"> </asp:ScriptManager> <ajaxToolkit:Accordion ID="Accordion1" runat="server" TransitionDuration="100" FramesPerSecond="200" FadeTransitions="true" RequireOpenedPane="false" OnItemDataBound="Accordion1_ItemDataBound" ContentCssClass="acc-content" HeaderCssClass="acc-header" HeaderSelectedCssClass="acc-selected"> <HeaderTemplate> <%#DataBinder.Eval(Container.DataItem,"categoryName") %> </HeaderTemplate> <ContentTemplate> <asp:HiddenField ID="txt_categoryID" runat="server" Value='<%#DataBinder.Eval(Container.DataItem,"categoryID") %>' /> <asp:GridView ID="GridView1" runat="server" RowStyle-BackColor="#ededed" RowStyle-HorizontalAlign="Left" AutoGenerateColumns="false" GridLines="None" CellPadding="2" CellSpacing="2" Width="300px"> <Columns> <asp:TemplateField HeaderStyle-HorizontalAlign="Left" HeaderText="Product Name" HeaderStyle-BackColor="#d1d1d1" HeaderStyle-ForeColor="#777777"> <ItemTemplate> <%#DataBinder.Eval(Container.DataItem,"productName") %> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> </ContentTemplate> </ajaxToolkit:Accordion>  Here, we use <%#DataBinder.Eval(Container.DataItem,"categoryName") %> to bind accordion header with categoryName, so we made on header for each element found on database.    Creating a basic accordion control As we know, to use any of the ajax components, there must be a registered ScriptManager on our site, which will be responsible for managing our controls. So the first thing we will do is create our script manager.     Collapse <asp:ScriptManager ID="ScriptManager1" runat="server"></asp:ScriptManager> Then we define our accordion  element and establish some basic properties:    Collapse <cc1:Accordion ID="AccordionCtrl" runat="server" SelectedIndex="0" HeaderCssClass="accordionHeader" ContentCssClass="accordionContent" AutoSize="None" FadeTransitions="true" TransitionDuration="250" FramesPerSecond="40" For our work we must declare PANES accordion inside it, these breads will be responsible for contain information, links or information that we want to show.  Collapse <Panes> <cc1:AccordionPane ID="AccordionPane0" runat="server"> <Header>Matenimiento</Header> <Content> <li><a href="mypagina.aspx">My página de prueba</a></li> </Content> </cc1:AccordionPane> To end this work, we have to close all panels and our accordion Collapse </Panes> </cc1:Accordion> Finally complete our example should look like:  Collapse <asp:ScriptManager ID="ScriptManager1" runat="server"></asp:ScriptManager> <cc1:Accordion ID="AccordionCtrl" runat="server" SelectedIndex="0" HeaderCssClass="accordionHeader" ContentCssClass="accordionContent" AutoSize="None" FadeTransitions="true" TransitionDuration="250" FramesPerSecond="40"> <Panes> <cc1:AccordionPane ID="AccordionPane0" runat="server"> <Header>Matenimiento</Header> <Content> <li><a href="mypagina.aspx">My página de prueba</a></li> </Content> </cc1:AccordionPane> </Panes> </cc1:Accordion>

    Read the article

  • DropDownList and SelectListItem Array Item Updates in MVC

    - by Rick Strahl
    So I ran into an interesting behavior today as I deployed my first MVC 4 app tonight. I have a list form that has a filter drop down that allows selection of categories. This list is static and rarely changes so rather than loading these items from the database each time I load the items once and then cache the actual SelectListItem[] array in a static property. However, when we put the site online tonight we immediately noticed that the drop down list was coming up with pre-set values that randomly changed. Didn't take me long to trace this back to the cached list of SelectListItem[]. Clearly the list was getting updated - apparently through the model binding process in the selection postback. To clarify the scenario here's the drop down list definition in the Razor View:@Html.DropDownListFor(mod => mod.QueryParameters.Category, Model.CategoryList, "All Categories") where Model.CategoryList gets set with:[HttpPost] [CompressContent] public ActionResult List(MessageListViewModel model) { InitializeViewModel(model); busEntry entryBus = new busEntry(); var entries = entryBus.GetEntryList(model.QueryParameters); model.Entries = entries; model.DisplayMode = ApplicationDisplayModes.Standard; model.CategoryList = AppUtils.GetCachedCategoryList(); return View(model); } The AppUtils.GetCachedCategoryList() method gets the cached list or loads the list on the first access. The code to load up the list is housed in a Web utility class. The method looks like this:/// <summary> /// Returns a static category list that is cached /// </summary> /// <returns></returns> public static SelectListItem[] GetCachedCategoryList() { if (_CategoryList != null) return _CategoryList; lock (_SyncLock) { if (_CategoryList != null) return _CategoryList; var catBus = new busCategory(); var categories = catBus.GetCategories().ToList(); // Turn list into a SelectItem list var catList= categories .Select(cat => new SelectListItem() { Text = cat.Name, Value = cat.Id.ToString() }) .ToList(); catList.Insert(0, new SelectListItem() { Value = ((int)SpecialCategories.AllCategoriesButRealEstate).ToString(), Text = "All Categories except Real Estate" }); catList.Insert(1, new SelectListItem() { Value = "-1", Text = "--------------------------------" }); _CategoryList = catList.ToArray(); } return _CategoryList; } private static SelectListItem[] _CategoryList ; This seemed normal enough to me - I've been doing stuff like this forever caching smallish lists in memory to avoid an extra trip to the database. This list is used in various places throughout the application - for the list display and also when adding new items and setting up for notifications etc.. Watch that ModelBinder! However, it turns out that this code is clearly causing a problem. It appears that the model binder on the [HttpPost] method is actually updating the list that's bound to and changing the actual entry item in the list and setting its selected value. If you look at the code above I'm not setting the SelectListItem.Selected value anywhere - the only place this value can get set is through ModelBinding. Sure enough when stepping through the code I see that when an item is selected the actual model - model.CategoryList[x].Selected - reflects that. This is bad on several levels: First it's obviously affecting the application behavior - nobody wants to see their drop down list values jump all over the place randomly. 
But it's also a problem because the array is getting updated by multiple ASP.NET threads which would likely lead to odd crashes from time to time. Not good! In retrospect the model binding behavior makes perfect sense. The actual items and the Selected property are the ModelBinder's way of keeping track of one or more selected values. So while I assumed the list to be read-only, the ModelBinder is actually updating it on a postback, producing the rather surprising results. Totally missed this during testing - it's another one of those little "Did you know?" moments. So, is there a way around this? Yes, but it's maybe not quite obvious. I can't change the behavior of the ModelBinder, but I can certainly change the way that the list is generated. Rather than returning the cached list, I can return a brand new cloned list from the cached items like this:/// <summary> /// Returns a static category list that is cached /// </summary> /// <returns></returns> public static SelectListItem[] GetCachedCategoryList() { if (_CategoryList != null) { // Have to create new instances via projection // to avoid ModelBinding updates to affect this // globally return _CategoryList .Select(cat => new SelectListItem() { Value = cat.Value, Text = cat.Text }) .ToArray(); } …}  The key is that newly created instances of SelectListItems are returned, not just filtered instances of the original list. The key here is 'new instances' so that the model binding updates do not touch the actual static instances. The code above uses LINQ and a projection into new SelectListItem instances to create this array of fresh instances. And this code works correctly - no more cross-talk between users. Unfortunately this code is also less efficient - it has to reselect the items and uses extra memory for the new array. Knowing what I know now I probably would not have cached the list and just taken the hit of reading from the database. If there is even a possibility of thread clashes I'm very wary of creating code like this. But since the method already exists and handles this load in one place this fix was easy enough to put in. Live and learn. It's little things like this that can cause some interesting head scratchers sometimes… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in MVC, ASP.NET, .NET
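A related approach that sidesteps the shared mutable state entirely is to cache only the raw id/name pairs and project them into fresh SelectListItem instances on every call. The sketch below is just an illustration of that idea; it reuses the post's busCategory business object and otherwise assumes nothing beyond System.Linq, System.Collections.Generic and System.Web.Mvc.

// Cache immutable data, not SelectListItem instances, so the ModelBinder can
// never mutate anything that is shared across requests.
private static readonly object _SyncLock = new object();
private static KeyValuePair<int, string>[] _CategoryData;

public static SelectListItem[] GetCategoryList()
{
    if (_CategoryData == null)
    {
        lock (_SyncLock)
        {
            if (_CategoryData == null)
            {
                var catBus = new busCategory();   // the post's data layer
                _CategoryData = catBus.GetCategories()
                    .Select(cat => new KeyValuePair<int, string>(cat.Id, cat.Name))
                    .ToArray();
            }
        }
    }

    // Brand new instances on every call - per-request state, no cross-talk.
    return _CategoryData
        .Select(cat => new SelectListItem { Value = cat.Key.ToString(), Text = cat.Value })
        .ToArray();
}

The projection cost is the same as in the fix above, but the cached data itself can never pick up Selected flags from a postback.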

    Read the article

  • Securing an ASP.NET MVC 2 Application

    - by rajbk
This post attempts to look at some of the methods that can be used to secure an ASP.NET MVC 2 Application called Northwind Traders Human Resources.  The sample code for the project is attached at the bottom of this post. We are going to use a slightly modified Northwind database. The screen capture from SQL Server Management Studio shows the change. I added a new column called Salary, inserted some random salaries for the employees and then turned off AllowNulls.   The reporting relationship for Northwind Employees is shown below.   The requirements for our application are as follows: Employees can see their LastName, FirstName, Title, Address and Salary. Employees are allowed to edit only their Address information. Employees can see the LastName, FirstName, Title, Address and Salary of their immediate reports. Employees cannot see records of non-immediate reports. Employees are allowed to edit only the Salary and Title information of their immediate reports. Employees are not allowed to edit the Address of an immediate report. Employees should be authenticated into the system. Employees by default get the “Employee” role. If a user has direct reports, they will also get assigned a “Manager” role. We use a very basic empId/pwd scheme of EmployeeID (1-9) and password test$1. You should never do this in an actual application. The application should protect against Cross-Site Request Forgery (CSRF). For example, Michael could trick Steven, who is already logged on to the HR website, into loading a page which contains a malicious request where, without Steven’s knowledge, a form on the site posts information back to the Northwind HR website using Steven’s credentials. Michael could use this technique to give himself a raise :-) UI Notes The layout of our app looks like so: When Nancy (EmpID 1) signs on, she sees the default page with her details and is allowed to edit her address. If Nancy attempts to view the record of employee Andrew who has an employeeID of 2 (Employees/Edit/2), she will get a “Not Authorized” error page. When Andrew (EmpID 2) signs on, he can edit the address field of his record and change the title and salary of employees that directly report to him. Implementation Notes All controllers inherit from a BaseController. The BaseController currently only has error handling code. When a user signs on, we check to see if they are in a Manager role. We then create a FormsAuthenticationTicket, encrypt it (including the roles that the employee belongs to) and add it to a cookie. private void SetAuthenticationCookie(int employeeID, List<string> roles) { HttpCookiesSection cookieSection = (HttpCookiesSection) ConfigurationManager.GetSection("system.web/httpCookies"); AuthenticationSection authenticationSection = (AuthenticationSection) ConfigurationManager.GetSection("system.web/authentication"); FormsAuthenticationTicket authTicket = new FormsAuthenticationTicket( 1, employeeID.ToString(), DateTime.Now, DateTime.Now.AddMinutes(authenticationSection.Forms.Timeout.TotalMinutes), false, string.Join("|", roles.ToArray())); String encryptedTicket = FormsAuthentication.Encrypt(authTicket); HttpCookie authCookie = new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket); if (cookieSection.RequireSSL || authenticationSection.Forms.RequireSSL) { authCookie.Secure = true; } HttpContext.Current.Response.Cookies.Add(authCookie); } We read this cookie back in Global.asax and set the Context.User to be a new GenericPrincipal with the roles we assigned earlier.
protected void Application_AuthenticateRequest(Object sender, EventArgs e){ if (Context.User != null) { string cookieName = FormsAuthentication.FormsCookieName; HttpCookie authCookie = Context.Request.Cookies[cookieName]; if (authCookie == null) return; FormsAuthenticationTicket authTicket = FormsAuthentication.Decrypt(authCookie.Value); string[] roles = authTicket.UserData.Split(new char[] { '|' }); FormsIdentity fi = (FormsIdentity)(Context.User.Identity); Context.User = new System.Security.Principal.GenericPrincipal(fi, roles); }} We ensure that a user has permissions to view a record by creating a custom attribute AuthorizeToViewID that inherits from ActionFilterAttribute. public class AuthorizeToViewIDAttribute : ActionFilterAttribute{ IEmployeeRepository employeeRepository = new EmployeeRepository(); public override void OnActionExecuting(ActionExecutingContext filterContext) { if (filterContext.ActionParameters.ContainsKey("id") && filterContext.ActionParameters["id"] != null) { if (employeeRepository.IsAuthorizedToView((int)filterContext.ActionParameters["id"])) { return; } } throw new UnauthorizedAccessException("The record does not exist or you do not have permission to access it"); }} We add the AuthorizeToView attribute to any Action method that requires authorization. [HttpPost][Authorize(Order = 1)]//To prevent CSRF[ValidateAntiForgeryToken(Salt = Globals.EditSalt, Order = 2)]//See AuthorizeToViewIDAttribute class[AuthorizeToViewID(Order = 3)] [ActionName("Edit")]public ActionResult Update(int id){ var employeeToEdit = employeeRepository.GetEmployee(id); if (employeeToEdit != null) { //Employees can edit only their address //A manager can edit the title and salary of their subordinate string[] whiteList = (employeeToEdit.IsSubordinate) ? new string[] { "Title", "Salary" } : new string[] { "Address" }; if (TryUpdateModel(employeeToEdit, whiteList)) { employeeRepository.Save(employeeToEdit); return RedirectToAction("Details", new { id = id }); } else { ModelState.AddModelError("", "Please correct the following errors."); } } return View(employeeToEdit);} The Authorize attribute is added to ensure that only authorized users can execute that Action. We use the TryUpdateModel with a white list to ensure that (a) an employee is able to edit only their Address and (b) that a manager is able to edit only the Title and Salary of a subordinate. This works in conjunction with the AuthorizeToViewIDAttribute. The ValidateAntiForgeryToken attribute is added (with a salt) to avoid CSRF. The Order on the attributes specify the order in which the attributes are executed. The Edit View uses the AntiForgeryToken helper to render the hidden token: ......<% using (Html.BeginForm()) {%><%=Html.AntiForgeryToken(NorthwindHR.Models.Globals.EditSalt)%><%= Html.ValidationSummary(true, "Please correct the errors and try again.") %><div class="editor-label"> <%= Html.LabelFor(model => model.LastName) %></div><div class="editor-field">...... The application uses View specific models for ease of model binding. 
public class EmployeeViewModel{ public int EmployeeID; [Required] [DisplayName("Last Name")] public string LastName { get; set; } [Required] [DisplayName("First Name")] public string FirstName { get; set; } [Required] [DisplayName("Title")] public string Title { get; set; } [Required] [DisplayName("Address")] public string Address { get; set; } [Required] [DisplayName("Salary")] [Range(500, double.MaxValue)] public decimal Salary { get; set; } public bool IsSubordinate { get; set; }} To help with displaying readonly/editable fields, we use a helper method. //Simple extension method to display a TextboxFor or DisplayFor based on the isEditable variablepublic static MvcHtmlString TextBoxOrLabelFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, bool isEditable){ if (isEditable) { return htmlHelper.TextBoxFor(expression); } else { return htmlHelper.DisplayFor(expression); }} The helper method is used in the view like so: <%=Html.TextBoxOrLabelFor(model => model.Title, Model.IsSubordinate)%> As mentioned in this post, there is a much easier way to update properties on an object. Download Demo Project VS 2008, ASP.NET MVC 2 RTM Remember to change the connectionString to point to your Northwind DB NorthwindHR.zip Feedback and bugs are always welcome :-)
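One piece the post references but never shows is the repository method behind AuthorizeToViewIDAttribute. Here is a rough sketch of what IsAuthorizedToView might look like; the NorthwindDataContext name is a hypothetical LINQ to SQL context (any data access that can query Northwind's Employees.ReportsTo column would do), and it relies on the fact that the forms ticket stores the EmployeeID as the user name.

// Hypothetical implementation of the check used by AuthorizeToViewIDAttribute.
// Identity.Name carries the EmployeeID because that is what the ticket was created with.
public bool IsAuthorizedToView(int employeeID)
{
    int currentEmployeeID = int.Parse(HttpContext.Current.User.Identity.Name);

    // Employees may always view their own record.
    if (employeeID == currentEmployeeID)
        return true;

    // Managers may view the records of their immediate reports (Northwind's ReportsTo column).
    using (var context = new NorthwindDataContext())
    {
        return context.Employees.Any(e => e.EmployeeID == employeeID &&
                                          e.ReportsTo == currentEmployeeID);
    }
}

Because AuthorizeToViewIDAttribute throws when this returns false, both direct URL access (Employees/Edit/2) and posts against non-reports end up on the "Not Authorized" error page.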

    Read the article

  • Replacing jQuery.live() with jQuery.on()

    - by Rick Strahl
jQuery 1.9 and 1.10 have introduced a host of changes, but for the most part these changes are transparent to existing application usage of jQuery. After spending some time last week with a few of my projects and going through them with a specific eye for jQuery failures I found that for the most part there wasn't a big issue. The vast majority of code continues to run just fine with either 1.9 or 1.10 (which are supposed to be in sync but with 1.10 removing support for legacy Internet Explorer pre-9.0 versions). However, one particular change in the new versions has caused me quite a bit of update trouble: the removal of the jQuery.live() function. This is my own fault I suppose - .live() has been deprecated for a while, but with 1.9 and later it was finally removed altogether from jQuery. In the past I had quite a bit of jQuery code that used .live() and it's one of the things that's holding back my upgrade process, although I'm slowly cleaning up my code and switching to the .on() function as the replacement. jQuery.live() jQuery.live() was introduced a long time ago to simplify handling events on matched elements that currently exist in the document and those that are added in the future and also match the selector. jQuery uses event bubbling, special event binding, plus some magic using meta data attached to a parent level element to check and see if the original target event element matches the selected elements (for more info see Elijah Manor's comment below). An Example Assume a list of items like the following in HTML, and further assume that the items in this list can be appended to at a later point. In this app there's a smallish initial list that loads to start, and as the user scrolls towards the end of the initial small list more items are loaded dynamically and added to the list.<div id="PostItemContainer" class="scrollbox"> <div class="postitem" data-id="4z6qhomm"> <div class="post-icon"></div> <div class="postitemheader"><a href="show/4z6qhomm" target="Content">1999 Buick Century For Sale!</a></div> <div class="postitemprice rightalign">$ 3,500 O.B.O.</div> <div class="smalltext leftalign">Jun. 07 @ 1:06am</div> <div class="post-byline">- Vehicles - Automobiles</div> </div> <div class="postitem" data-id="2jtvuu17"> <div class="postitemheader"><a href="show/2jtvuu17" target="Content">Toyota VAN 1987</a></div> <div class="postitemprice rightalign">$950</div> <div class="smalltext leftalign">Jun. 07 @ 12:29am</div> <div class="post-byline">- Vehicles - Automobiles</div> </div> … </div> With the jQuery.live() function you could easily select elements and hook up a click handler like this:$(".postitem").live("click", function() {...}); Simple and perfectly readable. The behavior of the .live handler generally was the same as the corresponding simple event handlers like .click(), except that you have to explicitly name the event instead of using one of the methods. Re-writing with jQuery.on() With .live() removed in 1.9 and later we have to re-write the .live() code above with an alternative. The jQuery documentation points you at the .on() or .delegate() functions to update your code. jQuery.on() is a more generic event handler function, and it's what jQuery uses internally to map the high level event functions like .click(), .change() etc. that jQuery exposes. Using jQuery.on() however is not a one-to-one replacement for the .live() function.
While .on() can handle events directly and use the same syntax as .live() did, you'll find if you simply switch out .live() with .on() that events on not-yet existing elements will not fire. IOW, the key feature of .live() is not working. You can use .on() to get the desired effect however, but you have to change the syntax to explicitly handle the event you're interested in on the container and then provide a filter selector to specify which elements you actually want to handle the event for. Sounds more complicated than it is and it's easier to see with an example. For the list above hooking .postitem clicks, using jQuery.on() looks like this:$("#PostItemContainer").on("click", ".postitem", function() {...}); You specify a container that can handle the .click event and then provide a filter selector to find the child elements that trigger the actual event. So here #PostItemContainer contains many .postitems, whose click events I want to handle. Any container will do including document, but I tend to use the container closest to the elements I actually want to handle the events on to minimize the event bubbling that occurs to capture the event. With this code I get the same behavior as with .live() and now as new .postitem elements are added the click events are always available. Sweet. Here's the full event signature for the .on() function: .on( events [, selector ] [, data ], handler(eventObject) ) Note that the selector is optional - if you omit it you essentially create a simple event handler that handles the event directly on the selected object. The filter/child selector is required if you want life-like - uh, .live()-like behavior to happen. While it's a bit more verbose than what .live() did, .on() provides the same functionality by being more explicit about what the parent container for trapping events is. .on() is good Practice even for ordinary static Element Lists As a side note, it's a good practice to use jQuery.on() or jQuery.delegate() for events in most cases anyway, using this 'container event trapping' syntax. That's because rather than requiring lots of event handlers on each of the child elements (.postitem in the sample above), there's just one event handler on the container, and only when the event fires does jQuery drill down to find the matching filter element and try to match it to the originating element. In the early days of jQuery I used to manually build handlers that did this and manually drilled from the event object into the originalTarget to determine if it's a matching element. With later versions of jQuery the various event functions in jQuery essentially provide this functionality out of the box with functions like .on() and .delegate(). All of this is nothing new, but I thought I'd write this up because I have on a few occasions forgotten what exactly was needed to replace the many .live() function calls that litter my code - especially older code. This will be a nice reminder next time I have a memory blank on this topic.
And maybe along the way I've helped one or two of you as well to clean up your .live() code… © Rick Strahl, West Wind Technologies, 2005-2013. Posted in jQuery

    Read the article

  • Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6?

    - by Ricardo Ferreira
In this post, I will cover some technical differences between Oracle WebLogic 12c and JBoss EAP 6, which was released a couple of days ago by Red Hat. This article aims to help you evaluate the key points that you should consider when choosing a Java EE application server. In the following sections, I will present some important aspects that most customers ask us about when they are seriously evaluating a middleware infrastructure, especially if you are considering JBoss for some reason. I would suggest that you keep the following question in mind while you are reading the points: "Why should I choose JBoss instead of WebLogic?" 1) Multi Datacenter Deployment and Clustering - D/R ("Disaster & Recovery") architecture support is embedded in the WebLogic Server 12c product. JBoss EAP 6 on the other hand has no direct D/R support included; Red Hat relies on third-party tools at additional cost. When you consider a middleware solution to host your business critical application, you should consider every architectural aspect related to the solution. Fail-over support is only one small aspect of a truly reliable solution. If you do not worry about D/R, your solution will not be reliable. Having said that, with Red Hat and JBoss EAP 6, you have an extra cost that will considerably increase the total cost of ownership of the solution. As we commonly hear from analysts, open source is not so cheap once you start seeing the big picture. - WebLogic Server 12c supports advanced LAN clustering, detection of dead servers and has a common alert framework. JBoss EAP 6 on the other hand has limited LAN clustering support with no server death detection. It does not generate any alerts when servers go down (unless you buy JBoss ON, which is a separate technology that, until now, does not support JBoss EAP 6), and manual intervention is required when servers go down. In most cases, admin people must rely on "kill -9", "tail -f someFile.log" and "ps ax | grep java" commands to manage failures and clustering anomalies. - WebLogic Server 12c supports the concept of Node Manager, which is a separate process that runs on the physical or virtual servers and allows the administration of the cluster to be extended to WebLogic managed servers that are often distributed across multiple machines and geographic locations. JBoss EAP 6 on the other hand has no equivalent technology. Whole server instances must be managed individually. - WebLogic Server 12c Node Manager supports Coherence to boost performance when managing servers. JBoss EAP 6 on the other hand has no similar technology. There is no way to coordinate JBoss instances using high-throughput, low-latency protocols like InfiniBand. The Node Manager feature also enables another very important capability that JBoss EAP lacks: secured administration. When using WebLogic Node Manager, all the administration tasks are sent to the managed servers in a secure tunnel protected by a certificate, which means that the transport layer between the WebLogic administration console and the managed servers is secured by SSL. - WebLogic Server 12c is now integrated with OTD ("Oracle Traffic Director"), which is a web server technology derived from the former Sun iPlanet Web Server. This software complements the web server support offered by OHS ("Oracle HTTP Server").
Using OTD, WebLogic instances are load-balanced by a high powerful software that knows how to handle SDP ("Socket Direct Protocol") over InfiniBand, which boost performance when used with engineered systems technologies like Oracle Exalogic Elastic Cloud. JBoss EAP 6 on the other hand only offers support to Apache Web Server with custom modules created to deal with JBoss clusters, but only across standard TCP/IP networks.  2) Application and Runtime Diagnostics - WebLogic Server 12c have diagnostics capabilities embedded on the server called WLDF ("WebLogic Diagnostic Framework") so there is no need to rely on third-part tools. JBoss EAP 6 on the other hand has no diagnostics capabilities. Their only diagnostics tool is the log generated by the application server. Admin people are encouraged to analyse thousands of log lines to find out what is going on. - WebLogic Server 12c complement WLDF with JRockit MC ("Mission Control"), which provides to administrators and developers a complete insight about the JVM performance, behavior and possible bottlenecks. WebLogic Server 12c also have an classloader analysis tool embedded, and even a log analyzer tool that enables administrators and developers to view logs of multiple servers at the same time. JBoss EAP 6 on the other hand relies on third-part tools to do something similar. Again, only log searching are offered to find out whats going on. - WebLogic Server 12c offers end-to-end traceability and monitoring available through Oracle EM ("Enterprise Manager"), including monitoring of business transactions that flows through web servers, ESBs, application servers and database servers, all of this with high deep JVM analysis and diagnostics. JBoss EAP 6 on the other hand, even using JBoss ON ("Operations Network"), which is a separated technology, does not support those features. Red Hat relies on third-part tools to provide direct Oracle database traceability across JVMs. One of those tools are Oracle EM for non-Oracle middleware that manage JBoss, Tomcat, Websphere and IIS transparently. - WebLogic Server 12c with their JRockit support offers a tool called JRockit Flight Recorder, which can give developers a complete visibility of a certain period of application production monitoring with zero extra overhead. This automatic recording allows you to deep analyse threads latency, memory leaks, thread contention, resource utilization, stack overflow damages and GC ("Garbage Collection") cycles, to observe in real time stop-the-world phenomenons, generational, reference count and parallel collects and mutator threads analysis. JBoss EAP 6 don't even dream to support something similar, even because they don't have their own JVM. 3) Application Server Administration - WebLogic Server 12c offers a complete administration console complemented with scripting and macro-like recording capabilities. A single WebLogic console can managed up to hundreds of WebLogic servers belonging to the same domain. JBoss EAP 6 on the other hand has a limited console and provides a XML centric administration. JBoss, after ten years, started the development of a rudimentary centralized administration that still leave a lot of administration tasks aside, so admin people and developers must touch scripts and XML configuration files for most advanced and even simple administration tasks. This lead applications to error prone and risky deployments. 
Even using JBoss ON, JBoss EAP are not able to offer decent administration features for admin people which must be high skilled in JBoss internal architecture and its managing capabilities. - Oracle EM is available to manage multiple domains, databases, application servers, operating systems and virtualization, with a complete end-to-end visibility. JBoss ON does not provide management capabilities across the complete architecture, only basic monitoring. Even deployment must be done aside JBoss ON which does no integrate well with others softwares than JBoss. Until now, JBoss ON does not supports JBoss EAP 6, so even their minimal support for JBoss are not available for JBoss EAP 6 leaving customers uncovered and subject to high skilled JBoss admin people. - WebLogic Server 12c has the same administration model whatever is the topology selected by the customer. JBoss EAP 6 on the other hand differentiates between two operational models: standalone-mode and domain-mode, that are not consistent with each other. Depending on the mode used, the administration skill is different. - WebLogic Server 12c has no point-of-failures processes, and it does not need to define any specialized server. Domain model in WebLogic is available for years (at least ten years or more) and is production proven. JBoss EAP 6 on the other hand needs special processes to garantee JBoss integrity, the PC ("Process-Controller") and the HC ("Host-Controller"). Different from WebLogic, the domain model in JBoss is quite new (one year at tops) of maturity, and need to mature considerably until start doing things like WebLogic domain model does. - WebLogic Server 12c supports parallel deployment model which enables some artifacts being deployed at the same time. JBoss EAP 6 on the other hand does not have any similar feature. Every deployment are done atomically in the containers. This means that if you have a huge EAR (an EAR of 120 MB of size for instance) and deploy onto JBoss EAP 6, this EAR will take some minutes in order to starting accept thread requests. The same EAR deployed onto WebLogic Server 12c will reduce the deployment time at least in 2X compared to JBoss. 4) Support and Upgrades - WebLogic Server 12c has patch management available. JBoss EAP 6 on the other hand has no patch management available, each JBoss EAP instance should be patched manually. To achieve such feature, you need to buy a separated technology called JBoss ON ("Operations Network") that manage this type of stuff. But until now, JBoss ON does not support JBoss EAP 6 so, in practice, JBoss EAP 6 does not have this feature. - WebLogic Server 12c supports previuous WebLogic domains without any reconfiguration since its kernel is robust and mature since its creation in 1995. JBoss EAP 6 on the other hand has a proven lack of supportability between JBoss AS 4, 5, 6 and 7. Different kernels and messaging engines were implemented in JBoss stack in the last five years reveling their incapacity to create a well architected and proven middleware technology. - WebLogic Server 12c has patch prescription based on customer configuration. JBoss EAP 6 on the other hand has no such capability. People need to create ticket supports and have their installations revised by Red Hat support guys to gain some patch prescription from them. - Oracle WebLogic Server independent of the version has 8 years of support of new patches and has lifetime release of existing patches beyond that. 
JBoss EAP 6 on the other hand provides patches for a specific application server version up to 5 years after the release date. JBoss EAP 4 and previous versions had only 4 years. A good question that Red Hat will argue to answer is: "what happens when you find issues after year 5"?  5) RAC ("Real Application Clusters") Support - WebLogic Server 12c ships with a specific JDBC driver to leverage Oracle RAC clustering capabilities (Fast-Application-Notification, Transaction Affinity, Fast-Connection-Failover, etc). Oracle JDBC thin driver are also available. JBoss EAP 6 on the other hand ships only the standard Oracle JDBC thin driver. Load balancing with Oracle RAC are not supported. Manual intervention in case of planned or unplanned RAC downtime are necessary. In JBoss EAP 6, situation does not reestablish automatically after downtime. - WebLogic Server 12c has a feature called Active GridLink for Oracle RAC which provides up to 3X performance on OLTP applications. This seamless integration between WebLogic and Oracle database enable more value added to critical business applications leveraging their investments in Oracle database technology and Oracle middleware. JBoss EAP 6 on the other hand has no performance gains at all, even when admin people implement some kind of connection-pooling tuning. - WebLogic Server 12c also supports transaction and web session affinity to the Oracle RAC, which provides aditional gains of performance. This is particularly interesting if you are creating a reliable solution that are distributed not only in an LAN cluster, but into a different data center. JBoss EAP 6 on the other hand has no such support. 6) Standards and Technology Support - WebLogic Server 12c is fully Java EE 6 compatible and production ready since december of 2011. JBoss EAP 6 on the other hand became fully compatible with Java EE 6 only in the community version after three months, and production ready only in a few days considering that this article was written in June of 2012. Red Hat says that they are the masters of innovation and technology proliferation, but compared with Oracle and even other proprietary vendors like IBM, they historically speaking are lazy to deliver the most newest technologies and standards adherence. - Oracle is the steward of Java, driving innovation into the platform from commercial and open-source vendors. Red Hat on the other hand does not have its own JVM and relies on third-part JVMs to complete their application server offer. 95% of Red Hat customers are using Oracle HotSpot as JVM, which means that without Oracle involvement, their support are limited exclusively to the application server layer and we all know that most problems are happens in the JVM layer. - WebLogic Server 12c supports natively JDK 7, which empower developers to explore the maximum of the Java platform productivity when writing code. This feature differentiate WebLogic from others application servers (except GlassFish that are also managed by Oracle) because the usage of JDK 7 introduce such remarkable productivity features like the "try-with-resources" enhancement, catching multiple exceptions with one try block, Strings in the switch statements, JVM improvements in terms of JDBC, I/O, networking, security, concurrency and of course, the most important feature of Java 7: native support for multiple non-Java languages. More features regarding JDK 7 can be found here. 
JBoss EAP 6 on the other hand does not support JDK 7 officially, they comment in their community version that "Java SE 7 can be used with JBoss 7" which does not gives you any guarantees of enterprise support for JDK 7. - Oracle WebLogic Server 12c supports integration with Spring framework allowing Spring applications to use WebLogic special transaction manager, exposing bean interfaces to WebLogic MBeans to take advantage of all WebLogic monitoring and administration advantages. JBoss EAP 6 on the other hand has no special integration with Spring. In fact, Red Hat offers a suspicious package called "JBoss Web Platform" that in theory supports Spring, but in practice this package does not offers any special integration. It is just a facility for Red Hat customers to have support from both JBoss and Spring technology using the same customer support. 7) Lightweight Development - Oracle WebLogic Server 12c and Oracle GlassFish are completely integrated and can share applications without any modifications. Starting with the 12c version, WebLogic now understands natively GlassFish deployment descriptors and specific configurations in order to offer you a truly and reliable migration path from a community Java EE application server to a enterprise middleware product like WebLogic. JBoss EAP 6 on the other hand has no support to natively reuse an existing (or still in development) application from JBoss AS community server. Users of JBoss suffer of critical issues during deployment time that includes: changing the libraries and dependencies of the application, patching the DTD or XSD deployment descriptors, refactoring of the application layers due classloading issues and anomalies, rebuilding of persistence, business and web layers due issues with "usage of the certified version of an certain dependency" or "frameworks that Red Hat potentially does not recommend" etc. If you have the culture or enterprise IT directive of developing Java EE applications using community middleware to in a certain future, transition to enterprise (supported by a vendor) middleware, Oracle WebLogic plus Oracle GlassFish offers you a more sustainable solution. - WebLogic Server 12c has a very light ZIP distribution (less than 165 MB). JBoss EAP 6 ZIP size is around 130 MB, together with JBoss ON you have more 100 MB resulting in a higher download footprint. This is particularly interesting if you plan to use automated setup of application server instances (for example, to rapidly setup a development or staging environment) using Maven or Hudson. - WebLogic Server 12c has a complete integration with Maven allowing developers to setup WebLogic domains with few commands. Tasks like downloading WebLogic, installation, domain creation, data sources deployment are completely integrated. JBoss EAP 6 on the other hand has a limited offer integration with those tools.  - WebLogic Server 12c has a startup mode called WLX that turns-off EJB, JMS and JCA containers leaving enabled only the web container with Java EE 6 web profile. JBoss EAP 6 on the other hand has no such feature, you need to disable manually the containers that you do not want to use. - WebLogic Server 12c supports fastswap, which enables you to change classes without redeployment. This is particularly interesting if you are developing patches for the application that is already deployed and you do not want to redeploy the entire application. 
This is the same behavior that most application servers offers to JSP pages, but with WebLogic Server 12c, you have the same feature for Java classes in general. JBoss EAP 6 on the other hand has no such support. Even JBoss EAP 5 does not support this until now. 8) JMS and Messaging - WebLogic Server 12c has a proven and high scalable JMS implementation since its initial release in 1995. JBoss EAP 6 on the other hand has a still immature technology called HornetQ, which was introduced in JBoss EAP 5 replacing everything that was implemented in the previous versions. Red Hat loves to introduce new technologies across JBoss versions, playing around with customers and their investments. And when they are asked about why they have changed the implementation and caused such a mess, their answer is always: "the previous implementation was inadequate and not aligned with the community strategy so we are creating a new a improved one". This Red Hat practice leads to uncomfortable investments that in a near future (sometimes less than a year) will be affected in someway. - WebLogic Server 12c has troubleshooting and monitoring features included on the WebLogic console and WLDF. JBoss EAP 6 on the other hand has no direct monitoring on the console, activity is reflected only on the logs, no debug logs available in case of JMS issues. - WebLogic Server 12c has extremely good performance and scalability. JBoss EAP 6 on the other hand has a JMS storage mechanism relying on Oracle database or MySQL. This means that if an issue in production happens and Red Hat affirms that an performance issue is happening due to database problems, they will not support you on the performance issue. They will orient you to call Oracle instead. - WebLogic Server 12c supports messaging enterprise features like SAF ("Store and Forward"), Distributed Queues/Topics and Foreign JMS providers support that leverage JMS implementations without compromise developer code making things completely transparent. JBoss EAP 6 on the other hand do not even dream to support such features. 9) Caching and Grid - Coherence, which is the leading and most mature data grid technology from Oracle, is available since early 2000 and was integrated with WebLogic in 2009. Coherence and WebLogic clusters can be both managed from WebLogic administrative console. Even Node Manager supports Coherence. JBoss on the other hand discontinued JBoss Cache, which was their caching implementation just like they did with the messaging implementation (JBossMQ) which was a issue for long term customers. JBoss EAP 6 ships InfiniSpan version 1.0 which is immature and lack a proven record of successful cases and reliability. - WebLogic Server 12c has a feature called ActiveCache which uses Coherence to, without any code changes, replicate HTTP sessions from both WebLogic and other application servers like JBoss, Tomcat, Websphere, GlassFish and even Microsoft IIS. JBoss EAP 6 on the other hand does have such support and even when they do in the future, they probably will support only their own application server. - Coherence can be used to manage both L1 and L2 cache levels, providing support to Oracle TopLink and others JPA compliant implementations, even Hibernate. JBoss EAP 6 and Infinispan on the other hand supports only Hibernate. And most important of all: Infinispan does not have any successful case of L1 or L2 caching level support using Hibernate, which lead us to reflect about its viability. 
10) Performance - WebLogic Server 12c is certified with Oracle Exalogic Elastic Cloud and can run unchanged applications at this engineered system. This approach can benefit customers from Exalogic optimization's of both kernel and JVM layers to boost performance in terms of 10X for web, OLTP, JMS and grid applications. JBoss EAP 6 on the other hand has no investment on engineered systems: customers do not have the choice to deploy on a Java ultra fast system if their project becomes relevant and performance issues are detected. - WebLogic Server 12c maintains a performance gain across each new release: starting on WebLogic 5.1, the overall performance gain has been close to 4X, which close to a 20% gain release by release. JBoss on the other hand does not provide SPECJAppServer or SPECJEnterprise performance benchmarks. Their so called "performance gains" remains hidden in their customer environments, which lead us to think if it is true or not since we will never get access to those environments. - WebLogic Server 12c has industry performance benchmarks with submissions across platforms and configurations leading SPECJ. Oracle WebLogic leads SPECJAppServer performance in multiple categories, fitting all customer topologies like: dual-node, single-node, multi-node and multi-node with RAC. JBoss... again, does not provide any SPECJAppServer performance benchmarks. - WebLogic Server 12c has a feature called work manager which allows your application to embrace new performance levels based on critical resource utilization of the CPUs usage. Work managers prioritizes work and allocates threads based on an execution model that takes into account administrator-defined parameters and actual run-time performance and throughput. JBoss EAP 6 on the other hand has no compared feature and probably they never will. Not supporting such feature like work managers, JBoss EAP 6 forces admin people and specially developers to uncover performance gains in a intrusive way, rewriting the code and doing performance refactorings. 11) Professional Services Support - WebLogic Server 12c and any other technology sold by Oracle give customers the possibility of hire OCS ("Oracle Consulting Services") to manage critical scenarios, deployment assistance of new applications, high skilled consultancy of architecture, best practices and people allocation together with customer teams. All OCS services are available without any restrictions, having the customer bought software from Oracle or just starting their implementation before any acquisition. JBoss EAP 6 or Red Hat to be more specifically, only offers professional services if you buy subscriptions from them. If you are developing a new critical application for your business and need the help of Red Hat for a serious issue or architecture decision, they will probably say: "OK... I can help you but after you buy subscriptions from me". Red Hat also does not allows their professional services consultants to manage environments that uses community based software. They will probably force you to first buy a subscription, download their "enterprise" version and them, optionally hire their consultants. - Oracle provides you our university to educate your team into our technologies, including of course specialized trainings of WebLogic application server. At any time and location, you can hire Oracle to train your team so you get trustful knowledge according to your specific needs. 
Certifications for the products are also available if your technical people desire to differentiate themselves as professionals. Red Hat on the other hand have a limited pool of resources to train your team in their technologies. Basically they are selling training and certification for RHEL ("Red Hat Enterprise Linux") but if you demand more specialized training in JBoss middleware, they will probably connect you to some "certified" partner localized training since they are apparently discontinuing their education center, at least here in Brazil. They were not able to reproduce their success with RHEL education to their middleware division since they need first sell the subscriptions to after gives you specialized training. And again, they only offer you specialized training based on their enterprise version (EAP in the case of JBoss) which means that the courses will be a quite outdated. There are reports of developers that took official training's from Red Hat at this year (2012) and in a certain JBoss advanced course, Red Hat supposedly covered JBossMQ as the messaging subsystem, and even the printed material provided was based on JBossMQ since the training was created for JBoss EAP 4.3. 12) Encouraging Transparency without Ulterior Motives - WebLogic Server 12c like any other software from Oracle can be downloaded any time from anywhere, you should only possess an OTN ("Oracle Technology Network") credential and you can download any enterprise software how many times you want. And is not some kind of "trial" version. It is the official binaries that will be running for ever in your data center. Oracle does not encourages the usage of "specific versions" of our software. The binaries you buy from Oracle are the same binaries anyone in the world could download and use for testing and personal education. JBoss EAP 6 on the other hand are not available for download unless you buy a subscription and get access to the Red Hat enterprise repositories. If you need to test, learn or just start creating your application using Red Hat's middleware software, you should download it from the community website. You are not allowed to download the enterprise version that, according to Red Hat are more secure, reliable and robust. But no one of us want to start the development of a software with an unsecured, unreliable and not scalable middleware right? So what you do? You are "invited" by Red Hat to buy subscriptions from them to get access to the "cool" version of the software. - WebLogic Server 12c prices are publicly available in the Oracle website. If you want to know right now how much WebLogic will cost to your organization, just click here and get access to our price list. In the case of WebLogic, check out the "US Oracle Technology Commercial Price List". Oracle also encourages you to get in touch with a sales representative to discuss discounts that would make possible the investment into our technology. But you are not required to do this, only if you are interested in buying our technology or maybe you want to discuss some discount scenarios. JBoss EAP 6 on the other hand does not have its cost publicly available in Red Hat's website or in any other media, at least is not so easy to get such information. The only link you will possibly find in their website is a "Contact a Sales Representative" link. This is not a very good relationship between an customer and an vendor. This is not an example of transparency, mainly when the software are sold as open. 
In these situations, customers expect software prices to be publicly available so they have the chance to decide, based on the features of the software, whether the cost is fair. Conclusion Oracle WebLogic is the most mature, secure, reliable and scalable Java EE application server on the market, and it has a proven record of success around the globe to back up that maturity. Don't miss the chance to discover today how WebLogic could fit your needs and sustain your global IT middleware strategy, whether or not that strategy is completely based on the cloud.

    Read the article

< Previous Page | 681 682 683 684 685 686 687 688 689 690 691 692  | Next Page >