Search Results

Search found 118204 results on 4729 pages for 'view user control'.


  • Workflow Automation software for SVN

    - by KyleMit
    We're currently using IBM's ClearQuest for task management and ClearCase for change management. They plug and play very well with each other. Users can create tasks in ClearQuest as defects and enhancements, and developers can use those tasks to check out and modify code in source control. We're looking to upgrade to a better, more modern source control system, like SVN, although we're not married to SVN in particular. There are loads of source control systems out there, but I'm having difficulty finding one that also lets users enter and track tasks, especially in a way that's native to the source control system itself. Are there any products that replace ClearQuest for systems like SVN? Are there any other cheap / open source application pairs that handle both sides of the coin?

    Read the article

  • SQL SERVER – Solution – User Not Able to See Any User Created Object in Tables – Security and Permissions Issue

    - by pinaldave
    There is an old quote: “A picture is worth a thousand words.” I believe this quote immensely. Quite often I get phone calls asking whether I can help because something is not working. My reaction in most cases is that I need to know more - send me the exact error or a screenshot. Until I see the error or reproduce the scenario myself, I prefer not to comment. Yesterday I got a similar phone call from an old friend who was not sure what was going on. Here is what he said: “When I try to connect to SQL Server, it lets me connect just fine and lets me open and explore the database. I noticed that I do not see any user created instances, but when my colleague attempts to connect to the server, he is able to explore the database and see all the user created tables and other objects. Can you help me fix it?” My immediate reaction was that he was facing a security and permissions issue. However, before making that recommendation I suggested that he send me a screenshot of his own SSMS and his colleague’s SSMS. After carefully looking at both screenshots, I was very confident about the issue and we were able to resolve it. Let us reproduce the same scenario - maybe there is some learning in it for us. Issue: User not able to see user created objects. First let us see the image of my friend’s SSMS screen. (Recreated on my machine) Now let us see my friend’s colleague’s SSMS screen. (Recreated on my machine) You can see that my friend could not see the user tables but his colleague certainly could. Now I believed it was a permissions issue. Following this, I asked him to send me another image showing the various permissions of the user in the database. My friend’s screen My friend’s colleague’s screen This indeed proved that my friend did not have access to the AdventureWorks database, and because of that he was not able to access it. He did have public access, which means he had rights similar to guest access. However, their SQL Server had followed my earlier advice on limiting guest access, which means he was not able to see any user created objects. My next question was to validate what kind of access my friend’s colleague had. He replied that the colleague is the admin of the server. I suggested that if my friend was supposed to have admin access to the database, he should request admin access from his colleague. My friend promptly asked his colleague, and on the following screen his colleague added him as an admin. You can do the same using the following T-SQL script as well. USE [AdventureWorks2012] GO ALTER ROLE [db_owner] ADD MEMBER [testguest] GO Once my friend was an admin he was able to access all the user objects just as he was expecting. Please note, this complete exercise was done on a development server. One should not play around with security on a live or production server. Security is an issue that should be left to only the senior administrators of the server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology
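    As a quick way to confirm this kind of permissions problem before changing anything, the affected user can list his own effective permissions and role memberships. This is a minimal diagnostic sketch, not part of Pinal's original post; it assumes the AdventureWorks2012 database used in the article:

    USE [AdventureWorks2012]
    GO
    -- Effective database-level permissions of the user running the query
    SELECT * FROM fn_my_permissions(NULL, 'DATABASE');
    GO
    -- Database roles the current user is a member of
    SELECT dp.name AS role_name
    FROM sys.database_role_members drm
    JOIN sys.database_principals dp  ON drm.role_principal_id = dp.principal_id
    JOIN sys.database_principals mem ON drm.member_principal_id = mem.principal_id
    WHERE mem.name = USER_NAME();
    GO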

    Read the article

  • Model-View-Controller in JavaScript

    - by Casey Hope
    tl;dr: How does one implement MVC in JavaScript in a clean way? I'm trying to implement MVC in JavaScript. I have googled and reorganized with my code countless times but have not found a suitable solution. (The code just doesn't "feel right".) Here's how I'm going about it right now. It's incredibly complicated and is a pain to work with (but still better than the pile of code I had before). It has ugly workarounds that sort of defeat the purpose of MVC. And behold, the mess, if you're really brave: // Create a "main model" var main = Model0(); function Model0() { // Create an associated view and store its methods in "view" var view = View0(); // Create a submodel and pass it a function // that will "subviewify" the submodel's view var model1 = Model1(function (subview) { view.subviewify(subview); }); // Return model methods that can be used by // the controller (the onchange handlers) return { 'updateModel1': function (newValue) { model1.update(newValue); } }; } function Model1(makeSubView) { var info = ''; // Make an associated view and attach the view // to the parent view using the passed function var view = View1(); makeSubView(view.__view); // Dirty dirty // Return model methods that can be used by // the parent model (and so the controller) return { 'update': function (newValue) { info = newValue; // Notify the view of the new information view.events.value(info); } }; } function View0() { var thing = document.getElementById('theDiv'); var input = document.getElementById('theInput'); // This is the "controller", bear with me input.onchange = function () { // Ugly, uses a global to contact the model main.updateModel1(this.value); }; return { 'events': {}, // Adds a subview to this view. 'subviewify': function (subview) { thing.appendChild(subview); } }; } // This is a subview. function View1() { var element = document.createElement('div'); return { 'events': { // When the value changes this is // called so the view can be updated 'value': function (newValue) { element.innerHTML = newValue; } }, // ..Expose the DOM representation of the subview // so it can be attached to a parent view '__view': element }; } How does one implement MVC in JavaScript in a cleaner way? How can I improve this system? Or is this the completely wrong way to go, should I follow another pattern?

    Read the article

  • How Does One Make Scala Control Abstraction in Repeat Until?

    - by peter_pilgrim
    Hi, I am Peter Pilgrim. I watched Martin Odersky create a control abstraction in Scala. However, I cannot yet seem to reproduce it inside IntelliJ IDEA 9. Is it the IDE? package demo class Control { def repeatLoop ( body: => Unit ) = new Until( body ) class Until( body: => Unit ) { def until( cond: => Boolean ) { body; val value: Boolean = cond; println("value="+value) if ( value ) repeatLoop(body).until(cond) // if (cond) until(cond) } } def doTest2(): Unit = { var y: Int = 1 println("testing ... repeatUntil() control structure") repeatLoop { println("found y="+y) y = y + 1 } { until ( y < 10 ) } } } The error message reads: Information:Compilation completed with 1 error and 0 warnings Information:1 error Information:0 warnings C:\Users\Peter\IdeaProjects\HelloWord\src\demo\Control.scala Error:Error:line (57)error: Control.this.repeatLoop({ scala.this.Predef.println("found y=".+(y)); y = y.+(1) }) of type Control.this.Until does not take parameters repeatLoop { In the curried function the body can be thought of as returning an expression (the value of y + 1), but the declared body parameter of repeatLoop clearly says this can be ignored - or not? What does the error mean?
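    For reference, here is a corrected sketch of the same construct (my reconstruction, not the poster's code): the parameters must be by-name (=> Unit / => Boolean - the ">" characters were lost in the post above), and the extra block at the call site is what triggers the "does not take parameters" error; until should instead be applied infix to the value returned by repeatLoop.

    class Control {
      def repeatLoop(body: => Unit) = new Until(body)

      class Until(body: => Unit) {
        def until(cond: => Boolean): Unit = {
          body
          if (cond) repeatLoop(body).until(cond) // keep looping while the condition holds, as in the original post
        }
      }

      def doTest2(): Unit = {
        var y: Int = 1
        println("testing ... repeatLoop()/until() control structure")
        repeatLoop {
          println("found y=" + y)
          y = y + 1
        } until (y < 10)
      }
    }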

    Read the article

  • OAuth 2.0: Can a user-agent client avoid forwarding fragments?

    - by Bosh
    In the OAuth 2.0 draft specification, user-agent clients receive authorization in the form of a bearer token via redirection (from an authentication server) to a URL such as HTTP/1.1 302 Found Location: http://example.com/rd#access_token=FJQbwq9&expires_in=3600 According to Section 3.5.2 it is then the user-agent's job to GET the URL in question, but "The user-agent SHALL NOT include the fragment component with the request." In other words, as a result of the example redirection above, the user-agent should GET /rd HTTP/1.1 Host: example.com without passing #access_token to the server. My question: what user agents behave this way? I thought redirection in Firefox, for example, would (logically) include the fragment in the GET request. Am I just wrong about this, or does the OAuth 2.0 specification rely on non-standard user-agent behavior?

    Read the article

  • UIPickerview is not visible in additionally added view

    - by MaheshBabu
    Hi folks, I created an additional IBOutlet UIView and placed an IBOutlet UIPickerView inside that additional view. I set up the connections and added the additional view as a subview of the main view, as [self.view addSubview:additionalView];. I call this code in a button click event. Unfortunately, when the button is clicked I do get the additional view, but it doesn't show the UIPickerView. What's wrong? I already have two UIPickerViews in the main view. How can I add a UIPickerView to the added view? Thank you in advance.

    Read the article

  • Emulate back button in multi-view TabActivity

    - by ZelluX
    Hi all, I have a TabActivity with several tabs. Each tab corresponds to a specific view, and those views may further switch to other views. For example, one of my tabs displays an RSS feed list; after the user clicks one of the RSS feeds, it switches to a view displaying a list of articles, and after the user clicks one of the titles, a full article view is displayed. I'm going to add support for the "back" button in my application. For instance, in a full article view, after the user presses the "back" button, it should switch back to the article list view. And if the user presses the "back" button again, my application should switch back to the feed list view. My idea is to maintain a Stack<View> during navigation, and every time the user presses the "back" button, the program will pop a View off the stack and set it as the current view. But I would like to know how to set the current view in a TabHost. Many thanks.
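    A minimal sketch of that Stack<View> idea (my illustration, not code from the question - the container id and layout are hypothetical): each tab hosts a FrameLayout whose child view is swapped in code, and onKeyDown() intercepts the hardware Back key to restore the previous child.

    import java.util.Stack;
    import android.view.KeyEvent;
    import android.view.View;
    import android.widget.FrameLayout;

    // Inside the activity that owns the tab content:
    private final Stack<View> viewHistory = new Stack<View>();

    private void switchToView(View newView) {
        FrameLayout container = (FrameLayout) findViewById(R.id.tab_content); // hypothetical container id
        if (container.getChildCount() > 0) {
            viewHistory.push(container.getChildAt(0)); // remember the view we are leaving
        }
        container.removeAllViews();
        container.addView(newView);
    }

    @Override
    public boolean onKeyDown(int keyCode, KeyEvent event) {
        if (keyCode == KeyEvent.KEYCODE_BACK && !viewHistory.isEmpty()) {
            FrameLayout container = (FrameLayout) findViewById(R.id.tab_content);
            container.removeAllViews();
            container.addView(viewHistory.pop()); // restore the previous view of this tab
            return true;                           // consume the key press
        }
        return super.onKeyDown(keyCode, event);
    }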

    Read the article

  • ASP.Net MVC 2 Auto Complete Textbox With Custom View Model Attribute & EditorTemplate

    - by SeanMcAlinden
    In this post I’m going to show how to create a generic, ajax driven Auto Complete text box using the new MVC 2 Templates and the jQuery UI library. The template will be automatically displayed when a property is decorated with a custom attribute within the view model. The AutoComplete text box in action will look like the following:   The first thing to do is to do is visit my previous blog post to put the custom model metadata provider in place, this is necessary when using custom attributes on the view model. http://weblogs.asp.net/seanmcalinden/archive/2010/06/11/custom-asp-net-mvc-2-modelmetadataprovider-for-using-custom-view-model-attributes.aspx Once this is in place, make sure you visit the jQuery UI and download the latest stable release – in this example I’m using version 1.8.2. You can download it here. Add the jQuery scripts and css theme to your project and add references to them in your master page. Should look something like the following: Site.Master <head runat="server">     <title><asp:ContentPlaceHolder ID="TitleContent" runat="server" /></title>     <link href="../../Content/Site.css" rel="stylesheet" type="text/css" />     <link href="../../css/ui-lightness/jquery-ui-1.8.2.custom.css" rel="stylesheet" type="text/css" />     <script src="../../Scripts/jquery-1.4.2.min.js" type="text/javascript"></script>     <script src="../../Scripts/jquery-ui-1.8.2.custom.min.js" type="text/javascript"></script> </head> Once this is place we can get started. Creating the AutoComplete Custom Attribute The auto complete attribute will derive from the abstract MetadataAttribute created in my previous post. It will look like the following: AutoCompleteAttribute using System.Collections.Generic; using System.Web.Mvc; using System.Web.Routing; namespace Mvc2Templates.Attributes {     public class AutoCompleteAttribute : MetadataAttribute     {         public RouteValueDictionary RouteValueDictionary;         public AutoCompleteAttribute(string controller, string action, string parameterName)         {             this.RouteValueDictionary = new RouteValueDictionary();             this.RouteValueDictionary.Add("Controller", controller);             this.RouteValueDictionary.Add("Action", action);             this.RouteValueDictionary.Add(parameterName, string.Empty);         }         public override void Process(ModelMetadata modelMetaData)         {             modelMetaData.AdditionalValues.Add("AutoCompleteUrlData", this.RouteValueDictionary);             modelMetaData.TemplateHint = "AutoComplete";         }     } } As you can see, the constructor takes in strings for the controller, action and parameter name. The parameter name will be used for passing the search text within the auto complete text box. The constructor then creates a new RouteValueDictionary which we will use later to construct the url for getting the auto complete results via ajax. The main interesting method is the method override called Process. With the process method, the route value dictionary is added to the modelMetaData AdditionalValues collection. The TemplateHint is also set to AutoComplete, this means that when the view model is parsed for display, the MVC 2 framework will look for a view user control template called AutoComplete, if it finds one, it uses that template to display the property. The View Model To show you how the attribute will look, this is the view model I have used in my example which can be downloaded at the end of this post. 
View Model using System.ComponentModel; using Mvc2Templates.Attributes; namespace Mvc2Templates.Models {     public class TemplateDemoViewModel     {         [AutoComplete("Home", "AutoCompleteResult", "searchText")]         [DisplayName("European Country Search")]         public string SearchText { get; set; }     } } As you can see, the auto complete attribute is called with the controller name, action name and the name of the action parameter that the search text will be passed into. The AutoComplete Template Now all of this is in place, it’s time to create the AutoComplete template. Create a ViewUserControl called AutoComplete.ascx at the following location within your application – Views/Shared/EditorTemplates/AutoComplete.ascx Add the following code: AutoComplete.ascx <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %> <%     var propertyName = ViewData.ModelMetadata.PropertyName;     var propertyValue = ViewData.ModelMetadata.Model;     var id = Guid.NewGuid().ToString();     RouteValueDictionary urlData =         (RouteValueDictionary)ViewData.ModelMetadata.AdditionalValues.Where(x => x.Key == "AutoCompleteUrlData").Single().Value;     var url = Mvc2Templates.Views.Shared.Helpers.RouteHelper.GetUrl(this.ViewContext.RequestContext, urlData); %> <input type="text" name="<%= propertyName %>" value="<%= propertyValue %>" id="<%= id %>" class="autoComplete" /> <script type="text/javascript">     $(function () {         $("#<%= id %>").autocomplete({             source: function (request, response) {                 $.ajax({                     url: "<%= url %>" + request.term,                     dataType: "json",                     success: function (data) {                         response(data);                     }                 });             },             minLength: 2         });     }); </script> There is a lot going on in here but when you break it down it’s quite simple. Firstly, the property name and property value are retrieved through the model meta data. These are required to ensure that the text box input has the correct name and data to allow for model binding. If you look at line 14 you can see them being used in the text box input creation. The interesting bit is on line 8 and 9, this is the code to retrieve the route value dictionary we added into the model metada via the custom attribute. Line 11 is used to create the url, in order to do this I created a quick helper class which looks like the code below titled RouteHelper. The last bit of script is the code to initialise the jQuery UI AutoComplete control with the correct url for calling back to our controller action. 
RouteHelper using System.Web.Mvc; using System.Web.Routing; namespace Mvc2Templates.Views.Shared.Helpers {     public static class RouteHelper     {         const string Controller = "Controller";         const string Action = "Action";         const string ReplaceFormatString = "REPLACE{0}";         public static string GetUrl(RequestContext requestContext, RouteValueDictionary routeValueDictionary)         {             RouteValueDictionary urlData = new RouteValueDictionary();             UrlHelper urlHelper = new UrlHelper(requestContext);                          int i = 0;             foreach(var item in routeValueDictionary)             {                 if (item.Value == string.Empty)                 {                     i++;                     urlData.Add(item.Key, string.Format(ReplaceFormatString, i.ToString()));                 }                 else                 {                     urlData.Add(item.Key, item.Value);                 }             }             var url = urlHelper.RouteUrl(urlData);             for (int index = 1; index <= i; index++)             {                 url = url.Replace(string.Format(ReplaceFormatString, index.ToString()), string.Empty);             }             return url;         }     } } See it in action All you need to do to see it in action is pass a view model from your controller with the new AutoComplete attribute attached and call the following within your view: <%= this.Html.EditorForModel() %> NOTE: The jQuery UI auto complete control expects a JSON string returned from your controller action method… as you can’t use the JsonResult to perform GET requests, use a normal action result, convert your data into json and return it as a string via a ContentResult. If you download the solution it will be very clear how to handle the controller and action for this demo. The full source code for this post can be downloaded here. It has been developed using MVC 2 and Visual Studio 2010. As always, I hope this has been interesting/useful. Kind Regards, Sean McAlinden.
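    To illustrate the closing note about returning JSON from a plain action result, here is a rough sketch of what the AutoCompleteResult action on the Home controller might look like. This is my guess at the shape of the controller, not code from the article, and the country list is made up:

    using System;
    using System.Linq;
    using System.Web.Mvc;
    using System.Web.Script.Serialization;

    public class HomeController : Controller
    {
        public ActionResult AutoCompleteResult(string searchText)
        {
            // Stand-in data source - the real article queries European country names
            string[] countries = { "Austria", "Belgium", "France", "Germany", "Ireland", "Italy", "Spain" };

            var matches = countries
                .Where(c => c.StartsWith(searchText ?? string.Empty, StringComparison.OrdinalIgnoreCase))
                .ToArray();

            // Serialize manually and return a ContentResult so the request can be a GET
            string json = new JavaScriptSerializer().Serialize(matches);
            return Content(json, "application/json");
        }
    }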

    Read the article

  • MSCC: Purpose and benefits of Version Control Systems (VCS)

    Unfortunately, there was no monthly meetup during May, which meant that it was even more important and interesting to go forward with a great topic for this month. Earlier this year I already spoke to Nayar Joolfoo about doing a presentation on version control systems (VCS), and he gladly agreed. It was just a matter of finding the right date. Furthermore, it was also a great coincidence that Avinash Meetoo announced on social media networks that Knowledge 7 is about to have a new training on "Effective git" - which correlates to a book title Avinash is currently working on - all the best with your approach on this, and reach out to our MSCC craftsmen for reviews. Once again a big Thank you to Orange Ebene Accelerator for providing the venue for us, and the MSCC members involved in securing the time slot for our event. Unfortunately, it's kind of tough to get an early confirmation for our meetups these days. I'll keep you posted on that one as there are some interesting and exciting options coming up soon. Okay, let's talk about the meeting and version control systems again. As usual, I'm going to put my first impression of the meetup: "Absolutely great topic, questions and discussions on version control systems, like git or VSO. I was also highly pleased by the number of first timers and female IT geeks. Hopefully, we will be able to keep this trend for future get-togethers." And I really have to emphasise the amount of fresh blood coming to our gathering. Also, during the initial phase it was surprising to see that exactly those first-timers, most of them students at various campuses here on the island, had absolutely no idea about version control systems. More about that further down... Reactions of other attendees If I counted correctly, we had a total of 17 attendees this month, and I'd like to give you feedback from some of them: "Inspiring. Helped me understand more about GIT." -- Sean on event comments "Joined the meetup today with literally no idea what is a version control system. I have several reasons why I should be starting to use VCS as from NOW in my projects. Thanks Nayar, Jochen and other participants :)" -- Yudish on event comments "Was present today and I'm very satisfied.I was not aware if there was a such tool like git available. Thanks to those who contributed for this meetup.It was great. Learned a lot from this meetup!!" -- Leonardo on event comments "Seriously, I can see how it’s going to ease my task and help me save time. Gone are the issues with files backups.  And since I’ll be doing my dissertation this year, using Git would help me a lot for my backups and I’m grateful to Nayar for the great explanation." -- Swan-Iyah on MSCC meetup : Version Controls Hopefully, I'll be able to get some other sources - personal blogs preferred - on our meeting. Geeks, thank you so much for those encouraging comments. It's really great to experience that we, all members of the MSCC, are doing the right thing to get more IT information out, and to help each other to improve and evolve in our professional careers. Our agenda of the day Honestly, we had a bumpy start... First, I was battling a little bit with the movable room divider in order to maximize the space. I mean, we had 24 RSVPs and usually there might be additional people coming along. Then, for whatever reason, we were facing power outages - actually twice in short periods. Not too good for the projector after all, but hey, it went smoothly for the rest of the time. And last but not least...
    our first speaker Nayar got stuck somewhere on the road. ;-) Anyway, not a real show-stopper, and we used the time until Nayar's arrival to introduce ourselves a little bit. It is always important for me to get to know the "newbies" a little bit, and as a result we had lots of university students - first year, second year and recent graduates - among them. Surprisingly, none of them had ever been in contact with version control systems at all. I mean, this is a shocking discovery! Similar to touch-typing, I'd say that being able to use (and master) any kind of version control system is compulsory in any job in the IT industry. Seriously, I'm wondering what is being taught during the classes on campus. All of them have to work on semester assessments or final projects, even in small teams of 2-4 people. That's the perfect occasion to get started with VCS. Already in this phase, we had great input from more experienced VCS users, like Sean, Avinash and myself. git - a modern approach to VCS - Nayar What a tour! Nayar gave us the full round of git from start to finish, even touching some more advanced techniques. First, he started to explain the importance of version control systems as an essential tool for software developers, even when working alone on a project, and the ability to have a kind of "time machine" that allows you to inspect and revert to a previous version of source code at any time. Then he showed how easy it is to install git on an Ubuntu based system but also mentioned that git is literally available for any operating system, like Windows, Mac OS X and of course other Linux distributions. Next, he showed us how to set the initial configuration values of user name and email address, which simplifies the daily usage of the git client while working with your repositories. Then he initialised and added a new repository for some local development of blogging software. All commands were done using the command line interface (CLI) so that they can be repeated on any system as reference. The syntax and the procedure are always the same, and Nayar clearly mentioned this to the attendees. Now, having a git repository in place, it was about time to work on some "important" changes to the blogging software - just for the sake of demonstrating the ease of use and power of git. One interesting question came very early: "How many commands do we have to learn? It looks quite difficult at the moment" - Well, rest assured that during daily development cycles you will need fewer than 10 git commands on a regular basis: git add, commit, push, pull, checkout, and merge - and Nayar demo'd all of them. Much to the delight of everyone he also showed gitk, which is the git repository browser. It's a UI tool to display changes in a repository or a selected set of commits. This includes visualizing the commit graph, showing information related to each commit, and the files in the trees of each revision. Using gitk to display and browse information of a local git repository And last but not least, we took advantage of the internet connectivity and reached out to various online portals offering git hosting for free. Nayar showed us how to push the local repository into a remote system on github. He showed the web-based git browser and history handling, and then also explained and demo'd how to connect to existing online repositories in order to get access to either your own source code or other people's open source projects.
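    For readers who want to try the commands mentioned above themselves, the daily cycle Nayar demonstrated boils down to something like the following; the repository name, file, remote URL and messages are illustrative only, not the ones used at the meetup:

    git init blog && cd blog                                  # create a new local repository
    git config user.name "Jane Doe"                           # one-time identity setup
    git config user.email "jane@example.com"
    echo "<?php // placeholder ?>" > index.php                # create a file to track
    git add index.php                                         # stage the change
    git commit -m "First draft of the blog engine"            # record it locally
    git remote add origin git@github.com:example/blog.git     # hypothetical remote on github
    git push -u origin master                                 # publish the history
    git pull                                                  # fetch and merge other people's work
    git checkout -b comments                                  # branch off for a new feature
    git checkout master && git merge comments                 # merge it back when done
    gitk                                                      # browse the repository graphically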
    Next to github, we also spoke about bitbucket and gitlab as potential online platforms for your projects. Have a look at the conditions and details of their free service packages and what you can get additionally as a paying customer. Usually, you already get a lot of services for up to five users for free, but there might be other important aspects that have an impact on your decision. Anyways, moving git-based repositories between systems is a piece of cake, and changing online platforms is possible at any stage of your development. Visual Studio Online (VSO) - Jochen Well, Nayar literally covered all elements of working with git during his session, including the use of external online platforms. So, what would be the advantage of talking about Visual Studio Online (VSO)? First of all, VSO is "just another" online platform for hosting and managing git repositories on remote systems, equivalent to github, bitbucket, or any other web site. At the moment (of writing), Microsoft also provides a free package of up to five users / developers on a git repository, but there is more in that package. Of course, it is related to software development on Windows systems and it is tied closely to the use of Visual Studio, but from experience you are absolutely not restricted to that. Connecting a Linux or Mac OS X machine with a git client or an integrated development environment (IDE) like Eclipse or Xcode works as smoothly as expected. So, why should one opt in for VSO? Well, one of the main aspects that I would like to mention here is that VSO integrates Microsoft's Application Lifecycle Management (ALM) approach into the platform. Meaning that you get agile project management with Backlogs, Sprints, Burn-down charts as well as the ability to track tasks, bug reports and work items next to collaborative team chats. It's the whole package of agile development you'll get. And, something I mentioned briefly at the beginning of our meeting, VSO gives you the possibility of an automated continuous integration (CI) process which builds and can run tests on your source code after each commit. Having a proper CI strategy is also part of the Clean Code Developer practices - on Level Green actually - and not only simplifies your life as a software developer but also reduces the sources of potential errors. Seamless integration and automated deployment between Microsoft Azure Web Sites and git repository But my favourite feature is the seamless continuous deployment to Microsoft Azure. Especially while working on web projects, it's absolutely astonishing that as soon as you commit your changes it takes just a couple of seconds until your modifications are deployed and available on your Azure-hosted web sites. Upcoming Events and networking Due to the adjusted times, everybody was kind of hungry and we didn't follow up on networking or upcoming events - very unfortunate in my opinion, and this will have an impact on future planning of our meetups. I would rather see more conversations during and at the end of our meetings than everyone just packing their laptops, bags and accessories and rushing off to grab some food. I was hoping to get some information regarding this year's Code Challenge - supposedly to be organised during July? Maybe someone could leave a comment on that - but I couldn't get any updates. Well, I'll keep digging...
In case that you would like to get more into git and how to use it effectively, please check out Knowledge 7's upcoming course on "Effective git". Thanks Avinash for your vital input into today's conversation and I'm looking forward to get a grip on your book title very soon. My resume of the day Do not work in IT without any kind of version control system! Seriously, without a VCS in place you're doing it wrong. It's like driving a car without seat belts attached or riding your bike without safety helmet. You don't do that! End of discussion. ;-) Nowadays, having access to free (as in cost) tools to install on your machine and numerous online platforms to host your source code for free for up to five users it's a no-brainer to get yourself familiar with VCS. Today's sessions gave a good overview on how to start using git and how to connect to various remote services like github or VSO.

    Read the article

  • Something to add to your library...

    - by werner.de.gruyter
    There is a new book in town: The Grid Control Handbook. It features an in-depth discussion of what Grid Control is and what Grid Control can do for your IT environment. It starts right at the beginning and guides you through all the steps of a typical deployment: from the planning phase, to installation, to strengthening the environment and finally (most importantly) the maintenance and daily use of the product. And there are quite a few tips, tricks, workshops and best practices along the way to help you with some very practical day-to-day challenges. For all those using Grid Control, it is definitely worth checking out!

    Read the article

  • How to develop "Client script library" for ASP.net controls and how do these work?

    - by Niranjan Kala
    I have been working on the .NET platform for 2 years, and for the last 6 months I have been working with DevExpress controls. All these controls have client-side events under a ClientScript namespace of the particular control, which exposes a ClientInstanceName plus methods and properties accessible on the client side. For example, Button1 is a ClientInstanceName and Button1.Text is a property, with methods like these: Button1.SetValue(); Button1.GetValue(); In ASP.NET controls, buttons have the ClientClick event that fires before the server-side Click event. I have inspected and googled how to extend client-side functionality in ASP.NET controls, for example to create a ClientInstanceName property for controls or a CheckedChanged event for the CheckBox / RadioButton control. I have tried using these MSDN articles: Injecting Client-Side Script from an ASP.NET Server Control, Working with Client-Side Script. I got a lot of information and ideas from these articles on how to implement/extend these, and all of it works on the client side. protected override void AddAttributesToRender(HtmlTextWriter writer) { base.AddAttributesToRender(writer); string script = @"return confirm(""%%POPUP_MESSAGE%%"");"; script = script.Replace("%%POPUP_MESSAGE%%", this.PopupMessage.Replace("\"", "\\\"")); writer.AddAttribute(HtmlTextWriterAttribute.Onclick, script); } Here it is just setting an attribute on the button - all the interaction happens on the client side, with no control from the server. Here is what I want to know: how can I implement such functionality to create methods, properties etc. on the client side? For example, I am creating a PopControl as in the above code snippet, with the same behaviour as the Ajax ModalPopupExtender, which has OK-button-related properties. Ajax controls can be directed to perform work from server-side code, e.g. Popup1.show(); How can I implement such client-enabled controls myself? I am learning how to create Ajax controls, but I do not want to use ScriptManager or depend on another control - just some extension to standard controls. I am looking for ideas and implementation approaches for such functionality.
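    One common way to get a DevExpress-style client object for a custom server control is to emit a small JavaScript object named after a ClientInstanceName property during PreRender. This is a rough sketch of the technique described in the MSDN articles above, with made-up member names, not a definitive implementation:

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public class PopControl : Button
    {
        // Name of the JavaScript object that will represent this control on the client
        public string ClientInstanceName { get; set; }

        public string PopupMessage { get; set; }

        protected override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);

            if (!string.IsNullOrEmpty(ClientInstanceName))
            {
                // Emit a client-side wrapper with methods/properties, e.g. Popup1.show()
                string script =
                    "var " + ClientInstanceName + " = {" +
                    "  element: document.getElementById('" + ClientID + "')," +
                    "  show: function() { alert('" + PopupMessage + "'); }," +
                    "  getValue: function() { return this.element.value; }" +
                    "};";

                Page.ClientScript.RegisterStartupScript(
                    this.GetType(), ClientInstanceName + "_client", script, true);
            }
        }
    }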

    Read the article

  • What are the default groups assigned to the first user in Ubuntu Server?

    - by Wayne Koorts
    I just made a silly mistake on my Ubuntu Server box: I added myself to a group using usermod -G, after which I discovered the -a option... The result is that I am now out of the admin group and have lost my sudo rights. I can sort that out, but I want to know what other groups I may have been removed from. My user was the first one, so what I'm looking for is a list of the groups that the first user gets added to at installation time.
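    As a hedged illustration (the exact list varies by Ubuntu release and installation options), the first user is typically a member of groups such as adm, cdrom, sudo (or admin on older releases), dip, plugdev, lpadmin and sambashare. From a root shell you could inspect and restore memberships roughly like this, with "myuser" standing in for the real account name:

    groups myuser                          # groups the user currently belongs to
    getent group | grep myuser             # same information, straight from the group database
    # Re-add the typical memberships (adjust the list and user name to your system):
    usermod -a -G adm,cdrom,sudo,dip,plugdev,lpadmin,sambashare myuser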

    Read the article

  • multiple ssh aliases is selecting wrong user when forwarding

    - by Chris Beck
    I'm following the dual identity procedure for bitbucket: I have 2 bitbucket accounts ccmcbeck and chrisbeck. The former is personal, the latter is work. On my local Mac, I have this in my ~/.ssh/config Host *.work.com User chris ForwardAgent yes IdentityFile ~/.ssh/work_dsa Host bitbucket-personal HostName bitbucket.org User ccmcbeck ForwardAgent no IdentityFile ~/.ssh/bitbucket_ccmcbeck_rsa Host bitbucket-work HostName bitbucket.org User chrisbeck ForwardAgent no IdentityFile ~/.ssh/bitbucket_chrisbeck_rsa On my local Mac I ssh -T all is good, I get: $ ssh -T git@bitbucket-personal logged in as ccmcbeck. $ ssh -T git@bitbucket-work logged in as chrisbeck. On my local Mac, the ssh version is OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011 When I ssh foo.work.com to my Linux box, I get: $ ssh-add -l 1024 ... /Users/chris/.ssh/work_dsa (DSA) 2048 ... /Users/chris/.ssh/bitbucket_ccmcbeck_rsa (RSA) 2048 ... /Users/chris/.ssh/bitbucket_chrisbeck_rsa (RSA) On foo.work.com, I also have this in my ~/.ssh/config Host bitbucket-personal HostName bitbucket.org User ccmcbeck ForwardAgent no IdentityFile ~/.ssh/bitbucket_ccmcbeck_rsa Host bitbucket-work HostName bitbucket.org User chrisbeck ForwardAgent no IdentityFile ~/.ssh/bitbucket_chrisbeck_rsa However, on foo.work.com when I ssh -T, it references the wrong User for git@bitbucket-work $ ssh -T git@bitbucket-personal logged in as ccmcbeck. $ ssh -T git@bitbucket-work logged in as ccmcbeck. On foo.work.com, the ssh version is OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008 Why is my configuration causing foo.work.com to reference the wrong User?
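    One likely explanation (an assumption on my part, not something stated in the question): the old OpenSSH 4.3 client on the work box offers every key the forwarded agent holds, and bitbucket logs you in as whichever account matches the first key it accepts. Telling ssh to offer only the identity configured for that host alias usually fixes it; a sketch of the config on foo.work.com, assuming the matching .pub files have been copied there so ssh can pick the right key out of the forwarded agent:

    # ~/.ssh/config on foo.work.com
    Host bitbucket-work
        HostName bitbucket.org
        User chrisbeck
        IdentitiesOnly yes                              # only offer the identity named below
        IdentityFile ~/.ssh/bitbucket_chrisbeck_rsa.pub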

    Read the article

  • Java Cloud Service Integration using Web Service Data Control

    - by Jani Rautiainen
    Java Cloud Service (JCS) provides a platform to develop and deploy business applications in the cloud. In Fusion Applications Cloud deployments customers do not have the option to deploy custom applications developed with JDeveloper to ensure the integrity and supportability of the hosted application service. Instead the custom applications can be deployed to the JCS and integrated to the Fusion Application Cloud instance. This series of articles will go through the features of JCS, provide end-to-end examples on how to develop and deploy applications on JCS and how to integrate them with the Fusion Applications instance. In this article a custom application integrating with Fusion Application using Web Service Data Control will be implemented. Pre-requisites Access to Cloud instance In order to deploy the application access to a JCS instance is needed, a free trial JCS instance can be obtained from Oracle Cloud site. To register you will need a credit card even if the credit card will not be charged. To register simply click "Try it" and choose the "Java" option. The confirmation email will contain the connection details. See this video for example of the registration. Once the request is processed you will be assigned 2 service instances; Java and Database. Applications deployed to the JCS must use Oracle Database Cloud Service as their underlying database. So when JCS instance is created a database instance is associated with it using a JDBC data source. The cloud services can be monitored and managed through the web UI. For details refer to Getting Started with Oracle Cloud. JDeveloper JDeveloper contains Cloud specific features related to e.g. connection and deployment. To use these features download the JDeveloper from JDeveloper download site by clicking the “Download JDeveloper 11.1.1.7.1 for ADF deployment on Oracle Cloud” link, this version of JDeveloper will have the JCS integration features that will be used in this article. For versions that do not include the Cloud integration features the Oracle Java Cloud Service SDK or the JCS Java Console can be used for deployment. For details on installing and configuring the JDeveloper refer to the installation guide. For details on SDK refer to Using the Command-Line Interface to Monitor Oracle Java Cloud Service and Using the Command-Line Interface to Manage Oracle Java Cloud Service. Create Application In this example the “JcsWsDemo” application created in the “Java Cloud Service Integration using Web Service Proxy” article is used as the base. Create Web Service Data Control In this example we will use a Web Service Data Control to integrate with Credit Rule Service in Fusion Applications. The data control will be used to query data from Fusion Applications using a web service call and present the data in a table.
To generate the data control choose the “Model” project and navigate to "New -> All Technologies -> Business Tier -> Data Controls -> Web Service Data Control" and enter following: Name: CreditRuleServiceDC URL: https://ic-[POD].oracleoutsourcing.com/icCnSetupCreditRulesPublicService/CreditRuleService?WSDL Service: {{http://xmlns.oracle.com/apps/incentiveCompensation/cn/creditSetup/creditRule/creditRuleService/}CreditRuleService On step 2 select the “findRule” operation: Skip step 3 and on step 4 define the credentials to access the service. Do note that in this example these credentials are only used if testing locally, for JCS deployment credentials need to be manually updated on the EAR file: Click “Finish” and the proxy generation is done. Creating UI In order to use the data control we will need to populate complex objects FindCriteria and FindControl. For simplicity in this example we will create logic in a managed bean that populates the objects. Open “JcsWsDemoBean.java” and add the following logic: Map findCriteria; Map findControl; public void setFindCriteria(Map findCriteria) { this.findCriteria = findCriteria; } public Map getFindCriteria() { findCriteria = new HashMap(); findCriteria.put("fetchSize",10); findCriteria.put("fetchStart",0); return findCriteria; } public void setFindControl(Map findControl) { this.findControl = findControl; } public Map getFindControl() { findControl = new HashMap(); return findControl; } Open “JcsWsDemo.jspx”, navigate to “Data Controls -> CreditRuleServiceDC -> findRule(Object, Object) -> result” and drag and drop the “result” node into the “af:form” element in the page: On the “Edit Table Columns” remove all columns except “RuleId” and “Name”: On the “Edit Action Binding” window displayed enter reference to the java class created above by selecting “#{JcsWsDemoBean.findCriteria}”: Also define the value for the “findControl” by selecting “#{JcsWsDemoBean.findControl}”. Deploy to JCS For WS DC the authentication details need to be updated on the connection details before deploying. Open “connections.xml” by navigating “Application Resources -> Descriptors -> ADF META-INF -> connections.xml”: Change the user name and password entry from: <soap username="transportUserName" password="transportPassword" To match the access details for the target environment. Follow the same steps as documented in previous article ”Java Cloud Service ADF Web Application”. Once deployed the application can be accessed with URL: https://java-[identity domain].java.[data center].oraclecloudapps.com/JcsWsDemo-ViewController-context-root/faces/JcsWsDemo.jspx When accessed the first 10 rules in the system are displayed: Summary In this article we learned how to integrate with Fusion Applications using a Web Service Data Control in JCS. In future articles various other integration techniques will be covered.

    Read the article

  • Finding the groups of a user in WLS with OPSS

    - by user12587121
    How to find the group memberships for a user from a web application running in Weblogic server ?  This is useful for building up the profile of the user for security purposes for example. WLS as a container offers an identity store service which applications can access to query and manage identities known to the container.  This article for example shows how to recover the groups of the current user, but how can we find the same information for an arbitrary user ? It is the Oracle Platform for Securtiy Services (OPSS) that looks after the identity store in WLS and so it is in the OPSS APIs that we can find the way to recover this information. This is explained in the following documents.  Starting from the FMW 11.1.1.5 book list, with the Security Overview document we can see how WLS uses OPSS: Proceeding to the more detailed Application Security document, we find this list of useful references for security in FMW. We can follow on into the User/Role API javadoc. The Application Security document explains how to ensure that the identity store is configured appropriately to allow the OPSS APIs to work.  We must verify that the jps-config.xml file where the application  is deployed has it's identity store configured--look for the following elements in that file: <serviceProvider type="IDENTITY_STORE" name="idstore.ldap.provider" class="oracle.security.jps.internal.idstore.ldap.LdapIdentityStoreProvider">             <description>LDAP-based IdentityStore Provider</description>  </serviceProvider> <serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">             <property name="idstore.config.provider" value="oracle.security.jps.wls.internal.idstore.WlsLdapIdStoreConfigProvider"/>             <property name="CONNECTION_POOL_CLASS" value="oracle.security.idm.providers.stdldap.JNDIPool"/></serviceInstance> <serviceInstanceRef ref="idstore.ldap"/> The document contains a code sample for using the identity store here. Once we have the identity store reference we can recover the user's group memberships using the RoleManager interface:             RoleManager roleManager = idStore.getRoleManager();            SearchResponse grantedRoles = null;            try{                System.out.println("Retrieving granted WLS roles for user " + userPrincipal.getName());                grantedRoles = roleManager.getGrantedRoles(userPrincipal, false);                while( grantedRoles.hasNext()){                      Identity id = grantedRoles.next();                      System.out.println("  disp name=" + id.getDisplayName() +                                  " Name=" + id.getName() +                                  " Principal=" + id.getPrincipal() +                                  "Unique Name=" + id.getUniqueName());                     // Here, we must use WLSGroupImpl() to build the Principal otherwise                     // OES does not recognize it.                      retSubject.getPrincipals().add(new WLSGroupImpl(id.getPrincipal().getName()));                 }            }catch(Exception ex) {                System.out.println("Error getting roles for user " + ex.getMessage());                ex.printStackTrace();            }        }catch(Exception ex) {            System.out.println("OESGateway: Got exception instantiating idstore reference");        } This small JDeveloper project has a simple servlet that executes a request for the user weblogic's roles on executing a get on the default URL.  
The full code to recover a user's goups is in the getSubjectWithRoles() method in the project.

    Read the article

  • How do you apply development practices like version control, testing and continuous integration/deployment to system administration?

    - by arex1337
    Imagine you're going to manage a number of servers with a number of different services that's used by a number of people. Now say you want to reconfigure or replace some software on one of those servers. Obviously you don't want to work on servers that are in production. If this was a code change, as a developer, I would make the change on my local development machine, test it locally and commit the change to a version control system. The changes could then be deployed in a staging environment, tested further and finally deployed in a production environment. It would also be easy for me to roll back, if necessary. Generally, or specifically, how do you achieve this in system administration? (The first thing that comes to mind is to use virtual machines and put virtual machine images in version control, but I'm sure there is a lot of literature and clever solutions I'm not presently aware of.)
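    One concrete, deliberately simplified illustration of the workflow the question describes, assuming plain git plus a staging host; the paths, host name and service below are made up:

    # keep the configuration you manage under version control
    git init --bare /srv/config.git
    git clone /srv/config.git ~/work/config && cd ~/work/config
    mkdir -p nginx && cp /etc/nginx/nginx.conf nginx/
    git add nginx/nginx.conf
    git commit -m "Import current nginx configuration"

    # test the change on a staging machine before touching production
    rsync -av nginx/ staging:/etc/nginx/
    ssh staging 'nginx -t && systemctl reload nginx'

    # roll back by checking out the previous revision and re-deploying
    git checkout HEAD~1 -- nginx/nginx.conf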

    Read the article

  • Run Active Directory Admin Center as another user

    - by dunxd
    I am trying to run Active Directory Admin Center (dsac.exe) on Windows 7 as another user by means of creating a shortcut, rather than having to Shift+Right click and specify the user. On Windows XP I could create a runas shortcut like this (forget for a moment that dsac.exe does not exist in Windows XP): runas /user:DOMAIN\user dsac.exe When I run this on Windows 7, the cmd style windows pops up and asks for the password for DOMAIN\user, but I get the following message: Attempting to start dsac.exe as user "DOMAIN\user" ... RUNAS ERROR: Unable to run - dsac.exe 740: The requested operation requires elevation. How do I get Windows 7 to automatically run dsac.exe as a specified user? I'm happy to fill in a password prompt for the specified user, but would be even happier if there was a solution that cached the password, so I didn't have to enter it more than once a day.

    Read the article

  • Access denied for user 'diduser'@'localhost' to database 'diddata' (1044, 42000)

    - by Arlen Beiler
    I am trying to setup a MySQL server and when I went to create a second user it wouldn't give it permissions for the database. I can connect fine as long as I don't specify a database. Access denied for user 'user'@'localhost' to database 'diddata' The connection details are: { 'host' : 'localhost', 'user' : 'user', 'password' : 'password' , 'database': 'diddata' }; And to create the DB and table I did: CREATE DATABASE IF NOT exists diddata; CREATE USER 'user'@'localhost' IDENTIFIED BY 'password'; GRANT ALL ON user.* TO 'user'@'localhost'; Note that I've changed the username and password in this question. I've already checked the privileges in MySQL workbench and they are there.
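    For what it's worth, the GRANT statement in the question targets user.* rather than the database being connected to; presumably the intent was to grant privileges on diddata, along the lines of the following (my correction, not part of the original question):

    GRANT ALL PRIVILEGES ON diddata.* TO 'user'@'localhost';
    FLUSH PRIVILEGES;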

    Read the article

  • transparently set up Windows 7 as remote workstation

    - by Áxel
    Maybe this is a very basic question, but I can't find the exact terms to Google for it and find a concrete answer to my doubt. Suppose we have several PCs on which individual employees work. One of them has an extremely powerful CPU, and it's very useful to use that computer to perform heavy computations, but going there to set up your task means its user has to stop working for a while. Is it possible to allow a secondary user account to remotely log in, for example via Remote Desktop, and work with a full user environment, while the main user keeps working under his own session? I've used Remote Desktop many times in the past, but it always blocked the current user's session, or even terminated it. Lots of thanks in advance, guys.

    Read the article

  • Losing file permissions after rebooting Windows 7

    - by SMTF
    I have a user directory full of files that are not accessible, permission-wise, to the user whose home directory it is. Said user can't run Explorer; for example, it produces an error complaining that permission is not available for required files. I tried various ways to give said user permission on his home directory, and things are fine until the machine is rebooted; the permissions then reset to the previous state and the problem persists. I followed the solution outlined here, and again things worked until I rebooted the machine. I'm in this mess because I replaced a corrupted user profile as outlined here. The original user and the new replacement are/were both admin accounts. In case it is relevant, I will mention that the Users directory is not on the C volume but on a D volume on the same machine. Any insight is appreciated.

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c: Contributing to emerging Cloud standards

    - by Anand Akela
    Contributed by Tony Di Cenzo, Director for Standards Strategy and Architecture, and Mark Carlson, Principal Cloud Architect, for Oracle's Systems Management and Storage Products Groups. As one would expect of an industry leader, Oracle's participation in industry standards bodies is extensive. We participate in dozens of organizations that produce open standards which apply to our products, and our commitment to the success of these organizations is manifest in several ways - we support them financially through our memberships; our senior engineers are active participants, often serving in leadership positions on boards, technical working groups and committees; and when it makes good business sense we contribute our intellectual property. We believe supporting the development of open standards is fundamental to Oracle meeting customer demands for product choice, seamless interoperability, and lowering the cost of ownership. Nowhere is this truer than in the area of cloud standards, and for the most recent release of our flagship management product, Oracle Enterprise Manager Cloud Control 12c (EM Cloud Control 12c). There is a fundamental rule that standards follow architecture. This was true of distributed computing, it was true of service-oriented architecture (SOA), and it's true of cloud. If you are familiar with Enterprise Manager it is likely to be no surprise that EM Cloud Control 12c is a source of technology that can be considered for adoption within cloud management standards. The reason, quite simply, is that the Oracle integrated stack architecture aligns with the cloud architecture models being adopted by the industry, and EM Cloud Control 12c has been developed to manage this architecture. EM Cloud Control 12c has facilities for managing the various underlying capabilities of the integrated stack in IaaS, PaaS, and SaaS clouds, and enables essential characteristics such as on-demand self-service provisioning, centralized policy-based resource management, integrated chargeback, and capacity planning, and complete visibility of the physical and virtual environment from applications to disk. Our most recent contribution in support of cloud management standards to come out of the EM Cloud Control 12c work was the Oracle Cloud Elemental Resource Model API. Oracle contributed the Elemental Resource Model API to the Distributed Management Task Force (DMTF) in 2011 where it was assigned to DMTF's Cloud Management Working Group (CMWG). The CMWG is considering the Oracle specification and those of several other vendors in their effort to produce a best practices specification for managing IaaS clouds.
DMTF's Cloud Infrastructure Management Interface specification, called CIMI for short, is currently out for public review and expected to be released by DMTF later this year. We are proud to be playing an important role in the development of what is expected to become a major cloud standard. You can find more information on DMTF CIMI at http://dmtf.org/standards/cloud. You can find the work-in-progress release of CIMI at http://dmtf.org/content/cimi-work-progress-specifications-now-available-public-comment . The Oracle Cloud API specification is available on the Oracle Technology Network. You can find more information about the Oracle Cloud Elemental Resource Model API on the Oracle Technical Network (OTN), including a webcast featuring the API engineering manager Jack Yu (see TechCast Live: Inside the Oracle Cloud Resource Model API). If you have not seen this video we recommend you take the time to view it. Simply hover your cursor over the webcast title and control+click to follow the embedded link. If you have a question about the Oracle Cloud API or want to learn more about Oracle's participation in cloud management standards efforts drop us a line. We'd love to hear from you. The Enterprise Manager Standards Blogs are written by Tony Di Cenzo, Director for Standards Strategy and Architecture, and Mark Carlson, Principal Cloud Architect, for Oracle's Systems Management and Storage Products Groups. They can be reached at Tony.DiCenzo at Oracle.com and Mark.Carlson at Oracle.com respectively. Stay Connected: Twitter |  Face book |  You Tube |  Linked in |  Newsletter

    Read the article

  • using Unity Android In a sub view and add actionbar and style

    - by aeroxr1
    I exported a simple animation from Unity3D (version 4.5) into an Android project. With Eclipse I modified the manifest and added another activity. In this activity I put a button that starts the animation, and this is the result. The action bar appears in the main activity but it doesn't in the Unity activity :( How can I add the action bar and the style of the first activity to the Unity animation activity? This is the Unity activity's code: package com.rabidgremlin.tut.redcube; import android.app.NativeActivity; import android.content.res.Configuration; import android.graphics.PixelFormat; import android.os.Bundle; import android.view.KeyEvent; import android.view.MotionEvent; import android.view.View; import android.view.ViewGroup; import android.view.Window; import android.view.WindowManager; import com.unity3d.player.UnityPlayer; public class UnityPlayerNativeActivity extends NativeActivity { protected UnityPlayer mUnityPlayer; // don't change the name of this variable; referenced from native code // Setup activity layout @Override protected void onCreate (Bundle savedInstanceState) { //requestWindowFeature(Window.FEATURE_NO_TITLE); super.onCreate(savedInstanceState); getWindow().takeSurface(null); //setTheme(android.R.style.Theme_NoTitleBar_Fullscreen); getWindow().setFormat(PixelFormat.RGB_565); mUnityPlayer = new UnityPlayer(this); /*if (mUnityPlayer.getSettings ().getBoolean ("hide_status_bar", true)) getWindow ().setFlags (WindowManager.LayoutParams.FLAG_FULLSCREEN, WindowManager.LayoutParams.FLAG_FULLSCREEN); */ setContentView(mUnityPlayer); mUnityPlayer.requestFocus(); } // Quit Unity @Override protected void onDestroy () { mUnityPlayer.quit(); super.onDestroy(); } // Pause Unity @Override protected void onPause() { super.onPause(); mUnityPlayer.pause(); } // let's remove this onResume() and try modifying onResume() // Resume Unity @Override protected void onResume() { super.onResume(); mUnityPlayer.resume(); } // let's make some changes here // This ensures the layout will be correct. @Override public void onConfigurationChanged(Configuration newConfig) { super.onConfigurationChanged(newConfig); mUnityPlayer.configurationChanged(newConfig); } // Notify Unity of the focus change. @Override public void onWindowFocusChanged(boolean hasFocus) { super.onWindowFocusChanged(hasFocus); mUnityPlayer.windowFocusChanged(hasFocus); } // For some reason the multiple keyevent type is not supported by the ndk. // Force event injection by overriding dispatchKeyEvent().
@Override public boolean dispatchKeyEvent(KeyEvent event) { if (event.getAction() == KeyEvent.ACTION_MULTIPLE) return mUnityPlayer.injectEvent(event); return super.dispatchKeyEvent(event); } // Pass any events not handled by (unfocused) views straight to UnityPlayer @Override public boolean onKeyUp(int keyCode, KeyEvent event) { return mUnityPlayer.injectEvent(event); } @Override public boolean onKeyDown(int keyCode, KeyEvent event) { return mUnityPlayer.injectEvent(event); } @Override public boolean onTouchEvent(MotionEvent event) { return mUnityPlayer.injectEvent(event); } /*API12*/ public boolean onGenericMotionEvent(MotionEvent event) { return mUnityPlayer.injectEvent(event); } } And this is the AndroidManifest.xml android:versionCode="1" android:versionName="1.0" > <!-- android:theme="@android:style/Theme.NoTitleBar"--> <supports-screens android:anyDensity="true" android:largeScreens="true" android:normalScreens="true" android:smallScreens="true" android:xlargeScreens="true" /> <application android:icon="@drawable/app_icon" android:label="@string/app_name" android:theme="@android:style/Theme.Holo.Light" > <activity android:name="com.rabidgremlin.tut.redcube.UnityPlayerNativeActivity" android:configChanges="mcc|mnc|locale|touchscreen|keyboard|keyboardHidden|navigation|orientation|screenLayout|uiMode|screenSize|smallestScreenSize|fontScale" android:label="@string/app_name" android:screenOrientation="portrait" > <!--android:launchMode="singleTask"--> <meta-data android:name="unityplayer.UnityActivity" android:value="true" /> <meta-data android:name="unityplayer.ForwardNativeEventsToDalvik" android:value="false" /> </activity> <activity android:name="com.rabidgremlin.tut.redcube.MainActivity" android:label="@string/title_activity_main" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> <uses-sdk android:minSdkVersion="17" android:targetSdkVersion="19" /> <uses-feature android:glEsVersion="0x00020000" /> </manifest>
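    The post carries no answer in this snippet. One direction that is often suggested for this kind of problem, shown below only as an untested sketch, is to host the UnityPlayer in a plain Activity instead of a NativeActivity, so that the action bar from the Theme.Holo.Light theme declared in the manifest is kept. The class name UnityPlayerBarActivity and the title string are assumptions for illustration, not part of the original project; the manifest entry for the Unity activity would then have to point at this class.

    package com.rabidgremlin.tut.redcube;

    import android.app.Activity;
    import android.content.res.Configuration;
    import android.os.Bundle;

    import com.unity3d.player.UnityPlayer;

    // Untested sketch: a regular Activity keeps its window (no takeSurface(null)),
    // so the Holo action bar declared in the manifest theme is drawn above the Unity view.
    public class UnityPlayerBarActivity extends Activity {

        protected UnityPlayer mUnityPlayer;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            mUnityPlayer = new UnityPlayer(this);
            setContentView(mUnityPlayer);            // UnityPlayer is a view group in Unity 4.x
            mUnityPlayer.requestFocus();

            if (getActionBar() != null) {
                getActionBar().setTitle("RedCube animation");   // title text is an assumption
            }
        }

        // Same Unity lifecycle forwarding as in the original class; key and touch
        // forwarding (onKeyDown/onKeyUp/onTouchEvent) would be carried over the same way.
        @Override protected void onResume()  { super.onResume();  mUnityPlayer.resume(); }
        @Override protected void onPause()   { mUnityPlayer.pause(); super.onPause(); }
        @Override protected void onDestroy() { mUnityPlayer.quit(); super.onDestroy(); }

        @Override
        public void onConfigurationChanged(Configuration newConfig) {
            super.onConfigurationChanged(newConfig);
            mUnityPlayer.configurationChanged(newConfig);
        }

        @Override
        public void onWindowFocusChanged(boolean hasFocus) {
            super.onWindowFocusChanged(hasFocus);
            mUnityPlayer.windowFocusChanged(hasFocus);
        }
    }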

    Read the article

  • Rendering ASP.NET MVC Views to String

    - by Rick Strahl
    It's not uncommon in my applications that I require longish text output that does not have to be rendered into the HTTP output stream. The most common scenario I have for 'template driven' non-Web text is for emails of all sorts. Logon confirmations and verifications, email confirmations for things like orders, status updates or scheduler notifications - all of which require merged text output both within and sometimes outside of Web applications. On other occasions I also need to capture the output from certain views for logging purposes. Rather than creating text output in code, it's much nicer to use the rendering mechanism that ASP.NET MVC already provides by way of it's ViewEngines - using Razor or WebForms views - to render output to a string. This is nice because it uses the same familiar rendering mechanism that I already use for my HTTP output and it also solves the problem of where to store the templates for rendering this content in nothing more than perhaps a separate view folder. The good news is that ASP.NET MVC's rendering engine is much more modular than the full ASP.NET runtime engine which was a real pain in the butt to coerce into rendering output to string. With MVC the rendering engine has been separated out from core ASP.NET runtime, so it's actually a lot easier to get View output into a string. Getting View Output from within an MVC Application If you need to generate string output from an MVC and pass some model data to it, the process to capture this output is fairly straight forward and involves only a handful of lines of code. The catch is that this particular approach requires that you have an active ControllerContext that can be passed to the view. This means that the following approach is limited to access from within Controller methods. Here's a class that wraps the process and provides both instance and static methods to handle the rendering:/// <summary> /// Class that renders MVC views to a string using the /// standard MVC View Engine to render the view. /// /// Note: This class can only be used within MVC /// applications that have an active ControllerContext. /// </summary> public class ViewRenderer { /// <summary> /// Required Controller Context /// </summary> protected ControllerContext Context { get; set; } public ViewRenderer(ControllerContext controllerContext) { Context = controllerContext; } /// <summary> /// Renders a full MVC view to a string. Will render with the full MVC /// View engine including running _ViewStart and merging into _Layout /// </summary> /// <param name="viewPath"> /// The path to the view to render. Either in same controller, shared by /// name or as fully qualified ~/ path including extension /// </param> /// <param name="model">The model to render the view with</param> /// <returns>String of the rendered view or null on error</returns> public string RenderView(string viewPath, object model) { return RenderViewToStringInternal(viewPath, model, false); } /// <summary> /// Renders a partial MVC view to string. Use this method to render /// a partial view that doesn't merge with _Layout and doesn't fire /// _ViewStart. /// </summary> /// <param name="viewPath"> /// The path to the view to render. 
Either in same controller, shared by /// name or as fully qualified ~/ path including extension /// </param> /// <param name="model">The model to pass to the viewRenderer</param> /// <returns>String of the rendered view or null on error</returns> public string RenderPartialView(string viewPath, object model) { return RenderViewToStringInternal(viewPath, model, true); } public static string RenderView(string viewPath, object model, ControllerContext controllerContext) { ViewRenderer renderer = new ViewRenderer(controllerContext); return renderer.RenderView(viewPath, model); } public static string RenderPartialView(string viewPath, object model, ControllerContext controllerContext) { ViewRenderer renderer = new ViewRenderer(controllerContext); return renderer.RenderPartialView(viewPath, model); } protected string RenderViewToStringInternal(string viewPath, object model, bool partial = false) { // first find the ViewEngine for this view ViewEngineResult viewEngineResult = null; if (partial) viewEngineResult = ViewEngines.Engines.FindPartialView(Context, viewPath); else viewEngineResult = ViewEngines.Engines.FindView(Context, viewPath, null); if (viewEngineResult == null) throw new FileNotFoundException(Properties.Resources.ViewCouldNotBeFound); // get the view and attach the model to view data var view = viewEngineResult.View; Context.Controller.ViewData.Model = model; string result = null; using (var sw = new StringWriter()) { var ctx = new ViewContext(Context, view, Context.Controller.ViewData, Context.Controller.TempData, sw); view.Render(ctx, sw); result = sw.ToString(); } return result; } } The key is the RenderViewToStringInternal method. The method first tries to find the view to render based on its path which can either be in the current controller's view path or the shared view path using its simple name (PasswordRecovery) or alternately by its full virtual path (~/Views/Templates/PasswordRecovery.cshtml). This code should work both for Razor and WebForms views although I've only tried it with Razor Views. Note that WebForms Views might actually be better for plain text as Razor adds all sorts of white space into its output when there are code blocks in the template. The Web Forms engine provides more accurate rendering for raw text scenarios. Once a view engine is found the view to render can be retrieved. Views in MVC render based on data that comes off the controller like the ViewData which contains the model along with the actual ViewData and ViewBag. From the View and some of the Context data a ViewContext is created which is then used to render the view with. The View picks up the Model and other data from the ViewContext internally and processes the View the same it would be processed if it were to send its output into the HTTP output stream. The difference is that we can override the ViewContext's output stream which we provide and capture into a StringWriter(). After rendering completes the result holds the output string. If an error occurs the error behavior is similar what you see with regular MVC errors - you get a full yellow screen of death including the view error information with the line of error highlighted. It's your responsibility to handle the error - or let it bubble up to your regular Controller Error filter if you have one. To use the simple class you only need a single line of code if you call the static methods. 
Here's an example of some Controller code that is used to send a user notification to a customer via email in one of my applications:[HttpPost] public ActionResult ContactSeller(ContactSellerViewModel model) { InitializeViewModel(model); var entryBus = new busEntry(); var entry = entryBus.LoadByDisplayId(model.EntryId); if ( string.IsNullOrEmpty(model.Email) ) entryBus.ValidationErrors.Add("Email address can't be empty.","Email"); if ( string.IsNullOrEmpty(model.Message)) entryBus.ValidationErrors.Add("Message can't be empty.","Message"); model.EntryId = entry.DisplayId; model.EntryTitle = entry.Title; if (entryBus.ValidationErrors.Count > 0) { ErrorDisplay.AddMessages(entryBus.ValidationErrors); ErrorDisplay.ShowError("Please correct the following:"); } else { string message = ViewRenderer.RenderView("~/views/template/ContactSellerEmail.cshtml",model, ControllerContext); string title = entry.Title + " (" + entry.DisplayId + ") - " + App.Configuration.ApplicationName; AppUtils.SendEmail(title, message, model.Email, entry.User.Email, false, false)) } return View(model); } Simple! The view in this case is just a plain MVC view and in this case it's a very simple plain text email message (edited for brevity here) that is created and sent off:@model ContactSellerViewModel @{ Layout = null; }re: @Model.EntryTitle @Model.ListingUrl @Model.Message ** SECURITY ADVISORY - AVOID SCAMS ** Avoid: wiring money, cross-border deals, work-at-home ** Beware: cashier checks, money orders, escrow, shipping ** More Info: @(App.Configuration.ApplicationBaseUrl)scams.html Obviously this is a very simple view (I edited out more from this page to keep it brief) -  but other template views are much more complex HTML documents or long messages that are occasionally updated and they are a perfect fit for Razor rendering. It even works with nested partial views and _layout pages. Partial Rendering Notice that I'm rendering a full View here. In the view I explicitly set the Layout=null to avoid pulling in _layout.cshtml for this view. This can also be controlled externally by calling the RenderPartial method instead: string message = ViewRenderer.RenderPartialView("~/views/template/ContactSellerEmail.cshtml",model, ControllerContext); with this line of code no layout page (or _viewstart) will be loaded, so the output generated is just what's in the view. I find myself using Partials most of the time when rendering templates, since the target of templates usually tend to be emails or other HTML fragment like output, so the RenderPartialView() method is definitely useful to me. Rendering without a ControllerContext The preceding class is great when you're need template rendering from within MVC controller actions or anywhere where you have access to the request Controller. But if you don't have a controller context handy - maybe inside a utility function that is static, a non-Web application, or an operation that runs asynchronously in ASP.NET - which makes using the above code impossible. I haven't found a way to manually create a Controller context to provide the ViewContext() what it needs from outside of the MVC infrastructure. However, there are ways to accomplish this,  but they are a bit more complex. It's possible to host the RazorEngine on your own, which side steps all of the MVC framework and HTTP and just deals with the raw rendering engine. I wrote about this process in Hosting the Razor Engine in Non-Web Applications a long while back. 
It's quite a process to create a custom Razor engine and runtime, but it allows for all sorts of flexibility. There's also a RazorEngine CodePlex project that does something similar. I've been meaning to check out the latter but haven't gotten around to it since I have my own code to do this. The trick to hosting the RazorEngine to have it behave properly inside of an ASP.NET application and properly cache content so templates aren't constantly rebuild and reparsed. Anyway, in the same app as above I have one scenario where no ControllerContext is available: I have a background scheduler running inside of the app that fires on timed intervals. This process could be external but because it's lightweight we decided to fire it right inside of the ASP.NET app on a separate thread. In my app the code that renders these templates does something like this:var model = new SearchNotificationViewModel() { Entries = entries, Notification = notification, User = user }; // TODO: Need logging for errors sending string razorError = null; var result = AppUtils.RenderRazorTemplate("~/views/template/SearchNotificationTemplate.cshtml", model, razorError); which references a couple of helper functions that set up my RazorFolderHostContainer class:public static string RenderRazorTemplate(string virtualPath, object model,string errorMessage = null) { var razor = AppUtils.CreateRazorHost(); var path = virtualPath.Replace("~/", "").Replace("~", "").Replace("/", "\\"); var merged = razor.RenderTemplateToString(path, model); if (merged == null) errorMessage = razor.ErrorMessage; return merged; } /// <summary> /// Creates a RazorStringHostContainer and starts it /// Call .Stop() when you're done with it. /// /// This is a static instance /// </summary> /// <param name="virtualPath"></param> /// <param name="binBasePath"></param> /// <param name="forceLoad"></param> /// <returns></returns> public static RazorFolderHostContainer CreateRazorHost(string binBasePath = null, bool forceLoad = false) { if (binBasePath == null) { if (HttpContext.Current != null) binBasePath = HttpContext.Current.Server.MapPath("~/"); else binBasePath = AppDomain.CurrentDomain.BaseDirectory; } if (_RazorHost == null || forceLoad) { if (!binBasePath.EndsWith("\\")) binBasePath += "\\"; //var razor = new RazorStringHostContainer(); var razor = new RazorFolderHostContainer(); razor.TemplatePath = binBasePath; binBasePath += "bin\\"; razor.BaseBinaryFolder = binBasePath; razor.UseAppDomain = false; razor.ReferencedAssemblies.Add(binBasePath + "ClassifiedsBusiness.dll"); razor.ReferencedAssemblies.Add(binBasePath + "ClassifiedsWeb.dll"); razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Utilities.dll"); razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Web.dll"); razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Web.Mvc.dll"); razor.ReferencedAssemblies.Add("System.Web.dll"); razor.ReferencedNamespaces.Add("System.Web"); razor.ReferencedNamespaces.Add("ClassifiedsBusiness"); razor.ReferencedNamespaces.Add("ClassifiedsWeb"); razor.ReferencedNamespaces.Add("Westwind.Web"); razor.ReferencedNamespaces.Add("Westwind.Utilities"); _RazorHost = razor; _RazorHost.Start(); //_RazorHost.Engine.Configuration.CompileToMemory = false; } return _RazorHost; } The RazorFolderHostContainer essentially is a full runtime that mimics a folder structure like a typical Web app does including caching semantics and compiling code only if code changes on disk. It maps a folder hierarchy to views using the ~/ path syntax. 
The host is then configured to add assemblies and namespaces. Unfortunately the engine is not exactly like MVC's Razor - the expression expansion and code execution are the same, but some of the support methods like sections, helpers etc. are not all there so templates have to be a bit simpler. There are other folder hosts provided as well to directly execute templates from strings (using RazorStringHostContainer). The following is an example of an HTML email template @inherits RazorHosting.RazorTemplateFolderHost <ClassifiedsWeb.SearchNotificationViewModel> <html> <head> <title>Search Notifications</title> <style> body { margin: 5px;font-family: Verdana, Arial; font-size: 10pt;} h3 { color: SteelBlue; } .entry-item { border-bottom: 1px solid grey; padding: 8px; margin-bottom: 5px; } </style> </head> <body> Hello @Model.User.Name,<br /> <p>Below are your Search Results for the search phrase:</p> <h3>@Model.Notification.SearchPhrase</h3> <small>since @TimeUtils.ShortDateString(Model.Notification.LastSearch)</small> <hr /> You can see that the syntax is a little different. Instead of the familiar @model header the raw Razor  @inherits tag is used to specify the template base class (which you can extend). I took a quick look through the feature set of RazorEngine on CodePlex (now Github I guess) and the template implementation they use is closer to MVC's razor but there are other differences. In the end don't expect exact behavior like MVC templates if you use an external Razor rendering engine. This is not what I would consider an ideal solution, but it works well enough for this project. My biggest concern is the overhead of hosting a second razor engine in a Web app and the fact that here the differences in template rendering between 'real' MVC Razor views and another RazorEngine really are noticeable. You win some, you lose some It's extremely nice to see that if you have a ControllerContext handy (which probably addresses 99% of Web app scenarios) rendering a view to string using the native MVC Razor engine is pretty simple. Kudos on making that happen - as it solves a problem I see in just about every Web application I work on. But it is a bummer that a ControllerContext is required to make this simple code work. It'd be really sweet if there was a way to render views without being so closely coupled to the ASP.NET or MVC infrastructure that requires a ControllerContext. Alternately it'd be nice to have a way for an MVC based application to create a minimal ControllerContext from scratch - maybe somebody's been down that path. I tried for a few hours to come up with a way to make that work but gave up in the soup of nested contexts (MVC/Controller/View/Http). I suspect going down this path would be similar to hosting the ASP.NET runtime requiring a WorkerRequest. Brrr…. The sad part is that it seems to me that a View should really not require much 'context' of any kind to render output to string. Yes there are a few things that clearly are required like paths to the virtual and possibly the disk paths to the root of the app, but beyond that view rendering should not require much. But, no such luck. 
For now custom RazorHosting seems to be the only way to make Razor rendering go outside of the MVC context…

Resources

Full ViewRenderer.cs source code from Westwind.Web.Mvc library
Hosting the Razor Engine for Non-Web Applications
RazorEngine on GitHub

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in ASP.NET, ASP.NET MVC

    Read the article

  • How to set conditional activation to taskflows?

    - by shantala.sankeshwar(at)oracle.com
    This article describes implementing conditional activation for task flows.

    Use Case Description

    Suppose we have a task flow dropped as a region on a page, and this region is enclosed in a popup. By default, when the page is loaded the respective region also gets loaded; hence a region model needs to provide a viewId whenever one is requested. A consequence of this is that the TaskFlowRegionModel always has to initialize its task flow and execute the task flow's default activity in order to determine a viewId, even if the region is not visible on the page. This can lead to unnecessary performance overhead: task flows execute just to generate viewIds for regions that are never visible. To improve performance, we need to set the task flow binding's activation property to 'conditional'. Below is a simple use case that shows exactly how to set conditional activation on task flow bindings.

    Steps:
    1. Create an ADF Fusion web application.
    2. Create business components for the Emp table.
    3. Create a view criteria where deptno=:some_bind_variable.
    4. Generate the EmpViewImpl.java file, write the code below, and expose the method to the client interface.
       public void filterEmpRecords(Number deptNo){
           // Code to filter the deptnos
           ensureVariableManager().setVariableValue("some_bind_variable", deptNo);
           this.applyViewCriteria(this.getViewCriteria("EmpViewCriteria"));
           this.executeQuery();
       }
    5. Create an ADF task flow with page fragments and drop the above method on the task flow.
    6. Also drop the view activity (showEmp.jsff). Define a control flow case from the above method activity to the view activity. Set the method activity as the default activity.
    7. Create the main.jspx page and drop the above task flow as a region on this page.
    8. Surround the region with a dialog and surround the dialog with a popup (id is Popup1).
    9. Drop a commandButton on the above page and insert af:showPopupBehavior inside the commandButton: <af:commandButton text="show popup" id="cb1"><af:showPopupBehavior popupId="::Popup1"/></af:commandButton>
    10. Now if we execute this main page, we will notice that the method action gets called even before the popup is launched. We can avoid this by setting the activation property of the task flow to conditional.
    11. Go to the bindings of the above main page, select the task flow binding, set its activation property to 'conditional' and its active property to the Boolean value #{Somebean.popupVisible}. By default its value should be false (a minimal sketch of this bean follows the steps below).
    12. We need to set the above Boolean value to true only when the popup is launched. This can be achieved by inserting a setPropertyListener inside the popup: <af:setPropertyListener from="true" to="#{Somebean.popupVisible}" type="popupFetch"/>
    13. Now if we run the page, we will notice that the method action is not called; only when we click the 'show popup' button does the method action get called.
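    Steps 11 and 12 reference a managed bean property #{Somebean.popupVisible} that the article never shows. A minimal sketch of such a bean is given below; the class name, package, and registration details are assumptions taken only from the EL expression in the steps, not from the original article. It would need to be registered as a managed bean named "Somebean" (for example in adfc-config.xml) in a scope that survives the popupFetch event, such as viewScope.

    package view;

    // Minimal sketch of the backing bean behind #{Somebean.popupVisible}.
    // The property defaults to false so the task flow binding stays inactive
    // when the page first loads; the af:setPropertyListener on popupFetch
    // flips it to true, which activates the conditional task flow binding.
    public class Somebean {

        private boolean popupVisible = false;

        public boolean isPopupVisible() {
            return popupVisible;
        }

        public void setPopupVisible(boolean popupVisible) {
            this.popupVisible = popupVisible;
        }
    }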

    Read the article

  • What web-based tool allows a non-technical user to manage authorized_keys files on a Linux (Fedora/CentOS/Ubuntu/Debian) server?

    - by Tom H
    (Edit: clarification below) We have a number of groups of developers that change frequently, and a security policy that requires individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding id_dsa.pub to their authorized_keys file. I am using Chef to sync the user accounts across machines; however, our previous method of using Webmin to manage the user passwords is not designed for key-based auth, and hence is not easy to use for non-technical users. The developers are logging in from the WAN using ssh; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud and we have a single server available to host the master set of accounts. Obviously I could deploy LDAP or another centralised authentication system, but that seems a bit overblown when Webmin worked well for the simple case. It is easy to achieve synchronised users, groups and passwords across a bunch of low-security development boxes using Webmin clustered users and groups. However, looking at the currently installed Webmin, it is not as easy to create the authorized keys as it is to create user accounts and passwords (it's possible, but it's not easy: some functionality is in the Usermin module, or it would require some tedious steps). Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to the user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use Chef to do that part if the accounts are created correctly on the "master" server.
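    No particular tool is named in this snippet. For context, the operation such a web front end would have to wrap is small; the sketch below, plain Java and untested, appends a pasted public key line to a user's authorized_keys file, which is the step the poster finds awkward to expose through Webmin. The class name, paths, and the validation rule are illustrative assumptions, and ownership and permission handling are only noted in a comment.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    // Illustrative sketch only: append a pasted RSA/DSA public key line to a
    // user's authorized_keys file, creating ~/.ssh if it does not exist yet.
    public class AuthorizedKeyAppender {

        public static void appendKey(String homeDir, String pastedKey) throws IOException {
            String key = pastedKey.trim();
            // Rough sanity check on the key type; a real tool would validate the base64 blob too.
            if (!(key.startsWith("ssh-rsa ") || key.startsWith("ssh-dss "))) {
                throw new IllegalArgumentException("Not an RSA/DSA public key line");
            }
            Path sshDir = Paths.get(homeDir, ".ssh");
            Files.createDirectories(sshDir);
            Path authorizedKeys = sshDir.resolve("authorized_keys");
            Files.write(authorizedKeys,
                    (key + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            // sshd also requires correct ownership plus 0700 on ~/.ssh and 0600 on
            // authorized_keys; set those with PosixFilePermissions or chown/chmod.
        }

        public static void main(String[] args) throws IOException {
            // Hypothetical example values
            appendKey("/home/alice", "ssh-rsa AAAAB3Nza... alice@laptop");
        }
    }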

    Read the article
