Search Results


  • Prioritizing Product Features

    - by Robert May
    A very common task in Agile environments is prioritization.  Teams that are functioning well will prioritize new features, old features, the backlog, and any other source of stories for the team, and they’ll do it regularly.  Not all teams are good at prioritizing according to the real return on investment that building stories will yield to the company.  This is unfortunate.  Too often, teams end up building features that are less valuable, and everyone seems to know it except perhaps the product owner!  Most features built into software are never even used.  Clearly, there’s not much return on features that go unused.

    So how does a company avoid building features that add little value to the company?  This is a tough question to answer, but usually, this prioritization starts at the top with the executives of the company.  After all, they’re responsible for the overall vision of the company.

    Here’s what I recommend:
    Know your market.
    Know your customers and users.
    Know where you’re going and what you want to achieve.
    Implement the vision.

    Know Your Market
    We often see companies that don’t know their market.  Personally, I’m surprised by this.  These companies don’t know who their competitors are, don’t know what features make their product desirable in the market, and in many cases, get by with saying, “I’ve been doing this for XX years.  I know what the market wants!”  This is almost never true.  In many cases, they equate “marketing” with “advertising” and don’t understand the difference.  Good companies will spend significant amounts of time and money finding out who they’re competing against and what makes their competitors successful in the marketplace.  Good companies understand that marketing involves more than just advertising.  Often, marketing is mostly research and analysis, not sales.  Until you understand your market, you cannot know what features will give you the best return on your investment dollar.  Good companies have a marketing department and can answer the next important step, which is to know your customers and your users.

    Know Your Customers and Users
    First, note that I included both customers and users.  They’re often not the same thing.  Users use the product that you build.  Customers buy the product that you build.  It’s a subtle difference, but too often, I’ve seen companies that focus exclusively on one or the other and are not successful simply because they ignore an important part of the group.  If your company is doing appropriate marketing, you know that these are two different aspects of your product and that both deserve attention to have a product that is successful in your target market.  Your marketing department should be spending a lot of time understanding these personas and then conveying that information to the company.

    I’m always surprised when development teams think that they can build a product that people want to use without understanding the users of that product.  Developers think differently than most people in the world.  They know what the computer is doing.  The computer isn’t magic to them.  So when they assume that they know how to build something, they bring with them quite a bit of baggage.  Never assume that you know your customer unless you’re regularly having interaction with them.  Also, don’t just leave this to Marketing or Product Management.  Take the time to get your developers out with the customers as well.  Developers are very smart people, and often, seeing how someone uses their software inspires them to make a much better product.

    Very often, because the users and customers aren’t known, teams will spend a significant amount of time building apps that are super flexible and configurable so that any possible combination of features can be used.  This demonstrates a clear lack of understanding of the customer.  Most configuration questions can quickly be answered by talking to the customer.  In most cases, if your software requires significant setup and configuration before it’s usable, you probably don’t know your customers and users very well.  Until you know your customers, you cannot know what features will be most valuable to them, and you cannot build those features in a way that your customers can use.

    Know Where You’re Going and What You Want to Achieve
    Many companies suffer from not having a plan.  Executives will tell the team to make them a plan.  The team, not knowing their market, customers, and users, will come up with a plan that doesn’t reflect reality and doesn’t consider ROI.  Management then wonders why the product is doing poorly in the marketplace.  Instead of leaving this up to the teams, as executives, work with Marketing to understand what broad categories of features will sell the most product in the marketplace.  Then, once you’ve determined that, give this vision to the team and let them run with it.  Revise the vision as needed, but avoid changing direction frequently.  Sure, sometimes you need to, but often, executives will change priorities many times a month, leading to nothing more than confusion.  If the team has a vision, they’ll be able to execute that vision far better than they could otherwise.  By knowing what products are most important, you can set budgetary goals and guidelines that will help you achieve the vision that was created.

    Implement the Vision
    Creating the vision is often where the executives stop participating in the plan.  The team is responsible for implementing that vision.  Executives should attend showcases and should remain aware of the progress that the team is making towards meeting the vision, however.

    Once a broad vision has been created, the team should break that vision down into minimal marketable features (MMFs).  These MMFs should be sized using story points so that, using the team’s velocity, an estimated cost can be determined for each feature.  The product management team should then try to quantify the relative value of the MMFs based on customer feedback and interviews.  Once the value and cost of creating a feature are understood, a return on investment can be calculated (a small worked sketch follows at the end of this post).  The features should then be prioritized, with the MMFs that have the highest value and lowest cost rising to the top of the list of features to implement.  Don’t let politics get in the way!  Once the MMFs have been prioritized, they should go through release planning to schedule them for implementation.

    Conclusion
    By having a good grasp on the strategy of the company, your Agile teams can be much more effective.  Each and every story the team is implementing will roll up into features that matter to the company and provide ROI.  The steps outlined in this post should be repeated on a regular basis.  I recommend reviewing them at least once per quarter to make sure that the vision hasn’t shifted and that the teams are still working on what matters most to the company.

    Technorati Tags: Agile,Product Owner,ROI
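    To make the cost/value arithmetic concrete, here is a minimal sketch in C# (with made-up numbers, not from any real backlog) of the ROI ranking described above: each MMF gets a relative value from product management and a cost derived from its story-point size and a per-point cost based on the team’s velocity, and the backlog is ordered by the resulting ratio.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class MinimalMarketableFeature
        {
            public string Name;
            public double RelativeValue; // from customer feedback/interviews (illustrative)
            public double StoryPoints;   // from team sizing (illustrative)
        }

        class RoiRanking
        {
            static void Main()
            {
                // Illustrative only: e.g. (cost of a sprint) / (velocity in points).
                double costPerPoint = 500.0;

                var backlog = new List<MinimalMarketableFeature>
                {
                    new MinimalMarketableFeature { Name = "Feature A", RelativeValue = 8000, StoryPoints = 13 },
                    new MinimalMarketableFeature { Name = "Feature B", RelativeValue = 5000, StoryPoints = 3 },
                    new MinimalMarketableFeature { Name = "Feature C", RelativeValue = 9000, StoryPoints = 40 },
                };

                // Highest value for the lowest cost rises to the top.
                foreach (var mmf in backlog.OrderByDescending(
                    m => m.RelativeValue / (m.StoryPoints * costPerPoint)))
                {
                    Console.WriteLine("{0}: ROI = {1:F2}", mmf.Name,
                        mmf.RelativeValue / (mmf.StoryPoints * costPerPoint));
                }
            }
        }

    With these numbers the small, high-value Feature B outranks the large Feature C even though C has the highest raw value — which is exactly the politics-resistant ordering the post argues for.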

    Read the article

  • How to integrate jQuery with JSF RichFaces tags to print image and textarea content?

    - by eswaramoorthy-nec
    Hi, I have written code to print the textarea content using jQuery.  I load two JavaScript files (jquery-1.3.2.js and jquery.print.js), but once they are loaded the rich:datascroller tag no longer works — there is no reaction from the datascroller at all.  I need to print the textarea content while the datascroller keeps working.  Below are the JSP and the related Java files.  The page has a datatable, a rich:datascroller, and a textarea; the datatable is only there to test the datascroller component.  My goal: print the textarea content while the datascroller component still works correctly.

    printer.jsp

        <%@page contentType="text/html" pageEncoding="UTF-8"%>
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <%@ taglib uri="http://java.sun.com/jsf/html" prefix="h" %>
        <%@ taglib uri="http://java.sun.com/jsf/core" prefix="f" %>
        <%@ taglib uri="http://richfaces.org/a4j" prefix="a4j"%>
        <%@ taglib uri="http://richfaces.org/rich" prefix="rich"%>
        <html>
        <head>
            <title>Print Viewer</title>
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
            <a4j:loadScript src="resource/jquery-1.3.2.js"/>
            <a4j:loadScript src="resource/jquery.print.js"/>
            <!-- The problem: after the two loadScript tags above are included,
                 the datascroller component stops working, but both jQuery
                 files are required in order to print. -->
            <script type="text/javascript">
                function printData() {
                    // Print the div content holding the textarea
                    jQuery(".printable").print();
                    return (false);
                }
            </script>
        </head>
        <body>
        <h:form id="printViewerForm" binding="#{PrintViewer.initForm}">
            <rich:panel id="printViewerRichPanel">
                <h:panelGrid cellpadding="3" columns="2" id="printPanelGridId" cellspacing="3" border="1">
                    <h:panelGrid>
                        <!-- DataScroller for the dataTable -->
                        <rich:datascroller id="dataScrollerTop" align="center" for="printDataTable" page="1" maxPages="20"/>
                        <rich:dataTable id="printDataTable" value="#{PrintViewer.printViewerList}"
                                        cellpadding="3" rows="5" rowKeyVar="rowIndex" cellspacing="3"
                                        var="printViewerResultListTo">
                            <f:facet name="header">
                                <rich:columnGroup>
                                    <rich:column>
                                        <h:outputText value="PrintTable"/>
                                    </rich:column>
                                </rich:columnGroup>
                            </f:facet>
                            <rich:column>
                                <h:outputText value="#{printViewerResultListTo.printName}"/>
                            </rich:column>
                        </rich:dataTable>
                    </h:panelGrid>
                    <!-- Print content region -->
                    <a4j:region id="printContentViewRegion">
                        <a4j:commandButton id="printButton" value="PrintContent" onclick="printData()"/>
                        <div id="printContentDiv" class="printable">
                            <h:inputTextarea id="printContentTextArea" style="width:300px;height:300px"
                                             value="This is Sample Jquery For Test working Text Area"/>
                        </div>
                    </a4j:region>
                </h:panelGrid>
            </rich:panel>
        </h:form>
        </body>
        </html>

    PrintViewer.java

        import java.util.ArrayList;
        import java.util.List;
        import javax.faces.component.html.HtmlForm;

        public class PrintViewer {

            private HtmlForm initForm;
            private List printViewerList = new ArrayList();

            public HtmlForm getInitForm() {
                printViewerList = getPrintList();
                return initForm;
            }

            private List getPrintList() {
                if (!printViewerList.isEmpty()) {
                    printViewerList.clear();
                }
                printViewerList.add(new PrintViewerResultListTo("print 1"));
                printViewerList.add(new PrintViewerResultListTo("print 2"));
                printViewerList.add(new PrintViewerResultListTo("print 3"));
                printViewerList.add(new PrintViewerResultListTo("print 4"));
                printViewerList.add(new PrintViewerResultListTo("print 5"));
                printViewerList.add(new PrintViewerResultListTo("print 6"));
                printViewerList.add(new PrintViewerResultListTo("print 7"));
                printViewerList.add(new PrintViewerResultListTo("print 8"));
                printViewerList.add(new PrintViewerResultListTo("print 9"));
                printViewerList.add(new PrintViewerResultListTo("print 10"));
                printViewerList.add(new PrintViewerResultListTo("print 11"));
                printViewerList.add(new PrintViewerResultListTo("print 12"));
                printViewerList.add(new PrintViewerResultListTo("print 13"));
                printViewerList.add(new PrintViewerResultListTo("print 14"));
                printViewerList.add(new PrintViewerResultListTo("print 15"));
                return printViewerList;
            }

            public void setInitForm(HtmlForm initForm) {
                this.initForm = initForm;
            }

            public List getPrintViewerList() {
                return printViewerList;
            }

            public void setPrintViewerList(List printViewerList) {
                this.printViewerList = printViewerList;
            }
        }

    PrintViewerResultListTo.java

        public class PrintViewerResultListTo {

            private String printName;

            PrintViewerResultListTo(String printName) {
                this.printName = printName;
            }

            public String getPrintName() {
                return printName;
            }

            public void setPrintName(String printName) {
                this.printName = printName;
            }
        }

    I hope you can help me with this.  Thanks in advance.

    Read the article

  • VS 2010 Debugger Improvements (BreakPoints, DataTips, Import/Export)

    - by ScottGu
    This is the twenty-first in a series of blog posts I’m doing on the VS 2010 and .NET 4 release.  Today’s blog post covers a few of the nice usability improvements coming with the VS 2010 debugger.

    The VS 2010 debugger has a ton of great new capabilities.  Features like Intellitrace (aka historical debugging), the new parallel/multithreaded debugging capabilities, and dump debugging support typically get a ton of (well deserved) buzz and attention when people talk about the debugging improvements with this release.  I’ll be doing blog posts in the future that demonstrate how to take advantage of them as well.  With today’s post, though, I thought I’d start off by covering a few small, but nice, debugger usability improvements that were also included with the VS 2010 release, and which I think you’ll find useful.

    Breakpoint Labels
    VS 2010 includes new support for better managing debugger breakpoints.  One particularly useful feature is called “Breakpoint Labels” – it enables much better grouping and filtering of breakpoints within a project or across a solution.  With previous releases of Visual Studio you had to manage each debugger breakpoint as a separate item.  Managing each breakpoint separately can be a pain with large projects and for cases when you want to maintain “logical groups” of breakpoints that you turn on/off depending on what you are debugging.  Using the new VS 2010 “breakpoint labeling” feature you can now name these “groups” of breakpoints and manage them as a unit.

    Grouping Multiple Breakpoints Together using a Label
    Below is a screen-shot of the breakpoints window within Visual Studio 2010.  This lists all of the breakpoints defined within my solution (which in this case is the ASP.NET MVC 2 code base):
    The first and last breakpoint in the list above break into the debugger when a Controller instance is created or released by the ASP.NET MVC Framework.  Using VS 2010, I can now select these two breakpoints, right-click, and then select the new “Edit labels…” menu command to give them a common label/name (making them easier to find and manage):
    Below is the dialog that appears when I select the “Edit labels” command.  We can use it to create a new string label for our breakpoints or select an existing one we have already defined.  In this case we’ll create a new label called “Lifetime Management” to describe what these two breakpoints cover:
    When we press the OK button our two selected breakpoints will be grouped under the newly created “Lifetime Management” label:

    Filtering/Sorting Breakpoints by Label
    We can use the “Search” combobox to quickly filter/sort breakpoints by label.  Below we are only showing those breakpoints with the “Lifetime Management” label:

    Toggling Breakpoints On/Off by Label
    We can also toggle sets of breakpoints on/off by label group.  We can simply filter by the label group, do a Ctrl-A to select all the breakpoints, and then enable/disable all of them with a single click:

    Importing/Exporting Breakpoints
    VS 2010 now supports importing/exporting breakpoints to XML files – which you can then pass off to another developer, attach to a bug report, or simply re-load later.  To export only a subset of breakpoints, you can filter by a particular label and then click the “Export breakpoint” button in the Breakpoints window:
    Above I’ve filtered my breakpoint list to only export two particular breakpoints (specific to a bug that I’m chasing down).  
    I can export these breakpoints to an XML file and then attach it to a bug report or email – which will enable another developer to easily set up the debugger in the correct state to investigate it on a separate machine.

    Pinned DataTips
    Visual Studio 2010 also includes some nice new “DataTip pinning” features that enable you to better see and track variable and expression values when in the debugger.  Simply hover over a variable or expression within the debugger to expose its DataTip (which is a tooltip that displays its value) – and then click the new “pin” button on it to make the DataTip always visible:
    You can “pin” any number of DataTips you want onto the screen.  In addition to pinning top-level variables, you can also drill into the sub-properties on variables and pin them as well.  Below I’ve “pinned” three variables: “category”, “Request.RawUrl” and “Request.LogonUserIdentity.Name”.  Note that these last two variables are sub-properties of the “Request” object.

    Associating Comments with Pinned DataTips
    Hovering over a pinned DataTip exposes some additional UI within the debugger:
    Clicking the comment button at the bottom of this UI expands the DataTip – and allows you to optionally add a comment with it:
    This makes it really easy to attach and track debugging notes:

    Pinned DataTips are Usable Across both Debug Sessions and Visual Studio Sessions
    Pinned DataTips can be used across multiple debugger sessions.  This means that if you stop the debugger, make a code change, and then recompile and start a new debug session – any pinned DataTips will still be there, along with any comments you associate with them.  Pinned DataTips can also be used across multiple Visual Studio sessions.  This means that if you close your project, shutdown Visual Studio, and then later open the project up again – any pinned DataTips will still be there, along with any comments you associate with them.

    See the Value from Last Debug Session (Great Code Editor Feature)
    How many times have you ever stopped the debugger only to go back to your code and say: $#@! – what was the value of that variable again???
    One of the nice things about pinned DataTips is that they keep track of their “last value from debug session” – and you can look these values up within the VB/C# code editor even when the debugger is no longer running.  DataTips are by default hidden when you are in the code editor and the debugger isn’t running.  On the left-hand margin of the code editor, though, you’ll find a push-pin for each pinned DataTip that you’ve previously set up:
    Hovering your mouse over a pinned DataTip will cause it to display on the screen.  Below you can see what happens when I hover over the first pin in the editor – it displays our debug session’s last values for the “Request” object DataTip along with the comment we associated with them:
    This makes it much easier to keep track of state and conditions as you toggle between code editing mode and debugging mode on your projects.

    Importing/Exporting Pinned DataTips
    As I mentioned earlier in this post, pinned DataTips are by default saved across Visual Studio sessions (you don’t need to do anything to enable this).  VS 2010 also now supports importing/exporting pinned DataTips to XML files – which you can then pass off to other developers, attach to a bug report, or simply re-load later.  Combined with the new support for importing/exporting breakpoints, this makes it much easier for multiple developers to share debugger configurations and collaborate across debug sessions. 
    Summary
    Visual Studio 2010 includes a bunch of great new debugger features – both big and small.  Today’s post shared some of the nice debugger usability improvements.  All of the features above are supported with the Visual Studio 2010 Professional edition (the Pinned DataTip features are also supported in the free Visual Studio 2010 Express Editions).  I’ll be covering some of the “big” new debugging features like Intellitrace, parallel/multithreaded debugging, and dump file analysis in future blog posts.  Hope this helps, Scott
    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Updating a SharePoint master page via a solution (WSP)

    - by Kelly Jones
    In my last blog post, I wrote about how to deploy a SharePoint theme using Features and a solution package.  As promised in that post, here is how to update an already deployed master page.

    There are several ways to update a master page in SharePoint.  You could upload a new version to the master page gallery, or you could upload a new master page to the gallery and then set the site to use this new page.  Manually uploading your master page to the master page gallery might be the best option, depending on your environment.  For my client, I did these steps in code, which is what they preferred.

    Before you decide which method you need to use, take a look at your existing pages.  Are they using the SharePoint dynamic token or the static token for the master page reference?  The what, huh?  So, there are four ways to tell an .aspx page hosted in SharePoint which master page it should use:

    “~masterurl/default.master” – tells the page to use the default.master property of the site
    “~masterurl/custom.master” – tells the page to use the custom.master property of the site
    “~site/default.master” – tells the page to use the file named “default.master” in the site’s master page gallery
    “~sitecollection/default.master” – tells the page to use the file named “default.master” in the site collection’s master page gallery

    (The token appears in the page’s MasterPageFile attribute – for example, <%@ Page MasterPageFile="~masterurl/default.master" ... %>.)  For more information about these tokens, take a look at this article on MSDN.

    Once you determine which token your existing pages are pointed to, then you know which file you need to update.  So, if the ~masterurl tokens are used, then you upload a new master page, either replacing the existing one or adding another one to the gallery.  If you’ve uploaded a new file with a new name, you’ll just need to set it as the master page either through the UI (MOSS only) or through code (MOSS or WSS Feature receiver code – or using SharePoint Designer); a short sketch of the code approach follows below.
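    For reference, here is a minimal sketch (not my client’s actual code) of what “setting the master page through code” can look like.  The site URL and master page file name are made up; SPWeb.MasterUrl and SPWeb.CustomMasterUrl are the standard WSS/MOSS properties behind the ~masterurl tokens described above.

        using Microsoft.SharePoint;

        public static class MasterPageSetter
        {
            // Minimal sketch: point a site at a master page that has already been
            // uploaded to the master page gallery. URL and file name are made up.
            public static void SetMasterPage()
            {
                using (SPSite site = new SPSite("http://myserver/sites/mysite"))
                using (SPWeb web = site.OpenWeb())
                {
                    string masterUrl = web.ServerRelativeUrl.TrimEnd('/') +
                        "/_catalogs/masterpage/myMaster.master";

                    // Pages that reference ~masterurl/default.master follow MasterUrl;
                    // pages that reference ~masterurl/custom.master follow CustomMasterUrl.
                    web.MasterUrl = masterUrl;
                    web.CustomMasterUrl = masterUrl;
                    web.Update();
                }
            }
        }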
    If the ~site or ~sitecollection tokens were used, then you’re limited to either replacing the existing master page, or editing all of your existing pages to point to another master page.  In most cases, it probably makes sense to just replace the master page.  For my project, I’m working with WSS and the existing pages are set to the ~sitecollection token.  Based on this, I decided to just upload a new version of the existing master page (and not modify the dozens of existing pages).

    Also, since my client prefers Features and solutions, I created a master page Feature and a corresponding Feature Receiver.  For information on creating the elements and feature files, check out this post: http://sharepointmagazine.net/technical/development/deploying-the-master-page .  This works fine, unless you are overwriting an existing master page, which was my case.  You’ll run into errors because the master page file needs to be checked out, replaced, and then checked in.  I wrote code in my Feature Activated event handler to accomplish these steps.

    Here are the steps necessary in code:
    1. Get the file name from the elements file of the Feature
    2. Check out the file from the master page gallery
    3. Upload the file to the master page gallery
    4. Check in the file to the master page gallery

    Here’s the code in my Feature Receiver:

        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            try
            {
                SPElementDefinitionCollection col =
                    properties.Definition.GetElementDefinitions(System.Globalization.CultureInfo.CurrentCulture);

                using (SPWeb curweb = GetCurWeb(properties))
                {
                    foreach (SPElementDefinition ele in col)
                    {
                        if (string.Compare(ele.ElementType, "Module", true) == 0)
                        {
                            // <Module Name="DefaultMasterPage" List="116" Url="_catalogs/masterpage" RootWebOnly="FALSE">
                            //   <File Url="myMaster.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE"
                            //         Path="MasterPages/myMaster.master" />
                            // </Module>
                            string Url = ele.XmlDefinition.Attributes["Url"].Value;
                            foreach (System.Xml.XmlNode file in ele.XmlDefinition.ChildNodes)
                            {
                                string Url2 = file.Attributes["Url"].Value;
                                string Path = file.Attributes["Path"].Value;
                                string fileType = file.Attributes["Type"].Value;

                                if (string.Compare(fileType, "GhostableInLibrary", true) == 0)
                                {
                                    // Check out the file in the document library
                                    SPFile existingFile = curweb.GetFile(Url + "/" + Url2);

                                    if (existingFile != null)
                                    {
                                        if (existingFile.CheckOutStatus != SPFile.SPCheckOutStatus.None)
                                        {
                                            throw new Exception("The master page file is already checked out. Please make sure the master page file is checked in, before activating this feature.");
                                        }
                                        else
                                        {
                                            existingFile.CheckOut();
                                            existingFile.Update();
                                        }
                                    }

                                    // Upload the file to the document library
                                    string filePath = System.IO.Path.Combine(properties.Definition.RootDirectory, Path);
                                    string fileName = System.IO.Path.GetFileName(filePath);
                                    char slash = Convert.ToChar("/");
                                    string[] folders = existingFile.ParentFolder.Url.Split(slash);

                                    if (folders.Length > 2)
                                    {
                                        Logger.logMessage("More than two folders were detected in the library path for the master page. Only two are supported.",
                                            Logger.LogEntryType.Information); // custom logging component
                                    }

                                    SPFolder myLibrary = curweb.Folders[folders[0]].SubFolders[folders[1]];

                                    FileStream fs = File.OpenRead(filePath);

                                    SPFile newFile = myLibrary.Files.Add(fileName, fs, true);

                                    myLibrary.Update();
                                    newFile.CheckIn("Updated by Feature", SPCheckinType.MajorCheckIn);
                                    newFile.Update();
                                }
                            }
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                string msg = "Error occurred during feature activation";
                Logger.logException(ex, msg, "");
            }
        }

        /// <summary>
        /// Using a Feature's properties, get a reference to the Current Web
        /// </summary>
        /// <param name="properties"></param>
        public SPWeb GetCurWeb(SPFeatureReceiverProperties properties)
        {
            SPWeb curweb;

            // Check if the parent of the web is a site or a web
            if (properties != null && properties.Feature.Parent.GetType().ToString() == "Microsoft.SharePoint.SPWeb")
            {
                // Get web from parent
                curweb = (SPWeb)properties.Feature.Parent;
            }
            else
            {
                // Get web from Site
                using (SPSite cursite = (SPSite)properties.Feature.Parent)
                {
                    curweb = (SPWeb)cursite.OpenWeb();
                }
            }

            return curweb;
        }

    This did the trick.  
    It allowed me to update my existing master page through an easily repeatable process (which is great when you are working with more than one environment and want to do things like TEST it!).  I did run into what I would classify as a strange issue with one of my subsites, but that’s the topic for another blog post.

    Read the article

  • June 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I’m happy to announce the June 2013 release of the Ajax Control Toolkit.  For this release, we enhanced the AjaxFileUpload control to support uploading files directly to Windows Azure.  We also improved the SlideShow control by adding support for CSS3 animations.

    You can get the latest release of the Ajax Control Toolkit by visiting the project page at CodePlex (http://AjaxControlToolkit.CodePlex.com).  Alternatively, you can execute the following NuGet command from the Visual Studio Library Package Manager window:

    Uploading Files to Azure
    The AjaxFileUpload control enables you to efficiently upload large files and display progress while uploading.  With this release, we’ve added support for uploading large files directly to Windows Azure Blob Storage (you can continue to upload to your server hard drive if you prefer).

    Imagine, for example, that you have created an Azure Blob Storage container named pictures.  In that case, you can use the following AjaxFileUpload control to upload to the container:

        <toolkit:ToolkitScriptManager runat="server" />
        <toolkit:AjaxFileUpload
            ID="AjaxFileUpload1"
            StoreToAzure="true"
            AzureContainerName="pictures"
            runat="server" />

    Notice that the AjaxFileUpload control is declared with two properties related to Azure.  The StoreToAzure property causes the AjaxFileUpload control to upload a file to Azure instead of the local computer.  The AzureContainerName property points to the blob container where the file is uploaded.

    To use the AjaxFileUpload control, you need to modify your web.config file so it contains some additional settings.  You need to configure the AjaxFileUpload handler and you need to point your Windows Azure connection string to your Blob Storage account.

        <configuration>
          <appSettings>
            <!--<add key="AjaxFileUploadAzureConnectionString" value="UseDevelopmentStorage=true"/>-->
            <add key="AjaxFileUploadAzureConnectionString" value="DefaultEndpointsProtocol=https;AccountName=testact;AccountKey=RvqL89Iw4npvPlAAtpOIPzrinHkhkb6rtRZmD0+ojZupUWuuAVJRyyF/LIVzzkoN38I4LSr8qvvl68sZtA152A=="/>
          </appSettings>
          <system.web>
            <compilation debug="true" targetFramework="4.5" />
            <httpRuntime targetFramework="4.5" />
            <httpHandlers>
              <add verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/>
            </httpHandlers>
          </system.web>
          <system.webServer>
            <validation validateIntegratedModeConfiguration="false" />
            <handlers>
              <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/>
            </handlers>
            <security>
              <requestFiltering>
                <requestLimits maxAllowedContentLength="4294967295"/>
              </requestFiltering>
            </security>
          </system.webServer>
        </configuration>

    You supply the connection string for your Azure Blob Storage account with the AjaxFileUploadAzureConnectionString property.  If you set the value “UseDevelopmentStorage=true” then the AjaxFileUpload will upload to the simulated Blob Storage on your local machine.

    After you create the necessary configuration settings, you can use the AjaxFileUpload control to upload files directly to Azure (even very large files).  Here’s a screen capture of how the AjaxFileUpload control appears in Google Chrome:

    After the files are uploaded, you can view the uploaded files in the Windows Azure Portal. 
    You can see that all 5 files were uploaded successfully:

    New AjaxFileUpload Events
    In response to user feedback, we added two new events to the AjaxFileUpload control (on both the server and the client):

    · UploadStart – Raised on the server before any files have been uploaded.
    · UploadCompleteAll – Raised on the server when all files have been uploaded.
    · OnClientUploadStart – The name of a function on the client which is called before any files have been uploaded.
    · OnClientUploadCompleteAll – The name of a function on the client which is called after all files have been uploaded.

    These new events are most useful when uploading multiple files at a time.  The updated AjaxFileUpload sample page demonstrates how to use these events to show the total amount of time required to upload multiple files (see the AjaxFileUpload.aspx file in the Ajax Control Toolkit sample site); a rough sketch of the idea follows below.
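    As an illustration only, here is a minimal code-behind sketch of timing a multi-file upload with the two server-side events.  The page class, handler names, and the plain EventArgs signatures are assumptions (the toolkit’s AjaxFileUpload.aspx sample shows the exact event-args types); the static field is for demonstration and is not safe for concurrent users.

        using System;

        public partial class UploadPage : System.Web.UI.Page
        {
            // Demo only: a static field is not safe when multiple users upload at once.
            private static DateTime _uploadStarted;

            // Assumed signature -- check the toolkit sample for the exact event-args type.
            protected void AjaxFileUpload1_UploadStart(object sender, EventArgs e)
            {
                // Raised once on the server, before any file is uploaded.
                _uploadStarted = DateTime.UtcNow;
            }

            // Assumed signature -- check the toolkit sample for the exact event-args type.
            protected void AjaxFileUpload1_UploadCompleteAll(object sender, EventArgs e)
            {
                // Raised once on the server, after all files have been uploaded.
                TimeSpan elapsed = DateTime.UtcNow - _uploadStarted;
                System.Diagnostics.Debug.WriteLine("Total upload time: " + elapsed);
            }
        }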
    SlideShow Animated Slide Transitions
    With this release of the Ajax Control Toolkit, we also added support for CSS3 animations to the SlideShow control.  The animation is used when transitioning from one slide to another.  Here’s the complete list of animations:

    · FadeInFadeOut
    · ScaleX
    · ScaleY
    · ZoomInOut
    · Rotate
    · SlideLeft
    · SlideDown

    You specify the animation which you want to use by setting the SlideShowAnimationType property.  For example, here is how you would use the Rotate animation when displaying a set of slides:

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ShowSlideShow.aspx.cs" Inherits="TestACTJune2013.ShowSlideShow" %>
        <%@ Register TagPrefix="toolkit" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %>

        <script runat="Server" type="text/C#">
            [System.Web.Services.WebMethod]
            [System.Web.Script.Services.ScriptMethod]
            public static AjaxControlToolkit.Slide[] GetSlides()
            {
                return new AjaxControlToolkit.Slide[] {
                    new AjaxControlToolkit.Slide("slides/Blue hills.jpg", "Blue Hills", "Go Blue"),
                    new AjaxControlToolkit.Slide("slides/Sunset.jpg", "Sunset", "Setting sun"),
                    new AjaxControlToolkit.Slide("slides/Winter.jpg", "Winter", "Wintery..."),
                    new AjaxControlToolkit.Slide("slides/Water lilies.jpg", "Water lillies", "Lillies in the water"),
                    new AjaxControlToolkit.Slide("slides/VerticalPicture.jpg", "Sedona", "Portrait style picture")
                };
            }
        </script>

        <!DOCTYPE html>
        <html>
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <toolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" />
                <asp:Image ID="Image1" Height="300" Runat="server" />
                <toolkit:SlideShowExtender ID="SlideShowExtender1"
                    TargetControlID="Image1"
                    SlideShowServiceMethod="GetSlides"
                    AutoPlay="true"
                    Loop="true"
                    SlideShowAnimationType="Rotate"
                    runat="server" />
            </div>
            </form>
        </body>
        </html>

    In the code above, the set of slides is exposed by a page method named GetSlides().  The SlideShowAnimationType property is set to the value Rotate.  The following animated GIF gives you an idea of the resulting slideshow:

    If you want to use either the SlideDown or SlideRight animations, then you must supply both an explicit width and height for the Image control which is the target of the SlideShow extender.  For example, here is how you would declare an Image and SlideShow control to use a SlideRight animation:

        <toolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" />
        <asp:Image ID="Image1" Height="300" Width="300" Runat="server" />
        <toolkit:SlideShowExtender ID="SlideShowExtender1"
            TargetControlID="Image1"
            SlideShowServiceMethod="GetSlides"
            AutoPlay="true"
            Loop="true"
            SlideShowAnimationType="SlideRight"
            runat="server" />

    Notice that the Image control includes both a Height and Width property.  Here’s an approximation of this animation using an animated GIF:

    Summary
    The Superexpert team worked hard on this release.  We hope you like the new improvements to both the AjaxFileUpload and the SlideShow controls.  We’d love to hear your feedback in the comments.  On to the next sprint!

    Read the article

  • Help with InvalidCastException

    - by Robert
    I have a gridview and, when a record is double-clicked, I want it to open up a new detail-view form for that particular record. As an example, I have created a Customer class: using System; using System.Data; using System.Configuration; using System.Collections.Generic; using System.ComponentModel; using System.Data.SqlClient; using System.Collections; namespace SyncTest { #region Customer Collection public class CustomerCollection : BindingListView<Customer> { public CustomerCollection() : base() { } public CustomerCollection(List<Customer> customers) : base(customers) { } public CustomerCollection(DataTable dt) { foreach (DataRow oRow in dt.Rows) { Customer c = new Customer(oRow); this.Add(c); } } } #endregion public class Customer : INotifyPropertyChanged, IEditableObject, IDataErrorInfo { private string _CustomerID; private string _CompanyName; private string _ContactName; private string _ContactTitle; private string _OldCustomerID; private string _OldCompanyName; private string _OldContactName; private string _OldContactTitle; private bool _Editing; private string _Error = string.Empty; private EntityStateEnum _EntityState; private Hashtable _PropErrors = new Hashtable(); public event PropertyChangedEventHandler PropertyChanged; private void FirePropertyChangeNotification(string propName) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(propName)); } } public Customer() { this.EntityState = EntityStateEnum.Unchanged; } public Customer(DataRow dr) { //Populates the business object item from a data row this.CustomerID = dr["CustomerID"].ToString(); this.CompanyName = dr["CompanyName"].ToString(); this.ContactName = dr["ContactName"].ToString(); this.ContactTitle = dr["ContactTitle"].ToString(); this.EntityState = EntityStateEnum.Unchanged; } public string CustomerID { get { return _CustomerID; } set { _CustomerID = value; FirePropertyChangeNotification("CustomerID"); } } public string CompanyName { get { return _CompanyName; } set { _CompanyName = value; FirePropertyChangeNotification("CompanyName"); } } public string ContactName { get { return _ContactName; } set { _ContactName = value; FirePropertyChangeNotification("ContactName"); } } public string ContactTitle { get { return _ContactTitle; } set { _ContactTitle = value; FirePropertyChangeNotification("ContactTitle"); } } public Boolean IsDirty { get { return ((this.EntityState != EntityStateEnum.Unchanged) || (this.EntityState != EntityStateEnum.Deleted)); } } public enum EntityStateEnum { Unchanged, Added, Deleted, Modified } void IEditableObject.BeginEdit() { if (!_Editing) { _OldCustomerID = _CustomerID; _OldCompanyName = _CompanyName; _OldContactName = _ContactName; _OldContactTitle = _ContactTitle; } this.EntityState = EntityStateEnum.Modified; _Editing = true; } void IEditableObject.CancelEdit() { if (_Editing) { _CustomerID = _OldCustomerID; _CompanyName = _OldCompanyName; _ContactName = _OldContactName; _ContactTitle = _OldContactTitle; } this.EntityState = EntityStateEnum.Unchanged; _Editing = false; } void IEditableObject.EndEdit() { _Editing = false; } public EntityStateEnum EntityState { get { return _EntityState; } set { _EntityState = value; } } string IDataErrorInfo.Error { get { return _Error; } } string IDataErrorInfo.this[string columnName] { get { return (string)_PropErrors[columnName]; } } private void DataStateChanged(EntityStateEnum dataState, string propertyName) { //Raise the event if (PropertyChanged != null && propertyName != null) { PropertyChanged(this, new 
    PropertyChangedEventArgs(propertyName)); } //If the state is deleted, mark it as deleted if (dataState == EntityStateEnum.Deleted) { this.EntityState = dataState; } if (this.EntityState == EntityStateEnum.Unchanged) { this.EntityState = dataState; } } } } Here is the code for the double-click event: private void customersDataGridView_CellDoubleClick(object sender, DataGridViewCellEventArgs e) { Customer oCustomer = (Customer)customersBindingSource.CurrencyManager.List[customersBindingSource.CurrencyManager.Position]; CustomerForm oForm = new CustomerForm(); oForm.NewCustomer = oCustomer; oForm.ShowDialog(this); oForm.Dispose(); oForm = null; } Unfortunately, when this code runs, I receive an InvalidCastException stating "Unable to cast object of type 'System.Data.DataRowView' to type 'SyncTest.Customer'".  This error occurs on the very first line of that event: Customer oCustomer = (Customer)customersBindingSource.CurrencyManager.List[customersBindingSource.CurrencyManager.Position]; What am I doing wrong, and what can I do to fix this?  Any help is greatly appreciated.  Thanks!

    Read the article

  • ajax tabcontainer with button postback problem

    - by yousof
    I have a dropdownlist in my web page and two command buttons and a tabcontainer. The tabcontainer does not appear after the load of page according to my code. Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load If Not IsPostBack Then TabContainer1.Visible = False If CtvAct.GetRecords("Fill_RequestTypeTb") = True Then ReqTypeCmbo.DataSource = CtvAct.MainDataset.Tables("tbOLRequestType").DefaultView ReqTypeCmbo.DataTextField = "RequestTypeName" ReqTypeCmbo.DataValueField = "RequestTypeId" ReqTypeCmbo.DataBind() Dim itm As New ListItem itm.Text = "-- ??? ??? ????? --" itm.Value = "-1" itm.Selected = True ReqTypeCmbo.Items.Insert(0, itm) ReqTypeCmbo.SelectedIndex = 0 End If End If End Sub Protected Sub PrntCmd_Click(ByVal sender As Object, ByVal e As EventArgs) Handles PrntCmd.Click TextBox6.Text = "gggg" End Sub Protected Sub ReqTypeCmbo_SelectedIndexChanged(ByVal sender As Object, ByVal e As EventArgs) Handles ReqTypeCmbo.SelectedIndexChanged If ReqTypeCmbo.SelectedItem.Text = "????" Then TabContainer1.Visible = True TabContainer1.ActiveTabIndex = 0 OwnerDataPnl.Enabled = True AttornyDataPnl.Enabled = True BaseDataPnl.Enabled = True RenwDataPnl.Visible = False NewOwnerDataPnl0.Visible = False AddActivtyPnl.Visible = False SubstitPnl.Visible = False FinishLicePnl.Visible = False CancelActivityPnl.Visible = False LocTransPnl.Visible = False ' Panel1.Visible = False '?????? ?????? ??????? ???????? REstOfficeAddrTxt.Enabled = True REstOffIdeIssuedPlaceTxt.Enabled = True REstOffIdentityNameTxt.Enabled = True REstOffIdentityNumberTxt.Enabled = True REstOffIdentityStartDateTxt.Enabled = True REstOffMobileTxt.Enabled = True REstOffNameTxt.Enabled = True REstOffPhoneTxt.Enabled = True OwnerShipNumberTxt.Enabled = True OwnerShipDateTxt.Enabled = True OwnerShipIssuedPlaceTxt.Enabled = True SerailTxt.Enabled = True RequestIdTxt.Enabled = True RequestDateTxt.Enabled = True RenwDataPnl.Visible = True '''' If CtvAct.GetRecords("Fill_NationalityTb") = True Then OwnrNationCmbo.DataSource = CtvAct.MainDataset.Tables("tbOLNationality").DefaultView OwnrNationCmbo.DataTextField = "NationName" OwnrNationCmbo.DataValueField = "NationId" OwnrNationCmbo.DataBind() Dim itm As New ListItem itm.Text = "-- ??? ??????? --" itm.Value = "-1" itm.Selected = True OwnrNationCmbo.Items.Insert(0, itm) OwnrNationCmbo.SelectedIndex = 0 End If If CtvAct.GetRecords("Fill_ActivityTb") = True Then AcivityCombo.DataSource = CtvAct.MainDataset.Tables("tbOLActivity").DefaultView AcivityCombo.DataTextField = "ActivityName" AcivityCombo.DataValueField = "ActivityId" AcivityCombo.DataBind() AcivityCombo.SelectedIndex = -1 Dim itm As New ListItem itm.Text = "-- ??? ?????? --" itm.Value = "-1" itm.Selected = True AcivityCombo.Items.Insert(0, itm) AcivityCombo.SelectedIndex = 0 End If 'If CtvAct.GetRecords("Fill_ActivityTb") = True Then ' DropDownList2.DataSource = CtvAct.MainDataset.Tables("tbOLActivity").DefaultView ' DropDownList2.DataTextField = "ActivityName" ' DropDownList2.DataValueField = "ActivityId" ' DropDownList2.DataBind() 'AcivityCombo.Visible = True ' LbCon.Text = CtvAct.Prt 'End If AttronayNationCmbo.SelectedIndex = -1 If CtvAct.GetRecords("Fill_NationalityTb") = True Then AttronayNationCmbo.DataSource = CtvAct.MainDataset.Tables("tbOLNationality").DefaultView AttronayNationCmbo.DataTextField = "NationName" AttronayNationCmbo.DataValueField = "NationId" AttronayNationCmbo.DataBind() Dim itm As New ListItem itm.Text = "-- ??? ??????? 
--" itm.Value = "-1" itm.Selected = True AttronayNationCmbo.Items.Insert(0, itm) AttronayNationCmbo.SelectedIndex = 0 ' LbCon.Text = CtvAct.Prt End If If CtvAct.GetRecords("Fill_LocationTb") = True Then LocationIdCmbo.DataSource = CtvAct.MainDataset.Tables("tbOLLocation").DefaultView LocationIdCmbo.DataTextField = "LocationName" LocationIdCmbo.DataValueField = "LocationId" LocationIdCmbo.DataBind() Dim itm As New ListItem itm.Text = "-- ??? ?????? --" itm.Value = "-1" itm.Selected = True LocationIdCmbo.Items.Insert(0, itm) LocationIdCmbo.SelectedIndex = 0 ' LbCon.Text = CtvAct.Prt End If '''''''''''''''''''''''''''''''' con = New SqlConnection(CtvAct.Connection_String()) CmdActivty = con.CreateCommand CmdActivty.CommandText = "SELECT ActivityId FROM tbOLStoreActivty" DaActivity.SelectCommand = CmdActivty DaActivity.Fill(DsActivity, "tbOLStoreActivty") ActivityGV.DataSource = DaActivity ActivityGV.DataMember = "tbOLStoreActivty" AcivityCombo.DataSource = ds.Tables(0) AcivityCombo.DataTextField = "ename" AcivityCombo.DataValueField = "eid" AcivityCombo.DataBind() ''''''''''''''''''''''''''''' LbCon.Text = CtvAct.Prt ElseIf ReqTypeCmbo.SelectedItem.Text = "?????" Then TabContainer1.Visible = True TabContainer1.ActiveTabIndex = 1 OwnerDataPnl.Enabled = False AttornyDataPnl.Enabled = False BaseDataPnl.Enabled = True AddActivtyPnl.Visible = False SubstitPnl.Visible = False FinishLicePnl.Visible = False RenwDataPnl.Visible = True AddActivtyPnl.Visible = False NewOwnerDataPnl0.Visible = False CancelActivityPnl.Visible = False LocTransPnl.Visible = False '?????? ?????? ??????? ???????? REstOfficeAddrTxt.Enabled = False REstOffIdeIssuedPlaceTxt.Enabled = False REstOffIdentityNameTxt.Enabled = False REstOffIdentityNumberTxt.Enabled = False REstOffIdentityStartDateTxt.Enabled = False REstOffMobileTxt.Enabled = False REstOffNameTxt.Enabled = False REstOffPhoneTxt.Enabled = False OwnerShipNumberTxt.Enabled = False OwnerShipDateTxt.Enabled = False OwnerShipIssuedPlaceTxt.Enabled = False SerailTxt.Enabled = True RequestIdTxt.Enabled = True RequestDateTxt.Enabled = True ''''''''' ElseIf ReqTypeCmbo.SelectedItem.Text = "??? ?????" Then TabContainer1.Visible = True TabContainer1.ActiveTabIndex = 1 OwnerDataPnl.Enabled = False AttornyDataPnl.Enabled = False BaseDataPnl.Enabled = False RenwDataPnl.Visible = False NewOwnerDataPnl0.Visible = True AddActivtyPnl.Visible = False SubstitPnl.Visible = False FinishLicePnl.Visible = False CancelActivityPnl.Visible = False LocTransPnl.Visible = False ElseIf ReqTypeCmbo.SelectedItem.Text = "????? ????" Then TabContainer1.Visible = True TabContainer1.ActiveTabIndex = 1 OwnerDataPnl.Enabled = False AttornyDataPnl.Enabled = False BaseDataPnl.Enabled = False RenwDataPnl.Visible = False NewOwnerDataPnl0.Visible = False AddActivtyPnl.Visible = False SubstitPnl.Visible = False FinishLicePnl.Visible = False CancelActivityPnl.Visible = True LocTransPnl.Visible = False ElseIf ReqTypeCmbo.SelectedItem.Text = "????? ????" Then TabContainer1.Visible = True TabContainer1.ActiveTabIndex = 1 OwnerDataPnl.Enabled = False AttornyDataPnl.Enabled = False BaseDataPnl.Enabled = False RenwDataPnl.Visible = False NewOwnerDataPnl0.Visible = False AddActivtyPnl.Visible = False SubstitPnl.Visible = False FinishLicePnl.Visible = True LocTransPnl.Visible = False ElseIf ReqTypeCmbo.SelectedItem.Text = "??? ???? / ????" 
    Then TabContainer1.Visible = True TabContainer1.ActiveTabIndex = 1 OwnerDataPnl.Enabled = False AttornyDataPnl.Enabled = False BaseDataPnl.Enabled = False RenwDataPnl.Visible = False NewOwnerDataPnl0.Visible = False AddActivtyPnl.Visible = False SubstitPnl.Visible = True FinishLicePnl.Visible = False LocTransPnl.Visible = False ElseIf ReqTypeCmbo.SelectedItem.Text = "??? ????" Then TabContainer1.Visible = True TabContainer1.ActiveTabIndex = 0 OwnerDataPnl.Enabled = True AttornyDataPnl.Enabled = True BaseDataPnl.Enabled = True RenwDataPnl.Visible = False NewOwnerDataPnl0.Visible = False AddActivtyPnl.Visible = False SubstitPnl.Visible = False FinishLicePnl.Visible = False CancelActivityPnl.Visible = False LocTransPnl.Visible = True End If End Sub If I press a button right after the page loads, the buttons work fine.  Selecting an item from the dropdownlist makes the tabcontainer appear, but once the tabcontainer is visible, the buttons no longer post back.  How can I solve these problems?

    Read the article

  • How to Achieve OC4J RMI Load Balancing

    - by fip
    This is an old Oracle SOA and OC4J 10G topic.  In fact this is not even a SOA topic per se.  Questions of RMI load balancing arise when you develop custom web applications accessing human tasks running off a remote SOA 10G cluster.  Having returned from a customer who faced challenges with OC4J RMI load balancing, I felt there is still some confusion in the field about how OC4J RMI load balancing works.  Hence I decided to dust off an old tech note that I wrote a few years back and share it with the general public.  Here is the tech note:

    Overview
    A typical use case in Oracle SOA is that you are building a web based, custom human tasks UI that will interact with the task services housed in a remote BPEL 10G cluster.  Or, in a more generic way, you are just building a web based application in Java that needs to interact with the EJBs in a remote OC4J cluster.  In either case, you are talking to an OC4J cluster as an RMI client.  Then immediately you must ask yourself the following questions:

    1. How do I make sure that the web application, as an RMI client, evenly distributes its load against all the nodes in the remote OC4J cluster?
    2. How do I make sure that the web application, as an RMI client, is resilient to node failures in the remote OC4J cluster, so that in the unlikely case when one of the remote OC4J nodes fails, my web application will continue to function?

    That is the topic of how to achieve load balancing with the OC4J RMI client.

    Solutions
    You need to configure and code RMI load balancing in two places:
    1. The provider URL can be specified with a comma separated list of URLs, so that the initial lookup will land on one of the available URLs.
    2. Choose a proper value for the oracle.j2ee.rmi.loadBalance property, which, alongside the PROVIDER_URL property, is one of the JNDI properties passed to the JNDI lookup. (http://docs.oracle.com/cd/B31017_01/web.1013/b28958/rmi.htm#BABDGFBI)

    More details below:

    About the PROVIDER_URL
    The job of the JNDI property java.naming.provider.url is, when the client looks up a new Context for the very first time in the client session, to provide a list of RMI contexts.  The value of the JNDI property java.naming.provider.url takes the format of a single URL, or a comma separated list of URLs.

    A single URL.  For example:
    opmn:ormi://host1:6003:oc4j_instance1/appName1

    A comma separated list of multiple URLs.  For example:
    opmn:ormi://host1:6003:oc4j_instance1/appName, opmn:ormi://host2:6003:oc4j_instance1/appName, opmn:ormi://host3:6003:oc4j_instance1/appName

    When the client looks up a new Context for the very first time in the client session, it sends a query against the OPMN referenced by the provider URL.  The OPMN host and port specify the destination of such a query, and the OC4J instance name and appName are actually the “where clause” of the query.

    When the PROVIDER_URL references a single OPMN server
    Let’s consider the case when the provider URL only references a single OPMN server of the destination cluster.  In this case, that single OPMN server receives the query and returns a list of the qualified Contexts from all OC4Js within the cluster, even though there is a single OPMN server in the provider URL.  A Context represents a particular starting point at a particular server for subsequent object lookups.  
    For example, if the URL is opmn:ormi://host1:6003:oc4j_instance1/appName, then OPMN will return the following contexts:

    appName on oc4j_instance1 on host1
    appName on oc4j_instance1 on host2
    appName on oc4j_instance1 on host3

    (provided that host1, host2, and host3 are all in the same cluster)

    Please note that one OPMN will be sufficient to find the list of all contexts from the entire cluster that satisfy the JNDI lookup query.  You can do an experiment by shutting down appName on host1, and observe that OPMN on host1 will still be able to return you appName on host2 and appName on host3.

    When the PROVIDER_URL references a comma separated list of multiple OPMN servers
    When the JNDI property java.naming.provider.url references a comma separated list of multiple URLs, the lookup will return exactly the same thing as with a single OPMN server: a list of qualified Contexts from the cluster.  The purpose of having multiple OPMN servers is to provide high availability in the initial context creation, such that if the OPMN at host1 is unavailable, the client will try the lookup via the OPMN on host2, and so on.  After the initial lookup returns and caches a list of contexts, the JNDI URL(s) are no longer used in the same client session.  That explains why removing the 3rd URL from the list of JNDI URLs will not stop the client from getting the EJB on the 3rd server.

    About the oracle.j2ee.rmi.loadBalance Property
    After the client acquires the list of contexts, it will cache it at the client side as the “list of available RMI contexts”.  This list includes all the servers in the destination cluster.  This list will stay in the cache until the client session (JVM) ends.  The RMI load balancing against the destination cluster happens at the client side, as the client switches between the members of the list.  Whether and how often the client will refresh the Context from the list of contexts is based on the value of oracle.j2ee.rmi.loadBalance.  The documentation at http://docs.oracle.com/cd/B31017_01/web.1013/b28958/rmi.htm#BABDGFBI lists all the available values for oracle.j2ee.rmi.loadBalance:

    client – If specified, the client interacts with the OC4J process that was initially chosen at the first lookup for the entire conversation.
    context – Used for a Web client (servlet or JSP) that will access EJBs in a clustered OC4J environment.  If specified, a new Context object for a randomly-selected OC4J instance will be returned each time InitialContext() is invoked.
    lookup – Used for a standalone client that will access EJBs in a clustered OC4J environment.  If specified, a new Context object for a randomly-selected OC4J instance will be created each time the client calls Context.lookup().

    Please note that regardless of the setting of the oracle.j2ee.rmi.loadBalance property, the “refresh” only occurs at the client.  The client can only choose from the “list of available contexts” that was returned and cached from the very first lookup.  That is, the client will merely get a new Context object from the “list of available RMI contexts” in the cache at the client side.  The client will NOT go to the OPMN server again to get the list.  That also implies that if you add a node to the server cluster AFTER the client’s initial lookup, the client will not know about it, because neither the server nor the client will initiate a refresh of the “list of available servers” to reflect the new node.
    About High Availability (i.e. Resilience Against Node Failure of the Remote OC4J Cluster)
    What we have discussed above is about load balancing.  Let’s also discuss high availability.  This is how high availability works in RMI: when the client uses a Context but gets an exception, such as the socket being closed, it knows that the server referenced by that Context is problematic and will try to get another unused Context from the “list of available contexts”.  Again, this list is the list that was returned and cached at the very first lookup in the entire client session.

    Read the article

  • Creating an SMF service for mercurial web server

    - by Chris W Beal
I'm working on a project at the moment which has a number of contributors. We're managing the project gate (which is stand-alone) with Mercurial. We want an easy way of seeing the changelog, so we can show management what is going on. Luckily Mercurial provides a basic web server which allows you to see the changes, and drill in to changesets. This can be run as a daemon, but as it was running on our build server, every time it was rebooted someone needed to remember to start the process again. This is of course a classic usage of SMF. Now I'm not an experienced person at writing SMF services, so it took me half an hour or so to figure it out the first time. But going forward I should know what I'm doing a bit better. I did reference this doc extensively. Taking a step back, the command to start the mercurial web server is

$ hg serve -p <port number> -d

So we somehow need to get SMF to run that command for us. In the simplest form, SMF services are really made up of two components.

The manifest
- Usually lives in /var/svc/manifest somewhere
- Can be imported from any location

The method
- Usually lives in /lib/svc/method
- I simply put the script straight in that directory. Not very repeatable, but it worked
- Can take an argument of start, stop, or refresh

Let's start with the manifest. This looks pretty complex, but all it's doing is describing the service name, the dependencies, the start and stop methods, and some properties. The properties can be per instance; that is to say, I could have multiple hg serve processes handling different Mercurial projects, on different ports, simultaneously. Here is the manifest I wrote. I stole extensively from the examples in the documentation. So my manifest looks like this:

$ cat hg-serve.xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type='manifest' name='hg-serve'>
  <service name='application/network/hg-serve' type='service' version='1'>
    <dependency name='network' grouping='require_all' restart_on='none' type='service'>
      <service_fmri value='svc:/milestone/network:default' />
    </dependency>
    <exec_method type='method' name='start' exec='/lib/svc/method/hg-serve %m' timeout_seconds='2' />
    <exec_method type='method' name='stop' exec=':kill' timeout_seconds='2'>
    </exec_method>
    <instance name='project-gate' enabled='true'>
      <method_context>
        <method_credential user='root' group='root' />
      </method_context>
      <property_group name='hg-serve' type='application'>
        <propval name='path' type='astring' value='/src/project-gate'/>
        <propval name='port' type='astring' value='9998' />
      </property_group>
    </instance>
    <stability value='Evolving' />
    <template>
      <common_name>
        <loctext xml:lang='C'>hg-serve</loctext>
      </common_name>
      <documentation>
        <manpage title='hg' section='1' />
      </documentation>
    </template>
  </service>
</service_bundle>

So the only things I had to decide on in this are the service name "application/network/hg-serve", the start and stop methods (more of which later), and the properties. This is the information I need to pass to the start method script: in my case the port I want to start the web server on, "9998", and the path to the source gate, "/src/project-gate". These can be read in to the start method. So now let's look at the method script:

$ cat /lib/svc/method/hg-serve
#!/sbin/sh
#
# Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
#
# Standard prolog
#
. /lib/svc/share/smf_include.sh

if [ -z $SMF_FMRI ]; then
    echo "SMF framework variables are not initialized."
    exit $SMF_EXIT_ERR
fi

#
# Build the command line flags
#
# Get the port and directory from the SMF properties
port=`svcprop -c -p hg-serve/port $SMF_FMRI`
dir=`svcprop -c -p hg-serve/path $SMF_FMRI`

echo "$1"
case "$1" in
'start')
    cd $dir
    /usr/bin/hg serve -d -p $port
    ;;
*)
    echo "Usage: $0 {start|refresh|stop}"
    exit 1
    ;;
esac
exit $SMF_EXIT_OK

This is all pretty self-explanatory: we read the port and directory using svcprop, and use those simply to run a command in the start case. We don't need to implement a stop case, as the manifest says to use exec=':kill' for the stop method. Now all we need to do is import the manifest and start the service, but first verify the manifest:

# svccfg verify /path/to/hg-serve.xml

If that doesn't give an error, try importing it:

# svccfg import /path/to/hg-serve.xml

If, like me, you originally put the hg-serve.xml file somewhere under /var/svc/manifest, you'll get an error and be told to restart the manifest-import service:

svccfg: Restarting svc:/system/manifest-import
The manifest being imported is from a standard location and should be imported with the command : svcadm restart svc:/system/manifest-import

# svcadm restart svc:/system/manifest-import

and you're nearly done. You can look at the service using svcs -l:

# svcs -l hg-serve
fmri         svc:/application/network/hg-serve:project-gate
name         hg-serve
enabled      false
state        disabled
next_state   none
state_time   Thu May 31 16:11:47 2012
logfile      /var/svc/log/application-network-hg-serve:project-gate.log
restarter    svc:/system/svc/restarter:default
contract_id  15749
manifest     /var/svc/manifest/network/hg/hg-serve.xml
dependency   require_all/none svc:/milestone/network:default (online)

And look at the interesting properties:

# svcprop hg-serve
hg-serve/path astring /src/project-gate
hg-serve/port astring 9998
...stuff deleted....

Then simply enable the service and, if everything's gone right, you can point your browser at http://server:9998 and get a nice graphical log of project activity.

# svcadm enable hg-serve
# svcs -l hg-serve
fmri         svc:/application/network/hg-serve:project-gate
name         hg-serve
enabled      true
state        online
next_state   none
state_time   Thu May 31 16:18:11 2012
logfile      /var/svc/log/application-network-hg-serve:project-gate.log
restarter    svc:/system/svc/restarter:default
contract_id  15858
manifest     /var/svc/manifest/network/hg/hg-serve.xml
dependency   require_all/none svc:/milestone/network:default (online)

None of this is rocket science, but it is a bit fiddly. Hence I thought I'd blog it. It might just be that you see this in Google and it clicks with you more than one of the many other blogs or how-tos about it. Plus I can always refer back to it myself in three weeks, when I want to add another project to the server and I've forgotten how to do it.
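And on that last point: since the path and port live in a per-instance property group, a second project should just be a new instance on the same service. A rough, untested sketch (the instance name "another-gate" and its path and port are invented for illustration):

# svccfg -s svc:/application/network/hg-serve
svc:/application/network/hg-serve> add another-gate
svc:/application/network/hg-serve> select another-gate
svc:/application/network/hg-serve:another-gate> addpg hg-serve application
svc:/application/network/hg-serve:another-gate> setprop hg-serve/path = astring: "/src/another-gate"
svc:/application/network/hg-serve:another-gate> setprop hg-serve/port = astring: "9999"
svc:/application/network/hg-serve:another-gate> end
# svcadm refresh svc:/application/network/hg-serve:another-gate
# svcadm enable svc:/application/network/hg-serve:another-gate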

    Read the article

  • A New Threat To Web Applications: Connection String Parameter Pollution (CSPP)

    - by eric.maurice
Hi, this is Shaomin Wang. I am a security analyst in Oracle's Security Alerts Group. My primary responsibility is to evaluate the security vulnerabilities reported externally by security researchers on Oracle Fusion Middleware and to ensure timely resolution through the Critical Patch Update. Today, I am going to talk about a serious type of attack: Connection String Parameter Pollution (CSPP). Earlier this year, at the Black Hat DC 2010 Conference, two Spanish security researchers, Jose Palazon and Chema Alonso, unveiled a new class of security vulnerabilities, which target insecure dynamic connections between web applications and databases. The attack, called Connection String Parameter Pollution (CSPP), specifically exploits the semicolon-delimited database connection strings that are constructed dynamically based on user inputs from web applications. CSPP, if carried out successfully, can be used to steal user identities and hijack web credentials. CSPP is a high-risk attack because of the relative ease with which it can be carried out (low access complexity) and the potential results it can have (high impact). In today's blog, we are going to first look at what connection strings are and then review the different ways connection string injections can be leveraged by malicious hackers. We will then discuss how CSPP differs from traditional connection string injection, and the measures organizations can take to prevent this kind of attack. In web applications, a connection string is a set of values that specifies information to connect to backend data repositories, in most cases databases. The connection string is passed to a provider or driver to initiate a connection. Vendors or manufacturers write their own providers for different databases. Since there are many different providers and each provider has multiple ways to make a connection, there are many different ways to write a connection string. Here are some examples of connection strings from Oracle Data Provider for .NET / ODP.NET (manufacturer: Oracle; type: .NET Framework class library):

- Using TNS:
  Data Source = orcl; User ID = myUsername; Password = myPassword;
- Using integrated security:
  Data Source = orcl; Integrated Security = SSPI;
- Using the Easy Connect naming method:
  Data Source = username/password@//myserver:1521/my.server.com
- Specifying pooling parameters:
  Data Source = myOracleDB; User Id = myUsername; Password = myPassword; Min Pool Size = 10; Connection Lifetime = 120; Connection Timeout = 60; Incr Pool Size = 5; Decr Pool Size = 2;

There are many variations of connection strings, but the majority of them are key/value pairs delimited by semicolons. Attacks on connection strings are not new (see, for example, this SANS White Paper on Securing SQL Connection String). Connection strings are vulnerable to injection attacks when dynamic string concatenation is used to build connection strings based on user input. When the user input is not validated or filtered, and malicious text or characters are not properly escaped, an attacker can potentially access sensitive data or resources. For a number of years now, vendors, including Oracle, have created connection string builder classes to help developers generate valid connection strings and potentially prevent this kind of vulnerability. Unfortunately, not all application developers use these utilities, because they are not aware of the danger posed by attacks of this kind.
So how are Connection String Parameter Pollution (CSPP) attacks different from traditional connection string injection attacks? First, let's look at what parameter pollution attacks are. Parameter pollution is a technique which typically involves appending repeating parameters to the request strings to attack the receiving end. Much of the public attention around parameter pollution was initiated as a result of a presentation on HTTP Parameter Pollution attacks by Stefano Di Paola and Luca Carettoni, delivered at the 2009 AppSec OWASP Conference in Poland. In HTTP Parameter Pollution attacks, an attacker submits additional parameters in HTTP GET/POST to a web application, and if these parameters have the same name as an existing parameter, the web application may react in different ways depending on how the web application and web server deal with multiple parameters with the same name. When applied to connection strings, the rule for the majority of database providers is the "last one wins" algorithm: if a KEYWORD=VALUE pair occurs more than once in the connection string, the value associated with the LAST occurrence is used. This opens the door to some serious attacks. By way of example, in a web application, a user enters a username and password, and a connection string is then generated to connect to the back-end database:

Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = xxx;

In the password field, if the attacker enters "xxx; Integrated Security = true", the connection string becomes:

Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = xxx; Integrated Security = true;

Under the "last one wins" principle, the web application will then try to connect to the database using the operating system account under which the application is running, bypassing normal authentication. CSPP poses serious risks for unprepared organizations. It can be particularly dangerous if an Enterprise Systems Management web front-end is compromised, because attackers can then gain access to control panels to configure databases, system accounts, etc. Fortunately, organizations can take steps to prevent this kind of attack. CSPP falls into the injection category of attacks, like Cross-Site Scripting or SQL Injection, which are made possible when inputs from users are not properly escaped or sanitized. Escaping is a technique used to ensure that characters (mostly from user inputs) are treated as data, not as characters that are meaningful to the interpreter's parser. Software developers need to become aware of the danger of these attacks and learn about the defense mechanisms they need to introduce into their code. As well, software vendors need to provide templates or classes to facilitate coding and eliminate developers' guesswork in protecting against such vulnerabilities. Oracle has introduced the OracleConnectionStringBuilder class in Oracle Data Provider for .NET. Using this class, developers can employ a configuration file to provide the connection string and/or dynamically set the values through key/value pairs. It makes creating connection strings less error-prone and easier to manage, and ultimately using the OracleConnectionStringBuilder class provides better security against injection into connection strings.
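To make the defense concrete, here is a minimal sketch (mine, not from the advisory; it assumes ODP.NET's builder exposes the usual DataSource/UserID/Password properties, and userInputName/userInputPassword stand in for whatever your login form supplies):

using Oracle.DataAccess.Client;

// the builder treats each assignment as one keyword/value pair and quotes
// values containing ';' or '=', so they cannot be parsed as extra keywords
OracleConnectionStringBuilder builder = new OracleConnectionStringBuilder();
builder.DataSource = "orcl";
builder.UserID = userInputName;        // raw input from the form
builder.Password = userInputPassword;  // "xxx; Integrated Security = true" stays one literal value
string connectionString = builder.ConnectionString;

Because the polluted password above remains a single escaped value, the "last one wins" trick can no longer smuggle a second Integrated Security keyword into the string.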
For More Information: - The OracleConnectionStringBuilder is located at http://download.oracle.com/docs/cd/B28359_01/win.111/b28375/OracleConnectionStringBuilderClass.htm - Oracle has developed a publicly available course on preventing SQL Injections. The Server Technologies Curriculum course "Defending Against SQL Injection Attacks!" is located at http://st-curriculum.oracle.com/tutorial/SQLInjection/index.htm - The OWASP web site also provides a number of useful resources. It is located at http://www.owasp.org/index.php/Main_Page

    Read the article

  • Using spring:nestedPath tag

    - by Ravi
Hello All, I have this code; I know I'm missing something but don't know what. It would be great if you could help me out. I'm new to Spring MVC. I'm trying out a simple login application in Spring MVC. This is my web.xml:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
    <listener>
        <listener-class>
            org.springframework.web.context.ContextLoaderListener
        </listener-class>
    </listener>
    <servlet>
        <servlet-name>springapp</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>springapp</servlet-name>
        <url-pattern>/app/*</url-pattern>
    </servlet-mapping>
    <session-config>
        <session-timeout>30</session-timeout>
    </session-config>
    <welcome-file-list>
        <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>
</web-app>

Here is my springapp-servlet.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd">
    <bean name="/login" class="springapp.web.LoginController"/>
    <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
        <property name="prefix" value="/WEB-INF/jsp/"></property>
        <property name="suffix" value=".jsp"></property>
    </bean>
</beans>

This is my applicationContext.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" "http://www.springframework.org/dtd/spring-beans.dtd">
<beans>
    <bean id="employee" class="springapp.domain.Employee" />
</beans>

Here is my LoginController.java file:

package springapp.web;

import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.SimpleFormController;
import springapp.domain.Employee;

public class LoginController extends SimpleFormController {

    public LoginController() {
        setCommandName("loginEmployee");
        setCommandClass(Employee.class);
        setFormView("login");
        setSuccessView("welcome");
    }

    @Override
    protected ModelAndView onSubmit(Object command) throws Exception {
        return super.onSubmit(command);
    }
}

And finally my login.jsp file:

<%@ taglib uri="http://www.springframework.org/tags" prefix="spring" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>Timesheet Login</title>
</head>
<body>
    <spring:nestedPath path="employee">
        <form action="" method="POST">
            <table width="200" border="1" align="center" cellpadding="10" cellspacing="0">
                <tr>
                    <td>Username:</td>
                    <td>
                        <spring:bind path="userName">
                            <input type="text" name="${status.expression}" id="${status.value}" />
                        </spring:bind>
                    </td>
                </tr>
                <tr>
                    <td>Password</td>
                    <td>
                        <spring:bind path="password">
                            <input type="password" name="${status.expression}" id="${status.value}" />
                        </spring:bind>
                    </td>
                </tr>
                <tr>
                    <td colspan="2"><input type="submit" value="Login" /></td>
                </tr>
            </table>
        </form>
    </spring:nestedPath>
</body>
</html>

But when I try to run the code I get this error:

javax.servlet.ServletException: javax.servlet.jsp.JspTagException: Neither BindingResult nor plain target object for bean name 'employee' available as request attribute
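An observation that may explain the error, based purely on the code above: <spring:nestedPath path="employee"> looks for a command object named "employee" in the request, but the controller registers its command as "loginEmployee" via setCommandName, so nothing called "employee" ever reaches the JSP (the "employee" bean in applicationContext.xml is a separate definition and is not exposed to the view). A minimal sketch of one fix is simply to make the two names agree:

public LoginController() {
    setCommandName("employee");   // must match <spring:nestedPath path="employee"> in login.jsp
    setCommandClass(Employee.class);
    setFormView("login");
    setSuccessView("welcome");
}

The other direction works too: keep the controller as-is and change the JSP to <spring:nestedPath path="loginEmployee">.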

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service

    - by Elton Stoneman
We're in the process of delivering an enabling project to expose on-premise WCF services securely to Internet consumers. The Azure Service Bus Relay is doing the clever stuff: we register our on-premise service with Azure, consumers call into our .servicebus.windows.net namespace, and their requests are relayed and serviced on-premise. In theory it's all wonderfully simple; by using the relay we get lots of protocol options, free HTTPS and load balancing, and by integrating with ACS we get plenty of security options. Part of our delivery is a suite of sample consumers for the service - .NET, jQuery, PHP - and this set of posts will cover setting up the service and the consumers.

Part 1: Exposing the on-premise service

In theory, this is ultra-straightforward. In practice, on a dev laptop it is - but in a corporate network with firewalls and proxies it isn't, so we'll walk through some of the pitfalls. Note that I'm using the "old" Azure portal which will soon be out of date, but the new shiny portal should have the same steps available and be easier to use. We start with a simple WCF service which takes a string as input, reverses the string and returns it. The Part 1 version of the code is on GitHub here: IPASBR Part 1.

Configuring Azure Service Bus

Start by logging into the Azure portal and registering a Service Bus namespace which will be our endpoint in the cloud. Give it a globally unique name, set it up somewhere near you (if you're in Europe, remember Europe (North) is Ireland, and Europe (West) is the Netherlands), and enable ACS integration by ticking "Access Control" as a service:

Authenticating and authorizing to ACS

When we try to register our on-premise service as a listener for the Service Bus endpoint, we need to supply credentials, which means only trusted service providers can act as listeners. We can use the default "owner" credentials, but that has admin permissions, so a dedicated service account is better (Neil Mackenzie has a good post On Not Using owner with the Azure AppFabric Service Bus with lots of permission details). Click on "Access Control Service" for the namespace, navigate to Service Identities and add a new one. Give the new account a sensible name and description: Let ACS generate a symmetric key for you (this will be the shared secret we use in the on-premise service to authenticate as a listener), but be sure to set the expiration date to something usable. The portal defaults to expiring new identities after 1 year - but when your year is up *your identity will expire without warning* and everything will stop working. In production, you'll need governance to manage identity expiration and a process to make sure you renew identities and roll new keys regularly. The new service identity needs to be authorized to listen on the service bus endpoint. This is done through claim mapping in ACS - we'll set up a rule that says if the nameidentifier in the input claims has the value serviceProvider, in the output we'll have an action claim with the value Listen. In the ACS portal you'll see that there is already a Relying Party Application set up for ServiceBus, which has a Default rule group.
Edit the rule group and click Add to add this new rule. The values to use are:

Issuer: Access Control Service
Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
Input claim value: serviceProvider
Output claim type: net.windows.servicebus.action
Output claim value: Listen

When your service namespace and identity are set up, open the Part 1 solution and put your own namespace, service identity name and secret key into the file AzureConnectionDetails.xml in Solution Items, e.g.:

<azure namespace="sixeyed-ipasbr">
  <!-- ACS credentials for the listening service (Part1):-->
  <service identityName="serviceProvider"
           symmetricKey="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>
</azure>

Build the solution, and the T4 template will generate the Web.config for the service project with your Azure details in the transportClientEndpointBehavior:

<behavior name="SharedSecret">
  <transportClientEndpointBehavior credentialType="SharedSecret">
    <clientCredentials>
      <sharedSecret issuerName="serviceProvider"
                    issuerSecret="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>
    </clientCredentials>
  </transportClientEndpointBehavior>
</behavior>

and your service namespace in the Azure endpoint:

<!-- Azure Service Bus endpoints -->
<endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"
          binding="netTcpRelayBinding"
          contract="Sixeyed.Ipasbr.Services.IFormatService"
          behaviorConfiguration="SharedSecret">
</endpoint>

The sample project is hosted in IIS, but it won't register with Azure until the service is activated. Typically you'd install AppFabric 1.1 for Windows Server and set the service to auto-start in IIS, but for dev just navigate to the local REST URL, which will activate the service and register it with Azure.

Testing the service locally

As well as an Azure endpoint, the service has a WebHttpBinding for local REST access:

<!-- local REST endpoint for internal use -->
<endpoint address="rest"
          binding="webHttpBinding"
          behaviorConfiguration="RESTBehavior"
          contract="Sixeyed.Ipasbr.Services.IFormatService" />

Build the service, then navigate to http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 - and you should see the reversed string response. If your network allows it, you'll get the expected response as before, but in the background your service will also be listening in the cloud. Good stuff! Who needs network security? On to the next post for consuming the service with the netTcpRelayBinding.

Setting up network access to Azure

But, if you get an error, it's because your network is secured and it's doing something to stop the relay working. The Service Bus relay bindings try to use direct TCP connections to Azure, so if ports 9350-9354 are available *outbound*, then the relay will run through them. If not, the binding steps down to standard HTTP, and issues a CONNECT across port 443 or 80 to set up a tunnel for the relay.
If your network security guys are doing their job, the first option will be blocked by the firewall, and the second option will be blocked by the proxy, so you'll get this error:

System.ServiceModel.CommunicationException: Unable to reach sixeyed-ipasbr.servicebus.windows.net via TCP (9351, 9352) or HTTP (80, 443)

- and that will probably be the start of lots of discussions. Network guys don't really like giving servers special permissions for the web proxy, and they really don't like opening ports, so they'll need to be convinced about this. The resolution in our case was to put up a dedicated box in a DMZ, tinker with the firewall and the proxy until we got a relay connection working, then run some traffic which the network guys monitored to do a security assessment afterwards. Along the way we hit a few more issues, diagnosed mainly with Fiddler and Wireshark:

System.Net.ProtocolViolationException: Chunked encoding upload is not supported on the HTTP/1.0 protocol

- this means the TCP ports are not available, so Azure tries to relay messaging traffic across HTTP. The service can access the endpoint, but the proxy is downgrading traffic to HTTP 1.0, which does not support tunneling, so Azure can't make its connection. We were using the Squid proxy, version 2.6. The Squid project is incrementally adding HTTP 1.1 support, but there's no definitive list of what's supported in what version (here are some hints).

System.ServiceModel.Security.SecurityNegotiationException: The X.509 certificate CN=servicebus.windows.net chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. The revocation function was unable to check revocation because the revocation server was offline.

- by this point we'd given up on the HTTP proxy and opened the TCP ports. We got this error when the relay binding does its authentication hop to ACS. The messaging traffic is TCP, but the control traffic still goes over HTTP, and as part of the ACS authentication the process checks with a revocation server to see if Microsoft's ACS cert is still valid, so the proxy still needs some clearance. The service account (the IIS app pool identity) needs access to:

www.public-trust.com
mscrl.microsoft.com

We still got this error periodically with different accounts running the app pool. We fixed that by ensuring the machine-wide proxy settings are set up, so every account uses the correct proxy:

netsh winhttp set proxy proxy-server="http://proxy.x.y.z"

- and you might need to run this to clear out your credential cache:

certutil -urlcache * delete

If your network guys end up grudgingly opening ports, they can restrict connections to the IP address range for your chosen Azure datacentre, which might make them happier - see Windows Azure Datacenter IP Ranges. After all that you've hopefully got an on-premise service listening in the cloud, which you can consume from pretty much any technology.
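Before involving the network team at all, a quick way to prove the local endpoint (and trigger activation) is a tiny console client - my own sketch, not part of the sample suite, using the REST URL from earlier:

// smoke test: reverses "abc123" via the local WebHttpBinding endpoint
using System;
using System.Net;

class LocalSmokeTest
{
    static void Main()
    {
        using (WebClient client = new WebClient())
        {
            string response = client.DownloadString(
                "http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123");
            Console.WriteLine(response); // expect the reversed string, e.g. 321cba
        }
    }
}

Hitting the .svc like this also activates the service in IIS, which is what makes it register its listener with Azure.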

    Read the article

  • Bound Command not firing on another viewModel? What Am I doing wrong?

    - by devnet247
Hi, I cannot seem to bind a command to a button. I have a treeview on the left showing Country, City, etc., and a TabControl on the right. This uses 4 view models: RootViewModel, ContinentViewModel, CountryViewModel and CityViewModel. What I am building is based on http://www.codeproject.com/KB/WPF/TreeViewWithViewModel.aspx. Now on one of the tabs I have a toolbar with a button "TestButton" that I have mapped in XAML. This does not fire! The reason it's not firing is because I'm binding the RootViewModel as the DataContext, but the command bound in XAML is in the CityViewModel. How do I pass the DataContext from one view to the other? Or how do I make the button fire? I need the command to be in the CityViewModel. Any suggestions on how I bind it?

View "WorldExplorerView" where I bind the main DataContext:

public partial class WorldExplorerView
{
    public WorldExplorerView()
    {
        InitializeComponent();
        var continents = Database.GetContinents();
        var rootViewModel = new RootViewModel(continents);
        DataContext = rootViewModel;
    }
}

CityViewModel:

public class CityViewModel : TreeViewItemViewModel
{
    private City _city;
    private RelayCommand _testCommand;

    public CityViewModel(City city, CountryViewModel countryViewModel)
        : base(countryViewModel, false)
    {
        _city = city;
    }

    // Properties etc......

    public ICommand TestCommand
    {
        get
        {
            if (_testCommand == null)
            {
                _testCommand = new RelayCommand(param => GetTestCommand(), param => CanCallTestCommand);
            }
            return _testCommand;
        }
    }

    protected bool CanCallTestCommand
    {
        get { return true; }
    }

    private static void GetTestCommand()
    {
        MessageBox.Show("It works");
    }
}

XAML:

<DockPanel>
  <DockPanel LastChildFill="True">
    <Label DockPanel.Dock="top" Content="Title " HorizontalAlignment="Center"></Label>
    <StatusBar DockPanel.Dock="Bottom">
      <StatusBarItem Content="Status Bar" ></StatusBarItem>
    </StatusBar>
    <Grid DockPanel.Dock="Top">
      <Grid.ColumnDefinitions>
        <ColumnDefinition/>
        <ColumnDefinition Width="Auto"/>
        <ColumnDefinition Width="2*"/>
      </Grid.ColumnDefinitions>
      <TreeView Name="tree" ItemsSource="{Binding Continents}">
        <TreeView.ItemContainerStyle>
          <Style TargetType="{x:Type TreeViewItem}">
            <Setter Property="IsExpanded" Value="{Binding IsExpanded,Mode=TwoWay}"/>
            <Setter Property="IsSelected" Value="{Binding IsSelected,Mode=TwoWay}"/>
            <Setter Property="FontWeight" Value="Normal"/>
            <Style.Triggers>
              <Trigger Property="IsSelected" Value="True">
                <Setter Property="FontWeight" Value="Bold"></Setter>
              </Trigger>
            </Style.Triggers>
          </Style>
        </TreeView.ItemContainerStyle>
        <TreeView.Resources>
          <HierarchicalDataTemplate DataType="{x:Type ViewModels:ContinentViewModel}" ItemsSource="{Binding Children}">
            <StackPanel Orientation="Horizontal">
              <Image Width="16" Height="16" Margin="3,0" Source="Images\Continent.png"/>
              <TextBlock Text="{Binding ContinentName}"/>
            </StackPanel>
          </HierarchicalDataTemplate>
          <HierarchicalDataTemplate DataType="{x:Type ViewModels:CountryViewModel}" ItemsSource="{Binding Children}">
            <StackPanel Orientation="Horizontal">
              <Image Width="16" Height="16" Margin="3,0" Source="Images\Country.png"/>
              <TextBlock Text="{Binding CountryName}"/>
            </StackPanel>
          </HierarchicalDataTemplate>
          <DataTemplate DataType="{x:Type ViewModels:CityViewModel}" >
            <StackPanel Orientation="Horizontal">
              <Image Width="16" Height="16" Margin="3,0" Source="Images\City.png"/>
              <TextBlock Text="{Binding CityName}"/>
            </StackPanel>
          </DataTemplate>
        </TreeView.Resources>
      </TreeView>
      <GridSplitter Grid.Row="0" Grid.Column="1" Background="LightGray" Width="5" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"/>
      <Grid Grid.Column="2" Margin="5" >
        <TabControl>
          <TabItem Header="Demo">
            <DockPanel LastChildFill="True">
              <ToolBar DockPanel.Dock="Top">
                <!-- DOES NOT WORK-->
                <Button Name="btnTest" Command="{Binding TestCommand}" Content="Press me see if works"></Button>
              </ToolBar>
              <TextBox></TextBox>
            </DockPanel>
          </TabItem>
          <TabItem Header="Details" DataContext="{Binding Path=SelectedItem.City, ElementName=tree, Mode=OneWay}">
            <StackPanel >
              <TextBlock VerticalAlignment="Center" FontSize="12" Text="{Binding CityName}"/>
              <TextBlock VerticalAlignment="Center" FontSize="12" Text="{Binding Area}"/>
              <TextBlock VerticalAlignment="Center" FontSize="12" Text="{Binding Population}"/>
              <TextBlock VerticalAlignment="Center" FontSize="12" Text="{Binding CityDetailsInfo.ClubsCount}"/>
              <TextBlock VerticalAlignment="Center" FontSize="12" Text="{Binding CityDetailsInfo.PubsCount}"/>
            </StackPanel>
          </TabItem>
        </TabControl>
      </Grid>
    </Grid>
  </DockPanel>
</DockPanel>

    Read the article

  • Merge functionality of two xsl files into a single file (not a xsl import or include issue)

    - by anuamb
I have two XSL files; both of them perform different tasks on the source XML, one after another. Now I need a single XSL file which performs both these tasks (it's not an issue of xsl:import or xsl:include). Say my source XML is:

<LIST_R7P1_1>
  <R7P1_1>
    <LVL2>
      <ORIG_EXP_PRE_CONV>#+#</ORIG_EXP_PRE_CONV>
      <EXP_AFT_CONV>abc</EXP_AFT_CONV>
      <GUARANTEE_AMOUNT>#+#</GUARANTEE_AMOUNT>
      <CREDIT_DER/>
    </LVL2>
    <LVL21>
      <AZ>#+#</AZ>
      <BZ>bz1</BZ>
      <AZ>az2</AZ>
      <BZ>#+#</BZ>
      <CZ/>
    </LVL21>
  </R7P1_1>
</LIST_R7P1_1>

My first XSL (tr1.xsl) removes all nodes whose value is blank or null:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="@*|node()">
    <xsl:if test=". != '' or ./@* != ''">
      <xsl:copy>
        <xsl:apply-templates select="@*|node()"/>
      </xsl:copy>
    </xsl:if>
  </xsl:template>
</xsl:stylesheet>

The output here is:

<LIST_R7P1_1>
  <R7P1_1>
    <LVL2>
      <ORIG_EXP_PRE_CONV>#+#</ORIG_EXP_PRE_CONV>
      <EXP_AFT_CONV>abc</EXP_AFT_CONV>
      <GUARANTEE_AMOUNT>#+#</GUARANTEE_AMOUNT>
    </LVL2>
    <LVL21>
      <AZ>#+#</AZ>
      <BZ>bz1</BZ>
      <AZ>az2</AZ>
      <BZ>#+#</BZ>
    </LVL21>
  </R7P1_1>
</LIST_R7P1_1>

And my second XSL (tr2.xsl) does a global replace (of #+# with the blank text '') on the output of the first:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template name="globalReplace">
    <xsl:param name="outputString"/>
    <xsl:param name="target"/>
    <xsl:param name="replacement"/>
    <xsl:choose>
      <xsl:when test="contains($outputString,$target)">
        <xsl:value-of select="concat(substring-before($outputString,$target), $replacement)"/>
        <xsl:call-template name="globalReplace">
          <xsl:with-param name="outputString" select="substring-after($outputString,$target)"/>
          <xsl:with-param name="target" select="$target"/>
          <xsl:with-param name="replacement" select="$replacement"/>
        </xsl:call-template>
      </xsl:when>
      <xsl:otherwise>
        <xsl:value-of select="$outputString"/>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>

  <xsl:template match="text()">
    <xsl:call-template name="globalReplace">
      <xsl:with-param name="outputString" select="."/>
      <xsl:with-param name="target" select="'#+#'"/>
      <xsl:with-param name="replacement" select="''"/>
    </xsl:call-template>
  </xsl:template>

  <xsl:template match="@*|*">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>

So my final output is:

<LIST_R7P1_1>
  <R7P1_1>
    <LVL2>
      <ORIG_EXP_PRE_CONV/>
      <EXP_AFT_CONV>abc</EXP_AFT_CONV>
      <GUARANTEE_AMOUNT/>
    </LVL2>
    <LVL21>
      <AZ/>
      <BZ>bz1</BZ>
      <AZ>az2</AZ>
      <BZ/>
    </LVL21>
  </R7P1_1>
</LIST_R7P1_1>

My concern is that instead of these two XSL files (tr1.xsl and tr2.xsl) I only need a single one (tr.xsl) which gives me the final output. Say when I combine these two as:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="@*|node()">
    <xsl:if test=". != '' or ./@* != ''">
      <xsl:copy>
        <xsl:apply-templates select="@*|node()"/>
      </xsl:copy>
    </xsl:if>
  </xsl:template>

  <xsl:template name="globalReplace">
    <xsl:param name="outputString"/>
    <xsl:param name="target"/>
    <xsl:param name="replacement"/>
    <xsl:choose>
      <xsl:when test="contains($outputString,$target)">
        <xsl:value-of select="concat(substring-before($outputString,$target), $replacement)"/>
        <xsl:call-template name="globalReplace">
          <xsl:with-param name="outputString" select="substring-after($outputString,$target)"/>
          <xsl:with-param name="target" select="$target"/>
          <xsl:with-param name="replacement" select="$replacement"/>
        </xsl:call-template>
      </xsl:when>
      <xsl:otherwise>
        <xsl:value-of select="$outputString"/>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>

  <xsl:template match="text()">
    <xsl:call-template name="globalReplace">
      <xsl:with-param name="outputString" select="."/>
      <xsl:with-param name="target" select="'#+#'"/>
      <xsl:with-param name="replacement" select="''"/>
    </xsl:call-template>
  </xsl:template>

  <xsl:template match="@*|*">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>

it outputs:

<LIST_R7P1_1>
  <R7P1_1>
    <LVL2>
      <ORIG_EXP_PRE_CONV/>
      <EXP_AFT_CONV>abc</EXP_AFT_CONV>
      <GUARANTEE_AMOUNT/>
      <CREDIT_DER/>
    </LVL2>
    <LVL21>
      <AZ/>
      <BZ>bz1</BZ>
      <AZ>az2</AZ>
      <BZ/>
      <CZ/>
    </LVL21>
  </R7P1_1>
</LIST_R7P1_1>

Only the replacement is performed, but not the null/blank node removal.
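A reading of why the merge misbehaves, offered as an educated guess from XSLT's conflict-resolution rules rather than a verified diagnosis: for element nodes, the conditional template (match="@*|node()") and the trailing identity template (match="@*|*") match with the same default priority, and a processor that recovers from the conflict picks whichever comes last in the stylesheet - the plain identity copy - so the emptiness test never runs. Dropping the redundant identity template (the conditional one already does the copying) should give the wanted single file. A sketch, untested:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- tr1 behaviour: copy elements/attributes only when non-empty -->
  <xsl:template match="@*|node()">
    <xsl:if test=". != '' or ./@* != ''">
      <xsl:copy>
        <xsl:apply-templates select="@*|node()"/>
      </xsl:copy>
    </xsl:if>
  </xsl:template>

  <!-- tr2 behaviour: rewrite the text of the nodes that survive;
       explicit priority so it always beats the generic rule for text nodes -->
  <xsl:template match="text()" priority="1">
    <xsl:call-template name="globalReplace">
      <xsl:with-param name="outputString" select="."/>
      <xsl:with-param name="target" select="'#+#'"/>
      <xsl:with-param name="replacement" select="''"/>
    </xsl:call-template>
  </xsl:template>

  <!-- globalReplace unchanged from tr2.xsl -->
  <xsl:template name="globalReplace">
    <xsl:param name="outputString"/>
    <xsl:param name="target"/>
    <xsl:param name="replacement"/>
    <xsl:choose>
      <xsl:when test="contains($outputString,$target)">
        <xsl:value-of select="concat(substring-before($outputString,$target), $replacement)"/>
        <xsl:call-template name="globalReplace">
          <xsl:with-param name="outputString" select="substring-after($outputString,$target)"/>
          <xsl:with-param name="target" select="$target"/>
          <xsl:with-param name="replacement" select="$replacement"/>
        </xsl:call-template>
      </xsl:when>
      <xsl:otherwise>
        <xsl:value-of select="$outputString"/>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>

</xsl:stylesheet>

Note the emptiness test sees the original value of each element ("#+#" counts as non-empty), so <AZ>#+#</AZ> survives as an empty <AZ/> after its text is rewritten - which matches the desired final output above.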

    Read the article

  • Applications: How to create a custom dialog box for Windows Mobile 6 (native)

    - by TechTwaddle
Ashraf, on the MSDN forum, asks, “Is there a way to make a default choice for the messagebox that happens after a period of time if the user doesn't choose (Clicked ) Yes or No buttons.” To elaborate, the requirement is to show a message box to the user with certain options to select, and if the user does not respond within a predefined time limit (say 8 seconds), the message box must dismiss itself and select a default option. Now such functionality is not available with the MessageBox() API; you will have to write your own custom dialog box. Surely, creating a dialog box is quite a simple task using the DialogBox() API, and we have been creating full-screen dialog boxes all the while. So how will this custom message box be any different? It's not much different from a regular dialog box except for a few changes in its properties. First, it has a title bar but no buttons on the title bar (no 'x' or 'ok' button on the title bar); it doesn't occupy the full screen; and it contains the controls that you put into it, thus justifying the title 'custom'. So in this post we create a custom dialog box with two buttons, 'Black' and 'White'. The user is given 8 seconds to select one of those colours; if the user doesn't make a selection in 8 seconds, the default option 'Black' is selected. Before going into the implementation, here is a video of how the dialog box works:

Custom dialog box

To start off, add a new dialog resource into your application, size it appropriately and add whatever controls you need to the dialog. In my case, I added two static text labels and two buttons, as below;

Now we need to write up the window procedure for this dialog. Here is the complete function:

BOOL CALLBACK CustomDialogProc(HWND hDlg, UINT uMessage, WPARAM wParam, LPARAM lParam)
{
    int wmID, wmEvent;
    PAINTSTRUCT ps;
    HDC hdc;
    static int timeCount = 0;

    switch(uMessage)
    {
        case WM_INITDIALOG:
            {
                SHINITDLGINFO shidi;
                memset(&shidi, 0, sizeof(shidi));
                shidi.dwMask = SHIDIM_FLAGS;
                //shidi.dwFlags = SHIDIF_DONEBUTTON | SHIDIF_SIPDOWN | SHIDIF_SIZEDLGFULLSCREEN | SHIDIF_EMPTYMENU;
                shidi.dwFlags = SHIDIF_SIPDOWN | SHIDIF_EMPTYMENU;
                shidi.hDlg = hDlg;
                SHInitDialog(&shidi);
                SHDoneButton(hDlg, SHDB_HIDE);
                timeCount = 0;
                SetWindowText(GetDlgItem(hDlg, IDC_STATIC_TIME_REMAINING), L"Time remaining: 8 second(s)");
                SetTimer(hDlg, MY_TIMER, 1000, NULL);
            }
            return TRUE;

        case WM_COMMAND:
            {
                wmID = LOWORD(wParam);
                wmEvent = HIWORD(wParam);
                switch(wmID)
                {
                    case IDC_BUTTON_BLACK:
                        KillTimer(hDlg, MY_TIMER);
                        EndDialog(hDlg, IDC_BUTTON_BLACK);
                        break;
                    case IDC_BUTTON_WHITE:
                        KillTimer(hDlg, MY_TIMER);
                        EndDialog(hDlg, IDC_BUTTON_WHITE);
                        break;
                }
            }
            break;

        case WM_TIMER:
            {
                if (wParam == MY_TIMER)
                {
                    WCHAR wszText[128];
                    memset(&wszText, 0, sizeof(wszText));
                    timeCount++;
                    // 8 seconds are over, dismiss the dialog, select the default value
                    if (timeCount >= 8)
                    {
                        KillTimer(hDlg, MY_TIMER);
                        EndDialog(hDlg, IDC_BUTTON_BLACK_DEF);
                    }
                    wsprintf(wszText, L"Time remaining: %d second(s)", 8-timeCount);
                    SetWindowText(GetDlgItem(hDlg, IDC_STATIC_TIME_REMAINING), wszText);
                    UpdateWindow(GetDlgItem(hDlg, IDC_STATIC_TIME_REMAINING));
                }
            }
            break;

        case WM_PAINT:
            {
                hdc = BeginPaint(hDlg, &ps);
                EndPaint(hDlg, &ps);
            }
            break;
    }
    return FALSE;
}

The MSDN documentation mentions that you need to specify the flag WS_NONAVDONEBUTTON, but I got an error saying that the value could not be found, so we can ignore this for now. Next up, while calling SHInitDialog() for your custom dialog, make sure that you don't specify SHIDIF_DONEBUTTON in the dwFlags member of the SHINITDLGINFO structure; this member makes the 'ok' button appear on the dialog title bar. Finally, we need to call SHDoneButton() with the SHDB_HIDE flag to, well, hide the Done button. The 'Done' button is the same as the 'ok' button, so this step might seem redundant, and the dialog works fine without calling SHDoneButton() too, but it's better to stick with the documentation (;

So you can see that we have followed all these steps above, under WM_INITDIALOG. We also set up a few things, like a variable to keep track of the time, and set off a one-second timer. Every time the timer fires, we receive a WM_TIMER message. We then update the static label displaying the amount of time left to the user. If 8 seconds go by without the user selecting any option, we kill the timer and end the dialog with IDC_BUTTON_BLACK_DEF. This is just a #define'd integer value; make sure it's unique. You'll see why this is important. If the user makes a selection, either Black or White, we kill the timer and end the dialog with the corresponding selection the user made, that is, either IDC_BUTTON_BLACK or IDC_BUTTON_WHITE.

Ok, so now our custom dialog is ready to be used. I invoke the custom dialog from a menu entry in the main window as below:

case IDM_MENU_CUSTOMDLG:
    {
        int ret = DialogBox(g_hInst, MAKEINTRESOURCE(IDD_CUSTOM_DIALOG), hWnd, CustomDialogProc);
        switch(ret)
        {
            case IDC_BUTTON_BLACK_DEF:
                SetWindowText(g_hStaticSelection, L"You Selected: Black (default)");
                break;
            case IDC_BUTTON_BLACK:
                SetWindowText(g_hStaticSelection, L"You Selected: Black");
                break;
            case IDC_BUTTON_WHITE:
                SetWindowText(g_hStaticSelection, L"You Selected: White");
                break;
        }
        UpdateWindow(g_hStaticSelection);
    }
    break;

So you see why ending the dialog with the corresponding value was important: that's what the DialogBox() API returns with. And in the main window I update a static text label to show which option was selected. I cranked this out in about an hour, and unfortunately don't have time for a managed C# version. That will have to be another post, if I manage to get it working that is (;
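In the meantime, here's one way the managed version might look - my own untested sketch, not the promised follow-up post, with all names and layout invented:

using System;
using System.Windows.Forms;

// a Form that closes itself with a default choice when its timer runs out
public class TimedColourDialog : Form
{
    public const int ChoiceBlackDefault = 0;  // plays the role of IDC_BUTTON_BLACK_DEF
    public const int ChoiceBlack = 1;
    public const int ChoiceWhite = 2;

    public int Choice = ChoiceBlackDefault;

    private readonly Timer timer = new Timer();
    private readonly Label countdown = new Label();
    private int secondsLeft = 8;

    public TimedColourDialog()
    {
        Text = "Pick a colour";
        countdown.Dock = DockStyle.Top;
        countdown.Text = "Time remaining: 8 second(s)";

        Button black = new Button();
        black.Text = "Black";
        black.Dock = DockStyle.Bottom;
        black.Click += delegate { Finish(ChoiceBlack); };

        Button white = new Button();
        white.Text = "White";
        white.Dock = DockStyle.Bottom;
        white.Click += delegate { Finish(ChoiceWhite); };

        Controls.Add(countdown);
        Controls.Add(black);
        Controls.Add(white);

        timer.Interval = 1000;
        timer.Tick += delegate
        {
            secondsLeft--;
            countdown.Text = "Time remaining: " + secondsLeft + " second(s)";
            if (secondsLeft <= 0)
                Finish(ChoiceBlackDefault);  // no response: fall back to Black
        };
        timer.Enabled = true;
    }

    private void Finish(int choice)
    {
        timer.Enabled = false;
        Choice = choice;
        DialogResult = DialogResult.OK;  // dismisses the modal dialog
    }
}

Usage would be along the lines of:

TimedColourDialog dlg = new TimedColourDialog();
dlg.ShowDialog();
// dlg.Choice now holds the selection, or the 8-second default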

    Read the article

  • Flash video playing on top of everything else in IE7

    - by Brett
    Hi everyone, I've been spending hours now reading up on IE7's issue with rendering Flash content on top of other elements, particularly navigation menus (this is often a problem with dropdown menus and Flash ad banners, for example). I've tried a few of the suggested solutions but none have worked for me so far. I'll do my best to explain the circumstances, and would appreciate any advice in the matter! Update At Mercator's request, I am providing a large code-sample to assist in any advice you might have. Consider the HTML below: <body> <div id="page-wrap"> <div id="content-wrap"> <div id="main"> <h1>Page Title</h1> <p>Paragraph text before video.</p> <div class="video-container"> <script type="text/javascript"> AC_FL_RunContent('id','player','name','player','width','480','height','294','src','player','allowscriptaccess','always','allowfullscreen','true','flashvars','file=mp4/VIDEO_FILE.mp4','movie','player' ); //end AC code </script> <noscript> <object id="player" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" name="player" width="480" height="294"> <param name="wmode" value="transparent" /> <param name="movie" value="player.swf" /> <param name="allowfullscreen" value="true" /> <param name="allowscriptaccess" value="always" /> <param name="flashvars" value="file=mp4/VIDEO_FILE.mp4" /> <embed wmode="transparent" type="application/x-shockwave-flash" id="player2" name="player2" src="player.swf" width="480" height="294" allowscriptaccess="always" allowfullscreen="true" flashvars="file=mp4/VIDEO_FILE.mp4" ></embed> </object> </noscript> </div> <p>Paragraph after video.</p> </div><!-- end main --> <div id="subContent"> <p>Sub-content.</p> </div><!-- end subContent --> <div id="content-clear"></div> </div><!-- end content-wrap --> </div><!-- end page-wrap --> <div id="footpanel"> <ul id="mainpanel"> <li id="panel-link"><a href="#"><span class="icon"></span>Panel Link</a> <div class="subpanel"> <h3><span> &ndash; </span>Panel Link</h3> <ul> <li><p>Revealed content</p></li> </ul> </div> </li> </ul> </div> <!-- END footpanel --> </body> Below are the non-presentational CSS selectors that apply to the divs above: body { /*no positioning styles applied */ } #page-wrap { width: 100%; } #content-wrap { width: 960px; margin 0 auto; } #main { float: left; width: 573px; } .video-container { position: relative; width: 480px; z-index: 1; } #sub { float: left; width: 347px; } #content-clear { clear: both; } #foot-panel { position: fixed; width: 94%; bottom: 0; left: 0; z-index: 3000; } ul#main-panel { float: left; } The footpanel uses jQuery-powered flyout menus, if that provides any further context. These menus have z-indexes in the 300X range to appear above the footpanel. The Flash in question is JW player playing a flash video or mp4. Currently, the object and embed tags are inside a container div. My understanding of previous solutions was that the combination of the param changes and the positioning/z-index change on the container div should have resolved the issue. Alas, it is not so. The player resides on top of the footpanel. Other information that may or may not be helpful is that the page is XHTML 1.0 Transitional and that Dreamweaver reports 1 error in the HTML code: <embed> is not in the XHTML 1.0 specification. This fact does not prevent the video from being viewed in any browser tested, and the page still displays correctly in FF. Thanks in advance!
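One detail worth checking, offered as a guess rather than a confirmed fix: the <noscript> fallback sets wmode="transparent", but the AC_FL_RunContent(...) call - the code path every JavaScript-enabled browser actually takes - never passes wmode, so IE7 gets the default wmode=window and paints the plugin above all HTML regardless of z-index. Adding the parameter to the script call should make the generated markup match the fallback:

AC_FL_RunContent('id','player','name','player','width','480','height','294',
                 'src','player','wmode','transparent',
                 'allowscriptaccess','always','allowfullscreen','true',
                 'flashvars','file=mp4/VIDEO_FILE.mp4','movie','player');

(wmode="opaque" is usually cheaper to render than transparent if you don't need to see through the player.)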

    Read the article

  • jQuery Toggle with Cookie

    - by Cameron
I have the following toggle system, but I want it to remember what was open/closed using the jQuery cookie plugin. So for example if I open a toggle and then navigate away from the page, when I come back it should still be open. This is the code I have so far, but it's becoming rather confusing; some help would be much appreciated, thanks.

jQuery.cookie = function (name, value, options) {
    if (typeof value != 'undefined') {
        options = options || {};
        if (value === null) {
            value = '';
            options = $.extend({}, options);
            options.expires = -1;
        }
        var expires = '';
        if (options.expires && (typeof options.expires == 'number' || options.expires.toUTCString)) {
            var date;
            if (typeof options.expires == 'number') {
                date = new Date();
                date.setTime(date.getTime() + (options.expires * 24 * 60 * 60 * 1000));
            } else {
                date = options.expires;
            }
            expires = '; expires=' + date.toUTCString();
        }
        var path = options.path ? '; path=' + (options.path) : '';
        var domain = options.domain ? '; domain=' + (options.domain) : '';
        var secure = options.secure ? '; secure' : '';
        document.cookie = [name, '=', encodeURIComponent(value), expires, path, domain, secure].join('');
    } else {
        var cookieValue = null;
        if (document.cookie && document.cookie != '') {
            var cookies = document.cookie.split(';');
            for (var i = 0; i < cookies.length; i++) {
                var cookie = jQuery.trim(cookies[i]);
                if (cookie.substring(0, name.length + 1) == (name + '=')) {
                    cookieValue = decodeURIComponent(cookie.substring(name.length + 1));
                    break;
                }
            }
        }
        return cookieValue;
    }
};

// var showTop = $.cookie('showTop');
if ($.cookie('showTop') == 'collapsed') {
    $(".toggle_container").hide();
    $(".trigger").toggle(function () {
        $(this).addClass("active");
    }, function () {
        $(this).removeClass("active");
    });
    $(".trigger").click(function () {
        $(this).next(".toggle_container").slideToggle("slow");
    });
} else {
    $(".toggle_container").show();
    $(".trigger").toggle(function () {
        $(this).addClass("active");
    }, function () {
        $(this).removeClass("active");
    });
    $(".trigger").click(function () {
        $(this).next(".toggle_container").slideToggle("slow");
    });
};

$(".trigger").click(function () {
    if ($(".toggle_container").is(":hidden")) {
        $(this).next(".toggle_container").slideToggle("slow");
        $.cookie('showTop', 'expanded');
    } else {
        $(this).next(".toggle_container").slideToggle("slow");
        $.cookie('showTop', 'collapsed');
    }
    return false;
});

and this is a snippet of the HTML it works with:

<li>
  <label for="small"><input type="checkbox" id="small" /> Small</label>
  <a class="trigger" href="#">Toggle</a>
  <div class="toggle_container">
    <p class="funding"><strong>Funding</strong></p>
    <ul class="childs">
      <li class="child">
        <label for="fully-funded1"><input type="checkbox" id="fully-funded1" /> Fully Funded</label>
        <a class="trigger" href="#">Toggle</a>
        <div class="toggle_container">
          <p class="days"><strong>Days</strong></p>
          <ul class="days clearfix">
            <li><label for="1pre16">Pre 16</label> <input type="text" id="1pre16" /></li>
            <li><label for="2post16">Post 16</label> <input type="text" id="2post16" /></li>
            <li><label for="3teacher">Teacher</label> <input type="text" id="3teacher" /></li>
          </ul>
        </div>
      </li>
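For what it's worth, here is one way the page-level logic could be restructured so each trigger remembers its own state - an untested sketch that stores the open containers' positions in a single cookie ('openToggles' is an invented name) and assumes the cookie plugin above is loaded:

$(function () {
    var open = ($.cookie('openToggles') || '').split(',');

    // restore each container from the cookie, using the trigger's index as its key
    $('.trigger').each(function (i) {
        var $c = $(this).next('.toggle_container');
        if ($.inArray(String(i), open) > -1) {
            $c.show();
            $(this).addClass('active');
        } else {
            $c.hide();
        }
    });

    $('.trigger').click(function () {
        var key = String($('.trigger').index(this)),
            pos = $.inArray(key, open);

        $(this).toggleClass('active')
               .next('.toggle_container').slideToggle('slow');

        if (pos > -1) { open.splice(pos, 1); } else { open.push(key); }
        $.cookie('openToggles', open.join(','), { expires: 7, path: '/' });
        return false;
    });
});

This drops the duplicated toggle()/click() wiring entirely: one handler both animates a container and records its state.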

    Read the article

  • CheckBox ListView SelectedValues DependencyProperty Binding

    - by Ristogod
    I am writing a custom control that is a ListView that has a CheckBox on each item in the ListView to indicate that item is Selected. I was able to do so with the following XAML. <ListView x:Class="CheckedListViewSample.CheckBoxListView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" mc:Ignorable="d"> <ListView.Style> <Style TargetType="{x:Type ListView}"> <Setter Property="SelectionMode" Value="Multiple" /> <Style.Resources> <Style TargetType="ListViewItem"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="ListViewItem"> <Border BorderThickness="{TemplateBinding Border.BorderThickness}" Padding="{TemplateBinding Control.Padding}" BorderBrush="{TemplateBinding Border.BorderBrush}" Background="{TemplateBinding Panel.Background}" SnapsToDevicePixels="True"> <CheckBox IsChecked="{Binding IsSelected, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type ListViewItem}}}"> <CheckBox.Content> <ContentPresenter Content="{TemplateBinding ContentControl.Content}" ContentTemplate="{TemplateBinding ContentControl.ContentTemplate}" HorizontalAlignment="{TemplateBinding Control.HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding Control.VerticalContentAlignment}" SnapsToDevicePixels="{TemplateBinding UIElement.SnapsToDevicePixels}" /> </CheckBox.Content> </CheckBox> </Border> </ControlTemplate> </Setter.Value> </Setter> </Style> </Style.Resources> </Style> </ListView.Style> </ListView> I however am trying to attempt one more feature. The ListView has a SelectedItems DependencyProperty that returns a collection of the Items that are checked. However, I need to implement a SelectedValues DependencyProperty. I also am implementing a SelectedValuesPath DependencyProperty. By using the SelectedValuesPath, I indicate the path where the values are found for each selected item. So if my items have an ID property, I can specify using the SelectedValuesPath property "ID". The SelectedValues property would then return a collection of ID values. 
I have this working also using this code in the code-behind:

using System.Windows;
using System.Windows.Controls;
using System.ComponentModel;
using System.Collections;
using System.Collections.Generic;

namespace CheckedListViewSample
{
    /// <summary>
    /// Interaction logic for CheckBoxListView.xaml
    /// </summary>
    public partial class CheckBoxListView : ListView
    {
        public static DependencyProperty SelectedValuesPathProperty =
            DependencyProperty.Register("SelectedValuesPath", typeof(string), typeof(CheckBoxListView),
                new PropertyMetadata(string.Empty, null));

        public static DependencyProperty SelectedValuesProperty =
            DependencyProperty.Register("SelectedValues", typeof(IList), typeof(CheckBoxListView),
                new PropertyMetadata(new List<object>(), null));

        [Category("Appearance")]
        [Localizability(LocalizationCategory.NeverLocalize)]
        [Bindable(true)]
        public string SelectedValuesPath
        {
            get { return ((string)(base.GetValue(CheckBoxListView.SelectedValuesPathProperty))); }
            set { base.SetValue(CheckBoxListView.SelectedValuesPathProperty, value); }
        }

        [Bindable(true)]
        [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
        [Category("Appearance")]
        public IList SelectedValues
        {
            get { return ((IList)(base.GetValue(CheckBoxListView.SelectedValuesProperty))); }
            set { base.SetValue(CheckBoxListView.SelectedValuesProperty, value); }
        }

        public CheckBoxListView()
            : base()
        {
            InitializeComponent();
            base.SelectionChanged += new SelectionChangedEventHandler(CheckBoxListView_SelectionChanged);
        }

        private void CheckBoxListView_SelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            List<object> values = new List<object>();
            foreach (var item in SelectedItems)
            {
                if (string.IsNullOrWhiteSpace(SelectedValuesPath))
                {
                    values.Add(item);
                }
                else
                {
                    try
                    {
                        values.Add(item.GetType().GetProperty(SelectedValuesPath).GetValue(item, null));
                    }
                    catch { }
                }
            }
            base.SetValue(CheckBoxListView.SelectedValuesProperty, values);
            e.Handled = true;
        }
    }
}

My problem is that my binding only works one way right now. I'm having trouble trying to figure out how to implement my SelectedValues DependencyProperty so that I can bind a collection of values to it and, when the control is loaded, have the CheckBoxes checked for the items whose values correspond to the SelectedValues. I've considered using the PropertyChangedCallback, but can't quite figure out how I could write that to achieve my goal. I'm also unsure of how I find the correct ListViewItem to set it as Selected. And lastly, if I can find the ListViewItem and set it to be Selected, won't that fire my SelectionChanged event each time I set a ListViewItem to be Selected?
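For the missing half, a PropertyChangedCallback along the lines below might do it - an untested sketch, not a definitive implementation. Two points it relies on: you never need to find the ListViewItem yourself, because adding the data item to SelectedItems selects it and the CheckBox picks that up through its IsSelected template binding; and yes, each SelectedItems.Add fires SelectionChanged, so a simple re-entrancy flag keeps the two directions from fighting:

public static DependencyProperty SelectedValuesProperty =
    DependencyProperty.Register("SelectedValues", typeof(IList), typeof(CheckBoxListView),
        new PropertyMetadata(new List<object>(), OnSelectedValuesChanged));

private bool _syncing; // true while SelectedValues is being pushed into SelectedItems

private static void OnSelectedValuesChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    CheckBoxListView listView = (CheckBoxListView)d;
    IList values = e.NewValue as IList;
    if (values == null || listView._syncing) return;

    listView._syncing = true;
    try
    {
        listView.SelectedItems.Clear();
        foreach (object item in listView.Items)
        {
            object value = string.IsNullOrWhiteSpace(listView.SelectedValuesPath)
                ? item
                : item.GetType().GetProperty(listView.SelectedValuesPath).GetValue(item, null);
            if (values.Contains(value))
                listView.SelectedItems.Add(item); // selects the row, which checks the box
        }
    }
    finally { listView._syncing = false; }
}

private void CheckBoxListView_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    if (_syncing) return; // ignore the churn caused by the callback above
    // ... existing handler body ...
}

For two-way binding to work, the property wrapper has to read and write SelectedValuesProperty (as in the corrected code above, not SelectedValuesPathProperty), and the binding on the consuming side needs Mode=TwoWay.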

    Read the article

  • The Business of Winning Innovation: An Exclusive Blog Series

    - by Kerrie Foy
    "The Business of Winning Innovation” is a series of articles authored by Oracle Agile PLM experts on what it takes to make innovation a successful and lucrative competitive advantage. Our customers have proven Agile PLM applications to be enormously flexible and comprehensive, so we’ve launched this article series to showcase some of the most fascinating, value-packed use cases. In this article by Keith Colonna, we kick-off the series by taking a look at the science side of innovation within the Consumer Products industry and how PLM can help companies innovate faster, cheaper, smarter. This article will review how innovation has become the lifeline for growth within consumer products companies and how certain companies are “winning” by creating a competitive advantage for themselves by taking a more enterprise-wide,systematic approach to “innovation”.   Managing the Science of Innovation within the Consumer Products Industry By: Keith Colonna, Value Chain Solution Manager, Oracle The consumer products (CP) industry is very mature and competitive. Most companies within this industry have saturated North America (NA) with their products thus maximizing their NA growth potential. Future growth is expected to come from either expansion outside of North America and/or by way of new ideas and products. Innovation plays an integral role in both of these strategies, whether you’re innovating business processes or the products themselves, and may cause several challenges for the typical CP company, Becoming more innovative is both an art and a science. Most CP companies are very good at the art of coming up with new innovative ideas, but many struggle with perfecting the science aspect that involves the best practice processes that help companies quickly turn ideas into sellable products and services. Symptoms and Causes of Business Pain Struggles associated with the science of innovation show up in a variety of ways, like: · Establishing and storing innovative product ideas and data · Funneling these ideas to the chosen few · Time to market cycle time and on-time launch rates · Success rates, or how often the best idea gets chosen · Imperfect decision making (i.e. the ability to kill projects that are not projected to be winners) · Achieving financial goals · Return on R&D investment · Communicating internally and externally as more outsource partners are added globally · Knowing your new product pipeline and project status These challenges (and others) can be consolidated into three root causes: A lack of visibility Poor data with limited access The inability to truly collaborate enterprise-wide throughout your extended value chain Choose the Right Remedy Product Lifecycle Management (PLM) solutions are uniquely designed to help companies solve these types challenges and their root causes. However, PLM solutions can vary widely in terms of configurability, functionality, time-to-value, etc. Business leaders should evaluate PLM solution in terms of their own business drivers and long-term vision to determine the right fit. Many of these solutions are point solutions that can help you cure only one or two business pains in the short term. Others have been designed to serve other industries with different needs. Then there are those solutions that demo well but are owned by companies that are either unable or unwilling to continuously improve their solution to stay abreast of the ever changing needs of the CP industry to grow through innovation. 
What the Right PLM Solution Should Do for You Based on more than twenty years working in the CP industry, I recommend investing in a single solution that can help you solve all of the issues associated with the science of innovation in a totally integrated fashion. By integration I mean the (1) integration of the all of the processes associated with the development, maintenance and delivery of your product data, and (2) the integration, or harmonization of this product data with other downstream sources, like ERP, product catalogues and the GS1 Global Data Synchronization Network (or GDSN, which is now a CP industry requirement for doing business with most retailers). The right PLM solution should help you: Increase Revenue. A best practice PLM solution should help a company grow its revenues by consolidating product development cycle-time and helping companies get new and improved products to market sooner. PLM should also eliminate many of the root causes for a product being returned, refused and/or reclaimed (which takes away from top-line growth) by creating an enterprise-wide, collaborative, workflow-driven environment. Reduce Costs. A strong PLM solution should help shave many unnecessary costs that companies typically take for granted. Rationalizing SKU’s, components (ingredients and packaging) and suppliers is a major opportunity at most companies that PLM should help address. A natural outcome of this rationalization is lower direct material spend and a reduction of inventory. Another cost cutting opportunity comes with PLM when it helps companies avoid certain costs associated with process inefficiencies that lead to scrap, rework, excess and obsolete inventory, poor end of life administration, higher cost of quality and regulatory and increased expediting. Mitigate Risk. Risks are the hardest to quantify but can be the most costly to a company. Food safety, recalls, line shutdowns, customer dissatisfaction and, worst of all, the potential tarnishing of your brands are a few of the debilitating risks that CP companies deal with on a daily basis. These risks are so uniquely severe that they require an enterprise PLM solution specifically designed for the CP industry that safeguards product information and processes while still allowing the art of innovation to flourish. Many CP companies have already created a winning advantage by leveraging a single, best practice PLM solution to establish an enterprise-wide, systematic approach to innovation. Oracle’s Answer for the Consumer Products Industry Oracle is dedicated to solving the growth and innovation challenges facing the CP industry. Oracle’s Agile Product Lifecycle Management for Process solution was originally developed with and for CP companies and is driven by a specialized development staff solely focused on maintaining and continuously improving the solution per the latest industry requirements. Agile PLM for Process helps CP companies handle all of the processes associated with managing the science of the innovation process, including: specification management, new product development/project and portfolio management, formulation optimization, supplier management, and quality and regulatory compliance to name a few. And as I mentioned earlier, integration is absolutely critical. Many Oracle CP customers, both with Oracle ERP systems and non-Oracle ERP systems, report benefits from Oracle’s Agile PLM for Process. 
In future articles we will explain in greater detail how both existing Oracle customers (like Gallo, Smucker's, Land O'Lakes and Starbucks) and new Oracle customers (like ConAgra, Tyson, McDonald's and Heinz) have realized the benefits of Agile PLM for Process and its integration with their ERP systems.

More to Come

Stay tuned for more articles in our blog series "The Business of Winning Innovation." While we will also feature articles focused on other industries, look forward to more on how Agile PLM for Process addresses innovation challenges facing the CP industry. Additional topics include: Innovation Data Management (IDM), New Product Development (NPD), Product Quality Management (PQM), Menu Management, Private Label Management, and more!

Watch this video for more info about Agile PLM for Process

    Read the article

  • SQL Server script commands to check if object exists and drop it

    - by deadlydog
Over the past couple of years I've been keeping track of common SQL Server script commands that I use so I don't have to constantly Google them.  Most of them are how to check if a SQL object exists before dropping it.  I thought others might find it useful to have them all in one place, so here you go:

--===============================
-- Create a new table and add keys and constraints
--===============================
IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TableName' AND TABLE_SCHEMA = 'dbo')
BEGIN
    CREATE TABLE [dbo].[TableName]
    (
        [ColumnName1] INT NOT NULL,     -- To have a field auto-increment add IDENTITY(1,1)
        [ColumnName2] INT NULL,
        [ColumnName3] VARCHAR(30) NOT NULL DEFAULT('')
    )

    -- Add the table's primary key
    ALTER TABLE [dbo].[TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY NONCLUSTERED
    (
        [ColumnName1],
        [ColumnName2]
    )

    -- Add a foreign key constraint
    ALTER TABLE [dbo].[TableName] WITH CHECK ADD CONSTRAINT [FK_Name] FOREIGN KEY
    (
        [ColumnName1],
        [ColumnName2]
    )
    REFERENCES [dbo].[Table2Name]
    (
        [OtherColumnName1],
        [OtherColumnName2]
    )

    -- Add indexes on columns that are often used for retrieval
    CREATE INDEX IN_ColumnNames ON [dbo].[TableName]
    (
        [ColumnName2],
        [ColumnName3]
    )

    -- Add a check constraint (the column referenced must exist in the table)
    ALTER TABLE [dbo].[TableName] WITH CHECK ADD CONSTRAINT [CH_Name] CHECK (([ColumnName1] >= 0.0000))
END

--===============================
-- Add a new column to an existing table
--===============================
IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'dbo'
    AND TABLE_NAME = 'TableName' AND COLUMN_NAME = 'ColumnName')
BEGIN
    ALTER TABLE [dbo].[TableName] ADD [ColumnName] INT NOT NULL DEFAULT(0)

    -- Add a description extended property to the column to specify what its purpose is.
    EXEC sys.sp_addextendedproperty @name=N'MS_Description',
        @value = N'Add column comments here, describing what this column is for.',
        @level0type=N'SCHEMA', @level0name=N'dbo', @level1type=N'TABLE',
        @level1name = N'TableName', @level2type=N'COLUMN',
        @level2name = N'ColumnName'
END

--===============================
-- Drop a table
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TableName' AND TABLE_SCHEMA = 'dbo')
BEGIN
    DROP TABLE [dbo].[TableName]
END

--===============================
-- Drop a view
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.VIEWS WHERE TABLE_NAME = 'ViewName' AND TABLE_SCHEMA = 'dbo')
BEGIN
    DROP VIEW [dbo].[ViewName]
END

--===============================
-- Drop a column
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'dbo'
    AND TABLE_NAME = 'TableName' AND COLUMN_NAME = 'ColumnName')
BEGIN
    -- If the column has an extended property, drop it first.
    IF EXISTS (SELECT * FROM sys.fn_listExtendedProperty(N'MS_Description', N'SCHEMA', N'dbo', N'TABLE',
        N'TableName', N'COLUMN', N'ColumnName'))
    BEGIN
        EXEC sys.sp_dropextendedproperty @name=N'MS_Description',
            @level0type=N'SCHEMA', @level0name=N'dbo', @level1type=N'TABLE',
            @level1name = N'TableName', @level2type=N'COLUMN',
            @level2name = N'ColumnName'
    END

    ALTER TABLE [dbo].[TableName] DROP COLUMN [ColumnName]
END

--===============================
-- Drop Primary key constraint
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'PRIMARY KEY' AND TABLE_SCHEMA = 'dbo'
    AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'PK_Name')
BEGIN
    ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [PK_Name]
END

--===============================
-- Drop Foreign key constraint
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'FOREIGN KEY' AND TABLE_SCHEMA = 'dbo'
    AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'FK_Name')
BEGIN
    ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [FK_Name]
END

--===============================
-- Drop Unique key constraint
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'UNIQUE' AND TABLE_SCHEMA = 'dbo'
    AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'UNI_Name')
BEGIN
    ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [UNI_Name]
END

--===============================
-- Drop Check constraint
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'CHECK' AND TABLE_SCHEMA = 'dbo'
    AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'CH_Name')
BEGIN
    ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [CH_Name]
END

--===============================
-- Drop a column's Default value constraint
--===============================
DECLARE @ConstraintName VARCHAR(100)
SET @ConstraintName = (SELECT TOP 1 s.name FROM sys.sysobjects s JOIN sys.syscolumns c ON s.parent_obj = c.id
    WHERE s.xtype = 'd' AND c.cdefault = s.id
    AND s.parent_obj = OBJECT_ID('TableName') AND c.name = 'ColumnName')

IF @ConstraintName IS NOT NULL
BEGIN
    EXEC ('ALTER TABLE [dbo].[TableName] DROP CONSTRAINT ' + @ConstraintName)
END

--===============================
-- Example of how to drop a dynamically named Unique constraint
--===============================
-- A different variable name is used here so this script can run in the same batch
-- as the Default constraint example above without a redeclaration error.
DECLARE @UniqueConstraintName VARCHAR(100)
SET @UniqueConstraintName = (SELECT TOP 1 CONSTRAINT_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
    WHERE CONSTRAINT_TYPE = 'UNIQUE' AND TABLE_SCHEMA = 'dbo'
    AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME LIKE 'FirstPartOfConstraintName%')

IF @UniqueConstraintName IS NOT NULL
BEGIN
    EXEC ('ALTER TABLE [dbo].[TableName] DROP CONSTRAINT ' + @UniqueConstraintName)
END

--===============================
-- Check for and drop a temp table
--===============================
IF OBJECT_ID('tempdb..#TableName') IS NOT NULL DROP TABLE #TableName

--===============================
-- Drop a stored procedure
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE = 'PROCEDURE' AND ROUTINE_SCHEMA = 'dbo'
    AND ROUTINE_NAME = 'StoredProcedureName')
BEGIN
    DROP PROCEDURE [dbo].[StoredProcedureName]
END

--===============================
-- Drop a UDF
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE = 'FUNCTION' AND ROUTINE_SCHEMA = 'dbo'
    AND ROUTINE_NAME = 'UDFName')
BEGIN
    DROP FUNCTION [dbo].[UDFName]
END

--===============================
-- Drop an Index
--===============================
-- Scope the check to the owning table, since index names are only unique per table.
IF EXISTS (SELECT * FROM sys.indexes WHERE name = 'IndexName' AND object_id = OBJECT_ID('dbo.TableName'))
BEGIN
    DROP INDEX [IndexName] ON [dbo].[TableName]
END

--===============================
-- Drop a Schema
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = 'SchemaName')
BEGIN
    EXEC('DROP SCHEMA SchemaName')
END
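One more note: if you're running SQL Server 2016 or later (an assumption; everything above works on older versions too), the built-in DROP ... IF EXISTS syntax can replace most of the existence checks above with a single statement. A quick sketch using the same placeholder names:

-- SQL Server 2016+ only: the existence check is built into the statement itself.
DROP TABLE IF EXISTS [dbo].[TableName]
DROP VIEW IF EXISTS [dbo].[ViewName]
DROP PROCEDURE IF EXISTS [dbo].[StoredProcedureName]
DROP FUNCTION IF EXISTS [dbo].[UDFName]
DROP INDEX IF EXISTS [IndexName] ON [dbo].[TableName]
ALTER TABLE [dbo].[TableName] DROP CONSTRAINT IF EXISTS [CH_Name]
ALTER TABLE [dbo].[TableName] DROP COLUMN IF EXISTS [ColumnName]

Because the check and the drop happen in one statement, there is also no window between the IF EXISTS test and the DROP for another session to slip through.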

    Read the article

  • Evaluating Solutions to Manage Product Compliance? Don't Wait Much Longer

    - by Kerrie Foy
    Depending on severity, product compliance issues can cause all sorts of problems, from runaway budgets to business closures. But effective policies and safeguards can create a strong foundation for innovation, productivity, market penetration and competitive advantage. If you've been putting off a systematic approach to product compliance, it is time to reconsider that decision, or indecision.

Why now? No matter the industry, companies face a litany of worldwide and regional regulations that require proof of product compliance and environmental friendliness for market access.  For example, Restriction of Hazardous Substances (RoHS) is a regulation that restricts the use of six dangerous materials used in the manufacture of electronic and electrical equipment.  RoHS was originally adopted by the European Union in 2003 for implementation in 2006, and it has evolved over time through various regional versions for North America, China, Japan, Korea, Norway and Turkey.  In addition, the RoHS directive allowed for material exemptions used in Medical Devices, but that exemption ends in 2014.  Additional regulations worth watching are the Battery Directive, Waste Electrical and Electronic Equipment (WEEE), and Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) directives.  Further evolving regulations are coming from governing bodies like the Food and Drug Administration (FDA) and the International Organization for Standardization (ISO).

Corporate sustainability initiatives are also gaining urgency and influencing product design. In a survey of 405 Global 500 corporations by the Carbon Disclosure Project, co-written with PwC (the CDP Global 500 Climate Change Report 2012, entitled Business Resilience in an Uncertain, Resource-Constrained World), 48% of the respondents indicated they saw potential to create new products and business services as a response to climate change. Just 21% reported a dedicated budget for the research. However, the report goes on to explain that those few companies are winning over new customers and driving additional profits by exploiting their abilities to adapt to environmental needs. The article cites Dell as an example: Dell has invested in research to develop new products designed to reduce its customers' emissions by more than 10 million metric tons of CO2e per year. This reduction in emissions should save Dell's customers over $1 billion per year as a result! Over time we expect to see many additional companies prove that eco-design provides marketplace benefits through differentiation and direct customer value.

How do you meet compliance requirements and also successfully invest in eco-friendly designs? No doubt companies struggle to answer this question. After all, the journey to get there may involve transforming business models, go-to-market strategies, supply networks, quality assurance policies and compliance processes per the rapidly evolving global and regional directives. There may be limited executive focus on the initiative, an inability to quantify noncompliance, or not enough resources to justify investment. To make things even more difficult to address, compliance responsibility can be a passionate topic within an organization, making the prospect of change on an enterprise scale problematic and time-consuming. Without a single source of truth for product data and without proper processes in place, ensuring product compliance burgeons into a crushing task that is cost-prohibitive and overwhelming to an organization. 
With all the overhead, certain markets or demographics become simply inaccessible. Therefore, the risk to consumer goodwill and satisfaction, revenue, business continuity, and market potential is too great not to solve the compliance challenge.

Companies are beginning to adapt and even thrive in today's highly regulated and transparent environment by implementing systematic approaches to product compliance that are not merely functional bandages but revenue-generating engines. Consider partnering with Oracle to help you address your compliance needs. Many of the world's most innovative leaders and pioneers are leveraging Oracle's Agile Product Lifecycle Management (PLM) portfolio of enterprise applications to manage the product value chain, centralize product data, automate processes, and launch more eco-friendly products to market faster.

In particular, the Agile Product Governance & Compliance (PG&C) solution provides out-of-the-box functionality to integrate actionable regulatory information into the enterprise product record from the ideation phase through the disposal/recycling phase. Agile PG&C makes it possible to efficiently manage compliance per corporate green initiatives as well as regional and global directives.

Options are critical, but so is ease of use. Anyone who's grappled with compliance policy knows legal interpretation plays a major role in determining how an organization responds to regulation. Agile PG&C gives you the freedom to configure product compliance per your needs, while maintaining rigorous control over the product record in an easy-to-use interface that facilitates adoption efforts. It allows you to assign regulations as specifications for a part or BOM roll-up. Each specification has a threshold value that alerts you to a non-compliance issue if the threshold value is exceeded. Assign as many regulations as specifications as you need to make sure a product can be sold in your target countries. Another option is to follow the example of one of our leading consumer electronics customers and define your own "catch-all" specification to ensure compliance in all markets. You can give your suppliers secure access to enter their component data or integrate a third party's data. With Agile PG&C you are able to design compliance into your products earlier, reducing cost and improving quality downstream, when the stakes are higher.

Agile PG&C is a comprehensive solution that makes product compliance more reliable and efficient. Throughout product lifecycles, use the solution to support full material disclosures, efficiently manage declarations with your suppliers, feed compliance data into a corrective action if a product must be changed, and swiftly satisfy audits by showing all due diligence tracked in one solution.

Given the compounding regulation and consumer focus on urgent environmental issues, now is the time to act. Implementing an enterprise, systematic approach to product compliance is a competitive investment. From the start, Agile Product Governance & Compliance enables companies to confidently design for compliance and sustainability, reduce the cost of compliance, minimize the risk of business interruption, deliver responsible products, and inspire new innovation.  Don't wait any longer! To find out more about Agile Product Governance & Compliance download the data sheet, contact your sales representative, or call Oracle at 1-800-633-0738. Many thanks to Shane Goodwin, Senior Manager, Oracle Agile PLM Product Management, for contributions to this article. 

    Read the article

  • PHP Form not working in IE and Chrome, but fine in FF

    - by RO
    <?php
if (isset($_POST['accountUser']) && isset($_POST['accountPassword'])) {
    include("dbase.php");
    include("settings.php");

    if ($_POST['accountType'] == "member") {
        $database = "chatusers";
    } else if ($_POST['accountType'] == "model") {
        $database = "chatmodels";
    } else if ($_POST['accountType'] == "studioop") {
        $database = "chatoperators";
    }

    $userExists = false;
    $result = mysql_query("SELECT id,user,password,status FROM $database WHERE status!='pending' AND status!='rejected' ");
    while ($row = mysql_fetch_array($result)) {
        $tempUser = $row["user"];
        $tempPass = $row["password"];
        $tempId = $row["id"];
        if ($_POST['accountUser'] == $tempUser && md5($_POST['accountPassword']) == $tempPass) {
            if ($row["status"] == "blocked") {
                $userExists = true;
                $errorMsg = "Account is blocked, please contact the administrator for more details";
            } else {
                $userExists = true;
                $currentTime = time();
                mysql_query("UPDATE $database SET lastLogIn='$currentTime' WHERE id = '$tempId' LIMIT 1");
                setcookie("usertype", $database, time() + 3600);
                setcookie("id", $tempId, time() + 3600);
                header("Location: cp/$database/");
            }
        }
    }
    if (!$userExists) {
        $errorMsg = "Wrong Username or password";
    }
} else if (isset($_GET['from']) && $_GET['from'] == "recoverpass") {
    $errorMsg = "Your new password has been sent to your mail";
} else {
    $errorMsg = "Please complete username and password fields";
}
?>
<?php include("_main.header.php"); ?>
<table width="720" height="200" border="0" align="center" cellpadding="0" cellspacing="0">
  <tr>
    <td align="center" valign="middle"><form action="login.php" method="post" enctype="application/x-www-form-urlencoded" name="form1">
      <p>&nbsp;</p>
      <table width="720" border="0" align="center">
        <tr>
          <td colspan="2"><p align="left">
            <span class="error"><?php if (isset($errorMsg) && $errorMsg != "") { echo $errorMsg; } ?></span>
            <br>
            <br>
          </p></td>
        </tr>
        <tr>
          <td width="210" align="right" valign="top" class="form_definitions"><div align="right">Username:</div></td>
          <td align="left" valign="top"><input name="accountUser" type="text" id="accountUser" value="<?php echo $_GET['user']; ?>" size="24" maxlength="24"></td>
        </tr>
        <tr>
          <td align="right" valign="top" class="form_definitions"><div align="right">Password:</div></td>
          <td align="left" valign="top"><input name="accountPassword" type="password" id="accountPassword2" size="24" maxlength="24"></td>
        </tr>
        <tr>
          <td align="right" valign="top" class="form_definitions"><div align="right">Account type:</div></td>
          <td align="left" valign="top">
            <select name="accountType" id="select">
              <option value="member" selected>Member</option>
              <option value="model">Model</option>
              <option value="studioop">Studio Operator</option>
            </select>
            <div align="left"></div></td>
        </tr>
        <tr>
          <td align="right" valign="top" class="form_definitions">&nbsp;</td>
          <td align="left" valign="top">
            <input type="submit" name="Submit" value="Log In to your account">
            <div align="left"></div></td>
        </tr>
        <tr>
          <td align="right" valign="top" class="form_definitions">&nbsp;</td>
          <td align="left" valign="top"><a href="lostpassword.php" class="left">Lost Password? Press Here!</a></td>
        </tr>
      </table>
    </form></td>
  </tr>
</table>
<br>
<br>
<?php include("_main.footer.php"); ?>

    Read the article

  • asp.net - csharp - jquery - looking for a better usage solution

    - by LostLord
    Hi, my dear friends. I have a little problem with jQuery (I really do not know jQuery, but I am forced to use it).

I am using VS 2008, an ASP.NET web app with C#. I am also using Telerik controls in my pages, as well as SqlDataSources (connecting to stored procedures). My pages are based on master and content pages, and in the content pages I have MultiViews.

In one of the views (inside one of those MultiViews) I made two RadComboBoxes for a country/city requirement, as cascading parent and child combo boxes. I used the old way of doing it: I used an UpdatePanel, and in the SelectedIndexChanged event of the parent RadComboBox (country) I wrote this code:

protected void RadcomboboxCountry_SelectedIndexChanged(object o, RadComboBoxSelectedIndexChangedEventArgs e)
{
    hfSelectedCo_ID.Value = RadcomboboxCountry.SelectedValue;
    RadcomboboxCity.Items.Clear();
    RadcomboboxCity.Items.Add(new RadComboBoxItem(" ...", "5"));
    RadcomboboxCity.DataBind();
    RadcomboboxCity.SelectedIndex = 0;
}

The code above fills my child RadComboBox. Let me tell you how: the child SqlDataSource has a stored procedure with a parameter, and I fill that parameter with this line: hfSelectedCo_ID.Value = RadcbCoNameInInsert.SelectedValue; (RadcbCoNameInInsert.SelectedValue means the country ID).

After doing that, the SelectedIndexChanged event of the parent RadComboBox (country) would not fire, so I was forced to set the AutoPostBack property to true. After that everything was OK, until someone asked me whether I could control focus and keydown for my RadComboBoxes, so that when you press the Enter key on the parent combo box (country) the child combo box (city) gets focus, and when you press the up arrow key on the child combo box (city) the parent combo box (country) gets focus (for users who do not want to use the mouse to enter info and choose items).

I told him this is a web app, not WinForms, and we cannot do that. Then I googled it and found jQuery to be the only way to do it, so I started using jQuery. I wrote this code for both of them:

<script src="../JQuery/jquery-1.4.1.js" language="javascript" type="text/javascript"></script>
<script type="text/javascript">
$(function() {
    $('input[id$=RadcomboboxCountry_Input]').focus();
    $('input[id$=RadcomboboxCountry_Input]').select();
    $('input[id$=RadcomboboxCountry_Input]').bind('keyup', function(e) {
        var code = (e.keyCode ? e.keyCode : e.which);
        if (code == 13) {  // Enter key
            $('input[id$=RadcomboboxCity_Input]').focus();
            $('input[id$=RadcomboboxCity_Input]').select();
        }
    });
    $('input[id$=RadcomboboxCity_Input]').bind('keyup', function(e) {
        var code = (e.keyCode ? e.keyCode : e.which);
        if (code == 38) {  // Up arrow key
            $('input[id$=RadcomboboxCountry_Input]').focus();
            $('input[id$=RadcomboboxCountry_Input]').select();
        }
    });
});
</script>

This jQuery code worked, BUT AutoPostBack=true on the parent RadComboBox became a problem: when SelectedIndexChanged of the parent RadComboBox fires, the Telerik skin scripts run afterwards, and the parent combo box loses focus, so we have to use the mouse, which we don't want.
To fix this problem I decided to set the parent combo box's AutoPostBack to false, convert the RadcomboboxCountry_SelectedIndexChanged handler above into a public non-static method without parameters, and call it with jQuery (I used the OnClientChanged property of the parent combo box, like OnClientChanged = "MyMethodForParentCB_InJquery();", instead of the SelectedIndexChanged event):

public void MyMethodForParentCB_InCodeBehind()
{
    hfSelectedCo_ID.Value = RadcomboboxCountry.SelectedValue;
    RadcomboboxCity.Items.Clear();
    RadcomboboxCity.Items.Add(new RadComboBoxItem(" ...", "5"));
    RadcomboboxCity.DataBind();
    RadcomboboxCity.SelectedIndex = 0;
}

To do that I read the manual below and followed it step by step:

http://www.ajaxprojects.com/ajax/tutorialdetails.php?itemid=732

But that manual is about static methods, and this is my new problem: when I use a static method like

public static void MyMethodForParentCB_InCodeBehind()
{
    hfSelectedCo_ID.Value = RadcomboboxCountry.SelectedValue;
    RadcomboboxCity.Items.Clear();
    RadcomboboxCity.Items.Add(new RadComboBoxItem(" ...", "5"));
    RadcomboboxCity.DataBind();
    RadcomboboxCity.SelectedIndex = 0;
}

I receive some errors, because this method cannot recognize my controls and hidden field. One of those errors looks like this:

Error 2 An object reference is required for the non-static field, method, or property 'Darman.SuperAdmin.Users.hfSelectedCo_ID' C:\Javad\Copy of Darman 6\Darman\SuperAdmin\Users.aspx.cs 231 13 Darman

Any ideas? Is there any way to call non-static methods with jQuery (I know we cannot do that, but is there another way to solve my problem)?
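For reference, the tutorial's restriction to static methods is fundamental: ASP.NET page methods are invoked directly over HTTP without instantiating the page or running its life cycle, which is exactly why a static method cannot see instance members like hfSelectedCo_ID or RadcomboboxCity. Below is a minimal sketch of that static page-method pattern; the method name, parameter, and return format are illustrative, not from the original post, and it assumes ASP.NET AJAX (System.Web.Extensions) is configured, as it is by default in a VS 2008 web application. Instead of touching server controls, the static method takes the selected country ID as input and returns data for the client to render:

// Code-behind (e.g. Users.aspx.cs): page methods must be static and self-contained.
[System.Web.Services.WebMethod]
public static string GetCityOptions(string countryId)
{
    // Look up the cities for countryId here (for example via the same stored
    // procedure the child SqlDataSource calls) and return them in a format
    // the client script can parse, such as a delimited string.
    return "City1;City2";
}

// Client side: call the page method with jQuery and fill the city combo box
// through client-side script instead of a postback.
$.ajax({
    type: "POST",
    url: "Users.aspx/GetCityOptions",
    data: "{ countryId: '" + $('input[id$=hfSelectedCo_ID]').val() + "' }",
    contentType: "application/json; charset=utf-8",
    dataType: "json",
    success: function(response) {
        var cities = response.d.split(';');  // ASP.NET 3.5 wraps the result in "d"
        // ... add the returned cities to the city combo box client-side ...
    }
});

The trade-off of this pattern is that everything the method touches must be passed in or fetched fresh, since no page or control state exists during the call.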

    Read the article

  • Where to look for the real url

    - by smallB
    I'm trying to write simple application for downloading videos from youtube. My code for getting file (http://www.youtube.com/watch?v=pViMzR_ylXg) looks like: bool FD_core::get_file() { QNetworkRequest request; request.setUrl(QUrl("http://www.youtube.com/watch?v=pViMzR_ylXg")); connect(network_access_manager_, SIGNAL(finished(QNetworkReply*)), this, SLOT(onRequestCompleted(QNetworkReply *))); network_access_manager_->get(request); return true; } void FD_core::onRequestCompleted(QNetworkReply * reply) { QByteArray data_ = reply->readAll(); cout << data_.constData(); qDebug() << "size: " << data_.size(); } In the above function data_.constData() produces lots of text, part (very small) of it: <!DOCTYPE html> <html lang="en" dir="ltr" > <head> <script> var yt = yt || {};yt.timing = yt.timing || {};yt.timing.tick = function(label, opt_time) {var timer = yt.timing['timer'] || {};if(opt_time) {timer[label] = opt_time;}else {timer[label] = new Date().getTime();}yt.timing['timer'] = timer;};yt.timing.info = function(label, value) {var info_args = yt.timing['info_args'] || {};info_args[label] = value;yt.timing['info_args'] = info_args;};yt.timing.info('e', "907050,906359,927900,919320,914021,916611,922401,920704,912806,927201,925706,928001,922403,913546,913556,920201,911116,901451");yt.timing.wff = true;yt.timing.info('pr', "1");yt.timing.info('an', "dclk,aftv,afv");if (document.webkitVisibilityState == 'prerender') {document.addEventListener('webkitvisibilitychange', function() {yt.timing.tick('start');}, false);}yt.timing.tick('start');yt.timing.info('li','0');try {yt.timing['srt'] = window.gtbExternal && window.gtbExternal.pageT() ||window.external && window.external.pageT;} catch(e) {}if (window.chrome && window.chrome.csi) {yt.timing['srt'] = Math.floor(window.chrome.csi().pageT);}if (window.msPerformance && window.msPerformance.timing) {yt.timing['srt'] = window.msPerformance.timing.responseStart - window.msPerformance.timing.navigationStart;} </script> <script>var yt = yt || {};yt.preload = {};yt.preload.counter_ = 0;yt.preload.start = function(src) {var img = new Image();var counter = ++yt.preload.counter_;yt.preload[counter] = img;img.onload = img.onerror = function () {delete yt.preload[counter];};img.src = src;img = null;};yt.preload.start("http:\/\/o-o---preferred---sn-xn5ucu-q0ce---v3---lscache7.c.youtube.com\/crossdomain.xml");yt.preload.start("http:\/\/o-o---preferred---sn-xn5ucu-q0ce---v3---lscache7.c.youtube.com\/generate_204?ip=95.83.224.63\u0026upn=A3aUhLYV55M\u0026sparams=algorithm%2Cburst%2Ccp%2Cfactor%2Cgcr%2Cid%2Cip%2Cipbits%2Citag%2Csource%2Cupn%2Cexpire\u0026fexp=907050%2C906359%2C927900%2C919320%2C914021%2C916611%2C922401%2C920704%2C912806%2C927201%2C925706%2C928001%2C922403%2C913546%2C913556%2C920201%2C911116%2C901451\u0026mt=1354207274\u0026key=yt1\u0026algorithm=throttle-factor\u0026burst=40\u0026ipbits=8\u0026itag=34\u0026sver=3\u0026signature=692E605215EB4D2CA407291CA26E14B844768A89.7A2930CE25FDDFC7C4FF5AA56DD02538B0020267\u0026mv=m\u0026source=youtube\u0026ms=au\u0026gcr=ie\u0026expire=1354228237\u0026factor=1.25\u0026cp=U0hUSVJNVl9IUUNONF9KR1pDOi0tSFhhRzVFRkd6\u0026id=a5588ccd1ff29578");</script><title>Die Antwoord - Fok Julle Naaiers (Mike Tyson&#39;s Words NOT DJ Hi-Teks) - YouTube</title><link rel="search" type="application/opensearchdescription+xml" href="http://www.youtube.com/opensearch?locale=en_US" title="YouTube Video Search"><link rel="icon" href="http://s.ytimg.com/yts/img/favicon-vfldLzJxy.ico" type="image/x-icon"><link rel="shortcut icon" 
href="http://s.ytimg.com/yts/img/favicon-vfldLzJxy.ico" type="image/x-icon"> <link rel="icon" href="//s.ytimg.com/yts/img/favicon_32-vflWoMFGx.png" sizes="32x32"><link rel="canonical" href="/watch?v=pViMzR_ylXg"><link rel="alternate" media="handheld" href="http://m.youtube.com/watch?v=pViMzR_ylXg"><link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.youtube.com/watch?v=pViMzR_ylXg"><link rel="shortlink" href="http://youtu.be/pViMzR_ylXg"> <meta name="title" content="Die Antwoord - Fok Julle Naaiers (Mike Tyson&#39;s Words NOT DJ Hi-Teks)"> <meta name="description" content="Some of the lyrics of &quot;Die Antwoord&quot; new single &quot;Fok Julle Naaiers&quot; have caused such controversy that Die Antwoord have split with their record label Intersc..."> <meta name="keywords" content="Die Antwoord, Fok Julle Naaiers, Mike Tyson, DJ Hi-Tek, Faggot"> <link rel="alternate" type="application/json+oembed" href="http://www.youtube.com/oembed?url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DpViMzR_ylXg&amp;format=json" title="Die Antwoord - Fok Julle Naaiers (Mike Tyson&#39;s Words NOT DJ Hi-Teks)"> <link rel="alternate" type="text/xml+oembed" href="http://www.youtube.com/oembed?url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DpViMzR_ylXg&amp;format=xml" title="Die Antwoord - Fok Julle Naaiers (Mike Tyson&#39;s Words NOT DJ Hi-Teks)"> <meta property="og:url" content="http://www.youtube.com/watch?v=pViMzR_ylXg"> <meta property="og:title" content="Die Antwoord - Fok Julle Naaiers (Mike Tyson&#39;s Words NOT DJ Hi-Teks)"> <meta property="og:description" content="Some of the lyrics of &quot;Die Antwoord&quot; new single &quot;Fok Julle Naaiers&quot; have caused such controversy that Die Antwoord have split with their record label Intersc..."> <meta property="og:type" content="video"> <meta property="og:image" content="http://i1.ytimg.com/vi/pViMzR_ylXg/mqdefault.jpg"> <meta property="og:video" content="http://www.youtube.com/v/pViMzR_ylXg?version=3&amp;autohide=1"> <meta property="og:video:type" content="application/x-shockwave-flash"> <meta property="og:video:width" content="853"> <meta property="og:video:height" content="480"> <meta property="og:site_name" content="YouTube"> <meta property="fb:app_id" content="87741124305"> <meta name="twitter:card" value="player"> <meta name="twitter:site" value="@youtube"> <meta name="twitter:player" value="https://www.youtube.com/embed/pViMzR_ylXg"> <meta property="twitter:player:width" content="853"> <meta property="twitter:player:height" content="480"> So my question is, where in this file is the url hidden which will allow me to download the wanted file?

    Read the article

  • Persisting Session Between Different Browser Instances

    - by imran_ku07
    Introduction:

By default, an InProc session's identifier cookie is saved in browser memory. This cookie is known as a non-persistent cookie identifier, which simply means that if the user closes his browser, the cookie is immediately removed. On the other hand, cookies which are stored on the user's hard drive and can be reused on later visits are called persistent cookies. Persistent cookies are used less often than non-persistent cookies because of security: non-persistent cookies make session hijacking attacks more difficult and more limited, and if you are using a shared computer, there is a good chance that your persistent session will be used by the other people sharing it. That said, many users want their session to persist even when they open two instances of the same browser, or when they close and reopen the browser. So in this article I will show a very simple way to persist a session even after the browser is closed.

Description:

Let's create a simple ASP.NET Web Application. In this article I will use Web Forms, but this also works in MVC. Open Default.aspx.cs and add the following code in Page_Load:

protected void Page_Load(object sender, EventArgs e)
{
    if (Session["Message"] != null)
        Response.Write(Session["Message"].ToString());
    Session["Message"] = "Hello, Imran";
}

This page shows a message if the session value exists from a previous request, and then sets the session value.

Now run the application. You will see an empty page on the first try; after refreshing the page you will see the message "Hello, Imran". Now close the browser and reopen it, or just open another instance of the same browser: you will get exactly the same behavior as when you ran the application the first time. Why is the session not persisted between browser instances? The simple reason is the non-persistent session cookie identifier: the session cookie is not shared between browser instances. Now let's make it persistent.

To make your application share the session between different browser instances, add the following code in Global.asax:

protected void Application_PostMapRequestHandler(object sender, EventArgs e)
{
    if (Request.Cookies["ASP.NET_SessionIdTemp"] != null)
    {
        if (Request.Cookies["ASP.NET_SessionId"] == null)
            Request.Cookies.Add(new HttpCookie("ASP.NET_SessionId", Request.Cookies["ASP.NET_SessionIdTemp"].Value));
        else
            Request.Cookies["ASP.NET_SessionId"].Value = Request.Cookies["ASP.NET_SessionIdTemp"].Value;
    }
}

protected void Application_PostRequestHandlerExecute(object sender, EventArgs e)
{
    HttpCookie cookie = new HttpCookie("ASP.NET_SessionIdTemp", Session.SessionID);
    cookie.Expires = DateTime.Now.AddMinutes(Session.Timeout);
    Response.Cookies.Add(cookie);
}

In Application_PostRequestHandlerExecute (which runs after the HttpHandler), we add a persistent cookie, ASP.NET_SessionIdTemp, which contains the current user's SessionID, and we set its expiry to the current user's session timeout.

In Application_PostMapRequestHandler (which runs just before the session is restored), we check whether the request contains ASP.NET_SessionIdTemp. If it does, we add or update the ASP.NET_SessionId cookie with the value of ASP.NET_SessionIdTemp. So when a new browser instance is opened, the persistent ASP.NET_SessionIdTemp cookie (if it exists) is copied into ASP.NET_SessionId, and the previous session is restored.

Run your application again: you will get the session from the last closed browser (if it has not expired).

Summary:

A persistent session is a great way to improve usability, but always weigh the security implications before doing this. Still, there are cases in which you might need a persistent session, and in this article I walked through a simple way to achieve it. Hopefully you will enjoy this simple article too.
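As a follow-up thought: if several applications need this behavior, the two Global.asax handlers can also be packaged as a reusable IHttpModule. Here is a minimal sketch under the same cookie names as above (PersistentSessionModule is an illustrative name, not part of the original article):

using System;
using System.Web;

// The same persist-and-restore logic as the Global.asax handlers above,
// packaged as an IHttpModule. Register it in web.config under
// <system.web>/<httpModules> or <system.webServer>/<modules>.
public class PersistentSessionModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Runs just before session state is acquired: restore the session
        // id from the persistent copy, if one exists.
        app.PostMapRequestHandler += delegate(object sender, EventArgs e)
        {
            HttpContext ctx = ((HttpApplication)sender).Context;
            HttpCookie temp = ctx.Request.Cookies["ASP.NET_SessionIdTemp"];
            if (temp == null)
                return;

            if (ctx.Request.Cookies["ASP.NET_SessionId"] == null)
                ctx.Request.Cookies.Add(new HttpCookie("ASP.NET_SessionId", temp.Value));
            else
                ctx.Request.Cookies["ASP.NET_SessionId"].Value = temp.Value;
        };

        // Runs after the handler: re-issue the persistent copy so its
        // expiry keeps tracking the session timeout.
        app.PostRequestHandlerExecute += delegate(object sender, EventArgs e)
        {
            HttpContext ctx = ((HttpApplication)sender).Context;
            if (ctx.Session == null)
                return;

            HttpCookie cookie = new HttpCookie("ASP.NET_SessionIdTemp", ctx.Session.SessionID);
            cookie.Expires = DateTime.Now.AddMinutes(ctx.Session.Timeout);
            ctx.Response.Cookies.Add(cookie);
        };
    }

    public void Dispose() { }
}

The logic is identical; the module form just avoids copying the handlers into every project that needs persistent sessions.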

    Read the article
