Search Results

Search found 22900 results on 916 pages for 'pascal case'.

  • Workarounds for supporting MVVM in the Silverlight TreeView Control

    - by cibrax
    MVVM (Model-View-ViewModel) is the pattern you will typically choose for building testable user interfaces in either WPF or Silverlight. This pattern basically relies on the data binding support in those two technologies for mapping an existing model class (the view model) to the different parts of the UI or view. Unfortunately, MVVM was not treated as a first-class citizen by some of the controls released out of the box in the Silverlight runtime or the Silverlight Toolkit. That means that using data binding to implement MVVM is not always trivial and usually requires some customization of the existing controls. I ran into several problems myself trying to fully support data binding in controls like the tree view and the context menu, and in features like drag & drop.  For that reason, I decided to write this post to show how the tree view control and the tree view items can be customized to support data binding in many of their properties. In the first place, you will typically use a tree view to show hierarchical data, so the view model must somehow reflect that hierarchy. An easy way to implement hierarchy in a model is to use a base item class like this one, public abstract class TreeItemModel { public abstract IEnumerable<TreeItemModel> Children { get; } } You can later derive your concrete model classes from that base class. For example, public class CustomerModel { public string FullName { get; set; } public string Address { get; set; } public IEnumerable<OrderModel> Orders { get; set; } }   public class CustomerTreeItemModel : TreeItemModel { public CustomerTreeItemModel(CustomerModel customer) { }   public override IEnumerable<TreeItemModel> Children { get { // Return orders } } } The Children property in the CustomerTreeItemModel implementation can return, for instance, an ObservableCollection<TreeItemModel> with the orders, so the tree view will automatically subscribe to all the changes in the collection. You can bind this model to the tree view control in the UI by using a hierarchical data template. <e:TreeView x:Name="TreeView" ItemsSource="{Binding Customers}"> <e:TreeView.ItemTemplate> <sdk:HierarchicalDataTemplate ItemsSource="{Binding Children}"> <!-- TEMPLATE --> </sdk:HierarchicalDataTemplate> </e:TreeView.ItemTemplate> </e:TreeView> An interesting behavior of the Children property combined with the hierarchical data template is that the property is only invoked just before expansion, so you can lazy-load the children at this point (the tree view control will not expand the whole tree on the first expansion). The problem with using MVVM in this control is that you cannot bind properties in the model to specific properties of the TreeView item, such as IsSelected or IsExpanded. This is where you need to customize the existing tree view control to support data binding in the tree items. 
public class CustomTreeView : TreeView { public CustomTreeView() { }   protected override DependencyObject GetContainerForItemOverride() { CustomTreeViewItem tvi = new CustomTreeViewItem(); Binding expandedBinding = new Binding("IsExpanded"); expandedBinding.Mode = BindingMode.TwoWay; tvi.SetBinding(CustomTreeViewItem.IsExpandedProperty, expandedBinding); Binding selectedBinding = new Binding("IsSelected"); selectedBinding.Mode = BindingMode.TwoWay; tvi.SetBinding(CustomTreeViewItem.IsSelectedProperty, selectedBinding); return tvi; } }   public class CustomTreeViewItem : TreeViewItem { public CustomTreeViewItem() { }   protected override DependencyObject GetContainerForItemOverride() { CustomTreeViewItem tvi = new CustomTreeViewItem(); Binding expandedBinding = new Binding("IsExpanded"); expandedBinding.Mode = BindingMode.TwoWay; tvi.SetBinding(CustomTreeViewItem.IsExpandedProperty, expandedBinding); Binding selectedBinding = new Binding("IsSelected"); selectedBinding.Mode = BindingMode.TwoWay; tvi.SetBinding(CustomTreeViewItem.IsSelectedProperty, selectedBinding); return tvi; } } You basically need to derive the TreeView and TreeViewItem controls to manually add a binding for the properties you need. In the example above, I am adding a binding for the “IsExpanded” and “IsSelected” properties in the items. The model for the tree items now needs to be extended to support those properties as well, public abstract class TreeItemModel : INotifyPropertyChanged { bool isExpanded = false; bool isSelected = false;   public abstract IEnumerable<TreeItemModel> Children { get; }   public bool IsExpanded { get { return isExpanded; } set { isExpanded = value; if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs("IsExpanded")); } }   public bool IsSelected { get { return isSelected; } set { isSelected = value; if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs("IsSelected")); } }   public event PropertyChangedEventHandler PropertyChanged; } However, as soon as you use this custom tree view control, you lose all the automatic styles from the built-in toolkit themes because they are tied to the control type (TreeView in this case).  The only ugly workaround I found so far for this problem is to copy the styles from the Toolkit source code and reuse them in the application.
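    The "Return orders" placeholder above can be filled in with a lazily created ObservableCollection so the orders are only materialized when the node is expanded. The sketch below is one way to do that, assuming a hypothetical OrderTreeItemModel that wraps each OrderModel; neither that type nor this exact implementation appears in the original post.

```csharp
using System.Collections.Generic;
using System.Collections.ObjectModel;

public class CustomerTreeItemModel : TreeItemModel
{
    private readonly CustomerModel customer;
    private ObservableCollection<TreeItemModel> children;

    public CustomerTreeItemModel(CustomerModel customer)
    {
        this.customer = customer;
    }

    public override IEnumerable<TreeItemModel> Children
    {
        get
        {
            // Build the collection on first access so children load lazily,
            // and return an ObservableCollection so the tree view picks up changes.
            if (children == null)
            {
                children = new ObservableCollection<TreeItemModel>();
                foreach (var order in customer.Orders)
                {
                    // OrderTreeItemModel is a hypothetical TreeItemModel for orders.
                    children.Add(new OrderTreeItemModel(order));
                }
            }
            return children;
        }
    }
}
```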

    Read the article

  • WebLogic Server JMS WLST Script – Who is Connected To My Server

    - by james.bayer
    Ever want to know who was connected to your WebLogic Server instance for troubleshooting?  An email exchange about this topic and JMS came up this week, and I’ve heard it come up once or twice before too.  Sometimes it’s interesting or helpful to know the list of JMS clients (IP Addresses, JMS Destinations, message counts) that are connected to a particular JMS server.  This can be helpful for troubleshooting.  Tom Barnes from the WebLogic Server JMS team provided some helpful advice: The JMS connection runtime mbean has “getHostAddress”, which returns the host address of the connecting client JVM as a string.  A connection runtime can contain session runtimes, which in turn can contain consumer runtimes.  The consumer runtime, in turn has a “getDestinationName” and “getMemberDestinationName”.  I think that this means you could write a WLST script, for example, to dump all consumers, their destinations, plus their parent session’s parent connection’s host addresses.    Note that the client runtime mbeans (connection, session, and consumer) won’t necessarily be hosted on the same JVM as a destination that’s in the same cluster (client messages route from their connection host to their ultimate destination in the same cluster). Writing the Script So armed with this information, I decided to take the challenge and see if I could write a WLST script to do this.  It’s always helpful to have the WebLogic Server MBean Reference handy for activities like this.  This one is focused on JMS Consumers and I only took a subset of the information available, but it could be modified easily to do Producers.  I haven’t tried this on a more complex environment, but it works in my simple sandbox case, so it should give you the general idea. # Better to use Secure Config File approach for login as shown here http://buttso.blogspot.com/2011/02/using-secure-config-files-with-weblogic.html connect('weblogic','welcome1','t3://localhost:7001')   # Navigate to the Server Runtime and get the Server Name serverRuntime() serverName = cmo.getName()   # Multiple JMS Servers could be hosted by a single WLS server cd('JMSRuntime/' + serverName + '.jms' ) jmsServers=cmo.getJMSServers()   # Find the list of all JMSServers for this server namesOfJMSServers = '' for jmsServer in jmsServers: namesOfJMSServers = jmsServer.getName() + ' '   # Count the number of connections jmsConnections=cmo.getConnections() print str(len(jmsConnections)) + ' JMS Connections found for ' + serverName + ' with JMSServers ' + namesOfJMSServers   # Recurse the MBean tree for each connection and pull out some information about consumers for jmsConnection in jmsConnections: try: print 'JMS Connection:' print ' Host Address = ' + jmsConnection.getHostAddress() print ' ClientID = ' + str( jmsConnection.getClientID() ) print ' Sessions Current = ' + str( jmsConnection.getSessionsCurrentCount() ) jmsSessions = jmsConnection.getSessions() for jmsSession in jmsSessions: jmsConsumers = jmsSession.getConsumers() for jmsConsumer in jmsConsumers: print ' Consumer:' print ' Name = ' + jmsConsumer.getName() print ' Messages Received = ' + str(jmsConsumer.getMessagesReceivedCount()) print ' Member Destination Name = ' + jmsConsumer.getMemberDestinationName() except: print 'Error retrieving JMS Consumer Information' dumpStack() # Cleanup disconnect() exit() Example Output I expect the output to look something like this and loop through all the connections, this is just the first one: 1 JMS Connections found for AdminServer with JMSServers myJMSServer JMS 
Connection:   Host Address = 127.0.0.1   ClientID = None   Sessions Current = 16    Consumer:      Name = consumer40      Messages Received = 1      Member Destination Name = myJMSModule!myQueue Notice that it has the IP Address of the client.  There are 16 Sessions open because I’m using an MDB, which defaults to 16 connections, so this matches what I expect.  Let’s see what the full output actually looks like: D:\Oracle\fmw11gr1ps3\user_projects\domains\offline_domain>java weblogic.WLST d:\temp\jms.py   Initializing WebLogic Scripting Tool (WLST) ...   Welcome to WebLogic Server Administration Scripting Shell   Type help() for help on available commands   Connecting to t3://localhost:7001 with userid weblogic ... Successfully connected to Admin Server 'AdminServer' that belongs to domain 'offline_domain'.   Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.   Location changed to serverRuntime tree. This is a read-only tree with ServerRuntimeMBean as the root. For more help, use help(serverRuntime)   1 JMS Connections found for AdminServer with JMSServers myJMSServer JMS Connection: Host Address = 127.0.0.1 ClientID = None Sessions Current = 16 Consumer: Name = consumer40 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer34 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer37 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer16 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer46 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer49 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer43 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer55 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer25 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer22 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer19 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer52 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer31 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer58 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer28 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer61 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Disconnected from weblogic server: AdminServer     Exiting WebLogic Scripting Tool. Thanks to Tom Barnes for the hints and the inspiration to write this up. Image of telephone switchboard courtesy of http://www.JoeTourist.net/ JoeTourist InfoSystems
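    A follow-up note on the script: the post mentions it could easily be modified to cover producers as well. A hedged sketch of that inner loop is below; it assumes the session runtime's getProducers() call and the producer runtime's getName()/getMessagesSentCount() methods behave as described in the WebLogic MBean reference, so check them against your release before relying on the output.

```python
# Sketch only: add this inside the per-connection loop, next to the consumer loop.
# Assumes JMSSessionRuntimeMBean.getProducers() and the producer runtime's
# getName()/getMessagesSentCount() are available in your WebLogic release.
for jmsSession in jmsSessions:
    jmsProducers = jmsSession.getProducers()
    for jmsProducer in jmsProducers:
        print '     Producer:'
        print '        Name = ' + jmsProducer.getName()
        print '        Messages Sent = ' + str(jmsProducer.getMessagesSentCount())
```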

    Read the article

  • Gene Hunt Says:

    - by BizTalk Visionary
    "She's as nervous as a very small nun at a penguin shoot"   "He's got fingers in more pies than a leper on a cookery course" "You so much as belch out of line and I'll have your scrotum on a barbed wire plate" "Let's go play slappyface" "your surrounded by armed barstewards" “Right, get out and find this murdering scum right now!” [pause] “Scratch that, we start 9am sharp tomorrow, it's beer-o-clock.” "So then Cartwright, you're such a good Detective.... Go and Detect me a packet of Garibaldies" "You're not the one who is going to have to knit himself a new arsehole after 25 years of aggressive male love in prison" “A dream for me is Diana Dors and a bottle of chip fat." “A dream for me is Diana Dors and a bottle of chip fat." “They reckon you've got concussion - but personally, I couldn't give a tart's furry cup if half your brains are falling out. Don't ever waltz into my kingdom playing king of the jungle.” “You great... soft... sissy... girlie... nancy... french... bender... Man-United supporting POOF!!” “Drugs eh? What's the point. They make you forget, make you talk funny, make you see things that aren't there. My old grandma got all of that for free when she had a stroke.” “He's Dead! It's quite serious!” “Fanny in the flat...Nice Work” “SoopaDoopa” “Tits in a Jumper!” “Drop your weapons! You are surrounded by armed bastards!” “It's 1973, almost dinnertime. I'm 'avin 'oops!” “Trust the Gene Genie!” “I wanna hump Britt Ekland...What're we gonna do...!” “Was that 'E' and you don't know the rest?! or you going 'Eeee, I Dunno'” “Good Girl! Prostate probe and no jelly. “ “Give over, it's nothing like Spain!” “I'll come over your houses and stamp on all your toys!” “The Wizard will sort it out. It's cos of the wonderful things he does” “Cartwright can jump up and down on his knackers!” “It's not a windup love, he really thinks like this!” “Women! You can't say two words to them” “I was thinking, maybe, a Berni Inn!” “If I wanted a bollocking for drinking too much...!” “Shhhh...hear that...that's the sound of this case being closed! “Chicken!? In a basket!?” “Seems a large quantity of cocaine...” “You probably thought he kept his cock in his keks!” “The tail-end of Rays demotion speech!” “Stephen Warren is gay!?” “You're a smart boy, use your initiative!” “Don't be such a Jessie!” “I find the idea of a bird brushing her teeth...!” “Never been tempted to the Magic talcum powder?” “Make sure she's got nice tits!” “You're more likely to find an ostrich with a plum up it's arse!” “Drink this lot under the table and have a pint on the way home!” “Never be a female Prime Minister!” “Pub? Pub! pub!.....Pub!” “Thou shalt not suck off rent boys!” “The number for the special clinic is on the notice board!” “If me uncle had tits, would he be me auntie!” “Got your vicars in a twist!” “We Done?!” “Your mates got balls...If they were any bigger he'd need a wheelbarrow!” “The Ending - from 'I want to go home' to the end music.”

    Read the article

  • A Quick Primer on SharePoint Customization

    - by PeterBrunone
    This one goes out to all the people who have been asked to change the way a SharePoint site looks.  Management wants to know how long it will take, and you can whip that out by tomorrow, right?  If you don't have time to prepare a treatise on what's involved, or if you just want to lend some extra weight to your case by quoting a blogger who was an MVP for seven years, then dive right in; this post is for you. There are three main components of SharePoint visual customization: 1) Theme – A theme encompasses all the standardized text formatting and coloring (borders, fonts, etc.), including the background images of various sections. All told, there could be around 50 images involved, and a few hundred CSS (style) classes.  Installing a theme once it's been created is no great feat.  Given the number of pieces, of course, creating a new theme could take anywhere from a day to a week… once decisions have been made about the desired appearance. 2) Master Page – A master page provides the framework for page layout.  This includes all the top and side menus, where content shows up, et cetera.  Master pages have been around for a long time in ASP.NET (Microsoft's web development platform), and they do require some .NET programming knowledge.  Beyond that, in SharePoint, there are a few dozen controls which the system expects to find on a given page.  They're not all used at once, but if they're not there when they're needed, chaos ensues.  Estimating a custom master page is difficult, as it depends on the level of customization.  I've been on projects where I was brought in simply to fix some problems and add a few finishing touches, and it took 2-3 weeks.  Master page customization requires a large amount of testing time to make sure that the HTML, JavaScript, CSS, and control placement all work well together. 3) Individual page layout – Each page (ideally) uses a master page for its template, but within the content areas defined by the master page, web parts can be added, removed, and configured from within the browser.  The wireframe that Brent provided could most likely be completed simply by manipulating the content on the home page in this fashion, and we had allowed about a day of effort for the task.  If needed, further functionality can be provided by an experienced ASP.NET developer; custom forms are a common example.  This of course is a bit more in-depth than simple content manipulation and could take several days per page (or more; there's really no way to quantify this without a set of requirements).   That's basically it.  To recap:  Fonts and coloring are done with themes, and can take anywhere from a day to a week to create (not counting creative time); required technical skills include HTML, CSS, and image manipulation.  Templated layout is done with master pages, and generally requires a developer familiar with both ASP.NET and SharePoint in particular; it can have far-reaching consequences depending on the complexity of the changes, and could add weeks or months to a project.  
Page layout can be as simple as content manipulation in the web browser, taking a few hours per page, or it can involve more detail, like custom forms, and can require programming expertise and significantly more development time.

    Read the article

  • OSB/OSR/OER in One Domain - QName violates loader constraints

    - by John Graves
    For demos, testing and prototyping, I wanted a single domain which contained three servers:OSB - Oracle Service BusOSR - Oracle Service RegistryOER - Oracle Enterprise Repository These three can work together to help with service governance in an enterprise.  When building out the domain, I found errors in the OSR server due to some conflicting classes from the OSB.  This wouldn't be an issue if each server was given a unique classpath setting with the node manager, but I was having the node manager use the standard startup scripts. The domain's bin/setDomainEnv.sh script has a large set of extra libraries added for OSB which look like this: if [ "${POST_CLASSPATH}" != "" ] ; then POST_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jrf_11.1.1/jrf.jar${CLASSPATHSEP}${POST_CLASSPATH}" export POST_CLASSPATH else POST_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jrf_11.1.1/jrf.jar" export POST_CLASSPATH fi if [ "${PRE_CLASSPATH}" != "" ] ; then PRE_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jdbc_11.1.1/ojdbc6dms.jar${CLASSPATHSEP}${PRE_CLASSPATH}" export PRE_CLASSPATH else PRE_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jdbc_11.1.1/ojdbc6dms.jar" export PRE_CLASSPATH fi POST_CLASSPATH="${POST_CLASSPATH}${CLASSPATHSEP}/oracle/fmwhome/Oracle_OSB1/soa/modules/oracle.soa.common.adapters_11.1.1/oracle.soa.common.adapters.jar\ ${CLASSPATHSEP}${ALSB_HOME}/lib/version.jar\ ${CLASSPATHSEP}${ALSB_HOME}/lib/alsb.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-ant.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-common.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-core.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-dameon.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/classes${CLASSPATHSEP}\ ${ALSB_HOME}/lib/external/log4j_1.2.8.jar${CLASSPATHSEP}\ ${DOMAIN_HOME}/config/osb" I didn't take the time to sort out exactly which jar was causing the problem, but I simply surrounded this block with a conditional statement: if [ "${SERVER_NAME}" == "osr_server1" ] ; then POST_CLASSPATH=""else if [ "${POST_CLASSPATH}" != "" ] ; then POST_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jrf_11.1.1/jrf.jar${CLASSPATHSEP}${POST_CLASSPATH}" export POST_CLASSPATH else POST_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jrf_11.1.1/jrf.jar" export POST_CLASSPATH fi if [ "${PRE_CLASSPATH}" != "" ] ; then PRE_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jdbc_11.1.1/ojdbc6dms.jar${CLASSPATHSEP}${PRE_CLASSPATH}" export PRE_CLASSPATH else PRE_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jdbc_11.1.1/ojdbc6dms.jar" export PRE_CLASSPATH fi POST_CLASSPATH="${POST_CLASSPATH}${CLASSPATHSEP}/oracle/fmwhome/Oracle_OSB1/soa/modules/oracle.soa.common.adapters_11.1.1/oracle.soa.common.adapters.jar\ ${CLASSPATHSEP}${ALSB_HOME}/lib/version.jar\ ${CLASSPATHSEP}${ALSB_HOME}/lib/alsb.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-ant.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-common.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-core.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-dameon.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/classes${CLASSPATHSEP}\ ${ALSB_HOME}/lib/external/log4j_1.2.8.jar${CLASSPATHSEP}\ ${DOMAIN_HOME}/config/osb" fi I could have also just done an if [ ${SERVER_NAME} = "osb_server1" ], but I would have also had to include the AdminServer because they are needed there too.  Since the oer_server1 didn't mind, I did the negative case as shown above. 
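    If more servers end up needing their own treatment, a case statement on SERVER_NAME reads more clearly than nested if/else blocks. The fragment below is only a sketch of that idea – the server names are the ones used in this domain, and the classpath additions are abbreviated to a single jar rather than the full block from setDomainEnv.sh above.

```sh
# Sketch: per-server handling of the OSB classpath additions in setDomainEnv.sh.
case "${SERVER_NAME}" in
  osr_server1)
    # OSR must not see the OSB jars, so skip the additions entirely.
    ;;
  *)
    # AdminServer, osb_server1 and any other member get the OSB additions
    # (abbreviated here; use the full block shown above in practice).
    POST_CLASSPATH="${POST_CLASSPATH}${CLASSPATHSEP}${ALSB_HOME}/lib/alsb.jar"
    export POST_CLASSPATH
    ;;
esac
```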
To help others find this post, I'm including the error that was reported in the OSR server before I made this change. ####<Mar 30, 2012 4:20:28 PM EST> <Error> <HTTP> <localhost.localdomain> <osr_server1> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <11d1def534ea1be0:30e96542:13662023753:-8000-000000000000001c> <1333084828916> <BEA-101017> <[ServletContext@470316600[app:registry module:registry.war path:/registry spec-version:null]] Root cause of ServletException. java.lang.LinkageError: Class javax/xml/namespace/QName violates loader constraints at com.idoox.wsdl.extensions.PopulatedExtensionRegistry.<init>(PopulatedExtensionRegistry.java:84) at com.idoox.wsdl.factory.WSDLFactoryImpl.newDefinition(WSDLFactoryImpl.java:61) at com.idoox.wsdl.xml.WSDLReaderImpl.parseDefinitions(WSDLReaderImpl.java:419) at com.idoox.wsdl.xml.WSDLReaderImpl.readWSDL(WSDLReaderImpl.java:309) at com.idoox.wsdl.xml.WSDLReaderImpl.readWSDL(WSDLReaderImpl.java:272) at com.idoox.wsdl.xml.WSDLReaderImpl.readWSDL(WSDLReaderImpl.java:198) at com.idoox.wsdl.util.WSDLUtil.readWSDL(WSDLUtil.java:126) at com.systinet.wasp.admin.PackageRepositoryImpl.validateServicesNamespaceAndName(PackageRepositoryImpl.java:885) at com.systinet.wasp.admin.PackageRepositoryImpl.registerPackage(PackageRepositoryImpl.java:807) at com.systinet.wasp.admin.PackageRepositoryImpl.updateDir(PackageRepositoryImpl.java:611) at com.systinet.wasp.admin.PackageRepositoryImpl.updateDir(PackageRepositoryImpl.java:643) at com.systinet.wasp.admin.PackageRepositoryImpl.update(PackageRepositoryImpl.java:553) at com.systinet.wasp.admin.PackageRepositoryImpl.init(PackageRepositoryImpl.java:242) at com.idoox.wasp.ModuleRepository.loadModules(ModuleRepository.java:198) at com.systinet.wasp.WaspImpl.boot(WaspImpl.java:383) at org.systinet.wasp.Wasp.init(Wasp.java:151) at com.systinet.transport.servlet.server.Servlet.init(Unknown Source) at weblogic.servlet.internal.StubSecurityHelper$ServletInitAction.run(StubSecurityHelper.java:283) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120) at weblogic.servlet.internal.StubSecurityHelper.createServlet(StubSecurityHelper.java:64) at weblogic.servlet.internal.StubLifecycleHelper.createOneInstance(StubLifecycleHelper.java:58) at weblogic.servlet.internal.StubLifecycleHelper.<init>(StubLifecycleHelper.java:48) at weblogic.servlet.internal.ServletStubImpl.prepareServlet(ServletStubImpl.java:539) at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:244) at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:184) at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3732) at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3696) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120) at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2273) at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2179) at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1490) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256) at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)

    Read the article

  • Code Metrics: Number of IL Instructions

    - by DigiMortal
    In my previous posting about code metrics I introduced how to measure LoC (Lines of Code) in .NET applications. Now let's take a step further and look at how to measure compiled code. This way we can get some picture of what the compiler produces. In this posting I will introduce a code metric called number of IL instructions. NB! The number of IL instructions is not something you can use to measure the productivity of your team. If you want to get a better idea about the context of this metric and LoC then please read my first posting about LoC. What are IL instructions? When code written in some .NET Framework language is compiled, the compiler produces assemblies that contain byte code. These assemblies are executed later by the Common Language Runtime (CLR), the code execution engine of the .NET Framework. The byte code is called Intermediate Language (IL) – a more general language than, for example, C# or VB.NET. You can use the ILDasm tool to convert assemblies to IL assembler so you can read them. As IL instructions are the building blocks of all .NET Framework binary code, these instructions are smaller and highly general – we don't want a very rich low-level language because it would execute more slowly than a more general one. Every method or property call in a .NET Framework language corresponds to a set of IL instructions. There is no 1:1 relationship between a line in a high-level language and a line in IL assembler. There are, for example, more IL instructions than lines of C# code. How many instructions are there? I have no general answer because it really depends on your code. Here you can see some metrics from my current community project that is developed on SharePoint Server 2007. On average I have about 7 IL instructions per line of code. This is not a metric you should rely on; it is just an illustrative example so you can see the difference between the number of lines and the number of IL instructions. Why should I measure the number of IL instructions? Just take a look at the chart above. The compiler does something you cannot see – it compiles your code to IL. This is not an intuitive process because you usually cannot say exactly what the end result will be. You know it in general terms, but you don't know it exactly. Therefore we can expect some surprises, and that's why we should measure the number of IL instructions. For example, you may find a better solution for some method in your source code. It looks nice, it works nicely and everything seems to be okay. But on a server under load your fix may be way slower than the previous code. Although you minimized the number of lines of code, you ended up increasing the number of IL instructions. How to measure the number of IL instructions? My choice is NDepend because Visual Studio is not able to measure this metric. The steps are easy. Open your NDepend project or create a new one and add all your application assemblies to the project (you can also add a Visual Studio solution to the project). Run the project analysis and wait until it is done. You can see the overall stats from the global summary window. This is the same window I used to read the LoC and the number of IL instructions metrics for my chart. Meanwhile I made some changes to my code (enabled advanced caching for the events and event registrations module) and then ran the code analysis again to get results for this section of this posting. NDepend is also able to tell you exactly which parts of the code have problematically many IL instructions. 
    The code quality section of the CQL Query Explorer shows you how many problems there are with members in the analyzed code. If you click on the line Methods too big (NbILInstructions) you can see all the problematic members of classes in the CQL Explorer, shown in the image on the right. In my case I have 10 methods that are too big, and two of them have a horrible number of IL instructions – just take a look at the first two methods in this TOP10. Also note the query box. NDepend has an easy, SQL-like query language for querying code analysis results. You can modify these queries if you like, and you can also define your own if the default set is not enough for you. What is a good result? As you can see from the query window, a member should have at most 200 IL instructions. Of course, as always, the fewer instructions you have, the better your code performs. I don't mean small differences here, but big ones. For example, take a look at the first method in the warnings list. The number of IL instructions it has is huge. And believe me – this method looks awful. Conclusion The number of IL instructions is a useful metric when optimizing your code. For analyzing code at a general level to find overly long methods you can use the LoC metric, because it is more intuitive and therefore easier to act on. You can also use NDepend as your code metrics tool, because it has a lot of metrics to offer.
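    As a side note on the query box mentioned above: the "Methods too big (NbILInstructions)" warning is itself just a CQL rule, so adjusting the threshold is a one-line edit. The query below is a sketch of what such a rule looks like in the older CQL syntax; the 200-instruction threshold mirrors the default discussed above, but check your NDepend version's rule set for the exact wording.

```sql
// Sketch of a CQL rule that flags methods with too many IL instructions.
WARN IF Count > 0 IN
SELECT METHODS WHERE NbILInstructions > 200
ORDER BY NbILInstructions DESC
```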

    Read the article

  • ASP.NET Web API - Screencast series Part 4: Paging and Querying

    - by Jon Galloway
    We're continuing a six part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools. In Part 2 we started to build up a sample that returns data from a repository in JSON format via GET methods. In Part 3, we modified data on the server using DELETE and POST methods. In Part 4, we'll extend on our simple querying methods form Part 2, adding in support for paging and querying. This part shows two approaches to querying data (paging really just being a specific querying case) - you can do it yourself using parameters passed in via querystring (as well as headers, other route parameters, cookies, etc.). You're welcome to do that if you'd like. What I think is more interesting here is that Web API actions that return IQueryable automatically support OData query syntax, making it really easy to support some common query use cases like paging and filtering. A few important things to note: This is just support for OData query syntax - you're not getting back data in OData format. The screencast demonstrates this by showing the GET methods are continuing to return the same JSON they did previously. So you don't have to "buy in" to the whole OData thing, you're just able to use the query syntax if you'd like. This isn't full OData query support - full OData query syntax includes a lot of operations and features - but it is a pretty good subset: filter, orderby, skip, and top. All you have to do to enable this OData query syntax is return an IQueryable rather than an IEnumerable. Often, that could be as simple as using the AsQueryable() extension method on your IEnumerable. Query composition support lets you layer queries intelligently. If, for instance, you had an action that showed products by category using a query in your repository, you could also support paging on top of that. The result is an expression tree that's evaluated on-demand and includes both the Web API query and the underlying query. So with all those bullet points and big words, you'd think this would be hard to hook up. Nope, all I did was change the return type from IEnumerable<Comment> to IQueryable<Comment> and convert the Get() method's IEnumerable result using the .AsQueryable() extension method. public IQueryable<Comment> GetComments() { return repository.Get().AsQueryable(); } You still need to build up the query to provide the $top and $skip on the client, but you'd need to do that regardless. Here's how that looks: $(function () { //--------------------------------------------------------- // Using Queryable to page //--------------------------------------------------------- $("#getCommentsQueryable").click(function () { viewModel.comments([]); var pageSize = $('#pageSize').val(); var pageIndex = $('#pageIndex').val(); var url = "/api/comments?$top=" + pageSize + '&$skip=' + (pageIndex * pageSize); $.getJSON(url, function (data) { // Update the Knockout model (and thus the UI) with the comments received back // from the Web API call. 
viewModel.comments(data); }); return false; }); }); And the neat thing is that - without any modification to our server-side code - we can modify the above jQuery call to request the comments be sorted by author: $(function () { //--------------------------------------------------------- // Using Queryable to page //--------------------------------------------------------- $("#getCommentsQueryable").click(function () { viewModel.comments([]); var pageSize = $('#pageSize').val(); var pageIndex = $('#pageIndex').val(); var url = "/api/comments?$top=" + pageSize + '&$skip=' + (pageIndex * pageSize) + '&$orderby=Author'; $.getJSON(url, function (data) { // Update the Knockout model (and thus the UI) with the comments received back // from the Web API call. viewModel.comments(data); }); return false; }); }); So if you want to make use of OData query syntax, you can. If you don't like it, you're free to hook up your filtering and paging however you think is best. Neat. In Part 5, we'll add on support for Data Annotation based validation using an Action Filter.
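    To make the query-composition point concrete: an action that already filters in the repository can still return IQueryable, and any $top/$skip/$orderby supplied by the client composes on top of that filter in a single deferred query. The action below is only an illustrative sketch – the GetCommentsByAuthor name and the Author property are assumptions modeled on the comment sample, not part of the series' code.

```csharp
// Sketch: the client's OData query options layer on top of the server-side
// filter because the action returns IQueryable rather than IEnumerable.
public IQueryable<Comment> GetCommentsByAuthor(string author)
{
    return repository.Get()
                     .Where(c => c.Author == author)
                     .AsQueryable();
}
```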

    Read the article

  • Oracle Enterprise Manager Ops Center 12c : Enterprise Controller High Availability (EC HA)

    - by Anand Akela
    Contributed by Mahesh sharma, Oracle Enterprise Manager Ops Center team In Oracle Enterprise Manager Ops Center 12c we introduced a new feature to make the Enterprise Controllers highly available. With EC HA if the hardware crashes, or if the Enterprise Controller services and/or the remote database stop responding, then the enterprise services are immediately restarted on the other standby Enterprise Controller without administrative intervention. In today's post, I'll briefly describe EC HA, look at some of the prerequisites and then show some screen shots of how the Enterprise Controller is represented in the BUI. In my next post, I'll show you how to install the EC in a HA environment and some of the new commands. What is EC HA? Enterprise Controller High Availability (EC HA) provides an active/standby fail-over solution for two or more Ops Center Enterprise Controllers, all within an Oracle Clusterware framework. This allows EC resources to relocate to a standby if the hardware crashes, or if certain services fail. It is also possible to manually relocate the services if maintenance on the active EC is required. When the EC services are relocated to the standby, EC services are interrupted only for the period it takes for the EC services to stop on the active node and to start back up on a standby node. What are the prerequisites? To install EC in a HA framework an understanding of the prerequisites are required. There are many possibilities on how these prerequisites can be installed and configured - we will not discuss these in this post. However, best practices should be applied when installing and configuring, I would suggest that you get expert help if you are not familiar with them. Lets briefly look at each of these prerequisites in turn: Hardware : Servers are required to host the active and standby node(s). As the nodes will be in a clustered environment, they need to be the same model and configured identically. The nodes should have the same processor class, number of cores, memory, network cards, for example. Operating System : We can use Solaris 10 9/10 or higher, Solaris 11, OEL 5.5 or higher on x86 or Sparc Network : There are a number of requirements for network cards in clusterware, and cables should be networked identically on all the nodes. We must also consider IP allocation for public / private and Virtual IP's (VIP's). Storage : Shared storage will be required for the cluster voting disks, Oracle Cluster Register (OCR) and the EC's libraries. Clusterware : Oracle Clusterware version 11.2.0.3 or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html Remote Database : Oracle RDBMS 11.1.0.x or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html For detailed information on how to install EC HA , please read : http://docs.oracle.com/cd/E27363_01/doc.121/e25140/install_config-shared.htm#OPCSO242 For detailed instructions on installing Oracle Clusterware, please read : http://docs.oracle.com/cd/E11882_01/install.112/e17214/chklist.htm#BHACBGII For detailed instructions on installing the remote Oracle database have a read of: http://www.oracle.com/technetwork/database/enterprise-edition/documentation/index.html The schematic diagram below gives a visual view of how the prerequisites are connected. When a fail-over occurs the Enterprise Controller resources and the VIP are relocated to one of the standby nodes. 
The standby node then becomes active and all Ops Center services are resumed. Connecting to the Enterprise Controller from your favourite browser. Let's presume we have installed and configured all the prerequisites, and installed Ops Center on the active and standby nodes. We can now connect to the active node from a browser i.e. http://<active_node1>/, this will redirect us to the virtual IP address (VIP). The VIP is the IP address that moves with the Enterprise Controller resource. Once you log on and view the assets, you will see some new symbols, these represent that the nodes are cluster members, with one being an active member and the other a standby member in this case. If you connect to the standby node, the browser will redirect you to a splash page, indicating that you have connected to the standby node. Hope you find this topic interesting. Next time I will post about how to install the Enterprise Controller in the HA frame work. Stay Connected: Twitter |  Face book |  You Tube |  Linked in |  Newsletter

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage.  For example, if you want to give each developer a full copy of your production server, you’ll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you’ve read my previous blog posts, you’ll be aware that I’ve been focusing on the database continuous integration theme. In my CI setup I create a “production”-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it’s not the exact equivalent to running the upgrade script on a copy of the actual production database. So why shouldn’t I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn’t an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley’s “Continuous Delivery” teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you’ve been allotted. 2. It’s not just about the storage requirements, it’s also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I’m just not going to get the feedback quickly enough to react. So what’s the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I’m sure the technical implementation details under the hood are quite complex!) Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data and log file, .vmdf, and .vldf, that becomes the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database as from SQL Server’s point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there is no ‘duplicate’ storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly “release test” process triggered by my CI tool. 
RESTORE DATABASE WidgetProduction_Virtual FROM DISK=N'D:\VirtualDatabase\WidgetProduction.bak' WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf', MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf', NORECOVERY, STATS=1, REPLACE GO RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY   Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the ‘virtual’ restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here is my 8Gb production database: And its corresponding backup file: Here are the .vldf and .vmdf files, which represent the only additional used storage for the new database following the virtual restore.   The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server
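    One follow-on to the restore shown above: because the benefit grows with every database mounted against the same backup, a second virtual copy (say, for a developer sandbox) is just another RESTORE with different .vmdf/.vldf names. The statement below is a sketch – the WidgetDev_Virtual name and file paths are placeholders, not taken from the original post.

```sql
-- Sketch: mount a second virtual database from the same production backup.
RESTORE DATABASE WidgetDev_Virtual
FROM DISK = N'D:\VirtualDatabase\WidgetProduction.bak'
WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetDev_Virtual.vmdf',
     MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetDev_Virtual.vldf',
     NORECOVERY, STATS = 1, REPLACE
GO
RESTORE DATABASE WidgetDev_Virtual WITH RECOVERY
GO
```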

    Read the article

  • SQL SERVER – Planned and Unplanned Availability Group Failovers – Notes from the Field #031

    - by Pinal Dave
    [Note from Pinal]: This is a new episode of the Notes from the Field series. AlwaysOn is a very complex subject and not everyone knows many things about it. The fact of the matter is that there is very little information available on this subject online, and not everyone knows everything about it. This is why, when a very common question related to AlwaysOn comes up, people get confused. In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and developers face in their careers, related to planned and unplanned availability group failovers. Linchpin People are database coaches and wellness experts for a data driven world. Read John's experience in his own words. Whenever a disaster occurs it will be a stressful scenario regardless of how small or big the disaster is. This gets multiplied when it is your first time working with newer technology or the first time you are going through a disaster without a proper run book. Today, we're going to help you establish a run book for performing a planned failover with availability groups. To keep today's session simple, we're going to have two instances of SQL Server 2012 included in an availability group and walk through the steps of doing an unplanned failover.  We will focus on using the user interface and T-SQL to complete the failovers. We are going to use a two-replica availability group where each replica is in a different location. Therefore, we will be covering asynchronous commit (non-automatic failover); the following is a breakdown of the availability group used today. Seeing the following screen might be scary the first time you come across an unplanned failover.  It looks like the test database used in this availability group is not functional, and it currently isn't. The database status is "not synchronizing", which makes sense because the primary replica went down, so it couldn't synchronize. With that said, we can still fail over and make it functional while we troubleshoot why we lost our primary replica. To start, we are going to right-click on the availability group that needs to be restarted and select Failover. This will bring up the following wizard, which will walk you through the several steps needed to complete the failover using the graphical user interface provided with SQL Server Management Studio (SSMS). You are going to see warning messages simply because we are in asynchronous commit mode and cannot guarantee 'no data loss' when we fail over. Just in case you missed it, you get another screen warning you about potential data loss because we are in asynchronous mode. Next we get to connect to the specific replica we want to become the primary replica after the failover occurs. In our case, we only have two replicas, so this is trivial. In order to fail over, you are required to connect to the replica that will become primary.  The following screen shows that the connection has been made successfully. Next, you will see the final summary screen. Once again, this reminds you that the failover action will cause data loss, as we're using asynchronous commit mode due to the distance between the instances used for disaster recovery. Finally, once the failover is completed, you will see the following screen. If you followed along this far, you might be wondering what T-SQL scripts are generated by clicking through all the sections of the wizard. If you have used database mirroring in the past, you might be surprised.  
    It's not too different, which makes sense because the data is being replicated via SQL Server endpoints just like good old database mirroring. Now we're going to take a look at how to do a failover with just T-SQL. First, we're going to need to open a new query window and run our query in SQLCMD mode. Just in case you haven't used SQLCMD mode before, we will show you how to enable it below. Now you can run the following statement. Notice that we connect to the replica we want to become primary after the failover and specify a forced failover to allow data loss. We can use the following script to fail back when our primary instance comes back online. -- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE. :Connect SQL2012PROD1 ALTER AVAILABILITY GROUP [AGSQL2] FORCE_FAILOVER_ALLOW_DATA_LOSS; GO Are your servers running at optimal speed or are you facing any SQL Server performance problems? If you want to get started with the help of experts, read more over here: Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
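    A footnote on the failback script above: FORCE_FAILOVER_ALLOW_DATA_LOSS is the only form allowed while the replicas are in asynchronous commit. If you temporarily switch the availability group to synchronous commit and wait for the secondary to synchronize, the no-data-loss form can be used instead. The sketch below reuses the post's instance and group names and must be run in SQLCMD mode on the replica that is about to become primary; verify data movement is healthy before running anything like it.

```sql
-- Sketch only: no-data-loss failback, valid once the replicas are in
-- synchronous commit and the target secondary reports SYNCHRONIZED.
:Connect SQL2012PROD1
ALTER AVAILABILITY GROUP [AGSQL2] FAILOVER;
GO
```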

    Read the article

  • Oracle Linux and Oracle VM pricing guide

    - by wcoekaer
    A few days ago someone showed me a pricing guide from a Linux vendor and I was a bit surprised at the complexity of it. Especially when you look at larger servers (4 or 8 sockets) and when adding virtual machine use into the mix. I think we have a very compelling and simple pricing model for both Oracle Linux and Oracle VM. Let me see if I can explain it in 1 page, not 10 pages. This pricing information is publicly available on the Oracle store, I am using the current public list prices. Also keep in mind that this is for customers using non-oracle x86 servers. When a customer purchases an Oracle x86 server, the annual systems support includes full use (all you can eat) of Oracle Linux, Oracle VM and Oracle Solaris (no matter how many VMs you run on that server, in case you deploy guests on a hypervisor). This support level is the equivalent of premier support in the list below. Let's start with Oracle VM (x86) : Oracle VM support subscriptions are per physical server on which you deploy the Oracle VM Server product. (1) Oracle VM Premier Limited - 1- or 2 socket server : $599 per server per year (2) Oracle VM Premier - more than 2 socket server (4, or 8 or whatever more) : $1199 per server per year The above includes the use of Oracle VM Manager and Oracle Enterprise Manager Cloud Control's Virtualization management pack (including self service cloud portal, etc..) 24x7 support, access to bugfixes, updates and new releases. It also includes all options, live migrate, dynamic resource scheduling, high availability, dynamic power management, etc If you want to play with the product, or even use the product without access to support services, the product is freely downloadable from edelivery. Next, Oracle Linux : Oracle Linux support subscriptions are per physical server. If you plan to run Oracle Linux as a guest on Oracle VM, VMWare or Hyper-v, you only have to pay for a single subscription per system, we do not charge per guest or per number of guests. In other words, you can run any number of Oracle Linux guests per physical server and count it as just a single subscription. (1) Oracle Linux Network Support - any number of sockets per server : $119 per server per year Network support does not offer support services. It provides access to the Unbreakable Linux Network and also offers full indemnification for Oracle Linux. (2) Oracle Linux Basic Limited Support - 1- or 2 socket servers : $499 per server per year This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA and use of Oracle Enterprise Manager Cloud control for Linux OS management. It includes ocfs2 as a clustered filesystem. (3) Oracle Linux Basic Support - more than 2 socket server (4, or 8 or more) : $1199 per server per year This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA and use of Oracle Enterprise Manager Cloud control for Linux OS management. It includes ocfs2 as a clustered filesystem (4) Oracle Linux Premier Limited Support - 1- or 2 socket servers : $1399 per server per year This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA and use of Oracle Enterprise Manager Cloud control for Linux OS management, XFS filesystem support. 
It also offers Oracle Lifetime support, backporting of patches for critical customers in previous versions of package and ksplice zero-downtime updates. (5) Oracle Linux Premier Support - more than 2 socket servers : $2299 per server per year This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA and use of Oracle Enterprise Manager Cloud control for Linux OS management, XFS filesystem support. It also offers Oracle Lifetime support, backporting of patches for critical customers in previous versions of package and ksplice zero-downtime updates. (6) Freely available Oracle Linux - any number of sockets You can freely download Oracle Linux, install it on any number of servers and use it for any reason, without support, without right to use of these extra features like Oracle Clusterware or ksplice, without indemnification. However, you do have full access to all errata as well. Need support? then use options (1)..(5) So that's it. Count number of 2 socket boxes, more than 2 socket boxes, decide on basic or premier support level and you are done. You don't have to worry about different levels based on how many virtual instance you deploy or want to deploy. A very simple menu of choices. We offer, inclusive, Linux OS clusterware, Linux OS Management, provisioning and monitoring, cluster filesystem (ocfs), high performance filesystem (xfs), dtrace, ksplice, ofed (infiniband stack for high performance networking). No separate add-on menus. NOTE : socket/cpu can have any number of cores. So whether you have a 4,6,8,10 or 12 core CPU doesn't matter, we count the number of physical CPUs.

    Read the article

  • Editing sqlcmdvariable nodes in SSDT Publish Profile files using msbuild

    - by jamiet
    Publish profile files are a new feature of SSDT database projects that enable you to package up all environment-specific properties into a single file for use at publish time; I have written about them before at Publish Profile Files in SQL Server Data Tools (SSDT) and if it wasn’t obvious from that blog post, I’m a big fan! As I have used Publish Profile files more and more I have realised that there may be times when you need to edit those Publish Profile files during your build process; you may think of such an operation as a kind of pre-processor step. In my case I have a sqlcmd variable called DeployTag, which holds a value representing the current build number that later gets inserted into a table using a Post-Deployment script (a technique that I wrote about in Implementing SQL Server solutions using Visual Studio 2010 Database Projects – a compendium of project experiences – search for “Putting a build number into the DB”). My Publish Profile file (simplified for demo purposes) contains a SqlCmdVariable element named DeployTag whose Value defaults to “UNKNOWN”. On my current project we are using msbuild scripts to control what gets built, and what I want to do is take the build number from our build engine and edit the Publish Profile files accordingly. Here is the pertinent portion of the msbuild script I came up with to do that:
    <ItemGroup>
      <Namespaces Include="myns">
        <Prefix>myns</Prefix>
        <Uri>http://schemas.microsoft.com/developer/msbuild/2003</Uri>
      </Namespaces>
    </ItemGroup>
    <Target Name="UpdateBuildNumber">
      <ItemGroup>
        <SSDTPublishFiles Include="$(DESTINATION)\**\$(CONFIGURATION)\**\*.publish.xml" />
      </ItemGroup>
      <MSBuild.ExtensionPack.Xml.XmlFile Condition="%(SSDTPublishFiles.Identity) != ''"
                                         TaskAction="UpdateElement"
                                         File="%(SSDTPublishFiles.Identity)"
                                         Namespaces="@(Namespaces)"
                                         XPath="//myns:SqlCmdVariable[@Include='DeployTag']/myns:Value"
                                         InnerText="$(BuildNumber)"/>
    </Target>
    The important bits here are the definition of the namespace http://schemas.microsoft.com/developer/msbuild/2003 and the XPath expression //myns:SqlCmdVariable[@Include='DeployTag']/myns:Value. Some extra info: I use a fantastic tool called XMLPad to discover/test XPath expressions, read more at XMLPad – a new tool in my developer utility belt. MSBuild.ExtensionPack.Xml.XmlFile is an msbuild task used to edit XML files and is available from Mike Fourie’s MSBuild Extension Pack. I’m using a property called $(BuildNumber) to hold the value to substitute into the file and also $(DESTINATION)\**\$(CONFIGURATION)\**\*.publish.xml to define an ItemGroup containing all of my Publish Profile files. Populating those properties is basic msbuild stuff and is therefore outside the scope of this blog post; however, if you want to learn more check out MSBuild properties & How To: Use Wildcards to Build All Files in a Directory. Hope this is useful! @Jamiet
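    For reference, the same substitution can also be scripted outside of msbuild. The following C# sketch is my own illustration rather than part of the original post; it assumes the profile contains a SqlCmdVariable element named DeployTag with a Value child in the http://schemas.microsoft.com/developer/msbuild/2003 namespace, and the rootFolder/buildNumber inputs are hypothetical values supplied by your build engine.

    using System;
    using System.IO;
    using System.Linq;
    using System.Xml.Linq;

    class PublishProfilePatcher
    {
        // Namespace used by SSDT publish profile files (the same one referenced in the msbuild script above).
        static readonly XNamespace ns = "http://schemas.microsoft.com/developer/msbuild/2003";

        // Rewrites the Value of the DeployTag sqlcmd variable in every *.publish.xml file under rootFolder.
        static void PatchDeployTag(string rootFolder, string buildNumber)
        {
            foreach (var file in Directory.EnumerateFiles(rootFolder, "*.publish.xml", SearchOption.AllDirectories))
            {
                var doc = XDocument.Load(file);

                var values = doc.Descendants(ns + "SqlCmdVariable")
                                .Where(v => (string)v.Attribute("Include") == "DeployTag")
                                .Select(v => v.Element(ns + "Value"));

                foreach (var value in values.Where(v => v != null))
                {
                    value.Value = buildNumber; // e.g. "1.0.1234.0"
                }

                doc.Save(file);
            }
        }

        static void Main(string[] args)
        {
            PatchDeployTag(args[0], args[1]);
        }
    }

    Either approach does the same thing; keeping the edit inside the msbuild script, as shown in the post, is usually the tidier option because the build engine already owns the $(BuildNumber) property.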

    Read the article

  • SQL SERVER – SSIS Parameters in Parent-Child ETL Architectures – Notes from the Field #040

    - by Pinal Dave
    [Notes from Pinal]: SSIS is very well explored subject, however, there are so many interesting elements when we read, we learn something new. A similar concept has been Parent-Child ETL architecture’s relationship in SSIS. Linchpin People are database coaches and wellness experts for a data driven world. In this 40th episode of the Notes from the Fields series database expert Tim Mitchell (partner at Linchpin People) shares very interesting conversation related to how to understand SSIS Parameters in Parent-Child ETL Architectures. In this brief Notes from the Field post, I will review the use of SSIS parameters in parent-child ETL architectures. A very common design pattern used in SQL Server Integration Services is one I call the parent-child pattern.  Simply put, this is a pattern in which packages are executed by other packages.  An ETL infrastructure built using small, single-purpose packages is very often easier to develop, debug, and troubleshoot than large, monolithic packages.  For a more in-depth look at parent-child architectures, check out my earlier blog post on this topic. When using the parent-child design pattern, you will frequently need to pass values from the calling (parent) package to the called (child) package.  In older versions of SSIS, this process was possible but not necessarily simple.  When using SSIS 2005 or 2008, or even when using SSIS 2012 or 2014 in package deployment mode, you would have to create package configurations to pass values from parent to child packages.  Package configurations, while effective, were not the easiest tool to work with.  Fortunately, starting with SSIS in SQL Server 2012, you can now use package parameters for this purpose. In the example I will use for this demonstration, I’ll create two packages: one intended for use as a child package, and the other configured to execute said child package.  In the parent package I’m going to build a for each loop container in SSIS, and use package parameters to pass in a value – specifically, a ClientID – for each iteration of the loop.  The child package will be executed from within the for each loop, and will create one output file for each client, with the source query and filename dependent on the ClientID received from the parent package. Configuring the Child and Parent Packages When you create a new package, you’ll see the Parameters tab at the package level.  Clicking over to that tab allows you to add, edit, or delete package parameters. As shown above, the sample package has two parameters.  Note that I’ve set the name, data type, and default value for each of these.  Also note the column entitled Required: this allows me to specify whether the parameter value is optional (the default behavior) or required for package execution.  In this example, I have one parameter that is required, and the other is not. Let’s shift over to the parent package briefly, and demonstrate how to supply values to these parameters in the child package.  Using the execute package task, you can easily map variable values in the parent package to parameters in the child package. The execute package task in the parent package, shown above, has the variable vThisClient from the parent package mapped to the pClientID parameter shown earlier in the child package.  Note that there is no value mapped to the child package parameter named pOutputFolder.  
Since this parameter has the Required property set to False, we don’t have to specify a value for it, which will cause that parameter to use the default value we supplied when designing the child pacakge. The last step in the parent package is to create the for each loop container I mentioned earlier, and place the execute package task inside it.  I’m using an object variable to store the distinct client ID values, and I use that as the iterator for the loop (I describe how to do this more in depth here).  For each iteration of the loop, a different client ID value will be passed into the child package parameter. The final step is to configure the child package to actually do something meaningful with the parameter values passed into it.  In this case, I’ve modified the OleDB source query to use the pClientID value in the WHERE clause of the query to restrict results for each iteration to a single client’s data.  Additionally, I’ll use both the pClientID and pOutputFolder parameters to dynamically build the output filename. As shown, the pClientID is used in the WHERE clause, so we only get the current client’s invoices for each iteration of the loop. For the flat file connection, I’m setting the Connection String property using an expression that engages both of the parameters for this package, as shown above. Parting Thoughts There are many uses for package parameters beyond a simple parent-child design pattern.  For example, you can create standalone packages (those not intended to be used as a child package) and still use parameters.  Parameter values may be supplied to a package directly at runtime by a SQL Server Agent job, through the command line (via dtexec.exe), or through T-SQL. Also, you can also have project parameters as well as package parameters.  Project parameters work in much the same way as package parameters, but the parameters apply to all packages in a project, not just a single package. Conclusion Of the numerous advantages of using catalog deployment model in SSIS 2012 and beyond, package parameters are near the top of the list.  Parameters allow you to easily share values from parent to child packages, enabling more dynamic behavior and better code encapsulation. If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • A C# implementation of the CallStream pattern

    - by Bertrand Le Roy
    Dusan published this interesting post a couple of weeks ago about a novel JavaScript chaining pattern: http://dbj.org/dbj/?p=514 It’s similar to many existing patterns, but the syntax is extraordinarily terse and it provides a new form of friction-free, plugin-less extensibility mechanism. Here’s a JavaScript example from Dusan’s post: CallStream("#container") (find, "div") (attr, "A", 1) (css, "color", "#fff") (logger); The interesting thing here is that the functions that are being passed as the first argument are arbitrary, they don’t need to be declared as plug-ins. Compare that with a rough jQuery equivalent that could look something like this: $.fn.logger = function () { /* ... */ } $("selector") .find("div") .attr("A", 1) .css("color", "#fff") .logger(); There is also the “each” method in jQuery that achieves something similar, but its syntax is a little more verbose. Of course, that this pattern can be expressed so easily in JavaScript owes everything to the extraordinary way functions are treated in that language, something Douglas Crockford called “the very best part of JavaScript”. One of the first things I thought while reading Dusan’s post was how I could adapt that to C#. After all, with Lambdas and delegates, C# also has its first-class functions. And sure enough, it works really really well. After about ten minutes, I was able to write this: CallStreamFactory.CallStream (p => Console.WriteLine("Yay!")) (Dump, DateTime.Now) (DumpFooAndBar, new { Foo = 42, Bar = "the answer" }) (p => Console.ReadKey()); Where the Dump function is: public static void Dump(object options) { Console.WriteLine(options.ToString()); } And DumpFooAndBar is: public static void DumpFooAndBar(dynamic options) { Console.WriteLine("Foo is {0} and bar is {1}.", options.Foo, options.Bar); } So how does this work? Well, it really is very simple. And not. Let’s say it’s not a lot of code, but if you’re like me you might need an Advil after that. First, I defined the signature of the CallStream method as follows: public delegate CallStream CallStream (Action<object> action, object options = null); The delegate define a call stream as something that takes an action (a function of the options) and an optional options object and that returns a delegate of its own type. Tricky, but that actually works, a delegate can return its own type. Then I wrote an implementation of that delegate that calls the action and returns itself: public static CallStream CallStream (Action<object> action, object options = null) { action(options); return CallStream; } Pretty nice, eh? Well, yes and no. What we are doing here is to execute a sequence of actions using an interesting novel syntax. But for this to be actually useful, you’d need to build a more specialized call stream factory that comes with some sort of context (like Dusan did in JavaScript). 
For example, you could write the following alternate delegate signature that takes a string and returns itself: public delegate StringCallStream StringCallStream(string message); And then write the following call stream (notice the currying): public static StringCallStream CreateDumpCallStream(string dumpPath) { StringCallStream str = null; var dump = File.AppendText(dumpPath); dump.AutoFlush = true; str = s => { dump.WriteLine(s); return str; }; return str; } (I know, I’m not closing that stream; sure; bad, bad Bertrand) Finally, here’s how you use it: CallStreamFactory.CreateDumpCallStream(@".\dump.txt") ("Wow, this really works.") (DateTime.Now.ToLongTimeString()) ("And that is all."); Next step would be to combine this contextual implementation with the one that takes an action parameter and do some really fun stuff. I’m only scratching the surface here. This pattern could reveal itself to be nothing more than a gratuitous mind-bender or there could be applications that we hardly suspect at this point. In any case, it’s a fun new construct. Or is this nothing new? You tell me… Comments are open :)
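    As a purely speculative sketch of that "next step" (the ContextualCallStream name and the TextWriter context are my own invention, not something from the original post), you could curry the shared context into a stream that still accepts arbitrary actions plus an options object:

    using System;
    using System.IO;

    public delegate ContextualCallStream ContextualCallStream(Action<TextWriter, object> action, object options = null);

    public static class CallStreamFactory
    {
        // Creates a call stream whose actions all share the same TextWriter context.
        public static ContextualCallStream CreateDumpCallStream(string dumpPath)
        {
            ContextualCallStream stream = null;
            var writer = File.AppendText(dumpPath);
            writer.AutoFlush = true;

            stream = (action, options) =>
            {
                action(writer, options); // every action receives the shared context plus its own options
                return stream;           // returning the delegate itself keeps the chain going
            };

            return stream;
        }
    }

    // Usage:
    // CallStreamFactory.CreateDumpCallStream(@".\dump.txt")
    //     ((w, o) => w.WriteLine("Wow, this really works."))
    //     ((w, o) => w.WriteLine(DateTime.Now.ToLongTimeString()))
    //     ((w, o) => w.WriteLine("Options were: {0}", o), new { Foo = 42 });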

    Read the article

  • Super Joybox 5 HID 0925:8884 not recognized as joystick in Ubuntu 12.04 LTS

    - by Tim Evans
    Problem: When using the "Super JoyBox 5" 4 port playstation 2 to USB adapter, the device is not recognized as a joystick. there is no js0 created, but instead another input eventX and mouseX are created in /dev/input. When using the directional buttons (up down left right) on a Playstation 1 controller attached to the device, the mouse cursor moves to the top, bottom, left, and right edges of the screen respectively. Buttons are unresponsive. The joypads attached to the device cannot be used in any games or other programs. Attempted remedies: Creating a symlink from the eventX to js0 does not solve the problem. Addl Info: joydev is loaded and running peroperly according to LSMOD. evtest can be run on the created eventX (sudo evtest /dev/input/event14 in my case) and the buttons and axes all register inputs. Here is a paste of EVTEST's diagnostic and the first couple button events. [code] sudo evtest /dev/input/event14 Input driver version is 1.0.1 Input device ID: bus 0x3 vendor 0x925 product 0x8884 version 0x100 Input device name: "HID 0925:8884" Supported events: Event type 0 (EV_SYN) Event type 1 (EV_KEY) Event code 288 (BTN_TRIGGER) Event code 289 (BTN_THUMB) Event code 290 (BTN_THUMB2) Event code 291 (BTN_TOP) Event code 292 (BTN_TOP2) Event code 293 (BTN_PINKIE) Event code 294 (BTN_BASE) Event code 295 (BTN_BASE2) Event code 296 (BTN_BASE3) Event code 297 (BTN_BASE4) Event code 298 (BTN_BASE5) Event code 299 (BTN_BASE6) Event code 300 (?) Event code 301 (?) Event code 302 (?) Event code 303 (BTN_DEAD) Event code 304 (BTN_A) Event code 305 (BTN_B) Event code 306 (BTN_C) Event code 307 (BTN_X) Event code 308 (BTN_Y) Event code 309 (BTN_Z) Event code 310 (BTN_TL) Event code 311 (BTN_TR) Event code 312 (BTN_TL2) Event code 313 (BTN_TR2) Event code 314 (BTN_SELECT) Event code 315 (BTN_START) Event code 316 (BTN_MODE) Event code 317 (BTN_THUMBL) Event code 318 (BTN_THUMBR) Event code 319 (?) Event code 320 (BTN_TOOL_PEN) Event code 321 (BTN_TOOL_RUBBER) Event code 322 (BTN_TOOL_BRUSH) Event code 323 (BTN_TOOL_PENCIL) Event code 324 (BTN_TOOL_AIRBRUSH) Event code 325 (BTN_TOOL_FINGER) Event code 326 (BTN_TOOL_MOUSE) Event code 327 (BTN_TOOL_LENS) Event code 328 (?) Event code 329 (?) Event code 330 (BTN_TOUCH) Event code 331 (BTN_STYLUS) Event code 332 (BTN_STYLUS2) Event code 333 (BTN_TOOL_DOUBLETAP) Event code 334 (BTN_TOOL_TRIPLETAP) Event code 335 (BTN_TOOL_QUADTAP) Event type 3 (EV_ABS) Event code 0 (ABS_X) Value 127 Min 0 Max 255 Flat 15 Event code 1 (ABS_Y) Value 127 Min 0 Max 255 Flat 15 Event code 2 (ABS_Z) Value 127 Min 0 Max 255 Flat 15 Event code 3 (ABS_RX) Value 127 Min 0 Max 255 Flat 15 Event code 4 (ABS_RY) Value 127 Min 0 Max 255 Flat 15 Event code 5 (ABS_RZ) Value 127 Min 0 Max 255 Flat 15 Event code 6 (ABS_THROTTLE) Value 127 Min 0 Max 255 Flat 15 Event code 7 (ABS_RUDDER) Value 127 Min 0 Max 255 Flat 15 Event code 8 (ABS_WHEEL) Value 127 Min 0 Max 255 Flat 15 Event code 9 (ABS_GAS) Value 127 Min 0 Max 255 Flat 15 Event code 10 (ABS_BRAKE) Value 127 Min 0 Max 255 Flat 15 Event code 11 (?) Value 127 Min 0 Max 255 Flat 15 Event code 12 (?) Value 127 Min 0 Max 255 Flat 15 Event code 13 (?) Value 127 Min 0 Max 255 Flat 15 Event code 14 (?) Value 127 Min 0 Max 255 Flat 15 Event code 15 (?) 
Value 127 Min 0 Max 255 Flat 15 Event code 16 (ABS_HAT0X) Value 0 Min -1 Max 1 Event code 17 (ABS_HAT0Y) Value 0 Min -1 Max 1 Event code 18 (ABS_HAT1X) Value 0 Min -1 Max 1 Event code 19 (ABS_HAT1Y) Value 0 Min -1 Max 1 Event code 20 (ABS_HAT2X) Value 0 Min -1 Max 1 Event code 21 (ABS_HAT2Y) Value 0 Min -1 Max 1 Event code 22 (ABS_HAT3X) Value 0 Min -1 Max 1 Event code 23 (ABS_HAT3Y) Value 0 Min -1 Max 1 Event type 4 (EV_MSC) Event code 4 (MSC_SCAN) Testing ... (interrupt to exit) Event: time 1351223176.126127, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90001 Event: time 1351223176.126130, type 1 (EV_KEY), code 288 (BTN_TRIGGER), value 1 Event: time 1351223176.126166, -------------- SYN_REPORT ------------ Event: time 1351223178.238127, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90001 Event: time 1351223178.238130, type 1 (EV_KEY), code 288 (BTN_TRIGGER), value 0 Event: time 1351223178.238167, -------------- SYN_REPORT ------------ Event: time 1351223180.422127, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90002 Event: time 1351223180.422129, type 1 (EV_KEY), code 289 (BTN_THUMB), value 1 Event: time 1351223180.422163, -------------- SYN_REPORT ------------ Event: time 1351223181.558099, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90002 Event: time 1351223181.558102, type 1 (EV_KEY), code 289 (BTN_THUMB), value 0 Event: time 1351223181.558137, -------------- SYN_REPORT ------------ Event: time 1351223182.486137, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90003 Event: time 1351223182.486140, type 1 (EV_KEY), code 290 (BTN_THUMB2), value 1 Event: time 1351223182.486172, -------------- SYN_REPORT ------------ Event: time 1351223183.302130, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90003 Event: time 1351223183.302132, type 1 (EV_KEY), code 290 (BTN_THUMB2), value 0 Event: time 1351223183.302165, -------------- SYN_REPORT ------------ Event: time 1351223184.030133, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90004 Event: time 1351223184.030136, type 1 (EV_KEY), code 291 (BTN_TOP), value 1 Event: time 1351223184.030166, -------------- SYN_REPORT ------------ Event: time 1351223184.558135, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90004 Event: time 1351223184.558138, type 1 (EV_KEY), code 291 (BTN_TOP), value 0 Event: time 1351223184.558168, -------------- SYN_REPORT ------------ [/code] The directional buttons on the pad are being identified as HAT0Y and HAT0X axes, thats zero, not the letter O. Aparently, this device used to work flawlessly on kernel 2.4.x systems, and even as late as ubunto 10.04. Perhaps the Joydev rules for identifying joypads has changed? Currently, this kind of bug is affecting a few different type of controller adapters, but since this is the one that i PERSONALLY have (and has been driving me my own special brand of crazy), its the one im documenting. What i think should be happening instead: The device should be registering js0 through js3, one for each port, or JS0 that will handle all of the connected devices with different numbered axes for each connected joypad. Either way, it should work as a joystick and stop controlling the mouse cursor. Please help!

    Read the article

  • Implementing an Interceptor Using NHibernate’s Built In Dynamic Proxy Generator

    - by Ricardo Peres
    NHibernate 3.2 came with an included proxy generator, which means there is no longer the need – or the possibility, for that matter – to choose Castle DynamicProxy, LinFu or Spring. This is actually a good thing, because it means one less assembly to deploy. Apparently, this generator was based, at least partially, on LinFu. As there are not many tutorials out there demonstrating it’s usage, here’s one, for demonstrating one of the most requested features: implementing INotifyPropertyChanged. This interceptor, of course, will still feature all of NHibernate’s functionalities that you are used to, such as lazy loading, and such. We will start by implementing an NHibernate interceptor, by inheriting from the base class NHibernate.EmptyInterceptor. This class does not do anything by itself, but it allows us to plug in behavior by overriding some of its methods, in this case, Instantiate: 1: public class NotifyPropertyChangedInterceptor : EmptyInterceptor 2: { 3: private ISession session = null; 4:  5: private static readonly ProxyFactory factory = new ProxyFactory(); 6:  7: public override void SetSession(ISession session) 8: { 9: this.session = session; 10: base.SetSession(session); 11: } 12:  13: public override Object Instantiate(String clazz, EntityMode entityMode, Object id) 14: { 15: Type entityType = Type.GetType(clazz); 16: IProxy proxy = factory.CreateProxy(entityType, new _NotifyPropertyChangedInterceptor(), typeof(INotifyPropertyChanged)) as IProxy; 17: 18: _NotifyPropertyChangedInterceptor interceptor = proxy.Interceptor as _NotifyPropertyChangedInterceptor; 19: interceptor.Proxy = this.session.SessionFactory.GetClassMetadata(entityType).Instantiate(id, entityMode); 20:  21: this.session.SessionFactory.GetClassMetadata(entityType).SetIdentifier(proxy, id, entityMode); 22:  23: return (proxy); 24: } 25: } Then we need a class that implements the NHibernate dynamic proxy behavior, let’s place it inside our interceptor, because it will only need to be used there: 1: class _NotifyPropertyChangedInterceptor : NHibernate.Proxy.DynamicProxy.IInterceptor 2: { 3: private PropertyChangedEventHandler changed = delegate { }; 4:  5: public Object Proxy 6: { 7: get; 8: set;} 9:  10: #region IInterceptor Members 11:  12: public Object Intercept(InvocationInfo info) 13: { 14: Boolean isSetter = info.TargetMethod.Name.StartsWith("set_") == true; 15: Object result = null; 16:  17: if (info.TargetMethod.Name == "add_PropertyChanged") 18: { 19: PropertyChangedEventHandler propertyChangedEventHandler = info.Arguments[0] as PropertyChangedEventHandler; 20: this.changed += propertyChangedEventHandler; 21: } 22: else if (info.TargetMethod.Name == "remove_PropertyChanged") 23: { 24: PropertyChangedEventHandler propertyChangedEventHandler = info.Arguments[0] as PropertyChangedEventHandler; 25: this.changed -= propertyChangedEventHandler; 26: } 27: else 28: { 29: result = info.TargetMethod.Invoke(this.Proxy, info.Arguments); 30: } 31:  32: if (isSetter == true) 33: { 34: String propertyName = info.TargetMethod.Name.Substring("set_".Length); 35: this.changed(this.Proxy, new PropertyChangedEventArgs(propertyName)); 36: } 37:  38: return (result); 39: } 40:  41: #endregion 42: } What this does for every interceptable method (those who are either virtual or from the INotifyPropertyChanged) is: For methods that came from the INotifyPropertyChanged interface, add_PropertyChanged and remove_PropertyChanged (yes, events are methods ), we add an implementation that adds or removes the event handlers to the 
delegate which we declared as changed; For all the others, we direct them to the place where they are actually implemented, which is the Proxy field; If the call is setting a property, it fires afterwards the PropertyChanged event. In order to use this, we need to add the interceptor to the Configuration before building the ISessionFactory: 1: using (ISessionFactory factory = cfg.SetInterceptor(new NotifyPropertyChangedInterceptor()).BuildSessionFactory()) 2: { 3: using (ISession session = factory.OpenSession()) 4: using (ITransaction tx = session.BeginTransaction()) 5: { 6: Customer customer = session.Get<Customer>(100); //some id 7: INotifyPropertyChanged inpc = customer as INotifyPropertyChanged; 8: inpc.PropertyChanged += delegate(Object sender, PropertyChangedEventArgs e) 9: { 10: //fired when a property changes 11: }; 12: customer.Address = "some other address"; //will raise PropertyChanged 13: customer.RecentOrders.ToList(); //will trigger the lazy loading 14: } 15: } Any problems, questions, do drop me a line!
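    One detail the post takes for granted is that the entity's members must be virtual, otherwise the generated proxy cannot override them and the setters will never be intercepted (so no PropertyChanged event is raised). A minimal Customer entity compatible with the interceptor above might look like the following sketch; Address and RecentOrders come from the usage example, while the remaining members are just illustrative:

    using System.Collections.Generic;

    // Illustrative entity: every member is virtual so the NHibernate proxy can intercept it.
    public class Customer
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
        public virtual string Address { get; set; }            // setting this raises PropertyChanged("Address")
        public virtual IList<Order> RecentOrders { get; set; } // still lazy loaded as usual
    }

    public class Order
    {
        public virtual int Id { get; set; }
        public virtual decimal Total { get; set; }
    }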

    Read the article

  • Liskov Substitution Principle and the Oft Forgot Third Wheel

    - by Stacy Vicknair
    Liskov Substitution Principle (LSP) is a principle of object oriented programming that many might be familiar with from the SOLID principles mnemonic from Uncle Bob Martin. The principle highlights the relationship between a type and its subtypes, and, according to Wikipedia, is defined by Barbara Liskov and Jeanette Wing as the following principle:   Let be a property provable about objects of type . Then should be provable for objects of type where is a subtype of .   Rectangles gonna rectangulate The iconic example of this principle is illustrated with the relationship between a rectangle and a square. Let’s say we have a class named Rectangle that had a property to set width and a property to set its height. 1: Public Class Rectangle 2: Overridable Property Width As Integer 3: Overridable Property Height As Integer 4: End Class   We all at some point here that inheritance mocks an “IS A” relationship, and by gosh we all know square IS A rectangle. So let’s make a square class that inherits from rectangle. However, squares do maintain the same length on every side, so let’s override and add that behavior. 1: Public Class Square 2: Inherits Rectangle 3:  4: Private _sideLength As Integer 5:  6: Public Overrides Property Width As Integer 7: Get 8: Return _sideLength 9: End Get 10: Set(value As Integer) 11: _sideLength = value 12: End Set 13: End Property 14:  15: Public Overrides Property Height As Integer 16: Get 17: Return _sideLength 18: End Get 19: Set(value As Integer) 20: _sideLength = value 21: End Set 22: End Property 23: End Class   Now, say we had the following test: 1: Public Sub SetHeight_DoesNotAffectWidth(rectangle As Rectangle) 2: 'arrange 3: Dim expectedWidth = 4 4: rectangle.Width = 4 5:  6: 'act 7: rectangle.Height = 7 8:  9: 'assert 10: Assert.AreEqual(expectedWidth, rectangle.Width) 11: End Sub   If we pass in a rectangle, this test passes just fine. What if we pass in a square?   This is where we see the violation of Liskov’s Principle! A square might "IS A” to a rectangle, but we have differing expectations on how a rectangle should function than how a square should! Great expectations Here’s where we pat ourselves on the back and take a victory lap around the office and tell everyone about how we understand LSP like a boss. And all is good… until we start trying to apply it to our work. If I can’t even change functionality on a simple setter without breaking the expectations on a parent class, what can I do with subtyping? Did Liskov just tell me to never touch subtyping again? The short answer: NO, SHE DIDN’T. When I first learned LSP, and from those I’ve talked with as well, I overlooked a very important but not appropriately stressed quality of the principle: our expectations. Our inclination is to want a logical catch-all, where we can easily apply this principle and wipe our hands, drop the mic and exit stage left. That’s not the case because in every different programming scenario, our expectations of the parent class or type will be different. We have to set reasonable expectations on the behaviors that we expect out of the parent, then make sure that those expectations are met by the child. Any expectations not explicitly expected of the parent aren’t expected of the child either, and don’t register as a violation of LSP that prevents implementation. 
You can see the flexibility mentioned in the Wikipedia article itself: A typical example that violates LSP is a Square class that derives from a Rectangle class, assuming getter and setter methods exist for both width and height. The Square class always assumes that the width is equal with the height. If a Square object is used in a context where a Rectangle is expected, unexpected behavior may occur because the dimensions of a Square cannot (or rather should not) be modified independently. This problem cannot be easily fixed: if we can modify the setter methods in the Square class so that they preserve the Square invariant (i.e., keep the dimensions equal), then these methods will weaken (violate) the postconditions for the Rectangle setters, which state that dimensions can be modified independently. Violations of LSP, like this one, may or may not be a problem in practice, depending on the postconditions or invariants that are actually expected by the code that uses classes violating LSP. Mutability is a key issue here. If Square and Rectangle had only getter methods (i.e., they were immutable objects), then no violation of LSP could occur. What this means is that the above situation with a rectangle and a square can be acceptable if we do not have the expectation for width to leave height unaffected, or vice-versa, in our application. Conclusion – the oft forgot third wheel Liskov Substitution Principle is meant to act as a guidance and warn us against unexpected behaviors. Objects can be stateful and as a result we can end up with unexpected situations if we don’t code carefully. Specifically when subclassing, make sure that the subclass meets the expectations held to its parent. Don’t let LSP think you cannot deviate from the behaviors of the parent, but understand that LSP is meant to highlight the importance of not only the parent and the child class, but also of the expectations WE set for the parent class and the necessity of meeting those expectations in order to help prevent sticky situations.   Code examples, in both VB and C# Technorati Tags: LSV,Liskov Substitution Principle,Uncle Bob,Robert Martin,Barbara Liskov,Liskov
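    One way to act on that last point from the Wikipedia quote is to make the dimensions immutable, so the parent type never promises independent setters in the first place. Here is a quick C# sketch of my own, mirroring the VB example above:

    // With immutable dimensions there is no setter whose postcondition a Square could weaken,
    // so substituting a Square where a Rectangle is expected no longer surprises anyone.
    public class Rectangle
    {
        public Rectangle(int width, int height)
        {
            Width = width;
            Height = height;
        }

        public int Width { get; private set; }
        public int Height { get; private set; }

        public int Area { get { return Width * Height; } }
    }

    public class Square : Rectangle
    {
        public Square(int sideLength) : base(sideLength, sideLength) { }
    }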

    Read the article

  • Creating a Reverse Proxy with URL Rewrite for IIS

    - by OWScott
    There are times when you need to reverse proxy through a server. The most common example is when you have an internal web server that isn’t exposed to the internet, and you have a public web server accessible to the internet. If you want to serve up traffic from the internal web server, you can do this through the public web server by creating a tunnel (aka reverse proxy). Essentially, you can front the internal web server with a friendly URL, even hiding custom ports. For example, consider an internal web server with a URL of http://10.10.0.50:8111. You can make that available through a public URL like http://tools.mysite.com/ as seen in the following image. The URL can be made public or it can be used for your internal staff and have it password protected and/or locked down by IP address. This is easy to do with URL Rewrite and IIS. You will also need Application Request Routing (ARR) installed even though for a simple reverse proxy you won’t use most of ARR’s functionality. If you don’t already have URL Rewrite and ARR installed you can do so easily with the Web Platform Installer. A lot can be said about reverse proxies and many different situations and ways to route the traffic and handle different URL patterns. However, my goal here is to get you up and going in the easiest way possible. Then you can dig in deeper after you get the base configuration in place. URL Rewrite makes a reverse proxy very easy to set up. Note that the URL Rewrite Add Rules template doesn’t include Reverse Proxy at the server level. That’s not to say that you can’t create a server-level reverse proxy, but the URL Rewrite rules template doesn’t help you with that. Getting Started First you must create a website on your public web server that has the public bindings that you need. Alternately, you can use an existing site and route using conditions for certain traffic. After you’ve created your site then open up URL Rewrite at the site level. Using the “Add Rule(s)…” template that is opened from the right-hand actions pane, create a new Reverse Proxy rule. If you receive a prompt (the first time) that the proxy functionality needs to be enabled, select OK. This is telling you that a proxy can route traffic outside of your web server, which happens to be our goal in this case. Be aware that reverse proxy rules can be dangerous if you open sites from inside you network to the world, so just be aware of what you’re doing and why. The next and final step of the template asks a few questions. The first textbox asks the name of the internal web server. In our example, it’s 10.10.0.50:8111. This can be any URL, including a subfolder like internal.mysite.com/blog. Don’t include the http or https here. The template assumes that it’s not entered. You can choose whether to perform SSL Offloading or not. If you leave this checked then all requests to the internal server will be over HTTP regardless of the original web request. This can help with performance and SSL bindings if all requests are within a trusted network. If the network path between the two web servers is not completely trusted and safe then uncheck this. Next, the template enables you to create an outbound rule. This is used to rewrite links in the page to look like your public domain name rather than the internal domain name. Outbound rules have a lot of CPU overhead because the entire web content needs to be parsed and updated. However, if you need it, then it’s well worth the extra CPU hit on the web server. 
If you check the “Rewrite the domain names of the links in HTTP responses” checkbox then the From textbox will be filled in with what you entered for the inbound rule. You can enter your friendly public URL for the outbound rule. This will essentially replace any reference to 10.10.0.50:8111 (or whatever you enter) with tools.mysite.com in all <a>, <form>, and <img> tags on your site. That’s it! Well, there is a lot more that you can do, this but will give you the base configuration. You can now visit www.mysite.com on your public web server and it will serve up the site from your internal web server. You should see two rules show up; one inbound and one outbound. You can edit these, add conditions, and tweak them further as needed. One common issue that can occur without outbound rules has to do with compression. If you run into errors with the new proxied site, try turning off compression to confirm if that’s the issue. Here’s a link with details on how to deal with compression and outbound rules. I hope this was helpful to get started and to see how easy it is to create a simple reverse proxy using URL Rewrite for IIS.

    Read the article

  • Cloud Computing Architecture Patterns: Don’t Focus on the Client

    - by BuckWoody
    Normally I try to put topics in the positive in other words "Do this" not "Don't do that". Sometimes its clearer to focus on what *not* to do. Popular development processes often start with screen mockups, or user input descriptions. In a scale-out pattern like Cloud Computing on Windows Azure, that's the wrong place to start. Start with the Data    Instead, I recommend that you start with the data that a process requires. That data might be temporary or persisted, but starting with the data and its requirements helps to define not only the storage engine you need but also drives everything from security to the integrity of the application. For instance, assume the requirements show that the user must enter their phone number, and that this datum is used in a contact management system further down the application chain. For that datum, you can determine what data type you need (U.S. only or International?) the security requirements, whether it needs ACID compliance, how it will be searched, indexed and so on. From one small data point you can extrapolate out your options for storing and processing the data. Here's the interesting part, which begins to break the patterns that we've used for decades: all of the data doesn't have the same requirements. The phone number might be best suited for a list, or an element, or a string, with either BASE or ACID requirements, based on how it is used. That means we don't have to dump everything into XML, an RDBMS, a NoSQL engine, or a flat file exclusively. In fact, one record might use all of those depending on the use-case requirements. Next Is Data Management  With the data defined, we can move on to how to store the data. Again, the requirements now dictate whether we need a full relational calculus or set-based operations, or we can choose another method based on the requirements for the data. And breaking another pattern its OK to store in more than once, in more than one location. We do this all the time for reporting systems and Business Intelligence systems, so this is a pattern we need to think about even for OLTP data. Move to Data Transport How does the data get around? We can use a connection-based method, sending the data along a transport to the storage engine, but in some cases we may want to use a cache, a queue, the Service Bus, or Complex Event Processing. Finally, Data Processing Most RDBMS engines, NoSQL, and certainly Big Data engines not only store data, but can process and manipulate it as well. Its doubtful that you'll calculate that phone number right? Well, if you're the phone company, you most certainly will. And so we see that even once we've chosen the data type, storage and engine, the same element can have different computing requirements based on how it is used. Sure, We Need A Front-End At Some Point Not all data is entered by human hands in fact most data isn't. We don't really need a Graphical User Interface (GUI) we need some way for a GUI to get data into and out of the systems listed earlier.   But when we do need to allow users to enter or examine data, that should be left to the GUI that best fits the device the user has. Ever tried to use an application designed for a web browser on a phone? Or one designed for a tablet on a phone? Its usually quite painful. The siren song of "We'll just write one interface for all devices" is strong, and has beguiled many an unsuspecting architect. But they just don't work out.   Instead, focus on the data, its transport and processing. 
Create API calls or a message system that allows for resilient transport to the device or interface, and let it do what it does best. References Microsoft Architecture Journal:   http://msdn.microsoft.com/en-us/architecture/bb410935.aspx Patterns and Practices:   http://msdn.microsoft.com/en-us/library/ff921345.aspx Windows Azure iOS, Android, Windows 8 Mobile Devices SDK: http://www.windowsazure.com/en-us/develop/mobile/tutorials/get-started-ios/ Windows Azure Facebook SDK: http://ntotten.com/2013/03/14/using-windows-azure-mobile-services-with-the-facebook-sdk-for-windows-phone/

    Read the article

  • SQL SERVER – SmallDateTime and Precision – A Continuous Confusion

    - by pinaldave
    Some kinds of confusion never go away. Here is one of the ancient confusing things in SQL. The precision of the SmallDateTime is one concept that confuses a lot of people, proven by the many messages I receive everyday relating to this subject. Let me start with the question: What is the precision of the SMALLDATETIME datatypes? What is your answer? Write it down on your notepad. Now if you do not want to continue reading the blog post, head to my previous blog post over here: SQL SERVER – Precision of SMALLDATETIME. A Social Media Question Since the increase of social media conversations, I noticed that the amount of the comments I receive on this blog is a bit staggering. I receive lots of questions on facebook, twitter or Google+. One of the very interesting questions yesterday was asked on Facebook by Raghavendra. I am re-organizing his script and asking all of the questions he has asked me. Let us see if we could help him with his question: CREATE TABLE #temp (name VARCHAR(100),registered smalldatetime) GO DECLARE @test smalldatetime SET @test=GETDATE() INSERT INTO #temp VALUES ('Value1',@test) INSERT INTO #temp VALUES ('Value2',@test) GO SELECT * FROM #temp ORDER BY registered DESC GO DROP TABLE #temp GO Now when the above script is ran, we will get the following result: Well, the expectation of the query was to have the following result. The row which was inserted last was expected to return as first row in result set as the ORDER BY descending. Side note: Because the requirement is to get the latest data, we can’t use any  column other than smalldatetime column in order by. If we use name column in the order by, we will get an incorrect result as it can be any name. My Initial Reaction My initial reaction was as follows: 1) DataType DateTime2: If file precision of the column is expected from the column which store date and time, it should not be smalldatetime. The precision of the column smalldatetime is One Minute (Read Here) for finer precision use DateTime or DateTime2 data type. Here is the code which includes above suggestion: CREATE TABLE #temp (name VARCHAR(100), registered datetime2) GO DECLARE @test datetime2 SET @test=GETDATE() INSERT INTO #temp VALUES ('Value1',@test) INSERT INTO #temp VALUES ('Value2',@test) GO SELECT * FROM #temp ORDER BY registered DESC GO DROP TABLE #temp GO 2) Tie Breaker Identity: There are always possibilities that two rows were inserted at the same time. In that case, you may need a tie breaker. If you have an increasing identity column, you can use that as a tie breaker as well. CREATE TABLE #temp (ID INT IDENTITY(1,1), name VARCHAR(100),registered datetime2) GO DECLARE @test datetime2 SET @test=GETDATE() INSERT INTO #temp VALUES ('Value1',@test) INSERT INTO #temp VALUES ('Value2',@test) GO SELECT * FROM #temp ORDER BY ID DESC GO DROP TABLE #temp GO Those two were the quick suggestions I provided. It is not necessary that you should use both advices. It is possible that one can use only DATETIME datatype or Identity column can have datatype of BIGINT or have another tie breaker. 
An Alternate NO Solution In the facebook thread this was also discussed as one of the solutions: CREATE TABLE #temp (name VARCHAR(100),registered smalldatetime) GO DECLARE @test smalldatetime SET @test=GETDATE() INSERT INTO #temp VALUES ('Value1',@test) INSERT INTO #temp VALUES ('Value2',@test) GO SELECT name, registered, ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number" FROM #temp ORDER BY 3 DESC GO DROP TABLE #temp GO However, I believe it is not the solution and can be further misleading if used in a production server. Here is the example of why it is not a good solution: CREATE TABLE #temp (name VARCHAR(100) NOT NULL,registered smalldatetime) GO DECLARE @test smalldatetime SET @test=GETDATE() INSERT INTO #temp VALUES ('Value1',@test) INSERT INTO #temp VALUES ('Value2',@test) GO -- Before Index SELECT name, registered, ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number" FROM #temp ORDER BY 3 DESC GO -- Create Index ALTER TABLE #temp ADD CONSTRAINT [PK_#temp] PRIMARY KEY CLUSTERED (name DESC) GO -- After Index SELECT name, registered, ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number" FROM #temp ORDER BY 3 DESC GO DROP TABLE #temp GO Now let us examine the resultset. You will notice that an index which is created on the base table which is (indeed) schema change the table but can affect the resultset. As you can see, an index can change the resultset, so this method is not yet perfect to get the latest inserted resultset. No Schema Change Requirement After giving these two suggestions, I was waiting for the feedback of the asker. However, the requirement of the asker was there can’t be any schema change because the application was used by many other applications. I validated again, and of course, the requirement is no schema change at all. No addition of the column of change of datatypes of any other columns. There is no further help as well. This is indeed an interesting question. I personally can’t think of any solution which I could provide him given the requirement of no schema change. Can you think of any other solution to this? Need of Database Designer This question once again brings up another ancient question:  “Do we need a database designer?” I often come across databases which are facing major performance problems or have redundant data. Normalization is often ignored when a database is built fast under a very tight deadline. Often I come across a database which has table with unnecessary columns and performance problems. While working as Developer Lead in my earlier jobs, I have seen developers adding columns to tables without anybody’s consent and retrieving them as SELECT *.  There is a lot to discuss on this subject in detail, but for now, let’s discuss the question first. Do you have any suggestions for the above question? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: CodeProject, Developer Training, PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Doing unit and integration tests with the Web API HttpClient

    - by cibrax
    One of the nice things about the new HttpClient in System.Net.Http is the support for mocking responses or handling requests in an HTTP server hosted in-memory. While the first option is useful for scenarios in which we want to test our client code in isolation (unit tests for example), the second one enables more complete integration testing scenarios that could include some more components in the stack such as model binders or message handlers for example. The HttpClient can receive an HttpMessageHandler as an argument in one of its constructors.

    public class HttpClient : HttpMessageInvoker
    {
        public HttpClient();
        public HttpClient(HttpMessageHandler handler);
        public HttpClient(HttpMessageHandler handler, bool disposeHandler);
    }

    For the first scenario, you can create a new HttpMessageHandler that fakes the response, which you can use in your unit test. The only requirement is that you somehow inject an HttpClient configured with this custom handler into the client code.

    public class FakeHttpMessageHandler : HttpMessageHandler
    {
        HttpResponseMessage response;

        public FakeHttpMessageHandler(HttpResponseMessage response)
        {
            this.response = response;
        }

        protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
        {
            var tcs = new TaskCompletionSource<HttpResponseMessage>();
            tcs.SetResult(response);
            return tcs.Task;
        }
    }

    In a unit test, you can do something like this.

    var fakeResponse = new HttpResponseMessage();
    var fakeHandler = new FakeHttpMessageHandler(fakeResponse);
    var httpClient = new HttpClient(fakeHandler);
    var customerService = new CustomerService(httpClient);
    // Do something
    // Asserts

    CustomerService in this case is the class under test, and the one that receives an HttpClient initialized with our fake handler. For the second scenario in integration tests, there is an in-memory host, System.Web.Http.HttpServer, that also derives from HttpMessageHandler and that you can use with an HttpClient instance in your test. This has been discussed already in these two great posts from Pedro and Filip.
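    For completeness, a minimal sketch of that second, in-memory hosting scenario could look like the following; the route template and the implied CustomersController are assumptions for the sake of the example, not something from the original post:

    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Http;

    public class CustomerIntegrationTests
    {
        public async Task Get_customer_returns_success()
        {
            // Host the Web API pipeline in memory: no sockets, no IIS.
            var config = new HttpConfiguration();
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });

            var server = new HttpServer(config);

            // HttpServer derives from HttpMessageHandler, so it plugs straight into HttpClient.
            using (var client = new HttpClient(server))
            {
                var response = await client.GetAsync("http://localhost/api/customers/1");

                // The request went through routing, model binding, message handlers and the
                // controller, all in memory, so assertions here exercise the whole stack.
                response.EnsureSuccessStatusCode();
            }
        }
    }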

    Read the article

  • Profit's COLLABORATE 10 Session Selections

    - by Aaron Lazenby
    COLLABORATE 2010 is a mere 11 days away (thanks for the reminder @ocp_advisor). Every year I publish my a list of the sessions I think reflect some of the more interesting people/trends in enterprise IT. I should be at all of these sessions, so drop by for a chat--I'll be the guy tapping out emails on my iPad... Monday, April 19 9:15 a.m. - Keynote: Transforming Customer Value, Delivering Highest Customer Service Location: Keynote Hall I never miss Charles Phillips when he speaks--it's one of the best opportunities to get an update on Oracle product developments and strategy. And there's certainly occasion for an update: this will be Phillips' first big presentation since the Oracle + Sun Strategy Update in late January. Phillips is appearing with Oracle Executive Vice President of Development Thomas Kurian which means there should be some excellent information about how customers are using Oracle's complete software and hardware stack to address enterprise IT challenges. The session should provide some excellent context for the rest of the week's session...don't miss it. 10:45 a.m. - Oracle Fusion Applications: Functional Overview Location: South Seas FI met Basheer Khan at COLLABORATE 08 in Denver and have followed his work ever since. He's a former member of the OAUG Board of Directors, an Oracle ACE, and a charismatic enterprise IT expert. Having worked with the Oracle Usability Advisory Board, Basheer should have some fascinating insights to share about the features and interface of Oracle's Fusine Applications. This session, along with Nadia Bendjedou's "10 Things You Can Do Today to Prepare for the Next Generation Applications" (on Tuesday, April 20 8:00 a.m. in room 3662) should give attendees the update they need about Oracle's next-generation applications.   1:15p.m. - E-Business Suite in the Amazon Cloud Location: South Seas HI did my first full-fledged cloud computing coverage at last year's COLLABORATE show (check out my interview with Oracle's Bill Hodak), where I first learned about Amazon's EC2 offering. I've since talked with several people who have provisioned server space on Amazon's cloud with great results. So I'm looking forward to watching the audience configure an instance of the Oracle E-Business Suite release 12 on the cloud while Chuck Edwards from Blue Gecko drives. This session should take some of the mist and vapor out of the cloud conversation.2:30 p.m. - "Zero Sign-on" to EBS - Enabling 96000 Users to Login to EBS Without User Maintenance Location: South Seas HI'll be sitting tight in South Seas H for the next session on Monday where Doug Pepka, a ten-year veteran of communications giant Comcast, will be walking attendees through a massive single sign-on (SSO) project across the enterprise. I'm working on a story about SSO for the August issue of Profit, so this session has real practical value to me. Plus the proliferation of user account logins--both personal and professional--makes this a critical usability/change management issue for IT leaders planning for successful long-term IT implementations.   Tuesday 8:00 am  - Information Architecture for Men in Kilts Location: SURF AGetting to a 8:00 a.m. presentation is a tall order in Las Vegas, but presenter Billy Cripe will make it worth your effort. Not only is the title of this session great, but the content should appeal to any IT strategist looking to push the limits of Web 2.0 technologies in the enterprise. 
Cripe is a product management director of Enterprise 2.0 and Enterprise Content Management at Oracle, author of Reshaping Your Business with Web 2.0, and a prolific blogger--he knows how information architecture is critical to and enterprise 2.0 implementation.    10:30a.m. - Oracle Virtualization: From Desktop to Data Center Location: REEF FData center virtualization is still one of the best ways to reduce the cost of running enterprise IT. With the addition of Sun products, Oracle has the industry's most comprehensive virtualization portfolio. I must admit, I'm no expert in this subject. So I'm looking forward to Monica Kumar's presentation so I can get up to speed.   Wednesday 8:00 a.m. - The Art of the Steal Location: Mandalay Bay Ballroom JMany will know Frank Abagnale from Steven Spielberg's 2002 film "Catch Me if You Can." The one-time con man and international fugitive who swindled $2.5 million in forged checks went on to help U.S. federal officials investigate fraud cases. Now the CEO of Abagnale and Associates, he has become an invaluable source to the business world on the subject of fraud and fraud protection. With identity theft and digital fraud still on the rise, this session should be an entertaining, and sobering, education on the threats facing businesses and customers around the world. A great way to start Wednesday.1:00 p.m. - Google Wave: Will it replace e-mail as we know it today? Location: SURF EBy many assessments (my own included), Google Wave is a bit of an open collaboration failure. It may seem like an odd reason for me to be excited about this session, but I'm looking forward to the chance to revisit the technology. Also, this is a great case study in connecting free, available Internet tools to existing enterprise computing environments--an issue that IT strategists must contend with as workers spreads out and choose their own productivity tools.  

    Read the article

  • Supporting users if they're not on your site

    - by Roger Hart
    Have a look at this Read Write Web article, specifically the paragraph in bold and the comments. Have a wry chuckle, or maybe weep for the future of humanity - your call. Then pause, and worry about information architecture. The short story: Read Write Web bumps up the Google rankings for "Facebook login" at the same time as Facebook makes UI changes, and a few hundred users get confused and leave comments on Read Write Web complaining about not being able to log in to their Facebook accounts.* Blindly clicking the first Google result is not a navigation behaviour I'd anticipated for folks visiting big names sites like Facebook. But then, I use Launchy and don't know where any of my files are, depend on Firefox auto-complete, view Facebook through my IM client, and don't need a map to find my backside with both hands. Not all our users behave in the same way, which means not all of our architecture is within our control, and people can get to your content in all sorts of ways. Even if the Read Write Web episode is a prank of some kind (there are, after all, plenty of folks who enjoy orchestrated trolling) it's still a useful reminder. Your users may take paths through and to your content you cannot control, and they are unlikely to deconstruct their assumptions along the way. I guess the meaningful question is: can you still support those users? If they get to you from Google instead of your front door, does what they find still make sense? Does your information architecture still work if your guests come in through the bathroom window? Ok, so here they broke into the house next door - you can't be expected to deal with that. But the rest is well worth thinking about. Other off-site interaction It's rarely going to be as funny as the comments at Read Write Web, but your users are going to do, say, and read things they think of as being about you and your products, in places you don't control. That's good. If you pay attention to it, you get data. Your users get a better experience. There are easy wins, too. Blogs, forums, social media &c. People may look for and find help with your product on blogs and forums, on Twitter, and what have you. They may learn about your brand in the same way. That's fine, it's an interaction you can be part of. It's time-consuming, certainly, but you have the option. You won't get a blogger to incorporate your site navigation just in case your users end up there, but you can be there when they do. Again, Anne Gentle, Gordon McLean and others have covered this in more depth than I could. Direct contact Sales people, customer care, support, they all talk to people. Are they sending links to your content? if so, which bits? Do they know about all of it? Do they have the content they need to support them - messaging that funnels sales, FAQ that are realistically frequent, detailed examples of things people want to do, that kind of thing. Are they sending links because users can't find the good stuff? Are they sending précis of your content, or re-writes, or brand new stuff? If so, does that mean your content isn't up to scratch, or that you've got content missing? Direct sales/care/support interactions are enormously valuable, and can help you know what content your users find useful. You can't have a table of contents or a "See also" in a phonecall, but your content strategy can support more interactions than browsing. *Passing observation about Facebook. For plenty if folks, it is  the internet. 
    Its services are simple versions of what a lot of people use the internet for, aggregated into one stop. Flickr, Vimeo, Wordpress, Twitter, LinkedIn, and all sorts of games have Facebook doppelgangers that are not only friendlier to entry-level users but right there, behind only one layer of authentication. As such, it could own a lot of interaction conventions. Heavy users may well not be tech-savvy, and may be quite change-averse. That doesn't make this episode not dumb, but I'm happy to go easy on 'em.

    Read the article

  • Why lock-free data structures just aren't lock-free enough

    - by Alex.Davies
    Today's post will explore why the current ways to communicate between threads don't scale, and show you a possible way to build scalable parallel programming on top of shared memory.

    The problem with shared memory

    Soon, we will have dozens, hundreds and then millions of cores in our computers. It's inevitable, because individual cores just can't get much faster. At some point, that's going to mean we have to rethink our architecture entirely, as millions of cores can't all access a shared memory space efficiently. But millions of cores are still a long way off, and in the meantime we'll see machines with dozens of cores struggling with shared memory.

    Alex's tip: The best way for an application to make use of that increasing parallel power is to use a concurrency model like actors, which deals with synchronisation issues for you. Then the maintainer of the actors framework can find the most efficient way to coordinate access to shared memory so that your actors can pass messages to each other efficiently.

    At the moment, NAct uses the .NET thread pool and a few locks to marshal messages. It works well on dual and quad core machines, but it won't scale to more cores. Every time we use a lock, our core performs an atomic memory operation (e.g. CAS) on a cell of memory representing the lock, so it's sure that no other core can possibly hold that lock. This is very fast when the lock isn't contended, but we need to notify all the other cores in case they held that cell of memory in a cache. As the number of cores increases, the total cost of a lock increases linearly.

    A lot of work has been done on "lock-free" data structures, which avoid locks by using atomic memory operations directly. These give fairly dramatic performance improvements, particularly on systems with a few (2 to 4) cores. The .NET 4 concurrent collections in System.Collections.Concurrent are mostly lock-free. However, lock-free data structures still don't scale indefinitely, because any use of an atomic memory operation still involves every core in the system.

    A sync-free data structure

    Some concurrent data structures can be written in a completely synchronization-free way, without using any atomic memory operations. One useful example is a single-producer, single-consumer (SPSC) queue. It's easy to write a sync-free, fixed-size SPSC queue using a circular buffer* (a minimal sketch of such a queue appears at the end of this post). Slightly trickier is a queue that grows as needed. You can use a linked list to represent the queue, but if you leave the nodes to be garbage collected once you're done with them, the GC will need to involve all the cores in collecting the finished nodes. Instead, I've implemented a proof of concept, inspired by this Intel article, which reuses the nodes by putting them in a second queue to send them back to the producer.

    * In all these cases, you need to use memory barriers correctly, but these are local to a core, so they don't have the same scalability problems as atomic memory operations.

    Performance tests

    I tried benchmarking my SPSC queue against the .NET ConcurrentQueue, and against a standard Queue protected by locks. In some ways this isn't a fair comparison, because both of those support multiple producers and multiple consumers, but I'll come to that later. I started on my dual-core laptop, running a simple test that had one thread producing 64-bit integers and another consuming them, to measure the pure overhead of the queue. So, nothing very interesting here.
    Both concurrent collections perform better than the lock-based one, as expected, but there's not a lot to choose between the ConcurrentQueue and my SPSC queue. I was a little disappointed, but then, the .NET Framework team spent a lot longer optimising it than I did. So I dug out a more powerful machine that Red Gate's DBA tools team had been using for testing. It is a 6-core Intel i7 machine with hyperthreading, adding up to 12 logical cores. Now the results get more interesting.

    As I increased the number of producer-consumer pairs to 6 (to saturate all 12 logical cores), the locking approach was slow, and got even slower, as you'd expect. What I didn't expect to be so clear was the drop-off in performance of the lock-free ConcurrentQueue. I could see the machine using only about 20% of the available CPU cycles when it should have been saturated. My interpretation is that as all the cores used atomic memory operations to access the queue safely, they ended up spending most of their time notifying each other about cache lines that needed invalidating. The sync-free approach scaled perfectly, despite still working via shared memory, which, after all, should still be a bottleneck. I can't quite believe that the results are so clear, so if you can think of any other effects that might cause them, please comment!

    Obviously, this benchmark isn't realistic, because we're only measuring the overhead of the queue. Any real workload, even on a machine with 12 cores, would dwarf the overhead, and there'd be no point worrying about this effect. But would that be true on a machine with 100 cores?

    Still to be solved

    The trouble is, you can't build many concurrent algorithms using only an SPSC queue to communicate. In particular, I can't see a way to build something as general-purpose as actors on top of just SPSC queues. Fundamentally, an actor needs to be able to receive messages from multiple other actors, which seems to need an MPSC queue. I've been thinking about ways to build a sync-free MPSC queue out of multiple SPSC queues and some kind of sign-up mechanism. Hopefully I'll have something to tell you about soon, but leave a comment if you have any ideas.
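    The post describes, but doesn't show, the circular-buffer queue. Below is a minimal sketch of what such a fixed-size SPSC queue could look like in C#. It is not NAct's code or the proof of concept benchmarked above, and the names (SpscQueue, TryEnqueue, TryDequeue) are invented for illustration. The volatile index fields stand in for the memory barriers mentioned in the footnote; a production version would also pad the two indices onto separate cache lines to limit false sharing.

        // A minimal sketch of a fixed-size single-producer/single-consumer queue
        // over a circular buffer. Only one thread may call TryEnqueue and only one
        // may call TryDequeue; under that restriction each index is written by
        // exactly one thread, so no CAS or lock is needed - the volatile fields
        // only provide ordering (the memory barriers the footnote refers to).
        public sealed class SpscQueue<T>
        {
            private readonly T[] _buffer;
            private volatile int _head;   // next slot to read; written only by the consumer
            private volatile int _tail;   // next slot to write; written only by the producer

            public SpscQueue(int capacity)
            {
                _buffer = new T[capacity + 1]; // keep one slot empty to distinguish full from empty
            }

            // Producer thread only.
            public bool TryEnqueue(T item)
            {
                int tail = _tail;
                int next = (tail + 1) % _buffer.Length;
                if (next == _head) return false;   // full (possibly a stale view, but safely conservative)
                _buffer[tail] = item;              // write the payload first...
                _tail = next;                      // ...then publish it with a volatile write
                return true;
            }

            // Consumer thread only.
            public bool TryDequeue(out T item)
            {
                int head = _head;
                if (head == _tail) { item = default(T); return false; }   // empty
                item = _buffer[head];              // safe: the volatile read of _tail above ordered the payload write
                _buffer[head] = default(T);        // drop the reference so the GC can reclaim it
                _head = (head + 1) % _buffer.Length;   // hand the slot back to the producer
                return true;
            }
        }

    Because neither method ever performs an atomic read-modify-write, the only cross-core traffic is the cache line carrying each index, which is the property the post's benchmark is exploiting.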
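    The sign-up mechanism is left open in the post; purely to make the idea concrete, here is one naive sketch of how an MPSC channel might be composed from per-producer SPSC queues like the one above. This is a speculative illustration, not the design the post is hinting at, and the MpscChannel, Register and TryReceive names are invented for this sketch.

        using System;

        // A naive illustration only: an MPSC channel built from one SpscQueue per producer.
        // Registration is rare and takes a lock; after that each producer writes solely to
        // its own private queue, so the enqueue/dequeue hot path needs no atomic operations.
        public sealed class MpscChannel<T>
        {
            private readonly object _signUp = new object();
            private volatile SpscQueue<T>[] _queues = new SpscQueue<T>[0];

            // Called once per producer thread; returns that thread's private queue.
            public SpscQueue<T> Register(int capacity)
            {
                lock (_signUp)
                {
                    var grown = new SpscQueue<T>[_queues.Length + 1];
                    Array.Copy(_queues, grown, _queues.Length);
                    var queue = new SpscQueue<T>(capacity);
                    grown[_queues.Length] = queue;
                    _queues = grown;   // volatile write publishes the new array to the consumer
                    return queue;
                }
            }

            // Consumer thread only: return the next item from the first non-empty producer queue.
            public bool TryReceive(out T item)
            {
                var queues = _queues;   // volatile read: see every queue registered so far
                for (int i = 0; i < queues.Length; i++)
                {
                    if (queues[i].TryDequeue(out item)) return true;
                }
                item = default(T);
                return false;
            }
        }

    Whether the consumer-side scan and the registration lock are acceptable depends on the workload; a real design would also need to deal with producers going away and with fairness between queues.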

    Read the article
