  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox. It allows you to read SQL Server 2005 and 2008 Profiler traces stored as .trc files into the Data Flow. From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; the adapter works directly from the trace file.

    Example usages: cache warming for SQL Server Analysis Services; reading the flight recorder; finding the longest running queries on a server; analyzing statements for CPU or memory by user, or by some other criteria you choose.

    Properties. The Trace File Source adapter has two properties, which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio.

    AccessMode (Enumeration) - determines how the Filename property is interpreted. The available values are DirectInput and Variable.
    Filename (String) - holds the path of the trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property.

    Trace Column Definition. Hopefully the majority of you can skip this section entirely, but if you encounter problems processing a trace file this may explain them and allow you to fix them. The component is built upon the trace management API provided by Microsoft. Unfortunately the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string.

    TraceDataColumnsSQL.xml - SQL Server Database Engine trace columns
    TraceDataColumnsAS.xml - SQL Server Analysis Services trace columns

    The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" and "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml". If at runtime the component encounters a type conversion or sizing error, it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst the most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files so you can fix any further issues caused by usage or data scenarios we have not tested. An example error that you can fix through these files is shown below.

    Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".

    Installation. The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process is described in detail in the related FAQ entry How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations.

    Please note that the Microsoft trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details.

    Downloads: Trace Sources for SQL Server 2005 -- Trace Sources for SQL Server 2008

    Version History: SQL Server 2008 - Version 2.0.0.382, SQL Server 2008 public release (9 Apr 2009). SQL Server 2005 - Version 1.0.0.321, SQL Server 2005 public release (18 Nov 2008).
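    As a hedged illustration of the 64-bit note above (paths assume a default SQL Server 2008 installation and an example package location; adjust to suit), one way to force a package to run with the 32-bit tools is to call the x86 copy of DTExec directly:

        rem Run the package under the 32-bit runtime so the Microsoft trace classes can load
        "C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\DTExec.exe" /File "C:\Packages\ReadTrace.dtsx"

    When debugging in BIDS, the equivalent is setting the project's Run64BitRuntime debugging property to False.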


  • Kendo UI Mobile with Knockout for Master-Detail Views

    - by Steve Michelotti
    Lately I’ve been playing with Kendo UI Mobile to build iPhone apps. It’s similar to jQuery Mobile in that they are both HTML5/JavaScript based frameworks for building mobile apps. The primary thing that drew me to investigate Kendo UI was its innate ability to adaptively render a native looking app based on detecting the device it’s currently running on. In other words, it will render to look like a native iPhone app if it’s running on an iPhone, and it will render to look like a native Droid app if it’s running on a Droid. This is in contrast to jQuery Mobile, which looks the same on all devices and, therefore, can never quite look native for whatever device it’s running on. My first impressions of Kendo UI were great. Using HTML5 data-* attributes to define “roles” for UI elements is easy, the rendering looked great, and the basic navigation was simple and intuitive. However, I ran into major confusion when trying to figure out how to “correctly” build master-detail views. Since I was already very familiar with KnockoutJS, I set out to use that framework in conjunction with Kendo UI Mobile to build the following simple scenario: I wanted a simple “Task Manager” application where the first screen just showed a list of tasks, and clicking on a specific task would navigate to a detail screen that would show all details of the specific task that was selected.

    Basic navigation between views in Kendo UI is simple. The href of an <a> tag just needs to specify a hash tag followed by the ID of the view to navigate to, as shown in this jsFiddle (notice the href of the <a> tag matches the id of the second view). Direct link to jsFiddle: here.

    That is all well and good, but the problem I encountered was: how to pass data between the views? Specifically, I needed the detail view to display all the details of whichever task was selected. If I were doing this with my typical technique with KnockoutJS, I know exactly what I would do. First I would create a view model that had my collection of tasks and a property for the currently selected task like this:

        function ViewModel() {
            var self = this;
            self.tasks = ko.observableArray(data);
            self.selectedTask = ko.observable(null);
        }

    Then I would bind my list of tasks to the unordered list, and attach a “click” handler to each item (each <li> in the unordered list) so that it would set the “selectedTask” on the view model. The problem I found is that this approach simply wouldn’t work for Kendo UI Mobile. It completely ignored the click handlers that I was trying to attach to the <a> tags – it just wanted to look at the href (at least that’s what I observed). But if I can’t intercept this, then *how* can I pass data or any context to the next view? The only thing I was able to find in the Kendo documentation is that you can pass query string arguments on the view name you’re specifying in the href. This enabled me to do the following: Specify the task ID in each href – something like this: <a href="#taskDetail?id=3"></a> Attach an “init method” (via the “data-show” attribute on the details view) that runs whenever the view is activated. Inside this “init method”, grab the task ID passed from the query string to look up the item from my view model’s list of tasks in order to set the selected task. I was able to get all that working with about 20 lines of JavaScript as shown in this jsFiddle.
    If you click on the Results tab, you can navigate between views and see that the detail screen is correctly binding to the selected item. Direct link to jsFiddle: here.

    With all that being done, I was very happy to get the behavior I wanted. However, I have no idea if that is the “correct” way to do it or if there is a “better” way. I know that Kendo UI comes with its own data binding framework, but my preference is to be able to use (the well-documented) KnockoutJS since I’m already familiar with that framework rather than having to learn yet another new framework. While I think my solution above is probably “acceptable”, there are still a couple of things that bug me about it. First, it seems odd that I have to loop through my items to *find* the selected item based on the ID that was passed on the query string – normally, with Knockout, I can just refer directly to my selected item from where it was used. Second, it didn’t feel exactly right that I had to rely on the “data-show” method of the details view to set my context – normally with Knockout, I could just attach a click handler to the <a> tag that was actually clicked by the user in order to set the “selected item.” I’m not sure if I’m being too picky. I know there are many people that have *way* more expertise in Kendo UI compared to me – I’d be curious to know if there are better ways to achieve the same results.
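    For readers who want to see the shape of that solution, here is a minimal sketch of the “data-show” init method described above (this is not the author’s exact jsFiddle code; the handler name and the task id property are assumptions):

        var viewModel = new ViewModel();

        // Wired to the details view via data-show="detailShow".
        // Kendo exposes query string arguments (e.g. #taskDetail?id=3) on e.view.params.
        function detailShow(e) {
            var id = parseInt(e.view.params.id, 10);
            viewModel.selectedTask(ko.utils.arrayFirst(viewModel.tasks(), function (task) {
                return task.id === id;
            }));
        }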


  • BIP and Mapviewer Mash Up I

    - by Tim Dexter
    I was out in Yellowstone last week soaking up various wildlife and a bit too much rain ... good to be back until the 95F heat yesterday. Taking a little break from the Excel templates; the dev folks are planning an Excel patch in the next week or so that will add a mass of new functionality. At the risk of completely misleading you, I'm going to hang back a while. What I have written so far holds true and will continue to do so. This week, I have been mostly eating 'mapviewer' ... answers on a postcard please, TV show and character. I had a request to show how BIP can call mapviewer and render a dynamic map in an output, so I hit the books and colleagues for some answers. Mapviewer is Oracle's geographic information system, hereby known as GIS. I use it a lot in our BIEE demos, where the interaction with the maps is very impressive. Need a map of California and its congressional districts? I have contacts; Jerry and David with their little black box of maps. Once in my possession, I can build highly interactive, clickable maps that allow the user to drill into more information using a very friendly interface driving BIEE content and navigation. But what about maps in BIP output?

    Bryan Wise, who has written some articles on this blog, did some work a while back with the PL/SQL API interface. The extract for the report called a function that in turn called the mapviewer server, passing a set of mapping requirements; it then returned a URL to a cached copy of that map. It was then easy to have BIP render that image. That's still very doable. You need to install a couple of packages and then load the mapviewer Java APIs into the database. Then you can write your function against the APIs. A little involved? Maybe, but the database is doing all the heavy lifting for you.

    I thought I would investigate another method for getting the maps back into BIP. There is a URL interface you can call; this involves building an XML message to be passed to the mapviewer server. It's pretty straightforward to use on the mapviewer side. On the BIP side things are a little more tricksy. After some unexpected messing about I finally got the ubiquitous Hello World map to render using the URL method. Not the most exciting map in the world, lots of ocean and a rather long URL to get it to render:

    http://127.0.0.1:9704/mapviewer/omserver?xml_request=%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E

    Notice all of the encoding in the URL string to handle the spaces, quotes, etc. All necessary to get BIP to make the call to the mapviewer server correctly without truncating the URL if it hits a real space rather than a %20. With that in mind, constructing the URL was pretty simple. I'm not going to get into the content of the URL too much; for that you need to bone up on the mapviewer XML API. Check out the home page here and the documentation here. To make the template portable I used the standard CURRENT_SERVER_URL parameter from the BIP server and declared that in my template:

    <?param@begin:CURRENT_SERVER_URL;'myserver'?>

    Ignore the 'myserver'; that is just a dummy value for testing. At runtime it will resolve to 'http://yourserver:port/xmlpserver'. Not quite what we need, as mapviewer has its own server path; in my case I needed 'mapviewer/omserver?xml_request=' as the fixed path to the mapviewer request URL.
    A little concatenation and substringing later, I came up with:

    <?param@begin:mURL;concat(substring($CURRENT_SERVER_URL,1,22),'mapviewer/omserver?xml_request=')?>

    That's the basic URL that I can then build on. To get the Hello World map I need to add the following:

    <map_request title="Hello World" datasource="cagis" format="GIF_STREAM"/>

    Those angle brackets were the source of my headache; BIP's XSLT engine was attempting to process them rather than just pass them through. Hok Min to the rescue ... again. I owe him lunch when I get out to HQ again! To solve the problem, I needed to escape all the characters and white space and then use native XSL to assign the string to a parameter:

    <xsl:param xdofo:ctx="begin" name="pXML">%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E</xsl:param>

    I did not strictly need to assign it to a parameter, but I felt that if I were going to do anything more serious than Hello World, like plotting points of interest on the map, I would need to build the URL dynamically, so using a set of parameters or variables that I then concatenate would be easier. Now I had the initial server string and the request; all I then did was combine the two using a concat:

    concat($mURL,$pXML)

    Embedding that into an image tag:

    <fo:external-graphic src="url({concat($mURL,$pXML)})"/>

    and I was done. Notice the curly braces to get the concat evaluated prior to the image call. As you will see next time, building the XML message to go onto the URL can get quite complex, but I have used it with some data. Ultimately, it would be easier to build an extension to BIP to handle the data to be plotted; it would then build the XML message, call mapviewer and return a URL to the map image for BIP to render. More on that next time ...
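    Pulling the pieces of this post together, the working template ends up being essentially the following four lines (assembled from the snippets above as a sketch, not a tested production template):

        <?param@begin:CURRENT_SERVER_URL;'myserver'?>
        <?param@begin:mURL;concat(substring($CURRENT_SERVER_URL,1,22),'mapviewer/omserver?xml_request=')?>
        <xsl:param xdofo:ctx="begin" name="pXML">%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E</xsl:param>
        <fo:external-graphic src="url({concat($mURL,$pXML)})"/>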


  • Upgrading SSIS Custom Components for SQL Server 2012

    Having finally got around to upgrading my custom components to SQL Server 2012, I thought I’d share some notes on the process. One of the goals was minimal duplication, so the same code files are used to build the 2008 and 2012 components; I just have a separate project file. The high level steps are listed below, followed by some more details.

    1. Create a 2012 copy of the project file
    2. Upgrade the project - just open the new project file in VS2010
    3. Change the target framework to .NET 4.0
    4. Set a conditional compilation symbol for DENALI
    5. Change any conditional code, including the assembly version and UI type name
    6. Edit the project file to change referenced assemblies for 2012

    Change target framework to .NET 4.0. Open the project properties. On the Application page, change the Target framework to .NET Framework 4.

    Set conditional compilation symbol for DENALI. Re-open the project properties. On the Build tab, first change the Configuration to All Configurations, then set a Conditional compilation symbol of DENALI.

    Change any conditional code, including assembly version and UI type name. The value doesn’t have to be DENALI, it can actually be anything you like; that is just what I use. It is how I control sections of code that vary between versions. There were several API changes between 2005 and 2008, as well as interface name changes. Whilst we don’t have the same issues between 2008 and 2012, I still have some sections of code that do change, such as the assembly attributes.

        #if DENALI
        [assembly: AssemblyDescription("Data Generator Source for SQL Server Integration Services 2012")]
        [assembly: AssemblyCopyright("Copyright © 2012 Konesans Ltd")]
        [assembly: AssemblyVersion("3.0.0.0")]
        #else
        [assembly: AssemblyDescription("Data Generator Source for SQL Server Integration Services 2008")]
        [assembly: AssemblyCopyright("Copyright © 2008 Konesans Ltd")]
        [assembly: AssemblyVersion("2.0.0.0")]
        #endif

    The Visual Studio editor automatically formats the code based on the current compilation symbols, hence in this case the 2008 code is greyed to indicate it is disabled. As you can see in the previous example, I have distinct assembly version attributes, ensuring I can run the 2008 and 2012 versions of my component side by side. For custom components with a user interface, be sure to update the UITypeName property of the DtsTask or DtsPipelineComponent attributes. As above, I use the conditional compilation symbol to control the code.

        #if DENALI
        [DtsTask
        (
            DisplayName = "File Watcher Task",
            Description = "File Watcher Task",
            IconResource = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask.ico",
            UITypeName = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTaskUI,Konesans.Dts.Tasks.FileWatcherTask,Version=3.0.0.0,Culture=Neutral,PublicKeyToken=b2ab4a111192992b",
            TaskContact = "File Watcher Task; Konesans Ltd; Copyright © 2012 Konesans Ltd; http://www.konesans.com"
        )]
        #else
        [DtsTask
        (
            DisplayName = "File Watcher Task",
            Description = "File Watcher Task",
            IconResource = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask.ico",
            UITypeName = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTaskUI,Konesans.Dts.Tasks.FileWatcherTask,Version=2.0.0.0,Culture=Neutral,PublicKeyToken=b2ab4a111192992b",
            TaskContact = "File Watcher Task; Konesans Ltd; Copyright © 2004-2008 Konesans Ltd; http://www.konesans.com"
        )]
        #endif
        public sealed class FileWatcherTask : Task, IDTSComponentPersist, IDTSBreakpointSite, IDTSSuspend
        {
            // .. code goes on...
        }

    Shown below is another example I found that needed changing.
    I borrow one of the MS editors and use it against a custom property, but need to ensure I reference the correct version of the MS controls assembly. This section of code is actually shared between the 2005, 2008 and 2012 versions of my component, hence it tests for both the DENALI and KATMAI symbols.

        #if DENALI
        const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=11.0.00.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
        #elif KATMAI
        const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
        #else
        const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
        #endif

        // Create Match Expression parameter
        IDTSCustomPropertyCollection100 propertyCollection = outputColumn.CustomPropertyCollection;
        IDTSCustomProperty100 property = propertyCollection.New();
        property.Name = MatchParams.Name;
        property.Description = MatchParams.Description;
        property.TypeConverter = typeof(MultilineStringConverter).AssemblyQualifiedName;
        property.UITypeEditor = multiLineUI;
        property.Value = MatchParams.DefaultValue;

    Edit project file to change referenced assemblies for 2012. We now need to edit the project file itself. Open MyComponent2012.csproj in your favourite text editor, and then perform a couple of find-and-replaces as listed below:

    Find: Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    Replace: Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    Comment: Change the assembly reference version from SQL Server 2008 to SQL Server 2012.

    Find: Microsoft SQL Server\100\
    Replace: Microsoft SQL Server\110\
    Comment: Change any assembly reference hint path locations from SQL Server 2008 to SQL Server 2012.

    If you use any Build Events during development, such as copying the component assembly to the DTS folder, or calling GACUTIL to install it into the GAC, you can also change these now. An example of my new post-build event for a pipeline component is shown below, which uses the .NET 4.0 path for GACUTIL. It also uses the 110 folder location, instead of 100 for SQL Server 2008, but that was covered by the previous find and replace.

        "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\gacutil.exe" /if "$(TargetPath)"
        copy "$(TargetPath)" "%ProgramFiles%\Microsoft SQL Server\110\DTS\PipelineComponents" /Y
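    As a hedged illustration of what those find-and-replaces touch (the exact assembly name and hint path are assumptions and will vary with what your component references), an upgraded reference in the project file ends up looking something like this:

        <Reference Include="Microsoft.SqlServer.DTSPipelineWrap, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91">
          <HintPath>C:\Program Files\Microsoft SQL Server\110\SDK\Assemblies\Microsoft.SqlServer.DTSPipelineWrap.dll</HintPath>
        </Reference>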


  • Writing an ASP.Net Web based TFS Client

    - by Glav
    So one of the things I needed to do was write an ASP.Net MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc., to be able to attribute whether the item was related to Research and Development, and if so, what kind. We are using TFS Azure and don’t have the option of custom templates. I have decided on using a string based field within the template that is not very visible, and which we don’t otherwise use, to store a small set of custom attributes which will determine the research and development association. However, this string munging on the field is not very user friendly, so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure. (Note: we are also using Visual Studio 2012.)

    Now TFS Azure uses your Live ID, and it is not really possible to easily do this in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won’t work. Bottom line is that it is not straightforward nor obvious what you have to do. In fact, it is a real pain to find, and there are some answers out there which don’t appear to be answers at all given they didn’t work in my scenario. So for anyone else who wants to do this, here is a simple breakdown of what you have to do:

    1. Go here and get the “TFS Service Credential Viewer”. Install it, run it, connect to your TFS instance in Azure and create a service account. Note the username and password exactly as it presents them to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don’t bother trying to do anything else.

    2. In your MVC app, reference the following assemblies from “C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0”:

        Microsoft.TeamFoundation.Client.dll
        Microsoft.TeamFoundation.Common.dll
        Microsoft.TeamFoundation.VersionControl.Client.dll
        Microsoft.TeamFoundation.VersionControl.Common.dll
        Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll
        Microsoft.TeamFoundation.WorkItemTracking.Client.dll
        Microsoft.TeamFoundation.WorkItemTracking.Common.dll

    3. If hosting this in Internet Information Server, you will need to enable 32 bit support for the application pool this app runs under.

    4. You also have to allow the TFS client assemblies to store a cache of files on your system. If you don’t do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to your web.config, in the <appSettings> element as shown below:

        <appSettings>
            <!-- Add reference to TFS Client Cache -->
            <add key="WorkItemTrackingCacheRoot" value="C:\windows\temp" />
        </appSettings>

    With all that in place, you can write the following code:

        var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential("{your-service-account-name}", "{your-service-acct-password}");
        var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token);
        var currentCollection = new TfsTeamProjectCollection(new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds);
        currentCollection.EnsureAuthenticated();

    In the above code, note the URL contains “defaultcollection” at the end. Obviously replace {yourdomain} with whatever is defined for your TFS in Azure instance.
    In addition, make sure the service account username and password that were generated in the first step are substituted in here. Note: if something is not right, the EnsureAuthenticated() call will throw an exception with a message saying you are not authorised. If you forget the “defaultcollection” on the URL, it will still fail, but with a message saying you are not authorised - that is, a similar but different exception message.

    And that is it. You can then query the collection using something like:

        var service = currentCollection.GetService<WorkItemStore>();
        var proj = service.Projects[0];
        var allQueries = proj.StoredQueries;
        for (int qcnt = 0; qcnt < allQueries.Count; qcnt++)
        {
            var query = allQueries[qcnt];
            var queryDesc = string.Format("Query found named: {0}", query.Name);
        }

    You get the idea. If you search around, you will find references to the ServiceIdentityCredentialProvider which is referenced in this article. I had no luck with that method and it all looked too hard since it required an extra KB article and other magic sauce. So I hope that helps. This article certainly would have helped me save a boat load of time and frustration.
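    As another hedged sketch (the WIQL text and project name here are illustrative assumptions, not from the original post), you can also query work items directly with a WIQL string rather than walking stored queries:

        // WorkItemStore.Query accepts a WIQL string and returns the matching work items
        var workItems = service.Query(
            "SELECT [System.Id], [System.Title] FROM WorkItems " +
            "WHERE [System.TeamProject] = 'MyProject' AND [System.State] = 'Active'");

        foreach (Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItem wi in workItems)
        {
            Console.WriteLine("{0}: {1}", wi.Id, wi.Title);
        }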


  • Reading train stop display names from a resource bundle

    - by Frank Nimphius
    In Oracle JDeveloper 11g R1, you set the display name of a train stop of an ADF bounded task flow train model by using the Oracle JDeveloper Structure Window. To do so:

    1. Double-click the bounded task flow configuration file (XML) located in the Application Navigator so the task flow diagram opens.
    2. In the task flow diagram, select the view activity node for which you want to define the display name.
    3. In the Structure Window, expand the view activity node and then the train-stop node therein.
    4. Add the display name element by using the right-click context menu on the train-stop node, selecting Insert inside train-stop > Display Name.
    5. Edit the Display Name value with the Property Inspector.

    Following the steps outlined above, you can define static display names - like "PF1" for page fragment 1 shown in the image below - for train stops to show at runtime. In the following, I explain how you can change the static display string to a dynamic string that reads the display label from a resource bundle so train stop labels can be internationalized.

    There are different strategies available for managing message bundles within an Oracle JDeveloper project. In this blog entry, I decided to build and configure the default properties file as indicated by the project's properties. To learn about the suggested file name and location, open the JDeveloper project properties (use a right mouse click on the project node in the Application Navigator and choose Project Properties). Select the Resource Bundle node to see the suggested name and location for the default message bundle. Note that this is the resource bundle that Oracle JDeveloper would automatically create when you assign a text resource to an ADF Faces component in a page. For the train stop display name, we need to create the message bundle manually as there is no context menu help available in Oracle JDeveloper. For this, use a right mouse click on the JDeveloper project and choose New | General | File from the menu and, in the opened dialog, specify the message bundle file name as the name looked up before in the project properties Resource Bundle option. Also, ensure that the file is saved in a directory structure that matches the package structure shown in the Resource Bundle dialog. For example, you would save the properties file in the View project's src > adf > sample directory if the package structure was "adf.sample" (adf.sample.ViewControllerBundle). Edit the properties file and define key-value pairs for the train stop component. In the sample, such key-value pairs are:

    TrainStop1=Train Stop 1
    TrainStop2=Train Stop 2
    TrainStop3=Train Stop 3

    Next, double-click the faces-config.xml file and switch the opened editor to the Overview tab. Select the Application category and press the green plus icon next to the Resource Bundle section.
    Define the resource bundle Base Name as the package and properties file name, for example adf.sample.ViewControllerBundle. Finally, define a variable name for the message bundle so the bundle can be accessed from Expression Language. For this blog example, the name is chosen as "messageBundle".

    <resource-bundle>
      <base-name>adf.sample.ViewControllerBundle</base-name>
      <var>messageBundle</var>
    </resource-bundle>

    Next, select the display-name element in the train stop node (similar to when creating the display name) and use the Property Inspector to change the static display string to an EL expression referencing the message bundle. For example: #{messageBundle.TrainStop1}. At runtime, the train stops now show display names read from a message bundle (the properties file).


  • For Programmers familiar with ACM API? Drawing Initials [closed]

    - by user71992
    Possible Duplicate: For Programmers familiar with ACM API? Drawing Initials

    I came across an exercise (in the book "The Art and Science of Java" by Eric Roberts) that requires using only GArc and GLine classes to create a lettering library which draws your initials on the canvas. This should be made independent of the GLabel class. I'd like to know the correct approach to use in solving this problem. I'm not sure what I have so far is good enough (I'm thinking it's too long). The question requires that I use a good top-down approach. Here's my code so far:

        // Passes letters to GLetter objects and draws them on the canvas
        package artScienceJavaExercises.chapter8;

        import acm.program.*;
        //import acm.graphics.*;

        public class DrawInitials extends GraphicsProgram {
            public void init() {
                resize(400, 400);
            }

            public void run() {
                //String let = readLine("Letter?: ");
                letter = new GLetter("l");
                add(letter, (getWidth() - letter.getWidth() * 2) / 2, (getHeight() - letter.getHeight()) / 2);
                add(new GLetter("o"), (letter.getX() + letter.getWidth()), letter.getY());
            }

            private GLetter letter;
        }

        // GLetter class
        package artScienceJavaExercises.chapter8;

        import acm.graphics.*;
        import java.awt.*;

        public class GLetter extends GCompound {
            private static final int ONE_THIRD = 30;
            private static final int ROW_2_HEIGHT = 40;
            private GArc[] arc = new GArc[4];
            private GLine[] line = new GLine[24];

            public GLetter(String s) {
                line[0] = new GLine(0, 0, ONE_THIRD, 0);
                line[1] = new GLine(ONE_THIRD, 0, ONE_THIRD*2, 0);
                line[2] = new GLine(ONE_THIRD*2, 0, ONE_THIRD*3, 0);
                line[3] = new GLine(0, 0, 0, ONE_THIRD);
                line[4] = new GLine(ONE_THIRD, 0, ONE_THIRD, ONE_THIRD);
                line[5] = new GLine(ONE_THIRD*2, 0, ONE_THIRD*2, ONE_THIRD);
                line[6] = new GLine(ONE_THIRD*3, 0, ONE_THIRD*3, ONE_THIRD);
                line[7] = new GLine(0, ONE_THIRD, ONE_THIRD*2, ONE_THIRD);
                line[8] = new GLine(ONE_THIRD, ONE_THIRD, ONE_THIRD*2, ONE_THIRD);
                line[9] = new GLine(ONE_THIRD*2, ONE_THIRD, ONE_THIRD*3, ONE_THIRD);
                line[10] = new GLine(0, ONE_THIRD, 0, ONE_THIRD+ROW_2_HEIGHT);
                line[11] = new GLine(ONE_THIRD, ONE_THIRD, ONE_THIRD, ONE_THIRD+ROW_2_HEIGHT);
                line[12] = new GLine(ONE_THIRD*2, ONE_THIRD, ONE_THIRD*2, ONE_THIRD+ROW_2_HEIGHT);
                line[13] = new GLine(ONE_THIRD*3, ONE_THIRD, ONE_THIRD*3, ONE_THIRD+ROW_2_HEIGHT);
                line[14] = new GLine(0, ONE_THIRD+ROW_2_HEIGHT, ONE_THIRD, ONE_THIRD+ROW_2_HEIGHT);
                line[15] = new GLine(ONE_THIRD, ONE_THIRD+ROW_2_HEIGHT, ONE_THIRD*2, ONE_THIRD+ROW_2_HEIGHT);
                line[16] = new GLine(ONE_THIRD*2, ONE_THIRD+ROW_2_HEIGHT, ONE_THIRD*3, ONE_THIRD+ROW_2_HEIGHT);
                line[17] = new GLine(0, ONE_THIRD+ROW_2_HEIGHT, 0, ONE_THIRD*2+ROW_2_HEIGHT);
                line[18] = new GLine(ONE_THIRD, ONE_THIRD+ROW_2_HEIGHT, ONE_THIRD, ONE_THIRD*2+ROW_2_HEIGHT);
                line[19] = new GLine(ONE_THIRD*2, ONE_THIRD+ROW_2_HEIGHT, ONE_THIRD*2, ONE_THIRD*2+ROW_2_HEIGHT);
                line[20] = new GLine(ONE_THIRD*3, ONE_THIRD+ROW_2_HEIGHT, ONE_THIRD*3, ONE_THIRD*2+ROW_2_HEIGHT);
                line[21] = new GLine(0, ONE_THIRD*2+ROW_2_HEIGHT, ONE_THIRD, ONE_THIRD*2+ROW_2_HEIGHT);
                line[22] = new GLine(ONE_THIRD, ONE_THIRD*2+ROW_2_HEIGHT, ONE_THIRD*2, ONE_THIRD*2+ROW_2_HEIGHT);
                line[23] = new GLine(ONE_THIRD*2, ONE_THIRD*2+ROW_2_HEIGHT, ONE_THIRD*3, ONE_THIRD*2+ROW_2_HEIGHT);
                for (int i = 0; i < line.length; i++) {
                    add(line[i]);
                    line[i].setColor(Color.BLACK);
                    line[i].setVisible(false);
                }

                arc[0] = new GArc(getWidth(), getHeight(), 106.699, 49.341);
                arc[1] = new GArc(getWidth(), getHeight(), 23.96, 49.341);
                arc[2] = new GArc(getWidth(), getHeight(), -23.96, -49.341);
                arc[3] = new GArc(0, 0, getWidth(), getHeight(), -106.699, -49.341);
                for (int i = 0; i < arc.length; i++) {
                    add(arc[i], 0, 0);
                    arc[i].setColor(Color.BLACK);
                    arc[i].setVisible(false);
                }
                paintLetter(s);
            }

            private void paintLetter(String s) {
                if (s.equalsIgnoreCase("l")) {
                    turnOn(line[3]); turnOn(line[10]); turnOn(line[17]);
                    turnOn(line[21]); turnOn(line[22]); turnOn(line[23]);
                } else if (s.equalsIgnoreCase("o")) {
                    for (int i = 0; i < 4; ++i) {
                        turnOn(arc[i]);
                    }
                    turnOn(line[1]); turnOn(line[10]); turnOn(line[13]); turnOn(line[22]);
                }
            }

            private void turnOn(GObject g) {
                g.setVisible(true);
            }
        }

    I created a class (GLetter.java) with arrays for GArc and GLine objects. They are positioned in certain ways, so turning certain GLines and/or GArcs on or off (changing visibility) creates the pattern for a letter. The GLetter uses if/else statements to determine which pattern to create - this makes me feel my code is too long. There is another class (DrawInitials.java) that runs as a GraphicsProgram and allows the user to pass certain letters as arguments to the GLetter object. I've used 'L' and 'O' as examples. However, I posted this because I'm not sure I'm using the right approach. That's why I need your help. I feel MY CODE IS TOO LONG! The code above is not the complete project... it only draws letters 'L' and 'O' for now.


  • Mouse takes a while to start working after boot

    - by warkior
    I just recently installed Ubuntu 12.04 (64 bit) and a number of my USB devices have stopped working. At least, they don't work for the first 3-5 minutes. I have two mice (one wireless, one wired) and a camera, which seem to take Ubuntu 3-5 minutes to recognize after booting up. Eventually, they do start to work, but it takes ages!

    lsusb results (when the mice are working...):

    $ lsusb
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 002: ID 046d:c512 Logitech, Inc. LX-700 Cordless Desktop Receiver
    Bus 003 Device 003: ID 03f0:3f11 Hewlett-Packard PSC-1315/PSC-1317
    Bus 006 Device 002: ID 046d:c00c Logitech, Inc. Optical Wheel Mouse
    Bus 006 Device 003: ID 046d:c52b Logitech, Inc. Unifying Receiver

    syslog entries for what seems (to my very untrained eye) to be the problem:

    Oct 12 20:12:51 REMOVED-GA-MA785GM-US2H kernel: [ 17.420117] usb 2-3: device descriptor read/64, error -110
    Oct 12 20:12:57 REMOVED-GA-MA785GM-US2H goa[1879]: goa-daemon version 3.4.0 starting [main.c:112, main()]
    Oct 12 20:13:06 REMOVED-GA-MA785GM-US2H kernel: [ 32.636107] usb 2-3: device descriptor read/64, error -110
    Oct 12 20:13:06 REMOVED-GA-MA785GM-US2H kernel: [ 32.852122] usb 2-3: new high-speed USB device number 3 using ehci_hcd
    Oct 12 20:13:21 REMOVED-GA-MA785GM-US2H kernel: [ 47.964131] usb 2-3: device descriptor read/64, error -110
    Oct 12 20:13:37 REMOVED-GA-MA785GM-US2H kernel: [ 63.180115] usb 2-3: device descriptor read/64, error -110
    Oct 12 20:13:37 REMOVED-GA-MA785GM-US2H kernel: [ 63.396126] usb 2-3: new high-speed USB device number 4 using ehci_hcd
    Oct 12 20:13:47 REMOVED-GA-MA785GM-US2H kernel: [ 73.804158] usb 2-3: device not accepting address 4, error -110
    Oct 12 20:13:47 REMOVED-GA-MA785GM-US2H kernel: [ 73.916190] usb 2-3: new high-speed USB device number 5 using ehci_hcd
    Oct 12 20:13:58 REMOVED-GA-MA785GM-US2H kernel: [ 84.324160] usb 2-3: device not accepting address 5, error -110
    Oct 12 20:13:58 REMOVED-GA-MA785GM-US2H kernel: [ 84.324197] hub 2-0:1.0: unable to enumerate USB device on port 3
    Oct 12 20:13:58 REMOVED-GA-MA785GM-US2H udev-configure-printer: failed to claim interface
    Oct 12 20:13:58 REMOVED-GA-MA785GM-US2H udev-configure-printer: Failed to get parent
    Oct 12 20:13:58 REMOVED-GA-MA785GM-US2H udev-configure-printer: device devpath is /devices/pci0000:00/0000:00:12.0/usb3/3-3
    Oct 12 20:13:58 REMOVED-GA-MA785GM-US2H udev-configure-printer: MFG:hp MDL:psc 1310 series SERN:CN47CB60BJO2 serial:CN47CB60BJO2
    Oct 12 20:13:58 REMOVED-GA-MA785GM-US2H kernel: [ 84.768132] usb 5-3: new full-speed USB device number 2 using ohci_hcd
    Oct 12 20:14:01 REMOVED-GA-MA785GM-US2H udev-configure-printer: no corresponding CUPS device found
    Oct 12 20:14:13 REMOVED-GA-MA785GM-US2H kernel: [ 99.904185] usb 5-3: device descriptor read/64, error -110
    Oct 12 20:14:29 REMOVED-GA-MA785GM-US2H kernel: [ 115.144188] usb 5-3: device descriptor read/64, error -110
    Oct 12 20:14:29 REMOVED-GA-MA785GM-US2H kernel: [ 115.384178] usb 5-3: new full-speed USB device number 3 using ohci_hcd
    Oct 12 20:14:44 REMOVED-GA-MA785GM-US2H kernel: [ 130.520196] usb 5-3: device descriptor read/64, error -110
    Oct 12 20:14:59 REMOVED-GA-MA785GM-US2H kernel: [ 145.760179] usb 5-3: device descriptor read/64, error -110
    Oct 12 20:14:59 REMOVED-GA-MA785GM-US2H kernel: [ 146.000173] usb 5-3: new full-speed USB device number 4 using ohci_hcd
    Oct 12 20:15:10 REMOVED-GA-MA785GM-US2H kernel: [ 156.408168] usb 5-3: device not accepting address 4, error -110
    Oct 12 20:15:10 REMOVED-GA-MA785GM-US2H kernel: [ 156.544188] usb 5-3: new full-speed USB device number 5 using ohci_hcd
    Oct 12 20:15:20 REMOVED-GA-MA785GM-US2H kernel: [ 166.952181] usb 5-3: device not accepting address 5, error -110
    Oct 12 20:15:20 REMOVED-GA-MA785GM-US2H kernel: [ 166.952215] hub 5-0:1.0: unable to enumerate USB device on port 3
    Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H kernel: [ 167.216164] usb 6-2: new low-speed USB device number 2 using ohci_hcd
    Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H mtp-probe: checking bus 6, device 2: "/sys/devices/pci0000:00/0000:00:13.1/usb6/6-2"
    Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H mtp-probe: bus: 6, device: 2 was not an MTP device
    Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H kernel: [ 167.396138] input: Logitech USB Mouse as /devices/pci0000:00/0000:00:13.1/usb6/6-2/6-2:1.0/input/input16
    Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H kernel: [ 167.396442] generic-usb 0003:046D:C00C.0003: input,hidraw2: USB HID v1.10 Mouse [Logitech USB Mouse] on usb-0000:00:13.1-2/input0
    Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H kernel: [ 167.660187] usb 6-3: new full-speed USB device number 3 using ohci_hcd
    Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H mtp-probe: checking bus 6, device 3: "/sys/devices/pci0000:00/0000:00:13.1/usb6/6-3"
    Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H mtp-probe: bus: 6, device: 3 was not an MTP device
    Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H kernel: [ 167.859045] logitech-djreceiver 0003:046D:C52B.0006: hiddev0,hidraw3: USB HID v1.11 Device [Logitech USB Receiver] on usb-0000:00:13.1-3/input2
    Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H kernel: [ 167.865086] input: Logitech Unifying Device. Wireless PID:400a as /devices/pci0000:00/0000:00:13.1/usb6/6-3/6-3:1.2/0003:046D:C52B.0006/input/input17
    Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H kernel: [ 167.865291] logitech-djdevice 0003:046D:C52B.0007: input,hidraw4: USB HID v1.11 Mouse [Logitech Unifying Device. Wireless PID:400a] on usb-0000:00:13.1-3:1
    Oct 12 20:15:24 REMOVED-GA-MA785GM-US2H colord: io/hpmud/musb.c 139: unable get_string_descriptor -1: Operation not permitted
    Oct 12 20:15:24 REMOVED-GA-MA785GM-US2H colord: io/hpmud/musb.c 2040: invalid product id string ret=-1
    Oct 12 20:15:24 REMOVED-GA-MA785GM-US2H colord: io/hpmud/musb.c 139: unable get_string_descriptor -1: Operation not permitted
    Oct 12 20:15:24 REMOVED-GA-MA785GM-US2H colord: io/hpmud/musb.c 2045: invalid serial id string ret=-1
    Oct 12 20:15:24 REMOVED-GA-MA785GM-US2H colord: io/hpmud/musb.c 139: unable get_string_descriptor -1: Operation not permitted
    Oct 12 20:15:24 REMOVED-GA-MA785GM-US2H colord: io/hpmud/musb.c 2050: invalid manufacturer string ret=-1


  • Scripting Part 1

    - by rbishop
    Dynamic Scripting is a large topic, so let me get a couple of things out of the way first. If you aren't familiar with JavaScript, I can suggest CodeAcademy's JavaScript series. There are also many other websites and books that cover JavaScript from every possible angle.

    The second thing we need to deal with is JavaScript as a programming language versus a JavaScript environment running in a web browser. Many books, tutorials, and websites completely blur these two together, but they are in fact completely separate. What does this really mean in relation to DRM? Since DRM isn't a web browser, there are no document, window, history, screen, or location objects. There are no events like mousedown or click. Trying to call alert('hello!') in DRM will just cause an error. Those concepts are all related to an HTML document (web page) and are part of the Browser Object Model or Document Object Model. DRM has its own object model that exposes DRM-related objects. In practice, feel free to use those sorts of tutorials or practice within your browser; many of the concepts are directly translatable to writing scripts in DRM. Just don't try to call document.getElementById in your property definition!

    I think learning by example tends to work the best, so let's try getting a list of all the unique property values for a given node and its children.

        var uniqueValues = {};
        var childEnumerator = node.GetChildEnumerator();
        while (childEnumerator.MoveNext()) {
            var propValue = childEnumerator.GetCurrent().PropValue("Custom.testpropstr1");
            print(propValue);
            if (propValue != null && propValue != '' && !uniqueValues[propValue])
                uniqueValues[propValue] = true;
        }
        var result = '';
        for (var value in uniqueValues) {
            result += "Found value " + value + ",";
        }
        return result;

    Now let's break this down piece by piece.

        var uniqueValues = {};

    This declares a variable and initializes it as a new empty Object. You could also have written var uniqueValues = new Object(); Why use an object here? JavaScript objects can also function as a list of keys, and we'll use that later to store each property value as a key on the object.

        var childEnumerator = node.GetChildEnumerator();
        while (childEnumerator.MoveNext()) {

    This gets an enumerator for the node's children. The enumerator allows us to loop through the children one by one. If we wanted a filtered list of children, we would instead use ChildrenWith(). When we reach the end of the child list, the enumerator will return false for MoveNext() and that will stop the loop.

        var propValue = childEnumerator.GetCurrent().PropValue("Custom.testpropstr1");
        print(propValue);
        if (propValue != null && propValue != '' && !uniqueValues[propValue])
            uniqueValues[propValue] = true;

    This gets the node the enumerator is currently pointing at, then calls PropValue() on it to get the value of a property. We then make sure the prop value isn't null or the empty string, then we make sure the value doesn't already exist as a key. Assuming it doesn't, we add it as a key with a value (true in this case, because it makes checking for an existing value faster when the value exists).

    A quick word on the print() function. When viewing the prop grid, running an export, or performing normal DRM operations it does nothing. If you have a lot of print() calls with complicated arguments it can slow your script down slightly, but otherwise it has no effect. But when using the script editor, all the output of print() will be shown in the Warnings area.
    This gives you an extremely useful debugging tool to see what exactly a script is doing.

        var result = '';
        for (var value in uniqueValues) {
            result += "Found value " + value + ",";
        }
        return result;

    Now we build a string by looping through all the keys in uniqueValues and adding each value to our string. The last step is to simply return the result. Hopefully this small example demonstrates some of the core Dynamic Scripting concepts. Next time, we can try checking for node references in other hierarchies to see if they are using duplicate property values.


  • WebSocket API 1.1 released!

    - by Pavel Bucek
    It is my pleasure to announce that the JSR 356 – Java API for WebSocket maintenance release ballot vote finished with a majority of “yes” votes (actually, only one eligible voter did not vote; all other votes were “yeses”). The new release is a maintenance release and it addresses only one issue: WEBSOCKET_SPEC-226.

    What changed in 1.1? Version 1.1 is fully backwards compatible with version 1.0; there are only two methods added to javax.websocket.Session:

        /**
         * Register to handle to incoming messages in this conversation. A maximum of one message handler per
         * native websocket message type (text, binary, pong) may be added to each Session. I.e. a maximum
         * of one message handler to handle incoming text messages, a maximum of one message handler for
         * handling incoming binary messages, and a maximum of one for handling incoming pong
         * messages. For further details of which message handlers handle which of the native websocket
         * message types please see {@link MessageHandler.Whole} and {@link MessageHandler.Partial}.
         * Adding more than one of any one type will result in a runtime exception.
         *
         * @param clazz   type of the message processed by message handler to be registered.
         * @param handler whole message handler to be added.
         * @throws IllegalStateException if there is already a MessageHandler registered for the same native
         *                               websocket message type as this handler.
         */
        public <T> void addMessageHandler(Class<T> clazz, MessageHandler.Whole<T> handler);

        /**
         * Register to handle to incoming messages in this conversation. A maximum of one message handler per
         * native websocket message type (text, binary, pong) may be added to each Session. I.e. a maximum
         * of one message handler to handle incoming text messages, a maximum of one message handler for
         * handling incoming binary messages, and a maximum of one for handling incoming pong
         * messages. For further details of which message handlers handle which of the native websocket
         * message types please see {@link MessageHandler.Whole} and {@link MessageHandler.Partial}.
         * Adding more than one of any one type will result in a runtime exception.
         *
         * @param clazz   type of the message processed by message handler to be registered.
         * @param handler partial message handler to be added.
         * @throws IllegalStateException if there is already a MessageHandler registered for the same native
         *                               websocket message type as this handler.
         */
        public <T> void addMessageHandler(Class<T> clazz, MessageHandler.Partial<T> handler);

    Why do we need to add those methods? Short and not precise version: to support Lambda expressions as MessageHandlers. Longer and slightly more precise explanation: the old Session#addMessageHandler method (which is still there and works as it always did) relies on getting the generic parameter at runtime, which is not (always) possible. The unfortunate part is that it works for some common cases, and the expert group did not catch this issue before the 1.0 release because of that. The issue is really clearly visible when Lambdas are used as message handlers:

        session.addMessageHandler(message -> {
            System.out.println("### Received: " + message);
        });

    There is no way for the JSR 356 implementation to get the type of the used Lambda expression, thus this call will always result in an exception.
    Since all modern IDEs recommend using Lambda expressions when possible, and the MessageHandler interfaces are single method interfaces, everything basically screams “use Lambdas” all over the place - but when you do that, the application will fail at runtime. The only solution we currently have is to explicitly provide the type of the registered MessageHandler. (There might be another sometime in the future when generic type reification is introduced, but that is not going to happen soon enough.) So the example above then becomes:

        session.addMessageHandler(String.class, message -> {
            System.out.println("### Received: " + message);
        });

    and voila, it works. There are some limitations – you cannot do List<String>.class, so you will need to encapsulate such types when you want to use them in a MessageHandler implementation (something like “class MyType extends ArrayList<String>”). There is no better way to solve this issue, because Java currently does not provide a good way to describe generic types.

    The API itself is available on Maven Central; look for javax.websocket:javax.websocket-api:1.1. The reference implementation is project Tyrus, which implements WebSocket API 1.1 from version 1.8.
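    To make the encapsulation trick concrete, here is a minimal sketch (the class and method names are illustrative, and for a non-String payload like this a registered Decoder for the wrapper type is assumed to be in place):

        import java.util.ArrayList;
        import javax.websocket.Session;

        // List<String>.class is not legal Java, so wrap the generic type in a named subclass
        class StringListMessage extends ArrayList<String> { }

        class Handlers {
            // With the new 1.1 overload, the explicit class token lets a Lambda handler work
            static void register(Session session) {
                session.addMessageHandler(StringListMessage.class, message ->
                    System.out.println("### Received " + message.size() + " items"));
            }
        }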


  • Mapping Repeating Sequence Groups in BizTalk

    - by Paul Petrov
    Repeating sequence groups can often be seen in real life XML documents. It happens when a certain sequence of elements repeats in the instance document. Here's a fairly abstract example of a schema definition that contains a sequence group:

    <xs:schema xmlns:b="http://schemas.microsoft.com/BizTalk/2003"
               xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns="NS-Schema1"
               targetNamespace="NS-Schema1">
      <xs:element name="RepeatingSequenceGroups">
        <xs:complexType>
          <xs:sequence maxOccurs="1" minOccurs="0">
            <xs:sequence maxOccurs="unbounded">
              <xs:element name="A" type="xs:string" />
              <xs:element name="B" type="xs:string" />
              <xs:element name="C" type="xs:string" minOccurs="0" />
            </xs:sequence>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>

    And here's the corresponding XML instance document:

    <ns0:RepeatingSequenceGroups xmlns:ns0="NS-Schema1">
      <A>A1</A>
      <B>B1</B>
      <C>C1</C>
      <A>A2</A>
      <B>B2</B>
      <A>A3</A>
      <B>B3</B>
      <C>C3</C>
    </ns0:RepeatingSequenceGroups>

    As you can see, elements A, B, and C are children of an anonymous xs:sequence element which in turn can be repeated N times. Let's say we need to do a simple mapping to a schema with a similar structure but with different element names:

    <ns0:Destination xmlns:ns0="NS-Schema2">
      <Alpha>A1</Alpha>
      <Beta>B1</Beta>
      <Gamma>C1</Gamma>
      <Alpha>A2</Alpha>
      <Beta>B2</Beta>
      <Gamma>C2</Gamma>
    </ns0:Destination>

    The basic map for such a typical task would look pretty straightforward. If we test this map without any modification, it will produce the following result:

    <ns0:Destination xmlns:ns0="NS-Schema2">
      <Alpha>A1</Alpha>
      <Alpha>A2</Alpha>
      <Alpha>A3</Alpha>
      <Beta>B1</Beta>
      <Beta>B2</Beta>
      <Beta>B3</Beta>
      <Gamma>C1</Gamma>
      <Gamma>C3</Gamma>
    </ns0:Destination>

    The original order of the elements inside the sequence is lost, and that's not what we want. The default behavior of the BizTalk 2009 and 2010 Map Editor is to generate a map compatible with older versions that did not have the ability to preserve sequence order. To enable this feature, simply open the map file (*.btm) in a text/XML editor and find the PreserveSequenceOrder attribute of the root <mapsource> element. Set its value to Yes and re-test the map:

    <ns0:Destination xmlns:ns0="NS-Schema2">
      <Alpha>A1</Alpha>
      <Beta>B1</Beta>
      <Gamma>C1</Gamma>
      <Alpha>A2</Alpha>
      <Beta>B2</Beta>
      <Alpha>A3</Alpha>
      <Beta>B3</Beta>
      <Gamma>C3</Gamma>
    </ns0:Destination>

    The result is as expected - all corresponding elements are in the same order as in the source document. Under the hood this is achieved by using one common xsl:for-each statement that pulls all elements in their original order (rather than using an individual for-each statement per element name, as in the default mode) and xsl:if statements to test the current element in the loop:

    <xsl:template match="/s0:RepeatingSequenceGroups">
      <ns0:Destination>
        <xsl:for-each select="A|B|C">
          <xsl:if test="local-name()='A'">
            <Alpha>
              <xsl:value-of select="./text()" />
            </Alpha>
          </xsl:if>
          <xsl:if test="local-name()='B'">
            <Beta>
              <xsl:value-of select="./text()" />
            </Beta>
          </xsl:if>
          <xsl:if test="local-name()='C'">
            <Gamma>
              <xsl:value-of select="./text()" />
            </Gamma>
          </xsl:if>
        </xsl:for-each>
      </ns0:Destination>
    </xsl:template>

    The BizTalk Map Editor became smarter, so learn and use this lesser known feature in your maps and XSL stylesheets.


  • .NET Security Part 4

    - by Simon Cooper
    Finally, in this series, I am going to cover some of the security issues that can trip you up when using sandboxed appdomains. DISCLAIMER: I am not a security expert, and this is by no means an exhaustive list. If you actually are writing security-critical code, then get a proper security audit of your code by a professional. The examples below are just illustrations of the sort of things that can go wrong. 1. AppDomainSetup.ApplicationBase The most obvious one is the issue covered in the MSDN documentation on creating a sandbox, in step 3 – the sandboxed appdomain has the same ApplicationBase as the controlling appdomain. So let’s explore what happens when they are the same, and an exception is thrown. In the sandboxed assembly, Sandboxed.dll (IPlugin is an interface in a partially-trusted assembly, with a single MethodToDoThings on it): public class UntrustedPlugin : MarshalByRefObject, IPlugin { // implements IPlugin.MethodToDoThings() public void MethodToDoThings() { throw new EvilException(); } } [Serializable] internal class EvilException : Exception { public override string ToString() { // show we have read access to C:\Windows // read the first 5 directories Console.WriteLine("Pwned! Mwuahahah!"); foreach (var d in Directory.EnumerateDirectories(@"C:\Windows").Take(5)) { Console.WriteLine(d.FullName); } return base.ToString(); } } And in the controlling assembly: // what can possibly go wrong? AppDomainSetup appDomainSetup = new AppDomainSetup { ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase } // only grant permissions to execute // and to read the application base, nothing else PermissionSet restrictedPerms = new PermissionSet(PermissionState.None); restrictedPerms.AddPermission( new SecurityPermission(SecurityPermissionFlag.Execution)); restrictedPerms.AddPermission( new FileIOPermission(FileIOPermissionAccess.Read, appDomainSetup.ApplicationBase); restrictedPerms.AddPermission( new FileIOPermission(FileIOPermissionAccess.pathDiscovery, appDomainSetup.ApplicationBase); // create the sandbox AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, appDomainSetup, restrictedPerms); // execute UntrustedPlugin in the sandbox // don't crash the application if the sandbox throws an exception IPlugin o = (IPlugin)sandbox.CreateInstanceFromAndUnwrap("Sandboxed.dll", "UntrustedPlugin"); try { o.MethodToDoThings() } catch (Exception e) { Console.WriteLine(e.ToString()); } And the result? Oops. We’ve allowed a class that should be sandboxed to execute code with fully-trusted permissions! How did this happen? Well, the key is the exact meaning of the ApplicationBase property: The application base directory is where the assembly manager begins probing for assemblies. When EvilException is thrown, it propagates from the sandboxed appdomain into the controlling assembly’s appdomain (as it’s marked as Serializable). When the exception is deserialized, the CLR finds and loads the sandboxed dll into the fully-trusted appdomain. Since the controlling appdomain’s ApplicationBase directory contains the sandboxed assembly, the CLR finds and loads the assembly into a full-trust appdomain, and the evil code is executed. So the problem isn’t exactly that the sandboxed appdomain’s ApplicationBase is the same as the controlling appdomain’s, it’s that the sandboxed dll was in such a place that the controlling appdomain could find it as part of the standard assembly resolution mechanism. 
The sandbox then forced the assembly to load in the controlling appdomain by throwing a serializable exception that propagated outside the sandbox. The easiest fix for this is to keep the sandbox ApplicationBase well away from the ApplicationBase of the controlling appdomain, and don't allow the sandbox permissions to access the controlling appdomain's ApplicationBase directory. If you do this, then the sandboxed assembly can't be accidentally loaded into the fully-trusted appdomain, and the code can't be executed. If the plugin does try to induce the controlling appdomain to load an assembly it shouldn't, a SerializationException will be thrown when it tries to load the assembly to deserialize the exception, and no damage will be done. 2. Loading the sandboxed dll into the application appdomain As an extension of the previous point, you shouldn't directly reference types or methods in the sandboxed dll from your application code. That loads the assembly into the fully-trusted appdomain, and from there code in the assembly could be executed. Instead, pull out the methods you want the sandboxed dll to have into an interface or class in a partially-trusted assembly you control, and execute methods via that instead (similar to the example above with the IPlugin interface). If you need to have a look at the assembly before executing it in the sandbox, either examine the assembly using reflection from within the sandbox, or load the assembly into the reflection-only context in the application's appdomain. The code in assemblies in the reflection-only context can't be executed; it can only be reflected upon, thus protecting your appdomain from malicious code. 3. Incorrectly asserting permissions You should only assert permissions when you are absolutely sure they're safe. For example, this method allows a caller read-access to any file they call this method with, including your documents, any network shares, the C:\Windows directory, etc:

    [SecuritySafeCritical]
    public static string GetFileText(string filePath)
    {
        new FileIOPermission(FileIOPermissionAccess.Read, filePath).Assert();
        return File.ReadAllText(filePath);
    }

Be careful when asserting permissions, and ensure you're not providing a loophole sandboxed dlls can use to gain access to things they shouldn't be able to. Conclusion Hopefully, that's given you an idea of some of the ways it's possible to get past the .NET security system. As I said before, this post is not exhaustive, and you certainly shouldn't base any security-critical applications on the contents of this blog post. What this series should help with is understanding the possibilities of the security system, and what all the security attributes and classes mean and what they are used for, if you were to use the security system in the future.
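    Going back to point 1, here is a minimal sketch of the suggested fix; the sandbox folder name and paths below are assumptions, not part of the original post:

    using System;
    using System.IO;
    using System.Security;
    using System.Security.Permissions;

    // Hypothetical layout: sandboxed assemblies live in their own folder,
    // outside the controlling appdomain's ApplicationBase.
    string sandboxBase = Path.Combine(Path.GetTempPath(), "PluginSandbox");

    AppDomainSetup sandboxSetup = new AppDomainSetup
    {
        ApplicationBase = sandboxBase
    };

    PermissionSet perms = new PermissionSet(PermissionState.None);
    perms.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
    // read access to the sandbox base only - NOT to the controlling ApplicationBase
    perms.AddPermission(new FileIOPermission(
        FileIOPermissionAccess.Read | FileIOPermissionAccess.PathDiscovery, sandboxBase));

    AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, sandboxSetup, perms);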

    Read the article

  • [Windows 8] An application bar toggle button

    - by Benjamin Roux
    To stay on the application bar theme, here's another useful control which enables you to create an application bar button that can be toggled between two different contents/styles/commands (useful for a favorite/unfavorite or a play/pause button, for example).

    namespace Indeed.Controls
    {
        public class AppBarToggleButton : Button
        {
            public bool IsChecked
            {
                get { return (bool)GetValue(IsCheckedProperty); }
                set { SetValue(IsCheckedProperty, value); }
            }
            public static readonly DependencyProperty IsCheckedProperty =
                DependencyProperty.Register("IsChecked", typeof(bool), typeof(AppBarToggleButton),
                    new PropertyMetadata(false, (o, e) => (o as AppBarToggleButton).IsCheckedChanged()));

            public string CheckedContent
            {
                get { return (string)GetValue(CheckedContentProperty); }
                set { SetValue(CheckedContentProperty, value); }
            }
            public static readonly DependencyProperty CheckedContentProperty =
                DependencyProperty.Register("CheckedContent", typeof(string), typeof(AppBarToggleButton), null);

            public ICommand CheckedCommand
            {
                get { return (ICommand)GetValue(CheckedCommandProperty); }
                set { SetValue(CheckedCommandProperty, value); }
            }
            public static readonly DependencyProperty CheckedCommandProperty =
                DependencyProperty.Register("CheckedCommand", typeof(ICommand), typeof(AppBarToggleButton), null);

            public Style CheckedStyle
            {
                get { return (Style)GetValue(CheckedStyleProperty); }
                set { SetValue(CheckedStyleProperty, value); }
            }
            public static readonly DependencyProperty CheckedStyleProperty =
                DependencyProperty.Register("CheckedStyle", typeof(Style), typeof(AppBarToggleButton), null);

            public bool AutoToggle
            {
                get { return (bool)GetValue(AutoToggleProperty); }
                set { SetValue(AutoToggleProperty, value); }
            }
            public static readonly DependencyProperty AutoToggleProperty =
                DependencyProperty.Register("AutoToggle", typeof(bool), typeof(AppBarToggleButton), null);

            private object content;
            private ICommand command;
            private Style style;

            private void IsCheckedChanged()
            {
                if (IsChecked)
                {
                    // backup the current content, command and style
                    content = Content;
                    command = Command;
                    style = Style;
                    if (CheckedStyle == null)
                        Content = CheckedContent;
                    else
                        Style = CheckedStyle;
                    Command = CheckedCommand;
                }
                else
                {
                    if (CheckedStyle == null)
                        Content = content;
                    else
                        Style = style;
                    Command = command;
                }
            }

            protected override void OnTapped(Windows.UI.Xaml.Input.TappedRoutedEventArgs e)
            {
                base.OnTapped(e);
                if (AutoToggle)
                    IsChecked = !IsChecked;
            }
        }
    }

    To use it, it's very simple. 
<ic:AppBarToggleButton Style="{StaticResource PlayAppBarButtonStyle}"
                       CheckedStyle="{StaticResource PauseAppBarButtonStyle}"
                       Command="{Binding Path=PlayCommand}"
                       CheckedCommand="{Binding Path=PauseCommand}"
                       IsChecked="{Binding Path=IsPlaying}" />

When the IsPlaying property (in my ViewModel) is true the button becomes a Pause button; when it's false it becomes a Play button. Warning: just make sure that the IsChecked property is set last in your control! If you don't use styles you can alternatively use Content and CheckedContent. Furthermore, you can set AutoToggle to true if you don't want to control the IsChecked property through binding. With this control and the AppBarPopupButton, you can now create awesome application bars for your apps! Stay tuned for more awesome Windows 8 tricks!
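    For context, here is a minimal sketch of a view model that could back the binding above. The property and command names match the XAML; RelayCommand stands in for whatever ICommand implementation you use, so treat it as an assumption:

    public class PlayerViewModel : INotifyPropertyChanged
    {
        private bool isPlaying;

        public PlayerViewModel()
        {
            // RelayCommand is an assumption; substitute any ICommand implementation.
            PlayCommand = new RelayCommand(() => IsPlaying = true);
            PauseCommand = new RelayCommand(() => IsPlaying = false);
        }

        public ICommand PlayCommand { get; private set; }
        public ICommand PauseCommand { get; private set; }

        public bool IsPlaying
        {
            get { return isPlaying; }
            set
            {
                isPlaying = value;
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs("IsPlaying"));
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;
    }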

    Read the article

  • JustCode Provides Reflector Alternative

    - by Joe Mayo
    If you've been a loyal Reflector user, you've probably been exposed to the debacle surrounding Red Gate's decision to no longer offer a free version.  Since then, the race has begun for a replacement from a provider that would stand by their promises to the community.  Mono has an ongoing free alternative, which has been available for a long time.  However, other vendors are stepping up to the plate with their own offerings. If Not Reflector, Then What? One of these vendors is Telerik.  In their recent Q1 2011 release of JustCode, Telerik offers a decompilation utility rivaling what we've become accustomed to in Reflector.  Not only does Telerik offer a usable replacement, but they've (in my opinion) produced a product that integrates more naturally with Visual Studio than any other product ever has.  Telerik's decompilation process is so easy that the accompanying demo in this post is blindingly short (except for the presence of verbose narrative). If you want to follow along with this demo, you'll need to have Telerik JustCode installed.  If you don't have JustCode yet, you can buy it or download a trial at the Telerik Web site. A Tall Tale; Prove It! With JustCode, you can view code in the .NET Framework or any other 3rd party library (that isn't well obfuscated).  This demo depends on LINQ to Twitter, which you can download from CodePlex.com and create a reference to, or install the package online as described in my previous post on NuGet.  Regardless of the method, you'll have a project with a reference to LINQ to Twitter.  Use a Console project if you want to follow along with this demo. Note:  If you've created a Console project, remember to ensure that the Target Framework is set to .NET Framework 4.  The default is .NET Framework 4 Client Profile, which doesn't work with LINQ to Twitter.  You can check by double-clicking the Properties folder on the project and inspecting the Target Framework setting. Next, you'll need to add some code to your program that you want to inspect. Here, I add code to instantiate a TwitterContext, which is like a LINQ to SQL DataContext, but works with Twitter:

    var l2tCtx = new TwitterContext();

    If you're following along, add the code above to the Main method, which will look similar to this:

    using LinqToTwitter;

    namespace NuGetInstall
    {
        class Program
        {
            static void Main(string[] args)
            {
                var l2tCtx = new TwitterContext();
            }
        }
    }

    The code above doesn't really do anything, but it does give me something to show and demonstrate how JustCode decompilation works. Once the code is in place, click on TwitterContext and press the F12 (Go to Definition) key.  As expected, Visual Studio opens a metadata file with prototypes for the TwitterContext class.  Here's the result: Opening a metadata file is the normal way that Visual Studio works when navigating to the definition of a type where you don't have the code.  The scenario with TwitterContext happens because you don't have the source code to the file.  Visual Studio has always done this, and you can experiment by selecting any .NET type, e.g. a string type, and observing that Visual Studio opens a metadata file for the .NET String type. The point I'm making here is that JustCode works the way Visual Studio works, and you'll see how this can make your job easier. In the previous figure, you only saw prototypes associated with the code; notice that the default constructor is empty.  Again, this is normal because Visual Studio doesn't have the ability to decompile code.  
However, that's the purpose of this post: showing you how JustCode fills that gap. To decompile code, right-click on TwitterContext in the metadata file and select JustCode Navigate -> Decompile from the context menu.  The shortcut keys are Ctrl+1.  After a brief pause, accompanied by a progress window, you'll see the metadata expand into full decompiled code. Notice below how the default constructor now has code, as opposed to the empty member prototype in the original metadata: And Why is This So Different? Again, the big deal is that Telerik JustCode decompilation works in harmony with the way that Visual Studio works.  The navigate-to functionality already exists, and you can use that, along with a simple context menu option (or shortcut key), to transform prototypes into decompiled code. Telerik is filling the Reflector/Red Gate gap by providing a supported alternative for decompiling code.  Many people, including myself, used Reflector to decompile code when we were stuck with buggy libraries or insufficient documentation.  Now we have an alternative that's officially supported by a company with an excellent track record for customer (developer) service, Telerik.  Not only that, JustCode has several other IDE productivity tools that make the deal even sweeter. Joe

    Read the article

  • Building a plug-in for Windows Live Writer

    - by mbcrump
    This tutorial will show you how to build a plug-in for Windows Live Writer. Windows Live Writer is a blogging tool that Microsoft provides for free. It includes an open API for .NET developers to create custom plug-ins. In this tutorial, I will show you how easy it is to build one. Open VS2008 or VS2010 and create a new project. Set the target framework to 2.0, the application type to Class Library, and give it a name. In this tutorial, we are going to create a plug-in that generates a Twitter message with your blog post name and a TinyUrl link to the blog post.  It will do all of this automatically after you publish your post. Once we have a new project created, we need to set up the references. Add a reference to the WindowsLive.Writer.Api.dll located in the C:\Program Files (x86)\Windows Live\Writer\ folder, if you are using the x64 version of Windows. You will also need to add references to System.Windows.Forms and System.Web from the .NET tab as well. Once that is complete, add your "using" statements so that they look like what's shown below:

    Live Writer Plug-In "Using"

    using System;
    using System.Collections.Generic;
    using System.Text;
    using WindowsLive.Writer.Api;
    using System.Web;

    Now, we are going to set up some build events to make it easier to test our custom class. Go into the Properties of your project, select Build Events, click to edit the Post-build event, and copy/paste the following line:

    XCOPY /D /Y /R "$(TargetPath)" "C:\Program Files (x86)\Windows Live\Writer\Plugins\"

    Your screen should look like the one pictured below: Next, we are going to launch an external program on debug. Click the Debug tab and enter:

    C:\Program Files (x86)\Windows Live\Writer\WindowsLiveWriter.exe

    Your screen should look like the one pictured below:   Now we have a blank project and we need to add some code. We start by adding the attributes for the Live Writer plug-in. Before we get started creating the attributes, we need to create a GUID. This GUID will uniquely identify our plug-in. So, to create a GUID follow these steps in VS2008/2010: click Tools from the VS menu -> Create GUID. It will generate a GUID like the one listed below:

    GUID

    <Guid("56ED8A2C-F216-420D-91A1-F7541495DBDA")>

    We only want what's inside the quotes, so your final product should be: "56ED8A2C-F216-420D-91A1-F7541495DBDA". Go ahead and paste this snippet into your class just above the public class declaration.

    Live Writer Plug-In Attributes

    [WriterPlugin("56ED8A2C-F216-420D-91A1-F7541495DBDA",
       "Generate Twitter Message",
       Description = "After your new post has been published, this plug-in will attempt to generate a Twitter status message with the Title and TinyUrl link.",
       HasEditableOptions = false,
       Name = "Generate Twitter Message",
       PublisherUrl = "http://michaelcrump.net")]
    [InsertableContentSource("Generate Twitter Message")]

    So far, it should look like the following: Next, we need to inherit from the PublishNotificationHook class and override OnPostPublish. I'm not going to dive into what the code is doing, as you should be able to follow pretty easily. The code below is the entire code used in the project. 
PublishNotificationHook

    public class Class1 : PublishNotificationHook
    {
        public override void OnPostPublish(System.Windows.Forms.IWin32Window dialogOwner, IProperties properties, IPublishingContext publishingContext, bool publish)
        {
            if (!publish) return;
            if (string.IsNullOrEmpty(publishingContext.PostInfo.Permalink))
            {
                PluginDiagnostics.LogError("Live Tweet didn't execute, due to blank permalink");
            }
            else
            {
                var strBlogName = HttpUtility.UrlEncode("#blogged : " + publishingContext.PostInfo.Title); // blog post title
                var strUrlFinal = getTinyUrl(publishingContext.PostInfo.Permalink); // blog permalink URL converted to a TinyURL
                System.Diagnostics.Process.Start("http://twitter.com/home?status=" + strBlogName + strUrlFinal);
            }
        }

    We are going to go ahead and create a method to create the short URL (TinyURL):

    TinyURL Helper Method

        private static string getTinyUrl(string url)
        {
            var cmpUrl = System.Globalization.CultureInfo.InvariantCulture.CompareInfo;
            if (!cmpUrl.IsPrefix(url, "http://tinyurl.com"))
            {
                var address = "http://tinyurl.com/api-create.php?url=" + url;
                var client = new System.Net.WebClient();
                return (client.DownloadString(address));
            }
            return (url);
        }
    } // closes Class1

    Go ahead and build your project; it should have copied the .DLL into the Windows Live Writer plugin directory. If it did not, then you will want to check your configuration. Once that is complete, open Windows Live Writer and select Tools -> Options -> Plug-ins and enable the plug-in that you just created. Your screen should look like the one pictured below: Go ahead and click OK and publish your blog post. You should get a pop-up with the following: Hit OK and it should open Twitter and either ask for a login or fill in your status as shown below:   That should do it. You can do so many other things with the API. I suggest that if you want to build something really useful, consult the MSDN pages. This plug-in that I created was perfect for what I needed and I hope someone finds it useful.

    Read the article

  • Custom Configuration Section Handlers

    Most .NET developers who need to store something in configuration tend to use appSettings for this purpose, in my experience.  More recently, the framework itself has helped things by adding the <connectionStrings /> section, so at least these are in their own section and not adding to the appSettings clutter that pollutes most apps.  I recommend avoiding appSettings for several reasons.  In addition to those listed there, I would add that strong typing and validation are additional reasons to go the custom configuration section route. For my ASP.NET Tips and Tricks talk, I use the following example, which is a simple DemoSettings class that includes two fields.  The first is an integer representing how many attendees are present for the talk, and the second is the title of the talk.  The setup in web.config is as follows:

    <configSections>
      <section name="DemoSettings" type="ASPNETTipsAndTricks.Code.DemoSettings" />
    </configSections>

    <DemoSettings sessionAttendees="100" title="ASP.NET Tips and Tricks DevConnections Spring 2010" />

    Referencing the values in code is strongly typed and straightforward.  Here I have a page that exposes two properties which internally get their values from the configuration section handler:

    public partial class CustomConfig1 : System.Web.UI.Page
    {
        public string SessionTitle
        {
            get { return DemoSettings.Settings.Title; }
        }

        public int SessionAttendees
        {
            get { return DemoSettings.Settings.SessionAttendees; }
        }
    }

    Note that the settings are only read from the config file once; after that they are cached, so there is no need to be concerned about excessive file access. Now we've seen how to set it up in the config file and how to refer to the settings in code.  All that remains is to see the file itself:

    public class DemoSettings : ConfigurationSection
    {
        private static DemoSettings settings = ConfigurationManager.GetSection("DemoSettings") as DemoSettings;

        public static DemoSettings Settings
        {
            get { return settings; }
        }

        [ConfigurationProperty("sessionAttendees", DefaultValue = 200, IsRequired = false)]
        [IntegerValidator(MinValue = 1, MaxValue = 10000)]
        public int SessionAttendees
        {
            get { return (int)this["sessionAttendees"]; }
            set { this["sessionAttendees"] = value; }
        }

        [ConfigurationProperty("title", IsRequired = true)]
        [StringValidator(InvalidCharacters = "~!@#$%^&*()[]{}/;\"|\\")]
        public string Title
        {
            get { return (string)this["title"]; }
            set { this["title"] = value; }
        }
    }

    The class is pretty straightforward, but there are some important components to note.  First, it must inherit from System.Configuration.ConfigurationSection.  Next, as a convention I like to have a static settings member that is responsible for pulling out the section when the class is first referenced, and further to expose this collection via a static read-only property, Settings.  Note that the types of both of these are the type of my class, DemoSettings. The properties of the class, SessionAttendees and Title, should map to the attributes of the config element in the XML file.  The [ConfigurationProperty] attribute allows you to map the attribute name to the property name (thus using both XML standard naming conventions and C# naming conventions).  In addition, you can specify a default value to use if nothing is specified in the config file, and whether or not the setting must be provided (IsRequired).  If it is required, then it doesn't make sense to include a default value. 
Beyond defaults and required, you can specify more advanced validation rules for the configuration values using additional C# attributes, such as [IntegerValidator] and [StringValidator].  Using these, you can declaratively specify that your configuration values be in a given range, or omit certain forbidden characters, for instance.  Of course you can write your own custom validation attributes (a sketch follows below), and there are others specified in System.Configuration. Individual sections can also be loaded from separate files, using syntax like this:

    <DemoSettings configSource="demosettings.config" />

Summary Using a custom configuration section handler is not hard.  If your application or component requires configuration, I recommend creating a custom configuration handler dedicated to your app or component.  Doing so will reduce the clutter in appSettings, will provide you with strong typing and validation, and will make it much easier for other developers or system administrators to locate and understand the various configuration values that are necessary for a given application.
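    As a sketch of the custom validation attributes mentioned above (the even-number rule and all names here are invented for illustration), a custom validator pairs a ConfigurationValidatorBase with a ConfigurationValidatorAttribute:

    // Hypothetical example: restrict a config value to even integers.
    public class EvenNumberValidator : ConfigurationValidatorBase
    {
        public override bool CanValidate(Type type)
        {
            return type == typeof(int);
        }

        public override void Validate(object value)
        {
            if ((int)value % 2 != 0)
                throw new ArgumentException("Value must be an even number.");
        }
    }

    public class EvenNumberValidatorAttribute : ConfigurationValidatorAttribute
    {
        public override ConfigurationValidatorBase ValidatorInstance
        {
            get { return new EvenNumberValidator(); }
        }
    }

    // Usage on a section property, alongside [ConfigurationProperty]:
    // [EvenNumberValidator]
    // public int SessionAttendees { ... }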

    Read the article

  • BizTalk: Sample: Context routing and Throttling with orchestration

    - by Leonid Ganeline
    The sample demonstrates using an orchestration for throttling, and using context routing. Usually throttling is implemented at the host level (in BizTalk 2010 we can also use host instance level throttling). Here the throttling is implemented with an orchestration convoy that slows down the message flow from some customers. The sample implements a sort of quality-of-service agreement layer for different kinds of customers. The sample also demonstrates context routing between orchestrations. It has several advantages over content routing. For example, we don't have to create a property schema and promote properties on the schemas, and we don't have to change the message content to change routing. Use case:  The BizTalk application has a main processing orchestration that processes all input messages. The application usually works as an OLTP application. Input messages come in random order without peaks, a typical scenario for on-line users. But sometimes big batch payloads come in. These batches overload the processing orchestrations. All processes activated by on-line users after the payload go to the same queue and are processed only after the payload. The result is that on-line users can see a significant delay in processing. It can be minutes or hours, depending on the batch size. Requirements: On-line users' processing should work without delays. Big batches cannot disturb on-line users. There should be a higher priority for the on-line users and a lower priority for the batches. Design: The decision is to divide the message flow into two branches, one for on-line users and a second for batches. The batch branch feeds messages to the processing line with low priority, and the on-line users' branch with high priority. All messages are delivered by a high-speed receive port. The BTS.ReceivePortName context property is used for routing. The Router orchestration separates messages sent from on-line users from the batch messages. But the Router does not use the BizTalk-provided value of this property; the Router sets this value itself. The Router uses the content of the messages to decide if a message is from on-line users or from batches. The BTS.ReceivePortName message context property is changed accordingly; its value works as a recipient address, the "To" address for the next recipient orchestrations. Those next orchestrations are the BatchBottleneck and the MainProcess orchestrations. Messages with context equal to "ToBatch" are picked up by the BatchBottleneck orchestration. It is a unified convoy orchestration, and it throttles the message flow, delaying message delivery to the MainProcess orchestration. The BatchBottleneck orchestration changes the message context to "ToProcess" and sends messages one after another with a small delay in between. The delay can be configured in the BizTalk config file as:

    <appSettings>
      <add key="GLD_Tests_TwoWayRouting_BatchBottleneck_DelayMillisec" value="100"/>
    </appSettings>

    Of course, messages with context equal to "ToProcess" are picked up by the MainProcess orchestration.   NOTES: Filters with string values: in orchestrations (the first Receive shape in the orchestration) use string values WITH quotes; in send ports use string values WITHOUT quotes. Filters on the send ports are dynamic; we can change them at run-time. Filters on the orchestrations are static; we can change them only at design-time. 
To check the existence of a promoted property inside an orchestration, use the Expression shape with a construction like this:       if (BTS.ReceivePortName exists myMessage) { …; } It is not possible in the Message Assignment shape, because using the "if" statement inside Message Assignment is prohibited. Several predefined context properties can behave in specific ways. Say MessageTracking.OriginatingMessage or XMLNORM.DocumentSpecName: they require that some internal rules be applied to the format or usage of these properties. MessageTracking.* parameters require that you use tracking, and you can get unexpected run-time errors in some cases. My recommendation is: use a very limited set of the predefined context properties. To "attach" a new promoted property to a message, we have to use correlation. The correlation type should include this property. [Here is a good explanation by Saravana] The sample code is here [sorry, temporary troubles with CodePlex].
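    As an aside, the delay key shown above can be read with the standard .NET configuration API from any component running in the host process; a small sketch (the 100 ms fallback is an assumption):

    using System.Configuration;

    // Reads the throttling delay from the BizTalk service config file;
    // falls back to 100 ms if the key is missing or malformed.
    string raw = ConfigurationManager.AppSettings["GLD_Tests_TwoWayRouting_BatchBottleneck_DelayMillisec"];
    int delayMs;
    if (!int.TryParse(raw, out delayMs))
    {
        delayMs = 100;
    }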

    Read the article

  • Moving DataSets through BizTalk

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] Yuck. But sometimes you have to, so here are a couple of things to bear in mind: Schemas Point a codegen tool at a WCF endpoint which exposes a DataSet and it will generate an XSD which describes the DataSet like this:

    <xs:element minOccurs="0" name="GetDataSetResult" nillable="true">
      <xs:complexType>
        <xs:annotation>
          <xs:appinfo>
            <ActualType Name="DataSet"
                        Namespace="http://schemas.datacontract.org/2004/07/System.Data"
                        xmlns="http://schemas.microsoft.com/2003/10/Serialization/" />
          </xs:appinfo>
        </xs:annotation>
        <xs:sequence>
          <xs:element ref="xs:schema" />
          <xs:any />
        </xs:sequence>
      </xs:complexType>
    </xs:element>

    In a serialized instance, the element of type xs:schema contains a full schema which describes the structure of the DataSet – tables, columns etc. The second element, of type xs:any, contains the actual content of the DataSet, expressed as DiffGrams:

    <GetDataSetResult>
      <xs:schema id="NewDataSet" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
        <xs:element name="NewDataSet" msdata:IsDataSet="true" msdata:UseCurrentLocale="true">
          <xs:complexType>
            <xs:choice minOccurs="0" maxOccurs="unbounded">
              <xs:element name="Table1">
                <xs:complexType>
                  <xs:sequence>
                    <xs:element name="Id" type="xs:string" minOccurs="0" />
                    <xs:element name="Name" type="xs:string" minOccurs="0" />
                    <xs:element name="Date" type="xs:string" minOccurs="0" />
                  </xs:sequence>
                </xs:complexType>
              </xs:element>
            </xs:choice>
          </xs:complexType>
        </xs:element>
      </xs:schema>
      <diffgr:diffgram xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
        <NewDataSet xmlns="">
          <Table1 diffgr:id="Table11" msdata:rowOrder="0" diffgr:hasChanges="inserted">
            <Id>377fdf8d-cfd1-4975-a167-2ddb41265def</Id>
            <Name>157bc287-f09b-435f-a81f-2a3b23aff8c4</Name>
            <Date>a5d78d83-6c9a-46ca-8277-f2be8d4658bf</Date>
          </Table1>
        </NewDataSet>
      </diffgr:diffgram>
    </GetDataSetResult>

    Put the XSD into a BizTalk schema and it will fail to compile, giving you the error: The 'http://www.w3.org/2001/XMLSchema:schema' element is not declared. You should be able to work around that, but I've had no luck in BizTalk Server 2006 R2 – instead you can safely change that xs:schema element to be another xs:any type:

    <xs:element minOccurs="0" name="GetDataSetResult" nillable="true">
      <xs:complexType>
        <xs:sequence>
          <xs:any />
          <xs:any />
        </xs:sequence>
      </xs:complexType>
    </xs:element>

    (This snippet omits the annotation, but you can leave it in the schema). For an XML instance to pass validation through the schema, you'll also need to flag the xs:any elements so they can contain any namespace and skip validation:

    <xs:element minOccurs="0" name="GetDataSetResult" nillable="true">
      <xs:complexType>
        <xs:sequence>
          <xs:any namespace="##any" processContents="skip" />
          <xs:any namespace="##any" processContents="skip" />
        </xs:sequence>
      </xs:complexType>
    </xs:element>

    You should now have a compiling schema which can be successfully tested against a serialised DataSet. 
Transforms If you're mapping a DataSet element between schemas, you'll need to use the Mass Copy Functoid to populate the target node from the contents of both the xs:any type elements on the source node: This should give you a compiled map which you can test against a serialized instance. And if you have a .NET consumer on the other side of the mapped BizTalk output, it will correctly deserialize the response into a DataSet.
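    If you need a serialised instance to test the reworked schema against, one quick way is to let DataContractSerializer write a DataSet out for you; a sketch, with table and column names mirroring the example above (the row values and output file name are illustrative):

    using System.Data;
    using System.Runtime.Serialization;
    using System.Xml;

    var ds = new DataSet("NewDataSet");
    DataTable table = ds.Tables.Add("Table1");
    table.Columns.Add("Id", typeof(string));
    table.Columns.Add("Name", typeof(string));
    table.Columns.Add("Date", typeof(string));
    table.Rows.Add("1", "Test", "2010-01-01");

    // DataContractSerializer emits the inline xs:schema followed by the diffgram,
    // the same shape a WCF endpoint returns for a DataSet.
    var serializer = new DataContractSerializer(typeof(DataSet));
    using (XmlWriter writer = XmlWriter.Create("dataset-instance.xml"))
    {
        serializer.WriteObject(writer, ds);
    }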

    Read the article

  • Generically correcting data before save with Entity Framework

    - by koevoeter
    Been working with Entity Framework (.NET 4.0) for a week now for a data migration job and needed some code that generically corrects string values in the database. You probably also have seen things like empty strings instead of NULL or non-trimmed texts ("United States       ") in "old" databases, and you don't want to apply a correcting function on every column you migrate. Here's how I've done this (extending the partial class of my ObjectContext):

    public partial class MyDatacontext
    {
        partial void OnContextCreated()
        {
            SavingChanges += OnSavingChanges;
        }

        private void OnSavingChanges(object sender, EventArgs e)
        {
            foreach (var entity in GetPersistingEntities(sender))
            {
                foreach (var propertyInfo in GetStringProperties(entity))
                {
                    var value = (string)propertyInfo.GetValue(entity, null);

                    if (value == null)
                    {
                        continue;
                    }

                    if (value.Trim().Length == 0 && IsNullable(propertyInfo))
                    {
                        propertyInfo.SetValue(entity, null, null);
                    }
                    else if (value != value.Trim())
                    {
                        propertyInfo.SetValue(entity, value.Trim(), null);
                    }
                }
            }
        }

        private IEnumerable<object> GetPersistingEntities(object sender)
        {
            return ((ObjectContext)sender).ObjectStateManager
                .GetObjectStateEntries(EntityState.Added | EntityState.Modified)
                .Select(e => e.Entity);
        }

        private IEnumerable<PropertyInfo> GetStringProperties(object entity)
        {
            return entity.GetType().GetProperties()
                .Where(pi => pi.PropertyType == typeof(string));
        }

        private bool IsNullable(PropertyInfo propertyInfo)
        {
            return ((EdmScalarPropertyAttribute)propertyInfo
                .GetCustomAttributes(typeof(EdmScalarPropertyAttribute), false)
                .Single()).IsNullable;
        }
    }

    Obviously you can use similar code for other generic corrections.
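    A quick usage sketch to show the effect (the Customers set and Country property are invented for illustration):

    using (var context = new MyDatacontext())
    {
        // "Customers" and "Country" are hypothetical; any string property is corrected.
        var customer = context.Customers.First();
        customer.Country = "United States       "; // trailing spaces
        context.SaveChanges(); // OnSavingChanges trims it to "United States"
    }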

    Read the article

  • Creating SparseImages for Pivot

    - by John Conwell
    Learning how to programmatically make collections for Microsoft Live Labs Pivot has been a pretty interesting ride. There are very few examples out there, and the folks at MS Live Labs are often slow on any feedback.  But that is what Reflector is for, right? Well, I was creating these InfoCard images (similar to the Car images in the "New Cars" sample collection that MS created for Pivot), and wanted to put a tag cloud into the info card.  The problem was the size of the tag cloud might vary in order for all the tags to fit into the tag cloud (often times being bigger than the info card itself).  This was because of the varying word lengths and calculated font sizes. So, to fix this, I made the tag cloud its own separate image from the info card.  Then, I would create a sparse image out of the two images, where the tag cloud fit into a small section of the info card.  This would allow the user to see the info card, but then zoom into the tag cloud and see all the tags at a normal resolution.  Kind'a cool. But...I couldn't find one code example (not one!) of how to create a sparse image.  There is one page on the SeaDragon site (http://www.seadragon.com/developer/creating-content/deep-zoom-tools/) that goes over the API for creating images and collections, and it sparsely goes over how to create a sparse image, but unless you are familiar with the API already, the documentation doesn't help very much. The key is the Image.ViewportWidth and Image.ViewportOrigin properties of the image that is getting superimposed on the main image.  I'll walk through the code below.  I've set up a couple of Point structs to represent the parent and sub image sizes, as well as where on the parent I want to position the sub image.  Next, I create the parent image.  This is pretty straightforward.  Then I create the sub image.  Then I calculate several ratios: the height-to-width ratio of the sub image, the width ratio of the sub image to the parent image, the height ratio of the sub image to the parent image, then the X and Y coordinates on the parent image where I want the sub image to be placed, expressed as a ratio of the position to the parent image size. After all these ratios have been calculated, I use them to calculate the Image.ViewportWidth and Image.ViewportOrigin values, then pass the image objects into the SparseImageCreator and call Create.  The key thing that was really missing from the API documentation page is that when setting up your sub images, everything is expressed as a ratio in relation to the main parent image.  If I had known this, it would have saved me a lot of trial and error time.  And how did I figure this out?  Reflector of course!  There is a tool called Deep Zoom Composer that came from MS Live Labs which can create a sparse image.  I just dug around the tool's code until I found the method that creates sparse images.  But seriously...look at the API documentation from the SeaDragon site and look at the code below and tell me if the documentation would have helped you at all.  I don't think so!   
public static void WriteDeepZoomSparseImage(string mainImagePath, string subImagePath, string destination)
{
    Point parentImageSize = new Point(720, 420);
    Point subImageSize = new Point(490, 310);
    Point subImageLocation = new Point(196, 17);
    List<Image> images = new List<Image>();

    // create main image
    Image mainImage = new Image(mainImagePath);
    mainImage.Size = parentImageSize;
    images.Add(mainImage);

    // create sub image
    Image subImage = new Image(subImagePath);
    double hwRatio = subImageSize.X / subImageSize.Y;           // width-to-height ratio of the tag cloud
    double nodeWidth = subImageSize.X / parentImageSize.X;      // sub image width to parent image width ratio
    double nodeHeight = subImageSize.Y / parentImageSize.Y;     // sub image height to parent image height ratio
    double nodeX = subImageLocation.X / parentImageSize.X;      // x coordinate position on parent / width of parent
    double nodeY = subImageLocation.Y / parentImageSize.Y;      // y coordinate position on parent / height of parent

    subImage.ViewportWidth = (nodeWidth < double.Epsilon) ? 1.0 : (1.0 / nodeWidth);
    subImage.ViewportOrigin = new Point(
        (nodeWidth < double.Epsilon) ? -1.0 : (-nodeX / nodeWidth),
        (nodeHeight < double.Epsilon) ? -1.0 : ((-nodeY / nodeHeight) / hwRatio));
    images.Add(subImage);

    // create sparse image
    SparseImageCreator creator = new SparseImageCreator();
    creator.Create(images, destination);
}

    Read the article

  • Undeclared Scope in Rock Paper Scissors Simple Game

    - by Rianelle
    #include <iostream>
    #include <string>
    #include <cstdlib>
    #include <ctime>

    using namespace std;

    bool win;
    int winnings;
    int draws;
    int loses;
    string comChoice;
    string playerChoice;

    void winGame()
    {
        cout << "You won! Play again?" << endl;
        cout << "Type y/n" << endl;
        char x;
        cin >> x;
        if (x == 'y') {
            beginGame();
        }
        else if ('n') {
            cout << "Game Stopped." << endl;
            cout << "Number of Draws: " << draws << endl;
            cout << "Number of Loses: " << loses << endl;
            cout << "Number of Wins: " << winnings << endl;
            win = true;
        }
    }

    void drawGame()
    {
        ++draws;
        cout << "Draw! Try again" << endl;
        return;
    }

    void lose()
    {
        cout << "You lose! Try again?" << endl;
        cout << "Type y/n" << endl;
        char feedback;
        cin >> feedback;
        if (feedback == 'y') {
            beginGame();
        }
        else if ('n') {
            cout << "Game Stopped." << endl;
            cout << "Number of Draws: " << draws << endl;
            cout << "Number of Loses: " << loses << endl;
            cout << "Number of Wins: " << winnings << endl;
        }
    }

    void beginGame()
    {
        cout << "Welcome to the Rock, Paper and Scissors Game!" << endl;
        cout << "Let's begin. Type <rock, paper, scissors> for your choice!" << endl;
        cin >> playerChoice;
        srand(time(0));
        int randomizer = 1 + (rand() % 3);
        if (randomizer == 1) comChoice = "rock";
        if (randomizer == 2) comChoice = "paper";
        if (randomizer == 3) comChoice = "scissors";
        do {
            if (playerChoice == comChoice) {
                drawGame();
            }
            if (playerChoice == "rock" && comChoice == "paper") ++loses; lose();
            if (playerChoice == "rock" && comChoice == "scissors") ++winnings; winGame();
            if (playerChoice == "paper" && comChoice == "rock") ++winnings; winGame();
            if (playerChoice == "paper" && comChoice == "scissors") ++loses; lose();
            if (playerChoice == "scissors" && comChoice == "rock") ++loses; lose();
            if (playerChoice == "scissors" && comChoice == "paper") ++winnings; winGame();
        } while (win != true);
    }

    int main()
    {
        beginGame();
        return 0;
    }

    Read the article

  • Navigation in a #WP7 application with MVVM Light

    - by Laurent Bugnion
    In MVVM applications, it can be a bit of a challenge to send instructions to the view (for example a page) from a viewmodel. Thankfully, we have good tools at our disposal to help with that. In his excellent series "MVVM Light Toolkit soup to nuts", Jesse Liberty proposes one approach using the MVVM Light messaging infrastructure. While this works fine, I would like to show here another approach using what I call a "view service", i.e. an abstracted service that is invoked from the viewmodel, and implemented on the view. Multiple kinds of view services In fact, I use view services quite often, and even started standardizing them for the Windows Phone 7 applications I work on. If there is interest, I will be happy to show other such view services, for example: Animation services, responsible for starting/stopping animations on the view. Dialog service, in charge of displaying messages to the user and gathering feedback. Navigation service, in charge of navigating to a given page directly from the viewmodel. In this article, I will concentrate on the navigation service. The INavigationService interface In most WP7 apps, the navigation service is used in quite a straightforward way. We want to: Navigate to a given URI. Go back. Be notified when a navigation is taking place, and be able to cancel. The INavigationService interface is quite simple indeed:

    public interface INavigationService
    {
        event NavigatingCancelEventHandler Navigating;
        void NavigateTo(Uri pageUri);
        void GoBack();
    }

    Obviously, this interface can be extended if necessary, but in most of the apps I worked on, I found that this covers my needs. The NavigationService class It is possible to nicely pack the navigation service into its own class. To do this, we need to remember that all the PhoneApplicationPage instances use the same instance of the navigation service, exposed through their NavigationService property. In fact, in a WP7 application, it is the main frame (RootFrame, of type PhoneApplicationFrame) that is responsible for this task. So, our implementation of the NavigationService class can leverage this. First the class will grab the PhoneApplicationFrame and store a reference to it. Also, it registers a handler for the Navigating event, and forwards the event to the listening viewmodels (if any). Then, the NavigateTo and the GoBack methods are implemented. They are quite simple, because they are in fact just a gateway to the PhoneApplicationFrame. The whole class is as follows:

    public class NavigationService : INavigationService
    {
        private PhoneApplicationFrame _mainFrame;

        public event NavigatingCancelEventHandler Navigating;

        public void NavigateTo(Uri pageUri)
        {
            if (EnsureMainFrame())
            {
                _mainFrame.Navigate(pageUri);
            }
        }

        public void GoBack()
        {
            if (EnsureMainFrame() && _mainFrame.CanGoBack)
            {
                _mainFrame.GoBack();
            }
        }

        private bool EnsureMainFrame()
        {
            if (_mainFrame != null)
            {
                return true;
            }

            _mainFrame = Application.Current.RootVisual as PhoneApplicationFrame;

            if (_mainFrame != null)
            {
                // Could be null if the app runs inside a design tool
                _mainFrame.Navigating += (s, e) =>
                {
                    if (Navigating != null)
                    {
                        Navigating(s, e);
                    }
                };

                return true;
            }

            return false;
        }
    }

    Exposing URIs I find that it is a good practice to expose each page's URI as a constant. In MVVM Light applications, a good place to do that is the ViewModelLocator, which already acts like a central point of setup for the views and their viewmodels. 
Note that in some cases, it is necessary to expose the URL as a string, for instance when a query string needs to be passed to the view. So for example we could have: public static readonly Uri MainPageUri = new Uri("/MainPage.xaml", UriKind.Relative); public const string AnotherPageUrl = "/AnotherPage.xaml?param1={0}&param2={1}"; Creating and using the NavigationService Normally, we only need one instance of the NavigationService class. In cases where you use an IOC container, it is easy to simply register a singleton instance. For example, I am using a modified version of a super simple IOC container, and so I can register the navigation service as follows: SimpleIoc.Register<INavigationService, NavigationService>(); Then, it can be resolved where needed with: SimpleIoc.Resolve<INavigationService>(); Or (more frequently), I simply declare a parameter on the viewmodel constructor of type INavigationService and let the IOC container do its magic and inject the instance of the NavigationService when the viewmodel is created. On supported platforms (for example Silverlight 4), it is also possible to use MEF. Or, of course, we can simply instantiate the NavigationService in the ViewModelLocator, and pass this instance as a parameter of the viewmodels’ constructor, injected as a property, etc… Once the instance has been passed to the viewmodel, it can be used, for example with: NavigationService.NavigateTo(ViewModelLocator.ComparisonPageUri); Testing Thanks to the INavigationService interface, navigation can be mocked and tested when the viewmodel is put under unit test. Simply implement and inject a mock class, and assert that the methods are called as they should by the viewmodel. Conclusion As usual, there are multiple ways to code a solution answering your needs. I find that view services are a really neat way to delegate view-specific responsibilities such as animation, dialogs and of course navigation to other classes through an abstracted interface. In some cases, such as the NavigationService class exposed here, it is even possible to standardize the implementation and pack it in a class library for reuse. I hope that this sample is useful! Happy coding. Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn
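    As a sketch of that test setup (the mock below is illustrative, not part of the toolkit, and MainViewModel, GoToComparisonCommand and Assert stand in for your own viewmodel, command and test framework):

    public class MockNavigationService : INavigationService
    {
        public Uri LastNavigatedUri { get; private set; }
        public bool WentBack { get; private set; }

        public event NavigatingCancelEventHandler Navigating;

        public void NavigateTo(Uri pageUri) { LastNavigatedUri = pageUri; }
        public void GoBack() { WentBack = true; }
    }

    // In a unit test: inject the mock, trigger the command, assert the URI.
    var navigation = new MockNavigationService();
    var viewModel = new MainViewModel(navigation);     // hypothetical viewmodel
    viewModel.GoToComparisonCommand.Execute(null);     // hypothetical command
    Assert.AreEqual(ViewModelLocator.ComparisonPageUri, navigation.LastNavigatedUri);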

    Read the article

  • Shallow Copy vs DeepCopy in C#.NET

    Hope the example below helps to understand the difference. Please drop a comment if you have any doubts.

    using System;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;

    namespace ShallowCopyVsDeepCopy
    {
        class Program
        {
            static void Main(string[] args)
            {
                var e1 = new Emp { EmpNo = 10, EmpName = "Smith", Department = new Dep { DeptNo = 100, DeptName = "Finance" } };
                var e2 = e1.ShallowClone();
                e1.Department.DeptName = "Accounts";
                Console.WriteLine(e2.Department.DeptName);

                var e3 = new Emp { EmpNo = 10, EmpName = "Smith", Department = new Dep { DeptNo = 100, DeptName = "Finance" } };
                var e4 = e3.DeepClone();
                e3.Department.DeptName = "Accounts";
                Console.WriteLine(e4.Department.DeptName);
            }
        }

        [Serializable]
        class Dep
        {
            public int DeptNo { get; set; }
            public String DeptName { get; set; }
        }

        [Serializable]
        class Emp
        {
            public int EmpNo { get; set; }
            public String EmpName { get; set; }
            public Dep Department { get; set; }

            public Emp ShallowClone()
            {
                return (Emp)this.MemberwiseClone();
            }

            public Emp DeepClone()
            {
                MemoryStream ms = new MemoryStream();
                BinaryFormatter bf = new BinaryFormatter();
                bf.Serialize(ms, this);
                ms.Seek(0, SeekOrigin.Begin);
                object copy = bf.Deserialize(ms);
                ms.Close();
                return copy as Emp;
            }
        }
    }
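    For clarity, the output this program should print, assuming it runs as written:

    Accounts
    Finance

    The shallow clone shares the same Department instance as e1, so e1's change shows through e2; the deep clone serialized its own private copy, so e3's change does not affect e4.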

    Read the article

  • Concurrency Utilities for Java EE Early Draft (JSR 236)

    - by arungupta
    Concurrency Utilities for Java EE is being worked on as JSR 236 and has released an Early Draft. It provides concurrency capabilities to Java EE application components without compromising container integrity. Simple (common) and advanced concurrency patterns are easily supported without sacrificing usability. Using Java SE concurrency utilities such as the java.util.concurrent API, java.lang.Thread and java.util.Timer in a Java EE application component such as an EJB or Servlet is problematic since the container and server have no knowledge of these resources. JSR 236 enables concurrency largely by extending the Concurrency Utilities API developed under JSR-166. This also allows consistency between the Java SE and Java EE concurrency programming models. There are four main programming interfaces available: ManagedExecutorService, ManagedScheduledExecutorService, ContextService, ManagedThreadFactory. ManagedExecutorService is a managed version of java.util.concurrent.ExecutorService. The implementations of this interface are provided by the container and accessible using a JNDI reference:

    <resource-env-ref>
      <resource-env-ref-name>
        concurrent/BatchExecutor
      </resource-env-ref-name>
      <resource-env-ref-type>
        javax.enterprise.concurrent.ManagedExecutorService
      </resource-env-ref-type>
    </resource-env-ref>

    and available as:

    @Resource(name="concurrent/BatchExecutor")
    ManagedExecutorService executor;

    It's recommended to bind the JNDI references in the java:comp/env/concurrent subcontext. The asynchronous tasks that need to be executed need to implement the java.lang.Runnable or java.util.concurrent.Callable interface, as in:

    public class MyTask implements Runnable {
        public void run() {
            // business logic goes here
        }
    }

    OR

    public class MyTask2 implements Callable<Date> {
        public Date call() {
            // business logic goes here
        }
    }

    The task is then submitted to the executor using one of the submit methods that return a Future instance. The Future represents the result of the task and can also be used to check if the task is complete or wait for its completion.

    Future<String> future = executor.submit(new MyTask(), "done"); // second arg is the value the Future resolves to
    . . .
    String result = future.get();

    Another example to submit tasks is:

    class MyTask implements Callable<Long> { . . . }
    class MyTask2 implements Callable<Date> { . . . }

    ArrayList<Callable> tasks = new ArrayList<Callable>();
    tasks.add(new MyTask());
    tasks.add(new MyTask2());
    List<Future<Object>> result = executor.invokeAll(tasks);

    The ManagedExecutorService may be configured for different properties such as: Hung Task Threshold: time in milliseconds that a task can execute before it is considered hung. Pool Info: Core Size: number of threads to keep alive; Maximum Size: maximum number of threads allowed in the pool; Keep Alive: time to allow threads to remain idle when the number of threads exceeds Core Size; Work Queue Capacity: number of tasks that can be stored in the inbound buffer. Thread Use: whether the application intends to run short vs long-running tasks; accordingly pooled or daemon threads are picked. ManagedScheduledExecutorService adds delay and periodic task running capabilities to ManagedExecutorService. 
The implementations of this interface are provided by the container and accessible using a JNDI reference:

    <resource-env-ref>
      <resource-env-ref-name>
        concurrent/timedExecutor
      </resource-env-ref-name>
      <resource-env-ref-type>
        javax.enterprise.concurrent.ManagedScheduledExecutorService
      </resource-env-ref-type>
    </resource-env-ref>

    and available as:

    @Resource(name="concurrent/timedExecutor")
    ManagedScheduledExecutorService executor;

    And then the tasks are submitted using the submit, invokeXXX or scheduleXXX methods:

    ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS);

    This will create and execute a one-shot action that becomes enabled after a 5-second delay. More control is possible using one of the newly added methods:

    class MyTaskListener implements ManagedTaskListener {
        public void taskStarting(...) { . . . }
        public void taskSubmitted(...) { . . . }
        public void taskDone(...) { . . . }
        public void taskAborted(...) { . . . }
    }

    ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS, new MyTaskListener());

    Here, ManagedTaskListener is used to monitor the state of a task's Future. ManagedThreadFactory provides a method for creating threads for execution in a managed environment. A simple usage is:

    @Resource(name="concurrent/myThreadFactory")
    ManagedThreadFactory factory;
    . . .
    Thread thread = factory.newThread(new Runnable() { . . . });

    concurrent/myThreadFactory is a JNDI resource. There is a lot of interesting content in the Early Draft; download it and read it yourself. The implementation will be made available soon and will also be integrated in GlassFish 4. Some references for further exploring ... Javadoc Early Draft Specification concurrency-ee-spec.java.net [email protected]

    Read the article

  • Increasing efficiency of N-Body gravity simulation

    - by Postman
    I'm making a space exploration type game; it will have many planets and other objects that will all have realistic gravity. I currently have a system in place that works, but if the number of planets goes above 70, the FPS decreases at practically exponential rates. I'm making it in C# and XNA. My guess is that I should be able to do gravity calculations between 100 objects without this kind of strain, so clearly my method is not as efficient as it should be. I have two files, Gravity.cs and EntityEngine.cs. Gravity manages JUST the gravity calculations; EntityEngine creates an instance of Gravity and runs it, along with other entity related methods.

    EntityEngine.cs

    public void Update()
    {
        foreach (KeyValuePair<string, Entity> e in Entities)
        {
            e.Value.Update();
        }
        gravity.Update();
    }

    (Only relevant piece of code from EntityEngine, self explanatory. When an instance of Gravity is made in entityEngine, it passes itself (this) into it, so that gravity can have access to entityEngine.Entities (a dictionary of all planet objects))

    Gravity.cs

    namespace ExplorationEngine
    {
        public class Gravity
        {
            private EntityEngine entityEngine;
            private Vector2 Force;
            private Vector2 VecForce;
            private float distance;
            private float mult;

            public Gravity(EntityEngine e)
            {
                entityEngine = e;
            }

            public void Update()
            {
                // First loop
                foreach (KeyValuePair<string, Entity> e in entityEngine.Entities)
                {
                    // Reset the force vector
                    Force = new Vector2();

                    // Second loop
                    foreach (KeyValuePair<string, Entity> e2 in entityEngine.Entities)
                    {
                        // Make sure the second value is not the current value from the first loop
                        if (e2.Value != e.Value)
                        {
                            // Find the distance between the two objects. Because Fg = G * ((M1 * M2) / r^2), using Vector2.Distance() and then squaring it
                            // is pointless and inefficient, because Distance uses a sqrt and squaring the result simply cancels that sqrt.
                            distance = Vector2.DistanceSquared(e2.Value.Position, e.Value.Position);

                            // This makes sure that two planets do not attract each other if they are touching, completely unnecessary when I add collision.
                            // For now it just makes it so that the planets are not glitchy; performance is not significantly improved by removing this IF
                            if (Math.Sqrt(distance) > (e.Value.Texture.Width / 2 + e2.Value.Texture.Width / 2))
                            {
                                // Calculate the magnitude of Fg (I'm using my own gravitational constant (G) for the sake of time (I know it's 1 at the moment, but I've been changing it))
                                mult = 1.0f * ((e.Value.Mass * e2.Value.Mass) / distance);

                                // Calculate the direction of the force; simply subtracting the positions and normalizing works. This fixes diagonal vectors
                                // having a larger value, and basically makes VecForce a direction.
                                VecForce = e2.Value.Position - e.Value.Position;
                                VecForce.Normalize();

                                // Add the vector for each planet in the second loop to a force var.
                                Force = Vector2.Add(Force, VecForce * mult);
                                // I have tried Force += VecForce * mult, and have not noticed much of an increase in speed.
                            }
                        }
                    }

                    // Add that force to the first loop's planet's position (later on I'll instead add to acceleration, to account for inertia)
                    e.Value.Position += Force;
                }
            }
        }
    }

    I have used various tips (about gravity optimizing, not threading) from THIS question (that I made yesterday). I've made this gravity method (Gravity.Update) as efficient as I know how to make it. This O(N^2) algorithm still seems to be eating up all of my CPU power though. Here is a LINK (google drive, go to File download, keep .Exe with the content folder, you will need XNA Framework 4.0 Redist. 
if you don't already have it) to the current version of my game. Left click makes a planet, right click removes the last planet. Mouse moves the camera, scroll wheel zooms in and out. Watch the FPS and Planet Count to see what I mean about performance issues past 70 planets. (ALL 70 planets must be moving, I've had 100 stationary planets and only 5 or so moving ones while still having 300 fps, the issue arises when 70+ are moving around) After 70 planets are made, performance tanks exponentially. With < 70 planets, I get 330 fps (I have it capped at 300). At 90 planets, the FPS is about 2, more than that and it sticks around at 0 FPS. Strangely enough, when all planets are stationary, the FPS climbs back up to around 300, but as soon as something moves, it goes right back down to what it was, I have no systems in place to make this happen, it just does. I considered multithreading, but that previous question I asked taught me a thing or two, and I see now that that's not a viable option. I've also thought maybe I could do the calculations on my GPU instead, though I don't think it should be necessary. I also do not know how to do this, it is not a simple concept and I want to avoid it unless someone knows a really noob friendly simple way to do it that will work for an n-body gravity calculation. (I have an NVidia gtx 660) Lastly I've considered using a quadtree type system. (Barnes Hut simulation) I've been told (in the previous question) that this is a good method that is commonly used, and it seems logical and straightforward, however the implementation is way over my head and I haven't found a good tutorial for C# yet that explains it in a way I can understand, or uses code I can eventually figure out. So my question is this: How can I make my gravity method more efficient, allowing me to use more than 100 objects (I can render 1000 planets with constant 300+ FPS without gravity calculations), and if I can't do much to improve performance (including some kind of quadtree system), could I use my GPU to do the calculations?
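    As an aside for readers hitting the same wall: one cheap win over the loop above, before reaching for Barnes-Hut or the GPU, is to exploit Newton's third law so each pair is computed once and applied to both bodies; a sketch against the same entities (untested against the poster's project, and it skips the touching-planets check for brevity):

    // Sketch: roughly half the pairwise work via symmetric accumulation.
    var list = entityEngine.Entities.Values.ToList();
    var forces = new Vector2[list.Count];

    for (int i = 0; i < list.Count; i++)
    {
        for (int j = i + 1; j < list.Count; j++)
        {
            float distSq = Vector2.DistanceSquared(list[j].Position, list[i].Position);
            if (distSq < 0.0001f) continue; // avoid division by zero

            float magnitude = 1.0f * ((list[i].Mass * list[j].Mass) / distSq);
            Vector2 direction = list[j].Position - list[i].Position;
            direction.Normalize();

            forces[i] += direction * magnitude; // pull i toward j
            forces[j] -= direction * magnitude; // equal and opposite force on j
        }
    }

    for (int i = 0; i < list.Count; i++)
    {
        list[i].Position += forces[i];
    }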

    Read the article

< Previous Page | 655 656 657 658 659 660 661 662 663 664 665 666  | Next Page >