Search Results

Search found 11479 results on 460 pages for 'resource usage'.


  • Analysing and measuring the performance of a .NET application (survey results)

    - by Laila
    Back in December last year, I asked myself: could it be that .NET developers think that you need three days and a PhD to do performance profiling on their code? What if developers are shunning profilers because they perceive them as too complex to use? If so, then what method do they use to measure and analyse the performance of their .NET applications? Do they even care about performance? So, a few weeks ago, I decided to get a 1-minute survey up and running in the hopes that some good, hard data would clear the matter up once and for all. I posted the survey on Simple Talk and got help from a few people to promote it. The survey consisted of 3 simple questions: Amazingly, 533 developers took the time to respond - which means I had enough data to get representative results! So before I go any further, I would like to thank all of you who contributed, because I now have some pretty good answers to the troubling questions I was asking myself. To thank you properly, I thought I would share some of the results with you. First of all, application performance is indeed important to most of you. In fact, performance is an intrinsic part of the development cycle for a good 40% of you, which is much higher than I had anticipated, I have to admit. (I know, "Have a little faith Laila!") When asked what tool you use to measure and analyse application performance, I found that nearly half of the respondents use logging statements, a third use performance counters, and 70% of respondents use a profiler of some sort (a 3rd party performance profilers, the CLR profiler or the Visual Studio profiler). The importance attributed to logging statements did surprise me a little. I am still not sure why somebody would go to the trouble of manually instrumenting code in order to measure its performance, instead of just using a profiler. I personally find the process of annotating code, calculating times from log files, and relating it all back to your source terrifyingly laborious. Not to mention that you then need to remember to turn it all off later! Even when you have logging in place throughout all your code anyway, you still have a fair amount of potentially error-prone calculation to sift through the results; in addition, you'll only get method-level rather than line-level timings, and you won't get timings from any framework or library methods you don't have source for. To top it all, we all know that bottlenecks are rarely where you would expect them to be, so you could be wasting time looking for a performance problem in the wrong place. On the other hand, profilers do all the work for you: they automatically collect the CPU and wall-clock timings, and present the results from method timing all the way down to individual lines of code. Maybe I'm missing a trick. I would love to know about the types of scenarios where you actively prefer to use logging statements. Finally, while a third of the respondents didn't have a strong opinion about code performance profilers, those who had an opinion thought that they were mainly complex to use and time consuming. Three respondents in particular summarised this perfectly: "sometimes, they are rather complex to use, adding an additional time-sink to the process of trying to resolve the existing problem". "they are simple to use, but the results are hard to understand" "Complex to find the more advanced things, easy to find some low hanging fruit". 
These results confirmed my suspicions: Profilers are seen to be designed for more advanced users who can use them effectively and make sense of the results. I found yet more interesting information when I started comparing samples of "developers for whom performance is an important part of the dev cycle", with those "to whom performance is only looked at in times of crisis", and "developers to whom performance is not important, as long as the app works". See the three graphs below. Sample of developers to whom performance is an important part of the dev cycle: Sample of developers to whom performance is important only in times of crisis: Sample of developers to whom performance is not important, as long as the app works: As you can see, there is a strong correlation between the usage of a profiler and the importance attributed to performance: indeed, the more important performance is to a development team, the more likely they are to use a profiler. In addition, developers to whom performance is an important part of the dev cycle have a higher tendency to use a much wider range of methods for performance measurement and analysis. And, unsurprisingly, the less important performance is, the less varied the methods of measurement are. So all in all, to come back to my random questions: .NET developers do care about performance. Those who care the most use a wider range of performance measurement methods than those who care less. But overall, logging statements, performance counters and third party performance profilers are the performance measurement methods of choice for most developers. Finally, although most of you find code profilers complex to use, those of you who care the most about performance tend to use profilers more than those of you to whom performance is not so important.
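    To make the comparison concrete, here is a minimal sketch of the kind of hand-rolled instrumentation the logging approach implies, assuming a hypothetical ProcessOrders method is the code under investigation; a profiler would collect the same timings per line, including framework calls, without any of this scaffolding.

    using System;
    using System.Diagnostics;

    class TimingDemo
    {
        static void Main()
        {
            // Manual instrumentation: wrap the suspect code, log the elapsed
            // time, and remember to take it all out again later.
            Stopwatch stopwatch = Stopwatch.StartNew();

            ProcessOrders(); // hypothetical method under investigation

            stopwatch.Stop();
            Console.WriteLine("ProcessOrders took {0} ms", stopwatch.ElapsedMilliseconds);
        }

        static void ProcessOrders()
        {
            System.Threading.Thread.Sleep(100); // stand-in for real work
        }
    }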

    Read the article

  • SQL SERVER – The Story of a Lesser Known Startup Parameter in SQL Server – Guest Post by Balmukund Lakhani

    - by Pinal Dave
    This is a fantastic blog post from my dear friend Balmukund ( blog | twitter | facebook ). He presented a fantastic session at our last UG meeting and there were lots of requests from attendees that he blog about it. Well, here is the blog post about that same very popular UG session. Let us read the entire blog post in the voice of Balmukund himself.

    During my last session at the SQL Bangalore User Group (Facebook) meeting, I was lucky enough to deliver a session on SQL Server startup issues. The name of the session was “SQL Engine Starting Trouble – How to start?” From the feedback, I realized that one of the “not well known” startup parameters is “-m”. Okay, you might say “I know that this is used to start SQL in single user mode”. But what you might not know is that you can pass a string with -m which has a special meaning and use. I have used this parameter in my blog here, but it looks like not many of you have seen that. It happens most of the time that when we want to start SQL Server in single user mode, someone else makes a connection before we can. The only choice we have is to repeat the same process again till we succeed. Some smart DBAs may disable the remote network protocols (TCP/IP and Named Pipes) of the SQL instance and allow only local connections to SQL. Once the activity is complete, our dear smart DBA has to remember to re-enable the network protocols. Sometimes, it may be a local service or application getting a connection to SQL before we can. There is a better way to deal with it. Yes, you have guessed it correctly: the -m parameter followed by a string. Since I work with the SQL Product Support team, I may know a few more undocumented commands and parameters, but this is not undocumented stuff. It’s already documented in Books Online. So in this blog, I am going to show a demo of its usage. As the documentation says, “Do not use this option as a security feature.” So please read this blog as a knowledge enhancer and a troubleshooting aid, not as a security feature.

    On my laptop, I have a default instance of SQL Server 2012 and here is what we would see in Configuration Manager. Now, I would go ahead and stop the SQL service by selecting SQL Server (MSSQLServer) > Right Click > Stop. There are multiple ways to start SQL with a startup parameter.
    1) Use the Net Start command from a command prompt: Net Start MSSQLServer /mSQLCMD. The above command is the simplest way to add a startup parameter to SQL. This parameter is cleared once we stop and start SQL.
    2) Add the startup parameter via Configuration Manager. The steps are already listed here. We need to add -mSQLCMD. If we compare 1 and 2, it’s clear that with this method the parameter stays in effect until we modify the startup parameters and remove -m.
    3) Start the SQL service via the command line: SQLServr.exe -mSQLCMD -s<InstanceName>
    Wait, what does SQLCMD mean with /m? It’s the instruction to start SQL Server in single user mode and to allow only the application named SQLCMD. Any other application would fail with a “Login failed for user” error message. It is important to note that the string is case sensitive. This value should match the program_name column in sys.dm_exec_sessions. I have made a connection using SQLCMD and, as we can see, it comes through as upper case “SQLCMD”. If we want only Management Studio query windows to connect, then we need to give -m"Microsoft SQL Server Management Studio - Query" as the startup parameter. In the example below, I have given it as SQLCMd (lower case d at the end) and we can see that we are not able to connect to the SQL instance.
Above proves that parameter works as expected and it’s case sensitive. Error Log would show below information. How to get error log location? I have already blogged about it. Hope you have learned something new. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
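    As an illustration of the matching behaviour described above, here is a hedged C# sketch: if the instance has been started with -mSQLCMD, a client can only get in when the application name it reports is exactly "SQLCMD". The connection string is an assumption; adjust the server and security settings to your own environment.

    using System;
    using System.Data.SqlClient;

    class SingleUserModeDemo
    {
        static void Main()
        {
            // "Application Name" is what shows up in sys.dm_exec_sessions.program_name,
            // and it is what the -m"SQLCMD" filter is compared against (case sensitive).
            string connectionString =
                "Server=.;Database=master;Integrated Security=true;Application Name=SQLCMD";

            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(
                "SELECT program_name FROM sys.dm_exec_sessions WHERE session_id = @@SPID",
                connection))
            {
                connection.Open(); // would fail with "Login failed" if the name did not match
                Console.WriteLine(command.ExecuteScalar()); // prints: SQLCMD
            }
        }
    }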

    Read the article

  • Implementing an Interceptor Using NHibernate’s Built In Dynamic Proxy Generator

    - by Ricardo Peres
    NHibernate 3.2 came with an included proxy generator, which means there is no longer the need – or the possibility, for that matter – to choose Castle DynamicProxy, LinFu or Spring. This is actually a good thing, because it means one less assembly to deploy. Apparently, this generator was based, at least partially, on LinFu. As there are not many tutorials out there demonstrating its usage, here’s one, demonstrating one of the most requested features: implementing INotifyPropertyChanged. This interceptor, of course, will still feature all of NHibernate’s functionality that you are used to, such as lazy loading. We will start by implementing an NHibernate interceptor, by inheriting from the base class NHibernate.EmptyInterceptor. This class does not do anything by itself, but it allows us to plug in behavior by overriding some of its methods, in this case, Instantiate:

    public class NotifyPropertyChangedInterceptor : EmptyInterceptor
    {
        private ISession session = null;

        private static readonly ProxyFactory factory = new ProxyFactory();

        public override void SetSession(ISession session)
        {
            this.session = session;
            base.SetSession(session);
        }

        public override Object Instantiate(String clazz, EntityMode entityMode, Object id)
        {
            Type entityType = Type.GetType(clazz);
            IProxy proxy = factory.CreateProxy(entityType, new _NotifyPropertyChangedInterceptor(), typeof(INotifyPropertyChanged)) as IProxy;

            _NotifyPropertyChangedInterceptor interceptor = proxy.Interceptor as _NotifyPropertyChangedInterceptor;
            interceptor.Proxy = this.session.SessionFactory.GetClassMetadata(entityType).Instantiate(id, entityMode);

            this.session.SessionFactory.GetClassMetadata(entityType).SetIdentifier(proxy, id, entityMode);

            return (proxy);
        }
    }

    Then we need a class that implements the NHibernate dynamic proxy behavior; let’s place it inside our interceptor, because it will only need to be used there:

    class _NotifyPropertyChangedInterceptor : NHibernate.Proxy.DynamicProxy.IInterceptor
    {
        private PropertyChangedEventHandler changed = delegate { };

        public Object Proxy
        {
            get;
            set;
        }

        #region IInterceptor Members

        public Object Intercept(InvocationInfo info)
        {
            Boolean isSetter = info.TargetMethod.Name.StartsWith("set_") == true;
            Object result = null;

            if (info.TargetMethod.Name == "add_PropertyChanged")
            {
                PropertyChangedEventHandler propertyChangedEventHandler = info.Arguments[0] as PropertyChangedEventHandler;
                this.changed += propertyChangedEventHandler;
            }
            else if (info.TargetMethod.Name == "remove_PropertyChanged")
            {
                PropertyChangedEventHandler propertyChangedEventHandler = info.Arguments[0] as PropertyChangedEventHandler;
                this.changed -= propertyChangedEventHandler;
            }
            else
            {
                result = info.TargetMethod.Invoke(this.Proxy, info.Arguments);
            }

            if (isSetter == true)
            {
                String propertyName = info.TargetMethod.Name.Substring("set_".Length);
                this.changed(this.Proxy, new PropertyChangedEventArgs(propertyName));
            }

            return (result);
        }

        #endregion
    }

    What this does for every interceptable method (those that are either virtual or declared by INotifyPropertyChanged) is:
    - For the methods that come from the INotifyPropertyChanged interface, add_PropertyChanged and remove_PropertyChanged (yes, events are methods), we add an implementation that adds or removes the event handlers to the delegate which we declared as changed;
    - For all the others, we direct them to the place where they are actually implemented, which is the Proxy field;
    - If the call is setting a property, it fires the PropertyChanged event afterwards.
    In order to use this, we need to add the interceptor to the Configuration before building the ISessionFactory:

    using (ISessionFactory factory = cfg.SetInterceptor(new NotifyPropertyChangedInterceptor()).BuildSessionFactory())
    {
        using (ISession session = factory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            Customer customer = session.Get<Customer>(100); //some id
            INotifyPropertyChanged inpc = customer as INotifyPropertyChanged;
            inpc.PropertyChanged += delegate(Object sender, PropertyChangedEventArgs e)
            {
                //fired when a property changes
            };
            customer.Address = "some other address"; //will raise PropertyChanged
            customer.RecentOrders.ToList(); //will trigger the lazy loading
        }
    }

    Any problems, questions, do drop me a line!
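    One thing the snippets above take for granted is that the entity's members are public and virtual; otherwise the proxy generator has nothing to intercept. A minimal sketch of a compatible entity (the Customer and Order classes here are hypothetical, matching the names used in the usage example):

    using System;
    using System.Collections.Generic;

    // All members are virtual so the NHibernate proxy can intercept them
    // and raise PropertyChanged from the setters.
    public class Customer
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
        public virtual string Address { get; set; }
        public virtual ICollection<Order> RecentOrders { get; set; }
    }

    public class Order
    {
        public virtual int Id { get; set; }
        public virtual DateTime PlacedOn { get; set; }
    }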

    Read the article

  • Trash Destination Adapter

    The Trash Destination and this article came from early experiences of using SSIS and community feedback at the time. When developing a package it is very useful to have a destination adapter that does nothing but consume rows with no setup requirement. You often want run a package part way through development, or just add a path so you can set a Data Viewer. There are stock tasks that can be used, but with the Trash Destination all columns are treated as selected automatically (usage type of read-only), so the pipeline knows they are required. It is also obvious that this is for development or diagnostic purposes, and is clearly not a part of the functional design of the package. It is also ideal for just playing around and exploring concepts in SSIS, and is often used in conjunction with the Data Generator Source. Using these two components it is easy to setup a test of an expression in the Derived Column Transformation for example. The Data Generator Source provides some dummy data, and the Trash Destination allows you to anchor the output path and set a Data Viewer to examine the results. It can also be used when performance tuning packages. It is a consistent and known quantity that has no external influences, so it is ideal as a destination when breaking the data flow into sections to isolate a bottleneck. The adapter is really simple to use and requires no setup. Simply drop it onto the pipeline designer and use it to terminate your data flow path. Installation The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft’s recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally, for 2005/2008, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trash Destination transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service pack to your SQL Server servers and workstations. Downloads The Trash Destination is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed. Trash Destination for SQL Server 2005 Trash Destination for SQL Server 2008 Trash Destination for SQL Server 2012 Version History SQL Server 2012 Version 3.0.0.34 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012) SQL Server 2008 Version 2.0.0.33 - SQL Server 2008 release. Includes support for upgrade of 2005 packages. RTM compatible, previously February 2008 CTP. (4 Mar 2008) Version 2.0.0.31 - SQL Server 2008 November 2007 CTP. (14 Feb 2008) SQL Server 2005 Version 1.0.2.18 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. (12 Jun 2006) Version 1.0.1.1 - SQL Server 2005 IDW 15 June CTP. 
    Minor enhancements over v1.0.1.0. (11 Jun 2005) Version 1.0.1.0 - SQL Server 2005 IDW 14 April CTP. First Public Release. (30 May 2005)

    Troubleshooting
    Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try to use the component along the lines of The component could not be added to the Data Flow task. Please verify that this component is properly installed.  ... The data flow object "Konesans ..." is not installed correctly on this computer, this usually indicates that the internal cache of SSIS components needs to be updated. This is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. The full error message is shown below for reference:

    TITLE: Microsoft Visual Studio
    ------------------------------
    The component could not be added to the Data Flow task. Please verify that this component is properly installed.
    ------------------------------
    ADDITIONAL INFORMATION:
    The data flow object "Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=1.0.1.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc" is not installed correctly on this computer. (Microsoft.DataTransformationServices.Design)

    For 2005/2008, once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - How do I install a task or transform component? This is not necessary for SQL Server 2012 as the new SSIS toolbox automatically detects components. If you are still having issues then contact us, but please provide as much detail as possible about the error, as well as which version of the task you are using and details of the SSIS tools installed.

    Read the article

  • Who broke the build?

    - by Martin Hinshelwood
    I recently sent round a list of broken builds at SSW and asked for them to be fixed or deleted if they are not being used. My colleague Peter came back with a couple of questions, which I love, as it tells me that at least one person reads my email. I think first we need to answer a couple of other questions related to builds in general.

    Why do we want the build to pass?
    - Any developer can pick up a project and build it
    - Standards can be enforced
    - Constant quality is maintained
    - Problems in code are identified early

    What could a failed build signify?
    - Developers have not built and tested their code properly before checking in.
    - Something added depends on a local resource that is not under version control or does not exist on the target computer.
    - Developers are not writing tests to cover common problems.
    - There are not enough tests to cover problems.

    Now we know why, let's answer Peter's questions:

    Where is this list? (can we see it somehow)
    You can normally only see the builds listed for each project. But, you have a little application called “Build Notifications” on your computer. It is installed when you install Visual Studio 2010. Figure: Starting the build notification application on Windows 7. Once you have it open (it may disappear into your system tray) you should click “Options” and select all the projects you are involved in. This application only lists projects that have builds, so don’t worry if yours is not listed. This just means you are about to set up a build, right? I just selected ALL projects that have builds. Figure: All builds are listed here. In addition to seeing the list you will also get toast notifications of build failures.

    How can we get more info on what broke the build? (who is interesting too, to point the finger, but more important is what)
    The only thing worse than breaking the build is continuing to develop on a broken build! Figure: I have highlighted the users who either are bad for breaking the build, or very bad for not fixing it. To find out what is wrong with a build you need to open the build definition. You can open a web version by double clicking the build in the image above, or you can open it from “Team Explorer”. Just connect to your project and open out the “Builds” tree. Then open the build by double clicking on it. Figure: Opening a build is easy, just double click it and then open a build run from the list. Figure: Good example, the build and tests have passed. Figure: Bad example, there are 133 errors preventing POK from being built on the build server. For identifying failures see: Solution: Getting Silverlight to build on Team Build 2010 RC | Solution: Testing Web Services with MSTest on Team Build | Finding the problem on a partially succeeded build

    So, Peter asked about blame, let's have a look and see: Figure: The build has been broken for so long I have no idea when it was broken, but everyone on this list is to blame (I am there too). The rest of the history is lost in the sands of time; there is no way to tell when the build was originally broken, or by whom, or even if it ever worked in the first place. Builds should be protected by the team that uses them, and the only way to do that is to have the team own them. It is fine for me to go in and set up a build, but the ownership for a build should always reside with the person who broke it last.

    Conclusion
    This is an example of a pointless build.
Lets be honest, if you have a system like TFS in place and builds are constantly left broken, or not added to projects then your developers don’t yet understand the value. I have found that adding a Gated Check-in helps instil that understanding of value. If you prevent them from checking in without passing that basic quality gate of “your code builds on another computer” then it makes them look more closely at why they can’t check-in. I have had builds fail because one developer had a “d” drive, but the build server did not. That is what they are there to catch.   If you want to know what builds to create and why I wrote a post on “Do you know the minimum builds to create on any branch?”   Technorati Tags: TFS2010,Gated Check-in,Builds,Build Failure,Broken Build

    Read the article

  • Get your content off Blogger.com

    - by Daniel Moth
    Due to blogger.com deprecating FTP users I've decided to move my blog. When I think of the content of a blog, 4 items come to mind: blog posts, comments, binary files that the blog posts linked to (e.g. images, ZIP files) and the CSS+structure of the blog. 1. Binaries The binary files you used in your blog posts are sitting on your own web space, so really blogger.com is not involved with that. Nothing for you to do at this stage, I'll come back to these in another post. 2. CSS and structure In the best case this exists as a separate CSS file on your web space (so no action for now) or in a worst case, like me, your CSS is embedded with the HTML. In the latter case, simply navigate from you dashboard to "Template" then "Edit HTML" and copy paste the contents of the box. Save that locally in a txt file and we'll come back to that in another post. 3. Blog posts and Comments The blog posts and comments exist in all the HTML files on your own web space. Parsing HTML files to extract that can be painful, so it is easier to download the XML files from blogger's servers that contain all your blog posts and comments. 3.1 Single XML file, but incomplete The obvious thing to do is go into your dashboard "Settings" and under the "Basic" tab look at the top next to "Blog Tools". There is a link there to "Export blog" which downloads an XML file with both comments and posts. The problem with that is that it only contains 200 comments - if you have more than that, you will lose the surplus. Also, this XML file has a lot of noise, compared to the better solution described next. (note that a tool I will refer to in a future post deals with either kind of XML file) 3.2 Multiple XML files First you need to find your blog ID. In case you don't know what that is, navigate to the "Template" as described in section 2 above. You will find references to the blog id in the HTML there, but you can also see it as part of the URL in your browser: blogger.com/template-edit.g?blogID=YOUR_NUMERIC_ID. Mine is 7 digits. You can now navigate to these URLs to download the XML for your posts and comments respectively: blogger.com/feeds/YOUR_NUMERIC_ID/posts/default?max-results=500&start-index=1 blogger.com/feeds/YOUR_NUMERIC_ID/comments/default?max-results=200&start-index=1 Note that you can only get 500 posts at a time and only 200 comments at a time. To get more than that you have to change the URL and download the next batch. To get you started, to get the XML for the next 500 posts and next 200 comments respectively you’d have to use these URLs: blogger.com/feeds/YOUR_NUMERIC_ID/posts/default?max-results=500&start-index=501 blogger.com/feeds/YOUR_NUMERIC_ID/comments/default?max-results=200&start-index=201 ...and so on and so forth. Keep all the XML files in the same folder on your local machine (with nothing else in there). 4. Validating the XML aka editing older blog posts The XML files you just downloaded really contain HTML fragments inside for all your blog posts. If you are like me, your blog posts did not conform to XHTML so passing them to an XML parser (which is what we will want to do) will result in the XML parser choking. So the next step is to fix that. This can be no work at all for you, or a huge time sink or just a couple hours of pain (which was my case). The process I followed was to attempt to load the XML files using XmlDocument.Load and wait for the exception to be thrown from my code. The exception would point to the exact offending line and column which would help me fix the issue. 
Rather than fix it in the XML itself, I would go back and edit the offending blog post and fix it there - recommended! Then I'd repeat the cycle until the XML could be loaded in the XmlDocument. To give you an idea, some of the issues I encountered are: extra or missing quotes in img and href elements, direct usage of chevrons instead of encoding them as &lt;, missing closing tags, mismatched nested pairs of elements and capitalization of html elements. For a full list of things that may go wrong see this. 5. Opportunity for other changes I also found a few posts that did not have a category assigned so I fixed those too. I took the further opportunity to create new categories and tag some of my blog posts with that. Note that I did not remove/change categories of existing posts, but only added.   In an another post we'll see how to use the XML files you stored in the local folder… Comments about this post welcome at the original blog.
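    The validation loop described in section 4 might look roughly like the sketch below; the folder path is an assumption, and the XmlException line and position are what point you back at the offending blog post.

    using System;
    using System.IO;
    using System.Xml;

    class BloggerXmlChecker
    {
        static void Main()
        {
            // Folder holding only the posts/comments XML files downloaded earlier.
            string folder = @"C:\BloggerExport"; // assumption - use your own path

            foreach (string file in Directory.GetFiles(folder, "*.xml"))
            {
                try
                {
                    new XmlDocument().Load(file);
                    Console.WriteLine("OK      {0}", Path.GetFileName(file));
                }
                catch (XmlException ex)
                {
                    // Fix the original post, re-download the XML, run the check again.
                    Console.WriteLine("BROKEN  {0} (line {1}, position {2}): {3}",
                        Path.GetFileName(file), ex.LineNumber, ex.LinePosition, ex.Message);
                }
            }
        }
    }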

    Read the article

  • Puppet: Making Windows Awesome Since 2011

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2014/08/07/puppet-making-windows-awesome-since-2011.aspxPuppet was one of the first configuration management (CM) tools to support Windows, way back in 2011. It has the heaviest investment on Windows infrastructure with 1/3 of the platform client development staff being Windows folks.  It appears that Microsoft believed an end state configuration tool like Puppet was the way forward, so much so that they cloned Puppet’s DSL (domain-specific language) in many ways and are calling it PowerShell DSC. Puppet Labs is pushing the envelope on Windows. Here are several things to note: Puppet x64 Ruby support for Windows coming in v3.7.0. An awesome ACL module (with order, SIDs and very granular control of permissions it is best of any CM). A wealth of modules that work with Windows on the Forge (and more on GitHub). Documentation solely for Windows folks - https://docs.puppetlabs.com/windows. Some of the common learning points with Puppet on Windows user are noted in this recent blog post. Microsoft OpenTech supports Puppet. Azure has the ability to deploy a Puppet Master (http://puppetlabs.com/solutions/microsoft). At Microsoft //Build 2014 in the Day 2 Keynote Puppet Labs CEO Luke Kanies co-presented with Mark Russonivich (http://channel9.msdn.com/Events/Build/2014/KEY02  fast forward to 19:30)! Puppet has a Visual Studio Plugin! It can be overwhelming learning a new tool like Puppet at first, but Puppet Labs has some resources to help you on that path. Take a look at the Learning VM, which has a quest-based learning tool. For real-time questions, feel free to drop onto #puppet on freenode.net (yes, some folks still use IRC) with questions, and #puppet-dev with thoughts/feedback on the language itself. You can subscribe to puppet-users / puppet-dev mailing lists. There is also ask.puppetlabs.com for questions and Server Fault if you want to go to a Stack Exchange site. There are books written on learning Puppet. There are even Puppet User Groups (PUGs) and other community resources! Puppet does take some time to learn, but with anything you need to learn, you need to weigh the benefits versus the ramp up time. I learned NHibernate once, it had a very high ramp time back then but was the only game on the street. Puppet’s ramp up time is considerably less than that. The advantage is that you are learning a DSL, and it can apply to multiple platforms (Linux, Windows, OS X, etc.) with the same Puppet resource constructs. As you learn Puppet you may wonder why it has a DSL instead of just leveraging the language of Ruby (or maybe this is one of those things that keeps you up wondering at night). I like the DSL over a small layer on top of Ruby. It allows the Puppet language to be portable and go more places. It makes you think about the end state of what you want to achieve in a declarative sense instead of in an imperative sense. You may also find that right now Puppet doesn’t run manifests (scripts) in order of the way resources are specified. This is the number one learning point for most folks. As a long time consternation of some folks about Puppet, manifest ordering was not possible in the past. In fact it might be why some other CMs exist! As of 3.3.0, Puppet can do manifest ordering, and it will be the default in Puppet 4. http://puppetlabs.com/blog/introducing-manifest-ordered-resources You may have caught earlier that I mentioned PowerShell DSC. But what about DSC? Shouldn’t that be what Windows users want to choose? 
Other CMs are integrating with DSC, will Puppet follow suit and integrate with DSC? The biggest concern that I have with DSC is it’s lack of visibility in fine-grained reporting of changes (which Puppet has). The other is that it is a very young Microsoft product (pre version 3, you know what they say :) ). I tried getting it working in December and ran into some issues. I’m hoping that newer releases are there that actually work, it does have some promising capabilities, it just doesn’t quite come up to the standard of something that should be used in production. In contrast Puppet is almost a ten year old language with an active community! It’s very stable, and when trusting your business to configuration management, you want something that has been around awhile and has been proven. Give DSC another couple of releases and you might see more folks integrating with it. That said there may be a future with DSC integration. Portability and fine-grained reporting of configuration changes are reasons to take a closer look at Puppet on Windows. Yes, Puppet on Windows is here to stay and it’s continually getting better folks.

    Read the article

  • MVVM Light V4 preview 2 (BL0015) #mvvmlight

    - by Laurent Bugnion
    Over the past few weeks, I have worked hard on a few new features for MVVM Light V4. Here is a second early preview (consider this pre-alpha if you wish). The features are unit-tested, but I am now looking for feedback and there might be bugs! Bug correction: Messenger.CleanupList is now thread safe This was an annoying bug that is now corrected: In some circumstances, an exception could be thrown when the Messenger’s recipients list was cleaned up (i.e. the “dead” instances were removed). The method is called now and then and the exception was thrown apparently at random. In fact it was really a multi-threading issue, which is now corrected. Bug correction: AllowPartiallyTrustedCallers prevents EventToCommand to work This is a particularly annoying regression bug that was introduced in BL0014. In order to allow MVVM Light to work in XBAPs too, I added the AllowPartiallyTrustedCallers attribute to the assemblies. However, we just found out that this causes issues when using EventToCommand. In order to allow EventToCommand to continue working, I reverted to the previous state by removing the AllowPartiallyTrustedCallers attribute for now. I will work with my friends at Microsoft to try and find a solution. Stay tuned. Bug correction: XML documentation file is now generated in Release configuration The XML documentation file was not generated for the Release configuration. This was a simple flag in the project file that I had forgotten to set. This is corrected now. Applying EventToCommand to non-FrameworkElements This feature has been requested in order to be able to execute a command when a Storyboard is completed. I implemented this, but unfortunately found out that EventToCommand can only be added to Storyboards in Silverlight 3 and Silverlight 4, but not in WPF or in Windows Phone 7. This obviously limits the usefulness of this change, but I decided to publish it anyway, because it is pretty damn useful in Silverlight… Why not in WPF? In WPF, Storyboards added to a resource dictionary are frozen. This is a feature of WPF which allows to optimize certain objects for performance: By freezing them, it is a contract where we say “this object will not be modified anymore, so do your perf optimization on them without worrying too much”. Unfortunately, adding a Trigger (such as EventTrigger) to an object in resources does not work if this object is frozen… and unfortunately, there is no way to tell WPF not to freeze the Storyboard in the resources… so there is no way around that (at least none I can see. In Silverlight, objects are not frozen, so an EventTrigger can be added without problems. Why not in WP7? In Windows Phone 7, there is a totally different issue: Adding a Trigger can only be done to a FrameworkElement, which Storyboard is not. Here I think that we might see a change in a future version of the framework, so maybe this small trick will work in the future. Workaround? Since you cannot use the EventToCommand on a Storyboard in WPF and in WP7, the workaround is pretty obvious: Handle the Completed event in the code behind, and call the Command from there on the ViewModel. This object can be obtained by casting the DataContext to the ViewModel type. This means that the View needs to know about the ViewModel, but I never had issues with that anyway. New class: NotifyPropertyChanged Sometimes when you implement a model object (for example Customer), you would like to have it implement INotifyPropertyChanged, but without having all the frills of a ViewModelBase. 
    A new class named NotifyPropertyChanged allows you to do that. This class is a simple implementation of INotifyPropertyChanged (with all the overloads of RaisePropertyChanged that were implemented in BL0014). In fact, ViewModelBase inherits NotifyPropertyChanged.

    ViewModelBase does not implement IDisposable anymore
    The IDisposable interface and the Dispose method had already been marked obsolete in the ViewModelBase class in V3. Now they have been removed. Note: By this, I do not mean that IDisposable is a bad interface, or that it shouldn’t be used on viewmodels. On the contrary, I know that this interface is very useful in certain circumstances. However, I think that having it by default on every instance of ViewModelBase was sending the wrong message. This interface has a strong meaning in .NET: after Dispose has been executed, the instance should not be used anymore, and should be ready for garbage collection. What I really wanted to have on ViewModelBase was rather a simple cleanup method, something that can be executed now and then during runtime. This is fulfilled by the ICleanup interface and its Cleanup method. If your ViewModels need IDisposable, you can still use it! You will just have to implement the interface on the class itself, because it is not available on ViewModelBase anymore.

    What’s next?
    I have a couple of exciting new features implemented already, but they need more testing before they go live… Just stay tuned and by MIX11 (12-14 April 2011), we should see at least a major addition to the MVVM Light Toolkit, as well as another smaller feature which is pretty cool nonetheless. More about this later! Happy Coding, Laurent

    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn
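    As a sketch of the Storyboard workaround mentioned above: handle the Completed event in the view's code-behind and forward it to a command on the viewmodel. The view, viewmodel type and command name below are hypothetical (a WPF window is assumed); the pattern is the point.

    using System;
    using System.Windows;

    // Code-behind of a hypothetical view (MainView.xaml.cs).
    public partial class MainView : Window
    {
        public MainView()
        {
            InitializeComponent();
        }

        // Wired to the Storyboard's Completed event, since EventToCommand
        // cannot be attached to a Storyboard in WPF or Windows Phone 7.
        private void IntroStoryboard_Completed(object sender, EventArgs e)
        {
            // Assumes the viewmodel exposes an ICommand (e.g. a RelayCommand)
            // named IntroCompletedCommand.
            var viewModel = this.DataContext as MainViewModel;

            if (viewModel != null && viewModel.IntroCompletedCommand.CanExecute(null))
            {
                viewModel.IntroCompletedCommand.Execute(null);
            }
        }
    }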

    Read the article

  • Personal search – the future of search

    - by jamiet
    [Four months ago I wrote a meandering blog post on another blogging site entitled Personal search – the future of search. The points I made therein are becoming more relevant to what I'm reading about and hoping to get involved in in the future so I'm re-posting here to a wider audience to hopefully get some more feedback and guage reaction to it. This has been prompted by the book Pull by David Siegel that is forming my current holiday reading (recommended to me by a commenter on my previous post Interesting things – Twitter annotations and your phone as a web server) and in particular by Siegel's notion of us all in the future having a personal online data vault.] My one-time colleague Paul Dawson recently wrote an article called The Future of Search and in it he proposed some interesting ideas. Some choice quotes: The growth of Chinese search giant Baidu is an indicator that fully localised and tailored content and offerings have great traction with local audiences This trend is already driving an increase in the use of specialist searches … Look at how Farecast is now integrated into Bing for example, or how Flightstats is now integrated into Google. Search does not necessarily have to begin with a keyword, but could start instead with a click or a touch. Take a look at Retrievr. Start drawing a picture in the box and see what happens. This is certainly search without the need for typing in keywords search technology has advanced greatly in recent years. The recent launch of Microsoft Live Labs’ Pivot has given us a taste of what we can expect to see in the future This really got me thinking about where search might go in the future and as my mind wandered I realised that as the amount of data that we collect about ourselves increases so too will the need and the desire to search it. The amount of electronic data that exists about each and every person is increasing and in the near future I fully expect that we are going to be able to store personal data such as: A history of our location (in fact Google Latitude already offers this facility) Recordings of all our phone conversations Health information history (weight, blood pressure etc…) Energy usage Spending history What films we watch, what radio stations we listen to Voting history Of course, most of this stuff is already stored somewhere but crucially we don’t have easy access to it. My utilities supplier knows how much electricity I’m using but if I want to know for myself I have to go and dig through my statements (assuming I have kept them). Similarly my doctor probably has ready access to all of my health records, my bank knows exactly what I have spent my money on, my cable supplier knows what I watch on TV and my mobile phone supplier probably knows exactly where I am and where I’ve been for the past few years. Strange then that none of this electronic information is available to me in a way that I can really make use of it; after all, its MY information. Its MY data. I created it. That is set to change. As technologies mature and customers become more technically cognizant they will demand more access to the data that companies hold about them. The companies themselves will realise the benefit that they derive from giving users what they want and will embrace ways of providing it. 
As a result the amount of data that we store about ourselves is going to increase exponentially and the desire to search and derive value from that data is going to grow with it; we are about to enter the era of the “personal datastore” and we will want, and need, to search through it in order to make sense of it all. Its interesting then that today when we think of search we think of search engines and yet in these personal datastores we’re referring to data that search engines can’t touch because WE own it and we (hopefully) choose to keep it private. Someone, I know not who, is going to lead in this space by making it easy for us to search our data and retrieve information that we have either forgotten or maybe didn’t even know in the first place. We will learn new things about ourselves and about our habits; we will share these findings with whomever we choose; we will compare what we discover with others; we will collaborate for mutual benefit and, most of all, we will educate ourselves as to how to live our lives better. Search will be the means to that end, it will enable us to make sense of the wealth of information that we will collect day in day out. The future of search is personal, why would we be interested in anything else? @Jamiet Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Check Your Spelling, Grammar, and Style in Firefox and Chrome

    - by Matthew Guay
    Are you tired of making simple writing mistakes that get past your browser’s spell-check?  Here’s how you can get advanced grammar check and more in Firefox and Chrome with After the Deadline. Microsoft Word has spoiled us with grammar, syntax, and spell checking, but the default spell check in Firefox and Chrome still only does basic checks.  Even webapps like Google Docs don’t check more than basic spelling errors.  However, WordPress.com is an exception; it offers advanced spelling, grammar, and syntax checking with its After the Deadline proofing system.  This helps you keep from making embarrassing mistakes on your blog posts, and now, thanks to a couple free browser plugins, it can help you keep from making these mistakes in any website or webapp. After the Deadline in Google Chrome Add the After the Deadline extension (link below) to Chrome as usual. As soon as it’s installed, you’re ready to start improving your online writing.  To check spelling, grammar, and more, click the ABC button that you’ll now see at the bottom of most text boxes online. After a quick scan, grammar mistakes are highlighted in green, complex expressions and other syntax problems are highlighted in blue, and spelling mistakes are highlighted in red as would be expected.  Click on an underlined word to choose one of its recommended changes or ignore the suggestion. Or, if you want more explanation about what was wrong with that word or phrase, click Explain for more info. And, if you forget to run an After the Deadline scan before submitting a text entry, it will automatically check to make sure you still want to submit it.  Click Cancel to go back and check your writing first.   To change the After the Deadline settings, click its icon in the toolbar and select View Options.  Additionally, if you want to disable it on the site you’re on, you can click Disable on this site directly from the popup. From the settings page, you can choose extra things to check for such as double negatives and redundant phrases, as well as add sites and words to ignore. After the Deadline in Firefox Add the After the Deadline add-on to Firefox (link below) as normal. After the Deadline basically the same in Firefox as it does in Chrome.  Select the ABC icon in the lower right corner of textboxes to check them for problems, and After the Deadline will underline the problems as it did in Chrome.  To view a suggested change in Firefox, right-click on the underlined word and select the recommended change or ignore the suggestion. And, if you forget to check, you’ll see a friendly reminder asking if you’re sure you want to submit your text like it is. You can access the After the Deadline settings in Firefox from the menu bar.  Click Tools, then select AtD Preferences.  In Firefox, the settings are in a options dialog with three tabs, but it includes the same options as the Chrome settings page.  Here you can make After the Deadline as correction-happy as you like.   Conclusion The web has increasingly become an interactive place, and seldom does a day go by that we aren’t entering text in forms and comments that may stay online forever.  Even our insignificant tweets are being archived in the Library of Congress.  After the Deadline can help you make sure that your permanent internet record is as grammatically correct as possible.  Even though it doesn’t catch every problem, and even misses some spelling mistakes, it’s still a great help. 
    Links
    Download the After the Deadline extension for Google Chrome
    Download the After the Deadline add-on for Firefox

    Read the article

  • SQL SERVER – SQL in Sixty Seconds – 5 Videos from Joes 2 Pros Series – SQL Exam Prep Series 70-433

    - by pinaldave
    Joes 2 Pros SQL Server Learning series is indeed fun. The Joes 2 Pros series is written for beginners and for anyone who wants to build expertise in SQL Server programming and development from the fundamentals. At the beginning of the series, author Rick Morelan is not shy about explaining the simplest of concepts, such as how to open SQL Server Management Studio. Honestly, the book starts that basic, but as it progresses Rick discusses various advanced concepts, from query tuning to core architecture. This five part series is written with SQL Server Exam 70-433 in mind. Instead of just focusing on what will be in the exam, this series focuses on learning the important concepts thoroughly. This book in no way takes shortcuts in explaining any concept and, at times, will go beyond the topic at length. The best part is that all the books have many companion videos explaining the concepts. Every Wednesday I like to post a video which explains something in a quick few seconds. Today we will go over five videos which I posted in my earlier posts related to the Joes 2 Pros series.

    Introduction to XML Data Type Methods – SQL in Sixty Seconds #015
    The XML data type was first introduced with SQL Server 2005. This data type continues with SQL Server 2008, where expanded XML features are available, most notably the power of the XQuery language to analyze and query the values contained in your XML instance. There are five XML data type methods available in SQL Server 2008:
    - query() – Used to extract XML fragments from an XML data type.
    - value() – Used to extract a single value from an XML document.
    - exist() – Used to determine if a specified node exists. Returns 1 if yes and 0 if no.
    - modify() – Updates XML data in an XML data type.
    - nodes() – Shreds XML data into multiple rows (not covered in this blog post).
    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Error Actions – SQL in Sixty Seconds #014
    Most people believe that when SQL Server encounters an error of severity level 11 or higher the remaining SQL statements will not get executed. In addition, people also believe that if any error of severity level 11 or higher is hit inside an explicit transaction, then the whole statement will fail as a unit. While both of these beliefs are true 99% of the time, they are not true in all cases. It is these outlying cases that frequently cause unexpected results in your SQL code. To understand how to achieve consistent results you need to know the four ways SQL Error Actions can react to error severity levels 11-16:
    - Statement Termination – The statement with the procedure fails but the code keeps on running to the next statement. Transactions are not affected.
    - Scope Abortion – The current procedure, function or batch is aborted and the next calling scope keeps running. That is, if Stored Procedure A calls B and C, and B fails, then nothing in B runs but A continues to call C. @@Error is set but the procedure does not have a return value.
    - Batch Termination – The entire client call is terminated.
    - XACT_ABORT – (ON = The entire client call is terminated.) or (OFF = SQL Server will choose how to handle all errors.)
    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Basics of a Query Hint – SQL in Sixty Seconds #013
    Query hints specify that the indicated hints should be used throughout the query. Query hints affect all operators in the statement and are implemented using the OPTION clause. 
Cautionary Note: Because the SQL Server Query Optimizer typically selects the best execution plan for a query, it is highly recommended that hints be used as a last resort for experienced developers and database administrators to achieve the desired results. [Detailed Blog Post] | [Quiz with Answer] Introduction to Hierarchical Query – SQL in Sixty Seconds #012 A CTE can be thought of as a temporary result set and are similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. A CTE is generally considered to be more readable than a derived table and does not require the extra effort of declaring a Temp Table while providing the same benefits to the user. However; a CTE is more powerful than a derived table as it can also be self-referencing, or even referenced multiple times in the same query. A recursive CTE requires four elements in order to work properly: Anchor query (runs once and the results ‘seed’ the Recursive query) Recursive query (runs multiple times and is the criteria for the remaining results) UNION ALL statement to bind the Anchor and Recursive queries together. INNER JOIN statement to bind the Recursive query to the results of the CTE. [Detailed Blog Post] | [Quiz with Answer] Introduction to SQL Server Security – SQL in Sixty Seconds #011 Let’s get some basic definitions down first. Take the workplace example where “Tom” needs “Read” access to the “Financial Folder”. What are the Securable, Principal, and Permissions from that last sentence? A Securable is a resource that someone might want to access (like the Financial Folder). A Principal is anything that might want to gain access to the securable (like Tom). A Permission is the level of access a principal has to a securable (like Read). [Detailed Blog Post] | [Quiz with Answer] Please leave a comment explain which one was your favorite video as that will help me understand what works and what needs improvement. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video
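    As a small, hedged illustration of the first video's topic, the sketch below runs the value() and exist() methods against an XML variable from C#; the connection string and the sample XML are assumptions, and any SQL Server 2008 or later instance should do.

    using System;
    using System.Data.SqlClient;

    class XmlMethodsDemo
    {
        static void Main()
        {
            string connectionString = "Server=.;Database=tempdb;Integrated Security=true";

            // value() extracts a single typed scalar, exist() tests whether a node is present.
            const string sql = @"
                DECLARE @x XML = '<order id=""42""><item qty=""3"">Widget</item></order>';
                SELECT @x.value('(/order/@id)[1]', 'INT') AS OrderId,
                       @x.exist('/order/item')            AS HasItem;";

            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(sql, connection))
            {
                connection.Open();
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("OrderId = {0}, HasItem = {1}", reader[0], reader[1]);
                    }
                }
            }
        }
    }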

    Read the article

  • Big Data – Various Learning Resources – How to Start with Big Data? – Day 20 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned how to become a Data Scientist for Big Data. In this article we will go over various learning resources related to Big Data. In this series we have covered many of the most essential details about Big Data. At the beginning of this series, I encouraged readers to send me questions. One of the most popular questions is: “I want to learn more about Big Data. Where can I learn it?” This is indeed a great question, as there are plenty of resources out there to learn about Big Data and it is indeed difficult to select just one resource for learning Big Data. Hence I decided to write up a few of the most important resources related to Big Data.

    Learn from Pluralsight
    Pluralsight is a global leader in high-quality online training for hardcore developers. It has fantastic Big Data courses and I started to learn about Big Data with the help of Pluralsight. Here are a few of the courses which are directly related to Big Data:
    - Big Data: The Big Picture
    - Big Data Analytics with Tableau
    - NoSQL: The Big Picture
    - Understanding NoSQL
    - Data Analysis Fundamentals with Tableau
    I encourage all of you to start with these video courses, as they are fantastic fundamentals for learning Big Data.

    Learn from Apache
    The resources at Apache are the single most authentic learning resources. If you want to learn the fundamentals and go deep into every aspect of Big Data, I believe you must understand the various concepts in Apache’s library. I am pretty impressed with the documentation and I personally reference it every single day when I work with Big Data. I strongly encourage all of you to bookmark the following links for authentic Big Data learning.
    - Hadoop: The Apache Hadoop® project develops open-source software for reliable, scalable, distributed computing.
    - Ambari: A web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters which includes support for Hadoop HDFS, Hadoop MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig and Sqoop. Ambari also provides a dashboard for viewing cluster health such as heat maps and the ability to view MapReduce, Pig and Hive applications visually along with features to diagnose their performance characteristics in a user-friendly manner.
    - Avro: A data serialization system.
    - Cassandra: A scalable multi-master database with no single points of failure.
    - Chukwa: A data collection system for managing large distributed systems.
    - HBase: A scalable, distributed database that supports structured data storage for large tables.
    - Hive: A data warehouse infrastructure that provides data summarization and ad hoc querying.
    - Mahout: A scalable machine learning and data mining library.
    - Pig: A high-level data-flow language and execution framework for parallel computation.
    - ZooKeeper: A high-performance coordination service for distributed applications.

    Learn from Vendors
    One of the biggest issues with learning Big Data is setting up the environment. Every Big Data vendor has different environment requirements and there are lots of things required to set up a Big Data framework. Many users do not start with Big Data because they are afraid of the resources required to set up the framework, as well as the time commitment. Here Hortonworks has created a fantastic learning environment. They have created a Sandbox with everything one person needs to learn Big Data and have also provided excellent tutorials along with it. 
The Sandbox comes with a dozen hands-on tutorials that will guide you through the basics of Hadoop, and it also contains the Hortonworks Data Platform. I think Hortonworks did a fantastic job building this Sandbox and these tutorials. Though there are plenty of different Big Data vendors, I have decided to list only Hortonworks due to their unique setup. Please leave a comment if there are any other such platforms to learn Big Data; I will include them over here as well. Learn from Books There are indeed a few good books out there which one can refer to when learning Big Data. Here are a few good books which I have read. I will update the list as I learn more. Ethics of Big Data: Balancing Risk and Innovation Big Data for Dummies Head First Data Analysis: A Learner’s Guide to Big Numbers, Statistics, and Good Decisions If you search on Amazon there are millions of books, but I think the above three books are a great set and will give you great ideas about Big Data. Once you go through the above books, you will have a clear idea about the next step you should follow in this series. You will be capable enough to make the right decision for yourself. Tomorrow In tomorrow’s blog post we will wrap up this series on Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • concurrency::accelerator

    - by Daniel Moth
    Overview An accelerator represents a "target" on which C++ AMP code can execute and where data can reside. Typically (but not necessarily) an accelerator is a GPU device. Accelerators are represented in C++ AMP as objects of the accelerator class. For many scenarios, you do not need to obtain an accelerator object, since the runtime has a notion of a default accelerator, which is what it thinks is the best one in the system. Examples where you need to deal with accelerator objects are if you need to pick your own accelerator (based on your specific criteria), or if you need to use more than one accelerators from your app. Construction and operator usage You can query and obtain a std::vector of all the accelerators on your system, which the runtime discovers on startup. Beyond enumerating accelerators, you can also create one directly by passing to the constructor a system-wide unique path to a device if you know it (i.e. the “Device Instance Path” property for the device in Device Manager), e.g. accelerator acc(L"PCI\\VEN_1002&DEV_6898&SUBSYS_0B001002etc"); There are some predefined strings (for predefined accelerators) that you can pass to the accelerator constructor (and there are corresponding constants for those on the accelerator class itself, so you don’t have to hardcode them every time). Examples are the following: accelerator::default_accelerator represents the default accelerator that the C++ AMP runtime picks for you if you don’t pick one (the heuristics of how it picks one will be covered in a future post). Example: accelerator acc; accelerator::direct3d_ref represents the reference rasterizer emulator that simulates a direct3d device on the CPU (in a very slow manner). This emulator is available on systems with Visual Studio installed and is useful for debugging. More on debugging in general in future posts. Example: accelerator acc(accelerator::direct3d_ref); accelerator::direct3d_warp represents a target that I will cover in future blog posts. Example: accelerator acc(accelerator::direct3d_warp); accelerator::cpu_accelerator represents the CPU. In this first release the only use of this accelerator is for using the staging arrays technique that I'll cover separately. Example: accelerator acc(accelerator::cpu_accelerator); You can also create an accelerator by shallow copying another accelerator instance (via the corresponding constructor) or simply assigning it to another accelerator instance (via the operator overloading of =). Speaking of operator overloading, you can also compare (for equality and inequality) two accelerator objects between them to determine if they refer to the same underlying device. Querying accelerator characteristics Given an accelerator object, you can access its description, version, device path, size of dedicated memory in KB, whether it is some kind of emulator, whether it has a display attached, whether it supports double precision, and whether it was created with the debugging layer enabled for extensive error reporting. 
Below is example code that accesses some of the properties; in your real code you'd probably be checking one or more of them in order to pick an accelerator (or check that the default one is good enough for your specific workload): void inspect_accelerator(concurrency::accelerator acc) { std::wcout << "New accelerator: " << acc.description << std::endl; std::wcout << "is_debug = " << acc.is_debug << std::endl; std::wcout << "is_emulated = " << acc.is_emulated << std::endl; std::wcout << "dedicated_memory = " << acc.dedicated_memory << std::endl; std::wcout << "device_path = " << acc.device_path << std::endl; std::wcout << "has_display = " << acc.has_display << std::endl; std::wcout << "version = " << (acc.version >> 16) << '.' << (acc.version & 0xFFFF) << std::endl; } accelerator_view In my next blog post I'll cover a related class: accelerator_view. Suffice to say here that each accelerator may have from 1..n related accelerator_view objects. You can get the accelerator_view from an accelerator via the default_view property, or create new ones by invoking the create_view method that creates an accelerator_view object for you (by also accepting a queuing_mode enum value of deferred or immediate that we'll also explore in the next blog post). Comments about this post by Daniel Moth welcome at the original blog.
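As a companion to the code above, here is a minimal sketch of enumerating the accelerators yourself and choosing one by your own criteria; the selection rule here is purely illustrative and not from the original post:

    #include <amp.h>
    #include <vector>

    // Walk all accelerators the runtime discovered and prefer the first
    // non-emulated device with dedicated memory; otherwise fall back to the default.
    concurrency::accelerator pick_accelerator()
    {
        std::vector<concurrency::accelerator> all = concurrency::accelerator::get_all();
        for (concurrency::accelerator acc : all)
        {
            if (!acc.is_emulated && acc.dedicated_memory > 0)
                return acc;                    // first "real" GPU wins in this sketch
        }
        return concurrency::accelerator();     // default accelerator
    }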

    Read the article

  • ct.sym steals the ASM class

    - by Geertjan
    Some mild consternation on the Twittersphere yesterday. Marcus Lagergren not being able to find the ASM classes in JDK 8 in NetBeans IDE: And there's no such problem in Eclipse (and apparently in IntelliJ IDEA). Help, does NetBeans (despite being incredibly awesome) suck, after all? The truth of the matter is that there's something called "ct.sym" in the JDK. When javac is compiling code, it doesn't link against rt.jar. Instead, it uses a special symbol file lib/ct.sym with class stubs. Internal JDK classes are not put in that symbol file, since those are internal classes. You shouldn't want to use them, at all. However, what if you're Marcus Lagergren who DOES need these classes? I.e., he's working on the internal JDK classes and hence needs to have access to them. Fair enough that the general Java population can't access those classes, since they're internal implementation classes that could be changed anytime and one wouldn't want all unknown clients of those classes to start breaking once changes are made to the implementation, i.e., this is the rt.jar's internal class protection mechanism. But, again, we're now Marcus Lagergen and not the general Java population. For the solution, read Jan Lahoda, NetBeans Java Editor guru, here: https://netbeans.org/bugzilla/show_bug.cgi?id=186120 In particular, take note of this: AFAIK, the ct.sym is new in JDK6. It contains stubs for all classes that existed in JDK5 (for compatibility with existing programs that would use private JDK classes), but does not contain implementation classes that were introduced in JDK6 (only API classes). This is to prevent application developers to accidentally use JDK's private classes (as such applications would be unportable and may not run on future versions of JDK). Note that this is not really a NB thing - this is the behavior of javac from the JDK. I do not know about any way to disable this except deleting ct.sym or the option mentioned above. Regarding loading the classes: JVM uses two classpath's: classpath and bootclasspath. rt.jar is on the bootclasspath and has precedence over anything on the "custom" classpath, which is used by the application. The usual way to override classes on bootclasspath is to start the JVM with "-Xbootclasspath/p:" option, which prepends the given jars (and presumably also directories) to bootclasspath. Hence, let's take the first option, the simpler one, and simply delete the "ct.sym" file. Again, only because we need to work with those internal classes as developers of the JDK, not because we want to hack our way around "ct.sym", which would mean you'd not have portable code at the end of the day. Go to the JDK 8 lib folder and you'll find the file: Delete it. Start NetBeans IDE again, either on JDK 7 or JDK 8, doesn't make a difference for these purposes, create a new Java application (or use an existing one), make sure you have set the JDK above as the JDK of the application, and hey presto: The above obviously assumes you have a build of JDK 8 that actually includes the ASM package. And below you can see that not only are the classes found but my build succeeded, even though I'm using internal JDK classes. The yellow markings in the sidebar mean that the classes are imported but not used in the code, where normally, if I hadn't removed "ct.sym", I would have seen red error marking instead, and the code wouldn't have compiled. Note: I've tried setting "-XDignore.symbol.file" in "netbeans.conf" and in other places, but so far haven't got that to work. 
Simply deleting the "ct.sym" file (or back it up somewhere and put it back when needed) is quite clearly the most straightforward solution. Ultimately, if you want to be able to use those internal classes while still having portable code, do you know what you need to do? You need to create a JDK bug report stating that you need an internal class to be added to "ct.sym". Probably you'll get a motivation back stating WHY that internal class isn't supposed to be used externally. There must be a reason why those classes aren't available for external usage, otherwise they would have been added to "ct.sym". So, now the only remaining question is why the Eclipse compiler doesn't hide the internal JDK classes. Apparently the Eclipse compiler ignores the "ct.sym" file. In other words, at the end of the day, far from being a bug in NetBeans... we have now found a (pretty enormous, I reckon) bug in Eclipse. The Eclipse compiler does not protect you from using internal JDK classes and the code that you create in Eclipse may not work with future releases of the JDK, since the JDK team is simply going to be changing those classes that are not found in the "ct.sym" file while assuming (correctly, thanks to the presence of "ct.sym" mechanism) that no code in the world, other than JDK code, is tied to those classes.

    Read the article

  • Switching the layout in Orchard CMS

    - by Bertrand Le Roy
    The UI composition in Orchard is extremely flexible, thanks in no small part to the usage of dynamic Clay shapes. Every notable UI construct in Orchard is built as a shape that other parts of the system can then party on and modify any way they want. Case in point today: modifying the layout (which is a shape) on the fly to provide custom page structures for different parts of the site. This might actually end up being built-in Orchard 1.0 but for the moment it’s not in there. Plus, it’s quite interesting to see how it’s done. We are going to build a little extension that allows for specialized layouts in addition to the default layout.cshtml that Orchard understands out of the box. The extension will add the possibility to add the module name (or, in MVC terms, area name) to the template name, or module and controller names, or module, controller and action names. For example, the home page is served by the HomePage module, so with this extension you’ll be able to add an optional layout-homepage.cshtml file to your theme to specialize the look of the home page while leaving all other pages using the regular layout.cshtml. I decided to implement this sample as a theme with code. This way, the new overrides are only enabled as the theme is activated, which makes a lot of sense as this is going to be where you’ll be creating those additional layouts. The first thing I did was to create my own theme, derived from the default TheThemeMachine with this command: codegen theme CustomLayoutMachine /CreateProject:true /IncludeInSolution:true /BasedOn:TheThemeMachine Once that was done, I worked around a known bug and moved the new project from the Modules solution folder into Themes (the code was already physically in the right place, this is just about Visual Studio editing). The CreateProject flag in the command-line created a project file for us in the theme’s folder. This is only necessary if you want to run code outside of views from that theme. 
The code that we want to add is the following LayoutFilter.cs: using System.Linq; using System.Web.Mvc; using System.Web.Routing; using Orchard; using Orchard.Mvc.Filters; namespace CustomLayoutMachine.Filters { public class LayoutFilter : FilterProvider, IResultFilter { private readonly IWorkContextAccessor _wca; public LayoutFilter(IWorkContextAccessor wca) { _wca = wca; } public void OnResultExecuting(ResultExecutingContext filterContext) { var workContext = _wca.GetContext(); var routeValues = filterContext.RouteData.Values; workContext.Layout.Metadata.Alternates.Add( BuildShapeName(routeValues, "area")); workContext.Layout.Metadata.Alternates.Add( BuildShapeName(routeValues, "area", "controller")); workContext.Layout.Metadata.Alternates.Add( BuildShapeName(routeValues, "area", "controller", "action")); } public void OnResultExecuted(ResultExecutedContext filterContext) { } private static string BuildShapeName( RouteValueDictionary values, params string[] names) { return "Layout__" + string.Join("__", names.Select(s => ((string)values[s] ?? "").Replace(".", "_"))); } } } This filter is intercepting ResultExecuting, which is going to provide a context object out of which we can extract the route data. We are also injecting an IWorkContextAccessor dependency that will give us access to the current Layout object, so that we can add alternate shape names to its metadata. We are adding three possible shape names to the default, with different combinations of area, controller and action names. For example, a request to a blog post is going to be routed to the “Orchard.Blogs” module’s “BlogPost” controller’s “Item” action. Our filters will then add the following shape names to the default “Layout”: Layout__Orchard_Blogs Layout__Orchard_Blogs__BlogPost Layout__Orchard_Blogs__BlogPost__Item Those template names get mapped into the following file names by the system (assuming the Razor view engine): Layout-Orchard_Blogs.cshtml Layout-Orchard_Blogs-BlogPost.cshtml Layout-Orchard_Blogs-BlogPost-Item.cshtml This works for any module/controller/action of course, but in the sample I created Layout-HomePage.cshtml (a specific layout for the home page), Layout-Orchard_Blogs.cshtml (a layout for all the blog views) and Layout-Orchard_Blogs-BlogPost-Item.cshtml (a layout that is specific to blog posts). Of course, this is just an example, and this kind of dynamic extension of shapes that you didn’t even create in the first place is highly encouraged in Orchard. You don’t have to do it from a filter, we only did it this way because that was a good place where we could get the context that we needed. And of course, you can base your alternate shape names on something completely different from route values if you want. For example, you might want to create your own part that modifies the layout for a specific content item, or you might want to do it based on the raw URL (like it’s done in widget rules) or who knows what crazy custom rule. The point of all this is to show that extending or modifying shapes is easy, and the layout just happens to be a shape. In other words, you can do whatever you want. Ain’t that nice? The custom theme can be found here: Orchard.Theme.CustomLayoutMachine.1.0.nupkg Many thanks to Louis, who showed me how to do this.

    Read the article

  • Screen (command-line program) bug?

    - by VioletCrime
    fired up my Minecraft server again after about a year off. My server used to run 11.04, which has since been upgraded to 12.04; I lost my management scripts in the upgrade (thought I'd backed up the user's home directory), but whatever, I enjoy developing stuff like that anyways. However, this time around, I'm running into issues. I start the Minecraft server using a detached screen, however the script is unable to 'stuff' commands into the screen instance until I attach to the screen then detach again? Once I do that, I can stuff anything I want into the server's terminal using the -X option, until I stop/start the server again, then I have to reattach and detach in order to restore functionality? Here's the manager script: #!/bin/bash #Name of the screen housing the server (set with the -S flag at startup) SCREENNAME=minecraft1 #Name of the folder housing the Minecraft world FOLDERNAME=world1 startServer (){ if screen -list | grep "$SCREENNAME" > /dev/null then echo "Cannot start Minecraft server; it is already running!" else screen -dmS $SCREENNAME java -Xmx1024M -Xms1024M -jar minecraft_server.jar sleep 2 if screen -list | grep "$SCREENNAME" > /dev/null then echo "Minecraft server started; happy mining!" else echo "ERROR: Minecraft server failed to start!" fi fi; } stopServer (){ if screen -list | grep "$SCREENNAME" > /dev/null then echo "Server is running. Giving a 1-minute warning." screen -S $SCREENNAME -X stuff "/say Shutting down (halt) in one minute." screen -S $SCREENNAME -X stuff $'\015' sleep 45 screen -S $SCREENNAME -X stuff "/say Shutting down (halt) in 15 seconds." screen -S $SCREENNAME -X stuff $'\015' sleep 15 screen -S $SCREENNAME -X stuff "/stop" screen -S $SCREENNAME -X stuff $'\015' else echo "Server is not running; nothing to stop." fi; } stopServerNow (){ if screen -list | grep "$SCREENNAME" > /dev/null then echo "Server is running. Giving a 5-second warning." screen -S $SCREENNAME -X stuff "/say EMERGENCY SHUTDOWN! 5 seconds to halt." screen -S $SCREENNAME -X stuff $'\015' sleep 5 screen -S $SCREENNAME -X stuff "/stop" screen -S $SCREENNAME -X stuff $'\015' else echo "Server is not running; nothing to stop." fi; } restartServer (){ if screen -list | grep "$SCREENNAME" > /dev/null then echo "Server is running. Giving a 1-minute warning." screen -S $SCREENNAME -X stuff "/say Shutting down (restart) in one minute." screen -S $SCREENNAME -X stuff $'\015' sleep 45 screen -S $SCREENNAME -X stuff "/say Shutting down (restart) in 15 seconds." screen -S $SCREENNAME -X stuff $'\015' sleep 15 screen -S $SCREENNAME -X stuff "/stop" screen -S $SCREENNAME -X stuff $'\015' sleep 2 startServer else echo "Cannot restart server: it isn't running." fi; } #In order for this function to work, a directory 'backup/$FOLDERNAME' must exist in the same #directory that '$FOLDERNAME' resides backupWorld (){ if screen -list | grep "$SCREENNAME" > /dev/null then echo "Server is running. Giving a 1-minute warning." screen -S $SCREENNAME -X stuff "/say Shutting down (backup) in one minute." screen -S $SCREENNAME -X stuff $'\015' screen -S $SCREENNAME -X stuff "/say Server should be down for no more than a few seconds." screen -S $SCREENNAME -X stuff $'\015' sleep 45 screen -S $SCREENNAME -X stuff "/say Shutting down for backup in 15 seconds." screen -S $SCREENNAME -X stuff $'\015' sleep 15 screen -S $SCREENNAME -X stuff "/stop" screen -S $SCREENNAME -X stuff $'\015' fi sleep 2 if screen -list | grep $SCREENNAME > /dev/null then echo "Server is still running? Error." else cd .. 
tar -czvf backup0.tar.gz $FOLDERNAME mv backup0.tar.gz backup/$FOLDERNAME cd backup/$FOLDERNAME rm backup10.tar.gz mv backup9.tar.gz backup10.tar.gz mv backup8.tar.gz backup9.tar.gz mv backup7.tar.gz backup8.tar.gz mv backup6.tar.gz backup7.tar.gz mv backup5.tar.gz backup6.tar.gz mv backup4.tar.gz backup5.tar.gz mv backup3.tar.gz backup4.tar.gz mv backup2.tar.gz backup3.tar.gz mv backup1.tar.gz backup2.tar.gz mv backup0.tar.gz backup1.tar.gz cd ../../$FOLDERNAME screen -dmS $SCREENNAME java -Xmx1024M -Xms1024M -jar minecraft_server.jar; sleep 2 if screen -list | grep "$SCREENNAME" > /dev/null then echo "Minecraft server restarted; happy mining!" else echo "ERROR: Minecraft server failed to start!" fi fi; } printCommands (){ echo echo "$0 usage:" echo echo "Start : Starts the server on a detached screen." echo "Stop : Stop the server; includes a 1-minute warning." echo "StopNOW : Stops the server with only a 5-second warning." echo "Restart : Stops the server and starts the server again." echo "Backup : Stops the server (1 min), backs up the world, and restarts." echo "Help : Display this message." } #Forces case-insensitive string comparisons shopt -s nocasematch #Primary 'Switch' if [[ $1 = "start" ]] then startServer elif [[ $1 = "stop" ]] then stopServer elif [[ $1 = "stopnow" ]] then stopServerNow elif [[ $1 = "backup" ]] then backupWorld elif [[ $1 = "restart" ]] then restartServer else printCommands fi

    Read the article

  • web.xml not reloading in tomcat even after stop/start

    - by ajay
    This is in relation to:- http://stackoverflow.com/questions/2576514/basic-tomcat-servlet-error I changed my web.xml file, did ant compile , all, /etc/init.d/tomcat stop , start Even then my web.xml file in tomcat deployment is still unchanged. This is build.properties file:- app.name=hello catalina.home=/usr/local/tomcat manager.username=admin manager.password=admin This is my build.xml file. Is there something wrong with this:- <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- General purpose build script for web applications and web services, including enhanced support for deploying directly to a Tomcat 6 based server. This build script assumes that the source code of your web application is organized into the following subdirectories underneath the source code directory from which you execute the build script: docs Static documentation files to be copied to the "docs" subdirectory of your distribution. src Java source code (and associated resource files) to be compiled to the "WEB-INF/classes" subdirectory of your web applicaiton. web Static HTML, JSP, and other content (such as image files), including the WEB-INF subdirectory and its configuration file contents. $Id: build.xml.txt 562814 2007-08-05 03:52:04Z markt $ --> <!-- A "project" describes a set of targets that may be requested when Ant is executed. The "default" attribute defines the target which is executed if no specific target is requested, and the "basedir" attribute defines the current working directory from which Ant executes the requested task. This is normally set to the current working directory. --> <project name="My Project" default="compile" basedir="."> <!-- ===================== Property Definitions =========================== --> <!-- Each of the following properties are used in the build script. Values for these properties are set by the first place they are defined, from the following list: * Definitions on the "ant" command line (ant -Dfoo=bar compile). * Definitions from a "build.properties" file in the top level source directory of this application. * Definitions from a "build.properties" file in the developer's home directory. * Default definitions in this build.xml file. You will note below that property values can be composed based on the contents of previously defined properties. This is a powerful technique that helps you minimize the number of changes required when your development environment is modified. Note that property composition is allowed within "build.properties" files as well as in the "build.xml" script. 
--> <property file="build.properties"/> <property file="${user.home}/build.properties"/> <!-- ==================== File and Directory Names ======================== --> <!-- These properties generally define file and directory names (or paths) that affect where the build process stores its outputs. app.name Base name of this application, used to construct filenames and directories. Defaults to "myapp". app.path Context path to which this application should be deployed (defaults to "/" plus the value of the "app.name" property). app.version Version number of this iteration of the application. build.home The directory into which the "prepare" and "compile" targets will generate their output. Defaults to "build". catalina.home The directory in which you have installed a binary distribution of Tomcat 6. This will be used by the "deploy" target. dist.home The name of the base directory in which distribution files are created. Defaults to "dist". manager.password The login password of a user that is assigned the "manager" role (so that he or she can execute commands via the "/manager" web application) manager.url The URL of the "/manager" web application on the Tomcat installation to which we will deploy web applications and web services. manager.username The login username of a user that is assigned the "manager" role (so that he or she can execute commands via the "/manager" web application) --> <property name="app.name" value="myapp"/> <property name="app.path" value="/${app.name}"/> <property name="app.version" value="0.1-dev"/> <property name="build.home" value="${basedir}/build"/> <property name="catalina.home" value="../../../.."/> <!-- UPDATE THIS! --> <property name="dist.home" value="${basedir}/dist"/> <property name="docs.home" value="${basedir}/docs"/> <property name="manager.url" value="http://localhost:8080/manager"/> <property name="src.home" value="${basedir}/src"/> <property name="web.home" value="${basedir}/web"/> <!-- ==================== External Dependencies =========================== --> <!-- Use property values to define the locations of external JAR files on which your application will depend. In general, these values will be used for two purposes: * Inclusion on the classpath that is passed to the Javac compiler * Being copied into the "/WEB-INF/lib" directory during execution of the "deploy" target. Because we will automatically include all of the Java classes that Tomcat 6 exposes to web applications, we will not need to explicitly list any of those dependencies. You only need to worry about external dependencies for JAR files that you are going to include inside your "/WEB-INF/lib" directory. --> <!-- Dummy external dependency --> <!-- <property name="foo.jar" value="/path/to/foo.jar"/> --> <!-- ==================== Compilation Classpath =========================== --> <!-- Rather than relying on the CLASSPATH environment variable, Ant includes features that makes it easy to dynamically construct the classpath you need for each compilation. The example below constructs the compile classpath to include the servlet.jar file, as well as the other components that Tomcat makes available to web applications automatically, plus anything that you explicitly added. 
--> <path id="compile.classpath"> <!-- Include all JAR files that will be included in /WEB-INF/lib --> <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** --> <!-- <pathelement location="${foo.jar}"/> --> <!-- Include all elements that Tomcat exposes to applications --> <fileset dir="${catalina.home}/bin"> <include name="*.jar"/> </fileset> <pathelement location="${catalina.home}/lib"/> <fileset dir="${catalina.home}/lib"> <include name="*.jar"/> </fileset> </path> <!-- ================== Custom Ant Task Definitions ======================= --> <!-- These properties define custom tasks for the Ant build tool that interact with the "/manager" web application installed with Tomcat 6. Before they can be successfully utilized, you must perform the following steps: - Copy the file "lib/catalina-ant.jar" from your Tomcat 6 installation into the "lib" directory of your Ant installation. - Create a "build.properties" file in your application's top-level source directory (or your user login home directory) that defines appropriate values for the "manager.password", "manager.url", and "manager.username" properties described above. For more information about the Manager web application, and the functionality of these tasks, see <http://localhost:8080/tomcat-docs/manager-howto.html>. --> <taskdef resource="org/apache/catalina/ant/catalina.tasks" classpathref="compile.classpath"/> <!-- ==================== Compilation Control Options ==================== --> <!-- These properties control option settings on the Javac compiler when it is invoked using the <javac> task. compile.debug Should compilation include the debug option? compile.deprecation Should compilation include the deprecation option? compile.optimize Should compilation include the optimize option? --> <property name="compile.debug" value="true"/> <property name="compile.deprecation" value="false"/> <property name="compile.optimize" value="true"/> <!-- ==================== All Target ====================================== --> <!-- The "all" target is a shortcut for running the "clean" target followed by the "compile" target, to force a complete recompile. --> <target name="all" depends="clean,compile" description="Clean build and dist directories, then compile"/> <!-- ==================== Clean Target ==================================== --> <!-- The "clean" target deletes any previous "build" and "dist" directory, so that you can be ensured the application can be built from scratch. --> <target name="clean" description="Delete old build and dist directories"> <delete dir="${build.home}"/> <delete dir="${dist.home}"/> </target> <!-- ==================== Compile Target ================================== --> <!-- The "compile" target transforms source files (from your "src" directory) into object files in the appropriate location in the build directory. This example assumes that you will be including your classes in an unpacked directory hierarchy under "/WEB-INF/classes". 
--> <target name="compile" depends="prepare" description="Compile Java sources"> <!-- Compile Java classes as necessary --> <mkdir dir="${build.home}/WEB-INF/classes"/> <javac srcdir="${src.home}" destdir="${build.home}/WEB-INF/classes" debug="${compile.debug}" deprecation="${compile.deprecation}" optimize="${compile.optimize}"> <classpath refid="compile.classpath"/> </javac> <!-- Copy application resources --> <copy todir="${build.home}/WEB-INF/classes"> <fileset dir="${src.home}" excludes="**/*.java"/> </copy> </target> <!-- ==================== Dist Target ===================================== --> <!-- The "dist" target creates a binary distribution of your application in a directory structure ready to be archived in a tar.gz or zip file. Note that this target depends on two others: * "compile" so that the entire web application (including external dependencies) will have been assembled * "javadoc" so that the application Javadocs will have been created --> <target name="dist" depends="compile,javadoc" description="Create binary distribution"> <!-- Copy documentation subdirectories --> <mkdir dir="${dist.home}/docs"/> <copy todir="${dist.home}/docs"> <fileset dir="${docs.home}"/> </copy> <!-- Create application JAR file --> <jar jarfile="${dist.home}/${app.name}-${app.version}.war" basedir="${build.home}"/> <!-- Copy additional files to ${dist.home} as necessary --> </target> <!-- ==================== Install Target ================================== --> <!-- The "install" target tells the specified Tomcat 6 installation to dynamically install this web application and make it available for execution. It does *not* cause the existence of this web application to be remembered across Tomcat restarts; if you restart the server, you will need to re-install all this web application. If you have already installed this application, and simply want Tomcat to recognize that you have updated Java classes (or the web.xml file), use the "reload" target instead. NOTE: This target will only succeed if it is run from the same server that Tomcat is running on. NOTE: This is the logical opposite of the "remove" target. --> <target name="install" depends="compile" description="Install application to servlet container"> <deploy url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}" localWar="file://${build.home}"/> </target> <!-- ==================== Javadoc Target ================================== --> <!-- The "javadoc" target creates Javadoc API documentation for the Java classes included in your application. Normally, this is only required when preparing a distribution release, but is available as a separate target in case the developer wants to create Javadocs independently. --> <target name="javadoc" depends="compile" description="Create Javadoc API documentation"> <mkdir dir="${dist.home}/docs/api"/> <javadoc sourcepath="${src.home}" destdir="${dist.home}/docs/api" packagenames="*"> <classpath refid="compile.classpath"/> </javadoc> </target> <!-- ====================== List Target =================================== --> <!-- The "list" target asks the specified Tomcat 6 installation to list the currently running web applications, either loaded at startup time or installed dynamically. It is useful to determine whether or not the application you are currently developing has been installed. 
--> <target name="list" description="List installed applications on servlet container"> <list url="${manager.url}" username="${manager.username}" password="${manager.password}"/> </target> <!-- ==================== Prepare Target ================================== --> <!-- The "prepare" target is used to create the "build" destination directory, and copy the static contents of your web application to it. If you need to copy static files from external dependencies, you can customize the contents of this task. Normally, this task is executed indirectly when needed. --> <target name="prepare"> <!-- Create build directories as needed --> <mkdir dir="${build.home}"/> <mkdir dir="${build.home}/WEB-INF"/> <mkdir dir="${build.home}/WEB-INF/classes"/> <!-- Copy static content of this web application --> <copy todir="${build.home}"> <fileset dir="${web.home}"/> </copy> <!-- Copy external dependencies as required --> <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** --> <mkdir dir="${build.home}/WEB-INF/lib"/> <!-- <copy todir="${build.home}/WEB-INF/lib" file="${foo.jar}"/> --> <!-- Copy static files from external dependencies as needed --> <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** --> </target> <!-- ==================== Reload Target =================================== --> <!-- The "reload" signals the specified application Tomcat 6 to shut itself down and reload. This can be useful when the web application context is not reloadable and you have updated classes or property files in the /WEB-INF/classes directory or when you have added or updated jar files in the /WEB-INF/lib directory. NOTE: The /WEB-INF/web.xml web application configuration file is not reread on a reload. If you have made changes to your web.xml file you must stop then start the web application. --> <target name="reload" depends="compile" description="Reload application on servlet container"> <reload url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}"/> </target> <!-- ==================== Remove Target =================================== --> <!-- The "remove" target tells the specified Tomcat 6 installation to dynamically remove this web application from service. NOTE: This is the logical opposite of the "install" target. --> <target name="remove" description="Remove application on servlet container"> <undeploy url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}"/> </target> </project>

    Read the article

  • Driver error when using multiple shaders

    - by Jinxi
    I'm using 3 different shaders: a tessellation shader to use the tessellation feature of DirectX11 :) a regular shader to show how it would look without tessellation and a text shader to display debug-info such as FPS, model count etc. All of these shaders are initialized at the beginning. Using the keyboard, I can switch between the tessellation shader and regular shader to render the scene. Additionally, I also want to be able toggle the display of debug-info using the text shader. Since implementing the tessellation shader the text shader doesn't work anymore. When I activate the DebugText (rendered using the text-shader) my screens go black for a while, and Windows displays the following message: Display Driver stopped responding and has recovered This happens with either of the two shaders used to render the scene. Additionally: I can start the application using the regular shader to render the scene and then switch to the tessellation shader. If I try to switch back to the regular shader I get the same error as with the text shader. What am I doing wrong when switching between shaders? What am I doing wrong when displaying text at the same time? What file can I post to help you help me? :) thx P.S. I already checked if my keyinputs interrupt at the wrong time (during render or so..), but that seems to be ok Testing Procedure Regular Shader without text shader Add text shader to Regular Shader by keyinput (works now, I built the text shader back to only vertex and pixel shader) (somthing with the z buffer is stil wrong...) Remove text shader, then change shader to Tessellation Shader by key input Then if I add the Text Shader or switch back to the Regular Shader Switching/Render Shader Here the code snipet from the Renderer.cpp where I choose the Shader according to the boolean "m_useTessellationShader": if(m_useTessellationShader) { // Render the model using the tesselation shader ecResult = m_ShaderManager->renderTessellationShader(m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(), worldMatrix, viewMatrix, projectionMatrix, textures, texturecount, m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(), (D3DXVECTOR3)m_Camera->getPosition(), TESSELLATION_AMOUNT); } else { // todo: loaded model depends on distance to camera // Render the model using the light shader. ecResult = m_ShaderManager->renderShader(m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(), lod_level, textures, texturecount, m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(), worldMatrix, viewMatrix, projectionMatrix); } And here the code snipet from the Mesh.cpp where I choose the Typology according to the boolean "useTessellationShader": // RenderBuffers is called from the Render function. The purpose of this function is to set the vertex buffer and index buffer as active on the input assembler in the GPU. Once the GPU has an active vertex buffer it can then use the shader to render that buffer. void Mesh::renderBuffers(ID3D11DeviceContext* deviceContext, bool useTessellationShader) { unsigned int stride; unsigned int offset; // Set vertex buffer stride and offset. stride = sizeof(VertexType); offset = 0; // Set the vertex buffer to active in the input assembler so it can be rendered. deviceContext->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset); // Set the index buffer to active in the input assembler so it can be rendered. 
deviceContext->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, 0); // Check which Shader is used to set the appropriate Topology // Set the type of primitive that should be rendered from this vertex buffer, in this case triangles. if(useTessellationShader) { deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST); }else{ deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST); } return; } RenderShader Could there be a problem using sometimes only vertex and pixel shader and after switching using vertex, hull, domain and pixel shader? Here a little overview of my architecture: TextClass: uses font.vs and font.ps deviceContext-VSSetShader(m_vertexShader, NULL, 0); deviceContext-PSSetShader(m_pixelShader, NULL, 0); deviceContext-PSSetSamplers(0, 1, &m_sampleState); RegularShader: uses vertex.vs and pixel.ps deviceContext-VSSetShader(m_vertexShader, NULL, 0); deviceContext-PSSetShader(m_pixelShader, NULL, 0); deviceContext-PSSetSamplers(0, 1, &m_sampleState); TessellationShader: uses tessellation.vs, tessellation.hs, tessellation.ds, tessellation.ps deviceContext-VSSetShader(m_vertexShader, NULL, 0); deviceContext-HSSetShader(m_hullShader, NULL, 0); deviceContext-DSSetShader(m_domainShader, NULL, 0); deviceContext-PSSetShader(m_pixelShader, NULL, 0); deviceContext-PSSetSamplers(0, 1, &m_sampleState); ClearState I'd like to switch between 2 shaders and it seems they have different context parameters, right? In clearstate methode it says it resets following params to NULL: I found following in my Direct3D Class: depth-stencil state - m_deviceContext-OMSetDepthStencilState rasterizer state - m_deviceContext-RSSetState(m_rasterState); blend state - m_device-CreateBlendState viewports - m_deviceContext-RSSetViewports(1, &viewport); I found following in every Shader Class: input/output resource slots - deviceContext-PSSetShaderResources shaders - deviceContext-VSSetShader to - deviceContext-PSSetShader input layouts - device-CreateInputLayout sampler state - device-CreateSamplerState These two I didn't understand, where can I find them? predications - ? scissor rectangles - ? Do I need to store them all localy so I can switch between them, because it doesn't feel right to reinitialize the Direct3d and the Shaders by every switch (key input)?!
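One thing the post does not show, and which is worth checking for this kind of symptom (an assumption on my part, not a confirmed fix for this particular crash): when switching from the tessellation shader back to the plain vertex/pixel shaders, or to the text shader, the hull and domain stages stay bound unless they are explicitly cleared. A sketch of resetting those stages before a non-tessellated draw, where 'deviceContext' is the ID3D11DeviceContext* already used above:

    #include <d3d11.h>

    // Unbind the tessellation-only stages and restore a plain triangle-list topology,
    // so stale hull/domain state cannot leak into draws that do not expect it.
    void resetToNonTessellatedPipeline(ID3D11DeviceContext* deviceContext)
    {
        deviceContext->HSSetShader(NULL, NULL, 0);   // clear hull shader stage
        deviceContext->DSSetShader(NULL, NULL, 0);   // clear domain shader stage
        deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    }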

    Read the article

  • Big Data – Buzz Words: What is Hadoop – Day 6 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned what is NoSQL. In this article we will take a quick look at one of the four most important buzz words that go around Big Data – Hadoop. What is Hadoop? Apache Hadoop is an open-source, free, Java-based software framework that offers a powerful distributed platform to store and manage Big Data. It is licensed under an Apache V2 license. It runs applications on large clusters of commodity hardware and it processes thousands of terabytes of data on thousands of nodes. Hadoop is inspired by Google’s MapReduce and Google File System (GFS) papers. The major advantage of the Hadoop framework is that it provides reliability and high availability. What are the core components of Hadoop? There are two major components of the Hadoop framework, and each of them performs an essential task. Hadoop MapReduce is the method to split a larger data problem into smaller chunks and distribute them to many different commodity servers. Each server has its own set of resources and processes its chunk locally. Once the commodity servers have processed the data, they send the results back collectively to the main server. This is how we process large data effectively and efficiently. (We will understand this in tomorrow’s blog post; a small word-count sketch also appears at the end of this excerpt.) Hadoop Distributed File System (HDFS) is a virtual file system. There is a big difference between HDFS and any other file system. When we move a file onto HDFS, it is automatically split into many small pieces. These small chunks of the file are replicated and stored on other servers (usually 3) for fault tolerance and high availability. (We will understand this in the day after tomorrow’s blog post.) Besides the above two core components, the Hadoop project also contains the following modules as well. Hadoop Common: Common utilities for the other Hadoop modules Hadoop Yarn: A framework for job scheduling and cluster resource management There are a few other projects (like Pig, Hive) related to Hadoop as well, which we will gradually explore in later blog posts. A Multi-node Hadoop Cluster Architecture Now let us quickly see the architecture of a multi-node Hadoop cluster. A small Hadoop cluster includes a single master node and multiple worker or slave nodes. As discussed earlier, the entire cluster contains two layers: one is the MapReduce layer and the other is the HDFS layer. Each of these layers has its own relevant components. The master node consists of a JobTracker, TaskTracker, NameNode and DataNode. A slave or worker node consists of a DataNode and TaskTracker. It is also possible that a slave or worker node is only a data node or only a compute node; as a matter of fact, that is a key feature of Hadoop. In this introductory blog post we will stop here while describing the architecture of Hadoop. In a future blog post of this series we will explore the various components of the Hadoop architecture in detail. Why Use Hadoop? There are many advantages of using Hadoop. Let me quickly list them over here: Robust and Scalable – We can add new nodes as needed, as well as modify them. Affordable and Cost Effective – We do not need any special hardware for running Hadoop. We can just use commodity servers. Adaptive and Flexible – Hadoop is built keeping in mind that it will handle structured and unstructured data. Highly Available and Fault Tolerant – When a node fails, the Hadoop framework automatically fails over to another node. Why is Hadoop named Hadoop? 
In 2005, Hadoop was created by Doug Cutting and Mike Cafarella while working at Yahoo. Doug Cutting named Hadoop after his son’s toy elephant. Tomorrow In tomorrow’s blog post we will discuss the buzz word – MapReduce. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
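To make the MapReduce idea from this excerpt concrete, here is the canonical word-count sketch using Hadoop's Java MapReduce API: the mapper runs independently on each chunk (split) of the input, and the reducer combines the per-chunk results for each key. This is a generic illustration, not code from the original series.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Word count: mappers emit (word, 1) for their local chunk of the input;
    // the reducer then sums the counts for each word across all chunks.
    public class WordCountSketch {

        public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (token.isEmpty()) continue;
                    word.set(token);
                    context.write(word, ONE);     // emit (word, 1) for every token in this chunk
                }
            }
        }

        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) sum += v.get();  // combine counts from all chunks
                context.write(key, new IntWritable(sum));
            }
        }
    }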

    Read the article

  • Windows 8 Live Accounts and the actual Windows Account

    - by Rick Strahl
    As if Windows Security wasn't confusing enough, in Windows 8 we get thrown yet another curve ball with Windows Live accounts to logon. When I set up my Windows 8 machine I originally set it up with a 'real', non-live account that I always use on my Windows machines. I did this mainly so I have a matching account for resources around my home and intranet network so I could log on to network resources properly. At some point later I decided to set up Windows Live security just to see how changes things. Windows wants you to use Windows Live Windows 8 logins are required in order for the Windows RT account info to work. Not that I care - since installing Windows 8 I've maybe spent 10 minutes with Windows RT because - well it's pretty freaking sucky on the desktop. From shitty apps to mis-managed screen real estate I can't say that there's anything compelling there to date, but then I haven't looked that hard either. Anyway… I set up the Windows Live account to see if that changes things. It does - I do get all my live logins to work from Live Account so that Twitter and Facebook posts and pictures and calendars all show up on live tiles on the start screen and in the actual apps. That's nice-ish, but hardly that exciting given that all of the apps tied to those live tiles are average at best. And it would have been nice if all of this could be done without being forced into running with a Windows Live User Account - this all feels like strong-arming you into moving into Microsofts walled garden… and that's probably what it's meant to do. Who am I? The real problem to me though is that these Windows Live and raw Windows User accounts are a bit unpredictable especially when it comes to developer information about the account and which credentials to use. So for example Windows reports folder security like this: Notice it's showing my Windows Live account. Now if I go to Edit and try to add my Windows user account (rstrahl) it'll just automatically show up as the live account. On the other hand though the underlying system sees everything as my real Windows account. After I switched to a Windows Live login account and I have to login to Windows with my Live account, what do you suppose this returns?Console.WriteLine(Environment.UserName); It returns my raw Windows user account (rstrahl). All my permissions, all my actual settings and the desktop console altogether run under that account. If I look in TaskManager (or Process Explorer for me) I see: Everything running on the desktop shell with my login running under my Windows user account. I suppose it makes sense, but where is that association happening? When I switched to a Windows Live account, nowhere did I associate my real account with the Live account - it just happened. And looking through the account configuration dialogs I can't find any reference to the raw Windows account. Other than switching back I see no mention anywhere of the raw Windows account - everything refers to the Live account. Right then, clear as potato soup! So this is who you really are! The problem is that in some situations this schizophrenic account behavior gets a bit weird. Today I was running a local Web application in IIS that uses Windows Authentication - I tried to log-in with my real Windows account login because that's what I'm used to using with WINDOWS freaking Authentication through IIS. But… it failed. I checked my IIS settings, my apps login settings and I just could not for the life of me get into the site with my Windows username. 
That is until I finally realized that I should try using my Windows Live credentials instead. And that worked. So now in this Windows Authentication dialog I had to type in my Live ID and password, which is - just weird. Then in IIS if I look at a Trace page (or in my case my app's Status page) I see that the logged on account is - my Windows user account. What's really annoying about this is that in some places it uses the live account in other places it uses my Windows account. If I remote desktop into my Web server online - I have to use the local authentication dialog but I have to put in my real Windows credentials not the Live account. Oh yes, it's all so terribly intuitive and logical… So in summary, when you log on with a Live account you are actually mapped to an underlying Windows user. In any application if you check the user name it'll be the underlying user account (not sure what happens in a Windows RT app or even what mechanism is used there to get the user name info). When logging on to local machine resource with user name and password you have to use your Live IDs even if the permissions on the resources are mapped to your underlying Windows account. Easy enough I suppose, but still not exactly intuitive behavior… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows.
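A tiny sketch of the "what does an application actually see" point above: both of the calls below report the underlying Windows account rather than the Live ID typed at the logon screen. This is a generic illustration with a hypothetical account name, not code from the original post.

    using System;
    using System.Security.Principal;

    class WhoAmI
    {
        static void Main()
        {
            // Both report the mapped local Windows account, not the Live ID.
            Console.WriteLine(Environment.UserName);               // e.g. "rstrahl"
            Console.WriteLine(WindowsIdentity.GetCurrent().Name);  // e.g. "MACHINE\rstrahl"
        }
    }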

    Read the article

  • Superpower Your Touchpad Computer with Scrybe

    - by Matthew Guay
    Are you looking for a way to help your Touchpad computer make you more productive?  Here’s a quick look at Scrybe, a new application from Synaptics that lets you superpower it. Touchpad devices have become increasingly more interesting as they’ve included support for multi-touch gestures.  Scrybe takes it to the next level and lets you use your touchpad as an application launcher.  You can launch any application, website, or complete many common commands on your computer with a simple gesture.  Scrybe works with most modern Synaptics touchpads, which are standard on most laptops and netbooks.  It is optimized for newer multi-touch touchpads, but can also work with standard single-touch touchpads.  It works on Windows 7, Vista, and XP, so chances are it will work with your laptop or netbook. Get Started With Scrybe Head over to the Scrybe website and download the latest version (link below).  You are asked to enter your email address, name, and information about your computer…but you actually only have to enter your email address.  Click Download when finished. Run the installer when it’s download.  It will automatically download the latest Synaptics driver for your touchpad and any other components needed for Scrybe.  Note that the Scrybe installer will ask to install the Yahoo! toolbar, so uncheck this to avoid adding this worthless browser toolbar. Using Scrybe To open an application or website with a gesture, press 3 fingers on your touchpad at once, or if your touchpad doesn’t support multi-touch gestures, then press Ctrl+Alt and press 1 finger on your touchpad.  This will open the Scrype input pane; start drawing a gesture, and you’ll see it on the grey square.  The input pane shows some default gestures you can try. Here we drew an “M”, which opens our default Music player.  As soon as you finish the gesture and lift up your finger, Scrybe will open the application or website you selected. A notification balloon will let you know what gesture was preformed. When you’re entering your gesture, the input pane will show white “ink”.  The “ink” will turn blue if the command is recognized, but will turn red if it isn’t.  If Scrybe doesn’t recognize your command, press 3 fingers and try again. Scrybe Control Panel You can open the Scrybe Control panel to enter or change commands by entering a box-like gesture, or right-clicking the Scrybe icon in your system tray and selecting “Scrybe Control Panel”. Scrybe has many pre-configured gestures that you can preview and even practice. All of the gestures in the Popular tab are preset and cannot be changed.  However, the ones in the favorites tab can be edited.  Select the gesture you wish to edit, and click the gear icon to change it.  Here we changed the email gesture to open Hotmail instead of the default Yahoo Mail. Scrybe can also help you perform many common Windows commands such as Copy and Undo.  Select the Tools tab to see all of these commands.   Scrybe has many settings you may wish to change.  Select the Preferences button in the Control Panel to change these.  Here’s some of the settings we changed. Uncheck “Display a message” to turn off the tooltip notifications when you enter a gesture Uncheck “Show symbol hints” to turn off the sidebar on the input pane Select the search engine you want to open with the Search Gesture.  The default is Yahoo, but you can choose your favorite. Adding a new Scrybe Gesture The default Scrybe options are useful, but the best part is that you can assign gestures to your own programs or websites.  
Open the Scrybe control panel, and click the plus sign on the bottom left corner. Enter a name for your gesture, and then choose if it is for a website or an application. If you want the gesture to open a website, enter the address in the box. Alternately, if you want your gesture to open an application, select Launch Application and then either enter the path to the application, or click the button beside the Launch field and browse to it. Now click the down arrow on the blue box and choose one of the gestures for your application or website. Your new gesture will show up under the Favorites tab in the Scrybe control panel, and you can use it whenever you want from Scrybe, or practice the gesture by selecting the Practice button. Conclusion If you enjoy multi-touch gestures, you may find Scrybe very useful on your laptop or netbook. Scrybe recognizes gestures fairly easily, even if you don’t enter them perfectly correctly. Just like pinch-to-zoom and two-finger scroll, Scrybe can quickly become something you miss on other laptops. Download Scrybe (registration required)

    Read the article

  • SQL SERVER – What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1

    - by Pinal Dave
    This is the first part of the series on Incremental Statistics. Here is the index of the complete series:

What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1
Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2
DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3

Statistics are considered one of the most important aspects of SQL Server performance tuning. You have probably often heard the phrase related to performance tuning: "Update statistics before you take any other steps to tune performance." Honestly, I have said the above statement many times, and many times I have personally updated statistics before starting any performance tuning exercise. You may agree or disagree with the point, but there is no denying that statistics play an extremely vital role in performance tuning. SQL Server 2014 has a new feature called Incremental Statistics. I have been playing with this feature for quite a while and I find it very interesting. After spending some time with the feature, I decided to write about the subject here.

New in SQL Server 2014 – Incremental Statistics

Well, it seems like lots of people want to start using SQL Server 2014's new Incremental Statistics feature. However, let us first understand what this feature actually does and how it can help. I will try to explain the feature simply before I start working on the demo code.

Code for all versions of SQL Server

Here is code which you can execute on all versions of SQL Server, and it will update the statistics of your table. The keywords to pay attention to are WITH FULLSCAN. It will scan the entire table and build brand-new statistics, which SQL Server's query optimizer can use for better estimation of your execution plan.

UPDATE STATISTICS TableName(StatisticsName) WITH FULLSCAN

Who should learn about this? Why?

If you are using partitions in your database, you should consider implementing this feature; otherwise, this feature is pretty much not applicable to you. If you are using a single partition and your table data is in a single place, you still have to update your statistics the same way you have been doing. If you are using multiple partitions, this may be a very useful feature for you. In most cases, users have multiple partitions because they have lots of data in their table. Each partition holds the data which belongs to it, and it is very common in SQL Server for each partition to be populated separately.

Real World Example

For example, if your table contains data related to sales, you will have plenty of entries in your table. It would be a good idea to divide the table into multiple partitions; for example, you could divide it into three four-month periods, four quarters, or even 12 months. Let us assume that we have divided our table into 12 different partitions. For the month of January our first partition is populated, and for the month of February our second partition is populated. Now assume that you have plenty of data in your first and second partitions, and that the month of March has just started, so your third partition has started to populate. If, for some reason, you want to update your statistics, what will you do?

In SQL Server 2012 and earlier versions

You will just use the code with WITH FULLSCAN and update the entire table. That means even though you only have new data in the third partition, you will still update the entire table. This will be a VERY resource-intensive process, as you will be updating the statistics of partitions 1 and 2 where the data has not changed at all.

In SQL Server 2014

You can just update the statistics of partition 3. There is new syntax that lets you specify which partition you want to update (a rough sketch of this syntax appears at the end of this excerpt). The impact is that SQL Server smartly merges the new data with the old statistics and updates the overall statistics without doing a FULLSCAN of your entire table. This has a huge impact on performance. Remember that the new feature in SQL Server 2014 does not change anything besides the capability to update a single partition. However, there is one related behavior which is indeed attractive: previously, an automatic statistics update was triggered when 20% of the table data changed; now the same threshold applies to a single partition. That means if a partition sees a 20% data change, it will also trigger a partition-level statistics update which, when merged into your final statistics, will give you better performance.

In summary

If you are not using partitions, this feature is not applicable to you. If you are using partitions, this feature can be very helpful to you. Tomorrow: We will see working code of SQL Server 2014 Incremental Statistics.

Reference: Pinal Dave (http://blog.sqlauthority.com)
Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T-SQL. Tagged: SQL Statistics, Statistics
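Part 2 of the series has the full working demo, but purely as a hedged sketch of what the incremental syntax looks like, here is the general shape. The table, column, and statistics names below are hypothetical, the table is assumed to already be partitioned, and partition 3 matches the March example above.

-- Create a statistics object that keeps per-partition histograms (SQL Server 2014 and later).
-- dbo.Sales, OrderDate and Stat_Sales_OrderDate are illustrative names only.
CREATE STATISTICS Stat_Sales_OrderDate
ON dbo.Sales (OrderDate)
WITH FULLSCAN, INCREMENTAL = ON;

-- Later, refresh only the partition that actually changed instead of rescanning the whole table.
UPDATE STATISTICS dbo.Sales (Stat_Sales_OrderDate)
WITH RESAMPLE ON PARTITIONS (3);

Auto-created statistics can also be made incremental at the database level with ALTER DATABASE YourDatabase SET AUTO_CREATE_STATISTICS ON (INCREMENTAL = ON), again assuming the underlying tables are partitioned.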

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox.  It allows you to read SQL Server 2005 and 2008 Profiler traces stored as .trc files into the Data Flow.  From there you can perform filtering and analysis using the power of SSIS.  There is no need for a SQL Server connection; the component just uses the trace file.

Example Usages

Cache warming for SQL Server Analysis Services
Reading the flight recorder
Finding the longest-running queries on a server
Analyzing statements for CPU and memory by user, or by some other criteria you choose

Properties

The Trace File Source adapter has two properties, which combine to control the source trace file that is read at runtime.  SQL Server 2005 and SQL Server 2008 trace files are supported, for both the Database Engine (SQL Server) and Analysis Services.  The properties are managed by the Editor form, or can be set directly from the Properties Grid in Visual Studio.

AccessMode (Enumeration) – Determines how the Filename property is interpreted.  The available values are DirectInput and Variable.
Filename (String) – The path of the trace file to load (*.trc).  The value is either a full path or the name of a variable which contains the full path to the trace file, depending on the AccessMode property.

Trace Column Definition

Hopefully the majority of you can skip this section entirely, but if you encounter problems processing a trace file, this may explain them and allow you to fix the problem.  The component is built upon the trace management API provided by Microsoft.  Unfortunately, the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified.  To overcome these limitations the component uses some simple XML files.  These files enable the trace column data types and sizing attributes to be overridden.  For example, SQL Server Profiler or TMO-generated structures define EventClass as an integer, but the real value is a string.

TraceDataColumnsSQL.xml – SQL Server Database Engine trace columns
TraceDataColumnsAS.xml – SQL Server Analysis Services trace columns

The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g.
"C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml"
"C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml"

If at runtime the component encounters a type conversion or sizing error, it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered.  While the most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files so that you can fix any further issues caused by usage or data scenarios we have not tested.  An example of an error that you can fix through these files is shown below.

Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".

Installation

The component is provided as an MSI file which you can download and run to install it.  This simply places the files on disk in the correct locations and installs the assemblies in the Global Assembly Cache, as per Microsoft's recommendations.  You may need to restart the SQL Server Integration Services service, as it caches information about which components are installed, and also restart any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages.  Finally, you will have to add the transformation to the Visual Studio toolbox manually.  Right-click the toolbox and select Choose Items…. Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window.  This process is described in detail in the related FAQ entry, How do I install a task or transform component?  We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations.  Please note that the Microsoft trace classes used in the component are not supported on 64-bit platforms.  To use the Trace File Source on a 64-bit host you need to ensure the 32-bit (x86) tools are available, and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details.

Downloads

Trace Sources for SQL Server 2005
Trace Sources for SQL Server 2008

Version History

SQL Server 2008: Version 2.0.0.382 – SQL Server 2008 public release. (9 Apr 2009)
SQL Server 2005: Version 1.0.0.321 – SQL Server 2005 public release. (18 Nov 2008)

    Read the article

  • Patch an Existing NK.BIN

    - by Kate Moss' Open Space
    As you know, we can use the MAKEIMG.EXE tool to create the OS image file, NK.BIN, or ROMIMAGE.EXE with a BIB file for more precise control. But what if the image file has already been created and needs to be patched, or you want to extract a file from NK.BIN? Platform Builder provides many useful command-line utilities, and today I am going to introduce one of them, BINMOD.EXE. http://msdn.microsoft.com/en-us/library/ee504622.aspx is the official page for the BINMOD tool. As the page says, the BinMod Tool (binmod.exe) extracts files from a run-time image and replaces files in a run-time image, and its usage is

binmod [-i imagename] [-r replacement_filename.ext | -e extraction_filename.ext]

This is a simple tool and is easy to use. If we want to extract a file from nk.bin, we just type

binmod -i nk.bin -e filename.ext

And that's it! Or you can try the -r option to replace a file inside NK.BIN. The small tool is good, but there is a limitation: the files in the MODULES section are fixed up during ROMIMAGE, so their original file format is not preserved; therefore, extracting or replacing a file in the MODULES section is impossible. So, just like this small tool, this post is supposed to end here, right? Nah... it is not that easy. Just try the above example, and you will find that the tool does not work! You can double-check that the file is in the FILES section and that the NK.BIN is good, but it just quits anyway. Before you throw away this useless toy, we can try to fix it! Yes, the source of this tool is available in your CE6 tree, at private\winceos\COREOS\nk\tools\romimage\binmod. As it is a tool that runs on Windows, you need the Windows SDK or Visual Studio to build the code. (I am going to save you some time by skipping the details, as building a desktop console-mode program is fairly trivial.) cbinmod.cpp contains the core logic for this program, and following up on the error message we got, the following code looks suspect:

//
// Extra sanity check...
//
if((DWORD)(HIWORD(pTOCLoc->dllfirst) << 16) <= pTOCLoc->dlllast &&
   (DWORD)(LOWORD(pTOCLoc->dllfirst) << 16) <= pTOCLoc->dlllast)
{
    dprintf("Found pTOC  = 0x%08x\n", (DWORD)dwpTOC);
    fFoundIt = true;
    break;
}
else
{
    dprintf("NOTICE! Record %d looked like a TOC except DLL first = 0x%08X, and DLL last = 0x%08X\r\n", i, pTOCLoc->dllfirst, pTOCLoc->dlllast);
}

The logic checks whether dllfirst <= dlllast, but look closer: the code only separates the high/low WORD of dllfirst and does not apply the same treatment to dlllast. Is that on purpose, or a bug? The TOC is created by ROMIMAGE.EXE, so let's move to ROMIMAGE. In private\winceos\coreos\nk\tools\romimage\romimage\bin.cpp we find

Module::s_romhdr.dllfirst  = (HIWORD(xip_mem->dll_data_bottom) << 16) | HIWORD(xip_mem->kernel_dll_bottom);
Module::s_romhdr.dlllast   = (HIWORD(xip_mem->dll_data_top) << 16)    | HIWORD(xip_mem->kernel_dll_top);

It is clear now: the high word of dllfirst is the upper 16 bits of the XIP DLL bottom and the low word is the upper 16 bits of the kernel DLL bottom; likewise, the high word of dlllast is the upper 16 bits of the XIP DLL top and the low word is the upper 16 bits of the kernel DLL top. Obviously, the correct statement should be

if((DWORD)(HIWORD(pTOCLoc->dllfirst) << 16) <= (DWORD)(HIWORD(pTOCLoc->dlllast) << 16) &&
   (DWORD)(LOWORD(pTOCLoc->dllfirst) << 16) <= (DWORD)(LOWORD(pTOCLoc->dlllast) << 16))

Updating the code like this should fix the issue; or, since the comment says it is just an extra sanity check, you can simply get rid of it. Either way gets the code moving forward and everything working as advertised (a small standalone sketch of why the original check fails is shown after this excerpt).
Now you can extract copies of files from the nk.bin, replace files, and so on. Since the NK.BIN can be compressed, BinMod needs compress.dll to decompress the data; the DLL can be found in C:\Program Files\Microsoft Platform Builder\6.00\cepb\idevs\imgutils.

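To see why the original check can reject a perfectly valid TOC, here is a minimal, self-contained sketch of the two comparisons. Everything in it is illustrative: the DWORD/WORD typedefs and the HIWORD/LOWORD functions are stand-ins for the Windows macros used in cbinmod.cpp, and the dllfirst/dlllast values are made-up examples of the packing performed by ROMIMAGE, not values taken from a real image.

#include <cstdio>
#include <cstdint>

// Stand-ins for the Windows DWORD/WORD types and HIWORD/LOWORD macros used in cbinmod.cpp.
typedef uint32_t DWORD;
typedef uint16_t WORD;
static WORD HIWORD(DWORD v) { return (WORD)(v >> 16); }
static WORD LOWORD(DWORD v) { return (WORD)(v & 0xFFFF); }

int main()
{
    // Hypothetical packed values, built the way ROMIMAGE builds them:
    // high word = upper 16 bits of the XIP DLL bottom/top,
    // low word  = upper 16 bits of the kernel DLL bottom/top (kernel DLLs sit at high addresses).
    DWORD dllfirst = 0x01F0C180; // XIP DLL bottom ~0x01F00000, kernel DLL bottom ~0xC1800000
    DWORD dlllast  = 0x0200C190; // XIP DLL top    ~0x02000000, kernel DLL top    ~0xC1900000

    // Original check: compares a reconstructed kernel address (0xC1800000) against the
    // packed dlllast DWORD (0x0200C190), so this valid-looking TOC fails the test.
    bool original = ((DWORD)HIWORD(dllfirst) << 16) <= dlllast &&
                    ((DWORD)LOWORD(dllfirst) << 16) <= dlllast;

    // Fixed check: compares the XIP range against the XIP range and the kernel range
    // against the kernel range, as described in the article.
    bool fixed = ((DWORD)HIWORD(dllfirst) << 16) <= ((DWORD)HIWORD(dlllast) << 16) &&
                 ((DWORD)LOWORD(dllfirst) << 16) <= ((DWORD)LOWORD(dlllast) << 16);

    printf("original check: %s, fixed check: %s\n",
           original ? "pass" : "fail", fixed ? "pass" : "fail");
    return 0;
}

With these sample values the original comparison fails while the corrected one passes, which matches the "tool just quits" behaviour described above.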
    Read the article

  • NoSQL Java API for MySQL Cluster: Questions & Answers

    - by Mat Keep
    The MySQL Cluster engineering team recently ran a live webinar, now available on-demand, demonstrating the ClusterJ and ClusterJPA NoSQL APIs for MySQL Cluster and how they can be used to build real-time, high-scale, Java-based services that require continuous availability. Attendees asked a number of great questions during the webinar, and I thought it would be useful to share those here, so others are also able to learn more about the Java NoSQL APIs. First, a little bit about why we developed these APIs and why they are interesting to Java developers.

ClusterJ and ClusterJPA

ClusterJ is a Java interface to MySQL Cluster that provides either a static or dynamic domain object model, similar to the data model used by JDO, JPA, and Hibernate. A simple API gives users extremely high performance for common operations: insert, delete, update, and query (a short sketch of the API appears after the Q&A below). ClusterJPA works with ClusterJ to extend the functionality, including persistent classes, relationships, joins in queries, lazy loading, and table and index creation from the object model. By eliminating data transformations via SQL, users get lower data access latency and higher throughput. In addition, Java developers have a more natural programming method to directly manage their data, with a complete, feature-rich solution for object/relational mapping. As a result, the development of Java applications is simplified, with faster development cycles resulting in accelerated time to market for new services.

MySQL Cluster offers multiple NoSQL APIs alongside Java: Memcached for a persistent, high-performance, write-scalable key/value store; HTTP/REST via an Apache module; and C++ via the NDB API for the lowest absolute latency. Developers can use SQL as well as NoSQL APIs for access to the same data set via multiple query patterns – from simple primary key lookups or inserts to complex cross-shard JOINs using Adaptive Query Localization. Marrying NoSQL and SQL access to an ACID-compliant database offers developers a number of benefits. MySQL Cluster's distributed, shared-nothing architecture with auto-sharding and real-time performance makes it a great fit for workloads requiring high-volume OLTP. Users also get the added flexibility of being able to run real-time analytics across the same OLTP data set for real-time business insight. OK – hopefully you now have a better idea of why ClusterJ and JPA are available. Now, for the Q&A.

Q & A

Q. Why would I use Connector/J vs. ClusterJ?
A. Partly it's a question of whether you prefer to work with SQL (Connector/J) or objects (ClusterJ). Performance of ClusterJ will be better, as there is no need to pass through the MySQL Server. A ClusterJ operation can only act on a single table (e.g. no joins) - ClusterJPA extends that capability.

Q. Can I mix different APIs (i.e. ClusterJ, Connector/J) in our application for different query types?
A. Yes. You can mix and match all of the API types: SQL, JDBC, ODBC, ClusterJ, Memcached, REST, C++. They all access the exact same data in the data nodes. Update through one API and the new data is instantly visible to all of the others.

Q. How many TCP connections would a SessionFactory instance create for a cluster of 8 data nodes?
A. SessionFactory has a connection to the mgmd (management node), but otherwise is just a vehicle to create Sessions. Without connection pooling, a SessionFactory will have one connection open with each data node. Using optional connection pooling allows multiple connections from the SessionFactory to increase throughput.

Q. Can you give details of how ClusterJ optimizes sharding to enhance performance of distributed query processing?
A. Each data node in a cluster runs a Transaction Coordinator (TC), which begins and ends the transaction, but also serves as a resource to operate on the result rows. While an API node (such as a ClusterJ process) can send queries to any TC/data node, there are performance gains if the TC is where most of the result data is stored. ClusterJ computes the shard (partition) key to choose the data node where the row resides as the TC.

Q. What happens if we perform two primary key lookups within the same transaction? Are they sent to the data node in one transaction?
A. ClusterJ will send identical PK lookups to the same data node.

Q. How is distributed query processing handled by MySQL Cluster?
A. If the data is split between data nodes, then all of the information will be transparently combined and passed back to the application. The session will connect to a data node - typically chosen by hashing the primary key - which then interacts with its neighboring nodes to collect the data needed to fulfil the query.

Q. Can I use Foreign Keys with MySQL Cluster?
A. Support for Foreign Keys is included in the MySQL Cluster 7.3 Early Access release.

Summary

The NoSQL Java APIs are packaged with MySQL Cluster, available for download here, so feel free to take them for a spin today!

Key Resources

MySQL Cluster on-line demo
MySQL ClusterJ and JPA on-demand webinar
MySQL ClusterJ and JPA documentation
MySQL ClusterJ and JPA whitepaper and tutorial
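For readers who have not yet seen any ClusterJ code, here is a minimal sketch of the API described above. It is illustrative only: the 'employee' table (an int primary key column 'id' and a varchar column 'name'), the schema name, and the management node address are all hypothetical, and error handling is omitted for brevity.

import com.mysql.clusterj.ClusterJHelper;
import com.mysql.clusterj.Session;
import com.mysql.clusterj.SessionFactory;
import com.mysql.clusterj.annotation.PersistenceCapable;
import com.mysql.clusterj.annotation.PrimaryKey;
import java.util.Properties;

public class ClusterJSketch {

    // Domain interface mapped to the hypothetical 'employee' table; ClusterJ
    // supplies the implementation at runtime, so no SQL is involved.
    @PersistenceCapable(table = "employee")
    public interface Employee {
        @PrimaryKey
        int getId();
        void setId(int id);

        String getName();
        void setName(String name);
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("com.mysql.clusterj.connectstring", "mgmhost:1186"); // hypothetical mgmd address
        props.put("com.mysql.clusterj.database", "test");              // hypothetical schema name

        SessionFactory factory = ClusterJHelper.getSessionFactory(props);
        Session session = factory.getSession();

        // Primary-key insert, sent straight to the data nodes.
        Employee e = session.newInstance(Employee.class);
        e.setId(1);
        e.setName("Alice");
        session.persist(e);

        // Primary-key lookup; ClusterJ routes it to the data node owning the row.
        Employee found = session.find(Employee.class, 1);
        System.out.println(found.getName());

        session.close();
    }
}

In a real deployment you would typically create the SessionFactory once per application and obtain a Session per unit of work, in line with the SessionFactory question answered above.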

    Read the article
