Search Results

Search found 16135 results on 646 pages for 'quick launch bar'.


  • SQLAuthority News – Android Efficiency Tips and Tricks – Personal Technology Tip #003

    - by pinaldave
    I use my phone for lots of things. I use it mainly to replace my tablet – I can e-mail, take and edit photos, and do almost everything I can do on a laptop with this phone. And I am sure that there are many of you out there just like me. I personally have a Galaxy S3, which uses the Android operating system, and I have decided to feature it as the third installment of my Technology Tips and Tricks series.

    1) Shortcut to your favorite contacts on the home screen
    Access your most-called contacts easily from your home screen by holding your finger on any empty spot on the home screen. A menu will pop up that allows you to choose Shortcuts, and then Contact. You can scroll through your contact list and then just tap the name of the person you want to be able to dial with a single click.

    2) Keep track of your data usage
    Yes, we all should keep a close eye on our data usage, because it is very easy to go over our limits and end up with a giant bill at the end of the month. Never be surprised when you open that mobile phone envelope again. Go to Settings, then Data Usage, and you will find a quick rundown of your usage and how much data each app uses, and you can even set alarms to let you know when you are nearing the limits. Better yet, you can set the phone to stop using data when it reaches a certain limit.

    3) Bring back good grammar
    We often hear proclamations about the downfall of written language, and how texting abbreviations, misspellings, and lack of punctuation are the root of all evil. Well, we can show all those doomsayers that all is not lost by bringing punctuation back to texting. Usually we leave it off when we text because it takes too long to get to the screen with all the punctuation options. But now you can hold down the period (or "full stop") button and a list of all the commonly used punctuation marks will pop right up.

    4) Apps, apps, apps and apps
    And finally, I cannot end an article about smartphones without including a list of my favorite apps. Here is a list of my top 10 applications on my Android (not counting social media apps):
    - Advanced Task Killer – keeps my phone snappy by closing unnecessary apps
    - WhatsApp – my favorite alternative to SMS texting
    - Flipboard – for my 'timepass' moments
    - Skype – keeps me close to friends and family
    - Google Maps – I am never lost, because of this one thing
    - Amazon Kindle – books are my best friends
    - Dropbox – my data is always safe
    - Pluralsight Player – learning never stops for me
    - Samsung Kies Air – connecting my phone to my computer
    - Chrome – replacing the default browser

    I have not included any social media applications in the above list, but you can be sure that I am linked to Twitter, Facebook, Google+, LinkedIn, and YouTube.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Android, Personal Technology

    Read the article

  • Home Energy Management & Automation with Windows Phone 7

    A number of people at Clarity are personally interested in home energy conservation and home automation, and we feel that a mobile device is a great fit for bringing this idea to fruition. While this project is merely a concept and not directly associated with Microsoft's Hohm web service, it provides a great model for communicating the concept. I wanted to take the idea a step further and combine saving energy in your home with the ability to track water usage and control your home devices. I designed an application that focuses on total home control, not just energy usage.

    Application Overview
    By monitoring home consumption in real time, and with yearly projections, users can pinpoint vampire devices, times of high or low consumption, and wasteful patterns of energy use. Energy usage meters indicate total current consumption as well as individual device consumption. Users can then use this information to take action, make adjustments, and change their consumption behaviors. The app can be used to automate certain systems like lighting, temperature, or alarms. Other features can be turned on and off at the touch of a toggle switch on your phone, away from home. Forget to turn off the TV or shut the garage door? No problem, you can do it from your phone. Through settings you can enable and disable features of the phone that apply to your home, making it a completely customized and convenient experience. To be clear, this equates to more security, a big environmental impact, and even bigger savings.

    Design and User Interface
    Since this panorama application is designed for Windows Phone 7 devices, it complies with the UI Design and Interaction Guide for WP7. I developed the frame and page hierarchy from existing examples. The interface takes advantage of the interactive nature of touch screens with slider controls, pivot control views, and toggle switches to turn devices on and off (not shown in the mockup). I followed the recommendations for text-based elements and adapted the tile notifications to display the most recent user activity. For example, the mockup indicates upon launching the app that the last thing you did was program the thermostat. This model is great for quick-launching common user actions. One last design feature to point out is the technical reason for supplying both light and dark themes for the app: since this application targets energy consumption, it only makes sense to consider the effect of the app's background color or image on the phone's energy use. When displaying darker colors like black, the OLED display may use less power, extending battery life.

    Other Considerations
    For now I left out wind- and solar-powered energy options because they are not available to everyone. Renewable energy sources and the new technologies associated with them are definitely ideas to keep in mind for a next iteration. Another idea to explore for such an application would be to include a savings model similar to mint.com. In addition to general energy-saving recommendations, the application could recommend customized ways to save based on your current utility providers and the available options in your area. If your television or refrigerator is guilty of sucking a lot of energy, then you may see recommendations for Energy Star products that could save you even more money!

    Read the article

  • SQL SERVER – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    After having an excellent response to my quiz – Why SELECT * throws an error but SELECT COUNT(*) does not? – I have decided to ask another puzzling question of all of you. I am running this test on SQL Server 2008 R2. Here is the quick scenario of my setup:
    - Create a table
    - Insert 1,000 records
    - Check the statistics
    - Now insert 10 times more records – 10,000 rows
    - Check the statistics – they will NOT be updated
    Note: Auto Update Statistics and Auto Create Statistics for the database are set to TRUE.
    Expected result – statistics should be updated (see SQL SERVER – When are Statistics Updated – What triggers Statistics to Update).

    Now the question is: why are the statistics not updated? The common answer is that we can update the statistics ourselves using UPDATE STATISTICS TableName WITH FULLSCAN, ALL. However, the solution I am looking for is one where the statistics are updated automatically, based on the algorithm mentioned here. Now the solution is to ____________________. Vinod Kumar is not allowed to participate here, as he is the one who helped me build this puzzle. I will publish the solution next week. Please leave a comment, and if your comment contains a valid answer, I will publish it with due credit. Here is the script to reproduce the scenario I mentioned.

    -- Execution Plans Difference
    -- Create Sample Database
    CREATE DATABASE SampleDB
    GO
    USE SampleDB
    GO
    -- Create Table
    CREATE TABLE ExecTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
    GO
    -- Insert One Thousand Records
    -- INSERT 1
    INSERT INTO ExecTable (ID, FirstName, LastName, City)
    SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
             ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Display statistics of the table - none listed
    sp_helpstats N'ExecTable', 'ALL'
    GO
    -- Select Statement
    SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
    GO
    -- Display statistics of the table
    sp_helpstats N'ExecTable', 'ALL'
    GO
    -- Replace your Statistics over here
    -- NOTE: Replace your _WA_Sys with stats from above query
    DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7);
    GO
    --------------------------------------------------------------
    -- Round 2
    -- Insert Ten Thousand Records
    -- INSERT 2
    INSERT INTO ExecTable (ID, FirstName, LastName, City)
    SELECT TOP 10000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
             ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Select Statement
    SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
    GO
    -- Display statistics of the table
    sp_helpstats N'ExecTable', 'ALL'
    GO
    -- Replace your Statistics over here
    -- NOTE: Replace your _WA_Sys with stats from above query
    DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7);
    GO
    -- You will notice that the statistics still show 1000 rows
    -- Clean up Database
    DROP TABLE ExecTable
    GO
    USE MASTER
    GO
    ALTER DATABASE SampleDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    GO
    DROP DATABASE SampleDB
    GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics, Statistics
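    One more way to observe the behavior – a small verification sketch, not part of the original script – is STATS_DATE, which reports when each statistics object on the table was last updated:

    -- Check when each statistics object on ExecTable was last updated
    -- (run while SampleDB still exists; the _WA_Sys name will differ per system)
    USE SampleDB
    GO
    SELECT s.name AS StatsName,
           STATS_DATE(s.[object_id], s.stats_id) AS LastUpdated
    FROM sys.stats s
    WHERE s.[object_id] = OBJECT_ID('ExecTable');
    GO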

    Read the article

  • SQL SERVER – DMV – sys.dm_os_wait_stats Explanation – Wait Type – Day 3 of 28

    - by pinaldave
    The key Dynamic Management View (DMV) that helps us understand wait stats is sys.dm_os_wait_stats; this DMV gives us all the information we need to know regarding wait stats. However, the interpretation is left to us. This is a challenge, as understanding wait stats can often be quite tricky. Anyway, we will cover a few wait stats in future articles. Today we will go over a basic understanding of the DMV. The official Books Online reference for the DMV is here: sys.dm_os_wait_stats. I suggest you refer to it for full accuracy. Following is a statement from Books Online: "Specific types of wait times during query execution can indicate bottlenecks or stall points within the query. Similarly, high wait times, or wait counts server wide can indicate bottlenecks or hot spots in interaction query interactions within the server instance." This is the statement which inspired me to write this series. Let us first run the following statement against the DMV.

    SELECT *
    FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC
    GO

    The above statement will show us a few columns. Here is a quick explanation of each column:
    - wait_type – the name of the wait type. There can be three different kinds of wait types – resource, queue and external.
    - waiting_tasks_count – this incremental counter is a good indication of how frequently the wait is happening. If this number is very high, it is a good indication for us to investigate that particular wait type. It is quite possible that the wait time is considerably low, but the frequency of the wait is very high.
    - wait_time_ms – the total wait time accumulated for that wait type. This is the total wait time and includes signal_wait_time_ms.
    - max_wait_time_ms – the maximum wait time that has ever occurred for that particular wait type. Using this, one can estimate the intensity of the wait type in the past. Again, it is not necessary that this max wait time will occur every time, so do not over-invest yourself here.
    - signal_wait_time_ms – the wait time between the thread being marked as runnable and it getting to the running state. If the runnable queue is very long, you will find that this wait time becomes high.

    Additionally, please note that this DMV does not show current wait types or wait stats. It is a cumulative view of all the wait stats since the server (instance) was restarted or the wait stats were cleared. In a future blog post, we will also cover two more DMVs which can be helpful in identifying wait-related issues:
    - sys.dm_os_waiting_tasks
    - sys.dm_exec_requests

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
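    To tie these columns together, here is a small illustrative query – an addition to the post, not from it – that shows what fraction of total wait time is signal (CPU) wait, plus the command that clears the cumulative counters:

    -- Percentage of total wait time that is signal (CPU) wait
    SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS DECIMAL(5,2)) AS PercentSignalWaits
    FROM sys.dm_os_wait_stats
    GO
    -- Reset the cumulative wait stats (use with care on production servers)
    DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)
    GO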

    Read the article

  • The dislikes of TDD

    - by andrewstopford
    I enjoy debates about TDD, and Brian Harry's blog post is no exception. Brian sounds out what he likes and dislikes about TDD, and it's the dislikes I'll focus on.

    "The idea of having unit tests that cover virtually every line of code that I've written that I have to refactor every time I refactor my code makes me shudder. Doing it this way makes me take nearly twice as long as it would otherwise take and I don't feel like I get sufficient benefits from it."

    Refactoring your tests to match your refactored code sounds like the tests are suffering. Too many hard dependencies with no SOLID concerns are a sure-fire reason you would do this. Maybe at the start of a TDD cycle you would need to do this as your design evolves and you remove these dependencies, but this should quickly be resolved as you refactor. If you find yourself still doing it, then stop and look back at your design.

    "Don't get me wrong, I'm a big fan of unit tests. I just prefer to write them after the code has stopped shaking a bit. In fact most of my early testing is 'manual'. Either I write a small UI on top of my service that allows me to plug in values and try it, or write some quick API tests that I throw away as soon as I have validated them."

    The problem with this is that a UI can make assumptions about your code that you then just unit test around, and very quickly the design becomes bad and technical debt sweeps in. If you want to black-box test your code with a UI, then do so after your TDD cycles, not before.

    "This is probably my biggest issue with a literal TDD interpretation. TDD says you never write a line of code without a failing test to show you need it. I find it leads developers down a dangerous path. Without any help from a methodology, I have met way too many developers in my life that 'back into a solution'. By this, I mean they write something, it mostly works, and they discover a new requirement so they tack it on, and another and another, and when they are done, they've got a monstrosity of special cases each designed to handle one specific scenario. There's way more code than there should be and it's way too complicated to understand. I believe in finding general solutions to problems from which all the special cases naturally derive rather than building a solution of special cases. In my mind, to do this, you have to start by conceptualizing and coding the framework of the general algorithm. For me, that's a relatively monolithic exercise."

    TDD is a development practice, not a methodology; the danger is that the solution becomes a mass of different things that violate DRY. TDD won't solve these problems; only good communication and practices like pairing will help. Above all else, assuming that TDD replaces a methodology is a mistake: combine it with whatever works for your team and business, but only good communication will help. A good naming scheme and structure for folders, files and tests can help you and your team isolate which tests are for what.

    Read the article

  • Multiplayer / Networking options for a 2D game with physics

    - by lahmas
    Summary: My 50%-finished 2D sidescroller, with Box2D as the physics engine, should have multiplayer support in the final version. However, the current code is just a singleplayer game. What should I do now? And more importantly, how should I implement multiplayer and combine it with singleplayer? Is it a bad idea to code the singleplayer mode separately from the multiplayer mode (like Notch did with Minecraft)? Performance in singleplayer should be as good as possible (simulating physics using a loopback server to implement singleplayer mode would be a problem there).

    Full background / questions: I'm working on a relatively large 2D game project in C++, with physics as a core element of it (I use Box2D for that). The finished game should have full multiplayer support. However, I made the mistake of not planning the networking part properly and have basically worked on a singleplayer game until now. I thought that multiplayer support could be added to the almost-finished singleplayer game in a relatively easy and clear way, but from what I have read, this is apparently wrong. I even read that a multiplayer game should be programmed as one from the beginning, with the singleplayer mode actually just consisting of hosting an invisible local server and connecting to it via loopback. (I found out that most FPS game engines do it that way; an example would be Source.)

    So here I am, with my half-finished 2D sidescroller game, and I don't really know how to go on. Simply continuing to work on the singleplayer / client seems useless to me now, as I'd have to recode and refactor even more later. First, a general question to anybody who has found himself in a situation like this: how should I proceed? Then, the more specific one – I have been trying to find out how I can approach the networking part for my game. Possible solutions:

    Invisible / loopback server for singleplayer: This would have the advantage that there is basically no difference between singleplayer and multiplayer mode, and not much additional code would be needed. A big disadvantage: performance and other limitations in singleplayer. There would be two physics simulations running, one for the client and one for the loopback server. Even if you work around it by providing a direct path for the data from the loopback server – through direct communication between the threads, for example – the singleplayer would be limited. This is a problem because people should be allowed to play around with masses of objects at once.

    Separated singleplayer / multiplayer mode: There would be no server involved in singleplayer mode. I'm not really sure how this would work, but at least I think there would be a lot of additional work, because all of the singleplayer features would have to be re-implemented or glued to the multiplayer mode.

    Multiplayer mode as a module for singleplayer: This is merely a quick thought I had. Multiplayer could consist of a singleplayer game with an additional networking module loaded, connected to a server which sends and receives data and updates the singleplayer world.

    In retrospect, I regret not having planned the multiplayer mode earlier. I'm really stuck at this point and I hope that somebody here is able to help me!

    Read the article

  • Silverlight Cream for March 21, 2010 -- #816

    - by Dave Campbell
    In this Issue: Michael Washington, John Papa(-2-, -3-, -4-), Jonas Follesø, David Anson, Scott Guthrie, Andrej Tozon, Bill Reiss(-2-), Pete Blois, and Lee.

    Shoutouts:
    Frank LaVigne has a Mix10 Session Downloader for us all to use... thanks Frank!
    Read what Ward Bell has to say about MVVM, Josh Smith's Way... it's all good.
    Robby Ingebretsen posts on his 10 Favorite Open Source Fonts You Can Embed in WPF or Silverlight.
    Mike Harsh posted Slides and Demos from my MIX10 Session. The download link at Drop.io is down for maintenance until Sunday evening, March 21.

    From SilverlightCream.com:

    Blend 4: TreeView SelectedItemChanged using MVVM – Michael Washington has a post up about doing SelectedItemChanged on a TreeView with MVVM, oh and he's starting out in Blend 4...

    Silverlight TV 14: Developing for Windows Phone 7 with Silverlight – John Papa hit Silverlight TV pretty hard at the beginning of MIX10. This first one is with Mike Harsh talking about WP7 (Hi Mike... wondered where you'd run off to!), and you can go to the shoutout section to get Mike's session material from MIX as well.

    Silverlight TV 15: Announcing Silverlight 4 RC at MIX 10 – In this next Silverlight TV (15), John Papa and Adam Kinney discuss the Silverlight 4 RC... thank goodness it's out, we can all let go of the breath we've been holding in :)

    Silverlight TV 16: Tim Heuer and Jesse Liberty Talk about Silverlight 4 RC at MIX 10 – Silverlight TV 16 has John Papa sharing the spotlight with Jesse Liberty and Tim Heuer... geez... can you find three more knowledgeable Silverlight folks to listen to? No? Then go listen to this :)

    Silverlight TV 17: Build a Twitter Client for Windows Phone 7 with Silverlight – The latest Silverlight TV has John Papa bringing Mike Harsh back to produce a Twitter client for WP7.

    Simulating multitouch on the Windows Phone 7 Emulator – Jonas Follesø has a great post up about simulating multi-touch on WP7 using multiple mice... yeah, you read that right :)

    Using IValueConverter to create a grouped list of items simply and flexibly – David Anson demonstrates grouping items in a ListBox using IValueConverter. I think I can pretty well guarantee I would NOT have thought of doing this... :)

    Building a Windows Phone 7 Twitter Application using Silverlight – In the MIX10 first-day keynote, Scott Guthrie did File->New Project and built a WP7 Twitter app. He has that up as a tutorial with all sorts of external links, including one to the keynote itself.

    Named and optional parameters in Silverlight 4 – Andrej Tozon delves into the optional parameters that are now available to Silverlight developers... pretty cool stuff.

    Space Rocks game step 4: Inheriting from Sprite – Bill Reiss continues his game development series with this one on inheriting from the Sprite class and centering objects.

    Space Rocks game step 5: Rotating the ship – Bill Reiss's episode 5 is on rotating the ship you set up in episode 4. Don't worry about the transforms, Bill gives it all to us :)

    Labyrinth Sample for Windows Phone – Wow... check out the sample Pete Blois did for the phone... Silverlight coolness :)

    PathListBox in SL4 – first look – Lee has a post up on the PathListBox. I think this is going to catch on quick... it's just too cool not to!

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • SharePoint OCR image files indexing

    Introduction
    This article describes how to set up indexing of image files (including TIFF, PDF, JPEG, BMP...) using OCR technology. The indexing described below utilizes Microsoft IFilter technology and as such is not specific to SharePoint; it can be used with any product that uses Microsoft indexing: Microsoft Search, Desktop Search, SQL Server search, and, through plug-ins, Google Desktop Search. I, however, use it with Microsoft Windows SharePoint Services 2003. For those other products, the registration may need to be slightly different.

    Background
    One of the projects I was working on required storage of old documents scanned into PDF files. A separate team of people was responsible for providing tags for a search engine so those image documents could be found. The whole process was clumsy, labor intensive, and error prone. That was what started me on my exploration path.

    OCR
    The first search I fired was for Open Source OCR products. Pretty quickly, I narrowed it down to TESSERACT (http://code.google.com/p/tesseract-ocr/). Tesseract is an orphaned brain child of HP, which worked on it from 1985 to 1995. Then it was moved to Open Source, and now, if I understand it correctly, Google is working on it. With credentials like that, it's no wonder that Tesseract scores one of the highest marks on OCR recognition and accuracy. After downloading and struggling just a bit, I got Tesseract to work. The struggling part was that the home page claims its base input format is a TIFF file. Maybe my TIFFs were bad, but I was able to get it to work only for BMP files.

    Image file conversion
    So now that I have an OCR that can convert BMP files into text, how do I get text out of the image PDF files? One more search, and I settled on ImageMagick (http://www.imagemagick.org/). This is another wonderful Open Source utility that can convert any file into an image. It did work out of the box, converting any TIFF files into bitmaps, but to get PDF files converted, it requires GhostScript (http://mirror.cs.wisc.edu/pub/mirrors/ghost/GPL/gs864/gs864w32.exe).

    Dealing with text PDFs
    With that utility installed, I was cooking - I can convert any file (in particular PDF and TIFF) into a bitmap, and then I can extract the text out of the bitmap. The only consideration was to somehow treat PDF files containing text differently - after all, OCR is very computation intensive and somewhat error prone even with perfect image quality and resolution. So another quick search, and I have PDFTOTEXT (ftp://ftp.foolabs.com/pub/xpdf/xpdf-3.02pl4-win32.zip) - thank God for Open Source! With these guys, I can pull text out of a PDF in an eye blink. However, I would get nothing for pure image PDFs, but I already have a solution for that!

    Batch process
    It took another 15 minutes to set up a batch script to automate the process:
    1. Check the file extension.
    2. If the file is a PDF, try to extract text out of it:
       - if there is more than a certain amount of text in the file - done!
       - if there is no text, convert the first page into a bitmap and run OCR on the bitmap.
    3. For any other file type, convert the file into a bitmap and run OCR on the bitmap.
    Once you unzip the attached project, check out the bin\OCR.BAT file. It will create a temporary file in the directory where your source file is, with the same name plus the '.txt' extension.

    Read the article

  • Using NServiceBus behind a custom web service

    - by Michael Stephenson
    In this post I'd like to talk about an architecture scenario we had recently and how we were able to utilise NServiceBus to help us address the problem.

    Scenario
    Cognos is a reporting system used by one of my clients. A while back we developed a web service façade to allow line-of-business applications to access reports from Cognos to support their various functions. The service was intended to provide access to reports which were quick-running reports or pre-generated reports which could be accessed in real time on demand. One of the key aims of the web service was to provide a simple generic interface to allow applications to get any report without needing to worry about the complex .NET SDK for Cognos. The web service also supported multi-hop Kerberos delegation so that report data could be accessed under the context of the end user. This service was working well for a period of time.

    The Problem
    The problem we encountered was that reports were now also required to be available to batch processes. The original design was optimised for low latency so users would enjoy a positive experience; however, when the batch processes started to request 250+ concurrent reports over an extended period of time, you can begin to imagine the sorts of problems that come into play. The key problems this new scenario caused are:
    - Users may be affected, and the latency of on-demand reports became significantly worse
    - The Cognos infrastructure was not scaled sufficiently to cope with these long peaks of load
    - From a cost perspective, it just isn't feasible to scale the Cognos infrastructure to handle the load when it is only for a couple-of-hour window each night
    We really needed to introduce a second pattern for accessing this service which would support high-throughput scenarios. We also had little control over the batch process in terms of being able to throttle its load. We could, however, make some changes to the way it accessed the reports.

    The Approach
    My idea was to introduce a throttling mechanism between the web service façade and Cognos. This would allow the batch processes to push report requests hard at the web service, which we were confident the web service could handle. The web service would then queue these requests, process them behind the scenes, and make a call back to the batch application to provide the report once it had been accessed. In terms of technology we had some limitations, because we were not able to use WCF or IIS7, where the MSMQ-activated WCF services could have helped; but we did have MSMQ as an option, and I thought NServiceBus could do just the job to help us here.

    The flow of how this would work was as follows:
    1. The batch application sends a request for a report to the web service
    2. The web service uses NServiceBus to send the message to a queue
    3. The NServiceBus generic host runs as a Windows service with a message handler which subscribes to these messages
    4. The message handler gets the message and accesses the report from Cognos
    5. The message handler calls back to the original batch application; this is decoupled because the calling application provides a callback URL
    6. The report gets into the batch application and is processed as normal
    This approach looks something like the below diagram.

    The key point is that an application wanting to take advantage of the batch-driven reports needs to do the following:
    - Implement our callback contract
    - Make a call to the service providing a callback URL
    - Provide a correlation ID so it knows how to tie each response back to its request

    What does NServiceBus offer in this solution?
    This scenario is not the typical messaging service-bus type of solution people implement with NServiceBus, but it did offer the following:
    - Simplified interaction with MSMQ
    - The ability to configure the number of processes working through the queue, so we could find a balance between load on Cognos and the applications' end-to-end processing time
    - Retries and a way to manage failed messages
    - A high-availability setup
    The simple thing is that NServiceBus gave us the platform to build the solution on. We just implemented a message handler which functionally processed a message, and we could rely on NServiceBus to do all of the hard work around managing the queues and all of the lower-level things that would have taken ages to write to any kind of robust level.

    Conclusion
    With this approach we were able to deal with a fairly significant performance issue without too much rework. Hopefully this write-up gives people some insight into ideas on how to leverage the excellent NServiceBus framework to help solve integration and high-throughput scenarios.

    Read the article

  • Time to stop using "Execute Package Task" – a way to execute a package in the SSIS catalog, taking advantage of the new project deployment model and the logging and reporting feature

    - by Kevin Shyr
    I set out to find a way to dynamically call a package in SSIS 2012. The following are two excellent blogs I found, and I used them heavily. The code below adds some parameter types and message types, but is derived essentially from those blogs.
    http://sqlblog.com/blogs/jamie_thomson/archive/2011/07/16/ssis-logging-in-denali.aspx
    http://www.ssistalk.com/2012/07/24/quick-tip-run-ssis-2012-packages-synchronously-and-other-execution-options/

    The code: Every package will be called by a PackageController package. The PackageController is initialized with some information on which package to run and what information to pass in.

    The following is the stored procedure called from the "Execute SQL Task". Here are the highlights of the stored procedure:
    - It takes in the package name, project name, and folder name (the folder in the SSIS project deployment to the SSIS catalog)
    - It sets the package variables of the upcoming package execution
    - It executes the package in the SSIS catalog
    - It gets the status of the execution; also, if any exist, it gets the error messages' message_ids and stores them in the management database
    - It returns a value to the "Execute SQL Task" to manage failure properly

    CREATE PROCEDURE [AUDIT].[LaunchPackageExecutionInSSISCatalog]
        @PackageName NVARCHAR(255)
        , @ProjectFolder NVARCHAR(255)
        , @ProjectName NVARCHAR(255)
        , @AuditKey INT
        , @DisableNotification BIT
        , @PackageExecutionLogID INT
    AS
    BEGIN TRY
        DECLARE @execution_id BIGINT = 0;

        -- Create a package execution
        EXEC [SSISDB].[catalog].[create_execution]
            @package_name = @PackageName,
            @execution_id = @execution_id OUTPUT,
            @folder_name = @ProjectFolder,
            @project_name = @ProjectName,
            @use32bitruntime = False;

        UPDATE [AUDIT].[PackageInstanceExecutionLog] WITH(ROWLOCK)
        SET [SSISCatalogExecutionID] = @execution_id
        WHERE [PackageInstanceExecutionLogID] = @PackageExecutionLogID

        -- Set the execution to synchronized so that the result can be checked at the end
        EXEC [SSISDB].[catalog].[set_execution_parameter_value]
            @execution_id,
            @object_type = 50,
            @parameter_name = N'SYNCHRONIZED',
            @parameter_value = 1; -- true

        /* Section: setting parameters
           Source table: SSISDB.internal.object_parameters
           object_type list:
               20: project level variables
               30: package level variables
               50: execution parameter */
        EXEC [SSISDB].[catalog].[set_execution_parameter_value]
            @execution_id,
            @object_type = 30,
            @parameter_name = N'FromParent_AuditKey',
            @parameter_value = @AuditKey;

        EXEC [SSISDB].[catalog].[set_execution_parameter_value]
            @execution_id,
            @object_type = 30,
            @parameter_name = N'FromParent_DisableNotification',
            @parameter_value = @DisableNotification;

        EXEC [SSISDB].[catalog].[set_execution_parameter_value]
            @execution_id,
            @object_type = 30,
            @parameter_name = N'FromParent_PackageInstanceExecutionID',
            @parameter_value = @PackageExecutionLogID;
        /* Section: setting variables END */

        /* This section is carried over from the example code;
           I don't see a reason to change it yet */
        -- Set our package parameters
        EXEC [SSISDB].[catalog].[set_execution_parameter_value]
            @execution_id,
            @object_type = 50,
            @parameter_name = N'DUMP_ON_EVENT',
            @parameter_value = 1; -- true

        EXEC [SSISDB].[catalog].[set_execution_parameter_value]
            @execution_id,
            @object_type = 50,
            @parameter_name = N'DUMP_EVENT_CODE',
            @parameter_value = N'0x80040E4D;0x80004005';

        EXEC [SSISDB].[catalog].[set_execution_parameter_value]
            @execution_id,
            @object_type = 50,
            @parameter_name = N'LOGGING_LEVEL',
            @parameter_value = 1; -- Basic

        EXEC [SSISDB].[catalog].[set_execution_parameter_value]
            @execution_id,
            @object_type = 50,
            @parameter_name = N'DUMP_ON_ERROR',
            @parameter_value = 1; -- true

        /* Section: EXECUTING */
        EXEC [SSISDB].[catalog].[start_execution]
            @execution_id;
        /* Section: EXECUTING END */

        /* Section: checking execution result
           Source table: [SSISDB].[catalog].[executions]
           status:
               1: created        2: running      3: cancelled
               4: failed         5: pending      6: ended unexpectedly
               7: succeeded      8: stopping     9: completed */
        IF EXISTS(SELECT TOP 1 1
                  FROM [SSISDB].[catalog].[executions] WITH(NOLOCK)
                  WHERE [execution_id] = @execution_id
                        AND [status] NOT IN (2, 7, 9))
        BEGIN
            /* Section: logging error messages
               Source table: [SSISDB].[internal].[operation_messages]
               message type:
                   10: OnPreValidate    20: OnPostValidate   30: OnPreExecute
                   40: OnPostExecute    60: OnProgress       70: OnInformation
                   90: Diagnostic       110: OnWarning       120: OnError
                   130: Failure         140: DiagnosticEx    200: Custom events
                   400: OnPipeline
               message source type:
                   10: Messages logged by the entry APIs (e.g. T-SQL, CLR stored procedures)
                   20: Messages logged by the external process used to run the package (ISServerExec)
                   30: Messages logged by the package-level objects
                   40: Messages logged by tasks in the control flow
                   50: Messages logged by containers (For, ForEach, Sequence) in the control flow
                   60: Messages logged by the Data Flow Task */
            INSERT INTO AUDIT.PackageInstanceExecutionOperationErrorLink
                SELECT @PackageExecutionLogID
                     , [operation_message_id]
                FROM [SSISDB].[internal].[operation_messages] WITH(NOLOCK)
                WHERE operation_id = @execution_id
                      AND message_type IN (120, 130)

            EXEC [AUDIT].[FailPackageInstanceExecution] @PackageExecutionLogID, 'SSISDB Internal operation_messages found'
            GOTO ReturnTrueAsErrorFlag
            /* Section: checking messages END */

            /* This part is not really working, so now using rowcount to pass status
            --DECLARE @PackageErrorMessage NVARCHAR(4000)
            --SET @PackageErrorMessage = @PackageName + 'failed with executionID: ' + CONVERT(VARCHAR(20), @execution_id)
            --RAISERROR (@PackageErrorMessage -- Message text.
            --     , 18 -- Severity,
            --     , 1 -- State,
            --     , N'check table AUDIT.PackageInstanceExecutionErrorMessages' -- First argument.
            --     );
            */
        END
        ELSE BEGIN
            GOTO ReturnFalseAsErrorFlagToSignalSuccess
        END
        /* Section: checking execution result END */
    END TRY
    BEGIN CATCH
        DECLARE @SSISCatalogCallError NVARCHAR(MAX)
        SELECT @SSISCatalogCallError = ERROR_MESSAGE()

        EXEC [AUDIT].[FailPackageInstanceExecution] @PackageExecutionLogID, @SSISCatalogCallError

        GOTO ReturnTrueAsErrorFlag
    END CATCH;

    /* Section: end result */
    ReturnTrueAsErrorFlag:
        SELECT CONVERT(BIT, 1) AS PackageExecutionErrorExists
    ReturnFalseAsErrorFlagToSignalSuccess:
        SELECT CONVERT(BIT, 0) AS PackageExecutionErrorExists
    GO
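    For reference, a hypothetical invocation of the procedure from the PackageController's "Execute SQL Task" might look like the sketch below; the parameter values are invented for illustration, and the one-row result set (PackageExecutionErrorExists) can be mapped to an SSIS variable to manage failure:

    -- Hypothetical call with illustrative values only
    EXEC [AUDIT].[LaunchPackageExecutionInSSISCatalog]
          @PackageName = N'LoadCustomers.dtsx'
        , @ProjectFolder = N'ETL'
        , @ProjectName = N'DataWarehouse'
        , @AuditKey = 42
        , @DisableNotification = 0
        , @PackageExecutionLogID = 1001;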

    Read the article

  • Large File Upload in SharePoint 2010

    - by Sahil Malik
    Okay, this is a big BIG B-I-G problem. And with SP2010 it's going to be more prominent, because at least on the server side, SharePoint can support large files much, much better than SharePoint 2007 ever did. The issues with very large files being uploaded through any browser-based API are:
    - Reliably transferring gigabyte or bigger files without breakages over a protocol like HTTP, which is better suited for tiny transfers like images and text.
    - Not killing your browser, because it has to load all that in memory.
    - Not killing your web server, because everything you upload through HTTP POST first gets streamed into IIS memory - w3wp.exe memory - before the ENTIRE FILE finishes uploading and is stored. This means:
      - You cannot show an accurate, live progress bar of the upload; IIS gives you no accurate metric of an upload. All the counters it gives you are approximate.
      - Your w3wp.exe eats up all server memory - 4GB of it, for a 4GB upload.
      - A thread is kept busy for the entire duration of the upload, thereby greatly limiting your web server's capability to serve new requests. This kills effective load balancing.
    - Not killing your content database, because as you upload a very large file, that file gets written sequentially into the DB, and therefore a very large file severely impacts database performance.
    I had put together another video showing RBS usage in SharePoint 2010, in which I talked about many practical ramifications of using RBS in SharePoint. Note that enabling large file support will never be a point-and-click job, simply because there are too many questions one needs to ask and too many things one needs to plan for. However, one part that will remain common across all large-file upload scenarios, in SharePoint or outside of SharePoint, is to do it efficiently while not killing the web server. In this video, I describe using the Telerik Silverlight upload control with SharePoint 2010 to enable efficient large file uploads in SharePoint. Presenting... the video.

    Read the article

  • Hello World Pagelet

    - by astemkov
    Introduction
    The goal of this exercise is to give you a basic feel for how you can use Pagelet Producer to proxy a web page. We will proxy a simple static Hello World web page, cut one section out of that page, and present it as a pagelet that you can later insert on your own application page or portal page, such as a WebCenter Portal space or a WebCenter Interaction community page.

    Hello World sample app
    This is the static web page we will work with. Let's assume the following:
    - The Hello World web page is running on server http://appserver.company.com:1234/
    - The Hello World web page path is: http://appserver.company.com:1234/helloworld/

    Initial Pagelet Producer setup
    Let's assume that the Pagelet Producer server is running on http://pageletserver.company.com:8889/pagelets/. First, let's check that Pagelet Producer is up and running. To do that we just need to access the following URL: http://pageletserver.company.com:8889/pagelets/
    Now you can access the Pagelet Producer administration screens using this URL: http://pageletserver.company.com:8889/pagelets/admin
    If you connect to the internet via a proxy server, you need to configure the proxy in the Pagelet Producer settings:
    1. In the Navigator pane: Jump To - Settings
    2. Click on "Proxy"
    3. Enter your proxy server configuration

    Creating a resource
    The first thing you need to do is create a resource for your web page. This tells Pagelet Producer that all sub-paths of the web page should be proxied. It also allows you to set up common rules for how your web page should be proxied, and it serves as a container for your pagelets.
    1. In the Navigator pane: Jump To - Resources
    2. Click on any existing resource (e.g. welcome_resource)
    3. Click on the "Create selected type" toolbar button at the top of the Navigator pane
    4. Select "Web" in the "Select Producer Type" dialog box and click "OK"
    Now that the resource is created, click on the "General" sub-item and specify the following values:
    - Name = AppServer
    - Source URL = http://appserver.company.com:1234/
    - Destination URL = /appserver/
    Click on the "Save" toolbar button at the top of the Navigator pane. After the resource is created, our web page becomes accessible via the URL: http://pageletserver.company.com:8889/pagelets/appserver/helloworld/
    So in the original web page address, the Source URL is replaced with the Pagelet Producer URL (http://pageletserver.company.com:8889/pagelets) + the Destination URL.

    Creating a pagelet
    Now let's create the "Hello World" pagelet.
    1. Under the resource node, activate the Pagelets subnode
    2. Click on the "Create selected type" toolbar button at the top of the Navigator pane
    3. Click on the "General" sub-node of the newly created pagelet and specify the following values:
       - Name = Hello_World
       - Library = MyLib (Library is used for logical grouping. The portals use the "Library" value to group pagelets in their respective UIs. For example, when adding pagelets to a WebCenter Portal space you would see the individual pagelets listed under the "Library" name.)
       - URL Suffix = helloworld/index.html (this is where the Hello World page HTML is served from)
    4. Click on the "Save" toolbar button at the top of the Navigator pane
    The Library name can be anything you want; it doesn't have to match the resource name at all. It is used as a logical grouping of pagelets, and you can include pagelets from multiple resources in the same library or create a new library for each pagelet.
    After you save the pagelet you can access it here: http://pageletserver.company.com:8889/pagelets/inject/v2/pagelet/MyLib/Hello_World
    which is: http://pageletserver.company.com:8889/pagelets/inject/v2/pagelet/ + [Library] + [Name]
    Or, to test the injection of the pagelet into an iframe, you can click on the pagelet's "Documentation" sub-node and use the "Access Pagelet using REST" URL.

    Clipping
    The pagelet that we just created covers the whole web page, but we want just the "Hello World" segment of it. So let's clip it.
    1. Under the Hello_World pagelet node, activate the Clipper sub-node
    2. Click on the "Create selected type" toolbar button at the top of the Navigator pane
    3. Specify a name for the newly created clipper, for example "c1"
    4. Click on the "Content" sub-node of the clipper
    5. Click on the "Launch Clipper" button - a new browser window will open
    6. By moving the mouse pointer over the web page, select the area you want to clip
    7. Click the left mouse button - the browser window will disappear and you will see that the Clipping Path was automatically generated
    Now let's save and access the link from the "Documentation" page again. Here's our pagelet, nicely clipped and ready to be used on your WebCenter space.

    Read the article

  • SQL SERVER – DQS Error – Cannot connect to server – A .NET Framework error occurred during execution of user-defined routine or aggregate “SetDataQualitySessions” – SetDataQualitySessionPhaseTwo

    - by pinaldave
    Earlier I wrote a blog post about how to install DQS in SQL Server 2012. Today I decided to write the second part of this series, where I explain how to use DQS; however, as soon as I started the DQS client, I encountered an error that would not let me pass through and connect with the DQS client. It was a bit strange to me, as everything was functioning very well when I left it last time. The error was very long, but here are the first few words of it:

    Cannot connect to server. A .NET Framework error occurred during execution of user-defined routine or aggregate "SetDataQualitySessions": System.Data.SqlClient.SqlException (0x80131904): A .NET Framework error occurred during execution of user-defined routine or aggregate "SetDataQualitySessionPhaseTwo":

    The error continues – here is a quick screenshot of it. As my initial attempts could not fix the error, I decided to search online, and I finally found a wonderful solution on the Microsoft site. The error happened due to the latest update I had installed for .NET Framework 4. There was a mismatch between the Module Version IDs (MVIDs) of the SQL Common Language Runtime (SQLCLR) assemblies in the SQL Server 2012 database and the Global Assembly Cache (GAC). This mismatch had to be resolved for DQS to work properly. The workaround is specified in detail here; scroll to subtopic 4.23, "Some .NET Framework 4 Updates Might Cause DQS to Fail". The script is very straightforward. Here are a few things not to miss while applying the workaround:
    - Make sure the DQS client is properly closed.
    - The NETAssemblies path is based on your OS. For a 64-bit machine – which is my machine – it is "c:\windows\Microsoft.NET\Framework64\v4.0.30319". If Windows is installed on a drive other than c:, do not forget to change that in the above path. If you have the 32-bit version installed in c:\windows, use "c:\windows\Microsoft.NET\Framework\v4.0.30319" instead.
    - Make sure you execute the script specified in section 4.23 of the article in the database DQS_MAIN. Do not run it in the master database, as that will not fix the error.
    - Do not forget to restart your SQL services once the script has been executed.
    Once you open the client again, it will work. Here is the script, which I have modified a bit from the original. I strongly suggest that you use the original script mentioned in section 4.23; this one is customized to my own machine.

    /* Original source: http://bit.ly/PXX4NE (Technet)
       Modifications:
       -- Added database context
       -- Added environment variable @NETAssemblies
       -- Main script modified to use @NETAssemblies */
    USE DQS_MAIN
    GO
    BEGIN
        -- Set your environment variable
        -- assumption - Windows is installed in the c:\windows folder
        DECLARE @NETAssemblies NVARCHAR(200)
        -- For 64 bit uncomment the following line
        SET @NETAssemblies = 'c:\windows\Microsoft.NET\Framework64\v4.0.30319\'
        -- For 32 bit uncomment the following line
        -- SET @NETAssemblies = 'c:\windows\Microsoft.NET\Framework\v4.0.30319\'

        DECLARE @AssemblyName NVARCHAR(200), @RefreshCmd NVARCHAR(200), @ErrMsg NVARCHAR(200)

        DECLARE ASSEMBLY_CURSOR CURSOR FOR
            SELECT name AS NAME
            FROM sys.assemblies
            WHERE name NOT LIKE '%ssdqs%'
              AND name NOT LIKE '%microsoft.sqlserver.types%'
              AND name NOT LIKE '%practices%'
              AND name NOT LIKE '%office%'
              AND name NOT LIKE '%stdole%'
              AND name NOT LIKE '%Microsoft.Vbe.Interop%'

        OPEN ASSEMBLY_CURSOR
        FETCH NEXT FROM ASSEMBLY_CURSOR INTO @AssemblyName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            BEGIN TRY
                SET @RefreshCmd = 'ALTER ASSEMBLY [' + @AssemblyName + '] FROM ''' + @NETAssemblies + @AssemblyName + '.dll' + ''' WITH PERMISSION_SET = UNSAFE'
                EXEC sp_executesql @RefreshCmd
                PRINT 'Successfully upgraded assembly ''' + @AssemblyName + ''''
            END TRY
            BEGIN CATCH
                IF ERROR_NUMBER() != 6285
                BEGIN
                    SET @ErrMsg = ERROR_MESSAGE()
                    PRINT 'Failed refreshing assembly ' + @AssemblyName + '. Error message: ' + @ErrMsg
                END
            END CATCH
            FETCH NEXT FROM ASSEMBLY_CURSOR INTO @AssemblyName
        END
        CLOSE ASSEMBLY_CURSOR
        DEALLOCATE ASSEMBLY_CURSOR
    END
    GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
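    Before applying the workaround, you can preview which assemblies the cursor will attempt to refresh – a quick sketch that simply reuses the script's own filter:

    -- Assemblies the refresh cursor will iterate over
    USE DQS_MAIN
    GO
    SELECT name, clr_name
    FROM sys.assemblies
    WHERE name NOT LIKE '%ssdqs%'
      AND name NOT LIKE '%microsoft.sqlserver.types%'
      AND name NOT LIKE '%practices%'
      AND name NOT LIKE '%office%'
      AND name NOT LIKE '%stdole%'
      AND name NOT LIKE '%Microsoft.Vbe.Interop%'
    GO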

    Read the article

  • Visual Studio 2010 SP1

    - by ScottGu
    Last week we shipped Service Pack 1 of Visual Studio 2010 and the Visual Studio Express tools. In addition to bug fixes and performance improvements, SP1 includes a number of feature enhancements. These include improved local help support, IntelliTrace support for 64-bit applications and SharePoint, built-in Silverlight 4 tooling support, unit testing support when targeting .NET 3.5, a new performance wizard for Silverlight, IIS Express and SQL CE tooling support for web projects, HTML5 IntelliSense for ASP.NET, and more. TFS 2010 SP1 was also released last week, together with a new TFS Project Server Integration Pack and Load Test Feature Pack. Brian Harry has a good blog post about the TFS updates here.

    VS 2010 SP1 Download
    Click here to download and install SP1 for all versions of Visual Studio (including Express). This installer examines what you have installed on your machine, and only downloads the servicing downloads necessary to update them to SP1. The time it takes to download and update will consequently depend on what you have installed. Jon Galloway has a good blog post on tips to speed up the SP1 install by uninstalling unused components.

    Web Platform Installer Bundles
    In addition to the core VS 2010 SP1 installer, we have also put together two Web Platform Installer (WebPI) bundles that automate installing SP1 together with additional web-specific components:
    - VS 2010 SP1 WebPI Bundle
    - Visual Web Developer 2010 SP1 WebPI Bundle
    The above WebPI bundles automate installing:
    - VS 2010 / VWD 2010 SP1
    - ASP.NET MVC 3 (runtime + tools support)
    - IIS 7.5 Express
    - SQL Server Compact Edition 4.0 (runtime + tools support)
    - Web Deployment 2.0
    Only the components that are not already installed on your machine will be downloaded when you use the above WebPI bundles. This means that you can run the WebPI bundle at any time (even if you have already installed SP1 or ASP.NET MVC 3) and not have to worry about wasting time downloading/installing these components again. Earlier this year I did two posts that discussed how to use IIS Express and SQL CE with ASP.NET projects in SP1. Read the below posts to learn more about how to use them after you run the above bundles:
    - Visual Studio 2010 SP1 and IIS Express
    - Visual Studio 2010 SP1 and SQL CE for ASP.NET
    The above feature additions work with any web project type – including both ASP.NET Web Forms and ASP.NET MVC.

    Additional SP1 Notes
    Two additional notes about VS 2010 SP1:
    1) One change we made between RTM and SP1 is that by default Visual Studio now uses software rendering instead of hardware acceleration when running on Windows XP. We made this change because we've seen reports of (often inconsistent) performance issues caused by older video drivers. Running in software mode eliminates these and delivers consistent speeds. You can optionally re-enable hardware acceleration with SP1 using Visual Studio's Tools->Options menu command – we did not remove support for hardware acceleration on XP, we simply changed the default setting for it. Jason Zander has written more details on the change and how to re-enable hardware acceleration inside VS here.
    2) We have discovered an issue where installing SP1 can cause T-SQL IntelliSense within SQL Server Management Studio 2008 R2 to stop working (typing still works – but IntelliSense doesn't show up). The SQL team is investigating this now, and I'll post an update on how to fix this once more details are known.

    Hope this helps,
    Scott

    P.S. I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • BYOD is not a fashion statement; it’s an architectural shift - by Indus Khaitan

    - by Greg Jensen
Ten years ago, if you asked a CIO, "How mobile is your enterprise?", the answer would be, "100% – we give a BlackBerry to all our employees."
A few things have changed since then:
1.    Smartphone form-factors have matured, especially after the launch of the iPhone.
2.    Productivity applications and services that enable creation and consumption of digital content have grown rapidly.
3.    Mobile data connectivity has become pervasive.
Two threads emerge from this change. Users are rapidly mingling their personas as individuals and as employees – one moment posting a picture of a fancy dinner on Facebook, the next creating an expense report for the same meal on the same mobile device. Whatever the persona, a user's personal and corporate lives intermingle freely on a single piece of hardware, and more often than not it is an employee's personal smartphone being used for everything.
A BYOD program enables IT to "control" an employee-owned device while enabling productivity. More often than not the objective of BYOD programs is financial: the employee, rather than the organization, pays for the device. More than a fancy device, BYOD initiatives have become a sort of fashion statement – of corporate productivity, of letting employees be in charge, and a show of corporate empathy that declines to force an archaic form-factor on users in a world of new device launches every month.
But BYOD is no longer just a means of shifting expense dollars and support costs. It does not matter who owns the device; it has to be protected. BYOD brings an architectural shift. BYOD is an architecture which assumes that every device is vulnerable – not just what your employees have brought, but also what organizations have purchased for their employees. It is an architecture which forces us to rethink how to provide productivity without compromising security.
Why assume that every device is vulnerable? Mobile operating systems are evolving rapidly, with major upgrade announcements every other month; it is impossible for IT to catch up. Beyond that, users are savvier than before: while IT could install locks at the doors to prevent intruders, doing so may degrade productivity – which incentivizes users to bypass restrictions. A rapidly evolving mobile ecosystem has moving parts which are vulnerable. Hence, creating a mobile security platform that uses the fundamental blocks of BYOD architecture – identity defragmentation, IT control, and data isolation – ensures that the sprawl of corporate data is contained. In the next post, we'll dig deeper into the BYOD architecture.

    Read the article

  • Silverlight Cream for March 23, 2010 -- #818

    - by Dave Campbell
In this Issue: Max Paulousky, Jeremy Likness, Mark Tucker, Christian Schormann, Page Brooks, Brad Abrams(-2-), Jeff Wilcox, Unnir, Bea Stollnitz, John Papa and Adam Kinney, and Bill Reiss(-2-).
Shoutouts: Ashish Shetty posted his material from his MIX10 presentation: Stepping outside the browser with Silverlight 4. Not Silverlight, but dang useful: Karl Shifflett posted a Visual Studio 2010 XAML Editor IntelliSense Presenter Extension. Yavor Georgiev posted his MIX10 material: Two samples from today's MIX talk.
From SilverlightCream.com:
GroupBox Sketching Control for WPF applications Using Blend – Max Paulousky creates a GroupBox control for SketchFlow for WPF, and includes a link to an example of doing the same for Silverlight.
Sequential Asynchronous Workflows in Silverlight using Coroutines – Jeremy Likness' latest post began with a thread on the Silverlight.net forum and Rob Eisenberg's MVVM presentation from MIX10, resulting in the use of Wintellect's PowerThreading library (downloadable) and coroutines.
Windows Phone 7 UI Templates – Mark Tucker has been putting a lot of thought into WP7 apps and produced 5 templates for building apps, downloadable in PowerPoint format. He's also looking to discuss this concept.
Blend 4: About Path Layout, Part I – Christian Schormann has a great tutorial up about Expression Blend 4 and path layout ... this is lots of great info, and it's only part 1!
Custom Splash Screen for Windows Phone – Page Brooks makes very quick work of showing how to add a splash screen to your WP7 app... very nice, Page!
Silverlight 4 + RIA Services - Ready for Business: Exposing Data from Entity Framework – Brad Abrams' next post in the series is on pulling your data from wherever it lives, using a DomainService to shape it for your Silverlight app.
Silverlight 4 + RIA Services - Ready for Business: Consuming Data in the Silverlight Client – Brad Abrams then discusses consuming that data in a Silverlight app. Not much code involvement at all... great ROI :)
Building Silverlight 3 and Silverlight 4 applications on a .NET 3.5 build machine – Jeff Wilcox talks about building both Silverlight 3 and Silverlight 4 on a .NET 3.5 machine. He then adds in the Toolkit, and even WCF RIA Services.
Expression Blend 4 - XAML generation tweaks – Unnir demonstrates a few changes to Expression Blend 4 that produce more compact XAML. He's also asking for other examples you'd like to see tightened up.
How can I sort a hierarchy? – Bea Stollnitz posts plausible solutions to sorting data items at each level of a hierarchical UI, with descriptions of why they don't work, followed by the real deal... Silverlight and WPF.
Silverlight Training Course (Silverlight 4) – John Papa and Adam Kinney have posted a huge body of work to get us up to speed on Silverlight 4 – a whitepaper, hands-on labs, and an 8-unit course with 25 accompanying videos... geez...
Silverlight game development on Windows Phone 7 – Bill Reiss has a post up discussing game development on WP7 in general, then discusses his SilverSprite library, with a link to it.
XNA or Silverlight for Windows Phone 7 game development? – Bill Reiss next discusses the advantages of using Silverlight or XNA for your WP7 game development – and who better to discuss both?
Stay in the 'Light!

    Read the article

  • ASP.NET MVC, Web API, Razor and Open Source

    - by ScottGu
Microsoft has made the source code of ASP.NET MVC available under an open source license since the first V1 release. We've also integrated a number of great open source technologies into the product, and now ship jQuery, jQuery UI, jQuery Mobile, jQuery Validation, Modernizr.js, NuGet, Knockout.js and JSON.NET as part of it.
I'm very excited to announce today that we will also release the source code for ASP.NET Web API and ASP.NET Web Pages (aka Razor) under an open source license (Apache 2.0), and that we will increase the development transparency of all three projects by hosting their code repositories on CodePlex (using the new Git support announced last week). Doing so will enable a more open development model, where everyone in the community will be able to engage and provide feedback on code check-ins, bug fixes, and new feature development, and build and test the products on a daily basis using the most up-to-date version of the source code and tests. We will also, for the first time, allow developers outside of Microsoft to submit patches and code contributions that the Microsoft development team will review for potential inclusion in the products. We announced a similar open development approach with the Windows Azure SDK last December, and have found it to be a great way to build an even tighter feedback loop with developers – and ultimately deliver even better products as a result.
Very importantly, ASP.NET MVC, Web API and Razor will continue to be fully supported Microsoft products that ship both standalone and as part of Visual Studio (the same as they do today). They will also continue to be staffed by the same Microsoft developers that build them today (in fact, we have more Microsoft developers working on the ASP.NET team now than ever before). Our goal with today's announcement is to increase the feedback loop on the products even more, and allow us to deliver even better products. We are really excited about the improvements this will bring.
Learn More
You can now browse, sync and build the source tree of ASP.NET MVC, Web API, and Razor on the http://aspnetwebstack.codeplex.com web site. The Git repository on the site is the live RC milestone development tree that the team has been working on for the last several weeks; the tree contains both the runtime sources and tests, and is buildable and testable by anyone. Because the binaries produced are bin-deployable, this allows you to compile your own builds and try product updates out as soon as they are checked in. You can also now contribute directly to the development of the products by reviewing and sending feedback on code check-ins, submitting bugs and helping us verify fixes as they are checked in, suggesting and giving feedback on new features as they are implemented, and submitting code fixes or code contributions of your own. Note that all code submissions will be rigorously reviewed and tested by the ASP.NET MVC team, and only those that meet an extremely high bar for both quality and design/roadmap appropriateness will be merged into the source.
Summary
All of us on the team are really excited about today's announcement – it is something we've been working toward for many years. The tighter feedback loop is going to enable us to build even better products, and take ASP.NET to the next level in terms of innovation and customer focus.
Thanks,
Scott
P.S. In addition to blogging, I use Twitter to do quick posts and share links. My Twitter handle is: @scottgu

    Read the article

  • Read Mobi eBooks on Kindle for PC

    - by Matthew Guay
Do you use your PC as an eBook reader?  Kindle for PC makes it easy to read thousands of books from the Kindle Store on your computer. What you may not know is that it also works with the .mobi format, so you can increase the number of books you can read.
Amazon has jumpstarted the eBook market with its popular Kindle device.  Last fall Amazon unveiled Kindle for PC, and we reviewed how you can Read Kindle Books On Your Computer with Kindle for PC.  Whether or not you own a Kindle or other eBook reader, this is a great way to take advantage of the thousands of eBooks available from the Kindle Store today. It supports the azw, prc, and tpz formats sold in the Kindle Store, but it also supports Mobipocket (.mobi) eBooks that are not DRM protected.  Here's how you can add them to Kindle for PC so you can easily read them on your PC.
Getting Started
First, make sure you have Kindle for PC (link below) installed on your computer. Sign in with your Amazon account when you first run it. Kindle for PC lets you easily read eBooks downloaded from the Kindle Store, but it doesn't have any way to add other eBooks directly from the program. To add eBooks, you can sometimes download and double-click on the books, and they will open in Kindle for PC and be automatically added to the library.  However, this does not always seem to work.
So instead, browse to your Documents folder (simply click on the Documents link on your Start menu), and double-click on the My Kindle Content folder. This folder contains all the Kindle books you have downloaded.  If you have other eBooks you would like to add to Kindle for PC, simply drag-and-drop or copy and paste them into this folder.  Here we have a .mobi-formatted book downloaded from Project Gutenberg that we're dragging into the folder.
Now, close and reopen Kindle for PC.  It should now show your new eBook right beside the eBooks you have downloaded from the Kindle Store. These eBooks work just the same as the ones downloaded from the Kindle Store, and you can change font size and add bookmarks just as with other eBooks. The eBooks downloaded this way may show up with either an Amazon logo or a mobile device icon.  You should only see the mobile device icon on .mobi files formatted for mobile devices; other ones should show up with the Amazon logo.  In this screen, Pilgrim's Progress is a standard .mobi book, The Adventures of Sherlock Holmes is a Mobipocket book, and the others are downloaded from the Kindle Store.
Conclusion
This is a great way to read eBooks from across the internet on Kindle for PC.  Wikipedia's Kindle page has a list of websites that offer eBooks formatted for the Kindle, so be sure to check it out for more books.
Links: Download Kindle for PC | List of websites that offer eBooks that will work on Kindle – via Wikipedia

    Read the article

  • Closer look at the SOA 12c Feature: Oracle Managed File Transfer

    - by Tshepo Madigage-Oracle
The rapid growth of cloud-based applications in the enterprise, combined with organizations' desire to integrate applications with mobile technologies, is dramatically increasing application integration complexity. To meet this challenge, Oracle introduced Oracle SOA Suite 12c, the latest version of the industry's most complete and unified application integration and SOA solution. With simplified cloud, mobile, on-premises, and Internet of Things (IoT) integration capabilities, all within a single platform, Oracle SOA Suite 12c helps organizations speed time to integration, improve productivity, and lower TCO.
To extend its B2B solution capabilities with Oracle SOA Suite 12c, Oracle unveiled Oracle Managed File Transfer, an integrated solution that enables organizations to virtually eliminate file transfer complexities. This allows customers to load data securely into Oracle Cloud applications as well as third-party cloud or partner applications.
Oracle Managed File Transfer (Oracle MFT) enables secure file exchange and management with internal departments and external partners. It protects against inadvertent access to unsecured files at every step in the end-to-end transfer of files. It is easy to use, especially for non-technical staff, so you can free up more resources to manage the transfer of files. The extensive reporting capabilities allow you to get the status of a file transfer quickly and resubmit it as required. You can protect data in your DMZ by using the SSH/FTP reverse proxy.
Oracle Managed File Transfer can help integrate applications by transferring files between them in complex use-case patterns:
- Standalone: transferring files on its own, using embedded FTP and sFTP servers and the file systems to which it has access.
- SOA integration: a SOA application can be the source or target of a transfer, or the common endpoint for the target of one transfer and the source of another.
- B2B integration: a B2B application can be the source or target of a transfer, or the common endpoint for the target of one transfer and the source of another.
- Healthcare integration: a Healthcare application can be the source or target of a transfer, or the common endpoint for the target of one transfer and the source of another.
- Oracle Service Bus (OSB) integration: Oracle MFT can integrate with Oracle Service Bus web service interfaces. An OSB interface can be the source or target of a transfer, or the common endpoint for the target of one transfer and the source of another.
- Hybrid integration: Oracle MFT can be one participant in a web of data transfers that includes multiple application types.
Oracle Managed File Transfer has four user roles: file handlers, designers, monitors, and administrators.
File handlers:
- Copy files to file transfer staging areas, which are called sources.
- Retrieve files from file transfer destinations, which are called targets.
Designers:
- Create, read, update and delete file transfer sources.
- Create, read, update and delete file transfer targets.
- Create, read, update and delete transfers, which link sources and targets in complete file delivery flows.
- Deploy and test transfers.
Monitors:
- Use the Dashboard and reports to ensure that transfer instances are successful.
- Pause and resume lengthy transfers.
- Troubleshoot errors and resubmit transfers.
- View artifact deployment details and history.
- View artifact dependence relationships.
- Enable and disable sources, targets, and transfers.
- Undeploy sources, targets, and transfers.
- Start and stop embedded FTP and sFTP servers.
Administrators:
- All file handler tasks
- All designer tasks
- All monitor tasks
- Add other users and determine their roles
- Configure user directory permissions
- Configure the Oracle Managed File Transfer server
- Configure embedded FTP and sFTP servers, including security
- Configure B2B and Healthcare domains
- Back up and restore the Oracle Managed File Transfer configuration
- Purge transferred files and instance data
- Archive and restore instance data and payloads
- Import and export metadata
You will find all the related information about Oracle Managed File Transfer (OMT) in the SOA 12.1.3 documentation: Using Oracle Managed File Transfer.
Resources and links: Oracle Unveils Oracle SOA Suite 12c | Oracle Managed File Transfer | Oracle Managed File Transfer SOA 12c White Paper
For further enquiries don't hesitate to contact us at [email protected] and join our Partner Webcast on Oracle SOA Suite 12c.

    Read the article

  • iPad Impressions

    - by Aaron Lazenby
So, I spent some quality time with my new iPad on Saturday. Here are the things I like and don't like:
-- Don't like that it has to sync with iTunes before you use it: I was traveling and left my laptop at home, thinking I'd use this iPad thing instead. But the first thing it asked me to do was connect it to a laptop. Ugh. I had to borrow my mother-in-law's MacBook Pro just to get the iPad rolling.
-- Like that magazines and newspapers are forever changed: and I think for the better... it's why I bought this thing in the first place. I spent significant time with The New York Times, The Wall Street Journal, Time Magazine and Popular Science on the iPad. Sliding stories around, jumping from section to section, enlarging images = all excellent experiences. I actually prefer the iPad magazine to print, which will require a major shift in editorial strategy, summed up by Popular Science's Mark Jannot in his editor's note: "What defines a magazine? Curated expertise--not paper."
-- Don't like the screwy human factors: I actually enjoy the virtual keyboard (although I think I'm in the minority), but you have to hunch over to look down at what you're typing. Bad technology ergonomics have already jacked my body in various ways; the iPad just introduced a new one.
-- Like the multitouch: in fact, it's awesome. Hands down. It will probably have the most lasting impact on the personal computing industry as a whole.
-- Don't like that it's heavy: if you plan to read in bed, you'd better double up on the creatine and curls. Holding this thing up on your own gets pretty uncomfortable.
-- Like the Netflix app: I wanted to watch "The Big Lebowski," so I did. That is all.
-- Don't like that people feel 3G is necessary: for $30 a month? Please. I'm already accustomed to limiting my laptop internet use to readily available free wi-fi. Why would I expect anything different with the iPad? Most any place I have time to sit and read/use a computer (cafe, airport, your house, library, etc.) has free wi-fi. I can live without web surfing in your car. That's what the iPhone is for.
-- Don't like that not everyone was ready on day one: I'm looking at you, Facebook. No iPad app for launch? Lame. iPhone apps scaled up to work on the iPad look grainy and cheap – not a quality befitting this beautiful $700 piece of glass.
Verdict: I'm bringing it to COLLABORATE 10 and seeing if I can go the whole week using only the iPad. If I can trade this thing for my laptop, I know it's a winner. For now, I'm enjoying Popular Science.

    Read the article

  • Silverlight Cream for December 12, 2010 -- #1008

    - by Dave Campbell
In this Issue: Michael Washington, Samuel Jack, Alfred Astort(-2-), Nokola(-2-), Avi Pilosof, Chris Klug, Pete Brown, Laurent Bugnion(-2-), and Jaime Rodriguez(-2-, -3-).
Above the Fold:
Silverlight: "Sharing resources and styles between projects in Silverlight" – Chris Klug
WP7: "Windows Phone Application Performance at Silverlight Firestarter" – Jaime Rodriguez
Training: "Silverlight View Model (MVVM) - A Play In One Act" – Michael Washington
Shoutouts: Koen Zwikstra announced the availability of the first Silverlight Spy 4 Preview 1. Gavin Wignall announced the launch of a festive game built with Silverlight 4, hosted on Azure... free to play.
From SilverlightCream.com:
Silverlight View Model (MVVM) - A Play In One Act – Michael Washington has an interesting take on writing a blog post with this 'play' version of Silverlight View Models and Expression Blend, with a heaping dose of Behaviors added in for flavoring.
Build a Windows Phone Game in 3 days – Day 1 – Samuel Jack is attempting to build a WP7 game in 3 days, including downloading the tools and an XNA book... interesting to see where he's headed with this venture.
4 of 10 - Make sure your finger can hit the target and text is legible – continuing a series of tips from the folks reviewing apps for the marketplace via Alfred Astort is this number 4, on touch target size and legible text.
5 of 10 - Give feedback on touch and progress within your UI – Alfred Astort's number 5 is also up, and continues the touch discussion with this tip about giving the user feedback on their touch.
Fantasia Painter Released for Windows Phone 7 + Tips – Nokola took the release of his Fantasia Painter on WP7 as an opportunity not only to blog about the fact that we can go buy it, but to share hints and tips that he gathered while working on it.
Games for Windows Phone 7 Resources: Reducing Load Times, RPG Kit; Other – Nokola also blogged about the release of the new games education pack, and gives up the cursor he uses in his videos after being asked...
The simplest way to do design-time ViewModels with MVVM and Blend – Avi Pilosof attacks the design-time ViewModel issue in Blend with a 'no code' solution.
Sharing resources and styles between projects in Silverlight – Chris Klug is talking about sharing resources and styles across a large Silverlight project... near and dear to my heart at this moment.
Dynamically Generating Controls in WPF and Silverlight – Pete Brown has a post up that's generated some interest... creating controls at runtime... and he's demonstrating several different ways for both Silverlight and WPF.
#twitter for Windows Phone 7 protips (#wp7) – Laurent Bugnion was posting these great tips for Twitter for WP7 and rolled all 16 of them up into a blog post... check them and the app out...
Increasing touch surface (#wp7dev) – Laurent Bugnion's most current post should be of great interest to WP7 devs... providing more touch surface for your users' fat fingers, err, I mean their fat fingerings :) ... great information and samples... and interestingly, it is a fail point as listed by Alfred Astort above.
Windows Phone Application Performance at Silverlight Firestarter – this material from Jaime Rodriguez actually hit prior to his Firestarter presentation, but should be required reading for anyone doing a WP7 app... great performance tips from the trenches... slide deck, cheat sheet, and code.
UpdateSourceTrigger on Windows Phone data bindings – another post from Jaime Rodriguez that has already gone through a couple of revisions... how about a WP7 TextBox that fires notifications to the ViewModel when the text changes? ... would you like a behavior with that?
Details on the Push Notification app limits – Jaime Rodriguez has yet another required-reading post up on Push Notification limits... what they really entail and how you can be a good WP7 citizen in the way you program your app.
Stay in the 'Light!

    Read the article

  • Wipe, Delete, and Securely Destroy Your Hard Drive’s Data the Easy Way

    - by The Geek
Giving a computer to somebody else? Maybe you're putting it out on Craigslist to sell to a stranger. Either way, you'll want to make sure that your drive is completely wiped, scrubbed, and clean of any personal data. Here's the easy way to do it.
If you only have access to an Ubuntu Live CD or thumb drive, you can actually use that instead if you prefer, and we've got you covered with a full guide to securely wiping your PC's hard drive. Otherwise, keep reading.
Wipe the Drive with DBAN
Darik's Boot and Nuke CD is the easiest way to permanently and totally destroy every bit of personal information on a drive – nobody is going to recover a thing once this is done. The first thing you'll need to do is download a copy of the ISO image, and then burn it to a blank CD with something really useful like ImgBurn. Just choose Burn image to Disc at the start screen, select the little file icon, grab the downloaded ISO, and then go. If you need a little more help, we've got you covered with a beginner's guide to burning an ISO image.
Once you're done, stick the disc into the drive, start the PC up, and once you boot to the DBAN prompt you'll see a menu. You can pretty much ignore everything on here, and just type:
autonuke
And there you are: your disk is now being securely wiped. Once it's all done, you can remove the CD, and then either pack the PC up to sell, or re-install Windows on there if you feel like it.
More Advanced Method
If you're really paranoid, want to run a different type of wipe, or just like fiddling with the options, you can choose F3 or hit Enter at the prompt to head to the advanced selection screen. Here you can choose exactly which drive to wipe, or hit the M key to change the method. You'll be able to choose between a bunch of different wipe options, though the Quick Erase is all you really need.
So there you are: easy PC wiping in one package. What about you? Do you make sure to wipe your old PCs before giving them away? Personally I've always just yanked out the hard drives before I got rid of an old PC, but that's just me.
Download DBAN from dban.org

    Read the article

  • SQL SERVER – How to Ignore Columnstore Index Usage in Query

    - by pinaldave
Earlier I wrote about SQL SERVER – Fundamentals of Columnstore Index, and the very first question I received in email was the following:
"We are using SQL Server 2012 CTP3 and so far so good. In our data warehouse solution we have created 1 non-clustered columnstore index on our large fact table. We have a very unique situation, but your article did not cover it. We are running a few queries on our fact table which are working very efficiently, but there is one query which was running fine earlier, and after creating this non-clustered columnstore index it is running very slow. We dropped the columnstore index and suddenly this one query runs fast, but the other queries which benefited from the columnstore index now run slow. Any workaround in this situation?"
In summary, the question in simple words is: "How can we ignore the columnstore index in selective queries?"
Very interesting question – I can understand that there may be cases when a columnstore index is not ideal and needs to be ignored. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to ignore the columnstore index; the SQL Server engine will then use whichever other index is best. Here is a quick script to prove the same. We will first create a sample table and then create a columnstore index on it. Once the columnstore index is created, we will write a simple query that uses it, and then show the usage of the query hint.
USE AdventureWorks
GO
-- Create new table
CREATE TABLE [dbo].[MySalesOrderDetail](
  [SalesOrderID] [int] NOT NULL,
  [SalesOrderDetailID] [int] NOT NULL,
  [CarrierTrackingNumber] [nvarchar](25) NULL,
  [OrderQty] [smallint] NOT NULL,
  [ProductID] [int] NOT NULL,
  [SpecialOfferID] [int] NOT NULL,
  [UnitPrice] [money] NOT NULL,
  [UnitPriceDiscount] [money] NOT NULL,
  [LineTotal] [numeric](38, 6) NOT NULL,
  [rowguid] [uniqueidentifier] NOT NULL,
  [ModifiedDate] [datetime] NOT NULL
) ON [PRIMARY]
GO
-- Create clustered index
CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
( [SalesOrderDetailID] )
GO
-- Create sample data
-- WARNING: This query may run up to 2-10 minutes based on your system's resources
INSERT INTO [dbo].[MySalesOrderDetail]
SELECT S1.*
FROM Sales.SalesOrderDetail S1
GO 100
-- Create columnstore index
CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
GO
Now that we have created the columnstore index, the following query will use it for sure.
-- Select from the table; this uses the columnstore index
SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
  SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
FROM [dbo].[MySalesOrderDetail]
GROUP BY ProductID
ORDER BY ProductID
GO
We can specify the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX as in the following query, and it will not use the columnstore index.
-- Select from the table, ignoring the columnstore index
SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
  SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
FROM [dbo].[MySalesOrderDetail]
GROUP BY ProductID
ORDER BY ProductID
OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX)
GO
Let us clean up the database.
-- Cleanup
DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
GO
TRUNCATE TABLE dbo.MySalesOrderDetail
GO
DROP TABLE dbo.MySalesOrderDetail
GO
Again, make sure that you use this hint sparingly and understand its proper implications.
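If you want to confirm which index each run actually used, here is a minimal sketch (not part of the original script, and assuming the sample table and index names from the demo above, run in the same database): it reads SQL Server's index usage DMV, so you can compare the counters after executing the query with and without the hint.
-- A quick check: inspect cumulative index usage for the sample table.
-- Counters accumulate since the last service restart; run the queries
-- above first, then compare user_seeks/user_scans per index.
SELECT i.name AS index_name,
  i.type_desc,                          -- e.g. NONCLUSTERED COLUMNSTORE
  s.user_seeks, s.user_scans, s.user_lookups
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
  ON s.object_id = i.object_id
  AND s.index_id = i.index_id
  AND s.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('dbo.MySalesOrderDetail')
GO
The un-hinted query should increment the scan counter for IX_MySalesOrderDetail_ColumnStore, while the hinted query should leave it untouched and increment the counters on the clustered index instead.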
Make sure that you test it with and without the hint, and select the best option after a review with your administrator.
Here is the question for you – have you started to use SQL Server 2012 for your validation and development (not on production)? It will be interesting to know the answer.
Reference: Pinal Dave (http://blog.SQLAuthority.com)
Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Watch Netflix Instant Movies in Boxee

    - by DigitalGeekery
Boxee is a multi-platform media PC application with a host of media apps, one of which is the popular movie service Netflix. Today we'll show you how to get set up to watch Netflix Instant streaming video in Boxee.
Note: Netflix requires Microsoft Silverlight, which unfortunately means Boxee users running Linux are out of luck.
What You'll Need
A Netflix account
Authorize your Netflix account with Boxee
Install Microsoft Silverlight
Authorize Your Netflix Account
First, we need to authorize our Netflix account with Boxee (see link below). Type in your Boxee username and password and click "Login."  When prompted, click "Authorize," then click "Yes, Link This Account."
Install Silverlight
If you don't already have Silverlight installed, you'll need to do so. See the download link at the end of the article.
Log into Boxee
Now we're ready to log into Boxee. Once logged in, click on "Apps" on the Home screen. From the My Apps screen click on Netflix, then click "Start," and click "Yes" to enable the cookie. Now you'll enter the Netflix app. From here, you can browse your Instant Queue, Recommendations, and New Arrivals, browse by genre, or search for available titles. Click on a selection you'd like to watch. From here, you can play or rate the title, or even add it to your regular Netflix queue. With a remote or the on-screen controls you can pause, stop, play, and skip forward or back through the video.
Now you're all set to enjoy the Netflix Instant library with Boxee. Netflix Instant is one of many great apps included with Boxee. While the currently available selection isn't exactly overwhelming, most subscribers will likely find enough to keep themselves entertained in between DVD deliveries. Haven't tried Boxee yet? Check out our article on getting started with Boxee.
Links
Authorize your Netflix account with Boxee
Install Microsoft Silverlight

    Read the article

  • Adobe Photoshop CS5 vs Photoshop CS5 extended

    - by Edward
Adobe Photoshop has long been an industry standard for web designers and photographers worldwide. Photoshop CS5 has made photo editing much more refined, and the composition process has become easier than ever before. To weigh the advantages of Photoshop CS5 Extended over Photoshop CS5, we have written this comparison article from both a designer's and a photographer's perspective. Hopefully it will help you in your buying/upgrade decision.
Photoshop CS5
Photoshop CS5 pairs refined features with powerful photography tools. It makes the editing process easier, as fewer steps are needed to remove noise, add grain, create vignettes, correct lens distortions, sharpen, and create HDR images. It offers quick image correction and color and tone control for professional work. Intelligent image editing and enhancement and extraordinarily advanced compositing make it a better tool for photographers than earlier versions. It accelerates workflow with fast performance on 64-bit Windows® and Mac hardware, and delivers smoother interactions thanks to more GPU-accelerated features. It also boasts state-of-the-art raw processing with Adobe Photoshop Camera Raw 6, helping to maximize creative impact. It provides tremendous precision and freedom: users can easily select intricate image elements, such as hair, create realistic painting effects, or remove any image element and see the space fill in almost magically. It offers easy access to core editing, a streamlined workflow, a flexible work environment, and creative tools and content.
Photoshop CS5 Extended
Photoshop CS5 Extended is quite innovative: it incorporates 3D elements into 2D artwork directly within the digital imaging application, giving users an easy on-ramp to 3D image creation, and it provides full 3D editing. It has intelligent image editing and enhancement, advanced compositing, and an extraordinary painting and drawing toolset. It provides for video and animation design, and helps in working with specialized images for architecture, manufacturing, engineering, science, and medicine.
Where CS5 Extended scores over CS5
CS5 Extended includes many features that are not in CS5, and these make it the stronger product:
Technology for creating 3D extrusions
3D material library and picker
Depth of field for 3D
3D merging and scene composition improvements
3D workflow improvements
Customization of 3D features
Image-based light sources
Shadow catcher for shadow creation
Enhanced ray tracer
Context-sensitive widgets that allow easy control of objects, lights and cameras
Overlays for materials and mesh boundaries
Photoshop CS5 Extended incorporates all the features of CS5 plus these more advanced capabilities, including 3D creation and editing.
Redefining the Image-Editing Experience: A Photographer's Point of View
Photoshop CS5 delivers amazing features and creative options, so even new users can perform advanced image manipulations and compositions. The breathtaking image intelligence behind Content-Aware Fill magically removes any image detail or object, examines the surroundings, and seamlessly fills in the space left behind, matching the lighting, tone and noise of the surrounding area. The new Refine Edge makes nearly impossible image selections possible; masking was never easier, and the toughest types of edges, such as hair and foliage, become easier to handle.
To sum up, the following are a few advantages of CS5 Extended over previous versions:
64-bit processing
Content-Aware Fill
Refine Edge, which makes nearly impossible image selections possible
HDR Pro, including ghost artifact removal and HDR toning, which gives the look of HDR with a single exposure
New brush options
Improved image management with enhanced Adobe Bridge
Lens corrections
Improved black-and-white conversions
Puppet Warp: precisely reposition or warp any image element
Adobe Camera Raw 6
Upgrade Buy Online
Pricing and Availability
Adobe Photoshop CS5 and CS5 Extended are available through Adobe Authorized Resellers and the Adobe Store. Estimated street price is US$699 for Adobe Photoshop CS5 and US$999 for Photoshop CS5 Extended. Upgrade pricing and volume licensing are also available.
Related posts: 10 Free Alternatives for Adobe Photoshop Software | Web based Alternatives to Photoshop | 15 Useful Adobe Illustrator Tutorials For Designers

    Read the article

< Previous Page | 505 506 507 508 509 510 511 512 513 514 515 516  | Next Page >