Search Results



  • Understanding EDI 997

    - by VishnuTiwariBlog
    Hi Guys, this one is for EDI starters. Below are the complete segment and element details of the EDI 997.

    997 Functional Acknowledgment Transaction Layout:

    010 ST - Transaction Set Header (Mandatory)
    Indicates the start of a transaction set and assigns a control number. Example: ST*997*382823~
        ST01 (M): Code uniquely identifying a Transaction Set.
        ST02 (M): Identifying control number that must be unique within the transaction set functional group, assigned by the originator for a transaction set.

    020 AK1 - Functional Group Response Header (Mandatory)
    Starts the acknowledgment of a functional group. Example: AK1*QM*2459823
        AK101: Code identifying a group of application-related transaction sets, e.g. IN = Invoice Information (810), SH = Ship Notice/Manifest (856).
        AK102: Assigned number originated and maintained by the sender.

    030 AK2 - Transaction Set Response Header (Mandatory)
    Starts the acknowledgment of a single transaction set. Example: AK2*856*001
        AK201 (M): Code uniquely identifying a Transaction Set, e.g. 810 = Invoice, 856 = Ship Notice/Manifest.
        AK202 (M): Identifying control number that must be unique within the transaction set functional group, assigned by the originator for a transaction set.

    040 AK3 - Data Segment Note (Optional)
    Reports errors in a data segment and identifies the location of the data segment. Example: AK3*TD3*9
        AK301 - Segment ID Code: Code defining the segment ID of the data segment in error (see Appendix A, Number 77).
        AK302 - Segment Position in Transaction Set: The numerical count position of this data segment from the start of the transaction set; the transaction set header is count position 1.

    050 AK4 - Data Element Note (Optional)
    Reports errors in a data element or composite data structure and identifies the location of the data element. Example: AK4*2**2
        AK401 - Position in Segment: The relative position of the simple data element, or of the composite data structure combined with the relative position of the component data element within the composite, that is in error; the count starts with 1 for the simple data element or composite data structure immediately following the segment ID.
        AK402 - Element Position in Segment: Indicates the relative position, within the data segment, of the simple data element or composite data structure in error; the count starts with 1 for the element immediately following the segment ID.
        AK403 - Data Element Syntax Error Code: Code indicating the error found after syntax edits of a data element:
            1 Mandatory Data Element Missing
            2 Conditional Required Data Element Missing
            3 Too Many Data Elements
            4 Data Element Too Short
            5 Data Element Too Long
            6 Invalid Character in Data Element
            7 Invalid Code Value
            8 Invalid Date
            9 Invalid Time
            10 Exclusion Condition Violated
        AK404 - Copy of Bad Data Element: A copy of the data element in error.

    060 AK5 - Transaction Set Response Trailer (Mandatory)
    Acknowledges acceptance or rejection, and reports errors, for a transaction set. Examples: AK5*A~ or AK5*R*5~
        AK501 - Transaction Set Acknowledgment Code: A = Accepted, E = Accepted But Errors Were Noted, R = Rejected.
        AK502 - Transaction Set Syntax Error Code:
            1 Transaction Set Not Supported
            2 Transaction Set Trailer Missing
            3 Transaction Set Control Number in Header and Trailer Do Not Match
            4 Number of Included Segments Does Not Match Actual Count
            5 One or More Segments in Error
            6 Missing or Invalid Transaction Set Identifier
            7 Missing or Invalid Transaction Set Control Number

    070 AK9 - Functional Group Response Trailer (Mandatory)
    Acknowledges acceptance or rejection of a functional group and reports the number of included transaction sets from the original trailer, plus the received and accepted sets in this functional group. Examples: AK9*A*1*1*1~ or AK9*R*1*1*0~
        AK901 - Functional Group Acknowledge Code: A = Accepted, E = Accepted But Errors Were Noted, R = Rejected.
        AK902 - Number of Transaction Sets Included: Total number of transaction sets included in the functional group or interchange (transmission) group terminated by the trailer containing this data element.
        AK903 - Number of Received Transaction Sets.
        AK904 - Number of Accepted Transaction Sets in a Functional Group.
        AK905 - Functional Group Syntax Error Code: Code indicating an error found based on the syntax editing of the functional group header and/or trailer:
            1 Functional Group Not Supported
            2 Functional Group Version Not Supported
            3 Functional Group Trailer Missing
            4 Group Control Number in the Functional Group Header and Trailer Do Not Agree
            5 Number of Included Transaction Sets Does Not Match Actual Count
            6 Group Control Number Violates Syntax

    080 SE - Transaction Set Trailer (Mandatory)
    Indicates the end of the transaction set and provides the count of the transmitted segments, including the beginning (ST) and ending (SE) segments. Example: SE*9*223~
        SE01 - Number of Included Segments: Total number of segments in the transaction set, including ST and SE.
        SE02 - Transaction Set Control Number: Identifying control number that must be unique within the transaction set functional group, assigned by the originator for a transaction set.
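    To see how the segments fit together, here is a minimal sketch of a complete 997 assembled from the example values in the layout above, acknowledging acceptance of a single 856. AK101 is set to SH to match the 856 per the code list above; the ISA/GS envelope segments that would wrap this in a real interchange are omitted, and the control numbers are illustrative. Note that SE01 carries the segment count (six segments, ST through SE) and SE02 echoes the control number from ST02. AK3 and AK4 would appear between AK2 and AK5 only if segment- or element-level errors were being reported:

        ST*997*382823~
        AK1*SH*2459823~
        AK2*856*001~
        AK5*A~
        AK9*A*1*1*1~
        SE*6*382823~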


  • POP Forums v9 Beta 1 for ASP.NET MVC 3 posted to CodePlex!

    - by Jeff
    As promised, I posted a beta build of my forum app for ASP.NET MVC 3. Get the new goodies here: http://popforums.codeplex.com/releases/view/58228

    This is the first beta for the ASP.NET MVC 3 version of POP Forums. It is nearly feature complete, and ready for testing and feedback. For previous release notes, look here, here and here. Check out the live preview: http://preview.popforums.com/Forums

    Setup instructions are on the home page of this project. The new hotness in the beta, or what has been done since the last preview:
    - All views converted to use Razor
    - E-mail subscription/notification of new posts
    - New post indicators/mark read buttons
    - Permalinks to posts
    - Jump to newest post (from new post indicators)
    - Recent topics
    - Favorite topics
    - Moderator functions for topics (pin/close/delete, plus move and rename)
    - Search, ported from v8 (not a ton of optimization here, or new unit testing, but the old version worked pretty well)
    - User posts (topics the user posted in)
    - Forgot password
    - Vanity items (signatures and avatars)
    - Hide vanity items per user preference
    - Some minor data caching where appropriate
    - A little bit of UI refinement
    - Lots-o-bug fixes
    - Lots-o-unit tests

    What's next? The plan between now and the next beta is as follows:
    - Continue working through features/tasks, and fix bugs as they're reported
    - Integrate the forum into a real, production site
    - Refine the UI
    - Refactor as much as possible... the code organization is not entirely logical in some places

    After the second beta, a release candidate will follow, with a real "final" release after that. Subsequent releases should come relatively frequently and without a lot of risk. The trick in building this thing has been that it mostly tossed the previous WebForms version, which was all full of crusties. The timetable for this is a little harder to pin down, as day jobs and families will have their effect.

    Other notes: Refactoring will be a priority. As the features of MVC have evolved, so have my desires to use it in a fashion that makes things clear and easy to follow. I don't even know if anyone will ever start mucking around in the code, but on the off chance they do, I'd like what they find to not suck. Other nice-to-haves are builds to target Windows Azure and SQL CE. A nice setup UI would be super too. I think the ASP.NET MVC world has gone long enough without a decent forum.

    The biggest challenge that I've found is making the forum something that can be dropped into any app. While it does rope its views into an area, areas are mostly just routing details. I haven't thought of a clever way yet to limit dependency injection, for example, to just the forum bits. I mean, everyone should be using Ninject, but how realistic is that? ;)

    How much time and effort should you spend on POP Forums in its current state? Change is inevitable, but at this point I'm reasonably committed to not changing the database schema. I really think it will stay as-is. All bets are off for the various interfaces throughout the app, but the data should generally resist change. It's not even that different from v8, which was one of the original goals, because I didn't want to rewrite SQL or introduce a new ORM or whatever. My point is that if you wanted to build a site around this today, even though it's not entirely functional, I think it's low risk in terms of data loss. I can't vouch for whether or not you know what you're doing.

    I've been having some chats with people lately about quoting posts, and honestly there has to be something better and more straightforward. That continues to be a holy grail of mine, and some day I hope to find it.

    Enjoy... it's starting to feel more real every day!


  • Beware when using .NET's named pipes in a Windows Forms application

    - by FransBouma
    Yesterday a user of our .NET ORM Profiler tool reported that he couldn't get the snapshot-recording-from-code feature working in a Windows Forms application. Snapshot recording in code means you start recording profile data from within the profiled application, and after you're done you save the snapshot as a file which you can open in the profiler UI. When using a console application it worked, but when a Windows Forms application was used, the snapshot was always empty: nothing was recorded. Obviously, I wondered why that was, and debugged a little.

    Here's an example piece of code to record the snapshot. This piece of code works OK in a console application, but results in an empty snapshot in a Windows Forms application:

        var snapshot = new Snapshot();
        snapshot.Record();

        using(var ctx = new ORMProfilerTestDataContext())
        {
            var customers = ctx.Customers.Where(c => c.Country == "USA").ToList();
        }

        InterceptorCore.Flush();
        snapshot.Stop();

        string error = string.Empty;
        if(!snapshot.IsEmpty)
        {
            snapshot.SaveToFile(@"c:\temp\generatortest\test2\blaat.opsnapshot", out error);
        }
        if(!string.IsNullOrEmpty(error))
        {
            Console.WriteLine("Save error: {0}", error);
        }

    (The Console.WriteLine doesn't do anything in a Windows Forms application, but you get the idea.)

    ORM Profiler uses named pipes: the interceptor (referenced and initialized in your application, the application to profile) sends data over the named pipe to a listener, which when receiving a piece of data begins reading it, asynchronously, and when it is properly read, signals observers that new data has arrived so they can store it in a repository. In this case, the snapshot is the observer and stores the data in its own repository.

    The reason the above code doesn't work in Windows Forms is that Windows Forms is a wrapper around Win32 and its WM_*-message-based system. Named pipes in .NET are wrappers around Windows named pipes, which also work with WM_* messages. Even though we use BeginRead() on the named pipe (which spawns a thread to read the data from the named pipe), nothing is received by the named pipe in the Windows Forms application, because it doesn't handle the WM_* messages in its message queue till after the method is over: the message pump of a Windows Forms application is handled by its only thread, so it will handle WM_* messages when the application idles.

    The fix is easy though: add Application.DoEvents(); right before snapshot.Stop(). Application.DoEvents() forces the Windows Forms application to process all WM_* messages in its message queue at that moment: all messages for the named pipe are then handled, the .NET code of the named pipe wrapper reacts to that, and the whole process completes as if nothing happened.

    It's not that simple to just say 'why didn't you use a worker thread to create the snapshot here?', because a thread doesn't get its own message pump: the messages would still be posted to the window's message pump. A hidden form would create its own message pump, so the additional thread would also have to create a window to get the WM_* messages of the named pipe posted to a message pump different from the one of the main window.

    This WM_* message pain is not something you want to be confronted with when using .NET and its libraries. Unfortunately, the way they're implemented, a lot of APIs are leaky abstractions; they bleed the characteristics of the OS objects they hide away through to the .NET code. Be aware of that fact when using them :)
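    In code, the fixed tail end of the snippet above looks like this (a minimal sketch; Application.DoEvents() lives in System.Windows.Forms, so this applies only to the Windows Forms case):

        InterceptorCore.Flush();
        Application.DoEvents();   // process the queued WM_* messages so the pending named-pipe reads complete
        snapshot.Stop();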


  • Silverlight Cream for March 17, 2010 -- #814

    - by Dave Campbell
    In this Issue: Tim Heuer(-2-), René Schulte(-2-), Bart Czernicki, Mark Monster, Pencho Popadiyn, Alex Golesh, Phil Middlemiss, and Yochay Kiriaty.

    Shoutouts:
    - Check out the new themes, and Tim Heuer's poetry skills: SNEAK PEEK: New Silverlight application themes
    - I learned to program Windows 3.1 from reading Charles Petzold's book, and here we are again: Free ebook: Programming Windows Phone 7 Series (DRAFT Preview)
    - Here's a blog you're going to want to watch, and first up on the blog tonight is links to the complete set of MIX10 phone sessions: The Windows Phone Developer Blog

    First let me get a couple of things out of my system... "Holy Crap, it's March 17th already" and "Holy Crap, we're all Windows Phone developers!" I'm sure both of those were old news to anyone that's not been in a coma since Monday, but I've been a tad busy here at #MIX10. I'm not complainin'... I'm just sayin'.

    From SilverlightCream.com:

    - Getting Started with Silverlight and Windows Phone 7 Development: With any new Silverlight technology we have to begin with Tim Heuer... and this is Tim's announcement of Silverlight on the Windows Phone 7 Series ('cmon, can I call it a "Silverlight Phone"? ... please?) ... hope I didn't type that out loud :) ... so... in case you fell asleep Sunday and just woke up, Tim let the dogs out on this and we could all talk about it. In all seriousness, bookmark this page... lots of good links.
    - A guide to what has changed in the Silverlight 4 RC: Continuing the 'bookmark this page' thought... Tim Heuer also has one up on what the heck is all in the Silverlight 4 RC they released on Monday... check this out... really good stuff in there... and a great post detailing it all.
    - The Silverlight 4 Release Candidate: René Schulte has a good post up detailing the new stuff in the Silverlight 4 RC, with special attention paid to the webcam/mic and AsyncCaptureImage.
    - Let it ring - WriteableBitmapEx for Windows Phone: René Schulte has a Windows Phone post up as well, introducing the WriteableBitmapEx library for Windows Phone... how cool is that??
    - Silverlight for Windows Phone 7 is NOT the same full Silverlight 3 RTM: Bart Czernicki dug into the docs to expose some of the differences between Silverlight for the Windows Phone and Silverlight 3. If you've been developing in SL3 and want to also do Phone, check out this post and his resource listings.
    - Trying to sketch a Windows Phone 7 application: Mark Monster tried to SketchFlow a Windows Phone app and hit some problems... if anyone has thoughts, contribute on his blog page.
    - Using Reactive Extensions in Silverlight – part 2 – Web Services: Pencho Popadiyn has part 2 of his tutorial on Rx, and this one concentrates on asynchronous service calls.
    - Silverlight 4 Quick Tip: Out-Of-Browser Improvements: This post from Alex Golesh is a little weird since he was sitting next to me in a session at MIX10 when he submitted it :) ... good update on what's new in OOB in the RC.
    - Turning a round button into a rounded panel: I like Phil Middlemiss' other title for this post: "A Scalable Orb Panel-Button-Thingy" ... this is a very cool resizing button that works amazingly similar to the resizable skinned dialogs I did in Win32!... very cool, Phil!
    - Go Get It – The Windows Phone Developer Training Kit: Did you know there was a Windows Phone Training Kit with hands-on labs? Yochay Kiriaty at the Windows Phone Developer Blog wrote about it... I pulled it down, and it looks really good!

    Stay in the 'Light!


  • Localization with ASP.NET MVC ModelMetadata

    - by kazimanzurrashid
    When using DisplayFor/EditorFor, there has been built-in support in ASP.NET MVC to show localized validation messages, but no support to show the associated label in localized text, unless you are using .NET 4.0 with MVC Futures. Let's say you are creating a Create form for Product where you support both English and German. (The original post shows screenshots of the form in both languages.)

    I have recently added a few helpers for localization in MvcExtensions; let's see how we can use them to localize the form. As I have mentioned in the past, I am not a big fan of decorating classes with attributes, which is the recommended way in ASP.NET MVC. Instead, we will use the fluent configuration (similar to FluentNHibernate or EF Code First) of MvcExtensions to configure our view models. For example, for the above we will use:

        public class ProductEditModelConfiguration : ModelMetadataConfiguration<ProductEditModel>
        {
            public ProductEditModelConfiguration()
            {
                Configure(model => model.Id).Hide();

                Configure(model => model.Name).DisplayName(() => LocalizedTexts.Name)
                                              .Required(() => LocalizedTexts.NameCannotBeBlank)
                                              .MaximumLength(64, () => LocalizedTexts.NameCannotBeMoreThanSixtyFourCharacters);

                Configure(model => model.Category).DisplayName(() => LocalizedTexts.Category)
                                                  .Required(() => LocalizedTexts.CategoryMustBeSelected)
                                                  .AsDropDownList("categories", () => LocalizedTexts.SelectCategory);

                Configure(model => model.Supplier).DisplayName(() => LocalizedTexts.Supplier)
                                                  .Required(() => LocalizedTexts.SupplierMustBeSelected)
                                                  .AsListBox("suppliers");

                Configure(model => model.Price).DisplayName(() => LocalizedTexts.Price)
                                               .FormatAsCurrency()
                                               .Required(() => LocalizedTexts.PriceCannotBeBlank)
                                               .Range(10.00m, 1000.00m, () => LocalizedTexts.PriceMustBeBetweenTenToThousand);
            }
        }

    As you can see, we are using Func<string> to set the localized text; this is just an overload of the regular string method. There are a few more methods in the ModelMetadata configuration which accept this Func<string> where localization can be applied, like Description, Watermark, ShortDisplayName etc. LocalizedTexts is just a regular resource file, and we have both English and German versions.

    Now let's see the view markup:

        <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Demo.Web.ProductEditModel>" %>

        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            <%= LocalizedTexts.Create %>
        </asp:Content>

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h2><%= LocalizedTexts.Create %></h2>
            <%= Html.ValidationSummary(false, LocalizedTexts.CreateValidationSummary)%>
            <% Html.EnableClientValidation(); %>
            <% using (Html.BeginForm()) {%>
                <fieldset>
                    <%= Html.EditorForModel() %>
                    <p>
                        <input type="submit" value="<%= LocalizedTexts.Create %>" />
                    </p>
                </fieldset>
            <% } %>
            <div>
                <%= Html.ActionLink(LocalizedTexts.BackToList, "Index")%>
            </div>
        </asp:Content>

    As we can see, we are using the same LocalizedTexts for the other parts of the view which are not included in the ModelMetadata, like the page title, button text etc. We are also using EditorForModel instead of EditorFor for individual fields; both are supported. One of the added benefits of the fluent-syntax-based configuration is that we get full compile-time checking for our resources, as we are not depending upon string-based resource names like ASP.NET MVC does. You will find the complete localized CRUD example in the MvcExtensions sample folder.

    That's it for today.


  • Systems Solutions at COLLABORATE12

    - by ferhat
    Want to connect with fellow Oracle users and learn more about how to maximize your Oracle software environments with Oracle Systems? Pack your bags for Las Vegas! COLLABORATE 12 is right around the corner! The COLLABORATE 12 conference will be held at the Mandalay Bay in Las Vegas, NV, 22-26 April 2012. This is an event designed and delivered by users just like you, with sessions, interactive panel discussions and hands-on learning opportunities packed with first-hand experiences, case studies and practical "how-to" content.

    This year's event includes a number of educational sessions and demos for users interested in learning from the experts how to use Oracle Optimized Solutions to get the most out of their Oracle technology and application software. Oracle Optimized Solutions are proven blueprints that eliminate integration guesswork by combining best-in-class hardware and software components to deliver complete system architectures that are fully tested and include documented best practices that reduce integration risks and deliver better application performance. And because they are highly flexible by design, Oracle Optimized Solutions can be implemented as an end-to-end solution or easily adapted into existing environments.

    Follow Oracle Infrared at Twitter, Facebook, Google+, and LinkedIn to catch the latest news, developments, announcements, and inside views from Oracle Optimized Solutions.

    Please come by our Exhibition Booth #1273 to see the demos and meet 1-1 with the experts behind a number of Oracle Optimized Solutions, including those for JD Edwards EnterpriseOne, E-Business Suite, PeopleSoft HCM, Oracle WebCenter, and Oracle Database.

    Exhibitor Showcase (Booth #1273):
    - Monday, April 23: 6:00 pm - 8:00 pm, Welcome Reception in the Exhibitor Showcase
    - Tuesday, April 24: 10:15 am - 4:00 pm, Exhibitor Showcase Open; 1:00 pm - 2:00 pm, Dedicated Exhibitor Showcase Time; 5:30 pm - 7:00 pm, Exhibitor Showcase Happy Hour
    - Wednesday, April 25: 10:30 am - 3:00 pm, Exhibitor Showcase Open; 2:15 pm - 3:00 pm, Afternoon Break in Exhibitor Showcase

    There are also a number of deep-dive, educational sessions covering deployment best practices using Oracle's engineered systems and best-in-class hardware, operating system and virtualization technologies.

    Education Sessions:
    - Monday, April 23, 9:45 am - 10:45 am: Architecting and Implementing Backup and Recovery Solutions (Surf E)
    - Tuesday, April 24, 2:00 pm - 3:00 pm: Oracle's High Performance Systems for JD Edwards EnterpriseOne (Mandalay Bay GH)
    - Tuesday, April 24, 4:30 pm - 5:30 pm: Virtualization Boot Camp: What's New with Oracle VM Server for x86 (Mandalay Bay C)
    - Tuesday, April 24, 9:30 am - 10:30 am: Oracle on Oracle VM - Expert Panel (Mandalay Bay L)
    - Wednesday, April 25, 9:30 am - 10:30 am: Cloud Computing Directions: Part II - Understanding Oracle's Cloud Directions (South Seas E)

    And don't forget the keynotes and software roadmap sessions!

    Keynotes and Roadmap Sessions:
    - Sunday, April 22, 3:20 pm - 4:20 pm: Oracle's Cloud Computing Strategy (Breakers B)
    - Monday, April 23, 11:00 am - 12:00 pm: JD Edwards - Vision, Promises and Execution: IT'S THE WAY WE ROLL and Why it Matters! (Mandalay Bay A)
    - Monday, April 23, 11:00 am - 12:00 pm: PeopleSoft Executive Update and Roadmap (Mandalay Bay J)
    - Monday, April 23, 1:15 pm - 2:15 pm: Oracle Database - Engineered for Innovation (Mandalay Bay L)
    - Monday, April 23, 2:30 pm - 3:30 pm: Oracle E-Business Suite Applications Strategy and General Manager Update (Mandalay Bay D)
    - Tuesday, April 24, 9:15 am - 10:15 am: IT at Oracle: The Art of IT Transformation to Enable Business Growth (Mandalay Bay Ballroom H)


  • Is Agile the new micromanagement?

    - by Smith James
    This question has been cooking in my head for a while, so I wanted to ask those who are following agile/scrum practices in their development environments.

    My company has finally ventured into incorporating agile practices, and has started out with a team of 4 developers in an agile group on a trial basis. It has been 4 months and 3 iterations, and they continue on a trial basis without going fully agile for the rest of us. This is due to the fact that management doesn't yet trust agile to meet business requirements, given quite a bit of ad hoc requests coming from high above.

    Recently, I talked to the developers who are part of this initiative; they tell me that it's not fun. They are not allowed to talk to other developers by their Scrum master, and are not allowed to take any phone calls in the work area (which may be fine to an extent). For example, if I want to talk to my friend in the agile team just for kicks, I am not allowed without the approval of the Scrum master, who is sitting right next to the agile team. The idea of all this is to provide a complete vacuum for agile developers from any interruptions, and to have them put in a good 6+ productive hours.

    Well, guys, I am no agile guru, but from what I have read of the Yahoo agile rollout document and similar documents from other organizations, I get the feeling that agile is not cheap. It requires resources and budget to instill agile into the teams and to correct issues as they arise, to put them back on track. For starters, it requires training for developers, coaching for managers, etc. The current Scrum master is a manager who took a couple of days of agile training paid for by management, and is now leading this agile team. I have also heard in meetings that the agile manifesto is not set in stone and is customized differently for each company. Well, it all sounds good and reasonable.

    In conclusion, I always thought agile was supposed to bring harmony to development teams, resulting in happy developers. However, I am getting the very opposite feeling when talking to the developers in the agile team. They are unhappy that they cannot talk about anything but work, sitting quietly all day just working, and they feel it's just another way for management to make them work more.

    Tell me please, is this one of those examples of a good practice being used for the selfish purpose of squeezing out more dollars? Or maybe it's just that developers like me and this agile team don't like to work in an environment where they only breathe work, because they are at work. Thanks.

    Edit: It's a company in the healthcare domain that has offices across the US. It definitely feels like cowboy-style agile, which makes me really not want to go for agile at all, especially at my current company. All of it has to do with the management being completely cheap: cutting out expensive coffee for a cheaper version, emphasis on savings, and being productive while staying as lean as possible. My feeling is that someone in management behind the door threw out this idea that agile makes you produce more, so we can show our bosses we're producing more with the same headcount. Or maybe it will allow us to reduce headcount, if that's the case.

    Edited: They are having their 5-minute daily meeting, but are not allowed to chat or talk with someone outside of their team. All focus is on work.


  • Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud

    - by Asian Angel
    Backing up important files is something that all of us should do on a regular basis, but may not have given as much thought to as we should. This week we would like to know if you use local storage, cloud storage, or a combination of both to back your files up. Photo by camknows.

    For some people, local storage media may be the most convenient and/or affordable way to back up their files. Having those files stored on media under your control can also provide a sense of security and peace of mind. But storing your files locally may also have drawbacks if something happens to your storage media. So how do you know whether the benefits outweigh the disadvantages or not? Here are some possible pros and cons that may affect your decision to use local storage to back up your files:

    Local Storage

    Pros:
    - You are in control of your data
    - Your files are portable and can go with you when needed if using external or flash drives
    - Files are accessible without an internet connection
    - You can easily add more storage capacity as needed (additional drives, etc.)

    Cons:
    - You need to arrange room for your storage media (if you have multiple external drives, etc.)
    - Possible hardware failure
    - No access to your files if you forget to bring your storage media with you or it is too bulky to bring along
    - Theft and/or loss of home with all contents due to circumstances like fire

    If you are someone who is always on the go and needs to travel as lightly as possible, cloud storage may be the perfect way for you to back up and access your files. Perhaps your laptop has a hard-drive failure or gets stolen... unhappy events to be sure, but you will still have a copy of your files available. Perhaps a company wants to make sure their records, files, and other information are backed up off site in case of a major hardware or system failure... expensive and/or frustrating to fix if it happens, but once again there is a nice backup ready to go once things are fixed. As with local storage, here are some possible pros and cons that may influence your choice of cloud storage to back up your files:

    Cloud Storage

    Pros:
    - No need to carry around flash or bulky external drives
    - All of your files are accessible wherever there is an internet connection
    - No need to deal with local storage media (or its upkeep)
    - Your files are still safe if your home is broken into or other unfortunate circumstances occur

    Cons:
    - Your files and data are not 100% under your control
    - Possible hardware failure or loss of files on the part of your cloud storage provider (this could include a disgruntled employee wreaking havoc)
    - No access to your files if you do not have an internet connection
    - The cloud storage provider may eventually shut down due to financial hardship or other unforeseen circumstances
    - The possibility of your files and data being stolen by hackers due to a security breach on the part of your cloud storage provider

    You may also prefer to try and cover all of the possibilities by using both local and cloud storage to back up your files. If something happens to one, you always have the other to fall back on. Need access to those files at or away from home? As long as you have access to either your storage media or an internet connection, you are good to go.

    Maybe you are getting ready to choose a backup solution but are not sure which one would work better for you. Here is your chance to ask your fellow HTG readers which one they would recommend. Got a great backup solution already in place? Then be sure to share it with your fellow readers!


  • Where’s my MD.050?

    - by Dave Burke
    A question that I'm sometimes asked is "where's my MD.050 in OUM?" For those not familiar with an MD.050, it serves the purpose of being a Functional Design Document (FDD) in one of Oracle's legacy methods. Functional Design Documents have existed for many years, with their primary purpose being to describe the functional aspects of one or more components of an IT system; typically, a Custom Extension of some sort.

    So why don't we have a direct replacement for the MD.050/FDD in OUM? In simple terms, the disadvantage of the MD.050/FDD approach is that it tends to lead practitioners into "design mode" too early in the process, whereas OUM encourages more emphasis on gathering and describing the functional requirements of a system ahead of the formal Analysis and Design process.

    So that just means more work up front for the Business Analyst or Functional Consultants, right? Well, no... the design of a solution, particularly when it involves a complex custom extension, does not necessarily take longer just because you put more thought into the functional requirements. In fact, one could argue the complete opposite: by putting more emphasis on clearly understanding the nuances of the functional requirements early in the process, the overall time and cost incurred during the Analysis to Design process should be less. In short, as your understanding of requirements matures over time, it is far easier (and more cost effective) to update a document or a diagram than to change lines of code.

    So how does that translate into Tasks and Work Products in OUM? Let us assume you have reached a point on a project where a Custom Extension is needed. One of the first things you should consider doing is creating a Use Case, and remember, a Use Case could be as simple as a few lines of text reflecting a "User Story", or it could be what Cockburn [1] describes as a "fully dressed Use Case". It is worth mentioning at this point the highly scalable nature of OUM, in the sense that "documents" should not be produced just because that is the way we have always done things. Some projects may well be predicated upon a base of electronic documents, whilst other projects may take a much more agile approach to describing functional requirements; through "User Stories" perhaps.

    In any event, it is quite common for a Custom Extension to involve the creation of several "components", i.e. some new screens, an interface, a report etc. Therefore several Use Cases might be required, which in turn can then be assembled into a Use Case Package. Once you have the Use Cases attributed to an appropriate (fit-for-purpose) level of detail, and assembled into a Package, you can now create an Analysis Model for the Package.

    An Analysis Model is conceptual in nature, and depending on the solution being developed, would involve the creation of one or more diagrams (i.e. Sequence Diagrams, Collaboration Diagrams etc.) which collectively describe the Data, Behavior and User Interface requirements of the solution. If required, the various elements of the Analysis Model may be indexed via an Analysis Specification. For Custom Extension projects that follow a pure Object Oriented approach, the Analysis Model will naturally support the development of the Design Model without any further artifacts. However, for projects that are transitioning to this approach, the various elements of the Analysis Model may be represented within the Analysis Specification.

    If we now return to the original question of "where's my MD.050?", the full answer would be:
    - Capture the functional requirements within a Use Case
    - Group related Use Cases into a Package
    - Create an Analysis Model for each Package
    - Consider creating an Analysis Specification (AN.100) as an index to each Analysis Model artifact

    An alternative answer for a relatively simple Custom Extension would be:
    - Capture the functional requirements within a Use Case
    - Optionally, group related Use Cases into a Package
    - Create an Analysis Specification (AN.100) for each Package

    [1] Cockburn, A. (2000). Writing Effective Use Cases. Addison-Wesley Professional, 1st Edition.


  • How do you install a USB CD Rom drive?

    - by Matt Allen
    Hello, I recently purchased a USB CD-ROM drive, but I don't know how to get it to work with my computer, which runs Ubuntu 10.04: http://www.amazon.com/gp/product/B00303H908/ref=oss_product

    When I issue the lsusb command, it shows up as:

        Bus 002 Device 016: ID 05e3:0701 Genesys Logic, Inc. USB 2.0 IDE Adapter

    The computer doesn't recognize it automatically. How can I get this drive to show up as an actual drive on my computer? If this particular drive can't handle Linux, can you recommend one which can, and provide a link to it so I can purchase it? Thanks!

    Update: I was asked by Scaine to run a command and report back with the output:

        joe@joe-laptop:~$ tail -f /var/log/kern.log
        Dec 29 12:51:35 joe-laptop kernel: [103190.551437] sr 7:0:0:0: [sr1] Add. Sense: Illegal mode for this track
        Dec 29 12:51:35 joe-laptop kernel: [103190.551446] sr 7:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00
        Dec 29 12:51:35 joe-laptop kernel: [103190.551463] end_request: I/O error, dev sr1, sector 0
        Dec 29 12:51:35 joe-laptop kernel: [103190.877542] sr 7:0:0:0: [sr1] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Dec 29 12:51:35 joe-laptop kernel: [103190.877551] sr 7:0:0:0: [sr1] Sense Key : Illegal Request [current]
        Dec 29 12:51:35 joe-laptop kernel: [103190.877559] Info fld=0x0, ILI
        Dec 29 12:51:35 joe-laptop kernel: [103190.877562] sr 7:0:0:0: [sr1] Add. Sense: Illegal mode for this track
        Dec 29 12:51:35 joe-laptop kernel: [103190.877572] sr 7:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00
        Dec 29 12:51:35 joe-laptop kernel: [103190.877588] end_request: I/O error, dev sr1, sector 0
        Dec 29 13:08:46 joe-laptop kernel: [104221.558911] usb 2-2.2: USB disconnect, address 16

    Then when I plugged the drive back into the computer, I got:

        Dec 29 13:10:29 joe-laptop kernel: [104324.668320] usb 2-2.2: new high speed USB device using ehci_hcd and address 17
        Dec 29 13:10:29 joe-laptop kernel: [104324.761702] usb 2-2.2: configuration #1 chosen from 1 choice
        Dec 29 13:10:29 joe-laptop kernel: [104324.762700] scsi8 : SCSI emulation for USB Mass Storage devices
        Dec 29 13:10:29 joe-laptop kernel: [104324.762935] usb-storage: device found at 17
        Dec 29 13:10:29 joe-laptop kernel: [104324.762938] usb-storage: waiting for device to settle before scanning
        Dec 29 13:10:34 joe-laptop kernel: [104329.760521] usb-storage: device scan complete
        Dec 29 13:10:34 joe-laptop kernel: [104329.761344] scsi 8:0:0:0: CD-ROM TEAC CD-224E 1.7A PQ: 0 ANSI: 0 CCS
        Dec 29 13:10:34 joe-laptop kernel: [104329.767425] sr1: scsi3-mmc drive: 24x/24x cd/rw xa/form2 cdda tray
        Dec 29 13:10:34 joe-laptop kernel: [104329.767612] sr 8:0:0:0: Attached scsi CD-ROM sr1
        Dec 29 13:10:34 joe-laptop kernel: [104329.767720] sr 8:0:0:0: Attached scsi generic sg2 type 5
        Dec 29 13:10:34 joe-laptop kernel: [104330.141060] sr 8:0:0:0: [sr1] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Dec 29 13:10:34 joe-laptop kernel: [104330.141069] sr 8:0:0:0: [sr1] Sense Key : Illegal Request [current]
        Dec 29 13:10:34 joe-laptop kernel: [104330.141077] Info fld=0x0, ILI
        Dec 29 13:10:34 joe-laptop kernel: [104330.141081] sr 8:0:0:0: [sr1] Add. Sense: Illegal mode for this track
        Dec 29 13:10:34 joe-laptop kernel: [104330.141090] sr 8:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00
        Dec 29 13:10:34 joe-laptop kernel: [104330.141106] end_request: I/O error, dev sr1, sector 0
        Dec 29 13:10:34 joe-laptop kernel: [104330.141113] __ratelimit: 18 callbacks suppressed

    There was more output than this (the number of lines started growing after the drive was plugged back in, and kept growing), but these are the first few lines.


  • SQL SERVER – SQL Server Misconceptions and Resolution – A Practical Perspective – TechEd 2012 India

    - by pinaldave
    TechEd India 2012 is just around the corner, and I will be presenting there in two different sessions. On the very first day of the event, my presentation will be all about SQL Server Misconceptions and Resolution – A Practical Perspective.

    The dictionary tells us that a "misconception" is a view or opinion that is incorrect and is based on faulty thinking or understanding. In SQL Server, there are so many misconceptions. In fact, when I hear some of them, I feel like fainting at that very moment! Seriously: at one time, I came across a scenario where, instead of using INSERT INTO...SELECT, the developer used a CURSOR, believing that the cursor is faster (duh!). Here is the link to the blog post related to this.

    (Photo: Pinal and Vinod in 2009)

    I have been presenting at TechEd India for the last three years. This is my fourth opportunity to present a technical session on SQL Server. Just like in previous years, I decided to present something different. Here is the novelty this year: I will be presenting this session with Vinod Kumar. Vinod Kumar and I have great synergy when we work together. So far, we have written one SQL Server Interview Questions and Answers book and 2 video courses: (1) SQL Server Questions and Answers, and (2) SQL Server Performance: Indexing Basics.

    (Photo: Pinal and Vinod in 2011)

    When we sat together and started building an outline for this course, we had many options in mind for this tango session. However, we have decided that we will make this session as lively as possible while keeping it natural at the same time. We know our flow and we know our conversation highlights, but we do not know what exactly each of us is going to present. We have decided to challenge each other on stage and push each other's knowledge to the verge. We promise that the session will be entertaining, with lots of SQL Server trivia, tips and tricks.

    Here are the challenges that I'll take on:
    - I will puzzle Vinod with my difficult questions.
    - I will present a misconception that Vinod will have no resolution for.

    I need your help. Will you help me stump Vinod? If yes, come and attend our session and join me to prove that together we are superior (a friendly brain clash, but we must win!). SQL Server enthusiasts and SQL Server fans are going to have a gala time at #TechEdIn, as we have a very solid lineup of speakers and extremely interesting sessions at TechEdIn. Read the complete blog post of Vinod.

    Session Details
    Title: SQL Server Misconceptions and Resolution – A Practical Perspective (Add to Calendar)
    Abstract: "Earth is flat"! An ancient common misconception, which has been proven incorrect as we progressed into modern times. In this session we will look at various prevailing database misconceptions and their resolution, with the aid of demos. In this unique session, the audience will be part of the conversation and resolution.
    Date and Time: March 21, 2012, 15:15 to 16:15
    Location: Hotel Lalit Ashok - Kumara Krupa High Grounds, Bengaluru – 560001, Karnataka, India.

    Please submit your questions in the comments area, and I will for sure be discussing them during my session. If I pick your question to discuss during my session, here is the gift I commit to right now: a SQL Server Interview Questions and Answers book.

    Reference: Pinal Dave (http://blog.sqlauthority.com)


  • Juju stuck in "pending" state when using LXC

    - by Andre
    So I'm trying to get started with Juju, and tried to do this locally using LXC. I followed the instructions here: How do I configure juju for local usage? Unfortunately this doesn't seem to work for me. status shows the following:

        $ juju status
        machines:
          0:
            agent-state: running
            dns-name: localhost
            instance-id: local
            instance-state: running
        services:
          mysql:
            charm: cs:precise/mysql-1
            relations:
              db:
              - wordpress
            units:
              mysql/0:
                agent-state: pending
                machine: 0
                public-address: null
          wordpress:
            charm: cs:precise/wordpress-0
            exposed: true
            relations:
              db:
              - mysql
            units:
              wordpress/0:
                agent-state: pending
                machine: 0
                open-ports: []
                public-address: null
        2012-05-10 14:09:38,155 INFO 'status' command finished successfully

    As you can see, the agent-state is 'pending' and there is no public address where I'm able to access the newly created site. Am I missing something here?

    UPDATE: Tried destroying the environment and doing everything again (multiple times). This is the output for debug-log:

        ~$ juju debug-log
        2012-05-11 08:50:23,790 INFO Enabling distributed debug log.
        2012-05-11 08:50:23,806 INFO Tailing logs - Ctrl-C to stop.
        2012-05-11 08:50:42,338 Machine:0: juju.agents.machine DEBUG: Units changed old:set([]) new:set(['mysql/0'])
        2012-05-11 08:50:42,339 Machine:0: juju.agents.machine DEBUG: Starting service unit: mysql/0 ...
        2012-05-11 08:50:42,459 Machine:0: unit.deploy DEBUG: Downloading charm cs:precise/mysql-1 to /home/andre/.juju/data/andre-local/charms
        2012-05-11 08:50:42,620 Machine:0: unit.deploy DEBUG: Using <juju.machine.unit.UnitContainerDeployment object at 0x9c54b6c> for mysql/0 in /home/andre/.juju/data/andre-local
        2012-05-11 08:50:42,648 Machine:0: unit.deploy DEBUG: Starting service unit mysql/0...
        2012-05-11 08:50:42,649 Machine:0: unit.deploy DEBUG: Creating master container...
        2012-05-11 08:54:33,992 Machine:0: unit.deploy DEBUG: Created master container andre-local-0-template
        2012-05-11 08:54:33,993 Machine:0: unit.deploy INFO: Creating container mysql-0...
        2012-05-11 08:56:18,760 Machine:0: unit.deploy INFO: Container created for mysql/0
        2012-05-11 08:56:19,466 Machine:0: unit.deploy DEBUG: Charm extracted into container
        2012-05-11 08:56:19,569 Machine:0: unit.deploy DEBUG: Starting container...
        2012-05-11 08:56:22,707 Machine:0: unit.deploy INFO: Started container for mysql/0
        2012-05-11 08:56:22,707 Machine:0: unit.deploy INFO: Started service unit mysql/0
        2012-05-11 08:56:23,012 Machine:0: juju.agents.machine DEBUG: Units changed old:set(['mysql/0']) new:set(['wordpress/0', 'mysql/0'])
        2012-05-11 08:56:23,039 Machine:0: juju.agents.machine DEBUG: Starting service unit: wordpress/0 ...
        2012-05-11 08:56:23,154 Machine:0: unit.deploy DEBUG: Downloading charm cs:precise/wordpress-0 to /home/andre/.juju/data/andre-local/charms
        2012-05-11 08:56:23,396 Machine:0: unit.deploy DEBUG: Using <juju.machine.unit.UnitContainerDeployment object at 0x9c519cc> for wordpress/0 in /home/andre/.juju/data/andre-local
        2012-05-11 08:56:23,620 Machine:0: unit.deploy DEBUG: Starting service unit wordpress/0...
        2012-05-11 08:56:23,621 Machine:0: unit.deploy INFO: Creating container wordpress-0...
        2012-05-11 08:58:24,739 Machine:0: unit.deploy INFO: Container created for wordpress/0
        2012-05-11 08:58:25,163 Machine:0: unit.deploy DEBUG: Charm extracted into container
        2012-05-11 08:58:25,397 Machine:0: unit.deploy DEBUG: Starting container...
        2012-05-11 08:58:27,982 Machine:0: unit.deploy INFO: Started container for wordpress/0
        2012-05-11 08:58:27,983 Machine:0: unit.deploy INFO: Started service unit wordpress/0

    This is the result for the status command (with verbose flag):

        ~$ juju -v status
        2012-05-11 08:51:53,464 DEBUG Initializing juju status runtime
        2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.5
        2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@662: Client environment:host.name=andre-ufo
        2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@669: Client environment:os.name=Linux
        2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@670: Client environment:os.arch=3.2.0-24-generic-pae
        2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@671: Client environment:os.version=#37-Ubuntu SMP Wed Apr 25 10:47:59 UTC 2012
        2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@679: Client environment:user.name=andre
        2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@687: Client environment:user.home=/home/andre
        2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@699: Client environment:user.dir=/home/andre
        2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=192.168.122.1:41779 sessionTimeout=10000 watcher=0xb7780620 sessionId=0 sessionPasswd=<null> context=0x9242ee8 flags=0
        2012-05-11 08:51:53,627:4030(0xb6b90b40):ZOO_INFO@check_events@1585: initiated connection to server [192.168.122.1:41779]
        2012-05-11 08:51:53,649:4030(0xb6b90b40):ZOO_INFO@check_events@1632: session establishment complete on server [192.168.122.1:41779], sessionId=0x1373ae057d90007, negotiated timeout=10000
        2012-05-11 08:51:53,651 DEBUG Environment is initialized.
        machines:
          0:
            agent-state: running
            dns-name: localhost
            instance-id: local
            instance-state: running
        services:
          mysql:
            charm: cs:precise/mysql-1
            relations:
              db:
              - wordpress
            units:
              mysql/0:
                agent-state: pending
                machine: 0
                public-address: null
          wordpress:
            charm: cs:precise/wordpress-0
            relations:
              db:
              - mysql
            units:
              wordpress/0:
                agent-state: pending
                machine: 0
                public-address: null


  • SQL SERVER – Top 10 “Ease of Use” Features of expressor Studio

    - by pinaldave
    expressor Studio is a new data integration platform that is being marketed as the easiest-to-use tool of its kind. But "easy to use" can be a relative term: an expert can find a very complex system easy, but a beginner might be stumped. A recent article online discussed exactly what makes expressor Studio so easy to use, and here is my view on this subject.

    1. Simple Installation. There is one pop-up for one .exe file, and nothing more. You can't get much simpler than this. It is also in the familiar Windows design, so there should be no surprises.

    2. No 3rd-Party Software Dependency. Have you ever tried to download software, only to be slowed down by the need to download a compatible system to run the program, and another to read the user manual, and so on? expressor Studio was designed specifically to avoid this problem.

    3. Microsoft Office-Like Ribbon Bar and Menus. As mentioned before, everything is in the familiar Windows design, from the pop-up windows to the toolbars and menus. There should be no learning curve for using this program, or even for simply navigating around a new system.

    4. General Development Design Interface. This software has been designed to be simple and straightforward. Projects can be arranged in a simple "tree" design that is totally collapsible and can easily be added to or "trimmed" with the click of a button. It was meant to be logical and easy to follow.

    5. Integrated Contextual Help. This is a fancy way of saying that you can practically yell "help!" if you do get stuck on something. Solving a problem is as simple as highlighting and hitting F1 for contextual help.

    6. Visual Indicators and Messages. Wouldn't it be nice to know exactly where something has gone wrong before trying to complete a project? expressor Studio has a built-in system to catch mistakes, highlight them in a bright color, flash a warning message, and even disable functions before you can continue and possibly lose hours of work.

    7. Property Inputs and Selectors. Every operator will have a list of requirements that need to be filled in. But don't worry; you won't have to make stuff up to fill in the boxes. Each one will have a drop-down menu with options to choose from, but not so many as to be confusing.

    8. Connection Wizards. Configuring connections can be the hardest part of a project. But not with the expressor Studio connection wizard. A familiar, Windows-style menu will walk you through connections so quickly you'll forget what trouble it used to be.

    9. Templates. With large, complex projects, a majority of your time is often spent simply setting up the files and inputting data. But expressor Studio allows you to create one file and then save it as a template, saving you hours of boring data input.

    10. Extension Manager. Let's say that you need a little more functionality or some new features in the program. A lot of software requires you to download complex plug-ins that need to be decompressed and installed. However, expressor Studio has extended its system with an Extension Manager, which allows for quick and easy installation of the functionality you need, without the need to download and decompress.

    Reference: Pinal Dave (http://blog.sqlauthority.com)


  • Setting up SharePoint without Active Directory

    - by eJugnoo
    In order to set up SharePoint without AD, you need to run the following PowerShell command in the Management Shell after installing SharePoint on your server, but before running the Config Wizard (we don't want to run this SP farm in stand-alone mode!).

    1. New-SPConfigurationDatabase

        SYNOPSIS
            Creates a new configuration database.

        SYNTAX
            New-SPConfigurationDatabase [-DatabaseName] <String> [-DatabaseServer] <String>
                [[-DirectoryDomain] <String>] [[-DirectoryOrganizationUnit] <String>]
                [[-AdministrationContentDatabaseName] <String>] [[-DatabaseCredentials] <PSCredential>]
                [-FarmCredentials] <PSCredential> [-Passphrase] <SecureString>
                [-AssignmentCollection <SPAssignmentCollection>] [<CommonParameters>]

        DESCRIPTION
            The New-SPConfigurationDatabase cmdlet creates a new configuration database on the
            specified database server. This is the central database for a new SharePoint farm.
            For permissions and the most current information about Windows PowerShell for
            SharePoint Products, see the online documentation
            (http://go.microsoft.com/fwlink/?LinkId=163185).

        RELATED LINKS
            Backup-SPConfigurationDatabase
            Disconnect-SPConfigurationDatabase
            Connect-SPConfigurationDatabase
            Remove-SPConfigurationDatabase

        REMARKS
            To see the examples, type: "get-help New-SPConfigurationDatabase -examples".
            For more information, type: "get-help New-SPConfigurationDatabase -detailed".
            For technical information, type: "get-help New-SPConfigurationDatabase -full".

    NOTE: Use the -AdministrationContentDatabaseName switch to pass the name of the admin database you want, instead of the GUID-based name it creates automatically. Hence, one can pretty easily control the Admin, Config, and Content database names at the time of farm creation. If creating a new farm, you can also delete and re-provision any automatically created service databases from the UI, to decide what database names you want.

    2. Run the SharePoint Configuration Wizard, and you'll see the server already added to the farm. Select "Do not disconnect from this server farm" and proceed. Select the port and the authentication method (NTLM in my case). Click Next, and the wizard will complete the remaining provisioning steps, including creation of the Central Admin web app on the desired port. Once successful, it will open the Central Admin site and ask you to run the Farm Config Wizard. I chose to skip it and do things manually, to remain in control of what is happening on the farm: creating the web app for site collections, creating the very first site collection, and any other service applications.

    I needed this to create a public-facing installation of SharePoint Foundation RTM on a server which didn't have AD. Now I am going to set up FBA, and possibly Live ID auth as well. I will also be setting up RBS and multi-tenancy on this farm, and will post any notes and findings here...

    --Sharad


  • Start/Stop a Windows Service from an ASP.NET page

    - by kaushalparik27
    Last week, I needed to complete one task, which I am going to blog about in this entry. The task: "Create a control-panel-like webpage to control (start/stop) the Windows services which are part of my solution, installed on the computer where the main application is hosted". Here are the important points to accomplish this:

    [1] You need to add a reference to System.ServiceProcess in your application. This namespace holds the ServiceController class used to access a Windows service.

    [2] You need to check the status of the Windows service before you explicitly start or stop it.

    [3] By default, an IIS application runs under the ASP.NET account, which doesn't have the access rights and permissions for Windows services. So, a very important part of the solution is impersonation. You need to impersonate the application, or the relevant part of the code, with the credentials of a user who has the proper rights and permissions to access the Windows service. If you try to access the Windows service without this, it will generate an "access denied" error.

    The alternatives are: You can impersonate the whole application by adding an identity tag in web.config, under the System.Web section, as:

        <identity impersonate="true" userName="" password=""/>

    The userName and password are the credentials of the user who has the rights to access the Windows service. But this would not be a wise solution, because you should not impersonate a whole website like this just to have access to a Windows service (which is going to be a small part of the code).

    The second alternative is: only impersonate the part of the code that needs to access the Windows service to start or stop it. I opted for this one. But, to be fair, I was really unaware of the code needed for impersonation. So, I just googled it and injected the code into my solution in a separate class file named "Impersonate", with the required static methods. In the Impersonate class, impersonateValidUser() is the method to impersonate a part of the code, and undoImpersonation() is the method to undo the impersonation. You need to provide the domain name (which is "." if you are working on your home computer), username and password of the appropriate user to impersonate. (A sketch of such a helper follows at the end of this post.)

    [4] Here, it is very important to note that you have to store the access credentials (username and password) which you are going to use for impersonation in some secure, encrypted form. I have used machineKey encryption to store the encrypted values inside the database.

    [5] So now, the real part is to start or stop a Windows service. You are almost done, because the ServiceController class has simple Start() and Stop() methods, and it has a parameterized constructor that takes the name of the service. (See the start/stop methods in the sketch below.)

    Isn't that easy! ServiceController made it easy :) I have attached a working example with this post here to start/stop the "SQLBrowser" service, where you need to provide the proper credentials of a user who has permission to access the Windows service.

    Hope it helps./.
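    The code screenshots from the original post are not reproduced here, so below is a minimal C# sketch of the two pieces described above: an "Impersonate" helper built on the Win32 LogonUser API, and ServiceController-based start/stop wrappers. Treat it as an illustration under the post's assumptions, not the post's exact attachment; the ServiceHelper class name and the timeout values are my own choices, and the post's undoImpersonation() corresponds here to calling Undo() on the returned context.

        using System;
        using System.Runtime.InteropServices;
        using System.Security.Principal;
        using System.ServiceProcess; // requires a reference to System.ServiceProcess

        // Roughly the "Impersonate" helper the post describes: log on as a privileged
        // user and run the subsequent code under that identity.
        public static class Impersonate
        {
            private const int LOGON32_LOGON_INTERACTIVE = 2;
            private const int LOGON32_PROVIDER_DEFAULT = 0;

            // Win32 API that validates the credentials and returns an access token.
            [DllImport("advapi32.dll", SetLastError = true)]
            private static extern bool LogonUser(string userName, string domain, string password,
                                                 int logonType, int logonProvider, out IntPtr token);

            public static WindowsImpersonationContext ImpersonateValidUser(string userName, string domain, string password)
            {
                IntPtr token;
                if (!LogonUser(userName, domain, password, LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
                {
                    throw new InvalidOperationException("LogonUser failed; check the credentials.");
                }
                // Production code should also close the token handle (via CloseHandle) when done.
                return new WindowsIdentity(token).Impersonate();
            }
        }

        // Start/stop wrappers around ServiceController, checking the status first as the post advises.
        public static class ServiceHelper
        {
            public static void StartService(string serviceName)
            {
                using (var controller = new ServiceController(serviceName))
                {
                    if (controller.Status == ServiceControllerStatus.Stopped)
                    {
                        controller.Start();
                        controller.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
                    }
                }
            }

            public static void StopService(string serviceName)
            {
                using (var controller = new ServiceController(serviceName))
                {
                    if (controller.CanStop && controller.Status == ServiceControllerStatus.Running)
                    {
                        controller.Stop();
                        controller.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));
                    }
                }
            }
        }

    A page's code-behind could then wrap the service call in impersonation like this (the credentials shown are hypothetical placeholders; in the post's setup they would come from the machineKey-encrypted database values):

        var context = Impersonate.ImpersonateValidUser("serviceUser", ".", "secret");
        try
        {
            ServiceHelper.StartService("SQLBrowser");
        }
        finally
        {
            context.Undo();  // always revert to the original identity
        }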

    Read the article

  • A Letter for Your CEO About Social Marketing’s Future

    - by Mike Stiles
    We’ll leave it to you to decide if or how to sneak this in front of them.

    Dear Chief: This social marketing thing looks serious. It’s gone beyond having a Facebook page and putting our info and a few promotions on it. It’s seriously disrupting how we’ve always done marketing, and its implications reach well beyond marketing. My concern is that we stay positioned ahead of these changes and are prepared to embrace, adapt and capitalize on these new capabilities, as opposed to spending valuable time and money trying to shoehorn social into “the way we’ve always done things.” I’m also concerned about what happens if our competition executes on this before we do.

    The days of being able to impose our ad messaging on the masses to great effect are numbered. The public now has the tech tools and ability to filter out things that are irrelevant to them. And frankly, spending ad dollars to reach unlikely prospects isn’t the most efficient path for us either. Today, our customers have to genuinely love what we do. That starts with a renewed, customer-centric focus on the quality and usability of our product. If their experience with it is bad, they now have very connected, loud voices that will testify against us. We can’t afford that.

    Next, their customer service experience, before and after the sale, has to be a pleasant surprise. That requires truly knowing our customers and listening to them. Lip service won’t cut it. We have to get and use as much data on the customer as possible, interact with them wherever they want to interact with us, and commit to impressing them. If we do, they’ll get out there and advertise for us. Since peer-to-peer recommendation is the most effective marketing, that’s money in the bank.

    Social marketing is about forming relationships, the same way individuals use social. We want them to know us, trust us, and get real value from knowing us. That requires honesty and transparency that before now might have been uncomfortable. I propose that if we clearly make everything we do about our customers’ wants and needs, we’ll have nothing to hide. It will solidify customer loyalty, retention, and thus, revenue.

    These things can’t happen without certain tools and structural changes in the organization. There are social cloud platforms that integrate social management into all of the necessary areas: CRM, customer service, sales, marketing automation, content marketing, ecommerce, etc. This will give us a real-time, complete view of the customer, so their every interaction with us is attentive, personalized, accurate, relevant, and satisfying. Without it, we’re just a collage of disjointed systems, each gathering data that informs only its own departmental silo. The customer is voluntarily giving us everything we need to know about them to win them over, but we have to start listening and putting the pieces together.

    There’s still time. Brands are coming to terms with this transition to the socially enabled enterprise, but so far they aren’t moving very fast. Like us, they’re dealing with long-entrenched technologies and processes. CMOs and CIOs have to form new partnerships. Content operations have to be initiated and properly staffed and funded. Various departments must be able to utilize interconnected big data. What will separate the winners from the losers? Well chief, that’s why I’m writing you. It’s in your hands. These initiatives won’t get the kind of priority and seriousness that inspire actual deadlines and action unless they come from your desk.
You have to be the champion of customer centricity. You have to be our change agent. You have to be our innovator. Otherwise, it’s going to be business as usual, and that puts us in a very vulnerable place.

Sincerely,
Your Team

@mikestiles
Photo: Gary Scott, stock.xchng

    Read the article

  • Keepin’ It Simple with StorageTek SL150

    - by Kristin Rose
    Are your customers’ archive and data protection environments getting out of hand? Are they looking for a little simplicity in their lives? How about some scalability? Or are they looking for a way to save on capital and operational expenses? If you answered yes to any of these, then Oracle's new StorageTek SL150 Modular Tape Library is the product for you. It beats the competition in terms of simplicity, scalability and savings, and provides some seriously wallet-friendly revenue opportunities for you. If the long-term service annuities on the SL150 aren’t convincing enough, then the resale margins, rebates and follow-on revenue from modular upgrades will be! The SL150 simplifies StorageTek’s tape portfolio by replacing three products with one scalable solution that provides an entry point for repeat business within accounts. The SL150 expands your potential storage customer base to smaller companies with low-cost, simple upgrades and streamlined management that help alleviate key customer pain points. With the SL150, your customers will be able to simplify growth of their archive and data protection environments with small entry configurations and 10x growth, something that would require multiple box swaps across up to three product categories with competitive products. With the SL150, Oracle can help you provide greater customer satisfaction with Simplicity, Scalability and Savings! We know you’re probably wondering how you can get started and sell this new and magnificent product… Well, look no further, because the only thing you need to do is complete the SL150 Guided Learning Paths (GLPs). For some extra insight, watch the video below on the new StorageTek SL150 modular tape library, and don’t forget to ‘tweet’ this post and share it on Facebook to spread the good news!

    Wishing you Simplicity, Scalability and Savings,
    The OPN Communications Team

    Read the article

  • Sun Fire X4800 M2 Posts World Record x86 SPECjEnterprise2010 Result

    - by Brian
    Oracle's Sun Fire X4800 M2, using the Intel Xeon E7-8870 processor, and Sun Fire X4470 M2, using the Intel Xeon E7-4870 processor, produced a world-record single-application-server SPECjEnterprise2010 benchmark result of 27,150.05 SPECjEnterprise2010 EjOPS. The Sun Fire X4800 M2 server ran the application tier and the Sun Fire X4470 M2 server was used for the database tier. The Sun Fire X4800 M2 server demonstrated 63% better performance compared to the IBM Power 780 server result of 16,646.34 SPECjEnterprise2010 EjOPS, and 4% better performance than the Cisco UCS B440 M2 result; both results used the same number of processors. This result used Oracle WebLogic Server 12c, Java HotSpot(TM) 64-Bit Server 1.7.0_02, and Oracle Database 11g, and was produced using Oracle Linux.

    Performance Landscape

    Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results. The table below compares against the best results from IBM and Cisco (SPECjEnterprise2010 Performance Chart as of 3/12/2012; * SPECjEnterprise2010 EjOPS, bigger is better).

    Submitter | EjOPS*    | Application Server                                                          | Database Server
    Oracle    | 27,150.05 | 1x Sun Fire X4800 M2, 8x 2.4 GHz Intel Xeon E7-8870, Oracle WebLogic 12c   | 1x Sun Fire X4470 M2, 4x 2.4 GHz Intel Xeon E7-4870, Oracle Database 11g (11.2.0.2)
    Cisco     | 26,118.67 | 2x UCS B440 M2 Blade Server, 4x 2.4 GHz Intel Xeon E7-4870, Oracle WebLogic 11g (10.3.5) | 1x UCS C460 M2 Blade Server, 4x 2.4 GHz Intel Xeon E7-4870, Oracle Database 11g (11.2.0.2)
    IBM       | 16,646.34 | 1x IBM Power 780, 8x 3.86 GHz POWER7, WebSphere Application Server V7      | 1x IBM Power 750 Express, 4x 3.55 GHz POWER7, IBM DB2 9.7 Workgroup Server Edition FP3a

    Configuration Summary

    Application Server: 1x Sun Fire X4800 M2; 8x 2.4 GHz Intel Xeon processor E7-8870; 256 GB memory; 4x 10 GbE NIC; 2x FC HBA; Oracle Linux 5 Update 6; Oracle WebLogic Server 11g Release 1 (10.3.5); Java HotSpot(TM) 64-Bit Server VM on Linux, version 1.7.0_02 (Java SE 7 Update 2)

    Database Server: 1x Sun Fire X4470 M2; 4x 2.4 GHz Intel Xeon E7-4870; 512 GB memory; 4x 10 GbE NIC; 2x FC HBA; 2x Sun StorageTek 2540 M2; 4x Sun Fire X4270 M2; 4x Sun Storage F5100 Flash Array; Oracle Linux 5 Update 6; Oracle Database 11g Enterprise Edition Release 11.2.0.2

    Benchmark Description

    SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark has been designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real-world workload driving the application server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems. The workload consists of an end-to-end web-based order processing domain, an RMI and Web Services driven manufacturing domain, and a supply chain model utilizing document-based Web Services. The application is a collection of Java classes, Java Servlets, Java Server Pages, Enterprise Java Beans, Java Persistence entities (POJOs) and Message Driven Beans. The SPECjEnterprise2010 benchmark heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

    The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second ("SPECjEnterprise2010 EjOPS"). This metric is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is no price/performance metric in this benchmark.

    Key Points and Best Practices
    - Sixteen Oracle WebLogic server instances were started using numactl, binding 2 instances per chip.
    - Eight Oracle database listener processes were started, binding 2 instances per chip using taskset.
    - Additional tuning information is in the report at http://spec.org.

    See Also
    - Oracle Press Release -- SPECjEnterprise2010 Results Page
    - Sun Fire X4800 M2 Server (oracle.com, OTN)
    - Sun Fire X4270 M2 Server (oracle.com, OTN)
    - Sun Storage 2540-M2 Array (oracle.com, OTN)
    - Oracle Linux (oracle.com, OTN)
    - Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)
    - WebLogic Suite (oracle.com, OTN)

    Disclosure Statement

    SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Sun Fire X4800 M2, 27,150.05 SPECjEnterprise2010 EjOPS; IBM Power 780, 16,646.34 SPECjEnterprise2010 EjOPS; Cisco UCS B440 M2, 26,118.67 SPECjEnterprise2010 EjOPS. Results from www.spec.org as of 3/27/2012.

    Read the article

  • Changing the action of a hyperlink in a Silverlight RichTextArea

    - by Marc Schluper
    The title of this post could also have been "Move over Hyperlink, here comes Actionlink" or "Creating interactive text in Silverlight." But alas, there can be only one.

    Hyperlinks are very useful. However, they are also limited, because their action is fixed: browse to a URL. This may have been adequate at the start of the Internet, but nowadays, in web applications, the one thing we do not want to happen is a complete change of context. In applications we typically want a hyperlink selection to initiate an action that updates part of the screen. For instance, if my application displays a map with some text next to it, the map would react to the selection of a hyperlink in the text, e.g. by zooming in on a location and displaying additional locational information in a popup. In this way, the text becomes interactive text.

    It is quite common for one company to create and maintain websites for many client companies. To keep maintenance costs low, it is important that the content of these websites can be updated by the client companies themselves, without the need to involve a software engineer. To accommodate this scenario, we want the author of the interactive text to configure all hyperlinks (without writing any code). In a Silverlight RichTextArea, the default action of a Hyperlink is the same as that of a traditional hyperlink, but it can be changed: if the Command property has a value, then upon a click event this command is called with the value of the CommandParameter as its parameter. How can we let the author of the text specify a command for each hyperlink in the text, and how can we let an application react properly to a hyperlink selection event? We are talking about any command here. Obviously, a given application recognizes only a specific set of commands, with well defined parameters, but the approach we take here is generic in the sense that it pertains to the RichTextArea and any command.

    So what do we require? We wish that:
    1. As a text author, I can configure the action of a hyperlink in a (rich) text without writing code;
    2. As a text author, I can persist the action of a hyperlink with the text;
    3. As a reader of persisted text, I can click a hyperlink and the configured action will happen;
    4. As an application developer, I can configure a control to use my application-specific commands.

    In an excellent introduction to the RichTextArea, John Papa shows (among other things) how to persist a text created using this control. To meet our requirements, we can create a subclass of RichTextArea that uses John's code and allows plugging in two command-specific components: one to prompt for a command definition, and one to execute the command. Since both of these plugins are application specific, our RichTextArea subclass should not assume anything about them except their interface.

        public interface IDefineCommand
        {
            void Prompt(string content,                   // the link content
                        Action<string, object> callback); // the method called to convey the link definition
        }

        public interface IPerformCommand : ICommand {}

    The IDefineCommand plugin receives the content of the link (the text visible to the reader) and displays some kind of control that allows the author to define the link. When that's done, this (possibly changed) content string is conveyed back to the RichTextArea, together with an object that defines the command to execute when the link is clicked by the reader of the published text. The IPerformCommand plugin simply implements System.Windows.Input.ICommand. Let's use MEF to load the proper plugins.
In the example solution there is a project that contains rudimentary implementations of these. The IDefineCommand plugin simply prompts for a command string (cf. a command line or query string), and the IPerformCommand plugin displays a MessageBox showing this command string. An actual application using this extended RichTextArea would have its own set of commands, each having their own parameters, and hence would provide more user friendly application specific plugins. Nonetheless, in any case a command can be persisted as a string and hence the two interfaces defined above suffice. For a Visual Studio 2010 solution, see my article on The Code Project.
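    To make the plugin idea concrete, here is a hedged C# sketch of what the rudimentary IPerformCommand implementation described above might look like: an MEF-exported command that simply echoes the persisted command string in a MessageBox. The class name is illustrative and is not taken from the actual Code Project sources.

        using System;
        using System.ComponentModel.Composition;
        using System.Windows;

        // Exported so the extended RichTextArea can discover it via MEF composition.
        [Export(typeof(IPerformCommand))]
        public class ShowCommandStringCommand : IPerformCommand
        {
            // The parameter is the command string persisted with the hyperlink.
            public bool CanExecute(object parameter)
            {
                return parameter is string;
            }

            public void Execute(object parameter)
            {
                // A real application would parse the string and run an
                // application-specific action (zoom a map, show a popup, etc.).
                MessageBox.Show((string)parameter, "Command", MessageBoxButton.OK);
            }

            // Never raised here; the command is always executable for strings.
            public event EventHandler CanExecuteChanged { add { } remove { } }
        }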

    Read the article

  • Installing SharePoint 2010 in one machine with built in database

    - by sreejukg
    It is very easy to deploy SharePoint 2010 on a single server using the built-in database; normally you would choose such an installation for evaluation purposes. When installing with default settings, setup installs a Microsoft SQL Server 2008 Express database along with SharePoint. After installing SharePoint, you need to run the SharePoint Products and Technologies Configuration Wizard, which installs the Central Administration website and creates the configuration database and content database for SharePoint sites.

    Limitations:
    1. You cannot perform this installation on a domain controller.
    2. The maximum database size for the Express edition is 4 GB.

    SharePoint 2010 only supports 64-bit operating systems. The installation steps below are for Windows Server 2008 R2 Enterprise Edition (64-bit).

    Installation steps: From the installation's opening screen, the first step is to install the software prerequisites; click the corresponding link. Click Next; here you have to agree to the license terms. Select the checkbox and then click Next. The installation starts and the progress is updated on screen. This may take some time, as some components need to be downloaded from the Internet during this process; make sure you are connected to the Internet, otherwise the installation will not succeed. If any error occurs, it is displayed and you need to resolve it in order to continue. If everything is OK you will see the success page. Click Finish to exit the installation window.

    Now, from the first screen, select "Install SharePoint Server". This installs SharePoint and the SQL Server 2008 Express edition. First you need to enter the product key for SharePoint; enter it and click Continue. Now you need to accept the license agreement: select the checkbox and click Continue. Next, select the installation type you want and click the Standalone button. In a production scenario you would select the server farm installation; this article only covers the first option, and installing a server farm is outside its scope. Once you click Standalone, the installation starts and you can view the progress. If any error occurs during installation, you will get the details and a link to the log file; refer to the log file, fix the corresponding issue, and start the installation again. If the installation completes without any error, make sure the "Run the SharePoint Products Configuration Wizard now" checkbox is selected and click Close.

    The SharePoint Products Configuration Wizard starts. Click Next; a warning appears. Click Yes, and the configuration steps start; you can view the progress for each step. Once completed, click Finish. The SharePoint installation is now complete. You can navigate to the SharePoint Central Administration website from Administrative Tools and start building your portal. Good luck!

    Read the article

  • Don’t Kill the Password

    - by Anthony Trudeau
    A week ago, Mr. Honan from Wired.com penned an article on security titled "Kill the Password: Why a String of Characters Can’t Protect Us Anymore." He asserts that the password is not effective and a new solution is needed. Unfortunately, Mr. Honan was a victim of hacking, and as a result he has a victim’s vendetta. His conclusion is ill conceived, even though there are smatterings of truth and good advice in it.

    The password is a security barrier, much like a lock on your door. In and of itself it doesn't guarantee protection. You can have a good password, akin to a steel-reinforced door with the best lock money can buy, or you can have a poor password like "password," which is more like the sliding latch on a bathroom stall. But, just like in the real world, a lock isn't always enough. You can have a lock, security system, video cameras, guard dogs, and even armed security guards, but none of that guarantees your protection. Even top secret government agencies can be breached by someone who is just that good (as dramatized in movies like Mission Impossible). And that's the crux of it. There are real hackers out there who are that good. Killer coding ninja monkeys do exist! We still have locks on our doors because they still serve their role. Passwords are no different.

    Security doesn't end with the password. Most people would agree that stuffing your mattress with your life savings isn't a good idea even if you have the best locks and security system. Most people agree it's safest to have the money in a bank. Essentially this is compartmentalization. Compartmentalization extends to the online world as well. You're at risk if your online banking accounts are linked to the same account as your social networks. This is especially true if you're lackadaisical about linking those social networks to outside sources, including apps. The object here is to minimize the damage that can be done. An attacker should not be able to get into your bank account because they breached your Twitter account.

    It's time to prioritize once you've compartmentalized. This simply means deciding how much security you want for the different compartments, which I'll call security zones. Social networking applications like Facebook provide a lot of security features. However, security features are almost always a compromise with privacy and convenience. It's similar to an engineering adage, but in this case it's security, convenience, and privacy – pick two. For example, you might use a safe instead of a bank to store your money, because the convenience of having your money closer, or the privacy of not having the bank records, is more important than the added security.

    The following are lists of security do's and don'ts (these aren't meant to be exhaustive, and each could be an article in and of itself):

    Security Do's:
    - Use strong passwords based on a phrase
    - Use encryption whenever you can (e.g. HTTPS in Facebook)
    - Use a firewall (and learn to use it properly)
    - Configure security on your router (including port blocking)
    - Keep your operating system patched
    - Make routine backups of important files
    - Realize that if you're not paying for it, you're the product

    Security Don'ts:
    - Link accounts if at all possible
    - Reuse passwords across your security zones
    - Use real answers for security questions (e.g. mother's maiden name)
    - Trust anything you download
    - Ignore message boxes shown by your system or browser
    - Forget to test your backups
    - Share your primary email indiscriminately

    Only you can decide your comfort level between convenience, privacy, and security.
Attackers are going to find exploits in software. Software is complex and depends on other software. The exploits are the responsibility of the software company. But your security is always your responsibility. Complete security is an illusion. But, there is plenty you can do to minimize the risk online just like you do in the physical world. Be safe and enjoy what the Internet has to offer. I expect passwords to be necessary just as long as locks.

    Read the article

  • Why Cornell University Chose Oracle Data Masking

    - by Troy Kitch
    One of the eight Ivy League schools, Cornell University found itself in the unfortunate position of having to inform over 45,000 university community members that their personal information had been breached when a laptop was stolen. To make sure this would not happen again, Cornell took steps to ensure that data used for non-production purposes is de-identified with Oracle Data Masking. A recent podcast highlights why organizations like Cornell are choosing Oracle Data Masking to irreversibly de-identify production data for use in non-production environments.

    Organizations often copy production data, which contains sensitive information, into non-production environments so they can test applications and systems using "real world" information. Data in non-production environments has increasingly become a target of cyber criminals and can be lost or stolen due to weak security controls and unmonitored access. As in production environments, data breaches in non-production environments can cost millions of dollars to remediate and cause irreparable harm to reputation and brand.

    Cornell's applications and databases help carry out the administrative and academic mission of the university. They run Oracle PeopleSoft Campus Solutions, which includes highly sensitive faculty, student, alumni, and prospective-student data. This data is supported and accessed by a diverse set of developers and functional staff distributed across the university. Several years ago, Cornell experienced a data breach when an employee's laptop was stolen. Centrally stored backup information indicated there was sensitive data on the laptop. With no way of knowing what the criminal intended, the university had to spend significant resources reviewing data, setting up service centers to handle constituent concerns, and providing free credit checks and identity theft protection services, all of which cost money and took time away from other projects. To avoid this issue in the future, Cornell came up with several options, one of which was to sanitize the testing and training environments.

    "The project management team was brought in and they developed a project plan and implementation schedule; part of which was to evaluate competing products in the market-space and figure out which one would work best for us. In the end we chose Oracle's solution based on its architecture and its functionality." – Tony Damiani, Database Administration and Business Intelligence, Cornell University

    The key goals of the project were to mask the elements that were identifiable as sensitive in a consistent and efficient manner, but still support all the previous activities in the non-production environments. Tony concludes, "What we saw was a very minimal impact on performance. The masking process added an additional three hours to our refresh window, but it was well worth that time to secure the environment and remove the sensitive data. I think some other key points you can keep in mind here is that there was zero impact on the production environment. Oracle Data Masking works in non-production environments only.
Additionally, the risk of exposure has been significantly reduced and the impact to business was minimal."

With Oracle Data Masking, organizations like Cornell can:
- Make application data securely available in non-production environments
- Prevent application developers and testers from seeing production data
- Use an extensible template library and policies for data masking automation
- Gain the benefits of referential integrity so that applications continue to work

Listen to the podcast to hear the complete interview. Learn more about Oracle Data Masking by registering to watch this SANS Institute Webcast and view this short demo.

    Read the article

  • Book Review: Programming Windows Identity Foundation

    - by DigiMortal
    Programming Windows Identity Foundation by Vittorio Bertocci is, right now, the only serious book available about Windows Identity Foundation. I started using Windows Identity Foundation when I made my first experiments with the Windows Azure AppFabric Access Control Service. I wanted to generalize the way people authenticate themselves to my systems, and AppFabric ACS seemed like a good place to start. My first steps trying to get things working opened the door to a whole new authentication world for me. As I went through different blog postings and articles to get more information, I discovered that the thing I was trying to use was exactly what I was looking for. Having found the best security API for .NET, I wanted to know more about it, and this is how I found Programming Windows Identity Foundation.

    What's inside? Programming WIF focuses on the architecture, design and implementation of WIF. I think Vittorio is very good at teaching people, because the book never becomes too complex. You learn more and more as you read, and, helpfully, you can try out your new knowledge of WIF immediately. After giving a good overview of WIF, the author moves on and introduces how to use WIF in ASP.NET applications. You get a complete picture of how WIF integrates into the ASP.NET request-processing pipeline and how you can control the process yourself. There are two chapters about ASP.NET: the first is more of an introduction, while the second goes deeper and deeper until you have a very good idea of how to use ASP.NET and WIF together, what issues you may face, and how you can configure and extend WIF. Two further chapters cover using WIF with Windows Communication Foundation (WCF) and Windows Azure. The WCF chapter expects you to know WCF very well; it is not an introductory chapter for beginners, and it is heavy reading if you are not familiar with WCF. The chapter about Windows Azure describes how to use WIF in cloud applications. The last chapter talks about some future developments of WIF and describes some problems and their solutions; its most interesting part is the section about Silverlight.

    Who should read this book? Programming WIF is targeted at developers. It does not matter if you are a beginner or an old bullet-proof professional – every developer should be able to read this book with no difficulty. I don't recommend this book to administrators and project managers, because they will find almost nothing related to their work. I strongly recommend this book to all developers who are interested in modern authentication methods on the Microsoft platform. The book is written so well that I almost forgot everything around me while reading it. All the additional tools you need are free, and there is also a test version of Azure AppFabric ACS available, so you can try it out for free.

    Table of contents:
    Foreword
    Acknowledgments
    Introduction
    Part I: Windows Identity Foundation for Everybody
    1. Claims-Based Identity
    2. Core ASP.NET Programming
    Part II: Windows Identity Foundation for Identity Developers
    3. WIF Processing Pipeline in ASP.NET
    4. Advanced ASP.NET Programming
    5. WIF and WCF
    6. WIF and Windows Azure
    7. The Road Ahead
    Index

    Read the article

  • Is Microsoft’s Cloud Bet Placed on the Ground?

    - by andrewbrust
    Today at the University of Washington, Steve Ballmer gave a speech on Microsoft’s cloud strategy. Significantly, Azure was only briefly mentioned and was not shown. Instead, Ballmer spoke about what he called the five “dimensions” of the cloud, and used that as the basis for an almost philosophical discussion. Ballmer opined on how the cloud should be distinguished from the Internet, as well as on what the cloud will and should enable. Ballmer worked hard to portray the cloud not as a challenger to Windows and PCs (as Google would certainly suggest it is) but really as just the latest peripheral that adds value to PCs and devices.

    At one point during his speech, Ballmer said “We start with Windows at Microsoft. It’s the most popular smart device on the planet. And our design center for the future of Windows is to make it one of those smarter devices that the cloud really wants.” I’m not sure I agree with Ballmer’s ambition here, but I must admit he’s taken the “software + services” concept and expanded on it in a more consumer-friendly fashion.

    There were demos too. For example, Blaise Aguera y Arcas reprised his Bing Maps demo from the TED conference held last month. And Simon Atwell showed how Microsoft has teamed with Sky TV in the UK to turn Xbox into something that looks uncannily like Windows Media Center. Specifically, an Xbox console app called Sky Player provides full access to Sky’s on-demand programming, but also live TV access to an array of networks carried on its home TV service, complete with an on-screen programming guide. Windows Phone 7 Series was shown quickly, and Ballmer told us that while Windows Mobile/Phone 6.5 and earlier were designed for voice and legacy functionality, Windows Phone 7 Series is designed for the cloud.

    Over and over during Ballmer’s talk (and those of his guest demo presenters), the message was clear: Microsoft believes that client (“smart”) devices, and not mere HTML terminals, are the technologies that best deliver on the promise of the cloud. The message was that PCs running Windows, game consoles, and smartphones whose native interfaces are Internet-connected offer the most effective way to utilize cloud capabilities. Even the Bing Maps demo conveyed this message, because the advanced technology shown in the demo uses Silverlight (and thus the PC’s computing power), and not AJAX (which relies only upon the browser’s native scripting and rendering capabilities), to produce the impressive interface shown to the audience.

    Microsoft’s new slogan, with respect to the cloud, is “we’re all in.” Just as a Texas Hold ‘em player bets his entire stash of chips when he goes all in, so too is Microsoft “betting the company” on the cloud. But it would seem that Microsoft’s bet isn’t on the cloud in a pure sense, and is instead on the power of the cloud to fuel new growth in PCs and other client devices, Microsoft’s traditional comfort zone. Is that a bet or a hedge? If the latter, is Microsoft truly all in? I don’t really know. I think many people would say this is a sucker’s bet. But others would say it’s suckers who bet against Microsoft. No matter what, the burden is on Microsoft to prove this contrarian view of the cloud is a sensible one. To do that, they’ll need to deliver on cloud-connected device innovation. And to do that, the whole company will need to feel that victory is crucial. Time will tell. And I expect to present progress reports in future posts.

    Read the article

  • Announcing the ASP.NET and Web Tools 2012.2 Release Candidate

    - by ScottGu
    This week the ASP.NET and Visual Web Developer teams delivered the Release Candidate of the ASP.NET and Web Tools 2012.2 update (formerly the ASP.NET Fall 2012 Update BUILD Prerelease). This update extends the existing ASP.NET runtime and adds new web tooling to Visual Studio 2012. Whether you use Web Forms, MVC, Web API, or any other ASP.NET technology, there is something cool in this update for you. You can download and install the RC today: http://www.asp.net/vnext.

    Great ASP.NET Enhancements. This update adds new ASP.NET templates and features, including:
    - New ASP.NET MVC templates. Creating Facebook applications just became easier using the new Facebook Application template. In just a few easy steps you can create a Facebook application that gets data from the logged-in user as well as integrates with their friends. A new Single Page Application template allows developers to build interactive client-side web apps using Knockout, jQuery, and ASP.NET Web API.
    - Real-time communication support with ASP.NET SignalR. This enables you to easily take advantage of the new WebSocket support in .NET 4.5, while also automatically degrading to long-polling and other protocols for older clients. If you haven't tried SignalR yet you should – it is awesome. (A small hub sketch follows at the end of this entry.)
    - New ASP.NET Web API functionality, including support for OData, integrated tracing, and automatically generated help-page documentation for your API.
    - New ASP.NET Friendly URLs functionality. This new feature makes it very easy for Web Forms developers to generate cleaner-looking URLs (without the .aspx extension). The Friendly URLs feature also makes it easier for developers to add mobile support to their applications, with support for mobile .ASPX pages and for switching between desktop and mobile views. It can be used with existing ASP.NET v4.0 applications.
    - Visual Studio 2012 web publishing enhancements. Web site projects now have the same publish experience as web application projects (including to Windows Azure Web Sites), and you can selectively publish files, see the differences between local and remote files, and update local to remote files or vice versa.
    - Visual Studio 2012 Page Inspector enhancements. JavaScript selection mapping is now supported, and you can see CSS updates in real time.
    - Visual Studio 2012 editor support for Knockout IntelliSense and for pasting JSON as a .NET class (which makes it even easier to consume Web APIs from others).
    - Visual Studio 2012 project template updates, including the latest versions of jQuery, jQuery UI, jQuery Validation, Modernizr, Knockout and more…

    How it is delivered: You can download and install an integrated setup that contains the above enhancements today from http://www.asp.net/vnext. The new runtime functionality is delivered to ASP.NET via additional NuGet packages. This means that installing this update does not make any changes to the existing ASP.NET binaries, and thus does not cause any compatibility issues with existing projects. New projects will contain the new functionality, and existing projects can be updated with the new NuGet packages.

    Summary: Web development is changing, and ASP.NET is rapidly delivering new capabilities that help developers take full advantage of it. The ASP.NET and Web Tools 2012.2 update installs in minutes without altering the current ASP.NET runtime components. For a complete description see the Release Notes. Next week I plan to publish a tutorial showing how to build a cool Facebook application using the new Facebook template.
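    As a taste of the SignalR item above, here is a minimal C# hub in the style of the SignalR samples from this release; the hub name, method name, and the dynamic addMessage client callback are illustrative, not part of the update itself.

        using Microsoft.AspNet.SignalR;

        // A minimal SignalR hub: clients invoke Send, and the hub broadcasts
        // the message to every connected client. SignalR negotiates WebSockets
        // where available and degrades to long-polling otherwise.
        public class ChatHub : Hub
        {
            public void Send(string name, string message)
            {
                // addMessage is resolved dynamically on the client at runtime.
                Clients.All.addMessage(name, message);
            }
        }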
Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

< Previous Page | 509 510 511 512 513 514 515 516 517 518 519 520  | Next Page >