Search Results

Search found 6694 results on 268 pages for 'wait states'.


  • Use Twitter in Windows Media Center with TwitterMCE

    - by DigitalGeekery
    Are you a Media Center user who just can’t get enough Twitter? If so, you may want to check out the TwitterMCE plugin for Windows Media Center. Download and install the TwitterMCE application (see download link below). When you start Windows Media Center, you’ll find the TwitterMCE icon listed in the Extras. When you open the plug-in you’ll be prompted for a PayPal donation and have to wait out the 15 second timer. Next, you’ll need to log in to your Twitter account. Enter your Twitter account username and password. You can do this with the keyboard or by entering letters and numbers with a Media Center remote. When you are finished, select the Login button. You’ll be prompted to select Standard or Video Mode. Standard displays items in a more vertical fashion. Video displays them horizontally and one at a time, and also allows you to watch Live TV, a movie, or video at the same time. Reading Your Tweets: Clicking on Home allows you to read the latest Twitter messages from your friends. You can access the previous 20 tweets. Scroll up and down to see additional messages in Standard mode, or right and left in Video mode. Click on the individual Twitter messages to get more information, such as which friend sent the tweet. Create a Tweet: To create a tweet directly from Media Center, select the Update button. Type out your message using your keyboard or your remote and the on-screen keyboard. When you are finished, select Update to send your Twitter message. A few moments later your new tweet will appear. To send a tweet while you are watching TV or a video, log in to the TwitterMCE app, choose the Video mode, and select Update. Enter your tweet with the remote or keyboard. Select Update to send the tweet. You can also view Mentions, Friends, and Followers by selecting the appropriate button. Scroll through your list of friends to read their latest tweets. The TwitterMCE plugin works with Windows Vista Premium, Ultimate, and Windows 7. It might not completely replace your favorite Twitter app, but it will allow you to send all the tweets you want without having to take your eyes off your favorite TV programs. Download TwitterMCE

    Read the article

  • .NET 4: “Slim”-style performance boost!

    - by Vitus
    RTM versions of .NET 4 and Visual Studio 2010 are available, and now we can do some tests with them. Parallel Extensions is one of the most valuable parts of .NET 4.0. It’s a set of good tools for easily consuming multicore hardware power, and it also contains some “upgraded” sync primitives – the Slim versions. For example, it includes an updated variant of the widely known ManualResetEvent. For people who don’t know about it: you can synchronize the concurrent execution of pieces of code with this sync primitive. An instance of ManualResetEvent can be in 2 states: signaled and non-signaled. Transitions between them are made by calling the Set() and Reset() methods. A short illustration:

        Thread 1                     | Thread 2          | Time
        mre.Reset(); mre.WaitOne();  | //code execution  | 0
        //waiting                    | //code execution  | 1
        //waiting                    | //code execution  | 2
        //waiting                    | //code execution  | 3
        //waiting                    | mre.Set();        | 4
        //code execution             | //…               | 5

    The upgraded version of this primitive is ManualResetEventSlim. The idea is to decrease the performance cost in the case when only one thread uses it. The main concept is the “hybrid sync schema”, which can be implemented as follows:

        internal sealed class SimpleHybridLock : IDisposable
        {
            private Int32 m_waiters = 0;
            private AutoResetEvent m_waiterLock = new AutoResetEvent(false);

            public void Enter()
            {
                if (Interlocked.Increment(ref m_waiters) == 1) return;
                m_waiterLock.WaitOne();
            }

            public void Leave()
            {
                if (Interlocked.Decrement(ref m_waiters) == 0) return;
                m_waiterLock.Set();
            }

            public void Dispose() { m_waiterLock.Dispose(); }
        }

    It’s a sample from Jeffrey Richter’s book “CLR via C#”, 3rd edition. The SimpleHybridLock primitive has two public methods: Enter() and Leave(). You can put your concurrency-critical code between calls to these methods, and it will be executed by only one thread at a time. The code is really simple: the first thread to call Enter() increments the counter and returns. A second thread also increments the counter, and then suspends until m_waiterLock is signaled. So, if we don’t have concurrent access to our lock, the “heavy” methods WaitOne() and Set() are never called, which can give a performance bonus. ManualResetEventSlim uses a similar idea. Of course, it has more “smart” techniques inside, like checking for recursive calls, and so on. I wanted to know the real difference between the classic ManualResetEvent implementation and the new Slim version.
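    To make the signaling pattern in the table above concrete, here is a minimal usage sketch (not from the original article) showing two threads coordinating through a ManualResetEventSlim; the class and variable names are only illustrative:

        using System;
        using System.Threading;

        class SignalDemo
        {
            static void Main()
            {
                // Created non-signaled, so Wait() blocks until Set() is called.
                var gate = new ManualResetEventSlim(false);

                var worker = new Thread(() =>
                {
                    Console.WriteLine("Worker: doing some work...");   // "code execution" in the table
                    Thread.Sleep(500);
                    Console.WriteLine("Worker: signaling the main thread.");
                    gate.Set();                                        // time 4 in the table
                });

                worker.Start();
                Console.WriteLine("Main: waiting for the signal...");
                gate.Wait();                                           // like mre.WaitOne() at time 0
                Console.WriteLine("Main: signal received, continuing.");
                worker.Join();
            }
        }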
    I wrote a simple “benchmark”:

        class Program
        {
            static void Main(string[] args)
            {
                ManualResetEventSlim mres = new ManualResetEventSlim(false);
                ManualResetEventSlim mres2 = new ManualResetEventSlim(false);

                ManualResetEvent mre = new ManualResetEvent(false);

                long total = 0;
                int COUNT = 50;

                for (int i = 0; i < COUNT; i++)
                {
                    mres2.Reset();
                    Stopwatch sw = Stopwatch.StartNew();

                    ThreadPool.QueueUserWorkItem((obj) =>
                    {
                        //Method(mres, true);
                        Method2(mre, true);
                        mres2.Set();
                    });
                    //Method(mres, false);
                    Method2(mre, false);

                    mres2.Wait();
                    sw.Stop();

                    Console.WriteLine("Pass {0}: {1} ms", i, sw.ElapsedMilliseconds);
                    total += sw.ElapsedMilliseconds;
                }

                Console.WriteLine();
                Console.WriteLine("===============================");
                Console.WriteLine("Done in average=" + total / (double)COUNT);
                Console.ReadLine();
            }

            private static void Method(ManualResetEventSlim mre, bool value)
            {
                for (int i = 0; i < 9000000; i++)
                {
                    if (value) { mre.Set(); } else { mre.Reset(); }
                }
            }

            private static void Method2(ManualResetEvent mre, bool value)
            {
                for (int i = 0; i < 9000000; i++)
                {
                    if (value) { mre.Set(); } else { mre.Reset(); }
                }
            }
        }

    I use two concurrent threads (the main thread and one from the thread pool) for setting and resetting the ManualResetEvents, run the test COUNT times, and calculate the average execution time. Here are the results for ManualResetEvent and ManualResetEventSlim, measured on my dual-core notebook with a T7250 CPU and Windows 7 x64: the difference is obvious and serious – about 10 times! So, I think the preferable way is to use ManualResetEventSlim, because calling Set() and Reset() does not always invoke the “heavy” methods that work with Windows kernel-mode objects. It’s a small and nice improvement! ;)

    Read the article

  • What Counts For a DBA – Decisions

    - by Louis Davidson
    It’s Friday afternoon, and the lead DBA, a very talented guy, is getting ready to head out for two well-earned weeks of vacation, with his family, when this error message pops up in his inbox: Msg 211, Level 23, State 51, Line 1. Possible schema corruption. Run DBCC CHECKCATALOG. His heart sinks. It’s ten…no eight…minutes till it’s time to walk out the door. He glances around at his coworkers, competent to handle many problems, but probably not up to the challenge of fixing possible database corruption. What does he do? After a few agonizing moments of indecision, he clicks shut his laptop. He’ll just wait and see. It was unlikely to come to anything; after all, it did say “possible” schema corruption, not definite. In that moment, his fate was sealed. The start of the solution to the problem (run DBCC CHECKCATALOG) had been right there in the error message. Had he done this, or at least took two of those eight minutes to delegate the task to a coworker, then he wouldn’t have ended up spending two-thirds of an idyllic vacation (for the rest of the family, at least) dealing with a problem that got consistently worse as the weekend progressed until the entire system was down. When I told this story to a friend of mine, an opera fan, he smiled and said it described the basic plotline of almost every opera or ‘Greek Tragedy’ ever written. The particular joy in opera, he told me, isn’t the warbly voiced leading ladies, or the plump middle-aged romantic leads, or even the music. No, what packs the opera houses in Italy is the drama of characters who, by the very nature of their life-experiences and emotional baggage, make all sorts of bad choices when faced with ordinary decisions, and so move inexorably to their fate. The audience is gripped by the spectacle of exotic characters doomed by their inability to see the obvious. I confess, my personal experience with opera is limited to Bugs Bunny in “What’s Opera, Doc?” (Elmer Fudd is a great example of a bad decision maker, if ever one existed), but I was struck by my friend’s analogy. If all the DBA cubicles were a stage, I think we would hear many similarly tragic tales, played out to music: “Error handling? We write our code to never experience errors, so nah…“ “Backups failed today, but it’s okay, we’ll back up tomorrow (we’ll back up tomorrow)“ And similarly, they would leave their audience gasping, not necessarily at the beauty of the music, or poetry of the lyrics, but at the inevitable, grisly fate of the protagonists. If you choose not to use proper error handling, or if you choose to skip a backup because, hey, you haven’t had a server crash in 10 years, then inevitably, in that moment you expected to be enjoying a vacation, or a football game, with your family and friends, you will instead be sitting in front of a computer screen, paying for your poor choices. Tragedies are very much part of IT. Most of a DBA’s day to day work has limited potential to wreak havoc; paperwork, timesheets, random anonymous threats to developers, routine maintenance and whatnot. However, just occasionally, you, as a DBA, will face one of those decisions that really matter, and which has the possibility to greatly affect your future and the future of your user’s data. Make those decisions count, and you’ll avoid the tragic fate of many an operatic hero or villain.

    Read the article

  • Hello, T4MVC – Goodbye, ASP.NET MVC “magic strings”

    - by Brian Schroer
    I’m working on my first ASP.NET MVC project, and I really, really like MVC. I hate all of the “magic strings”, though: <div id="logindisplay"> <% Html.RenderPartial("LogOnUserControl"); %> </div> <div id="menucontainer"> <ul id="menu"> <li><%=Html.ActionLink("Find Dinner", "Index", "Dinners")%></li> <li><%=Html.ActionLink("Host Dinner", "Create", "Dinners")%></li> <li><%=Html.ActionLink("About", "About", "Home")%></li> </ul> </div> They’re prone to misspelling (causing errors that won’t be caught until runtime), there’s duplication, there’s no Intellisense, and they’re not friendly to refactoring tools.   I had started down the path of creating static classes with constants for the strings, e.g.: <li><%=Html.ActionLink("Find Dinner", DinnerControllerActions.Index, Controllers.Dinner)%></li> …but that was pretty tedious.   Then I discovered T4MVC (http://mvccontrib.codeplex.com/wikipage?title=T4MVC). Just add its T4MVC.tt and T4MVC.settings.t4 files to the root of your MVC application, and it magically (and this time, it’s good magic) generates code that allows you to replace the first code sample above with this: <div id="logindisplay"> <% Html.RenderPartial(MVC.Shared.Views.LogOnUserControl); %> </div> <div id="menucontainer"> <ul id="menu"> <li><%=Html.ActionLink("Find Dinner", MVC.Dinners.Index())%></li> <li><%=Html.ActionLink("Host Dinner", MVC.Dinners.Create())%></li> <li><%=Html.ActionLink("About", MVC.Home.About())%></li> </ul> </div> It gives you a strongly-typed alternative to magic strings for all of these scenarios: Html.Action Html.ActionLink Html.RenderAction Html.RenderPartial Html.BeginForm Url.Action Ajax.ActionLink view names inside controllers But wait, there’s more! It even gives you static helpers for image and script links, e.g.: <img src="<%= Links.Content.nerd_jpg %>" />   <script src="<%= Links.Scripts.Map_js %>" type="text/javascript"></script> …instead of: <img src="/Content/nerd.jpg" />   <script src="/Scripts/Map.js" type="text/javascript"></script>   Thanks to David Ebbo for creating this great tool. You can watch an eight and a half minute video about T4MVC on Channel 9 via this link: http://channel9.msdn.com/posts/jongalloway/Jon-Takes-Five-with-David-Ebbo-on-T4MVC/. You can download T4MVC from its CodePlex page: http://mvccontrib.codeplex.com/wikipage?title=T4MVC.
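    Among the scenarios listed above is “view names inside controllers”, which means the same strongly-typed helpers work in controller code too. Here is a short, hedged sketch of how that typically looks with T4MVC; the DinnersController, the Dinner model, and the action names are hypothetical, and the exact helper names can vary slightly between T4MVC versions:

        public class DinnersController : Controller
        {
            public ActionResult Save(Dinner dinner)
            {
                if (!ModelState.IsValid)
                {
                    // Strongly-typed view name instead of return View("Edit");
                    return View(Views.Edit, dinner);
                }

                // Strongly-typed redirect instead of RedirectToAction("Index", "Dinners");
                return RedirectToAction(MVC.Dinners.Index());
            }
        }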

    Read the article

  • Big Data – How to become a Data Scientist and Learn Data Science? – Day 19 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of analytics in the Big Data story. In this article we will understand how to become a Data Scientist for the Big Data story. Data Scientist is a new buzzword; everyone seems to want to become a Data Scientist. Let us go over a few key topics related to Data Scientists in this blog post. First of all we will understand what a Data Scientist is. In the new world of Big Data, I see pretty much everyone wanting to become a Data Scientist, and there are lots of people I have already met who claim that they are Data Scientists. When I ask what their role is, I get a wide variety of answers. What is a Data Scientist? Data Scientists are the experts who understand various aspects of the business and know how to use data strategically to achieve the business goals. They should have a solid foundation in data algorithms, modeling and statistical methodology. What do Data Scientists do? Data Scientists understand the data very well. They go beyond the regular data algorithms and build interesting trends from the available data. They innovate and resurrect entirely new meaning from the existing data. They are artists in the disguise of computer analysts. They look at the data traditionally as well as explore various new ways to look at the data. Data Scientists do not wait to build their solutions from existing data. They think creatively; they think before the data has entered the system. Data Scientists are visionary experts who understand the business needs and plan ahead of time, which tremendously helps to build solutions at rapid speed. Besides being data experts, the major quality of Data Scientists is “curiosity”. They always wonder about what more they can get from their existing data and how to get the maximum out of future incoming data. Data Scientists do wonders with the data, which goes beyond the job descriptions of Data Analysts or Business Analysts. Skills Required for Data Scientists – here are a few of the skills a Data Scientist must have:
    - Expert-level skills with statistical tools like SAS, Excel, R etc.
    - Understanding of mathematical models
    - Hands-on experience with visualization tools like Tableau, PowerPivot, D3.js etc.
    - Analytical skills to understand business needs
    - Communication skills
    On the technology front, any Data Scientist should know the underlying technologies (Hadoop, Cloudera) as well as their entire ecosystem (programming languages, analysis and visualization tools etc.). Remember that becoming a successful Data Scientist requires truly excellent skills; just having a degree in a relevant education field will not suffice. Final Note: Data Scientist is indeed a very exciting job profile. As per research there are not enough Data Scientists in the world to handle the current data explosion. In the near future data is going to expand exponentially, and the need for Data Scientists will increase along with it. It is indeed the job one should focus on if you like data and the science of statistics. Courtesy: emc Tomorrow In tomorrow’s blog post we will discuss various Big Data learning resources. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • A (Late) Meme Monday Post: On SQLFamily

    - by Argenis
      Yesterday a member of the SQL community who I deeply admire sent me a DM on Twitter asking whether I had done a SQLFamily post for Thomas LaRock’s (blog|@SQLRockstar) Meme Monday for November. I replied that I did not, and I regretted not having done so. A subtle DM followed my response: “Get on it, you have all week”. And indeed I must. So here’s an attempt to express some of my feelings on a community that has catapulted my career like nothing else before I embraced it. Nanos Gigantium Humeris Insidentes I stand on the shoulders of giants. My SQLFamily has given me support at all levels. Professionally and personally. There is never a lack of will to help and provide advice to others in this community. And I do my best to help. On #SQLHelp on Twitter, via email, or even on the phone. I expect no retribution, because I know that when and if I do run into problems, my SQLFamily will be there for me. I have met some of the most humble, dedicated and most professional people in the SQL community. And some of them have pretty big titles: MVPs, MCMs, Regional Mentors, and even leaders of PASS, SQLCAT members, and even PMs and Devs on the SQL Server team. All are welcome, and that includes YOU! I have also met some people that are rather reserved and don’t participate as much in the community, for whatever reason. Be as it may, let it be know to all that we are a very welcoming community – heck, some of my closest friends and people I can count on in the community have completely opposite political views. We share one goal: to get better and help others get better. Even if you are a lurker – my hope is that one day you’ll decide to give back some of what you have learned. You have to take it to the next level On one of my previous jobs as an IT Supervisor I used to tell my team all the time about the benefits of continuous education and self-driven learning. Shortly after I left that job, the company went bankrupt and some of my staff got laid off – some without any severance pay whatsoever. I eventually found out that some of them had a really hard time finding another job, because their skills were simply outdated. They had become stale professionals. Don’t be one of them. If you don’t take advantage of these learning resources, somebody else will – and that person has an advantage over you when applying for that awesome job position that got opened. There’s a severe shortage of good DBAs and DB Devs out there. What’s your excuse for not being excellent? Even if your knowledge of SQL Server is at the beginner level, really – you have no excuse to get better. Just go to SQLUniversity and learn from there. Don’t get stale! Thank You To all of you in the SQL community who put so much time and energy into helping others, my deepest gratitude to you. I can’t wait to meet you all again at the next event and share our SQL stories over a pint of beer (or a shot of Jaeger) Cheers! -Argenis

    Read the article

  • USB external drive is not recognized by any OS, how to troubleshoot in Ubuntu?

    - by Breno
    First of all, I would like to mention that I saw a question similar to mine, but the error was different, so here's my problem... I have an external Samsung S2 500GB HD that one day just stopped working. I have tried it on other systems (Windows and Mac), but it is not recognized. In the Windows Device Manager, when I insert the USB drive it states that the device in question is not working properly. In the logs of my Ubuntu 4.12 I see the following messages when I plug the USB device in:

        [ 2967.560216] usb 7-2: new full-speed USB device number 2 using uhci_hcd
        [ 2967.680182] usb 7-2: device descriptor read/64, error -71
        [ 2967.904176] usb 7-2: device descriptor read/64, error -71
        [ 2968.120227] usb 7-2: new full-speed USB device number 3 using uhci_hcd
        [ 2968.240207] usb 7-2: device descriptor read/64, error -71
        [ 2968.464063] usb 7-2: device descriptor read/64, error -71
        [ 2968.680087] usb 7-2: new full-speed USB device number 4 using uhci_hcd
        [ 2969.092085] usb 7-2: device not accepting address 4, error -71
        [ 2969.208155] usb 7-2: new full-speed USB device number 5 using uhci_hcd
        [ 2969.624076] usb 7-2: device not accepting address 5, error -71
        [ 2969.624118] hub 7-0:1.0: unable to enumerate USB device on port 2
        [ 4520.240340] usb 7-1: new full-speed USB device number 6 using uhci_hcd
        [ 4520.364079] usb 7-1: device descriptor read/64, error -71
        [ 4520.588109] usb 7-1: device descriptor read/64, error -71
        [ 4520.804140] usb 7-1: new full-speed USB device number 7 using uhci_hcd
        [ 4520.924136] usb 7-1: device descriptor read/64, error -71
        [ 4521.148083] usb 7-1: device descriptor read/64, error -71
        [ 4521.364105] usb 7-1: new full-speed USB device number 8 using uhci_hcd
        [ 4521.776237] usb 7-1: device not accepting address 8, error -71
        [ 4521.888206] usb 7-1: new full-speed USB device number 9 using uhci_hcd
        [ 4522.296102] usb 7-1: device not accepting address 9, error -71
        [ 4522.296150] hub 7-0:1.0: unable to enumerate USB device on port 1
        [ 4749.036104] usb 7-2: new full-speed USB device number 10 using uhci_hcd
        [ 4749.156209] usb 7-2: device descriptor read/64, error -71
        [ 4749.380215] usb 7-2: device descriptor read/64, error -71
        [ 4749.596206] usb 7-2: new full-speed USB device number 11 using uhci_hcd
        [ 4749.716409] usb 7-2: device descriptor read/64, error -71
        [ 4749.940110] usb 7-2: device descriptor read/64, error -71
        [ 4750.156257] usb 7-2: new full-speed USB device number 12 using uhci_hcd
        [ 4750.572150] usb 7-2: device not accepting address 12, error -71
        [ 4750.684215] usb 7-2: new full-speed USB device number 13 using uhci_hcd
        [ 4751.100182] usb 7-2: device not accepting address 13, error -71
        [ 4751.100224] hub 7-0:1.0: unable to enumerate USB device on port 2

    Here is my system:

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 002: ID 08ff:2810 AuthenTec, Inc.
AES2810 00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07) 00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02) 00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02) 00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02) 00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02) 00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 02) 00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02) 00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02) 00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02) 00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02) 00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 92) 00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 02) 00:1f.2 IDE interface: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 2 port SATA Controller [IDE mode] (rev 02) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02) 00:1f.5 IDE interface: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 2 port SATA Controller [IDE mode] (rev 02) 02:01.0 CardBus bridge: Ricoh Co Ltd RL5c476 II (rev ba) 02:01.1 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller (rev 04) 02:01.2 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 21) 09:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5756ME Gigabit Ethernet PCI Express 0c:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01) Does anyone have any clue what would be the problem?

    Read the article

  • A little primer on using TFS with a small team

    - by johndoucette
    The scenario; A small team of 3 developers mostly in maintenance mode with traditional ASP.net, classic ASP, .Net integration services and utilities with the company’s third party packages, and a bunch of java-based Coldfusion web applications all under Visual Source Safe (VSS). They are about to embark on a huge SharePoint 2010 new construction project and wanted to use subversion instead VSS. TFS was a foreign word and smelled of “high cost” and of an “over complicated process”. Since they had no preconditions about the old TFS versions (‘05 & ‘08), it was fun explaining how simple it was to install a TFS server and get the ball rolling, with or without all the heavy stuff one sometimes associates with such a huge and powerful application management lifecycle product. So, how does a small team begin using TFS? 1. Start by using source control and migrate current VSS source trees into TFS. You can take the latest version or migrate the entire version history. It’s up to you on whether you want a clean start or need quick access to all the version notes and history of the bits. 2. Since most shops are mainly in maintenance mode with existing applications, begin using bug workitems for everything. When you receive an issue/bug from your current tracking system, manually enter the workitem in TFS right through Visual Studio. You can automate the integration to the current tracking system later or replace it entirely. Believe me, this thing is powerful and can handle even the largest of help desks. 3. With new construction, begin work with requirements and task workitems and follow the traditional sprint-based development lifecycle. Obviously, some minor training will be needed, but don’t fear, this is very intuitive and MSDN has a ton of lesson based labs and videos. 4. For the java developers, use the new Team Explorer Everywhere 2010 plugin (recently known as Teamprise). There is a seamless interface in Eclipse, but also a good command-line utility for other environments such as Dreamweaver. 5. Wait to fully integrate the whole workitem/project management/testing process until your team is familiar with the integrated workitems for bugs and code. After a while, you will see the team wanting more transparency into the work they are all doing and naturally, everyone will want workitems to help them organize the chaos! 6. Management will be limited in the value of the reports until you have a fully blown implementation of project planning, construction, build, deployment and testing. However, there are some basic “bug rate” reports and current backlog listings that can provide good information. Some notable explanations of TFS; Work Item Tracking and Project Management - A workitem represents the unit of work within the system which enables tracking of all activities produced by a user, whether it is a developer, business user, project manager or tester. The properties of a workitem such as linked changesets (checked-in code), who updated the data and when, the states and reasons for change, are all transitioned to a data warehouse within TFS for reporting purposes. A workitem can be defines as a "bug", "requirement", test case", or a "change request". They drive the work effort by the individual assigned to it and also provide a key role in defining what needs to be done. Workitems are the things the team needs to do to accomplish a goal. 
Test Case Management - Starting with a workitem known as a "test case", a tester (or developer) can now author and manage test cases within a formal test plan subsystem. Although TFS supports the test case workitem type, there is a new product known as the VS Test Professional 2010 which allows a tester to facilitate manual tests including fast forwarding steps in the process to arrive at the assertion point quickly. This repeatable process provides quick regression tests and can be conducted by the business user to ensure completeness during UAT. In addition, developers no longer can provide a response to a bug with the line "cannot reproduce". With every test run, attachments including the recorded session, captured environment configurations and settings, screen shots, intellitrace (debugging history), and in some cases if the lab manager is being used, a snapshot of the tested environment is available. Version Control - A modern system allowing shared check-in/check-out, excellent merge conflict resolution, Shelvesets (personal check-ins), branching/merging visualization, public workspaces, gated check-ins, security hierarchy capabilities, and changeset/workitem tracking. Knowing what was done with the code by any developer has become much easier to picture and resolve issues. Team Build - Automate the compilation process whether you need it to be whenever a developer checks-in code, periodically such as nightly builds for testers in the morning, or manual builds to be deployed into production. Each build can run through pre-determined tests, perform code analysis to see if the developer conforms to the team standards, and reject the build if either fails. Project Portal & Reporting - Provide management with a dashboard with insight into the project(s). "Where are we" in each step of the way including past iterations and the current burndown rate. Enabling this feature is easy as it seamlessly interfaces with existing SharePoint implementations.
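    Point 2 above suggests manually entering bug workitems through Visual Studio and automating the integration with your current tracking system later. As a rough illustration of what that automation could look like, here is a hedged sketch against the TFS 2010 client object model; the collection URL, project name, and field values are placeholders rather than anything from this article, and the exact API surface may differ slightly by TFS version:

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class BugImporter
        {
            static void Main()
            {
                // Placeholder collection URL and team project name.
                var collection = new TfsTeamProjectCollection(
                    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
                var store = collection.GetService<WorkItemStore>();
                Project project = store.Projects["MyTeamProject"];

                // Create a new Bug workitem, the same type you would enter from Visual Studio.
                WorkItemType bugType = project.WorkItemTypes["Bug"];
                var bug = new WorkItem(bugType)
                {
                    Title = "Login page throws 500 error on submit",
                    Description = "Imported automatically from the legacy help desk system."
                };

                // Validate() returns the fields that are still invalid; save only when it is empty.
                if (bug.Validate().Count == 0)
                {
                    bug.Save();
                    Console.WriteLine("Created bug #" + bug.Id);
                }
            }
        }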

    Read the article

  • How to get paid and figure out if I want to keep this client [migrated]

    - by Heiner Fawkes
    I have a client who is not paying on time, but it looks like the specifics don't match similar questions on this SE site. I got a call from a client I did website work for years ago. I had not done this kind of work for many years and frankly I'm not sure I want to now, but nevertheless about a month ago I agreed to bring his website, SEO, social media, and overall marketing for his small business up to speed. Why? He has told me many times how I'm the most honest, most well-informed contractor he's had experience with. And I personally kind of like him too. So I started working on an hourly basis. I sent one very small invoice and got paid. Then we talked a whole lot about all sorts of feature he would like me to implement. I started that work, and sent a second invoice on the first of the month (one of my two stated billing days). I didn't get paid. On every invoice it states that I charge a whopping ten percent per week late. I sent many voicemails and emails asking to please let me know what's going on with payment, and didn't get replies. Then the 15th of the month rolled around (which I stated initially as one of my invoicing dates). Since I hadn't been paid for the last invoice, I simply didn't send him an invoice at that time but emailed him and said that I will combine it with the next scheduled invoice for this reason (probably a bad idea I realize). Eventually he sent a portion of the invoice payment. I emailed back to let him know that he's three weeks late and what the remaining balance is. Finally we got in touch via phone. He basically told me that he thought I hadn't done all of the work I said I did. He looked at the page source code and it didn't look complete to him. I explained why his perception would be different and what work I had done as specified. He accepted this and said that part of the reason he didn't pay in full is that he's been swamped with personal family stuff, and part of the reason is that he didn't think I did all the work. That struck me as pretty weird. He also expressed concern that he has no idea now how much all the changes he has asked for are going to cost. And once again, he told me how honest and high-quality my services are compared to others he has dealt with. He also said he would pay me more (but not all) of the now three weeks overdue invoice that day. I didn't receive any payment. Basically this is how the client relationship strikes me: He's not good at communication. He's very busy and English isn't his first language. He almost never replies to emails but phone calls are fine. He's asked me to avoid emails for communication and I've asked him to please use email. He might not have enough money to afford all the things he has asked for. But so far I have been working for an hourly fee (which is quite high). He also has started paying monthly for hosting and social media services from me. What seems very abnormal is for a client to be so overdue on payments and to actually withhold payment of an invoice without any communication because he didn't think the work was done. I told him that I will send dollar estimates of each module of remaining work so that we can decide which ones are the highest priority if he cannot afford them all. I also reiterated that in the future if he has doubts about the work or an inability to pay, he must contact me immediately to say so. I basically plan to state the following to him: I would like to work for him and help his business. I also have sympathy for his recent family difficulties. 
    I am happy to figure out payment plans that would work better for him, but first I need to be paid in full for all outstanding invoices, especially given that I skipped one of them just to be nice. The most crucial thing I need is communication about any problems with my work or his ability to pay. Once again, he needs to pay in full immediately before we negotiate anything else. Does the above seem like an appropriate communication? Is anything missing from it? Is anything I'm doing here really abnormal?

    Read the article

  • C# 5 Async, Part 2: Asynchrony Today

    - by Reed
    The .NET Framework has always supported asynchronous operations.  However, different mechanisms for supporting exist throughout the framework.  While there are at least three separate asynchronous patterns used through the framework, only the latest is directly usable with the new Visual Studio Async CTP.  Before delving into details on the new features, I will talk about existing asynchronous code, and demonstrate how to adapt it for use with the new pattern. The first asynchronous pattern used in the .NET framework was the Asynchronous Programming Model (APM).  This pattern was based around callbacks.  A method is used to start the operation.  It typically is named as BeginSomeOperation.  This method is passed a callback defined as an AsyncCallback, and returns an object that implements IAsyncResult.  Later, the IAsyncResult is used in a call to a method named EndSomeOperation, which blocks until completion and returns the value normally directly returned from the synchronous version of the operation.  Often, the EndSomeOperation call would be called from the callback function passed, which allows you to write code that never blocks. While this pattern works perfectly to prevent blocking, it can make quite confusing code, and be difficult to implement.  For example, the sample code provided for FileStream’s BeginRead/EndRead methods is not simple to understand.  In addition, implementing your own asynchronous methods requires creating an entire class just to implement the IAsyncResult. Given the complexity of the APM, other options have been introduced in later versions of the framework.  The next major pattern introduced was the Event-based Asynchronous Pattern (EAP).  This provides a simpler pattern for asynchronous operations.  It works by providing a method typically named SomeOperationAsync, which signals its completion via an event typically named SomeOperationCompleted. The EAP provides a simpler model for asynchronous programming.  It is much easier to understand and use, and far simpler to implement.  Instead of requiring a custom class and callbacks, the standard event mechanism in C# is used directly.  For example, the WebClient class uses this extensively.  A method is used, such as DownloadDataAsync, and the results are returned via the DownloadDataCompleted event. While the EAP is far simpler to understand and use than the APM, it is still not ideal.  By separating your code into method calls and event handlers, the logic of your program gets more complex.  It also typically loses the ability to block until the result is received, which is often useful.  Blocking often requires writing the code to block by hand, which is error prone and adds complexity. As a result, .NET 4 introduced a third major pattern for asynchronous programming.  The Task<T> class introduced a new, simpler concept for asynchrony.  Task and Task<T> effectively represent an operation that will complete at some point in the future.  This is a perfect model for thinking about asynchronous code, and is the preferred model for all new code going forward.  Task and Task<T> provide all of the advantages of both the APM and the EAP models – you have the ability to block on results (via Task.Wait() or Task<T>.Result), and you can stay completely asynchronous via the use of Task Continuations.  In addition, the Task class provides a new model for task composition and error and cancelation handling.  This is a far superior option to the previous asynchronous patterns. 
The Visual Studio Async CTP extends the Task based asynchronous model, allowing it to be used in a much simpler manner.  However, it requires the use of Task and Task<T> for all operations.
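    Since the new pattern requires Task and Task<T> for all operations, older APM- and EAP-style APIs have to be wrapped before they can be awaited. The following is a small, hedged sketch of the two standard adaptation techniques (Task.Factory.FromAsync for APM, TaskCompletionSource for EAP); it is not code from the article, and FileStream and WebClient are simply used as familiar examples of each pattern:

        using System;
        using System.IO;
        using System.Net;
        using System.Threading.Tasks;

        static class AsyncAdapters
        {
            // APM -> Task: wrap BeginRead/EndRead with Task.Factory.FromAsync.
            public static Task<int> ReadAsyncApm(FileStream stream, byte[] buffer)
            {
                return Task<int>.Factory.FromAsync(
                    stream.BeginRead, stream.EndRead,
                    buffer, 0, buffer.Length, null);
            }

            // EAP -> Task: bridge the DownloadDataCompleted event with a TaskCompletionSource.
            public static Task<byte[]> DownloadDataTask(WebClient client, Uri address)
            {
                var tcs = new TaskCompletionSource<byte[]>();
                client.DownloadDataCompleted += (sender, e) =>
                {
                    if (e.Error != null) tcs.TrySetException(e.Error);
                    else if (e.Cancelled) tcs.TrySetCanceled();
                    else tcs.TrySetResult(e.Result);
                };
                client.DownloadDataAsync(address);
                return tcs.Task;
            }
        }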

    Read the article

  • App Stores – In All Things, It’s Quality Over Quantity

    - by D'Arcy Lussier
    Everybody has an opinion about Windows 8. People love it, people hate it, people are meh about it, people are apparently buying it from Microsoft stores in NYC as if it was water before a natural disaster…if there’s one thing that Microsoft product launches do well, it’s the ability to bring out strong emotional responses. Over at eweek.com, Don Reisinger wrote about 5 good and bad things about Windows 8. Yes, another opinion piece on Windows 8. I figured since this one had good and bad it might be worthwhile to read. I then came across #10 on his list, and figured “What the hell…might as well post a bit of a rant on Windows 8 myself!” Here’s #10: 10. Bad: Too few apps Unfortunately, Microsoft wasn’t able to get too many developers to start producing applications for its Windows 8 Store. Microsoft hasn’t yet released official numbers, but some have said that the marketplace has fewer than 8,000 programs. Considering Apple’s App Store has 100 times that, it’s about time Microsoft starts leaning on developers to get more programs into its store. Believe me, Microsoft *has* been leaning on developers to get apps into the store. I’ve been asked at least 5 or 6 times by 5 or 6 different friends at Microsoft about whether I was going to write a Windows 8 app. I think Microsoft felt they had to try and address the number of apps available in their marketplace, since some people (like Don) would draw comparisons to the number of apps in the Apple marketplace. I feel for Microsoft in this, since the number of apps in a marketplace is an empty stat. Quality Over Quantity I have an iPad that my family (wife, 10yo daughter, 3yo daughter) use. We all have our own apps installed on it. In addition, my wife has an iPhone 4S that she also installs apps on. As someone who gets asked by his kids often whether they can buy/download an app, the vast majority of the vast catalogue of iOS marketplace apps are crap! Do you realize how many “free” games are out there, only to really be not-free because you have to purchase in-game content to make the game actually playable? And how about searching – with such a vast array of apps and such high numbers of craptastic ones, trying to find something is incredibly difficult and can be frustrating. I would rather see Microsoft have 8,000 high-quality apps in their store at launch, instead of 800,000 that were mostly junk. Too Few Apps?! And seriously, 8,000 is not a small number. How many iOS apps have I actually bought between the iPad and iPhone? I’ll be generous and say 30…heck, let’s round it up to 40. It’s not like I have 10,000 apps installed on my iPad, nor will that ever happen! So if people have, at the *launch* of a new platform ecosystem, EIGHT THOUSAND apps to choose from, I don’t see that as a fail at all! It should be noted that most of the most common apps (Netflix, Skype, etc.) are available for Windows 8 at launch – I guess I’ll have to wait a few weeks for My Pony Ranch and all its clones to start showing up; pity. Let’s Check Back in a Year So look, let’s check back in a year’s time and see what the app store looks like. My hope is that Microsoft doesn’t continue to push quantity over quality. Even knowing the optics that # of apps in the store carries and the pressure to catch Apple and Android marketplaces, I hope Microsoft avoids the scenario where there’s a good percentage of apps in the Windows Store that are utter rubbish and finding the gems will be cumbersome.
    But if that happens, we can thank guys like Don who raised the false issue of app count at the launch for it.

    Read the article

  • 2013 Predictions for Retail

    - by David Dorf
    Its that time of year to roll out the predictions for next year.  I can't say I've really nailed it in the past, but feel free to look back at my 2012, 2011, and 2010 predictions.  I'm not expecting anything earth-shattering this year; just continued maturation of several technologies that are finally taking hold. 1. Next day delivery -- Amazon finally decided it wasn't worth fighting state taxes and instead decided to place distribution centers everywhere so they can potentially offer next-day deliveries.  Not to be outdone, Walmart is looking to leverage its huge physical presence to offer the same.  Clubs like ShopRunner are pushing delivery barriers as well, so the norm is shifting to free shipping in a few days or relatively cheap shipping overnight.  Retailers need be thinking about how to ship from physical stores. 2. Bring your own device -- Earlier this year Intuit bought AisleBuyer, a mobile self-checkout start-up, at least somewhat validating the BYOD approach.  Grocery stores, especially in Europe, have been supporting in-aisle self-scanning for a while and I'm betting it will find a home in certain verticals in the US too.  There's also the BYOD concept for employees.  Some retailers are considering issuing mobile devices at hiring along side the shirt and name-tag.  Employees become responsible for the hardware until they leave. 3. TV shopping -- Will Apple finally release a TV product in 2013?  Who knows?  But the industry isn't standing still. Companies like QVC and HSN are already successfully combining the TV and online experiences for shopping.  Comcast is partnering with Tivo to allow viewers to interact with ads with Paypal handing payment.  This will be a slow maturation, but expect TVs to get smarter and eventually become a new selling channel (pun intended) for retailers. 4. Privacy backlash -- It only takes one big incident to stir the public, and I'm betting we have one in 2013.  Facebook, Google, or Apple will test the boundaries of what the public is willing to accept.  It could involve a retailer using geo-location technology, or possibly video analytics.  And as is always the case, the offender will apologize, temporarily remove the technology, and wait 2-3 years for it to be generally accepted.  Privacy is a moving target. 5. More NFC -- I've come to the conclusion that adoption of any banking technology is going to be slow.  It was slow for credit cards, ATMs, and online billpay so why should it be any different for NFC?  Maybe, just maybe the iPhone 5S will have an NFC chip, but we're not going to see mainstream uptake for years.  Next year we'll continue to see incremental improvements from Isis, Google, and Paypal and a plethora of new startups, but don't toss your magstripe cards just yet. 6. In-store location -- The technologies for tracking people inside stores is really improving.  Retailers can track people using video cameras, infrared, and by the WiFi radios in mobile phones.  We're getting closer to the point where accuracy could be a shelf-facing, which will help retailers understand how people shop, where they spend time, and what displays attract them.  Expect CPG companies to get involved and partner with retailers, since the data benefits both parties.  Consumers will benefit by being directed right to the products they seek.  (In 2013 ARTS is forming a workteam to develop new standards in this area.) 7. 
M&A -- Looking back at 2012 there were some really big deals involving IBM, Oracle, JDA, and NCR and I expect that trend will likely continue as vendors add assets to bolster their portfolios.  Many retailers are due for an IT transformation to support anywhere, anytime shoppers, and one-stop-vendors can minimize complexity and costs. Predictions from other sources: Independent Retailer Stores Magazine IDC Insights Mobile Commerce Daily

    Read the article

  • SQL Azure: Notes on Building a Shard Technology

    - by Herve Roggero
    In Chapter 10 of the book on SQL Azure (http://www.apress.com/book/view/9781430229612) I am co-authoring, I am digging deeper in what it takes to write a Shard. It's actually a pretty cool exercise, and I wanted to share some thoughts on how I am designing the technology. A Shard is a technology that spreads the load of database requests over multiple databases, as transparently as possible. The type of shard I am building is called a Vertical Partition Shard  (VPS). A VPS is a mechanism by which the data is stored in one or more databases behind the scenes, but your code has no idea at design time which data is in which database. It's like having a mini cloud for records instead of services. Imagine you have three SQL Azure databases that have the same schema (DB1, DB2 and DB3), you would like to issue a SELECT * FROM Users on all three databases, concatenate the results into a single resultset, and order by last name. Imagine you want to ensure your code doesn't need to change if you add a new database to the shard (DB4). Now imagine that you want to make sure all three databases are queried at the same time, in a multi-threaded manner so your code doesn't have to wait for three database calls sequentially. Then, imagine you would like to obtain a breadcrumb (in the form of a new, virtual column) that gives you a hint as to which database a record came from, so that you could update it if needed. Now imagine all that is done through the standard SqlClient library... and you have the Shard I am currently building. Here are some lessons learned and techniques I am using with this shard: Parellel Processing: Querying databases in parallel is not too hard using the Task Parallel Library; all you need is to lock your resources when needed Deleting/Updating Data: That's not too bad either as long as you have a breadcrumb. However it becomes more difficult if you need to update a single record and you don't know in which database it is. Inserting Data: I am using a round-robin approach in which each new insert request is directed to the next database in the shard. Not sure how to deal with Bulk Loads just yet... Shard Databases:  I use a static collection of SqlConnection objects which needs to be loaded once; from there on all the Shard commands use this collection Extension Methods: In order to make it look like the Shard commands are part of the SqlClient class I use extension methods. For example I added ExecuteShardQuery and ExecuteShardNonQuery methods to SqlClient. Exceptions: Capturing exceptions in a multi-threaded code is interesting... but I kept it simple for now. I am using the ConcurrentQueue to store my exceptions. Database GUID: Every database in the shard is given a GUID, which is calculated based on the connection string's values. DataTable. The Shard methods return a DataTable object which can be bound to objects.  I will be sharing the code soon as an open-source project in CodePlex. Please stay tuned on twitter to know when it will be available (@hroggero). Or check www.bluesyntax.net for updates on the shard. Thanks!
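    Since the author says the actual code will be shared later as an open-source project, the following is only a rough, hedged sketch of the core idea described above – running the same query against every database in the shard in parallel with the Task Parallel Library, merging the rows into a single DataTable, and adding a breadcrumb column. The ShardExtensions class and its method name are illustrative, and for simplicity it takes connection strings rather than the static SqlConnection collection the post mentions:

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;
        using System.Threading.Tasks;

        public static class ShardExtensions
        {
            // Hypothetical helper: run the same query against every database in the shard.
            public static DataTable ExecuteShardQuery(this IEnumerable<string> shardConnectionStrings, string sql)
            {
                var partials = new List<DataTable>();
                var sync = new object();

                Parallel.ForEach(shardConnectionStrings, connectionString =>
                {
                    using (var connection = new SqlConnection(connectionString))
                    using (var command = new SqlCommand(sql, connection))
                    using (var adapter = new SqlDataAdapter(command))
                    {
                        var table = new DataTable();
                        adapter.Fill(table);   // Fill opens and closes the connection itself

                        // Breadcrumb: a virtual column recording which database each row came from.
                        var breadcrumb = table.Columns.Add("__ShardDatabase", typeof(string));
                        foreach (DataRow row in table.Rows)
                            row[breadcrumb] = connection.Database;

                        lock (sync) { partials.Add(table); }   // lock shared state when merging, as the post suggests
                    }
                });

                if (partials.Count == 0) return new DataTable();

                // Concatenate the per-database results into a single DataTable.
                DataTable merged = partials[0].Clone();
                foreach (var table in partials)
                    foreach (DataRow row in table.Rows)
                        merged.ImportRow(row);
                return merged;
            }
        }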

    Read the article

  • Why Ultra-Low Power Computing Will Change Everything

    - by Tori Wieldt
    The ARM TechCon keynote "Why Ultra-Low Power Computing Will Change Everything" was anything but low-powered. The speaker, Dr. Johnathan Koomey, knows his subject: he is a Consulting Professor at Stanford University, worked for more than two decades at Lawrence Berkeley National Laboratory, and has been a visiting professor at Stanford University, Yale University, and UC Berkeley's Energy and Resources Group. His current focus is creating a standard (computations per kilowatt hour) and measuring computer energy consumption over time. The trends are impressive: energy consumption has halved every 1.5 years for the last 60 years. Battery life has made roughly a 10x improvement each decade since 1960. It's these improvements that have made laptops and cell phones possible. What does the future hold? Dr. Koomey said that in the past, the race by chip manufacturers was to create the fastest computer, but the priorities have now changed. New computers are tiny, smart, connected and cheap. "You can't underestimate the importance of a shift in industry focus from raw performance to power efficiency for mobile devices," he said. There is also a confluence of trends in computing, communications, sensors, and controls. The challenge is how to reduce the power requirements for these tiny devices. Alternate sources of power that are being explored are light, heat, motion, and even blood sugar. The University of Michigan has produced a miniature sensor that harnesses solar energy and could last for years without needing to be replaced. Also, the University of Washington has created a sensor that scavenges power from existing radio and TV signals.Specific devices designed for a purpose are much more efficient than general purpose computers. With all these sensors, instead of big data, developers should focus on nano-data, personalized information that will adjust the lights in a room, a machine, a variable sign, etc.Dr. Koomey showed some examples:The Proteus Digital Health Feedback System, an ingestible sensor that transmits when a patient has taken their medicine and is powered by their stomach juices. (Gives "powered by you" a whole new meaning!) Streetline Parking Systems, that provide real-time data about available parking spaces. The information can be sent to your phone or update parking signs around the city to point to areas with available spaces. Less driving around looking for parking spaces!The BigBelly trash system that uses solar power, compacts trash, and sends a text message when it is full. This dramatically reduces the number of times a truck has to come to pick up trash, freeing up resources and slashing fuel costs. This is a classic example of the efficiency of moving "bits not atoms." But researchers are approaching the physical limits of sensors, Dr. Kommey explained. With the current rate of technology improvement, they'll reach the three-atom transistor by 2041. Once they hit that wall, it will force a revolution they way we do computing. But wait, researchers at Purdue University and the University of New South Wales are both working on a reliable one-atom transistors! Other researchers are working on "approximate computing" that will reduce computing requirements drastically. So it's unclear where the wall actually is. In the meantime, as Dr. Koomey promised, ultra-low power computing will change everything.

    Read the article

  • Reference Data Management and Master Data: Are They Related?

    - by Mala Narasimharajan
    Submitted By: Rahul Kamath. Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine-grained control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation. What is reference data? How does it relate to Master Data? Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler, more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data while changes to product categories and geo hierarchy are examples of reference data changes.[1] The following table contains an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships & cross-domain mappings, or standards.

        Types & Codes | Taxonomies | Relationships / Mappings | Standards
        Transaction Codes | Industry Classification Categories and Codes, e.g., North American Industry Classification System (NAICS) | Product / Segment; Product / Geo | Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO8601)
        Lookup Tables (e.g., Gender, Marital Status, etc.) | Product Categories | City → State → Postal Codes | Currency Codes (e.g., ISO)
        Status Codes | Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense) | Customer / Market Segment; Business Unit / Channel | Country Codes (e.g., ISO 3166, UN)
        Role Codes | Market Segments | Country Codes / Currency Codes / Financial Accounts | Date/Time, Time Zones (e.g., ISO 8601)
        Domain Values | Universal Standard Products and Services Classification (UNSPSC), eCl@ss | International Classification of Diseases (ICD), e.g., ICD-9 → ICD-10 mappings | Tax Rates

    Why manage reference data? Reference data carries contextual value and meaning and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment. Sample Use Cases of Reference Data Management – Healthcare: Diagnostic Codes. The reference data challenges in the healthcare industry offer a case in point.
Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since both code sets ICD-9 and ICD-10 offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders much in the same way as does master data management. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to “cross-walk” mappings -- either forward to ICD-10 or backwards to ICD-9 depending upon the reporting time horizon. Spend Management: Product, Service & Supplier Codes Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager’s time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value added tasks. Change Management: Point of Sales Transaction Codes and Product Codes In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released at a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes with the appropriate product and GL codes to comply with the changing regulations. Multi-National Companies: Industry Classification Schemes As companies grow and expand across geographies, a typical challenge they encounter with reference data represents reconciling various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assess which applications were impacted by a given change. References 1 Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.
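    Returning to the healthcare example above, here is a small, hedged sketch (not from the article) of what a forward/backward ICD-9/ICD-10 cross-walk lookup can look like in code; the codes shown are illustrative placeholders, and a real cross-walk is many-to-many and curated with the human judgment the article stresses:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class DiagnosisCrosswalk
        {
            // Illustrative forward map only; real crosswalks hold many more entries.
            private static readonly Dictionary<string, List<string>> Icd9ToIcd10 =
                new Dictionary<string, List<string>>
                {
                    { "250.00", new List<string> { "E11.9" } },   // placeholder mapping
                    { "401.9",  new List<string> { "I10" } },     // placeholder mapping
                };

            // Forward walk: ICD-9 -> candidate ICD-10 codes (picking one may need human judgment).
            public static IEnumerable<string> Forward(string icd9)
            {
                List<string> codes;
                return Icd9ToIcd10.TryGetValue(icd9, out codes) ? codes : Enumerable.Empty<string>();
            }

            // Backward walk: ICD-10 -> ICD-9, derived by inverting the forward map.
            public static IEnumerable<string> Backward(string icd10)
            {
                return Icd9ToIcd10.Where(p => p.Value.Contains(icd10)).Select(p => p.Key);
            }

            static void Main()
            {
                Console.WriteLine(string.Join(", ", Forward("250.00")));   // E11.9
                Console.WriteLine(string.Join(", ", Backward("I10")));     // 401.9
            }
        }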

    Read the article

  • Browser Specific Extensions of HttpClient

    - by imran_ku07
            Introduction:

            REpresentational State Transfer (REST) has had a great impact on service/API development because it offers a way to access a service without requiring any specific library, by embracing HTTP and its features. ASP.NET Web API makes it very easy to quickly build RESTful HTTP services. These HTTP services can be consumed by a variety of clients including browsers, devices, machines, etc. With .NET Framework 4.5, we can use the HttpClient class to consume RESTful HTTP services (for .NET Framework 4.0, the HttpClient class is shipped as part of ASP.NET Web API). The HttpClient class provides a bunch of helper methods (for example, DeleteAsync, PostAsync, GetStringAsync, etc.) to consume an HTTP service very easily. ASP.NET Web API adds some more extension methods (for example, PutAsJsonAsync, PutAsXmlAsync, etc.) to the HttpClient class to further simplify usage. In addition, HttpClient is also an ideal choice for writing integration tests for a RESTful HTTP service. Since a browser is a main client of any RESTful API, it is also important to test the HTTP service with a variety of browsers. A RESTful service embraces HTTP headers, and different browsers send different HTTP headers. So, I have created a package that adds overloads (with an additional Browser parameter) for almost all of the helper methods of the HttpClient class. In this article, I will show you how to use this package.

            Description:

            Create/open your test project and install the ImranB.SystemNetHttp.HttpClientExtensions NuGet package. Then, add this using statement to your class:

            using ImranB.SystemNetHttp;

            Then, you can start using any HttpClient helper method which includes the additional Browser parameter. For example:

            var client = new HttpClient(myserver);
            var task = client.GetAsync("http://domain/myapi", Browser.Chrome);
            task.Wait();
            var response = task.Result;

            Here is the definition of Browser:

            public enum Browser
            {
                Firefox = 0,
                Chrome = 1,
                IE10 = 2,
                IE9 = 3,
                IE8 = 4,
                IE7 = 5,
                IE6 = 6,
                Safari = 7,
                Opera = 8,
                Maxthon = 9,
            }

            These extension methods make it very easy to write browser-specific integration tests. They also help an HTTP service consumer mimic the request-sending behavior of a browser. The package source is available on GitHub, so you can grab the source and add some additional behavior on top of these extensions.

            Summary:

            Testing a REST API is an important aspect of service development and today, testing with a browser is crucial. In this article, I showed how to write integration tests that mimic the browser's request-sending behavior, along with an example. Hopefully you will enjoy this article too.
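            For orientation, here is a rough sketch of how such a browser-specific overload could be implemented on top of HttpClient. This is not the actual package source: it assumes the overloads simply stamp a browser-like User-Agent header onto the request (the real package may set additional headers such as Accept), the User-Agent strings are abbreviated examples, and the Browser enum is repeated here only to keep the sketch self-contained; it would collide with the package's own types if you referenced both.

            using System.Net.Http;
            using System.Threading.Tasks;

            public enum Browser { Firefox, Chrome, IE10, IE9, IE8, IE7, IE6, Safari, Opera, Maxthon }

            public static class BrowserHttpClientExtensions
            {
                // Hypothetical mapping from a Browser value to a representative User-Agent string.
                private static string GetUserAgent(Browser browser)
                {
                    switch (browser)
                    {
                        case Browser.Chrome:
                            return "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0 Safari/537.17";
                        case Browser.Firefox:
                            return "Mozilla/5.0 (Windows NT 6.1; rv:17.0) Gecko/20100101 Firefox/17.0";
                        case Browser.IE10:
                            return "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)";
                        default:
                            return "Mozilla/5.0";
                    }
                }

                // Hypothetical overload mirroring the package's GetAsync(url, browser) shape:
                // build the request, stamp the browser-like User-Agent on it, then send it.
                public static Task<HttpResponseMessage> GetAsync(this HttpClient client, string requestUri, Browser browser)
                {
                    var request = new HttpRequestMessage(HttpMethod.Get, requestUri);
                    request.Headers.TryAddWithoutValidation("User-Agent", GetUserAgent(browser));
                    return client.SendAsync(request);
                }
            }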

    Read the article

  • PASS Summit 2011 &ndash; Part II

    - by Tara Kizer
    I arrived in Seattle last Monday afternoon to attend PASS Summit 2011.  I had really wanted to attend Gail Shaw's (blog|twitter) and Grant Fritchey's (blog|twitter) pre-conference seminar "All About Execution Plans" on Monday, but that would have meant flying out on Sunday which I couldn't do.

    On Tuesday, I attended Allan Hirt's (blog|twitter) pre-conference seminar entitled "A Deep Dive into AlwaysOn: Failover Clustering and Availability Groups".  Allan is a great speaker, and his seminar was packed with demos and information about AlwaysOn in SQL Server 2012.  Unfortunately, I have lost my notes from this seminar and the presentation materials are only available on the pre-con DVD.  Hmpf!

    On Wednesday, I attended Gail Shaw's "Bad Plan! Sit!", Andrew Kelly's (blog|twitter) "SQL 2008 Query Statistics", Dan Jones' (blog|twitter) "Improving your PowerShell Productivity", and Brent Ozar's (blog|twitter) "BLITZ! The SQL – More One Hour SQL Server Takeovers".

    In Gail's session, she went over how to fix bad plans and bad query patterns.  Update your stale statistics!

    How to fix bad plans:
    - Use local variables – the optimizer can't sniff them, so it'll optimize for an "average" value
    - Use RECOMPILE (at the query or stored procedure level) – CPU hit
    - OPTIMIZE FOR hint – specify the most common value you'll pass

    How to fix bad query patterns:
    - Don't use them – ha!
    - Catch-all queries: use dynamic SQL or OPTION (RECOMPILE)
    - Multiple execution paths: split into multiple stored procedures or OPTION (RECOMPILE)
    - Modifying parameter values: use local variables, split into outer and inner procedures, or OPTION (RECOMPILE)

    She also went into "last resort" and "very last resort" options, but those are risky unless you know what you are doing.  For the average Joe, she wouldn't recommend these.  Examples are query hints and plan guides.

    While I enjoyed Andrew's session, I didn't take any notes as it was familiar material.  Andrew is a great speaker though, and I'd highly recommend attending his sessions in the future.

    Next up was Dan's PowerShell session.  I need to look into profiles, manifests, function modules, and function import scripts more as I just didn't quite grasp these concepts.  I am attending a PowerShell training class at the end of November, so maybe that'll help clear it up.  I really enjoyed the Excel integration demo.  It was very cool watching PowerShell build the spreadsheet in real-time.  I must look into this more!  On a side note, I am jealous of Dan's hair.  Fabulous hair!

    Brent's session showed us how to quickly gather information about a server that you will be taking over database administration duties for.  He wrote a script to do a fast health check and then later wrapped it into a stored procedure, sp_Blitz.  I can't wait to use this at my work, even on systems where I've been the primary DBA for years; maybe there's something I've overlooked.  We are using EPM to help standardize our environment and uncover problems, but sp_Blitz will definitely still help us out.  He even provides a cloud-based update feature, sp_BlitzUpdate, for sp_Blitz so you don't have to constantly update it when he makes a change.  I think I'll utilize his update code for some other challenges that we face at my work.

    Read the article

  • Organization &amp; Architecture UNISA Studies &ndash; Chap 5

    - by MarkPearl
    Learning Outcomes
    - Describe the operation of a memory cell
    - Explain the difference between DRAM and SRAM
    - Discuss the different types of ROM
    - Explain the concepts of a hard failure and a soft error respectively
    - Describe SDRAM organization

    Semiconductor Main Memory
    The two traditional forms of RAM used in computers are DRAM and SRAM, divided into two technologies: dynamic and static.

    DRAM (Dynamic RAM)
    Dynamic RAM is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a natural tendency to discharge, dynamic RAM requires periodic charge refreshing to maintain data storage. The term dynamic refers to the tendency of the stored charge to leak away, even with power continuously applied. Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analogue device. The capacitor can store any charge value within a range; a threshold value determines whether the charge is interpreted as a 1 or 0.

    SRAM (Static RAM)
    SRAM is a digital device that uses the same logic elements used in the processor. In SRAM, binary values are stored using traditional flip-flop logic configurations. SRAM will hold its data as long as power is supplied to it. Unlike DRAM, no refresh is required to retain data.

    SRAM vs. DRAM
    DRAM is simpler and smaller than SRAM. Thus it is more dense and less expensive than SRAM. The cost of the refreshing circuitry for DRAM needs to be considered, but if the machine requires a large amount of memory, DRAM turns out to be cheaper than SRAM. SRAMs are somewhat faster than DRAM, thus SRAM is generally used for cache memory and DRAM is used for main memory.

    Types of ROM
    Read Only Memory (ROM) contains a permanent pattern of data that cannot be changed. ROM is non-volatile, meaning no power source is required to maintain the bit values in memory. While it is possible to read a ROM, it is not possible to write new data into it. An important application of ROM is microprogramming; other applications include library subroutines for frequently wanted functions, system programs, and function tables. A ROM is created like any other integrated circuit chip, with the data actually wired into the chip as part of the fabrication process. To reduce costs of fabrication, we have PROMs. PROMs are…
    - Written only once
    - Non-volatile
    - Written after fabrication

    Another variation of ROM is the read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations, but for which non-volatile storage is required. There are three common forms of read-mostly memory, namely…
    - EPROM
    - EEPROM
    - Flash memory

    Error Correction
    Semiconductor memory is subject to errors, which can be classed into two categories…
    - Hard failure – a permanent physical defect so that the memory cell or cells cannot reliably store data
    - Soft error – a random error that alters the contents of one or more memory cells without damaging the memory (common causes include power supply issues, etc.)

    Most modern main memory systems include logic for both detecting and correcting errors.
Error detection works as follows…
    - When data is to be written into memory, a calculation is performed on the data to produce a code
    - Both the code and the data are stored
    - When the previously stored word is read out, the code is used to detect and possibly correct errors

    The error checking provides one of 3 possible results…
    - No errors are detected – the fetched data bits are sent out
    - An error is detected, and it is possible to correct the error. The data bits plus error correction bits are fed into a corrector, which produces a corrected set of bits to be sent out
    - An error is detected, but it is not possible to correct it. This condition is reported

    Hamming Code
    See the wiki for a detailed explanation. We will probably need to know how to do a Hamming code – refer to the textbook (pg. 188 – 189); a small worked sketch follows at the end of this post.

    Advanced DRAM Organization
    One of the most critical system bottlenecks when using high-performance processors is the interface to main memory. This interface is the most important pathway in the entire computer system. The basic building block of main memory remains the DRAM chip. In recent years a number of enhancements to the basic DRAM architecture have been explored, and some of these are now on the market, including…
    - SDRAM (Synchronous DRAM)
    - DDR-DRAM
    - RDRAM

    SDRAM (Synchronous DRAM)
    SDRAM exchanges data with the processor synchronized to an external clock signal and running at the full speed of the processor/memory bus without imposing wait states.
    - SDRAM employs a burst mode to eliminate the address setup time and row and column line precharge time after the first access
    - In burst mode a series of data bits can be clocked out rapidly after the first bit has been accessed
    - SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism
    - SDRAM performs best when it is transferring large blocks of data serially

    There is now an enhanced version of SDRAM known as double data rate SDRAM, or DDR-SDRAM, that overcomes the once-per-cycle limitation of SDRAM.
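    To make the Hamming code a bit more concrete, here is a minimal Hamming(7,4) sketch in C#. This is my own illustrative example, not from the textbook; real memory ECC typically uses a wider SECDED variant over 64-bit words, but the encode/syndrome/correct idea is the same.

    using System;

    class Hamming74
    {
        // Encode 4 data bits (d1..d4) into a 7-bit codeword laid out as
        // positions 1..7 = p1 p2 d1 p3 d2 d3 d4 (parity bits sit at powers of two).
        static int[] Encode(int[] d)
        {
            int p1 = d[0] ^ d[1] ^ d[3];   // covers positions 1,3,5,7
            int p2 = d[0] ^ d[2] ^ d[3];   // covers positions 2,3,6,7
            int p3 = d[1] ^ d[2] ^ d[3];   // covers positions 4,5,6,7
            return new[] { p1, p2, d[0], p3, d[1], d[2], d[3] };
        }

        // Recompute the parity checks; the syndrome is the 1-based position
        // of a single flipped bit (0 means the codeword checks out).
        static int Syndrome(int[] c)
        {
            int s1 = c[0] ^ c[2] ^ c[4] ^ c[6];
            int s2 = c[1] ^ c[2] ^ c[5] ^ c[6];
            int s3 = c[3] ^ c[4] ^ c[5] ^ c[6];
            return s1 + 2 * s2 + 4 * s3;
        }

        static void Main()
        {
            var code = Encode(new[] { 1, 0, 1, 1 });
            code[4] ^= 1;                        // inject a single-bit soft error
            int errorPosition = Syndrome(code);  // 5: the corrupted position
            if (errorPosition != 0)
                code[errorPosition - 1] ^= 1;    // flip it back to correct the word
            Console.WriteLine("Corrected bit at position " + errorPosition);
        }
    }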

    Read the article

  • Eclipse no longer useful

    - by dgood1
    When I got my Eclipse from the Ubuntu Software Center, it was good and worked fine. I could work on Java projects fine. This week I was required to add ADT and tried the ADT bundle, assuming it had everything I needed, seeing that the SDK had more steps. So now, I can create Android apps using the ADT bundle. I tried to work on my Java projects again and I now discovered:

    - I can't run my Java projects: "The selection cannot be launched. And there are no recent launches." error.
    - I also believe Eclipse doesn't know it's a Java program because it is all in black and white. Not the usual green/blue/red/black things when making comments, variables and Strings.
    - I can't make new projects of ANYTHING unless I use the ADT bundle. New project only offers CVS (whatever that is).
    - My perspectives seem limited. I remembered more choices and now I'm limited to [Java], Resource, CVS Repository, Debug, Team Sync. I was told to be able to use perspectives to swap between Android and Java developing. Even after the ADT installation using "Install New Software", nothing.
    - I can't uninstall/purge/remove Eclipse via the terminal. I tried removing it then reinstalling it via the Ubuntu Software Center. No results other than its temporary removal.
    - (Possibly unrelated) A large number of repositories are not found when updating Eclipse. (See Step 8 in the summary of what I did...)

    Although, on checking the versions and installation history, I confirmed Android and Java are installed. It probably just doesn't know they're there.

    Eclipse Indigo: Version: 3.7.2, Build id: I20110613-1736

    Summary of what I did before and during the problem:
    - Downloaded the ADT bundle.
    - Attempted instructions from teacher (Install New Software). (Failed, but other than an annoying "can't find repository" during each update, no damage to report.) (Fixed)
    - Ran the "eclipse" executable from the ADT bundle.
    - Updated Eclipse. (After restart, I noticed the problem.) NOTE: other than window arrangement, I had no customizations.
    - Played around with Window > Preferences and Project > Properties. Restored to default settings after no results.
    - Tried "apt-get purge eclipse". Couldn't find Eclipse, so nothing happened. Used the Software Center. No results.
    - Tried swapping workspaces. I tried a different folder, a deeper folder, renaming. All return the same problem.
    - Deleted the ADT bundle (browsed folders then deleted). Got the ADT SDK only. Installed. Can't find any changes other than some disk space usage. Of course, I can't make Android apps until I unzip the bundle again.
    - Window > Preferences > Install/Update > Available Software Sites: checked as many repositories as possible, then updated. Still nothing.

    I'm about to take a second try at uninstalling it, because I think my last action is just taking up space. But I'll wait for tomorrow, in case an answer will help. Any thoughts?

    Read the article

  • Unity GUI not in build, but works fine in editor

    - by Darren
    I have: GUITexture attached to an object A script that has GUIStyles created for the Textfield and Buttons that are created in OnGUI(). This script is attached to the same object in number 1 3 GUIText objects each separate from the above. A script that enables the GUITexture and the script in number 1 and 2 respectively This is how it is supposed to work: When I cross the finish line, number 4 script enables number 1 GUITexture component and number 2 script component. The script component uses one of number 3's GUIText objects to show you your best lap time, and also makes a GUI.Textfield for name entry and 2 GUI.Buttons for "Submit" and "Skip". If you hit "Submit" the script will submit the time. No matter which button you press, The remaining 2 GUIText objects from number 3 will show you the top 10 best times. For some reason, when I run it in editor, everything works 100%, but when I'm in different kinds of builds, the results vary. When I am in a webplayer, The GUITexture and the textfield and buttons appear, but the textfield and buttons are plain and have no evidence of GUIStyles. When I click one of the buttons, the score gets submitted but I do not get the fastest times showing. When I am in a standalone build, the GUITexture shows up, but nothing else does. If I remove the GUIStyle parameter of the GUI.Textfield and GUI.Button, they show up. Why am I getting these variations and how can I fix it? Code below: void Start () { Names.text = ""; Times.text = ""; YourBestTime.text = "Your Best Lap: " + bestTime + "\nEnter your name:"; //StartCoroutine(GetTimes("Test")); } void Update() { if (!ShowButtons && !GettingTimes) { StartCoroutine(GetTimes()); GettingTimes = true; } } IEnumerator GetTimes () { Debug.Log("Getting times"); YourBestTime.text = "Loading Best Lap Times"; WWW times_get = new WWW(GetTimesUrl); yield return times_get; WWW names_get = new WWW(GetNamesUrl); yield return names_get; if(times_get.error != null || names_get.error != null) { print("There was an error retrieiving the data: " + names_get.error + times_get.error); } else { Times.text = times_get.text; Names.text = names_get.text; YourBestTime.text = "Your Best Lap: " + bestTime; } } IEnumerator PostLapTime (string Name, string LapTime) { string hash= MD5.Md5Sum(Name + LapTime + secretKey); string bestTime_url = SubmitTimeUrl + "&Name=" + WWW.EscapeURL(Name) + "&LapTime=" + LapTime + "&hash=" + hash; Debug.Log (bestTime_url); // Post the URL to the site and create a download object to get the result. WWW hs_post = new WWW(bestTime_url); //label = "Submitting..."; yield return hs_post; // Wait until the download is done if (hs_post.error != null) { print("There was an error posting the lap time: " + hs_post.error); //label = "Error: " + hs_post.error; //show = false; } else { Debug.Log("Posted: " + hs_post.text); ShowButtons = false; PostingTime = false; } } void OnGUI() { if (ShowButtons) { //makes text box nameString = GUI.TextField( new Rect((Screen.width/2)-111, (Screen.height/2)-130, 222, 25), nameString, 20, TextboxStyle); if (GUI.Button( new Rect( (Screen.width/2-74.0f), (Screen.height/2)- 90, 64, 32), "Submit", ButtonStyle)) { //SUBMIT TIME if (nameString == "") { nameString = "Player"; } if (!PostingTime) { StartCoroutine(PostLapTime(nameString, bestTime)); PostingTime = true; } } else if (GUI.Button( new Rect( (Screen.width/2+10.0f), (Screen.height/2)- 90, 64, 32), "Skip", ButtonStyle)) { ShowButtons = false; } } } }

    Read the article

  • Exploring TCP throughput with DTrace

    - by user12820842
    One key measure to use when assessing TCP throughput is the amount of unacknowledged data in the pipe. This is sometimes termed the Bandwidth Delay Product (BDP) (note that BDP is often used more generally as the product of the link capacity and the end-to-end delay). In DTrace terms, the amount of unacknowledged data in bytes for the connection is the difference between the next sequence number to send and the lowest unacknowledged sequence number (tcps_snxt - tcps_suna). According to the theory, when the number of unacknowledged bytes for the connection is less than the receive window of the peer, the path bandwidth is the limiting factor for throughput. In other words, if we can fill the pipe without the peer TCP complaining (by virtue of its window size reaching 0), we are purely bandwidth-limited. If the peer's receive window is too small however, the sending TCP has to wait for acknowledgements before it can send more data. In this case the round-trip time (RTT) limits throughput. In such cases the effective throughput limit is the window size divided by the RTT, e.g. if the window size is 64K and the RTT is 0.5sec, the throughput is 128K/s.

    So a neat way to visually determine if the receive window of clients may be too small should be to compare the distribution of BDP values for the server versus the client's advertised receive window. If the BDP distribution overlaps the send window distribution such that it is to the right (or lower down in DTrace since quantizations are displayed vertically), it indicates that the amount of unacknowledged data regularly exceeds the client's receive window, so that it is possible that the sender may have more data to send but is blocked by a zero-window on the client side. In the following example, we compare the distribution of BDP values to the receive window advertised by the receiver (10.175.96.92) for a large file download via http.

    # dtrace -s tcp_tput.d
    ^C

      BDP(bytes)                              10.175.96.92                    80
               value  ------------- Distribution ------------- count
                  -1 |                                         0
                   0 |                                         6
                   1 |                                         0
                   2 |                                         0
                   4 |                                         0
                   8 |                                         0
                  16 |                                         0
                  32 |                                         0
                  64 |                                         0
                 128 |                                         0
                 256 |                                         3
                 512 |                                         0
                1024 |                                         0
                2048 |                                         9
                4096 |                                         14
                8192 |                                         27
               16384 |                                         67
               32768 |@@                                       1464
               65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  32396
              131072 |                                         0

      SWND(bytes)                             10.175.96.92                    80
               value  ------------- Distribution ------------- count
               16384 |                                         0
               32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 17067
               65536 |                                         0

    Here we have a puzzle. We can see that the receiver's advertised window is in the 32768-65535 range, while the amount of unacknowledged data in the pipe is largely in the 65536-131071 range. What's going on here? Surely in a case like this we should see zero-window events, since the amount of data in the pipe regularly exceeds the window size of the receiver. We can see that we don't see any zero-window events since the SWND distribution displays no 0 values - it stays within the 32768-65535 range.

    The explanation is straightforward enough. TCP window scaling is in operation for this connection - the Window Scale TCP option is used on connection setup to allow a connection to advertise (and have advertised to it) a window greater than 65536 bytes. In this case the scaling shift is 1, so this explains why the SWND values are clustered in the 32768-65535 range rather than the 65536-131071 range - the SWND value needs to be multiplied by two since the receiver is also scaling its window by a shift factor of 1. Here's the simple script that compares BDP and SWND distributions, fixed to take account of window scaling.
    #!/usr/sbin/dtrace -s

    #pragma D option quiet

    tcp:::send
    / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
    {
            @bdp["BDP(bytes)", args[2]->ip_daddr, args[4]->tcp_sport] =
                quantize(args[3]->tcps_snxt - args[3]->tcps_suna);
    }

    tcp:::receive
    / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
    {
            @swnd["SWND(bytes)", args[2]->ip_saddr, args[4]->tcp_dport] =
                quantize((args[4]->tcp_window) * (1 << args[3]->tcps_snd_ws));
    }

    And here's the fixed output.

    # dtrace -s tcp_tput_scaled.d
    ^C

      BDP(bytes)                              10.175.96.92                    80
               value  ------------- Distribution ------------- count
                  -1 |                                         0
                   0 |                                         39
                   1 |                                         0
                   2 |                                         0
                   4 |                                         0
                   8 |                                         0
                  16 |                                         0
                  32 |                                         0
                  64 |                                         0
                 128 |                                         0
                 256 |                                         3
                 512 |                                         0
                1024 |                                         0
                2048 |                                         4
                4096 |                                         9
                8192 |                                         22
               16384 |                                         37
               32768 |@                                        99
               65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  3858
              131072 |                                         0

      SWND(bytes)                             10.175.96.92                    80
               value  ------------- Distribution ------------- count
                 512 |                                         0
                1024 |                                         1
                2048 |                                         0
                4096 |                                         2
                8192 |                                         4
               16384 |                                         7
               32768 |                                         14
               65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1956
              131072 |                                         0

    Read the article

  • To Bit or Not To Bit

    - by Johnm
    'Twas a long day of troubleshooting and firefighting and now, with most of the office vacant, you face a blank scripting window to create a new table in his database. Many questions circle your mind like dirty water gurgling down the bathtub drain: "How normalized should this table be?", "Should I use an identity column?", "NVarchar or Varchar?", "Should this column be NULLABLE?", "I wonder what apple blue cheese bacon cheesecake tastes like?" Well, there are times when the mind goes it's own direction. A Bit About Bit At some point during your table creation efforts you will encounter the decision of whether to use the bit data type for a column. The bit data type is an integer data type that recognizes only the values of 1, 0 and NULL as valid. This data type is often utilized to store yes/no or true/false values. An example of its use would be a column called [IsGasoline] which would be intended to contain the value of 1 if the row's subject (a car) had a gasoline engine and a 0 if the subject did not have a gasoline engine. The bit data type can even be found in some of the system tables of SQL Server. For example, the sysssispackages table in the msdb database which contains SQL Server Integration Services Package information for the packages stored in SQL Server. This table contains a column called [IsEncrypted]. A value of 1 indicates that the package has been encrypted while the value of 0 indicates that it is not. I have learned that the most effective way to disperse the crowd that surrounds the office coffee machine is to engage into SQL Server debates. The bit data type has been one of the most reoccurring, as well as the most enjoyable, of these topics. It contains a practical side and a philosophical side. Practical Consideration This data type certainly has its place and is a valuable option for database design; but it is often used in situations where the answer is really not a pure true/false response. In addition, true/false values are not very informative or scalable. Let's use the previously noted [IsGasoline] column for illustration. While on the surface it appears to be a rather simple question when evaluating a car: "Does the car have a gasoline engine?" If the person entering data is entering a row for a Jeep Liberty, the response would be a 1 since it has a gasoline engine. If the person is entering data is entering a row for a Chevrolet Volt, the response would be a 0 since it is an electric engine. What happens when a person is entering a row for the gasoline/electric hybrid Toyota Prius? Would one person's conclusion be consistent with another person's conclusion? The argument could be made that the current intent for the database is to be used only for pure gasoline and pure electric engines; but this is where the scalability issue comes into play. With the use of a bit data type a database modification and data conversion would be required if the business decided to take on hybrid engines. Whereas, alternatively, if the int data type were used as a foreign key to a reference table containing the engine type options, the change to include the hybrid option would only require an entry into the reference table. Philosophical Consideration Since the bit data type is often used for true/false or yes/no data (also called Boolean) it presents a philosophical conundrum of what to do about the allowance of the NULL value. 
The inclusion of NULL in a true/false or yes/no response simply violates the logical principle of bivalence, which states that "every proposition is either true or false". If NULL is not true, then it must be false. The mathematical laws of Boolean logic support this concept by stating that the only valid values in this scenario are 1 and 0.

    There is another way to look at this conundrum: NULL is also considered to be the absence of a response. In other words, it is the equivalent of "undecided". Anyone who watches the news can tell you that polls always include an "undecided" option. This could be considered a valid option in the world of yes/no/dunno.

    Throughout all of these considerations I have discovered one absolute certainty: when you have found a person, or group of persons, who are willing to entertain a philosophical debate of the bit data type, you have found some true friends.
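    As an aside, the scalability argument from the practical consideration above can be sketched outside of SQL as well. The following C# fragment is purely illustrative (the class, property and enum names are invented for this example): a bool flag paints you into the same corner as the bit column, while a value that references a type table leaves room for "hybrid" without reinterpreting existing data.

    // Works until a hybrid shows up - then every existing row needs reinterpreting.
    public class CarWithFlag
    {
        public bool IsGasoline { get; set; }
    }

    public enum EngineType
    {
        Gasoline = 1,
        Electric = 2,
        Hybrid = 3   // adding a new option is just a new reference value
    }

    // Mirrors an int foreign key into an EngineType reference table.
    public class CarWithReference
    {
        public EngineType Engine { get; set; }
    }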

    Read the article

  • Adventures in Lab Management Configuration: Part 2 of 3

    - by Enrique Lima
    The first post was the high level overview. Now it is time for the details on what was done to the existing CMMI project based on CMMI v4.2.

    The first step was to go into Visual Studio, then to the Team Project Collection Settings and then to the Process Template Manager.  Once there, it was a matter of selecting the appropriate template (MSF for CMMI Process Improvement v5.0) and downloading it to a point I could reference later (for example C:\Templates). Then on to using the steps from the guidance post. Since I was using an x64 deployment, I will make reference to the path as <toolpath>, however the actual path to reference in a 64-bit environment is "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE".

    As I mentioned on the previous post, make sure to first perform a backup of the Configuration, Collection and Warehouse DBs.  If you did not apply any changes to the names and such, then you will find those as tfs_Configuration, tfs_DefaultCollection and tfs_Warehouse.

    Now, the work needed with the witadmin tool. That includes the uploading of the structures that differ from v4.2 to v5.0.

    There is likely going to be an issue with the naming of some fields. For example, TFS 2010 likes something along the lines of "Area ID", whereas TFS 2008 would have had it as "AreaID".  So, this will need to be corrected.  Some posts will have you go through this after the errors pop up.  I would recommend doing this process prior to executing the importwitd process.

    witadmin listfields /collection:<path to collection> > c:\ListFields.txt

    Review the following fields: for AreaID, review the Name property and validate whether it states "AreaID"; if so, then you will need to rename the Name field to reflect "Area ID". ExternalLinkCount, RelatedLinkCount, HyperLinkCount, AttachedFileCount and IterationID would be the other fields to check. To correct the issue, execute the following:

    witadmin changefield /collection:<path to collection> /n:"System.ExternalLinkCount" /name:"External Link Count"

    Repeat for Area ID, Related Link Count, Hyperlink Count, Attached File Count and Iteration ID.  Once this is done, proceed with the commands below.

    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\TestCase.xml"

    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\SharedStep.xml"

    witadmin importcategories /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\categories.xml"

    Modifications to the Bug Definition: The first step is to export the existing definition.

    witadmin exportwitd /collection:<path to collection> /p:<project> /n:bug /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

    Make modifications to the recently exported MyBug.xml file.
Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask

    Once the changes are done, proceed with the import command:

    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

    Repeat the process for the Scenario or Requirement Type Definition:

    witadmin exportwitd /collection:<path to collection> /p:<project> /n:requirement /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

    Make modifications to the recently exported MyRequirement.xml file.  Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask

    Once the changes are done, proceed with the import command:

    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

    Provide the Bug Field Mapping definition, after creating the file as specified here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#TCMBugFieldMapping

    tcm bugfieldmapping /import /mappingfile:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\bugfieldmappings.xml" /collection:<path to collection> /teamproject:<project name>

    Read the article

  • Taking a Flying Leap

    - by Lance Shaw
    Yesterday, I went skydiving with three of my children.  It was thrilling, scary, invigorating and exciting. While there is obvious risk involved, the reward and feeling of success was well worth it. You might already be wondering what skydiving would have to with WebCenter, so let me explain. Implementing a skydiving program and becoming an instructor does not happen overnight.  It does not happen with the purchase of the needed technology. Not one of us would go out, buy a parachute, the harnesses, helmet and all the gear and be able to convince anyone that we are now ready to be a skydiving instructor. The fact is that obtaining the technology is merely a small piece of the overall process and so is the case with managing content in your company. You don't just buy the right software (Oracle WebCenter Content) and go to your boss and declare information management success. There is planning, research and effort that goes into deploying software of any kind and especially when it is as mission-critical to the success of your business as Enterprise Content Management. To become a certified skydiving instructor takes at least 3 years of commitment and often longer. In the United States, candidates must complete over 500 solo jumps of their own over a minimum of 36 months and then must complete additional rigorous training under observation.  When you consider the amount of time and effort involved, it's not unlike getting a college degree and anyone that has trusted their lives to one of these instructors will no doubt appreciate their dedication to the curriculum.  Implementing an ECM system won't take that long, but it certainly requires commitment, analysis and consideration. But guess what?  Humans are involved and that means that mistakes can happen and that rules change.  This struck me while reading an excellent post on darkreading.com by Glenn S. Phillips entitled "Mission Impossible: 4 Reasons Compliance is Impossible".  His over-arching point was that with information management and security, environments change and people are involved meaning the work is never done.  He stated that you can never claim your compliance efforts are complete because of the following reasons. People are involved.  And lets face it, some are more trustworthy than others. Change is Constant. There is always some new technology coming along that is disruptive. Consumer grade cloud file sharing and sync tools come to mind here. Compliance is interpreted, not defined.  Laws and the judges that read them are always on the move. Technology is a tool, not a complete solution. There is no magic pill. The skydiving analogy holds true here as well.  Ultimately, a single person packs your parachute.  For obvious reasons, you prefer that this person be trustworthy but there are no absolute guarantees of a 100% error-free scenario.  Weather and wind conditions are never a constant and the best-laid plans for a great day of skydiving are easily disrupted by forces outside of your control.  Rules and regulations vary by location and may be updated at any time and as I mentioned early on, even the best technology on its own will only get you started. The good news is that, like skydiving, with the right technology, the right planning, the right team and a proper understanding of the rules and regulations that govern your industry, your ECM deployment can be a great success.  
Failure to plan for any of the 4 factors that Glenn outlined in his article will certainly put your deployment and maybe even your company at risk, so consider them carefully. As a final aside, for those of you who consider skydiving an incredibly dangerous and risky pastime, consider this comparative statistic.  In 2012, the U.S. Parachute Association recorded 19 fatal skydiving accidents in the U.S. out of roughly 3.1 million jumps.  That’s 0.006 fatalities per 1,000 jumps. By comparison, the U.S. National Highway Traffic Safety Administration reports that there were 34,080 deaths due to car accidents in 2012.  Based on the percentages, one could argue that it is safer to jump out of a plane than to drive to the airport where the skydiving will take place. While the way you manage, secure, classify, control, retain and dispose of company files may not carry as much risk as driving or skydiving, it certainly carries risk for the organization when not planned and deployed appropriately.  Consider all the factors involved in your organization as you make your content management plans.  For additional areas of consideration, be sure to download our free whitepaper on the topic entitled "The Top 10 Criteria for Choosing an ECM System" which is available for download here.

    Read the article

  • Things I've noticed with DVCS

    - by Wes McClure
    Things I encourage: Frequent local commits This way you don't have to be bothered by changes others are making to the central repository while working on a handful of related tasks.  It's a good idea to try to work on one task at a time and commit all changes at partitioned stopping points.  A local commit doesn't have to build, just FYI, so a stopping point doesn't mean a build point nor a point that you can push centrally.  There should be several of these in any given day.  2 hours is a good indicator that you might not be leveraging the power of frequent local commits.  Once you have verified a set of changes works, save them away, otherwise run the risk of introducing bugs into it when working on the next task.  The notion of a task By task I mean a related set of changes that can be completed in a few hours or less.  In the same token don’t make your tasks so small that critically related changes aren’t grouped together.  Use your intuition and the rest of these principles and I think you will find what is comfortable for you. Partial commits Sometimes one task explodes or unknowingly encompasses other tasks, at this point, try to get to a stopping point on part of the work you are doing and commit it so you can get that out of the way to focus on the remainder.  This will often entail committing part of the work and continuing on the rest. Outstanding changes as a guide If you don't commit often it might mean you are not leveraging your version control history to help guide your work.  It's a great way to see what has changed and might be causing problems.  The longer you wait, the more that has changed and the harder it is to test/debug what your changes are doing! This is a reason why I am so picky about my VCS tools on the client side and why I talk a lot about the quality of a diff tool and the ability to integrate that with a simple view of everything that has changed.  This is why I love using TortoiseHg and SmartGit: they show changed files, a diff (or two way diff with SmartGit) of the current selected file and a commit message all in one window that I keep maximized on one monitor at all times. Throw away / stash commits There is extreme value in being able to throw away a commit (or stash it) that is getting out of hand.  If you do not commit often you will have to isolate the work you want to commit from the work you want to throw away, which is wasted productivity and highly prone to errors.  I find myself doing this about once a week, especially when doing exploratory re-factoring.  It's much easier if I can just revert all outstanding changes. Sync with the central repository daily The rest of us depend on your changes.  Don't let them sit on your computer longer than they have to.  Waiting increases the chances of merge conflict which just decreases productivity.  It also prohibits us from doing deploys when people say they are done but have not merged centrally.  This should be done daily!  Find a way to partition the work you are doing so that you can sync at least once daily. Things I discourage: Lots of partial commits right at the end of a series of changes If you notice lots of partial commits at the end of a set of changes, it's likely because you weren't frequently committing, nor were you watching for the size of the task expanding beyond a single commit.  Chances are this cost you productivity if you use your outstanding changes as a guide, since you would have an ever growing list of changes. 
Committing single files Committing single files means you waited too long and no longer understand all the changes involved.  It may mean there were overlapping changes in single files that cannot be isolated.  In either case, go back to the suggestions above to avoid this.  Committing frequently does not mean committing frequently right at the end of a day's work. It should be spaced out over the course of several tasks, not all at the end in a 5 minute window.

    Read the article

< Previous Page | 206 207 208 209 210 211 212 213 214 215 216 217  | Next Page >