Search Results


  • Caveat utilitor - Can I run two versions of Microsoft Project side-by-side?

    - by Martin Hinshelwood
    A number of our customers have asked if there are any problems in installing and running multiple versions of Microsoft Project on a single client. Although this is a case of Caveat utilitor (let the user beware), as long as the user understands and accepts the issues that can occur, they can do this. Although Microsoft provide the ability to leave old versions of Office products (except Outlook) on your client when you are installing a new version of the product, they certainly do not endorse doing so.

    Figure: For Project you can choose to keep the old stuff

    That being the case, I would have preferred that they put a “(NOT RECOMMENDED)” after the options to impart that knowledge to the rest of us, but they did not. The default and recommended behaviour is for the newer version’s installer to remove the older versions. Of course this does not apply in reverse: there are no forward compatibility packs for Office.

    There are a number of negative behaviours (or bugs) that can occur in this configuration:

    There is only one MS Project. In Windows a file extension can only be associated with a single program. In this case, MPP files can be associated with only one version of winproj.exe. The executables are in different folders, so if a user double-clicks a Project file on the desktop, in file explorer, or in an Outlook email, Windows will launch the winproj.exe associated with MPP and then load the MPP file. There are problems associated with this situation and, in some cases, workarounds. For example: the user double-clicks on a Project 2010 file; Project 2007 launches but is unable to open the file because it is a newer version. The workaround is for the user to launch Project 2010 from the Start menu and then open the file. If the file is attached to an email, they will need to first drag the file to the desktop.

    All your linked MS Project files need to be of the same version. A number of problems occur when people use Microsoft’s Object Linking and Embedding (OLE) technology. The three common uses of OLE are:

    - inserted projects, where a Master project contains sub-projects and each sub-project resides in its own MPP file
    - shared resource pools, where multiple MPP files share a common resource pool kept in a single MPP file
    - cross-project links, where a task or milestone in one MPP file has a predecessor/successor relationship with a task or milestone in a different MPP file

    What I’ve seen happen before is that if you are running a version of Project that is not associated with the MPP extension and then try to activate an OLE link, Project tries to launch the other version of Project. Things start getting very confused, since different MPP files are being controlled by different versions of Project running at the same time. I haven’t tried this in a while so I can’t give you exact symptoms, but I suspect that if Project 2010 is involved the symptoms will be different than in a Project 2003/2007 scenario. I’ve noticed that Project 2010 gives different error messages for the exact same problem when it occurs in Project 2003 or 2007. -Anonymous

    The recommendation would be either not to use these features if you have to have multiple versions of Project installed, or to use only a single version of Project. You may also get unexpected negative behaviours from shared resource pools even when you are not running multiple versions, as I have found that they can get broken very easily.
    If you need these things then it is probably best to use Project Server, as it was created to solve many of these specific issues.

    Note: I would not even allow multiple people to access a network copy of a Project file because of the way Windows locks files in write mode. I’ve seen users’ files get write-locked to the point where the only resolution was to reboot the server.

    Changing the default version to run for an extension

    So what if you want to change the default association from Project 2007 to Project 2010?

    Figure: “Control Panel | Folder Options | Change the file associated with a file extension”

    Windows normally only lists the last version installed for a particular extension. You can select a specific version by selecting the program you want to change and clicking “Change program… | Browse…” and then selecting the .exe you want to use on the file system.

    Figure: You will need to select the exact version of “winproj.exe” that you want to run

    Conclusion

    Although it is possible to run multiple versions of Project on one system, in the main it does not really make sense.
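    If you want to check programmatically which winproj.exe Windows is currently wired up to, a minimal C# sketch along the following lines would do it. This is my own illustration rather than part of the original guidance; it only reads the registry and assumes the usual ProgID → shell\open\command layout for the .mpp extension.

    using System;
    using Microsoft.Win32;

    class MppAssociationCheck
    {
        static void Main()
        {
            // .mpp maps to a ProgID, and the ProgID's shell\open\command
            // points at the one winproj.exe a double-click will launch
            using (var ext = Registry.ClassesRoot.OpenSubKey(".mpp"))
            {
                var progId = ext == null ? null : ext.GetValue(null) as string;
                Console.WriteLine("ProgID: " + (progId ?? "<none>"));
                if (progId != null)
                {
                    using (var cmd = Registry.ClassesRoot.OpenSubKey(progId + @"\shell\open\command"))
                    {
                        Console.WriteLine("Command: " + (cmd == null ? "<none>" : cmd.GetValue(null)));
                    }
                }
            }
        }
    }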

    Read the article

  • Secretara si seful (The Secretary and the Boss)

    - by interesante
    A banker is chatting with his friend:
    - Can you imagine, I've fallen in love with my secretary! She is 20 years old and I am 65! What do you think, will my chances improve if I tell her I am 50?
    - Your chances will improve if you tell her you are 80!
    See other funny stuff on my profile on this site.
    The owner of a hotel was puzzled while working out an invoice, so he decided to ask his secretary:
    - You graduated from the Polytechnic, didn't you?
    - Yes, the secretary replies.
    - Good, then tell me: if you had 20,000 dollars and subtracted 14% from it, what would you be left with?
    - With nothing but the earrings!

    Read the article

  • App Fabric Service Bus and Access Control Pricing

    - by kaleidoscope
    The Service Bus costs $3.99 per Connection-month on a consumption basis for individually provisioned connections. Data transfer charges would also apply. Or, if you are able to forecast your needs ahead of time, you can purchase “Packs” of Connections. For example: $9.95 for a pack of 5 Connections, $49.75 for a pack of 25, $199.00 for a pack of 100, or $995 for a pack of 500, plus data transfer charges. Connection Packs represent an effective rate of $1.99 per Connection-month. Access Control will be priced at $1.99 per 100,000 Transactions, which includes token requests and management operations, plus associated data transfer. Typically, Service Bus developers depend on Access Control to secure their Connections.

    More Information: http://azurefeeds.com/post/865/Announcing_Windows_Azure_platform_commercial_offer_availability_and_updated_AppFabric_pricing.aspx

    Amit, S
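    As a quick sanity check on that effective rate, here is a throwaway C# snippet of my own (not from the original announcement; the figures are simply the ones quoted above). Every pack divides out to $1.99 per Connection-month:

    using System;

    class ConnectionPackRates
    {
        static void Main()
        {
            // Pack sizes and prices as quoted in the post
            var sizes = new[] { 5, 25, 100, 500 };
            var prices = new[] { 9.95m, 49.75m, 199.00m, 995.00m };
            for (var i = 0; i < sizes.Length; i++)
            {
                // Each line prints 1.99
                Console.WriteLine("{0,3}-pack: {1:0.00} per Connection-month", sizes[i], prices[i] / sizes[i]);
            }
        }
    }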

    Read the article

  • TFS API Change WorkItem CreatedDate And ChangedDate To Historic Dates

    - by Tarun Arora
    There may be times when you need to modify the value of the fields “System.CreatedDate” and “System.ChangedDate” on a work item. Richard Hundhausen has a great blog post with ample reasons why you may (or may not) need to set the values of these fields to historic dates. In this blog post I’ll show you how to:

    - Create a PBI work item linked to a Task work item, pre-setting the value of the field ‘System.ChangedDate’ to a historic date
    - Change the value of the field ‘System.CreatedDate’ to a historic date
    - Simulate the historic burn down of a task type work item in a sprint
    - Explain the impact of updating the values of the fields CreatedDate and ChangedDate on the sprint burn down chart

    Rules of Play

    1. You need to be a member of the Project Collection Service Accounts group.
    2. You need to use ‘WorkItemStoreFlags.BypassRules’ when you instantiate the WorkItemStore service:

    // Instantiate Work Item Store with the BypassRules flag
    _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);

    3. You cannot set the ChangedDate
       - to less than the changed date of the previous revision
       - to greater than the current date

    Walkthrough

    The walkthrough contains 5 parts:

    00 – Required References
    01 – Connect to TFS Programmatically
    02 – Create a Work Item Programmatically
    03 – Set the values of the fields ‘System.ChangedDate’ and ‘System.CreatedDate’ to historic dates
    04 – Results of our experiment

    Let’s get started…

    00 – Required References

    Microsoft.TeamFoundation.dll
    Microsoft.TeamFoundation.Client.dll
    Microsoft.TeamFoundation.Common.dll
    Microsoft.TeamFoundation.WorkItemTracking.Client.dll

    01 – Connect to TFS Programmatically

    I have an in-depth blog post on how to connect to TFS programmatically, in case you are interested. However, the code snippet below will enable you to connect to TFS using the Team Project Picker.

    // Services I need access to globally
    private static TfsTeamProjectCollection _tfs;
    private static ProjectInfo _selectedTeamProject;
    private static WorkItemStore _wis;

    // Connect to TFS using the Team Project Picker
    public static bool ConnectToTfs()
    {
        var isSelected = false;
        // The user is allowed to select only one project
        var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);
        tfsPp.ShowDialog();
        // The TFS project collection
        _tfs = tfsPp.SelectedTeamProjectCollection;
        if (tfsPp.SelectedProjects.Any())
        {
            // The selected Team Project
            _selectedTeamProject = tfsPp.SelectedProjects[0];
            isSelected = true;
        }
        return isSelected;
    }

    02 – Create a Work Item Programmatically

    In the code snippet below I have created a Product Backlog Item and a Task type work item and then linked them together as parent and child. Note – you will have to set the ChangedDate to a historic date when you create the work item. Remember, if you try to set the ChangedDate to a value earlier than the last assigned one, you will receive the following exception:

    TF26212: Team Foundation Server could not save your changes. There may be problems with the work item type definition. Try again or contact your Team Foundation Server administrator.

    If you notice below, I have added a few seconds each time I have modified the ‘ChangedDate’, just to avoid running into the exception listed above.
    // Create linked work items and return their Ids
    private static List<int> CreateWorkItemsProgrammatically()
    {
        // Instantiate Work Item Store with the BypassRules flag
        _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);

        // List of work items to return
        var listOfWorkItems = new List<int>();

        // Create a new Product Backlog Item
        var p = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Product Backlog Item"]);
        p.Title = "This is a new PBI";
        p.Description = "Description";
        p.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name);
        p.AreaPath = _selectedTeamProject.Name;
        p["Effort"] = 10;

        // Just double checking that BypassRules is set to true
        if (_wis.BypassRules)
        {
            p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01");
        }

        if (p.Validate().Count == 0)
        {
            p.Save();
            listOfWorkItems.Add(p.Id);
        }
        else
        {
            Console.WriteLine(">> Following exception(s) encountered during work item save: ");
            foreach (var e in p.Validate())
            {
                Console.WriteLine(" - '{0}' ", e);
            }
        }

        var t = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Task"]);
        t.Title = "This is a task";
        t.Description = "Task Description";
        t.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name);
        t.AreaPath = _selectedTeamProject.Name;
        t["Remaining Work"] = 10;

        if (_wis.BypassRules)
        {
            t.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01");
        }

        if (t.Validate().Count == 0)
        {
            t.Save();
            listOfWorkItems.Add(t.Id);
        }
        else
        {
            Console.WriteLine(">> Following exception(s) encountered during work item save: ");
            foreach (var e in t.Validate())
            {
                Console.WriteLine(" - '{0}' ", e);
            }
        }

        // Link the Task to the PBI as a child
        var linkTypEnd = _wis.WorkItemLinkTypes.LinkTypeEnds["Child"];
        p.Links.Add(new WorkItemLink(linkTypEnd, t.Id) { ChangedDate = Convert.ToDateTime("2012-01-01").AddSeconds(20) });

        if (_wis.BypassRules)
        {
            p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01").AddSeconds(20);
        }

        if (p.Validate().Count == 0)
        {
            p.Save();
        }
        else
        {
            Console.WriteLine(">> Following exception(s) encountered during work item save: ");
            foreach (var e in p.Validate())
            {
                Console.WriteLine(" - '{0}' ", e);
            }
        }

        return listOfWorkItems;
    }

    03 – Set the value of “CreatedDate” and change the value of “ChangedDate” to historic dates

    The CreatedDate can only be changed after a work item has been created. If you try to set the CreatedDate to a historic date at the time of creation of a work item, it will not work.
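    To make that ordering explicit, here is a minimal sketch of my own (not part of the original walkthrough). It reuses the _wis store already instantiated with BypassRules, and 'id' is assumed to be the Id of a work item saved earlier:

    // Sketch only: CreatedDate can be rewritten solely on an existing work
    // item, so save the work item first, then set the historic CreatedDate
    var wi = _wis.GetWorkItem(id);
    wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime("2012-01-01");
    if (wi.Validate().Count == 0)
    {
        wi.Save(); // the historic CreatedDate now sticks
    }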
    // Let's do a work item effort burn down simulation by updating the ChangedDate & CreatedDate to historic values
    private static void WorkItemChangeSimulation(IEnumerable<int> listOfWorkItems)
    {
        foreach (var id in listOfWorkItems)
        {
            var wi = _wis.GetWorkItem(id);
            switch (wi.Type.Name)
            {
                case "ProductBacklogItem":
                    if (wi.State.ToLower() == "new")
                        wi.State = "Approved";
                    // Advance the changed date by a few seconds
                    wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                    // Set the CreatedDate to the ChangedDate
                    wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                    wi.Save();
                    break;
                case "Task":
                    // Advance the changed date by a few seconds
                    wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                    // Set the CreatedDate to the ChangedDate
                    wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                    wi.Save();
                    break;
            }
        }

        // A mock sprint start date
        var sprintStart = DateTime.Today.AddDays(-5);
        // A mock sprint end date
        var sprintEnd = DateTime.Today.AddDays(5);
        // What is the total sprint duration
        var totalSprintDuration = (sprintEnd - sprintStart).Days;
        // How much of the sprint have we already covered
        var noOfDaysIntoSprint = (DateTime.Today - sprintStart).Days;
        // Get the effort assigned to our tasks
        var totalEffortRemaining = QueryTaskTotalEfforRemaining(listOfWorkItems);
        // Define how much effort to burn every day
        decimal dailyBurnRate = totalEffortRemaining / totalSprintDuration < 1 ? 1 : totalEffortRemaining / totalSprintDuration;
        // We have just created one task
        var totalNoOfTasks = 1;
        var simulation = sprintStart;
        var currentDate = DateTime.Today.Date;

        // Carry on till effort has been burned down from sprint start to today
        while (simulation.Date != currentDate.Date)
        {
            var dailyBurnRate1 = dailyBurnRate;
            // A fixed amount needs to be burned down each day
            while (dailyBurnRate1 > 0)
            {
                // Burn down bit by bit from all unfinished task type work items
                foreach (var id in listOfWorkItems)
                {
                    var wi = _wis.GetWorkItem(id);
                    var isDirty = false;
                    // Set the status to in progress
                    if (wi.State.ToLower() == "to do")
                    {
                        wi.State = "In Progress";
                        isDirty = true;
                    }
                    // Ensure that there is enough effort remaining in tasks to burn down the daily burn rate
                    if (QueryTaskTotalEfforRemaining(listOfWorkItems) > dailyBurnRate1)
                    {
                        // Read the remaining work before zeroing it, so the burn rate is reduced correctly
                        var remainingWork = Convert.ToDecimal(wi["Remaining Work"]);
                        // If there is less than 1 unit of effort left in the task, burn it all
                        if (remainingWork <= 1)
                        {
                            wi["Remaining Work"] = 0;
                            dailyBurnRate1 = dailyBurnRate1 - remainingWork;
                            isDirty = true;
                        }
                        else
                        {
                            // How much to burn from each task?
                            var toBurn = (dailyBurnRate / totalNoOfTasks) < 1 ? 1 : (dailyBurnRate / totalNoOfTasks);
                            // Check that the task has enough effort left to allow burning toBurn effort
                            if (remainingWork >= toBurn)
                            {
                                wi["Remaining Work"] = remainingWork - toBurn;
                                dailyBurnRate1 = dailyBurnRate1 - toBurn;
                                isDirty = true;
                            }
                            else
                            {
                                wi["Remaining Work"] = 0;
                                dailyBurnRate1 = dailyBurnRate1 - remainingWork;
                                isDirty = true;
                            }
                        }
                    }
                    else
                    {
                        dailyBurnRate1 = 0;
                    }
                    if (isDirty)
                    {
                        if (Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).Date == simulation.Date)
                        {
                            wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(20);
                        }
                        else
                        {
                            wi.Fields["System.ChangedDate"].Value = simulation.AddSeconds(20);
                        }
                        wi.Save();
                    }
                }
            }
            // Increase the date by 1 to perform the burn down day by day
            simulation = Convert.ToDateTime(simulation).AddDays(1);
        }
    }

    // Get the total effort remaining in the current sprint
    private static decimal QueryTaskTotalEfforRemaining(IEnumerable<int> listOfWorkItems)
    {
        var unfinishedWorkInCurrentSprint = _wis.GetQueryDefinition(
            new Guid(QueryAndGuid.FirstOrDefault(c => c.Key == "Unfinished Work").Value));
        var parameters = new Dictionary<string, object> { { "project", _selectedTeamProject.Name } };
        var q = new Query(_wis, unfinishedWorkInCurrentSprint.QueryText, parameters);
        var results = q.RunLinkQuery();
        var wis = new List<WorkItem>();
        foreach (var result in results)
        {
            var _wi = _wis.GetWorkItem(result.TargetId);
            if (_wi.Type.Name == "Task" && listOfWorkItems.Contains(_wi.Id))
                wis.Add(_wi);
        }
        return wis.Sum(r => Convert.ToDecimal(r["Remaining Work"]));
    }

    04 – The Results

    If you are still reading, the results are beautiful!

    Image 1 – Create a work item with the ChangedDate pre-set to a historic date
    Image 2 – Set the CreatedDate to a historic date (same as the ChangedDate)
    Image 3 – Simulation of effort burn down on a task via the TFS API
    Image 4 – The history of changes on the Task. So, essentially this task has burned 1 hour per day.

    Sprint Burn Down Chart – What’s not possible?

    The sprint burn down chart is calculated from System.AuthorizedDate and not from System.ChangedDate/System.CreatedDate. So, though you can change System.ChangedDate and System.CreatedDate to historic dates, you will not be able to synthesize the sprint burn down chart.

    Image 1 – By changing the CreatedDate and ChangedDate to ‘18/Oct/2012’ you would have expected the burn down to have been impacted, but it won’t be, because the sprint burn down chart uses the value of the field ‘System.AuthorizedDate’ to calculate the unfinished work points. The AsOf queries that are used to calculate the unfinished work points use the value of the field ‘System.AuthorizedDate’.

    Image 2 – Using the above code I burned down 1 hour of effort per day over 5 days from the task work item. I would have expected the sprint burn down to show a constant burn down; instead, the burn down shows the effort exhausted on the 24th itself, simply because the burn down is calculated using ‘System.AuthorizedDate’.

    Now you would ask… “Can I change the value of the field System.AuthorizedDate to a historic date?” Unfortunately that’s not possible!
    You will run into a ValidationException: “TF26194: The value for field ‘Authorized Date’ cannot be changed.”

    Conclusion

    - You need to be a member of the Project Collection Service Accounts group in order to set the fields ‘System.ChangedDate’ and ‘System.CreatedDate’ to historic dates.
    - You need to instantiate the WorkItemStore using the flag BypassRules.
    - The System.ChangedDate needs to be set to a historic date at the time of work item creation. You cannot reset the ChangedDate to a date earlier than the existing ChangedDate, and you cannot reset the ChangedDate to a date greater than the current date time.
    - The System.CreatedDate can only be reset after a work item has been created. You cannot set the CreatedDate at the time of work item creation. The CreatedDate cannot be greater than the current date. You can, however, reset the CreatedDate to a date earlier than the existing value.
    - You will not be able to synthesize the sprint burn down chart by changing the value of System.ChangedDate and System.CreatedDate to historic dates, since the burn down chart uses AsOf queries to calculate the unfinished work points, which internally use System.AuthorizedDate and NOT System.ChangedDate & System.CreatedDate.
    - System.AuthorizedDate cannot be set to a historic date using the TFS API.

    Read other posts on using the TFS API here… Enjoy!

    Read the article

  • WPF and Composite Application Library – Missing The Point

    - by David Totzke
    I have a headache and it’s not even 9AM yet.  Well, ok, it’s nearly ten here now in GMT –5 but it’s before nine somewhere still. Sometimes people will miss the point of something so utterly and completely that one is left wondering how such a person can even dress themselves. Writing an application using WPF and the Composite Application Library (Prism) means that one must learn the various programming idioms common to these frameworks.  The Windows Forms event driven model simply will not suffice.  You need to come to grips with the idea of a very loosely coupled application.  Concepts that must be absorbed and internalized include Data Binding, Control and Data Templates, Commands, Dependency Injection, and Inversion of Control, as well as the Supervising Controller, Presentation Model and Model-View-View-Model patterns. It is as simple as that.  Not to embrace these concepts is to invite pain.  It is to invite noodles; and not the holy kind. Someone actually said to me that “just because it’s not WPF, doesn’t mean it’s wrong.”  And he’s right.  Unless, of course, you are writing a WPF application and especially if you are using the Composite Application Library. In simple terms then; YOU’RE DOING IT WRONG!   Dave Just because I can…

    Read the article

  • Hello From South Florida

    - by Sam Abraham
    Fellow Blog Readers: I figured I’d use my first blog post on GeeksWithBlogs to introduce myself. I recently relocated from Long Island, NY to South Florida, where I joined a local company as a Software Engineer specializing in technologies such as C#, ASP.Net 3.5, WCF, Silverlight, SQL Server 2008 and LINQ, to name a few. I am an MCP and MCTS ASP.Net 3.5, looking to get my .Net 4.0 certification soon.

    Having been in the industry for a few years so far, I figured I would share with you my take on the importance of being involved in (or at least attending) local user groups. I am a firm believer that besides using a certain technology, the best way to expand one’s knowledge is by sharing it with others and being equally open to learn from others just as much as you are willing to share what you know. In my opinion, an important factor that makes a good developer stand out is his/her ability to keep abreast of the latest and greatest, even in areas outside his/her direct expertise. Additionally, having spoken to various recruiters, technical user group attendees are always looked upon favorably as genuinely interested in their field and willing to take the initiative to expand their knowledge, which offers job candidates good leverage when competing for jobs.

    I believe I am very blessed to be in an area with a very strong and vibrant developer community. I found in the local .Net community leadership a genuine interest in constantly extending the opportunity to all developers to get more involved, and in encouraging those who are willing to take that initiative to achieve their goal: speak at meetings, volunteer at events, or write and publish articles/blogs about the latest and greatest technologies. With Vishal Shukla (site director for the West Palm Beach .Net User Group) traveling overseas, I have been extended the opportunity to come on board as site coordinator for FladotNet's WPB .Net User Group along with Venkata Subramanian, an opportunity which I gratefully accepted. Being involved in running a .Net User Group will surely help me personally and professionally, but my real hope is to use this opportunity to assist in delivering the ultimate common goal: spread the word about new .Net technologies, help everybody get more involved, and simply have fun learning new things.

    With my introduction out of the way, in the next few days I will be posting some notes on an upcoming talk I will be giving about MVC2 and VS2010 in mid-April.

    Environment.Exit(0);
    --Sam

    Read the article

  • Death March

    - by Nick Harrison
    It is a horrible sight to watch a project fail. There are few things as bad. Watching a project fail, regardless of the reason, is almost like sitting in a room with a "Dementor" from Harry Potter. It will literally suck all of the life and joy out of the room. Nearly every project that I have seen fail has failed because of political challenges or management challenges. Sometimes there are technical challenges that bring a project to its knees, but usually projects fail for less technical reasons. Here are a few observations about projects failing for political reasons.

    Both the client and the consultants have to be committed to seeing the project succeed. Put simply, you cannot solve a problem when the primary stakeholders do not truly want it solved. This could come from a consultant being more interested in extending the engagement. It could come from a client being afraid of what will happen to them once the problem is solved. It could come from disenfranchised stakeholders. Sometimes a project is beset on all sides. When you find yourself working on a project that has this kind of threat, do all that you can to constrain the disruptive influences of the bad apples. If their influence cannot be constrained, you truly have no choice but to move on to a new project.

    Tough choices have to be made to make a project successful. These choices will affect everyone involved in the project. They may involve users not getting a change request through that they want. Developers may not get to use the tools that they want. Everyone may have to put in more hours than they originally planned. Steps may be skipped. Compromises will be made, but if everyone stays committed to the end goal, you can still be successful. If individuals start feeling disgruntled or resentful of the compromises reached, the project can easily be derailed. When everyone is not working towards a common goal, it is like driving with one foot on the brake and one foot on the accelerator. Not only will you not get to where you are planning, you will also damage the car and possibly the passengers as well.

    It is important to always keep the end result in mind. Regardless of the development methodology being followed, the end goal is not comprehensive documentation. In all cases, it is working software. Comprehensive documentation is nice but useless if the software doesn't work. You can never get so distracted by the next goal that you fail to meet the current goal.

    Most projects are ultimately marathons. This means that the pace must be sustainable. Regardless of the temptations, you cannot burn the team alive. Processes will fail. Technology will get outdated. Requirements will change, but your people will adapt and learn and grow. If everyone on the team, from the most senior analyst to the most junior recruit, trusts and respects each other, there is no challenge that they cannot overcome. When everyone involved faces challenges with the attitude "This is my project and I will not let it fail" and "You are my teammate and I will not let you fail", you will in fact not fail. When you find a team that embraces this attitude, protect it at all cost.

    Edward Yourdon wrote a book called Death March. In it, he included a graph for categorizing Death March project types based on the Happiness of the Team and the Chances of Success. Chances are we have all worked on Death March projects. We will all most likely work on more Death March projects in the future.
To a certain extent, they seem to be inevitable, but they should never be suicide or ugly. Ideally, they can all be "Mission Impossible" projects, where everyone works hard, has fun, and knows that there is a good chance that they will succeed. If you are ever lucky enough to work on such a project, you will know the sense of pride that comes from the eventual success. You will recognize a profound bond with the team that you worked with. Chances are it will change your life, or at least your outlook on life. If you have not already read this book, get a copy and study it closely. It will help you survive and make the most out of your next Death March project.

    Read the article

  • Windows Azure SDK 1.2 Available - .NET 4.0 Support

    - by Shaun
    The Windows Azure team has just announced the release of the latest version of its tools and SDK (v1.2) at TechEd 2010 in New Orleans. You can download it here. The biggest new feature/improvement of this version of the SDK is Visual Studio 2010 RTM and .NET 4.0 support. It gives us the facilities to build our Azure-based applications on top of .NET 3.5 and 4.0 as well. So those who are working on .NET 4, like me, or are going to be, would be better off having this SDK installed, I think. There is also some other information about the evolution of Windows Azure from this TechEd session, which you can find here.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • C-Sharpen Up at Philly.NET

    - by Steve Michelotti
    On October 6th, I’ll be presenting at C-Sharpen Up at Philly.NET at the Microsoft Malvern, PA location. I’ll be presenting along with Stephen Bohlen, Andy Schwam, and Danilo Diaz. This is a great one-day event that covers real-world usage of all major C# language features from C# 1.0 to C# 5.0. It also includes a great presentation on the SOLID principles by Stephen Bohlen. Registration won’t be open much longer. You can register here. Hope to see you there!

    Read the article

  • Wise settings for Git

    - by Marko Apfel
    These settings reflect my Git environment. They are the result of reading about, and trying out, several ideas from others.

    Must-Haves

    Aliases

    [alias]
    ci = commit
    st = status
    co = checkout
    oneline = log --pretty=oneline
    br = branch
    la = log --pretty=\"format:%ad %h (%an): %s\" --date=short
    df = diff
    dc = diff --cached
    lg = log -p
    lol = log --graph --decorate --pretty=oneline --abbrev-commit
    lola = log --graph --decorate --pretty=oneline --abbrev-commit --all
    ls = ls-files
    ign = ls-files -o -i --exclude-standard

    Colors

    [color]
    ui = auto
    [color "branch"]
    current = yellow reverse
    local = yellow
    remote = green
    [color "diff"]
    meta = yellow bold
    frag = magenta bold
    old = red bold
    new = green bold
    whitespace = red reverse
    [color "status"]
    added = green
    changed = red
    untracked = cyan

    Core

    [core]
    autocrlf = true
    excludesfile = c:/Users/<user>/.gitignore
    editor = 'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin

    Nice to have

    Merge and Diff

    [merge]
    tool = kdiff3
    [mergetool "kdiff3"]
    path = c:/Program Files (x86)/KDiff3/kdiff3.exe
    [mergetool "p4merge"]
    path = c:/Program Files (x86)/Perforce Merge/p4merge.exe
    cmd = p4merge \"$BASE\" \"$LOCAL\" \"$REMOTE\" \"$MERGED\"
    keepTemporaries = false
    trustExitCode = false
    keepBackup = false
    [diff]
    guitool = kdiff3
    [difftool "kdiff3"]
    path = c:/Program Files (x86)/KDiff3/kdiff3.exe
    [difftool "p4merge"]
    path = C:/Users/<user>/My Applications/Perforce Merge/p4merge.exe
    cmd = \"p4merge.exe $LOCAL $REMOTE\"

    Read the article

  • Windows CE and the Compact Framework are dead?

    - by Valter Minute
    This is one of the questions that I’ve been asked more and more frequently at my public speeches and each time I meet customers. The announcement of the new Windows Phone 7 platform and the release of Visual Studio 2010 generated a bit of confusion around Windows CE and some of the technologies it supports. Windows CE is still alive and a lot of good programmers are working on the new releases (I had a chance to meet some of them during the MVP summit in February). Here’s a blog post from Olivier Bloch that describes the situation and provides some good news about the OS: http://blogs.msdn.com/obloch/archive/2010/05/03/windows-ce-is-not-dead.aspx As you can read there, Windows Phone 7 keeps its “roots” inside Windows CE. Regarding the .NET Compact Framework, this article from the excellent “I know the answer (it’s 42)” blog by Abhinaba (it seems that we share a passion for photography, Douglas Adams and embedded development) explains that the .NET CF is the foundation of the XNA and Silverlight implementations on the WP7 platform: http://blogs.msdn.com/abhinaba/archive/2010/03/18/what-is-netcf.aspx So Windows CE is here to stay, powering one of the most interesting smartphone platforms and also ready to power your devices. Add those blogs to your RSS reader list and stay tuned for more good news about CE and the Compact Framework!

    Read the article

  • Sample code for my #mix10 talk online

    - by Laurent Bugnion
    I just saw that the video for my MIX10 session is online already! Impressive work, MIX10 team. I also published the sample code on my web server, so here are the links:

    - Powerpoint slides
    - Video
    - MVVM Demo 1 (start)
    - MVVM Demo 1 (final)
    - Command sample
    - RelayCommand sample
    - Messaging sample
    - MVVM Demo 2 (start)
    - MVVM Demo 2 (final)
    - MVVM Light Toolkit Version 3

    It was a real pleasure and an amazing experience to give this talk and to get all the great feedback! Thanks all for coming, and as usual don’t hesitate to send your feedback! Laurent

    Read the article

  • RSS blues

    - by Valter Minute
    It seems that the RSS feed is not updating. If you missed the latest posts, here's a list:

    Silverlight for Windows Embedded tutorial (step 4): http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2010/03/09/silverlight-for-windows-embedded-tutorial-step-3-again.aspx
    XAML2CPP 1.0.1.0: http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2010/03/08/xaml2cpp-1.0.1.0.aspx

    Read the article

  • SharePoint Saturday Michigan Is Coming Up!

    - by Brian Jackett
    Next Saturday, March 13th, Ann Arbor, MI will be hosting SharePoint Saturday Michigan (SPSMI). For those unfamiliar, SharePoint Saturday is a community-driven event where various regional and national speakers gather to present at a FREE conference on all topics related to SharePoint. This will be my third SharePoint Saturday and the second one I’ve had the honor of presenting at. My presentation is titled “Real World Deployment of SharePoint 2007 Solutions” (click here for the SpeakerRate link). After taking a look at the speaker and session list, I can tell you with great excitement that this event is packed with great speakers and topics. Register here and come on out to SharePoint Saturday Michigan on March 13th. If you’re attending, feel free to track me down and say hi. See you there. -Frog Out

    Read the article

  • Agile Like Jazz

    - by Jeff Certain
    (I’ve been sitting on this for a week or so now, thinking that it needed to be tightened up a bit to make it less rambling. Since that’s clearly not going to happen, reader beware!)

    I had the privilege of spending around 90 minutes last night sitting and listening to Sonny Rollins play a concert at the Disney Center in LA. If you don’t know who Sonny Rollins is, I don’t know how to explain the experience; if you know who he is, I don’t need to. Suffice it to say that he has been recording professionally for over 50 years, and helped create an entire genre of music. A true master by any definition.

    One of the most intriguing aspects of a concert like this, however, is watching the master step aside and let the rest of the musicians play. Not just play their parts, but really play… letting them take over the spotlight, strut their stuff, and soak up enthusiastic applause from the crowd. Maybe a lot of it has to do with the fact that Sonny Rollins has been doing this for more than a half-century. Maybe it has something to do with a kind of patience you learn when you’re on the far side of 80 – and the man can still blow a mean sax for 90 minutes without stopping! Maybe it has to do with the fact that he was out there for the love of the music and the love of the show, not because he had anything to prove to anyone and, I like to think, not for the money. Perhaps it had more to do with the fact that, when you’re at that level of mastery, the other musicians are going to be good. Really good.

    Whatever the reasons, there was an incredible freedom on that stage – the ability to improvise, for each musician to showcase their own specialization and skills, and then come back to the common theme, back to being on the same page, as it were. All this took place in the same venue that is home to the L.A. Phil. Somehow, I can’t ever see the same kind of free-wheeling improvisation happening in that context. And, since I’m a geek, I started thinking about agility.

    Rollins has put together a quintet that reflects his own particular style and past. No upright bass or piano for Rollins – drums, bongos, electric guitar and bass guitar along with his sax. It’s not about the mix of instruments. Other trios, quartets, and sextets use different mixes of instruments. New Orleans jazz tends towards trombones instead of sax; some prefer cornet or trumpet. But no matter what the choice of instruments, size matters.

    Team sizes are something I’ve been thinking about for a while. We’re on a quest to rethink how our teams are organized. They just feel too big, too unwieldy. In fact, they really don’t feel like teams at all. Most of the time, they feel more like collections of people who happen to report to the same manager. I attribute this to a couple of factors. One is over-specialization; we have a tendency to have people work in silos. Although the teams are product-focused, within them our developers are both generalists and specialists. On the one hand, we expect them to be able to build an entire vertical slice of the application; on the other hand, each developer tends to be responsible for their own vertical slice. As a result, developers often work on their own piece of the puzzle, in isolation. This sort of feels like working on a jigsaw in a group – each person taking a set of colors and piecing them together to reveal a portion of the overall picture. But what inevitably happens when you go to meld all those pieces together? Inevitably, you have some sections that are too big to move easily.
    These sections end up falling apart under their own weight as you try to move them. Not only that, but there are other challenges – figuring out where that section fits, and how to tie it into the rest of the puzzle. Often, this is when you find a few pieces need to be added – these pieces are “glue,” if you will.

    The other issue that arises is due to the overhead of maintaining communications in a team. My mother, who worked in IT for around 30 years, once told me that 20% per team member is a good rule of thumb for maintaining communication. While this is a rule of thumb, it seems to imply that any team over about 6 people is going to become less agile simply because of the communications burden.

    Teams of ten or twelve seem like they fall into the philharmonic organizational model. Complicated pieces of music requiring dozens of players to all be on the same page require a much different model than the jazz quintet. There’s much less room for improvisation, originality or freedom. (There are probably orchestral musicians who will take exception to this characterization; I’m calling it like I see it from the cheap seats.) And, there’s one guy up front who is running the show, whose job is to keep all of those dozens of players on the same page, to facilitate communications. Somehow, the orchestral model doesn’t feel much like a self-organizing team, either. The first violin may be the best violinist in the orchestra, but they don’t get to perform free-wheeling solos. I’ve never heard of an orchestra getting together for a jam session.

    But I have heard of teams that organize their work based on the developers available, rather than organizing the developers based on the work required. I have heard of teams where desired functionality is deferred – or worse yet, schedules are missed – because one critical person doesn’t have any bandwidth available. I’ve heard of teams where people simply don’t have the big picture, because there is too much communication overhead for everyone to be aware of everything that is happening on a project.

    I once heard Paul Rayner say something to the effect of “you have a process that is perfectly designed to give you exactly the results you have.” Given a choice, I want a process that’s much more like jazz than orchestral music. I want a process that doesn’t burden me with lots of forms and checkboxes and stuff. Give me the simplest, most lightweight process that will work – and a smaller team of the best developers I can find. This seems like the kind of process that will get the kind of result I want to be part of.

    Read the article

  • Apress Deal of the day - 23/Feb/2011 - Ultra-Fast ASP.NET: Building Ultra-Fast and Ultra-Scalable Websites Using ASP.NET and SQL Server

    - by TATWORTH
    Today's $10 deal of the day at http://www.apress.com/info/dailydeal is Ultra-Fast ASP.NET: Building Ultra-Fast and Ultra-Scalable Websites Using ASP.NET and SQL Server by Rick Kiessig - ISBN 978-1-4302-2383-2. I won a copy of this book at 101 Books. Rick Kiessig is an all-star member of forums.asp.net - see http://forums.asp.net/members/RickNZ.aspx - and this book has been featured before as deal of the day. If you did not get a copy then, I suggest getting it today. "Ultra-Fast ASP.NET provides a practical guide to building extremely fast and scalable web sites using ASP.NET and SQL Server. It strikes a balance between imparting usable advice and backing that advice up with supporting background information. $49.99 | Published Nov 2009 | Rick Kiessig"

    Read the article

  • Use a Fake Http Channel to Unit Test with HttpClient

    - by Steve Michelotti
    Applications get data from lots of different sources. The most common is to get data from a database or a web service. Typically, we encapsulate calls to a database in a Repository object and we create some sort of IRepository interface as an abstraction to decouple between layers and enable easier unit testing by leveraging faking and mocking. This works great for database interaction. However, when consuming a RESTful web service, this is not always the best approach. The WCF Web APIs that are available on CodePlex (current drop is Preview 3) provide a variety of features to make building HTTP REST services more robust. When you download the latest bits, you’ll also find a new HttpClient which has been updated for .NET 4.0 as compared to the one that shipped for 3.5 in the original REST Starter Kit. The HttpClient currently provides the best API for consuming REST services on the .NET platform, and the WCF Web APIs provide a number of extension methods which extend HttpClient and make it even easier to use. Let’s say you have a client application that is consuming an HTTP service – this could be Silverlight, WPF, or any UI technology, but for my example I’ll use an MVC application:

    1: using System;
    2: using System.Net.Http;
    3: using System.Web.Mvc;
    4: using FakeChannelExample.Models;
    5: using Microsoft.Runtime.Serialization;
    6:
    7: namespace FakeChannelExample.Controllers
    8: {
    9:     public class HomeController : Controller
    10:     {
    11:         private readonly HttpClient httpClient;
    12:
    13:         public HomeController(HttpClient httpClient)
    14:         {
    15:             this.httpClient = httpClient;
    16:         }
    17:
    18:         public ActionResult Index()
    19:         {
    20:             var response = httpClient.Get("Person(1)");
    21:             var person = response.Content.ReadAsDataContract<Person>();
    22:
    23:             this.ViewBag.Message = person.FirstName + " " + person.LastName;
    24:
    25:             return View();
    26:         }
    27:     }
    28: }

    On line #20 of the code above you can see I’m performing an HTTP GET request to a Person resource exposed by an HTTP service. On line #21, I use the ReadAsDataContract() extension method provided by the WCF Web APIs to serialize to a Person object. In this example, the HttpClient is being passed into the constructor by MVC’s dependency resolver – in this case, I’m using StructureMap as an IoC and my StructureMap initialization code looks like this:

    1: using StructureMap;
    2: using System.Net.Http;
    3:
    4: namespace FakeChannelExample
    5: {
    6:     public static class IoC
    7:     {
    8:         public static IContainer Initialize()
    9:         {
    10:             ObjectFactory.Initialize(x =>
    11:             {
    12:                 x.For<HttpClient>().Use(() => new HttpClient("http://localhost:31614/"));
    13:             });
    14:             return ObjectFactory.Container;
    15:         }
    16:     }
    17: }

    My controller code currently depends on a concrete instance of the HttpClient. Now I *could* create some sort of interface, wrap the HttpClient in it, and use that object inside my controller instead – however, there are a few reasons why that is not desirable: For one thing, the API provided by the HttpClient provides nice features for dealing with HTTP services. I don’t really *want* these to look like C# RPC method calls – when HTTP services have REST features, I may want to inspect HTTP response headers and hypermedia contained within the message so that I can make intelligent decisions as to what to do next in my workflow (although I don’t happen to be doing these things in my example above) – this type of workflow is common in hypermedia REST scenarios.
    If I just encapsulate HttpClient behind some IRepository interface and make it look like a C# RPC method call, it will become difficult to take advantage of these types of things. Second, it could get pretty mind-numbing to have to create interfaces all over the place just to wrap the HttpClient. Then you’re probably going to have to hard-code HTTP knowledge into your code to formulate requests rather than just “following the links” that the hypermedia in a message might provide. Third, at first glance it might appear that we need to create an interface to facilitate unit testing, but actually it’s unnecessary. Even though the code above is dependent on a concrete type, it’s actually very easy to fake the data in a unit test. The HttpClient provides a Channel property (of type HttpMessageChannel) which allows you to create a fake message channel which can be leveraged in unit testing. In this case, what I want is to be able to write a unit test that just returns fake data. I also want this to be as re-usable as possible for my unit testing. I want to be able to write a unit test that looks like this:

    1: [TestClass]
    2: public class HomeControllerTest
    3: {
    4:     [TestMethod]
    5:     public void Index()
    6:     {
    7:         // Arrange
    8:         var httpClient = new HttpClient("http://foo.com");
    9:         httpClient.Channel = new FakeHttpChannel<Person>(new Person { FirstName = "Joe", LastName = "Blow" });
    10:
    11:         HomeController controller = new HomeController(httpClient);
    12:
    13:         // Act
    14:         ViewResult result = controller.Index() as ViewResult;
    15:
    16:         // Assert
    17:         Assert.AreEqual("Joe Blow", result.ViewBag.Message);
    18:     }
    19: }

    Notice on line #9, I’m setting the Channel property of the HttpClient to be a fake channel. I’m also specifying the fake object that I want to be in the response of my “fake” HTTP request. I don’t need to rely on any mocking frameworks to do this. All I need is my FakeHttpChannel. The code to do this is not complex:

    1: using System;
    2: using System.IO;
    3: using System.Net.Http;
    4: using System.Runtime.Serialization;
    5: using System.Threading;
    6: using FakeChannelExample.Models;
    7:
    8: namespace FakeChannelExample.Tests
    9: {
    10:     public class FakeHttpChannel<T> : HttpClientChannel
    11:     {
    12:         private T responseObject;
    13:
    14:         public FakeHttpChannel(T responseObject)
    15:         {
    16:             this.responseObject = responseObject;
    17:         }
    18:
    19:         protected override HttpResponseMessage Send(HttpRequestMessage request, CancellationToken cancellationToken)
    20:         {
    21:             return new HttpResponseMessage()
    22:             {
    23:                 RequestMessage = request,
    24:                 Content = new StreamContent(this.GetContentStream())
    25:             };
    26:         }
    27:
    28:         private Stream GetContentStream()
    29:         {
    30:             var serializer = new DataContractSerializer(typeof(T));
    31:             Stream stream = new MemoryStream();
    32:             serializer.WriteObject(stream, this.responseObject);
    33:             stream.Position = 0;
    34:             return stream;
    35:         }
    36:     }
    37: }

    The HttpClientChannel provides a Send() method which you can override to return any HttpResponseMessage that you want. You can see I’m using the DataContractSerializer to serialize the object and write it to a stream. That’s all you need to do. In the example above, the only thing I’ve chosen to do is to provide a way to return different response objects. But there are many more features you could add to your own re-usable FakeHttpChannel. For example, you might want to provide the ability to add HTTP headers to the message. You might want to use a different serializer other than the DataContractSerializer.
    You might want to provide custom hypermedia in the response rather than just an object, or set HTTP response codes. The list goes on. This is just one example of the really cool features being added to the next version of WCF to enable various HTTP scenarios. The code sample for this post can be downloaded here.
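    As an illustration of the headers idea above, here is a minimal sketch of my own (the class name and dictionary field are hypothetical, and it assumes the preview's HttpClientChannel behaves as shown in the post): a subclass of the FakeHttpChannel defined above that also stamps custom headers onto the faked response.

    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading;

    namespace FakeChannelExample.Tests
    {
        // Sketch only: reuse the base fake channel, then add arbitrary
        // response headers so tests can exercise header-driven logic
        public class FakeHttpChannelWithHeaders<T> : FakeHttpChannel<T>
        {
            private readonly Dictionary<string, string> headers;

            public FakeHttpChannelWithHeaders(T responseObject, Dictionary<string, string> headers)
                : base(responseObject)
            {
                this.headers = headers;
            }

            protected override HttpResponseMessage Send(HttpRequestMessage request, CancellationToken cancellationToken)
            {
                var response = base.Send(request, cancellationToken);
                foreach (var header in this.headers)
                {
                    // e.g. { "ETag", "\"v1\"" } to exercise caching decisions
                    response.Headers.Add(header.Key, header.Value);
                }
                return response;
            }
        }
    }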

    Read the article

  • Windows Phone SDK 7.1 Beta2

    - by Nikita Polyakov
    It’s here – the brand new Windows Phone SDK 7.1 Beta 2. This time you have the ability to flash your developer-unlocked phone to the Mango Beta! How awesome is that? Mega-Ultra-Mango-Awesome!! The Windows Phone SDK includes the following:

    Windows Phone SDK 7.1 (Beta2)
    Windows Phone Emulator (Beta2)
    Windows Phone SDK 7.1 Assemblies (Beta2)
    Silverlight 4 SDK and DRT
    Windows Phone SDK 7.1 Extensions for XNA Game Studio 4.0
    Microsoft Expression Blend SDK Preview for Windows Phone 7.1
    WCF Data Services Client for Window Phone 7.1
    Microsoft Advertising SDK for Windows Phone 7

    The direct download link is: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=26648
    Official details and instructions: http://create.msdn.com/en-US/news/Mango_Beta
    Official blog post: Windows Phone 7 Developer Blog - Developers Get Goody Basket Full of Mangos

    "If you're registered for Windows Phone Marketplace, you'll receive an invitation from Microsoft Connect that will provide access to a firmware update for your retail Windows Phone device"

    #ProTips:
    - Uninstall any Mango 7.1 Windows Phone 7 Beta tools if you already had them installed.
    - If you have Visual Studio 2010 RTM installed, you must install Service Pack 1 RTM before you install Windows Phone SDK 7.1 Beta 2. Please refer to the Service Pack 1 release notes for installation issues. Visual Studio 2010 SP1 RTM
    - If you installed Visual Basic for Windows Phone Developer Tools 7.0, you must uninstall it before installing Windows Phone SDK 7.1 Beta 2. Uninstall the item Visual Basic for Windows Phone Developer Tools – RTW from the programs list on your computer. Visual Basic is now fully integrated into Windows Phone SDK 7.1 Beta 2; you do not need to install it separately.

    Follow the instructions very, very closely. Updating your phone yourself is serious business and should not be done while not paying attention!

    Read the article

  • DRY and SRP

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/06/11/dry-and-srp.aspx

    Kent Beck’s XP Simplicity Rules (aka Four Rules of Simple Design) are a prioritized list of rules that, when applied to your code, generally yield a great design. As you’ll see from the above link, the list has slightly evolved over time. I find today they are usually listed as:

    1. All Tests Pass
    2. Don’t Repeat Yourself (DRY)
    3. Express Intent
    4. Minimalistic

    These are prioritized. If your code doesn’t work (rule 1) then everything else is forfeit. Go back to rule one and get the code working before worrying about anything else.

    Over the years the community has debated whether the priority of rules 2 and 3 should be reversed. Some say a little duplication in the code is OK as long as it helps express intent. I’ve debated it myself. This recent post got me thinking about this again, hence this post.

    I don’t think it is fair to compare “Expressing Intent” against “DRY”. This is a comparison of apples to oranges. “Expressing Intent” is a principle of code quality. “Repeating Yourself” is a code smell. A code smell is merely an indicator that there might be something wrong with the code. It takes further investigation to determine if a violation of an underlying principle of code quality has actually occurred.

    For example, “using nouns for method names”, “using verbs for property names”, or “using Booleans for parameters” are all code smells that indicate that code probably isn’t doing a good job at expressing intent. They are usually very good indicators. But what principle is the code smell of Duplication pointing to, and how good of an indicator is it?

    Duplication in the code base is bad for a couple of reasons. If you need to make a change and that change needs to be made in a number of locations, it is difficult to know if you have caught all of them. This can lead to bugs if/when one of those locations is overlooked. By refactoring the code to remove all duplication you will be left with only one place to change, thereby eliminating this problem.

    With most projects the code becomes the single source of truth for a project. If a production code base is inconsistent with a five-year-old requirements or design document, the production code that people are currently living with is usually declared as the current reality (or truth). Requirement or design documents at this age in a project life cycle are usually of little value. Although comparing production code to external documentation is usually straightforward, duplication within the code base muddles this declaration of truth. When code is duplicated, small discrepancies will creep in between the two copies over time. The question then becomes which copy is correct? As different factions debate how the software should work, trust in the software and the team behind it erodes.

    The code smell of Duplication points to a violation of the “Single Source of Truth” principle. Let me define that as:

    A stakeholder’s requirement for a software change should never cause more than one class to change.

    Violation of the Single Source of Truth principle will always result in duplication in the code. However, the inverse is not always true. Duplication in the code does not necessarily indicate that there is a violation of the Single Source of Truth principle.

    To illustrate this, let’s look at a retail system where the system will (1) send a transaction to a bank and (2) print a receipt for the customer.
    Although these are two separate features of the system, they are closely related. The reason for printing the receipt is usually to provide an audit trail back to the bank transaction. Both features use the same data: amount charged, account number, transaction date, customer name, retail store name, and etcetera. Because both features use much of the same data, there is likely to be a lot of duplication between them. This duplication can be removed by making both features use the same data access layer.

    Then come the divergent requirements. The receipt stakeholder wants a change so that the account number has the last few digits masked out to protect the customer’s privacy. That can be solved with a small IF statement whilst still eliminating all duplication in the system. Then the bank wants to take a picture of the customer as well as capture their signature and/or PIN number for enhanced security. Then the receipt owner wants to pull data from a completely different system to report the customer’s loyalty program point total.

    After a while you realize that the two stakeholders have somewhat similar, but ultimately different, responsibilities. They have their own reasons for pulling the data access layer in different directions. Then it dawns on you, the Single Responsibility Principle:

    There should never be more than one reason for a class to change.

    In this example we have two stakeholders giving two separate reasons for the data access class to change. It is a clear violation of the Single Responsibility Principle. That’s a problem because it can often lead to the project owner pitting the two stakeholders against each other in a vain attempt to get them to work out a mutual single source of truth. But that doesn’t exist. There are two completely valid truths that the developers need to support. How is this to be supported while honouring the Single Responsibility Principle? The solution is to duplicate the data access layer and let each stakeholder control their own copy, as sketched below.

    The Single Source of Truth and Single Responsibility Principles are very closely related. SST tells you when to remove duplication; SRP tells you when to introduce it. They may seem to be fighting each other, but really they are not. The key is to clearly identify the different responsibilities (or sources of truth) over a system. Sometimes there is a single person with that responsibility; other times there are many. This can be especially difficult if the same person has dual responsibilities. They might not even realize they are wearing multiple hats.

    In my opinion Single Source of Truth should be listed as the second rule of simple design, with Express Intent at number three. Investigation of the DRY code smell should yield to the proper application of SST, without violating SRP. When necessary, leave duplication in the system and let the class names express the different people that are responsible for controlling them. Knowing all the people with responsibilities over a system is the higher priority because you’ll need to know this before you can express it. Although it may be a code smell when there is duplication in the code, it does not necessarily mean that the coder has chosen to be expressive over DRY or that the code is bad.
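    To make the retail example concrete, here is a minimal C# sketch of my own (the class and property names are hypothetical, not from the original post): once the two stakeholders start pulling in different directions, each responsibility gets its own copy of the data it needs, rather than one shared class serving two masters.

    using System;

    // Owned by the bank: changes here are driven only by the bank's
    // requirements (full account number, photo capture, signature/PIN, ...)
    public class BankTransactionRecord
    {
        public string AccountNumber { get; set; }
        public decimal AmountCharged { get; set; }
        public DateTime TransactionDate { get; set; }
    }

    // Owned by the receipt stakeholder: changes here are driven only by the
    // receipt's requirements (masked account number, loyalty points, ...)
    public class ReceiptRecord
    {
        public string MaskedAccountNumber { get; set; }
        public decimal AmountCharged { get; set; }
        public DateTime TransactionDate { get; set; }
        public int LoyaltyPoints { get; set; }
    }

    The duplication of AmountCharged and TransactionDate is deliberate: each stakeholder now has a single source of truth that only their own requirements can change.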

    Read the article

  • Retrieving a list of eBay categories using the .NET SDK and GetCategoriesCall

    - by Bill Osuch
    eBay offers a .Net SDK for its Trading API - this post will show you the basics of making an API call and retrieving a list of current categories. You'll need the category ID(s) for any apps that post or search eBay.

    To start, download the latest SDK from https://www.x.com/developers/ebay/documentation-tools/sdks/dotnet and create a new console app project. Add a reference to the eBay.Service DLL, and a few using statements (System.Configuration and System.IO are needed for the config and file calls below):

    using System.Configuration;
    using System.IO;
    using eBay.Service.Call;
    using eBay.Service.Core.Sdk;
    using eBay.Service.Core.Soap;

    I'm assuming at this point you've already joined the eBay Developer Network and gotten your app IDs and user tokens. If not:

    Join the developer program
    Generate tokens

    Next, add an app.config file that looks like this:

    <?xml version="1.0"?>
    <configuration>
      <appSettings>
        <add key="Environment.ApiServerUrl" value="https://api.ebay.com/wsapi"/>
        <add key="UserAccount.ApiToken" value="YourBigLongToken"/>
      </appSettings>
    </configuration>

    And then add the code to get the xml list of categories:

    ApiContext apiContext = GetApiContext();
    GetCategoriesCall apiCall = new GetCategoriesCall(apiContext);
    apiCall.CategorySiteID = "0";

    //Leave this commented out to retrieve all category levels (all the way down):
    //apiCall.LevelLimit = 4;

    //Uncomment this to begin at a specific parent category:
    //StringCollection parentCategories = new StringCollection();
    //parentCategories.Add("63");
    //apiCall.CategoryParent = parentCategories;

    apiCall.DetailLevelList.Add(DetailLevelCodeType.ReturnAll);
    CategoryTypeCollection cats = apiCall.GetCategories();

    using (StreamWriter outfile = new StreamWriter(@"C:\Temp\EbayCategories.xml"))
    {
        outfile.Write(apiCall.SoapResponse);
    }

    GetApiContext() (provided in the sample apps in the SDK) is required for any call. Note that it relies on a static apiContext field declared at class level:

    private static ApiContext apiContext = null;

    static ApiContext GetApiContext()
    {
        //apiContext is a singleton, to avoid duplicate configuration reading
        if (apiContext != null)
        {
            return apiContext;
        }
        else
        {
            apiContext = new ApiContext();

            //set Api Server Url
            apiContext.SoapApiServerUrl = ConfigurationManager.AppSettings["Environment.ApiServerUrl"];

            //set Api Token to access eBay Api Server
            ApiCredential apiCredential = new ApiCredential();
            apiCredential.eBayToken = ConfigurationManager.AppSettings["UserAccount.ApiToken"];
            apiContext.ApiCredential = apiCredential;

            //set eBay Site target to US
            apiContext.Site = SiteCodeType.US;

            return apiContext;
        }
    }

    Running this will give you a large (4 or 5 MB) XML file that looks something like this:

    <soapenv:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
       <soapenv:Body>
          <GetCategoriesResponse>
             <Timestamp>2012-06-06T16:03:46.158Z</Timestamp>
             <Ack>Success</Ack>
             <CorrelationID>d02dd9e3-295a-4268-9ea5-554eeb2e0e18</CorrelationID>
             <Version>775</Version>
             <Build>E775_CORE_BUNDLED_14891042_R1</Build>
             <CategoryArray>
                <Category>
                   <BestOfferEnabled>true</BestOfferEnabled>
                   <AutoPayEnabled>true</AutoPayEnabled>
                   <CategoryID>20081</CategoryID>
                   <CategoryLevel>1</CategoryLevel>
                   <CategoryName>Antiques</CategoryName>
                   <CategoryParentID>20081</CategoryParentID>
                </Category>
                <Category>
                   <BestOfferEnabled>true</BestOfferEnabled>
                   <AutoPayEnabled>true</AutoPayEnabled>
                   <CategoryID>37903</CategoryID>
                   <CategoryLevel>2</CategoryLevel>
                   <CategoryName>Antiquities</CategoryName>
                   <CategoryParentID>20081</CategoryParentID>
                </Category>
                (etc.)

    You could work with this, but I wanted a nicely nested view, like this:

    <CategoryArray>
       <Category Name='Antiques' ID='20081' Level='1'>
          <Category Name='Antiquities' ID='37903' Level='2'/>
       </Category>
    </CategoryArray>

    ...so I transformed the xml:

    private void TransformXML(CategoryTypeCollection cats)
    {
        XmlElement topLevelElement = null;
        XmlElement childLevelElement = null;
        XmlNode parentNode = null;
        string categoryString = "";

        XmlDocument returnDoc = new XmlDocument();
        XmlElement root = returnDoc.CreateElement("CategoryArray");
        returnDoc.AppendChild(root);
        XmlNode rootNode = returnDoc.SelectSingleNode("/CategoryArray");

        //Loop through CategoryTypeCollection
        foreach (CategoryType category in cats)
        {
            if (category.CategoryLevel == 1)
            {
                //Top-level category, so we know we can just add it
                topLevelElement = returnDoc.CreateElement("Category");
                topLevelElement.SetAttribute("Name", category.CategoryName);
                topLevelElement.SetAttribute("ID", category.CategoryID);
                rootNode.AppendChild(topLevelElement);
            }
            else
            {
                // Level number will determine how many Category nodes we are deep
                categoryString = "";
                for (int x = 1; x < category.CategoryLevel; x++)
                {
                    categoryString += "/Category";
                }
                parentNode = returnDoc.SelectSingleNode("/CategoryArray" + categoryString + "[@ID='" + category.CategoryParentID[0] + "']");

                childLevelElement = returnDoc.CreateElement("Category");
                childLevelElement.SetAttribute("Name", category.CategoryName);
                childLevelElement.SetAttribute("ID", category.CategoryID);
                parentNode.AppendChild(childLevelElement);
            }
        }

        returnDoc.Save(@"C:\Temp\EbayCategories-Modified.xml");
    }

    Yes, there are probably much cleaner ways of dealing with it, but I'm not an XML expert… Keep in mind, eBay categories do not change on a regular basis, so you should be able to cache this data (either in a file or database) for some time. The XML returns a CategoryVersion node that you can use to determine if the category list has changed.
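    Building on that caching point, here is a minimal sketch of reading the CategoryVersion back out of the previously saved file with plain XmlDocument calls. The helper name and file path are hypothetical, and it assumes the saved SOAP response uses the standard urn:ebay:apis:eBLBaseComponents namespace; a lightweight call (for example, GetCategories with LevelLimit set to 1) could then tell you whether the cached file is still current:

    using System.IO;
    using System.Xml;

    // Hypothetical helper: returns the CategoryVersion from a previously saved
    // GetCategories SOAP response, or null if no cached file exists yet.
    static string GetCachedCategoryVersion(string path)
    {
        if (!File.Exists(path))
            return null;

        XmlDocument doc = new XmlDocument();
        doc.Load(path);

        // The response elements live in the eBay namespace, so SelectSingleNode
        // needs a namespace manager to find CategoryVersion.
        XmlNamespaceManager ns = new XmlNamespaceManager(doc.NameTable);
        ns.AddNamespace("e", "urn:ebay:apis:eBLBaseComponents");

        XmlNode versionNode = doc.SelectSingleNode("//e:CategoryVersion", ns);
        return versionNode == null ? null : versionNode.InnerText;
    }

    If the cached version matches the version the API reports, the saved file can be reused instead of downloading the full category tree again.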

    Read the article

  • Winnipeg VS.NET 2010 Launch Event Rolls On…

    - by D'Arcy Lussier
    We’re into the afternoon sessions at the Winnipeg VS.NET launch event! After Steve Porter does his magic on “What’s New for Teams with VS.NET 2010” I’ll be tag-teaming with my colleague Jason Klassen on ASP.NET and VS.NET 2010. Popcorn and prizes are coming up!

    Figure: Miguel Carrasco from Anvil Digital speaking to the masses.

    Figure: Steve starting in on What’s New for Teams in VS.NET 2010.

    Read the article

  • Sams Teach Yourself Windows Phone 7 Application Development in 24 Hours

    - by Nikita Polyakov
    I am extremely proud to announce that the book I helped author is now out and available nationwide and online!

    Sams Teach Yourself Windows Phone 7 Application Development in 24 Hours

    It’s been a great journey and I am honored to have worked with Scott Dorman, Joe Healy and Kevin Wolf on this title. Also worth mentioning is the great work that the editors from Sams and our technical reviewer Richard Bailey have put into this book! Thank you to everyone for the support and encouragement!

    You can pick up the book from:
    http://www.informit.com/store/product.aspx?isbn=0672335395
    http://www.amazon.com/Teach-Yourself-Windows-Application-Development/dp/0672335395

    Here is the cover to look for in the stores:

    Description: Covers Windows Phone 7.5

    In just 24 sessions of one hour or less, you’ll learn how to develop mobile applications for Windows Phone 7! Using this book’s straightforward, step-by-step approach, you’ll learn the fundamentals of Windows Phone 7 app development, how to leverage Silverlight or the XNA Framework, and how to get your apps into the Windows Marketplace. One step at a time, you’ll master new features ranging from the new sensors to using launchers and choosers. Each lesson builds on what you’ve already learned, helping you get the job done fast, and get it done right!

    Step-by-step instructions carefully walk you through the most common Windows Phone 7 app development tasks.
    Quizzes and exercises at the end of each chapter help you test your knowledge.
    By the Way notes present interesting information related to the discussion.
    Did You Know? tips offer advice or show you easier ways to perform tasks.
    Watch Out! cautions alert you to possible problems and give you advice on how to avoid them.

    Learn how to...
    Choose an application framework
    Use the sensors
    Develop touch-friendly apps
    Utilize push notifications
    Consume web data services
    Integrate with Windows Phone hubs
    Use the Bing Map control
    Get better performance out of your apps
    Work with data
    Localize your apps
    Use launchers and choosers
    Market and sell your apps

    Thank you!

    Read the article

  • Unexpected issues with SessionPageStatePersister

    - by geekrutherford
    Several iterations ago I implemented the SessionPageStatePersister in an application as a way to cut down on the size of the hidden ViewState input on aspx pages.

    At first it seemed utterly fantastic. The size of the ViewState appeared to be drastically reduced, and the application did not appear to perform any slower than baseline.

    Enter the iFrame & user control. I added a user control which pings the web server every 20 seconds in order to show updated application information to the user (new messages, reports, etc.). After releasing this nifty little control into the QA environment I quickly began receiving emails from testers about "post back" related error messages, which mostly centered around invalid ViewState exceptions.

    At first I dismissed it as something related to all of the AJAX requests happening on the page and considered turning off page event validation. However, upon further investigation I came across the following article:

    Things That You Should Watch Out For When Using SessionPageStatePersister

    In this article the author specifically states:

    "If you application uses frames than each frame request will create a new session view state item and as before it will remove items when reaching the maximum, you come into a situation that one of the frames will probably loose it session view state because other frames did post backs."

    Oh snap! That is precisely what I am doing. That, combined with multiple users on the application, equals dropped ViewStates!

    The temporary fix has been to disable the use of the SessionPageStatePersister in my application. This results in a bloated hidden ViewState input, but the web server is no longer tasked with maintaining/retrieving it and the app no longer loses ViewState information.
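    For reference, here is a minimal sketch of how SessionPageStatePersister is typically wired up in ASP.NET; the class name is hypothetical, and this reflects the standard documented override rather than the exact code from this application. Disabling it again, as described above, is just a matter of removing the override so the default hidden-field persister is used:

    using System.Web.UI;

    // Minimal sketch: a base page whose view state is kept in session
    // rather than in the hidden __VIEWSTATE field on the page.
    public class SessionStateBasePage : Page
    {
        protected override PageStatePersister PageStatePersister
        {
            get { return new SessionPageStatePersister(this); }
        }
    }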

    Read the article

  • Inline code within HEAD section of ASP Web Form

    - by geekrutherford
    Today I needed to include inline code within the HEAD section of a Web Form in order to dynamically include stylesheets based on the theme set for the application.

    Below is an example:

    <link href="../../Resources/Themes/<% = Page.Theme %>Grid.Default.css" rel="stylesheet" type="text/css">

    Upon saving and viewing the page I noted that the code was not being interpreted and instead was being treated as a string literal. How to get around it? Add a panel control in the HEAD section and place the links to the stylesheets, as in the example above, within the panel. For whatever reason, ASP.NET does not want to interpret inline code in the HEAD section, but it will allow you to add .NET controls there.
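    A minimal sketch of that workaround follows (the Panel ID is hypothetical; the stylesheet path matches the example above). Because the link now sits inside a server control rather than directly in the HEAD markup, ASP.NET evaluates the inline expression instead of treating it as a literal:

    <head runat="server">
        <%-- Wrapping the link in a server-side Panel lets the inline expression run --%>
        <asp:Panel ID="pnlThemeStyles" runat="server">
            <link href="../../Resources/Themes/<% = Page.Theme %>Grid.Default.css" rel="stylesheet" type="text/css" />
        </asp:Panel>
    </head>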

    Read the article

  • TFS 2012 Upgrade and SQL Server - SharePoint - OS Requirements.

    - by Vishal
    Hello folks,

    Recently I was involved in the installation and configuration of a Team Foundation Server 2010 farm for a client. A month after the installation and configuration was done and everything was working as it was supposed to, Microsoft released Team Foundation Server 2012 in mid August 2012. The company had been using Borland StarTeam as their source control, and once they started using TFS 2010 their developers and project managers were loving it, since TFS is not just a source control tool and is much better than StarTeam. Long story short, they are now interested in upgrading to the newest version.

    Below are some basic hardware and software requirements for TFS 2012:

    Operating System:
    Windows Server 2008 with SP2 (64-bit only)
    Windows Server 2008 R2 with SP1 (64-bit only)
    Windows Server 2012 (64-bit only)

    SQL Server:
    SQL Server 2008 R2 and SQL Server 2012
    SQL Server 2008 is no longer supported.
    SQL Server Requirements for TFS.

    SharePoint Products:
    SharePoint Server 2010 (SharePoint Foundation 2010, Standard, Enterprise)
    MOSS 2007 (Standard, Enterprise)
    Windows SharePoint Services 3.0 (WSS 3.0)
    SharePoint Products Requirements for TFS.

    Project Server:
    Project Server 2010 with SP1.
    Project Server 2007 with SP2.
    Project Server Requirements for TFS.

    More information on TFS upgrade requirements can be found here. Hardware recommendations can be found here.

    Thanks,
    Vishal Mody

    Read the article

< Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >