Search Results

Search found 5810 results on 233 pages for 'staff of geeks'.


  • The Product Owner

    - by Robert May
    In a previous post, I outlined the rules of Scrum. This post details one of those rules.

    Picking the most important part of Scrum is difficult. All of the rules are required, but if one rule were “more” required than the others, it would be having a good Product Owner. Simply put, the Product Owner can make or break the project.

    Duties of the Product Owner

    A Product Owner has many duties and responsibilities, and I’ll talk about each of them in detail below. A Product Owner:

    - Discovers and records stories for the backlog.
    - Prioritizes stories in the Product Backlog, Release Backlog and Iteration Backlog.
    - Determines release dates and iteration dates.
    - Develops story details and helps the team understand those details.
    - Helps QA to develop acceptance tests.
    - Interacts with the customer to make sure that the product is meeting the customer’s needs.

    Discovers and Records Stories for the Backlog

    When I do Scrum, I always use User Stories as the means for capturing functionality that’s required in the system. Some people will use Use Cases, but the same rule applies. The Product Owner has the ultimate responsibility for figuring out what functionality will be in the system. Many different mechanisms for capturing this input can be used. User interviews are great, but all sources should be considered, including talking with Customer Support types. Often, they hear what users are struggling with the most and are a great source for stories that can make the application easier to use.

    Care should be taken when soliciting user stories from technical types such as programmers and the people that manage them. They will almost always give stories that are very technical in nature and may not have a direct benefit for the end user. Stories are about adding value to the company. If the stories don’t have a direct benefit for the end user, the Product Owner should question whether or not the story should be implemented. In general, technical stories should be included as tasks in User Stories. Technical stories are often needed, but the ultimate value to the user is in user-facing functionality, so technical stories should be considered nothing more than overhead in providing that user functionality.

    Until the iteration prior to development, stories should be nothing more than short, one-line placeholders. An exercise called Story Planning can be used to brainstorm and come up with stories. I’ll save the description of this activity for another blog post. For more information on User Stories, please read the book User Stories Applied by Mike Cohn.

    Prioritizes Stories in the Product Backlog, Release Backlog and Iteration Backlog

    Prioritization of stories is one of the most difficult tasks that a Product Owner must do. A key concept of Scrum done right is the need to have the team working from a single set of prioritized stories. If the team does not have a single set of prioritized stories, Scrum will likely fail at your organization. The Product Owner is the ONLY person who has the responsibility to prioritize that list. The Product Owner must be very diplomatic and sincerely listen to the people around him so that he can get the priorities correct. Just listening will still not yield the proper priorities; care must also be taken to ensure that return on investment is considered. Ultimately, determining which stories give the most value to the company for the least cost is the most important factor in determining priorities.
    Product Owners should be willing to look at cold, hard numbers to determine the order for stories. Even when many people want a feature, if that feature is costly to develop, it may not have as high a return on investment as features that are cheaper but not as popular.

    The act of prioritization often causes conflict in an environment. Customer Service thinks that feature X is the most important, because it will stop people from calling. Operations thinks that feature Y is the most important, because it will stop servers from crashing. Developers think that feature Z is most important because it will make writing software much easier for them. All of these are useful goals, but the team can have only one list of items, and each item must have a priority that is different from all other stories. The Product Owner will determine which feature gives the best return on investment, and the other features will have to wait their turn, which means that someone will not have their top-priority feature implemented first.

    A weak Product Owner will refuse to do prioritization. I’ve heard the following phrase from multiple Product Owners: “Well, it’s all got to be done, so what does it matter what order we do it in?” If your Product Owner is using this phrase, you need a new Product Owner. Order is VERY important. In Scrum, every release is potentially shippable. If the wrong-priority items are developed, then the value added in each release isn’t what it should be. Additionally, a Product Owner with this mindset doesn’t understand Agile. A product is NEVER finished until the company has decided that it is no longer a going concern and they are no longer going to sell the product. Therefore, prioritization isn’t an event; it’s something that continues every day. The logical extension of the phrase “It’s all got to be done” is that you will never ship your product, since a product is never “done.”

    Once stories have been prioritized, assigning them to the Release Backlog and the Iteration Backlog becomes relatively simple. The top-priority items are copied into the respective backlogs in order and the task is complete. The team does have the right to shuffle things around a little in the iteration backlog. For example, they may determine that working on story C with story A is appropriate because they’re related, even though story B is technically a higher priority than story C. Or they may decide that story B is too big to complete in the time available after story A has tasks created, so they’ll work on story C since it’s smaller. They can’t, however, go deep into the backlog to pick stories to implement. The team and the Product Owner should work together to determine what’s best for the company. Prioritization is time consuming, but it’s one of the most important things a Product Owner does.
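    As a toy sketch of that value-for-cost weighing (the story names and numbers below are entirely made up for illustration), ordering a backlog by a simple value-to-cost ratio might look like this:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Story
        {
            public string Name;
            public double BusinessValue; // estimated value to the company
            public double Cost;          // estimated cost to develop
        }

        class PrioritizationSketch
        {
            static void Main()
            {
                // A popular but expensive feature can rank below a cheaper,
                // less popular one once cost is taken into account.
                var backlog = new List<Story>
                {
                    new Story { Name = "Feature X (stop support calls)",  BusinessValue = 80, Cost = 40 },
                    new Story { Name = "Feature Y (stop server crashes)", BusinessValue = 90, Cost = 15 },
                    new Story { Name = "Feature Z (easier development)",  BusinessValue = 30, Cost = 10 },
                };

                foreach (var story in backlog.OrderByDescending(s => s.BusinessValue / s.Cost))
                    Console.WriteLine("{0} (value/cost = {1:F1})", story.Name, story.BusinessValue / story.Cost);
            }
        }

    Real prioritization is of course a judgment call, not a formula; the point is only that cost belongs in the conversation alongside popularity.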
    Determines Release Dates and Iteration Dates

    Product Owners are responsible for determining release dates for a product. A common misconception among Product Owners is that every “release” needs to correspond to an actual release to customers. This is not the case. In general, releases should be no more than 3 months long. You may decide to release the product to the customers, and many companies do, but it may also be an internal release.

    If a release date is too far away, developers will fall into the trap of not feeling a sense of urgency. The date is far enough away that they don’t need to give the release their full attention. Additionally, important tasks, such as performance tuning, regression testing, user documentation, and release preparation, will not happen regularly, making them much more difficult and time consuming to do. The more frequently you do these tasks, the easier they are to accomplish.

    The Product Owner will be a key participant in determining whether or not a release should be sent out to the customers. The determination should be made on whether or not the features contained in the release are valuable enough and complete enough that the customers will see real value in the release. Often, some features will take more than three months to reach a state where they qualify for a release, or they need additional supporting features to be released. The Product Owner has the right to make this determination.

    In addition to release dates, the Product Owner will also help determine iteration dates. In general, an iteration length should be chosen and the team should follow that iteration length for an extended period of time. If the iteration length is changed every iteration, you’re not doing Scrum. Iteration lengths help the team and company get into a rhythm of developing quality software. Iterations should be somewhere between 2 and 4 weeks in length. Any shorter, and significant software will likely not be developed. Any longer, and the team won’t feel urgency and planning will become very difficult.

    Iterations may not be extended during the iteration. Companies where Scrum isn’t really followed will often use this as a strategy to complete all stories. They don’t want to face the harsh reality of what their true performance is, and looking good is more important than seeking visibility and improving the process and team. Companies like this typically don’t allow failure. This is unhealthy. Failure is part of life, and unless we learn from it, we can’t improve. I would much rather see a team push stories out to the next iteration and then have healthy discussions about why they failed than extend the iteration and not deal with the core problems.

    If iteration length varies, retrospectives become more difficult. For example, evaluating the performance of the team’s estimation efforts becomes much harder if the iteration length varies. Also, the team must have a velocity measurement. If the iteration length varies, measuring velocity becomes impossible, upper management no longer has the ability to evaluate the team’s performance, and people external to the team can no longer determine when key features are likely to be developed. Variable iterations cause the entire company to fail and likely cause Scrum to fail at an organization.

    Develops Story Details and Helps the Team Understand Those Details

    A key concept in Scrum is that a story is nothing more than a placeholder for a conversation. Stories should be nothing more than short, one-line statements about the functionality. The team will then converse with the Product Owner about the details of that story. The Product Owner needs to have a very good idea of what the details of the story are and needs to be able to help the team understand those details.

    Too often, this requirement is translated into a need for comprehensive documentation about the story, including old-fashioned requirements documentation. The team should only develop the documentation that is required and should not develop documentation that is created only because there is a process that says to do so.

    In general, what works best is that in the iteration before a team starts development work on a story, the Product Owner, with other appropriate business analysts, develops the details of that story. They’ll figure out what business rules are required, potentially make paper prototypes or other lightweight mock-ups, and seek to understand the story and what is implied. Note that the time allowed for this task is deliberately short. The Product Owner only has a single iteration to develop all of the stories for the next iteration.

    If more than one iteration is used, I’ve found that teams end up with Big Design Up Front and traditional requirements documents. This is a waste of time, since the team will then need to have discussions with the Product Owner to figure out what the requirements document says. Instead of this, skip making the pretty pictures and detailing the nuances of the requirements, and build only what is minimally needed by the team to do development. If something comes up during development, you can address it at that time and figure out what you want to do. The goal is to keep things as lightweight as possible so that everyone can move as quickly as possible.

    Helps QA to Develop Acceptance Tests

    In Scrum, no story can be counted until it is accepted by QA. Because of this, acceptance tests are very important to the team. In general, acceptance tests need to be developed prior to the iteration, or at the very beginning of it, so that the team can make sure that the tasks they develop will fulfill the acceptance criteria.

    The Product Owner will help the team, including QA, understand what will make the story acceptable. Note that the Product Owner needs to be careful about specifying that the feature will work “perfectly” at the end of the iteration. In general, features are developed a little bit at a time, so only the bit that is being developed should be considered necessary for acceptance.

    A weak Product Owner will make statements like “Do it right the first time.” Not only are these statements damaging to the team (as if they would try to do it WRONG the first time...), they also ignore the iterative nature of Scrum. Additionally, a weak Product Owner will seek to add scope in the acceptance testing. For example, they will refuse to determine acceptance at the beginning of the iteration, and then, after the team has planned and committed to the iteration, they will expand scope by defining acceptance. This often causes the team to miss the iteration because scope that wasn’t planned on is included. There are ways that the team can mitigate this problem. For example, include extra “Product Owner” time to deal with the uncertainty that you know will be introduced by the Product Owner. This will slow the perceived velocity of the team and is not ideal, since they’ll be doing more work than they get credit for.

    Interacts with the Customer to Make Sure that the Product is Meeting the Customer’s Needs

    Once development is complete, what the team has worked on should be put in front of real live people to see if it meets the needs of the customer. One of the great things about Agile is that if something doesn’t work, we can revisit it in a future iteration! This frees the team to make the best decision now, knowing that if that decision proves to be incorrect, the team can revisit it and change that decision.

    Features are about adding value to the customer, so if the customer doesn’t find them useful, then having the team make tweaks is valuable. In general, most software will be 80 to 90 percent “right” after the initial round, and only minor tweaks are required. If proper coding standards are followed, these tweaks are usually minor and easy to accomplish. Product Owners that are doing a good job will encourage real users to see and use the software, since they know that they are trying to add value to the customer. Poor Product Owners think they already know the answers, that their customers are silly and do stupid things, and that they don’t need customer input. If you have a Product Owner that is afraid to show the team’s work to real customers, you probably need a different Product Owner.

    Up next, “Who Makes a Good Product Owner,” followed by “Messing with the Team.”

    Technorati Tags: Scrum, Product Owner

    Read the article

  • Enabling 32-Bit Applications on IIS7 (also affects 32-bit OLE DB or ODBC drivers) [Solved]

    - by Humprey Cogay, C|EH
    We just bought a new web server. After installing Windows Server 2008 R2 (a 64-bit OS, with IIS7), SQL Server 2008 R2 Standard, and IBM Client Access for V5R3 with its .NET data providers, I tried deploying our new project, which is fully functional on an IIS6-based web server, and encountered this error:

        The 'IBMDA400.DataSource.1' provider is not registered on the local machine.

    To rule out missing software prerequisites or version conflicts (I had encountered some errors while installing IBM Client Access), I created a connection tester, a Windows app that accepts a connection string as a parameter and verifies whether it is valid. After entering the proper connection string and hitting the button, the test was successful. That narrowed my suspects down to the web app and IIS7.

    After Googling around, I found this post by Rakki Muthukumar (a Microsoft Developer Support Engineer for ASP.NET and IIS7): http://blogs.msdn.com/b/rakkimk/archive/2007/11/03/iis7-running-32-bit-and-64-bit-asp-net-versions-at-the-same-time-on-different-worker-processes.aspx

    So I scouted around IIS7's management console and found a little tweak, Enable 32-Bit Applications, on the application pool my app is a member of. After changing this parameter to True, the web app works. Yahoo! (Although I'm a Google kind of person.)
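    For anyone who prefers scripting the fix rather than clicking through IIS Manager, the same setting can be flipped from an elevated command prompt with appcmd (the application pool name below is hypothetical):

        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true

    This sets the enable32BitAppOnWin64 attribute on the pool, which is what the management console checkbox changes.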

    Read the article

  • Cloud Based Load Testing Using TF Service & VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx

    One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is ‘Cloud Based Load Testing’. In this blog post I’ll walk you through:

    - What is Cloud Based Load Testing?
    - How have I been using this feature? – Success story!
    - Where can you find more resources on this feature?

    What is Cloud Based Load Testing?

    It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress, but also lets you test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry, I have realized that the biggest barriers to Load Testing & Performance Testing adoption are:

    - the high infrastructure and administration cost that comes with this phase of testing
    - the time taken to procure and set up the test infrastructure
    - finding a use for this infrastructure investment after completion of testing

    Is cloud the answer?

    - 100% Visual Studio compatible
    - Scalable and realistic
    - Start testing in < 2 minutes
    - Intuitive
    - Pay only for what you need
    - Use existing on-premise tests in the cloud

    There are a lot of vendors out there offering Cloud Based Load Testing, to name a few: LoadStorm, SOASTA, BlazeMeter, Blitz, and others. The question you may want to ask is why you should go with Microsoft’s cloud-based load test offering. If you are a Microsoft shop or already have investments in Microsoft technologies, you’ll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing web tests authored in Visual Studio 2010 or Visual Studio 2012 will run in the cloud without requiring any modifications whatsoever. Microsoft’s cloud test rig also supports API-based testing: for example, if you are building a WPF application which consumes WCF services, you can write unit tests to invoke the WCF service, and these tests can be run on the cloud test rig and loaded with ‘N’ concurrent users for performance testing. If your assets are already hosted in Azure, and possibly in the same data centre as the cloud test rig, your Azure app will not incur a usage cost from the generated traffic, since the traffic comes from the same data centre. The licensing or pricing information on Microsoft’s cloud-based load test service is yet to be announced, but I would expect it to be priced attractively to match the market competition.

    The only additional configuration required for running load tests on Microsoft’s cloud-based load test service is to set the test run location to “Run tests using Visual Studio Team Foundation Service”.

    How have I been using Microsoft’s Cloud Based Load Test Service?

    I have been part of the Microsoft Cloud Based Load Test Service advisory council for the last 7 months. This gave me the opportunity to see the product shape up from concept to working solution. I was also the first person outside of Microsoft to try this offering out, which gave me the opportunity to test real-world applications at various clients using the Microsoft Load Test Service and provide real-world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test Service has been an insurance quote generation engine. This insurance quote generation engine is:

    - hosted in Windows Azure
    - expected to get quote requests from across the globe
    - expected to handle 5 million quote requests in a day (it was not clear how this load would be distributed across the day)

    There was no way I could simulate that kind of load from on premises without standing up additional hardware. But Microsoft’s cloud-based load test service allowed me to test my key performance testing scenarios: simulating expected load, endurance testing, threshold testing, and testing for latency.

    Simulating expected load: approach to devising a load pattern

    My approach to devising a load test pattern has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. So, for example, invoking 1 quote from the quote engine takes 0.5 seconds, i.e. one user generates 2 quotes per second. Now if you do the math:

    - 1 quote request by 1 user = 0.5 seconds
    - quotes generated by 1 user in 24 hours = 2 * 60 * 60 * 24 = 172,800
    - quotes generated by 30 users in 24 hours = 172,800 * 30 = 5,184,000

    This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, then you can devise your own load pattern. Some examples of load test patterns can be found here; a quick code sketch of the arithmetic follows.
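    As a minimal sketch of that calculation (the numbers are the illustrative ones from above, hard-coded):

        using System;

        class LoadPatternSketch
        {
            static void Main()
            {
                double secondsPerQuote = 0.5; // measured with a single virtual user

                double quotesPerUserPerDay = (1.0 / secondsPerQuote) * 60 * 60 * 24; // 172,800
                double targetQuotesPerDay = 5000000;

                int usersNeeded = (int)Math.Ceiling(targetQuotesPerDay / quotesPerUserPerDay);
                Console.WriteLine("Concurrent users needed: {0}", usersNeeded); // 29, padded to 30 above
            }
        }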
    Endurance Testing

    To test for endurance, I loaded the quote generation engine with an expected fixed user load and ran the test for a very long duration, over 48 hours, and observed the effect of the long-running test on the Azure infrastructure. Currently the Microsoft Load Test service does not support metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test.

    Threshold Testing

    To figure out how much user load the application could cope with before falling on its belly, I opted to step-load the quote generation engine by incrementing the user load, with different variations of incremental user load per minute, till the application crashed and forced an IIS reset.

    Testing for Latency

    Currently the Microsoft Load Test service does not support generating geographically distributed load. I did, however, deploy the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (this feature is provided out of the box right from Visual Studio 2010), I could see the difference in response times.

    More resources on Microsoft Cloud Based Load Test Service

    A few important links to get you started:

    - Download Visual Studio Ultimate 2013 Preview
    - Getting started guide for load testing using Team Foundation Service
    - Troubleshooting guide for FAQs and known issues
    - Team Foundation Service forum for questions and support
    - Detailed demo and presentation (link to Tech-Ed session recording)
    - Detailed demo and presentation (link to Build session recording)

    There are a few limits on the usage of the Microsoft cloud-based load test service that you can read about here. If you have any feedback on the service, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

    Read the article

  • Integrating F# in SharpDevelop

    - by Marko Apfel
    After installing SharpDevelop 4, F# Interactive could not be activated. In my case, the correct folder of the F# installation had to be specified in the application config file. So I opened SharpDevelop.exe.config and set this entry in the appSettings section: <add key="alt_fs_bin_path" value="C:\Program Files\FSharp\bin" />
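    For context, the entry sits inside the appSettings section of SharpDevelop.exe.config, roughly like this (other settings omitted; the bin path assumes a default F# installation and may differ on your machine):

        <configuration>
          <appSettings>
            <!-- ... other SharpDevelop settings ... -->
            <add key="alt_fs_bin_path" value="C:\Program Files\FSharp\bin" />
          </appSettings>
        </configuration>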

    Read the article

  • VSDB to SSDT Part 2: SQL Server 2008 Server Project … with SSDT

    - by Etienne Giust
    With Visual Studio 2012 and the SSDT technology, there is only one type of database project: the SQL Server Database Project. With Visual Studio 2010, we used to have the SQL Server 2008 Server Project, which we used to define server-level objects, mostly logins and linked servers. A convenient wizard allowed for the creation of this type of project. It does not exist anymore. Here is how to create an equivalent of the SQL Server 2008 Server Project with Visual Studio 2012:

    - Create a new SQL Server Database Project: it will be created empty.
    - Create a new SQL Schema Compare (SQL menu item > Schema Compare > New Schema Comparison).
    - As a source, select any database on the SQL server you want to mimic.
    - Set the target to be your newly created Database Project.
    - In the Schema Compare options (cog-like icon), Object Types pane, enable the server-level object types. You might want to tweak these and select only the object types you want.

    Then run the comparison, review and select your changes, and apply them to the project. An illustration of the kind of objects involved follows below.
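    For reference, the server-level objects such a project typically ends up holding look something like this (all names and details here are hypothetical):

        -- A login, scoped to the server rather than to any one database
        CREATE LOGIN [AppServiceLogin] WITH PASSWORD = 'placeholder-password';

        -- A linked server pointing at another SQL Server instance
        EXEC sp_addlinkedserver
            @server     = N'ReportingServer',
            @srvproduct = N'',
            @provider   = N'SQLNCLI',
            @datasrc    = N'REPORTSQL01';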

    Read the article

  • Cryptographic Validation Explained

    - by MarkPearl
    We have been using LogicNP’s CryptoLicensing for some of our software, and I was battling to understand how exactly the whole process worked. I was sent the following document which really helped explain it – so if you ever use the same tool it is well worth a read.

    Licensing Basics

    LogicNP CryptoLicensing For .Net is the most advanced and state-of-the-art licensing and copy protection system you can use for your software. The LogicNP CryptoLicensing system uses the latest cryptographic technology to generate and validate licenses. The cryptographic algorithm used is the RSA algorithm, which consists of a pair of keys called the generation key and the validation key. Data encrypted using the generation key can only be decrypted using the corresponding validation key.

    How does cryptographic validation work?

    When a new license project is created, a unique validation-generation key pair is created for the project. When LogicNP CryptoLicensing For .Net generates licenses, it encrypts the license settings using the generation key. The validation key can be safely distributed with your software and is used during validation. During license validation, LogicNP CryptoLicensing For .Net attempts to decrypt the encrypted license code using the validation key. If the decryption is successful, this means that the data was encrypted using the generation key, since only the corresponding validation key can decrypt data encrypted with the generation key. This further means that not only is the license valid, but that it was generated by you and only you, since nobody else has access to the generation key.

    Generation Key: This key is used by the CryptoLicensing Generator to generate encrypted license codes. It is stored in the license project file, so the license project file must be kept secure and confidential and must be accorded the same care as any other critical asset such as source code.

    Validation Key: This key is used for validating generated license codes. It is the same key displayed in the 'Get Validation Key And Code' dialog (Ctrl+K) and is used by your software when validating license codes (using LogicNP.CryptoLicensing.dll). Unlike the generation key, it is not necessary to keep this key secure and confidential.

    Note that the generation key pair is stored in the project file created by LogicNP CryptoLicensing For .Net, so it is very important to back up this file and to keep it secure. Once the file is lost, it is not possible to retrieve the key pair.

    FAQ

    Q. Do I use the same validation key to validate all license codes?
    A. Yes, the validation key (and generation key) for the project remains the same; you use the same key to validate all license codes generated using the project. You can retrieve the validation key using the "Project" menu --> "Get Validation Key & Code" menu item.

    Q. Can license codes generated using the generation key from one project be validated using the validation key of another project?
    A. No!

    Q. Is every generated license code unique?
    A. Yes, every license code generated by CryptoLicensing is guaranteed to be unique, even if you generate thousands of codes at a time.

    Q. What makes CryptoLicensing so secure?
    A. CryptoLicensing uses the latest cryptographic technology to generate and validate licenses. The cryptographic algorithm used is the RSA asymmetric key algorithm, which can use up to 3072-bit keys. Given current computing power, it takes years to break a 3072-bit key.

    Q. Is it possible for a hacker to develop a keygen for my software?
    A. Impossible.
    The cryptographic algorithm used by CryptoLicensing consists of a pair of keys called the generation key and the validation key. Data encrypted with one key can only be decrypted by the other key, and vice versa. Licenses are generated using the generation key and validated using the validation key. Without the generation key, it is impossible to generate valid licenses.

    Q. What is the difference between the validation key and the generation key?
    A. See the Generation Key and Validation Key descriptions above: the generation key creates encrypted license codes and must be kept confidential, while the validation key only validates them and can ship with your software.

    Q. Do I have to include the license project file (.licproj) with my software?
    A. No!!! This goes against the very essence of the security of the asymmetric cryptographic scheme, because the project file contains both the validation and generation keys. With your software, you only need to include the validation key, which will be used to validate licenses generated by CryptoLicensing using the generation key. The license project file should be treated as any other valuable and confidential asset, such as your source code.

    Q. Does the license service need the license project file?
    A. Yes. The license project file is needed whenever new licenses are generated (via the UI, via the API or via the license service). As just one example, the license service generates new machine-locked licenses when activated licenses are presented to it for activation, therefore the license service needs the license project file.

    Q. Is it possible to embed my own data in the generated licenses?
    A. Yes. You can embed any amount of additional data in the licenses. This data will have the same amount of security as the license code itself and will be tamper-proof. The embedded user data can be retrieved from your software.

    Q. What additional steps can I take to ensure that my software does not get cracked?
    A. There are many methods and techniques which can make it extremely difficult for a hacker to crack your software. See Writing Effective License Checking Code And Designing Effective Licenses for more information.

    Q. Why is the license service not working?
    A. The most common cause is not setting the CryptoLicense.LicenseServiceURL property before trying to validate a license. Make sure that this property is set to the correct URL where your license service is hosted. The next most common cause is that the license project file on the web server where your license service is hosted is not the latest. This happens if you make changes to the license project (for example, set the 'Enable With Serials' setting for a profile) but don't upload the updated project file to your web server.

    Q. Why are my serials not working?
    A. Serial codes require the use of a license service. See Using Serial Codes for more details. Also see the earlier question 'Why is the license service not working?'

    Q. Is the same validation key used to validate license codes generated from different profiles?
    A. Yes.
    Profiles are just pre-specified license settings for quickly generating licenses having those settings. The actual license code is still generated using the license project's cryptographic generation key and thus can be validated using the project's validation key.

    Q. Why are changes made to a profile not getting saved?
    A. Simply changing license settings via the UI and saving the license project does not save those license settings to the active profile. You must first save the license settings to a profile using the Save/Save As command from the Profiles menu.

    Q. Why is validation of activated licenses failing from the CryptoLicensing Generator, but working from my software?
    A. Make sure that you have specified the URL of the license service using the Project Properties dialog. Also see the earlier question 'Why is the license service not working?'

    Q. How can I extend the trial period of my customer?
    A. To extend the evaluation period of the customer, simply send him a new license code specifying the desired evaluation limits. Evaluation information such as the current used days, executions, etc. is stored in garbled form in a registry location which is derived from the license code. Therefore, when a new license code is used, the old evaluation information will not be used and a new evaluation period will be started.
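    The generation/validation split described above is standard asymmetric cryptography. As a rough, generic illustration of the principle (this is plain .NET RSA signing, not CryptoLicensing's actual implementation, whose internals are not public):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class AsymmetricLicenseSketch
        {
            static void Main()
            {
                // "Generation key": the full key pair, kept secret by the vendor.
                var generator = new RSACryptoServiceProvider(3072);
                byte[] license = Encoding.UTF8.GetBytes("user=alice;expires=2026-01-01");
                byte[] signature = generator.SignData(license, "SHA256");

                // "Validation key": the public half only, safe to ship with the software.
                var validator = new RSACryptoServiceProvider();
                validator.ImportParameters(generator.ExportParameters(false));

                bool genuine = validator.VerifyData(license, "SHA256", signature);
                Console.WriteLine(genuine ? "License was generated by the vendor" : "License is not valid");
            }
        }

    Anyone holding only the public half can check a license but cannot mint one, which is exactly why a keygen is infeasible without the generation key.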

    Read the article

  • FAQ: GridView Calculation with JavaScript

    - by Vincent Maverick Durano
    In my previous post I wrote a simple demo on how to Calculate Totals in GridView and Display them in the Footer. Basically, it calculates the total amount as you type into the TextBox and displays the grand total in the footer of the GridView; it was a server-side implementation. Many users in the forums are asking how to do the same thing without postbacks, and how to calculate both the amounts and the total amount together. In this post I will demonstrate how to do this using JavaScript.

    To get started, let's set up the form. For the simplicity of this demo, I set it up like this:

        <asp:GridView ID="GridView1" runat="server" ShowFooter="true" AutoGenerateColumns="false">
            <Columns>
                <asp:BoundField DataField="RowNumber" HeaderText="Row Number" />
                <asp:BoundField DataField="Description" HeaderText="Item Description" />
                <asp:TemplateField HeaderText="Item Price">
                    <ItemTemplate>
                        <asp:Label ID="LBLPrice" runat="server" Text='<%# Eval("Price") %>'></asp:Label>
                    </ItemTemplate>
                </asp:TemplateField>
                <asp:TemplateField HeaderText="Quantity">
                    <ItemTemplate>
                        <asp:TextBox ID="TXTQty" runat="server"></asp:TextBox>
                    </ItemTemplate>
                    <FooterTemplate>
                        <b>Total Amount:</b>
                    </FooterTemplate>
                </asp:TemplateField>
                <asp:TemplateField HeaderText="Sub-Total">
                    <ItemTemplate>
                        <asp:Label ID="LBLSubTotal" runat="server"></asp:Label>
                    </ItemTemplate>
                    <FooterTemplate>
                        <asp:Label ID="LBLTotal" runat="server" ForeColor="Green"></asp:Label>
                    </FooterTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>

    As you can see, there's nothing fancy about the markup above. It's just a standard GridView with BoundFields and TemplateFields in it. For the purpose of this demo, I use dummy data to populate the GridView. Here's the code:

        public partial class GridCalculation : System.Web.UI.Page
        {
            private void BindDummyDataToGrid()
            {
                DataTable dt = new DataTable();
                DataRow dr = null;
                dt.Columns.Add(new DataColumn("RowNumber", typeof(string)));
                dt.Columns.Add(new DataColumn("Description", typeof(string)));
                dt.Columns.Add(new DataColumn("Price", typeof(string)));

                dr = dt.NewRow();
                dr["RowNumber"] = 1;
                dr["Description"] = "Nike";
                dr["Price"] = "1000";
                dt.Rows.Add(dr);

                dr = dt.NewRow();
                dr["RowNumber"] = 2;
                dr["Description"] = "Converse";
                dr["Price"] = "800";
                dt.Rows.Add(dr);

                dr = dt.NewRow();
                dr["RowNumber"] = 3;
                dr["Description"] = "Adidas";
                dr["Price"] = "500";
                dt.Rows.Add(dr);

                dr = dt.NewRow();
                dr["RowNumber"] = 4;
                dr["Description"] = "Reebok";
                dr["Price"] = "750";
                dt.Rows.Add(dr);

                dr = dt.NewRow();
                dr["RowNumber"] = 5;
                dr["Description"] = "Vans";
                dr["Price"] = "1100";
                dt.Rows.Add(dr);

                dr = dt.NewRow();
                dr["RowNumber"] = 6;
                dr["Description"] = "Fila";
                dr["Price"] = "200";
                dt.Rows.Add(dr);

                // Bind the GridView
                GridView1.DataSource = dt;
                GridView1.DataBind();
            }

            protected void Page_Load(object sender, EventArgs e)
            {
                if (!IsPostBack)
                {
                    BindDummyDataToGrid();
                }
            }
        }

    Now try to run the page.
    The output should look something like the screenshot in the original post.

    The Client-Side Calculation

    Here's the code for the GridView calculation:

        <script type="text/javascript">
            function CalculateTotals() {
                var gv = document.getElementById("<%= GridView1.ClientID %>");
                // All inputs in the grid (the quantity text boxes)
                var tb = gv.getElementsByTagName("input");
                // All spans: a price label and a sub-total label per row,
                // plus the grand-total label in the footer
                var lb = gv.getElementsByTagName("span");
                var sub = 0;
                var total = 0;
                var indexQ = 1; // offset so that row i's sub-total span is lb[i + indexQ]
                var indexP = 0; // index of row i's price span (advances by 2 per row)
                for (var i = 0; i < tb.length; i++) {
                    if (tb[i].type == "text") {
                        sub = parseFloat(lb[indexP].innerHTML) * parseFloat(tb[i].value);
                        if (isNaN(sub)) {
                            lb[i + indexQ].innerHTML = "";
                            sub = 0;
                        } else {
                            lb[i + indexQ].innerHTML = sub;
                        }
                        indexQ++;
                        indexP = indexP + 2;
                        total += parseFloat(sub);
                    }
                }
                // The last span is the grand total in the footer
                lb[lb.length - 1].innerHTML = total;
            }
        </script>

    The code above calculates each sub-total by multiplying the price and the quantity, and at the same time calculates the total amount by adding up the sub-total values. You can then simply call the JavaScript function above like this:

        <ItemTemplate>
            <asp:TextBox ID="TXTQty" runat="server" onkeyup="CalculateTotals();"></asp:TextBox>
        </ItemTemplate>

    Running the code above will update the sub-totals and grand total as you type, as shown in the original post's screenshot. That's it! I hope someone finds this post useful!

    Technorati Tags: ASP.NET, JavaScript, GridView, TipsTricks

    Read the article

  • BUILD 2013 - Microsoft Set to Unveil Its Reinvention

    - by D'Arcy Lussier
    Originally posted on: http://geekswithblogs.net/dlussier/archive/2013/06/24/153211.aspx

    Some thoughts as we head into BUILD this week…

    This week in San Francisco Microsoft will be hosting the BUILD conference. They'll be talking up Windows 8.1 (Windows Blue), more Azure, some Windows Phone, Xbox, Office 365… actually, they told us on the original BUILD announcement site what we'd be seeing.

    While looking at this, consider a recent article from The Verge that talks about the speculation of a huge shake-up at Microsoft. From the article:

    All Things D quotes one insider as saying they're "titanic" changes, noting they might be attached to Ballmer's legacy at the company. "It's the first time in a long time that it feels like that there will be some major shifts, including some departures," says the alleged insider.

    Considering Ballmer let Sinofsky go right after the Windows 8 launch, the idea of Microsoft cutting loose some executives doesn't seem to be big news. But the next piece of the article frames things more interestingly:

    Ballmer is reportedly considering a new structure that would create four separate divisions: enterprise business, hardware, applications and services, and an operating systems group.

    This statement got me thinking… what would this new structure look like? (The original post includes a diagram of one possibility.)

    At a recent Microsoft shareholders' meeting (this year or last year, I can't recall which), Ballmer made the statement that Microsoft is now a products and services company. At the time I don't think I really let that statement sink in, partially because I really liked the Microsoft of my professional youth – the one that was a software and platform company.

    In Canada, Microsoft has been pushing three platform areas: Lync, Azure, and SQL Server. I would expect those to change moving forward as Microsoft continues to look for partners that will help them increase their services revenue through solutions that incorporate, or are based on, Azure, Office 365, Lync, and Dynamics.

    I also wonder if we're not seeing a culling of partners through changes to the Microsoft Partner Program. In addition to the changing certification requirements that align more to Microsoft's goals (i.e. there is no desktop-development-based MCSD, only Windows 8 Store Apps), competencies that partners can qualify for are being merged, requirements are being changed, and the licenses provided are being reduced. Ballmer warned as much at the last WPC, though: they are looking for partners who are "all in" with Microsoft, and these programs seem to support that sentiment.

    Heading into BUILD this week, I'll be looking to answer one question – what does it mean to be a Microsoft developer here in the 2010s? What is the future of the Microsoft development platform? Sure, Visual Studio is still alive and well, and Microsoft realizes that there's a huge install base of .NET developers actively working on solutions. But they've ratcheted down the messaging around their development stack and instead focused on promoting development for their platforms and services.

    Last year at BUILD, with the release of Windows 8, Microsoft just breached the walls of its cocoon. After this BUILD and the organizational change announcements in July, we'll see what Microsoft looks like fully emerged from its metamorphosis.

    Read the article

  • NINE Questions with Michelle Juett

    - by NINEQuestions
    Michelle Juett is one of the more interesting people I know, even though we've never met face to face. She's part artist, part techie and all cool. We "met" via my good buddy George Clingerman and have been plotting to take over the world, errr… I mean "collaborating" ever since. If you happen to live in the Seattle area, you can catch her and her work at Sakura Con on April 2-4, 2010 and at various other gamer and art cons throughout the year. You can also find her on Twitter as @Shelldragon. Now that you know a little bit, I'll let her tell you the rest of the story in these NINE Questions:

    1. Where are you from?

    I was born in Clearwater, Florida. I like to tell people I'm from the Bermuda Triangle, it just makes explaining myself so much easier. My family moved to Washington when I was 5 and I've been in the Pacific Northwest ever since. We like to QQ about the rain but we really love the green trees and clean water.

    2. What do you do?

    I fight evil by moonlight and win love by daylight… or something like that. I've been in quality assurance for games during the day since January 2008 and an artist for life. I currently work in QA for a really awesome game company in Bellevue. At home, I work on personal digital art, making game assets as well as other random freelance projects as they pop up.

    3. How did you get to where you are now?

    I'm still not where I want to be, but I'm getting closer. The biggest piece of advice I can give is to work hard and never settle for the minimum required. I tend to overwork myself, but I've never regretted it. You can want something really badly, but if you aren't willing to work for it, then you can't expect it to just happen. I've always drawn and had an unhealthy love for video games that I was told I'd grow out of. I knew I would not 'grow out' of games, and that real adults make them and I could too. After I graduated, in searching for jobs, I discovered game testing. I figured this would be a good way to get my foot in the door and start networking. I've worked with consoles, websites and now PC games. I stuck with my journey, although it has been a rocky one, daylighting as a tester and moonlighting as an artist. I'm still on that journey, but I wouldn't have it any other way. Test has given me a perspective that is difficult, if not impossible, to obtain any other way. It gives an unconditional respect for other hard-working testers and an insight into creative problem solving.

    4. So video game testing probably sounds WAY cooler than the reality. What's it like? What's a given day for you?

    Game testers don't get a lot of respect because of the stigmas and the fact most people don't actually know what we do. People hear about the opening and closing of disc trays all day. Many places do treat their testers like numbers. It all depends on where you work and how awesome your company is. I've had to deal with a lot of bad work situations to get to a really good one. QA exists to ensure the game is as flawless and enjoyable as it can be by the time it has to leave the nest and go out into the world. This includes everything from the obvious ("can I beat the level and save the princess?") to the more obscure ("what happens when I lose internet connection while trying to save, right before falling into a pit to my death, while holding the jump key, then my cat pulls out my memory card and hides it in her litter box?"). On the dev side, testers can be very scary people, especially when the test team is not in house and you can't see each other's faces. I've seen both sides. We don't mean to hurt your feelings. We really DO love you and want your game to be the best it can be! It can be some serious tough love.

    5. You are also an accomplished artist. Got any major projects right now you'd like to talk about?

    LOL, I don't know if I'd say I'm an accomplished artist just yet. I'm still a long way from where I want to be. I figure that's what makes you grow though: the desire to never stop improving. I like QA but I want to be a full-time artist. I was lucky enough to register for a table at Sakura Con in the 11-second window before the tables sold out. As such, I'll be selling my wares in the Artist Alley April 2-4th. Part of preparing for this is actually making the art to be sold there. Anime is a fun pastime but I don't draw a whole lot of it, so I'm making up for lost time. As I seem to enjoy burying myself in work, I'm an art lead for a secret project that's so secret I might be killed tonight for even mentioning it. I also take on various freelance projects and do what I can to help out indie games. I discovered the XNA community a year and a half ago and developed a love for indies when I was writing a weekly newsletter on XBLA news. I'm a little late to the party, but I find myself in a unique position where I am an artist and also have technical skills in games. While not a programmer myself, I have a lot of game sense and experience. I hope to make some awesome happen. Lastly, I have an ongoing web comic (Shell's Angels) that tends to get neglected when I get busy. I still love drawing comics and keep a little book with me to sketch down ideas as they pop into my head. I may pick it back up again as a larger project sometime in the future.

    6. Can you talk about any of the other freelance projects you're doing, or are you sworn to secrecy on those too? We wouldn't want a team of game developer ninjas to take you out or anything.

    All my projects are currently 2D. I have personal projects, such as the ongoing comic, as well as a graphic novel I've been picking at here and there. My main focus until April is Sakura Con, Sakura Con, Sakura Con. I see it as a great way to get exposure and convention experience. I found out I love conventions a couple of years ago and I want to get more involved in them.

    7. As an artist, what is your weapon of choice? What do you use to get most of your stuff done?

    I am a Photoshop Hero and I have the hoodie to prove it (http://www.pennyarcademerch.com/pah090011.html). I've dabbled in other paint programs but I always gravitate back to Photoshop. She is my one true love. I'd like to learn programs like Flash or Anime Studio when I get a bit more time, because of their animation abilities. I've worked on frame-by-frame animation forever, but I would love to learn 2D rigging. Still, nothing can compare to a simple sketchpad and a pencil. I always have one on me in case I come across or think of something interesting and can't get to a computer. If the Courier ever comes to exist, it will be an ideal weapon for me.

    8. You did some videos too, depicting the art creation process. What was the motivation behind those?

    The creative process is just as important as the final product, if not more so. I've always loved watching speed-paint videos and wanted to try it out myself. Turns out it's a lot of work and time, but it's definitely fun to go back and rewatch them. Art isn't always the end result and is more often the process itself.

    9. Got any interesting tattoos? Designed any for yourself or other people?

    Not yet, but not for lack of desire. I've toiled over what and where for years. Last year, I finally decided the back of my shoulders would be the place. Like anything permanent, I want it to have meaning. I thought of somehow incorporating games, but I couldn't find something I felt would stand the test of time, even with all the classic sprite games. I'm very picky, so we'll see if I can get something solid decided.

    Come see me at Sakura Con, April 2-4!!!

    Read the article

  • MVC 3 AdditionalMetadata Attribute with ViewBag to Render Dynamic UI

    - by Steve Michelotti
    A few months ago I blogged about using model metadata to render a dynamic UI in MVC 2. The scenario in that post was a view model where questions are conditionally displayed, so a dynamic UI is needed. To recap, the solution was to use a custom attribute called [QuestionId] in conjunction with an "ApplicableQuestions" collection to identify whether each question should be displayed. This allowed me to have a view model that looked like this:

        [UIHint("ScalarQuestion")]
        [DisplayName("First Name")]
        [QuestionId("NB0021")]
        public string FirstName { get; set; }

        [UIHint("ScalarQuestion")]
        [DisplayName("Last Name")]
        [QuestionId("NB0022")]
        public string LastName { get; set; }

        [UIHint("ScalarQuestion")]
        [QuestionId("NB0023")]
        public int Age { get; set; }

        public IEnumerable<string> ApplicableQuestions { get; set; }

    At the same time, I was able to avoid repetitive IF statements for every single question in my view:

        <%: Html.EditorFor(m => m.FirstName, new { applicableQuestions = Model.ApplicableQuestions })%>
        <%: Html.EditorFor(m => m.LastName, new { applicableQuestions = Model.ApplicableQuestions })%>
        <%: Html.EditorFor(m => m.Age, new { applicableQuestions = Model.ApplicableQuestions })%>

    by creating an editor template called "ScalarQuestion" that encapsulated the IF statement:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <%@ Import Namespace="DynamicQuestions.Models" %>
        <%@ Import Namespace="System.Linq" %>
        <%
            var applicableQuestions = this.ViewData["applicableQuestions"] as IEnumerable<string>;
            var questionAttr = this.ViewData.ModelMetadata.ContainerType.GetProperty(this.ViewData.ModelMetadata.PropertyName).GetCustomAttributes(typeof(QuestionIdAttribute), true) as QuestionIdAttribute[];
            string questionId = null;
            if (questionAttr.Length > 0)
            {
                questionId = questionAttr[0].Id;
            }
            if (questionId != null && applicableQuestions.Contains(questionId)) { %>
        <div>
            <%: Html.Label("") %>
            <%: Html.TextBox("", this.Model)%>
        </div>
        <% } %>

    You might want to go back and read the full post to get the complete context. MVC 3 offers a couple of new features that make this scenario more elegant to implement. The first step is to use the new [AdditionalMetadata] attribute, which so far appears to be an under-appreciated feature of MVC 3. With this attribute, I don't need my custom [QuestionId] attribute anymore – now I can just write my view model like this:

        [UIHint("ScalarQuestion")]
        [DisplayName("First Name")]
        [AdditionalMetadata("QuestionId", "NB0021")]
        public string FirstName { get; set; }

        [UIHint("ScalarQuestion")]
        [DisplayName("Last Name")]
        [AdditionalMetadata("QuestionId", "NB0022")]
        public string LastName { get; set; }

        [UIHint("ScalarQuestion")]
        [AdditionalMetadata("QuestionId", "NB0023")]
        public int Age { get; set; }

    Thus far, the documentation seems to be pretty sparse on the AdditionalMetadata attribute. It's buried in the Other New Features section of the MVC 3 home page and, after showing the attribute on a view model property, it just says, "This metadata is made available to any display or editor template when a product view model is rendered. It is up to you to interpret the metadata information." But what exactly does it look like to "interpret the metadata information"? Well, it turns out it makes the view much easier to work with.
    Here is the re-implemented ScalarQuestion template, updated for MVC 3 and Razor:

        @{
            object questionId;
            ViewData.ModelMetadata.AdditionalValues.TryGetValue("QuestionId", out questionId);
            if (ViewBag.applicableQuestions.Contains((string)questionId)) {
                <div>
                    @Html.LabelFor(m => m)
                    @Html.TextBoxFor(m => m)
                </div>
            }
        }

    So we've gone from 17 lines of code (in the MVC 2 version) to about 7-8 lines of code here. The first thing to notice is that in MVC 3 we now have a property called "AdditionalValues" that hangs off of the ModelMetadata property. This is automatically populated by any [AdditionalMetadata] attributes on the property. There is no more need for me to explicitly write reflection code to call GetCustomAttributes() and then check whether those attributes were present; I can just call TryGetValue() on the dictionary. Secondly, consider the "applicableQuestions" anonymous type that I passed in from the calling view – in MVC 3 I now have a dynamic ViewBag property where I can just "dot into" applicableQuestions with a nicer syntax than dictionary square brackets. And there's no problem calling the Contains() method on this dynamic object, because at runtime the DLR has resolved that it is a generic List<string>.

    At this point you might be saying that, yes, the view got much nicer than the MVC 2 version, but my view model got slightly worse. In the previous version I had a nice [QuestionId] attribute, but now, with the [AdditionalMetadata] attribute, I have to type the string "QuestionId" for every single property and hope that I don't make a typo. Well, the good news is that it's easy to create your own attributes that can participate in the metadata's additional values. The key is that the attribute must implement the IMetadataAware interface and populate the AdditionalValues dictionary in the OnMetadataCreated() method:

        public class QuestionIdAttribute : Attribute, IMetadataAware
        {
            public string Id { get; set; }

            public QuestionIdAttribute(string id)
            {
                this.Id = id;
            }

            public void OnMetadataCreated(ModelMetadata metadata)
            {
                metadata.AdditionalValues["QuestionId"] = this.Id;
            }
        }

    This now allows me to encapsulate my "QuestionId" string in just one place and get back to my original attribute, which can be used like this: [QuestionId("NB0021")]. The [AdditionalMetadata] attribute is a powerful and under-appreciated new feature of MVC 3. Combined with the dynamic ViewBag property, you can do some really interesting things with your applications, with less code and ceremony.
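    One piece not shown above is where ApplicableQuestions comes from. A minimal sketch of the controller side (the controller, action, view model name, and the source of the question IDs are all hypothetical):

        public class QuestionsController : Controller
        {
            public ActionResult Edit()
            {
                // "QuestionnaireViewModel" stands in for the view model shown above.
                var model = new QuestionnaireViewModel
                {
                    // In a real app these IDs would come from a rules engine or database.
                    ApplicableQuestions = new List<string> { "NB0021", "NB0023" }
                };
                return View(model);
            }
        }

    The view then hands the collection to the editor templates via EditorFor's additionalViewData argument, exactly as in the MVC 2 version, which is how ViewBag.applicableQuestions resolves inside the template.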

    Read the article

  • XNA Notes 011

    - by George Clingerman
    Even with a lot of the XNA community working on Dream Build Play entries (I swear I'm going to finish mine this year!), people are still finding time to do side projects and be amazingly active in the XNA and XBLIG community. With one eye on my code and one eye on the community, here's what I noticed these overachievers doing this past week!

    Time Critical XNA News:

    - Xbox LIVE Indie Games sales data will be delayed March 17-20th due to some scheduled maintenance http://create.msdn.com/en-us/news/indie_games_data_delay_march2011
    - GameMarx is releasing a series of videos to help raise donations for victims of the earthquakes and tsunami in Japan. Help out if you can! http://www.gamemarx.com/video/special/29/help-japan-sushido.aspx

    XNA MVPs:

    - Catalin Zima shares his thoughts on the MVP summit and my book! http://www.catalinzima.com/2011/03/mvp-summit-2011/
    - Glenn Wilson (@mykre) helps the XNA team announce some new educational content that you don't want to miss if you're porting your app or game to Windows Phone 7 http://www.virtualrealm.com.au/Blog/tabid/62/EntryId/653/Porting-your-App-or-Game-to-Windows-Phone-7.aspx and Windows Phone 7 from scratch http://www.virtualrealm.com.au/Blog/tabid/62/EntryId/654/Windows-Phone-from-Scratch.aspx and shares a link to some free architectural models and textures http://twitter.com/#!/Mykre/status/46410160784158720
    - George (that's me!) shares his MVP Summit 2011 summary and XBLIG thoughts http://geekswithblogs.net/clingermangw/archive/2011/03/15/144366.aspx

    XNA Developers:

    - @SmallCaveGames shares a Code of Ethics for Xbox LIVE Indie Game developers http://smallcavegames.blogspot.com/2011/03/unofficial-xblig-developers-code-of.html
    - Derek S adds more Xbox LIVE Indie Game studios to his master list of XBLIG links http://twitter.com/#!/Mr_Deeke/status/46140996056125440 http://xbl-indieverse.blogspot.com/p/xblig-links.html
    - Making games and want to help kids? Then share your story with GameFace: America! http://gameitupinitiative.com/about-the-initiative/programs/gameface-america/

    Xbox LIVE Indie Games (XBLIG):

    - XonaGames shares some video footage of their booth from GDC 2011: http://youtu.be/lxIV9nk3Gq4 http://youtu.be/GgfrjqkxR_o http://youtu.be/yVcpXrTX7SQ
    - Joystiq on Mommy's Best Games' Serious Sam Double D http://www.joystiq.com/2011/03/16/the-most-important-thing-about-serious-sam-double-d/ and The Escapist recommends that gamers start learning to avoid cleavage now http://www.escapistmagazine.com/news/view/108543-Boobie-Bomber-Makes-First-Appearance-in-Serious-Sam-Double-D
    - Magiko Gaming started a blog on the XBLIG dashboard daily Top 10 games in the US. A good way to go back in time and look at the history of which games were in the Top 10. http://dailytop10indiegames.wordpress.com/
    - Where are they going now? XBLIG developers at a crossroads: http://www.gamesetwatch.com/2011/03/where_are_they_going_now_xblig.php http://www.gamasutra.com/view/news/33527/InDepth_Where_Are_They_Going_Now_XBLIG_Developers_At_A_Crossroads_.php
    - BinaryTweed's Clover: A Curious Tale is Xbox LIVE's Deal of the Week! http://www.armlessoctopus.com/2011/03/15/what-luck-clover-a-curious-tale-is-half-price-this-week/
    - Looking for an Xbox LIVE Indie Game to buy? Writings of Mass Deduction has over 125 suggestions at this point! http://writingsofmassdeduction.com/
    - SkaStudios shares Vampire Smile achievements AND their PAX East 2011 booth setup time-lapse video http://www.ska-studios.com/2011/03/14/vampire-smile-achievement/ http://www.ska-studios.com/2011/03/15/pax-booth-setup-time-lapse/
    - MasterBlud and VVGTV start a new community for XBLIG developers and gamers to join http://vvgtv.forumotion.com/
    - Raymond Matthews (@DrakstarMatryx) covers Mommy's Best Games getting Serious http://www.darkstarmatryx.com/?p=286

    XNA Development:

    - Dave Henry (@mort8088) posts the 4th tutorial in his series, XNA 4.0 SpriteBatch extended http://mort8088.com/2011/03/11/xna-4-0-tutorial-4-spritebatch-extended/ plus Tutorial 5 - Creating a manual blank texture http://mort8088.com/2011/03/13/xna-4-tutorial-5-manual-blank-texture/ and XNA 4.0 Tutorial 6 - Spritesheet Object http://mort8088.com/2011/03/18/xna-4-0-tutorial-6-spritesheet-object/
    - Jason Mitchell shares a tutorial on setting the alpha value for SpriteBatch in XNA 4.0 http://www.jason-mitchell.com/index.php/2011/03/13/setting-alpha-value-for-spritebatch-draw-in-xna-4/
    - XNA for Silverlight developers: Part 7 - Collision detection http://www.silverlightshow.net/items/XNA-for-Silverlight-developers-Part-7-Collision-detection.aspx
    - Markus Ewald (@Cygon4) shares the full Ninject 2.0 binding for XNA and Sunburn http://twitter.com/#!/Cygon4/status/48330203826622464
    - Michael B. McLaughlin shares an AccelerometerInput XNA GameComponent he created (which I'm probably going to snag for a game I'm working on...) http://geekswithblogs.net/mikebmcl/archive/2011/03/17/accelerometerinput-xna-gamecomponent.aspx
    - Extra Credits tackles the building of a good tutorial. A must-watch for all indie game devs (thanks for pointing it out, Evan Johnson!) http://twitter.com/#!/johnsonevan/status/48452115680604160 http://www.escapistmagazine.com/videos/view/extra-credits/2921-Tutorials-101
    - ExEn is fully funded at this point, so it's definitely something for XBLIG developers to keep an eye on as they consider releasing their games on other platforms http://rockethub.com/projects/752-exen-xna-for-iphone-android-and-silverlight
    - Channel 9 and Greg Duncan post Mixing the Game State Management and Platformer XNA Recipes http://channel9.msdn.com/coding4fun/blog/Mixing-the-Game-State-Management-and-Platformer-XNA-Recipes
    - Sgt. Conker has noticed Mike McLaughlin has been crazy productive and has done a recap of his recent posts http://www.sgtconker.com/2011/03/recap-of-mikebmcls-posts/

    Read the article

  • Creating typed WSDL’s for generic WCF services of the ESB Toolkit

    - by charlie.mott
    source: http://geekswithblogs.net/charliemott

    Question: How do you make it easy for client systems to consume the generic WCF services exposed by the ESB Toolkit using messages that conform to agreed schemas\contracts? Usually the developer of a system consuming a web service adds a service reference using a WSDL. However, the WSDLs for the generic services exposed by the ESB Toolkit do not make it easy to develop clients that conform to agreed schemas\contracts.

    Recommendation: Take a copy of the generic WSDL and modify it to use the proper contracts. This is very easy. It will work with the generic on-ramps so long as the <part> wrapping is removed from the WCF adapter configuration in the BizTalk receive locations. Attempting to create a WSDL where the input and output messages are sent/returned with a <part> wrapper is a nightmare; I have not managed it.

    Consequences: I can only see the following consequences of removing the <part> wrapper:
    - ESB Test Client – I needed to modify the out-of-the-box ESB Test Client source code to make it send non-wrapped messages.
    - Flat-file formatted messages – the endpoint will no longer support flat-file message formats. However, even if you needed to support this integration pattern through WCF, you would most likely want to create a separate receive location anyway, with its own independently configured XML disassembler pipeline component.

    Instructions: These steps show how to implement a request-response version of this.

    WCF Receive Locations: In BizTalk, for the WCF receive location for the ESB on-ramp, set the adapter Message settings\bindings to “UseBodyPath”:
    - Inbound BizTalk message body = Body
    - Outbound WCF message body = Body

    Create a WSDL for each supported integration use-case: Save a copy of the WSDL for the WCF generic receive location above that you intend the client system to use. Give it a name that mirrors the interface agreement (e.g. Esb_SuppliersSearchCommand_wsHttpBinding.wsdl). Add any imported xsd schema files (see the next step) to this same folder.
Edit the WSDL to import schemas. For example, this:

    <xsd:schema targetNamespace="http://microsoft.practices.esb/Imports" />

… would become something like:

    <xsd:schema targetNamespace="http://microsoft.practices.esb/Imports">
        <xsd:import schemaLocation="SupplierSearchCommand_V1.xsd"
                    namespace="http://schemas.acme.co.uk/suppliersearchcommand/1.0"/>
        <xsd:import schemaLocation="SuppliersDocument_V1.xsd"
                    namespace="http://schemas.acme.co.uk/suppliersdocument/1.0"/>
        <xsd:import schemaLocation="Types\Supplier_V1.xsd"
                    namespace="http://schemas.acme.co.uk/types/supplier/1.0"/>
        <xsd:import schemaLocation="GovTalk\bs7666-v2-0.xsd"
                    namespace="http://www.govtalk.gov.uk/people/bs7666"/>
        <xsd:import schemaLocation="GovTalk\CommonSimpleTypes-v1-3.xsd"
                    namespace="http://www.govtalk.gov.uk/core"/>
        <xsd:import schemaLocation="GovTalk\AddressTypes-v2-0.xsd"
                    namespace="http://www.govtalk.gov.uk/people/AddressAndPersonalDetails"/>
    </xsd:schema>

Modify the input and output messages. For example, this:

    <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_InputMessage">
      <wsdl:part name="part" type="xsd:anyType"/>
    </wsdl:message>
    <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_OutputMessage">
      <wsdl:part name="part" type="xsd:anyType"/>
    </wsdl:message>

… would become something like:

    <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_InputMessage">
      <wsdl:part name="part"
                 element="ssc:SupplierSearchEvent"
                 xmlns:ssc="http://schemas.acme.co.uk/suppliersearchcommand/1.0" />
    </wsdl:message>
    <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_OutputMessage">
      <wsdl:part name="part"
                 element="sd:SuppliersDocument"
                 xmlns:sd="http://schemas.acme.co.uk/suppliersdocument/1.0"/>
    </wsdl:message>

This WSDL can now be added as a service reference in client solutions.
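Once the typed WSDL is in place, a client can generate a proxy from it (via Add Service Reference or svcutil) and work with typed messages. A hedged sketch only: every type and member name below is a hypothetical placeholder for whatever the proxy generator actually emits from the message definitions above:

    using System;

    class EsbClientSketch
    {
        static void Main()
        {
            // Hypothetical generated proxy and message types.
            var client = new ProcessRequestResponseClient();
            var query = new SupplierSearchEvent();                           // typed request contract
            SuppliersDocument result = client.SubmitRequestResponse(query);  // typed response contract
            Console.WriteLine(result != null ? "Got suppliers document" : "No result");
            client.Close();
        }
    }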

    Read the article

  • Node.js Adventure - Node.js on Windows

    - by Shaun
    Two weeks ago I had a talk with Wang Tao, a C# MVP in China who is currently running his startup company and its product, worktile. He asked me to figure out a synchronization solution to help his product in the future, and he preferred that I implement the service in Node.js, since worktile itself is written in Node.js. Even though I have some experience in ASP.NET MVC, HTML, CSS and JavaScript, I don’t think I’m an expert in JavaScript; in fact I’m very new to it, so it scared me a bit when he asked me to use Node.js. But after about a week of investigation I have to say Node.js is very easy to learn, use and deploy, even if you have very limited JavaScript skill. And I think I fell in love with Node.js. Hence I decided to start a series named “Node.js Adventure”, where I will demonstrate my story of learning and using Node.js on Windows and Windows Azure. This is the first one.

    (Brief) Introduction of Node.js

    I don’t want to give a fully detailed introduction of Node.js. There are many resources we can find on the internet, but the best one is its homepage. Node.js was created by Ryan Dahl and is sponsored by Joyent. It consists of about 80% C/C++ for the core and 20% JavaScript for the API. It utilizes CommonJS as its module system, which we will explain later. The official definition of Node.js is:

    “Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.”

    First of all, Node.js uses JavaScript as its development language and runs on top of the V8 engine, the same engine used by Chrome. It brings JavaScript, a client-side language, into the back-end service world, which is why many people say (even though it is not strictly accurate) that “Node.js is server-side JavaScript”. Additionally, Node.js uses an event-driven, non-blocking IO model. This means that in Node.js there is no way to block the currently working thread; every operation in Node.js executes asynchronously. This is a huge benefit, especially if our code needs IO operations such as reading from disk, connecting to a database or consuming a web service. Unlike IIS or Apache, Node.js does not use a multi-threaded model. In Node.js there is only one working thread, which serves all user requests and resource responses (the ST in the figure in the original post). There is also a POSIX async thread pool which contains many async threads (the ATs) for IO operations. When a user makes an IO request, the ST serves it, but it does not perform the IO operation itself. Instead, the ST goes to the POSIX async thread pool, picks up an AT, hands the operation to it, and then goes back to serve other requests. The AT actually performs the IO operation asynchronously. Suppose another user arrives before the AT completes the IO operation: the ST serves the new request, picks up another AT from the pool, and goes back again. When the first AT finishes its IO operation, it takes the result back and waits for the ST; the ST takes the response, returns the AT to the pool, and responds to the user. If the second AT finishes its job, the ST responds to the second user in the same way. As you can see, in Node.js there is only one thread serving clients’ requests and POSIX results. This thread loops between the users and POSIX, passing data back and forth; the async jobs themselves are handled by POSIX.
This is the event-driven, non-blocking IO model. The performance of this model is much better than the multi-threaded blocking model. For example, Apache is built on the multi-threaded blocking model while Nginx uses the event-driven non-blocking model; the performance and memory-usage comparisons between them (charts in the original post) are captured from the video NodeJS Basics: An Introductory Training, presented by the Cloud Foundry Developer Advocate team.

Node.js on Windows

Executing a Node.js application on Windows is very simple. First we need to download the latest Node.js platform from its website. Once installed, it registers its folder in the system PATH variable so that we can execute Node.js from anywhere. To confirm the installation, just open up a command window and type “node”; the Node.js console appears. As you can see, this is an interactive JavaScript console, and we can type some simple JavaScript code and commands here. To run a Node.js JavaScript application, just specify the source code file name as the argument of the “node” command. For example, let’s create a Node.js source code file named “helloworld.js” and copy in the sample code from the Node.js website:

    var http = require("http");

    http.createServer(function (req, res) {
        res.writeHead(200, {"Content-Type": "text/plain"});
        res.end("Hello World\n");
    }).listen(1337, "127.0.0.1");

    console.log("Server running at http://127.0.0.1:1337/");

This code creates a web server, listening on port 1337 and returning “Hello World” whenever a request comes in. Run it in the command window, then open a browser and navigate to http://localhost:1337/. As you can see, when using Node.js we are not creating a web application; in fact we are really creating a web server. We need to deal with requests, responses and the related headers, status codes, and so on. And this is one of the benefits of using Node.js: it is lightweight and straightforward. But creating a website from scratch again and again is not acceptable. The good news is that Node.js utilizes CommonJS as its module system, so we can leverage existing modules to simplify our job. Furthermore, there are thousands of modules available on the internet, covering almost all areas of server-side application development.

NPM and Node.js Modules

Node.js utilizes CommonJS as its module system. A module is a set of JavaScript files. In Node.js, if we have an entry file named “index.js”, then all modules it needs will be located in the “node_modules” folder, and in “index.js” we can import modules by specifying the module name. For example, in the code we’ve just created, we imported a module named “http”, which is a built-in module installed along with Node.js. Besides the built-in modules, there are many modules available at the NPM website. Thousands of developers are contributing and downloading modules there, and this is another benefit of using Node.js: there are many modules we can use, the number of modules is growing very fast, and we can also publish our own modules to the community. When I wrote this post there were 14,608 modules in total on NPM, with about ten thousand downloads per day. Installing a module is very simple. Let’s go back to our command window and input the command “npm install express”. This command installs a module named “express”, which is an MVC framework on top of Node.js.
Now let’s create another JavaScript file named “helloweb.js” and copy the code below into it. I import the “express” module; when the user browses the home page it responds with a text message, and if the incoming URL matches “/Echo/:value”, where “value” is whatever the user specified, it passes the value back along with the current date and time in JSON format. Finally, the website listens on port 12345.

    var express = require("express");
    var app = express();

    app.get("/", function(req, res) {
        res.send("Hello Node.js and Express.");
    });

    app.get("/Echo/:value", function(req, res) {
        var value = req.params.value;
        res.json({
            "Value" : value,
            "Time" : new Date()
        });
    });

    console.log("Web application opened.");
    app.listen(12345);

For more information on the “express” API, please have a look here. Start the application from the command window with the command “node helloweb.js” and then navigate to the home page; we can see the response in the browser. And if we go to, for example, http://localhost:12345/Echo/Hello Shaun, we can see the JSON result. The “express” module is very popular on NPM; it makes the job simple when we need to build an MVC website. There are many other very useful modules on NPM, for example:
- underscore: a utility module covering many common operations such as forEach, map, reduce, select, etc.
- request: a very simple HTTP request client.
- async: a library for coordinating async operations.
- wind: a library which enables us to control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps.

Node.js and IIS

I demonstrated how to run a Node.js application from the console. Since we are on Windows, another common requirement would be: “can I host Node.js in IIS?” The answer is “yes”. Tomasz Janczuk created a project, IISNode, at his GitHub space, which we can find here, and Scott Hanselman has published a blog post introducing it.

Summary

In this post I provided a very brief introduction to Node.js, including its official definition, its architecture and how it implements the event-driven, non-blocking model. Then I described how to install and run a Node.js application from the Windows console. I also described the Node.js module system and the NPM command. At the end I referred to some links about IISNode, an IIS extension that allows Node.js applications to run on IIS. Node.js has become a very popular server-side application platform, especially this year. By leveraging its non-blocking IO model and async features, it is very useful for building highly scalable, asynchronous services. I think Node.js will be widely used in cloud application development in the near future.

In the next post I will explain how to use SQL Server from Node.js.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Run Your Tests With Any NUnit Version

    - by Alois Kraus
    I always thought that the NUnit test runners and the test assemblies needed to reference the same NUnit.Framework version. I wanted to be able to run my test assemblies with the newest GUI runner (currently 2.5.3). OK, so all I need to do is reference both NUnit versions: the newest one and the official one for the current project. There is a nice article from Kent Bogart online on how to reference the same assembly multiple times with different versions. The magic works by referencing one NUnit assembly with an alias which prefixes all types inside it. Then I could decorate my tests with the TestFixture and Test attributes from both NUnit versions and everything worked fine, except that this was ugly. After playing around a little to make it simpler, I found that I did not need to reference both NUnit.Framework assemblies at all. The test runners do not require the TestFixture and Test attributes in their specific version. That is really neat: since the test runners are instructed what to do by attributes in a declarative way, there is really no need to tie the runners to a specific version. At its core, NUnit has this little method hidden away to find matching TestFixtures and Tests:

        public bool CanBuildFrom(Type type)
        {
            if (!(!type.IsAbstract || type.IsSealed))
            {
                return false;
            }
            return (((Reflect.HasAttribute(type, "NUnit.Framework.TestFixtureAttribute", true) ||
                      Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestAttribute", true)) ||
                      Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestCaseAttribute", true)) ||
                      Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TheoryAttribute", true));
        }

    That is versioning and backwards compatibility at its best. I tell NUnit what to do by decorating my test classes with NUnit attributes, and the runner executes my intent without binding me to a specific version. The contract between NUnit versions is actually a bit more complex (think of AssertExceptions), but this is also handled nicely: instead of using the concrete type, NUnit simply checks the caught exception type by string. What can we learn from this? Versioning can be easy if the contract is small and the users of your library use it in a declarative way (attributes). Everything beyond that will force you to reference several versions of the same assembly, with all the consequences. Type equality is lost between versions, so none of your casts will work. That means you cannot simply use IBigInterface in two versions; you will need a wrapper to call the correctly versioned one. To get out of this mess you can use one (and only one) version-agnostic driver to encapsulate your business logic from the concrete versions. This is of course more work, but as NUnit shows, it can be easy. Simplicity is therefore not just a nice thing to have, but requirement number one if you intend to make things more complex in version two and want to support any version (older and newer). Any interaction model above "easy" will not be maintainable. There are different approaches to versioning. Below are my own personal observations of how versioning works within the .NET Framework and NUnit.

    Versioning Models

    1. Bug Fixing and New Isolated Features

    When you only need to fix bugs, there is no need to break anything. This is especially true when you have a big API surface.
Microsoft did this with the .NET Framework 3.0, which left the CLR as-is but delivered new assemblies for the WPF, WCF and Windows Workflow Foundation features. Their basic model was that the .NET 2.0 assemblies were declared "red" assemblies which must not change (well, mostly: each change was carefully reviewed to minimize the risk of breaking changes as much as possible), whereas the new "green" assemblies of .NET 3.0/3.5 did not have such obligations, since they implemented new, unrelated features which did not have any impact on the red assemblies. This is a versioning strategy aimed at maximum compatibility and the delivery of new, unrelated features. If you have a big API surface you should strive hard to do the same, or you will break your customers' code with every release.

2. New Breaking Features

There are times when really new things need to be added to an existing product. The .NET Framework 4.0 changed the CLR in many ways, which caused subtly different behavior even though the APIs remained largely unchanged. Sometimes it is possible to simply recompile an application to make it work (e.g. a changed method signature: void Func() -> bool Func()), but behavioral changes need much more thought and cannot be automated. To minimize the impact, .NET 2.0/3.0/3.5 applications will not automatically use the .NET 4.0 runtime when it is installed; they keep using the "old" one. What is interesting is that a side-by-side execution model of both CLR versions (2 and 4) within one process is possible. The key to success was total isolation: you will have two GCs, two JIT compilers and two finalizer threads within one process. The two .NET runtimes cannot talk to each other (except via the usual IPC mechanisms). Both runtimes share nothing and run independently within the same process. This enables Explorer plugins written for CLR 2.0 to work even when a CLR 4 plugin is already running inside the Explorer process. The price for isolation is an increased memory footprint, because everything is loaded and running twice.

3. New Non-Breaking Features

It really depends on where you break things. NUnit has evolved, and many different Assert, Expect, … methods have been added. These changes are all localized in the NUnit.Framework assembly, which can easily be extended. As long as the test execution contract (TestFixture, Test, AssertException) remains stable, it is possible to write test executors which can run tests written for NUnit 1.0, because the execution contract has not changed. It is possible to write software which executes other components in a version-independent way, but this is only feasible if the interaction model is relatively simple.

Versioning software is hard, and it looks like it will remain hard, since you suddenly work in a severely constrained environment when you try to innovate and keep everything backwards compatible at the same time. These are contradictory goals and do not play well together. The easiest way out is to carefully watch what your customers are doing with your software. Minimizing the impact is much easier when you do not need to guess how many people will be broken when this or that is removed.
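To make the string-based, version-agnostic contract idea concrete, here is a minimal sketch of how a runner can detect test fixtures by attribute name without referencing any particular NUnit.Framework version. This is my own illustration of the technique, not NUnit's actual Reflect helper:

    using System;
    using System.Linq;
    using System.Reflection;

    static class VersionAgnosticReflect
    {
        // Matches an attribute by its full type name instead of by Type identity,
        // so it works regardless of which NUnit.Framework version declared it.
        public static bool HasAttribute(MemberInfo member, string attributeFullName)
        {
            return member.GetCustomAttributes(true)
                         .Any(a => a.GetType().FullName == attributeFullName);
        }

        public static bool LooksLikeTestFixture(Type type)
        {
            return HasAttribute(type, "NUnit.Framework.TestFixtureAttribute")
                || type.GetMethods().Any(m => HasAttribute(m, "NUnit.Framework.TestAttribute"));
        }
    }

Because the check compares strings rather than Type instances, a runner compiled against this helper can load test assemblies built against older or newer NUnit.Framework versions alike.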

    Read the article

  • How to reproject a shapefile from WGS 84 to Spherical/Web Mercator projection.

    - by samkea
    Definitions: You will need to know the meaning of the terms below. I have given a small description of each acronym, but you can google them to learn more.

    #1: WGS-84 - World Geodetic System (1984) - is a standard reference coordinate system used for cartography, geodesy and navigation.

    #2: EPSG - European Petroleum Survey Group - was a scientific organization with ties to the European petroleum industry, consisting of specialists working in applied geodesy, surveying, and cartography related to oil exploration. EPSG::4326 is a common coordinate reference system that refers to WGS84 as (latitude, longitude) pair coordinates in degrees with Greenwich as the central meridian. Any degree representation (e.g., decimal or DMSH: degrees minutes seconds hemisphere) may be used; which representation is used must be declared for the user by the supplier of the data. The Spherical/Web Mercator projection is referred to as EPSG::3785, which was renamed EPSG:900913 by Google for use in Google Maps. The associated CRS (Coordinate Reference System) for this is the "Popular Visualisation CRS / Mercator". This is the kind of projection used by Google Maps, Bing Maps, OSM, Virtual Earth, DeepEarth, et cetera, to show interactive maps over the web with their nearly precise coordinates.

    Reprojection: After reading a lot about reprojecting my coordinates from the DeepEarth project on CodePlex, I still could not do it. After some help from a colleague, I got the ball rolling. This is how I did it:

    #1 Download and open your shapefile using QGIS; it is the one with the biggest number of coordinate reference systems/projections.

    #2 Use the Plugins menu, and enable fTools and the WFS plugin.

    #3 Use the Vector menu --> Data Management Tools and choose "Define current projection". Enable "use predefined reference system" and choose the WGS 84 coordinate system. I am personally in zone 36, so I chose WGS84-UTM Zone 36N under (Projected Coordinate Systems --> Universal Transverse Mercator) and clicked OK.

    #4 Now use the Vector menu --> Data Management Tools and choose "Export to new projection". The same dialog will pop up. Now choose WGS 84 EPSG::4326 under Geodetic Coordinate Systems. My input user-defined spatial reference system looks like this:

        +proj=tmerc +lat_0=0 +lon_0=33 +k=0.9996 +x_0=500000 +y_0=200000 +ellps=WGS84 +datum=WGS84 +units=m +no_defs

    Your output user-defined spatial reference system should look like this:

        +proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs

    Browse to the place where the shapefile is going to be saved and give the shapefile a name (like original_reprojected). If it prompts you to add the projected layer to the TOC, accept. There, you have your re-projected map with latitude/longitude pairs of coordinates.

    #5 Now, this is not the actual Spherical/Web Mercator projection, but don't worry: this is where you have to stop. All the other custom web-mapping portals will pick up this projection and transform it into EPSG::3785 or EPSG:900913, but the coordinates will still remain the lat/lon pairs of the projected shapefile. If you want to test a particular known point, QGIS has a lot of room for that. Go ahead and test it.

    Read the article

  • Wordpress Podcast | Josh Holmes

    - by Josh Holmes
    I was thrilled and honored to be a guest on the Wordpress Podcast on WebMasterRadio.fm. This podcast is hosted by my friend Joost de Valk and Frederick Townes. You can read about the podcast, PHP on IIS, PHP on Azure and much more on my blog at Wordpress Podcast…

    Read the article

  • Silverlight Cream for June 13, 2010 -- #881

    - by Dave Campbell
    In this Issue: Mark Monster. Shoutouts: Adam Kinney has moved his blog, and his first post there is to announce New tutorials on .toolbox on PathListBox and Fluid UI Awesome graphics for the MEF'ed Video Player by Alan Beasley: New MEF Video Player Controls (1st Draft – Article to follow…) It must be a slow relaxing summer weekend, because I only found one post... and Mark submitted this one to me :) From SilverlightCream.com: How to improve the Windows Phone 7 Licensing development experience? Mark Monster is ahead of all of us if he's already programming his WP7 apps for 'trial versions'... but maybe it's time to start learning how to do that stuff :) Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Transparency and AlphaBlending

    - by TechTwaddle
    In this post we'll look at the AlphaBlend() API and how it can be used for semi-transparent blitting. AlphaBlend() takes a source device context (DC) and a destination device context and combines the bits in such a way that it gives a transparent effect. Follow the links for the MSDN documentation. So let's take an image (shown in the original post) and AlphaBlend() it onto our window. The code to do so is below (under the WM_PAINT message of WndProc):

        HBITMAP hBitmap=NULL, hBitmapOld=NULL;
        HDC hMemDC=NULL;
        BLENDFUNCTION bf;

        hdc = BeginPaint(hWnd, &ps);
        hMemDC = CreateCompatibleDC(hdc);
        hBitmap = LoadBitmap(g_hInst, MAKEINTRESOURCE(IDB_BITMAP1));
        hBitmapOld = SelectObject(hMemDC, hBitmap);

        bf.BlendOp = AC_SRC_OVER;
        bf.BlendFlags = 0;
        bf.SourceConstantAlpha = 80; // transparency value between 0-255
        bf.AlphaFormat = 0;

        AlphaBlend(hdc, 0, 25, 240, 100, hMemDC, 0, 0, 240, 100, bf);

        SelectObject(hMemDC, hBitmapOld);
        DeleteDC(hMemDC);
        DeleteObject(hBitmap);
        EndPaint(hWnd, &ps);

    The code above creates a memory DC (hMemDC) using CreateCompatibleDC(), loads a bitmap onto the memory DC, and AlphaBlends it onto the device DC (hdc) with a transparency value of 80 (result screenshot in the original post). Pretty simple so far. Now let's try to do something a little more exciting. Let's get two images involved, each overlapping the other, giving a better demonstration of transparency. I am also going to add a few buttons so that the user can increase or decrease the transparency by clicking them. Since this was the first time I played around with the GDI APIs, I ran into something that everybody runs into sooner or later: flickering. When clicking the buttons the images would flicker a lot. I figured out why, and used something called double buffering to avoid the flickering. We will look at both my first implementation and the second implementation, just to give the concept a little more depth and perspective. A few pre-conditions before I dive into the code:
    - hBitmap and hBitmap2 are handles to the two images obtained using LoadBitmap(); these variables are global and are initialized under WM_CREATE
    - The two buttons in the application are labeled Opaque++ (make more opaque, less transparent) and Opaque-- (make less opaque, more transparent)
    - DrawPics(HWND hWnd, int step=0); is the function called to draw the images on the screen. It is called from under WM_PAINT and also when the buttons are clicked. When Opaque++ is clicked, the 'step' value passed to DrawPics() is +20; when Opaque-- is clicked, the 'step' value is -20.
The default value of 'step' is 0. Now let's take a look at my first implementation:

    // this function causes flicker, because it draws directly to the screen several times
    void DrawPics(HWND hWnd, int step)
    {
        HDC hdc=NULL, hMemDC=NULL;
        BLENDFUNCTION bf;
        static UINT32 transparency = 100;

        // no point in drawing when transparency is 0 and user clicks Opaque--
        if (transparency == 0 && step < 0)
            return;
        // no point in drawing when transparency is 240 (opaque) and user clicks Opaque++
        if (transparency == 240 && step > 0)
            return;

        hdc = GetDC(hWnd);
        if (!hdc)
            return;

        // create a memory DC
        hMemDC = CreateCompatibleDC(hdc);
        if (!hMemDC)
        {
            ReleaseDC(hWnd, hdc);
            return;
        }

        // while increasing transparency, clear the contents of the screen
        if (step < 0)
        {
            RECT rect = {0, 0, 240, 200};
            FillRect(hdc, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));
        }

        SelectObject(hMemDC, hBitmap2);
        BitBlt(hdc, 0, 25, 240, 100, hMemDC, 0, 0, SRCCOPY);

        SelectObject(hMemDC, hBitmap);

        transparency += step;
        if (transparency >= 240)
            transparency = 240;
        if (transparency <= 0)
            transparency = 0;

        bf.BlendOp = AC_SRC_OVER;
        bf.BlendFlags = 0;
        bf.SourceConstantAlpha = transparency;
        bf.AlphaFormat = 0;

        AlphaBlend(hdc, 0, 75, 240, 100, hMemDC, 0, 0, 240, 100, bf);

        DeleteDC(hMemDC);
        ReleaseDC(hWnd, hdc);
    }

In the code above, we first get the window DC using GetDC() and create a memory DC using CreateCompatibleDC(). Then we select hBitmap2 onto the memory DC and Blt it onto the window DC (hdc). Next, we select the other image, hBitmap, onto the memory DC and AlphaBlend() it over the window DC. As I told you before, this implementation causes flickering because it draws directly on the screen (hdc) several times. A video in the original post shows what happens when the buttons are clicked rapidly. Well, the video recording tool I use captures only 15 frames per second, so the flickering is not visible in the video; you're going to have to trust me on this, it flickers (; To solve this problem we make sure that the drawing to the screen happens only once, and to do that we create an additional memory DC, hTempDC. We perform all our drawing on this memory DC, and finally, when it is ready, we Blt hTempDC onto hdc, so the images are displayed in one go.
Here is the code for our new DrawPics() function:

    // no flicker
    void DrawPics(HWND hWnd, int step)
    {
        HDC hdc=NULL, hMemDC=NULL, hTempDC=NULL;
        BLENDFUNCTION bf;
        HBITMAP hBitmapTemp=NULL, hBitmapOld=NULL;
        static UINT32 transparency = 100;

        // no point in drawing when transparency is 0 and user clicks Opaque--
        if (transparency == 0 && step < 0)
            return;
        // no point in drawing when transparency is 240 (opaque) and user clicks Opaque++
        if (transparency == 240 && step > 0)
            return;

        hdc = GetDC(hWnd);
        if (!hdc)
            return;

        hMemDC = CreateCompatibleDC(hdc);
        hTempDC = CreateCompatibleDC(hdc);
        hBitmapTemp = CreateCompatibleBitmap(hdc, 240, 150);
        hBitmapOld = (HBITMAP)SelectObject(hTempDC, hBitmapTemp);
        if (!hMemDC)
        {
            ReleaseDC(hWnd, hdc);
            return;
        }

        // while increasing transparency, clear the contents
        if (step < 0)
        {
            RECT rect = {0, 0, 240, 150};
            FillRect(hTempDC, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));
        }

        SelectObject(hMemDC, hBitmap2);
        // Blt hBitmap2 directly to hTempDC
        BitBlt(hTempDC, 0, 0, 240, 100, hMemDC, 0, 0, SRCCOPY);

        SelectObject(hMemDC, hBitmap);

        transparency += step;
        if (transparency >= 240)
            transparency = 240;
        if (transparency <= 0)
            transparency = 0;

        bf.BlendOp = AC_SRC_OVER;
        bf.BlendFlags = 0;
        bf.SourceConstantAlpha = transparency;
        bf.AlphaFormat = 0;

        AlphaBlend(hTempDC, 0, 50, 240, 100, hMemDC, 0, 0, 240, 100, bf);

        // now hTempDC is ready, blt it directly on hdc
        BitBlt(hdc, 0, 25, 240, 150, hTempDC, 0, 0, SRCCOPY);

        SelectObject(hTempDC, hBitmapOld);
        DeleteObject(hBitmapTemp);
        DeleteDC(hMemDC);
        DeleteDC(hTempDC);
        ReleaseDC(hWnd, hdc);
    }

This function is very similar to the first version, except for the use of hTempDC. Another point to note is the use of CreateCompatibleBitmap(). When a memory device context is created using CreateCompatibleDC(), the context is exactly one monochrome pixel high and one monochrome pixel wide, so in order for us to draw anything onto hTempDC, we first have to select a bitmap into it. We use CreateCompatibleBitmap() to create a bitmap of the required dimensions (240x150 above), and then select this bitmap onto hTempDC. Think of it as utilizing an extra canvas: we draw everything on the canvas and finally transfer the contents to the display in one scoop. And with this version the flickering is gone (video in the original post). If you want the entire solution's source code, leave a message; I will share the code over SkyDrive.

    Read the article

  • Resources for the &ldquo;What&rsquo;s New in VS 2013&rdquo; Presentation

    - by John Alexander
    Originally posted on: http://geekswithblogs.net/jalexander/archive/2013/10/24/resources-for-the-ldquowhatrsquos-new-in-vs-2013rdquo-presentation.aspx

    Thanks for attending the “What’s New in Visual Studio 2013 (and TFS too)” presentation. As promised, here are some links! Note: if you didn’t attend, it's ok. This is for you, too.
    - The bits themselves
    - This article introduces new and enhanced features in Visual Studio 2013
    - Visual Studio Virtual Launch – lots of videos here, and then on November 13th, live sessions and a Q&A session…
    - What features map to what Visual Studio editions
    - Visual Studio 2013 New Editor Features
    - Visual Studio 2013 Application Lifecycle Management Virtual Machine and Hands-on-Labs / Demo Scripts from Brian Keller
    - More on CodeLens from Zain Naboulsi
    - What are Web Essentials? You can now download Web Essentials for Visual Studio 2013 RTM.
    - A great overview of TFS 2013 from Brian Harry
    - The release archive lists updates made to Team Foundation Service along with which version of Team Foundation Server the updates are a part of
    - REST API for Team Rooms
    - “What's new in Visual Studio for Web Developers and Front End Devs” screencasts – quick, easy and painless, from the always awesome Scott Hanselman
    - Introducing ASP.NET Identity – a membership system for ASP.NET applications
    - Visual Studio 2013 Adds New Project Templates with Improvements and Social Accounts Authentication

    Read the article

  • Windows Azure AppFabric: ServiceBus Queue WPF Sample

    - by xamlnotes
    The latest version of the AppFabric ServiceBus now has support for queues and topics. Today I will show you a bit about using queues and also talk about some of the best practices for using them. If you are just getting started, you can check out this site for more info on Windows Azure. One of the first things I thought of when Azure was announced was how we handle fault tolerance. Web sites hosted in Azure are not much of an issue unless they are using SQL Azure, in which case you must account for potential fault or latency issues. Today I want to talk a bit about the ServiceBus and how to handle fault tolerance; there is also stuff like connecting to the ServiceBus and so on that you have to take care of. To demonstrate some of the things you can do, let me walk through this sample WPF app that I am posting for you to download. To start off, the application is going to need things like the service namespace, issuer details and so forth to make everything work. To facilitate this, I created settings in the WPF app for all of these items, then mapped a static class to them and set the values when the program loads, like so:

        StaticElements.ServiceNamespace = Convert.ToString(Properties.Settings.Default["ServiceNamespace"]);
        StaticElements.IssuerName = Convert.ToString(Properties.Settings.Default["IssuerName"]);
        StaticElements.IssuerKey = Convert.ToString(Properties.Settings.Default["IssuerKey"]);
        StaticElements.QueueName = Convert.ToString(Properties.Settings.Default["QueueName"]);

    Now I can get to each of these elements, plus some other common values and instances, directly from the StaticElements class. Now let's look at the application. When it starts, a blue graphic represents the queue we are going to use; the next screenshot in the original post shows the form after items were added and the queue stats were updated, and you can see how the queue has grown. To add an item to the queue, click the Add Order button, which displays a dialog; after you fill in the form and press OK, the order is published to the ServiceBus queue and the form closes. The application also allows you to read the queued items by clicking the Process Orders button: the form then shows the queued items in a list, and the queue graphic disappears as it is now empty. In real practice we would normally use a Windows Service or some other automated process to subscribe to the queue and pull items from it. I created a class named ServiceBusQueueHelper that has the core queue features we need. There are three public methods:
    - GetOrCreateQueue – gets an instance of the queue description if the queue exists; if not, it creates the queue and returns a description instance.
    - SendMessageToQueue – takes an order instance and sends it to the queue. The call to the queue is wrapped in the ExecuteAction method from the Transient Fault Handling Framework, which handles all the retry logic for the queue send process.
    - GetOrderFromQueue – grabs an order from the queue and returns a typed order. It also marks the message complete so the queue can remove it.

    Now let's turn to the WPF window code (MainWindow.xaml.cs). The constructor contains the four lines shown above to set up the static variables and to perform other initialization tasks.
The next few lines set up certain features we need for the ServiceBus:

    TokenProvider credentials = TokenProvider.CreateSharedSecretTokenProvider(StaticElements.IssuerName, StaticElements.IssuerKey);
    Uri serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", StaticElements.ServiceNamespace, string.Empty);
    StaticElements.CurrentNamespaceManager = new NamespaceManager(serviceUri, credentials);
    StaticElements.CurrentMessagingFactory = MessagingFactory.Create(serviceUri, credentials);

The next two lines update the queue name label and set the timer to 20 seconds:

    QueueNameLabel.Content = StaticElements.QueueName;
    _timer.Interval = TimeSpan.FromSeconds(20);

Next I call UpdateQueueStats to initialize the UI for the queue:

    UpdateQueueStats();
    _timer.Tick += new EventHandler(delegate(object s, EventArgs a)
    {
        UpdateQueueStats();
    });
    _timer.Start();

The UpdateQueueStats method is shown below. You can see that it uses the GetOrCreateQueue method mentioned earlier to grab the queue description, so it can read the MessageCount property:

    private void UpdateQueueStats()
    {
        _queueDescription = _serviceBusQueueHelper.GetOrCreateQueue();
        QueueCountLabel.Content = "(" + _queueDescription.MessageCount + ")";
        long count = _queueDescription.MessageCount;
        long queueWidth = count * 20;
        QueueRectangle.Width = queueWidth;
        QueueTickCount += 1;
        TickCountlabel.Content = QueueTickCount.ToString();
    }

The ReadQueueItemsButton_Click event handler calls the GetOrderFromQueue method and adds the order to the listbox. If you look at the SendQueueMessageController, you can see the SendMessage method that sends an order to the queue. It is pretty simple: it just creates a new CustomerOrderEntity instance, fills it, and passes it to SendMessageToQueue. As you can see, all of our interaction with the queue is done through the helper class (ServiceBusQueueHelper). Now let's dig into the helper class. First, before you create anything like this, download the Transient Fault Handling Framework. Microsoft provides this for free, along with the C# source, and there is a great article that shows how to use the framework with the ServiceBus. I included the entire ServiceBusQueueHelper class in Listing 1. Notice the using statements for TransientFaultHandling:

    using Microsoft.AzureCAT.Samples.TransientFaultHandling;
    using Microsoft.AzureCAT.Samples.TransientFaultHandling.ServiceBus;

The SendMessageToQueue method in Listing 1 shows how to use the async send features of the ServiceBus wrapped in the Transient Fault Handling Framework. It is not much different from plain old ServiceBus calls, but it sure makes it easy to get fault tolerance added almost for free. The GetOrderFromQueue method uses the standard synchronous methods to access the queue (the best-practices article also walks through using the async approach for a receive operation). Notice that this method calls Receive to get the message, then calls GetBody to get a new strongly typed instance of CustomerOrderEntity to return.
Listing 1

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.AzureCAT.Samples.TransientFaultHandling;
    using Microsoft.AzureCAT.Samples.TransientFaultHandling.ServiceBus;
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;
    using System.Xml.Serialization;
    using System.Diagnostics;

    namespace WPFServicebusPublishSubscribeSample
    {
        class ServiceBusQueueHelper
        {
            RetryPolicy currentPolicy = new RetryPolicy<ServiceBusTransientErrorDetectionStrategy>(RetryPolicy.DefaultClientRetryCount);
            QueueClient currentQueueClient;

            public QueueDescription GetOrCreateQueue()
            {
                QueueDescription queue = null;
                bool createNew = false;

                try
                {
                    // First, let's see if a queue with the specified name already exists.
                    queue = currentPolicy.ExecuteAction<QueueDescription>(() => { return StaticElements.CurrentNamespaceManager.GetQueue(StaticElements.QueueName); });
                    createNew = (queue == null);
                }
                catch (MessagingEntityNotFoundException)
                {
                    // Looks like the queue does not exist. We should create a new one.
                    createNew = true;
                }

                // If a queue with the specified name doesn't exist, it will be auto-created.
                if (createNew)
                {
                    try
                    {
                        var newqueue = new QueueDescription(StaticElements.QueueName);
                        queue = currentPolicy.ExecuteAction<QueueDescription>(() => { return StaticElements.CurrentNamespaceManager.CreateQueue(newqueue); });
                    }
                    catch (MessagingEntityAlreadyExistsException)
                    {
                        // A queue under the same name was already created by someone else,
                        // perhaps by another instance. Let's just use it.
                        queue = currentPolicy.ExecuteAction<QueueDescription>(() => { return StaticElements.CurrentNamespaceManager.GetQueue(StaticElements.QueueName); });
                    }
                }

                currentQueueClient = StaticElements.CurrentMessagingFactory.CreateQueueClient(StaticElements.QueueName);
                return queue;
            }

            public void SendMessageToQueue(CustomerOrderEntity Order)
            {
                BrokeredMessage msg = null;
                GetOrCreateQueue();

                // Use a retry policy to execute the Send action in an asynchronous and reliable fashion.
                currentPolicy.ExecuteAction
                (
                    (cb) =>
                    {
                        // A new BrokeredMessage instance must be created each time we send it. Reusing the original
                        // BrokeredMessage instance may not work as the state of its BodyStream cannot be guaranteed
                        // to be readable from the beginning.
                        msg = new BrokeredMessage(Order);

                        // Send the event asynchronously.
                        currentQueueClient.BeginSend(msg, cb, null);
                    },
                    (ar) =>
                    {
                        try
                        {
                            // Complete the asynchronous operation.
                            // This may throw an exception that will be handled internally by the retry policy.
                            currentQueueClient.EndSend(ar);
                        }
                        finally
                        {
                            // Ensure that any resources allocated by a BrokeredMessage instance are released.
                            if (msg != null)
                            {
                                msg.Dispose();
                                msg = null;
                            }
                        }
                    },
                    (ex) =>
                    {
                        // Always dispose the BrokeredMessage instance even if the send
                        // operation has completed unsuccessfully.
                        if (msg != null)
                        {
                            msg.Dispose();
                            msg = null;
                        }

                        // Always log exceptions.
                        Trace.TraceError(ex.Message);
                    }
                );
            }

            public CustomerOrderEntity GetOrderFromQueue()
            {
                CustomerOrderEntity Order = new CustomerOrderEntity();
                QueueClient myQueueClient = StaticElements.CurrentMessagingFactory.CreateQueueClient(StaticElements.QueueName, ReceiveMode.PeekLock);
                BrokeredMessage message;
                ServiceBusQueueHelper serviceBusQueueHelper = new ServiceBusQueueHelper();
                QueueDescription queueDescription;

                queueDescription = serviceBusQueueHelper.GetOrCreateQueue();
                if (queueDescription.MessageCount > 0)
                {
                    message = myQueueClient.Receive(TimeSpan.FromSeconds(90));
                    if (message != null)
                    {
                        try
                        {
                            Order = message.GetBody<CustomerOrderEntity>();
                            message.Complete();
                        }
                        catch (Exception ex)
                        {
                            throw ex;
                        }
                    }
                    else
                    {
                        throw new Exception("Did not receive the messages");
                    }
                }
                return Order;
            }
        }
    }

I will post a link to the download demo in a separate post soon.
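Until the demo download is up, here is a minimal hedged sketch of how the helper might be driven from the WPF side. The property names on CustomerOrderEntity are illustrative guesses, since the entity's definition is not shown in this post:

    // Hypothetical usage of the ServiceBusQueueHelper above.
    var helper = new ServiceBusQueueHelper();

    var order = new CustomerOrderEntity
    {
        CustomerName = "Contoso",   // placeholder members; the real
        Quantity = 3                // entity's shape is not shown here
    };
    helper.SendMessageToQueue(order);   // retried async send via the fault handling framework

    CustomerOrderEntity received = helper.GetOrderFromQueue(); // PeekLock receive + Complete()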

    Read the article

  • Work Item Visualizer for TFS 2010 - New Extension

    - by MikeParks
    I released another new extension to the Visual Studio Gallery today, called Work Item Visualizer for TFS 2010. I've only heard positive things about it so far; hopefully it stays that way :) Basically, it creates a diagram of all work items linked to a work item ID which the user specifies in a search box. This extension was coded using DGML (the same graph rendering language used for the Visual Studio 2010 Architecture Tools), so it was pretty cool getting a chance to create something using some of the newest technology out there. Well, I just wanted to throw a blog up to get the word out on it a little more. If you're using Visual Studio 2010 with Team Foundation Server 2010, feel free to check it out! Thanks everyone.

    Download link: http://visualstudiogallery.msdn.microsoft.com/en-us/a35b6010-750b-47f6-a7a5-41f0fa7294d2

    What it does:
    · Creates a DGML graph to visualize linked TFS work items by entering a work item ID in the toolbar search box

    How it benefits you:
    · Allows you to easily analyze the hierarchy of your TFS work items
    · Gives you the ability to perform basic risk/impact analysis when creating or editing work items
    · Great for meetings in the case that you need to discuss the entire scope of linked work items
    · Easier project planning
    · Eliminates the need to create TFS queries or reports to view a tree of work items
    · Easily lets you see the entire tree of work items linked to the one you're working on

    Navigation tips:
    · Use Ctrl + Mouse Wheel Scroll to zoom in and out
    · Use Ctrl + Left Mouse click (and hold) to move the document around
    · Right click on the DGML area for more options (like copying the image or viewing in groups)
    · Clicking on each node highlights that node and the links connected to it
    · Colors in the legend can be changed
    · When work item nodes are deleted, the view is automatically updated
    · Double clicking on a work item node will open up the work item's URL

    Try it out on work items that have several links and let us know what you think. A big thanks goes out to everyone working on the http://visualization.codeplex.com/ project for publishing the source code on CodePlex, which really helped me learn how DGML (Directed Graph Markup Language, new to the Visual Studio 2010 Architecture Tools) works!

    - Mike

    Read the article

  • Illegal characters for SharePoint 2010 Content Type name

    - by Kelly Jones
    Quick tip: you can’t include a backslash in the name of a SharePoint 2010 content type. In fact, there are several illegal characters: \ / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab. What, you didn’t know that after entering one of these characters in the name? Is it because you saw a generic error screen instead (screenshot in the original post)? Oh, that’s right: you need to turn off custom errors in the layouts folder (see this blog post for details), and you’ll also need to turn them off for the web application. Once you do that, you’ll see the real error. I wonder why the SharePoint team just doesn’t let the user know that the content type name contains illegal characters before the user hits the Create button. Here’s a copy of the complete error (for the search engines):

        Server Error in '/' Application.
        --------------------------------------------------------------------------------
        The content type name 'asdfadsf\asdfasf' cannot contain: \ / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: Microsoft.SharePoint.SPInvalidContentTypeNameException: The content type name 'asdfadsf\asdfasf' cannot contain: \ / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [SPInvalidContentTypeNameException: The content type name 'asdfadsf\asdfasf' cannot contain: \ / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.]
           Microsoft.SharePoint.SPContentType.ValidateName(String name) +27419522
           Microsoft.SharePoint.SPContentType.ValidateNameWithResource(String strVal, String& strLocalized) +423
           Microsoft.SharePoint.SPContentType.set_Name(String value) +151
           Microsoft.SharePoint.SPContentType.Initialize(SPContentType parentContentType, SPContentTypeCollection collection, String name) +112
           Microsoft.SharePoint.SPContentType..ctor(SPContentType parentContentType, SPContentTypeCollection collection, String name) +132
           Microsoft.SharePoint.ApplicationPages.ContentTypeCreatePage.BtnOK_Click(Object sender, EventArgs e) +497
           System.Web.UI.WebControls.Button.OnClick(EventArgs e) +115
           System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +140
           System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +29
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2981
        --------------------------------------------------------------------------------
        Version Information: Microsoft .NET Framework Version:2.0.50727.4927; ASP.NET Version:2.0.50727.4927
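    Rather than letting users hit this error page, you can validate the name up front. A minimal sketch; this is my own helper, not a SharePoint API, and the character set is taken from the error message above (the comma in the message reads as list punctuation, so I have left it out):

        using System;

        static class ContentTypeNameGuard
        {
            // Characters SharePoint 2010 rejects in content type names, per
            // the SPInvalidContentTypeNameException message above, plus tab.
            private static readonly char[] Illegal = "\\/:*?\"#%<>{}|~&\t".ToCharArray();

            public static bool IsValid(string name)
            {
                return !string.IsNullOrEmpty(name)
                    && name.IndexOfAny(Illegal) < 0
                    && !name.Contains("..");
            }
        }

        // Hypothetical usage before calling the content type creation page/API:
        // if (!ContentTypeNameGuard.IsValid(proposedName)) { /* warn the user first */ }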

    Read the article

  • Entity Framework v1 &hellip; Brief Synopsis and Tips &ndash; Part 2

    - by Rohit Gupta
    Using Entity Framework with ASMX Web Services and WCF Web Services: If you use an ASMX web service to expose Entity objects from Entity Framework, the ASMX web service does not include object graphs; one workaround is to use the Facade pattern, or to use a WCF service. The other important aspect of using ASMX web services along with Entity Framework is that the ASMX client is not aware of the existence of EF v1, since the client deals solely with C# objects (not EntityObjects or the ObjectContext). Because the client is not aware of the ObjectContext, it cannot participate in change tracking, since it only receives the current values, not the original values, when the service sends the Entity objects to the client. Thus there are two drawbacks to using Entity Framework with an ASMX web service:
    1. Object state is not maintained. To overcome this limitation we need to insert/update a single entity at a time, and retrieve the original values for the entity being updated on the server/service end before calling SaveChanges.
    2. ASMX does not maintain object graphs; i.e. Customer.Reservations or Customer.Reservations.Trip relationships are not maintained. Thus you need to send these relationships separately from the service to the client.

    A WCF web service overcomes the object-graph limitation of the ASMX web service, but we need to ensure that we populate all the non-null scalar properties of all the objects in the object graph before calling Update. A WCF web service still cannot overcome the other limitation: tracking changes to entities at the client end. Also note that the "Customer" class on the client is very different from the "Customer" class among the Entity Framework model entities. They are incompatible with each other, hence we cannot cast one to the other. However, the .NET Framework translates the client "Customer" entity to the EFv1 model "Customer" entity once the entity is serialized back on the ASMX server end. If you need change tracking enabled on the client, then you need to use WCF Data Services, which is available with VS 2010.

    ======================================================================================================

    In WCF, when adding an object that has relationships, the framework assumes that every object in the object graph needs to be added to the store. For example, in a Customer.Reservations.Trip object graph, when a Customer entity is added to the store, EFv1 assumes that it needs to add a Reservations collection and also Trips for each Reservation. Thus, if we need to use existing Trips for the reservations, we need to ensure that we null out the Trip object reference from the Reservations and set the TripReference to the EntityKey of the desired Trip instead.

    ======================================================================================================

    Understanding Relationships and Associations in EFv1

    The golden rule of EF is that it does not load entities or relationships unless you explicitly ask it to. There is, however, one exception to this rule, and it happens when you attach or detach entities from the ObjectContext. If you detach an entity in an object graph from the ObjectContext, the ObjectContext removes the ObjectStateEntry for that entity and all the relationship objects associated with it. For example,
    ============================================================
    Understanding Relationships and Associations in EF v1: The golden rule of EF is that it does not load entities or relationships unless you explicitly ask it to. There is one exception to this rule, and it happens when you attach or detach entities from the ObjectContext. If you detach an entity in an object graph from the ObjectContext, the ObjectContext removes the ObjectStateEntry for that entity and for all the relationship objects associated with it. For example, in a Customer.Order.OrderDetails graph, if the Customer entity is detached from the ObjectContext, you cannot traverse to the Order and OrderDetails entities (which still exist in the ObjectContext) from the Customer entity (which does not). Conversely, if you JOIN an entity that is not in the ObjectContext with an entity that is, the first entity will automatically be added to the ObjectContext, since relationships between the two entities need to exist in the ObjectContext.
    ============================================================
    You cannot attach an EntityCollection to an entity through its navigation property; for example, you cannot write myContact.Addresses = myAddressEntityCollection.
    ============================================================
    Cascade deletes in the EDM: The designer does not support specifying cascade deletes for an entity. To enable cascade deletes on an entity in the EDM, use the Association definition in the CSDL for that entity. For example, SalesOrderDetail (SOD) has a foreign-key relationship with SalesOrderHeader (SalesOrderHeader 1 : SalesOrderDetail *). If you specify a cascade delete on the SalesOrderHeader end, then calling DeleteObject on a SalesOrderHeader (SOH) entity will send delete commands for the SOH record and for all the SOD records that reference it. A rough sketch of the CSDL follows below.
    ============================================================
    As a good design practice, if you use cascade deletes, ensure that the cascade-delete facet is set both in the EDM and in the database. It is not absolutely mandatory to have cascade deletes in both places, since the cascade-delete spec on the SOH entity in the EDM alone will ensure that the SOH record and all related SOD records are deleted from the database, even if cascade delete is not configured on the SOD table in the database; but keeping them in sync avoids surprises.
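    For reference, the CSDL association for that example would look roughly like this (a hand-written sketch; the association name and model namespace are assumptions, not taken from the original post):

        <Association Name="FK_SalesOrderDetail_SalesOrderHeader">
          <End Role="SalesOrderHeader" Type="Model.SalesOrderHeader" Multiplicity="1">
            <!-- Deleting a SalesOrderHeader also deletes its SalesOrderDetails -->
            <OnDelete Action="Cascade" />
          </End>
          <End Role="SalesOrderDetail" Type="Model.SalesOrderDetail" Multiplicity="*" />
        </Association>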
    ============================================================
    Maintaining relationships in code: When setting a navigation property of an entity (for example, setting the Contact navigation property of an Address entity), the following rules apply. If both objects are detached, no relationship object will be created; you are simply setting a property the CLR way. If both objects are attached, a relationship object will be created. If only one of the objects is attached, the other will become attached and a relationship object will be created; if that detached object is new, its EntityState will be Added when it is attached to the context. One important rule to remember regarding synchronizing the EntityReference.Value and EntityReference.EntityKey properties: when attaching an entity that has an EntityReference (e.g. an Address entity with a ContactReference), the Value property takes precedence, and if the Value and EntityKey are out of sync, the EntityKey is updated to match the Value.
    ============================================================
    If you call the .Load() method on a detached entity, the .Load() operation will throw an exception. There is one exception to this rule: if you load entities using MergeOption.NoTracking, you will be able to call .Load() on them, since such entities are accessible by the ObjectContext. The bottom line is that we need an ObjectContext to be able to call .Load() for deferred loading of an EntityReference or EntityCollection. Another rule to remember is that you cannot call .Load() on entities whose EntityState is Added, because the ObjectContext uses the EntityKey of the primary (parent) entity when loading the related (child) entity, not the EntityKey of the child (even if the EntityKey of the child is present before calling .Load()).
    ============================================================
    You can use ObjectContext.AddObject() to add an entity to the ObjectContext and set the EntityState of the new entity to EntityState.Added; no relationships are added or updated here. You can also use the EntityCollection.Add() method to add an entity to another entity's related EntityCollection. For example, Contact has an Addresses EntityCollection, so use contact.Addresses.Add(newAddress) to add a new address to it. Note that if the entity does not already exist in the ObjectContext, calling contact.Addresses.Add(newAddress) causes a new Address entity to be added to the ObjectContext with EntityState.Added, and it also adds a RelationshipEntry (a relationship object) with EntityState.Added which connects the contact to the new address. If the entity already exists in the ObjectContext (say, as part of the theOtherContact.Addresses collection), then calling contact.Addresses.Add(existingAddress) adds two RelationshipEntry objects to the ObjectStateEntry collection, one with EntityState.Deleted and the other with EntityState.Added. This means the existingAddress entity is removed from the theOtherContact.Addresses collection and added to the contact.Addresses collection, effectively reassigning the address from theOtherContact to contact. This is called moving an existing entity to a new object graph.
    ============================================================
    You usually use the ObjectContext.Attach() and EntityCollection.Attach() methods when you need to reconstruct the object graph after deserializing objects received from an ASMX web service client. Attach is used to connect entities that already exist in the ObjectContext. When EntityCollection.Attach() is called, the EntityState of the RelationshipEntry (the relationship object) remains EntityState.Unchanged, whereas when EntityCollection.Add() is called, the EntityState of the relationship object changes to EntityState.Added or EntityState.Deleted as the situation demands. The sketch below contrasts the two.
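    A short sketch of that difference (names follow the post's Contact/Address example; "context", "contact", and "existingAddress" are assumed to be an ObjectContext and entities already attached to it):

        // Add: the new address and its relationship entry are both tracked with
        // EntityState.Added, so both will be inserted on SaveChanges().
        var newAddress = new Address();
        contact.Addresses.Add(newAddress);

        // Attach: connects an address that already exists in the store (and is
        // already attached to the context); the relationship entry stays
        // EntityState.Unchanged, so nothing is inserted on SaveChanges().
        contact.Addresses.Attach(existingAddress);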
    ============================================================
    LINQ to Entities tips: SelectMany does an INNER JOIN by default. For example, from c in Contact from a in c.Addresses select c will do an inner join between the Contacts and Addresses tables and return only those contacts that have an address.
    ============================================================
    Group joins do a LEFT JOIN by default. For example: from a in Address join c in Contact on a.Contact.ContactID equals c.ContactID into g where a.CountryRegion == "US" select g — this query will do a left join on the Contact table and return the contacts that have an address in the "US" region. The following query: from c in Contact join a in Address.Where(a1 => a1.CountryRegion == "US") on c.ContactID equals a.Contact.ContactID into addresses select new { c, addresses } — will do a left join on the Address table and return all contacts. Of these contacts, only those with an address in the "US" region will have their addresses collection populated; the other contacts will have zero addresses in the collection (even if addresses for those contacts exist in the database but are in a different region).
    ============================================================
    LINQ to Entities does not support DefaultIfEmpty(); instead, use the .Include("Addresses") query builder method to do a LEFT JOIN (a sketch follows this section), or use group joins if you need more control, such as filtering on the Addresses EntityCollection of the Contact entity.
    ============================================================
    Use CreateSourceQuery() on an EntityReference or EntityCollection if you need to add filters during deferred loading of entities (deferred loading in EF v1 happens when you call the Load() method on an EntityReference or EntityCollection). For example: var cust = context.Contacts.OfType<Customer>().First(); var sq = cust.Reservations.CreateSourceQuery().Where(r => r.ReservationDate > new DateTime(2008, 1, 1)); cust.Reservations.Attach(sq); — this populates only those reservations made after Jan 1, 2008. This is the only way in EF v1 to attach a range of entities to an EntityCollection using the Attach() method.
    ============================================================
    If you need to get a foreign-key value from an entity (e.g. the ContactID value from an Address entity), use: address.ContactReference.EntityKey.EntityKeyValues.Where(k => k.Key == "ContactID")
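    For the Include alternative mentioned above, a minimal sketch (the "Addresses" navigation property name and the Contacts entity set are assumptions consistent with the examples in this post):

        // Eager-load each Contact's Addresses in one query. This translates to a
        // LEFT OUTER JOIN, so contacts without any address are still returned.
        var contactsWithAddresses = context.Contacts.Include("Addresses").ToList();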

    Read the article

  • Sam Abraham to Speak about MVC2 at the Florida.Net Miramar .Net User Group on July 13 2010

    - by Sam Abraham
    I am scheduled to give a presentation at the Miramar .Net User Group on July 13, 2010 about MVC and the new features in MVC2. The talk will be similar to, but more advanced than, my earlier one, since the group has already had an introduction to MVC in a previous meeting. Here are the topic and speaker bio:
    Sam Abraham To Speak At The LI .Net User Group on June 3rd, 2010
    As you might know, I lived and worked on LI, NY for 11 years before relocating to South Florida. As I will be visiting my family, who still live there, in the first week of June, I couldn't resist reaching out to Dan Galvez, the LI .Net User Group leader, and asking if he needed a speaker for June's meeting. Apparently the stars were lined up right, and I am now scheduled to speak at my "home" group on June 3rd, which I am pretty excited about. Here is a brief abstract of my talk, along with a speaker bio.
    What's New in MVC2
    We will start by briefly reviewing the basics of the Microsoft MVC Framework. Next, we will look at the new features introduced in the latest and greatest MVC2. Many enhancements were introduced to both the MVC Framework and VS2010 to improve the developer experience and reduce development time. We will be talking about new MVC2 features such as model validation, areas, and templated helpers. We will also discuss the new built-in MVC project templates that ship with VS2010.
    About the Speaker
    Sam Abraham is a Microsoft Certified Professional (MCP) and Microsoft Certified Technology Specialist (MCTS ASP.Net 3.5). He currently lives in South Florida, where he leads the West Palm Beach .Net User Group (www.fladotnet.com) and actively participates in various local .Net community events as an organizer and/or technical speaker. Sam is also an active committee member on various initiatives at the South Florida Chapter of the Project Management Institute (www.southfloridapmi.org). Sam finds his passion in leveraging the latest and greatest .Net technologies, along with proven project management practices and methodologies, to produce high-quality, cost-competitive software. Sam can be reached through his blog: http://www.geekswithblogs.net/wildturtle

    Read the article

< Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >