Search Results

Search found 10548 results on 422 pages for 'standard deviation'.

Page 367/422

  • Surface V2.0

    - by Dennis Vroegop
    It’s been quiet around here. And the reason for that is that it’s been quiet around Surface for a while. Now, a lot of people assume that when a product team isn’t making much noise, they must have stopped working on their product. Remember the PDC keynote in 2010? Just because WPF wasn’t mentioned there, a lot of people had the idea that WPF was dead and abandoned for Silverlight. Of course, this couldn’t be farther from the truth. The same applies to Surface. While we didn’t hear much from the team in Redmond, they were busy putting together the next version of the platform. And at CES in January the world saw what they have been up to all along: Surface V2.0, as it’s commonly known.

    Of course, the product is still in development. It’s not here yet; we can’t buy one yet. However, more and more information becomes available, and I think this is a good time to share with you what it’s all about!

    The biggest change from an organizational point of view is that Microsoft decided to stop producing the hardware themselves. Instead, they have formed a partnership with Samsung, who will manufacture the devices. This means that you as a buyer get the benefits of a large, worldwide supplier with all the services they can offer. Not that Microsoft didn’t do that before, but since Surface wasn’t a ‘big’ product it was sometimes hard to get to the right people. The new device is officially called the “Samsung SUR 40 for Microsoft Surface”, which is quite a mouthful. The software that runs the device is, of course, still coming from Microsoft.

    Let’s dive into the technical specs (note: all of this is preliminary, it’s still in the Alpha phase!):

    - Audio out: HDMI / stereo RCA / S/PDIF / 2 × 3.5mm audio out jacks
    - Brightness: 300 cd/m2
    - Communications: 1Gb Ethernet / 802.11 / Bluetooth
    - Contrast ratio: 1:1000
    - CPU: AMD Athlon X2 245e, 2.9GHz dual core
    - Display resolution: Full HD 1080p, 1920x1080 / 16:9 aspect ratio
    - GPU: AMD Radeon HD 6750, 1GB GDDR5
    - HDD: 320 GB / 7200 RPM
    - HDMI in / HDMI out: Yes
    - I/O ports: 4 × USB, SD card reader
    - Operating system: Embedded Windows 7 Professional, 64-bit
    - Panel size: 40” diagonal
    - Protection glass: Gorilla Glass
    - RAM: 4 GB DDR3
    - Weight (with standard legs): 70.0 kg / 154 lbs
    - Weight (standalone): 39.5 kg / 87 lbs
    - Height (without legs): 4 inches
    - Contact points recognized: > 50
    - Cool factor: Extremely

    Ok, the last point is not official, but I do think it needs to be there.

    Let’s talk software. As noted, it runs Windows 7 Professional 64-bit, which means you can run Visual Studio 2010 on it. The software is going to be developed in WPF 4.0 with the additional Surface SDK 2.0. It will contain all the things you’ve seen before plus some extras. They have taken some steps to align it more with the Surface Toolkit which you can download today, so if you do things right your software should be portable between a WPF 4.0 Windows 7 multi-touch app and the Surface V2 environment. It still uses infrared to detect contacts, so in that respect nothing much has changed conceptually. We can still differentiate between a finger, a tag or a blob. Of course, since the new platform has a much higher resolution (compared to the 1024x768 of the first version) you might need to look at your code again. I’ve seen a lot of applications on Surface that assume the old resolution, and moving those to V2 is going to be some work. To be honest: as I am under NDA I cannot disclose much about the new software besides what I have told you here, but trust me: it’s going to blow people away.
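    As an aside on that resolution point: here is a minimal sketch (mine, not from the Surface team; the class, names and factors are illustrative) of positioning by fractions of the actual surface size instead of pixels assumed from the v1 1024x768 screen, so the layout survives the move to 1920x1080:

        // Hedged sketch: proportional placement instead of hard-coded v1 pixels.
        using System.Windows;
        using System.Windows.Controls;

        static class SurfaceLayout
        {
            public static void PlaceCard(Canvas surface, UIElement card)
            {
                // Fractions of the actual size work at both 1024x768 and 1920x1080.
                Canvas.SetLeft(card, surface.ActualWidth * 0.50);  // was: 512 on v1
                Canvas.SetTop(card, surface.ActualHeight * 0.25);  // was: 192 on v1
            }
        }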
    Now, the biggest question for me is: when can I get one? Until we can, have a look here: Technorati tags: surface, samsung, WPF

    Read the article

  • Page output caching for dynamic web applications

    - by Mike Ellis
    I am currently working on a web application where the user steps (forward or back) through a series of pages with "Next" and "Previous" buttons, entering data until they reach a page with the "Finish" button. Until finished, all data is stored in Session state, then sent to the mainframe database via web services at the end of the process. Some of the pages display data from previous pages in order to collect additional information. These pages can never be cached because they are different for every user. Pages that don't display this dynamic data can be cached, but only the first time they load. After that, the data that was previously entered needs to be displayed. This requires Page_Load to fire, which means the page can't be cached at that point.

    A couple of weeks ago, I knew almost nothing about implementing page caching. Now I still don't know much, but I know a little bit, and here is the solution that I developed with the help of others on my team and a lot of reading and trial-and-error.

    We have a base page class defined from which all pages inherit. In this class I have defined a method that sets the caching settings programmatically. Pages that can be cached call this base page method in their Page_Load event within an if(!IsPostBack) block, which ensures that only the page itself gets cached, not the data on the page.

        if (!IsPostBack)
        {
            ...
            SetCacheSettings();
            ...
        }

        protected void SetCacheSettings()
        {
            Response.Cache.AddValidationCallback(new HttpCacheValidateHandler(Validate), null);
            Response.Cache.SetExpires(DateTime.Now.AddHours(1));
            Response.Cache.SetSlidingExpiration(true);
            Response.Cache.SetValidUntilExpires(true);
            Response.Cache.SetCacheability(HttpCacheability.ServerAndNoCache);
        }

    The AddValidationCallback sets up an HttpCacheValidateHandler method called Validate which runs logic when a cached page is requested. The Validate method signature is standard for this method type.

        public static void Validate(HttpContext context, Object data, ref HttpValidationStatus status)
        {
            string visited = context.Request.QueryString["v"];
            if (visited != null && "1".Equals(visited))
            {
                status = HttpValidationStatus.IgnoreThisRequest; // force a page load
            }
            else
            {
                status = HttpValidationStatus.Valid; // load from cache
            }
        }

    I am using the HttpValidationStatus values IgnoreThisRequest or Valid, which force the Page_Load event method to run or allow the page to load from cache, respectively. Which one is set depends on the value in the querystring. The value in the querystring is set up on each page in the "Next" and "Previous" button click event methods, based on whether the page that the button click is taking the user to has any data on it or not.

        bool hasData = HasPageBeenVisited(url);
        if (hasData)
        {
            url += VISITED;
        }
        Response.Redirect(url);

    The HasPageBeenVisited method determines whether the destination page has any data on it by checking one of its required data fields. (I won't include it here because it is very system-dependent.) VISITED is a string constant containing "?v=1" and gets appended to the url if the destination page has been visited.
    The reason this logic is within the "Next" and "Previous" button click event methods is that 1) the Validate method is static, which doesn't allow it to access non-static data such as the data fields for a particular page, and 2) at the time the Validate method runs, the data either has not yet been deserialized from Session state or is not available (different AppDomain?): any time I accessed the Session state information from the Validate method, it was always empty.
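    Putting the pieces together, here is a minimal sketch of a "Next" handler (the destination page name and event wiring are illustrative; VISITED and HasPageBeenVisited are as described above):

        // Hedged sketch: how the pieces above combine in a "Next" click handler.
        protected void NextButton_Click(object sender, EventArgs e)
        {
            string url = "Step3.aspx"; // illustrative destination page
            if (HasPageBeenVisited(url))
            {
                url += VISITED; // "?v=1" makes Validate() force Page_Load
            }
            Response.Redirect(url);
        }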

    Read the article

  • Why is my RAID /dev/md1 showing up as /dev/md126? Is mdadm.conf being ignored?

    - by mmorris
    I created a RAID with:

        sudo mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
        sudo mdadm --create --verbose /dev/md2 --level=mirror --raid-devices=2 /dev/sdb2 /dev/sdc2

    sudo mdadm --detail --scan returns:

        ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
        ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

    which I appended to /etc/mdadm/mdadm.conf, see below:

        # mdadm.conf
        #
        # Please refer to mdadm.conf(5) for information about this file.
        #
        # by default (built-in), scan all partitions (/proc/partitions) and all
        # containers for MD superblocks. alternatively, specify devices to scan, using
        # wildcards if desired.
        #DEVICE partitions containers

        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # instruct the monitoring daemon where to send mail alerts
        MAILADDR root

        # definitions of existing MD arrays

        # This file was auto-generated on Mon, 29 Oct 2012 16:06:12 -0500
        # by mkconf $Id$
        ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
        ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

    cat /proc/mdstat returns:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md2 : active raid1 sdb2[0] sdc2[1]
              208629632 blocks super 1.2 [2/2] [UU]

        md1 : active raid1 sdb1[0] sdc1[1]
              767868736 blocks super 1.2 [2/2] [UU]

        unused devices: <none>

    ls -la /dev | grep md returns:

        brw-rw---- 1 root disk 9, 1 Oct 30 11:06 md1
        brw-rw---- 1 root disk 9, 2 Oct 30 11:06 md2

    So I think all is good, and I reboot. After the reboot, /dev/md1 is now /dev/md126 and /dev/md2 is now /dev/md127?????

    sudo mdadm --detail --scan returns:

        ARRAY /dev/md/ion:1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
        ARRAY /dev/md/ion:2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

    cat /proc/mdstat returns:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md126 : active raid1 sdc2[1] sdb2[0]
              208629632 blocks super 1.2 [2/2] [UU]

        md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
              767868736 blocks super 1.2 [2/2] [UU]

        unused devices: <none>

    ls -la /dev | grep md returns:

        drwxr-xr-x 2 root root           80 Oct 30 11:18 md
        brw-rw---- 1 root disk      9, 126 Oct 30 11:18 md126
        brw-rw---- 1 root disk      9, 127 Oct 30 11:18 md127

    All is not lost, I:

        sudo mdadm --stop /dev/md126
        sudo mdadm --stop /dev/md127
        sudo mdadm --assemble --verbose /dev/md1 /dev/sdb1 /dev/sdc1
        sudo mdadm --assemble --verbose /dev/md2 /dev/sdb2 /dev/sdc2

    and verify everything. sudo mdadm --detail --scan returns:

        ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
        ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

    cat /proc/mdstat returns:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md2 : active raid1 sdb2[0] sdc2[1]
              208629632 blocks super 1.2 [2/2] [UU]

        md1 : active raid1 sdb1[0] sdc1[1]
              767868736 blocks super 1.2 [2/2] [UU]

        unused devices: <none>

    ls -la /dev | grep md returns:

        brw-rw---- 1 root disk 9, 1 Oct 30 11:26 md1
        brw-rw---- 1 root disk 9, 2 Oct 30 11:26 md2

    So once again, I think all is good, and I reboot. Again, after the reboot, /dev/md1 is /dev/md126 and /dev/md2 is /dev/md127?????
    sudo mdadm --detail --scan returns:

        ARRAY /dev/md/ion:1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
        ARRAY /dev/md/ion:2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

    cat /proc/mdstat returns:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md126 : active raid1 sdc2[1] sdb2[0]
              208629632 blocks super 1.2 [2/2] [UU]

        md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
              767868736 blocks super 1.2 [2/2] [UU]

        unused devices: <none>

    ls -la /dev | grep md returns:

        drwxr-xr-x 2 root root           80 Oct 30 11:42 md
        brw-rw---- 1 root disk      9, 126 Oct 30 11:42 md126
        brw-rw---- 1 root disk      9, 127 Oct 30 11:42 md127

    What am I missing here?

    Read the article

  • SQL SERVER – What are Actions in SSAS and How to Make a Reporting Action

    - by Pinal Dave
    Actions are used for customized browsing and drilling of data for the end user. An action is an event that a user can raise while accessing the cube data. Actions are used in cube browsers like Excel and are triggered when a user in a client tool clicks on a particular member, level, dimension, cell, or maybe the cube itself. For example, a user might be able to see a Reporting Services report, open a web page, or drill through to detailed information related to the cube data. Analysis Services supports three types of actions: Report, Drill-through, and Standard actions. In this blog post, I will explain the Reporting action. The objective of this action is to return a report with details of the product where the sales amount is greater than 10,000 in cube browser analysis. You need to create a basic cube first with the facts and dimensions you want in the analysis. Following are the steps to create a reporting action.

    1) Go to SQL Server Data Tools and open the Analysis Services project. Navigate to Actions and click on New Reporting Action.
    2) Specify the name of the action and choose the target type as attribute members, since we have to create the action on members of an attribute.
    3) Specify the target object of your report action. The target object would be the dimension or attribute on which you want the report to appear. In our case it is the product name.
    4) Next you have to define the condition on which you want the report link to appear. However, this is an optional feature. In this example we are specifying a condition which will check if the sales amount is greater than 10,000, so that the link appears only for those products where the defined condition is met.
    5) Next you have to specify the server name on which the report is present, the report path, and the report format in which you want the report to appear.
    6) Additionally you can specify the parameters. As with the conditional expression, the parameters should be a valid MDX expression. The parameter name should be the same as the one defined in the report.
    7) Deploy your solution after you are done with specifying parameters, and go to the cube browser.
    8) Click on the Analyze in Excel button; this will open your cube in Excel.
    9) Make an analysis which shows product names and their sales amount.
    10) Right-click on a product where the sales amount is greater than 10,000 and you will see the reporting action link. Click on that and you will be taken to your Reporting Services report.
    11) Clicking on the link will take you to the URL of the report. I created this report using the report project wizard in SQL Server Data Tools.

    So, this is how we can launch reports from a cube browser. Similarly you can open web pages, run applications and a number of other tasks. Koenig Solutions offers SSAS training which covers all of Analysis Services, including Reporting, in great detail. In my next blog post I will talk about drill-through actions.

    Author: Namita Sharma, Senior Corporate Trainer at Koenig Solutions.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: SSAS

    Read the article

  • Change the Way Google Search Results Display in Firefox

    - by Asian Angel
    Are you tired of the default look for search results at Google? If you want a different, customized, more pleasing look for them, then join us as we look at the GoogleMonkeyR user script. Note: User style scripts & user scripts can be added to most browsers, but we are using Firefox & the Greasemonkey extension for our example here.

    Before

    Here is the standard look for search results at Google… not bad, but it really does not stand out that well either.

    Installing the User Script

    You may be asking yourself what makes this particular user script different from others. Take a look at the list of goodies that you get access to and you will understand:

    - Multiple columns of results
    - Removes “Sponsored Links”
    - Adds numbers to the results
    - Auto-loads more results
    - Removes web search dialogues
    - Opens links in a new tab
    - Favicons
    - GooglePreview
    - Self-updating
    - Can be configured from a simple user dialogue

    To get started, click on the webpage install button. Once you click on the webpage install button you will see the following window asking for confirmation to add the user script to Firefox. Click Install to complete the process.

    GoogleMonkeyR in Action

    Refreshing the same search page shown above shows a noticeable difference already. The light blue background makes the search results stand out a bit better. This is an improvement from before, but you will definitely want to have a look to see just how far you can go… Right-click on the Greasemonkey status bar icon, go to User Script Commands, and select GoogleMonkeyR Preferences. Once you have clicked on GoogleMonkeyR Preferences, the search page will be shaded out and you will have access to the user script’s preferences. This is where you can really make your search results unique looking! Here are the changes that we started out with… After refreshing our search results, things looked even better. A look at the entire page of results with our browser maximized and set for two columns. If you have the “Auto load more results” option enabled, new results will be added very quickly as you scroll down. Our set of search results after adding Favicons & GooglePreview images.

    Conclusion

    If you have been wanting a more dramatic and pleasing look for the search results at Google, then you cannot go wrong with the GoogleMonkeyR user script. Change as little or as much as you want to get that perfect look in your browser.

    Links

    - Install the GoogleMonkeyR User Script
    - Download the Greasemonkey extension for Firefox (Mozilla Add-ons)

    Read the article

  • ATG Live Webcast June 14: Technical Preview of EBS 12.2 Online Patching

    - by BillSawyer
    Online Patching is one of the cornerstone new features in our upcoming Oracle E-Business Suite 12.2 release. This ground-breaking feature is based upon Edition-Based Redefinition, a new 11gR2 Database feature that was built to Oracle Applications division specifications to allow the E-Business Suite's database tier to be patched while the environment is running. Online Patching combines the use of Edition-Based Redefinition and new E-Business Suite technologies to allow patching of the E-Business Suite's database and application tier servers while the environment is being actively used by its end users.

    This webcast provides a detailed technical preview of:

    - How this new feature works
    - How it affects E-Business Suite end users
    - How it affects E-Business Suite database administrators and patching lifecycles
    - How it affects developers and third-party software vendors responsible for E-Business Suite customizations and extensions

    The presenter for this event is Kevin Hudson, Senior Director and one of the Online Patching architects. There will be a special extended Q&A session at the end of this presentation, given the nature of the materials and the questions that we expect from you. ATG Development staff supporting the Q&A session will include Elke Phelps, Santiago Bastidas, Max Arderius, and other ATG architects.

    Date: Thursday, June 14, 2012
    Time: 8:00 AM - 10:00 AM Pacific Standard Time (special 2-hour time)
    Presenter: Kevin Hudson, Senior Director, Applications Technology Integration
    Webcast Registration Link (preregistration is optional but encouraged)

    To hear the audio feed:
    - Domestic Participant Dial-In Number: 877-697-8128
    - International Participant Dial-In Number: 706-634-9568
    - Dial-In Passcode: 100815

    To see the presentation, the Direct Access Web Conference details are:
    - Website URL: https://ouweb.webex.com
    - Meeting Number: 597470987

    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay. And, you can find our archive of our past webcasts and training here.

    When will Oracle E-Business Suite 12.2 be released? Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates, but you're welcome to monitor or subscribe to this blog. We'll post updates here as soon as they're available.

    Read the article

  • Python Coding standards vs. productivity

    - by Shroatmeister
    I work for a large humanitarian organisation, on a project building software that could help save lives in emergencies by speeding up the distribution of food. Many NGOs desperately need our software and we are weeks behind schedule.

    One thing that worries me in this project is what I think is an excessive focus on coding standards. We write in Python/Django and use a version of PEP 8, with various modifications, e.g. line lengths can go up to 160 chars and all lines should go that long if possible, no blank lines between imports, line wrapping rules that apply only to certain kinds of classes, lots of templates that we must use even if they aren't the best way to solve a problem, etc. etc.

    One core dev spent a week rewriting a major part of the system to meet the then-new coding standards, throwing away several suites of tests in the process, as the rewrite meant they were 'invalid'. We spent two weeks rewriting all the functionality that was lost, and fixing bugs. He is the lead dev and his word carries weight, so he has convinced the project manager that these standards are necessary. The junior devs do as they are told. I sense that the project manager has a strong feeling of cognitive dissonance about all this but nevertheless agrees with it vehemently as he feels unsure what else to do.

    Today I got in serious trouble because I had forgotten to put some spaces after commas in a keyword argument. I was literally shouted at by two other devs and the project manager during a Skype call. Personally I think coding standards are important, but I also think that we are wasting a lot of time obsessing over them, and when I verbalized this it provoked rage. I'm seen as a troublemaker in the team, a team that is looking for scapegoats for its failings.

    Since the introduction of the coding standards, the team's productivity has measurably plummeted; however, this only reinforces the obsession, i.e. the lead dev simply blames our non-adherence to standards for the lack of progress. He believes that we can't read each other's code if we don't adhere to the conventions. This is starting to turn sticky.

    Now I am trying to modify various scripts, autopep8, pep8ify and PythonTidy, to try to match the conventions. We also run pep8 against source code, but there are so many implicit amendments to our standard that it's hard to track them all. The lead dev simply picks faults that the pep8 script doesn't pick up and shouts at us in the next stand-up meeting. Every week there are new additions to the coding standards that force us to rewrite existing, working, tested code. Thank heavens we still have tests (I reverted some commits and fixed a bunch of the ones he removed). All the while there is increasing pressure to meet the deadline.

    I believe a fundamental issue is that the lead dev and another core dev refuse to trust other developers to do their job. But how to deal with that? We can't do our job because we are too busy rewriting everything. I've never encountered this dynamic in a software engineering team. Am I wrong to question their adherence to coding standards? Has anyone else experienced a similar situation, and how have they dealt with it successfully? (I'm not looking for a discussion, just actual solutions people have found.)
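    One thing I am experimenting with (a sketch only, not something the team has adopted): codifying every amendment in the pep8 tool's own config file, so the checker rather than a person becomes the arbiter. The pep8 tool reads a [pep8] section from setup.cfg or tox.ini; the 160-character limit below is ours, the rest is illustrative:

        # Hedged sketch: a [pep8] section in setup.cfg or tox.ini codifying the
        # team's amendments, so "pep8 ." checks the same rules the lead dev checks.
        [pep8]
        # our amended limit (the PEP 8 default is 79); further agreed amendments
        # would be added here as they are decided, so nothing stays implicit
        max-line-length = 160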

    Read the article

  • ATG Live Webcast June 28: Scrambling Sensitive Data in EBS 12 Cloned Environments

    - by BillSawyer
    Securing the Oracle E-Business Suite includes protecting the underlying E-Business data in production and non-production databases. While steps can be taken to provide a secure configuration to limit EBS access, a better approach to protecting non-production data is simply to scramble (mask) the data in the non-production copy.

    The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers.

    The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack. When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing.

    This ATG Live Webcast is your chance to come learn about the Oracle E-Business Suite Release 12.1.3 Template for Data Masking Pack from the experts.

    Oracle E-Business Suite Release 12.1.3 Template for Data Masking

    The agenda for the Oracle E-Business Suite Template for Data Masking Pack webcast includes the following topics:

    - What does data masking do in E-Business Suite environments?
      - De-identify the data
      - Mask sensitive data
      - Maintain data validity
    - How can EBS customers use data masking?
    - References

    Join Eric Bing, Senior Director, and Elke Phelps, Senior Principal Product Manager, as they discuss the Oracle E-Business Suite Template for Data Masking Pack.

    Date: Thursday, June 28, 2012
    Time: 8:00 AM Pacific Standard Time
    Presenters: Eric Bing, Senior Director
                Elke Phelps, Senior Principal Product Manager
    Webcast Registration Link (preregistration is optional but encouraged)

    To hear the audio feed:
    - Domestic Participant Dial-In Number: 877-697-8128
    - International Participant Dial-In Number: 706-634-9568
    - Additional International Dial-In Numbers Link
    - Dial-In Passcode: 100865

    To see the presentation, the Direct Access Web Conference details are:
    - Website URL: https://ouweb.webex.com
    - Meeting Number: 599097152

    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of our past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • Orchestrating the Virtual Enterprise, Part II

    - by Kathryn Perry
    A guest post by Jon Chorley, Oracle's CSO & Vice President, SCM Product Strategy

    Almost everyone has ordered from Amazon.com at one time or another. Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track the status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operation, using thousands of trading partners, requires a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive and custom solution to tackle this problem, as there used to be no product solution available. Now, other companies who want to follow similar models have a better off-the-shelf choice: Oracle Distributed Order Orchestration (DOO).

    Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it could improve quality, speed, flexibility, and consistency without requiring an organ transplant of these highly complex legacy systems.

    Many retailers face the challenge of dealing with brick-and-mortar, web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick-and-mortar retail operation added an online business, they turned to Oracle Fusion DOO to bring these entities together.

    Disturbing the Peace with Acquisitions

    Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems, or even introduce an entirely new business, like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation.

    The Flip Side of Fulfillment

    Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how to manage complex supply in a flexible way when there are multiple trading parties involved? How to manage the supply to suppliers? How to manage critical components that need to merge in a tier-two or tier-three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.

    The Clear Front Runner

    No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space.
    We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area. And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Jon Chorley
    Chief Sustainability Officer & Vice President, SCM Product Strategy
    Oracle Corporation

    Read the article

  • Data Connection Library in SharePoint Server 2010

    - by Wayne
    What is a Data Connection Library in SharePoint Server 2010?

    A Data Connection Library in Microsoft SharePoint Server 2010 is a library that can contain two kinds of data connections: an Office Data Connection (ODC) file or a Universal Data Connection (UDC) file. Microsoft InfoPath 2010 uses data connections that comply with the Universal Data Connection (UDC) file schema and typically have either a *.udcx or *.xml file name extension. Data sources described by these data connections are stored on the server and can be used in standard form templates and browser-enabled form templates.

    How to create a SharePoint Server Data Connection Library

    1. Browse to a SharePoint Server 2010 site on which you have at least Design permissions. If you are on the root site, create a new site before you continue with the next step.
    2. On the Site Actions menu, click More Options.
    3. On the Create page, click Library under Filter By, and then click Data Connection Library.
    4. On the right side of the Create page, type a name for the library, and then click the Create button.
    5. Copy the URL of the new data connection library.

    How to create a new data connection file in InfoPath

    1. Open InfoPath Designer 2010, click Blank Form, and then click Design Form.
    2. On the Data tab, click Data Connections, and then click Add.
    3. In the Data Connection Wizard, click Create a new connection to, click Receive data, and then click Next.
    4. Click the kind of data source that you are connecting to, such as Database, Web service, or SharePoint library or list, and then click Next.
    5. Complete the remaining steps in the Data Connection Wizard to configure your data connection, and then click Finish to return to the Data Connections dialog box.
    6. In the Data Connections dialog box, click Convert to Connection File.
    7. In the Convert Data Connection dialog box, enter the URL of the data connection library that you previously copied (delete "Forms/AllItems.aspx" and anything following it from the URL), enter a name for the data connection file at the end of the URL, and then click OK. It will take a few moments to convert and save the data connection file to the library.
    8. Confirm that the data connection was converted successfully by examining the Details section of the Data Connections dialog box while the name of the converted data connection is selected.
    9. Browse to the SharePoint data connection library, click the drop-down next to the name of the data connection, click Approve/Reject, click Approved, and then click OK.

    Take advantage of the SharePoint Data Connection Library and other useful features of the SharePoint family of products, which includes SharePoint Foundation 2010, MOSS 2007 and free SharePoint templates or web parts.

    Read the article

  • LINQ to Twitter Queries with LINQPad

    - by Joe Mayo
    LINQPad is a popular utility for .NET developers who use LINQ a lot. In addition to standard SQL queries, LINQPad also supports other types of LINQ providers, including LINQ to Twitter. The following sections explain how to set up LINQPad for making queries with LINQ to Twitter. LINQPad comes in a couple of versions; this example uses LINQPad4, which runs on the .NET Framework 4.0.

    1. The first thing you'll need to do is set up a reference to the LinqToTwitter.dll. From the Query menu, select query properties. Click the Browse button and find the LinqToTwitter.dll binary. You should see something similar to the Query Properties window below.

    2. While you have the query properties window open, add the namespace for the LINQ to Twitter types. Click the Additional Namespace Imports tab and type in LinqToTwitter. The results are shown below:

    3. The default query type, when you first start LINQPad, is C# Expression, but you'll need to change this to support multiple statements. Change the Language dropdown, on the main window, to C# Statements.

    4. To query LINQ to Twitter, instantiate a TwitterContext, by typing the following into the LINQPad Query window:

        var ctx = new TwitterContext();

    Note: If you're getting syntax errors, go back and make sure you did steps #2 and #3 properly.

    5. Next, add a query, but don't materialize it, like this:

        var tweets =
            from tweet in ctx.Status
            where tweet.Type == StatusType.Public
            select new { tweet.Text, tweet.Geo, tweet.User };

    6. Next, you want the output to be displayed in the LINQPad grid, so do a Dump, like this:

        tweets.Dump();

    The following image shows the final results:

    That was an unauthenticated query, but you can also perform authenticated queries with LINQ to Twitter's support of OAuth. Here's an example that uses the PinAuthorizer (type this into the LINQPad Query window):

        var auth = new PinAuthorizer
        {
            Credentials = new InMemoryCredentials
            {
                ConsumerKey = "",
                ConsumerSecret = ""
            },
            UseCompression = true,
            GoToTwitterAuthorization = pageLink => Process.Start(pageLink),
            GetPin = () =>
            {
                // this executes after user authorizes, which begins with the
                // call to auth.Authorize() below.
                Console.WriteLine("\nAfter you authorize this application, Twitter will give you a 7-digit PIN Number.\n");
                Console.Write("Enter the PIN number here: ");
                return Console.ReadLine();
            }
        };

        // start the authorization process (launches Twitter authorization page).
        auth.Authorize();

        var ctx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/");

        var tweets =
            from tweet in ctx.Status
            where tweet.Type == StatusType.Public
            select new { tweet.Text, tweet.Geo, tweet.User };

        tweets.Dump();

    This code is very similar to what you'll find in the LINQ to Twitter downloadable source code solution, in the LinqToTwitterDemo project. For obvious reasons, I changed the values assigned to ConsumerKey and ConsumerSecret, which you'll have to obtain by visiting http://dev.twitter.com and registering your application.

    One tip: you'll probably want to make this easier on yourself by creating your own DLL that encapsulates all of the OAuth logic, and then calling a method or property on your custom class that returns a fully functioning TwitterContext. This will help avoid adding all this code every time you want to make a query. (A sketch of such a helper follows.)

    Now you know how to set up LINQPad for LINQ to Twitter, perform unauthenticated queries, and perform queries with OAuth.

    Joe
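    Here's a minimal sketch of that helper (the class and method names are mine, not part of the LINQ to Twitter API; it only uses the types shown above):

        // Hedged sketch: wrap the OAuth ceremony once, reuse from any LINQPad query.
        using System;
        using System.Diagnostics;
        using LinqToTwitter;

        public static class TwitterContextFactory
        {
            public static TwitterContext Create(string consumerKey, string consumerSecret)
            {
                var auth = new PinAuthorizer
                {
                    Credentials = new InMemoryCredentials
                    {
                        ConsumerKey = consumerKey,
                        ConsumerSecret = consumerSecret
                    },
                    UseCompression = true,
                    GoToTwitterAuthorization = pageLink => Process.Start(pageLink),
                    GetPin = () =>
                    {
                        Console.Write("Enter the PIN number here: ");
                        return Console.ReadLine();
                    }
                };

                auth.Authorize(); // launches the Twitter authorization page

                return new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/");
            }
        }

    With that DLL referenced in LINQPad, a query shrinks to var ctx = TwitterContextFactory.Create(myKey, mySecret); followed by the query and Dump() as before.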

    Read the article

  • Creating a branch for every Sprint

    - by Martin Hinshelwood
    There are a lot of developers using version control these days, but a feature of version control called branching is very poorly understood and remains unused by most developers in favour of labels. Most developers think that branching is hard and complicated. It's not! What is hard and complicated is a bad branching strategy. Just like a bad software architecture, a bad branch architecture, or one that is not adhered to, can prove fatal to a project.

    When I was at Aggreko we had a fairly successful feature branching strategy (although the developers hated it) that meant that we could have multiple feature teams working at the same time without impacting each other. Now, this had to be carefully orchestrated, as it was a Business Intelligence team and many of the BI artefacts do not lend themselves to merging.

    Today at SSW I am working on a Scrum team delivering a product that will be used by many hundreds of developers. SSW SQL Deploy takes much of the pain out of upgrading production databases when you are not using the Database projects in Visual Studio.

    With Scrum, each Scrum team works for a fixed period of time on a single sprint. You can have one or more Scrum teams involved in delivering a product, but all the work must be merged and tested, ready to be shown to the Product Owner at the Sprint Review meeting at the end of the current sprint. So, what does this mean for a branching strategy? We have been using a “Main” (sometimes called “Trunk”) line and doing a branch for each sprint. It’s like feature branching, but with only ONE feature in operation at any one time, so no conflicts.

    Figure: DEV folder containing the development branches.

    I know that some folks advocate applying a label at the start of each sprint and then rolling back if you need to, but I have always preferred the security of a branch.

    Like:

    - Being able to create a release from Main that has Sprint 3 code even while Sprint 4 is being worked on.
    - Being sure I can always create a stable build on request.
    - Being able to guarantee a version (labels are not auditable).
    - Being able to abandon the sprint without having to delete the code (rare, I know, but it would be a mess if it happened).
    - Being able to see the flow of change sets through to a safe release.
    - It helps you find invalid dependencies when merging to Main, as there may be some file that is in everyone’s sprint branch but never got checked in. (We had this at the merge of Sprint 2.)
    - If you are always operating in this way as a standard, it makes it easier to add more Scrum teams in the future.
    - Muscle memory of this way of working.

    Don’t like:

    - Additional DB space for the branches.
    - Baseless merging between sprint branches when changes are directly ported. Note: I do not think we will ever attempt this!
    - Maybe a bit tougher to see the history between sprint branches, since the changes go up through Main and down to another sprint branch. Note: What you would have to do is see which sprint the changes were made in and then check the history of the same file in that sprint. A little bit of added complexity that you would have to deal with anyway with multiple teams.
    - Over time, you can end up with a lot of old unused sprint branches. Perhaps destroy with /keephistory can help in this case (see the command-line sketch after this list). Note: We ALWAYS delete the sprint branch after it has been merged into Main. That is the theory anyway, and as you can see from the images, Sprint 2 has already been deleted.
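    For concreteness, here is roughly what that cycle looks like from the TFS 2010 command line (a sketch only; the server paths are illustrative, and tf branch, tf merge and tf destroy are the standard Team Foundation version control commands):

        REM Create the sprint branch from Main at the start of the sprint
        tf branch $/Product/Main $/Product/DEV/Sprint4

        REM ... the team works on and checks in to the Sprint4 branch ...

        REM Merge back to Main before the Sprint Review
        tf merge $/Product/DEV/Sprint4 $/Product/Main /recursive

        REM After a successful merge, remove the old sprint branch but keep
        REM its changeset history for auditing
        tf destroy $/Product/DEV/Sprint4 /keephistory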
Why take the chance of having a problem rolling back or wanting to keep some of the code, when you can just abandon a branch and start a new one? It just seems easier and less painful to use a branch to me! What do you think?   Technorati Tags: TFS,TFS2010,Software Development,ALM,Branching

    Read the article

  • Inspiring a co-worker to adopt better coding practices?

    - by Aaronaught
    In the Handling my antiquated coworker question, various people discussed strategies for dealing with coworkers who are unwilling to integrate their workflow with the team's. I'd like, if possible, to learn some strategies for "teaching" a coworker who is merely ignorant of modern techniques and tools, and possibly a little apathetic.

    I've started working with a programmer who until recently has been working in relative isolation, in a different part of the company. He has extensive domain knowledge and most importantly he has demonstrated good problem-solving skills, something which many candidates seem to lack. However, the actual (C#) code I've seen is a throwback to the VB6 days. Procedural structure, Hungarian notation, global variables (abuse of static), no interfaces, no tests, non-use of Generics, throwing System.Exception... you get the idea.

    This programmer is a fair bit older than I am and, by first impressions at least, doesn't actively seek positive change. I'm not going to say resistant to change, because I think that is largely an issue of how the topic gets broached, and I want to be prepared. Programmers tend to be stubborn people, and going in with guns blazing and instituting rip-it-to-shreds code reviews and strictly-enforced policies is very likely not going to produce the end result that I want. If this were a new hire, a junior programmer, I wouldn't think twice about taking a "mentor" stance, but I'm extremely wary of treating an experienced employee as a clueless newbie (which he's not - he just hasn't kept pace with certain advancements in the field).

    How might I go about raising this developer's code quality standard the Dale Carnegie way, through gentle persuasion and non-material incentives? What would be the best strategy for effecting subtle, gradual changes, without creating an adversarial situation? Have other people - especially lead developers - been in this type of situation before? Which strategies were successful at stimulating interest and creating a positive group dynamic? Which strategies weren't successful and would be better to avoid?

    Clarifications: I really feel that several people are answering based on personal feelings without actually reading all of the details of the question. Please note the following, which should have been implied but I am now making explicit:

    - This coworker is only my "senior" by virtue of age. I never said that his title, sphere of influence, or years at the organization exceed mine, and in fact, none of those things are true. He's a LOB programmer who's been absorbed into the main development shop. That's it.
    - I am not a new hire, junior programmer, or other naïve idiot with grand plans to transform the company overnight. I am basically in charge of the software process, but as many who've worked as "leads" will know, responsibilities don't always correlate precisely with the org chart.
    - I'm not asking people how to get my way, come hell or high water. I could do that if I wanted to, with the net result being that this person would become resentful and/or quit. Please try to understand that I am looking for a social, cooperative method of driving change.
    - The mention of "...global variables... no tests... throwing System.Exception" was intended to demonstrate that the problems are not just superficial or aesthetic. Practices that may work for relatively small CRUD apps do not necessarily work for large enterprise apps, and in fact, none of the code so far has actually passed the integration tests.
Please, try to take the question at face value, accept that I actually know what I'm talking about, and either answer the question that I actually asked or move on. P.S. My sincerest gratitude to those who -did- offer constructive advice rather than arguing with the premise. I'm going to leave this open for a while longer as I'm hoping to hear more in the way of real-world experiences.

    Read the article

  • Barcodes and Bugs

    - by Tim Dexter
    A great mail from Mike at Browning last week. He has been through the ringer getting his BIP barcoding sorted out, but he's now out of the woods. Here's the final result. By way of explanation, an excerpt from Mike's email:

    This is an example of the GS1-128 carton shipping labels we are now producing with BIP in our web application for our vendors who drop ship products to our dealers. It produces 4 labels per printed page, in PDF format, on peel & stick label paper. Each label has a unique carton number, and a unique carton serial number in the SSCC-18 barcode. This example is for Cabelas (each customer has slightly different GS1-128 label format requirements - custom template for each - a pain!). I am using custom Java encoders I wrote for the UPC and SSCC-18 barcodes, and a standard encoder (code128b) for the ShipTo zip barcode. Is there any way yet to get around that SUPER ANNOYING bug when opening the rtf template in MS Word, and it replaces my xsl code text in the barcode fields with gibberish??? Every time I open it I have to re-enter all the xsl code. Not only to be able to read & edit it, but also to get it to work in BIP (BIP doesn't like the gibberish if I upload the template that has it).

    Mike's last point, regarding the annoying bug in the template builder, is one that I have experienced occasionally. The development team have looked at it and found it to be an issue with MS Word and not a plugin problem. That's all well and good, but how can you get around it? Well, you can take advantage of the font mapping that BIP offers to get the barcodes into the PDF output. As many of you know, to get a barcode font to appear in the PDF output you need to employ the xdo.cfg file in the template builder config directory. You would normally have an entry such as this:

        <font family="Code 128" style="normal" weight="normal">
            <truetype path="C:\windows\fonts\128R00.TTF" />
        </font>

    to map a barcode font to get it to render in the PDF output when testing from the template builder plugin.

    Mike's issue is only present when the form field is highlighted with a barcode font. The other fields in the template are OK. What you can do to get around the issue is to bend the config entry to avoid having to use the barcode font in the template at all. Change the entry to something like:

        <font family="Calibri" style="normal" weight="normal">
            <truetype path="C:\windows\fonts\128R00.TTF" />
        </font>

    Note that we are mapping Calibri, a humanly readable and non-'erroring' font in the template, to the Code 128 barcode font. Where you used to highlight the field with the barcode font in MS Word, you now use the Calibri font instead. At run time, BIP will go look for the Calibri font mapping and will drop in the Code 128 font. Of course, Calibri is an example; you need to pick a font that you are not going to use anywhere else in the layout.

    Read the article

  • Taking Your Business Scorecard Golfing

    - by tobyehatch
    Our workplace world is definitely changing. Not only are we taking work home, but we are working during odd hours in some very strange places. I had the pleasure of interviewing Jacques Vigeant, Product Strategy Manager for Oracle Business Intelligence and Enterprise Performance Management, on a podcast, and he enlightened me about how our mobile devices and business scorecards are enabling us to be more accountable and keep a watchful eye on business – even while on the golf course.

    Business scorecards have been around for many years, so I asked Jacques if he felt they had changed significantly due to technology. His answer was, “Yes, and no.” Jacques agreed that scorecard enthusiasts are still passionate about executing the company strategy and monitoring Key Performance Indicators (KPIs), but scorecards and Business Intelligence (BI) as a whole have changed. He explained that five to six years ago, people did BI work at the office and, for the most part, disconnected from their computer and workplace when they went home – with the exception of checking email and making a phone call or two. But now, that is no longer the case. People are virtually always connected with work and, more importantly, expect their BI and scorecards to be ‘always on,’ regardless of whether they are at their desk or somewhere else.

    Basically, the BI paradigm has changed from a 'pull' model, where employees are at their desks querying or pulling information from the system, to a 'push' model where employees expect their BI and scorecard systems to reach out (or push information) to them when there is something of note to learn or something on which they need to take action. I found this very interesting. However, mobile devices do have their limitations with respect to screen sizes – does it really make sense to look at your strategy/scorecard on tiny devices? What kind of scorecard activities can you really expect to be able to do?

    Jacques' answer was very logical. “When you think of a scorecard, it is really comprised of an organization of KPIs that are aligned with the strategic objectives of your company. KPIs are the heart of how you will execute your strategy. So, if you decompose that a little more, each KPI is well defined with the thresholds that you should keep an eye on and who is responsible for them. When we talk about scorecarding on a phone, we aren’t talking about surfing the strategy and exploring the strategy map like we do on the desktop. In a scorecarding context, we use the phone more as an alerting mechanism or simple monitoring device for your KPIs.”

    Jacques gave a great example of an inventory manager who took part of an afternoon off to go golfing before winter finally hit, and while on the front nine holes, his phone vibrated. His scorecard was alerting him that the inventory level for one of the products was below a threshold that he had set. From his phone, he had set up three options within Oracle Scorecard and Strategy Management (OSSM) for this type of situation:

    1. Contact the warehouse manager directly by phone and work it out (standard phone function)
    2. Tap/hold the KPI and add an annotation to the KPI in OSSM using the dictation capabilities of the phone, and deal with it more fully when he gets back to the office
    3. Tap/hold the KPI and invoke a business process from OSSM to transfer product from another warehouse with higher stock levels to the one that needs it

    Being on a phone should still give you options to quickly deal with situations as needed, but mobile phones are not designed for, nor should they try to replicate, the full desktop experience. We covered other interesting subjects in the interview, including how Oracle is keeping pace with mobile innovation and new devices such as Google Glasses, Galaxy Gear, Pebble watches and more, and how Oracle is handling mobile security – which is great news for our mobile workforce.

    To listen to the entire podcast, click here.
    To learn more about Oracle Scorecard and Strategy Management, click here.

    Read the article

  • Customizing UPK outputs (Part 1)

    - by Maria Cozzolino
    If you are familiar with Oracle's User Productivity Kit, you are aware that UPK is a great product for rapidly developing application training. Did you know that you can also customize the UPK outputs to incorporate your company's logo, colors, and preferred styles? There are several areas that support customization:

    - Logo - Within the developer, you can change the logo for all outputs at one time.
    - Player - The player output uses a style sheet that can be updated to change colors, graphics and other visual branding.
    - Documentation - The print documentation uses a Word-based template that can be modified to match your corporate standards.

    I'll discuss the first one today, and we'll cover the others in subsequent blogs.

    Before you begin:

    - If you are working in a multi-user environment, ensure that you have "Modify" permissions for the Styles directory under the Publishing folder.
    - Make a copy of the current styles. This recommendation is for backup purposes. If something goes wrong, you will have a way to recover.
    - Consider creating your own category by creating a new folder under the Styles directory, and then copying the styles into your new folder. When you upgrade to future versions, the system will overwrite the standard styles with any new feature additions and updates that have been made. With your own category, all of your customizations will remain intact.

    To update the logos in all outputs:

    1. From the Tools menu, choose Customize Logo.
    2. Select the category if necessary.
    3. Browse to select your logo. You can use any size logo, in any graphic format (*.bmp, *.gif, *.jpeg, *.jpg, *.png, or *.tif). The system will make a copy of your logo and add it to each of the publishing styles.
    4. Choose OK, and the update process begins. It may take a few minutes.

    Helpful hints:

    - The logo you select is used "as is" - no resizing or cropping occurs during this process.
    - The Customize Logo process automates replacing all the logo graphics for online deployment (small_logo.gif and large_logo.gif) and the headers in the documentation outputs. You can manually replace these graphics on an individual style basis if you prefer.
    - The recommended logo size is 230 pixels wide x 44 pixels high. Prior to updating the logos, the system will display the size of the selected logo. If you use a logo that is much larger than the recommended size, the heading area will resize to fit the new logo, but that will impact the space available for your training material.
    - If you are using a multi-user environment, the system will check out the publishing styles to you for the logo updates. After you review the styles, remember to check them in so the rest of your team can access the new changes.

    I'd be interested in hearing (or seeing) how you brand your UPK. Feel free to share in the comments!

    --Maria Cozzolino, Manager of Requirements & UI for UPK Product Development

    PS. For those of you who want to customize the player and documentation NOW, check out the detailed instructions in the Publishing Content chapter of the Content Development Guide.

    Read the article

  • TechEd Israel 2010 may only accept speakers from sponsors

    - by RoyOsherove
    A month or so ago, Microsoft Israel started sending out emails to its partners and registered event users to “Save the date!” – Microsoft TechEd Israel is coming, and it’s going to be this November!

    “Great news,” I thought to myself. I’d been to a couple of the MS TechEd events, as a speaker and as an attendee, and it was lovely and professionally done. Israel is an amazing place for technology and development and TechEd hosted some big names in the world of MS software.

    A couple of weeks ago, I was shocked to hear from a couple of people that Microsoft Israel plans to accept non-MS TechEd speakers only from sponsors of the event. That means that according to the amount that you have paid, you get to insert one or more of your own selected speakers as part of TechEd. I’ve spent the past couple of weeks trying to gather more evidence of this, and have gotten some input from within MS about this information. It looks like that is indeed the case, though no MS rep was prepared to answer any email I had publicly. If they approach me now I’d be happy to print their response.

    What does this mean? If this is true, it means that Microsoft Israel is making a grave mistake – they are diluting the quality of the speakers for pure money factors. That means that as a TechEd attendee, who paid good money, you might be sitting down to watch nothing more than a bunch of infomercials, or sub-standard speakers – since speakers are no longer selected on quality or interest in their topic.

    - They are turning the conference from a learning event into a commercially driven event
    - They are closing off the stage to the community of speakers who may not be associated with any organization willing to be a sponsor
    - They are losing speakers (such as myself) who will not want to be part of such an event. (Yes – even if my company ends up sponsoring the event, I will not take part in it. Sorry, Eli!)
    - They are saying “F&$K you” to the community of MVPs who should be the people to be approached first about technical talks (my guess is many MVPs wouldn’t want to talk at an event driven that way anyway)

    I do hope this ends up not being true, but it looks like it is. MS Israel had already done such a thing with the Developer Days event previously held in Israel – only sponsors were allowed to insert speakers into the event. If this turns out to be true I would urge the MS community in Israel to NOT TAKE PART AT THIS EVENT in any form (attendee, speaker, sponsor or otherwise). By taking part, you will be telling MS Israel it’s OK to piss all over the community that they are quietly suffocating anyway.

    The MVP case

    MS Israel has managed to screw the MVP program as well. MS MVPs (I’m one) have had a tough time here in Israel the past couple of years, ever since Yosi Taguri left the blue badge ranks; there has been no real community leader left. Whoever runs things right now has their eyes and minds set elsewhere, with the software MVP community far from mind and heart. No special MVP events (except a couple of small ones this year). No real MVP leadership happens here, and the MVP MEA lead (Ruari) being on a remote line is not really what’s needed. “MVP? What’s that?” I’m sure many MS Israel employees would say. Exactly my point.

    Last word

    I’ve been disappointed by the MS machine for a while now, but their slowness to realize what real community means in the past couple of years really turns me off. Maybe it’s time to move on.
Maybe I shouldn’t be chasing people at MS Israel begging for a room to host the Agile Israel user group. Maybe it’s time to say a big bye bye and start looking at a life a bit more disconnected.

    Read the article

  • Subterranean IL: Custom modifiers

    - by Simon Cooper
    In IL, volatile is an instruction prefix used to set a memory barrier at that instruction. In C#, however, volatile is applied to a field to indicate that all accesses to that field should be prefixed with volatile. As I mentioned in my previous post, this means that the field definition needs to store this information somehow, as such a field could be accessed from another assembly. However, IL does not have a concept of a 'volatile field'. How is this information stored?
    Attributes
    The standard way of solving this would be to apply a VolatileAttribute or similar to the field; this extra metadata would notify the C# compiler that all loads and stores to that field should use the volatile prefix. However, there is a problem with this approach, namely, the .NET C++ compiler. C++ allows methods to be overloaded using qualifiers, like volatile or const, on the parameters; this is perfectly legal C++:

        public ref class VolatileMethods {
            void Method(int *i) {}
            void Method(volatile int *i) {}
        };

    If volatile were specified using a custom attribute, the VolatileMethods class wouldn't be compilable to IL, as there would be nothing to differentiate the two methods from each other. This is where custom modifiers come in.
    Custom modifiers
    Custom modifiers are similar to custom attributes, but instead of being applied to an IL element separately from its declaration, they are embedded within the field or parameter's type signature itself. The VolatileMethods class would be compiled to the following IL:

        .class public VolatileMethods {
            .method public instance void Method(int32* i) {}
            .method public instance void Method(
                int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)* i) {}
        }

    The modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile) is the custom modifier. This adds a TypeDef or TypeRef token to the signature of the field or parameter, and even though custom modifiers are mostly ignored by the CLR when it's executing the program, they allow methods and fields to be overloaded in ways that wouldn't be possible using attributes. Because the modifiers are part of the signature, they need to be fully specified when calling such a method in IL:

        call instance void Method(
            int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)*)

    There are two ways of applying modifiers: modreq specifies required modifiers (like IsVolatile), and modopt specifies optional modifiers that can be ignored by compilers (like IsLong or IsConst). The types specified as the modifier argument are simple placeholders; if you have a look at the definitions of IsVolatile and IsLong, they are completely empty. They exist solely to be referenced by a modifier. Custom modifiers are used extensively by the C++ compiler to specify concepts that aren't expressible in IL, but that still need to be taken into account when calling method overloads.
    C++ and C#
    That's all very well and good, but how does this affect C#? Well, the C++ compiler uses modreq(IsVolatile) to specify volatility on both method parameters and fields, as it would be slightly odd to have the same concept represented using a modifier or an attribute depending on what it was applied to. Once you've compiled your C++ project, it can be referenced and used from C#, so the C# compiler has to recognise the modreq(IsVolatile) custom modifier applied to fields, and vice versa.
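    To make this concrete from the C# side, here is a minimal sketch (the Worker class and its stop field are invented for illustration; the IL in the comments is what the C# compiler emits for a volatile field):

        class Worker
        {
            // The C# compiler bakes IsVolatile into the field's signature:
            //   .field private bool modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile) stop
            // and prefixes every load and store of the field with the 'volatile.' instruction.
            private volatile bool stop;

            public void Cancel()
            {
                stop = true;     // emitted as a 'volatile.'-prefixed stfld
            }

            public bool ShouldStop()
            {
                return stop;     // emitted as a 'volatile.'-prefixed ldfld
            }
        }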
    So, even though you can't overload fields or parameters with volatile in C#, volatile needs to be expressed using a custom modifier rather than an attribute to guarantee correct interoperability and behaviour with any C++ dlls that happen to come along. Next up: a closer look at attributes, and how certain attributes compile in unexpected ways.

    Read the article

  • Replacing ASP.NET Forms Authentication with WIF Session Authentication (for the better)

    - by Your DisplayName here!
    ASP.NET Forms Authentication and WIF Session Authentication (which has *nothing* to do with ASP.NET sessions) are very similar. Both inspect incoming requests for a special cookie that contains identity information; if that cookie is present it gets validated, and if that is successful, the identity information is made available to the application via HttpContext.User/Thread.CurrentPrincipal. The main difference between the two is the identity-to-cookie serialization engine that sits below. FormsAuth can only store the name of the user and an additional UserData string; it is limited to a single cookie and hardcoded to protection via the machine key. WIF session authentication in turn has these additional features:
    - Can serialize a complete ClaimsPrincipal (including claims) to the cookie(s).
    - Has a cookie overflow mechanism when data gets too big. In total it can create up to 8 cookies (4 KB each) per domain (not that I would recommend round-tripping that much data).
    - Supports server-side caching (which is an extensible mechanism).
    - Has an extensible mechanism for protection (DPAPI by default, RSA as an option for web farms, and machine-key-based protection is coming in .NET 4.5).
    So in other words – session authentication is the superior technology, and if done cleverly enough you can replace FormsAuth without any changes to your application code. The only features missing are the redirect mechanism to a login page and an easy-to-use API to set authentication cookies. But that’s easy to add ;)
    FormsSessionAuthenticationModule
    This module is a subclass of the standard WIF session module, adding the following features:
    - Handles EndRequest to do the redirect on 401s to the login page configured for FormsAuth.
    - Reads the FormsAuth cookie name, cookie domain, timeout and require-SSL settings to configure the module accordingly.
    - Implements sliding expiration if configured for FormsAuth. It also uses the same algorithm as FormsAuth to calculate when the cookie needs renewal.
    - Implements caching of the principal on the server side (aka session mode) if configured in an AppSetting.
    - Supports claims transformation via a ClaimsAuthenticationManager.
    As you can see, the whole module is designed to easily replace the FormsAuth mechanism. Simply set the authentication mode to None and register the module. In the spirit of the FormsAuthentication class, there is now also a SessionAuthentication class with the same methods and signatures (e.g. SetAuthCookie and SignOut), so the rest of your application code should not be affected (a short usage sketch follows at the end of this post).
    In addition, the session module looks for an HttpContext item called “NoRedirect”. If that exists, the redirect to the login page will *not* happen; instead the 401 is passed back to the client. Very useful if you are implementing services or web APIs where you want the actual status code to be preserved. A corresponding UnauthorizedResult is provided that gives you easy access to the context item.
    The download contains a sample app, the module and an inspector for session cookies and tokens. Let’s hope that in .NET 4.5 such a module comes out of the box. HTH
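    As mentioned above, SessionAuthentication mirrors the FormsAuthentication API, so a minimal usage sketch (the exact signatures are assumed from that parity, and "alice" is of course illustrative) could look like this:

        // Issue the session cookie after validating credentials,
        // in place of FormsAuthentication.SetAuthCookie:
        SessionAuthentication.SetAuthCookie("alice", false);

        // Sign out, in place of FormsAuthentication.SignOut:
        SessionAuthentication.SignOut();

        // For a service or web API endpoint: suppress the redirect to the
        // login page so the client receives the raw 401 status code.
        // The module only checks for the item's presence, so the value is irrelevant:
        HttpContext.Current.Items["NoRedirect"] = true;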

    Read the article

  • Inside Red Gate - Experimenting In Public

    - by Simon Cooper
    Over the next few weeks, we'll be performing experiments on SmartAssembly to confirm or refute various hypotheses we have about how people use the product, what is stopping them from using it to its full extent, and what we can change to make it more useful and easier to use. Some of these experiments can be done within the team, some within Red Gate, and some need to be done on external users.
    External testing
    Some external testing can be done using standard usability tests and surveys. However, there are some hypotheses that can only be tested by building a version of SmartAssembly with some things in the UI or implementation changed. We'll then be able to look at how the experimental build is used compared to the 'mainline' build, which forms our baseline or control group, and use this data to confirm or refute the relevant hypotheses. However, there are several issues we need to consider before running experiments using separate builds:
    Ideally, the user wouldn't know they're running an experimental SmartAssembly. We don't want users to use the experimental build like it's an experimental build; we want them to use it like it's the real mainline build. Only then will we get valid, useful, and informative data concerning our hypotheses.
    There's no point running the experiments if we can't find out what happens after the download. To confirm or refute some of our hypotheses, we need to find out how the tool is used once it is installed. Fortunately, we've applied feature-usage reporting to the SmartAssembly codebase itself to provide us with that information. Of course, this makes the experimental data conditional on the user agreeing to send that data back to us in the first place. Unfortunately, even though this limits the amount of useful data we'll be getting back, and possibly skews the data, there's not much we can do about it; we don't collect feature-usage data without the user's consent. Looks like we'll simply have to live with this.
    What if the user tries to buy the experiment? This is something that isn't really covered by the Lean Startup book: how do you support users who give you money for an experiment? If the experiment is a new feature, and the user buys a license for SmartAssembly based on that feature, then what do we do if we later decide to pivot and scrap that feature? We've either got to spend time and money bringing that feature up to production quality and into the mainline anyway, or we've got disgruntled customers. Either way is bad. Again, there's not really any good solution to this.
    Similarly, what if we've removed some features for an experiment and a potential new user downloads the experimental build? (As I said above, there's no indication the build is an experimental build, as we want to see what users really do with it.) The crucial feature they need is missing, causing a bad trial experience, a lost potential customer, and a lost chance to help the customer with their problem. Again, this is something not really covered by the Lean Startup book, and something that doesn't have a good solution.
    So, some tricky issues there, not all of them with nice easy answers. It turns out the practicalities of running Lean Startup experiments are more complicated than they first seem!

    Read the article

  • WIF, ASP.NET 4.0 and Request Validation

    - by Your DisplayName here!
    Since the response of a WS-Federation sign-in request contains XML, the ASP.NET built-in request validation will trigger an exception. To solve this, request validation needs to be turned off for pages receiving such a response message. Starting with ASP.NET 4.0 you can plug in your own request validation logic. This allows letting WS-Federation messages through, while applying all standard request validation to all other requests. The WIF SDK (v4) contains a sample validator that does exactly that:

        public class WSFedRequestValidator : RequestValidator
        {
            protected override bool IsValidRequestString(
                HttpContext context,
                string value,
                RequestValidationSource requestValidationSource,
                string collectionKey,
                out int validationFailureIndex)
            {
                validationFailureIndex = 0;

                if (requestValidationSource == RequestValidationSource.Form &&
                    collectionKey.Equals(
                        WSFederationConstants.Parameters.Result,
                        StringComparison.Ordinal))
                {
                    SignInResponseMessage message =
                        WSFederationMessage.CreateFromFormPost(context.Request)
                            as SignInResponseMessage;

                    if (message != null)
                    {
                        return true;
                    }
                }

                return base.IsValidRequestString(
                    context,
                    value,
                    requestValidationSource,
                    collectionKey,
                    out validationFailureIndex);
            }
        }

    Register this validator via web.config:

        <httpRuntime requestValidationType="WSFedRequestValidator" />
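    Note that if the validator type lives in a namespace, or in a separate assembly rather than App_Code, the registration needs the namespace-qualified (and, where applicable, assembly-qualified) type name. A hypothetical example, with MyApp standing in for your own names:

        <httpRuntime requestValidationType="MyApp.Security.WSFedRequestValidator, MyApp" />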

    Read the article

  • Some Original Expressions

    - by Phil Factor
    Guest Editorial for the Simple-Talk newsletter
    In a guest editorial for the Simple-Talk Newsletter, Phil Factor wonders if we are still likely to find some more novel and unexpected ways of using the newer features of Transact-SQL: or maybe in some features that have always been there!
    There can be a great deal of fun to be had in trying out recent features of SQL expressions to see if they provide new functionality. It is surprisingly rare to find things that couldn't be done before, if in a different and more cumbersome way; but it is great to experiment, or to read of someone else making that discovery. One such recent feature is the 'table value constructor', or 'VALUES constructor', that managed to get into SQL Server 2008 from Standard SQL. This allows you to create derived tables of up to 1000 rows neatly within SELECT statements that consist of lists of row values. E.g.:

        SELECT Old_Welsh, number
        FROM (VALUES
            ('Un', 1), ('Dou', 2), ('Tri', 3), ('Petuar', 4), ('Pimp', 5),
            ('Chwech', 6), ('Seith', 7), ('Wyth', 8), ('Nau', 9), ('Dec', 10)
        ) AS WelshWordsToTen (Old_Welsh, number)

    These values can be expressions that return single values, including, surprisingly, subqueries. You can use this device to create views, or in the USING clause of a MERGE statement. Joe Celko covered this here and here. It can become extraordinarily handy once one gets into the way of thinking in these terms, and I've rewritten a lot of routines to use the constructor. The old way of using UNION can be used in the same way, but is a little slower and more long-winded.
    The use of scalar SQL subqueries as expressions in a VALUES constructor, then applied to a MERGE, has got me thinking. It looks very clever, but what use could one put it to? I haven't seen anything yet that couldn't be done almost as simply in SQL Server 2000, but I'm hopeful that someone will come up with a way of solving a tricky problem with it, just as a freak of the XML syntax forever made the in-line production of delimited lists from an expression easy, or a weird XML pirouette could do an elegant pivot-table rotation.
    It is in this sort of experimentation that the community of users can make a real contribution. The dissemination of techniques such as the Number, or Tally, table, or the unconventional ways that the UPDATE statement can be used, has been rapid thanks to articles and blogs. However, there is plenty to be done to explore some of the less obvious features of Transact-SQL. Even some of the features introduced in SQL Server 2000 are hardly well known. Certain operations on data are still awkward to perform in Transact-SQL, but we mustn't, I think, be too ready to state that certain things can only be done in the application layer, or using a CLR routine. With the vast array of features in the product, and the tools that surround it, I feel that there is generally a way of getting tricky things done. Or should we just stick to our lasts and push anything difficult out into procedural code? I'd love to know your views.

    Read the article

  • Is inconsistent formatting a sign of a sloppy programmer?

    - by dreza
    I understand that everyone has their own style of programming and that you should be able to read other people's styles and accept them for what they are. However, would one be considered a sloppy programmer if one's style of coding was inconsistent against whatever standard one was working to? Some examples of inconsistencies might be:
    - Sometimes naming private variables with _ and sometimes not
    - Varying indentation within code blocks
    - Not lining braces up, i.e. in the same column, when using the new-line brace style
    - Spacing that is not always consistent around operators, i.e. p=p+1, p+=1, and other times p =p+1 or p = p + 1, etc.
    Is this even something that, as a programmer, I should be concerned with addressing? Or is it such minor nit-picking that at the end of the day I should just not worry about it, and worry instead about what the end user sees and whether the code works, rather than how it looks while working? Is it sloppy programming or just over-obsessive nit-picking?
    EDIT: After some excellent comments I realized I may have left some information out of my question. This question came about after reviewing a colleague's code check-in, noticing some of these things, and then realizing that I've seen these kinds of inconsistencies in previous check-ins. It then got me thinking about my own code, and whether I do the same things, and I noticed that I typically don't. I'm not suggesting in this question that his technique is either bad or good, or that his way of doing things is right or wrong.
    EDIT: To answer some queries with some more good feedback: the specific review occurred in Visual Studio 2010, programming in C#, so I don't think the editor would cause any issues. In fact it should only help, I would hope. Sorry if I left that piece of information out and it affects any current answers. I was trying to be a bit more generic in understanding whether this would be considered sloppy. To add an even more specific example of a code piece I saw during the review of the check-in:

        foreach(var block in Blocks)
        {
            // .. some other code in here
            foreach(var movement in movements)
            {
                movement.Moved.Zero();
                } // the un-formatted brace
        }

    Such a minor thing, I know, but many small things add up (???), and I did have to glance twice at the code at the time to see where everything lined up. Please note this code was formatted appropriately before this check-in.
    EDIT: After reading some great answers and varying thoughts, the summary I've taken from this is:
    - It's not necessarily a sign of a sloppy programmer; however, as programmers we have a duty (to ourselves and other programmers) to make the code as readable as possible to assist further ongoing development.
    - It can hint at inadequacies, which is something that can only be reviewed on a case-by-case (person-by-person) basis. There are many reasons why this might occur. They should be taken in context and worked through with the person or people involved, if reasonable. We have a duty to try and help all programmers become better programmers!
    - In the good old days, when development was done using good old Notepad (or another simple text-editing tool), this occurred much more frequently. However, we have the assistance of modern IDEs now, so although we shouldn't become OTT about this, it should still probably be addressed to some degree.
    - We as programmers vary in our standards, styles and approaches to solutions. However, it seems that in general we all take PRIDE in our work, and as a trait it is something that can set programmers apart. Making something to the best of our abilities, both internal (the code) and external (the end-user result), goes a long way towards giving us that big fat pat on the back that we may not go looking for, but that swells our hearts with pride.
    - And finally, to quote CrazyEddie from his post below: don't sweat the small stuff.

    Read the article

  • St. Louis Day of .NET 2010

    - by Scott Spradlin
    Register now at http://www.stlouisdayofdotnet.com/registration.aspx
    The Date
    This year's conference will be held on Friday and Saturday, August 20-21, 2010, at the Ameristar Conference Center in St. Charles, Missouri. Sessions will begin at 8:00 a.m. and run through 4:30 p.m. on both days. Registration and sign-in will open at 7:00 a.m. on Friday morning, and will run throughout the event.
    The Venue
    Based on the almost unanimous feedback from last year's event, we are very excited to bring our conference back to the Ameristar Conference Center. The Ameristar has worked with us to offer a great rate on their large suites, should you be traveling from out of town -- or are just interested in a night away from home. Attendees can book a suite at a discounted rate of only $139/night, which is a substantial discount from their standard rates. We encourage you to take the opportunity to hang around, spend the night, and enjoy the social events and networking opportunities that we have planned. If you are interested in taking advantage of the discounted hotel rate, you can reserve your room online at Ameristar's Online Registration Site, using the special offer code GDOTH10. You can also call the hotel's reservation number at (636) 940-4301 and let them know you are attending the St. Louis Day of .NET 2010 to receive your discounted rate.
    The Content
    All attendees will have access to over 80 technical sessions by many great regional and national technology experts, covering a wide range of .NET development topics. In addition to refreshments throughout the event, all attendees will be provided with breakfast and lunch on both days of the conference. You will find sessions on many of the most current .NET development topics, including:
    - Visual Studio 2010
    - Silverlight 4.0
    - Windows Phone 7 Series Development
    - ASP.NET MVC
    - DotNetNuke
    - SharePoint 2010
    - Architecture
    - Windows Presentation Foundation (WPF)
    - And much, much more...
    This year's event will also include many informal "Open Space" sessions where all attendees with similar interests can discuss current trends or issues they are facing in today's real-world development environments. Finally, all attendees are invited to a social networking event at the HOME Nightclub at the Ameristar, which will be held on the Friday evening of the conference.
    The Cost
    The cost of this year's conference is $200 per attendee. However, for a limited time we are offering a $75 discount for early registrants. To take advantage of this discounted rate, you must register on our site prior to July 10, 2010. We accept Visa, MasterCard, and American Express. In addition, this year a single user of our site can easily register multiple attendees at once. To register, please visit the official St. Louis Day of .NET site at www.stldodn.com, and click on the "Registration" tab.
    For More Information
    For the most up-to-the-minute information on the event, please follow us online:
    - Twitter: @stldodn
    - Facebook: http://www.facebook.com/stldodn
    We strongly encourage you to share this email, as well as the attached flier, with your peers and colleagues, and anyone else you think might be interested in this exciting event. If you have any questions regarding registration, you can email us at [email protected] and we will be happy to address them.
    Sponsors
    We are extremely thankful to the many great sponsors who are partnering with us this year to help make the St. Louis Day of .NET 2010 a huge success. (There are still sponsorship opportunities available. For complete information, visit the sponsor page on the web site.)

    Read the article

  • DotNetNuke 7.0 Only Weeks Away!

    - by sbwalker
    The software industry moves at a lightning pace, and it is only through constant focus and continuous investment that a software product can remain both stable and relevant over the long term. As we approach the 10-year anniversary of the DotNetNuke platform, it seems only fitting that we are on the verge of announcing yet another significant product milestone. DotNetNuke 7.0 is just around the corner and represents a bold step forward for our Content Management Platform, including substantial business productivity enhancements, investments in web platform relevance, and a significant overhaul and modernization of the user interface and user experience.
    It has been five months since I posted the announcement that the next major version of the platform was going to be DotNetNuke 7.0. This announcement created tremendous excitement and anticipation in the DotNetNuke community, as major version increments have always been utilized as an opportunity to introduce revolutionary new product features and capabilities. After months of intense product development, the finish line is finally in sight. With that, I am pleased to announce that we released a Release Candidate (RC) of DotNetNuke 7.0 yesterday. You can download the RC from our project page on CodePlex.
    A Release Candidate represents a software version which is very near to "release" quality. So although we will not be officially endorsing the RC for production use, or providing an official upgrade path, it does represent a significant milestone in our software development efforts (if you are looking for a more detailed explanation of our software release terminology, I would encourage you to read the blog written by co-founder Joe Brinkman titled "What's In A Name?").
    Modernizing a software platform does have its share of challenges from a backward-compatibility perspective and, as usual, we are taking great care to ensure a seamless upgrade path for our customers. In order to remain relevant and progressive, you need to be aware that DotNetNuke 7.0 has adopted a new set of baseline infrastructure requirements, including ASP.NET 4.0. As a result we are encouraging all major stakeholders in the ecosystem (module developers, designers, partners, customers, etc.) to take the opportunity to install the RC in their own local environments. This is the last opportunity to let us know about any final issues which may need to be addressed prior to final release. Mark your calendars now… the expected public release date (RTM) for DotNetNuke 7.0 will be Wednesday, November 28th.
    On a side note, we expect to release a 6.2.5 maintenance version today. This release contains some high-priority product quality improvements as well as security patches for some vulnerabilities reported through our standard ecosystem channels. As a result we will be encouraging all of our customers to upgrade to the 6.2.5 release as soon as it is available.
    I hope everyone is as excited as I am about the upcoming DotNetNuke 7.0 release. Please take the opportunity over the next week to put the new platform through its paces. Remember, only through our collective efforts can we ensure that this release has the greatest market impact of any DotNetNuke release to date.

    Read the article
