Search Results

Search found 10235 results on 410 pages for 'inner loop'.


  • SQL SERVER – 4 Tips for ETL Software IDE Developers

    - by pinaldave
    In a previous blog, I introduced the notion of Semantic Types. To an end-user, a seamlessly integrated semantic typing engine significantly increases the ease of use of an ETL IDE (integrated development environment, or developer studio). This led me to think about other ease-of-use issues I have encountered while building ETL applications. When I get stumped while programming, I find myself asking variations of these questions: “How do I…?” “Now what?” “Why isn’t this working?” “Why do I have to redo the work I just did?” It seems to me that a good ETL IDE will anticipate these questions and seek to answer them before they are even asked. So here are my tips to help software vendors build developer IDEs that actually make development easier. How do I…? While developing an ETL application, have you ever asked yourself: “How do I set up the connection to my SQL Server database?”, “How do I import my table definitions from Access?”, etc. An easy answer might be “read the manual” but sometimes product manuals are not robust or easily accessible. So, integrating robust how-to instructions directly into your ETL studio would help users get the information they need at the time they need it. Now what? IDEs in general know where you last clicked or performed an action using an input device such as a keyboard; so they should be able to reasonably predict the design context you are in and suggest the next steps accordingly. Context-sensitive suggestions based on the state of the user’s work will help users move forward in ETL application development. Why isn’t this working? Or why do I have to wait till I compile to be told about a critical design issue? If an ETL IDE is smart enough to signal to users what in their design structures is left to be completed or has been completed incorrectly, then the developer can spend much less time in the design-compile-error-correct loop. Just-in-time validation helps users detect and correct programming errors earlier in the ETL development life cycle. Why do I have to redo the work I just did? In ETL development, schemas, transformation rules, connectivity objects, etc., can be reused in various situations. Using mouse-clicks to build and manage libraries of reusable design objects implies that the application development effort should decrease over time as the library acquires more objects. I met a great company at SQL PASS that is trying to address many of these usability issues. Check them out at www.expressor-software.com. What other ease-of-use suggestions do you have for ETL software vendors? Please post your valuable comments. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ETL

    Read the article

  • Java Spotlight Episode 139: Mark Heckler and José Pereda on JES based Energy Monitoring @MkHeck @JPeredaDnr

    - by Roger Brinkley
    Interview with Mark Heckler and José Pereda on using JavaSE Embedded with the Java Embedded Suite on a RaspberryPI along with a JavaFX client to monitor an energy production system and their JavaOne Tutorial- Java Embedded EXTREME MASHUPS: Building self-powering sensor nets for the IoT Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes. Show Notes News Java Virtual Developer Day Session Videos Available JavaFX Maven Plugin 2.0 Released JavaFX Scene Builder 1.1 build b28 FXForm 2 release 0.2.2 OpenJDK8/Zero cross compile build for Foundation model HSAIL-based GPU offload: the Quest for Java Performance Begins Progress on Moving to Gradle Java EE 7 Launch Keynote Replay Java EE 7 Technical Breakouts Replay Java EE 7 support in NetBeans 7.3.1 Java EE 7 support in Eclipse 4.3 Java Magazine - May/June Events Jul 16-19, Uberconf, Denver, USA Jul 22-24, JavaOne Shanghai, China Jul 29-31, JVM Language Summit, Santa Clara Sep 11-12, JavaZone, Oslo, Norway Sep 19-20, Strange Loop, St. Louis Sep 22-26 JavaOne San Francisco 2013, USA Feature Interview Mark Heckler is an Oracle Corporation Java/Middleware/Core Tech Engineer with development experience in numerous environments. He has worked for and with key players in the manufacturing, emerging markets, retail, medical, telecom, and financial industries to develop and deliver critical capabilities on time and on budget. Currently, he works primarily with large government customers using Java throughout the stack and across the enterprise. He also participates in open-source development at every opportunity, being a JFXtras project committer and developer of DialogFX, MonologFX, and various other projects. When Mark isn't working with Java, he enjoys writing about his experiences at the Java Jungle website (https://blogs.oracle.com/javajungle/) and on Twitter (@MkHeck). José Pereda is a Structural Engineer working in the School of Engineers in the University of Valladolid in Spain for more than 15 years, and his passion is related to applying programming to solve real problems. Being involved with Java since 1999, José shares his time between JavaFX and the Embedded world, developing commercial applications and open source projects (https://github.com/jperedadnr), and blogging (http://jperedadnr.blogspot.com.es/) or tweeting (@JPeredaDnr) of both. What’s Cool AquaFX 0.1 - Mac OS X skin for JavaFX by Claudine Zillmann DromblerFX adds a docking framework Part 2 of Gerrit’s taming the Nashorn for writing JavaFX apps in Javascript Tool from mihosoft called JSelect for quickly switching JDKs Apache Maven Javadoc Plugin 2.9.1 Released Proposal: Java Concurrency Stress tests (jcstress) Slide-free Code-driven session at SV JUG JavaOne approvals/rejects gone out

    Read the article

  • Metro: Using Templates

    - by Stephen.Walther
    The goal of this blog post is to describe how templates work in the WinJS library. In particular, you learn how to use a template to display both a single item and an array of items. You also learn how to load a template from an external file. Why use Templates? Imagine that you want to display a list of products in a page. The following code is bad: var products = [ { name: "Tesla", price: 80000 }, { name: "VW Rabbit", price: 200 }, { name: "BMW", price: 60000 } ]; var productsHTML = ""; for (var i = 0; i < products.length; i++) { productsHTML += "<h1>Product Details</h1>" + "<div>Product Name: " + products[i].name + "</div>" + "<div>Product Price: " + products[i].price + "</div>"; } document.getElementById("productContainer").innerHTML = productsHTML; In the code above, an array of products is displayed by creating a for..next loop which loops through each element in the array. A string which represents a list of products is built through concatenation. The code above is a designer’s nightmare. You cannot modify the appearance of the list of products without modifying the JavaScript code. A much better approach is to use a template like this: <div id="productTemplate"> <h1>Product Details</h1> <div> Product Name: <span data-win-bind="innerText:name"></span> </div> <div> Product Price: <span data-win-bind="innerText:price"></span> </div> </div> A template is simply a fragment of HTML that contains placeholders. Instead of displaying a list of products by concatenating together a string, you can render a template for each product. Creating a Simple Template Let’s start by using a template to render a single product. The following HTML page contains a template and a placeholder for rendering the template: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <!-- Product Template --> <div id="productTemplate"> <h1>Product Details</h1> <div> Product Name: <span data-win-bind="innerText:name"></span> </div> <div> Product Price: <span data-win-bind="innerText:price"></span> </div> </div> <!-- Place where Product Template is Rendered --> <div id="productContainer"></div> </body> </html> In the page above, the template is defined in a DIV element with the id productTemplate. The contents of the productTemplate are not displayed when the page is opened in the browser. The contents of a template are automatically hidden when you convert the productTemplate into a template in your JavaScript code. Notice that the template uses data-win-bind attributes to display the product name and price properties. You can use both data-win-bind and data-win-bindsource attributes within a template. To learn more about these attributes, see my earlier blog post on WinJS data binding: http://stephenwalther.com/blog/archive/2012/02/26/windows-web-applications-declarative-data-binding.aspx The page above also includes a DIV element named productContainer. The rendered template is added to this element. 
Here’s the code for the default.js script which creates and renders the template: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var product = { name: "Tesla", price: 80000 }; var productTemplate = new WinJS.Binding.Template(document.getElementById("productTemplate")); productTemplate.render(product, document.getElementById("productContainer")); } }; app.start(); })(); In the code above, a single product object is created with the following line of code: var product = { name: "Tesla", price: 80000 }; Next, the productTemplate element from the page is converted into an actual WinJS template with the following line of code: var productTemplate = new WinJS.Binding.Template(document.getElementById("productTemplate")); The template is rendered to the templateContainer element with the following line of code: productTemplate.render(product, document.getElementById("productContainer")); The result of this work is that the product details are displayed: Notice that you do not need to call WinJS.Binding.processAll(). The Template render() method takes care of the binding for you. Displaying an Array in a Template If you want to display an array of products using a template then you simply need to create a for..next loop and iterate through the array calling the Template render() method for each element. (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var products = [ { name: "Tesla", price: 80000 }, { name: "VW Rabbit", price: 200 }, { name: "BMW", price: 60000 } ]; var productTemplate = new WinJS.Binding.Template(document.getElementById("productTemplate")); var productContainer = document.getElementById("productContainer"); var i, product; for (i = 0; i < products.length; i++) { product = products[i]; productTemplate.render(product, productContainer); } } }; app.start(); })(); After each product in the array is rendered with the template, the result is appended to the productContainer element. No changes need to be made to the HTML page discussed in the previous section to display an array of products instead of a single product. The same product template can be used in both scenarios. Rendering an HTML TABLE with a Template When using the WinJS library, you create a template by creating an HTML element in your page. One drawback to this approach of creating templates is that your templates are part of your HTML page. In order for your HTML page to validate, the HTML within your templates must also validate. This means, for example, that you cannot enclose a single HTML table row within a template. The following HTML is invalid because you cannot place a TR element directly within the body of an HTML document:   <!-- Product Template --> <tr> <td data-win-bind="innerText:name"></td> <td data-win-bind="innerText:price"></td> </tr> This template won’t validate because, in a valid HTML5 document, a TR element must appear within a THEAD or TBODY element. Instead, you must create the entire TABLE element in the template. 
The following HTML page illustrates how you can create a template which contains a TR element: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <!-- Product Template --> <div id="productTemplate"> <table> <tbody> <tr> <td data-win-bind="innerText:name"></td> <td data-win-bind="innerText:price"></td> </tr> </tbody> </table> </div> <!-- Place where Product Template is Rendered --> <table> <thead> <tr> <th>Name</th><th>Price</th> </tr> </thead> <tbody id="productContainer"> </tbody> </table> </body> </html>   In the HTML page above, the product template includes TABLE and TBODY elements: <!-- Product Template --> <div id="productTemplate"> <table> <tbody> <tr> <td data-win-bind="innerText:name"></td> <td data-win-bind="innerText:price"></td> </tr> </tbody> </table> </div> We discard these elements when we render the template. The only reason that we include the TABLE and THEAD elements in the template is to make the HTML page validate as valid HTML5 markup. Notice that the productContainer (the target of the template) in the page above is a TBODY element. We want to add the rows rendered by the template to the TBODY element in the page. The productTemplate is rendered in the default.js file: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var products = [ { name: "Tesla", price: 80000 }, { name: "VW Rabbit", price: 200 }, { name: "BMW", price: 60000 } ]; var productTemplate = new WinJS.Binding.Template(document.getElementById("productTemplate")); var productContainer = document.getElementById("productContainer"); var i, product, row; for (i = 0; i < products.length; i++) { product = products[i]; productTemplate.render(product).then(function (result) { row = WinJS.Utilities.query("tr", result).get(0); productContainer.appendChild(row); }); } } }; app.start(); })(); When the product template is rendered, the TR element is extracted from the rendered template by using the WinJS.Utilities.query() method. Next, only the TR element is added to the productContainer: productTemplate.render(product).then(function (result) { row = WinJS.Utilities.query("tr", result).get(0); productContainer.appendChild(row); }); I discuss the WinJS.Utilities.query() method in depth in a previous blog entry: http://stephenwalther.com/blog/archive/2012/02/23/windows-web-applications-query-selectors.aspx When everything gets rendered, the products are displayed in an HTML table: You can see the actual HTML rendered by looking at the Visual Studio DOM Explorer window:   Loading an External Template Instead of embedding a template in an HTML page, you can place your template in an external HTML file. It makes sense to create a template in an external file when you need to use the same template in multiple pages. For example, you might need to use the same product template in multiple pages in your application. The following HTML page does not contain a template. 
It only contains a container that will act as a target for the rendered template: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <!-- Place where Product Template is Rendered --> <div id="productContainer"></div> </body> </html> The template is contained in a separate file located at the path /templates/productTemplate.html:   Here’s the contents of the productTemplate.html file: <!-- Product Template --> <div id="productTemplate"> <h1>Product Details</h1> <div> Product Name: <span data-win-bind="innerText:name"></span> </div> <div> Product Price: <span data-win-bind="innerText:price"></span> </div> </div> Notice that the template file only contains the template and not the standard opening and closing HTML elements. It is an HTML fragment. If you prefer, you can include all of the standard opening and closing HTML elements in your external template – these elements get stripped away automatically: <html> <head><title>product template</title></head> <body> <!-- Product Template --> <div id="productTemplate"> <h1>Product Details</h1> <div> Product Name: <span data-win-bind="innerText:name"></span> </div> <div> Product Price: <span data-win-bind="innerText:price"></span> </div> </div> </body> </html> Either approach – using a fragment or using a full HTML document  — works fine. Finally, the following default.js file loads the external template, renders the template for each product, and appends the result to the product container: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var products = [ { name: "Tesla", price: 80000 }, { name: "VW Rabbit", price: 200 }, { name: "BMW", price: 60000 } ]; var productTemplate = new WinJS.Binding.Template(null, { href: "/templates/productTemplate.html" }); var productContainer = document.getElementById("productContainer"); var i, product, row; for (i = 0; i < products.length; i++) { product = products[i]; productTemplate.render(product, productContainer); } } }; app.start(); })(); The path to the external template is passed to the constructor for the Template class as one of the options: var productTemplate = new WinJS.Binding.Template(null, {href:"/templates/productTemplate.html"}); When a template is contained in a page then you use the first parameter of the WinJS.Binding.Template constructor to represent the template – instead of null, you pass the element which contains the template. When a template is located in an external file, you pass the href for the file as part of the second parameter for the WinJS.Binding.Template constructor. Summary The goal of this blog entry was to describe how you can use WinJS templates to render either a single item or an array of items to a page. We also explored two advanced topics. You learned how to render an HTML table by extracting the TR element from a template. You also learned how to place a template in an external file.

    Read the article

  • SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database

    - by Pinal Dave
    This is the third post in the series of blog posts I am writing about NuoDB. NuoDB is a very innovative and easy-to-use product. I can clearly see how one can scale-out NuoDB with so much ease and confidence. In my very first blog post we discussed how we can install NuoDB (link), and in my second post I discussed how we can manage the NuoDB database transaction engines and storage managers with a few clicks (link). Note: You can Download NuoDB from here. In this post, we will learn how we can use the Explorer feature of NuoDB to do various SQL operations. NuoDB has a browser-based Explorer, which is very powerful and has many of the features any IDE would normally have. Let us see how it works in the following step-by-step tutorial. Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to the QuickStart screen. Make sure that you have created the sample database. If you have not created the sample database, click on Create Database and create it successfully. Now go to the NuoDB Explorer by clicking on the main tab, and it will ask you for your domain username and password. Enter the username as domain and the password as bird. Alternatively you can also enter the username as quickstart and the password as quickstart. Once you enter the password you will be able to see the databases. In our example we have installed the Sample Database, hence you will see the Test database in our Database Hierarchy screen. When you click on the database it will ask for the database login. Note that the Database Login is different from the Domain login and you will have to enter your database login over here. In our case the database username is dba and the password is goalie. Once you enter a valid username and password it will display your database. Further expand your database and you will notice various objects in your database. Once you explore the various objects, select any table and click on Open. When you click on Execute, it will display the SQL script to select the data from the table. The autogenerated script displays the entire result set from the database. The NuoDB Explorer is very powerful and makes the life of developers very easy. If you click on List SQL Statements it will list all the available SQL statements right away in the Query Editor. You can see the popup window in the following image. Here is the cool thing for geeks: you can even click on Query Plan and it will display the text-based query plan as well. In the case of a simple SELECT, the query plan will be much simpler; however, when we write complex queries it will be very interesting. We can use the query plan tab for performance tuning of the database. Here is another feature: when we click on List Tables in NuoDB Explorer, it lists all the available tables in the query editor. This is very helpful when we are writing a long complex query. Here is a relatively complex example I have built using Inner Join syntax. Right below I have displayed the Query Plan. The query plan displays all the little details related to the query. Well, we just wrote a multi-table query and executed it against the NuoDB database. You can use the NuoDB Admin section and do various analyses of the query and its performance. NuoDB is a distributed database built on a patented emergent architecture with full support for SQL and ACID guarantees.  It allows you to add Transaction Engine processes to a running system to improve the performance of your system.  
You can also add a second Storage Engine to your running system for redundancy purposes.  Conversely, you can shut down processes when you don’t need the extra database resources. NuoDB also provides developers and administrators with a single intuitive interface for centrally monitoring deployments. If you have read my blog posts and have not tried out NuoDB, I strongly suggest that you download it today and catch up with the learnings with me. Trust me though the product is very powerful, it is extremely easy to learn and use. Reference: Pinal Dave (http://blog.sqlauthority.com)   Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB
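    The Inner Join example in the post was shown only as a screenshot, so here is a rough sketch of the kind of multi-table query described; the table and column names below are hypothetical and not from the original post, but since NuoDB supports standard SQL, an ANSI join like this should run in the Explorer:

        -- Hypothetical multi-table INNER JOIN of the kind described above.
        -- Substitute the tables and columns from your own sample database.
        SELECT c.CustomerName, o.OrderDate, o.TotalAmount
        FROM Customers AS c
        INNER JOIN Orders AS o
                ON o.CustomerId = c.CustomerId
        WHERE o.TotalAmount > 100
        ORDER BY o.OrderDate;

    Running a query like this and then switching to the Query Plan tab is where the text-based plan mentioned above becomes genuinely useful for tuning.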

    Read the article

  • SQL SERVER – Log File Growing for Model Database – model Database Log File Grew Too Big

    - by pinaldave
    After reading my earlier article SQL SERVER – master Database Log File Grew Too Big, I recently received an email from another reader asking why the log file of the model database grows every day when he is not carrying out any operation in the model database. As per the email, he is absolutely sure that he is doing nothing on his model database; he had used policy management to catch any T-SQL operation in the model database and there were none. This was indeed surprising to me. I sent a request for access to his server, to which he happily agreed, and within a minute we figured out the issue. He was taking a backup of the model database every night. When I explained the same to him, he did not believe it; so I quickly wrote down the following script. The results before and after the usage of the script were very clear. What is the model database? The model database is used as the template for all databases created on an instance of SQL Server. Any object you create in the model database will be automatically created in any subsequent user database created on the server. NOTE: Do not run this in a production environment. During the demo, the model database was in the full recovery model and only full backup operations were performed (no log backups). Before Backup Script Backup Script in loop DECLARE @FLAG INT SET @FLAG = 1 WHILE(@FLAG < 1000) BEGIN BACKUP DATABASE [model] TO  DISK = N'D:\model.bak' SET @FLAG = @FLAG + 1 END GO After Backup Script Why did this happen? The model database was in the full recovery model and taking a full backup is a logged operation. As there were no log backups and only full backups were performed on the model database, the size of the log file kept growing. Resolution: Change the recovery model of the model database from “Full” to “Simple”. Take a full backup of the model database “only” when you change something in the model database. Have you encountered a situation like this? If so, how did you resolve it? It will be interesting to know about your experience. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
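    As a companion to the resolution above, here is a small T-SQL sketch (not from the original post) showing how you might check the current recovery model of the model database and switch it to simple; as always, review it before running it against your own instance:

        -- Check the current recovery model of the model database
        SELECT name, recovery_model_desc
        FROM sys.databases
        WHERE name = 'model';

        -- Change the model database to the SIMPLE recovery model
        ALTER DATABASE [model] SET RECOVERY SIMPLE;

    After the change, the log of the model database should no longer grow from repeated full backups in the way described above.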

    Read the article

  • Debugging OWB generated SAP ABAP code executed through RFC

    - by Anil Menon
    Within OWB if you need to execute ABAP code using RFC you will have to use the SAP Function Module RFC_ABAP_INSTALL_AND_RUN. This function module is specified during the creation of the SAP source location. Usually in a Production environment a copy of this function module is used due to security restrictions. When you execute the mapping by using this Function Module you can’t see the actual ABAP code that is passed on to the SAP system. In case you want to take a look at the code that will be executed on the SAP system you need to use a custom Function Module in SAP. The easiest way to do this is to make a copy of the Function Module RFC_ABAP_INSTALL_AND_RUN and call it say Z_TEST_FM. Then edit the code of the Function Module in SAP as below FUNCTION Z_TEST_FM . DATA: BEGIN OF listobj OCCURS 20. INCLUDE STRUCTURE abaplist. DATA: END OF listobj. DATA: begin_of_line(72). DATA: line_end_char(1). DATA: line_length type I. DATA: lin(72). loop at program. append program-line to WRITES. endloop. ENDFUNCTION. Within OWB edit the SAP Location and use Z_TEST_FM as the “Execution Function Module” instead of  RFC_ABAP_INSTALL_AND_RUN. Then register this location. The Mapping you want to debug will have to be deployed. After deployment you can right click the mapping and click on “Start”.   After clicking start the “Input Parameters” screen will be displayed. You can make changes here if you need to. Check that the parameter BACKGROUND is set to “TRUE”. After Clicking “OK” the log for the execution will be displayed. The execution of Mappings will always fail when you use the above function module. Clicking on the icon “I” (information) the ABAP code will be displayed.   The ABAP code displayed is the code that is passed through the Function Module. You can also find the code by going through the log files on the server which hosts the OWB repository. The logs will be located under <OWB_HOME>/owb/log. Patch #12951045 is recommended while using the SAP Connector with OWB 11.2.0.2. For recommended patches for other releases please check with Oracle Support at http://support.oracle.com

    Read the article

  • Up in the Air: Team Oracle Play-by-Play

    - by Aaron Lazenby
    Yesterday, I had the amazing opportunity to fly along with Sean D. Tucker and Team Oracle. Leaving from the San Carlos airport, we did a 30-minute flight over the Pacific just south of the coastal town of Half Moon Bay. In that half hour, I rode through a massive 4G loop, survived a crushing hammerhead, and took control of the plane to perform a basic wing over (you can learn what the heck I'm talking about by visiting this website). I have lots of great video, but it's going to take me some time to make sense of it. For now, here's my Twitter-based play-by-play of yesterday's events. Many thanks to Sean D. Tucker and the whole crew (Ben and Ian, especially) for this great opportunity to fly with Team Oracle. Live tweets from @OracleProfit: I will be spending the afternoon in a stunt plane, upside down above the San Francisco bay. http://bit.ly/cwkrkI At the San Carlos airport. More than slightly freaked out. Shaking hands diminish texting ability. Slightly reassuring. http://yfrog.com/1qt61nj There go the doors to the photo plane... #teamoracle http://yfrog.com/58ywlj Sean D Tucker assures me: "The sky is a great place to be." Helpful, but I'm still nervous. #teamoracle "You get a parachute. He gets a harness." How was this decision made? #teamoracle The plane with @radu43 has returned. I'm up next... Couldn't help myself... drank a soda before flying. Mistake? We'll see... #teamoracle Advice of the day: "If you pull with two hands, you improve the chances of the chute deploying on the first try." Lovely. #teamoracle I feel so strange. But I flew a high performance airplane. And did an aerobatics move. Wild. #teamoracle "Flying ten feet off the ground, upside-down at 250 miles per hour isn't exciting to me." Sean D. Tucker #teamoracle "What is exciting to me is flying that perfect pattern, just like I imagined it in my head." Sean D. Tucker #teamoracle "You're going to sleep well tonight. You just carried four times your body weight." #teamoracle #gforce Just watched the #teamoracle plane take off for its flight home. I'm waiting for Caltrain. #undignifiedanticlimax Enough with the #teamoracle. Check http://blogs.oracle.com/profit for the video. Coming soon! 

    Read the article

  • Java Spotlight Episode 138: Paul Perrone on Life Saving Embedded Java

    - by Roger Brinkley
    Interview with Paul Perrone, founder and CEO of Perrone Robotics, on using Java Embedded to test autonomous vehicle operations for the Insurance Institute for Highway Safety that will save lives. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes. Show Notes News JDK 8 is Feature Complete Java SE 7 Update 25 Released What should the JCP be doing? 2013 Duke's Choice Award Nominations Another Quick update to Code Signing Article on OTN Events June 24, Austin JUG, Austin, TX June 25, Virtual Developer Day - Java, EMEA, 10AM CEST Jul 16-19, Uberconf, Denver, USA Jul 22-24, JavaOne Shanghai, China Jul 29-31, JVM Summit Language, Santa Clara Sep 11-12, JavaZone, Oslo, Norway Sep 19-20, Strange Loop, St. Louis Sep 22-26 JavaOne San Francisco 2013, USA Feature Interview Paul J. Perrone is founder/CEO of Perrone Robotics. Paul architected the Java-based general-purpose robotics and automation software platform known as “MAX”. Paul has overseen MAX’s application to rapidly field self-driving robotic cars, unmanned air vehicles, factory and road-side automation applications, and a wide range of advanced robots and automaton applications. He fielded a self-driving autonomous robotic dune buggy in the historic 2005 Grand Challenge race across the Mojave desert and a self-driving autonomous car in the 2007 Urban Challenge through a city landscape. His work has been featured in numerous televised and print media including the Discovery Channel, a theatrical documentary, scientific journals, trade magazines, and international press. Since 2008, Paul has also been working as the chief software engineer, CTO, and roboticist automating rock star Neil Young’s LincVolt, a 1959 Lincoln Continental retro-fitted as a fully autonomous extended range electric vehicle. Paul has been an engineer, author of books and articles on Java, frequent speaker on Java, and entrepreneur in the robotics and software space for over 20 years. He is a member of the Java Champions program, recipient of three Duke Awards including a Gold Duke and Lifetime Achievement Award, has showcased Java-based robots at five JavaOne keynotes, and is a frequent JavaOne speaker and show floor participant. He holds a B.S.E.E. from Rutgers University and an M.S.E.E. from the University of Virginia. What’s Cool Shenandoah: A pauseless GC for OpenJDK

    Read the article

  • Gravity stops when side-collision detected

    - by Adrian Marszalek
    Please, look at this GIF: The label on the animation says "Move button is pressed, then released". And you can see when it's pressed (and player's getCenterY() is above wall getCenterY()), gravity doesn't work. I'm trying to fix it since yesterday, but I can't. All methods are called from game loop. public void move() { if (left) { switch (game.currentLevel()) { case 1: for (int i = 0; i < game.lvl1.getX().length; i++) game.lvl1.getX()[i] += game.physic.xVel; break; } } else if (right) { switch (game.currentLevel()) { case 1: for (int i = 0; i < game.lvl1.getX().length; i++) game.lvl1.getX()[i] -= game.physic.xVel; break; } } } int manCenterX, manCenterY, boxCenterX, boxCenterY; //gravity stop public void checkCollision() { for (int i = 0; i < game.lvl1.getX().length; i++) { manCenterX = (int) game.man.getBounds().getCenterX(); manCenterY = (int) game.man.getBounds().getCenterY(); if (game.man.getBounds().intersects(game.lvl1.getBounds(i))) { boxCenterX = (int) game.lvl1.getBounds(i).getCenterX(); boxCenterY = (int) game.lvl1.getBounds(i).getCenterY(); if (manCenterY - boxCenterY > 0 || manCenterY - boxCenterY < 0) { game.man.setyPos(-2f); game.man.isFalling = false; } } } } //left side of walls public void colliLeft() { for (int i = 0; i < game.lvl1.getX().length; i++) { if (game.man.getBounds().intersects(game.lvl1.getBounds(i))) { if (manCenterX - boxCenterX < 0) { for (int i1 = 0; i1 < game.lvl1.getX().length; i1++) { game.lvl1.getX()[i1] += game.physic.xVel; game.man.isFalling = true; } } } } } //right side of walls public void colliRight() { for (int i = 0; i < game.lvl1.getX().length; i++) { if (game.man.getBounds().intersects(game.lvl1.getBounds(i))) { if (manCenterX - boxCenterX > 0) { for (int i1 = 0; i1 < game.lvl1.getX().length; i1++) { game.lvl1.getX()[i1] += -game.physic.xVel; game.man.isFalling = true; } } } } } public void gravity() { game.man.setyPos(yVel); } //not called from gameloop: public void setyPos(float yPos) { this.yPos += yPos; }

    Read the article

  • The softer side of BPM

    - by [email protected]
    BPM and RTD are great complementary technologies that together provide a much higher benefit than each of them separately. BPM covers the need for automating processes, making sure that there is uniformity, that rules and regulations are complied with and that the process runs smoothly and quickly processes the units flowing through it. By nature, this automation and unification can lead to a stricter, less flexible process. To avoid this problem it is common to encounter process definitions that include multiple conditional branches and human input to help direct processing in the direction that best applies to the current situation. This is where RTD comes into play. The selection of branches and conditions and the optimization of decisions is better left in the hands of a system that can measure the results of its decisions in a closed-loop fashion and make decisions based on the empirical knowledge accumulated through observing the running of the process. When designing a business process there are key places in which it may be beneficial to introduce RTD decisions. These are: Thresholds - whenever a threshold is used to determine the processing of a unit, there may be an opportunity to make the threshold "softer" by introducing an RTD decision based on predicted results. For example an insurance company process may have a total claim threshold to initiate an investigation. Instead of having that threshold, RTD could be used to help determine which claims to investigate based on the likelihood they are fraudulent, the cost of investigation and the effect on processing time. Human decisions - sometimes a process will let the human participants make decisions about flow. For example, a call center process may leave the escalation decision to the agent. While this has flexibility, it may produce undesired results and asymmetry in customer treatment that is not based on objective functions but on subjective reasoning by the agent. Instead, an RTD decision may be introduced to recommend escalation or other kinds of treatments. Content Selection - a process may include the use of messaging with customers. The selection of the most appropriate message to the customer given the content can be optimized with RTD. A/B Testing - a process may have optional paths for which it is not clear which populations they work better for. Rather than making an arbitrary selection, or a selection by committee, of the option deemed the best, RTD can be introduced to dynamically determine the best path for each unit. In summary, RTD can be used to make BPM-based process automation more dynamic and adaptable to the different situations encountered in processing, effectively making the automation softer and less rigid in its processing.

    Read the article

  • Unit testing is… well, flawed.

    - by Dewald Galjaard
    Hey someone had to say it. I clearly recall my first IT job. I was appointed Systems Co-coordinator for a leading South African retailer at store level. Don’t get me wrong, there is absolutely nothing wrong with an honest day’s labor and in fact I highly recommend it, however I’m obliged to refer to the designation cautiously; in reality all I had to do was monitor in-store prices and two UNIX front line controllers. If anything went wrong – I only had to phone it in… Luckily that wasn’t all I did. My duties extended to some other interesting annual occurrence – stock take. Despite a bit more curious affair, it was still a tedious process that took weeks of preparation and several nights to complete.  Then also I remember that no matter how elaborate our planning was, the entire exercise would be rendered useless if we couldn’t get the basics right – that being the act of counting. Sounds simple right? We’ll with a store which could potentially carry over tens of thousands of different items… we’ll let’s just say I believe that’s when I first became a coffee addict. In those days the act of counting stock was a very humble process. Nothing like we have today. A staff member would be assigned a bin or shelve filled with items he or she had to sort then count. Thereafter they had to record their findings on a complementary piece of paper. Every night I would manage several teams. Each team was divided into two groups - counters and auditors. Both groups had the same task, only auditors followed shortly on the heels of the counters, recounting stock levels, making sure the original count correspond to their findings. It was a simple yet hugely responsible orchestration of people and thankfully there was one fundamental and golden rule I could always abide by to ensure things run smoothly – No-one was allowed to audit their own work. Nope, not even on nights when I didn’t have enough staff available. This meant I too at times had to get up there and get counting, or have the audit stand over until the next evening. The reason for this was obvious - late at night and with so much to do we were prone to make some mistakes, then on the recount, without a fresh set of eyes, you were likely to repeat the offence. Now years later this rule or guideline still holds true as we develop software (as far removed as software development from counting stock may be). For some reason it is a fundamental guideline we’re simply ignorant of. We write our code, we write our tests and thus commit the same horrendous offence. Yes, the procedure of writing unit tests as practiced in most development houses today – is flawed. Most if not all of the tests we write today exercise application logic – our logic. They are based on the way we believe an application or method should/may/will behave or function. As we write our tests, our unit tests mirror our best understanding of the inner workings of our application code. Unfortunately these tests will therefore also include (or be unaware of) any imperfections and errors on our part. If your logic is flawed as you write your initial code, chances are, without a fresh set of eyes, you will commit the same error second time around too. Not even experience seems to be a suitable solution. It certainly helps to have deeper insight, but is that really the answer we should be looking for? Is that really failsafe? What about code review? Code review is certainly an answer. You could have one developer coding away and another (or team) making sure the logic is sound. 
The practice however has its obvious drawbacks. Firstly and mainly it is resource intensive and from what I’ve seen in most development houses, given heavy deadlines, this guideline is seldom adhered to. Hardly ever do we have the resources, money or time readily available. So what other options are out there? A quest to find some solution revealed a project by Microsoft Research called PEX. PEX is a framework which creates several test scenarios for each method or class you write, automatically. Think of it as your own personal auditor. Within a few clicks the framework will auto generate several unit tests for a given class or method and save them to a single project. PEX help to audit your work. It lends a fresh set of eyes to any project you’re working on and best of all; it is cost effective and fast. Check them out at http://research.microsoft.com/en-us/projects/pex/ In upcoming posts we’ll dive deeper into how it works and how it can help you.   Certainly there are more similar frameworks out there and I would love to hear from you. Please share your experiences and insights.

    Read the article

  • Mathematica Programming Language&ndash;An Introduction

    - by JoshReuben
    The Mathematica http://www.wolfram.com/mathematica/ programming model consists of a kernel computation engine (or grid of such engines) and a front-end of notebook instances that communicate with the kernel throughout a session. The programming model of Mathematica is incredibly rich & powerful – besides numeric calculations, it supports symbols (e.g. Pi, I, E) and control flow logic.   Obviously I could use this as a simple calculator: 5 * 10 --> 50 but this language is much more than that!   For example, I could use control flow logic & set up a simple infinite loop: x=1; While[x > 0, x = x + 1] Different brackets have different purposes: square brackets for function arguments: Cos[x]; round brackets for grouping: (1+2)*3; curly brackets for lists: {1,2,3,4}. The power of Mathematica (as opposed to, say, Matlab) is that it gives exact symbolic answers instead of a rounded numeric approximation (unless you request it):   Mathematica lets you define scoped variables (symbols): a=1; b=2; c=a+b --> 3 These variables can contain symbolic values – you can think of these as partially computed functions:   Use Clear[x] or Remove[x] to clear or remove a variable.   To compute a numerical approximation to n significant digits (default n=6), use N[x,n] or the //N postfix: Pi //N --> 3.14159 N[Pi,50] --> 3.1415926535897932384626433832795028841971693993751 The kernel uses % to reference the last calculation result, %% the 2nd last, %%% the 3rd last, etc., which makes for clearer statements: e.g. instead of: Sqrt[Pi+Sqrt[Pi+Sqrt[Pi]]] do: Sqrt[Pi]; Sqrt[Pi+%]; Sqrt[Pi+%] The help system supports wildcards, so I can search for functions like so: ?Inv* Mathematica supports some very powerful programming constructs and a rich function library that allow you to do things that you would have to write a lot of code for in a language like C++.   the Factor function – factorization: Factor[x^3 - 6*x^2 + 11x - 6] --> (-3+x) (-2+x) (-1+x)   the Solve function – find the roots of an equation: Solve[x^3 - 2x + 1 == 0] -->   the Expand function – express (1+x)^10 in polynomial form: Expand[(1+x)^10] --> 1+10x+45x^2+120x^3+210x^4+252x^5+210x^6+120x^7+45x^8+10x^9+x^10 the Prime function – what is the 1000th prime? Prime[1000] --> 7919 Mathematica also has some powerful graphics capabilities:   the Plot function – plot the graph of y=Sin x in a single period: Plot[Sin[x], {x,0,2*Pi}] You can also plot 3D surfaces of functions using the Plot3D function

    Read the article

  • Plan Operator Tuesday round-up

    - by Rob Farley
    Eighteen posts for T-SQL Tuesday #43 this month, discussing Plan Operators. I put them together and made the following clickable plan. It’s 1000px wide, so I hope you have a monitor wide enough. Let me explain this plan for you (people’s names are the links to the articles on their blogs – the same links as in the plan above). It was clearly a SELECT statement. Wayne Sheffield (@dbawayne) wrote about that, so we start with a SELECT physical operator, leveraging the logical operator Wayne Sheffield. The SELECT operator calls the Paul White operator, discussed by Jason Brimhall (@sqlrnnr) in his post. The Paul White operator is quite remarkable, and can consume three streams of data. Let’s look at those streams. The first pulls data from a Table Scan – Boris Hristov (@borishristov)’s post – using parallel threads (Bradley Ball – @sqlballs) that pull the data eagerly through a Table Spool (Oliver Asmus – @oliverasmus). A scalar operation is also performed on it, thanks to Jeffrey Verheul (@devjef)’s Compute Scalar operator. The second stream of data applies Evil (I figured that must mean a procedural TVF, but could’ve been anything), courtesy of Jason Strate (@stratesql). It performs this Evil on the merging of parallel streams (Steve Jones – @way0utwest), which suck data out of a Switch (Paul White – @sql_kiwi). This Switch operator is consuming data from up to four lookups, thanks to Kalen Delaney (@sqlqueen), Rick Krueger (@dataogre), Mickey Stuewe (@sqlmickey) and Kathi Kellenberger (@auntkathi). Unfortunately Kathi’s name is a bit long and has been truncated, just like in real plans. The last stream performs a join of two others via a Nested Loop (Matan Yungman – @matanyungman). One pulls data from a Spool (my post – @rob_farley) populated from a Table Scan (Jon Morisi). The other applies a catchall operator (the catchall is because Tamera Clark (@tameraclark) didn’t specify any particular operator, and a catchall is what gets shown when SSMS doesn’t know what to show. Surprisingly, it’s showing the yellow one, which is about cursors. Hopefully that’s not what Tamera planned, but anyway...) to the output from an Index Seek operator (Sebastian Meine – @sqlity). Lastly, I think everyone put in 110% effort, so that’s what all the operators cost. That didn’t leave anything for me, unfortunately, but that’s okay. Also, because he decided to use the Paul White operator, Jason Brimhall gets 0%, and his 110% was given to Paul’s Switch operator post. I hope you’ve enjoyed this T-SQL Tuesday, and have learned something extra about Plan Operators. Keep your eye out for next month’s one by watching the Twitter Hashtag #tsql2sday, and why not contribute a post to the party? Big thanks to Adam Machanic as usual for starting all this. @rob_farley
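    If you would like to see a few of these operators for yourself, one quick way (sketched here with hypothetical table names; this is not part of the original round-up) is to ask SQL Server for the textual plan of a simple join:

        -- Hypothetical tables; the point is only to surface operators such as
        -- an Index Seek and a Nested Loops join in the plan output.
        SET SHOWPLAN_TEXT ON;
        GO
        SELECT o.OrderID, c.CustomerName
        FROM dbo.Orders AS o
        INNER JOIN dbo.Customers AS c
                ON c.CustomerID = o.CustomerID
        WHERE o.OrderID = 42;
        GO
        SET SHOWPLAN_TEXT OFF;
        GO

    With SHOWPLAN_TEXT on, the statement is not executed; SQL Server just returns the plan text, which lists the physical operators discussed in the posts above.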

    Read the article

  • How To Approach 360 Degree Snake

    - by Austin Brunkhorst
    I've recently gotten into XNA and must say I love it. As sort of a hello world game I decided to create the classic game "Snake". The 90 degree version was very simple and easy to implement. But as I try to make a version of it that allows 360 degree rotation using left and right arrows, I've come into sort of a problem. What i'm doing now stems from the 90 degree version: Iterating through each snake body part beginning at the tail, and ending right before the head. This works great when moving every 100 milliseconds. The problem with this is that it makes for a choppy style of gameplay as technically the game progresses at only 6 fps rather than it's potential 60. I would like to move the snake every game loop. But unfortunately because the snake moves at the rate of it's head's size it goes way too fast. This would mean that the head would need to move at a much smaller increment such as (2, 2) in it's direction rather than what I have now (32, 32). Because I've been working on this game off and on for a couple of weeks while managing school I think that I've been thinking too hard on how to accomplish this. It's probably a simple solution, i'm just not catching it. Here's some pseudo code for what I've tried based off of what makes sense to me. I can't really think of another way to do it. for(int i = SnakeLength - 1; i > 0; i--){ current = SnakePart[i], next = SnakePart[i - 1]; current.x = next.x - (current.width * cos(next.angle)); current.y = next.y - (current.height * sin(next.angle)); current.angle = next.angle; } SnakeHead.x += cos(SnakeAngle) * SnakeSpeed; SnakeHead.y += sin(SnakeAngle) * SnakeSpeed; This produces something like this: Code in Action. As you can see each part always stays behind the head and doesn't make a "Trail" effect. A perfect example of what i'm going for can be found here: Data Worm. Not the viewport rotation but the trailing effect of the triangles. Thanks for any help!

    Read the article

  • Performance problems loading XML with SSIS, an alternative way!

    - by AtulThakor
    I recently needed to load several thousand XML files into a SQL database, so I created an SSIS package as follows: using a foreach container to loop through a directory and load each file path into a variable, the “Import XML” dataflow would then load each XML file into a SQL table.       Running this, it took approximately 1 second to load each file, which seemed a massive amount of time to parse the XML and load the data. Speaking to my colleague Martin Croft, he suggested the use of T-SQL Bulk Insert and OpenRowset, so we adjusted the package as follows:     The same foreach container was used, but instead the following SQL command was executed (this is an expression):     "INSERT INTO MyTable(FileDate) SELECT   CAST(bulkcolumn AS XML)     FROM OPENROWSET(         BULK         '" + @[User::CurrentFile]  + "',         SINGLE_BLOB ) AS x"     Using this method we managed to load approximately 20 records per second, much faster…for data loading! For what we wanted to achieve this was perfect, but I'll leave you with the following points to consider when deciding which solution to choose. OPENROWSET method: much faster to get the data into SQL; you'll need to parse or create a view over the XML data to make it more usable (another post on this!); you are not able to apply validation/transformation against the data when loading it; the SQL Server service account will need permission to the file; no schema validation when loading files. SSIS: slower (in our case); schema validation; allows you to apply transformations/joins to the data; permissions should be less of a problem; data can be loaded into its final form through the package; when using a schema, validation errors can fail the package (I'll do another post on this).
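    As the post notes, with the OPENROWSET route you still need to parse, or put a view over, the XML once it is loaded. Here is a rough sketch of what that shredding might look like; the element names and data types are hypothetical since the structure of the actual files is not shown, and it assumes the FileDate column is of the xml data type (as the CAST in the insert suggests):

        -- Hypothetical view shredding the XML loaded into MyTable.
        -- Adjust the paths and data types to match your own files.
        CREATE VIEW dbo.MyTableParsed
        AS
        SELECT
            x.item.value('(Name)[1]',  'varchar(100)') AS Name,
            x.item.value('(Value)[1]', 'int')          AS Value
        FROM dbo.MyTable AS t
        CROSS APPLY t.FileDate.nodes('/Root/Item') AS x(item);

    A view like this keeps the fast SINGLE_BLOB load path while giving downstream queries something relational to work with.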

    Read the article

  • How do I Integrate Production Database Hot Fixes into Shared Database Development model?

    - by TetonSig
    We are using SQL Source Control 3, SQL Compare, SQL Data Compare from RedGate, Mercurial repositories, TeamCity and a set of 4 environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model. To summarize our current system, we have a DEV SQL Server where developers first make changes/additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that, does a pull from hgrc and deploys the latest to a QA SQL Server where regression and integration tests are run. When those pass, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release. Separate from the above, we have database hot fixes that will need to be applied in between releases. The process there is for our Operations team to make changes on the PREPROD database and then, after testing, to use SQL Source Control to commit their hot fix changes to hgprod from the PREPROD database, and then do a compare from hgprod to PRODUCTION, create deployment scripts and run them on PRODUCTION. If we were in a dedicated-database-per-developer model, we could simply automatically push hgprod back to hgdev and merge in the hot fix change (through TeamCity monitoring for hgprod checkins) and then developers would pick it up and merge it to their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work. Pushing hot fixes back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and therefore we would need to overwrite the repository with the "change" from the DEV SQL Server. My only workaround so far is to just have OPS assign a developer the hotfix ticket with a script attached, and then we run their hotfixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than working faster to get to a dedicated environment, are there other ways to keep this loop going automatically?

    Read the article

  • Project Euler 18: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 18.  As always, any feedback is welcome. # Euler 18 # http://projecteuler.net/index.php?section=problems&id=18 # By starting at the top of the triangle below and moving # to adjacent numbers on the row below, the maximum total # from top to bottom is 23. # # 3 # 7 4 # 2 4 6 # 8 5 9 3 # # That is, 3 + 7 + 4 + 9 = 23. # Find the maximum total from top to bottom of the triangle below: # 75 # 95 64 # 17 47 82 # 18 35 87 10 # 20 04 82 47 65 # 19 01 23 75 03 34 # 88 02 77 73 07 63 67 # 99 65 04 28 06 16 70 92 # 41 41 26 56 83 40 80 70 33 # 41 48 72 33 47 32 37 16 94 29 # 53 71 44 65 25 43 91 52 97 51 14 # 70 11 33 28 77 73 17 78 39 68 17 57 # 91 71 52 38 17 14 91 43 58 50 27 29 48 # 63 66 04 68 89 53 67 30 73 16 69 87 40 31 # 04 62 98 27 23 09 70 98 73 93 38 53 60 04 23 # NOTE: As there are only 16384 routes, it is possible to solve # this problem by trying every route. However, Problem 67, is the # same challenge with a triangle containing one-hundred rows; it # cannot be solved by brute force, and requires a clever method! ;o) import time start = time.time() triangle = [ [75], [95, 64], [17, 47, 82], [18, 35, 87, 10], [20, 04, 82, 47, 65], [19, 01, 23, 75, 03, 34], [88, 02, 77, 73, 07, 63, 67], [99, 65, 04, 28, 06, 16, 70, 92], [41, 41, 26, 56, 83, 40, 80, 70, 33], [41, 48, 72, 33, 47, 32, 37, 16, 94, 29], [53, 71, 44, 65, 25, 43, 91, 52, 97, 51, 14], [70, 11, 33, 28, 77, 73, 17, 78, 39, 68, 17, 57], [91, 71, 52, 38, 17, 14, 91, 43, 58, 50, 27, 29, 48], [63, 66, 04, 68, 89, 53, 67, 30, 73, 16, 69, 87, 40, 31], [04, 62, 98, 27, 23, 9, 70, 98, 73, 93, 38, 53, 60, 04, 23]] # Loop through each row of the triangle starting at the base. for a in range(len(triangle) - 1, -1, -1): for b in range(0, a): # Get the maximum value for adjacent cells in current row. # Update the cell which would be one step prior in the path # with the new total. For example, compare the first two # elements in row 15. Add the max of 04 and 62 to the first # position of row 14.This provides the max total from row 14 # to 15 starting at the first position. Continue to work up # the triangle until the maximum total emerges at the # triangle's apex. triangle [a-1][b] += max(triangle [a][b], triangle [a][b+1]) print triangle [0][0] print "Elapsed Time:", (time.time() - start) * 1000, "millisecs" a=raw_input('Press return to continue')

    Read the article

  • How to pad number with leading zero with C#

    - by Jalpesh P. Vadgama
Recently I was working on a project where I needed to format numbers with leading zeros. After a little research I found a clean way to apply this leading-zero format. I needed to pad numbers to 5 digits, so the following table shows the format I was after. 1 -> 00001, 20 -> 00020, 300 -> 00300, 4000 -> 04000, 50000 -> 50000. So in the above example you can see that 1 becomes 00001, 20 becomes 00020, and so on. To display an integer value in this format I used the integer's ToString(String) method and passed "Dn" as the value of the format parameter, where n represents the minimum length of the string. So if we pass 5, the value is padded up to 5 digits. Let's create a simple console application and see how it works. Following is the code for that.

using System;

namespace LeadingZero
{
    class Program
    {
        static void Main(string[] args)
        {
            int a = 1;
            int b = 20;
            int c = 300;
            int d = 4000;
            int e = 50000;

            Console.WriteLine(string.Format("{0}------>{1}", a, a.ToString("D5")));
            Console.WriteLine(string.Format("{0}------>{1}", b, b.ToString("D5")));
            Console.WriteLine(string.Format("{0}------>{1}", c, c.ToString("D5")));
            Console.WriteLine(string.Format("{0}------>{1}", d, d.ToString("D5")));
            Console.WriteLine(string.Format("{0}------>{1}", e, e.ToString("D5")));
            Console.ReadKey();
        }
    }
}

As you can see in the above code, I have used the string.Format function to display each value alongside the result of its ToString("D5") call. Now let's run the console application; the output is exactly as expected. Here you can see the integer values converted into the output we require. That's it; it's very easy. We have written the code in a nice clean way, without any extra code or loops. Hope you liked it. Stay tuned for more. Till then, happy programming.
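For comparison, a small sketch of a few other built-in .NET ways to get the same zero-padding; these are standard framework features, and the value 42 is just an illustrative input.

using System;

class PaddingAlternatives
{
    static void Main()
    {
        int value = 42;

        Console.WriteLine(value.ToString("D5"));              // 00042 - standard numeric format string
        Console.WriteLine(value.ToString("00000"));           // 00042 - custom numeric format string
        Console.WriteLine(string.Format("{0:D5}", value));    // 00042 - same "D5" inside a composite format
        Console.WriteLine(value.ToString().PadLeft(5, '0'));  // 00042 - plain string padding
    }
}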

    Read the article

  • Nesting Linq-to-Objects query within Linq-to-Entities query –what is happening under the covers?

    - by carewithl
    var numbers = new int[] { 1, 2, 3, 4, 5 }; var contacts = from c in context.Contacts where c.ContactID == numbers.Max() | c.ContactID == numbers.FirstOrDefault() select c; foreach (var item in contacts) Console.WriteLine(item.ContactID); Linq-to-Entities query is first translated into Linq expression tree, which is then converted by Object Services into command tree. And if Linq-to-Entities query nests Linq-to-Objects query, then this nested query also gets translated into an expression tree. a) I assume none of the operators of the nested Linq-to-Objects query actually get executed, but instead data provider for particular DB (or perhaps Object Services) knows how to transform the logic of Linq-to-Objects operators into appropriate SQL statements? b) Data provider knows how to create equivalent SQL statements only for some of the Linq-to-Objects operators? c) Similarly, data provider knows how to create equivalent SQL statements only for some of the non-Linq methods in the Net Framework class library? EDIT: I know only some Sql so I can't be completely sure, but reading Sql query generated for the above code it seems data provider didn't actually execute numbers.Max method, but instead just somehow figured out that numbers.Max should return the maximum value and then proceed to include in generated Sql query a call to TSQL's build-in MAX function. It also put all the values held by numbers array into a Sql query. SELECT CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN '0X0X' ELSE '0X1X' END AS [C1], [Extent1].[ContactID] AS [ContactID], [Extent1].[FirstName] AS [FirstName], [Extent1].[LastName] AS [LastName], [Extent1].[Title] AS [Title], [Extent1].[AddDate] AS [AddDate], [Extent1].[ModifiedDate] AS [ModifiedDate], [Extent1].[RowVersion] AS [RowVersion], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[CustomerTypeID] END AS [C2], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[InitialDate] END AS [C3], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[PrimaryDesintation] END AS [C4], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[SecondaryDestination] END AS [C5], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[PrimaryActivity] END AS [C6], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[SecondaryActivity] END AS [C7], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[Notes] END AS [C8], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[RowVersion] END AS [C9], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[BirthDate] END AS [C10], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[HeightInches] END AS [C11], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[WeightPounds] END AS [C12], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[DietaryRestrictions] END AS [C13] FROM [dbo].[Contact] AS [Extent1] LEFT OUTER JOIN (SELECT [Extent2].[ContactID] AS [ContactID], [Extent2].[BirthDate] AS [BirthDate], [Extent2].[HeightInches] AS [HeightInches], [Extent2].[WeightPounds] AS [WeightPounds], [Extent2].[DietaryRestrictions] AS [DietaryRestrictions], [Extent3].[CustomerTypeID] AS [CustomerTypeID], [Extent3].[InitialDate] AS [InitialDate], 
[Extent3].[PrimaryDesintation] AS [PrimaryDesintation], [Extent3].[SecondaryDestination] AS [SecondaryDestination], [Extent3].[PrimaryActivity] AS [PrimaryActivity], [Extent3].[SecondaryActivity] AS [SecondaryActivity], [Extent3].[Notes] AS [Notes], [Extent3].[RowVersion] AS [RowVersion], cast(1 as bit) AS [C1] FROM [dbo].[ContactPersonalInfo] AS [Extent2] INNER JOIN [dbo].[Customers] AS [Extent3] ON [Extent2].[ContactID] = [Extent3].[ContactID]) AS [Project1] ON [Extent1].[ContactID] = [Project1].[ContactID] LEFT OUTER JOIN (SELECT TOP (1) [c].[C1] AS [C1] FROM (SELECT [UnionAll3].[C1] AS [C1] FROM (SELECT [UnionAll2].[C1] AS [C1] FROM (SELECT [UnionAll1].[C1] AS [C1] FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable1] UNION ALL SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable2]) AS [UnionAll1] UNION ALL SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable3]) AS [UnionAll2] UNION ALL SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable4]) AS [UnionAll3] UNION ALL SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable5]) AS [c]) AS [Limit1] ON 1 = 1 LEFT OUTER JOIN (SELECT TOP (1) [c].[C1] AS [C1] FROM (SELECT [UnionAll7].[C1] AS [C1] FROM (SELECT [UnionAll6].[C1] AS [C1] FROM (SELECT [UnionAll5].[C1] AS [C1] FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable6] UNION ALL SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable7]) AS [UnionAll5] UNION ALL SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable8]) AS [UnionAll6] UNION ALL SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable9]) AS [UnionAll7] UNION ALL SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable10]) AS [c]) AS [Limit2] ON 1 = 1 CROSS JOIN (SELECT MAX([UnionAll12].[C1]) AS [A1] FROM (SELECT [UnionAll11].[C1] AS [C1] FROM (SELECT [UnionAll10].[C1] AS [C1] FROM (SELECT [UnionAll9].[C1] AS [C1] FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable11] UNION ALL SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable12]) AS [UnionAll9] UNION ALL SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable13]) AS [UnionAll10] UNION ALL SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable14]) AS [UnionAll11] UNION ALL SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable15]) AS [UnionAll12]) AS [GroupBy1] WHERE [Extent1].[ContactID] IN ([GroupBy1].[A1], (CASE WHEN ([Limit1].[C1] IS NULL) THEN 0 ELSE [Limit2].[C1] END)) Based on this, is it possible that Linq2Entities provider indeed doesn't execute non-Linq and Linq-to-Object methods, but instead creates equivalent SQL statements for some of them ( and for others it throws an exception )? Thank you in advance
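One practical way to explore questions (a)-(c) is to look at the store command that Object Services generates before anything executes. Below is a minimal sketch, not an authoritative answer; it assumes an Entity Framework ObjectContext subclass with the hypothetical name ContactEntities exposing a Contacts set, as in the query above.

using System;
using System.Data.Objects;
using System.Linq;

class TranslationProbe
{
    static void Show(ContactEntities context)
    {
        var numbers = new int[] { 1, 2, 3, 4, 5 };

        // Construct the provider appears able to translate: Max() over a local collection.
        var supported = from c in context.Contacts
                        where c.ContactID == numbers.Max()
                        select c;

        // ObjectQuery<T>.ToTraceString() returns the generated store command text
        // without running the query, so you can see what was translated.
        Console.WriteLine(((ObjectQuery)supported).ToTraceString());

        // A method the provider cannot translate (for example, a call to your own
        // helper method inside the where clause) typically surfaces as a
        // NotSupportedException only when the query is translated, i.e. here or
        // at enumeration time.
    }
}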

    Read the article

  • Scrum and Team Consolidation

    - by John K. Hines
I’m still working my way through one of the more painful team consolidations of my career.  One thing that’s made it hard was my assumption that the use of Agile methods and Scrum would make everything easy.  Take three teams, make all work visible, track it, and presto: an efficient, functioning software development team. What I’ve come to realize is that the primary benefit of Scrum is that Scrum brings teams closer to their customers.  Frequent meetings, short iterations, and phased deployments are all meant to keep the customer in the loop.  It’s true that as teams become proficient with Scrum they tend to become more efficient.  But I don’t think it’s true that Scrum automatically helps people work together. Instead, Scrum can point out when teams aren’t good at working together.   And it really illustrates when teams, especially teams in sustaining mode, are reacting to their customers instead of innovating with them.  At the moment we’ve inherited a huge backlog of tools, processes, and personalities.  It’s up to us to sort them all out.  Unfortunately, after 7 1/2 months we’re still sorting. What I’d recommend for any blended team is to look at your current product lifecycles and work on a single lifecycle for all work.  If you can’t objectively come up with one process, that’s a good indication that the new team might not be a good fit for being a single unit (which happens all the time in bigger companies).  Go ahead and self-organize into sub-teams.  Then repeat the process. If you can come up with a single process, tackle each piece and standardize all of them.  Do this as soon as possible, as it can be uncomfortable.  Standardize your requirements gathering and tracking, your exploration and technical analysis, your project planning, development standards, validation and sustaining processes.  Standardize all of it.  Make this your top priority, get it out of the way, and get back to work. Lastly, managers of blended teams should realize that what I’m suggesting is a disruptive process.  But you’ve just reorganized; the team is already disrupted.   Don’t pull the bandage off slowly and force the team through a prolonged transition phase, lowering their productivity over the long term.  Model leadership for your team and drive a true consolidation.  Destroy roadblocks, reassure those on your team who are afraid of change, and push forward to create something efficient and beautiful.  Then use Scrum to reengage your customers in a way that they’ll love. Technorati tags: Scrum, Scrum Process

    Read the article

  • SQL SERVER – SQL in Sixty Seconds – 5 Videos from Joes 2 Pros Series – SQL Exam Prep Series 70-433

    - by pinaldave
The Joes 2 Pros SQL Server Learning series is indeed fun. The Joes 2 Pros series is written for beginners and for anyone who wants to build expertise in SQL Server programming and development from the fundamentals. At the beginning of the series, author Rick Morelan is not shy about explaining concepts as simple as how to open SQL Server Management Studio. Honestly, the books start that basic, but as the series progresses Rick discusses various advanced concepts, from query tuning to core architecture. This five-part series is written with SQL Server Exam 70-433 in mind. Instead of just focusing on what will be on the exam, the series focuses on learning the important concepts thoroughly. The books take no shortcuts in explaining any concept and, at times, go beyond the topic at length. The best part is that all the books have many companion videos explaining the concepts. Every Wednesday I like to post a video which explains something in a quick few seconds. Today we will go over five videos which I posted in my earlier posts related to the Joes 2 Pros series.

Introduction to XML Data Type Methods – SQL in Sixty Seconds #015
The XML data type was first introduced with SQL Server 2005. This data type continues with SQL Server 2008, where expanded XML features are available, most notably the power of the XQuery language to analyze and query the values contained in your XML instance. There are five XML data type methods available in SQL Server 2008: query() – Used to extract XML fragments from an XML data type. value() – Used to extract a single value from an XML document. exist() – Used to determine if a specified node exists. Returns 1 if yes and 0 if no. modify() – Updates XML data in an XML data type. nodes() – Shreds XML data into multiple rows (not covered in this blog post). [Detailed Blog Post] | [Quiz with Answer]

Introduction to SQL Error Actions – SQL in Sixty Seconds #014
Most people believe that when SQL Server encounters an error severity level 11 or higher the remaining SQL statements will not get executed. In addition, people also believe that if any error severity level of 11 or higher is hit inside an explicit transaction, then the whole statement will fail as a unit. While both of these beliefs are true 99% of the time, they are not true in all cases. It is these outlying cases that frequently cause unexpected results in your SQL code. To understand how to achieve consistent results you need to know the four ways SQL Error Actions can react to error severity levels 11-16: Statement Termination – The statement within the procedure fails but the code keeps on running to the next statement. Transactions are not affected. Scope Abortion – The current procedure, function or batch is aborted and the next calling scope keeps running. That is, if Stored Procedure A calls B and C, and B fails, then nothing in B runs but A continues to call C. @@Error is set but the procedure does not have a return value. Batch Termination – The entire client call is terminated. XACT_ABORT – (ON = The entire client call is terminated.) or (OFF = SQL Server will choose how to handle all errors.) [Detailed Blog Post] | [Quiz with Answer]

Introduction to Basics of a Query Hint – SQL in Sixty Seconds #013
Query hints specify that the indicated hints should be used throughout the query. Query hints affect all operators in the statement and are implemented using the OPTION clause.
Cautionary Note: Because the SQL Server Query Optimizer typically selects the best execution plan for a query, it is highly recommended that hints be used only as a last resort, and then only by experienced developers and database administrators, to achieve the desired results. [Detailed Blog Post] | [Quiz with Answer]

Introduction to Hierarchical Query – SQL in Sixty Seconds #012
A CTE can be thought of as a temporary result set and is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. A CTE is generally considered to be more readable than a derived table and does not require the extra effort of declaring a Temp Table while providing the same benefits to the user. However, a CTE is more powerful than a derived table as it can also be self-referencing, or even referenced multiple times in the same query. A recursive CTE requires four elements in order to work properly: an Anchor query (runs once and the results ‘seed’ the Recursive query), a Recursive query (runs multiple times and is the criteria for the remaining results), a UNION ALL statement to bind the Anchor and Recursive queries together, and an INNER JOIN statement to bind the Recursive query to the results of the CTE. [Detailed Blog Post] | [Quiz with Answer]

Introduction to SQL Server Security – SQL in Sixty Seconds #011
Let’s get some basic definitions down first. Take the workplace example where “Tom” needs “Read” access to the “Financial Folder”. What are the Securable, Principal, and Permissions from that last sentence? A Securable is a resource that someone might want to access (like the Financial Folder). A Principal is anything that might want to gain access to the securable (like Tom). A Permission is the level of access a principal has to a securable (like Read). [Detailed Blog Post] | [Quiz with Answer]

Please leave a comment explaining which video was your favorite, as that will help me understand what works and what needs improvement. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Craftsmanship Tour: Day 2 Obtiva

    - by Liam McLennan
I like Chicago. It is a great city for travellers. From the moment I got off the plane at O’Hare everything was easy. I took the train to ‘the Loop’ and walked around the corner to my hotel, Hotel Blake on Dearborn St. Sadly, the elevated train lines in downtown Chicago remind me of ‘Shall We Dance’. Hotel Blake is excellent (except for the breakfast) and the concierge directed me to a pizza place called Lou Malnati's for Chicago-style deep-dish pizza. Lou Malnati’s would be a great place to go with a group of friends. I felt strange dining there by myself, but the food and service were excellent. As usual in the United States the portion was so large that I could not finish it, but oh how I tried. Dave Hoover, who invited me to Obtiva for the day, had asked me to arrive at 9:45am. I was up early and had some time to kill so I stopped at the Willis Tower, since it was on my way to the office. Willis Tower is 1,451 feet (442 m) tall and has an observation deck at the top. Around the observation deck are a set of acrylic boxes protruding from the side of the building. Brave souls can walk out on the perspex and look between their feet all the way down to the street. It is unnerving. Obtiva is a progressive, craftsmanship-focused software development company in downtown Chicago. Dave even wrote a book, Apprenticeship Patterns, that provides a catalogue of patterns to assist aspiring software craftsmen in achieving their goals. I spent the morning working in Obtiva’s software studio, an open XP-style office that houses Obtiva’s in-house development team. For lunch Dave Hoover, Corey Haines, Cory Foy and I went to a local Greek restaurant (not Dancing Zorbas). Dave, Corey and Cory are three smart and motivated guys and I found their ideas enlightening. It was especially great to chat with Corey Haines since he was the inspiration for my craftsmanship tour in the first place. After lunch I recorded a brief interview with Dave. Unfortunately, the battery in my camera went flat so I missed recording some interesting stuff. Interview with Dave Hoover. In the evening Obtiva hosted an rspec hackfest with David Chelimsky and others. This was an excellent opportunity to be around some of the very best Ruby programmers. At 10pm I went back to my hotel to get some rest before my train north the next morning.

    Read the article

  • TFS 2010 SDK: Integrating Twitter with TFS Programmatically

    - by Tarun Arora
Technorati Tags: Team Foundation Server 2010, TFS API, Integrate Twitter TFS, TFS Programming, ALM, TwitterSharp   Friends at ‘Tweet Sharp’ have created a wonderful .NET API for Twitter. With this blog post I will try to show you a basic TFS – Twitter integration scenario where I will retrieve the Team Project details programmatically and then publish these details on my Twitter page. In future blogs I will be demonstrating how to create a Windows service to capture the events raised by TFS and then publish them in your social eco-system. Download Working Demo: Integrate Twitter - Tfs Programmatically

1. Setting up the Twitter API
Download Tweet Sharp from => https://github.com/danielcrenna/tweetsharp  Before you can start playing around with this, you will need to register an application on Twitter. This is because Twitter uses the OAuth authentication protocol and will not issue an Access Token unless your application is registered with them. Go to https://dev.twitter.com/ and register your application. Once you have registered your application, you will need the ‘Consumer Key’, ‘Consumer Secret’, ‘Access Token’ and ‘Access Token Secret’.

2. Connecting to Twitter using the Tweet Sharp API
Create a new C# Windows Forms project and add references to ‘Hammock.ClientProfile’, ‘Newtonsoft.Json’ and ‘TweetSharp’. Add the following keys to the App.config (Note – the values for the keys below are incorrect; if you try to connect using them you will get an authorization failure error). Add a new class ‘TwitterProxy’ and use the following code to connect to the TwitterService (read more about OAuth authentication at http://dev.twitter.com/pages/auth):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Configuration;
using TweetSharp;

namespace WindowsFormsApplication2
{
    public class TwitterProxy
    {
        private static string _hero;
        private static string _consumerKey;
        private static string _consumerSecret;
        private static string _accessToken;
        private static string _accessTokenSecret;

        public static TwitterService ConnectToTwitter()
        {
            _consumerKey = ConfigurationManager.AppSettings["ConsumerKey"];
            _consumerSecret = ConfigurationManager.AppSettings["ConsumerSecret"];
            _accessToken = ConfigurationManager.AppSettings["AccessToken"];
            _accessTokenSecret = ConfigurationManager.AppSettings["AccessTokenSecret"];

            return new TwitterService(_consumerKey, _consumerSecret, _accessToken, _accessTokenSecret);
        }
    }
}

Time to Tweet!

_twitterService = Proxy.TwitterProxy.ConnectToTwitter();
_twitterService.SendTweet("Hello World");

SendTweet will return the TweetStatus. If you do not get a 200 OK status, that means you have failed authentication, so please revisit the access tokens.

--RESPONSE: https://api.twitter.com/1/statuses/update.json HTTP/1.1 200 OK X-Transaction: 1308476106-69292-41752 X-Frame-Options: SAMEORIGIN X-Runtime: 0.03040 X-Transaction-Mask: a6183ffa5f44ef11425211f25 Pragma: no-cache X-Access-Level: read-write X-Revision: DEV X-MID: bd8aa0abeccb6efba38bc0a391a73fab98e983ea Cache-Control: no-cache, no-store, must-revalidate, pre-check=0, post-check=0 Content-Type: application/json; charset=utf-8 Date: Sun, 19 Jun 2011 09:35:06 GMT Expires: Tue, 31 Mar 1981 05:00:00 GMT Last-Modified: Sun, 19 Jun 2011 09:35:06 GMT Server: hi Vary: Accept-Encoding Content-Encoding: Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Transfer-Encoding: chunked
3. Integrate with TFS
In my blog post Connect to TFS Programmatically I have demonstrated in depth how to connect to TFS using the TFS API.

// Update the AppConfig with the URI of the Team Foundation Server you want to connect to.
// Make sure you have View Team Project Collection Details permissions on the server.
private static string _myUri = ConfigurationManager.AppSettings["TfsUri"];
private static TwitterService _twitterService = null;

private void button1_Click(object sender, EventArgs e)
{
    lblNotes.Text = string.Empty;

    try
    {
        StringBuilder notes = new StringBuilder();

        _twitterService = Proxy.TwitterProxy.ConnectToTwitter();

        _twitterService.SendTweet("Hello World");

        TfsConfigurationServer configurationServer =
            TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri));

        CatalogNode catalogNode = configurationServer.CatalogNode;

        ReadOnlyCollection<CatalogNode> tpcNodes = catalogNode.QueryChildren(
            new Guid[] { CatalogResourceTypes.ProjectCollection },
            false, CatalogQueryOptions.None);

        // tpc = Team Project Collection
        foreach (CatalogNode tpcNode in tpcNodes)
        {
            Guid tpcId = new Guid(tpcNode.Resource.Properties["InstanceId"]);
            TfsTeamProjectCollection tpc = configurationServer.GetTeamProjectCollection(tpcId);

            notes.AppendFormat("{0} Team Project Collection : {1}{0}", Environment.NewLine, tpc.Name);
            _twitterService.SendTweet(String.Format("http://Lunartech.codeplex.com - Connecting to Team Project Collection : {0} ", tpc.Name));

            // Get catalog of tp = 'Team Projects' for the tpc = 'Team Project Collection'
            var tpNodes = tpcNode.QueryChildren(
                new Guid[] { CatalogResourceTypes.TeamProject },
                false, CatalogQueryOptions.None);

            foreach (var p in tpNodes)
            {
                notes.AppendFormat("{0} Team Project : {1} - {2}{0}", Environment.NewLine, p.Resource.DisplayName, "This is an open source project hosted on codeplex");
                _twitterService.SendTweet(String.Format(" Connected to Team Project: '{0}' – '{1}' ", p.Resource.DisplayName, "This is an open source project hosted on codeplex"));
            }
        }
        notes.AppendFormat("{0} Updates posted on Twitter : {1} {0}", Environment.NewLine, @"http://twitter.com/lunartech1");
        lblNotes.Text = notes.ToString();
    }
    catch (Exception ex)
    {
        lblError.Text = " Message : " + ex.Message + (ex.InnerException != null ? " Inner Exception : " + ex.InnerException : string.Empty);
    }
}

The extensions you can build integrating TFS and Twitter are incredible!   Share this post :

    Read the article

  • How do you keep code with continuations/callbacks readable?

    - by Heinzi
Summary: Are there some well-established best-practice patterns that I can follow to keep my code readable in spite of using asynchronous code and callbacks? I'm using a JavaScript library that does a lot of stuff asynchronously and heavily relies on callbacks. It seems that writing a simple "load A, load B, ..." method becomes quite complicated and hard to follow using this pattern. Let me give a (contrived) example. Let's say I want to load a bunch of images (asynchronously) from a remote web server. In C#/async, I'd write something like this:

disableStartButton();
foreach (myData in myRepository)
{
    var result = await LoadImageAsync("http://my/server/GetImage?" + myData.Id);
    if (result.Success)
    {
        myData.Image = result.Data;
    }
    else
    {
        write("error loading Image " + myData.Id);
        return;
    }
}
write("success");
enableStartButton();

The code layout follows the "flow of events": First, the start button is disabled, then the images are loaded (await ensures that the UI stays responsive) and then the start button is enabled again. In JavaScript, using callbacks, I came up with this:

disableStartButton();

var count = myRepository.length;

function loadImage(i) {
    if (i >= count) {
        write("success");
        enableStartButton();
        return;
    }

    myData = myRepository[i];

    LoadImageAsync("http://my/server/GetImage?" + myData.Id,
        function(success, data) {
            if (success) {
                myData.Image = data;
            } else {
                write("error loading image " + myData.Id);
                return;
            }
            loadImage(i+1);
        }
    );
}

loadImage(0);

I think the drawbacks are obvious: I had to rework the loop into a recursive call, the code that's supposed to be executed in the end is somewhere in the middle of the function, the code starting the download (loadImage(0)) is at the very bottom, and it's generally much harder to read and follow. It's ugly and I don't like it. I'm sure that I'm not the first one to encounter this problem, so my question is: Are there some well-established best-practice patterns that I can follow to keep my code readable in spite of using asynchronous code and callbacks?

    Read the article

  • Implement Negascout Algorithm with stack

    - by Dan
I'm not familiar with how these Stack Exchange accounts work, so if this is double posting I apologize; I asked the same thing on Stack Overflow. I have added an AI routine to a game I am working on using the Negascout algorithm. It works great, but when I set a higher maximum depth it can take a few seconds to complete. The problem is it blocks the main thread, and the framework I am using does not have a way to deal with multi-threading properly across platforms. So I am trying to change this routine from recursively calling itself to just managing a stack (vector), so that I can progress through the routine at a controlled pace and not lock up the application while the AI is thinking. I am getting hung up on the second recursive call in the loop. It relies on a returned value from the first call, so I don't know how to add these to a stack. My working C++ recursive code:

MoveScore abNegascout(vector<vector<char> > &board, int ply, int alpha, int beta, char piece) {
    if (ply==mMaxPly) {
        return MoveScore(evaluation.evaluateBoard(board, piece, oppPiece));
    }

    int currentScore;
    int bestScore = -INFINITY;
    MoveCoord bestMove;
    int adaptiveBeta = beta;

    vector<MoveCoord> moveList = evaluation.genPriorityMoves(board, piece, findValidMove(board, piece, false));
    if (moveList.empty()) {
        return MoveScore(bestScore);
    }
    bestMove = moveList[0];

    for(int i=0;i<moveList.size();i++) {
        MoveCoord move = moveList[i];
        vector<vector<char> > newBoard;
        newBoard.insert( newBoard.end(), board.begin(), board.end() );
        effectMove(newBoard, piece, move.getRow(), move.getCol());

        // First Call ******
        MoveScore current = abNegascout(newBoard, ply+1, -adaptiveBeta, -max(alpha,bestScore), oppPiece);
        currentScore = - current.getScore();

        if (currentScore>bestScore){
            if (adaptiveBeta == beta || ply>=(mMaxPly-2)){
                bestScore = currentScore;
                bestMove = move;
            }else {
                // Second Call ******
                current = abNegascout(newBoard, ply+1, -beta, -currentScore, oppPiece);
                bestScore = - current.getScore();
                bestMove = move;
            }
            if(bestScore>=beta){
                return MoveScore(bestMove,bestScore);
            }
            adaptiveBeta = max(alpha, bestScore) + 1;
        }
    }
    return MoveScore(bestMove,bestScore);
}

Could someone please explain how to get this to work with a simple stack? Example code would be much appreciated. While C++ would be perfect, any language that demonstrates how would be great. Thank you.
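One general way to approach this (a hedged sketch, not a drop-in replacement for the routine above): give each pending call an explicit frame that records where it is in its loop and what it has computed so far, then drive the frames from a plain loop. The toy below is C# rather than C++ and uses a hypothetical Node type instead of the board/MoveScore types from the question, but it shows the key trick: when a child frame finishes, its result is folded into the frame beneath it before that frame pushes its next child, which is exactly the point where the second Negascout call would read the first call's result.

using System;
using System.Collections.Generic;

class Node
{
    public int Score;                               // used only at leaves
    public List<Node> Children = new List<Node>();  // empty for leaves
}

class ExplicitStackSketch
{
    // Frame = one suspended "recursive call": which node, which child comes next,
    // and the best score folded in so far.
    class Frame
    {
        public Node Node;
        public int NextChild;
        public int Best = int.MinValue;
    }

    // Iterative version of: best(n) = n is a leaf ? n.Score : max over children of -best(child)
    static int Best(Node root)
    {
        var stack = new Stack<Frame>();
        stack.Push(new Frame { Node = root });
        int returned = 0;   // value "returned" by the most recently completed frame

        while (stack.Count > 0)
        {
            var f = stack.Peek();

            if (f.Node.Children.Count == 0)            // leaf: return its score
            {
                returned = f.Node.Score;
                stack.Pop();
                continue;
            }

            if (f.NextChild > 0)                       // a child just finished: use its result
                f.Best = Math.Max(f.Best, -returned);

            if (f.NextChild < f.Node.Children.Count)   // more children: suspend here and descend
            {
                var child = f.Node.Children[f.NextChild++];
                stack.Push(new Frame { Node = child });
                continue;                              // one unit of work done; control could return to the UI here
            }

            returned = f.Best;                         // all children done: this call "returns"
            stack.Pop();
        }
        return returned;
    }
}

For Negascout specifically, the frame would also need to carry alpha, beta, adaptiveBeta, the generated move list and an index into it, plus a flag saying whether the pending child result belongs to the first (null-window) call or the second (re-search) call; the points where the loop pushes a child and continues are where you can hand control back to your main loop so the AI no longer blocks it.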

    Read the article

< Previous Page | 300 301 302 303 304 305 306 307 308 309 310 311  | Next Page >