Search Results

Search found 47615 results on 1905 pages for 'make it useful keep it simple'.


  • Understanding the JSF Lifecycle and ADF Optimized Lifecycle

    - by Steven Davelaar
    While coaching ADF development teams over the years, I have noticed that many developers lack a basic understanding of Java Server Faces, in particular the JSF lifecycle and how ADF optimizes this lifecycle in specific situations. As a result, ADF developers who are tasked to build a seemingly simple ADF page can get extremely frustrated by the (in their eyes) unexpected or illogical behavior of ADF. They start to play with the immediate property and the partialTriggers property in a trial-and-error manner. Often, they play with these properties until their specific issue is solved, unaware of other more severe bugs that might be introduced by the values they choose for these properties. So, I decided to submit a presentation for the UKOUG entitled "What you need to know about JSF to be successful with ADF". The abstract was accepted, and I started putting together the presentation and demo application. I built up a demo application step by step, trying to cover the top JSF-related issues and challenges I encountered over the years in a simple "Hello World" demo. This turned out to be both a very time-consuming and very interesting journey. I had never thought I would learn so much myself in preparing this presentation. I never thought I would end up with potentially controversial conclusions like "Never set immediate=true on an editable component". I did not realize beforehand the sometimes immense implications of the ADF optimized lifecycle. I never thought that "Hello World" demos could get so complex. But as I went on, I was confident this was valuable material, even for experienced ADF developers with a good understanding of JSF. When I finished, I realized the original title and abstract were misleading, as was the target audience. Yes, it was covering the JSF lifecycle, but not the other aspects of JSF you need to know for ADF development. Yes, it was covering some JSF basics as mentioned in the abstract, but all in all it had become a pretty advanced presentation. At the same time, the issues discussed are very common; novice ADF developers might easily run into them while building their first pages. I ran out of time, so I decided to just present what I had, apologizing at the beginning for the misleading title and showing a second slide with a better title: "18 invaluable lessons about ADF-JSF interaction". I think the presentation was well received overall, although people who don't like it or don't understand it usually don't come and tell you afterwards... I am still struggling with the title; for this blog post I used yet another one. Anyway, you can download the presentation-that-still-lacks-a-good-title here. The finished JDev 11.1.1.6 demo app can be downloaded here. The 18 lessons mentioned in the presentation are summarized here. As mentioned on the last slide, print out the lessons and learn them by heart; I am pretty sure it will save you lots of time and frustration!

    Read the article

  • How to use Ajax : CollapsiblePanelExtender in ASP.NET

    - by SAMIR BHOGAYTA
    It is a simple method; set any other properties you want as needed. Step 1: Take one Panel and put all the content you want to collapse into this panel. Step 2: Set the Collapsed property to true. Step 3: Set ExpandControlID/CollapseControlID to the controls that will expand or collapse the panel on a click, respectively. If these values are the same, the panel will automatically toggle its state on each click. Step 4: Set TargetControlID to the Panel's ID. Step 5: Select the Panel and set the property SuppressPostBack="True".
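
    A minimal code-behind sketch (not from the original post) of the same configuration, assuming the AjaxControlToolkit is referenced, a ScriptManager is on the page, and the page already contains a Panel named pnlContent and a LinkButton named lnkToggle:

    using System;
    using System.Web.UI;
    using AjaxControlToolkit;

    public partial class CollapsibleDemo : Page
    {
        protected void Page_Init(object sender, EventArgs e)
        {
            // Hypothetical control IDs; adjust them to match your own markup.
            var extender = new CollapsiblePanelExtender
            {
                ID = "cpeContent",
                TargetControlID = "pnlContent",   // Step 4: the panel to collapse
                Collapsed = true,                 // Step 2: start collapsed
                ExpandControlID = "lnkToggle",    // Step 3: same control for expand...
                CollapseControlID = "lnkToggle",  // ...and collapse, so each click toggles
                SuppressPostBack = true           // Step 5: no postback on toggle
            };
            Form.Controls.Add(extender);
        }
    }

    The same settings can of course be declared in the .aspx markup instead; the code above only mirrors the five steps programmatically.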

    Read the article

  • Google I/O 2012 - Best Practices for Maps API Developers

    Susannah Raub, Jez Fletcher. The Google Maps API makes it easy to add simple maps to your applications, but we want to take you to the next level. In this session we reveal our recommended best practices for Maps API developers, including developer tools, testing, and API features that will save you time, avoid a headache or two, and delight your users. For all I/O 2012 sessions, go to developers.google.com

    Read the article

  • Benefits of Using Both SEO Techniques and Google AdWords

    As the name suggests, Search Engine Optimization is the process by which your web page is optimized to appear in the first few results of Google's search page, also known as the SERP (Search Engine Results Page). Unfortunately, this is not as simple as it sounds. To get your page onto the first page, you must actively promote your site using legitimate and natural means by which the site gets more visitors, reciprocal links, backlinks, and ultimately high-quality traffic.

    Read the article

  • “Play Now” via website vs. download & install

    - by Inside
    I've spent some time looking over the various threads here on GDSE and also on the regular Stackoverflow site, and while I saw a lot of posts and threads regarding various engines that could be used in game development, I haven't seen very much discussion regarding the various platforms that they can be used on. In particular, I'm talking about browser games vs. desktop games. I want to develop a simple 3D networked multiplayer game - roughly on the graphics level of Paper Mario, with gameplay offering roughly the same level of interaction as a hack & slash action/adventure game - and I'm having a hard time deciding what platform I want to target with it. I have some experience with using C++/Ogre3D and Python/Panda3D (and also some synchronized/networked programming), but I'm wondering if it's worth it to spend the extra time to learn another language and another engine/toolkit just so that the game can be played in a browser window (I'm looking at jMonkeyEngine right now). Is it worth it to go with engines that are less mature, have less documentation, have fewer features, and have smaller communities* just so that a (possibly?) larger audience can be reached? Does it make sense to even go with a web environment for the kind of game that I want to make? Does anyone have any experience with decisions like this? (* With the exception of Flash-based engines, it seems like most of the other approaches have downsides when compared to what is available for desktop-based environments. I'd go with Flash, but I'm worried that Flash's 3D capabilities aren't mature enough right now to do what I want easily. There's also Unity3D, but I'm not sure how I feel about that at all. It seems highly polished, but requires a plugin to be downloaded for the game to be played; at that rate I might as well have players download my game.) For simple & short games the Newgrounds approach (go to the site, click "play now", instant gratification) seems to work well. What about for more complex games? Is there a point where the complexity of a game is enough for people to say "OK, I'm going to download and play that"?

    Read the article

  • SQL SERVER – Iridium I/O – SQL Server Deduplication that Shrinks Databases and Improves Performance

    - by Pinal Dave
    Database performance is a common problem for SQL Server DBAs. It seems like we spend more time on performance than just about anything else. In many cases, we use scripts or tools that point out performance bottlenecks, but we don't have any way to fix them. For example, what do you do when you need to speed up a query that is already tuned as well as possible? Or what do you do when you aren't allowed to make changes for a database supporting a purchased application? Iridium I/O for SQL Server was originally built at Confio Software (makers of Ignite) because DBAs kept asking for a way to actually fix performance instead of just pointing out performance problems. The technology is certified by Microsoft and was so promising that it was spun out into a separate company that is now run by the Confio Founder/CEO and technology management team. Iridium uses deduplication technology to both shrink the databases and boost I/O performance. It is intriguing to see it work. It will deduplicate a live database as it is running transactions. You can watch the database get smaller while user queries are running. Iridium is a simple tool to use. After installing the software, you click an "Analyze" button, which will spend a minute or two on each database and estimate both your storage and performance savings. Next, you click an "Activate" button to turn on Iridium I/O for your selected databases. You don't need to reboot the operating system or restart the database during any part of the process. As part of my test, I also wanted to see if there would be an impact on my databases when Iridium was removed. The 'revert' process (bringing the files back to their SQL Server native format) was executed by a simple click of a button, and completed while the databases were available for normal processing. I was impressed and enjoyed playing with the software, and I encourage all of you to try it out. Here is the link to the website to download Iridium for free. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Apache on Windows - splitting vHost logs

    - by Cylindric
    I have a Windows Server 2008 running Apache, and it will be hosting several virtual hosts. I'd rather not use the logrotate tool (|bin/logrotate), as it seems to create significant extra overhead with all the processes. Is there a simple Windows alternative to get the log entries from a combined log file split into several per-site files? Preferably with custom output directories, but that is optional.

    Read the article

  • In rails, what defines unit testing as opposed to other kinds of testing

    - by junky
    Initially I thought this was simple: unit testing for models, other testing (such as integration testing) for controllers, and browser testing for views. But more recently I've seen a lot of references to unit testing that don't seem to exactly follow this format. Is it possible to have a unit test of a controller? Does that mean that just one method is called? What's the distinction? What does unit testing really mean in my Rails world?

    Read the article

  • Parse/Write JSON with Unity iOS

    - by DannoEterno
    Does anybody know of a tutorial, or can anyone help me develop a JSON parser/reader compatible with Unity iOS Pro? I've already tried different third-party libraries without luck (I've tried Json.NET, JsonFx, LitJSON). I'm in a hurry to build a simple parser/writer that I can also use under iOS, not only on desktop. P.S. I can also use a third-party library, but please, before suggesting one, be sure that it will work under iOS! Thank you all

    Read the article

  • Create Your CRM Style

    - by Ruth
    Company branding can create a sense of spirit, belonging, familiarity, and fun. CRM On Demand has long offered company branding options, but now, with Release 17, those options have become quicker, easier, and more flexible. Themes (also known as Skins) allow you to customize the appearance of the CRM On Demand application for your entire company, or for individual roles. Users may also select the theme that works best for them. You can create a new theme in 5 minutes or less, but if you're anything like me, you may enjoy tinkering with it for a while longer. Before you begin tinkering, I recommend spending a few moments coming up with a design plan. If you have specific colors or logos you want for your theme, gather those first...that will move the process along much faster. If you want to match the color of an existing Web site or application, you can use tools like Pixie to match the HEX/HTML color values. Logos must be in a JPEG, JPG, PNG, or GIF file format. Header logos must be approximately 70 pixels high by 1680 pixels wide. Footer logos must be no more than 200 pixels wide. And, of course, you must have permission to use the images that you upload for your theme. Creating the theme itself is the simple part. Here are a few simple steps. Note: You must have the Manage Themes privilege to create custom themes. Click the Admin global link. Navigate to Application Customization, then Themes. Click New. Note: You may also choose to copy and edit an existing theme. Enter information for the following fields: Theme Name - Enter a name for your new theme. Show Default Help Link - Online help holds valuable information for all users, so I recommend selecting this check box. Show Default Training and Support Link - The Training and Support Center holds valuable information for all users, so I recommend selecting this check box. Description - Enter a description for your new theme. Click Save. Once you click Save, the Theme Detail page opens. From there, you can design your theme. The preview shows the Home, Detail, and List pages, with the new theme applied. For more detailed information about themes, click the Help link from any page in CRM On Demand Release 17, then search or browse to find the Creating New Themes page (Administering CRM On Demand, Application Customization, Creating New Themes). Click the Show Me link on that Help page to access the Creating Custom Themes quick guide. This quick guide shows how each of the page elements is defined.

    Read the article

  • Automated build platform for .NET portfolio - best choice?

    - by jkohlhepp
    I am involved with maintaining a fairly large portfolio of .NET applications. Also in the portfolio are legacy applications built on top of other platforms - native C++, ECLIPS Forms, etc. I have a complex build framework on top of NAnt right now that manages the builds for all of these applications. The build framework uses NAnt to do a number of different things: pull code out of Subversion, as well as create tags in Subversion; build the code, using MSBuild for .NET or other compilers for other platforms; peek inside AssemblyInfo files to increment version numbers; delete certain files that shouldn't be included in builds / releases; release code to deployment folders; zip code up for backup purposes; deploy Windows services, and start and stop them; etc. Most of those things can be done with just NAnt by itself, but we did build a couple of extension tasks for NAnt to do some things that were specific to our environment. Also, most of those processes above are genericized and reused across a lot of our different application build scripts, so that we don't repeat logic. So it is not simple NAnt code, and these are not simple build scripts. There are dozens of NAnt files that come together to execute a build. Lately I've been dissatisfied with NAnt for a couple of reasons: (1) its syntax is just awful - programming languages on top of XML are really horrific to maintain; (2) the project seems to have died on the vine; there haven't been a ton of updates lately and it seems like no one is really at the helm. Trying to get it working with .NET 4 has caused some pain points due to this lack of activity. So, with all of that background out of the way, here's my question. Given some of the things that I want to accomplish based on that list above, and given that I am primarily in a .NET shop, but I also need to build non-.NET projects, is there an alternative to NAnt that I should consider switching to? Things on my radar include Powershell (with or without psake), MSBuild by itself, and rake. These all have pros and cons. For example, is MSBuild powerful enough? I remember using it years ago and it didn't seem to have as much power as NAnt. Do I really want to have my team learn Ruby just to do builds using rake? Is psake really mature enough of a project to pin my portfolio to? Is Powershell "too close to the metal", meaning I'll end up having to write my own build library akin to psake to use it on its own? Are there other tools that I should consider? If you were involved with maintaining a .NET portfolio of significant complexity, what build tool would you be looking at? What does your team currently use?

    Read the article

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic. My Approach to Data Mining Projects It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer's data in advance. I don't know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest starting with a proof-of-concept (POC) project. A POC takes something between 5 and 10 working days, and involves personnel from the customer's site – either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question and the meaning of the data at hand, while the IT expert should be familiar with the structure of the data, how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert, the most time-consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project. If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach, as it offers the establishment of a common background for all people involved, the understanding of how the algorithms work, the understanding of how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more. Once the data has been extracted, the customer's SME (i.e. the analyst) and the IT expert assigned to the project will learn how to prepare the data in an efficient manner. Together with me, knowledge and expertise allow us to focus immediately on the most interesting attributes and identify any additional, calculated, ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and the quality of the input data. I favor the customer's decision to assign additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly on the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems. Simultaneously with the data preparation, the data overview is performed. The logic behind this task is the same – again I show the teams involved the expected results, how to achieve them and what they mean. This is also done in multiple cycles, as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. A specific objective of the data overview is of principal importance – it is represented by a simple star schema and a simple OLAP cube that will first of all simplify data discovery and interpretation of the results, and will also prove useful in the following tasks.
The presence of the customer's SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of a project without the customer's SME. After the data preparation, and when the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much more appropriate alternative is possible if the initial training was also performed, because it allows the customer's personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately by using several different techniques. One of the techniques includes evaluation over time, where we use an OLAP cube. After evaluating the models, we select the most appropriate model to be deployed for a production test; this allows the team to understand the deployment process. There are many possibilities for deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers. Even from a POC, the customer will receive lots of benefits, all at the sole risk of spending money and time on a single 5 to 10 day project: the customer learns the basic patterns of fraud and fraud detection; the customer learns how to do the entire cycle with their own people, only relying on me for the most complex problems; the customer's analysts learn how to perform much more in-depth analyses than they ever thought possible; the customer's IT experts learn how to perform data extraction and preparation much more efficiently than they did before; all of the attendees of the training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production; the POC output for a smaller company or for a subsidiary of a larger company can actually be considered a finished, production-ready solution; and it is possible to utilize the results of the POC project at the subsidiary level, as a finished POC project for the entire enterprise. Typically, the project also results in several important "side effects": improved data quality; improved employee job satisfaction, as employees are able to proactively contribute to the central knowledge about fraud patterns in the organization; and, because eventually more minds get involved in the enterprise, the company should expect more and better fraud detection patterns. After the POC project is completed as described above, the actual project would not need months of engagement from my side. This is possible due to our preference to transfer the knowledge to the customer's employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly.
I usually expect to perform the following tasks: establish the final infrastructure to measure the efficiency of the deployed models; deploy the models in additional scenarios, through reports and by including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings; include data mining models as dimensions in OLAP cubes, if this was not already done during the POC project; and create smart ETL applications that divert suspicious data for immediate or later inspection. I would also offer to investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary whereas a central system is available as well. Of course, for the actual project, I would repeat the data and model preparation as needed. It is virtually impossible to tell in advance how much time the deployment would take, before we decide together with the customer what exactly the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still only take an additional 5 to 10 days. The approximate timeline for the POC project is as follows: 1-2 days of training; 2-3 days for data preparation and data overview; 2 days for creating and evaluating the models; 1 day for initial preparation of the continuous learning infrastructure; and 1 day for presentation of the results and discussion of further actions. Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we spent just one hour more on data preparation, or created just one more model, we could get better patterns and predictions. However, we simply must stop somewhere, and the best possible way to do this, according to my experience, is to restrict the time spent on the project in advance, after an agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and improving the models and predictions over time without the need for a constant engagement with me.

    Read the article

  • Subscribe to RSS Feeds in Chrome with a Single Click

    - by Asian Angel
    Do you have a Google Reader account and need a quick, simple way to subscribe to new RSS feeds while you browse? Then you will definitely want to have a look at the Chrome Reader extension for Chrome. Before: If you want to add a new feed to your Google Reader account in Chrome, you have to do it manually. A single feed now and then is not a problem, but it is not so good if you want to build up a serious set of RSS feeds quickly. Chrome Reader in Action: Once the extension is installed, you are ready to go. Any time you visit a webpage with an RSS feed available, you will see the familiar orange feed icon appear in your "Address Bar". To add the feed to your Google Reader account, just click on the orange feed icon. Note: You will need to be logged into your Google Reader account in your browser. When you click on the orange feed icon, a small drop-down window will appear where you can modify the feed name and/or add it to a "custom folder" if desired. Notice that the orange feed icon has changed to the familiar Google Reader icon, indicating that the feed has been added to the account. Now you are ready to continue browsing…no other actions are required. And now to subscribe to the Microsoft feed at Ars Technica. Once again, a single click and all done. Refreshing our Google Reader page shows both of our new RSS feeds ready to enjoy. Conclusion: The Chrome Reader extension makes it as simple as can be to add new RSS feeds to your Google Reader account while browsing with Chrome. Links: Download the Chrome Reader extension (Google Chrome Extensions)

    Read the article

  • Be Careful When Referencing SPList.Items

    - by Brian Jackett
    Be very careful how you reference your SPListItem objects through the SharePoint API.  I'll say it again.  Be very careful how you reference your SPListItem objects through the SharePoint API.  Ok, now that you get the point that this will be a "learn from my mistakes and don't do unsmart things like I did" post, let's dig into what it was that I did poorly. Scenario     For the past year I've been building custom .Net applications that are hosted through SharePoint.  These applications involve a number of SharePoint lists, external databases, custom web parts, and other SharePoint elements to provide functionality.  About two weeks ago I received a message from one of our end users that a custom application was performing slowly.  Specifically, performance was slow when users were performing actions that interacted with the primary SharePoint list storing data for that app. The Problem     I took a copy of the production site into a dev environment to investigate the code that was executing.  After attaching the debugger and running through the code, I quickly found pieces of code referencing SPListItem objects (like below) that were performing very poorly: SPListItem myItem = SPContext.Current.Web.Lists["List Name"].Items.GetItemById(value); // do updates on SPListItem retrieved     As it turns out, the SPList I was referencing was fairly large at ~1000 items and weighing in at over 150 MB.  You see, the problem with my above code is that I retrieved the SPListItem by first (unnecessarily) going through the Items member of the list.  As I understand it, when doing so the executing code will attempt to resolve that entity and pull it from the database into RAM (all 150 MB).  This causes the equivalent of a 50-car pile-up in terms of performance, with a single update taking more than 15 seconds. The Solution     The solution is actually quite simple, and I wish I had realized this during development.  Instead of going through the Items member, it is possible to call GetItemById(…) directly on the SPList, as in the example below: SPListItem myItem = SPContext.Current.Web.Lists["List Name"].GetItemById(value); // do updates on SPListItem retrieved     After making this simple change, performance skyrocketed and updates were back to less than a second.   Conclusion     When given the option between two solutions, usually the simplest is the best solution.  In my scenario I was adding extra complexity by going through the API the long way around to get to the objects I needed, and it ended up hurting performance greatly.  Luckily we were able to find and resolve the performance issue in a relatively short amount of time.  Like I said at the beginning of the post, learn from my mistakes, and I hope this helps you.         -Frog Out   Image linked from http://www.freespirit.com/files/IMAGE/COVER/LARGE/BeCarefulSafe.jpg
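
    For contrast, here is a small consolidated sketch of the two access patterns discussed above; the list name and item ID are placeholders:

    using Microsoft.SharePoint;

    public static class ListItemAccess
    {
        public static void Compare(int itemId)
        {
            SPList list = SPContext.Current.Web.Lists["List Name"];

            // Slow: going through Items forces SharePoint to materialize the whole
            // item collection before handing back the single item requested.
            SPListItem slowItem = list.Items.GetItemById(itemId);

            // Fast: GetItemById directly on the SPList fetches only that one item.
            SPListItem fastItem = list.GetItemById(itemId);
        }
    }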

    Read the article

  • Scheduling Jobs in SQL Server Express - Part 2

    In my previous article Scheduling Jobs in SQL Server Express we saw how to make simple job scheduling in SQL Server 2005 Express work. We limited the scheduling to one time or daily repeats. Sometimes this isn't enough. In this article we'll take a look at how to make a scheduling solution based on Service Broker worthy of the SQL Server Agent itself.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #032

    - by Pinal Dave
    Here is the list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2007 Complete Series of Database Coding Standards and Guidelines: SQL SERVER Database Coding Standards and Guidelines – Introduction; SQL SERVER – Database Coding Standards and Guidelines – Part 1; SQL SERVER – Database Coding Standards and Guidelines – Part 2; SQL SERVER Database Coding Standards and Guidelines Complete List Download. Explanation and Example – SELF JOIN A self join is used when all of the data you require is contained within a single table, but the data you need to extract is related to other data in the same table. Examples of this type of data relate to employee information, where the table may have both an employee's ID number for each record and also a field that displays the ID number of an employee's supervisor or manager. To retrieve the data, the table is required to relate/join to itself. Insert Multiple Records Using One Insert Statement – Use of UNION ALL This is a very interesting question I received from a new developer: how can I insert multiple values into a table using only one insert? Now this is an interesting question. When multiple records are to be inserted into the table, the following is the common way to do it using T-SQL. Function to Display Current Week Date and Day – Weekly Calendar A straight blog post with a script to find the current week's date and day based on the parameters passed to the function. 2008 In my beginning years, I had almost the same confusion as many developers have in their earlier years. Here are two of the interesting questions which I attempted to answer in my early years. Even if you are an experienced developer, you may still like to read the following two questions: Order Of Column In Index; Order of Conditions in WHERE Clauses. Example of DISTINCT in Aggregate Functions Have you ever used DISTINCT with an aggregate function? Here is a simple example of how users can do it. Create a Comma Delimited List Using SELECT Clause From Table Column Straight to a script example where I explained how to do something easily and quickly. Compound Assignment Operators SQL SERVER 2008 introduced the new concept of compound assignment operators. Compound assignment operators have been available in many other programming languages for quite some time. A compound assignment operator is an operator where a variable is operated upon and assigned on the same line. PIVOT and UNPIVOT Table Examples Here is a very interesting question – the answer to the question can be both YES and NO. "If we PIVOT any table and UNPIVOT that table do we get our original table?" Read the blog post to get the explanation of the question above. 2009 What is Interim Table – Simple Definition of Interim Table The interim table is a table that is generated by joining two tables and is not the final result table. In other words, when two tables are joined they create an interim table as the resultset, but the resultset is not final yet. It may be possible that more tables are about to be joined onto the interim table, and more operations are still to be applied to that table (e.g. Order By, Having, etc.). Besides, it may be possible that there is no interim table; sometimes the final table is what is generated when the query is run.
2010 Stored Procedure and Transactions If a stored procedure were transactional, it should roll back the complete transaction when it encounters any errors. Well, that does not happen in this case, which proves that a stored procedure does not, by itself, provide a transactional feature to a batch of T-SQL. Generate Database Script for SQL Azure When talking about SQL Azure, the most common complaint I hear is that the script generated from a stand-alone SQL Server database is not compatible with SQL Azure. This was true for some time, for sure, but not any more. If you have SQL Server 2008 R2 installed, you can follow the guideline below to generate a script which is compatible with SQL Azure. Convert IN to EXISTS – Performance Talk It is NOT necessary that replacing IN with EXISTS gives better performance every time. However, in the case listed above it does, for sure, give better performance. You can read about this subject in the associated blog post. Subquery or Join – Various Options – SQL Server Engine Knows the Best Every time there is a performance tuning exercise, I hear a conversation among developers where some prefer subqueries and some prefer joins. In this two-part blog post, I explain the same in detail with examples. Part 1 | Part 2 Merge Operations – Insert, Update, Delete in Single Execution MERGE is a new feature that provides an efficient way to do multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; however, at present, by using the MERGE statement, we can include the logic of such data changes in one statement that checks when the data is matched and then just updates it, and similarly, when the data is unmatched, it is inserted. 2011 Puzzle – Statistics are not updated but are Created Once Here is the quick scenario about my setup: create a table; insert 1,000 records; check the statistics; now insert 10 times more (10,000) records; check the statistics – they will NOT be updated – WHY? Question to You – When to use Function and When to use Stored Procedure Personally, I believe that they are different things - they cannot be compared. I can say it would be like comparing apples and oranges. Each has its own unique use. However, they can be used interchangeably many times and in real life (i.e., the production environment). I have personally seen both of these being used interchangeably many times. This is the precise reason for asking this question. 2012 In 2012 I had two interesting series running on the blog. If there is no fun in learning, the learning becomes a burden. For that reason, I decided to build a three part quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz. Guess the Next Value – Puzzle 1; Guess the Next Value – Puzzle 2; Guess the Next Value – Puzzle 3; Guess the Next Value – Puzzle 4. Simple Example to Configure Resource Governor – Introduction to Resource Governor Resource Governor is a feature which can manage SQL Server workload and system resource consumption. We can limit the amount of CPU and memory consumption by limiting/governing/throttling on the SQL Server. If different workloads are running on SQL Server and each workload needs different resources, or when workloads are competing with each other for resources and affecting the performance of the whole server, Resource Governor is a very important tool.
Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 – Video SELECT * retrieves unnecessary columns and increases network traffic; when new columns are added, views need to be refreshed manually; it leads to usage of a sub-optimal execution plan; it uses the clustered index in most cases instead of the optimal index; and it is difficult to debug. SQL SERVER – Load Generator – Free Tool From CodePlex The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive as well. A Puzzle – Swap Value of Column Without Case Statement Let us assume there is a single column in the table called Gender. The challenge is to write a single update statement which will flip or swap the value in the column. For example, if the value in the gender column is 'male' swap it with 'female', and if the value is 'female' swap it with 'male'. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • How to auto-unlock Keyring Manager?

    - by Torben Gundtofte-Bruun
    How can I auto-unlock the Keyring Manager in Oneiric Ocelot? I have found this description for Intrepid, but Ocelot looks different so I can't follow the instructions. I have set up my machine to automatically log in to my account. I am using Unity. I don't mind the lesser security of having the keyring automatically unlocked. (This is a home desktop computer of a simple user, not a missile launch system.)

    Read the article

  • SQL Server SELECT INTO

    - by Derek Dieter
    The most efficient method of copying a result set into a new table is to use the SELECT INTO method. This method also follows a very simple syntax: SELECT * INTO dbo.NewTableName FROM dbo.ExistingTable. Once the query above is executed, all the columns and data in the table ExistingTable (along with their datatypes) will be copied into a [...]
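
    As a rough sketch, not part of the original article, the same statement could be run from application code with ADO.NET; the connection string and table names below are placeholders:

    using System.Data.SqlClient;

    class SelectIntoExample
    {
        static void Main()
        {
            // Placeholder connection string; point it at your own server and database.
            const string connectionString = "Server=.;Database=MyDb;Integrated Security=true";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT * INTO dbo.NewTableName FROM dbo.ExistingTable", connection))
            {
                connection.Open();
                // SELECT INTO creates dbo.NewTableName and copies columns, datatypes and rows.
                int rowsCopied = command.ExecuteNonQuery();
            }
        }
    }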

    Read the article

  • CodePlex Daily Summary for Thursday, June 17, 2010

    CodePlex Daily Summary for Thursday, June 17, 2010New ProjectsAstalanumerator: A JavaScript based recursive DOM/JS object inspector. Uses a simple tree menu to enumerate all properties of a object.BDD Log Converter: A simple .NET class and console application that will convert BDD logs (MDT) into XML format.CastleInvestProj: Castle Investigating project Easy Callback: This library facilitates the use of multiple asynchronous calls on the same page, and asynchronous calls from a user control also have a clean cod...Easy Wings: Small webApp to manage aircraft booking in flying club. French only for the moment.EPiServer Template Foundation: EPiServer Template Foundation builds on top of Page Type Builder to provide a framework for common site features such as basic page type properties...guidebook: a project to plan your road trip.Look into documents for e-discovery: Search, browse, tag, annotate documents such as MS Word, PDF, e-mail, etc. Good for legal professionals do e-discovery. One Bus Away for Windows Phone: A Windows Phone 7 application written in Silverlight for the OneBusAway (www.onebusaway.org) website. Allows mobile users to search for public tra...OneBusAway for Windows Phone 7: OneBusAway is a service with transit information for the Seattle, WA region. We are creating a mobile application for Windows Phone 7 utilizing th...PoFabLab - Poetry Generation Library and Editor in .NET: PoFabLab is an open source library and word processor designed for digital poets. The library can scan lines, perform Markov analysis, filter text...Project Axure: More details coming soon.Чат кутежа 2.0: ИРЦ чат специально для форума ЕНЕ简易代码生成器: 初次使用CodePlex,这只是一个测试项目。打算用WPF做一个简单的代码生成器,兼具SQL Server Client功能。使用.Net 4.0, C#开发。运营工作系统: TRAS(Team resource assist system) is a toolkit that help the studio to manage and distribute the daily work, like publish the news, GM broadcast a...New ReleasesAmuse - A New MU* Client For Windows: 2010 June: Important Notice to TestersPlease uninstall any previous versions of Amuse prior to this one before installing. Changes and InformationFirst relea...ASP.NET Generic Data Source Control: V1.0: GenericDataSource - Version 1.0Binary This is the first official binary release of the GenericDataSource for ASP.NET - stable and ready for product...Astalanumerator: Astalanumerator 0.7: I wanted to map all properties in javascript and inspect them regardless if they were objects or not. IE doesn’t support for(i in..) for native pro...BDD Log Converter: BDD Log Converter 0.1.0: First release (0.1.0).DVD Swarm: 0.8.10.616: Major update with improvements to encoding speed.Easy Callback: Easy Callback 1.0.0.0: Easy Callback library 1.0.0.0Facebook Connect Authentication for ASP.NET: Facebook Connect Authentication for ASP.NET - v1.0: Now supporting Facebook's new Open Graph API JavaScript SDK, this release of FBConnectAuth also adds support for running in partially trusted envir...FlickrNet API Library: 3.0 Beta 3: Another small Beta. Changed parsing code so exceptions aren't raised when new attributes are added by Flickr. 
This affects searches where you are ...Infragistics Analytics Framework: Infragistics Analytics Framework 10.2: An updated version of Infragistics Analytics Framework, which utilizes the newest version (v.1.4.4) of MSAF as well as the newest release (v.10.2) ...NUnit Add-in for Growl Notifications: NUnit Add-in for Growl Notifications 1.0 build 1: Version 1.0 build 1:[change] Test run failure notification now disappears automaticallyOpen Source PLM Activities: 3dxml player integration for Aras Innovator: This is just a simple html file you need to add to your Aras Innovator install directory. It loads the 3Dxml player for your 3dxml files. Tested o...patterns & practices - Windows Azure Guidance: WAAG - Part 2 - Drop 1: First code and docs drop for Part 2 of the Windows Azure Architecture Guide Part 1 of the Guide is released here. Highlights of this release are:...Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (June 2010): Installer of the latest binaries of Phalanger 2.0 (June 2010) and its integration into Visual Studio 2008 SP1. * Improved compatibility with P...RIA Services Essentials: Book Club Application (June 16, 2010): Added some XAML to hide/show link to BookShelf page based on whether the user is logged in or not. Updated IsBookOwner authorization rule implement...secs4net: Relase 1.01: version 1.01 releasesELedit: sELedit v1.1c: Added: Tool for exporting NPC/Mob database file that is used by sNPCeditSharePoint Ad Rotator: SPAdRotator 2.0 Beta 2: Added: Open tool pane link to default Web Part text Made all images except the first hidden by default, so the Web Part will degrade gracefully w...sMAPtool: sMAPtool v0.7f (without Maps): Added: 3rd party magnifier softwaresNPCedit: sNPCedit v0.9c: Added: npc/mob names and corresponding datbaseSolidWorks Addin Development: GenericAddinFrameworkR1-06.17.2010: .sTASKedit: sTASKedit v0.8: Important BugFix: there was an mistake in the structure, team-member block and get-items block was swapped internally. Tasks that contains both blo...stefvanhooijdonk.com: UnitTesting-SP2010-TFS2010: Files for my post on TFS2010 and NUnit testing with SP2010 projects. see the post here: http://wp.me/pMnlQ-88 The XSLT here is from http://nunit4t...Telerik CAB Enabling Kit for RadControls for WinForms: TCEK 2010.1.10.504: What's new in v2010.1.0610 (Beta): RadDocking component has been replaced with the latest RadDock control Requirements: Visual Studio 2005+ Tele...TFS Buddy: TFS Buddy 1.2: Fixes a problem with notificationsThales Simulator Library: Version 0.9: The Thales Simulator Library is an implementation of a software emulation of the Thales (formerly Zaxus & Racal) Hardware Security Module cryptogra...Triton Application Framework: Tools - Code Generator - Build 1.0: This is the first release of the Generator. This is buggy but works.VCC: Latest build, v2.1.30616.0: Automatic drop of latest buildXsltDb - DotNetNuke Module Builder: 01.01.27: Code completion for XsltDb, HTML and XSL stuff!! 
Full screen editing Some bugs are still in EditArea component and object lists in code completi...Чат кутежа 2.0: 0.9a build 2 версия: вторая сборка первой альфа-версии ирц-клиента.Most Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryPHPExcelMicrosoft SQL Server Community & SamplesASP.NETMost Active ProjectsdotSpatialpatterns & practices: Enterprise Library Contribpatterns & practices – Enterprise LibraryBlogEngine.NETLightweight Fluent WorkflowRhyduino - Arduino and Managed CodeSunlit World SchemeNB_Store - Free DotNetNuke Ecommerce Catalog ModuleSolidWorks Addin DevelopmentN2 CMS

    Read the article

  • Custom session: Window does not capture full screen area by default. 12.04

    - by juzerali
    I am trying to create a custom session by creating a custom.desktop file in the /usr/share/xsessions folder. Remember, this is not a GNOME or some other standard session. I have created my own applications for this session, which are simple. Case 1 Chrome Browser Contents of custom.desktop file [Desktop Entry] Name=Internet Kiosk Comment=This is an internet kiosk Exec=google-chrome --kiosk TryExec= Icon= Type=Application Issue Chrome browser starts in kiosk mode but does not capture the complete screen area. Some area is left at the bottom and right side of the screen. Case 2 Custom pyGTK app (Quickly) Contents of custom.desktop file [Desktop Entry] Name=Custom Kiosk Comment=This is a custom kiosk Exec=~/MyCustomPyGTKApp TryExec= Icon= Type=Application Issue My custom pyGTK app has window.fullScreen() in the code. That means it should open in full screen without the window chrome (and it does under the normal session). But that, too, leaves lots of space around it. Need Help Can anyone tell me what's going on here? I think it's some issue with borders as pointed out at http://www.instructables.com/id/Setting-Up-Ubuntu-as-a-Kiosk-Web-Appliance/?ALLSTEPS in Step 8: If by chance, Google Chromium is not stretched to the edges with the --kiosk switch enabled there is a simple fix. To stretch Chromium simply log in as your regular user and edit chromeKiosk.sh to not have the --kiosk switch. Then log in as the restricted user, click the wrench and choose options. Then on the Personal Stuff tab select Hide system title bar and use compact borders. Close the options screen and stretch Chromium to fit the monitor. Then go back into the options window and set it to Use system title bar and borders. After this is done, log out of your restricted user (might need to just reboot) and log into your regular user. Edit chromeKiosk.sh back to include the --kiosk switch again and Chromium should be full screen next time you log into the restricted user. If I were to use a custom pyGTK or a gtkmm app, how should I get around this issue? window.fullScreen() should make the window occupy the complete screen area. This has to be done programmatically or in some other way that can scale. I have to deploy this on a large number of machines located in different geographical areas. Doing it manually on every machine is not possible.

    Read the article

  • Data Quality Services Performance Best Practices Guide

    This guide details high-level performance numbers expected and a set of best practices on getting optimal performance when using Data Quality Services (DQS) in SQL Server 2012 with Cumulative Update 1.

    Read the article

  • Why should main() be short?

    - by Stargazer712
    I've been programming for over 9 years, and according to the advice of my first programming teacher, I always keep my main() function extremely short. At first I had no idea why. I just obeyed without understanding, much to the delight of my professors. After gaining experience, I realized that if I designed my code correctly, having a short main() function just sort of happened. Writing modularized code and following the single responsibility principle allowed my code to be designed in "bunches", and main() served as nothing more than a catalyst to get the program running. Fast forward to a few weeks ago: I was looking at Python's source code, and I found the main() function: /* Minimal main program -- everything is loaded from the library */ ... int main(int argc, char **argv) { ... return Py_Main(argc, argv); } Yay Python. Short main() function == Good code. Programming teachers were right. Wanting to look deeper, I took a look at Py_Main. In its entirety, it is defined as follows: /* Main program */ int Py_Main(int argc, char **argv) { int c; int sts; char *command = NULL; char *filename = NULL; char *module = NULL; FILE *fp = stdin; char *p; int unbuffered = 0; int skipfirstline = 0; int stdin_is_interactive = 0; int help = 0; int version = 0; int saw_unbuffered_flag = 0; PyCompilerFlags cf; cf.cf_flags = 0; orig_argc = argc; /* For Py_GetArgcArgv() */ orig_argv = argv; #ifdef RISCOS Py_RISCOSWimpFlag = 0; #endif PySys_ResetWarnOptions(); while ((c = _PyOS_GetOpt(argc, argv, PROGRAM_OPTS)) != EOF) { if (c == 'c') { /* -c is the last option; following arguments that look like options are left for the command to interpret. */ command = (char *)malloc(strlen(_PyOS_optarg) + 2); if (command == NULL) Py_FatalError( "not enough memory to copy -c argument"); strcpy(command, _PyOS_optarg); strcat(command, "\n"); break; } if (c == 'm') { /* -m is the last option; following arguments that look like options are left for the module to interpret.
*/ module = (char *)malloc(strlen(_PyOS_optarg) + 2); if (module == NULL) Py_FatalError( "not enough memory to copy -m argument"); strcpy(module, _PyOS_optarg); break; } switch (c) { case 'b': Py_BytesWarningFlag++; break; case 'd': Py_DebugFlag++; break; case '3': Py_Py3kWarningFlag++; if (!Py_DivisionWarningFlag) Py_DivisionWarningFlag = 1; break; case 'Q': if (strcmp(_PyOS_optarg, "old") == 0) { Py_DivisionWarningFlag = 0; break; } if (strcmp(_PyOS_optarg, "warn") == 0) { Py_DivisionWarningFlag = 1; break; } if (strcmp(_PyOS_optarg, "warnall") == 0) { Py_DivisionWarningFlag = 2; break; } if (strcmp(_PyOS_optarg, "new") == 0) { /* This only affects __main__ */ cf.cf_flags |= CO_FUTURE_DIVISION; /* And this tells the eval loop to treat BINARY_DIVIDE as BINARY_TRUE_DIVIDE */ _Py_QnewFlag = 1; break; } fprintf(stderr, "-Q option should be `-Qold', " "`-Qwarn', `-Qwarnall', or `-Qnew' only\n"); return usage(2, argv[0]); /* NOTREACHED */ case 'i': Py_InspectFlag++; Py_InteractiveFlag++; break; /* case 'J': reserved for Jython */ case 'O': Py_OptimizeFlag++; break; case 'B': Py_DontWriteBytecodeFlag++; break; case 's': Py_NoUserSiteDirectory++; break; case 'S': Py_NoSiteFlag++; break; case 'E': Py_IgnoreEnvironmentFlag++; break; case 't': Py_TabcheckFlag++; break; case 'u': unbuffered++; saw_unbuffered_flag = 1; break; case 'v': Py_VerboseFlag++; break; #ifdef RISCOS case 'w': Py_RISCOSWimpFlag = 1; break; #endif case 'x': skipfirstline = 1; break; /* case 'X': reserved for implementation-specific arguments */ case 'U': Py_UnicodeFlag++; break; case 'h': case '?': help++; break; case 'V': version++; break; case 'W': PySys_AddWarnOption(_PyOS_optarg); break; /* This space reserved for other options */ default: return usage(2, argv[0]); /*NOTREACHED*/ } } if (help) return usage(0, argv[0]); if (version) { fprintf(stderr, "Python %s\n", PY_VERSION); return 0; } if (Py_Py3kWarningFlag && !Py_TabcheckFlag) /* -3 implies -t (but not -tt) */ Py_TabcheckFlag = 1; if (!Py_InspectFlag && (p = Py_GETENV("PYTHONINSPECT")) && *p != '\0') Py_InspectFlag = 1; if (!saw_unbuffered_flag && (p = Py_GETENV("PYTHONUNBUFFERED")) && *p != '\0') unbuffered = 1; if (!Py_NoUserSiteDirectory && (p = Py_GETENV("PYTHONNOUSERSITE")) && *p != '\0') Py_NoUserSiteDirectory = 1; if ((p = Py_GETENV("PYTHONWARNINGS")) && *p != '\0') { char *buf, *warning; buf = (char *)malloc(strlen(p) + 1); if (buf == NULL) Py_FatalError( "not enough memory to copy PYTHONWARNINGS"); strcpy(buf, p); for (warning = strtok(buf, ","); warning != NULL; warning = strtok(NULL, ",")) PySys_AddWarnOption(warning); free(buf); } if (command == NULL && module == NULL && _PyOS_optind < argc && strcmp(argv[_PyOS_optind], "-") != 0) { #ifdef __VMS filename = decc$translate_vms(argv[_PyOS_optind]); if (filename == (char *)0 || filename == (char *)-1) filename = argv[_PyOS_optind]; #else filename = argv[_PyOS_optind]; #endif } stdin_is_interactive = Py_FdIsInteractive(stdin, (char *)0); if (unbuffered) { #if defined(MS_WINDOWS) || defined(__CYGWIN__) _setmode(fileno(stdin), O_BINARY); _setmode(fileno(stdout), O_BINARY); #endif #ifdef HAVE_SETVBUF setvbuf(stdin, (char *)NULL, _IONBF, BUFSIZ); setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ); setvbuf(stderr, (char *)NULL, _IONBF, BUFSIZ); #else /* !HAVE_SETVBUF */ setbuf(stdin, (char *)NULL); setbuf(stdout, (char *)NULL); setbuf(stderr, (char *)NULL); #endif /* !HAVE_SETVBUF */ } else if (Py_InteractiveFlag) { #ifdef MS_WINDOWS /* Doesn't have to have line-buffered -- use unbuffered */ /* Any set[v]buf(stdin, ...) 
screws up Tkinter :-( */ setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ); #else /* !MS_WINDOWS */ #ifdef HAVE_SETVBUF setvbuf(stdin, (char *)NULL, _IOLBF, BUFSIZ); setvbuf(stdout, (char *)NULL, _IOLBF, BUFSIZ); #endif /* HAVE_SETVBUF */ #endif /* !MS_WINDOWS */ /* Leave stderr alone - it should be unbuffered anyway. */ } #ifdef __VMS else { setvbuf (stdout, (char *)NULL, _IOLBF, BUFSIZ); } #endif /* __VMS */ #ifdef __APPLE__ /* On MacOS X, when the Python interpreter is embedded in an application bundle, it gets executed by a bootstrapping script that does os.execve() with an argv[0] that's different from the actual Python executable. This is needed to keep the Finder happy, or rather, to work around Apple's overly strict requirements of the process name. However, we still need a usable sys.executable, so the actual executable path is passed in an environment variable. See Lib/plat-mac/bundlebuiler.py for details about the bootstrap script. */ if ((p = Py_GETENV("PYTHONEXECUTABLE")) && *p != '\0') Py_SetProgramName(p); else Py_SetProgramName(argv[0]); #else Py_SetProgramName(argv[0]); #endif Py_Initialize(); if (Py_VerboseFlag || (command == NULL && filename == NULL && module == NULL && stdin_is_interactive)) { fprintf(stderr, "Python %s on %s\n", Py_GetVersion(), Py_GetPlatform()); if (!Py_NoSiteFlag) fprintf(stderr, "%s\n", COPYRIGHT); } if (command != NULL) { /* Backup _PyOS_optind and force sys.argv[0] = '-c' */ _PyOS_optind--; argv[_PyOS_optind] = "-c"; } if (module != NULL) { /* Backup _PyOS_optind and force sys.argv[0] = '-c' so that PySys_SetArgv correctly sets sys.path[0] to '' rather than looking for a file called "-m". See tracker issue #8202 for details. */ _PyOS_optind--; argv[_PyOS_optind] = "-c"; } PySys_SetArgv(argc-_PyOS_optind, argv+_PyOS_optind); if ((Py_InspectFlag || (command == NULL && filename == NULL && module == NULL)) && isatty(fileno(stdin))) { PyObject *v; v = PyImport_ImportModule("readline"); if (v == NULL) PyErr_Clear(); else Py_DECREF(v); } if (command) { sts = PyRun_SimpleStringFlags(command, &cf) != 0; free(command); } else if (module) { sts = RunModule(module, 1); free(module); } else { if (filename == NULL && stdin_is_interactive) { Py_InspectFlag = 0; /* do exit on SystemExit */ RunStartupFile(&cf); } /* XXX */ sts = -1; /* keep track of whether we've already run __main__ */ if (filename != NULL) { sts = RunMainFromImporter(filename); } if (sts==-1 && filename!=NULL) { if ((fp = fopen(filename, "r")) == NULL) { fprintf(stderr, "%s: can't open file '%s': [Errno %d] %s\n", argv[0], filename, errno, strerror(errno)); return 2; } else if (skipfirstline) { int ch; /* Push back first newline so line numbers remain the same */ while ((ch = getc(fp)) != EOF) { if (ch == '\n') { (void)ungetc(ch, fp); break; } } } { /* XXX: does this work on Win/Win64? (see posix_fstat) */ struct stat sb; if (fstat(fileno(fp), &sb) == 0 && S_ISDIR(sb.st_mode)) { fprintf(stderr, "%s: '%s' is a directory, cannot continue\n", argv[0], filename); fclose(fp); return 1; } } } if (sts==-1) { /* call pending calls like signal handlers (SIGINT) */ if (Py_MakePendingCalls() == -1) { PyErr_Print(); sts = 1; } else { sts = PyRun_AnyFileExFlags( fp, filename == NULL ? "<stdin>" : filename, filename != NULL, &cf) != 0; } } } /* Check this environment variable at the end, to give programs the * opportunity to set it from Python. 
     */
    if (!Py_InspectFlag &&
        (p = Py_GETENV("PYTHONINSPECT")) && *p != '\0')
    {
        Py_InspectFlag = 1;
    }

    if (Py_InspectFlag && stdin_is_interactive &&
        (filename != NULL || command != NULL || module != NULL)) {
        Py_InspectFlag = 0;
        /* XXX */
        sts = PyRun_AnyFileFlags(stdin, "<stdin>", &cf) != 0;
    }

    Py_Finalize();
#ifdef RISCOS
    if (Py_RISCOSWimpFlag)
        fprintf(stderr, "\x0cq\x0c"); /* make frontend quit */
#endif

#ifdef __INSURE__
    /* Insure++ is a memory analysis tool that aids in discovering
     * memory leaks and other memory problems. On Python exit, the
     * interned string dictionary is flagged as being in use at exit
     * (which it is). Under normal circumstances, this is fine because
     * the memory will be automatically reclaimed by the system. Under
     * memory debugging, it's a huge source of useless noise, so we
     * trade off slower shutdown for less distraction in the memory
     * reports. -baw
     */
    _Py_ReleaseInternedStrings();
#endif /* __INSURE__ */

    return sts;
}

Good God Almighty... it is big enough to sink the Titanic. It seems as though Python did the "Intro to Programming 101" trick and just moved all of main()'s code into a different function and called it something very similar to "main". Here's my question: Is this code terribly written, or are there other reasons to have a short main function? As it stands right now, I see absolutely no difference between doing this and just moving the code in Py_Main() back into main(). Am I wrong in thinking this?
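For contrast with the wall of code above, the launcher side of the split is tiny. Below is a minimal sketch of a thin main() that simply forwards to the library entry point; real CPython keeps something very close to this in a separate launcher file, but treat the exact contents here as an illustration rather than a verbatim copy of that file:

    /* Thin launcher: all of the real startup work is delegated to
     * Py_Main(), which lives in the interpreter library proper. */
    #include <Python.h>

    int
    main(int argc, char **argv)
    {
        return Py_Main(argc, argv);
    }

One practical argument for the split is that any program embedding the interpreter can link against the same library and reuse the identical startup path by calling Py_Main() itself, instead of duplicating all of the option parsing and initialization quoted above.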

    Read the article

  • Partner Webcast – More out of ODA with DB Options - 19 July 2012

    - by Thanos
    The Simple, Reliable, Affordable Path to High-Availability Databases. Critical business data needs to be available 24/7 for users and customers, but it can be a struggle to find the time and resources to build a highly available database system that’s reliable and affordable. That’s why Oracle created the new Oracle Database Appliance: a complete package of software, server, storage, and networking. The Oracle Database Appliance integrates the world’s most popular database - Oracle Database 11g - with system software, servers, storage and networking in a single box. Your business gets the benefit of a reliable, secure and highly available database to support applications and maintain continuity – as well as groundbreaking ease of use. But that is not all: with support for all Oracle Database Options, the Oracle Database Appliance can be the ideal solution for many use cases. The benefits? Unmatched performance, reliability & security for your data that’s there when you need it – which is all the time. Fast installation, simple deployment, easy management. Out of the box. Significant cost savings & reduced risk and complexity compared to integrating all the elements yourself. Ongoing lower total cost of ownership with multiple automated support, detection & correction functions that also save you time. Discover the Oracle Database Appliance Value Proposition and learn how to position it and combine it with database options to capture new business and roll out solutions safely and with maximum cost efficiency. Agenda: Oracle Database & Engineered Systems Innovation. What’s the Oracle Database Appliance? Oracle Database Appliance Value Proposition. Oracle Database Appliance with Database Options. Oracle Database Appliance Partners Business. Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now! For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com. Visit our ISV Migration Center blog regularly, or follow us @oracleimc, to learn more about Oracle Technologies as well as upcoming partner webcasts and events.

    Read the article

  • Excellent Windows Azure benchmarks

    - by Sarang
    The Extreme Computing Group has released a fairly comprehensive set of benchmarks for almost all aspects of Windows Azure. They have also provided the source code, to alleviate any doubts that may surface with the MSFT logo lurking around the top right of their homepage :) (which also resides at a cloudapp.net url). The code is simple and the tests are comprehensive enough to hold as data points for customer interactions. Add to that the clean, no-nonsense Silverlight charts used to render the benchmarks, and you are set to sell. Technorati Tags: Azure,Benchmark,Extreme Computing Group

    Read the article

  • Timestep schemes for physics simulations

    - by ktodisco
    The operations used for stepping a physics simulation are most commonly:
    1. Integrate velocity and position
    2. Collision detection and resolution
    3. Contact resolution (in advanced cases)
    A while ago I came across this paper from Stanford that proposed an alternative scheme, which is as follows:
    1. Collision detection and resolution
    2. Integrate velocity
    3. Contact resolution
    4. Integrate position
    It's intriguing because it allows for robust solutions to the stacking problem. So it got me wondering... What, if any, alternative schemes are available, either simple or complex? What are their benefits, drawbacks, and performance considerations?
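    A toy 1-D sketch of the two orderings, assuming a semi-implicit Euler integrator and a single body dropping onto a floor at y = 0; the helper names and the floor-contact logic are illustrative assumptions, not the solver from the paper:

        #include <stdio.h>

        /* Toy world: one body with a height and a vertical velocity. */
        typedef struct { double y, vy; } Body;

        static const double GRAVITY = -9.81;

        static void integrate_velocity(Body *b, double dt) { b->vy += GRAVITY * dt; }
        static void integrate_position(Body *b, double dt) { b->y  += b->vy * dt; }

        /* Collision resolution stand-in: push the body back out of the floor. */
        static void resolve_collisions(Body *b) {
            if (b->y < 0.0)
                b->y = 0.0;
        }

        /* Contact resolution stand-in: a body resting on the floor must not
         * keep a downward velocity, or it will sink back in next step. */
        static void solve_contacts(Body *b) {
            if (b->y <= 0.0 && b->vy < 0.0)
                b->vy = 0.0;
        }

        /* Common ordering: integrate first, then repair penetrations. */
        static void step_classic(Body *b, double dt) {
            integrate_velocity(b, dt);
            integrate_position(b, dt);
            resolve_collisions(b);
            solve_contacts(b);
        }

        /* Ordering from the question: velocities are corrected against
         * contacts before positions advance, so resting bodies never
         * penetrate in the first place. */
        static void step_alternative(Body *b, double dt) {
            resolve_collisions(b);
            integrate_velocity(b, dt);
            solve_contacts(b);
            integrate_position(b, dt);
        }

        int main(void) {
            Body a = { 1.0, 0.0 }, c = { 1.0, 0.0 };
            int i;
            for (i = 0; i < 120; i++) {
                step_classic(&a, 1.0 / 60.0);
                step_alternative(&c, 1.0 / 60.0);
            }
            printf("classic: y=%g vy=%g, alternative: y=%g vy=%g\n",
                   a.y, a.vy, c.y, c.vy);
            return 0;
        }

    The only structural difference is the call order: both step functions do the same work per frame, but the second scheme fixes up velocities against contacts before positions move, which is what makes resting stacks easier to keep stable.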

    Read the article
