Search Results


  • SQL SERVER – Information Related to DATETIME and DATETIME2

    - by pinaldave
    I recently received an interesting comment on the blog regarding a workaround to overcome the precision issue when dealing with DATETIME and DATETIME2. I have written about this subject earlier here: SQL SERVER – Difference Between GETDATE and SYSDATETIME; SQL SERVER – Difference Between DATETIME and DATETIME2 – WITH GETDATE; SQL SERVER – Difference Between DATETIME and DATETIME2. SQL expert Jing Sheng Zhong left the following comment: The issue you found with SQL Server's new datetime type is related to time source function precision. Folks have found the root cause of the problem – when datetime values are converted (implicitly or explicitly) between different data types, some precision is lost, so the results do not match as expected. Here I would like to give a workaround to the problem the developers met.

    -- Declare and loop
    DECLARE @Interval INT, @CurDate DATETIMEOFFSET;
    CREATE TABLE #TimeTable (FirstDate DATETIME, LastDate DATETIME2, GlobalDate DATETIMEOFFSET)
    SET @Interval = 10000
    WHILE (@Interval > 0)
    BEGIN
    ----SET @CurDate = SYSDATETIMEOFFSET(); -- higher precision for future use only
    SET @CurDate = TODATETIMEOFFSET(GETDATE(), DATEDIFF(N, GETUTCDATE(), GETDATE())); -- lower precision to match existing date process
    INSERT #TimeTable (FirstDate, LastDate, GlobalDate)
    VALUES (@CurDate, @CurDate, @CurDate)
    SET @Interval = @Interval - 1
    END
    GO

    -- Distinct Values
    SELECT COUNT(DISTINCT FirstDate) D_DATETIME,
           COUNT(DISTINCT LastDate) D_DATETIME2,
           COUNT(DISTINCT GlobalDate) D_SYSGETDATE
    FROM #TimeTable
    GO

    -- Join
    SELECT DISTINCT a.FirstDate, b.LastDate, b.GlobalDate,
           CAST(b.GlobalDate AS DATETIME) GlobalDateASDateTime
    FROM #TimeTable a
    INNER JOIN #TimeTable b ON a.FirstDate = CAST(b.GlobalDate AS DATETIME)
    GO

    -- Select
    SELECT * FROM #TimeTable
    GO

    -- Clean up
    DROP TABLE #TimeTable
    GO

    If you read my blog post SQL SERVER – Difference Between DATETIME and DATETIME2, you will notice that I have achieved the same using GETDATE(). Are you using DATETIME2 in your production environment? If yes, I am interested in knowing the use case. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • Developer Training – Various Options for Maximum Benefit – Part 4

    - by pinaldave
    Developer Training - Importance and Significance - Part 1; Developer Training – Employee Morals and Ethics – Part 2; Developer Training – Difficult Questions and Alternative Perspective - Part 3; Developer Training – Various Options for Developer Training – Part 4; Developer Training – A Conclusive Summary - Part 5. If you have been reading this series, by now you are aware of all the pros and cons that can come along with training. We’ve asked and answered hard questions, and investigated the “whys” and “hows” of training. Now it is time to talk about all the different kinds of training that are out there!

    On Job Training
    The most common type of training is on the job training. Everyone receives this kind of education – even experts who come in to consult have to be taught where the printer, pens, and copy machines are. If you are thinking about more concrete topics, though, on the job training can be some of the easiest to come across. Picture this: someone in the company whom you really admire is hard at work on a project. You come up to them and ask to help them out – if they are a busy developer, the odds are that they will say “yes, please!” If you phrase your question as an offer of help, you can receive training without ever putting someone in the awkward position of acting as a mentor. However, some people may want the task of being a mentor. It can never hurt to ask. Most people will be more than willing to pass their knowledge along.

    Extreme Programming
    If your company and coworkers are willing, you can even investigate Extreme Programming. This is a type of programming that allows small teams to quickly develop code and products that are released with almost immediate user feedback. You can find more information at http://www.extremeprogramming.org/. If this is something your company could use, suggest it to your supervisor. Even if they say no, it will make it clear that you are a go-getter who is interested in new and exciting projects. If the answer is yes, then you have the opportunity to get some of the best on the job training around.

    In Person Training
    When you say the word “training,” most people’s minds go back to the classroom, an image they are familiar with. While training doesn’t always have to be in a traditional setting, because it is so familiar it can also be the most valuable type of training. There are many ways to get training through a live instructor. Some companies may be willing to send a representative to you, where employees will get training, sometimes food and coffee, and a live instructor who can answer questions immediately. Sometimes these trainers are also able to do consultations at the same time, which can be invaluable to a company. If you are the one who asks your supervisor for a training session that can also be turned into a consultation, you may stick in their minds as an incredibly dedicated employee. If you can’t find a representative, local colleges can also be a good resource for free or cheap classes – or they may have representatives coming who are willing to take on a few more students.

    Benefits of On Demand Developer Training
    Of course, you can often get the best of all these types of training with online or On Demand training. You can get the benefit of a live instructor who is willing to answer questions (although in this case, usually through e-mail or other online venues), there are often real-world examples to follow along with – like on the job training – and best of all you can learn whenever you have the time or the need. Did a problem with your server come up at midnight when all your supervisors are safe at home and probably in bed? No problem! On Demand training is especially useful if you need to slow down, pause, or rewind a training session. Not even a real-life instructor can do that!

    When I was writing this blog post, I felt that each of the subjects I have covered could be a blog post of its own. However, I wanted to keep the blog post concise, so I touched on three major training aspects: 1) On Job Training, 2) In Person Training and 3) Online Training. Here is the question for you – are there any other training methods available that are effective and that one should consider? If yes, what are they? I may write a follow-up blog post on the same subject next week. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • Oracle UCM Integration with WebCenter

    - by john.brunswick
    Portal deployments always contain some level of content that requires management. Like peanut butter and jelly, or yin and yang, they are inseparable. Unfortunately, unlike peanut butter and jelly, content and portals usually require an extensive amount of work to create a seamless experience for the end users who will be serviced by the portal, as well as for the users who will be contributing and managing the content. With WebCenter Suite, Oracle has understood this need and addressed it by including Universal Content Management (UCM, formerly Stellent) licensing to allow content to be delivered into the portal from a mature, class-leading content management technology. To unlock the most value from this content technology, WebCenter portal technology can leverage a series of integration strategies available through its open standards support, as well as a series of native components to enable content consumption from UCM. This has been done to enable IT teams to reduce solution deployment time and provide quick wins to their business stakeholders. The ongoing cost of ownership for the solution is also greatly reduced through these various integrations. Within this post we will explore the various ways in which content can be:

    - Contributed through out-of-the-box interfaces
    - Displayed natively within the portal (configuration)
    - Exposed programmatically (development)

    The information below showcases how to quickly take advantage of WebCenter's marriage of content and portal technologies, and then leverage the various programmatic integrations available with UCM.

  • swapon --all --verbose : 'read swap header failed: Invalid argument'

    - by user66088
    Recently ran through EnableHibernateWithEncryptedSwap and ran the following command:

    swapon --all --verbose

    and received: 'read swap header failed: Invalid argument'. How do I fix this? Here's some more pertinent output.

    Output of sudo fdisk -l:

    Disk /dev/sda: 80.0 GB, 80026361856 bytes
    255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00006d20

       Device Boot      Start        End      Blocks  Id  System
    /dev/sda1   *        2048     499711      248832  83  Linux
    /dev/sda2          501758  156301311    77899777   5  Extended
    /dev/sda5          501760  156301311    77899776  8e  Linux LVM

    Disk /dev/mapper/ubuntu--t10194-root: 75.5 GB, 75539415040 bytes
    255 heads, 63 sectors/track, 9183 cylinders, total 147537920 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/mapper/ubuntu--t10194-root doesn't contain a valid partition table

    Disk /dev/mapper/ubuntu--t10194-swap_1: 4227 MB, 4227858432 bytes
    255 heads, 63 sectors/track, 514 cylinders, total 8257536 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x08040000

    Disk /dev/mapper/ubuntu--t10194-swap_1 doesn't contain a valid partition table

    Disk /dev/mapper/cryptswap1: 4225 MB, 4225761280 bytes
    255 heads, 63 sectors/track, 513 cylinders, total 8253440 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xd2236983

    Disk /dev/mapper/cryptswap1 doesn't contain a valid partition table

    Thanks for any and ALL help!

  • How to Backup and Transfer Opera Settings, Profiles, and Browsing Sessions

    - by Lori Kaufman
    We’ve previously shown you how to back up Firefox profiles using an extension and third-party software, and how to back up Google Chrome profiles. If you use Opera, there is a free tool that makes it easy to back up Opera profiles, settings, and even browsing sessions. Opera offers a sync service, called Opera Link, which allows you to sync your bookmarks, personal bar, history, Speed Dial, notes, and search engines with other computers. However, this service does not sync your current browsing sessions and passwords. We found a free tool, called Stu’s Opera Settings Import & Export tool, that allows you to export all your Opera settings, profiles, and browsing sessions to an archive and import it into Opera on the same or another computer. Stu’s Opera Settings Import & Export tool is portable and does not need to be installed. Simply download the .zip file using the link at the end of this article. Double-click the osie.exe file to run the program.

  • Create my own database system

    - by Xananax
    Ok, so before I get bashed: I know it's something huge for one person; I don't care if the end product can actually be used or not. I need to learn how databases work in order to use them more efficiently, and my way of learning is by doing. So I want to create my own database system. I am not referring to creating a pseudo-database that would use queries to parse files; that would simply be a filesystem interface with a query language. I am talking about the actual structure of a database engine. And since what I have in mind is neither relational nor document-oriented (it's "node-oriented", if that even exists), I would need any resource to be as abstract and high-level as possible. So how would I go about creating that? What resources/tutorials/books can I read to understand? The language does not matter in the slightest. Ideally, the code would be pseudo-code to illustrate the concept, not tied to a particular language, but anything would do. I was not able to find anything on the matter on Google (since I am so illiterate on the subject, maybe I am just not entering the right search terms). If such resources are not available, then I guess something about how to create a client would at least be a step in the right direction.
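    To make "the actual structure of a database engine" a little more concrete, here is a toy sketch (an illustration, not part of the original question) of one classic building block: an append-only log file plus an in-memory index mapping each key to the byte offset of its record. Real engines layer pages, B-trees or LSM-trees, write-ahead logging, and concurrency control on top of ideas like this, but the sketch shows the component-level thinking the question is after:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Text;

        // Toy storage engine: values are appended to a data file, and an
        // in-memory index remembers where each key's latest record starts.
        class TinyEngine : IDisposable
        {
            private readonly FileStream log;
            private readonly Dictionary<string, long> index = new Dictionary<string, long>();

            public TinyEngine(string path)
            {
                log = new FileStream(path, FileMode.OpenOrCreate, FileAccess.ReadWrite);
            }

            public void Put(string key, string value)
            {
                log.Seek(0, SeekOrigin.End);
                index[key] = log.Position;   // latest write wins; older records become garbage
                byte[] payload = Encoding.UTF8.GetBytes(value);
                log.Write(BitConverter.GetBytes(payload.Length), 0, 4); // 4-byte length prefix
                log.Write(payload, 0, payload.Length);
                log.Flush();
            }

            public string Get(string key)
            {
                long offset;
                if (!index.TryGetValue(key, out offset)) return null;
                log.Seek(offset, SeekOrigin.Begin);
                byte[] lengthBytes = new byte[4];
                log.Read(lengthBytes, 0, 4);  // toy code: assumes reads fill the buffer
                byte[] payload = new byte[BitConverter.ToInt32(lengthBytes, 0)];
                log.Read(payload, 0, payload.Length);
                return Encoding.UTF8.GetString(payload);
            }

            public void Dispose() { log.Dispose(); }
        }

    Even this toy immediately raises the questions a real engine must answer - crash recovery, compacting overwritten records, and indexes too large for RAM - which is a useful map of the topics to read up on.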

  • Globe Trotters: Asian Healthcare CIOs need ‘Security Inside Out’ Approach

    - by Tanu Sood
    In our second edition of Globe Trotters, we wanted to share a feature article that was recently published in Enterprise Innovation. EnterpriseInnovation.net, part of Questex Media Group, is Asia's premier business and technology publication. The article featured MOH Holdings (a holding company of Singapore’s Public Healthcare Institutions) and highlighted the National Electronic Health Record (NEHR) system currently being deployed in Singapore. According to the feature, the NEHR system was built to facilitate seamless exchanges of medical information as patients move across different healthcare settings, and to give healthcare providers more timely access to patients’ healthcare records in Singapore. The NEHR consolidates all clinically relevant information from patients’ visits across the healthcare system throughout their lives and pulls it together as a single record. It allows for data sharing, making the record accessible to authorized healthcare providers across the continuum of care throughout the country. In healthcare, patient data privacy is critical, as is the need to avoid unauthorized access to electronic medical records. As Alan Dawson, director for infrastructure and operations at MOH Holdings, is quoted in the feature, “Protecting the perimeter is no longer enough. Healthcare CIOs today need to adopt a ‘security inside out’ approach that protects information assets all the way from databases to end points.” Oracle has long advocated the ‘Security Inside Out’ approach. From operating systems and infrastructure to databases and middleware, all the way up to applications, organizations need to build in security at every layer and between these layers. This comprehensive approach to security has never been as important as it is today in the social, mobile, cloud (SoMoClo) world. To learn more about Oracle’s Security Inside Out approach, visit our Security page. And for more information on how to prevent unauthorized access, streamline user administration, bolster security and enforce compliance in healthcare, learn more about Oracle Identity Management.

  • Change the User Interface Language in Ubuntu

    - by Matthew Guay
    Would you like to use your Ubuntu computer in another language? Here’s how you can easily change your interface language in Ubuntu. Ubuntu’s default install only includes a couple of languages, but it makes it easy to find and add a new interface language to your computer. To get started, open the System menu, select Administration, and then click Language Support. Ubuntu may ask if you want to update or add components to your current default language when you first open the dialog. Click Install to go ahead and install the additional components, or you can click Remind Me Later to wait, as these will be installed automatically when you add a new language.

    Now we’re ready to find and add an interface language to Ubuntu. Click Install / Remove Languages to add the language you want. Find the language you want in the list, and click the check box to install it. Ubuntu will show you all the components it will install for the language; this often includes spellchecking files for OpenOffice as well. Once you’ve made your selection, click Apply Changes to install your new language. Make sure you’re connected to the internet, as Ubuntu will have to download the additional components you’ve selected. Enter your system password when prompted, and then Ubuntu will download the needed language files and install them.

    Back in the main Language & Text dialog, we’re now ready to set our new language as default. Find your new language in the list, and then click and drag it to the top of the list. Notice that Thai is the first language listed, and English is the second. This will make Thai the default language for menus and windows in this account. The tooltip reminds us that this setting does not affect system settings like currency or date formats. To change these, select the Text tab and pick your new language from the drop-down menu. You can preview the changes in the bottom Example box.

    The changes we just made will only affect this user account; the login screen and startup will not be affected. If you wish to change the language in the startup and login screens also, click Apply System-Wide in both dialogs. Other user accounts will still retain their original language settings; if you wish to change them, you must do it from those accounts. Once you have your new language settings all set, you’ll need to log out of your account and log back in to see your new interface language. When you re-login, Ubuntu may ask you if you want to update your user folders’ names to your new language. For example, here Ubuntu is asking if we want to change our folders to their Thai equivalents. If you wish to do so, click Update, or its equivalent in your language.

    Now your interface will be almost completely translated into your new language. As you can see here, applications with generic names are translated to Thai, but ones with specific names like Shutter keep their original name. Even the help dialogs are translated, which makes it easy for users around the world to get started with Ubuntu. Once again, you may notice some things that are still in English, but almost everything is translated. Adding a new interface language doesn’t add the new language to your keyboard, so you’ll still need to set that up. Check out our article on adding languages to your keyboard to get this set up.
    If you wish to revert to your original language or switch to another new language, simply repeat the above steps, this time dragging your original or new language to the top instead of the one you chose previously.

    Conclusion
    Ubuntu has a large number of supported interface languages to make it user-friendly to people around the globe. And since you can set the language for each user account, it’s easy for multi-lingual individuals to share the same computer. Or, if you’re using Windows, check out our article on how you can Change the User Interface Language in Vista or Windows 7, too!

  • Parsing HTML Documents with the Html Agility Pack

    Screen scraping is the process of programmatically accessing and processing information from an external website. For example, a price comparison website might screen scrape a variety of online retailers to build a database of products and what various retailers are selling them for. Typically, screen scraping is performed by mimicking the behavior of a browser - namely, by making an HTTP request from code and then parsing and analyzing the returned HTML. The .NET Framework offers a variety of classes for accessing data from a remote website, namely the WebClient class and the HttpWebRequest class. These classes are useful for making an HTTP request to a remote website and pulling down the markup from a particular URL, but they offer no assistance in parsing the returned HTML. Instead, developers commonly rely on string parsing methods like String.IndexOf, String.Substring, and the like, or through the use of regular expressions. Another option for parsing HTML documents is to use the Html Agility Pack, a free, open-source library designed to simplify reading from and writing to HTML documents. The Html Agility Pack constructs a Document Object Model (DOM) view of the HTML document being parsed. With a few lines of code, developers can walk through the DOM, moving from a node to its children, or vice versa. Also, the Html Agility Pack can return specific nodes in the DOM through the use of XPath expressions. (The Html Agility Pack also includes a class for downloading an HTML document from a remote website; this means you can both download and parse an external web page using the Html Agility Pack.) This article shows how to get started using the Html Agility Pack and includes a number of real-world examples that illustrate this library's utility. A complete, working demo is available for download at the end of this article. Read on to learn more!
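    For a sense of what the DOM-plus-XPath model described above looks like in code, here is a minimal C# sketch (assuming the Html Agility Pack assembly is referenced; the URL and the XPath query are placeholders):

        using System;
        using HtmlAgilityPack;

        class Program
        {
            static void Main()
            {
                // HtmlWeb both downloads and parses the remote page.
                var web = new HtmlWeb();
                HtmlDocument doc = web.Load("http://example.com/");

                // SelectNodes takes an XPath expression and returns the
                // matching nodes, or null when nothing matches.
                var links = doc.DocumentNode.SelectNodes("//a[@href]");
                if (links != null)
                {
                    foreach (HtmlNode link in links)
                    {
                        string href = link.GetAttributeValue("href", string.Empty);
                        Console.WriteLine("{0} -> {1}", link.InnerText.Trim(), href);
                    }
                }
            }
        }

    Because HtmlWeb handles the download step, the WebClient/HttpWebRequest plumbing mentioned above isn't needed for simple cases.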

  • Internationalize WebCenter Portal - Content Presenter

    - by Stefan Krantz
    Lately we have been involved in engagements where internationalization has been holding the project back from success. In this post we are going to explain how to get Content Presenter and its editorials to comply with the currently selected locale for the WebCenter Portal session. As you probably know by now, WebCenter Portal leverages the localization support from Java Server Faces (JSF); in this post we will assume that localization is controlled and enforced by switching the current browser's locale between English and Spanish. There are two main scenarios in internationalizing content-enabled pages: since Content Presenter offers both presentation and contribution of information, we will look at how to enable seamless integration of the correct localized version of the back-end content file, and how to enable the editor/author to edit the correct localized version of the file based on the current browser locale.

    Solution Scenario 1 - Localization aware content presentation
    Due to the number of steps required to implement the enclosed solution proposal, I have decided to share the solution with you in grouped components for each facet of the solution. If you want more details on each step, you can review the enclosed components. This post will guide you through the steps of enabling each component and what it enables/changes in each section of the system.

    Enable Content Presenter Customization
    By leveraging a predictable naming convention for the data files used to hold the content for the Content Presenter instance, we can easily develop a component that will dynamically switch the name out before presenting the information. The naming convention we have leveraged is the industry best practice of having a shared identifier as a prefix (ContentABC) and a language-coded suffix (_EN) (_ES). So the assumption is that each file pair in the above example should look like the following:

    - English version - (ContentABC_EN)
    - Spanish version - (ContentABC_ES)

    Based on the above, we can now easily switch the language out, regardless of the primary version assigned to the Content Presenter instance, by using the localization support from JSF.
    The Java bean below (oracle.webcenter.doclib.internal.view.presenter.NLSHelperBean) is enclosed in the customization project available for download at the bottom of the post:

    public static final String CP_D_DOCNAME_FORMAT = "%s_%s";
    public static final int CP_UNIQUE_ID_INDEX = 0;
    private ContentPresenter presenter = null;

    public NLSHelperBean() {
        super();
    }

    /**
     * This method updates the configuration for the pageFlowScope to have the
     * correct datafile for the current Locale
     */
    public void initLocaleForDataFile() {
        String dataFile = null;
        // Check that the state of the presenter is present, and make sure the item
        // is eligible for localization by locating the "_" in the name
        if (presenter.getConfiguration().getDatasource() != null &&
            presenter.getConfiguration().getDatasource().isNodeDatasource() &&
            presenter.getConfiguration().getDatasource().getNodeIdDatasource() != null &&
            !presenter.getConfiguration().getDatasource().getNodeIdDatasource().equals("") &&
            presenter.getConfiguration().getDatasource().getNodeIdDatasource().indexOf("_") > 0) {
            dataFile = presenter.getConfiguration().getDatasource().getNodeIdDatasource();
            FacesContext fc = FacesContext.getCurrentInstance();
            // Leverage the current faces context to get the current localization language
            String currentLocale = fc.getViewRoot().getLocale().getLanguage().toUpperCase();
            String newDataFile = dataFile;
            String[] uniqueIdArr = dataFile.split("_");
            if (uniqueIdArr.length > 0) {
                newDataFile = String.format(CP_D_DOCNAME_FORMAT, uniqueIdArr[CP_UNIQUE_ID_INDEX], currentLocale);
            }
            // Replace the current node datasource with the localized datafile.
            presenter.getConfiguration().getDatasource().setNodeIdDatasource(newDataFile);
        }
    }

    With this bean code available to our WebCenter Portal implementation, we can start the next step: overriding the standard behavior in Content Presenter by applying an MDS taskflow customization to the Content Presenter taskflow. The following taskflow customization has been applied to the customization project attached to this post:

    - Library: WebCenter Document Library Service View
    - Path: oracle.webcenter.doclib.view.jsf.taskflows.presenter
    - File: contentPresenter.xml

    Changes made in the above customization view:
    1. A new method invocation activity has been added (initLocaleForDataFile)
    2. The method invocation invokes the new NLSHelperBean
    3. The default activity is moved to the new method invocation (initLocaleForDataFile)
    4. The outcome from the method invocation goes to determine-navigation (the original default activity)

    The above changes conclude the presentation modifications needed to support a localization scenario for a content-driven page. In addition, this customization does not limit or disable the out-of-the-box capabilities of WebCenter Portal.
    Steps to enable the above customization:
    1. Start JDeveloper and open your WebCenter Portal Application
    2. Select "Open Project" and include the extracted project you downloaded (CPNLSCustomizations.zip)
    3. Make sure the build output from the CPNLSCustomizations project is a dependency of your Portal project
    4. Deploy your Portal Application to your WC_CustomPortal managed server
    5. Make sure the naming convention of the two data files follows the above recommendation

    Example result of the solution:

    Solution Scenario 2 - Localization aware content creation and authoring
    As you could see from Solution Scenario 1, we require the naming convention to be strictly followed; in the hands of a user with limited technology knowledge, this can be one of the weak links in the solution. Therefore I strongly recommend that you also follow this part, since it will eliminate that risk and also greatly improve usability for editors and authors. The current WebCenter Portal architecture leverages WebCenter Content to maintain, publish and manage content, so we need to make a few efforts to make sure this part of the architecture is on board with our new naming practice and also simplifies the creation of content for our end users. As you probably remember, the naming convention requires a common prefix, so I propose we enable a new component that helps you auto-name the content item's dDocName (this means that the readable title can still be in a human-readable format). The new component (WCP-LocalizationSupport.zip) built for this scenario will enable a couple of things:

    1. A new service where a sequential number can be generated on request - service name: GET_WCP_LOCALE_CONTENTID
    2. Content Presenter leverages a specific function when launching the content creation wizard from within Content Presenter. The assumption is that users will create the content by clicking the "Create Web Content" button. When the button is clicked, the wizard that opens is actually running inside of the WebCenter Content server; the file executed is (contentwizard.hcsp).
    This file uses JSON commands that generate operations in the content server; I have extended this file to create two identical data files instead of one.

    - First, it creates the English version by leveraging the new service and a global rule to set the dDocName on the original check-in screen; this global rule is available in a configuration package attached to this blog (NLSContentProfileRule.zip)
    - Secondly, we run a set of JSON javascripts to create the Spanish version with the same details, except for the name, where we replace the suffix with (_ES)
    - Then the content creation wizard ends with its out-of-the-box behavior and assigns the Content Presenter instance the English version

    See the JavaScript markup below - this can be changed in (WCP-LocalizationSupport.zip/component/WCP-LocalizationSupport/publish/webcenter):

    //---------------------------------------A-TEAM---------------------------------------
    WCM.ContentWizard.CheckinContentPage.OnCheckinComplete = function(returnParams)
    {
        var callback = WCM.ContentWizard.CheckinContentPage.checkinCompleteCallback;
        WCM.ContentWizard.ChooseContentPage.OnSelectionComplete(returnParams, callback);
        // Load latest DOC_INFO_SIMPLE
        var cgiPath = DOCLIB.config.httpCgiPath;
        var jsonBinder = new WCM.Idc.JSONBinder();
        jsonBinder.SetLocalDataValue('IdcService', 'DOC_INFO_SIMPLE');
        jsonBinder.SetLocalDataValue('dID', returnParams.dID);
        jsonBinder.Send(cgiPath, $CB(this, function(http) {
            var ret = http.GetResponseText();
            var binder = new WCM.Idc.JSONBinder(ret);
            var dDocName = binder.GetResultSetValue('DOC_INFO', 'dDocName', 0);
            if (dDocName.indexOf("_") > 0) {
                var ssBinder = new WCM.Idc.JSONBinder();
                ssBinder.SetLocalDataValue('IdcService', 'SS_CHECKIN_NEW');
                // Additional localized dDocName generated
                ssBinder.SetLocalDataValue('dDocName', getLocalizedDocName(dDocName, "es"));
                ssBinder.SetLocalDataValue('primaryFile', 'default.xml');
                ssBinder.SetLocalDataValue('ssDefaultDocumentToken', 'SSContributorDataFile');

                for (var n = 0; n < binder.GetResultSetFields('DOC_INFO').length; n++) {
                    var field = binder.GetResultSetFields('DOC_INFO')[n];
                    if (field != 'dID' &&
                        field != 'dDocName' &&
                        field != 'dReleaseState' &&
                        field != 'dRevClassID' &&
                        field != 'dRevisionID' &&
                        field != 'dRevLabel') {
                        ssBinder.SetLocalDataValue(field, binder.GetResultSetValue('DOC_INFO', field, 0));
                    }
                }
                ssBinder.Send(cgiPath, $CB(this, function(http) {}));
            }
        }));
    }

    // Support function to create localized dDocNames
    function getLocalizedDocName(dDocName, lang) {
        var result = dDocName.replace("_EN", ("_" + lang));
        return result;
    }
    //---------------------------------------A-TEAM---------------------------------------
    3. By applying the enclosed NLSContentProfileRule.zip, the check-in screen for DataFile creation will have auto-naming enabled with a localization suffix (the default is English). You can change the default language by updating the GlobalNlsRule and assigning your preferred prefix. See the rule markup for the dDocName field below:

    <$executeService("GET_WCP_LOCALE_CONTENTID")$><$dprDefaultValue=WCP_LOCALE.LocaleContentId & "_EN"$>

    Steps to enable the above extensions and configurations:
    1. Install the WebCenter component (WCP-LocalizationSupport.zip), via the AdminServer in the WebCenter Content Administration menus
    2. Enable the component and restart the content server
    3. Apply the configuration bundle to enable the new global rule (GlobalNlsRule), via WebCenter Content Administration/Config Migration Admin

    New Content Creation Experience Result

    Content Editing
    Content editing will by default be enabled for authoring in the currently selected locale, since the content file is selected as described in Solution Scenario 1. This means that a user can switch his browser locale and then get an editing experience adapted to the currently selected locale.

    Notes
    A-Team is planning to post a solution on how to switch the locale of the WebCenter Portal session inline, so the Content Presenter, navigation model and other Faces-related features are localized accordingly. The Content Presenter examples used in this post are an extension of the following post: https://blogs.oracle.com/ATEAM_WEBCENTER/entry/enable_content_editing_of_iterative

    Downloads
    - CPNLSCustomizations.zip - WebCenter Portal, Content Presenter Customization: https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/CPNLSCustomizations.zip
    - WCP-LocalizationSupport.zip - WebCenter Content, extension component to enable localized creation of files with compliant auto-naming: https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/WCP-LocalizationSupport.zip
    - NLSContentProfileRule.zip - WebCenter Content, configuration update bundle to enable the global rule for new check-in naming of data files: https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/NLSContentProfileRule.zip

  • Ten - oh, wait, eleven - Eleven things you should know about the ASP.NET Fall 2012 Update

    - by Jon Galloway
    Today, just a little over two months after the big ASP.NET 4.5 / ASP.NET MVC 4 / ASP.NET Web API / Visual Studio 2012 / Web Matrix 2 release, the first preview of the ASP.NET Fall 2012 Update is out. Here's what you need to know:

    - There are no new framework bits in this release - there's no change or update to ASP.NET Core, ASP.NET MVC or Web Forms features. This means that you can start using it without any updates to your server, upgrade concerns, etc.
    - This update is really an update to the project templates and Visual Studio tooling, conceptually similar to the ASP.NET MVC 3 Tools Update.
    - It's a relatively lightweight install. It's a 41MB download. I've installed it many times and it usually takes 5-7 minutes; it's never required a reboot.
    - It adds some new project templates to ASP.NET MVC: Facebook Application and Single Page Application templates.
    - It adds a lot of cool enhancements to ASP.NET Web API.
    - It adds some tooling that makes it easy to take advantage of features like SignalR, Friendly URLs, and Windows Azure Authentication.
    - Most of the new features are installed via NuGet packages. Since ASP.NET is open source, nightly NuGet packages are available, and the roadmap is published, so most of this has really been publicly available for a while.
    - The official name of this drop is the ASP.NET Fall 2012 Update BUILD Prerelease. Please do not attempt to say that ten times fast. While the EULA doesn't prohibit it, it WILL legally change your first name to Scott.
    - As with all new releases, you can find out everything you need to know about the Fall Update at http://asp.net/vnext (especially the release notes!)
    - I'm going to be showing all of this off, assisted by special guest code monkey Scott Hanselman, this Friday at BUILD: Bleeding edge ASP.NET: See what is next for MVC, Web API, SignalR and more… (and I've heard it will be livestreamed).

    Let's look at some of those things in more detail.

    No new bits
    ASP.NET 4.5, MVC 4 and Web API have a lot of great core features. I see the goal of this update release as making it easier to put those features to use to solve some useful scenarios by taking advantage of NuGet packages and template code. If you create a new ASP.NET MVC application using one of the new templates, you'll see that it's using the ASP.NET MVC 4 RTM NuGet package (4.0.20710.0): This means you can install and use the Fall Update without any impact on your existing projects and no worries about upgrading or compatibility.

    New Facebook Application Template
    ASP.NET MVC 4 (and ASP.NET 4.5 Web Forms) included the ability to authenticate your users via OAuth and OpenID, so you could let users log in to your site using a Facebook account. One of the new changes in the Fall Update is a new template that makes it really easy to create full Facebook applications. You could create a Facebook application in ASP.NET already, you'd just need to go through a few steps:

    1. Search around to find a good Facebook NuGet package, like the Facebook C# SDK (written by my friend Nathan Totten and some other Facebook SDK brainiacs).
    2. Read the Facebook developer documentation to figure out how to authenticate and integrate with them.
    3. Write some code, debug it and repeat until you got something working.
    4. Get started with the application you'd originally wanted to write.

    What this template does for you: eliminate steps 1-3.
    Erik Porter, Nathan and some other experts built out the Facebook Application template so it automatically pulls in and configures the Facebook NuGet package and makes it really easy to take advantage of it in an ASP.NET MVC application. One great example is the way you access a Facebook user's information. Take a look at the following code in a File / New / MVC / Facebook Application site. First, the Home Controller Index action:

    [FacebookAuthorize(Permissions = "email")]
    public ActionResult Index(MyAppUser user, FacebookObjectList<MyAppUserFriend> userFriends)
    {
        ViewBag.Message = "Modify this template to jump-start your Facebook application using ASP.NET MVC.";
        ViewBag.User = user;
        ViewBag.Friends = userFriends.Take(5);
        return View();
    }

    First, notice that there's a FacebookAuthorize attribute which requires that the user is authenticated via Facebook and requires permissions to access their e-mail address. It binds to two things: a custom MyAppUser object and a list of friends. Let's look at the MyAppUser code:

    using Microsoft.AspNet.Mvc.Facebook.Attributes;
    using Microsoft.AspNet.Mvc.Facebook.Models;

    // Add any fields you want to be saved for each user and specify the field name
    // in the JSON coming back from Facebook
    // https://developers.facebook.com/docs/reference/api/user/
    namespace MvcApplication3.Models
    {
        public class MyAppUser : FacebookUser
        {
            public string Name { get; set; }
            [FacebookField(FieldName = "picture", JsonField = "picture.data.url")]
            public string PictureUrl { get; set; }
            public string Email { get; set; }
        }
    }

    You can add in other custom fields if you want, but you can also just bind to a FacebookUser and it will automatically pull in the available fields. You can even just bind directly to a FacebookUser and check for what's available in debug mode, which makes it really easy to explore. For more information and some walkthroughs on creating Facebook applications, see:

    - Deploying your first Facebook App on Azure using ASP.NET MVC Facebook Template (Yao Huang Lin)
    - Facebook Application Template Tutorial (Erik Porter)

    Single Page Application template
    Early releases of ASP.NET MVC 4 included a Single Page Application template, but it was removed for the official release. There was a lot of interest in it, but it was kind of complex, as it handled features for things like data management. The new Single Page Application template that ships with the Fall Update is more lightweight. It uses Knockout.js on the client and ASP.NET Web API on the server, and it includes a sample application that shows how they all work together. I think the real benefit of this application is that it shows a good pattern for using ASP.NET Web API and Knockout.js. For instance, it's easy to end up with a mess of JavaScript when you're building out a client-side application. This template uses three separate JavaScript files (delivered via a Bundle, of course):

    - todoList.js - this is where the main client-side logic lives
    - todoList.dataAccess.js - this defines how the client-side application interacts with the back-end services
    - todoList.bindings.js - this is where you set up events and overrides for the Knockout bindings - for instance, hooking up jQuery validation and defining some client-side events

    This is a fun one to play with, because you can just create a new Single Page Application and hit F5.

    Quick, easy install (with one gotcha)
    One of the cool engineering changes for this release is a big update to the installer to make it more lightweight and efficient.
    I've been running nightly builds of this for a few weeks to prep for my BUILD demos, and the install has been really quick and easy to use. The install takes about 5 minutes, has never required a reboot for me, and the uninstall is just as simple. There's one gotcha, though. In this preview release, you may hit an issue that will require you to uninstall and re-install the NuGet VSIX package. The problem comes up when you create a new MVC application and see this dialog: The solution, as explained in the release notes, is to uninstall and re-install the NuGet VSIX package:

    1. Start Visual Studio 2012 as an Administrator
    2. Go to Tools->Extensions and Updates and uninstall NuGet.
    3. Close Visual Studio
    4. Navigate to the ASP.NET Fall 2012 Update installation folder:
       - For Visual Studio 2012: Program Files\Microsoft ASP.NET\ASP.NET Web Stack\Visual Studio 2012
       - For Visual Studio 2012 Express for Web: Program Files\Microsoft ASP.NET\ASP.NET Web Stack\Visual Studio Express 2012 for Web
    5. Double click on the NuGet.Tools.vsix to reinstall NuGet

    This took me under a minute to do, and I was up and running.

    ASP.NET Web API Update Extravaganza!
    Uh, the Web API team is out of hand. They added a ton of new stuff: OData support, Tracing, and API Help Page generation.

    OData support
    Some people like OData. Some people start twitching when you mention it. If you're in the first group, this is for you. You can add a [Queryable] attribute to an API that returns an IQueryable<Whatever> and you get OData query support from your clients. Then, without any extra changes to your client or server code, your clients can send filters like this: /Suppliers?$filter=Name eq ‘Microsoft’ For more information about OData support in ASP.NET Web API, see Alex James' mega-post about it: OData support in ASP.NET Web API.

    ASP.NET Web API Tracing
    Tracing makes it really easy to leverage the .NET tracing system from within your ASP.NET Web APIs. If you look at the \App_Start\WebApiConfig.cs file in a new ASP.NET Web API project, you'll see a call to TraceConfig.Register(config). That calls into some code in the new \App_Start\TraceConfig.cs file:

    public static void Register(HttpConfiguration configuration)
    {
        if (configuration == null)
        {
            throw new ArgumentNullException("configuration");
        }

        SystemDiagnosticsTraceWriter traceWriter = new SystemDiagnosticsTraceWriter()
        {
            MinimumLevel = TraceLevel.Info,
            IsVerbose = false
        };
        configuration.Services.Replace(typeof(ITraceWriter), traceWriter);
    }

    As you can see, this is using the standard trace system, so you can extend it to any other trace listeners you'd like.
    To see how it works with the built-in diagnostics trace writer, just run the application, call some APIs, and look at the Visual Studio Output window:

    iisexpress.exe Information: 0 : Request, Method=GET, Url=http://localhost:11147/api/Values, Message='http://localhost:11147/api/Values'
    iisexpress.exe Information: 0 : Message='Values', Operation=DefaultHttpControllerSelector.SelectController
    iisexpress.exe Information: 0 : Message='WebAPI.Controllers.ValuesController', Operation=DefaultHttpControllerActivator.Create
    iisexpress.exe Information: 0 : Message='WebAPI.Controllers.ValuesController', Operation=HttpControllerDescriptor.CreateController
    iisexpress.exe Information: 0 : Message='Selected action 'Get()'', Operation=ApiControllerActionSelector.SelectAction
    iisexpress.exe Information: 0 : Operation=HttpActionBinding.ExecuteBindingAsync
    iisexpress.exe Information: 0 : Operation=QueryableAttribute.ActionExecuting
    iisexpress.exe Information: 0 : Message='Action returned 'System.String[]'', Operation=ReflectedHttpActionDescriptor.ExecuteAsync
    iisexpress.exe Information: 0 : Message='Will use same 'JsonMediaTypeFormatter' formatter', Operation=JsonMediaTypeFormatter.GetPerRequestFormatterInstance
    iisexpress.exe Information: 0 : Message='Selected formatter='JsonMediaTypeFormatter', content-type='application/json; charset=utf-8'', Operation=DefaultContentNegotiator.Negotiate
    iisexpress.exe Information: 0 : Operation=ApiControllerActionInvoker.InvokeActionAsync, Status=200 (OK)
    iisexpress.exe Information: 0 : Operation=QueryableAttribute.ActionExecuted, Status=200 (OK)
    iisexpress.exe Information: 0 : Operation=ValuesController.ExecuteAsync, Status=200 (OK)
    iisexpress.exe Information: 0 : Response, Status=200 (OK), Method=GET, Url=http://localhost:11147/api/Values, Message='Content-type='application/json; charset=utf-8', content-length=unknown'
    iisexpress.exe Information: 0 : Operation=JsonMediaTypeFormatter.WriteToStreamAsync
    iisexpress.exe Information: 0 : Operation=ValuesController.Dispose

    API Help Page
    When you create a new ASP.NET Web API project, you'll see an API link in the header: Clicking the API link shows generated help documentation for your ASP.NET Web API controllers: And clicking on any of those APIs shows specific information: What's great is that this information is dynamically generated, so if you add your own new APIs it will automatically show useful and up-to-date help. This system is also completely extensible, so you can generate documentation in other formats or customize the HTML help as much as you'd like. The Help generation code is all included in an ASP.NET MVC Area.

    SignalR
    SignalR is a really slick open source project that was started by some ASP.NET team members in their spare time to add real-time communications capabilities to ASP.NET - and .NET applications in general. It allows you to handle long-running communications channels between your server and multiple connected clients using the best communications channel they can both support - websockets if available, falling back all the way to old technologies like long polling if necessary for old browsers. SignalR remains an open source project, but now it's being included in ASP.NET (also open source, hooray!). That means there's real, official ASP.NET engineering work being put into SignalR, and it's even easier to use in an ASP.NET application. Now in any ASP.NET project type, you can right-click / Add / New Item... SignalR Hub or Persistent Connection.

    And much more...
    There's quite a bit more.
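    To make two of those features concrete, here are a couple of small sketches (mine, not code shipped with the update). First, the [Queryable] OData support described above: the attribute goes on an action that returns IQueryable<T>. The Supplier type and the in-memory data here are hypothetical stand-ins:

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Http;

        public class Supplier
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class SuppliersController : ApiController
        {
            // Hypothetical in-memory data; a real app would query a database.
            private static readonly List<Supplier> suppliers = new List<Supplier>
            {
                new Supplier { Id = 1, Name = "Microsoft" },
                new Supplier { Id = 2, Name = "Contoso" }
            };

            // [Queryable] lets clients append OData query options such as
            // /Suppliers?$filter=Name eq 'Microsoft' with no extra server code.
            [Queryable]
            public IQueryable<Supplier> Get()
            {
                return suppliers.AsQueryable();
            }
        }

    Second, a minimal SignalR hub of the sort the new item template creates - the hub name and the addMessage client callback are illustrative:

        using Microsoft.AspNet.SignalR;

        public class ChatHub : Hub
        {
            // Server-side method that clients invoke; it broadcasts the message
            // to the addMessage callback registered on every connected client.
            public void Send(string name, string message)
            {
                Clients.All.addMessage(name, message);
            }
        }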
You can find more info at http://asp.net/vnext, and we'll be adding more content as fast as we can. Watch my BUILD talk to see as I demonstrate these and other features in the ASP.NET Fall 2012 Update, as well as some other even futurey-er stuff!

  • Looking for HTML 5 Presentations? Download Google’s HTML 5 presentation with embedded demos

    - by Gopinath
    Are you interested in learning HTML 5 and looking for a good presentation? Are you willing to give a session on HTML 5 to your colleagues or students and looking for a presentation? If so, your search is going to end now. The Google Chrome team has created an online HTML 5 presentation to showcase the bleeding-edge features for modern desktop and mobile browsers. You can access the presentation at http://slides.html5rocks.com and present it to your audience with working demos of various HTML 5 features. If you want to have offline access to the presentation, you can download the entire source code from http://code.google.com/p/html5rocks and play it offline on your computer. The presentation is regularly updated by the Google Chrome team, and as I write this post the following features are showcased:

    - Offline / Storage
    - Real-time / Communication
    - File / Hardware Access
    - Semantics & Markup
    - Graphics / Multimedia
    - CSS3
    - Nuts & Bolts

    The best part of this presentation is the embedded demos that let you showcase the features as you present them, with live hands-on experience. For example, in the Offline / Storage slides you can create a Web SQL database, create tables, add new rows, retrieve data and drop the tables. The interface of the demos is very simple and easy to showcase. As the slides were built by the Google Chrome team to showcase features built into Chrome, it’s recommended to use the Chrome browser for the presentation walkthrough. Link to HTML 5 Presentation: http://slides.html5rocks.com

  • Java ME SDK 3.2 is now live

    - by SungmoonCho
    Hi everyone, It has been a while since we released the last version. We have been very busy integrating new features and making lots of usability improvements in this new version. The datasheet is available here. Please visit the Java ME SDK 3.2 download page to get the latest and best version yet! Some of the new features in this version are described below.

    Embedded Application Support
    Oracle Java ME SDK 3.2 now supports the new Oracle® Java ME Embedded. This includes support for JSR 228, the Information Module Profile-Next Generation API (IMP-NG). You can test and debug applications either on the built-in device emulators or on your device.

    Memory Monitor
    The Memory Monitor shows memory use as an application runs. It displays a dynamic detailed listing of the memory usage per object in table form, and a graphical representation of the memory use over time.

    Eclipse IDE support
    Oracle Java ME SDK 3.2 now officially supports the Eclipse IDE. Once you install the Java ME SDK plugins in Eclipse, you can start developing, debugging, and profiling your mobile or embedded application.

    Skin Creator
    With the Custom Device Skin Creator, you can create your own skins. The appearance of the custom skins is generic, but the functionality can be tailored to your own specifications.

    Here are the release highlights:

    - Implementation and support for the new Oracle® Java Wireless Client 3.2 runtime and the Oracle® Java ME Embedded runtime.
    - The AMS in the CLDC emulators has a new look and new functionality (Install Application, Manage Certificate Authorities and Output Console).
    - Support for JSR 228, the Information Module Profile-Next Generation API (IMP-NG). The IMP-NG platform is implemented as a subset of CLDC. Support includes: a new emulator for headless devices; Javadocs for the following Oracle APIs: Device Access API, Logging API, AMS API, and AccessPoint API; and new demos for IMP-NG features, which can be run on the emulator or on a real device running the Oracle® Java ME Embedded runtime.
    - New Custom Device Skin Creator. This tool provides a way to create and manage custom emulator skins. The skin appearance is generic, but the functionality, such as the JSRs supported or the device properties, is up to you. This utility is only supported in NetBeans.
    - Eclipse plugin for CLDC/MIDP. For the first time, Oracle Java ME SDK is available as an Eclipse plugin. The Eclipse version does not support CDC, the Memory Monitor, and the Custom Device Skin Creator in this release.
    - All Java ME tools are implemented as NetBeans plugins. As of this release, the plugins integrate Java ME utilities into the standard NetBeans menus. Tools > Java ME is the place to launch Java ME utilities, including the new Skin Creator. Profile > Java ME is the place to work with the Network Monitor and the Memory Monitor. Use the standard NetBeans tools for debugging.
    - Profiling, network monitoring, and memory monitoring are integrated with the NetBeans profiling tools.
    - New network monitoring protocols are supported in this release: WMA, SIP, Bluetooth and OBEX, SATSA APDU and JCRMI, and server sockets.
    - Java ME SDK Update Center. Oracle Java ME SDK can be updated or extended by new components. The Update Center can download, install, and uninstall plugins specific to the Java ME SDK. A plugin consists of runtime components and skins.
    - Bug fixes and enhancements.
    This version comes with a few known problems. All of them have workarounds, so I hope you don't get stuck on these issues when you are using the product.

    - You cannot watch static variables during an Eclipse debugging session, and sometimes the Variable view cannot show data. Workaround: in the source code, move the mouse over the required variable to inspect its value.
    - A real device deleted from the Device Manager still appears in the Device Selector. Workaround: kill the device manager in the system tray and relaunch it; you will then see the device removed from the list.
    - On-device profiling does not work on a device. CPU profiling, network monitoring, and memory monitoring do not work on the device, since the device runtime does not yet support them. Please do the profiling with your emulator first, and then test your application on the device.
    - In the Device Selector, using Clean Database on a real external device causes a null pointer exception. External devices do not have a database recognized by the SDK, so you can disregard this exception message.
    - Suspending the emulator during a Memory Monitor session hangs the emulator. Do not use the Suspend option (F5) while the Memory Monitor is running. If the emulator is hung, open the Windows Task Manager and stop the emulator process (javaw). To switch to another application while the Memory Monitor is running, choose Application > AMS Home (F4), and select a different application.

    Please let us know how we can make it even better by sending us your feedback. -Java ME SDK Team

  • Letter to Ballmer: Making Better Consumer Devices

    - by andrewbrust
    Last year, I wrote Steve Ballmer an email, and he was kind enough to write me back. The email contained a scan of a column I wrote praising Microsoft’s BI strategy. His reply contained three simple words: “Super nice thanks.” Well, now I’d like to write to Steve again, in an open letter format, and this time the love may be a bit tougher. But I’m still super earnest. The past two days have been eventful ones for Microsoft: the company announced the departure of company veterans Robbie Bach and J Allard, and the market announced Apple is now besting Microsoft in market capitalization. Plus, announcements were made that make it plain that Ballmer will, in effect, be running Microsoft’s Entertainment & Devices division himself. With that in mind, I’d like to offer my list of a dozen things I think Microsoft’s CEO should do to improve that division’s offerings and, hopefully, its bottom line. So here goes:

    1. On Windows Phone 7, Stay the Course
    The press is teeming with headlines and reader comments proclaiming the death-before-arrival of Windows Phone 7. That’s plain silly. You’ve got the makings of a great and unique SmartPhone platform, and you’re the only company (even considering RIM) that can offer full-fidelity Exchange integration, not to mention implementing Office on the device. Let the existing team finish this puppy and ship it. And then have them pump out a few updates, over-the-air, quickly. Show them that Google Android’s not the only product that can do good, rapid dot releases. And another thing: make sure your OEMs’ devices have flawless touch screens. If they don’t, then you shouldn’t certify them for delivery to customers. Period. Oh, and kill the Kin, quietly. It was DOA, and you know it.

    2. Move Media Center to the Xbox Platform
    Media Center is, at its core, a good product. But delivering a media distribution and DVR platform on a sophisticated PC operating system like Windows 7 just creates too many moving parts. Xbox already functions as the best Media Center extender device – it should actually be the hub as well. Media Center is mostly based on .NET code – and XNA is a .NET environment for Xbox – find a way to bridge that small gap and make Media Center a joy to work with instead of a frustration. Beating Apple TV out of this sub-market is the lowest hanging fruit on the tree (goofy pun, but it’s true).

    3. Integrate Media Center with Mediaroom, or Kill the Latter
    You have two media products with almost identical names. One is for standalone DVRs and the other is for IPTV cable set-tops with DVR capabilities. Can we merge these please? My previous request of putting Media Center on Xbox would seem to tie into this nicely, since you’ve announced plans to do that with Mediaroom already.

    4. Fix the Red Ring of Death
    People love the Xbox, but they really don’t love sending their consoles back every 18-24 months, when they get a bunch of red lights flashing on power up. You’ve handled this defect about as gracefully as possible, but it’s been around for a long time now and it doesn’t seem to be fixed yet. You can do better. In fact, you must do better, or you insult your customers.

    5. Add Blu Ray to Xbox
    I know, streaming movies are the future; physical media is legacy technology. So if that’s true, why did you back HD DVD so hard? You know why: for now, the film studios won’t allow a large selection of new-release, HD, surround-sound content to be distributed on any medium other than Blu Ray or cable pay-per-view/on-demand.
Don't you want home theater buffs to see the Xbox as a fantastic device for their rigs? Don't you want to put PlayStation 3 out of its misery? And if you follow my suggestions above (move Media Center to the Xbox and fix the Red Ring problem), you'd have it all sewn up. Do I think Blu-ray functionality will move a lot of units? No. Do I think that it would move more units with desperately needed influential home theater consumers? You bet. And you might sell more ZunePass subscriptions in the process. But while you're at it, make the fan quieter, please.

6. Make More of Windows Home Server
Home Server is a fantastic product. And for reasons unknown to me, it seems like you're letting it languish. Development of the add-in ecosystem seems underfunded. WHS' unparalleled ease of use and reliability for home PC backup (and emergency restores) goes unsung. Product cycles are slow. Support for your OEMs, who are doing great work, especially in the green space with Atom CPUs, seems lacking. You've married a trophy girl and you keep her cloistered at home! That's cruel, unusual and, um, incredibly ill-advised. Make use of this ace card, and while you're at it, give it real integration with Media Center. The integration thus far is proof-of-concept quality. You should go way past that – both products will benefit immeasurably.

7. Set Up a Partner Platform for Custom Installers
There's a whole sub-industry of companies that install, integrate and configure home theater, security and connected home products. They have an industry group. They are influential in the high end of the consumer electronics industry, and so are their customers. They love Media Center and they love Windows Home Server. But I have talked to several of them at the Consumer Electronics Show and they tell me you don't love them. They find it very difficult to do business with Microsoft, even though they want nothing more than to sell and evangelize your platform. This is a travesty. Please fix it. Get Allison Watson and the Microsoft Partner Network on board and have her hire someone who knows how to run a channel program for consumer electronics companies. Problem solved. Markets expanded.

8. Make Your Own Hardware
In other areas, I know you love your partners. I help run one, so I appreciate that. But when it came to Xbox and Zune you built them yourself (albeit on a contract basis, which is fine). Windows Phone 7 has a chance to work as an OEM play, but it would work better if you produced the devices. At least consider building a reference device that sells alongside your OEMs' offerings. That's what Google did with the Nexus One. And while that phone was not itself a big seller, it catalyzed two wonderful things: (1) a quality bar was set and (2) partners exceeded it. Before the Nexus One, the best Android handset out there was the Motorola Droid. The Nexus One was better, and the HTC Droid Incredible and Evo 4G are now even better than Google's phone, which is why Verizon and Sprint decided not to carry it. Imagine if all Windows Phone 6.x devices were on par with the HTC HD2. I tend to believe you'd have a lot bigger market share than you do now.

9. Continue with Your Retail Initiative
From what I hear, it sounds like it's going well. And this goes right along with making your own hardware. When you build it, they will come. And then it makes the likes of Best Buy and Staples do better.

10. Make an Acquisition (or Two)
TiVo and/or Moxi look ripe for the picking.
With their ability to build stuff people love and your ability to run a business, you might just have something. But do a better job than you did when you bought Danger. Buy the ideas, not just the customers, eh?

11. Make Beautiful Stuff
You've heard this one before, I know. But I have some head-shrinking advice on this one. You know that Apple obsesses over its industrial design. You know that appeals to consumers. But it seems you think doing so is Apple's game exclusively and so you shouldn't even try. Bull dinky. Come to New York and visit the Museum of Modern Art's Architecture and Design gallery. You'll see that lots of companies and product categories have had very high design value well before Apple existed. You can do this, and the Zune HD was a great start. Now run with that. Find those negative voices in your head that are telling you that you can't and shut them up. For good.

12. Burst the Bubble
Some of the products you've built seem like they were conceived in a bizarro world. That would appear to be the result of groupthink. You must do better. And there are lots of people willing to advise you. This includes just about everyone in the Regional Director program, and probably a bunch of MVPs. Heck, I bet the guys at Engadget could help out too. Imagine if you let them see the Kin before it shipped. Talk to high-end gear consumers. Talk to Best Buy and Costco customers too.

Signing Off
I hope this was of value to you. As I wrote this I kept telling myself how obvious, even trite, some of these pieces of advice were and then, because of that, doubting they'd really help. But I decided that they must not be obvious to Microsoft. Sometimes when you get wrapped up in stuff, it's hard to clear your head. I think my head's pretty clear here though (I'm wrapped up in other stuff), so maybe my perspective can help. If not, well, then, I guess they all can't be super nice.

    Read the article

  • SQL SERVER – Difference Between DATETIME and DATETIME2 – WITH GETDATE

    - by pinaldave
Earlier I wrote the blog post SQL SERVER – Difference Between GETDATE and SYSDATETIME, which inspired me to write SQL SERVER – Difference Between DATETIME and DATETIME2. Those two earlier blog posts (along with 4 emails and 3 Twitter questions from readers) inspired this one. I previously populated DATETIME and DATETIME2 fields with SYSDATETIME, which gave me very different behavior, as SYSDATETIME was rounded up/down for the DATETIME datatype. I just ran the same experiment, but instead of populating SYSDATETIME, this script uses the GETDATE function.

DECLARE @Intveral INT
SET @Intveral = 10000
CREATE TABLE #TimeTable (FirstDate DATETIME, LastDate DATETIME2)
WHILE (@Intveral > 0)
BEGIN
INSERT #TimeTable (FirstDate, LastDate)
VALUES (GETDATE(), GETDATE())
SET @Intveral = @Intveral - 1
END
GO
-- Distinct values per column
SELECT COUNT(DISTINCT FirstDate) D_FirstDate, COUNT(DISTINCT LastDate) D_LastDate
FROM #TimeTable
GO
-- Join on the two columns
SELECT DISTINCT a.FirstDate, b.LastDate
FROM #TimeTable a
INNER JOIN #TimeTable b ON a.FirstDate = b.LastDate
GO
SELECT * FROM #TimeTable
GO
-- Clean up
DROP TABLE #TimeTable
GO

Let us run the above script and observe the results. You will find that the GETDATE values populated in both columns, FirstDate and LastDate, are exactly the same. This is because GETDATE returns the DATETIME datatype, and since GETDATE's precision is lower than DATETIME2's, no rounding happens. In other words, this experiment is pointless; I have included it only because I got 4 emails and 3 Twitter questions on the subject. The rule is simple: if the datatype of the variable is less precise than the column's datatype, the data is stored unchanged; if the datatype of the variable is more precise than the column's datatype, the data is rounded. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
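To see that rounding rule in isolation, here is a quick sketch. This is my own illustration, not part of the original experiment, and it assumes a SQL Server version that supports DATETIME2 (2008 or later):

DECLARE @Precise DATETIME2 = '2011-01-01 12:34:56.1234567'
DECLARE @Rounded DATETIME = @Precise
-- DATETIME stores time only to roughly 1/300 of a second (.000/.003/.007),
-- so the variable's extra precision is rounded away on assignment
SELECT @Precise AS PreciseValue, @Rounded AS RoundedValue
GO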

    Read the article

  • SQLAuthority News – Download Whitepaper – Understanding and Controlling Parallel Query Processing in SQL Server

    - by pinaldave
My recent article SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database has received many good comments regarding MAXDOP 1 and MAXDOP 0. I really enjoyed reading the comments, as they came from industry leaders and gurus. While researching the subject further, I ended up on the following white paper written by Microsoft.

Understanding and Controlling Parallel Query Processing in SQL Server

Data warehousing and general reporting applications tend to be CPU intensive because they need to read and process a large number of rows. To facilitate quick data processing for queries that touch a large amount of data, Microsoft SQL Server exploits the power of multiple logical processors to provide parallel query processing operations such as parallel scans. Through extensive testing, we have learned that, for most large queries that are executed in a parallel fashion, SQL Server can deliver linear or nearly linear response time speedup as the number of logical processors increases. However, some queries in high parallelism scenarios perform suboptimally. There are also some parallelism issues that can occur in a multi-user parallel query workload. This white paper describes parallel performance problems you might encounter when you run such queries and workloads, and it explains why these issues occur. In addition, it presents how data warehouse developers can detect these issues, and how they can work around them or mitigate them.

To review the document, please download the Understanding and Controlling Parallel Query Processing in SQL Server Word document. Note: The above abstract has been taken from here. The real question is: have parallel queries made the DBA's life much simpler, or are they viewed as a potential source of performance degradation? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology
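If you want to experiment while reading the white paper, here is a minimal sketch of the two knobs the MAXDOP discussion refers to. It is my illustration, not taken from the paper, and the table name is just an AdventureWorks placeholder:

-- Server-wide cap on parallelism (0 means use all available processors)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'max degree of parallelism', 4
RECONFIGURE
GO
-- Per-query override via a hint; MAXDOP 1 forces a serial plan
SELECT ProductID, COUNT(*) AS OrderCount
FROM Sales.SalesOrderDetail
GROUP BY ProductID
OPTION (MAXDOP 1)
GO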

    Read the article

  • Connecting to DB2 from SSIS

    - by Christopher House
The project I'm currently working on involves moving various pieces of data from a legacy DB2 environment to some SQL Server and flat file locations. Most of the data flows are real time, so they were a natural fit for the client's MQSeries on their iSeries servers and BizTalk to handle the messaging. Some of the data flows, however, are daily batch type transmissions. For the daily batch transmissions, it was decided that we'd use SSIS to pull the data direct from DB2 to either a SQL Server or flat file. I'm not at all an SSIS guy; I've done a bit here and there, but mainly for situations where we needed to move data from a dev environment to QA, mostly informal stuff like that. And, as much as I'm not an SSIS guy, I'm even less a DB2/iSeries guy. Prior to this engagement, my knowledge of DB2 was limited to the fact that it's an IBM product and that it was probably a DBMS platform (that's what the DB in DB2 means, right?).

One of my first goals when I came onto this project was to develop a POC SSIS package to pull some data from DB2 and dump it to a flat file. It sounded like a pretty straightforward task. As always, the devil is in the details. Configuring the DB2 connection manager took a bit of trial and error. As such, I thought I'd post my experiences here in hopes that they might save someone the efforts I went through. That being said, please keep in mind, as I pointed out, I'm not at all a DB2 guy, so my terminology and explanations may not be 100% spot on.

Before you get started, you need to figure out how you're going to connect to DB2. From the research I did, it looks like there are a few options. IBM has both an OLE DB and a .Net data provider, which can be found here. I installed their client access tools and tried to use both the .Net and OLE DB providers, but I received an error message from both when attempting to connect to the iSeries that indicated I needed a license for a product called DB2 Connect. I inquired with one of my client's iSeries resources about a license for this product and it appears they didn't have one, so that meant the IBM drivers were out. The other option that I found quite a bit of discussion around was Microsoft's OLE DB Provider for DB2. This driver is part of the feature pack for SQL Server 2008 Enterprise Edition and can be downloaded here.

As it turns out, I already had Microsoft's driver installed on my dev VM, which struck me as odd since I hadn't installed it. I discovered that the driver is installed with the BizTalk adapter pack for host systems, which was also installed on my VM. However, it looks like the version used by the adapter pack is newer than the version provided in the SQL Server feature pack.

Once you get the driver installed, create a connection manager in your package just like you normally would and select the Microsoft OLE DB Provider for DB2 from the list of available drivers. After you select the driver, you'll need to enter your host name, login credentials and initial catalog. A couple of things to note here. First, the Initial Catalog needs to be the same as your host name. Not sure why that is, but trust me, it just does. Second, for credentials: in my environment, we're using what the client's iSeries people refer to as "profiles". I guess this is similar to SQL auth in the SQL Server world. In other words, they've given me a username and password for connecting to DB2, so I've entered it here. Next, click the Data Links button.
On the Data Links screen, enter your package collection on the first tab. Package collection is one of those DB2 concepts I'm still trying to figure out. From the little bit I've read, packages are used to control SQL compilation, and each DB2 connection needs one. The package collection, I believe, controls where your package is created. One of the iSeries folks I've been working with told me that I should always use QGPL for my package collection, as QGPL is "general purpose" and doesn't require any additional authority.

Next, click the ellipsis next to the Network drop-down. Here you'll want to enter your host name again. Again, not sure why you need to do this, but trust me, my connection wouldn't work until I entered my host name here.

Finally, go to the Advanced tab, select your DBMS platform and check Process binary as character. My environment is DB2 on the iSeries, and iSeries is the replacement for AS/400, so I selected DB2/AS400 for my platform. Process binary as character was necessary to handle some of the DB2 data types; I had a few columns that showed all their data as "System.Byte[]", and checking Process binary as character resolved this.

At this point, you should be good to go. You can go back to the Connection tab on the Data Links dialog to perform a couple of tests to validate your configuration. The Test Connection button is obvious; this just verifies you can connect to the host using the configuration data you've entered. The Packages button will attempt to connect to the host and create the packages required to execute queries.

This isn't meant to be a comprehensive look at SSIS and DB2; these are just some of the notes I've come up with since I started working with DB2 and SSIS. I'm sure as I continue developing my packages, I'll find more quirks and will post them here.
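For reference, all of these dialog settings ultimately get serialized into the connection manager's connection string. Here's a rough sketch of what mine ended up looking like; the key names are from memory and the values are placeholders, so treat this as an assumption to verify against your own Data Links output rather than an exact template:

Provider=DB2OLEDB;User ID=MYPROFILE;Password=*****;
Initial Catalog=MYHOST;Network Transport Library=TCPIP;
Network Address=MYHOST;Network Port=446;
Package Collection=QGPL;Process Binary as Character=True;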

    Read the article

  • Step by Step Install of MAAS and JUJU

    - by John S
I am working on understanding the pieces that I am missing in being able to deploy Juju across the other MAAS nodes. I don't know if I have a step out of place, or if I'm missing a few. The server owns the router, which handles DHCP and DNS. Any assistance is greatly appreciated. At the end I will get either a 409 error or an "arbitrary pick tools 1.16.0" error. It is worth mentioning that the local and aws providers work fine. Hopefully, with all of these steps spelled out, this will help someone else along the way too.

Steps for setting up MAAS and Juju on 12.04 LTS:

Clean install, SSH only from the package selection during install.

sudo apt-get install software-properties-common
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:maas-maintainers/stable
sudo add-apt-repository ppa:juju/stable
sudo apt-get update
sudo apt-get dist-upgrade
sudo reboot
sudo apt-get install maas maas-dns maas-dhcp
sudo ufw disable
sudo reboot

Edit /etc/dhcp/dhcpd.conf:

authoritative;
subnet 10.0.0.0 netmask 255.255.255.0 {
  next-server 10.0.0.2;
  filename "pxelinux.0";
}

sudo maas createsuperuser
sudo maas-import-pxe-files

Log in to MAAS at http://10.x.x.x/MAAS. Cluster controller configuration for eth0: manage DHCP and DNS; IP 10.0.0.2; subnet 255.255.255.0; broadcast 10.0.0.0; router IP 10.0.0.1; IP range low 10.0.0.5, high 10.0.0.180. Commissioning default and distro are set to 12.04; the default domain is local.

sudo maas-cli login maas http://10.x.x.x/MAAS/api/1.0 api-key

ssh-keygen -t rsa -b 2048 (press Enter for the defaults, no password), then cat id_rsa.pub and enter the key into MAAS under SSH keys.

sudo maas-cli maas nodes accept-all (interestingly enough, I only get back [] when executing this)

PXE-boot one machine, accept and commission it, then start and deploy.

sudo apt-get install juju-core juju-local

MAAS config:

maas:
  type: maas
  maas-server: 'http://10.x.x.x:80/MAAS'
  maas-oauth: 'MAAS_API_KEY'
  admin-secret: 'nothing'
  default-series: 'precise'

juju switch maas
sudo juju bootstrap --show-log
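One thing that may help someone else following along: the "MAAS config" block above belongs in Juju's environments.yaml file (typically ~/.juju/environments.yaml), nested under the environments key. A minimal sketch of the whole file, with my values as placeholders, looks like this:

environments:
  maas:
    type: maas
    maas-server: 'http://10.x.x.x:80/MAAS'
    maas-oauth: 'MAAS_API_KEY'
    admin-secret: 'nothing'
    default-series: 'precise'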

    Read the article

  • SQL SERVER – Online Session on What is New in Denali – Today Online

    - by pinaldave
I will be presenting today on the subject Inside of Next Generation SQL Server – Denali online at Zeollar.com. These sessions are really fun, as they are online, downloadable, and 100% demo oriented. I will be using SQL Server 'Denali' CTP 1 to present on the subject of What is New in Denali. The webcast will start at 12:30 PM sharp and will end at 1 PM India Time. It will be 100% demo oriented, with no slides. I will be covering the following topics in the session:

SQL SERVER – Denali Feature – Zoom Query Editor
SQL SERVER – Denali – Improvement in Startup Options
SQL SERVER – Denali – Clipboard Ring – CTRL+SHIFT+V
SQL SERVER – Denali – Multi-Monitor SSMS Windows
SQL SERVER – Denali – Executing Stored Procedure with Result Sets
SQL SERVER – Performance Improvement of Executing Stored Procedure with Result Sets in Denali
SQL SERVER – 'Denali' – A Simple Example of Contained Databases
SQL SERVER – Denali – ObjectID in Negative – Local TempTable has Negative ObjectID
SQL SERVER – Server Side Paging in SQL Server Denali – A Better Alternative
SQL SERVER – Server Side Paging in SQL Server Denali Performance Comparison
SQL SERVER – Denali – SEQUENCE is not IDENTITY
SQL SERVER – Denali – Introduction to SEQUENCE – Simple Example of SEQUENCE

If time permits, we will cover a few more topics. The session will be recorded as well. My earlier session on the topic of Best Practices Analyzer is also available to watch online here: SQL SERVER – Video – Best Practices Analyzer using Microsoft Baseline Configuration Analyzer. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
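As a small teaser for one of the topics above, server-side paging in Denali gets first-class syntax. Here is a minimal sketch of the new OFFSET/FETCH clause; it is my own illustration, not taken from the session demos:

-- Skip the first 20 rows and return the next 10, with no ROW_NUMBER() wrapper needed
SELECT name, object_id
FROM sys.objects
ORDER BY name
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY
GO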

    Read the article

  • How to Use An Antivirus Boot Disc or USB Drive to Ensure Your Computer is Clean

    - by Chris Hoffman
If your computer is infected with malware, running an antivirus within Windows may not be enough to remove it. If your computer has a rootkit, the malware may be able to hide itself from your antivirus software. This is where bootable antivirus solutions come in. They can clean malware from outside the infected Windows system, so the malware won't be running and interfering with the clean-up process.

The Problem With Cleaning Up Malware From Within Windows

Standard antivirus software runs within Windows. If your computer is infected with malware, the antivirus software will have to do battle with the malware. Antivirus software will try to stop the malware and remove it, while the malware will attempt to defend itself and shut down the antivirus. For really nasty malware, your antivirus software may not be able to fully remove it from within Windows. Rootkits, a type of malware that hides itself, can be even trickier. A rootkit could load at boot time before other Windows components and prevent Windows from seeing it, hide its processes from the task manager, and even trick antivirus applications into believing that the rootkit isn't running. The problem here is that the malware and the antivirus are both running on the computer at the same time. The antivirus is attempting to fight the malware on its home turf, where the malware can put up a fight.

Why You Should Use an Antivirus Boot Disc

Antivirus boot discs deal with this by approaching the malware from outside Windows. You boot your computer from a CD or USB drive containing the antivirus, and it loads a specialized operating system from the disc. Even if your Windows installation is completely infected with malware, the special operating system won't have any malware running within it. This means the antivirus program can work on the Windows installation from outside it. The malware won't be running while the antivirus tries to remove it, so the antivirus can methodically locate and remove the harmful software without it interfering. Any rootkits won't be able to set up the tricks they use at Windows boot time to hide themselves from the rest of the operating system. The antivirus will be able to see the rootkits and remove them. These tools are often referred to as "rescue disks." They're meant to be used when you need to rescue a hopelessly infected system.

Bootable Antivirus Options

As with any type of antivirus software, you have quite a few options. Many antivirus companies offer bootable antivirus systems based on their antivirus software. These tools are generally free, even when they're offered by companies that specialize in paid antivirus solutions. Here are a few good options:

avast! Rescue Disk – We like avast! for offering a capable free antivirus with good detection rates in independent tests. avast! now offers the ability to create an antivirus boot disc or USB drive. Just navigate to the Tools -> Rescue Disk option in the avast! desktop application to create bootable media.

BitDefender Rescue CD – BitDefender always seems to receive good scores in independent tests, and the BitDefender Rescue CD offers the same antivirus engine in the form of a bootable disc.

Kaspersky Rescue Disk – Kaspersky also receives good scores in independent tests and offers its own antivirus boot disc.

These are just a handful of options. If you prefer another antivirus for some reason – Comodo, Norton, Avira, ESET, or almost any other antivirus product – you'll probably find that it offers its own system rescue disk.
How to Use an Antivirus Boot Disc

Using an antivirus boot disc or USB drive is actually pretty simple. You'll just need to find the antivirus boot disc you want to use and burn it to disc or install it on a USB drive. You can do this part on any computer, so you can create antivirus boot media on a clean computer and then take it to an infected computer.

Insert the boot media into the infected computer and then reboot. The computer should boot from the removable media and load the secure antivirus environment. (If it doesn't, you may need to change the boot order in your BIOS or UEFI firmware.) You can then follow the instructions on your screen to scan your Windows system for malware and remove it. No malware will be running in the background while you do this.

Antivirus boot discs are useful because they allow you to detect and clean malware infections from outside an infected operating system. If the operating system is severely infected, it may not be possible to remove – or even detect – all the malware from within it.

Image Credit: aussiegall on Flickr

    Read the article

  • Type Object does not support slicing Unity3D

    - by Vish
I am getting the following error in my code and I can't seem to understand why. Can anyone help me with it? This is my current code; the line causing the error is marked with a comment near the end.

var rows : int = 4;
var cols : int = 4;
var totalCards : int = cols * rows;
var matchesNeedToWin : int = totalCards * 0.5;
var matchesMade : int = 0;
var cardW : int = 100;
var cardH : int = 100;
var aCards : Array;
var aGrid : Array;
// This Array will store the two cards that the player flipped
var aCardsFlipped : ArrayList;
// To prevent player from clicking buttons when we don't want him to
var playerCanClick : boolean;
var playerHasWon : boolean = false;

class Card extends System.Object
{
    var isFaceUp : boolean = false;
    var isMatched : boolean = false;
    var img : String;

    function Card ()
    {
        img = "robot";
    }
}

function Start ()
{
    var i : int = 0;
    var j : int = 0;
    playerCanClick = true;

    aCards = new Array ();
    aGrid = new Array ();
    aCardsFlipped = new ArrayList ();

    for (i = 0; i < rows; i++)
    {
        aGrid[i] = new Array ();
        for (j = 0; j < cols; cols++)
        {
            aGrid[i][j] = new Card (); // <------ Error over here
        }
    }
}

function Update ()
{
    Debug.Log("Game Screen has loaded");
}

The error states as follows: Error BCE0048: Type 'Object' does not support slicing. (BCE0048) (Assembly-UnityScript)
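A likely direction for a fix, offered here as an assumption since no accepted answer is quoted: UnityScript's Array class stores its elements as Object, so the compiler cannot resolve the second indexer in aGrid[i][j]. A built-in typed 2D array supports two-index access directly. A minimal sketch:

// Sketch of a possible fix (an assumption, not from the original thread):
// a typed built-in 2D array lets the compiler see aGrid[i, j] as a Card
// rather than an untyped Object.
var aGrid : Card[,];

function Start ()
{
    playerCanClick = true;
    aCards = new Array ();
    aCardsFlipped = new ArrayList ();
    aGrid = new Card[rows, cols];
    for (var i : int = 0; i < rows; i++)
    {
        // note j++ here: the original inner loop increments cols instead,
        // which would never terminate even once the slicing error is fixed
        for (var j : int = 0; j < cols; j++)
        {
            aGrid[i, j] = new Card();
        }
    }
}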

    Read the article

  • ODI SDK: Retrieving Information From the Logs

    - by Christophe Dupupet
It is fairly common to want to retrieve data from the ODI logs: statistics, execution status, even the generated code can be retrieved from the logs. The ODI SDK provides a robust set of APIs to parse the repository and retrieve such information. To locate the information you are looking for, you have to keep in mind the structure of the logs: sessions contain steps; steps contain tasks. The session is the execution unit: basically, each time you execute something (interface, package, procedure, scenario) you create a new session. The steps are the individual entries found in a session: these will be the icons in your package, for instance. Or if you are running an interface, you will have one single step: the interface itself. The tasks represent the more atomic elements of the steps: the individual DDL, DML, scripts and so forth that are generated by ODI, along with all the detailed statistics for that task. All these details can be retrieved with the SDK. Because I recently had a question on the ODIStepReport API, this code focuses explicitly on scenario logs, but a lot more can be done with these APIs. Here is the code sample (you can just cut and paste this code into your ODI 11.1.1.6 Groovy console). Just save, adapt the code to your environment (in particular to connect to your repository) and hit "run".

//Created by ODI Studio
import oracle.odi.core.OdiInstance
import oracle.odi.core.config.OdiInstanceConfig
import oracle.odi.core.config.MasterRepositoryDbInfo
import oracle.odi.core.config.WorkRepositoryDbInfo
import oracle.odi.core.security.Authentication
import oracle.odi.core.config.PoolingAttributes
import oracle.odi.domain.runtime.scenario.finder.IOdiScenarioFinder
import oracle.odi.domain.runtime.scenario.OdiScenario
import java.util.Collection
import java.io.*

/* -----------------------------------------------------------------------------------------
Simple sample code to list all executions of the last version of a scenario,
along with detailed steps information
----------------------------------------------------------------------------------------- */

/* update the following parameters to match your environment => */
def url = "jdbc:oracle:thin:@myserver:1521:orcl"
def driver = "oracle.jdbc.OracleDriver"
def schema = "ODIM1116"
def schemapwd = "ODIM1116PWD"
def workrep = "WORKREP1116"
def odiuser= "SUPERVISOR"
def odiuserpwd = "SUNOPSIS"

// Rather than hardcoding the project code and folder name,
// a great improvement here would be to parse the entire repository
def scenario_name = "LOAD_DWH" /*Scenario Name*/
/* <=End of the update section */

//--------------------------------------
// Connection to the repository
// Note for ODI 11.1.1.6: you could use the predefined odiInstance variable if you are
// running the script from a Studio that is already connected to the repository
def masterInfo = new MasterRepositoryDbInfo(url, driver, schema, schemapwd.toCharArray(), new PoolingAttributes())
def workInfo = new WorkRepositoryDbInfo(workrep, new PoolingAttributes())
def odiInstance = OdiInstance.createInstance(new OdiInstanceConfig(masterInfo, workInfo))

//--------------------------------------
// In all cases, we need to make sure we have authorized access to the repository
def auth = odiInstance.getSecurityManager().createAuthentication(odiuser, odiuserpwd.toCharArray())
odiInstance.getSecurityManager().setCurrentThreadAuthentication(auth)

//--------------------------------------
// Retrieve the scenario we are looking for
def odiScenario = ((IOdiScenarioFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiScenario.class)).findLatestByName(scenario_name)

if (odiScenario == null) {
    println("Error: cannot find scenario "+scenario_name);
    return
}

//--------------------------------------
// Retrieve all reports for the scenario
def OdiScenarioReportsList = odiScenario.getScenarioReports()

println("*** Listing all reports for Scenario \""+scenario_name+"\" ")

//--------------------------------------
// For each report, print the following:
// - start time
// - duration
// - status
// - step reports: selection of details
for (s in OdiScenarioReportsList) {
    println("\tStart time: " + s.getSessionStartTime())
    println("\tDuration: " + s.getSessionDuration())
    println("\tStatus: " + s.getSessionStatus())

    def OdiScenarioStepReportsList = s.getStepReports()
    for (st in OdiScenarioStepReportsList) {
        println("\t\tStep Name: " + st.getStepName())
        println("\t\tStep Resource Name: " + st.getStepResourceName())
        println("\t\tStep Start time: " + st.getStepStartTime())
        println("\t\tStep Duration: " + st.getStepDuration())
        println("\t\tStep Status: " + st.getStepStatus())
        println("\t\tStep # of inserts: " + st.getStepInsertCount())
        println("\t\tStep # of updates: " + st.getStepUpdateCount()+'\n')
    }
    println("\t")
}

    Read the article

  • Customizing the processing of ListItems for asp:RadioButtonList with "Flow" layout and "Horizontal"

    - by evovision
Hi, recently I was asked to add the ability to pad specific elements in a RadioButtonList control a certain distance from each other. Not quite a common everyday task, I would say :)

Prerequisites: an ASP.NET page with a RadioButtonList control that has the RepeatLayout="Flow" and RepeatDirection="Horizontal" properties set.

Implementation:

The underlying data was coming from another source, so the only fast way to add meta information about padding was the text value itself (yes, not a very optimal solution):

Id = 1, Name = "This is first element"

and for padding we agreed to use the <space/> meta tag:

Id = 2, Name = "<space padcount="30px"/>This is second padded element"

To handle item rendering in the RadioButtonList control, I've created a custom class subclassed from it:

    public class CustomRadioButtonList : RadioButtonList
    {
        private Action<ListItem, HtmlTextWriter> _preProcess;

        protected override void RenderItem(ListItemType itemType, int repeatIndex, RepeatInfo repeatInfo, HtmlTextWriter writer)
        {
            if (_preProcess != null)
            {
                _preProcess(this.Items[repeatIndex], writer);
            }

            base.RenderItem(itemType, repeatIndex, repeatInfo, writer);
        }

        public void SetPrePrenderItemFunction(Action<ListItem, HtmlTextWriter> func)
        {
            _preProcess = func;
        }
    }

It is a pretty straightforward approach; the key is to override the RenderItem method. The class has a SetPrePrenderItemFunction method, which is used to pass a custom processing function that takes two parameters: a ListItem and an HtmlTextWriter object.

Now update the existing RadioButtonList control in Default.aspx. Add this to the beginning of the page:

    <%@ Register Namespace="Sample.Controls" TagPrefix="uc1" %>

and update the control to:

    <uc1:CustomRadioButtonList ID="customRbl" runat="server" DataValueField="Id" DataTextField="Name"
        RepeatLayout="Flow" RepeatDirection="Horizontal"></uc1:CustomRadioButtonList>

Now, in the codebehind of the page, add the regular expression that will be used for parsing:

    private Regex _regex = new Regex(@"(?:[<]space padcount\s*?=\s*?(?:'|"")(?<padcount>\d+)(?:(?:\s+)?px)?(?:'|"")\s*?/>)(?<content>.*)?", RegexOptions.IgnoreCase | RegexOptions.Compiled);

and finally set up the processing function in Page_Load:

    protected void Page_Load(object sender, EventArgs e)
    {
        customRbl.DataSource = DataObjects;

        customRbl.SetPrePrenderItemFunction((listItem, writer) =>
        {
            Match match = _regex.Match(listItem.Text);
            if (match.Success)
            {
                writer.Write(string.Format(@"<span style=""padding-left:{0}"">Extreme values: </span>", match.Groups["padcount"].Value + "px"));

                // if you need to pad the listitem itself, use the code below
                //listItem.Attributes.CssStyle.Add("padding-left", match.Groups["padcount"].Value + "px");

                // remove meta tag from text
                listItem.Text = match.Groups["content"].Value;
            }
        });

        customRbl.DataBind();
    }

That's it! :)

Run the attached sample application:

P.S.: of course, several other approaches could have been used for this purpose, including events, and the processing functionality could also be embedded inside the control itself. The current solution suited slightly better for the situation where it was used; in your case, consider this a kick start for your own implementation :)

Source application: CustomRadioButtonList.zip

    Read the article

  • Best Practices vs Reality

    - by RonHill
On a scale depicting how closely best practices are followed, with "always" on one end and "never" on the other, my current company falls uncomfortably close to the latter. Just a couple of trivial examples:

We have no code review process.
There is very little documentation despite a very large code base (and some of it is blatantly incorrect/misleading).
Untested/buggy/uncompilable code is frequently checked in to source control.
It is comically complicated to create a debuggable build for some of our components because of their underlying architecture.
Unhandled exceptions are not uncommon in our releases.
Empty catch { } blocks are everywhere.

Now, with the understanding that it's neither practical nor realistic to follow ALL best practices ALL the time, my question is this: how closely have commonly accepted best practices been followed at the companies you've worked for? I'm kind of a noob – this is only the second company I've worked for – so I'm not sure if I'm just more of an anal-retentive coder or if I've just ended up at mediocre companies. My guess (hope?) is the latter, but a coworker with way more experience than me says every company he's ever worked for is like this. Given the obvious benefits of following most best practices most of the time, I find it hard to believe it's like this everywhere. Am I wrong?

    Read the article

  • WebCenter Spaces 11g - UI Customization

    - by john.brunswick
When developing on top of a portal platform to support an intranet or extranet, a portion of the development time is spent adjusting the out-of-box user templates to adapt the look and feel of the platform to your organization. Generally your deployment will not need to look anything like the sites posted on http://cssremix.com/ or http://www.webcreme.com/, but will meet business needs by adjusting basic elements like navigation, color palette and logo placement. After spending some time doing custom UI development with WebCenter Spaces 11G, I have gathered a few tips that I hope can help to speed anyone's efforts to quickly "skin" a WebCenter Spaces deployment. A detailed white paper was released that outlines a technique to quickly update the UI during runtime: http://www.oracle.com/technology/products/webcenter/pdf/owcs_r11120_cust_skins_runtime_wp.pdf. Customizing at "runtime" means using CSS and images to adjust the page layout and feel, which, when creatively done, can change the pages drastically. WebCenter also allows for detailed templates to manage the placement of major page elements like menus, sidebars, etc., but by adjusting only images and CSS we can end up with something like the custom solution shown below. Let's dive right in and take a look at some tools to make our efforts more efficient.
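To make that concrete, a runtime "skin" change of the kind the white paper describes usually comes down to a short stylesheet layered over the default template. The selectors below are illustrative placeholders only, not actual WebCenter Spaces class names; the white paper documents the real ones to target:

/* Swap in the corporate logo and color palette at runtime */
.CustomBrandingBar {
  background: #1a3556 url(images/company_logo.png) no-repeat left center;
}
/* Adjust top-level navigation spacing and color */
.CustomNavLink a {
  color: #ffffff;
  padding: 0 12px;
}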

    Read the article

< Previous Page | 853 854 855 856 857 858 859 860 861 862 863 864  | Next Page >