Search Results

Search found 9008 results on 361 pages for 'undocumented feature'.


  • Problems with opening CHM Help files from Network or Internet

    - by Rick Strahl
    As a publisher of a help authoring tool called Html Help Builder, I've seen a lot of problems with help files that won't properly display actual topic content and instead display an error message for topics. Here's the scenario: You go ahead and happily build your fancy, schmancy Help File for your application and deploy it to your customer. Or alternately you've created a help file and you let your customers download it off the Internet directly or in a zip file. The customer downloads the file, opens the zip file and copies the help file contained in the zip file to disk. She then opens the help file and finds the following unfortunate result: The help file comes up with all topics in the tree on the left, but a "Navigation to the webpage was cancelled" or "Operation Aborted" error in the Help Viewer's content window whenever you try to open a topic. The CHM file obviously opened since the topic list is there, but the Help Viewer refuses to display the content. Looks like a broken help file, right? But it's not - it's merely a Windows security 'feature' that tries to be overly helpful in protecting you. The reason this happens is that files downloaded off the Internet - including ZIP files and the CHM files contained in those zip files - are marked as coming from the Internet and treated as potentially malicious, so they do not get browsing rights on the local machine; they can't access local Web content, which is exactly what help topics are. If you look at the URL of a help topic you see something like this:   mk:@MSITStore:C:\wwapps\wwIPStuff\wwipstuff.chm::/indexpage.htm which points at a special Microsoft URL moniker that in turn points at the CHM file and a relative path within that HTML Help file. Try pasting a URL like this into Internet Explorer and you'll see the help topic pop up in your browser (along with a warning most likely). Although the URL looks weird, this still equates to a call into the local computer zone, the same as if you had navigated to a local file in IE, which by default is not allowed. Unfortunately, unlike Internet Explorer where you have the option of clicking a security toolbar, the CHM viewer simply refuses to load the page and you get the error page described above. How to Fix This - Unblock the Help File There's a workaround that lets you explicitly 'unblock' a CHM help file. To do this: Open Windows Explorer Find your CHM file Right click and select Properties Click the Unblock button on the General tab Clicking the Unblock button basically tells Windows that you approve this Help File and allows topics to be viewed. Is this insecure? Not unless you're running a really old version of Windows (XP pre-SP1). In recent versions of Windows, Internet Explorer pops up various security dialogs or fires script errors when potentially malicious operations are accessed (like loading ActiveX controls), so it's relatively safe to run local content in the CHM viewer. Since most help files either don't contain script or only run pure JavaScript that doesn't access Web resources, this works fine without issues. How to avoid this Problem As an application developer there's a simple solution to this problem: Always install your Help Files with an Installer. The security warning above pops up because Windows can't validate the source of the CHM file. However, if the help file is installed as part of an installation, all files associated with that installation, including the help file, are trusted. 
A fully installed Help File of an application works just fine because it is trusted by Windows. Summary It's annoying as all hell that this sort of obtrusive marking is necessary, but it's admittedly a necessary evil because of Microsoft's use of the insecure Internet Explorer engine that drives the CHM viewer's topic display. Because help files view local content and script is allowed to execute in CHM files, there's potential for malicious code hiding in CHM files, and the above precautions are supposed to avoid any issues. © Rick Strahl, West Wind Technologies, 2005-2012
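    For developers who ship CHM files outside of an installer, the Unblock step can also be automated by removing the NTFS Zone.Identifier alternate data stream that Windows attaches to downloaded files. Below is a minimal C# sketch of that idea (not from the original article); the file path is just an example, and P/Invoking DeleteFile is only one of several ways to remove the stream:

        using System;
        using System.IO;
        using System.Runtime.InteropServices;

        static class ChmUnblocker
        {
            // DeleteFile accepts alternate data stream paths such as "file.chm:Zone.Identifier".
            [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            static extern bool DeleteFile(string name);

            // Removes the "downloaded from the Internet" marker from a CHM file, if present.
            public static void Unblock(string chmPath)
            {
                if (!File.Exists(chmPath))
                    throw new FileNotFoundException("CHM file not found", chmPath);

                // Deleting the Zone.Identifier stream is equivalent to clicking "Unblock".
                if (!DeleteFile(chmPath + ":Zone.Identifier"))
                {
                    int error = Marshal.GetLastWin32Error();
                    // ERROR_FILE_NOT_FOUND (2) simply means the file was never blocked.
                    if (error != 2)
                        throw new IOException("Could not remove Zone.Identifier, error " + error);
                }
            }
        }

    Calling ChmUnblocker.Unblock(@"C:\wwapps\wwIPStuff\wwipstuff.chm") would then have the same effect as the Unblock button, though installing the help file with an installer remains the cleaner fix.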

    Read the article

  • How to Tell If Your Computer is Overheating and What to Do About It

    - by Chris Hoffman
    Heat is a computer's enemy. Computers are designed with heat dispersion and ventilation in mind so they don't overheat. If too much heat builds up, your computer may become unstable or suddenly shut down. The CPU and graphics card produce much more heat when running demanding applications. If there's a problem with your computer's cooling system, an excess of heat could even physically damage its components. Is Your Computer Overheating? When using a typical computer in a typical way, you shouldn't have to worry about overheating at all. However, if you're encountering system instability issues like abrupt shutdowns, blue screens, and freezes — especially while doing something demanding like playing PC games or encoding video — your computer may be overheating. This can happen for several reasons. Your computer's case may be full of dust, a fan may have failed, something may be blocking your computer's vents, or you may have a compact laptop that was never designed to run at maximum performance for hours on end. Monitoring Your Computer's Temperature First, bear in mind that different CPUs and GPUs (graphics cards) have different optimal temperature ranges. Before getting too worried about a temperature, be sure to check your computer's documentation — or its CPU or graphics card specifications — and ensure you know the temperature ranges your hardware can handle. You can monitor your computer's temperatures in a variety of different ways. First, you may have a way to monitor temperature that is already built into your system. You can often view temperature values in your computer's BIOS or UEFI settings screen. This allows you to quickly see your computer's temperature if Windows freezes or blue screens on you — just boot the computer, enter the BIOS or UEFI screen, and check the temperatures displayed there. Note that not all BIOSes or UEFI screens will display this information, but it is very common. There are also programs that will display your computer's temperature. Such programs just read the sensors inside your computer and show you the temperature value they report, so there are a wide variety of tools you can use for this, from the simple Speccy system information utility to an advanced tool like SpeedFan. HWMonitor also offers this feature, displaying a wide variety of sensor information. (A minimal code sketch of reading one such sensor appears at the end of this article.) Be sure to look at your CPU and graphics card temperatures. You can also find other temperatures, such as the temperature of your hard drive, but these components will generally only overheat if it becomes extremely hot in the computer's case. They shouldn't generate too much heat on their own. If you think your computer may be overheating, don't just glance at these sensors once and ignore them. Do something demanding with your computer, such as running a CPU burn-in test with Prime95, playing a PC game, or running a graphical benchmark. Monitor the computer's temperature while you do this, even checking a few hours later — does any component overheat after you push it hard for a while? Preventing Your Computer From Overheating If your computer is overheating, here are some things you can do about it: Dust Out Your Computer's Case: Dust accumulates in desktop PC cases and even laptops over time, clogging fans and blocking air flow. This dust can cause ventilation problems, trapping heat and preventing your PC from cooling itself properly. Be sure to clean your computer's case occasionally to prevent dust build-up. Unfortunately, it's often more difficult to dust out overheating laptops. 
Ensure Proper Ventilation: Put the computer in a location where it can properly ventilate itself. If it's a desktop, don't push the case up against a wall where the computer's vents would be blocked, and don't leave it near a radiator or heating vent. If it's a laptop, be careful not to block its air vents, particularly when doing something demanding. For example, putting a laptop down on a mattress, allowing it to sink in, and leaving it there can lead to overheating — especially if the laptop is doing something demanding and generating heat it can't get rid of. Check if Fans Are Running: If you're not sure why your computer started overheating, open its case and check that all the fans are running. It's possible that a CPU, graphics card, or case fan failed or became unplugged, reducing air flow. Tune Up Heat Sinks: If your CPU is overheating, its heat sink may not be seated correctly or its thermal paste may be old. You may need to remove the heat sink, re-apply fresh thermal paste, and then reseat the heat sink properly. This tip applies more to tweakers, overclockers, and people who build their own PCs, especially if they may have made a mistake when originally applying the thermal paste. This is often much more difficult when it comes to laptops, which generally aren't designed to be user-serviceable. That can lead to trouble if the laptop becomes filled with dust and needs to be cleaned out, especially if the laptop was never designed to be opened by users at all. Consult our guide to diagnosing and fixing an overheating laptop for help with cooling down a hot laptop. Overheating is a definite danger when overclocking your CPU or graphics card. Overclocking will cause your components to run hotter, and the additional heat will cause problems unless you can properly cool your components. If you've overclocked your hardware and it has started to overheat — well, throttle back the overclock! Image Credit: Vinni Malek on Flickr     
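    As a rough illustration of how such monitoring utilities work, here is a small C# sketch (not from the original article) that reads the ACPI thermal zone temperature through WMI. Note the assumptions: it uses the MSAcpi_ThermalZoneTemperature class, which not every machine exposes, it usually needs administrator rights, and it reports a motherboard thermal zone rather than per-core CPU temperatures:

        using System;
        using System.Management;  // add a reference to System.Management.dll

        class ThermalZoneReader
        {
            static void Main()
            {
                // MSAcpi_ThermalZoneTemperature reports tenths of a kelvin; on hardware
                // without this sensor the query simply returns no instances.
                var searcher = new ManagementObjectSearcher(
                    @"root\WMI",
                    "SELECT * FROM MSAcpi_ThermalZoneTemperature");

                foreach (ManagementObject zone in searcher.Get())
                {
                    double tenthsKelvin = Convert.ToDouble(zone["CurrentTemperature"]);
                    double celsius = tenthsKelvin / 10.0 - 273.15;
                    Console.WriteLine("{0}: {1:F1} C", zone["InstanceName"], celsius);
                }
            }
        }

    Dedicated tools like Speccy, SpeedFan, and HWMonitor read many more sensors (per-core CPU, GPU, and drive temperatures) through vendor-specific interfaces, which is why they remain the practical choice for everyday monitoring.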

    Read the article

  • Global Perspective: Oracle AppAdvantage Does its Stage Debut in the UK

    - by Tanu Sood
    Global Perspective is a monthly series that brings experiences, business needs and real-world use cases from regions across the globe. This month's feature is a follow-up to last month's Global Perspective note from a well-known ACE Director based in EMEA. My first contribution to this blog was before Oracle Open World and I was quite excited about where this initiative would take me in my understanding of the value of Oracle Fusion Middleware. Rimi Bewtra from the Oracle AppAdvantage team came as promised to the Oracle ACE Director briefings and explained what this initiative was all about, and I then asked the directors to take part in the new survey. The story was really well received, and then at the SOA advisory board that many of these ACE Directors already take part in there was a further discussion on how this initiative will help customers understand the benefits of adoption. A few days later Rick Beers launched the program at a lunch of invited customer executives, which included one from Pella who talked about their projects (a quick recap on that here). I wasn't able to stay for the whole event, but what really interested me was that these executives understood the technology but were looking for how they could use it to drive their businesses. Lots of ideas were bubbling up in my head about how we can use this in user groups to help our members, and the timing was fantastic as just three weeks later we had UKOUG_Apps13, our flagship Applications conference in the UK. We had independently been working with Oracle marketing in the UK on an initiative called Apps Transformation to help our members look beyond just the application they use today. We have had a Fusion community page, but felt the options open are now much wider than Fusion Applications: there are acquired applications, social, mobility and of course the underlying technology, Oracle Fusion Middleware. I was really pleased to be allowed to give the Oracle AppAdvantage story as a session in our conference, and we are planning a special Apps Transformation event in March where I hope the Oracle AppAdvantage team will take part and we will have the results of the survey to discuss. But, life also came full circle for me. In my first post, I talked about Andrew Sutherland and his original theory that Oracle Fusion Middleware adoption had technical drivers. Well, Andrew was a speaker at our event and he gave a potted, tech-talk-free update on Oracle Open World. Andrew talked about the Prevailing Technology Winds and what is driving them today: in the past it was the move from simply automating processes (ERP etc.), through the altering of those processes (SOA), and on to consolidation. 
The next drivers are around the need to predict, both faster and more accurately, and to better exploit the information that we have available. He went on to talk about The Nexus of Forces: Social, Mobile, Cloud and Information – harnessing these forces of change with Oracle technology. Gartner really likes this concept and if you want to know more you can get their paper here. All this has made me think, and I hope it will make you think too. Technology can help us drive our businesses better, and understanding your needs can be the first step on your journey, which was the theme of our event in the UK. I spoke to a number of the delegates and I hope to share some of their stories in later posts. If you have a story to share, the survey is at: https://www.surveymonkey.com/s/P335DD3 About the Author: Debra Lilley, Fujitsu Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director. Debra has 18 years' experience with Oracle Applications, with E-Business Suite since 9.4.1, moving to Business Intelligence Team Leader and then Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts. Editor's Note: Debra has kindly agreed to share her musings and experience in a monthly column on the Fusion Middleware blog so do stay tuned…

    Read the article

  • Great event: Microsoft Visual Studio 2010 Launch @ Microsoft TechEd Blore

    - by sathya
    I was really excited to attend day 1 of Microsoft TechEd 2010 in Bangalore. This is the first TechEd that I am attending. The event was really fun, filled with lots of knowledge-sharing sessions and lots of goodies and gifts from the partners. Initially the event started with Murthy's session. He talked about developers in relation to the 5 elements of nature (Pancha Boothaas): 1. Fire - Passion 2. Wave (Water) - Catch the right wave, which we need to apply. 3. Earth - Connections and lots of opportunities around the world 4. Air - It's whatever we breathe. Developers: without them nothing is possible; they are like the air. 5. Sky - Cloud based applications   Next came the keynote and the announcement of Visual Studio by Somasegar. Topics he covered in his speech: 1. Announcement of Visual Studio 2010 2. Announcement of .NET 4.0 3. Announcement of Silverlight later this week 4. What is the current trend? Microsoft has done research with many developers across the globe and got the following feedback from the users. Get Lost (interrupted) - When we are doing some work and somebody calls or interrupts us in some other way, we lose track of what we were doing and have to start again from the beginning. Falling Behind - Technology gets updated phenomenally over a period of time, and developers always feel they are not working with state-of-the-art technology and doubt whether they are staying updated. Lack of Collaboration - When a manager asks what the team members have done, some work might be done and some might not be, and finally everyone ends up in a state of not knowing where the project stands. So they have addressed these 3 points in VS 2010 with the following features: Get Lost - Some cool features which could overcome this: a graphical interface which shows what we have done and where we are, and zoom features at the code level. Falling Behind - Everything is based on the .NET language base; 2010 has been built in such a way that if developers know the native language, that's enough for building good applications. Lack of Collaboration - Dashboard features which show exactly where the project is, with a graphical user interface that, on clicking, drills down even to the code level. 5. An overview of all new features in VS 2010. 6. Some good demos of new features in VS 2010 by Polita and another presenter. Some of the new features included: 1. Team Explorer 2. Zoom in code 3. Ribbon development 4. Development in a single platform for Windows Phone, Xbox, Zune, Azure, Web based and Windows based applications 5. Sequence diagram generation directly from code 6. Dashboards to show project status 7. JavaScript and jQuery IntelliSense 8. Native support for jQuery 9. Packaging feature while deploying 10. Generation of different versions of web.config like Web.Config.Production, Web.Config.Staging, etc. 11. IntelliTrace - Eliminating the "Not Reproducible" statement 12. Automated user interface testing. At the close of the day we had a great event called Demo Extravaganza, where lots of cool projects launched by Microsoft, as well as projects still under research, were shown. I got a lot of info about Bing today. Bing really rocks!!! It has the following: 1. Visual search 2. Product-based search. For each product, different menu filters were provided to make an advanced search 3. Bing Maps was awesome!! 
It zoomed in to street level, and we could imagine we were the people walking or running on the road, seeing real objects like buildings moving past us. 4. Photosynth was used in Bing to show all the images taken around the globe in a 3D format. 5. Formula - If we give it a formula, it automatically gives the value of the variable or the derivation of the expression. There was also some info about some cool touch apps which authenticate TechEd attendees and compute the points they have scored and the sessions they attended. One attendee won an Xbox as a gift in a lucky draw. There were lots of partner stalls, such as Accenture, Intel, Citrix, Micro Focus, Telerik, Infragistics, Sapient, etc. Some offers were provided for us, like 50% off certifications, one free e-learning course, etc. Stay tuned!! Will update you on other events too..

    Read the article

  • Employee Engagement Q&A with John Brunswick

    - by Kellsey Ruppel
    As we are focusing this week on Employee Engagement, I recently sat down with industry expert and thought leader John Brunswick on the topic. Here is the Q&A dialogue we shared.  Q: How do you effectively engage employees to drive business value? A: Motivation, both extrinsic and intrinsic, combined with the relevancy of various channels to support it.  Beyond chaining business strategies like compensation models within an organization, engagement ultimately is most successful when driven by employees' motivations.  Business value derived from engagement through technical capabilities can be objectively measured through metrics like the rate and accuracy of problem solving for a given business function or the frequency of innovation created.  Providing employees performing "knowledge work" with capabilities that allow them to perform work with a higher degree of accuracy in the same or ideally less time adds value for those individuals and, in turn, drives their level of engagement to drive business value. Q: Organizations with high levels of employee engagement outperform the total stock market index by 22%. Can you comment on why you think this might be? A: Alignment through shared purpose.  Zappos is an excellent example of a culture that arguably has higher than average levels of employee engagement, and it permeates every aspect of their organization – embodied externally through their customer experience.  I recently made my first purchase with them and it was obvious through their web experience, visual design, communication style, customer service and attention to detail down to green packaging, that they have an amazingly strong shared purpose.  The Zappos.com 'About page' outlines their "Family Core Values", the first three being "Deliver WOW Through Service, Embrace and Drive Change & Create Fun and A Little Weirdness" – all reflected externally in my interaction with them.  Strong shared purpose enables a higher product and service experience, equating to a dedicated customer base, repeat purchases and expanded market share. Q: Have you seen any trends in the market regarding employee engagement? A: Some companies now see offering a form of social engagement similar to Facebook and LinkedIn as standard communication infrastructure, like email or instant messaging.  Originally offered as standalone tools, the value is now seen when these capabilities are offered in an integrated fashion in the context of business entities.  An emerging area of focus is around employee activities related to their organization on external social platforms, implicitly creating external communities with employees acting on behalf of the brand and interacting with each other (e.g. Twitter).  Companies have reached a formal understanding that this now-established communication medium requires strategies allowing employees to engage.  I have personally met colleagues from Oracle, like Oracle User Experience Director Ultan O'Broin (@ultan), via Twitter before meeting first through internal channels. Q: Employee engagement is important, but what about engaging customers and partners? A: The last few years we have witnessed an interesting evolution from the novelty of self-service to expectations of "intelligent" self-service.  From a consumer standpoint, engagement can end up being a key differentiator, especially in mature markets.  Customers that perform some level of interaction with a brand develop greater affinity for the brand and have a greater probability of acting as an advocate.  
As organizations move toward a model of deeper engagement, they must ensure that their business is positioned to support deeper relationships, offering potentially greater transparency. From a partner standpoint, greater engagement can lead to new types of business opportunities, much in the way that Amazon.com offers a unified shopping experience that can potentially span various vendors.  This same model can be extended to blending services and product delivery models, based on a closeness that was not easily possible before the increased capability of engagement mechanisms. Q: What types of solutions are available to successfully deliver employee engagement? A: Solutions enabling higher levels of engagement do so on the basis of relevancy.  This relevancy is generally supported by aspects of content management, social collaboration, business intelligence, portal and process management technologies.  These technologies can help deliver an experience tailored to a given role or process within an organization, applying equally to work that is structured or unstructured, and appearing in the form of functionality as simple as an online employee directory search or knowledge communities supported by social collaboration, as well as more feature-rich business intelligence dashboards and portals. Looking to learn more about how to effectively engage your employees? Check out this webcast, or read more from John Brunswick. 

    Read the article

  • Limiting Audit Exposure and Managing Risk – Q&A and Follow-Up Conversation

    - by Tanu Sood
    Thanks to all who attended the live ISACA webcast on Limiting Audit Exposure and Managing Risk with Metrics-Driven Identity Analytics. We were really fortunate to have Don Sparks from ISACA moderate the webcast featuring Stuart Lincoln, Vice President, IT P&L Client Services, BNP Paribas, North America and Neil Gandhi, Principal Product Manager, Oracle Identity Analytics. Stuart's insights, given the team's role in providing IT for P&L Client Services and his tremendous experience in identity management and establishing sustainable compliance programs, were a true value-add at yesterday's webcast. And if you are a healthcare organization looking to solve your compliance and security challenges, we recommend you join us for a live webcast on Tuesday, November 29 at 10 am PT. The webcast will feature experts from Kaiser Permanente, PricewaterhouseCoopers and Oracle, and the focus of the discussion will be around the compliance challenges a healthcare organization faces and best practices for tackling those. Here are the details: Healthcare IT News Webcast: Managing Risk and Enforcing Compliance in Healthcare with Identity Analytics, Tuesday, November 29, 2011, 10:00 a.m. PT / 1:00 p.m. ET. Register Today. The ISACA webcast replay is now available on-demand and the slides are also available for download. Since we didn't have time to address all the questions we received during the live Q&A portion of the webcast, we have captured responses to the remaining questions here. Please continue to provide us your feedback and insights from your experience in deploying identity compliance solutions. Q. Can you please clarify the mechanism utilized to populate the Identity Warehouse from each individual application's access management function / files? A. Oracle Identity Analytics (OIA) supports direct imports from applications. Data collection is based on Extract, Transform and Load (ETL), which eliminates the need to write connectors to different applications. Oracle Identity Analytics' import engine supports complex entitlement feeds saved as either text files or XML. The imports can be scheduled on a periodic basis or triggered as needed. If the applications are synchronized with a user provisioning solution like Oracle Identity Manager, Oracle Identity Analytics has a seamless integration to pull in data from Oracle Identity Manager. Q. Can you provide a short summary of the new features in your latest release of Oracle Identity Analytics? A. Oracle recently announced availability of enhanced Oracle Identity Analytics. This release focused on easing the certification process by offering risk-analytics-driven certification, advanced certification screens, business-centric views and significant improvement in performance, including 3X faster data imports, 3X faster certification campaign generation and advanced auto-certification features that will allow organizations to improve user productivity by up to 80%. Closed-loop risk feedback and IT policy monitoring with Oracle Identity Manager, a leading user provisioning solution, allow for more accurate certification reviews. And OIA's improved performance enables customers to scale compliance initiatives supporting millions of user entitlements across thousands of applications, whether on premise or in the cloud, without compromising speed or integrity. Q. Will ISACA grant a CPE credit for attending this ISACA-sponsored webinar today? A. From ISACA: Hello and thank you for your interest in the 2011 ISACA Webinar Program!  
Unfortunately, there are no CPEs offered for this program, archived or live. We will be looking into the feasibility of offering them in the future. Q. Would you be able to use this to help manage licenses for software? That is to say - could it track software that is not used by a user, thus eliminating the software license? A. OIA's integration with Oracle Identity Manager, a leading user provisioning solution, allows organizations to detect ghost accounts or unused accounts via account reconciliation. Based on the company's policies, this could trigger an automated workflow for account deletion or ask for further investigation. Closed-loop feedback between the two solutions would then allow visibility into the complete audit trail of when the account was detected, the action taken, by whom, when and the current status. Q. We have quarterly attestations and .xls mechanisms are not working. Once the identity data is correlated in Identity Analytics, do you then automate access certification? A. OIA's identity warehouse analyzes and correlates identity data across various resources, which allows OIA to determine a user's risk profile and who the access review request should go to, along with all the relevant access details of the user. The access certification manager gets a notification on what to review and when, and the relevant data is presented in a business-friendly screen. Based on the result of the access certification process, actions are triggered and results recorded and archived. Access review managers have visual risk indicators that also allow them to prioritize access certification tasks and efforts. Q. How does Oracle Identity Analytics work with Cloud Security? A. For enterprises looking to build their own cloud(s), Oracle offers a set of security services that cloud developers can leverage, including Oracle Identity Analytics.  For enterprises looking to manage their compliance requirements but without hosting those in-house, and instead having a hosting provider offer managed Identity Management services to the organization, Oracle Identity Analytics can be leveraged much the same way as you would in an on-premise (within the enterprise) environment. In fact, organizations today are leveraging Oracle Identity Analytics to manage identity compliance in both these ways. Q. Would you recommend this as a cost-effective solution for a smaller organization with around 2,500 users? A. The key return-on-investment (ROI) on Oracle Identity Analytics is derived from automating compliance processes, thereby eliminating administrative overhead, minimizing errors, maintaining cost- and time-effective sustainable compliance processes and minimizing audit exposures and penalties.  Of course, there are other tangible benefits that are derived from an Oracle Identity Analytics implementation, as outlined in the webcast. For a quantitative analysis of your requirements and potential ROI calculation, we recommend you refer to the Forrester Study on Total Economic Impact of Oracle Identity Analytics. For an in-person discussion, please email Richard Caldwell.

    Read the article

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state and the referenced customer event to determine which field activity type to generate.   Customer events available in the base product include: Cut for Non-payment (CNP) Disconnect Warning (DIWA) Reconnect for Payment (REPY) Reread (RERD) Stop Service (STOP) Start Service (STRT) Start/Stop (STSP)   Note the Field values/codes defined for each event.   CC&B comes with the flexibility to define a new set of customer events. These can be defined in the Look Up - CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page.     So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activities?   Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type Create Field Activities.     This way you can create additional field activities of a specific field activity type for user-defined customer events.   One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state and the referenced user-defined customer event. All was working well until they realized that in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.   Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.   So what exactly was happening? Now we come to the actual question: what is the impact of having a user-defined customer event?   System-defined/base customer events are hard-coded across the entire system. There is an impact even if you remove any customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above.   There are a few programs which have routines to first validate the completion of disconnection field activities which were raised as a result of customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected by another Severance Event, specifically CNP - Cut for Non-Payment. 
The post cancel algorithm provided by the product - SEV POST CAN - does the following (below is the algorithm's description):   This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter.    Notice the underlined text. This algorithm implicitly checks for Field Activities having Completed status which were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event.   Now if we look back at the customer's issue, we can see that the Post Cancel algorithm was triggered, but was not able to find any 'Completed' CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity was generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead.   To conclude, if you introduce new customer events that extend or simulate base customer events (the ones that are included in the base product), ensure that there is no other impact, either direct or indirect, on the other business functions that the application has to offer.  
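    To make the gotcha concrete, here is an illustrative sketch, written in C# purely for readability; it is not CC&B's plug-in spot API, and every type and member name below is hypothetical. It simply mirrors the documented behaviour of SEV POST CAN: the check is keyed to the base CNP event, so a completed field activity raised for a user-defined event such as CNPW never satisfies it and no reconnection process is started.

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical, simplified model -- for illustration only.
        enum FieldActivityStatus { Pending, Completed, Cancelled }

        class FieldActivity
        {
            public string CustomerEvent { get; set; }      // e.g. "CNP" (base) or "CNPW" (user-defined)
            public FieldActivityStatus Status { get; set; }
        }

        class SeveranceProcess
        {
            public List<FieldActivity> FieldActivities { get; } = new List<FieldActivity>();
        }

        static class PostCancelCheck
        {
            // Start a reconnect process only if a completed disconnect activity
            // for the base CNP customer event exists on the cancelled process.
            public static bool ShouldStartReconnect(SeveranceProcess cancelledProcess)
            {
                return cancelledProcess.FieldActivities.Any(fa =>
                    fa.Status == FieldActivityStatus.Completed &&
                    fa.CustomerEvent == "CNP");   // a "CNPW" activity will never match
            }
        }

    With only a completed CNPW activity present, ShouldStartReconnect returns false, which is exactly the behaviour the customer observed.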

    Read the article

  • A little primer on using TFS with a small team

    - by johndoucette
    The scenario: a small team of 3 developers, mostly in maintenance mode, with traditional ASP.NET, classic ASP, .NET integration services and utilities with the company's third-party packages, and a bunch of Java-based ColdFusion web applications, all under Visual SourceSafe (VSS). They are about to embark on a huge SharePoint 2010 new construction project and wanted to use Subversion instead of VSS. TFS was a foreign word and smelled of "high cost" and of an "over-complicated process". Since they had no preconceptions about the old TFS versions ('05 & '08), it was fun explaining how simple it was to install a TFS server and get the ball rolling, with or without all the heavy stuff one sometimes associates with such a huge and powerful application lifecycle management product. So, how does a small team begin using TFS? 1. Start by using source control and migrate current VSS source trees into TFS. You can take the latest version or migrate the entire version history. It's up to you whether you want a clean start or need quick access to all the version notes and history of the bits. 2. Since most shops are mainly in maintenance mode with existing applications, begin using bug workitems for everything. When you receive an issue/bug from your current tracking system, manually enter the workitem in TFS right through Visual Studio. You can automate the integration to the current tracking system later or replace it entirely. Believe me, this thing is powerful and can handle even the largest of help desks. (A minimal sketch of creating a bug workitem programmatically appears at the end of this article.) 3. With new construction, begin work with requirements and task workitems and follow the traditional sprint-based development lifecycle. Obviously, some minor training will be needed, but don't fear, this is very intuitive and MSDN has a ton of lesson-based labs and videos. 4. For the Java developers, use the new Team Explorer Everywhere 2010 plugin (recently known as Teamprise). There is a seamless interface in Eclipse, but also a good command-line utility for other environments such as Dreamweaver. 5. Wait to fully integrate the whole workitem/project management/testing process until your team is familiar with the integrated workitems for bugs and code. After a while, you will see the team wanting more transparency into the work they are all doing and naturally, everyone will want workitems to help them organize the chaos! 6. Management will be limited in the value of the reports until you have a fully blown implementation of project planning, construction, build, deployment and testing. However, there are some basic "bug rate" reports and current backlog listings that can provide good information. Some notable explanations of TFS: Work Item Tracking and Project Management - A workitem represents the unit of work within the system which enables tracking of all activities produced by a user, whether it is a developer, business user, project manager or tester. The properties of a workitem, such as linked changesets (checked-in code), who updated the data and when, and the states and reasons for change, are all transitioned to a data warehouse within TFS for reporting purposes. A workitem can be defined as a "bug", "requirement", "test case", or a "change request". They drive the work effort by the individual assigned to them and also provide a key role in defining what needs to be done. Workitems are the things the team needs to do to accomplish a goal. 
Test Case Management - Starting with a workitem known as a "test case", a tester (or developer) can now author and manage test cases within a formal test plan subsystem. Although TFS supports the test case workitem type, there is a new product known as Visual Studio Test Professional 2010 which allows a tester to facilitate manual tests, including fast-forwarding steps in the process to arrive at the assertion point quickly. This repeatable process provides quick regression tests and can be conducted by the business user to ensure completeness during UAT. In addition, developers can no longer respond to a bug with the line "cannot reproduce". With every test run, attachments including the recorded session, captured environment configurations and settings, screenshots, IntelliTrace (debugging history), and, in some cases when Lab Manager is being used, a snapshot of the tested environment are available. Version Control - A modern system allowing shared check-in/check-out, excellent merge conflict resolution, shelvesets (personal check-ins), branching/merging visualization, public workspaces, gated check-ins, security hierarchy capabilities, and changeset/workitem tracking. Knowing what any developer did with the code has become much easier, making it far simpler to picture and resolve issues. Team Build - Automate the compilation process, whether you need it to run whenever a developer checks in code, periodically (such as nightly builds for testers in the morning), or as manual builds to be deployed into production. Each build can run through pre-determined tests, perform code analysis to see if the developer conforms to the team standards, and reject the build if either fails. Project Portal & Reporting - Provide management with a dashboard with insight into the project(s): "where are we" in each step of the way, including past iterations and the current burndown rate. Enabling this feature is easy as it seamlessly interfaces with existing SharePoint implementations.
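    As a companion to step 2 above, here is a minimal C# sketch (not from the original article) of logging a bug through the TFS 2010 client object model instead of the Visual Studio UI. The collection URL, project name, and field values are placeholders for your own environment:

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class CreateBug
        {
            static void Main()
            {
                // Placeholder URL and project name -- substitute your own TFS collection.
                using (var collection = new TfsTeamProjectCollection(
                    new Uri("http://tfsserver:8080/tfs/DefaultCollection")))
                {
                    var store = collection.GetService<WorkItemStore>();

                    // Look up the Bug work item type in the team project.
                    Project project = store.Projects["MyTeamProject"];
                    WorkItemType bugType = project.WorkItemTypes["Bug"];

                    // Create, populate, validate, and save the work item.
                    var bug = new WorkItem(bugType)
                    {
                        Title = "Login page throws an error on empty password",
                        Description = "Logged from the help desk tracking system."
                    };

                    if (bug.Validate().Count == 0)
                        bug.Save();
                    else
                        Console.WriteLine("Work item has invalid fields; fix them before saving.");
                }
            }
        }

    The same object model can be scripted to bulk-import issues from an existing help desk system, which is the "automate the integration later" option mentioned in step 2.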

    Read the article

  • Why enumerator structs are a really bad idea

    - by Simon Cooper
    If you've ever poked around the .NET class libraries in Reflector, I'm sure you would have noticed that the generic collection classes all have implementations of their IEnumerator as a struct rather than a class. As you will see, this design decision has some rather unfortunate side effects... As is generally known in the .NET world, mutable structs are a Very Bad Idea, and there are several other blogs around explaining this (Eric Lippert's blog post explains the problem quite well). In the BCL, the generic collection enumerators are all mutable structs, as they need to keep track of where they are in the collection. This bit me quite hard when I was coding a wrapper around a LinkedList<int>.Enumerator. It boils down to this code: sealed class EnumeratorWrapper : IEnumerator<int> { private readonly LinkedList<int>.Enumerator m_Enumerator; public EnumeratorWrapper(LinkedList<int> linkedList) { m_Enumerator = linkedList.GetEnumerator(); } public int Current { get { return m_Enumerator.Current; } } object System.Collections.IEnumerator.Current { get { return Current; } } public bool MoveNext() { return m_Enumerator.MoveNext(); } public void Reset() { ((System.Collections.IEnumerator)m_Enumerator).Reset(); } public void Dispose() { m_Enumerator.Dispose(); } } The key line here is the MoveNext method. When I initially coded this, I thought that the call to m_Enumerator.MoveNext() would alter the enumerator state in the m_Enumerator class variable and so the enumeration would proceed in an orderly fashion through the collection. However, when I ran this code it went into an infinite loop - the m_Enumerator.MoveNext() call wasn't actually changing the state in the m_Enumerator variable at all, and my code was looping forever on the first collection element. It was only after disassembling that method that I found out what was going on. The MoveNext method above results in the following IL: .method public hidebysig newslot virtual final instance bool MoveNext() cil managed { .maxstack 1 .locals init ( [0] bool CS$1$0000, [1] valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator CS$0$0001) L_0000: nop L_0001: ldarg.0 L_0002: ldfld valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator EnumeratorWrapper::m_Enumerator L_0007: stloc.1 L_0008: ldloca.s CS$0$0001 L_000a: call instance bool [System]System.Collections.Generic.LinkedList`1/Enumerator::MoveNext() L_000f: stloc.0 L_0010: br.s L_0012 L_0012: ldloc.0 L_0013: ret } Here, the important line is 0002 - m_Enumerator is accessed using the ldfld operator, which does the following: Finds the value of a field in the object whose reference is currently on the evaluation stack. So, what the MoveNext method is doing is the following: public bool MoveNext() { LinkedList<int>.Enumerator CS$0$0001 = this.m_Enumerator; bool CS$1$0000 = CS$0$0001.MoveNext(); return CS$1$0000; } The enumerator instance being modified by the call to MoveNext is the one stored in the CS$0$0001 variable on the stack, and not the one in the EnumeratorWrapper class instance. Hence why the state of m_Enumerator wasn't getting updated. Hmm, ok. Well, why is it doing this? If you have a read of Eric Lippert's blog post about this issue, you'll notice he quotes a few sections of the C# spec. In particular, 7.5.4: ...if the field is readonly and the reference occurs outside an instance constructor of the class in which the field is declared, then the result is a value, namely the value of the field I in the object referenced by E. 
And my m_Enumerator field is readonly! Indeed, if I remove the readonly from the class variable then the problem goes away, and the code works as expected. The IL confirms this: .method public hidebysig newslot virtual final instance bool MoveNext() cil managed { .maxstack 1 .locals init ( [0] bool CS$1$0000) L_0000: nop L_0001: ldarg.0 L_0002: ldflda valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator EnumeratorWrapper::m_Enumerator L_0007: call instance bool [System]System.Collections.Generic.LinkedList`1/Enumerator::MoveNext() L_000c: stloc.0 L_000d: br.s L_000f L_000f: ldloc.0 L_0010: ret } Notice on line 0002, instead of the ldfld we had before, we've got a ldflda, which does this: Finds the address of a field in the object whose reference is currently on the evaluation stack. Instead of loading the value, we're loading the address of the m_Enumerator field. So now the call to MoveNext modifies the enumerator stored in the class rather than on the stack, and everything works as expected. Previously, I had thought enumerator structs were an odd but interesting feature of the BCL that I had used in the past to do linked list slices. However, effects like this only underline how dangerous mutable structs are, and I'm at a loss to explain why the enumerators were implemented as structs in the first place. (interestingly, the SortedList<TKey, TValue> enumerator is a struct but is private, which makes it even more odd - the only way it can be accessed is as a boxed IEnumerator!). I would love to hear people's theories as to why the enumerators are implemented in such a fashion. And bonus points if you can explain why LinkedList<int>.Enumerator.Reset is an explicit implementation but Dispose is implicit... Note to self: never ever ever code a mutable struct.
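    For anyone who wants to see the copy semantics in isolation, here is a small self-contained C# sketch (not from the original post) that reproduces the same effect with a trivial mutable struct behind a readonly field:

        using System;

        struct Counter
        {
            private int count;
            public int Value { get { return count; } }
            public void Increment() { count++; }   // mutates whichever copy it is called on
        }

        class Holder
        {
            private readonly Counter m_readonlyCounter = new Counter();   // reads yield a copy (ldfld)
            private Counter m_mutableCounter = new Counter();             // reads yield the field itself (ldflda)

            public void Bump()
            {
                // Because the field is readonly, this increments a temporary copy,
                // exactly like the ldfld behaviour described above.
                m_readonlyCounter.Increment();

                // This one really does update the stored struct.
                m_mutableCounter.Increment();
            }

            public void Report()
            {
                Console.WriteLine("readonly field:     {0}", m_readonlyCounter.Value);   // always 0
                Console.WriteLine("non-readonly field: {0}", m_mutableCounter.Value);    // counts up
            }
        }

        class Program
        {
            static void Main()
            {
                var holder = new Holder();
                holder.Bump();
                holder.Bump();
                holder.Report();   // prints 0 and 2
            }
        }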

    Read the article

  • How to get paid and figure out if I want to keep this client [migrated]

    - by Heiner Fawkes
    I have a client who is not paying on time, but it looks like the specifics don't match similar questions on this SE site. I got a call from a client I did website work for years ago. I had not done this kind of work for many years and frankly I'm not sure I want to now, but nevertheless about a month ago I agreed to bring his website, SEO, social media, and overall marketing for his small business up to speed. Why? He has told me many times how I'm the most honest, most well-informed contractor he's had experience with. And I personally kind of like him too. So I started working on an hourly basis. I sent one very small invoice and got paid. Then we talked a whole lot about all sorts of feature he would like me to implement. I started that work, and sent a second invoice on the first of the month (one of my two stated billing days). I didn't get paid. On every invoice it states that I charge a whopping ten percent per week late. I sent many voicemails and emails asking to please let me know what's going on with payment, and didn't get replies. Then the 15th of the month rolled around (which I stated initially as one of my invoicing dates). Since I hadn't been paid for the last invoice, I simply didn't send him an invoice at that time but emailed him and said that I will combine it with the next scheduled invoice for this reason (probably a bad idea I realize). Eventually he sent a portion of the invoice payment. I emailed back to let him know that he's three weeks late and what the remaining balance is. Finally we got in touch via phone. He basically told me that he thought I hadn't done all of the work I said I did. He looked at the page source code and it didn't look complete to him. I explained why his perception would be different and what work I had done as specified. He accepted this and said that part of the reason he didn't pay in full is that he's been swamped with personal family stuff, and part of the reason is that he didn't think I did all the work. That struck me as pretty weird. He also expressed concern that he has no idea now how much all the changes he has asked for are going to cost. And once again, he told me how honest and high-quality my services are compared to others he has dealt with. He also said he would pay me more (but not all) of the now three weeks overdue invoice that day. I didn't receive any payment. Basically this is how the client relationship strikes me: He's not good at communication. He's very busy and English isn't his first language. He almost never replies to emails but phone calls are fine. He's asked me to avoid emails for communication and I've asked him to please use email. He might not have enough money to afford all the things he has asked for. But so far I have been working for an hourly fee (which is quite high). He also has started paying monthly for hosting and social media services from me. What seems very abnormal is for a client to be so overdue on payments and to actually withhold payment of an invoice without any communication because he didn't think the work was done. I told him that I will send dollar estimates of each module of remaining work so that we can decide which ones are the highest priority if he cannot afford them all. I also reiterated that in the future if he has doubts about the work or an inability to pay, he must contact me immediately to say so. I basically plan to state the following to him: I would like to work for him and help his business. I also have sympathy for his recent family difficulties. 
I am happy to figure out payment plans that would work better for him, but first I need to be paid in full for all outstanding invoices, especially given that I skipped one of them just to be nice. The most crucial thing I need is communication about any problems with my work or his ability to pay. Once again, he needs to pay in full immediately before we negotiate anything else. Does the above seem like an appropriate communication? Is anything missing from it? Is anything I'm doing here really abnormal?

    Read the article

  • The JavaOne 2012 Sunday Technical Keynote

    - by Janice J. Heiss
    At the JavaOne 2012 Sunday Technical Keynote, held at the Masonic Auditorium, Mark Reinhold, Chief Architect, Java Platform Group, stated that they were going to do things a bit differently--"rather than 20 minutes of SE, and 20 minutes of FX, and 20 minutes of EE, we're going to mix it up a little," he said. "For much of it, we're going to be showing a single application, to show off some of the great work that's been done in the last year, and how Java can scale well--from the cloud all the way down to some very small embedded devices, and how JavaFX scales right along with it." Richard Bair and Jasper Potts from the JavaFX team demonstrated a JavaOne schedule builder application with impressive navigation, animation, pop-overs, and transitions. They noted that the application runs seamlessly on either Windows or Macs, running Java 7. They then ran the same application on an Ubuntu Linux machine--"it just works," said Bair. The JavaFX duo next put the recently released JavaFX Scene Builder through its paces -- dragging and dropping various image assets to build the application's UI, then fine-tuning a CSS file for the finished look and feel. Among many other new features, in the past six months JavaFX has released support for H.264 and HTTP live streaming, "so you can get all the real media playing inside your JavaFX application," said Bair. And in their developer preview builds of JavaFX 8, they've now split the rendering thread from the UI thread, to better take advantage of multi-core architectures. Next, Brian Goetz, Java Language Architect, explored language and library features planned for Java SE 8, including Lambda expressions and better parallel libraries. These feature changes both simplify code and free up libraries to more effectively use parallelism. "It's currently still a lot of work to convert an application from serial to parallel," noted Goetz. Reinhold had previously boasted of Java scaling down to "small embedded devices," so Bair and Potts next ran their schedule builder application on a small embedded PandaBoard system with an OMAP4 chip set. Connected to a touch screen, the embedded board ran the same JavaFX application previously seen on the desktop systems, but now running on Java SE Embedded. (The systems can be seen and tried at four of the nearby JavaOne hotels.) Bob Vandette, Java Embedded Architect, then displayed a $25 Raspberry Pi ARM-based system running Java SE Embedded, noting the even greater need for the platform independence of Java in such highly varied embedded processor spaces. Reinhold and Vandette discussed Project Jigsaw, the planned modularization of the Java SE platform, and its deferral from the Java 8 release to Java 9. Reinhold demonstrated the promise of Jigsaw by running a modularized demo version of the earlier schedule builder application on the resource-constrained Raspberry Pi system--although the demo gods were not smiling down, and the application ultimately crashed. Reinhold urged developers to become involved in the Java 8 development process--getting the weekly builds, trying out their current code, and trying out the new features: http://openjdk.java.net/projects/jdk8, http://openjdk.java.net/projects/jdk8/spec, and http://jdk8.java.net. From there, Arun Gupta explored Java EE. The primary themes of Java EE 7, Gupta stated, will be greater productivity and HTML5 functionality (WebSocket, JSON, and HTML5 forms). 
Part of the planned productivity increase of the release will come from a reduction in writing boilerplate code--through the widespread use of dependency injection in the platform, along with default data sources and default connection factories. Gupta noted the inclusion of JAX-RS in the web profile, the changes and improvements found in JMS 2.0, as well as enhancements to Java EE 7 in terms of JPA 2.1 and EJB 3.2. GlassFish 4 is the reference implementation of Java EE 7, and currently includes WebSocket, JSON, JAX-RS 2.0, JMS 2.0, and more. The final release is targeted for Q2, 2013. Looking forward to Java EE 8, Gupta explored how the platform will provide multi-tenancy for applications, modularity based on Jigsaw, and cloud architecture. Meanwhile, Project Avatar is the group's incubator project for designing an end-to-end framework for building HTML 5 applications. Santiago Pericas-Geertsen joined Gupta to demonstrate their "Angry Bids" auction/live-bid/chat application using many of the enhancements of Java EE 7, along with an Avatar HTML 5 infrastructure, and running on the GlassFish reference implementation.Finally, Gupta covered Project Easel, an advanced tooling capability in NetBeans for HTML5. John Ceccarelli, NetBeans Engineering Director, joined Gupta to demonstrate creating an HTML 5 project from within NetBeans--formatting the project for both desktop and smartphone implementations. Ceccarelli noted that NetBeans 7.3 beta will be released later this week, and will include support for creating such HTML 5 project types. Gupta directed conference attendees to: http://glassfish.org/javaone2012 for everything about Java EE and GlassFish at JavaOne 2012.

    Read the article

  • Java EE and GlassFish Server Roadmap Update

    - by John Clingan
    2013 has been a stellar year for both the Java EE and GlassFish Server communities. On June 12, Oracle and its partners announced the release of Java EE 7, which delivers on three major themes – HTML5, developer productivity, and meeting enterprise demands. The online event attracted over 10,000 views in the first two days! During the online event, Oracle also announced the availability of GlassFish Server Open Source Edition 4, the world's first Java EE 7 compatible application server. The primary role of GlassFish Server Open Source Edition has been, and continues to be, driving adoption of the latest release of the Java Platform, Enterprise Edition. Oracle also announced the Java EE 7 SDK, which bundles GlassFish Server Open Source Edition 4, as a Java EE 7 learning aid. Last, Oracle publicly announced the Java EE 7 reference implementation based on GlassFish Server Open Source Edition 4. Java EE is a popular platform, as evidenced by the 20+ Java EE 6 compatible implementations available to choose from. After the launch of Java EE 7 and GlassFish Server Open Source Edition 4, we began planning the Java EE 8 roadmap, which was covered during the JavaOne Strategy Keynote. To summarize, there is a lot of interest in improving HTML5 support, cloud support, and investigating NoSQL support. We received a lot of great feedback from the community and customers on what they would like to see in Java EE 8. As we approached JavaOne 2013, we started planning the GlassFish Server roadmap. What we announced at JavaOne was that GlassFish Server Open Source Edition 4.1 is scheduled for 2014. Here is an update to that roadmap:
    - GlassFish Server Open Source Edition 4.1 is scheduled for 2014.
    - We are planning updates as needed to GlassFish Server Open Source Edition, which is commercially unsupported.
    - As we head towards Java EE 8: the trunk will eventually transition to GlassFish Server Open Source Edition 5 as a Java EE 8 implementation, and the Java EE 8 Reference Implementation will be derived from GlassFish Server Open Source Edition 5. This replicates what has been done in past Java EE and GlassFish Server releases.
    - Oracle will no longer release future major releases of Oracle GlassFish Server with commercial support – specifically, Oracle GlassFish Server 4.x with commercial Java EE 7 support will not be released. Commercial Java EE 7 support will be provided through WebLogic Server.
    - Oracle GlassFish Server will not be releasing a 4.x commercial version.
Expanding on that last bullet, new and existing Oracle GlassFish Server 2.1.x and 3.1.x commercial customers will continue to be supported according to the Oracle Lifetime Support Policy. Oracle recommends that existing commercial Oracle GlassFish Server customers begin planning to move to Oracle WebLogic Server, which is a natural technical and license migration path forward:
    - Applications developed to Java EE standards can be deployed to both GlassFish Server and Oracle WebLogic Server (a minimal portable example is sketched at the end of this entry).
    - GlassFish Server and Oracle WebLogic Server have implementation-specific deployment descriptor interoperability (here and here).
    - GlassFish Server 3.x and Oracle WebLogic Server share quite a bit of code, so there are many configuration and (extended) feature similarities. Shared code includes JPA, JAX-RS, WebSockets (pre JSR 356 in both cases), CDI, Bean Validation, JAX-WS, JAXB, and WS-AT. 
    - Both Oracle GlassFish Server 3.x and Oracle WebLogic Server 12c support Oracle Access Manager, Oracle Coherence, Oracle Directory Server, Oracle Virtual Directory, Oracle Database, and Oracle Enterprise Manager, and are entitled to support for the underlying Oracle JDK.
To summarize, Oracle is committed to the future of Java EE. Java EE 7 has been released and planning for Java EE 8 has begun. GlassFish Server Open Source Edition continues to be the strategic foundation for the Java EE reference implementation going forward. And for developers, updates will be delivered as needed to maintain a great developer experience with GlassFish Server Open Source Edition. We are planning for GlassFish Server Open Source Edition 5 as the foundation for the Java EE 8 reference implementation, as well as bundling GlassFish Server Open Source Edition 5 in a Java EE 8 SDK, which is the most popular distribution of GlassFish. This will allow GlassFish releases to be more focused on the Java EE platform and community-driven requirements. We continue to encourage community contributions, bug reports, participation on the GlassFish forum, etc. Going forward, Oracle WebLogic Server will be the single strategic commercially supported application server from Oracle. Disclaimer: The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.
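    To make the portability bullet above concrete, here is a minimal sketch of a standards-only JAX-RS service; because it uses nothing beyond the javax.ws.rs API, the same WAR should deploy to either GlassFish Server or Oracle WebLogic Server. The class names and the /api/status path are illustrative, not taken from the roadmap.

// RestApplication.java -- activates JAX-RS without any vendor-specific descriptor
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("api")
public class RestApplication extends Application {
}

// StatusResource.java -- a portable resource exposed at GET /api/status
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("status")
public class StatusResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String status() {
        return "OK";
    }
}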

    Read the article

  • Caveat utilitor - Can I run two versions of Microsoft Project side-by-side?

    - by Martin Hinshelwood
    A number of our customers have asked if there are any problems in installing and running multiple versions of Microsoft Project on a single client. Although this is a case of Caveat utilitor (let the user beware), as long as the user understands and accepts the issues that can occur, they can do this. Although Microsoft provide the ability to leave old versions of Office products (except Outlook) on your client when you are installing a new version of the product, they certainly do not endorse doing so. Figure: For Project you can choose to keep the old stuff. That being the case, I would have preferred that they put a “(NOT RECOMMENDED)” after the options to impart that knowledge to the rest of us, but they did not. The default and recommended behaviour is for the newer version's installer to remove the older versions. Of course this does not apply in reverse; there are no forward compatibility packs for Office. There are a number of negative behaviours (or bugs) that can occur in this configuration. There is only one MS Project: in Windows, a file extension can only be associated with a single program. In this case, MPP files can be associated with only one version of winproj.exe. The executables are in different folders, so if a user double-clicks a Project file on the desktop, in file explorer, or in an Outlook email, Windows will launch the winproj.exe associated with MPP and then load the MPP file. There are problems associated with this situation, and in some cases workarounds. If the user double-clicks on a Project 2010 file, Project 2007 launches but is unable to open the file because it is a newer version. The workaround is for the user to launch Project 2010 from the Start menu and then open the file. If the file is attached to an email they will need to first drag the file to the desktop. All your linked MS Project files need to be of the same version: there are a number of problems that occur when people use Microsoft’s Object Linking and Embedding (OLE) technology. The three common uses of OLE are: inserted projects, where a Master project contains sub-projects and each sub-project resides in its own MPP file; shared resource pools, where multiple MPP files share a common resource pool kept in a single MPP file; and cross-project links, where a task or milestone in one MPP file has a predecessor/successor relationship with a task or milestone in a different MPP file. What I’ve seen happen before is that if you are running in a version of Project that is not associated with the MPP extension and then try to activate an OLE link, Project tries to launch the other version of Project. Things start getting very confused since different MPP files are being controlled by different versions of Project running at the same time. I haven’t tried this in a while so I can’t give you exact symptoms, but I suspect that if Project 2010 is involved the symptoms will be different than in a Project 2003/2007 scenario. I’ve noticed that Project 2010 gives different error messages for the exact same problem when it occurs in Project 2003 or 2007. -Anonymous The recommendation would be either not to use this feature if you have to have multiple versions of Project installed, or to use only a single version of Project. You may get unexpected negative behaviours if you are using shared resource pools, even when you are not running multiple versions, as I have found that they can get broken very easily. 
If you need these things then it is probably best to use Project Server, as it was created to solve many of these specific issues. Note: I would not even allow multiple people to access a network copy of a Project file because of the way Windows locks files in write mode. This can cause write-locks that get so bad that a server restart is required; I’ve seen users' files get write-locked to the point where the only resolution was to reboot the server. Changing the default version to run for an extension: so what if you want to change the default association from Project 2007 to Project 2010? Figure: “Control Panel | Folder Options | Change the file associated with a file extension”. Windows normally only lists the last version installed for a particular extension. You can select a specific version by selecting the program you want to change and clicking “Change program… | Browse…” and then selecting the .exe you want to use on the file system. Figure: You will need to select the exact version of “winproj.exe” that you want to run. Conclusion: Although it is possible to run multiple versions of Project on one system, in the main it does not really make sense.

    Read the article

  • How do I do JavaScript Array Animation

    - by Henry
    I'm making a game but don't know how to do Array Animation with the png Array and game Surface that I made below. I'm trying to make it so that when the Right arrow key is pressed, the character animates as if it is walking to the right, and when the Left arrow key is pressed it animates as if it is walking to the left (kind of like Mario). I put everything on a surface instead of the canvas. Everything is explained in the code below. I couldn't find help on this anywhere. I hope what I got below makes sense. I'm basically a beginner with JavaScript. I'll be back if more is needed:

<!doctype html5>
<html>
<head></head>
<script src="graphics.js"></script>
<script src="object.js"></script>
<body onkeydown="keyDown(event)" onkeyup="keyUp(event)"></body>
<script>
//"Surface" is where I want to display my animation. It's like the HTML
//canvas but it's not that. It's just the surface to where everything in the
//game and the game itself will be displayed.
var Surface = new Graphics(600, 400, "skyblue");
//here's the array that I want to use for animation
var player = new Array("StandsRight.png", "WalksRight.png", "StandsLeft.png", "WalksLeft.png");
//Here is the X coordinate, Y coordinate, the beginning png for the animation,
//and the object's name "player." I also turned the array into an object (but
//I don't know if I was supposed to do that or not).
var player = new Object(50, 100, 40, 115, "StandsRight.png", "player");
//When doing animation I know that it requires a "loop", but I don't
//know how to connect it so that it works with the arrays so that
//it could animate.
var loop = 0;
//this actually puts "player" on screen. It makes player visible and
//it is where I would like the animation to occur.
Surface.drawObject(player);
//this would be the key that makes "player" animate in the rightward direction
function keyDown(e) {
    if (e.keyCode == 39);
}
//this would be the key that makes "player" animate in the leftward direction
function keyUp(e) {
    if (e.keyCode == 39);
}
//this is the Mainloop where the game will function
MainLoop();
//the mainloop functionized
function MainLoop() {
    //this is how fast or slow I could want the entire game to go
    setTimeout(MainLoop, 10);
}
</script>
</html>

From here are the "graphics.js" and the "object.js" files below. In this section is the graphics.js file. This graphics.js part below is linked to the script src="graphics.js" section that I wrote above. 
Basically, below is a separate file that I used for Graphics, and to run the code above, make this graphics.js code that I post below here a separate file called: graphics.js

function Graphics(w, h, c) {
    document.body.innerHTML += "<table style='position:absolute;font-size:0;top:0;left:0;border-spacing:0;border-width:0;width:" + w + ";height:" + h + ";background-color:" + c + ";' border=1><tr><td></table>\n";
    this.drawRectangle = function(x, y, w, h, c, n) {
        document.body.innerHTML += "<div style='position:absolute;font-size:0;left:" + x + ";top:" + y + ";width:" + w + ";height:" + h + ";background-color:" + c + ";' id='" + n + "'></div>\n";
    }
    this.drawTexture = function(x, y, w, h, t, n) {
        document.body.innerHTML += "<img style='position:absolute;font-size:0;left:" + x + ";top:" + y + ";width:" + w + ";height:" + h + ";' id='" + n + "' src='" + t + "'></img>\n";
    }
    this.drawObject = function(o) {
        document.body.innerHTML += "<img style='position:absolute;font-size:0;left:" + o.X + ";top:" + o.Y + ";width:" + o.Width + ";height:" + o.Height + ";' id='" + o.Name + "' src='" + o.Sprite + "'></img>\n";
    }
    this.moveGraphic = function(x, y, n) {
        document.getElementById(n).style.left = x;
        document.getElementById(n).style.top = y;
    }
    this.removeGraphic = function(n) {
        document.getElementById(n).parentNode.removeChild(document.getElementById(n));
    }
}

Finally, here is the object.js file linked to the script src="object.js" in the html game file above the graphics.js part I just wrote. Basically, this is a separate file too, so thus, in order to run or test the html game code in the very first section I wrote, a person has to also make this code below a separate file called: object.js. I hope this helps:

function Object(x, y, w, h, t, n) {
    this.X = x;
    this.Y = y;
    this.Velocity_X = 0;
    this.Velocity_Y = 0;
    this.Previous_X = 0;
    this.Previous_Y = 0;
    this.Width = w;
    this.Height = h;
    this.Sprite = t;
    this.Name = n;
    this.Exists = true;
}

In all, this game is made based on a tutorial on youtube at: http://www.youtube.com/watch?v=t2kUzgFM4lY&feature=relmfu I'm just trying to learn how to add animations with it now. I hope the above helps. If not, let me know. Thanks

    Read the article

  • The Linux powered LAN Gaming House

    - by sachinghalot
    LAN parties offer the enjoyment of head to head gaming in a real-life social environment. In general, they are experiencing decline thanks to the convenience of Internet gaming, but Kenton Varda is a man who takes his LAN gaming very seriously. His LAN gaming house is a fascinating project, and best of all, Linux plays a part in making it all work. Varda has done his own write ups (short, long), so I'm only going to give an overview here. The setup is a large house with 12 gaming stations and a single server computer. The client computers themselves are rack mounted in a server room, and they are linked to the gaming stations on the floor above via extension cables (HDMI for video and audio and USB for mouse and keyboard). Each client computer, built into a 3U rack mount case, is a well specced gaming rig in its own right, sporting an Intel Core i5 processor, 4GB of RAM and an Nvidia GeForce 560 along with a 60GB SSD drive. Originally, the client computers ran Ubuntu Linux rather than Windows and the games executed under WINE, but Varda had to abandon this scheme. As he explains on his site: "Amazingly, a majority of games worked fine, although many had minor bugs (e.g. flickering mouse cursor, minor rendering artifacts, etc.). Some games, however, did not work, or had bad bugs that made them annoying to play." Subsequently, the gaming computers have been moved onto a more conventional gaming choice, Windows 7. It's a shame that WINE couldn't be made to work, but I can sympathize, as it's rare to find modern games that work perfectly and at full native speed. Another problem with WINE is that it tends to suffer from regressions, which is hardly surprising when considering the difficulty of constantly improving the emulation of the Windows API. Varda points out that he preferred working with Linux clients as they were easier to modify and came with less licensing baggage. Linux still runs the server and all of the tools used are open source software. The hardware here is an Intel Xeon E3-1230 with 4GB of RAM. The storage hanging off this machine is a bit more complex than the clients'. In addition to the 60GB SSD, it also has 2x1TB drives and a 240GB SSD. When the clients were running Linux, they booted over PXE using a toolchain that will be familiar to anyone who has set up Linux network booting. DHCP pointed the clients to the server, which then supplied PXELINUX using TFTP. When booted, file access was accomplished through network block device (NBD). This is a very easy to use system that allows you to serve the contents of a file as a block device over the network. The client computer runs a user mode device driver and the device can be mounted within the file system using the mount command. One snag with offering file access via NBD is that it's difficult to impose any security restrictions on different areas of the file system, as the server only sees a single file. The advantage is performance, as the client operating system simply sees a block device, and besides, these security issues aren't relevant in this setup. Unfortunately, Windows 7 can't use NBD, so Varda had to switch to iSCSI (which works in both server and client mode under Linux). His network cards are not compliant with this standard when doing a netboot, but fortunately, gPXE came to the rescue, and he bootstraps it over PXE. gPXE is also available as an ISO image and is worth knowing about if you encounter an awkward machine that can't manage a network boot. 
It can also optionally boot from an HTTP server rather than the more traditional TFTP server. According to Varda, booting all 12 machines over the Gigabit Ethernet network is surprisingly fast, and once booted, the machines don't seem noticeably slower than if they were using local storage. Once loaded, most games attempt to load in as much data as possible, filling the RAM, and the disk and network bandwidth required is small. It's worth noting that these are aspects of this project that might differ from some other thin client scenarios. At the time of writing, it doesn't seem as though the local storage of the client machines is being utilized. Instead, the clients boot into Windows from an image on the server that contains the operating system and the games themselves. It uses the copy-on-write feature of LVM so that any writes from a client are added to a differencing image allocated to that client. As the administrator, Varda can log into the Linux server and authorize changes to the master image for updates etc. Summary: Overall, Varda estimates the total cost of the project at about $40,000, and of course, he needed a property that offered a large physical space in order to house the computers and the gaming workstations. Obviously, this project has stark differences to most thin client projects. The balance between storage, network usage, GPU power and security would not be typical of an office installation, for example. The only letdown is that WINE proved to be insufficiently compatible to run a wide variety of modern games, but that is, perhaps, asking too much of it, and hats off to Varda for trying to make it work.

    Read the article

  • Oracle CRM On Demand Release 24 is Generally Available

    - by Richard Lefebvre
    We are pleased to announce that Oracle CRM On Demand Release 24 is Generally Available as of October 25, 2013. Get smarter, more productive and the best value with Oracle CRM On Demand Release 24. Oracle CRM On Demand continues to be the most complete Software-as-a-Service (SaaS) CRM solution available. Now, with Release 24, organizations of all types and sizes benefit from actionable insight anywhere, anytime, as well as key enhancements in mobility, embedded social, analytics, integration and extensibility, and ease of use. Next Generation Mobile and Desktop Solutions: Oracle CRM On Demand Release 24 offers a complete set of mobile and desktop solutions that improve productivity by enabling reps to access and update information anywhere, anytime. Capabilities include:
    - Oracle CRM On Demand Disconnected Mobile Sales (DMS) – A disconnected native iPad solution, DMS has further streamlined the mobile sales process by adding Structured Product Messaging to record brand-specific call objectives, enhancements in HTML5 eDetailing including message response tracking, and improvements in administration and configuration such as more field management options for read-only fields, role management and enhanced logging.
    - Oracle CRM On Demand Connected Mobile Sales. This add-on mobile service provides a configurable mobile solution on iOS, BlackBerry and now Android devices. You can access data from CRM On Demand in real time with a rich, native user experience that is comfortable and familiar to current iOS, BlackBerry and Android users. New features also include Single Sign On to enhance security for mobile users.
    - Oracle CRM On Demand Desktop: This application centralizes essential CRM information in the familiar Microsoft Outlook environment, increasing user adoption and decreasing training costs. Users can manage CRM data while disconnected, then synchronize bi-directionally when they are back on the network. New in Oracle CRM On Demand Desktop Version 3 is the ability to synchronize by Books of Business, and improved Online Lookup.
    - Mobile Browser Support: The following mobile device browsers are now supported: Apple iPhone, Apple iPad, Windows 8 Tablets, and Google Android.
Leverage the Social Enterprise: Engaging customers via social channels is rapidly becoming a significant key to enhanced customer experience, as it provides proactive customer service, targeted messaging and greater intimacy throughout the entire customer lifecycle. Listening to customers on the social channels can identify a customer's sphere of influence and the real value they bring to their organization, or the impact they can have on an opportunity. Servicing the customer's need is the first step towards loyalty to a brand; integrating with social channels allows us to maximize brand affinity and virally expand customer engagements, thus increasing revenue. Oracle CRM On Demand is leveraging the Social Enterprise through its integration with Oracle's Social Relationship Management (SRM) product suite by providing out-of-the-box integration with Social Engagement and Monitoring (SEM), Social Marketing (SM) and Oracle Social Network (OSN). With Oracle CRM On Demand Release 24, users are able to create a service request from a social post via SEM and have leads captured on an SM lead form automatically entered into Oracle CRM On Demand along with the campaign, streamlining the lead qualification process. 
Get Smarter with Actionable Insight: The difference between making good decisions and great decisions depends heavily upon the quality, structure, and availability of information at hand. Oracle CRM On Demand Release 24 expands upon its industry-leading analytics capabilities to provide greater business insight than ever before. New capabilities include flexible permissions on analytics reports folders, allowing for read-only access to reports, and additional field and object coverage. Get More Productive with Powerful Tools: Oracle CRM On Demand Release 24 introduces a new set of powerful capabilities designed to maximize productivity. A significant new feature for customizing Oracle CRM On Demand is a JavaScript API. The JS API allows customers to add new buttons, suppress existing buttons and even change what happens when a user clicks an existing button. Other usability enhancements, such as personalized related information applets and extended case-insensitive search, provide users with a better, more intuitive experience. Additional privileges for viewing private activities and notes allow administrators to reassign records as needed, along with improved Custom Object management. Workflow has been added to the Order Item object, and tasks can now be assigned to a relative user, such as an Account Owner, allowing more complex business processes to be automated and adhered to. Get the Best Value: Oracle CRM On Demand delivers unprecedented value with the broadest set of capabilities from a single-provider solution, the industry's lowest total cost of ownership, the most on-demand deployment options, the deepest CRM expertise and experience of any CRM provider, and the most secure CRM in the cloud. With Release 24, Oracle CRM On Demand now includes even more enterprise-grade security, integration, and extensibility features, along with enhanced industry editions to save you time and money. New features include: Business Process Administration: A new privilege has been added that allows administrators to override a Business Process Administration rule. This privilege permits users to edit a locked record, or unlock a record, in the event of a material change that needs to be reflected per corporate policy. Additionally, the Products Detailed object has been added to Business Process Administration, enabling record locking and logic to be applied. Expanded Integration: Oracle continues to improve Web Services each release, by adding more object coverage enabling customers and partners to easily integrate with CRM On Demand. Bottom Line: Oracle CRM On Demand Release 24 enables organizations to get smarter, get more productive, and get the best value, period. For more information on Oracle CRM On Demand Release 24, please visit oracle.com/crmondemand

    Read the article

  • Limitations of User-Defined Customer Events (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state, and the referenced customer event to determine which field activity type to generate. Customer events available in the base product include: Cut for Non-payment (CNP), Disconnect Warning (DIWA), Reconnect for Payment (REPY), Reread (RERD), Stop Service (STOP), Start Service (STRT), and Start/Stop (STSP). Note the Field values/codes defined for each event. CC&B comes with the flexibility to define a new set of customer events. These can be defined in the Look Up - CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page. So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activity(s)? Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events. One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state, and the referenced user-defined customer event. All was working well until they realized that, in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service. Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt. So what exactly was happening? Now we come to the actual question of what the limitations of user-defined customer events are. System-defined/base customer events are hard-coded across the entire system. There is an impact even if you remove any customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above. There are a few programs which have routines to first validate the completion of disconnection field activities which were raised as a result of the customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected from another Severance Event, specifically CNP - Cut for Non-Payment. 
The post cancel algorithm provided by the product - SEV POST CAN - does the following (below is the algorithm's description): This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter. Notice the phrase "completed 'disconnect' event". This algorithm implicitly checks for Field Activities having completed status, which were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event. Now if we look back at the customer's issue, we can see that the Post Cancel algorithm was triggered, but was not able to find any 'Completed' CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity was generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead. To conclude, if you introduce new customer events, you should be aware that you cannot extend or simulate base customer events, the ones that are included in the base product, as they are further used to provide/validate additional business functions.  

    Read the article

  • Big Data – Buzz Words: What is HDFS – Day 8 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned what MapReduce is. In this article we will take a quick look at one of the four most important buzz words which go around Big Data – HDFS. What is HDFS? HDFS stands for Hadoop Distributed File System and it is the primary storage system used by Hadoop. It provides high-performance access to data across Hadoop clusters. It is usually deployed on low-cost commodity hardware. In commodity hardware deployments, server failures are very common. For that reason, HDFS is built to have high fault tolerance. The data transfer rate between compute nodes in HDFS is very high, which leads to a reduced risk of failure. HDFS creates smaller pieces of the big data and distributes them on different nodes. It also copies each smaller piece multiple times onto different nodes. Hence, when any node with the data crashes, the system is automatically able to use the data from a different node and continue the process. This is the key feature of the HDFS system. Architecture of HDFS: The architecture of HDFS is a master/slave architecture. An HDFS cluster always consists of a single NameNode. This single NameNode is a master server and it manages the file system as well as regulates access to various files. In addition to the NameNode there are multiple DataNodes. There is always one DataNode for each data server. In HDFS a big file is split into one or more blocks and those blocks are stored in a set of DataNodes. The primary task of the NameNode is to open, close or rename files and directories and regulate access to the file system, whereas the primary task of the DataNode is to read from and write to the file system. The DataNode is also responsible for the creation, deletion or replication of data based on instructions from the NameNode. In reality, the NameNode and DataNode are software designed to run on commodity machines and built in the Java language. Visual Representation of HDFS Architecture: Let us understand how HDFS works with the help of the diagram. The Client App or HDFS Client connects to the NameNode as well as the DataNodes. Client App access to the DataNode is regulated by the NameNode. The NameNode allows the Client App to connect to a DataNode by allowing the connection to the DataNode directly. A big data file is divided into multiple data blocks (let us assume that those data chunks are A, B, C and D). The Client App will later on write data blocks directly to the DataNode. The Client App does not have to write directly to all the nodes. It just has to write to any one of the nodes, and the NameNode will decide on which other DataNodes it will have to replicate the data. In our example the Client App directly writes to DataNode 1 and DataNode 3. However, data chunks are automatically replicated to other nodes. All the information, like which data block is placed on which DataNode, is written back to the NameNode. High Availability During Disaster: Now, as multiple DataNodes have the same data blocks, in the case of any DataNode facing a disaster, the entire process will continue as another DataNode will assume the role of serving the specific data block which was on the failed node. This system provides very high tolerance to disaster and provides high availability. If you notice, there is only a single NameNode in our architecture. If that node fails, our entire Hadoop application will stop performing as it is a single node where we store all the metadata. As this node is very critical, it is usually replicated on another cluster as well as on another data rack. 
Though that replicated node is not operational in the architecture, it has all the necessary data to perform the task of the NameNode in the case that the NameNode fails. The entire Hadoop architecture is built to function smoothly even when there are node failures or hardware malfunctions. It is built on the simple concept that the data is so big it is impossible to come up with a single piece of hardware which can manage it properly. We need lots of commodity (cheap) hardware to manage our big data, and hardware failure is part of commodity servers. To reduce the impact of hardware failure, the Hadoop architecture is built to overcome the limitation of non-functioning hardware. Tomorrow: In tomorrow’s blog post we will discuss the importance of the relational database in Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
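    For readers who want to see what the client side of this architecture looks like in code, here is a minimal sketch using the standard Hadoop Java API (org.apache.hadoop.fs.FileSystem). The NameNode address and file path are placeholders, and the block placement and replication described above are decided by the NameNode, not by this code.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientSketch {
    public static void main(String[] args) throws Exception {
        // Point the client at the NameNode; host and port are placeholders.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode-host:8020");
        FileSystem fs = FileSystem.get(conf);

        // Write a small file; HDFS splits it into blocks and the NameNode
        // chooses which DataNodes receive the replicas.
        Path file = new Path("/user/demo/hello.txt");
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.write("Hello HDFS".getBytes(StandardCharsets.UTF_8));
        }

        // Read it back; the client asks the NameNode for block locations and
        // then streams the data directly from a DataNode.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
            System.out.println(reader.readLine());
        }
        fs.close();
    }
}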

    Read the article

  • Source Control and SQL Development &ndash; Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been specifically focusing on the latest version of SQL Source Control by Red Gate Software. But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio. “So, how does that work?” you may wonder. Well, let me share some of the details of how we do it where I work… The key to this approach is that everything is done via Transact-SQL script files: either natively written T-SQL, or generated. My preference is to write all my code by hand, which forces you to become better at your SQL syntax. But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then you use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards, and store those in your source control system. You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer and choosing Tasks, Generate Scripts (see figure 1 to the left). You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production and you need to make just a simple change, such as adding a new column or index. In this case, you can use the GUI to make the table changes, and then instead of clicking the Save button, click the Generate Change Script button. Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change. I believe that it is important to actually execute the script rather than just click the Save button because this is your first test that your change script is working and you didn’t somehow lose a portion of the change. As you can imagine, all this generating of scripts can get tedious and tempting to skip entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code, and then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program. So, now that you have all of these script files, what do you do with them? Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system. ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution. Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash or corrupt anything; but these scripts are really intended to be run only once in a database. Once you have your initial set of scripts loaded into source control, making changes, such as altering a stored procedure, becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control. Of course, this is where the lack of integration for source control systems within SSMS becomes an irritation, because this means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in.  
And when you have 800+ procedures like we do, it can be quite tedious to locate the procedure I want to change in source control, check it out, then locate the script file in my working folder, open it in SSMS, make the change, save it, and then go back to source control to check in. Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide. It is worth the effort, and this is how I have been doing development for the last several years. Remember that everything that SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing. And now I have shown you how you can do it all without spending any extra money. You already have source control, or can get free, open-source source control systems (almost seems like an oxymoron, doesn’t it?), and of course Management Studio is free with your SQL Server database engine software. So, whether you spend the money on tools to make it easier or not, you now have no excuse for not using source control with your SQL development. * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS…DROP… at the top, followed by the CREATE PROCEDURE… section, and that is followed by a section that assigns permissions. This allows me to run the same script regardless of whether the procedure previously existed in the database. If the script was only an ALTER PROCEDURE, then it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it if it did not exist. There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD. If you do, then you have broken the integrity of your deployment process because what you deployed to PROD was not exactly the same as what was tested in TEST, so you effectively have now released untested code into PROD.

    Read the article

  • Telerik Releases a new Visual Entity Designer

    Love LINQ to SQL but are concerned that it is a second class citizen? Need to connect to databases other than SQL Server? Think that the Entity Framework is too complex? Want a domain model designer for data access that is easy, yet powerful? Then the Telerik Visual Entity Designer is for you. Built on top of Telerik OpenAccess ORM, a very mature and robust product, Telerik's Visual Entity Designer is a new way to build your domain model that is very powerful and also really easy to use. How easy? I'll show you here. First Look: Using the Telerik Visual Entity Designer. To get started, you need to install the Telerik OpenAccess ORM Q1 release for Visual Studio 2008 or 2010. You don't need to use any of the Telerik OpenAccess wizards, designers, or using statements. Just right click on your project and select Add|New Item from the context menu. Choose Telerik OpenAccess Domain Model from the Visual Studio project templates. (Note to existing OpenAccess users: don't run the Enable ORM wizard or any other OpenAccess menu unless you are building OpenAccess Entities.) You will then have to specify the database backend (SQL Server, SQL Azure, Oracle, MySQL, etc) and connection. After you establish your connection, select the database objects you want to add to your domain model. You can also name your model; by default it will be NameofyourdatabaseEntityDiagrams. You can click finish here if you are comfortable, or tweak some advanced settings. Many users of domain models like to add prefixes and suffixes to classes, fields, and properties as well as handle pluralization. I personally accept the defaults; however, I hate how DBAs force underscores on me, so I click on the option to remove them. You can also tweak your namespace, mapping options, and define your own code generation template to gain further control over the outputted code. This is a very powerful feature, but for now, I will just accept the defaults. When we click finish, you can see your domain model as a file with the .rlinq extension in the Solution Explorer. You can also bring up the visual designer to view or further tweak your model by double clicking on the model in the Solution Explorer. Time to use the model! Writing a LINQ Query. Programming against the domain model is very simple using LINQ. Just set a reference to the model (line 12 of the code below) and write a standard LINQ statement (lines 14-16). (OpenAccess users: notice that you don't need any using statements for OpenAccess or an IObjectScope, just raw LINQ against your model.) 
1: using System;
2: using System.Linq;
3: //no need for an OpenAccess using statement
4:
5: namespace ConsoleApplication3
6: {
7:     class Program
8:     {
9:         static void Main(string[] args)
10:         {
11:             //a reference to the data context
12:             NorthwindEntityDiagrams dat = new NorthwindEntityDiagrams();
13:             //LINQ Statement
14:             var result = from c in dat.Customers
15:                          where c.Country == "Germany"
16:                          select c;
17:
18:             //Print out the company name
19:             foreach (var cust in result)
20:             {
21:                 Console.WriteLine("CompanyName: " + cust.CompanyName);
22:             }
23:             //keep the console window open
24:             Console.Read();
25:         }
26:     }
27: }

Lines 19-24 loop through the result of our LINQ query and display the results. That's it! All of the super powerful features of OpenAccess are available to you to further enhance your experience; however, in most cases this is all you need. In future posts I will show how to use the Visual Designer with some other scenarios. Stay tuned. Enjoy! Technorati Tags: Telerik, OpenAccess, LINQ Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • Quicktips 1: Windows 7 Libraries; New website

    - by Michael B. McLaughlin
    I’m working on several large posts right now. So in the interim, I’ve decided to do shorter posts that contain something I find very helpful. This is the first. I’ve been using Windows 7 since April 2010. It’s the first OS I’ve ever worked with that I actually enjoy. I’ve used many over the years (KERNAL; PC DOS; MS-DOS 3.x+; Windows 3.0, 3.11, 95, 98, 98 SE, Me, NT 3.51, NT 4, 2000, XP, Vista, 7; various GNU/Linux distros starting with Debian 1.2 – most recently Ubuntu 10.04; ProDOS, Mac OS 9.X, Mac OS X (through 10.4); SunOS, Solaris; AIX, z/OS; OpenVMS). Some were frustrating. Some tolerable. Some were “nice except for…”. OS X actually started out as seemingly “nice” until every single release contained a breaking change to some major API and they then decided to flip-off everyone who had bought a Mac as little as two years earlier with the release of Snow Leopard without PPC support. Windows 7 is the first one that’s just “nice” without any qualifiers. There are so many little features that add up to make it nice. Today’s Quicktip is one of them. Quicktip 1: Create a Library for your Code One thing I particularly like about Windows 7 is the Libraries feature in Explorer. Specifically the fact that you can create custom ones. I used to spend a lot of time opening new Explorer windows and navigating my various Visual Studio projects folders. Custom libraries allowed me to simplify that whole process. I now simply go to my “Code” library and there it all is. Adding a new library is easy. Open an Explorer window. If you aren’t in your Libraries when it opens, navigate to Libraries. Click the “New library” button. Give it a name. Then right click on the new library you created and go to “Properties”. Click the “Include a folder…” button. Choose the folder you want and press “Include folder”. Voilà! If you wish to add more, simply click “Include a folder…” again and repeat. It’s true that this is just a small time saver. But it’s one of those things that just adds a really nice touch. ------------------------ In a separate note, just before Christmas I finally finished and published my new website: http://www.bobtacoindustries.com/ . I waited to post here about it until I found time to incorporate a few things I hadn’t had the time to do when I pushed it out for its “soft open”. Most of them are now done and so my site is now formally open. I have no plans or intentions of moving my blog ( http://blog.bobtacoindustries.com/ points here). I quite like it here, both in terms of the interface and also in terms of the concept (and realization thereof) of pooling geek bloggers to create a pool of knowledge and helpful tips, tricks, techniques, and advice. I created it simply because I felt that it was time to have a website as I venture further into my return to the land of software development. The “For Devs” section should hopefully be useful to developers, particularly the links section. It’s my curated list of sites that I regularly visit to solve problems, to help answer questions on Twitter and the AppHub forums, and to learn new things. I’ll be adding links to it periodically and will be including topic areas as I become acquainted with them enough to form a proper list. WPF will likely be the first topic area added. If there are any links you think I should add to the existing topics, let me know! I warn in advance that I’m less inclined to add blogs; there are simply too many good blogs and I do not want to have hundreds per topic area. 
So blogs are limited primarily, though not exclusively, to acknowledged experts in the subject area who generally blog regularly about it and who usually are part of the team that develops the product or technology in question. I’m much more amenable to including individual blogs posts in the techniques subcategory in the appropriate topic area. Ultimately, it’s a collection of things I find interesting and helpful. So please no hard feelings if I don’t add a link you think is awesome. I may well think it’s awesome too, but conclude that it doesn’t fit with my goals for the dev links area.

    Read the article

  • Converting LINQ to Twitter to Twitter API v1.1

    - by Joe Mayo
    Twitter recently updated their API to v1.1 (Current status: API v1.1). Naturally, LINQ to Twitter needed to be updated too. This blog post outlines the changes made to LINQ to Twitter during this conversion and highlights important features that LINQ to Twitter developers will want to know. Overall Impact: Generally speaking, Twitter API v1.1 is semantically very much the same as its predecessor. The base URL changed and so did a few resource segments, but the resources themselves are still intact. The good news is that LINQ to Twitter has always shielded the developer from this plumbing, so the entities, types, and filters didn't change much at all. The following sections describe what did change. Authentication: In Twitter API v1.0 authentication was not required for some resources, such as user timelines and search. However, that's all changed because *all* queries must be authenticated in Twitter API v1.1. LINQ to Twitter has various types of authorizers you can use, supporting whatever OAuth options are available via Twitter. You can see the LINQ to Twitter documentation, Securing Your Applications, for more info on OAuth support. The New Search: One of the larger changes to the API was Search. To be more specific, the Search entity now contains a List<Status>, named Statuses, to hold results. Additionally, any meta-data associated with the search is now in a property named SearchMetaData. The change to the Search entity and responses is the big change, but the good news is that your Search query syntax doesn't change. Different Rate Limits: The issue of rate limits itself is contentious, but this discussion is focused on the coding experience and I'll leave the politics to those who prefer to engage in that activity. What's important here is that both headers and resources have changed. You should review Twitter's Rate Limit documentation to understand what the changes mean. A quick explanation is that rate limits are applied individually to each resource in 15 minute time intervals. In LINQ to Twitter these changes surface on the Help entity, via HelpType.RateLimits. The RateLimits query has a Resources filter where you can specify a comma-separated list of categories to return rate limit info for. The results materialize in the RateLimits dictionary, keyed on category. The Help entity also has a RateLimitsAuthorizationContext, holding the Access Token for the user performing queries – and to whom the rate limits apply. In addition to the new RateLimits query, there are new RateLimit headers that appear in the query response, whose HTTP header name is of the form X-Rate-Limit…, which is different from the previous header name. LINQ to Twitter surfaces these headers via the existing properties of the TwitterContext instance. For anyone who retrieved rate limit information via the Headers property of TwitterContext, you should be aware of the new header names. I haven't done anything with Feature rate limit properties yet, but they appear to no longer be available – this will require more follow-up. Error Handling: Twitter API v1.1 has a new format for Error Codes & Responses. LINQ to Twitter wraps these messages in the TwitterQueryException, which has been updated appropriately. The Message property of TwitterQueryException now reflects the Twitter error message, when available. There's also a new ErrorCode that's populated with the message error code. 
Parameters: Most parameters stayed the same, but one of interest is Include Entities (different from LINQ to Twitter data object entities). Entities are metadata hanging off tweets that provide start/end positions in the tweet and other information for mentions, urls, hash tags, and media. Entities used to not be included unless you specified you wanted them. Now, in v1.1, entities are included by default for all APIs that return a Status. If you were always setting IncludeEntities to true, then you won't see a change. However, be aware that you'll now be receiving additional data in your response from Twitter, which will explain a sudden increase in bandwidth utilization. This might or might not matter to you depending on the requirements of your application, but you should be aware of it. Everything Else: There might be small changes here and there that I haven't mentioned, but these were the ones you should be most aware of. Streams didn't change, but Twitter will be deprecating username/password authentication on public streams, in favor of OAuth, so you'll be seeing me make that change some time in the future. Also, Twitter will continue to evolve the API and you can expect that LINQ to Twitter will change accordingly. Summary: The big changes to the Twitter API were Authentication, Search, Rate Limits, and Error Handling. All API calls must be authenticated. You'll need to change your code to read Search results differently, but the query is much the same as you use now. There's a new RateLimits API, one of the Help queries. Also, the new error messages are integrated into TwitterQueryException. Besides these changes, I expect most others to be small or affect a smaller percentage of developers. You can get the latest version of LINQ to Twitter from NuGet or visit the LINQ to Twitter download page at CodePlex.com.   @JoeMayo

    Read the article

  • Books are Dead! Long Live the Books!

    - by smisner
    We live in interesting times with regard to the availability of technical material. We have lots of free written material online in the form of vendor documentation, forums, blogs, and Twitter. And we have written material that we can buy in the form of books, magazines, and training materials. Online videos and training – some free and some not free – are also an option. All of these formats are useful for one need or another. As an author, I pay particular attention to the demand for books, and for now I see no reason to stop authoring books. I assure you that I don’t get rich from the effort, and fortunately that is not my motivation. As someone who likes to refer to books frequently, I am still a big believer in books and have evidence from book sales that there are others like me. If I can do my part to help others learn about the technologies I work with, I will continue to produce content in a variety of formats, including books. (You can view a list of all of my books on the Publications page of my site and my online training videos at Pluralsight.) As a consumer of technical information, I prefer books because a book typically can get into a topic much more deeply than a blog post, and can provide more context than vendor documentation. It comes with a table of contents and a (hopefully accurate) index that helps me zero in on a topic of interest, and of course I can use the Search feature in digital form. Some people suggest that technology books are outdated as soon as they get published. I guess it depends on where you are with technology. Not everyone is able to upgrade to the latest and greatest version at release. I do assume, however, that the SQL Server 7.0 titles in my library have little value for me now, but I’m certain that the minute I discard the book, I’m going to want it for some reason! Meanwhile, as electronic books overtake physical books in sales, my husband is grateful that I can continue to build my collection digitally rather than physically, as the books have a way of taking over significant square footage in our house! Blog posts, on the other hand, are useful for describing the scenarios that come up in real-life implementations that wouldn’t fit neatly into a book. For as many years as I have been working with the Microsoft BI stack, I still run into new problems that require creative thinking. Likewise, people who work with BI and other technologies that I use share what they learn through their blogs. Internet search engines help us find information in blogs that simply isn’t available anywhere else. Another great thing about blogs is the connection to community and the dialog that can ensue between people with common interests. With the trend towards electronic formats for books, I imagine that we’ll see books continue to adapt to incorporate different forms of media and better ways to keep the information current. At the moment, I wish I had a better way to help readers with my last two Reporting Services books. In the case of the Microsoft® SQL Server™ 2005 Reporting Services Step by Step book, I have heard many cases of readers having problems with the sample database that shipped on CD – either the database was missing or it was corrupt. So I’ve provided a copy of the database on my site for download from http://datainspirations.com/uploads/rs2005sbsDW.zip. 
Then for the Microsoft® SQL Server™ 2008 Reporting Services Step by Step book, we decided to avoid the database problem by using the AdventureWorks2008 samples that Microsoft published on Codeplex (although code samples are still available on CD). We had this silly idea that the URL for the download would remain constant, but it seems that expectation was ill-founded. Currently, the sample database is found at http://msftdbprodsamples.codeplex.com/releases/view/37109 but I have no idea how long that will remain valid.

My latest books (#9 and #10, milestones I never anticipated), Building Integrated Business Intelligence Solutions with SQL Server 2008 R2 and Office 2010 (McGraw Hill, 2011) and Business Intelligence in Microsoft SharePoint 2010 (Microsoft Press, 2011), will not ship with a CD but will provide all code samples for download at a site maintained by the respective publishers. I expect that the download URLs for these books will remain valid, but there are lots of references to other sites that can change or disappear over time. Does that mean authors shouldn’t make reference to such sites? Personally, I think the benefits to be gained from including links are greater than the risks of the links becoming invalid at some point.

Do you think the time for technology books has come to an end? Is the delivery of books in electronic format enough to keep them alive? If technological barriers were no object, what would make a book more valuable to you than other formats through which you can obtain information?

    Read the article

  • JavaOne 2012 Sunday Strategy Keynote

    - by Janice J. Heiss
    At the Sunday Strategy Keynote, held at the Masonic Auditorium, Hasan Rizvi, EVP, Middleware and Java Development, stated that the theme for this year's JavaOne is “Make the future Java” -- meaning that Java continues in its role as the most popular, complete, productive, secure, and innovative development platform. But it also means, he qualified, the process by which we make the future Java -- an open, transparent, collaborative, and community-driven evolution. "Many of you have bet your businesses and your careers on Java, and we have bet our business on Java," he said.

Rizvi detailed the three factors they consider critical to the success of Java--technology innovation, community participation, and Oracle's leadership/stewardship. He offered a scorecard in these three realms over the past year--with OS X and Linux ARM support in Java SE, open sourcing of JavaFX by the end of the year, the release of the Java Embedded Suite 7.0 middleware platform, and multiple releases on the Java EE side. The JCP process continues, with new JSR activity, and JUGs show a 25% increase in participation since last year. Oracle, meanwhile, continues its commitment to both technology and community development/outreach--with four regional JavaOne conferences last year in various parts of the world, as well as the release of Java Magazine, with over 120,000 current subscribers.

Georges Saab, VP Development, Java SE, next reviewed features of Java SE 7--the first major revision to the platform under Oracle's stewardship, which has included near-monthly update releases offering hundreds of fixes, performance enhancements, and new features. Saab indicated that developers, ISVs, and hosting providers have all been rapid adopters of the platform. He also noted that Oracle's entire Fusion Middleware stack is supported on SE 7. The set of supported platforms for SE 7 has also grown--from Windows, Linux, and Solaris to OS X, Linux ARM, and the emerging ARM micro-server market. "In the last year, we've added as many new platforms for Java as were added in the previous decade," said Saab.

Saab also explored the upcoming JDK 8 release--including Project Lambda, Project Nashorn (a modern implementation of JavaScript running on the JVM), and others. He noted that Nashorn functionality had already been used internally in NetBeans 7.3, and announced that Oracle plans to contribute the implementation to OpenJDK.

Nandini Ramani, VP Development, Java Client, ME and Card, discussed the latest news pertaining to JavaFX 2.0--releases on Windows, OS X, and Linux, the release of the JavaFX Scene Builder tool, the JavaFX WebView component in NetBeans 7.3, and an OpenJFX project in OpenJDK. Ramani announced, as of Sunday, the availability for download of JavaFX on Linux ARM (developer preview), as well as Scene Builder on Linux. She noted that for next year's JDK 8 release, JavaFX will offer 3D support, as well as third-party component integration. Avinder Brar, Senior Software Engineer, Navis, and Dierk König, Canoo Fellow, next took the stage and demonstrated all that JavaFX offers with a feature-rich, animation-rich, real-time cargo management application that employs Canoo's just open-sourced Dolphin technology.

Saab then looked ahead to Java SE 9 and beyond--Jigsaw modularity, the Penrose Project for interoperability with OSGi, improved multi-tenancy for Java in the cloud, and Project Sumatra.
Phil Rogers, HSA Foundation President and AMD Corporate Fellow, explored heterogeneous computing platforms that combine the CPU and the parallel processor of the GPU on a single piece of silicon with shared memory--a hardware technology driven by advanced workloads such as HD video, face recognition, and the cloud. Project Sumatra is an OpenJDK project targeted at bringing Java to such heterogeneous platforms--with hardware and software experts working together to modify the JVM for these advanced applications and platforms.

Ramani next discussed the latest with Java in the embedded space--"the Internet of things" and M2M--declaring this to be "the next IT revolution," with Java as the ideal technology for the ecosystem. Last week, Oracle released Java ME Embedded 3.2 (for micro-controllers and low-power devices) and Java Embedded Suite 7.0 (a middleware stack based on Java SE 7). Axel Hansmann, VP Strategy and Marketing, Cinterion, explored his company's use of Java in M2M and its new release of EHS5, the world's smallest 3G-capable M2M module, running Java ME Embedded. Hansmann explained that Java offers them the ability to create a "simple to use, scalable, coherent, end-to-end layer" for such diverse edge devices. Marc Brule, Chief Financial Officer, Royal Canadian Mint, also explored the fascinating use case of Java Card in his country's MintChip e-cash technology--deployable on a smartphone, USB device, computer, tablet, or the cloud. In parting, Ramani encouraged developers to download the latest releases of Java Embedded and try them out.

Cameron Purdy, VP, Fusion Middleware Development and Java EE, summarized the latest developments and announcements in the enterprise space--greater developer productivity in Java EE 6 (with more on the way in EE 7), portability between platforms and vendors, and even cloud-to-cloud portability. The earliest version of the Java EE 7 SDK is now available for download--in GlassFish 4--with WebSocket support, better JSON support, and more. The final release is scheduled for April 2013. Nicole Otto, Senior Director, Consumer Digital Technology, Nike, explored her company's Java technology-driven enterprise ecosystem for all things sports, including the NikeFuel accelerometer wristband. Looking beyond Java EE 7, Purdy mentioned NoSQL database functionality for EE 8, the concurrency utilities (possibly in EE 7), some of the Avatar projects in EE 7 and some in EE 8, multi-tenancy for the cloud, support for SaaS applications, and more.

Rizvi ended by introducing Dr. Robert Ballard, oceanographer and National Geographic Explorer in Residence--part of Oracle's philanthropic relationship with the National Geographic Society to fund K-12 education around ocean science and conservation. Ballard is best known for having discovered the wreckage of the Titanic. He offered a fascinating video and overview of the cutting-edge technology used in such deep-sea explorations, noting that in his early days, high-bandwidth exploration meant that you’d go down in a submarine and "stick your face up against the window." Now, it's remotely operated technological telepresence--"I think of my Hercules vehicle as my equivalent of a Na'vi. When I go beneath the sea, I actually send my spirit." Using high-bandwidth satellite links, such explorations can now be followed via smartphone, laptop, or whatever platform is at hand. Ballard’s team regularly offers live feeds and programming to schools and the world, spanning 188 countries--with educators embedded as part of the expeditions.
It's technology at its finest, inspiring the next generation of scientists and explorers!

    Read the article

  • Easy Made Easier

    - by dragonfly
    How easy is it to deploy a 2-node, fully redundant Oracle RAC cluster? Not very--unless you use an Oracle Database Appliance. The focus of this member of Oracle's Engineered Systems family is to simplify the configuration, management, and maintenance throughout the life of the system, while offering pay-as-you-grow scaling. Getting a 2-node RAC cluster up and running in under 2 hours has been made possible by the Oracle Database Appliance. Don't take my word for it; just check out these blog posts from partners and end users.

The Oracle Database Appliance Experience - Zip Zoom Zoom: http://www.fuadarshad.com/2012/02/oracle-database-appliance-experience.html
Off-the-shelf Oracle database servers: http://normanweaver.wordpress.com/2011/10/10/off-the-shelf-oracle-database-servers/
Oracle Database Appliance – Deployment Steps: http://marcel.vandewaters.nl/oracle/database-appliance/oracle-database-appliance-deployment-steps

See how easy it is to deploy an Oracle Database Appliance for high availability with RAC? Now for the meat of this post, which is the first in a series of posts describing tips for making the deployment of an ODA even easier.

The key to the easy deployment of an Oracle Database Appliance is the Appliance Manager software, which does the actual software deployment and configuration based on best practices. But in order for it to do that, it needs some basic information first, including the system name, IP addresses, etc. That's where the Appliance Manager GUI comes into play, taking a wizard approach to collecting the information needed.

Using the Appliance Manager GUI is pretty straightforward: you step through several screens, entering data in typical wizard style. As with most configuration tasks, it helps to gather the required information beforehand. But before you rush out to a committee meeting on what to use for host names, and rely on whatever IP addresses might be hanging around, make sure you are familiar with some of the auto-fill defaults of the Appliance Manager. I'll step through the key screens below to highlight the results of the auto-fill capability of the Appliance Manager GUI.

Depending on which of the 2 Configuration Types (Config Type screen) you choose, you will get a slightly different set of screens. The Typical configuration assumes certain default configuration choices and has the fewest screens, whereas the Custom configuration gives you the most flexibility in what you configure from the start. In the examples below, I have used the Custom config type.

One of the first items you are asked for is the System Name (System Info screen). This is used to identify the system, but it also serves as the base for the default host names on the following screens. In this screen shot, the System Name is "oda".

When you get to the next screen (Generic Network screen), you enter your domain name, DNS IP address(es), and NTP IP address(es). Next up is the Public Network screen, seen below, where the host name fields are automatically filled in with default host names based on the System Name, in this case "oda". The System Name is also the basis for the default host names for the extra Ethernet ports available for configuration as part of a Custom configuration, as seen in the second screen shot below (Other Network screen). There is no requirement to use these host names, as you can easily edit any of them.
This does make filling in the configuration details easier and less prone to "fat fingers", provided you are OK with these host names. Here is the full list of automatically filled-in host names, based on the System Name ("oda" in this example): oda1, oda2, oda1-vip, oda2-vip, oda-scan, oda1-ilom, oda2-ilom, oda1-net1, oda2-net1, oda1-net2, oda2-net2, oda1-net3, and oda2-net3.

Another auto-fill feature of the Appliance Manager GUI follows the common practice of assigning the IP addresses of a RAC cluster in sequential order. In the screen shot below, I entered the first IP address (Node1-IP), then hit Tab to move to the next field. As a result, the next 5 IP address fields were automatically filled in with the next 5 IP addresses, counted sequentially from the one I entered. As with the host names, these are not required and can be changed to whatever your IP address values are. One note of caution, though: if the first IP address field (Node1-IP) is filled out and you click in that field and back out, the following 5 IP addresses will be reset to the sequential default. If you don't use sequential IP addresses, pay attention to where you click that mouse. :-)

In the screen shot below, entering the netmask value in the Netmask field, in this case 255.255.255.0, caused the gateway value to be auto-filled into the Gateway field, based on the IP addresses and netmask previously entered. As always, you can change this value.

My last 2 screen shots illustrate that the same sequential IP address auto-fill and netmask-to-gateway auto-fill work when entering the IP configuration details for the Integrated Lights Out Manager (ILOM) on both nodes. The time these auto-fill capabilities save in data entry is nice, but from my perspective it is not as important as the opportunity they provide to avoid data entry errors. In my next post in this series, I will touch on the benefit of using the network validation capability of the Appliance Manager GUI prior to deploying an Oracle Database Appliance.
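To illustrate the auto-fill behavior described above, here is a small Python sketch of the same ideas. It is my own illustration, not the Appliance Manager code: it derives the default host names from a System Name, fills the next five IP addresses sequentially from the first one entered, and derives a gateway from the first address and the netmask. The gateway convention (first usable host in the subnet) is an assumption made for the sketch.

    import ipaddress

    def default_host_names(system_name):
        # Append the wizard-style suffixes to the System Name (e.g. "oda").
        suffixes = ["1", "2", "1-vip", "2-vip", "-scan", "1-ilom", "2-ilom",
                    "1-net1", "2-net1", "1-net2", "2-net2", "1-net3", "2-net3"]
        return [system_name + suffix for suffix in suffixes]

    def sequential_ips(first_ip, count=6):
        # Fill 'count' addresses in sequence, starting from the first one entered.
        start = ipaddress.ip_address(first_ip)
        return [str(start + offset) for offset in range(count)]

    def default_gateway(first_ip, netmask):
        # Assumed convention for the sketch: gateway = first usable host in the subnet.
        network = ipaddress.ip_network(f"{first_ip}/{netmask}", strict=False)
        return str(next(network.hosts()))

    print(default_host_names("oda"))                         # oda1, oda2, oda1-vip, ...
    print(sequential_ips("192.168.1.10"))                    # 192.168.1.10 .. 192.168.1.15
    print(default_gateway("192.168.1.10", "255.255.255.0"))  # 192.168.1.1

The point, as in the GUI, is not so much to save typing as to keep host names and addresses consistent and free of data entry errors; every derived value can still be overridden.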

    Read the article
