Search Results

Search found 24301 results on 973 pages for 'execution process mfg'.


  • Is Oracle Policy Automation a Fit for My Agency? I'll bet it is.

    - by jeffrey.waterman
    Recently, I stumbled upon a new(-ish) whitepaper now posted on the Oracle Technology Network around Oracle Policy Automation (OPA). This paper is certain to become a must-read for any customer interested in rules automation. What is OPA? If you are not sitting in your favorite Greek restaurant waiting for that order of Saganaki to appear, OPA is Oracle’s solution for streamlining, standardizing, and maintaining policy through automation. It is a specialized rules platform that simplifies the automation of rules and policies, putting the analysis in the hands of the analysts, not the IT organization. In other words, OPA allows the organization to be more efficient by eliminating (or at a minimum, reducing the engagement of) the middle man from the process. The whitepaper I mention above is titled “Is Oracle Policy Automation a Good Fit for My Business?”. This short document walks the reader through use cases and advice to consider when deciding if OPA is right for their agency. The paper outlines many different scenarios, different uses of OPA in production today, and where OPA may not be a good fit. Many of the use case examples revolve around end-user questionnaires or analyst research. What is often overlooked is OPA’s ability to act as a rules engine behind the scenes: that is, take inputs from one source (e.g., personnel data), process that data in OPA, and send the output (e.g., pay data with benefits deductions) to a second source. The rules have been automated, with no human intervention required to perform the analysis. A few of my customers have used the embedded OPA solution to improve transaction processing and reduce the time spent analyzing exceptions. I suggest any reader whose organization relies on or deals with high complexity, volume or volatility in rules that are based on documentation – or which need to be documented – take a look at Oracle Policy Automation. You can find the white paper in the Oracle Policy Automation section of the Oracle Technology Network (OTN), and more information about OPA on oracle.com. Finally, you can send me a question any time at [email protected] Thank you for reading. If you have any topics around Oracle Applications in the Federal or Public Sector industries you would like to see addressed in this blog, please leave suggestions in the comments section and I will do my best to address them in a future post.

    Read the article

  • Orchestrating the Virtual Enterprise, Part I

    - by Kathryn Perry
    A guest post by Jon Chorley, Oracle's Chief Sustainability Officer & Vice President, SCM Product Strategy. During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise." Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise. What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise. Supply Chain Management Systems in a Virtual Enterprise Historically, enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources. Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.
In my next post, I will share examples of companies that have made that shift and talk more about the distributed orchestration process.

    Read the article

  • Oracle ATG Ranked "Leader" Once Again In This Year's Gartner Magic Quadrant For E-Commerce

    - by Michael Hylton
    Oracle ATG Web Commerce is in the top portion of the Leaders quadrant once again in this year's Gartner Magic Quadrant for E-Commerce, and gained in “ability to execute” over the 2010 version. Leaders are defined in this Magic Quadrant as technology providers that demonstrate the optimal blend of insight, innovation, execution and the ability to "see around the corner." Oracle ATG Web Commerce is a Leader because it has broadened its e-commerce capabilities with multisite management, support for a broader range of mobile devices and other additions, and Gartner points out ATG’s steady growth in revenue, market share and market visibility. Gartner notes that Oracle made the announcement regarding its acquisition of ATG in November 2010 and this has helped ATG with additional sales, marketing, R&D and global partnerships. Oracle ATG's latest release, Oracle ATG Commerce 10, provides several important enhancements, including multisite management, cross-channel campaign management and support for a broader range of mobile devices, with the addition of merchandising (including updates to the user interface) and promotions applications. The Magic Quadrant focuses on e-commerce for B2B and B2C across industry verticals, including retail, manufacturing, distribution, telecommunications, publishing, media, and financial services. The product should be able to integrate with applications beyond traditional e-commerce channels to meet the emerging customer requirement to transact across channels with a seamless experience.

    Read the article

  • Is your TRY worth catching?

    - by Maria Zakourdaev
    The very useful TRY/CATCH error-handling construct is widely used to catch all execution errors that do not close the database connection. The biggest downside is that in the case of multiple errors the TRY/CATCH mechanism will only catch the last error. An example of this can be seen during a standard restore operation. In this example I attempt to perform a restore from a file that no longer exists, and two errors are fired: 3201 and 3013. If we use the TRY and CATCH construct, the ERROR_MESSAGE() function will catch the last message only. To work around this problem you can prepare a temporary table that will receive the statement output. Execute the statement inside the xp_cmdshell stored procedure, connect back to the SQL Server using the command-line utility sqlcmd, and redirect its output into the previously created temp table. After receiving the output, you will need to parse it to understand whether the statement finished successfully or failed. It’s quite easy to accomplish as long as you know which statement was executed. In the case of generic executions you can query the output table and search for patterns like “Msg%Level%State%” that are usually part of the error message. Furthermore, you don’t need TRY/CATCH in the above workaround, since the xp_cmdshell procedure always finishes successfully and you can decide whether to fire the RAISERROR statement or not. Yours, Maria
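    A minimal sketch of the behavior described above (the database name and backup path are illustrative, not the ones from the original example):

        -- Restoring from a backup file that no longer exists raises two errors:
        -- 3201 (cannot open backup device) followed by 3013 (RESTORE terminating abnormally).
        BEGIN TRY
            RESTORE DATABASE AdventureWorks
            FROM DISK = N'C:\Backups\MissingFile.bak';
        END TRY
        BEGIN CATCH
            -- Only the last error surfaces here: 3013, not the more descriptive 3201.
            SELECT ERROR_NUMBER()  AS ErrorNumber,
                   ERROR_MESSAGE() AS ErrorMessage;
        END CATCH;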

    Read the article

  • SQL Saturday Atlanta: Intro To Performance Tuning

    - by Mike Femenella
    I'm looking forward to speaking in Atlanta on the 24th; it will be fun to get back down that way to visit with some friends and present two topics that I really enjoy. First, an introduction to performance tuning. Performance tuning is a very wide and deep topic and we're staying close to the surface. I aim this class at new SQL users who have less than 2 years of experience. It's all the things I wish someone would have told me in my first 2 years about what to look for when the database was slow... or allegedly slow, I should say. We'll cover using Profiler to find slow-performing queries and how to save the data off to a table, as well as a tour of other features; the difference between clustered, non-clustered and covering indexes; how to look at and understand an execution plan (at a high level); and finally the difference between a temp table and a table variable and what the implications are of using either one in your code. That pretty much takes up a full hour. Second presentation, Loading Data in Real Time. It's really a presentation about partitioning, but with a twist that we used at work recently to solve a need to load some data quickly and put it into production with minimal downtime. We'll cover partition functions, schemes, $PARTITION, MERGE, sys.partitions, and show some examples of building a set of partitioned tables and using the SWITCH statement to move data from one table to another. Finally we'll cover the differences in partitioning between 2005 and 2008. Hope to see you there! And if you read my blog please introduce yourself!
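    For readers who have not used partitioning before, here is a minimal sketch of the switch pattern the second talk describes (table names, types and boundary values are illustrative, not the examples from the session):

        -- One partition per day (boundary values are illustrative).
        CREATE PARTITION FUNCTION pfDaily (datetime)
            AS RANGE RIGHT FOR VALUES ('20100423', '20100424', '20100425');

        CREATE PARTITION SCHEME psDaily
            AS PARTITION pfDaily ALL TO ([PRIMARY]);

        -- Source and target must have identical structure and sit on the same partition scheme.
        CREATE TABLE dbo.SalesStaging (SaleDate datetime NOT NULL, Amount money NOT NULL) ON psDaily (SaleDate);
        CREATE TABLE dbo.Sales        (SaleDate datetime NOT NULL, Amount money NOT NULL) ON psDaily (SaleDate);

        -- Load dbo.SalesStaging in the background, then publish the finished partition
        -- with a metadata-only switch (near-zero downtime).
        DECLARE @p int;
        SET @p = $PARTITION.pfDaily('20100424');
        ALTER TABLE dbo.SalesStaging SWITCH PARTITION @p TO dbo.Sales PARTITION @p;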

    Read the article

  • Rolling Along: PASS Board Year 2, Q2

    - by Denise McInerney
    Eighteen months into my time as a PASS Director I’m especially proud of what the Virtual Chapters have accomplished and want to share that progress with you. I'm also pleased that the organization has invested more resources to support the VCs. In this quarter I got to attend two conferences and meet more members of the SQL community. Virtual Chapters In the first six months of 2013 VCs have hosted more than 50 webinars, offering free technical education to over 6200 attendees. This is a great benefit to PASS members; thanks to the VC leaders, volunteers and speakers who contribute their time to produce these events. The Performance VC held their “Summer Performance Palooza”, an event featuring eight back-to-back sessions. Links to the session recordings can be found on the VCs web site. The new webinar platform, GoToWebinar, has been rolled out to all the VCs. This is a more stable, scalable platform and represents an important investment into the future of the VCs. A few new VCs are in the planning stages, including one focused on Security and one for Russian speakers. Visit the Virtual Chapter home page to sign up for the chapters that interest you. Each Virtual Chapter is offering a discount code for PASS Summit 2013. Be sure to ask your VC leader for the code to save $200 on Summit registration. 24 Hours of PASS The next 24HOP will be on July 31. This Summit Preview edition will feature 24 consecutive webcasts presented by experts who will be speaking at Summit in October. Registration for this free event is open now. And we will be using the GoToWebinar platform for 24HOP also. Business Analytics Conference April marked the first PASS Business Analytics Conference in Chicago. This introduced PASS to another segment of data professionals: the analysts and data scientists who work with the world’s growing collection of data. Overall the inaugural event was a success and gave us a glimpse into this increasingly important space. After Chicago the Board had several serious discussions about the lessons learned from this event and what we should do next. We agreed to apply those lessons and continue to invest in this event; there will be a PASS Business Analytics Conference in 2014. I’m very pleased the next event will be in San Jose, CA, the heart of Silicon Valley, a place where a great deal of investment and innovation in data analytics is taking place. Global SQL Community Over the last couple of years PASS has been taking steps to become more relevant to SQL communities in different parts of the world. In May I had the opportunity to attend SQL Bits XI in Nottingham, England. It was enlightening to meet and talk with SQL professionals from around the U.K. as well as many other European countries. The many SQL Bits volunteers put on a great event and were gracious hosts. Budgets The Board passed the FY14 budget at the end of June. The budget process can be challenging and requires the Board to make some difficult choices about where to allocate resources. Overall I’m satisfied with the decisions we made and think we are investing in the right activities and programs. Next Up The Board is meeting July 18-19 in Kansas City. We will be holding the Executive Committee election for the Exec Co that will take office in 2014. We will also be discussing plans for the next BA conference as well as the next steps for our Global Growth initiative. Applications for the upcoming Board of Directors election open on July 24.
If you are considering running for the Board you can visit the PASS elections site to learn more about the election process. And I encourage anyone considering running to reach out to current and past Board members to learn about what the role entails. Plans for the next PASS Summit are in full swing. We are working on some fun new ideas to introduce attendees to the many ways to become involved in the SQL community.

    Read the article

  • Write DAX queries in Report Builder #ssrs #dax #ssas #tabular

    - by Marco Russo (SQLBI)
    If you use Report Builder with Reporting Services, you can use DAX queries even if the editor for the Analysis Services provider does not support DAX syntax. In fact, the DMX editor that you can use in the Visual Studio editor of Reporting Services (see a previous post on that) is not available in Report Builder. However, as Sagar Salvi commented in this Microsoft Connect entry, you can use the DAX query text in the query of a Dataset by using the OLE DB provider instead of the Analysis Services one. I think it’s a good idea to show the steps required. First, create a Data Source using the OLE DB connection type, and provide in the connection string the provider (Provider), the server name (Data Source) and the database name (Initial Catalog), such as: Provider=MSOLAP;Data Source=SERVERNAME\TABULAR;Initial Catalog=AdventureWorks Tabular Model SQL 2012 Then, create a Dataset using the data source previously defined, select the Text query type, and write the DAX code in the Query pane. You can also use the Query Designer window, which doesn’t provide any particular help in writing the DAX query, but at least can show a preview of the result of the query execution. I hope DAX will get better editors in the future… in the meantime, remember you can use DAX Studio to write and test your DAX queries, and DAX Formatter to improve their readability! If you want to learn the DAX Query Language, I suggest you watch my video Data Analysis Expressions as a Query Language on Project Botticelli!
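    As an illustration, the Query pane simply contains a DAX table expression. A minimal sketch against the Adventure Works tabular model might look like the following (the table and column names here are illustrative and need to match your own model):

        EVALUATE
        SUMMARIZE (
            'Internet Sales',
            'Product Category'[Product Category Name],
            "Sales Amount", SUM ( 'Internet Sales'[Sales Amount] )
        )
        ORDER BY 'Product Category'[Product Category Name]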

    Read the article

  • questions about dual-boot install Ubuntu 10.04 and Windows 7 on same hard drive

    - by Tim
    I'd like to dual-boot install Ubuntu 10.04 on the same hard drive as Windows 7, which has already been installed. As to sources on the internet: I found a website, iinet, about dual-boot installation of Ubuntu 10.10 and Windows 7 on the same hard drive, which I think is more specific than the one on the Ubuntu Community wiki, which doesn't target specific versions of the OSes. Since I am installing Ubuntu 10.04 instead of 10.10, my question is whether their installers are the same or almost the same, and whether I can follow iinet for my dual-boot installation. Or are there better websites for information about dual-boot installation of Ubuntu 10.04 and Windows 7? As to shrinking Windows partitions to make free space for Ubuntu partitions: iinet uses the partitioning software in Ubuntu's installer to shrink the Windows partition. But I saw on many websites that the partitioning software in Ubuntu's installer cannot guarantee shrinking Windows 7 partitions successfully, so they generally recommend shrinking Windows partitions under Windows itself using its own software. For example, the Ubuntu Community wiki says: "Some people think that the Windows partition must be resized only from within Windows Vista and Windows 7 using the shrink/resize option. ... If you use GParted Partition Editor in the Ubuntu Live CD be careful." So I was wondering which way to go in my situation. As to a partition for bootloader files: in iinet, I don't see a partition created and dedicated to boot files (i.e. GRUB files). However, I saw many websites strongly suggesting a boot partition for GRUB files, especially for the purpose of separating and protecting them from installed OS files. I was wondering which way I should choose and why. As to installing the GRUB bootloader: in iinet, I see that to install GRUB you only need to specify the hard drive device for bootloader installation. However, in ubuntuguide (for more than 2 OSes and Ubuntu 9.04), some commands need to be run in order to put GRUB configuration files in the MBR and the OS partition for the chain-load process (where to find the files for the next stage). In Ubuntu Community, there are some related sentences which I don't quite understand how to put into practice: "the only thing in your computer outside of Ubuntu that needs to be changed is a small code in the MBR (Master Boot Record) of the first hard disk. The MBR code is changed to point to the boot loader in Ubuntu. If you have a problem with changing the MBR code, you might prefer to just install the code for pointing to GRUB to the first sector of your Ubuntu partition instead. If you do that during the Ubuntu installation process, then Ubuntu won't boot until you configure some other boot manager to point to Ubuntu's boot sector. Windows Vista no longer utilizes boot.ini, ntdetect.com, and ntldr when booting. Instead, Vista stores all data for its new boot manager in a boot folder. Windows Vista ships with a command-line utility called bcdedit.exe, which requires administrator credentials to use. You may want to read http://go.microsoft.com/fwlink/?LinkId=112156 about it. Using a command line utility always has its learning curve, so a more productive and better job can be done with a free utility called EasyBCD, developed and mastered during the times of the Vista beta already. EasyBCD is user friendly and many Vista users highly recommend EasyBCD." Given what is quoted above, I was wondering how exactly I should change the MBR code to point to the bootloader in Ubuntu?
    If I fail to change the MBR code, are the other suggested boot managers bcdedit.exe and EasyBCD in Windows? Of the three sources above, which one should I follow? Thanks and regards

    Read the article

  • Webcast Q&A: Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter

    - by kellsey.ruppel
    Last Thursday we had the third webcast in our WebCenter in Action webcast series, "Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter", where customer Sean Mattson from HDS and Rob Vandenberg from Oracle Partner Lingotek shared how Oracle WebCenter is powering Hitachi Data Systems’ externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A.  Sean Mattson, Hitachi Data Systems  Q: Did you run into any issues in the deployment of the platform? A: There were some challenges; we were one of the first enterprise ‘on premise’ installations for Lingotek and our WebCenter platform also has a lot of custom features. There were a lot of iterations and back and forth working with Lingotek at first. We both helped each other, learned a lot and in the end managed to resolve all issues and roll out a very compelling solution for HDS. Q: What has been the biggest benefit your end users have seen? A: Being able to manage and govern the content lifecycle globally and centrally and at the same time enabling the field to update, review and publish the incremental content changes without a lot of touchpoints has helped us streamline and simplify the entire publishing process. Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that? A: I wouldn't say resistance as much as skepticism that we could actually deploy an automated and self-publishing solution. Even if a solution is great, adoption of a new process can be a challenge and we are still pursuing our adoption targets. One of the most important aspects is to include lots of training and support materials and offer as much helpdesk-type support as needed to get the field self-sufficient and confident in the capabilities of the system.  Rob Vandenberg, Lingotek  Q: Are there any limitations regarding supported languages such as support for French Canadian and Indian languages? A: Lingotek supports all language pairs, including right-to-left languages and double-byte languages such as Chinese, Japanese and Korean. Q: Is the Lingotek solution integrated with the new 11g release of WebCenter Sites? A: Yes! In fact, Lingotek is the first OVI partner for Oracle WebCenter Sites. Q: Can translation memories help to improve the accuracy of machine translation? A: One of the greatest long-term strategic benefits of using Lingotek is the accumulation of translation memories, or past human translations. These TMs can be used to "train" statistical machine translation engines to have higher and higher quality. This virtuous cycle is ongoing and will consistently improve both machine and human translations. Q: We have existing translation memories from previous work with our translation service provider. Can they be easily imported into the Lingotek solution for re-use? A: Yes, Lingotek is standards compliant. We support TM import in both the TMX and XLIFF formats. Q: If we use Lingotek as a service to do our professional translation and also use the Lingotek software solution, do we get the translation memories to give us a means of just translating future adds and changes ourselves? A: Yes, all the data is yours, always. Lingotek can provide both the integrated translation software as well as the professional translation services. All the content and translation memories are yours. Q: Can you give us an example of where community translation has proved to be successful? A: The key word here is community.
If you have a community that cares about you, your content, and the rest of the community, then community translation can work for you. We've seen effective use cases in Product User Groups content, Support Communities, and other types of User Generated content, like wikis and blogs.   If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!   Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter from Oracle WebCenter

    Read the article

  • MIX 2010 Covert Operations Day 3

    - by GeekAgilistMercenary
    I rolled over to the Mandalay for breakfast. There I met a couple of guys who were really excited about the new Windows 7 Phone. They, as I, are also hopeful that the phone really gets a big push and some penetration into the market. Not because we don't like any of the other phones, but because this phone is so much better in many ways. From a developer's perspective, creating applications for Windows 7 Mobile will be vastly superior in ease, capabilities, and other aspects. The architecture, existing code base, examples, and provisions to create things on the 7 Mobile device already exist as of RIGHT NOW. There is no reason, except for fickle market conditions, for this phone to not just explode onto the market. But alas, I won't hold my breath. The day three keynote had a whole new slew of things provided. It also seemed that things got a lot more technical in this second keynote. OData was one of the very technical bits, yet it included almost no code. Starting with a Netflix example and going all the way to the Codename "Dallas" effort, the OData services provide some expansive possibilities. A four-way mashup was then shown for finding a movie, finding local places to have a viewing, getting information about the movie, and where to prospectively find and buy additional movie bits. The display was, of course, on a Windows 7 Mobile device with literally a click to view each set of data. The backend and the front end of this were beautifully smooth. The Dallas Project also has a lot of potential for analytics in dashboard and scorecard creation. If there is a need or reason to provide data to a vast and wide range of clients, Dallas is a prime example of how to do that. Azure Clouds After the main keynote I checked out (while developing a working WPF & Silverlight application for work) the session on deploying ASP.NET applications, services, etc. into the cloud. The session was pretty good, but I'll admit I got a little unfocused from it a few times. It is, after all, hard to do two things at one time. I did take note that deploying to the cloud is still a multiple-step process. This is a good thing and a bad thing. There need to be more checks and verifications when deploying something into the cloud, just for technical reasons. However, I feel that there should be some streamlining of the process. Going back and forth between the web and Visual Studio as the interface also seems kind of clunky. Deployment should be able to be completed from within Visual Studio, in my opinion. Overall, the cloud is getting more and more impressive in function as well as theory. That's it from me so far on the third day of MIX. I'll be note-taking and studying hard to have more good tidbits to provide. Thanks for reading; if you're curious about more of my writing, check out this original entry at my other blog, Agilist Mercenary.

    Read the article

  • Interfaces and Virtuals Everywhere????

    - by David V. Corbin
    First a disclaimer; this post is about micro-optimization of C# programs and does not apply to most common scenarios - but when it does, it is important to know. Many developers are in the habit of declaring members virtual to allow for future expansion or of using interface-based designs1. Few of these developers think about what the runtime performance impact of this decision is. A simple test will show that this decision can have a serious impact. For our purposes, we used a simple loop to time the execution of 1 billion calls to both non-virtual and virtual implementations of a method that took no parameters and had a void return type: Direct Call: 1.5uS; Virtual Call: 13.0uS. The overhead of the call increased by nearly an order of magnitude! Once again, it is important to realize that if the method does anything of significance then this ratio drops quite quickly. If the method does just 1mS of work, then the differential only accounts for a 1% decrease in performance. Additionally the method in question must be called thousands of times in order to produce a measurable impact at the application level. Yet let us consider a situation such as the per-pixel processing of a graphics processing application. Here we may have a method which is called millions of times and even the slightest increase in overhead can have significant ramifications. In this case using either explicit virtuals or interface-based constructs is likely to be a mistake. In conclusion, good design principles should always be the driving force behind decisions such as these; but remember that these decisions do not come for free.   1) When a concrete class member implements an interface it does not need to be explicitly marked as virtual (unless, of course, it is to be overridden in a derived concrete class). Nevertheless, when accessed via the interface it behaves exactly as if it had been marked as virtual.
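    A rough sketch of the kind of timing harness described above (this is not the author's original code, and absolute numbers will vary with hardware, JIT inlining and build settings):

        using System;
        using System.Diagnostics;

        class CallOverheadBenchmark
        {
            class Worker
            {
                public void DirectCall() { }            // non-virtual: eligible for JIT inlining
                public virtual void VirtualCall() { }   // virtual: dispatched through the method table
            }

            static void Main()
            {
                var worker = new Worker();
                const long iterations = 1000000000;     // 1 billion calls, as in the article

                var sw = Stopwatch.StartNew();
                for (long i = 0; i < iterations; i++) worker.DirectCall();
                sw.Stop();
                Console.WriteLine("Direct:  " + sw.Elapsed);

                sw.Restart();
                for (long i = 0; i < iterations; i++) worker.VirtualCall();
                sw.Stop();
                Console.WriteLine("Virtual: " + sw.Elapsed);
            }
        }

    Run it as a Release build; in a Debug build the optimizations that make the direct call so cheap are largely disabled, which skews the comparison.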

    Read the article

  • To SYNC or not to SYNC – Part 4

    - by AshishRay
    This is Part 4 of a multi-part blog article where we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In Part 3, I talked in detail about why Data Guard SYNC is a good thing, and the distance implications you have to keep in mind. In this final article of the series, I will talk about how you can nicely complement Data Guard SYNC with the ability to fail over in seconds. Wait - Did I Say “Seconds”? Did I just say that some customers do Data Guard failover in seconds? Yes, Virginia, there is a Santa Claus. Data Guard has an automatic failover capability, aptly called Fast-Start Failover. Initially available with Oracle Database 10g Release 2 for Data Guard SYNC transport mode (and enhanced in Oracle Database 11g to support Data Guard ASYNC transport mode), this capability, managed by Data Guard Broker, lets your Data Guard configuration automatically fail over to a designated standby database. Yes, this means no human intervention is required to do the failover. This process is controlled by a low-footprint Data Guard Broker client called Observer, which makes sure that the primary database and the designated standby database are behaving like good kids. If something bad were to happen to the primary database, the Observer, after a configurable threshold period, tells that standby, “Your time has come, you are the chosen one!” The standby dutifully follows the Observer directives by assuming the role of the new primary database. The DBA or the Sys Admin doesn’t need to be involved. And - in case you are following this discussion very closely, and are wondering … “Hmmm … what if the old primary is not really dead, but just network isolated from the Observer or the standby - won’t this lead to a split-brain situation?” The answer is No - It Doesn’t. With respect to why-it-doesn’t, I am sure there are some smart DBAs in the audience who can explain the technical reasons. Otherwise - that will be the material for a future blog post. So - this combination of SYNC and Fast-Start Failover is the nirvana of lights-out, integrated HA and DR, as practiced by some of our advanced customers. They have observed failover times (with no data loss) ranging from single-digit seconds to tens of seconds. With this, they support operations in industry verticals such as manufacturing, retail, telecom, Internet, etc. that have the most demanding availability requirements. One of our leading customers with massive cloud deployment initiatives tells us that they know about server failures only after Data Guard has automatically completed the failover process and the app is back up and running! Needless to mention, Data Guard Broker has the integration hooks for interfaces such as JDBC and OCI, or even for custom apps, to ensure the application gets automatically rerouted to the new primary database after the database level failover completes. Net Net? To sum up this multi-part blog article, Data Guard with SYNC redo transport mode, plus Fast-Start Failover, gives you the ideal triple-combo - that is, it gives you the assurance that for critical outages, you can fail over your Oracle databases: very fast, without human intervention, and without losing any data. In short, it takes the element of risk out of critical IT operations.
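    As a rough illustration of how such a configuration is typically enabled through the Data Guard Broker command-line interface (the database name 'boston' and the 30-second threshold below are illustrative, not values prescribed by this article):

        DGMGRL> EDIT DATABASE 'boston' SET PROPERTY LogXptMode='SYNC';
        DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;
        DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 30;
        DGMGRL> ENABLE FAST_START FAILOVER;
        DGMGRL> START OBSERVER;

    (START OBSERVER is issued from a separate DGMGRL session on the Observer host, so it can watch the configuration independently of the primary and standby.)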
It does require you to be more careful with your network and systems planning, but as far as HA is concerned, the benefits outweigh the investment costs. So, this is what we in the MAA Development Team believe in. What do you think? How has your deployment experience been? We look forward to hearing from you!

    Read the article

  • Method flags as arguments or as member variables?

    - by Martin
    I think the title "Method flags as arguments or as member variables?" may be suboptimal, but as I'm missing any better terminology atm., here goes: I'm currently trying to get my head around the problem of whether flags for a given class (private) method should be passed as function arguments or via a member variable, and/or whether there is some pattern or name that covers this aspect, and/or whether this hints at some other design problems. By example (language could be C++, Java, C#, doesn't really matter IMHO): class Thingamajig { private ResultType DoInternalStuff(FlagType calcSelect) { ResultType res; for (... some loop condition ...) { ... if (calcSelect == typeA) { ... } else if (calcSelect == typeX) { ... } else if ... } ... return res; } private void InternalStuffInvoker(FlagType calcSelect) { ... DoInternalStuff(calcSelect); ... } public void DoThisStuff() { ... some code ... InternalStuffInvoker(typeA); ... some more code ... } public ResultType DoThatStuff() { ... some code ... ResultType x = DoInternalStuff(typeX); ... some more code ... further process x ... return x; } } What we see above is that the method InternalStuffInvoker takes an argument that is not used inside this function at all but is only forwarded to the other private method DoInternalStuff. (Where DoInternalStuff will be used privately at other places in this class, e.g. in the DoThatStuff (public) method.) An alternative solution would be to add a member variable that carries this information: class Thingamajig { private ResultType DoInternalStuff() { ResultType res; for (... some loop condition ...) { ... if (m_calcSelect == typeA) { ... } ... } ... return res; } private void InternalStuffInvoker() { ... DoInternalStuff(); ... } public void DoThisStuff() { ... some code ... m_calcSelect = typeA; InternalStuffInvoker(); ... some more code ... } public ResultType DoThatStuff() { ... some code ... m_calcSelect = typeX; ResultType x = DoInternalStuff(); ... some more code ... further process x ... return x; } } Especially for deep call chains where the selector-flag for the inner method is selected outside, using a member variable can make the intermediate functions cleaner, as they don't need to carry a pass-through parameter. On the other hand, this member variable isn't really representing any object state (as it's neither set nor available outside), but is really a hidden additional argument for the "inner" private method. What are the pros and cons of each approach?

    Read the article

  • Does it make sense to write build scripts in C++?

    - by Klaim
    I'm using CMake to generate my projects' IDE files/makefiles, but I still need to call custom "scripts" to manipulate my compiled files or even generate code. In previous projects I've been using Python and it was OK, but now I'm having serious trouble managing a lot of dependencies in two very big projects I'm working on, so I want to minimize the dependencies everywhere. Someone suggested to me to use C++ to write my build scripts instead of adding a language dependency just for that. The projects themselves already use C++ so there are several advantages that I can see: to build the whole project, only a C++ compiler and CMake would be necessary, nothing else (all the other dependencies are C or C++); C++ type safety (when using modern C++) makes everything easier to get "correct"; it's also the language I know best so I'm more at ease with it, even if I'm able to write some good Python code; potential gain in execution speed (but I don't think it will really be perceptible). However, I think there might be some drawbacks and I'm not sure of the real impact as I haven't tried yet: it might take longer to write the code (that said, I'm not sure, because I'm efficient enough in C++ to write something that works quickly, so maybe for this system it wouldn't be so long to write) (compilation time shouldn't be a problem for this case); I must assume that all the text files I'll read as input are in UTF-8, and I'm not sure it can be easily checked at runtime in C++, and the language will not check it for you; libraries in C++ are harder to manage than in scripting languages; I lack experience and foresight, so maybe I'm missing advantages and drawbacks. So the question is: does it make sense to use C++ for this? Do you have experiences to report, and do you see advantages and disadvantages that might be important?

    Read the article

  • Tell Us Once – Guardian Innovation Award Winner

    - by BizTalk Visionary
    Yesterday the Tell Us Once project received its latest accolade. My partner in crime in the execution of the delivery of software for this project, Mark Usher, reports: It’s always great to receive recognition for the effort you put in when working on a project. It’s no secret that here at Solidsoft we are extremely proud of our association with the Government’s Tell Us Once (TUO) programme. Having already been selected by Microsoft as Worldwide Partner Conference (WPC) 2011 Award Winners for Application Integration, we are very pleased that the TUO programme as a whole has been recognised and has won the Guardian Newspaper’s Innovation Nation Award for Frontline Services (link: http://www.guardian.co.uk/innovation-nation-awards). The TUO entry was judged the winner over three other shortlisted solutions from Dyfed Powys Police, North Yorkshire County Council and Staffordshire County Council. Innovation Nation is a partnership between Virgin Media Business and the Guardian, an initiative to uncover the most innovative businesses, public sector organisations and charities in the UK today. Its aim is to showcase the ideas, the endeavour and the energy that are making things better in the areas of customer service, unique working practices, frontline government services and collaboration. Solidsoft have been involved with the Tell Us Once programme since its inception in 2007 and worked closely with the Department of Work and Pensions (DWP) to produce a business case for the programme. Teaming up with Atos (who host the application), Solidsoft delivered the first national solution in 2011 and a second phase in April 2012. Whilst currently restricted to distributing citizen data to central government organisations and local government authorities, DWP is now actively engaging with the private sector to see if TUO data can be disclosed to private sector organisations such as banks and building societies. Solidsoft welcome this expansion into the private sector where even more efficiencies will be realised. Mark Usher - Solidsoft Sales and Marketing Director. For my part, I’d like to say a big thank you to the Solidsoft team, ATOS team and DWP team that made it happen.

    Read the article

  • Workaround for an Xcode/iOS SDK Issue...

    - by Joe Huang
    Hi, everyone: When you are doing ADF Mobile development and you need to deploy the application to an iOS device, you need to compile/deploy the app with iOS app certificates and a provisioning profile. This means you would need to "Deploy to Package" or "Deploy to iTunes" during deployment, and configure JDeveloper with the proper certificates/profiles. In some instances (the exact combination is still not clear), deploying and signing the application to generate the .ipa file may fail with an error message similar to the following at the end of the deployment log: [01:04:45 PM] Deployment failed due to one or more errors returned by '/usr/bin/xcrun'. The following is a summary of the returned error(s): Command-line execution failed (Return code: 1) error: /usr/bin/codesign --force --preserve-metadata=identifier,entitlements,resource-rules --sign iPhone Distribution: Oracle Corporation --resource-rules=/var/folders/x7/21sjrpx13qj9tq20z14s3j_w0000gn/T/tkROhP11qU/Payload/HelloWorld.app/ResourceRules.plist --entitlements /var/folders/x7/21sjrpx13qj9tq20z14s3j_w0000gn/T/tkROhP11qU/entitlements_plistEINPBkIG /var/folders/x7/21sjrpx13qj9tq20z14s3j_w0000gn/T/tkROhP11qU/Payload/HelloWorld.app failed with error 1. Output: /var/folders/x7/21sjrpx13qj9tq20z14s3j_w0000gn/T/tkROhP11qU/Payload/HelloWorld.app: replacing existing signature Program /usr/bin/codesign returned 1 : [/var/folders/x7/21sjrpx13qj9tq20z14s3j_w0000gn/T/tkROhP11qU/Payload/HelloWorld.app: replacing existing signature This is a known issue and is not related to ADF Mobile. The workaround is discussed in this article from StackOverflow. That article refers to the old location of Xcode, so you would need to adjust the paths accordingly. For Xcode 4.3 and above, the path to this script file would be like: /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/PackageApplication To modify it, you probably can't use a regular text editor. I ended up opening a terminal session, changing the file permission, and using vi to update it. Thanks, Oracle ADF Mobile Product Management Team

    Read the article

  • Top Ten Reasons to Attend the 2015 Oracle Value Chain Summit

    - by Terri Hiskey
    Need justification to attend the 2015 Oracle Value Chain Summit? Check out these Top Ten Reasons you should register now for this event:
    1. Get Results: 60% higher profits. 65% better earnings per share. 2-3x greater return on assets. Find out how leading organizations achieved these results when they transformed their supply chains.
    2. Hear from the Experts: Listen to case studies from leading companies, and speak with top partners who have championed change.
    3. Design Your Own Conference: Choose from more than 150 sessions offering deep dives on every aspect of supply chain management: Cross Value Chain, Maintenance, Manufacturing, Procurement, Product Value Chain, Value Chain Execution, and Value Chain Planning.
    4. Get Inspired from Those Who Dare: Among the luminaries delivering keynote sessions are former SF 49ers quarterback Steve Young and Andrew Winston, co-author of one of the top-selling green business books, Green to Gold.
    5. Expand Your Network: With 1500+ attendees, this summit is a networking bonanza. No other event gathers as many of the best and brightest professionals across industries, including tech experts and customers from the Oracle community.
    6. Improve Your Skills: Enhance your expertise by joining NEW hands-on training sessions.
    7. Perform a Road-Test: Try the latest IT solutions that generate operational excellence, manage risk, streamline production, improve the customer experience, and impact the bottom line.
    8. Join Similar Birds-of-a-Feather: Engage industry peers with similar interests, or shared supply chain communities, in expanded roundtable discussions.
    9. Gain Unique Insight: Speak directly with the product experts responsible for Oracle’s Value Chain Solutions.
    10. Save $400: Take advantage of the Super Saver rate by registering before September 26, 2014.

    Read the article

  • Flickering when accessing texture by offset

    - by TravisG
    I have this simple compute shader that basically just takes the input from one image and writes it to another. Both images are 128x128x128 in size and glDispatchCompute is called with (128/8, 128/8, 128/8). The source images are cleared to 0 before this compute shader is executed, so no undefined values should be floating around in there. (I have the appropriate memory barrier set on the C++ side before the 3D texture is accessed.) This version works fine: #version 430 layout (location = 0, rgba16f) uniform image3D ping; layout (location = 1, rgba16f) uniform image3D pong; layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in; void main() { ivec3 sampleCoord = ivec3(gl_GlobalInvocationID.xyz); imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord)); } Reading values from pong shows that it's just a copy, as intended. However, when I load data from ping with an offset: #version 430 layout (location = 0, rgba16f) uniform image3D ping; layout (location = 1, rgba16f) uniform image3D pong; layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in; void main() { ivec3 sampleCoord = ivec3(gl_GlobalInvocationID.xyz); imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord + ivec3(1,0,0))); } The data that is written to pong seems to depend on the order of execution of the threads within the work groups, which makes no sense to me. When reading from the pong texture, visible flickering occurs in some spots on the texture. What am I doing wrong here?

    Read the article

  • CodePlex Daily Summary for Saturday, July 27, 2013

    CodePlex Daily Summary for Saturday, July 27, 2013Popular ReleasesSharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.10: - Added support for RAR Decryption (thanks to https://github.com/hrasyid) - Embedded some BouncyCastle crypto classes to allow RAR Decryption and Winzip AES Decryption in Portable and Windows Store DLLs - Built in Release (I think)Memory Teaser Game: Full Release 1.1.0: -> Fixed Memory leak issue. -> Restart game button issue. -> Added Splash screen. -> Changed Release Icon. This is the version 1.1.0.0VG-Ripper & PG-Ripper: VG-Ripper 2.9.46: changes FIXED LoginOfflineBrowser: Preview Release: This is a preview release so that others can help me find bugs. This should be pretty stable, but any bugs found should be reported here as an Issue.Home Access Plus+: v9.4.0727: Released to allow you to disable secure LDAP queriesOpen Source Job board: Version X3: Full version of job board, didn't have monies to fund it so it's free.DSeX DragonSpeak eXtended Editor: Version 1.0.116.0726: Cleaned up Wizard Interface Added Functionality for RTF UndoRedo IE Inserting Text from Wizard output to the Tabbed Editor Added Sanity Checks to Search/Replace Dialog to prevent crashes Fixed Template and Paste undoredo Fix Undoredo Blank spots Added New_FileTag Const = "(New FIle)" Added Filename to Modified FileClose queries (Thanks Lothus Marque)Math.NET Numerics: Math.NET Numerics v2.6.0: What's New in Math.NET Numerics 2.6 - Announcement, Explanations and Sample Code. New: Linear Curve Fitting Linear least-squares fitting (regression) to lines, polynomials and linear combinations of arbitrary functions. Multi-dimensional fitting. Also works well in F# with the F# extensions. New: Root Finding Brent's method. ~Candy Chiu, Alexander Täschner Bisection method. ~Scott Stephens, Alexander Täschner Broyden's method, for multi-dimensional functions. ~Alexander Täschner ...Microsoft .NET Gadgeteer: .NET Gadgeteer Core 2.43.800: The .NET Gadgeteer Core installer includes the core libraries and end user project templates for Microsoft .NET Gadgeteer. This is a prerequisite for end users to build and deploy .NET Gadgeteer projects. It includes a project template wizard in the New Project dialog in Visual Studio 2012 or 2010 (or express versions) under the Gadgeteer tab - ".NET Gadgeteer Application". This template uses a graphical designer built for Visual Studio which allows end users to visually configure .NET Gadget...FogBugzPd - Project Dashboard For FogBugz: 1.0: First public release of FogBugzPd. Zip File includes web application. Requires: IIS 7+ Sql Server 2008/2012 or Sql Server Express 2012 .NET 4.5Open Url Rewriter for DotNetNuke: Open Url Rewriter Core 0.4.3 (Beta): bug fix for removing home page New Tab with rules count for each Portal with memory use estimation OpenUrlRewriter_00.04.03_Install.zip : for dnn 6.01 to 7.06 OpenUrlRewriter71_00.04.03_Install.zip : for dnn 7.1KerbalAlarmClock: v2.5.0.0 Release: Version 2.5.0.0 Recompiled it for 0.21 Fixed some issues with Hyperbolic orbits and AN/DN NodesAJAX Control Toolkit: July 2013 Release: AJAX Control Toolkit Release Notes - July 2013 Release Version 7.0725July 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). 
Notes: - Instructions for using the AJAX Control Toolkit with ASP.NET 4.5 can be found at...MJP's DirectX 11 Samples: Specular Antialiasing Sample: Sample code to complement my presentation that's part of the Physically Based Shading in Theory and Practice course at SIGGRAPH 2013, entitled "Crafting a Next-Gen Material Pipeline for The Order: 1886". Demonstrates various methods of preventing aliasing from specular BRDF's when using high-frequency normal maps. The zip file contains source code as well as a pre-compiled x64 binary.Kartris E-commerce: Kartris v2.5003: This fixes an issue where search engines appear to identify as IE and so trigger the noIE page if there is not a non-responsive skin specified.GoAgent GUI: GoAgent GUI 1.3.5 Alpha (20130723): ????????Alpha?,???????????,?????????????。 ??????????GoAgent???(???phus lu?GitHub??????GoAgent??????,??????????????????) ????????????????????????Bug ?????????。??????????????。 ????issue????,????????,????????????????。LogicCircuit: LogicCircuit 2.13.07.22: Logic Circuit - is educational software for designing and simulating logic circuits. Intuitive graphical user interface, allows you to create unrestricted circuit hierarchy with multi bit buses, debug circuits behavior with oscilloscope, and navigate running circuits hierarchy. Changes of this versionYou can make visual elements of the circuit been visible on its symbols. This way you can build composite displays, keyboards and reuse them. Please read about displays for more details http://ww...LINQ to Twitter: LINQ to Twitter v2.1.08: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.AcDown?????: AcDown????? v4.4.3: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ??v4.4.3 ?? ??Bilibili????????????? ???????????? ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2.10?...Magick.NET: Magick.NET 6.8.6.601: Magick.NET linked with ImageMagick 6.8.6.6. These zip files are also available as a NuGet package: https://nuget.org/profiles/dlemstra/New Projectsagree grammar engineering environment: agree is a concurrent parse/generation engine, a .NET implementation of the DELPH-IN joint reference formalism for natural language analysis. AWF's Utility Library: A collection of awf utilities C# chat client with server: Newest chat with client and server.Charming components for Windows Phone: Build Charming apps for Windows Phone. Adds ready-made Search, Share and Settings functionality to Windows Phone. Share more code across platforms.Darkorbit Configuration Manager: config manager dark orbit darkorbit eltepvpers heaven 'Heaven. configuration manager configurationmanagerDarkorbit MultiTool: dark orbit darkorbit multi tool mulitool heaven 'Heaven. 
elitepvpers awesome skylab bot trade bot techcenter bot multiaccount diabind: Python binding of DIA (Debug Interface Access) SDKDoodle .Net Connector: This library allows an easy access to the Doodle REST API.DssCECB Version 2.0: aaaeCommunity: e-communityEFDemo: This is a demo for Entity frameworkEmployee Directory Webpart in SharePoint 2010: Employee Directory for SharePoint 2010EntityContext: A lightweight wrapper around Entity Framework allowing for accessing some internals of the framework, as well as, some functionality usually required.EwsRelentless: This is a sample application which demonstrates you might place a heavy load of EWS calls against an Exchange server in order to test performance. FogBugzPd - Project Dashboard For FogBugz: This tool helps PMs, Tech Leads and Executives to see actual progress on the projects tracked in FogBugz.HP Battery Health Scan Script: Silently runs HP Battery Check Utility and saves result to an Access DB.I am following: Users can follow nearly everything in SharePoint 2013. This solution provides a list of followed contend where ever you need it in you SharePoint.Image Viewer: Simple image viewer written for academic purposes LIBRERIA PRISA: sistema de ventasLiduv: Facilitate teachers daily work including creating marks and lesson drafts, curricula and calculating marks for written tests etc.Maze Builder Library: A builder for random maze generationPayPal Express Checkout for nopCommerce: nopCommerce plugin to allow for PayPal Express Checkout. Full integration with shipping options.PsTest - UnitTesting for PowerShell: PsTest is a lightweight UnitTesting Module for use in PowerShell. It lets PowerShell users create, discover and run UnitTests in PowerShell.SIGE: sistema integral gestion educativaSql Mass Dumper: Sql Mass Dumper. A simple project to dump all the data in your SQL Server in XML or in JSON format.SSIS Wait Task: A SSIS task which suspends execution for a time period or until a specific time. Additionally a sql statement can be defined that can also delay execution.Tactical Combat - Unity3d Game: A first person shooter! The main character is Billy Hills, need a story plot.The open source customer relation and billing software.: Memtem in a open source customer relation and billing software written in C#.NET.Tridion Gateway Service: WCF Support for the Tridion TOM COM APIWINJS CTK: WinJS Control kit is a set of custom WINJS controls that are not supported by Windows 8.

    Read the article

  • How do you update live web sites with code changes?

    - by Aaron Anodide
    I know this is a very basic question. If someone could humor me and tell me how they would handle this, I'd be grateful. I decided to post this because I am about to install SyncToy to remedy the issue below, and I feel a bit unprofessional using a "Toy", but I can't think of a better way. Many times when I am in this situation I find I am missing some painfully obvious way to do things - this comes from being the only developer in the company.

    The setup: an ASP.NET web application developed on my computer at work. The solution has 2 projects: Website (files) and WebsiteLib (C#/dll). I use a Git repository, and the site is deployed on a GoGrid 2008R2 web server.

    Deployment:
    1. Make code changes.
    2. Push to Git.
    3. Remote desktop to the server.
    4. Pull from Git.
    5. Overwrite the live files by dragging/dropping with Windows Explorer.

    In step 5 I delete all the files from the website root... this can't be a good thing to do. That's why I am about to install SyncToy...

    UPDATE: Thanks for all the useful responses. I can't pick which one to mark as the answer - on web deployment alone I got several useful suggestions:
    - Web Project = whole site packaged into a single DLL. The downside for me is that I can't push simple updates - being a lone developer in a company of 50, this remains something that is simpler at times.
    - Pulling straight from SCM into the web root of the site. I originally didn't do this out of fear that my SCM hidden directory might end up being exposed, but the answers here helped me get over that (although I still don't like having one more thing to worry about forgetting to keep true over time).
    - Using a web farm and systematically deploying to nodes. This is the ideal solution for zero downtime, which is actually something I care about since the site is essentially a real-time revenue source for my company - I might have a hard time convincing them to double the cost of the servers though.
    Finally, the reinforcement of the basic principle that there needs to be a single-click deployment for the site OR ELSE SOMETHING IS WRONG is probably the most useful thing I got out of the answers.

    UPDATE 2: I thought I'd come back to this and update it with the actual solution that's been in place for many months now and is working perfectly (for my single-web-server setup). The process I use is:
    1. Make code changes.
    2. Push to Git.
    3. Remote desktop to the server.
    4. Pull from Git.
    5. Run the following batch script:

    cd C:\Users\Administrator
    %systemroot%\system32\inetsrv\appcmd.exe stop site "/site.name:Default Web Site"
    robocopy Documents\code\da\1\work\Tree\LendingTreeWebSite1 c:\inetpub\wwwroot /E /XF connectionsconfig Web.config
    %systemroot%\system32\inetsrv\appcmd.exe start site "/site.name:Default Web Site"

    As you can see, this brings the site down, uses robocopy to intelligently copy only the files that have changed, then brings the site back up. It typically runs in less than 2 seconds. Since peak traffic on this site is about 2 requests per second, missing 4 requests per site update is acceptable. Since I've gotten more proficient with Git, I've found that the first four steps above being a "manual process" is also acceptable, although I'm sure I could roll the whole thing into a single click if I wanted to. The documentation for AppCmd.exe is here. The documentation for Robocopy is here.
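    If that single click ever becomes worthwhile, the pull and the copy could live in one script. A rough sketch (assuming Git is on the server's PATH and that the Tree folder above is the working copy root - both of these are assumptions, not part of the original setup):

    @echo off
    rem deploy.bat (hypothetical) - pull the latest code and refresh the live site in one step
    cd /d C:\Users\Administrator\Documents\code\da\1\work\Tree

    rem Get the latest code; stop here if the pull fails so the live site is untouched
    git pull || goto :error

    rem Stop the site so requests never hit a half-copied tree
    %systemroot%\system32\inetsrv\appcmd.exe stop site "/site.name:Default Web Site"

    rem Copy only changed files, leaving the live Web.config and connectionsconfig alone
    robocopy LendingTreeWebSite1 c:\inetpub\wwwroot /E /XF connectionsconfig Web.config

    rem Bring the site back up
    %systemroot%\system32\inetsrv\appcmd.exe start site "/site.name:Default Web Site"
    goto :eof

    :error
    echo git pull failed - live site was not touched.
    exit /b 1

    Run on the server, this covers the pull and the copy in one step; robocopy's result deliberately is not checked the way git's is, because robocopy reports nonzero exit codes even on success.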

    Read the article

  • disks not ready in array cause mdadm to force an initramfs shell

    - by RaidPinata
    Okay, this is starting to get pretty frustrating. I've read most of the other answers on this site that have anything to do with this issue but I'm still not getting anywhere. I have a RAID 6 array with 10 devices and 1 spare. The OS is on a completely separate device. At boot only three of the 10 devices in the raid are available; the others become available later in the boot process. Currently, unless I go through initramfs I can't get the system to boot - it just hangs with a blank screen. When I do boot through recovery (initramfs), I get a message asking if I want to assemble the degraded array. If I say no and then exit initramfs, the system boots fine and my array is mounted exactly where I intend it to be. Here are the pertinent files as near as I can tell. Ask me if you want to see anything else.

    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    # by default (built-in), scan all partitions (/proc/partitions) and all
    # containers for MD superblocks. alternatively, specify devices to scan, using
    # wildcards if desired.
    #DEVICE partitions containers

    # auto-create devices with Debian standard permissions
    # CREATE owner=root group=disk mode=0660 auto=yes

    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>

    # instruct the monitoring daemon where to send mail alerts
    MAILADDR root

    # definitions of existing MD arrays
    # This file was auto-generated on Tue, 13 Nov 2012 13:50:41 -0700
    # by mkconf $Id$
    ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae

    Here is fstab:

    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    # / was on /dev/sdc2 during installation
    UUID=3fa1e73f-3d83-4afe-9415-6285d432c133 / ext4 errors=remount-ro 0 1
    # swap was on /dev/sdc3 during installation
    UUID=c4988662-67f3-4069-a16e-db740e054727 none swap sw 0 0
    # mount large raid device on /data
    /dev/md0 /data ext4 defaults,nofail,noatime,nobootwait 0 0

    Output of cat /proc/mdstat:

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid6 sda[0] sdd[10](S) sdl[9] sdk[8] sdj[7] sdi[6] sdh[5] sdg[4] sdf[3] sde[2] sdb[1]
          23441080320 blocks super 1.2 level 6, 512k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
    unused devices: <none>

    Here is the output of mdadm --detail --scan --verbose:

    ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae
       devices=/dev/sda,/dev/sdb,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi,/dev/sdj,/dev/sdk,/dev/sdl,/dev/sdd

    Please let me know if there is anything else you think might be useful in troubleshooting this... I just can't seem to figure out how to change the boot process so that mdadm waits until the drives are ready to build the array. Everything works just fine if the drives are given enough time to come online.

    Edit: changed title to properly reflect the situation.
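    One direction that might help (a sketch only - the script name, location and timeout are assumptions, and it presumes the member disks keep the /dev/sd* names shown above): drop a small wait script into the initramfs so early boot pauses until all the array members exist, then rebuild the initramfs.

    #!/bin/sh
    # Hypothetical /etc/initramfs-tools/scripts/init-premount/wait-for-raid
    # Waits (up to 120 s) for every md member disk to appear before the
    # mdadm assembly step runs, so the degraded-array prompt never fires.
    PREREQ="udev"
    prereqs() { echo "$PREREQ"; }
    case "$1" in
        prereqs) prereqs; exit 0 ;;
    esac

    DISKS="/dev/sda /dev/sdb /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl"
    TIMEOUT=120
    while [ "$TIMEOUT" -gt 0 ]; do
        MISSING=0
        for d in $DISKS; do
            [ -b "$d" ] || MISSING=1   # block device not present yet
        done
        [ "$MISSING" -eq 0 ] && exit 0  # all members present, carry on booting
        sleep 1
        TIMEOUT=$((TIMEOUT - 1))
    done
    exit 0   # give up quietly rather than block boot forever

    After saving it, chmod +x the file and run sudo update-initramfs -u so the next boot picks it up. Because /dev/sd* names can move between boots, matching members by the array UUID (for example via mdadm --examine) would be a more robust variant of the same idea.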

    Read the article

  • Upgrade issues due to "dependency problems prevent configuration of linux-image-generic" error

    - by tsukune1791
    Okay, I recently upgraded from 11.10 to 12.04 and I've been having some issues. I don't know if it's a bug or not, but I thought I would submit it here. Here's a little background: I ran the distro upgrade from the Update Manager and got a couple of errors that I didn't catch. The computer restarted, and when I logged in, the Launcher and the top bar of the Ubuntu desktop didn't load. While it was trying to load, a couple of error messages came up - I think they were from "apport" - saying they couldn't send the bug information for some reason. I believe it said something was wrong with my internet connection, but nothing's wrong with it. Anyway, I tried running some things in the terminal, namely:

    sudo apt-get -f install
    sudo apt-get upgrade
    sudo apt-get dist-upgrade

    and I keep getting the following errors:

    dustin@marceau-laptop:~$ sudo apt-get dist-upgrade
    [sudo] password for dustin:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Calculating upgrade... Done
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    4 not fully installed or removed.
    After this operation, 0 B of additional disk space will be used.
    Do you want to continue [Y/n]? Y
    Setting up initramfs-tools (0.99ubuntu13) ...
    update-initramfs: deferring update (trigger activated)
    Setting up linux-image-3.2.0-24-generic (3.2.0-24.37) ...
    Running depmod.
    update-initramfs: deferring update (hook will be called later)
    Examining /etc/kernel/postinst.d.
    run-parts: executing /etc/kernel/postinst.d/dkms 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
    run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
    update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic
    run-parts: executing /etc/kernel/postinst.d/pm-utils 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
    run-parts: executing /etc/kernel/postinst.d/update-notifier 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
    run-parts: executing /etc/kernel/postinst.d/zz-runlilo 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
    Fatal: No images have been defined.
    run-parts: /etc/kernel/postinst.d/zz-runlilo exited with return code 1
    Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-24-generic.postinst line 1010.
    dpkg: error processing linux-image-3.2.0-24-generic (--configure):
     subprocess installed post-installation script returned error exit status 2
    dpkg: dependency problems prevent configuration of linux-image-generic:
     linux-image-generic depends on linux-image-3.2.0-24-generic; however:
      Package linux-image-3.2.0-24-generic is not configured yet.
    dpkg: error processing linux-image-generic (--configure):
     dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of linux-generic:
     linux-generic depends on linux-image-generic (= 3.2.0.24.26); however:
      Package linux-image-generic is not configured yet.
    dpkg: error processing linux-generic (--configure):
     dependency problems - leaving unconfigured
    Processing triggers for initramfs-tools ...
    No apport report written because the error message indicates its a followup error from a previous failure.
    No apport report written because the error message indicates its a followup error from a previous failure.
    update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic
    Fatal: No images have been defined.
    run-parts: /etc/initramfs/post-update.d//runlilo exited with return code 1
    dpkg: error processing initramfs-tools (--configure):
     subprocess installed post-installation script returned error exit status 1
    No apport report written because MaxReports is reached already
    Errors were encountered while processing:
     linux-image-3.2.0-24-generic
     linux-image-generic
     linux-generic
     initramfs-tools
    localepurge: Disk space freed in /usr/share/locale: 0 KiB
    localepurge: Disk space freed in /usr/share/man: 0 KiB
    localepurge: Disk space freed in /usr/share/gnome/help: 0 KiB
    localepurge: Disk space freed in /usr/share/omf: 0 KiB
    localepurge: Disk space freed in /usr/share/doc/kde/HTML: 0 KiB
    Total disk space freed by localepurge: 0 KiB
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    And my Ubuntu desktop is still not working. I can log into GNOME and Ubuntu 2D, but the Launcher, I think it's called, doesn't load. Can someone help me fix these errors, or point me in the right direction to get them fixed? It is much appreciated.
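    The "zz-runlilo ... Fatal: No images have been defined." lines suggest the lilo package's kernel hook is what keeps aborting the configuration. One possible direction (a sketch, assuming the machine actually boots with GRUB and lilo was only pulled in by accident - check that before removing anything):

    # See which boot loader packages are installed; if grub-pc is present
    # and lilo is not actually used, the lilo hook can simply go away.
    dpkg -l grub-pc lilo

    # Remove lilo so /etc/kernel/postinst.d/zz-runlilo stops aborting the
    # kernel package configuration...
    sudo apt-get purge lilo

    # ...then let dpkg and apt finish configuring the half-installed packages.
    sudo dpkg --configure -a
    sudo apt-get -f install

    # Finally, regenerate the initramfs for the new kernel.
    sudo update-initramfs -u

    If LILO really is the boot loader on this machine, the alternative is to add an image= entry for the new kernel in /etc/lilo.conf and re-run lilo instead of purging the package.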

    Read the article

  • Oracle SOA Governance EMEA Workshop for Partners & System Integrators: Nov 5-7th | Madrid, Spain

    - by Lionel Dubreuil
    The EMEA Fusion Middleware Product Management team is delighted to announce an exciting and much-awaited workshop on our market-leading SOA Governance offering. The Oracle SOA Governance solution is Oracle Fusion Middleware's strategic approach to governing SOA. Whether just embarking on an SOA program, or expanding from project or pilot to broader deployment, the Oracle SOA Governance solution closes the loop on measuring SOA success from project inception through to realization, and provides the proof of ROI on SOA. Would your prospects and customers like to:
    - Align their SOA Vision and Execution
    - Improve Decision Making
    - Effectively Manage Business and Technology Change
    - Enable Control
    - Foster Enterprise-wide Collaboration
    - Reduce Development Costs
    - Track their SOA Investments and Returns
    - Demonstrate business value and ROI of SOA
    This FREE hands-on workshop is dedicated to EMEA Partners & System Integrators (SIs). It will be delivered by Oracle HQ Product Management and will primarily focus on:
    - SOA Governance as a Strategy and Methodology
    - Hands-on with Oracle Enterprise Repository (OER) and Oracle Service Registry (OSR)
    - When, how and whom to position our SOA Governance offerings
    - Our SOA Governance Rapid Start Service
    - Hands-on sessions for the most popular customer use cases
    Seats are limited, book now - you cannot afford to miss this training! If you're interested please contact Yogesh Sontakke (yogesh.sontakke-AT-oracle-DOT-com).

    Read the article

< Previous Page | 361 362 363 364 365 366 367 368 369 370 371 372  | Next Page >