Search Results

Search found 5597 results on 224 pages for 'worker processes'.


  • Instance Patching Demo for BPM 11.1.1.7 by Mark Nelson

    - by JuergenKress
    BPM 11.1.1.7 has a new ‘instance patching and migration’ feature that allows you to apply changes to running instances of processes (without changing the revision of the process) and/or to migrate running instances between revisions of a process. There is a short viewlet demonstration posted here, but unfortunately it has no sound. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: Mark Nelson,BPM,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • File manager respawns with ubuntuone

    - by pygator
    Starting Feb 11, my Ubuntu 10.10 desktop respawns File Manager many times (hundreds). You can observe the "Starting File Manager" processes at the bottom of the GNOME desktop. I can make this behaviour stop via: System - Preferences - Ubuntu One - Services - uncheck "Files". Can someone walk me through the debug process? Linux 2.6.35-25-generic #44-Ubuntu SMP Fri Jan 21 17:40:48 UTC 2011 i686 GNU/Linux. I'm trying to reset the Ubuntu One configuration. I found good information here: https://wiki.ubuntu.com/UbuntuOne/Bugs (look for "ROOT_MISMATCH in syncdaemon.log"). After running through the steps to reset and restart Ubuntu One, there are no more "Starting File Manager" respawns.

    Read the article

  • node-xmpp-bosh error on Ubuntu 11.10

    - by megueloby
    I am a newbie in the Linux world. I want to implement a BOSH server. Because it is hard on the Windows platform, I decided to deploy it on an Ubuntu virtual machine via VMware. I made the installation without problems; I followed the steps on this page. Now I want to test my BOSH server with the command sudo bosh or sudo /etc/init.d/bosh start. After typing those I get Starting bosh server on the terminal, and then nothing. I looked in the bosh.err file and I see exec: 2: /usr/local/lib/bosh/run-server.js: Permission denied. I don't know why I get this error with sudo. When I try ls -l /usr/local/lib/bosh/run-server.js it shows -rw-r--r-- 1 root root 4889 2012-04-01 18:50 /usr/local/lib/bosh/run-server.js. How can I make bosh start?

    Read the article

  • Server-side Input

    - by Thomas
    Currently in my game, the client is nothing but a renderer. When the input state changes, the client sends a packet to the server and moves the player as if it were processing the input, but the server has the final say on the position. This generally works really well, except for one big problem: falling off edges. Basically, if a player is walking towards an edge, say a cliff, and stops right before going off the edge, sometimes a second later he'll be teleported off the edge. This is because the "I stopped pressing W" packet arrives after the server has already processed the movement. Here's a lag diagram to help you understand what I mean: http://i.imgur.com/Prr8K.png. I could just send a "W pressed" packet each frame for the server to process, but that seems like a bandwidth-costly solution. Any help is appreciated!
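
    A rough, hedged sketch (in Python, with illustrative names; this is not the poster's code) of the "send the full key state every tick" approach mentioned above. It mainly shows how small such packets are: a sequence number plus a key bitmask is about five bytes per tick, and the server only ever moves the player by what the last received tick authorised.

        import struct

        KEY_W, KEY_A, KEY_S, KEY_D = 1, 2, 4, 8   # bit flags for the movement keys

        def pack_input(sequence: int, keys: int) -> bytes:
            """4-byte tick sequence + 1-byte key bitmask = 5 bytes per tick."""
            return struct.pack("!IB", sequence, keys)

        def unpack_input(packet: bytes):
            return struct.unpack("!IB", packet)

        def server_apply_tick(position: float, keys: int, speed: float, dt: float) -> float:
            """Server side: move the player only by what this tick's input says."""
            if keys & KEY_W:
                position += speed * dt
            if keys & KEY_S:
                position -= speed * dt
            return position

        if __name__ == "__main__":
            # The client stops pressing W at tick 3; because the key state is
            # resent every tick, the server never keeps walking on stale input.
            pos = 0.0
            for tick, keys in enumerate([KEY_W, KEY_W, KEY_W, 0, 0]):
                seq, received = unpack_input(pack_input(tick, keys))
                pos = server_apply_tick(pos, received, speed=5.0, dt=1 / 30)
            print(round(pos, 3))  # 0.5: three ticks of movement, then none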

    Read the article

  • bluetooth daemon not running at startup

    - by ffaxer
    I'm trying to connect a Bluetooth mouse to my Xubuntu system using Blueman (v. 1.21). The problem seems to be that bluetoothd is not running at startup, so Blueman refuses to start and only a dialog appears: "Bluez daemon is not running, blueman-manager cannot continue." On my system, bluetoothd will run only as root (sudo), so my current workaround is simply to run sudo bluetoothd manually, which works fine, but I'd like to have it run at startup so that my mouse just works without any interaction from me, if possible. If I try to start bluetoothd as non-root it reports:
      Bluetooth daemon 4.91
      Unable to get on D-Bus
    In the startup scripts I found the same bluetoothd script in all runlevels and init.d:
      DAEMON=/usr/sbin/bluetoothd
      test -f /usr/sbin/bluetoothd || exit 0
      # bluetoothd normally starts up by udev rules. it needs dbus to function,
      log_progress_msg "bluetoothd"
      pkill -TERM bluetoothd || true
      log_progress_msg "bluetoothd"
    I looked in /etc/udev/rules.d/ but found no reference to bluetoothd. Further, I have already tried the following, with no luck: editing /etc/dbus-1/system.d/bluetooth.conf to include my user (essentially copying the part that was for root); I tried it both keeping the root policy and without, still no luck. Editing /etc/pam.d/common-session and /etc/pam.d/gdm to include the line session optional pam_ck_connector.so; in the case of common-session it was already there but with a "nox11", which I tried removing. No luck. By the way, I'm confused as to which session manager I'm using, since I have both xfce4-session and gdm-session-worker running. Anyway, I hope someone is savvy enough to figure it out or bring some hints; otherwise I sincerely apologize for wasting your time! I'll sign off with uname -a:
      Linux [mycompname] 3.0.0-9-lowlatency #12ppa1~natty1-Ubuntu SMP PREEMPT Mon Aug 22 06:52:15 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
    Peace B)

    Read the article

  • Multiple sites with the same codebase in Python

    - by Jimmy
    I am trying to run a large number of sites which share about 90% of their code. They are simply designed to query an API and return the results. They will have a common userbase / database but will be configured slightly differently and will have different CSS (perhaps even different templating). My initial idea was to run them as separate applications with a common library, but I have read about the sites framework, which would allow them to run from a single instance of Django and may help to reduce memory usage. https://docs.djangoproject.com/en/dev/ref/contrib/sites/ Is the sites framework the right approach to a problem like this, and does it have real benefits over running separate applications? Initially I thought it was, but now I think otherwise. I have heard the following: your SITE_ID is set in settings.py, so in order to have multiple sites you need multiple settings.py configurations, which means multiple distinct processes/instances. You can of course share the code base between them, but each site will need a dedicated worker / WSGIDaemon to serve the site. This effectively removes any benefit of running multiple sites under one hood, if each site needs a uWSGI instance running. Alternative ideas of systems: https://github.com/iivvoo/django_layers https://github.com/shestera/django-multisite I don't know which route to take with this.
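
    For illustration, here is a hedged sketch of the "shared codebase, one thin settings module per site" layout being weighed up above. Module names, paths, and the app name are made up for the example, not taken from the question; each site would then run under its own WSGI worker pointing at its own settings module.

        # settings/base.py -- the ~90% that every site shares
        INSTALLED_APPS = [
            "django.contrib.contenttypes",
            "django.contrib.auth",
            "django.contrib.sites",
            "catalogue",                       # hypothetical shared app that queries the API
        ]
        DATABASES = {
            "default": {
                "ENGINE": "django.db.backends.postgresql_psycopg2",
                "NAME": "shared_userbase",     # the common userbase/database
            }
        }

        # settings/site_alpha.py -- one small file per site
        from .base import *  # noqa: F401,F403

        SITE_ID = 1                            # drives django.contrib.sites lookups
        ROOT_URLCONF = "site_alpha.urls"       # per-site URL wiring, if needed
        STATICFILES_DIRS = ["static/alpha"]    # different CSS per site

        # Each site is served by its own process, e.g. (illustrative command):
        #   gunicorn myproject.wsgi -e DJANGO_SETTINGS_MODULE=settings.site_alpha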

    Read the article

  • Error message when running OpenGL programs with bumblebee

    - by user170860
    X Error of failed request: BadDrawable (invalid Pixmap or Window parameter)
      Major opcode of failed request: 152 (DRI2)
      Minor opcode of failed request: 8 (DRI2SwapBuffers)
      Resource id in failed request: 0x4200005
      Serial number of failed request: 2166
      Current serial number in output stream: 2167
      primus: warning: timeout waiting for display worker
      Segmentation fault (core dumped)
    I don't get this on all OpenGL programs, only on particularly GPU-intense ones. Also, I only get it when using primusrun. optirun gives the same error no matter what I run:
      [VGL] NOTICE: Pixel format of 2D X server does not match pixel format of
      [VGL] Pbuffer. Disabling PBO readback.
    I don't know what either of these means. Neither of them stops the programs from running, but I'd like to fix the problem if there is one. Also, I prefer to use primusrun because it is faster and does a better job with vertical sync; however, it only supports OGL 4.2. This isn't a big issue because the programs I write are forward compatible, but it still seems odd to me. So basically I'd just like it if someone could explain to me what is happening and whether there is something I can do about it. Thanks.

    Read the article

  • OTN Virtual Developer Day: WebLogic and Coherence

    - by Tori Wieldt
    Who: Java Developers
    What: This OTN Virtual Developer Day will guide you through tooling and best practices around developing applications with WebLogic and Coherence. You'll also explore ways to improve your build, deploy, and ongoing management processes in your application's life cycle.
    When: Tuesday, November 5, 9am to 1pm PDT / 12pm to 4pm EDT / 1pm to 5pm BRT
    Where: Your Desk
    Why: Many Java developers utilize open source and/or free tooling to develop their projects, but ultimately deploy production applications to commercial, mission-critical application servers. There are sessions utilizing common developer tools such as Eclipse, Maven, Chef, and Puppet to create, deploy and manage applications with WebLogic Server and Coherence as target platforms. Don't miss the session Exploring ADF 12C and Java EE Development in Eclipse.
    Register now, it's free!

    Read the article

  • Ubuntu Software Centre Issue (unity 11.10) after broken sun-java6-jre package installation

    - by Stephen Myall
    I have been installing software packages from the Ubuntu Software Centre and I am getting the following error message. It worked fine one minute, then the message below appeared. I tried to search but couldn't find a solution. Previously I was installing the sun-java6-jre package in a terminal and had a connection outage, so the installation didn't complete. I attempted an apt-get -f install with no success. I don't know what to try next; I'm relatively new to Linux. The answer provided with a similar question on this site didn't resolve the issue for me. Click on this link
      An unhandlable error occured
      There seems to be a programming error in aptdaemon, the software that allows you to install/remove software and to perform other package management related tasks.
      Details
      File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1092, in _simulate_helper
        return depends, self._cache.required_download, \
      File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 235, in required_download
        pm.get_archives(fetcher, self._list, self._records)
      SystemError: E:I wasn't able to locate a file for the sun-java6-jre package. This might mean you need to manually fix this package.
    Terminal Output Link: http://dl.dropbox.com/u/48466855/Terminal%20output.odt

    Read the article

  • Can You Name the Top 10 Technology Trends?

    - by kellsey.ruppel
    Can You Name the Trends? No need to do the research. Come to this Webcast and find out. Join the conversation as Andy Mulholland, Global CTO, Capgemini, discusses the 10 game-changing technology trends that will enable business innovation. As you might expect, three of the trends discussed will be:
    Mobility: from nice-to-have to a cornerstone of user engagement
    Big data: how to acquire, organize, and analyze it
    Cloud computing: how to build applications, automate processes, collaborate, and secure the enterprise
    But you'll have to attend the Webcast to learn about the other seven trends. Register now. And profit from the experience.
    REGISTER NOW
    Thurs., July 19, 2012, 10 a.m. PT / 1 p.m. ET
    Presented by: Andy Mulholland, Global CTO, Capgemini; Christian Finn, Senior Director, Oracle WebCenter Product Management, Oracle

    Read the article

  • Quality Assurance activities

    - by MasloIed
    I asked this before but deleted the question as it was a bit misunderstood. If quality control is the actual testing, what are the commonest true quality assurance activities? I have read that verification (reviews, inspections, etc.) is one, but it does not make much sense to me, as it looks more like quality control, as mentioned here:
    DEPARTMENT OF HEALTH AND HUMAN SERVICES ENTERPRISE PERFORMANCE LIFE CYCLE FRAMEWORK, Practices guide:
    Verification - "Are we building the product right?" Verification is a quality control technique that is used to evaluate the system or its components to determine whether or not the project's products satisfy defined requirements. During verification, the project's processes are reviewed and examined by members of the IV&V team with the goal of preventing omissions, spotting problems, and ensuring the product is being developed correctly. Some verification activities may include items such as:
    • Verification of requirements against defined specifications
    • Verification of design against defined specifications
    • Verification of product code against defined standards
    • Verification of terms, conditions, payment, etc., against contracts

    Read the article

  • Reproducing a Conversion Deadlock

    - by Alexander Kuznetsov
    Even if two processes compete on only one resource, they still can embrace in a deadlock. The following scripts reproduce such a scenario. In one tab, run this:
      CREATE TABLE dbo.Test ( i INT ) ;
      GO
      INSERT INTO dbo.Test ( i ) VALUES ( 1 ) ;
      GO
      SET TRANSACTION ISOLATION LEVEL SERIALIZABLE ;
      BEGIN TRAN
      SELECT i FROM dbo.Test ;
      --UPDATE dbo.Test SET i=2 ;
    After this script has completed, we have an outstanding transaction holding a shared lock. In another tab, let us have that another connection have...(read more)

    Read the article

  • Transmitting Form Data from the Client to the Web Server

    The steps involved in transmitting form data from the client to the web server:
    1. The user loads the web form.
    2. The user enters data into the web form fields.
    3. The user clicks submit.
    4. On submit, the page validates the fields using JavaScript. If validation errors are found, the validation script stops the browser from posting the data to the web server and displays error messages as needed.
    5. If the form passes the data validation, the browser URL-encodes the values of every field and posts them to the server.
    6. The server reads the posted data from the query string and then validates the data again, both to ensure data consistency and to prevent non-validated data (sent because JavaScript was turned off in the client's browser) from being inserted into a database or passed on to other processes.
    7. If the data passes the second validation check, the server-side code continues with the requested processes.
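
    As a hedged illustration of steps 5-6 (the URL-encoded post and the server-side re-validation), here is a minimal, framework-free Python sketch; the field names and validation rules are invented for the example and are not from the article.

        import re
        from urllib.parse import parse_qs

        def validate_posted_form(raw_body: str) -> dict:
            """Re-check URL-encoded form data on the server, even if client-side JS already did."""
            fields = {k: v[0] for k, v in parse_qs(raw_body).items()}
            errors = {}
            if not re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", fields.get("email", "")):
                errors["email"] = "A valid email address is required."
            if not fields.get("name", "").strip():
                errors["name"] = "Name must not be empty."
            return errors  # an empty dict means the data may proceed to the database

        print(validate_posted_form("name=Ada&email=ada%40example.com"))  # {}
        print(validate_posted_form("name=&email=not-an-email"))          # both fields rejected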

    Read the article

  • links for 2011-01-05

    - by Bob Rhubart
    A Role-Based Approach to Automated Provisioning and Personalized Portals
    Authors Rex Thexton (Managing Dir., PricewaterhouseCoopers), Nishidhdha Shah (Sr. Associate at PwC Consulting) and Harish Gaur (Dir. Product Management, Oracle Fusion Middleware) bring you the final article in the Fusion Middleware Patterns series. (tags: Oracle otn entarch enterprise2.0)
    13 Jan 2011 - New York, NY - Coherence Special Interest Group - Oracle Coherence Knowledge Base
    The world's largest enterprise software company, Oracle is the only vendor to offer solutions for every tier of your business -- database, middleware, business intelligence, business applications, and collaboration. With Oracle, you get information that helps you measure results, improve business processes, and communicate a single truth to your constituents. (tags: ping.fm)
    Marc Kelderman: Exporting the SOA MDS
    Marc Kelderman shows you how in this brief tutorial. (tags: oracle otn soa mds)

    Read the article

  • Do you know what is a DevOps Project?

    - by Gopinath
    Yesterday I wrote about the OpenStack project, an open source cloud computing stack that lets you build cloud computing environments. While reading more on this topic I stumbled upon a new type of project called DevOps projects. OpenStack is all set to become the first DevOps project, reports Forbes: "…the way OpenStack is applying the open source model to creating cloud infrastructure, the open source model is on the verge of being extended so that the collaboration and design process will include software, hardware, and networking in the data center as well as operational processes. In modern development, the idea of designing software, data center, and operations using one integrated team is called DevOps."

    Read the article

  • Many small scripts, one repository or multiple?

    - by The Jug
    A co-worker and I have run into an issue that we have multiple opinions on. Currently we have a git repository in which we keep all of our cronjobs. There are about 20 crons and they are not really related, except for the fact that they are all small Python scripts and essential for some activity. We are using a fabric.py file to deploy and a requirements.txt file to manage the requirements for all of the scripts. Our issue is basically: do we keep all of these scripts in one git repository, or should we separate them out into their own repositories? By keeping them in one repository it is easier to deploy them onto one server, and we can use just one cron file for all the scripts. However this feels wrong, as the 20 cronjobs are not logically related. Additionally, when using one requirements.txt file for all the scripts, it's hard to figure out what the dependencies are for a particular script, and they all have to use the same versions of packages. We could separate all of the scripts out into their own repositories, but this creates 20 different repositories that need to be remembered and dealt with. Most of these scripts are not very large and that solution seems to be overkill. A related question is: do we use one big crontab file for all cronjobs, or a separate file for each? If each has its own, how does one crontab's installation avoid overwriting the other 19? This also seems like a pain, as there would then be 20 different cron files to keep track of. In short, our main question is: do we keep them all closely bundled as one repository, or do we separate them out into their own repositories with their own requirements.txt and fabfile.py? We feel like we're probably overlooking some really simple solution. Is there an easier way to deal with this issue?
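
    One hedged sketch of a middle ground (not from the question; the names, paths and schedules are invented): keep the monorepo, but give each script its own requirements file and generate one /etc/cron.d fragment per job, so installing one job's schedule never overwrites another's.

        import pathlib

        SCRIPTS = {
            # script name -> (cron schedule, user to run as)
            "sync_inventory": ("*/15 * * * *", "deploy"),
            "email_report":   ("0 6 * * *",    "deploy"),
        }

        def write_cron_fragments(dest_dir="build/cron.d", repo_root="/opt/cronjobs"):
            """Write one cron.d-style file per job; cron reads each file independently."""
            out = pathlib.Path(dest_dir)
            out.mkdir(parents=True, exist_ok=True)
            for name, (schedule, user) in SCRIPTS.items():
                command = f"{repo_root}/{name}/venv/bin/python {repo_root}/{name}/{name}.py"
                (out / name).write_text(f"{schedule} {user} {command}\n")

        if __name__ == "__main__":
            write_cron_fragments()  # a deploy step would then copy build/cron.d/* to /etc/cron.d/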

    Read the article

  • Windows Azure BidNow Sample – definitely worth a look

    - by Eric Nelson
    [Quicklink: download the new Windows Azure sample from http://bit.ly/bidnowsample] On Monday (17th May), in the 6 Weeks of Windows Azure training (now full) Live Meeting call, Adrian showed BidNow as a sample application built for Windows Azure. I was aware of BidNow but had not found the time to take a look at it, nor seen it running before. Adrian convinced me it was worth a further look. In brief, I like it :-) It is more than Hello World, but still easy enough to follow. Bid Now is an online auction site designed to demonstrate how you can build highly scalable consumer applications using Windows Azure. It is built using Visual Studio 2008 and Windows Azure, and uses Windows Azure Storage. Auctions are processed using Windows Azure Queues and Worker Roles. Authentication is provided via Live ID. Bid Now works with the Express versions of Visual Studio and above. There are extensive setup instructions for local and cloud deployment. You can download it from http://bit.ly/bidnowsample (http://code.msdn.microsoft.com/BidNowSample) and also check out David's original blog post. Related Links: UK based? Sign up to UK fans of Windows Azure on Ning. Check out the Microsoft UK Windows Azure Platform page for further links.

    Read the article

  • Bare-metal mode for Ubuntu

    - by user1071136
    I'm interested in benchmarking a console-mode application, and would like to reduce to a minimum any interference from other processes in the system. Is there an easy way to boot Ubuntu 12.04 in a "bare-metal" mode? I'm still interested in casually booting a "desktop" version of Ubuntu (so I'd prefer to avoid permanent changes), and would like to avoid installing a separate Ubuntu Server version. My use case is the following: the application is single-threaded and console-mode only; the test box has 12 GB of memory; I ssh into the test box. It seems I can skip at least Unity, the X server and their dependents.

    Read the article

  • Designing Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've needed pseudo-ETL applications that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place them in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:
      static void Main()
      {
          // Load parameters from app.config.
          // Get documents from the queue.
          var files = someInterface.GetFiles(someFilterOrRegexPattern);
          foreach (var file in files)
          {
              // Extract metadata from the file.
              // Validate some attributes of the file; add any validation errors to an
              // in-memory structure (e.g. List<ValidationErrors>).
              if (isValid)
              {
                  // Upload using some wrapper for an ORM.
                  someInterface.Upload(meta.Param1, meta.Param2, ...);
              }
              else
              {
                  // Bounce the file.
              }
          }
          // Report any validation errors (via message bus or SMTP or some such).
      }
    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!

    Read the article

  • Application Composer: Exposing Your Customizations in BI Analytics and Reporting

    - by Richard Bingham
    Introduction
    This article explains in simple terms how to ensure the customizations and extensions you have made to your Fusion Applications are available for use in reporting and analytics. It also includes four embedded demo videos from our YouTube channel (if they don't appear, check the browser address bar for a blocking shield icon). If you are new to Business Intelligence, consider first reviewing our getting started article, and you can read more about the topic of custom subject areas in the documentation book Extending Sales. There are essentially four sections to this post. First we look at how custom fields added to standard objects are made available for reporting. Secondly we look at creating custom subject areas on the standard objects. Next we consider reporting on custom objects, starting with simple standalone objects, then child custom objects, and finally custom objects with relationships. Finally this article reviews how flexfields are exposed for reporting. Whilst this article applies to both Cloud/SaaS and on-premises deployments, if you are an on-premises developer then you can also use the BI Administration Tool to customize your BI metadata repository (the RPD) and create new subject areas. Whilst this is not covered here, you can read more in Chapter 8 of the Extensibility Guide for Developers.
    Custom Fields on Standard Objects
    If you add a custom field to your standard object then it's likely you'll want to include it in your reports. This is very simple, since all new fields are instantly available in the "[objectName] Extension" folder in existing subject areas. The following two-minute video demonstrates this.
    Custom Subject Areas for Standard Objects
    You can create your own subject areas for use in analytics and reporting via Application Composer. An example use case could be to simplify the seeded subject areas, since they sometimes contain complex data fields and internal values that could confuse business users. One thing to note is that you cannot create subject areas in a sandbox, as this is not supported by BI, so once your custom object is tested and complete you'll need to publish the sandbox before moving forwards. The subject area creation process is essentially two-fold: once the request is submitted, the ADF artifacts are generated; secondly, the related metadata is sent to the BI presentation server APIs to make the updates there. Note that this second step may take up to ten minutes to complete. Once finished, the status of the custom subject area request should show as 'OK' and it is then ready for use. Within the creation process's wizard-like steps there are three concepts worth highlighting:
    Date Flattening - this feature permits the roll-up of reports at various date levels, such as data by week, month, quarter, or year. You simply check the box to enable it for that date field.
    Measures - these are your own functions that you can build into the custom subject area. They are related to the field data type and include min-max for dates, and sum(), avg(), and count() for numeric fields.
    Implicit Facts - used to make the BI metadata join between your object fields and the calculated measure fields. The advice is to choose the most frequently used measure to ensure consistency.
    This video shows a simple example, where a simplified subject area is created for the customer 'Contact' standard object, picking just a few fields upon which users can then create reports.
    Custom Objects
    Custom subject areas support three types of custom objects. The first is a simple standalone custom object, for which the same process mentioned above applies. The next is a custom child object created on a standard object parent, and finally there is a custom object that is related to a parent object, usually through a dynamic choice list. Whilst the steps for each of these last two are mostly the same, there are differences in the way you choose the objects and their fields. This is illustrated in the videos below. The first video shows the process for creating a custom subject area for a simple standalone custom object. The second video demonstrates how to create custom subject areas for custom objects that are of parent:child type, as well as those with dynamic-choice-list relationships.
    Flexfields
    Dynamic and Extensible Flexfields satisfy a similar requirement as custom fields (for Application Composer), with flexfields common across the Fusion Financials, Supply Chain and Procurement, and HCM applications. The basic principle is that when you enable and configure your flexfields, in the edit page under each segment region (for both global and context segments) there is a BI Enabled check box. Once this is checked and you've completed your configuration, you run the Scheduled Process job named 'Import Oracle Fusion Data Extensions for Transactional Business Intelligence' to generate and migrate the related BI artifacts and data. This applies for dynamic, key, and extensible flexfields. Of course there is more to consider in terms of how you wish your flexfields to be implemented and exposed in your reports, and details are given in Chapter 4 of the Extending Applications guide.

    Read the article

  • Pace Layering Comes Alive

    - by Tanu Sood
    Rick Beers is Senior Director of Product Management for Oracle Fusion Middleware. Prior to joining Oracle, Rick held a variety of executive operational positions at Corning, Inc. and Bausch & Lomb. With a professional background that includes senior management positions in manufacturing, supply chain and information technology, Rick brings a unique set of experiences to cover the impact that technology can have on business models, processes and organizations. Rick hosts the IT Leaders Editorial on a monthly basis. By now, readers of this column are quite familiar with Oracle AppAdvantage, a unified framework of middleware technologies, infrastructure and applications utilizing a pace layered approach to enterprise systems platforms.
    1. Standardize and Consolidate core Enterprise Applications by removing invasive customizations, costly workarounds and the complexity that multiple instances create.
    2. Move business-specific processes and applications to the Differentiate Layer, thus creating greater business agility with process extensions and best-of-breed applications managed by cross-application process orchestration.
    3. The Innovate Layer contains all the business capabilities required for engagement, collaboration and intuitive decision making. This is the layer where innovation will occur, as people engage one another in a secure yet open and informed way.
    4. Simplify IT by minimizing complexity, improving performance and lowering cost with secure, reliable and managed systems across the entire Enterprise.
    But what hasn't been discussed is the pace layered architecture that Oracle AppAdvantage adopts. What is it, what are its origins and why is it relevant to enterprise-scale applications and technologies? It's actually a fascinating tale that spans the past 20 years, and a basic understanding of it provides a wonderful context to what is evolving as the future of enterprise systems platforms. It all begins in 1994 with a book by noted architect Stewart Brand, of 'Whole Earth Catalog' fame. In his 1994 book How Buildings Learn, Brand popularized the term 'Shearing Layers', arguing that any building is actually a hierarchy of pieces, each of which inherently changes at different rates. In 1997 he produced a 6-part BBC series adapted from the book, in which Part 6 focuses on Shearing Layers. In this segment Brand begins to introduce the concept of 'pace'. Brand further refined this idea in his subsequent book, The Clock of the Long Now, which began to link the concept of Shearing Layers to computing and introduced the term 'pace layering', where he proposes that: “An imperative emerges: an adaptive [system] has to allow slippage between the differently-paced systems … otherwise the slow systems block the flow of the quick ones and the quick ones tear up the slow ones with their constant change.
Embedding the systems together may look efficient at first but over time it is the opposite and destructive as well.” In 2000, IBM architects Ian Simmonds and David Ing published a paper entitled A Shearing Layers Approach to Information Systems Development, which applied the concept of Shearing Layers to systems design and development. It argued that at the time systems were still too rigid; that they constrained organizations by their inability to adapt to changes. The findings in the Conclusions section are particularly striking: “Our starting motivation was that enterprises need to become more adaptive, and that an aspect of doing that is having adaptable computer systems. The challenge is then to optimize information systems development for change (high maintenance) rather than stability (low maintenance). Our response is to make it explicit within software engineering the notion of shearing layers, and explore it as the principle that systems should be built to be adaptable in response to the qualitatively different rates of change to which they will be subjected. This allows us to separate functions that should legitimately change relatively slowly and at significant cost from that which should be changeable often, quickly and cheaply.” The problem at the time of course was that this vision of adaptable systems was simply not possible within the confines of 1st generation ERP, which were conceived, designed and developed for standardization and compliance. It wasn’t until the maturity of open, standards based integration, and the middleware innovation that followed, that pace layering became an achievable goal. And Oracle is leading the way. Oracle’s AppAdvantage framework makes pace layering come alive by taking a strategic vision 20 years in the making and transforming it to a reality. It allows enterprises to retain and even optimize their existing ERP systems, while wrapping around those ERP systems three layers of capabilities that inherently adapt as needed, at a pace that’s optimal for the enterprise.

    Read the article

  • Session Report - Modern Software Development Anti-Patterns

    - by Janice J. Heiss
    In this standing-room-only session, building upon his 2011 JavaOne Rock Star "Diabolical Developer" session, Martijn Verburg, this time along with Ben Evans, identified and explored common "anti-patterns" – ways of doing things that keep developers from doing their best work. They emphasized the importance of social interaction and team communication, along with identifying certain psychological pitfalls that lead developers astray. Their emphasis was less on technical coding errors and more on how to function well and keep one's focus on what really matters. They are the authors of the highly regarded The Well-Grounded Java Developer and are both movers and shakers in the London JUG community and on the Java Community Process. The large room was packed as they gave a fast-moving, witty presentation with lots of laughs and personal anecdotes. Below are a few of the anti-patterns they discussed.
    Anti-Pattern One: Conference-Driven Delivery
    The theme here is the belief that "Real pros hack code and write their slides minutes before their talks." Their response to this anti-pattern is an expression popular in the military – PPPPPP, which stands for "Proper preparation prevents piss-poor performance." "Communication is very important – probably more important than the code you write," claimed Verburg. "The more you speak in front of large groups of people the easier it gets, but it's always important to do dry runs, to present to smaller groups. And important to be members of user groups where you can give presentations. It's a great place to practice speaking skills; to gain new skills; get new contacts, to network." They encouraged attendees to record themselves and listen to themselves giving a presentation. They advised them to start with a spouse or friends if need be. Learning to communicate to a group, they argued, is essential to being a successful developer. The emphasis here is that software development is a team activity and good, clear, accessible communication is essential to the functioning of software teams.
    Anti-Pattern Two: Mortgage-Driven Development
    The main theme here was that, in a period of worldwide recession and economic stagnation, people are concerned about keeping their jobs. So there is a tendency for developers to treat knowledge as power and not share what they know about their systems with their colleagues, so that when it comes time to fix a problem in production, they will be the only one who knows how to fix it – and will have made themselves an indispensable cog in a machine, so they cannot be fired. So developers avoid documentation at all costs, or if documentation is required, put it on a USB chip and lock it in a lock box. As in the first anti-pattern, the idea here is that communicating well with your colleagues is essential and documentation is a key part of this. Social interactions are essential. Both Verburg and Evans insisted that increasingly, year by year, successful software development is more about communication than the technical aspects of the craft. Developers who understand this are the ones who will have the most success.
    Anti-Pattern Three: Distracted by Shiny – Always Use the Latest Technology to Stay Ahead
    The temptation here is to pick out some obscure framework, try a bit of Scala, HTML5, and Clojure, and always use the latest technology and upgrade to the latest point release of everything. Don't worry if something works poorly because you are ahead of the curve. Verburg and Evans insisted that there need to be sound reasons for everything a developer does. Developers should not bring in something simply because for some reason they just feel like it or because it's new. They recommended a site run by a developer named Matt Raible with excellent comparison spreadsheets regarding Web frameworks and other apps. They praised it as a useful tool to help developers in their decision-making processes. They pointed out that good developers sometimes make bad choices out of boredom, to add shiny things to their CV, out of frustration with existing processes, or just from a lack of understanding. They pointed out that some code may stay in a business system for 15 or 20 years, but not all code is created equal and some may change after 3 or 6 months. Developers need to know where the code they are contributing fits in. What is its likely lifespan?
    Anti-Pattern Four: Design-Driven Design
    The anti-pattern: if you want to impress your colleagues and bosses, use design patterns left, right, and center – MVC, Session Facades, SOA, etc. Or the UML modeling suite from IBM, back in the day… Generate super fast code. And the more jargon you can talk when in the vicinity of the manager the better. Verburg shared a true story about a time when he was interviewing a guy for a job and asked him what his previous work was. The interviewee said that he essentially took patterns from an approved book of Enterprise Architecture Patterns and applied them. Verburg was dumbstruck that someone could have a job in which they took patterns from a book and applied them. He pointed out that the idea that design is a separate activity is simply wrong. He repeated a saying that he uses: "You should pay your junior developers for the lines of code they write and the things they add; you should pay your senior developers for what they take away." He explained that by encouraging people to take things away, the code base gets simpler and reflects the actual business use cases developers are trying to solve, as opposed to the framework that is being imposed. He told another true story about a project to decommission a long-lived system. 98% of the code was decommissioned and people got a nice bonus. But the 2% remained on the mainframe, so the 98% reduction in code resulted in zero reduction in costs, because the entire mainframe was needed to run the 2% that was left. There is an incentive to get rid of source code and subsystems when they are no longer needed. The session continued with several more anti-patterns that were equally insightful.

    Read the article

  • Windows Azure Interop

    - by kaleidoscope
    How is the Windows Azure Platform an open cloud platform, and what makes it interoperable?
    The Windows Azure platform supports popular standards and protocols including SOAP, REST, and XML. Developers can now use their preferred programming frameworks, including .NET and PHP. Tools such as Eclipse have been created for PHP developers building Windows Azure applications. External endpoints (inbound traffic) can now be enabled on a worker role, which allows applications that aren't running under IIS to receive internet traffic.
    Windows Azure interoperable with Java: at PDC 09, a solution accelerator for Tomcat was delivered. Tomcat is an open source software implementation of the Java Servlet and JavaServer Pages technologies. The Windows Azure solution accelerator leverages a PDC09 feature that enables arbitrary processes to bind to inbound service endpoints.
    Windows Azure interoperable with PHP: the Windows Azure tools for Eclipse extension builds upon the PHP Development Toolkit (PDT) and integrates the Web Tools Platform (WTP) to provide a complete toolkit for Windows Azure web application development.
    For more details please refer to the link: http://www.microsoft.com/windowsazure/faq/
    Rituraj

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them:
    New Admin Portal and Command Line Tools
    Today's release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way. It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which, like the portal, call the REST Management APIs) to make it even easier to script and automate your administration tasks. We are offering both a PowerShell (for Windows) and a Bash (for Mac and Linux) set of tools to download. Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.
    Virtual Machines
    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them. Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances). This provides you with the flexibility to run pretty much any workload within Windows Azure. The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple data disks to VMs (which you can then mount and format as drives). You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data center at least 400 miles away from your primary data center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we've released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which also provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment. All you need to do is download the VHD file and boot it up locally; no import/export steps are required.
    Web Sites
    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows. You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for Web sites, including the ability to monitor and track resource utilization in real time. You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and Web Matrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and Web Matrix publishing support includes the ability to deploy SQL databases as part of web site deployment, as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow, including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience. Windows Azure now allows you to deploy up to 10 web sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a "reserved mode" that isolates them so that you are the only customer within a virtual machine, and you can elastically scale the amount of resources your sites use, allowing you to increase your reserved instance capacity as your traffic scales. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis, which allows you to scale your resources up and down to match only what you need.
    Cloud Services and Distributed Caching
    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments. Previously we referred to this capability as "hosted services"; with this week's release we are now referring to this capability as "cloud services". We are also enabling a bunch of new features with them.
    Distributed Cache
    One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to set up and use a low-latency, in-memory distributed cache within your applications. This cache is isolated for use just by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways:
    1) Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and then the cache joins the memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application, regardless of whether the cached data is stored on it or another role. The big benefit with the "co-located" option is that it is free (you don't have to pay anything to enable it) and it allows you to use what might otherwise have been unused memory within your application VMs.
    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache 10s or 100s of GBs of data in memory very effectively, and the cache can be elastically increased or decreased at runtime within your application.
    New SDKs and Tooling Support
    We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems, allowing developers to develop Windows Azure applications using any development operating system.
    Much, Much More
    The above is just a short list of some of the improvements that are shipping in either preview or final form today; there is a LOT more in today's release. These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live keynote I'm giving at 1pm June 7th (later today) where I'll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them. Hope this helps, Scott
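
    Because the post notes that the new cache also speaks the Memcached protocol, existing Memcached client code can in principle be pointed at it unchanged. A hedged Python sketch of what that looks like (the endpoint name is made up, and this assumes the python-memcached package is installed):

        import memcache

        # Hypothetical cache endpoint; in practice you would use your own service's address.
        client = memcache.Client(["mycloudservice.cache.example.net:11211"])

        client.set("product:42", {"name": "widget", "price": 9.99}, time=300)  # cache for 5 minutes
        print(client.get("product:42"))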

    Read the article

  • Upgrading from 11.10 to 12.04 Hangs at "mysql: restarting..."

    - by Mark
    I'm trying to upgrade from Ubuntu Server 11.10 to 12.04 using Update Manager. I figured out that I had to click the arrow next to 'Terminal' and select OK a couple of times. Now it's supposed to be "Installing the upgrades", but it has frozen trying to restart mysql for the second time:
      mysql: restarting....
    Apache2, atd and cups were all restarted twice after "Restarting services possibly affected by the upgrade". It's been stuck here for nearly an hour. I read here about killing processes, but I can't get a terminal window to open.
    EDIT: I don't know if this matters, but there are 6 occurrences in the Distribution Upgrade window that say:
      debconf: unable to initialize frontend: Gnome
      debconf: (Unable to load Gtk -- is libgtk2-perl installed?)
      debconf: falling back to frontend: Dialog
    I should have left the server alone and never run apt-get install ubuntu-desktop yesterday. I would really appreciate any suggestions.
    Thanks, Mark

    Read the article
