Search Results

Search found 8056 results on 323 pages for 'external'.


  • The Social Business Thought Leaders - Ray Wang

    - by kellsey.ruppel
    It seems both consumers and businesses are at the peak of the social hype. Overwhelmed by social media channels, platforms, and processes in both their private and professional lives, many early adopters are starting to feel social fatigue. Mirroring what happened with email and websites in the late 1990s and early 2000s, more and more managers are looking to move from ubiquitous social media tactics to the most appropriate business use cases and processes. This step becomes even more important considering the year-over-year contraction in IT budgets and the consequent need to maximize the return on every dollar spent on new technologies. Ray Wang, CEO and Principal Analyst at Constellation Research, suggests engagement through collaborative technologies both as a conceptual model and as a transformational tool for enterprises to reap business value. Without participation, the reasoning goes, there is no value, and good technology alone is not enough to guarantee employee and customer adoption. Enterprise gamification is a new lever to succeed with Social Business by directing a critical mass of participation towards desired outcomes. What kind of outcomes? A recent study from Constellation Research (see 2012 Q1 Gamification Early Adopters Best Practices) highlights how Marketing, Customer Service and HR are leading the pack with gamification in processes such as:
    - Sustaining long-term customer loyalty (76.4%)
    - Improving response in campaign to lead (74.5%)
    - Right-channeling incidents for resolution in social media (67.3%)
    - Growing the number of service and support incidents resolved by the community (63.6%)
    - Improving employee referral rates and effective recruiting (43.6%)
    - Driving on-boarding success with new hires (20%)
    More than simply adding badges, points and leaderboards to existing processes, enterprise gamification should be holistically embedded into the employee and customer experience to stimulate specific behaviors. According to Ray Wang this can be done at three core levels:
    - Measurable actions. The behaviors we want to facilitate consist of granular actions (e.g. likes, comments, posts, recommendations) and more complex actions (e.g. projects, initiatives, programmes) attributed to individuals, groups and/or external actors.
    - Reputation. The reputation individuals have earned through their actions is a key factor in building motivation among others; it is determined by their identity, social standing, status and competitiveness.
    - Incentives. The intrinsic and extrinsic rewards that motivate behaviors and drive actions.
    Listen to Ray Wang's video interview to learn more about the dynamics that are shaping the future of collaboration and how gamification can help organizations attain new levels of engagement.

    Read the article

  • input / output error, drives randomly refusing to read / write

    - by ILMV
    I have an issue with one of our servers running Ubuntu 10.04. It runs BackupPC and collects backups from various machines / servers around the building. On the 8th minute (12:08, 12:18, 12:28 etc.) the backups are transferred to an external hard drive; we have three and rotate one drive for another every day. The problem is that we are randomly experiencing input / output errors. When this happens you cannot read from or write to the drive, but it hasn't unmounted, so I can still cd to the mount point /media/backup1. The drives are not faulty as it's happening on all of them, so I'm at a loss as to what the problem could be. Here is an example of the many errors we get:
        gzip: stdout: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1083_host1.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1088_host1.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1089_host1.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1090_host1.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 39: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_1090_host1.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
        ls: cannot access /media/backup1/Tue/incr_591_tech2.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_591_tech2.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
        ls: cannot access /media/backup1/Tue/incr_592_tech3.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_593_tech3.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_593_tech3.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
    EDIT » Resolved
    So it turns out Quamis was right: even though I didn't think it was possible, it was actually a problem with the drive. You see, we have three drives all formatted to ext2, and on two of them we were getting I/O errors frequently. I came back to Quamis' answer and discovered the fsck command, so I ran it against the problem drives:
        fsck /dev/sdb1
    This found and fixed a load of problems on the drive, most probably caused by power outages / unsafe removal of drives etc. As the drives are in the ext2 format they aren't journalled and thus aren't protected against such issues. Drives are now working beautifully, thanks all! :D
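    A follow-up note (not from the original thread): since the root cause was the unjournalled ext2 format, one common next step is to add a journal in place with tune2fs, which turns ext2 into ext3 without reformatting. A rough sketch, assuming the affected drive really is /dev/sdb1 mounted at /media/backup1:
        sudo umount /media/backup1      # skip if the drive is already unmounted
        sudo fsck -f /dev/sdb1          # repair any remaining inconsistencies first
        sudo tune2fs -j /dev/sdb1       # add a journal, converting ext2 to ext3 in place
        sudo mount -t ext3 /dev/sdb1 /media/backup1   # mount as ext3 from now on (and update /etc/fstab or the backup script)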

    Read the article

  • Elastic PaaS with WebLogic and OpenStack, part I

    - by Jernej Kaše
    In my previous blog I described the steps to get OpenStack on Solaris up and running. Now we'll explore how WebLogic and OpenStack can work together to deliver a truly elastic Middleware Platform as a Service.
    Middleware / Platform as a Service goals
    First, let's define what PaaS should be: PaaS offerings facilitate the deployment of applications without the complexity of managing the underlying hardware and software and provisioning hosting capabilities. To break it down:
    - PaaS provides a complete platform for hosting solutions (Java EE, SOA, BPM, ...)
    - Infrastructure provisioning (virtual machine, OS, platform) and management is hidden from the PaaS user [administrator or developer]
    - Additionally, PaaS could / should define target SLAs, and the platform should ensure the SLAs are met automatically.
    PaaS use case
    To make it more tangible, we have an IT administrator who has the requirement to deploy a Java EE enterprise application. The application is used by external users who need to submit reports by the end of each month. As a result, the number of concurrent users will fluctuate, with huge spikes expected around the end of each month. The SLA agreed with management is that no more than 100 requests should be waiting to be processed at any given time. In addition, the IT admin has no more than 3 days to have the platform and the application operational.
    The Challenges
    Some of the challenges the IT administrator is facing are:
    - how are we going to ensure the processing power?
    - how are we going to provision the (virtual) machines and the Java EE platform, and deploy the application?
    - how are we going to monitor the SLA?
    - how are we going to react to SLA violations and increase capacity?
    The Ideal Solution
    Ideally, the whole process should be automated, "set it and forget it", and require no human interaction:
    - The vendor packages the solution as deployable image(s)
    - The images are deployed to the IaaS
    - From there, automated processes take care of the SLA
    Solution Architecture with WebLogic 12c, Dynamic Clusters, OpenStack & Solaris
    - Oracle Solaris provides the OS and virtualisation through Solaris Zones
    - OpenStack is a part of Solaris 11.2 and provides cloud management (console and API)
    - WebLogic 12c with Dynamic Clusters provides the platform
    - Traffic Manager provides load balancing
    On top of that, we are going to implement a small control script - Cloud Manager - which is going to monitor the SLA through the WebLogic Diagnostic Framework. In case there are more than 100 pending requests, the script will:
    - provision a new virtual machine based on an image which is configured for the WebLogic domain
    - add the machine to the WebLogic domain
    - increase the number of servers in the dynamic cluster
    - start the newly provisioned server
    (A rough sketch of this loop is included at the end of this post.)
    Stay tuned for part II
    The whole solution with a working demo will be presented in one of our Partner WebCasts in June, exact date TBA.
    Jernej Kaše is a Fusion Middleware Specialist working closely with Oracle Partners in the ECEMEA region to grow their business by leveraging Oracle technology.
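    Appendix to part I: the Cloud Manager script itself is not part of this post, so the fragment below is only a rough shell sketch of the scale-out loop. The image and flavor names, and the check_pending.py / add_server.py WLST scripts (which would query the WebLogic Diagnostic Framework and grow the dynamic cluster), are placeholder assumptions, not shipped tools.
        #!/bin/bash
        # Hypothetical "Cloud Manager" loop: scale out when more than 100 requests are pending.
        THRESHOLD=100
        while true; do
          # check_pending.py is an assumed WLST script that prints the pending request count (via WLDF)
          PENDING=$(wlst.sh check_pending.py | tail -1)
          if [ "$PENDING" -gt "$THRESHOLD" ]; then
            # provision a new VM from the WebLogic-ready image (legacy nova CLI shown)
            nova boot --image wls-node-image --flavor m1.medium "wls-node-$(date +%s)"
            # add_server.py is an assumed WLST script that adds the machine to the domain,
            # raises the dynamic cluster size and starts the new managed server
            wlst.sh add_server.py
          fi
          sleep 60
        done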

    Read the article

  • SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints

    - by pinaldave
    I earlier wrote about the SQL BI Quiz over here and here. The details of the quiz are here: Working with huge data is very common when it is about Data Warehousing. It is necessary to create Cubes on the data to make it meaningful and consumable. There are cases when retrieving the data from a cube takes a lot of time. Let us assume that your cube is returning data very quickly. Suddenly, one day, it is returning the data very slowly. What are the three things you will do to diagnose this? After diagnosing, what will you do to resolve the performance issue? Participate in my question over here.
    I asked BI expert Jason Thomas to help with a few hints for blog readers. He is one of the leading SSAS experts and writes about a complicated subject in simple words. If queries were executing properly before but now take a long time to return the data, it means that there has been a change in the environment in which they are running. Some possible changes are listed below:
    1) Data factors: Compare the data size then and now. An increase in data can result in different execution times. Poorly written queries as well as poor design will not start showing issues till the data grows. How to find it out? (Ans: SQL Server Profiler and Perfmon counters can be used for identifying the issues and performance tuning the MDX queries)
    2) Internal factors: Is some slow MDX query, or are multiple MDX queries, running at the same time that were not running when you tested before? Is there any locking happening due to proactive caching or processing operations? Are the measure group caches being cleared by processing operations? (Ans: Again, Profiler and Perfmon counters will help in finding it out. Load testing can be done using AS Performance Workbench (http://asperfwb.codeplex.com/) by running multiple queries at once)
    3) External factors: Is some other application competing for the same resources?
    HINT: Read "Identifying and Resolving MDX Query Performance Bottlenecks in SQL Server 2005 Analysis Services" (http://sqlcat.com/whitepapers/archive/2007/12/16/identifying-and-resolving-mdx-query-performance-bottlenecks-in-sql-server-2005-analysis-services.aspx)
    Well, these are great tips. Now win big prizes by participating in my question over here.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Tic Tac Toe Winner in Javascript and html [closed]

    - by Yehuda G
    I am writing a tic tac toe game using HTML, CSS, and JavaScript. I have my JavaScript in an external .js file being referenced into the .html file. Within the .js file, I have a function called playerMove, which allows the player to make his/her move and switches between player 'x' and 'o'. What I am trying to do is determine the winner. Here is what I have: each square, via onclick="playerMove(this)", calls playerMove(piece). After each move is made, I want to run an if statement to check for the winner, but am unsure if the parameters would include a reference to 'piece' or to a, b, and c. Any suggestions would be greatly appreciated.
    JavaScript:
        var turn = 0;
        a = document.getElementById("topLeftSquare").innerHTML;
        b = document.getElementById("topMiddleSquare").innerHTML;
        c = document.getElementById("topRightSquare").innerHTML;

        function playerMove(piece) {
            var win;
            if (piece.innerHTML != 'X' && piece.innerHTML != 'O') {
                if (turn % 2 == 0) {
                    document.getElementById('playerDisplay').innerHTML = "X Plays " + printEquation(1);
                    piece.innerHTML = 'X';
                    window.setInterval("X", 10000);
                    piece.style.color = "red";
                    if (piece.innerHTML == 'X')
                        window.alert("X WINS!");
                } else {
                    document.getElementById('playerDisplay').innerHTML = "O Plays " + printEquation(1);
                    piece.innerHTML = 'O';
                    piece.style.color = "brown";
                }
                turn += 1;
            }
        }
    HTML:
        <div id="board">
            <div class="topLeftSquare" onclick="playerMove(this)"></div>
            <div class="topMiddleSquare" onclick="playerMove(this)"></div>
            <div class="topRightSquare" onclick="playerMove(this)"></div>
            <div class="middleLeftSquare" onclick="playerMove(this)"></div>
            <div class="middleSquare" onclick="playerMove(this)"></div>
            <div class="middleRightSquare" onclick="playerMove(this)"></div>
            <div class="bottomLeftSquare" onclick="playerMove(this)"></div>
            <div class="bottomMiddleSquare" onclick="playerMove(this)"></div>
            <div class="bottomRightSquare" onclick="playerMove(this)"></div>
        </div>

    Read the article

  • Developing add-ins for multiple versions of Office

    - by Pranav
    Do you want to develop an add-in targeting multiple versions of Office? And you have basic questions like "Is it possible to do?" and "How do I do it?"? Then you have come to the right place. A few months back, I got a requirement to develop add-ins for Outlook 2003 and Outlook 2007. The functionality for both versions is the same. A doubt struck me… when the functionality is the same, why would I develop two add-ins separately? Why don't I make a single build for both versions of Office? Then I started searching for techniques to develop add-ins which work in both (2003 and 2007) and read many articles written by VSTO experts in their blogs, the official VSTO blog, MSDN, forums and what not.
    Misha says: Theoretically, you can develop an add-in for multiple versions of Microsoft Office by catering to the lowest common denominator. This means if you use an Excel 2003 add-in template in Visual Studio 2008, you would be able to develop and debug this with Excel 2007. However if you try this, you may meet these error messages: "You cannot debug or run this project, because the required version of the Microsoft Office application is not installed.", followed by "Unable to start debugging."
    You can develop an Office 2003 add-in on a system where Office 2007 is installed. The following procedure demonstrates how to update your Visual Studio debugging options to use Microsoft Outlook 2007 to debug an add-in targeting Microsoft Outlook 2003:
    1. On the Project menu, click on ProjectName Properties
    2. Click on the Debug tab
    3. In the Start Action pane, click the Start external program radio button
    4. Click the file browser button and navigate to %ProgramFiles%\Microsoft Office\Office12
    5. Choose Outlook.exe and click Open
    6. Press F5 to debug your add-in
    For more details, go through this article in Misha Shneerson's blog. There are some tips and tricks to be followed, and things that one needs to take care of while developing add-ins targeting multiple versions of Office, in Andrew's blog. Have a look at that too. You might find it interesting and useful. http://blogs.msdn.com/andreww/archive/2007/06/15/can-you-build-one-add-in-for-multiple-versions-of-office.aspx
    Here is an MSDN article on Running Solutions in Different Versions of Microsoft Office: http://msdn.microsoft.com/en-us/library/bb772080.aspx
    Hope this helps!

    Read the article

  • Process Rules!

    - by Ajay Khanna
    One of the key components of a process is the "business rule". Business rules take many forms inside your process definition and are, in a way, a manifestation of your company's business policy. Business rules inside the process are used for policy enforcement, governance, decision management, operations efficiency etc. Following are some basic types of rules that can be a part of your process:
    1. Process conditions: These are defined as the process gateways that determine the path a process will take depending on the process parameters. For example, if discount > 10%, go to the approval path; if discount < 10%, auto-approve the order.
    2. Data rules: These business rules are defined as facts in a decision table or knowledge base. The process captures all required parameters and submits those to a RETE-based rules engine. The rules engine processes the data and returns the result. For example, rules determining your insurance eligibility.
    3. Event rules: Here the system is monitoring the various events and event patterns that are emerging inside the process or external to the process. You can define actions or alerts to be triggered when a certain pattern of events emerges over a specified time period. Such rules need Complex Event Processing and are used in applications like credit card fraud detection or utility demand response.
    4. User interface rules: In order to add dynamic behavior to the UI, or to keep users from making mistakes and to enforce policy, another mechanism available is UI rules. They are evaluated as the end user is filling out the web forms. These may include enabling and disabling parts of the UI as per business policy. An example could be: if the age of a user is less than 13 years, disable the credit card field and enable the "parental approval required" checkbox.
    Your process may include many of these rule types. Oracle OpenWorld provides a unique opportunity to listen to Oracle Business Process Management experts and customers. We will discuss business rules during various sessions at Oracle OpenWorld. Two of the sessions specifically focused on business rules are listed below:
    - Accelerating an Implementation of Complex Worldwide Business Approval Rules: Wednesday, Oct 3, 10:15 AM, Moscone South – 305
    - Oracle Business Rules Use Cases Design and Testing: Wednesday, Oct 3, 3:30 PM, Marriott Marquis - Golden Gate C3
    The Oracle Business Process Management track covers a variety of topics, with speakers covering technology, methodology and best practices. You can see the list of Business Process Management sessions here. Come back to this blog for more coverage from Oracle OpenWorld!

    Read the article

  • Looking for tips on managing complexity with SCM repositories

    - by Philip Regan
    I am a solo developer in my department and I have a lot of individual projects, all created and managed by me. I started using SVN at ProjectLocker via Versions on the Mac a couple of years ago when the variety of projects started getting unwieldy.
    Scenario 1: Now I have a process of reasonable complexity; it can be broken up into multiple smaller applications and they all share files. In one phase, there is a single shared file—a constants file—that is shared between a Cocoa app and an iPhone app framework. In the second phase, the iPhone app framework will be used to create individual apps of the same ilk—controller classes and whatnot will all be the same—but with different content in each. The problem that I am running across is that the file from the first phase is in one repository with the application that started it, and the app framework is in a second, separate repository.
    Scenario 2: I have another application framework that partially relies on code from an open source project. This is all internal, non-commercial work, but again, the application framework is going to be used to create a variety of unique products and processes. So now I have an internally managed repository and an externally managed one out of my control. I make little changes to the open source code to meet the needs of my framework when there is an update I download, but I never commit back into the external repository (though, now that I think about it, I don't think I'm committing it to mine either. Oops).
    The Problem
    I have all of this set up on my production Mac quite nicely, but duplicating and subsequently maintaining that environment on my laptop has been challenging. For Scenario 1, I've thought of merging these two projects together into the same repository because they are, for all intents and purposes, inextricably linked. But for Scenario 2, I think I'm stuck just managing files as best I can.
    The Question
    I'm wondering if anyone has any tips on how to manage either of these situations, as well as other complex SCM scenarios when it comes to linking various files from various repositories together. My familiarity with SVN only comes from my work with Versions. It's been great, but I'm a little out of my depth here.
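    A side note on Scenario 1 (only a sketch, since Versions may not expose this directly and the URLs below are hypothetical): Subversion's svn:externals property lets one repository pull a folder from another, so a shared constants folder could live in a single place and be checked out into both working copies:
        cd ~/work/iphone-framework               # working copy of the framework repository
        svn propset svn:externals "Shared https://projectlocker.example.com/svn/cocoa-app/trunk/Shared" .
        svn commit -m "Pull the shared constants folder from the Cocoa app repository"
        svn update                                # fetches the external folder into this working copy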

    Read the article

  • New hidden parameters in Oracle 11.2

    - by Mike Dietrich
    We really welcome every external review of our slides. And also recommendations from customers visiting our workshops. So it happened to me more than a week ago that Marco Patzwahl, the owner of MuniqSoft GmbH, had a very lengthy train ride in Germany (as the engine drivers go on strike this week it could have become even worse) and nothing better to do then reviewing our slide set. And he had plenty of recommendations. Besides that he pointed us to something at least I was not aware of and added it to the slides: In patch set 11.2.0.2 a new behaviour for datafile write errors has been implemented. With this release ANY write error to a datafile will cause the instance to abort. Before 11.2.0.2 those errors usually led to an offline datafile if the database operates in archivelog mode (your production database do, don’t they?!) and the datafile does not belong to the SYSTEM tablespace. Internal discussion found this behaviour not up-to-date and alligned with RAC systems and modern storages. Therefore it has been changed and a new underscore parameter got introduced. _DATAFILE_WRITE_ERRORS_CRASH_INSTANCE=TRUE This is the default setting´and the new behaviour beginning with Oracle 11.2.0.2 If you would like to revert to the pre-11.2.0.2 behaviour you’ll have to set in your init.ora/spfile this parameter to false. But keep in mind that there’s a reason why this has been changed. You’ll find more info in MOS Note: 7691270.8 and this topic in the current version of the slides on slide 255. Thanks to Marco for the review!!   And then I received an email from Kurt Van Meerbeeck today. Kurt is pretty well known in the Oracle community. And he’s the owner of jDUL/DUDE, a database unloading tool which bypasses the Oracle database engine and access data direclty from the blocks. Kurt visited the upgrade workshop two weeks ago in Belgium and did highlight to me that since Oracle 11.2.0.1 even though you haven’t set neither SGA_TARGET nor MEMORY_TARGET the database might still do resize operations. Reason why this behaviour has been changed: Prevention of ORA-4031 errors. But on databases with extremly high loads this can cause trouble. Further information can be found in MOS Note:1269139.1 . And the parameter set to TRUE by default is called _MEMORY_IMM_MODE_WITHOUT_AUTOSGA=TRUE This can be found now in the slide set as well on slide number 240. And thanks to Kurt for this information!!
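    For reference only, a hedged sketch of how both behaviours could be reverted to their pre-change defaults (as noted above, there are good reasons for the new defaults, so test carefully before touching these on a production system):
        # revert the datafile write error behaviour to pre-11.2.0.2 (an instance restart is required afterwards)
        echo 'ALTER SYSTEM SET "_datafile_write_errors_crash_instance"=FALSE SCOPE=SPFILE;' | sqlplus -s / as sysdba
        # disable the immediate-mode memory resize operations described above
        echo 'ALTER SYSTEM SET "_memory_imm_mode_without_autosga"=FALSE SCOPE=SPFILE;' | sqlplus -s / as sysdba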

    Read the article

  • Silverlight Cream for May 12, 2010 -- #860

    - by Dave Campbell
    In this Issue: Miroslav Miroslavov(-2-), Mike Snow(-2-, -3-), Paul Sheriff, Fadi Abdelqader, Jeremy Likness, Marlon Grech, and Victor Gaudioso.
    Shoutouts:
    - Andy Beaulieu has a cool WP7 game up and is looking for opinions/comments: Droppy Pop: A Windows Phone 7 Game
    - Karl Shifflett has code and video tutorials up for the app he wrote for the WPF LOB tour he just did: Stuff – WPF Line of Business Using MVVM Video Tutorial
    From SilverlightCream.com:
    - Flipping panels: I had missed this 3rd part of the CompleteIT explanation. In this post Miroslav Miroslavov describes the page flipping they're doing. Great explanation and all the code included.
    - Flying objects against you: The 4th part of the CompleteIT explanation is blogged by Miroslav Miroslavov where he is discussing the screen elements 'flying toward' the user.
    - Silverlight Tip of the Day #17 – Double Click: Mike Snow's Tip of the Day 17 is showing how to implement mouse double-clicks either for an individual control or for an entire app.
    - Silverlight Tip of the Day #18 – Elastic Scrolling: In Mike Snow's Tip of the Day 18, he's talking about and showing some 'elastic' scrolling in his image viewer application. He's asking for opinions and suggestions.
    - Silverlight Tip of the Day #19 – Using Bing Maps in Silverlight: Mike Snow's Tips are getting more elaborate :) ... Number 19 is about using the BingMap control in your Silverlight app.
    - Control to Control Binding in WPF/Silverlight: Paul Sheriff demonstrates control to control binding... saving a bunch of code behind in the process. Project included.
    - Your First Step to the Silverlight Voice/Video Chatting Client/Server: Fadi Abdelqader has a post up at CodeProject using the WebCam and Mic features of Silverlight 4 to set up a voice & video chatting app.
    - MVVM Coding by Convention (Convention over Configuration): Jeremy Likness discusses Convention over Configuration and gives up some good MVVM nuggets along the way... check out his nice long post and grab the source for the project too... and also check out the external links he has in there.
    - MEFedMVVM changes >> from cool to cooler: Marlon Grech has refactored MEFedMVVM, and in addition is working with other MVVM framework folks to use some of the same MEF techniques in theirs... code on CodePlex
    - New Silverlight Video Tutorial: How to Create a Silverlight Paging System to Load new Pages: In Victor Gaudioso's latest video tutorial he builds a ContentHolder UserControl that will load any page on demand into your MainPage.xaml
    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Is throwing an error in unpredictable subclass-specific circumstances a violation of LSP?

    - by Motti Strom
    Say I wanted to create a Java List<String> (see spec) implementation that uses a complex subsystem, such as a database or file system, for its store, so that it becomes a simple persistent collection rather than a basic in-memory one. (We're limiting it specifically to a List of Strings for the purposes of discussion, but it could be extended to automatically de-/serialise any object, with some help. We can also provide persistent Sets, Maps and so on in this way too.) So here's a skeleton implementation:
        class DbBackedList implements List<String> {
            private DbBackedList() {}

            /** Returns a list, possibly non-empty */
            public static List<String> getList() {
                return new DbBackedList();
            }

            public String get(int index) {
                return Db.getTable().getRow(index).asString(); // may throw DbExceptions!
            }

            // add(String), add(int, String), etc. ...
        }
    My problem lies with the fact that the underlying DB API may encounter connection errors that are not specified in the List interface that it should throw. My problem is whether this violates Liskov's Substitution Principle (LSP).
    Bob Martin actually gives an example of a PersistentSet in his paper on LSP that violates LSP. The difference is that his newly-specified exception there is determined by the inserted value and so is strengthening the precondition. In my case the connection/read error is unpredictable and due to external factors, and so is not technically a new precondition, merely an error of circumstance, perhaps like OutOfMemoryError which can occur even when unspecified. In normal circumstances, the new Error/Exception might never be thrown. (The caller could catch it if it is aware of the possibility, just as a memory-restricted Java program might specifically catch OOME.)
    Is this therefore a valid argument for throwing an extra error, and can I still claim to be a valid java.util.List (or pick your SDK/language/collection in general) and not in violation of LSP? If this does indeed violate LSP and is thus not practically usable, I have provided two less-palatable alternative solutions as answers that you can comment on; see below.
    Footnote: Use Cases
    In the simplest case, the goal is to provide a familiar interface for cases when (say) a database is just being used as a persistent list, and allow regular List operations such as search, subList and iteration.
    Another, more adventurous, use-case is as a slot-in replacement for libraries that work with basic Lists, e.g. if we have a third-party task queue that usually works with a plain List:
        new TaskWorkQueue(new ArrayList<String>()).start()
    which is susceptible to losing all its queue in the event of a crash. If we just replace this with:
        new TaskWorkQueue(new DbBackedList()).start()
    we get instant persistence and the ability to share the tasks amongst more than one machine.
    In either case, we could either handle connection/read exceptions that are thrown, perhaps retrying the connection/read first, or allow them to throw and crash the program (e.g. if we can't change the TaskWorkQueue code).

    Read the article

  • Happy New Year! Upcoming Events in January 2011

    - by mandy.ho
    Oracle Database kicks off the New Year at the following events during the month of January. Hope to see you there and please send in your pictures and feedback!
    Jan 20, 2011 - San Francisco, CA: LinkShare Symposium West 2011
    Oracle is a proud Gold Sponsor at the LinkShare Symposium West 2011, January 20 in San Francisco, California. Year after year LinkShare has been bringing their network the opportunity to come to life. At the LinkShare Symposium online performance marketing leaders meet to optimize face-to-face during a full day of networking. Learn more by attending the Oracle Breakout Session, "Omni-Channel Retailing, What is possible now?" on Thursday, January 20, 11:15 a.m. - 12:00 noon, Grand Ballroom. http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=128306&src=6954634&src=6954634&Act=397
    Jan 24, 2011 - Cincinnati, OH: Greater Cincinnati Oracle User Group Meeting
    "Tom Kyte Day" - Featuring a day of sessions presented by Senior Technical Architect, Tom Kyte. Sessions include "Top 10, no 11, new features of Oracle Database 11g Release 2" and "What do I really need to know when upgrading", plus more. http://www.gcoug.org/
    Jan 25, 2011 - Vancouver, British Columbia: Oracle Security Solutions Forum
    Featuring a Special Keynote Presentation from Tom Kyte - Complete Database Security. Join us at this half-day event, Oracle Database Security Solutions: Complete Information Security. Learn how Oracle Database Security solutions help you:
    • Prevent external threats like SQL injection attacks from reaching your databases
    • Transparently encrypt application data without application changes
    • Prevent privileged database users and administrators from accessing data
    • Use native database auditing to monitor and report on database activity
    • Mask production data for safe use in nonproduction environments
    http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=126974&src=6958351&src=6958351&Act=97
    Jan 26, 2011 - Halifax, Nova Scotia: Oracle Database Security Technology Day
    Exclusive Seminar on Complete Information Security with Oracle Database 11g. The amount of digital data within organizations is growing at unprecedented rates, as is the value of that data and the challenges of safeguarding it. Yet most IT security programs fail to address database security--specifically, insecure applications and privileged users. So how can you protect your mission-critical information? Avoid risky third-party solutions? Defend against security breaches and compliance violations? And resist costly new infrastructure investments? Join us at this half-day seminar, Oracle Database Security Solutions: Complete Information Security, to find out. http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=126269&src=6958351&src=6958351&Act=93

    Read the article

  • Using ext4 in VMware machine

    First of all, using a journaling filesystem like NTFS, ext4, XFS, or JFS (not to name all of them) is a very good idea and nowadays unthinkable not to do. Linux offers a good variety of different options as journaling filesystems for your system. For years I have been using SGI's XFS and I am pretty confident with the stability, performance and reliability of the system. In earlier years I had to struggle with incompatibilities between XFS and the boot loader; using an ext2-formatted /boot solved this issue. But, wow, that is ages ago!
    Lately, I had to set up a fresh Lucid Lynx (Ubuntu 10.04 LTS) system for a change of our internal groupware / messaging system. Therefore, I fired up a new virtual machine with an almost standard configuration in VMware Server and ran through our network-based PXE boot and installation procedure. At a certain step in this process, Ubuntu asks you about the partitioning of your hard drive(s). Honestly, I have to say that only out of curiosity I stuck to the "default" suggestion and put my faith and trust in the Ubuntu installation routine... resulting in an ext4-based root mount point ( / ). The rest of the installation went on without further concerns or worries. Note: I really can't remember why I chose to go away from my favourite... Well, it would turn out to be the wrong decision after all.
    Ok, let's continue the story about ext4 in a VMware-based virtual machine. After some hours installing additional packages and configuring the new system using LDAP for general authentication and login, I had an "out-of-the-box" usable enterprise messaging system based on Zarafa 6.40 Community Edition, including a proper SSL-based WebAccess interface and the Z-Push extension for ActiveSync with my Nokia mobile. Straightforward and pretty nice for the time spent on the setup. Having priority on other tasks, I let the system just run and didn't pay any further attention to it. Until I ran into an upgrade of "Mail for Exchange" on Symbian OS. My mobile did not bother me at all with the upgrade and everything went smoothly, but trying to re-establish the ActiveSync connection to the Zarafa messaging system resulted in a frustrating situation. So, I shifted my focus back to the Linux system and was amazed to find that the root filesystem had been remounted read-only due to hard drive failures, or at least errors reported by ext4. Firing up Google only confirmed my concerns, and it seems that ext4 for VMware-based virtual machines does not look like a stable and reliable candidate to me. You might consider reading these external resources:
    - ext4 fs corruption under VMWare Server 2.01
    - Bug #389555 - ext4 filesystem corruption
    Well, I learned my lesson, and ext{2|3|4} based filesystems are not going to be used on any of my Linux systems or customer installations in the future.
    Addendum: I did not try this setup in other virtualization environments like VirtualBox, qemu, kvm, Xen, etc.

    Read the article

  • SCHA API for resource group failover / switchover history

    - by krishna.k.murthy
    The Oracle Solaris Cluster framework keeps an internal log of cluster events, including switchover and failover of resource groups. These logs can be useful to Oracle support engineers for diagnosing cluster behavior. However, until now, there was no external interface to access the event history. Oracle Solaris Cluster 4.2 provides a new API option for viewing the recent history of resource group switchovers in a program-parsable format.
    Oracle Solaris Cluster 4.2 provides a new option tag argument, RG_FAILOVER_LOG, for the existing API command scha_cluster_get, which can be used to list recent failover / switchover events for resource groups. The command usage is as shown below:
        # scha_cluster_get -O RG_FAILOVER_LOG number_of_days
    number_of_days: the number of days to be considered for scanning the historical logs.
    The command returns a list of events in the following format. Each field is separated by a semi-colon [;]:
        resource_group_name;source_nodes;target_nodes;time_stamp
    source_nodes: node names from which the resource group failed over or was switched manually.
    target_nodes: node names to which the resource group failed over or was switched manually.
    There is a corresponding enhancement in the C API function scha_cluster_get(), which uses the SCHA_RG_FAILOVER_LOG query tag.
    In the example below, geo-infrastructure (failover resource group), geo-clusterstate (scalable resource group), oracle-rg (failover resource group), asm-dg-rg (scalable resource group) and asm-inst-rg (scalable resource group) are part of a Geographic Edition setup.
        # /usr/cluster/bin/scha_cluster_get -O RG_FAILOVER_LOG 3
        geo-infrastructure;schost1c;;Mon Jul 21 15:51:51 2014
        geo-clusterstate;schost2c,schost1c;schost2c;Mon Jul 21 15:52:26 2014
        oracle-rg;schost1c;;Mon Jul 21 15:54:31 2014
        asm-dg-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:54:58 2014
        asm-inst-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:56:11 2014
        oracle-rg;;schost2c;Mon Jul 21 15:58:51 2014
        geo-infrastructure;;schost2c;Mon Jul 21 15:59:19 2014
        geo-clusterstate;schost2c;schost2c,schost1c;Mon Jul 21 16:01:51 2014
        asm-inst-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:01:10 2014
        asm-dg-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:02:10 2014
        oracle-rg;schost2c;;Tue Jul 22 16:58:02 2014
        oracle-rg;;schost1c;Tue Jul 22 16:59:05 2014
        oracle-rg;schost1c;schost1c;Tue Jul 22 17:05:33 2014
    Note that in the output some of the entries might have an empty string in the source_nodes. Such entries correspond to events in which the resource group was switched online manually or during a cluster boot-up. Similarly, an empty destination_nodes list indicates an event in which the resource group went offline.
    - Arpit Gupta, Harish Mallya
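    A small post-processing example (not from the article itself): because each field is semicolon-separated, the log is easy to feed into standard text tools, for instance to count how many failover / switchover events each resource group had over the last 7 days:
        /usr/cluster/bin/scha_cluster_get -O RG_FAILOVER_LOG 7 | \
            awk -F';' '{ count[$1]++ } END { for (rg in count) print rg, count[rg] }'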

    Read the article

  • How do I stop icons appearing on the desktop in a particular area?

    - by Seamus
    When I download something to my desktop, or insert a CD or flash drive, the icon appears on my desktop. When I have conky running, the icon sometimes appears in the top right corner, underneath conky, where I can't see it. How do I stop this happening? My .conkyrc is pasted below. I didn't write it all myself, so I'm not entirely sure what I need to change, or what parts are relevant for this particular question...
        # UBUNTU-CONKY
        # A comprehensive conky script, configured for use on
        # Ubuntu / Debian Gnome, without the need for any external scripts.
        #
        # Based on conky-jc and the default .conkyrc.
        # INCLUDES:
        # - tail of /var/log/messages
        # - netstat shows number of connections from your computer and application/PID making it. Kill spyware!
        #
        # -- Pengo
        #
        # Create own window instead of using desktop (required in nautilus)
        own_window yes
        own_window_type override
        own_window_transparent yes
        own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager
        # Use double buffering (reduces flicker, may not work for everyone)
        double_buffer yes
        # fiddle with window
        use_spacer right
        # Use Xft?
        use_xft yes
        xftfont DejaVu Sans:size=8
        xftalpha 0.8
        text_buffer_size 2048
        # Update interval in seconds
        update_interval 3.0
        # Minimum size of text area
        # minimum_size 250 5
        # Draw shades?
        draw_shades no
        # Text stuff
        draw_outline no # amplifies text if yes
        draw_borders no
        uppercase no # set to yes if you want all text to be in uppercase
        # Stippled borders?
        stippled_borders 3
        # border margins
        border_margin 9
        # border width
        border_width 10
        # Default colors and also border colors, grey90 == #e5e5e5
        default_color grey
        own_window_colour brown
        own_window_transparent yes
        # Text alignment, other possible values are commented
        #alignment top_left
        alignment top_right
        #alignment bottom_left
        #alignment bottom_right
        # Gap between borders of screen and text
        gap_x 10
        gap_y 20
        # stuff after 'TEXT' will be formatted on screen
        TEXT
        $color
        ${color orange}SYSTEM ${hr 2}$color
        $nodename $sysname $kernel on $machine
        ${color orange}CPU ${hr 2}$color
        ${freq}MHz Load: ${loadavg} Temp: ${acpitemp}
        $cpubar
        ${cpugraph 000000 ffffff}
        NAME ${goto 150}PID ${goto 200}CPU% ${goto 250}MEM%
        ${top name 1} ${goto 150}${top pid 1} ${goto 200}${top cpu 1} ${goto 250}${top mem 1}
        ${top name 2} ${goto 150}${top pid 2} ${goto 200}${top cpu 2} ${goto 250}${top mem 2}
        ${top name 3} ${goto 150}${top pid 3} ${goto 200}${top cpu 3} ${goto 250}${top mem 3}
        ${top name 4} ${goto 150}${top pid 4} ${goto 200}${top cpu 4} ${goto 250}${top mem 4}
        ${color orange}MEMORY / DISK ${hr 2}$color
        RAM: $memperc% ${membar 6}$color
        Swap: $swapperc% ${swapbar 6}$color
        Home: ${fs_free_perc /home}% ${fs_bar 6 /}$color
        Free Space: ${fs_free /home}
        ${color orange}NETWORK (${addr eth0}) ${hr 2}$color
        Down: $color${downspeed eth0} k/s ${alignr}Up: ${upspeed eth0} k/s
        ${downspeedgraph eth0 25,140 000000 ff0000} ${alignr}${upspeedgraph eth0 25,140 000000 00ff00}$color
        Total: ${totaldown eth0} ${alignr}Total: ${totalup eth0}
        ${execi 30 netstat -ept | grep ESTAB | awk '{print $9}' | cut -d: -f1 | sort | uniq -c | sort -nr}
        ${color orange}WIRELESS (${addr wlan0}) ${hr 2}$color
        Down: $color${downspeed wlan0} k/s ${alignr}Up: ${upspeed wlan0} k/s
        ${downspeedgraph wlan0 25,140 000000 ff0000} ${alignr}${upspeedgraph wlan0 25,140 000000 00ff00}$color
        Total: ${totaldown wlan0} ${alignr}Total: ${totalup wlan0}
        ${execi 30 netstat -ept | grep ESTAB | awk '{print $9}' | cut -d: -f1 | sort | uniq -c | sort -nr}
    Conky solutions have been offered, but perhaps these aren't the best way of approaching it. What I really want is to stop icons even appearing in that part of the desktop window: that is, I want to make part of the desktop real estate "off-limits" to new icons appearing on the desktop.

    Read the article

  • Automate RAC Cluster Upgrades using EM12c

    - by HariSrinivasan
    One of the most arduous processes in DB maintenance is upgrading databases across major versions, especially for complex RAC clusters. With the release of the Database Plug-in (12.1.0.5.0), EM12c Rel 3 (12.1.0.3.0) now supports automated upgrading of RAC clusters in addition to standalone databases. This automation includes:
    - Upgrade of the complete cluster across the nodes. (Example: 11.1.0.7 CRS, ASM, RAC DB -> 11.2.0.4 or 12.1.0.1 GI, RAC DB)
    - Best practices in tune with your operations, where you can automate the upgrade in steps:
      Step 1: Upgrade the Clusterware to Grid Infrastructure (allowing you to wait, test and then move to the DBs).
      Step 2: Upgrade RAC DBs either separately or in a group (mass upgrade of RAC DBs in the cluster).
    - Standard pre-requisite checks like the Cluster Verification Utility (CVU) and RAC checks
    - Division of the upgrade process into non-downtime activities (like laying down the new Oracle Homes (OH) and running checks) and downtime activities (like upgrading the Clusterware to GI and upgrading RAC), thereby lowering the downtime required.
    - Ability to configure backup and restore options as a part of this upgrade process. You can choose to:
      a. Take a backup via this process (either a Guaranteed Restore Point (GRP) or RMAN)
      b. Set the procedure to pause just before the upgrade step to allow you to take a custom backup
      c. Ignore backup completely, if there are external mechanisms already in place.
    High-level steps:
    1. Select the procedure "Upgrade Database" from the Database Provisioning home page.
    2. Choose the target type for the upgrade and the destination version.
    3. Pick and choose the cluster; it picks up the complete topology since the clusterware/GI isn't upgraded already.
    4. Select the gold image of the destination version for deploying both the GI and RAC OHs.
    5. Specify the new OH patch and credentials, choose the restore and backup options, and if required provide additional pre and post scripts.
    6. Set the break points in the procedure execution to isolate downtime activities.
    7. Submit and track the procedure's execution status.
    The animation below captures the steps in the wizard. For the step-by-step process and to understand the support matrix, check this documentation link. Explore the functionality!! In the next blog, I will talk about automating rolling upgrades of databases in a Physical Standby Data Guard environment using Transient Logical Standby.

    Read the article

  • asus n550jv audio problem: no sound from notebook' speakers

    - by skywalker
    Ubuntu 13.10. The problem is that the internal speakers don't work. I have no problem when I'm using the headphones. There is no hardware issue, since in Windows 8 everything works perfectly (external subwoofer included). I'm trying to modify /etc/modprobe.d/alsa-base.conf but I can't find the correct model to put into:
        options snd-hda-intel model=
    The file HD-Audio-Models.txt doesn't contain the model for ALC668. Some info:
        :~$ sudo aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: MID [HDA Intel MID], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: MID [HDA Intel MID], device 7: HDMI 1 [HDMI 1]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: MID [HDA Intel MID], device 8: HDMI 2 [HDMI 2]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: PCH [HDA Intel PCH], device 0: ALC668 Analog [ALC668 Analog]
          Subdevices: 0/1
          Subdevice #0: subdevice #0
        :~$ sudo lspci -v | grep -A7 -i "audio"
        00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
          Subsystem: Intel Corporation Device 2010
          Flags: bus master, fast devsel, latency 0, IRQ 52
          Memory at f7a14000 (64-bit, non-prefetchable) [size=16K]
          Capabilities: [50] Power Management version 2
          Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit-
          Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
          Kernel driver in use: snd_hda_intel
        --
        00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 04)
          Subsystem: ASUSTeK Computer Inc. Device 11cd
          Flags: bus master, fast devsel, latency 0, IRQ 53
          Memory at f7a10000 (64-bit, non-prefetchable) [size=16K]
          Capabilities: [50] Power Management version 2
          Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
          Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
          Capabilities: [100] Virtual Channel
    PS info:
        :~$ amixer -c 0
        Simple mixer control 'IEC958',0
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]
        Simple mixer control 'IEC958',1
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]
        Simple mixer control 'IEC958',2
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]
        :~$ pacmd dump-volumes
        Welcome to PulseAudio! Use "help" for usage information.
        Sink 0: reference = 0: 76% 1: 76%, real = 0: 76% 1: 76%, soft = 0: 100% 1: 100%, current_hw = 0: 76% 1: 76%, save = yes
        Input 8: volume = 0: 100% 1: 100%, reference_ratio = 0: 100% 1: 100%, real_ratio = 0: 100% 1: 100%, soft = 0: 100% 1: 100%, volume_factor = 0: 100% 1: 100%, volume_factor_sink = 0: 100% 1: 100%, save = no
        Source 0: reference = 0: 100% 1: 100%, real = 0: 100% 1: 100%, soft = 0: 100% 1: 100%, current_hw = 0: 100% 1: 100%, save = no
        Source 1: reference = 0: 16% 1: 16%, real = 0: 16% 1: 16%, soft = 0: 100% 1: 100%, current_hw = 0: 16% 1: 16%, save = yes
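    One way to narrow this down (my own suggestion, not from the original post, and the model name below is only an illustrative guess) is to try candidate model values one at a time, reloading ALSA between attempts:
        # append a candidate model override (replace asus-mode4 with whatever value you want to test)
        echo "options snd-hda-intel model=asus-mode4" | sudo tee -a /etc/modprobe.d/alsa-base.conf
        sudo alsa force-reload        # or reboot if the sound modules cannot be unloaded
        # if it does not help, remove the test line again before trying the next candidate
        sudo sed -i '/snd-hda-intel model=asus-mode4/d' /etc/modprobe.d/alsa-base.conf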

    Read the article

  • Synchronizing ODSEE and OUD

    - by Etienne Remillon
    When it comes to synchronizing between ODSEE and OUD, what are the best options? A couple of options are available:
    - Use an internal OUD capability called the Replication Gateway
    - Use our synchronization tool called Directory Integration Platform, part of Oracle Directory Services Plus
    - Manual export and import
    Let's check the pros and cons of each method.
    The Replication Gateway is the natural, out-of-the-box solution to perform the task. We created this as a feature of OUD because it works at our replication protocol level. The gateway performs the required adaptation between ODSEE's replication protocol and OUD's. The benefit of doing this is that it provides strong consistency between the two types of directories. It fully leverages the conflict management implemented in the replication protocols to ensure that changes are applied in a coherent and ordered manner. It does not require specific modifications on existing ODSEE production instances, such as turning on the "retro changelog". Changes are propagated at near replication speed in both directions. The Replication Gateway can also synchronize information that is stored internally in the directory server, such as "xxxxx" account locking managed at the ODSEE server level and not via the nsyyyy attribute. The OUD Replication Gateway does not require any specific tools or installation-specific procedures. It is managed like other OUD components, with monitoring and configuration via the standard console. The OUD Replication Gateway does not, however, perform data remapping or transformation between ODSEE and OUD.
    Using the Directory Integration Platform as an external component to OUD brings flexibility in remapping and transformations between ODSEE and OUD. There is a price to pay in using DIP to perform the synchronization task. You will have to turn on the retro changelog to get access to changes on the ODSEE side (this will impact disk and CPU usage and performance, which could be a serious challenge for your existing ODSEE environment if you have not provisioned additional hardware and instances). You will not benefit from conflict resolution management, and this might have to be addressed at the application level, which is not always possible to implement.
    Using export and import seems very simple, but this methodology cannot ensure a highly available deployment with up-to-date entries on both sides. This solution can be used if full HA with up-to-date data is not needed (during synchronization time). It is often used if data cleaning needs to take place, to avoid polluting a new environment with old, unnecessary data.

    Read the article

  • Build Dependencies and Silverlight 4

    - by Kyle Burns
    At my current position, I've been doing quite a bit of Silverlight development and have also been working with TFS2010 build services to enable continuous integration. One of the critical pieces of a successful continuous build setup (and also one of the benefits of having one) is that the build system should be able to "get latest" against the source repository and immediately build with no errors. This can break down both in an automated build scenario and a "new guy" scenario when the solution has external dependencies that may not be present in the build environment.
    The method that I use to address the dependency issue is to store all of the binaries upon which my solution depends in a folder under the solution root called "Reference Items". I keep this folder as part of the solution and check all of the binaries into source control, so when I get the latest version of the solution from source control all of the binaries are downloaded to my machine as well. This gets me closer to the ideal where a new developer installs the development IDE, gets latest, and can immediately build and run unit tests before jumping into coding the feature of the day.
    This all sounds pretty good (and it is), but a little while back I ran into one of those little hiccups that requires a little manual intervention. The issue that I ran into is that with Silverlight (at least version 4), the behavior of the "Add Reference" command when adding a reference to a DLL that is present in the GAC is to omit the HintPath element that it includes with regular .Net projects, so even if the DLL is sitting in the Reference Items folder and downloaded to the build machine, it cannot be found at compile time and the build will fail.
    To work around this behavior, you need to be comfortable editing the XML project files generated by Visual Studio (in my case this is typically a .csproj file). Simply open the project file in your favorite text editor, find the Reference element that refers to the component, and modify the XML to include the HintPath. Here's a before and after example of the component that ultimately led me to the investigation behind this post:
    Before:
        <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL" />
    After:
        <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL">
          <HintPath>..\Reference Items\Telerik.Windows.Controls.dll</HintPath>
        </Reference>

    Read the article

  • SharePoint 2010, Cloud, and the Constitution

    - by Michael Van Cleave
    The other evening an article on the Red Tape Chronicles caught my eye. The article, written by Bob Sullivan and titled "The Constitutional Issues of Cloud Computing", was very interesting in regards to the direction most of the technical world is going. We have all been inundated with reasons to utilize cloud computing (price, availability, even scalability), but what Bob brings up is a whole separate view of why a business might not want to move toward the cloud for services or applications.
    The overall point of the article was pretty simple. It all boiled down to the summation that hosting "things" in the cloud (email, documents, etc…) is interpreted differently under the law regarding constitutional search and seizure than, say, a document or item that is kept in physical form at a business or home. If you physically have it stored, someone would have to get a warrant to search for it or seize it; but if it is stored off in the cloud and the ISV or provider is subpoenaed for the item, then they will usually give access to the information. Obviously this is a big difference in interpretation of the law and the constitution due to technology.
    So you might ask, "Where does this fit in with SharePoint?" Well, the overall push for this next version of SharePoint is one that gives a business ultimate flexibility to utilize the cloud. In one example, this upcoming version gracefully lends itself to multi-tenancy, so that online or "cloud" hosting would be possible by service providers. Another aspect of the upcoming version is that it has updated its ability to store content outside of the database and in a cheaper, commoditized storage facility. This is called Remote Blob Storage (or RBS), which is the next evolution of External Blob Storage (or EBS). With this new functionality that businesses might look forward to, it is extremely important for them to understand that they might be opening themselves up to laws that do not need a warrant to search or seize their information that is stored in the cloud.
    It will be interesting to see how this all plays out in the next few months. Usually the laws change slowly in comparison to technology, so it might be a while until we see whether it is actually constitutional to treat someone's content in the cloud differently than it would be treated in their possession. However, until there is some type of parity or more concrete laws regarding the differences, be very careful about what you put in the cloud.
    Michael

    Read the article

  • New computer hangs on shutdown/reboot, how to troubleshoot?

    - by torbengb
    Summary: My machine hangs on shutdown/restart: all windows and the menu bar disappear but the desktop wallpaper remains, and it stays like that without disk activity forever (hours). It doesn't even show the shutdown screen (the one with the animated dots) where I could hit ESC and watch the shutdown text. How can I troubleshoot this?
    Details: I've just received a new nettop computer (Acer Aspire Revo 3700: CPU: Atom D525, GPU: Nvidia ION2). I've just made a clean install of Ubuntu 10.10 using the standard USB pendrive method. The machine boots okay and works OK, including WLAN and audio, but the graphics are not OK. Ubuntu offered to install and activate the current recommended Nvidia driver, but the machine hangs on shutdown/restart, which prevents the installation of the proper Nvidia driver. I have to cycle the power to reboot. I ran the Update Manager in the hope that the updates would fix the hang-up. At the end of the update installation it asked to reboot - and got stuck just like before. I see no obvious cause of the freeze and I don't know if it's caused by graphics problems or anything else. The only USB attachment is a mouse/keyboard; I don't have any external storage attached; and I don't have any programs running (the machine freezes even when doing a restart right after login). How can I determine what is causing the freeze? How can I fix this?
    I'm frankly rather disappointed because I bought this new machine in the hope of getting the graphics to work, which failed miserably on my old machine, even though Ubuntu is supposed to be good with Nvidia. Being a fresh convert from Windows, I was hoping for a happier experience this time, so I'm very much looking forward to your suggestions!
    ... After posting this question, I see related questions in the right sidebar: this, this, and this. Don't know why these didn't show up while I composed my question. Those questions suggest some ACPI settings, but I am not experienced enough to find/change those settings. I'll try the sudo shutdown -h now command when I get home and see if that works, then update this question. I did check the system BIOS but didn't see anything out of the ordinary.
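    Update (an experiment I plan to try based on the ACPI hints in those related questions; the exact parameter is an assumption on my part, not a confirmed fix): boot options such as acpi=force or noapic can be tested by editing the GRUB defaults and regenerating the configuration:
        sudoedit /etc/default/grub
        #   change:  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        #   to e.g.: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=force"
        sudo update-grub
        sudo shutdown -h now    # then check whether the machine powers off cleanly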

    Read the article

  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture and has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies including Oracle and IBM. SCA, as used in SOA Suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles like the separation of application and business logic are maintained. Orchestration or Integration? A common thing to see with many people who are beginning either to build a new SOA-based infrastructure or to move an old system to be service oriented is confusion about the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however it's important to remember that, although a hammer can be used to drive a screw into wood, that doesn't mean it's the best way to do it. Service Integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation – a simple product ordering system, for example, might integrate a stock checking service and a payment processing service. Process Orchestration, however, is generally a higher level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might take the earlier example of a product ordering service and couple it with a business rules service and a human task to handle edge cases. A good (but not exhaustive) rule-of-thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention. BPEL vs BPMN For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects it's certainly an important consideration for those using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA Suite has no BPMN engine, whereas BPM Suite has both a BPMN and a BPEL engine. SOA Suite has the ESB component "Mediator", whereas BPM Suite has none. Decisions must be made, therefore, on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line. Design for performance: Read the complete article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: C2B2,SOA best practice,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Data Auditor by Example

    - by Jinjin.Wang
    OWB has a Data Auditors node under Oracle Module in the Projects Navigator. What is a data auditor and how do you use it? I will give an introduction to data auditors and show their usage with examples. A data auditor is an important tool for ensuring that data quality levels meet business requirements. It validates data against a set of data rules to determine which records comply and which do not, and it gathers statistical metrics on how well the data in a system complies with a rule by auditing the table and recording how many errors occur against it. Data auditors are typically scheduled for regular execution as part of a process flow, to monitor the quality of the data in an operational environment such as a data warehouse or ERP system, either immediately after updates like data loads, or at regular intervals. How do you use a data auditor to monitor data quality? Only objects with data rules can be monitored, so the first step is to define data rules according to business requirements and apply them to the objects you want to monitor. The objects can be tables, views, materialized views, and external tables. Secondly, create a data auditor containing the objects. Optionally, you can configure the data auditor and set physical deployment parameters for it, which will be used while running the data auditor. Then deploy and run the data auditor either manually or as part of a process flow. After execution, the data auditor sets several output values, and records that are identified as not complying with the data rules contained in the data auditor are written to error tables. Here is an example. We have two tables, DEPARTMENTS and EMPLOYEES (see pic-1 and pic-2; click here for DDL and data), imported into OWB. We want to gather statistical metrics on how well the data in these two tables satisfies the following requirements: a. Values of the EMPLOYEES.EMPLOYEE_ID attribute are three-digit numbers. b. Valid values for EMPLOYEES.JOB_ID are IT_PROG, SA_REP, SH_CLERK, PU_CLERK, and ST_CLERK. c. EMPLOYEES.EMPLOYEE_ID is related to DEPARTMENTS.MANAGER_ID. Pic-1 EMPLOYEES Pic-2 DEPARTMENTS 1. To determine legal data within EMPLOYEES and legal relationships between data in different columns of the two tables, we first define data rules based on the three requirements and apply them to the tables. a. The first requirement concerns the patterns an attribute is allowed to conform to. We create a Domain Pattern List data rule, EMPLOYEE_PATTERN_RULE. The pattern is defined in Oracle Database regular expression syntax as ^([0-9]{3})$. Apply the data rule EMPLOYEE_PATTERN_RULE to the table EMPLOYEES.
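    As a rough, hedged illustration of what the three-digit pattern rule will flag (this is not part of OWB itself; it assumes the EMPLOYEE_ID values have been exported to a hypothetical text file employee_ids.txt, one value per line):

        # Print any EMPLOYEE_ID value that is not exactly three digits,
        # i.e. the records the EMPLOYEE_PATTERN_RULE audit would mark as non-compliant.
        grep -Ev '^[0-9]{3}$' employee_ids.txt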

    Read the article

  • Getting WLAN on my Laptop to work (Medion MD98300)

    - by Anand Böhmer
    Dear Ubuntu Community, I am having difficulties getting the WLAN adapter on my Medion 98300 laptop to work. The WLAN card seems to be connected through an internal USB interface, and the card itself had shown up as a wireless network device while installing Ubuntu. I have tried a few things earlier, but none of my Google searches have brought me to a working solution... I am quite new to Linux and only know a couple of terminal commands so far, so I have probably missed out on a few possible solutions. Maybe you can help me? Thank you very much in advance! A few minimal technical details: AMD Turion 64 X2 Dual-Core Mobile Technology TL-50, NVIDIA GeForce Go 6150, SanDisk 64GB SSD, 2GB RAM DDR2 667, nForce chipset (I forgot the version, but it is deducible from the GPU I guess), WiFi: ZyDAS ZD1211B 802.11g. Thanks a lot again! :) UPDATE: I tried around myself a little and found a guide on the Linux Mint forums that helped! I had already tried to install the Linux backport modules etc. What I finally did was update the Linux firmware and run the following command: echo "options acer_wmi wireless=1" | sudo tee /etc/modprobe.d/acer_wmi.conf and rebooted. Now I could find and connect to networks, but unfortunately the link quality was very bad, around 40 to 50, even though my router is running at high power and is only 6 meters away! I then switched a few channels, but that did not improve much. Before, under Windows, I had a very good link quality and had the entire 16Mbit/s internet connection at my disposal; now I can only get about 3-5Mbit/s. Better than nothing, but still pretty bad! The "TX power" is fixed at 20dBm and iwconfig says that "Power Management" is off... Maybe the power of the module is set too low?... UPDATE2: I figured out that 20dBm is a normal power output. I even tried to change the power using iwconfig wlan0 txpower INTEGERHERE but obviously my "card" does not support more than 20. More would probably be illegal as well, so I won't even use more than 20. I guess that I will have to figure out a way, or maybe just switch cards. Are the mainboard USB connectors on a laptop electrically the same as the standard external ones? If so, I could simply solder a micro Wireless N adapter onto the board :)
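    A few hedged checks that sometimes help with weak zd1211 links (a sketch only; it assumes the interface name is wlan0 as in the UPDATE2 command, and that zd1211-firmware is the Ubuntu firmware package for this chipset):

        # Confirm the adapter is detected and which driver/firmware the kernel loaded:
        lsusb | grep -i zydas
        dmesg | grep -i zd1211

        # Make sure the chipset firmware package is installed (the driver needs it):
        sudo apt-get install zd1211-firmware

        # Check that nothing is soft-blocking the radio, and inspect the current link details:
        rfkill list
        iwconfig wlan0

        # Power management is already reported as off, but it can be forced off explicitly:
        sudo iwconfig wlan0 power off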

    Read the article

  • No client internet access when setting up these iptables rules

    - by Siriss
    I have read many other posts but cannot figure this out. eth0 is my external connected to a Comcast modem. The server has internet access with no issues. eth1 is internal and running DHCP for the clients. I have DHCP working just fine, all my clients can get an IP and ping the server but they cannot access the internet. I am using ISC-DHCP-SERVER and have set /etc/default/isc-dhcp-server to INTERFACE="eht1" Here is my dhcpd.conf file located in /etc/dhcp/dhcpd.conf ddns-update-style interim; ignore client-updates; subnet 10.0.10.0 netmask 255.255.255.0 { range 10.0.10.10 10.0.10.200; option routers 10.0.10.2; option subnet-mask 255.255.255.0; option domain-name-servers 208.67.222.222, 208.67.220.220; #OpenDNS # option domain-name "example.com"; default-lease-time 21600; max-lease-time 43200; authoritative; } I have made the *net.ipv4.ip_forward=1* change in /etc/sysctl.conf here is my interfaces file: auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp iface eth1 inet static address 10.0.10.2 netmask 255.255.255.0 network 10.0.10.0 auto eth1 And finally- here is my iptables.conf file: # Firewall configuration written by system-config-firewall # Manual customization of this file is not recommended. *nat :PREROUTING ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] -A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE #-A PREROUTING -i eth0 -p tcp --dport 59668 -j DNAT --to-destination 10.0.10.2:59668 COMMIT *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -i eth1 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT -A INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT -A FORWARD -s 10.0.10.0/24 -o eth0 -j ACCEPT -A FORWARD -d 10.0.10.0/24 -m state --state ESTABLISHED,RELATED -i eth0 -j ACCEPT -A FORWARD -p icmp -j ACCEPT -A FORWARD -i lo -j ACCEPT -A FORWARD -i eth1 -j ACCEPT #-A FORWARD -i eth0 -m state --state NEW -m tcp -p tcp -d 10.0.10.2 --dport 59668 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT I am completely stuck. I cannot figure out why the clients cannot access the internet. Am I missing a service? Is a service not running? Any help would be greatly appreciated. I tried to be as thorough as possible but please let me know if I have missed something. Thank you!
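    A few quick checks for this kind of NAT setup (a minimal sketch under the assumptions in the question: eth0 is the external interface and the clients sit on 10.0.10.0/24 behind eth1):

        # sysctl.conf is only read at boot; confirm forwarding is enabled right now:
        cat /proc/sys/net/ipv4/ip_forward      # should print 1
        sudo sysctl -w net.ipv4.ip_forward=1   # enable it immediately if it prints 0

        # Confirm the MASQUERADE and FORWARD rules were actually loaded, with packet counters:
        sudo iptables -t nat -L POSTROUTING -n -v
        sudo iptables -L FORWARD -n -v

        # From a client, separate routing problems from DNS problems:
        ping -c 3 208.67.222.222   # reaches the OpenDNS IP -> NAT/forwarding works, suspect DNS
        ping -c 3 www.google.com   # fails while the IP ping works -> check the DNS servers handed out by DHCP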

    Read the article

< Previous Page | 245 246 247 248 249 250 251 252 253 254 255 256  | Next Page >