Search Results

Search found 4458 results on 179 pages for 'individual improvement'.


  • Welcome to the FMW Install and Admin Proactive Team Blog

    - by Daniel Mortimer
    Introduction

    Welcome to the Fusion Middleware Install and Administration Proactive Support blog.  This is our first post, so let's begin by introducing ourselves and our mission.

    Who We Are

    We are a small team of support engineers based in Europe.  Our expertise covers all matters related to the installation and administration of Oracle Application Server 10g, Oracle Fusion Middleware 11g and future versions to come. We particularly focus on core components such as:
    - the Installers and Configuration Wizards
    - Web Tier (Oracle HTTP Server)
    - OPMN
    - Enterprise Manager Console for Application Server
    as well as general questions / problems relating to patching, maintenance and architecture.

    Our Mission
    - Improve the customer experience
    - Enable customers to avoid / prevent issues when working with our products
    - Enable faster resolution of problems when they occur

    Our Activities
    - Enhancement and maintenance of our knowledge base
    - In particular, develop and maintain special content such as the Fusion Middleware Information Centers and Lifecycle Support Advisors
    - Seek continuous improvement of the product documentation
    - Contribute to the Fusion Middleware Support News
    - Moderation of the "Oracle Application Server" support community
    - Participate in the Support Advisor Webcast program
    - Involved in the lifecycle of diagnostic tools such as RDA and OCM: User Acceptance Testing, logging of enhancements and health check ideas
    - Provide feedback to product management / development: logging of product bugs and enhancements
    - Suggest improvements that could be made to web sites like OTN
    - Promote new support documents, tools via channels such as Newsletter and Social Media

    We hope that this blog will be a two-way communication as we are interested in feedback on what we can improve. Many suggestions we can act on immediately while others may take more time, but all of them will be acknowledged and followed up. Thank you for your time and we look forward to both informing and working with you.

    Postscript: Many links you will find in our blog entries will require a login to My Oracle Support. For readers who do not have a login, please accept our apologies - when and where possible we will endeavour to ensure the links will supplement rather than replace wording in the blog entries.

    Read the article

  • UML Class diagrams with Java packages?

    - by loosebruce
    I am trying to model in UML 2.0 a Java servlet application that has three classes:
    - Servlet class: essentially a main class that acts as the controller
    - DatabaseLogic: contains methods for database operations
    - XMLBuilder: builds an XML from a query result string
    The classes use a variety of packages from the Java library, and I am unsure how to model this in UML. Do I have to create a package and show which libraries are used for each individual class, or can I just have one large package in the diagram with all the libraries, showing which classes have dependencies on which, as per this diagram? This is my first time working with Java properly (I'm a C++ guy). Apart from being a bit messy, is this a correct UML representation of the system I described? Does a Package in UML mean the same as a Package in Java?

    Read the article

  • VS 11 Beta merge tool is awesome, except for resolving conflicts

    - by deadlydog
    If you've downloaded the new VS 11 Beta and done any merging, then you've probably seen the new diff and merge tools built into VS 11.  They are awesome, and by far a vast improvement over the ones included in VS 2010.  There is one problem with the merge tool though, and in my opinion it is huge.

    Basically the problem with the new VS 11 Beta merge tool is that when you are resolving conflicts after performing a merge, you cannot tell what changes were made in each file where the code is conflicting.  Was the conflicting code added, deleted, or modified in the source and target branches?  I don't know (without explicitly opening up the history of both the source and target files), and the merge tool doesn't tell me.  In my opinion this is a huge fail on the part of the designers/developers of the merge tool, as it actually forces me to either spend an extra minute for every conflict to view the source and target file history, or to go back to using the merge tool in VS 2010 to properly assess which changes I should take.

    I submitted this as a bug to Microsoft, but they say that this is intentional by design. WHAT?! So they purposely crippled their tool in order to make it pretty and keep the look consistent with the new diff tool?  That's like purposely putting a little hole in the bottom of your cup for design reasons to make it look cool.  Sure, the cup looks cool, but I'm not going to use it if it leaks all over the place and doesn't do the job that it is intended for. Bah! But I digress.

    Because this bug is apparently a feature, they asked me to open up a "feature request" to have the problem fixed. Please go vote up both my bug submission and the feature request so that this tool will actually be useful by the time the final VS 11 product is released.

    Read the article

  • Gauging Maturity of your BPM Strategy - part 2 / 2

    - by Sanjeev Sharma
    In my earlier post I discussed the essence of maturity assessment and the business imperative for doing the same in the context of BPM. In this post I will discuss Oracle’s BPM Maturity assessment methodology. Oracle’s BPM Maturity model comprises the following components:
    - Maturity – represents stages of evolution of your BPM capability, with 0 being the lowest level and 5 being the highest level
    - Domain – represents multiple perspectives, both technical and business oriented, against which your BPM capability can be assessed
    - Adoption – represents the scale of BPM rollout, starting at the project level and going up to the enterprise level
    Note: Your BPM capability can be at different levels of maturity for the different domains. Oracle’s BPM assessment methodology measures the maturity of your BPM capability at the individual domain level as well as the aggregate level. The output of Oracle’s BPM assessment benefits you in two ways:
    - Gap Analysis, by comparing the “As-Is” BPM capability with the desired “To-Be” BPM capability along the various domains (see Figure 1)
    - Systematic Adoption, by aligning the evolution of BPM capability with its rollout in multiple phases (see Figure 2)

    Read the article

  • A question every programmer has. Maybe.

    - by zengr
    I have been using Java for the last 2 years (academics). Now, as I am graduating, I have received a job offer from a .com. The job is awesome and it's backend Java work. I wanted to get involved with Ruby on Rails, looked for a lot of jobs, gave a few interviews, but didn't make it. So, what should I do now? Should I go ahead with Java and learn/do more with Java, a complete 360 degrees of the Java world (the full stack of Java from backend to frontend)? Or Java at the workplace while trying to improve my Ruby on Rails? I understand this is a very subjective question and depends on the individual, but what would you have done? Have you ever faced a similar problem? I feel I have wasted some time with Rails, where I could not "conquer" Rails, whereas I could have used that time to go deeper into Java.

    Read the article

  • Get aggregated view of data for entire website with Google Analytics

    - by crmpicco
    I have a website (www.ayrshireminis.com) which has three main sections under different directories: /forum, /galleries and /contact. I would like to have an aggregated view of the data for the whole website, but also for each section. What is the recommended approach for doing this? I believe I can create a web property that includes a profile for the entire website and duplicated filtered profiles, each section having an include filter. This is my gut instinct, but I'd like to know if there is another (better) way to do it. Maybe by having one account that includes a profile for the whole site and another profile with an include filter for each of the individual sections?

    Read the article

  • How do I simplify terrain with tunnels or overhangs?

    - by KKlouzal
    I'm attempting to store vertex data in a quadtree with C++, such that far-away vertices can be combined to simplify the object and speed up rendering. This works well with a reasonably flat mesh, but what about terrain with overhangs or tunnels? How should I represent such a mesh in a quadtree? After the initial generation, each mesh is roughly 130,000 polygons and about 300 of these meshes are lined up to create the surface of a planetary body. A fully generated planet is upwards of 10,000,000 polygons before applying any culling to the individual meshes. Therefore, this second optimization is vital for the project. The rest of my confusion focuses around my inexperience with vertex data: How do I properly loop through the vertex data to group them into specific quads? How do I conclude from vertex data what a quad's maximum size should be? How many quads should the quadtree include?
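    For illustration, here is a rough Python sketch of the kind of bucketing I have in mind: grouping vertices into quads by their X/Z position and splitting a quad once it holds too many. The Vertex tuple, MAX_VERTS and MAX_DEPTH values are arbitrary placeholders, not engine values, and this flat projection is exactly what breaks down for overhangs and tunnels, since two surface points can share the same X/Z.

        # Minimal quadtree sketch: vertices are grouped into quads by their X/Z
        # position; a quad splits into four children once it exceeds a size limit.
        from collections import namedtuple

        Vertex = namedtuple("Vertex", "x y z")

        MAX_VERTS = 256   # assumed per-quad capacity before subdividing
        MAX_DEPTH = 8     # assumed cap so degenerate data cannot recurse forever

        class QuadNode:
            def __init__(self, x, z, size, depth=0):
                self.x, self.z, self.size, self.depth = x, z, size, depth
                self.verts = []
                self.children = None  # four children once subdivided

            def insert(self, v):
                if self.children is not None:
                    self._child_for(v).insert(v)
                    return
                self.verts.append(v)
                if len(self.verts) > MAX_VERTS and self.depth < MAX_DEPTH:
                    self._subdivide()

            def _subdivide(self):
                half = self.size / 2.0
                self.children = [
                    QuadNode(self.x,        self.z,        half, self.depth + 1),
                    QuadNode(self.x + half, self.z,        half, self.depth + 1),
                    QuadNode(self.x,        self.z + half, half, self.depth + 1),
                    QuadNode(self.x + half, self.z + half, half, self.depth + 1),
                ]
                for v in self.verts:
                    self._child_for(v).insert(v)
                self.verts = []

            def _child_for(self, v):
                half = self.size / 2.0
                col = 1 if v.x >= self.x + half else 0
                row = 1 if v.z >= self.z + half else 0
                return self.children[row * 2 + col]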

    Read the article

  • PASS: Board Q&A at the Summit

    - by Bill Graziano
    The last two years we’ve put the Board in front of the members and taken questions.  We’re going to do that again this year.  It will be in Room 307/308 from 12:15 to 1:30 on Friday.

    Yes, this time overlaps with the Birds of a Feather Lunch and the start of afternoon sessions – but only partially.  You can attend the Q&A and still get to parts of both of those.  There just isn’t a great time to do this.  Every time overlaps with something.  We can’t do it after the last session on Friday.  We can’t fit it between the last session and the evening events on Wednesday or Thursday.  We had some discussion around breakfast time but I didn’t think that was realistic.  This is the least bad time we could come up with.

    Last year we had 60-70 people attend.  These are the items that were specific things that I could work on:
    - The first question was whether to increase transparency around individual votes of Board members.  We approved this at the Board meeting the following day.  The only caveat was that if the Board is given confidential information as a basis for their vote then we may not be able to disclose individual votes.  Putting a Director in a position where they can’t publicly defend the reason for their vote is a difficult situation.  Thanks Kendal!
    - Can we have a Board member discretionary fund?  As background, I took a couple of people to lunch so we could have a quiet place to talk.  I bought lunch but wasn’t able to expense it back to PASS.  We just don’t have a budget item for things like this.  I think we should.  I would guess the entire Board would like it also.  It was in an earlier version of the budget but came out as part of a cost-cutting move to balance the budget.  I’d like to see it added back in but we’ll have to see.
    - I know there were comments about the elections.  At this point we had created the Election Review Committee.  I’ve already written at length about this process.
    - Where does IT work go?  PASS started to publish our internal management reports starting in December 2010.  You can find them on our Governance page.  These aren’t filtered at all and include a variety of information about IT projects.  The most recent update had roughly a page of updates related to IT.  Lots of the work was related to Summit and the Orator tool that we use to manage speaker submissions.
    - There were numerous requests that Tina Turner not be repeated.  Done.  I don’t think we’ll do anything quite like that again.
    - We had a request for a payment plan for Summit.  We looked into this briefly but didn’t take any action.  We didn’t think the effort was worth the small number of people that would use it.  If you disagree, submit this on our Summit Feedback site and get some votes.
    - There were lots of suggestions around the first-timers events – especially from first timers.  You can find all our current activities related to first-timers at the First Timers page on the Summit web site.  Plus links to 34 (!) blog posts on suggestions for first-timers.  And a big THANK YOU to Confio and Red Gate for sponsoring this.

    I hope you get the chance to attend.  These events are very helpful to me as a Board member.  I like being able to look around the room as comments are being made and see the audience reaction.  It helps me gauge the interest in an idea.

    I’d also like to direct you to the Summit Feedback site.  You can submit and vote on ideas to make the Summit a better experience.  As of right now we have the suggestions from last year still up.  We may reset these prior to the Summit though.

    Read the article

  • Sort algorithms that work on large amount of data

    - by Giorgio
    I am looking for sorting algorithms that can work on a large amount of data, i.e. that can work even when the whole data set cannot be held in main memory at once. The only candidate that I have found up to now is merge sort: you can implement the algorithm in such a way that it scans your data set at each merge without holding all the data in main memory at once. The variation of merge sort I have in mind is described in this article in the section "Use with tape drives". I think this is a good solution (with complexity O(n x log(n))), but I am curious to know if there are other (possibly faster) sorting algorithms that can work on large data sets that do not fit in main memory.

    EDIT: Here are some more details, as required by the answers:
    - The data needs to be sorted periodically, e.g. once a month. I do not need to insert a few records and have the data sorted incrementally.
    - My example text file is about 1 GB of UTF-8 text, but I wanted to solve the problem in general, even if the file were, say, 20 GB.
    - It is not in a database and, due to other constraints, it cannot be. The data is dumped by others as a text file; I have my own code to read this text file.
    - The format of the data is a text file: new line characters are record separators.
    One possible improvement I had in mind was to split the file into files that are small enough to be sorted in memory, and finally merge all these files using the algorithm I have described above.
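    For reference, that split-sort-merge idea is essentially an external merge sort. A minimal Python sketch of it, assuming newline-separated UTF-8 records; the CHUNK_LINES value and file names are illustrative assumptions, not a tuned implementation:

        # External merge sort sketch: split the input into chunks that fit in RAM,
        # sort each chunk, write it to a temporary run file, then k-way merge the runs.
        import heapq
        import tempfile

        CHUNK_LINES = 1_000_000  # assumed number of records that fit in memory

        def external_sort(in_path, out_path):
            runs = []
            with open(in_path, encoding="utf-8") as src:
                while True:
                    chunk = [line for _, line in zip(range(CHUNK_LINES), src)]
                    if not chunk:
                        break
                    chunk.sort()
                    run = tempfile.TemporaryFile("w+", encoding="utf-8")
                    run.writelines(chunk)
                    run.seek(0)
                    runs.append(run)
            with open(out_path, "w", encoding="utf-8") as dst:
                # heapq.merge streams the sorted runs, so only one line per run
                # needs to be resident in memory during the final merge.
                dst.writelines(heapq.merge(*runs))
            for run in runs:
                run.close()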

    Read the article

  • What percentage of software developers work solo?

    - by JMather
    I'm trying to put together some ideas for a talk, and one of the things that occurred to me, is if there's any documentation or research into how many programmers work as the lone developer within their team. I think this is an important distinction because individual developers (and perhaps small team developers) end up having to wear many more hats than developers part of a large developer group. It could give us some better insight to career development and transition tactics, as well. I've tried some generally googling, and wasn't able to turn up anything, so I'm hoping maybe someone has seen (or studied) something related to this. Thanks in advance!

    Read the article

  • Should Professional Development occur on company time?

    - by jshu
    As a first-time part-time software developer at a small consulting company, I'm struggling to organise time to further my own software development knowledge - whether that's reading a book, keeping up with the popular questions on StackOverflow, researching a technology we're using in-depth, or following the front page of Hacker News. I can see results borne from my self-allocated study time, but listing and demonstrating the skills and knowledge gained through Professional Development is difficult. The company does not have any defined PD policy, and there's a lot of pressure to get something deliverable done now! when working for consultants. I've checked what my coworkers do, and they don't appear to allocate any time to self-improvement; they just work at the problems they're given, looking up specific MSDN references, code samples, and the like as they need them.

    I realise that PD policy is going to vary across companies of different size and culture, and a company like my own is probably a bit of an edge case. I'd love to hear views and experiences from more seasoned developers; especially those who have to make the PD policy choices in their team or company. I'd also like to learn about the more radical approaches to PD, even if they're completely out there; it's always interesting to see what other people are trying.

    Not quite a summary, but what I'm trying to ask:
    - Is it common or recommended for companies to allocate PD time?
    - Whose responsibility is it to ensure a developer's knowledge and skills are up to date?
    - Should a part-time work schedule inspire a lower ratio of PD time : work?
    - How can a developer show non-developer coworkers that reading blogs and books is net productive?
    - Is reading blogs and books actually net productive? (references welcomed)
    - Is writing blogs effective as a way of PD? (a recent theme on Hacker News)

    This is sort of a broad question because I don't know exactly which questions I need to ask here, so any thoughts on relevant issues I haven't addressed are very welcome.

    Read the article

  • Javascript Rookie Question: Define Variables Inline

    - by Dylan Kinnett
    I'm proficient with HTML and CSS but I'm still pretty shaky when it comes to Javascript. That said, I've been able to build a site using the Internet Archive Book Reader, which relies on reader.js Here's a copy of one of my versions of reader.js https://gist.github.com/dylan-k/ed4efed2384e221d46cc It's a good site, but I find I have to repeat things a lot. Basically, I have one copy of reader.js for every page/book featured on the site. It seems there must be a better way. I re-use the script, making copies, just so that I can change lines 28, 80, 83, 84. Is there a way I could include just one copy of reader.js and then use a <script> tag to define these 4 lines for the individual pages?

    Read the article

  • Formal definition for term "pure OO language"?

    - by Yauhen Yakimovich
    I can't think of a better place among SO siblings to pose such a question. Originally I wanted to ask "Is Python a pure OO language?", but considering the trouble and some sort of discomfort people experience while trying to define the term, I decided to start with obtaining a clear definition for the term itself. It would be rather fair to start with the correspondence of Dr. Alan Kay, who coined the term (note the inspiration in the biological analogy to cells or other living objects). There are the following ways to approach the task:
    - Give a comparative analysis by listing programming languages that exhibit certain properties unique and sufficient to define the term (Smalltalk and Java are passing examples, but IMO this way seems neither really complete nor fruitful).
    - Give a formal definition (or close to it, e.g. in a more academic or mathematical style).
    - Give a philosophical definition that would totally rely on the semantic context of a concrete language or a priori programming experience (there must be some chance of a successful explanation by the community).
    My current version: "If a certain programming (formal) language can (grammatically) differentiate between operations and operands, as well as infer for the type of each operand whether this type is an object (in the sense of OOP) or not, then we call such a language an OO-language as long as there is at least one type in this language which is an object. Finally, if all types of the language are also objects, we define such a language to be a pure OO-language." I would appreciate any possible improvement of it. As you can see, I just made the definition dependent on the term "object" (often fully referenced as a class of objects).
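    For what it's worth, Python (the language that prompted the question) does satisfy the "all types are objects" clause of that working definition, since numbers, strings, functions, classes and modules are all instances of object. A quick check using nothing beyond the standard built-ins; this illustrates the clause, it is not a proof of the definition itself:

        # In Python, every value is an object, including types and modules themselves.
        import math

        print(isinstance(42, object))        # True - ints are objects
        print(isinstance("text", object))    # True - strings are objects
        print(isinstance(len, object))       # True - built-in functions are objects
        print(isinstance(int, object))       # True - classes/types are objects too
        print(isinstance(math, object))      # True - even modules are objects
        print(type(42).__mro__)              # (<class 'int'>, <class 'object'>)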

    Read the article

  • Should a link validator report 302 redirects as broken links?

    - by Kevin Vermeer
    A while ago, sparkfun.com changed their URL structure from /commerce/product_info.php?products_id=9266 to /products/9266. This is nice, right? We don't need to know that it is (or was) a PHP page, and commerce, product_info, and products_id all tell us that we're looking at some products. The latter form seems like a great improvement. However, the change would have broken existing links. So, nicely, they stuck in 302 redirects. Visit http://www.sparkfun.com/commerce/product_info.php?products_id=9266 and your browser will issue

        GET /commerce/product_info.php?products_id=9266 HTTP/1.1

    to which Sparkfun's servers reply

        HTTP/1.1 302 Found
        Location: http://www.sparkfun.com/products/9266

    This 302 redirect is caught by Stack Exchange's link validator as a broken link. It's not broken; it works just fine. Here, try it: http://www.sparkfun.com/commerce/product_info.php?products_id=9266. I understand that a 302 redirect is intended to be a temporary redirect, while a 301 should be used for permanent changes per RFC 2616. That said, Wikipedia and common practice use it as a redirect. Who is in error in this situation? Is this an error in Sparkfun's redirect implementation or in Stack Exchange's URL validator?
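    For the record, a validator can easily distinguish "redirects somewhere" from "actually dead". A rough Python sketch of that check, using only the standard library; which 3xx codes to treat as alive is a policy choice on the validator's side, not something RFC 2616 mandates, and the URL is just the one from the post:

        # Fetch only the status/headers and decide: 2xx is fine, 3xx with a Location
        # is a working redirect, 4xx/5xx (or no response at all) is what "broken" means.
        import urllib.error
        import urllib.request

        class NoRedirect(urllib.request.HTTPRedirectHandler):
            # Stop urllib from following redirects so the 302 itself is visible.
            def redirect_request(self, req, fp, code, msg, headers, newurl):
                return None

        def check_link(url):
            opener = urllib.request.build_opener(NoRedirect)
            try:
                # HEAD keeps it cheap; some servers refuse HEAD, in which case a GET
                # would be the fallback.
                resp = opener.open(urllib.request.Request(url, method="HEAD"), timeout=10)
                return ("ok", resp.getcode())
            except urllib.error.HTTPError as err:
                if err.code in (301, 302, 303, 307, 308):
                    return ("redirect", err.headers.get("Location"))
                return ("broken", err.code)
            except urllib.error.URLError:
                return ("broken", None)

        print(check_link("http://www.sparkfun.com/commerce/product_info.php?products_id=9266"))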

    Read the article

  • Rubik's cube array rotation

    - by Ace
    I'm about to make a 3D Rubik's cube based game in Flash AS3 and Away3D. I don't really know how to manage the 2D arrays of the Rubik's cube. For example, how do I rotate the corresponding arrays if I rotate a side, or just rotate a middle part? At this stage I also don't know how to rotate those smaller cube parts all together when a side is rotating. First I was thinking of "groups" (like in SketchUp, 3ds Max or Blender), but that would be tricky, because the group components would change every time. So I was thinking of just rotating each individual piece around a global axis. However, I only know the Away3D functions to rotate the cube on its local X, Y or Z axis, but how do I rotate around a global axis? Does anyone know of an algorithm for doing these types of rotations?
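    To be clear about the array bookkeeping: a face turn is a 90-degree rotation of an N x N array plus a cyclic swap of the adjacent edge strips. A rough sketch of the rotation part (written in Python just to show the index shuffle, since the actual project is AS3; the placeholder face values and the edge-strip comment are illustrative, not from a working game):

        # Rotate a face (an N x N 2D array) 90 degrees clockwise or counter-clockwise.
        def rotate_cw(face):
            # zip(*face[::-1]) pairs up the reversed rows column by column,
            # which is exactly a clockwise quarter turn.
            return [list(row) for row in zip(*face[::-1])]

        def rotate_ccw(face):
            return [list(row) for row in zip(*face)][::-1]

        front = [
            ["F1", "F2", "F3"],
            ["F4", "F5", "F6"],
            ["F7", "F8", "F9"],
        ]

        print(rotate_cw(front))  # first column of the result is the old bottom row
        # A full move would also cycle the four edge strips of the neighbouring faces
        # (e.g. up bottom row -> right left column -> down top row -> left right column
        # for a clockwise front turn), which depends on the chosen face layout.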

    Read the article

  • Too much I/O in the morning ?

    - by steveh99999
    Interesting little improvement on a SQL 2005 system I encountered recently….. Some background - this system had a fairly ‘traditional OLTP’ workload ie  heavily used during day – till around 9pm, then had a batch window for several hours, then not much activity in the early hours of the day, until normal workload resumed the following morning. Using perfmon, I noticed that every morning, we would see a big spike in SQL Server I/O when the application started to be used... As it was 2005 I decided to look at what tables were in cache before and after the overnight batch processing ran… ( using DMV equivalent of dbcc memusage that I posted earlier). Here’s what I saw :-     So, contents of data cache split fairly evenly between my 'important/heavily used' tables.   After this:- some application batch processing,backups, DBCC checks and reindexes were run.  A fairly standard batch I'd suggest. Cache contents then looked like this :- Hmmmm – most of cache is now being used by a table I’ve described as ‘unimportant’. Why ? Well, that table was the last to be reindexed…. purely due to luck, as  the reindexing stored procedure performing a loop in alphabetical order through all application tables...  When the application starts to be used again – all this ‘unimportant’ data has to be replaced in cache by data that is heavily used… So, we changed the overnight reindex scripts –  the most heavily accessed tables are now the last to be reindexed. Obvious really, but we did see a significant reduction in early-morning I/O after changing the order of our reindexing.  

    Read the article

  • Why Are Minimized Programs Often Slow to Open Again?

    - by Jason Fitzpatrick
    It seems particularly counterintuitive: you minimize an application because you plan on returning to it later and wish to skip shutting the application down and restarting it later, but sometimes maximizing it takes even longer than launching it fresh. What gives?

    Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Bart wants to know why he’s not saving any time with application minimization:

    I’m working in Photoshop CS6 and multiple browsers a lot. I’m not using them all at once, so sometimes some applications are minimized to taskbar for hours or days. The problem is, when I try to maximize them from the taskbar – it sometimes takes longer than starting them! Especially Photoshop feels really weird for many seconds after finally showing up, it’s slow, unresponsive and even sometimes totally freezes for minute or two. It’s not a hardware problem as it’s been like that since always on all on my PCs. Would I also notice it after upgrading my HDD to SDD and adding RAM (my main PC holds 4 GB currently)? Could guys with powerful pcs / macs tell me – does it also happen to you? I guess OSes somehow “focus” on active software and move all the resources away from the ones that run, but are not used. Is it possible to somehow set RAM / CPU / HDD priorities or something, for let’s say, Photoshop, so it won’t slow down after long period of inactivity?

    So what is the deal? Why does he find himself waiting to maximize a minimized app?

    The Answer

    SuperUser contributor Allquixotic explains why:

    Summary

    The immediate problem is that the programs that you have minimized are being paged out to the “page file” on your hard disk. This symptom can be improved by installing a Solid State Disk (SSD), adding more RAM to your system, reducing the number of programs you have open, or upgrading to a newer system architecture (for instance, Ivy Bridge or Haswell). Out of these options, adding more RAM is generally the most effective solution.

    Explanation

    The default behavior of Windows is to give active applications priority over inactive applications for having a spot in RAM. When there’s significant memory pressure (meaning the system doesn’t have a lot of free RAM if it were to let every program have all the RAM it wants), it starts putting minimized programs into the page file, which means it writes out their contents from RAM to disk, and then makes that area of RAM free.

    That free RAM helps programs you’re actively using — say, your web browser — run faster, because if they need to claim a new segment of RAM (like when you open a new tab), they can do so. This “free” RAM is also used as page cache, which means that when active programs attempt to read data on your hard disk, that data might be cached in RAM, which prevents your hard disk from being accessed to get that data.

    By using the majority of your RAM for page cache, and swapping out unused programs to disk, Windows is trying to improve responsiveness of the program(s) you are actively using, by making RAM available to them, and caching the files they access in RAM instead of the hard disk. The downside of this behavior is that minimized programs can take a while to have their contents copied from the page file, on disk, back into RAM. The time increases the larger the program’s footprint in memory. This is why you experience that delay when maximizing Photoshop.
    RAM is many times faster than a hard disk (depending on the specific hardware, it can be up to several orders of magnitude). An SSD is considerably faster than a hard disk, but it is still slower than RAM by orders of magnitude. Having your page file on an SSD will help, but it will also wear out the SSD more quickly than usual if your page file is heavily utilized due to RAM pressure.

    Remedies

    Here is an explanation of the available remedies, and their general effectiveness:

    - Installing more RAM: This is the recommended path. If your system does not support more RAM than you already have installed, you will need to upgrade more of your system: possibly your motherboard, CPU, chassis, power supply, etc. depending on how old it is. If it’s a laptop, chances are you’ll have to buy an entire new laptop that supports more installed RAM. When you install more RAM, you reduce memory pressure, which reduces use of the page file, which is a good thing all around. You also make available more RAM for page cache, which will make all programs that access the hard disk run faster. As of Q4 2013, my personal recommendation is that you have at least 8 GB of RAM for a desktop or laptop whose purpose is anything more complex than web browsing and email. That means photo editing, video editing/viewing, playing computer games, audio editing or recording, programming / development, etc. all should have at least 8 GB of RAM, if not more.
    - Run fewer programs at a time: This will only work if the programs you are running do not use a lot of memory on their own. Unfortunately, Adobe Creative Suite products such as Photoshop CS6 are known for using an enormous amount of memory. This also limits your multitasking ability. It’s a temporary, free remedy, but it can be an inconvenience to close down your web browser or Word every time you start Photoshop, for instance. This also wouldn’t stop Photoshop from being swapped when minimizing it, so it really isn’t a very effective solution. It only helps in some specific situations.
    - Install an SSD: If your page file is on an SSD, the SSD’s improved speed compared to a hard disk will result in generally improved performance when the page file has to be read from or written to. Be aware that SSDs are not designed to withstand a very frequent and constant random stream of writes; they can only be written over a limited number of times before they start to break down. Heavy use of a page file is not a particularly good workload for an SSD. You should install an SSD in combination with a large amount of RAM if you want maximum performance while preserving the longevity of the SSD.
    - Use a newer system architecture: Depending on the age of your system, you may be using an out of date system architecture. The “system architecture” is generally defined as the “generation” (think generations like children, parents, grandparents, etc.) of the motherboard and CPU. Newer generations generally support faster I/O (input/output), better memory bandwidth, lower latency, and less contention over shared resources, instead providing dedicated links between components. For example, starting with the “Nehalem” generation (around 2009), the Front-Side Bus (FSB) was eliminated, which removed a common bottleneck, because almost all system components had to share the same FSB for transmitting data. This was replaced with a “point to point” architecture, meaning that each component gets its own dedicated “lane” to the CPU, which continues to be improved every few years with new generations.
    You will generally see a more significant improvement in overall system performance depending on the “gap” between your computer’s architecture and the latest one available. For example, a Pentium 4 architecture from 2004 is going to see a much more significant improvement upgrading to “Haswell” (the latest as of Q4 2013) than a “Sandy Bridge” architecture from ~2010.

    Links

    Related questions:
    - How to reduce disk thrashing (paging)?
    - Windows Swap (Page File): Enable or Disable?

    Also, just in case you’re considering it, you really shouldn’t disable the page file, as this will only make matters worse; see here. And, in case you needed extra convincing to leave the Windows Page File alone, see here and here.

    Read the article

  • Applications Menu disappeared

    - by Sophie Sperner
    I'm using Ubuntu 12.04, classic desktop without effects. At one point the indicator-applet-complete (the right part of the top panel) disappeared. I found how to fix it: Alt-Win-RightMouseClick on the panel, then "Add to the Panel", and choose "Indicator Applet Complete" to add it. Now the left part of the top panel (the Applications Menu) has disappeared! If I do Alt-Win-RightMouseClick on the panel, I can only add individual menu sections like Internet, Office, Settings etc. But how do I get back the full menu as it ought to be?

    Read the article

  • Introducing the Documentation Workflows

    - by Owen Allen
    The how-to documents provide end-to-end examples of specific features, such as creating a new zone or discovering a new system. We are enhancing the individual how-tos with documents called Workflows. These workflows are each built around procedural flowcharts that show these larger and more complex tasks. A workflow indicates which how-tos or other workflows you should follow to complete a more complex process, and gives you a flow for planning the execution of a process. Over the coming days I'll highlight each of these workflows, and talk about the tasks that each one guides you through.

    Read the article

  • A Big Week for Oracle Procurement- In the Cloud and On the Web

    - by David Hope-Ross
    It has been quite a week for Oracle Procurement. On June 6th, CEO Larry Ellison announced the availability of ERP Cloud Services, inclusive of Procurement and Inventory. For a replay of the announcement click here. For more information on Oracle Cloud ERP Services click here. Stay tuned as we’ll be providing updates and further details in coming weeks. We hope you noticed, but we also expanded Oracle Fusion Procurement’s presence on oracle.com. We’ve upgraded the Oracle Fusion Procurement overview page and provided some drill-down product information, including screenshots and datasheets. For more information check out the individual product pages for Purchasing, Self Service Procurement, Sourcing, Procurement Contracts, and Supplier Portal.

    Read the article

  • How does one pronounce "cron" as in "cron job"?

    - by Rooke
    Before someone ban-hammers this question as they do with all other pronunciation questions, let me explain its relevance. Verbal communication among co-workers and partners is important; today I was on a conference call with people discussing what I thought was something to do with "Chrome", as in Google Chrome. I pronounce the "cron" in "cron job" with a short O, much like "tron", "gone," or "pawn", but this individual pronounced it with a long O, as in "hone", "bone", or "stone" (notice the e at the end of all those!). Is there a standard pronunciation? Or is this a matter of opinion? For example, there's nothing ambiguous about the pronunciation of "Firefox", but debate is raging over "potato" and "tomato".

    Read the article

  • Music player that remembers last song and playlist

    - by user654628
    I am looking for something similar to Winamp. I have seen other threads, but I have tried some of the solutions and they did not work. I tried Banshee, which comes with Ubuntu 11.10, but it does not open the last song. I tried Rhythmbox with the "remember last song" plugin, however it does not remember the playlist I got the song from, so it would start shuffling all my music. I tried Amarok, and it does the same thing as Banshee, except it cannot even play my playlist and starts playing all my music. I tried Audacious, but importing my .m3u playlist doesn't allow me to select the individual playlists and play them. I just moved from Windows, where I used Winamp, and would like a music player that can open .m3u playlists, so that when I open the application later it opens the last song and playlist, and I can press the play hotkey to start the music on startup, similar to Winamp. I do not care about any additional functionality or user interface.

    Read the article

  • The Business Case for a Platform Approach

    - by Naresh Persaud
    Most customers have assembled a collection of Identity Management products over time, as they have reacted to industry regulations, compliance mandates and security threats, typically selecting best-of-breed products. The resulting infrastructure is a patchwork of systems that has served the short-term IDM goals, but is overly complex, hard to manage and cannot scale to meet the needs of the future social/mobile enterprise. The solution is to rethink Identity Management as a Platform, rather than individual products. Aberdeen Research has shown that taking a vendor-integrated platform approach to Identity Management can reduce cost, make your IT organization more responsive to the needs of a changing business environment, and reduce audit deficiencies. View the slide show below to see how companies like Agilent, Cisco, ING Bank and Toyota have all built the business case and embraced the Oracle Identity Management Platform approach.

    Read the article

  • What do you do if you reach a design dead-end in evolutionary methods like Agile or XP?

    - by Dipan Mehta
    As I was reading Martin Fowler's famous blog post Is Design Dead?, one of the striking impressions I got is that, given that in Agile methodology and Extreme Programming both the design and the programming are evolutionary, there are always points where things need to get refactored. It may be that when a programmer's level is good, and they understand design implications and don't make critical mistakes, the code continues to evolve. But what is the ground reality in a normal context? On a normal day, once significant development has gone into the product and a critical change occurs in the requirements, isn't it a constraint that, however much we wish, fundamental design aspects cannot be modified (without throwing away a major part of the code)? Is it not quite likely that one reaches a dead end on any further possible improvement of the design and requirements? I am not advocating any non-Agile practice here, but I want to know from people who practice agile or iterative or evolutionary development methods about their real experiences. Have you ever reached such dead ends? How have you managed to avoid them or escape them? Or are there measures to ensure that the design remains clean and flexible as it evolves?

    Read the article

  • How could I change the colour of the menu font in Lubuntu?

    - by cipricus
    I use Lubuntu 12.04 and I have become obsessed with the looks of it! There is a type of theme that I especially want to use (flat, light), all related to the Elegant Brit theme, which is what my desktop currently looks like. As I want to replace the dominating orange with blue, I would prefer even more the theme Elegant Brit re-Revisited. The problem is that both of them have a problem with my system tray: a white background appears. A theme that is almost identical to the orange one, but has no problem with the system tray, is called Elegant Blackle. I have tried to use this one and replaced the gtk2 folder in its main folder with the gtk2 folder from Elegant Brit re-Revisited. The result is an improvement for me, as I have eliminated a part of the orange stuff with a decent blue :). The orange now appears only with the gtk3 apps, which is rather amusing. Migrating also the gtk3 folder from the blue to the orange theme would bring the systray problem too: it is something related to that gtk3 folder. But this is another matter. Now I have something very close to what I want, but I would especially like to have a feature that was present in the Elegant Blackle theme and is now gone since its gtk2 folder was replaced by that of the blue theme: the black font of the menus, instead of the blueish one. Could anyone instruct me what needs to be changed in the gtk2 folder of the Elegant Brit re-Revisited theme so as to make the menus look black (like they do in Elegant Blackle with its original gtk2 folder) and not blue (on my display this blue is more whitish)? An alternative and ideal solution would be knowing how to edit the Elegant Brit re-Revisited theme so as to remove the systray problem or, maybe easier, to edit the orange colour of the Elegant Blackle theme so that it is completely replaced by the blue of the other one.

    Read the article
