Search Results

Search found 9970 results on 399 pages for 'regular john'.


  • Paper-free Customer Engagement

    - by Michael Snow
    Appropriate repost from our friends at the AIIM blog, Digital Landfill -- John Mancini, supporting our mission of enabling customer engagement through better technology choices. ---------- My wife didn't even give me a card for #wpfd - and they say husbands are bad at remembering anniversaries. Well, today is the third World Paper Free Day. I just got off the Tweet Jam, and there was a host of ideas for getting rid of -- or at least reducing -- paper. When we first started talking about "paper-free", most of the reasons raised to pursue this direction were "green" reasons. I'm glad to see that the thinking has moved on to questions about how getting rid of paper and digitizing processes helps improve customer engagement. And the bottom line. And process responsiveness. Not that the "green" reasons have gone away, but it's nice to see a maturation in the BUSINESS reasons to get rid of paper. Our World Paper Free Handbook (do not, do not, do not print it!) looks at how less paper in the workplace delivers significant benefits. Key findings show eliminating paper from processes can improve the responsiveness of customer service by 300 percent. Removing paper from business processes and moving content to PCs and tablets has the added advantage of helping companies adopt mobile-enabled processes and eliminate elapsed time, lost forms, poor data and re-keying. To effectively mobile-enable processes and reduce reliance on paper, data should be captured as close to the point of origination as possible, which makes information easily available to whoever needs it, wherever they are, in the shortest time possible. This handbook summarizes the value of automating manual, paper-based processes. It then goes a step beyond to provide actionable steps that will set you on the path to productivity, profitability, and, yes, less paper. Get your copy today and send the link around to your peers and colleagues. Here's the link; please share it! http://www.aiim.org/Research-and-Publications/Research/AIIM-White-Papers/WPFD-Revolution-Handbook And don't miss out on the real-world discussions about increasing engagement with WebCenter in new webinars being offered over the next couple of weeks: October 30, 2012: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter November 1, 2012: WebCenter Content for Applications: Streamline Processes with Oracle WebCenter Content Management for Human Resources Applications Available On-Demand: Using Oracle WebCenter to Content-Enable Your Business Applications

    Read the article

  • Removing occurrences of characters in a string

    - by DmainEvent
    I am reading the book Programming Interviews Exposed, published by John Wiley and Sons, and in chapter 6 they discuss removing all instances of certain characters from a src string using a removal string... so removeChars(string str, string remove). In their writeup they say the steps to accomplish this are to have a boolean lookup array with all values initially set to false, then loop through each character in remove, setting the corresponding value in the lookup array to true (note: this could also be a hash if the possible character set were huge, like Unicode-16 or something like that, or if str and remove are both relatively small... < 100 characters I suppose). You then iterate through str with a source and destination index, copying each character only if its corresponding value in the lookup array is false... Which makes sense... I don't understand the code that they use, however... They have for(src = 0; src < len; ++src){ flags[r[src]] == true; } which sets the flag value indexed by the remove string at position src to true... so if you start out with PLEASE HELP as your str and LEA as your remove, you will be setting in your flag table at 0,1,2... t|t|t but after that you will get an out-of-bounds exception because r doesn't have anything greater than 2 in it... even using their example you get an out-of-bounds exception... Is their code example unworkable? Entire function: string removeChars( string str, string remove ){ char[] s = str.toCharArray(); char[] r = remove.toCharArray(); bool[] flags = new bool[128]; // assumes ASCII! int len = s.Length; int src, dst; // Set flags for characters to be removed for( src = 0; src < len; ++src ){ flags[r[src]] = true; } src = 0; dst = 0; // Now loop through all the characters, // copying only if they aren't flagged while( src < len ){ if( !flags[ (int)s[src] ] ){ s[dst++] = s[src]; } ++src; } return new string( s, 0, dst ); } As you can see, r comes from the remove string. So in my example the remove string only has a size of 3 while my str string has a size of 11. len is equal to the length of the str string, so it would be 11. How can I loop through the r string since it is only size 3? I haven't compiled the code to step through it, but just looking at it I know it won't work. I am thinking they wanted to loop through the r string... in other words they got the length of the wrong string here.
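    For reference, a minimal corrected sketch of that function in C# (not the book's code; it assumes the same 128-entry ASCII flag array) would loop over the remove string's own length when setting the flags:

        // Hypothetical corrected version: the flag-setting loop iterates over remove, not str.
        static string RemoveChars(string str, string remove)
        {
            char[] s = str.ToCharArray();
            bool[] flags = new bool[128];          // assumes ASCII input, as in the book
            foreach (char c in remove)
                flags[c] = true;                   // mark each character that should be removed
            int dst = 0;
            for (int src = 0; src < s.Length; ++src)
                if (!flags[s[src]])
                    s[dst++] = s[src];             // copy only unflagged characters
            return new string(s, 0, dst);
        }

    With str = "PLEASE HELP" and remove = "LEA" this returns "PS HP", and no index ever exceeds the remove string's length, which supports the reading that the book's loop simply uses the length of the wrong string.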

    Read the article

  • Does the Ubuntu One sync work?

    - by bisi
    I have been on this for several hours now, trying to get a simple second folder to sync with my (paid) account. I cannot tell you how many times I removed all devices, removed stored passwords, killed all processes of u1, logged out and back in online... and still, the tick in the file browser (Synchronize this folder) is loading and loading and loading. Also, I have logged out and rebooted countless times. And this is after somehow managing to get the u1 preferences to finally "connect" again. I have also checked the status of your services, and none are close to what I am experiencing. And I have checked the suggested related questions above! So please, just confirm whether it is a problem on my side, or a problem on your side. EDIT: In the meantime, here is what has changed, on top of what is mentioned just above. • My files went from 0MB to 71.9MB, and the total is still rising. • My first folder of 400.2MB is being filled with the data as I write this. The second folder has the folder sub-structure in place. • Both folders now show in the File Browser that they will be synchronized. I believe that right now it is all back to normal and working fine, and I guess that's what a good night's sleep can do ;). We're now only back to the point where synchronizing is slow, but that will pick up with the release of Natty (https://wiki.ubuntu.com/UbuntuOne/FAQ/WhyIsItTakingSoLongForMyFilesToSync). But to get to the questions: My About dialog says I use 11.04, Natty Narwhal, but I am quite sure the last distribution I installed was 10.10. Folder A is 400.2MB, and Folder B is 29.5MB. I am on a DSL line, behind a regular fritz.box setup. No proxy servers are in use, and I did not install any particular firewall features. No physical firewall, just the router (on which I have a TV signal as well), and 2 switches to get to this floor. Status: inactive. The ubuntuone-indicator opens the same window as when I click on my name in the top-right corner and select Ubuntu One, or choose Ubuntu One in the Control Center. It wasn't supposed to go further than this, was it?

    Read the article

  • The Minimalist Approach to Content Governance - Request Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. For each project, regardless of size, it is critical to understand the required ownership, business purpose, prerequisite education / resources needed to execute, and success criteria around it. Without doing this, there is no way to get a handle on the content life-cycle, resulting in a mass of orphaned material. This lowers the quality of end user experiences. The good news is that by using a simple process in this request phase, we will not have to revisit this phase unless something drastic changes in the project. For each of the elements mentioned above in this stage, the why, how (technically focused) and impact are outlined with the intent of providing the most value to a small team. 1. Ownership Why - Without ownership information it will not be possible to track and manage any of the content and take advantage of many features of enterprise content management technology. To hedge against this, we need to ensure that both an individual and their group or department within the organization are associated with the content. How - Apply metadata that indicates the owner and department or group that has responsibility for the content. Impact - It is possible to keep the content system optimized by running native reports against the metadata and acting on them based on what has been outlined for success criteria. This will maximize end user experience, as content will be faster to locate and more relevant to the user by virtue of working through a smaller collection. 2. Business Purpose Why - This simple step will weed out requests that have tepid justification, as users will most likely not spend the effort to request resources if they do not have a real need. How - Use a simple online form to collect the request and workflow it to management, native to the content system. Impact - Minimizes the amount of user-generated content that is of low value to the organization. 3. Prerequisite Education / Resources Needed Why - If a project cannot be properly staffed, the probability of its success is going to be low. By outlining the resources needed - in both skill set and duration - it will cause the requesting party to think critically about the commitment needed to complete their project and what gap must be closed with regard to education of those resources. How - In the simple request form outlined above, resources and a commitment to fulfilling any needed education should be included, with a brief acceptance clause that outlines the requesting party's commitment. Impact - This stage acts as a formal commitment to ensuring that resources are able to execute on the vision for the project. 4. Success Criteria Why - Similar to the business purpose, this is a key element in helping to determine whether the project and its respective content should continue to exist if it does not meet its intended goal. How - Set a review point for the project content that will check the progress against the originally outlined success criteria and then determine the fate of the content. This can even include logic that will tell the content system to remove items that have not been opened by any users in X amount of time. Impact - This ensures that projects and their contents do not live past their useful lifespans. Just as with orphaned content, non-relevant information will slow users' access to relevant materials for their jobs.
Request Phase Summary With a simple form that outlines the ownership of a project and its content, business purpose, education and resources, along with success criteria, we can ensure that an enterprise content management system will stay clean and relevant to end users - allowing it to deliver the most value possible. The key here is to make it straightforward to make the request and let the content management technology manage as much as possible through metadata, retention policies and workflow. Doing these basic steps will allow project content to get off to a great start in the enterprise! Stay tuned for the next installment - the "Create Phase" - covering security access and workflow involved in content creation, enabling a practical layer of governance over our enterprise content repository.

    Read the article

  • ClearTrace Performance on 170GB of Trace Files

    - by Bill Graziano
    I’ve always worked to make ClearTrace perform well.  That’s probably because I spend so much time watching it work.  I’m often going through two or three gigabytes of trace files but I rarely get the chance to run it on a really large set of files. One of my clients wanted to run a full trace for a week and then analyze the results.  At the end of that week we had 847 200MB trace files for a total of nearly 170GB. I regularly use 200MB trace files when I monitor production systems.  I usually get around 300,000 statements in a file that size if it’s mostly stored procedures.  So those 847 trace files contained roughly 250 million statements.  (That’s 730 bytes per statement if you’re keeping track.  Newer trace files have some compression in them but I’m not exactly sure what they’re doing.)  On a system running 1,000 statements per second I get a new file every five minutes or so. It took 27 hours to process these files on an older development box.  That works out to 1.77MB/second.  That means ClearTrace processed about 2,654 statements per second. You can query the data while you’re loading it but I’ve found it works better to use a second instance of ClearTrace to do this.  I’m not sure why yet but I think there’s still some dependency between the two processes. ClearTrace is almost always CPU bound.  It’s really just a huge, ugly collection of regular expressions.  It only writes a summary to its database at the end of each trace file so that usually isn’t a bottleneck.  At the end of this process, the executable was using roughly 435MB of RAM.  Certainly more than when it started but I think that’s acceptable. The database where all this is stored started out at 100MB.  After processing 170GB of trace files the database had grown to 203MB.  The space savings are due to the “datawarehouse-ish” design and only storing a summary of each trace file. You can download ClearTrace for SQL Server 2008 or test out the beta version for SQL Server 2012.  Happy Tuning!

    Read the article

  • Is it OK to repeat code for unit tests?

    - by Pete
    I wrote some sorting algorithms for a class assignment and I also wrote a few tests to make sure the algorithms were implemented correctly. My tests are only like 10 lines long and there are 3 of them but only 1 line changes between the 3 so there is a lot of repeated code. Is it better to refactor this code into another method that is then called from each test? Wouldn't I then need to write another test to test the refactoring? Some of the variables can even be moved up to the class level. Should testing classes and methods follow the same rules as regular classes/methods? Here's an example: [TestMethod] public void MergeSortAssertArrayIsSorted() { int[] a = new int[1000]; Random rand = new Random(DateTime.Now.Millisecond); for(int i = 0; i < a.Length; i++) { a[i] = rand.Next(Int16.MaxValue); } int[] b = new int[1000]; a.CopyTo(b, 0); List<int> temp = b.ToList(); temp.Sort(); b = temp.ToArray(); MergeSort merge = new MergeSort(); merge.mergeSort(a, 0, a.Length - 1); CollectionAssert.AreEqual(a, b); } [TestMethod] public void InsertionSortAssertArrayIsSorted() { int[] a = new int[1000]; Random rand = new Random(DateTime.Now.Millisecond); for (int i = 0; i < a.Length; i++) { a[i] = rand.Next(Int16.MaxValue); } int[] b = new int[1000]; a.CopyTo(b, 0); List<int> temp = b.ToList(); temp.Sort(); b = temp.ToArray(); InsertionSort merge = new InsertionSort(); merge.insertionSort(a); CollectionAssert.AreEqual(a, b); }
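    One way to cut the duplication (sketched below; it assumes MSTest as in the question and lives in the same test class, so the existing usings apply) is to pull the shared arrange/assert steps into a private helper that takes the sort call as a delegate. The helper does not need its own test, because every test that calls it exercises it against a trusted reference sort:

        // Hypothetical refactoring: shared setup and verification for all sorting tests.
        private static void AssertSortsCorrectly(Action<int[]> sort)
        {
            int[] actual = new int[1000];
            Random rand = new Random(DateTime.Now.Millisecond);
            for (int i = 0; i < actual.Length; i++)
                actual[i] = rand.Next(Int16.MaxValue);

            int[] expected = (int[])actual.Clone();
            Array.Sort(expected);                 // trusted reference implementation

            sort(actual);                         // algorithm under test
            CollectionAssert.AreEqual(expected, actual);
        }

        [TestMethod]
        public void MergeSortAssertArrayIsSorted()
        {
            AssertSortsCorrectly(a => new MergeSort().mergeSort(a, 0, a.Length - 1));
        }

        [TestMethod]
        public void InsertionSortAssertArrayIsSorted()
        {
            AssertSortsCorrectly(a => new InsertionSort().insertionSort(a));
        }

    The only thing that varies between tests is the one-line lambda, so adding a third algorithm later is a one-line change.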

    Read the article

  • All hail the Excel Queen

    - by Tim Dexter
    An excellent question this past week from dear ol' Blighty; actually from Brian at Nextgen Clearing Ltd in the big smoke (London). Brian was developing an Excel template and wanted to be able to reference the data fields multiple times inside the Excel template. Damn good question, and I of course had some wacky solutions, from macros and cell referencing in Excel to pre-processing the data with an XSL stylesheet to copy the data multiple times so it could be referenced multiple times. All completely outlandish; enter our Queen of Excel, Shirley from the development team. Shirley is singlehandedly responsible for the Excel templates; I put her through six months of hell a few years back, with a host of Excel template requirements. She was more than up to the challenge and has developed some great features. One of those is the ability to use the hidden XDO_METADATA sheet to map the data to custom named fields so they can be used multiple times in the template. So simple and very neat! Excel template and regular Excel users will know that you can only use the naming function once, i.e. the names have to be unique across the workbook, so you can not reuse a cell/group name. To get around this you can just come up with as many cell names as you want and map them in the XDO_METADATA sheet to the data columns/fields in your XML data set. For example: XDO_?DEPTNO_SUMMARY?  <?DEPTNO?> XDO_?DNAME_SUMMARY?  <?DNAME?> XDO_GROUP_?G_D_DETAIL? <xsl:for-each-group select=".//G_D" group-by="./DEPTNO"> XDO_?DEPTNO_DETAIL? <?DEPTNO?> As you can see, DEPTNO has been referenced twice and mapped to different named values in the left hand column. These values can then be used to name individual cells in the Excel template. You'll also notice a mix of Publisher <? ...?> and native XSL commands. So the world is your oyster on the mapping and the complexity you might need for calculations or string manipulation. Shirley has kindly built out a sample Excel template, data and result here so you can see how it all hangs together. The XDO_METADATA sheet is hidden; just right click on the sheet names and use the Unhide command to show it.

    Read the article

  • Fast Data Executive Round Table FY14 event kit

    - by JuergenKress
    We are very interested in running joint marketing events with you as our partners! At our SOA Community Workspace (SOA Community membership required) you can find a new Fast Data Executive Round Table FY14 event kit. This event is designed at senior IT and executive level for the purposes of education, awareness, and thought leadership around the subject of big data, and a specific flavor of big data - Fast Data - that has begun to spark the imagination of many Oracle customers. Fast Data is not new. It's a term that was initially coined by Ovum's Tony Baer as a way to represent the collection of 'high velocity' solutions with respect to big data. For Oracle, the Fast Data campaign in FY13 began as a way to tie a broader set of solutions together (SOA/Business Process Management, Data Integration and Business Analytics) under a set of use cases focused on real-time, high velocity data. It has helped to give Oracle a leap-frog advantage over many of the niche integration vendors (i.e. Informatica, Pega, Tibco, Software AG, Terracotta) who haven't been able to address these types of end-to-end use cases, which rely on the combination of filtering, in-memory data processing, correlation, real-time data movement and transformation, end-to-end analytics, and business process management. Only Oracle can address all the dimensions of fast data, and only Oracle can provide a set of engineered solutions to address this space. This event is designed to continue that thought leadership momentum and raise the awareness about what Oracle Fast Data solutions are designed to solve. It's designed to highlight real customer solutions and articulate the business benefits that fast data can address. This is not an event that gets into the esoteric technical standards of Hadoop, NoSQL, and in-memory data grids. This is an event that instead gets into the heart of business problems that big data has left unaddressed and charts the path for next steps in fast data. Get the Fast Data Executive Round Table FY14 event kit here. Support marketing campaigns We can support such events by: Oracle speakers - contact your partner manager Marketing budget - contact your A&C marketing manager Event location - free use of Oracle Customer Visitor Centers conference rooms Promote your event at events.oracle.com: http://tinyurl.com/eventspecialized E-Blast: invite customers to your event - contact your A&C marketing manager For additional marketing kits, e.g. for Business Process Management, please visit our SOA Community Workspace. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags:

    Read the article

  • CFOs: Do You Have a Playbook for Growth?

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein, Oracle Midsize Programs In most global markets, CFOs are optimistic about their company's growth opportunities. Deloitte's CFO Signals Report, "Time to Accelerate", found that: In the U.K., business optimism is at its highest level in three-and-a-half years. Optimism in North America rose from a strong +42% last quarter (Q2 to Q3 2013) to an even stronger +54%. In the inaugural Southeast Asia survey, 44% of CFOs reported a positive outlook despite worries over the Chinese economy and political uncertainty. Sustainable and profitable business growth doesn't usually happen by accident. Companies need a playbook for growth that's owned by the CFO. And today, that playbook must leverage the six enabling technologies--Social, Big Data, Mobile, Cloud, Analytics, and The Internet of Things (or, as Oracle president Mark Hurd explains, "The Internet of the People"). On Monday June 9 at 2:00 pm Eastern, CFO.com is hosting a webcast, "The CFO Playbook on Growth: How CFOs Can Boost Efficiency and Performance with Automation". "Investing in technology begins with a business metric driven business case with clear tangible business results expected," says John Lieblang, Affiliate Partner with Waterstone Management Group. "The progressive CFO has learned how to forge a partnership with the CIO to align everyone in the 'result value chain' to be accountable for the business results, not just for functional technology." Click HERE to register. Looking for more news and information about Oracle Solutions for Midsize Companies? Read the latest Oracle for Midsize Companies Newsletter Sign-up to receive the latest communications from Oracle's industry leaders and experts Jim Lein I evangelize Oracle's enterprise solutions for growing midsize companies. I recently celebrated 15 years with Oracle, having joined JD Edwards in 1999. I'm based in Evergreen, Colorado and love relating stories about creativity and innovation, whether they be about software, live music, or the mountains. The views expressed here are my own, and not necessarily those of Oracle.

    Read the article

  • Storing a looong lookup table

    - by inquisitive
    Background The product I am working on has a very long lookup table. The table contains static data and cannot be auto-generated. There are about 500 rows and 10 columns. Columns hold mostly integers and strings. To complicate matters, there are actually two such tables. Every row in table-1 maps to zero-or-more rows in table-2. We use an SQLite database with two tables. The product installer places the SQLite file in the installation directory. The application is written in dot-net and we use ADO to load the data once on startup. Now, the lookup table grows. In each monthly release we add about 10 new entries and existing entries are adjusted. Every release we fine-tune existing entries. The problem A team of (10) developers works on the lookup table. Code goes in the SVN, but the little devil SQLite does not. This prevents multiple developers from working on it. We do take regular backups of the file, but proper versioning is not possible. We never know who made the breaking change. The worse thing is we don't know if there is any change at all. Diffing databases is tedious if not impossible. The tables are expected to grow quite large in the years to come and we would need developers to work on it in parallel. The data is business critical. We need to be able to audit changes made to it. Question What would be a solution for the problems outlined above? One idea was to transform the whole thing to XML and treat it like just another source file. That way SVN can do the versioning and we can work in parallel. But the data shows relational behavior. With XML we lose the unique and foreign-key constraints. Also we can't query it with SQL-like ease. Any help here will be appreciated.
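    To make the XML idea concrete, a rough sketch of an export step is below (C#; it assumes the System.Data.SQLite ADO.NET provider and hypothetical database/table names, so adjust both to the real project). Dumping each table to a deterministic, schema-carrying XML file would give SVN something it can version and diff, while the SQLite file itself stays a build artifact:

        // Hypothetical exporter: writes one lookup table to XML in a stable row order
        // so that SVN diffs show exactly which entries changed.
        using System.Data;
        using System.Data.SQLite;

        class LookupExporter
        {
            static void ExportTable(string dbPath, string tableName, string xmlPath)
            {
                using (var conn = new SQLiteConnection("Data Source=" + dbPath))
                using (var adapter = new SQLiteDataAdapter(
                           "SELECT * FROM " + tableName + " ORDER BY 1", conn))
                {
                    var table = new DataTable(tableName);
                    adapter.Fill(table);                               // Fill opens/closes the connection
                    table.WriteXml(xmlPath, XmlWriteMode.WriteSchema); // schema keeps column types
                }
            }

            static void Main()
            {
                ExportTable("lookup.db", "Lookup1", "Lookup1.xml");    // names are illustrative
                ExportTable("lookup.db", "Lookup2", "Lookup2.xml");
            }
        }

    The reverse step (rebuilding the SQLite file from the XML at build time) would address the constraint worry, because the database, not the XML, remains what the application actually loads.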

    Read the article

  • Is there a usage count for packages or programs?

    - by math
    Motivation: I want to remove applications I do not use to speed up my package processing tasks like dist-upgrades and regular updates, but also to save disk space, among other reasons. I know this is a complex topic, so first I will ask my question and second I will give some answers I have already found out. Question: How do I find out which packages I have not used at all? For example, I always use VLC, so I could remove the totem package. (Which I could have used some day, yes.) Of course package dependencies could force me to have programs installed which I will never use. Notes: Find the packages which consume much space via synaptic: Select "Status" in lower left, select "Installed" in upper left, sort column on "size" in upper right. Then you can decide which big packages you really need. Use aptitude autoremove. Use ubuntu-tweak's Janitor for removing old kernel packages, old configs, apt-cache entries, etc. Manually search for applications for a given task that you usually solve with your standard app, e.g. movie player, music player, office program, browser etc. (BTW: this is what I want to be helped with in my question.) When removing packages I always favour "apt-get purge" over "aptitude remove --purge" as aptitude often will also remove essential packages due to package dependencies. E.g. when removing "evolution" (as I use thunderbird) aptitude wants to remove "ubuntu-desktop" and 756 other packages as well, while apt-get just removes evolution and its helper packages like evolution-common. The Ubuntu lens gives me the most recently used applications, which are candidates for keeping :) Employ deborphan, as I read in this related answer: How do I clean up my harddrive? I should certainly keep essential packages: Keep only essential packages. This question is pretty much a duplicate of How to see what installed packages I have never used for cleaning purposes, but covering only a few aspects. However one answer suggests using a program called unusedpkg, but the link seems down. There is also a program called Kleen http://code.google.com/p/kleen/ but it won't compile in 11.10. I hacked it to compile, but the results are unusable, as for example the g++ package was marked as not used for 203, but actually I used it seconds ago for compiling Kleen itself ;) So don't use this tool. On http://wiki.debian.org/DebianPackageInformation I read that the package popularity-contest will produce log files with usage statistics. Unfortunately I didn't enable the popularity contest, so I can't find this log file.

    Read the article

  • PASS Summit 2011: Save Money Now

    - by Bill Graziano
    Register by March 31st and save $200.  On April 1st we increase the price.  On July 1st we increase it again.  We have regular price bumps all the way through to the Summit.  You can save yourself $200 if you register by Thursday. In two years of marketing for PASS and a year of finance I’ve learned a fair bit about our pricing, why we do this and how you react to it.  Let me help you save some money! Price bumps drive registrations.  We see big spikes in the two weeks prior to a price increase.  Having a deadline with a cost attached is a great motivator to get people to take action. Registering early helps you and it helps PASS.  You get the exact same Summit at a cheaper rate.  PASS gets smoother cash flow and a better idea of how many people to expect.  We also get people that are already registered that will tell their friends about the conference. This tiered pricing lets us serve those that are very price conscious.  They can register early and take advantage of these discounts.  I know there are people that pay for this conference out of their own pockets.  This is a great way for those people to reduce the cost of the conference.  (And remember for next year that our cheapest pricing starts right after the Summit and usually goes up around the first of the year.) We also get big price bumps after we announce the program and the pre-conference sessions.  If you wrote down the 50 or so best known speakers in the SQL Server community I’m guessing we’ll have nearly all of them at the conference.  We did last year.  I expect we will this year too.  We’re going to have good sessions.  Why wait?  Register today. If you want to attend a pre-conference session you can always add it to your registration later.  Pre-con prices don’t change.  It’s very easy to update your registration and add a pre-conference session later. I want as many people as possible to attend the Summit.  It’s been a great experience for me and I hope it will be for you.  And if you are going to go, do yourself a favor and save some money.  Register today!

    Read the article

  • Why can't the IT industry deliver large, faultless projects quickly as in other industries?

    - by MainMa
    After watching National Geographic's MegaStructures series, I was surprised how fast large projects are completed. Once the preliminary work (design, specifications, etc.) is done on paper, the realization itself of huge projects takes just a few years or sometimes a few months. For example, the Airbus A380 was "formally launched on Dec. 19, 2000", and by early March 2005 the aircraft was already being tested. The same goes for huge oil tankers, skyscrapers, etc. Comparing this to the delays in the software industry, I can't help wondering why most IT projects are so slow, or more precisely, why they cannot be as fast and faultless, at the same scale, given enough people? Projects such as the Airbus A380 present both: Major unforeseen risks: while this is not the first aircraft built, it still pushes the limits of the technology, and things which worked well for smaller airliners may not work for the larger one due to physical constraints; in the same way, new technologies are used which were not used before, because for example they were not available in 1969 when the Boeing 747 was built. Risks related to human resources and management in general: people quitting in the middle of the project, inability to reach a person because she's on vacation, ordinary human errors, etc. With those risks, people still achieve projects like those large airliners in a very short period of time, and despite the delivery delays, those projects are still hugely successful and of a high quality. When it comes to software development, the projects are hardly as large and complicated as an airliner (both technically and in terms of management), and have slightly fewer unforeseen risks from the real world. Still, most IT projects are slow and late, and adding more developers to the project is not a solution (going from a team of ten developers to two thousand will sometimes allow the project to be delivered faster, sometimes not, and sometimes will only harm the project and increase the risk of not finishing it at all). Those which are still delivered may often contain a lot of bugs, requiring consecutive service packs and regular updates (imagine "installing updates" on every Airbus A380 twice per week to patch the bugs in the original product and prevent the aircraft from crashing). How can such differences be explained? Is it due exclusively to the fact that the software development industry is too young to be able to manage thousands of people on a single project in order to deliver large scale, nearly faultless products very fast?

    Read the article

  • Java Cloud Service for developers

    - by JuergenKress
    The advent of cloud computing has reinvented application development for many companies. "That's the beauty of the cloud," says Cameron Purdy, vice president of development, Oracle. "It dramatically improves developer productivity because they can do what they do best without having to manage complex development, testing, staging, and production environments." The key is to find a platform that doesn't impose proprietary restrictions or force developers to learn new tools. For example, Oracle Java Cloud Service is an enterprise-grade platform as a service for building and deploying Java EE, Oracle WebLogic Server, and Oracle Application Development Framework (Oracle ADF) applications. "It's designed to be flexible and easy to use," says Purdy. "And it is also a standards-based solution - it's not proprietary and there is no cloud lock-in. Developers get instant access to an enterprise-grade environment for a simple, monthly subscription." Oracle Java Cloud Service instances are created with just a few clicks, so businesses can create a rich application development environment within minutes. Running on Oracle WebLogic Server and Oracle Exalogic, the underlying infrastructure also leverages Oracle Fusion Middleware's integration with common services. For example, instances come integrated and preconfigured with optimized Oracle Database and Oracle Identity Management configurations. Based on Oracle Enterprise Manager, the Oracle Java Cloud Service console lets customers easily manage and monitor their Oracle Java Cloud Service instances. The open nature of the Oracle Java Cloud Service lets developers integrate through Web services such as SOAP and REST APIs, as well as use their favorite developer tools, whether they are out-of-the-box tools such as Maven and Ant or the productivity features built into Oracle JDeveloper, Oracle Enterprise Pack for Eclipse, or NetBeans IDE. The service allows for the seamless movement of applications between on-premise Oracle WebLogic Server domains and instances of Oracle Java Cloud Service within Oracle Cloud. This approach allows flexibility to mix and match the use of on-premise environments with cloud instances for development, test, and production environments. Visit to learn more and watch videos about Oracle Java Cloud Service. WebLogic Partner Community For regular information become a member in the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: java,cloud,oracle cloud,java cloud,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • JavaOne 2013: (Key) Notes of a conference – State of the Java platform and all the roadmaps by Amis

    - by JuergenKress
    Last week's JavaOne conference provided insights into the roadmap of the Java platform as well as into the current state of things in the Java community: the close relationship between Oracle and IBM concerning Java, the (continuing) lack of such a relationship with Google, the support from Microsoft for Java applications on its Azure cloud, and the vibrant developer community - with over 200 different Java User Groups in many countries of the world. There were no major surprises or stunning announcements. Java EE 7 (released in June) was celebrated, and the progress of Java 8 SE was explained, as well as the progress on Java Embedded and ME. The availability of NetBeans 7.4 RC1 and the JDK 8 Early Adopters release, as well as the open sourcing of project Avatar, were probably the only real news stories. The convergence of JavaFX and Java SE is almost complete; the upcoming alignment of Java SE Embedded and Java ME is the next big consolidation step that will lead to a unified platform where developers can use the same skills, development tools and APIs on EE, SE, SE Embedded and ME development. This means that anything that runs on ME will run on SE (Embedded) and EE - not necessarily the reverse, because not all SE APIs are part of the compact profile or the ME environment. However, the trimming down of the SE libraries and the increased capabilities of devices mean that a pretty rich JVM runs on many devices - such as JavaFX 8 on the Raspberry Pi. The major theme of the conference was the Internet of Things: a world of things that are smart and connected, devices like sensors, cameras and equipment from cars, fridges and television sets to printers, security gates and kiosks that all run Java and are all capable of sending data over local network connections or directly over the internet. The number of devices that have these capabilities is rapidly growing. This means that the number of places where Java programs can help program the behavior of devices is growing too. It also means that the volume of data generated is expanding and that we have to find ways to harvest that data, possibly do local pre-processing (filter, aggregate) and channel the data to back end systems. Terms typically used are edge devices (small, simple, publishing data), gateways (receiving data from many devices, collecting and consolidating, pre-processing, sending onwards to the back end - typically using real time event processing) and enterprise services - receiving the data-turned-information from the gateways to further consolidate, distribute and act upon. A cheap device like the Raspberry Pi is a perfect way to get started as a Java developer with what embedded (device) programming means and how interaction with physical input and output takes place. Roadmaps The overall progress on Java is visualized in this overview: Read the full article here. WebLogic Partner Community For regular information become a member in the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Amis,OOW,Oracle OpenWorld,JavaOne,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • WCF Routing Service Filter Generator

    - by Michael Stephenson
    Recently I've been working with the WCF routing service, and in our case we were simply routing based on the SOAP Action. This is a pretty good approach for a standard redirection of the message when all messages matching a SOAP Action will go to the same endpoint. Using the SOAP Action also lets you be specific about which methods you expose via the router. One of the things which was a pain was the number of routing rules I needed to create, because we were routing for a lot of different methods. I could have explored the option of using a regular expression to match the message to its routing, but I wanted to be very specific about what's routed and not risk exposing methods I shouldn't via the router. I decided to put together a little spreadsheet so that I can generate part of the configuration I would need to put in the configuration file rather than have to type this by hand. To show how this works, download the spreadsheet from the following url: https://s3.amazonaws.com/CSCBlogSamples/WCF+Routing+Generator.xlsx In the spreadsheet you will see that the squares in green are the ones which you need to amend. In the below picture you can see that you specify a prefix and suffix for the filter name, the core namespace from the web service you're generating routing rules for, and the WCF endpoint name which you want to route to. In column A you will see the green cells where you add the list of method names which you want to include routing rules for. The spreadsheet will work out what the full SOAP Action would be and the name you will use for that filter in your WCF routing filters. In column D the spreadsheet will have generated the XML snippet which you can add to the routing filters section in your configuration file. In column E the spreadsheet will have created the XML snippet which you can add to the routing table to send messages matching each filter to the appropriate WCF client endpoint to forward the message to the required destination. Hopefully you can see that with this spreadsheet it would be very easy to produce accurate XML for the WCF routing configuration if you had a large number of routing rules. If you had additional methods in other services you could simply copy the worksheet and add multiple copies to the Excel workbook. One worksheet per service would work well.
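    If you prefer to generate the same two snippets in code rather than in Excel, a rough C# equivalent is sketched below. The namespace, contract name, endpoint name and method list are purely illustrative, and the assumed action pattern (namespace/contract/method) should be checked against your actual WSDL:

        // Illustrative generator: emits an Action-based <filter> element and the matching
        // routing table <add> entry for each method, mirroring columns D and E of the spreadsheet.
        using System;
        using System.Collections.Generic;

        class RoutingConfigGenerator
        {
            static void Main()
            {
                string serviceNamespace = "http://example.org/orders";   // hypothetical namespace
                string contractName = "IOrderService";                   // hypothetical contract
                string endpointName = "OrderServiceEndpoint";            // hypothetical client endpoint
                var methods = new List<string> { "SubmitOrder", "CancelOrder", "GetOrderStatus" };

                foreach (string method in methods)
                {
                    string filterName = "Filter_" + method;
                    string soapAction = serviceNamespace + "/" + contractName + "/" + method;

                    Console.WriteLine("<filter name=\"{0}\" filterType=\"Action\" filterData=\"{1}\" />",
                        filterName, soapAction);
                    Console.WriteLine("<add filterName=\"{0}\" endpointName=\"{1}\" />",
                        filterName, endpointName);
                }
            }
        }

    The first line per method goes in the routing service's filters section and the second in its filter table, exactly like the spreadsheet's two output columns.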

    Read the article

  • Using all Ten IO slots on a 7420

    - by user12620172
    So I had the opportunity recently to actually use up all ten slots in a clustered 7420 system. This actually uses 20 slots, or 22 if you count the Clustron card. I thought it was interesting enough to share here. This is at one of my clients here in southern California. You can see the picture below. We have four SAS HBAs instead of the usual two. This is because we wanted to split up the back-end traffic for different workloads. We have a set of disk trays coming from two SAS cards for nothing but Exadata backups. Then, we have a different set of disk trays coming off of the other two SAS cards for non-Exadata workloads, such as regular user file storage. We have 2 Infiniband cards which allow us to do a full mesh directly into the back of the nearby, production Exadata, specifically for fast backups and restores over IB. You can see a 3rd IB card here, which is going to be connected to a non-production Exadata for slower backups and restores from it. The 10Gig card is for client connectivity, allowing other, non-Exadata Oracle databases to make use of the many snapshots and clones that can now be created using the RMAN copies from the original production database coming off the Exadata. This allows for a good number of test and development Oracle databases to use these clones without affecting performance of the Exadata at all. We also have a couple FC HBAs, both for NDMP backups to an Oracle/StorageTek tape library and also for FC clients to come in and use some storage on the 7420. Now, if you are adding more cards to your 7420, be aware of which cards you can place in which slots. See the bottom graphic just below the photo. Note that the slots are numbered 0-4 for the first 5 cards, then the "C" slot, which is the dedicated cluster card (called the Clustron), and then another 5 slots numbered 5-9. Some rules for the slots: Slots 1 & 8 are automatically populated with the two default SAS cards. The only other slots you can add SAS cards to are 2 & 7. Slots 0 and 9 can only hold FC cards. Nothing else. So if you have four SAS cards, you are now down to only four more slots for your 10Gig and IB cards. Be sure not to waste one of these slots on a FC card, which can go into 0 or 9, instead. If at all possible, slots should be populated in this order: 9, 0, 7, 2, 6, 3, 5, 4

    Read the article

  • OSB unit testing, part 1 by Qualogy

    - by JuergenKress
    First you need to implement the simple BPEL process like this: In my current project, I inherited a lot of OSB components that have been developed by (former) team members, but they all lack unit tests. This is a situation I really dislike, since it makes it much harder to refactor or bug-fix the existing code base. So, for all newly created components (and components I have to bug-fix) I strive to add unit tests. Of course, the unit tests will be created using my favourite testing tool: soapUI! Unit of test The unit test should be created for the service composition, which in OSB terms should be the proxy service in combination with its business service. Now, since you do not want to rely on any other services, you should provide mock services for all services invoked from your Component-Under-Test. In a previous article, I wrote about mocking your services in soapUI. While this approach would also be valid here, creating a mock service (and certainly deploying it on a separate web server) does violate one of the core principles of unit testing: to make your unit tests as self-contained as possible, i.e. not depending on any external components. In this article, I will show you how to achieve this by simply providing a mock response inside your unit test. Scenario The scenario I implement for testing is a simple currency converter; the external request consists of a from and a to currency, and an amount (in currency from). The service will perform an exchange rate lookup using the WebServiceX CurrencyConverter and return a response to the caller consisting of both the source and target currencies and amounts. For the purpose of unit testing, I will implement a mock response for the exchange rate lookup. Read the complete article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Qualogy,OSB,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • How to generate SPMetal for a specific list (OOTB: like tasks or contacts) with custom columns

    - by KunaalKapoor
    SPMetal is used to make use of LINQ on a list in SharePoint 2010. By default, when you generate SPMetal on a site you will get a code-generated file for most of the lists and probably more. Here is an MSDN link for some info on SPMetal: http://msdn.microsoft.com/en-us/library/ee538255(office.14).aspx But what if you want to generate the code for only one list? Well, it is quite simple once you figure it out. You need to add an XML file to override the default settings of SPMetal and specify it in the /parameters option. I will show you how to do this. First create a folder that will contain two files (GenerateSPMetalCode.bat and SPMetal.xml). Below is the content of the files: GenerateSPMetalCode.bat "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\SPMetal" /web:http://YourServer /code:OutPutFileName.cs /language:csharp /parameters:SPMetal.xml pause SPMetal.xml <?xml version="1.0" encoding="utf-8"?> <Web AccessModifier="Internal" xmlns="http://schemas.microsoft.com/SharePoint/2009/spmetal"> <List Name="ListName"> <ContentType Name="ContentTypeName" Class="GeneratedClassName" /> </List> <ExcludeOtherLists></ExcludeOtherLists> </Web> You will have to change some of the text in the files so that it is specific to your SharePoint server setup. In the bat file you will have to change http://YourServer to the url of the web where your list is. In the SPMetal.xml file you need to change ListName to the name of your list and ContentTypeName to the name of the content type you want to extract. The GeneratedClassName can be anything, but perhaps you should rename it to something more sensible. Adding the following line: '<List Name="ListName"><ContentType Name="ContentTypeName" Class="GeneratedClassName" /> </List>' makes sure that any custom columns added to an OOTB list like contacts or tasks are also generated; these are missed out in a regular generation. So now when you run it, the SPMetal command will read the SPMetal.xml file and override its defaults. The ExcludeOtherLists element makes it so that only the code for the lists you specify will be generated. For some reason I got an error if I had this element above the List element. You should now have a code file called OutPutFileName.cs that has been generated. You can now put this in your SharePoint project for use with your LINQ queries against that list. I will soon write a LINQ example that uses the generated class. UPDATE: Add the /namespace parameter to add a namespace to the generated code. "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\SPMetal" /web:http://YourServer /namespace:MySPMetalNameSpace /code:OutPutFileName.cs /language:csharp /parameters:SPMetal.xml
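    In the meantime, a minimal sketch of what such a LINQ query could look like is below. The context class name, list name and property are illustrative only: SPMetal derives the actual DataContext subclass name from your web, and the entity class is whatever you put in the Class attribute above (GeneratedClassName here):

        // Illustrative only: adjust the context class, list name and properties to your generated code.
        using System;
        using System.Linq;
        using Microsoft.SharePoint.Linq;

        class Program
        {
            static void Main()
            {
                // Hypothetical generated context class; yours will be named after your web.
                using (var context = new MyWebDataContext("http://YourServer"))
                {
                    EntityList<GeneratedClassName> items =
                        context.GetList<GeneratedClassName>("ListName");

                    // Query against a column that SPMetal picked up via the ContentType mapping.
                    var matching = from item in items
                                   where item.Title.StartsWith("Project")
                                   select item;

                    foreach (var item in matching)
                        Console.WriteLine(item.Title);
                }
            }
        }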

    Read the article

  • Testing and Validation – You Really Do Have The Time

    - by BuckWoody
    One of the great advantages in my role as a Technical Specialist here at Microsoft is that I get to work with so many great clients. I get to see their environments and how they use them, and the way they work with SQL Server. I’ve been a data professional myself for many years. Over that time I’ve worked with many database platforms, lots of client applications, and written a lot of code in many industries. For a while I was also a consultant, so I got to see how other shops did things as well. But because I now focus on a “set” base of clients (over 500 professionals in over 150 companies) I get to see them over a longer period of time. Many of them help me understand how they use the product in their projects, and I even attend some DBA regular meetings. I see the way the product succeeds, and I see when it fails. Something that has really impacted my way of thinking is the level of importance any given shop is able to place on testing and validation. I’ve always been a big proponent of setting up a test system and following a very disciplined regimen to make sure it will work in production for any new projects, and then taking the lessons learned into production as standards. I know, I know – there’s never enough time to do things right like this. Yet the shops I see that do it have the same level of work that they output as the shops that don’t. They just make the time to do the testing and validation and create a standard that they will follow in production. And what I’ve found (surprise surprise) is that they have fewer production problems. OK, that might seem obvious – but I’ve actually tracked it and those places that do the testing and best practices really do save stress, time and trouble from that effort. We all think that’s a good idea, but we just “don’t have time”. OK – but from what I’m seeing, you can gain time if you spend a little up front. You may find that you’re actually already spending the same amount of time that you would spend in doing the testing, you’re just doing it later, at night, under the gun. Food for thought.

    Read the article

  • Boot error after clean Ubuntu 13.04 install: [Reboot and select proper boot device]

    - by IcarusNM
    I am having the same problem as this guy, where a fresh Ubuntu install completes beautifully but will not boot. I get the ASUS (?) "Reboot and select proper boot device" error, first with Xubuntu 13.10 and, after finally giving up there, with Xubuntu 13.04; I am now back to regular Ubuntu 13.04. ASUS motherboard Z77, Intel chipset. Standard internal SATA 500GB HD. 64-bit. All-new hardware less than 3 months old. It was running Ubuntu 12.04 LTS great until I tried this upgrade. I have re-installed from scratch every which way: with LVM, without LVM; with the default partitions, with my own partitions. With ext3 or ext4. Alongside; replace; upgrade. No difference. On the last two tries, I have booted afterward from the same USB stick, downloaded and run boot-repair, and now I guess I am off to the boot-repair support email with my URLs from that. It did all kinds of cool stuff but ultimately made no difference. I never got anything like this with Ubuntu 12.04. I've now probably re-installed Ubuntu 13.04 ten times slightly different ways. I finally found how to skip the language packs, so at least that sped things up! :) This starts from the ubuntu-13.04-desktop-amd64.iso and UNetbootin as suggested in the official instructions for USB thumb drive creation from OSX. That part all works fine (booting the USB on the PC and trying Ubuntu and/or installing from there onto the PC HD.) I have no CD drive on this PC, but I suppose I could get one. I would rather find some Linux install that works from USB like I've always done. After running boot-repair twice, in the ASUS BIOS I now see three different UEFI boot options in the priority list, and they are all labeled exactly the same: ubuntu (P6: WDC WD5000AAKX-00U6AA0) Then there's a non-UEFI option: P6: WDC WD5000AAKX-00U6AA0 (476940MB) And a fifth option appeared after the first boot-repair: Windows Boot Manager (P6: WDC WD5000AAKX-00U6AA0) I have tried all 5 of these, and I get exactly the same error. I have never had Windows installed on this HD. ASUS is calling it Windows Boot Manager, but I presume that's a mistaken label for whatever boot-repair did. I can boot on USB and run GParted and it looks great. The partitions all look normal. I found another case of this online with no solution posted. I can't find much about it online. Needs a Master Boot Record wipe/redo?? I'm not sure how.

    Read the article

  • What do you need to know to be a world-class master software developer? [closed]

    - by glitch
    I wanted to bring up this question to you folks and see what you think, and hopefully have you advise me on the matter: let's say you had 30 years of learning and practicing software development in front of you, how would you dedicate your time so that you'd get the biggest bang for your buck. What would you both learn and work on to be a world-class software developer that would make a large impact on the industry and leave behind a legacy? I think that most great developers end up being both broad generalists and specialists in one or two areas of interest. I'm thinking Bill Joy, John Carmack, Linus Torvalds, K&R and so on. I'm thinking that perhaps one approach would be to break things down by categories and establish a base minimum of "software development" greatness. I'm thinking: Operating Systems: completely internalize the core concepts of an OS, perhaps gain a lot of familiarity with an OSS one such as Linux. Anything from memory management to device drivers has to be complete second nature. Programming Languages: this is one of those topics that imho has to be fully grokked even if it might take many years. I don't think there's quite anything like going through the process of developing your own compiler, understanding language design trade-offs and so on. Programming Language Pragmatics is actually one of my favorite books; I think you want to have that internalized back to back, and that's just the start. You could go significantly deeper, but I think it's time well spent, because it's such a crucial building block. As a subset of that, you want to really understand the different programming paradigms out there: imperative, declarative, logic, functional and so on. You should be at the very least comfortable writing anything from assembly to LISP. Contexts: I believe one should have experience working in different contexts to truly be able to appreciate the trade-offs that are being made every day. Embedded, web development, mobile development, UX development, distributed, cloud computing and so on. Hardware: I'm somewhat conflicted about this one. I think you want some understanding of computer architecture at a low level, but I feel like the concepts that will truly matter will be slightly higher level, such as CPU caching / memory hierarchy, ILP, and so on. Networking: we live in a completely network-dependent era. Having a good understanding of the OSI model, knowing how the Web works, how HTTP works and so on is pretty much a pre-requisite these days. Distributed systems: once again, everything's distributed these days; it's getting progressively harder to ignore this reality. Slightly related: perhaps add a solid understanding of how browsers work to that, since the world seems to be moving so much to interfacing with everything through a browser. Tools: Have a really broad toolset that you're familiar with, one that continuously expands throughout the years. Communication: I think being a great writer, effective communicator and a phenomenal team player is pretty much a prerequisite for a lot of a software developer's greatness. It can't be overstated. Software engineering: understanding the process of building software, team dynamics, the requirements of the business side, all the pitfalls. You want to deeply understand where what you're writing fits from the market perspective. The better you understand all of this, the more of your work will actually see the daylight. This is really just a starting list; I'm confident there's a ton of other material that you need to master.
As I mentioned, you most likely end up specializing in a bunch of these areas as you go along, but I was trying to come up with a baseline. Any thoughts, suggestions and words of wisdom from the grizzled veterans out there who would like to share their thoughts and experiences with this? I'd really love to know what you think!

    Read the article

  • How to configure a longer version number in Artifactory

    - by claudine
    The version numbers for our jars have to be longer than x.x.x; we actually need x.x.x.x to integrate an old-fashioned, self-made mechanism. This is because we tag our software with x.x.x, and as soon as we have a delivery to a customer, one specific jar has to be built at exactly that point in time to fit another backend which communicates with our program. For that reason this one jar has the version 2.3.4.1 when generated, and in the next delivery of the same version it is built and named 2.3.4.2. Artifactory cannot handle this and doesn't save more than x.x.x.2 in some cases. So we thought of maybe editing the regular expression in the Maven repository layout (see attached screenshot), because testing the path in the field below shows that it cannot handle the version number. Of course, plain x.x.x still has to work for the rest of our jars. For example, here is the maven-metadata.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <metadata>
      <groupId>com.firm</groupId>
      <artifactId>someid</artifactId>
      <version>1.5.1</version>
      <versioning>
        <latest>1.5.1</latest>
        <release>1.5.1</release>
        <versions>
          <version>1.4.62</version>
        </versions>
        <lastUpdated>20120926073942</lastUpdated>
      </versioning>
    </metadata>

    The folder structure looks like:

    someid
      1.4.62
      1.4.62.1
      1.4.62.2
      1.4.62.3

    If we deploy a new artifact version (1.4.62.1), the maven-metadata.xml contains the 1.4.62.1 version, but Artifactory overrides the version number (1.4.62.x) back to 1.4.62 after an unspecified time. It seems that Artifactory only supports major, minor and revision numbers, and deletes the build number. We are now looking for a way to disable this behavior. We use JFrog Artifactory version 2.5.0 (rev. 13086). Any ideas, maybe? Thanks in advance.
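    One avenue worth exploring (a sketch only, not verified against Artifactory 2.5.0): define a custom repository layout that keeps the fourth digit group inside the [baseRev] token instead of letting it be parsed as a build or integration revision. The stock maven-2-default artifact path pattern, which a custom layout would start from, looks like this:

    [orgPath]/[module]/[baseRev](-[folderItegRev])/[module]-[baseRev](-[fileItegRev])(-[classifier]).[ext]

    If the trimming stems from Artifactory treating the trailing .1 as an integration revision during metadata calculation, a copy of this layout with the integration-revision regular expressions narrowed (so they no longer match a bare number) might stop it. The token names above come from Artifactory's repository layout documentation; the specific regex change is an assumption that would need testing against both the x.x.x and x.x.x.x jars.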

    Read the article

  • JOB OF THE WEEK

    - by Maria Sandu
    My name is Pascaline and I am the EMEA Solution Response Manager. I currently have a role open for a Benelux Solution Response Representative to jump-start his or her career in my international team of six people from all across Europe. Key for this exciting role is that you are curious to learn, like networking and constantly want to develop yourself. To help you with that, you will get extensive product training and workshops on all Oracle product lines, and you will receive sales training. You will also have the opportunity to get certified on Oracle products through online training and workshops. Every month you will benefit from one-on-one sales coaching and regular coaching from me to help you grow and develop your career at Oracle! The role includes following up on marketing events and online marketing activities with current Key Accounts in the Benelux. It is truly a pioneering role at Oracle, as it is the first time that an employee will engage in business conversations about all lines of business and product ranges with Key Accounts. So are you interested in working between marketing and sales? Do you want to work for a big IT multinational? Do you want to work abroad after you graduate, and do you want to develop yourself? Then please visit this link for more information.

    Read the article

  • Oracle Data Protection: How Do You Measure Up? - Part 1

    - by tichien
    This is the first installment in a blog series examining the results of a recent database protection survey conducted by Database Trends and Applications (DBTA) Magazine. All Oracle IT professionals know that a sound, well-tested backup and recovery strategy plays a foundational role in protecting their Oracle database investments, which in many cases represent the lifeblood of business operations. But just how common are the data protection strategies in use, and what challenges are faced across various enterprises? In January 2014, DBTA, in partnership with Oracle, released the results of its "Oracle Database Management and Data Protection Survey". Two hundred Oracle IT professionals were interviewed on various aspects of their database backup and recovery strategies, in order to identify the top organizational and operational challenges for protecting Oracle assets. Here are some of the key findings from the survey:

    The majority of respondents manage backups for tens to hundreds of databases, representing a total data volume of 5 to 50 TB (14% manage 50 to 200 TB, and some up to 5 PB or more).

    About half of the respondents (48%) use HA technologies such as RAC, Data Guard, or storage mirroring; however, these technologies are deployed on only 25% of their databases or less. This indicates that backups are still the predominant method for database protection among enterprises.

    Weekly full and daily incremental backups to disk were the most popular strategy, used by 27% of respondents, followed by daily full backups, used by 17%.

    Interestingly, over half of the respondents reported that 10% or less of their databases undergo regular backup testing.

    A few key backup and recovery challenges resonated across many of the respondents:

    Poor performance and impact on productivity (see Figure 1): 38% of respondents indicated that backups are too slow, resulting in prolonged backup windows. In a similar vein, 23% complained that backups degrade the performance of production systems.

    Lack of continuous protection (see Figure 2): 35% revealed that less than 5% of Oracle data is protected in real time.

    Management complexity (see Figure 1): 25% stated that recovery operations are too complex, and 31% reported that backups need constant management. 45% changed their backup tools as a result of growing data volumes, while 29% changed tools due to the complexity of the tools themselves.

    Figure 1: Current Challenges with Database Backup and Recovery. Figure 2: Percentage of Organization's Data Backed Up in Real-Time or Near Real-Time.

    In future blogs, we will discuss each of these challenges in more detail and bring insight into how the backup technology industry has attempted to resolve them.
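    As a concrete reference point for the most popular strategy above (weekly full plus daily incremental backups to disk), an RMAN setup might look roughly like the following minimal sketch. The channel allocation and the choice to back up archived logs alongside the database are assumptions to adapt to your environment:

    # weekly_full.rman -- run once a week: level 0 (full) backup to disk
    RUN {
      ALLOCATE CHANNEL d1 DEVICE TYPE DISK;
      BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
    }

    # daily_incr.rman -- run on the remaining days: level 1 backup of changed blocks only
    RUN {
      ALLOCATE CHANNEL d1 DEVICE TYPE DISK;
      BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;
    }

    A daily level 1 backup copies only the blocks changed since the last level 0, which is exactly the trade-off between backup-window length and restore complexity that the survey's respondents are weighing.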

    Read the article
