Search Results

Search found 32475 results on 1299 pages for 'change detection'.

Page 493/1299

  • How to know which fields of a record are updated in saving the edit? [closed]

    - by Luiz Maffort
    I'm recording in a log table everything that changes in a given table, so I need to know, for example, when the user changes the status from active to inactive. With this information I will write to my log table which record was changed, by whom, and what the old and new values were. If I instantiate an object beforehand with db.Entry(chamados).State = EntityState.Modified; I can even compare, but this error appears at runtime: *An object with the same key already exists in the ObjectStateManager. The ObjectStateManager cannot track multiple objects with the same key.* Can you help me, please?
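    A minimal sketch of one way to capture old/new values with Entity Framework change tracking, assuming a DbContext (db here) and an entity that was loaded from that same context rather than re-attached (attaching a second instance with the same key is what triggers the ObjectStateManager error). The ChangeLogger/LogChanges names are illustrative, not from the original post:

        using System;
        using System.Data.Entity;

        public static class ChangeLogger
        {
            // Compare the values originally loaded from the database with the
            // current (edited) values and report each property that differs.
            public static void LogChanges(DbContext db, object entity)
            {
                var entry = db.Entry(entity);
                foreach (var name in entry.OriginalValues.PropertyNames)
                {
                    var oldValue = entry.OriginalValues[name];
                    var newValue = entry.CurrentValues[name];
                    if (!Equals(oldValue, newValue))
                    {
                        // Replace this with an insert into the log table.
                        Console.WriteLine("{0}: '{1}' -> '{2}'", name, oldValue, newValue);
                    }
                }
            }
        }

    Calling ChangeLogger.LogChanges(db, chamados) just before db.SaveChanges() would then write one line per modified field.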

    Read the article

  • Eclipse does not want to use openjdk-7

    - by umop aplsdn
    I am on a new installation of Ubuntu. I install all the updates and then restart. I then install openjdk-7-jdk from apt and restart again. Then I install eclipse-platform, eclipse-jdt, and eclipse-cdt and launch Eclipse. When I check the build path for my imported projects, it turns out the eclipse-platform installation pulled in openjdk-6. Okay, cool. The problem is that I can't use openjdk-7 AT ALL: there is no option for it in the build path library manager. How can I change it so Eclipse uses openjdk-7? I tried reinstalling it already; that didn't do anything, it just told me it was already installed. EDIT: Failed at the title, fixed.

    Read the article

  • Windows 7 / Ubuntu Dualboot GRUB Problem.

    - by Tek
    I'd like to say ahead of time that I'm running a RAID-0 setup.

    First of all, I'm glad Ubuntu 9.10 installed flawlessly and detected my RAID-0 setup just fine. The issue I'm having now is that I already had Windows 7 installed and had made a small 12 GB partition for Linux/swap. I grabbed EasyBCD 2.0 to edit the Windows 7 bootloader and configured it to dual-boot with GRUB2, because before that it didn't even show the option for Ubuntu. The bootloader points to a file EasyBCD created in the Windows directory, "C:\NST\AutoNeoGrub0.mbr", which I'm guessing is what GRUB boots from. After that I got the option for booting Ubuntu.

    The problem is that it drops me to the GRUB prompt (probably because it's pointing to \NST\AutoNeoGrub0.mbr?). At first I didn't know what to do, but after some research I found I have to type GRUB commands to boot into Ubuntu manually, e.g.:

        grub> root (hd0,4)
        grub> kernel /boot/vmlinuz-2.6... root=/dev/disk/by-uuid/24624-2424...
        grub> initrd /boot/initrd.img-2.6...
        grub> boot

    After all that Ubuntu boots just fine, but how do I fix it permanently? Do I need to edit the bootloader manually (since EasyBCD "autoconfigures")? Some insight on this would rock! Also, it's a pain to type the actual UUID since it's REALLY long. I tried getting the name of the drive via fdisk -l, but since it's RAID-0 I'm guessing I can't do that. How can I get a shorter name for the drive, like /dev/sda, /dev/sdb, etc.?

    I've also tried to update to the latest GRUB and I got this:

        Creating config file /etc/default/grub with new version
        Generating core.img
        error: cannot seek `/dev/sdc'
        error: cannot seek `/dev/sdc'
        grub-probe: error: no mapping exists for `nvidia_dbedfcca5'
        Auto-detection of a filesystem module failed.
        Please specify the module with the option `--modules' explicitly.
        dpkg: error processing grub-pc (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of grub2:
         grub2 depends on grub-pc; however:
          Package grub-pc is not configured yet.
        dpkg: error processing grub2 (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I've also tried:

        b@dnb:~$ sudo update-grub
        error: cannot seek `/dev/sdc'
        error: cannot seek `/dev/sdc'
        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-2.6.31-14-generic
        Found initrd image: /boot/initrd.img-2.6.31-14-generic
        error: cannot seek `/dev/sdc'
        grub-probe: error: no mapping exists for `nvidia_dbedfcca5'
        error: cannot seek `/dev/sdc'
        grub-probe: error: no mapping exists for `nvidia_dbedfcca5'
        Found memtest86+ image: /boot/memtest86+.bin
        Found Windows 7 (loader) on /dev/mapper/nvidia_dbedfcca1
        error: cannot seek `/dev/sdc'
        grub-probe: error: no mapping exists for `nvidia_dbedfcca1'
        done

    To no avail. Any idea what I can do to fix this mess? :(

    Edit: this is my disk configuration:

        b@dnb:~$ sudo df -l
        Filesystem                    1K-blocks     Used  Available  Use%  Mounted on
        /dev/mapper/nvidia_dbedfcca5   12302232  2744788    8932520   24%  /
        udev                            1030288      268    1030020    1%  /dev
        none                            1030288      964    1029324    1%  /dev/shm
        none                            1030288       92    1030196    1%  /var/run
        none                            1030288        0    1030288    0%  /var/lock
        none                            1030288        0    1030288    0%  /lib/init/rw
        /dev/sr0                         706532   706532          0  100%  /media/cdrom0

    Note: /dev/mapper/nvidia_dbedfcca5 is my Linux boot partition.

    Read the article

  • Run the Windows .net Application in System Tray on System Startup

    - by Rajneesh Verma
    Hi, today I created a .NET Windows application with the following key points. 1. Run only one instance of the application: to achieve this I changed the code of Program.cs as follows:

    Code Snippet:

        static class Program
        {
            /// <summary>
            /// The main entry point for the application.
            /// </summary>
            [STAThread]
            static void Main()
            {
                bool instanceCountOne = false;
                using (Mutex mtex = new Mutex(true, "MyRunningApp", out instanceCountOne))
                {
                    if (instanceCountOne)
                    {
                        Application ...(read more)
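    The snippet above is cut off by the aggregator. As a rough, self-contained sketch of the same single-instance pattern combined with a system-tray icon (the NotifyIcon, the TrayContext class name, and the "Exit" menu are illustrative additions, not from the original post):

        using System;
        using System.Threading;
        using System.Windows.Forms;

        static class Program
        {
            [STAThread]
            static void Main()
            {
                bool createdNew;
                // The named mutex guarantees only one instance runs at a time.
                using (var mutex = new Mutex(true, "MyRunningApp", out createdNew))
                {
                    if (!createdNew)
                        return; // another instance is already running

                    Application.EnableVisualStyles();
                    Application.Run(new TrayContext());
                }
            }
        }

        // ApplicationContext keeps the app alive with no visible window,
        // showing only a tray icon with an Exit menu item.
        class TrayContext : ApplicationContext
        {
            private readonly NotifyIcon trayIcon;

            public TrayContext()
            {
                trayIcon = new NotifyIcon
                {
                    Icon = System.Drawing.SystemIcons.Application,
                    Visible = true,
                    ContextMenuStrip = new ContextMenuStrip()
                };
                trayIcon.ContextMenuStrip.Items.Add("Exit", null, (s, e) =>
                {
                    trayIcon.Visible = false;
                    Application.Exit();
                });
            }
        }

    Running at system startup is usually handled separately, e.g. via a shortcut in the Startup folder or a Run registry entry.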

    Read the article

  • About the use of dotted hostname with avahi

    - by BenZen
    Hi, I recently discovered Avahi. It helps you resolve hostnames on the local network. But in my situation I've got an issue. I decided to host a machine called "a.alpha" and another called "b.alpha". In the near future I will also use some machines called "a.beta" and "b.beta". My problem is that from "a.alpha" I can resolve the "a.alpha.local" hostname, but currently I can't resolve "a.alpha.local" from "b.alpha". So when I decide to use the ".beta" extension, I will have some issues. Is it normal that the machine "a.alpha" doesn't expose the entire hostname to mDNS? I know I can change the naming scheme (say, use "a-alpha" instead of "a.alpha"), but I like it this way. So the question is: is it possible to use a dotted name in /etc/hostname and resolve it using Avahi?

    Read the article

  • Number crunching algo for learning multithreading?

    - by Austin Henley
    I have never really implemented anything dealing with threads; my only experience with them is reading about them in my undergrad. So I want to change that by writing a program that does some number crunching but splits it up into several threads. My first ideas for this hopefully simple multithreaded program were: a Beal's Conjecture brute force based on my SO question; the Bailey-Borwein-Plouffe formula for calculating Pi; and a prime number brute-force search. As you can see I have an interest in math and thought it would be fun to incorporate it into this, rather than coding something such as a server, which wouldn't be nearly as fun! But those 3 ideas don't seem very appealing anymore, and I have already done some work on them in the past, so I was curious whether anyone had ideas in the same spirit as these 3 that I could implement?
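    For the third idea, a small sketch of what a multithreaded prime search might look like in C# (the range, the use of Parallel.For for partitioning, and the class name are arbitrary choices for illustration):

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class PrimeSearch
        {
            // Trial division; slow but easy to reason about for a first threading exercise.
            static bool IsPrime(int n)
            {
                if (n < 2) return false;
                for (int d = 2; d * d <= n; d++)
                    if (n % d == 0) return false;
                return true;
            }

            static void Main()
            {
                var primes = new ConcurrentBag<int>();
                // Parallel.For partitions the range across worker threads automatically.
                Parallel.For(2, 1000000, n =>
                {
                    if (IsPrime(n)) primes.Add(n);
                });
                Console.WriteLine("Found {0} primes below one million.", primes.Count);
            }
        }

    Replacing Parallel.For with explicitly created Thread objects and a hand-rolled work split is a natural next step if the goal is to learn the lower-level primitives.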

    Read the article

  • Can I use a project code which has New BSD license but uses a GPL license library?

    - by Alok Kulkarni
    I want to use the ICSOpenVpn project source code in my commercial application. The ICSOpenVpn project states that its license is New BSD, but the libopenvpn.so library it uses is under the GNU GPLv2 license. The FAQ for version 2 of the GNU GPL asks, "If a library is released under the GPL (not the LGPL), does that mean that any program which uses it has to be under the GPL?" and answers: "Yes, because the program as it is actually run includes the library." Also, how could ICSOpenVpn change the license to New BSD?

    Read the article

  • Cannot access system after deleting passwd file

    - by joao rodrigo leao
    I was trying to change my user name and also my /home/username, and my system started to crash. I deleted the passwd file, but I had a backup named passwd_bkp. I tried to rename passwd_bkp to passwd and it did not work; no commands were being executed in the terminal window I was in. I restarted my system and now I cannot log in. GRUB shows two options: Linux and recovery mode. I tried to open a session as root, but it says the file system is corrupted. I cannot access my files. Did I lose all my files?

    Read the article

  • Pagination In blogengine.net 2.0

    - by anirudha
    BlogEngine.NET 2.0 is a great platform for making blogging easier. However, after you update a blog to BlogEngine.NET 2.0 you may find that pagination does not look right: it shows previous posts where next posts should be, and next posts where previous posts should be. Here is a solution: you need to fix the pagination module. Go to App_Code/Controls/postPager.cs and replace the code there with the corrected file I put up for download here (download pagination module). After that, pagination in BlogEngine.NET 2.0 works well. Related searches: pagination not working in BlogEngine.NET 2.0; pagination bug in BlogEngine.NET 2.0; make pagination work in BlogEngine.NET 2.0.
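    The post does not reproduce the actual postPager.cs contents, so as a purely illustrative sketch (not BlogEngine.NET's real code): the usual convention is that the "newer posts" link points at the current page number minus one and "older posts" at the page number plus one; swapping the two produces exactly the bug described above.

        using System.Text;

        static class PagerSketch
        {
            // Hypothetical helper, only to illustrate which direction each link should point.
            public static string Build(int currentPage, int totalPages)
            {
                var sb = new StringBuilder();
                if (currentPage > 1)
                    sb.AppendFormat("<a href=\"?page={0}\">&laquo; Newer posts</a> ", currentPage - 1);
                if (currentPage < totalPages)
                    sb.AppendFormat("<a href=\"?page={0}\">Older posts &raquo;</a>", currentPage + 1);
                return sb.ToString();
            }
        }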

    Read the article

  • A little primer on using TFS with a small team

    - by johndoucette
    The scenario: a small team of 3 developers, mostly in maintenance mode with traditional ASP.NET, classic ASP, .NET integration services and utilities tied to the company's third-party packages, and a bunch of Java-based ColdFusion web applications, all under Visual SourceSafe (VSS). They are about to embark on a huge SharePoint 2010 new-construction project and wanted to use Subversion instead of VSS. TFS was a foreign word and smelled of "high cost" and of an "over-complicated process". Since they had no preconceptions about the old TFS versions ('05 & '08), it was fun explaining how simple it was to install a TFS server and get the ball rolling, with or without all the heavy stuff one sometimes associates with such a huge and powerful application lifecycle management product. So, how does a small team begin using TFS?

    1. Start by using source control and migrate current VSS source trees into TFS. You can take the latest version or migrate the entire version history. It's up to you whether you want a clean start or need quick access to all the version notes and history of the bits.

    2. Since most shops are mainly in maintenance mode with existing applications, begin using bug workitems for everything. When you receive an issue/bug from your current tracking system, manually enter the workitem in TFS right through Visual Studio. You can automate the integration to the current tracking system later or replace it entirely. Believe me, this thing is powerful and can handle even the largest of help desks.

    3. With new construction, begin work with requirements and task workitems and follow the traditional sprint-based development lifecycle. Obviously, some minor training will be needed, but don't fear: this is very intuitive and MSDN has a ton of lesson-based labs and videos.

    4. For the Java developers, use the new Team Explorer Everywhere 2010 plugin (previously known as Teamprise). There is a seamless interface in Eclipse, but also a good command-line utility for other environments such as Dreamweaver.

    5. Wait to fully integrate the whole workitem/project management/testing process until your team is familiar with the integrated workitems for bugs and code. After a while, you will see the team wanting more transparency into the work they are all doing, and naturally everyone will want workitems to help them organize the chaos!

    6. Management will be limited in the value of the reports until you have a fully blown implementation of project planning, construction, build, deployment and testing. However, there are some basic "bug rate" reports and current backlog listings that can provide good information.

    Some notable explanations of TFS:

    Work Item Tracking and Project Management - A workitem represents the unit of work within the system, which enables tracking of all activities produced by a user, whether that user is a developer, business user, project manager or tester. The properties of a workitem, such as linked changesets (checked-in code), who updated the data and when, and the states and reasons for change, are all transitioned to a data warehouse within TFS for reporting purposes. A workitem can be defined as a "bug", "requirement", "test case", or "change request". Workitems drive the work effort of the individual assigned to them and play a key role in defining what needs to be done; they are the things the team needs to do to accomplish a goal.

    Test Case Management - Starting with a workitem known as a "test case", a tester (or developer) can now author and manage test cases within a formal test plan subsystem. Although TFS supports the test case workitem type, there is a new product known as VS Test Professional 2010 which allows a tester to facilitate manual tests, including fast-forwarding steps in the process to arrive at the assertion point quickly. This repeatable process provides quick regression tests and can be conducted by the business user to ensure completeness during UAT. In addition, developers can no longer respond to a bug with the line "cannot reproduce": with every test run, attachments are available including the recorded session, captured environment configurations and settings, screen shots, IntelliTrace (debugging history), and in some cases, if the lab manager is being used, a snapshot of the tested environment.

    Version Control - A modern system allowing shared check-in/check-out, excellent merge conflict resolution, shelvesets (personal check-ins), branching/merging visualization, public workspaces, gated check-ins, security hierarchy capabilities, and changeset/workitem tracking. Knowing what any developer did with the code has become much easier to see, which makes issues easier to resolve.

    Team Build - Automate the compilation process, whether you need it to run whenever a developer checks in code, periodically (such as nightly builds for testers in the morning), or as manual builds to be deployed into production. Each build can run through pre-determined tests, perform code analysis to see if the developer conforms to the team standards, and reject the build if either fails.

    Project Portal & Reporting - Provide management with a dashboard giving insight into the project(s): "Where are we?" at each step of the way, including past iterations and the current burndown rate. Enabling this feature is easy as it seamlessly interfaces with existing SharePoint implementations.

    Read the article

  • With respect to gnome-session, what is a "component"?

    - by Alistair Buxton
    Under /usr/share/gnome-session/sessions are files which describe the different types of sessions available from gnome-session. In these files is a list of required components, eg for shell: RequiredComponents=gnome-shell;gnome-settings-daemon; or for fallback: RequiredComponents=gnome-panel;gnome-settings-daemon; This appears to be a list of executables, but it is not. If I change gnome-panel to some other type of panel, the session does not start, and I see the following errors in ~/.xsession-errors: gnome-session[2003]: WARNING: Unable to find required component 'xfce4-panel' So my question: What is a component, how are they defined, and where does gnome-session look for them?

    Read the article

  • URL rewrite from www.domain.com/subdirectory to http://domain.com/subdirectory

    - by chrizzbee
    I need a solution for the following problem: I use a CMS and want the backend to be available only at http://domain.com/backend and not at http://www.domain.com/backend. How do I have to change my .htaccess file to achieve this? I already have a rewrite rule from HTTP (non-www) to www. Here's what I currently have in my .htaccess file:

        ##
        # Uncomment the following lines to add "www." to the domain:
        #
        RewriteCond %{HTTP_HOST} ^shaba-baden\.ch$ [NC]
        RewriteRule (.*) http://www.shaba-baden.ch/$1 [R=301,L]
        #
        # Uncomment the following lines to remove "www." from the domain:
        #
        # RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        # RewriteRule (.*) http://example.com/$1 [R=301,L]
        #
        # Make sure to replace "example.com" with your domain name.
        ##

    So, the first bit is the redirect from HTTP to www; it works on the domain part of the URL. As explained, I need a rewrite rule for the backend login, from http://www.shaba-baden.ch/contao to http://shaba-baden.ch/contao

    Read the article

  • Mirroring Ubuntu on several systems in a computer lab

    - by Harvey Steck
    I am working in a new refugee school where the only Internet service available is a slow satellite connection. We are about to set up a computer lab (already have desktop systems and am about to install Ubuntu on them). I'm a newbie when it comes to Linux, but it seems a better alternative than pirated copies of Windows. I'd like to set up one Ubuntu system, and then mirror that system on perhaps ten to twenty other systems (all of which would be on an ethernet network). I expect to have an internet connection on the one system that I set up, but then it may be difficult to have enough bandwidth to go through all the same steps on the other ten systems. Can I set up the other ten or twenty computers to get all of their updates/upgrades/configuration from one master system? Can I also set things up so that students cannot change the configuration, install new programs, etc.? Appreciate any help you can give. -- Harvey

    Read the article

  • Gerrit, git and reviewing whole branch

    - by liori
    I'm now learning Gerrit (which is the first code review tool I have used). Gerrit requires a reviewed change to consist of a single commit. My feature branch has about 10 commits. The Gerrit-preferred way is to squash those 10 commits into a single one. However, if that squashed commit is merged into the target branch, the internal history of the feature branch will be lost; for example, I won't be able to use git-bisect to bisect into those commits. Am I right? I am a little worried about this state of things. What is the rationale for this choice? Is there any way of doing this in Gerrit without losing history?

    Read the article

  • Answers to Your Common Oracle Database Lifecycle Management Questions

    - by Scott McNeil
    We recently ran a live webcast on Strategies for Managing Oracle Database's Lifecycle. There were tons of questions from our audience that we simply could not get to during the hour-long presentation. Below are some of those questions along with their answers. Enjoy!

    Question: In the webcast the presenter talked about "gold" configuration standards. For those who want to use this technique, could you recommend a best practice to consider or follow? How do I get started?

    Answer: Gold configuration standardization is a quick and easy way to improve availability through consistency. Start by choosing a reference database and saving the configuration to the Oracle Enterprise Manager repository using the Save Configuration feature. Next create a comparison template using the Oracle-provided template as a starting point and modify the ignored properties to eliminate expected differences in your environment. Finally create a comparison specification using the comparison template you created plus your saved gold configuration and schedule it to run on a regular basis. Don't forget to fill in the email addresses of those you want to notify upon drift detection. Watch the database configuration management demo to learn more.

    Question: Can Oracle Lifecycle Management Pack for Database help with patching an Oracle Real Application Clusters (RAC) environment?

    Answer: Yes, Oracle Enterprise Manager supports both parallel and rolling patch application for Oracle Real Application Clusters. The use of rolling patching is recommended as there is no downtime involved. For more details watch this demo.

    Question: What are some of the things administrators can do to control configuration drift? Why is it important?

    Answer: Configuration drift is one of the main causes of instability and downtime of applications. Oracle Enterprise Manager makes it easy to manage and control drift using scheduled configuration comparisons combined with comparison templates.

    Question: Does Oracle Enterprise Manager 12c Release 2 offer an incremental update feature for "gold" images? For instance, if the source binary has a higher PSU level, what is the best approach to update the existing "gold" image in the software library? Do you have to create a new image or can you just update the original one?

    Answer: Provisioning Profiles (gold images) can contain the installation files and database configuration templates. Although it is possible to make some changes to the profile after creation (mainly to configuration), it is normally recommended to simply create a new profile after applying a patch to your reference database.

    Question: The webcast talked about enforcing in-house standards. Does Oracle Enterprise Manager 12c offer verification of your databases and systems against those standards? For example, the initial "gold" image has been massively deployed over time, and there may be some changes to it. How can you do regular checks from Enterprise Manager to ensure the in-house standards are being enforced?

    Answer: There are really two methods to validate conformity to standards. The first method is to use gold standards against which you compare other databases to report unwanted differences. This method uses a new comparison template technology which allows users to ignore known differences (e.g. SID, start time, etc.), resulting in a report that shows only important or non-conformant differences. This method is quick to set up and configure and is recommended for those who want to get started validating compliance quickly.

    The second method leverages the new compliance framework, which allows the creation of specific and robust validations. These compliance rules are grouped into standards which can be assigned to databases quickly and easily. Compliance rules allow for targeted and more sophisticated validation beyond the basic equals operation available in the comparison method. The compliance framework can be used to implement just about any internal or industry standard. The compliance results will track current and historic compliance scores at the overall and individual database targets. When an issue is resolved, the score is automatically affected. The compliance framework is the recommended long-term solution for validating compliance using Oracle Enterprise Manager 12c. Check out this demo on database compliance to learn more.

    Question: If you are using the integration between Oracle Enterprise Manager and My Oracle Support in an "offline" mode, how do you know if you have the latest My Oracle Support metadata?

    Answer: In Oracle Enterprise Manager 12c Release 2, you now only need to download one zip file containing all of the metadata XML files. There is no indication that the metadata has changed, but you could run a checksum on the file and compare it to the previously downloaded version to see if it has changed.

    Question: What happens if a patch fails while administrators are applying it to a database or system?

    Answer: A large portion of Oracle Enterprise Manager's patch automation is the prerequisite checks that run to ensure the highest level of confidence that the patch will apply successfully. It is recommended you test the patch in a non-production environment and save the patch plan as a template once successful, so you can create new plans using the saved template. If you are using the recommended 'out of place' patching methodology, there is no urgency because the database is still running while the cloned Oracle home is being patched; users can address the issue and restart the patch procedure at the point it left off. If you are using the 'in place' method, you can address the issue and continue where the procedure left off.

    Question: Can Oracle Enterprise Manager 12c R2 compare configurations between more than one target at the same time?

    Answer: Oracle Enterprise Manager 12c can compare any number of target configurations at one time. This is the basis of many important use cases, including Configuration Drift Management. These comparisons can also be scheduled on a regular basis and email notifications sent should any differences appear. To learn more about configuration search and compare, watch this demo.

    Question: How is data comparison done, since changes are taking place in a live production system?

    Answer: There are many things to keep in mind when using the data comparison feature (part of the Change Management ability to compare table data). It was primarily intended for maintaining consistency of important but relatively static data, for example application seed data and application setup configuration. This data does not change often but is critical when testing an application to ensure results are consistent with production. It is not recommended to use data comparison on highly dynamic data like transactional tables or very large tables.

    Question: Which versions of Oracle Database can be monitored through Oracle Enterprise Manager 12c?

    Answer: Oracle Database versions 9.2.0.8, 10.1.0.5, 10.2.0.4, 10.2.0.5, 11.1.0.7, 11.2.0.1, 11.2.0.2, and 11.2.0.3.

    Read the article

  • Peaceful Tropical Cavern Wallpaper

    - by Asian Angel
    colorful-hand-painted [DesktopNexus]

    Read the article

  • Google Webmasters tools crawl error

    - by Shiro
    I am looking into the Google Webmaster Tools Crawl Errors section. How should I handle those URLs where another system or application produced an invalid URL? E.g. the real URL is http://www.example/images/products/s_=enlarge_16gb.jpg but, I don't know what happened on Yahoo Groups, it breaks the link into http://www.example/images/products/s_= enlarge_16gb.jpg and only makes the top part a hyperlink, which is http://www.example/images/products/s_= Because of that URL, Google shows a crawl error. I get a few errors because of this kind of result, or because of other people's typos. How do I prevent this? I am sure I don't have the right to go and change other people's posts. What is the solution for this? Thanks!

    Read the article

  • How do you balance documentation requirements with Agile developments

    - by Jeremy
    In our development group there are currently discussions around agile and waterfall methodology. No-one has any practical experience with agile, but we are doing some reading. The agile manifesto lists 4 values: individuals and interactions over processes and tools; working software over comprehensive documentation; customer collaboration over contract negotiation; responding to change over following a plan. We are an internal development group building applications for the consumption of other units in our enterprise. A team of 10 developers builds and releases multiple projects simultaneously, typically with 1, maybe 2 (rarely), developers on each project. It seems that from a supportability perspective the organization needs to put some real value on documentation, as without it there are serious risks when resourcing changes. With agile favouring interactions and software deliverables over processes and documentation, how do you balance that with the requirements of supportable systems and maintaining knowledge and understanding of how those systems work? With a waterfall approach, which favours documentation (requirements before design, design specs before construction), it is easy to build a process that meets some of the organizational requirements. How do we do this with an agile approach?

    Read the article

  • Accidentally Uninstalled Ubuntu Desktop and Anacron. Reinstalled. What Can I Expect?

    - by Volomike
    Unfortunately when I installed the cron package to take a look at it, I didn't realize that I was also uninstalling Ubuntu Desktop and Anacron. Crap!!! So, I then did apt-get install anacron ubuntu-desktop, which also removed fcron. However, I need to know what instability issues I may now encounter because I have done this change and changed it back. I mean, now that anacron is back and ubuntu-desktop is back, am I out of the woods? Or, will I lose any important jobs that need to run periodically from anacron?

    Read the article

  • Ubuntu sudo not working

    - by Ron Sebastian
    I wanted to move a file to /usr/python2.7/ but was unable to do so, so I changed the ownership of /usr to my user:

        sudo chown -R ***** /usr

    It worked, but I realised it was a blunder when sudo stopped working after that. It says:

        sudo: effective uid is not 0, is sudo installed setuid root?

    I have seen this post where the accepted solution was to use PolicyKit:

        pkexec chown root:root /usr/bin/sudo
        pkexec chmod 4755 /usr/bin/sudo

    However, even pkexec is saying:

        pkexec must be setuid root

    Please help; I've learned a lesson and will never change permissions on /usr again. Please help me this time!

    Read the article

  • Do you know your ADF "grace period?"

    - by Chris Muir
    What does the term "support" mean to you in the context of vendors such as Oracle giving your organization support with our products? Over the last few weeks I've taken a straw poll, discussing this very question with customers, and much to my surprise I've received a wide array of answers (which I've paraphrased): "Support means my staff can access dedicated resources to assist them in solving problems"; "Support means I can call Oracle at any time to request assistance"; "Support means we can expect fixes and patches to bugs in Oracle software". The last expectation is the one I'd like to focus on in this post; keep it in mind while reading this blog.

    From Oracle's perspective, as we're in the business of support, we in fact offer numerous services which are captured in the table on the following page. As the text under the table indicates, you should consult the relevant Oracle Lifetime Support brochures to understand the length of time Oracle will support Oracle products. As I'm a product manager for ADF, which sits under the FMW tree of Oracle products, let's consider ADF in particular. The FMW brochure is found here. On pages 8 and 9 you'll see the current "Application Development Framework 11gR1 (11.1.1.x)" and "Application Development Framework 11gR2 (11.1.2)" releases are supported out to 2017 for Extended Support. This timeframe is pretty standard for Oracle's currently released products, though as new releases roll in we should see those dates extended. On page 8 of the PDF, note the comment at the end of the page that refers to the Oracle Support document 209768.1: "For more-detailed information on bug fix and patch release policies, please refer to the 'Error Correction Support Policy' on My Oracle Support." This policy document is important as it introduces Oracle's Error Correction Support Policy, which addresses "patches and fixes". You can find it attached to the previous Oracle Support document 209768.1. Broadly speaking, while Oracle does provide "generalized support" up to 2017 for ADF, the Error Correction Support Policy dictates when Oracle will provide "patches and fixes" for Oracle software, and this is where the concept of the "grace period" comes in.

    As Oracle releases different versions of Oracle software, say 11.1.1.4.0, you are fully supported for patches and fixes for that specific version. However, when we release the next version, say 11.1.1.5.0, Oracle provides a minimum of 3 months to a maximum of 1 year of "grace period" during which we'll continue to provide patches and fixes for the previous version. This gives you time to move from 11.1.1.4.0 to 11.1.1.5.0 without being unsupported for patches and fixes. The last paragraph does generalize, as I've attempted to highlight the concept of the grace period rather than the specific dates for any version. For specific ADF and FMW versions, their respective grace periods, and when they terminated, you must visit Oracle Support Note 1290894.1. I'd like to include a screenshot here of the relevant table from that Oracle Support Note, but as it will be frequently updated it's better I force you to visit that note. Be careful to heed the comment in the note: According to policy, the Grace Period has passed because a newer Patch Set has been released for more than a year. It's important to note that the Lifetime Support Policy and Error Correction Support Policy documents are the single source of truth, subject to change, and will provide exceptions when required.
    This My Oracle Support document provides a summary of the grace period dates and timelines for planning purposes. So remember to return to the policy document for all definitions; note 1290894.1 is a summary only and not guaranteed to be up to date or correct. A last point from Oracle's perspective: why doesn't Oracle provide patches and fixes for all releases as long as they're supported? Amongst other reasons, it's a matter of practicality. Consider JDeveloper 10.1.3, released in 2005. JDeveloper 10.1.3 is still currently supported to 2017, but since that version was released there have been just under 20 newer releases of JDeveloper. Now multiply that across all of Oracle's products and imagine the number of releases Oracle would have to provide fixes and patches for, and maintain environments to test them, build them, staff to write them and more; it's simply beyond the capabilities of even a large software vendor like Oracle. So the "grace period" restricts the patches-and-fixes window to something manageable. In conclusion, does the concept of the "grace period" matter to you? If you define support as "getting assistance from Oracle" then maybe not. But if patches and fixes are important to you, then you need to understand the "grace period" and operate within the bounds of Oracle's Error Correction Support Policy. Disclaimer: this blog post was written in July 2012. Oracle Support policies do change from time to time, so the emphasis is on you to double-check the facts presented in this blog.

    Read the article

  • CUPS: HP printer DNS url

    - by wintersolutions
    The URL for my printer generated by hp-makeuri looks like this:

        hp:/net/Officejet_6500_E710n-z?ip=192.168.178.30

    But the printer is on a DHCP-enabled wifi network, so its IP address can and does change. On the other hand, my wifi router seems smart enough to provide some sort of DNS:

        $ ping hp-6500a
        PING hp-6500a.fritz.box (192.168.178.30) 56(84) bytes of data.
        64 bytes from hp-6500a.fritz.box (192.168.178.30): icmp_req=1 ttl=255 time=11.3 ms

    I tried to use the hostname in the CUPS URL/DeviceURI but it failed. Any suggestions on whether this is possible, and what the correct format would be?

    Read the article

  • Need more RAM? Running Ubuntu in a VM

    - by JBizzle
    So I looked around, but I'm running Ubuntu in a VM and it's running at a snail's pace, even though I installed the 64-bit version. The host is Windows 7 64-bit; the downside is that I only have 2 GB of RAM installed... bummer, huh. I know the answer is that I'm going to need more RAM, I just want to confirm. Or is there a tip, such as changing some settings, to make it run faster in the VM? I just want verification on what steps, if any, I can take to speed things up. Thanks, experts!

    Read the article

  • After upgrading to 13.10, I can't input Japanese and Chinese in Emacs

    - by oda
    I have just upgraded Ubuntu from 13.04 to 13.10. It seems big changes have been made to iBus. So I went to System Settings > Text Entry settings and added the "Chinese Pinyin" and "Japanese Anthy" input methods. They work well when I input Chinese or Japanese in a terminal or a .txt file. But when I want to input Chinese or Japanese in Emacs, even though I have enabled ibus-mode in the buffer and switched to the Chinese Pinyin or Japanese Anthy input method, it just outputs the English characters. Below is the iBus configuration in my .emacs. By the way, it worked well before I upgraded Ubuntu to 13.10 and Emacs to 24.3.1.

        (add-to-list 'load-path (concat my-emacs-path "/ibus-el-0.3.2"))
        ;;(setq ibus-python-shell-command-name "python2.7")
        (require 'ibus)
        ;; Turn on ibus-mode automatically after loading .emacs
        (add-hook 'after-init-hook 'ibus-mode-on)
        (setq ibus-cursor-color '("red" "blue" "limegreen"))

    Read the article

  • How much is modern programming still tied to underlying digital logic?

    - by New Talk
    First of all: I've got no academic background. I work primarily with Java and Spring, and I'm also fond of web programming and relational databases. I hope I'm using the right terms and that this vague question makes some sense. Today the following question came to my mind: how much is modern programming still tied to the underlying digital logic? By modern programming I mean concepts like OOP, AOP, Java 7, AJAX, … I hope you get the idea. Do they no longer depend on the digital logic with which computers work internally? Or is binary logic still ubiquitous when programming this way? If the inner workings of a computer changed overnight, would it matter, given that my programming techniques are already that abstract? P. S.: By digital logic I mean the physical representation of everything "inside" the computer as zeroes and ones. Changed "binary" to "digital".

    Read the article
