Daily Archives

Articles indexed Friday, October 5, 2012


  • Quarterly E-Business Suite Upgrade Recommendations: October 2012 Edition

    - by Steven Chan (Oracle Development)
    I've previously published advice on the general priorities for applying EBS updates. But what are your top priorities for major upgrades to EBS and its technology stack components? Here is a summary of our latest upgrade recommendations for E-Business Suite updates and technology stack components. These quarterly recommendations are based upon the latest updates to Oracle's product strategies, support deadlines, and newly-certified releases.

    Upgrade Recommendations for October 2012

    - EBS 11i users should upgrade to 12.1.3, or -- if staying on 11i -- should be on the minimum 11i patching baseline.
    - EBS 12.0 users should upgrade to 12.1.3, or -- if staying on 12.0 -- should be on the minimum 12.0 patching baseline.
    - EBS 12.1 users should upgrade to 12.1.3.
    - Oracle Database 10gR2 and 11gR1 users should upgrade to 11gR2 11.2.0.3.
    - EBS 12 users of Oracle Single Sign-On 10g should migrate to Oracle Access Manager 11g 11.1.1.5.
    - EBS 11i users of Oracle Single Sign-On 10g should migrate to Oracle Access Manager 10g 10.1.4.3.
    - Oracle Internet Directory 10g users should upgrade to Oracle Internet Directory 11g 11.1.1.6.
    - Oracle Discoverer users should migrate to Oracle Business Intelligence Enterprise Edition (OBIEE), Oracle Business Intelligence Applications (OBIA), or Discoverer 11g 11.1.1.6.
    - Oracle Portal 10g users should migrate to Oracle WebCenter 11g 11.1.1.6 or upgrade to Portal 11g 11.1.1.6.
    - All Windows desktop users should migrate from JInitiator and older Java releases to JRE 1.6.0_35 or later 1.6 updates.
    - All Firefox users should upgrade to Firefox Extended Support Release 10.

    Related Articles

    - Extended Support Fees Waived for E-Business Suite 11i and 12.0
    - On Database Patching and Support: A Primer for E-Business Suite Users
    - On Apps Tier Patching and Support: A Primer for E-Business Suite Users
    - EBS Support Information Center + Patching & Maintenance Advisor Available on My Oracle Support
    - What's the Best Way to Patch an E-Business Suite Environment?

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction

    More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.

    Hive Actions: Prepping for Pig

    In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie.

    I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.

        CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column1, column2)
        PARTITIONED BY (yr string)
        STORED AS ...
        LOCATION '/user/oracle/weather/historic';

    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of its long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.

        ALTER TABLE historic_weather ADD IF NOT EXISTS
        PARTITION (yr='2011') LOCATION '/user/oracle/weather/historic/yr=2011';

        INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
        SELECT w.stn, w.wban, w.weather_year, w.weather_month,
               w.weather_day, w.temp, w.dewp, w.weather
        FROM (
          FROM historic_weather
          SELECT TRANSFORM(...)
          USING '/path/to/hive/filters/ncdc_parser.py'
          AS stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
        ) w;

    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called ncdc_parse.hql.

    Starting Our Workflow

    Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. Coordinator jobs can take all the same actions of workflow jobs, but they can be started automatically, either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point:

        <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
          <start to="ParseNCDCData"/>
          <end name="end"/>
        </workflow-app>

    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.

        <action name="ParseNCDCData">
          <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <configuration>
              <property>
                <name>oozie.hive.defaults</name>
                <value>/user/oracle/weather_ooze/hive-default.xml</value>
              </property>
            </configuration>
            <script>ncdc_parse.hql</script>
          </hive>
          <ok to="WeatherMan"/>
          <error to="end"/>
        </action>

    There are a couple of things to note here:

    - I have to give the FQDN (or IP) and port of my JobTracker and NameNode.
    - I have to include a hive-default.xml file.
    - I have to include a script file.
    - The hive-default.xml and script file must be stored in HDFS.

    That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain:

    - workflow.xml
    - hive-default.xml (make sure this file contains your metastore connection data)
    - ncdc_parse.hql

    Adding Pig to the Ooze

    Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:

        <action name="WeatherMan">
          <pig>
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <script>weather_train.pig</script>
          </pig>
          <ok to="end"/>
          <error to="end"/>
        </action>

    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes.

    While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding jars to the distributed cache from Oozie's Pig Cookbook.

    Making the Workflow Work

    We've got a workflow defined and have collected all the components we'll need to run. But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:

        nameNode=hdfs://localhost:8020
        jobTracker=localhost:8021
        queueName=default
        weatherRoot=weather_ooze
        mapreduce.jobtracker.kerberos.principal=foo
        dfs.namenode.kerberos.principal=foo
        oozie.libpath=${nameNode}/user/oozie/share/lib
        oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
        outputDir=weather-ooze

    While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory.

    We're finally ready to submit our job! After all that work we only need to do a few more things:

    - Validate our workflow.xml
    - Copy our working directory to HDFS
    - Submit our job to the Oozie server
    - Run our workflow

    Let's do them in order. First validate the workflow:

        oozie validate workflow.xml

    Next, copy the working directory up to HDFS:

        hadoop fs -put working_dir /user/oracle/working_dir

    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument.

        oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit

    We've submitted the job, but we don't see any activity on the JobTracker. All I got was this funny bit of output:

        14-20120525161321-oozie-oracle

    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job.

        oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle

    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run". This will prep and run the workflow immediately.

    Takeaway

    So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
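    If you'd rather drive submission from code than from the CLI, Oozie also ships a Java client. Here is a minimal sketch of the same submit-then-start sequence using org.apache.oozie.client.OozieClient; the server URL, port, and application path are placeholders for this environment, and the calls should be checked against your Oozie release:

        import java.util.Properties;
        import org.apache.oozie.client.OozieClient;
        import org.apache.oozie.client.WorkflowJob;

        public class SubmitWeatherWorkflow {
            public static void main(String[] args) throws Exception {
                // Point the client at the Oozie server (placeholder URL/port).
                OozieClient oozie = new OozieClient("http://url.to.oozie.server:11000/oozie");

                // Mirror the entries we placed in job.properties.
                Properties conf = oozie.createConfiguration();
                conf.setProperty("nameNode", "hdfs://localhost:8020");
                conf.setProperty("jobTracker", "localhost:8021");
                conf.setProperty("queueName", "default");
                conf.setProperty("weatherRoot", "weather_ooze");
                conf.setProperty(OozieClient.LIBPATH, "${nameNode}/user/oozie/share/lib");
                conf.setProperty(OozieClient.APP_PATH, "${nameNode}/user/oracle/weather_ooze");

                // submit() creates the job in PREP status, like "oozie job ... -submit".
                String jobId = oozie.submit(conf);
                System.out.println("Submitted workflow: " + jobId);

                // start() redeems the ticket, like "oozie job ... -start <jobId>".
                oozie.start(jobId);

                // Poll the job status (PREP, RUNNING, SUCCEEDED, KILLED, ...).
                WorkflowJob job = oozie.getJobInfo(jobId);
                System.out.println("Status: " + job.getStatus());
            }
        }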

    Read the article

  • At the Java DEMOgrounds - JavaFX

    - by Janice J. Heiss
    JavaFX has made rapid progress in the last year, as is evidenced by the wealth of demos on display. A few questions appear to be prominent in the minds of JavaFX enthusiasts. Here are some questions with answers provided by Oracle's JavaFX team.

    When will the rest of the JavaFX code be available in open source?
    Oracle has started to open source JavaFX. The existing platform code will finish being committed to OpenJFX by the end of the year.

    Why should I use JavaFX instead of HTML5?
    We see JavaFX as complementary to HTML5, and most companies we talk to react positively once they understand how they can benefit from a hybrid solution. As most HTML5 developers will tell you, the biggest obstacle to deploying HTML5 applications is fragmentation. JavaFX offers a convenient way to render HTML and JavaScript within its WebView component, which provides the same level of quality and features across Windows, Mac, and Linux. Additionally, JavaScript in WebView can make calls into the Java code, and vice versa, allowing developers to tap into the best of both worlds.

    What is the market penetration of JavaFX?
    It is currently limited, as JavaFX only became available on Mac and Linux in August, but we expect JavaFX to be present on millions of desktop-type systems now that JavaFX is included as part of the JRE. We have also significantly lowered the level of effort required to deploy an application bundling the JRE and JavaFX runtime libraries. Finally, we are seeing a lot of interest from companies operating in the embedded market, who have found it hard to develop compelling UIs with existing technologies.

    Below are summaries of the JavaFX demos on display at JavaOne 2012:

    JavaFX Ensemble
    Ensemble is a collection of over 100 JavaFX samples packaged as a JavaFX application. This demo is especially useful to those new to JavaFX, or those not familiar with its latest features (e.g. canvas, color picker). Ensemble is the reference for getting familiar with JavaFX functionality. Each sample can be run from within Ensemble, and the API for each sample, as well as the source code, are available alongside the sample. The samples' source code can be saved as a NetBeans project for convenience, or can be copied as is into any other Java IDE. The version of Ensemble shown is packaged as a native Windows application, including the JRE and JavaFX libraries. It was created with the JavaFX packager, which provides multiple packaging options and frees developers from the cumbersome and error-prone process of packaging a Java application.

    FX Experience Tools
    FX Experience Tools is a JavaFX application that provides different utilities to create new skins for your JavaFX applications. One of the most powerful features of JavaFX is the ability to skin applications via CSS. Since not all Java developers are familiar with CSS, these utilities are a great starting point for creating custom skins. JavaFX allows developers to easily customize the look and feel of their applications through CSS, and FX Experience Tools makes it easy to create new themes for JavaFX applications even if you are not familiar with CSS. FX Experience Tools is a JavaFX application packaged as a native application including the JRE and JavaFX runtime libraries, and it shows how this type of deployment simplifies the packaging of Java applications without requiring developers to master the intricacies of Java application packaging. The download site for FX Experience Tools is http://fxexperience.com/2012/03/announcing-fx-experience-tools/

    JavaFX Scene Builder
    JavaFX Scene Builder is a visual layout tool that lets users quickly design the UI of a JavaFX application without coding. Users can drag and drop UI components, modify their properties, and apply style sheets, and the FXML code for the layout is automatically generated in the background. The result is an FXML file that can then be combined with a Java project by binding the UI to the application's logic. Developers can easily create user interfaces for their applications, as well as separate the application's UI from the application logic for easier maintenance. Attendees can get this app by going to javafx.com and checking the link at the top of the "Overview" page. Scene Builder allows developers to easily lay out JavaFX UI controls, charts, shapes, and containers, so that you can quickly prototype user interfaces. It generates FXML, an XML-based markup language that enables users to define an application's user interface separately from the application logic. Scene Builder can be used in combination with any Java IDE, but is more tightly integrated with NetBeans IDE. It is written as a JavaFX application, with native desktop integration on Windows and Mac OS X -- a perfect example of a JavaFX application packaged as a native application. Scene Builder is available for your preferred development platform: besides the GA release on Windows and Mac, a Developer Preview of Scene Builder for Linux has just been made available.

    Scenic View
    Scenic View is a tool that can be used to understand the current state of your application UI, and to easily manipulate properties of the scenegraph without having to keep editing your code. Creating UIs is a complex process, and it can be hard and tedious to detect issues, edit the code, and then compile it to test the app again. Scenic View is a great diagnostics tool that helps developers identify these issues and correct them at runtime. Attendees can get Scenic View by going to javafx.com, selecting the "Community" tab, and clicking the link under the "Third Party Tools and Utilities" section. Scenic View allows developers to easily examine the state of a JavaFX application scenegraph while the application is running. Some of the latest features added to Scenic View include event monitoring, javadoc browsing, and contextual menus. The download site for Scenic View is available here: http://fxexperience.com/scenic-view/

    Conference Tour
    Conference Tour is an application that lets users discover some of the major Java conferences throughout the world. The Conference Tour application shows how simple it is to mix JavaFX and HTML5 into a single, interactive application. JavaFX includes a Web engine based on WebKit that provides a consistent web interface to render HTML5 across operating systems, within a JavaFX application. JavaFX features a bi-directional bridge that allows Java APIs to call JavaScript within WebView, or allows JavaScript to make calls to Java APIs. This allows developers to leverage the best of both worlds. Java EE developers can take advantage of WebView and the JavaScript-Java bridge to let their HTML clients seamlessly bypass the Web browser's sandbox and access native system resources, providing a richer user experience.

    FXMediaPlayer
    FXMediaPlayer is an application that lets developers check different media functionality in JavaFX, such as the synthesizer or support for HTTP Live Streaming (HLS). This demo shows how developers can embed video content in their Java applications. JavaFX leverages the underlying video (e.g., H.264) and audio (e.g., AAC) codecs on the user's computer. JavaFX APIs allow developers to interact with the video content (e.g. play/pause, or programmable markers). Some of the latest media features introduced in JavaFX 2.2 include HTTP Live Streaming (HLS).

    Obviously there is a lot for JavaFX enthusiasts to chew on!
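    As a rough sketch of the JavaScript-Java bridge mentioned above: the names JavaBridge/javaApp and the embedded page are invented for the example, and the WebEngine/JSObject calls are the standard JavaFX 2.x API, so verify them against your release.

        import javafx.application.Application;
        import javafx.beans.value.ChangeListener;
        import javafx.beans.value.ObservableValue;
        import javafx.concurrent.Worker;
        import javafx.scene.Scene;
        import javafx.scene.web.WebEngine;
        import javafx.scene.web.WebView;
        import javafx.stage.Stage;
        import netscape.javascript.JSObject;

        public class WebViewBridgeDemo extends Application {

            // A plain Java object that the page's JavaScript will be able to call.
            public static class JavaBridge {
                public void log(String message) {
                    System.out.println("Called from JavaScript: " + message);
                }
            }

            private final JavaBridge bridge = new JavaBridge();

            @Override
            public void start(Stage stage) {
                WebView webView = new WebView();
                final WebEngine engine = webView.getEngine();

                // Once the page has loaded, publish the Java object as "javaApp"
                // and call back into the page from Java.
                engine.getLoadWorker().stateProperty().addListener(new ChangeListener<Worker.State>() {
                    @Override
                    public void changed(ObservableValue<? extends Worker.State> obs,
                                        Worker.State oldState, Worker.State newState) {
                        if (newState == Worker.State.SUCCEEDED) {
                            JSObject window = (JSObject) engine.executeScript("window");
                            window.setMember("javaApp", bridge);                 // JS -> Java
                            engine.executeScript("javaApp.log(document.title)"); // Java -> JS -> Java
                        }
                    }
                });

                engine.loadContent("<html><head><title>Bridge demo</title></head><body>Hello</body></html>");
                stage.setScene(new Scene(webView, 400, 300));
                stage.show();
            }

            public static void main(String[] args) {
                launch(args);
            }
        }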

    Read the article

  • Backup File Naming Convention

    - by Andrew Kelly
    I have been asked this many times before, and again just recently, so I figured why not blog about it. None of the information outlined here is rocket science or even new, but it is an area that I don't think people put enough thought into before implementing. Sure, everyone chooses some format, but it often doesn't go far enough, in my opinion, to get the most bang for the buck. This is the format I prefer to use: ServerName_InstanceName_BackupType_DBName_DateTimeStamp.xxx ServerName_InstanceName...(read more)
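    As a quick illustration of that naming convention (a Java sketch with made-up sample values and an assumed timestamp pattern; the original post is about SQL Server backups, so adapt the pieces to your own environment):

        import java.text.SimpleDateFormat;
        import java.util.Date;

        public class BackupFileName {
            // Build a name following ServerName_InstanceName_BackupType_DBName_DateTimeStamp.xxx
            // (sample values and the "yyyyMMdd_HHmmss" pattern are assumptions for illustration).
            static String backupName(String server, String instance, String type, String db, String ext) {
                String stamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
                return String.join("_", server, instance, type, db, stamp) + "." + ext;
            }

            public static void main(String[] args) {
                // e.g. PRODSQL01_SQL2008_FULL_SalesDB_20121005_231500.bak
                System.out.println(backupName("PRODSQL01", "SQL2008", "FULL", "SalesDB", "bak"));
            }
        }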

    Read the article

  • SQLRally Nordic 2012 – session material

    - by Hugo Kornelis
    As some of you might know, I have been to SQLRally Nordic 2012 in Copenhagen earlier this week. I was able to attend many interesting sessions, I had a great time catching up with old friends and meeting new people, and I was allowed to present a session myself. I understand that the PowerPoint slides and demo code I used in my session will be made available through the SQLRally website – but I don’t know how long it will take the probably very busy volunteers to do so. And I promised my attendees...(read more)

    Read the article

  • Java and web pages

    - by Filippo
    Hello everyone, and thank you in advance for the answers. I have a question concerning not how to do something, but which instruments to use. Let's say I want to write a simple application in Java that connects to a news website (e.g. CNN), parses the HTML document, and prints the news on screen. Another example: my application retrieves and prints on screen soccer results from Eurosports. What do I need to do that? External libraries? Or maybe what I'm looking for is already included in Java EE? Could this be helpful: http://jsoup.org/ ? Thank you everyone again and have a nice day.
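    For what it's worth, a library like jsoup on its own would cover this kind of scraping. A minimal sketch (the URL and the CSS selector are placeholders; you would need to inspect the target page's markup to pick a real selector):

        import org.jsoup.Jsoup;
        import org.jsoup.nodes.Document;
        import org.jsoup.nodes.Element;

        public class HeadlineFetcher {
            public static void main(String[] args) throws Exception {
                // Fetch and parse the page (placeholder URL).
                Document doc = Jsoup.connect("http://edition.cnn.com/").get();

                // Placeholder selector: adjust to whatever element actually wraps the headlines.
                for (Element headline : doc.select("h3 a")) {
                    System.out.println(headline.text());
                }
            }
        }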

    Read the article

  • What is the most efficient way to study multiple languages, frameworks, and APIs as a developer?

    - by Akromyk
    I know there are those out there who have read a slew of books on a specific technology and only code in that one particular language, but this question is aimed at those who need to bounce around between multiple technologies and still manage to be productive. What is the most efficient way to study multiple languages, frameworks, and APIs as a developer without becoming a cheap Swiss Army knife? And how much time should one dedicate to a particular subject before moving on to another?

    Read the article

  • How often is seq used in Haskell production code?

    - by Giorgio
    I have some experience writing small tools in Haskell and I find it very intuitive to use, especially for writing filters (using interact) that process their standard input and pipe it to standard output. Recently I tried to use one such filter on a file that was about 10 times larger than usual and I got a Stack space overflow error. After doing some reading (e.g. here and here) I have identified two guidelines to save stack space (experienced Haskellers, please correct me if I write something that is not correct): avoid recursive function calls that are not tail-recursive (this is valid for all functional languages that support tail-call optimization), and introduce seq to force early evaluation of sub-expressions so that expressions do not grow too large before they are reduced (this is specific to Haskell, or at least to languages using lazy evaluation). After introducing five or six seq calls in my code my tool runs smoothly again (also on the larger data). However, I find the original code was a bit more readable. Since I am not an experienced Haskell programmer I wanted to ask whether introducing seq in this way is a common practice, and how often one normally sees seq in Haskell production code. Or are there techniques that make it possible to avoid using seq so often and still use little stack space?

    Read the article

  • When to use identity comparison instead of equals?

    - by maaartinus
    I wonder why anybody would want to use identity comparison for fields in equals, like here (Java syntax):

        class C {
            private A a;

            public boolean equals(Object other) {
                // standard boring prelude
                if (other == this) return true;
                if (other == null) return false;
                if (other.getClass() != this.getClass()) return false;
                C c = (C) other;

                // the relevant part
                if (c.a != this.a) return false;

                // more tests... and then
                return true;
            }

            // getters, setters, hashCode, ...
        }

    Using == is a bit faster than equals and a bit shorter (due to no need for null tests), too, but in what cases (if any) would you say it's really better to use == for fields inside equals?
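    One case where == on a field is arguably the right call is when the field's type guarantees at most one instance per value, enum constants being the obvious example. A small illustrative sketch (the Order class here is invented for the example):

        public final class Order {
            enum Status { OPEN, SHIPPED, CANCELLED }

            private final long id;
            private final Status status;   // enum: at most one instance per constant

            Order(long id, Status status) {
                this.id = id;
                this.status = status;
            }

            @Override
            public boolean equals(Object other) {
                if (other == this) return true;
                if (!(other instanceof Order)) return false;
                Order o = (Order) other;
                // '==' is safe (and null-safe) here because enum constants are singletons;
                // for most other reference fields you would want Objects.equals(...) instead.
                return this.id == o.id && this.status == o.status;
            }

            @Override
            public int hashCode() {
                return java.util.Objects.hash(id, status);
            }
        }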

    Read the article

  • What are some good resources for learning about file systems? [closed]

    - by Daniel
    I'd like to learn about file system design at a very detailed level. I'm currently in a graduate level operating systems course, and we're currently going over file systems. We mostly discuss papers and such, but our semester long project is to implement a log-structured file system using fuse and a virtual disk. Are there any good books that focus heavily on file system design and implementation? I have some conceptual clouding on things that seem very basic such as "when we say that an inode has pointers to blocks, do we mean anything besides the inode keeping track of block numbers? Is there any other format for 'disk pointers'?" I'm actually looking at file system design to start my career, so I'm probably going to try to implement a more traditional file system with fuse and our virtual disk format after this course is over.

    Read the article

  • Basics of ERP for dummies

    - by DarenW
    A situation has arisen where (if I don't scream and run away) I will be involved in an ERP system. This project will be using OpenERP specifically. My background is entirely science/engineering/music/games/art/whatever. I've never set foot in the realm of business systems or anything describable with the word "enterprise". What is a good introduction to the whole ERP concept, OpenERP, and business systems in general, suitable for those with flat zero experience in that world? The ideal intro would explain, from no assumptions, what the main ideas are, the terminology, the style of work and thinking of people in that world, and maybe some concrete suggestions for how one can tinker around with a copy of OpenERP to gain basic familiarity.

    Read the article

  • How to justify technology choice to customer?

    - by suslik
    When freelancing / contracting, a customer will typically specify functional requirements, acceptance criteria, etc., and the implementation details are in the developer's hands. As a developer, your technology choice is a balancing act between what you are most familiar with, what the right tool appears to be technologically, the ease of finding coders with this skill and their expense, and a few other factors. I'm in a situation where I have evaluated my options and selected a somewhat obscure open source technology that I believe will get me there faster, more easily, and in a way that is more maintainable in the long term. It's different, but I think that that is what the requirements call for. The customer has inquired about what I'm going to use to build the solution, and now they are concerned because they've never heard of it before. The reasons for my choice are mostly technical, whereas the customer isn't (but they know some buzzwords!). Explaining these technical reasons will not be easy, and I am not sure that is the right way to approach this situation anyway. And that's my question: what is the right way to approach this situation so as to cause the least amount of headache for everyone involved?

    Read the article

  • Should Equality be commutative within a Class Hierarchy?

    - by vossad01
    It is easy to define the Equals operation in ways that are not commutative. When providing equality against other types, there are obviously situations (in most languages) where equality not being commutative is unavoidable. However, within one's own inheritance hierarchy, where the root base class defines an equality member, a programmer has more control. Thus you can create situations where (A = B) does not imply (B = A), with A and B both deriving from base class T, substituting the = with the appropriate variation for a given language (.Equals(_), ==, etc.). That seems wrong to me; however, I recognize I may be biased by my background in mathematics. I have not been in programming long enough to know what is standard/accepted/preferred practice. Do most programmers just accept that .Equals(_) may not be commutative and code defensively? Do they expect commutativity and get annoyed when it is absent? In short, when working in a class hierarchy, should effort be made to ensure equality is commutative?
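    The classic demonstration is a subclass that adds a field to its equality check, roughly the Point/ColorPoint example discussed in Effective Java. A sketch of the asymmetry in Java syntax (class names are only for illustration, and the equals methods are deliberately broken to show the problem):

        public class Point {
            private final int x, y;

            public Point(int x, int y) { this.x = x; this.y = y; }

            @Override
            public boolean equals(Object other) {
                if (!(other instanceof Point)) return false;
                Point p = (Point) other;
                return p.x == x && p.y == y;
            }

            @Override
            public int hashCode() { return 31 * x + y; }
        }

        class ColorPoint extends Point {
            private final String color;

            ColorPoint(int x, int y, String color) { super(x, y); this.color = color; }

            @Override
            public boolean equals(Object other) {
                // Only equal to other ColorPoints with the same color.
                if (!(other instanceof ColorPoint)) return false;
                return super.equals(other) && ((ColorPoint) other).color.equals(color);
            }
        }

        class Demo {
            public static void main(String[] args) {
                Point p = new Point(1, 2);
                ColorPoint cp = new ColorPoint(1, 2, "red");
                System.out.println(p.equals(cp));   // true  (Point only compares coordinates)
                System.out.println(cp.equals(p));   // false (ColorPoint demands a ColorPoint)
                // Symmetry is broken; the usual fixes are getClass()-based equality or composition.
            }
        }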

    Read the article

  • Time tracking and payment registration architecture

    - by egis
    Title might be a little bit incorrect. :) Anyway, I'm building software where employees input the time they worked per day (work hours) and the employer "pays" for this time. "Payment" is done outside this system, so the employer just "confirms" (via a checkbox or something like that) which work hours are paid. So the question is - what is the best way (both UI- and data-storage-wise) to implement this? At the moment I have this idea: the employee selects a week and manually (with some JavaScript helpers, like "fill the same time for all days") inputs work hours for every day of the week. The employer confirms payment the same way the employee inputs data (selects a week, confirms each day). Data is saved in the DB as unix timestamps (one day per table row). The problem is 14 inputs (7 days * ("hours from" + "hours to" inputs)), yet this approach seems fairly easy to implement. Maybe I'm overlooking something and this can be done differently and better? Maybe someone has an example of already working software?
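    A rough sketch of the one-row-per-day storage idea described above (all names are invented for illustration; map them onto whatever persistence layer you use):

        // One time-sheet row: one employee, one day, a from/to pair, and a paid flag
        // that the employer flips when confirming payment.
        public class WorkDay {
            long employeeId;
            long workDate;      // unix timestamp at midnight of the day
            int  minutesFrom;   // e.g.  9:00 -> 540
            int  minutesTo;     // e.g. 17:30 -> 1050
            boolean paid;       // set by the employer when the day is confirmed as paid

            int minutesWorked() {
                return minutesTo - minutesFrom;
            }
        }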

    Read the article

  • Come up with a real-world problem in which only the best solution will do (a problem from Introduction to Algorithms) [closed]

    - by Mike
    EDITED (I realized that the question certainly needs context.) Problem 1.1-5 in the book Introduction to Algorithms by Thomas Cormen et al. is: "Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is 'approximately' the best is good enough." I'm interested in its first statement. As I understand it, it asks us to name a real-world problem where only the exact solution will work, as opposed to a real-world problem where a good-enough solution will be OK. So what is the difference between an exact and a good-enough solution?

    Consider some physics problem, for example the simulation of fluid flow in a permeable medium. To make this simulation happen, some simplifying assumptions have to be made when deriving a mathematical model; otherwise the model becomes hopelessly complex and unsolvable. Virtually any particle in the universe has some influence on the fluid flow, but not all particles are equal: those that form the permeable medium are much more influential than the ones located light years away. Then, when the mathematical model needs to be solved, an exact solution can rarely be found unless the mathematical model is simple enough (which probably means the model isn't close to reality). We take an approximate numerical method and, after hours of coding and days of verification, come up with the program or algorithm which is a solution. And if the model and the algorithm give results close to the real problem to some degree, that is a good-enough solution.

    It's worth noting the difference between an exact-solution algorithm and an exact computation result. When considering real-world problems and real-world computing machines, I believe that the solutions to all physical problems involving any calculation cannot be exact, because universal physical constants are represented approximately in the computer. All numbers are represented with limited precision, limited at least by the amount of memory available to the computing machine. I can imagine plenty of problems where a good-enough, good-to-some-degree solution will work, like train scheduling, automated trading, satellite orbit calculation, or health care expert systems. In those cases exact solutions can't be derived, due to constraints on computation time, limitations in computer memory, or the nature of the problems.

    I googled this question and like what this guy suggests: there are kinds of mathematical problems that need exact solutions (a little note here: because the question is taken from the book Introduction to Algorithms, the term "solution" means an algorithm or a program, which in this case gives an exact answer on each input). But that's probably more of theoretical interest. So I would like to narrow the question down to: what are the real-world practical problems where only the best (exact) solution algorithm or program will do (and not a good-enough solution)? There are problems like the breaking of cryptographic ciphers where only an exact solution matters in practice; yet, also in practice, deciphering without knowing the secret should take a reasonable amount of time. Returning to the original question, that is a problem where a good-enough (fast-enough) solution will do; there's no practical need for an instant crack, though it would be desirable. So the quality of "best" can be understood in any sense: exact, fastest, requiring the least memory, having the minimal possible network traffic, etc. And still I want this question to be theoretical if possible.
    By that I mean there may be an example of a computer X that has a limited resource R of amount Y, where the best solution to problem P is the one that takes no more than the available Y for inputs of size N*Y. But that's the problem of finding a solution for P on computer X, which is... well, good enough. My final thought is that we live in a world where programming solutions to practical purposes are only required to be good enough - in rare cases really very, very good, but still not the best ones. Isn't it so? :) If not, can you provide an example? Or can you name any such unsolved problem of practical interest?

    Read the article

  • How do you manage all of the information you have learned and found? [closed]

    - by B Seven
    Possible Duplicate: How do you manage your knowledge base? What do you use for personal note taking to keep track of everything you learn? Are you always Googling or searching StackOverflow to answer the same questions? Or searching for and copying and pasting existing code? I feel like I have a poor memory, especially remembering things like syntax. Are there any knowledge management systems that would work well for a programming language or operating system? It would be great if there were a way to save everything I learn in an easy to search system. Does such a thing exist? Maybe you would be able to search by question (How to sort an array?, How to set static IP?), or by tag (sort, array, enumeration, iterator, IP). I know it would be easy to develop my own system, but I thought it would be great to learn what works for other people.

    Read the article

  • Could spending time on Programmers.SE or Stack Overflow be a substitute for good programming books for a non-beginner?

    - by Atul Goyal
    Could spending time (and actively participating) on Programmers.SE and Stack Overflow help me improve my programming skills anywhere near as much as spending time reading a book like Code Complete 2 (which would otherwise be next on my reading list)? OK, maybe the answer for someone who is just beginning with programming would be a straight no, but I'd like to add that I'm asking this question in the context of someone who is familiar with programming languages and wants to improve his programming skills. I was reading this question on SO, and this book has also been recommended by many others (including Jeff and Joel). To be more specific, I'd add that even though I program in C, Java, Python, etc., I'm still not happy with my coding skills, and reading the review of CC2 I realized I still need to improve a lot. So, basically, I want to know what's the best way for me to improve my programming skills: spend more time here and on SO, or continue with CC2 and come here as and when time permits?

    Read the article

  • How to find a programming mentor?

    - by Dvole
    I decided to learn programming. I've been reading SO for a few days, and I think I will start with C++, based on some articles I read. I am aware of loops, arrays, program logic, and objects a little, and I need someone to look over my work and help me with the small questions I get when doing my first projects. So here is the question: where do I find such a person? I don't have any friends who program, either. EDIT: 2 years later, I am still looking for a mentor. I have not been coding actively; I just started again 3 months ago. I work on Objective-C and iOS programming and game programming with Cocos2d. If you want to become my mentor, drop me a line or comment.

    Read the article

  • Input Signal Out of Range 1920x1080

    - by Zach
    I've recently installed Ubuntu 12.04 on my computer. Now, every time I boot and when I shut down, my monitor goes blank and says "Input Signal Out of Range - Change Setting to 1920x1080 60Hz." Once the computer gets to the login screen, it's okay again. This problem also happens when I try to open any 3D app. My graphics card is an NVIDIA GeForce 6150 SE. I tried updating the drivers, but that broke everything and I had to reinstall Ubuntu. Any help?

    Read the article

  • Why does my Canon printer print document pages at ~25% size?

    - by Erlend Alvestad
    I'm using a Canon PIXMA MP250, and I'm running 12.04 LTS. The printer's been working fine for the couple of months I've been a Linux user. That is, until today. I just printed a 1-page ODT document from LibreOffice. Instead of filling the sheet, the document occupies only a little less than 25% of the paper, in the top left corner, and the text has also shrunk to something like 5pt. I looked at the paper format settings for the document and printer, which were set to "letter". I changed these to "A4", hoping that would solve the issue. There was no change, however. I tried printing a different document in LibreOffice and got the same result. I tried exporting the original document to PDF and printing it through Document Viewer. Same result. I then printed a web page from Google Chrome. No formatting problems there. In all cases the print preview looks fine.

    Read the article

  • Unable to install scanner software for Canon Scanner

    - by Gerrie Jooste
    I am getting the following message when I try to do a CanoScan 5600F installation and setup from CD-ROM:

        Archive:  /media/CANOSCAN/MSETUP4.EXE
        [/media/CANOSCAN/MSETUP4.EXE]
          End-of-central-directory signature not found.  Either this file is not
          a zipfile, or it constitutes one disk of a multi-part archive.  In the
          latter case the central directory and zipfile comment will be found on
          the last disk(s) of this archive.
        zipinfo:  cannot find zipfile directory in one of /media/CANOSCAN/MSETUP4.EXE or
                  /media/CANOSCAN/MSETUP4.EXE.zip, and cannot find /media/CANOSCAN/MSETUP4.EXE.ZIP, period.

    Read the article

  • How to read default key value with dconf or gsettings?

    - by Zta
    I would like to know the default value of a dconf/gsettings key. My question is a follow-up of the question below: Where can I get a list of SCHEMA / PATH / KEY to use with gsettings? What I'm trying to do is create a script that reads all my personal preferences so I can back them up and restore them. I plan to iterate through all keys, like the script above, see which keys have been changed from their default value, and make a note of these so they can be restored later. I see that dconf-editor displays the keys' default values, but I'd very much like to script this. Also, I don't see how parsing the schemas in /usr/share/glib-2.0/schemas/ can be automated. Maybe someone can help? gsettings get-default|list-defaults would be nice =) (Geesh, it was much easier in the old days when you just kept your ~/.somethingrc in subversion ... =\ ) Based on the answer given below, I've updated the script to print the schema, key, key's data type, default value, and actual value:

        #!/bin/bash
        for schema in $(gsettings list-schemas | sort); do
          for key in $(gsettings list-keys $schema | sort); do
            type="$(gsettings range $schema $key | tr "\n" " ")"
            default="$(XDG_CONFIG_HOME=/tmp/ gsettings get $schema $key | tr "\n" " ")"
            value="$(gsettings get $schema $key | tr "\n" " ")"
            echo "$schema :: $key :: $type :: $default :: $value"
          done
        done

    This workaround basically covers what I need. I'll continue working on the backup script from here.

    Read the article

  • Wired connection not being recognized

    - by maxifick
    I'm on Ubuntu Maverick (10.10). I've read a few threads regarding wired connection problems, but haven't found a solution yet. The problem appears after I connect to a wireless network. When I disconnect wireless and plug in an internet cable, the wired connection is not recognized at all. Even the socket appears dead (there are no diodes flashing). The only solution so far seems to be restarting the computer. Network Manager then tries to connect to a Wi-Fi, but the wired connection is listed and working. I've tried sudo restart network-manager, but that doesn't solve anything. After a while, available wireless networks start appearing, but the wired still doesn't. Any ideas? Thanks in advance.

    Edit: Here is the dmesg output after switching off Wi-Fi and then plugging the internet cable.

        [18200.623543] Restarting tasks ... done.
        [18200.648422] video LNXVIDEO:00: Restoring backlight state
        [18200.707580] sky2 0000:02:00.0: eth0: phy I/O error
        [18200.707715] sky2 0000:02:00.0: eth0: phy I/O error
        [18200.707819] sky2 0000:02:00.0: eth0: phy I/O error
        [18200.707922] sky2 0000:02:00.0: eth0: phy I/O error
        [18200.708025] sky2 0000:02:00.0: eth0: phy I/O error
        [18200.708127] sky2 0000:02:00.0: eth0: phy I/O error
        [18200.708229] sky2 0000:02:00.0: eth0: phy I/O error
        [18200.708332] sky2 0000:02:00.0: eth0: phy I/O error
        [18200.708824] sky2 0000:02:00.0: eth0: enabling interface
        [18200.709587] ADDRCONF(NETDEV_UP): eth0: link is not ready
        [18202.662422] EXT4-fs (sda9): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0
        [18203.324061] EXT4-fs (sda9): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0
        [18211.193137] eth1: no IPv6 routers present
        [18212.844649] usb 5-1: new low speed USB device using ohci_hcd and address 5
        [18213.017235] input: USB Optical Mouse as /devices/pci0000:00/0000:00:13.0/usb5/5-1/5-1:1.0/input/input16
        [18213.017499] generic-usb 0003:0461:4D17.0004: input,hidraw0: USB HID v1.11 Mouse [USB Optical Mouse] on usb-0000:00:13.0-1/input0

    After system restart, dmesg says this:

        [   19.802126] sky2 0000:02:00.0: eth0: enabling interface
        [   19.802394] ADDRCONF(NETDEV_UP): eth0: link is not ready
        [   20.812533] device eth0 entered promiscuous mode
        [   21.495547] sky2 0000:02:00.0: eth0: Link is up at 100 Mbps, full duplex, flow control rx
        [   21.495677] sky2 0000:02:00.0: eth0: Link is up at 100 Mbps, full duplex, flow control rx
        [   21.496574] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready

    Read the article

  • Why doesn't Java's JMF work in Linux?

    - by Visruth CV
    I have to do some image processing in Java on Linux, and I chose to use JMF for camera (webcam) access. But my program is not able to access the camera, although JMF works well on Windows. I downloaded JMF from oracle.com and tried to install it on Ubuntu 10.10 (the Maverick, released in October 2010 and supported until April 2012). The downloaded file was a .bin file. I got the output below (the last part of the output) when I tried the command provided by Oracle, /bin/sh ./jmf-2_1_1e-linux-i586.bin:

        For inquiries please contact: Sun Microsystems, Inc.
        4150 Network Circle, Santa Clara, California 95054.
        LFI# 129621/Form ID#011801

        Do you agree to the above license terms? [yes or no]
        yes
        Permit recording from an applet? (see readme.html) [yes or no]
        yes
        Permit writing local files from an applet? (recommend no, see readme.html) [yes or no]
        yes
        Unpacking...
        tail: cannot open `+309' for reading: No such file or directory
        Extracting...
        ./install.sfx.4140: 1: cannot open ==: No such file
        ./install.sfx.4140: 1: ==: not found
        ./install.sfx.4140: 3: Syntax error: ")" unexpected
        chmod: cannot access `JMF-2.1.1e/bin/jmstudio': No such file or directory
        chmod: cannot access `JMF-2.1.1e/bin/jmfregistry': No such file or directory
        chmod: cannot access `JMF-2.1.1e/bin/jmfinit': No such file or directory
        ./jmf-2_1_1e-linux-i586.bin: 305: JMF-2.1.1e/bin/jmfinit: not found
        /bin/cp: cannot stat `JMF-2.1.1e/lib/jmf.properties': No such file or directory
        Done.

    When I try the same command again, I get nothing in the terminal (console):

        visruth@laptop:~/Desktop/mobileapps$ /bin/sh ./jmf-2_1_1e-linux-i586.bin
        visruth@laptop:~/Desktop/mobileapps$

    Now I'm not sure whether it is properly installed or not. Whatever the case, I'm not getting camera access in my program. I checked the driver of the camera; it is available in the OS itself, I think, because other software is able to access the camera (webcam). I tried it on both a desktop and a laptop, but no effect... Is there any solution for the problem?
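    One quick sanity check, independent of the installation output above, is to ask JMF which capture devices it has registered. A minimal sketch using the standard JMF classes (passing null as the format filter simply lists everything; an empty list would suggest the JMF registry was never populated, which is an assumption worth verifying for this broken install):

        import java.util.Vector;
        import javax.media.CaptureDeviceInfo;
        import javax.media.CaptureDeviceManager;

        public class ListCaptureDevices {
            public static void main(String[] args) {
                // null = no format filter: list every capture device JMF has registered.
                Vector devices = CaptureDeviceManager.getDeviceList(null);
                if (devices.isEmpty()) {
                    System.out.println("JMF sees no capture devices - the registry is probably empty.");
                } else {
                    for (Object d : devices) {
                        CaptureDeviceInfo info = (CaptureDeviceInfo) d;
                        System.out.println(info.getName() + " -> " + info.getLocator());
                    }
                }
            }
        }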

    Read the article

  • SAPPHIRE HD 7770 no audio on HDMI TV display

    - by zeroconf
    I have a SAPPHIRE HD 7770 and cannot get audio over HDMI to work. http://www.sapphiretech.com/presentation/product/?cid=1&gid=3&sgid=1159&lid=1&pid=1452&leg=0 I use Ubuntu 12.04 LTS 64-bit with all current updates. I tried, in /etc/default/grub:

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.audio=1"

    ... it didn't help. That's probably because I use the proprietary driver; this option seems to be for the open source driver. I use the driver that jockey-gtk (Additional Drivers) offered me:

        ATI/AMD PROPRIETARY FGLRX GRAPHICS DRIVER <---- I installed that one
        ATI/AMD PROPRIETARY FGLRX GRAPHICS DRIVER (post-release update)

    So I installed the first one, because installing the second version failed. Everything went fine, but there is no sound on the TV display over HDMI. Even the GNOME sound mixer doesn't show an HDMI choice. I am using a 32" Samsung B530 LCD TV - http://www.lcdbesttv.com/2010/02/samsung-b530-series-lcd-tv/ I have an Asus P8Z77-M motherboard - http://www.asus.com/Motherboards/Intel_Socket_1155/P8Z77M/ - which also has integrated HDMI. When I plug the HDMI cord into that port, the GNOME sound mixer does show an HDMI audio choice, but it doesn't work either. I have set in the BIOS that I use the SAPPHIRE HD 7770 on PCIe. My lspci output:

        00:00.0 Host bridge: Intel Corporation Ivy Bridge DRAM Controller (rev 09)
        00:01.0 PCI bridge: Intel Corporation Ivy Bridge PCI Express Root Port (rev 09)
        00:02.0 Display controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
        00:14.0 USB controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
        00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
        00:1c.5 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 6 (rev c4)
        00:1c.6 PCI bridge: Intel Corporation 82801 PCI Bridge (rev c4)
        00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04)
        00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Device 683d
        01:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Device aab0
        03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 09)
        04:00.0 PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge (rev 03)

    Read the article
