Search Results

Search found 17345 results on 694 pages for 'next'.


  • Introducing the Oracle Parcel Service – Example/Reference Application

    - by Jeffrey West
    Over the last few weeks the product management team has been working on a webcast series that is airing in EMEA. It is a 5-episode series where we talk about different features of WebLogic and show how to build applications that take advantage of these features. Each session is focused on a different layer of the technology stack, and you can find the schedule below. The application we are building in this series is named the 'Oracle Parcel Service'. It is an example application and not a product of Oracle by any stretch of the imagination. Over the next few weeks we will be finalizing the code and will release it for you to check out. For updates, request membership to the Oracle Parcel Service project on SampleCode.oracle.com: https://www.samplecode.oracle.com/sf/projects/oracle-parcel-svc/.

    Here are some of the key features that we are highlighting:
    - JPA 2.0 (new in WebLogic 10.3.4) with EclipseLink
    - Coherence TopLink Grid Level 2 cache for JPA
    - JAX-RS 1.0 (new in WebLogic 10.3.4) for RESTful services
    - Lightweight jQuery web UI for consuming RESTful services
    - JSF 2.0 (new in WebLogic 10.3.4) utilizing PrimeFaces
    - EJB 3.0
    - Spring-WS web services
    - JAX-WS web services
    - Spring MDPs for event-driven architectures
    - Java MDBs for event-driven architectures
    - Partitioned distributed topics for event-driven architectures

    Accessing the Code on SampleCode.Oracle.com
    You will need to log in using your Oracle.com username and password; if you have not created an account, you will need to do so. It's a simple one-page form, and we don't bother you with too many emails. Please join the project to be kept up to date on changes to the code and new projects; joining is not required, but very much appreciated. Once you have signed in you should see an icon for accessing the source code via Subversion. You can also download a zip file containing the code.

    Read the article

  • Advice for someone moving from Windows / ColdFusion / Java to Linux / Ruby / Rails

    - by Ciaran Archer
    Hi all, I am thinking of undertaking a serious career move. Currently I work day to day with ColdFusion 9+ and some Java in a Windows environment; my background is Java/JSP etc. prior to ColdFusion. I'm considering a move towards Ruby/Rails on Linux, as I think it would be a real challenge, keep things fresh, and stand me in good stead for the next few years. There are also more jobs in this area. I would consider myself an experienced web professional: I do TDD and I understand good OO design concepts. I have worked for the past few years on a busy transactional gaming website, with all the security and performance challenges that entails. I have also contributed to an open source ColdFusion project recently, and I am an active member of the CF community on Stack Overflow. In order to maintain my current remuneration (!) etc., I would like to get up to speed on Ruby/Rails and Linux before I go job hunting. The idea is that I can demonstrate enough proficiency in these new skills, combined with the language, programming, architectural and performance experience I already have, to be a good candidate. I am building a personal website in Rails 3.0 on Ubuntu, which I hope will expose me to lots of Rails/Ruby, and I am reading a few books. What else can I do? Has anyone made this type of move, and if so, do they have any tips apart from what I've mentioned? Are there any areas around Rails/Ruby/Linux that I have to get up to speed with? Any and all tips are appreciated.

    Read the article

  • Headset - No audio devices are installed

    - by Meowbits
    I've been having problems with my headset and I just cannot seem to figure out how to fix them. When I plug the microphone and headphone jacks into the computer I can hear sound fine; however, the microphone does not get recognized. If I go into Sound Recording, it showed the microphone and said it was working, but nothing was getting picked up. I uninstalled the audio devices and let Windows reinstall them, but now when I go into Sound Recording it states "No audio devices are installed". Before I uninstalled the audio driver I made sure to try every combination of audio devices I could - none worked... If you know what I should try next, please let me know; I am getting frustrated. Running Windows 7 64-bit.

    Read the article

  • Switching BIOS SATA RAID/AHCI setting causes BSOD at Windows Start - Why?

    - by thephatp
    I just changed my disk setup from:
    - 1 SATA HDD (primary OS disk)
    - 2x SATA HDD (backup disks in RAID 1)
    to:
    - 1 SATA SSD (primary OS disk)
    - 1 SATA HDD (backup disk, no RAID)
    Everything worked great, no problems. Since I don't have a RAID array anymore, I decided I could change my BIOS setting to AHCI instead of RAID. I have a Gigabyte GA-P35-DS3R v1.0 mobo. These are my steps:
    1. In Integrated Peripherals, changed "SATA RAID/AHCI Mode" from RAID to AHCI.
    2. Rebooted. The Windows Start screen shows up, but as the color orbs are spinning into focus: BSOD and immediate restart. Repeated the reboot several times, same outcome.
    3. Back in Integrated Peripherals, also changed "Onboard SATA/IDE Ctrl Mode" from RAID to AHCI.
    4. Rebooted. Same BSOD and immediate restart, again repeatable.
    Switching both settings back to RAID and rebooting brings Windows up just fine, no issues. What am I missing? Why can't I set it to AHCI mode without BSODs?
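    A likely explanation (my assumption from the symptoms, not verified on this exact board): Windows 7 disables the inbox AHCI driver when the system was installed in RAID mode, so the boot volume vanishes the instant the controller mode changes, giving a BSOD before the desktop loads. A minimal sketch of the usual pre-switch fix, run from an elevated command prompt while still booted in RAID mode:

      rem Enable the stock Windows 7 AHCI driver at boot (msahci is the inbox service;
      rem on Intel RAID setups the iaStorV service may matter as well)
      reg add HKLM\SYSTEM\CurrentControlSet\Services\msahci /v Start /t REG_DWORD /d 0 /f

    After that change, reboot directly into the BIOS, switch both SATA settings to AHCI, and let Windows redetect the controller on the next start.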

    Read the article

  • Due to the Classes

    - by Ratman21
    Why does it seem that I am always saying sorry (or, in Japanese, "gomennasai")? Well, I am late again with the blog, as you can see. The CCNA class's part 1 (also known as CCENT) was, well, more intense than all of the certification classes before it. The teacher was cramming as much as he could into us during the week, and it was hard to come home and do much more than fall into bed (well, I was still doing my job search and checking up on my web sites and groups). But I didn't have much left in the way of blogging (which, by the way, is now on 3 different sites). Even though it was hard at times, I really liked the fact that I was getting back to something I like (and I mean really like - in fact, I like Cisco routers more than some people I know). At the class, I got some software that allows me to simulate setting up and troubleshooting LANs or WANs. When we weren't getting facts for the test thrown at us, we were doing labs with this software. It was fun for me to be able to use the Cisco router commands and troubleshoot router issues, even if it was just a sim. So now it is study, study, take practice tests and do the labs. I took the weekend and more off after cram-CCENT week, but now I am back at it. Also, I could not keep up with my Love Dare book during the week of the class. No, I did not stop or forget what I already learned; I just put the next dare on hold. Well, the hold is off starting tomorrow, and tonight I think I am going to write a new cover letter. Let's see what else I can get done tonight. Hmm, I think I will try to do a sim of my home wireless LAN and study for the CCENT test in about 3 weeks. So see you tomorrow (I hope).

    Read the article

  • Could we build a mega-processor out of superconductors?

    - by Carson Myers
    A superconductor, once cooled below a critical temperature, loses all of its electrical resistance and therefore conducts with no loss: when a current flows through a superconductor, none of the energy is lost to heat or light. Theoretically, could we build a processor out of superconductive materials that could effectively run at, oh I don't know, say, 300 GHz? Or 5,000 GHz? Since a superconductive circuit has no resistive loss, once supplied with electricity the source of power could be completely removed from the circuit and the current would continue to flow indefinitely. So if we made all the components inside a computer out of superconductive materials, could we get away with only supplying power to the peripherals and save a whole lot on energy, while dramatically increasing computing speed? Might this be one of the next big breakthroughs in computing? What do you think?

    Read the article

  • Partner Webcast - Is your Application Ready? Prove it with the Oracle Exastack Program

    - by Thanos
    At Oracle we design Engineered Systems that are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. Oracle innovates and optimizes performance at every IT layer to simplify business operations, drive down costs and accelerate business innovation. As the Engineered System foundation platform, Oracle Exadata and Oracle Exalogic run all of Oracle Cloud's services across a range of global data centers, delivering extreme performance, massive scalability, and fault tolerance with no single point of failure. The Oracle Exastack Program enables you as an ISV to leverage Oracle's scalable, integrated infrastructure to test, tune and optimize your applications for high performance. By getting Exastack Ready and Exastack Optimized, your applications get formal recognition from Oracle and additional visibility, while you as an ISV receive an additional set of OPN benefits. Don't miss this opportunity to learn more about how you can optimize your applications to run faster and more reliably leveraging Oracle Exastack, and also become more competitive by letting everybody know you are ready.
    Agenda:
    - Oracle Engineered Systems Strategy
    - OPN Exastack Program Benefits & Objectives
    - Value for You
    - Oracle is resourced for your success
    - How to Apply - Demo
    - Next Steps & Useful contacts
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Thursday 06 December 2012, 10.00 CET (GMT+1). Duration: 1 hour. Register Now!
    For any questions please contact us at [email protected]. Visit our ISV Migration Center blog or follow us @oracleimc to learn more about Oracle Technologies, upcoming partner webcasts and events. Existing content available on YouTube - SlideShare - Oracle Mix.

    Read the article

  • Worldwide Web Camps

    - by ScottGu
    Over the next few weeks Microsoft is sponsoring a number of free Web Camp events around the world. These provide a great way to learn about ASP.NET 4, ASP.NET MVC 2, and Visual Studio 2010. The Web Camps are two-day events. The camps aren't conferences where you sit quietly for hours and people talk at you - they are intended to be interactive. The first day is focused on learning through presentations that are heavy on coding demos. The second day is focused on you building real applications using what you've learned: it includes hands-on labs, and you'll join small development teams with other attendees and work on a project together. We've got some great speakers lined up for the events - including Scott Hanselman, James Senior, Jon Galloway, Rachel Appel, Dan Wahlin, Christian Wenz and more. I'll also be presenting at one of the camps. Below is the schedule of the remaining events (the sold-out Toronto camp was a few days ago):
    - Moscow: May 19
    - Beijing: May 21-22
    - Shanghai: May 24-25
    - Mountain View: May 27-28
    - Sydney: May 28-29
    - Singapore: June 04-05
    - London: June 04-05
    - Munich: June 07-08
    - Chicago: June 11-12
    - Redmond, WA: June 18-19
    - New York: June 25-26
    Many locations are sold out already, but we still have some seats left in a few of them. Registration and attendance at all of the events is completely free. You can register to attend at www.webcamps.ms.
    Hope this helps,
    Scott

    Read the article

  • Boot from VHD with Windows 7 - bcdedit trouble

    - by Michiel Overeem
    I'm running Windows 7 Enterprise, x64 version. I created a Windows 7 VHD file with the help of a post on Hanselman's blog, and after that added it to my boot menu with the help of another of his posts. This worked great. Then I upgraded my HDD: with the help of Clonezilla I copied the old disk to the new disk. The next step was to copy the VHD to another partition, and then I updated the boot menu. However, the step
      C:\>bcdedit /set {guid} device vhd=[driveletter:]\<directory>\<vhd filename>
    fails with the message
      An error has occurred setting the element data. The request is not supported.
    What is happening?
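    Without seeing the full BCD store it is hard to be sure, but one common gotcha worth checking (an assumption, not a confirmed diagnosis): a VHD boot entry has two elements that must both point at the file, device and osdevice, and a cloned entry can refuse edits where a fresh one accepts them. A minimal sketch, with the GUID and path as placeholders for your own values:

      rem List entries verbosely to find the GUID of the VHD entry
      bcdedit /v
      rem Re-point both elements at the new location (D: and the path are hypothetical)
      bcdedit /set {guid} device vhd=[D:]\VHDs\win7.vhd
      bcdedit /set {guid} osdevice vhd=[D:]\VHDs\win7.vhd
      rem Often needed after moving a VHD entry between machines or disks
      bcdedit /set {guid} detecthal on

    If the edits still fail with "The request is not supported", creating a fresh entry with bcdedit /copy {current} and setting the two vhd= elements on the new GUID is worth trying before anything more drastic.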

    Read the article

  • SQL Server backup

    - by zzz777
    I have:
    1. Full-Backup-A
    2. Transaction-Log-Backup-A
    3. Transaction-Log-Backup-B (*) - I have to restore to this point
    4. Full-Backup-B
    How do I do it? It seems that the only way is:
    1. Full-Backup-A
    2. Transaction-Log-Backup-A
    3. Transaction-Log-Backup-B
    4. Shut off client access
    5. Transaction-Log-Backup-C
    6. Full-Backup-B
    7. Allow client access
    Are there any other ways to guarantee that nothing happened to the database between the last transaction log backup and the next full backup? I was thinking about:
    a. Starting the transaction log backup simultaneously with the full backup.
    b. Using differential backups while clients are connected, and making full backups during the maintenance window only.
    c. Running replication and backing up the replica, stopping and restoring the replication services in steps 4 and 7.
    ...and I feel that it is actually hopeless.
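    One SQL Server detail that may dissolve the problem (this is general BACKUP behavior, not specific to this setup): a full backup does not break or truncate the transaction log chain, so log backups taken on their normal schedule remain restorable straight across Full-Backup-B, and point (*) stays reachable from Full-Backup-A plus the log backups alone. If the concern is disturbing the differential base, a copy-only full backup avoids even that. A minimal sketch, with server, database and paths as placeholders:

      rem Full backup that leaves the differential base and the log chain untouched
      sqlcmd -S myserver -Q "BACKUP DATABASE MyDb TO DISK='E:\bak\MyDb_full.bak' WITH COPY_ONLY"
      rem Log backups simply continue on schedule, before and after the full backup
      sqlcmd -S myserver -Q "BACKUP LOG MyDb TO DISK='E:\bak\MyDb_log.trn'"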

    Read the article

  • yum remove doesn't remove things completely?

    - by Shrinath
    I am trying to remove Apache completely from my server, which is an EC2 instance running Amazon Linux (kernel 2.6.xx). Let's assume I have a file at /etc/httpd/conf/xyz.txt. I am using the following command:
      yum remove httpd
    When I then try cd /etc/httpd, I get a "there is no such directory" error. Next, if I install httpd again using yum install httpd and then look in /etc/httpd/conf/, I still have that file, as it was, untouched. How is this possible? How do I "clean" this?
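    A diagnostic sketch (assuming the file really is back after the reinstall; the package and path are the question's own): yum only removes files that the RPM database says belong to the package, and rpm preserves locally modified config files on erase by renaming them with an .rpmsave suffix, so hand-created or modified files under /etc/httpd can survive a remove:

      # Does any package own the file? (A hand-created file will report "not owned")
      rpm -qf /etc/httpd/conf/xyz.txt
      # List everything the httpd package does own
      rpm -ql httpd
      # Look for config leftovers rpm preserved on a previous erase
      find /etc/httpd -name '*.rpmsave' -o -name '*.rpmnew'
      # Remove the package, then clear the unowned remnants by hand
      yum remove httpd && rm -rf /etc/httpd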

    Read the article

  • This update does not come from a source that supports changelogs

    - by blade19899
    When I get an update via update-manager for software like Blender or VLC, I like to see what has been fixed/changed. I added PPAs for Blender and VLC (this only applies to the software I added a PPA for):
      sudo add-apt-repository ppa:cheleb/blender-svn
      sudo apt-get update
      sudo apt-get install blender
    And VLC like this:
      sudo add-apt-repository ppa:videolan/stable-daily
      sudo apt-get update
      sudo apt-get install vlc
    When update-manager runs (or pops up), I see that VLC/Blender have updates, but I can't see what has been changed/fixed. This is the message I get - the screenshot below was for mupen, but it's the same thing (I updated VLC and Blender and didn't want to wait for the next update):
      This update does not come from a source that supports changelogs.
    (By the way, I have a Dutch Ubuntu, so I translated the above text with Google!) It only shows which version you have and which version you will be upgrading to. So my question is: how do I get the changelog tab of update-manager working, if that's even possible?
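    My understanding of why this happens (general Ubuntu behavior, not specific to these two PPAs): update-manager fetches changelogs from changelogs.ubuntu.com, which only carries packages built in the official Ubuntu archive, so any package installed from a PPA triggers that message. A sketch of the command-line equivalent:

      # Works for packages from the official archive:
      apt-get changelog vlc
      # For PPA builds this usually fails for the same reason; the changelog is
      # published on the PPA's Launchpad page ("View package details") instead.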

    Read the article

  • Installing Sublime Text plugins all at once

    - by James
    Is there a way to install all the Sublime Text 2 plugins that you would like to install at once? In Notepad++, there is a plugin manager which lets you install all the plugins you want by checking the box next to each plugin's name and description. I was wondering if there is something like that for Sublime Text. For example, I would like to install Zen Coding, the jQuery package for Sublime Text, Sublime Prefixr, JS Format, SublimeLinter and many other plugins all at once, rather than typing each plugin into Package Control and installing them one by one.

    Read the article

  • Compelling Keynotes Coming: Oracle OpenWorld Latin America

    - by Oracle OpenWorld Blog Team
    Make your plans now for 4-6 December in São Paulo! Again this year there are informative and inspiring keynotes lined up for Oracle OpenWorld Latin America. For the opening keynote on 4 December, Oracle President Mark Hurd and Chief Technology Officer Edward Screven will talk about the many elements that are defining the convergence of business and information technology. The next day's keynote will focus on cloud computing, diving deeply into how mobile and social technologies play into this critical way of delivering services. Featured speakers are Oracle executives Thomas Kurian, Andrew Mendelsohn, and Robert Shimp. On Thursday, 6 December, Anthony Lye, Oracle senior vice president, will discuss the customer experience revolution and how the analysis of customer behavior can help shape companies' ability to understand and adapt more effectively to their customers' needs and wants. And, of course, Oracle partners always have interesting and exciting things to say. Be sure to come hear about innovations from Odebrecht, CTIS Tecnologia, and Intel do Brasil executives on topics including technology adoption that drives business results; the "Model School" revolution; and the role of the data center as technology advances. You can still enjoy Early Bird savings through 3 December, so register now!

    Read the article

  • Installing Multiple OWB Patches

    - by [email protected]
    When an OUBI bug requires a fix to the Oracle Warehouse Builder (OWB) code, the fix is delivered as an MDL export file that needs to be imported and deployed in OWB. If more than one bug is being patched, a recent question asked whether it is possible to do the imports one after another and then do the deployment steps once. The answer is yes: all of the imports can be done before any of the objects are deployed. Once all of the objects have been imported, the TCL scripts that need to be rerun can be run, and then the objects that were changed can be deployed. The order in which the MDL files are loaded does not matter unless the same object is in two or more MDL files. In that case, the latest MDL file should be loaded last. For example, if two MDL files both contain changes to the SPLMAP_F_RECENT_CREW mapping, and one was created on January 2, 2009 and the second on March 14, 2009, then the January 2 file should be loaded first and the March 14 file second. Note that if the MDL files are always loaded in the order in which they were created by Release Services, this will work correctly.

    Read the article

  • Message Passing Interface (MPI)

    So you have installed your cluster and you are done with the introductory material on Windows HPC. Now you want to develop an application with the most common programming model: Message Passing Interface. The MPI programming model is a standard with implementations from many vendors. For newbies (like myself!), I have aggregated below links for getting started.
    Non-Microsoft MPI resources (useful even if you are not on the Windows platform):
    1. Message Passing Interface on Wikipedia.
    2. The MPI standard.
    3. MPICH2 - an MPI implementation.
    4. Tutorial on MPI by William Gropp.
    5. MPI patterns presented as a tutorial with sample code.
    6. THE official MPI Forum (maintains the standard), including the wiki discussing the MPI future.
    7. Great MPI tutorial, including at the end the MPI Exercise.
    8. C++ MPI Exercises by John Burkardt.
    9. Book online: MPI The Complete Reference.
    MS-MPI:
    10. Windows HPC Server 2008 - Using MS-MPI whitepaper (15 page doc).
    11. Tracing MPI applications (27 page doc).
    12. Using Microsoft MPI (TechNet section).
    13. Windows HPC Server MPI forum (for posting questions).
    MPI.NET:
    14. MPI.NET Home Page (not owned by Microsoft).
    15. MPI.NET Tutorial.
    16. HPC Development using F# using MPI.NET (38 page doc).
    Next time I'll post resources for the Microsoft Cluster SOA programming model - happy coding... Comments about this post welcome at the original blog.
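    As a sanity check that the toolchain works before diving into the tutorials above, here is a minimal build-and-run sketch (assuming an MPICH-style toolchain and a hello.c of your own; on Windows HPC the equivalent launch goes through mpiexec and the job scheduler):

      # Compile with the MPI wrapper compiler, then launch 4 ranks locally
      mpicc hello.c -o hello
      mpiexec -n 4 ./hello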

    Read the article

  • cannot delete IPv6 default gateway

    - by NulledPointer
    The commands below should be pretty self-explanatory. Please note that the route for which I get the failure was obtained via RA and has a very short expiry (the 'e' flag, as in UGDAe).

      @vm:~$ ip -6 route
      2001:4860:4001:800::1002 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024
      2001:4860:4001:800::1003 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024
      2001:4860:4001:800::1005 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024
      2001:4860:4001:803::100e via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024
      fd00:ffff:ffff:fff1::/64 dev eth1 proto kernel metric 256 expires 2592300sec
      fe80::/64 dev eth1 proto kernel metric 256
      default via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1
      default via fe80::20c:29ff:fe87:f9e7 dev eth1 proto kernel metric 1024 expires 1776sec

      @vm:~$ sudo route -6 delete default gw fe80::20c:29ff:fe87:f9e7
      @vm:~$ ip -6 route
      2001:4860:4001:800::1002 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024
      2001:4860:4001:800::1003 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024
      2001:4860:4001:800::1005 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024
      2001:4860:4001:803::100e via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024
      fd00:ffff:ffff:fff1::/64 dev eth1 proto kernel metric 256 expires 2592279sec
      fe80::/64 dev eth1 proto kernel metric 256
      default via fe80::20c:29ff:fe87:f9e7 dev eth1 proto kernel metric 1024 expires 1755sec

      @vm:~$ sudo route -6 delete ::/0 gw fe80::20c:29ff:fe87:f9e7 dev eth1
      SIOCDELRT: No such process

      @vm:~$ route -n6
      Kernel IPv6 routing table
      Destination                                  Next Hop                  Flag  Met  Ref Use If
      2001:4860:4001:800::1002/128                 fe80::20c:29ff:fe87:f9e7  UG    1024 0   0   eth1
      2001:4860:4001:800::1003/128                 fe80::20c:29ff:fe87:f9e7  UG    1024 0   0   eth1
      2001:4860:4001:800::1005/128                 fe80::20c:29ff:fe87:f9e7  UG    1024 0   0   eth1
      2001:4860:4001:803::100e/128                 fe80::20c:29ff:fe87:f9e7  UG    1024 0   0   eth1
      fd00:ffff:ffff:fff1::/64                     ::                        UAe   256  0   0   eth1
      fe80::/64                                    ::                        U     256  0   0   eth1
      ::/0                                         fe80::20c:29ff:fe87:f9e7  UGDAe 1024 0   0   eth1
      ::/0                                         ::                        !n    -1   1   349 lo
      ::1/128                                      ::                        Un    0    1   3   lo
      fd00:ffff:ffff:fff1:a00:27ff:fe7f:7245/128   ::                        Un    0    1   0   lo
      fd00:ffff:ffff:fff1:fce8:ce07:b9ea:389f/128  ::                        Un    0    1   0   lo
      fe80::a00:27ff:fe7f:7245/128                 ::                        Un    0    1   0   lo
      ff00::/8                                     ::                        U     256  0   0   eth1
      ::/0                                         ::                        !n    -1   1   349 lo

    UPDATE: Another question: what's the use of a link-local address as the default route?
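    A sketch of what usually works in this situation (assuming the goal is to drop the RA-learned default on eth1; the addresses are copied from the output above): iproute2 handles link-local next hops better than the old route tool, because a fe80:: gateway is only meaningful together with its interface. And since this default was learned from a Router Advertisement, it will reappear on the next RA unless acceptance is switched off:

      # Delete the RA-learned default; dev is mandatory for a link-local gateway
      sudo ip -6 route del default via fe80::20c:29ff:fe87:f9e7 dev eth1
      # Stop the kernel from re-learning it from the next Router Advertisement
      sudo sysctl -w net.ipv6.conf.eth1.accept_ra=0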

    Read the article

  • SQL User Group Events coming - Cambridge, Leeds, Manchester and Edinburgh

    - by tonyrogerson
    Neil Hambly and I are presenting next week in Cambridge. Neil will be showing us how to use the tools at hand to determine the current activity on your database servers, and I'll be doing a talk around Disaster Recovery and High Availability and the options we have at hand. The User Group is growing in size and spread; there is a Southampton event planned for the 9th Dec - make sure you keep your eyes peeled for more details - the best place is the UK SQL Server User Group LinkedIn area.
    Want removing from this email list? Then just reply with "remove please" on the subject line.
    Cambridge SQL UG - 25th Nov, Evening
    Evening meeting; more info and register. Neil Hambly on determining the current activity of your database servers, a product demo from Red Gate, and Tony Rogerson on HA/DR/scalability (backup/recovery options - clustering, mirroring, log shipping; scaling considerations etc.).
    Leeds SQL UG - 8th Dec, Evening
    Evening meeting; more info and register. Neil Hambly will be talking about indexed views and computed columns for performance; Tony Rogerson will be showing some advanced T-SQL techniques.
    Manchester SQL UG - 9th Dec, Evening
    Evening meeting; more info and register. End-of-year wrap-up, networking, drinks, some discussions - more info to follow soon.
    Edinburgh SQL UG - 9th Dec, Evening
    Evening meeting; more info and register. Satya Jayanty will give an X factor for a DBA's life, and Tony Rogerson will talk about SQL Server internals.
    Many thanks,
    Tony Rogerson, SQL Server MVP
    UK SQL Server User Group
    http://sqlserverfaq.com

    Read the article

  • Game physics presentation by Richard Lord, some questions

    - by Steve
    I've been implementing (in XNA) the examples in this physics presentation by Richard Lord, where he discusses various integration techniques. Bearing in mind that I am a newcomer to game physics (and physics in general), I have some questions. Fifteen slides in, he shows ActionScript code for a gravity example and an animation of a bouncing ball; the ball bounces higher and higher until it is out of control. I implemented the same in C#/XNA, but my ball appeared to bounce at a constant height. The same applies to the next example, where the ball bounces lower and lower. After some experimentation I found that if I switched to a fixed timestep and then, on the first iteration of Update(), set the time variable equal to the elapsed milliseconds (16.6667), I would see the same behaviour. Doing this essentially set the framerate, velocity and acceleration to zero for the first update and introduced errors(?) into the algorithm, causing the ball's velocity to increase (or decrease) over time. I think! My question is: does this make the integration method used poor? Or does it demonstrate that the method is poor when used with a variable timestep, because you cannot pass in a valid value for the first round of calculations (since you cannot know the framerate in advance)? I will continue my research into physics, but can anyone suggest a good method to get my feet wet? I would like to experiment with variable timestep, acceleration that changes over time, and probably friction. Would the Time Corrected Verlet be OK for this?

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction
    More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.

    Hive Actions: Prepping for Pig
    In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie.
    I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.

      CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2)
      PARTITIONED BY (yr string)
      STORED AS ...
      LOCATION '/user/oracle/weather/historic';

    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.

      ALTER TABLE historic_weather ADD IF NOT EXISTS
      PARTITION (yr='2011') LOCATION '/user/oracle/weather/historic/yr=2011';

      INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
      SELECT w.stn, w.wban, w.weather_year, w.weather_month,
             w.weather_day, w.temp, w.dewp, w.weather
      FROM (
        FROM historic_weather
        SELECT TRANSFORM(...)
        USING '/path/to/hive/filters/ncdc_parser.py'
        AS stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
      ) w;

    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql.

    Starting Our Workflow
    Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. Coordinator jobs can take all the same actions as workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point:

      <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
        <start to="ParseNCDCData"/>
        <end name="end"/>
      </workflow-app>

    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.

      <action name="ParseNCDCData">
        <hive xmlns="uri:oozie:hive-action:0.2">
          <job-tracker>localhost:8021</job-tracker>
          <name-node>localhost:8020</name-node>
          <configuration>
            <property>
              <name>oozie.hive.defaults</name>
              <value>/user/oracle/weather_ooze/hive-default.xml</value>
            </property>
          </configuration>
          <script>ncdc_parse.hql</script>
        </hive>
        <ok to="WeatherMan"/>
        <error to="end"/>
      </action>

    There are a couple of things to note here:
    1. I have to give the FQDN (or IP) and port of my JobTracker and NameNode.
    2. I have to include a hive-default.xml file.
    3. I have to include a script file.
    4. The hive-default.xml and script file must be stored in HDFS.
    That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-defaults.xml files on different clusters (e.g. MySQL- or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need into it as you add actions to workflow.xml. At this point, our local directory should contain:
    - workflow.xml
    - hive-defaults.xml (make sure this file contains your metastore connection data)
    - ncdc_parse.hql

    Adding Pig to the Ooze
    Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:

      <action name="WeatherMan">
        <pig>
          <job-tracker>localhost:8021</job-tracker>
          <name-node>localhost:8020</name-node>
          <script>weather_train.pig</script>
        </pig>
        <ok to="end"/>
        <error to="end"/>
      </action>

    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath: anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding jars to the distributed cache from Oozie's Pig Cookbook.

    Making the Workflow Work
    We've got a workflow defined and have collected all the components we'll need to run it. But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:

      nameNode=hdfs://localhost:8020
      jobTracker=localhost:8021
      queueName=default
      weatherRoot=weather_ooze
      mapreduce.jobtracker.kerberos.principal=foo
      dfs.namenode.kerberos.principal=foo
      oozie.libpath=${nameNode}/user/oozie/share/lib
      oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
      outputDir=weather-ooze

    While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed, and write the output to the output directory.
    We're finally ready to submit our job! After all that work we only need to do a few more things:
    1. Validate our workflow.xml
    2. Copy our working directory to HDFS
    3. Submit our job to the Oozie server
    4. Run our workflow
    Let's do them in order. First, validate the workflow:

      oozie validate workflow.xml

    Next, copy the working directory up to HDFS:

      hadoop fs -put working_dir /user/oracle/working_dir

    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument.

      oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit

    We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output:

      14-20120525161321-oozie-oracle

    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job:

      oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle

    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run". This will prep and run the workflow immediately.

    Takeaway
    So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
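    One small addition that comes in handy at this point (standard Oozie CLI; the URL is the article's own placeholder): once the workflow is started, you can poll its state and see which action it is currently on:

      oozie job -oozie http://url.to.oozie.server:port_number/ -info 14-20120525161321-oozie-oracle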

    Read the article

  • What do you think are the biggest software development issues in small to medium businesses?

    - by Ron-Damon
    Hi, I own a small software development company that develops web software for other small and medium companies in Chile. The business process is very complex, and it is hard to establish where to put our efforts to make the company better, more efficient, and able to give better solutions. I'm also an IT master's degree student, and I'm writing a paper on this subject, so any help would be great both for my company and for my paper. I have considered 3 areas for the problems:
    1) Software development problems
    2) Web development problems
    3) Small and medium company problems
    I don't know about you, but at least this "business formula" in Chile has not received very much support, though it is getting better; still, today my company is far from being self-sufficient.
    UPDATE: Thanks, guys, for your support so far. I'm updating because I now have enough information to go deeper into the subjects, which I would like you to consider for your next answers/comments:
    1) Software development problems (3)
    1.1 Incomplete problem picture
    1.2 Useless delivered software
    1.3 Unrealistic or inadequate schedule
    2) Web development problems (3)
    2.1 Apparently non-viable implementation
    2.2 Inefficient module construction design
    2.3 Reduced result system interoperability
    3) Small and medium company problems (3)
    3.1 Very specific but narrow required system characteristics
    3.2 Developed system is not used
    3.3 Positivist demand for activities in project execution
    There are only 3 problems per category, to deliberately keep a thinner scope. Also, I considered that it would have been appropriate to split the third classification in two, but I won't be doing that just now:
    3) Small and medium software development provider problems
    4) Small and medium software development client problems
    In that case, I think I would have widened the scope of the problem, and that is not what I want to do now, at least until I'm very thorough with the other two classifications. What do you think?

    Read the article

  • Setting up a vpn and IIS IP address restrictions

    - by carpat
    I'm trying to get a VPN set up with internal-access-only sites. I have set up a VPN on a Windows server (a single VPS), and I can connect from a remote computer and get an IP assigned correctly (from the 192.168.1.1-255 range). Next I configured IIS (running on the same machine) with IP Address and Domain Restrictions to allow only the IP address range 192.168.1.0 with subnet mask 255.255.255.0. When I connect to the VPN with "Use default gateway on remote network" (so that requests must go through the VPN), I get a 403 from the internal sites. What did I miss?
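    A diagnostic sketch (assuming IIS 7 or later, since IP and Domain Restrictions is in play; the site name is a placeholder): the fastest way to explain the 403 is to check the c-ip field in the site's W3C log and see which client address IIS actually received - if it is not a 192.168.1.x address, the VPN traffic is arriving from a different interface than the restriction expects. The allow-list itself can be inspected or set with appcmd:

      rem Show the effective restriction rules for the site
      %windir%\system32\inetsrv\appcmd list config "Default Web Site" -section:system.webServer/security/ipSecurity
      rem Deny unlisted clients and allow the VPN subnet (values from the question)
      %windir%\system32\inetsrv\appcmd set config "Default Web Site" -section:system.webServer/security/ipSecurity /allowUnlisted:false /commit:apphost
      %windir%\system32\inetsrv\appcmd set config "Default Web Site" -section:system.webServer/security/ipSecurity /+"[ipAddress='192.168.1.0',subnetMask='255.255.255.0',allowed='true']" /commit:apphost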

    Read the article

  • FTP timeout only the first time

    - by user1474681
    I'm using Pure-FTPd on Mac OS X (Snow Leopard, not the server version). When trying to access the FTP account from the outside via DynDNS (e.g. using https://www.wormly.com/test_ftp_server), the connection always times out the FIRST time. When I try AGAIN in the next few seconds, it works. What is this about? I have forwarded the ports on my Apple router and tried disabling the OS X firewall as well. Thanks for any advice. Dennis

    Read the article

  • What's the strategy to implement a "knowledge base" in my company?

    - by Oscar Reyes
    In my current work we think we could benefit from having a knowledge base, so that the next time someone has a question, problem, etc., that base can be consulted and an answer will show up. It would also reduce the risk of people leaving the company with the knowledge, which would force us to start all over again. My question is: what strategy can we follow to implement/buy/get/build this knowledge base? Is there software ready-made for this? Would it be better to build something ourselves (we have some programmers)? This is a small company (< 30 people), and the base should be accessible from outside the office (when the employees are with a customer, etc.), so I guess a web app is in order.

    Read the article

  • SOA & Application Grid Specialization - 6 steps to success - part 1 OMM

    - by Jürgen Kress
    SOA Specialization - Oracle Open Market Model (OMM)
    Dear Application Grid SOA Partners,
    Our goal is to get you SOA Specialized; over the next weeks we will show you, in a series of posts, how you can achieve SOA Specialization. Specialization is key to being recognized by Oracle and to being preferred by our customers. The first step to becoming SOA Specialized is to prove 2 transactions. The transactions can be resell, co-sell or referral - as proof we use our Open Market Model (OMM). To create your account, go to our new Partner Portal:
    1. Go to the login of your OPN homepage: http://oraclepartnernetwork.oracle.com
    2. Click on "Sales" > "Create a PRM User Account"
    3. Enter your User ID
    4. Enter your Company Identifier ((please ask your OPN IC))
    5. Finish
    6. Wait for a confirmation email
    If you need OMM support please contact our dedicated team:
    - Nordics: [email protected]
    - Portugal, Spain: [email protected]
    - Austria, Belgium, Germany, Luxembourg, Netherlands, Switzerland, United Arab Emirates, United Kingdom: [email protected]
    For more information about OMM, watch our on-demand webcast "Recognising the Value of Partners: Register Oracle Deals through the Open Market Model (OMM)".
    Become SOA Specialized today: create your references, create your OMM entry, take the SOA Sales assessment, take the SOA Pre-Sales assessment, take the Support assessment, and register for the SOA Implementation assessment. For more information on Specialization please visit our OPN Specialized Webcast Series. To get support on Specialization please contact the Partner Business Centers.
    SOA Specialized:
    - Prove 2 transactions with OMM
    - Create your 2 references
    - SOA Sales assessment 3
    - SOA Pre-Sales assessment 3
    - Support assessment 1
    - SOA Implementation assessment 4
    Application Grid Specialized:
    - Prove 2 transactions with OMM
    - Create your 2 references
    - Oracle Application Grid Sales Specialist 3
    - Oracle Application Grid PreSales Specialist 3
    - Support assessment 2
    - Application Grid Implementation assessment 4

    Read the article
