Search Results

Search found 50 results on 2 pages for 'd mill'.

Page 2 of 2

  • Java program runs smoothly in Netbeans but slowly in Eclipse and as an executed jar. WTF?

    - by comp sci balla
    A Java program that does frequent Swing/AWT painting animation (nothing more advanced than g.fillOval(...)) runs at a consistent 60fps in NetBeans, and at about 6fps when run in Eclipse or executed as a jar file from a Unix terminal. The program was developed in NetBeans and is a run-of-the-mill desktop application (not Web Start or a JApplet or ...). This is occurring in Ubuntu 10 with Java 1.6. How is this possible? The universe no longer makes sense to me.
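
    One plausible factor to rule out (an assumption - NetBeans may simply launch the JVM with different options than Eclipse or a bare "java -jar" does) is the Java 2D rendering pipeline. A quick, hedged experiment is to run the jar with the OpenGL pipeline enabled and compare frame rates:

        # Hypothetical test run; the jar name is a placeholder.
        java -Dsun.java2d.opengl=true -jar animation.jar

    If the frame rate jumps back toward 60fps, the difference lies in the rendering pipeline rather than in the IDE itself.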

    Read the article

  • What tools do people use to make programming tutorial videos?

    - by Pure.Krome
    Hi folks, I'm wanting to make some run-of-the-mill tutorial videos about some programming concepts and stuff I've been doing. Nothing special ... lots of peeps have been doing it. What tools are people using to record and edit these videos? What resolutions / fonts / sizes do people generally use/set? The only tool I've had experience with is Camtasia - and I didn't mind it. But I've seen videos (or live demos) where people zoom in to code sections... how do they do that? For final editing, do most people just do a simple PowerPoint presentation with some video snippets mashed in? Cheers!

    Read the article

  • php cURL POST how to follow location

    - by One Stuck Pixel
    I am in a bit of a rut with a cURL issue. The post works great, the data is POSTed just fine and received OK, but the URL of the posted page never appears in the browser after the cURL session is executed. For example, look at the following code: $ch = curl_init("http://localhost/eterniti/cart-step-1.php"); curl_setopt($ch, CURLOPT_HEADER, false); curl_setopt($ch, CURLOPT_POST, true); curl_setopt($ch, CURLOPT_POSTFIELDS, "error=1&em=$em&fname=$fname&lname=$lname&email1=$email1&email2=$email2&code=$code&area=$area&number=$num&mobile=$mobile&address1=$address1&address2=$address2&address3=$address3&suburb=$suburb&postcode=$postcode&country=$country"); curl_exec($ch); curl_close($ch); The POST works fine and I am taken to cart-step-1.php where I can process the posted data; HOWEVER, the location in the URL address bar of the browser remains that of the script page, in this case proc_xxxxxx.php. Any ideas how to get the URL address to reflect the page I am actually POSTed to? Thanks a mill
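
    Since the cURL request runs server-to-server, the browser never actually navigates anywhere. A hedged sketch of one way to make the address bar follow (assuming the goal is simply for the user to end up on cart-step-1.php, and keeping the field names from the question) is to capture the cURL response and then redirect the browser explicitly:

        <?php
        // Sketch only: post the data with cURL, then send the browser to the
        // destination page so the address bar changes. The redirect target is
        // an assumption about the intended flow.
        $ch = curl_init("http://localhost/eterniti/cart-step-1.php");
        curl_setopt($ch, CURLOPT_HEADER, false);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // keep the response out of the output buffer
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
            'error' => 1,
            'em'    => $em,
            'fname' => $fname,
            // ...remaining fields as in the question...
        )));
        $response = curl_exec($ch);
        curl_close($ch);

        // cURL does not move the browser, so redirect it explicitly:
        header('Location: http://localhost/eterniti/cart-step-1.php');
        exit;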

    Read the article

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and to use some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support). I've read a lot on the subject from Udi Dahan who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me: As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above. -- Udi Dahan, Clarified CQRS. From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. SOAs (and really services in general) are supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes as it often is with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out. Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
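
    To make the granularity contrast concrete, here is an illustrative sketch (class and property names are invented, not taken from nServiceBus or from Udi Dahan's examples) of the difference between an "Excel-like" coarse command and the intent-capturing commands the quote describes:

        using System;

        // Coarse, "Excel-like" command: one message carries every field a user
        // might have touched, so the intent behind the change is lost.
        public class UpdateCustomer
        {
            public Guid CustomerId { get; set; }
            public string Name { get; set; }
            public string MailingAddress { get; set; }
            public bool IsPreferred { get; set; }
            public string MaritalStatus { get; set; }
            // ...dozens of other attributes...
        }

        // Intent-capturing commands: each is a small, separate unit of work,
        // which is what shrinks the window for concurrency conflicts.
        public class MakeCustomerPreferred
        {
            public Guid CustomerId { get; set; }
        }

        public class RecordCustomerMove
        {
            public Guid CustomerId { get; set; }
            public string NewAddress { get; set; }
        }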

    Read the article

  • Windows Server 2008 Services won't start after patch

    - by Antitribu
    After installing the run-of-the-mill patches today on a Windows Server 2008 machine (running as an AD controller and Exchange 2007 server), the machine came back up with "configuring updates stage 3 of 3 0% complete". The machine had been kept reasonably up to date, so this was likely caused by a very recent patch. At the least, the following patches were installed: KB973037, KB969947, KB973565. Restarting the server into safe mode and then subsequently rebooting (with no changes made) allowed the computer to restart and I can now log in normally. However, none of the critical services start, including but not limited to Exchange, DNS and Terminal Services (obviously, if DNS doesn't start, other things will break). I am unable to run Internet Explorer but Chrome will work. There are no meaningful errors in the event logs as to why services won't start. Under KDC I have: "The Key Distribution Center (KDC) cannot find a suitable certificate to use for smart card logons, or the KDC certificate could not be verified. Smart card logon may not function correctly if this problem is not resolved. To correct this problem, either verify the existing KDC certificate using certutil.exe or enroll for a new KDC certificate." This is going to be an evil one to debug and I'm kinda hoping someone has encountered it and knows the answer offhand. Thanks all.

    Read the article

  • Bluehost Emails Getting Blocked

    - by colithium
    A site for my client has the run-of-the-mill "website with users" email pattern. Create an account, get an activation email. Get an email when a subscription is expiring, etc. The site is hosted on Bluehost and currently it uses php's mail() function. There isn't much configuration that is allowed (as far as I know). The trouble is, about a third of these emails disappear into the void. They aren't in spam or junk folders, there's no bounce message, they just cease to exist. I've read about Bluehost email troubles but I can't figure out what my options are for fixing it. These aren't marketing emails, ie they have user-specific information contained within them. I suppose if a solution offers a good templating system that would be fine. What are my options? Excerpt of headers when delivered to a Gmail address: Received-SPF: neutral (google.com: 00.000.000.000 is neither permitted nor denied by best guess record for domain of domain@box###.bluehost.com) client-ip=00.000.000.000; DomainKey-Status: good Authentication-Results: mx.google.com; spf=neutral (google.com: 00.000.000.000 is neither permitted nor denied by best guess record for domain of domain@box###.bluehost.com) smtp.mail=domain@box###.bluehost.com; domainkeys=pass [email protected]

    Read the article

  • How do I set up a fully featured small business network?

    - by JoshReedSchramm
    This has the possibility to be a very large question, but I recently acquired a few rack mount servers and the hardware necessary to run them. Unfortunately I'm a programmer with very little understanding of how to set up a good working network, so I'm hoping someone on here might be able to help. What I want to do is run a domain with a series of subdomains which would all be externally accessible. The setup would live inside my home, and my internet connection is your run-of-the-mill cable modem (which means a dynamic IP). I want to be able to set up a couple of sites, specifically: www.mycompany.com (mycompany.com with no subdomain would redirect to this), build.mycompany.com (for my continuous integration server), ruby.mycompany.com (for Ruby projects), win.mycompany.com (for Windows projects), etc. Additionally, this is still my home network, so our personal machines need to be able to get on via wifi with at least the same security we have now through an out-of-the-box router from Best Buy. I'm thinking I need a DNS server and a DHCP server, and one of those would run either No-IP or DynDNS to accommodate the dynamic IP. I don't necessarily need mail, but it might be helpful to have some sort of mail server I could use for testing; it doesn't need to get out to the greater internet though. So how do I set up this kind of network? tl;dr: Need to know how to set up your standard office-style network in my home off my normal consumer-level cable modem connection.

    Read the article

  • How to determine the root cause of a system lockup on Ubuntu 8.04 LTS?

    - by jdt141
    I'm currently working on a project that involves setting up a PC/104 stack and running Ubuntu 8.04 LTS. We need to use the PC/104 stack because it's an embedded application - and we're required to use a DeviceNet peripheral card to communicate with other devices. (DeviceNet is just a protocol on top of CAN.) Anyway, the following hardware is on the stack: a Kontron MOPSPM104 with a 1GHz Intel Celeron processor, a ConnectTech FlashDrive/104 4GB Industrial Temp (-40 to +85 C), a Woodhead (Molex) PC104DVNIO DeviceNet card, and a run-of-the-mill 104 power supply. The Kontron board offers two serial ports, one VGA out, and two USB ports. The DeviceNet card is an ISA card. Because of this (per the User's Guide for the Kontron board), I have manually set the IRQs in the BIOS to be appropriately configured, and turned off ACPI both in the BIOS and by passing the appropriate flag in GRUB. I've installed Ubuntu 8.04 desktop, 32 bit. The problem that I'm having is that, from time to time, the entire 104 stack locks up. This only seems to happen in two cases, in both of which we're running GNOME: when we're running a custom application that uses the DeviceNet card, or (more frequently) when we're running Firefox and either surfing for some information or trying to test it - typically by streaming video from an IP camera. The reason I ask this question is that I cannot determine the root cause of this lockup. The IRQs appear to be correctly configured in the BIOS and as the kernel sees them, and nothing is logged to dmesg. If you all could help me determine the root cause of this lockup, I would greatly appreciate it. Thanks.

    Read the article

  • Orchestrating the Virtual Enterprise, Part I

    - by Kathryn Perry
    A guest post by Jon Chorley, Oracle's Chief Sustainability Officer & Vice President, SCM Product Strategy. During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise." Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise. What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise. Supply Chain Management Systems in a Virtual Enterprise: Historically, enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources. Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.
In my next post, I will share examples of companies that have made that shift and talk more about the distributed orchestration process.

    Read the article

  • Write XML using best way (Linq To XML or other)

    - by Pankaj
    Hello all, I want to write my XML with the following format. How can I do it? I am using C#. <map borderColor='c5e5b8' fillColor='6a9057' numberSuffix=' Mill.' includeValueInLabels='0' labelSepChar=': ' baseFontSize='9' showFCMenuItem='0' hoverColor='c2bc23' showTitle='0' type='0' showCanvasBorder='0' bgAlpha='0,0' hoveronEmpty='1' includeNameInLabels='0' showLabels='1'> <!--toolText='Alaska'imageSave='1' imageSaveURL='Path/FusionChartsSave.aspx or FusionChartsSave.php'--> <data> <entity id='AL' value='AL' link="JavaScript:FilterClientProjectList('AL');" fontBold='1' showLabel='0' /> <entity id='AK' value='AK' link="JavaScript:FilterClientProjectList('AK');" fontBold='1' hoverColor='6a9057'/> <entity id='AZ' value='AZ' link="JavaScript:FilterClientProjectList('AZ');" fontBold='1'/> </data> <styles> <definition> <style name='MyFirstFontStyle' type='font' face='Verdana' size='11' color='0372AB' bold='1' bgColor='FFFFFF' /> </definition> <application> <apply toObject='Labels' styles='' /> </application> </styles> </map> Thanks in advance.
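
    A hedged sketch of producing that shape with LINQ to XML (System.Xml.Linq); only a handful of the attributes and a single entity are reproduced here, and the output file name is a placeholder:

        using System.Xml.Linq;

        class MapWriter
        {
            static void Main()
            {
                // Attributes are added with XAttribute, nested elements with XElement.
                var map = new XElement("map",
                    new XAttribute("borderColor", "c5e5b8"),
                    new XAttribute("fillColor", "6a9057"),
                    new XAttribute("numberSuffix", " Mill."),
                    new XAttribute("showLabels", "1"),
                    new XElement("data",
                        new XElement("entity",
                            new XAttribute("id", "AL"),
                            new XAttribute("value", "AL"),
                            new XAttribute("link", "JavaScript:FilterClientProjectList('AL');"),
                            new XAttribute("fontBold", "1"),
                            new XAttribute("showLabel", "0"))),
                    new XElement("styles",
                        new XElement("definition",
                            new XElement("style",
                                new XAttribute("name", "MyFirstFontStyle"),
                                new XAttribute("type", "font"),
                                new XAttribute("face", "Verdana"))),
                        new XElement("application",
                            new XElement("apply",
                                new XAttribute("toObject", "Labels"),
                                new XAttribute("styles", "")))));

                // Save with an XML declaration; the file name is an assumption.
                new XDocument(map).Save("map.xml");
            }
        }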

    Read the article

  • Rewriting URLs in CodeIgniter with url_title()?

    - by Craig Ward
    I am rewriting my website with CodeIgniter and have something I want to do but am not sure it is possible. I have a gallery on my site powered by the Flickr API. Here is an example of the code I use to display the Landscape pictures: <?php foreach ($landscapes->photoset->photo as $l->photoset->photo) : ?> <a href='gallery/focus/<?php echo $l->photoset->photo->farm ?>/<?php echo $l->photoset->photo->server ?>/<?php echo $l->photoset->photo->id ?>/<?php echo $l->photoset->photo->secret ?>/<?php echo $l->photoset->photo->title ?>'> <img class='f_thumb' src='http://farm<?php echo $l->photoset->photo->farm ?>.static.flickr.com/<?php echo $l->photoset->photo->server ?>/<?php echo $l->photoset->photo->id ?>_<?php echo $l->photoset->photo->secret ?>_s.jpg' title='<?php echo $l->photoset->photo->title ?>' alt='<?php echo $l->photoset->photo->title ?>' /></a> <?php endforeach; ?> As you can see, when a user clicks on a picture I pass over the Farm, Server, ID, Secret and Title elements using URI segments and build the page in the controller using $data['farm'] = $this->uri->segment(3); $data['server'] = $this->uri->segment(4); $data['id'] = $this->uri->segment(5); $data['secret'] = $this->uri->segment(6); $data['title'] = $this->uri->segment(7); Everything works and is fine, but the URLs are a tad long, for example “http://localhost:8888/wip/index.php/gallery/focus/3/2682/4368875046/e8f97f61d9/Old Mill House in Donegal”. Is there a way to rewrite the URL so it's more like “http://localhost:8888/wip/index.php/gallery/focus/Old_Mill_House_in_Donegal”? I was looking at using: $url_title = $this->uri->segment(7); $url_title = url_title($url_title, 'underscore', TRUE); But I don’t seem to be able to get it to work. Any ideas?
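
    A sketch of one way this could be wired up, assuming a hypothetical flickr_model whose get_landscapes() returns the same photoset the view iterates over, and assuming the url helper is loaded; since farm/server/id/secret can't be rebuilt from the slug alone, the controller re-derives the photo by comparing slugs:

        <?php
        // Minimal sketch only; "flickr_model" and get_landscapes() are assumptions
        // standing in for however the photoset is already being fetched.
        class Gallery extends CI_Controller {

            public function focus($slug = '')
            {
                $landscapes = $this->flickr_model->get_landscapes();

                foreach ($landscapes->photoset->photo as $photo) {
                    // Same slugging rule used when the link was built.
                    if (url_title($photo->title, 'underscore', TRUE) === $slug) {
                        $data['photo'] = $photo;   // farm, server, id, secret, title
                        break;
                    }
                }

                $this->load->view('gallery/focus', $data);
            }
        }

        // In the view, the thumbnail link is built the same way, e.g.
        //   site_url('gallery/focus/' . url_title($photo->title, 'underscore', TRUE))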

    Read the article

  • DD_belatedPNG.js - how to access the vml object? this is for a PNG image-swap.

    - by akc
    I am trying to use Drew Dillard's awesome DD_belatedPNG fix + jQuery to achieve a run-of-the-mill image-swap on hover -- but with PNGs, and to work on IE6. Example: <a id="thelink" href="blah.html"><img src="f-u-ie6.png" /></a> Since DD's script sets the visibility of the original image to "hidden", you can't effectively hover over it. A lot of people, I have noticed, are thwarted by this limitation. Enough so that Drew mentioned he would try to get a work-around into the next version of his PNG fix. Well, in the meantime, I thought I could get around this by handling the hover event on the image's parent instead. So onmouseover, I would hide the VML object created by DD_belatedPNG while setting a background image on "thelink", and onmouseout, show the VML object again and set the background image to nothing. The following code was just to see if I could access the VML object, but it does not work on the VML. It hides all manner of other children, but not the VML. Any ideas? $(document).ready(function(){ $("thelink").hover(function() { $(this).children().attr({ style: "visibility:hidden" }); }, function() { $(this).children().attr({ style: "visibility:visible" }); }); }); Alternatively, can anyone suggest a great PNG image-swap method? I know that you can swap a background image of a link. But you still need to have something inside the A tag. That's not my case. Also, you could put a transparent GIF in the A tag and have the background image swapped to achieve the effect, but I really don't want to do that. Thanks for your insights!
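
    For what it's worth, a hedged variation on that handler: note the selector needs a "#" to match id="thelink", and .css() avoids replacing whatever inline style DD_belatedPNG has already set. Whether this actually reaches the VML child depends on where DD_belatedPNG inserts it in the DOM:

        $(document).ready(function () {
            $("#thelink").hover(
                function () {
                    // Hide whatever children are present (the img and, hopefully, the VML node).
                    $(this).children().css("visibility", "hidden");
                    $(this).css("background-image", "url(hover-state.png)"); // hypothetical hover image
                },
                function () {
                    $(this).children().css("visibility", "visible");
                    $(this).css("background-image", "none");
                }
            );
        });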

    Read the article

  • Plotting an Arc in Discrete Steps

    - by phobos51594
    Good afternoon. Background: My question relates to the plotting of an arbitrary arc in space using discrete steps. It is unique, however, in that I am not drawing to a canvas in the typical sense. The firmware I am designing is for a gcode interpreter for a CNC mill that will translate commands into stepper motor movements. Now, I have already found a similar question on this very site, but the methodology suggested (Bresenham's Algorithm) appears to be incompatible with moving an object in space, as it relies on the calculation of only one octant of a circle, which is then mirrored about the remaining axes of symmetry. Furthermore, the prescribed method of calculating an arc between two arbitrary angles relies on trigonometry (I am implementing on a microcontroller and would like to avoid costly trig functions, if possible) and on simply not taking the steps that are out of range. Finally, the algorithm is only designed to work in one rotational direction (e.g. counterclockwise). Question: Does anyone know of a general-purpose algorithm that can be used to "draw" an arbitrary arc in discrete steps while still respecting angular direction (CW / CCW)? The final implementation will be done in C, but the language for the purpose of the question is irrelevant. Thank you in advance. References: S.O. post on drawing a simple circle using Bresenham's Algorithm: "Drawing" an arc in discrete x-y steps; Wiki page describing Bresenham's Algorithm for a circle: http://en.wikipedia.org/wiki/Midpoint_circle_algorithm; Gcode instructions to be implemented (see G2 and G3): http://linuxcnc.org/docs/html/gcode.html
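
    One general-purpose approach, offered only as a sketch and not as the G2/G3 interpolation any particular firmware uses, is to rotate the centre-to-point vector by a small fixed angle each step: sin/cos are evaluated once per arc rather than once per step, and the sign of the sweep selects CW versus CCW.

        #include <math.h>

        /* Sketch: step through an arbitrary arc by rotating the vector from the
         * arc's center to the current point by a fixed small angle each step.
         * sin/cos are computed once per arc; the sign of "sweep" gives CW/CCW. */
        typedef void (*step_fn)(double x, double y);   /* hypothetical "move to" callback */

        void plot_arc(double cx, double cy,            /* arc center              */
                      double x0, double y0,            /* start point             */
                      double sweep,                    /* signed radians, + = CCW */
                      int steps, step_fn move_to)
        {
            double dtheta = sweep / steps;
            double c = cos(dtheta), s = sin(dtheta);
            double x = x0 - cx, y = y0 - cy;           /* vector from center      */

            for (int i = 0; i < steps; ++i) {
                double xn = x * c - y * s;             /* rotate by dtheta        */
                double yn = x * s + y * c;
                x = xn; y = yn;
                move_to(cx + x, cy + y);
            }
        }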

    Read the article

  • What Counts For a DBA: Simplicity

    - by Louis Davidson
    Too many computer processes do an apparently simple task in a bizarrely complex way. They remind me of this strip by one of my favorite artists: Rube Goldberg. In order to keep the boss from knowing one was late, a process is devised whereby the cuckoo clock kisses a live cuckoo bird, who then pulls a string, which triggers a hat flinging, which in turn lands on a rod that removes a typewriter cover…and so on. We rely on creating automated processes to keep on top of tasks. DBAs have a lot of tasks to perform: backups, performance tuning, data movement, system monitoring, and of course, avoiding being noticed.  Every day, there are many steps to perform to maintain the database infrastructure, including: checking physical structures, re-indexing tables where needed, backing up the databases, checking those backups, running the ETL, and preparing the daily reports and yes, all of these processes have to complete before you can call it a day, and probably before many others have started that same day. Some of these tasks are just naturally complicated on their own. Other tasks become complicated because the database architecture is excessively rigid, and we often discover during “production testing” that certain processes need to be changed because the written requirements barely resembled the actual customer requirements.   Then, with no time to change that rigid structure, we are forced to heap layer upon layer of code onto the problematic processes. Instead of a slight table change and a new index, we end up with 4 new ETL processes, 20 temp tables, 30 extra queries, and 1000 lines of SQL code.  Report writers then need to build reports and make magical numbers appear from those toxic data structures that are overly complex and probably filled with inconsistent data. What starts out as a collection of fairly simple tasks turns into a Goldbergian nightmare of daily processes that are likely to cause your dinner to be interrupted by the smartphone doing the vibration dance that signifies trouble at the mill. So what to do? Well, if it is at all possible, simplify the problem by either going into the code and refactoring the complex code to simple, or taking all of the processes and simplifying them into small, independent, easily-tested steps.  The former approach usually requires an agreement on changing underlying structures that requires countless mind-numbing meetings; while the latter can generally be done to any complex process without the same frustration or anger, though it will still leave you with lots of steps to complete, the ability to test each step independently will definitely increase the quality of the overall process (and with each step reporting status back, finding an actual problem within the process will be definitely less unpleasant.) We all know the principle behind simplifying a sequence of processes because we learned it in math classes in our early years of attending school, starting with elementary school. In my 4 years (ok, 9 years) of undergraduate work, I remember pretty much one thing from my many math classes that I apply daily to my career as a data architect, data programmer, and as an occasional indentured DBA: “show your work”. This process of showing your work was my first lesson in simplification. Each step in the process was in fact, far simpler than the entire process.  
When you were working an equation that took both sides of 4 sheets of paper, showing your work was important because the teacher could see every step, judge it, and mark it accordingly.  So often I would make an error in the first few lines of a problem which meant that the rest of the work was actually moving me closer to a very wrong answer, no matter how correct the math was in the subsequent steps. Yet, when I got my grade back, I would sometimes be pleasantly surprised. I passed, yet missed every problem on the test. But why? While I got the fact that 1+1=2 wrong in every problem, the teacher could see that I was using the right process. In a computer process, the process is very similar. We take complex processes, show our work by storing intermediate values, and test each step independently. When a process has 100 steps, each step becomes a simple step that is tested and verified, such that there will be 100 places where data is stored, validated, and can be checked off as complete. If you get step 1 of 100 wrong, you can fix it and be confident (that if you did your job of testing the other steps better than the one you had to repair,) that the rest of the process works. If you have 100 steps, and store the state of the process exactly once, the resulting testable chunk of code will be far more complex and finding the error will require checking all 100 steps as one, and usually it would be easier to find a specific needle in a stack of similarly shaped needles.  The goal is to strive for simplicity either in the solution, or at least by simplifying every process down to as many, independent, testable, simple tasks as possible.  For the tasks that really can’t be done completely independently, minimally take those tasks and break them down into simpler steps that can be tested independently.  Like working out division problems longhand, have each step of the larger problem verified and tested.

    Read the article

  • Why everybody should do Sales!

    - by FelixWehmeyer
    I speak with many business students and ask them what job they want to get into. Most of them tell me they want a job in Marketing, Management Consulting or Finance. I hardly ever hear “Sales, that is what I want to do”, and I often wonder why. I would like to start with a quote from Zig Ziglar, a successful salesman: "Nothing happens until someone sells something." But to get back to the main point, why wouldn’t you want to get in sales? When people think of sales, they picture a typical salesman in their head and think that selling is scary and all about manipulating, pressuring and pushing someone into buying something they don’t need. Are these stereotypes accurate? I don’t believe so: So why should you want to be in sales? If you think about selling as providing the solution for the problem and talking about the benefits of making a decision, then every job in this world comes out of selling. In every job you deal with coworkers that you want to convince of your ideas or convincing your boss that the project you want to work on is good for the company.  These days, consumers and businesses are very well informed about services and products. When we are talking about highly complex products, such as IT solutions, businesses don’t accept your run-of-the-mill salesman who is pushing a sale. These are often long projects where salespeople have a consulting and leading role. Salespeople need to be able to consult companies and customers with their problem and convince a client that their solution is the best fit. Next to the fact that sales, is by far, not as scary and shady as you thought, there are a few points that will make you want to consider a sales career: Negotiating skills – When you are in sales you will learn how to negotiate. Salespeople learn to listen to their customers and try to make them happy, overcoming objections and come to a final agreement that both parties are happy with. Persistence/Challenge – As a salesperson you will often hear a negative answer, in a sales role you will start to embrace this and see a ‘no’ as a challenge not as a rejection. This attitude change can help you a lot in your career, but also in your personal life. You will become more optimistic and gain a go-getter attitude. Salary – As salespeople are seen as the moneymakers for the company, companies often reward their sales teams generously. Most likely in a sales role, you will receive a good basic salary and often you get nice bonuses on top of that based on your performance. Oracle is, for instance, the company that offers the highest average commission in the world. Further you can expect many other benefits as companies know that there is a high demand for good salespeople. Teamwork – Sales is a lot like having your own business, you are responsible for your own territory or set of clients. You are the one who is responsible for the revenue coming from that territory. So in order to gain revenue you will have to work together with many departments and people to make that happen. Every (potential) client could be seen as a different project, and you are the project leader. Understanding customers and the business – From any job that you choose sales will get you the most insight in the market. Salespeople are usually well-connected, talk with different customers and learn about the market and are up-to-date about all latest changes. Even if you want to change to a different role in the long run, you have a great head start as you understand the market and customers like no one else. 
Job security – Look at all the job postings out there. Many of them are sales-related. So if you want to have a steady job, plenty of choice and companies willing to invest in you, sales could be something for you.  Are you interested in exploring a sales career? At Oracle we are always looking for good sales professionals and fresh graduates who want to get into sales! For many languages such as Flemish, Dutch, German, French, Swedish and Norwegian (and more) we are currently looking for graduates who want to develop their career in Oracle. Please have a look at this article for the experience of a Business Development Consultant at Oracle in Dublin. Want to learn more about this job check out this link or send an email to jessica.ebbelaar-at-oracle.com! Have a look at our website http://campus.oracle.com for all of our other latest sales and non-sales vacancies!

    Read the article

  • What web platform is right for me?

    - by egervari
    I've been looking at web frameworks like Rails, Grails, etc. I'm used to doing applications in Spring Framework with Hibernate... and I want something more productive. One of the things I realized is that while some of the things in Grails is sexy, there are some serious problems with it. Grails' controllers: 1) are implemented awfully. They don't seem to be able to extend from super classes at runtime. I tried this to add base actions and helper methods, and this seems to cause grails to blow up. 2) are based on an obsolete request parameters model (rather than form backing objects, which are much nicer). 3) are hard to test. Command objects are treated totally differently... and it's actually MUCH harder to write the test than it is to write the controller code. 4) Command objects operate totally differently. They are pre-validated and bound, which causes a lot of inconsistencies than basic parameter model. 5) Command objects are not reusable, and it's a pain in the rear to reuse most of the stuff from the domain classes, like constraints and fields. This is TRIVIAL to do in basic Spring. Why the hell was it not trivial to do in Grails? 6) The scaffolding that is generated is pure crap. It doesn't generalize inserts and updates... and it actually copy/pastes a pile of code in two views: create.gsp and edit.gsp. The views themselves are gargantuan piles of doggie do-do. This is further compounded by the fact that it uses low-level parameters and not objects. Integration tests are 30x slower than a Spring integration test. It is disgusting. Some mocking tests are so hard to write and aren't guaranteed to work when it's deployed, that I think it discourages fast, tdd test cycles. Most things seem to screw up grails while it's running, like adding a taglib, or anything really. The server restart problem wasn't solved at all. I'm starting to think going with Spring/Hibernate/Java is the only way to go. While there is a pretty big cost at startup, I know it'll eventually smooth out. It sucks I can't use a language like Scala... because idiomatically, it is so incompatible with Hibernate. This app is also not a run-of-the-mill UI over a database. It's got some of that, but it's not going to be a slouch. I am deathly scared of Grails now because of how crap it is in the Controller layer. Suggestions on what I can do?

    Read the article

  • Windows Machine Debugging

    - by PrettyFlower
    I've been learning how to program for Windows for some time now and am getting pretty comfy with COM. I had thought to go over to Linux and do some C++ programming there and I wished to run Rosetta Commons so I installed Fedora. I had tried installing Ubuntu a few months ago and things got messy. I had a glitch, maybe caused by one of the live cd creators, my video card or something I don't know. Who Crashed suggested it was my video card and I had regular messages about ntfs.sys and page file issues. At any rate I just installed Fedora and the same thing is happening again. I would like to think with the twenty five years of doing this that I might finally make some headway into debugging my system. I think I may have overlooked a lot of what could be done in favor of simply uninstalling, reinstalling and formatting and starting from scratch. I have opened up the folder windows debugging tools, quite accidentally and just before I was going to clean sweep again, and I found KD and WinDbg. I had never seen these before and I felt that maybe I should look into this. I am quite familiar with the modern machine that is known as the computer, I know what a Kernel is and am now pretty familiar with at the very least Windows Operating System Services. I wish to begin tracking my own machines errors. I understand that most kernel debugging is done on a second machine but I don't have one. And also I understand the goal of the debugger seems to be less about run of the mill errors and more about development time strategies but I'm sure there is more to this. This is my first go at this and I thought maybe I could get some suggestions on where to go from here. I would really like to learn ways to fix my machine and also maybe pick up some tricks on the dev side as well. I hope this isn't too broad a question or too generalized. I'm really just looking for the keywords and an overview of the more routine strategies used. thx
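
    As a starting point with the tools in that folder, the usual first pass at a crash dump looks something like this (the dump file name is only an example; paths assume a default Windows install):

        windbg -z C:\Windows\Minidump\Mini040110-01.dmp

        then, inside WinDbg:
            .symfix        (point the debugger at Microsoft's public symbol server)
            .reload        (reload symbols for the loaded modules)
            !analyze -v    (automated crash analysis; names the suspected faulting module or driver)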

    Read the article

  • In Linux, what's the best way to delegate administration responsibilities, like for Apache, a database, or some other application?

    - by Andrew Banks
    In Linux, what's the best way to delegate administration responsibilities for Apache and other "applications"? File permissions? Sudo? A mix of both? Something else? At work we have two tiers of "administrators": (1) operating system administrators - your run-of-the-mill "server administrators," responsible for just the operating system; and (2) application administrators - the people who build the web site. This includes not only writing the SQL, PHP, and HTML, but also setting up and running Apache and PostgreSQL or MySQL. The aforementioned OS admins will install this stuff, but it's mainly up to the app admins to edit all the config files, start and stop processes when needed, and so on. I am one of the app admins. This is different from what I am used to. I used to just write code. The sysadmin took care not only of the OS but also of installing, setting up, and maintaining the server software. But he left. Now I'm in charge of setting up Apache and the database. The new sysadmins say they just handle the operating system. It's no problem. I welcome learning new stuff. But there is a learning curve, even for the OS admins. Apache, by default, seems to be set up for administration by root directly. All the config files and scripts are 644 and owned by root:root. I'm not given the root password, naturally, so the OS admins must somehow give my ordinary OS user account all the rights necessary to edit Apache's config files, start and stop it, read its log files, and so on. Right now they're using a mix of: (1) giving me certain sudo rights, (2) adding me to certain groups, and (3) changing the file permissions of various directories, to make them writable by one of the groups I'm in. This never goes smoothly. There's always a back-and-forth between me and the sysadmins. They say it's ready. Then I try certain things, and half of them I still can't do. So they make some more changes. Then finally I seem to be independent and can administer Apache and the database without pestering them anymore. It's the sheer complication and number of changes that make me uncomfortable. Even though it finally works, more or less, it seems hackneyed. I feel like we're doing it wrong. It seems like the makers of the software would have anticipated this scenario (someone other than root administering it) and have a clean two- or three-step program to delegate responsibility to me. But it feels like we are really chewing up the filesystem and moving it far away from the default setup. Any suggestions? Are we doing it the recommended way? P.S. For PostgreSQL it seems a little better. Its files are owned by a system user named postgres. So giving me the right to run sudo su - postgres gives me just about everything. I'm just now getting into MySQL, but it seems to be set up similarly. But it seems a little weird doing all my work as another user.
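
    For the Apache piece specifically, a minimal sketch of the kind of delegation being described (the group name, paths, and service name are assumptions and vary by distribution) might live in a sudoers drop-in:

        # /etc/sudoers.d/appadmins   (edit with: visudo -f /etc/sudoers.d/appadmins)
        # Members of "appadmins" may manage the Apache service and edit its config
        # without getting a general root shell.
        %appadmins ALL = (root) /sbin/service httpd start, /sbin/service httpd stop, \
                                /sbin/service httpd restart, /sbin/service httpd status
        %appadmins ALL = (root) sudoedit /etc/httpd/conf/httpd.conf, \
                                sudoedit /etc/httpd/conf.d/*.conf

    Log access is then typically handled with group ownership on the log directory (plus group read bits) rather than through sudo.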

    Read the article

  • Orchestrating the Virtual Enterprise

    - by John Murphy
    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise." Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise. What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise. Supply Chain Management Systems in a Virtual Enterprise: Historically, enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources. Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems. Case in point: almost everyone has ordered from Amazon.com at one time or another.
Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, using thousands of trading partners, require a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive and custom solution to tackle this problem as there used to be no product solution available. Now, other companies who want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO). Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it could improve quality, speed, flexibility, and consistency without requiring an organ transplant of these highly complex legacy systems. Many retailers face the challenge of dealing with brick and mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick and mortar retail operation added an online business, they turned to Oracle Fusion DOO to bring these entities together. Disturbing the Peace with Acquisitions: Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation. The Flip Side of Fulfillment: Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how to manage complex supply in a flexible way when there are multiple trading parties involved? How to manage the supply to suppliers? How to manage critical components that need to merge in a tier two or tier three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge. The Clear Front Runner: No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area.
And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Read the article

  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
    During my ongoing tenure at Oracle, I have met all types of DBAs. Happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation, they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as “Database as a Service”. In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches and often get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don’t deny vanilla automation could be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully. For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder about: 1. Does it make life easier for your end users? Database-as-a-Service can have several types of end users. Junior DBAs, QA Engineers, Developers- each having their own skillset. The objective of DBaaS is to make their life simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a Developer using Oracle Application Express (APEX), you want to deal with schema, objects and PL/SQL code and not with datafiles or listener configuration. If you are a QA Engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is, whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system. Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor. 2. Does it make life easier for your administrators? Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, cloud allowed each end user to create databases of their liking resulting in hundreds of version and patch levels and thousands of individual databases. Therefore the right question to ask is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and it could result in a management nightmare. 
That leads us to our next question. 3. Does it satisfy your Security Officers and Compliance Auditors? Compliance Auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom in tampering with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with them as they keep getting reconfigured and moved around. This leads to the proverbial needle in the haystack problem, and all it needs is one needle to cause a serious compliance issue in the enterprise. Bottomline is, flexibility and agility should not come at the expense of compliance and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a ‘one size fits all approach’ i.e. OS level isolation, can we think smartly about database isolation or schema based isolation? This is where the appropriate resource modeling needs to be applied. The usual systems management vendors out there with heterogeneous common-denominator approach have compromised on these semantics. If you follow Enterprise Manager’s DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database, should be governed by the need of the applications and not by putting compliance considerations in the backburner. 4. Does it satisfy your CIO? Finally, does it satisfy your higher ups? As the sponsor of cloud initiative, the CIO is expected to lead an IT transformation project, not merely a run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation would flatten out at 20-25%. The question would be: what next? As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, " the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service level assurance. Last but not the least, lot of CIOs get miffed if we ask them to throw away their existing hardware investments for implementing DBaaS. In Oracle, we always emphasize on freedom of choosing a platform; hence Enterprise Manager’s DBaaS solution is platform neutral. It can work on any Operating System (that the agent is certified on) Oracle’s hardware as well as 3rd party hardware. As a parting note, I urge you to remember these 4 questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

    Read the article

  • Doing CRUD on XML using id attributes in C# ASP.NET

    - by Brandon G
    I'm a LAMP guy and ended up working on this small news module for an ASP.NET site, which I am having some difficulty with. I am basically adding and deleting elements via AJAX based on the id. Before, I had it working based on the index of a set of elements, but would have issues deleting, since the index would change in the xml file and not on the page (since I am using ajax). Here is the rundown. news.xml: <?xml version="1.0" encoding="utf-8"?> <news> <article id="1"> <title>Red Shield Environmental implements the PARCSuite system</title> <story>Add stuff here</story> </article> <article id="2"> <title>Catalyst Paper selects PARCSuite for its Mill-Wide Process...</title> <story>Add stuff here</story> </article> <article id="3"> <title>Weyerhaeuser uses Capstone Technology to provide Control...</title> <story>Add stuff here</story> </article> </news> Page sending the del request: <script type="text/javascript"> $(document).ready(function () { $('.del').click(function () { var obj = $(this); var id = obj.attr('rel'); $.post('add-news-item.aspx', { id: id }, function () { obj.parent().next().remove(); obj.parent().remove(); } ); }); }); </script> <a class="del" rel="1">...</a> <a class="del" rel="1">...</a> <a class="del" rel="1">...</a> My functions: protected void addEntry(string title, string story) { XmlDocument news = new XmlDocument(); news.Load(Server.MapPath("../news.xml")); XmlAttributeCollection ids = news.Attributes; //Create a new node XmlElement newelement = news.CreateElement("article"); XmlElement xmlTitle = news.CreateElement("title"); XmlElement xmlStory = news.CreateElement("story"); XmlAttribute id = ids[0]; int myId = int.Parse(id.Value + 1); id.Value = ""+myId; newelement.SetAttributeNode(id); xmlTitle.InnerText = this.TitleBox.Text.Trim(); xmlStory.InnerText = this.StoryBox.Text.Trim(); newelement.AppendChild(xmlTitle); newelement.AppendChild(xmlStory); news.DocumentElement.AppendChild(newelement); news.Save(Server.MapPath("../news.xml")); } protected void deleteEntry(int selectIndex) { XmlDocument news = new XmlDocument(); news.Load(Server.MapPath("../news.xml")); XmlNode xmlnode = news.DocumentElement.ChildNodes.Item(selectIndex); xmlnode.ParentNode.RemoveChild(xmlnode); news.Save(Server.MapPath("../news.xml")); } I haven't updated deleteEntry(), and as you can see, I was using the array index, but I need to delete the article element based on the article id being passed. And when adding an entry, I need to set the id to the last element's id + 1. Yes, I know SQL would be 100 times easier, but I don't have access, so... help?
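
    A hedged sketch of the id-based pieces, keeping the same System.Xml types and method shape as the code above (error handling omitted): select the article by its id attribute rather than by position, and derive the next id from the current maximum.

        // Delete the <article> whose id attribute matches, instead of indexing
        // into the child list.
        protected void deleteEntry(int id)
        {
            XmlDocument news = new XmlDocument();
            news.Load(Server.MapPath("../news.xml"));

            XmlNode article = news.SelectSingleNode(
                string.Format("/news/article[@id='{0}']", id));

            if (article != null)
            {
                article.ParentNode.RemoveChild(article);
                news.Save(Server.MapPath("../news.xml"));
            }
        }

        // New id = highest existing article id + 1 (1 when the file is empty);
        // addEntry() would call this instead of reading news.Attributes.
        protected int nextArticleId(XmlDocument news)
        {
            int max = 0;
            foreach (XmlNode article in news.SelectNodes("/news/article"))
            {
                int id = int.Parse(article.Attributes["id"].Value);
                if (id > max) max = id;
            }
            return max + 1;
        }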

    Read the article

  • Need help to debug application using .Net and MySQL

    - by Tim Murphy
    What advice can you give me on how to track down a bug I have while inserting data into MySQL database with .Net? The error message is: MySql.Data.MySqlClient.MySqlException: Duplicate entry '26012' for key 'StockNumber_Number_UNIQUE' Reviewing of the log proves that StockNumber_Number of 26012 has not been inserted yet. Products in use. Visual Studio 2008. mysql.data.dll 6.0.4.0. Windows 7 Ultimate 64 bit and Windows 2003 32 bit. Custom built ORM framework (have source code). Importing data from Access 2003 database. The code works fine for 3000 - 5000 imports. The record being imported that causes the problem in a full run works fine if just importing by itself. I've also seen the error on other records if I sort the data to be imported a different way. Have tried import with and without transactions. Have logged the hell out of the system. The SQL command to create the table: CREATE TABLE `RareItems_RareItems` ( `RareItemKey` CHAR(36) NOT NULL PRIMARY KEY, `StockNumber_Text` VARCHAR(7) NOT NULL, `StockNumber_Number` INT NOT NULL AUTO_INCREMENT, UNIQUE INDEX `StockNumber_Number_UNIQUE` (`StockNumber_Number` ASC), `OurPercentage` NUMERIC , `SellPrice` NUMERIC(19, 2) , `Author` VARCHAR(250) , `CatchWord` VARCHAR(250) , `Title` TEXT , `Publisher` VARCHAR(250) , `InternalNote` VARCHAR(250) , `DateOfPublishing` VARCHAR(250) , `ExternalNote` LONGTEXT , `Description` LONGTEXT , `Scrap` LONGTEXT , `SuppressionKey` CHAR(36) NOT NULL, `TypeKey` CHAR(36) NOT NULL, `CatalogueStatusKey` CHAR(36) NOT NULL, `CatalogueRevisedDate` DATETIME , `CatalogueRevisedByKey` CHAR(36) NOT NULL, `CatalogueToBeRevisedByKey` CHAR(36) NOT NULL, `DontInsure` BIT NOT NULL, `ExtraCosts` NUMERIC(19, 2) , `IsWebReady` BIT NOT NULL, `LocationKey` CHAR(36) NOT NULL, `LanguageKey` CHAR(36) NOT NULL, `CatalogueDescription` VARCHAR(250) , `PlacePublished` VARCHAR(250) , `ToDo` LONGTEXT , `Headline` VARCHAR(250) , `DepartmentKey` CHAR(36) NOT NULL, `Temp1` INT , `Temp2` INT , `Temp3` VARCHAR(250) , `Temp4` VARCHAR(250) , `InternetStatusKey` CHAR(36) NOT NULL, `InternetStatusInfo` LONGTEXT , `PurchaseKey` CHAR(36) NOT NULL, `ConsignmentKey` CHAR(36) , `IsSold` BIT NOT NULL, `RowCreated` DATETIME NOT NULL, `RowModified` DATETIME NOT NULL ); The SQL command and parameters to insert the record: INSERT INTO `RareItems_RareItems` (`RareItemKey`, `StockNumber_Text`, `StockNumber_Number`, `OurPercentage`, `SellPrice`, `Author`, `CatchWord`, `Title`, `Publisher`, `InternalNote`, `DateOfPublishing`, `ExternalNote`, `Description`, `Scrap`, `SuppressionKey`, `TypeKey`, `CatalogueStatusKey`, `CatalogueRevisedDate`, `CatalogueRevisedByKey`, `CatalogueToBeRevisedByKey`, `DontInsure`, `ExtraCosts`, `IsWebReady`, `LocationKey`, `LanguageKey`, `CatalogueDescription`, `PlacePublished`, `ToDo`, `Headline`, `DepartmentKey`, `Temp1`, `Temp2`, `Temp3`, `Temp4`, `InternetStatusKey`, `InternetStatusInfo`, `PurchaseKey`, `ConsignmentKey`, `IsSold`, `RowCreated`, `RowModified`) VALUES (@RareItemKey, @StockNumber_Text, @StockNumber_Number, @OurPercentage, @SellPrice, @Author, @CatchWord, @Title, @Publisher, @InternalNote, @DateOfPublishing, @ExternalNote, @Description, @Scrap, @SuppressionKey, @TypeKey, @CatalogueStatusKey, @CatalogueRevisedDate, @CatalogueRevisedByKey, @CatalogueToBeRevisedByKey, @DontInsure, @ExtraCosts, @IsWebReady, @LocationKey, @LanguageKey, @CatalogueDescription, @PlacePublished, @ToDo, @Headline, @DepartmentKey, @Temp1, @Temp2, @Temp3, @Temp4, @InternetStatusKey, @InternetStatusInfo, @PurchaseKey, @ConsignmentKey, @IsSold, 
@RowCreated, @RowModified) @RareItemKey = 0b625bd6-776d-43d6-9405-e97159d172a6 @StockNumber_Text = 199305 @StockNumber_Number = 26012 @OurPercentage = 22.5 @SellPrice = 1250 @Author = SPARRMAN, Anders. @CatchWord = COOK: SECOND VOYAGE @Title = A Voyage Round the World with Captain James Cook in H.M.S. Resolution… Introduction and notes by Owen Rutter, wood engravings by Peter Barker-Mill. @Publisher = @InternalNote = @DateOfPublishing = 1944 @ExternalNote = The first English translation of Sparrman’s narrative, which had originally been published in Sweden in 1802-1818, and the only complete version of his account to appear in English. The eighteenth-century translation had appeared some time before the Swedish publication of the final sections of his account. Sparrman’s observant and well-written narrative of the second voyage contains much that appears nowhere else, emphasising naturally his interests in medicine, health, and natural history.&lt;br&gt;&lt;br&gt;One of 350 numbered copies: a handsomely produced and beautifully illustrated work. @Description = Small folio, wood-engravings in the text; original olive glazed cloth, top edges gilt, a very good copy. London, Golden Cockerel Press, 1944. @Scrap = @SuppressionKey = 00000000-0000-0000-0000-000000000000 @TypeKey = 93f58155-7471-46ad-84c5-262ab9dd37e8 @CatalogueStatusKey = 00000000-0000-0000-0000-000000000003 @CatalogueRevisedDate = @CatalogueRevisedByKey = c4f6fc06-956d-44c4-b393-0d5462cbffec @CatalogueToBeRevisedByKey = 00000000-0000-0000-0000-000000000000 @DontInsure = False @ExtraCosts = @IsWebReady = False @LocationKey = 00000000-0000-0000-0000-000000000000 @LanguageKey = 00000000-0000-0000-0000-000000000000 @CatalogueDescription = @PlacePublished = Golden Cockerel Press @ToDo = @Headline = @DepartmentKey = 529578a3-9189-40de-b656-eef9039d00b8 @Temp1 = @Temp2 = @Temp3 = @Temp4 = v @InternetStatusKey = 00000000-0000-0000-0000-000000000000 @InternetStatusInfo = @PurchaseKey = 00000000-0000-0000-0000-000000000000 @ConsignmentKey = @IsSold = True @RowCreated = 8/04/2010 8:49:16 PM @RowModified = 8/04/2010 8:49:16 PM Suggestions on what is causing the error and/or how to track down what is causing the problem?
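For reference, here is a minimal diagnostic sketch in plain SQL that I could run against the live schema between imports (assuming direct query access to the target MySQL database; the staging_table name in the last, commented-out query is hypothetical) to check whether the value or the AUTO_INCREMENT counter is already out of step before the failing insert:
    -- Is the failing value really absent immediately before the insert?
    SELECT `RareItemKey`, `StockNumber_Text`, `StockNumber_Number`, `RowCreated`
    FROM `RareItems_RareItems`
    WHERE `StockNumber_Number` = 26012;
    -- What does MySQL think the next AUTO_INCREMENT value is?
    -- (StockNumber_Number is declared AUTO_INCREMENT, yet the insert supplies an explicit value.)
    SELECT `AUTO_INCREMENT`
    FROM `information_schema`.`TABLES`
    WHERE `TABLE_SCHEMA` = DATABASE() AND `TABLE_NAME` = 'RareItems_RareItems';
    -- Duplicates already present in the source data would also raise this error; a staging copy
    -- of the Access data could be checked with something like:
    -- SELECT `StockNumber_Number`, COUNT(*) FROM `staging_table`
    -- GROUP BY `StockNumber_Number` HAVING COUNT(*) > 1;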

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
Using OSCH: Beyond Hello World In the previous post we discussed a “Hello World” example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads. Using OSCH External Tables for Access and Loading OSCH external tables are no different from any other Oracle external tables.  They can be used to access HDFS content using Oracle SQL: SELECT * FROM my_hdfs_external_table; or use the same SQL access to load a table in Oracle. INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table; To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints. ALTER SESSION FORCE PARALLEL DML PARALLEL 8; ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8; INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table; There are various ways of hinting at what level of DOP you want to use.  The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section).  Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively. /*+ parallel(my_oracle_table,8) *//*+ parallel(my_hdfs_external_table,8) */ Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load.  In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks.  It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts.  The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes.   The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead.  The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table. Determine Your DOP It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user. 
The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can’t be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are those where you have exclusive access to all the resources on an Oracle system. This can be when you are first seeding tables in a new Oracle database, or when normal activity in the production database can safely be taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15 TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible. Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system Determining the Number of Location Files Let’s assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember that location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file). Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool. Rule 3: The number of location files chosen should be a small multiple of the DOP Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and give them equivalent workloads. Obviously if you run with a DOP of 8 but have only 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea. 
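To make the running example concrete, here is a rough end-to-end sketch of a load session at a DOP of 8, combining the session settings and hints shown earlier (the target table and its columns are hypothetical placeholders; only the hints, attributes, and session statements come from the discussion above):
    -- Target table defined with the recommended PARALLEL and NOLOGGING attributes
    CREATE TABLE my_oracle_table (
      id      NUMBER,
      payload VARCHAR2(4000)
    ) NOLOGGING PARALLEL;
    -- Assert the DOP for this session (assumes the DBA allows a DOP of 8)
    ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
    ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
    -- Direct path, parallel load from the OSCH external table
    INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
    SELECT * FROM my_hdfs_external_table;
    COMMIT;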
Determining the Number of HDFS Files Let’s start with the next rule and then explain it: Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be of roughly the same size In our running example, the DOP is 8, which means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load.  It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool’s ability to balance is reduced if HDFS file sizes are grossly out of balance or if there are too few of them. For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files into the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If, however, the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating lists where the workload for each location file is relatively the same.  Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10 GB in size.  For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file). As a rule, when the OSCH External Table tool has to deal with more, smaller files, it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when the files got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading. 
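One way to sanity-check the balance after a trial load (a generic Oracle parallel-query technique rather than anything specific to OSCH, and one that assumes you have SELECT access to the V$ views) is to look at how much work each PQ slave actually did in the session that just ran the load:
    -- Rows handled per PQ slave for the last parallel statement in this session.
    -- Badly skewed NUM_ROWS values suggest unbalanced location files or HDFS file sizes.
    SELECT dfo_number, tq_id, server_type, process, num_rows
    FROM v$pq_tqstat
    ORDER BY dfo_number, tq_id, server_type, process;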
Next Steps So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story. They can be used together in a way that makes OSCH loads more efficient and gives you more flexibility in scheduling load operations across a Hadoop cluster and an Oracle Database. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of using the various load methods.  This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

    Read the article

  • Javascript stockticker : not showing data on php page

    - by developer
I am not getting any JavaScript errors and the code appears to render properly, but the server is still not displaying any data on the page. Please check the code below. <style type="text/css"> #marqueeborder { color: #cccccc; background-color: #EEF3E2; font-family:"Lucida Console", Monaco, monospace; position:relative; height:20px; overflow:hidden; font-size: 0.7em; } #marqueecontent { position:absolute; left:0px; line-height:20px; white-space:nowrap; } .stockbox { margin:0 10px; } .stockbox a { color: #cccccc; text-decoration : underline; } </style> </head> <body> <div id="marqueeborder" onmouseover="pxptick=0" onmouseout="pxptick=scrollspeed"> <div id="marqueecontent"> <?php // Original script by Walter Heitman Jr, first published on http://techblog.shanock.com // List your stocks here, separated by commas, no spaces, in the order you want them displayed: $stocks = "idt,iye,mill,pwer,spy,f,msft,x,sbux,sne,ge,dow,t"; // Function to copy a stock quote CSV from Yahoo to the local cache. CSV contains symbol, price, and change function upsfile($stock) { copy("http://finance.yahoo.com/d/quotes.csv?s=$stock&f=sl1c1&e=.csv","stockcache/".$stock.".csv"); } foreach ( explode(",", $stocks) as $stock ) { // Where the stock quote info file should be... $local_file = "stockcache/".$stock.".csv"; // ...if it exists. If not, download it. if (!file_exists($local_file)) { upsfile($stock); } // Else,If it's out-of-date by 15 mins (900 seconds) or more, update it. elseif (filemtime($local_file) <= (time() - 900)) { upsfile($stock); } // Open the file, load our values into an array... $local_file = fopen ("stockcache/".$stock.".csv","r"); $stock_info = fgetcsv ($local_file, 1000, ","); // ...format, and output them. I made the symbols into links to Yahoo's stock pages. echo "<span class=\"stockbox\"><a href=\"http://finance.yahoo.com/q?s=".$stock_info[0]."\">".$stock_info[0]."</a> ".sprintf("%.2f",$stock_info[1])." <span style=\""; // Green prices for up, red for down if ($stock_info[2]>=0) { echo "color: #009900;\">&uarr;"; } elseif ($stock_info[2]<0) { echo "color: #ff0000;\">&darr;"; } echo sprintf("%.2f",abs($stock_info[2]))."</span></span>\n"; // Done! fclose($local_file); } ?> <span class="stockbox" style="font-size:0.6em">Quotes from <a href="http://finance.yahoo.com/">Yahoo Finance</a></span> </div> </div> </body> <script type="text/javascript"> // Original script by Walter Heitman Jr, first published on http://techblog.shanock.com // Set an initial scroll speed. This equates to the number of pixels shifted per tick var scrollspeed=2; var pxptick=scrollspeed; var marqueediv=''; var contentwidth=""; var marqueewidth = ""; function startmarquee(){ alert("hi"); // Make a shortcut referencing our div with the content we want to scroll marqueediv=document.getElementById("marqueecontent"); //alert("marqueediv"+marqueediv); alert("hi"+marqueediv.innerHTML); // Get the total width of our available scroll area marqueewidth=document.getElementById("marqueeborder").offsetWidth; alert("marqueewidth"+marqueewidth); // Get the width of the content we want to scroll contentwidth=marqueediv.offsetWidth; alert("contentwidth"+contentwidth); // Start the ticker at 50 milliseconds per tick, adjust this to suit your preferences // Be warned, setting this lower has heavy impact on client-side CPU usage. Be gentle. var lefttime=setInterval("scrollmarquee()",50); alert("lefttime"+lefttime); } function scrollmarquee(){ // Check position of the div, then shift it left by the set amount of pixels. 
if (parseInt(marqueediv.style.left)>(contentwidth*(-1))) marqueediv.style.left=parseInt(marqueediv.style.left)-pxptick+"px"; //alert("hikkk"+marqueediv.innerHTML);} // If it's at the end, move it back to the right. else{ alert("marqueewidth"+marqueewidth); marqueediv.style.left=parseInt(marqueewidth)+"px"; } } window.onload=startmarquee; </script> </html> Below is the page as the server displays it. I have updated the question with a screenshot as per your suggestion, and I made a change in the HTML too, to check what is being shown by the child div.

    Read the article

  • Pointers to Derived Class Objects Losing vfptr

    - by duckworthd
    To begin, I am trying to write a run-of-the-mill, simple Ray Tracer. In my Ray Tracer, I have multiple types of geometries in the world, all derived from a base class called "SceneObject". I've included the header for it here. /** Interface for all objects that will appear in a scene */ class SceneObject { public: mat4 M, M_inv; Color c; SceneObject(); ~SceneObject(); /** The transformation matrix to be applied to all points of this object. Identity leaves the object in world frame. */ void setMatrix(mat4 M); void setMatrix(MatrixStack mStack); void getMatrix(mat4& M); /** The color of the object */ void setColor(Color c); void getColor(Color& c); /** Alter one portion of the color, leaving the rest as they were. */ void setDiffuse(vec3 rgb); void setSpecular(vec3 rgb); void setEmission(vec3 rgb); void setAmbient(vec3 rgb); void setShininess(double s); /** Fills 'inter' with information regarding an intersection between this object and 'ray'. Ray should be in world frame. */ virtual void intersect(Intersection& inter, Ray ray) = 0; /** Returns a copy of this SceneObject */ virtual SceneObject* clone() = 0; /** Print information regarding this SceneObject for debugging */ virtual void print() = 0; }; As you can see, I've included a couple virtual functions to be implemented elsewhere. In this case, I have only two derived class -- Sphere and Triangle, both of which implement the missing member functions. Finally, I have a Parser class, which is full of static methods that do the actual "Ray Tracing" part. Here's a couple snippets for relevant portions void Parser::trace(Camera cam, Scene scene, string outputFile, int maxDepth) { int width = cam.getNumXPixels(); int height = cam.getNumYPixels(); vector<vector<vec3>> colors; colors.clear(); for (int i = 0; i< width; i++) { vector<vec3> ys; for (int j = 0; j<height; j++) { Intersection intrsct; Ray ray; cam.getRay(ray, i, j); vec3 color; printf("Obtaining color for Ray[%d,%d]\n", i,j); getColor(color, scene, ray, maxDepth); ys.push_back(color); } colors.push_back(ys); } printImage(colors, width, height, outputFile); } void Parser::getColor(vec3& color, Scene scene, Ray ray, int numBounces) { Intersection inter; scene.intersect(inter,ray); if(inter.isIntersecting()){ Color c; inter.getColor(c); c.getAmbient(color); } else { color = vec3(0,0,0); } } Right now, I've forgone the true Ray Tracing part and instead simply return the color of the first object hit, if any. As you have no doubt noticed, the only way the computer knows that a ray has intersected an object is through Scene.intersect(), which I also include. void Scene::intersect(Intersection& i, Ray r) { Intersection result; result.setDistance(numeric_limits<double>::infinity()); result.setIsIntersecting(false); double oldDist; result.getDistance(oldDist); /* Cycle through all objects, making result the closest one */ for(int ind=0; ind<objects.size(); ind++){ SceneObject* thisObj = objects[ind]; Intersection betterIntersect; thisObj->intersect(betterIntersect, r); double newDist; betterIntersect.getDistance(newDist); if (newDist < oldDist){ result = betterIntersect; oldDist = newDist; } } i = result; } Alright, now for the problem. I begin by creating a scene and filling it with objects outside of the Parser::trace() method. Now for some odd reason, I cast Ray for i=j=0 and everything works wonderfully. However, by the time the second ray is cast all of the objects stored in my Scene no longer recognize their vfptr's! 
I stepped through the code with a debugger and found that the information for all the vfptrs is lost somewhere between the end of getColor() and the continuation of the loop. However, if I change the arguments of getColor() to use a Scene& instead of a Scene, then no loss occurs. What crazy voodoo is this?

    Read the article

< Previous Page | 1 2