Search Results

Search found 14861 results on 595 pages for 'high speed computing'.


  • Should I be an algorithm developer, or java web frameworks type developer?

    - by Derek
    So - as I see it, there are really two kinds of developers. Those that do frameworks, web services, pretty-making front ends, etc etc. Then there are developers that write the algorithms that solve the problem. That is, unless the problem is "display this raw data in some meaningful way." In that case, the framework/web developer guy might be doing both jobs. So my basic problem is this. I have been an algorithms kind of software developer for a few years now. I double majored in Math and Computer Science, and I have a master's in systems engineering. I have never done any web-dev work, with the exception of a couple of minor jobs, and some hobby-level stuff. I have been job interviewing lately, and this is what happens: Job is listed as "programmer - 5 years of experience with the following: C/C++, Java, Perl, Ruby, ant, blah blah blah" Recruiter calls me, says they want me to come in for interview In the interview, find out they have some web services development, blah blah blah When asked in the interview, talk about my experience doing algorithms, optimization, blah blah... but very willing to learn new languages, frameworks, etc Get a call back saying "we didn't think you were a fit for the job you interviewed with, but our algorithm team got wind of you and wants to bring you on" This has happened to me a couple of times now - see a vague-ish job description looking for a "programmer" Go in, find out they are doing some sort of web-based tool, maybe with some hardcore algorithms running in the background. Interview with people for the web-based tool, but get an offer from the algorithms people. So the question is - which job is the better job? I basically just want to get a wide breadth of experience at this level of my career, but are algorithm developers so much in demand? Even more so than all these supposedly hot, in-demand web developer guys? Will I be OK in the long run if I go into the niche of math-based algorithm development, with just little to no, or hobby-level, web-dev experience? I basically just don't want to pigeonhole myself this early. My salary is already starting to get pretty high - and I can see a company later on saying "we really need a web developer, but we'll hire this 50k/year college guy, instead of this 100k/year experienced algorithm guy" Cliffs notes: I have been doing algorithm development. I consider myself to be a "good programmer." I would have no problem picking up web technologies and those sorts of frameworks. During job interviews, I keep getting "we think you've got a good skillset - talk to our algorithm team" instead of wanting me to learn new skills on the job to do their web services or whatever other new technology they are doing. Edit: Whenever I am talking about algorithm development here - I am talking about the code that produces the answer. Typically I think of more math-based algorithms: solving a financial problem, solving a finite element method, image processing, etc

    Read the article

  • June IOUG events

    - by Mandy Ho
    Independent Oracle User Group (IOUG) Regional Events: June 11-12, 2012 – Broomfield, CO 2-Day Seminar – “High Performance PL/SQL & Oracle Database 11g New Features” Steven Feuerstein, generally considered the world’s leading PL/SQL expert, will be presenting his all-new, 2-day “Higher Performance PL/SQL and Oracle 11g PL/SQL New Features” seminar on June 11 & 12 at Level 3 Communications in Broomfield, Colorado. This will be Steven’s first Denver seminar in almost 4 years. Who knows when he will offer another? http://www.rmoug.org/ June 14, 2012 – Ottawa, Ontario Pythian’s Gwen Shapira puts on 3 great presentations focused on NoSQL, making OLTP run fast and Big Data. http://www.oug-ottawa.org/pls/htmldb/f?p=327:27:1317735724699447::NO June 21, 2012 – Calgary, Alberta Big Data and Extreme Analytics Summit http://coug.ab.ca/ June 22, 2012 – Westborough, MA 10 Things You Probably Did Not Know? With Tom Kyte PL/SQL turns 23 years old this year. It was first introduced in 1988 with Oracle6 Database. This session looks at five technical things about PL/SQL you probably did not know: under-the-covers features that make PL/SQL quite simply the most efficient language with which to process data in the database. http://noug.com/ June 28/29, 2012 – Plano, Texas Jonathan Lewis Oracle Performance Seminars The DOUG (DALLAS ORACLE USERS GROUP) has invited SpeakTech to return to Dallas, and they’re bringing Jonathan Lewis! Topics are Beating the Oracle Optimizer – June 28, 2012, Trouble Shooting & Tuning – June 29, 2012 http://www.eventbrite.com/event/3082448687

    Read the article

  • The WaitForAll Roadshow

    - by adweigert
    OK, so I took for granted some imaginative uses of WaitForAll but lacking that, here is how I am using. First, I have a nice little class called Parallel that allows me to spin together a list of tasks (actions) and then use WaitForAll, so here it is, WaitForAll's 15 minutes of fame ... First Parallel that allows me to spin together several Action delegates to execute, well in parallel.   public static class Parallel { public static ParallelQuery Task(Action action) { return new Action[] { action }.AsParallel(); } public static ParallelQuery> Task(Action action) { return new Action[] { action }.AsParallel(); } public static ParallelQuery Task(this ParallelQuery actions, Action action) { var list = new List(actions); list.Add(action); return list.AsParallel(); } public static ParallelQuery> Task(this ParallelQuery> actions, Action action) { var list = new List>(actions); list.Add(action); return list.AsParallel(); } }   Next, this is an example usage from an app I'm working on that just is rendering some basic computer information via WMI and performance counters. The WMI calls can be expensive given the distance and link speed of some of the computers it will be trying to communicate with. This is the actual MVC action from my controller to return the data for an individual computer.  public PartialViewResult Detail(string computerName) { var computer = this.Computers.Get(computerName); var perf = Factory.GetInstance(); var detail = new ComputerDetailViewModel() { Computer = computer }; try { var work = Parallel .Task(delegate { // Win32_ComputerSystem var key = computer.Name + "_Win32_ComputerSystem"; var system = this.Cache.Get(key); if (system == null) { using (var impersonation = computer.ImpersonateElevatedIdentity()) { system = computer.GetWmiContext().GetInstances().Single(); } this.Cache.Set(key, system); } detail.TotalMemory = system.TotalPhysicalMemory; detail.Manufacturer = system.Manufacturer; detail.Model = system.Model; detail.NumberOfProcessors = system.NumberOfProcessors; }) .Task(delegate { // Win32_OperatingSystem var key = computer.Name + "_Win32_OperatingSystem"; var os = this.Cache.Get(key); if (os == null) { using (var impersonation = computer.ImpersonateElevatedIdentity()) { os = computer.GetWmiContext().GetInstances().Single(); } this.Cache.Set(key, os); } detail.OperatingSystem = os.Caption; detail.OSVersion = os.Version; }) // Performance Counters .Task(delegate { using (var impersonation = computer.ImpersonateElevatedIdentity()) { detail.AvailableBytes = perf.GetSample(computer, "Memory", "Available Bytes"); } }) .Task(delegate { using (var impersonation = computer.ImpersonateElevatedIdentity()) { detail.TotalProcessorUtilization = perf.GetValue(computer, "Processor", "% Processor Time", "_Total"); } }).WithExecutionMode(ParallelExecutionMode.ForceParallelism); if (!work.WaitForAll(TimeSpan.FromSeconds(15), task => task())) { return PartialView("Timeout"); } } catch (Exception ex) { this.LogException(ex); return PartialView("Error.ascx"); } return PartialView(detail); }
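    The generic type arguments were stripped out of the listing above when the post was flattened to plain text, which is why types like ParallelQuery and List appear without their parameters. A reconstruction of just the Parallel helper, with the type parameters restored as I read them (my best guess at the intended signatures, not the author's verbatim code), would be:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      public static class Parallel
      {
          // Wrap a single delegate in a one-element parallel query.
          public static ParallelQuery<Action> Task(Action action)
          {
              return new[] { action }.AsParallel();
          }

          public static ParallelQuery<Action<T>> Task<T>(Action<T> action)
          {
              return new[] { action }.AsParallel();
          }

          // Fluent extensions: append another delegate to an existing query.
          public static ParallelQuery<Action> Task(this ParallelQuery<Action> actions, Action action)
          {
              var list = new List<Action>(actions) { action };
              return list.AsParallel();
          }

          public static ParallelQuery<Action<T>> Task<T>(this ParallelQuery<Action<T>> actions, Action<T> action)
          {
              var list = new List<Action<T>>(actions) { action };
              return list.AsParallel();
          }
      }

    Read this way, the controller chains Parallel.Task(...).Task(...) to build a ParallelQuery of delegates, forces parallel execution with WithExecutionMode, and then hands the query to WaitForAll with a 15-second timeout.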

    Read the article

  • PC to USB transfer slow

    - by Vipin Ms
    I'm having trouble with USB transfer,not with external hard disk. Transfer starts with like, for the transfer of 700MB file it starts with 30mb/s and towards the end it stops at 0s and stays put for like 3-4 mins to transfer the last bit. I have tried different USB devices, but no luck. Is it a bug? Another important point is, in Kubuntu there is no such issue. So is it something related to Gnome? I'm using Ubuntu 11.10 64bit. Somebody please help, it's really annoying. Here are the details. PC all of my drives are in ext4. USB I tried ext3,ntfs and fat32. All having the same problem. Here are my USB controllers details: root@LAB:~# lspci|grep USB 00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03) 00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03) 00:1a.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03) 00:1a.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03) 00:1d.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03) 00:1d.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03) 00:1d.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03) 00:1d.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03) Here is an example of one transfer. I connected one of my 4GB usb device. Nov 24 12:01:25 LAB kernel: [ 1175.082175] userif-2: sent link up event. Nov 24 12:01:25 LAB kernel: [ 1695.684158] usb 2-2: new high speed USB device number 3 using ehci_hcd Nov 24 12:01:25 LAB mtp-probe: checking bus 2, device 3: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-2" Nov 24 12:01:26 LAB mtp-probe: bus: 2, device: 3 was not an MTP device Nov 24 12:01:26 LAB kernel: [ 1696.132680] usbcore: registered new interface driver uas Nov 24 12:01:26 LAB kernel: [ 1696.142528] Initializing USB Mass Storage driver... Nov 24 12:01:26 LAB kernel: [ 1696.142919] scsi4 : usb-storage 2-2:1.0 Nov 24 12:01:26 LAB kernel: [ 1696.143146] usbcore: registered new interface driver usb-storage Nov 24 12:01:26 LAB kernel: [ 1696.143150] USB Mass Storage support registered. Nov 24 12:01:27 LAB kernel: [ 1697.141657] scsi 4:0:0:0: Direct-Access SanDisk U3 Cruzer Micro 8.02 PQ: 0 ANSI: 0 CCS Nov 24 12:01:27 LAB kernel: [ 1697.168827] sd 4:0:0:0: Attached scsi generic sg2 type 0 Nov 24 12:01:27 LAB kernel: [ 1697.169262] sd 4:0:0:0: [sdb] 7856127 512-byte logical blocks: (4.02 GB/3.74 GiB) Nov 24 12:01:27 LAB kernel: [ 1697.169762] sd 4:0:0:0: [sdb] Write Protect is off Nov 24 12:01:27 LAB kernel: [ 1697.169767] sd 4:0:0:0: [sdb] Mode Sense: 45 00 00 08 Nov 24 12:01:27 LAB kernel: [ 1697.171386] sd 4:0:0:0: [sdb] No Caching mode page present Nov 24 12:01:27 LAB kernel: [ 1697.171391] sd 4:0:0:0: [sdb] Assuming drive cache: write through Nov 24 12:01:27 LAB kernel: [ 1697.173503] sd 4:0:0:0: [sdb] No Caching mode page present Nov 24 12:01:27 LAB kernel: [ 1697.173510] sd 4:0:0:0: [sdb] Assuming drive cache: write through Nov 24 12:01:27 LAB kernel: [ 1697.175337] sdb: sdb1 After that I initiated one transfer. 
lsof -p 3575|tail -2 mv 3575 root 3r REG 8,8 1719599104 4325379 /media/Misc/The Tree of Life (2011) DVDRip XviD-MAXSPEED/The Tree of Life (2011) DVDRip XviD-MAXSPEED www.torentz.3xforum.ro.avi mv 3575 root 4w REG 8,17 1046347776 15 /media/SREE/The Tree of Life (2011) DVDRip XviD-MAXSPEED/The Tree of Life (2011) DVDRip XviD-MAXSPEED www.torentz.3xforum.ro.avi Here is the total time spent on that transfer. root@LAB:/media/SREE# time mv /media/Misc/The\ Tree\ of\ Life\ \(2011\)\ DVDRip\ XviD-MAXSPEED/ /media/SREE/ real 11m49.334s user 0m0.008s sys 0m5.260s root@LAB:/media/SREE# df -T|tail -2 /dev/sdb1 vfat 3918344 1679308 2239036 43% /media/SREE /dev/sda8 ext4 110110576 60096904 50013672 55% /media/Misc Do you think this is normal? Approximately 12 minutes for a 1.6 GB transfer? Thanks.

    Read the article

  • A Community Cure for a String Splitting Headache

    - by Tony Davis
    A heartwarming tale of dogged perseverance and Community collaboration to solve some SQL Server string-related headaches. Michael J Swart posted a blog this week that had me smiling in recognition and agreement, describing how an inquisitive Developer or DBA deals with a problem. It's a three-step process, starting with discomfort and anxiety; a feeling that one doesn't know as much about one's chosen specialized subject as previously thought. It progresses through a phase of intense research and learning until finally one achieves breakthrough, blessed relief and renewed optimism. In this case, the discomfort was provoked by the mystery of massively high CPU when searching Unicode strings in SQL Server. Michael explored the problem via Stack Overflow, Google and Twitter #sqlhelp, finally leading to resolution and a blog post that shared what he learned. Perfect; except that sometimes you have to be prepared to share what you've learned so far, while still mired in the phase of nagging discomfort. A good recent example of this can be found on our own blogs. Despite being a loud advocate of the lightning-fast T-SQL-based string splitting techniques, honed to near perfection over many years by Jeff Moden and others, Phil Factor retained a dogged conviction that, in theory, shredding element-based XML using XQuery ought to be even more efficient for splitting a string to create a table. After some careful testing, he found instead that the XML way performed and scaled miserably by comparison. Somewhat subdued, and with a nagging feeling that perhaps he was still missing "something", he posted his findings. What happened next was a joy to behold; the community jumped in to suggest subtle changes in approach, using an attribute-based rather than element-based XML list, and tweaking the XQuery shredding. The result was performance and scalability that surpassed all other techniques. I asked Phil how quickly he would have arrived at the real breakthrough on his own. His candid answer was "never". Both are great examples of the power of Community learning and the latter in particular the importance of being brave enough to parade one's ignorance. Perhaps Jeff Moden will accept the string-splitting gauntlet one more time. To quote the great man: you've just got to love this community! If you've an interesting tale to tell about being helped to a significant breakthrough for a problem by the community, I'd love to hear about it. Cheers, Tony.

    Read the article

  • Removing hard-coded values and defensive design vs YAGNI

    - by Ben Scott
    First a bit of background. I'm coding a lookup from Age - Rate. There are 7 age brackets so the lookup table is 3 columns (From|To|Rate) with 7 rows. The values rarely change - they are legislated rates (first and third columns) that have stayed the same for 3 years. I figured that the easiest way to store this table without hard-coding it is in the database in a global configuration table, as a single text value containing a CSV (so "65,69,0.05,70,74,0.06" is how the 65-69 and 70-74 tiers would be stored). Relatively easy to parse then use. Then I realised that to implement this I would have to create a new table, a repository to wrap around it, data layer tests for the repo, unit tests around the code that unflattens the CSV into the table, and tests around the lookup itself. The only benefit of all this work is avoiding hard-coding the lookup table. When talking to the users (who currently use the lookup table directly - by looking at a hard copy) the opinion is pretty much that "the rates never change." Obviously that isn't actually correct - the rates were only created three years ago and in the past things that "never change" have had a habit of changing - so for me to defensively program this I definitely shouldn't store the lookup table in the application. Except when I think YAGNI. The feature I am implementing doesn't specify that the rates will change. If the rates do change, they will still change so rarely that maintenance isn't even a consideration, and the feature isn't actually critical enough that anything would be affected if there was a delay between the rate change and the updated application. I've pretty much decided that nothing of value will be lost if I hard-code the lookup, and I'm not too concerned about my approach to this particular feature. My question is, as a professional have I properly justified that decision? Hard-coding values is bad design, but going to the trouble of removing the values from the application seems to violate the YAGNI principle. EDIT To clarify the question, I'm not concerned about the actual implementation. I'm concerned that I can either do a quick, bad thing, and justify it by saying YAGNI, or I can take a more defensive, high-effort approach, that even in the best case ultimately has low benefits. As a professional programmer does my decision to implement a design that I know is flawed simply come down to a cost/benefit analysis?
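    For illustration only, the two options being weighed look roughly like this in C# (the bracket values reuse the examples from the question and every name here is hypothetical):

      using System;
      using System.Collections.Generic;
      using System.Globalization;
      using System.Linq;

      public static class AgeRateLookup
      {
          // Option 1: hard-code the lookup table in the application.
          private static readonly (int From, int To, decimal Rate)[] HardCodedTable =
          {
              (65, 69, 0.05m),
              (70, 74, 0.06m),
              // ... the remaining brackets
          };

          public static decimal RateFor(int age) =>
              HardCodedTable.First(b => age >= b.From && age <= b.To).Rate;

          // Option 2: unflatten a CSV stored in a global configuration table,
          // e.g. "65,69,0.05,70,74,0.06", into the same structure at startup.
          public static List<(int From, int To, decimal Rate)> ParseCsv(string csv)
          {
              var parts = csv.Split(',');
              var table = new List<(int From, int To, decimal Rate)>();
              for (int i = 0; i + 2 < parts.Length; i += 3)
              {
                  table.Add((int.Parse(parts[i]),
                             int.Parse(parts[i + 1]),
                             decimal.Parse(parts[i + 2], CultureInfo.InvariantCulture)));
              }
              return table;
          }
      }

    Whether Option 2 ever pays back the extra table, repository and tests is exactly the cost/benefit question posed above.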

    Read the article

  • Oracle Leader in Transportation Management

    - by John Murphy
    Oracle Named a Leader in the Transportation Management Systems Market by Leading Analyst Firm Redwood Shores, Calif. – October 15, 2012 News Facts Gartner, Inc. has placed Oracle Transportation Management in the Leaders Quadrant of its 2012 report, “Magic Quadrant for Transportation Management Systems (TMS).” (1) Gartner Magic Quadrants position vendors within a particular market segment based on their completeness of vision and ability to execute on that vision. According to the report, “Multiple subcomponents make up a comprehensive TMS across planning (for example, load consolidation, routing, mode selection and carrier selection) and execution (for example, tendering loads to carriers, shipment track and trace, and freight audit and payment).” Built on modern, flexible, Internet based architecture, Oracle Transportation Management is a global transportation and logistics operations system that allows companies to minimize cost, optimize service levels, support sustainability initiatives, and create flexible business process automation within their transportation and logistics networks. With a share of 26% of worldwide software revenue for 2011, Oracle is also number one in TMS vendor share according to Gartner’s report, “Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012 – 2016.” (2) Supporting Quote “Shippers and logistics service providers face increasingly complex challenges as they try to reduce costs, secure capacity and improve overall freight efficiency,” said Derek Gittoes, vice president, logistics product strategy, Oracle. “We believe our high standing in both Gartner reports is a reflection of Oracle’s commitment to addressing these challenges by delivering the industry’s broadest and deepest transportation management platform. With a flexible and modern platform, we are able to support customers with both basic transportation needs, as well as those with highly complex logistics requirements.” Supporting Resources Magic Quadrant for Transportation Management Systems Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012 – 2016 Oracle Transportation Management (1) Gartner, Inc., “Magic Quadrant for Transportation Management Systems,” by C. Dwight Klappich, August 23, 2012 (2) Gartner, Inc., “Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012 – 2016,” by Chad Eschinger and C. Dwight Klappich, September 24, 2012. About Oracle Applications Over 65,000 customers worldwide rely on Oracle's complete, open and integrated enterprise applications to achieve superior results. Oracle provides a secure path for customers to benefit from the latest technology advances that improve the customer software experience and drive better business performance. Oracle Applications Unlimited is Oracle's commitment to customer choice through continuous investment and innovation in current applications offerings. Oracle's next-generation Fusion Applications build upon that commitment, and are designed to work with and evolve Oracle's Applications Unlimited offerings. Oracle's lifetime support policy helps ensure customers will continue to have a choice in upgrade paths, based on their enterprise needs. For more information on the latest Oracle Applications releases go towww.oracle.com/applications About Oracle Oracle engineers hardware and software to work together in the cloud and in your data center. For more information about Oracle (NASDAQ:ORCL), visit www.oracle.com. 

    Read the article

  • Thank you Geeks With Blogs for letting me join your community!

    - by GreeNTUG
    First, a link to the blog I can no longer edit because Office Live blew away my digital identity and so I can no longer log into it (the source of a loooong blog about protecting your digital identity sometime when I have more time and after it has played out to the end) http://greentug.spaces.live.com/ The following are the communities I participate in: Green & Sustainability.  I run a virtual user group on Green and Sustainability as it relates to developers and software architects.  It was located at greentug.groups.live.com, and we will need to find a new digital location for it, because I am locked out of that site as well. BizSpark Tampa Bay:  I run a BizSpark group for Microsoft technologists (meetup.com, search for BizSpark Tampa Bay) and speak at Code Camps about "No Better Time to Start Your Own Tech Business".  The meetup group facilitates a balanced presentation that is respectful to anyone wanting to start their own business, whether part-time or full-time, whether micro (just you), sustainable (grow to 2-25-ish, self-funded), high growth (get venture capital or other funding, grow it, sell it within 5 years, do it again), or hybrid (the new model going forward).  It is an "action" group, with assignments and homework if you want to get the most out of it.   At the end of a year you will either have your business on the path to where you want it to be, or you will know the steps you need to do to get it there. Women in Technology Have been participating in the Women in Technology community since 2008, my main interests in this area are mentoring women in the workplace to have them believe they can become geeks and double their income, and to mentor them with respect to starting and running their own business. Access 2010/SharePoint 2010.  This is a game-changer with respect to the Access community (the ap both devs and IT Pros love to hate, the other a-word that's not a fruit).  I conducted Lunch n Learns and Brunch n Learns around this topic before the Office 2010/SharePoint 2010 launch, and spoke on the topic at SharePoint Saturday Tampa in Nov 2009. Interested in learning more about: Using Silverlight HD Streaming out in the non-technical world (horses and equestrian sport).  Migrating to Access Web Services and VB .Net from VBA (see the Access 2010/SharePoint 2010 interest above) Windows Phone 7!  Exciting opportunities both for Green and Sustainability and for my "day job" of Environmental, Health & Safety (EHS). My day job is Environmental, Health & Safetey (EHS) consulting and software solutions, where that interfaces with the developer world is with respect to opportunities around Green and Sustainability, The SmartGrid and Juval Lowy's EnergyNet, both of which will require a lot of technology and software to make them work, The new Microsoft Partner competency for "Digital Home", and The Y2K kind of deadline around how managing chemicals in ERP systems is changing because of Global Harmonization, which hits the EU with a hard deadline on 11/30/10 (yes, this year), and hits the USA about 15 months later. Hope you enjoy my contributions to the digital geek community, and feel free to email me, [email protected] (the email leftover after my digital identity was blown away), and [email protected] (this one could go away at some future point) Best, Kathy Malone

    Read the article

  • Leveraging Social Networks for Retail

    - by David Dorf
    For retailers, social media is all about B2C2C. That is, Business to Consumer to Consumer, or more specifically, retailer to influencer to consumer. Traditional marketing targeted mass media, trying to expose the message to as many people as possible. While effective, this approach has never been very efficient, with high costs for relatively low penetration. Then it was thought that marketers should focus their efforts on a relative few super-influencers that would then sway the masses. History shows a few successes with this approach but lacked any consistency or predictability. After all, if super-influencers were easy to find, most campaigns would easily go viral. Alas, research shows that most wide-spread trends were the result of several fortunate events, including some luck. So do people exert influence over each other when it comes to purchase decisions? Of course they do, all the time. But that influence is usually limited to a small set of friends and specific specialization. For instance, although I have 165 friends on Facebook, I am only able to influence my close friends and family on PC purchases, and I have no sway at all for fashion purchases. People trust my knowledge on technology, but nobody asks my advice on shoes. How then should retailers leverage social networks in order to reinforce brand image and push promotions? Two obvious ways are Like and Share. Online advertisements or wall-postings receive more clicks when the viewer sees that friends have "liked" the posting. That's our modern-day version of word-of-mouth advertising. Statistics show that endorsements from friends make it more likely a person will engage. If my friends and I liked it, then I might also "share" (or "retweet" in the case of Twitter) it with other friends. In that case the retailer has paid for X showings of the advertisement, but sharing has pushed it to an additional Y people at no cost. And further, the implicit endorsement by the sharer makes it more likely the recipient will engage. So a good first step is to find people active in social networks that will Like and Share in order to exert influence. Its still tough to go viral, but doubling engagement is still a big step in the right direction. More complex social graph analysis would be a second step, but I'll leave that topic for another day. If you're interested in the academic side of social dynamics, I suggest reading Duncan Watts' work.

    Read the article

  • Unit testing in Django

    - by acjohnson55
    I'm really struggling to write effective unit tests for a large Django project. I have reasonably good test coverage, but I've come to realize that the tests I've been writing are definitely integration/acceptance tests, not unit tests at all, and I have critical portions of my application that are not being tested effectively. I want to fix this ASAP. Here's my problem. My schema is deeply relational, and heavily time-oriented, giving my model object high internal coupling and lots of state. Many of my model methods query based on time intervals, and I've got a lot of auto_now_add going on in timestamped fields. So take a method that looks like this for example: def summary(self, startTime=None, endTime=None): # ... logic to assign a proper start and end time # if none was provided, probably using datetime.now() objects = self.related_model_set.manager_method.filter(...) return sum(object.key_method(startTime, endTime) for object in objects) How does one approach testing something like this? Here's where I am so far. It occurs to me that the unit testing objective should be given some mocked behavior by key_method on its arguments, is summary correctly filtering/aggregating to produce a correct result? Mocking datetime.now() is straightforward enough, but how can I mock out the rest of the behavior? I could use fixtures, but I've heard pros and cons of using fixtures for building my data (poor maintainability being a con that hits home for me). I could also setup my data through the ORM, but that can be limiting, because then I have to create related objects as well. And the ORM doesn't let you mess with auto_now_add fields manually. Mocking the ORM is another option, but not only is it tricky to mock deeply nested ORM methods, but the logic in the ORM code gets mocked out of the test, and mocking seems to make the test really dependent on the internals and dependencies of the function-under-test. The toughest nuts to crack seem to be the functions like this, that sit on a few layers of models and lower-level functions and are very dependent on the time, even though these functions may not be super complicated. My overall problem is that no matter how I seem to slice it, my tests are looking way more complex than the functions they are testing.

    Read the article

  • Macbook (non-pro, late 2010) Power Management Issues relating to Nvidia-Current drivers

    - by gbvitaobscura
    I have tried many potential solutions but to no avail! (this is going to be a long post) Essentially, when I first install ubuntu (I have tested 10.11-12.04 beta) I can change the brightness of the macbook backlight. One of the first things I usually do once my machine finishes installing is enter "sudo apt-get install nvidia-current" into the terminal to get my Nvidia graphics card set up. Before I install nvidia-current power management works flawlessly. After installing nvidia-current my graphics driver works mostly properly (more on that later). When I press F1 or F2 notify osd pops up and shows icon for the backlight along with a meter which changes according to how much I press F1 or F2, but the backlight does not change brightness at all. Two supplementary facts: when I am running off of battery power the backlight does not dim ever, also changing the backlight brightness through the terminal does not work (it recognizes the command but changes nothing). Things which did not work: 1. editing xorg.conf 2. python script/hack 3. editing grub Things which did work: 1. removing Nvidia-Current Final piece of information: Although my graphics card works in every way but Power Management it can not be found through System Settings/Details (Graphics Unknown) I'm using the NVIDIA GeForce 320M. the output of lspci is: 00:00.0 Host bridge: NVIDIA Corporation MCP89 HOST Bridge (rev a1) 00:00.1 RAM memory: NVIDIA Corporation MCP89 Memory Controller (rev a1) 00:01.0 RAM memory: NVIDIA Corporation Device 0d6d (rev a1) 00:01.1 RAM memory: NVIDIA Corporation Device 0d6e (rev a1) 00:01.2 RAM memory: NVIDIA Corporation Device 0d6f (rev a1) 00:01.3 RAM memory: NVIDIA Corporation Device 0d70 (rev a1) 00:02.0 RAM memory: NVIDIA Corporation Device 0d71 (rev a1) 00:02.1 RAM memory: NVIDIA Corporation Device 0d72 (rev a1) 00:03.0 ISA bridge: NVIDIA Corporation MCP89 LPC Bridge (rev a2) 00:03.1 RAM memory: NVIDIA Corporation MCP89 Memory Controller (rev a1) 00:03.2 SMBus: NVIDIA Corporation MCP89 SMBus (rev a1) 00:03.3 RAM memory: NVIDIA Corporation MCP89 Memory Controller (rev a1) 00:03.4 Co-processor: NVIDIA Corporation MCP89 Co-Processor (rev a1) 00:04.0 USB controller: NVIDIA Corporation MCP89 OHCI USB 1.1 Controller (rev a1) 00:04.1 USB controller: NVIDIA Corporation MCP89 EHCI USB 2.0 Controller (rev a2) 00:06.0 USB controller: NVIDIA Corporation MCP89 OHCI USB 1.1 Controller (rev a1) 00:06.1 USB controller: NVIDIA Corporation MCP89 EHCI USB 2.0 Controller (rev a2) 00:08.0 Audio device: NVIDIA Corporation MCP89 High Definition Audio (rev a2) 00:09.0 Ethernet controller: NVIDIA Corporation MCP89 Ethernet (rev a1) 00:0a.0 IDE interface: NVIDIA Corporation MCP89 SATA Controller (rev a2) 00:0b.0 RAM memory: NVIDIA Corporation Device 0d75 (rev a1) 00:15.0 PCI bridge: NVIDIA Corporation Device 0d9b (rev a1) 00:17.0 PCI bridge: NVIDIA Corporation MCP89 PCI Express Bridge (rev a1) 01:00.0 Network controller: Broadcom Corporation BCM43224 802.11a/b/g/n (rev 01) 02:00.0 VGA compatible controller: NVIDIA Corporation Device 08a0 (rev a2) There is a very similar bug on Launchpad which I have posted below. https://bugs.launchpad.net/ubuntu/+source/nvidia-graphics-drivers/+bug/764195

    Read the article

  • Have Your Cake and Eat it Too: Industry Best Practices + Flexibility

    - by Oracle Accelerate for Midsize Companies
    By Richard Garraputa, VP of Sales & Marketing, brij Richard joined brij in 1996 after graduating from the University of North Carolina at Greensboro with degrees in Information Systems and Accounting. He directs brij’s overall strategies of both the business development and marketing departments. Companies looking for new ERP systems spend so much time comparing features and functions of software products but too often short change the value of their own processes.  Company managers I meet often claim that they are implementing a new ERP system so they can perform better and faster.  When asked how, the answer is often “by implementing best practices”.  But the term ‘best practices’ is frequently used to mean ‘doing things the way everyone else does them’ rather than a starting point or benchmark to build upon by adding your own value. Of course, implementing standardized processes across an enterprise is an important step in improving operational efficiencies.  But not all companies are alike.  Do you ever tell your customers “We are just like our competition and have no competitive differentiation”?  Probably not.  So why should the implementation of your business processes be just like your competitor’s?  Even within the same industry, companies differentiate themselves by leveraging their unique expertise and approach to business.  These unique aspects—the competitive differentiators that companies use to thrive in a crowded marketplace—can and should be supported by the implementation of business systems like ERP. Modern ERP systems like Oracle’s JD Edwards EnterpriseOne have a broad and deep functional footprint designed to integrate a company’s core operations.  But how can a company take advantage of this footprint without blowing up their implementation budget?  Some ERP vendors claim to solve this challenge by stating that their systems come pre-configured with ‘best practices’.  Too often what they are really saying is that you will have to abandon your key operational differentiators to fit a vendor’s template for your business—or extend your implementation and postpone the realization of any benefits. Thankfully for midsize companies, there is an alternative to the undesirable options of extended implementation projects or abandoning their competitive differentiators.  Oracle Accelerate Solutions speed the time it takes to implement JD Edwards EnterpriseOne solution based on your unique business characteristics, getting your new ERP system up and running faster without forcing your business to fit a cookie-cutter solution. We’ve been a JD Edwards implementation partner since 1986 and we now leverage Oracle Business Accelerators—cloud based rapid implementation tools built and maintained by Oracle. Oracle Business Accelerators deliver the benefits of embedded industry best practices without forcing every customer in to one set of processes like many template or “clone and go” approaches do. You retain the ability to reconfigure your applications—without customization—as your business changes. Wielded by Oracle partners with industry-specific domain expertise, Oracle Accelerate Solution implementations powered by Oracle Business Accelerators help automate the application configuration to fit your business better, faster. For example, on a recent project at a manufacturing company, the project manager told me that Oracle Business Accelerators helped get them to Conference Room Pilot 20% faster than with a traditional approach. Time savings equal cost savings. 
And if ‘better and faster’ is your goal for your business performance, shouldn’t it be the goal for your ERP implementation as well? Established in 1986, brij has been dedicated solely to helping its customers implement Oracle’s JD Edwards solutions and to maximize the value of those customers’ IT investments. They are a Gold level member in Oracle PartnerNetwork and an Oracle Accelerate Solution provider.

    Read the article

  • Twice as long and half as long

    - by PointsToShare
    We are in a project and we hit some snags. What’s a snag? An activity that takes longer than expected. Actually it takes longer than the time assigned to it by an over pressed PM who accepts an impossible time table and tries his best to make it possible, but I digress (again!).  So we have snags and we also have the opposite. Let’s call these “cinches”. The question is: how does a combination of snags and cinches affect the project timeline? Well, there is no simple answer. It depends on the projects dependencies as we see in the PERT chart. If all the snags are in the critical path and all the cinches are elsewhere then the cinches don’t help at all. In fact any snag in the critical path will delay the project.  Conversely, a cinch in the critical path will expedite it. A snag outside the critical path might be serious enough to even change the critical path. Thus without the PERT chart, we cannot really tell. Still there is a principle involved – Time and speed are non-linear! Twice as long adds a full unit, half as long only takes ½ unit away. Let’s just investigate a simple project. It consists of two activities – S and C - each estimated to take a week. Alas, S is a snag and really needs twice the time allotted and – a sigh of relief – C is a cinch and will take half the time allotted, so everything is Hun-key-dory, or is it?  Even here the PERT chart is important. We have 2 cases: 1: S depends on C (or vice versa) as in when the two activities are assigned to one employee. Here the estimated time was 1 + 1 and the actual time was 2 + ½ and we are ½ week late or 25% late. 2: S and C are done in parallel. Here the estimated time was 1, but the actual time is 2 – we are a whole week or 100% late. Let’s change the equation a little. S need 1.5 and C needs .5 so in case 1, we have the loss fully compensated by the gain, but in case 2 we are still behind. There are cases where this really makes no difference. This is when the critical path is not affected and we have enough slack in the other paths to counteract the difference between its snags and cinches – Let’s call this difference DSC. So if the slack is greater than DSC the project will not suffer. Conclusion: There is no general rule about snags and cinches. We need to examine each case within its project, still as we saw in the 4 examples above; the snag is generally more powerful than the cinch. Long live Murphy! That’s All Folks

    Read the article

  • Get to Know a Candidate (11 of 25): Roseanne Barr&ndash;Peace &amp; Freedom Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post for whom I am voting. Information sourced for Wikipedia. Barr is an American actress, comedienne, writer, television producer, and director.  Barr began her career in stand-up comedy at clubs before gaining fame for her role in the sitcom Roseanne. The show was a hit and lasted nine seasons, from 1988 to 1997. She won both an Emmy Award and a Golden Globe Award for Best Actress for her work on the show. Barr had crafted a "fierce working-class domestic goddess" persona in the eight years preceding her sitcom and wanted to do a realistic show about a strong mother who was not a victim of patriarchal consumerism. The granddaughter of immigrants from Europe and Russia, Barr was the oldest of four children in a working-class Jewish Salt Lake City family; she was also active in the LDS Church. In 1974 she married Bill Pentland, with whom she had three children, before divorcing in 1990 and marrying comedian Tom Arnold for four years. Controversy arose when she sang "The Star-Spangled Banner" off-key at a 1990 nationally aired baseball game, followed by grabbing her crotch and spitting. After her sitcom ended, she launched her own talk show, The Roseanne Show, which aired from 1998 to 2000. In 2005, she returned to stand-up comedy with a world tour. In 2011, she starred in an unscripted TV show, Roseanne's Nuts that lasted from July to September of that year, about her life on a Hawaiian farm. The Peace and Freedom Party (PFP) is a nationally-organized political party with affiliates in more than a dozen states, including California, Florida, Colorado and Hawaii. Its first candidates appeared on the ballot in 1966, but the Peace and Freedom Party of California was founded on June 23, 1967, after the LAPD riot in the wealthy Century City section of Los Angeles, and qualified for the ballot in January 1968.  The Peace and Freedom Party went national in 1968 as a left-wing organization opposed to the Vietnam War. From its inception, Peace and Freedom Party has been a left-wing political organization. It is a strong advocate of protecting the environment from pollution and nuclear waste. It advocates personal liberties and universal, high quality and free access to education and health care. Its understanding of socialism includes a socialist economy, where industries, financial institutions, and natural resources are owned by the people as a whole and democratically managed by the people who work in them and use them. Barr is on the Ballot in: CA, CO, FL Learn more about Roseanne Barr and Peace and Freedom Party on Wikipedia.

    Read the article

  • New hidden parameters in Oracle 11.2

    - by Mike Dietrich
    We really welcome every external review of our slides. And also recommendations from customers visiting our workshops. So it happened to me more than a week ago that Marco Patzwahl, the owner of MuniqSoft GmbH, had a very lengthy train ride in Germany (as the engine drivers go on strike this week it could have become even worse) and nothing better to do than reviewing our slide set. And he had plenty of recommendations. Besides that, he pointed us to something at least I was not aware of, and added it to the slides: In patch set 11.2.0.2 a new behaviour for datafile write errors has been implemented. With this release ANY write error to a datafile will cause the instance to abort. Before 11.2.0.2 those errors usually led to an offline datafile if the database operates in archivelog mode (your production databases do, don’t they?!) and the datafile does not belong to the SYSTEM tablespace. Internal discussion found this behaviour neither up-to-date nor aligned with RAC systems and modern storage. Therefore it has been changed and a new underscore parameter got introduced. _DATAFILE_WRITE_ERRORS_CRASH_INSTANCE=TRUE This is the default setting and the new behaviour beginning with Oracle 11.2.0.2. If you would like to revert to the pre-11.2.0.2 behaviour you’ll have to set this parameter to false in your init.ora/spfile. But keep in mind that there’s a reason why this has been changed. You’ll find more info in MOS Note: 7691270.8 and this topic in the current version of the slides on slide 255. Thanks to Marco for the review!! And then I received an email from Kurt Van Meerbeeck today. Kurt is pretty well known in the Oracle community. And he’s the owner of jDUL/DUDE, a database unloading tool which bypasses the Oracle database engine and accesses data directly from the blocks. Kurt visited the upgrade workshop two weeks ago in Belgium and highlighted to me that since Oracle 11.2.0.1, even though you have set neither SGA_TARGET nor MEMORY_TARGET, the database might still do resize operations. Reason why this behaviour has been changed: prevention of ORA-4031 errors. But on databases with extremely high loads this can cause trouble. Further information can be found in MOS Note: 1269139.1. And the parameter set to TRUE by default is called _MEMORY_IMM_MODE_WITHOUT_AUTOSGA=TRUE This can be found now in the slide set as well, on slide number 240. And thanks to Kurt for this information!

    Read the article

  • Adding Column to a SQL Server Table

    - by Dinesh Asanka
    Adding a column to a table is a common task for DBAs. You can add a column to a table which is a nullable column or which has default values. But are these two operations similar internally, and which method is optimal? Let us start this with an example. I created a database and a table using the following script: USE master Go --Drop Database if exists IF EXISTS (SELECT 1 FROM SYS.databases WHERE name = 'AddColumn') DROP DATABASE AddColumn --Create the database CREATE DATABASE AddColumn GO USE AddColumn GO --Drop the table if exists IF EXISTS ( SELECT 1 FROM sys.tables WHERE Name = 'ExistingTable') DROP TABLE ExistingTable GO --Create the table CREATE TABLE ExistingTable (ID BIGINT IDENTITY(1,1) PRIMARY KEY CLUSTERED, DateTime1 DATETIME DEFAULT GETDATE(), DateTime2 DATETIME DEFAULT GETDATE(), DateTime3 DATETIME DEFAULT GETDATE(), DateTime4 DATETIME DEFAULT GETDATE(), Gendar CHAR(1) DEFAULT 'M', STATUS1 CHAR(1) DEFAULT 'Y' ) GO -- Insert 100,000 records with default values INSERT INTO ExistingTable DEFAULT VALUES GO 100000 Before adding a column, let us look at some of the details of the database. DBCC IND (AddColumn,ExistingTable,1) By running the above query, you will see 637 pages for the created table. You can add a column to the table with the following statement. ALTER TABLE ExistingTable Add NewColumn INT NULL The above will add a column with a null value for the existing records. Alternatively you could add a column with a default value. ALTER TABLE ExistingTable Add NewColumn INT NOT NULL DEFAULT 1 The above statement will add a column with a value of 1 for the existing records. In the table below I measured the performance difference between the two statements.
    Parameter    Nullable Column    Default Value
    CPU          31                 702
    Duration     129 ms             6653 ms
    Reads        38                 116,397
    Writes       6                  1329
    Row Count    0                  100000
    If you look at the Row Count parameter, you can clearly see the difference. Though a column is added in the first case, none of the rows are affected, while in the second case all the rows are updated. That is the reason why it takes more duration and CPU to add a column with a default value. We can verify this by several methods. The number of data pages can be obtained by using the DBCC IND command. Though this is an undocumented DBCC command, many experts are OK with using it in production. However, since there is no official word from Microsoft, use this "at your own risk". DBCC IND (AddColumn,ExistingTable,1) Before adding the column: 637 pages; after adding a column with NULL: 637 pages; after adding a column with DEFAULT value: 1270 pages. This clearly shows that pages are physically modified. Please note, the high value indicated for the column added with a DEFAULT value is also a result of page splits. Continues…

    Read the article

  • Time passage arithmetic explanation

    - by Cyber Axe
    I ported this from http://www.effectgames.com/effect/article.psp.html/joe/Old_School_Color_Cycling_with_HTML5 some time ago. However, I'm now wanting to modify it for the purpose of changing it from floating point to fixed point maths for enhanced efficiency (for those who are going to talk about premature optimization and whatnot, I want to have my entire engine in fixed point, both as a learning process for me and so I can port code more easily to systems in the future that don't have native floating point, such as ARM CPUs). My initial conversion to fixed point just resulted in the cycling getting stuck on either the first or last frame of the cycle. Plus it would be nice to understand better how it works so I can add more options and so forth in the future; my maths however sucks and the comments are limited, so I don't really know how the maths works for determining the frame it should use (cycleAmount). I was also a beginner when I ported it, as I had no idea of the difference between floating point and integers and whatnot. So in summary my question is: can anyone give an explanation of the arithmetic used for determining the cycleAmount (which determines the "frame" of the cycle)? This is the working floating point maths version of the code: public final void cycle(Colour[] sourceColours, double timeNow, double speedAdjust) { // Cycle all animated colour ranges in palette based on timestamp. sourceColours = sourceColours.clone(); int cycleSize; double cycleRate; double cycleAmount; Cycle cycle; for (int i = 0, len = cycles.length; i < len; ++i) { cycle = cycles[i]; cycleSize = (cycle.HIGH - cycle.LOW) + 1; cycleRate = cycle.RATE / (int) (CYCLE_SPEED / speedAdjust); cycleAmount = 0; if (cycle.REVERSE < 3) { // Standard Cycle cycleAmount = DFLOAT_MOD((timeNow / (1000 / cycleRate)), cycleSize); if (cycle.REVERSE < 1) { cycleAmount = cycleSize - cycleAmount; // If below 1 make sure its not reversed. } } else if (cycle.REVERSE == 3) { // Ping-Pong cycleAmount = DFLOAT_MOD((timeNow / (1000 / cycleRate)), cycleSize << 1); if (cycleAmount >= cycleSize) { cycleAmount = (cycleSize * 2) - cycleAmount; } } else if (cycle.REVERSE < 6) { // Sine Wave cycleAmount = DFLOAT_MOD((timeNow / (1000 / cycleRate)), cycleSize); cycleAmount = Math.sin((cycleAmount * 3.1415926 * 2) / cycleSize) + 1; if (cycle.REVERSE == 4) { cycleAmount *= (cycleSize / 4); } else if (cycle.REVERSE == 5) { cycleAmount *= (cycleSize >> 1); } } if (cycle.REVERSE == 2) { reverseColours(sourceColours, cycle); } if (USE_BLEND_SHIFT) { blendShiftColours(sourceColours, cycle, cycleAmount); } else { shiftColours(sourceColours, cycle, cycleAmount); } if (cycle.REVERSE == 2) { reverseColours(sourceColours, cycle); } } colours = sourceColours; } // This utility function allows for variable precision floating point modulus. private double DFLOAT_MOD(final double d, final double b) { return (Math.floor(d * PRECISION) % Math.floor(b * PRECISION)) / PRECISION; }
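    On the fixed-point side of the question, here is a minimal sketch (in C#, with illustrative names, not a drop-in replacement for the Java above) of how the standard-cycle arithmetic can be done with scaled 64-bit integers. Every value carries 16 fractional bits, multiply and divide re-normalise the scale, and the modulus needs no extra PRECISION factor because both operands are already scaled:

      public static class FixedPointCycle
      {
          private const int FpShift = 16;            // 16 fractional bits
          private const long FpOne = 1L << FpShift;

          public static long ToFixed(double d) => (long)(d * FpOne);
          public static double ToDouble(long f) => (double)f / FpOne;

          // a and b are both scaled by 2^16; re-normalise after multiply/divide.
          public static long FpMul(long a, long b) => (a * b) >> FpShift;
          public static long FpDiv(long a, long b) => (a << FpShift) / b;

          // Counterpart of DFLOAT_MOD: the operands share one scale, so a plain
          // remainder already gives a correctly scaled result.
          public static long FpMod(long a, long b) => a % b;

          // Standard cycle: (timeNow / (1000 / cycleRate)) mod cycleSize.
          // Assumes timeNow is a relative timestamp small enough that the
          // intermediate shifts do not overflow a 64-bit long.
          public static long CycleAmount(long timeNowFp, long cycleRateFp, long cycleSizeFp)
          {
              long period = FpDiv(ToFixed(1000), cycleRateFp);   // ms per step
              long amount = FpDiv(timeNowFp, period);
              return FpMod(amount, cycleSizeFp);
          }
      }

    Getting stuck on the first or last frame after a fixed-point conversion is typically a sign that one of the divisions truncated to zero, or that the scale factor was applied to one side of the modulus but not the other, so keeping the scale consistent at every step is the main thing to check.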

    Read the article

  • Oracle is #1 in Life Sciences!

    - by Michael Snow
    Guest post today by: John Klinke, Senior Principal Product Manager, Oracle WebCenter Content Based on the announcement last week at EMC World about Documentum for Life Sciences, it looks like EMC is starting to have regrets about pulling out of the life sciences space over the last few years. Certainly their content management customers and partners in life sciences have noticed their retreat. Many of them are now talking to us about WebCenter Content since they’ve seen the writing on the wall regarding Documentum’s decline, including falling revenue, shrinking investment, departure of key executives, and EMC’s auditing of existing customers. While EMC has been neglecting the life sciences industry over the last few years, Oracle has been increasing its investment and commitment by providing best-of-breed solutions to enable pharmaceutical, medical device, biotech and CRO companies to improve productivity and drive innovation. As a result, according to IDC Health Insights, Oracle is #1 in life sciences. From research and development through clinical development and manufacturing to sales and marketing, Oracle provides the solutions that life sciences companies depend on to accelerate R&D, expedite clinical trials, and speed time to market. Specifically for Oracle WebCenter, our life sciences business is booming thanks to our comprehensive offerings led by Oracle WebCenter Content, our 21 CFR Part 11 compliant enterprise content management platform. Unlike Documentum, WebCenter Content is all about keeping the cost of ownership low - through simplicity, flexibility, and out-of-the-box integrations. WebCenter Content is a single, comprehensive ECM platform that can handle all your content management needs, from controlled documents to digital asset management, records management and document imaging and capture. And it is much more flexible, letting you do configuration changes instead of customizations to meet your business needs. It also saves you money by being pre-integrated with the rest of the Oracle Fusion Middleware technology stack and with leading enterprise applications like Siebel (including Siebel CTMS), Primavera, E-Business Suite, JD Edwards and PeopleSoft. So if you think EMC’s announcement last week was too little and too late, I’m happy to report that Oracle is here to help. Back in October, we announced our Move Off Documentum offer, which provides a 100% trade-in credit for your Documentum licenses when you purchase Oracle WebCenter, and the good news is, this offer is still available for a limited time. So stop maintaining Documentum and start innovating with Oracle WebCenter. For more details see www.oracle.com/moveoff/documentum.

    Read the article

  • Drinking Our Own Champagne: Fusion Accounting Hub at Oracle

    - by Di Seghposs
    A guest post by Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer There's no better story to tell than one about Oracle using its own products with blowout success. Here's how this one goes. As you know, Oracle has increased its share of the software market through a number of high-profile acquisitions. Legally combining companies is a very complicated process -- it can take months to complete, especially for the acquisitions with offices in several countries, each with its own unique laws and regulations. It's a mission critical and time sensitive process to roll an acquired company's legacy systems (running vital operations, such as accounts receivable and general ledger (GL)) into the existing systems at Oracle. To date, we've run our primary financial ledgers in E-Business Suite R12 -- and we've successfully met the requirements of the business and closed the books on time every single quarter. But there's always room for improvement and that comes in the form of Fusion Applications. We are now live on Fusion Accounting Hub (FAH), which is the first critical step in moving to a full Fusion Financials instance. We started with FAH so that we could design a global chart of accounts. Eventually, every transaction in every country will originate from this global chart of accounts -- it becomes the structure for managing our business more uniformly. In conjunction, we're using Oracle Hyperion Data Relationship Management (DRM) to centralize and automate governance of our global chart of accounts and related hierarchies, which will help us lower our costs and greatly reduce risk. Each month, we have to consolidate data from our primary general ledgers. We have been able to simplify this process considerably using FAH. We can now submit our primary ledgers running in E-Business Suite (EBS) R12 directly to FAH, eliminating the need for more than 90 redundant consolidation ledgers. Also we can submit incrementally, so if we need to book an adjustment in a primary ledger after close, we can do so without re-opening it and re-submitting. As a result, we have earlier visibility to period-end actuals during the close. A goal of this implementation, and one that we successfully achieved, is that we are able to use FAH globally with no customization. This means we have the ability to fully deploy ledger sets at the consolidation level, plus we can use standard functionality for currency translation and mass allocations. We're able to use account monitoring and drill down functionality from the consolidation level all the way through to EBS primary ledgers and sub-ledgers, which allows someone to click through a transaction appearing at the consolidation level clear through to its original source, a significant productivity enhancement when doing research. We also see a significant improvement in reporting using Essbase cube and Hyperion Smart View. Specifically, "the addition of an Essbase cube on top of the GL gives us tremendous versatility to automate and speed our elimination process," says Claire Sebti, Senior Director of Corporate Accounting at Oracle. A highlight of this story is that FAH is running in a co-existence environment. Our plan is to move to Fusion Financials in steps, starting with FAH. Next, our Oracle Financial Services Software subsidiary will move to a full Fusion Financials instance. Then we'll replace our EBS instance with Fusion Financials. This approach allows us to plan in steps, learn as we go, and not overwhelm our teams. 
    It also reduces the risk that comes with moving the entire instance at once. Maria Smith, Vice President of Global Controller Operations, is confident about how they've positioned themselves to adopt more Fusion functionality and is eager to "continue to drive additional efficiency and cost savings." In this story, the happy customers are Oracle controllers, financial analysts, accounting specialists, and our management team, who get earlier access to more flexible reporting. "Fusion Accounting Hub simplifies our processes and gives us more transparency into account activity," raves Alex SanJuan, Senior Director, Record to Report Strategic Process Owner. Overall, the team has been very impressed with the usability and functionality of FAH and is pleased with the quantifiable improvements. Claire Sebti states, "Our WD5 close activities have been reduced by at least four hours of system processing time, just for the consolidation group." Fusion Accounting Hub is an inspiring beginning to our Fusion Financials implementation story. There's no doubt it's going to be an international bestseller! Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer

    Read the article

  • What are the software design essentials? [closed]

    - by Craig Schwarze
    I've decided to create a one-page "cheat sheet" of essential software design principles for my programmers. It doesn't explain the principles in any great depth, but is simply there as a reference and a reminder. Here's what I've come up with - I would welcome your comments. What have I left out? What have I explained poorly? What is there that shouldn't be?

    Basic Design Principles
    - The Principle of Least Surprise – your solution should be obvious, predictable and consistent.
    - Keep It Simple Stupid (KISS) – the simplest solution is usually the best one.
    - You Ain't Gonna Need It (YAGNI) – create a solution for the current problem rather than for what might happen in the future.
    - Don't Repeat Yourself (DRY) – rigorously remove duplication from your design and code.

    Advanced Design Principles
    - Program to an interface, not an implementation – Don't declare variables to be of a particular concrete class. Rather, declare them to an interface, and instantiate them using a creational pattern.
    - Favour composition over inheritance – Don't overuse inheritance. In most cases, rich behaviour is best added by instantiating objects, rather than inheriting from classes.
    - Strive for loosely coupled designs – Minimise the interdependencies between objects. They should be able to interact with minimal knowledge of each other via small, tightly defined interfaces.
    - Principle of Least Knowledge – Also called the "Law of Demeter", and colloquially summarised as "Only talk to your friends". Specifically, a method in an object should only invoke methods on the object itself, objects passed as parameters to the method, any object the method creates, and any components of the object.

    SOLID Design Principles
    - Single Responsibility Principle – Each class should have one well defined purpose, and only one reason to change. This reduces the fragility of your code, and makes it much more maintainable.
    - Open/Closed Principle – A class should be open to extension, but closed to modification. In practice, this means extracting the code that is most likely to change into another class, and then injecting it as required via an appropriate pattern.
    - Liskov Substitution Principle – Subtypes must be substitutable for their base types. Essentially, get your inheritance right. In the classic example, type square should not inherit from type rectangle, as they have different properties (you can independently set the sides of a rectangle). Instead, both should inherit from type shape (a short code sketch of this follows below).
    - Interface Segregation Principle – Clients should not be forced to depend upon methods they do not use. Don't have fat interfaces; rather, split them up into smaller, behaviour-centric interfaces.
    - Dependency Inversion Principle – There are two parts to this principle: high-level modules should not depend on low-level modules, both should depend on abstractions; and abstractions should not depend on details, details should depend on abstractions. In modern development, this is often handled by an IoC (Inversion of Control) container.
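    For concreteness, here is a minimal Java sketch of the rectangle/square example above. It is not from the original cheat sheet; the names Shape, Rectangle, Square, ShapeDemo and totalArea are purely illustrative.

```java
// A minimal sketch (illustrative names, not from the original post) of the
// Liskov example: Square and Rectangle are siblings under a common Shape
// abstraction instead of Square inheriting Rectangle's independently
// settable sides.
import java.util.Arrays;
import java.util.List;

interface Shape {
    double area();
}

final class Rectangle implements Shape {
    private final double width;
    private final double height;

    Rectangle(double width, double height) {
        this.width = width;
        this.height = height;
    }

    @Override
    public double area() {
        return width * height;
    }
}

final class Square implements Shape {
    private final double side;

    Square(double side) {
        this.side = side;
    }

    @Override
    public double area() {
        return side * side;
    }
}

public class ShapeDemo {
    // "Program to an interface, not an implementation": this method depends
    // only on Shape, so any conforming implementation can be substituted.
    static double totalArea(List<Shape> shapes) {
        double total = 0;
        for (Shape s : shapes) {
            total += s.area();
        }
        return total;
    }

    public static void main(String[] args) {
        List<Shape> shapes = Arrays.<Shape>asList(new Rectangle(2, 3), new Square(4));
        System.out.println(totalArea(shapes)); // 6.0 + 16.0 = 22.0
    }
}
```

    Because totalArea depends only on the Shape interface, new shapes can be substituted without touching the calling code, which is the same idea behind both "program to an interface" and the Liskov Substitution Principle.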

    Read the article

  • How to install Network Adapter Drivers for Atheros AR8161/8165 PCI-E Gigabit Ethernet Controller (NDIS 6.20) Ubuntu 12.04

    - by Jessica Burnett
    How can I install drivers for the 64-bit Atheros AR8161/8165 PCI-E Gigabit Ethernet Controller (NDIS 6.20) on Ubuntu 12.04? I dual boot Windows 7/Ubuntu 12.04; the drivers work under 64-bit Windows 7.

    Output of lspci -nn:

    00:00.0 Host bridge [0600]: Intel Corporation Ivy Bridge DRAM Controller [8086:0154] (rev 09)
    00:01.0 PCI bridge [0604]: Intel Corporation Ivy Bridge PCI Express Root Port [8086:0151] (rev 09)
    00:02.0 VGA compatible controller [0300]: Intel Corporation Ivy Bridge Graphics Controller [8086:0166] (rev 09)
    00:14.0 USB controller [0c03]: Intel Corporation Panther Point USB xHCI Host Controller [8086:1e31] (rev 04)
    00:16.0 Communication controller [0780]: Intel Corporation Panther Point MEI Controller #1 [8086:1e3a] (rev 04)
    00:1a.0 USB controller [0c03]: Intel Corporation Panther Point USB Enhanced Host Controller #2 [8086:1e2d] (rev 04)
    00:1b.0 Audio device [0403]: Intel Corporation Panther Point High Definition Audio Controller [8086:1e20] (rev 04)
    00:1c.0 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 1 [8086:1e10] (rev c4)
    00:1c.1 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 2 [8086:1e12] (rev c4)
    00:1c.3 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 4 [8086:1e16] (rev c4)
    00:1d.0 USB controller [0c03]: Intel Corporation Panther Point USB Enhanced Host Controller #1 [8086:1e26] (rev 04)
    00:1f.0 ISA bridge [0601]: Intel Corporation Panther Point LPC Controller [8086:1e59] (rev 04)
    00:1f.2 SATA controller [0106]: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] [8086:1e03] (rev 04)
    00:1f.3 SMBus [0c05]: Intel Corporation Panther Point SMBus Controller [8086:1e22] (rev 04)
    01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:0de9] (rev a1)
    02:00.0 Ethernet controller [0200]: Atheros Communications Inc. AR8161 Gigabit Ethernet [1969:1091] (rev 08)
    03:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 2200 [8086:0891] (rev c4)
    04:00.0 System peripheral [0880]: JMicron Technology Corp. SD/MMC Host Controller [197b:2392] (rev 30)
    04:00.2 SD Host controller [0805]: JMicron Technology Corp. Standard SD Host Controller [197b:2391] (rev 30)
    04:00.3 System peripheral [0880]: JMicron Technology Corp. MS Host Controller [197b:2393] (rev 30)
    04:00.4 System peripheral [0880]: JMicron Technology Corp. xD Host Controller [197b:2394] (rev 30)

    Output of sudo lshw -c network:

    *-network UNCLAIMED
       description: Ethernet controller
       product: AR8161 Gigabit Ethernet
       vendor: Atheros Communications Inc.
       physical id: 0
       bus info: pci@0000:02:00.0
       version: 08
       width: 64 bits
       clock: 33MHz
       capabilities: pm pciexpress msi msix bus_master cap_list
       configuration: latency=0
       resources: memory:d3a00000-d3a3ffff ioport:2000(size=128)
    *-network
       description: Wireless interface
       product: Centrino Wireless-N 2200
       vendor: Intel Corporation
       physical id: 0
       bus info: pci@0000:03:00.0
       logical name: wlan0
       version: c4
       serial: 9c:4e:36:14:d4:7c
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
       configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-23-generic firmware=18.168.6.1 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
       resources: irq:45 memory:d3900000-d3901fff

    I also tried manually configuring the wired connection. Neither wired nor wireless connects.

    Read the article

  • Friday Fun: Snowball

    - by Asian Angel
    It is Christmas Eve and hopefully you are enjoying the start of an early weekend away from work. This week we have a snowball-throwing game for you to try out, so bundle up and get ready to let those snowballs fly!

    Snowball: The object of the game is to use your snowball ammo to harass the drunk businessman and send him flying as far as you can. Simply use your mouse to aim and click the left button to throw snowballs. You can monitor your stats on the silver bar towards the top of the window. The sound can also be disabled if the music is bothering you, but keep in mind that all sound will be disabled if you use that option.

    Time to get those snowballs flying through the air! Keep hitting the businessman with your snowballs as you chase after him. Make certain that your aim is good or you will quickly run out of snowballs! You can really get him moving along at a good rate, and he can even go high enough in the air to disappear off the screen for a few moments. There is also a chance that your aim will be so wicked that you will literally knock the drunk businessman's head off! Weird but possible…

    The game ends when one of two events occurs: 1.) you run out of snowballs, or 2.) the businessman bounces back at you and then drops behind you, as seen in the screenshot here. The moment either happens, your score will pop up and you have the opportunity to try again. Have fun!

    Note: The bounce back event can happen when encountering cars.

    Play Snowball

    Read the article

  • JCP Awards 10 Year Retrospective

    - by Heather VanCura
    As we celebrate 10 years of JCP Program Award recognition in 2012, take a look back at the Retrospective article covering the history of the JCP awards. Most recently, the JCP awards were celebrated at JavaOne Latin America in Brazil, where SouJava was presented the JCP Member of the Year Award for 2012 (won jointly with the London Java Community) for their contributions and launch of the Global Adopt-a-JSR Program.

    This is also a good time to honor the JCP Award Nominees and Winners who have been designated as Star Spec Leads. Spec Leads are key to the Java Community Process (JCP) program. Without them, none of the Java Specification Requests (JSRs) would have begun, much less completed and become implemented in shipping products. Nominations for 2012 Star Spec Leads are now open until 31 December. The Star Spec Lead program recognizes Spec Leads who have repeatedly proven their merit by producing high-quality specifications, establishing best practices, and mentoring others. The point of such honor is to endorse the good work that they do, showcase their methods for other Spec Leads to emulate, and motivate other JCP program members and participants to get involved in the JCP program.

    - Ed Burns – A Star Spec Lead for 2009, Ed first got involved with the JCP program when he became co-Spec Lead of JSR 127, JavaServer Faces (JSF), a role he has continued through JSF 1.2 and now JSF 2.0, which is JSR 314.
    - Linda DeMichiel – Linda has been involved in the JCP program since its very early days. She has been the Spec Lead on at least three JSRs and an EC member for another three. She holds a Ph.D. in Computer Science from Stanford University.
    - Gavin King – Nominated as a JCP Outstanding Spec Lead for 2010 for his work with JSR 299. His endorsement said, "He was not only able to work through disputes and objections to the evolving programming model, but he resolved them into solutions that were more technically sound, and which gained support of its pundits."
    - Mike Milikich – Nominated for his work on Java Micro Edition (ME) standards, implementations, tools, and Technology Compatibility Kits (TCKs), Mike was a 2009 Star Spec Lead for JSR 271, Mobile Information Device Profile 3.
    - David Nuescheler – Serving as the CTO for Day Software, acquired by Adobe Systems, David has been a key player in the growth of the company's global content management solution. In 2002, he became Spec Lead for JSR 170, Content Repository for Java Technology API, continuing for the subsequent version, JSR 283.
    - Bill Shannon – A well-respected name in the Java community, Bill came to Oracle from Sun as a Distinguished Engineer and is still performing at full speed as Spec Lead for JSR 342 (Java EE 7), as an alternate EC member, and as a hands-on problem solver for the Java community as a whole.
    - Jim Van Peursem – Jim holds a PhD in Computer Engineering. He was part of the Motorola team that worked with Sun labs on the Spotless VM that became the KVM. From within Motorola, Jim has been responsible for many aspects of Java technology deployment, from independent Connected Limited Device Configuration (CLDC) and Mobile Information Device Profile (MIDP) implementations, to handset development, to working with the industry in defining many related standards.

    Participation in the JCP Program goes well beyond technical proficiency.
The JCP Awards Program is an attempt to say “Thank You” to all of the JCP members, Expert Group Members, Spec Leads, and EC members who give their time to contribute to the evolution of Java technology.

    Read the article

  • Interview with Koen Aben, Supply Chain Director of WE Fashion

    - by user801960
    We recently spoke to Koen Aben, the Supply Chain Director of WE Fashion, who gave us some insight into how Oracle supported the international fashion retailer through the completion of a large-scale integration project across its 340 European stores. Koen explains the reasoning behind the project, which was to create a common retail foundation and to integrate and align working processes to drive insight and enable continued growth. It is always good to hear from someone of Koen's experience who can articulate the benefits of partnering with the right company for such an extensive project as this.

    Koen explains that a crucial element of such a project is to unify business applications into a common platform, adding that for successful growth, retailers really need to achieve enterprise-wide alignment. At the start of the three-year project, WE Fashion's application platform was fragmented, impacting the company's ability to support sustained growth. In light of this, WE Fashion invested in its processes, systems, teams and partnerships to build the needed retail foundation. Now, with the project successfully completed, the basis is in place to ensure that growth is unimpeded.

    In the video, Koen Aben highlights some of the factors necessary for the success of the project:

    - Having an understanding that the process of creating a growth platform for a company is a long journey
    - Accepting that during a lengthy project such as this, there will be high and low points experienced within the project team and the business, but that the relationship with your partners is crucial to the success of the project
    - Having the correct team in place, which will prove to be the linchpin of any successful project

    Oracle supported Koen and his team in implementing this project, and is recognised for the role it played during this development in partnership with the company. On his experience of working with the Oracle team, Koen points out that in the critical situations, Oracle was there to ensure that the right people were in place whenever needed, and this was key to ensuring the project's success. Since Oracle is one of the few providers that can offer an enterprise-wide retail platform, our best practice approach is key to connecting interactions throughout the business to enable insight and optimise operations.

    This is a great example of a large-scale international retail project, where the true success of its completion is reflected in how proud the company is of what has been achieved, and the fact that results are already being seen.

    Read the article

  • Financial Management: Why Move to the Cloud?

    - by Kathryn Perry
    A guest post by Terrance Wampler, Vice President, Financials Product Strategy, Oracle

    I've spent my career designing and developing financial management systems, most of it at Oracle. Every single day I either meet with our customers or talk to them on the phone. The time is usually spent discussing various business challenges facing CFOs and Controllers, who are running Oracle's Financials. Lately, we've been talking a lot about cloud computing and whether it makes sense for finance to go to the cloud. Here are some pros and cons that might help you make that decision.

    Let's start with the benefits of cloud solutions. The first is savings. With cloud services, you pay only for those commodities that you use. That makes you feel like you're getting better value for your money. Plus, you can preserve your cash for your core business and you can get a better matching of expenses and revenues. So, at the top of the list is lower total cost of ownership.

    The second point has to do with optimization. With cloud services, you'll need less IT infrastructure so you can optimize your IT resources for better-value, higher-end projects. This also leads to greater financial visibility, where there's a clear cost for the set of services or features replaced by cloud services. And, the last benefit is what I call acceleration. You can save money by speeding up the initialization and deployment of the project. You don't have to deal with IT infrastructure and you can start implementing right away.

    We did a quick survey of about 70 CFOs at the CFO Summit last month in New York City. We asked them why they were looking at cloud services, and not necessarily just for financials. The No. 1 response was perceived lower cost of ownership.

    But of course there are risks to consider. The first thing most people think about in the cloud is security and ownership of data. So, will your data really be safe? Can you meet your own privacy policy requirements? Do you really want your private financial data exposed? Do you trust the provider? Is what you see really your data? Do you own it or is it managed by someone else? Security is a big concern that comes with an emotional component.

    The next thing in the risk category is reliability. Is the provider proven? You're taking what you have control over – for example, standards and policies and internal service level agreements – away from your IT department and giving it to someone else. Will you still be able to adapt to shifts in your business? Will the provider be able to grow with your business effectively? Reliability means having a provider that can give you the service infrastructure that you need.

    And then there's performance, which has two components in terms of risk. Going forward, will the provider be able to scale the infrastructure or service level if you have new employees or new businesses? And second, will the price you negotiate and the rate you lock in cover additional costs and rising service fees?

    Another piece is cost. What happens if you don't get the service level you want? What if you end the service? What happens, if after a few years, you send the service out for bid and change service? Can you move your data? Can you move the applications? Do the integrations work? These are cost components people don't always take into account.

    And, the final piece is the business case. The perception is that you can get started really quickly with cloud. It has a perceived lower cost of total ownership and it feels cool because it's cloud.
    But do you have a good business case for moving to the cloud? Your total cost of ownership is typically calculated over three years; then you'll renew the service, so your real TCO horizon is six years. Have you compared that to the other internal services you're offering? You might already have a product that you can run this new business or division on. In that same survey at the CFO Summit, the execs thought the biggest perceived risks were security of data, the ability to move data back, and the ability to create a business case to actually justify the risks. So that's the list of pros and cons. Not to leave you hanging, I will do another post on how to balance these pros and cons and make the right decision for your business.
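    To make that comparison concrete, here is a purely illustrative Java sketch of a six-year TCO calculation. It is not from the post, and every figure in it (subscription, implementation, hardware, support and staffing costs) is a hypothetical placeholder -- the point is only the shape of the comparison, not the numbers.

```java
// Back-of-the-envelope TCO comparison over a six-year horizon
// (a three-year term plus one renewal, as discussed above).
// All figures are hypothetical placeholders, not data from the post.
public class TcoSketch {
    public static void main(String[] args) {
        int years = 6;

        // Cloud: an annual subscription plus a one-time implementation project.
        double cloudAnnualSubscription = 400_000;
        double cloudImplementation = 250_000;
        double cloudTco = cloudImplementation + cloudAnnualSubscription * years;

        // On premise: up-front licenses and hardware, one mid-life hardware
        // refresh, plus annual support and the IT staff to run it all.
        double licensesAndHardware = 900_000;
        double hardwareRefresh = 300_000;
        double annualSupportAndStaff = 280_000;
        double onPremTco = licensesAndHardware + hardwareRefresh
                + annualSupportAndStaff * years;

        System.out.printf("Cloud six-year TCO:   %,.0f%n", cloudTco);
        System.out.printf("On-prem six-year TCO: %,.0f%n", onPremTco);
    }
}
```

    Swapping in your own numbers (and adding anything the sketch leaves out, such as exit or migration costs) is the quickest way to see whether the cloud option really carries the lower total cost of ownership for your business.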

    Read the article

< Previous Page | 464 465 466 467 468 469 470 471 472 473 474 475  | Next Page >