Search Results

Search found 10106 results on 405 pages for 'fail fast'.


  • Best Frameworks/libraries/engines for 2D multiplayer C# Web-based RPG

    - by Thirlan
    Title is a mouthful but important, because I'm looking to meet a specific set of criteria, and it's complex enough that I need a lot of help in finding what I'm looking for. I really want people's suggestions because I trust them a lot more than anything else, so I just need to clearly define what it is I want heh : P

    1. Game is a 2D RPG. Think of Secret of Mana.
    2. Game is online multiplayer, but not MMO-sized.
    3. Game must be web-based! I'm looking to the future and want to hit as many platforms as possible. I'm leaning toward WebGL because of this, but still looking around.
    4. Since the users see the game through the web browser, the front-end should be mainly responsible for drawing, taking input and some basic checking such as preliminary collision detection. This is important because it means the game engine is NOT on the client's machine.
    5. The server should be responsible for the game engine and all the calculations. This means the server is doing all the work and the client is mostly a dumb terminal.
    6. Server language is C#.
    7. I'm looking for fast project execution, so I want to use as many pre-existing tools as possible. This makes sense because I'm making a game here, not an engine. I'm not creating some revolutionary new graphics or pushing the physics engines to the next level.
    8. Preference for commercially supported tools.
    9. For game mechanics reasons, and for reasons 4 and 5, I don't think I can use existing 2D RPG engines. I've seen them out there and I fear that if I try to use them they will have too many restrictions, but I'll be happy to hear out suggestions.

    So all this means I need a game engine on the server, or maybe just a physics engine, and then I need another engine/library to draw everything that the server is sending to the client in the web browser. Maybe this is how 50% of games work on the web and there are plenty of frameworks that support this! I actually don't know : ( but my gut is telling me that most web games are single-player and 90% of the game is running on the client. So... any suggestions?

    Read the article

  • Antenna Aligner Part 5: Devil is in the detail

    - by Chris George
    "The first 90% of a project takes 90% of the time and the last 10% takes the another 200%"  (excerpt from onista) Now that I have a working app (more or less), it's time to make it pretty and slick. I can't stress enough how useful it is to get other people using your software, and my simple app is no exception. I handed my iPhone to a couple of my colleagues at Red Gate and asked them to use it and give me feedback. Immediately it became apparent that the delay between the list page being shown and the list being drawn was too long, and everyone who tried the app clicked on the "Recalculate" button before it had finished. Similarly, selecting a transmitter heralded a delay before the compass page appeared with similar consequences. All users expected there to be some sort of feedback/spinny etc. to show them it is actually doing something. In a similar vein although for opposite reasons, clicking the Recalculate button did indeed recalculate the available transmitters and redraw them, but it did this too fast! One or two users commented that they didn't know if it had done anything. All of these issues resulted in similar solutions; implement a waiting spinny. Thankfully, jquery mobile has one built in, primarily used for ajax operations. Not wishing to bore you with the many many iterations I went through trying to get this to work, I'll just give you my solution! (Seriously, I was working on this most evenings for at least a week!) The final solution for the recalculate problem came in the form of the code below. $(document).on("click", ".show-page-loading-msg", function () {            var $this = $(this),                theme = $this.jqmData("theme") ||                        $.mobile.loadingMessageTheme;            $.mobile.showPageLoadingMsg(theme, "recalculating", false);            setTimeout(function ()                           { $.mobile.hidePageLoadingMsg(); }, 2000);            getLocationData();        })        .on("click", ".hide-page-loading-msg", function () {              $.mobile.hidePageLoadingMsg();        }); The spinny is activated by setting the class of a button (for example) to the 'show-page-loading-msg' class. &lt;a data-role="button" class="show-page-loading-msg"Recalculate This means the code above is fired, calling the showPageLoadingMsg on the document.mobile object. Then, after a 2 second timeout, it calls the hidePageLoadingMsg() function. Supposedly, it should show "recalculating" underneath the spinny, but I've not got that to work. I'm wondering if there is a problem with the jquery mobile implementation. Anyway, it doesn't really matter, it's the principle I'm after, and I now have spinnys!

    Read the article

  • Block-level deduplicating filesystem

    - by James Haigh
    I'm looking for a deduplicating copy-on-write filesystem solution for general user data such as /home and backups of it. It should use online/inline/synchronous deduplication at the block level using secure hashing (for a negligible chance of collisions) such as SHA-256 or TTH. Duplicate blocks need not even touch the disk. The idea is that I should be able to just copy /home/<user> to an external HDD with the same such filesystem to do a backup. Simple. No messing around with incremental backups, where corruption to any of the snapshots will nearly always break all later snapshots, and no need to use a specific tool to delete or 'checkout' a snapshot. Everything should simply be done from the file browser without worry. Can you imagine how easy this would be? I'd never have to think twice about backing up again!

    I don't mind a performance hit; reliability is the main concern. Although, with specific implementations of cp, mv and scp, and a file browser plugin, these operations would be very fast, especially when there is a lot of duplication, as they would only need to transfer the absent blocks. Accidentally using conventional copy tools that do not integrate with the FS would merely take longer, waste some bandwidth when copying remotely and waste some CPU, as the duplicate data would be re-read, re-transferred and re-hashed (although nothing would be re-written), but would absolutely not corrupt anything. (Some filesharing software may also be able to benefit by integrating with the FS.)

    So what's the best way of doing this? I've looked at some options:

    - lessfs - Looks unmaintained. Any good?
    - [Opendedup/SDFS][3] - Java? Could I use this on Android?! What does [SDFS][4] stand for?
    - [Btrfs][5] - Some patches floating around on mailing list archives, but no real support.
    - [ZFS][6] - Hopefully they'll one day relicense under a true Free/Open Source, GPL-compatible licence.

    Also, 2 years ago I had a go at an attempt in Python using FUSE at the file level, to be used on top of a typical solid FS such as ext4, but I found FUSE for Python under-documented and didn't manage to implement all of the system calls.

    My first post here, so I can't post more than 2 links until I get over 10 rep:

    [3]: http://www.opendedup.org/
    [4]: https://en.wikipedia.org/w/index.php?title=SDFS&action=edit&redlink=1
    [5]: https://en.wikipedia.org/wiki/Btrfs#Features
    [6]: https://en.wikipedia.org/wiki/ZFS#Linux
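    To make the inline, block-level idea concrete, here is a minimal sketch in Java (17+, for HexFormat) of the core mechanism: split data into fixed-size blocks, hash each block with SHA-256, and store a block only if its hash is new. This is an illustration under stated assumptions - names like BlockStore are hypothetical, and a real filesystem would add reference counting, crash safety and an on-disk layout - not a description of any of the implementations listed above.

        import java.nio.file.*;
        import java.security.MessageDigest;
        import java.util.*;

        // Sketch of inline block-level deduplication: each block is stored
        // once, keyed by its SHA-256 hash; a "file" is just a list of hashes.
        public class BlockStore {
            static final int BLOCK_SIZE = 4096;

            private final Map<String, byte[]> blocks = new HashMap<>();

            // Returns the recipe (list of block hashes) that reconstructs the data.
            public List<String> write(byte[] data) throws Exception {
                MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
                List<String> recipe = new ArrayList<>();
                for (int off = 0; off < data.length; off += BLOCK_SIZE) {
                    byte[] block = Arrays.copyOfRange(data, off,
                            Math.min(off + BLOCK_SIZE, data.length));
                    String hash = HexFormat.of().formatHex(sha256.digest(block));
                    blocks.putIfAbsent(hash, block); // duplicates never touch the store
                    recipe.add(hash);
                }
                return recipe;
            }

            public static void main(String[] args) throws Exception {
                BlockStore store = new BlockStore();
                byte[] file = Files.readAllBytes(Path.of(args[0]));
                List<String> recipe = store.write(file);
                store.write(file); // writing a second copy adds no new blocks
                System.out.println(recipe.size() + " blocks, "
                        + store.blocks.size() + " unique");
            }
        }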

    Read the article

  • Architects, Leadership, and Influence

    - by Bob Rhubart
    Technical expertise is a given for architects. In addition to solid development experience, extensive knowledge of technical trends, tools, standards, and methodologies (not to mention business acumen) provides the foundation for the decisions the architect must make in the effort to get all the pieces to work together. But even superior technical chops can't overcome a lack of leadership. Leadership is about influence: the ability to effectively communicate — to sell your ideas and defend your decisions in a manner that affects the decisions of the people around you. Leadership and influence are especially important in situations in which the architect may not have the authority to simply tell people what to do. And even when the architect has that kind of authority, influential leadership can mean the difference between gaining real buy-in and support from colleagues and stakeholders, and settling for their grudging acceptance (or worse). Guess which outcome is likely to produce the best results. In a previous post I presented some examples of the kind of criticism that is leveled at architects, a great deal of which can be attributed to a lack of leadership and influence on the part of the targets of that criticism. So it was serendipitous that I recently ran across a post on the Harvard Business Review blog written by Chris Musselwhite and Tammie Plouffe. That post, When Your Influence Is Ineffective, includes this: [I]nfluence becomes ineffective when individuals become so focused on the desired outcome that they fail to fully consider the situation. While the influencer may still gain the short-term desired outcome, he or she can do long-term damage to personal effectiveness and the organization, as it creates an atmosphere of distrust where people stop listening, and the potential for innovation or progress is diminished. The need to "see the big picture" is a grossly reductive assessment of the architect's responsibilities — but that doesn't mean it's not true. That big picture perspective must encompass both the technological elements of the architecture and the elements responsible for implementing those technologies in compliance with the prescribed architecture. Technologies may be temperamental, but they don't have personalities or egos, and they are unlikely to carry a grudge — not yet, anyway (Hello, Skynet!). Effective leadership and the ability to influence people can help to ensure that all the pieces fit and that they work together, today and tomorrow.

    Read the article

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I've been meaning to write for a while; good that it fits with this month's T-SQL Tuesday, hosted by Joey D'Antoni (@jdanton).

    Ever since I got into databases, I've been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I've also spent a long time as a developer, and appreciate that databases don't exactly fit within the stuff I learned in my first year of uni, particularly the "Algorithms and Data Structures" subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps it's why I'm a fan of database internals, of indexes, latches, execution plans, and so on – the developer in me wants to be reassured that we're getting to the data as efficiently as possible.

    Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, and that it was obviously going to be much faster than using the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I'd looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn't exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance.

    SQL Server 2014 is different though, through Hekaton – its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you'd've made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it's durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and Bw-trees (which avoid locking through the use of pointers), and updates are handled as a delete-and-insert. This is data the way that developers do it when they're coding for performance – the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don't support every feature that regular SQL tables do, this is still an excellent direction that has been taken.

    @rob_farley
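    To picture the pattern Rob describes - rows as structs appended to arrays, reached through hash indexes rather than latched pages - here is a rough Java analogy. It is purely illustrative: Hekaton actually generates and compiles C, and its lock-free multiversioning is far more sophisticated than a HashMap.

        import java.util.*;

        // Rough analogy of an in-memory row store: rows live in an
        // append-only list, and a hash index maps keys to row references.
        // No pages, no latches - just objects and pointers.
        public class InMemoryTable {
            static final class Row {                  // stand-in for a C struct
                final int id;
                final String name;
                Row(int id, String name) { this.id = id; this.name = name; }
            }

            private final List<Row> heap = new ArrayList<>();             // append-only storage
            private final Map<Integer, Row> hashIndex = new HashMap<>();  // id -> row

            public void insert(int id, String name) {
                Row row = new Row(id, name);  // allocate a new "struct" instance
                heap.add(row);                // add it to the array
                hashIndex.put(id, row);       // hash lookup, not a B-tree traversal
                // a real engine would also append a small durable log record here
            }

            public Row lookup(int id) { return hashIndex.get(id); }

            // Updates are handled as delete-and-insert: a new version is
            // appended, and the index is repointed at it.
            public void update(int id, String newName) {
                Row v2 = new Row(id, newName);
                heap.add(v2);
                hashIndex.put(id, v2);
            }
        }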

    Read the article

  • How employable am I as a programmer?

    - by dsimcha
    I'm currently a Ph.D. student in Biomedical Engineering with a concentration in computational biology, and am starting to think about what I want to do after graduate school. I feel like I've accumulated a lot of programming skills while in grad school, but have taken a very non-traditional path to learning all this stuff. I'm wondering whether I would have an easy time getting hired as a programmer and could fall back on that if I can't find a good job directly in my field, and if so, whether I would qualify for a more prestigious position than "code monkey".

    Things I Have Going For Me

    - Approximately 4 years of experience programming as part of my research. I believe I have a solid enough grasp of the fundamentals that I could pick up new languages and technologies pretty fast, and could demonstrate this in an interview.
    - Good math and statistics skills.
    - An extensive portfolio of open source work (and the knowledge that working on these projects implies):
      - I wrote a statistics library in D, mostly from scratch.
      - I wrote a parallelism library (parallel map, reduce, foreach, task parallelism, pipelining, etc.) that is currently in review for adoption by the D standard library.
      - I wrote a 2D plotting library for D against the GTK Cairo backend. I currently use it for most of the figures I make for my research.
      - I've contributed several major performance optimizations to the D garbage collector. (Most of these were low-hanging fruit, but it still shows my knowledge of low-level issues like memory management, pointers and bit twiddling.)
      - I've contributed lots of miscellaneous bug fixes to the D standard library and could show the change logs to prove it. (This demonstrates my ability to read other people's code.)

    Things I Have Going Against Me

    - Most of my programming experience is in D and Python. I have very little to virtually no experience in the more established, "enterprise-y" languages like Java, C# and C++, though I have learned a decent amount about these languages from small, one-off projects and discussions about language design in the D community.
    - In general I have absolutely no knowledge of "enterprise-y" technologies. I've never used a framework before, possibly because most reusable code for scientific work and for D tends to call itself a "library" instead.
    - I have virtually no formal computer science/software engineering training. Almost all of my knowledge comes from talking to programming geek friends, reading blogs, forums, StackOverflow, etc.
    - I have zero professional experience with the official title of "developer", "software engineer", or something similar.

    Read the article

  • Agile Documentation

    - by Nick Harrison
    We all know that one of the premises of the Agile Manifesto is to value Working Software over Comprehensive Documentation. This is a wonderful idea and it takes a tremendous burden off of project implementations. I have seen as many projects fail because of the maintenance weight of the project documentation as I have for any other reason. But this goal, as important as it is, may not always be practical. Sometimes the client will simply insist on tedious documentation despite the arguments against it. This may be to calm a nervous client. This may be to satisfy an audit/compliance requirement. This may be a none-too-subtle attempt at sabotaging the project. OK, it is probably not an all-out attempt to sabotage the project, but it will probably feel that way.

    So what can we do to keep to the spirit of the Agile Manifesto but still meet the needs of the client wanting the documentation? This is a good question that I have been puzzling over lately! I hope to explore some possible answers more fully here. A common theme that my solutions are likely to follow is the same theme that I often follow when simplifying complex business logic: make it table driven! My thought is that the sought-after documentation could be a report or reports out of a metadata repository. Reports are much easier to maintain than hand-written documentation. Here are a few additional advantages that we can explore over time:

    - Reports will take advantage of the fact that different people have different needs and different format requirements.
    - Reports and the supporting metadata are more easily validated, and the validation can be automated.
    - If the application itself uses this metadata, then there never has to be a question as to whether or not the metadata is up to date. It is up to date or the application would not work.
    - In many cases we should be able to automatically gather most of the metadata that we need using reflection, system tables, etc., as sketched below.

    I think that this will lower the total cost of ownership for the documentation and may provide something useful beyond having a pretty document to look at. What are your thoughts?
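    As a small illustration of the reflection idea (a sketch only - the class name and report format here are hypothetical, not from the original post), a report generator can walk a type's members and emit a documentation table that is, by construction, never out of date:

        import java.lang.reflect.Field;
        import java.lang.reflect.Method;

        // Sketch: derive a documentation table from the code that actually
        // runs, so the "document" cannot drift out of date.
        public class MetadataReport {
            public static void describe(Class<?> type) {
                System.out.println("Members of " + type.getSimpleName() + ":");
                for (Field f : type.getDeclaredFields()) {
                    System.out.printf("  field  %-24s %s%n",
                            f.getName(), f.getType().getSimpleName());
                }
                for (Method m : type.getDeclaredMethods()) {
                    System.out.printf("  method %-24s returns %s%n",
                            m.getName(), m.getReturnType().getSimpleName());
                }
            }

            public static void main(String[] args) throws Exception {
                // e.g. java MetadataReport java.math.BigDecimal
                describe(Class.forName(args[0]));
            }
        }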

    Read the article

  • Will Online Learning Save Higher Education (and does it need saving)?

    - by user739873
    A lot (an awful lot) of education industry rag real estate has been devoted to the topics of online learning, MOOCs, Udacity, edX, etc., etc., and to the uninitiated you'd think that the education equivalent of the cure for cancer had been discovered. There are certainly skeptics (whose voice is usually swiftly trampled upon by the masses) who feel we could oversteer and damage or destroy something vital to teaching and learning (i.e. the classroom experience and direct interaction with human beings known as instructors), but for the most part prevailing opinion seems to be that online learning will take over the world and that higher education will never be the same.

    Now I'm sure that since you all know I work for a technology company, you think I'm going to come down hard on the side of online learning proselytizers. Yes, I do believe that this revolution can and will provide access to massive numbers of individuals that either couldn't afford (from a fiscal or time perspective) a traditional education, and that in some cases the online modality will actually be an improvement over certain traditional forms (such as courses taught by an adjunct or teaching assistant that has no business being a teacher). But I think several things need immediate attention or we're likely to get so caught up in the delivery that we miss some of the real issues (and opportunities) around online learning.

    First and foremost, we've got to give some thought to how traditional information systems are going to accommodate thousands (possibly hundreds of thousands) of individual students each taking courses from many, many different "deliverers", with an expectation that successful completion of these courses will result in credit at many or most institutions. There's also a huge opportunity to refine the delivery platform (no, LMS is not a commodity when you are talking about online delivery being your sole mode of operation), as well as the course itself, by mining all kinds of data from the interactions that the students have with the material each time they take it. Social data analytics tools will be key in achieving this goal. What about accreditation (badging or competencies vs. traditional degrees)? And again, will the information systems in place today adapt to changes in this area fast enough? The type of scale that this shift in learning could drive has the potential to abruptly overwhelm just about every system in place today in higher education.

    I would like to (with a not so gentle reminder) refer you back to a blog entry I wrote when I first stepped into my current role at Oracle, in which I talked about how higher ed needs an "Oracle" more than at any other time in its evolution (despite the somewhat mercantilist reputation it has in some circles). There just aren't that many organizations that can deliver the kinds of solutions "at scale" that this brave new world of online education will demand. The future may be closer than we think.

    Cole

    Read the article

  • Live-Ubuntu 12.04 ran fine, now stopped booting!

    - by user89743
    I've seen similar problems to this several times in the forum, but mine is a bit different, so the other posts I saw were no help to me. When I boot Ubuntu 12.04 64-bit from a live SD card (3 GB persistence) I suddenly get this error:

        (initramfs) mount: mounting /dev/loop0 on //filesystem.squashfs failed: Invalid argument
        Can not mount /dev/loop/0 (/cdrom/casper/filesystem.squashfs) on //filesystem.squashfs

    (It says I can type "help" for commands, but I don't know anything about how to go on from there; I'm totally new to Linux.)

    The reason I say my case is different is because my Ubuntu worked fine for over a week, even pretty fast, and now this problem happened. Before that I used to run my live Ubuntu from USB sticks, but that was slower (especially when booting, which took 15 minutes from a USB stick!). Also, I kept getting the same problem as above after a while when booting, and had to re-create a live USB Linux several times. Installing on the hard drive is not an option because my hard drive has physical damage and getting a replacement will take a while; therefore I can only use live-USB or live-SD-card Ubuntu.

    As I said, I used Ubuntu without problems for more than a week, and before that for several weeks on USB sticks, but the above problem occurred sooner or later. This time I paid attention to when it happened: I was rebooting my computer (HP 620 laptop, 4 GB RAM, 64-bit system) from the SD card, and when booting I selected F6 and then the first option, "no acpi" or something like that... I had used it before and noticed it reduced the time Linux took to boot. This time it caused this error. Now even when I boot normally/by default I get this error.

    Now I'm accessing Ubuntu from my USB stick without a persistence file. When I check my SD card, all the files mentioned in the error message are there, and filesystem.squashfs is 691.2 MB, so nothing seems to have been deleted by accident. (I have already made many changes/downloaded programs to my SD-card persistent Ubuntu and would hate to lose them, since downloading is expensive for me, and since the problem seems to re-occur...)

    Can anyone help me, preferably without having to create another startup disk on my SD card? I'm totally new to this. Sorry for the long post, I just didn't know what info is relevant and what isn't!

    Kon

    Read the article

  • Registering a domain during the Christmas holidays

    - by arkascha
    One of the domain names I tried to register previously was blocked by a domain grabber two days prior to my own attempt. That was about 1 year ago. The attempt to buy the domain from that person failed due to a totally exaggerated price, so I dropped the issue and watched the domain (offered at sedo.com). As expected there were no more offers, and the domain was not sold. Now I learn from the whois database that the registration of that domain name ends on 25.12.2012 (a Christmas holiday).

    This raises two questions for me, and I fail to find reliable answers on the internet. So maybe someone experienced here can drop a statement or a hint:

    Is it reasonable to expect that the domain name in question really will be free again once the date mentioned in the whois database has passed? I certainly know that the registration can be prolonged; that is not what I mean. I expect (hope) that the domain grabber does not extend the registration, since it costs money and effort and he failed to sell the domain. Provided this is the case and the domain registration is not prolonged, is that date reliable? Or might it just be some 'default' date?

    I would like to try to register that domain name as soon as it is unregistered. Since that domain grabber registered the domain only two days before my own registration attempt, I would like to prevent such annoying interference next time. So I ask myself: is it possible to register a domain name on a holiday? I mean not just to send an email to my provider to do so on that day or before, but to actually have the process take place then, so as not to wait 1-2 days after the unregistration. My own provider, which I am very happy with, does not offer such a service on a holiday (which is perfectly understandable). They are 'still checking' if they can offer something automatic. I researched and did not find an answer to the question of whether that is possible at all. Is an automatic registration attempt on a holiday possible? Where can I do that? Is it reliable?

    Thanks for any reply!

    Read the article

  • Perfect is the enemy of “Good Enough”

    - by Daniel Moth
    This is one of the quotes that I used to be against, but now it is totally part of my core beliefs: "Perfect is the enemy of Good Enough".

    Folks used to share this quote a lot with me in my early career, and my frequent interpretation was that they were incompetent people that were satisfied with mediocrity, i.e. I ignored them and their advice. (Yes, I went through an arrogance phase.) I later "grew up" and "realized" that they were missing the point, so instead of ignoring them I would retort: "Of course we have to aim for perfection, because as human beings we'll never achieve perfection, so by aiming for perfection we will indeed achieve good enough results". (Yes, I went through a smart ass phase.) Later I grew up a bit more and "understood" that what I was really being told is to finish my work earlier and move on to other things, because by trying to perfect that one thing, another N things that I was responsible for were suffering by not getting my attention - all things on my plate need to move beyond the line, not just one of them to go way over the line. It is really a statement of increasing scale and scope.

    To put it in other words, getting PASS grades on 10 things is better than getting an A+ with distinction on 1-2 and a FAIL on the rest. Instead of saying "I am able to do very well on these X items", it is best if you can say "I can do well enough on these X * Y items", where Y > 1. That is how breadth impact is achieved.

    In the future, I may grow up again and have a different interpretation, but for now - even though I secretly try to "perfect" things, I try not to do that at the expense of other responsibilities. This means that I haven't had anybody quote that saying to me in a while (or perhaps my quality of work has dropped so much that it doesn't apply to me any more - who knows :-)).

    Wikipedia attributes the quote to Voltaire, and it also makes connections to the "Law of diminishing returns" and to the "80-20 rule" or "Pareto principle": it commonly takes 20% of the full time to complete 80% of a task, while completing the last 20% of a task takes 80% of the effort. Check out the Wikipedia entry on "Perfect is the enemy of Good" and its links. Also use your favorite search engine to search and see what others are saying (Bing, Google) - it is worth internalizing this in a way that makes sense to you...

    Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Is Innovation Dead?

    - by wulfers
    My question is: has innovation died? For large businesses that do not have vibrant and fearless leadership (see Apple under Steve Jobs), I think it has. If you look at the organizational charts of many of the large corporate megaliths, you will see a plethora of middle managers who are so risk-averse that innovation (any change involves risk) is choked off, since there are no innovation champions in the middle layers. And innovation driven top-down can only happen when you have a visionary in the top ranks, and that is also very rare.

    So where is actual innovation happening? At the bottom layer: the people who live in the trenches... the people who live for a challenge. So how can big business leverage this innovation layer? Remove the middle management layer. Provide an innovation champion who has an R&D budget and is tasked with working with the bottom layer of a company - the engineers, developers and business analysts that live on the edge (where the corporate tires meet the road).

    Here are two innovation failures I will tell you about, and both have been impacted by a company so risk-averse it is starting to fail in its primary business ventures:

    - This company initiated an innovation process several years ago. The process was driven companywide, with team managers being the central points of collection for innovative ideas. These managers were given no budget to do anything with these ideas. There was no process or incentive for these managers to drive it within their teams. This lasted close to a year and the innovation program slowly slipped into oblivion...

    - A second example: this same company failed in an attempt to market a consumer product in a line where there was already a major market leader. This product was under development for several years and needed to provide some major device differentiation from the current market leader. This same company had a large Lead Technologist community made up of real innovators in all areas of technology. Did this same company leverage the skills and experience of this internal community? NO!!!

    So to wrap this up: if large companies really want to survive, then they need to start acting like a small company. Support those innovators and risk takers! Reward them by implementing their innovative ideas. Champion (from the top down) innovation (found at the bottom) in your companies. Remember, if you stand still you are really falling behind. Do it now! Take a risk!

    Read the article

  • Arçelik A.S. Uses Advanced Analytics to Improve Product Development

    - by Sylvie MacKenzie, PMP
    "Oracle’s Primavera P6 Enterprise Project Portfolio Management’s advanced analytics gives us better insight into the product development process by helping us to identify potential roadblocks.” – Iffet Iyigun Meydanli, Innovation and System Development Manager, R&D Center, Arçelik A.S. Founded in 1955, Arçelik A.S. is now the leading household appliance manufacturer in Turkey, and the third-largest household appliance company in Europe. It operates 14 production facilities in five countries (Turkey, Romania, Russia, China, and South Africa), with international sales and marketing offices in 20 countries. Additionally, the company manages 10 brands (Arçelik, Beko, Grundig, Blomberg, Elektrabregenz, Arctic, Leisure, Flavel, Defy, and Altus). The company has a household presence in more than 100 countries, including China and the United States. Arçelik’s Beko brand is among the top-10 household appliance brands in world, as a market leader for refrigerators, freezers, and washing machines in the United Kingdom. Arçelik implemented Oracle’s Primavera P6 Enterprise Project Portfolio Management for improved management of its design and manufacturing projects. With the solution, Arelik has improved its research and development (R&D) with the ability to evaluate technology risks when planning its projects. Also, it is now more easy to make plans for several locations, monitor all resources, and plan for future projects.  Challenges Improve monitoring of R&D resources?including human resources and critical laboratory equipment?to optimize management of the company’s R&D project portfolio Establish a transparent project platform to enable better product and process planning, gain insight into product performance, and facilitate advanced analytics that support R&D and overall business decisions Identify potential roadblocks for better risk management Solutions Worked with Oracle Partner PRM to implement Oracle’s Primavera P6 Enterprise Project Portfolio Management to manage the entire household-appliance, R&D project portfolio lifecycle, enabling managers and project leaders to better track and monitor resources and deliverables in real time Improved risk analysis and evaluation abilities for R&D projects Supported long-term planning needs Used advanced reporting features to capture data needed for budgeting and other project details, including employee performance evaluations Improved monitoring abilities and insight into the overall performance of products postproduction Enabled flexible, fast, and customized reporting with the P6 dashboard on a centralized platform to meet custom reporting needs for project leaders and support on-time and on-budget deliverables Integrated with other corporate departments, such as accounts payable, to upload project invoice data into the Primavera solution and the company’s e-mail system, so that project leaders will be alerted about milestones and other project related information Partner“Oracle Partner PRM provided us with a quick, reliable, and solution-focused approach to its support,” said Iffet Iyigun Meydanli, innovation and system development manager, R&D Center, Arçelik A.S. “The company’s service covered the entire spectrum of our needs, including implementation, training, configuration, problem solving, and integration.”

    Read the article

  • Setting up Ubuntu on my mother's computer

    - by idealmachine
    Intended use

    My mother had an old Compaq desktop computer running Windows 98, which she used for occasional Web browsing and playing cards. The name of her card game is Hoyle Card Games 3. Although I had to repair it several times over the last 10 years, it worked fine until it finally died at the end of last year.

    Hardware specifications

    A relative brought up a newer computer soon afterward:
    - Operating system: Windows XP
    - Asus K8N motherboard (with broken on-board sound; getting a sound card)
    - Athlon 64? processor (don't remember the clock speed)
    - 512 MB RAM
    - Hope the graphics card works...

    Replacement sound card will be one of:
    - Ensoniq ES1370 AudioPCI
    - Diamond Monster Sound MX300 (Aureal chipset)
    - Sound Blaster Audigy 2 SE

    Peripherals
    - HP Scanjet 3400c scanner (USB connected)
    - HP LaserJet multi-function printer (parallel port connected, and printing works with a PCL driver)
    - Same serial mouse as old computer

    Question

    I had set up an SSH/VNC connection to allow for remotely working out problems. Or so I thought. A month later, the computer would not boot, rendering the SSH connection useless and an OS reinstall necessary. Unfortunately, I have neither the original Windows disc nor the product key. Unless I were to pay $200 for a full Windows 7 Home Premium license for my computer, I would not be able to re-install Windows XP on hers. I consider myself an advanced Linux user, having used Debian for years. So here are my questions. I have only one day to decide whether to use Ubuntu or buy Windows:

    - A quick search leads me to believe all the hardware listed above is supposed to work with Linux, but am I mistaken?
    - Would Ubuntu/Xubuntu suffice (specify which one if it matters), or would I be better off paying the $200 necessary for Windows XP?
    - Is the card game likely to run on Wine? I believe the minimum system requirement is Windows 95.
    - Failing Wine compatibility, will VirtualBox run fast enough on such a computer (Windows 98 as the guest OS)?
    - Are there any free card games just as good? She plays mainly Bridge, Poker, and Solitaire.
    - Is there any "Large Fonts" option for those with poor vision? The lack of it would be a big disadvantage.

    BONUS: Although I would probably replace the old mouse upon a move to Ubuntu, is it even possible to get a serial mouse working?

    Read the article

  • PeopleSoft New Design Solves Navigation Problem

    - by Applications User Experience
    Anna Budovsky, User Experience Principal Designer, Applications User Experience

    In PeopleSoft we strive to improve the user experience on all levels. Simplifying navigation and streamlining access to the most important pages is always an important goal. No one likes to waste time waiting for pages to load and watching a spinning hourglass going on and on. Those performance-affecting server trips, page-load waits and just-too-many clicks were complained about for a long time. Something had to be done.

    A few new designs came in PeopleSoft 9.2 to help users access their everyday work areas easier and faster. For example, Dashboard and Work Center aggregate the most accessed information sections on a single page; Related Information allows users to complete transaction-related research without interrupting a transaction; and Secure Search gets users to a specific page directly. Today we'll talk about the Actions menu.

    Most PeopleSoft pages are shared between individual products and product lines. That means changing the content on a single page involves Oracle development and quality assurance time for making and testing the changes. In order to streamline navigation and cut down on accessing PeopleSoft pages one page at a time, we introduced a new menu design. The new menu allows accessing shared pages without the Oracle development team making any local changes, and it works as an additional one-click path to specific high-traffic actionable pages.

    Let's look at how many steps it took to Change Salary for an employee in HCM 9.1 before:

    Figure 1. BEFORE: The 6 steps a user would take to Change Salary in PeopleSoft HCM 9.1

    In PeopleSoft 9.1 it took 5 steps + page loading time + additional verification time for making sure the correct employee is selected from the table. In PeopleSoft 9.2 it only takes 2 steps. To complete an Ad Hoc Change Salary action, the user can start from the HCM Manager's Dashboard, click the Actions menu within a table, choose a menu option, and access the correct employee's details page to take an action.

    Figure 2. AFTER: The 2 steps a user would take to Change Salary in PeopleSoft HCM 9.2

    The new menu is placed at the row level, which ensures the user accesses the correct employee's details page. The Actions menu separates menu options into hierarchical sections, which helps to scan and access the correct option quickly. The new menu's small size and structure enable users to access high-traffic pages from any page and from any part of the page. No more spinning hourglass, no more multiple page loads. The flexible design fits anywhere on a page and provides a fast and reliable path to the correct destination within the product.

    Now users can:
    - Access any target page no matter how far it is buried from the starting point
    - Reduce navigation and page-load time
    - Improve productivity and reduce errors

    The new menu design is available and widely used in all PeopleSoft 9.2 product lines.

    Read the article

  • Alert: It is No Longer 1982, So Why is CRM Still There?

    - by Mike Stiles
    Hot on the heels of Oracle's recent LinkedIn integration announcement and Oracle Marketing Cloud Interact 2014, the Oracle Social Cloud is preparing for another big event, the CRM Evolution conference and exhibition in NYC. The role of social channels in customer engagement continues to grow, and social customer engagement will be a significant theme at the conference. According to Paul Greenberg, CRM Evolution Conference Chair, author, and Managing Principal at The 56 Group, social channels have become so pervasive that there is no longer a clear reason to make a distinction between "social CRM" and traditional CRM systems.

    Why not? Because social is a communication hub every bit as vital and used as the phone or email. What makes social different is that if you think of it as a phone, it's a party line. That means customer interactions are far from secret, and social connections are listening in by the hundreds, hearing whether their friend is having a positive or negative experience with your brand. According to a Mention.com study, 76% of brand mentions are neutral, neither positive nor negative. These mentions fail to get much notice. So think what that means about the remaining 24% of mentions. They're standing out, because a verdict, about you, is being rendered in them, usually with emotion.

    Suddenly, where the R of CRM has been lip service and somewhat expendable in the past, "relationship" takes on new meaning, seriousness, and urgency. Remarkably, legions of brands still approach CRM as if it were 1982. Today, brands must provide customer experiences the customer actually likes (how dare they expect such things). They must intimately know not only their customers, but each customer, because technology now makes personalized experiences possible. That's why the Oracle Social Cloud has been so mission-oriented about seamlessly integrating social with sales, marketing and customer service interactions, so the enterprise can have an actionable 360-degree view of the customer. It's the key to that customer-centricity we hear so much about these days.

    If you're attending CRM Evolution, Chris Moody, Director of Product Marketing for the Oracle Marketing Cloud, will show you how unified customer experiences and enhanced customer centricity will help you attract and keep ideal customers and brand advocates ("The Pursuit of Customer-Centricity", Aug 19 at 2:45p ET). And Meg Bear, Group Vice President for the Oracle Social Cloud, will sit on a panel talking about "terms of engagement" and the ways tech can now enhance your interactions with customers (Aug 20 at 10a ET). If you can't be there, we'll be doing our live-tweeting thing from the @oraclesocial handle, so make sure you're a faithful follower. You'll notice NOBODY is writing about the wisdom of "company-centricity." Now is the time to bring your customer relationship management into the socially connected age.

    @mikestiles
    Photo: Sue Pizarro, freeimages.com

    Read the article

  • Broken package manager due to incorrect Banshee package

    - by user54974
    Sup, so, I'm not familiar with Linux at all, so help is much appreciated. I've been trying to boot my PC from a live CD, unsuccessfully. I get to the stage at which there are the options to test without installing, or install, and so on, where I select 'Install Ubuntu'. Here it scrolls through some fast DOS-style output until it reaches 'end trace' and then, eventually, 'Killed'. I already have a functional 11.10 version installed; could this be a problem?

    The reason I am attempting a reinstall is because the package system is damaged inside 11.10, a problem I can't seem to solve. If I try to install any new software from within the Software Centre, it tells me that two Banshee extensions must be removed. I try to remove these from inside the terminal, using apt-get remove, which results in:

        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies.
         banshee-extension-ubuntuonemusicstore : Depends: banshee (>= 2.2.1) but 2.2.0-1ubuntu2 is to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    The Software Centre suggests that I disable all third-party repositories and run apt-get install -f. I have done so, but the package system remains damaged, and apt-get install -f attempts to install banshee 2.2.1 but returns:

        Errors were encountered while processing:
         /var/cache/apt/archives/banshee_2.2.1-1ubuntu3_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have also tried apt-get update (runs fine) and apt-get upgrade. The upgrade command apt-get upgrade results in:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies.
         banshee-extension-soundmenu : Depends: banshee (>= 2.2.1) but 2.2.0-1ubuntu2 is installed
         banshee-extension-ubuntuonemusicstore : Depends: banshee (>= 2.2.1) but 2.2.0-1ubuntu2 is installed
        E: Unmet dependencies. Try using -f.

    I seem to be going round and round in circles here! If only I could reinstall successfully. Only proposed updates (oneiric-proposed) is not enabled.

    Read the article

  • Oracle Could Lead In Cloud Business Apps Within Year

    - by Richard Lefebvre
    Below is a reprint of an article written by Pete Barlas, Investor's Business Daily, published on Investors.com:

    Oracle (ORCL) is all but destined to become the largest seller of cloud business-software applications, analysts say, and perhaps within a year. What that means in the long run is much debated, though, as analysts aren't sure whether pricing competition might cut into profit or what other issues might develop in the fast-emerging cloud software field. But the database leader, which is either No. 1 or 2 to SAP (SAP) in business apps overall, simply has the size and scope to overtake the current cloud business-app leader, Salesforce.com (CRM), analysts say.

    Oracle rolled out its first full suite of cloud applications on June 6. Cloud computing lets companies store data and apps on the Internet "cloud" and access it quickly and easily. The applications run the gamut from customer relationship management software to social networking sites for employees, partners and customers. For longtime software giants like Oracle, the cloud is a big switch. They get the great bulk of revenue from companies and other enterprises buying or licensing software that the customers keep on their own computer systems. Vendors also get annual maintenance fees.

    Analysts estimate Oracle is taking in a mere $1 billion or so a year from cloud-based software sales and services now. But while that's just a sliver of the company's $37 billion in sales last year, it's already about a third of the total sales for Salesforce, which is expected to end this year with some $3 billion in revenue.

    Operates In 145 Countries

    Oracle operates in more than 145 countries vs. about 70 for Salesforce. And Oracle has far more apps than Salesforce. Revenue doesn't equate to profit, but it's inevitable that huge Oracle will become the largest seller of cloud applications, says Trip Chowdhry, an analyst for Global Equities Research. "What Oracle has is global presence," he said. "They have two things driving the revenue: breadth of the offering and breadth of the distribution. You put those applications in those sales reps' hands and you get deployments not in just one country but several countries."

    At the June 6 event, Oracle CEO Larry Ellison emphasized that his company could and would beat Salesforce.com in head-to-head battles for customers. Oracle makes software to help companies manage such tasks as customer relationships, recruiting, supply chains, projects, finances and more. That range gives it an edge over all rivals, says Michael Fauscette, an analyst for research firm IDC.

    Read the article

  • Profiling Startup Of VS2012 – Ants Profiler

    - by Alois Kraus
    I just downloaded ANTS Profiler 7.4 to check how fast it is and how deeply I can analyze the startup of Visual Studio 2012. The Pro version, which is the useful one, costs 445€, which is OK. To measure a complex system I decided to simply profile VS2012 (Update 1) on my older Intel 6600 2,4 GHz with 3 GB RAM and a 32-bit Windows 7. ANTS Profiler is really easy to use, so let's try it out. ANTS Profiler wants to start the profiled application itself, which seems to be rather common. I chose method-level timing of all managed methods, and in the configuration menu I asked for all call stacks to get full details. Once this is configured you are ready to go.

    After that you can select the Method Grid to view wall clock time in ms. I hate percentages, which are on by default, because I want to look at where absolute time is spent, not at something relative.

    From the Method Grid I can drill down to see where time is spent, and I can look at the decompiled methods where the time goes. This does look really nice. But did you see the size of the scroll bar in the Method Grid? Although I wanted all call stacks, I only get about 4 pages of methods to drill down into. From the scroll bar size I would guess that the profiler shows me about 150 methods for the complete VS startup. This is nonsense. I will never find a bottleneck in VS when I am presented with only a fraction of the methods that were actually executed. I also tried, in the configuration window, to profile the extremely trivial functions as well, but there was no noticeable difference. It seems that ANTS Profiler filters away far too many details to be useful for bigger systems. If you want to optimize a CPU-bound operation inside NUnit, then ANTS Profiler, with its line-level timings, is a very nice tool to work with. But for bigger stuff it is certainly not usable. I also do not like that I must start the profiled application from the profiler UI. This makes it hard to profile processes which are started by some other process.

    Next: JetBrains dotTrace

    Read the article

  • How to cleanly add after-the-fact commits from the same feature into the git tree

    - by Dennis
    I am one of two developers on a system, and I make most of the commits at this time. My current git workflow is as such: there is a master branch only (no develop/release). I make a new branch when I want to do a feature, do lots of commits, and then when I'm done, I merge that branch back into master, and usually push it to the remote.

    ...except, I am usually not done. I often come back to alter one thing or another, and every time I think it is done, but it can be 3-4 commits before I am really done and move on to something else.

    Problem

    The problem I have now is that my feature branch tree is merged and pushed into master and remote master, and then I realize that I am not really done with that feature, as in I have finishing touches I want to add, where finishing touches may be cosmetic only, or may be significant, but they still belong to that one feature I just worked on.

    What I do now

    Currently, when I have extra after-the-fact commits like this, I solve the problem by rolling back my merge and re-merging my feature branch into master with my new commits, so that the git tree looks clean: one clean feature branch branched out of master and merged back into it. I then push --force my changes to origin. Since my origin doesn't see much traffic at the moment, I can almost count on things being safe, or I can even talk to the other dev if I have to coordinate. But I know it is not a good way to do this in general, as it rewrites what others may have already pulled, causing potential issues. And it did happen even with my dev, where git had to do an extra weird merge when our trees diverged.

    Other ways to solve this which I deem to be not so great

    - The next best way is to just make those extra commits to the master branch directly, be it a fast-forward merge or not. It doesn't make the tree look as pretty as my current way of solving this, but then it's not rewriting history.
    - Yet another way is to wait. Maybe wait 24 hours and not push things to origin. That way I can rewrite things as I see fit. The con of this approach is time wasted waiting, when people may be waiting for a fix now.
    - Yet another way is to make a "new" feature branch every time I realize I need to fix something extra. I may end up with things like feature-branch, feature-branch-html-fix, feature-branch-checkbox-fix, and so on, kind of polluting the git tree somewhat.

    Is there a way to manage what I am trying to do without the drawbacks I described? I'm going for clean-looking history here, but maybe I need to drop this goal if technically it is not a possibility.

    Read the article

  • How to apply effects that occur (or change) over time to characters in a game?

    - by Joshua Harris
    So assume that I have a system that applies Effects to Characters like so:

        public class Character {
            private Collection<Effect> effects = new ArrayList<>();

            public void addEffect(Effect e) {
                e.applyTo(this);
                effects.add(e);
            }

            public void removeEffect(Effect e) {
                e.removeFrom(this);
                effects.remove(e);
            }
        }

        public interface Effect {
            void applyTo(Character character);
            void removeFrom(Character character);
        }

    Example Effect: an armor buff for 5 seconds.

        void someFunction() {
            // Do stuff ...
            Timer armorTimer = new Timer(/* 5 seconds */);
            ArmorBuff armorBuff = new ArmorBuff();
            character.addEffect(armorBuff);
            armorTimer.start();
            // Do more stuff ...
        }

        // Somewhere else in code
        public void armorTimerComplete() {
            character.removeEffect(armorBuff);
        }

        public class ArmorBuff implements Effect {
            public void applyTo(Character character) {
                character.changeArmor(20);
            }

            public void removeFrom(Character character) {
                character.changeArmor(-20);
            }
        }

    OK, so this example would buff the character's armor for 5 seconds. Easy to get working. But what about effects that change over the duration of the effect being applied? Two examples come to mind:

    - Damage over time: 200 damage every second for 3 seconds. I could mimic this by applying an Effect that lasts for 1 second and has a counter set to 3; when it is removed, it could deal 200 damage, clone itself, decrement the clone's counter, and apply the clone to the character. If it repeats this until the counter is 0, then you have a damage-over-time ability. I'm not a huge fan of this approach, but it does describe the behavior exactly.

    - Degenerating speed boost: gain a speed boost that degrades over 3 seconds until you return to your normal speed. This is a bit harder. I can basically do the same thing as above, except having timers set to some portion of a second, such that they occur fast enough to give the appearance of degrading smoothly over time (even though they are really just stepping down incrementally). I feel like you could get away with only 12 steps over a second (maybe less; I would have to test it and see), but this doesn't seem very elegant to me.

    The only other way to implement this would be to change the system so that the Character checks the effects collection for effects that alter any of its properties any time they are used. I could handle this in functions like getCurrentSpeed() and getCurrentArmor(), but you can imagine how much of a hassle it would be to have that kind of overhead every time you want to do a calculation with movement speed (which would be every time you move your character). Is there a better way to deal with these kinds of effects or events?
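    For what it's worth, one common pattern (a sketch only, not from the question above; Character's takeDamage and setSpeedBonus are assumed helper methods) is to give each effect a per-frame update hook and tick all active effects from the game loop, so damage over time and decaying modifiers fall out naturally without timer juggling:

        // Effects receive the elapsed frame time and expire themselves.
        interface TimedEffect {
            void update(Character c, float dt); // dt = seconds since last frame
            boolean isExpired();
        }

        class DamageOverTime implements TimedEffect {
            private float untilNextTick = 1f; // one tick per second
            private float remaining = 3f;     // total duration in seconds

            public void update(Character c, float dt) {
                remaining -= dt;
                untilNextTick -= dt;
                if (untilNextTick <= 0f) {
                    c.takeDamage(200);        // assumed helper on Character
                    untilNextTick += 1f;
                }
            }

            public boolean isExpired() { return remaining <= 0f; }
        }

        class DecayingSpeedBoost implements TimedEffect {
            private static final float DURATION = 3f;
            private float elapsed = 0f;

            public void update(Character c, float dt) {
                elapsed += dt;
                // linear decay from full bonus down to zero over DURATION
                float t = Math.min(elapsed / DURATION, 1f);
                c.setSpeedBonus(2f * (1f - t)); // assumed helper on Character
            }

            public boolean isExpired() { return elapsed >= DURATION; }
        }

        // In Character, tick all effects once per frame and drop expired ones:
        //     for (Iterator<TimedEffect> it = effects.iterator(); it.hasNext();) {
        //         TimedEffect e = it.next();
        //         e.update(this, dt);
        //         if (e.isExpired()) it.remove();
        //     }

    With this shape, the stat-query concern need not be expensive either: the derived speed can be cached and recomputed only when an effect is added, removed, or ticks, rather than on every movement calculation.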

    Read the article

  • 5.1 surround sound

    - by rocker9455
    OK, so I've always had trouble enabling 5.1 in Ubuntu.

    Running 'alsamixer', I have: Master, Headphones, PCM, Front, Front Mi, Front Mi, Surround, Center. All are at 100%.

    Card: HDA Intel
    Chip: Realtek ALC888

    (This is my onboard sound. It's a Dell Studio, with 7.1 integrated sound.)

    Running "speaker-test -c6 -twav" I only get the front 2 speakers (right/left) making any noise. The others make no noise at all. I have no other sound card to use, as all my PCI slots are used up.

    daemon.conf:

        ; daemonize = no
        ; fail = yes
        ; allow-module-loading = yes
        ; allow-exit = yes
        ; use-pid-file = yes
        ; system-instance = no
        ; enable-shm = yes
        ; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB
        ; lock-memory = no
        ; cpu-limit = no
        ; high-priority = yes
        ; nice-level = -11
        ; realtime-scheduling = yes
        ; realtime-priority = 5
        ; exit-idle-time = 20
        ; scache-idle-time = 20
        ; dl-search-path = (depends on architecture)
        ; load-default-script-file = yes
        ; default-script-file =
        ; log-target = auto
        ; log-level = notice
        ; log-meta = no
        ; log-time = no
        ; log-backtrace = 0
        resample-method = speex-float-1
        ; enable-remixing = yes
        ; enable-lfe-remixing = no
        flat-volumes = no
        ; rlimit-fsize = -1
        ; rlimit-data = -1
        ; rlimit-stack = -1
        ; rlimit-core = -1
        ; rlimit-as = -1
        ; rlimit-rss = -1
        ; rlimit-nproc = -1
        ; rlimit-nofile = 256
        ; rlimit-memlock = -1
        ; rlimit-locks = -1
        ; rlimit-sigpending = -1
        ; rlimit-msgqueue = -1
        ; rlimit-nice = 31
        ; rlimit-rtprio = 9
        ; rlimit-rttime = 1000000
        ; default-sample-format = s16le
        ; default-sample-rate = 44100
        ; default-sample-channels = 6
        ; default-channel-map = front-left,front-right
        default-fragments = 8
        default-fragment-size-msec = 10

    Read the article

  • What arguments can I use to "sell" the BDD concept to a team reluctant to adopt it?

    - by S.Robins
    I am a bit of a vocal proponent of the BDD methodology. I've been applying BDD for a couple of years now, and have adopted StoryQ as my framework of choice when developing DotNet applications. Even though I have been unit testing for many years and had previously shifted to a test-first approach, I find I get much more value out of using a BDD framework, because my tests capture the intent of the requirements in relatively clear English within my code (the sketch after this question shows what that looks like), and because my tests can execute multiple assertions without ending the test halfway through, meaning I can see at a glance which specific assertions pass or fail without debugging to prove it.

    This has really been the tip of the iceberg for me. I've also noticed that I am able to debug both test and implementation code in a more targeted manner, with the result that my productivity has grown significantly, and I can more easily determine where a failure occurs if a problem happens to make it all the way to the integration build, thanks to the output that makes its way into the build logs. Further, the StoryQ API has a lovely fluent syntax that is easy to learn, can be applied in an extraordinary number of ways, and requires no external dependencies.

    With all of these benefits, you would think it would be easy to introduce the concept to the rest of the team. Unfortunately, the other team members are reluctant to even look at StoryQ to evaluate it properly (let alone entertain the idea of applying BDD), and have convinced each other to try to remove a number of StoryQ elements from our core testing framework, even though they originally supported the use of StoryQ and it doesn't impact any other part of our testing system. Doing so would increase my workload significantly and really goes against the grain, as I am convinced through practical experience that it is a better way to work test-first in our particular environment, and can only improve the quality of our software, given I've found it easier to stick with test-first using BDD.

    So the question really comes down to the following:

    What arguments can I use to really drive home the point that it would be better to use StoryQ, or at the very least to apply the BDD methodology?

    Can you point me to any anecdotal evidence that I can use to support my argument to adopt BDD as our standard method of choice?

    What counter-arguments can you think of that could suggest my wish to convert the team's efforts to BDD might be in error? Yes, I'm happy to be proven wrong provided the argument is a sound one.

    NOTE: I am not advocating that we rewrite our tests in their entirety, but rather that we simply start working in a different manner for all future testing work.
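    For anyone who hasn't seen it, the sketch below is roughly what a StoryQ specification reads like, illustrating the "requirements in relatively clear English" point above. It is written from memory, so treat the fluent-method names (InOrderTo, AsA, IWant, WithScenario, Given, When, Then, Execute) as approximate and check them against the StoryQ documentation; the Account class is hypothetical.

    using NUnit.Framework;
    using StoryQ;

    [TestFixture]
    public class AccountSpecs
    {
        private Account _account;   // hypothetical class under test

        [Test]
        public void Balance_reflects_deposits()
        {
            new Story("keeping my balance up to date")
                .InOrderTo("know how much money I have")
                .AsA("customer")
                .IWant("my balance to reflect deposits")
                .WithScenario("balance after a deposit")
                    .Given(IHaveAnEmptyAccount)
                    .When(IDepositTenPounds)
                    .Then(MyBalanceShouldBeTen)
                .Execute();
        }

        // StoryQ turns these method names into readable step text in its report.
        private void IHaveAnEmptyAccount()  { _account = new Account(); }
        private void IDepositTenPounds()    { _account.Deposit(10); }
        private void MyBalanceShouldBeTen() { Assert.AreEqual(10, _account.Balance); }
    }

    Because each step asserts separately, a failure is reported against its step rather than silently aborting the rest of the test, which is the pass/fail-at-a-glance behaviour described above.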

    Read the article

  • Evolution laggy due to IMAP profile or due to some odd sync issue?

    - by Izzy
    I'm fighting with Evolution. Basically it's working fine, but it is very slow to react in certain situations.

    Helper questions: Could it be that the move away from Bonobo has something to do with the slowdown? There might be some trouble with the new engine and "asynchronous actions". What can I do about it? Are there, for example, any configuration files? I want to get the previous "working mood" back. How can I speed this thing up?

    Different scenarios:

    When sending a mail, the composer window hangs there inactive for a couple of seconds, everything grayed out. Though there is a green check mark saying it's sent, I'm not sure a) why it's still blocking everything and b) whether I could simply close it without "breaking"/"losing" anything. In earlier versions, the composer window closed pretty fast; one could see the message being stored in the local "outbox" until it was sent, and one could immediately continue with the next task. I prefer that behaviour over the current one, where I cannot do anything in the application until the window closes.

    Switching between modules: coming from mail, switching to the address book takes a couple of seconds. Same for switching to the calendar.

    I read about different "possible causes" and tried a few things:

    I only have 3 local address books, so no networking should be involved here. To make sure, I switched to offline mode and then tried to access the address book. No noticeable difference.

    I use 3 Google calendars. Switching to offline mode made a minor difference, but one so minor that it could also be "imagination", since one might have expected an improvement in this case.

    According to some reports, disabling tasks should help. Well, it didn't in my case, as I don't use them regularly (just two local items stored here).

    Maybe I should also mention that I'm using the KDE4 desktop (so no Unity or GNOME, though both are installed on the computer). And I did not have this issue before I updated to 12.04.

    Read the article

  • Loose Coupling and UX Patterns for Applications Integrations

    - by ultan o'broin
    I love that software architecture phrase "loose coupling". There's even a whole book about it. And if you're involved in enterprise methodology, you'll know just how important loose coupling is to the smart development of applications integrations too. Whether you are integrating offerings from the Oracle partner ecosystem with Fusion apps or working on applications coexistence scenarios, loose coupling enables the development of scalable, reliable, flexible solutions, with no second-guessing of technology. Another great book, Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, tells us about the loose coupling benefit of reducing the assumptions that integration parties (components, applications, services, programs, users) make about each other when they exchange information.

    Eliminating assumptions applies to UI development too. The days of assuming it's enough to hard-code a UI with linked libraries of called code on a desktop PC for an office worker are over. The book predates PaaS development and SaaS deployments, and was written when web services and APIs were emerging, yet it calls out how using middleware as an assumptions-dissolving technology "glue" is central to applications integration. Realizing integration design through a set of middleware messaging patterns (messaging in the sense of asynchronously communicating data) that enable developers to meet the typical business requirements of enterprises requiring integrated functionality is very Fusion-like.

    User experience developers can benefit from the loose coupling approach too. User expectations and work styles change all the time, and development is now about integrating SaaS through PaaS. Cloud computing offers a virtual pivot where a single source of truth (customer or employee data, for example) can be experienced through different UIs (desktop, simplified, or mobile), each optimized for the context of the user's world of work and task completion.

    Smart enterprise applications developers, partners, and customers use design patterns for user experience integration benefits too. The Oracle Applications UX design patterns (and supporting guidelines) enable loose coupling of the optimized UI requirements from code. Developers can get on with the job of creating integrations through web services, APIs, and SOA without having to figure out design problems about how UIs should work. Adding the already user-proven UX design patterns (and supporting guidelines) to your toolkit means ADF and other developers can easily offer much more than just functionality, and be super productive too. Great-looking application integration touchpoints can be built with our design patterns and guidelines for a seamless applications UX.

    One of Oracle's partners, Innowave Technologies, used a loose coupling architecture and our UX design patterns to create an integration for a customer that was scalable, cost-effective, and fast to develop, kept users productive, and paved a roadmap for the customer to keep pace with the latest UX designs over time. Innowave CEO Basheer Khan, a Fusion User Experience Advocate, explains how to do it on the Usable Apps blog.
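    As a toy illustration of that "reducing assumptions" idea (my sketch, not from the book or any Oracle code): the only thing the two sides of the integration below share is the message contract, so either side can be replaced, or new consumers added, without touching the other. The EmployeeUpdated and IMessageChannel names are hypothetical.

    using System;
    using System.Collections.Generic;

    // The message contract is the only assumption the two parties share.
    public sealed class EmployeeUpdated
    {
        public string EmployeeId { get; set; }
        public string DisplayName { get; set; }
    }

    public interface IMessageChannel
    {
        void Publish(EmployeeUpdated message);
        void Subscribe(Action<EmployeeUpdated> handler);
    }

    // Toy in-memory channel; in a real integration this role is played by middleware.
    public sealed class InMemoryChannel : IMessageChannel
    {
        private readonly List<Action<EmployeeUpdated>> _handlers =
            new List<Action<EmployeeUpdated>>();

        public void Publish(EmployeeUpdated message)
        {
            foreach (var handler in _handlers) handler(message);
        }

        public void Subscribe(Action<EmployeeUpdated> handler)
        {
            _handlers.Add(handler);
        }
    }

    A desktop UI, a simplified UI, and a mobile UI can each subscribe to the same channel without the publisher knowing any of them exist, which is the single-source-of-truth pivot described above.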

    Read the article
