Search Results

Search found 15670 results on 627 pages for 'multi level'.


  • ArchBeat Link-o-Rama Top 20 for April 1-9, 2012

    - by Bob Rhubart
    The top 20 most popular items shared via my social networks for the week of April 1 - 8, 2012:
      - Webcast: Oracle Maximum Availability Architecture Best Practices w/Tom Kyte - April 12
      - Oracle Cloud Conference: dates and locations worldwide
      - Bad Practice Use Case for LOV Performance Implementation in ADF BC | Oracle ACE Director Andrejus Baranovskis
      - How to create a Global Rule that stores a document's folder path in a custom metadata field | Nicolas Montoya
      - MySQL Cluster 7.2 GA Released
      - How to deal with transport level security policy with OSB | Jian Liang
      - Webcast Series: Data Warehousing Best Practices http://bit.ly/I0yUx1
      - Interactive Webcast and Live Chat: Oracle Enterprise Manager Ops Center 12c Launch - April 12
      - Is This How the Execs React to Your Recommendations? | Rick Ramsey
      - Unsolicited login with OAM 11g | Chris Johnson
      - Event: OTN Developer Day: MySQL - New York - May 2
      - OTN Member discounts for April: Save up to 40% on titles from Oracle Press, Pearson, O'Reilly, Apress, and more
      - Get Proactive with Fusion Middleware | Daniel Mortimer
      - How to use the Human WorkFlow Web Services | Oracle ACE Edwin Biemond
      - Northeast Ohio Oracle Users Group 2 Day Seminar - May 14-15 - Cleveland, OH
      - IOUG Real World Performance Tour, w/Tom Kyte, Andrew Holdsworth, Graham Wood
      - WebLogic Server Performance and Tuning: Part I - Tuning JVM | Gokhan Gungor
      - Crawling a Content Folio | Kyle Hatlestad
      - The Java EE 6 Example - Galleria - Part 1 | Oracle ACE Director Markus Eisele
      - Reminder: JavaOne Call For Papers Closing April 9th, 11:59pm | Arun Gupta
    Thought for the Day: "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." — Leslie Lamport

    Read the article

  • Question about mipmaps + anisotropic filtering

    - by Telanor
    I'm a bit confused here and maybe someone can explain this to me. I created a simple test texture for my terrain which is nothing more than a solid green color with a black grid overlaid on top of it. If I look at the terrain in the distance with mipmapping on and linear filtering, the grid lines become blurry fairly quickly and further back the grid is pretty much invisible. With these settings, I don't get any moire patterns at all. If I turn on anisotropic filtering, however, the higher the anisotropic level, the more the terrain looks like it did without mipmapping. The lines are much crisper nearby but in the distance I start to see terrible moire patterns. My understanding was that mipmapping is supposed to get rid of moire patterns. I've always had anisotropic filtering on in every game I play and I've never noticed any moire patterns as a result, so I don't understand why it's happening in my game. I am using logarithmic depth, however; could that be causing any problems? And if it is, how do I resolve it? I've created my sampler state like so (I'm using SlimDX):
      ssa = SamplerState.FromDescription(Engine.Device, new SamplerDescription {
          AddressU = TextureAddressMode.Clamp,
          AddressV = TextureAddressMode.Clamp,
          AddressW = TextureAddressMode.Clamp,
          Filter = Filter.Anisotropic,
          MaximumAnisotropy = anisotropicLevel,
          MinimumLod = 0,
          MaximumLod = float.MaxValue
      });

    Read the article

  • The Other "C" in CRM

    - by [email protected]
    By Brian Dayton on April 5, 2010 7:04 PM Folks who know me know that I rarely, if ever, talk politics. And I never talk politicians. Having grown up in a household with one parent leaning left and the other leaning to the right, it was the best way to keep the peace. This isn't about politics. It's about "constituents" and the need to improve the services and service levels for people--at the city, county, state/province, etc. level all the way up to national governments. As a citizen and taxpayer it's also important to me that these services be provided at a reasonable cost. If there's a better and more efficient way to do something then it's my hope that a public sector organization takes advantage of technology the same way private sector companies do. Social services organizations have a complex job. They provide the services that people need, from healthcare and children's assistance to helping people find jobs. But many of these organizations are still managing these processes manually or with outdated, home-grown applications that may have been written as long as 30 years ago. A lot has changed in technology. On the (this is as political as I'm going to get) political front, stakeholders like you and me are expecting greater transparency on where and how funds are spent. I'll admit that most of the time, when I think about CRM systems, I think about my experience as a customer of my bank, utilities company or cable operator. But now that I'm older, have children and a house--I find myself interacting more and more with agencies and services organizations. My experiences are sometimes good and sometimes not so good. Along those lines, last week's announcement of Siebel CRM 8.2 for Public Sector caught my eye. You may not work in the public sector, but you are a constituent of some--actually a lot--of public sector organizations. I don't know which CRM systems my city and county utilize, but I'm going to start paying closer attention.

    Read the article

  • How to analyze a scenario where a bug didn't get caught and adjust development workflow to prevent similar errors

    - by durron597
    I had a bug that was really difficult to track down, because all the unit tests were green, but the production application didn't work properly. Here's what happened: I had a filter class that set my application to ignore data that was not in some specified time windows.
      - The unit test, which seemed thorough to me, turned green.
      - Additionally, my integration tests also produced results as expected.
      - Production, however, did not work.
    As a result of the first two bullets, this problem was very difficult to find. It turned out the problem was that my test dates were using my time zone (America/Chicago) but the production data was providing dates in UTC, which I did not realize, and the logic for the filter wasn't correct for UTC dates. (I was using Joda-Time DateTime objects.) Where did my workflow break down? Did I fail to produce a spec that specified that the logic needed to handle dates in any time zone? Did I fail to thoroughly consider all cases at the unit test level? Did I fail to ensure the integration test was sufficiently similar to production? Other? What changes can I make to my workflow to better prevent this sort of mistake in the future? How can I more effectively debug a problem when there is an issue in production but not in testing?
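    To make the time-zone assumption explicit, here is a minimal sketch (hypothetical class and method names, not the poster's actual code) of one way to harden such a filter with Joda-Time: normalize every timestamp to UTC inside the filter, and have a test feed the same instant in two different zones.
      import org.joda.time.DateTime;
      import org.joda.time.DateTimeZone;
      import org.joda.time.Interval;

      public class TimeWindowFilter {
          private final Interval windowUtc;

          public TimeWindowFilter(DateTime start, DateTime end) {
              // Store the window in UTC so comparisons never depend on the default zone.
              this.windowUtc = new Interval(start.withZone(DateTimeZone.UTC),
                                            end.withZone(DateTimeZone.UTC));
          }

          public boolean accepts(DateTime timestamp) {
              // withZone() changes the representation, not the instant, so data arriving
              // in America/Chicago and data arriving in UTC are judged by the same rule.
              return windowUtc.contains(timestamp.withZone(DateTimeZone.UTC));
          }
      }

      // In a unit test, exercise both zones explicitly for the same instant:
      //   DateTime chicago = new DateTime(2012, 4, 5, 10, 0, DateTimeZone.forID("America/Chicago"));
      //   DateTime utc     = chicago.withZone(DateTimeZone.UTC);
      //   assertEquals(filter.accepts(chicago), filter.accepts(utc));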

    Read the article

  • Kinect will recognise your finger movement

    - by Boonei
    Sources inside Microsoft suggest the MS guys are trying to improve the motion controller. This could be a huge boost to gaming. There are quite a few things that we can do with our fingers when playing games like driving, shooting, sports, etc. If fingers are captured then the Xbox will give a more realistic version of our avatar. Eurogamer has also suggested the same according to their sources. It would be only (mostly) a software update and would not require a new camera, because the USB controller interface currently in place in Kinect can take in data up to 35 MBps. The current utilization is only around half of that, so there is room to send more data. A little more tech data: Kinect currently transmits 320×240 pixels at 30 fps; if the device could capture and transfer at 640×480 pixels, the better resolution could detect more movements compared to the current level of detection. Let's wait and watch! This article, titled Kinect will recognise your finger movement, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • Inheriting projects - General Rules? [closed]

    - by pspahn
    Possible Duplicate:
      - When is a BIG Rewrite the answer?
      - Software rewriting alternatives
      - Are there any actual case studies on rewrites of software success/failure rates?
      - When should you rewrite?
      - We're not a software company. Is a complete re-write still a bad idea?
      - Have you ever been involved in a BIG Rewrite?
    This is an area of discussion I have long been curious about, but overall, I generally lack the experience to give myself an answer that I would fully trust. We've all been there: a new client shows up with a half-complete project they are looking to finish and launch. For whatever reason, they fired their previous developer, and it's now up to you to save the day. I am just finishing up a code review for a new client, and in my estimation it would be better to scrap what the previous developers built and start from scratch. There's a ton of reasons why I am leaning this way, but it still makes me nervous since the client isn't going to want to hear "those last guys built you a big turd, and I can either polish it, or throw it in the trash". What are your general rules for accepting these projects? How do you determine whether it will be better to start from scratch or continue with the existing code base? What other extra steps might you take to help control client expectations, since the previous developer may have inflated those expectations beyond a reasonable level? Any other general advice?

    Read the article

  • Inheriting projects - General Rules?

    - by pspahn
    This is an area of discussion I have long been curious about, but overall, I generally lack the experience to give myself an answer that I would fully trust. We've all been there: a new client shows up with a half-complete project they are looking to finish and launch. For whatever reason, they fired their previous developer, and it's now up to you to save the day. I am just finishing up a code review for a new client, and in my estimation it would be better to scrap what the previous developers built and start from scratch. There's a ton of reasons why I am leaning this way, but it still makes me nervous since the client isn't going to want to hear "those last guys built you a big turd, and I can either polish it, or throw it in the trash". What are your general rules for accepting these projects? How do you determine whether it will be better to start from scratch or continue with the existing code base? What other extra steps might you take to help control client expectations, since the previous developer may have inflated those expectations beyond a reasonable level? Any other general advice?

    Read the article

  • Upgrading/Installing Demantra 7.3.1.1? Check this out!

    - by user702295
    Here is a summary for release 7.3.1.1 install/upgrade/features:
      - Data Preservation Setting for General Levels
      - Deploying Demantra Application Server 10g
      - Important upgrade information
      - Known upgrade issues
      - Mozilla Firefox Browser
      - Installer issues
      - Reviewing / Simulating General Level Data Such as CTO Base Model Demand
      - Failure Rate Calculation
      - Demantra SSL Client Authentication and Java 6
      - CTO functionality does not work in release 7.3.1.1 after upgrading from 7.3.0 using the 'Platform Upgrade Only' option
      - User Privileges and Export Worksheet to Excel
      - Cookie Attribute Causes Logging Issue in Worksheet
      - List of bugs fixed in 7.3.1.1
    See the following for details:
      - Demantra 7.3.1.1 Install / Upgrade Known Issues, Notes, Guidance, Defects, Workarounds (Doc ID 1370518.1)
      - Related Documents For Demantra Version 7.3.1.1 And If Demantra Supports The Required Stacks (Doc ID 1367141.1)

    Read the article

  • New Oracle.com global navigation

    - by tim.bonnemann
    This is a guest post by Michal Kopec, Senior User Experience Architect here at Oracle Marketing Brand and Creative. We have just refreshed the Oracle.com global navigation to serve you better with Oracle related information and news. Highlights:
      1. Updated, user-oriented and business information balanced navigational categories. Say hello to the new categories: Downloads, Education and Oracle Technology Network. Oracle Partner Network navigation received a facelift too.
      2. Brand new flyout based navigation - mouse over Partners for instance - providing both a high level content overview as well as shortcut links for the most popular website destinations.
      3. Introducing audience based - I'm a... - and task based - I want to... - navigation. Now you can navigate based on who you are or what it is you want to accomplish. Please note this is an initial step - we want to build these out based on your opinions and feedback.
      4. Adjusted the Oracle Technology Network horizontal navigation to match Oracle.com. Oracle Technology Network users can now benefit from OTN content being accessible from anywhere during their Oracle.com and OTN visits, of course :)
      5. Last but not least - we applied the same refreshed global navigation to a couple of country sites, starting with Oracle Brazil and Oracle China. More to come.
    The project's internal code name is Mosaic. It is an effort to provide you with a unified user and brand experience during your Oracle website visits. Every time you hear Mosaic, expect great things to happen. With that - please let us know what you think. We value your opinion.

    Read the article

  • I know fundamental programming. But how do I get started in game development now?

    - by Rohan Menon
    I'm a 20 year old programming student. I know fundamental programming in BASIC, C, C++ and Java. What I wanted to ask is, where do I go from here? Are there any books the community can recommend that will help me develop a game or at least learn game development? I've had a lot of ideas and really want to make some sort of prototype to see if I'm suited for the industry. I really don't mind learning any new languages, but I need to know what I should begin with. A good book that will help with a little more understanding as I go will be very helpful. Maybe a tutorial to develop some basic 2D games like a side-scroller, snake or pocket tanks in an easy to understand SDK? I know that to get some credit under your belt, you need to be able to make a few games on your own. Also, what platform should I start on? The PC, iOS or Android (as an introduction) for now. I don't want to get into high level game design just yet, just something a bit basic to help out in future development. Anything pointing me in the right direction will be really helpful. Edit: Also, I want to say that I'm looking at this from a game designer's point of view more than a game programmer's. I want suggestions on any SDKs or easy to use programs I can use to understand game design, then delve deeper into the programming after that. Not as employment, but as developing my own games (for now).

    Read the article

  • Customer Concepts TV

    - by Richard Lefebvre
    Eliminate the Guesswork in Your Customer's Sales Organization. Selling is the lifeblood of every business. In the past, companies would increase headcount to boost sales. In today's business environment, companies need to re-evaluate the way in which they sell. Sales and marketing organisations must optimise performance, increase team productivity and focus on the best opportunities. Oracle Fusion CRM has been specifically designed with tools to help sales and marketing teams improve efficiency and drive revenue. Territory modeling and management, quota and commission management, collaborative features, real-time customer information and mobile device integration are just some of the features incorporated. Join us on Customer Concepts TV as we aim to help you find the right strategy for your prospects and customers. Whether they already have a CRM solution in place or are looking for the next level of CRM implementation, this online TV show will give you very practical advice that can help you make the most out of your CRM implementation. Register now to reserve your spot for this exclusive, live-stream event. Customer Concepts TV comes to you on April 24. Watch the Customer Concepts TV trailer here.

    Read the article

  • Comparing the Performance of Visual Studio's Web Reference to a Custom Class

    As developers, we all make assumptions when programming. Perhaps the biggest assumption we make is that those libraries and tools that ship with the .NET Framework are the best way to accomplish a given task. For example, most developers assume that using ASP.NET's Membership system is the best way to manage user accounts in a website (rather than rolling your own user account store). Similarly, creating a Web Reference to communicate with a web service generates markup that auto-creates a proxy class, which handles the low-level details of invoking the web service, serializing parameters, and so on. Recently a client made us question one of our fundamental assumptions about the .NET Framework and Web Services by asking, "Why should we use the proxy class created by Visual Studio to connect to a web service?" In this particular project we were calling a web service to retrieve data, which was then sorted, formatted slightly and displayed in a web page. The client hypothesized that it would be more efficient to invoke the web service directly via the HttpWebRequest class, retrieve the XML output, populate an XmlDocument object, then use XSLT to output the result to HTML. Surely that would be faster than using Visual Studio's auto-generated proxy class, right? Prior to this request, we had never considered rolling our own proxy class; we had always taken advantage of the proxy classes Visual Studio auto-generated for us. Could these auto-generated proxy classes be inefficient? Would retrieving and parsing the web service's XML directly be more efficient? The only way to know for sure was to test our client's hypothesis. Read More >

    Read the article

  • Which offline method is better for a large scale application?

    - by Manish Pansiniya
    We have a big data management website used by several properties. Some of our customers have downtime (they can't access the net for an hour or two). We want our site to support offline data viewing and inventory management (typical data search and add/remove), and when the user goes online we can sync the changes back to our central database. Customers use several platforms like Windows, iOS, etc. We've been looking into several different options; here are the major choices:
      - Develop an offline web app supported in HTML5. Develop a 'fallback' mechanism and interact with data from the app cache as explained here (http://www.htmlgoodies.com/html5/tutorials/introduction-to-offline-web- applications-using-).
      - Develop a desktop based cross platform solution. I remember the old Mono, which has been popular. Here's a post (What do you suggest for cross platform apps, including web cross-platform-apps-including-web) and another one (Technology choice for cross platform development (desktop and phone)? platform-development-desktop-and-phone?rq=1).
    I realize the desktop solution might be hard to maintain, result in some compatibility issues and demand test environments. Can HTML5 handle moderate to high level complexity and data flow? Or would it be better to rely on a desktop based app for better scalability and performance?

    Read the article

  • Javascript: Machine Constants Applicable?

    - by DavidB2013
    I write numerical routines for students of science and engineering (although they are freely available for use by anybody else as well) and am wondering how to properly use machine constants in a JavaScript program, or if they are even applicable. For example, say I am writing a program in C++ that numerically computes the roots of the following equation: exp(-0.7x) + sin(3x) - 1.2x + 0.3546 = 0. A root-finding routine should be able to compute roots to within the machine epsilon. In C++, this value is specified by the language: DBL_EPSILON. C++ also specifies the smallest and largest values that can be held by a float or double variable. However, how does this convert to JavaScript? Since a JavaScript program runs in a web browser, and I don't know what kind of computer will run the program, and JavaScript does not have corresponding predefined values for these quantities, how can I implement my own version of these constants so that my programs compute results to as much accuracy as allowed on the computer running the web browser? My first draft is to simply copy over the literal constants from C++:
      FLT_MIN: 1.17549435082229e-038
      FLT_MAX: 3.40282346638529e+038
      DBL_EPSILON: 2.2204460492503131e-16
    I am also willing to write small code blocks that could compute these values for each machine on which the program is run. That way, a supercomputer might compute results to a higher accuracy than an old, low-end PC. BUT, I don't know if such a routine would actually reach the computer, in which case I would be wasting my time. Anybody here know how to compute and use (in JavaScript) values that correspond to machine constants in a compiled language? Is it worth my time to write small programs in JavaScript that compute DBL_EPSILON, FLT_MIN, FLT_MAX, etc. for use in numerical routines? Or am I better off simply assigning literal constants that come straight from C++ on a standard Windows PC?
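    For what it's worth, the double-precision epsilon can be computed at run time with a short halving loop. The sketch below is written in Java for illustration; since JavaScript numbers are IEEE-754 doubles, the identical loop in JavaScript yields the same value (2^-52), and Number.MAX_VALUE and Number.MIN_VALUE already expose the largest and smallest-positive representable doubles (note MIN_VALUE is the smallest denormal, not the smallest normalized value like DBL_MIN).
      public class MachineEpsilon {
          public static void main(String[] args) {
              double eps = 1.0;
              // Halve until 1.0 + eps/2 is no longer distinguishable from 1.0;
              // the last distinguishable value is the machine epsilon.
              while (1.0 + eps / 2.0 > 1.0) {
                  eps /= 2.0;
              }
              System.out.println("machine epsilon = " + eps);  // prints 2.220446049250313E-16
          }
      }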

    Read the article

  • How can I make an MMORPG appeal to casual players?

    - by Philipp
    I believe that there is a significant market of players who would enjoy the exploration and interaction aspects of MMORPGs, but simply don't have the time for the endless grinding marathons which are part of the average MMORPG. MMORPGs are all about interaction between players. But when different players have different amounts of time to invest into a game, those with less time to spend will soon lag behind their power-leveling friends and won't be able to interact with them anymore. One way to solve this would be to limit the progress a player can achieve per day, so that it simply doesn't make sense to play more than one or two hours a day. But even the busiest casual players sometimes like to spend a whole Sunday afternoon playing a video game. Just stopping them after two hours would be really frustrating. It also creates a pressure to use the daily progress limit every day, because otherwise the player would feel like wasting something. This pressure would be detrimental for casual gamers. What else could be done to level the playing field between those players who play 40+ hours a week and those who can't play more than 10?
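    One variation on the daily limit mentioned above is to let unused allowance bank up (similar in spirit to rested-XP systems), so a Sunday marathon can spend a week's allowance while a two-hour-a-day player never hits a hard stop. The sketch below only illustrates that idea; the class name and numbers are hypothetical, not a recommendation from the post.
      class ProgressBank {
          private static final double DAILY_ALLOWANCE = 100.0;  // progress points granted per day
          private static final double MAX_BANKED      = 700.0;  // at most a week's worth accrues
          private double banked = DAILY_ALLOWANCE;

          // Called once per real-time day for every character, online or not.
          void grantDailyAllowance() {
              banked = Math.min(MAX_BANKED, banked + DAILY_ALLOWANCE);
          }

          // Progress earned in play is paid out of the bank, so unused days are not wasted
          // and there is no pressure to log in daily just to avoid losing the allowance.
          double spend(double earned) {
              double granted = Math.min(earned, banked);
              banked -= granted;
              return granted;
          }
      }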

    Read the article

  • Precise definition of programming paradigm

    - by Kazark
    Wikipedia defines programming paradigm thus: a fundamental style of computer programming, which is echoed in the descriptive text of the paradigms tag on this site. I find this a disappointing definition. Anyone who knows the words programming and paradigm could do about that well without knowing anything else about it. There are many styles of computer programming at many levels of abstraction; within any given programming paradigm, multiple styles are possible. For example, Bob Martin says in Clean Code (13), Consider this book a description of the Object Mentor School of Clean Code. The techniques and teachings within are the way that we practice our art. We are willing to claim that if you follow these teachings, you will enjoy the benefits that we have enjoyed, and you will learn to write code that is clean and professional. But don't make the mistake of thinking that we are somehow "right" in any absolute sense. Thus Bob Martin is not claiming to have the correct style of Object-Oriented programming, even though he, if anyone, might have some claim to doing so. But even within his school of programming, we might have different styles of formatting the code (K&R, etc). There are many styles of programming at many levels. So how can we define programming paradigm rigorously, to distinguish it from other categories of programming styles? "Fundamental" is somewhat helpful, but not specific. How can we define the phrase in a way that will communicate more than the separate meanings of each of the two words—in other words, how can we define it in a way that will provide additional meaning for someone who speaks English but isn't familiar with a variety of paradigms?

    Read the article

  • Java developer webcasts for customers and partners

    - by Jürgen Kress
    Accelerate Your Development with Oracle WebLogic Suite. Many organisations are reducing travel, conference, and training budgets for their developers without any change to the results expected of those developers. So how can you keep up with the latest developments? By receiving training, delivered free of charge, at your desk! Join us during February and March for a series of online events designed and run by the development team at Oracle. Learn how Oracle WebLogic Suite enables a whole new level of productivity for enterprise developers. Virtual Developer Day – 10th February: Starting with our Virtual Developer Day on 10th February, join us for a blend of hands-on labs, live chat and presentations covering the latest on WebLogic, Java EE 6 and the programming tenets that have made it a true platform breakthrough. Weekly WebLogic Webcasts from 17th February to 17th March: Afterwards, join us every week from 17th February to 17th March for our weekly one-hour webcasts where we will show you how to build an application from the ground up using Java and Java EE technologies. Presented by the engineering team for WebLogic, these webcasts will be of great value to developers and architects, not just those already using WebLogic. For registration, full session abstracts and the schedule, please click here. Don't miss out! Register now to join our virtual events and keep up with all the latest developments. Find out more and register now. For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required).

    Read the article

  • Game editor integration with the engine?

    - by Daniel
    What I am trying to figure out is: what is the most effective way to integrate the editor (level, effects, model, etc.) with the engine? Now, the first thing I thought of would be to make the game engine (*) extremely modular. Take the example of game states: you could have multiple game states that all have their own update() and draw() methods, among others, and each game state class would inherit from a base GameState class. This allows for a more modular approach, and a useful one at that. Now, would the most efficient approach be to implement the editor along with the modular engine, or to create two different designs for the game and the editor? I thought I could take the game state example and extend it to window states, and it could well be used for a lot more of the engine's systems. Is there a better implementation of this design (game states) for use in other systems in the engine? *: Now I know the term game engine is sort of ill-defined and misused in many situations. What I am referring to as the "game engine" is, in short, the combination of the systems that the game must interact with. Also, this is more of a theory / design question than an implementation one. Even though both mix, I'd rather have a more general idea of how the editor can be built in an efficient way while still using the same engine code as the game uses. Thanks, Daniel. P.S. If you need more clarification or extra bits just leave a comment.
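    As a concrete illustration of the game-state idea, and of the editor simply being another state running on the same engine loop, here is a minimal Java sketch. All class names are hypothetical; it shows one possible arrangement, not the only one.
      import java.util.ArrayDeque;
      import java.util.Deque;

      abstract class GameState {
          abstract void update(double dt);   // advance this state by dt seconds
          abstract void draw();              // render this state's view
      }

      class PlayState extends GameState {
          void update(double dt) { /* run gameplay systems */ }
          void draw()            { /* render the world through the game camera */ }
      }

      class EditorState extends GameState {
          void update(double dt) { /* apply edits; no simulation */ }
          void draw()            { /* render the same world plus gizmos and grids */ }
      }

      class StateStack {
          private final Deque<GameState> states = new ArrayDeque<>();
          void push(GameState s) { states.push(s); }
          void pop()             { states.pop(); }
          // The engine loop only ever talks to the top state, so switching between
          // playing and editing is a push/pop rather than a separate application.
          void update(double dt) { if (!states.isEmpty()) states.peek().update(dt); }
          void draw()            { if (!states.isEmpty()) states.peek().draw(); }
      }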

    Read the article

  • Parallel Classloading Revisited: Fully Concurrent Loading

    - by davidholmes
    Java 7 introduced support for parallel classloading. A description of that project and its goals can be found here: http://openjdk.java.net/groups/core-libs/ClassLoaderProposal.html The solution for parallel classloading was to add to each class loader a ConcurrentHashMap, referenced through a new field, parallelLockMap. This contains a mapping from class names to Objects to use as a classloading lock for that class name. This was then used in the following way:
      protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
          synchronized (getClassLoadingLock(name)) {
              // First, check if the class has already been loaded
              Class<?> c = findLoadedClass(name);
              if (c == null) {
                  long t0 = System.nanoTime();
                  try {
                      if (parent != null) {
                          c = parent.loadClass(name, false);
                      } else {
                          c = findBootstrapClassOrNull(name);
                      }
                  } catch (ClassNotFoundException e) {
                      // ClassNotFoundException thrown if class not found
                      // from the non-null parent class loader
                  }
                  if (c == null) {
                      // If still not found, then invoke findClass in order
                      // to find the class.
                      long t1 = System.nanoTime();
                      c = findClass(name);
                      // this is the defining class loader; record the stats
                      sun.misc.PerfCounter.getParentDelegationTime().addTime(t1 - t0);
                      sun.misc.PerfCounter.getFindClassTime().addElapsedTimeFrom(t1);
                      sun.misc.PerfCounter.getFindClasses().increment();
                  }
              }
              if (resolve) {
                  resolveClass(c);
              }
              return c;
          }
      }
    Where getClassLoadingLock simply does:
      protected Object getClassLoadingLock(String className) {
          Object lock = this;
          if (parallelLockMap != null) {
              Object newLock = new Object();
              lock = parallelLockMap.putIfAbsent(className, newLock);
              if (lock == null) {
                  lock = newLock;
              }
          }
          return lock;
      }
    This approach is very inefficient in terms of the space used per map and the number of maps. First, there is a map per classloader. As per the code above, under normal delegation the current classloader creates and acquires a lock for the given class, checks if it is already loaded, then asks its parent to load it; the parent in turn creates another lock in its own map, checks if the class is already loaded and then delegates to its parent, and so on till the boot loader is invoked, for which there is no map and no lock. So even in the simplest of applications, you will have two maps (in the system and extensions loaders) for every class that has to be loaded transitively from the application's main class. If you knew beforehand which loader would actually load the class, the locking would only need to be performed in that loader. As it stands the locking is completely unnecessary for all classes loaded by the boot loader. Secondly, once loading has completed and findClass has returned the class, the lock and the map entry are completely unnecessary. But as it stands, the lock objects and their associated entries are never removed from the map. It is worth understanding exactly what the locking is intended to achieve, as this will help us understand potential remedies to the above inefficiencies. Given this is the support for parallel classloading, the class loader itself is unlikely to need to guard against concurrent load attempts - and if that were not the case it is likely that the classloader would need a different means to protect itself rather than a lock per class.
    Ultimately when a class file is located and the class has to be loaded, defineClass is called, which calls into the VM - the VM does not require any locking at the Java level and uses its own mutexes for guarding its internal data structures (such as the system dictionary). The classloader locking is primarily needed to address the following situation: if two threads attempt to load the same class, one will initiate the request through the appropriate loader and eventually cause defineClass to be invoked. Meanwhile the second attempt will block trying to acquire the lock. Once the class is loaded the first thread will release the lock, allowing the second to acquire it. The second thread then sees that the class has now been loaded and will return that class. Neither thread can tell which did the loading and they both continue successfully. Consider if no lock was acquired in the classloader. Both threads will eventually locate the file for the class, read in the bytecodes and call defineClass to actually load the class. In this case the first to call defineClass will succeed, while the second will encounter an exception due to an attempted redefinition of an existing class. It is solely for this error condition that the lock has to be used. (Note that parallel capable classloaders should not need to be doing old deadlock-avoidance tricks like doing a wait() on the lock object!) There are a number of obvious things we can try to solve this problem, and they basically take three forms:
      1. Remove the need for locking. This might be achieved by having a new version of defineClass which acts like defineClassIfNotPresent - simply returning an existing Class rather than triggering an exception.
      2. Increase the coarseness of locking to reduce the number of lock objects and/or maps. For example, using a single shared lockMap instead of a per-loader lockMap.
      3. Reduce the lifetime of lock objects so that entries are removed from the map when no longer needed (e.g. remove after loading, use weak references to the lock objects and clean up the map periodically).
    There are pros and cons to each of these approaches. Unfortunately a significant "con" is that the API introduced in Java 7 to support parallel classloading has essentially mandated that these locks do in fact exist, and that they are accessible to the application code (indirectly through the classloader if it exposes them - which a custom loader might do - and regardless they are accessible to custom classloaders). So while we can reason that we could do parallel classloading with no locking, we cannot implement this without breaking the specification for parallel classloading that was put in place for Java 7. Similarly we might reason that we can remove a mapping (and the lock object) because the class is already loaded, but this would again violate the specification, because it can be reasoned that the following assertion should hold true:
      Object lock1 = loader.getClassLoadingLock(name);
      loader.loadClass(name);
      Object lock2 = loader.getClassLoadingLock(name);
      assert lock1 == lock2;
    Without modifying the specification, or at least doing some creative wordsmithing on it, options 1 and 3 are precluded.
    Even then there are caveats: for example, if findLoadedClass is not atomic with respect to defineClass, then you can have concurrent calls to findLoadedClass from different threads and that could be expensive (this is also an argument against moving findLoadedClass outside the locked region - it may speed up the common case where the class is already loaded, but the cost of re-executing after acquiring the lock could be prohibitive). Even option 2 might need some wordsmithing on the specification, because the specification for getClassLoadingLock states "returns a dedicated object associated with the specified class name". The question is, what does "dedicated" mean here? Does it mean unique in the sense that the returned object is only associated with the given class in the current loader? Or can the object actually guard loading of multiple classes, possibly across different class loaders? So it seems that changing the specification will be inevitable if we wish to do something here. In which case let's go for something that more cleanly defines what we want to be doing: fully concurrent class-loading. Note: defineClassIfNotPresent is already implemented in the VM as find_or_define_class. It is only used if the AllowParallelDefineClass flag is set. This gives us an easy hook into existing VM mechanics. Proposal: Fully Concurrent ClassLoaders. The proposal is that we expand on the notion of a parallel capable class loader and define a "fully concurrent parallel capable class loader", or fully concurrent loader for short. A fully concurrent loader uses no synchronization in loadClass and the VM uses the "parallel define class" mechanism. For a fully concurrent loader getClassLoadingLock() can return null (or perhaps not - it doesn't matter as we won't use the result anyway). At present we have not made any changes to this method. All the parallel capable JDK classloaders become fully concurrent loaders. This doesn't require any code re-design as none of the mechanisms implemented rely on the per-name locking provided by the parallelLockMap. This seems to give us a path to remove all locking at the Java level during classloading, while retaining full compatibility with Java 7 parallel capable loaders. Fully concurrent loaders will still encounter the performance penalty associated with concurrent attempts to find and prepare a class's bytecode for definition by the VM. What this penalty is depends on the number of concurrent load attempts possible (a function of the number of threads and the application logic, and dependent on the number of processors), and the costs associated with finding and preparing the bytecodes. This obviously has to be measured across a range of applications. Preliminary webrevs: http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.hotspot/ http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.jdk/ Please direct all comments to the mailing list [email protected].
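    To make the proposal concrete, here is a minimal sketch (not the actual JDK code, and not part of the webrevs above) of the shape loadClass could take in a fully concurrent loader. The race between two threads defining the same class is resolved inside the VM by an idempotent define (find_or_define_class, exposed here as a hypothetical defineClassIfNotPresent), so both threads get the same Class object back; the bytecode-reading helper is likewise hypothetical.
      public abstract class FullyConcurrentClassLoader extends ClassLoader {

          protected FullyConcurrentClassLoader(ClassLoader parent) {
              super(parent);
          }

          // Hypothetical helpers: locate the bytecode and hand it to an idempotent define.
          protected abstract byte[] readClassBytes(String name) throws ClassNotFoundException;
          protected abstract Class<?> defineClassIfNotPresent(String name, byte[] bytes);

          @Override
          protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
              // No synchronized block and no per-name lock entry.
              Class<?> c = findLoadedClass(name);
              if (c == null) {
                  try {
                      ClassLoader parent = getParent();
                      if (parent != null) {
                          c = parent.loadClass(name);
                      }
                  } catch (ClassNotFoundException e) {
                      // fall through and try to define the class ourselves
                  }
                  if (c == null) {
                      byte[] bytes = readClassBytes(name);
                      // If another thread defined the class first, this returns that Class
                      // instead of throwing a redefinition error.
                      c = defineClassIfNotPresent(name, bytes);
                  }
              }
              if (resolve) {
                  resolveClass(c);
              }
              return c;
          }
      }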

    Read the article

  • ALERT: Error Processing US Wage Attachment Elements In Payroll Run After RUP Patches

    - by LuciaC
    Customers who have run the Upgrade Wage Attachments process after applying the 2012 RUP are reporting errors similar to those listed below when either running a quickpay or processing a payroll for employee(s) with involuntary deductions:
      - HR_51118_HRPROC_ERR_ON_ASG ASGNO 1115 APP-PAY-51118: Error was encountered when processing assignment 1115
      - HR_51119_HRPROC_ERR_OCC_ON_ET ETNAME: Garnishment 3 APP-PAY-51119: Error was encountered when processing Element Type Garnishment 3
      - HR_6881_HRPROC_ORA_ERR SQLERRMC ORA-01403: No data found SQL_NO 520 TABLE_NAME pay_input_values_f APP-PAY-06881: Error ORA-01403: no data found has occured in table pay_input_values_f at location 520
    This issue was logged in Bug 14679161 - QUICK PAY ERROR AFTER RUP (2012) AND WAGE ATTACHMENT UPGRADE APP-PAY-06881. The following one-off patches have been released to My Oracle Support to resolve this issue*:
      - 11i - Patch 14679161
      - 12.0 - Patch 14849394:R12.PAY.A
      - 12.1 - Patch 14849394:R12.PAY.B
    * IMPORTANT: Whether and when customers have run the Wage Attachment upgrade process determines the appropriate action to take. Any customer who is encountering the above error and/or has run the Wage Attachment upgrade process AFTER applying the 2012 RUP (applicable to their release level) should log a Service Request with Oracle Support to receive assistance on the necessary steps to take to resolve the problem BEFORE applying the above patch. Any customer who has not yet run the Wage Attachment upgrade process (either before or after applying the 2012 RUP) should follow the action plan documented in the patch readme. Customers who have already run the Wage Attachment upgrade process BEFORE applying the 2012 RUP should apply the patch (applicable to their release) listed above. Be sure to run any post-install processes, such as the data install utility and HR global driver. See the patch readme for full details. Please consult Note 404478.1: Americas (US, CA, MX) HCM High Priority Alert for the latest Alert status.

    Read the article

  • Interpreting Others' Source Code

    - by Maxpm
    Note: I am aware of this question. This question is a bit more specific and in-depth, however, focusing on reading the actual code rather than debugging it or asking the author. As a student in an introductory-level computer science class, my friends occasionally ask me to help them with their assignments. Programming is something I'm very proud of, so I'm always happy to oblige. However, I usually have difficulty interpreting their source code. Sometimes this is due to a strange or inconsistent style, sometimes it's due to strange design requirements specified in the assignment, and sometimes it's just due to my stupidity. In any case, I end up looking like an idiot staring at the screen for several minutes saying "Uh..." I usually check for the common errors first - missing semicolons or parentheses, using commas instead of extractor operators, etc. The trouble comes when that fails. I often can't step through with a debugger because it's a syntax error, and I often can't ask the author because he/she him/herself doesn't understand the design decisions. How do you typically read the source code of others? Do you read through the code from top-down, or do you follow each function as it's called? How do you know when to say "It's time to refactor?"

    Read the article

  • Efficient mapping layout in 2D side-scroller, and collisions between character and the world

    - by Jack
    I haven't touched Visual Studio for a couple of months now, but I was playing a game from the '90s today and had an epiphany: I was looking for something I didn't need, and I wasn't using what I knew correctly. One of those realizations was collision, so let me tell you a bit about the project I was working on. The project's graphics look like Mario or Dangerous Dave, etc. - you get the idea, old-school pixels. So anyway, I remember trying to think of something other than an AABB for the character's shape, but I couldn't think of anything. Perhaps I could get a suggestion for this? Another thing is the world - I don't want it to be just a linear world; I want mountains, etc. My idea is to use triangles, and I have no idea yet what to do if I want just part of the cube, say 3/4 or 2/4 or whatever. Hard-coding such things seems inefficient. P.S. I am not looking at the precision level offered by Box2D. Actually I remember trying to implement it at first, but I failed as my understanding of C++ wasn't advanced enough, as will be mentioned below. P.P.S. I am programming in C++, and I haven't done it for a couple of months now. I have no means of testing it either, as my PC is broken down, and this one can barely run games from the late '90s, not to speak of a compiler or a program with inefficient resource management... I am also not an expert (obviously); I don't even know if I can consider myself an average programmer. In short, I am simply curious about my thoughts and my past experience when programming the game. I may come back to it when my PC is fixed; I'm already filing a note about these things.
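    For reference, a common side-scroller approach keeps the character as an AABB and treats sloped ground as tiles whose floor height varies linearly across the tile. The sketch below is in Java rather than C++ and uses hypothetical names; it only shows the shape of the data and the tests, under the assumption that y grows downward and positions are in pixels.
      class AABB {
          double x, y, w, h;                      // top-left corner plus size

          boolean overlaps(AABB o) {
              return x < o.x + o.w && x + w > o.x
                  && y < o.y + o.h && y + h > o.y;
          }
      }

      class SlopeTile {
          double x, y, size;                      // tile origin (top-left) and edge length
          double leftHeight, rightHeight;         // floor height above the tile's bottom edge

          // Floor y-coordinate under the character's horizontal centre; clamping keeps the
          // interpolation inside the tile so a fast-moving character is still snapped sensibly.
          double floorYAt(double centerX) {
              double t = Math.min(1.0, Math.max(0.0, (centerX - x) / size));
              return y + size - (leftHeight + t * (rightHeight - leftHeight));
          }
      }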

    Read the article

  • Oracle named a Leader by Forrester in Enterprise Business Intelligence Platforms

    - by Paulo Folgado
    According to an October 2010 report from independent analyst firm Forrester Research, Inc., Oracle is a leader in enterprise business intelligence (BI) platforms. Forrester Research defines BI as a set of methodologies, processes, architectures, and technologies that transform raw data into meaningful and useful information, which can then be used to enable more effective strategic, tactical, and operational insights and decision-making. Written by Forrester vice president and principal analyst Boris Evelson, The Forrester Wave: Enterprise Business Intelligence Platforms, Q4 2010 states that "Oracle has built new metadata-level [Oracle Business Intelligence Enterprise Edition 11g] integration with Oracle Fusion Middleware and Oracle Fusion Applications and continues to differentiate with its versatile ROLAP engine." The report goes on, "And in addition to closing some gaps it had in 10.x versions such as lack of RIA functionality, [the Oracle Business Intelligence Enterprise Edition 11g] actually leapfrogs the competition with the Common Enterprise Information Model (CEIM)--including the ability to define actions and execute processes right from BI metadata across BI and ERP applications." "We're pleased that the Forrester Wave recognizes Oracle Business Intelligence as a leading enterprise BI platform," said Paul Rodwick, vice president of product management, Oracle Business Intelligence. Key Innovations in Oracle Business Intelligence 11g: Released in August 2010, Oracle Business Intelligence 11g represents the industry's most complete, integrated, and scalable suite of BI products. Encompassing thousands of new features and enhancements, the latest release offers three key areas of innovation:
      - A unified environment. The industry's first unified environment for accessing and analyzing data across relational, OLAP, and XML data sources.
      - Enhanced usability. A new, integrated scorecard application, plus innovations in reporting, visualization, search, and collaboration.
      - Enhanced performance, scalability, and security. Deeper integration with Oracle Enterprise Manager 11g and other components of Oracle Fusion Middleware provides lower management costs and increased performance, scalability, and security.
    Read the entire Forrester Wave report.

    Read the article

  • Why is testing MVC Views frowned upon?

    - by Peter Bernier
    I'm currently setting the groundwork for an ASP.Net MVC application and I'm looking into what sort of unit tests I should be prepared to write. I've seen in multiple places people essentially saying 'don't bother testing your views; there's no logic, it's trivial, and it will be covered by an integration test'. I don't understand how this has become the accepted wisdom. Integration tests serve an entirely different purpose than unit tests. If I break something, I don't want to know a half-hour later when my integration tests break; I want to know immediately. Sample scenario: let's say we're dealing with a standard CRUD app with a Customer entity. The customer has a name and an address. At each level of testing, I want to verify that the Customer retrieval logic gets both the name and the address properly. To unit-test the repository, I write an integration test to hit the database. To unit-test the business rules, I mock out the repository, feed the business rules appropriate data, and verify my expected results are returned. What I'd like to do: to unit-test the UI, I mock out the business rules, set up my expected customer instance, render the view, and verify that the view contains the appropriate values for the instance I specified. What I'm stuck doing: to unit-test the UI, I write an integration test, set up an appropriate login, create the required data in the database, open a browser, navigate to the customer, and verify the resulting page contains the appropriate values for the instance I specified. I realize that there is overlap between the two scenarios discussed above, but the key difference is the time and effort required to set up and execute the tests. If I (or another dev) remove the address field from the view, I don't want to wait for the integration test to discover this. I want it discovered and flagged by a unit test that gets run multiple times daily. I get the feeling that I'm just not grasping some key concept. Can someone explain why wanting immediate test feedback on the validity of an MVC view is a bad thing? (Or if not bad, then not the expected way to get said feedback?)

    Read the article

  • Issue tracking multiple domains with Google Analytics

    - by user359650
    I have 2 domains, mydomain.com and mydomain.net, which I'm trying to track with the same GA code. Here are the options I turned on:
      - Subdomains of mydomain: ON. Examples: www.mydomain.com -and- apps.mydomain.com -and- store.mydomain.com
      - Multiple top-level domains of mydomain: ON. Examples: mydomain.uk -and- mydomain.cn -and- mydomain.fr
    Which gave me the following code:
      _gaq.push(['_setAccount', 'UA-123456789-1']);
      _gaq.push(['_setDomainName', 'mydomain.com']);
      _gaq.push(['_setAllowLinker', true]);
      _gaq.push(['_trackPageview']);
    In this help page I read that _setDomainName must be changed for each domain, which I did:
      - if you go to mydomain.net you get _gaq.push(['_setDomainName', 'mydomain.net']);
      - if you go to mydomain.com you get _gaq.push(['_setDomainName', 'mydomain.com']);
    When I generate traffic on both mydomain.com and mydomain.net and watch the GA push requests with Firebug, I can see requests generated for both domains and the parameter called utmhn has the proper domain value (which matches that of _setDomainName and the browser address bar). However, when I monitor the real-time statistics under Home->Real-Time->Overview I see pageviews for mydomain.net BUT NOT for mydomain.com :( What am I missing to properly track both domains? PS: in the help page I mentioned they talk about setting up cross links, which I didn't do for now, as my understanding is that it shouldn't be needed for what I'm trying to do. Also I want to mention that I do not have any tracking code for either of these 2 domains other than the one I mentioned.

    Read the article
