Search Results

Search found 9254 results on 371 pages for 'approach'.


  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.

    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

        -------------------------------------------------------------------------------
        | Id  | Operation                    | Name                   | Bytes | Cost |
        -------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT             |                        |  108K |  939 |
        |   1 |  SORT ORDER BY               |                        |  108K |  939 |
        |   2 |   NESTED LOOPS OUTER         |                        |  108K |  938 |
        |*  3 |    HASH JOIN RIGHT OUTER     |                        |  103K |  762 |
        |   4 |     VIEW                     | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
        |* 20 |     HASH JOIN RIGHT OUTER    |                        | 73472 |  759 |
        |  21 |      VIEW                    | ALL_EXTERNAL_TABLES    |  2097 |    3 |
        |* 34 |      HASH JOIN RIGHT OUTER   |                        | 39920 |  755 |
        |  35 |       VIEW                   | ALL_MVIEWS             |    51 |    7 |
        |  58 |       NESTED LOOPS OUTER     |                        | 39104 |  748 |
        |  59 |        VIEW                  | ALL_TABLES             |  6704 |  668 |
        |  89 |        VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2025 |    5 |
        | 106 |      VIEW                    | ALL_PART_TABLES        |   277 |   11 |
        -------------------------------------------------------------------------------

    And the same query on 9i:

        -------------------------------------------------------------------------------
        | Id  | Operation                   | Name                   | Bytes | Cost |
        -------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT            |                        |   16P |  55G |
        |   1 |  SORT ORDER BY              |                        |   16P |  55G |
        |   2 |   NESTED LOOPS OUTER        |                        |   16P | 862M |
        |   3 |    NESTED LOOPS OUTER       |                        | 5251G | 992K |
        |   4 |     NESTED LOOPS OUTER      |                        | 4243M | 2578 |
        |   5 |      NESTED LOOPS OUTER     |                        | 2669K | 1440 |
        |*  6 |       HASH JOIN OUTER       |                        |  398K |  302 |
        |   7 |        VIEW                 | ALL_TABLES             |  342K |  276 |
        |  29 |        VIEW                 | ALL_MVIEWS             |    51 |   20 |
        |* 50 |       VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2043 |      |
        |* 66 |      VIEW PUSHED PREDICATE  | ALL_EXTERNAL_TABLES    | 1777K |      |
        |* 80 |     VIEW PUSHED PREDICATE   | ALL_EXTERNAL_LOCATIONS | 1744K |      |
        |* 96 |    VIEW                     | ALL_PART_TABLES        |  852K |      |
        -------------------------------------------------------------------------------

    Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
    On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

        -------------------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        -------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        | 7587K | 3704 |
        |   1 |  SORT ORDER BY        |                        | 7587K | 3704 |
        |*  2 |   HASH JOIN OUTER     |                        | 7587K |  822 |
        |*  3 |    HASH JOIN OUTER    |                        | 5262K |  616 |
        |*  4 |     HASH JOIN OUTER   |                        | 2980K |  465 |
        |*  5 |      HASH JOIN OUTER  |                        |  710K |  432 |
        |*  6 |       HASH JOIN OUTER |                        |  398K |  302 |
        |   7 |        VIEW           | ALL_TABLES             |  342K |  276 |
        |  29 |        VIEW           | ALL_MVIEWS             |    51 |   20 |
        |  50 |       VIEW            | ALL_PART_TABLES        |  852K |  104 |
        |  78 |      VIEW             | ALL_TAB_COMMENTS       |  2043 |   14 |
        |  93 |     VIEW              | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
        | 106 |    VIEW               | ALL_EXTERNAL_TABLES    | 1777K |   28 |
        -------------------------------------------------------------------------------

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input.

    To get some ideas, we asked some Oracle performance specialists whether they had any tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as requiring a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed.

    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements specific to the query pattern we were using, we read all the tables individually and do the hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data that isn't part of the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for a schema with 10,000 tables, that means an extra ~1.4MB of memory used during population). Next: fun with the 9i dictionary views.
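    For readers curious what the client-side join looks like in miniature, here is a rough C# sketch of the idea (hypothetical row and column names; as noted above, the real implementation uses far more memory-efficient structures than plain dictionaries):

        using System;
        using System.Collections.Generic;

        // A toy version of the client-side left outer hash join described above.
        // This is NOT the actual SCfO implementation - row shapes and column names
        // are hypothetical, and the real code uses much more compact structures
        // than dictionaries-of-dictionaries.
        class ClientHashJoin
        {
            // One hash table, keyed on the columns every join shares (owner + name),
            // built once from the base table and probed by every subsidiary table.
            readonly Dictionary<(string Owner, string Name), Dictionary<string, object>> _byKey
                = new Dictionary<(string, string), Dictionary<string, object>>();

            // 1. Read the base table (e.g. ALL_TABLES) and hash it exactly once.
            public void LoadBase(IEnumerable<Dictionary<string, object>> baseRows)
            {
                foreach (var row in baseRows)
                    _byKey[((string)row["OWNER"], (string)row["TABLE_NAME"])] = row;
            }

            // 2. For each subsidiary table (e.g. ALL_PART_TABLES), probe the same
            //    hash table instead of re-hashing the base table for every join.
            public void LeftJoin(string prefix, IEnumerable<Dictionary<string, object>> subRows)
            {
                foreach (var row in subRows)
                {
                    if (_byKey.TryGetValue(((string)row["OWNER"], (string)row["TABLE_NAME"]), out var baseRow))
                        foreach (var col in row)
                            baseRow[prefix + col.Key] = col.Value; // fill in the gaps
                    // unmatched subsidiary rows simply drop out, as in a LEFT JOIN
                }
            }

            static void Main()
            {
                var join = new ClientHashJoin();
                join.LoadBase(new[]
                {
                    new Dictionary<string, object> { ["OWNER"] = "HR", ["TABLE_NAME"] = "EMPLOYEES" }
                });
                join.LeftJoin("PART_", new[]
                {
                    new Dictionary<string, object>
                        { ["OWNER"] = "HR", ["TABLE_NAME"] = "EMPLOYEES", ["PARTITIONING_TYPE"] = "RANGE" }
                });
            }
        }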


  • How to solve linear recurrences involving two functions?

    - by Aditya Bahuguna
    Actually I came across a question in dynamic programming where we need to find the number of ways to tile a 2 x N area with tiles of given dimensions. Here is the problem statement. After a bit of recurrence solving I came out with these: F(n) = F(n-1) + F(n-2) + 2G(n-1), and G(n) = G(n-1) + F(n-1). I know how to solve the LR model where only one function is involved. For large N, as is the case in the above problem, we can do matrix exponentiation and achieve O(k^3 log(N)) time, where k is the minimum number such that F(n) does not depend on F(n-m) for any m > k. The method of solving a linear recurrence with matrix exponentiation is given in that blog. Now, for an LR involving two functions, can anyone suggest an approach feasible enough for large N?
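    One standard way to handle this, for what it's worth, is to fold both functions into a single state vector and exponentiate one matrix: with the state [F(n), F(n-1), G(n)], the recurrences above give [F(n+1), F(n), G(n+1)] = M * [F(n), F(n-1), G(n)] with M = [[1,1,2],[1,0,0],[1,0,1]]. A C# sketch under assumed base cases (the real F(0), F(1), G(1) depend on the tile set in the original problem):

        using System;

        class CoupledRecurrence
        {
            const long Mod = 1_000_000_007; // typical contest modulus; adjust as needed

            static long[,] Multiply(long[,] a, long[,] b)
            {
                int n = a.GetLength(0);
                var c = new long[n, n];
                for (int i = 0; i < n; i++)
                    for (int k = 0; k < n; k++)
                        for (int j = 0; j < n; j++)
                            c[i, j] = (c[i, j] + a[i, k] * b[k, j]) % Mod;
                return c;
            }

            static long[,] Power(long[,] m, long e)
            {
                int n = m.GetLength(0);
                var result = new long[n, n];
                for (int i = 0; i < n; i++) result[i, i] = 1; // identity matrix
                while (e > 0)
                {
                    if ((e & 1) == 1) result = Multiply(result, m);
                    m = Multiply(m, m);
                    e >>= 1;
                }
                return result;
            }

            // F(n) = F(n-1) + F(n-2) + 2G(n-1), G(n) = G(n-1) + F(n-1)
            // State vector: [F(n), F(n-1), G(n)]
            static long F(long n, long f1, long f0, long g1)
            {
                if (n == 0) return f0;
                if (n == 1) return f1;
                long[,] m = { { 1, 1, 2 }, { 1, 0, 0 }, { 1, 0, 1 } };
                var p = Power(m, n - 1); // advance the state n-1 steps from n = 1
                // [F(n), F(n-1), G(n)] = p * [F(1), F(0), G(1)]
                return (p[0, 0] * f1 + p[0, 1] * f0 + p[0, 2] * g1) % Mod;
            }

            static void Main()
            {
                // Hypothetical base cases, e.g. F(0)=1, F(1)=1, G(1)=1:
                Console.WriteLine(F(10, 1, 1, 1));
            }
        }

    The same trick works for any fixed number of mutually recursive linear functions; the matrix just grows, and the cost stays O(k^3 log N) for a k x k state.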


  • How can you predict the time it will take for two processes in two different machines in a cluster to communicate?

    - by Dokkat
    I am trying to develop a computing application which needs a lot of memory (500 GB). Buying a single machine for that is overly expensive. I can, though, buy ~100 small instances on Digital Ocean or similar, divide the memory into blocks, and use TCP to emulate shared memory between the instances. Now, my question is: how can I measure/predict the time it will take for two processes on two different machines like that to share information, in comparison to IPC and shared memory? Are there rules of thumb? I don't want exact values, but knowing more or less how much faster one is would be very helpful in assessing the feasibility of this approach.
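    One way to ground the comparison is to measure it rather than predict it. Below is a minimal C# sketch (peer address, port and message size are placeholders) that times small-message round trips over TCP. As a rule of thumb, local memory access is in the nanosecond range while a same-datacenter TCP round trip is typically 0.1-1 ms, so shared memory is on the order of thousands to tens of thousands of times faster per access:

        using System;
        using System.Diagnostics;
        using System.Net;
        using System.Net.Sockets;
        using System.Threading.Tasks;

        // Rough round-trip timer: run with "server" on one instance, no args on
        // the other. The peer IP, port and message size are placeholders.
        class RoundTrip
        {
            const int Port = 9000;

            static async Task Main(string[] args)
            {
                if (args.Length > 0 && args[0] == "server")
                {
                    var listener = new TcpListener(IPAddress.Any, Port);
                    listener.Start();
                    using (var peer = await listener.AcceptTcpClientAsync())
                    {
                        var stream = peer.GetStream();
                        var buf = new byte[8];
                        int n;
                        while ((n = await stream.ReadAsync(buf, 0, buf.Length)) > 0)
                            await stream.WriteAsync(buf, 0, n); // echo straight back
                    }
                }
                else
                {
                    using (var client = new TcpClient())
                    {
                        await client.ConnectAsync("10.0.0.2", Port); // placeholder peer
                        client.NoDelay = true; // don't let Nagle batch tiny messages
                        var stream = client.GetStream();
                        var buf = new byte[8];
                        const int iterations = 1000;
                        var sw = Stopwatch.StartNew();
                        for (int i = 0; i < iterations; i++)
                        {
                            await stream.WriteAsync(buf, 0, buf.Length);
                            await stream.ReadAsync(buf, 0, buf.Length);
                        }
                        sw.Stop();
                        Console.WriteLine(
                            $"avg round trip: {sw.Elapsed.TotalMilliseconds / iterations:F3} ms");
                    }
                }
            }
        }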


  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing stubbed integration tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggests the use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests.

    Stubs

    Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems:
    - In contract-first scenarios, the external system interface will have been defined, but the interface may not have been set up, or even developed yet, for the BizTalk developers to work with.
    - By the time you open the target location to see the data BizTalk has sent, it may have been swept away.
    - If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed, or it may be scheduled to be processed later.
    - Learning how to use the source/target systems, and investigating where things go wrong in those systems, will slow down the BizTalk development effort.
    - By the time the data is visible in a UI it may have undergone further transformations.
    - In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have "cleaned up" your data.
    - It is harder to write BizUnit tests that clean up the data/logs after each test run.
    - What if your B2B partners' source or target system cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT/UAT teams.
    - There may be licensing costs in setting up an instance of the external system.

    The stubs I like to use are generic stubs that can accept/return any message type. Usually I need to create one per protocol. They should be driven by BizUnit steps to validate the data received and select a response message (or an error response). Once built, they can be re-used for many integration tests and from project to project. I'm not saying that developers should never test against a real instance. Every so often, you still need to connect to real developer or test instances of the source and target endpoints/services. The interface developers may ask you to send them some data to see if everything still works, or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk.

    Tests

    Automated "stubbed integration tests" are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub, ensuring that all of the BizTalk components are configured together correctly to meet all the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged, but BizUnit provides by far the easiest way to test some component types (e.g. orchestrations). Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson delivers the following benefits (source: http://biztalkbddsample.codeplex.com - Video 1):
    - Requirements can be easily defined using Given/When/Then
    - Requirements are close to the code, so they are easier to manage as features and scenarios
    - Requirements are defined in domain language
    - The feature files can be used as part of the documentation
    - The documentation is accurate to the build of code and can be published with a release
    - The scenarios are effective at documenting the scenarios and are not over-excessive
    - The scenarios are maintained with the code
    - There's an abstraction between the intention and implementation of tests, making them easier to understand
    - The requirements drive the testing

    These same tests can also be used to drive load testing, as described here.

    If you don't do this...

    If you don't follow the above "stubbed integration tests" approach, the developer will need to manually trigger the tests. This has the following risks:
    - Developers are unlikely to check all the scenarios, and all the expected conditions, each time.
    - After the developer leaves, these manual test steps may be lost. What test scenarios are there? What test messages did they use for each scenario?
    - There is no mechanism to prove adequate test coverage.

    A test team may attempt to automate integration test scenarios in a test environment by triggering tests from a source system UI. If this is a replacement for BizUnit tests, then it carries the following risks:
    - It moves the tests downstream, so problems will be found later in the process.
    - Testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.
    - These automated tests may also get in the way of manual tests run on those environments.


  • Why does Zend discourage "floating functions"?

    - by kojiro
    Zend's Coding Standard Naming Convention says: 'Functions in the global scope (a.k.a. "floating functions") are permitted but discouraged in most cases. Consider wrapping these functions in a static class.' The common wisdom in Python says practically the opposite: 'Finally, use staticmethod sparingly! There are very few situations where static-methods are necessary in Python, and I've seen them used many times where a separate "top-level" function would have been clearer.' (Not only does the above StackOverflow answer warn against overuse of static methods, but more than one Python linter will warn the same.) Is this something that can be generalized across programming languages, and if so, why does Python differ so much from PHP? If it's not something that can be generalized, what is the basis for one approach or the other, and is there a way to immediately recognize in a language whether you should prefer bare functions or static methods?


  • if ('constant' == $variable) vs. if ($variable == 'constant')

    - by Tom Auger
    Lately, I've been working a lot in PHP, specifically within the WordPress framework. I'm noticing a lot of code in the form of:

        if ( 1 == $options['postlink'] )

    where I would have expected to see:

        if ( $options['postlink'] == 1 )

    Is this a convention found in certain languages/frameworks? Is there any reason the former approach is preferable to the latter (from a processing perspective, a parsing perspective, or even a human perspective)? Or is it merely a matter of taste? I have always thought it better, when performing a test, for the variable being tested to be on the left and the constant on the right. It seems to map better to the way we would ask the question in natural language: 'if the cake is chocolate' rather than 'if chocolate is the cake'.


  • Disadvantages of a fake phpMyAdmin honeypot that causes IP blacklisting and robots.txt disallow/exclusion of the honeypot?

    - by Tchalvak
    I'm trying to figure out whether I should set up a honeypot system with a fake phpMyAdmin (the site gets hits all the time from people spidering for insecurities in that app). My thought was to create a honeypot PHP script that would mimic a phpMyAdmin login, and then blacklist IPs that hit that URL (and aren't already whitelisted). I would then add the appropriate URLs to robots.txt, so that spiders that actually respect my robots.txt wouldn't be caught by the blacklist. Are there disadvantages to this approach? Do legit robots sometimes fail to respect robots.txt in certain circumstances? Are there any problems with this that I should consider in advance?
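    For concreteness, the robots.txt exclusion would be just a couple of lines; the path below is hypothetical and simply has to match wherever the fake login page is served:

        User-agent: *
        Disallow: /phpmyadmin/

    (One consideration worth remembering: robots.txt is itself publicly readable, so anything disallowed there is visible to humans as well as to well-behaved crawlers.)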


  • How to start fixing bugs in open source software

    - by suryak
    I am a student with good knowledge of C programming, and I would like to contribute to an open source project developed in C. I searched SourceForge and selected 7-Zip because it's a widely used one developed in C. I thought to start by fixing bugs (an approach suggested by many people on their websites) and went through a few of the open bugs, but I couldn't understand how to respond to them or how to start fixing them. Could you please explain how to approach this? I have even gone through some files in the source code I downloaded, but didn't understand anything. Please help me!


  • Real-Time Monitoring System using .NET [closed]

    - by sameer
    I need to develop an application which displays a dashboard where data fetched from various SQL databases on different servers is displayed. This needs to happen in near real time; we can have a refresh time of, say, 5 minutes. Here is my thinking - suggest changes if anything is wrong:

    1) Develop a Windows Service to accumulate the data from the various SQL Server instances.
    2) Persist those details into a SQL DB, from which the dashboard will be displayed on a web page.
    3) The fetch of data by the Windows Service will be triggered every x minutes.
    4) The SQL Server instance details will be stored in the SQL DB which the Windows Service will refer to.

    Does this approach make sense? Thanks.
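    As a sanity check of steps 1-3, the core of such a collector might look like the C# sketch below (connection strings, the metric query and the destination table are placeholders; a real Windows Service would host this logic in its OnStart/OnStop lifecycle):

        using System;
        using System.Data.SqlClient;
        using System.Timers;

        // Skeleton of the collector the Windows Service would host (step 1).
        // Connection strings, the metric query and the destination table are
        // placeholders - adjust to the real monitored instances and schema.
        class DashboardCollector
        {
            static readonly string[] MonitoredInstances =
            {
                "Server=server1;Database=AppDb;Integrated Security=true",
                "Server=server2;Database=AppDb;Integrated Security=true"
            };
            const string CentralDb = "Server=central;Database=Dashboard;Integrated Security=true";

            static void Main()
            {
                var timer = new Timer(TimeSpan.FromMinutes(5).TotalMilliseconds); // refresh interval
                timer.Elapsed += (s, e) => CollectOnce();
                timer.Start();
                CollectOnce();       // also run immediately on startup
                Console.ReadLine();  // in a real service this is the service lifetime
            }

            static void CollectOnce()
            {
                foreach (var source in MonitoredInstances)
                {
                    try
                    {
                        using (var conn = new SqlConnection(source))
                        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn)) // placeholder metric
                        {
                            conn.Open();
                            var value = (int)cmd.ExecuteScalar();
                            Persist(source, value);
                        }
                    }
                    catch (SqlException ex)
                    {
                        // One unreachable server shouldn't stop the whole sweep.
                        Console.Error.WriteLine($"{source}: {ex.Message}");
                    }
                }
            }

            static void Persist(string source, int value)
            {
                using (var conn = new SqlConnection(CentralDb))
                using (var cmd = new SqlCommand(
                    "INSERT INTO Snapshots (Source, Value, TakenAt) VALUES (@s, @v, @t)", conn))
                {
                    cmd.Parameters.AddWithValue("@s", source);
                    cmd.Parameters.AddWithValue("@v", value);
                    cmd.Parameters.AddWithValue("@t", DateTime.UtcNow);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }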


  • Navigation in a #WP7 application with MVVM Light

    - by Laurent Bugnion
    In MVVM applications, it can be a bit of a challenge to send instructions to the view (for example a page) from a viewmodel. Thankfully, we have good tools at our disposal to help with that. In his excellent series "MVVM Light Toolkit soup to nuts", Jesse Liberty proposes one approach using the MVVM Light messaging infrastructure. While this works fine, I would like to show here another approach using what I call a "view service", i.e. an abstracted service that is invoked from the viewmodel, and implemented on the view.

    Multiple kinds of view services

    In fact, I use view services quite often, and even started standardizing them for the Windows Phone 7 applications I work on. If there is interest, I will be happy to show other such view services, for example:
    - Animation services, responsible to start/stop animations on the view.
    - Dialog service, in charge of displaying messages to the user and gathering feedback.
    - Navigation service, in charge of navigating to a given page directly from the viewmodel.

    In this article, I will concentrate on the navigation service.

    The INavigationService interface

    In most WP7 apps, the navigation service is used in quite a straightforward way. We want to:
    - Navigate to a given URI.
    - Go back.
    - Be notified when a navigation is taking place, and be able to cancel.

    The INavigationService interface is quite simple indeed:

        public interface INavigationService
        {
            event NavigatingCancelEventHandler Navigating;
            void NavigateTo(Uri pageUri);
            void GoBack();
        }

    Obviously, this interface can be extended if necessary, but in most of the apps I worked on, I found that this covers my needs.

    The NavigationService class

    It is possible to nicely pack the navigation service into its own class. To do this, we need to remember that all the PhoneApplicationPage instances use the same instance of the navigation service, exposed through their NavigationService property. In fact, in a WP7 application, it is the main frame (RootFrame, of type PhoneApplicationFrame) that is responsible for this task. So, our implementation of the NavigationService class can leverage this. First the class will grab the PhoneApplicationFrame and store a reference to it. Also, it registers a handler for the Navigating event, and forwards the event to the listening viewmodels (if any). Then, the NavigateTo and the GoBack methods are implemented. They are quite simple, because they are in fact just a gateway to the PhoneApplicationFrame. The whole class is as follows:

        public class NavigationService : INavigationService
        {
            private PhoneApplicationFrame _mainFrame;

            public event NavigatingCancelEventHandler Navigating;

            public void NavigateTo(Uri pageUri)
            {
                if (EnsureMainFrame())
                {
                    _mainFrame.Navigate(pageUri);
                }
            }

            public void GoBack()
            {
                if (EnsureMainFrame()
                    && _mainFrame.CanGoBack)
                {
                    _mainFrame.GoBack();
                }
            }

            private bool EnsureMainFrame()
            {
                if (_mainFrame != null)
                {
                    return true;
                }

                _mainFrame = Application.Current.RootVisual as PhoneApplicationFrame;

                if (_mainFrame != null)
                {
                    // Could be null if the app runs inside a design tool
                    _mainFrame.Navigating += (s, e) =>
                    {
                        if (Navigating != null)
                        {
                            Navigating(s, e);
                        }
                    };

                    return true;
                }

                return false;
            }
        }

    Exposing URIs

    I find that it is a good practice to expose each page's URI as a constant. In MVVM Light applications, a good place to do that is the ViewModelLocator, which already acts like a central point of setup for the views and their viewmodels.
    Note that in some cases, it is necessary to expose the URL as a string, for instance when a query string needs to be passed to the view. So for example we could have:

        public static readonly Uri MainPageUri = new Uri("/MainPage.xaml", UriKind.Relative);
        public const string AnotherPageUrl = "/AnotherPage.xaml?param1={0}&param2={1}";

    Creating and using the NavigationService

    Normally, we only need one instance of the NavigationService class. In cases where you use an IOC container, it is easy to simply register a singleton instance. For example, I am using a modified version of a super simple IOC container, and so I can register the navigation service as follows:

        SimpleIoc.Register<INavigationService, NavigationService>();

    Then, it can be resolved where needed with:

        SimpleIoc.Resolve<INavigationService>();

    Or (more frequently), I simply declare a parameter on the viewmodel constructor of type INavigationService and let the IOC container do its magic and inject the instance of the NavigationService when the viewmodel is created. On supported platforms (for example Silverlight 4), it is also possible to use MEF. Or, of course, we can simply instantiate the NavigationService in the ViewModelLocator, and pass this instance as a parameter of the viewmodels' constructor, injected as a property, etc...

    Once the instance has been passed to the viewmodel, it can be used, for example with:

        NavigationService.NavigateTo(ViewModelLocator.ComparisonPageUri);

    Testing

    Thanks to the INavigationService interface, navigation can be mocked and tested when the viewmodel is put under unit test. Simply implement and inject a mock class, and assert that the methods are called as they should by the viewmodel.

    Conclusion

    As usual, there are multiple ways to code a solution answering your needs. I find that view services are a really neat way to delegate view-specific responsibilities such as animation, dialogs and of course navigation to other classes through an abstracted interface. In some cases, such as the NavigationService class exposed here, it is even possible to standardize the implementation and pack it in a class library for reuse. I hope that this sample is useful! Happy coding. Laurent
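    To make the testing paragraph concrete, one possible shape for such a mock is sketched below (my sketch, not code from the article):

        using System;
        using System.Windows.Navigation;

        // A possible mock for viewmodel unit tests - a sketch, not part of the
        // article's code. It records calls so the test can assert on them; the
        // Navigating event is unused here but required by the interface.
        public class MockNavigationService : INavigationService
        {
            public Uri LastNavigatedUri { get; private set; }
            public int GoBackCount { get; private set; }

            public event NavigatingCancelEventHandler Navigating;

            public void NavigateTo(Uri pageUri)
            {
                LastNavigatedUri = pageUri;
            }

            public void GoBack()
            {
                GoBackCount++;
            }
        }

    A test would inject this into the viewmodel under test and then assert, for example, that LastNavigatedUri matches the expected page URI after the relevant command executes.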


  • Is Domain Driven Design useful / productive for not so complex domains?

    - by Elijah
    When assessing a potential project at work, I suggested that it might be advantageous to use a domain-driven design approach to its object model. The project does not have an excessively complex domain, so my coworker threw this at me: it has been said that DDD is favorable in instances where there is a complex domain model ('...It applies whenever we are operating in a complex, intricate domain' - Eric Evans). What I'm lost on is: how do you define the complexity of a domain? Can it be defined by the number of aggregate roots in the domain model? Is the complexity of a domain in the interaction of objects? The domain that we are assessing is related to online publishing and content management.


  • Store VOD wmi data in a database directly or use CQRS?

    - by JD01
    I need to collect video-on-demand bandwidth usage every few minutes (or maybe every few seconds) and store this in a database, so users can produce graphs of bandwidth usage over a period of time (a few hours, days, weeks or even possibly months). The sort of data that will be stored includes the number of users watching videos, current server bandwidth (Mb/s), multicast bit rate, etc. I am wondering whether CQRS with event sourcing would be a good approach, as I could then rebuild my objects to create different projections (i.e. different graphs/reports etc.), but then again it seems like I am introducing complexity which might not be needed. Or would it be best to just put the data directly in a database (currently using Postgres) and query off that? Having thought about it, my table is a form of audit log anyway, so I don't think I need event sourcing at all. Any thoughts?
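    For scale, here is what the plain-table option might look like: an append-only samples table written via the Npgsql driver (the schema and column names are guesses for illustration):

        using System;
        using Npgsql;

        // Sketch of the "just put it in Postgres" option: one append-only samples
        // table acting as the audit log. Table/column names are illustrative.
        class VodSampleWriter
        {
            const string ConnString = "Host=localhost;Database=vod;Username=vod;Password=secret";

            // CREATE TABLE bandwidth_samples (
            //     taken_at       timestamptz NOT NULL,
            //     viewers        int         NOT NULL,
            //     server_mbps    real        NOT NULL,
            //     multicast_kbps real        NOT NULL
            // );
            static void WriteSample(int viewers, double serverMbps, double multicastKbps)
            {
                using (var conn = new NpgsqlConnection(ConnString))
                using (var cmd = new NpgsqlCommand(
                    "INSERT INTO bandwidth_samples (taken_at, viewers, server_mbps, multicast_kbps) " +
                    "VALUES (@t, @v, @s, @m)", conn))
                {
                    cmd.Parameters.AddWithValue("t", DateTime.UtcNow);
                    cmd.Parameters.AddWithValue("v", viewers);
                    cmd.Parameters.AddWithValue("s", serverMbps);
                    cmd.Parameters.AddWithValue("m", multicastKbps);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }

            static void Main()
            {
                WriteSample(viewers: 120, serverMbps: 480.5, multicastKbps: 3500);
            }
        }

    Graphs then become simple GROUP BY time-bucket queries, and since rows are never updated, the table naturally behaves like the audit log described above.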


  • Rendering transparent textures in DirectX

    - by Vibhore Tanwer
    I am working on a DirectX application with WPF, and I am facing a problem with videos and images that contain transparent pixels: I have to draw a color in the background and then a video/image over it. What I expect is that the background color should be visible while playing the video, and only the non-transparent pixels should be visible; but what I get is a black background behind the video. I am using the following settings on the device to achieve alpha blending:

        device.RenderState.SourceBlend = Blend.SourceAlpha;
        device.RenderState.DestinationBlend = Blend.InvSourceAlpha;
        device.RenderState.AlphaBlendEnable = true;

    What am I missing here? What is the best approach to handle transparent videos? Any help will be of great value to me.


  • What's a good structure to save and retrieve locations of images?

    - by Goot
    I have a Java EE application where I collect information about movies. In my backend I provide data like the name, description, genre and a random UUID. I also have lots of related files, which are stored on a file server, including some screenshots, the DVD or Blu-ray cover, and video trailers. My current approach is: when saving the files to the fileserver, I retrieve the movie's random UUID (which is the primary key, btw.) and then rename the files screenshot_[UUID]_1, screenshot_[UUID]_2, etc. Now, there are lots of other ways to handle this, like saving all filenames in a database, or creating a directory structure on the fileserver for every UUID and, e.g., returning all images in the "[uuid]/screenshots" folder via REST. I expect about 30k requests a day, so the service has to be pretty performant. What's the best way to solve this?


  • Switching to HTTPS - redirect question

    - by seengee
    Following the recent Google announcements about improved ranking for sites running on HTTPS, we have a number of clients asking about this. Is it safe to just 301 redirect all pages to their SSL equivalent, for example in a common PHP include file?

        if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] !== 'on') {
            // Permanent redirect to the same URL over HTTPS
            $redirect = 'https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
            header('Location: ' . $redirect, true, 301);
            exit();
        }

    (The empty() check avoids an undefined-index notice on servers that don't set $_SERVER['HTTPS'] at all.) Obviously I'm aware this is also possible within a .htaccess file, but that cannot be modified in our case. All internal links would be switched to https:// links, but we obviously need to sort out incoming links from Google and elsewhere. Is this a sound approach? Are there any other gotchas to be aware of?


  • MVC shared model with different required fields for different types

    - by kurasa
    I have a model called Car, and depending on what type of car the user selects, the view is presented differently. For example, the user selects from a grid of different cars, and depending on whether it is a Volvo, a Kia or a Ford, the view must allow different fields to be editable. For example, with a Volvo the color is editable and mandatory, but with a Kia it is not. I would like to use the one Car class to bind the view, but I want the client-side validation to pick up the required fields based on the type of car. I want to go to only one action method for the update. What is a good way to approach this problem? Create a base class and inherit from it? Will this give me binding problems?
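    For what the base-class idea might look like, here is a minimal C# sketch using DataAnnotations (property names are illustrative; note that the binding caveat remains - a single action method taking Car needs some mechanism, such as a custom model binder keyed on the car type, to instantiate the right concrete subclass):

        using System.ComponentModel.DataAnnotations;

        // Sketch of the inheritance idea: shared shape on Car, per-make
        // validation rules on the subclasses. Property names are illustrative.
        public class Car
        {
            [Required]
            public string Model { get; set; }

            // Optional by default; subclasses tighten this where needed.
            public virtual string Color { get; set; }
        }

        public class VolvoCar : Car
        {
            [Required(ErrorMessage = "Color is mandatory for a Volvo.")]
            public override string Color { get; set; }
        }

        public class KiaCar : Car
        {
            // Color stays optional for a Kia.
        }

    With this shape, validating a VolvoCar instance picks up the [Required] rule on Color, while a KiaCar passes without it, so one view bound to Car can still get per-type client-side validation.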


  • Webinar: Riding the Fence or Planning the Upgrade to 11gR2?

    - by Greg Jensen
     Is your organization riding the Identity and Access fence, unable to decide whether you are ready to upgrade? Are you unsure what the technical and business value gains are in upgrading to Oracle's 11gR2? Or are you planning for the upgrade and just unsure of what to expect? In this webinar, experts from Oracle and AmerIndia will discuss the new features of 11gR2, the latest market trends, and how IAM transforms organizations. In addition, the planning and implementation strategy of the upgrade process will be discussed. The presenters will also share success stories and highlight challenges faced by organizations in different verticals, and how Oracle's solutions and AmerIndia's services addressed those challenges. Topics include:

    - Market trends and 11gR2
    - Planning an upgrade
    - Approach and implementation strategy
    - Success stories

    Registration is now open for this webinar, on December 5th from 2pm - 3pm EST.


  • Service Layer - how broad should it be, and should it also be on the local application?

    - by BornToCode
    Background: I need to build a desktop application with some operations (CRUD and more) (= WinForms), and I need to make another application which will re-use some of the functions of the main application (= WebForms). I understood that using a service layer is the best approach here. If I understood correctly, the service should be calling the functions on the BL layer (correct me if I'm wrong). The dilemma:

    1. In my main WinForms UI, should I call the functions from the BL or from the service? (Please explain why.)
    2. Should I create a service for every single function on the BL, even if I need some of the functions only in one UI? For example, should I create services for all the CRUD operations, even though I need to re-use only the update operation in the WebForms app?

    Your help is much appreciated.
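    For reference, the layering usually discussed for this situation keeps the service as a thin facade over the BL, so both UIs exercise the same logic; a rough C# sketch with illustrative names:

        // Business layer: the single home of the logic, used by both UIs.
        public class CustomerLogic
        {
            public void UpdateCustomer(int id, string name)
            {
                // validation + persistence would live here
            }
        }

        // Service layer: a thin facade over the BL, exposing only the
        // operations remote clients actually need (here, just the update
        // that the WebForms application re-uses).
        public class CustomerService
        {
            private readonly CustomerLogic _logic = new CustomerLogic();

            public void UpdateCustomer(int id, string name)
            {
                _logic.UpdateCustomer(id, name);
            }
        }

    Under that shape, the local WinForms UI can call CustomerLogic directly, while the WebForms application goes through CustomerService, and the facade only needs to cover the operations that are actually re-used remotely.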


  • Using Box2D DrawDebugData with a multi-layer scene?

    - by Mr.Gando
    In my game, a scene is composed of several layers. Each layer has different camera transformations. This way I can have a layer at z=3 (GUI), z=2 (monsters), z=1 (scrolling background), and these 3 layers compose my whole scene. My render loop looks something like:

        renderLayer()
            applyTransformations()
            renderVisibleEntities()
            renderChildLayers()
        end

    If I call DrawDebugData() in the render loop, the whole b2World debug data will be rendered once for each layer in my scene. This generates a mess, because the "debug boxes" get duplicated, some of them get the camera transformations applied and some of them don't, etc. What I would like to do is make DrawDebugData draw only certain debug boxes. That way, I could call something like b2world->DrawDebugDataForLayer(int layer_id) on each layer, like:

        renderLayer()
            applyTransformations()
            renderVisibleEntities()
            // Only render this layer's debug data
            b2world->DrawDebugDataForLayer(layer_id)
            renderChildLayers()
        end

    Is there a way to subclass b2World so I could add this functionality (specific to my game)? If not, what would be the best way to achieve this? (Cocos2d uses a similar scene graph approach and Box2D, but I'm not sure if debugDraw works in Cocos2d...) Thanks


  • How much configurability to give to users regarding concurrency?

    - by rwong
    This question is a narrowing-down of these related questions:

    - How much effort should we spend on programming for multiple cores?
    - Concurrency: How do you approach the design and debug the implementation?

    Given that each user's computer may have different performance characteristics with respect to calculations, memory, disk I/O bandwidth and network I/O bandwidth, and that it is difficult to implement an automated self-tuning system in your software, how much configurability should we give to end-users so that they can find ways (by trial and error?) to improve our software's efficiency? If we give users the ability to change these settings, how do we give visual feedback to users so they can measure the performance changes?


  • Tab navigation and double content

    - by Guisasso
    I have a website in which I use tabs to navigate between pages. For example, page A displays A as the active tab, with B and C as background tabs. If the visitor gets to the website via page B, I would also like to display page D, but not A and C. Question: I know I can just create an index2 for B, for example, so that when the visitor gets to B from A, I display A, B, C, and index1 when the visitor gets to B from D. Is that a bad practice? I know duplicate content isn't good, but in what other way can I, or should I, approach this problem? The tab navigation I designed uses <li> elements and an id tag to mark the active tab, defined in the <body> tag.


  • Advisor Webcast: Hyperion Planning: Migrating Business Rules to Calc Manager

    - by inowodwo
    As you may be aware, EPM 11.1.2.1 was the terminal release of Hyperion Business Rules (see Hyperion Business Rules Statement of Direction, Doc ID 1448421.1). This webcast aims to help you migrate from Business Rules to Calc Manager.

    Date: January 10, 2013 at 3:00 pm GMT (London) / 4:00 pm (Berlin, GMT+01:00) / 7:00 am Pacific / 8:00 am Mountain / 10:00 am Eastern

    Topics will include:
    - Calculation Manager in 11.1.2.2
    - Migration considerations
    - How to migrate the HBR rules from 11.1.2.1 to Calculation Manager 11.1.2.2
    - How to migrate the security of the Business Rules
    - How to approach troubleshooting and known issues with migration

    For registration details, please go to Migrating Business Rules to Calc Manager (Doc ID 1506296.1). Alternatively, to view all upcoming webcasts, go to Advisor Webcasts: Current Schedule and Archived Recordings (Doc ID 740966.1) and choose Oracle Business Analytics from the drop-down menu.


  • Setting up a Developers Conference

    - by Darknight
    In our local city in the UK there are, as far as I am aware, no developer conferences. I am confident that our region has many professional developers, as well as many graduating students, who would really benefit from a conference. I would like to ask the following questions:

    - What steps or advice would one take if given the task of setting up a local developers conference?
    - What would the costing look like? (excluding building/hosting of website(s))
    - How would one build interest and promote this?
    - How would I approach local companies and universities to collaborate with them?

    I'm not just aiming this question at users who may have experience in setting up such conferences (though they are highly welcome); rather, how would you attack this if you were tasked with it?


  • What are the pros and cons of a non-fixed-interval update loop?

    - by akonsu
    I am studying various approaches to implementing a game loop, and I have found this article. In the article, the author implements a loop which, if the processing falls behind in time, skips frame renderings and just updates the game in a loop (the last variant, called "Constant Game Speed independent of Variable FPS"). I do not understand why it is acceptable to call update_game() in a loop without making sure the update function is called at a particular interval. I do not see any value in doing this. I would think that in my game I want to be sure the game is updated periodically, with a known period. So maybe it is worthwhile to have two threads: one would call update periodically, and the other one would redraw the game, also periodically? Would this be a good and practical approach? Of course I would need to synchronise the threads.
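    For reference, the loop variant the article describes reduces to something like the C# sketch below (the constants follow the article's typical values). The key point is that each UpdateGame() call advances game time by exactly one fixed tick, so even when the calls bunch up in wall-clock time, the simulation still sees a constant interval:

        using System.Diagnostics;

        // Sketch of the "Constant Game Speed independent of Variable FPS" loop.
        // UpdateGame() runs at a fixed 25 Hz of game time; rendering happens as
        // often as it can, and catch-up updates are capped at MaxFrameskip so a
        // slow machine still renders occasionally.
        class GameLoop
        {
            const int TicksPerSecond = 25;
            const double SkipMs = 1000.0 / TicksPerSecond;
            const int MaxFrameskip = 5;

            static bool running = true;
            static readonly Stopwatch clock = Stopwatch.StartNew();

            static void Main()
            {
                double nextGameTick = clock.Elapsed.TotalMilliseconds;

                while (running)
                {
                    int loops = 0;
                    // Catch up on missed updates, never more than MaxFrameskip in a row.
                    while (clock.Elapsed.TotalMilliseconds > nextGameTick
                           && loops < MaxFrameskip)
                    {
                        UpdateGame();            // always advances by exactly SkipMs
                        nextGameTick += SkipMs;
                        loops++;
                    }

                    // Interpolation lets rendering smooth between fixed updates.
                    double interpolation =
                        (clock.Elapsed.TotalMilliseconds + SkipMs - nextGameTick) / SkipMs;
                    Render(interpolation);
                }
            }

            static void UpdateGame() { /* fixed-step game logic */ }
            static void Render(double interpolation) { /* draw */ }
        }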


  • CompizConfig Grid doesn't work under Ubuntu as a virtual machine

    - by SoftTimur
    I have installed Ubuntu as a virtual machine within Mac, and I have installed the VM tools. The approach in this link for the CompizConfig Grid plugin, which worked on my previous laptop where Ubuntu was not a virtual machine, does not work anymore. I set some shortcuts, but no action is launched... Does anyone know why? The major functionality I need from Grid is tiling windows to the left/right side of the screen... Does anyone know of any alternative?

