Search Results

Search found 17163 results on 687 pages for 'extension objects'.

  • Where did I hide my TSQL mojo?

    - by fatherjack
    LiveJournal Tags: How To, SQL Server, Tips and Tricks, TSQL, Reporting Services. A little while ago I wrote a piece about finding database objects that rely on other objects that no longer exist (OK, I have my database ready, now what's missing?). This is linked to that sort of process. Many SQL Server installations are associated in some way with a Reporting Services installation; it's a very logical way to distribute your database contents to system users so they can work effectively. Databases,...(read more)

    Read the article

  • WCF client hell (2 replies)

    I have a remote service available via tcp://. When I add a service reference in my client project, VS doesn't create all the proxy objects! I'm missing every xxxClient class, and I only have the types used as parameters in my methods. I tried starting a new, empty project and adding the same service reference, and in that project I can see all the proxy objects! It's hell. What can I do? Thanks

    Read the article

  • Learning resources for working with POCO entities in EF 4.0

    - by boghydan
    Here are some links that can help you start working with POCO entities in EF 4.0:
    ADO.NET Team blog:
    - Working with POCO objects: http://blogs.msdn.com/adonet/archive/2009/05/21/poco-in-the-entity-framework-part-1-the-experience.aspx
    - Proxy objects for POCO entities: http://blogs.msdn.com/adonet/archive/2009/12/22/poco-proxies-part-1.aspx
    MSDN Library:
    - Working with POCO: http://msdn.microsoft.com/en-us/library/dd456853.aspx
    - The T4 editor for the POCO generator can be downloaded from here: http://visualstudiogallery.msdn.microsoft.com/en-us/60297607-5fd4-4da4-97e1-3715e90c1a23

    Read the article

  • Duplication in parallel inheritance hierarchies

    - by flamingpenguin
    Using an OO language with static typing (like Java), what are good ways to represent the following model invariant without large amounts of duplication? I have two (actually multiple) flavours of the same structure. Each flavour requires its own data (unique to that flavour) on each of the objects within that structure, as well as some shared data, but within each instance of the aggregation only objects of one (the same) flavour are allowed. A FooContainer can contain FooSources and FooDestinations and associations between the "Foo" objects; a BarContainer can contain BarSources and BarDestinations and associations between the "Bar" objects.
        interface Container { List<? extends Source> sources(); List<? extends Destination> destinations(); List<? extends Association> associations(); }
        interface FooContainer extends Container { List<? extends FooSource> sources(); List<? extends FooDestination> destinations(); List<? extends FooAssociation> associations(); }
        interface BarContainer extends Container { List<? extends BarSource> sources(); List<? extends BarDestination> destinations(); List<? extends BarAssociation> associations(); }
        interface Source { String getSourceDetail1(); }
        interface FooSource extends Source { String getSourceDetail2(); }
        interface BarSource extends Source { String getSourceDetail3(); }
        interface Destination { String getDestinationDetail1(); }
        interface FooDestination extends Destination { String getDestinationDetail2(); }
        interface BarDestination extends Destination { String getDestinationDetail3(); }
        interface Association { Source getSource(); Destination getDestination(); }
        interface FooAssociation extends Association { FooSource getSource(); FooDestination getDestination(); String getFooAssociationDetail(); }
        interface BarAssociation extends Association { BarSource getSource(); BarDestination getDestination(); String getBarAssociationDetail(); }

    Read the article

  • ROA on top of SOA

    - by Vaibhav Pujari
    I already have a stable Service Oriented Architecture for my application which exposes services as API calls (the verbs). Now I need to build a Resource Oriented Architecture to expose a RESTful API to interact with the application objects (the nouns). What are the best practices for reusing the existing services:
    - without any persistence inside my new code;
    - without putting unnecessary logic into the REST layer, i.e. it should ideally just leverage the services provided by the SOA API (I want this layer to be as thin as possible);
    - without modifying the existing SOA API;
    - while allowing easy extension of the REST API, i.e. it should be easy to add more resources without changing the (yet to be written) core code. (I want to make resource names and their associated actions configurable so more contributors can easily add resources without needing to understand my module.)
    Any advice/suggestions on how to achieve this?
    Edit: Adding more info.
    My stack: My existing stack is in Java, but since I plan to just use the services, I don't think that should affect the design of the new REST code. I am planning to implement the new REST code in PHP.
    How well do the services map to resources? Some services map well, i.e. there are services for creating and updating application objects. But for other application objects there are no direct services available. More importantly, there are actions beyond just create, update, etc. that apply to application objects, and I would like to provide some way for these actions to be exposed through REST. Since these are verbs, how do I deal with them?
    Where exactly do I need help? I would appreciate any help towards a high-level design that accomplishes the task, along with making the framework extensible. For instance, if tomorrow some new services are added to my SOA layer, I want to make sure it is easy for a fresh developer to expose a REST call by simply registering a new resource (in a config file/db) and writing code to connect it with the SOA calls. Just like a plugin.

    Read the article

  • Be Careful When Referencing SPList.Items

    - by Brian Jackett
    Be very careful how you reference your SPListItem objects through the SharePoint API. I’ll say it again: be very careful how you reference your SPListItem objects through the SharePoint API. Ok, now that you get the point that this will be a “learn from my mistakes and don’t do unsmart things like I did” post, let’s dig into what it was that I did poorly.
    Scenario
    For the past year I’ve been building custom .Net applications that are hosted through SharePoint. These applications involve a number of SharePoint lists, external databases, custom web parts, and other SharePoint elements to provide functionality. About two weeks ago I received a message from one of our end users that a custom application was performing slowly. Specifically, performance was slow when users were performing actions that interacted with the primary SharePoint list storing data for that app.
    The Problem
    I took a copy of the production site into a dev environment to investigate the code that was executing. After attaching the debugger and running through the code I quickly found pieces of code referencing SPListItem objects (like below) that were performing very poorly:
        SPListItem myItem = SPContext.Current.Web.Lists["List Name"].Items.GetItemById(value); // do updates on SPListItem retrieved
    As it turns out, the SPList I was referencing was fairly large at ~1000 items and weighing in at over 150 MB. You see, the problem with my above code is that I retrieved the SPListItem by first (unnecessarily) going through the Items member of the list. As I understand it, when doing so the executing code will attempt to resolve that entity and pull it from the database and into RAM (all 150 MB). This causes the equivalent of a 50-car pile-up in terms of performance, with a single update taking more than 15 seconds.
    The Solution
    The solution is actually quite simple and I wish I had realized this during development. Instead of going through the Items member it is possible to call GetItemById(…) directly on the SPList, as in the example below:
        SPListItem myItem = SPContext.Current.Web.Lists["List Name"].GetItemById(value); // do updates on SPListItem retrieved
    After making this simple change performance skyrocketed and updates were back to less than a second.
    Conclusion
    When given the option between two solutions, usually the simplest is the best solution. In my scenario I was adding extra complexity by going through the API the long way around to get to the objects I needed, and it ended up hurting performance greatly. Luckily we were able to find and resolve the performance issue in a relatively short amount of time. Like I said at the beginning of the post, learn from my mistakes and hope it helps you. -Frog Out
    Image linked from http://www.freespirit.com/files/IMAGE/COVER/LARGE/BeCarefulSafe.jpg

    Read the article

  • Music Notation Editor - Refactoring view creation logic elsewhere

    - by Cyril Silverman
    Let me preface by saying that knowing some elementary music theory and music notation may be helpful in grasping the problem at hand. I'm currently building a Music Notation and Tablature Editor (in JavaScript), and I've come to a point where the core parts of the program are more or less there. All functionality I plan to add at this point will really build off the foundation that I've created. As a result, I want to refactor to really solidify my code. I'm using an API called VexFlow to render notation. Basically I pass the parts of the editor's state to VexFlow to build the graphical representation of the score. Here is a rough and stripped down UML diagram showing you the outline of my program: In essence, a Part has many Measures, which have many Notes, which have many NoteItems (yes, this is semantically weird, as a chord is represented as a Note with multiple NoteItems, individual pitches or fret positions). All of the relationships are bi-directional. There are a few problems with my design because my Measure class contains the majority of the entire application view logic. The class holds the data about all VexFlow objects (the graphical representation of the score). It contains the graphical Staff object and the graphical notes. (Shouldn't these be placed somewhere else in the program?) While VexFlowFactory deals with the actual creation (and some processing) of most of the VexFlow objects, Measure still "directs" the creation of all the objects and what order they are supposed to be created in for both the VexFlowStaff and VexFlowNotes. I'm not looking for a specific answer, as you'd need a much deeper understanding of my code. Just a general direction to go in. Here's a thought I had: create MeasureView/NoteView/PartView classes that contain the basic VexFlow objects for each class, in addition to any extraneous logic for their creation. But where would these views be contained? Do I create a ScoreView that is a parallel graphical representation of everything? So that ScoreView.render() would cascade down, calling render for each PartView and cascading down into each MeasureView, etc. Again, I just have no idea what direction to go in. The more I think about it, the more ways to go seem to pop into my head. I tried to be as concise and simplistic as possible while still getting my problem across. Please feel free to ask me any questions if anything is unclear. It's quite a struggle trying to dumb down a complicated problem to its core parts.

    Read the article

  • Hype and LINQ

    - by Tony Davis
    "Tired of querying in antiquated SQL?" I blinked in astonishment when I saw this headline on the LinqPad site. Warming to its theme, the site suggests that what we need is to "kiss goodbye to SSMS", and instead use LINQ, a modern query language! Elsewhere, there is an article entitled "Why LINQ beats SQL". The designers of LINQ, along with many DBAs, would, I'm sure, cringe with embarrassment at the suggestion that LINQ and SQL are, in any sense, competitive ways of doing the same thing. In fact what LINQ really is, at last, is an efficient, declarative language for C# and VB programmers to access or manipulate data in objects, local data stores, ORMs, web services, data repositories, and, yes, even relational databases. The fact is that LINQ is essentially declarative programming in a .NET language, and so in many ways encourages developers into a "SQL-like" mindset, even though they are not directly writing SQL. In place of imperative logic and loops, it uses various expressions, operators and declarative logic to build up an "expression tree" describing only what data is required, not the operations to be performed to get it. This expression tree is then parsed by the language compiler, and the result, when used against a relational database, is a SQL string that, while perhaps not always perfect, is often correctly parameterized and certainly no less "optimal" than what is achieved when a developer applies blunt, imperative logic to the SQL language. From a developer standpoint, it is a mistake to consider LINQ simply as a substitute means of querying SQL Server. The strength of LINQ is that that can be used to access any data source, for which a LINQ provider exists. Microsoft supplies built-in providers to access not just SQL Server, but also XML documents, .NET objects, ADO.NET datasets, and Entity Framework elements. LINQ-to-Objects is particularly interesting in that it allows a declarative means to access and manipulate arrays, collections and so on. Furthermore, as Michael Sorens points out in his excellent article on LINQ, there a whole host of third-party LINQ providers, that offers a simple way to get at data in Excel, Google, Flickr and much more, without having to learn a new interface or language. Of course, the need to be generic enough to deal with a range of data sources, from something as mundane as a text file to as esoteric as a relational database, means that LINQ is a compromise and so has inherent limitations. However, it is a powerful and beautifully compact language and one that, at least in its "query syntax" guise, is accessible to developers and DBAs alike. Perhaps there is still hope that LINQ can fulfill Phil Factor's lobster-induced fantasy of a language that will allow us to "treat all data objects, whether Word files, Excel files, XML, relational databases, text files, HTML files, registry files, LDAPs, Outlook and so on, in the same logical way, as linked databases, and extract the metadata, create the entities and relationships in the same way, and use the same SQL syntax to interrogate, create, read, write and update them." Cheers, Tony.

    Read the article

  • Part 1: What are EBS Customizations?

    - by volker.eckardt(at)oracle.com
    Everything that is not shipped as Oracle standard may be called a customization. Very often we differentiate between setup and customization, although setup can also be required when working with customizations. This highlights one of the first challenges, because someone needs to track setup brought over with customizations, and this needs to be synchronized with the (standard) setup done manually. This is not only a tracking issue, but also a documentation issue. I will cover this in one of the following blogs in more detail. But back to the topic itself. Mainly our code pieces (Java, PL/SQL, SQL, shell scripts), custom objects (tables, views, packages, etc.) and application objects (concurrent programs, lookups, forms, reports, OAF pages, etc.) are treated as customizations. In general we define two types: customization by extension and customization by modification. For sure we like to minimize standard code modifications, but sometimes it is just not possible to provide a certain functionality without doing it. Keep in mind that the EBS provides a number of alternatives to modification, just to mention some:
    - Files in the file system: add your custom top before the standard top to the path.
    - BI Publisher Report: add a custom layout and disable the standard layout; yours will automatically be taken.
    - Form/OAF Change: use personalization or substitution.
    Using such techniques you are on the safe side regarding standard patches, but for sure a retest is always required! Many customizations grow over time: initially it was just one file, but in the meantime we have 5, 10 or 15 files in our customization pack. The more files you have, the more important the installation order becomes. Last but not least, personalizations are also treated as customizations, although you may not use any deployment pack to transfer them (but you can). For OAF personalizations you can use iSetup; I have also enabled iSetup to allow Forms personalizations to be transported. Interfaces and conversion objects are quite often also categorized as customizations, and I promote this decision. Your development standards are related to all these kinds of custom code, whether we are exchanging data with users (via form or report) or with other systems (via inbound or outbound interface). To cover all these types of customizations two acronyms have been defined: RICE and CEMLI.
    RICE = Reports, Interfaces, Conversions, and Extensions
    CEMLI = Customization, Extension, Modification, Localization, Integration
    The word CEMLI was introduced by Oracle On Demand and is used within Oracle projects quite often, but RICE is also well known as an acronym. It doesn't matter which acronym you are using; the main task here is to classify and categorize your customizations to allow everyone to understand when you talk about RICE-211, CEMLI XXFI_BAST or XXOM_RPT_030. Side note: such references are not automatically object prefixes, but they are often used as such. I plan to address this point in another blog as well. Thank you! Volker

    Read the article

  • Extending Currying: Partial Functions in Javascript

    - by kerry
    Last week I posted about function currying in JavaScript. This week I am taking it a step further by adding the ability to call partial functions. Suppose we have a graphing application that will pull data via Ajax and perform some calculation to update a graph, using a method with the signature ‘updateGraph(id, value)’. To do this, we have to do something like this:
        for(var i=0;i<objects.length;i++) {
          Ajax.request('/some/data',{id:objects[i].id},function(json) {
            updateGraph(json.id, json.value);
          });
        }
    This works fine. But, using this method, we need to return the id in the json response from the server. This works, but is not that elegant and increases network traffic. Using partial function currying we can bind the id parameter and add the second parameter later (when returning from the asynchronous call). To do this, we will need the updated curry method. I have added support for sending additional parameters at runtime for curried methods.
        Function.prototype.curry = function(scope) {
          scope = scope || window;
          var args = [];
          for (var i=1, len = arguments.length; i < len; ++i) {
            args.push(arguments[i]);
          }
          var m = this;
          return function() {
            for (var i=0, len = arguments.length; i < len; ++i) {
              args.push(arguments[i]);
            }
            return m.apply(scope, args);
          };
        }
    To partially curry this method we will call the curry method with the id parameter; then the request will call back on it with just the value. Any additional parameters are appended to the method call.
        for(var i=0;i<objects.length;i++) {
          var id=objects[i].id;
          Ajax.request('/some/data',{id: id}, updateGraph.curry(id));
        }
    As you can see, partial currying gives us a very useful tool, and this simple method should be a part of every developer’s toolbox.

    Read the article

  • Drawing multiple Textures as tilemap

    - by DocJones
    I am trying to draw a 2D game map and the objects on the map in a single pass. Here is my OpenGL initialization code:
        // Turn off unnecessary operations
        glDisable(GL_DEPTH_TEST);
        glDisable(GL_LIGHTING);
        glDisable(GL_CULL_FACE);
        glDisable(GL_STENCIL_TEST);
        glDisable(GL_DITHER);
        glEnable(GL_BLEND);
        glEnable(GL_TEXTURE_2D);
        // activate pointer to vertex & texture array
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    My drawing code is being called by an NSTimer every 1/60 s. Here is the drawing code of my world object:
        - (void) draw:(NSRect)rect withTimedDelta:(double)d {
          GLint *t;
          glActiveTexture(GL_TEXTURE0);
          glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:@"blocks"]);
          glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
          for (int x=0; x<[_map getWidth] ; x++) {
            for (int y=0; y<[_map getHeight] ; y++) {
              GLint v[] = { 16*x ,16*y, 16*x+16,16*y, 16*x+16,16*y+16, 16*x ,16*y+16 };
              t=[_textureManager getBlockWithNumber:[_map getBlockAtX:x andY:y]];
              glVertexPointer(2, GL_INT, 0, v);
              glTexCoordPointer(2, GL_INT, 0, t);
              glDrawArrays(GL_QUADS, 0, 4);
            }
          }
        }
    (_textureManager is a Singleton, only loading a texture once!) The object drawing code is identical (except for the nested loops) in terms of OpenGL calls:
        - (void) drawWithTimedDelta:(double)d {
          GLint *t;
          GLint v[] = { 16*xpos ,16*ypos, 16*xpos+16,16*ypos, 16*xpos+16,16*ypos+16, 16*xpos ,16*ypos+16 };
          glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:_textureName]);
          t=[_textureManager getBlockWithNumber:12];
          glVertexPointer(2, GL_INT, 0, v);
          glTexCoordPointer(2, GL_INT, 0, t);
          glDrawArrays(GL_QUADS, 0, 4);
        }
    As soon as my central drawing routine calls the two drawing methods, the second call overlays the first one. I would expect the call to world.draw to draw the map and "stamp" the objects upon it. Debugging shows me that the first call is performed correctly (the world is being drawn), but the following call to all objects ONLY draws the objects; the rest of the scene goes black. I think I need to blend the drawn textures, but I can't seem to figure out how. Any help is appreciated. Thanks. PS: Here is the github link to the project. It may not be in sync with my post here, but for some more in-depth analysis it may help.

    Read the article

  • Are factors such as Intellisense support and strong typing enough to justify the use of an 'Anaemic Domain Model'?

    - by David Osborne
    It's easy to accept that objects should be used in all layers except a layer nominated as a data layer. However, it's just as easy to end up with an 'anaemic domain model' that is just an object representation of data with no real functionality ( http://martinfowler.com/bliki/AnemicDomainModel.html ). Still, using objects in this fashion brings the benefit of factors such as Intellisense support, strong typing, readability, discoverability, etc. Are these factors strong arguments for an otherwise anaemic domain model?
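    To make the distinction concrete, here is a rough sketch (the Order class and its rule are hypothetical): both versions give me Intellisense and strong typing, but only the second carries any real behaviour.
        using System;

        // Anaemic: typed data only; all logic lives in services elsewhere.
        public class AnaemicOrder
        {
            public decimal Subtotal { get; set; }
            public decimal DiscountRate { get; set; }
        }

        // Richer: the invariant and the calculation live with the data.
        public class Order
        {
            public decimal Subtotal { get; }
            public decimal DiscountRate { get; }

            public Order(decimal subtotal, decimal discountRate)
            {
                if (discountRate < 0m || discountRate > 1m)
                    throw new ArgumentOutOfRangeException(nameof(discountRate));
                Subtotal = subtotal;
                DiscountRate = discountRate;
            }

            public decimal Total() => Subtotal * (1m - DiscountRate);
        }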

    Read the article

  • View AccuWeather Forecasts in Google Chrome

    - by Asian Angel
    Being able to keep an eye on the weather while at work or browsing the Internet is definitely helpful. If you like detailed forecasts then join us as we take a look at the Forecastfox Weather extension for Google Chrome. Getting Started As soon as the Forecastfox Weather extension has finished installing you will automatically be presented with the “Customize Forecastfox Page”. The default setting is for New York with English measurement units. Enter your location into the blank and hit “Enter” to display the listing for your city/area. If you are presented multiple options to choose from simply click on the appropriate listing. Once you have your city/area displayed you will notice that it is possible to have access to weather forecasts for multiple locations. You can easily remove any unneeded listings with the “Remove Link”. For our example we removed the New York listing. Note: Click on desired locations and measurement units to automatically set them as defaults (no save button required). Forecastfox Weather in Action You can hover your mouse over the “Toolbar Button” to see the current weather conditions. Clicking on the “Toolbar Button” opens a popup window with the current conditions, 7 day forecast, and a static satellite image. If desired you can access additional details for the current weather conditions. Clicking on “details” opens a new tab with a nice bit of information such as UV Index, Moon Phases, Cloud Ceiling, etc. Note: AccuWeather.com webpages will have some ads displayed. Perhaps you need the Hourly Forecast… Once again a new tab will be opened with the predicted hourly weather conditions for the current day. Going back to the popup window you may also select a specific day from the 7 day forecast. You will be presented with a “Day & Night” forecast for the chosen day with links to view “Additional Details & Hourly” information. Interested in the satellite image instead? You can click on either of the available links for larger images. Once the new tab is open you can choose from a variety of different satellite images. Conclusion If you have been wanting a solid weather forecast extension for your Chrome browser then Forecastfox Weather is definitely a recommended install. Links Download the Forecastfox Weather extension (Google Chrome Extensions)

    Read the article

  • Visual Studio Extensions

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2013/10/18/visual-studio-extensions.aspxAs a product, Visual Studio has been around for a long time. In fact, it’s been 18 years since the first Visual Studio product was launched. In that time, there have been some major changes but perhaps the most important (or at least influential) changes for the course of the product have been in the last few years. While we can argue over what was and wasn’t an important change or what has and hasn’t changed, I want to talk about what I think is the single most important change Microsoft has made to Visual Studio. Specifically, I’m referring to the Visual Studio Gallery (first introduced in Visual Studio 2010) and the ability for third-parties to easily write extensions which can add new functionality to Visual Studio or even change existing functionality. I know Visual Studio had this ability before the Gallery existed, but it was expensive (both from a financial and development resource) perspective for a company or individual to write such an extension. The Visual Studio Gallery changed all of that. As of today, there are over 4000 items in the Gallery. Microsoft itself has over 100 items in the Gallery and more are added all of the time. Why is this such an important feature? Simply put, it allows third-parties (companies such as JetBrains, Telerik, Red Gate, Devart, and DevExpress, just to name a few) to provide enhanced developer productivity experiences directly within the product by providing new functionality or changing existing functionality. However, there is an even more important function that it serves. It also allows Microsoft to do the same. By providing extensions which add new functionality or change existing functionality, Microsoft is not only able to rapidly innovate on new features and changes but to also get those changes into the hands of developers world-wide for feedback. The end result is that these extensions become very robust and often end up becoming part of a later product release. An excellent example of this is the new CodeLens feature of Visual Studio 2013. This is, perhaps, the single most important developer productivity enhancement released in the last decade and already has huge potential. As you can see, out of the box CodeLens supports showing you information about references, unit tests and TFS history.   Fortunately, CodeLens is also accessible to Visual Studio extensions, and Microsoft DevLabs has already written such an extension to show code “health.” This extension shows different code metrics to help make sure your code is maintainable. At this point, you may have already asked yourself, “With over 4000 extensions, how do I find ones that are good?” That’s a really good question. Fortunately, the Visual Studio Gallery has a ratings system in place, which definitely helps but that’s still a lot of extensions to look through. To that end, here is my personal list of favorite extensions. This is something I started back when Visual Studio 2010 was first released, but so much has changed since then that I thought it would be good to provide an updated list for Visual Studio 2013. These are extensions that I have installed and use on a regular basis as a developer that I find indispensible. This list is in no particular order. 
    NuGet Package Manager for Visual Studio 2013
    Microsoft CodeLens Code Health Indicator
    Visual Studio Spell Checker
    Indent Guides
    Web Essentials 2013
    VSCommands for Visual Studio 2013
    Productivity Power Tools (right now this is only for Visual Studio 2012, but it should be updated to support Visual Studio 2013.)
    Everyone has their own set of favorites, so mine is probably not going to match yours. If there is an extension that you really like, feel free to leave me a comment!

    Read the article

  • Java Dynamic Binding

    - by Chris Okyen
    I am having trouble understanding the OOP polymorphic principle of dynamic binding (late binding) in Java. I looked for questions pertaining to Java, and since I wasn't sure whether a general answer on how dynamic binding works would apply to Java's dynamic binding, I wrote this question. Given:
        class Person {
          private String name;
          Person(String initialName) { name = initialName; }
          // irrelevant methods are here...
          // overrides Object's method
          public void writeOutput() { System.out.println(name); }
        }
        class Student extends Person {
          private int studentNumber;
          Student(String initialName, int initialStudentNumber) {
            super(initialName);
            studentNumber = initialStudentNumber;
          }
          // irrelevant methods here...
          // overrides Person's and Object's method
          public void writeOutput() {
            super.writeOutput();
            System.out.println(studentNumber);
          }
        }
        class Undergraduate extends Student {
          private int level;
          Undergraduate(String initialName, int initialStudentNumber, int initialLevel) {
            super(initialName, initialStudentNumber);
            level = initialLevel;
          }
          // irrelevant methods are here...
          // overrides Person's, Student's and Object's method
          public void writeOutput() {
            super.writeOutput();
            System.out.println(level);
          }
        }
    I am wondering: if I had an array called people declared to contain objects of type Person:
        Person[] people = new Person[2];
        people[0] = new Undergraduate("Cotty, Manny", 4910, 1);
        people[1] = new Student("DeBanque, Robin", 8812);
    Given that people[] is declared to be of type Person, you would expect, for example, in the line where people[0] is initialized to a new Undergraduate object, that it would only gain the instance variables and methods from Person, since doesn't assigning a new Undergraduate to its ancestor type mean the Undergraduate object is accessed through Person, its ancestor's methods and instance variables? Thus, with the following code:
        people[0].writeOutput(); // calls Undergraduate::writeOutput()
        people[1].writeOutput(); // calls Student::writeOutput()
    I would expect people[0] not to have Undergraduate's overridden writeOutput() method, nor people[1] to have Student's overridden method. If I had
        Person mikeJones = new Student("Who?, MikeJones", 44);
        mikeJones.writeOutput();
    I would expect the Person::writeOutput() method to be called. Why is this not so? Does it have to do with something I don't understand relating to arrays? Does the declaration Person[] people = new Person[2] not bind the method like the previous code would?

    Read the article

  • Map building - Tower Defense

    - by Dan K
    Before diving too deep into my question, let it be known that I am still learning as far as JavaScript goes, and figured a simple Tower Defense game would be an excellent way to learn things. So I have found a simple background image with a path drawn on it, and my question is: how would I go about building a path so that I can animate my objects? Would I have to take the image and overlay a grid system, or can I store the path in some sort of array and have my objects move across it? Here is the background image:

    Read the article

  • Should the main method consist only of object creations and method calls?

    - by crucified soul
    A friend of mine told me that the best practice is that the class containing the main method should be named Main and contain only the main method. Also, the main method should only parse inputs, create other objects and call other methods; the Main class and main method shouldn't do anything else. Basically, what he is saying is that the class containing the main method should look like this:
        public class Main {
          public static void main(String[] args) {
            // parse inputs
            // create other objects
            // call methods
          }
        }
    Is this the best practice?

    Read the article

  • How to do geometric projection shadows?

    - by John Murdoch
    I have decided that since my game world is mostly flat I don't need better shadows than geometric projections - at least for now. The only problem is I don't even know how to do those properly - that is to produce a 4x4 matrix which would render shadows for my objects (that is, I guess, project them on a horizontal XZ plane). I would like a light source at infinity (e.g., the sun at some point in the sky) and thus parallel projection. My current code does something that looks almost right for small flying objects, but actually is a very rude approximation, as it doesn't project the objects onto the ground, but simply moves them there (I think). Also it always wrongly assumes the sun is always on the zenith (projecting straight down). Gdx.gl20.glEnable(GL10.GL_BLEND); Gdx.gl20.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA); //shells shellTexture.bind(); shader.begin(); for (ShellState state : shellStates.values()) { transform.set(camera.combined); transform.mul(state.transform); shader.setUniformMatrix("u_worldView", transform); shader.setUniformi("u_texture", 0); shellMesh.render(shader, GL10.GL_TRIANGLES); } shader.end(); // shadows shader.begin(); for (ShellState state : shellStates.values()) { transform.set(camera.combined); m4.set(state.transform); state.transform.getTranslation(v3); m4.translate(0, -v3.y + 0.5f, 0); // TODO HACK: + 0.5f is a hack to ensure the shadow appears above the ground; this is overall a hack as we are just moving the shell to the surface instead of projecting it on the surface! transform.mul(m4); shader.setUniformMatrix("u_worldView", transform); shader.setUniformi("u_texture", 0); // TODO: make shadow black somehow shellMesh.render(shader, GL10.GL_TRIANGLES); } shader.end(); Gdx.gl.glDisable(GL10.GL_BLEND); So my questions are: a) What is the proper way to produce a Matrix4 to pass to openGL which would render the shadows for my objects? b) I am supposed to use another fragment shader for the shadows which would paint them in semi-transparent grey, correct? c) The limitation of this simplistic approach is that whenever there is some object on the ground (it is not flat) the shadows will not be drawn, correct? d) Do I need to add something very small to the y (up) coordinate to avoid z-fighting with ground textures? Or is the fact they will be semi-transparent enough to resolve that problem?

    Read the article

  • Dealing with state problems in functional programming

    - by Andrew Martin
    I've learned how to program primarily from an OOP standpoint (like most of us, I'm sure), but I've spent a lot of time trying to learn how to solve problems the functional way. I have a good grasp on how to solve calculational problems with FP, but when it comes to more complicated problems I always find myself reverting to needing mutable objects. For example, if I'm writing a particle simulator, I will want particle "objects" with a mutable position to update. How are inherently "stateful" problems typically solved using functional programming techniques?
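    To make the question concrete, here is a minimal sketch (in C#, with a deliberately simplified particle) of the "no mutation" style I understand FP pushes toward: a pure function returns a new particle, and the simulation just threads the latest collection through each step.
        using System;
        using System.Linq;

        public record Particle(double Position, double Velocity)
        {
            // Pure: returns a new particle; the original is never modified.
            public Particle Step(double dt) => this with { Position = Position + Velocity * dt };
        }

        class Simulation
        {
            static void Main()
            {
                var particles = new[] { new Particle(0, 1), new Particle(5, -2) };

                for (int frame = 0; frame < 3; frame++)
                {
                    // Each frame produces a fresh collection; no shared mutable state.
                    particles = particles.Select(p => p.Step(0.1)).ToArray();
                }

                Console.WriteLine(string.Join(", ", particles.Select(p => p.Position)));
            }
        }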

    Read the article

  • library for octree or kdtree

    - by Will
    Are there any robust, performant libraries for indexing objects? It would need frustum culling and visiting objects hit by a ray, as well as neighbourhood searches. I can find lots of articles showing the math for the component parts, often as algebra rather than simple C, but nothing that puts it all together (apart from perhaps Ogre, which is rather more involved and isn't so stand-alone). Surely hobby game makers don't all have to make their own octrees? (Python or C/C++ w/bindings preferred)

    Read the article

  • Data Auditor by Example

    - by Jinjin.Wang
    OWB has a node Data Auditors under Oracle Module in Projects Navigator. What is data auditor and how to use it? I will give an introduction to data auditor and show its usage by examples. Data auditor is an important tool in ensuring that data quality levels meet business requirements. Data auditor validates data against a set of data rules to determine which records comply and which do not. It gathers statistical metrics on how well the data in a system complies with a rule by auditing and marking how many errors are occurring against the audited table. Data auditors are typically scheduled for regular execution as part of a process flow, to monitor the quality of the data in an operational environment such as a data warehouse or ERP system, either immediately after updates like data loads, or at regular intervals. How to use data auditor to monitor data quality? Only objects with data rules can be monitored, so the first step is to define data rules according to business requirements and apply them to the objects you want to monitor. The objects can be tables, views, materialized views, and external tables. Secondly create a data auditor containing the objects. You can configure the data auditor and set physical deployment parameters for it as optional, which will be used while running the data auditor. Then deploy and run the data auditor either manually or as part of the process flow. After execution, the data auditor sets several output values, and records that are identified as not complying with the defined data rules contained in the data auditor are written to error tables. Here is an example. We have two tables DEPARTMENTS and EMPLOYEES (see pic-1 and pic-2. Click here for DDL and data) imported into OWB. We want to gather statistical metrics on how well data in these two tables satisfies the following requirements: a. Values of the EMPLOYEES.EMPLOYEE_ID attribute are three-digit numbers. b. Valid values for EMPLOYEES.JOB_ID are IT_PROG, SA_REP, SH_CLERK, PU_CLERK, and ST_CLERK. c. EMPLOYEES.EMPLOYEE_ID is related to DEPARTMENTS.MANAGER_ID. Pic-1 EMPLOYEES Pic-2 DEPARTMENTS 1. To determine legal data within EMPLOYEES or legal relationships between data in different columns of the two tables, firstly we define data rules based on the three requirements and apply them to tables. a. The first requirement is about patterns that an attribute is allowed to conform to. We create a Domain Pattern List data rule EMPLOYEE_PATTERN_RULE here. The pattern is defined in the Oracle Database regular expression syntax as ^([0-9]{3})$ Apply data rule EMPLOYEE_PATTERN_RULE to table EMPLOYEES.

    Read the article

  • A strong component keeps everything together

    - by Justin Paul-Oracle
    Most of the times you implement a WebCenter Content based system, you require some sort of customization. Sometimes these customizations need a Java class or two, or libraries (for example, the JavaMail API), or Database Objects (like new tables, views, indexes, etc). I have seen that libraries and Database Objects are usually put in place using manual steps. This means that the library jar files are copied to one of the common classes directory (set in the Content CLASSPATH variable) and/or the database scripts are executed manually. I have also seen people place the custom Java classes in the common classes directory. While this may seem like an easy solution, think about a scenario where you need to disable or uninstall the component or if you have to upgrade or migrate the system. You have to keep these manual steps documented and execute them every time you encounter the above scenarios. It is very common that some of these manual steps are missed when you have multiple teams and people working on the system. Here are a few points to ponder upon: Place all your custom Java classes within your component. Create a new directory, say ${COMPONENT_DIR}/classes, and place your code there. You can choose to bundle all your classes into a jar or you can place the entire class directory structure. Add a path entry to the Build Settings so that it is bundled with the component when you build it. You also need to update the Custom Class Path and the Custom Class Path Load Order under the Advanced Build Settings. This will ensure that the system CLASSPATH is updated to add this new directory. Create a new component for any new library that you want to add. Add the appropriate path entries to the Build Settings so that it is bundled with the component when you build it. You also need to update the Custom Class Path, Custom Class Path Load Order and/or the Custom Library Path under the Advanced Build Settings. Enter a comma separated list of features that this component will provide. When you create other components that will use the features exposed by this component, make sure that you specify a dependency to this library component by specifying the comma separated list of features in the Advanced Build Settings. The component wizard allows you to create custom install/uninstall Java code. The wizard will create a install filter class when you check the “Has Install” checkbox on the “Install/Uninstall Settings” tab. Consider using this filter class to create database objects when you install the component and drop the objects when you uninstall the component. If you do a lot of custom component development, consider creating a install/uninstall Java class, which can execute queries defined within the component. To sum up, whenever you write a new custom component, make sure that you bundle everything within the component.

    Read the article

  • The most dangerous SQL Script in the world!

    - by DrJohn
    In my last blog entry, I outlined how to automate SQL Server database builds from concatenated SQL Scripts. However, I did not mention how I ensure the database is clean before I rebuild it. Clearly a simple DROP/CREATE DATABASE command would suffice; but you may not have permission to execute such commands, especially in a corporate environment controlled by a centralised DBA team. However, you should at least have database owner permissions on the development database so you can actually do your job! Then you can employ my universal "drop all" script which will clear down your database before you run your SQL Scripts to rebuild all the database objects. Why start with a clean database? During the development process, it is all too easy to leave old objects hanging around in the database which can have unforeseen consequences. For example, when you rename a table you may forget to delete the old table and change all the related views to use the new table. Clearly this will mean an end-user querying the views will get the wrong data and your reputation will take a nose dive as a result! Starting with a clean, empty database and then building all your database objects using SQL Scripts using the technique outlined in my previous blog means you know exactly what you have in your database. The database can then be repopulated using SSIS and bingo; you have a data mart "to go". My universal "drop all" SQL Script To ensure you start with a clean database run my universal "drop all" script which you can download from here: 100_drop_all.zip By using the database catalog views, the script finds and drops all of the following database objects: Foreign key relationships Stored procedures Triggers Database triggers Views Tables Functions Partition schemes Partition functions XML Schema Collections Schemas Types Service broker services Service broker queues Service broker contracts Service broker message types SQLCLR assemblies There are two optional sections to the script: drop users and drop roles. You may use these at your peril, particularly as you may well remove your own permissions! Note that the script has a verbose mode which displays the SQL commands it is executing. This can be switched on by setting @debug=1. Running this script against one of the system databases is certainly not recommended! So I advise you to keep a USE database statement at the top of the file. Good luck and be careful!!

    Read the article
