Search Results

Search found 12107 results on 485 pages for 'pinned objects'.


  • When module calling gets ugly

    - by Pete
    Has this ever happened to you? You've got a suite of well-designed, single-responsibility modules, covered by unit tests. In any higher-level function you code, you are (95% of the code) simply taking output from one module and passing it as input to the next. Then, you notice this higher-level function has turned into a 100+ line script with multiple responsibilities. Here is the problem. It is difficult (impossible) to test that script. At least, it seems so. Do you agree? In my current project, all of the bugs came from this script. Further detail: each script represents a unique solution, or algorithm, formed by using different modules in different ways. Question: how can you remedy this situation? Knee-jerk answer: break the script up into single-responsibility modules. Comment on knee-jerk answer: it already is! Best answer I can come up with so far: create higher-level connector objects which "wire" modules together in particular ways (take output from one module, feed it as input to another module). Thus, if our script was: FooInput fooIn = new FooInput(1, 2); FooOutput fooOutput = fooModule(fooIn); Double runtimevalue = getsomething(fooOutput.whatever); BarInput barIn = new BarInput(runtimevalue, fooOutput.someOtherValue); BarOutput barOut = barModule(barIn); with a connector it would become: FooBarConnectionAlgo fooBarConnector = new FooBarConnectionAlgo(fooModule, barModule); FooInput fooIn = new FooInput(1, 2); BarOutput barOut = fooBarConnector(fooIn); So the advantage is, besides hiding some code and making things clearer, we can test FooBarConnectionAlgo. I'm sure this situation comes up a lot. What do you do?
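    A minimal sketch of such a connector in Java. The Foo/Bar types and the module interfaces are stand-ins invented for illustration; the excerpt only describes them, so this is one possible shape, not the asker's actual code:

    ```java
    import java.util.function.Function;

    // Hypothetical input/output records standing in for the excerpt's Foo/Bar types.
    record FooInput(int a, int b) {}
    record FooOutput(double whatever, double someOtherValue) {}
    record BarInput(double runtimeValue, double other) {}
    record BarOutput(double result) {}

    // The connector owns only the wiring: take fooModule's output, derive the
    // runtime value, feed it into barModule. This is the part that becomes testable.
    class FooBarConnectionAlgo {
        private final Function<FooInput, FooOutput> fooModule;
        private final Function<BarInput, BarOutput> barModule;

        FooBarConnectionAlgo(Function<FooInput, FooOutput> fooModule,
                             Function<BarInput, BarOutput> barModule) {
            this.fooModule = fooModule;
            this.barModule = barModule;
        }

        BarOutput connect(FooInput fooIn) {
            FooOutput fooOut = fooModule.apply(fooIn);
            double runtimeValue = fooOut.whatever() * 2;   // stand-in for getsomething(...)
            return barModule.apply(new BarInput(runtimeValue, fooOut.someOtherValue()));
        }
    }
    ```

    In a test, the two modules can then be replaced with fakes and only the wiring logic is exercised.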

    Read the article

  • Do you leverage the benefits of the open-closed principle?

    - by Kaleb Pederson
    The open-closed principle (OCP) states that an object should be open for extension but closed for modification. I believe I understand it and use it in conjunction with SRP to create classes that do only one thing. And, I try to create many small methods that make it possible to extract out all the behavior controls into methods that may be extended or overridden in some subclass. Thus, I end up with classes that have many extension points, be it through: dependency injection and composition, events, delegation, etc. Consider the following simple, extendable class: class PaycheckCalculator { // ... protected decimal GetOvertimeFactor() { return 2.0M; } } Now say, for example, that the OvertimeFactor changes to 1.5. Since the above class was designed to be extended, I can easily subclass and return a different OvertimeFactor. But... despite the class being designed for extension and adhering to OCP, I'll modify the single method in question, rather than subclassing and overriding the method in question and then re-wiring my objects in my IoC container. As a result I've violated part of what OCP attempts to accomplish. It feels like I'm just being lazy because the above is a bit easier. Am I misunderstanding OCP? Should I really be doing something different? Do you leverage the benefits of OCP differently? Update: based on the answers it looks like this contrived example is a poor one for a number of different reasons. The main intent of the example was to demonstrate that the class was designed to be extended by providing methods that when overridden would alter the behavior of public methods without the need for changing internal or private code. Still, I definitely misunderstood OCP.
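    A Java rendering of the extension point described above. The names mirror the excerpt's C# example, but the pay calculation itself is invented purely for illustration:

    ```java
    import java.math.BigDecimal;

    // The overtime factor is exposed through a protected method, so behaviour can
    // change by subclassing instead of editing the original class.
    class PaycheckCalculator {
        BigDecimal overtimePay(BigDecimal baseRate, BigDecimal overtimeHours) {
            return baseRate.multiply(overtimeHours).multiply(getOvertimeFactor());
        }

        protected BigDecimal getOvertimeFactor() {
            return new BigDecimal("2.0");
        }
    }

    // Extension point in use: no line of PaycheckCalculator changes.
    class ReducedOvertimeCalculator extends PaycheckCalculator {
        @Override
        protected BigDecimal getOvertimeFactor() {
            return new BigDecimal("1.5");
        }
    }
    ```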

    Read the article

  • How should I load level data in java?

    - by Matthew G.
    I'm setting up my engine for a certain action/arcade game to have a set of commands that would look something like this. Set landscape to grass Create rocks at ... Create player at X, Y Set goal to "Get to point X Y" Spawn enemy at X, Y I'd then have each object knowing what it has to do, and acting on its own. I've been thinking about how to store this data. External data files could be parsed by a level class, and certain objects can be spawned through that. I could also create a base level class and extend it for each level, but that'd create a large amount of classes. Another idea is to have one level parser class, but have a case for each level. This would be extremely silly and bulky, but I mention it because I found that I did this at 2 AM last night. I'm finally getting why I have to plan out my inheritances, though. RIP project. I might be completely missing another option.
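    A hedged sketch of the external-data option: a plain-text level file parsed by one loader class. The file format and command set here are invented for the example, not taken from the question:

    ```java
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    // Parses a hypothetical level file with lines such as:
    //   landscape grass
    //   rock 3 7
    //   player 1 2
    //   enemy 5 9
    class LevelLoader {
        void load(Path levelFile) throws IOException {
            List<String> lines = Files.readAllLines(levelFile);
            for (String line : lines) {
                if (line.isBlank() || line.startsWith("#")) continue;   // skip comments
                String[] tokens = line.trim().split("\\s+");
                switch (tokens[0]) {
                    case "landscape" -> setLandscape(tokens[1]);
                    case "rock"      -> spawnRock(Integer.parseInt(tokens[1]), Integer.parseInt(tokens[2]));
                    case "player"    -> spawnPlayer(Integer.parseInt(tokens[1]), Integer.parseInt(tokens[2]));
                    case "enemy"     -> spawnEnemy(Integer.parseInt(tokens[1]), Integer.parseInt(tokens[2]));
                    default          -> throw new IllegalArgumentException("Unknown command: " + tokens[0]);
                }
            }
        }

        void setLandscape(String type) { /* ... */ }
        void spawnRock(int x, int y)   { /* ... */ }
        void spawnPlayer(int x, int y) { /* ... */ }
        void spawnEnemy(int x, int y)  { /* ... */ }
    }
    ```

    One loader plus one data file per level avoids both a class per level and a case per level.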

    Read the article

  • Limiting the speed of the mouse cursor

    - by idlewire
    I am working on a simple game where you can drag objects around with the mouse cursor. As I drag the object around quickly, I notice there is some juddering, which seems to be due to the fact that I can move the mouse cursor faster than the game's update/draw. So, although I maintain the offset from where the player initially clicked on the object, the mouse's relative position to the object shifts around slightly before settling as I move the object very quickly. The only way I have found to get smooth, exact 1:1 movement is if I turn both IsFixedTimeStep and SynchronizeWithVerticalRetrace to false. However, I'd rather not have to do that. I have also tried making a custom mouse cursor, hiding the real mouse, taking the real mouse delta and clamping it to a maximum speed. Here is the problem: In windowed mode, the "real" mouse cursor moves off the window while the custom mouse cursor (since its movement is being scaled) is still somewhere inside the game window. This becomes bizarre and is obviously not desired, as clicking at this point means clicking on things outside the game window. Is there any way to accomplish this in windowed mode? In fullscreen mode, the "real" mouse cursor is bounded to the edges of the screen. So I get to a point where there is no more mouse delta, yet my custom cursor is still somewhere in the middle of the screen and hence can't move further in that direction. If I wanted to clamp it to the edge of the screen when the real cursor is at the edge, then I would get an abrupt jump to the edge of the screen, which isn't desired either. Any help would be appreciated. I'd like to be able to limit the speed of the mouse, but also would appreciate help with the first issue (the non-smooth relative offset between mouse cursor movement and object movement).
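    A rough Java sketch of the delta-clamping idea mentioned above. The maximum speed and field names are assumptions, and this alone does not address the windowed-mode problem of the real cursor leaving the window:

    ```java
    // Clamp a per-frame mouse delta to a maximum length, preserving direction.
    class CursorClamp {
        static final float MAX_SPEED = 25.0f;   // assumed max pixels per update

        float cursorX, cursorY;   // custom cursor position

        void applyDelta(float dx, float dy) {
            float len = (float) Math.sqrt(dx * dx + dy * dy);
            if (len > MAX_SPEED) {
                float scale = MAX_SPEED / len;   // shrink the delta, keep its direction
                dx *= scale;
                dy *= scale;
            }
            cursorX += dx;
            cursorY += dy;
        }
    }
    ```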

    Read the article

  • Copy-and-Pasted Test Code: How Bad is This?

    - by joshin4colours
    My current job is mostly writing GUI test code for various applications that we work on. However, I find that I tend to copy and paste a lot of code within tests. The reason for this is that the areas I'm testing tend to be similar enough to need repetition but not quite similar enough to encapsulate code into methods or objects. I find that when I try to use classes or methods more extensively, tests become more cumbersome to maintain and sometimes outright difficult to write in the first place. Instead, I usually copy a big chunk of test code from one section and paste it to another, and make any minor changes I need. I don't use more structured ways of coding, such as using more OO principles or functions. Do other coders feel this way when writing test code? Obviously I want to follow DRY and YAGNI principles, but I find that test code (automated test code for GUI testing anyway) can make these principles tough to follow. Or do I just need more coding practice and a better overall system of doing things? EDIT: The tool I'm using is SilkTest, which is in a proprietary language called 4Test. As well, these tests are mostly for Windows desktop applications, but I have also tested web apps using this setup.

    Read the article

  • good literature for teaching object oriented thinking in C [closed]

    - by Dipan Mehta
    Quite often C is the primary platform for development. And when things are large scale, I have seen that partitioning the system into different objects is quite a natural thing. Some or many of the object-oriented analysis and design principles are used here very well. This is not a debate about whether or not C is a good candidate for object-oriented programming. This is also NOT a question about how to do OO in C. You can refer to this question, and there are probably many such citations. As far as I am concerned, I have learned some of these things while working with many open source and commercial projects (libjpeg, ffmpeg, GStreamer, which is based on GObject). I can point to a few references that explain some of these concepts, such as: 1. the Event Helix article, 2. the Linux Mag article, 3. one of my answers, which links Schreiner's reference. Unfortunately, when we induct younger folks, it seems too hard to make them learn all of it the hard way. Usually, when we say it's C, the general reaction is to throw away all of the "object thinking". Looking for help extending the above references from those who have been in similar areas of work. Is there any good formal literature that explains how object thinking can be put to use while you are working in C? I have seen tons of books on general "object-oriented paradigms", but they all focus mostly on higher-level languages, not on C. Most C books focus only on the syntax and the obfuscated corners of C, and that's it. There is hardly ANY good reference, especially books or any systematic (I mean formal) literature, on how to apply OO in C. This is very surprising given that so many large-scale open source projects use C and are truly using this very well; yet we hardly see any good formal literature on the subject.

    Read the article

  • Turning a problem into a data model

    - by Fogmeister
    OK, I have an app that I'm creating but I'm just really not sure about how to approach the problem. The idea is fairly simple. I'm just not sure how to wrap it in a data model (or even if I should). TBH I feel like I'm making it more complicated than it needs to be. How it works. The app will have circles along the top in a row that need to be connected to circles along the bottom in a row. 10 circles at the top. 10 at the bottom. One connection per pair of dots. Anyway, I can get the dots to connect I'm just not sure how to wrap it in a data model so that I can analyse what has been connected and see if it is right or not. The circles will be questions and answers. I can make an array of question objects with a question and answer properties. I can then display these as the dot pairs. I'm just not sure how to record which questions have been connected to which answers. It is valid for a user to connect a wrong answer as they all get checked at the end. I was thinking of using SpriteKit but this isn't a restriction. I could use UIKit or something else. TBH, this question is fairly language free as I'm just after a way of modelling it.
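    One possible way to model this, sketched in Java and independent of SpriteKit or UIKit. All names here are illustrative; the only constraints taken from the question are the question/answer pairs and the one-connection-per-dot rule checked at the end:

    ```java
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class QuizBoard {
        record Pair(String question, String answer) {}

        private final List<Pair> pairs;                        // the 10 dot pairs
        private final Map<Integer, Integer> connections = new HashMap<>();

        QuizBoard(List<Pair> pairs) { this.pairs = pairs; }

        String questionAt(int i) { return pairs.get(i).question(); }
        String answerAt(int i)   { return pairs.get(i).answer(); }

        // Called when the user drags a line from top dot q to bottom dot a.
        // Wrong connections are allowed; they are only checked at the end.
        void connect(int questionIndex, int answerIndex) {
            connections.put(questionIndex, answerIndex);
        }

        // Final check: how many connections point at the correct answer index.
        long correctCount(List<Integer> correctAnswerIndexByQuestion) {
            return connections.entrySet().stream()
                    .filter(e -> correctAnswerIndexByQuestion.get(e.getKey()).equals(e.getValue()))
                    .count();
        }
    }
    ```

    The rendering layer (SpriteKit, UIKit, or anything else) then only reads and writes this model.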

    Read the article

  • Modular Database Structures

    - by John D
    I have been examining the code base we use at work and I am worried about the size the packages have grown to. The actual code is modular; procedures have been broken down into small functional (and testable) parts. The issue I see is that we have 100 procedures in a single package - almost an entire domain model. I had thought of breaking these packages down - to create sub-domains that are centered around the procedures' relationships to other objects. Group a bunch of procedures that have 80% of their relationships to three tables, etc. The end result would be a lot more packages, but the packages would be smaller and I feel the entire code base would be more readable - when procedures cross between two domain models it is less of a struggle to figure out which package a procedure belongs to. The problem I now have is what the actual benefit of all this would really be. I looked at the general advantages of modularity: 1. Re-usability 2. Asynchronous Development 3. Maintainability Yet when I consider our latest development, the procedures within the packages are already reusable. At this advanced stage we rarely require asynchronous development - and when it is required we simply ladder the stories across iterations. So I guess my question is whether people know of reasons why you would break down classes rather than just the methods inside of classes. Right now I do believe there is an issue with these mega packages forming, but the only benefit I can really pin down for breaking them down is readability - something that experience gained from working with them would solve.

    Read the article

  • Collision planes confusion

    - by Jeffrey
    I'm following this tutorial by thecplusplusguy, and in the linked video he explains that, for example for the world basement and walls, we need to create the actual rendered (shown to the player) walls, then duplicate them, place them at the same coordinates as the rendered walls, and call them collision (by setting their material to collision). Then, in the Object loader function, it defines that objects with material == collision are collision planes and should not be rendered but just used to check collision. Now I'm pretty confused. Why would we add this kind of complexity to a problem that can easily be solved by a simple loadObject(string plane_object, bool check_collision): Create only the walls object (by loading the .obj file in plane_object); define them also as collision planes whenever check_collision is set to true. In this case we have lowered the complexity of his method and made it more flexible and faster to develop (faster because we don't always have to make a copy for each plane, and flexible because we don't hardcode the Object loader). The only case in which this method could not work is when we need hidden collision planes, and for that we could modify the loadObject() function like this: loadObject(string plane_object, bool check_collision = true, bool hide_object = false); Create only the walls object (by loading the .obj file in plane_object); define them also as collision planes whenever check_collision is set to true; and add the ability to actually show the object or hide it based on hide_object. The final question is: am I right? What possible problems would be encountered with my solution versus his?

    Read the article

  • Learning OpenGL GLSL - VAO buffer problems?

    - by Bleary
    I've just started digging through OpenGL and GLSL, and now stumbled on something I can't get my head around this one!? I've stepped back to loading a simple cube and using a simple shader on it, but the result is triangles drawn incorrectly and/or missing. The code I had working perfectly on meshes, but was attempting to move to using VAOs so none of the code for storing the vertices and indices has changed. http://i.stack.imgur.com/RxxZ5.jpg http://i.stack.imgur.com/zSU50.jpg What I have for creating the VAO and buffers is this //Create the Vertex array object glGenVertexArrays(1, &vaoID); // Finally create our vertex buffer objects glGenBuffers(VBO_COUNT, mVBONames); glBindVertexArray(vaoID); // Save vertex attributes into GPU glBindBuffer(GL_ARRAY_BUFFER, mVBONames[VERTEX_VBO]); // Copy data into the buffer object glBufferData(GL_ARRAY_BUFFER, lPolygonVertexCount*VERTEX_STRIDE*sizeof(GLfloat), lVertices, GL_STATIC_DRAW); glEnableVertexAttribArray(pos); glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, VERTEX_STRIDE*sizeof(GLfloat),0); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mVBONames[INDEX_VBO]); glBufferData(GL_ELEMENT_ARRAY_BUFFER, lPolygonCount*sizeof(unsigned int), lIndices, GL_STATIC_DRAW); glBindVertexArray(0); And the code for drawing the mesh. glBindVertexArray(vaoID); glUseProgram(shader->programID); GLsizei lOffset = mSubMeshes[pMaterialIndex]->IndexOffset*sizeof(unsigned int); const GLsizei lElementCount = mSubMeshes[pMaterialIndex]->TriangleCount*TRIAGNLE_VERTEX_COUNT; glDrawElements(GL_TRIANGLES, lElementCount, GL_UNSIGNED_SHORT, reinterpret_cast<const GLvoid*>(lOffset)); // All the points are indeed in the correct place!? //glPointSize(10.0f); //glDrawElements(GL_POINTS, lElementCount, GL_UNSIGNED_SHORT, 0); glUseProgram(0); glBindVertexArray(0); Eyes have become bleary looking at this today so any thoughts or a fresh set of eyes would be greatly appreciated.

    Read the article

  • What should a domain object's validation cover?

    - by MarcoR88
    I'm trying to figure out how to do validation of domain objects that need external resources, such as data mappers/DAOs. Firstly, here's my code: class User { const INVALID_ID = 1; const INVALID_NAME = 2; const INVALID_EMAIL = 4; int getID(); void setID(Int i); string getName(); void setName(String s); string getEmail(); void setEmail(String s); int getErrorsForInsert(); // returns a bitmask for INVALID_* constants int getErrorsForUpdate(); } My worries are about the uniqueness of the email; checking it would require the storage layer. Reading others' code, it seems that two solutions are equally accepted: both perform the uniqueness validation in the data mapper, but some set an error state on the DO, user.addError(User.INVALID_EMAIL), while others prefer to throw a totally different type of exception that covers only persistence, like: UserStorageException { const INVALID_EMAIL = 1; const INVALID_CITY = 2; } What are the pros and cons of these solutions?

    Read the article

  • Designing a plug-in system

    - by madflame991
    I'm working on a Java project and I would like to add a plug-in system. More precisely, I would like to let the user design his own module, pack it into a jar, leave it in a "plugins/" subfolder of my application and be done with it. I've managed to get a child classloader to instantiate objects of classes located in external jars, but now I'm facing a design dilemma: Say Joe makes a plug-in and he packs it in joeplugin.jar. I would really like Joe to have a class named "instantiation.Factory" and I would also like everyone to have this class with this exact location and name. (This factory class obviously implements an interface that I provide and through it I get what I want from the plug-in.) If Joe weren't restricted in this way, I would have to look through his entire jar for some class that implements my factory interface, and I don't want to imagine how complicated things get. So my question is: should I enforce a strict naming convention for this single class? I have no idea how plug-in systems work.
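    A sketch of the fixed-class-name convention being described. The PluginFactory interface and method names are assumptions; only the plugins/ folder and the instantiation.Factory name come from the question:

    ```java
    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.util.ArrayList;
    import java.util.List;

    // The host-provided interface every plug-in factory is expected to implement.
    interface PluginFactory {
        Object create();
    }

    class PluginLoader {
        List<PluginFactory> loadAll(File pluginsDir) throws Exception {
            List<PluginFactory> factories = new ArrayList<>();
            File[] jars = pluginsDir.listFiles((dir, name) -> name.endsWith(".jar"));
            if (jars == null) return factories;
            for (File jar : jars) {
                // Child class loader so plug-in classes stay isolated from the host.
                URLClassLoader loader = new URLClassLoader(
                        new URL[] { jar.toURI().toURL() },
                        getClass().getClassLoader());
                Class<?> factoryClass = Class.forName("instantiation.Factory", true, loader);
                factories.add((PluginFactory) factoryClass.getDeclaredConstructor().newInstance());
            }
            return factories;
        }
    }
    ```

    For comparison, the JDK's java.util.ServiceLoader solves the same discovery problem through a META-INF/services entry in each jar rather than a fixed class name, which avoids the naming restriction entirely.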

    Read the article

  • How do you keep SOA DRY?

    - by TaylorOtwell
    In our organization, we've shifted to a more "service oriented architecture". To give an example, let's assume we need to retrieve a "Quote" object. This quote has a shipper, a consignee, phone numbers, contacts, email addresses, and other location information. In other words, a Quote object is made up of many other objects. So, it seems like it would make sense to make a "Quote Retrieval Service". In our situation, we've accomplished this by creating a .NET solution and writing the service. The service API looks something like this (in pseudo-code): Function GetQuote(String ID) Returns Quote So, so far so good. Now, when this service is consumed, to keep things "de-coupled", we are creating essentially a duplicate of the Quote object and mapping from the QuoteService version of the Quote into the consumer's version of the Quote. In many cases, these classes will have the exact same properties. So, if the Quote service is consumed by 5 other applications, we would have 6 definitions of what a "Quote" is. One for each consumer, and one for the service. This feels wrong. I thought code was supposed to be DRY, but it seems like our method of SOA is forcing us to create tons of duplicated class definitions. What are we doing wrong, or is the code duplication just a "necessary evil" of SOA?

    Read the article

  • Red Samurai Performance Audit Tool – OOW 2013 release (v 1.1)

    - by JuergenKress
    We have been running our Red Samurai Performance Audit tool and monitoring ADF performance in various projects for about a year and a half. It helps us a lot to understand ADF performance bottlenecks and to tune slow ADF BC View Objects or optimise large ADF BC fetches from the DB. There is a special update implemented for OOW'13 - advanced ADF BC statistics are collected directly from your application's ADF BC runtime and later displayed as graphical information in the dashboard. I will be attending OOW'13 in San Francisco; feel free to stop me and ask about this tool - I will be happy to give it away and explain how to use it in your project. The original audit screen with ADF BC performance issues is part of our Audit console application. Audit console v1.1 is improved with one more tab - Statistics. This tab displays all SQL select statements produced by ADF BC over time, logged users, AM access load distribution, and the number of AM activations along with user sessions. Available graphs: Daily Queries - total number of SQL selects per day; Hourly Queries - last 48 hours; Logged Users - total number of user sessions per day; SQL Selects per Application Module - workload per Application Module; Number of Activations and User Sessions - last 48 hours - displays stress load. Read the complete article here. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community - please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Javascript MVC in principle

    - by Michael
    Suppose I want to implement MVC in JavaScript. I am not asking about MVC frameworks; I am asking how to build it in principle. Let's consider the search page of an e-commerce site, which works as follows: the user chooses a product and its attributes and presses a "search" button. The application sends the search request to the server and receives a list of products. The application displays the list in the web page. I think the Model holds a Query and a list of Product objects and "publishes" such events as "query updated", "list of products updated", etc. The Model is not aware of the DOM and the server, of course. The View holds the entire DOM tree and "subscribes" to the Model events to update the DOM. Besides, it "publishes" such events as "user chose a product", "user pressed the search button", etc. The Controller does not hold any data. It "subscribes" to the View events, calls the server, and updates the Model. Does it make sense?

    Read the article

  • DataContractSerializer truncated string when used with MemoryStream,but works with StringWriter

    - by Michael Freidgeim
    We've used the following DataContractSerializeToXml method for a long time, but recently noticed that it doesn't return the full XML for a long object; it truncates it and returns an XML string whose length is a multiple of 1024, with the remainder not included. internal static string DataContractSerializeToXml<T>(T obj) { string strXml = ""; Type type = obj.GetType(); //typeof(T) DataContractSerializer serializer = new DataContractSerializer(type); System.IO.MemoryStream aMemStr = new System.IO.MemoryStream(); System.Xml.XmlTextWriter writer = new System.Xml.XmlTextWriter(aMemStr, null); serializer.WriteObject(writer, obj); strXml = System.Text.Encoding.UTF8.GetString(aMemStr.ToArray()); return strXml; } I tried to debug and searched Google for similar problems, but didn't find an explanation of the error. The closest was http://forums.codeguru.com/showthread.php?309479-MemoryStream-allocates-size-multiple-of-1024-( which talks about an incorrect length, but not about a truncated string. Fortunately, replacing the MemoryStream with a StringWriter, as described at http://billrob.com/archive/2010/02/09/datacontractserializer-converting-objects-to-xml-string.aspx, fixed the issue: var serializer = new DataContractSerializer(tempData.GetType()); using (var backing = new System.IO.StringWriter()) using (var writer = new System.Xml.XmlTextWriter(backing)) { serializer.WriteObject(writer, tempData); data.XmlData = backing.ToString(); }

    Read the article

  • Functional programming compared to OOP with classes

    - by luckysmack
    I have been interested in some of the concepts of functional programming lately. I have used OOP for some time now. I can see how I would build a fairly complex app in OOP. Each object would know how to do the things that object does, or anything its parent class does as well. So I can simply tell Person().speak() to make the person talk. But how do I do similar things in functional programming? I see how functions are first-class items. But a function only does one specific thing. Would I simply have a say() method floating around and call it with an equivalent of a Person() argument so I know what kind of thing is saying something? So I can see the simple things; but how would I do the equivalent of OOP and objects in functional programming, so I can modularize and organize my code base? For reference, my primary experience with OOP is Python, PHP, and some C#. The languages that I am looking at that have functional features are Scala and Haskell, though I am leaning towards Scala. Basic Example (Python): class Animal(object): def say(self, what): print(what) class Dog(Animal): def say(self, what): super().say('dog barks: {0}'.format(what)) class Cat(Animal): def say(self, what): super().say('cat meows: {0}'.format(what)) dog = Dog() cat = Cat() dog.say('ruff') cat.say('purr')

    Read the article

  • Problem graphics with ATI Radeon x1270 (RS690M) on Ubuntu 12.04

    - by Giuseppe Della Corte
    I'm Italian, so I apologize for my English! I'm a beginner with Ubuntu; I tried it on my desktop PC and it's fantastic, fast and fun! I decided to try it on my netbook, a Packard Bell DOT M/A, with this configuration: AMD Athlon L110 1.2GHz, 2GB of RAM, ATI Radeon x1270 (RS690M), 150GB hard disk. I installed Ubuntu 12.04 with Wubi (dual-boot Windows 7 + Ubuntu 12.04), because the netbook does not have a DVD drive! During the installation everything was OK, but after it was installed I see glitches in the graphical display: objects (buttons, mouse pointer, text bar, etc.) are drawn in the wrong colors and shift around. At one point I couldn't see anything at all, as in this screenshot: The drivers are the open ones already included in Ubuntu. On Windows 7 the video card runs fine; it can run Aero (transparent window effects) well, and I can watch movies in HD and play some games with the Catalyst drivers from AMD. Now I ask a favor: can you help me solve this problem? Is there a fix for this driver, or different drivers on the Internet? Thanks for your attention, goodbye!

    Read the article

  • Reflective discovery of an inner class in an API

    - by wassup
    Let me ask you, since this has bothered me for quite a while but appears subjectively to be the best solution for my problem: is reflective discovery of an inner class for API purposes really such a bad idea? First, let me explain what I mean by "reflective discovery" and all that stuff. I am sketching an API for a Java database system that'll be centered around block-based entities (don't ask me what that means - that's a long story), and those entities can be read and returned to the Java code as objects subclassed from the Entity class. I have an Entity.Factory class that, by means of fluent interfaces, takes a Class<? extends Entity> argument and then uses an instance of Section.Builder, Property.Builder, or whatever builder the entity has, to put it into the back-end storage. The idea of registering all entity types and their builders just doesn't appeal to me, so I thought that the closest solution that would satisfy my design needs would be to discover, using reflection, all inner classes of Entity classes and find one that's called Builder. Looking for some expert insight :) And if I missed some important design details (which could happen, as I tried to make this question as concise as possible), just tell me and I'll add them.
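    A minimal sketch of the reflective lookup being described, assuming a hypothetical EntityBuilder marker interface supplied by the API (the real API's builder contract is not shown in the question):

    ```java
    import java.util.Arrays;
    import java.util.Optional;

    class BuilderDiscovery {
        // Stand-in for whatever builder interface the API actually exposes.
        interface EntityBuilder { }

        // Find a nested class named "Builder" on the given Entity subclass that
        // also implements the API's builder interface.
        static Optional<Class<?>> findBuilder(Class<?> entityType) {
            return Arrays.stream(entityType.getDeclaredClasses())
                    .filter(inner -> inner.getSimpleName().equals("Builder"))
                    .filter(EntityBuilder.class::isAssignableFrom)
                    .findFirst();
        }
    }
    ```

    Checking the interface as well as the name keeps an unrelated nested class that happens to be called Builder from being picked up by accident.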

    Read the article

  • How bad is it to have two methods with the same name but different signatures in two classes?

    - by Super User
    I have a design problem related to the public interface, the names of methods, and the understanding of my API and my code. I have two classes like this: class A: ... function collision(self): .... ... class B: .... function _collision(self, another_object, l, r, t, b): .... The first class has one public method named collision, and the second has one private method called _collision. The two methods differ in argument type and number. In the API, the _collision method is private. For the example, let's say that the _collision method checks if the object is colliding with another_object under certain conditions l, r, t, b (for example, collides on the left side, the right side, etc.) and returns true or false according to the case. The collision method, on the other hand, resolves all the collisions of the object with other objects. The two methods have the same name because I think it is better to avoid overloading the design with different names for methods that do almost the same thing, but in distinct contexts and classes. Is this clear enough to the reader, or should I change the method's name?

    Read the article

  • I've been told that Exceptions should only be used in exceptional cases. How do I know if my case is exceptional?

    - by tieTYT
    My specific case here is that the user can pass a string into the application, and the application parses it and assigns it to structured objects. Sometimes the user may type in something invalid. For example, their input may describe a person but they may say their age is "apple". Correct behavior in that case is to roll back the transaction and tell the user an error occurred and they'll have to try again. There may be a requirement to report on every error we can find in the input, not just the first. In this case, I argued we should throw an exception. He disagreed, saying, "Exceptions should be exceptional: it's expected that the user may input invalid data, so this isn't an exceptional case." I didn't really know how to argue that point, because by the definition of the word, he seems to be right. But it's my understanding that this is why exceptions were invented in the first place. It used to be that you had to inspect the result to see if an error occurred. If you failed to check, bad things could happen without you noticing. Without exceptions, every level of the stack needs to check the result of the methods it calls, and if a programmer forgets to check at one of these levels, the code could accidentally proceed and save invalid data (for example). Seems more error-prone that way. Anyway, feel free to correct anything I've said here. My main question is: if someone says exceptions should be exceptional, how do I know if my case is exceptional?
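    One hedged way to reconcile "report every error" with exceptions is to collect all problems while parsing and throw once at the end. A small Java sketch; the class and field names are invented for illustration:

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // Carries the full list of problems so the caller can report all of them.
    class InvalidInputException extends RuntimeException {
        final List<String> errors;
        InvalidInputException(List<String> errors) {
            super("Input has " + errors.size() + " problem(s)");
            this.errors = errors;
        }
    }

    class PersonParser {
        void parse(String nameField, String ageField) {
            List<String> errors = new ArrayList<>();
            if (nameField == null || nameField.isBlank()) {
                errors.add("name is missing");
            }
            try {
                Integer.parseInt(ageField);
            } catch (NumberFormatException e) {
                errors.add("age '" + ageField + "' is not a number");   // e.g. "apple"
            }
            if (!errors.isEmpty()) {
                throw new InvalidInputException(errors);   // caller rolls back and reports
            }
            // ...assign the parsed values to the structured Person object here...
        }
    }
    ```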

    Read the article

  • How to perform game object smoothing in multiplayer games

    - by spaceOwl
    We're developing an infrastructure to support multiplayer games for our game engine. In simple terms, each client (player) engine sends some pieces of data regarding the relevant game objects at a given time interval. On the receiving end, we step the incoming data to the current time (to compensate for latency), followed by a smoothing step (which is the subject of this question). I was wondering how smoothing should be performed? Currently the algorithm is similar to this: Receive the incoming state for an object (position, velocity, acceleration, rotation, custom data like visual properties, etc.). Calculate a diff between the local object position and the position we have after previous prediction steps. If the diff doesn't exceed some threshold value, start a smoothing step: mark the object's CURRENT POSITION and the TARGET POSITION, then linearly interpolate between these values for 0.3 seconds. I wonder if this scheme is any good, or if there is another common implementation or algorithm that should be used? (For example, should I only smooth the position, or other values as well, such as speed?) Any help will be appreciated.
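    A small Java sketch of the interpolation step described above. The field names and structure are illustrative; only the 0.3-second window comes from the question:

    ```java
    // Lerp the rendered position from where it was when the new state arrived
    // toward the target over a fixed window.
    class PositionSmoother {
        static final float SMOOTH_TIME = 0.3f;   // seconds, from the question

        float currentX, currentY;   // what we render
        float startX, startY;       // position when the target was set
        float targetX, targetY;     // latest predicted/authoritative position
        float elapsed;              // time since the target was set

        void setTarget(float x, float y) {
            startX = currentX;
            startY = currentY;
            targetX = x;
            targetY = y;
            elapsed = 0f;
        }

        void update(float deltaSeconds) {
            elapsed += deltaSeconds;
            float t = Math.min(1f, elapsed / SMOOTH_TIME);   // 0..1 over the window
            currentX = startX + (targetX - startX) * t;
            currentY = startY + (targetY - startY) * t;
        }
    }
    ```

    The same pattern can be applied to velocity or rotation if those values also visibly pop when a new state arrives.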

    Read the article

  • Deferred rendering order?

    - by Nick Wiggill
    There are some effects for which I must do multi-pass rendering. I've got the basics set up (FBO rendering etc.), but I'm trying to get my head around the most suitable setup. Here's what I'm thinking... The framebuffer objects: FBO 1 has a color attachment and a depth attachment. FBO 2 has a color attachment. The render passes: Render g-buffer: normals and depth (used by outline & DoF blur shaders); output to FBO no. 1. Render solid geometry, bold outlines (as in toon shader), and fog; output to FBO no. 2. (can all render via a single fragment shader -- I think.) (optional) DoF blur the scene; output to the default frame buffer OR ELSE render FBO2 directly to default frame buffer. (optional) Mesh wireframes; composite over what's already in the default framebuffer. Does this order seem viable? Any obvious mistakes?

    Read the article

  • How to keep track of previous scenes and return to them in libgdx

    - by MxyL
    I have three scenes: SceneTitle, SceneMenu, SceneLoad. (The difference between the title scene and the menu scene is that the title scene is what you see when you first turn on the game, and the menu scene is what you can access during the game. During the game, meaning after you've hit "play!" in the title scene.) I provide the ability to save progress and consequently load a particular game. An issue that I've run into is being able to easily keep track of the previous scene. For example, if you enter the load scene and then decide to change your mind, the game needs to go back to where you were before; this isn't something that can be hardcoded. Now, an easy solution off the top of my head is to simply maintain a scene stack, which basically keeps track of history for me. A simple transaction would be as follows: I'm currently in the menu scene, so the top of the stack is SceneMenu. I go to the load scene, so the game pushes SceneLoad onto the stack. When I return from the load scene, the game pops SceneLoad off the stack and initializes the scene that's currently at the top, which is SceneMenu. I'm coding in Java, so I can't simply pass around classes as if they were objects, so I've decided to implement an enum for each scene, put that on the stack, and then have my scene-managing class go through a list of if conditions to return the appropriate instance of the class. How can I implement my scene stack without having to do too much work maintaining it?
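    A minimal Java sketch of the enum-based scene stack described above. The SceneId values and the show() mapping are stand-ins for however the real SceneTitle/SceneMenu/SceneLoad classes are constructed:

    ```java
    import java.util.ArrayDeque;
    import java.util.Deque;

    class SceneManager {
        enum SceneId { TITLE, MENU, LOAD }

        private final Deque<SceneId> history = new ArrayDeque<>();

        void push(SceneId next) {
            history.push(next);
            show(next);
        }

        // "Change your mind" path: drop the current scene and re-show the previous one.
        void pop() {
            history.pop();
            SceneId previous = history.peek();
            if (previous != null) {
                show(previous);
            }
        }

        private void show(SceneId id) {
            // the if/switch chain from the question lives in exactly one place
            switch (id) {
                case TITLE -> { /* setScreen(new SceneTitle()) */ }
                case MENU  -> { /* setScreen(new SceneMenu())  */ }
                case LOAD  -> { /* setScreen(new SceneLoad())  */ }
            }
        }
    }
    ```

    Keeping the enum-to-scene mapping in one method means the stack itself never needs to change when scenes are added.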

    Read the article

  • Object oriented wrapper around a dll

    - by Tom Davies
    So, I'm writing a C# managed wrapper around a native dll. The dll contains several hundred functions. In most cases, the first argument to each function is an opaque handle to a type internal to the dll. So, an obvious starting point for defining some classes in the wrapper would be to define classes corresponding to each of these opaque types, with each instance holding and managing the opaque handle (passed to its constructor). Things are a little awkward when dealing with callbacks from the dll. Naturally, the callback handlers in my wrapper have to be static, but the callback arguments invariably contain an opaque handle. In order to get from the static callback back to an object instance, I've created a static dictionary in each class, associating handles with class instances. In the constructor of each class, an entry is put into the dictionary, and this entry is then removed in the destructor. When I receive a callback, I can then consult the dictionary to retrieve the class instance corresponding to the opaque reference. Are there any obvious flaws to this? Something that seems to be a problem is that the existence of the static dictionary means that the garbage collector will not act on my class instances that are otherwise unreachable. As they are never garbage collected, they never get removed from the dictionary, so the dictionary grows. It seems I might have to manually dispose of my objects, which is something I would absolutely like to avoid. Can anyone suggest a good design that allows me to avoid having to do this?

    Read the article
