Search Results

Search found 25496 results on 1020 pages for 'monotouch development'.


  • Spherical harmonics lighting interpolation

    - by TravisG
    I want to use hardware filtering to smooth out the values in a texture when I sample it at coordinates that are not exactly at a texel center. The catch is that the texels store 2 bands of spherical harmonics coefficients (= 4 coefficients), not RGBA intensity values. Can I just use hardware filtering like that (GL_LINEAR, with and without mip mapping) without any further considerations? In other words: if I first converted the coefficients back to intensity representations and then manually interpolated between two intensities, would the resulting intensity be the same as if I interpolated between the coefficient vectors directly and then converted the interpolated result to intensities?
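
    Not stated in the question, but worth noting: spherical harmonics reconstruction is linear in the coefficients, so the two orders of operation agree (up to quantization in the texture format). With a reconstructed intensity I(ω) = Σ_i c_i Y_i(ω):

        (1 - t) * Σ_i a_i Y_i(ω) + t * Σ_i b_i Y_i(ω) = Σ_i [ (1 - t) a_i + t b_i ] Y_i(ω)

    so linearly filtering the coefficient texture (GL_LINEAR, and likewise the linear part of mip filtering) is equivalent to linearly filtering the reconstructed intensities.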

    Read the article

  • How much time will it take to learn 3ds Max?

    - by Mirror51
    I am not a 3D developer, but I want to learn 3ds Max just for building simple houses with 2-3 rooms. I don't actually want to model from scratch: what I really want to do is take existing models of homes, rooms, and hotels from the internet and add my name or my photo to them, just for fun. So I want to know roughly how much time you think it will take me to do that sort of thing. It's not my career, just a hobby. If it's going to take a long time I don't want to waste the effort, but if I can get going in a week or so that would be good. I'd like to hear from experienced developers. Thanks.

    Read the article

  • Proper way to encapsulate a Shader into different modules

    - by y7haar
    I am planning to build a shader system that can be accessed through different components/modules in C++. Each component has its own responsibility, such as transform-related work (handling the MVP matrix, ...), texture handling, light calculation, etc. Here's an example: I would like to display an object that has a texture and a toon-shading material applied, and it should be movable. I could write ONE shading program that handles all three responsibilities, accessed through three different components (texture handler, toon shading, transform). This means I have to take care of feeding the GLSL shader with different uniforms/attributes, which implies knowing all the uniform and attribute locations the GLSL shader owns, and also providing different algorithms to calculate the value of each input variable. Similar functions would be grouped together in one component. One possible approach would be to describe each shader in its own definition file written in JSON/XML, parse that file in C++ to get all the input members, and generate and compile the resulting GLSL. But maybe there is another way that is not so complex? So I'm searching for a way to build a system like that, but I'm not sure yet which approach is best.
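
    One minimal shape such a split can take, sketched in C# (the question is about C++, but the structure translates directly); ShaderProgram and the uniform names here are placeholders, not a real API:

        using System.Collections.Generic;

        // Placeholder wrapper: in a real renderer this would cache glGetUniformLocation results
        // and forward to the glUniform* family. Here it just records values so the sketch compiles.
        public class ShaderProgram
        {
            private readonly Dictionary<string, object> uniforms = new Dictionary<string, object>();
            public void SetUniform(string name, object value) { uniforms[name] = value; }
        }

        // Each component is responsible for one group of shader inputs.
        public interface IShaderComponent
        {
            void Apply(ShaderProgram program);
        }

        public class TransformComponent : IShaderComponent
        {
            public float[] ModelViewProjection = new float[16];
            public void Apply(ShaderProgram program) { program.SetUniform("uMVP", ModelViewProjection); }
        }

        public class TextureComponent : IShaderComponent
        {
            public int TextureUnit;
            public void Apply(ShaderProgram program) { program.SetUniform("uDiffuseTex", TextureUnit); }
        }

        // A material owns one compiled program plus the components that feed it;
        // binding it just asks every component to upload the inputs it owns.
        public class Material
        {
            private readonly ShaderProgram program;
            private readonly List<IShaderComponent> components;

            public Material(ShaderProgram program, List<IShaderComponent> components)
            {
                this.program = program;
                this.components = components;
            }

            public void Bind()
            {
                foreach (var component in components)
                    component.Apply(program);
            }
        }

    The JSON/XML description idea fits on top of this: the parsed definition decides which components a material gets, and each component already knows which uniforms it must provide.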

    Read the article

  • Attributes and Behaviours in game object design

    - by Brukwa
    Recently I read some interesting slides about game object design by Marcin Chady, Theory and Practice of the Game Object Component Architecture. I prototyped a quick sample that uses the whole Attribute/Behaviour idea with some sample data. Now I have hit a small problem after adding a RenderingSystem to my prototype application. I created an object with a RenderBehaviour which listens for messages (an OnMessage function) such as MovedObject in order to mark objects as invalid, and in the OnUpdate pass I insert a new renderable object into the renderer queue. I have noticed that rendering updates should be the last thing done in a single frame, which makes the RenderBehaviour depend on any other Behaviour that changes object position (e.g. the PhysicsSystem and PhysicsBehaviour). I am not even sure I am doing this the way it should be done. Do you have any clues that might put me on the right track?
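
    One common way to handle that ordering (an assumption on my part, not something taken from the slides): give each Behaviour an explicit update stage and have the owning object run behaviours in stage order, so rendering always happens after physics and other position changes. A sketch in C#:

        using System.Collections.Generic;
        using System.Linq;

        public enum UpdateStage { Input = 0, Logic = 1, Physics = 2, Render = 3 }

        public abstract class Behaviour
        {
            public abstract UpdateStage Stage { get; }
            public abstract void OnUpdate(float dt);
        }

        public class GameObject
        {
            private readonly List<Behaviour> behaviours = new List<Behaviour>();

            public void Add(Behaviour behaviour) { behaviours.Add(behaviour); }

            public void Update(float dt)
            {
                // Stage order guarantees a RenderBehaviour runs after any PhysicsBehaviour
                // that may have moved the object this frame.
                foreach (var behaviour in behaviours.OrderBy(x => x.Stage))
                    behaviour.OnUpdate(dt);
            }
        }

    The same idea scales up to system level: update all physics components of all objects, then all render components, instead of interleaving per object.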

    Read the article

  • World to Pixel Transformation

    - by D00d
    My objects have a location in world coordinates (basically 1.0f is a meter). If I simply draw my objects using their world coordinates, each meter will correspond to one pixel. Obviously that's not what I want. I don't want to have to apply a transformation to each and every object's position when I draw it. Since I happen to be using XNA, and SpriteBatch allows a Matrix to be passed as an argument to its Begin method, I was wondering if there is a way to pass the world-to-pixel transformation there. Any suggestions? So far Matrix.CreateScale(new Vector3(zoom, zoom, 1)) puts the objects in their proper spots, but it also scales up the sprites. Is there a way to transform the position without enlarging the sprite? Thanks
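
    Two options that come to mind (a sketch, not from the question): keep the scale matrix in Begin and counter-scale each sprite by 1/zoom in the Draw call, or drop the matrix and convert only positions through a small helper. The helper version, with PixelsPerMeter as an assumed name, looks roughly like this:

        using Microsoft.Xna.Framework;

        // Converts world positions (meters) to screen pixels without touching sprite size.
        public class Camera2D
        {
            public float PixelsPerMeter = 32f;     // assumed scale factor
            public Vector2 Offset = Vector2.Zero;  // screen-space position of the world origin

            public Vector2 WorldToScreen(Vector2 worldPosition)
            {
                return worldPosition * PixelsPerMeter + Offset;
            }
        }

        // Usage in Draw(): only the position is converted, so sprites keep their pixel size.
        // spriteBatch.Begin();
        // spriteBatch.Draw(texture, camera.WorldToScreen(obj.Position), Color.White);
        // spriteBatch.End();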

    Read the article

  • Adding a short delay between bullets

    - by Sun
    I'm having some trouble simulating bullets in my 2D shooter. I want mechanics similar to Mega Man, where the user can hold down the shoot button and a continuous stream of bullets is fired, but with a slight delay between them. Currently, when the user fires a bullet in my game I get an almost laser-like effect. Below is a screenshot of some bullets being fired while running and jumping. In my update method I have the following:

        if (gc.getInput().isKeyDown(Input.KEY_SPACE)) {
            bullets.add(new Bullet(player.getPos().getX() + 30, player.getPos().getY() + 17));
        }

    Then I simply iterate through the array list, increasing each bullet's x value on every update. Moreover, pressing the shoot button (space bar) creates multiple bullets instead of just one, even though I only add one new bullet to my array list. What would be the best way to solve this problem?
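
    A common fix, sketched in C# (the question's code is Java/Slick2D, but the pattern is identical; FireDelay and the class name are assumptions): track the time since the last shot and only spawn a bullet once a fixed cooldown has elapsed. This both spaces out the stream while the key is held and stops a single press from emitting a bullet on every consecutive frame.

        public class Gun
        {
            public float FireDelay = 0.15f;            // seconds between shots (assumed value)
            private float timeSinceLastShot = 999f;    // large so the first shot fires immediately

            // Call once per frame with the frame's elapsed time in seconds.
            public void Update(float dt)
            {
                timeSinceLastShot += dt;
            }

            // Returns true only when the cooldown has elapsed; the caller then spawns one bullet.
            public bool TryFire()
            {
                if (timeSinceLastShot < FireDelay)
                    return false;
                timeSinceLastShot = 0f;
                return true;
            }
        }

        // In the update loop:
        // gun.Update(dt);
        // if (shootKeyHeld && gun.TryFire())
        //     bullets.Add(new Bullet(player.X + 30, player.Y + 17));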

    Read the article

  • Questions about adding JiglibX to an existing project

    - by SomeXnaChump
    I've got a very simple existing project that basically contains a lot of cubes. Now I want to add a physics system to it, and JiglibX seemed like the simplest one with some tutorials available. My main problem is that the physics don't seem to be working the way I imagined: I expected my tower of cubes to come crashing down, but they don't seem to do anything. I think my problem is that my cubes do not inherit DrawableGameComponent; they are managed by a world object that updates and renders them, so they are never put into the game's component list. I am not sure whether this means JiglibX cannot interact with them, because in all the tutorials there are no explicit calls to add the Body objects to the physics system, so I can only presume they use a static/singleton under the hood which automatically hooks everything in, or that they use the game's component list somehow. I also noticed that a lot of the tutorials use the following when stepping the physics system:

        float timeStep = (float)gameTime.ElapsedGameTime.Ticks / TimeSpan.TicksPerSecond;
        PhysicsSystem.CurrentPhysicsSystem.Integrate(timeStep);

    Would it not be better to keep a local reference to the created PhysicsSystem object and just call myPhysicsSystem.Integrate(timeStep)?
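
    A sketch of how this could be structured, keeping a local PhysicsSystem as the question suggests. Only Integrate appears in the snippet above; the namespace and the EnableBody registration call are assumptions about JigLibX that should be verified against the library's samples:

        using JigLibX.Physics;   // assumed namespace for PhysicsSystem and Body

        public class PhysicsWorld
        {
            private readonly PhysicsSystem physics = new PhysicsSystem();

            public void Register(Body body)
            {
                // Assumption: a JigLibX Body must be enabled/registered before it is simulated,
                // regardless of whether it is a GameComponent. Check your version for the exact call.
                body.EnableBody();
            }

            public void Step(float elapsedSeconds)
            {
                // Same call as in the tutorials, but on a kept reference instead of the static singleton.
                physics.Integrate(elapsedSeconds);
            }
        }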

    Read the article

  • How to prioritize related game entity components?

    - by Paul Manta
    I want to make a game where you have to run over a bunch of zombies with your car. When moving around, the zombies have a few things to take into consideration. When there's no player around they might just roam about randomly, and even when some other component dictates a specific direction, they should wobble to the left and right randomly (like drunk people), which implies a small random deviation in their movement. They should avoid static obstacles: when they see they are headed towards a wall, they should reorient themselves. They should avoid the car: they should try to predict where the car will be based on its velocity and try to move out of the way. When they can, they should try to get near the player. All these types of decisions seem like they should be implemented in different components, but how should I manage them? How can I give different components different weights that reflect the importance of each decision in a given situation? I would need some other component that acts as a manager, but do you have any tips on how I should implement it? Or maybe there's a better solution?
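
    One approach that matches this description is weighted steering behaviours (in the spirit of Reynolds-style steering): each concern returns a desired direction, and a manager scales each by a situational weight and sums them. A rough sketch with all names assumed:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;   // Vector2; any 2D vector type works

        public class ZombieContext
        {
            public Vector2 Position;
            public Vector2 CarPosition;
            public Vector2 CarVelocity;
            public Vector2 PlayerPosition;
        }

        public interface ISteeringBehaviour
        {
            // Desired direction for this concern (wander, avoid wall, avoid car, seek player).
            Vector2 GetSteering(ZombieContext context);

            // Situational importance, e.g. car avoidance grows large when the car is close and fast.
            float GetWeight(ZombieContext context);
        }

        public class SteeringManager
        {
            private readonly List<ISteeringBehaviour> behaviours = new List<ISteeringBehaviour>();

            public void Add(ISteeringBehaviour behaviour) { behaviours.Add(behaviour); }

            public Vector2 Steer(ZombieContext context)
            {
                Vector2 total = Vector2.Zero;
                foreach (var behaviour in behaviours)
                    total += behaviour.GetSteering(context) * behaviour.GetWeight(context);

                if (total != Vector2.Zero)
                    total.Normalize();   // direction only; speed is handled elsewhere
                return total;
            }
        }

    An alternative is strict priority arbitration: take only the highest-priority behaviour that returns a non-zero steering (avoid car beats avoid wall beats seek player beats wander), which avoids weight tuning at the cost of less blended-looking movement.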

    Read the article

  • Tetris: Effective rotation

    - by hqt
    I rotate each piece with a rotation formula. In more detail, because the rotation angle is 90 degrees: xNew = y; yNew = -x. But my method has two problems. 1) Out of the box: each type of piece fits in a 4x4 square (with (0,0) at the bottom left), but with this rotation some pieces end up outside this box; for example, a point can end up with coordinates like (5,6). Please help me fit these coordinates back into the 4x4 box, or give me another formula for this. 2) For the I piece (4 squares in the same row or column) there should be only two distinct rotation states, but the method above still produces 4. How can I prevent this? Thanks :)
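
    For reference (not from the question): rotating inside the N x N bounding box instead of around the origin keeps the coordinates in range. For a 90-degree rotation of a cell (x, y) in an N x N box with indices 0..N-1:

        xNew = y
        yNew = (N - 1) - x      // N = 4 for the usual tetromino boxes

    Applying it twice gives (N-1-x, N-1-y) and four times returns the original cell, so the I piece simply repeats its two distinct states; you can either let that happen or store only the distinct rotation states per piece.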

    Read the article

  • Voronoi regions of a (convex) polygon.

    - by Xavura
    I'm looking to add circle-polygon collisions to my Separating Axis Theorem collision detection. The metanet software tutorial on SAT (http://www.metanetsoftware.com/technique/tutorialA.html#section3), which I discovered in the answer to a question I found when searching, talks about Voronoi regions. I'm having trouble finding material on how I would calculate these regions for an arbitrary convex polygon, and also on how I would determine whether a point is in one of them, and which. The tutorial does contain source code, but it's a .fla and unfortunately I don't have Flash.
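
    For circle-vs-polygon you usually do not need the regions explicitly: finding the closest point on the polygon's boundary to the circle centre tells you which feature's region the centre lies in (an edge if the projection lands inside the segment, a vertex otherwise), and the collision test is then just a distance-vs-radius check. A sketch with assumed names:

        using Microsoft.Xna.Framework;

        public static class PolygonQueries
        {
            // Closest point on segment [a, b] to p. After clamping, t == 0 means p is in
            // vertex a's region, t == 1 means vertex b's region, anything between is the edge's region.
            public static Vector2 ClosestPointOnSegment(Vector2 a, Vector2 b, Vector2 p, out float t)
            {
                Vector2 ab = b - a;
                t = Vector2.Dot(p - a, ab) / ab.LengthSquared();
                t = MathHelper.Clamp(t, 0f, 1f);
                return a + ab * t;
            }

            // Closest point on a convex polygon's boundary (vertices in order) to p.
            public static Vector2 ClosestPointOnPolygon(Vector2[] vertices, Vector2 p)
            {
                Vector2 best = vertices[0];
                float bestDistSq = float.MaxValue;
                for (int i = 0; i < vertices.Length; i++)
                {
                    Vector2 a = vertices[i];
                    Vector2 b = vertices[(i + 1) % vertices.Length];
                    Vector2 candidate = ClosestPointOnSegment(a, b, p, out float t);
                    float distSq = Vector2.DistanceSquared(candidate, p);
                    if (distSq < bestDistSq) { bestDistSq = distSq; best = candidate; }
                }
                return best;
            }
        }

    The circle then collides if the distance from its centre to that closest point is less than its radius (or if the centre is inside the polygon, which needs a separate containment test).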

    Read the article

  • Rotate a vector

    - by marc wellman
    I want my first-person camera to smoothly change its viewing direction from direction d1 to direction d2, where the latter is indicated by a target position t2. So far I have implemented a rotation that works fine, but the speed of the rotation slows down the closer the current direction gets to the desired one, which is what I want to avoid. Here are the two very simple methods I have written so far:

        // this method initiates the direction change and sets the parameters
        public void LookAt(Vector3 target)
        {
            _desiredDirection = target - _cameraPosition;
            _desiredDirection.Normalize();
            _rotation = new Matrix();
            _rotationAxis = Vector3.Cross(Direction, _desiredDirection);
            _isLooking = true;
        }

        // this method gets executed by the Update() method if the _isLooking flag is up
        private void _lookingAt()
        {
            dist = Vector3.Distance(Direction, _desiredDirection);
            // check whether the current direction has reached the desired one
            if (dist >= 0.00001f)
            {
                _rotationAxis = Vector3.Cross(Direction, _desiredDirection);
                _rotation = Matrix.CreateFromAxisAngle(_rotationAxis, MathHelper.ToRadians(1));
                Direction = Vector3.TransformNormal(Direction, _rotation);
            }
            else
            {
                _onDirectionReached();
                _isLooking = false;
            }
        }

    Again, the rotation works fine and the camera reaches its desired direction, but the speed is not constant over the course of the movement - it slows down. How can I achieve a rotation with constant speed?
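
    A likely cause, going by the code above: Matrix.CreateFromAxisAngle expects a normalized axis, but Vector3.Cross(Direction, _desiredDirection) has length sin(angle between them), so the effective rotation per frame shrinks as the two directions converge. Normalizing the axis and clamping the final step gives a constant angular speed. A sketch:

        using System;
        using Microsoft.Xna.Framework;

        public static class RotationHelper
        {
            // Rotates 'direction' towards 'desired' at a constant angular speed (radians per second).
            // Both inputs are assumed to be normalized; the helper name and signature are assumptions.
            public static Vector3 RotateTowards(Vector3 direction, Vector3 desired, float radiansPerSecond, float dt)
            {
                float dot = MathHelper.Clamp(Vector3.Dot(direction, desired), -1f, 1f);
                float remaining = (float)Math.Acos(dot);          // angle still to cover
                if (remaining < 1e-5f)
                    return desired;                               // close enough: snap and stop

                float step = Math.Min(radiansPerSecond * dt, remaining);
                Vector3 axis = Vector3.Cross(direction, desired);
                if (axis.LengthSquared() < 1e-10f)                // (anti)parallel: pick any perpendicular axis
                    axis = Vector3.Up;
                axis.Normalize();                                 // CreateFromAxisAngle needs a unit axis

                Matrix rotation = Matrix.CreateFromAxisAngle(axis, step);
                return Vector3.Normalize(Vector3.TransformNormal(direction, rotation));
            }
        }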

    Read the article

  • Adobe Air Mobile AS3 app: challenges and how to overcome them?

    - by Arthur Wulf White
    I made a PC Flash game for LD 26 - Minimalism, and I am working on porting it to Android. Some questions I'd like to ask: Is it bad to heavily use vector graphics (i.e. this.graphics.lineTo()) in mobile AIR? Does Stencyl completely alleviate this issue? Are there any inherent disadvantages to using AIR Mobile that I'm missing? Where is the documentation for AIR Mobile? (I googled and found no recent books or documentation PDFs so far.)

    Read the article

  • How to handle jumping up a slope in a runner game?

    - by you786
    In a 2D endless runner, what should happen when the player is running "too fast" up a slope and jumps? (The original post illustrated the two cases below with small ASCII diagrams.) In the "normal" case, if he is moving to the right slowly enough, he will jump upwards and land on the flat part of the surface at the top of the slope. However, if he is moving too fast, the jump has no effect: his forward motion brings him back into contact with the slope before he can get high enough to pass over it, so at sufficiently high speed there is effectively no jump. Are there any known ways to solve this issue? I know it's physically correct (at least if the player is constrained to never jump backwards), but are there techniques that other games use to overcome this in a reasonable manner? As a last resort I'll just have to remove all slopes that are too slanted.

    Read the article

  • Jumping over non-stationary objects in a 2D platformer: how could this be solved? [on hold]

    - by help bonafide pigeons
    You know this problem - take Super Mario Bros. for example. When Mario/Luigi/etc. comes close to a pipe, an invisible boundary must prevent him from continuing forward movement. However, when you jump you move in both x and y at the same time, and when you near the pipe in mid-air while falling (i.e. while gravity in the program is "pulling" the sprite back down), you do not want the character to get "stuck" between falling and moving. That problem is solved, but how about this one: the player controls a ball and is attempting to jump and move rightwards over a non-stationary block that moves up and down. How could we measure the block's top and bottom x/y extents to determine the safest outcome for the ball - fall back down, catch the ledge, get pushed down under the block, etc.?

    Read the article

  • How often should multiplayer games communicate with the server?

    - by Bane
    I once heard that RuneScape "ticks" every 0.3s, and that seemed like a very long period of time, although RuneScape is kind of a slow game. I'm building a more dynamic top-down shooter game, and I'm wondering: how often should I communicate with the server? As often as possible, or every 0.1s? How do shooter games usually do it? Both the server and the client are written in JavaScript; node.js and socket.io are being used.

    Read the article

  • What is better for the overall performance and feel of the game: one setInterval performing all the work, or many of them doing individual tasks?

    - by Bane
    This question is, I suppose, not limited to JavaScript, but it is the language I use to create my game, so I'll use it as an example. For now, I have structured my HTML5 game like this:

        var fps = 60;
        var game = new Game();
        setInterval(game.update, 1000/fps);

    And game.update looks like this:

        this.update = function() {
            this.parseInput();
            this.logic();
            this.physics();
            this.draw();
        }

    This seems a bit inefficient; maybe I don't need to do all of those things at once. An obvious alternative would be to have more intervals performing individual tasks, but is it worth it?

        var fps = 60;
        var game = new Game();
        setInterval(game.draw, 1000/fps);
        setInterval(game.physics, 1000/a); // where "a" is some constant, performing the same function as "fps"
        ...

    Which approach should I go with, and why? Is there a better alternative? Also, if the second approach is the best, how frequently should I perform the tasks?
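
    One commonly used middle ground is the fixed-timestep pattern, sketched here in C# but language-agnostic: keep a single loop, draw once per iteration, and step logic/physics at a fixed rate by accumulating elapsed time, instead of juggling several independent timers that can drift apart.

        public class GameLoop
        {
            private const float FixedStep = 1f / 60f;   // simulation rate (assumed)
            private float accumulator;

            // Called once per frame with the real elapsed time in seconds.
            public void Tick(float elapsed)
            {
                ParseInput();

                accumulator += elapsed;
                while (accumulator >= FixedStep)        // catch up in fixed-size steps
                {
                    Logic(FixedStep);
                    Physics(FixedStep);
                    accumulator -= FixedStep;
                }

                Draw();                                 // render once per frame
            }

            private void ParseInput() { }
            private void Logic(float dt) { }
            private void Physics(float dt) { }
            private void Draw() { }
        }

    In the browser the single per-frame callback would typically be requestAnimationFrame rather than setInterval, with the same accumulator logic inside it.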

    Read the article

  • OutOfBounds Exception when creating a PolygonShape using jbox2d

    - by B3nGr33ni3r
    So here's the deal: I'm parsing a file that contains the vertices for a polygon that I want to create in Box2D. I create a new PolygonShape() and then call .set(), giving it a defined array of Vec and that array's .length property. I expected this to work, since the documentation for jbox2d says this method takes a Vec array and the count of Vec objects in that array. However, it errors out, and the failure seems unrelated to my code. The error I get is:

        Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 8
            at org.jbox2d.collision.shapes.PolygonShape.set(PolygonShape.java:174)

    and, upon looking at that line in the jbox2d SVN repository, I still cannot figure out the issue. Any help is appreciated!
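
    A guess at the cause, based on the out-of-bounds index being exactly 8: Box2D limits polygon shapes to a small maximum vertex count, 8 by default (b2_maxPolygonVertices; jbox2d mirrors it in its Settings class), so a parsed polygon with more vertices than that overruns the internal arrays in set(). The usual workaround is to validate the count and split bigger convex polygons into smaller convex pieces. A sketch of that splitting in C# (the project in the question is Java; the constant should be checked against your jbox2d version):

        using System.Collections.Generic;
        using System.Numerics;

        public static class PolygonSplitter
        {
            // Box2D's default limit (b2_maxPolygonVertices); verify the value your build uses.
            public const int MaxPolygonVertices = 8;

            // Splits a convex polygon (vertices in winding order) into convex pieces that each
            // respect the limit, by fanning out from the first vertex. Assumes a convex input
            // with at least 3 vertices.
            public static List<Vector2[]> Split(IReadOnlyList<Vector2> vertices)
            {
                var pieces = new List<Vector2[]>();
                if (vertices.Count <= MaxPolygonVertices)
                {
                    pieces.Add(new List<Vector2>(vertices).ToArray());
                    return pieces;
                }

                int i = 1;
                while (i < vertices.Count - 1)
                {
                    // Each piece keeps vertices[0] and shares its last vertex with the next piece,
                    // so the pieces tile the original polygon without gaps.
                    var piece = new List<Vector2> { vertices[0] };
                    for (int k = 0; k < MaxPolygonVertices - 1 && i < vertices.Count; k++, i++)
                        piece.Add(vertices[i]);
                    i--;   // reuse the last vertex as the start of the next piece
                    pieces.Add(piece.ToArray());
                }
                return pieces;
            }
        }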

    Read the article

  • Why is occlusion failing sometimes?

    - by cad
    I am rendering two cubes in space using XNA 4.0, and occlusion only works from certain angles. From the front angle everything looks correct, but from behind, the cube that should be hidden is drawn on top. This is my Draw method; the cubes are drawn by serverManager and serverManager1:

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            switch (_gameStateFSM.State)
            {
                case GameFSMState.GameStateFSM.INTROSCREEN:
                    spriteBatch.Begin();
                    introscreen.Draw(spriteBatch);
                    spriteBatch.End();
                    break;
                case GameFSMState.GameStateFSM.GAME:
                    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
                    // Text
                    screenMessagesManager.Draw(spriteBatch, firstPersonCamera.cameraPosition, fpsHelper.framesPerSecond);
                    // Camera
                    firstPersonCamera.Draw();
                    // Servers
                    serverManager.Draw(GraphicsDevice, firstPersonCamera.viewMatrix, firstPersonCamera.projMatrix);
                    serverManager1.Draw(GraphicsDevice, firstPersonCamera.viewMatrix, firstPersonCamera.projMatrix);
                    // Room
                    //roomManager.Draw(GraphicsDevice, firstPersonCamera.viewMatrix);
                    spriteBatch.End();
                    break;
                case GameFSMState.GameStateFSM.EXITGAME:
                    break;
                default:
                    break;
            }
            base.Draw(gameTime);
            fpsHelper.IncrementFrameCounter();
        }

    serverManager and serverManager1 are instances of the same ServerManager class, which draws a cube. Its Draw method is:

        public void Draw(GraphicsDevice graphicsDevice, Matrix viewMatrix, Matrix projectionMatrix)
        {
            // Set the World matrix, which defines the position of the cube
            cubeEffect.World = Matrix.CreateTranslation(modelPosition);
            // Set the View matrix, which defines the camera and what it's looking at
            cubeEffect.View = viewMatrix;
            cubeEffect.Projection = projectionMatrix;
            // Enable textures on the cube effect; this is necessary to texture the model
            cubeEffect.TextureEnabled = true;
            cubeEffect.Texture = cubeTexture;
            // Enable some pretty lights
            cubeEffect.EnableDefaultLighting();
            // Apply the effect and render the cube
            foreach (EffectPass pass in cubeEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                cubeToDraw.RenderToDevice(graphicsDevice);
            }
        }

    Obviously there is something I am doing wrong. Any hint on where to look? (Maybe the z-buffer or occlusion tests?)
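
    A likely culprit, judging only from the code above: in XNA 4.0, SpriteBatch.Begin sets the device's DepthStencilState to None, and here the cubes are drawn between Begin and End, so they render with depth testing disabled and simply appear in draw order. Ending the sprite batch first and restoring 3D-friendly states before the cubes usually fixes it; a sketch of the GAME case:

        // Finish all 2D drawing first.
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
        screenMessagesManager.Draw(spriteBatch, firstPersonCamera.cameraPosition, fpsHelper.framesPerSecond);
        spriteBatch.End();

        // SpriteBatch changed these; restore them before drawing 3D geometry.
        GraphicsDevice.DepthStencilState = DepthStencilState.Default;
        GraphicsDevice.BlendState = BlendState.Opaque;
        GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;

        firstPersonCamera.Draw();
        serverManager.Draw(GraphicsDevice, firstPersonCamera.viewMatrix, firstPersonCamera.projMatrix);
        serverManager1.Draw(GraphicsDevice, firstPersonCamera.viewMatrix, firstPersonCamera.projMatrix);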

    Read the article

  • How can I make smoother upwards/downwards controls in pygame?

    - by Zolani13
    This is the loop I use to interpret key events in a Python game:

        # Event loop
        for event in pygame.event.get():
            if event.type == QUIT:
                pygame.quit()
                sys.exit()
            if event.type == pygame.KEYDOWN:
                if event.key == pygame.K_a:
                    my_speed = -10
                if event.key == pygame.K_d:
                    my_speed = 10
            if event.type == pygame.KEYUP:
                if event.key == pygame.K_a:
                    my_speed = 0
                if event.key == pygame.K_d:
                    my_speed = 0

    The 'A' key represents up, while the 'D' key represents down. I use this loop within a larger drawing loop that moves the sprite using this:

        Paddle1.rect.y += my_speed

    I'm just making a simple Pong game (as my first real code/non-GameMaker game), but there's a problem when switching between moving up and moving down. If I hold one direction key and then press the other, now holding both, the direction changes, which is a good thing. But if I then release the first key, the sprite stops; it won't continue in the direction of my second input. This kind of key pressing is actually common with WASD users when changing directions quickly - few people remember to let go of the first button before pressing the second - but my program doesn't accommodate the habit. I think I understand the reason: when I let go of the first key, the KEYUP event still triggers and sets the speed to 0. I need to make sure that a released key only sets the speed to 0 if the other key isn't still being pressed. But the event loop only handles one event at a time, so I can't check whether another key is held while handling the released key's event. This is my dilemma. I want to set up the key controls so that the player doesn't have to release one key before pressing the other, making movement smoother. How can I do that?
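
    Two common fixes: poll the current key state every frame instead of relying only on events (in pygame that is pygame.key.get_pressed()), or keep per-key "held" flags updated by the events and derive the speed from them after the event loop. The second idea, sketched in C# with assumed names (the pattern carries straight over to Python):

        public class PaddleInput
        {
            private bool upHeld;      // 'a' in the question
            private bool downHeld;    // 'd' in the question
            private char lastPressed;

            public void OnKeyDown(char key)
            {
                if (key == 'a') { upHeld = true; lastPressed = 'a'; }
                if (key == 'd') { downHeld = true; lastPressed = 'd'; }
            }

            public void OnKeyUp(char key)
            {
                if (key == 'a') upHeld = false;
                if (key == 'd') downHeld = false;
            }

            // Called once per frame, after all events have been processed.
            public int CurrentSpeed()
            {
                if (upHeld && downHeld) return lastPressed == 'a' ? -10 : 10;  // most recent press wins
                if (upHeld) return -10;
                if (downHeld) return 10;
                return 0;
            }
        }

    Releasing either key now just clears its flag, so movement continues in the direction of whichever key is still held.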

    Read the article

  • How does a web game engine work?

    - by TWCrap
    Hi all. First of all, please don't tell me I shouldn't start with this; I just want to know how it works... The thing is: how does the engine of a web game work? Games like Tribal Wars, Grepolis, and Forge of Empires. How does the "keep alive" part work? I mean, a user starts constructing a building and then quits the browser. The building finishes even while the user's session is expired, but the user's points are only updated when the building is finished. So how does that work? What do you think: do they have some kind of cron job that fires every second, walks through the database, searches for finished buildings, and updates things? Or do you think they do it differently? I hope I was clear. I don't need any code; I'm just interested in the process behind the game. Greetings, Marc
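
    The usual answer, sketched here rather than taken from any of those games: nothing has to run every second. The server stores a finish timestamp when construction starts, and any later request that touches the player (the owner logging in, someone viewing their profile, an occasional low-frequency batch job) first applies every completion whose timestamp has passed. A minimal illustration with assumed names:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Building
        {
            public string Name;
            public DateTime FinishesAtUtc;
            public bool Completed;
        }

        public class Player
        {
            public int Points;
            public List<Building> Buildings = new List<Building>();

            // Called lazily whenever this player's state is loaded or viewed.
            public void ApplyFinishedBuildings(DateTime nowUtc)
            {
                foreach (var building in Buildings.Where(b => !b.Completed && b.FinishesAtUtc <= nowUtc))
                {
                    building.Completed = true;
                    Points += 100;   // assumed reward; a real game would look this up per building type
                }
            }
        }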

    Read the article

  • failbit is being set and I can't figure out why

    - by felipedrl
    I'm writing a MIDI file loader. Everything is going fine until, at some track, I get a failbit while trying to read from the file. I can't figure out why: I've checked the file size and it's OK. Checking errno returns 0. Any ideas? Thanks. The snippet follows:

        file.read(reinterpret_cast<char*>(&mHeader.id), sizeof(MidiHeader));
        mTracks = new MidiTrack[mHeader.nTracks];
        for (uint i = 0; i < mHeader.nTracks; ++i)
        {
            // this read fails on the 6th i; I've checked the file in hexadecimal and it's
            // ok so far
            file.read(reinterpret_cast<char*>(&mTracks[i].id), sizeof(uint));
            if (file.fail())
            {
                std::cerr << errno << std::endl;
                massert(false);
            }
            massert(mTracks[i].id == 0x6B72544D);
            file.read(reinterpret_cast<char*>(&mTracks[i].size), sizeof(uint));
            mTracks[i].size = swapBytes(mTracks[i].size);
            mTracks[i].data = new char[mTracks[i].size];
            file.read(mTracks[i].data, mTracks[i].size * sizeof(char));
            totalBytesRead += 8 + mTracks[i].size;
            massert(totalBytesRead <= fileSize);
        }

    Read the article

  • How can you represent equippable items in a 2D game?

    - by ThePlan
    I've been working on an item system for a post-apocalyptic RPG, with Diablo as inspiration, and it would be awesome if I could visually represent an item that is equipped on the player sprite. I was thinking you could have a player sprite with certain animations, and then draw the equipped item as if it were on the player, with the same animations, so it stays in sync with the player. But I imagine that couldn't work very smoothly and that there's a better system. How can you graphically represent an item worn on the player, which moves as he does and looks as if he's wearing it? I'm not asking how to do it in framework X or platform X (although if you really need to know, I'm using Allegro 5 with Code::Blocks on Windows XP); I'm asking how to generally program such an idea.
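
    The common technique is usually called "paper-doll" layering: author each equippable item as a sprite sheet whose frames line up one-to-one with the body's animation frames, then draw the layers back to front using the same frame index (plus an optional per-frame offset for things like weapon grips). A sketch with assumed types; the drawFrame callback stands in for whatever the engine's actual draw call is:

        using System;
        using System.Collections.Generic;

        // One layer: the body, or one equipped item authored against the same frame grid.
        public class SpriteLayer
        {
            public string SheetName;                                       // which sprite sheet to sample
            public (int X, int Y)[] FrameOffsets = new (int X, int Y)[0];  // optional per-frame nudge
        }

        public class PaperDollCharacter
        {
            public SpriteLayer Body;
            public List<SpriteLayer> Equipment = new List<SpriteLayer>();
            public int CurrentFrame;   // advanced by the animation system

            // Body first, then each equipped layer with the same frame index, so every layer
            // stays in sync with the body's animation automatically.
            public void Draw(Action<string, int, int, int> drawFrame)
            {
                drawFrame(Body.SheetName, CurrentFrame, 0, 0);
                foreach (var layer in Equipment)
                {
                    int dx = 0, dy = 0;
                    if (CurrentFrame < layer.FrameOffsets.Length)
                    {
                        dx = layer.FrameOffsets[CurrentFrame].X;
                        dy = layer.FrameOffsets[CurrentFrame].Y;
                    }
                    drawFrame(layer.SheetName, CurrentFrame, dx, dy);
                }
            }
        }

    The cost is authoring every item against the body's frame layout; the payoff is that syncing is free, since there is only one animation clock.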

    Read the article

  • Swapping axis labels between 2D and 3D coordinates

    - by Will
    My game world is 3D, but the map is only 2D. It is natural to think of the map as having X and Y axes, and to think of the world as having X, Y and Z axes, where Y points upwards. That is to say, X and Y in 2D map coordinates correspond to X and Z in 3D coordinates. What conventions and approaches do you use at the code level to keep the mapping between them natural? (Is Y usually upwards in 3D? Or do you use X and Z for map coordinates, or something else?)
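
    One low-ceremony convention (an assumption, not something from the question): keep Y up in 3D, treat map coordinates as X/Z, and funnel every conversion through a pair of named helpers so the axis swap lives in exactly one place:

        using Microsoft.Xna.Framework;   // Vector2/Vector3; any math library works the same way

        public static class MapSpace
        {
            // Map (x, y) becomes world (x, height, y); Y stays "up" in 3D.
            public static Vector3 MapToWorld(Vector2 map, float height = 0f)
            {
                return new Vector3(map.X, height, map.Y);
            }

            // Drops the height component to get back to map coordinates.
            public static Vector2 WorldToMap(Vector3 world)
            {
                return new Vector2(world.X, world.Z);
            }
        }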

    Read the article

  • Where can I find good (well organized) examples of game code?

    - by smasher
    Where can I find good (well-organized) examples of game code? I'm hoping I can pick up some organizational tips. Most examples in books are too short and leave out lots of detail for the sake of brevity. I'm particularly interested in how to group your variables and methods so that another programmer would know where to look in the code - for example, initializers at the top, then methods that take input, then methods that update views. I don't care about a particular language, as long as it's OOP. I looked at the Quake 2 and 3 sources, but they're straight C and not much help for tips on organizing your objects. So, have you seen some good source? Any pointers to code that makes you say "wow, that's well organized" would be great.

    Read the article

  • Game Asset Storage: Archive vs Individual files

    - by David Colson
    As I am in the process of creating a 3D C++ game, I was wondering what would be more beneficial for asset storage. I have seen some games use a single compressed archive with everything in it, and others use lots of small compressed files. With lots of individual files I would not need to load one large file at once and use up memory, but the code would have to seek around for all the correct files when a level loads. There is no file seeking needed when dealing with one large file, but then what about all the assets that are not currently needed yet get loaded with that one file? I could also have an asset file per level, but then how do I deal with shared assets? This has been bothering me for a while, so tell me what other advantages and disadvantages there are to either way of doing things.
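
    A hybrid that many projects end up with, described here as a sketch rather than a recommendation from the question: per-level archives whose contents are listed in a manifest, plus a shared cache keyed by asset name so assets referenced by several levels are only loaded once. Roughly:

        using System;
        using System.Collections.Generic;

        public class AssetCache
        {
            // Shared across levels: an asset loaded for one level is reused by the next.
            private readonly Dictionary<string, byte[]> loaded = new Dictionary<string, byte[]>();

            // 'readFromArchive' stands in for however the bytes actually come off disk
            // (one big archive, a per-level archive, or loose files).
            public byte[] Get(string assetName, Func<string, byte[]> readFromArchive)
            {
                if (!loaded.TryGetValue(assetName, out var bytes))
                {
                    bytes = readFromArchive(assetName);
                    loaded[assetName] = bytes;
                }
                return bytes;
            }

            // The manifest is just the list of asset names a level needs;
            // anything already cached is skipped automatically.
            public void LoadLevel(IEnumerable<string> manifest, Func<string, byte[]> readFromArchive)
            {
                foreach (var name in manifest)
                    Get(name, readFromArchive);
            }
        }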

    Read the article
