Search Results

Search found 32625 results on 1305 pages for 'ui development'.

  • Need help starting with a game in Flash/AS3

    - by Hossein
    Hi, I am kind of new to Flash/AS3. I have written some stuff and now know the basics of ActionScript. I want to develop a game in which my character moves exactly like Super Mario does (the background changes and new obstacles come into view, like an animation you can interact with by climbing, shooting, etc.). The one big thing I don't understand is how to make my background move and introduce new obstacles as my character moves forward. Currently my scene is static: the background stays the same and the obstacles are stationary. Can someone point me to a tutorial or give me an answer that clears up my confusion? Thanks
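
    A common approach, regardless of language, is to keep the hero's position in world coordinates and scroll everything else by a camera offset; below is a minimal sketch of the idea in Java (the class is invented for the example). In AS3 the same logic translates to shifting the x of a container Sprite that holds the level while the hero stays near the screen center.

        // Minimal side-scroller camera sketch: the world moves, the player stays near screen center.
        class Camera {
            double x;                      // camera's left edge in world coordinates
            final double screenWidth;

            Camera(double screenWidth) { this.screenWidth = screenWidth; }

            // Keep the player roughly centered once they pass the middle of the screen.
            void follow(double playerWorldX) {
                x = Math.max(0, playerWorldX - screenWidth / 2);
            }

            // Convert a world coordinate to a screen coordinate for drawing.
            double toScreenX(double worldX) { return worldX - x; }
        }

    Obstacles are stored with world coordinates and only drawn (or spawned) when toScreenX puts them inside the viewport, which is what creates the illusion of the background moving past a stationary hero.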

  • iOS OpenGL transparency performance issue

    - by user346443
    I have built a game in Unity that uses OpenGL ES 1.1 for iOS. I have a nice constant frame rate of 30 until I place a semi-transparent texture over the top of my entire scene. I expect the drop in frames is due to the blending overhead. On the 4S and 3GS the frame rate stays at 30, but on the iPhone 4 it drops to 15-20, probably due to the extra pixels of the Retina display compared to the 3GS and the smaller CPU/GPU compared to the 4S. I would like to know if there is anything I can do to increase the frame rate when a transparent texture is rendered on top of the entire scene. Please note that the transparent overlay is a core part of the game and I can't disable anything else in the scene to speed things up. If it's guaranteed to make a difference I guess I can switch to OpenGL ES 2.0 and write the shaders, but I would prefer not to, as I need to target older devices. I should add that the depth buffer is disabled and I'm blending using SrcAlpha One. Any advice would be highly appreciated. Cheers

  • Design pattern for animation sequence in LibGDX

    - by kevinyu
    What design pattern should I use for a sequence of animations that involves different actors in libGDX? For example, I am making a game where you pick out a wolf from a group of sheep. The first animation, played when the game begins, is the wolf entering a field that holds two sheep. Then the wolf disguises itself as a sheep and walks to the center of the screen. Then the game shuffles the sheep. After it finishes, it asks the player where the wolf is and waits for input. Finally, the game plays an animation showing the player whether their answer is right or wrong. I am currently using the State design pattern, with five states: WolfEnterState, DisguiseState, ShuffleState, UserInputState, and AnswerAnimationState. I feel that my code is messy: I use addAction with action sequences and action completion (new Runnable()) a lot, and the sequences are getting long. Is there a better solution for this kind of problem?
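
    One lightweight alternative to a state class per phase is a plain queue of steps, where each step receives a callback to fire when its animations are done. The sketch below is a minimal, framework-agnostic Java version (the Step/Director names are invented for the example); inside each step you would still use libGDX actions, just without chaining the whole game flow into one giant sequence.

        import java.util.ArrayDeque;
        import java.util.Queue;

        // One phase of the intro: start its animations, call done.run() when finished.
        interface Step {
            void start(Runnable done);
        }

        class Director {
            private final Queue<Step> steps = new ArrayDeque<>();

            Director add(Step s) { steps.add(s); return this; }

            // Runs steps one after another; each step decides when it is finished.
            void run() {
                Step next = steps.poll();
                if (next != null) {
                    next.start(this::run);
                }
            }
        }

        // Usage (wolf, shuffler and their methods are hypothetical):
        // new Director()
        //     .add(done -> wolf.enterField(done))   // each step wraps its own addAction(...) chain
        //     .add(done -> wolf.disguise(done))
        //     .add(done -> shuffler.shuffle(done))
        //     .run();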

  • Space partitioning when everything is moving

    - by Roy T.
    Background

    Together with a friend I'm working on a 2D game that is set in space. To make it as immersive and interactive as possible we want thousands of objects freely floating around, some clustered together, others adrift in empty space.

    Challenge

    To unburden the rendering and physics engines we need to implement some sort of spatial partitioning. There are two challenges to overcome. The first is that everything is moving, so reconstructing/updating the data structure has to be extremely cheap: it will have to be done every frame. The second is the distribution of objects: as said before, there may be clusters of objects together and vast stretches of empty space, and to make it even worse, space has no boundary.

    Existing technologies

    I've looked at existing techniques like BSP trees, quadtrees, kd-trees and even R-trees, but as far as I can tell these data structures aren't a perfect fit, since updating the many objects that have moved to other cells is relatively expensive.

    What I've tried

    I decided I need a data structure geared more toward rapid insertion/update than toward returning the least possible number of hits for a query. For that purpose I made the cells implicit, so each object, given its position, can calculate which cell(s) it should be in. I then use a HashMap that maps cell coordinates to an ArrayList (the contents of the cell). This works fairly well, since no memory is lost on 'empty' cells and it's easy to calculate which cells to inspect. However, creating all those ArrayLists (worst case N) is expensive, and so is growing the HashMap many times (although that is slightly mitigated by giving it a large initial capacity).

    Problem

    OK, so this works, but still isn't very fast. I could micro-optimize the Java code, but I'm not expecting too much of that, since the profiler tells me most time is spent creating the objects I use to store the cells. I'm hoping there are other tricks/algorithms out there that make this a lot faster, so here is what my ideal data structure looks like: the number one priority is fast updating/reconstruction of the entire data structure; it's less important to divide the objects finely into equally sized bins, since we can draw a few extra objects and do a few extra collision checks if that means updating is a little bit faster; and memory is not really important (PC game).
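
    A common way to cut exactly this allocation cost is to never destroy the cell lists: keep a pool of ArrayLists, hand them out as cells become occupied, and take them back on the per-frame clear. The sketch below (class names invented for the example) also packs the two cell coordinates into one long key, avoiding a key object per lookup.

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.HashMap;

        class SpatialHash {
            private final float cellSize;
            private final HashMap<Long, ArrayList<Object>> cells = new HashMap<>(4096);
            private final ArrayDeque<ArrayList<Object>> pool = new ArrayDeque<>();

            SpatialHash(float cellSize) { this.cellSize = cellSize; }

            private static long key(int cx, int cy) {
                return ((long) cx << 32) | (cy & 0xffffffffL);  // two ints packed into one key
            }

            void insert(Object o, float x, float y) {
                long k = key((int) Math.floor(x / cellSize), (int) Math.floor(y / cellSize));
                ArrayList<Object> cell = cells.get(k);
                if (cell == null) {
                    cell = pool.isEmpty() ? new ArrayList<>() : pool.poll(); // reuse, don't allocate
                    cells.put(k, cell);
                }
                cell.add(o);
            }

            // Called once per frame before re-inserting everything.
            void clear() {
                for (ArrayList<Object> cell : cells.values()) {
                    cell.clear();          // keeps the list's backing array
                    pool.add(cell);
                }
                cells.clear();
            }
        }

    Since HashMap.clear() keeps its internal table allocated, the map stops growing after the first few frames and the per-frame rebuild becomes essentially allocation-free.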

  • How are events in games handled?

    - by Alex
    In many games that I have played, I have seen events being triggered: for example, when you walk into a certain land area while holding a specific object, a special creature spawns. I was wondering, how do games handle events like this? Not in a specific game, but in general. My first thought was that each place has a hard-coded set of events that it calls when something happens there. However, that would be too inefficient to maintain: when something new is added, every part of the game that could potentially cause the event to be called would need modification. Next, I thought about how GUI programming works. In all of the GUI programming I've done, you create a component and a callback function, or a listener. Then, when the user interacts with the button, the callback function is called, allowing you to do something with it. So I was thinking that, in terms of a game, when a land area gets loaded the game loops over a list of all events, creating instances of them and calling public methods to bind them to the current scene. The events themselves then check which scene it is and, if it is a scene that pertains to the event, call a public method of the scene to bind the event to an action. Then, when the action takes place, the scene calls all events bound to that action. However, I'm sure that's not how games operate either, as that would require creating a lot of events all the time. So how do video games handle events? Are either of those methods correct, or is it something completely different?
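
    For reference, a widely used middle ground is a central event bus: systems subscribe to the event types they care about, and whoever detects the trigger just publishes an event, so neither side hard-codes the other. A minimal Java sketch (all names invented for the example):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.function.Consumer;

        class EventBus {
            private final Map<Class<?>, List<Consumer<Object>>> handlers = new HashMap<>();

            <E> void subscribe(Class<E> type, Consumer<E> handler) {
                handlers.computeIfAbsent(type, t -> new ArrayList<>())
                        .add(e -> handler.accept(type.cast(e)));
            }

            void publish(Object event) {
                List<Consumer<Object>> list = handlers.get(event.getClass());
                if (list != null) {
                    for (Consumer<Object> h : list) h.accept(event);
                }
            }
        }

        // Example event: fired by the movement system when a player enters a zone.
        record EnteredZone(String zoneId, String heldItem) {}

    A movement system then only publishes bus.publish(new EnteredZone("swamp", "amulet")), and any number of spawn rules can subscribe to EnteredZone without the zone code knowing they exist.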

  • Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

    - by voxelizr
    TL;DR -- in my first simple software voxel raycaster, I cannot get camera rotations to work, seemingly correct matrices notwithstanding. The result is skewed: like a flat rendering, correctly rotated, yet distorted and without depth. (While axis-aligned, i.e. unrotated, depth and parallax are as expected.)

    I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU-based for now, until I figure out how things work exactly -- OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible. Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny. So I have a camera that I can move up and down, strafe left and right, and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem.

    Screenshot #1: correct depth when the camera is still strictly axis-aligned, i.e. un-rotated.

    Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations is, in theory, very clear to me. Yet I have only ever achieved a "2.5D rendering" when the camera rotates... fish-eyey, a bit like in Google Street View: even though I have a volumetric world representation, it seems -- no matter what I try -- as if I were first creating a rendering from the "front view" and then rotating that flat rendering according to the camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and is error-prone. Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey, flat-render-rotated looks:

    Screenshot #2: camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screenshot #1 is not visible in this rotation, yet by now "it really should"!

    Now of course I'm aware of this: in a simple axis-aligned, no-rotation setup like I had in the beginning, the ray simply traverses the positive z-direction in small steps, diverging left/right and top/bottom only depending on pixel position and the projection matrix. As I "rotate the camera to the right or left" -- i.e. I rotate it around the Y-axis -- those very steps should simply be transformed by the proper rotation matrix, right? So for forward traversal the Z-step gets a bit smaller the more the cam rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical divergence, increasing fractions of the X-step need to be "added" to the Z-step. Somehow, none of the many matrices I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations, really get this part right.
    Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode:

        fx, fy:  pixel positions x and y
        rayPos:  vec3 for the ray starting position in world-space (calculated as below)
        rayDir:  vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
        rayStep: a temporary vec3
        camPos:  vec3 for the camera position in world space
        camRad:  vec3 for camera rotation in radians
        pmat:    typical perspective projection matrix

    The algorithm / pseudocode:

        // 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
        rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0

        // 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0
        rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))

        // 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye
        // and also the non-normalized, "in axis-aligned world" traversal step size "forward into the screen"
        rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist

        // 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
        rayStep.MultMat(num.NewDmat4RotationY(camRad.Y))

        // set up direction vector from still-origin-based-ray-position-off-rotated-view-plane
        // plus rotated-zstep-vector
        rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z

        // perspective projection
        rayDir.Normalize()
        rayDir.MultMat(pmat)

        // before traversal, transform the ray starting position from origin-relative to campos-relative
        rayPos.Add(camPos)

    I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.
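
    For contrast, the conventional way to build a rotated camera ray avoids rotating the position and the step separately: form the un-rotated direction through the pixel on the view plane first, then apply the camera's rotation once to that single direction (and don't multiply by the projection matrix -- in a raycaster the field of view is already encoded in the plane distance). A minimal Java sketch of that order of operations, helper-free and with made-up parameter names:

        // Build a world-space ray direction for pixel (fx, fy); yaw is the camera's Y rotation
        // in radians; planeDist is the positive view-plane distance (fov-based).
        static double[] rayDirection(double fx, double fy, double width, double height,
                                     double planeDist, double yaw) {
            // Direction through the pixel in camera space (camera looks down -Z here).
            double x = (fx / width) - 0.5;
            double y = (fy / height) - 0.5;
            double z = -planeDist;

            // Normalize first, then rotate the whole direction around Y in one step.
            double len = Math.sqrt(x * x + y * y + z * z);
            x /= len; y /= len; z /= len;

            double cos = Math.cos(yaw), sin = Math.sin(yaw);
            return new double[] {
                cos * x + sin * z,   // standard Y-axis rotation applied to the final direction
                y,
                -sin * x + cos * z
            };
        }

    The ray origin is simply camPos for every pixel; because the full direction (pixel offset plus forward component) is rotated as one vector, depth and parallax survive the rotation instead of flattening out.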

  • Saving game data to server [on hold]

    - by Eugene Lim
    What's the best method to save the player's data to the server?

    Method to store the game saves

    Which of the following methods should I use?

        Using a database (e.g. MySQL) to store the game data as blobs?
        Using the server's hard disk to store the saved game data as binary files?

    Method to send saved game data to the server

    Which method should I use?

        socket.IO / WebSockets
        a web-based scripting language that receives the game data as binary -- for example, a PHP script that handles the binary data and saves it to a file

    Meta-data

    I read that some games store saved-game meta-data in database structures. What kind of meta-data is useful to store?
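
    If the database route is chosen, storing a save as a blob is straightforward. A minimal JDBC sketch for the MySQL option (the table and column names are invented for the example, assuming a saves table with player_id and data columns):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        static void storeSave(long playerId, byte[] saveData) throws Exception {
            try (Connection c = DriverManager.getConnection(
                     "jdbc:mysql://localhost/game", "user", "password");
                 PreparedStatement ps = c.prepareStatement(
                     "REPLACE INTO saves (player_id, data) VALUES (?, ?)")) {
                ps.setLong(1, playerId);
                ps.setBytes(2, saveData);   // the serialized save, stored as a BLOB
                ps.executeUpdate();
            }
        }

    Blobs keep saves transactional and make meta-data easy to index alongside them; the meta-data typically worth storing is whatever a "load game" screen must show without opening the full save: save timestamp, character name, level, play time.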

  • XNA extending the existing Content type

    - by Maarten
    We are doing a game in XNA that reacts to music. We need to do some offline processing of the music data and therefore need a custom type containing the Song and some additional data:

        // Project AudioGameLibrary
        namespace AudioGameLibrary
        {
            public class GameTrack
            {
                public Song Song;
                public string Extra;
            }
        }

    We've added a Content Pipeline extension:

        // Project GameTrackProcessor
        namespace GameTrackProcessor
        {
            [ContentSerializerRuntimeType("AudioGameLibrary.GameTrack, AudioGameLibrary")]
            public class GameTrackContent
            {
                public SongContent SongContent;
                public string Extra;
            }

            [ContentProcessor(DisplayName = "GameTrack Processor")]
            public class GameTrackProcessor : ContentProcessor<AudioContent, GameTrackContent>
            {
                public GameTrackProcessor() {}

                public override GameTrackContent Process(AudioContent input, ContentProcessorContext context)
                {
                    return new GameTrackContent()
                    {
                        SongContent = new SongProcessor().Process(input, context),
                        Extra = "Some extra data" // Here we can do our processing on 'input'
                    };
                }
            }
        }

    Both the library and the pipeline extension are added to the game solution, and references are also added. When trying to use this extension to load "gametrack.mp3" we run into problems, however:

        // Project AudioGame
        protected override void LoadContent()
        {
            AudioGameLibrary.GameTrack gameTrack = Content.Load<AudioGameLibrary.GameTrack>("gametrack");
            MediaPlayer.Play(gameTrack.Song);
        }

    The error message:

        Error loading "gametrack". File contains Microsoft.Xna.Framework.Media.Song
        but trying to load as AudioGameLibrary.GameTrack.

    AudioGame contains references to both AudioGameLibrary and GameTrackProcessor. Are we maybe missing other references?

  • Open source vs commercial game engines

    - by Vanangamudi
    How do commercial games accomplish stunning graphics with smooth gameplay? I am a huge die-hard fan and follower of GNU, Stallman and his philosophies, and other libre people (come on, how could I leave out Linus), but I have to admit commercial games do an excellent job. One good example is Assassin's Creed from Ubisoft: it has good-quality graphics and plays smoothly on my dual-core CPU with an Nvidia GeForce 8400ES. Rockstar's GTA4 has awesome graphics, but it's slower than AC considering the graphics-quality tradeoff. Age of Empires from Ensemble Studios includes massive crowd AI simulation, yet it plays smoothly with eye-candy graphics, very large weapon sets and varied tech-tree elements. Open source games like Glest and 0 A.D. (still in alpha :) are not as smooth, even though they have much more restricted scope. Coming to the question: how do game companies achieve such optimizations? Is the open source community simply not doing these optimizations, or are there proprietary technological elements that benefit only the companies -- e.g. something like OpenSubdiv from Pixar, which was only just released to the community? And why is it hard to implement such optimizations? Are there any legal restrictions?

  • std::map for storing static const Objects

    - by Sean M.
    I am making a game similar to Minecraft, and I am trying to find a way to keep a map of Block objects sorted by their id. This is almost identical to the way Minecraft does it: they declare a bunch of static final Block objects and initialize them, and the constructor of each block puts a reference to that block into whatever the Java equivalent of a std::map is, so there is a central place to look up ids and the Blocks with those ids. The problem is that I am making my game in C++ and trying to do the exact same thing. In Block.h, I am declaring the Blocks like so:

        // Block.h
        public:
            static const Block Vacuum;
            static const Block Test;

    And in Block.cpp I am initializing them like so:

        // Block.cpp
        const Block Block::Vacuum = Block("Vacuum", 0, 0);
        const Block Block::Test = Block("Test", 1, 0);

    The block constructor looks like this:

        Block::Block(std::string name, uint16 id, uint8 tex)
        {
            // Check for repeat ids
            if (IdInUse(id))
            {
                fprintf(stderr, "Block id %u is already in use!", (uint32)id);
                throw std::runtime_error("You cannot reuse block ids!");
            }
            _id = id;

            // Check for repeat names
            if (NameInUse(name))
            {
                fprintf(stderr, "Block name %s is already in use!", name.c_str());
                throw std::runtime_error("You cannot reuse block names!");
            }
            _name = name;

            _tex = tex;
            //fprintf(stdout, "Using texture %u\n", _tex);
            _transparent = false;
            _solidity = 1.0f;

            idMap[id] = this;
            nameMap[name] = this;
        }

    And finally, the maps that I'm using to store references to Blocks by their names and ids are declared like this:

        std::map<uint16, Block*> Block::idMap = std::map<uint16, Block*>();              // The map of block ids
        std::map<std::string, Block*> Block::nameMap = std::map<std::string, Block*>();  // The map of block names

    The problem comes when I try to get the Blocks from the maps using a method called const Block* GetBlock(uint16 id), whose last line is return idMap.at(id);. This line returns a Block with completely random values like _visibility = 0xcccc and such, found out through debugging. So my question is: is there something wrong with the blocks being declared as const objects and then stored as pointers and accessed later on? The reason I can't store them as Block& is that this makes a copy of the Block when it is entered, so the block wouldn't have any of the attributes that could be set afterwards in the constructor of any child class, so I think I need to store them as pointers. Any help is greatly appreciated, as I don't fully understand pointers yet. Just ask if you need to see any other parts of the code.

  • How can a NodeJS server be used from Game Maker HTML5?

    - by Tokyo Dan
    I want to create a client-server game that runs on Game Maker HTML5 and NodeJS. The NodeJS server will be an AI server: a bot that acts like a human opponent and plays against the human player in a front-end game client coded in GM HTML5. How can a NodeJS server be used from GM HTML5? Are there any examples of such a system? I already have an iOS game that can talk to a remote AI server (coded in Lua) using TCP sockets. Can this be done with Game Maker HTML5 and NodeJS?

  • OpenGL - have object follow mouse

    - by kevin james
    I want to have an object follow around my mouse on the screen in OpenGL. (I am also using GLEW, GLFW, and GLM). The best idea I've come up with is: Get the coordinates within the window with glfwGetCursorPos. The window was created with window = glfwCreateWindow( 1024, 768, "Test", NULL, NULL); and the code to get coordinates is double xpos, ypos; glfwGetCursorPos(window, &xpos, &ypos); Next, I use GLM unproject, to get the coordinates in "object space" glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f); glm::vec3 pos = glm::vec3(xpos, ypos, 0.0f); glm::vec3 un = glm::unProject(pos, View*Model, Projection, viewport); There are two potential problems I can already see. The viewport is fine, as the initial x,y, coordinates of the lower left are indeed 0,0, and it's indeed a 1024*768 window. However, the position vector I create doesn't seem right. The Z coordinate should probably not be zero. However, glfwGetCursorPos returns 2D coordinates, and I don't know how to go from there to the 3D window coordinates, especially since I am not sure what the 3rd dimension of the window coordinates even means (since computer screens are 2D). Then, I am not sure if I am using unproject correctly. Assume the View, Model, Projection matrices are all OK. If I passed in the correct position vector in Window coordinates, does the unproject call give me the coordinates in Object coordinates? I think it does, but the documentation is not clear. Finally, to each vertex of the object I want to follow the mouse around, I just increment the x coordinate by un[0], the y coordinate by -un[1], and the z coordinate by un[2]. However, since my position vector that is being unprojected is likely wrong, this is not giving good results; the object does move as my mouse moves, but it is offset quite a bit (i.e. moving the mouse a lot doesn't move the object that much, and the z coordinate is very large). I actually found that the z coordinate un[2] is always the same value no matter where my mouse is, probably because the position vector I pass into unproject always has a value of 0.0 for z. Edit: The (incorrectly) unprojected x-values range from about -0.552 to 0.552, and the y-values from about -0.411 to 0.411.

  • Using textureGrad for anisotropic integration approximation

    - by Amxx
    I'm trying to develop a real-time rendering method using a real-time acquired environment map (cubemap) for lighting. This implies that my envmap can change as often as every frame, so I cannot use any method based on precomputation over the envmap (such as convolution with the BRDF...). So far my method has worked well with a Phong BRDF. For the specular contribution I directly read the value from my samplerCube, and I use mipmap levels + linear filtering to simulate the roughness of the material considered:

        int size = textureSize(envmap, 0).x;
        float specular_level = log2(size * sqrt(3.0)) - 0.5 * log2(ns + 1);
        vec3 env_specular = ks * specular_color * textureLod(envmap, l_g, specular_level);

    From this method I would like to upgrade to a microfacet-based BRDF. I already have an algorithm for evaluating the shape (including the anisotropic direction) of the reflection, but I cannot manage to read the values I want from my samplerCube. I believe I have to use textureGrad(envmap, l_g, X, Y), with l_g being the reflection direction in global space, but I cannot work out which values to give to X and Y in order to correctly specify the area I want to consider. What values should I give to X and Y so that textureGrad(envmap, l_g, X, Y) gives the same result as textureLod(envmap, l_g, specular_level)?

  • How do I break an image into 6 or 8 pieces of different shapes?

    - by Anil gupta
    I am working on a puzzle game where the player can select an image from the iPhone photo gallery. The selected image is saved on the puzzle page, and after a 3-second wait it is broken into 6 or 8 pieces of different shapes. The player then arranges these broken pieces to recreate the original image. I am not getting an idea of how to break the image apart and merge the pieces back so that the player can arrange them. I want to break the image like the frame below. I am developing this game in cocos2d.
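
    The asker is on cocos2d/Objective-C, but the core step is the same everywhere: copy rectangular (or masked) sub-regions out of the source image and give each one its own sprite. A minimal Java sketch of the rectangular split, using the standard library's BufferedImage (jigsaw-style shapes would be an extra masking/compositing step on each piece):

        import java.awt.image.BufferedImage;
        import java.util.ArrayList;
        import java.util.List;

        // Split an image into rows x cols rectangular pieces.
        static List<BufferedImage> split(BufferedImage src, int rows, int cols) {
            int w = src.getWidth() / cols;
            int h = src.getHeight() / rows;
            List<BufferedImage> pieces = new ArrayList<>();
            for (int r = 0; r < rows; r++) {
                for (int c = 0; c < cols; c++) {
                    // getSubimage shares the source's pixel data; copy it if pieces are modified later.
                    pieces.add(src.getSubimage(c * w, r * h, w, h));
                }
            }
            return pieces;
        }

    For 6 or 8 pieces this is split(img, 2, 3) or split(img, 2, 4); in cocos2d the equivalent idea is creating each piece as a sprite whose texture rect is one of these sub-rectangles.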

  • How do I apply different probability factors in an algorithm for a cricket simulation game?

    - by Komal Sharma
    I am trying to write the algorithm for a cricket simulation game which generates runs on each ball, between 0 and 6. The run rate changes when factors like the skill of the batsman, the skill of the bowler, the target to be chased, and the wickets left come into play. If the batsman is more skilled, more runs are generated. There is also a mode of play for the batsman: aggressive, normal or defensive. If he plays aggressively, the chances of getting out are higher; if the chasing target is bigger, the run rate should be higher; and in the final overs the run rate should be higher. I am using Java's random number function for this. The code so far:

        public class Cricket {
            public static void main(String args[]) {
                int totalRuns = 0;
                // i is the balls bowled
                for (int i = 1; i <= 60; i++) {
                    // Math.random() < 1.0, so multiplying by 7 and truncating gives 0..6
                    // (the original *6 could never produce a six)
                    int runsPerBall = (int) (Math.random() * 7);
                    totalRuns = totalRuns + runsPerBall;
                }
                System.out.println(totalRuns);
            }
        }

    Can somebody help me with how to apply the factors in the code? I believe probability will be used for this, but I am not clear how to apply the probabilities of the factors stated above.
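
    The standard tool for this is a weighted outcome table: give each outcome (0, 1, 2, 3, 4, 6, wicket) a weight, let skill/mode/target scale the weights, then sample by walking the cumulative sum. A small sketch of that sampling step (the weight values here are placeholders, not tuned cricket data):

        import java.util.Random;

        static final Random RNG = new Random();

        // outcomes[i] happens with probability weights[i] / sum(weights)
        static int sampleOutcome(int[] outcomes, double[] weights) {
            double total = 0;
            for (double w : weights) total += w;
            double r = RNG.nextDouble() * total;
            for (int i = 0; i < outcomes.length; i++) {
                r -= weights[i];
                if (r < 0) return outcomes[i];
            }
            return outcomes[outcomes.length - 1]; // guard against rounding
        }

        // Example: an aggressive, skilled batsman shifts weight toward boundaries.
        // int[] runs    = {0, 1, 2, 3, 4, 6};
        // double[] base = {30, 35, 15, 5, 10, 5};
        // base[4] *= batsmanSkill * aggression;
        // base[5] *= batsmanSkill * aggression;
        // int ball = sampleOutcome(runs, base);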

  • How can I improve this collision detection logic?

    - by Dan
    I’m trying to make an android game and I’m having a bit of trouble getting the collision detection to work. It works sometimes but my conditions aren’t specific enough and my program gets it wrong. How could I improve the following if conditions? public boolean checkColisionWithPlayer( Player player ) { // Top Left // Top Right // Bottom Right // Bottom Left // int[][] PP = { { player.x, player.y }, { player.x + player.width, player.y }, {player.x + player.height, player.y + player.width }, { player.x, player.y + player.height } }; // TOP LEFT - PLAYER // if( ( PP[0][0] > x && PP[0][0] < x + width ) && ( PP[0][1] > y && PP[0][1] < y + height ) && ( (x - player.x) < 0 ) ) { player.isColided = true; //player.isSpinning = false; // Collision On Right if( PP[0][0] > ( x + width/2 ) && ( PP[0][1] - y < ( x + width ) - PP[0][0] ) ) { Log.i("Colision", "Top Left - Right Side"); player.x = ( x + width ) + 1; player.Vh = player.phy.getVelsoityWallColision(player.Vh, player.Cr); } // Collision On Bottom else if( PP[0][1] > ( y + height/2 ) ) { Log.i("Colision", "Top Left - Bottom Side"); player.y = ( y + height ) + 1; if( player.Vv > 0 ) player.Vv = 0; } return true; } // TOP RIGHT - PLAYER // else if( ( PP[1][0] > x && PP[1][0] < x + width ) && ( PP[1][1] > y && PP[1][1] < y + height ) && ( (x - player.x) > 0 ) ) { player.isColided = true; //player.isSpinning = false; // Collision On Left if( PP[1][0] < ( x + width/2 ) && ( PP[1][0] - x < PP[1][1] - y ) ) { Log.i("Colision", "Top Right - Left Side"); player.x = ( x ) + 1; player.Vh = player.phy.getVelsoityWallColision(player.Vh, player.Cr); } // Collision On Bottom else if( PP[1][1] > ( y + height/2 ) ) { Log.i("Colision", "Top Right - Bottom Side"); player.y = ( y + height ) + 1; if( player.Vv > 0 ) player.Vv = 0; } return true; } // BOTTOM RIGHT - PLAYER // else if( ( PP[2][0] > x && PP[2][0] < x + width ) && ( PP[2][1] > y && PP[2][1] < y + height ) ) { player.isColided = true; //player.isSpinning = false; // Collision On Left if( PP[2][0] < ( x + width/2 ) && ( PP[2][0] - x < PP[2][1] - y ) ) { Log.i("Colision", "Bottom Right - Left Side"); player.x = ( x ) + 1; player.Vh = player.phy.getVelsoityWallColision(player.Vh, player.Cr); } // Collision On Top else if( PP[2][1] < ( y + height/2 ) ) { Log.i("Colision", "Bottom Right - Top Side"); player.y = y - player.height; player.Vv = player.phy.getVelsoityWallColision(player.Vv, player.Cr); //player.Vh = -1 * ( player.phy.getVelsoityWallColision(player.Vv, player.Cr) ); int rs = x - player.x; Log.i("RS", String.format("%d", rs)); if( rs > 0 ) { player.direction = -1; player.isSpinning = true; player.Vh = -0.5 * ( player.phy.getVelsoityWallColision(player.Vv, player.Cr) ); } if( rs < 0 ) { player.direction = 1; player.isSpinning = true; player.Vh = 0.5 * ( player.phy.getVelsoityWallColision(player.Vv, player.Cr) ); } player.rotateSpeed = 1 * rs; } return true; } // BOTTOM LEFT - PLAYER // else if( ( PP[3][0] > x && PP[3][0] < x + width ) && ( PP[3][1] > y && PP[3][1] < y + height ) )//&& ( (x - player.x) > 0 ) ) { player.isColided = true; //player.isSpinning = false; // Collision On Right if( PP[3][0] > ( x + width/2 ) && ( PP[3][1] - y < ( x + width ) - PP[3][0] ) ) { Log.i("Colision", "Bottom Left - Right Side"); player.x = ( x + width ) + 1; player.Vh = player.phy.getVelsoityWallColision(player.Vh, player.Cr); } // Collision On Top else if( PP[3][1] < ( y + height/2 ) ) { Log.i("Colision", "Bottom Left - Top Side"); player.y = y - player.height; player.Vv = 
player.phy.getVelsoityWallColision(player.Vv, player.Cr); //player.Vh = -1 * ( player.phy.getVelsoityWallColision(player.Vv, player.Cr) ); int rs = x - player.x; //Log.i("RS", String.format("%d", rs)); //player.direction = -1; //player.isSpinning = true; if( rs > 0 ) { player.direction = -1; player.isSpinning = true; player.Vh = -1 * ( player.phy.getVelsoityWallColision(player.Vv, player.Cr) ); } if( rs < 0 ) { player.direction = 1; player.isSpinning = true; player.Vh = 1 * ( player.phy.getVelsoityWallColision(player.Vv, player.Cr) ); } player.rotateSpeed = 1 * rs; } //try { Thread.sleep(1000, 0); } //catch (InterruptedException e) {} return true; } else { player.isColided = false; player.isSpinning = true; } return false; }
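
    Whatever the corner cases, per-corner point tests like the above tend to misfire, because two rectangles can overlap without any corner of one lying inside the other. A more robust pattern is: test axis-aligned rectangle overlap first, then resolve along the axis of least penetration. A compact Java sketch of that idea (the Rect/Player fields here are stand-ins for the question's own types):

        class Rect { float x, y, width, height; }
        class Player extends Rect { float vx, vy; }

        // Returns true and pushes the player out of the obstacle along the shallow axis.
        static boolean collide(Player p, Rect obstacle) {
            float overlapX = Math.min(p.x + p.width,  obstacle.x + obstacle.width)  - Math.max(p.x, obstacle.x);
            float overlapY = Math.min(p.y + p.height, obstacle.y + obstacle.height) - Math.max(p.y, obstacle.y);
            if (overlapX <= 0 || overlapY <= 0) return false;   // no intersection at all

            if (overlapX < overlapY) {
                // Side hit: push out horizontally, away from the obstacle's center.
                p.x += (p.x + p.width / 2 < obstacle.x + obstacle.width / 2) ? -overlapX : overlapX;
                p.vx = 0;
            } else {
                // Top/bottom hit: push out vertically.
                p.y += (p.y + p.height / 2 < obstacle.y + obstacle.height / 2) ? -overlapY : overlapY;
                p.vy = 0;
            }
            return true;
        }

    This replaces the sixteen corner/side combinations with two overlap measurements, and the bounce/spin responses can then branch on which axis was resolved.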

  • Will I have an easier time learning OpenGL in Pygame or Pyglet? (NeHe tutorials downloaded)

    - by shadowprotocol
    I'm deciding between PyGame and Pyglet. Pyglet seems to be somewhat newer and more Pythonic, but its last release, according to Wikipedia, was January '10. PyGame seems to have more documentation, more recent updates, and more published books/tutorials on the web for learning. I downloaded both the Pyglet and PyGame versions of the NeHe OpenGL tutorials (lessons 1-10), which cover this material:

        lesson01 - Setting up the window
        lesson02 - Polygons
        lesson03 - Adding color
        lesson04 - Rotation
        lesson05 - 3D
        lesson06 - Textures
        lesson07 - Filters, lighting, input
        lesson08 - Blending (transparency)
        lesson09 - 2D sprites in 3D
        lesson10 - Moving in a 3D world

    What do you guys think? Is my hunch that I'll be better off working with PyGame somewhat warranted?

  • texture mapping with lib3ds and SOIL help

    - by Adam West
    I'm having trouble with my project for loading a texture map onto a model. Any insight into what is going wrong with my code is fantastic. Right now the code only renders a teapot which I have assinged after creating it in 3DS Max. 3dsloader.cpp #include "3dsloader.h" Object::Object(std:: string filename) { m_TotalFaces = 0; m_model = lib3ds_file_load(filename.c_str()); // If loading the model failed, we throw an exception if(!m_model) { throw strcat("Unable to load ", filename.c_str()); } // set properties of texture coordinate generation for both x and y coordinates glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR); glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR); // if not already enabled, enable texture generation if(! glIsEnabled(GL_TEXTURE_GEN_S)) glEnable(GL_TEXTURE_GEN_S); if(! glIsEnabled(GL_TEXTURE_GEN_T)) glEnable(GL_TEXTURE_GEN_T); } Object::~Object() { if(m_model) // if the file isn't freed yet lib3ds_file_free(m_model); //free up memory glDisable(GL_TEXTURE_GEN_S); glDisable(GL_TEXTURE_GEN_T); } void Object::GetFaces() { m_TotalFaces = 0; Lib3dsMesh * mesh; // Loop through every mesh. for(mesh = m_model->meshes;mesh != NULL;mesh = mesh->next) { // Add the number of faces this mesh has to the total number of faces. m_TotalFaces += mesh->faces; } } void Object::CreateVBO() { assert(m_model != NULL); // Calculate the number of faces we have in total GetFaces(); // Allocate memory for our vertices and normals Lib3dsVector * vertices = new Lib3dsVector[m_TotalFaces * 3]; Lib3dsVector * normals = new Lib3dsVector[m_TotalFaces * 3]; Lib3dsTexel* texCoords = new Lib3dsTexel[m_TotalFaces * 3]; Lib3dsMesh * mesh; unsigned int FinishedFaces = 0; // Loop through all the meshes for(mesh = m_model->meshes;mesh != NULL;mesh = mesh->next) { lib3ds_mesh_calculate_normals(mesh, &normals[FinishedFaces*3]); // Loop through every face for(unsigned int cur_face = 0; cur_face < mesh->faces;cur_face++) { Lib3dsFace * face = &mesh->faceL[cur_face]; for(unsigned int i = 0;i < 3;i++) { memcpy(&texCoords[FinishedFaces*3 + i], mesh->texelL[face->points[ i ]], sizeof(Lib3dsTexel)); memcpy(&vertices[FinishedFaces*3 + i], mesh->pointL[face->points[ i ]].pos, sizeof(Lib3dsVector)); } FinishedFaces++; } } // Generate a Vertex Buffer Object and store it with our vertices glGenBuffers(1, &m_VertexVBO); glBindBuffer(GL_ARRAY_BUFFER, m_VertexVBO); glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsVector) * 3 * m_TotalFaces, vertices, GL_STATIC_DRAW); // Generate another Vertex Buffer Object and store the normals in it glGenBuffers(1, &m_NormalVBO); glBindBuffer(GL_ARRAY_BUFFER, m_NormalVBO); glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsVector) * 3 * m_TotalFaces, normals, GL_STATIC_DRAW); // Generate a third VBO and store the texture coordinates in it. 
glGenBuffers(1, &m_TexCoordVBO); glBindBuffer(GL_ARRAY_BUFFER, m_TexCoordVBO); glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsTexel) * 3 * m_TotalFaces, texCoords, GL_STATIC_DRAW); // Clean up our allocated memory delete vertices; delete normals; delete texCoords; // We no longer need lib3ds lib3ds_file_free(m_model); m_model = NULL; } void Object::applyTexture(const char*texfilename) { float imageWidth; float imageHeight; glGenTextures(1, & textureObject); // allocate memory for one texture textureObject = SOIL_load_OGL_texture(texfilename,SOIL_LOAD_AUTO,SOIL_CREATE_NEW_ID,SOIL_FLAG_MIPMAPS); glPixelStorei(GL_UNPACK_ALIGNMENT,1); glBindTexture(GL_TEXTURE_2D, textureObject); // use our newest texture glGetTexLevelParameterfv(GL_TEXTURE_2D,0,GL_TEXTURE_WIDTH,&imageWidth); glGetTexLevelParameterfv(GL_TEXTURE_2D,0,GL_TEXTURE_HEIGHT,&imageHeight); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // give the best result for texture magnification glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); //give the best result for texture minification glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); // don't repeat texture glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); // don't repeat textureglTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); // don't repeat texture glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_MODULATE); glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,imageWidth,imageHeight,0,GL_RGB,GL_UNSIGNED_BYTE,& textureObject); } void Object::Draw() const { // Enable vertex, normal and texture-coordinate arrays. glEnableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_NORMAL_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); // Bind the VBO with the normals. glBindBuffer(GL_ARRAY_BUFFER, m_NormalVBO); // The pointer for the normals is NULL which means that OpenGL will use the currently bound VBO. glNormalPointer(GL_FLOAT, 0, NULL); glBindBuffer(GL_ARRAY_BUFFER, m_TexCoordVBO); glTexCoordPointer(2, GL_FLOAT, 0, NULL); glBindBuffer(GL_ARRAY_BUFFER, m_VertexVBO); glVertexPointer(3, GL_FLOAT, 0, NULL); // Render the triangles. glDrawArrays(GL_TRIANGLES, 0, m_TotalFaces * 3); glDisableClientState(GL_VERTEX_ARRAY); glDisableClientState(GL_NORMAL_ARRAY); glDisableClientState(GL_TEXTURE_COORD_ARRAY); } 3dsloader.h #include "main.h" #include "lib3ds/file.h" #include "lib3ds/mesh.h" #include "lib3ds/material.h" class Object { public: Object(std:: string filename); virtual ~Object(); virtual void Draw() const; virtual void CreateVBO(); void applyTexture(const char*texfilename); protected: void GetFaces(); unsigned int m_TotalFaces; Lib3dsFile * m_model; Lib3dsMesh* Mesh; GLuint textureObject; GLuint m_VertexVBO, m_NormalVBO, m_TexCoordVBO; }; Called in the main cpp file with: VBO,apply texture and draw (pretty simple, how ironic) and thats it, please help me forum :)

  • Python library for scripting (C++ integration)

    - by Edward83
    Please advise me on a good wrapper/library for Python. I need to implement simple scripting in a C++ app. By "good" I mean fairly understandable, well documented, not leaking memory, and fast -- for creating the base interface of a GameObject in Python and C++. Your own experience and useful links would be nice! I found a link about this, but I need something more specific to the gamedev context. What combinations of libraries have you used for Python integration into C++? For example, ogre-python is said to be built using Py++ and the Boost.Python library. And one more question: maybe someone knows how Python was integrated into the BigWorld engine (its own port, or some library)? Thank you!

  • Ledge grab and climb in Unity3D

    - by BallzOfSteel
    I just started on a new project. In this project one of the main gameplay mechanics is that you can grab a ledge at certain points in a level and hang on to it. Now my question, since I've been wrestling with this for quite a while: how could I actually implement this? I have tried it with animations, but it's really ugly, since the player snaps to a certain point where the animation starts.

  • Direct3D9 application won't write to depth buffer

    - by DeadMG
    I've got an application written in D3D9 which will not write any values to the depth buffer, resulting in incorrect values for the depth test. Things I've checked so far:

        D3DRS_ZENABLE, set to TRUE
        D3DRS_ZWRITEENABLE, set to TRUE
        D3DRS_ZFUNC, set to D3DCMP_LESSEQUAL
        The depth buffer is definitely bound to the pipeline at the relevant time
        The depth buffer was correctly cleared before use

    I've used PIX to confirm that all of these things occurred as expected. For example, if I clear the depth buffer to 0 instead of 1, then correctly nothing is drawn, and PIX confirms that all the pixels failed the depth test. But I've also used PIX to confirm that my submitted geometry does not write to the depth buffer and so is not correctly rendered. Any other suggestions?

  • What do game development companies prefer: UDK experience or C++ game projects?

    - by momboco
    What does a game development company prefer: a developer with experience in the UDK engine, or a developer with projects made entirely in C++ with a graphics engine like Ogre3D? I think a coder can demonstrate his abilities better with games made in C++, because that requires deeper knowledge in many fields. However, a lot of companies currently develop their games with UDK, so now I don't know whether it is better to specialize in a game engine like UDK.

  • Best approach for unit enemy "awareness" in RTS?

    - by Phil
    I'm using Unity3d to develop an RTS/TD hybrid prototype game. What is the best approach to have "awareness" between units and their enemies? Is it sane to have every unit check the distance to every enemy and engage if within range? The approach I'm going for right now is to have a trigger sphere on every unit. If an enemy enters the trigger, the unit becomes aware of the enemy and starts distance checking. I'm imagining that this would save some unnecessary checks? What's the best practice here (if there's such a thing)? Thanks for reading.

  • 2D Tile Map files for Platformer, JSON or DB?

    - by Stephen Tierney
    I'm developing a 2D platformer with some uni friends. We've based it upon the XNA Platformer Starter Kit, which uses .txt files to store the tile map. While this is simple, it does not give us enough control and flexibility with level design. Some examples: multiple layers of content require multiple files, each object is fixed onto the grid, rotation of objects isn't allowed, the number of usable characters is limited, etc. So I'm doing some research into how to store the level data and map file.

    Reasoning for DB: From my perspective I see less redundancy of data when using a database to store the tile data. Tiles in the same x,y position with the same characteristics can be reused from level to level. It seems simple enough to write a method that retrieves all the tiles used in a particular level from the database.

    Reasoning for JSON: Visually editable files, and changes can be tracked via SVN a lot more easily. But there is repeated content.

    Does either have any drawbacks (load times, access times, memory, etc.) compared to the other? And what is commonly used in the industry? Currently the file looks like this:

        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        .........GGG........
        .........###........
        ....................
        ....GGG.......GGG...
        ....###.......###...
        ....................
        .1................X.
        ####################

        1 - Player start point, X - Level Exit, . - Empty space, # - Platform, G - Gem
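
    For what it's worth, a JSON version of the same level usually stores layers as arrays of tile records rather than a character grid, which is what buys free positioning and rotation. A hypothetical shape (all field names invented, not taken from any particular engine):

        {
          "name": "level-1",
          "playerStart": { "x": 1, "y": 13 },
          "exit": { "x": 18, "y": 13 },
          "layers": [
            {
              "name": "platforms",
              "tiles": [
                { "id": "platform", "x": 9.0, "y": 8.0, "rotation": 0 },
                { "id": "platform", "x": 4.5, "y": 11.0, "rotation": 15 }
              ]
            },
            {
              "name": "pickups",
              "tiles": [ { "id": "gem", "x": 9.0, "y": 7.0 } ]
            }
          ]
        }

    The repeated content this produces is exactly the kind of data that compresses well and diffs well under SVN, which is part of why shipping levels as flat files tends to be the more common choice than a database for static level data.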

  • Cocos2d: Tongue effect like in Munch Time

    - by Joey Green
    I want to do a tongue effect for my character like the one in Munch Time (shown in the pic). The player performs some action and his tongue attaches to the nearest platform. I'm thinking this is simply: get the distance to the platform and keep the player at that distance as he moves back and forth, giving him the swinging effect. For the drawing, I want the same effect, where the tongue sprite is skinniest at the middle of the distance between the character and the platform. I know how to do this in a shader (I'm using cocos2d v2, btw), but I'm wondering if there is some built-in functionality that would let me do it. First, is the distance approach right? Second, is there an easy way to do the tongue sprite effect without a shader? Third, I want the player to be able to spring up at will in the direction of the platform. I'm using Box2D. Would there be a way to do this using forces, or would it be easier to write my own code?
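
    Box2D already ships the "keep the player at a fixed distance" constraint as a distance joint, so no hand-rolled swing code is needed. A sketch using the Java (libGDX) Box2D bindings, since the C++ calls have the same names (the body variables are assumed to exist already):

        import com.badlogic.gdx.math.Vector2;
        import com.badlogic.gdx.physics.box2d.Body;
        import com.badlogic.gdx.physics.box2d.World;
        import com.badlogic.gdx.physics.box2d.joints.DistanceJointDef;

        // Attach the tongue: the player swings on a fixed-length link to the grab point.
        static void attachTongue(World world, Body player, Body platform, Vector2 grabPoint) {
            DistanceJointDef def = new DistanceJointDef();
            // initialize() sets the joint length from the current anchor distance.
            def.initialize(platform, player, grabPoint, player.getWorldCenter());
            def.collideConnected = true;
            world.createJoint(def);   // gravity now produces the pendulum swing for free
        }

    Releasing the tongue is world.destroyJoint(...), and the "spring up at will" part can then be a single linear impulse applied to the player body toward the grab point.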
