Search Results

Search found 25518 results on 1021 pages for 'iterative development'.

Page 367/1021

  • Camera changes view when controller connected

    - by ChocoMan
    I have a weird situation. I have a model set to 0 for X, Y and Z. My camera's position is set to: X = 0 (but it updates as the model moves around), Y = the model's height + 20f (about the same level as the model's shoulders), Z = 25f (behind the model). Without the controller plugged in, everything looks the way I want it. But as soon as I plug the controller in, the camera aims at the sky! When I unplug the controller, the camera goes back to where it should be. Does anyone have any insight into why plugging in a controller would cause this?

    Read the article

  • Penalty for collision during a racing game

    - by Arthur Wulf White
    In a racing game: how should we penalize the player for colliding head-on into obstacles such as walls, trees and so on? What is the way it is done in your favorite racing game? How is it done in other successful racing games? Do you think temporarily disabling the engine for a second is too severe? If I do go that route, how would I convey the 'engine is disabled' to the player in a subtle and easily understood way? Is this 'too much' of a penalty? Would the slow-down from the collision be sufficient to discourage the player from driving too carelessly? Which one is more fun? Should I consider a health-bar and affect engine performance for 'low health' status? Could you offer examples of games that handle this well and ones that do it poorly? Please share your experience with racing-game obstacles and reference games you feel perform well in this aspect. I am sure we all enjoy our racing games differently and I would like to hear different opinions regarding this issue. I would also like to hear how you feel we should penalize or reward colliding with other vehicles. Should enemy vehicles be destroyable? Should they slow down severely when they hit the back of your car, or would that make the gameplay imbalanced?

    Read the article

  • Anti-cheat Javascript for browser/HTML5 game

    - by Billy Ninja
    I'm planning to venture into making a single-player action RPG in JS/HTML5, and I'd like to prevent cheating. I don't need 100% protection, since it's not going to be a multiplayer game, but I want some level of protection. So what strategies do you suggest beyond minification and obfuscation? I wouldn't mind doing some simple server-side checking, but I don't want to go the Diablo 3 path of keeping all my game state changes on the server side. Since it's going to be an RPG of sorts, I came up with the idea of a stats inspector that checks for abrupt changes in their values, but I'm not sure how consistent and trustworthy it can be. What about variable and function scopes? Working in smaller scopes whenever possible is safer, but is it worth the effort? Is there any way for the JavaScript to self-inspect its own text, like a checksum? Are there browser-specific solutions? I wouldn't mind restricting it to Chrome only in the early builds.

    Read the article

  • Typical Method Of Building Puzzle Levels

    - by Josh Kahane
    Hi, I am designing a puzzle game for the iPhone and was wondering about levels, since most puzzle games consist of the player progressing through many of them. You see, for example, Angry Birds has over 100 levels. Once the basis of the game is made, how do developers typically go about building their levels? Do they generally build each one from scratch, more or less, or work off their own template, or have some other method they use to tailor these levels? I imagine building so many levels is a long process, certainly if each one is built individually. Do they do this, or do they have a method which speeds it up once they have their basis? Thanks.

    Read the article

  • Cocos2D 2.0 - masking a sprite

    - by Desperate Developer
    I have read this tutorial about how to mask sprites using Cocos2D 2.0: http://www.raywenderlich.com/4428/how-to-mask-a-sprite-with-cocos2d-2-0 But the author talks about OpenGL ES textures and vertices as if they were common knowledge. My knowledge of OpenGL is zero raised to infinity. All I want is to mask a sprite to a rectangle, the way I would do it in Photoshop using a rectangle as a mask (yes, I want to clip a sprite to the rectangle's bounds, and no, I do not want to use the ClippingNode solution, which does not work for animation/scaling etc.). So, can you guys translate the Klingon used in this tutorial and tell me how a solid rectangle can be used to mask a sprite in Cocos2D? I am desperate, as my username states. I have been searching for this for a week and have tried several solutions without satisfactory results. Please help me. Thanks!

    Read the article

  • What are some great papers/publications relating to game programming?

    - by Archagon
    What are some of your favorite papers and publications that closely relate to game programming? I'm particularly looking for examples that are well-written and illustrated, and/or have had a profound influence on the industry. (Here's one example: in this GDC talk, Bungie's David Aldridge mentions that a paper called "The TRIBES Engine Networking Model" was the starting point for Halo's network code.)

    Read the article

  • How to organize timeline in a Flash project?

    - by miguelSantirso
    Hi, I am starting a new Flash game and I was wondering if there is a better way to organize the timeline of the project. In my previous games I define a keyframe for each possible status of the game (loading, sponsor, intro, menu, gameplay, etc...). This method works but has some problems... For instance, it is not easy to implement transitions between the different screens in the game. How do you do this? Do you know of some better way?

    Read the article

  • Sphere Texture Mapping shows visible seams

    - by AvengerDr
    As you can see from the above picture, there is a visible seam in the texture mapping. The underlying mesh is a geosphere based on octahedron subdivisions. On that particular latitude, vertices have been duplicated. However, there still is a visible seam. Here is how I calculate the UV coordinates: float longitude = (float)Math.Atan2(normal.X, -normal.Z); float latitude = (float)Math.Acos(normal.Y); float u = (float)(longitude / (Math.PI * 2.0) + 0.5); float v = (float)(latitude / Math.PI); Is this a problem with the coordinates or a mipmapping issue?
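
    A common culprit is the wrap-around of the longitude term: Atan2 jumps from just below +PI to -PI, so U jumps from just below 1.0 back to 0.0 on the triangles that straddle the seam, and interpolation sweeps across almost the whole texture. Below is a minimal C++ sketch of one usual fix, assuming the duplicated seam vertices can have their U shifted per triangle; the Vec3 type and function names are illustrative, not from the project.

    ```cpp
    #include <cmath>

    // Hypothetical minimal vector type mirroring the snippet's normal.X/Y/Z fields.
    struct Vec3 { float X, Y, Z; };

    // Spherical UVs as in the question: U from longitude, V from latitude.
    void SphereUV(const Vec3& n, float& u, float& v)
    {
        const float PI = 3.14159265358979f;
        float longitude = std::atan2(n.X, -n.Z);   // range (-PI, PI]
        float latitude  = std::acos(n.Y);          // range [0, PI]
        u = longitude / (2.0f * PI) + 0.5f;        // range (0, 1]
        v = latitude / PI;
    }

    // Assumed seam fix: for a triangle whose three U values straddle the wrap
    // (max - min > 0.5), shift the low-side U values by +1.0 so interpolation
    // stays local instead of crossing the texture. Requires wrap (repeat)
    // addressing on the sampler; the poles need separate handling.
    void FixSeam(float& u0, float& u1, float& u2)
    {
        float lo = std::fmin(u0, std::fmin(u1, u2));
        float hi = std::fmax(u0, std::fmax(u1, u2));
        if (hi - lo > 0.5f) {
            if (u0 < 0.5f) u0 += 1.0f;
            if (u1 < 0.5f) u1 += 1.0f;
            if (u2 < 0.5f) u2 += 1.0f;
        }
    }
    ```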

    Read the article

  • Unity: Spin wheels to move vehicle

    - by Paul Manta
    I am just getting started with Unity and I'd like to ask a question. If I have a "Vehicle" object that has two children: "FrontWheel" and "BackWheel" (both 'wheels' are cylinders), how should I set everything up such that I can move the entire vehicle by turning its wheels? When I apply a torque to "FrontWheel", the vehicle starts to move, but instead of the whole thing moving together, the chassis is rolling on the cylinders and eventually falls off. How can I prevent it from doing that?

    Read the article

  • Xna Loading Screens

    - by Cyral
    I'm making a 2D XNA game. I'd like to implement loading screens when stuff has to load for a while, like when I log in to an account, connect to the server, and generate worlds. I'm pretty sure it needs to be multithreaded, because I want to be able to do something like "Generating World 10%...11%...". GenerateWorld() { //Call StartLoading("Generating World"); or something //Start generating, updating progress... //End loading screen and fade into world } Help appreciated, I'm new.
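
    XNA-specific threading aside, the shape of the solution is the same in any language: kick the slow work off on a worker thread, have it publish a progress value, and let the normal update/draw loop read that value to render the loading screen. A rough C++ sketch of the pattern (names like GenerateWorld and gProgress are illustrative, not from the project):

    ```cpp
    #include <atomic>
    #include <thread>
    #include <chrono>
    #include <cstdio>

    std::atomic<int>  gProgress{0};   // 0..100, read by the update/draw thread
    std::atomic<bool> gDone{false};

    void GenerateWorld()
    {
        for (int step = 0; step < 100; ++step) {
            std::this_thread::sleep_for(std::chrono::milliseconds(10)); // stand-in for real work
            gProgress.store(step + 1);
        }
        gDone.store(true);
    }

    int main()
    {
        std::thread worker(GenerateWorld);   // start loading in the background
        while (!gDone.load()) {              // the game loop keeps drawing the loading screen
            std::printf("Generating World %d%%\n", gProgress.load());
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
        worker.join();                       // fade into the world once loading is complete
        return 0;
    }
    ```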

    Read the article

  • Best way to compute vertex normals from a Triangle list

    - by nkint
    Hi, I'm a complete newbie in computer graphics, so sorry if this is a stupid question. I'm trying to make a simple 3D engine from scratch, more for educational purposes than for real use. I have a Surface object that holds a list of Triangles. For now I compute normals inside the Triangle class, in this way: triangle.computeFaceNormals() { Vec3D u = v1.sub(v3) Vec3D v = v1.sub(v2) Vec3D normal = Vec3D.cross(u,v) normal.normalized() this.n1 = this.n2 = this.n3 = normal } and when building the surface: t = new Triangle(v1,v2,v3).computeFaceNormals() surface.addTriangle(t) I think this is the best way to do that... isn't it? Now, what about vertex normals? I've found this simple algorithm: flipcode vertex normal. But hey, doesn't this algorithm have exponential complexity? (If my computer science background doesn't fail me... by the way, it has 3 nested loops, and I don't think that's the best way to do it.) Any suggestions?
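
    You don't need nested loops: one pass over the faces accumulating each (unnormalized) face normal into its three vertices, then one normalization pass over the vertices, gives smooth vertex normals in linear time. A C++ sketch, assuming an indexed triangle list; the types and names are illustrative, not the poster's Vec3D/Triangle classes:

    ```cpp
    #include <vector>
    #include <cmath>

    struct Vec3 {
        float x, y, z;
        Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
        Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    };

    static Vec3 Cross(const Vec3& a, const Vec3& b)
    {
        return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
    }

    static Vec3 Normalize(const Vec3& v)
    {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return (len > 0.0f) ? Vec3{ v.x / len, v.y / len, v.z / len } : v;
    }

    // One pass over the faces, one pass over the vertices: O(faces + vertices),
    // assuming an indexed triangle list (three vertex indices per face).
    std::vector<Vec3> ComputeVertexNormals(const std::vector<Vec3>& positions,
                                           const std::vector<unsigned>& indices)
    {
        std::vector<Vec3> normals(positions.size(), Vec3{0, 0, 0});
        for (size_t i = 0; i + 2 < indices.size(); i += 3) {
            unsigned a = indices[i], b = indices[i + 1], c = indices[i + 2];
            // Unnormalized face normal; its length weights large faces more heavily.
            Vec3 face = Cross(positions[b] - positions[a], positions[c] - positions[a]);
            normals[a] = normals[a] + face;
            normals[b] = normals[b] + face;
            normals[c] = normals[c] + face;
        }
        for (Vec3& n : normals) n = Normalize(n);
        return normals;
    }
    ```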

    Read the article

  • Scene Graph as Object Container?

    - by Bunkai.Satori
    A scene graph contains nodes representing game objects. At first glance, it might seem practical to use the scene graph as the physical container for in-game objects, instead of a std::vector, for example. My question is: is it practical to use the scene graph to contain the game objects, or should it be used only to define the linkages between scene objects/nodes, while keeping the objects stored in a separate container, such as a std::vector?
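
    One common split, sketched below in C++ purely as an illustration (the names are made up): keep ownership in a flat container and let the scene graph store only non-owning links, so object lifetime and traversal hierarchy stay decoupled.

    ```cpp
    #include <vector>
    #include <memory>

    struct GameObject { /* transform, components, ... */ };

    // Scene-graph node: describes linkage only, does not own the objects.
    struct SceneNode {
        GameObject* object = nullptr;          // non-owning
        std::vector<SceneNode*> children;      // non-owning links between nodes
    };

    struct World {
        std::vector<std::unique_ptr<GameObject>> objects;  // single owning container
        std::vector<std::unique_ptr<SceneNode>> nodes;     // nodes owned flat, linked hierarchically
        SceneNode* root = nullptr;
    };
    ```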

    Read the article

  • 3DS Max exporting too many vertexes for model

    - by Juan Pablo
    I have sample models of a cube and a buddha downloaded from the internet in .3ds format, which I can load correctly into my program and view without problems, but I wanted to try creating my own model. I created a simple box mesh in 3ds Max and exported it as .3ds (converted to mesh - export as .3ds). When inspecting the .3ds file with a hex viewer, I was expecting to see 8 vertices and 12 faces declared (like the model I downloaded from the internet). But what I found was that it listed 26 vertices and 12 faces! And when I try to load that file with my .3ds viewer, my parser isn't detecting the face block (0x4120), which is strange because it worked for other objects downloaded from the internet. Do I have to set any special property in order to export a .3ds file with the minimum number of vertices and a vertex-index list?

    Read the article

  • How to implement a stack of game states in C++

    - by Lisandro Vaccaro
    I'm new to C++ and as a college project I'm building a 2D platformer with some classmates. I recently read that it's a good idea to have a stack of game states instead of a single global variable with the game state (which is what I have now), but I'm not sure how to do it. Currently this is my implementation: class GameState { public: virtual ~GameState(){}; virtual void handle_events() = 0; virtual void logic() = 0; virtual void render() = 0; }; class Menu : public GameState { public: Menu(); ~Menu(); void handle_events(); void logic(); void render(); }; Then I have a global pointer of type GameState*: GameState *currentState = NULL; And in my main I assign currentState and call its methods: int main(){ currentState = new Menu(); currentState->handle_events(); } How can I implement a stack or something similar to go from that to something like this: int main(){ statesStack.push(new Menu()); statesStack.getTop().handle_events(); }
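
    A minimal C++ sketch of the stack idea, reusing the GameState interface from the question; the StateStack class and the std::unique_ptr ownership scheme are choices of this sketch, not the only way to do it:

    ```cpp
    #include <stack>
    #include <memory>

    class GameState {
    public:
        virtual ~GameState() {}
        virtual void handle_events() = 0;
        virtual void logic() = 0;
        virtual void render() = 0;
    };

    class Menu : public GameState {
    public:
        void handle_events() override {}   // menu input goes here
        void logic() override {}
        void render() override {}
    };

    class StateStack {
    public:
        void push(std::unique_ptr<GameState> state) { states.push(std::move(state)); }
        void pop()                                  { if (!states.empty()) states.pop(); }
        GameState* top() const { return states.empty() ? nullptr : states.top().get(); }
    private:
        std::stack<std::unique_ptr<GameState>> states;  // owns the states
    };

    int main()
    {
        StateStack statesStack;
        statesStack.push(std::unique_ptr<GameState>(new Menu()));

        // One frame of the game loop: always drive whatever state is on top.
        if (GameState* current = statesStack.top()) {
            current->handle_events();
            current->logic();
            current->render();
        }

        // Entering gameplay would push a new state; leaving it pops back to the menu.
        return 0;
    }
    ```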

    Read the article

  • With 2 superposed cameras at different depths, switching their culling masks between layers to implement object-selective antialiasing

    - by user36845
    We superposed two cameras, one of which uses AA as a post-processing effect (AA filtering is cancelled). The camera with the AA effect has depth 0 and the camera with no effect has depth 1, as can be seen in the 5th and 6th pictures. The objects seen on the left are in layer 1 and the ones on the right are in layer 2. We then wrote a script that switches the culling masks of the cameras between the two layers at the push of buttons 1 and 2 respectively, and accomplishes object-selective antialiasing as seen in the first three pictures. (The way the two cameras separately switch culling masks between layers is illustrated in pictures 7, 8 & 9.) HOWEVER, after making the environment 3D (see pictures 1-4) by parenting the 2 cameras under the First-Person Controller, we started moving around in the environment and stumbled upon a big issue: when we look at the objects from an angle such as in the 4th picture and want to apply antialiasing to the first object (the object on the left), which now stands closer to our cameras, the culling mask of the 1st camera, which is at depth 0, has to be switched to that object's layer, while the second object has to be in the culling mask of the 2nd camera at depth 1. And since the two image outputs of the two superposed cameras are laid on top of one another, we obtain the erroneous/unrealistic result of the object farther in the back appearing closer to the camera than the front object (see the 4th picture). We already tried switching the depths of the cameras so that the 1st camera (with AA) has depth 1 and the second has depth 0; BUT the camera with the AA effect works in such a way that it applies the AA effect to its full view. So the camera with the AA effect always has to remain at the lowest depth, and the layer of the object to be antialiased then has to be assigned to the culling mask of the AA camera; otherwise all objects in the AA camera's view (the two cubes in our case) become antialiased, which we don't want. So how can we resolve this? The pictures are below and in the comments since each post can have 2 pics: Pic 1. No button is pushed: both objects appear aliased. Pic 2. Button 1 is pushed: the left (1st) object is antialiased; the 2nd object remains aliased. Pic 3. Button 2 is pushed: the right (2nd) object is antialiased; the 1st object remains aliased. Pic 4. The problematic result in 3D, when using two superposed cameras with different depths. Pic 5. Camera 1's properties: using AA post-processing, depth 0. Pic 6. Camera 2's properties: NOT using AA post-processing, depth 1. Pic 7. When no button is pushed, both objects are in the culling mask of Camera 2 and are aliased. Pic 8. When 1 is pushed, camera 1 (bottom) shows the 1st object and camera 2 (top) shows the 2nd. Pic 9. When 2 is pushed, camera 1 (bottom) shows the 2nd object and camera 2 (top) shows the 1st.

    Read the article

  • Cocos2dx- Draw primitives(polygons) on Update

    - by Haider
    In my game I'm trying to draw polygons on each step, i.e. in the update method. I call the draw() method to draw a new polygon with dynamic vertices. The following is my code: void HelloWorld::draw(){glLineWidth(1);CCPoint filledVertices[] = {ccp(drawX1,drawY1),ccp(drawX2,drawY2), ccp(drawX3,drawY3), ccp(drawX4,drawY4)};ccDrawSolidPoly( filledVertices, 4, ccc4f(0.5f, 0.5f, 1, 1 ));} I call the draw() method from the update(float dt) method. The engine is behaving inconsistently, i.e. it sometimes displays the polygons and on other occasions it does not. Is this the right approach for such a task? If not, what is the best way to display a large number of primitives?
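
    For what it's worth, in Cocos2d-x 2.x the engine calls draw() once per frame on its own; calling it from update() draws outside the render pass, which would explain the inconsistency. A sketch of the split, assuming HelloWorld is the CCLayer subclass from the question and drawX1..drawY4 are its members: update() only mutates the vertex data, draw() only renders it.

    ```cpp
    #include "cocos2d.h"
    USING_NS_CC;

    // update() is scheduled (e.g. this->scheduleUpdate() in init()) and only
    // changes the data; the engine invokes draw() itself during the render pass.
    void HelloWorld::update(float dt)
    {
        // ... move drawX1..drawY4 here; do not call draw() manually ...
    }

    void HelloWorld::draw()
    {
        glLineWidth(1);
        CCPoint filledVertices[] = {
            ccp(drawX1, drawY1), ccp(drawX2, drawY2),
            ccp(drawX3, drawY3), ccp(drawX4, drawY4)
        };
        ccDrawSolidPoly(filledVertices, 4, ccc4f(0.5f, 0.5f, 1, 1));
    }
    ```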

    Read the article

  • Creating blur with an alpha channel, incorrect inclusion of black

    - by edA-qa mort-ora-y
    I'm trying to do a blur on a texture with an alpha channel. Using a typical approach (two-pass, Gaussian weighting) I end up with a very dark blur. The reason is that the blurring does not properly account for the alpha channel. It happily blurs in the invisible part of the image, which happens to be black, and thus results in a very dark blur. Is there a technique to blur that properly accounts for the alpha channel?
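
    The standard fix is to blur premultiplied colour: multiply RGB by A before the blur so fully transparent (black) texels contribute nothing, blur all four channels, then divide the alpha back out (or keep the result premultiplied and composite accordingly). A CPU-side C++ sketch of the two wrapper steps; the image layout and names are assumptions of the sketch:

    ```cpp
    #include <cstdint>
    #include <vector>
    #include <algorithm>

    // Assumed 8-bit RGBA image stored row-major.
    struct ImageRGBA {
        int width = 0, height = 0;
        std::vector<uint8_t> pixels;   // width * height * 4 bytes
    };

    void PremultiplyAlpha(ImageRGBA& img)
    {
        for (size_t i = 0; i + 3 < img.pixels.size(); i += 4) {
            float a = img.pixels[i + 3] / 255.0f;
            img.pixels[i + 0] = static_cast<uint8_t>(img.pixels[i + 0] * a);
            img.pixels[i + 1] = static_cast<uint8_t>(img.pixels[i + 1] * a);
            img.pixels[i + 2] = static_cast<uint8_t>(img.pixels[i + 2] * a);
        }
    }

    void UnpremultiplyAlpha(ImageRGBA& img)
    {
        for (size_t i = 0; i + 3 < img.pixels.size(); i += 4) {
            float a = img.pixels[i + 3] / 255.0f;
            if (a <= 0.0f) continue;   // leave fully transparent texels alone
            img.pixels[i + 0] = static_cast<uint8_t>(std::min(255.0f, img.pixels[i + 0] / a));
            img.pixels[i + 1] = static_cast<uint8_t>(std::min(255.0f, img.pixels[i + 1] / a));
            img.pixels[i + 2] = static_cast<uint8_t>(std::min(255.0f, img.pixels[i + 2] / a));
        }
    }

    // Usage: PremultiplyAlpha(img); GaussianBlur(img); UnpremultiplyAlpha(img);
    // where GaussianBlur is the existing two-pass blur, applied to all four channels.
    ```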

    Read the article

  • Advice for starting a video game design career

    - by Allen Gabriel Baker
    I'm 24 and have a passion for video games and game design. I've decided I want to design video games as my career. I have no experience with designing video games or coding, but I'm interested and willing to learn. I want a job at any level, but what would I need to land one? I have no college experience and I have no money. What is a cheap school, or do I really need to go to school for this, or can I learn on my own? Is it possible to do this with no money? I'm literally broke, but I want this so badly I feel like it's the only career I'll enjoy. I want to call up companies and ask them what they are looking for in someone they want to hire; is that a good idea? Also, I don't know the history of video game design, and I don't want to sound like a dummy when someone says something about this field or talks about a famous designer and I have no idea who they're talking about. So what is the key info when it comes to this field, and where should I find it? Hopefully some of you guys and girls can help me out. I know in the future I will create something everyone will enjoy, and you will remember when you gave me advice, and I will always remember you for helping me. I'm gifted, I know I am, and I want to share my gift with the rest of the world by making games that change the industry. Please help me out.

    Read the article

  • Why doesn't Unity's OnCollisionEnter give me surface normals, and what's the most reliable way to get them?

    - by michael.bartnett
    Unity's on collision event gives you a Collision object that gives you some information about the collision that happened (including a list of ContactPoints with hit normals). But what you don't get is surface normals for the collider that you hit. Here's a screenshot to illustrate. The red line is from ContactPoint.normal and the blue line is from RaycastHit.normal. Is this an instance of Unity hiding information to provide a simplified API? Or do standard 3D realtime collision detection techniques just not collect this information? And for the second part of the question, what's a surefire and relatively efficient way to get a surface normal for a collision? I know that raycasting gives you surface normals, but it seems I need to do several raycasts to accomplish this for all scenarios (maybe a contact point/normal combination misses the collider on the first cast, or maybe you need to do some average of all the contact points' normals to get the best result). My current method: (1) back up the Collision.contacts[0].point along its hit normal; (2) raycast down the negated hit normal for float.MaxValue, on Collision.collider; (3) if that fails, repeat steps 1 and 2 with the non-negated normal; (4) if that fails, try steps 1 to 3 with Collision.contacts[1]; (5) repeat step 4 until successful or until all contact points are exhausted; (6) give up, return Vector3.zero. This seems to catch everything, but all those raycasts make me queasy, and I'm not sure how to test that this works for enough cases. Is there a better way?

    Read the article

  • Gamemaker: Making a bullet Spawn at the enemy it was called from

    - by Strokes
    I'm making a GameMaker game with GML. In this game I have multiple enemies (the same object) on screen at the same time. I want them all to spawn a bullet at their own location, but instead each enemy spawns a bullet at one single enemy. They all shoot, but the bullets appear in the wrong location. I want the bullet to spawn at the location of the instance it was called from. How do I do this? Thank you for reading my question. Code: obj_carrier is the enemy I want to spawn from. obj_carrier_bullet is the bullet I want to spawn at the location of the carrier. There are multiple carriers around the stage. In the step event of the carrier, inside an if statement: instance_create(obj_carrier.x, obj_carrier.y, obj_carrier_bullet)

    Read the article

  • I am looking for a graduation project idea for a bachelor of computer engineering [closed]

    - by project idea
    I am interested in computer graphics and I have developed many hobby projects, mostly 2D and 3D games/scenes in DirectX and OpenGL, but for a grad project, professors won't allow games. I have browsed many similar questions here and I am convinced the project should be something I am really interested in, as I will give considerable time to it. But apart from games I am not able to decide on a topic. I am also open to ideas for social apps and Android.

    Read the article

  • Resolution independence - resize on the fly or ship all sizes?

    - by RecursiveCall
    My game relies heavily on textures of various sizes, with some being full-screen. The game is targeted at multiple resolutions. I found that resizing textures (downsizing) works quite well for this game's art type (it's not pixel art or anything like that). I asked my artist to ensure that all textures at the edges of the screen are created in such a way that they can safely "overflow" off screen; this means that aspect ratio is not an issue. So with no aspect ratio issues, I figured that I would simply ask my artist to create assets in very high resolution, and then resize them down to the appropriate screen resolution. The question is, when and how do I do that? Do I pre-resize everything to common resolutions in Photoshop and package all assets in the final product (increasing the download size that the user has to deal with), and then select the appropriate asset based on the detected resolution? Or do I ship with the largest set of textures, detect the resolution on load, set a render target, draw all the downsized assets to it and use that? Or, for the latter, do I use some sort of CPU-side algorithm to resize on game load?

    Read the article

  • Create dynamic buffer SharpDX

    - by fedab
    I want to set up a buffer that is updated every frame but can't figure out what I have to do. The only working thing I have is this: mdexcription = new BufferDescription(Matrix.SizeInBytes * Matrices.Length, ResourceUsage.Dynamic, BindFlags.VertexBuffer, CpuAccessFlags.Write, ResourceOptionFlags.None, 0); instanceBuffer = SharpDX.Direct3D11.Buffer.Create(Device, Matrices, mdexcription); vBB = new VertexBufferBinding(instanceBuffer, Matrix.SizeInBytes, 0); DeviceContext.InputAssembler.SetVertexBuffers(1, vBB); Draw: //Change Matrices (Matrix[]) every frame... instanceBuffer.Dispose(); instanceBuffer = SharpDX.Direct3D11.Buffer.Create(Device, Matrices, mdexcription); vBB = new VertexBufferBinding(instanceBuffer, Matrix.SizeInBytes, 0); DeviceContext.InputAssembler.SetVertexBuffers(1, vBB); I guess Dispose() and creating a new buffer is slow and this can be done much faster. I've read about DataStream but I do not know how to set it up properly. What steps do I have to take to set up a DataStream and achieve a fast every-frame update?
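
    Recreating the buffer each frame can indeed be avoided: with a dynamic buffer you map it with write-discard and copy the new matrices in. In SharpDX that is, if I remember the names right, DeviceContext.MapSubresource/UnmapSubresource with MapMode.WriteDiscard; the equivalent native Direct3D 11 call is sketched below in C++ to make the idea explicit. The helper name and parameters are illustrative.

    ```cpp
    #include <windows.h>
    #include <d3d11.h>
    #include <cstring>

    // Sketch: update a dynamic vertex buffer in place each frame instead of
    // recreating it. Assumes the buffer was created with D3D11_USAGE_DYNAMIC and
    // D3D11_CPU_ACCESS_WRITE (the native equivalents of ResourceUsage.Dynamic and
    // CpuAccessFlags.Write in the question).
    void UpdateInstanceBuffer(ID3D11DeviceContext* context,
                              ID3D11Buffer* instanceBuffer,
                              const void* matrices,
                              size_t byteCount)
    {
        D3D11_MAPPED_SUBRESOURCE mapped = {};
        // WRITE_DISCARD hands back a fresh memory region, so the GPU never stalls
        // on the data it is still reading from the previous frame.
        if (SUCCEEDED(context->Map(instanceBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped))) {
            std::memcpy(mapped.pData, matrices, byteCount);
            context->Unmap(instanceBuffer, 0);
        }
    }
    ```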

    Read the article

  • What kind of steering behaviour or logic can I use to get mobiles to surround another?

    - by Vaughan Hilts
    I'm using path finding in my game to lead a mob to another player (to pursue them). This works to get them on top of the player, but I want them to stop slightly before their destination (so picking the penultimate node works fine). However, when multiple mobs are pursuing the mobile they sometimes "stack on top of each other". What's the best way to avoid this? I don't want to treat the mobs as opaque and blocked (because they're not, you can walk through them), but I want the mobs to have some sense of structure. Example: imagine that each snake guided itself to me and should surround "Setsuna". Notice how both snakes have chosen to prong me? This is not a strict requirement; even being slightly offset is okay. But they should "surround" Setsuna.
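
    One simple way to keep pursuers from stacking is to stop path-finding to the target's own tile and instead path-find to a per-mob "surround slot" offset from the target. A C++ sketch of the slot idea; the Vec2 type, radius and slot assignment are all assumptions of the sketch:

    ```cpp
    #include <cmath>

    struct Vec2 { float x, y; };

    // Give each pursuer a distinct slot on a ring around the target and
    // path-find to the slot instead of to the target itself.
    Vec2 SurroundSlot(const Vec2& target, int slotIndex, int slotCount, float radius)
    {
        const float TWO_PI = 6.28318530718f;
        float angle = TWO_PI * static_cast<float>(slotIndex) / static_cast<float>(slotCount);
        return { target.x + radius * std::cos(angle),
                 target.y + radius * std::sin(angle) };
    }

    // Usage: snake i out of n pursuers walks to SurroundSlot(playerPos, i, n, 1.5f)
    // (about one and a half tiles away), then faces the player; a light separation
    // force between pursuers keeps them from stacking while they travel.
    ```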

    Read the article

  • Android device - C++ OpenGL 2: eglCreateWindowSurface invalid

    - by ThreaderSlash
    I am trying to debug and run OpenGL ES from native C++ on my Android device, in order to implement a native 3D game for mobile smartphones. The point is that I get an error and see no reason for it. Here is the line of code that the debugger complains about: mSurface = eglCreateWindowSurface(mDisplay, lConfig, mApplication->window, NULL); And this is the error message: Invalid arguments ' Candidates are: void * eglCreateWindowSurface(void *, void *, unsigned long int, const int *) ' --x-- Here is the declaration: android_app* mApplication; EGLDisplay mDisplay; EGLint lFormat, lNumConfigs, lErrorResult; EGLConfig lConfig; // Defines display requirements. 16bits mode here. const EGLint lAttributes[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL_BLUE_SIZE, 5, EGL_GREEN_SIZE, 6, EGL_RED_SIZE, 5, EGL_SURFACE_TYPE, EGL_WINDOW_BIT, EGL_RENDER_BUFFER, EGL_BACK_BUFFER, EGL_NONE }; // Retrieves a display connection and initializes it. packt_Log_debug("Connecting to the display."); mDisplay = eglGetDisplay(EGL_DEFAULT_DISPLAY); if (mDisplay == EGL_NO_DISPLAY) goto ERROR; if (!eglInitialize(mDisplay, NULL, NULL)) goto ERROR; // Selects the first OpenGL configuration found. packt_Log_debug("Selecting a display config."); if(!eglChooseConfig(mDisplay, lAttributes, &lConfig, 1, &lNumConfigs) || (lNumConfigs <= 0)) goto ERROR; // Reconfigures the Android window with the EGL format. packt_Log_debug("Configuring window format."); if (!eglGetConfigAttrib(mDisplay, lConfig, EGL_NATIVE_VISUAL_ID, &lFormat)) goto ERROR; ANativeWindow_setBuffersGeometry(mApplication->window, 0, 0, lFormat); // Creates the display surface. packt_Log_debug("Initializing the display."); mSurface = eglCreateWindowSurface(mDisplay, lConfig, mApplication->window, NULL); --x-- Hope someone here can shed some light on it.
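
    For reference, the error text suggests the toolchain (or the IDE's indexer) is resolving EGLNativeWindowType to an integer type rather than to ANativeWindow*, which is what the NDK's eglplatform.h should give you. One hedged workaround is an explicit cast at the call site, sketched below; if it is only an indexer problem, the cleaner fix is pointing the include path at the NDK's EGL headers.

    ```cpp
    // Assumed workaround: cast the Android window to the type EGL expects.
    // (If the NDK headers are picked up correctly, EGLNativeWindowType is already
    // ANativeWindow* and the cast is a no-op.)
    mSurface = eglCreateWindowSurface(mDisplay, lConfig,
                                      (EGLNativeWindowType) mApplication->window,
                                      NULL);
    if (mSurface == EGL_NO_SURFACE) goto ERROR;
    ```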

    Read the article
