Search Results

Search found 25496 results on 1020 pages for 'development fabric'.

Page 541/1020

  • Change the shape of body dynamically

    - by user45491
    I have a problem where I have a balloon which I need to continuously inflate and deflate in the update method. I have tried to use setScaleCenter, but it is not giving the desired result. Below is the code I am trying to make work:

        scale += (float) ((dist - lastDist) / lastDist);
        Log.d("pinch", "scale is " + scale);
        Log.d("pinch", "change in scale is " + (float) ((lastDist - dist) / lastDist));
        player.setScaleCenter(scale, scale);
        player.setScale(scale);
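    For comparison, a minimal sketch (in C#, engine-agnostic) of one way to drive a continuous inflate/deflate cycle. The Balloon type and its SetScale method are hypothetical stand-ins for the real sprite API:

        using System;

        // Hypothetical stand-in for the real sprite: only the scale matters here.
        class Balloon
        {
            public float Scale { get; private set; } = 1f;
            public void SetScale(float s) => Scale = s;
        }

        class BalloonAnimator
        {
            readonly Balloon balloon;
            readonly float minScale, maxScale, cyclesPerSecond;
            float elapsed;

            public BalloonAnimator(Balloon balloon, float minScale, float maxScale, float cyclesPerSecond)
            {
                this.balloon = balloon;
                this.minScale = minScale;
                this.maxScale = maxScale;
                this.cyclesPerSecond = cyclesPerSecond;
            }

            // Call once per frame with the frame's delta time in seconds.
            public void Update(float deltaSeconds)
            {
                elapsed += deltaSeconds;
                // Sine wave mapped from [-1, 1] to [minScale, maxScale].
                float t = (MathF.Sin(elapsed * cyclesPerSecond * 2f * MathF.PI) + 1f) / 2f;
                balloon.SetScale(minScale + t * (maxScale - minScale));
            }
        }

    The key point is that the scale is derived from elapsed time rather than from accumulated deltas, so it stays bounded between the chosen minimum and maximum.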

    Read the article

  • How to display consistent background image

    - by Tofu_Craving_Redish_BlueDragon
    Drawing a large background is relatively slow in PyGame. In order to avoid drawing the background every frame, you could draw it once and then do nothing. However, if something is drawn onto the surface and keeps moving, you will need to redraw the background in order to "erase" the pixels left by the moving object; otherwise, you will have "traces" of the moving object. I have a moving object in my PyGame game. However, I do not want to "clear the color buffer" by redrawing the background image, because redrawing the background image every frame is slow. My solution: I will "clear" only the required portions of the "buffer" (where the "traces" of the moving object are left) by redrawing those portions of the background. Is there a better way to keep a consistent background?
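    For illustration, a sketch of the dirty-rectangle bookkeeping behind that solution, written in C# for brevity (the idea is framework-agnostic, and the Rect type here is a hypothetical stand-in for PyGame's Rect):

        using System;

        // Only the union of the sprite's old and new bounds needs to be re-blitted
        // from the background each frame before the sprite is drawn again.
        struct Rect
        {
            public int X, Y, Width, Height;
            public Rect(int x, int y, int w, int h) { X = x; Y = y; Width = w; Height = h; }

            public static Rect Union(Rect a, Rect b)
            {
                int x1 = Math.Min(a.X, b.X);
                int y1 = Math.Min(a.Y, b.Y);
                int x2 = Math.Max(a.X + a.Width, b.X + b.Width);
                int y2 = Math.Max(a.Y + a.Height, b.Y + b.Height);
                return new Rect(x1, y1, x2 - x1, y2 - y1);
            }
        }

        class DirtyRectDemo
        {
            static void Main()
            {
                var oldBounds = new Rect(100, 100, 32, 32); // where the sprite was last frame
                var newBounds = new Rect(104, 102, 32, 32); // where it is now
                Rect dirty = Rect.Union(oldBounds, newBounds);
                // Re-blit only `dirty` from the background, then draw the sprite at newBounds.
                Console.WriteLine($"Redraw region: {dirty.X},{dirty.Y} {dirty.Width}x{dirty.Height}");
            }
        }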

    Read the article

  • Working with lots of cubes. Improving performance?

    - by Randomman159
    Edit: To sum the question up, I have a voxel-based world (Minecraft style; thanks, Communist Duck) which is suffering from poor performance. I am not sure of the source, but would like any possible advice on how to get rid of it.

    I am working on a project where a world consists of a large quantity of cubes (I would give you a number, but world size is user-defined). My test world is around 48 x 32 x 48 blocks. Basically these blocks don't do anything in themselves; they just sit there. They start being used when it comes to player interaction: I need to check which cubes the user's mouse interacts with (mouse over, clicking, etc.), and handle collision detection as the player moves.

    I had a massive amount of lag at first, looping through every block. I have managed to decrease that lag by looping through all the blocks, finding which blocks are within a particular range of the character, and then only looping through those blocks for the collision detection, etc. However, I am still running at a depressing 2 fps. Does anyone have any other ideas on how I could decrease this lag? By the way, I am using XNA (C#) and yes, it is 3D.
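    One illustration of cutting the per-frame work further: derive the small range of block indices around the player directly from its position instead of scanning the whole array. A minimal C# sketch, where the 1-unit block size and the world dimensions are assumptions:

        using System;
        using System.Collections.Generic;

        // Sketch: instead of testing every block, compute the index range of cells
        // that can possibly be within `radius` of the player.
        class VoxelQuery
        {
            const float BlockSize = 1f;

            public static IEnumerable<(int x, int y, int z)> NearbyBlocks(
                float px, float py, float pz, float radius, int sizeX, int sizeY, int sizeZ)
            {
                int minX = Math.Max(0, (int)((px - radius) / BlockSize));
                int maxX = Math.Min(sizeX - 1, (int)((px + radius) / BlockSize));
                int minY = Math.Max(0, (int)((py - radius) / BlockSize));
                int maxY = Math.Min(sizeY - 1, (int)((py + radius) / BlockSize));
                int minZ = Math.Max(0, (int)((pz - radius) / BlockSize));
                int maxZ = Math.Min(sizeZ - 1, (int)((pz + radius) / BlockSize));

                for (int x = minX; x <= maxX; x++)
                    for (int y = minY; y <= maxY; y++)
                        for (int z = minZ; z <= maxZ; z++)
                            yield return (x, y, z);
            }
        }

    For a 48 x 32 x 48 world and a radius of a couple of blocks, this touches a few dozen cells per frame instead of roughly 73,000.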

    Read the article

  • What OpenGL version(s) to learn and/or use?

    - by zuko
    So, I'm new to OpenGL... I have general knowledge of game programming but little practical experience. I've been looking into various articles and books and trying to dive into OpenGL, but I've found the various versions and the old vs. new ways of doing things confusing.

    I guess my first question is whether anyone knows some figures about the percentage of gamers that can run each version of OpenGL. What's the market share like? 2.x, 3.x, 4.x... I looked into the requirements for Half-Life 2, since I know Valve updated it with OpenGL to run on Mac and I know they usually try to hit a very wide user base, and they list a minimum of a GeForce 8 series card. I looked at the 8800 GT on Nvidia's website and it listed support for OpenGL 2.1, which, maybe I'm wrong, sounds ancient to me since there's already 4.x. Then I looked up a driver for the 8800 GT and it says it supports 4.2! A bit of a discrepancy there, lol. I've also read things like XP only supports up to a certain version, or OS X only supports 3.2, or all kinds of other things. Overall, I'm just confused as to how much support there is for the various versions and which version to learn/use.

    I'm also looking for learning resources. My search results thus far have pointed me to the OpenGL SuperBible. The 4th edition has great reviews on Amazon, but it teaches 2.1. The 5th edition teaches 3.3, and there are a couple of things in the reviews that mention the 4th edition is better and that the 5th edition doesn't properly teach the new features, or something? Basically, even within learning material I'm seeing discrepancies and I just don't even know where to start. From what I understand, 3.x started a whole new way of doing things, and I've read in various articles and reviews that you want to stay away from deprecated features like glBegin() and glEnd(), yet a lot of books and tutorials I've seen use that method. I've seen people saying that, basically, the new way of doing things is more complicated, yet the old way is bad.

    Just as a side note: personally, I know I still have a lot to learn beforehand, but I'm interested in tessellation, so I guess that factors into it as well, because as far as I understand that's only in 4.x? (Btw, my desktop supports 4.2.)

    Read the article

  • How would I use JBox2d in Java?

    - by BluFire
    So I did some research and found Box2D. I then proceeded to download it and the testbed. Now that I have it, I don't know how to properly use it. I'm looking for a clear, simple answer on how to use the engine. What I did was put it into a lib folder and reference the JBox2D jar file. After that I got stuck. How can I use this to program games for Android? I'm very confused, since Box2D was originally written for C++.

    Read the article

  • Crash when trying to detect touch

    - by iQue
    I've got a character in a 2D game using a SurfaceView that I want to be able to move using a button (eventually a joystick), but my game crashes as soon as I try to move my sprite. This is my onTouch method for my steering button:

        public void handleActionDown(int eventX, int eventY) {
            if (eventX >= (x - bitmap.getWidth() / 2) && (eventX <= (x + bitmap.getWidth() / 2))) {
                if (eventY >= (y - bitmap.getHeight() / 2) && (y <= (y + bitmap.getHeight() / 2))) {
                    setTouched(true);
                } else {
                    setTouched(false);
                }
            } else {
                setTouched(false);
            }
        }

    If I put this in my update method:

        public void update() {
            x += (speed.getXv() * speed.getxDirection());
            y += (speed.getYv() * speed.getyDirection());
        }

    the sprite moves on its own just fine, but as soon as I change it to:

        public void update() {
            if (steering.isTouched()) {
                x += (speed.getXv() * speed.getxDirection());
                y += (speed.getYv() * speed.getyDirection());
            }
        }

    the game crashes. Does anyone know why this is or how to fix it? I cannot figure it out. I'm using MotionEvent.ACTION_DOWN to check if the user is pressing the screen.

    Read the article

  • Finding out which tile a mouse click landed in

    - by Shard
    I am working on an isometric grid-based game and I'm having an issue trying to link a mouse click from the user to a tile. I have been able to split the problem into two parts: first, finding the rectangle that surrounds a tile, which I have been able to do; but the second part, figuring out from the rectangle which tile the click landed in, has got me stumped. Here is an example of a rectangle with tiles on the inside: The rectangle is 70px long and 30px high, so if I use an input of, say, 30 on x and 20 on y (measured from the top-left of the rectangle), how would I go about determining which tile this fell into?
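    For reference, one common mapping from screen coordinates to isometric tile indices, sketched in C#. It assumes the origin is the top corner of tile (0,0)'s diamond, 70x30 px tiles, and map axes running down-right (x) and down-left (y); the offsets would need adjusting for a different layout:

        using System;

        // Inverse of the usual iso projection:
        //   screenX = (tileX - tileY) * HalfW
        //   screenY = (tileX + tileY) * HalfH
        class IsoPicker
        {
            const float HalfW = 35f; // half of the 70 px tile width
            const float HalfH = 15f; // half of the 30 px tile height

            public static (int tileX, int tileY) ScreenToTile(float screenX, float screenY)
            {
                int tileX = (int)Math.Floor((screenX / HalfW + screenY / HalfH) / 2f);
                int tileY = (int)Math.Floor((screenY / HalfH - screenX / HalfW) / 2f);
                return (tileX, tileY);
            }

            static void Main()
            {
                // A click 30 px right and 20 px down from the origin.
                var (tx, ty) = ScreenToTile(30f, 20f);
                Console.WriteLine($"Tile: ({tx}, {ty})");
            }
        }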

    Read the article

  • Several classes need to access the same data, where should the data be declared?

    - by Juicy
    I have a basic 2D tower defense game in C++. Each map is a separate class which inherits from GameState. The map delegates the logic and drawing code to each object in the game and sets data such as the map path. In pseudo-code the logic section might look something like this:

        update():
            for each creep in creeps: creep.update()
            for each tower in towers: tower.update()
            for each missile in missiles: missile.update()

    The objects (creeps, towers and missiles) are stored in vectors of pointers. The towers must have access to the vector of creeps and the vector of missiles to create new missiles and identify targets. The question is: where do I declare the vectors? Should they be members of the Map class, and passed as arguments to the tower.update() function? Or declared globally? Or are there other solutions I'm missing entirely?
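    To illustrate the first option (the map owns the containers and passes them in), here is a minimal sketch; it is written in C# to match the other examples on this page, but the same shape works in C++ with std::vector and references. The class bodies are placeholders:

        using System.Collections.Generic;

        // The map owns the object lists and hands them to each tower's update,
        // so nothing needs to be global.
        class Creep { public bool Alive = true; public void Update() { } }
        class Missile { public void Update() { } }

        class Tower
        {
            public void Update(List<Creep> creeps, List<Missile> missiles)
            {
                // Pick a target from `creeps`, and add any newly fired shot to `missiles`.
                if (creeps.Count > 0 && creeps[0].Alive)
                    missiles.Add(new Missile());
            }
        }

        class Map
        {
            readonly List<Creep> creeps = new();
            readonly List<Tower> towers = new();
            readonly List<Missile> missiles = new();

            public void Update()
            {
                foreach (var creep in creeps) creep.Update();
                foreach (var tower in towers) tower.Update(creeps, missiles);
                foreach (var missile in missiles) missile.Update();
            }
        }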

    Read the article

  • What is the point in using real time?

    - by bobobobo
    I understand that a lot of frameworks provide the real elapsed time per frame (which should average around 16-17 ms), e.g. GetTimeElapsedSinceLastFrame, which gives you wall-clock time. But should we use this information in a basic physics simulation? It looks to me like a bad idea. Say there is a slight lag on the machine, for whatever reason (say a virus scanner starts up). The calculations all jump, and there is no need for this. Why not use a virtual second and ignore wall-clock time? For gameplay on the level of Commander Keen, shouldn't you always use the virtual second and not real time (besides stopwatch timing for racing games)? I don't see a need to use real time rather than a fixed 16 ms time step.
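    For context, the usual middle ground is the fixed-timestep (accumulator) loop: wall-clock time only decides how many fixed virtual steps to run, so a hiccup produces a few extra steps rather than one giant jump. A minimal C# sketch, with Update and Render as placeholders:

        using System.Diagnostics;
        using System.Threading;

        // Physics always advances in fixed 1/60 s steps, regardless of how long the
        // real frame took.
        class FixedStepLoop
        {
            const double StepSeconds = 1.0 / 60.0;

            static void Update(double dt) { /* advance the simulation by exactly dt */ }
            static void Render() { /* draw the current state */ }

            static void Main()
            {
                var clock = Stopwatch.StartNew();
                double previous = clock.Elapsed.TotalSeconds;
                double accumulator = 0.0;

                while (clock.Elapsed.TotalSeconds < 1.0) // sketch: run for one second
                {
                    double now = clock.Elapsed.TotalSeconds;
                    accumulator += now - previous;
                    previous = now;

                    // Consume the real elapsed time in fixed-size virtual steps.
                    while (accumulator >= StepSeconds)
                    {
                        Update(StepSeconds);
                        accumulator -= StepSeconds;
                    }

                    Render();
                    Thread.Sleep(1); // placeholder for vsync / frame pacing
                }
            }
        }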

    Read the article

  • Playing a sound on collision?

    - by Eric McLoughlin
    I'm making a pool game. I've separated the gameplay logic and the physics into two systems, entities and physics. Each entity holds a reference to a body which the physics system uses. The body itself holds a reference back to its owner. When an entity collides with another entity, the Collided(Entity other) method is called on both. What I'm trying to do now is to play a sound when both colliding entities are of a certain subclass. I'm not sure how to do that. I could do it in the Collided method, but then the sound would be played twice at the same time, since the method is called on both entities. How do you suggest I do this?
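    One common trick, sketched below in C# (the class names are placeholders): give entities a stable ordering, for example a unique id, and let only one side of each colliding pair trigger pair-wise effects such as the sound:

        using System;

        // Only the entity with the smaller id handles pair-wise effects, so the
        // sound plays exactly once per contact.
        abstract class Entity
        {
            private static int nextId;
            public int Id { get; } = nextId++;

            public virtual void Collided(Entity other)
            {
                if (Id < other.Id)
                    OnPairCollision(other);
            }

            protected virtual void OnPairCollision(Entity other) { }
        }

        class BallEntity : Entity
        {
            protected override void OnPairCollision(Entity other)
            {
                if (other is BallEntity)
                    Console.WriteLine("play clack sound once");
            }
        }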

    Read the article

  • UDK - How to make sure a PhysicalMaterial mask actually works?

    - by tomacmuni
    Hello, I have been reading the documentation for UDK about physical materials and masks. I have my 1bit BMP mask, and the two physical material assets I want to shoot off in the black and white channels. I have applied my material to both a rigid body and to a skeletal mesh and neither apparently uses the mask. If I assign a regular physical material (one that doesn't use a mask) then it will work fine, but this defeats the point because it gives only one hit reaction. In the documentation it states that it is possible to extend a class on which we want to use a physical material based on the KActor class's usage. How to do that? Here is the quote: "The following properties [ie, ImpactEffect - Particle system to spawn at the point of impact + ImpactSound - Sound to play when an impact occurs] allow you to attach sounds and effects to physical collisions. These only work on classes which support them, which at the moment is only KActor. By looking at the implementation in KActor though, you can add this functionality to other classes (or you can subclass KActor)." Essentially, how to make sure a PhysicalMaterial mask actually works? What code could be added to a skeletal mesh class perhaps, to get it going? Any help appreciated.

    Read the article

  • How would I create alternating players (turn-based event)

    - by Blue
    The picture above shows two players, each containing three characters. I want to know how to make a turn-based event starting with player 1 and alternating turns with player 2, where in every round each character gets a turn. If a character dies, the next character on the same team goes, and so on. How would I create this? Is there a tutorial? I haven't made any turn-based games, so I don't know how to program this kind of thing.
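    As an illustration only (not from the original question), a small C# sketch of a turn order that alternates between two teams and skips dead characters:

        using System.Collections.Generic;

        class Character
        {
            public string Name = "";
            public bool Alive = true;
        }

        class TurnManager
        {
            readonly List<Character>[] teams;
            readonly int[] nextIndex = { 0, 0 };
            int currentTeam; // 0 = player 1, 1 = player 2

            public TurnManager(List<Character> teamOne, List<Character> teamTwo)
                => teams = new[] { teamOne, teamTwo };

            // Returns the character whose turn it is, or null if that team is wiped out.
            public Character? NextTurn()
            {
                var team = teams[currentTeam];
                Character? actor = null;
                for (int tries = 0; tries < team.Count; tries++)
                {
                    var candidate = team[nextIndex[currentTeam] % team.Count];
                    nextIndex[currentTeam]++;
                    if (candidate.Alive) { actor = candidate; break; }
                }
                currentTeam = 1 - currentTeam; // hand over to the other player
                return actor;
            }
        }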

    Read the article

  • Two GUITextures that do not work together

    - by London2423
    I have two GUITextures that move a cube left and right. It's pretty strange, but together they don't work; if I activate only one, it works. To be more specific: if I have the left GUITexture alone in the game, the cube moves left. If I have the right GUITexture activated alone, the cube moves right. That all seemed fine, I thought, but if I have both of them, the cube moves only right and not left. Where is the mistake? Here is the code inside the cube GameObject for the right move:

        void OnMousedown () {
            transform.position += Vector3.right * Time.deltaTime;
        }

    For the left move:

        void OnMousedown () {
            transform.position += Vector3.left * Time.deltaTime;
        }

    And this is the left GUITexture code:

        // move the cube left
        Cube.GetComponent<Left> ().enabled = true;
        left.transform.position += Vector3.left * Time.deltaTime;

    This is the right GUITexture code:

        // move the cube right
        Cube.GetComponent<Left> ().enabled = true;
        right.transform.position += Vector3.right * Time.deltaTime;

    What is the reason for this? I hope someone can help me.

    Read the article

  • Resume Button error

    - by user3178359
    I have two classes. If I press the pause button, it shows the resume, retry and menu buttons and the game time is paused. But when I press resume, the game time is still paused. Please help me: how do I make the game time continue? Code for the pause button:

        using UnityEngine;
        using System.Collections;

        public class pause : MonoBehaviour {
            public GUITexture showMenu;
            public GUITexture btnResume;
            public bool gamePaused = false;

            void OnMouseDown() {
                gamePaused = true;
                Time.timeScale = 0;
                showMenu.pixelInset = new Rect(220, 200, showMenu.pixelInset.width, showMenu.pixelInset.height);
                btnResume.pixelInset = new Rect(300, 300, btnResume.pixelInset.width, btnResume.pixelInset.height);
            }
        }

    Code for the resume button:

        using UnityEngine;
        using System.Collections;

        public class btResume : pause {
            //public GUITexture shoe;

            void onMouseDown() {
                base.gamePaused = false;
                Time.timeScale = 1;
                btnResume.pixelInset = new Rect(300, -300, btnResume.pixelInset.width, btnResume.pixelInset.height);
                showMenu.pixelInset = new Rect(220, -200, showMenu.pixelInset.width, showMenu.pixelInset.height);
            }
        }
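    For comparison, a minimal Unity-style sketch of a pause/resume toggle driven by Time.timeScale; note that Unity only delivers the mouse message to a method spelled exactly OnMouseDown:

        using UnityEngine;

        // Toggles the time scale between 0 (paused) and 1 (running) on click.
        public class PauseToggle : MonoBehaviour
        {
            public bool GamePaused { get; private set; }

            void OnMouseDown()
            {
                GamePaused = !GamePaused;
                Time.timeScale = GamePaused ? 0f : 1f;
            }
        }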

    Read the article

  • How to prevent overlapping of gunshot sounds when using fast-firing weapons

    - by G3tinmybelly
    So I am now trying to find sounds for my guns, but when I grab a gun sound effect and play it in my game, a lot of the sounds either sound terrible or have this horrible echoing effect, because while a gun shoots, the previous sound is sometimes still playing.

        public void shoot(float x, float y, float direction) {
            if (empty) {
                PlayHUD.message = "No more bullets!";
                return;
            }
            if (reloading) {
                return;
            }
            if (System.currentTimeMillis() - lastShot < fireRate) {
                //AssetsLoader.lmgSound.stop();
                return;
            }
            float dx = (float) (-13 * Math.cos(direction) + 75 * Math.sin(direction));
            float dy = (float) (-14 * -Math.sin(direction) + 75 * Math.cos(direction));
            float dx1 = (float) (-13 * Math.cos(direction) + 75 * Math.sin(direction));
            float dy1 = (float) (-14 * -Math.sin(direction) + 75 * Math.cos(direction));
            PlayState.effects.add(new MuzzleFlashEffect(x + dx1, y + dy1, (float) Math.toDegrees(-direction)));
            PlayState.projectiles.add(new Bullet(this, x + dx, y + dy, (float) (direction + (Math.toRadians(MathUtils.random(-accuracy, accuracy))))));
            if (OptionState.soundOn) {
                AssetsLoader.lmgSound.play(OptionState.volume);
            }
            bulletsInClip--;
            lastShot = System.currentTimeMillis();
        }

    Here is the code where the sound plays. The sound is played every time this method is called, and it happens so often that there is this terrible echoing. Any idea how to fix this?
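    One way to tame the overlap, shown here as a language-agnostic C# sketch (the code above is Java/libGDX): cap how many copies of a given sound may start within a short window. SoundGate and playSound are hypothetical names:

        using System;
        using System.Collections.Generic;

        // Skips new sound instances when too many copies started within the window.
        class SoundGate
        {
            readonly Dictionary<string, Queue<double>> recentStarts = new();
            readonly double windowSeconds;
            readonly int maxVoices;

            public SoundGate(double windowSeconds = 0.09, int maxVoices = 2)
            {
                this.windowSeconds = windowSeconds;
                this.maxVoices = maxVoices;
            }

            public bool TryPlay(string soundId, double nowSeconds, Action playSound)
            {
                if (!recentStarts.TryGetValue(soundId, out var starts))
                    recentStarts[soundId] = starts = new Queue<double>();

                // Drop start times that have fallen outside the window.
                while (starts.Count > 0 && nowSeconds - starts.Peek() > windowSeconds)
                    starts.Dequeue();

                if (starts.Count >= maxVoices)
                    return false; // too many copies already ringing; skip this one

                starts.Enqueue(nowSeconds);
                playSound();
                return true;
            }
        }

    The shot itself still fires every time; only the extra sound instances beyond the cap are skipped.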

    Read the article

  • C# XNA render an entire frame to a texture2d

    - by redcodefinal
    I asked a question here: C# XNA Make rendered screen a texture2d. But I ended up not getting the exact result I was looking for, since I didn't ask the question right. In a game I am writing, I render an extremely large city out of objects; this can cause lag when moving the camera to view things that are off screen. I need a way to render the ENTIRE city, even the parts that are off screen, and turn it into a Texture2D. The answer I chose for the last question didn't work entirely right, because it only captures what is on screen, not what is off.
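    A minimal XNA 4.0-style sketch of drawing into an off-screen render target (which is itself a Texture2D) instead of the back buffer; the 4096x4096 size and DrawCity are assumptions, and the target has to fit within the GPU's maximum texture size:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Render the whole city once into a RenderTarget2D, then reuse it as a texture.
        public class CityGame : Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            RenderTarget2D cityTexture;

            public CityGame() { graphics = new GraphicsDeviceManager(this); }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                cityTexture = new RenderTarget2D(GraphicsDevice, 4096, 4096);

                GraphicsDevice.SetRenderTarget(cityTexture); // draw off-screen
                GraphicsDevice.Clear(Color.Transparent);
                DrawCity();                                  // every object, visible or not
                GraphicsDevice.SetRenderTarget(null);        // back to the back buffer
            }

            void DrawCity() { /* placeholder: draw all city objects here */ }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                spriteBatch.Begin();
                spriteBatch.Draw(cityTexture, Vector2.Zero, Color.White); // cached city image
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }

    For a city larger than the maximum texture size, the same idea is usually applied per chunk: bake groups of objects into a handful of cached textures and draw those.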

    Read the article

  • How to manage enemy movement and shoot in a shmup?

    - by whatever
    I'm wondering what is the best (or at least a good) way of managing enemies in a shoot-em-up. Basically, what I'd do is write a class that manages displaying and updating the positions of all the enemies. But how do I create good movement patterns for the enemies? A list of waypoints? Gravitating around some fixed points (with weighting, distance evaluation, etc.)? The same question applies to the shooting patterns. Can you please point me in the right direction?

    Read the article

  • Microsoft XNA code sample won't work with Blender model

    - by FreakinaBox
    I downloaded this code sample and integrated it into my game: http://xbox.create.msdn.com/en-US/education/catalog/sample/mesh_instancing. It works with the model that they supplied, but throws an exception whenever I use one of my models: "The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing." I tried plugging my model into their original source code and the same thing happens. My model is an FBX from Blender and has a texture. This is the call that throws the error:

        GraphicsDevice.DrawInstancedPrimitives(
            PrimitiveType.TriangleList, 0, 0,
            meshPart.NumVertices, meshPart.StartIndex,
            meshPart.PrimitiveCount, instances.Length
        );

    Read the article

  • Cube rotation DX10

    - by German
    Well, I'm reading Frank Luna's DirectX 10 book and, while trying to understand the first demo, I found something that's not very clear, at least to me. In the updateScene method, when I press A, S, W or D, the angles mTheta and mPhi change, but after that there are three lines of code whose purpose I don't fully understand:

        // Convert Spherical to Cartesian coordinates: mPhi measured from +y
        // and mTheta measured counterclockwise from -z.
        float x = 5.0f*sinf(mPhi)*sinf(mTheta);
        float z = -5.0f*sinf(mPhi)*cosf(mTheta);
        float y = 5.0f*cosf(mPhi);

    I mean, the comment explains what they do: they convert the spherical coordinates to Cartesian coordinates. But mathematically, why? Why is the x value calculated from the product of the sines of both angles, z from a sine and a cosine, and y from just the cosine? After that, those values (x, y and z) are used to build the view matrix. The book doesn't explain (mathematically) why those values are calculated like that (and I didn't find anything to help me understand it in the first part of the book, "Mathematical prerequisites"), so it would be good if someone could explain what exactly happens in those lines of code, or just give me a link that helps with the math part. Thanks in advance!
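    A short derivation of those three lines, for what it's worth, using only the conventions stated in the comment (radius r = 5, phi measured from the +y axis, theta measured in the xz-plane starting at -z and turning toward +x):

        \[
        \begin{aligned}
        y    &= r\cos\phi           &&\text{(component along the $+y$ axis)}\\
        \rho &= r\sin\phi           &&\text{(length of the projection onto the $xz$-plane)}\\
        x    &= \rho\sin\theta = r\sin\phi\,\sin\theta\\
        z    &= -\rho\cos\theta = -r\sin\phi\,\cos\theta
        \end{aligned}
        \]

    So y uses only cos(phi) because it is the component along the polar (+y) axis; sin(phi) is whatever length is left over in the xz-plane, and theta then splits that remainder between x and z. The minus sign on z appears because theta starts at the -z axis, so at theta = 0 the whole projection points down -z.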

    Read the article

  • Scrolling background with changing textures

    - by Simran kaur
    I have two cubic structures that are my tracks; they scroll to give the effect of movement on the object. In my OnBecameInvisible() method, I change their tiling using mainTextureScale:

        void OnBecameInvisible() {
            renderer.material.mainTextureScale = new Vector2(1, numberOfLanes);
            this.transform.position = new Vector3(this.transform.position.x, this.transform.position.y, 20.0f);
        }

    The tiling works fine, but the alternate tracks have their tiling set to 0, which gives an undesirable effect. Requirement: I want to be able to set the tiling of every track that is visible on the screen. How do I do it?

    Read the article

  • Do unused vertices in a 3D object affect performance?

    - by Gajet
    For my game I need to generate a mesh dynamically. Now I'm wondering: does it have a noticeable effect on FPS if I allocate more vertices than I'm actually using? And does it matter whether I'm using DirectX or OpenGL? Edit: The final output will be a w*h cell grid, but for technical reasons it's much easier for me to allocate (w+1)*(h+1) vertices. Sure, I'll only use w*h vertices in indexing, and I know there is some memory waste there, but I want to know whether it also affects FPS or not. (Note that the mesh is only generated once each time you play the game.)

    Read the article

  • Exporting spritesheet for Cocos2d

    - by Terko
    I would like to know how people usually save animations in order to load them easily in Cocos2d with as little hard-coding as possible. E.g. the solution I thought of is to have one plist file containing information about each frame, and a second plist containing information about each animation (name of the animation, which frames to play, and probably the delay). If this is the correct solution, how can I generate such plist files for a spritesheet automatically?

    Read the article

  • TGA loader: reverse y-axis

    - by aVoX
    I've written a TGA image loader in Java which is working perfectly for files created with GIMP as long as they are saved with the option origin set to Top Left (Note: Actually TGA files are meant to be stored upside down - Bottom Left in GIMP). My problem is that I want my image loader to be capable of reading all different kinds of TGA, so my question is: How do I flip the image upside down? Note that I store all image data inside a one-dimensional byte array, because OpenGL (glTexImage2D to be specific) requires it that way. Thanks in advance.
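    For illustration, flipping a tightly packed pixel buffer vertically is just a row-by-row swap; a sketch in C# (the same loop translates directly to Java with System.arraycopy), assuming the buffer holds height rows of width * bytesPerPixel bytes with no padding:

        using System;

        // Swap the top and bottom rows, working inward, to flip the image in place.
        static class ImageFlip
        {
            public static void FlipVertically(byte[] pixels, int width, int height, int bytesPerPixel)
            {
                int stride = width * bytesPerPixel;
                var rowBuffer = new byte[stride];

                for (int top = 0, bottom = height - 1; top < bottom; top++, bottom--)
                {
                    int topOffset = top * stride;
                    int bottomOffset = bottom * stride;

                    Buffer.BlockCopy(pixels, topOffset, rowBuffer, 0, stride);         // save top row
                    Buffer.BlockCopy(pixels, bottomOffset, pixels, topOffset, stride); // bottom -> top
                    Buffer.BlockCopy(rowBuffer, 0, pixels, bottomOffset, stride);      // saved -> bottom
                }
            }
        }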

    Read the article

  • Physics don't apply to a unique body in AndEngine

    - by Kanga
    I am developing a game in AndEngine. So far I managed to create everything I wanted for my sprite, which was connected to a box body: I was rotating it, moving it, everything was great. I wanted my collision detection to be more precise, so I changed from the box body to a unique, irregularly shaped body. I found all the vertices and just replaced the box body with the newly created irregular body everywhere in my code. Not only is the sprite image no longer in place, but all the physics and math I was doing for movement no longer works. Please help me :(. Thanks in advance.

    Read the article

  • Randomly spawning bitmaps on canvas

    - by Toystoj
    I need some ideas in order to finish an algorithm. I'm randomly placing objects (bitmaps) on a canvas without overlapping. The time needed to finish is my problem: when I need to fill, for example, 80% of the canvas, it takes too long. So I was thinking I should change the approach once the bitmaps take up 50% of the canvas: I want to tell the algorithm to generate new locations (x, y) only where there is free space. My question is: how do I generate a new location (x, y) in a place where there is free space? In summary, the things I know are: the object location (x, y), the four corners (x, y) of the object, the object width and height, and the canvas width and height. Any suggestions?

    Read the article
