Search Results

Search found 31839 results on 1274 pages for 'plugin development'.

Page 623 of 1274

  • Material System

    - by Towelie
    I'm designing a material/shader system (target API is DX10+ and maybe OpenGL 3+ later; for now only DX10). I know there have been a lot of topics about this, but I can't find what I need. I don't want to do any kind of script compilation or parsing at runtime. So there is some artist-created material, written in some analog of Cg. It is then compiled to HLSL code and after that to the final shader. There are also some hard-coded constant buffers, like:

        cbuffer EveryFrameChanging
        {
            float4x4 matView;
            float time;
            float delta;
        };

    Shaders use shared constant buffers to get their parameters. For each mesh in the scene, I gather what it needs and what it can provide (normals, binormals, etc.) and find the corresponding shader permutation or calculate the missing parts. Also, during the build I calculate render states and a permutation hash for each shader, which will later be used for sorting, or even assign it an ID from 0 to ShaderCount without gaps for sorting. A FinalShader has only one technique and one pass. After that, each mesh is assigned a shader and is ready to render. Some pseudocode:

        SetConstantBuffer(ConstantBuffer::PerFrame);
        foreach (shader in FinalShaders)
        {
            SetConstantBuffer(ConstantBuffer::PerShader, shader);
            SetRenderState(shader);
            foreach (mesh in shader.GetAllMeshes)
            {
                SetConstantBuffer(ConstantBuffer::PerMesh, mesh);
                SetBuffers(mesh);
                Draw();
            }
        }

        class FinalShader
        {
        public:
            UUID m_ID;
            RenderState m_RenderState;
            CBufferBindings m_BufferBindings;
        };

    But I have no idea how to create this Cg-like language, and do I really need it?
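
    One small piece of the above that can be pinned down regardless of the Cg-like front end is the sort key: if each compiled permutation gets a dense ID from 0 to ShaderCount-1, it can be packed together with a render-state hash into a single integer so draw calls sort with one comparison. A minimal sketch of that idea in Java (names are illustrative, not from any particular engine):

        // Hedged sketch: pack a dense shader ID and a render-state hash into one
        // 64-bit sort key so draw calls can be ordered with a single comparison.
        public final class SortKey {
            public static long make(int shaderId, int renderStateHash) {
                return ((long) shaderId << 32) | (renderStateHash & 0xFFFFFFFFL);
            }

            public static int shaderId(long key)        { return (int) (key >>> 32); }
            public static int renderStateHash(long key) { return (int) key; }
        }

    Sorting by this key groups all meshes that share a FinalShader, which is exactly the outer loop of the pseudocode above.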

    Read the article

  • Rotating a Quad around it center

    - by Trixmix
    How can you rotate a quad around its center? This is what I'm trying to do, but it isn't working:

        GL11.glTranslatef(x - getWidth() / 2, y - getHeight() / 2, 0);
        GL11.glRotatef(30, 0.0f, 0.0f, 1.0f);
        GL11.glTranslatef(x + getWidth() / 2, y + getHeight() / 2, 0);
        // DRAW

    My main problem is that it renders the quad off the screen. Draw code:

        GL11.glBegin(GL11.GL_QUADS);
        {
            GL11.glTexCoord2f(0, 0);
            GL11.glVertex2f(0, 0);
            GL11.glTexCoord2f(0, getTexture().getHeight());
            GL11.glVertex2f(0, height);
            GL11.glTexCoord2f(getTexture().getWidth(), getTexture().getHeight());
            GL11.glVertex2f(width, height);
            GL11.glTexCoord2f(getTexture().getWidth(), 0);
            GL11.glVertex2f(width, 0);
        }
        GL11.glEnd();
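
    For reference, the usual fixed-function pattern is the reverse order: move the pivot to the quad's center first, rotate, then shift back by half the quad size before drawing. A minimal LWJGL sketch, assuming (x, y) is the quad's intended top-left corner and the vertices span (0,0)..(width,height) as in the draw code above:

        import org.lwjgl.opengl.GL11;

        // Hedged sketch: rotate a quad around its own center before drawing it.
        static void drawRotatedQuad(float x, float y, float width, float height, float angleDegrees) {
            GL11.glPushMatrix();
            GL11.glTranslatef(x + width / 2.0f, y + height / 2.0f, 0);  // move the pivot to the quad's center
            GL11.glRotatef(angleDegrees, 0.0f, 0.0f, 1.0f);             // rotate about that pivot
            GL11.glTranslatef(-width / 2.0f, -height / 2.0f, 0);        // shift back so (0,0) is the top-left corner
            // ... emit the glBegin(GL11.GL_QUADS) block from the question here ...
            GL11.glPopMatrix();
        }

    Remember that the transform specified last (closest to the draw call) is applied to the vertices first, which is why the rotate sits between the two translates.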

    Read the article

  • How can I make a collection of mini-games in XNA where the user can download packs of minigames and the main .exe can run them without being altered?

    - by Pyroka
    I'm currently making a PC game in XNA. It's actually a collection of mini-games (there are 3 mini-games at the moment); however, I plan to make and add more in downloadable 'packs'. My question is, what's the best way to achieve this? Currently my thoughts are:

        - Create a 'game' interface.
        - Build games to this interface, but compile them as .dlls.
        - Have the main .exe scan a directory and load the .dlls at runtime.

    I've not messed around with the idea much, but I know there are applications at least that use this plug-in approach (Notepad++ seems to), though I'm not sure of any games that do (although I'm sure they must exist). It seems like a problem that has been solved before, so I'm wondering if there's any form of established best practice.

    Read the article

  • In 3D camera math, calculate what Z depth is pixel unity for a given FOV

    - by badweasel
    I am working in iOS and OpenGL ES 2.0. Through trial and error I've figured out a frustum where, at a specific z depth, drawn pixels are 1 to 1 with my source textures. So 1 pixel in my texture is 1 pixel on the screen. For 2D games this is good. Of course it means that I also factor in things like the size of the quad and the size of the texture. For example, if my sprite is a quad of 32x32 pixels, the quad is 3.2 units wide and tall, and the texcoords are 32 divided by the size of the texture in each direction. Then the frustum is:

        matrixFrustum(-(float)backingWidth/frustumScale, (float)backingWidth/frustumScale,
                      -(float)backingHeight/frustumScale, (float)backingHeight/frustumScale,
                      40, 1000, mProjection);

    where frustumScale is 800 for a retina screen. Then at a distance of 800 from the camera the sprite is pixel for pixel the same as in Photoshop. For 3D games I sometimes still want to be able to do this, but depending on the scene I sometimes need the FOV to be different things. I'm looking for a way to figure out what Z depth will achieve this same pixel unity for a given FOV. For this my mProjection is set using:

        matrixPerspective(cameraFOV, near, far, (float)backingWidth / (float)backingHeight, mProjection);

    With testing I found that at an FOV of 45.0 a Z of 38.5 is very close to pixel unity, and at an FOV of 30.0 a Z of 59.5 is about right. But how can I calculate a value that is spot on? Here's my matrixPerspective code:

        void matrixPerspective(float angle, float near, float far, float aspect, mat4 m)
        {
            //float size = near * tanf(angle / 360.0 * M_PI);
            float size = near * tanf(degreesToRadians(angle) / 2.0);
            float left = -size, right = size, bottom = -size / aspect, top = size / aspect;

            // Unused values in perspective formula.
            m[1] = m[2] = m[3] = m[4] = 0;
            m[6] = m[7] = m[12] = m[13] = m[15] = 0;

            // Perspective formula.
            m[0] = 2 * near / (right - left);
            m[5] = 2 * near / (top - bottom);
            m[8] = (right + left) / (right - left);
            m[9] = (top + bottom) / (top - bottom);
            m[10] = -(far + near) / (far - near);
            m[11] = -1;
            m[14] = -(2 * far * near) / (far - near);
        }

    And my mView is set using:

        lookAtMatrix(cameraPos, camLookAt, camUpVector, mView);

    UPDATE: I'm going to leave this here in case anyone has a different solution, can explain how they do it, or why this works. This is what I figured out. In my system I use a 1/10th scale of units to pixels on non-retina displays and 1/20th on retina displays. The iPhone is 640 pixels wide on retina and 320 pixels wide on non-retina (obsolete). So if I want something to be the full screen width, I divide by 20 to get the OpenGL unit width, then divide that by 2 to get the left and right unit positions. Something 32 units wide centered on the screen goes from -16 to +16. Believe it or not, I have an Excel spreadsheet do all this math for me and output all the vertex data for my sprite sheet. It's an arbitrary convention I made up: 0.1 units = 1 non-retina pixel or 2 retina pixels. I could have made it 0.01 units = 2 pixels and someday I might switch to that, but for now it stays as it is. So the width of the screen in units is 32.0, which means the leftmost pixel is at -16.0 and the rightmost is at +16.0. After messing around a bit I figured out that if I take the [0] value of the modelViewProjection matrix (with an identity model-view) and multiply it by 16, I get the depth required for 1:1 pixels. I don't know why, and I don't know whether the 16 is related to the screen size or just a lucky guess.
    But I did a test where I placed a sprite at that calculated depth and varied the FOV through all the valid values, and the object stays steady on screen with 1:1 pixels. So now I'm just calculating the unityDepth that way. If someone gives me a better answer I'll checkmark it.
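
    For what it's worth, the observed values follow directly from the matrixPerspective code above. There, angle is the horizontal field of view, so the visible half-width at view depth z is z tan(fov/2); setting that equal to half the screen width in world units (16 units in the setup described) gives the pixel-unity depth. A short math sketch, assuming those conventions:

        z_{\text{unity}} \tan\!\left(\tfrac{\text{fov}}{2}\right) = \tfrac{W}{2}
        \quad\Longrightarrow\quad
        z_{\text{unity}} = \frac{W/2}{\tan(\text{fov}/2)} = \tfrac{W}{2}\, m[0],
        \qquad \text{since } m[0] = \frac{2n}{\text{right}-\text{left}} = \frac{1}{\tan(\text{fov}/2)}.

    With W/2 = 16 this gives 16 / tan(22.5°) ≈ 38.6 at a 45° FOV and 16 / tan(15°) ≈ 59.7 at 30°, matching the measured 38.5 and 59.5, and it also explains the m[0] x 16 trick: for this projection matrix m[0] is exactly cot(fov/2).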

    Read the article

  • Scene transitions

    - by Mars
    It's my first time working with actual scenes/states, a.k.a. DrawableGameComponents, which work separately from one another. I'm now wondering what's the best way to make transitions between them, and how to affect them from other scenes. Let's say I wanted to "push" one screen to the right, with another one coming in at the same time. Naturally I'd have to keep drawing both until the transition is complete, and I'd have to adjust the coordinates I'm drawing at while doing it. Is there a way around specifically handling this special case in every single scene? Or if I wanted to fade one into the other: basically the question stays the same, how would you do that without having to handle it in every single scene? While writing this I'm realizing it will be the same thing for all kinds of transitions. Maybe a central Draw method in the manager could be a solution, where parameters and effects are applied when necessary. But this wouldn't work if objects that are drawn have their own method and aren't drawn within the scene, or if an effect has to be applied to the whole scene. That means maybe scenes have to be drawn to their own render target? That way one call to the base class after the normal drawing could be enough to apply the effects while drawing it to the main render target. But I once heard there are problems when switching from target to target, back and forth. So is that even a viable option? As you can see, I have some basic ideas of how it might work... but nothing specific. I'd like to learn what the common way to achieve such things is: a general way to apply all kinds of transitions.

    Read the article

  • HowTo Enable jBullet DebugMode

    - by Kenneth Bray
    I would like to render the physics world of jBullet to debug some issues in my game, and I am not finding too much on enabling the debugDraw method of jBullet. Do I need to write my own debugDraw method, or is there an easier way to draw the physics models to the screen? If there is already a built in method I would prefer to use that, otherwise I guess I will start making my own functions to handle this.
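
    For reference, jBullet exposes the same hook as native Bullet: you subclass IDebugDraw, register it with setDebugDrawer(), and call debugDrawWorld() once per frame; the drawLine() callback is where your own rendering calls go. A rough sketch, with the callback signatures written from memory of the com.bulletphysics.linearmath package (worth double-checking against your jBullet version), drawing with LWJGL immediate mode:

        import javax.vecmath.Vector3f;
        import com.bulletphysics.linearmath.DebugDrawModes;
        import com.bulletphysics.linearmath.IDebugDraw;
        import org.lwjgl.opengl.GL11;

        // Hedged sketch of a jBullet debug drawer; the callbacks mirror native Bullet's btIDebugDraw.
        public class GLDebugDrawer extends IDebugDraw {
            private int debugMode = DebugDrawModes.DRAW_WIREFRAME;

            @Override
            public void drawLine(Vector3f from, Vector3f to, Vector3f color) {
                GL11.glColor3f(color.x, color.y, color.z);
                GL11.glBegin(GL11.GL_LINES);
                GL11.glVertex3f(from.x, from.y, from.z);
                GL11.glVertex3f(to.x, to.y, to.z);
                GL11.glEnd();
            }

            @Override
            public void drawContactPoint(Vector3f pointOnB, Vector3f normalOnB, float distance, int lifeTime, Vector3f color) { }

            @Override
            public void reportErrorWarning(String warning) { System.err.println(warning); }

            @Override
            public void draw3dText(Vector3f location, String text) { }

            @Override
            public void setDebugMode(int debugMode) { this.debugMode = debugMode; }

            @Override
            public int getDebugMode() { return debugMode; }
        }

    Then something like dynamicsWorld.setDebugDrawer(new GLDebugDrawer()) at setup, and dynamicsWorld.debugDrawWorld() each frame after stepping the simulation.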

    Read the article

  • What is the standard technique for shifting the frames of a sprite according to user input?

    - by virtual__
    From my own experience, I developed two techniques for changing the sprites of a character that's reacting to user input, this in the context of a classic 2D platformer. The first one is to store all the character's pixmaps in a list, keeping the index of the currently used pixmap in an ordinary variable. This way, every time the player presses a key, say the right arrow for moving the character forward, the graphics engine sees what the next pixmap to draw is, draws it, and increments the index counter. That's a pretty common approach I believe; the problem is that in this case the animation's quality depends not only on the number of sprites available but also on how often your engine listens to user input. The second technique is to actually play an animation on every key press event. For this you can use any sort of animation framework you want. It's only necessary to set the timer and the animation steps and to call the animation's play() method in your key press event handler. The problem with that approach is that it lacks responsiveness, since the character won't react to any input while the current animation is still being played. What I want to know is whether you are using one of these techniques, or something similar, in your games, or whether there's a standard method for animating sprites out there that's widely known by everybody but me.
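
    A common middle ground between the two techniques is to decouple the frame index from the key events entirely: input only sets the character's state (idle, running, jumping), while an update step driven by elapsed time picks the frame for whatever state is active. A rough, framework-agnostic sketch of that idea (all names are illustrative):

        // Hedged sketch: time-driven frame selection, independent of how often input is polled.
        public class SpriteAnimation {
            private final int frameCount;
            private final float frameDuration;   // seconds each frame stays on screen
            private float elapsed;               // time spent in the current state

            public SpriteAnimation(int frameCount, float frameDuration) {
                this.frameCount = frameCount;
                this.frameDuration = frameDuration;
            }

            public void update(float dt) { elapsed += dt; }   // called once per game tick
            public void reset()          { elapsed = 0f; }    // called when the character changes state

            public int currentFrame() {                       // index of the pixmap to draw this tick
                return (int) (elapsed / frameDuration) % frameCount;
            }
        }

    This keeps the animation rate independent of the input polling rate while still letting a new key press interrupt it immediately by switching state.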

    Read the article

  • Spherical harmonics lighting interpolation

    - by TravisG
    I want to use hardware filtering to smooth out colors in the texels of a texture when I'm accessing texels at coordinates that are not directly at the center of a texel, the catch being that the texels store 2 bands of spherical harmonics coefficients (= 4 coefficients), not RGBA intensity values. Can I just use hardware filtering like that (GL_LINEAR with and without mip mapping) without any considerations? In other terms: if I were to first convert the coefficients back to intensity representations, then manually interpolate between two intensities, would the resulting intensity be the same as if I interpolated between the coefficient vectors directly and then converted the interpolated result to intensities?
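
    For reference, the conversion from coefficients to intensity is just an evaluation of the SH expansion, which is linear in the coefficients, so interpolation and reconstruction commute (any nonlinear step such as clamping would break this):

        L(\omega) = \sum_i c_i\, Y_i(\omega)
        \quad\Longrightarrow\quad
        (1-t)\,L_a(\omega) + t\,L_b(\omega) = \sum_i \bigl[(1-t)\,c_{a,i} + t\,c_{b,i}\bigr] Y_i(\omega).

    The same argument covers mipmapping, since averaging several texels is also a linear combination of coefficient vectors.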

    Read the article

  • World to Pixel Transformation

    - by D00d
    My objects have a location in world coordinates (basically 1.0f is a meter). If I simply draw my objects using their world coordinates, each meter will correspond to a pixel. Obviously that's not what I want. Now, I don't want to have to apply a transformation to each and every object's position when I draw them. As I happen to be using XNA, and SpriteBatch allows a Matrix to be passed as an argument to its Begin method, I was wondering if there is a way to pass the world-to-pixel transformation in there. Any suggestions? So far Matrix.CreateScale(new Vector3(zoom, zoom, 1)) puts the objects in their proper spots, but it also scales up the sprites. Is there a way to transform the position without enlarging the sprite? Thanks

    Read the article

  • Transparent parts of texture are opaque black instead

    - by Aaron
    I render a sprite twice, one on top of the other. The sprites have transparent parts, so I should be able to see the bottom sprite under the top sprite. Instead, the transparent parts are opaque black (the clear colour) and the topmost sprite blocks the bottom sprite. My fragment shader is trivial:

        uniform sampler2D texture;
        varying vec2 f_texcoord;

        void main()
        {
            gl_FragColor = texture2D(texture, f_texcoord);
        }

    I have glEnable(GL_BLEND) and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) in my initialization code. My texture comes from a PNG file that I load with libpng. I make sure to use GL_RGBA when initializing the texture with glTexImage2D (otherwise the sprites look like noise).

    Read the article

  • What is the recommended way to output values to FBO targets? (OpenGL 3.3 + GLSL 330)

    - by datSilencer
    I'll begin by apologizing for any dumb assumptions you might find in the code below, since I'm still pretty green when it comes to OpenGL programming. I'm currently trying to implement deferred shading by using FBOs and their associated render targets (textures in my case). I have a simple (I think :P) geometry+fragment shader program and I'd like to write its fragment shader stage output to three different render targets (previously bound by a call to glDrawBuffers()), like so:

        #version 330

        in vec3 WorldPos0;
        in vec2 TexCoord0;
        in vec3 Normal0;
        in vec3 Tangent0;

        layout(location = 0) out vec3 WorldPos;
        layout(location = 1) out vec3 Diffuse;
        layout(location = 2) out vec3 Normal;

        uniform sampler2D gColorMap;
        uniform sampler2D gNormalMap;

        vec3 CalcBumpedNormal()
        {
            vec3 Normal = normalize(Normal0);
            vec3 Tangent = normalize(Tangent0);
            Tangent = normalize(Tangent - dot(Tangent, Normal) * Normal);
            vec3 Bitangent = cross(Tangent, Normal);
            vec3 BumpMapNormal = texture(gNormalMap, TexCoord0).xyz;
            BumpMapNormal = 2 * BumpMapNormal - vec3(1.0, 1.0, -1.0);
            vec3 NewNormal;
            mat3 TBN = mat3(Tangent, Bitangent, Normal);
            NewNormal = TBN * BumpMapNormal;
            NewNormal = normalize(NewNormal);
            return NewNormal;
        }

        void main()
        {
            WorldPos = WorldPos0;
            Diffuse = texture(gColorMap, TexCoord0).xyz;
            Normal = CalcBumpedNormal();
        }

    My render target textures are configured as:

        RT1: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE0, GL_COLOR_ATTACHMENT0)
        RT2: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE1, GL_COLOR_ATTACHMENT1)
        RT3: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE2, GL_COLOR_ATTACHMENT2)

    Assuming that each texture has an internal format capable of containing the incoming data, will the fragment shader write the corresponding values to the expected texture targets? On a related note, do the textures need to be bound to the OpenGL context when they are used as multiple render targets? From some Googling, I think there are two other ways to output to MRTs:

        1. Output each component to gl_FragData[n]. Some forum posts say this method is deprecated. However, looking at the latest OpenGL 3.3 and 4.0 specifications at opengl.org, the core profiles still mention this approach.
        2. Use a typed output array variable of the expected type. In this case, I think it would be something like this:

        out vec3 [3] output;

        void main()
        {
            output[0] = WorldPos0;
            output[1] = texture(gColorMap, TexCoord0).xyz;
            output[2] = CalcBumpedNormal();
        }

    So which is the recommended approach? Is there a recommended approach at all if I plan to code on top of OpenGL 3.3? Thanks for your time and help!

    Read the article

  • Climbing boxes in box2D

    - by Rothens
    I've just stepped into the world of Box2D with libgdx. I've already made a stack of boxes: they are dropped randomly on top of each other. What I'd like to achieve is a character that can freely climb on the boxes (he can grip the boxes anywhere, not just on the side or top of a box), but whose weight affects the stack as well, so the boxes can fall down. My google-fu failed me... Is there any way to make this possible?

    Read the article

  • What is involved with writing a lobby server?

    - by Kira
    So I'm writing a Chess matchmaking system based on a lobby view with gaming rooms, general chat, etc. So far I have a working prototype, but I have big doubts regarding some things I did with the server. Writing a gaming lobby server is a new programming experience for me, so I don't have a clear or precise programming model for it. I also couldn't find a paper that describes how it should work. I ordered "Java Network Programming, 3rd edition" from Amazon and I'm still waiting for shipment; hopefully I'll find some useful examples/information in this book. Meanwhile, I'd like to gather your opinions and see how you would handle some things so I can learn how to write a server correctly. Here are a few questions off the top of my head (maybe more will come):

        - First, let's define what a server does. Its primary functionality is to hold TCP connections with clients, listen to the events they generate and dispatch them to the other players. But is there more to it than that?
        - Should I use one thread per client? If so, 300 clients = 300 threads. Isn't that too much? What hardware is needed to support that? And roughly how much bandwidth does a lobby consume?
        - What kind of data structure should be used to hold the clients' sockets? How do you protect it from concurrent modification (e.g. a player enters or exits the lobby) when iterating through it to dispatch an event, without hurting throughput? Is ConcurrentHashMap the correct answer here, or are there some techniques I should know?
        - When a user enters the lobby, what mechanism would you use to transfer the state of the lobby to him? And while this is happening, where do the other events bubble up?

    Screenshot: http://imageshack.us/photo/my-images/695/sansrewyh.png/
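
    On the data-structure question specifically, a minimal sketch of the broadcast path using a ConcurrentHashMap keyed by player name might look like the following (illustrative names only, no particular framework assumed). Iteration over the map is weakly consistent, so there is no locking and no ConcurrentModificationException, at the cost that a client joining mid-broadcast may or may not receive that event:

        import java.io.IOException;
        import java.io.PrintWriter;
        import java.net.Socket;
        import java.util.concurrent.ConcurrentHashMap;

        // Hedged sketch: one writer per connected client, stored in a concurrent map
        // so the lobby can be iterated for broadcasts while clients join and leave.
        public class Lobby {
            private final ConcurrentHashMap<String, PrintWriter> clients = new ConcurrentHashMap<>();

            public void join(String name, Socket socket) throws IOException {
                clients.put(name, new PrintWriter(socket.getOutputStream(), true));
            }

            public void leave(String name) {
                clients.remove(name);
            }

            public void broadcast(String event) {
                for (PrintWriter out : clients.values()) {
                    out.println(event);       // dispatch the event to every connected player
                }
            }
        }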

    Read the article

  • How to offset particles from point of origin

    - by Sun
    Hi, I'm having trouble offsetting particles from a point of origin. I want my particles to spread out after a certain radius from the point of origin. For example, this is what I have right now: all particles are emitted from a point of origin. What I want is this: particles are offset from the point of origin by some amount, i.e. beyond the circle. What is the best way to achieve this? At the moment I have the point of origin, the position of each particle and its rotation angle. Sorry for the poor illustrations. Edit: I was mistaken; when a particle is created, I have only the point of origin. After the particle is created I am able to calculate its rotation in the update method, once it has moved to a new location, using the atan2() method. This is how I create/manage particles: create a new particle at the enemy ship's death location; for every new particle added to the list, call Update and Draw to update its position, calculate the new angle and draw it.
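
    The offset itself is just the origin plus the particle's heading scaled by the desired radius. A small sketch of that spawn step (names are illustrative; the heading could equally come from whatever direction the particle will travel):

        import java.util.Random;

        // Hedged sketch: spawn a particle a fixed radius away from the emitter along its heading,
        // so the disc around the point of origin stays empty.
        static float[] spawnOffsetFromOrigin(float originX, float originY, float radius, Random random) {
            float angle = random.nextFloat() * (float) (Math.PI * 2);   // the particle's heading in radians
            float x = originX + (float) Math.cos(angle) * radius;       // push out along that heading
            float y = originY + (float) Math.sin(angle) * radius;
            return new float[] { x, y, angle };                         // position plus rotation angle
        }

    The same angle can be stored on the particle so the later atan2() step is no longer needed for the initial orientation.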

    Read the article

  • Will we see a trend of "3d" games coming up in the near future?

    - by Vish
    I've noticed that movies are diving into the world of the 3-dimensional camera. For me it provoked a thought: it may be the same feeling people got when they saw a colour movie for the first time; like the transition from black and white to colour, it is a whole new experience. For the first time we are experiencing the Z (depth) factor, and I really do mean "experiencing". So my question, or maybe not quite a question, is: is there a possibility of a genre of 3D camera games coming up?

    Read the article

  • ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND

    - by Telanor
    I've stared at this for at least half an hour now and I cannot figure out what DirectX is complaining about. I know this error normally means you put a float3 instead of a float4 or something like that, but I've checked over and over and as far as I can tell, everything matches. This is the full error message:

        D3D11: ERROR: ID3D11DeviceContext::DrawIndexed: Input Assembler - Vertex Shader linkage error:
        Signatures between stages are incompatible. The input stage requires Semantic/Index (COLOR,0)
        as input, but it is not provided by the output stage.
        [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]

    This is the vertex shader's input signature as seen in PIX:

        // Input signature:
        //
        // Name                 Index   Mask  Register  SysValue  Format  Used
        // -------------------- -----  ------ --------  --------  ------  ------
        // POSITION                 0   xyz          0      NONE   float  xyz
        // NORMAL                   0   xyz          1      NONE   float
        // COLOR                    0   xyzw         2      NONE   float

    The HLSL structure looks like this:

        struct VertexShaderInput
        {
            float3 Position : POSITION0;
            float3 Normal : NORMAL0;
            float4 Color : COLOR0;
        };

    The input layout, from PIX, is: [screenshot]. The C# structure holding the data looks like this:

        [StructLayout(LayoutKind.Sequential)]
        public struct PositionColored
        {
            public static int SizeInBytes = Marshal.SizeOf(typeof(PositionColored));
            public static InputElement[] InputElements = new[]
            {
                new InputElement("POSITION", 0, Format.R32G32B32_Float, 0),
                new InputElement("NORMAL", 0, Format.R32G32B32_Float, 0),
                new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 0)
            };

            Vector3 position;
            Vector3 normal;
            Vector4 color;

            #region Properties
            ...
            #endregion

            public PositionColored(Vector3 position, Vector3 normal, Vector4 color)
            {
                this.position = position;
                this.normal = normal;
                this.color = color;
            }

            public override string ToString()
            {
                StringBuilder sb = new StringBuilder(base.ToString());
                sb.Append(" Position=");
                sb.Append(position);
                sb.Append(" Color=");
                sb.Append(Color);
                return sb.ToString();
            }
        }

    SizeInBytes comes out to 40, which is correct (4*3 + 4*3 + 4*4 = 40). Can anyone find where the mistake is?

    Read the article

  • In what kind of variable type is the player position stored on a MMORPG such as WoW?

    - by jokoon
    I even heard J. Carmack briefly talk about it... How can software track a player's position so accurately on such a huge world, without loading between zones, and at multiplayer scale? How is the data formatted when it passes through the netcode? I can understand how vertices are stored in the graphics card's memory, but when it comes to synchronizing the multiplayer side, I can't imagine what is best.

    Read the article

  • Possible to pass pygame data to memory map block?

    - by toozie21
    I am building a matrix out of addressable pixels, and it will be run by a Pi (over the Ethernet bus). The matrix will be 75 pixels wide and 20 pixels tall. As a side project, I thought it would be neat to run Pong on it. I've seen some Python-based Pong tutorials for the Pi, but the problem is that they want to pass the data out to a screen via the pygame.display functions. I have access to pass pixel information using a memory-mapped block, so is there any way to do that with pygame instead of passing it out the video port? In case anyone is curious, this was the Pong tutorial I was looking at: Pong Tutorial

    Read the article

  • How can state changes be batched while adhering to opaque-front-to-back/alpha-blended-back-to-front?

    - by Sion Sheevok
    This is a question I've never been able to find the answer to. Batching objects with similar states is a major performance gain when rendering many objects. However, I've learned various rules for drawing objects in the game world:

        - Draw all opaque objects, front-to-back.
        - Draw all alpha-blended objects, back-to-front.

    Some of the major parameters to batch by, as I understand it, are textures, vertex buffers, and index buffers. It seems that, as long as you are adhering to the above two rules, there's little to be done in regard to batching. I see one possibility to batch while still adhering to the above two rules. Opaque objects can still be drawn out of depth order, because drawing them front-to-back is merely a fill-rate optimization, while state changes may very well be far more expensive than the overdraw caused by drawing out of depth order. However, non-opaque objects, those that require alpha blending at least, must be drawn back-to-front in order to avoid rendering artifacts. Is the loss of the fill-rate optimization for opaques worth the state-batching optimization?

    Read the article

  • Do I need the 'w' component in my Vector class?

    - by bobobobo
    Assume you're writing matrix code that handles rotation, translation, etc. for 3D space. The transformation matrices have to be 4x4 to fit the translation component in. However, you don't actually need to store a w component in the vector, do you? Even for the perspective divide, you can simply compute and store w outside of the vector, and perspective-divide before returning from the method. For example:

        // post multiply: vec2 = matrix * vector
        Vector operator*( const Matrix & a, const Vector& v )
        {
            Vector r ;

            // do matrix mult
            r.x = a._11*v.x + a._12*v.y ...
            real w = a._41*v.x + a._42*v.y ...

            // perspective divide
            r /= w ;
            return r ;
        }

    Is there a point in storing w in the Vector class?
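
    For context, the convention the 4x4 matrices rely on is that points carry w = 1 and directions carry w = 0, so translation affects points but not directions, and the perspective divide is just the final step:

        \begin{pmatrix} x' \\ y' \\ z' \\ w' \end{pmatrix} = M \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix},
        \qquad
        (x', y', z', w') \;\mapsto\; \left(\frac{x'}{w'},\, \frac{y'}{w'},\, \frac{z'}{w'}\right).

    Whether that w lives in the vector class or in a temporary, as in the operator above, is purely a storage choice; the math is the same either way.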

    Read the article

  • Wall avoidance steering

    - by Vodemki
    I'm making a small steering simulator using the Reynolds boid algorithm. Now I want to add a wall avoidance feature. My walls are in 3D and defined using two points, like this:

        ---------. P2
                 |
                 |
        P1 .---------

    My agents have a velocity, a position, etc... Could you tell me how to implement avoidance for my agents?

        Vector2D ReynoldsSteeringModel::repulsionFromWalls()
        {
            Vector2D force;
            vector<Wall *> wallsList = walls();
            Point2D pos = self()->position();
            Vector2D velocity = self()->velocity();

            for (unsigned i = 0; i < wallsList.size(); i++)
            {
                //TODO
            }

            return force;
        }

    Then I take all the forces returned by my boid functions and apply them to my agent. I just need to know how to do the same with my walls. Thanks for your help.
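
    One common formulation for the TODO above: project the agent's position onto each wall segment, then push away from that closest point with a strength that falls off with distance; k and the cutoff radius r are tuning parameters to pick:

        t = \operatorname{clamp}\!\left(\frac{(\mathbf{x}-\mathbf{P}_1)\cdot(\mathbf{P}_2-\mathbf{P}_1)}{\lVert \mathbf{P}_2-\mathbf{P}_1 \rVert^2},\, 0,\, 1\right),
        \qquad
        \mathbf{c} = \mathbf{P}_1 + t\,(\mathbf{P}_2-\mathbf{P}_1),
        \qquad
        d = \lVert \mathbf{x}-\mathbf{c} \rVert,

        \mathbf{F} = \begin{cases} \dfrac{\mathbf{x}-\mathbf{c}}{d}\cdot\dfrac{k}{d^{2}} & d < r \\[1ex] \mathbf{0} & \text{otherwise,} \end{cases}

    summed over all walls inside the loop and then added to the other steering forces.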

    Read the article

  • How do I reconstruct depth in deferred rendering using an orthographic projection?

    - by Jeremie
    I've been trying to get the world-space position of my pixel, but I'm missing something. I'm using an orthographic view for a 2.5D game. My depth is linear, and this is my code:

        float3 lightPos = lightPosition;
        float2 texCoord = PostProjToScreen(PSIn.lightPosition) + halfPixel;
        float depth = tex2D(depthMap, texCoord);

        float4 position;
        position.x = texCoord.x * 2 - 1;
        position.y = (1 - texCoord.y) * 2 - 1;
        position.z = depth.r;
        position.w = 1;
        position = mul(position, inViewProjection);
        //position.xyz /= position.w; // commented out, but it doesn't work with or without it

        float4 normal = (tex2D(normalMap, texCoord) - .5f) * 2;
        normal = normalize(normal);

        float3 lightDirection = normalize(lightPos - position);
        float att = saturate(1.0f - length(lightDirection) / attenuation);
        float lightning = saturate(dot(normal, lightDirection));
        lightning *= brightness;

        return float4(lightColor * lightning * att, 1);

    I'm using a sphere as the light volume, but it's not working the way I want. I reproject the texture properly onto the sphere, but the light coordinates in the pixel shader seem to be stuck at zero, even though the light volume updates properly when I move it.
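
    As a point of reference, with an orthographic projection the reconstruction is affine, so no divide by w is needed; what has to line up is the depth convention. Assuming the G-buffer stores linear view-space depth d in [0, 1] between the near and far planes, u and v are the screen texcoords, and l, r, b, t, n, f are the orthographic volume extents (right-handed view space looking down -z):

        x_v = l + (r-l)\,u, \qquad
        y_v = b + (t-b)\,(1-v), \qquad
        z_v = -\bigl(n + d\,(f-n)\bigr),

        \mathbf{p}_{\text{world}} = V^{-1}\,(x_v,\; y_v,\; z_v,\; 1)^{\mathsf T}.

    If the stored depth is instead the value the projection itself wrote, the inViewProjection route above works as-is for an orthographic matrix since w stays 1; the two conventions just should not be mixed.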

    Read the article

  • When dealing with a static game board, what are some methods to make it more interesting?

    - by Ólafur Waage
    Let's say you have a game board that you look at. It does not move, but there is some action going on; for example Chess, Checkers, or Solitaire. The game I'm working on is not one of these, but it's a good reference. What are some methods you can apply to the game or the design that increase the appeal of the game to the user? Of course you can make it prettier, but what are some other methods you can use? For example: visual cues, game design changes, user interface arrangement, etc.

    Read the article

  • Open Source Analysis

    - by BluFire
    There is a lot of code in open source projects, and looking at all of it is time-consuming and can be confusing to a novice like me. Are there any sections of open-source projects that should be focused on? What should I focus on when I look at code? I'm asking this in general because if I ask it specifically, the question will only apply to one or two projects rather than an entire group of projects ranging over different types of games and difficulty.

    Read the article

  • How can I improve the "smoothness" of a 2D side-scrolling iPhone game?

    - by MrDatabase
    I'm working on a relatively simple 2D side-scrolling iPhone game. The controls are tilt-based. I use OpenGL ES 1.1 for the graphics. The game state is updated at a rate of 30 Hz, and the drawing is updated at a rate of 30 fps (via NSTimer). The smoothness of the drawing is OK, but not quite as smooth as a game like iFighter. What can I do to improve the smoothness of the game? Here are the potential issues I've briefly considered:

        - I'm varying the opacity of up to 15 "small" (20x20 pixel) textures at a time... apparently varying the opacity in this manner can degrade drawing performance.
        - I'm rendering at only 30 fps (via NSTimer)... perhaps 2D games like iFighter are rendered at a higher frame rate?
        - Perhaps the game state could be updated at a faster rate? Note the acceleration values are updated at 100 Hz, so I could potentially update part of the game state at 100 Hz.
        - All of my textures are PNG24... perhaps PNG8 would help (due to smaller size, etc.).

    Read the article
