Search Results


  • Box2Dweb very slow on node.js

    - by Peteris
    I'm using Box2Dweb on node.js. I have a rotated box object that I apply an impulse to move around. The timestep is set at 50ms; however, it bumps up to 100ms and even 200ms as soon as I add any more edges or boxes. Here are the edges I would like to use as bounds around the playing area:

        // Computing the corners
        var upLeft = new b2Vec2(0, 0),
            lowLeft = new b2Vec2(0, height),
            lowRight = new b2Vec2(width, height),
            upRight = new b2Vec2(width, 0)

        // Edges bounding the visible game area
        var edgeFixDef = new b2FixtureDef
        edgeFixDef.friction = 0.5
        edgeFixDef.restitution = 0.2
        edgeFixDef.shape = new b2PolygonShape

        var edgeBodyDef = new b2BodyDef;
        edgeBodyDef.type = b2Body.b2_staticBody

        edgeFixDef.shape.SetAsEdge(upLeft, lowLeft)
        world.CreateBody(edgeBodyDef).CreateFixture(edgeFixDef)
        edgeFixDef.shape.SetAsEdge(lowLeft, lowRight)
        world.CreateBody(edgeBodyDef).CreateFixture(edgeFixDef)
        edgeFixDef.shape.SetAsEdge(lowRight, upRight)
        world.CreateBody(edgeBodyDef).CreateFixture(edgeFixDef)
        edgeFixDef.shape.SetAsEdge(upRight, upLeft)
        world.CreateBody(edgeBodyDef).CreateFixture(edgeFixDef)

    Can Box2D really become this slow for even two bodies, or is there some pitfall? It would be very surprising, given all the demos that successfully use tens of objects.

  • snapping an angle to the closest cardinal direction

    - by Josh E
    I'm developing a 2D sprite-based game, and I'm having trouble making the sprites rotate correctly. In a nutshell, I've got spritesheets for each of 5 directions (the other 3 come from just flipping the sprite horizontally), and I need to clamp the velocity/rotation of the sprite to one of those directions. My sprite class has a pre-computed list of radians corresponding to the cardinal directions, like this:

        protected readonly List<float> CardinalDirections = new List<float>
        {
            MathHelper.PiOver4,
            MathHelper.PiOver2,
            MathHelper.PiOver2 + MathHelper.PiOver4,
            MathHelper.Pi,
            -MathHelper.PiOver4,
            -MathHelper.PiOver2,
            -MathHelper.PiOver2 + -MathHelper.PiOver4,
            -MathHelper.Pi,
        };

    Here's the positional update code:

        if (velocity == Vector2.Zero)
            return;

        var rot = (float)Math.Atan2(velocity.Y, velocity.X);
        TurretRotation = SnapPositionToGrid(rot);
        var snappedX = (float)Math.Cos(TurretRotation);
        var snappedY = (float)Math.Sin(TurretRotation);
        var rotVector = new Vector2(snappedX, snappedY);
        velocity *= rotVector;
        //...snip

        private float SnapPositionToGrid(float rotationToSnap)
        {
            if (rotationToSnap == 0)
                return 0.0f;

            var targetRotation = CardinalDirections.First(x => (x - rotationToSnap >= -0.01 && x - rotationToSnap <= 0.01));
            return (float)Math.Round(targetRotation, 3);
        }

    What am I doing wrong here? I know that the SnapPositionToGrid method is far from what it needs to be (the .First(..) call is on purpose, so that it throws on no match), but I have no idea how I would go about accomplishing this, and unfortunately Google hasn't helped much either. Am I thinking about this the wrong way, or is the answer staring me in the face?
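
    A tolerance of ±0.01 radians will almost never match an arbitrary heading, so the .First(...) search is bound to throw; snapping usually means picking the nearest direction rather than an exact match. A minimal sketch of that idea (an editor's suggestion, assuming XNA's MathHelper), which rounds the heading to the nearest multiple of 45 degrees:

        // Snap an angle (radians, as returned by Math.Atan2) to the nearest of
        // the 8 cardinal/diagonal directions by rounding to a multiple of Pi/4.
        private static float SnapToCardinal(float angle)
        {
            const float step = MathHelper.PiOver4; // 45 degrees
            var snapped = (float)Math.Round(angle / step) * step;
            // Atan2 returns (-Pi, Pi]; wrap so the result stays in that range.
            return MathHelper.WrapAngle(snapped);
        }

    Rounding guarantees a match for every input, so the list of candidate directions (and the exception path) disappears entirely.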

  • Should I be using a game engine?

    - by Kyle
    I'm an experienced programmer, but I'm completely new to making games. I'm thinking of making an iPhone game similar to a 2D tower defense game. In the web programming world, it would be a big waste of time to build a website without some sort of web framework (e.g. Ruby on Rails). Is the same true for making games? Do people mostly use some sort of framework/game engine for making a game? If so, what are the popular ones for iOS?

  • Can I name a team with the name of their city to avoid trademark issues?

    - by Paul
    I was wondering: if you want to make an NBA game on smartphones without the license held by EA, the first solution seems to be naming your teams with different names, such as "Chicragro Brulls" (this is just an example). But would it be possible to just call your teams by the name of their city, such as "Chicago vs. Dallas"? I know the first solution was chosen by Pro Evolution Soccer; do you know of any other games that don't use a license?

  • In 3D camera math, calculate what Z depth is pixel unity for a given FOV

    - by badweasel
    I am working in iOS and OpenGL ES 2.0. Through trial and error I've figured out a frustum where, at a specific z depth, pixels drawn are 1 to 1 with my source textures: 1 pixel in my texture is 1 pixel on the screen. For 2D games this is good. Of course it means that I also factor in things like the size of the quad and the size of the texture. For example, if my sprite is a quad 32x32 pixels, the quad size is 3.2 units wide and tall, and the texcoords are 32 / the size of the texture wide and tall. Then the frustum is:

        matrixFrustum(-(float)backingWidth/frustumScale,
                       (float)backingWidth/frustumScale,
                      -(float)backingHeight/frustumScale,
                       (float)backingHeight/frustumScale,
                      40, 1000, mProjection);

    where frustumScale is 800 for a retina screen. Then at a distance of 800 from the camera, the sprite is pixel for pixel the same as Photoshop.

    For 3D games sometimes I still want to be able to do this, but depending on the scene I sometimes need the FOV to be different things. I'm looking for a way to figure out what Z depth will achieve this same pixel unity for a given FOV. For this, my mProjection is set using:

        matrixPerspective(cameraFOV, near, far, (float)backingWidth / (float)backingHeight, mProjection);

    With testing I found that at an FOV of 45.0 a Z of 38.5 is very close to pixel unity, and at an FOV of 30.0 a Z of 59.5 is about right. But how can I calculate a value that is spot on? Here's my matrixPerspective code:

        void matrixPerspective(float angle, float near, float far, float aspect, mat4 m)
        {
            //float size = near * tanf(angle / 360.0 * M_PI);
            float size = near * tanf(degreesToRadians(angle) / 2.0);
            float left = -size, right = size, bottom = -size / aspect, top = size / aspect;

            // Unused values in perspective formula.
            m[1] = m[2] = m[3] = m[4] = 0;
            m[6] = m[7] = m[12] = m[13] = m[15] = 0;

            // Perspective formula.
            m[0] = 2 * near / (right - left);
            m[5] = 2 * near / (top - bottom);
            m[8] = (right + left) / (right - left);
            m[9] = (top + bottom) / (top - bottom);
            m[10] = -(far + near) / (far - near);
            m[11] = -1;
            m[14] = -(2 * far * near) / (far - near);
        }

    And my mView is set using:

        lookAtMatrix(cameraPos, camLookAt, camUpVector, mView);

    UPDATE: I'm going to leave this here in case anyone has a different solution, can explain how they do it, or can explain why this works. This is what I figured out. In my system I use a 10th-scale of units to pixels on non-retina displays and a 20th-scale on retina displays. The iPhone is 640 pixels wide on retina and 320 pixels wide on non-retina (obsolete). So if I want something to be the full screen width, I divide by 20 to get the OpenGL unit width, then divide that by 2 to get the left and right unit positions: something 32 units wide centered on the screen goes from -16 to +16. Believe it or not, I have an Excel spreadsheet do all this math for me and output all the vertex data for my sprite sheet. It's an arbitrary convention I made up: 0.1 units = 1 non-retina pixel or 2 retina pixels. I could have made it 0.01 units = 2 pixels, and someday I might switch to that, but for now it's the former.

    So the width of the screen in units is 32.0, which means the left-most pixel is at -16.0 and the right-most is at 16.0. After messing around a bit, I figured out that if I take the [0] value of an identity modelViewProjection matrix and multiply it by 16, I get the depth required to get 1:1 pixels. I don't know why, and I don't know if the 16 is related to the screen size or is just a lucky guess. But I did a test where I placed a sprite at that calculated depth and varied the FOV through all the valid values, and the object stays steady on screen with 1:1 pixels. So now I'm just calculating the unityDepth that way. If someone gives me a better answer I'll checkmark it.
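
    A closed-form version consistent with the numbers above (an editor's sketch, not from the original post): in matrixPerspective the top/bottom planes are divided by aspect, so angle is the horizontal FOV, and m[0] = 2*near/(right-left) = 1/tan(fovX/2). At depth z the frustum spans 2*z*tan(fovX/2) world units horizontally, and pixel unity occurs when that span equals the screen width in units (32 here). Solving gives z = halfScreenWidthInUnits / tan(fovX/2), which is exactly m[0] * 16: the 16 is half the screen width in units, not a lucky guess. It predicts 38.63 for a 45-degree FOV and 59.71 for 30 degrees, matching the measured 38.5 and 59.5. As C# (hypothetical helper name):

        using System;

        // Depth at which 1 texture pixel maps to 1 screen pixel for a
        // perspective projection with horizontal FOV fovXDegrees.
        // halfScreenWidthUnits is half the screen width in world units (16 above).
        static float PixelUnityDepth(float fovXDegrees, float halfScreenWidthUnits)
        {
            double halfFovRadians = fovXDegrees * Math.PI / 360.0; // (fov/2) in radians
            return (float)(halfScreenWidthUnits / Math.Tan(halfFovRadians));
        }

        // PixelUnityDepth(45f, 16f) is about 38.63; PixelUnityDepth(30f, 16f) about 59.71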

  • ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND

    - by Telanor
    I've stared at this for at least half an hour now and I cannot figure out what DirectX is complaining about. I know this error normally means you used a float3 instead of a float4 or something like that, but I've checked over and over, and as far as I can tell everything matches. This is the full error message:

        D3D11: ERROR: ID3D11DeviceContext::DrawIndexed: Input Assembler - Vertex Shader linkage error:
        Signatures between stages are incompatible. The input stage requires Semantic/Index (COLOR,0)
        as input, but it is not provided by the output stage.
        [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]

    This is the vertex shader's input signature as seen in PIX:

        // Input signature:
        //
        // Name                 Index   Mask  Register  SysValue  Format  Used
        // -------------------- -----  ------ --------  --------  ------  ------
        // POSITION                 0   xyz          0      NONE   float  xyz
        // NORMAL                   0   xyz          1      NONE   float
        // COLOR                    0   xyzw         2      NONE   float

    The HLSL structure looks like this:

        struct VertexShaderInput
        {
            float3 Position : POSITION0;
            float3 Normal : NORMAL0;
            float4 Color : COLOR0;
        };

    The input layout, from PIX, is: [screenshot not reproduced in this copy]. The C# structure holding the data looks like this:

        [StructLayout(LayoutKind.Sequential)]
        public struct PositionColored
        {
            public static int SizeInBytes = Marshal.SizeOf(typeof(PositionColored));
            public static InputElement[] InputElements = new[]
            {
                new InputElement("POSITION", 0, Format.R32G32B32_Float, 0),
                new InputElement("NORMAL", 0, Format.R32G32B32_Float, 0),
                new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 0)
            };

            Vector3 position;
            Vector3 normal;
            Vector4 color;

            #region Properties
            ...
            #endregion

            public PositionColored(Vector3 position, Vector3 normal, Vector4 color)
            {
                this.position = position;
                this.normal = normal;
                this.color = color;
            }

            public override string ToString()
            {
                StringBuilder sb = new StringBuilder(base.ToString());
                sb.Append(" Position=");
                sb.Append(position);
                sb.Append(" Color=");
                sb.Append(Color);
                return sb.ToString();
            }
        }

    SizeInBytes comes out to 40, which is correct (4*3 + 4*3 + 4*4 = 40). Can anyone find where the mistake is?
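
    One thing worth ruling out (an editor's guess from the symptoms, not a confirmed fix): if the fourth argument of the InputElement constructor used here is the input slot rather than the byte offset, none of the three elements specifies where its data starts in the vertex. Spelling out the aligned byte offsets removes the ambiguity; a sketch assuming a (name, index, format, alignedByteOffset, slot) overload exists in the binding used:

        // Explicit byte offsets: a Vector3 is 12 bytes, so NORMAL starts at 12
        // and COLOR at 24, matching the 40-byte PositionColored layout.
        public static InputElement[] InputElements = new[]
        {
            new InputElement("POSITION", 0, Format.R32G32B32_Float,     0, 0),
            new InputElement("NORMAL",   0, Format.R32G32B32_Float,    12, 0),
            new InputElement("COLOR",    0, Format.R32G32B32A32_Float, 24, 0)
        };

    It is also worth confirming in PIX that the input layout bound at the failing DrawIndexed call is the one created from this vertex shader's signature; the error says COLOR is missing from whatever layout is actually bound, which often points at a layout built from a different shader.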

  • Wall avoidance steering

    - by Vodemki
    I'm making a small steering simulator using the Reynolds boid algorithm. Now I want to add a wall avoidance feature. My walls are in 3D and defined using two points, like this:

        ---------. P2
        |
        |
        P1 .---------

    My agents have a velocity, a position, etc. Could you tell me how to implement the avoidance for my agents?

        Vector2D ReynoldsSteeringModel::repulsionFromWalls()
        {
            Vector2D force;
            vector<Wall *> wallsList = walls();
            Point2D pos = self()->position();
            Vector2D velocity = self()->velocity();
            for (unsigned i = 0; i < wallsList.size(); i++)
            {
                //TODO
            }
            return force;
        }

    Then I use all the forces returned by my boid functions and apply them to my agent. I just need to know how to do that with my walls. Thanks for your help.
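
    The usual containment approach (a C# sketch of the general idea, not the original C++ API): project a "feeler" ahead of the agent along its velocity; if the feeler crosses a wall segment, push away along the wall's normal, harder the deeper the feeler would penetrate.

        using System;
        using System.Numerics;

        static class WallAvoidance
        {
            // Repulsion force for one wall (endpoints wallA, wallB), or zero if
            // the feeler (pos to pos + velocity * lookAheadTime) misses it.
            public static Vector2 Repulsion(Vector2 pos, Vector2 velocity,
                                            Vector2 wallA, Vector2 wallB,
                                            float lookAheadTime, float strength)
            {
                Vector2 feelerEnd = pos + velocity * lookAheadTime;
                if (!SegmentsIntersect(pos, feelerEnd, wallA, wallB, out Vector2 hit))
                    return Vector2.Zero;

                // Wall normal, flipped so it faces the agent.
                Vector2 along = Vector2.Normalize(wallB - wallA);
                Vector2 normal = new Vector2(-along.Y, along.X);
                if (Vector2.Dot(normal, pos - hit) < 0)
                    normal = -normal;

                // Push harder the deeper the feeler would penetrate.
                float penetration = (feelerEnd - hit).Length();
                return normal * penetration * strength;
            }

            static bool SegmentsIntersect(Vector2 p1, Vector2 p2, Vector2 p3, Vector2 p4,
                                          out Vector2 hit)
            {
                hit = default;
                float d = (p2.X - p1.X) * (p4.Y - p3.Y) - (p2.Y - p1.Y) * (p4.X - p3.X);
                if (Math.Abs(d) < 1e-6f) return false; // parallel
                float t = ((p3.X - p1.X) * (p4.Y - p3.Y) - (p3.Y - p1.Y) * (p4.X - p3.X)) / d;
                float u = ((p3.X - p1.X) * (p2.Y - p1.Y) - (p3.Y - p1.Y) * (p2.X - p1.X)) / d;
                if (t < 0 || t > 1 || u < 0 || u > 1) return false;
                hit = p1 + t * (p2 - p1);
                return true;
            }
        }

    Summed over wallsList, this would be the body of repulsionFromWalls; lookAheadTime and strength are tuning knobs.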

  • How to calculate direction from initial point and another point?

    - by Dvole
    I'm making a simple game where I shoot things from a certain point on screen (A). I tap the screen and shoot the projectile from the initial point (A) to the tap point (B). But I want the projectile to keep moving along the same path and fly out of the bounds of the screen. How do I calculate a point that is on the same line as these two points, but further away? This is simple math, but I can't figure it out.
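
    A sketch of the standard answer (editor's illustration): every point on the ray through A and B can be written P = A + t * (B - A); t = 1 is the tap point, and any t > 1 lies farther along the same line. Normalizing B - A instead gives a direction that can be scaled by a speed, or by a distance larger than the screen diagonal.

        using System.Numerics;

        static class Ray2D
        {
            // A point 'distance' units past 'a' along the line through 'a' and 'b'.
            public static Vector2 PointAlongLine(Vector2 a, Vector2 b, float distance)
            {
                Vector2 direction = Vector2.Normalize(b - a);
                return a + direction * distance;
            }
        }

        // Or, as a per-frame velocity for the projectile:
        // Vector2 velocity = Vector2.Normalize(b - a) * speed;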

  • port OpenGL 2.x to OpenGL 3.x

    - by user46759
    I'm trying to port the OpenCloth example to OpenGL 3.x. I've mostly done it for the shaders, but I'm not sure about this part:

        glEnableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vboID);
        glVertexPointer(4, GL_FLOAT, 0, 0);

        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vboTexID);
        glTexCoordPointer(2, GL_FLOAT, 0, 0);

        glEnableClientState(GL_NORMAL_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vboNormID);
        glNormalPointer(GL_FLOAT, sizeof(float)*4, 0);

    Maybe glEnableVertexAttribArray somewhere? Any clue? Thanks.

    Edit: maybe something like this?

        glEnableVertexAttribArray(2); // or glEnableVertexAttribArray(positionIndex);
        glBindBuffer(GL_ARRAY_BUFFER, vboTexID);
        glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, 0);

        glEnableVertexAttribArray(3); // or glEnableVertexAttribArray(positionIndex);
        glBindBuffer(GL_ARRAY_BUFFER, vboNormID);
        glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 4, 0);

  • Figuring out what object is closer to a certain point?

    - by user1157885
    I'm trying to create fog of war. I have the visual effect created, but I'm not sure how to handle hiding other players when they're within the fog of war. Right now, what I'm trying to do is: if another player is hiding behind a wall, don't render that player. I was thinking of doing it by sending a ray in the direction of each player, creating a list of all the obstacles that ray collides with, and then figuring out whether an obstacle is closer than the player. But then I realized I'm not really sure how to determine whether the obstacle is in fact closer, because I have to account for all the dimensions, so I'm kind of stuck. First of all, is this approach the correct way to go about it, and secondly, how would I calculate whether the obstacle is in fact closer, taking into account X, Y and Z? Thanks
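
    On the distance test itself, a sketch (editor's illustration, hypothetical names): as long as the obstacle hit point and the player are both measured from the same ray origin, "closer" reduces to comparing distances along that ray, and squared lengths avoid the square root.

        using System.Numerics;

        static class LineOfSight
        {
            // True if obstacleHit lies nearer to the eye than the player does,
            // i.e. the obstacle blocks the line of sight. All are world-space points.
            public static bool ObstacleIsCloser(Vector3 eye, Vector3 obstacleHit, Vector3 player)
            {
                float obstacleDistSq = (obstacleHit - eye).LengthSquared();
                float playerDistSq = (player - eye).LengthSquared();
                return obstacleDistSq < playerDistSq;
            }
        }

    Most raycast APIs also return the hit distance directly, in which case the first intersection being nearer than the player already answers the question without any extra math.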

  • Is this the most effective simple way to display a moving image? (SDL2)

    - by user36324
    I've looked around for tutorials on SDL2, but there aren't many, so I'm curious: I was messing around, and is this an effective way to move an image? One problem is that it drags the image along to wherever it moves.

        #include "SDL.h"
        #include "SDL_image.h"

        int main(int argc, char* argv[])
        {
            bool exit = false;
            SDL_Init(SDL_INIT_EVERYTHING);

            SDL_Window *win = SDL_CreateWindow("Hello World!", 100, 100, 640, 480, SDL_WINDOW_SHOWN);
            SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
            SDL_Surface *png = IMG_Load("character.png");

            SDL_Rect src;
            src.x = 0; src.y = 0; src.w = 161; src.h = 159;

            SDL_Rect dest;
            dest.x = 50; dest.y = 50; dest.w = 161; dest.h = 159;

            SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, png);
            SDL_FreeSurface(png);

            while (exit == false) {
                dest.x++;
                SDL_RenderClear(ren);
                SDL_RenderCopy(ren, tex, &src, &dest);
                SDL_RenderPresent(ren);
            }
            SDL_Delay(5000);

            SDL_DestroyTexture(tex);
            SDL_DestroyRenderer(ren);
            SDL_DestroyWindow(win);
            SDL_Quit();
        }

  • Light on every model and not in the whole scene

    - by alecnash
    I am using a custom shader and try to pass the effect to my models like this:

        foreach (ModelMesh mesh in Model.Meshes)
        {
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                part.Effect = effect;
            }
            mesh.Draw();
        }

    My only issue is that every model now has its own light source in it. Why is this happening, and is this a problem with my shader?

    Edit: These are the parameters passed to the shader:

        private void Get_lambertEffect()
        {
            if (_lambertEffect == null)
                _lambertEffect = Engine.LambertEffect;

            // Lambert technique (LambertWithShadows, LambertWithShadows2x2PCF, LambertWithShadows3x3PCF)
            _lambertEffect.CurrentTechnique = _lambertEffect.Techniques["LambertWithShadows3x3PCF"];
            _lambertEffect.Parameters["texelSize"].SetValue(Engine.ShadowMap.TexelSize);

            // ShadowMap parameters
            _lambertEffect.Parameters["lightViewProjection"].SetValue(Engine.ShadowMap.LightViewProjectionMatrix);
            _lambertEffect.Parameters["textureScaleBias"].SetValue(Engine.ShadowMap.TextureScaleBiasMatrix);
            _lambertEffect.Parameters["depthBias"].SetValue(Engine.ShadowMap.DepthBias);
            _lambertEffect.Parameters["shadowMap"].SetValue(Engine.ShadowMap.ShadowMapTexture);

            // Camera view and projection parameters
            _lambertEffect.Parameters["view"].SetValue(Engine._camera.ViewMatrix);
            _lambertEffect.Parameters["projection"].SetValue(Engine._camera.ProjectionMatrix);
            _lambertEffect.Parameters["world"].SetValue(Matrix.CreateScale(Size) * world);

            // Light and color
            _lambertEffect.Parameters["lightDir"].SetValue(Engine._sourceLight.Direction);
            _lambertEffect.Parameters["lightColor"].SetValue(Engine._sourceLight.Color);
            _lambertEffect.Parameters["materialAmbient"].SetValue(Engine.Material.Ambient);
            _lambertEffect.Parameters["materialDiffuse"].SetValue(Engine.Material.Diffuse);
            _lambertEffect.Parameters["colorMap"].SetValue(ColorTexture.Create(Engine.GraphicsDevice, Color.Red));
        }

  • How can I improve the "smoothness" of a 2D side-scrolling iPhone game?

    - by MrDatabase
    I'm working on a relatively simple 2D side-scrolling iPhone game. The controls are tilt-based. I use OpenGL ES 1.1 for the graphics. The game state is updated at a rate of 30 Hz, and the drawing is updated at a rate of 30 fps (via NSTimer). The smoothness of the drawing is OK, but not quite as smooth as a game like iFighter. What can I do to improve the smoothness of the game? Here are the potential issues I've briefly considered:

      - I'm varying the opacity of up to 15 "small" (20x20 pixel) textures at a time; apparently varying the opacity in this manner can degrade drawing performance.
      - I'm rendering at only 30 fps (via NSTimer); perhaps 2D games like iFighter are rendered at a higher frame rate?
      - Perhaps the game state could be updated at a faster rate? Note the acceleration values are updated at 100 Hz, so I could potentially update part of the game state at 100 Hz.
      - All of my textures are PNG24; perhaps PNG8 would help (due to smaller size, etc.).

  • jump pads problem

    - by Pasquale Sada
    I'm trying to make a character jump onto a landing pad that sits above him. Here is the formula I've used (everything is pretty much self-explanatory, except maybe character_MaxForce, which is the total force of the character's jump):

        deltaPosition = target - character_position;
        sqrtTerm = Sqrt(2 * -gravity.y * deltaPosition.y + MaxYVelocity * character_MaxForce);
        time = (MaxYVelocity - sqrtTerm) / gravity.y;
        speedSq = jumpVelocity.x * jumpVelocity.x + jumpVelocity.z * jumpVelocity.z;

        if (speedSq < character_MaxForce * character_MaxForce)
        {
            // We have the right time, so we can store the value.
            jumpVelocity.x = deltaPosition.x / time;
            jumpVelocity.z = deltaPosition.z / time;
        }
        else
        {
            // Otherwise we try the other solution...
            time = (MaxYVelocity + sqrtTerm) / gravity.y;
            // ...and then store it.
            jumpVelocity.x = deltaPosition.x / time;
            jumpVelocity.z = deltaPosition.z / time;
        }
        jumpVelocity.y = MaxYVelocity;
        rigidbody_velocity = jumpVelocity;

    The problem is that the character jumps away from the landing pad, or sometimes jumps too far and never hits it.
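
    Two things worth double-checking against the standard kinematic solve this formula resembles (an editor's observation, not a confirmed fix): the term under the square root is usually MaxYVelocity squared, not MaxYVelocity * character_MaxForce, and the gravity sign has to be consistent throughout. Starting from dy = vy*t + 0.5*g*t^2, the crossing times are t = (-vy ± sqrt(vy^2 + 2*g*dy)) / g. A C# sketch (hypothetical names; g negative, e.g. -9.81):

        using System;

        static class JumpSolver
        {
            // Solve 0.5*g*t^2 + vy*t - dy = 0 for the two times the jumper's
            // height equals dy. Returns false if the pad is unreachable at vy.
            public static bool JumpTimes(float dy, float vy, float g,
                                         out float tUp, out float tDown)
            {
                tUp = tDown = 0f;
                float disc = vy * vy + 2f * g * dy;
                if (disc < 0f) return false;   // can't reach that height

                float s = (float)Math.Sqrt(disc);
                tUp = (-vy + s) / g;           // crossing while ascending
                tDown = (-vy - s) / g;         // crossing while descending
                return true;
            }
        }

        // For a pad above the character, landing usually means the descending
        // crossing, then: jumpVelocity.x = deltaPosition.x / t, and likewise for z.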

  • What are the semantics of glRotate and glTranslate's parameters?

    - by Zarkopafilis
    I have been trying to play with OpenGL after watching some tutorials, and I don't understand how the glTranslatef and glRotatef functions work. I believe a simple picture would help me. I understand that glTranslatef changes the position of the "camera" (but does it change the position at which the shapes get drawn?). However, I don't understand the rotation concept at all. If I do glRotatef(1, 0, 0, 1) it makes my quad spin around. If I just do glRotatef(1, 0, 0, 0) it makes the quad smaller (further away), but if I try to rotate around the X or Y axis, I get a black screen. I don't understand the angle either. Help would be appreciated.

  • How can state changes be batched while adhering to opaque-front-to-back/alpha-blended-back-to-front?

    - by Sion Sheevok
    This is a question I've never been able to find the answer to. Batching objects with similar states is a major performance gain when rendering many objects. However, I've learned various rules for drawing objects in the game world:

      - Draw all opaque objects, front-to-back.
      - Draw all alpha-blended objects, back-to-front.

    Some of the major parameters to batch by, as I understand it, are textures, vertex buffers, and index buffers. It seems that, as long as you are adhering to the above two rules, there's little to be done in regards to batching. I see one possibility to batch while still adhering to those rules: opaque objects can be drawn out of depth order, because drawing them front-to-back is merely a fill-rate optimization, while state changes may well be far more expensive than the overdraw caused by drawing out of depth order. However, non-opaque objects (those that require alpha blending, at least) must be drawn back-to-front to avoid rendering artifacts. Is the loss of the fill-rate optimization for opaques worth the state-batching optimization?
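
    A common middle ground (a general technique, sketched here by the editor rather than taken from the post) is a composite sort key: opaque draws sort primarily by state so batching wins, with depth as the secondary key; blended draws sort strictly back-to-front, with state only breaking ties. A C# sketch with a hypothetical DrawCall record:

        using System.Collections.Generic;

        // StateId packs shader/texture/buffer ids; Depth is view-space distance.
        record DrawCall(int StateId, float Depth, bool Blended);

        static class DrawSorter
        {
            public static void Sort(List<DrawCall> calls)
            {
                calls.Sort((a, b) =>
                {
                    // All opaque draws before all blended draws.
                    if (a.Blended != b.Blended) return a.Blended ? 1 : -1;

                    if (!a.Blended)
                    {
                        // Opaque: state first (maximize batching), then front-to-back.
                        int byState = a.StateId.CompareTo(b.StateId);
                        return byState != 0 ? byState : a.Depth.CompareTo(b.Depth);
                    }

                    // Blended: strictly back-to-front; state only breaks ties.
                    int byDepth = b.Depth.CompareTo(a.Depth);
                    return byDepth != 0 ? byDepth : a.StateId.CompareTo(b.StateId);
                });
            }
        }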

  • Material tiling and offset in Unity

    - by Simran kaur
    Ambiguity: what exactly is the difference between tiling the material and offsetting the material? Need to do: I need the material to be repeated n times on the object, where I set the value of n via script. How do I do it? It seems to happen through tiling (tried via the inspector), but then what is the difference between mainTextureOffset and SetTextureOffset? Tried: the following line of code, which should repeat the texture n times across the width of the object, but it does nothing significant that I can see.
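
    For what's described here, a short Unity sketch (assuming the material uses the standard _MainTex property): tiling controls how many times the texture repeats across the surface, offset shifts where it starts, and mainTextureScale/mainTextureOffset are just convenience properties equivalent to SetTextureScale/SetTextureOffset called with "_MainTex".

        using UnityEngine;

        public class RepeatTexture : MonoBehaviour
        {
            public int n = 4; // how many times to repeat across the width

            void Start()
            {
                // .material instantiates a copy; use .sharedMaterial to edit the asset.
                Material mat = GetComponent<Renderer>().material;

                // Repeat n times horizontally, once vertically...
                mat.mainTextureScale = new Vector2(n, 1);
                // ...which is equivalent to:
                // mat.SetTextureScale("_MainTex", new Vector2(n, 1));

                // Offset shifts the texture's starting point (in UVs); it does not repeat.
                mat.mainTextureOffset = Vector2.zero;
            }
        }

    Repetition is only visible if the texture's wrap mode is Repeat rather than Clamp.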

  • infer half vector length in BRDF

    - by cician
    This is my first question on Stack Exchange. Is it possible to infer the length of the half-angle vector for specular lighting from N·L and N·V, without the whole view and light vectors? I may be completely off track, but I have a gut feeling it's possible... Why? I'm working on a skin shader, and I'm already doing one texture lookup with N·L + N·E and one texture lookup for specular with N·H + N·V. The latter one could be transformed into an N·L + N·E lookup if only I had the half-vector length. Doing so could simplify the shader a bit and move some operations into the pre-computed lookup texture. It would make a huge difference, since I'm trying to squeeze as much functionality as possible into a single-pass mobile version, so instruction count matters. Thanks.
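
    For reference, the relevant algebra (an editor's worked step, not from the original post): for unit vectors the un-normalized half vector is L + V, and

        |L + V|^2 = |L|^2 + 2 (L·V) + |V|^2 = 2 + 2 (L·V)
        N·H = (N·L + N·V) / |L + V|

    so the half-vector length is fixed by L·V, the angle between the light and view directions. N·L and N·V alone cannot determine it in general: rotating V around N leaves both dot products unchanged while changing L·V.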

  • How can I make a collection of mini-games in XNA where the user can download packs of minigames and the main .exe can run them without being altered?

    - by Pyroka
    I'm currently making a PC game in XNA. It's actually a collection of mini-games (there are 3 mini-games at the moment); however, I plan to make and add more, in downloadable 'packs'. My question is: what's the best way to achieve this? Currently my thoughts are:

      - Create a 'game' interface.
      - Build games to this interface, but create them as .dlls.
      - Have the main .exe file scan a directory and load in the .dlls at runtime.

    I've not messed around with the idea much, but I know there are at least applications that use this plug-in approach (Notepad++ seems to), though I'm not sure of any games that do (although I'm sure they must exist). It seems that this is a problem that has been solved before, so I'm wondering if there's any form of established best practice.
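
    That three-step plan maps directly onto .NET reflection. A minimal sketch of the loader (hypothetical IMiniGame interface; error handling omitted):

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Reflection;

        // The contract every mini-game pack implements (hypothetical).
        public interface IMiniGame
        {
            string Name { get; }
            void Update(float elapsedSeconds);
            void Draw();
        }

        public static class MiniGameLoader
        {
            // Scan a folder for .dlls and instantiate every IMiniGame found.
            public static List<IMiniGame> LoadAll(string packDirectory)
            {
                var games = new List<IMiniGame>();
                foreach (string dll in Directory.GetFiles(packDirectory, "*.dll"))
                {
                    Assembly assembly = Assembly.LoadFrom(dll);
                    var gameTypes = assembly.GetTypes().Where(t =>
                        typeof(IMiniGame).IsAssignableFrom(t) && !t.IsAbstract);
                    foreach (Type t in gameTypes)
                        games.Add((IMiniGame)Activator.CreateInstance(t));
                }
                return games;
            }
        }

    The main caveat is that the interface must live in a shared assembly that the packs reference (not copy), so the type identities unify across .dlls.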

  • How many textures can I usually bind at once?

    - by Avi
    I'm developing a game engine, and it's only going to work on modern (Shader model 4+) hardware. I figure that, by the time I'm done with it, that won't be such an unreasonable requirement. My question is: how many textures can I bind at once on a modern graphics card? 16 would be sufficient. Can I expect most modern graphics cards to support that amount? My GTX 460 appears to support 32, but I have no idea if that's representative of most modern video cards.
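
    Rather than hard-coding an assumption, the limit can be queried at runtime. As a floor: OpenGL 3.x requires GL_MAX_TEXTURE_IMAGE_UNITS to be at least 16 per stage, and Direct3D 10/11 class hardware (Shader Model 4+) exposes at least that many, so 16 is a safe bet for this engine's stated requirements. A sketch of the query, assuming OpenTK's C# bindings and a current GL context:

        using OpenTK.Graphics.OpenGL;

        static class TextureLimits
        {
            public static void Print()
            {
                // Units usable from the fragment shader (GL 3.x minimum: 16).
                GL.GetInteger(GetPName.MaxTextureImageUnits, out int fragmentUnits);

                // Total units across all shader stages combined.
                GL.GetInteger(GetPName.MaxCombinedTextureImageUnits, out int combinedUnits);

                System.Console.WriteLine($"fragment: {fragmentUnits}, combined: {combinedUnits}");
            }
        }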

  • How do I reconstruct depth in deferred rendering using an orthographic projection?

    - by Jeremie
    I've been trying to get the world-space position of my pixel, but I'm missing something. I'm using an orthographic view for a 2.5D game. My depth is linear, and this is my code:

        float3 lightPos = lightPosition;
        float2 texCoord = PostProjToScreen(PSIn.lightPosition) + halfPixel;
        float depth = tex2D(depthMap, texCoord);

        float4 position;
        position.x = texCoord.x * 2 - 1;
        position.y = (1 - texCoord.y) * 2 - 1;
        position.z = depth.r;
        position.w = 1;
        position = mul(position, inViewProjection);
        //position.xyz /= position.w; // Commented out, but it doesn't work with or without it.

        float4 normal = (tex2D(normalMap, texCoord) - .5f) * 2;
        normal = normalize(normal);

        float3 lightDirection = normalize(lightPos - position);
        float att = saturate(1.0f - length(lightDirection) / attenuation);
        float lightning = saturate(dot(normal, lightDirection));
        lightning *= brightness;

        return float4(lightColor * lightning * att, 1);

    I'm using a sphere, but it's not working the way I want. I reproject the texture properly onto the sphere, but the light coordinates in the pixel shader seem to be stuck at zero, even though the light volume updates properly when I move it.
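
    One property worth leaning on (an editor's sketch of the general math, not the original shader): an orthographic projection has no perspective divide, so a linear 0..1 depth can be turned straight into a view-space position and then transformed by the inverse view matrix; multiplying by an inverse view-projection only works if position.z uses exactly the NDC depth convention that projection produced. In C# (hypothetical names, XNA conventions with the camera looking down -Z):

        using Microsoft.Xna.Framework;

        static class OrthoReconstruct
        {
            // Reconstruct a world-space position from screen UV (0..1, y down)
            // and a linear 0..1 depth, given the ortho volume's size in world
            // units, its near/far planes, and the camera's inverse view matrix.
            public static Vector3 WorldPosition(Vector2 uv, float linearDepth,
                                                float orthoWidth, float orthoHeight,
                                                float near, float far, Matrix inverseView)
            {
                float viewX = (uv.X - 0.5f) * orthoWidth;
                float viewY = (0.5f - uv.Y) * orthoHeight;
                float viewZ = -(near + linearDepth * (far - near)); // forward is -Z

                return Vector3.Transform(new Vector3(viewX, viewY, viewZ), inverseView);
            }
        }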

  • When dealing with a static game board, what are some methods to make it more interesting?

    - by Ólafur Waage
    Let's say you have a game board that you look at. It does not move, but there is some action going on; for example Chess, Checkers, Solitaire. The game I'm working on is not one of these, but they're a good reference. What are some methods you can apply to the game or the design that increase the appeal of the game to the user? Of course you can make it prettier, but what are some other methods you can use? For example: visual cues, game design changes, user interface arrangement, etc.

  • XNA: When to call LoadContent

    - by Peteyslatts
    I have an enum in my game that denotes the game state, i.e. MainMenu, InGame, GameOver, Exit, and I was wondering if it would be advisable to add a new one in for PrepGame, in which the game creates viewports for however many players there are, creates the battlefield, etc. I feel like this is a good idea except for one thing: should I make a call back to LoadContent() in this state? I could just put a switch statement on my currentGameState in LoadContent: if it equals PrepGame, load things like the skybox, ship models, textures, HUD graphics, etc. Or is it a good idea to create an asset manager class in the first call to LoadContent() and load everything then? I feel like both approaches have different benefits: a faster initial load but more loading later, versus a slower initial load after which all my objects reference the same variables, so I only have to load each one once. Any help is greatly appreciated. Thanks, Peter
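
    For the shared-asset route, a minimal sketch (hypothetical AssetManager; note that XNA's ContentManager caches internally, so repeated Load calls for the same asset return the same instance anyway):

        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.Graphics;

        // Loaded once in the first LoadContent() call and shared by every state.
        public class AssetManager
        {
            private readonly ContentManager content;

            public Texture2D HudTexture { get; private set; }
            public Model ShipModel { get; private set; }

            public AssetManager(ContentManager content)
            {
                this.content = content;
            }

            public void LoadAll()
            {
                HudTexture = content.Load<Texture2D>("Textures/hud");
                ShipModel = content.Load<Model>("Models/ship");
            }
        }

    A per-state switch inside LoadContent also works, but XNA only calls LoadContent once on its own, so you would have to invoke it (or a smaller helper) manually on state transitions.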

  • What is the recommended library for using Lua from C++?

    - by DevilWithin
    I am currently planning how to integrate Lua scripting into my 2D game engine, and I would like to go straight to the most adequate solution for having C++ classes and objects exposed. I've read this (if it helps you help): http://lua-users.org/wiki/BindingCodeToLua If you have a better scripting language to recommend, go for it ;D All help is welcome; I need to pick the best solution before I start implementing. Thanks

  • Creating shooting arrow class [on hold]

    - by I.Hristov
    OK, I am trying to write an XNA game with one entity controllable by the player, while the rest are bots (enemy and friendly) wandering around and... shooting each other from range. Now the shooting, I suppose, should be done with a separate class, Arrow (for example). The resulting object would be an arrow appearing on screen, moving from the shooting entity to the target entity. When the target is reached, the arrow is no longer active and is probably removed from the list. I plan to make a class with these fields:

        Vector2 shootingEntity;
        Vector2 targetEntity;
        float arrowSpeed;
        float arrowAttackSpeed;
        int damageDone;
        bool isActive;

    Then, when enemy entities get closer than an int rangeToShoot (which each entity will have as a field/property), I plan to make a list of arrows emerging from each entity and going to the closest opposing one. I wonder if that logic will enable me later to let many entities shoot independently at different enemy entities at the same time. I know the question is broad, but it would be wise to ask whether the foundations of the idea are correct.
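
    A minimal sketch of that design (an editor's illustration using XNA types, not a definitive implementation): each arrow stores its own position, target, and state, so a List<Arrow> updated once per frame lets any number of entities shoot independently, and finished arrows are swept out with RemoveAll.

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        public class Arrow
        {
            public Vector2 Position;
            public Vector2 Target;
            public float Speed;   // units per second
            public int Damage;
            public bool IsActive = true;

            public Arrow(Vector2 start, Vector2 target, float speed, int damage)
            {
                Position = start; Target = target; Speed = speed; Damage = damage;
            }

            public void Update(float elapsedSeconds)
            {
                if (!IsActive) return;
                Vector2 toTarget = Target - Position;
                float step = Speed * elapsedSeconds;
                if (toTarget.Length() <= step)
                {
                    Position = Target;  // reached the target: apply Damage here
                    IsActive = false;
                }
                else
                {
                    toTarget.Normalize();
                    Position += toTarget * step;
                }
            }
        }

        // Per frame, somewhere in the game's Update:
        // foreach (var arrow in arrows) arrow.Update(dt);
        // arrows.RemoveAll(a => !a.IsActive);

    Note this sketch homes on a fixed point; if arrows should track a moving entity, store a reference to the target instead of a Vector2 snapshot.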
