Search Results

Search found 25550 results on 1022 pages for 'mere development'.

  • Which is better for performance: using a disposable local variable or reusing a global one?

    - by petervaz
    This is for an Android game. Suppose I have a function that is called several times per second and does some calculations involving an ArrayList (or any other complex object, for that matter). Which approach would be preferred? Local:

        private void doStuff() {
            ArrayList<Type> XList = new ArrayList<Type>();
            // do stuff with list
        }

    Global:

        private ArrayList<Type> XList = new ArrayList<Type>();

        private void doStuff() {
            XList.clear();
            // do stuff with list
        }
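
    A sketch of the trade-off (illustrative, not from the question): on Android the usual advice is the second pattern, because every per-call allocation becomes garbage, and garbage-collector pauses show up as frame hitches in a game loop. Preallocating capacity makes the reuse even cheaper:

        // Reuse pattern: one preallocated list, cleared instead of reallocated.
        // 64 is a guessed upper bound on elements per call, not from the question.
        private final ArrayList<Type> XList = new ArrayList<Type>(64);

        private void doStuff() {
            XList.clear();   // nulls the slots; no allocation, no GC pressure
            // fill and use the list
        }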

  • Best way to solve tile drawing in a 2D side scroller?

    - by TheCompBoy
    What I still can't figure out is which would be the saner, easier, and faster way to draw the map on the screen. I will use many tiles for my maps in my side scroller. The problem is: should I make each map one whole image, like a single .png file per map, or should I draw the tiles in code, e.g. with a for loop in C++? Which way is recommended, and where can I read about which way is best? A sketch of the per-tile route follows below.
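
    For scale, the per-tile route is only a few lines (sketched here in Java for readability; the C++ loop is structurally identical). One small tile-sheet plus a 2D array of tile indices per map scales far better than one huge .png per map, and you only draw the tiles that are actually on screen:

        // Uses java.awt.Graphics and java.awt.Image; map[row][col] indexes tiles[].
        void drawMap(Graphics g, int[][] map, Image[] tiles,
                     int camX, int camY, int screenW, int screenH) {
            final int TILE = 32;   // tile size in pixels (assumed)
            int firstCol = Math.max(0, camX / TILE);
            int firstRow = Math.max(0, camY / TILE);
            int lastCol = Math.min(map[0].length, (camX + screenW) / TILE + 1);
            int lastRow = Math.min(map.length, (camY + screenH) / TILE + 1);
            for (int row = firstRow; row < lastRow; row++) {
                for (int col = firstCol; col < lastCol; col++) {
                    // draw each visible tile at its world position minus the camera
                    g.drawImage(tiles[map[row][col]],
                            col * TILE - camX, row * TILE - camY, null);
                }
            }
        }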

  • Android Activity access Unity Classes

    - by Anomaly
    I have made my own C# classes in Unity. Is there any way I can access these classes from the Android Activity that starts the UnityPlayer? Example: I have a C# class called testClass in Unity:

        class testClass {
            public static string myString = "test string";
        }

    From the Android activity, in Java, I want to access that class:

        String str = testClass.myString;

    Is this possible? If so, how? Or is there some other way to do this? In the end I basically want to communicate between my Android activity and the UnityPlayer object. Thanks in advance.

    EDIT: OK, so I looked at building Android plugins for Unity, but this wasn't satisfactory to me. I ended up building a socket client-server interface in Unity with C# and another one in Java for the Android app:

        Unity listens on port X and broadcasts on port Y.
        The Android activity listens on port Y and broadcasts on port X.

    This is necessary as both interfaces are running on the same host. So that's how I solved my problem, but I'm open to any suggestions if anyone knows a better way of communicating between the UnityPlayer and your app.
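
    For anyone taking the same socket route, the Android side reduces to a standard blocking listener thread. A minimal sketch (the port number and the handleMessageFromUnity handler are placeholders, not from the original post):

        // Uses java.net.ServerSocket, java.io.BufferedReader, android.util.Log.
        final int PORT_X = 5555;  // placeholder for "port X" above
        new Thread(new Runnable() {
            public void run() {
                try {
                    ServerSocket server = new ServerSocket(PORT_X);
                    while (true) {
                        Socket client = server.accept();
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(client.getInputStream()));
                        String line;
                        while ((line = in.readLine()) != null) {
                            handleMessageFromUnity(line);  // hypothetical handler
                        }
                        client.close();
                    }
                } catch (IOException e) {
                    Log.e("UnityBridge", "listener stopped", e);
                }
            }
        }).start();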

  • What is the most efficient way to add and remove Slick2D sprites?

    - by kirchhoff
    I'm making a game in Java with Slick2D and I want to create planes which shoot:

        int maxBullets = 40;
        static int bullet = 0;
        Missile missile[] = new Missile[maxBullets];

    I want to create/move my missiles in the most efficient way; I would appreciate your advice:

        public void shoot() throws SlickException {
            if (bullet < maxBullets) {
                if (missile[bullet] != null) {
                    missile[bullet].resetLocation(plane.getCenterX(), plane.getCenterY(),
                            plane.image.getRotation());
                } else {
                    missile[bullet] = new Missile("resources/missile.png",
                            plane.getCenterX(), plane.getCenterY(), plane.image.getRotation());
                }
            } else {
                bullet = 0;
                missile[bullet].resetLocation(plane.getCenterX(), plane.getCenterY(),
                        plane.image.getRotation());
            }
            bullet++;
        }

    I created the method resetLocation in my Missile class in order to avoid loading the resource again. Is it correct? In the update method I've got this to move all the missiles:

        if (bullet > 0 && bullet < maxBullets) {
            float hyp = 0.4f * delta;
            if (bullet == 1) {
                missile[0].move(hyp);
            } else {
                for (int x = 0; x < bullet; x++) {
                    missile[x].move(hyp);
                }
            }
        }
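
    The reuse idea is sound: creating missiles (and loading their image) during play is the expensive part, so a fixed pool plus resetLocation is the standard trick. One caveat with the code above: once the counter wraps, the update loop (for x < bullet) stops moving live missiles whose index is at or above bullet. A ring-buffer sketch that sidesteps this (Missile stands in for the asker's class):

        // Every slot is created once and reused; every live missile is moved
        // each frame, regardless of where the write cursor currently is.
        Missile[] pool = new Missile[maxBullets];
        boolean[] active = new boolean[maxBullets];
        int cursor = 0;

        public void shoot() throws SlickException {
            if (pool[cursor] == null) {
                pool[cursor] = new Missile("resources/missile.png",
                        plane.getCenterX(), plane.getCenterY(), plane.image.getRotation());
            } else {
                pool[cursor].resetLocation(plane.getCenterX(), plane.getCenterY(),
                        plane.image.getRotation());
            }
            active[cursor] = true;
            cursor = (cursor + 1) % maxBullets;  // wrap, overwriting the oldest missile
        }

        public void update(int delta) {
            float hyp = 0.4f * delta;
            for (int i = 0; i < maxBullets; i++) {
                if (active[i]) pool[i].move(hyp);
            }
        }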

  • Drawing Shape in DebugView (Farseer)

    - by keyvan kazemi
    As the title says, I need to draw a shape/polygon in Farseer using DebugView. I have this piece of code which converts a texture to a polygon:

        // load texture that will represent the tray
        trayTexture = Content.Load<Texture2D>("tray");

        // create an array to hold the data from the texture
        uint[] data = new uint[trayTexture.Width * trayTexture.Height];

        // transfer the texture data to the array
        trayTexture.GetData(data);

        // find the vertices that make up the outline of the shape in the texture
        Vertices verts = PolygonTools.CreatePolygon(data, trayTexture.Width, false);

        // since it is a concave polygon, we need to partition it into several
        // smaller convex polygons
        _list = BayazitDecomposer.ConvexPartition(verts);

        Vector2 vertScale = new Vector2(ConvertUnits.ToSimUnits(1));
        foreach (Vertices verti in _list)
        {
            verti.Scale(ref vertScale);
        }

        tray = BodyFactory.CreateCompoundPolygon(MyWorld, _list, 10);

    Now, in DebugView I guess I have to use the "DrawShape" method, which requires:

        DrawShape(Fixture fixture, Transform xf, Color color)

    My question is: how can I get the variables needed for this method, namely the Fixture and the Transform?

  • Presenting a Game Center leaderboard

    - by Coder404
    I have the following code:

        -(void)showLeaderboard {
            GKLeaderboardViewController *leaderboardController =
                [[GKLeaderboardViewController alloc] init];
            if (leaderboardController != NULL) {
                leaderboardController.timeScope = GKLeaderboardTimeScopeAllTime;
                leaderboardController.leaderboardDelegate = self;
                [self presentModalViewController: leaderboardController animated: YES];
            }
            [leaderboardController release];
        }

    which I am trying to use to make my leaderboard pop up. The issue is that cocos2d will not allow me to use the line:

        [self presentModalViewController: leaderboardController animated: YES];

    How do I fix this? Thanks. PS: I am using cocos2d.

  • How to create a raining effect (particles) on Android?

    - by user19495
    I am developing a 2D Android strategy game. It runs on a SurfaceView, so I can't (or can I?) use libGDX's particle system. I would like to make a raining effect; I am aiming for something like this (http://ridingwiththeriver.files.wordpress.com/2010/09/rain-fall-animation.gif). I don't need the splash effect at the end (although that would be superb, but it would probably take up a lot of system resources). How could I achieve that raining effect? Any ideas? Thank you a lot in advance!
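
    One cheap approach that works fine on a plain SurfaceView (a sketch; counts, colors, and speeds are illustrative): keep a fixed array of drops, move each one down every frame, and respawn a drop at the top once it leaves the screen. A few hundred short streaks drawn with a single Paint is well within budget:

        import android.graphics.Canvas;
        import android.graphics.Color;
        import android.graphics.Paint;
        import java.util.Random;

        class Rain {
            private final float[] x, y, speed;
            private final Paint paint = new Paint();
            private final Random rng = new Random();
            private final int w, h;

            Rain(int count, int screenW, int screenH) {
                w = screenW; h = screenH;
                x = new float[count]; y = new float[count]; speed = new float[count];
                paint.setColor(Color.argb(160, 200, 200, 255)); // translucent bluish white
                for (int i = 0; i < count; i++) respawn(i, rng.nextInt(h));
            }

            private void respawn(int i, int startY) {
                x[i] = rng.nextInt(w);
                y[i] = startY;
                speed[i] = 600 + rng.nextInt(600); // px/sec; varying it fakes depth
            }

            void updateAndDraw(Canvas c, float dt) {
                for (int i = 0; i < x.length; i++) {
                    y[i] += speed[i] * dt;
                    if (y[i] > h) respawn(i, -20);  // back above the top edge
                    // short streak; a longer streak reads as a faster drop
                    c.drawLine(x[i], y[i], x[i] - 2, y[i] - speed[i] * 0.03f, paint);
                }
            }
        }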

  • What libgdx project files can I ignore from version control?

    - by Zhen
    In an automatically created libgdx project, what files can I safely tell Git (or other revision control systems) to ignore? I'm considering these:

        *-android/.settings/
        *-android/bin/
        *-desktop/.settings/
        *-desktop/bin/
        *-html/.settings/
        *-html/gwt-unitCache/
        *-html/war/WEB-INF/classes/
        *-html/war/WEB-INF/deploy/
        *-html/war/assets/
        *-html/war/
        */.settings/
        */bin/

    Am I missing some? Is there a complete list somewhere?

  • Libraries for game development in C++? [on hold]

    - by LPeter1997
    It's time for me to start developing games in C++ (I have experience in game development with XNA and Java). What libraries do you recommend? I tried Allegro, but the installation alone is pretty head-crushing. Could you share your experiences with the library (or libraries) you use, maybe even advantages and disadvantages? By the way, it's a plus if it can easily be hooked up to Code::Blocks or Dev-C++. Thanks for the answers!

  • Is there a (family of) monotonically non-decreasing noise function(s)?

    - by Joe Wreschnig
    I'd like a function to animate an object moving from point A to point B over time, such that it reaches B at some fixed time, but its position at any time is randomly perturbed in a continuous fashion, but never goes backwards. The objects move along straight lines, so I only need one dimension. Mathematically, that means I'm looking for some continuous f(x), x ∈ [0,1], such that:

        f(0) = 0
        f(1) = 1
        x < y ⇒ f(x) ≤ f(y)

    At "most" points f(x + d) - f(x) bears no obvious relation to d. (The function is not uniformly increasing or otherwise predictable; I think that's also equivalent to saying no degree of derivative is a constant.) Ideally, I would actually like some way to have a family of these functions, providing some seed state. I'd need at least 4 bits of seed (16 possible functions) for my current use, but since that's not much, feel free to provide even more. To avoid various issues with accumulation errors, I'd prefer the function not require any kind of internal state. That is, I want it to be a real function, not a programming "function".
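
    One construction that meets all three conditions (a sketch, not from the thread): take any continuous, non-negative, seeded noise function g_s — squared value noise, say — integrate it, and normalize:

        f_s(x) = \frac{\int_0^x g_s(t)\,dt}{\int_0^1 g_s(t)\,dt}

    f_s(0) = 0 and f_s(1) = 1 by construction, and f_s is non-decreasing because the integrand is non-negative; the seed s selects the family member. The integral can be tabulated once up front, so evaluation needs no mutable state.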

  • Defining and selecting an area from an image

    - by meth0d_
    I want to create a game where the world is loaded from an image file, much like Paradox Interactive does it for their games. Say I have an image in which the red, green, blue, cyan, magenta, white, black, and grey areas should each be a different province. I know how to loop through the pixels and check whether I've hit a new province; the problem is that I don't know how to define the region of a province for selection. That is, I don't know how to load in the data so that you can click anywhere on a province and have that province get selected.
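
    A standard trick for this is color-keyed picking: keep the province image in memory (it never has to be drawn), and on a click read the pixel under the cursor and look its color up in a color-to-province table. A minimal Java sketch, assuming an AWT BufferedImage and illustrative province colors:

        import java.awt.image.BufferedImage;
        import java.util.HashMap;
        import java.util.Map;

        class ProvinceMap {
            private final BufferedImage mask;   // the province-colored image
            private final Map<Integer, String> byColor = new HashMap<Integer, String>();

            ProvinceMap(BufferedImage mask) {
                this.mask = mask;
                // register each key color once, as 0xRRGGBB with alpha stripped
                byColor.put(0xFF0000, "red province");
                byColor.put(0x00FF00, "green province");
                // ... one entry per province color ...
            }

            /** Returns the province under (x, y), or null for unmapped pixels. */
            String provinceAt(int x, int y) {
                int rgb = mask.getRGB(x, y) & 0xFFFFFF;  // drop the alpha byte
                return byColor.get(rgb);
            }
        }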

  • Common way to store model transformations

    - by redreggae
    I ask myself what's the best way to store the transformations in a model class. What I came up with is to store the translation and scaling in a Vector3 and the rotation in a Matrix4. On each update (frame) I multiply the three matrices (first building a translation matrix and a scaling matrix) to get the world matrix. This way I have no accumulated error:

        world = translation * scaling * rotation

    Another way would be to store the rotation in a quaternion, but then I would pay a high cost to convert it to a matrix every time step. If I lerp the model, I convert the rotation matrix to a quaternion and then back to a matrix. For speed optimization I have a dirty flag for each transformation, so that I only do a matrix multiplication if necessary:

        world = translation
        if (isScaled) { world *= scaling }
        if (isRotated) { world *= rotation }

    Is this a common way, or is it more common to have only one Matrix4 for all transformations? And is it better to store the rotation only as a quaternion? For info: currently I'm building a CSS 3D engine in JavaScript, but these questions are relevant for every 3D engine.
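
    For what it's worth, a common arrangement is the quaternion route with a dirty flag deciding when to rebuild, so the conversion cost is only paid on frames where something actually changed. A Java sketch using libGDX's math types (the set(translation, rotation, scale) overload is an assumption about that library's API; substitute your engine's equivalent):

        import com.badlogic.gdx.math.Matrix4;
        import com.badlogic.gdx.math.Quaternion;
        import com.badlogic.gdx.math.Vector3;

        // Store rotation as a quaternion (cheap to slerp, easy to renormalize,
        // so no drift); rebuild the world matrix only when something changed.
        class ModelTransform {
            private final Vector3 translation = new Vector3();
            private final Vector3 scale = new Vector3(1f, 1f, 1f);
            private final Quaternion rotation = new Quaternion();
            private final Matrix4 world = new Matrix4();
            private boolean dirty = true;

            void rotate(Quaternion delta) { rotation.mul(delta).nor(); dirty = true; }
            void moveTo(float x, float y, float z) { translation.set(x, y, z); dirty = true; }

            Matrix4 world() {
                if (dirty) {
                    // assumed libGDX overload composing translation * rotation * scale
                    world.set(translation, rotation, scale);
                    dirty = false;
                }
                return world;
            }
        }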

  • How do I draw a 2D plane and rotate the camera (to be a board) in a 3D XNA game?

    - by Mech0z
    I am trying to create a simple board game, but the 3D part of this is really killing me. From what I can gather I have created a plane, but it never moves even though I turn the camera. That partially makes sense, as I only turn the camera with a 3D model, but in my head that makes zero sense: if I turn the camera, it should affect ALL my models? With this code the camera only "cares" about the 3D cylinder; the plane is just completely still.

        private void OnDraw(object sender, GameTimerEventArgs e)
        {
            SharedGraphicsDeviceManager.Current.GraphicsDevice.Clear(Color.CornflowerBlue);

            foreach (ModelMesh mesh in cylinderModel.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    //effect.World = Matrix.CreateRotationX((float)e.TotalTime.TotalSeconds * 2);
                    effect.View = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
                    effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                        MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                    effect.EnableDefaultLighting();
                }
                mesh.Draw();
            }

            //cameraPosition.Z -= 5.0f;
            _effect.World = Matrix.CreateRotationZ(
                (MathHelper.ToRadians(((float)e.TotalTime.Milliseconds / 2) % 360)));
            foreach (EffectPass pass in _effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                SharedGraphicsDeviceManager.Current.GraphicsDevice.DrawUserPrimitives(
                    PrimitiveType.TriangleStrip, _vertices, 0, 1,
                    VertexPositionColor.VertexDeclaration);
            }
        }

    Is there a way to get the camera to affect all models?

  • What are the key "connectors" for animation creation?

    - by qaisjp
    I'm creating an animation "engine" for a 2D game which loads a *.2dped file to load a character (its body-part positions, height, length of arm, etc.), and then a *.2difp to manipulate the body-part positions. I'd like to know which key body parts (bones, I mean) I should allow to be manipulated. My current list, sorted by ID:

        1: BONE_PELVIS1
        2: BONE_PELVIS
        3: BONE_SPINE1
        4: BONE_UPPERTORSO
        5: BONE_NECK
        6: BONE_HEAD2
        7: BONE_HEAD1
        8: BONE_HEAD
        21: BONE_RIGHTUPPERTORSO
        22: BONE_RIGHTSHOULDER
        23: BONE_RIGHTELBOW
        24: BONE_RIGHTWRIST
        25: BONE_RIGHTHAND
        26: BONE_RIGHTTHUMB
        31: BONE_LEFTUPPERTORSO
        32: BONE_LEFTSHOULDER
        33: BONE_LEFTELBOW
        34: BONE_LEFTWRIST
        35: BONE_LEFTHAND
        36: BONE_LEFTTHUMB
        41: BONE_LEFTHIP
        42: BONE_LEFTKNEE
        43: BONE_LEFTANKLE
        44: BONE_LEFTFOOT
        51: BONE_RIGHTHIP
        52: BONE_RIGHTKNEE
        53: BONE_RIGHTANKLE
        54: BONE_RIGHTFOOT

    It's currently made to support real people, but am I being too accurate for a 2D character?

  • Repelling a rigidbody in the direction an object is rotating

    - by ndg
    Working in Unity, I have a game object which I rotate each frame, like so:

        void Update() {
            transform.Rotate(new Vector3(0, 1, 0) * speed * Time.deltaTime);
        }

    However, I'm running into problems when it comes to applying a force to rigidbodies that collide with this game object's sphere collider. The effect I'm hoping to achieve is that objects which touch the collider are thrown in roughly the same direction as the object is rotating. To do this, I've tried the following:

        Vector3 force = ((transform.localRotation * Vector3.forward) * 2000) * Time.deltaTime;
        collision.gameObject.rigidbody.AddForce(force, ForceMode.Impulse);

    Unfortunately this doesn't always match the rotation of the object. To debug the issue, I wrote a simple OnDrawGizmos script, which (strangely) appears to draw the line correctly oriented to the rotation:

        void OnDrawGizmos() {
            Vector3 pos = transform.position + ((transform.localRotation * Vector3.forward) * 2);
            Debug.DrawLine(transform.position, pos, Color.red);
        }

    What am I doing wrong?
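
    A note on the physics (standard rigid-body math, not from the question): the velocity of a point on a spinning body is tangential, the cross product of the angular velocity with the vector from the rotation center to that point:

        \mathbf{v} = \boldsymbol{\omega} \times \mathbf{r}

    For rotation about the y axis, \boldsymbol{\omega} = (0, \text{speed}, 0) and \mathbf{r} runs from the rotator's center to the contact point, so the correct throw direction depends on where on the sphere the collision happened; the forward vector alone cannot capture that, which would explain the mismatch.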

  • Is it only possible to display 64k vertices with 16-bit indices?

    - by Aufziehvogel
    I did the first 3D tutorial over at riemers.net and stumbled upon the fact that my graphics card only supports Shader Model 2.0 (the Reach profile in XNA), which means I can only use Int16 to store the indices (triangle to vertex). This means that I can only address 2^16 = 65536 vertices. I also read on the internet that you should prefer 16-bit over 32-bit indices, because not all hardware (like mine) supports 32-bit. Yet I am wondering: do all game scenes really get along with so few vertices? I thought faces of people alone used a lot of polygons (which are made up of vertices?). It's not relevant for me yet, but I am interested:

        Do game scenes use only 65536 vertices?
        Do you use some trade-off to display more (e.g. 64k in the GPU buffer, the rest in RAM)?
        Is there some method to get more into the GPU buffer?

    I already read in some other posts that there seems to be a limit of 64k per mesh too, so maybe you can compact stuff into meshes?
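
    A common answer to the last question is to split large meshes into batches of at most 65536 unique vertices each, remapping the global indices to batch-local 16-bit indices; each batch then becomes one mesh and one draw call. A Java sketch of the idea (illustrative, not XNA API):

        import java.util.*;

        final class MeshSplitter {
            static final int MAX_VERTS = 65536;

            static final class Batch {
                final List<float[]> vertices = new ArrayList<float[]>(); // local pool
                final List<Short> indices = new ArrayList<Short>();      // local indices
            }

            static List<Batch> split(float[][] verts, int[] triIndices) {
                List<Batch> batches = new ArrayList<Batch>();
                Batch cur = new Batch();
                Map<Integer, Short> remap = new HashMap<Integer, Short>();
                for (int t = 0; t < triIndices.length; t += 3) {
                    if (cur.vertices.size() + 3 > MAX_VERTS) { // triangle might not fit
                        batches.add(cur);
                        cur = new Batch();
                        remap.clear();
                    }
                    for (int k = 0; k < 3; k++) {
                        int global = triIndices[t + k];
                        Short local = remap.get(global);
                        if (local == null) {
                            local = (short) cur.vertices.size(); // Java shorts are signed,
                            cur.vertices.add(verts[global]);     // but the GPU reads the
                            remap.put(global, local);            // bit pattern as unsigned
                        }
                        cur.indices.add(local);
                    }
                }
                batches.add(cur);
                return batches;
            }
        }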

  • Need ideas on how to give my levels structure

    - by akuritsu
    I am making an iOS game for a project at school. It is going to be a tiny bit like Fruit Ninja, in that it will have different things on the screen, and when you hit them, they die and you get points. The trouble is that, unlike Fruit Ninja, my game will have different types of sprites, all doing different things (moving to different places, doing different things, etc.). The one thing that is bad about having all of these sprites that do different things is that it is hard for them to look neat on the screen all together. I was planning on having a couple of different game modes:

        Time Trial: You have 120 seconds to kill as many sprites as possible.
        Survival: You have three lives; every time you try to hit a sprite and miss, you lose a life.
        ????: Whatever I think of.

    I am a rookie at game design in general, and I don't know the best way to make my game look good and play well. I could have all of these sprites on the screen at the same time, or I could have them come in waves; for example, 10 of sprite_a come on, and once they are killed, 10 of sprite_b come on, etc. Please give me your opinion about which one I should code. If you have any other suggestions, for either a third game mode or a completely different way to make the levels, feel free to tell me.

  • How can I keep the correct alpha when rendering particles?

    - by April
    Recently, I was trying to save textures of 3D particles so that I can reuse them in 2D rendering. Now I have some problems with the alpha channel. An artist told me that my textures should have an unpremultiplied alpha channel. When I try to get the RGB values back, I get strange results: some areas turn lighter, some even totally white. I mainly focus on the additive and blend modes, that is:

        ADDITIVE: srcAlpha vs. 1
        BLEND: srcAlpha vs. 1 - srcAlpha

    I tried a technique called premultiplied alpha. This technique gets you the right RGB values, which is all you need on screen. As for the alpha value, it works well with BLEND mode, but not with ADDITIVE mode. As you can see in the parameters, BLEND mode always keeps the value within 1, while ADDITIVE mode cannot guarantee that. I want a proper alpha, but it just gets too big or too small relative to the RGB. What can I do? Any help will be greatly appreciated.

    PS: If you don't understand what I am trying to do: there is a commercial software package called "Particle Illusion". You can create various particles and then save the scene to a texture, where you can choose to remove the background of the particles.
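
    For reference, the standard blend equations involved (textbook definitions, not from the question), with straight color C and alpha \alpha:

        \text{BLEND:}\qquad C_{out} = C_{src}\,\alpha_{src} + C_{dst}\,(1 - \alpha_{src})
        \text{ADDITIVE:}\quad C_{out} = C_{src}\,\alpha_{src} + C_{dst}

    Premultiplied storage keeps C' = C\,\alpha, so BLEND becomes C_{out} = C'_{src} + C_{dst}(1 - \alpha_{src}). Recovering straight color afterwards means dividing, C = C'/\alpha, and that division is exactly where near-zero alphas amplify small color errors into the lighter and pure-white areas described above.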

  • Space Invaders-type game: Keeping the enemies aligned with each other as they turn around?

    - by CorundumGames
    OK, so here's the lowdown of the problem I'm trying to solve. I'm developing a game in PyGame that's a cross between Space Invaders and Columns. I'm trying to make the motion of the enemies similar to that of the aliens in Space Invaders; that is, they're all clustered in a grid, and if even one hits the side of the screen, the entire formation moves down and turns around. However, the motion of these aliens is continuous (as continuous as a monitor can be, anyway), not on a discrete grid like in the original. The enemies are instances of an Enemy class, and in turn they're held by a 2D array in an enemysquadron module (which, if you don't use Python, is in this case essentially a singleton due to the way Python modules work). Inside the Enemy class I have a class-scope velocity vector that is reversed every time an Enemy object touches the edge of the screen. This won't do, though, because as time goes on the enemies just become disorganized and jumbled (i.e., not in a grid as planned). I haven't implemented the enemies going downward yet, so let's not worry about that right now. Any tips?
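
    One way out (sketched in Java rather than the asker's Python, to match the examples elsewhere on this page): stop giving each Enemy its own velocity, and instead keep one authoritative velocity and origin on the formation, with each enemy holding only a fixed offset in the grid. Enemies then cannot drift apart, because their absolute position is always origin + offset:

        final class Formation {
            float originX = 0, originY = 0;
            float velocityX = 60;        // pixels per second (illustrative)
            final float[][] offsets;     // fixed grid offsets, one per enemy

            Formation(int cols, int rows, float spacing) {
                offsets = new float[cols * rows][2];
                for (int i = 0; i < cols * rows; i++) {
                    offsets[i][0] = (i % cols) * spacing;
                    offsets[i][1] = (i / cols) * spacing;
                }
            }

            void update(float dt, float screenWidth, float formationWidth) {
                originX += velocityX * dt;
                if (originX < 0 || originX + formationWidth > screenWidth) {
                    originX = Math.max(0, Math.min(originX, screenWidth - formationWidth));
                    velocityX = -velocityX;  // the whole grid turns around together
                    originY += 16;           // and steps down as one unit
                }
            }

            float enemyX(int i) { return originX + offsets[i][0]; }
            float enemyY(int i) { return originY + offsets[i][1]; }
        }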

  • Why are MVC & TDD not employed more in game architecture?

    - by secoif
    I will preface this by saying I haven't looked at a huge amount of game source, nor built much in the way of games. But coming from trying to employ 'enterprise' coding practices in web apps, looking at game source code seriously hurts my head: "What is this view logic doing in with business logic? This needs refactoring... so does this. Refactor, refactorrr."

    This worries me as I'm about to start a game project, and I'm not sure whether trying to MVC/TDD the dev process is going to hinder us or help us, as I don't see many game examples that use this, or much push for better architectural practices in the community. The following is an extract from a great article on prototyping games, though to me it seemed exactly the attitude many game devs use when writing production game code:

        Mistake #4: Building a system, not a game
        ... if you ever find yourself working on something that isn't directly moving you forward, stop right there. As programmers, we have a tendency to try to generalize our code, and make it elegant and able to handle every situation. We find that an itch terribly hard not to scratch, but we need to learn how. It took me many years to realize that it's not about the code, it's about the game you ship in the end.
        Don't write an elegant game component system, skip the editor completely and hardwire the state in code, avoid the data-driven, self-parsing, XML craziness, and just code the damned thing.
        ... Just get stuff on the screen as quickly as you can. And don't ever, ever, use the argument "if we take some extra time and do this the right way, we can reuse it in the game". EVER.

    Is it because games are (mostly) visually oriented, so it makes sense that the code will be weighted heavily towards the view, and thus any benefit from moving stuff out to models/controllers is fairly minimal, so why bother?

    I've heard the argument that MVC introduces a performance overhead, but this seems to me to be a premature optimisation; there'd be more important performance issues to tackle before you worry about MVC overheads (e.g. the render pipeline, AI algorithms, data-structure traversal, etc.).

    The same goes for TDD. It's not often I see games employing test cases, but perhaps this is due to the design issues above (mixed view/business logic) and the fact that it's difficult to test visual components, or components that rely on probabilistic results (e.g. those operating within physics simulations).

    Perhaps I'm just looking at the wrong source code, but why do we not see more of these 'enterprise' practices employed in game design? Are games really so different in their requirements, or is it a people/culture issue (i.e. game devs come from a different background and thus have different coding habits)?

  • How do I implement deceleration for the player character?

    - by tesselode
    Using delta time with addition and subtraction is easy:

        player.speed += 100 * dt

    However, multiplication and division complicate things a bit. For example, let's say I want the player to double his speed every second:

        player.speed = player.speed * 2 * dt

    I can't do this, because it'll slow the player down (unless delta time is really high). Division is the same way, except it'll speed things way up. How can I handle multiplication and division with delta time?

    Edit: it looks like my question has confused everyone. I really just wanted to be able to implement deceleration without this horrible mass of code:

        else
            if speed > 0 then
                speed = speed - 20 * dt
                if speed < 0 then speed = 0 end
            end
            if speed < 0 then
                speed = speed + 20 * dt
                if speed > 0 then speed = 0 end
            end
        end

    because that's way bigger than it needs to be. So far a better solution seems to be:

        speed = speed - speed * whatever_number * dt
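
    Two small helpers capture both answers (a sketch, not from the thread): for multiplicative change, raise the per-second factor to the power dt, which is framerate-independent; for deceleration, move the speed toward zero and clamp:

        // For multiplicative change: factorPerSecond = 2 doubles the value
        // every second at any framerate, because 2^(dt1) * 2^(dt2) = 2^(dt1+dt2).
        static float scalePerSecond(float value, float factorPerSecond, float dt) {
            return value * (float) Math.pow(factorPerSecond, dt);
        }

        // For deceleration: shed at most decelPerSecond * dt, stopping exactly
        // at zero instead of oscillating around it.
        static float decelerate(float speed, float decelPerSecond, float dt) {
            float step = decelPerSecond * dt;
            if (Math.abs(speed) <= step) return 0;
            return speed - Math.signum(speed) * step;
        }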

  • Sprite rotation

    - by Kipras
    I'm using OpenGL, and people suggest using glRotate for sprite rotation, but I find that strange. My problem with it is that it rotates the whole matrix, which sort of screws up all my collision detection, and so on and so forth. Imagine I had a sprite at position (100, 100), and at position (100, 200) there is an obstacle that the sprite is facing. I rotate the sprite away from the obstacle, and when I move upwards along my y axis, even though the projection makes it look like the sprite is moving away from the obstacle, the sprite will intersect it. So I don't see another way of rotating a sprite without screwing up all collision detection, other than doing mathematical operations on the image itself. Am I right, or am I missing something?
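
    The usual resolution (a sketch, not from the thread): glRotate only changes how the sprite is drawn. The game state keeps its own position and facing angle, collision shapes are rotated with plain math, and "moving forward" means moving along the facing direction:

        // Rotate a corner point (px, py) of the sprite around its center
        // (cx, cy) by `angle` radians; use this on collision vertices so the
        // collision shape matches what glRotate draws.
        static float[] rotateAround(float px, float py, float cx, float cy, float angle) {
            float cos = (float) Math.cos(angle);
            float sin = (float) Math.sin(angle);
            float dx = px - cx;
            float dy = py - cy;
            return new float[] {
                cx + dx * cos - dy * sin,   // rotated x
                cy + dx * sin + dy * cos    // rotated y
            };
        }

        // Likewise, "move forward" moves along the facing, not straight up the y axis:
        // x += speed * cos(angle) * dt;  y += speed * sin(angle) * dt;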

  • Why isn't one of the constant buffers being loaded inside the shader?

    - by Paul Ske
    However, I got the model to load under tessellation; the only problem is that one of the constant buffers isn't actually updating the shader's tessellation factor inside the hull shader. I created a message box at the rendering point, so I know for sure the tessellation factor is assigned to the dynamic constant buffer. Inside the shader code, where it says .Edges[1] = tessellationAmount;, the tessellationAmount is supposed to be sent from the dynamic buffer to the shader. Otherwise it's just a plain box. In better explanation: there's a matrix buffer, a camera buffer, and a tessellation buffer for constants. There's a multiBuffer array that assigns the matrix, camera, and tessellation buffers. So when I set the hull shader, pixel shader, vertex shader, and domain shader, they get their constant buffers assigned from the multibuffer, e.g.:

        devcon->HSSetConstantBuffers(0, 3, multibuffer);

    The only way around the whole ordeal would be to go into the shader and change how much the edges tessellate, and the inside as well, with the same number. My question is: why wouldn't the TessellationBuffer work in the shader?

  • Phone complains that identical GLSL struct definition differs in vert/frag programs

    - by stephelton
    When I provide the following struct definition in linked fragment and vertex shaders, my phone (Samsung Vibrant / Android 2.2) complains that the definitions differ:

        struct Light {
            mediump vec3 _position;
            lowp vec4 _ambient;
            lowp vec4 _diffuse;
            lowp vec4 _specular;
            bool _isDirectional;
            mediump vec3 _attenuation; // constant, linear, and quadratic components
        };

        uniform Light u_light;

    I know the struct is identical because it's included from another file. These shaders work on a Linux implementation and on my Android 3.0 tablet. Both shaders declare "precision mediump float;". The exact error is:

        Uniform variable u_light type/precision does not match in vertex and fragment shader

    Am I doing anything wrong here, or is my phone's implementation broken? Any advice (other than "file a bug report")?

  • How to use GetActiveUniform (in SharpGL)?

    - by frankie
    Generally, the question is in the title. I cannot understand how to use the GetActiveUniform function:

        public void GetActiveUniform(uint program, uint index, int bufSize, int[] length, int[] size, uint[] type, string name);

    My attempt looks like this (everything is compiled and linked):

        var uniformSize = new int[1];
        var unifromLength = new int[1];
        var uniformType = new uint[1];
        var uniformName = "";
        Gl.GetActiveUniform(Id, index, uniformNameMaxLength[0], unifromLength, uniformSize, uniformType, uniformName);

    After the call I get the proper uniform size, length, and type, but not the name.
