Search Results

Search found 41789 results on 1672 pages for 'software development'.


  • Question about separating game core engine from game graphics engine...

    - by Conrad Clark
    Suppose I have a SquareObject class which implements IDrawable, an interface containing the method void Draw(). I want to separate the drawing logic itself from the game core engine. My main idea is to create a static class which is responsible for dispatching draw actions to the graphics engine:

        public static class DrawDispatcher<T>
        {
            private static Action<T> DrawAction = new Action<T>((ObjectToDraw) => { });

            public static void SetDrawAction(Action<T> action)
            {
                DrawAction = action;
            }

            public static void Dispatch(T Obj)
            {
                DrawAction(Obj);
            }
        }

        public static class Extensions
        {
            public static void DispatchDraw<T>(this object Obj)
            {
                DrawDispatcher<T>.Dispatch((T)Obj);
            }
        }

    Then, on the core side:

        public class SquareObject : GameObject, IDrawable
        {
            #region Interface
            public void Draw()
            {
                this.DispatchDraw<SquareObject>();
            }
            #endregion
        }

    And on the graphics side:

        public static class SquareRender
        {
            // stuff here

            public static void Initialize()
            {
                DrawDispatcher<SquareObject>.SetDrawAction((Square) => { /* my square rendering logic */ });
            }
        }

    Does this "pattern" follow best practices? As a plus, I could easily change the render scheme of each object by changing the DispatchDraw type parameter, as in:

        public class SuperSquareObject : GameObject, IDrawable
        {
            #region Interface
            public void Draw()
            {
                this.DispatchDraw<SquareObject>();
            }
            #endregion
        }

        public class RedSquareObject : GameObject, IDrawable
        {
            #region Interface
            public void Draw()
            {
                this.DispatchDraw<RedSquareObject>();
            }
            #endregion
        }

    RedSquareObject would have its own render method, but SuperSquareObject would render as a normal SquareObject. I'm just asking because I do not want to reinvent the wheel, and there may be a similar (and better) design pattern that I may not be aware of. Thanks in advance!
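    For reference, this is essentially a per-type Strategy registry reached by double dispatch: DrawDispatcher<T> maps a compile-time type to a drawing delegate, and Draw() forwards into it. Render systems in component-based engines commonly do the same thing with an instance-level registry (type to renderer object) instead of generic statics, which keeps multiple renderers and unit testing possible, but the separation itself is sound and well precedented.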


  • How would I balance a multiplayer competitive game

    - by Simon
    I'm looking at my first foray into developing a game, and would love to know whether you have any thoughts on balancing in small, fixed-role multiplayer games. The game I have in mind involves a neutral player who has to achieve a goal, with two supporting "deity" players, one 'good' and one 'evil': one deity tries to help the neutral player achieve the goal, while the other tries to thwart them. Any thoughts or pointers on how I can ensure the two deities are balanced? If you want me to expand, I will; I just didn't want to give away too much of the gameplay before I finish it.


  • Z-order with Alpha blending in a 3D world

    - by user41765
    I'm working on a game in a 3D world with 2D sprites only, like the game Don't Starve (OpenGL ES 2 with C++). Currently I order elements back to front before drawing them, without batching, so one element equals one draw call. I would like to implement batching in my framework to decrease draw calls. Here is what I have so far:

    - Order all elements of my scene back to front.
    - Send the ordered list of elements to the renderer.
    - The renderer asks its batch manager whether a batch exists for the given element and its material. If not, create a new one; if a batch exists for this material, add the sprite to that batch.
    - Compute one big mesh with all sprites for each batch (1 material type = 1 batch).
    - When all batches are ready, the batch manager computes draw commands for the renderer.
    - The renderer processes the draw commands (bind shader, bind textures, bind buffers, draw elements).

    But I've got a problem: objects in one batch can be behind objects inside another batch, so batching per material breaks the back-to-front order. How can I handle this? Thanks!
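    One workable answer, sketched below: keep the global back-to-front sort, but only merge adjacent sprites that share a material. A batch break happens whenever the material changes along the sorted list, so batches never reorder sprites across each other, while every run of same-material sprites still collapses into one draw call. Sprite and Batch here are hypothetical stand-ins for your own types:

        // Sketch: merge only *adjacent* sprites that share a material, so the
        // back-to-front order is preserved across batches.
        List<Batch> BuildBatches(List<Sprite> sortedBackToFront)
        {
            var batches = new List<Batch>();
            Batch current = null;
            foreach (var sprite in sortedBackToFront)
            {
                // Start a new batch whenever the material changes along the
                // sorted list; merged sprites therefore never jump the queue.
                if (current == null || current.Material != sprite.Material)
                {
                    current = new Batch(sprite.Material);
                    batches.Add(current);
                }
                current.Add(sprite); // append this quad to the batch's big mesh
            }
            return batches; // drawing batches in list order stays back-to-front
        }

    The draw-call count then depends on how often materials interleave in depth order, which is also why packing sprites into shared texture atlases helps so much.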


  • 2d shapes in XNA 4.0?

    - by Lautaro
    I have some experience of XNA but none of 3D programming. I have an idea I want to realize, but I have not decided whether to do it in 3D or 2D, and I'm not sure which one will be easiest in XNA. I want to have a shape, like a blob, that can reshape depending on input. The morphing does not need to be very advanced: it could be a circle (2D) or globe (3D) that just has one point moving slightly in a random direction. In ASP.NET I have done this through the 2D drawing classes, where I can make lines, circles, squares etc. and then modify the points that make them up. But it seems to me that XNA does not have classes for making 2D shapes (can I get this confirmed?). If it did, that would be the quickest solution for me.
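    For what it's worth: XNA 4.0 indeed has no Graphics-style 2D shape classes. The two standard workarounds are stretching a 1x1 white Texture2D with SpriteBatch for lines and rectangles, or drawing your own vertex lists with BasicEffect, which fits a deformable blob better. A sketch of the latter under stated assumptions (device is the GraphicsDevice, effect is a BasicEffect with VertexColorEnabled = true and an orthographic 2D projection, and wobble holds a per-point radius offset you animate):

        // Sketch: a circle outline whose vertices you can perturb each frame.
        void DrawBlob(GraphicsDevice device, BasicEffect effect, Vector2 center,
                      float radius, float[] wobble)
        {
            int n = wobble.Length;                 // one wobble value per point
            var verts = new VertexPositionColor[n + 1];
            for (int i = 0; i <= n; i++)
            {
                float a = MathHelper.TwoPi * i / n;
                float r = radius + wobble[i % n];  // perturb points to "morph"
                verts[i] = new VertexPositionColor(
                    new Vector3(center.X + r * (float)Math.Cos(a),
                                center.Y + r * (float)Math.Sin(a), 0f),
                    Color.White);
            }
            effect.CurrentTechnique.Passes[0].Apply();
            device.DrawUserPrimitives(PrimitiveType.LineStrip, verts, 0, n);
        }

    Note that XNA 4.0 dropped PrimitiveType.TriangleFan, so a filled blob needs a TriangleList instead; the outline version above sidesteps that.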


  • How to control a spaceship near a planet in Unity3D?

    - by tyjkenn
    Right now I have a spaceship orbiting a small planet. I'm trying to make an effective control system for that spaceship, but it always ends up spinning out of control: after spinning the ship to change direction, the thrusters thrust the wrong way. Normal airplane controls don't work, since the ship is able to leave the atmosphere and go to other planets, going "upside-down" along the journey. Could someone please enlighten me on how to get the thrusters to work the way they are supposed to?
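    A common fix is to stop thinking in world axes and apply thrust and torque in the ship's local frame, so "forward" always means "wherever the nose points", even upside-down or far from any planet. A minimal Unity sketch, assuming the ship has a Rigidbody and the default input axes:

        using UnityEngine;

        public class ShipController : MonoBehaviour
        {
            public float thrust = 10f;
            public float turnTorque = 2f;
            Rigidbody body;

            void Start() { body = GetComponent<Rigidbody>(); }

            void FixedUpdate()
            {
                // Local-space force: +Z is the ship's own nose, never world "up".
                body.AddRelativeForce(Vector3.forward * Input.GetAxis("Vertical") * thrust);
                // Local-space torque for yaw; add pitch and roll the same way.
                body.AddRelativeTorque(Vector3.up * Input.GetAxis("Horizontal") * turnTorque);
            }
        }

    Raising the Rigidbody's angularDrag also helps stop the craft from continuing to spin after a turn.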


  • Algorithm for optimal control on space ship using accelerometer input data

    - by mm24
    Does someone have a good algorithm for controlling a spaceship in a vertical shooter game using acceleration data? I have written a simple algorithm, but it works very badly. I save an initial acceleration value (used to calibrate the movement according to the user's initial position) and subtract it from the current acceleration to get a "calibrated" value. The problem is that basing the movement solely on relative acceleration causes a loss of sensitivity: certain movements end up independent of the initial position. Would anyone be able to share a better solution? I am wondering if I should also use/integrate input from the gyroscope hardware. Here is my sample code for a Cocos2d iOS game:

        - (void) accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
        {
            if (calibrationLayer.visible) {
                [self evaluateCalibration:acceleration];
                initialAccelleration = acceleration;
                return;
            }
            if ([self evaluatePause]) {
                return;
            }
            ShooterScene *shooterScene = (ShooterScene *)[self parent];
            ShipEntity *playerSprite = [shooterScene playerShip];
            float accellerationtSensitivity = 0.5f;
            UIAccelerationValue xAccelleration = acceleration.x - initialAccelleration.x;
            UIAccelerationValue yAccelleration = acceleration.y - initialAccelleration.y;
            if (xAccelleration > 0.05 || xAccelleration < -0.05) {
                [playerSprite setPosition:CGPointMake(playerSprite.position.x + xAccelleration * 80,
                                                      playerSprite.position.y + yAccelleration * 80)];
            } else if (yAccelleration > 0.05 || yAccelleration < -0.05) {
                [playerSprite setPosition:CGPointMake(playerSprite.position.x + xAccelleration * 80,
                                                      playerSprite.position.y + yAccelleration * 80)];
            }
        }
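    A standard refinement here (and the approach Apple's accelerometer sample code takes) is to low-pass filter the raw reading continuously to estimate gravity, rather than calibrating once: smoothed = alpha * raw + (1 - alpha) * smoothed, with alpha around 0.1, and then treat tilt = raw - smoothed as the control input. That keeps sensitivity consistent no matter how the player's pose drifts during play, which a one-shot calibration sample cannot do. As for the gyroscope question: on iOS 4 and later, Core Motion's CMDeviceMotion already separates gravity from userAcceleration and fuses the gyroscope into that estimate, so switching from UIAccelerometer to CMMotionManager gets you both improvements at once.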


  • Staggered Isometric Map: Calculate map coordinates for point on screen

    - by Chris
    I know there are already a lot of resources on this, but I haven't found one that matches my coordinate system, and I'm having massive trouble adjusting any of those solutions to my needs. What I've learned is that the best way to do this is with a transformation matrix. Implementing that is no problem, but I don't know in which way I have to transform the coordinate space. Here's an image that shows my coordinate system: How do I transform a point on screen to this coordinate system?
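    A general recipe that works for any axis layout, sketched under stated assumptions: read off your diagram the screen offset of one step along map X and one step along map Y, put those two vectors into a 2x2 matrix, and invert it. The values below are placeholders for a diamond layout with tile size (tileW, tileH) and the screen position of cell (0, 0) at (originX, originY):

        Vector2 ax = new Vector2(tileW / 2f, tileH / 2f);   // screen step for map +X
        Vector2 ay = new Vector2(-tileW / 2f, tileH / 2f);  // screen step for map +Y

        Point ScreenToMap(float sx, float sy)
        {
            sx -= originX;                                  // move map origin to (0,0)
            sy -= originY;
            // Invert the 2x2 matrix [ax ay] that maps map coords to screen coords.
            float det = ax.X * ay.Y - ay.X * ax.Y;
            float mapX = ( ay.Y * sx - ay.X * sy) / det;
            float mapY = (-ax.Y * sx + ax.X * sy) / det;
            return new Point((int)Math.Floor(mapX), (int)Math.Floor(mapY));
        }

    One caveat for staggered maps specifically: they shift every other row by half a tile, so after the inverse you typically convert the diamond result to staggered indices, or apply the half-tile correction per row parity.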


  • Translate extrinsic rotations to intrinsic rotations ( Euler angles )

    - by MineMan287
    The problem I have is very frustrating: I am using the Jitter Physics library, which gives quaternion rotations. You can extract the extrinsic rotations, but I need intrinsic rotations to rotate in OpenTK (there are other reasons as well, so I don't want to make OpenTK use a matrix):

        GL.Rotate(xr, 1, 0, 0)
        GL.Rotate(yr, 0, 1, 0)
        GL.Rotate(zr, 0, 0, 1)

    EDIT: In response to the first answer, like this?

        GL.Rotate(zr, 0, 0, 1)
        GL.Rotate(yr, 0, 1, 0)
        GL.Rotate(xr, 1, 0, 0)

    Or one of these?

        GL.Rotate(xr, 1, 0, 0)
        GL.Rotate(yr, 0, 1, 0)
        GL.Rotate(zr, 0, 0, 1)

        GL.Rotate(zr, 0, 0, 1)
        GL.Rotate(yr, 0, 1, 0)
        GL.Rotate(xr, 1, 0, 0)

        GL.Rotate(xr, 1, 0, 0)
        GL.Rotate(yr, 0, 1, 0)
        GL.Rotate(zr, 0, 0, 1)

    I'm confused; please give an example.
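    For standard Euler conventions the conversion is just a reversal of order: rotating about the fixed (extrinsic) axes X, then Y, then Z produces exactly the same total rotation as rotating about the body (intrinsic) axes Z, then Y, then X with the same angles; both correspond to R = Rz(zr) * Ry(yr) * Rx(xr). Since each successive GL.Rotate acts in the object's current local frame, and assuming Jitter's extracted angles really are extrinsic X-Y-Z, issuing GL.Rotate(zr, 0, 0, 1), then GL.Rotate(yr, 0, 1, 0), then GL.Rotate(xr, 1, 0, 0) - the single reversed triple in the edit above, not a longer chain - should reproduce the rotation.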


  • Any open source editor to make video games online without programming knowledge?

    - by chelder
    With Scratch we can create video games online, from its web platform, and publish them on the same site. I could download its source code and use it, as many others already have (see the Scratch modifications). Unfortunately, you need programming knowledge to use it; actually, Scratch is mainly for teaching kids to code. I also found editors like Construct 2, GameSalad Creator and many others (just search Google for: create a video game without programming). With those editors we can create video games without coding, but unfortunately they are neither open source nor web platforms: they need to be installed on Windows or Mac. Do you know of an editor like Construct 2 or GameSalad Creator that is open source and executable from a web server? Maybe some HTML5 game engine can do it?


  • Isometric Screen View to World View

    - by Sleepy Rhino
    I am having trouble working out the math to transform screen coordinates to grid coordinates. The code below is how far I have got, but it is totally wrong; any help or resources to fix this issue would be great. I've had a complete mental block with this for some reason:

        private Point ScreenToIso(int mouseX, int mouseY)
        {
            int offsetX = WorldBuilder.STARTX;
            int offsetY = WorldBuilder.STARTY;
            Vector2 startV = new Vector2(offsetX, offsetY);
            int mapX = offsetX - mouseX;
            int mapY = offsetY - mouseY + (WorldBuilder.tileHeight / 2);
            mapY = -1 * (mapY / WorldBuilder.tileHeight);
            mapX = (mapX / WorldBuilder.tileHeight) + mapY;
            return new Point(mapX, mapY);
        }
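    In case it helps, the classic diamond-layout conversion can be written directly. This sketch assumes tileWidth and tileHeight are the full diamond extents and (originX, originY) is the screen position of map cell (0, 0):

        private Point ScreenToIso(int mouseX, int mouseY)
        {
            float dx = mouseX - originX;
            float dy = mouseY - originY;
            // Inverse of: screenX = (mapX - mapY) * tileWidth / 2,
            //             screenY = (mapX + mapY) * tileHeight / 2
            float mapX = (dx / (tileWidth / 2f) + dy / (tileHeight / 2f)) / 2f;
            float mapY = (dy / (tileHeight / 2f) - dx / (tileWidth / 2f)) / 2f;
            return new Point((int)Math.Floor(mapX), (int)Math.Floor(mapY));
        }

    If your layout differs, rederive it the same way: write down your map-to-screen formula first, then invert it.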


  • How can I Implement KeyListeners/ActionListeners into the JFrame?

    - by A.K.
    I'll get to the point: I have a player in my game that you control with the keyboard, yet the key methods in the player class and the ActionListener with KeyAdapter in the Board class don't seem to fire. So far I've tried adding those key methods to the JFrame, which doesn't let me move him even though other objects I have (enemies) move fine. Here's part of the JFrame class with the event listeners:

        frm.addKeyListener(KeyBoardListener);

        public void mouseClicked(MouseEvent e) {
            nSound.play();
            StartB.setContentAreaFilled(false);
            cards.remove(StartB);
            frm.remove(TitleL);
            frm.remove(cards);
            frm.setLayout(new GridLayout(1, 1));
            frm.add(nBoard); // Add game "tiles" or content. x = 1200
            nBoard.setPreferredSize(new Dimension(1200, 420));
            cards.revalidate();
            frm.validate();
        }

        public KeyListener KeyBoardListener = new KeyListener() {
            @Override
            public void keyPressed(KeyEvent args0) {
                int key = args0.getKeyCode();
                if (key == KeyEvent.VK_LEFT)  { nBoard.S.vx = -4; }
                if (key == KeyEvent.VK_RIGHT) { nBoard.S.vx = 4; }
                if (key == KeyEvent.VK_UP)    { nBoard.S.vy = -4; }
                if (key == KeyEvent.VK_DOWN)  { nBoard.S.vy = 4; }
                if (key == KeyEvent.VK_SPACE) { nBoard.S.fire(); }
            }

            @Override
            public void keyReleased(KeyEvent args0) {
                int key = args0.getKeyCode();
                if (key == KeyEvent.VK_LEFT)  { nBoard.S.vx = 0; }
                if (key == KeyEvent.VK_RIGHT) { nBoard.S.vx = 0; }
                if (key == KeyEvent.VK_UP)    { nBoard.S.vy = 0; }
                if (key == KeyEvent.VK_DOWN)  { nBoard.S.vy = 0; }
            }

            @Override
            public void keyTyped(KeyEvent args0) {
                // TODO Auto-generated method stub
            }
        };
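    One thing to check first: a KeyListener only receives events while its component has the keyboard focus. After removing and adding components, as mouseClicked above does, the frame can easily lose focus, so calling frm.setFocusable(true) and frm.requestFocusInWindow() after the swap often fixes exactly this symptom. The focus-robust alternative usually recommended in Swing is key bindings: register keystrokes on the board panel via getInputMap(JComponent.WHEN_IN_FOCUSED_WINDOW) and getActionMap(), and they fire no matter which child component currently holds focus.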


  • How to Make Objects Fall Faster in a Physics Simulation

    - by David Dimalanta
    I used collision physics (i.e. Box2D, via the Physics Body Editor) and implemented it in my Java code. I'm trying to vary the fall speed, following these examples: a light object (e.g. a feather) falls slower; a heavier object (e.g. a pebble, rock, or car) falls faster. I decided to double the falling speed for more excitement. I tried adding mass, but the falling speed stays constant instead of increasing. Check my code; this is in the input processor's touchUp() method, in the same class that implements InputProcessor and Screen:

        @Override
        public boolean touchUp(int screenX, int screenY, int pointer, int button) {
            if (is_Next_Fruit_Touched) {
                BodyEditorLoader Fruit_Loader = new BodyEditorLoader(
                        Gdx.files.internal("Shape_Physics/Fruity Physics.json"));
                Fruit_BD.type = BodyType.DynamicBody;
                Fruit_BD.position.set(x, y);
                FixtureDef Fruit_FD = new FixtureDef(); // lets you define the object's physics
                Fruit_FD.density = 1.0f;
                Fruit_FD.friction = 0.7f;
                Fruit_FD.restitution = 0.2f;
                MassData mass = new MassData();
                mass.mass = 5f;
                Fruit_Body[n] = world.createBody(Fruit_BD);
                Fruit_Body[n].setActive(true); // let your dragon fall
                Fruit_Body[n].setMassData(mass);
                Fruit_Body[n].setGravityScale(1.0f);
                System.out.println("Eggs... " + n);
                Fruit_Loader.attachFixture(Fruit_Body[n], Body, Fruit_FD, Fruit_IMG.getWidth());
                Fruit_Origin = Fruit_Loader.getOrigin(Body, Fruit_IMG.getWidth()).cpy();
                is_Next_Fruit_Touched = false;
                up = y;
                Gdx.app.log("Initial Y-coordinate", "Y at " + up);
                // Once it's touched, the next fruit will be set to drag.
                if (n < 50) {
                    n++;
                } else {
                    System.exit(0);
                }
            }
            return true;
        }

    And note, in the show() method the camera's view size is 720x1280:

        camera_1 = new OrthographicCamera();
        camera_1.viewportHeight = 1280;
        camera_1.viewportWidth = 720;
        camera_1.position.set(camera_1.viewportWidth * 0.5f, camera_1.viewportHeight * 0.5f, 0f);
        camera_1.update();

    I know it seems like a good idea to add weight so the object falls faster once I release my finger in touchUp() after picking the object from the upper right of the screen, but the speed remains constant or slow. How can I solve this? Can you help?
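    For what it's worth, this is working as designed: in Box2D, as in drag-free physics, mass does not change how fast a body falls. Gravity accelerates every dynamic body at the same rate; mass only changes how the body responds to forces and collisions, which is why adding MassData leaves the fall speed untouched. The knobs that do change fall speed are the per-body gravity scale (the code above already calls setGravityScale(1.0f); passing 2.0f instead would double the effective gravity for that body), a stronger world gravity vector, or an extra downward force applied each step with applyForceToCenter.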


  • How can I downsample a texture using FBOs?

    - by snape
    I am rendering a scene to an FBO render target whose size is 8 times the size of the original screen in OpenGL. Now I want to downsample the texture generated by the FBO to the size of the screen, so as to achieve spatial anti-aliasing. How do I achieve the downsampling? Please provide implementation details. Note: if there is a better way of doing anti-aliasing with FBOs, please mention that too. I am trying to remove the aliasing in the image attached below.
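    Assuming the large texture is attached to a framebuffer, the cheapest route is glBlitFramebuffer with linear filtering. A sketch in OpenTK-style C# (bigFbo, bigWidth, etc. are assumptions; the raw C calls take the same arguments):

        GL.BindFramebuffer(FramebufferTarget.ReadFramebuffer, bigFbo);
        GL.BindFramebuffer(FramebufferTarget.DrawFramebuffer, 0); // default framebuffer
        GL.BlitFramebuffer(0, 0, bigWidth, bigHeight, 0, 0, screenWidth, screenHeight,
                           ClearBufferMask.ColorBufferBit, BlitFramebufferFilter.Linear);

    Two caveats: a bilinear blit only averages a 2x2 neighborhood, so a straight 8x-to-1x blit discards most of your samples; either blit down in halving steps, or call glGenerateMipmap on the big texture and draw a fullscreen quad with trilinear filtering. And if the supersampling exists only for anti-aliasing, a multisampled renderbuffer (glRenderbufferStorageMultisample) resolved with the same blit is usually far cheaper than 8x supersampling.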


  • How to proceed on the waypoint path?

    - by Alpha Carinae
    I'm using Dijkstra's algorithm to find the shortest path, and I'm drawing this path on the screen. As the character object moves, the path updates itself (it shortens as the object approaches the target and lengthens as the object moves away from it). I tried to visualize my problem. In the beginning state, node 'A' is the target, the path is the blue one and the object is the green one. I draw this path from the object to the closest node. Here my problem occurs: because node 'D' is closer to the object than node 'C', the drawn path doubles back through 'D' even though the object is already heading on to 'C'. So, how can I decide that the object has passed node 'D'? The path should run straight to 'C' instead. One idea that comes to mind is to use distance variables between the two closest nodes on the route path (in this example, nodes 'C' and 'D'): if the object approaches 'C' while moving away from 'D' at the same time, the character has passed 'D'. However, I think there are more standardized and easier ways to solve this. What approach should I take?
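    A standard, cheap test is a half-plane check: treat waypoint 'D' as passed as soon as the object crosses the plane through 'D' that is perpendicular to the segment leading to the next node 'C'. A sketch, using XNA-style Vector2 for concreteness:

        // True once the object is on the far side of D, heading toward C.
        bool HasPassed(Vector2 pos, Vector2 d, Vector2 c)
        {
            Vector2 segment = c - d;      // direction of travel along the path
            Vector2 fromD   = pos - d;    // where the object sits relative to D
            // Positive dot product: the object has moved beyond D toward C.
            return Vector2.Dot(fromD, segment) > 0f;
        }

    Combining this with a small arrival radius around each node makes it robust when the object cuts corners.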


  • Why don't Normal maps in tangent space have a single blue color?

    - by seahorse
    Normal maps are predominantly blue in color because the Z component of the normal maps to blue, and since normals point out of the surface in the Z direction, blue is the predominant component. If the above is true, then why aren't normal maps a single flat blue, with no other shades (not even shades of blue)? Since by definition the tangent plane is perpendicular to the normal at any point, the stored normal should always point in the Z (blue) direction with no X (red) or Y (green) component. Thus the normal map (since it is a "normal map") should just hold the color of that normal, which is pure blue (Z = blue component = 1, R = 0, G = 0), with no shades in between. But normal maps are not like that; they have gradients of shades in them. Why is this so?
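    The missing piece is that a tangent-space normal map does not store the smooth surface normal; it stores the bumped detail normal relative to that surface. With the usual encoding color = normal * 0.5 + 0.5, an undisturbed normal (0, 0, 1) becomes (0.5, 0.5, 1.0), i.e. the familiar flat RGB(128, 128, 255) light blue. Wherever the baked detail tilts the normal away from the surface, the X and Y components become nonzero and the pixel shifts toward red or green, which is exactly the gradient of shades you see.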


  • Blender: How to "meshify" an object I made from Bezier curves

    - by capcom
    I made a star shape using Bezier curves and extruded it (see pic below): What I want to do is give it a rounder look, not just around the edges by using beveling. I want it to kind of look like this (well, that shape anyway): How would I go about doing this? Please keep in mind that I am extremely new to Blender. I thought that I could somehow turn this star into one of those default shapes that have tons of squares which I could pull out, and apply a mirror to it so that the same thing happens on both sides. I really don't know how to do it and would appreciate your help.
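    As concrete steps (stated for the Blender 2.6x-era UI): select the extruded curve object and convert it to a mesh with Alt+C (or Object > Convert To > Mesh); that gives you exactly the editable quads you describe. Then add a Subdivision Surface modifier and use Shade Smooth for the rounded, inflated look, and a Mirror modifier for the both-sides symmetry. A loop cut or two near the rim controls how far the rounding spreads.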


  • How to give a ball a following texture trailing effect

    - by Evan Kohilas
    How do I draw copies of the leading texture so that a line of balls trails behind the leading one (without colliding)? So far I have tried to create the effect by placing another graphic 2 pixels off the first, but I don't see the second ball being drawn:

        spriteBatch.Draw(ballTexture, ballPos, null, Color.White, 0.0f,
            new Vector2(ballPos.X + 2, ballPos.Y + 2), ballSize, SpriteEffects.None, 0);

    Thanks.
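    Part of the problem is that in this SpriteBatch.Draw overload, the sixth argument is the rotation origin, measured in texture pixels, not a world-space offset, so the second ball lands essentially on top of the first with a shifted anchor. A trail also needs a history of positions, not a fixed offset. A minimal sketch:

        Queue<Vector2> trail = new Queue<Vector2>();
        const int TrailLength = 10;

        // In Update(): remember where the ball has been.
        trail.Enqueue(ballPos);
        if (trail.Count > TrailLength) trail.Dequeue();

        // In Draw(): older copies are drawn more transparent, behind the leader.
        int i = 0;
        foreach (Vector2 p in trail)
        {
            float alpha = (float)i++ / TrailLength;
            spriteBatch.Draw(ballTexture, p, null, Color.White * alpha,
                             0f, Vector2.Zero, ballSize, SpriteEffects.None, 0f);
        }
        spriteBatch.Draw(ballTexture, ballPos, null, Color.White,
                         0f, Vector2.Zero, ballSize, SpriteEffects.None, 0f);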


  • blender: 3D model from guide images

    - by Stefan
    In an effort to learn the Blender interface, which is confusing to say the least, I've chosen to model from reference pictures easily found on the web. The problem is that I can't (and won't) get perfect "right", "front" and "top" pictures. Blender only lets you see background pictures in orthographic mode, and only from the right|front|top views, which doesn't help me. How do I proceed to model from non-perfect guide images?
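    One workaround worth knowing: the bundled "Import Images as Planes" add-on places each reference picture on its own textured plane, which stays visible from any viewpoint and in perspective mode, unlike background images. You can then position the planes at whatever angle the photos were taken from and treat them as loose guides rather than exact blueprints.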


  • Embed Unity3D and load multiple games from a single app

    - by Rafael Steil
    Is it possible to export an entire Unity3D project/game as an AssetBundle and load it on iOS/Android/Windows in an app that doesn't know anything about that game beforehand? What I have in mind is something like what the web plugin does: it loads a series of .unity3d files over HTTP and renders them inline in the browser window. Is it possible to do something similar for iOS/Android? I have read a lot of docs so far, but still can't be sure: http://floored.com/blog/2013/integrating-unity3d-within-ios-native-application.html http://docs.unity3d.com/Manual/LoadingResourcesatRuntime.html http://docs.unity3d.com/Manual/AssetBundlesIntro.html The code from the post at http://forum.unity3d.com/threads/112703-Override-Unity-Data-folder-path?p=749108&viewfull=1#post749108 works for Android, but what about iOS and other platforms?
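    One hard constraint worth noting: an AssetBundle carries assets, not gameplay code, and on iOS downloading and executing new managed code at runtime is off the table anyway (Unity AOT-compiles scripts, and Apple's guidelines forbid downloaded executable code). So the hosting app must already contain every script the loaded "games" will need; the bundles can then supply the scenes, prefabs and data that bind to those scripts.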


  • Height Map Mapping to "Chunked" Quadrilateralized Spherical Cube

    - by user3684950
    I have been working for a few months on a procedural spherical terrain generator with a quadtree LOD system. The system splits the six faces of a quadrilateralized spherical cube into smaller "quads" or "patches" as the player approaches those faces. What I can't figure out is how to generate height maps for these patches. To generate the heights I am using a 3D ridged multifractal algorithm. For now I can only displace the vertices of the patches directly, using the output from the ridged multifractal. I don't understand how to generate height maps such that the vertices of a terrain patch can be mapped to pixels in the height map.

    The only thing I can think of is taking each vertex in a patch, plugging it into the RMF, translating that position into (u, v) coordinates, determining the pixel position directly from the (u, v) coordinates, and setting the grayscale color based on the height. I feel as if this is the right approach, but there are a few other things that may further complicate my problem. First, I intend to use height maps with a pixel resolution of 192x192, while the vertex "resolution" of each terrain patch is only 16x16 - meaning that for most of the pixels I don't have any vertices to sample for the RMF. The main reason the height-map resolution is higher is so that I can use it to generate a normal map (otherwise the height maps serve little purpose, as I can just displace vertices directly, as I currently do).

    I am pretty much following this paper very closely. This is, essentially, the part I am having trouble with:

        Using the cube-to-sphere mapping and the ridged multifractal algorithm previously described, a normalized height value ([0, 1]) is calculated. Using this height value, the terrain position is calculated and stored in the first three channels of the position map (RGB) - this will be used to calculate the normal map. The fourth channel (A) is used to store the height value itself, to be used in the height map.

    The steps in the first sentence are my primary problem. I don't understand how the pixel positions correspond to positions on the sphere, and which positions are sampled for the RMF to generate the pixels, if vertices cannot be used.
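    If I read the paper's step correctly, the trick is that height-map pixels sample the noise field directly; the 16x16 vertex grid is irrelevant to the 192x192 map. Each pixel gets its own (u, v) inside the patch's rectangle on the cube face, which maps to a point on the sphere, which is the RMF's input. A sketch, where FaceToCube, RidgedMultifractal and the patch bounds are illustrative names:

        for (int j = 0; j < 192; j++)
        for (int i = 0; i < 192; i++)
        {
            // (u, v) sweeps this patch's rectangle on its cube face.
            float u = patchU0 + (patchU1 - patchU0) * i / 191f;
            float v = patchV0 + (patchV1 - patchV0) * j / 191f;
            Vector3 cubePos = FaceToCube(face, u, v);        // point on unit cube face
            Vector3 spherePos = Vector3.Normalize(cubePos);  // cube-to-sphere mapping
            float h = RidgedMultifractal(spherePos);         // normalized height [0, 1]
            heightMap[i, j] = h;
        }

    The vertices then read their heights back from the same map (or recompute the RMF at their own positions, which agrees exactly), and the normal map comes from finite differences of neighboring pixels.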


  • What do you use to bundle / encrypt data?

    - by David McGraw
    More and more games are going the data-driven route, which means there needs to be a layer of security against easy manipulation. I've seen games completely bundle up their assets (audio, art, data), and I'm wondering how they manage that. Are there applications/libraries that will bundle the assets and assist you with managing them? If not, are there any good resources you would point to for packing/unpacking/encryption? This specific question revolves around C++, but I would be open to hearing how this is managed in C#/XNA as well. Just to be clear: I'm not out to engineer a solution that prevents all hacking; at the fundamental level we're all manipulating 0s and 1s. But we do want to keep the 99% of players from simply modifying the XML files that are used to build the game world. I've seen plenty of games bundle all of their resources together; I'm simply curious about the methods they're using.
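    A common C++ stack for this, for reference: pack everything into an ordinary archive format with a custom extension (ZIP is typical), mount it through PhysicsFS (physfs), which presents archives as a read-only virtual filesystem, compress with zlib, and add a lightweight XOR or AES pass over the stream if you want an extra speed bump against casual editors. On the C#/XNA side, the content pipeline already compiles assets into binary .xnb files, which by itself stops the XML-tweaking 99% without extra work.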


  • How can I simulate objects floating on water without a physics engine?

    - by user1075940
    In my game the water movement is done in a shader using the Gerstner equations. The water movement looks realistic enough for a school project, but I encountered a serious problem when I wanted to do sailing on waves (similar to this). I managed to do collision with land by calculating the vertices and normals of the quad beneath the ship; however, the same method cannot be applied to water, because XZ is displaced and Y is calculated in the shader :( How should I approach this problem? Is it possible to retrieve the transformed grid from the shader? Unfortunately, no external physics libraries can be used.
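    The usual answer when the waves are analytic, as Gerstner waves are, is to evaluate the same wave sum on the CPU at each buoyancy probe point under the hull; nothing needs to be read back from the GPU. A sketch of the vertical term, mirroring the shader's parameters (the waves collection and its fields are stand-ins for whatever constants your shader uses):

        // CPU-side copy of the shader's wave sum, evaluated at probe point p.
        float WaveHeight(Vector2 p, float t)
        {
            float h = 0f;
            foreach (var w in waves)
            {
                float theta = w.Frequency * Vector2.Dot(w.Direction, p) + w.Phase * t;
                h += w.Amplitude * (float)Math.Sin(theta);
            }
            return h;
        }
        // Gerstner also shifts points horizontally; for low steepness, treating
        // height as a function of the undisplaced (x, z) is usually close enough.
        // Otherwise iterate: p' = p - horizontalDisplacement(p') a few times.

    Applying an upward force at each probe proportional to how deep it sits below WaveHeight makes the ship ride the same surface the shader renders.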


  • Map and fill texture using PBO (OpenGL 3.3)

    - by NtscCobalt
    I'm learning OpenGL 3.3 and trying to do the following (as it is done in D3D):

        1. Create a texture of width, height, pixel format
        2. Map the texture memory
        3. Loop, writing pixels
        4. Unmap the texture memory
        5. Set the texture
        6. Render

    Right now, though, it renders as if the entire texture is black. I can't find a reliable source of information on how to do this; almost every tutorial I've found just uses glTexSubImage2D and passes a pointer to memory. Here is basically what my code does (in this case it generates a 1-byte alpha-only texture, but it is rendered through the red channel for debugging):

        GLuint pixelBufferID;
        glGenBuffers(1, &pixelBufferID);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, 512 * 512 * 1, nullptr, GL_STREAM_DRAW);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

        GLuint textureID;
        glGenTextures(1, &textureID);
        glBindTexture(GL_TEXTURE_2D, textureID);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 512, 512, 0, GL_RED, GL_UNSIGNED_BYTE, nullptr);
        glBindTexture(GL_TEXTURE_2D, 0);

        glBindTexture(GL_TEXTURE_2D, textureID);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);
        void *Memory = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
        // Memory copied here; I know this is valid because it is the same loop
        // as in my working D3D version.
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    And then here is the render loop:

        // This chunk left in for completeness
        glUseProgram(glProgramId);
        glBindVertexArray(glVertexArrayId);
        glBindBuffer(GL_ARRAY_BUFFER, glVertexBufferId);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 20, 0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 20, 12);
        GLuint transformLocationID = glGetUniformLocation(3, 'transform');
        glUniformMatrix4fv(transformLocationID, 1, true, somematrix);
        // Not sure if this is all I need to do
        glBindTexture(GL_TEXTURE_2D, pTex->glTextureId);
        GLuint textureLocationID = glGetUniformLocation(glProgramId, "texture");
        glUniform1i(textureLocationID, 0);
        glDrawArrays(GL_TRIANGLES, Offset * 3, Triangles * 3);

    Vertex shader:

        #version 330 core
        in vec3 Position;
        in vec2 TexCoords;
        out vec2 TexOut;
        uniform mat4 transform;

        void main()
        {
            TexOut = TexCoords;
            gl_Position = vec4(Position, 1.0) * transform;
        }

    Pixel shader:

        #version 330 core
        uniform sampler2D texture;
        in vec2 TexCoords;
        out vec4 fragColor;

        void main()
        {
            // Output color
            fragColor.r = texture2D(texture, TexCoords).r;
            fragColor.g = 0.0f;
            fragColor.b = 0.0f;
            fragColor.a = 1.0;
        }
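    Two likely culprits here, for what it's worth. First, mapping and unmapping a GL_PIXEL_UNPACK_BUFFER only fills the buffer; unlike mapping a D3D texture, it never touches the texture itself. With the PBO still bound, the unmap must be followed by glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_RED, GL_UNSIGNED_BYTE, 0), where the final "pointer" argument is a byte offset into the bound buffer; without that call the texture keeps its original (black) contents. Second, the render loop sets attribute 0 twice; the texture-coordinate call presumably wants index 1, enabled with its own glEnableVertexAttribArray(1), or TexCoords arrives as garbage. Also, in GLSL 330 core, texture2D() is deprecated in favor of texture(), and naming the sampler "texture" shadows that built-in, so renaming the sampler is safer.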


  • Overriding component behavior

    - by deft_code
    I was thinking of how to implement overriding of behaviors in a component-based entity system. A concrete example: an entity has a health component that can be damaged, healed, killed, etc. The entity also has an armor component that limits the amount of damage the character receives. Has anyone implemented behaviors like this in a component-based system before? How did you do it? If no one has done this before, why do you think that is? Is there anything particularly wrong-headed about overriding component behaviors? Below is a rough sketch of how I imagine it would work. Components in an entity are ordered; those at the front get a chance to service an interface first. I don't detail how that is done; just assume it uses evil dynamic_casts (it doesn't, but the end effect is the same without the need for RTTI).

        class IHealth
        {
        public:
            virtual float get_health( void ) const = 0;
            virtual void do_damage( float amount ) = 0;
        };

        class Health : public Component, public IHealth
        {
        public:
            float get_health( void ) const { return m_health; }
            void do_damage( float amount ) { m_health -= amount; }
        private:
            float m_health;
        };

        class Armor : public Component, public IHealth
        {
        public:
            float get_health( void ) const { return next<IHealth>().get_health(); }
            void do_damage( float amount ) { next<IHealth>().do_damage( amount / 2 ); }
        };

        entity.add( new Health( 100 ) );
        entity.add( new Armor() );
        assert( entity.get<IHealth>().get_health() == 100 );
        entity.get<IHealth>().do_damage( 10 );
        assert( entity.get<IHealth>().get_health() == 95 );

    Is there anything particularly naive about the way I'm proposing to do this?
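    For the record, the sketch has well-known names: Armor is a Decorator around Health's IHealth, and walking the ordered component list until one services the interface is a Chain of Responsibility. Knowing the names is mostly useful for searching; the approach itself is a recognized way to get override semantics out of components, with the usual caveat that it introduces implicit ordering dependencies between components.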


  • Android Live Testing

    - by Matthew Dockerty
    I am making a game for Android, and in it I am using sensors which are not available in the emulator. At the moment I am connecting my device, transferring the APK and installing it to test, but that is a pain to do, and I have gotten to the stage where I need to start logging values for debugging. I have gone into the run configs of my app and set it to prompt me to pick a device, but my device is never in the list when it is connected to my PC and I try to run it. How am I supposed to set it up to work properly? Thanks for the help.
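    The usual checklist for a device that never shows up: enable USB debugging in the device's developer options, install the manufacturer's USB driver if you are on Windows, then run adb devices from the SDK's platform-tools directory. Once the device appears in that list, the IDE's device chooser will show it too, and Logcat will stream your Log.d output live, which solves the value-logging problem without installing APKs by hand.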

