Search Results

Search found 16410 results on 657 pages for 'game component'.

Page 373/657 | < Previous Page | 369 370 371 372 373 374 375 376 377 378 379 380  | Next Page >

  • Getting Center and Radius of Irregular Object

    - by Moaz ELdeen
    I have drawn an asteroid object manually and would like to get its center/radius with a specific equation. I think I could get them from calculated, hard-coded values. The code to draw the asteroid: void Asteroid::Draw() { float ratio = app::getWindowWidth()/app::getWindowHeight(); gl::pushMatrices(); gl::translate(m_Pos*ratio); gl::scale(3.5*ratio,3.5*ratio,3.5*ratio); gl::color(ci::Color(1,1,1)); gl::drawLine(Vec2f(-15,0),Vec2f(-10,-5)); gl::drawLine(Vec2f(-10,-5),Vec2f(-5,-5)); gl::drawLine(Vec2f(-5,-5),Vec2f(-5,-8)); gl::drawLine(Vec2f(-5,-8),Vec2f(5,-8)); gl::drawLine(Vec2f(5,-8),Vec2f(5,-5)); gl::drawLine(Vec2f(5,-5),Vec2f(10,-5)); gl::drawLine(Vec2f(10,-5),Vec2f(15,0)); gl::drawLine(Vec2f(15,0),Vec2f(10,5)); gl::drawLine(Vec2f(10,5),Vec2f(-10,5)); gl::drawLine(Vec2f(-10,5),Vec2f(-10,5)); gl::drawLine(Vec2f(-15,0),Vec2f(-10,5)); gl::popMatrices(); } According to the answer, I have written this code to calculate the radius; is it correct or not? cinder::Vec2f Asteroid::getCenter() { return ci::Vec2f(m_Pos.x, m_Pos.y); } double Asteroid::getRadius() { ci::Vec2f _vec = (getCenter()- Vec2f(15,5)); return _vec.length()*0.3f; }
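
    One fairly robust alternative to hard-coding a reference point is to derive both values from the local-space vertices used in Draw(): average them to get a centre, take the largest distance from that centre as a radius, and then apply the same translation (m_Pos*ratio) and scale (3.5*ratio) that Draw() applies. A minimal sketch in plain C++ (assumes the outline's vertices have been collected into a non-empty list; not Cinder-specific):

        #include <algorithm>
        #include <cmath>
        #include <vector>

        struct Vec2 { float x, y; };

        // Average of the outline's local-space vertices: an approximate centre.
        Vec2 localCenter(const std::vector<Vec2>& pts) {
            Vec2 c{0.0f, 0.0f};
            for (const Vec2& p : pts) { c.x += p.x; c.y += p.y; }
            c.x /= pts.size();
            c.y /= pts.size();
            return c;
        }

        // Largest distance from that centre to any vertex: a bounding radius.
        float localRadius(const std::vector<Vec2>& pts, const Vec2& c) {
            float r = 0.0f;
            for (const Vec2& p : pts) {
                float dx = p.x - c.x, dy = p.y - c.y;
                r = std::max(r, std::sqrt(dx * dx + dy * dy));
            }
            return r;
        }

    The world-space centre would then be m_Pos*ratio plus the scaled local centre, and the world-space radius the local radius multiplied by 3.5*ratio, mirroring the translate/scale in Draw().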

    Read the article

  • What should I do if my text exceeds my text render target boundaries?

    - by user1423893
    I have a method for drawing strings in 3D that does the following: set a render target; draw each character as a quadrangle using an orthographic projection to the render target; unset the render target; draw the render target texture using a perspective projection and a world transform. My problem is how to deal with strings whose length exceeds the render target dimensions. For example, if I have the string "This is a reallllllllllly long string" and the render target can't accommodate it, it will only capture "This is a realllll". The render target (and its size) could be set each frame, but wouldn't that be far too costly?

    Read the article

  • Making video from 3D graphics in OpenGL

    - by MVTC
    What are some of the preferred methods or libraries for creating video from an OpenGL graphics simulation? For example, I want to create a visualization (video) of an N-body gravity simulation by rendering non-real-time OpenGL frames. The simulation is already coded; I just don't know how to convert it to video. EDIT: I am also interested in providing the following functionality: The user can adjust parameters, including the time step between captured frames, and then initiate the simulation. The user waits for the simulation to complete and then can watch the results. The user is able to increase or decrease the playback speed of the simulation, where in slow motion more frames are used, i.e. you see higher-resolution time steps, and when the speed is increased you see lower-resolution time steps at a higher rate, but the number of frames per second shown on the screen is constant.
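
    One common non-real-time approach is to read each finished frame back with glReadPixels and pipe the raw pixels into an external encoder such as ffmpeg; the encoder fixes the output frame rate, so the playback behaviour described above becomes a matter of which simulation steps you capture and the rate you encode or play them at. A minimal POSIX-style sketch (the ffmpeg command-line flags shown are typical, but treat them as an assumption and check them against your ffmpeg build):

        #include <cstdio>
        #include <vector>
        #include <GL/gl.h>

        // Launch ffmpeg reading raw RGBA frames from stdin (assumed flags; verify locally).
        FILE* openEncoder(int w, int h, int fps) {
            char cmd[512];
            std::snprintf(cmd, sizeof(cmd),
                "ffmpeg -y -f rawvideo -pixel_format rgba -video_size %dx%d "
                "-framerate %d -i - -vf vflip -pix_fmt yuv420p output.mp4",
                w, h, fps);
            return popen(cmd, "w");
        }

        // Call once per captured simulation step, after rendering the frame.
        void captureFrame(FILE* enc, int w, int h, std::vector<unsigned char>& buf) {
            buf.resize(static_cast<size_t>(w) * h * 4);
            glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buf.data());
            std::fwrite(buf.data(), 1, buf.size(), enc);
        }
        // When the simulation finishes: pclose(enc);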

    Read the article

  • Using texture() in combination with JBox2D

    - by Valentino Ru
    I'm having some trouble using the texture() method inside a beginShape()/endShape() block. In the display() method of my class TowerElement (a bar which is DYNAMIC), I draw the object like the following: void display(){ Vec2 pos = level.getLevel().getBodyPixelCoord(body); float a = body.getAngle(); // needed for rotation pushMatrix(); translate(pos.x, pos.y); rotate(-a); fill(temp); // temp is a color defined in the constructor stroke(0); beginShape(); vertex(-w/2,-h/2); vertex(w/2,-h/2); vertex(w/2,h-h/2); vertex(-w/2,h-h/2); endShape(CLOSE); popMatrix(); } Now, according to the API, I can use the texture() method inside the shape definition. But when I remove the fill(temp) and put texture(img) (img is a PImage defined in the constructor), the stroke gets drawn, but the bar isn't filled and I get the warning "texture() is not available with this renderer". What can I do in order to use textures anyway? I don't even understand the error message, since I do not know much about the different renderers.

    Read the article

  • Should iOS games use a Timer?

    - by ????
    No matter what frameworks we use -- Core Graphics, Cocos2D, OpenGL ES -- to write games, should a timer be used (for games that have animation even when the user doesn't give any input, such as after firing a missile and waiting to see if the UFO is hit)? I read that NSTimer might not fire until after the scheduled time (interval), and that CADisplayLink can also be delayed and fire later; it only tells you how late it is so you can move the object further, which can make the object look like it skipped a frame. Must we use a timer? And if so, what is the best one to use?

    Read the article

  • Drawing two orthogonal strings in 3d space in Android Canvas?

    - by hasanghaforian
    I want to draw two strings on a canvas. The first string must be rotated around the Y axis, for example 45 degrees. The second string must start at the end of the first string, and it must also be orthogonal to the first string. This is my code: String text = "In the"; float textWidth = redPaint.measureText(text); Matrix m0 = new Matrix(); Matrix m1 = new Matrix(); Matrix m2 = new Matrix(); mCamera = new Camera(); canvas.setMatrix(null); canvas.save(); mCamera.rotateY(45); mCamera.getMatrix(m0); m0.preTranslate(-100, -100); m0.postTranslate(100, 100); canvas.setMatrix(m0); canvas.drawText(text, 100, 100, redPaint); mCamera = new Camera(); mCamera.rotateY(90); mCamera.getMatrix(m1); m1.preTranslate(-textWidth - 100, -100); m1.postTranslate(textWidth + 100, 100); m2.setConcat(m1, m0); canvas.setMatrix(m2); canvas.drawText(text, 100 + textWidth, 100, greenPaint); But in the result, only the first string (the text drawn with redPaint) is visible. How can I draw two orthogonal strings in 3D space?

    Read the article

  • Rotate an image and get back to its original position - opengles glkit

    - by Manoj
    I need to rotate an image in OpenGL ES GLKit and get it back to its original position. rotation += 5; _modelViewMatrix = GLKMatrix4Rotate( _modelViewMatrix, GLKMathDegreesToRadians(5), 1, 0, 0); _modelViewMatrix = GLKMatrix4Rotate( _modelViewMatrix, GLKMathDegreesToRadians(rotation), 1,0,0); I need to move it along the x axis by a certain amount and then get it back to the original position it started from. How should I do it?

    Read the article

  • How to configure background image to be at the bottom OpenGL Android

    - by Maxim Shoustin
    I have a class that draws a white line: public class Line { //private FloatBuffer vertexBuffer; private FloatBuffer frameVertices; ByteBuffer diagIndices; float[] vertices = { -0.5f, -0.5f, 0.0f, -0.5f, 0.5f, 0.0f }; public Line(GL10 gl) { // a float has 4 bytes so we allocate for each coordinate 4 bytes ByteBuffer vertexByteBuffer = ByteBuffer.allocateDirect(vertices.length * 4); vertexByteBuffer.order(ByteOrder.nativeOrder()); // allocates the memory from the byte buffer frameVertices = vertexByteBuffer.asFloatBuffer(); // fill the vertexBuffer with the vertices frameVertices.put(vertices); // set the cursor position to the beginning of the buffer frameVertices.position(0); } /** The draw method for the triangle with the GL context */ public void draw(GL10 gl) { gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glVertexPointer(2, GL10.GL_FLOAT, 0, frameVertices); gl.glColor4f(1.0f, 1.0f, 1.0f, 1f); gl.glDrawArrays(GL10.GL_LINE_LOOP , 0, vertices.length / 3); gl.glLineWidth(5.0f); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); } } It works fine. The problem is: when I add a BG image, I don't see the line. glView = new GLSurfaceView(this); // Allocate a GLSurfaceView glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0); glView.setRenderer(new mainRenderer(this)); // Use a custom renderer glView.setBackgroundResource(R.drawable.bg_day); // <- BG glView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY); glView.getHolder().setFormat(PixelFormat.TRANSLUCENT); How do I get rid of that?

    Read the article

  • Unity mouse input not working in webplayer build

    - by Califer
    I have a button script with the following code void OnMouseDown() { animation.Play("button-squish"); enlarged = true; audio.PlayOneShot(buttonSound); } void OnMouseUpAsButton() { if (enlarged) { SelectThisButton(); enlarged = false; animation.Play("button-return"); } } void OnMouseExit() { if (enlarged) { enlarged = false; animation.Play("button-return"); } } It works great in the editor, but when I made a build and tested it in Chrome none of the buttons had any response. Further testing revealed that it did work in Firefox. Rather than telling people to change their browser if they want to play, I want to make the button code work. How else can I get the buttons to know when they're being pressed if the built-in stuff isn't working?

    Read the article

  • How would I use JBox2D in Java?

    - by BluFire
    So I did some research and found Box2D. I then proceeded to download it and the testbed. Now that I have it, I don't know how to use it properly. I'm looking for a clear, simple answer on how to use the engine. What I did was put it into a lib folder and reference the JBox2D jar file. After that I got stuck. How can I use this to program games for Android? I'm very confused, since Box2D was intended for C++.

    Read the article

  • Modular building technique with angles? (A roof)

    - by Mungoid
    I've been spending a bit of time lately studying the modular buildings of many games and reading/viewing several tutorials about it as well, but almost every example I see uses a plain square building that does not have any angled roof or similar. In all my applications (CS6, Blender/Max, UDK) I adhere to the same grid spacing and I get pretty good results, but trying to make modular angled pieces is confusing me, as I'm not sure of the best way to approach it. Below are some shots of my template sheet and the workflow I have been using. Should I do the roof separately, or is it possible for me to keep it in the same texture sheet? The main issue is below. I have made a couple of modular roof pieces, but when I try to use them I end up needing to model multiple other parts to fill gaps based on what roof shape I want. I then model those 'filler' pieces, and now I have that much less space left in my texture sheet, and those pieces are usually not that reusable for anything else. This is where I'm not sure how to proceed. If anyone has any links to documents or papers talking about this, or advice, I would greatly appreciate it! =-) My main roof pieces with the gaps My power of 2 texture sheet, with 16x16 grid squares. The texture sheet loaded into Blender on a 16x16 plane and starting to separate and extrude.

    Read the article

  • Isometric Collision Detection

    - by Sleepy Rhino
    I am having some issues trying to detect the collision of two isometric tiles. I tried plotting the lines between each point on the tile and then checking for line intersections; however, that didn't work (probably due to an incorrect formula). After looking into this for a while today, I believe I am thinking too much into it and there must be an easier way. I am not looking for code, just some advice on the best way to detect the overlap.
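
    If both tiles are the same size and are axis-aligned diamonds (the usual 2:1 isometric tile shape), there is a closed-form overlap test: the Minkowski sum of two such diamonds is simply a diamond with doubled extents, so two tiles overlap exactly when |dx|/halfW + |dy|/halfH < 2, where (dx, dy) is the offset between their centres. A small sketch, assuming identical diamond-shaped tiles:

        #include <cmath>

        struct IsoTile {
            float cx, cy;       // centre of the diamond
            float halfW, halfH; // half width / half height of the diamond
        };

        // True if two identical, axis-aligned isometric diamonds overlap.
        // Derivation: the Minkowski sum of the two diamonds is a diamond with
        // half-extents (2*halfW, 2*halfH), so test the centre offset against it.
        bool isoOverlap(const IsoTile& a, const IsoTile& b) {
            float dx = std::fabs(a.cx - b.cx);
            float dy = std::fabs(a.cy - b.cy);
            return dx / a.halfW + dy / a.halfH < 2.0f;
        }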

    Read the article

  • CUDA 4.1 Particle Update

    - by N0xus
    I'm using CUDA 4.1 to parallelize the update of the particle system that I've made with DirectX 10. So far, my update method for the particle system is one line of code within a for loop that makes each particle fall down the y axis to simulate a waterfall: m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f); In my .cu file I've created a struct which I copied from my particle class, as follows: struct ParticleType { float positionX, positionY, positionZ; float red, green, blue; float velocity; bool active; }; Then I have an UpdateParticle method in the .cu as well. This encompasses the 3 main parameters my particles need to update themselves based off the initial line of code: __global__ void UpdateParticle(float* position, float* velocity, float frameTime) { } This is my first CUDA program and I'm at a loss as to what to do next. I've tried to simply put the particleList line in the UpdateParticle method, but then the particles don't fall down as they should. I believe it is because I am not calling something that I need to in the class where the particle-fall code used to be. Could someone please tell me what I am missing to get it working as it should? If I am doing this completely wrong in general, then please let me know as well.
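
    For reference, a hedged sketch of how an empty kernel like the one above is typically filled in and launched: one thread per particle, with the particle count passed in so out-of-range threads exit early, and the position/velocity arrays copied to device memory before the launch. The variable names (d_posY, h_posY, etc.) are illustrative, not taken from the question:

        // Kernel: each thread updates one particle's Y position (CUDA C++).
        __global__ void UpdateParticle(float* positionY, const float* velocity,
                                       float frameTime, int count)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= count) return;
            positionY[i] = positionY[i] - velocity[i] * frameTime * 0.001f;
        }

        // Host side (error checking omitted): copy data up, launch, copy back.
        void UpdateOnGpu(float* h_posY, const float* h_vel, int count, float frameTime)
        {
            float *d_posY = nullptr, *d_vel = nullptr;
            size_t bytes = count * sizeof(float);
            cudaMalloc(&d_posY, bytes);
            cudaMalloc(&d_vel, bytes);
            cudaMemcpy(d_posY, h_posY, bytes, cudaMemcpyHostToDevice);
            cudaMemcpy(d_vel, h_vel, bytes, cudaMemcpyHostToDevice);

            int threads = 256;
            int blocks = (count + threads - 1) / threads;
            UpdateParticle<<<blocks, threads>>>(d_posY, d_vel, frameTime, count);

            cudaMemcpy(h_posY, d_posY, bytes, cudaMemcpyDeviceToHost);
            cudaFree(d_posY);
            cudaFree(d_vel);
        }

    In practice the device buffers would be allocated once and kept between frames (or shared with the DirectX vertex buffer via interop) rather than copied every update; the copy-per-frame version above is just the simplest thing that shows the moving parts.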

    Read the article

  • TGA loader: reverse y-axis

    - by aVoX
    I've written a TGA image loader in Java which is working perfectly for files created with GIMP as long as they are saved with the option origin set to Top Left (Note: Actually TGA files are meant to be stored upside down - Bottom Left in GIMP). My problem is that I want my image loader to be capable of reading all different kinds of TGA, so my question is: How do I flip the image upside down? Note that I store all image data inside a one-dimensional byte array, because OpenGL (glTexImage2D to be specific) requires it that way. Thanks in advance.
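
    Flipping is just reversing the order of the rows inside the one-dimensional array: swap row 0 with row height-1, row 1 with row height-2, and so on, where one row is width * bytesPerPixel bytes. (The TGA image-descriptor byte's bit 5 tells you whether a file is stored top-to-bottom, so you can decide per file whether to flip.) A minimal sketch, shown in C++ for brevity; the same row swap applies directly to a Java byte[]:

        #include <algorithm>
        #include <vector>

        // Flip image rows in place; data.size() must equal width * height * bytesPerPixel.
        void flipVertically(std::vector<unsigned char>& data,
                            int width, int height, int bytesPerPixel)
        {
            const int rowSize = width * bytesPerPixel;
            for (int y = 0; y < height / 2; ++y) {
                unsigned char* top    = data.data() + y * rowSize;
                unsigned char* bottom = data.data() + (height - 1 - y) * rowSize;
                std::swap_ranges(top, top + rowSize, bottom);
            }
        }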

    Read the article

  • Identifying connected lines drawn free-hand by a user

    - by rawrgoesthelion
    I have a series of 'images' described by a mixture of connected lines and curves. Users will draw on the screen, free hand, and my goal is to break their drawing down into a series of lines and curves that can be matched with the 'images' in my set. For the sake of simplicity, let's assume this is occurring on a touch screen. These lines will be connected. Each time the user's finger moves, the dx and dy is recorded. The drawing is considered complete and analyzed when the user's finger leaves the screen. I'm having trouble figuring out a good way to break the user's drawing down into lines. Is there any well known approach to this problem, a C++ library that solves it, or any good articles/technical papers on how to achieve this?
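
    A well-known approach is polyline simplification, for example the Ramer-Douglas-Peucker algorithm: treat the recorded finger positions as a polyline and recursively keep only the points that deviate from a straight line by more than a tolerance. The surviving points are the corners, and consecutive pairs of them are the line segments you can then match against the image set. A compact sketch:

        #include <cmath>
        #include <vector>

        struct Pt { float x, y; };

        // Perpendicular distance from p to the infinite line through a and b.
        static float perpDist(const Pt& p, const Pt& a, const Pt& b) {
            float dx = b.x - a.x, dy = b.y - a.y;
            float len = std::sqrt(dx * dx + dy * dy);
            if (len < 1e-6f) return std::sqrt((p.x - a.x) * (p.x - a.x) + (p.y - a.y) * (p.y - a.y));
            return std::fabs(dy * (p.x - a.x) - dx * (p.y - a.y)) / len;
        }

        // Ramer-Douglas-Peucker: keep only points farther than 'tol' from the
        // line joining the current span's endpoints.
        static void rdp(const std::vector<Pt>& pts, size_t first, size_t last,
                        float tol, std::vector<Pt>& out) {
            float maxDist = 0.0f;
            size_t index = first;
            for (size_t i = first + 1; i < last; ++i) {
                float d = perpDist(pts[i], pts[first], pts[last]);
                if (d > maxDist) { maxDist = d; index = i; }
            }
            if (maxDist > tol) {
                rdp(pts, first, index, tol, out);
                rdp(pts, index, last, tol, out);
            } else {
                out.push_back(pts[last]);   // whole span is close enough to one line
            }
        }

        std::vector<Pt> simplifyStroke(const std::vector<Pt>& pts, float tol) {
            if (pts.size() < 2) return pts;
            std::vector<Pt> out;
            out.push_back(pts.front());
            rdp(pts, 0, pts.size() - 1, tol, out);
            return out;   // consecutive pairs are the detected line segments
        }

    Curves can be handled on top of this by checking, for each simplified span, whether the original points consistently bow to one side; spans that do are better fitted with an arc or Bezier than with a straight line.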

    Read the article

  • Making efficient voxel engines using "chunks"

    - by Wardy
    Concept I'm currently looking into how voxel engines work, with a view to possibly making one myself. I see a lot of stuff like this ... https://sites.google.com/site/letsmakeavoxelengine/home/chunks ... which talks about how to go about reducing the draw calls. What I can't seem to understand is how it actually saves draw call counts, on the basis of the logic being something like this ... Without chunks foreach voxel in myvoxels DrawIfVisible() With Chunks foreach chunk in mychunks DrawIfVisible() which then does ... foreach voxel in myvoxels DrawIfVisible() So surely you saved nothing?! You still make a draw call for each visible voxel, do you not? A visible voxel needs a draw call in either scenario. The only real saving I can see is that the logic that evaluates a chunk will be able to determine whether a large number of voxels are visible or not, effectively saving a bit of "is this chunk visible" CPU time. But it's the draw calls that interest me ... The fewer of those, the faster the application. EDIT: In case it makes any difference, I will probably be using XNA (DX, not OpenGL) for my engine, so don't consider my choice of example in the link above my choice of technology. But this question is such that I doubt it would matter.
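
    For what it's worth, the saving usually comes from the fact that a chunk is not drawn voxel-by-voxel at all: when a chunk is created or modified, all of its exposed faces are baked into a single vertex buffer, and at render time the whole chunk is one draw call no matter how many thousands of voxels it contains. A rough, engine-agnostic sketch of that meshing step in C++ (in XNA, a VertexBuffer upload plus one DrawPrimitives call per chunk would play the same role):

        #include <vector>

        constexpr int N = 16;                          // chunk is N x N x N voxels

        struct Chunk {
            bool solid[N][N][N] = {};                  // voxel occupancy
            std::vector<float> mesh;                   // baked vertex positions, xyz per vertex
        };

        static bool isSolid(const Chunk& c, int x, int y, int z) {
            if (x < 0 || y < 0 || z < 0 || x >= N || y >= N || z >= N) return false;
            return c.solid[x][y][z];
        }

        // Emit one face (two triangles) of voxel (x,y,z) facing (dx,dy,dz).
        // Winding order and texture coordinates are left out to keep the sketch short.
        static void addFace(std::vector<float>& m, int x, int y, int z, int dx, int dy, int dz) {
            float cx = x + 0.5f + 0.5f * dx, cy = y + 0.5f + 0.5f * dy, cz = z + 0.5f + 0.5f * dz;
            float u[3] = { dx ? 0.0f : 1.0f, dx ? 1.0f : 0.0f, 0.0f };   // first in-plane axis
            float v[3] = { 0.0f, dz ? 1.0f : 0.0f, dz ? 0.0f : 1.0f };   // second in-plane axis
            const float s[4][2] = { {-0.5f,-0.5f}, {0.5f,-0.5f}, {0.5f,0.5f}, {-0.5f,0.5f} };
            float corner[4][3];
            for (int i = 0; i < 4; ++i) {
                corner[i][0] = cx + s[i][0] * u[0] + s[i][1] * v[0];
                corner[i][1] = cy + s[i][0] * u[1] + s[i][1] * v[1];
                corner[i][2] = cz + s[i][0] * u[2] + s[i][1] * v[2];
            }
            const int tri[6] = { 0, 1, 2, 0, 2, 3 };
            for (int i : tri) {
                m.push_back(corner[i][0]); m.push_back(corner[i][1]); m.push_back(corner[i][2]);
            }
        }

        // Rebuild the chunk's mesh: only faces that touch air are emitted. Drawing the
        // chunk afterwards is a single draw call over c.mesh, not one call per voxel.
        void rebuildMesh(Chunk& c) {
            c.mesh.clear();
            const int dirs[6][3] = { {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1} };
            for (int x = 0; x < N; ++x)
                for (int y = 0; y < N; ++y)
                    for (int z = 0; z < N; ++z) {
                        if (!c.solid[x][y][z]) continue;
                        for (const auto& d : dirs)
                            if (!isSolid(c, x + d[0], y + d[1], z + d[2]))
                                addFace(c.mesh, x, y, z, d[0], d[1], d[2]);
                    }
        }

    So "DrawIfVisible" on a chunk means "submit this chunk's prebaked buffer in one call", and the per-voxel work only happens when the chunk is rebuilt, not every frame.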

    Read the article

  • SDL_DisplayFormat works, but not SDL_DisplayFormatAlpha

    - by Bounderby
    The following code is intended to display a green square on a black background. It executes, but the green square does not show up. However, if I change SDL_DisplayFormatAlpha to SDL_DisplayFormat the square is rendered correctly. So what don't I understand? It seems to me that I am creating *surface with an alpha mask and I am using SDL_MapRGBA to map my green color, so it would be consistent to use SDL_DisplayFormatAlpha as well. (I removed error-checking for clarity, but none of the SDL API calls fail in this example.) #include <SDL.h> int main(int argc, const char *argv[]) { SDL_Init( SDL_INIT_EVERYTHING ); SDL_Surface *screen = SDL_SetVideoMode( 640, 480, 32, SDL_HWSURFACE | SDL_DOUBLEBUF ); SDL_Surface *temp = SDL_CreateRGBSurface( SDL_HWSURFACE, 100, 100, 32, 0, 0, 0, ( SDL_BYTEORDER == SDL_BIG_ENDIAN ? 0x000000ff : 0xff000000 ) ); SDL_Surface *surface = SDL_DisplayFormatAlpha( temp ); SDL_FreeSurface( temp ); SDL_FillRect( surface, &surface->clip_rect, SDL_MapRGBA( screen->format, 0x00, 0xff, 0x00, 0xff ) ); SDL_Rect r; r.x = 50; r.y = 50; SDL_BlitSurface( surface, NULL, screen, &r ); SDL_Flip( screen ); SDL_Delay( 1000 ); SDL_Quit(); return 0; }
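
    One thing worth checking, offered as a hedged guess rather than a definitive diagnosis: SDL_MapRGBA is given screen->format, and the screen surface has no alpha channel, while the surface returned by SDL_DisplayFormatAlpha does. If the mapped pixel ends up with alpha bits of zero, the per-pixel-alpha blit draws a fully transparent square. Mapping the colour against the destination surface's own format keeps the alpha intact:

        /* Map the fill colour with the format of the surface being filled,
           so the alpha component survives into the per-pixel-alpha surface. */
        SDL_FillRect( surface, &surface->clip_rect,
                      SDL_MapRGBA( surface->format, 0x00, 0xff, 0x00, 0xff ) );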

    Read the article

  • matrix to transform unit cube to space defined by 8 arbitrary points

    - by aadster
    I asked a question similar to this already, but I think this is a clearer statement of what I'm trying to achieve... or of whether it's possible at all! I'm trying to find a transformation (ideally a matrix) which would transform the 8 points of a 3D unit cube to 8 arbitrary points in space. The 8 target points have no known structure. e.g.: My gut feeling is that a matrix is unable to provide this xform, since the cube's faces can end up non-planar or concave... but are there any other methods of transformation? Thanks!
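
    In general no single 4x4 matrix can do this: an affine or projective map keeps planar faces planar, and 8 arbitrary points usually do not form planar faces. Trilinear interpolation can, though: treat the 8 target points as the images of the cube's corners and blend them by the unit-cube coordinates. A small sketch:

        struct Vec3 { float x, y, z; };

        // Map a point (u,v,w) in the unit cube [0,1]^3 into the hexahedron whose
        // corners are given in c[], indexed as c[x + 2*y + 4*z] with x,y,z in {0,1}.
        Vec3 trilinear(const Vec3 c[8], float u, float v, float w) {
            auto lerp = [](const Vec3& a, const Vec3& b, float t) {
                return Vec3{ a.x + (b.x - a.x) * t,
                             a.y + (b.y - a.y) * t,
                             a.z + (b.z - a.z) * t };
            };
            Vec3 x00 = lerp(c[0], c[1], u), x10 = lerp(c[2], c[3], u);
            Vec3 x01 = lerp(c[4], c[5], u), x11 = lerp(c[6], c[7], u);
            Vec3 y0  = lerp(x00, x10, v),   y1  = lerp(x01, x11, v);
            return lerp(y0, y1, w);
        }

    Note that this map is not linear in (u,v,w) (it contains uv, uw, vw and uvw terms), which is exactly why no matrix can reproduce it in general.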

    Read the article

  • Why Android for new (micro) consoles?

    - by Klaim
    There are a lot of new (micro) consoles with Android inside coming soon or already out there (the OUYA, for example). My question is: why use Android and not another OS as the base for these consoles? I assume that there are pragmatic answers to this, but I can't see any clear killer feature. For example, one can assume that any stable Linux distribution would work (like Valve seem to think). Android is primarily targeted at mobile platforms, which means it is built around the idea of being interrupted, which goes against a lot of what console devs need in new hardware - but that is not killing Android as a platform for gamedev either, as it's just a constraint. Why not another OS? What are the Android killer features that make micro-console builders use it instead of Linux or anything else?

    Read the article

  • Issues implementing arcball viewer

    - by Pris
    My scene has a simple cube, and a camera built with the lookAt function (I'm using OpenGL). The scene renders fine, and I'm sure I have my model/view/projection matrices set up correctly. Now I'm trying to implement arcball rotation for my camera, but I'm having some trouble. I've got it down to calculating the angle/axis rotation for a virtual sphere in normalized screen coordinates. That means when I move my mouse left to right, I get an angle around the Y axis... and moving my mouse up/down will get me an angle about X. I'm not sure where to go from here -- what do I need to do with my axis so I can apply the angle to simulate camera rotation about its viewpoint? If I try directly applying the axis/angle rotation to the camera/view transform, I get what you'd expect: the view is rotated about the world axes that the mouse moving over the virtual sphere on the screen corresponds to. So if I move the mouse up/down the view rotates about the world's X axis (what I get reminds me of a first-person view)... but this isn't what I want. I think I need the axis I get to be transformed so it passes through the camera viewpoint and is oriented correctly in reference to the camera... but I don't know if that's right or how to do that.
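
    A hedged sketch of one way to apply it (shown with GLM purely for illustration): the axis computed from the virtual sphere lives in view space, so transform it into world space with the inverse of the view matrix's rotation part, then rotate the camera position about the look-at target around that world-space axis and rebuild the view matrix:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // viewAxis/angleRadians come from the virtual-sphere (arcball) calculation.
        glm::mat4 orbitCamera(const glm::mat4& view, glm::vec3& eye,
                              const glm::vec3& target, const glm::vec3& up,
                              const glm::vec3& viewAxis, float angleRadians)
        {
            // The rotation part of the view matrix maps world -> view, so its
            // inverse (transpose, since it is a pure rotation) maps the axis back to world.
            glm::mat3 viewRot(view);
            glm::vec3 worldAxis = glm::transpose(viewRot) * viewAxis;

            // Rotate the eye position around the target about that world-space axis.
            glm::mat4 rot = glm::rotate(glm::mat4(1.0f), angleRadians, worldAxis);
            eye = target + glm::vec3(rot * glm::vec4(eye - target, 0.0f));

            return glm::lookAt(eye, target, up);
        }

    Whether the angle needs negating depends on your sphere-mapping convention, and a full arcball would rotate the up vector the same way; this is only meant to show where the view-space-to-world-space conversion of the axis fits in.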

    Read the article

  • How would I create alternating players? (Turn-based event)

    - by Blue
    The picture above shows 2 players, each containing 3 characters. I want to know how to make a turn-based event starting with player 1 and alternating turns with player 2. In every alternation each character gets a turn. If a character dies, the next character on the same team goes, and so on. How would I create this? Is there a tutorial? I haven't made any turn-based games, so I don't know how to program this kind of thing.
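
    A simple way to structure this is a turn queue: keep each player's characters in a list, track an index per player plus whose turn it is, and when a turn is requested skip over dead characters and then flip to the other player. A minimal sketch with no particular engine assumed:

        #include <string>
        #include <vector>

        struct Character {
            std::string name;
            int hp = 10;
            bool alive() const { return hp > 0; }
        };

        struct TurnManager {
            std::vector<Character> team[2];   // team[0] = player 1, team[1] = player 2
            int nextIndex[2] = {0, 0};        // which character on each team acts next
            int currentPlayer = 0;            // player 1 starts

            // Returns the character whose turn it is, or nullptr if that team is wiped out.
            Character* nextTurn() {
                int p = currentPlayer;
                currentPlayer = 1 - currentPlayer;            // alternate players every turn
                for (size_t tried = 0; tried < team[p].size(); ++tried) {
                    Character& c = team[p][nextIndex[p]];
                    nextIndex[p] = (nextIndex[p] + 1) % team[p].size();
                    if (c.alive()) return &c;                 // skip dead characters
                }
                return nullptr;                               // everyone on this team is dead
            }
        };

    Each call to nextTurn() hands control to the next living character, alternating player 1 and player 2 automatically; a nullptr return is a natural place to end the battle.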

    Read the article

  • What is the point in using real time?

    - by bobobobo
    I understand that real-time frame elapse values (which should vary around 16-17 ms on average) are provided by a lot of frameworks (GetTimeElapsedSinceLastFrame, for example), and they give you the wall clock time. But should we use this information in a basic physics simulation? It looks to me to be a bad idea. Say there is a slight lag on the machine, for whatever reason (say a virus scanner starts up). The calculations all jump, and there is no need for this. Why not use a virtual second and ignore wall clock time? For gameplay on the level of Commander Keen, shouldn't you always use the virtual second and not real time? (Besides stopwatch timing for race games.) I don't see a need to use real time rather than a fixed 16 ms time step.
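
    The usual compromise is a fixed timestep driven by an accumulator: the wall clock only decides how many fixed-size virtual steps to run, so the physics itself always advances in identical 16 ms increments and a hiccup (virus scanner, etc.) just means a few extra catch-up steps rather than one huge jump. A sketch of that loop:

        #include <chrono>

        void runLoop(bool (*keepRunning)(), void (*update)(float), void (*render)())
        {
            using clock = std::chrono::steady_clock;
            const float step = 0.016f;                     // fixed virtual step: 16 ms
            const float maxFrame = 0.25f;                  // clamp huge stalls (debugger, AV scan)
            float accumulator = 0.0f;
            auto previous = clock::now();

            while (keepRunning()) {
                auto now = clock::now();
                float frame = std::chrono::duration<float>(now - previous).count();
                previous = now;
                if (frame > maxFrame) frame = maxFrame;    // don't spiral after a long stall

                accumulator += frame;
                while (accumulator >= step) {              // run 0..N fixed-size updates
                    update(step);                          // physics always sees exactly 16 ms
                    accumulator -= step;
                }
                render();                                  // draw as often as the machine allows
            }
        }

    This keeps the simulation deterministic in virtual time while still staying roughly in sync with the wall clock, which is the main reason frameworks expose the real elapsed time at all.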

    Read the article

  • How to draw unlimited FPS on Mac OS X with OpenGL?

    - by V1ru8
    I'd like to draw as many frames as possible with OpenGL on Mac OS X to measure the performance of different scenes. What I've tried so far: Using a CVDisplayLink that has NSOpenGLCPSwapInterval set to 0, so it does not sync with the display. But with that it's still stuck at a max of 60 FPS. Using normal -drawRect: with a timer that fires every 1/1000 s and calls -setNeedsDisplay: - still not more than 60 FPS. Same as 2, but calling -display in the timer callback. With that I get the FPS above 60, but it still stops at 100-110 FPS, although the frame rate should easily be 10 times higher. Any idea how I can really draw as many frames as possible?
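
    If the goal is purely to measure how fast scenes can be drawn, one option is to take the window system out of the loop entirely: render into an offscreen framebuffer object in a tight loop and time it yourself, since on-screen presentation on OS X is throttled by the compositor. A rough, API-agnostic timing sketch (FBO setup omitted; glFinish is used so the GPU work is actually included in the measurement):

        #include <chrono>
        #include <cstdio>
        #include <OpenGL/gl.h>   // <GL/gl.h> on other platforms

        // Draw 'frames' frames offscreen and report the average FPS.
        // Assumes an FBO is already bound; drawScene() is your existing render code.
        void benchmark(void (*drawScene)(), int frames)
        {
            using clock = std::chrono::steady_clock;
            auto start = clock::now();
            for (int i = 0; i < frames; ++i) {
                drawScene();
                glFlush();                 // keep the driver's command queue from growing unbounded
            }
            glFinish();                    // wait for the GPU to finish all submitted work
            double seconds = std::chrono::duration<double>(clock::now() - start).count();
            std::printf("%d frames in %.3f s = %.1f FPS\n", frames, seconds, frames / seconds);
        }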

    Read the article

  • How to calculate new velocities between resting objects (AABB) after accelerations?

    - by Tiedye
    Lately I have been trying to create a 2D platformer engine in C++ with Direct2D. The problem I am currently having is getting objects that are resting against each other to interact correctly after accelerations like gravity have been applied to them. Right now I can detect collisions and respond to them correctly (I think), and when objects collide they remember what other objects they're resting against, so objects can be pushed by other objects (note that there is no bounce in any collision, so when objects collide they are guaranteed to come to rest against each other until something else happens). Every time the simulation advances, the acceleration for objects is applied to their velocities (for example vx += ax * t, where t is the time elapsed since the last advancement). After these accelerations are applied, I want to check if any objects that are resting against each other are moving at different speeds than their counterparts (as different objects can have different accelerations) and, depending on that difference, either unlink the two objects so they are no longer resting, or even out their velocities so they are moving at the same speed once again. I am having trouble creating an algorithm that can do this across many resting objects. Here's a diagram to help explain my problem
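
    One common way to "even out" the velocities without tracking who pushes whom is to treat each resting contact as a zero-restitution collision every step: after integrating accelerations, look at each resting pair, and if they are moving toward each other along the contact normal, apply equal and opposite impulses so their normal velocities match; if they are separating, unlink them. A sketch for one pair, working with the velocity component along the contact normal and weighting by mass so momentum is conserved:

        struct Body {
            float velocityAlongNormal;   // velocity projected onto the contact normal (a -> b)
            float inverseMass;           // 0 for static geometry (floors, walls)
        };

        // Returns true if the pair should stay linked (still resting), false if
        // they are separating and the resting link can be dropped.
        bool resolveRestingContact(Body& a, Body& b)
        {
            float relative = b.velocityAlongNormal - a.velocityAlongNormal;
            if (relative > 0.0f) return false;            // moving apart: unlink

            float invMassSum = a.inverseMass + b.inverseMass;
            if (invMassSum == 0.0f) return true;          // two static bodies: nothing to do

            // Zero-restitution impulse: afterwards both have the same normal velocity.
            float impulse = -relative / invMassSum;
            a.velocityAlongNormal -= impulse * a.inverseMass;
            b.velocityAlongNormal += impulse * b.inverseMass;
            return true;
        }

    Running this over every resting pair a few times per step lets the correction propagate through stacks and chains of more than two objects, which avoids having to devise a single global algorithm for arbitrary resting groups.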

    Read the article

  • OpenGL ES 2.0: Sprite Sheet Animation

    - by Project Dumbo Dev
    I've found a bunch of tutorials on how to make this work in OpenGL 1 & 1.1, but I can't find it for 2.0. I would work it out by loading the texture and using a matrix in the vertex shader to move through the sprite sheet. I'm looking for the most efficient way to do it. I've read that when you do the thing I'm proposing you are constantly changing the VBOs, and that that is not good. Edit: I've been doing some research myself. I came upon these two: Updating Texture and, referring to the one before, PBOs. I can't use PBOs since I'm using the ES version of OpenGL, so I suppose the best way is to use FBOs, but what I still don't get is whether I should create a sprite atlas/batch and make an FBO/load a texture for each frame, or whether I should load every frame into the buffer and change just the texture directions.
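
    You generally don't need to touch the VBO (or reach for PBOs/FBOs) for this: keep one static quad whose UVs cover a single frame and pass the current frame's offset and scale as uniforms, letting the vertex shader shift the texture coordinates into the right cell of the atlas. A hedged sketch of the idea:

        #include <GLES2/gl2.h>

        // GLSL ES vertex shader: one static quad, frame selected by two uniforms.
        static const char* kVertexShader = R"(
            attribute vec2 aPosition;
            attribute vec2 aTexCoord;        // UVs in [0,1] covering one cell
            uniform vec2 uFrameScale;        // (1/columns, 1/rows) of the sprite sheet
            uniform vec2 uFrameOffset;       // corner of the current frame's cell
            varying vec2 vTexCoord;
            void main() {
                vTexCoord = uFrameOffset + aTexCoord * uFrameScale;
                gl_Position = vec4(aPosition, 0.0, 1.0);
            }
        )";

        // Per draw call: pick the cell for 'frame' and update the two uniforms.
        // offsetLoc/scaleLoc come from glGetUniformLocation on the linked program.
        void setFrame(int frame, int columns, int rows, GLint offsetLoc, GLint scaleLoc)
        {
            float sx = 1.0f / columns, sy = 1.0f / rows;
            int col = frame % columns;
            int row = frame / columns;
            glUniform2f(scaleLoc, sx, sy);
            glUniform2f(offsetLoc, col * sx, row * sy);
        }

    The vertex and texture data stay static, only two small uniforms change per frame, and the whole sprite sheet remains a single texture bind.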

    Read the article
