Search Results

Search found 21194 results on 848 pages for 'game state'.


  • Maya is lagging in a specific way...?

    - by Aerovistae
    My Maya installation worked perfectly. It is not my computer. Something caused it to stop working overnight, somehow. When I try to drag a vertex or something like that, it moves the vertex, but then I have to click about three times somewhere outside the mesh before the actual mesh will catch up and follow the vertex. Until I do that, it just stays as it was, with a floating vertex somewhere inside or outside it. It makes modeling borderline impossible and completely infuriating. What ought to be happening is what we're all used to: as I move the vertex, the mesh follows it actively, so I can see what it looks like at every given moment until I release the vertex in its new position. Another weird thing: this only applies to complex meshes, ones with a couple thousand faces. A simple cube works fine. What gives? Anybody?

    Read the article

  • Spherical to Cartesian Coordinates

    - by user1258455
    Well, I'm reading Frank Luna's DirectX 10 book, and while trying to understand the first demo I found something that's not very clear, at least to me. In the updateScene method, when I press A, S, W or D, the angles mTheta and mPhi change, but after that there are three lines of code that I don't understand exactly:

        // Convert Spherical to Cartesian coordinates: mPhi measured from +y
        // and mTheta measured counterclockwise from -z.
        float x = 5.0f*sinf(mPhi)*sinf(mTheta);
        float z = -5.0f*sinf(mPhi)*cosf(mTheta);
        float y = 5.0f*cosf(mPhi);

    I mean, the comment explains what they do: it says they convert the spherical coordinates to Cartesian coordinates. But mathematically, why? Why is the x value calculated from the product of the sines of both angles, z from the product of a sine and a cosine, and y from just the cosine? After that, those values (x, y and z) are used to build the view matrix. The book doesn't explain mathematically why those values are calculated like that (and I didn't find anything to help me understand it in the first part of the book, "Mathematical prerequisites"), so it would be good if someone could explain what exactly happens in those code lines, or just give me a link that helps me understand the math part. Thanks in advance!
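    The derivation follows from two right triangles. Below is a minimal Python sketch of the same conversion (with the book's hard-coded radius of 5 generalized to a parameter; the function name is mine, not the book's), walking through why each trig factor appears:

        import math

        def spherical_to_cartesian(r, theta, phi):
            # phi is measured down from the +y axis, so the height of the
            # point above the xz-plane is the adjacent side: y = r*cos(phi).
            y = r * math.cos(phi)
            # The rest of the radius is its shadow on the xz-plane, the
            # opposite side of the same triangle: proj = r*sin(phi).
            proj = r * math.sin(phi)
            # theta then splits that shadow within the xz-plane; it is
            # measured counterclockwise from -z, so theta = 0 gives z = -proj.
            x = proj * math.sin(theta)
            z = -proj * math.cos(theta)
            return x, y, z

    So x needs both angles because it is a component of a component: first sin(phi) drops the radius onto the xz-plane, then sin(theta) resolves that projection along x. y only ever involves phi, which is why it uses a single cosine.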

    Read the article

  • JavaScript - Canvas image never appears on first function run

    - by Matt
    I'm getting a bit of a weird issue: the image never shows the first time you run the game in your browser, but after that you see it every time. If you close your browser, reopen it and run the game again, the same issue occurs: you don't see the image the first time you run it. Here's the issue in action: just hit a wall and there's no image the first time on the end-game screen. Any help would be appreciated. Regards, Matt

        function showGameOver() {
            ctx.clearRect(0, 0, canvas.width, canvas.height);
            ctx.fillStyle = "black";
            ctx.font = "16px sans-serif";
            ctx.fillText("Game Over!", ((canvas.width / 2) - (ctx.measureText("Game Over!").width / 2)), 50);
            ctx.font = "12px sans-serif";
            ctx.fillText("Your Score Was: " + score, ((canvas.width / 2) - (ctx.measureText("Your Score Was: " + score).width / 2)), 70);
            myimage = new Image();
            myimage.src = "xcLDp.gif";
            var size = [119, 26],   // set up size
                coord = [443, 200];
            ctx.font = "12px sans-serif";
            ctx.fillText("Restart", ((canvas.width / 2) - (ctx.measureText("Restart").width / 2)), 197);
            ctx.drawImage(          // draw it on canvas
                myimage,
                coord[0], coord[1],
                size[0], size[1]
            );
            $("canvas").click(function(e) {        // when click..
                if (testIfOver(this, e, size, coord)) {
                    startGame();                   // reload
                }
            });
            $("canvas").mousemove(function(e) {    // when mouse moving
                if (testIfOver(this, e, size, coord)) {
                    $(this).css("cursor", "pointer");  // change the cursor
                } else {
                    $(this).css("cursor", "default");  // change it back
                }
            });
            function testIfOver(ele, ev, size, coord) {
                if (ev.pageX > coord[0] + ele.offsetLeft &&
                    ev.pageX < coord[0] + size[0] + ele.offsetLeft &&
                    ev.pageY > coord[1] + ele.offsetTop &&
                    ev.pageY < coord[1] + size[1] + ele.offsetTop) {
                    return true;
                }
                return false;
            }
        }

    Read the article

  • Zooming to point of interest

    - by user1010005
    I have the following variables: the point of interest, which is the position (x, y) in pixels of the place to focus on; the screen width and height, which are the dimensions of the window; and the zoom level, which sets the zoom level of the camera. And this is the code I have so far:

        void Zoom(int pointOfInterestX, int pointOfInterestY, int screenWidth, int screenHeight, int zoomLevel)
        {
            glTranslatef((pointOfInterestX/2 - screenWidth/2), (pointOfInterestY/2 - screenHeight/2), 0);
            glScalef(zoomLevel, zoomLevel, zoomLevel);
        }

    I want to zoom in/out but keep the point of interest in the middle of the screen. So far all of my attempts have failed, and I would like to ask for some help.
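    For reference, the usual recipe is: translate the point of interest to the origin, scale about the origin, then translate the origin to the screen center. A minimal Python sketch of the resulting mapping (names are mine):

        def world_to_screen(x, y, px, py, screen_w, screen_h, zoom):
            # Shift so the point of interest sits at the origin, scale,
            # then move the origin to the center of the screen.
            sx = (x - px) * zoom + screen_w / 2
            sy = (y - py) * zoom + screen_h / 2
            return sx, sy

        # The point of interest lands dead center at any zoom level:
        print(world_to_screen(300, 200, 300, 200, 800, 600, 4))  # (400.0, 300.0)

    In fixed-function OpenGL terms that order would be glTranslatef(screenWidth/2, screenHeight/2, 0), then glScalef(zoomLevel, zoomLevel, 1), then glTranslatef(-pointOfInterestX, -pointOfInterestY, 0), since each new call multiplies the current matrix on the right and therefore applies to vertices first.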

    Read the article

  • Converting a Windows Store App to Android

    - by pm_2
    I cross-posted this from SO. I'm very new to Xamarin. I have a few published Windows Store apps and want to convert them to Android. I'm attempting to use Xamarin for this (just the free version). Here's where I am so far: I am trying two apps, one built with MonoGame and one built directly on the WinRT framework. I have managed to get them both into Xamarin Studio, basically by hacking the csproj files. I'm getting build errors because of missing references. There do appear to be some equivalent Mono / .NET 4 libraries, but things like Microsoft.Xna.Framework seem to be missing. So, my question is: am I going about this the right way and, if so, am I missing a step ("convert dependencies" or something)? If I'm not going about this the right way, then how should I be doing this? (I found very few online resources on this subject.)

    Read the article

  • Splitting Pygame functionality between classes or modules?

    - by sec_goat
    I am attempting to make my pygame application more modular, so that different functionalities are split up into different classes and modules. I am having some trouble getting pygame to let me load or convert images in a secondary class when the display has been set and pygame.init() has been done in my main class. I have typically used C# and XNA to accomplish this sort of behavior, but this time I need to use Python. How do I init pygame in class1, then create an instance of class2 which loads and convert()s images? I have tried pygame.init() in class2, but then it tells me no display mode has been set, when it has been set in class1. I am under the impression I do not want to create multiple pygame displays, as that gets problematic. I am probably missing something pythonic and simple, but I am not sure what. How do I create a Display class, init pygame, and then have other modules do my work like loading images, fonts etc.? Here is the simplest version of what I am doing:

        class class1:
            def __init__(self):
                self.screen = pygame.display.set_mode((600, 400))
                self.imageLoader = class2()

        class class2:
            def __init__(self):
                self.images = ['list of images']

            def load_images(self):
                self.images = os.listdir('./images/')  # get all images in the images directory
                for img in self.images:  # read all images in the directory and load them into pygame
                    new_img = pygame.image.load(os.path.join('images', img)).convert()
                    scale_img = pygame.transform.scale(new_img, (pygame.display.Info().current_w, pygame.display.Info().current_h))
                    self.images.append(scale_img)

        if __name__ == "__main__":
            c1 = class1()
            c1.imageLoader.load_images()

    Of course when it tries to load and convert the images it tells me pygame has not been initialized, so I throw in a pygame.init() in class2 (I have heard it is safe to init multiple times), and then the error changes to pygame.error: No video mode has been set.
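    One arrangement that avoids the error is to make sure the display is created exactly once, before anything tries to convert() a surface; the loader then needs no init of its own. A minimal sketch under that assumption (class names are mine):

        import os
        import pygame

        class Display:
            def __init__(self, size=(600, 400)):
                pygame.init()  # idempotent; fine to call once up front
                self.screen = pygame.display.set_mode(size)  # convert() requires this

        class ImageLoader:
            def __init__(self):
                self.images = []

            def load_images(self, directory='images'):
                # Safe here because Display was constructed first, so a
                # video mode already exists when convert() runs.
                info = pygame.display.Info()
                for name in os.listdir(directory):
                    img = pygame.image.load(os.path.join(directory, name)).convert()
                    self.images.append(pygame.transform.scale(img, (info.current_w, info.current_h)))

        if __name__ == "__main__":
            display = Display()      # set the video mode first...
            loader = ImageLoader()
            loader.load_images()     # ...then any module can load/convert

    The ordering, not the module layout, is what pygame cares about: set_mode() must have run in the same process before the first convert().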

    Read the article

  • Very basic OpenGL ES 2 error

    - by user16547
    This is an incredibly simple shader, yet I'm having a lot of trouble understanding what's wrong with it. I'm trying to send a float to my fragment shader; its purpose is to adjust the alpha of the fragment colour. Here is my fragment shader:

        precision mediump float;

        uniform sampler2D u_Texture;
        uniform float u_Alpha;

        varying vec2 v_TexCoordinate;

        void main()
        {
            gl_FragColor = texture2D(u_Texture, v_TexCoordinate);
            gl_FragColor.a *= u_Alpha;
        }

    and below is my rendering method. I get a 1282 (invalid operation) on the GLES20.glUniform1f(u_Alpha, alpha); line. alpha is 1 (but I tried other values as well) and transparent is true:

        public void render() {
            GLES20.glUseProgram(mProgram);
            if (transparent) {
                GLES20.glEnable(GLES20.GL_BLEND);
                GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
                GLES20.glUniform1f(u_Alpha, alpha);
            }
            Matrix.setIdentityM(mModelMatrix, 0);
            Matrix.rotateM(mModelMatrix, 0, angle, 0, 0, 1);
            Matrix.translateM(mModelMatrix, 0, x, y, z);
            Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
            Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
            GLES20.glUniformMatrix4fv(u_MVPMatrix, 1, false, mMVPMatrix, 0);
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
            GLES20.glVertexAttribPointer(a_Position, 3, GLES20.GL_FLOAT, false, 12, 0);
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[1]);
            GLES20.glVertexAttribPointer(a_TexCoordinate, 2, GLES20.GL_FLOAT, false, 8, 0);
            // snowTexture start
            GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
            GLES20.glUniform1i(u_Texture, 0);
            GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, ibo[0]);
            GLES20.glDrawElements(GLES20.GL_TRIANGLE_STRIP, indices.capacity(), GLES20.GL_UNSIGNED_BYTE, 0);
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
            GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
            if (transparent) {
                GLES20.glDisable(GLES20.GL_BLEND);
            }
            GLES20.glUseProgram(0);
        }
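    For context, GL_INVALID_OPERATION from glUniform* usually means the location being set does not belong to the program currently in use (for example, it was queried from a different program object, or before a relink), or the glUniform call's type doesn't match the uniform's declared type. The defensive pattern, sketched here in PyOpenGL since the entry points are the same (program and alpha are placeholder names):

        from OpenGL.GL import glGetUniformLocation, glUseProgram, glUniform1f

        def set_alpha(program, alpha):
            # Query the location from the *same* linked program you bind;
            # re-query after every relink, since locations can change.
            loc = glGetUniformLocation(program, "u_Alpha")
            glUseProgram(program)        # uniforms apply to the bound program
            if loc != -1:                # -1 means the uniform was optimized out
                glUniform1f(loc, alpha)

    Note that a location of -1 is silently ignored rather than raising an error, so a genuine 1282 points at a location/program mismatch or a type mismatch rather than a missing uniform.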

    Read the article

  • Tiled perlin/value noise texture with (2^n)+1 size

    - by tobi
    Actually what I have in mind is value noise, I think, but what I am going to ask applies to both. It is known that if you want to produce a tileable texture using perlin/value noise, the size of the texture should be a power of two (2^n). Without any modifications to the algorithm, when you use a size of (2^n)+1 the texture cannot be tiled any more, so I am wondering whether it is possible (by modifying the algorithm somehow) to generate such a tiling texture with a size of (2^n)+1. The article (from which I have my implementation) is here: http://devmag.org.za/2009/04/25/perlin-noise/ I am aware that I can produce a texture of size 2^n and just duplicate the last column/row at the ends to make it (2^n)+1, but I don't want to, because such repetitions are too visible.
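    Tiling does not actually require a power-of-two size; it requires that the lattice the noise is sampled from wraps at some integer period, which a modulo enforces for any period at all. A small single-octave value-noise sketch in Python (the devmag article's octave summing can be layered on top; all names are mine):

        import random

        def tileable_value_noise(size, period, seed=0):
            # A lattice of random values that wraps at `period` in both axes.
            rng = random.Random(seed)
            lattice = [[rng.random() for _ in range(period)] for _ in range(period)]

            def fade(t):
                return t * t * (3 - 2 * t)  # smoothstep between lattice points

            def sample(x, y):
                x0, y0 = int(x) % period, int(y) % period
                x1, y1 = (x0 + 1) % period, (y0 + 1) % period  # wrap the far edge too
                tx, ty = fade(x - int(x)), fade(y - int(y))
                top = lattice[y0][x0] * (1 - tx) + lattice[y0][x1] * tx
                bot = lattice[y1][x0] * (1 - tx) + lattice[y1][x1] * tx
                return top * (1 - ty) + bot * ty

            # Spread `period` lattice cells across `size` pixels; the pixel
            # count (2^n, (2^n)+1, anything) no longer matters for tiling.
            return [[sample(x * period / size, y * period / size)
                     for x in range(size)] for y in range(size)]

    The same modulo trick works for gradient (Perlin) noise by wrapping the gradient-table indices instead of the stored values.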

    Read the article

  • Premultiplying matrices with Perspective destroys them

    - by Shadows In Rain
    If I apply world_to_camera, perspective and camera_to_screen to my mesh one after another, everything is okay. But if I premultiply the given matrices (i.e. transform = world_to_camera * perspective * camera_to_screen) before applying them, then it seems like only the perspective has any effect. If it is important: my 3D framework was written from scratch (as a test project for a job interview), but it works flawlessly, or at least I think so. So, the question: is this expected behaviour, or is my implementation wrong?
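    One thing worth checking: matrix multiplication is not commutative, and under a column-vector convention the factor order in the combined matrix must be the reverse of the application order (transform = camera_to_screen * perspective * world_to_camera, applied as transform * v). A tiny numpy sketch of how the right order reproduces step-by-step application and the wrong order does not:

        import numpy as np

        # Two simple homogeneous 2D transforms: translate by (3, 0), scale by 2.
        T = np.array([[1.0, 0.0, 3.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
        S = np.diag([2.0, 2.0, 1.0])

        v = np.array([1.0, 1.0, 1.0])      # a point, column-vector convention

        print(T @ (S @ v))   # step by step: scale, then translate -> [5. 2. 1.]
        print((T @ S) @ v)   # premultiplied in the same order    -> [5. 2. 1.]
        print((S @ T) @ v)   # factors swapped: translates first  -> [8. 2. 1.]

    With a row-vector convention the reading is mirrored, so it depends on how the framework multiplies vertices; one transform appearing to "win" over the others is the classic symptom of composing in the wrong order.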

    Read the article

  • Player Movement DirectX

    - by SullY
    I'm reading a book about game development with C++ and DirectX 9. Something in it interests me: it says that player movement increases with the power of the CPU, because a faster CPU will move the player every frame (better CPU = higher FPS). To bypass this, it says you just have to multiply time * movement factor. I'd like to know: is there another way to bypass it?
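    For reference, the book's fix is delta-time scaling: express speed in units per second and multiply by the seconds elapsed since the last frame. A minimal sketch:

        import time

        speed = 120.0                      # units per second, not per frame
        x = 0.0

        previous = time.perf_counter()
        for _ in range(10):                # stand-in for the game loop
            now = time.perf_counter()
            dt = now - previous            # seconds since the last frame
            previous = now
            x += speed * dt                # same distance per second at any FPS

    The main alternative is a fixed timestep: advance the simulation in constant increments (say 1/60 s) as many times as the accumulated frame time allows, which keeps movement and physics deterministic regardless of render speed.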

    Read the article

  • Multiplication for MVP matrices: Any benefits to doing so within the vertex shader?

    - by Nick Wiggill
    I'd like to understand under what circumstances (if any) it is worth doing the MVP matrix multiplication inside a vertex shader. The vertex shader runs once per vertex, and a single mesh typically contains many vertices, while all the MVP inputs remain the same for every vertex in the batch for a given draw call (model). Surely, then, you're always better off keeping the multiplications in client code and passing in the whole MVP precalculated as a uniform, avoiding redundant operations across individual vertices?
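    The precalculated variant the question argues for looks like this; a numpy sketch where upload_uniform and draw_mesh are hypothetical stand-ins for an engine's own calls:

        import numpy as np

        def draw(mesh, model, view, projection):
            # Two 4x4 multiplies per draw call on the CPU...
            mvp = projection @ view @ model
            # ...instead of two per *vertex* on the GPU.
            upload_uniform("u_MVP", mvp.astype(np.float32))  # hypothetical helper
            draw_mesh(mesh)                                  # hypothetical helper

    The usual counterexamples are cases where the factors are genuinely needed separately in the shader anyway: per-vertex world-space lighting wants the model matrix alone, and skinning composes many matrices per vertex, at which point uploading a fused MVP saves little.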

    Read the article

  • How to detect collisions between a sprite and a user-generated shape of some sort?

    - by Huwell
    How do I detect a collision between a sprite and a user-generated shape of some sort? For example: there are some objects on the screen, and the user takes their finger and draws a shape around one of them (the selection rule is painting a circle around the sprite, but the painted shapes may vary). I need to detect which object was selected, just like in this demo image: http://i52.tinypic.com/28h0t1g.png
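    If the finger stroke is recorded as a list of points, one workable approach is to close it into a polygon and run a point-in-polygon test against each sprite's center. A ray-casting sketch in Python:

        def point_in_polygon(px, py, polygon):
            # Cast a horizontal ray from (px, py) and count edge crossings;
            # an odd count means the point lies inside the polygon.
            inside = False
            n = len(polygon)
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                if (y1 > py) != (y2 > py):
                    # x where this edge crosses the ray's height
                    x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                    if px < x_cross:
                        inside = not inside
            return inside

        # Treat the recorded stroke as a closed polygon:
        stroke = [(10, 10), (200, 20), (210, 180), (15, 170)]
        print(point_in_polygon(100, 100, stroke))   # True -> sprite selected

    For stricter containment you can test all four corners of the sprite's bounding box instead of just its center.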

    Read the article

  • Why do consoles have so little memory compared to classic computers?

    - by jokoon
    I remember the PlayStation having 2MB of RAM and 1MB of graphics memory. The PlayStation 3 now has only 256MB of RAM and 256MB of graphics memory, and I'm sure that the day the console was released, even a laptop's "standard" capacity was at least 1GB. So why do they put so little memory in their machines, when developers would benefit a lot from having more? Or is the memory so much faster than desktop memory, and thus more expensive? Or is it not worth that much to developers? What are the Sony/Xbox/Nintendo engineers thinking, since it seems to be the same reasoning for all of them?

    Read the article

  • What is going on in this SAT/vector projection code?

    - by ssb
    I'm looking at the example XNA SAT collision code presented here: http://www.xnadevelopment.com/tutorials/rotatedrectanglecollisions/rotatedrectanglecollisions.shtml See the following code:

        private int GenerateScalar(Vector2 theRectangleCorner, Vector2 theAxis)
        {
            // Using the formula for vector projection, take the corner being
            // passed in and project it onto the given axis.
            float aNumerator = (theRectangleCorner.X * theAxis.X) + (theRectangleCorner.Y * theAxis.Y);
            float aDenominator = (theAxis.X * theAxis.X) + (theAxis.Y * theAxis.Y);
            float aDivisionResult = aNumerator / aDenominator;
            Vector2 aCornerProjected = new Vector2(aDivisionResult * theAxis.X, aDivisionResult * theAxis.Y);

            // Now that we have our projected vector, calculate a scalar of that
            // projection that can be used to more easily do comparisons.
            float aScalar = (theAxis.X * aCornerProjected.X) + (theAxis.Y * aCornerProjected.Y);
            return (int)aScalar;
        }

    I think the problems I'm having with this come mostly from translating physics concepts into data structures. For example, earlier in the code there is a calculation of the axes to be used, and these are stored as Vector2s; they are found by subtracting one point from another, but those points are also stored as Vector2s. So are the axes being stored as slopes in a single Vector2? Next, what exactly does the Vector2 produced by the vector projection code represent? That is, I know it represents the projected vector, but as a Vector2, what does that mean: a point on a line? Finally, what does the scalar at the end actually represent? It's fine to tell me that you're getting a scalar value of the projected vector, but none of the information I can find online seems to cover a scalar of a vector as it's used in this context. I don't see angles or magnitudes with these vectors, so I'm a little disoriented when it comes to thinking in terms of physics. If this final scalar calculation is just a dot product, how is that directly applicable to SAT from here on? Is this what I use to calculate maximum/minimum values for overlap? I guess I'm just having trouble figuring out exactly what the dot product represents in this particular context. Clearly I'm not quite up to date on my elementary physics, but any explanations would be greatly appreciated.
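    To make the pieces concrete, here is the same calculation in a few lines of Python. An "axis" is just a direction vector (an edge difference), not a slope; the projection is a vector lying along that axis; and the final dot product collapses the projection to one number, a coordinate along the axis:

        def generate_scalar(corner, axis):
            cx, cy = corner
            ax, ay = axis
            # Vector projection: proj = ((corner . axis) / (axis . axis)) * axis
            t = (cx * ax + cy * ay) / (ax * ax + ay * ay)
            proj = (t * ax, t * ay)
            # Dotting the projection with the axis again turns it into a single
            # scalar, so corners can be compared as plain numbers on that axis.
            return ax * proj[0] + ay * proj[1]

        axis = (3.0, 4.0)                     # a direction vector, not a slope
        corners = [(1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
        scalars = [generate_scalar(c, axis) for c in corners]
        print(min(scalars), max(scalars))     # one shape's interval on the axis

    SAT then compares the [min, max] interval of one shape's corner scalars against the other shape's on each candidate axis; if the intervals fail to overlap on any axis, the shapes do not collide.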

    Read the article

  • Are these non-standard applications of rendering practical in games?

    - by maul
    I've recently got into 3D, and I came up with a few different "tricky" rendering techniques. Unfortunately I don't have the time to work on this myself, but I'd like to know if these are known methods and if they can be used in practice.

    Hybrid rendering. I know that ray tracing is still not fast enough for real-time rendering, at least on home computers. I also know that hybrid rendering (a combination of rasterization and ray tracing) is a well-known idea. However, I had the following thought: one could separate a scene into "important" and "not important" objects. First you render the "not important" objects using traditional rasterization. In this pass you also render the "important" objects using a special shader that simply marks these parts of the image using a special color, or some stencil/depth-buffer trickery. Then in the second pass you read back the results of the first pass and start ray tracing, but only from the pixels that were marked by the "important" objects' shader. This would allow you to ray-trace exactly what you need and nothing more. Could this be fast enough for real-time effects?

    Rendered physics. I'm specifically talking about bullet physics: the intersection of a very small object (point/bullet) that travels along a straight line with other, relatively slow-moving, fairly constant objects. More specifically: hit detection. My idea is that you could render the scene from the point of view of the gun (or the bullet), with every object in the scene drawn in a different color. You only need to render a 1x1-pixel window, the center of the screen (again, from the gun's point of view). Then you simply check that central pixel, and the color tells you what you hit. This is pixel-perfect hit detection based on the graphical representation of objects, which is not common in games. As far as I know, traditional OpenGL "picking" is a similar method. This could be extended in a few ways: for larger (non-bullet) objects you render a larger portion of the screen; if you put a specially colored plane in the middle of the scene (exactly where the bullet will be after the current frame) you get a method that works like the traditional slow-moving iterative physics test as well; and you could simulate objects that the bullet can pass through (with decreased velocity) using alpha blending or some similar trick.

    So are these techniques in use anywhere, and/or are they practical at all?
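    The color-ID trick in the second idea is the standard "color picking" scheme; the bookkeeping is just a reversible packing of an object id into an RGB triple. A tiny sketch:

        def id_to_color(object_id):
            # Pack a 24-bit object id into an (r, g, b) triple for the pick pass.
            return ((object_id >> 16) & 0xFF, (object_id >> 8) & 0xFF, object_id & 0xFF)

        def color_to_id(r, g, b):
            # Invert the packing after reading back the center pixel.
            return (r << 16) | (g << 8) | b

        # Draw each object flat-shaded in id_to_color(id); reading the single
        # center pixel then identifies exactly which object the ray hit.
        assert color_to_id(*id_to_color(123456)) == 123456

    The pick pass has to be rendered with lighting, blending, anti-aliasing and texture filtering disabled, since any of those would mix neighbouring ids into unreadable colors.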

    Read the article

  • Calculating distance from viewer to object in a shader

    - by Jay
    Good morning. I'm working through creating the spherical billboards technique outlined in this paper. I'm trying to create a shader that calculates the distance from the camera to all objects in the scene and stores the results in a texture, but I keep getting either a completely black or a completely white texture. Here are my questions: (1) I assume the position that's automatically sent to the vertex shader from Ogre is in object space? (2) The GPU interpolates the output position from the vertex shader when it sends it to the fragment shader. Does it do the same for my depth calculation, or do I need to move that calculation to the fragment shader? (3) Is there a way to debug shaders? I have no errors, but I'm not sure I'm getting my parameters passed into the shaders correctly. Here's my shader code:

        void DepthVertexShader( float4 position : POSITION,
                                uniform float4x4 worldViewProjMatrix,
                                uniform float3 eyePosition,
                                out float4 outPosition : POSITION,
                                out float Depth )
        {
            // position is in object space
            // outPosition is in clip space
            outPosition = mul( worldViewProjMatrix, position );

            // calculate distance from camera to vertex
            Depth = length( eyePosition - position );
        }

        void DepthFragmentShader( float Depth : TEXCOORD0,
                                  uniform float fNear,
                                  uniform float fFar,
                                  out float4 outColor : COLOR )
        {
            // clamp output using clip planes
            float fColor = 1.0 - smoothstep( fNear, fFar, Depth );
            outColor = float4( fColor, fColor, fColor, 1.0 );
        }

    fNear is the near clip plane for the scene and fFar is the far clip plane for the scene.
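    One thing to scrutinize in the vertex shader: eyePosition is normally supplied in world space, while the POSITION input arrives in object space, so the subtraction mixes two coordinate frames and the resulting distances can easily saturate to all black or all white. The distance only means something once both points live in the same space. A numpy sketch of the intended calculation, assuming a column-vector convention and a world (model) matrix for the object:

        import numpy as np

        def vertex_depth(obj_position, world_matrix, eye_position):
            # Bring the object-space vertex into world space first...
            p = world_matrix @ np.append(obj_position, 1.0)
            # ...then the distance to the world-space eye is meaningful.
            return float(np.linalg.norm(np.asarray(eye_position) - p[:3]))

    In the shader that corresponds to passing a separate world matrix and taking length() of the difference in world space. As for question 2: vertex outputs like Depth are indeed interpolated across the triangle, which is exactly what a per-pixel distance wants.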

    Read the article

  • Dynamic content reloading

    - by Kikaimaru
    Is there a relatively simple way to dynamically reload content files (i.e. effect files)? I know I can do the following:

        1. Detect a change to a file.
        2. Run the content pipeline to rebuild that specific file.
        3. Unload ALL content that was loaded.
        4. Load all content.

    and use double references to reference content files. The problem is with step 3 (and step 2 isn't that nice either). I need to unload everything because if I have a model Hero.x which references the Model.fx effect, and I change the Model.fx file, I need to reload the Hero.x file, which will then call LoadExternalReference on Model.fx. So I guess the question is: did someone manage to make this work without rewriting the whole ContentManager (and every ContentReader) and tracking calls to LoadExternalReference?
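    The "double references" half of the plan is worth illustrating, since it is what lets a reload swap assets without touching every consumer. A language-agnostic sketch in Python of a handle/manager pair (all names are mine; this does not by itself solve XNA's LoadExternalReference dependency tracking):

        class AssetHandle:
            # Game code holds the handle; the handle holds the current asset.
            # Reloading swaps the inner reference in place.
            def __init__(self, path, loader):
                self._path, self._loader = path, loader
                self._asset = loader(path)

            def get(self):
                return self._asset

            def reload(self):
                self._asset = self._loader(self._path)

        class AssetManager:
            def __init__(self):
                self._handles = {}

            def load(self, path, loader):
                if path not in self._handles:
                    self._handles[path] = AssetHandle(path, loader)
                return self._handles[path]

            def on_file_changed(self, path):
                # Rebuild and swap only the changed file, not everything.
                if path in self._handles:
                    self._handles[path].reload()

    To avoid the unload-everything step you would additionally keep a map from each file to its dependents (Hero.x depends on Model.fx) and reload just that chain when a change fires.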

    Read the article

  • OpenGL ES multiple objects not being rendered

    - by ladiesMan217
    I am doing the following to render multiple balls moving around the screen, but only one ball ever appears and functions. I don't know why the other (count-1) balls are not being drawn.

        public void onDrawFrame(GL10 gl) {
            gl.glDisable(GL10.GL_DITHER);
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            gl.glMatrixMode(GL10.GL_MODELVIEW);
            gl.glClientActiveTexture(DRAWING_CACHE_QUALITY_HIGH);
            gl.glLoadIdentity();
            for (int i = 0; i < mParticleSystem.getParticleCount(); i++) {
                gl.glPushMatrix();
                gl.glTranslatef(mParticleSystem.getPosX(i), mParticleSystem.getPosY(i), -3.0f);
                gl.glScalef(0.3f, 0.3f, 0.3f);
                gl.glColor4f(r.nextFloat(), r.nextFloat(), r.nextFloat(), 1);
                gl.glEnable(GL10.GL_TEXTURE_2D);
                mParticleSystem.getBall(i).draw(gl);
                gl.glPopMatrix();
            }
        }

    Here is my void draw(GL10 gl) method:

        public void draw(GL10 gl) {
            gl.glEnable(GL10.GL_CULL_FACE);
            gl.glEnable(GL10.GL_SMOOTH);
            gl.glEnable(GL10.GL_DEPTH_TEST);
            // gl.glTranslatef(0.2f, 0.2f, -3.0f);
            // gl.glScalef(size, size, 1.0f);
            gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertBuff);
            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points / 2);
            gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        }

    Read the article

  • Make GameObject Stand on a Surface Facing a Certain Direction

    - by Julian
    I want to make a biped character stand on any surface I click on. Surfaces have up vectors along any of the positive or negative X, Y, Z axes. So imagine a cube, with each face being a GameObject whose up vector points directly away from the cube. If my character is facing "forward" and I click on a surface which is to the left or right of me (the left or right walls), I want my character to end up standing on that surface but still facing the direction it initially was. If I click on a wall which is in the forward path of my character, I want him to end up standing on that surface with his forward being what was once "up" relative to the character. Here is the code I am working with now:

        void Update()
        {
            if (Input.GetMouseButtonUp(0))
            {
                RaycastHit hit;
                var ray = Camera.main.ScreenPointToRay(Input.mousePosition);
                if (Physics.Raycast(ray, out hit))
                {
                    Vector3 upVectBefore = transform.up;
                    Vector3 forwardVectBefore = transform.forward;
                    Quaternion rotationVectBefore = transform.rotation;
                    Vector3 hitPosition = hit.transform.position;
                    transform.position = hitPosition;

                    float lookDifference = Vector3.Distance(hit.transform.up, forwardVectBefore);
                    if (Vector3.Distance(hit.transform.up, upVectBefore) < .23) // same normal
                    {
                        transform.rotation = rotationVectBefore;
                    }
                    else if (lookDifference > 1.412 && lookDifference <= 1.70607) // side wall
                    {
                        transform.up = hit.transform.up;
                        transform.forward = forwardVectBefore;
                    }
                    else // head-on wall
                    {
                        transform.up = hit.transform.up;
                        transform.forward = upVectBefore;
                    }
                }
            }
        }

    The first case ("same normal") works fine; however, the other two do not work as I would like them to. Sometimes my character lies down on the surface or ends up on the wrong side of it. Does anyone know a nice way of solving this problem?
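    The underlying math is: project the old forward onto the plane of the new surface normal and rebuild the orientation from that pair, falling back to the old up when the projection degenerates (the head-on case). A vector-level sketch in Python of that rule (function name is mine):

        import numpy as np

        def align_to_surface(forward, up, new_up, eps=1e-6):
            new_up = np.asarray(new_up, float)
            new_up /= np.linalg.norm(new_up)
            # Keep the heading: strip out the component along the new normal.
            f = np.asarray(forward, float) - np.dot(forward, new_up) * new_up
            if np.linalg.norm(f) < eps:
                # Head-on wall: the character's old up becomes the new forward.
                f = np.asarray(up, float) - np.dot(up, new_up) * new_up
            return f / np.linalg.norm(f), new_up   # new forward, new up

    In Unity terms the projection is what Vector3.ProjectOnPlane(forward, newUp) computes, and the resulting pair feeds Quaternion.LookRotation(newForward, newUp); building one rotation this way also avoids the order-sensitivity of assigning transform.up and transform.forward separately, where the second assignment partially overrides the first.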

    Read the article

  • Unity: Render 2D textures on a 3D object's face

    - by www.Sillitoy.com
    I am not familiar with 3D graphics, and I'd like to know the right way to render some 2D figures at different points on one of the larger faces of a 3D object. My 3D object is just a cube representing a poker table. I have 2D PNGs for player placeholders, and I'd like to render these figures on the 3D object where needed. An alternative solution would be to render the whole face with one big picture containing all the placeholder figures, but that would waste memory and thus be less efficient. What do you suggest?

    Read the article

  • Separate collision mesh model?

    - by Menno Gouw
    I want to have another go at 3D in XNA. I have seen in some other games that they just have a separate, very low-poly "cage" model around the environment model, but I cannot find any reference to this, and I don't have much experience with 3D in XNA either. Is it possible to have this cage within each of my environment models already? Let's say I name the meshes within the .FBX wall and col_wall. How would I address these different meshes within XNA? The player would just have a tight collision cube around him. To make it a bit more efficient, I will divide the map up into cubes and only calculate collisions if the player is inside one. Question two: I can't find anything on how to do cube-vs-mesh collision. Is there a method for this? Or perhaps it is possible to build my collision cage out of cubes in the 3D app and, on loading the models in XNA, replace them directly with cubes? Then I could just do box-to-box collision, which should be very cheap and still give the player the ability to move over ledges on the static models.

    Read the article

  • glTranslate, how exactly does it work?

    - by mykk
    I have some trouble understanding how glTranslate works. At first I thought it would just add values to an axis to do the transformation. However, I then created two objects that load bitmaps; one has its matrix mode set to GL_TEXTURE:

        public class Background {
            float[] vertices = new float[] {
                0f, -1f, 0.0f,
                4f, -1f, 0.0f,
                0f,  1f, 0.0f,
                4f,  1f, 0.0f
            };
            ....
            private float backgroundScrolled = 0;

            public void scrollBackground(GL10 gl) {
                gl.glLoadIdentity();
                gl.glMatrixMode(GL10.GL_MODELVIEW);
                gl.glTranslatef(0f, 0f, 0f);
                gl.glPushMatrix();
                gl.glLoadIdentity();
                gl.glMatrixMode(GL10.GL_TEXTURE);
                gl.glTranslatef(backgroundScrolled, 0.0f, 0.0f);
                gl.glPushMatrix();
                this.draw(gl);
                gl.glPopMatrix();
                backgroundScrolled += 0.01f;
                gl.glLoadIdentity();
            }
        }

    and the other to GL_MODELVIEW:

        public class Box {
            float[] vertices = new float[] {
                0.5f, 0f,   0.0f,
                1f,   0f,   0.0f,
                0.5f, 0.5f, 0.0f,
                1f,   0.5f, 0.0f
            };
            ....
            private float boxScrolled = 0;

            public void scrollBackground(GL10 gl) {
                gl.glMatrixMode(GL10.GL_MODELVIEW);
                gl.glLoadIdentity();
                gl.glTranslatef(0f, 0f, 0f);
                gl.glPushMatrix();
                gl.glMatrixMode(GL10.GL_MODELVIEW);
                gl.glLoadIdentity();
                gl.glTranslatef(boxScrolled, 0.0f, 0.0f);
                gl.glPushMatrix();
                this.draw(gl);
                gl.glPopMatrix();
                boxScrolled += 0.01f;
                gl.glLoadIdentity();
            }
        }

    They are both drawn in Renderer.onDraw. However, the background moves exactly 5 times faster. If I multiply boxScrolled by 5 they are in sync and move together. If I modify the background's vertices to be

        float[] vertices = new float[] {
            1f, -1f, 0.0f,
            0f, -1f, 0.0f,
            1f,  1f, 0.0f,
            0f,  1f, 0.0f
        };

    it is also in sync with the box. So, what is going on under glTranslate?

    Read the article

  • BOX2D Kinematic Platform with parallax layer

    - by Marcell
    I am using a kinematic body for my moving platform on the x-axis, so I set the linear velocity to b2Vec2(5,0). When the player jumps on the platform, it works like it is supposed to. But the thing is that my platform is on the obstacle layer, and I am moving it with the parallax layer. So if I SetTransform the kinematic platform to follow the obstacle layer, its physics will not work and the player will slip off the platform. I'm developing for iOS and using the cocos2d API. Is there any way around this?
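    A common workaround is to keep driving the body with velocity, but derive that velocity from wherever the parallax layer wants the platform to be this frame, so the solver still sees the platform moving (and carries the player along) instead of teleporting. A sketch of the per-frame calculation, assuming a fixed timestep dt:

        def follow_velocity(body_x, target_x, dt):
            # Velocity that lands the kinematic body exactly on the layer's
            # target position after this step, instead of SetTransform-ing
            # it there (which the contact solver sees as zero velocity).
            return (target_x - body_x) / dt

        vx = follow_velocity(body_x=10.0, target_x=10.2, dt=1.0 / 60.0)
        print(vx)   # 12.0 -> platformBody->SetLinearVelocity(b2Vec2(12.0f, 0))

    Because the body then has a real linear velocity each step, friction against the player's feet keeps working and the slipping goes away.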

    Read the article

  • Rotation angle based on touch move

    - by Siddharth
    I want to rotate my stick based on the movement of touches on the screen. From my calculation I am not able to find the correct angle in degrees, so please provide guidance. My code snippet is below:

        if (pSceneTouchEvent.isActionMove()) {
            pValueX = pSceneTouchEvent.getX();
            pValueY = CAMERA_HEIGHT - pSceneTouchEvent.getY();
            rotationAngle = (float) Math.atan2(pValueX, pValueY);
            stick.setRotation((float) MathUtils.radToDeg(rotationAngle));
        }
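    Two things commonly go wrong with this kind of calculation: atan2 takes its arguments as (y, x), not (x, y), and the angle should be measured from the stick's center, not from the screen origin. A small sketch of the usual form (the center coordinates are placeholders):

        import math

        def touch_angle_degrees(center_x, center_y, touch_x, touch_y):
            dx = touch_x - center_x
            dy = touch_y - center_y
            # atan2(dy, dx): 0 degrees along +x, growing counterclockwise.
            return math.degrees(math.atan2(dy, dx))

        print(touch_angle_degrees(0, 0, 5, 0))   # 0.0
        print(touch_angle_degrees(0, 0, 0, 5))   # 90.0

    Depending on the engine's rotation convention you may still need to negate the result or add an offset, but swapped atan2 arguments are the first thing to rule out.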

    Read the article

  • How can I load .obj files in the Soya3D engine?

    - by John Riselvato
    I recently found Soya3D. I want to import .obj files, but it seems to only accept .data files. How can I import .obj files? Importing a .obj file named "house" produces this error:

        Traceback (most recent call last):
          File "introduction.py", line 7, in <module>
            model = soya.Model.get("house")
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 259, in get
            return klass._alls.get(filename) or klass._alls.setdefault(filename, klass.load(filename))
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 268, in load
            dirname = klass._get_directory_for_loading_and_check_export(filename)
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 194, in _get_directory_for_loading_and_check_export
            dirname = klass._get_directory_for_loading(filename, ext)
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 171, in _get_directory_for_loading
            raise ValueError("Cannot find a %s named %s!" % (klass, filename))
        ValueError: Cannot find a <class 'soya.Model'> named house!
        * Soya3D * Quit...

    Read the article
