Search Results



  • Arcball 3D camera - how to convert from camera to object coordinates

    - by user38873
    I have checked multiple threads before posting, but I haven't been able to figure this one out. I have been following this tutorial, but I'm not using GLM; I've been implementing everything myself up until now (lookAt, etc.): http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Arcball

    I can rotate with a click and drag of the mouse, but when I rotate 90° around Y and then move the mouse up or down, it rotates on the wrong axis. This problem is addressed in this part of the tutorial:

    "An extra trick is converting the rotation axis from camera coordinates to object coordinates. It's useful when the camera and object are placed differently. For instance, if you rotate the object by 90° on the Y axis ("turn its head" to the right), then perform a vertical move with your mouse, you make a rotation on the camera X axis, but it should become a rotation on the Z axis (plane barrel roll) for the object. By converting the axis to object coordinates, the rotation will respect that the user works in camera coordinates (WYSIWYG). To transform from camera to object coordinates, we take the inverse of the MV matrix (from the MVP matrix triplet)."

    What I have to do, according to the tutorial, is convert my axis_in_camera_coord to object coordinates so the rotation is done correctly, but I'm confused about which matrix I use to do just that. The tutorial talks about converting the axis from camera to object coordinates using the inverse of the MV matrix, then shows these lines of code, which I haven't been able to understand:

        glm::mat3 camera2object = glm::inverse(glm::mat3(transforms[MODE_CAMERA]) * glm::mat3(mesh.object2world));
        glm::vec3 axis_in_object_coord = camera2object * axis_in_camera_coord;

    So what do I apply to my calculated axis? The inverse of what? I suppose the inverse of the model-view? So my question is: how do you transform a camera-space axis to an object-space axis? Do I apply the inverse of the lookAt matrix? My code:

        if (cur_mx != last_mx || cur_my != last_my) {
            va = get_arcball_vector(last_mx, last_my);
            vb = get_arcball_vector(cur_mx, cur_my);
            angle = acos(min(1.0f, dotProduct(va, vb))) * 20;
            axis_in_camera_coord = crossProduct(va, vb);
            axis.x = axis_in_camera_coord[0];
            axis.y = axis_in_camera_coord[1];
            axis.z = axis_in_camera_coord[2];
            axis.w = 1.0f;
            last_mx = cur_mx;
            last_my = cur_my;
        }
        Quaternion q = qFromAngleAxis(angle, axis);
        Matrix m;
        qGLMatrix(q, m);
        vi = mMultiply(m, vi);
        up = mMultiply(m, up);
        ViewMatrix = ogLookAt(vi.x, vi.y, vi.z, 0, 0, 0, up.x, up.y, up.z);
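    A minimal sketch of that conversion step, assuming GLM-style types; ViewMatrix stands in for your ogLookAt result and ModelMatrix for the object's world transform (both names are placeholders):

        // The rotational 3x3 part of the ModelView matrix, inverted, maps a
        // camera-space axis into object space.
        glm::mat3 camera2object = glm::inverse(glm::mat3(ViewMatrix) * glm::mat3(ModelMatrix));
        glm::vec3 axis_in_object_coord = camera2object * axis_in_camera_coord;
        // Feed axis_in_object_coord (not the camera-space axis) to qFromAngleAxis.

    If you are not using GLM, the equivalent is inverting (or, for a pure rotation, transposing) the upper-left 3x3 of view * model and multiplying your camera-space axis by it before building the quaternion.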


  • JMonkey Engine (JME) load Blender scene with textures?

    - by leigero
    I am having the hardest time trying to accomplish the simplest task. I have created a floor and four walls (nothing complicated) in Blender, and added a basic material and cloud texture so they have something to look at other than gray. When I import them into JMonkey they show up as solid white objects with no shading or depth: white silhouettes. I thought this might be a lighting issue, but I have an ambient light added to the scene; I can remove that light or adjust its intensity and it has no effect on the scene. I exported all Blender files into OgreXML format, then converted them to .j3o format in JMonkey. I renamed the textures to match their corresponding mesh, and this didn't do anything either. Does anybody know how to create a flat object and put it into JMonkey with a texture? This sounds simple, and there is absolutely no information on this. This should be step 1!


  • If I use my own normal values, should I turn off winding order culling?

    - by Phil
    I've discovered that I managed to program a series of boxes with indexed vertices in such a way that every other triangle (half of each face) has a backwards winding order. As a result, XNA is culling half of them. However, my Vertex objects contain normal data that I have explicitly set, and I am going to implement my own backface culling shortly to reduce the size of the VertexBuffer. Should I turn off winding-order culling and manage it myself, or should I make sure the winding order is consistent and let XNA handle it?
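    Since the question mentions rolling your own backface culling, here is a minimal framework-agnostic sketch of the CPU-side test (all names hypothetical):

        struct Vec3 { float x, y, z; };

        float dot(const Vec3 &a, const Vec3 &b) {
            return a.x * b.x + a.y * b.y + a.z * b.z;
        }

        // A face is back-facing when its normal points away from the camera,
        // i.e. it has a non-positive component along the face-to-eye direction.
        bool isBackFacing(const Vec3 &faceNormal, const Vec3 &faceToCamera) {
            return dot(faceNormal, faceToCamera) <= 0.0f;
        }

    Note that GPU winding-order culling is essentially free, so keeping the winding consistent and letting the hardware cull is usually the cheaper path.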


  • Cool examples of procedural pixel shader effects?

    - by Robert Fraser
    What are some good examples of procedural/screen-space pixel shader effects? No code necessary; just looking for inspiration. In particular, I'm looking for effects that are not dependent on geometry or the rest of the scene (would look okay rendered alone on a quad) and are not image processing (don't require a "base image", though they can incorporate textures). Multi-pass or single-pass is fine. Screenshots or videos would be ideal, but ideas work too. Here are a few examples of what I'm looking for (all from the RenderMonkey samples): PS - I'm aware of this question; I'm not asking for a source of actual shader implementations but instead for some inspirational ideas -- and the ones at the NVIDIA Shader Library mostly require a scene or are image processing effects. EDIT: this is an open-ended question and I wish there was a good way to split the bounty. I'll award the rep to the best answer on the last day.


  • Problems exporting terrain from Autodesk 3ds Max

    - by Jatin Kumar
    I am trying to make a small Counter-Strike sort of game. For the terrain part, I have exported the terrain in 3ds format from Autodesk 3ds Max and imported it into OpenGL using lib3ds. It's working fine, but with a few problems:

    1. The terrain is mainly made up of some cuboid boxes with textures on them, placed on a big flat surface with a boundary wall. In OpenGL I have enabled anti-aliasing, but there is still too much aliasing on the boundaries (visible when rotating the camera).
    2. I have tiled the floor with an image, but in OpenGL it is just the single image stretched over the complete surface (see the sketch after this list).
    3. I have exported an animated model (skeleton + mesh + material + animation) from 3ds Max and used the cal3d library for reading it. The model has a gun, which is not appearing in OpenGL, and it too has too much of an aliasing problem.

    I have googled around but couldn't find any relevant solutions. Thanks in advance
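    On point 2, a sketch of the usual fix, assuming floorTexture is the GL texture id of the floor image (hypothetical name): set the wrap mode to repeat and give the floor quad texture coordinates greater than 1.

        glBindTexture(GL_TEXTURE_2D, floorTexture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        // With texture coordinates running 0..10 across the floor quad, the
        // image now tiles 10 times per axis instead of stretching once.

    Mipmapped minification (GL_LINEAR_MIPMAP_LINEAR with generated mipmaps) also cuts the shimmer on distant tiles; the jagged geometry edges themselves need multisampling rather than texture filtering.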


  • How many vertices are needed to draw reasonably good-looking terrain?

    - by bobbaluba
    I have some pretty expensive code in my terrain vertex shader, and I am trying to figure out if it will still be fast enough. I haven't yet developed a level-of-detail system for my terrain rendering, but I can easily benchmark my code by just drawing mock triangles. My problem is, how do I know how many vertices to test with? Are there for example rendering engines that will tell me how many terrain vertices are currently on-screen? Or maybe it is possible to create a formula that will give me an estimate based on screen resolution?
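    For a ballpark figure to benchmark against, here is a crude screen-space bound, assuming a level-of-detail scheme that keeps grid quads near some target on-screen size (all names hypothetical):

        int estimateTerrainVertices(int screenW, int screenH, float quadPx) {
            // A uniform grid whose quads cover quadPx x quadPx pixels on screen
            // has one shared vertex per grid corner.
            float cols = screenW / quadPx;
            float rows = screenH / quadPx;
            return (int)((cols + 1.0f) * (rows + 1.0f));
        }
        // e.g. 1280x720 with 4-pixel quads gives roughly 58,000 vertices, so
        // benchmarking the shader with ~60k mock vertices would be a fair test.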


  • Any reliable polygon normal calculation code?

    - by Jenko
    I'm currently calculating the normal vector of a polygon using this code, but for some faces here and there it calculates a wrong normal. I don't really know what's going on or where it fails, but it's not reliable. Do you have any polygon normal calculation code that's tested and known to be reliable?

        // calculate the normal of a polygon using all of its points
        var n:int = points.length;
        var x:Number = 0;
        var y:Number = 0;
        var z:Number = 0;

        // ensure all points are above 0
        var minx:Number = 0, miny:Number = 0, minz:Number = 0;
        for (var p:int = 0, pl:int = points.length; p < pl; p++) {
            var po:_Point3D = points[p] = points[p].clone();
            if (po.x < minx) { minx = po.x; }
            if (po.y < miny) { miny = po.y; }
            if (po.z < minz) { minz = po.z; }
        }
        for (p = 0; p < pl; p++) {
            po = points[p];
            po.x -= minx;
            po.y -= miny;
            po.z -= minz;
        }

        var cur:int = 1, prev:int = 0, next:int = 2;
        for (var i:int = 1; i <= n; i++) {
            // using Newell's method
            x += points[cur].y * (points[next].z - points[prev].z);
            y += points[cur].z * (points[next].x - points[prev].x);
            z += points[cur].x * (points[next].y - points[prev].y);
            cur = (cur + 1) % n;
            next = (next + 1) % n;
            prev = (prev + 1) % n;
        }

        // length of the normal
        var length:Number = Math.sqrt(x * x + y * y + z * z);

        // normalize to a unit vector
        if (length != 0) {
            x = x / length;
            y = y / length;
            z = z / length;
        } else {
            throw new Error("Cannot calculate normal since triangle has an area of 0");
        }


  • Anisotropic and trilinear filtering?

    - by fedab
    I'm confused about the usage of trilinear filtering and anisotropic filtering in SharpDX. As far as I understand, trilinear filtering does linear filtering on the textures and, on a LOD change, also interpolates between the two LODs to smooth the transition. Anisotropic filtering makes the texture bigger; it should then be possible to use trilinear filtering to do the same thing, working on the bigger anisotropic textures. This gives a less blurred image when you use anisotropy, because the interpolation is better. So it should be possible to use trilinear filtering and anisotropic filtering at the same time. But in the SamplerState I can only choose Filter.Anisotropic or Filter.MinMagMipLinear (that should be trilinear, right?). You can see all possible filters here: D3D11 Filter Enumeration. So my question: can you use both techniques together, and if yes, how can I achieve that in SharpDX with SamplerState?
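    For what it's worth, in D3D11 (which SharpDX wraps one-to-one) the anisotropic filter already interpolates between mip levels the way trilinear does, so Filter.Anisotropic is the combined mode and there is no separate "trilinear plus anisotropic" flag. A sketch of the underlying sampler description in C++ terms; the SharpDX SamplerStateDescription fields mirror these (device is your ID3D11Device):

        D3D11_SAMPLER_DESC desc = {};
        desc.Filter = D3D11_FILTER_ANISOTROPIC;  // aniso min/mag plus mip interpolation
        desc.MaxAnisotropy = 16;                 // 1..16; higher sharpens glancing angles
        desc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
        desc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
        desc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
        desc.MinLOD = 0.0f;
        desc.MaxLOD = D3D11_FLOAT32_MAX;

        ID3D11SamplerState *sampler = nullptr;
        device->CreateSamplerState(&desc, &sampler);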


  • 3ds Max is exporting a model with more normals than vertices

    - by Delta
    I made a simple teapot with the "Create Standard Primitives" option and exported it as a COLLADA file, and ended up with this:

        <float_array id="Teapot001-POSITION-array" count="1590"
        <float_array id="Teapot001-Normal0-array" count="9216"

    From what I know there should be only one normal per vertex; am I wrong? What am I supposed to do with that many normals? Just put them all in the normal buffer as-is?
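    The counts make sense if the exporter wrote one normal per face corner rather than per vertex: 1590 / 3 = 530 positions, but 9216 / 3 = 3072 normals, which is three normals for each of 3072 / 3 = 1024 triangles; that is how exporters preserve hard edges and smoothing groups. A sketch of the usual expansion into a flat vertex buffer (names hypothetical; positionIndex/normalIndex are the per-corner indices from the COLLADA <p> element):

        #include <vector>

        struct Vec3 { float x, y, z; };
        struct Vertex { Vec3 position, normal; };

        std::vector<Vertex> expand(const std::vector<Vec3> &positions,
                                   const std::vector<Vec3> &normals,
                                   const std::vector<int> &positionIndex,
                                   const std::vector<int> &normalIndex) {
            std::vector<Vertex> out;
            // One output vertex per triangle corner: the shared position is
            // duplicated so each corner can carry its own normal.
            for (size_t i = 0; i < positionIndex.size(); ++i)
                out.push_back({positions[positionIndex[i]], normals[normalIndex[i]]});
            return out;
        }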


  • How to implement a motion blur effect?

    - by PlayerOne
    I wanted to know how one would implement the motion blur or fade effect behind a soccer ball. Here is what I was thinking: you have the ball's current position and you also keep its previous position (from a moment back), and you draw a "streak" sprite between the two points. I have seen this effect implemented many times in various 2D games and wanted to know if there is a standard technique. http://i45.tinypic.com/2n24j7r.png
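    That is essentially the standard technique: either a streak sprite stretched between the two positions, or a short trail of "ghost" sprites drawn with decreasing alpha. A minimal framework-agnostic sketch of the latter (drawSprite is a hypothetical renderer call):

        #include <array>

        struct Vec2 { float x, y; };

        void drawSprite(Vec2 pos, float alpha);    // hypothetical draw call

        const int kTrail = 8;                      // number of ghost positions kept
        std::array<Vec2, kTrail> history{};        // ring buffer of past positions
        int head = 0;

        void recordPosition(Vec2 p) {              // call once per frame
            history[head] = p;
            head = (head + 1) % kTrail;
        }

        void drawTrail() {
            for (int i = 0; i < kTrail; ++i) {
                int idx = (head + i) % kTrail;          // oldest entry first
                float alpha = (i + 1) / (float)kTrail;  // most transparent at the tail
                drawSprite(history[idx], alpha);
            }
        }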


  • GLM Velocity Vectors - Basic Maths to Simulate Steering

    - by Reanimation
    UPDATE: code updated below, but I still need help adjusting my math.

    I have a cube rendered on the screen which represents a car (or similar). Using projection/model matrices and GLM I am able to move it back and forth along the axes and rotate it left or right. I'm having trouble with the vector mathematics to make the cube move forwards no matter what its current orientation is (i.e. if it's rotated right 30 degrees, when it moves forwards it should travel along that 30-degree heading on a new axis). I hope I've explained that correctly. This is what I've managed to do so far in terms of using GLM to move the cube:

        glm::vec3 vel; // velocity vector

        void renderMovingCube() {
            glUseProgram(movingCubeShader.handle());
            GLuint matrixLoc4MovingCube = glGetUniformLocation(movingCubeShader.handle(), "ProjectionMatrix");
            glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]);

            glm::mat4 viewMatrixMovingCube;
            viewMatrixMovingCube = glm::lookAt(camOrigin, camLookingAt, camNormalXYZ);

            vel.x = cos(rotX);
            vel.y = sin(rotX);
            vel *= moveCube; // move cube

            ModelViewMatrix = glm::translate(viewMatrixMovingCube, globalPos * vel);
            // bring ground and cube to bottom of screen
            ModelViewMatrix = glm::translate(ModelViewMatrix, glm::vec3(0, -48, 0));
            ModelViewMatrix = glm::rotate(ModelViewMatrix, rotX, glm::vec3(0, 1, 0)); // manually turn
            glUniformMatrix4fv(glGetUniformLocation(movingCubeShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); // pass matrix to shader
            movingCube.render(); // draw
            glUseProgram(0);
        }

    Keyboard input:

        void keyboard() {
            char BACKWARD = keys['S'];
            char FORWARD = keys['W'];
            char ROT_LEFT = keys['A'];
            char ROT_RIGHT = keys['D'];

            if (FORWARD) // W - move forwards
            {
                globalPos += vel;
                //globalPos.z -= moveCube;
                BACKWARD = false;
            }
            if (BACKWARD) // S - move backwards
            {
                globalPos.z += moveCube;
                FORWARD = false;
            }
            if (ROT_LEFT) // A - turn left
            {
                rotX += 0.01f;
                ROT_LEFT = false;
            }
            if (ROT_RIGHT) // D - turn right
            {
                rotX -= 0.01f;
                ROT_RIGHT = false;
            }
        }

    Where am I going wrong with my vectors? I would like to change the direction of the cube (which works) and then move forwards in that direction.
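    One likely culprit: the velocity is built in the XY plane (cos/sin into x and y) while the cube turns around Y and drives in the XZ plane. A GLM sketch of heading-based movement, assuming rotX is the yaw angle in radians and the unturned cube faces -Z (a typical OpenGL convention; flip signs if yours differs):

        glm::vec3 forward(-sin(rotX), 0.0f, -cos(rotX)); // unit heading in the XZ plane

        if (FORWARD)  globalPos += forward * moveCube;   // moveCube acts as speed
        if (BACKWARD) globalPos -= forward * moveCube;

        // Then translate by globalPos directly (not globalPos * vel) before
        // applying glm::rotate(ModelViewMatrix, rotX, glm::vec3(0, 1, 0)).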


  • Implementing Light Volume Front Faces

    - by cubrman
    I recently read an article about light-indexed deferred rendering here: http://code.google.com/p/lightindexed-deferredrender/ It explains its ideas in a clear way, but there was one point that I failed to understand, and it is in fact one of the most interesting ones, as it explains how to implement transparency with this approach:

    "Typically when rendering light volumes in deferred rendering, only surfaces that intersect the light volume are marked and lit. This is generally accomplished by a "shadow volume like" technique of rendering back faces – incrementing stencil where depth is greater than – then rendering front faces and only accepting when depth is less than and stencil is not zero. By only rendering front faces where depth is less than, all future lookups by fragments in the forward rendering pass will get all possible lights that could hit the fragment."

    Can anyone explain how exactly you need to render only the front faces? And why do you need the front faces at all? Why can't we simply render all the lights and store the ones that overlap at this pixel in a texture? Does this approach serve as a cut-off plane to discard lights blocked by opaque geometry?
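    A sketch of the two passes as I read that paragraph, in desktop GL terms (drawLightVolume is a placeholder; only the depth/stencil state is shown):

        void drawLightVolume(); // placeholder for rendering the light's bounding mesh

        // Pass 1: back faces of the light volume. Increment stencil where the
        // volume's back face lies behind the scene depth ("depth greater than").
        glEnable(GL_STENCIL_TEST);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_FALSE);
        glCullFace(GL_FRONT);                    // keep back faces only
        glDepthFunc(GL_GREATER);
        glStencilFunc(GL_ALWAYS, 0, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);  // increment on depth-test pass
        drawLightVolume();

        // Pass 2: front faces. Accept only where depth is less than the scene
        // and stencil is non-zero; this pass writes the light's index.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glCullFace(GL_BACK);                     // keep front faces only
        glDepthFunc(GL_LESS);
        glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        drawLightVolume();

    As I read it, the front-face pass is what actually writes the light index for every screen position the light could reach, while the back-face stencil pass culls pixels where the volume is entirely hidden behind opaque geometry, which is also why the front faces cannot simply be skipped.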


  • LibGdx efficient data saving/loading?

    - by grimrader22
    Currently, my LibGDX game consists of a 512 x 512 map of tiles and entities such as players and monsters. I am wondering how to efficiently save and load my level data. At the moment I am using JSON serialization for each class I want to save: I implement the Json.Serializable interface on all of these classes and write only the variables that are necessary.

    So my map consists of 512 x 512 tiles; that's 262,144 of them. Each cell on the map holds a Tile object, which points to some shared final Tile object like GRASS_TILE or STONE_TILE. When I serialize each level tile, the final Tile that it points to is re-serialized over and over again, so if I have 100 tiles all pointing to GRASS_TILE, the data of GRASS_TILE is written 100 times over. When I deserialize my objects, 100 GrassTile objects are created, but they are each their own object; they no longer point to the shared final tile object. I feel like this makes reading/writing files very slow.

    If I were to abandon JSON serialization, to my knowledge my next best option would be saving the level data to a SQL database. Unless there is a way to speed up serializing/deserializing 262,144 tiles, I may have to do this. Is this a good idea? Could I really write that many tiles to the database efficiently?

    To sum up: I am trying to save my levels using JSON serialization, but it is VERY slow. What other options do I have for saving the data of so many tiles? I should also note that the JSON serialization is not slow on a PC; it is only very slow on a mobile device. Since file reading/writing is so slow on mobile devices, what can I do?
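    A common alternative is the flyweight approach: serialize one tile ID per cell instead of the Tile objects, and map the IDs back to the shared GRASS_TILE/STONE_TILE instances on load, so no tile definition is ever duplicated. The idea is language-agnostic; a minimal C-style sketch (names hypothetical):

        #include <cstdint>
        #include <cstdio>

        // tileIds: one byte per cell, so 512 * 512 = 262,144 bytes (~256 KB).
        void saveLevel(const uint8_t *tileIds, int w, int h, const char *path) {
            FILE *f = fopen(path, "wb");
            if (!f) return;
            fwrite(tileIds, 1, (size_t)w * h, f);  // single sequential write
            fclose(f);
        }
        // On load, read the same block back and look each ID up in a table of
        // shared tile definitions, so GRASS_TILE exists exactly once again.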


  • Audio Panning using RtAudio

    - by user1801724
    I use the RtAudio library. I would like to implement an audio program where I can control the panning (e.g. shifting the sound from the left channel to the right channel). In my specific case, I use duplex mode (you can find an example here: duplex mode), meaning I link the microphone input to the speaker output. I searched the web but did not find anything useful. Should I apply a filter to the output buffer? What kind of filter? Can anyone help me? Thanks
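    No filter is strictly needed for panning: it is just a per-channel gain applied while copying input to output. A minimal sketch of a duplex callback with constant-power panning, assuming float32 streams, one input channel, and interleaved stereo output (the signature follows RtAudioCallback):

        #include "RtAudio.h"
        #include <cmath>

        float pan = 0.0f;  // -1 = hard left, +1 = hard right

        int duplexCallback(void *outputBuffer, void *inputBuffer,
                           unsigned int nFrames, double /*streamTime*/,
                           RtAudioStreamStatus /*status*/, void * /*userData*/) {
            const float *in = static_cast<const float *>(inputBuffer);
            float *out = static_cast<float *>(outputBuffer);
            const float angle = (pan + 1.0f) * 0.5f * 1.5707963f; // map to 0..pi/2
            const float gainL = std::cos(angle);  // equal power at center position
            const float gainR = std::sin(angle);
            for (unsigned int i = 0; i < nFrames; ++i) {
                out[2 * i]     = gainL * in[i];   // left channel
                out[2 * i + 1] = gainR * in[i];   // right channel
            }
            return 0;
        }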


  • Game Sound Effects Availability

    - by Ben
    Is there a need in the community for affordable game-focused sound effect packs? I am considering putting together some effects specifically geared toward games and indie developers that desire to get a working prototype quickly off the ground. Is there a need for this, or is there another standard "go-to" spot for this kind of thing? I want to offer value to the community but wanted to assess the need first. If anyone has thoughts, insight, or personal opinions on this I would love to hear it!


  • Asset missing problem XNA

    - by ChocoMan
    I'm using VS2010 with XNA 4.0 and I'm trying to load an FBX model with a texture on the screen. The problem I'm having is this error:

        Missing Asset: C:\Users\ChocoMan\Documents\Visual Studio 2010\Projects\XNAGame\Documents\Visual Studio\Projects\XNAGame\XNAGameContent\Textures\texture.bmp

    but the actual path to the texture is:

        C:\Users\ChocoMan\Documents\Visual Studio\Projects\XNAGame\XNAGameContent\Textures\texture.bmp

    Also, when I linked the texture in Maya, I used the above address. Does anyone know why VS is looking for an incorrect address that doesn't exist?


  • Problem with Assimp 3D model loader

    - by Brendan Webster
    In my game I have model-loading functions for the Assimp model-loading library. I can load the model and render it, but the model displays incorrectly: it loads in as if it were using a separate projection matrix. I have looked over my code again and again, but I probably keep missing the obvious reason why this is happening. Here is an image from my game: It's simply a six-sided cube, but it's way off! Here are my code snippets for rendering the cube to the screen:

        void C_MediaLoader::display(void)
        {
            float tmp;

            glTranslatef(0, 0, 0);

            // rotate it around the y axis
            glRotatef(angle, 0.f, 0.f, 1.f);
            glColor4f(1, 1, 1, 1);

            // scale the whole asset to fit into our view frustum
            tmp = scene_max.x - scene_min.x;
            tmp = aisgl_max(scene_max.y - scene_min.y, tmp);
            tmp = aisgl_max(scene_max.z - scene_min.z, tmp);
            tmp = (1.f / tmp);
            glScalef(tmp / 5, tmp / 5, tmp / 5);

            // center the model
            //glTranslatef( -scene_center.x, -scene_center.y, -scene_center.z );

            // if the display list has not been made yet, create a new one and
            // fill it with scene contents
            if (scene_list == 0) {
                scene_list = glGenLists(1);
                glNewList(scene_list, GL_COMPILE);
                // now begin at the root node of the imported data and traverse
                // the scenegraph by multiplying subsequent local transforms
                // together on GL's matrix stack.
                recursive_render(scene, scene->mRootNode);
                glEndList();
            }
            glCallList(scene_list);
        }

        void C_MediaLoader::recursive_render(const struct aiScene *sc, const struct aiNode *nd)
        {
            unsigned int i;
            unsigned int n = 0, t;
            struct aiMatrix4x4 m = nd->mTransformation;

            // update transform
            aiTransposeMatrix4(&m);
            glPushMatrix();
            glMultMatrixf((float*)&m);

            // draw all meshes assigned to this node
            for (; n < nd->mNumMeshes; ++n) {
                const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]];
                apply_material(sc->mMaterials[mesh->mMaterialIndex]);

                if (mesh->mNormals == NULL) {
                    glDisable(GL_LIGHTING);
                } else {
                    glEnable(GL_LIGHTING);
                }

                for (t = 0; t < mesh->mNumFaces; ++t) {
                    const struct aiFace* face = &mesh->mFaces[t];
                    GLenum face_mode;

                    switch (face->mNumIndices) {
                        case 1: face_mode = GL_POINTS; break;
                        case 2: face_mode = GL_LINES; break;
                        case 3: face_mode = GL_TRIANGLES; break;
                        default: face_mode = GL_POLYGON; break;
                    }

                    glBegin(face_mode);
                    for (i = 0; i < face->mNumIndices; i++) {
                        int index = face->mIndices[i];
                        if (mesh->mColors[0] != NULL)
                            glColor4fv((GLfloat*)&mesh->mColors[0][index]);
                        if (mesh->mNormals != NULL)
                            glNormal3fv(&mesh->mNormals[index].x);
                        glVertex3fv(&mesh->mVertices[index].x);
                    }
                    glEnd();
                }
            }

            // draw all children
            for (n = 0; n < nd->mNumChildren; ++n) {
                recursive_render(sc, nd->mChildren[n]);
            }

            glPopMatrix();
        }

    Sorry there is so much code to look through, but I really cannot find the problem, and I would love to have help.


  • About Alpha blending sprites in Direct3D9

    - by ambrozija
    I have a Direct3D9 application that is rendering ID3DXSprites. The problem I am experiencing is best described in this situation: I have a texture that is totally opaque. On top of it I draw a rectangle filled with a solid color and an alpha of 128. On top of the rectangle I have text that is totally opaque. I draw all of this and get the resulting image through a GetRenderTarget call. The problem is that on the resulting image, in the area where the transparent rectangle is, I have semi-transparent pixels. It is not a problem that the rectangle is transparent; the problem is that the resulting image is. The question is how to set up the blending so that in this situation I don't get transparent pixels in the resulting image. I use the sprite with D3DXSPRITE_ALPHABLEND, which sets the device state to D3DBLEND_SRCALPHA and D3DBLEND_INVSRCALPHA. I tried a couple of combinations of SetRenderState, like D3DBLEND_SRCALPHA, D3DBLEND_DESTALPHA, etc., but couldn't make it work. Thanks.
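    One approach, assuming the goal is an opaque result image: D3D9 can blend the color channels and the alpha channel with different factors, so you can keep the sprite's color blending but stop source alpha from replacing the render target's alpha. A sketch (device is your IDirect3DDevice9*; ID3DXSprite::Begin may reset render state, so these may need to be set after it):

        // Color still blends SRCALPHA / INVSRCALPHA via D3DXSPRITE_ALPHABLEND.
        device->SetRenderState(D3DRS_SEPARATEALPHABLENDENABLE, TRUE);
        device->SetRenderState(D3DRS_SRCBLENDALPHA,  D3DBLEND_ZERO); // ignore incoming alpha
        device->SetRenderState(D3DRS_DESTBLENDALPHA, D3DBLEND_ONE);  // keep target alpha opaque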


  • android: How to apply pinch zoom and pan to 2D GLSurfaceView

    - by mak_just4anything
    I want to apply a pinch-zoom and panning effect to a GLSurfaceView. It is an image editor, so it would not be a 3D object. I tried to implement it using these links:

        https://groups.google.com/forum/#!topic/android-developers/EVNRDNInVRU
        Want to apply pinch and zoom to GLSurfaceView (3d object)
        http://www.learnopengles.com/android-lesson-one-getting-started/

    These are all links for 3D object rendering. I cannot use ImageView, as I need to work with OpenGL, so I had to implement this on a GLSurfaceView. Can anyone suggest an approach, or reference links for such an implementation? I need it for 2D only.
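    For a 2D editor the whole effect can live in the projection: keep a zoom factor and pan offset updated from the gestures (on Android, ScaleGestureDetector supplies the pinch scale and touch deltas give the pan) and rebuild an orthographic projection every frame. A sketch in GL ES 1.x terms, meant for the renderer's draw method; panX, panY, zoom, viewportW and viewportH are hypothetical state:

        // The visible world region shrinks as zoom grows; panX/panY recenter it.
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        float halfW = 0.5f * viewportW / zoom;
        float halfH = 0.5f * viewportH / zoom;
        glOrthof(panX - halfW, panX + halfW,
                 panY - halfH, panY + halfH,
                 -1.0f, 1.0f);
        glMatrixMode(GL_MODELVIEW);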


  • How are realistic 3D faces created and animated in video games?

    - by Anton
    I'm interested in being able to create realistic faces and facial expressions for the 3D characters of a game I'm working on. Think something similar to the dialog scenes in games like Mass Effect. Unfortunately I'm not sure where to begin. I'm sure the faces/animations are created through 3D Modeling software, but otherwise I am lost. Do facial animations use the same "bones" that normal body animation uses? Is there any preferred 3D software for realistic faces and animations? Is there a preferred format to export these faces and animations in?


  • Game programming in C++ [closed]

    - by Asaf
    I am a new programmer. I know C++ quite well and I know C# very well. I'm really eager to learn how to program games properly, and I can't really find where to start learning. I have never developed any graphics in C++, only a crappy game with Windows Forms graphics. I'm really into game programming and hoping I can get employed in it in the future. I'd be glad to have some advice about this. Thanks in advance, Asaf


  • Lighting gets darker when texture is applied

    - by noah
    I'm using OpenGL ES 1.1 for iPhone. I'm attempting to implement a skybox in my 3D world and started out by following one of Jeff LaMarche's tutorials on creating textures. Here's the tutorial: iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-part-6_25.html I've successfully added the image to my 3D world, but I'm not sure why the lighting on the other shapes has changed so much. I want the shapes to keep their original color and have the image in the background.

    Before: https://www.dropbox.com/s/ojmb8793vj514h0/Screen%20Shot%202012-10-01%20at%205.34.44%20PM.png
    After: https://www.dropbox.com/s/8v6yvur8amgudia/Screen%20Shot%202012-10-01%20at%205.35.31%20PM.png

    Here's the OpenGL init:

        - (void)initOpenGLES1
        {
            glShadeModel(GL_SMOOTH);

            // Enable lighting
            glEnable(GL_LIGHTING);

            // Turn the first light on
            glEnable(GL_LIGHT0);

            const GLfloat lightAmbient[] = {0.2, 0.2, 0.2, 1.0};
            const GLfloat lightDiffuse[] = {0.8, 0.8, 0.8, 1.0};
            const GLfloat matAmbient[] = {0.3, 0.3, 0.3, 0.5};
            const GLfloat matDiffuse[] = {1.0, 1.0, 1.0, 1.0};
            const GLfloat matSpecular[] = {1.0, 1.0, 1.0, 1.0};
            const GLfloat lightPosition[] = {0.0, 0.0, 1.0, 0.0};
            const GLfloat lightShininess = 100.0;

            // Configure OpenGL lighting
            glEnable(GL_LIGHTING);
            glEnable(GL_LIGHT0);
            glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, matAmbient);
            glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, matDiffuse);
            glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, matSpecular);
            glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, lightShininess);
            glLightfv(GL_LIGHT0, GL_AMBIENT, lightAmbient);
            glLightfv(GL_LIGHT0, GL_DIFFUSE, lightDiffuse);
            glLightfv(GL_LIGHT0, GL_POSITION, lightPosition);

            // Define a cutoff angle
            glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 40.0);

            // Set the clear color
            glClearColor(0, 0, 0, 1.0f);

            // Projection matrix config
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            CGSize layerSize = self.view.layer.frame.size;
            // Swapped height and width for landscape mode
            gluPerspective(45.0f, (GLfloat)layerSize.height / (GLfloat)layerSize.width, 0.1f, 750.0f);

            [self initSkyBox];

            // Modelview matrix config
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();

            // This next line is not really needed as it is the default for OpenGL ES
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glDisable(GL_BLEND);

            // Enable depth testing
            glEnable(GL_DEPTH_TEST);
            glDepthFunc(GL_LESS);
            glDepthMask(GL_TRUE);
        }

    Here's the drawSkyBox method that gets called in the drawFrame method:

        - (void)drawSkyBox
        {
            glDisable(GL_LIGHTING);
            glDisable(GL_DEPTH_TEST);

            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);

            static const SSVertex3D vertices[] = {
                {-1.0,  1.0, -0.0},
                { 1.0,  1.0, -0.0},
                {-1.0, -1.0, -0.0},
                { 1.0, -1.0, -0.0}
            };

            static const SSVertex3D normals[] = {
                {0.0, 0.0, 1.0},
                {0.0, 0.0, 1.0},
                {0.0, 0.0, 1.0},
                {0.0, 0.0, 1.0}
            };

            static const GLfloat texCoords[] = {
                0.0, 0.5,
                0.5, 0.5,
                0.0, 0.0,
                0.5, 0.0
            };

            glLoadIdentity();
            glTranslatef(0.0, 0.0, -3.0);

            glBindTexture(GL_TEXTURE_2D, texture[0]);

            glVertexPointer(3, GL_FLOAT, 0, vertices);
            glNormalPointer(GL_FLOAT, 0, normals);
            glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

            glDisableClientState(GL_VERTEX_ARRAY);
            glDisableClientState(GL_NORMAL_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);

            glEnable(GL_LIGHTING);
            glEnable(GL_DEPTH_TEST);
        }

    Here's the skybox init:

        - (void)initSkyBox
        {
            // Turn necessary features on
            glEnable(GL_TEXTURE_2D);
            glEnable(GL_BLEND);
            glBlendFunc(GL_ONE, GL_SRC_COLOR);

            // Bind the number of textures we need, in this case one.
            glGenTextures(1, &texture[0]);            // create a texture obj, give unique ID
            glBindTexture(GL_TEXTURE_2D, texture[0]); // load our new texture name into the current texture
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

            NSString *path = [[NSBundle mainBundle] pathForResource:@"space" ofType:@"jpg"];
            NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
            UIImage *image = [[UIImage alloc] initWithData:texData];

            GLuint width = CGImageGetWidth(image.CGImage);
            GLuint height = CGImageGetHeight(image.CGImage);
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            void *imageData = malloc(height * width * 4); // times 4 because will write one byte for rgb and alpha
            CGContextRef cgContext = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace,
                                                           kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

            // Flip the Y-axis
            CGContextTranslateCTM(cgContext, 0, height);
            CGContextScaleCTM(cgContext, 1.0, -1.0);

            CGColorSpaceRelease(colorSpace);
            CGContextClearRect(cgContext, CGRectMake(0, 0, width, height));
            CGContextDrawImage(cgContext, CGRectMake(0, 0, width, height), image.CGImage);

            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);

            CGContextRelease(cgContext);
            free(imageData);
            [image release];
            [texData release];
        }

    Any help is greatly appreciated.
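    One plausible cause: initSkyBox enables GL_TEXTURE_2D and GL_BLEND (with glBlendFunc(GL_ONE, GL_SRC_COLOR)) and never disables them, so every shape drawn afterwards is modulated by the still-active texture and blend state. A sketch of the reset, to run at the end of initSkyBox or around the skybox draw:

        glDisable(GL_TEXTURE_2D);  // untextured shapes keep their material colors
        glDisable(GL_BLEND);       // restore opaque rendering
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // sane default when re-enabled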


  • Using XNA for a 2D isometric game, but wanna move on

    - by Daniel Ribeiro
    I've been building a 2D isometric game (for learning purposes) in C# using XNA. I found it's really easy to manage sprite-sheet loading, collision, basic physics and such with the XNA API. The thing is, I want to move on. My real goal is to learn C++ and develop a game using that language. What engine/library would you recommend for me to keep going in that same 2D isometric direction, using pretty much sprite sheets for the graphical part of the game?


  • Does use of simple shaders improve performance/battery life?

    - by Miro
    I'm making an OpenGL game for Android. Until now I've used only the fixed-function pipeline, and I'm rendering simple things. The fixed-function pipeline includes a lot of stuff I don't need, so I'm thinking about implementing shaders in my game to simplify the OpenGL pipeline, if that can give better performance. Better performance = better battery life, unless the fps is capped by a software limit rather than hardware power.


  • Why is my Simplex Noise appearing in four columns?

    - by Joe the Person
    I'm trying to make a texture out of simplex noise, but it keeps appearing like this regardless of how big or small scale is: The following code is used to produce the image's color data:

        private Color[,] GetSimplex()
        {
            Color[,] colors = new Color[800, 600];
            float scale = colors.GetLength(0);
            for (int x = 0; x < 800; x++)
            {
                for (int y = 0; y < 600; y++)
                {
                    byte noise = (byte)(Noise.Generate(x / scale, y / scale) * 255);
                    colors[x, y] = new Color(noise, noise, noise);
                }
            }
            return colors;
        }
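    One plausible cause of the hard bands: simplex implementations typically return values in [-1, 1], so multiplying by 255 goes negative and the cast to byte wraps around, producing abrupt light/dark seams wherever the noise crosses zero. A fix to rule that out, shown in C-style syntax (noise2 stands in for the Noise.Generate call above, assumed to return [-1, 1]):

        /* Remap the noise from [-1, 1] to [0, 1] before scaling to a byte. */
        float n = noise2(x / scale, y / scale);
        unsigned char shade = (unsigned char)((n + 1.0f) * 0.5f * 255.0f); /* 0..255, no wrap */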

