Search Results

Search found 33 results on 2 pages for 'glulookat'.

Page 2/2 | < Previous Page | 1 2 

  • LWJGL not working

    - by Jesse Welch
    I'm working on a homework assignment to modify code given by my professor using the LightWeight Java Game Library (LWJGL). The problem is that I can't fully load the test code to begin testing modifications. I've linked against the jar file as the instructions for the modifications say to, but I still have one lingering error. The import statement import org.lwjgl.util.glu.*; cannot be resolved, so I have errors on the following lines, spread throughout the code: textures[0] = GLApp.makeTexture("green.bmp"); GLU.gluPerspective(45.0f, (float)Display.getDisplayMode().getWidth() / (float)Display.getDisplayMode().getHeight(), 0.1f, 100.0f); GLU.gluLookAt(cameraX, cameraY, cameraZ, lookX, lookY, lookZ, 0.0f, 1.0f, 0.0f); Any ideas on what is going wrong?
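
    A likely cause, assuming LWJGL 2 (where the GLU helpers ship separately): only lwjgl.jar is on the build path, while org.lwjgl.util.glu.GLU lives in the separate lwjgl_util.jar. Below is a minimal sketch of the camera calls that should compile once that jar is added; GLApp.makeTexture is part of the professor's code and is omitted, and the camera/look parameters stand in for the assignment's own fields.

      import org.lwjgl.opengl.Display;
      import org.lwjgl.util.glu.GLU;

      public class CameraSetup {
          // cameraX..lookZ stand in for the assignment's own fields
          static void applyCamera(float cameraX, float cameraY, float cameraZ,
                                  float lookX, float lookY, float lookZ) {
              float aspect = (float) Display.getDisplayMode().getWidth()
                           / (float) Display.getDisplayMode().getHeight();
              GLU.gluPerspective(45.0f, aspect, 0.1f, 100.0f);
              GLU.gluLookAt(cameraX, cameraY, cameraZ, lookX, lookY, lookZ, 0.0f, 1.0f, 0.0f);
          }
      }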

    Read the article

  • Getting object coordinates from camera

    - by user566757
    I've implemented a camera in Java using a position vector and three direction vectors so I can use gluLookAt(); moving around in 'ghost mode' works fine enough, but I want to add collision detection. I can't seem to figure out how to transform my position vector to the coordinates in which OpenGL draws my objects. A rough sketch of my drawing loop is this: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); camera.setView(); drawer.drawTheScene(); I'm at a loss as to how to proceed; looking at the ModelView matrix between calls and my position vector, I haven't found any kind of correlation.
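
    One thing worth checking: the eye position handed to gluLookAt() is already expressed in the same world coordinates the objects are drawn in, so it can normally be used for collision tests directly; the modelview matrix holds the inverse of that camera transform. Below is a minimal sketch, assuming LWJGL 2 and the fixed-function matrix stack, that recovers the eye position from whatever modelview matrix is current, for cross-checking against the camera's position vector.

      import java.nio.FloatBuffer;
      import org.lwjgl.BufferUtils;
      import org.lwjgl.opengl.GL11;
      import org.lwjgl.util.vector.Matrix4f;
      import org.lwjgl.util.vector.Vector3f;

      public final class CameraWorldPosition {
          // Call right after camera.setView(); returns the eye position in world space.
          public static Vector3f fromCurrentModelview() {
              FloatBuffer buf = BufferUtils.createFloatBuffer(16);
              GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, buf); // column-major view matrix
              Matrix4f view = new Matrix4f();
              view.load(buf);                                  // load() expects column-major data
              Matrix4f inverse = Matrix4f.invert(view, new Matrix4f());
              // The translation column of the inverted view matrix is the camera position.
              return new Vector3f(inverse.m30, inverse.m31, inverse.m32);
          }
      }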

    Read the article

  • Image loaded from TGA texture isn't displayed correctly

    - by Ramy Al Zuhouri
    I have a TGA texture containing this image: The texture is 256x256. So I'm trying to load it and map it to a cube: #import <OpenGL/OpenGL.h> #import <GLUT/GLUT.h> #import <stdlib.h> #import <stdio.h> #import <assert.h> GLuint width=640, height=480; GLuint texture; const char* const filename= "/Users/ramy/Documents/C/OpenGL/Test/Test/texture.tga"; void init() { // Initialization glEnable(GL_DEPTH_TEST); glViewport(-500, -500, 1000, 1000); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45, width/(float)height, 1, 1000); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); gluLookAt(0, 0, -100, 0, 0, 0, 0, 1, 0); // Texture char bitmap[256][256][3]; FILE* fp=fopen(filename, "r"); assert(fp); assert(fread(bitmap, 3*sizeof(char), 256*256, fp) == 256*256); fclose(fp); glGenTextures(1, &texture); glBindTexture(GL_TEXTURE_2D, texture); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, bitmap); } void display() { glClearColor(0, 0, 0, 0); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, texture); glColor3ub(255, 255, 255); glBegin(GL_QUADS); glVertex3f(0, 0, 0); glTexCoord2f(0.0, 0.0); glVertex3f(40, 0, 0); glTexCoord2f(0.0, 1.0); glVertex3f(40, 40, 0); glTexCoord2f(1.0, 1.0); glVertex3f(0, 40, 0); glTexCoord2f(1.0, 0.0); glEnd(); glDisable(GL_TEXTURE_2D); glutSwapBuffers(); } int main(int argc, char** argv) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE); glutInitWindowPosition(100, 100); glutInitWindowSize(width, height); glutCreateWindow(argv[0]); glutDisplayFunc(display); init(); glutMainLoop(); return 0; } But this is what I get when the window loads: So just half of the image is correctly displayed, and also with different colors. Then if I resize the window I get this: Magically the image seems to fix itself, even if the colors are wrong. Why?
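
    Two likely culprits in the code above: each glTexCoord2f is issued after the glVertex3f it was meant for (glTexCoord sets the coordinate used by the vertices that follow it), and an uncompressed TGA file starts with an 18-byte header and stores its pixels in BGR order, which would account for the wrong colors. Below is a sketch of the intended ordering, written in Java/LWJGL syntax for consistency with the other snippets on this page; the calls map one-to-one onto the C code, and the texture-coordinate layout shown is just one reasonable mapping.

      import org.lwjgl.opengl.GL11;
      import org.lwjgl.opengl.GL12;

      public final class TexturedQuad {
          public static void draw() {
              GL11.glBegin(GL11.GL_QUADS);
              GL11.glTexCoord2f(0.0f, 0.0f); GL11.glVertex3f( 0f,  0f, 0f);
              GL11.glTexCoord2f(1.0f, 0.0f); GL11.glVertex3f(40f,  0f, 0f);
              GL11.glTexCoord2f(1.0f, 1.0f); GL11.glVertex3f(40f, 40f, 0f);
              GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f( 0f, 40f, 0f);
              GL11.glEnd();
          }

          // When uploading raw TGA pixel data, skip the 18-byte header and pass
          // GL_BGR as the source format (or swap the bytes yourself).
          public static void upload(java.nio.ByteBuffer pixels) {
              GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, 256, 256, 0,
                      GL12.GL_BGR, GL11.GL_UNSIGNED_BYTE, pixels);
          }
      }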

    Read the article

  • Why, when I render a scene with a revolving camera, do objects that you'd expect to appear behind others end up in front?

    - by seaworthy
    Hello, I am rendering a scene in which I have two spheres, and I am revolving a camera around one of them. What happens is counter-intuitive: when the camera goes around the sphere, the other one gets in front of it when you'd expect it to be behind. So it appears as though the spheres aren't revolving around each other, and the one that should pass behind is always up front. Please help. Here is the code that renders the scene: glLoadIdentity(); [self positionCamera]; glutSolidSphere(2, 12,12); glPushMatrix(); glTranslatef(5, 0, 0); glutSolidSphere(0.5, 12,12); glPopMatrix(); glFlush(); This block is part of a class that gets called using [NSTimer scheduledTimerWithTimeInterval:DEFAULT_ANIMATION_INTERVAL target:self selector:@selector(drawRect) userInfo:nil repeats:YES]; and -(void)positionCamera contains the math for the camera revolution and the gluLookAt() call.
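
    The symptom is the classic sign of missing depth testing: without a depth buffer that is requested in the pixel format, enabled once at startup, and cleared every frame, whichever sphere is drawn last always appears in front. Below is a minimal sketch in Java/LWJGL syntax (used for consistency with the other snippets); the same GL calls apply inside the Objective-C view, and on OS X the NSOpenGLPixelFormat also needs a nonzero depth-size attribute.

      import org.lwjgl.opengl.GL11;

      public final class DepthSetup {
          // Run once after the GL context is created.
          public static void initOnce() {
              GL11.glEnable(GL11.GL_DEPTH_TEST);  // without this, draw order wins
              GL11.glDepthFunc(GL11.GL_LEQUAL);
          }

          // Run at the top of every frame, before the spheres are drawn.
          public static void beginFrame() {
              GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
              GL11.glLoadIdentity();
              // ... position the camera with gluLookAt, then draw both spheres ...
          }
      }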

    Read the article

  • OpenGL Nothing will Display

    - by m00st
    Why can't I get anything to display with this code? #include <iostream> #include "GL/glfw.h" #ifndef MAIN #define MAIN #include "GL/gl.h" #include "GL/glu.h" #endif using namespace std; void display(); int main() { int running = GL_TRUE; glfwInit(); if( !glfwOpenWindow( 640,480, 0,0,0,0,0,0, GLFW_WINDOW ) ) { glfwTerminate(); return 0; } while( running ) { //GL Code here display(); glfwSwapBuffers(); // Check if ESC key was pressed or window was closed running = !glfwGetKey( GLFW_KEY_ESC ) && glfwGetWindowParam( GLFW_OPENED ); } glfwTerminate(); return 0; } void display() { glClearColor(0, 0,0, 0.0f); glClear(GL_COLOR_BUFFER_BIT); glLoadIdentity(); gluLookAt(0, 0, 5, 0.0, 0.0, 0.0, 0, 1, 0); glScalef(1.0f, 1.0f, 1.0f); glTranslatef(0, 0, -2); glBegin(GL_POLYGON); glColor3f(1.0, 0.2, 0.2); glVertex3f(0.25, 0.25, 0.0); glVertex3f(0.75, 0.25, 0.0); glVertex3f(0.75, 0.75, 0.0); glVertex3f(0.25, 0.75, 0.0); glEnd(); glFlush(); }
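
    A likely explanation: no projection matrix is ever set, so the default identity projection clips everything outside -1..1 on each axis, and after gluLookAt plus glTranslatef the quad ends up around z = -7 in eye space, well outside that range. Below is a sketch of the missing setup in Java/LWJGL syntax (for consistency with the other snippets); the equivalent C calls belong right after glfwOpenWindow succeeds.

      import org.lwjgl.opengl.GL11;
      import org.lwjgl.util.glu.GLU;

      public final class ProjectionSetup {
          // Run once after the window/context is created.
          public static void initOnce(int width, int height) {
              GL11.glMatrixMode(GL11.GL_PROJECTION);
              GL11.glLoadIdentity();
              GLU.gluPerspective(45.0f, (float) width / (float) height, 0.1f, 100.0f);
              GL11.glMatrixMode(GL11.GL_MODELVIEW); // gluLookAt belongs on this stack
          }
      }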

    Read the article

  • 3D picking lwjgl

    - by Wirde
    I have written some code to perform 3D picking that for some reason doesn't work entirely correctly! (I'm using LWJGL, just so you know.) I posted this at Stack Overflow at first, but after researching my problem some more I found this neat site and thought that you guys might be more qualified to answer this question. This is what the code looks like: if(Mouse.getEventButton() == 1) { if (!Mouse.getEventButtonState()) { Camera.get().generateViewMatrix(); float screenSpaceX = ((Mouse.getX()/800f/2f)-1.0f)*Camera.get().getAspectRatio(); float screenSpaceY = 1.0f-(2*((600-Mouse.getY())/600f)); float displacementRate = (float)Math.tan(Camera.get().getFovy()/2); screenSpaceX *= displacementRate; screenSpaceY *= displacementRate; Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * Camera.get().getNear()), (float) (screenSpaceY * Camera.get().getNear()), (float) (-Camera.get().getNear()), 1); Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * Camera.get().getFar()), (float) (screenSpaceY * Camera.get().getFar()), (float) (-Camera.get().getFar()), 1); Matrix4f tmpView = new Matrix4f(); Camera.get().getViewMatrix().transpose(tmpView); Matrix4f invertedViewMatrix = (Matrix4f)tmpView.invert(); Vector4f worldSpaceNear = new Vector4f(); Matrix4f.transform(invertedViewMatrix, cameraSpaceNear, worldSpaceNear); Vector4f worldSpaceFar = new Vector4f(); Matrix4f.transform(invertedViewMatrix, cameraSpaceFar, worldSpaceFar); Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z); Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z); rayDirection.normalise(); Ray clickRay = new Ray(rayPosition, rayDirection); Vector tMin = new Vector(), tMax = new Vector(), tempPoint; float largestEnteringValue, smallestExitingValue, temp, closestEnteringValue = Camera.get().getFar()+0.1f; Drawable closestDrawableHit = null; for(Drawable d : this.worldModel.getDrawableThings()) { // Calculate AABB for each object... needs to be moved later... firstVertex = true; for(Surface surface : d.getSurfaces()) { for(Vertex v : surface.getVertices()) { worldPosition.x = (v.x+d.getPosition().x)*d.getScale().x; worldPosition.y = (v.y+d.getPosition().y)*d.getScale().y; worldPosition.z = (v.z+d.getPosition().z)*d.getScale().z; worldPosition = worldPosition.rotate(d.getRotation()); if (firstVertex) { maxX = worldPosition.x; maxY = worldPosition.y; maxZ = worldPosition.z; minX = worldPosition.x; minY = worldPosition.y; minZ = worldPosition.z; firstVertex = false; } else { if (worldPosition.x > maxX) { maxX = worldPosition.x; } if (worldPosition.x < minX) { minX = worldPosition.x; } if (worldPosition.y > maxY) { maxY = worldPosition.y; } if (worldPosition.y < minY) { minY = worldPosition.y; } if (worldPosition.z > maxZ) { maxZ = worldPosition.z; } if (worldPosition.z < minZ) { minZ = worldPosition.z; } } } } // ray/slabs intersection test... 
// clickRay.getOrigin().x + clickRay.getDirection().x * f = minX // clickRay.getOrigin().x - minX = -clickRay.getDirection().x * f // clickRay.getOrigin().x/-clickRay.getDirection().x - minX/-clickRay.getDirection().x = f // -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x = f largestEnteringValue = -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x; temp = -clickRay.getOrigin().y/clickRay.getDirection().y + minY/clickRay.getDirection().y; if(largestEnteringValue < temp) { largestEnteringValue = temp; } temp = -clickRay.getOrigin().z/clickRay.getDirection().z + minZ/clickRay.getDirection().z; if(largestEnteringValue < temp) { largestEnteringValue = temp; } smallestExitingValue = -clickRay.getOrigin().x/clickRay.getDirection().x + maxX/clickRay.getDirection().x; temp = -clickRay.getOrigin().y/clickRay.getDirection().y + maxY/clickRay.getDirection().y; if(smallestExitingValue > temp) { smallestExitingValue = temp; } temp = -clickRay.getOrigin().z/clickRay.getDirection().z + maxZ/clickRay.getDirection().z; if(smallestExitingValue < temp) { smallestExitingValue = temp; } if(largestEnteringValue > smallestExitingValue) { //System.out.println("Miss!"); } else { if (largestEnteringValue < closestEnteringValue) { closestEnteringValue = largestEnteringValue; closestDrawableHit = d; } } } if(closestDrawableHit != null) { System.out.println("Hit at: (" + clickRay.setDistance(closestEnteringValue).x + ", " + clickRay.getCurrentPosition().y + ", " + clickRay.getCurrentPosition().z); this.worldModel.removeDrawableThing(closestDrawableHit); } } } I just don't understand what's wrong: the rays are shooting and I do hit stuff that gets removed, but the result of the ray is very strange. It sometimes removes the thing I'm clicking at, sometimes it removes things that aren't even close to what I'm clicking at, and sometimes it removes nothing at all. Edit: Okay, so I have continued searching for errors, and by debugging the ray (by painting small dots where it travels) I can now see that there is something obviously wrong with the ray that I'm sending out... it has its origin near the world center (nearer or further away depending on where on the screen I'm clicking) and always shoots to the same position no matter where I direct my camera... My initial thought is that there might be some error in the way I calculate my viewMatrix (since it's not possible to get the view matrix from the gluLookAt method in LWJGL, I have to build it myself, and I guess that's where the problem is)... 
Edit2: This is how I calculate it currently: private double[][] viewMatrixDouble = {{0,0,0,0}, {0,0,0,0}, {0,0,0,0}, {0,0,0,1}}; public Vector getCameraDirectionVector() { Vector actualEye = this.getActualEyePosition(); return new Vector(lookAt.x-actualEye.x, lookAt.y-actualEye.y, lookAt.z-actualEye.z); } public Vector getActualEyePosition() { return eye.rotate(this.getRotation()); } public void generateViewMatrix() { Vector cameraDirectionVector = getCameraDirectionVector().normalize(); Vector side = Vector.cross(cameraDirectionVector, this.upVector).normalize(); Vector up = Vector.cross(side, cameraDirectionVector); viewMatrixDouble[0][0] = side.x; viewMatrixDouble[0][1] = up.x; viewMatrixDouble[0][2] = -cameraDirectionVector.x; viewMatrixDouble[1][0] = side.y; viewMatrixDouble[1][1] = up.y; viewMatrixDouble[1][2] = -cameraDirectionVector.y; viewMatrixDouble[2][0] = side.z; viewMatrixDouble[2][1] = up.z; viewMatrixDouble[2][2] = -cameraDirectionVector.z; /* Vector actualEyePosition = this.getActualEyePosition(); Vector zaxis = new Vector(this.lookAt.x - actualEyePosition.x, this.lookAt.y - actualEyePosition.y, this.lookAt.z - actualEyePosition.z).normalize(); Vector xaxis = Vector.cross(upVector, zaxis).normalize(); Vector yaxis = Vector.cross(zaxis, xaxis); viewMatrixDouble[0][0] = xaxis.x; viewMatrixDouble[0][1] = yaxis.x; viewMatrixDouble[0][2] = zaxis.x; viewMatrixDouble[1][0] = xaxis.y; viewMatrixDouble[1][1] = yaxis.y; viewMatrixDouble[1][2] = zaxis.y; viewMatrixDouble[2][0] = xaxis.z; viewMatrixDouble[2][1] = yaxis.z; viewMatrixDouble[2][2] = zaxis.z; viewMatrixDouble[3][0] = -Vector.dot(xaxis, actualEyePosition); viewMatrixDouble[3][1] =-Vector.dot(yaxis, actualEyePosition); viewMatrixDouble[3][2] = -Vector.dot(zaxis, actualEyePosition); */ viewMatrix = new Matrix4f(); viewMatrix.load(getViewMatrixAsFloatBuffer()); } I would be very grateful if anyone could verify whether this is wrong or right, and if it's wrong, supply me with the right way of doing it... I have read a lot of threads and documentation about this but I can't seem to wrap my head around it... Edit3: Okay, with the help of Byte56 (thanks a lot for the help) I have now concluded that it's not the viewMatrix that is the problem... I still get the same messed-up result; anyone who thinks they can find the error in my code, I certainly can't, I have been working on this for 3 days now :(
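
    Since the project already renders through the fixed-function pipeline, the pick ray can also be built from whatever matrices OpenGL actually used, instead of rebuilding the view matrix by hand, by unprojecting the mouse position at the near and far planes. Below is a sketch assuming LWJGL 2; Vector3f here is org.lwjgl.util.vector.Vector3f, and the resulting origin/direction pair would feed the same Ray and slab test as above.

      import java.nio.FloatBuffer;
      import java.nio.IntBuffer;
      import org.lwjgl.BufferUtils;
      import org.lwjgl.input.Mouse;
      import org.lwjgl.opengl.GL11;
      import org.lwjgl.util.glu.GLU;
      import org.lwjgl.util.vector.Vector3f;

      public final class PickRay {
          // Returns { origin, normalized direction } of the ray under the mouse cursor.
          public static Vector3f[] fromMouse() {
              FloatBuffer modelview = BufferUtils.createFloatBuffer(16);
              FloatBuffer projection = BufferUtils.createFloatBuffer(16);
              IntBuffer viewport = BufferUtils.createIntBuffer(16);
              GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, modelview);
              GL11.glGetFloat(GL11.GL_PROJECTION_MATRIX, projection);
              GL11.glGetInteger(GL11.GL_VIEWPORT, viewport);

              // LWJGL's Mouse coordinates are already bottom-left based, like OpenGL's.
              float winX = Mouse.getX(), winY = Mouse.getY();
              FloatBuffer near = BufferUtils.createFloatBuffer(4);
              FloatBuffer far = BufferUtils.createFloatBuffer(4);
              GLU.gluUnProject(winX, winY, 0f, modelview, projection, viewport, near);
              GLU.gluUnProject(winX, winY, 1f, modelview, projection, viewport, far);

              Vector3f origin = new Vector3f(near.get(0), near.get(1), near.get(2));
              Vector3f direction = new Vector3f(far.get(0) - near.get(0),
                                                far.get(1) - near.get(1),
                                                far.get(2) - near.get(2));
              direction.normalise();
              return new Vector3f[] { origin, direction };
          }
      }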

    Read the article

  • OpenGL alpha blending issue with back face visible.

    - by Max
    I'm trying to display "transparent" surfaces (not closed volumes) where both the front face and the back face are visible (not culled), for example a cone or cylinder where the transparency is applied on both sides. There are some visible artifacts where part of the surface does not seem to be handling the alpha values correctly. The issue, it seems, is when OpenGL tries to apply the alpha from the front side of the surface to the back side of the surface (when both the inside and outside of the surface are visible). void init() { glMatrixMode(GL_PROJECTION); gluPerspective( /* field of view in degree */ 40.0, /* aspect ratio */ 1.0, /* Z near */ 1.0, /* Z far */ 10.0); glMatrixMode(GL_MODELVIEW); gluLookAt(0.0, 0.0, 5.0, /* eye is at (0,0,5) */ 0.0, 0.0, 0.0, /* center is at (0,0,0) */ 0.0, 1.0, 0.); /* up is in positive Y direction */ glTranslatef(0.0, 0.6, -1.0); glEnable(GL_LIGHTING); glEnable(GL_LIGHT0); glLightfv(GL_LIGHT0, GL_AMBIENT, light0_ambient); glLightfv(GL_LIGHT0, GL_DIFFUSE, light0_diffuse); glLightfv(GL_LIGHT1, GL_DIFFUSE, light1_diffuse); glLightfv(GL_LIGHT1, GL_POSITION, light1_position); glLightfv(GL_LIGHT2, GL_DIFFUSE, light2_diffuse); glLightfv(GL_LIGHT2, GL_POSITION, light2_position); glClearDepth(1.0f); glEnable(GL_DEPTH_TEST); //glEnable(GL_CULL_FACE); glFrontFace( GL_CW ); glShadeModel(GL_SMOOTH); glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); } void draw () { static GLfloat amb[] = {0.4f, 0.4f, 0.4f, 0.0f}; static GLfloat dif[] = {1.0f, 1.0f, 1.0f, 0.0f}; static GLfloat back_amb[] = {0.4f, 0.4f, 0.4f, 1.0f}; static GLfloat back_dif[] = {1.0f, 1.0f, 1.0f, 1.0f}; glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnable(GL_LIGHT1); glDisable(GL_LIGHT2); amb[3] = dif[3] = 0.5f;// cos(s) / 2.0f + 0.5f; glMaterialfv(GL_FRONT, GL_AMBIENT, amb); glMaterialfv(GL_FRONT, GL_DIFFUSE, dif); glMaterialfv(GL_BACK, GL_AMBIENT, back_amb); glMaterialfv(GL_BACK, GL_DIFFUSE, back_dif); glPushMatrix(); glTranslatef(-0.3f, -0.3f, 0.0f); glRotatef(angle1, 1.0f, 5.0f, 0.0f); glutSolidCone(1.0, 1.0, 50, 2 ); glPopMatrix(); ///... SwapBuffers(wglGetCurrentDC()); // glutSwapBuffers(); } The code is based on: http://www.sgi.com/products/software/opengl/examples/glut/examples/source/blender.c Two shortened links to images on Flickr showing the issue (from our production code, not the above code, but both have the same kind of problems): http://flic.kr/p/99soxy and http://flic.kr/p/99pg18 Thanks. Max.
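
    Alpha blending is order dependent, so when both sides of an open surface are visible the back-facing polygons should be drawn before the front-facing ones; otherwise the front face is blended against whatever happens to be behind it before the inside is drawn. A common two-pass approach keeps face culling on and flips which side is culled. Below is a sketch in Java/LWJGL syntax (for consistency with the other snippets; the same GL calls apply in the C code), where drawSurface stands in for the cone/cylinder drawing calls.

      import org.lwjgl.opengl.GL11;

      public final class TwoSidedTransparency {
          public static void draw(Runnable drawSurface) {
              GL11.glEnable(GL11.GL_BLEND);
              GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
              GL11.glDepthMask(false);             // keep the first pass from occluding the second
              GL11.glEnable(GL11.GL_CULL_FACE);

              GL11.glCullFace(GL11.GL_FRONT);      // pass 1: back faces (the inside)
              drawSurface.run();
              GL11.glCullFace(GL11.GL_BACK);       // pass 2: front faces (the outside)
              drawSurface.run();

              GL11.glDepthMask(true);
              GL11.glDisable(GL11.GL_CULL_FACE);
          }
      }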

    Read the article

  • Tracking finger movement in order to rotate a triangle: tracking is not perfect

    - by Laurent BERNABE
    I've written a custom view, with the OpenGL ES 1 technology, in order to let the user rotate a red triangle just by dragging it along the x axis (which gives a rotation around the y axis). It works, but there is a bit of latency when dragging from one direction to the other (without releasing the mouse/finger), so it seems that my code is not yet "goal perfect" (I am convinced that no code is perfect in itself). I thought of using a quaternion, but maybe it won't be so useful: must I really use a quaternion (or a kind of matrix)? I've designed the application for Android 4.0.3, but it could fit into Android API 3 (Android 1.5) as well (at least, I think it could). So here is my main layout: activity_main.xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" > <com.laurent_bernabe.android.triangletournant3d.MyOpenGLView android:layout_width="match_parent" android:layout_height="match_parent" /> </LinearLayout> Here is my main activity: MainActivity.java package com.laurent_bernabe.android.triangletournant3d; import android.app.Activity; import android.os.Bundle; import android.support.v4.app.NavUtils; import android.view.Menu; import android.view.MenuItem; public class MainActivity extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); } @Override public boolean onCreateOptionsMenu(Menu menu) { getMenuInflater().inflate(R.menu.activity_main, menu); return true; } @Override public boolean onOptionsItemSelected(MenuItem item) { switch (item.getItemId()) { case android.R.id.home: NavUtils.navigateUpFromSameTask(this); return true; } return super.onOptionsItemSelected(item); } } And finally, my OpenGL view: MyOpenGLView.java package com.laurent_bernabe.android.triangletournant3d; import java.nio.ByteBuffer; import java.nio.ByteOrder; import java.nio.FloatBuffer; import javax.microedition.khronos.egl.EGLConfig; import javax.microedition.khronos.opengles.GL10; import android.content.Context; import android.graphics.Point; import android.opengl.GLSurfaceView; import android.opengl.GLSurfaceView.Renderer; import android.opengl.GLU; import android.util.AttributeSet; import android.view.MotionEvent; public class MyOpenGLView extends GLSurfaceView implements Renderer { public MyOpenGLView(Context context, AttributeSet attrs) { super(context, attrs); setRenderer(this); } public MyOpenGLView(Context context) { this(context, null); } @Override public boolean onTouchEvent(MotionEvent event) { int actionMasked = event.getActionMasked(); switch(actionMasked){ case MotionEvent.ACTION_DOWN: savedClickLocation = new Point((int) event.getX(), (int) event.getY()); break; case MotionEvent.ACTION_UP: savedClickLocation = null; break; case MotionEvent.ACTION_MOVE: Point newClickLocation = new Point((int) event.getX(), (int) event.getY()); int dx = newClickLocation.x - savedClickLocation.x; angle += Math.toRadians(dx); break; } return true; } @Override public void onDrawFrame(GL10 gl) { gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); gl.glLoadIdentity(); GLU.gluLookAt(gl, 0f, 0f, 5f, 0f, 0f, 0f, 0f, 1f, 0f ); gl.glRotatef(angle, 0f, 1f, 0f); gl.glColor4f(1f, 0f, 0f, 0f); gl.glVertexPointer(2, GL10.GL_FLOAT, 0, triangleCoordsBuff); gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 3); } @Override public void onSurfaceChanged(GL10 gl, int width, int height) { 
gl.glViewport(0, 0, width, height); gl.glMatrixMode(GL10.GL_PROJECTION); gl.glLoadIdentity(); GLU.gluPerspective(gl, 60f, (float) width / height, 0.1f, 10f); gl.glMatrixMode(GL10.GL_MODELVIEW); } @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) { gl.glEnable(GL10.GL_DEPTH_TEST); gl.glClearDepthf(1.0f); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); buildTriangleCoordsBuffer(); } private void buildTriangleCoordsBuffer() { ByteBuffer buffer = ByteBuffer.allocateDirect(4*triangleCoords.length); buffer.order(ByteOrder.nativeOrder()); triangleCoordsBuff = buffer.asFloatBuffer(); triangleCoordsBuff.put(triangleCoords); triangleCoordsBuff.rewind(); } private float [] triangleCoords = {-1f, -1f, +1f, -1f, +1f, +1f}; private FloatBuffer triangleCoordsBuff; private float angle = 0f; private Point savedClickLocation; } I don't think I really have to give you my manifest file, but I can if you think it is necessary. I've only tested on the emulator, not on a real device. So, how can I improve the reactivity? Thanks in advance.
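
    Two plausible causes of the lag: glRotatef expects degrees while the handler accumulates Math.toRadians(dx), which makes the rotation far smaller than the finger movement, and savedClickLocation is never advanced during ACTION_MOVE, so every event measures its delta from the original touch-down point instead of from the previous event (after reversing direction, the old offset keeps being added for a while). Below is a sketch of a corrected onTouchEvent for the MyOpenGLView class above; the 0.5f sensitivity factor is an arbitrary value to tune.

      @Override
      public boolean onTouchEvent(MotionEvent event) {
          switch (event.getActionMasked()) {
              case MotionEvent.ACTION_DOWN:
                  savedClickLocation = new Point((int) event.getX(), (int) event.getY());
                  break;
              case MotionEvent.ACTION_MOVE:
                  Point newClickLocation = new Point((int) event.getX(), (int) event.getY());
                  int dx = newClickLocation.x - savedClickLocation.x;
                  angle += dx * 0.5f;                    // glRotatef works in degrees
                  savedClickLocation = newClickLocation; // next delta is relative to this event
                  requestRender();                       // only needed with RENDERMODE_WHEN_DIRTY
                  break;
              case MotionEvent.ACTION_UP:
                  savedClickLocation = null;
                  break;
          }
          return true;
      }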

    Read the article
