Search Results

Search found 2068 results on 83 pages for 'camera'.

Page 15 of 83

  • Why does my 3D model not translate the way I expect? [closed]

    - by ChocoMan
    In my first image, my model displays correctly. But when I move the model's position along the Z-axis (forward) I get the result in my second image, yet the Y-axis doesn't change. And if I keep going, the model disappears into the ground. Any suggestions as to how I can get the model to translate properly visually? Here is how I'm calling the model and the terrain in Draw():

        cameraPosition = new Vector3(camX, camY, camZ);

        // Copy any parent transforms.
        Matrix[] transforms = new Matrix[mShockwave.Bones.Count];
        mShockwave.CopyAbsoluteBoneTransformsTo(transforms);
        Matrix[] ttransforms = new Matrix[terrain.Bones.Count];
        terrain.CopyAbsoluteBoneTransformsTo(ttransforms);

        // Draw the model. A model can have multiple meshes, so loop.
        foreach (ModelMesh mesh in mShockwave.Meshes)
        {
            // This is where the mesh orientation is set, as well
            // as our camera and projection.
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.PreferPerPixelLighting = true;
                effect.World = transforms[mesh.ParentBone.Index] *
                    Matrix.CreateRotationY(modelRotation) *
                    Matrix.CreateTranslation(modelPosition);

                // Looking at the model (picture shouldn't change other than rotation)
                effect.View = Matrix.CreateLookAt(cameraPosition, modelPosition, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.TextureEnabled = true;
            }

            // Draw the mesh, using the effects set above.
            prepare3d();
            mesh.Draw();
        }

        // Terrain test
        foreach (ModelMesh meshT in terrain.Meshes)
        {
            foreach (BasicEffect effect in meshT.Effects)
            {
                effect.EnableDefaultLighting();
                effect.PreferPerPixelLighting = true;
                effect.World = ttransforms[meshT.ParentBone.Index] *
                    Matrix.CreateRotationY(0) *
                    Matrix.CreateTranslation(terrainPosition);

                // Looking at the model (picture shouldn't change other than rotation)
                effect.View = Matrix.CreateLookAt(cameraPosition, terrainPosition, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.TextureEnabled = true;
            }

            // Draw the mesh, using the effects set above.
            prepare3d();
            meshT.Draw();
            DrawText();
        }

        base.Draw(gameTime);
    }

    I'm suspecting that there may be something wrong with how I'm handling my camera. The model rotates fine on its Y-axis.

    Read the article

  • How do I create a bounding frustum from a view & projection matrix?

    - by Narf the Mouse
    Given a left-handed Projection matrix, a left-handed View matrix, and a ViewProj matrix equal to View * Projection, how do I create a bounding frustum comprised of near, far, left, right, top, and bottom planes? The only example I could find on Google (Tutorial 16: Frustum Culling) does not seem to work; for example, if the math is used as given, the near plane's distance comes out negative, which places the near plane behind the camera...
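
    For reference, XNA's built-in BoundingFrustum type can be constructed directly from the combined View * Projection matrix. If you need the planes yourself, the usual approach (Gribb/Hartmann) adds or subtracts columns of the combined matrix, and a negative near-plane distance typically means the formulas you copied assume a different convention (row- vs column-vector math, or a different clip-space depth range). A minimal sketch, shown in Java for illustration, assuming row-vector math (v' = v * M, as in D3D/XNA), row-major storage m[row][col], and a 0..1 depth range:

        /** Plane equation a*x + b*y + c*z + d = 0, normal pointing into the frustum. */
        public final class FrustumPlanes {

            public static final class Plane {
                public final double a, b, c, d;
                Plane(double a, double b, double c, double d) {
                    double len = Math.sqrt(a * a + b * b + c * c);   // normalize so d is a real distance
                    this.a = a / len; this.b = b / len; this.c = c / len; this.d = d / len;
                }
            }

            /** Order: left, right, bottom, top, near, far. m = view * projection. */
            public static Plane[] extract(double[][] m) {
                return new Plane[] {
                    new Plane(m[0][3] + m[0][0], m[1][3] + m[1][0], m[2][3] + m[2][0], m[3][3] + m[3][0]), // left
                    new Plane(m[0][3] - m[0][0], m[1][3] - m[1][0], m[2][3] - m[2][0], m[3][3] - m[3][0]), // right
                    new Plane(m[0][3] + m[0][1], m[1][3] + m[1][1], m[2][3] + m[2][1], m[3][3] + m[3][1]), // bottom
                    new Plane(m[0][3] - m[0][1], m[1][3] - m[1][1], m[2][3] - m[2][1], m[3][3] - m[3][1]), // top
                    new Plane(m[0][2],           m[1][2],           m[2][2],           m[3][2]),           // near
                    new Plane(m[0][3] - m[0][2], m[1][3] - m[1][2], m[2][3] - m[2][2], m[3][3] - m[3][2])  // far
                };
            }
        }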

    Read the article

  • Cannot run Android "Camera Preview" sample

    - by noisesolo
    The sample I'm referring to is: CameraPreview. The program simply force closes upon startup. I've also tried other camera demos that have the same problem. I'm trying to run the samples on my Nexus One and on the emulator, with the same problem on both; I'm not even sure whether the emulator should be able to run them or not. Based on LogCat, the error is:

        06-08 16:39:10.483: ERROR/AndroidRuntime(6726): Uncaught handler: thread main exiting due to uncaught exception
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726): java.lang.RuntimeException: Fail to connect to camera service
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.hardware.Camera.native_setup(Native Method)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.hardware.Camera.<init>(Camera.java:110)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.hardware.Camera.open(Camera.java:90)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at com.example.android.apis.graphics.Preview.surfaceCreated(CameraPreview.java:69)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.view.SurfaceView.updateWindow(SurfaceView.java:454)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.view.SurfaceView.dispatchDraw(SurfaceView.java:287)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.view.ViewGroup.drawChild(ViewGroup.java:1529)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.view.View.draw(View.java:6557)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.widget.FrameLayout.draw(FrameLayout.java:352)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.view.ViewGroup.drawChild(ViewGroup.java:1531)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.view.View.draw(View.java:6557)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.widget.FrameLayout.draw(FrameLayout.java:352)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at com.android.internal.policy.impl.PhoneWindow$DecorView.draw(PhoneWindow.java:1830)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.view.ViewRoot.draw(ViewRoot.java:1349)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.view.ViewRoot.performTraversals(ViewRoot.java:1114)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.view.ViewRoot.handleMessage(ViewRoot.java:1633)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.os.Handler.dispatchMessage(Handler.java:99)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.os.Looper.loop(Looper.java:123)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at android.app.ActivityThread.main(ActivityThread.java:4363)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at java.lang.reflect.Method.invokeNative(Native Method)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at java.lang.reflect.Method.invoke(Method.java:521)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
        06-08 16:39:10.494: ERROR/AndroidRuntime(6726):     at dalvik.system.NativeStart.main(Native Method)

    All I did to try out the sample was create a new Android 2.1-update1 project, name everything according to the supplied Java file, copy the Java file from the URL into CameraPreview.java, and run it. Am I supposed to do anything else? Any help would be appreciated. Thanks in advance.
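
    A note on the trace above: "Fail to connect to camera service" at that API level is very often caused by a missing <uses-permission android:name="android.permission.CAMERA" /> entry in AndroidManifest.xml, which the sample expects you to carry over into your own project. A minimal sketch of opening the camera defensively so the failure is logged instead of crashing the activity (the helper class and tag are made up for illustration):

        import android.hardware.Camera;
        import android.util.Log;

        // Sketch only: wrap Camera.open() so a service-connection failure surfaces as a
        // log entry instead of an uncaught RuntimeException in surfaceCreated().
        // The manifest still needs: <uses-permission android:name="android.permission.CAMERA" />
        public final class CameraOpenHelper {
            private static final String TAG = "CameraOpenHelper";   // hypothetical tag, not from the sample

            public static Camera openOrNull() {
                try {
                    return Camera.open();   // throws RuntimeException("Fail to connect to camera service") on failure
                } catch (RuntimeException e) {
                    Log.e(TAG, "Could not open the camera", e);
                    return null;
                }
            }
        }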

    Read the article

  • Display last picture

    - by steve
    Hi, I'm inserting an image from the camera (taking a picture) into the MediaStore.Images.Media datastore. Does anyone know how I can go about displaying the last picture taken? I used Uri image = ContentUris.withAppendedId(externalContentUri, 45); to display an image from the datastore, but obviously 45 is not the correct image. I tried to pass the information from the previous activity (the camera) to the display activity, but I'm assuming that because the photo callback runs on its own thread the value never gets set. The photo code is as follows:

        Camera.PictureCallback photoCallback = new Camera.PictureCallback() {
            public void onPictureTaken(byte[] data, Camera camera) {
                // TODO Auto-generated method stub
                FileOutputStream fos;
                try {
                    Bitmap bm = BitmapFactory.decodeByteArray(data, 0, data.length);
                    fileUrl = MediaStore.Images.Media.insertImage(getContentResolver(), bm, "LastTaken", "Picture");
                    if (fileUrl == null) {
                        Log.d("Still", "Image Insert Failed");
                        return;
                    } else {
                        picUri = Uri.parse(fileUrl);
                        sendBroadcast(new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, picUri));
                    }
                } catch (Exception e) {
                    Log.d("Picture", "Error Picture: ", e);
                }
                camera.startPreview();
            }
        };
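
    One way to find the most recent picture without hard-coding an ID is to query MediaStore sorted by DATE_ADDED descending and take the first row. It is also worth noting that insertImage() in the snippet above already returns the new image's content URI as a string, so picUri could simply be handed to the display activity as an Intent extra. A small sketch of the query approach (the helper class name is made up):

        import android.content.ContentResolver;
        import android.content.ContentUris;
        import android.database.Cursor;
        import android.net.Uri;
        import android.provider.MediaStore;

        // Sketch: return the content URI of the most recently added image, or null if none.
        public final class LastImageFinder {

            public static Uri lastImageUri(ContentResolver resolver) {
                Cursor cursor = resolver.query(
                        MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
                        new String[] { MediaStore.Images.Media._ID },
                        null, null,
                        MediaStore.Images.Media.DATE_ADDED + " DESC");   // newest first
                if (cursor == null) {
                    return null;
                }
                try {
                    if (!cursor.moveToFirst()) {
                        return null;
                    }
                    long id = cursor.getLong(0);
                    return ContentUris.withAppendedId(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, id);
                } finally {
                    cursor.close();
                }
            }
        }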

    Read the article

  • Issue in connecting Pro9000 camera directly with OMAP3530

    - by Vinay krishna
    I have a video phone application running on an OMAP3530 board. The problem is when I connect the camera (Pro 9000) through a powered USB hub (In:100-240V, Out:5V,1A) everything works fine when I make a video call. But if I connect the camera directly to the OMAP3530 board and try to make a video call, the OMAP board is not sending any video packets captured locally. And also the PIP (Picture In Picture) is disabled.

    Read the article

  • Re. copying movie from USB card reader back to my camera

    - by Alice P
    I have a Canon Digital Ixus 860IS. I originally copied a short movie from my camera onto the computer via a USB card reader, and then copied it back onto the camera via the same USB card reader along with some photos. The photos copied back fine, but the movie, although it shows as having copied, can't be seen. Any reason for this? Thanks

    Read the article

  • How do I convert my matrix from OpenGL to Marmalade?

    - by King Snail
    I am using a third party rendering API, Marmalade, on top of OpenGL code, and I cannot get my matrices correct. One of the API's authors states this: "We're right handed by default, and we treat y as up by convention. Since IwGx's coordinate system has (0,0) as the top left, you typically need a 180 degree rotation around Z in your view matrix. I think the viewer does this by default." In my OpenGL app I have access to the view and projection matrices separately. How can I convert them to fit the criteria used by my third party rendering API? I don't understand what they mean by rotating 180 degrees around Z: is that applied to the view matrix itself, or to the camera before the view matrix is built? Any code would be helpful, thanks.
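
    For what it's worth, a 180 degree rotation around Z is just diag(-1, -1, 1, 1), so applying it on top of a view matrix flips view-space x and y, which is the sort of flip the quote is describing. A minimal sketch (in Java, but the arithmetic is the same anywhere), assuming column-vector math (p' = M * p) and row-major float[16] storage; under a row-vector convention the same flip lands on the first two columns instead:

        // Sketch: prepend a 180-degree Z rotation to an existing view matrix.
        // With column-vector math, Rz(pi) * view simply negates the first two rows.
        public final class ViewMatrixFlip {   // hypothetical helper name

            public static float[] rotate180AroundZ(float[] view) {
                float[] out = view.clone();
                for (int col = 0; col < 4; col++) {
                    out[0 * 4 + col] = -out[0 * 4 + col];   // row 0: view-space x
                    out[1 * 4 + col] = -out[1 * 4 + col];   // row 1: view-space y
                }
                return out;
            }
        }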

    Read the article

  • How do I implement camera axis aligned billboards?

    - by user19787
    I am trying to make an axis-aligned billboard with Pyglet. I have looked at several tutorials, but they only show me how to get the up, right, and look vectors. So far this is what I have:

        target = cam.pos
        look = norm(target - billboard.pos)
        right = norm(Vector3(0,1,0) * look)
        up = look * right
        gluLookAt(look.x, look.y, look.z,
                  self.pos.x, self.pos.y, self.pos.z,
                  up.x, up.y, up.z)

    This does nothing for me visibly. Any idea what I'm doing wrong?
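
    A common way to get an axis-aligned (y-locked) billboard without calling gluLookAt per object is to compute a yaw angle from the billboard-to-camera direction projected onto the XZ plane, then rotate the quad around the world Y axis by that angle. A small sketch of just that math (shown in Java; the helper name is made up):

        // Sketch: yaw that makes a quad whose unrotated normal points down +Z face the
        // camera, ignoring height differences (classic axis-aligned billboard).
        // Apply it to the quad's model transform, e.g. glRotatef((float) Math.toDegrees(yaw), 0, 1, 0).
        public final class AxisAlignedBillboard {

            public static double yawTowardCamera(double billboardX, double billboardZ,
                                                 double cameraX, double cameraZ) {
                double dx = cameraX - billboardX;   // direction from billboard to camera,
                double dz = cameraZ - billboardZ;   // projected onto the XZ plane
                return Math.atan2(dx, dz);
            }
        }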

    Read the article

  • Inside the Guts of a DSLR

    - by Jason Fitzpatrick
    It’s safe to assume that there is a lot more going on inside your modern DSLR than your grandfather’s Kodak Brownie, but just how much hardware is packed into the small casing of your average DSLR is quite surprising. Over at iFixit they’ve done a tear down of Nikon’s newest prosumer camera, the Nikon D600. The guts of the DSLR are absolutely bursting with hardware and flat-ribbon cable as seen in the photo above. For a closer look at the individual parts and to see it further torn down, hit up the link below. Nikon D600 Teardown [iFixit via Extreme Tech]

    Read the article

  • Determine corners of a specific plane in the frustum

    - by Takumi
    I'm working on a game with a 2D view in a 3D world. It's a kind of shoot 'em up. I have a spaceship at the center of the screen, and I want enemies to appear at the borders of my window. The problem is that I don't know how to determine the positions of those borders. For example, my camera is at (0,0,0) and looking forward (0,0,1). I set my spaceship at (0,0,50). I also know the near plane (1) and the far plane (1000). I think I'd have to find the four corners of the plane in the frustum whose z position is 50, and with those corners I could determine the borders. But I don't know how to determine x and y.
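
    For a symmetric perspective camera this comes down to similar triangles: at a distance d in front of the camera, the visible half-height is d * tan(fovY / 2) and the half-width is that times the aspect ratio. A small sketch in camera space (the 45-degree FOV and 16:9 aspect in the example are placeholders):

        // Sketch: corners of the frustum cross-section at a given distance in front of the
        // camera, in camera/view space (camera at the origin, looking down +Z here; flip the
        // sign of z if your convention looks down -Z).
        public final class FrustumSlice {

            /** Corners in the order bottom-left, bottom-right, top-right, top-left, as {x, y, z}. */
            public static double[][] cornersAt(double distance, double fovYRadians, double aspect) {
                double halfHeight = distance * Math.tan(fovYRadians / 2.0);
                double halfWidth = halfHeight * aspect;
                return new double[][] {
                    { -halfWidth, -halfHeight, distance },
                    {  halfWidth, -halfHeight, distance },
                    {  halfWidth,  halfHeight, distance },
                    { -halfWidth,  halfHeight, distance },
                };
            }

            public static void main(String[] args) {
                // Example: the z = 50 plane from the question, 45-degree vertical FOV, 16:9 aspect.
                for (double[] c : cornersAt(50.0, Math.toRadians(45.0), 16.0 / 9.0)) {
                    System.out.printf("(%.2f, %.2f, %.2f)%n", c[0], c[1], c[2]);
                }
            }
        }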

    Read the article

  • Creating my own kill cam

    - by DalexL
    I plan on creating my own kill cam system for a sandbox tool set. After thinking about the mechanics of the kill cam itself, however, I'm quite lost. I'm trying to recreate the ones commonly seen in Call of Duty games, which show the actual killing scene from the view of the killer. My thoughts:

    - I can't just keep in memory when people kill others, because I wouldn't know when to start the 'recording process'. There is no way for me to accurately determine when somebody is 'about' to kill someone.
    - My only real idea so far is to have a complete duplicate of everything loaded off to the side, copying all the movement from the original world but with a 10-second delay. That way, all the kill cams would be 10 seconds long and the person's camera would just be moved to the second world of their killer.

    My questions: Is there already an accepted way to do this? Does anybody have any good ideas for something like this? Thanks if you can!
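
    A lighter-weight alternative to running a delayed copy of the whole world is to keep a rolling buffer of per-tick snapshots (roughly the last 10 seconds of whatever the replay needs: positions, orientations, animation states) and, when a kill happens, replay that buffer from the killer's recorded point of view. A minimal sketch of the buffer part, with the snapshot payload left as an assumption:

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.Deque;
        import java.util.List;

        // Sketch: rolling "last N ticks" recorder; old snapshots fall off the front.
        public final class KillCamRecorder {

            public static final class Snapshot {
                public final long tick;
                public final float[] killerCameraPose;   // placeholder payload, not a fixed format

                public Snapshot(long tick, float[] killerCameraPose) {
                    this.tick = tick;
                    this.killerCameraPose = killerCameraPose;
                }
            }

            private final Deque<Snapshot> buffer = new ArrayDeque<Snapshot>();
            private final int capacity;

            /** e.g. new KillCamRecorder(10 * 60) for ten seconds at 60 ticks per second. */
            public KillCamRecorder(int capacity) {
                this.capacity = capacity;
            }

            public void record(Snapshot snapshot) {
                if (buffer.size() == capacity) {
                    buffer.removeFirst();               // drop the oldest tick
                }
                buffer.addLast(snapshot);
            }

            /** Call when a kill happens; returns the buffered ticks, oldest first. */
            public List<Snapshot> takeReplay() {
                return new ArrayList<Snapshot>(buffer);
            }
        }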

    Read the article

  • Ubuntu DVR - what are the options?

    - by Alex D
    According to my research, the best solution for using an Ubuntu server as a DVR system would be ZoneMinder. Are there any alternatives to ZoneMinder out there? I'm not really happy that it only has a web interface to control/view my cameras, and it doesn't seem to have an option to record the video stream non-stop. Am I missing something in its configuration? The thing I'm really disappointed about is that I can't find a way to control my PTZ camera with it. What do manufacturers ship along with their standalone Linux-powered DVR systems?

    Read the article

  • What would be a good game making engine supporting Vector images?

    - by Qqwy
    I want to create a simple platforming game in which you are a square in a wonderful world. I would like the game to be playable in browsers. Basically I am searching for something similar to "Flixel", but with the following features:

    - Supports vector graphics
    - Allows zooming/rotating objects without producing huge amounts of lag as soon as more objects are in use (because I want to rotate the map around the player), so in other words, preferably zooms the viewport/camera instead of the objects themselves

    Does an engine like that exist?

    Read the article

  • How do I get started with fog type effects in a first person game?

    - by Dream Lane
    Hey guys, I'm currently using JME3 to learn 3D game development in Java, and I have run into a situation. I would like to add fog effects to my games, but I don't even know where to start. I know how to set the camera's far frustum to limit the render distance, but that simply makes a sharp cutoff; I'd like to fog it up a bit so it feels more natural. I'm looking for an answer that points me in the right direction. I'm not looking for specific code snippets or even JME3 engine specifics; I just want to get an idea of how this stuff works in general. Thanks!
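
    In general terms, fog is a per-pixel blend between the lit scene color and a fog color, weighted by distance from the camera; a classic choice is the exponential-squared falloff fogFactor = exp(-(density * distance)^2), clamped to [0, 1], with 1 meaning "no fog". Engines usually apply this in a shader or a post-processing filter (jME3, for instance, ships a FogFilter for this). A small, engine-agnostic sketch of just the math:

        // Sketch: exponential-squared distance fog.
        public final class FogMath {

            /** 1.0 right at the camera (no fog), falling toward 0.0 with distance. */
            public static double fogFactor(double distance, double density) {
                double x = density * distance;
                double f = Math.exp(-(x * x));
                return Math.max(0.0, Math.min(1.0, f));
            }

            /** Blend one color channel: scene color when factor == 1, fog color when factor == 0. */
            public static double applyFog(double sceneChannel, double fogChannel, double factor) {
                return fogChannel + (sceneChannel - fogChannel) * factor;
            }

            public static void main(String[] args) {
                // Example: with density 0.02, geometry about 42 units away is roughly half fogged.
                System.out.println(fogFactor(42.0, 0.02));
            }
        }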

    Read the article

  • Rotate camera with mouse? [closed]

    - by ezio160324
    Once again, using tutorial 10 at NeHe. I want the code

        if (keys[VK_RIGHT])          // Is The Right Arrow Being Pressed?
        {
            yrot -= 1.5f;            // Rotate The Scene To The Left
        }
        if (keys[VK_LEFT])           // Is The Left Arrow Being Pressed?
        {
            yrot += 1.5f;            // Rotate The Scene To The Right
        }

    and

        if (keys[VK_PRIOR])
        {
            lookupdown -= 1.0f;
        }
        if (keys[VK_NEXT])
        {
            lookupdown += 1.0f;
        }

    to be driven by the mouse instead of the left/right arrows and Page Up/Page Down. I tried everything I could think of. Can anyone help? EDIT: I tried using the WM_MOUSEMOVE message; I just could not figure it out. EDIT 2: I am using pure OpenGL (plus GLEW) to do this, with no window management system or other libraries such as GLUT, GLFW, SDL, or SFML. EDIT: The issue has been solved.

    Read the article

  • Webcam security camera software that runs as a service

    - by hurfdurf
    I've been looking for Windows webcam software that will run as a Windows service without any user login. The goal is to use the webcam as a cheap security camera and log the results to secure networked storage (a Windows share, not FTP). The requirements are:

    - Motion detection
    - Video capture
    - Runs as a service (should start recording immediately after reboot)

    Nice to have:

    - Round-robin storage, e.g. a 10 GB limit, with the oldest files overwritten/deleted when space gets low

    I've read the other webcam questions but still haven't stumbled across anything suitable. Evaluations thus far:

        Title                    MotionDetect  Service  Snapshots  Video  SpaceLimit  License
        Yawcam                   Yes           Yes      Yes        No     No          GPL
        WebCam ZoneTrigger       Yes           No       Yes        Yes    No          Commercial
        Dorgem                   Yes           No       Yes        Yes    No          GPL
        AbelCam                  Yes           No       Yes        Yes    No          Commercial
        Logitech                 Yes           No       Yes        Yes    No          Paired with camera
        IspyConnect              Yes           No       Yes        Yes    Yes         Free
        SecureCam (SourceForge)  Yes           No       Yes        Yes    No          GPL
        AbelCam                  Yes           No       Yes        Yes    No          Commercial
        Active WebCam            Yes           Yes(?)   Yes        Yes    Volume      Free Commercial
        WebCam Surveyor          Yes           No       Yes        Yes    No          Commercial
        WebCamsPy                NA            NA       NA         NA     NA          GPL

    Camera: Logitech Webcam Pro 9000, on Windows 7 32-bit. WebCamsPy failed to initialize, so it couldn't be tested.

    So far, the contenders: Active WebCam comes the closest and claims to run as a service, but I haven't been able to get it to record after a cold boot even though a service is running. Yawcam can be set up as a service but doesn't record video. IspyConnect has exactly the type of space limit I want and looks great, but doesn't run as a service (it also seems to be a bit of a CPU hog). Any other suggestions? I'm locked into Windows, so I can't use Linux's Motion, which looks almost perfect. Any pointers to rich Windows webcam/motion-detection libraries that could easily be turned into a command-line program would also be appreciated.

    Read the article

  • Setting up an IP camera with Silverlight

    - by Sean
    I am trying to set up an IP camera and have it work through Silverlight. I am using both Microsoft Expression and Microsoft Visual Studio 2008. I am able to do encoding with a USB-connected webcam, but I cannot find a way to use the encoder to connect to an IP camera connected to our switch. Does anyone have experience setting up an IP camera to encode into the Silverlight framework?

    Read the article
