Search Results

Search found 2068 results on 83 pages for 'camera'.


  • Transferring videos from Sony Video Camera to PC via USB connection

    - by Mehper C. Palavuzlar
    I have a Sony video camera (DCR-TRV340e) and loads of video tapes that I'd like to transfer to my PC. The camera has a USB connection, and I've installed its driver with no problems. Which video software do you recommend? I want my videos in XviD format at a maximum of 1 GB for a 1-hour film. How long does that take for a 1-hour film? Should I use the USB connection or another type of connection? Detailed information about connection and software settings would be highly appreciated.
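    (A quick sanity check on that size target, as editorial arithmetic rather than anything from the post: 1 GB is roughly 8,000-8,600 Mbit depending on whether "GB" means 10^9 or 2^30 bytes, so spread over 3,600 seconds it allows a total of about 2.2-2.4 Mbit/s for video plus audio -- a comfortable budget for XviD at DV resolution.)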

    Read the article

  • Ray intersection and child camera

    - by lutangar
    I've been playing with three.js for a few weeks now and have hit a few inconsistencies with ray casting. Here is a simplified version demonstrating one of the bugs I encountered: http://jsfiddle.net/eMrhb/12/ The camera is added to the sphere mesh for further use with TrackBallControl, for example:

        scene.add(mesh);
        mesh.add(camera);

    Clicking a few times on the sphere and opening the console shows none of the expected intersections between the ray and the mesh. Adding the camera to the scene instead (http://jsfiddle.net/eMrhb/9/) solves the problem:

        scene.add(mesh);
        scene.add(camera);

    But I could use a much more complex hierarchy between my scene objects and the camera to suit my needs. Is this a limitation? If it is, are there any workarounds I could use?

    Read the article

  • Galaxy Nexus (Specifically) camera startPreview failed

    - by Roman
    I'm having a strange issue on a Galaxy Nexus when trying to have a live camera view in my app. I get this error in the logcat:

        06-29 16:31:26.681 I/CameraClient(133): Opening camera 0
        06-29 16:31:26.681 I/CameraHAL(133): camera_device open
        06-29 16:31:26.970 D/DOMX (133): ERROR: failed check:(eError == OMX_ErrorNone) || (eError == OMX_ErrorNoMore) - returning error: 0x80001005 - Error returned from OMX API in ducati
        06-29 16:31:26.970 E/CameraHAL(133): Error while configuring rotation 0x80001005
        06-29 16:31:27.088 I/am_on_resume_called(21274): [0,digifynotes.Activity_Camera]
        06-29 16:31:27.111 V/PhoneStatusBar(693): setLightsOn(true)
        06-29 16:31:27.205 E/CameraHAL(133): OMX component is not in loaded state
        06-29 16:31:27.205 E/CameraHAL(133): setNSF() failed -22
        06-29 16:31:27.205 E/CameraHAL(133): Error: CAMERA_QUERY_RESOLUTION_PREVIEW -22
        06-29 16:31:27.252 I/MonoDroid(21274): UNHANDLED EXCEPTION: Java.Lang.Exception: Exception of type 'Java.Lang.Exception' was thrown.
        06-29 16:31:27.252 I/MonoDroid(21274): at Android.Runtime.JNIEnv.CallVoidMethod (intptr,intptr) <0x00068>
        06-29 16:31:27.252 I/MonoDroid(21274): at Android.Hardware.Camera.StartPreview () <0x0007f>
        06-29 16:31:27.252 I/MonoDroid(21274): at DigifyNotes.CameraPreviewView.SurfaceChanged (Android.Views.ISurfaceHolder,Android.Graphics.Format,int,int) <0x000d7>
        06-29 16:31:27.252 I/MonoDroid(21274): at Android.Views.ISurfaceHolderCallbackInvoker.n_SurfaceChanged_Landroid_view_SurfaceHolder_III (intptr,intptr,intptr,int,int,int) <0x0008b>
        06-29 16:31:27.252 I/MonoDroid(21274): at (wrapper dynamic-method) object.4c65d912-497c-4a67-9046-4b33a55403df (intptr,intptr,intptr,int,int,int) <0x0006b>

    That very same source code works flawlessly on a Samsung Galaxy Ace 2X (4.0.4) and an LG G2X (2.3.7). I will test on a Galaxy S4 later if my friend lends it to me. The Galaxy Nexus runs Android 4.2.2, I believe. Anyone have any ideas? EDIT: Here are my camera classes [please note I am using Mono; the formatting is more readable if you view the pastebins as raw]:

        Camera Activity:     http://pastebin.com/YPcGXJRB
        Camera Preview View: http://pastebin.com/zNf8AWDf
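    (A hedged aside: on this device family, OMX errors like the ones above are often triggered by configuring the preview with a size or rotation the driver does not support before startPreview(). A minimal defensive setup is sketched below in plain Java against the android.hardware.Camera API -- the same calls the MonoDroid stack trace wraps; the helper and parameter names are placeholders, not code from the post.)

        import java.io.IOException;
        import android.hardware.Camera;
        import android.view.SurfaceHolder;

        // Sketch: only ever hand the driver a preview size it advertises.
        final class PreviewHelper {
            static void startSafely(Camera camera, SurfaceHolder holder,
                                    int surfaceW) throws IOException {
                Camera.Parameters params = camera.getParameters();
                Camera.Size best = params.getSupportedPreviewSizes().get(0);
                for (Camera.Size s : params.getSupportedPreviewSizes()) {
                    // pick the supported size closest to the surface width
                    if (Math.abs(s.width - surfaceW) < Math.abs(best.width - surfaceW)) {
                        best = s;
                    }
                }
                params.setPreviewSize(best.width, best.height);
                camera.setParameters(params);
                camera.setPreviewDisplay(holder);
                camera.startPreview();
            }
        }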

    Read the article

  • compile ICS/JB camera application - native library jni-mosaic error

    - by liorry
    I would like to use the Panorama mode from the ICS/JB camera application. I've downloaded the source code (with resources) and managed to solve all the compilation errors, but as soon as I start the application on my device (running JB), I get this error:

        10-25 14:42:53.617: E/AndroidRuntime(23147): FATAL EXCEPTION: GLThread 2586
        10-25 14:42:53.617: E/AndroidRuntime(23147): java.lang.UnsatisfiedLinkError: Native method not found: com.app.camera.panorama.MosaicRenderer.reset:(IIZ)V
        10-25 14:42:53.617: E/AndroidRuntime(23147): at com.app.camera.panorama.MosaicRenderer.reset(Native Method)
        10-25 14:42:53.617: E/AndroidRuntime(23147): at com.app.camera.panorama.MosaicRendererSurfaceViewRenderer.onSurfaceChanged(MosaicRendererSurfaceViewRenderer.java:49)
        10-25 14:42:53.617: E/AndroidRuntime(23147): at android.opengl.GLSurfaceView$GLThread.guardedRun(GLSurfaceView.java:1505)
        10-25 14:42:53.617: E/AndroidRuntime(23147): at android.opengl.GLSurfaceView$GLThread.run(GLSurfaceView.java:1240)

    I do have a libjni-mosaic lib, located under armeabi-v7a/armeabi/x86, and it seems to load fine, but it probably doesn't contain the methods that MosaicRenderer declares. I also tried compiling the CyanogenMod camera app (https://github.com/CyanogenMod/android_packages_apps_Camera/tree/ics) but I get the same error... The camera itself works, for stills and video recording, but as soon as I switch to panorama mode, it crashes. Can anyone point me to the right jni-mosaic lib, or to what I'm doing wrong? Do I need to do something to make my app use the JNI/SO files?
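    (A hedged note on the error above: this UnsatisfiedLinkError is exactly what a package rename produces. JNI binds each native method to a C symbol derived from the fully qualified class name, so a prebuilt libjni-mosaic.so built for the AOSP package can never satisfy a class repackaged as com.app.camera.panorama. Keeping the original package for the panorama classes, or rebuilding the .so from the AOSP jni sources with ndk-build, are the usual fixes. An illustration -- the parameter names are assumptions reconstructed from the log's reset:(IIZ)V signature:)

        // The prebuilt AOSP library exports a symbol derived from the ORIGINAL package:
        //     Java_com_android_camera_panorama_MosaicRenderer_reset
        // After repackaging, Dalvik instead looks for:
        //     Java_com_app_camera_panorama_MosaicRenderer_reset   <-- not in the .so
        package com.android.camera.panorama; // keeping the original package is one fix

        public class MosaicRenderer {
            static { System.loadLibrary("jni-mosaic"); } // matches libjni-mosaic.so
            public static native void reset(int width, int height, boolean isLandscape);
        }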

    Read the article

  • SurfaceView for Camera Preview won't get destroyed when pressing Power-Button

    - by for3st
    I want to implement a camera preview. For that I have a custom view, CameraView extends ViewGroup, that in its constructor programmatically creates a SurfaceView. I have the following components (highly simplified for brevity):

    ScannerFragment.java

        public View onCreateView(..) {
            // inflate view and get cameraView
        }

        public void onResume() {
            // open camera -> set rotation -> startPreview (in a thread) ->
            // set preview callback -> start decoding worker
        }

        public void onPause() {
            // stop decoding worker -> stop preview -> release camera
        }

    CameraView.java extends ViewGroup

        public void setUpCalledInConstructor(Context context) {
            // create a SurfaceView and add it to this ViewGroup ->
            // get SurfaceHolder and set callback
        }

        /* SurfaceHolder.Callback */
        public void surfaceCreated(SurfaceHolder holder) {
            camera.setPreviewDisplay(holder);
        }

        public void surfaceDestroyed(SurfaceHolder holder) {
            // NOTHING is done here
        }

        public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
            camera.getParameters().setPreviewSize(previewSize.width, previewSize.height);
        }

    fragment_scanner.xml

        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            xmlns:tools="http://schemas.android.com/tools"
            android:layout_width="match_parent"
            android:layout_height="match_parent">
            <com.myapp.camera.CameraView
                android:id="@+id/cameraPreview"
                android:layout_width="match_parent"
                android:layout_height="match_parent"/>
        </RelativeLayout>

    I think I have the lifecycle correct (acquiring resources in onResume(), releasing them in onPause(), roughly said), and the following work just fine: pressing Home and returning; pressing the task switcher and returning; rotation. But one thing doesn't work: when I press the power button on the device and then return to the camera preview, the preview is stuck with the last image captured before the button was pressed. If I rotate, it works fine again, since that goes through the lifecycle. After some research I found out that this is probably because the SurfaceView won't get destroyed when the power button is pressed, i.e. SurfaceHolder.Callback.surfaceDestroyed(SurfaceHolder holder) won't be called. And in fact, when I compare the (very verbose) log output of the home-button case and the power-button case, they are the same except that surfaceDestroyed doesn't get called. So far I have found no solution to work around it. I purposely avoid any resource-cleaning code in my surfaceDestroyed(), but this does not help. My idea was to manually destroy the SurfaceView, like asked in this question, but this seems not possible. I also tested other applications with SurfaceViews/cameras and they don't seem to have this issue. So I would appreciate any hints or tips on that.
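    (A hedged workaround sketch, not from the post: since the screen-off path keeps the surface alive, surfaceCreated() never re-fires, so nothing restarts the preview. One common fix is to restart the preview from onResume() whenever the surface already exists; openCamera(), mCamera and mSurfaceView are placeholder names.)

        @Override
        public void onResume() {
            super.onResume();
            mCamera = openCamera(); // the setup described above
            if (mSurfaceView.getHolder().getSurface().isValid()) {
                // Power-button case: the surface survived, so surfaceCreated()
                // will NOT be called again -- attach and start the preview here.
                try {
                    mCamera.setPreviewDisplay(mSurfaceView.getHolder());
                    mCamera.startPreview();
                } catch (java.io.IOException e) {
                    // placeholder error handling: release the camera and log
                }
            }
            // Otherwise surfaceCreated()/surfaceChanged() start it as usual.
        }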

    Read the article

  • Issue getting camera emulation to work with Tom G's HttpCamera

    - by user591524
    I am trying to use the Android emulator to preview video from a webcam. I have used the Tom Gibara sample code, minus the WebBroadcaster (I am instead using VLC streaming via HTTP). So, I have modified the SDK's "CameraPreview" app to use the HttpCamera, but the stream never appears: the preview app launches and remains black. Debugging through doesn't give me any clues either. Is anything obvious to others? Notes:

    1) I have updated the original CameraPreview class as described here: http://www.inter-fuser.com/2009/09/live-camera-preview-in-android-emulator.html, but referencing HttpCamera instead of SocketCamera.
    2) I updated Tom's original example to reference the "Camera" type instead of the deprecated "CameraDevice" type.
    3) Below is my CameraPreview.java.
    4) THANK YOU

        package com.example.android.apis.graphics;

        import android.app.Activity;
        import android.content.Context;
        import android.hardware.Camera;
        import android.os.Bundle;
        import android.view.SurfaceHolder;
        import android.view.SurfaceView;
        import android.view.Window;
        import java.io.IOException;
        import android.graphics.Canvas;

        // ----------------------------------------------------------------------

        public class CameraPreview extends Activity {
            private Preview mPreview;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);

                // Hide the window title.
                requestWindowFeature(Window.FEATURE_NO_TITLE);

                // Create our Preview view and set it as the content of our activity.
                mPreview = new Preview(this);
                setContentView(mPreview);
            }
        }

        // ----------------------------------------------------------------------

        class Preview extends SurfaceView implements SurfaceHolder.Callback {
            SurfaceHolder mHolder;
            //Camera mCamera;
            HttpCamera mCamera; //changed

            Preview(Context context) {
                super(context);
                // Install a SurfaceHolder.Callback so we get notified when the
                // underlying surface is created and destroyed.
                mHolder = getHolder();
                mHolder.addCallback(this);
                //mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
                mHolder.setType(SurfaceHolder.SURFACE_TYPE_NORMAL); //changed
            }

            public void surfaceCreated(SurfaceHolder holder) {
                // The Surface has been created, acquire the camera and tell it
                // where to draw.
                //mCamera = Camera.open();
                this.StartCameraPreview(holder);
            }

            public void surfaceDestroyed(SurfaceHolder holder) {
                // Surface will be destroyed when we return, so stop the preview.
                // Because the CameraDevice object is not a shared resource, it's
                // very important to release it when the activity is paused.
                //mCamera.stopPreview(); //changed
                mCamera = null;
            }

            public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
                // Now that the size is known, set up the camera parameters and
                // begin the preview.
                //Camera.Parameters parameters = mCamera.getParameters();
                //parameters.setPreviewSize(w, h);
                //mCamera.setParameters(parameters);
                //mCamera.startPreview();
                this.StartCameraPreview(holder);
            }

            private void StartCameraPreview(SurfaceHolder sh) {
                mCamera = new HttpCamera("10.213.74.247:443", 640, 480, true); //changed
                try {
                    //mCamera.setPreviewDisplay(holder);
                    Canvas c = sh.lockCanvas(null);
                    mCamera.capture(c);
                    sh.unlockCanvasAndPost(c);
                } catch (Exception exception) {
                    //mCamera.release();
                    mCamera = null;
                    // TODO: add more exception handling logic here
                }
            }
        }

    Read the article

  • Libgdx 2D game, randomly generated world of random size, how to get mouse coordinates?

    - by Solom
    I'm a noob and English is not my mother tongue, so please bear with me! I'm generating a map for a sidescroller out of a 2D array. That is, the array holds different values and I create blocks based on those values. Now, my problem is to match mouse coordinates on screen with the actual block the mouse is pointing at.

        public class GameScreen implements Screen {
            private static final int WIDTH = 100;
            private static final int HEIGHT = 70;

            private OrthographicCamera camera;
            private Rectangle glViewport;
            private SpriteBatch spriteBatch;
            private Map map;
            private Block block;
            ...

            @Override
            public void show() {
                camera = new OrthographicCamera(WIDTH, HEIGHT);
                camera.position.set(WIDTH/2, HEIGHT/2, 0);
                glViewport = new Rectangle(0, 0, WIDTH, HEIGHT);
                map = new Map(16384, 256);
                map.printTileMap(); // Debugging only
                spriteBatch = new SpriteBatch();
            }

            @Override
            public void render(float delta) {
                // Clear previous frame
                Gdx.gl.glClearColor(1, 1, 1, 1);
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
                GL30 gl = Gdx.graphics.getGL30();
                // gl.glViewport((int) glViewport.x, (int) glViewport.y, (int) glViewport.width, (int) glViewport.height);
                spriteBatch.setProjectionMatrix(camera.combined);
                camera.update();

                spriteBatch.begin();
                // Draw Map
                this.drawMap();
                // spriteBatch.flush();
                spriteBatch.end();
            }

            private void drawMap() {
                for (int a = 0; a < map.getHeight(); a++) {
                    // Bounds check (y)
                    if (camera.position.y + camera.viewportHeight < a) // || camera.position.y - camera.viewportHeight > a)
                        break;
                    for (int b = 0; b < map.getWidth(); b++) {
                        // Bounds check (x)
                        if (camera.position.x + camera.viewportWidth < b) // || camera.position.x > b)
                            break;
                        // Dynamic rendering via BlockManager
                        int id = map.getTileMap()[a][b];
                        Block block = BlockManager.map.get(id);
                        if (block != null) { // Check if Air
                            block.setPosition(b, a);
                            spriteBatch.draw(block.getTexture(), b, a, 1, 1);
                        }
                    }
                }
            }

    As you can see, I don't use the viewport anywhere. Not sure if I need it somewhere down the road. So, the map is 16384 blocks wide. One block is 16 pixels in size. One of my naive approaches was this:

        if (Gdx.input.isButtonPressed(Input.Buttons.LEFT)) {
            Vector3 mousePos = new Vector3();
            mousePos.set(Gdx.input.getX(), Gdx.input.getY(), 0);
            camera.unproject(mousePos);
            System.out.println(Math.round(mousePos.x)); // *16); // Debugging
            // TODO: round
            // map.getTileMap()[mousePos.x][mousePos.y] = 2; // Draw at mouse position
        }

    I confused myself somewhere down the road, I fear. What I want to do is update the "block" (or rather the information in the Map/2D array) so that in the next render() there is another block -- basically drawing on the SpriteBatch. So if anyone could point me in the right direction, this would be highly appreciated. Thanks!
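    (A hedged sketch of the missing step: camera.unproject() already returns world units, and the map is drawn at one tile per world unit at (b, a), so flooring the result and swapping it into [row][column] order is all that's left. The Map accessors are the post's own; the method name is a placeholder.)

        // Sketch: map a left-click to a tile in map.getTileMap(),
        // which drawMap() indexes as [y][x]. 1 world unit == 1 tile here.
        private void paintTileUnderMouse() {
            if (Gdx.input.isButtonPressed(Input.Buttons.LEFT)) {
                Vector3 mousePos = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
                camera.unproject(mousePos); // screen -> world coords
                int tileX = (int) Math.floor(mousePos.x);
                int tileY = (int) Math.floor(mousePos.y);
                if (tileX >= 0 && tileX < map.getWidth()
                        && tileY >= 0 && tileY < map.getHeight()) {
                    map.getTileMap()[tileY][tileX] = 2; // next render() shows the new block
                }
            }
        }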

    Read the article

  • 3d Model Scaling With Camera

    - by spasarto
    I have a very simple 3D maze program that uses a first-person camera to navigate the maze. I'm trying to scale the blocks that make up the maze walls and floor so the corridors seem more roomy to the camera. Every time I scale the model, the camera seems to scale with it, and the corridors always stay the same width. I've tried applying the scale to the model in the content pipeline (setting the scale property of the model in the properties window in VS). I've also tried applying the scale using Matrix.CreateScale(float) with the Scale-Rotate-Translate order, with the same result. If I leave the camera speed the same, the camera moves slower, so I know it's traversing a larger distance, but the world doesn't look larger; the camera just seems slower. I'm not sure what part of the code to include, since I don't know if it is an issue with my model, camera, or something else. Any hints at what I'm doing wrong?

    Camera:

        Projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.PiOver4, _device.Viewport.AspectRatio, 1.0f, 1000.0f);

        Matrix camRotMatrix = Matrix.CreateRotationX(_cameraPitch)
                            * Matrix.CreateRotationY(_cameraYaw);
        Vector3 transCamRef = Vector3.Transform(_cameraForward, camRotMatrix);
        _cameraTarget = transCamRef + CameraPosition;
        Vector3 camRotUpVector = Vector3.Transform(_cameraUpVector, camRotMatrix);
        View = Matrix.CreateLookAt(CameraPosition, _cameraTarget, camRotUpVector);

    Model:

        World = Matrix.CreateTranslation(Position);

    Read the article

  • 2d game view camera zoom, rotation & offset using 'Filter' / 'Shader' processing?

    - by Arthur Wulf White
    I wish to add the ability to zoom in, zoom out, rotate and move the view in a top-down view over a collection of points and lines in a large 2D map. I split the map into a grid so I only need to render the points that are 'near' the camera. My question is: how do I render a point A(Xp, Yp), assuming the following details:

    - The offset of the camera POV from the origin of the map is Xc, Yc. The camera center is positioned on top of that point, so if there's a point at (Xc, Yc) it is rendered at the center of the screen.
    - The rotation angle is alpha.
    - The scale is S.

    Read my answer first; I am thinking there is a more optimized solution, thanks. My question is how to include the following improvement. I read in the AS3 Bible book that, regarding ShaderInput, you can use these methods to coerce Pixel Bender to crunch huge sets of data masquerading as images, without doing too much work on the ActionScript side to make them look like images. Meaning, if I am performing the same linear function on a lot of items, I can do it all at once if I use shaders correctly, and save processing time. Does anyone know how that is accomplished? Here is a sample of what I mean: http://wonderfl.net/c/eFp0/
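    (For the record, the transform the first half of the question asks for composes as translate, rotate, then scale: screen(A) = S * R(alpha) * (A - C) + screenCenter, where C = (Xc, Yc), R(alpha) is the 2x2 rotation matrix [[cos a, -sin a], [sin a, cos a]], and screenCenter is half the viewport size. This is an editorial note; the sign of alpha depends on whether the camera or the world is taken to rotate.)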

    Read the article

  • Quaternions, Axis Angles and Rotation Matrices. Which of these should I use for FP Camera?

    - by Afonso Lage
    After 2 weeks of reading many math formulas and such, I know what a quaternion, an axis-angle and a rotation matrix are. I have made my own math library (Java) to use in my game (LWJGL). But I'm really confused about all this. I want to have a 3D first-person camera. The move (translation) is working fine, but the rotation isn't working like I need. I need the camera to rotate around the world axes, not about its own axes. But even using quaternions this doesn't work, and no matter how much I read about Euler angles, everybody tells me: don't touch them! This is a little piece of code that I'm using to do the rotation:

        Quaternion qPitch = Quaternion.createFromAxis(cameraRotate.x, 1.0f, 0.0f, 0.0f);
        Quaternion qYaw = Quaternion.createFromAxis(cameraRotate.y, 0.0f, 1.0f, 0.0f);
        this.multiplicate(qPitch.toMatrix4f().toArray());
        this.multiplicate(qYaw.toMatrix4f().toArray());

    where this is a Matrix4f view matrix and cameraRotate is a Vector3f that just holds the angles to rotate, obtained from mouse movement. So I think I'm doing everything right: translate the view matrix, then rotate the move matrix. So, after reading all this, I just want to know: to obtain correct first-person camera rotation, I must use quaternions for the rotations, but how do I rotate around the world axes? Thanks for reading. Best regards, Afonso Lage
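    (A hedged note on what usually causes this: quaternion multiplication order selects the frame. If the orientation accumulates as orientation = orientation * q, each q is applied about the camera's local axes; accumulating orientation = q * orientation applies q about a fixed world axis. The common first-person pattern is yaw in world space, pitch in local space. A sketch reusing the post's own API names -- createFromAxis, multiply and normalize are assumed helpers, not a standard library:)

        // Assumed convention: a.multiply(b) returns the Hamilton product a*b,
        // i.e. "apply b first, then a" when rotating vectors as q*v*q^-1.
        Quaternion qYaw   = Quaternion.createFromAxis(deltaYaw,   0.0f, 1.0f, 0.0f); // world up
        Quaternion qPitch = Quaternion.createFromAxis(deltaPitch, 1.0f, 0.0f, 0.0f); // local right

        orientation = qYaw.multiply(orientation);   // pre-multiply: world-axis yaw
        orientation = orientation.multiply(qPitch); // post-multiply: local-axis pitch
        orientation = orientation.normalize();      // fight floating-point drift

        // The view matrix then uses the inverse (conjugate) of the orientation,
        // followed by a translation by the negated camera position.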

    Read the article

  • How do I calculate the boundary of the game window after transforming the view?

    - by Cypher
    My Camera class handles zoom, rotation, and of course panning. It's invoked through SpriteBatch.Begin, like so many other XNA 2D camera classes. It calculates the view matrix like so:

        public Matrix GetViewMatrix()
        {
            return Matrix.Identity
                * Matrix.CreateTranslation(new Vector3(-this.Spatial.Position, 0.0f))
                * Matrix.CreateTranslation(-(this.viewport.Width / 2), -(this.viewport.Height / 2), 0.0f)
                * Matrix.CreateRotationZ(this.Rotation)
                * Matrix.CreateScale(this.Scale, this.Scale, 1.0f)
                * Matrix.CreateTranslation(this.viewport.Width * 0.5f, this.viewport.Height * 0.5f, 0.0f);
        }

    I was having a minor issue with performance, which, after some profiling, led me to add a culling feature to my rendering system. Before I implemented the camera's zoom feature, it simply grabbed the camera's boundaries and culled any game objects that did not intersect the camera. However, after giving the camera the ability to zoom, that no longer works. The reason why is visible in the screenshot below: the navy blue rectangle represents the camera's boundaries when zoomed out all the way (Camera.Scale = 0.5f). So, when zoomed out, game objects are culled before they reach the boundaries of the window. The camera's width and height are determined by the Viewport properties of the same name (maybe this is my mistake? I wasn't expecting the camera to "resize" like this). What I'm trying to calculate is a Rectangle that defines the boundaries of the screen, as indicated by my awesome blue arrows, even after the camera is rotated, scaled, or panned. Here is how I've more recently found out how not to do it:

        public Rectangle CullingRegion
        {
            get
            {
                Rectangle region = Rectangle.Empty;

                Vector2 size = this.Spatial.Size;
                size *= 1 / this.Scale;

                Vector2 position = this.Spatial.Position;
                position = Vector2.Transform(position, this.Inverse);

                region.X = (int)position.X;
                region.Y = (int)position.Y;
                region.Width = (int)size.X;
                region.Height = (int)size.Y;

                return region;
            }
        }

    It seems to calculate the right size, but when I render this region, it moves around, which will obviously cause problems. It needs to be "static", so to speak. It's also obscenely slow, which causes more of a problem than it solves. What am I missing?
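    (A hedged sketch of the standard fix: instead of scaling the camera's size, unproject the four viewport corners through the inverse view matrix -- in XNA terms, Vector2.Transform(corner, Matrix.Invert(GetViewMatrix())) -- and take their axis-aligned bounding box. That stays correct under any pan, zoom, and rotation, and the inverse only needs computing once per frame. A language-neutral Java illustration, treating the inverse view as a 2D affine transform [a b; c d] plus (tx, ty); names are illustrative, not XNA's API:)

        // Sketch: world-space culling rectangle = AABB of the unprojected
        // viewport corners (vw x vh viewport).
        static float[] cullingRegion(float a, float b, float c, float d,
                                     float tx, float ty, float vw, float vh) {
            float[][] corners = { {0, 0}, {vw, 0}, {0, vh}, {vw, vh} };
            float minX = Float.MAX_VALUE, minY = Float.MAX_VALUE;
            float maxX = -Float.MAX_VALUE, maxY = -Float.MAX_VALUE;
            for (float[] p : corners) {
                float wx = a * p[0] + b * p[1] + tx; // corner -> world space
                float wy = c * p[0] + d * p[1] + ty;
                minX = Math.min(minX, wx); maxX = Math.max(maxX, wx);
                minY = Math.min(minY, wy); maxY = Math.max(maxY, wy);
            }
            return new float[] { minX, minY, maxX - minX, maxY - minY }; // x, y, w, h
        }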

    Read the article

  • Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

    - by voxelizr
    TL;DR -- in my first simple software voxel raycaster, I cannot get camera rotations to work, seemingly correct matrices notwithstanding. The result is skewed: like a flat rendering, correctly rotated, however distorted and without depth. (While axis-aligned, i.e. unrotated, depth and parallax are as expected.)

    I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU-based for now, until I figure out how things work exactly -- for now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible. Now I have gotten to the point where a perspective-projection camera can move through the world, and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny. So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations.

    Screenshot #1: correct depth when the camera is still strictly axis-aligned, i.e. un-rotated.

    Herein lies my problem. Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations is, in theory, very clear to me. Yet I have only ever achieved a "2.5D rendering" when the camera rotates... fish-eyey, a bit like in Google Street View: even though I have a volumetric world representation, it seems -- no matter what I try -- as if I first created a rendering from the "front view", then rotated that flat rendering according to the camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and is error-prone. Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey flat-render-rotated style looks:

    Screenshot #2: camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screen #2 is not visible in this rotation, yet by now "it really should"!

    Now of course I'm aware of this: in a simple axis-aligned no-rotation setup like I had in the beginning, the ray simply traverses the positive z-direction in small steps, diverging to the left or right and top or bottom only depending on pixel position and projection matrix. As I "rotate the camera to the right or left" -- i.e. I rotate it around the Y-axis -- those very steps should simply be transformed by the proper rotation matrix, right? So for forward traversal the Z-step gets a bit smaller the more the cam rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical divergence, increasing fractions of the x-step need to be "added" to the z-step. Somehow, none of the many matrices I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations, really get this part right.
    Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode:

        fx and fy: pixel positions x and y
        rayPos:  vec3 for the ray starting position in world-space (calculated as below)
        rayDir:  vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
        rayStep: a temporary vec3
        camPos:  vec3 for the camera position in world space
        camRad:  vec3 for camera rotation in radians
        pmat:    typical perspective projection matrix

    The algorithm / pseudocode:

        // 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
        rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0

        // 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0
        rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))

        // 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye and
        //    also the non-normalized, "in axis-aligned world" traversal step size "forward into the screen"
        rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist

        // 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
        rayStep.MultMat(num.NewDmat4RotationY(CamRad.Y))

        // set up direction vector from still-origin-based-ray-position-off-rotated-view-plane
        // plus rotated-zstep-vector
        rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - me.rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z

        // perspective projection
        rayDir.Normalize()
        rayDir.MultMat(pmat)

        // before traversal, the ray starting position has to be transformed
        // from origin-relative to campos-relative
        rayPos.Add(camPos)

    I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.

    Read the article

  • Configure BL-C111A IP Camera

    - by csmba
    I am an AT&T DSL customer. I want the camera to send an email when motion detection is triggered. Can anyone tell me how they managed to do that using:

    - GMail (I cannot, because I don't think it supports SSL)
    - other alternatives, if GMail is not supported

    Read the article

  • Google Earth - zoom like with a scope, rotate the camera

    - by Suma
    I know it is possible to change the viewing altitude in Google Earth, but is it possible to zoom in/out as well, like with a telescope? I would like to preview what could be seen from a particular lookout point, but without zooming in this seems very hard. Another feature which would be handy for the purpose: is it possible to turn the viewing direction around the camera, and not around the target?

    Read the article

  • Asus Laptop camera PID code

    - by Littlet-ENG
    I just installed Windows 7 on my daughter's laptop. I want to make sure the onboard web camera has the correct driver installed. Asus asks for a PID code. How do I find out what the PID code is? It's an Asus F3Ka laptop. Any suggestions?

    Read the article

  • Quaternion Camera Orbiting around a Sphere

    - by jessejuicer
    Background: I'm trying to create a game where the camera is always rotating around a single sphere. I'm using the DirectX D3DX math functions in C++ on Windows.

    The Problem: I cannot get both the camera position and orientation working properly at the same time. Either one works, but not both together. Here's the code for my quaternion camera that revolves around a sphere, always looking at the center point of the sphere, as far as I understand it (but which isn't working properly). I'm only going to present rotation around the X axis here, to simplify this post.

    Whenever the UP key is pressed or held down, the camera should rotate around the X axis, while looking at the center point of the sphere (which is at 0,0,0 in the world). So, I build a quaternion that represents a small angle of rotation around the X axis like this (where 'deltaAngle' is a small enough number for a slow rotation):

        D3DXVECTOR3 rotAxis;
        D3DXQUATERNION tempQuat;

        tempQuat.x = 0.0f;
        tempQuat.y = 0.0f;
        tempQuat.z = 0.0f;
        tempQuat.w = 1.0f;

        rotAxis.x = 1.0f;
        rotAxis.y = 0.0f;
        rotAxis.z = 0.0f;

        D3DXQuaternionRotationAxis(&tempQuat, &rotAxis, deltaAngle);

    ...and I accumulate the result into the camera's current orientation quat, like this:

        D3DXQuaternionMultiply(&cameraOrientationQuat, &cameraOrientationQuat, &tempQuat);

    ...which all works fine. Now I need to build a view matrix to pass to the DirectX SetTransform function. So I build a rotation matrix from the camera orientation quat as follows:

        D3DXMATRIXA16 rotationMatrix;
        D3DXMatrixIdentity(&rotationMatrix);
        D3DXMatrixRotationQuaternion(&rotationMatrix, &cameraOrientationQuat);

    ...Now (as seen below), if I just transpose that rotationMatrix and plug it into the 3x3 section of the view matrix, then negate the camera's position and plug it into the translation section of the view matrix, the rotation magically works. Perfectly. (Even when I add in rotations for all three axes.) There's no gimbal lock, just a smooth rotation all around in any direction. BUT -- this works even though I never change the camera's position. At all. Which sorta blows my mind. I even display the camera position and can watch it stay constant at its starting point (0.0, 0.0, -4000.0). It never moves, but the rotation around the sphere is perfect. I don't understand that. For proper view rotation, the camera position should be revolving around the sphere.

    Here's the rest of building the view matrix (I'll talk about the commented code below). Note that the camera starts out at (0.0, 0.0, -4000.0) and m_camDistToTarget is 4000.0:

        /*
        D3DXVECTOR3 vec1;
        D3DXVECTOR4 vec2;

        vec1.x = 0.0f;
        vec1.y = 0.0f;
        vec1.z = -1.0f;
        D3DXVec3Transform(&vec2, &vec1, &rotationMatrix);

        g_cameraActor->pos.x = vec2.x * g_cameraActor->m_camDistToTarget;
        g_cameraActor->pos.y = vec2.y * g_cameraActor->m_camDistToTarget;
        g_cameraActor->pos.z = vec2.z * g_cameraActor->m_camDistToTarget;
        */

        D3DXMatrixTranspose(&g_viewMatrix, &rotationMatrix);

        g_viewMatrix._41 = -g_cameraActor->pos.x;
        g_viewMatrix._42 = -g_cameraActor->pos.y;
        g_viewMatrix._43 = -g_cameraActor->pos.z;
        g_viewMatrix._44 = 1.0f;

        g_direct3DDevice9->SetTransform( D3DTS_VIEW, &g_viewMatrix );

    ...(The world matrix is always an identity, and the perspective projection works fine.)

    ...So, without the commented code being compiled, the rotation works fine. But to be proper, for obvious reasons, the camera position should be rotating around the sphere, which it currently is not. That's what the commented code is supposed to do.

    And when I add in that chunk of code and look at all the data as I hold the keys down (using UP, DOWN, LEFT, RIGHT to rotate in different directions), all the values look correct! The camera position is rotating around the sphere just fine, and I can watch that happen visually too. The problem is that the camera orientation does not look at the center of the sphere. It always looks straight ahead down the z axis (toward positive z) as it revolves around the sphere. Yet the values of both the rotation matrix and the view matrix seem to be behaving correctly. (The view matrix orientation is the same as the rotation matrix, just transposed.)

    For instance, if I just hold down the key to spin around the X axis, I can watch the values of the three axes represented in the view matrix: the view x-axis stays at (1.0, 0.0, 0.0), and the view y-axis and z-axis both spin around the x axis just fine. All the numbers are changing as they should be... well, almost. As far as I can tell, the position of the view matrix is spinning around the sphere in one direction (like clockwise), and the orientation (the axes in the view matrix) is spinning in the opposite direction (like counter-clockwise). Which I guess explains why the orientation appears to stay straight ahead. I know the position is correct. It revolves properly. It's the orientation that's wrong.

    Can anyone see what I'm doing wrong? Am I using these functions incorrectly? Or is my algorithm flawed? As usual, I've been combing my code for simple mistakes for many hours. I'm willing to post the actual code, and a video of the behavior, but that will take much more effort. Thought I'd ask this way first.
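    (A hedged observation on the symptom: a view matrix is the inverse of the camera's world transform, so its translation row must be the camera position expressed in camera space -- per axis, -dot(p, axis) -- not the raw -p. The D3DXMatrixLookAtLH documentation shows the same form: _41 = -dot(xaxis, eye), and so on. Plugging in -p only looks right while the rotation stays identity, which fits the behavior described above. A generic Java sketch of the corrected translation row:)

        // Hedged sketch: translation row of a view matrix, given the camera's
        // world rotation R (rows assumed to be the right/up/forward axes, as
        // D3DX produces for row vectors) and the camera world position p.
        static float[] viewTranslation(float[][] r, float[] p) {
            float[] t = new float[3];
            for (int j = 0; j < 3; j++) {
                // -dot(p, j-th camera axis) -- NOT simply -p[j]
                t[j] = -(p[0] * r[j][0] + p[1] * r[j][1] + p[2] * r[j][2]);
            }
            return t; // plug into _41, _42, _43 after transposing R into the view
        }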

    Read the article

  • camera preview on Android - strange lines on 1.5 version of the SDK

    - by Marko
    Hi all, I am developing the camera module for an Android application. In the main application, when the user clicks on the 'take picture' button, a new view with a SurfaceView control is opened and the camera preview is shown. When the user clicks the d-pad center, the camera takes a picture and saves it to disk. Pretty simple and straightforward. Everything works fine on my device (HTC Tattoo, minSdkVersion 1.6), but when I tested the application on an HTC Hero (minSdkVersion 1.5), some strange lines appear when the camera preview is shown. Anyone have an idea what is going on? P.S. Although the preview is broken, taking pictures works fine. Here is the picture: Thanx, Marko

    Read the article

  • Virtual camera/DirectShow filter for network stream

    - by Jeje
    Hi guys, I'm working with Flash Live Encoder, which uses a camera for streaming video. The support forum says that I can create a custom DirectShow filter and stream the data I need. What I can't understand is how a DirectShow filter would show up in the source list of the Live Encoder. I've tried some commercial virtual camera software and it works fine, but it can't use a network stream as its source. Summary: I have several network streams, and I think I must create a virtual camera for each one. But while I can find examples of DirectShow filters in C#, I can't find any for a virtual camera.

    Read the article

  • Vector rotations: 3D camera tilting

    - by TallGuy
    Hopefully an easy answer, but I cannot get it. I have a 3D render engine I have written. I have the camera position, the look-at position and the up vector. I want to be able to "tilt" the camera left, right, up and down -- like a camera on a fixed tripod that you can grab by the handle and tilt up, down, left, right, etc. The maths stumps me. I have been able to do forwards/backwards dolly and up/down/left/right panning, but cannot work out the vector math to get it to tilt. For left and right tilt I want to rotate the look-at position around the camera position, but I need to take the up vector into account, otherwise the rotation doesn't know which axis to turn around. The maths/algorithm I need is along the lines of:

        Camera = (cx, cy, cz)
        Lookat = (lx, ly, lz)
        Up     = (ux, uy, uz)

        RotatePointAroundVector(lx, ly, lz, ux, uy, uz, amount)

    Can anyone assist with the maths involved? Many thanks.
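    (For reference, this is Rodrigues' rotation formula: rotating v around a unit axis k by angle t gives v' = v cos t + (k x v) sin t + k (k . v)(1 - cos t). For a left/right tilt, rotate the camera-to-lookat vector around the normalized up vector and add the camera position back; for up/down tilt, rotate both the lookat offset and the up vector around the camera's right vector (forward x up) so the basis stays orthogonal. A self-contained Java sketch:)

        // Rodrigues' rotation: rotate point (px,py,pz) around the axis (ax,ay,az)
        // anchored at the camera (cx,cy,cz) by 'angle' radians. Returns the new point.
        static double[] rotatePointAroundVector(double px, double py, double pz,
                                                double cx, double cy, double cz,
                                                double ax, double ay, double az,
                                                double angle) {
            double len = Math.sqrt(ax * ax + ay * ay + az * az); // normalise axis
            double kx = ax / len, ky = ay / len, kz = az / len;
            double vx = px - cx, vy = py - cy, vz = pz - cz;     // point relative to pivot
            double cos = Math.cos(angle), sin = Math.sin(angle);
            double crx = ky * vz - kz * vy;                       // k x v
            double cry = kz * vx - kx * vz;
            double crz = kx * vy - ky * vx;
            double dot = kx * vx + ky * vy + kz * vz;             // k . v
            // v' = v cos + (k x v) sin + k (k . v)(1 - cos)
            double rx = vx * cos + crx * sin + kx * dot * (1 - cos);
            double ry = vy * cos + cry * sin + ky * dot * (1 - cos);
            double rz = vz * cos + crz * sin + kz * dot * (1 - cos);
            return new double[] { cx + rx, cy + ry, cz + rz };
        }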

    Read the article

  • Windows Mobile camera capture issue with DirectShow

    - by user1112111
    I wrote a C# application that uses DirectShow to allow users to take photos with the camera attached to a Windows Mobile 6 phone. The problem I spotted is that the preview functionality stops working after a period of not using it. How could I determine what's causing this? Is there a timeout in the Windows registry that tells how long the camera should stay on? This is very strange, as my code runs perfectly when turning the camera on/off quickly enough (i.e. not leaving the camera off for more than ~15 seconds).

    Read the article

  • Weird camera Intent behavior

    - by David Erosa
    Hi all. I'm invoking the MediaStore.ACTION_IMAGE_CAPTURE intent with the MediaStore.EXTRA_OUTPUT extra so that it saves the image to that file. In onActivityResult I can check that the image is being saved to the intended file, which is correct. The weird thing is that the image is nonetheless also saved to a file named something like "/sdcard/Pictures/Camera/1298041488657.jpg" (the epoch time at which the image was taken). I've checked the Camera app source (froyo-release branch), and I'm almost sure that the code path is correct and shouldn't save the image, but I'm a noob and I'm not completely sure. AFAIK, the image saving process starts with this callback (comments are mine):

        private final class JpegPictureCallback implements PictureCallback {
            ...
            public void onPictureTaken(...) {
                ...
                // This is where the image is passed back to the invoking activity.
                mImageCapture.storeImage(jpegData, camera, mLocation);
                ...

        public void storeImage(final byte[] data,
                android.hardware.Camera camera, Location loc) {
            if (!mIsImageCaptureIntent) { // Am I an intent?
                int degree = storeImage(data, loc); // THIS SHOULD NOT BE CALLED
                                                    // WITHIN THE CAPTURE INTENT!!
            .......

        // And finally:
        private int storeImage(byte[] data, Location loc) {
            try {
                long dateTaken = System.currentTimeMillis();
                String title = createName(dateTaken);
                String filename = title + ".jpg"; // Eureka, timestamp filename!
                ...

    So, I'm receiving the correct data, but it's also being saved by the "storeImage(data, loc);" call, which should not happen... It'd not be a problem if I could get the newly created filename from the intent result data, but I can't. When I found this out, I discovered about 20 image files from my tests that I didn't know were on my sdcard :) I'm getting this behavior both on my Nexus One with Froyo and my Huawei U8110 with Eclair. Could anyone please enlighten me? Thanks a lot.
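    (A hedged workaround that has been used for this kind of double-save: after onActivityResult, query MediaStore for the newest image and delete it if its path differs from the EXTRA_OUTPUT file. The MediaStore URIs and columns below are real constants; treating "newest image != requested file" as the stray duplicate is an assumption about this particular bug, so verify before shipping.)

        import android.content.ContentResolver;
        import android.database.Cursor;
        import android.net.Uri;
        import android.provider.MediaStore;

        // Sketch: delete the stray timestamp-named copy the Camera app saved on its own.
        static void deleteGalleryDuplicate(ContentResolver resolver, String wantedPath) {
            Uri images = MediaStore.Images.Media.EXTERNAL_CONTENT_URI;
            Cursor c = resolver.query(images,
                    new String[] { MediaStore.Images.ImageColumns._ID,
                                   MediaStore.Images.ImageColumns.DATA },
                    null, null,
                    MediaStore.Images.ImageColumns.DATE_TAKEN + " DESC");
            if (c == null) return;
            try {
                if (c.moveToFirst() && !c.getString(1).equals(wantedPath)) {
                    // Newest image is not our EXTRA_OUTPUT file -> stray duplicate.
                    resolver.delete(Uri.withAppendedPath(images, c.getString(0)),
                                    null, null);
                }
            } finally {
                c.close();
            }
        }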

    Read the article

  • Configuring DSVL to show camera images from Logitech9000 webcam

    - by curryage
    I am trying to view the camera feed from a Logitech 9000 camera using DSVL (DirectShow Video Library, http://sourceforge.net/projects/dsvideolib/). The XML file currently looks as below:

        <?xml version="1.0" encoding="UTF-8"?>
        <dsvl_input xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:noNamespaceSchemaLocation="C:\Documents and Settings\Thomas\My Documents\projects\ARToolKit &amp; DSVideoLib\ARVideoLib\DsVideoLib\DsVideoLib.xsd">
            <camera show_format_dialog="false" friendly_name="Logitech Webcam Pro 9000">
                <pixel_format>
                    <RGB24 flip_v="false"/>
                </pixel_format>
            </camera>
        </dsvl_input>

    However, the image that comes up is vertically inverted. I tried changing the flip_v value to true in the config above, but it made no difference. Any suggestions?

    Read the article

  • XNA 2d camera with arbitrary zoom center

    - by blooop
    I have a working 2D camera in XNA with these guts:

        ms = Mouse.GetState();
        msv = new Vector2(ms.X, ms.Y); // screenspace mouse vector
        pos = new Vector2(0, 0);       // camera center of view
        zoom_center = cursor;          // I would like to be able to define the zoom center in world coords
        offset = new Vector2(scrnwidth / 2, scrnheight / 2);

        transmatrix = Matrix.CreateTranslation(-pos.X, -pos.Y, 0)
                    * Matrix.CreateScale(scale, scale, 1)
                    * Matrix.CreateTranslation(offset.X, offset.Y, 0);

        inverse = Matrix.Invert(transmatrix);
        cursor = Vector2.Transform(msv, inverse); // the mouse position in world coords

    I can move the camera position around and change the zoom level (with other code that I have not pasted here for brevity). The camera always zooms around the center of the screen, but I would like to be able to zoom about an arbitrary zoom point (the cursor in this case), like the indie game Dyson: http://www.youtube.com/watch?v=YiwjjCMqnpg&feature=player_detailpage#t=144s I have tried all the combinations that make sense to me, but am completely stuck.
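    (The standard trick, stated with a hedge: require that the world point under the cursor maps to the same pixel before and after the scale change, and solve for the new camera position. With this post's transform, screen = (world - pos) * scale + offset, that gives pos' = cursor - (cursor - pos) * (oldScale / newScale). A Java-style sketch using the post's variable names; pos is assumed to be a Vector2-like object with public x/y fields:)

        // Sketch: zoom toward/away from 'cursor' (world coords), keeping the
        // world point under the mouse fixed on screen.
        void zoomAbout(float cursorX, float cursorY, float newScale) {
            float k = scale / newScale;              // old-to-new scale ratio
            pos.x = cursorX - (cursorX - pos.x) * k; // move the camera so the world
            pos.y = cursorY - (cursorY - pos.y) * k; // point under the mouse stays put
            scale = newScale;                        // then rebuild transmatrix as before
        }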

    Read the article

  • Camera Stream in Flash Problem

    - by Peter
    Please help me fill in the question marks. I want to get a feed from my camera and pass it to the receive function. Also, in Flash Builder (in design mode), how do I place elements so they can play a camera feed? Because, as it seems, VideoDisplay just doesn't work.

        public function receive(???:???):void {
            // othercam is a graphic element (VideoDisplay)
            othercam.??? = ????;
        }

        private function send():void {
            var mycam:Camera = Camera.getCamera();
            //mycam2.attachCamera(mycam);
            // sendstr is a stream we send
            sendstr.attachCamera(mycam);
            // we pass mycam into receive
            sendstr.send("receive", mycam);
        }

    Read the article

  • Zoneminder camera permanently idle

    - by C. Ross
    I have an Ubuntu server with ZoneMinder installed. I only have one camera (a cheap Logitech USB model) and it seems to be permanently idle. I'm also getting this error repeatedly in the logs:

        06/06/2010 08:26:56.563783 zmwatch[28234].INF [Restarting capture daemon for main_room, shared memory not valid]
        06/06/2010 08:26:56.812964 zmwatch[28234].INF ['zmc -d /dev/video0' starting at 10/06/06 08:26:56, pid = 29214]
        06/06/2010 08:27:06.814486 zmwatch[28234].INF [Restarting capture daemon for main_room, shared memory not valid]
        06/06/2010 08:27:07.054854 zmwatch[28234].INF ['zmc -d /dev/video0' starting at 10/06/06 08:27:07, pid = 29219]

    I've already tried following the instructions in the FAQ for increasing shared memory, but that doesn't seem to work. What do I need to do to get this working?

    Read the article
