Search Results

Search found 2068 results on 83 pages for 'camera'.

Page 19/83

  • Textured Primitives in XNA with a first person camera

    - by 131nary
    So I have an XNA application set up. The camera is in first-person mode, and the user can move around using the keyboard and reposition the camera target with the mouse. I have been able to load 3D models fine, and they appear on screen no problem. Whenever I try to draw any primitive (textured or not), it does not show up anywhere on the screen, no matter how I position the camera.

    In Initialize(), I have:

      quad = new Quad(Vector3.Zero, Vector3.UnitZ, Vector3.Up, 2, 2);
      quadVertexDecl = new VertexDeclaration(this.GraphicsDevice, VertexPositionNormalTexture.VertexElements);

    In LoadContent(), I have:

      quadTexture = Content.Load<Texture2D>(@"Textures\brickWall");
      quadEffect = new BasicEffect(this.GraphicsDevice, null);
      quadEffect.AmbientLightColor = new Vector3(0.8f, 0.8f, 0.8f);
      quadEffect.LightingEnabled = true;
      quadEffect.World = Matrix.Identity;
      quadEffect.View = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
      quadEffect.Projection = this.Projection;
      quadEffect.TextureEnabled = true;
      quadEffect.Texture = quadTexture;

    And in Draw(), I have:

      this.GraphicsDevice.VertexDeclaration = quadVertexDecl;
      quadEffect.Begin();
      foreach (EffectPass pass in quadEffect.CurrentTechnique.Passes)
      {
          pass.Begin();
          GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(
              PrimitiveType.TriangleList, quad.Vertices, 0, 4, quad.Indexes, 0, 2);
          pass.End();
      }
      quadEffect.End();

    I think I'm doing something wrong in the quadEffect properties, but I'm not quite sure what.

    Read the article

  • Fast, very lightweight algorithm for camera motion detection?

    - by Ertebolle
    I'm working on an augmented reality app for iPhone that involves a very processor-intensive object recognition algorithm (pushing the CPU at 100% it can get through maybe 5 frames per second), and in an effort to both save battery power and make the whole thing less "jittery" I'm trying to come up with a way to only run that object recognizer when the user is actually moving the camera around. My first thought was to simply use the iPhone's accelerometers / gyroscope, but in testing I found that very often people would move the iPhone at a consistent enough attitude and velocity that there wouldn't be any way to tell that it was still in motion.

    So that left the option of analyzing the actual video feed and detecting movement in that. I got OpenCV working and tried running their pyramidal Lucas-Kanade optical flow algorithm, which works well but seems to be almost as processor-intensive as my object recognizer - I can get it to an acceptable framerate if I lower the depth levels / downsample the image / track fewer points, but then accuracy suffers and it starts to miss some large movements and trigger on small hand-shaking-y ones.

    So my question is, is there another optical flow algorithm that's faster than Lucas-Kanade if I just want to detect the overall magnitude of camera movement? I don't need to track individual objects, I don't even need to know which direction the camera is moving, all I really need is a way to feed something two frames of video and have it tell me how far apart they are.
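
    A cheap alternative when only the overall magnitude of change matters is plain frame differencing on heavily downsampled grayscale frames. The sketch below uses OpenCV's Java bindings purely to illustrate the idea (the question is about iPhone, where the same handful of OpenCV operations exist in the C API); the class and method names are made up for illustration:

      import org.opencv.core.Core;
      import org.opencv.core.Mat;
      import org.opencv.core.Scalar;
      import org.opencv.core.Size;
      import org.opencv.imgproc.Imgproc;

      public class MotionGate {
          // Rough per-frame motion score: shrink both frames, convert to
          // grayscale, take the absolute difference, and average it.
          // (Assumes the OpenCV native library has already been loaded.)
          public static double motionScore(Mat prev, Mat curr) {
              Mat a = new Mat(), b = new Mat(), diff = new Mat();
              Size small = new Size(80, 60);          // heavy downsampling
              Imgproc.resize(prev, a, small);
              Imgproc.resize(curr, b, small);
              Imgproc.cvtColor(a, a, Imgproc.COLOR_BGR2GRAY);
              Imgproc.cvtColor(b, b, Imgproc.COLOR_BGR2GRAY);
              Core.absdiff(a, b, diff);
              Scalar mean = Core.mean(diff);          // mean absolute pixel change
              return mean.val[0];                     // threshold this to gate work
          }
      }

    Thresholding the returned score gates the recognizer; since everything runs on an 80x60 grayscale image, the per-frame cost is tiny compared to pyramidal Lucas-Kanade. Note it responds to any change in the frame (including subject motion and exposure shifts), not specifically camera motion.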

    Read the article

  • gluLookAt vectors and FPS-style camera

    - by Kevin Pamplona
    I am attempting to implement an FPS-style camera by updating three vectors: EYE, DIR, UP. These are the same vectors used by gluLookAt (since gluLookAt is specified by the position of the camera, the direction it is looking in, and an up vector). I have already implemented the left-right and up-down strafing movements, but I'm having a lot of trouble understanding the math behind making the camera look around while remaining stationary. In this case the EYE vector remains the same, while I must update DIR and UP. Below is the code I tried, but it doesn't seem to work properly. Any suggestions? Thanks.

      void Transform::left(float degrees, vec3& dir, vec3& up) {
          vec3 axis;
          axis = glm::normalize(up);
          mat3 R = rotate(-degrees, axis);
          dir = R*dir;
          dir = R*up;
      };

      void Transform::up(float degrees, vec3& dir, vec3& up) {
          vec3 axis;
          axis = glm::normalize(glm::cross(dir, up));
          mat3 R = rotate(-degrees, axis);
          dir = R*dir;
          up = R*up;
      };
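
    For reference, the operation being attempted is an axis-angle rotation of a vector. A minimal, self-contained sketch in plain Java (array-based vectors; the class name is hypothetical) using Rodrigues' formula. For a yaw the axis is UP and only DIR changes; for a pitch the axis is the right vector DIR x UP and both DIR and UP change:

      public class FreeLook {
          // Rodrigues' rotation formula: rotate v around the unit axis k by
          // 'angle' radians: v' = v cosA + (k x v) sinA + k (k.v)(1 - cosA).
          static float[] rotate(float[] v, float[] k, float angle) {
              float c = (float) Math.cos(angle), s = (float) Math.sin(angle);
              float dot = k[0]*v[0] + k[1]*v[1] + k[2]*v[2];
              float[] cross = { k[1]*v[2] - k[2]*v[1],
                                k[2]*v[0] - k[0]*v[2],
                                k[0]*v[1] - k[1]*v[0] };
              return new float[] {
                  v[0]*c + cross[0]*s + k[0]*dot*(1 - c),
                  v[1]*c + cross[1]*s + k[1]*dot*(1 - c),
                  v[2]*c + cross[2]*s + k[2]*dot*(1 - c) };
          }
          // Yaw (look left/right): DIR = rotate(DIR, UP, angle); UP unchanged.
          // Pitch (look up/down): axis = normalize(DIR x UP), then rotate both
          // DIR and UP around that axis so the camera frame stays orthogonal.
      }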

    Read the article

  • OpenCV: Getting and Setting Camera Settings

    - by jhaip
    I have been searching around and can't find an example of how to get and set the camera capturing settings. For example the capturing resolution, fps, color balance, etc. I have only seen examples of how to change the settings when saving the captured video, but I want to be able to find all the camera's capturing modes and choose which one I want. For example, I am using the PS3eye webcam, and in the test program it allows you to change the settings (320x240 at 15, 30, 60, 120 fps; 640x480 at 15, 30, 60, 75 fps). So is there a function in OpenCV for getting all the camera's capture modes and choosing one? I remember in OpenFrameworks there was a function to change these settings, but I would like to know how to do it in OpenCV.

    Here is the code for OpenFrameworks with OpenCV that does sort of what I want:

      vidGrabber.setDeviceID( 4 );
      vidGrabber.setDesiredFrameRate( 30 ); // I want this
      vidGrabber.videoSettings();
      vidGrabber.setVerbose(true);
      vidGrabber.initGrabber(320,240); // And this
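
    For what it's worth, OpenCV's capture API exposes a small set of get/set properties but no call that enumerates every mode a camera supports; the usual pattern is to request a mode and read back what the driver actually accepted. A minimal sketch using the Java bindings (the CAP_PROP_* names correspond to the CV_CAP_PROP_* constants in the C API of this era):

      import org.opencv.videoio.VideoCapture;
      import org.opencv.videoio.Videoio;

      public class CaptureSettings {
          public static void main(String[] args) {
              // assumes the OpenCV native library has been loaded
              VideoCapture cap = new VideoCapture(0);   // camera index
              // Request a mode; the driver substitutes the nearest supported one.
              cap.set(Videoio.CAP_PROP_FRAME_WIDTH, 320);
              cap.set(Videoio.CAP_PROP_FRAME_HEIGHT, 240);
              cap.set(Videoio.CAP_PROP_FPS, 30);
              // Read back what the camera actually accepted.
              System.out.println(cap.get(Videoio.CAP_PROP_FRAME_WIDTH) + "x"
                      + cap.get(Videoio.CAP_PROP_FRAME_HEIGHT) + " @ "
                      + cap.get(Videoio.CAP_PROP_FPS) + " fps");
              cap.release();
          }
      }

    Properties like color balance are exposed the same way where the backend supports them (e.g. CAP_PROP_BRIGHTNESS, CAP_PROP_SATURATION), but support varies by driver.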

    Read the article

  • Browse for folder can't see camera device

    - by Robert Frank
    In Delphi 2010, I want to allow users to browse and select a folder. The folder is on a device (?) created by a DSLR. The folder is visible in Windows Explorer, as shown above, and it is visible in a TOpenDialog, allowing the user to browse into the folder and choose a file.

    Unfortunately, I have been unable to get either SHBrowseForFolder (code I found on the web but don't understand) or SelectDirectory to see the camera device or the folder beneath it. (Side note: IMO, SelectDirectory is a far nicer UI, since the user can see the files in the folders while browsing.) I assume this has to do with the fact that the folder is on a device (?) created by the camera software.

    I've seen tricks where you call TOpenDialog to browse for folders with '*.' and then ExtractFileDir on the result, but that's not robust or, IMO, a good UI. What I'm looking for is a "browse for folder" that can see the same devices (including the camera device) that TOpenDialog and Windows Explorer can see. (Ideally, it would have a nice appearance like the one below!) Any suggestions?

    Image of MS Word's folder browsing in Win7. (I wonder if it looks this pretty in XP.)

    Read the article

  • VLC (Server) re-stream Security Camera Feed

    - by Aaron
    I purchased a Swann Home Security DVR system and was hoping for some help on how to duplicate the streaming video on my server. In order to get their web view (streaming video in the browser) to work, I had to install the following plugins:

    - HiDvrPlugin.dmg for Mac
    - Hidvrocx.cab for Windows

    I was originally thinking this was a sign of some form of DRM? Maybe. Maybe not. HTML-wise, the following code is in the source of the Safari version of the web view:

      <embed pluginspage="SurveilClient.dmg" width="10px" height="10px" type="application/x-scplugin" id="MacDiv" style="height: 592px; width: 720px; left: 278px; top: 61px; ">

    It seems to be the main display area. Using Wireshark, I am able to see that the video stream is on port 9000. However, I have no idea what type of stream it is. I've tried opening it in VLC with no luck:

      http://dvr_ip:9000
      tcp://dvr_ip:9000

    My hope was to do something like the following to redistribute the feed:

      vlc dvr_ip:9000 --sout h264-version-on-localhost:3000

    TL;DR: Trying to re-distribute a stream from a security camera (can't tell the format) using VLC (re-distribute via h.264 / HTML5). Not sure how to accomplish this. Is it possible that the software has some type of DRM that only the plugins can decode?
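
    If VLC could ever demux the raw stream, the re-streaming half of the pseudo-command above would look roughly like the following in real --sout syntax (transcode to H.264 and serve it as an HTTP transport stream on port 3000). This is only a sketch; it assumes VLC can open the input in the first place, which is exactly what is in doubt here:

      vlc tcp://dvr_ip:9000 --sout "#transcode{vcodec=h264}:std{access=http,mux=ts,dst=:3000}"

    If the plugin is doing proprietary decoding (DRM or just an undocumented container), no amount of --sout tweaking will help until the input side is solved.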

    Read the article

  • Single intent to let user take picture OR pick image from gallery in Android

    - by Damian
    I'm developing an app for Android 2.1 upwards. I want to enable my users to select a profile picture within my app (I'm not using the contacts framework).

    The ideal solution would be to fire an intent that enables the user to select an image from the gallery, but if an appropriate image is not available, use the camera to take a picture (or vice versa: allow the user to take a picture, but if they know they already have a suitable image, let them drop into the gallery and pick it).

    Currently I can do one or the other but not both. If I go directly into camera mode using MediaStore.ACTION_IMAGE_CAPTURE, there is no option to drop into the gallery. If I go directly to the gallery using Intent.ACTION_PICK, I can pick an image, but if I click the camera button (in the top right-hand corner of the gallery) then a new camera intent is fired, so any picture that is taken is not returned directly to my application. (Sure, you can press the back button to drop back into the gallery and select the image from there, but this is an extra unnecessary step and is not at all intuitive.)

    So is there a way to combine both, or am I going to have to offer a menu to do one or the other from within my application? Seems like it would be a common use case... surely I'm missing something?
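
    One pattern that may fit, offered as an assumption rather than something from the question: wrap the gallery pick in a chooser and attach the capture intent via Intent.EXTRA_INITIAL_INTENTS, so both options appear in a single dialog and either result returns to the calling activity. A minimal sketch (the helper class name is hypothetical):

      import android.app.Activity;
      import android.content.Intent;
      import android.os.Parcelable;
      import android.provider.MediaStore;

      public class PickerHelper {
          // Shows one chooser offering both the gallery picker and the camera
          // capture activity; either result comes back through the caller's
          // onActivityResult with the given request code.
          public static void pickOrCapture(Activity activity, int requestCode) {
              Intent pick = new Intent(Intent.ACTION_PICK,
                      MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
              Intent capture = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
              Intent chooser = Intent.createChooser(pick, "Select or take a picture");
              chooser.putExtra(Intent.EXTRA_INITIAL_INTENTS,
                      new Parcelable[] { capture });
              activity.startActivityForResult(chooser, requestCode);
          }
      }

    The two results arrive in different shapes (a content Uri from the gallery versus a small Bitmap in the "data" extra from the camera), so onActivityResult still has to distinguish them.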

    Read the article

  • Android - Take a photo, save it in app drawables and display it in an ImageButton

    - by Andres7X
    I have an Android app with an ImageButton. When the user clicks it, an intent launches to show the camera activity. When the user captures the image, I'd like to save it in the drawable folder of the app and display it in the same ImageButton the user clicked, replacing the previous drawable image. I used the activity posted here: Capture Image from Camera and Display in Activity ...but when I capture an image, the activity doesn't return to the activity which contains the ImageButton. The code is:

      static final int CAMERA_REQUEST = 1888;

      public void manage_shop() {
          [...]
          ImageView photo = (ImageView) findViewById(R.id.getimg);
          photo.setOnClickListener(new View.OnClickListener() {
              @Override
              public void onClick(View v) {
                  Intent camera = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
                  startActivityForResult(camera, CAMERA_REQUEST);
              }
          });
          [...]
      }

    And onActivityResult():

      protected void onActivityResult(int requestCode, int resultCode, Intent data) {
          ImageButton getimage = (ImageButton) findViewById(R.id.getimg);
          if (requestCode == CAMERA_REQUEST && resultCode == RESULT_OK) {
              Bitmap getphoto = (Bitmap) data.getExtras().get("data");
              getimage.setImageBitmap(getphoto);
          }
      }

    How can I also store the captured image in the drawable folder?
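
    One note on the stated goal: res/drawable is compiled into the APK and is read-only at runtime, so a captured photo cannot be added to it. The usual substitute is the app's private file storage. A minimal sketch (class and method names are hypothetical):

      import android.content.Context;
      import android.graphics.Bitmap;
      import android.graphics.BitmapFactory;
      import java.io.FileOutputStream;
      import java.io.IOException;

      public class PhotoStore {
          // Saves the captured bitmap into the app's private files directory.
          public static void save(Context context, Bitmap photo, String name) throws IOException {
              FileOutputStream out = context.openFileOutput(name, Context.MODE_PRIVATE);
              try {
                  photo.compress(Bitmap.CompressFormat.PNG, 100, out);
              } finally {
                  out.close();
              }
          }

          // Loads it back, e.g. to restore the ImageButton after a restart.
          public static Bitmap load(Context context, String name) throws IOException {
              return BitmapFactory.decodeStream(context.openFileInput(name));
          }
      }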

    Read the article

  • Dealing with AVCHD. Any free software to extract or edit AVCHD video?

    - by Kelsey
    I have a Panasonic HDC-SD5 video camera which records in HD format to a memory card. It also came with a DVD burner which burns this format to DVD, which I need a Blu-ray player to view. I can get about 30 minutes' worth of video on one DVD. My problem is that the software that comes with the camera is not very good at all, so I am constantly just using the 'backup' feature in the included burner (connect the drive to the camera, push backup, and it spits out up to 4 DVDs' worth of HD video).

    Now that I have the video backed up on DVD in the HD format (Blu-ray), is there any free software that I can use to edit this video and create other HD DVDs with my own menus, transitions, etc.? I was considering buying Pinnacle Studio, but I wanted to exhaust any free options before biting the bullet. Any suggestions for software I could use, or anything else I could do to make dealing with this AVCHD format any easier, that I am unaware of?

    Edit: Sorry, forgot to include that I am running Windows Vista 64-bit.

    Edit: Still haven't found anything that is truly free. Everything has limitations: either time limits, watermarking the video, or degraded quality.

    Edit: So I still haven't found really anything, so is there some software I can use to convert the video to another format that I could then use to edit the video?

    Read the article

  • Save image to camera roll with UIImageWriteToSavedPhotosAlbum

    - by Momeks
    Hi, I try to save a photo with a button to the camera roll after the user captures a picture with the camera, but I don't know why my picture doesn't save to the photo library! Here is my code:

      -(IBAction)savePhoto {
          UIImageWriteToSavedPhotosAlbum(img.image, nil, nil, nil);
      }

      -(IBAction)takePic {
          ipc = [[UIImagePickerController alloc] init];
          ipc.delegate = self;
          ipc.sourceType = UIImagePickerControllerSourceTypeCamera;
          [self presentModalViewController:ipc animated:YES];
      }

      - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
          img.image = [[info objectForKey:UIImagePickerControllerOriginalImage] retain];
          [[picker parentViewController] dismissModalViewControllerAnimated:YES];
          [picker release];
      }

    ipc is a UIImagePickerController and img is a UIImageView. What's my problem?

    Read the article

  • Scaled live iPhone Camera view in center, "CGAffineTransformTranslate" not working

    - by Gavin
    Hi, I have a little problem which I could not solve, and I really hope someone can help me with it. I wanted to resize the live camera view and place it in the center, using the following code:

      picker.cameraViewTransform = CGAffineTransformScale(picker.cameraViewTransform, 0.5, 0.56206);
      picker.cameraViewTransform = CGAffineTransformTranslate(picker.cameraViewTransform, 80, 120);

    But all I got was a scaled, half-sized view in the top left of the screen. It seems as though CGAffineTransformTranslate does nothing at all. The translation didn't work even when I used:

      picker.cameraViewTransform = CGAffineTransformMake(1, 0, 0, 1, 80, 120);

    The translation portion seems to have no effect on the live camera view. Hope someone can enlighten me. Thanks.

    Read the article

  • Loading an OverlayView from XIB -vs- programmatically for use with UIImagePickerController

    - by PLG
    I am currently making a camera app for iPhone, and I have a strange phenomenon that I can't figure out. I would appreciate some help understanding it.

    When creating an overlay view to pass to UIImagePickerController, I have successfully been able to create the view programmatically. What I haven't been able to do is create the view (with or without a controller) in IB, load it, and pass it to the overlay pointer successfully. If I do it via IB, the view is not transparent and obscures the view of the camera completely. I can't figure out why.

    I was thinking that the normal view pointer might be assigned when loading from the XIB and therefore overwrite the camera's view, but I have a programmatic example where view and overlayView are set equal in the controller class. Perhaps the load order is overwriting a pointer? Help would be appreciated... kind regards.

    Read the article

  • How to get the camera data

    - by beof
    Hello guys, my app needs to get the camera data from the iPhone. In my ImagePickerController there is an overlayView drawing real-time indications. I use UIGetScreenImage() to get the screenshot, and I also dump the overlayView to an image, so I can restore the original image based on these two images. If the overlayView is still, it works quite well, but if the overlayView keeps changing, UIGetScreenImage() can not keep up with it. For example, if the overlayView changes from a rectangle to a circle, then calling UIGetScreenImage() returns with a rectangle on top of it. Is there a way to get the real-time camera data? I'd really appreciate it if someone could help.

    Read the article

  • Real-time video stream from camera via server to iPhone

    - by devdevdev
    Hi, I want to create an iPhone app that can display real-time video from a camera. The intended setup: a camera connected to a Mac Mini delivers real-time video to the iPhone over the local network. The iPhone doesn't need to display high quality or use a high frame rate, but it has to be real-time, with at most 1 second of delay. I've been searching a lot for a solution, but so far I have not found a reasonable one. HTTP Live Streaming is not a solution due to the delay. Any suggestions?

    Read the article

  • Getting the Video file Stored on DVR by the IP Camera in BlackBerry

    - by sHaH..
    Hi all, I need your help. I am making an application which will display live video from an IP camera on my screen. Many cameras also save their video onto a DVR; how can I access the DVR and get the video stored on that storage medium? I need some help material or a guide, since I'm new to BlackBerry. Thanks a bunch in advance.

    Read the article

  • iPhone simple commercial barcode reader via camera

    - by user347635
    Hey all, I've done some iPhone programming (safely mid-level), and now a requirement has come up for us to write a barcode reader that uses the iPhone camera. This is not for shopping or the general public but for internal use. My first thought was to simply take a picture programmatically, upload it to a server, and have a web service or something similar handle the heavy lifting of extracting the numbers out of the barcode. However, I recently installed some apps already available on the iPhone that read barcodes, refer you to shopping sites, etc., and they seem to use the video camera to identify barcodes and extract the numbers. Does anyone know how this is being done, or could anyone point me to some open source or sample code?

    Read the article

  • How does Matlab calculate the camera view angle?

    - by iman
    I am using Matlab to visualise a scene. In order to zoom in on the scene, I can either:

    - fix the CameraPosition and the CameraTarget and change the CameraViewAngle, or
    - fix the CameraTarget and the CameraViewAngle and move the camera along the viewing line (defined by the CameraPosition and the CameraTarget).

    I know how to set the values of CameraPosition, CameraTarget and CameraViewAngle, but I do not know how to define the optimal view angle. In auto mode for CameraViewAngle, Matlab calculates the smallest view angle that captures all the scene from the specified camera position. I appreciate any help in understanding this. Iman
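
    A plausible reading of what the auto mode computes, under the simplifying assumption that the scene is approximated by a bounding sphere of radius r centered on the CameraTarget: with d = norm(CameraPosition - CameraTarget), the smallest full view angle that keeps the sphere in frame is

      theta = 2 * asin(r / d)

    (or approximately 2 * atan(r / d), which agrees when r is much smaller than d). Matlab's actual computation also has to account for the axes box and the data aspect ratio, so treat this only as the geometric core of the auto view angle.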

    Read the article

  • Getting object coordinates from camera

    - by user566757
    I've implemented a camera in Java using a position vector and three direction vectors so I can use gluLookAt(); moving around in 'ghost mode' works fine enough, but I want to add collision detection. I can't seem to figure out how to transform my position vector to the coordinates in which OpenGL draws my objects. A rough sketch of my drawing loop is this:

      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      glLoadIdentity();
      camera.setView();
      drawer.drawTheScene();

    I'm at a loss for how to proceed; looking at the ModelView matrix between calls and my position vector, I haven't found any kind of correlation.
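
    One observation, offered as an assumption about the setup: the eye position handed to gluLookAt is already expressed in world coordinates, the same space objects occupy before the ModelView transform is applied, so a collision test can compare it directly against object positions. A minimal Java sketch with a bounding-sphere test (class name hypothetical):

      public class CameraCollision {
          // The gluLookAt eye position is already a world-space point, so a
          // sphere test against an object's world-space center needs no
          // ModelView matrix math at all.
          public static boolean hits(float[] eye, float[] center, float radius) {
              float dx = eye[0] - center[0];
              float dy = eye[1] - center[1];
              float dz = eye[2] - center[2];
              return dx * dx + dy * dy + dz * dz < radius * radius;
          }
      }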

    Read the article

  • Resizing the Camera Preview

    - by uday
    I have implemented a camera application which shows the preview on the entire screen of the device, but my requirement is to show the camera preview in a small screen. My device resolution is 800x480 (WxH), i.e. a Nexus One. I am able to show the preview on the entire screen without scaling it down, and it comes out perfectly, but when I try to show the preview in a small screen (part of my total mobile screen), the preview gets stretched and doesn't look good. Is there any way to show the preview correctly in a small screen? I think we need to scale down the preview, but when I try to scale it down, the Android system itself doesn't allow me to set the scaled preview size. Could anyone please help me with how to scale the preview to a small screen?
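
    The restriction being hit is that android.hardware.Camera only accepts preview sizes the hardware reports, so arbitrary scaled sizes are rejected. One workable approach, offered as an assumption about the intended fix: pick the supported preview size whose aspect ratio is closest to the small view's, set it on the camera, and give the SurfaceView the matching ratio so nothing stretches. A sketch (class name hypothetical):

      import android.hardware.Camera;
      import java.util.List;

      public class PreviewSizes {
          // Picks the supported preview size whose aspect ratio best matches
          // the on-screen view, so the preview scales without stretching.
          public static Camera.Size bestFor(Camera camera, int viewWidth, int viewHeight) {
              double target = (double) viewWidth / viewHeight;
              List<Camera.Size> sizes = camera.getParameters().getSupportedPreviewSizes();
              Camera.Size best = sizes.get(0);
              double bestDiff = Double.MAX_VALUE;
              for (Camera.Size s : sizes) {
                  double diff = Math.abs((double) s.width / s.height - target);
                  if (diff < bestDiff) {
                      bestDiff = diff;
                      best = s;
                  }
              }
              return best;
          }
      }

    The chosen size is then applied with parameters.setPreviewSize(best.width, best.height) before startPreview(), and the SurfaceView's layout dimensions are set to the same aspect ratio.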

    Read the article

  • Image is not detected if taken from camera using UIImagePickerControllerSourceTypeCamera

    - by user1314379
    I am trying to build an app which will compare two images and give a matching percentage. I take two images from the camera (making use of UIImagePickerController) and save them to the documents directory. Then I fetch these images in a different view controller to get the face part, using the CIDetector and CIFaceFeature APIs. The problem is that it is not detecting the face at all, though I am able to fetch the images properly. And if I store the same image in the main bundle, it detects it. I don't know where the problem is; I have tried everything. Maybe the problem is with the UIImage, or maybe the format in which the image is getting saved in the documents directory, or with the camera. Please help. I will be grateful to you. Thanks in advance.

    Read the article

  • Acer OrbiCam in Acer 5102WLMI with Windows 7 Pro won't work?

    - by pegazuz
    My camera seems to open up but shuts down right away without showing any picture. The message says Windows is checking for solutions, but it never finds any. I have tried several different drivers and tried re-installing the OrbiCam software several times. Does anyone know of any drivers or software that might work, or have any ideas on how to fix it?

    Read the article
