Search Results

Search found 2068 results on 83 pages for 'camera'.

Page 8/83 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Command line method to find disk usage of camera mounted using gvfs

    - by Hamish Downer
    When my camera was mounted on /media I could use the standard tools (df) to see the disk usage of the card in my camera. However, now that the camera is mounted using gvfs, df seems to ignore it. I've also tried pydf and discus to no avail. The camera is definitely available through Nautilus, and when I select the camera in Nautilus, the status bar tells me the amount of disk space free. I can also open the ~/.gvfs/ folder in Nautilus, right-click on the camera folder, and get the disk usage in a graphical way. But that is no use for a script. Are there command line tools that are the equivalent of df for gvfs filesystems? Or, even better, a way to make df report on gvfs filesystems?
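
    One low-tech option, since the gvfs share appears as a FUSE mount under ~/.gvfs, is to ask for the filesystem statistics directly with statvfs, which is the same call df uses under the hood. A minimal Python sketch (the mount folder name below is a placeholder, and note that some gvfs backends report zeros for these fields):

        import os

        # Placeholder: substitute the camera's folder name as it appears under ~/.gvfs
        mount = os.path.expanduser("~/.gvfs/gphoto2 mount on usb%3A001,005")

        st = os.statvfs(mount)
        total = st.f_blocks * st.f_frsize
        free = st.f_bavail * st.f_frsize
        print("total: %d MiB, free: %d MiB" % (total / 2**20, free / 2**20))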

    Read the article

  • How to access photos from my Genius G-Shot P7545 camera?

    - by Florin
    I have a Genius G-Shot P7545 camera. In Windows I just had to plug the camera into a USB port and access it like a USB stick. I tried to do that in Ubuntu 10.10 with no result. How can I access the photos? With these commands I get:

        lsusb
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 015: ID 0784:1692 Vivitar, Inc.
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

        sudo fdisk -l
        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xcf5acf5a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1        2550    20478976   83  Linux
        /dev/sda3            2550       19458   135810048    f  W95 Ext'd (LBA)
        /dev/sda5            2550       19458   135809024   83  Linux

        Unable to read /dev/sdb
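
    If the camera presents itself as a PTP device rather than USB mass storage (which would explain why lsusb sees it but no /dev/sdb partition is readable), the gphoto2 command-line tool can usually pull the photos off it. A hedged sketch driving it from Python, assuming the gphoto2 package is installed:

        import subprocess

        # List any cameras gphoto2 can talk to over PTP
        subprocess.check_call(["gphoto2", "--auto-detect"])

        # Download every file on the camera into the current directory
        subprocess.check_call(["gphoto2", "--get-all-files"])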

    Read the article

  • Capture RTSP stream from IP Camera and store

    - by Keerthi
    I've got a few IP cameras which output an RTSP (H.264/MPEG-4) stream. Hitting the URL locally via VLC (rtsp://192.168.0.21:554/mpeg4) I can stream the camera and dump to disk (on my desktop). I'd like, however, to store these files on my NAS (FreeNAS). I was looking at ways to capture the RTSP stream and dump it to disk but I'm unable to find anything. Is it possible to capture the stream on FreeBSD or Linux (Raspberry Pi) and dump the streamed content to a disk local to Linux or FreeBSD, preferably rolling over every 30 minutes? EDIT: The NAS is headless (HP N55L or something) and the Raspberry Pis are headless too. I've already looked into ZoneMinder but need something small. I was hoping to maybe use Motion to detect motion on the stream, but that will come later.
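
    One way to do this on a headless Linux or FreeBSD box is to let ffmpeg copy the H.264 stream straight to disk in fixed-length chunks, with the NAS share mounted locally (NFS/CIFS). A minimal Python wrapper as a sketch; it assumes ffmpeg is installed, uses the RTSP URL from the question, and the output path is a placeholder:

        import subprocess

        URL = "rtsp://192.168.0.21:554/mpeg4"      # camera URL from the question
        OUT = "/mnt/nas/cam1_%Y%m%d_%H%M%S.mp4"    # placeholder path on the mounted NAS share

        subprocess.check_call([
            "ffmpeg",
            "-rtsp_transport", "tcp",  # TCP is usually more robust than UDP here
            "-i", URL,
            "-c", "copy",              # no re-encoding, so even a Raspberry Pi copes
            "-f", "segment",           # split the recording into segments...
            "-segment_time", "1800",   # ...of 30 minutes each
            "-reset_timestamps", "1",
            "-strftime", "1",          # expand the date/time pattern in OUT
            OUT,
        ])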

    Read the article

  • Zoneminder camera permanently idle

    - by C. Ross
    I have an Ubuntu server with ZoneMinder installed. I only have one camera (a cheap Logitech USB model) and it seems to be permanently idle. I'm also getting this error repeatedly in the logs:

        06/06/2010 08:26:56.563783 zmwatch[28234].INF [Restarting capture daemon for main_room, shared memory not valid]
        06/06/2010 08:26:56.812964 zmwatch[28234].INF ['zmc -d /dev/video0' starting at 10/06/06 08:26:56, pid = 29214]
        06/06/2010 08:27:06.814486 zmwatch[28234].INF [Restarting capture daemon for main_room, shared memory not valid]
        06/06/2010 08:27:07.054854 zmwatch[28234].INF ['zmc -d /dev/video0' starting at 10/06/06 08:27:07, pid = 29219]

    I've already tried following the instructions in the FAQ for increasing shared memory, but that doesn't seem to work. What do I need to do to get this working?
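
    For what it's worth, "shared memory not valid" usually means the kernel's shared-memory limits are still smaller than what the monitor's ring buffer needs. A rough sizing sketch in Python (the resolution, colour depth, and buffer count below are assumptions to replace with the monitor's actual settings), to compare against /proc/sys/kernel/shmmax and shmall:

        width, height = 640, 480   # capture resolution of the monitor
        bytes_per_pixel = 3        # 24-bit colour; 1 for greyscale, 4 for 32-bit
        buffer_frames = 40         # "Image Buffer Size" in the monitor settings

        needed = width * height * bytes_per_pixel * buffer_frames
        print("this monitor needs roughly %.1f MiB of shared memory" % (needed / 2.0 ** 20))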

    Read the article

  • How to set an orthographic matrix for a 2D camera with zooming?

    - by MahanGM
    I'm using ID3DXSprite to draw my sprites and haven't set any kind of camera projection matrix. How do I set up an orthographic projection matrix for the camera in DirectX that can support zooming?

        D3DXMATRIX orthographicMatrix;
        D3DXMATRIX identityMatrix;
        D3DXMatrixOrthoLH(&orthographicMatrix, nScreenWidth, nScreenHeight, 0.0f, 1.0f);
        D3DXMatrixIdentity(&identityMatrix);
        device->SetTransform(D3DTS_PROJECTION, &orthographicMatrix);
        device->SetTransform(D3DTS_WORLD, &identityMatrix);
        device->SetTransform(D3DTS_VIEW, &identityMatrix);

    This code is for the initial setup. Then, for zooming, I multiply nScreenWidth and nScreenHeight by the zoom factor.

    Read the article

  • Android Video Camera : still picture

    - by Alex
    I use the camera intent to capture video. Here is the problem: if I use this line of code, I can record video, but onActivityResult doesn't work:

        Intent intent = new Intent("android.media.action.VIDEO_CAMERA");

    If I use this line of code, after pressing the recording button the camera freezes, i.e. the picture stays still:

        Intent intent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);

    By the way, when I use Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE); to capture a picture, it works fine. The Java file is as follows:

        package com.camera.picture;

        import java.io.File;
        import java.text.SimpleDateFormat;
        import java.util.Date;

        import android.app.Activity;
        import android.content.ContentValues;
        import android.content.Intent;
        import android.net.Uri;
        import android.os.Bundle;
        import android.os.Environment;
        import android.provider.MediaStore;
        import android.util.Log;
        import android.view.View;
        import android.widget.Button;
        import android.widget.ImageView;
        import android.widget.Toast;
        import android.widget.VideoView;

        public class PictureCameraActivity extends Activity {
            private static final int IMAGE_CAPTURE = 0;
            private static final int VIDEO_CAPTURE = 1;
            private Button startBtn;
            private Button videoBtn;
            private Uri imageUri;
            private Uri videoUri;
            private ImageView imageView;
            private VideoView videoView;

            /** Called when the activity is first created.
             *  Sets the content and gets the references to the basic widgets
             *  on the screen like {@code Button} or {@link ImageView}. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                imageView = (ImageView) findViewById(R.id.img);
                videoView = (VideoView) findViewById(R.id.videoView);
                startBtn = (Button) findViewById(R.id.startBtn);
                startBtn.setOnClickListener(new View.OnClickListener() {
                    public void onClick(View v) {
                        startCamera();
                    }
                });
                videoBtn = (Button) findViewById(R.id.videoBtn);
                videoBtn.setOnClickListener(new View.OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        startVideoCamera();
                    }
                });
            }

            public void startCamera() {
                Log.d("ANDRO_CAMERA", "Starting camera on the phone...");
                String fileName = "testphoto.jpg";
                ContentValues values = new ContentValues();
                values.put(MediaStore.Images.Media.TITLE, fileName);
                values.put(MediaStore.Images.Media.DESCRIPTION, "Image capture by camera");
                values.put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg");
                imageUri = getContentResolver().insert(
                        MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);

                Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
                intent.putExtra(MediaStore.EXTRA_OUTPUT, imageUri);
                intent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 1);
                startActivityForResult(intent, IMAGE_CAPTURE);
            }

            protected void onActivityResult(int requestCode, int resultCode, Intent data) {
                if (requestCode == IMAGE_CAPTURE) {
                    if (resultCode == RESULT_OK) {
                        Log.d("ANDROID_CAMERA", "Picture taken!!!");
                        imageView.setImageURI(imageUri);
                    }
                }
                if (requestCode == VIDEO_CAPTURE) {
                    if (resultCode == RESULT_OK) {
                        Log.d("ANDROID_CAMERA", "Video taken!!!");
                        Toast.makeText(this, "Video saved to:\n" + data.getData(),
                                Toast.LENGTH_LONG).show();
                        videoView.setVideoURI(videoUri);
                    }
                }
            }

            private void startVideoCamera() {
                Log.d("ANDRO_CAMERA", "Starting camera on the phone...");
                String fileName = "testvideo.mp4";
                ContentValues values = new ContentValues();
                values.put(MediaStore.Video.Media.TITLE, fileName);
                values.put(MediaStore.Video.Media.DESCRIPTION, "Video captured by camera");
                values.put(MediaStore.Video.Media.MIME_TYPE, "video/mp4");
                videoUri = getContentResolver().insert(
                        MediaStore.Video.Media.EXTERNAL_CONTENT_URI, values);

                Intent intent = new Intent("android.media.action.VIDEO_CAMERA");
                //Intent intent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
                intent.putExtra(MediaStore.EXTRA_OUTPUT, videoUri);
                intent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 1);
                // start the Video Capture Intent
                startActivityForResult(intent, VIDEO_CAPTURE);
            }

            private static File getOutputMediaFile() {
                // To be safe, you should check that the SDCard is mounted
                // using Environment.getExternalStorageState() before doing this.
                File mediaStorageDir = new File(
                        Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES),
                        "MyCameraApp");
                // This location works best if you want the created images to be shared
                // between applications and persist after your app has been uninstalled.

                // Create the storage directory if it does not exist
                if (!mediaStorageDir.exists()) {
                    if (!mediaStorageDir.mkdirs()) {
                        Log.d("MyCameraApp", "failed to create directory");
                        return null;
                    }
                }

                // Create a media file name
                String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
                File mediaFile;
                mediaFile = new File(mediaStorageDir.getPath() + File.separator
                        + "VID_" + timeStamp + ".mp4");
                return mediaFile;
            }

            /** Create a file Uri for saving an image or video */
            private static Uri getOutputMediaFileUri() {
                return Uri.fromFile(getOutputMediaFile());
            }
        }

    Read the article

  • The Unity aspect ratio script looks good on the computer but not on Android phones

    - by Pooya Fayyaz
    I'm developing a game for Android devices, and I have a script that solves the aspect ratio problem, but something is wrong with this code and I don't know why. It looks perfect on the computer, even when I resize the game window, but on mobile phones there is a problem. My game runs in landscape mode. This is the script:

        using UnityEngine;
        using System.Collections;
        using System.Collections.Generic;

        public class reso : MonoBehaviour
        {
            void Update()
            {
                // set the desired aspect ratio (the values in this example are
                // hard-coded for 16:9, but you could make them into public
                // variables instead so you can set them at design time)
                float targetaspect = 16.0f / 9.0f;

                // determine the game window's current aspect ratio
                float windowaspect = (float)Screen.width / (float)Screen.height;

                // current viewport height should be scaled by this amount
                float scaleheight = windowaspect / targetaspect;

                // obtain camera component so we can modify its viewport
                Camera camera = GetComponent<Camera>();

                // if scaled height is less than current height, add letterbox
                if (scaleheight < 1.0f && Screen.width <= 490)
                {
                    Rect rect = camera.rect;
                    rect.width = 1.0f;
                    rect.height = scaleheight;
                    rect.x = 0;
                    rect.y = (1.0f - scaleheight) / 2.0f;
                    camera.rect = rect;
                }
                else // add pillarbox
                {
                    float scalewidth = 1.0f / scaleheight;
                    Rect rect = camera.rect;
                    rect.width = scalewidth;
                    rect.height = 1.0f;
                    rect.x = (1.0f - scalewidth) / 2.0f;
                    rect.y = 0;
                    camera.rect = rect;
                }
            }
        }

    I figure that my problem occurs in this part of the script:

        if (scaleheight < 1.0f)
        {
            Rect rect = camera.rect;
            rect.width = 1.0f;
            rect.height = scaleheight;
            rect.x = 0;
            rect.y = (1.0f - scaleheight) / 2.0f;
            camera.rect = rect;
        }

    On my mobile phone it looks wrong in both portrait and landscape mode (see the attached screenshots).

    Read the article

  • How to make an object move again after being stopped by collision in Unity?

    - by Matthew Underwood
    I have a player object whose position is always centered on the main camera's viewport. This object has a Rigidbody2D, a box collider and a circle collider. The player moves around a level; the level has a polygon collider attached. I move the camera until the object hits the collider, which stops the movement of the camera by setting its speed to 0. The problem happens when I want to move the camera / player object away from the collider: as the speed is already 0, it cannot move away from the collider. The script attached to the player object checks for collisions and sets the speed to 0 on the main camera's test script:

        using UnityEngine;
        using System.Collections;

        public class move : MonoBehaviour
        {
            public float speed;
            public test testing;

            // Use this for initialization
            void Start()
            {
                speed = 10F;
                testing = Camera.main.GetComponent<test>();
            }

            // Update is called once per frame
            void FixedUpdate()
            {
                Vector3 p = Camera.main.ViewportToWorldPoint(
                    new Vector3(0.5F, 0.5F, Camera.main.nearClipPlane));
                transform.position = new Vector3(p.x, p.y, -1);
            }

            void OnCollisionEnter2D(Collision2D col)
            {
                testing.speed = 0;
            }

            void OnCollisionExit2D(Collision2D col)
            {
                testing.speed = 10F;
            }
        }

    This is the script attached to the main camera; just a simple script that changes the camera's position:

        using UnityEngine;
        using System.Collections;

        public class test : MonoBehaviour
        {
            public float speed;
            public float translationY;
            public float translationX;

            // Use this for initialization
            void Start()
            {
                speed = 10F;
            }

            void FixedUpdate()
            {
                translationY = Input.GetAxis("Vertical") * speed * Time.deltaTime;
                translationX = Input.GetAxis("Horizontal") * speed * Time.deltaTime;
                transform.Translate(translationX, translationY, 0);
            }
        }

    The player object isn't kinematic and has a fixed angle; the colliders aren't triggers and the polygon collider isn't a trigger either. The player is the red square, the collider is the pink area. EDIT: With the latest change to the collider setup for the player, if the X speed was disabled it wouldn't move into the side of the polygon collider, which is good, but you still couldn't move away from it, and moving down would move inside the collider.

    Read the article

  • Remove a Digital Camera’s IR Filter for IR Photography on the Cheap

    - by Jason Fitzpatrick
    Whether you have a DSLR or a point-and-shoot, this simple hack allows you to shoot awesome IR photographs without the expense of a high-quality IR filter (or the accompanying loss of light that comes with using it). How does it work? You'll need to take apart your camera and remove a single fragile layer of IR-blocking glass from the CCD inside the camera body. After doing so, you'll have a camera that sees infrared light by default, no special add-on filters necessary. Because it sees the IR light without the filters, you'll also skip the light loss that occurs with an add-on IR filter. The downside? You're altering the camera in a permanent and warranty-voiding way. This is most definitely not a hack for your brand new $2,000 DSLR, but it is a really fun hack to try out on an old point-and-shoot camera or your circa-2004 depreciated DSLR. Hit up the link below to see the process performed on an old Canon point-and-shoot; we'd strongly recommend searching for a teardown guide for your specific camera model before attempting the trick on your own gear. Are You Brave Enough to IR-ize Your Camera [DIY Photography]

    Read the article

  • glm quaternion camera rotating on wrong axis

    - by Jarrett
    I'm trying to get my camera implemented with a glm::quat used to store the rotation. However, whenever I make circles with the mouse, the camera rotates around the axis I am viewing along (I think it's called the target axis). For example, if I rotate the mouse in a clockwise fashion, the camera rotates clockwise around that axis. I initialize my quaternion like so:

        void Camera::initialize()
        {
            orientationQuaternion_ = glm::quat();
            orientationQuaternion_ = glm::normalize(orientationQuaternion_);
        }

    I rotate like so:

        void Camera::rotate(const glm::detail::float32& degrees, const glm::vec3& axis)
        {
            orientationQuaternion_ = orientationQuaternion_ * glm::normalize(glm::angleAxis(degrees, axis));
        }

    and I set the view matrix like so:

        void Camera::render()
        {
            glm::quat temp = glm::conjugate(orientationQuaternion_);
            viewMatrix_ = glm::mat4_cast(temp);
            viewMatrix_ = glm::translate(viewMatrix_, glm::vec3(-pos_.x, -pos_.y, -pos_.z));
        }

    The only axes I actually try to rotate around are the X and Y axes (i.e. (1,0,0) and (0,1,0)). Does anyone have any idea why I see my camera rotating around the target axis?

    Read the article

  • Beginner video capture and processing/Camera selection

    - by mattbauch
    I'll soon be undertaking a research project in real-time event recognition but have no experience with the programming aspect of video capture (I'm an upperclassman undergraduate in computer engineering). I want to start off on the right foot so advice from anyone with experience would be great. The ultimate goal is to track events such as a person standing up/sitting down, entering/leaving a room, possibly even shrugging/slumping in posture, etc. from a security camera-like vantage point. First of all, which cameras/companies would you recommend? I'm looking to spend ~$100, more if necessary but not much. Great resolution isn't a must, but is desirable if affordable. What about IP network cameras vs. a USB type webcam? Webcams are less expensive, but IP cameras seem like they'd be much less work to deal with in software. What features should I look for in the camera? Once I've selected a camera, what does converting its output to a series of RGB bitmaps entail? I've never dealt with video encoding/decoding so a starting point or a tutorial that will guide me up to this point would be great if anyone has suggestions. Finally, what is the best (least complicated/most efficient) way to display video from the camera plus my own superimposed images (boxes around events in progress, for instance) in a GUI application? I can work on any operating system in any language. I have some experience with win32 GUIs and Java GUIs. The focus of the project is on the algorithm and so I'm trying to get the video capture/display portion of the app done cleanly and quickly. Thanks for any responses!!
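
    For the capture and display plumbing, OpenCV hides most of the encoding/decoding details and works with both USB webcams and many IP cameras, which keeps the focus on the algorithm. A minimal Python sketch (device index 0 is an assumption; an IP camera's stream URL string can be passed to VideoCapture instead):

        import cv2

        cap = cv2.VideoCapture(0)          # 0 = first USB webcam, or an MJPEG/RTSP URL string
        while True:
            ok, frame = cap.read()         # frame is a BGR image as a NumPy array (rows x cols x 3)
            if not ok:
                break
            # superimpose a box where an "event" was detected (coordinates are placeholders)
            cv2.rectangle(frame, (100, 100), (220, 260), (0, 255, 0), 2)
            cv2.imshow("camera", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()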

    Read the article

  • Glitch when moving camera in OpenGL

    - by CG
    I am writing a tile-based game engine for the iPhone and it works in general, apart from the following glitch. Basically, the camera will always keep the player in the centre of the screen, and it moves to follow the player correctly and draws everything correctly when stationary. However, whilst the player is moving, the tiles of the surface the player is walking on glitch, as shown in the first screenshot, compared to the stationary (correct) second one. Does anyone have any idea why this could be? Thanks for the responses so far. Floating point error was my first thought also, and I tried slightly increasing the size of the tiles, but this did not help. Changing glClearColor to red still leaves black gaps, so maybe it isn't floating point error. Since the tiles in general will use different textures, I don't know if vertex arrays can be used (I always thought that the same texture had to be applied to everything in the array, correct me if I'm wrong), and I don't think VBOs are available in OpenGL ES. Setting the filtering to nearest neighbour improved things, but the glitch still happens every ten frames or so, and the pixelly result means that this solution is not viable anyway. The main difference between what I'm doing now and what I've done in the past is that this time I am moving the camera rather than the stationary objects in the world (i.e. the tiles; the player is still being moved). The code I'm using to move the camera is:

        void Camera::CentreAtPoint(GLfloat x, GLfloat y)
        {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrthof(x - size.x / 2.0f, x + size.x / 2.0f, y + size.y / 2.0f, y - size.y / 2.0f, 0.01f, 5.0f);
            glMatrixMode(GL_MODELVIEW);
        }

    Is there a problem with doing things this way and, if so, is there a solution?

    Read the article

  • WPF Orthographic Camera. Region of view

    - by Evgeny
    Hello. I added an OrthographicCamera to my project. I want to show my chart on screen proportionally; for example, height 4 and width 4 (a region from -2 to 2). I set the width to 4 and it fits my square perfectly in width, but I have a problem with the height: the top and bottom of the chart are always outside the screen space. Why does this happen, and how do I set the camera view to the same width and height? Camera position: 0,0,5. Viewport size: 571.5x497. In the image we can see points from 2 to -2 on one axis, but much more on the other. How do I make them the same? Image: http://i076.radikal.ru/1003/96/273c74ed9add.png Sorry for my English.
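
    For what it's worth, WPF's OrthographicCamera.Width sets the horizontal extent of the viewing box, and the vertical extent then follows from the viewport's aspect ratio, which would explain the clipped top and bottom here. A quick back-of-the-envelope check in Python, treating Width = 4 and the 571.5x497 viewport from the question as given:

        camera_width = 4.0
        viewport_w, viewport_h = 571.5, 497.0

        visible_height = camera_width * viewport_h / viewport_w
        print("visible vertical extent: %.2f world units" % visible_height)  # ~3.48, so +/-2 is clipped

        # To see the full -2..2 range vertically, widen the camera to match the aspect ratio
        # (some extra horizontal range is unavoidable unless the viewport itself is square).
        needed_width = 4.0 * viewport_w / viewport_h
        print("camera Width needed: %.2f" % needed_width)                    # ~4.60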

    Read the article

  • Camera filters in iOS 7

    - by Muhannad Dasoqie
    How can I use the camera filters that already exist in iOS 7? If I use this code:

        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        picker.delegate = self;
        picker.allowsEditing = YES;
        picker.sourceType = UIImagePickerControllerSourceTypeCamera;
        [self presentModalViewController:picker animated:YES];

    the camera will open and I can capture a photo, but without using the filters. My question is: can I use the existing filters in the camera view in iOS 7?

    Read the article

  • How to detect in a Flex app if a camera is already in use by another application?

    - by Alex Fisherr
    I am making an application that plays the video stream from the user's local system (both Windows and Mac). I use the Camera.getCamera() method and, in turn, Camera.names to get a list of the cameras attached to the system. Unfortunately, if the camera is already in use by another application, say a desktop application on the user's system, the browser crashes. Is there any way that I can detect whether a specific camera from the list of available cameras is already in use by any other application?

    Read the article

  • Optimizing an iPhone app for 3G in landscape with OpenGL, camera, Quartz

    - by Joey
    I have an iPhone app that basically uses the camera, an OpenGL layer, and UIViews (some drawing with Quartz). It runs OK on the 3GS, but on the 3G it is unusable. In particular, when I press a UIButton, it sometimes literally takes 10 seconds to register the press. Shark doesn't do me much good because it crashes when I try to profile even a tiny portion, and I've tried turning off some of the layers to see if they might be obvious contributors to the lag. I've noticed that turning off the camera really helps. I'm wondering if anyone has any familiarity with this and might suggest some likely causes. I had issues with extreme slowdown from running my app in landscape mode and using transforms, so that might be a cause, but I'm wondering if hoping for a 3G to run something with all of the above elements is just not realistic, considering how much the camera seems to cost. The fact that the buttons are horribly delayed in their response makes me think there is something fundamental that I might be missing.

    Read the article

  • Rotate around the centre of the screen

    - by Dan Scott
    I want my camera to rotate around the centre of the screen and I'm not sure how to achieve that. I have a rotation in the camera, but I'm not sure what it's rotating around. (I think it might be rotating around the camera's position.X, but I'm not sure.) If you look at these two images: http://imgur.com/E9qoAM7,5qzyhGD#0 http://imgur.com/E9qoAM7,5qzyhGD#1 the first one shows how the camera is normally, and the second shows how I want the level to look when I rotate the camera 90 degrees left or right. My camera:

        public class Camera
        {
            private Matrix transform;
            public Matrix Transform
            {
                get { return transform; }
            }

            private Vector2 position;
            public Vector2 Position
            {
                get { return position; }
                set { position = value; }
            }

            private float rotation;
            public float Rotation
            {
                get { return rotation; }
                set { rotation = value; }
            }

            private Viewport viewPort;

            public Camera(Viewport newView)
            {
                viewPort = newView;
            }

            public void Update(Player player)
            {
                position.X = player.PlayerPos.X + (player.PlayerRect.Width / 2) - viewPort.Width / 4;
                if (position.X < 0)
                    position.X = 0;

                transform = Matrix.CreateTranslation(new Vector3(-position, 0)) *
                            Matrix.CreateRotationZ(Rotation);

                if (Keyboard.GetState().IsKeyDown(Keys.D))
                {
                    rotation += 0.01f;
                }
                if (Keyboard.GetState().IsKeyDown(Keys.A))
                {
                    rotation -= 0.01f;
                }
            }
        }

    (I'm assuming you would need to rotate around the centre of the screen to achieve this.)
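
    The usual trick is to compose the camera matrix as translate(-position), then rotate, then translate(screen centre), so the rotation pivot is whatever the camera is looking at rather than the world origin. Here is a language-agnostic sketch of that ordering in Python/NumPy (column-vector convention, so it reads right to left, whereas XNA's row-vector Matrix products read left to right; the positions are made-up numbers):

        import numpy as np

        def translation(tx, ty):
            return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

        def rotation(theta):
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

        cam_pos = np.array([300.0, 200.0])        # world point the camera is centred on
        screen_centre = np.array([400.0, 240.0])  # half the viewport size

        # Right to left: move cam_pos to the origin, rotate about it, then push the
        # origin out to the middle of the screen.
        view = translation(*screen_centre) @ rotation(np.radians(90)) @ translation(*(-cam_pos))

        p = np.array([300.0, 200.0, 1.0])  # the point the camera is centred on...
        print(view @ p)                    # ...stays pinned to the screen centre: [400. 240. 1.]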

    Read the article

  • Testing the Raspberry Pi 5M camera, a tutorial by Nicolargo

    Hello, we previously published the tutorial "Raspberry Pi: unboxing and installation". As a follow-up, we offer this tutorial: Testing the Raspberry Pi 5M camera. Quote: Raspberry has recently started offering, for less than €25, a camera dedicated to its Pi range. This camera, weighing only a few grams, connects to a Raspberry Pi (model A or B) through a dedicated CSI v2 interface (MIPI camera interface). Thanks to Kubii (a Farnell supplier in France), I was quickly able to obtain...

    Read the article

  • IP camera working on LAN but not over the Internet

    - by Kevin Boyd
    My IP cam model is a Genius 350TR. I tested the cam at home over both LAN and the Internet and it worked. Then I moved it to an office. It works on the office LAN setup, but I cannot connect to the IP cam from home. The IP cam is configured as 192.168.0.30:7070 and has that port forwarded to publicIP:7071. When I telnet to that public IP it connects to that port. However, when I try to access the IP cam from a web browser, it only shows me the configuration page and settings; the video is blank, says "connecting" for some time and then says "disconnected". The cam is configured for HTTP on the Internet and UDP on the LAN. The office setup is ISP --- WifiRouter --- PC with Wifi card --- Switch --- IP CAM. Is there a way to debug this problem?
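
    One way to narrow it down is to check from outside the office whether the port the video actually travels over (IP cameras often stream on a separate port, or over UDP, rather than on the forwarded web-interface port) is reachable through the router at all. A small Python probe as a sketch; the public IP is a placeholder and the second port number is a guess to replace with the camera's documented streaming port:

        import socket

        HOST = "203.0.113.10"        # placeholder: the office's public IP
        for port in (7071, 7070):    # the forwarded port, plus any other port the stream might use
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(3)
            try:
                s.connect((HOST, port))
                print("TCP %d reachable" % port)
            except (socket.timeout, socket.error) as exc:
                print("TCP %d blocked: %s" % (port, exc))
            finally:
                s.close()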

    Read the article

  • Is any EXIF data stored within 3rd party Camera apps on the iPhone?

    - by 3rdparty
    I'm confused as to whether any EXIF data is available when taking photos within 3rd party camera apps on the iPhone. My understanding is that Apple currently does not allow apps to save EXIF data to photos, and that this is a limitation of saving to the camera roll on the phone. The last FAQ on this page indicates this, but appears to be out of date: http://www.codegoo.com/page/support I love some of the camera apps I've downloaded (Camera Genius, Best Camera, CameraBag) but don't want to continue using them if they aren't saving any/all EXIF data for the image. Is anyone aware what the status of this 'limitation' is?
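
    One way to settle it for any particular app is to copy a photo it saved off the phone and inspect the file directly. A small Python check using Pillow (an assumption here, as is the filename):

        from PIL import Image

        img = Image.open("photo_from_app.jpg")  # placeholder: a photo exported from the app in question
        exif = img.getexif()                    # empty mapping if the app stripped the EXIF data
        if not exif:
            print("no EXIF data saved")
        else:
            for tag_id, value in exif.items():
                print(tag_id, value)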

    Read the article

  • Camera and Image recognition

    - by kjh
    I recently watched a YouTube video where a guy got a camera to recognize when a Rubik's cube was held up to it, and it captured the nine-square colour combination before snapping a picture of the cube and displaying the 3x3 grid on the screen of his computer. What kind of programming is this, and where would I start reading to get into this sort of thing? Specifically, controlling a camera, and getting it to pick out certain parts of an image and translate that data.
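
    This falls under computer vision, and OpenCV is the usual starting point for reading about it. The colour-reading part of the cube trick largely boils down to sampling the average colour of nine regions of a frame; a minimal Python sketch (the grid coordinates are placeholders for wherever the cube sits in the frame):

        import cv2

        cap = cv2.VideoCapture(0)      # first attached camera
        ok, frame = cap.read()         # one frame as a BGR NumPy array
        cap.release()

        x0, y0, cell = 200, 120, 60    # placeholder: top-left corner and size of one facelet
        for row in range(3):
            for col in range(3):
                patch = frame[y0 + row * cell : y0 + (row + 1) * cell,
                              x0 + col * cell : x0 + (col + 1) * cell]
                b, g, r, _ = cv2.mean(patch)   # average colour of that facelet
                print("cell (%d,%d): R=%.0f G=%.0f B=%.0f" % (row, col, r, g, b))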

    Read the article

  • Why use a 3D matrix and camera in a 2D world for 2D geometric figures?

    - by Navy Seal
    I'm working in XNA on a 2D isometric world/game and I'm using DrawUserPrimitives to draw some geometric figures. I saw some tutorials about creating dynamic shadows, but I didn't understand why they use a "3D" matrix to control the transformations, since the figure I'm drawing is in a 2D perspective. I know I'm drawing a 2D figure in 3D, but I still can't understand if I really need to work with the matrix. Is there any advantage in using a 3D matrix to control the camera and view? Any reason why I can't just update my vertices' positions using a regular method, since the view is always the same? And since I want to work only with single figures, won't this cause all the geometric figures to have the same transformations simultaneously? To understand better what I mean, here's a video: http://www.youtube.com/watch?v=LjvsGHXaGEA&feature=player_embedded

    Read the article
