Search Results

Search found 13243 results on 530 pages for 'android camera'.

Page 483/530

  • RTSP streaming and saving to an mp4 file using VLC

    - by Vivek Navadia
    Hello all. Say I have an RTSP URL (rtsp://192.168.0.17/mpeg4); a live camera is set up on that machine and relays live video. I am streaming it with VLC and saving it to an mp4 file at some location (e.g. c:\temp.mp4). Now I open another VLC instance and open that file (c:\temp.mp4), but because the file is still in use, with the live stream being saved into it, it will not play. If I stop the streaming and then play temp.mp4, it plays the streamed (saved) video. My requirement is that VLC should keep streaming and saving into temp.mp4 continuously, and at the same time that file should be playable in any standard player. Is there any VLC option that lets me do both things simultaneously? Thanks, Vivek
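
    For reference, the "stream and record at the same time" setup described above is usually expressed through VLC's stream-output (--sout) "duplicate" chain, which sends one copy of the stream to the display and another to a file. A minimal sketch, assuming the URL and output path from the question and that VLC is on the PATH (and not addressing whether the half-written mp4 is playable by other players):

        import subprocess

        rtsp_url = "rtsp://192.168.0.17/mpeg4"
        out_file = r"c:\temp.mp4"

        # One VLC instance: display the live stream and write it to a file.
        cmd = [
            "vlc", rtsp_url,
            "--sout",
            "#duplicate{dst=display,dst=std{access=file,mux=mp4,dst=" + out_file + "}}",
        ]
        subprocess.Popen(cmd)  # VLC keeps running in the background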

    Read the article

  • How to search on project hosting sites

    - by Stan
    Say I want to look for an open source iPhone/Android tower defense game on SourceForge, Google Code, or GitHub. Searching directly with those keywords doesn't easily turn up the desired results. Is there a better way to search for projects on these sites? Thanks.
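
    As an aside, at least one of these hosts can also be searched programmatically. A rough sketch against GitHub's public repository-search endpoint (illustrative only; the endpoint, query parameters, and result fields are whatever GitHub currently documents, and rate limits apply):

        import json
        import urllib.parse
        import urllib.request

        query = urllib.parse.urlencode({"q": "tower defense android"})
        url = "https://api.github.com/search/repositories?" + query

        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)

        # Print the first few matching repositories.
        for repo in data.get("items", [])[:10]:
            print(repo["full_name"], "-", repo.get("description"))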

    Read the article

  • Database solutions for storing/searching EXIF data

    - by webdestroya
    I have thousands of photos on my site (each with a numeric PhotoID) and I have EXIF data (photos can have different EXIF tags as well). I want to be able to store the data effectively and search it. Some photos have more EXIF data than others, some have the same, and so on. Basically, I want to be able to run queries like 'Select all photos that have a GPS location' or 'All photos with a specific camera'. I can't use MySQL (I tried it, and it doesn't work for this). I thought about Cassandra, but I don't think it lets me query on fields. I looked at SimpleDB, but I would rather not pay for the system, and I want to be able to run more advanced queries on the data. Also, I use PHP and Linux, so it would be great if it interfaced nicely with PHP. Any ideas?
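
    For illustration, the query pattern being asked about can be modelled with a simple tag/value table keyed by photo ID, so photos can carry arbitrary, differing sets of tags. A minimal sketch (SQLite and the table/column names are my own choices for the example, not a recommendation of a specific engine):

        import sqlite3

        conn = sqlite3.connect("photos.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS exif
                        (photo_id INTEGER, tag TEXT, value TEXT)""")

        def photos_with_tag(tag):
            # e.g. all photos that have any GPS latitude recorded
            rows = conn.execute(
                "SELECT DISTINCT photo_id FROM exif WHERE tag = ?", (tag,))
            return [r[0] for r in rows]

        def photos_with_value(tag, value):
            # e.g. all photos taken with a specific camera model
            rows = conn.execute(
                "SELECT DISTINCT photo_id FROM exif WHERE tag = ? AND value = ?",
                (tag, value))
            return [r[0] for r in rows]

        print(photos_with_tag("GPSLatitude"))
        print(photos_with_value("Model", "Canon EOS 5D"))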

    Read the article

  • When to best implement an I2C driver module in Linux

    - by stefangachter
    I am currently dealing with two devices connected to the I2C bus within an embedded system running Linux. I am using an existing driver for the first device, a camera. For the second device, I have successfully implemented a userspace program with which I can communicate with it. So far, both devices seem to coexist happily. However, almost all I2C devices have their own driver module, so I am wondering what the advantages of a driver module are. I had a look at the following thread... http://stackoverflow.com/questions/149032/when-should-i-write-a-linux-kernel-module ... but it was inconclusive. So, what would be the advantage of writing an I2C driver module over a userspace implementation? Regards, Stefan
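
    For reference, the userspace route described above typically goes through the Linux i2c-dev interface (/dev/i2c-N). A minimal, illustrative sketch - the bus number, device address, and register are made up for the example:

        import os
        import fcntl

        I2C_SLAVE = 0x0703  # ioctl request number from linux/i2c-dev.h

        # Hypothetical bus and address; on real hardware these come from the board design.
        fd = os.open("/dev/i2c-1", os.O_RDWR)
        fcntl.ioctl(fd, I2C_SLAVE, 0x42)   # select the target device

        os.write(fd, bytes([0x00]))        # write a register address
        data = os.read(fd, 1)              # read one byte back
        print(hex(data[0]))

        os.close(fd)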

    Read the article

  • How to programmatically determine Bluetooth master/slave roles?

    - by Rich
    So in a Bluetooth piconet, there is one master with up to seven slaves. The master sets the clock and frequency hop that the slaves sync with. But is there a way to determine which device is the master and which is the slave? I'm mainly interested in portable devices (Android, iPhone), but beggars can't be choosers; if anybody has info in this field I would be interested. Thanks

    Read the article

  • Respecting EXIF orientation when displaying iPhone photos on the web

    - by GingerBreadMane
    I am developing an iPhone camera app that uploads an image to Amazon S3, and that image is displayed on a website. When the iPhone takes a picture, it always saves the photo in an upright orientation, while the orientation used to correctly view the photo is saved in the image's EXIF data. So if I take a photo with the iPhone and open it in Firefox without processing the EXIF data, the image may appear sideways or upside down. My problem is that I don't know how to display the photo in its correct orientation on the website. My current solution is to rotate the photo in the iPhone app, but I'd rather not do that. Is there any way to respect the EXIF data when displaying on the web without pre-processing the image?
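
    For reference, applying the EXIF Orientation tag in code (wherever that ends up happening) follows a well-known mapping. A rough sketch using Python and the Pillow imaging library - my choice of tools for illustration, not something from the question:

        from PIL import Image

        # EXIF tag 274 (0x0112) is Orientation; the common values are
        # 3 = rotated 180, 6 = needs 90 CW to display upright, 8 = needs 90 CCW.
        ORIENTATION_TAG = 274

        def upright(path):
            img = Image.open(path)
            exif = img._getexif() or {}
            orientation = exif.get(ORIENTATION_TAG, 1)
            if orientation == 3:
                img = img.rotate(180, expand=True)
            elif orientation == 6:
                img = img.rotate(270, expand=True)  # undo 90-degree CW camera rotation
            elif orientation == 8:
                img = img.rotate(90, expand=True)
            return img

        upright("photo.jpg").save("photo_upright.jpg")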

    Read the article

  • Sending SMS from a Windows application

    - by Alien01
    I am creating a Windows application which needs to send some SMS messages to a mobile phone. This is just for testing purposes. Can I use my own cell phone to get this done? I have an Android phone which can be connected to the PC via USB. The application is written in C++ using the Windows API. Any pointers will help.

    Read the article

  • OpenGL textures trigger error 1281 and strange background behavior

    - by user3714670
    I am using SOIL to apply textures to VBOs. Without textures I could change the background and display black (default color) VBOs easily, but now with textures OpenGL is giving error 1281, the background is black, and some textures are not applied. There must be something I didn't understand about applying/loading the textures. But the texture IS applied (nothing else is working, though); the error is raised when I try to use the shader program, yet I checked the compilation of the shaders and no problems were reported. Here is the code I use to load textures (once loaded, the texture is kept in memory); it mostly comes from the SOIL example:

        texture = SOIL_load_OGL_single_cubemap(
            filename,
            SOIL_DDS_CUBEMAP_FACE_ORDER,
            SOIL_LOAD_AUTO,
            SOIL_CREATE_NEW_ID,
            SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT);
        if (texture > 0) {
            glEnable(GL_TEXTURE_CUBE_MAP);
            glEnable(GL_TEXTURE_GEN_S);
            glEnable(GL_TEXTURE_GEN_T);
            glEnable(GL_TEXTURE_GEN_R);
            glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
            glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
            glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
            glBindTexture(GL_TEXTURE_CUBE_MAP, texture);
            std::cout << "the loaded single cube map ID was " << texture << std::endl;
        } else {
            std::cout << "Attempting to load as a HDR texture" << std::endl;
            texture = SOIL_load_OGL_HDR_texture(
                filename,
                SOIL_HDR_RGBdivA2,
                0,
                SOIL_CREATE_NEW_ID,
                SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS);
            if (texture < 1) {
                std::cout << "Attempting to load as a simple 2D texture" << std::endl;
                texture = SOIL_load_OGL_texture(
                    filename,
                    SOIL_LOAD_AUTO,
                    SOIL_CREATE_NEW_ID,
                    SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT);
            }
            if (texture > 0) {
                // enable texturing
                glEnable(GL_TEXTURE_2D);
                // bind an OpenGL texture ID
                glBindTexture(GL_TEXTURE_2D, texture);
                std::cout << "the loaded texture ID was " << texture << std::endl;
            } else {
                glDisable(GL_TEXTURE_2D);
                std::cout << "Texture loading failed: '" << SOIL_last_result() << "'" << std::endl;
            }
        }

    And here is how I apply it when drawing:

        GLuint TextureID = glGetUniformLocation(shaderProgram, "myTextureSampler");
        if (!TextureID)
            cout << "TextureID not found ..." << endl;
        // glEnableVertexAttribArray(TextureID);
        glActiveTexture(GL_TEXTURE0);
        if (SFML)
            sf::Texture::bind(sfml_texture);
        else {
            glBindTexture(GL_TEXTURE_2D, texture);
            // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, &texture);
        }
        glUniform1i(TextureID, 0);

    I am not sure that SOIL is the right fit for my program, as I want something as simple as possible (I used SFML's texture object, which was the best, but I can't anymore); but if I can get it to work it would be great.
    EDIT: After narrowing down the code implied by the error, here is the code that provokes it; it is called between texture loading and VBO drawing:

        glEnableClientState(GL_VERTEX_ARRAY);
        // this gives the error:
        glUseProgram(this->shaderProgram);
        if (!shaderLoaded) {
            std::cout << "Loading default shaders" << std::endl;
            if (textured)
                loadShaderProgramm(texture_vertexSource, texture_fragmentSource);
            else
                loadShaderProgramm(default_vertexSource, default_fragmentSource);
        }
        glm::mat4 Projection = camera->getPerspective();
        glm::mat4 View = camera->getView();
        glm::mat4 Model = glm::mat4(1.0f);
        Model[0][0] *= scale_x;
        Model[1][1] *= scale_y;
        Model[2][2] *= scale_z;
        glm::vec3 translate_vec(this->x, this->y, this->z);
        glm::mat4 object_transform = glm::translate(glm::mat4(1.0f), translate_vec);
        glm::quat rotation = QAccumulative.getQuat();
        glm::mat4 matrix_rotation = glm::mat4_cast(rotation);
        object_transform *= matrix_rotation;
        Model *= object_transform;
        glm::mat4 MVP = Projection * View * Model;
        GLuint ModelID = glGetUniformLocation(this->shaderProgram, "M");
        if (ModelID == -1) cout << "ModelID not found ..." << endl;
        GLuint MatrixID = glGetUniformLocation(this->shaderProgram, "MVP");
        if (MatrixID == -1) cout << "MatrixID not found ..." << endl;
        GLuint ViewID = glGetUniformLocation(this->shaderProgram, "V");
        if (ViewID == -1) cout << "ViewID not found ..." << endl;
        glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
        glUniformMatrix4fv(ModelID, 1, GL_FALSE, &Model[0][0]);
        glUniformMatrix4fv(ViewID, 1, GL_FALSE, &View[0][0]);
        drawObject();

    Read the article

  • Unable to access Bluetooth in the emulator

    - by Abhijeet
    I want to run a Bluetooth chat app, but my emulator is unable to switch Bluetooth on. I am using Android 2.2. The Bluetooth option is not highlighted in the emulator. Can anyone tell me how to activate Bluetooth on the emulator? Thanks in advance, Abhijeet

    Read the article

  • A good smartphone for Java development?

    - by pstanton
    Hi all, I'd like to do some development of smartphone apps, and my native programming language is Java. The first application I'd like to write will need to be able to (attempt to) connect to a network (LAN or WiFi) automatically in the background (on a schedule). Would an Android phone be the best path, or are there competitive, purer Java options?

    Read the article

  • Vibrations when exploding/repacking movie

    - by Stefano Borini
    Please bear with me, I know that what I'm doing can sound strange, but I can guarantee there's a very good reason for it. I took a movie with my camera, as avi. I imported the movie into iMovie and then exploded the single frames as PNG. Then I repacked these frames into mov using the following code:

        movie, error = QTMovie.alloc().initToWritableFile_error_(out_path, None)
        mt = QTMakeTime(v, scale)
        attrib = {QTAddImageCodecType: "jpeg"}
        for path in png_paths:
            image = NSImage.alloc().initWithContentsOfFile_(path)
            movie.addImage_forDuration_withAttributes_(image, mt, attrib)
        movie.updateMovieFile()

    The resulting mov works, but it looks like the frames are "nervous" and shaky when compared to the original avi, which appears smoother. The size of the two files is approximately the same, and both the export and repacking occurred at 30 fps. The pics also appear to be aligned, so it's not due to accidental shift of the frames. My question is: knowing the file formats and the process I performed, what is the probable cause of such a result? How can I fix it?

    Read the article

  • Java speech recognition API

    - by jaymin
    Hi, I am currently developing an Android application where I am required to implement speech recognition. Could you suggest a link where I could find a Java speech recognition API? Thanks

    Read the article

  • How do I use .NET to find an orange ball in an image?

    - by JohnS
    I'm getting images from a C328R camera attached to a small Arduino robot. I want the robot to drive towards orange ping-pong balls and pick them up. I'm using the C# code supplied by funkotron76 at http://www.codeproject.com/KB/recipes/C328R.aspx. Is there a library I can use to do this, or do I need to iterate over every pixel in the image looking for orange? If so, what kind of tolerance would I need to compensate for various lighting conditions? I could probably test to figure out these numbers, but I'm hoping someone out there knows the answers.
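
    To illustrate the per-pixel approach mentioned in the question (in Python with Pillow rather than the C# the robot code uses, and with threshold values that are only guesses to be calibrated against real images):

        import colorsys
        from PIL import Image

        def orange_pixels(path, hue_lo=0.03, hue_hi=0.12, min_sat=0.5, min_val=0.4):
            """Count pixels whose hue falls in a rough 'orange' band.

            The hue band and the saturation/value floors are starting points only;
            lighting conditions would need calibration with sample frames.
            """
            img = Image.open(path).convert("RGB")
            hits = 0
            for r, g, b in img.getdata():
                h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
                if hue_lo <= h <= hue_hi and s >= min_sat and v >= min_val:
                    hits += 1
            return hits

        print(orange_pixels("frame.jpg"))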

    Read the article

  • Super Cam iPhone app - how do they make it possible?

    - by Silent
    There is an iPhone app called SuperCam, and you can get it through the App Store for free. This app features a way to connect to your webcam or DV cam that is connected to the internet; you set up the IP address and enter the data in the app, and it connects to your online camera. The thing is that they have the video stream and it looks like they embedded the video in a UIView or web view, and at the bottom they have buttons to choose from all the cameras you have set up. So this is different from other video streaming apps, because it does not play the video in full-screen mode (the MPMediaPlayer API). Are there any tutorials about this, or could I somehow reverse engineer it?

    Read the article

  • URL conventions for Maps on Windows Phone 7

    - by Stan Wiechers
    What is the best practice for opening a map from mobile Internet Explorer on Windows Phone 7? On BlackBerry you use a JavaScript method, and on Android/iOS you simply link to a Google Maps URL. I am planning to integrate the different ways of opening maps into my mobile geolocation JavaScript library, and I don't have a Windows Phone device. http://code.google.com/p/geo-location-javascript/ Thanks, Stan Wiechers

    Read the article

  • UIImageWriteToSavedPhotosAlbum Problem

    - by Momeks
    Hi, I am trying to save a photo from the camera after taking it with a button. Here is my code:

        -(IBAction)takePic {
            ipc = [[UIImagePickerController alloc] init];
            ipc.delegate = self;
            ipc.sourceType = UIImagePickerControllerSourceTypeCamera;
            [self presentModalViewController:ipc animated:YES];
        }

        - (void)imagePickerController:(UIImagePickerController *)picker
                didFinishPickingMediaWithInfo:(NSDictionary *)info {
            img.image = [info objectForKey:UIImagePickerControllerOriginalImage];
            [[picker parentViewController] dismissModalViewControllerAnimated:YES];
            [picker release];
        }

    But I don't know why it doesn't save anything!

    Read the article

  • How to best deal with photos passed to IFilter?

    - by sharptooth
    I'm implementing an IFilter for indexing image formats. One problem is photos - many users have tons of photos, photos are huge, and loading them and searching them for text is time consuming. Yes, sometimes people use cameras instead of scanners for digitizing documents, but the potential problems IMO far outweigh the possibility of encountering a document digitized with a photo camera. So my implementation will not extract text from photos at all. What should the IFilter do once it detects that a given file is a photo image - indicate an error or return empty text?

    Read the article

  • #define vs enum in an embedded environment (How do they compile?)

    - by Alexander Kondratskiy
    This question has been done to death, and I would agree that enums are the way to go. However, I am curious as to how enums compile in the final code - #defines are just string replacements, but do enums add anything to the compiled binary? Or are they both equivalent at that stage? When writing firmware and memory is very limited, is there any advantage, no matter how small, to using #defines? Thanks! EDIT: As requested by the comment below, by embedded I mean a digital camera. Thanks for the answers! I am all for enums!

    Read the article

  • PLCameraController not showing in view controller

    - by sujyanarayan
    Hi, I've declared a PLCameraController instance in my AppDelegate class as:

        self.cameraController = [PLCameraController performSelector:@selector(sharedInstance)];
        [cameraController setDelegate:self];

    And I'm accessing it in one of my view controller classes as:

        del = [[UIApplication sharedApplication] delegate];
        UIView *previewView = [del.cameraController performSelector:@selector(previewView)];
        previewView.frame = CGRectMake(0, 0, 320, 480);
        self.view = previewView;
        [del.cameraController performSelector:@selector(startPreview)];
        [del.cameraController performSelector:@selector(setCameraMode:) withObject:(NSNumber *)1];

    where "del" is an instance of my AppDelegate class. But I can see only a black background in my view controller's view on the iPhone device. Also, if I remove "self" from the app delegate code for cameraController, it still shows blank. How can I get the camera into my view controller? I'm struggling quite a bit with it. Thanks in advance.

    Read the article

  • Python Daemon Subprocess not working at boot

    - by Adam Richardson
    I am attempting to write a Python daemon that will launch at boot. The goal of the script is to receive a job from our Gearman load balancing server and complete the job. I am using the python-daemon module from PyPI (http://pypi.python.org/pypi/python-daemon/). The job it completes is converting images from ORF (Olympus raw image format) to JPEG. To accomplish this an outside program is used, ufraw in this case. The problem comes in when I start the daemon at boot: if I launch it from the shell it runs perfectly and completes the work, but when it starts at boot it is unable to launch the subprocess command.

        commandString = '/usr/bin/ufraw-batch --interpolation=four-color --wb=camera --compression=100 --output="' + outfile + '" --out-type=jpg --overwrite "' + infile + '"'
        args = shlex.split(commandString)
        process = subprocess.Popen(args).wait()

    I am not sure what I am doing wrong. Thanks for any help.
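
    As a purely diagnostic sketch (not a fix), capturing the child's exit status and stderr to a log file can show what actually goes wrong when the daemon runs at boot rather than from a shell. The ufraw-batch command is the one from the question; the file paths and log location are placeholders:

        import shlex
        import subprocess

        infile = "/path/to/photo.orf"    # placeholder paths for illustration
        outfile = "/path/to/photo.jpg"

        commandString = ('/usr/bin/ufraw-batch --interpolation=four-color --wb=camera '
                         '--compression=100 --output="' + outfile + '" --out-type=jpg '
                         '--overwrite "' + infile + '"')
        args = shlex.split(commandString)

        # Send the child's output somewhere visible, since a daemon has no terminal.
        with open("/tmp/ufraw-daemon.log", "ab") as log:
            process = subprocess.Popen(args, stdout=log, stderr=log)
            returncode = process.wait()
            log.write(("exit status: %d\n" % returncode).encode())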

    Read the article

  • Advice on embedding video content via CMS - what format?

    - by ted776
    Hi, if I set up the facility for people to embed video content on their site via their CMS (using the TinyMCE editor), is there any reliable cross-platform video format that should be used? From what I can find online, the only reliable way to embed and stream video is using FLV. Other formats seem to have caveats, e.g. codecs required or QuickTime updates required. Ideally I'd like to avoid that type of situation. If FLV is the preferred option, then that involves asking people to encode their video content to FLV before uploading, so there is an extra step required here (unless I can set up the encoding on the back end, but this might take a while to process depending on the size of the video). Does anyone have any additional advice on this? The type of video I'd imagine people will be working with is raw camera footage, so I need to figure out the easiest and most reliable way of getting the footage onto a web page.
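
    On the "encode on the back end" idea mentioned above, the usual shape of that step is to shell out to an encoder after upload. A minimal sketch using ffmpeg (the encoder choice, flags, and file names are illustrative assumptions, not something from the question):

        import subprocess

        uploaded = "upload.mov"   # placeholder: the file received from the CMS
        flv_out = "upload.flv"

        # Re-encode the uploaded footage to FLV for embedding.
        subprocess.check_call([
            "ffmpeg", "-i", uploaded,
            "-f", "flv",
            flv_out,
        ])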

    Read the article

  • iPhone: Chained views

    - by Michael
    I want to have dynamically created views and be able to scroll (change views, like in the camera roll) either from my program or by the user. The views should contain only simple text. Each view replaces the other, so they are chained together. Another example is the screenshots of applications in the App Store application details. I don't know which classes to check or where to start, so if anyone can give me an idea of how this could be designed I would appreciate it.

    Read the article

  • ASP.NET MVC 4/Web API Single Page App for Mobile Devices ... Needs Authentication

    - by lmttag
    We have developed an ASP.NET MVC 4/Web API single page, mobile website (also using jQuery Mobile) that is intended to be accessed only from mobile devices (e.g., iPads, iPhones, Android tablets and phones, etc.), not desktop browsers. This mobile website will be hosted internally, like an intranet site. However, since we're accessing it from mobile devices, we can't use Windows authentication. We still need to know which user (and their role) is logging in to the mobile website app. We tried simply using ASP.NET's forms authentication and membership provider, but couldn't get it working exactly the way we wanted. What we need is for the user to be prompted for a user name and password only the first time they access the site on their mobile device. After they enter a correct user name and password and have been authenticated once, each subsequent time they access the site they should just go right in. They shouldn't have to re-enter their credentials (i.e., something needs to be saved locally to each device to identify the user after the first time). This is where we had trouble. Everything worked as expected the first time. That is, the user was prompted to enter a user name and password, and, after doing that, was authenticated and allowed into the site. The problem is that every time after the browser was closed on the mobile device, the device and user were not known and the user had to re-enter the user name and password. We tried lots of things too. We tried setting persistent cookies in JavaScript. No good. The cookies weren't there to be read the second time. We tried manually setting persistent cookies from ASP.NET. No good. We, of course, used FormsAuthentication.SetAuthCookie(model.UserName, true); as part of the forms authentication framework. No good. We tried using HTML5 local storage. No good. No matter what we tried, if the user was on a mobile device, they would have to log in every single time. (Note: we've tried on an iPad and iPhone running both iOS 5.1 and 6.0, with Safari configured to allow cookies, and we've tried on Android 2.3.4.) Is there some trick to getting a scenario like this working? Or do we have to write some sort of custom authentication mechanism? If so, how? And what? Or should we use something like claims-based authentication and WIF? Or??? Any help is appreciated. Thanks!

    Read the article
