Search Results

Search found 13647 results on 546 pages for 'android camera intent'.

  • Fitting a rectangle into screen with XNA

    - by alecnash
    I am drawing a rectangle with primitives in XNA. The width is width = GraphicsDevice.Viewport.Width and the height is height = GraphicsDevice.Viewport.Height. I am trying to fit this rectangle to the screen (using different screens and devices), but I am not sure where to put the camera on the Z-axis. Sometimes the camera is too close and sometimes too far. This is what I am using to get the camera distance: //Height of pyramid float alpha = 0; float beta = 0; float gamma = 0; alpha = (float)Math.Sqrt((width / 2 * width/2) + (height / 2 * height / 2)); beta = height / ((float)Math.Cos(MathHelper.ToRadians(67.5f)) * 2); gamma = (float)Math.Sqrt(beta*beta - alpha*alpha); position = new Vector3(0, 0, gamma); Any idea where to put the camera on the Z-axis?
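
    One way to derive that distance directly (a minimal sketch, assuming the fovY below matches the one passed to Matrix.CreatePerspectiveFieldOfView and that the projection's aspect ratio equals width / height):

        // Sketch: a rectangle of size (width x height) at z = 0 exactly fills
        // the view when the camera sits this far away.
        float fovY = MathHelper.PiOver4; // assumption: your projection's vertical field of view
        float gamma = (height / 2f) / (float)Math.Tan(fovY / 2f);
        position = new Vector3(0, 0, gamma);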

  • How do I create a third-person view using DXUTCamera in DX10?

    - by David
    I am creating a 3D flying game and using DXUTCamera for my view. I can get the camera to take on the character's position, but I would like to view my character in the third person. Here is my code for the first-person view: //Put the camera on the object. D3DXVECTOR3 viewerPos; D3DXVECTOR3 lookAtThis; D3DXVECTOR3 up ( 5.0f, 1.0f, 0.0f ); D3DXVECTOR3 newUp; D3DXMATRIX matView; //Set the viewer's position to the position of the thing. viewerPos.x = character->x; viewerPos.y = character->y; viewerPos.z = character->z; // Create a new vector for the direction for the viewer to look character->setUpWorldMatrix(); D3DXVECTOR3 newDir, lookAtPoint; D3DXVec3TransformCoord(&newDir, &character->initVecDir, &character->matAllRotations); // set lookatpoint D3DXVec3Normalize(&lookAtPoint, &newDir); lookAtPoint.x += viewerPos.x; lookAtPoint.y += viewerPos.y; lookAtPoint.z += viewerPos.z; g_Camera.SetViewParams(&viewerPos, &lookAtPoint); So does anyone have any ideas how I can move the camera to a third-person view? Preferably timed, so there is a smooth action in the camera movement. (I'm hoping I can just edit this code instead of bringing in another camera class.)
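
    One hedged sketch of a third-person offset that fits this code: pull the eye back along the character's facing direction and ease toward that spot each frame, then aim at the character. Here followDistance, followHeight, smoothing and elapsedTime are assumed variables (not DXUT names), and eyePos would persist as a member between frames:

        D3DXVECTOR3 eyeTarget = viewerPos - newDir * followDistance;
        eyeTarget.y += followHeight;               // raise the eye above the character
        eyePos += (eyeTarget - eyePos) * (smoothing * elapsedTime); // eased follow
        g_Camera.SetViewParams(&eyePos, &viewerPos); // keep looking at the character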

  • Trying to get a PC to boot off a bootable SD card that is inside a USB-attached Android device

    - by Cyril
    First, I'd like to make myself look less like a madman than I may have appeared to be. I wanted to have a bootable USB stick with me at all times, but it's less convenient because it's an extra object and it's easier to lose, forget etc. Then I thought, I have an Android phone, and it has a microSD card in it; perfect. Not all PCs can boot off a card reader, but the phone itself is a card reader when attached as a disk drive. Or at least so I thought. Turns out that my netbook BIOS (tested on an Asus eeePC) refuses to see it as an external hard drive; it only recognizes it as a generic USB device and doesn't offer an option to boot from it. The device has a name like "Android phone", so it seems to me that it doesn't work as a "pure" card reader, and instead still manifests itself as a phone. Can it be somehow overridden?

  • How can I improve my isometric tile-picking algorithm?

    - by Cypher
    I've spent the last few days researching isometric tile-picking algorithms (converting screen coordinates to tile coordinates), and have obviously found a lot of the math beyond my grasp. I have come fairly close, and what I have is workable, but I would like to improve on this algorithm as it's a little off and seems to pick down and to the right of the mouse pointer. I've uploaded a video to help visualize the current implementation: http://youtu.be/EqwWcq1zuaM My isometric rendering algorithm is based on what is found at this stackoverflow question's answer, with the exception that my x and y axes are inverted (x increases down-right, while y increases up-right). Here is where I am converting from screen to tiles: // these next few lines convert the mouse pointer position from screen // coordinates to tile-grid coordinates. cameraOffset captures the current // mouse location and takes into consideration the camera's position on screen. System.Drawing.Point cameraOffset = new System.Drawing.Point( 0, 0 ); cameraOffset.X = mouseLocation.X + (int)camera.Left; cameraOffset.Y = ( mouseLocation.Y + (int)camera.Top ); // the camera-aware mouse coordinates are then further converted in an attempt // to select only the "tile" portion of the grid tiles, instead of the entire // rectangle. this algorithm gets close, but could use improvement. mouseTileLocation.X = ( cameraOffset.X + 2 * cameraOffset.Y ) / Global.TileWidth; mouseTileLocation.Y = -( ( 2 * cameraOffset.Y - cameraOffset.X ) / Global.TileWidth ); Things to make note of: mouseLocation is a System.Drawing.Point that represents the screen coordinates of the mouse pointer. cameraOffset is the screen position of the mouse pointer that includes the position of the game camera. mouseTileLocation is a System.Drawing.Point that is supposed to represent the tile coordinates of the mouse pointer. If you check out the above link to YouTube, you'll notice that the picking algorithm is off a bit. How can I improve on this?
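
    One common reformulation (a sketch, not a verified drop-in; signs may need flipping for the inverted axes described above) uses Math.Floor instead of integer division, since truncation toward zero is itself a frequent source of "picks pulled toward one corner" near tile edges and for negative coordinates:

        // Sketch: assumes TileHeight == TileWidth / 2 and cameraOffset as above.
        double halfW = Global.TileWidth / 2.0;
        double halfH = Global.TileWidth / 4.0; // i.e. TileHeight / 2
        mouseTileLocation.X = (int)Math.Floor((cameraOffset.X / halfW + cameraOffset.Y / halfH) / 2.0);
        mouseTileLocation.Y = (int)Math.Floor((cameraOffset.X / halfW - cameraOffset.Y / halfH) / 2.0);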

  • OpenGL ES object picking

    - by lacas
    I have seen a lot of OpenGL ES picking code, but nothing has worked. Can someone tell me what I am missing? My code is (from tutorials/forums): Vec3 far = Camera.getPosition(); Vec3 near = Shared.opengl().getPickingRay(ev.getX(), ev.getY(), 0); Vec3 direction = far.sub(near); direction.normalize(); Log.e("direction", direction.x+" "+direction.y+" "+direction.z); Ray mouseRay = new Ray(near, direction); for (int n=0; n<ObjectFactory.objects.size(); n++) { if (ObjectFactory.objects.get(n)!=null) { IObject obj = ObjectFactory.objects.get(n); float discriminant, b; float radius=0.1f; b = -mouseRay.getOrigin().dot(mouseRay.getDirection()); discriminant = b * b - mouseRay.getOrigin().dot(mouseRay.getOrigin()) + radius*radius; discriminant = FloatMath.sqrt(discriminant); double x1 = b - discriminant; double x2 = b + discriminant; Log.e("asd", obj.getName() + " "+discriminant+" "+x1+" "+x2); } } My camera vectors: //cam Vec3 position =new Vec3(-obj.getPosX()+x, obj.getPosZ()-0.3f, obj.getPosY()+z); Vec3 direction =new Vec3(-obj.getPosX(), obj.getPosZ(), obj.getPosY()); Vec3 up =new Vec3(0.0f, -1.0f, 0.0f); Camera.set(position, direction, up); And my picking code: public Vec3 getPickingRay(float mouseX, float mouseY, float mouseZ) { int[] viewport = getViewport(); float[] modelview = getModelView(); float[] projection = getProjection(); float winX, winY; float[] position = new float[4]; winX = (float)mouseX; winY = (float)Shared.screen.width - (float)mouseY; GLU.gluUnProject(winX, winY, mouseZ, modelview, 0, projection, 0, viewport, 0, position, 0); return new Vec3(position[0], position[1], position[2]); } My camera is moving all the time in 3D space, and my actors/models are moving too. The camera follows one actor/model, and the user can move the camera on a circle around this model. How can I change the above code to get it working?
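
    One detail stands out: the quadratic term above never subtracts the object's position, so every sphere is effectively tested at the world origin. A sketch of the usual ray-sphere setup (accessor names assumed from the code above, and axis ordering may need to match your camera's convention):

        // Test the ray against a sphere centred on the object, not the origin.
        Vec3 center = new Vec3(obj.getPosX(), obj.getPosY(), obj.getPosZ());
        Vec3 oc = mouseRay.getOrigin().sub(center); // centre -> ray origin
        float b = -oc.dot(mouseRay.getDirection());
        float discriminant = b * b - oc.dot(oc) + radius * radius;
        if (discriminant >= 0) {
            // hit; nearest intersection at distance b - sqrt(discriminant)
        }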

  • Mesh with Alpha Texture doesn't blend properly

    - by faulty
    I've followed examples from various places regarding setting the OutputMerger's BlendState to enable alpha/transparent textures on meshes. The setup is as follows: var transParentOp = new BlendStateDescription { SourceBlend = BlendOption.SourceAlpha, DestinationBlend = BlendOption.InverseDestinationAlpha, BlendOperation = BlendOperation.Add, SourceAlphaBlend = BlendOption.Zero, DestinationAlphaBlend = BlendOption.Zero, AlphaBlendOperation = BlendOperation.Add, }; I've made up a sample that displays 3 meshes A, B and C, where each overlaps another. They are drawn sequentially, A to C, by distance from the camera, where A is nearest and C is furthest. So the expected output is that A is see-through, showing part of B and C, and B is see-through, showing part of C. But what I get is that none of them is see-through in that order; if I move C closer to the camera, it becomes semi-transparent and shows through to A and B, and if B is moved closer to the camera it shows A but not C. Sort of reversed. So it seems that I need to draw them in reverse order, where the mesh furthest from the camera is drawn first and the nearest is drawn last. Is it supposed to be done this way, or can I actually configure the blend state so it works no matter in which order I draw them? Thanks
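
    For comparison, a sketch of a conventional "over" blend (a common setup, not necessarily what this sample intends): even with it, alpha blending stays order-dependent, which is why engines typically draw opaque geometry first and then sort transparent meshes back-to-front each frame.

        var overBlend = new BlendStateDescription {
            SourceBlend = BlendOption.SourceAlpha,
            DestinationBlend = BlendOption.InverseSourceAlpha,
            BlendOperation = BlendOperation.Add,
            SourceAlphaBlend = BlendOption.One,
            DestinationAlphaBlend = BlendOption.Zero,
            AlphaBlendOperation = BlendOperation.Add,
        };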

  • Ubuntu 12.04 - keep getting "Server not found" for some websites

    - by android developer
    Ever since last week, I've noticed that many websites cannot be accessed, and it doesn't matter if I use Firefox or Chromium as a web browser. An example of such a website is: http://tutorials-android.blogspot.co.il/2011/05/layout-animation-in-android.html All I get is a "Server not found" error page. Sometimes after a few refreshes it works just fine. I've checked on a Windows machine that is connected to the exact same LAN, and the website is shown just fine. I've also checked the /etc/hosts file and it doesn't contain anything suspicious. What is going on? How can I fix it?
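
    One quick way to test whether the resolver is at fault (a sketch; nslookup ships in the dnsutils package): run the lookups below a few times, since the failures are intermittent. If the default resolver fails where 8.8.8.8 succeeds, pointing the connection at a different DNS server is the likely fix.

        nslookup tutorials-android.blogspot.co.il          # via the default resolver
        nslookup tutorials-android.blogspot.co.il 8.8.8.8  # via Google's public DNS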

  • Google I/O Sandbox Case Study: VectorUnit

    We interviewed VectorUnit at the Google I/O Sandbox on May 11, 2011, and they explained the benefits of building for the Android platform. VectorUnit creates console-quality video games for Android. For more information on Android developers, visit developers.android.com; for more information on VectorUnit, visit vectorunit.com. (Video by GoogleDevelopers, 01:33.)

  • How do I transfer videos from DV camera to divx?

    - by Ward
    I have a video camera that uses MiniDV tapes. In the past, I've transferred the files and made DVDs, but that was time- and disk-space-consuming. I wanted to find new tools and figure out how to convert the videos to something smaller like DivX, but I didn't know enough about all the different formats to answer a previous question. Well, now I've done a bunch of research and I understand some of the details of video encoding, and in the process I wrote up some notes on the different formats involved in going from a DV camcorder to DivX or H.264. They're a bit rambling, but in case they're of any use, I'm going to post them as an answer. I'd be very interested in anyone else's answer as well.
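
    For what it's worth, one possible capture-and-transcode pipeline (a sketch, not the notes mentioned above; assumes a FireWire connection and that dvgrab and ffmpeg are installed):

        dvgrab --format raw tape-        # capture from the camcorder into tape-001.dv, tape-002.dv, ...
        ffmpeg -i tape-001.dv -c:v libx264 -crf 20 -c:a aac output.mp4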

  • What's wrong with my lookAt and move-forward code?

    - by alaslipknot
    So I am still in the process of getting familiar with libGdx, and one of the fun things I love to do is to make basic methods for reuse in future projects. For now I am stuck on getting a Sprite to rotate toward a target (Vector2) and then move forward based on that rotation. The code I am using is this: // set angle public void lookAt(Vector2 target) { float angle = (float) Math.atan2(target.y - this.position.y, target.x - this.position.x); angle = (float) (angle * (180 / Math.PI)); setAngle(angle); } // move forward public void moveForward() { this.position.x += Math.cos(getAngle())*this.speed; this.position.y += Math.sin(getAngle())*this.speed; } And this is my render method: @Override public void render(float delta) { // TODO Auto-generated method stub Gdx.gl.glClearColor(0, 0, 0.0f, 1); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); // groupUpdate(); Vector3 mousePos = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0); camera.unproject(mousePos); ball.lookAt(new Vector2(mousePos.x, mousePos.y)); // if (Gdx.input.isTouched()) { ball.moveForward(); } batch.begin(); batch.draw(ball.getSprite(), ball.getPos().x, ball.getPos().y, ball .getSprite().getOriginX(), ball.getSprite().getOriginY(), ball .getSprite().getWidth(), ball.getSprite().getHeight(), .5f, .5f, ball.getAngle()); batch.end(); } The goal is to make the ball always look at the mouse cursor, and then move forward when I click. I am also using this camera: // create the camera and the SpriteBatch camera = new OrthographicCamera(); camera.setToOrtho(false, 800, 480); And the result was so creepy, lol. Thank you
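
    One likely culprit (an observation from the code as posted, not a confirmed diagnosis): lookAt() stores the angle in degrees, but Math.cos and Math.sin expect radians. A sketch of the converted version:

        // move forward; getAngle() returns degrees, so convert before cos/sin
        public void moveForward() {
            float rad = (float) Math.toRadians(getAngle());
            this.position.x += Math.cos(rad) * this.speed;
            this.position.y += Math.sin(rad) * this.speed;
        }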

  • C++ OpenGL trouble trapping cursor in window

    - by ezio160324
    I am using OpenGL and I am trying to trap my cursor inside my game window (using both SetCursorPos and ClipCursor). But these conflict with my camera rotation code, as my camera is rotated with the mouse. If there is a way to do it, please let me know. If possible, I would be willing to make it so that when the cursor reaches an edge of the screen, it jumps to the opposite edge (though I fear that would also conflict with my camera code).
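
    A common alternative (a sketch using plain Win32 calls; centerX, centerY, sensitivity, yaw and pitch are assumed variables): skip clipping entirely, read the cursor's offset from the window centre each frame, feed that delta to the camera, and re-centre the cursor. The cursor then never reaches an edge, which also gives the wrap-around feel.

        POINT p;
        GetCursorPos(&p);
        yaw   += (p.x - centerX) * sensitivity; // accumulate rotation from the delta
        pitch += (p.y - centerY) * sensitivity;
        SetCursorPos(centerX, centerY);         // snap back to the window centre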

  • Unable to capture picture using camera in J2ME Polish

    - by SIVAKUMAR.J
    I'm developing a mobile app in J2ME. Now I'm converting it to J2ME Polish. In my app I capture a picture using the camera in the mobile phone. It works fine in J2ME, but it does not work in J2ME Polish, and I cannot resolve it. The code snippet is given below: public class VideoCanvas extends Canvas { // private VideoMIDlet midlet; // Form frm Form frm=null; public VideoCanvas(VideoControl videoControl) { int width = getWidth(); int height = getHeight(); // this.midlet = midlet; //videoControl.initDisplayMode(VideoControl.USE_DIRECT_VIDEO, this); //Canvas canvas = StyleSheet.currentScreen; //canvas = MasterCanvas.instance; videoControl.initDisplayMode( VideoControl.USE_DIRECT_VIDEO,this); try { videoControl.setDisplayLocation(2, 2); videoControl.setDisplaySize(width - 4, height - 4); } catch (MediaException me) {} videoControl.setVisible(true); } public VideoCanvas(VideoControl videoControl,Form ff) { frm=ff; int width = getWidth(); int height = getHeight(); // this.midlet = midlet; Ticker ticker=new Ticker("B4 video controll init"); frm.setTicker(ticker); //Canvas canvas = StyleSheet.currentScreen; videoControl.initDisplayMode(VideoControl.USE_DIRECT_VIDEO,this); ticker=new Ticker("after video controll init"); frm.setTicker(ticker); try { videoControl.setDisplayLocation(2, 2); videoControl.setDisplaySize(width - 4, height - 4); } catch (MediaException me) {} videoControl.setVisible(true); ticker=new Ticker("Device not supported"); frm.setTicker(ticker); } public void paint(Graphics g) { int width = getWidth(); int height = getHeight(); g.setColor(0x00ff00); g.drawRect(0, 0, width - 1, height - 1); g.drawRect(1, 1, width - 3, height - 3); } } In plain J2ME the above code works correctly. But in J2ME Polish, in videoControl.initDisplayMode(VideoControl.USE_DIRECT_VIDEO, this), this refers to VideoCanvas (which extends javax.microedition.lcdui.Canvas), and it throws "IllegalArgumentException - container should be canvas". How do I solve this issue?

  • Ogre material scripts; how do I give a technique multiple lod_indexes?

    - by BlueNovember
    I have an Ogre material script that defines 4 rendering techniques: 1 using GLSL shaders, then 3 others that just use textures of different resolutions. I want to use the GLSL shader unconditionally if the graphics card supports it, and the other 3 textures depending on camera distance. At the moment my script is: material foo { lod_distances 1600 2000 technique shaders { lod_index 0 lod_index 1 lod_index 2 //various passes here } technique high_res { lod_index 0 //various passes here } technique medium_res { lod_index 1 //various passes here } technique low_res { lod_index 2 //various passes here } } Extra information: the Ogre manual says "Increasing indexes denote lower levels of detail" and "You can (and often will) assign more than one technique to the same LOD index; what this means is that OGRE will pick the best technique of the ones listed at the same LOD index. OGRE determines which one is 'best' by which one is listed first." Currently, on a machine supporting the GLSL version I am using, the script behaves as follows: Camera > 2000: shader technique; 1600 < Camera <= 2000: medium; Camera <= 1600: high. If I change the lod order in the shaders technique to { lod_index 2 lod_index 1 lod_index 0 } the behaviour becomes: Camera > 2000: low; 1600 < Camera <= 2000: medium; Camera <= 1600: shader, implying only the last lod_index is used. If I change it to lod_index 0 1 2 it shouts at me: Compiler error: fewer parameters expected in foo.material(#): lod_index only supports 1 argument. So how do I specify a technique to have 3 lod_indexes? Duplication works: technique shaders { lod_index 0 //various passes here } technique shaders1 { lod_index 1 //passes repeated here } technique shaders2 { lod_index 2 //passes repeated here } ...but it's ugly.

  • How do I show only the UIImagePickerController camera view in landscape while the whole application stays in portrait?

    - by Wolvorin
    I have tried almost all the answers provided by Google and SO during the last two days, but no luck :( What I want is for my whole application to be in portrait mode only, and that works fine on iOS 6+ (the only version I need to support right now). The problem is that I need to launch UIImagePickerController with the camera image source type in landscape mode only. What I have tried so far: (1) I created a category on UIImagePickerController for orientation: -(BOOL)shouldAutorotate { return NO; } -(NSUInteger)supportedInterfaceOrientations { return UIInterfaceOrientationMaskLandscape; } - (UIInterfaceOrientation)preferredInterfaceOrientationForPresentation { return UIInterfaceOrientationLandscapeLeft; } But the camera view is not properly aligned. It just follows the orientation of the device at some +/- 90-degree angle, which is not what I need. Even the camera-control buttons shown by the camera view follow it, i.e. the view is rotated 90 degrees anticlockwise and stays that way. Is there any way to use the camera with proper alignment, or do I have to use another framework? Please help me; I have been stuck on this for the last two days.

  • Launching the emulator, I only see a black window with the string 'ANDROID', and no desktop or applications

    - by Jiawelin
    Dear experts, I downloaded the Android code and ran "make sdk" to build my own SDK, but the emulator from this SDK does not work well -- it only shows a black window with the "ANDROID" string, and I can't see any desktop picture or any applications. What's wrong here? Could anyone please provide a hint? Thanks a lot. I use the command: $ ./emulator @jiawelin -debug all and the last output message is: emulator: android_hw_control_init: hw-control qemud handler initialized

  • Are there any Microsoft Exchange Clients for iOS and Android that store their local data in an encrypted manner?

    - by Zac B
    I don't feel like this is a product recommendation question, more of a "does this tech even exist and is it feasible" question, but if I'm wrong, feel free to give this question the boot. Context: Our company has a bunch of traveling employees who access the company's Exchange server via their iDevices or Android phones, but because of the data protection laws in the state where our company is based (and the nature of the data our company works with), a recent security audit found that all mobile devices (laptops, phones, etc.) operated by our company need to have all company correspondence and related data encrypted all the time. For laptops, that was easy: BitLocker or TrueCrypt, problem solved. For phones and tablets, however, I'm stumped. Sure, you can put lock screens/passwords on the phones, but the data is still accessible via external extraction, as law enforcement authorities already know. Question: Are there any clients for Microsoft Exchange that run on iOS or Android which store local data in an encrypted manner? The people using our mobile devices do a lot of their work while offline, so just giving them OWA access with SSL connection security isn't enough. Are there apps/technologies that present an additional login credential prompt to decrypt locally stored data in the app's storage area on the phone? My gut reaction when I started looking into this was "that doesn't sound like something Apple would allow into the App Store", but I've been wrong before...

  • New WebKit tests

    I have updated the WebKit comparison table with data from Safari 5, Chrome 5, and Android 2.1. Improvements throughout! The top five WebKit browsers according to these tests are now: Chrome 5, Safari 5, Safari 4, Samsung WebKit (on bada), Android 2.1. Interesting findings: Chrome and Android now support localStorage (Safari already did). Chrome and Android now support geolocation. Safari does in theory, but it doesn’t give the actual coordinates, making the whole exercise a bit pointless. Chrome and Android...

  • DirectShow: Video-Preview and Image (with working code)

    - by xsl
    Questions / Issues: If someone can recommend a good free hosting site, I can provide the whole project file. As mentioned in the text below, the TakePicture() method is not working properly on the HTC HD2 device. It would be nice if someone could look at the code below and tell me whether what I'm doing is right or wrong. Introduction: I recently asked a question about displaying a video preview, taking a camera image and rotating a video stream with DirectShow. The tricky thing about the topic is that good examples are very hard to find, and the documentation and the framework itself are very hard to understand for someone who is new to Windows programming and C++ in general. Nevertheless I managed to create a class that implements most of these features and probably works with most mobile devices ("probably" because the DirectShow implementation depends a lot on the device itself). I could only test it with the HTC HD and HTC HD2, which are known to be quite incompatible. HTC HD Working: video preview, writing photo to file. Not working: set video resolution (CRASH), set photo resolution (LOW quality). HTC HD2 Working: set video resolution, set photo resolution. Problematic: video preview rotated. Not working: writing photo to file. To make it easier for others by providing a working example, I decided to share everything I have got so far below. I removed all of the error handling for the sake of simplicity. As far as documentation goes, I recommend reading the MSDN documentation; after that the code below is pretty straightforward. void Camera::Init() { CreateComObjects(); _captureGraphBuilder->SetFiltergraph(_filterGraph); InitializeVideoFilter(); InitializeStillImageFilter(); } Display a video preview (working with any tested handheld): void Camera::DisplayVideoPreview(HWND windowHandle) { IVideoWindow *_vidWin; _filterGraph->QueryInterface(IID_IMediaControl,(void **) &_mediaControl); _filterGraph->QueryInterface(IID_IVideoWindow, (void **) &_vidWin); _videoCaptureFilter->QueryInterface(IID_IAMVideoControl, (void**) &_videoControl); _captureGraphBuilder->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video, _videoCaptureFilter, NULL, NULL); CRect rect; long width, height; GetClientRect(windowHandle, &rect); _vidWin->put_Owner((OAHWND)windowHandle); _vidWin->put_WindowStyle(WS_CHILD | WS_CLIPSIBLINGS); _vidWin->get_Width(&width); _vidWin->get_Height(&height); height = rect.Height(); _vidWin->put_Height(height); _vidWin->put_Width(rect.Width()); _vidWin->SetWindowPosition(0,0, rect.Width(), height); _mediaControl->Run(); } HTC HD2: if SetPhotoResolution() is called, FindPin will return E_FAIL; if not, it will create a file full of null bytes. HTC HD: works. void Camera::TakePicture(WCHAR *fileName) { CComPtr<IFileSinkFilter> fileSink; CComPtr<IPin> stillPin; CComPtr<IUnknown> unknownCaptureFilter; CComPtr<IAMVideoControl> videoControl; _imageSinkFilter.QueryInterface(&fileSink); fileSink->SetFileName(fileName, NULL); _videoCaptureFilter.QueryInterface(&unknownCaptureFilter); _captureGraphBuilder->FindPin(unknownCaptureFilter, PINDIR_OUTPUT, &PIN_CATEGORY_STILL, &MEDIATYPE_Video, FALSE, 0, &stillPin); _videoCaptureFilter.QueryInterface(&videoControl); videoControl->SetMode(stillPin, VideoControlFlag_Trigger); } Set resolution: works great on HTC HD2. HTC HD won't allow SetVideoResolution() and only offers one low-resolution photo setting: void Camera::SetVideoResolution(int width, int height) { SetResolution(true, width, height); } void Camera::SetPhotoResolution(int width, int height) { SetResolution(false, width, height); } void Camera::SetResolution(bool video, int width, int height) { IAMStreamConfig *config; config = NULL; if (video) { _captureGraphBuilder->FindInterface(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video, _videoCaptureFilter, IID_IAMStreamConfig, (void**) &config); } else { _captureGraphBuilder->FindInterface(&PIN_CATEGORY_STILL, &MEDIATYPE_Video, _videoCaptureFilter, IID_IAMStreamConfig, (void**) &config); } int resolutions, size; VIDEO_STREAM_CONFIG_CAPS caps; config->GetNumberOfCapabilities(&resolutions, &size); for (int i = 0; i < resolutions; i++) { AM_MEDIA_TYPE *mediaType; if (config->GetStreamCaps(i, &mediaType, reinterpret_cast<BYTE*>(&caps)) == S_OK ) { int maxWidth = caps.MaxOutputSize.cx; int maxHeigth = caps.MaxOutputSize.cy; if(maxWidth == width && maxHeigth == height) { VIDEOINFOHEADER *info = reinterpret_cast<VIDEOINFOHEADER*>(mediaType->pbFormat); info->bmiHeader.biWidth = maxWidth; info->bmiHeader.biHeight = maxHeigth; info->bmiHeader.biSizeImage = DIBSIZE(info->bmiHeader); config->SetFormat(mediaType); DeleteMediaType(mediaType); break; } DeleteMediaType(mediaType); } } } Other methods used to build the filter graph and create the COM objects: void Camera::CreateComObjects() { CoInitialize(NULL); CoCreateInstance(CLSID_CaptureGraphBuilder, NULL, CLSCTX_INPROC_SERVER, IID_ICaptureGraphBuilder2, (void **) &_captureGraphBuilder); CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER, IID_IGraphBuilder, (void **) &_filterGraph); CoCreateInstance(CLSID_VideoCapture, NULL, CLSCTX_INPROC, IID_IBaseFilter, (void**) &_videoCaptureFilter); CoCreateInstance(CLSID_IMGSinkFilter, NULL, CLSCTX_INPROC, IID_IBaseFilter, (void**) &_imageSinkFilter); } void Camera::InitializeVideoFilter() { _videoCaptureFilter->QueryInterface(&_propertyBag); wchar_t deviceName[MAX_PATH] = L"\0"; GetDeviceName(deviceName); CComVariant comName = deviceName; CPropertyBag propertyBag; propertyBag.Write(L"VCapName", &comName); _propertyBag->Load(&propertyBag, NULL); _filterGraph->AddFilter(_videoCaptureFilter, L"Video Capture Filter Source"); } void Camera::InitializeStillImageFilter() { _filterGraph->AddFilter(_imageSinkFilter, L"Still image filter"); _captureGraphBuilder->RenderStream(&PIN_CATEGORY_STILL, &MEDIATYPE_Video, _videoCaptureFilter, NULL, _imageSinkFilter); } void Camera::GetDeviceName(WCHAR *deviceName) { HRESULT hr = S_OK; HANDLE handle = NULL; DEVMGR_DEVICE_INFORMATION di; GUID guidCamera = { 0xCB998A05, 0x122C, 0x4166, 0x84, 0x6A, 0x93, 0x3E, 0x4D, 0x7E, 0x3C, 0x86 }; di.dwSize = sizeof(di); handle = FindFirstDevice(DeviceSearchByGuid, &guidCamera, &di); StringCchCopy(deviceName, MAX_PATH, di.szLegacyName); } Full header file: #ifndef __CAMERA_H__ #define __CAMERA_H__ class Camera { public: void Init(); void DisplayVideoPreview(HWND windowHandle); void TakePicture(WCHAR *fileName); void SetVideoResolution(int width, int height); void SetPhotoResolution(int width, int height); private: CComPtr<ICaptureGraphBuilder2> _captureGraphBuilder; CComPtr<IGraphBuilder> _filterGraph; CComPtr<IBaseFilter> _videoCaptureFilter; CComPtr<IPersistPropertyBag> _propertyBag; CComPtr<IMediaControl> _mediaControl; CComPtr<IAMVideoControl> _videoControl; CComPtr<IBaseFilter> _imageSinkFilter; void GetDeviceName(WCHAR *deviceName); void InitializeVideoFilter(); void InitializeStillImageFilter(); void CreateComObjects(); void SetResolution(bool video, int width, int height); }; #endif

  • Castle ActiveRecord - schema generation without enforcing referential integrity?

    - by Simon
    Hi all, I've just started playing with Castle ActiveRecord as it seems like a gentle way into NHibernate. I really like the idea of the database schema being generated from my classes during development. I want to do something similar to the following: [ActiveRecord] public class Camera : ActiveRecordBase<Camera> { [PrimaryKey] public int CameraId {get; set;} [Property] public int CamKitId {get; set;} [Property] public string serialNo {get; set;} } [ActiveRecord] public class Tripod : ActiveRecordBase<Tripod> { [PrimaryKey] public int TripodId {get; set;} [Property] public int CamKitId {get; set;} [Property] public string serialNo {get; set;} } [ActiveRecord] public class CameraKit : ActiveRecordBase<CameraKit> { [PrimaryKey] public int CamKitId {get; set;} [Property] public string description {get; set;} [HasMany(Inverse=true, Table="Cameras", ColumnKey="CamKitId")] public IList<Camera> Cameras {get; set;} [HasMany(Inverse=true, Table="Tripods", ColumnKey="CamKitId")] public IList<Tripod> Tripods {get; set;} } A CameraKit should contain any number of tripods and cameras. Camera kits exist independently of cameras and tripods, but are sometimes related. The problem is, if I use CreateSchema, this will put foreign key constraints on the Camera and Tripod tables. I don't want this; I want to be able to set CamKitId to null on the tripod and camera tables to indicate that it is not part of a CameraKit. Is there a way to tell ActiveRecord/NHibernate to still see it as related, without enforcing the integrity? I was thinking I could have a CameraKit record in there to indicate "no camera kit", but it seems like overkill. Or is my schema wrong? Am I doing something I shouldn't with an ORM? (I've not really used ORMs much.) Thanks!

  • Replace text in XSL using wildcards

    - by JosephThomas
    This is similar to an earlier problem I was having, which you guys solved in less than a day. I am working with XML files that are generated by a digital video camera. The camera allows the user to save all of the camera's settings to an SD card so that the settings can be recalled or loaded into another camera. The XSL stylesheet I am writing will allow users to view the camera's settings, as saved to the SD card, in a web browser. While most of the values in the XML file -- as formatted by my stylesheet -- make sense to humans, some do not. What I would like to do is have the stylesheet display text that is based on the value in the XML file but more easily understood by humans. A typical value that can be written to the XML file is "_23_970", which represents the camera's frame rate. This would be better displayed as 23.970 (or 023.970). The first underscore is a sort of placeholder to make a space for values over 099.999. The second underscore, obviously, represents the decimal. My previous (similar) question involved replacing predictable text, and the solution was matching templates. In this case, however, the camera can be set at any one of 119,999 frame rates (I think I did that math correctly). The approach, I would guess, is to pass a value to the displayed web page that keeps the numeric values (each digit), replaces the second underscore with a decimal, and replaces the first underscore with either an nbsp or a zero (whichever is easier). If the first character in the string is a "1" (the camera can run at frame rates up to 120.000) then the one should be passed on to the page displayed by the stylesheet. I have read other posts here regarding wildcards, but couldn't find one that answered this question. EDIT: Sorry for leaving out important info. I fared better on my first try at asking a question! I guess I got complacent. Anyhow... I should have shown you the code that displays the text in the XSL file as is: <tr> <xsl:for-each select="Settings/Groups/Recording"> <tr><td class="title_column">Frame Rate</td><td><xsl:value-of select="RecOutLinkSpeed"/></td></tr> </xsl:for-each> </tr> I should also have given you the URL for the sample file I have been working with: http://josephthomas.info/Alexa/Setup_120511_140322.xml
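
    One hedged sketch of that transformation in XSLT 1.0, assuming the value is always seven characters like "_23_970" or "120_000" (a leading underscore or digit, two digits, an underscore, three digits):

        <!-- pad a leading underscore to zero, drop the second underscore
             and insert the decimal point in its place -->
        <td>
          <xsl:variable name="v" select="RecOutLinkSpeed"/>
          <xsl:value-of select="concat(translate(substring($v, 1, 1), '_', '0'), substring($v, 2, 2), '.', substring($v, 5))"/>
        </td>

    With that layout, "_23_970" becomes "023.970" and "120_000" becomes "120.000".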

  • Upload picture directly to the server

    - by Rajeev
    In the following link http://www.tuttoaster.com/create-a-camera-application-in-flash-using-actionscript-3/ how do I make the picture upload directly to the server after taking a picture from the webcam? package { import flash.display.Sprite; import flash.media.Camera; import flash.media.Video; import flash.display.BitmapData; import flash.display.Bitmap; import flash.events.MouseEvent; import flash.net.FileReference; import flash.utils.ByteArray; import com.adobe.images.JPGEncoder; public class caml extends Sprite { private var camera:Camera = Camera.getCamera(); private var video:Video = new Video(); private var bmd:BitmapData = new BitmapData(320,240); private var bmp:Bitmap; private var fileReference:FileReference = new FileReference(); private var byteArray:ByteArray; private var jpg:JPGEncoder = new JPGEncoder(); public function caml() { saveButton.visible = false; discardButton.visible = false; saveButton.addEventListener(MouseEvent.MOUSE_UP, saveImage); discardButton.addEventListener(MouseEvent.MOUSE_UP, discard); capture.addEventListener(MouseEvent.MOUSE_UP, captureImage); if (camera != null) { video.smoothing = true; video.attachCamera(camera); video.x = 140; video.y = 40; addChild(video); } else { trace("No Camera Detected"); } } private function captureImage(e:MouseEvent):void { bmd.draw(video); bmp = new Bitmap(bmd); bmp.x = 140; bmp.y = 40; addChild(bmp); capture.visible = false; saveButton.visible = true; discardButton.visible = true; } private function saveImage(e:MouseEvent):void { byteArray = jpg.encode(bmd); fileReference.save(byteArray, "Image.jpg"); removeChild(bmp); saveButton.visible = false; discardButton.visible = false; capture.visible = true; } private function discard(e:MouseEvent):void { removeChild(bmp); saveButton.visible = false; discardButton.visible = false; capture.visible = true; } } }
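
    One minimal sketch of an upload handler (the URL is hypothetical, and flash.net.URLRequest, flash.net.URLRequestMethod and flash.net.URLLoader would need to be imported): replace the fileReference.save() call with a POST whose body is the raw JPEG bytes, and have a server-side script read the request body.

        private function uploadImage(e:MouseEvent):void {
            byteArray = jpg.encode(bmd);
            var request:URLRequest = new URLRequest("http://example.com/upload.php"); // hypothetical endpoint
            request.method = URLRequestMethod.POST;
            request.contentType = "application/octet-stream";
            request.data = byteArray; // raw JPEG bytes as the request body
            var loader:URLLoader = new URLLoader();
            loader.load(request);
        }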

  • Android Hero denies XMLSocket or Socket connection? Errors #2031, #2048

    - by claudi-ursica
    Hi, I am trying to adapt an existing Flash web chat application for Android mobile phones and I am having a really annoying issue. The server is a custom solution and can send back either binary messages or XML, so I can use either the XMLSocket class or the Socket class to get data from the server. Everything works fine when deployed and I connect from the desktop, but when I try it from the Android mobile I get the infamous error #2031, followed by #2048. Now the crossdomain.xml file is rock solid and works well for desktop. When the connect-socket method runs, I see that the server replies with the crossdomain file, but I get the error when running on the mobile. Has anyone bumped into this? Is there some limitation on the mobile phone's part? I wasn't able to find anything relevant for this issue, in terms of the phone not allowing Socket or XMLSocket connections. The phones (Motorola and HTC) run Android 2.1 and report Flash Lite version FL10,1,123,358. The issue can also be reproduced on the HTC Desire. Any input on this would be highly appreciated... Thanks, Claudiu
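
    One avenue worth checking (an assumption, not a confirmed fix): binary and XML sockets are authorized by a socket policy file served on port 843, not by the HTTP crossdomain.xml, and some players only fetch it when asked explicitly. A sketch with a hypothetical host and ports:

        // adjust host/ports to the chat server's real values
        Security.loadPolicyFile("xmlsocket://chat.example.com:843");
        socket.connect("chat.example.com", 5555);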

  • GLSL: Strange light reflections [Solved]

    - by Tom
    According to this tutorial I'm trying to do normal mapping using GLSL, but something is wrong and I can't find the solution. The output render is in this image: Image1. In this image is a plane with two triangles, each of which is illuminated differently (that is bad). The plane has 6 vertices. In the upper left side of this plane are 2 identical vertices (same in the lower right). Here are some vectors that are the same for each vertex: normal vector = 0, 1, 0 (red lines on image); tangent vector = 0, 0, -1 (green lines on image); bitangent vector = -1, 0, 0 (blue lines on image). Here I have one question: do the two identical vertices need to have the same tangent and bitangent? I have tried other values for the tangents but the effect was still similar. Here are my shaders. Vertex shader: #version 130 // Input vertex data, different for all executions of this shader. in vec3 vertexPosition_modelspace; in vec2 vertexUV; in vec3 vertexNormal_modelspace; in vec3 vertexTangent_modelspace; in vec3 vertexBitangent_modelspace; // Output data ; will be interpolated for each fragment. out vec2 UV; out vec3 Position_worldspace; out vec3 EyeDirection_cameraspace; out vec3 LightDirection_cameraspace; out vec3 LightDirection_tangentspace; out vec3 EyeDirection_tangentspace; // Values that stay constant for the whole mesh. uniform mat4 MVP; uniform mat4 V; uniform mat4 M; uniform mat3 MV3x3; uniform vec3 LightPosition_worldspace; void main(){ // Output position of the vertex, in clip space : MVP * position gl_Position = MVP * vec4(vertexPosition_modelspace,1); // Position of the vertex, in worldspace : M * position Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz; // Vector that goes from the vertex to the camera, in camera space. // In camera space, the camera is at the origin (0,0,0). vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz; EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace; // Vector that goes from the vertex to the light, in camera space. M is omitted because it's identity. vec3 LightPosition_cameraspace = ( V * vec4(LightPosition_worldspace,1)).xyz; LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace; // UV of the vertex. No special space for this one. UV = vertexUV; // model to camera = ModelView vec3 vertexTangent_cameraspace = MV3x3 * vertexTangent_modelspace; vec3 vertexBitangent_cameraspace = MV3x3 * vertexBitangent_modelspace; vec3 vertexNormal_cameraspace = MV3x3 * vertexNormal_modelspace; mat3 TBN = transpose(mat3( vertexTangent_cameraspace, vertexBitangent_cameraspace, vertexNormal_cameraspace )); // You can use dot products instead of building this matrix and transposing it. See References for details. LightDirection_tangentspace = TBN * LightDirection_cameraspace; EyeDirection_tangentspace = TBN * EyeDirection_cameraspace; } Fragment shader: #version 130 // Interpolated values from the vertex shaders in vec2 UV; in vec3 Position_worldspace; in vec3 EyeDirection_cameraspace; in vec3 LightDirection_cameraspace; in vec3 LightDirection_tangentspace; in vec3 EyeDirection_tangentspace; // Output data out vec3 color; // Values that stay constant for the whole mesh.
    uniform sampler2D DiffuseTextureSampler; uniform sampler2D NormalTextureSampler; uniform sampler2D SpecularTextureSampler; uniform mat4 V; uniform mat4 M; uniform mat3 MV3x3; uniform vec3 LightPosition_worldspace; void main(){ // Light emission properties // You probably want to put them as uniforms vec3 LightColor = vec3(1,1,1); float LightPower = 40.0; // Material properties vec3 MaterialDiffuseColor = texture2D( DiffuseTextureSampler, vec2(UV.x,-UV.y) ).rgb; vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor; //vec3 MaterialSpecularColor = texture2D( SpecularTextureSampler, UV ).rgb * 0.3; vec3 MaterialSpecularColor = vec3(0.5,0.5,0.5); // Local normal, in tangent space. V tex coordinate is inverted because normal map is in TGA (not in DDS) for better quality vec3 TextureNormal_tangentspace = normalize(texture2D( NormalTextureSampler, vec2(UV.x,-UV.y) ).rgb*2.0 - 1.0); // Distance to the light float distance = length( LightPosition_worldspace - Position_worldspace ); // Normal of the computed fragment, in camera space vec3 n = TextureNormal_tangentspace; // Direction of the light (from the fragment to the light) vec3 l = normalize(LightDirection_tangentspace); // Cosine of the angle between the normal and the light direction, // clamped above 0 // - light is at the vertical of the triangle -> 1 // - light is perpendicular to the triangle -> 0 // - light is behind the triangle -> 0 float cosTheta = clamp( dot( n,l ), 0,1 ); // Eye vector (towards the camera) vec3 E = normalize(EyeDirection_tangentspace); // Direction in which the triangle reflects the light vec3 R = reflect(-l,n); // Cosine of the angle between the Eye vector and the Reflect vector, // clamped to 0 // - Looking into the reflection -> 1 // - Looking elsewhere -> < 1 float cosAlpha = clamp( dot( E,R ), 0,1 ); color = // Ambient : simulates indirect lighting MaterialAmbientColor + // Diffuse : "color" of the object MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) + // Specular : reflective highlight, like a mirror MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance); //color.xyz = E; //color.xyz = LightDirection_tangentspace; //color.xyz = EyeDirection_tangentspace; } I have replaced the original color value by the EyeDirection_tangentspace vector and then I got another strange effect, but I cannot link the image (not enough reputation). Is it possible that something is wrong with these shaders, or maybe somewhere else in my code, e.g. with my matrices? SOLVED: three days were needed to change one character, from this: glBindBuffer(GL_ARRAY_BUFFER, vbo); glVertexAttribPointer ( 4, // attribute 3, // size GL_FLOAT, // type GL_FALSE, // normalized? sizeof(VboVertex), // stride (void*)(12*sizeof(float)) // array buffer offset ); to this: glBindBuffer(GL_ARRAY_BUFFER, vbo); glVertexAttribPointer ( 4, // attribute 3, // size GL_FLOAT, // type GL_FALSE, // normalized? sizeof(VboVertex), // stride (void*)(11*sizeof(float)) // array buffer offset ); See the difference? :)

  • How do I draw an OpenGL point sprite using libgdx for Android?

    - by nbolton
    Here are a few snippets of what I have so far... void create() { renderer = new ImmediateModeRenderer(); tiles = Gdx.graphics.newTexture( Gdx.files.getFileHandle("res/tiles2.png", FileType.Internal), TextureFilter.MipMap, TextureFilter.Linear, TextureWrap.ClampToEdge, TextureWrap.ClampToEdge); } void render() { Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1); } void renderSprite() { int handle = tiles.getTextureObjectHandle(); Gdx.gl.glBindTexture(GL.GL_TEXTURE_2D, handle); Gdx.gl.glEnable(GL.GL_POINT_SPRITE); Gdx.gl11.glTexEnvi(GL.GL_POINT_SPRITE, GL.GL_COORD_REPLACE, GL.GL_TRUE); renderer.begin(GL.GL_POINTS); renderer.vertex(pos.x, pos.y, pos.z); renderer.end(); } create() is called once when the program starts, and renderSprite() is called for each sprite (so pos is unique to each sprite), where the sprites are arranged in a sort of 3D cube. Unfortunately though, this just renders a few white dots... I suppose the texture isn't being bound, which is why I'm getting white dots. Also, when I draw my sprites anywhere other than on the 0 z-axis, they do not appear -- I read that I need to increase my zfar and znear, but I have no idea how to do this using libgdx (perhaps it's because I'm using ortho projection? What do I use instead?). I know that the texture is usable, since I was able to render it using a SpriteBatch, but I guess I'm not using it properly with OpenGL.
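
    A few hedged guesses, sketched below (assumed defaults, not a verified fix): texturing has to be enabled for the bound texture to show on the points, the points need a size larger than one pixel, and an orthographic camera's default depth range is narrow enough that sprites drawn at other z values clip away.

        Gdx.gl.glEnable(GL10.GL_TEXTURE_2D);   // otherwise points render untextured (white)
        Gdx.gl10.glPointSize(32f);             // assumed sprite size in pixels
        camera.near = -1000f;                  // widen the ortho depth range
        camera.far = 1000f;
        camera.update();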
