Search Results

Search found 833 results on 34 pages for 'gesture recognition'.

Page 7 of 34

  • Creative Gesture Camera in Processing

    - by user2892963
    I'm trying to use the Creative gesture camera in Processing. I started with the Intel Perceptual Computing SDK and ran into an issue: I want to get the hand openness, but no matter what, hand.openness returns 0. Everything else runs quite well. Here is some sample code I'm trying to get working; if you open your hand it starts printing to the console, and if you close it, it stops.

        import intel.pcsdk.*;

        PXCUPipeline session;
        PXCMGesture.GeoNode hand = new PXCMGesture.GeoNode();

        void setup() {
          session = new PXCUPipeline(this);
          if (!session.Init(PXCUPipeline.GESTURE))
            exit();
        }

        void draw() {
          background(0);
          if (session.AcquireFrame(false)) {
            // Fires only when the primary hand is open
            if (session.QueryGeoNode(PXCMGesture.GeoNode.LABEL_BODY_HAND_PRIMARY | PXCMGesture.GeoNode.LABEL_OPEN, hand)) {
              rect(0, 0, 10, 10);
              println(hand.openness + " : " + frameCount); // openness should be from 0 to 100
            }
            session.ReleaseFrame();
          }
        }

    I'm using the current version of Processing (2.0.3) and Perceptual Computing SDK version 7383.
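
    One way to narrow this down, sketched below using only the SDK calls already shown above (so the behavior of those calls is taken on faith from the question): query the primary hand without OR-ing in LABEL_OPEN, so openness is printed for closed and half-open hands as well.

        // Variant of draw() for debugging; session and hand as declared above.
        void draw() {
          background(0);
          if (session.AcquireFrame(false)) {
            // No LABEL_OPEN filter: report the primary hand in any pose.
            if (session.QueryGeoNode(PXCMGesture.GeoNode.LABEL_BODY_HAND_PRIMARY, hand)) {
              println("openness: " + hand.openness + " at frame " + frameCount);
            }
            session.ReleaseFrame();
          }
        }

    If openness still reads 0 for every pose, the field is likely not being populated by this SDK build at all, rather than being masked by the LABEL_OPEN filter.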

    Read the article

  • Disable Charm bar's Touchpad gesture shortcut

    - by Chin
    I'm using an Acer AO722 laptop. Every time I slide my finger from the right edge of the touchpad (where the slider is) toward the center, mostly by accident, the Charm bar pops up and stays on the screen until I manually click on some random spot. There is no such option in the Synaptics properties, nor in the mouse settings. Is there a way to turn this shortcut off?

    Read the article

  • How to have a UISwipeGestureRecognizer AND UIPanGestureRecognizer work on the same view

    - by Shizam
    How would you set up the gesture recognizers so that a UISwipeGestureRecognizer and a UIPanGestureRecognizer work on the same view, such that touching and moving quickly (a quick swipe) is detected as a swipe, but touching and then moving (a short delay between touch and move) is detected as a pan? I've tried various permutations of requireGestureRecognizerToFail:, and they didn't quite help: with the swipe gesture set to left, my pan gesture worked up, down and right, but any movement to the left was captured by the swipe recognizer.

    Read the article

  • iPhone two-finger twist gesture

    - by user267980
    Hi. I need to rotate an image when the user does a two-finger twist on it. Do you have an idea of how I can code this, or have you done it before? I think it would be a good idea to write a class that detects all the main gestures and provides them.

    Read the article

  • Speech Recognition in Windows Mobile 6.0

    - by user356689
    Thanks in advance... I need speech recognition that converts speech to text on Windows Mobile 6.0. I have already done this for a Windows desktop application using System.Speech.Recognition. Does System.Speech work on Windows Mobile? Please reply if any other solution is available.

    Read the article

  • What is an application for "web site recognition"?

    - by OSX Jedi
    This explanation isn't clear to me, so let me describe an application of web site recognition. Suppose we want to know what everyone is doing with the web at a Starbucks. We can use Wireshark or another program to sniff all the packets. By grouping all the secondary connections with the primary one, we would get a much clearer picture of each user's primary activities. Is this talking about being able to recognize which websites each of the laptops is connecting to?

    Read the article

  • What does "Value does not fall within expected range" mean in runtime error?

    - by manuel
    Hi, I'm writing a plugin (a DLL) compiled with /clr and trying to implement speech recognition using .NET. But when I run it, I get a runtime error saying "Value does not fall within expected range". What does this message mean?

        public ref class Dialog : public System::Windows::Forms::Form
        {
        public:
          SpeechRecognitionEngine^ sre;

        private:
          System::Void btnSpeak_Click(System::Object^ sender, System::EventArgs^ e)
          {
            Initialize();
          }

        protected:
          void Initialize()
          {
            // Create the recognition engine
            sre = gcnew SpeechRecognitionEngine();

            // Set our recognition engine to use the default audio device
            sre->SetInputToDefaultAudioDevice();

            // Create a new GrammarBuilder to specify which commands we want to use
            GrammarBuilder^ grammarBuilder = gcnew GrammarBuilder();

            // Append all the choices we want for commands
            grammarBuilder->Append(gcnew Choices("play", "stop"));

            // Create the Grammar from the GrammarBuilder
            Grammar^ customGrammar = gcnew Grammar(grammarBuilder);

            // Unload any grammars from the recognition engine, then load our new Grammar
            sre->UnloadAllGrammars();
            sre->LoadGrammar(customGrammar);

            // Add an event handler so we get events whenever the engine recognizes spoken commands
            sre->SpeechRecognized +=
              gcnew EventHandler<SpeechRecognizedEventArgs^>(this, &Dialog::sre_SpeechRecognized);

            // Keep the recognition engine running after recognizing a command;
            // with RecognizeMode::Single it would quit listening after the first one.
            sre->RecognizeAsync(RecognizeMode::Multiple);
            //this->init();
          }

          void sre_SpeechRecognized(Object^ sender, SpeechRecognizedEventArgs^ e)
          {
            // Simple check to see what the result of the recognition was
            if (e->Result->Text == "play")
            {
              MessageBox(plugin.hwndParent, L"play", 0, 0);
            }
            if (e->Result->Text == "stop")
            {
              MessageBox(plugin.hwndParent, L"stop", 0, 0);
            }
          }
        };

    Read the article

  • Speech recognition webservice that scores the accuracy of one audio clip vs. another?

    - by wgpubs
    Does such a thing exist? I'm building a Rails-based web application where users can upload an audio file of themselves speaking, which then needs to be compared to another audio file to determine how similar the two voices are. Ideally I'd simply get back a score of how similar they are as a percentage (e.g. 75% similar). Anyone have any ideas? Thanks

    Read the article

  • What types of grammar files are usable for spoken voice recognition?

    - by user1413199
    I'm using the System.Speech library in C# and I would like to create a smaller file to house my commands, as opposed to the default grammar. I'm not totally sure what I need. I've been looking at several different things but don't really have a clear picture yet. I've read up on ANTLR and looked at NuGram from NuEcho. I understand what a grammar file is and roughly how to create one, but I'm not sure how grammar files are used specifically for deciphering spoken words.
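
    For what it's worth, System.Speech can load W3C SRGS XML grammars from a file, in addition to grammars assembled in code with GrammarBuilder, which is one common answer to "what types of grammar files" for this library. A minimal sketch follows; the file name and rule id are invented for illustration, and in C# it would be loaded with new Grammar("commands.grxml").

        <?xml version="1.0" encoding="utf-8"?>
        <!-- commands.grxml: a hypothetical two-command vocabulary -->
        <grammar version="1.0" xml:lang="en-US" root="commands"
                 xmlns="http://www.w3.org/2001/06/grammar">
          <rule id="commands" scope="public">
            <one-of>
              <item>play</item>
              <item>stop</item>
            </one-of>
          </rule>
        </grammar>

    The engine will only match utterances the grammar allows, which is how a grammar file constrains recognition to a small command set instead of free-form dictation.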

    Read the article

  • How does USB device recognition work?

    - by GorillaSandwich
    I'm curious how USB device recognition works in Windows. I imagine it's something like this:

      1. When you plug in a device, it tells Windows "here's my device ID to tell you what I am."
      2. Windows looks to see whether any installed driver matches that device ID. The driver probably tells Windows what the device should be called, like "BlackBerry Curve" or "Canon Printer".
      3. If a driver matches, Windows somehow associates that device with that driver.
      4. Otherwise, it looks for a matching driver online (if you let it).

    Am I right? If so, that still leaves some questions. When you install drivers, where do they go? Are they files in a folder, or do they get added to the registry? What is Windows doing when it first recognizes the device, thinks, and finally says "your new device is installed and ready to use"? Where does Windows look for missing drivers? Is it in Microsoft's own database, and do device manufacturers submit drivers to Microsoft for inclusion there? Can anybody explain how this process really works? Also, do other OSes do this differently?

    Read the article

  • Webcast: Optimize Accounts Payable Through Automated Invoice Processing

    - by kellsey.ruppel(at)oracle.com
    Is your accounts payable process still very labor-intensive? Then discover how Oracle can help you eliminate paper, automate data entry and reduce costs by up to 90%, while saving valuable time through fewer errors and faster lookups. Join us on Tuesday, March 22 at 10 a.m. PT for this informative Webcast, where Jamie Rancourt and Brian Dirking will show how you can easily integrate capture, forms recognition and content management into your PeopleSoft and Oracle E-Business Suite accounts payable systems. You will also see how The Home Depot, Costco and American Express have achieved tremendous savings and productivity gains by switching to automated solutions. Learn how you can automate invoice scanning, indexing and data extraction to:

      • Improve speed and reduce errors
      • Eliminate time-consuming searches
      • Utilize vendor discounts through faster processing
      • Improve visibility and ensure compliance
      • Save costs in accounts payable and other business processes

    Register today!

    Read the article

  • Adding GestureOverlayView to my SurfaceView class, how to add to view hierarchy?

    - by Codejoy
    I was informed in a later answer that I have to add the GestureOverlayView I create in code to my view hierarchy, and I am not 100% sure how to do that. Below is the original question for completeness. I want my game to be able to recognize gestures. I have this nice SurfaceView class whose onDraw I use to draw my sprites, and a thread that runs it to call onDraw, etc. This all works great. I am trying to add the GestureOverlayView to it and it just isn't working. I finally hacked it to where it doesn't crash, but this is what I have:

        public class Panel extends SurfaceView implements SurfaceHolder.Callback, OnGesturePerformedListener {

          public Panel(Context context) {
            theContext = context;
            mLibrary = GestureLibraries.fromRawResource(context, R.raw.myspells);

            GestureOverlayView gestures = new GestureOverlayView(theContext);
            gestures.setOrientation(gestures.ORIENTATION_VERTICAL);
            gestures.setEventsInterceptionEnabled(true);
            gestures.setGestureStrokeType(gestures.GESTURE_STROKE_TYPE_MULTIPLE);
            gestures.setLayoutParams(new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT));
            //GestureOverlayView gestures = (GestureOverlayView) findViewById(R.id.gestures);
            gestures.addOnGesturePerformedListener(this);
          }

          // ... onDraw(...), surfaceCreated(...), etc. ...

          public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
            ArrayList<Prediction> predictions = mLibrary.recognize(gesture);
            // We want at least one prediction
            if (predictions.size() > 0) {
              Prediction prediction = predictions.get(0);
              // We want at least some confidence in the result
              if (prediction.score > 1.0) {
                // Show the spell
                Toast.makeText(theContext, prediction.name, Toast.LENGTH_SHORT).show();
              }
            }
          }
        }

    onGesturePerformed is never called. Their example has the GestureOverlayView in the XML; I am not using that, and my activity is simple:

        @Override
        public void onCreate(Bundle savedInstanceState) {
          super.onCreate(savedInstanceState);
          requestWindowFeature(Window.FEATURE_NO_TITLE);
          Panel p = new Panel(this);
          setContentView(p);
        }

    So I am at a bit of a loss about the missing piece of information here: onGesturePerformed is never called, and the nice pretty yellow "you are drawing a gesture" trail never shows up.
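
    A sketch of the missing wiring, using the class names from the question (the activity name and everything else here is an assumption, not the asker's code): the overlay only receives touches once it is attached to the view hierarchy, and GestureOverlayView is a FrameLayout, so one approach is to build it in the activity, make the Panel its child, and hand the overlay to setContentView.

        import android.app.Activity;
        import android.gesture.GestureOverlayView;
        import android.os.Bundle;
        import android.view.Window;

        public class GameActivity extends Activity { // hypothetical activity name

          @Override
          public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            requestWindowFeature(Window.FEATURE_NO_TITLE);

            Panel p = new Panel(this);

            // The overlay must be part of the view hierarchy to see touch events.
            // GestureOverlayView extends FrameLayout, so the game view can be its child.
            GestureOverlayView gestures = new GestureOverlayView(this);
            gestures.addView(p);
            gestures.addOnGesturePerformedListener(p); // Panel implements OnGesturePerformedListener

            // Attach the overlay (with the Panel inside it) to the window.
            setContentView(gestures);
          }
        }

    With this arrangement the Panel constructor no longer needs to create its own GestureOverlayView, since the one built here is the one actually on screen.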

    Read the article

  • A paperless office?

    - by [email protected]
    We recently organized a digitization event to show some of Oracle's latest products in this area. We always tend to think that Spain lags behind in these technologies and that the market is not ready to eliminate paper. In some cases that is true, but we have also been surprised by customers who are extremely advanced in the electronic management of paper.

    For customers who do not yet have a corporate solution deployed, our Imaging offering strikes them as complete and integrated, because it lets them digitize paper at the point closest to where it is received and then carry out the entire internal workflow digitally. This process is shown in the image accompanying the original post.

    In the financial sector especially, customers already have large infrastructures deployed (some with very sophisticated capabilities developed in-house in recent years). In these cases, their interest centers on two key capabilities of our products: distributed capture and intelligent OCR.

    When a centralized digitization infrastructure is already in place, there are several areas of improvement that can yield greater savings in paper handling. One of them is digitizing at the point of origin, which saves on the logistics of moving and storing paper (fewer courier pouches) and speeds up the start of each process (from the moment of receipt). Being able to do this with nothing but a web browser is very novel for customers. Not having to install any client software is a requirement many customers had been demanding for some time. In fact, we run live demos with the customer's own scanner (all we need is the Windows driver for it). The result is striking, because we show how a document is scanned from a plain web browser; the scanned document, with its metadata, is checked into the content repository; and its approval workflow is kicked off. Doing all this in seconds generates a lot of interest among customers looking to speed up many of their paper-based processes.

    Finally, the most novel part of the offering is intelligent OCR. Some customers already have fully operational capture infrastructures with all of the above capabilities and are looking for the next step: intelligent recognition of as much metadata as possible. The benefit is fast, clearly quantifiable and very high. The intelligent OCR software is based on fuzzy logic and lets us define validation thresholds tuned precisely to our confidence factors. That is, we configure the threshold so that when the software accepts a match we can be completely sure the metadata was recognized correctly; otherwise, the software triggers a manual validation.

    What if, for certain document types, 40%, 50%, 60% or even 70% or 80% of documents were processed 100% automatically? The savings are immense, processing time drops accordingly, and integration with existing capture infrastructures is very simple (just divert a few documents of a given type to Oracle Forms Recognition and evaluate the result). I encourage you to take a look at these products so we can make paper reduction a reality.

    Read the article

  • Pinch gesture in QLPreviewController?

    - by Arne
    I am using the QLPreviewController in my project, but cannot use pinch gestures to zoom in. What do I need to set up first? This is how I create the view controller:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
          QLPreviewController *previewer = [[QLPreviewController alloc] init];
          [previewer setDataSource:self];
          [previewer setCurrentPreviewItemIndex:indexPath.row];
          [[self navigationController] pushViewController:previewer animated:YES];
        }

    Read the article
