Search Results

Search found 833 results on 34 pages for 'gesture recognition'.


  • Android 2.1 fling gesture captured on TextView but a context menu still opens

    - by hermo
    The following problem seems unique to 2.1; it happens both on an emulator and on a Nexus One. The same example works fine on the other platforms I've tested (1.5, 1.6 and 2.0 emulators). I created a gestureListener as described in this post. The difference is that I've added the listener on a TextView which also has a context menu registered, i.e. something like the following:

        onCreate(...) {
            ...
            // Layout contains a large TextView on which I want to add a context menu
            tv = findViewById(R.id.text_view);
            registerForContextMenu(tv);
            // create the gestureListener as described in the above-mentioned post
            gestureListener = ...
            // set the listener on the text view
            tv.setOnTouchListener(gestureListener);
            ...
        }

    When testing it, the correct gesture is recognized, but every other time it also causes the context menu to open. As the same example works on non-2.1 platforms, I've got a feeling it is not my code that is the problem... Thankful for any suggestions.
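
    A minimal sketch of one possible workaround (an assumption, not code from the question; handleFling is a hypothetical helper): consume the touch events once the GestureDetector claims them, so the TextView never sees a completed long press and the context menu cannot fire.

        final GestureDetector detector = new GestureDetector(this,
                new GestureDetector.SimpleOnGestureListener() {
            @Override
            public boolean onFling(MotionEvent e1, MotionEvent e2,
                                   float velocityX, float velocityY) {
                handleFling(velocityX);  // hypothetical fling handler
                return true;
            }
        });
        tv.setOnTouchListener(new View.OnTouchListener() {
            public boolean onTouch(View v, MotionEvent event) {
                // Returning true when the detector consumes the event keeps the
                // TextView's own long-press handling from also running.
                return detector.onTouchEvent(event);
            }
        });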

    Read the article

  • What resources are there for facial recognition?

    - by Zintinio
    I'm interested in learning the theory behind facial recognition software so that I can hopefully implement it in the future. Not just face tracking, but being able to recognize individuals. What papers, books, libraries, or source code are available so that I can learn more about the subject? I have found libface, which seems to use eigenfaces for recognition. If there are any practitioners out there, please share any information that you can.
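
    The eigenfaces technique that libface reportedly uses comes down to linear algebra: flatten each face image into a vector, project it onto a small set of precomputed "eigenface" basis vectors (the principal components of a training set), and compare the resulting weight vectors. A toy sketch of just the recognition step, assuming the mean face and eigenfaces have already been computed offline by PCA:

        public class EigenfaceSketch {
            // weights[i] = how strongly the face resembles eigenface i
            static double[] project(double[] face, double[] mean, double[][] eigenfaces) {
                double[] weights = new double[eigenfaces.length];
                for (int i = 0; i < eigenfaces.length; i++) {
                    for (int p = 0; p < face.length; p++) {
                        weights[i] += (face[p] - mean[p]) * eigenfaces[i][p];
                    }
                }
                return weights;
            }

            // nearest enrolled person in eigenface space = recognized identity
            static int recognize(double[] probeWeights, double[][] knownWeights) {
                int best = -1;
                double bestDist = Double.MAX_VALUE;
                for (int k = 0; k < knownWeights.length; k++) {
                    double d = 0;
                    for (int i = 0; i < probeWeights.length; i++) {
                        double diff = probeWeights[i] - knownWeights[k][i];
                        d += diff * diff;
                    }
                    if (d < bestDist) { bestDist = d; best = k; }
                }
                return best;  // index of the best-matching enrolled face
            }
        }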

    Read the article

  • Custom Gesture in cocos2d

    - by Lewis
    I've found a little tutorial that would be useful for my game: http://blog.mellenthin.de/archives/2012/02/13/an-one-finger-rotation-gesture-recognizer/ But I can't work out how to convert that gesture to work with cocos2d. I have found examples of pre-made gestures in cocos2d, but no custom ones. Is it possible?

    EDIT: STILL HAVING PROBLEMS WITH THIS. I've added the code from Sentinel below (from SO); the Gesture and RotateGesture classes have both been added to my solution and are compiling. In the rotation class, though, I now only see selectors. How do I set those up? The custom gesture found in the project above looks like this:

    Header file for the custom gesture:

        #import <Foundation/Foundation.h>
        #import <UIKit/UIGestureRecognizerSubclass.h>

        @protocol OneFingerRotationGestureRecognizerDelegate <NSObject>
        @optional
        - (void) rotation: (CGFloat) angle;
        - (void) finalAngle: (CGFloat) angle;
        @end

        @interface OneFingerRotationGestureRecognizer : UIGestureRecognizer
        {
            CGPoint midPoint;
            CGFloat innerRadius;
            CGFloat outerRadius;
            CGFloat cumulatedAngle;
            id <OneFingerRotationGestureRecognizerDelegate> target;
        }

        - (id) initWithMidPoint: (CGPoint) midPoint
                    innerRadius: (CGFloat) innerRadius
                    outerRadius: (CGFloat) outerRadius
                         target: (id) target;

        - (void)reset;
        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
        - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;
        @end

    .m file for the custom gesture:

        #include <math.h>
        #import "OneFingerRotationGestureRecognizer.h"

        @implementation OneFingerRotationGestureRecognizer

        // private helper functions
        CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2);
        CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA,
                                           CGPoint beginLineB, CGPoint endLineB);

        - (id) initWithMidPoint: (CGPoint) _midPoint
                    innerRadius: (CGFloat) _innerRadius
                    outerRadius: (CGFloat) _outerRadius
                         target: (id <OneFingerRotationGestureRecognizerDelegate>) _target
        {
            if ((self = [super initWithTarget: _target action: nil]))
            {
                midPoint = _midPoint;
                innerRadius = _innerRadius;
                outerRadius = _outerRadius;
                target = _target;
            }
            return self;
        }

        /** Calculates the distance between point1 and point2. */
        CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2)
        {
            CGFloat dx = point1.x - point2.x;
            CGFloat dy = point1.y - point2.y;
            return sqrt(dx*dx + dy*dy);
        }

        CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA,
                                           CGPoint beginLineB, CGPoint endLineB)
        {
            CGFloat a = endLineA.x - beginLineA.x;
            CGFloat b = endLineA.y - beginLineA.y;
            CGFloat c = endLineB.x - beginLineB.x;
            CGFloat d = endLineB.y - beginLineB.y;
            CGFloat atanA = atan2(a, b);
            CGFloat atanB = atan2(c, d);
            // convert radians to degrees
            return (atanA - atanB) * 180 / M_PI;
        }

        #pragma mark - UIGestureRecognizer implementation

        - (void)reset
        {
            [super reset];
            cumulatedAngle = 0;
        }

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [super touchesBegan:touches withEvent:event];
            if ([touches count] != 1)
            {
                self.state = UIGestureRecognizerStateFailed;
                return;
            }
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [super touchesMoved:touches withEvent:event];
            if (self.state == UIGestureRecognizerStateFailed) return;
            CGPoint nowPoint  = [[touches anyObject] locationInView: self.view];
            CGPoint prevPoint = [[touches anyObject] previousLocationInView: self.view];
            // make sure the new point is within the area
            CGFloat distance = distanceBetweenPoints(midPoint, nowPoint);
            if (innerRadius <= distance && distance <= outerRadius)
            {
                // calculate rotation angle between two points
                CGFloat angle = angleBetweenLinesInDegrees(midPoint, prevPoint, midPoint, nowPoint);
                // fix value, if the 12 o'clock position is between prevPoint and nowPoint
                if (angle > 180)
                {
                    angle -= 360;
                }
                else if (angle < -180)
                {
                    angle += 360;
                }
                // sum up single steps
                cumulatedAngle += angle;
                // call delegate
                if ([target respondsToSelector: @selector(rotation:)])
                {
                    [target rotation:angle];
                }
            }
            else
            {
                // finger moved outside the area
                self.state = UIGestureRecognizerStateFailed;
            }
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [super touchesEnded:touches withEvent:event];
            if (self.state == UIGestureRecognizerStatePossible)
            {
                self.state = UIGestureRecognizerStateRecognized;
                if ([target respondsToSelector: @selector(finalAngle:)])
                {
                    [target finalAngle:cumulatedAngle];
                }
            }
            else
            {
                self.state = UIGestureRecognizerStateFailed;
            }
            cumulatedAngle = 0;
        }

        - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [super touchesCancelled:touches withEvent:event];
            self.state = UIGestureRecognizerStateFailed;
            cumulatedAngle = 0;
        }

        @end

    Then it's initialised like this:

        // calculate center and radius of the control
        CGPoint midPoint = CGPointMake(image.frame.origin.x + image.frame.size.width / 2,
                                       image.frame.origin.y + image.frame.size.height / 2);
        CGFloat outRadius = image.frame.size.width / 2;
        // outRadius / 3 is arbitrary, just choose something >> 0 to avoid strange
        // effects when touching the control near its center
        gestureRecognizer = [[OneFingerRotationGestureRecognizer alloc]
                                initWithMidPoint: midPoint
                                     innerRadius: outRadius / 3
                                     outerRadius: outRadius
                                          target: self];
        [self.view addGestureRecognizer: gestureRecognizer];

    The selector below is also in the same file as the initialisation of the gestureRecognizer:

        - (void) rotation: (CGFloat) angle
        {
            // calculate rotation angle
            imageAngle += angle;
            if (imageAngle > 360)
                imageAngle -= 360;
            else if (imageAngle < -360)
                imageAngle += 360;
            // rotate image and update text field
            image.transform = CGAffineTransformMakeRotation(imageAngle * M_PI / 180);
            [self updateTextDisplay];
        }

    I can't seem to get this working in the RotateGesture class. Can anyone help me, please? I've been stuck on this for days now.

    SECOND EDIT: Here is the code from the SO user that was suggested to me. Here is the project on GitHub: SFGestureRecognizers. It uses the built-in iOS UIGestureRecognizer and doesn't need to be integrated into the cocos2d sources. Using it, you can make any gestures, just as you could if you were working with UIGestureRecognizer directly. For example, I made a base class Gesture and subclassed it for each new gesture:

        //Gesture.h
        @interface Gesture : NSObject <UIGestureRecognizerDelegate>
        {
            UIGestureRecognizer *gestureRecognizer;
            id delegate;
            SEL preSolveSelector;
            SEL possibleSelector;
            SEL beganSelector;
            SEL changedSelector;
            SEL endedSelector;
            SEL cancelledSelector;
            SEL failedSelector;
            BOOL preSolveAvailable;
            CCNode *owner;
        }

        - (id)init;
        - (void)addGestureRecognizerToNode:(CCNode*)node;
        - (void)removeGestureRecognizerFromNode:(CCNode*)node;
        - (void)recognizer:(UIGestureRecognizer*)recognizer;
        @end

        //Gesture.m
        #import "Gesture.h"

        @implementation Gesture

        - (id)init
        {
            if (!(self = [super init]))
                return self;
            preSolveAvailable = YES;
            return self;
        }

        - (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
            shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
        {
            return YES;
        }

        - (BOOL)gestureRecognizer:(UIGestureRecognizer *)recognizer shouldReceiveTouch:(UITouch *)touch
        {
            //! For the swipe gesture recognizer we want it to be executed only if it
            //! occurs on the main layer, not any of the subnodes (the main layer is
            //! higher in the hierarchy than its children, so it will be receiving
            //! the touch by default)
            if ([recognizer class] == [UISwipeGestureRecognizer class])
            {
                CGPoint pt = [touch locationInView:touch.view];
                pt = [[CCDirector sharedDirector] convertToGL:pt];
                for (CCNode *child in owner.children)
                {
                    if ([child isNodeInTreeTouched:pt])
                    {
                        return NO;
                    }
                }
            }
            return YES;
        }

        - (void)addGestureRecognizerToNode:(CCNode*)node
        {
            [node addGestureRecognizer:gestureRecognizer];
            owner = node;
        }

        - (void)removeGestureRecognizerFromNode:(CCNode*)node
        {
            [node removeGestureRecognizer:gestureRecognizer];
        }

        #pragma mark - Private methods

        - (void)recognizer:(UIGestureRecognizer*)recognizer
        {
            CCNode *node = recognizer.node;
            if (preSolveSelector && preSolveAvailable)
            {
                preSolveAvailable = NO;
                [delegate performSelector:preSolveSelector withObject:recognizer withObject:node];
            }
            UIGestureRecognizerState state = [recognizer state];
            if (state == UIGestureRecognizerStatePossible && possibleSelector)
            {
                [delegate performSelector:possibleSelector withObject:recognizer withObject:node];
            }
            else if (state == UIGestureRecognizerStateBegan && beganSelector)
                [delegate performSelector:beganSelector withObject:recognizer withObject:node];
            else if (state == UIGestureRecognizerStateChanged && changedSelector)
                [delegate performSelector:changedSelector withObject:recognizer withObject:node];
            else if (state == UIGestureRecognizerStateEnded && endedSelector)
            {
                preSolveAvailable = YES;
                [delegate performSelector:endedSelector withObject:recognizer withObject:node];
            }
            else if (state == UIGestureRecognizerStateCancelled && cancelledSelector)
            {
                preSolveAvailable = YES;
                [delegate performSelector:cancelledSelector withObject:recognizer withObject:node];
            }
            else if (state == UIGestureRecognizerStateFailed && failedSelector)
            {
                preSolveAvailable = YES;
                [delegate performSelector:failedSelector withObject:recognizer withObject:node];
            }
        }
        @end

    Subclass example:

        //RotateGesture.h
        #import "Gesture.h"

        @interface RotateGesture : Gesture

        - (id)initWithTarget:(id)target
            preSolveSelector:(SEL)preSolve
            possibleSelector:(SEL)possible
               beganSelector:(SEL)began
             changedSelector:(SEL)changed
               endedSelector:(SEL)ended
           cancelledSelector:(SEL)cancelled
              failedSelector:(SEL)failed;
        @end

        //RotateGesture.m
        #import "RotateGesture.h"

        @implementation RotateGesture

        - (id)initWithTarget:(id)target
            preSolveSelector:(SEL)preSolve
            possibleSelector:(SEL)possible
               beganSelector:(SEL)began
             changedSelector:(SEL)changed
               endedSelector:(SEL)ended
           cancelledSelector:(SEL)cancelled
              failedSelector:(SEL)failed
        {
            if (!(self = [super init]))
                return self;
            preSolveSelector = preSolve;
            delegate = target;
            possibleSelector = possible;
            beganSelector = began;
            changedSelector = changed;
            endedSelector = ended;
            cancelledSelector = cancelled;
            failedSelector = failed;
            gestureRecognizer = [[UIRotationGestureRecognizer alloc]
                                    initWithTarget:self action:@selector(recognizer:)];
            gestureRecognizer.delegate = self;
            return self;
        }
        @end

    Use example:

        - (void)addRotateGesture
        {
            RotateGesture *rotateRecognizer = [[RotateGesture alloc]
                initWithTarget:self
              preSolveSelector:@selector(rotateGesturePreSolveWithRecognizer:node:)
              possibleSelector:nil
                 beganSelector:@selector(rotateGestureStateBeganWithRecognizer:node:)
               changedSelector:@selector(rotateGestureStateChangedWithRecognizer:node:)
                 endedSelector:@selector(rotateGestureStateEndedWithRecognizer:node:)
             cancelledSelector:@selector(rotateGestureStateCancelledWithRecognizer:node:)
                failedSelector:@selector(rotateGestureStateFailedWithRecognizer:node:)];
            [rotateRecognizer addGestureRecognizerToNode:movableAreaSprite];
        }

    I don't understand how to implement the custom gesture code at the start of this post in the RotateGesture class, which is a subclass of the Gesture class written by the SO user. Any ideas, please? When I get 6 more rep I'll add a bounty to this.

    Read the article

  • How does data clustering help in image or pattern recognition?

    - by anon
    I have been playing around with different data clustering algorithms, trying to find clusters among random data points represented as nodes. I keep reading that data clustering is used for image recognition, but I am failing to make the connection: how does clustering data help in recognizing an image, or in facial recognition? Can someone explain this?
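
    One way to see the connection (a toy sketch, not tied to any particular recognition system): extract a feature vector from each image, cluster the vectors so that similar images end up in the same group, and then label a new image by its nearest cluster centroid. Below, plain 2-D points stand in for image feature vectors, clustered with a minimal k-means loop:

        import java.util.*;

        public class KMeansSketch {
            public static void main(String[] args) {
                double[][] points = {{1, 1}, {1.2, 0.8}, {5, 5}, {5.1, 4.9}, {9, 1}, {8.8, 1.2}};
                int k = 3;
                double[][] centroids = {points[0], points[2], points[4]};  // naive init
                int[] assign = new int[points.length];
                for (int iter = 0; iter < 10; iter++) {
                    // assignment step: each point joins its nearest centroid
                    for (int i = 0; i < points.length; i++) assign[i] = nearest(points[i], centroids);
                    // update step: each centroid moves to the mean of its members
                    double[][] sums = new double[k][2];
                    int[] counts = new int[k];
                    for (int i = 0; i < points.length; i++) {
                        sums[assign[i]][0] += points[i][0];
                        sums[assign[i]][1] += points[i][1];
                        counts[assign[i]]++;
                    }
                    for (int c = 0; c < k; c++)
                        if (counts[c] > 0)
                            centroids[c] = new double[]{sums[c][0] / counts[c], sums[c][1] / counts[c]};
                }
                System.out.println(Arrays.toString(assign));  // points in the same cluster share a label
            }

            static int nearest(double[] p, double[][] centroids) {
                int best = 0;
                double bestD = Double.MAX_VALUE;
                for (int c = 0; c < centroids.length; c++) {
                    double dx = p[0] - centroids[c][0], dy = p[1] - centroids[c][1];
                    double d = dx * dx + dy * dy;
                    if (d < bestD) { bestD = d; best = c; }
                }
                return best;
            }
        }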

    Read the article

  • Image Recognition (Shape recognition)

    - by mqpasta
    I want to recognize the shapes in a picture by template matching. Is the "ExhaustiveTemplateMatching" class provided in AForge.NET the right option for this purpose? Has anyone tried this class and found it to work correctly? How accurate is it, and how good a choice is it for my goal? Please also suggest other methods or algorithms for recognizing shapes by matching a template, for example identifying a ComboBox in a picture.
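
    For intuition, exhaustive template matching boils down to sliding the template over every position in the image and scoring each overlap. The sketch below illustrates that principle with a sum-of-squared-differences score over plain grayscale arrays; it is an illustration of the idea, not AForge.NET code.

        public class TemplateMatchSketch {
            // returns {bestX, bestY} of the window most similar to the template
            static int[] bestMatch(int[][] image, int[][] tmpl) {
                int ih = image.length, iw = image[0].length;
                int th = tmpl.length, tw = tmpl[0].length;
                long bestScore = Long.MAX_VALUE;
                int[] best = {0, 0};
                for (int y = 0; y + th <= ih; y++) {
                    for (int x = 0; x + tw <= iw; x++) {
                        long score = 0;  // sum of squared differences; 0 = identical
                        for (int ty = 0; ty < th; ty++)
                            for (int tx = 0; tx < tw; tx++) {
                                int d = image[y + ty][x + tx] - tmpl[ty][tx];
                                score += (long) d * d;
                            }
                        if (score < bestScore) { bestScore = score; best = new int[]{x, y}; }
                    }
                }
                return best;
            }

            public static void main(String[] args) {
                int[][] image = {
                    {0, 0, 0, 0},
                    {0, 9, 8, 0},
                    {0, 7, 9, 0},
                    {0, 0, 0, 0}};
                int[][] tmpl = {{9, 8}, {7, 9}};
                int[] xy = bestMatch(image, tmpl);
                System.out.println("best match at x=" + xy[0] + ", y=" + xy[1]);  // x=1, y=1
            }
        }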

    Read the article

  • How can I use the voice recognition used by Android on Ubuntu?

    - by aking1012
    If I'm developing an Android app that uses TTS and voice recognition, which libraries provide the equivalent voice recognition and speech synthesis on Ubuntu? I'm assuming eSpeak for text-to-speech, but I'm unsure which voice recognition library and which dictionary/learning/calibration system is used. I'd like to make the app available on Ubuntu Desktop as well, and to test it outside an emulator.

    Read the article

  • Speech recognition grammar rules using Delphi code

    - by XBasic3000
    I need help creating an ISpeechRecoGrammar without using the XML format, i.e. building it at runtime in Delphi. For example:

        procedure TForm1.FormCreate(Sender: TObject);
        var
          AfterCmdState: ISpeechGrammarRuleState;
          temp: OleVariant;
          Grammar: ISpeechRecoGrammar;
          PropertiesRule: ISpeechGrammarRule;
          ItemRule: ISpeechGrammarRule;
          TopLevelRule: ISpeechGrammarRule;
        begin
          SpSharedRecoContext.EventInterests := SREAllEvents;
          Grammar := SpSharedRecoContext.CreateGrammar(m_GrammarId);
          TopLevelRule := Grammar.Rules.Add('TopLevelRule', SRATopLevel or SRADynamic, 1);
          PropertiesRule := Grammar.Rules.Add('PropertiesRule', SRADynamic, 2);
          ItemRule := Grammar.Rules.Add('ItemRule', SRADynamic, 3);
          AfterCmdState := TopLevelRule.AddState;
          TopLevelRule.InitialState.AddWordTransition(AfterCmdState, 'test', temp, temp, '****', 0, temp, temp);
          Grammar.Rules.Commit;
          Grammar.CmdSetRuleState('TopLevelRule', SGDSActive);
        end;

    Can someone reconstruct or modify the Delphi code above so that it does exactly the same as the XML below?

        <GRAMMAR LANGID="409">
          <!-- "Constant" definitions -->
          <DEFINE>
            <ID NAME="RID_start" VAL="1"/>
            <ID NAME="PID_action" VAL="2"/>
            <ID NAME="PID_actionvalue" VAL="3"/>
          </DEFINE>
          <!-- Rule definitions -->
          <RULE NAME="start" ID="RID_start" TOPLEVEL="ACTIVE">
            <P>i am</P>
            <RULEREF NAME="action" PROPNAME="action" PROPID="PID_action" />
            <O>OK</O>
          </RULE>
          <RULE NAME="action">
            <L PROPNAME="actionvalue" PROPID="PID_actionvalue">
              <P VAL="1">albert</P>
              <P VAL="2">francis</P>
              <P VAL="3">alex</P>
            </L>
          </RULE>
        </GRAMMAR>

    Sorry for my English...

    Read the article

  • Syntax Recognition for XML-Based Languages in Oracle JDeveloper

    - by Ramkumar Menon
    @Thanks Jeffrey Stephenson

    If you are looking at using any of the newer XML-based languages, say DocBook XML, or XProc, or whatnot, you can make use of JDeveloper's syntax highlighting and completion insight features to save those extra keystrokes. All you need is a URL or a local copy of the XML Schema for the language. Once you have it, you can register it via Tools --> Preferences --> XML Schemas. Remember to provide a new extension name (using the default .xml extension did not work for me); I provided my own extension, .dbk, for my DocBook files. Once you save these settings, you can create new files that conform to the schema, and you get validation, completion insight, and prompting for free.

    Read the article

  • SAPI speech recognition with Delphi

    - by XBasic3000
    I need to create a programmatic equivalent using the Delphi language, or could someone post a link on how to do grammars in speech recognition using Delphi? Sorry for my English...

    Programmatic equivalent (C++). Ref: http://msdn.microsoft.com/en-us/library/ms723634(v=VS.85).aspx To add a phrase to a rule, SAPI provides an API called ISpGrammarBuilder::AddWordTransition. The application developer can add the sentences as follows:

        SPSTATEHANDLE hsHelloWorld;

        // Create new top-level rule called "HelloWorld"
        hr = cpRecoGrammar->GetRule(L"HelloWorld", NULL,
                SPRAF_TopLevel | SPRAF_Active, TRUE, &hsHelloWorld);
        // Check hr

        // Add the command words "hello world".
        // Note that the lexical delimiter is " ", a space character.
        // By using a space delimiter, the entire phrase can be added
        // in one method call.
        hr = cpRecoGrammar->AddWordTransition(hsHelloWorld, NULL, L"hello world",
                L" ", SPWT_LEXICAL, NULL, NULL);
        // Check hr

        // Add the command words "hiya there".
        // Note that the lexical delimiter is "|", a pipe character.
        // By using a pipe delimiter, the entire phrase can be added
        // in one method call.
        hr = cpRecoGrammar->AddWordTransition(hsHelloWorld, NULL, L"hiya|there",
                L"|", SPWT_LEXICAL, NULL, NULL);
        // Check hr

        // save/commit changes
        hr = cpRecoGrammar->Commit(NULL);
        // Check hr

    XML grammar sample(s):

        <GRAMMAR>
          <!-- Create a simple "hello world" rule -->
          <RULE NAME="HelloWorld" TOPLEVEL="ACTIVE">
            <P>hello world</P>
          </RULE>

          <!-- Create a more advanced "hello world" rule that changes the
               display form. When the user says "hello world" the display
               text will be "Hiya there!" -->
          <RULE NAME="HelloWorld_Disp" TOPLEVEL="ACTIVE">
            <P DISP="Hiya there!">hello world</P>
          </RULE>

          <!-- Create a rule that changes the pronunciation and the display
               form of the phrase. When the user says "eh" the display text
               will be "I don't understand?". Note the user didn't say "huh".
               The pronunciation for "what" is specific to this phrase tag
               and is not changed for the user or application lexicon, or
               even other instances of "what" in the grammar -->
          <RULE NAME="Question_Pron" TOPLEVEL="ACTIVE">
            <P DISP="I don't understand" PRON="eh">what</P>
          </RULE>

          <!-- Create a rule demonstrating repetition -->
          <!-- the rule will only be recognized if the user says "hey diddle diddle" -->
          <RULE NAME="NurseryRhyme" TOPLEVEL="ACTIVE">
            <P>hey</P>
            <P MIN="2" MAX="2">diddle</P>
          </RULE>

          <!-- Create a list with variable phrase weights -->
          <!-- If the user says similar phrases, the recognizer will use the
               weights to pick a match -->
          <RULE NAME="UseWeights" TOPLEVEL="ACTIVE">
            <LIST>
              <!-- Note the higher likelihood that the user is expected to
                   say "recognize speech" -->
              <P WEIGHT=".95">recognize speech</P>
              <P WEIGHT=".05">wreck a nice beach</P>
            </LIST>
          </RULE>

          <!-- Create a phrase with an attached semantic property -->
          <!-- Speaking "one two three" will return three different unique
               semantic properties, with different names and different values -->
          <RULE NAME="UseProps" TOPLEVEL="ACTIVE">
            <!-- named property, without value -->
            <P PROPNAME="NOVALUE">one</P>
            <!-- named property, with numeric value -->
            <P PROPNAME="NUMBER" VAL="2">two</P>
            <!-- named property, with string value -->
            <P PROPNAME="STRING" VALSTR="three">three</P>
          </RULE>
        </GRAMMAR>

    Read the article

  • Using Android gestures on top of menu buttons

    - by chriacua
    What I want is an options menu where the user can choose to navigate by either: 1) touching a button and then pressing down on the trackball to select it, or 2) drawing predefined gestures from Gestures Builder. As it stands now, I have created my buttons with OnClickListener and the gestures with GestureOverlayView, and I start a new Activity depending on whether the user pressed a button or performed a gesture. However, when I attempt to draw a gesture, it is not picked up; only button presses are recognized. The following is my code:

        public class Menu extends Activity implements OnClickListener, OnGesturePerformedListener {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                // create TextToSpeech
                myTTS = new TextToSpeech(this, this);
                myTTS.setLanguage(Locale.US);
                // create gestures
                mLibrary = GestureLibraries.fromRawResource(this, R.raw.gestures);
                if (!mLibrary.load()) {
                    finish();
                }
                // Set up click listeners for all the buttons.
                View playButton = findViewById(R.id.play_button);
                playButton.setOnClickListener(this);
                View instructionsButton = findViewById(R.id.instructions_button);
                instructionsButton.setOnClickListener(this);
                View modeButton = findViewById(R.id.mode_button);
                modeButton.setOnClickListener(this);
                View statsButton = findViewById(R.id.stats_button);
                statsButton.setOnClickListener(this);
                View exitButton = findViewById(R.id.exit_button);
                exitButton.setOnClickListener(this);
                GestureOverlayView gestures = (GestureOverlayView) findViewById(R.id.gestures);
                gestures.addOnGesturePerformedListener(this);
            }

            public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
                ArrayList<Prediction> predictions = mLibrary.recognize(gesture);
                // We want at least one prediction
                if (predictions.size() > 0) {
                    Prediction prediction = predictions.get(0);
                    // We want at least some confidence in the result
                    if (prediction.score > 1.0) {
                        // Show the gesture
                        Toast.makeText(this, prediction.name, Toast.LENGTH_SHORT).show();
                        // User drew symbol for PLAY
                        if (prediction.name.equals("Play")) {
                            myTTS.shutdown();
                            // connect to game
                        // User drew symbol for INSTRUCTIONS
                        } else if (prediction.name.equals("Instructions")) {
                            myTTS.shutdown();
                            startActivity(new Intent(this, Instructions.class));
                        // User drew symbol for MODE
                        } else if (prediction.name.equals("Mode")) {
                            myTTS.shutdown();
                            startActivity(new Intent(this, Mode.class));
                        // User drew symbol to QUIT
                        } else {
                            finish();
                        }
                    }
                }
            }

            @Override
            public void onClick(View v) {
                switch (v.getId()) {
                    case R.id.instructions_button:
                        startActivity(new Intent(this, Instructions.class));
                        break;
                    case R.id.mode_button:
                        startActivity(new Intent(this, Mode.class));
                        break;
                    case R.id.exit_button:
                        finish();
                        break;
                }
            }
        }

    Any suggestions would be greatly appreciated!
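
    One thing worth checking (an assumption about the cause, not a confirmed fix): a GestureOverlayView only sees strokes that start inside it, so in the layout it must wrap the buttons, and it can be told to intercept the event stream once a stroke looks like a gesture instead of letting the buttons claim the touch first. A minimal sketch of the overlay configuration in onCreate:

        GestureOverlayView gestures = (GestureOverlayView) findViewById(R.id.gestures);
        // Let the overlay steal events once a gesture is in progress, so the
        // buttons underneath do not consume the touch before recognition runs.
        gestures.setEventsInterceptionEnabled(true);
        // Allow gestures made of more than one stroke, if Gestures Builder
        // templates use them.
        gestures.setGestureStrokeType(GestureOverlayView.GESTURE_STROKE_TYPE_MULTIPLE);
        gestures.addOnGesturePerformedListener(this);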

    Read the article

  • How advanced are author-recognition methods?

    - by Nick Rtz
    Given a text written by an author, how much can a computer program that analyses the text tell today about its author, assuming the text is long enough to be statistically significant? Can a program tell with "certainty" whether a man or a woman wrote the text, based solely on its contents and not on an investigation of things such as IP addresses? I'm interested to know whether there are algorithms in use, for instance, to determine automatically whether an author is male or female, or other characteristics of an author that a program can infer from analysis of the written text. It could be useful to know, before you read a message, what a computer analysis says about its author, don't you agree? If, for instance, I get a longer message from my wife saying that she has had an accident in Nigeria, and the program says that with 99% probability the message was written by a male author in his sixties of non-Caucasian origin, or by somebody who is not my wife, then the program could help me investigate why a certain message differs in its characteristics. There can also be other uses, for instance detecting outliers in a geographically or demographically bounded larger data set. Scam detection is the obvious use I'm thinking of, but there could be others. Are there already programs that analyse a written text to tell something about the author based on word choice, use of pronouns, unusual language usage, or the like?
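
    A toy sketch of the kind of feature such systems often start from (an illustration of the stylometry idea, not a production author profiler): authors use function words like "the" and "of" at fairly characteristic rates, so two texts can be compared by the similarity of their function-word frequency profiles.

        public class StylometrySketch {
            static final String[] FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in", "that", "i"};

            // frequency of each function word, normalized by text length
            static double[] profile(String text) {
                String[] tokens = text.toLowerCase().split("\\W+");
                double[] counts = new double[FUNCTION_WORDS.length];
                for (String t : tokens)
                    for (int i = 0; i < FUNCTION_WORDS.length; i++)
                        if (FUNCTION_WORDS[i].equals(t)) counts[i]++;
                for (int i = 0; i < counts.length; i++) counts[i] /= Math.max(1, tokens.length);
                return counts;
            }

            // cosine similarity between two profiles; close to 1.0 = similar style
            static double similarity(double[] a, double[] b) {
                double dot = 0, na = 0, nb = 0;
                for (int i = 0; i < a.length; i++) {
                    dot += a[i] * b[i];
                    na += a[i] * a[i];
                    nb += b[i] * b[i];
                }
                return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-12);
            }

            public static void main(String[] args) {
                double[] a = profile("the cat sat on the mat and the dog slept");
                double[] b = profile("of the people by the people for the people");
                System.out.printf("style similarity: %.3f%n", similarity(a, b));
            }
        }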

    Read the article

  • Android ViewFlipper + Gesture Detector

    - by Tim
    I am using a gesture detector to catch "flings" and a ViewFlipper to change the screen when one happens. Some of my child views contain ListViews. The gesture detector won't recognize a swipe if you swipe on the ListView, but it will recognize one on top of TextViews or ImageViews. Is there a way to implement it so that it recognizes swipes even when they are on top of another view that has a ClickListener?
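
    A common workaround, sketched here from the general Android pattern rather than from the question's code (gestureDetector is an assumed field on the Activity): intercept touches at the Activity level, where every event passes before any child such as a ListView can consume it.

        @Override
        public boolean dispatchTouchEvent(MotionEvent ev) {
            // Feed every event to the detector before the ListView can swallow it.
            gestureDetector.onTouchEvent(ev);
            // Returning the superclass result keeps normal list scrolling and
            // click behaviour intact.
            return super.dispatchTouchEvent(ev);
        }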

    Read the article

  • Is it possible to use the Windows Speech Recognition Engine in my project, like a word pronunciation game?

    - by XBasic3000
    I'm creating an application that uses the Windows speech recognition engine, i.e. SAPI. It's like a game for pronunciation: it gives you a score when you pronounce a word correctly. But when I started experimenting with SAPI, I found it has poor recognition unless you load a grammar (XML); with a grammar it gives the best recognition results. The problem now is that the closest pronunciation to the input text will be recognized. For example: Database, spoken as "dedebase", is marked correct. Even if you mispronounce the word, it gives you a correct answer. Without the XML grammar, when you say "database" it gives you "in the base/the base/data base/etc..." Please post your answers, suggestions, and clarifications, and please vote for the best answer.

    Read the article

  • Is it possible to use the Windows Speech Recognition Engine in a word pronunciation game?

    - by XBasic3000
    I'm creating an application that uses the Windows speech recognition engine, i.e. SAPI. It's like a game for pronunciation: it gives you a score when you pronounce a word correctly. But when I started experimenting with SAPI, I found it has poor recognition unless you load a grammar (XML); with a grammar it gives the best recognition results. The problem now is that the closest pronunciation to the input text will be recognized. For example: Database, spoken as "dedebase", is marked correct. Even if you mispronounce the word, it gives you a correct answer. Without the XML grammar, when you say "database" it gives you "in the base/the base/data base/etc..." Please post your answers, suggestions, and clarifications, and vote for the best answer. Is it possible or not? By the way, I use the Delphi compiler for these projects....

    Read the article

  • How to detect a triangle gesture with Kinect?

    - by Akhilesh Mishra
    I am trying to implement a gesture recognition system that interprets the geometric gestures a user makes and draws them on screen. I have some idea of how a circle can be recognized, but I have no clue how to get started with triangle recognition. The data I have is the X and Y coordinates of all points the gesture passed through; I get this data by tracking the right hand. I found something online called the Hough transform, which is used for detecting lines, but I am not sure whether it will work for a discrete collection of points. Any ideas, folks?
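
    One simpler alternative to a full Hough transform, sketched under the assumption that the trajectory is an ordered list of (x, y) hand positions: count sharp turns. A roughly closed stroke with exactly three sharp direction changes is a reasonable triangle candidate. The thresholds (120 degrees, 50 px closure tolerance, 30 samples) are illustrative guesses to tune against real Kinect data.

        import java.util.List;

        public class TriangleSketch {
            static boolean looksLikeTriangle(List<double[]> pts) {
                if (pts.size() < 10) return false;
                int step = Math.max(1, pts.size() / 30);  // crude resampling
                int corners = 0;
                for (int i = step; i + step < pts.size(); i += step) {
                    double[] a = pts.get(i - step), b = pts.get(i), c = pts.get(i + step);
                    if (angleAt(b, a, c) < Math.toRadians(120)) {  // sharp turn = corner
                        corners++;
                        i += step;  // skip ahead so one corner is not counted twice
                    }
                }
                double[] first = pts.get(0), last = pts.get(pts.size() - 1);
                boolean closed = Math.hypot(first[0] - last[0], first[1] - last[1]) < 50;
                return closed && corners == 3;
            }

            // interior angle at vertex b between the rays b->a and b->c
            static double angleAt(double[] b, double[] a, double[] c) {
                double v1x = a[0] - b[0], v1y = a[1] - b[1];
                double v2x = c[0] - b[0], v2y = c[1] - b[1];
                double dot = v1x * v2x + v1y * v2y;
                double norm = Math.hypot(v1x, v1y) * Math.hypot(v2x, v2y) + 1e-12;
                return Math.acos(Math.max(-1.0, Math.min(1.0, dot / norm)));
            }
        }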

    Read the article

  • How to show multiple screens with a right/left swipe gesture

    - by ajay sahu
    I have an application with a ListView. The list is populated from an ArrayList. On selection of an item, it shows the detailed description for that item on a separate screen, populated from another ArrayList. A single screen is used to display the details of all the items; it loads the data dynamically. Can anyone please tell me how I can display the details on that same screen using right/left swipe gestures? Screen 1 is the ListView with the item list; screen 2 displays the detail. On next and previous swipes, screen 2 should show the dynamic data from the ArrayList.
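
    A minimal sketch of the usual pattern (items, position, and showDetails are placeholder names, not from the question): detect the fling direction with a GestureDetector in the detail Activity, then reload the detail view for the next or previous item instead of launching a new screen.

        private GestureDetector detector;
        private int position;  // index of the item currently shown

        private void setUpSwipe() {
            detector = new GestureDetector(this, new GestureDetector.SimpleOnGestureListener() {
                @Override
                public boolean onFling(MotionEvent e1, MotionEvent e2, float vx, float vy) {
                    if (e1.getX() - e2.getX() > 100) {         // left swipe: next item
                        position = Math.min(position + 1, items.size() - 1);
                    } else if (e2.getX() - e1.getX() > 100) {  // right swipe: previous item
                        position = Math.max(position - 1, 0);
                    }
                    showDetails(items.get(position));  // hypothetical helper that rebinds the views
                    return true;
                }
            });
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            return detector.onTouchEvent(event);
        }

    If an animated slide is wanted, the same onFling branches can call showNext() or showPrevious() on a ViewFlipper holding two detail layouts, rebinding the off-screen one before flipping.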

    Read the article

  • What is the difference between System.Speech.Recognition and Microsoft.Speech.Recognition?

    - by Michael
    There are two similar namespaces and assemblies for speech recognition in .NET, and I'm trying to understand the differences and when it is appropriate to use one or the other. There is System.Speech.Recognition from the assembly System.Speech (in System.Speech.dll); System.Speech.dll is a core DLL in the .NET Framework class library, version 3.0 and later. There is also Microsoft.Speech.Recognition from the assembly Microsoft.Speech (in microsoft.speech.dll); Microsoft.Speech.dll is part of the UCMA 2.0 SDK. I find the docs confusing and I have the following questions: 1) System.Speech.Recognition says it is for "the Windows Desktop Speech Technology"; does this mean it cannot be used on a server OS, or cannot be used for high-scale applications? 2) The UCMA 2.0 Speech SDK (http://msdn.microsoft.com/en-us/library/dd266409%28v=office.13%29.aspx) says that it requires Microsoft Office Communications Server 2007 R2 as a prerequisite. However, I've been told at conferences and meetings that if I do not require OCS features like presence and workflow, I can use the UCMA 2.0 Speech API without OCS. Is this true? 3) If I'm building a simple recognition app for a server application (say I wanted to automatically transcribe voice mails) and I don't need the features of OCS, what are the differences between the two APIs?

    Read the article

  • Voice recognition in Android

    - by jaymin
    Hi, I am an Android application developer. I was curious how voice recognition could be implemented on Android. There is built-in support for speech recognition in Android, but how can it be used to implement voice recognition? Are there any links that would help me learn about this topic? Thanks
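
    The built-in route on Android is the RecognizerIntent API, which hands the audio to the platform's speech recognizer and returns candidate transcripts. A minimal sketch (the request code is an arbitrary choice): note this transcribes what was said; recognizing who is speaking would need a separate speaker-recognition library.

        private static final int VOICE_REQUEST = 1234;  // arbitrary request code

        private void startVoiceInput() {
            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            startActivityForResult(intent, VOICE_REQUEST);
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == VOICE_REQUEST && resultCode == RESULT_OK) {
                // best-first list of what the recognizer thinks was said;
                // matches.get(0) is the most likely transcript
                ArrayList<String> matches =
                        data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            }
            super.onActivityResult(requestCode, resultCode, data);
        }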

    Read the article

  • Facial recognition/detection PHP or software for photo and video galleries

    - by Peter
    I have a very large photo gallery with thousands of similar people, objects, locations, and things. The majority of the people in the photos have their own user accounts and avatar photos to match. There are also logical short lists of the people potentially in each photo, based on additional data available for it. I allow users to tag photos with their friends and people they know, but an automated process would be better. I've used the photo tagger/finder from face.com integrating with Facebook photos, and the Google Picasa photo tagger for personal albums does the same thing and is exactly what I'm looking to do. Is there a PHP script, an API for Google Picasa, face.com, or another recognition service, or any other open source project that provides server-side facial recognition and/or grouping of photos by similarity? As the examples below show, various photo sharing sites offer the feature, but are there any that provide an API for images stored on my own server, or something extensive enough to link into my own gallery and tagging system? Examples:
    viewdle - face recognition/tagging for video
    PHP - face detection in pure PHP
    Xarg OpenCV
    Face.com - app for finding and tagging photos in Facebook
    Google Picasa - photo sharing
    TeraSnaps - photo sharing site
    Google Portrait - photo grouping from Google Image results
    FaceOnIt - video face recognition
    PittPatt - detection, recognition, video face mining
    BetaFace
    ChaosFace - real-time face detector

    Read the article

  • Windows 8 Speech Recognition Language

    - by Greg
    I've got Windows 8 Pro installed (the RTM version from MSDN). For an application I use, I need the speech recognition language set to English - US; the only option I have is English - UK. I have tried going to Language in Control Panel and setting the only language to English - US, but English - UK is still the only option in Speech Properties. How can I add a language to the Speech Properties?

    Read the article

  • Gesture recognizer for mouse down and up in iPhone SDK

    - by user545201
    I want to catch both mouse down and mouse up using a gesture recognizer. However, when the mouse down is caught, the mouse up is never caught. Here's what I did. First, create a custom MouseGestureRecognizer:

        @implementation MouseGestureRecognizer

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            [super touchesBegan:touches withEvent:event];
            self.state = UIGestureRecognizerStateRecognized;
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            [super touchesEnded:touches withEvent:event];
            self.state = UIGestureRecognizerStateRecognized;
        }

        @end

    Then bind the recognizer to a view in the view controller:

        UIGestureRecognizer *recognizer = [[MouseGestureRecognizer alloc]
                initWithTarget:self action:@selector(handleGesture:)];
        [self.view addGestureRecognizer:recognizer];

    When I click the mouse in the view, touchesBegan is called, but touchesEnded is never called. Is it because of UIGestureRecognizerStateRecognized? Any advice will be appreciated! Thanks!

    Read the article

  • Did 12.04 just add multi-touch gesture support mid-release?

    - by adempewolff
    I was reviewing the updates I was about to download today and noticed that a lot of them had to do with gesture support, and that many of these were new installs rather than upgrades. Has 12.04 just added multi-touch gesture support mid-release? If so, what capabilities does this add? Which applications already support them, and can I expect others to add support in the near future? Here are the packages that were installed:

        Install: libframe6:amd64 (2.2.4-0ubuntu0.12.04.1), libgeis1:amd64 (2.2.9.2-0ubuntu1),
                 libgrail5:amd64 (3.0.6-0ubuntu0.12.04.01, automatic)

    And here are those that were upgraded (also including several with touch support):

        Upgrade: libgrip0:amd64 (0.3.4-0ubuntu2~ubuntu12.04.1, 0.3.5-0ubuntu1~12.04.1),
                 eog:amd64 (3.4.2-0ubuntu1, 3.4.2-0ubuntu1.1), ginn:amd64 (0.2.4-0ubuntu1, 0.2.4.1-0ubuntu1)

    The descriptions for the new installs are:

        libgeis1 - Gesture engine interface support: a common API for clients of a
        systemwide gesture recognition and propagation engine.

        libframe6 - Touch frame library: this library handles the buildup and
        synchronization of a set of simultaneous touches. The library is input
        agnostic, with bindings for mtdev, frame and XI2.1.

        libgrail5 - Gesture recognition and instantiation library: this library
        consists of an interface and tools for handling gesture recognition and
        gesture instantiation. Applications can use the grail callbacks to receive
        gesture primitives and raw input events from the underlying kernel device.

    And the descriptions for the upgraded packages are:

        libgrip0 - provides multitouch gestures to GTK+ apps: libgrip hooks gesture
        recognition into GTK+ applications.

        ginn - Gesture Injector: No-GEIS, No-Toolkits. A daemon with jinn-like
        wish-granting capabilities: it gives applications the ability to support a
        subset of multi-touch gestures without having to integrate GEIS or
        multi-touch GTK/Qt libs.

    Adding a ton of new libraries and upgrading the existing components makes me wonder if 12.04 is meant to start natively supporting gestures other than two-finger scroll in the near future. I expected these capabilities to be introduced soon, but I thought they would only be rolled out in a new release, not as upgrades to an existing release. Does anyone have any info about this?

    Read the article
