Search Results

Search found 916 results on 37 pages for 'speech recognition'.

Page 4/37 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Name typing in the "TO" line for last name recognition

    - by Buck
    I have Outlook 2010 on a Windows 7 laptop. When I compose an email and start typing a name in the "To" line, typing the last name first does not match anyone in my contacts, and no list of matching names is suggested. If I start typing the first name first, the auto-complete suggestions appear based on what I have typed so far. The company I work for has 20,000+ employees, so if I want to email someone like "Michael Hutch" and I type "Michael", I still get around 800 names to choose from. My old laptop, which ran Outlook 2003, had this functionality. Is there a way to enable last-name matching in Outlook 2010?

    Read the article

  • How to lengthen the pause between words with text-to-speech (pyTTS or SAPI5)

    - by Berry Tsakala
    Is it possible to extend the gap between spoken words when using text-to-speech with SAPI5? The problem is that, especially with some voices, the words are almost connected to each other, which makes the speech more difficult to understand. I'm using Python and the pyTTS module (on Windows, since it uses SAPI). I tried to hook into the OnWord event and add a time.sleep() or tts.Pause(), but apparently, even though all the events are caught, they are only processed at the end of the spoken text, whether I use the sync or async flag. In this NON-WORKING example, the sleep() call is executed only after the sentence has been spoken:

        tts = pyTTS.Create()

        def f(x):
            tts.Pause()
            sleep(0.5)
            tts.Resume()

        tts.OnWord = f
        tts.Speak(text)
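    One possible workaround, sketched below, is to skip the event handler entirely and put the gaps in yourself: split the text into words and speak them one at a time with an explicit pause between calls. This assumes pyTTS's Speak call blocks by default, i.e. it returns only after the word has been spoken.

        import time
        import pyTTS

        tts = pyTTS.Create()

        def speak_slowly(text, gap=0.5):
            # Speak one word at a time and sleep in between, instead of
            # trying to pause from inside OnWord events that fire too late.
            for word in text.split():
                tts.Speak(word)   # assumed to block until the word has been spoken
                time.sleep(gap)

        speak_slowly("this is much easier to understand")

    SAPI5 also understands inline TTS XML such as <silence msec="500"/>, so inserting that tag between words in the string passed to Speak may be a cleaner option if the wrapper passes the text through unmodified.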

    Read the article

  • Training speech recognition software

    - by wyatt
    A little left field, but I'm trying to train a speech recognition program, and the guidelines suggest that I speak clearly but naturally. I notice, however, that when one speaks naturally each word tends to drift into the next, leaving a rather ambiguous boundary between words. On the one hand, speaking in a more stilted manner would seem to help the computer recognise the phonemes; on the other, it would tend to make it less likely to understand more natural speech. Is anyone knowledgeable in the field able to suggest which of the two approaches is more effective? Thanks

    Read the article

  • How to fix this Speech Recognition wicked bug?

    - by aF
    I have this code in my C# project:

        public void startRecognition(string pName)
        {
            presentationName = pName;
            if (WaveNative.waveInGetNumDevs() > 0)
            {
                string grammar = System.Environment.GetEnvironmentVariable("PUBLIC") +
                    "\\SoundLog\\Presentations\\" + presentationName +
                    "\\SpeechRecognition\\soundlog.cfg";
                if (File.Exists(grammar))
                {
                    File.Delete(grammar);
                }
                executeCommand();

                /// Create an instance of SpSharedRecoContextClass which will be used
                /// to interface with the incoming audio stream
                recContext = new SpSharedRecoContextClass();

                // Create the grammar object
                recContext.CreateGrammar(1, out recGrammar);
                //recContext.CreateGrammar(2, out recGrammar2);

                // Set up dictation mode
                //recGrammar2.SetDictationState(SpeechLib.SPRULESTATE.SPRS_ACTIVE);
                //recGrammar2.SetGrammarState(SPGRAMMARSTATE.SPGS_ENABLED);

                // Set appropriate grammar mode
                if (File.Exists(grammar))
                {
                    recGrammar.LoadCmdFromFile(grammar, SPLOADOPTIONS.SPLO_STATIC);
                    //recGrammar.SetDictationState(SpeechLib.SPRULESTATE.SPRS_INACTIVE);
                    recGrammar.SetGrammarState(SPGRAMMARSTATE.SPGS_ENABLED);
                    recGrammar.SetRuleIdState(0, SPRULESTATE.SPRS_ACTIVE);
                }

                /// Bind a callback to the recognition event which will be invoked
                /// when a dictated phrase has been recognised.
                recContext.Recognition +=
                    new _ISpeechRecoContextEvents_RecognitionEventHandler(handleRecognition);
                // System.Windows.Forms.MessageBox.Show(recContext.ToString());
                // compiled grammar
            }
        }

        private static void handleRecognition(int StreamNumber, object StreamPosition,
            SpeechLib.SpeechRecognitionType RecognitionType, SpeechLib.ISpeechRecoResult Result)
        {
            string temp = Result.PhraseInfo.GetText(0, -1, true);
            _recognizedText = "";
            // System.Windows.Forms.MessageBox.Show(temp);
            // System.Windows.Forms.MessageBox.Show(recognizedWords.Count.ToString());
            foreach (string word in recognizedWords)
            {
                if (temp.Contains(word))
                {
                    // System.Windows.Forms.MessageBox.Show("yes");
                    _recognizedText = word;
                }
            }
        }

    This code is built into a DLL that I use in another application. Now, the wicked bug: when I call the startRecognition method right at the beginning of the other application's execution, this code works very well. But when I call it some time after startup, the code runs, yet the handleRecognition method is never called. I can see that the words are being recognized, because they show up in the Microsoft Speech Recognition app, but my handler is never invoked. Do you know what the problem with this code is? NOTE: this project has some code that is always executing. Might that be the problem? Could that other running code be preventing this from running?

    Read the article

  • Getting Recognition for Open-Source Computer Language Projects

    - by Jon Purdy
    I like language a lot, so I write a lot of language-based solutions for programming, automation, and data definition. I'm very much a believer in open-source software, so lately I've started to push these projects to Sourceforge when I start them. I feel that these tools could be quite valuable in the right hands, and that they fill niches that otherwise go unfilled. The trouble, for me, is gaining recognition. No matter how useful the software I write, after a certain point I can no longer come up with anything to add or improve. Basically no one but me uses it, so it's not being attacked from enough angles to discover any new weaknesses. I cannot work on a project that doesn't have anything to do, but I won't have anything to do unless I gain recognition by working on it! This is greatly discouraging. It's like giving what you think is a really thoughtful gift to someone who just isn't paying attention. So I'm looking for advice on how to network and disseminate information about my projects so that they don't fizzle out like this. Are there any sites, newsgroups, or mailing lists that I've been completely missing?

    Read the article

  • OpenCV/EmguCV face recognition

    - by Meko
    Hi. I am trying to make an app that detects a face and recognizes it. I have face detection working, but I would like some ideas for the recognition part. I am using a webcam for tracking, and it can find the face. I then take just the face region into a new grayscale image and compare it, using EigenObjectRecognizer, against a list of images in a database. But it is not giving good results: sometimes it finds the wrong match, sometimes nothing. Which additional techniques should I apply before comparing the photos, such as histogram equalization or normalizing the faces to the same resolution?
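    The project above uses EmguCV from C#, but the preprocessing being asked about looks the same in any OpenCV binding. As a rough sketch (shown in Python purely for illustration), the usual normalisation before eigenface-style comparison is: crop to the detected face, convert to grayscale, resize every face to the same resolution, and equalise the histogram to reduce lighting differences.

        import cv2

        def normalise_face(face_bgr, size=(100, 100)):
            # Same-size grayscale faces with equalised histograms compare far
            # better under an eigenface-style recognizer than raw crops do.
            gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
            gray = cv2.resize(gray, size, interpolation=cv2.INTER_AREA)
            return cv2.equalizeHist(gray)

        # face_region = frame[y:y+h, x:x+w]     # crop using the detector's box
        # query = normalise_face(face_region)   # then hand to the recognizer

    Applying exactly the same normalisation to the database images as to the query face matters as much as the normalisation itself.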

    Read the article

  • Face Recognition in AS3

    - by dontPanic
    Hey all, I have been working on a project that involves Marilena (a project that ports the face detection part of OpenCV to ActionScript 3). Right now I can capture the faces and keep them as ByteArrays. I am working in Flash Builder 4. I want to add a face recognition part as well. I will identify the faces by connecting to a database, but I couldn't figure out how to do it without OpenCV in Flash. Do you have any ideas?

    Read the article

  • Face Recognition for classifying digital photos?

    - by Jeremy E
    I like to mess around with AI and wanted to try my hand at face recognition. The first step is to find the faces in the photographs. How is this usually done? Do you use convolution with a sample image (or images), or statistics-based methods? How do you find the bounding box for the face? My goal is to classify the pictures of my kids out of all our digital photos. Thanks in advance.
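    The standard answer is a pretrained Viola-Jones (Haar cascade) detector rather than hand-rolled convolution or statistics; it scans the image at multiple scales and returns a bounding box for each face. A minimal sketch using OpenCV's Python binding (assuming the opencv-python package, which ships the pretrained cascade files):

        import cv2

        # Pretrained frontal-face Haar cascade bundled with OpenCV.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        img = cv2.imread("photo.jpg")
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # Each detection is an (x, y, width, height) bounding box.
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

        cv2.imwrite("photo_with_boxes.jpg", img)

    Deciding which child each face belongs to is a separate recognition step (eigenfaces, LBPH and similar methods) that works on these cropped boxes.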

    Read the article

  • Microsoft TTS (Text to Speech) Dat File Locations

    - by neddy
    OK, so I've downloaded some TTS engines to replace the default Microsoft TTS engine and make my program sound a little more 'human'. Basically, I am wondering whereabouts the TTS engine files are stored on the local PC (Windows 7). The files I have are in .dat format; does anyone have any idea where they should go in order to be registered as a voice for text-to-speech? Cheers.

    Read the article

  • Any Named Entity Recognition web services available?

    - by Gublooo
    Hello, I wanted to know if there are any paid or free named entity recognition web services available. Basically, I'm looking for something where, if I pass in text like "John had french fries at Burger King", it identifies something along the lines of: Person: John; Organization: Burger King. I've heard of ANNIE from GATE, but I don't think it has a web service available. Thanks
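    For comparison, the same labels can be produced locally with an NLP library rather than a web service. A small illustration with spaCy, assuming its small English model has been downloaded (pip install spacy; python -m spacy download en_core_web_sm):

        import spacy

        nlp = spacy.load("en_core_web_sm")
        doc = nlp("John had french fries at Burger King")

        # Prints each entity with its label, e.g. PERSON John, ORG Burger King.
        for ent in doc.ents:
            print(ent.label_, ent.text)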

    Read the article

  • TI-99 speech effect?

    - by kotlinski
    Hi, I want to make a program that takes recorded speech and transforms it so it sounds like it's coming from a Texas Instruments TI-99. Do you have any good ideas or resources for how to go about that?
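    The TI-99/4A's speech synthesizer was an LPC chip, so a faithful emulation would re-synthesise the recording through an LPC vocoder. A much cruder approximation that already sounds convincingly "retro" is to downsample and bit-crush the recording; a sketch assuming a 16-bit mono WAV input:

        import numpy as np
        from scipy.io import wavfile

        rate, data = wavfile.read("speech.wav")     # 16-bit mono assumed
        samples = data.astype(np.float32) / 32768.0

        factor = 4                                  # crude downsample: keep every 4th sample
        crushed = samples[::factor]

        levels = 32                                 # quantise to roughly 5-bit resolution
        crushed = np.round(crushed * levels) / levels

        wavfile.write("ti99_style.wav", rate // factor,
                      (crushed * 32767).astype(np.int16))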

    Read the article

  • Open Source Simple Speech Recognition in C++ in Windows

    - by Cenoc
    Hey everyone, I was wondering: are there any basic speech recognition tools out there? I just want something that can distinguish between "yes" and "no" and is reasonably simple to implement. Most of the tools out there seem to make you start from scratch, and I'm looking for something more high-level. Thanks!
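    This is not C++, but it shows how little is needed at the application level once a library handles the decoding; the same yes/no check would sit on top of SAPI or PocketSphinx in C++. A sketch using the Python SpeechRecognition package with its offline CMU Sphinx backend (pip install SpeechRecognition pocketsphinx, plus PyAudio for microphone input):

        import speech_recognition as sr

        r = sr.Recognizer()
        with sr.Microphone() as source:
            print("Say yes or no...")
            audio = r.listen(source)

        try:
            text = r.recognize_sphinx(audio).lower()
        except sr.UnknownValueError:
            text = ""

        if "yes" in text:
            print("heard: yes")
        elif "no" in text:
            print("heard: no")
        else:
            print("didn't catch that")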

    Read the article

  • Speech recognition project

    - by sk
    Hello, I'm making my final year project, which is speech recognition, but I don't have any idea how to start. I will use C#. Please, can anyone guide me on how to start? What should the first step be? Thanks

    Read the article

  • Voice Recognition in iPhone app

    - by PRN
    Hello, is it possible to access voice recognition in an iPhone app, similar to the voice dialing available on the iPhone 3GS? When the user says something, the related information needs to be fetched. Is there a particular API for this? I have seen such apps on iTunes, but how do I go about it? Thanks in advance.

    Read the article

  • Sentence recognition and ability to answer with another sentence

    - by terabytest
    Hi. I'm looking into sentence recognition to make a program that plays voice clips from the game Team Fortress 2. I have .wav files and text files containing the transcription of every sound. What I'd like to build is a system in which two characters from the game talk to each other (by playing sound clips), and each one recognises the sense of what the other is saying and tries to answer with another of the available sound clips (doing its best to fit the sense of what it is trying to say). Is that possible in any way?
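    Full language understanding is a hard problem, but with transcripts already on disk a surprisingly workable first version is simply "play the clip whose transcript shares the most words with the line that was just spoken". A tiny sketch (the file names and lines below are made up for illustration):

        import re

        def words(text):
            return set(re.findall(r"[a-z']+", text.lower()))

        def best_reply(last_line, transcripts):
            # transcripts: {wav file name: transcription text}
            last = words(last_line)
            score, name = max((len(last & words(t)), n) for n, t in transcripts.items())
            return name if score > 0 else None

        clips = {
            "scout_need_dispenser.wav": "Yo, I need a dispenser here!",
            "engineer_dispenser_up.wav": "Dispenser going up.",
        }
        print(best_reply("Can somebody build a dispenser here?", clips))

    Word overlap only finds the clip closest in wording, so it tends to echo rather than answer; anything that should feel like a real reply needs some notion of intent (even a hand-made table mapping categories of lines to categories of responses would help).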

    Read the article

  • What speech libraries are available in Linux?

    - by George Edison
    When it comes to TTS (text-to-speech) libraries in Linux, what choices do developers have? What libraries ship with the majority of distros? Are there minimal libraries? What functionality does each library offer? I'm approaching this primarily from a C++ point of view, although Python would suit me too.
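    For quick experiments, espeak and Festival are the engines most commonly found in distribution repositories; espeak is small, exposes a C API (libespeak), and can also be driven from any language by shelling out to its command-line tool. A minimal sketch, shown in Python, though the same invocation works from C++ via system() or popen():

        import subprocess

        # Speak a sentence through the espeak command-line tool at ~140 words/min.
        subprocess.run(["espeak", "-s", "140", "Hello from a Linux text to speech engine"],
                       check=True)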

    Read the article

  • Speech Recognition in iPhone app

    - by PRN
    Hello, is it possible to access speech recognition in an iPhone app, similar to the voice dialing available on the iPhone 3GS? When the user says something, the related information needs to be fetched. Is there a particular API for this? I have seen such apps on iTunes, but how do I go about it? Thanks in advance.

    Read the article

  • Don't Miss The OpenWorld Session: The Impact of the Upcoming Revenue Recognition and Lease Accounting Changes

    - by Theresa Hickman
    Would you like to learn more about the revenue recognition and lease accounting changes from subject matter experts? Would you like to better prepare your organization for the upcoming changes? If so, it's not too late to register for OpenWorld 2012 and meet Christopher Smith and Ashima Jain from PwC, as well as our resident accounting expert, Seamus Moran, who will be presenting at Session 9462: The Impact of the Upcoming Revenue Recognition and Lease Accounting Changes. Here are the details of the session: Date: Oct. 1, 2012; Time: 10:45-11:45 a.m.; Place: Moscone West, Room 2005. Abstract: With the new revenue recognition rules expected to be issued this year and the lease accounting rules expected to be issued next year (both expected to be applied retroactively), businesses all around the world face many changes before the effective date of these proposed standards. In this session, learn from PricewaterhouseCoopers about the potential impact on accounting, processes, and systems, and hear from Oracle about the proposed updates to Oracle E-Business Suite to assist you in assessing the impact on existing contracts, technology, and processes.

    Read the article

  • Optimal Compression for Speech

    - by ashes999
    I'm designing a game that depends heavily on audio; I will have some 300+ speech files (most of them just a word or two long), which can very quickly inflate the size of the final game. What's the optimal way to encode/compress speech files to keep the size minimal without introducing audio artifacts? Please address both per-file compression/encoding and zipping/compressing the set of all speech files together in your answer, because I'm not sure which (or what combination of the two) will give the best results. Edit: I need this to run on Silverlight and Android, so I'm presumably stuck with MP3 as my only option (other than uncompressed WAV files).
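    Speech stays intelligible at far lower bitrates than music, so a common starting point is mono, a reduced sample rate, and a low constant bitrate, then judging artifacts by ear. A batch-encoding sketch using the ffmpeg command-line tool (assumed to be installed; paths are illustrative):

        import pathlib
        import subprocess

        for wav in pathlib.Path("speech").glob("*.wav"):
            subprocess.run([
                "ffmpeg", "-y", "-i", str(wav),
                "-ac", "1",        # downmix to mono
                "-ar", "22050",    # 22.05 kHz is plenty for voice
                "-b:a", "48k",     # try 32k-64k and listen for artifacts
                str(wav.with_suffix(".mp3")),
            ], check=True)

    Zipping the resulting MP3s gains almost nothing, since they are already compressed; storing them uncompressed in the package keeps load times down.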

    Read the article

  • Java Speech Example: Encode, Stream, Decode, Play

    - by Dewayne
    I have been trying to find an example of this that I could use for a couple of years, I'm ashamed to admit. I would like to see a working, compilable example (most that I find online either don't compile or don't actually work) of reading from the microphone, encoding the voice data in a speech-friendly codec such as Speex, and streaming that data in real time to a decoder which then plays the audio. I suppose this example would simply echo what is said. I would ultimately like to use this to learn to make an audio-mixing chat server.

    Read the article
