Search Results

Search found 299 results on 12 pages for 'mic'.


  • ActionScript 3: Monitoring the activity level for multiple microphones doesn't seem to work

    - by Dave
    For a project I want to show all available webcams and microphones, so that the user can easily select whichever webcam/microphone combination they prefer. I run into an issue with the microphones listing though. Each microphone is listed with an activity animation and its name. I am able to list all Microphones just fine (using the Microphone.names Array), but it seems like I can only get the activity viewer to work for one microphone. The other microphones show up with '-1' activity, which (as far as I know) is Flex for 'present, but not in use'. When unplugging the microphone that does show activity, the next one (in my case, the mic-in line on my motherboard) shows up with '0' activity (it's not connected, so that makes sense). During my testing I have a total of 3 microphones available: the not-connected onboard mic-in port, and two connected microphones. For testing purposes I use a timer that traces the current microphone activity every 100 ms, and the graph is also shown. It does not seem to matter what default microphone I set via Flash's settings panel. The code: I've only attached the relevant code snippets below to make it easier for you to read through them. Please let me know if you prefer the entire code. Main application.mxml Note: cont is a VBox. i is defined before this code snippet. var mics:Array = Microphone.names; for(i=0; i < mics.length; i++){ var mic:settingsMicEntry = new assets.settingsMicEntry; mic.d = {name: mics[i], index: i}; cont.addChild(mic); } assets/settingsMicEntry.mxml timer is defined before this code snippet. The SoundTransform is added to silence local microphone playback. Excluding this code does not solve the problem, sadly (I've tried). display is an MXML Canvas object. mic = Microphone.getMicrophone(d.index); if(mic){ // Temporary: The Microphones' visualizer var bar:Box = new Box(); bar.y = 50; bar.height = 0; bar.width = 66; bar.setStyle("backgroundColor", 0x003300); display.addChild(bar); var tf:SoundTransform = new SoundTransform(0); mic.setLoopBack(true); mic.soundTransform = tf; timer = new Timer(100); timer.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void{ var h:int = Math.floor((display.height/100)*mic.activityLevel); bar.height = (h>-1) ? h : 0; bar.y = (h>-1) ? display.height-h : display.height; trace('TIMER: '+h+' from '+d.name); }); timer.start(); } I'm pulling my hair out here, so any help is much appreciated! Thanks, -Dave P.S.: Pardon the messiness of the code!
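
    Not a fix for the Flex snippet above, but the same idea — enumerate every capture device and poll a rough input level from each one independently — can be sketched in plain Java with javax.sound.sampled for comparison. The class name, sample format and buffer sizes below are arbitrary choices, not anything from the original post.

        import javax.sound.sampled.*;

        public class MicLevels {

            // Rough RMS level of 16-bit little-endian mono PCM samples.
            static double rms(byte[] buf, int len) {
                long sum = 0;
                int samples = len / 2;
                for (int i = 0; i + 1 < len; i += 2) {
                    int s = (short) ((buf[i] & 0xff) | (buf[i + 1] << 8));
                    sum += (long) s * s;
                }
                return samples == 0 ? 0 : Math.sqrt((double) sum / samples);
            }

            public static void main(String[] args) {
                AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false);
                DataLine.Info info = new DataLine.Info(TargetDataLine.class, fmt);

                for (Mixer.Info mi : AudioSystem.getMixerInfo()) {
                    Mixer mixer = AudioSystem.getMixer(mi);
                    if (!mixer.isLineSupported(info)) continue;      // playback-only device, skip
                    TargetDataLine line = null;
                    try {
                        line = (TargetDataLine) mixer.getLine(info); // each device gets its own line
                        line.open(fmt);
                        line.start();
                        byte[] buf = new byte[8820];                 // ~100 ms of mono 16-bit audio
                        int n = line.read(buf, 0, buf.length);
                        System.out.printf("%s -> level %.1f%n", mi.getName(), rms(buf, n));
                    } catch (LineUnavailableException e) {
                        System.out.println(mi.getName() + " -> unavailable");
                    } finally {
                        if (line != null) line.close();
                    }
                }
            }
        }

    The point of the sketch is only that each device gets its own independently opened line, so the level readings cannot interfere with one another.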

    Read the article

  • Unable to SSH into EC2 instance on Fedora 17

    - by abhishek
    I did following steps But I am not able to SSH to it(Same steps work fine on Fedora 14 image). I am getting Permission denied (publickey,gssapi-keyex,gssapi-with-mic) I created new instance using fedora 17 amazon community image(ami-2ea50247). I copied my ssh keys under /home/usertest/.ssh/ after creating a usertest I have SELINUX=disabled here is Debug info: $ ssh -vvv ec2-54-243-101-41.compute-1.amazonaws.com ssh -vvv ec2-54-243-101-41.compute-1.amazonaws.com OpenSSH_5.2p1, OpenSSL 1.0.0b-fips 16 Nov 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ec2-54-243-101-41.compute-1.amazonaws.com [54.243.101.41] port 22. debug1: Connection established. debug1: identity file /home/usertest/.ssh/identity type -1 debug1: identity file /home/usertest/.ssh/id_rsa type -1 debug3: Not a RSA1 key file /home/usertest/.ssh/id_dsa. debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'Proc-Type:' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'DEK-Info:' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/usertest/.ssh/id_dsa type 2 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9 debug1: match: OpenSSH_5.9 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: 
aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 131/256 debug2: bits set: 506/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: filename /home/usertest/.ssh/known_hosts debug3: check_host_in_hostfile: match line 17 debug3: check_host_in_hostfile: filename /home/usertest/.ssh/known_hosts debug3: check_host_in_hostfile: match line 17 debug1: Host 'ec2-54-243-101-41.compute-1.amazonaws.com' is known and matches the RSA host key. debug1: Found key in /home/usertest/.ssh/known_hosts:17 debug2: bits set: 500/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/usertest/.ssh/identity ((nil)) debug2: key: /home/usertest/.ssh/id_rsa ((nil)) debug2: key: /home/usertest/.ssh/id_dsa (0x7f904b5ae260) debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic debug3: start over, passed a different list publickey,gssapi-keyex,gssapi-with-mic debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup gssapi-with-mic debug3: remaining preferred: publickey,keyboard-interactive,password debug3: authmethod_is_enabled gssapi-with-mic debug1: Next authentication method: gssapi-with-mic debug3: Trying to reverse map address 54.243.101.41. debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_500' not found debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_500' not found debug1: Unspecified GSS failure. 
Minor code may provide more information debug2: we did not send a packet, disable method debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Trying private key: /home/usertest/.ssh/identity debug3: no such identity: /home/usertest/.ssh/identity debug1: Trying private key: /home/usertest/.ssh/id_rsa debug3: no such identity: /home/usertest/.ssh/id_rsa debug1: Offering public key: /home/usertest/.ssh/id_dsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic debug2: we did not send a packet, disable method debug1: No more authentication methods to try. Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

    Read the article

  • Listening to the iPhone mic with SCListener and playing music at the same time: how?

    - by Eamon Ford
    Hello, I am using Stephen Celis' SCListener class (for iPhone) to "listen" from the microphone, but I also need to be playing music at the same time using the MediaPlayer framework. However, when I start listening with SCListener, the music fades out and stops. I have set the kAudioSessionCategory_PlayAndRecord property on the audio session in SCListener, which should allow me to play audio and record audio at the same time, but as far as I can tell it has no effect. I'm confused, because according to other developers' results, this works just fine, but not for me. I'm thinking maybe the kAudioSessionCategory_PlayAndRecord property allows you to play sound and record if you're using the AVAudioPlayer framework or something to play the sound, but maybe not the MediaPlayer framework? This would be a problem for me because I need to play music from the user's iPod library, which, as far as I know is only possible to do using the MediaPlayer framework. Does anyone know how I can get around this problem? Thanks in advance!

    Read the article

  • EC2 SSH access from Fedora

    - by Randika Rathugama
    I'm trying to connect to existing instance of EC2 with a new PEM. But I get this error when I try to connect. Here is what I did so far. I created the PEM on EC2 and saved it to ~/.ssh/my-fedora.pem and ran this command; is there anything else I should do? [randika@localhost ~]$ ssh -v -i ~/.ssh/my-fedora.pem [email protected] OpenSSH_5.3p1, OpenSSL 1.0.0-fips-beta4 10 Nov 2009 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com [xx-xx-xx-xx] port 22. debug1: Connection established. debug1: identity file /home/randika/.ssh/saberion-fedora.pem type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_4.7 debug1: match: OpenSSH_4.7 pat OpenSSH_4* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com' is known and matches the RSA host key. debug1: Found key in /home/randika/.ssh/known_hosts:5 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-with-mic debug1: Next authentication method: gssapi-with-mic debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_500' not found debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_500' not found debug1: Unspecified GSS failure. Minor code may provide more information debug1: Next authentication method: publickey debug1: Offering public key: [email protected] debug1: Authentications that can continue: publickey,gssapi-with-mic debug1: Offering public key: [email protected] debug1: Authentications that can continue: publickey,gssapi-with-mic debug1: Trying private key: /home/randika/.ssh/saberion-fedora.pem debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey,gssapi-with-mic debug1: No more authentication methods to try. Permission denied (publickey,gssapi-with-mic).

    Read the article

  • External microphone not working

    - by haireefairee
    gnome-volume-control does not recognise external hardware. My headphones work nonetheless, but an external microphone does not. External microphones used to work, but at times were temperamental - I would have to log in or log out with or without the microphone plugged in. I am running Ubuntu 10.04 LTS (Lucid Lynx) on an MSI U100 Wind notebook with one Intel soundcard, and I am trying to use a jack microphone which has worked previously. USB microphones have also been problematic. I have done the basics: Installed upgrades. Checked nothing is muted. Looked for the device in gnome-volume-control. Tried using a different microphone that works on a friend's computer. Tested that my microphone works when using a different computer. Checked that my soundcard can be seen (cat /proc/asound/cards). I have done more complicated things: I have tried playing around with settings in alsamixer. Nothing is muted. I can adjust "mic" and "internal mic" regardless of whether an external microphone is plugged in. I have the choice of input source from "mic", "front mic", "line" and "CD". I've played around changing this and it hasn't helped. I only have one CAPTURE option. In gnome-sound-recorder I have the choice of line, microphone 1 and microphone 2. I have played around changing this option. None of these pick up sound from the external microphone. Microphone 2 is the microphone on my laptop, which is bad quality. In gnome-sound-recorder I have the choice of different profiles, and changing this has not helped either. I have looked at gstreamer-properties but none of that seemed helpful. I don't know if there is a way to check whether these external devices are being picked up. I would like to make an external microphone work. Please help!

    Read the article

  • Acer Aspire microphone issue

    - by Dr_Bunsen
    I have got an Acer Aspire 7750G. It has been running Ubuntu since I bought the laptop (12.10 as of now), but a few updates ago I lost my microphone. It doesn't recognise the internal mic or the mic jack (although ALSA seems to recognise it). I have been looking around, and I fixed it, but this was only temporary; after a reboot my mic was gone again. I haven't been able to find the fix again, so I wanted to ask whether you know what I have to do to fix this. Thanks in advance.

    Read the article

  • View the Task's activity stack

    - by Mic
    Hi There, I just started developing a simple Android application while I'm still learning its platform. I'm using Eclipse IDE with the ADT plugin 0.9.6 and I need to know if it's possible to view the activity stack that is associated with a Task. Is there any way through the DDMS tool or through any other technique? Thanks in advance for your help/advice. Mic
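
    One programmatic way to peek at a task's stack at runtime, assuming the API levels of that era (getRunningTasks is deprecated and restricted on modern Android, and the task count below is arbitrary), is to log what ActivityManager reports; `adb shell dumpsys activity` gives a fuller dump from the command line:

        import android.app.Activity;
        import android.app.ActivityManager;
        import android.content.Context;
        import android.os.Bundle;
        import android.util.Log;

        import java.util.List;

        public class StackDumpActivity extends Activity {

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);

                ActivityManager am = (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
                // Ask for the most recent tasks; old platforms need the GET_TASKS permission.
                List<ActivityManager.RunningTaskInfo> tasks = am.getRunningTasks(3);
                for (ActivityManager.RunningTaskInfo task : tasks) {
                    Log.d("StackDump", "task id=" + task.id
                            + " activities=" + task.numActivities
                            + " base=" + task.baseActivity.flattenToShortString()
                            + " top=" + task.topActivity.flattenToShortString());
                }
            }
        }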

    Read the article

  • Unable to mute the microphone in Android

    - by J Andy
    Hi, I'm working on an app that uses the AudioRecord class to get input data from the phone mic. For some reason I'm unable to mute the mic. I have tried different AudioSources (DEFAULT, MIC and VOICE_UPLINK) when creating the AudioRecord object, but there's no difference in the muting behaviour. The muting itself I'm trying to achieve with the AudioManager#setMicrophoneMute() method. Any ideas? Thanks.
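
    For reference, a minimal sketch of how AudioManager#setMicrophoneMute() is normally invoked (the helper class name is made up, and the MODIFY_AUDIO_SETTINGS permission is assumed in the manifest). On many devices this flag only affects the telephony/voice path, which may be why AudioRecord keeps delivering data; an app-level fallback is to keep reading but zero out the buffer before using it.

        import android.content.Context;
        import android.media.AudioManager;

        public final class MicMuter {

            private MicMuter() {}

            // Toggle the platform microphone-mute flag.
            // Note: on many devices this only affects the telephony/voice path,
            // so data read through AudioRecord may keep arriving regardless.
            public static void setMuted(Context context, boolean muted) {
                AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
                am.setMicrophoneMute(muted);
            }

            public static boolean isMuted(Context context) {
                AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
                return am.isMicrophoneMuted();
            }
        }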

    Read the article

  • How to use files/streams as source/sink in PulseAudio

    - by Nilesh
    I'm a PulseAudio noob, and I'm not sure if I'm even using the correct terminology. I've seen that PulseAudio can perform echo cancellation, but it needs a source and a sink to filter from, and a new source and sink. I can provide my mic and my audio-out as the source and sink, right? Now, here's my situation: I have two video streams, say RTMP streams, or consider two FLV files. At any given moment, stream X is the input stream that's coming from another computer's webcam+mic, and stream Y is the output stream that I'm sending (it's coming from my computer's webcam+mic). Question: back to the first paragraph - here's the thing: I don't want to use my mic and my audio-out; instead, I want to use these two "input" and "output" streams as my source and sink, so to speak (of course, I'll maybe use Xuggler to extract just the audio from X and Y). It may be a strange question, and I have my reasons for doing this strange thing - I need to experiment and verify the results to see.

    Read the article

  • How to create live streaming audio for websites?

    - by Kathir
    Hi all, we are storing sound from a mic to a PC via Sound Forge. We would like to broadcast the sound which comes from the mic to the PC as live streaming audio. Basically, a person speaks into a mic and we would like to serve it as a live audio stream. The website is hosted on a Yahoo server. Can you please let me know what ways there are to achieve this? Thanks, Kathir

    Read the article

  • Microphone not recognised on my TekNmotion Intruder headset?

    - by Mitch Gardner
    I own a TekNmotion Intruder headset, and Ubuntu 12.04 doesn't seem to even notice that my mic exists. I just re-installed Ubuntu and that doesn't seem to help either. They are a bit of a no-name brand and I can't find drivers for them. The audio plays clearly through the headphones, but my mic won't record using "Record my desktop", works only rarely with Skype, and doesn't even show up when I go to Sound Settings > Input. Help please!

    Read the article

  • Dell Latitude E6420 in-built microphone not working

    - by user38845
    I'm running Ubuntu 12.04 64-bit on a Dell Latitude E6420. The built-in microphone is not working, and is not listed in the sound settings dialog. According to lspci, the device is: 00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04) I've also looked at alsamixer, and can see a "mic" and a "dock mic", but fiddling with these and increasing to max amplification doesn't appear to change anything. Note: I've raised a certification question on the same topic here: https://answers.launchpad.net/ubuntu-certification/+question/200678

    Read the article

  • SSH into Fedora 17 will not work with new users

    - by psion
    I just deployed a new Fedora 17 server on the Amazon EC2. I was able to log in as ec2-user with my generated keypair, but I cannot log in under normal circumstances as a user I created. This is just a normal ssh: ssh user@ip-address Any ideas on what is going on here? EDIT: This is a snippit from my sshd_config file # To disable tunneled clear text passwords, change to no here! PasswordAuthentication no #PermitEmptyPasswords no PasswordAuthentication no EDIT AGAIN: This is the output of ssh -v. OpenSSH_5.8p2, OpenSSL 1.0.0i-fips 19 Apr 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to 107.23.2.165 [107.23.2.165] port 22. debug1: Connection established. debug1: identity file /home/psion/.ssh/id_rsa type 1 debug1: identity file /home/psion/.ssh/id_rsa-cert type -1 debug1: identity file /home/psion/.ssh/id_dsa type 2 debug1: identity file /home/psion/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9 debug1: match: OpenSSH_5.9 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA 19:cb:84:21:a9:0e:83:96:2f:6a:fa:7d:ce:39:0f:31 debug1: Host '107.23.2.165' is known and matches the RSA host key. debug1: Found key in /home/psion/.ssh/known_hosts:5 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic debug1: Next authentication method: gssapi-keyex debug1: No valid Key exchange context debug1: Next authentication method: gssapi-with-mic debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_1000' not found debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_1000' not found debug1: Unspecified GSS failure. Minor code may provide more information debug1: Unspecified GSS failure. Minor code may provide more information debug1: Next authentication method: publickey debug1: Offering DSA public key: /home/psion/.ssh/id_dsa debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic debug1: Offering RSA public key: /home/psion/.ssh/id_rsa debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic debug1: No more authentication methods to try. Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

    Read the article

  • Microphones not working on Apple MacBook Air 1,1 (Early 2008) under Linux

    - by jj_p
    I'm running Linux on an mba. I can't make the microphones (neither external nor internal) work. I test using alsamixer and arecord -d 5 test-mic.waw together with aplay test-mic.waw It seems there is a problem with kernel trying to decipher Apple (intentionally) corrupted 'bios', in particular the mic pins are wrongly assigned. As far as we are concerned here, is there any difference between using EFI and BIOS-compatibility mode? (see https://wiki.archlinux.org/index.php/MacBook where they claim to have everything working out of the box on mba1,1) A nice proposal would be to compile the latest Linux kernel and run hda-jack-retask to find the right configuration (in the case of Realtek codec, the missing things I'm supposed to check are either some vendor-specific COEF verbs, EAPD or GPIO setup.), and then come up with a kernel patch to address the issue. Since I'm not that familiar with this last part of the story, can anyone help me through this process? Some useful data: The output from alsa script run as root http://www.alsa-project.org/db/?f=adae8ebee1007043fe83414ac4972319e02255fa The command hda-jack-sense-test -a (with external mic in) Pin 0x14 (Internal Speaker): present = No Pin 0x15 (Green HP Out): present = Yes Pin 0x16 (Not connected): present = No Pin 0x17 (Not connected): present = No Pin 0x18 (Not connected): present = No Pin 0x19 (Not connected): present = No Pin 0x1a (Not connected): present = No Pin 0x1b (Not connected): present = No Pin 0x1c (Not connected): present = No Pin 0x1d (Not connected): present = No Pin 0x1e (Not connected): present = No Pin 0x1f (Not connected): present = No Most likely the chip is Realtek ALC885 (compare also ALC889A) http://guide-images.ifixit.net/igi/bBTSqaeK5JpQ1AWe.large , although at the moment alsa reads it as ALC889A Takashi Iwai's tutorial https://www.kernel.org/doc/Documentation/sound/alsa/HD-Audio.txt Some people researched the original files from a running OS X installation on this same model (I think the relevant files are AppleHDA.kext/Contents/MacOS/AppleHDA AppleHDA.kext/Contents/PlugIns/AppleHDAHardwareConfigDriver.kext/Contents/Info.p????list AppleHDA.kext/Contents/Resources/layout12.xml.zlib AppleHDA.kext/Contents/Resources/Platforms.xml.zlib) http://www.insanelymac.com/forum/topic/220090-alc889a-pin-configuration/#entry1554954 Datasheet http://www.realtek.info/pdf/ALC885_1-1.pdf (from the same Realtek, one can also try to download Linux driver, but this is just taken from ALSA project, as stated in the readme file.) Compare with this Arch guy http://www.alsa-project.org/db/?f=3ca8243c0626844f0264a3faad0aa72018bc14f4 Here for the first time support to audio (except mics) for mba1,2 (which is morally the same as 1,1) is patched into the kernel http://www.alsa-project.org/pipermail/alsa-devel/2010-February/025511.html The same jack supposedly works both for HP and ext MIC, I think it's called TRRS, and it's the same as the one used e.g. for iphones This guy might have done a similar job, though to a more recent version and for sound globally, not just mics: http://blogs.aerys.in/jeanmarc-leroux/2013/09/15/fixing-2013-macbook-air-ubuntu-sound-issue/ (this is mirror to http://unix.stackexchange.com/questions/73044/microphones-not-working-on-apple-macbook-air-1-1-early-2008-under-linux )

    Read the article

  • Can't get SSH public key authentication to work

    - by Trey Parkman
    My server is running CentOS 5.3. I'm on a Mac running Leopard. I don't know which is responsible for this: I can log on to my server just fine via password authentication. I've gone through all of the steps for setting up PKA (as described at http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-ssh-beyondshell.html), but when I use SSH, it refuses to even attempt publickey verification. Using the command ssh -vvv user@host (where -vvv cranks up verbosity to the maximum level) I get the following relevant output: debug2: key: /Users/me/.ssh/id_dsa (0x123456) debug1: Authentications that can continue: publickey,gssapi-with-mic,password debug3: start over, passed a different list publickey,gssapi-with-mic,password debug3: preferred keyboard-interactive,password debug3: authmethod_lookup password debug3: remaining preferred: ,password debug3: authmethod_is_enabled password debug1: Next authentication method: password followed by a prompt for my password. If I try to force the issue with ssh -vvv -o PreferredAuthentications=publickey user@host I get debug2: key: /Users/me/.ssh/id_dsa (0x123456) debug1: Authentications that can continue: publickey,gssapi-with-mic,password debug3: start over, passed a different list publickey,gssapi-with-mic,password debug3: preferred publickey debug3: authmethod_lookup publickey debug3: No more authentication methods to try. So, even though the server says it accepts the publickey authentication method, and my SSH client insists on it, I'm rebutted. (Note the conspicuous absence of an "Offering public key:" line above.) Any suggestions?

    Read the article

  • SSH Kerberos auth in Mac OS X 10.7

    - by deemstone
    I just upgraded my Mac OS to 10.7 Lion. It worked well before, but now only kinit works normally and I can't ssh to my server. After reinstalling the "Mac OS X Kerberos Extras", nothing is better. Can anyone give me some help? Thanks a lot! My command line: Myname$ ssh [email protected] -v ...... debug1: Authentications that can continue: gssapi-with-mic,password debug1: Next authentication method: gssapi-with-mic debug1: Miscellaneous failure (see text) UNKNOWN_SERVER while looking up 'host/[email protected]' (cached result, timeout in 1200 sec) debug1: An invalid name was supplied unknown mech-code 0 for mech 1 2 752 43 14 2 debug1: Miscellaneous failure (see text) unknown mech-code 0 for mech 1 3 6 1 5 5 14 debug1: Authentications that can continue: gssapi-with-mic,password debug1: An unsupported mechanism was requested unknown mech-code 0 for mech 1 3 5 1 5 2 7 debug1: Miscellaneous failure (see text) unknown mech-code 0 for mech 1 3 6 1 5 2 5 debug1: Next authentication method: password [email protected]'s password:

    Read the article

  • Audio input problem in Ubuntu 9.10

    - by Andrea Ambu
    My audio input is a mix of my mic output and my sound card output. I'd like it to be just my mic output. I was able to do so in Ubuntu 9.04, but the interface in 9.10 is totally changed and I have tried everything my creativity was able to think of. It's really annoying when talking to other people over the internet because they keep hearing their own voice back. I'm not sure I explained it in a clear way, so I'll give you an example: What I do: I put an mp3 on play or a video on YouTube, then open a recorder and start to talk on my mic. What happens: both my voice and the audio from the mp3/YouTube get recorded, even if I put the headphones volume to 0 (via hardware). What I'd like to happen: only my voice should be recorded. I'm sure I'm missing some technical term, but that's the problem and I'd like to solve it in Ubuntu 9.10. Any idea?

    Read the article

  • Is there a way to set up an SMTP relay that allows users of a web app to have the web app send email

    - by mic
    The web service sends out emails on behalf of the users to their customers. So [email protected] uses the web service, and the web service sends emails. The emails should appear as coming from [email protected]. Currently what we are trying to do is configure the web service to act as an email client for each user, with each user able to create their own profile in which they configure their SMTP server credentials. But given that there are more configuration options than you can shake a stick at (not to mention trying to explain to users what info to get from where: POP before SMTP, TLS, SSL, AUTH, etc.), I am wondering if there could be a different way. How, if at all, could this be approached? Can I set up a Postfix server to do what I need without running into another admin nightmare or being blocked for spamming? Thank you for your insights.
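
    As a sketch of the "web service acts as each user's mail client" option (not a recommendation over the Postfix relay route), here is roughly what the per-user sending path looks like with JavaMail; every class name, host, port and property below is an assumption, and the credentials would come from the user's stored profile:

        import java.util.Properties;
        import javax.mail.*;
        import javax.mail.internet.*;

        public class UserMailer {

            /** Hypothetical per-user SMTP profile stored by the web app. */
            public static class SmtpProfile {
                String host;          // e.g. the user's own provider
                int port = 587;       // STARTTLS submission port
                String username;
                String password;
                String fromAddress;   // the address the mail should appear to come from
            }

            public static void send(SmtpProfile p, String to, String subject, String body)
                    throws MessagingException {
                Properties props = new Properties();
                props.put("mail.smtp.host", p.host);
                props.put("mail.smtp.port", String.valueOf(p.port));
                props.put("mail.smtp.auth", "true");
                props.put("mail.smtp.starttls.enable", "true");

                // Authenticate against the user's own SMTP server with their credentials.
                Session session = Session.getInstance(props, new Authenticator() {
                    @Override
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication(p.username, p.password);
                    }
                });

                Message msg = new MimeMessage(session);
                msg.setFrom(new InternetAddress(p.fromAddress)); // mail appears to come from the user
                msg.setRecipients(Message.RecipientType.TO, InternetAddress.parse(to));
                msg.setSubject(subject);
                msg.setText(body);
                Transport.send(msg);
            }
        }

    The relay alternative avoids storing user SMTP credentials, but mail sent "from" customer domains through your own Postfix box is likely to fail SPF/DKIM checks unless those domains explicitly authorize your server.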

    Read the article

  • Is there a way to have Microsoft Exchange server override the from field in outgoing mail?

    - by mic.sca
    I need to know if it's possible on an Exchange server to filter outgoing mail and override the From address in certain cases. We have to set up many Exchange users who will only be able to access Exchange through Outlook Web Access. All their outgoing e-mails, when received by recipients outside our company, should appear to be sent from a single generic address and not from the users' addresses. Does anyone know whether this is possible or not? Thank you, Michele

    Read the article

  • How to send an audio stream via UDP in Java?

    - by Nob Venoda
    Hi to all :) I have a problem, i have set MediaLocator to microphone input, and then created Player. I need to grab that sound from the microphone, encode it to some lower quality stream, and send it as a datagram packet via UDP. Here's the code, i found most of it online and adapted it to my app: public class AudioSender extends Thread { private MediaLocator ml = new MediaLocator("javasound://44100"); private DatagramSocket socket; private boolean transmitting; private Player player; TargetDataLine mic; byte[] buffer; private AudioFormat format; private DatagramSocket datagramSocket(){ try { return new DatagramSocket(); } catch (SocketException ex) { return null; } } private void startMic() { try { format = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 8000.0F, 16, 2, 4, 8000.0F, true); DataLine.Info info = new DataLine.Info(TargetDataLine.class, format); mic = (TargetDataLine) AudioSystem.getLine(info); mic.open(format); mic.start(); buffer = new byte[1024]; } catch (LineUnavailableException ex) { Logger.getLogger(AudioSender.class.getName()).log(Level.SEVERE, null, ex); } } private Player createPlayer() { try { return Manager.createRealizedPlayer(ml); } catch (IOException ex) { return null; } catch (NoPlayerException ex) { return null; } catch (CannotRealizeException ex) { return null; } } private void send() { try { mic.read(buffer, 0, 1024); DatagramPacket packet = new DatagramPacket( buffer, buffer.length, InetAddress.getByName(Util.getRemoteIP()), 91); socket.send(packet); } catch (IOException ex) { Logger.getLogger(AudioSender.class.getName()).log(Level.SEVERE, null, ex); } } @Override public void run() { player = createPlayer(); player.start(); socket = datagramSocket(); transmitting = true; startMic(); while (transmitting) { send(); } } public static void main(String[] args) { AudioSender as = new AudioSender(); as.start(); } } And only thing that happens when I run the receiver class, is me hearing this Player from the sender class. And I cant seem to see the connection between TargetDataLine and Player. Basically, I need to get the sound form player, and somehow convert it to bytes[], therefore I can sent it as datagram. Any ideas? Everything is acceptable, as long as it works :)
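
    The post only shows the sending side. A minimal receiver to pair with it, assuming the same AudioFormat and the port/packet size used above, just reads datagrams and writes the raw PCM to a SourceDataLine. Note that port 91 is privileged on Unix-like systems, and raw 8 kHz 16-bit stereo PCM is uncompressed, so this is not yet the "lower quality stream" the question asks for.

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import javax.sound.sampled.AudioFormat;
        import javax.sound.sampled.AudioSystem;
        import javax.sound.sampled.DataLine;
        import javax.sound.sampled.SourceDataLine;

        public class AudioReceiver {

            public static void main(String[] args) throws Exception {
                // Must match the format used by the sender's TargetDataLine.
                AudioFormat format = new AudioFormat(
                        AudioFormat.Encoding.PCM_SIGNED, 8000.0f, 16, 2, 4, 8000.0f, true);

                DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
                SourceDataLine speaker = (SourceDataLine) AudioSystem.getLine(info);
                speaker.open(format);
                speaker.start();

                // Same port the sender targets; ports below 1024 may need elevated privileges.
                DatagramSocket socket = new DatagramSocket(91);
                byte[] buffer = new byte[1024];   // same packet size as the sender
                System.out.println("Listening for audio on UDP port 91...");

                while (true) {
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    socket.receive(packet);
                    // Write the raw PCM payload straight to the audio output.
                    speaker.write(packet.getData(), 0, packet.getLength());
                }
            }
        }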

    Read the article

  • Windows Phone 7 Prototype 001: Speech Recognition on WP7

    At some point in the future it will be awesome when you can just tell your computer what to do and it does it - without typing, to help those of us with a blistering 11 WPM hunt-and-peck technique. Siri, a mobile digital assistant using speech recognition, was voted best tech at SXSW. I don't know about that one. Although I'm sure it will get better when Apple rebuilds it and bundles it on the iPhone 5. So how would you do that on WP7? There have been some videos floating around showing Bing with some voice control, so obviously the phone has speech recognition. So what options are there: System.Speech? Not included in WP7/SL. Nuance software like Siri? No WP7/SL version yet. Invoking the SAPI DLLs on the phone? No automation factory in WP7 SL. Web services using System.Speech and the mic on the phone? YES! The last one was my least favorite, but that works for now. I built a quick sample app to show how to do text-to-speech and speech recognition on WP7. @eklimczak will not be happy with the developer-designed UI. In this sample there is a web service which provides access to the System.Speech APIs in .NET. Basically it's just passing around byte arrays. On the phone it's using the XNA audio frameworks to play the text-to-speech stream and to record using the microphone. The code is pretty simple and you can download it from the link at the end of this post. The only things to note are adjusting the WCF config to handle larger byte uploads, and that the Microphone API is a little weird with that 1-second buffer. It would be nice if you could just call mic.start and mic.end, which would return an array of bytes, instead of managing your own stream inside the buffer-ready callback. A couple of downsides to this approach: Recording from the phone has some static. Could be my code, or my mic is bad / not calibrated right. Having to make web service calls instead of local access is not ideal (Microsoft, please add an API for the SAPI DLLs). Although in the context of an app like Siri it's not so bad, since you need to do web service lookups to get data back. Speech recognition quality really depends on either a) a limited grammar set like the pizza grammar in the sample, or b) training the recognizer. For the latter it would be annoying to have users train the system. Using the System.Speech stuff you'd have to have a profile for each user. So until Microsoft adds some speech client APIs on the phone or Nuance releases a WP7 product, this is a decent workaround. In the future I'd like to build something similar to Siri. I shall call it Iris in homage. I'm a big fan of mobile speech apps because frankly it's just not safe to Google while driving. Since some of my designer co-workers have been posting UI sketches for WP7, I'd like to start posting some code prototypes for things I try out on the phone. That will probably last 2 weeks, but for the moment I have like 10 posts in the queue. Sample code: 100% guaranteed to work on my emulator.
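
    The client-to-web-service pattern the post describes (record audio locally, ship the byte array to a recognition service, get text back) is not WP7-specific. A rough sketch of the same round trip in Java, with a purely hypothetical endpoint and payload format, looks like this:

        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class SpeechClient {

            // POST raw recorded audio to a (hypothetical) recognition endpoint
            // and return whatever text it sends back.
            static String recognize(byte[] audio, String endpoint) throws Exception {
                HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
                conn.setDoOutput(true);
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "application/octet-stream");

                try (OutputStream out = conn.getOutputStream()) {
                    out.write(audio);   // the captured microphone buffer goes up as one blob
                }
                try (InputStream in = conn.getInputStream()) {
                    return new String(in.readAllBytes(), "UTF-8");
                }
            }

            public static void main(String[] args) throws Exception {
                byte[] audio = Files.readAllBytes(Paths.get("utterance.wav"));
                System.out.println(recognize(audio, "http://localhost:8080/speech/recognize"));
            }
        }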

    Read the article
