Search Results

Search found 40031 results on 1602 pages for 'command message'.

Page 574/1602

  • How to configure a shortcut for an SSH connection through an SSH tunnel

    - by Simone Carletti
    My company's production servers (FOO, BAR...) are located behind two gateway servers (A, B). In order to connect to server FOO, I have to open an SSH connection to server A or B with my username JOHNDOE, then from A (or B) I can reach any production server by opening an SSH connection with a standard username (let's call it WEBBY). So, each time I have to do something like:

        ssh johndoe@a
        ...
        ssh webby@foo
        ...
        # now I can work on the server

    As you can imagine, this is a hassle when I need to use scp or when I need to quickly open multiple connections. I have configured an SSH key, and I'm also using .ssh/config for some shortcuts. I was wondering if I can create some kind of SSH configuration so that I can type ssh foo and let SSH open/forward all the connections for me. Is it possible?

    Edit: womble's answer is exactly what I was looking for, but it seems that right now I can't use netcat because it's not installed on the gateway server.

        weppos:~ weppos$ ssh foo -vv
        OpenSSH_5.1p1, OpenSSL 0.9.7l 28 Sep 2006
        debug1: Reading configuration data /Users/xyz/.ssh/config
        debug1: Applying options for foo
        debug1: Reading configuration data /etc/ssh_config
        debug2: ssh_connect: needpriv 0
        debug1: Executing proxy command: exec ssh a nc -w 3 foo 22
        debug1: permanently_drop_suid: 501
        debug1: identity file /Users/xyz/.ssh/identity type -1
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/xyz/.ssh/id_rsa type 1
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/xyz/.ssh/id_dsa type 2
        bash: nc: command not found
        ssh_exchange_identification: Connection closed by remote host
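    A minimal ~/.ssh/config sketch of the ProxyCommand approach; the hostnames are placeholders, and it assumes a reasonably recent OpenSSH client (5.4 or later), whose -W option forwards the connection through the gateway without needing netcat installed there:

        Host a
            User johndoe

        Host foo
            User webby
            ProxyCommand ssh -W %h:%p johndoe@a

    With something like this in place, ssh foo (and scp foo:/path file) should tunnel through the gateway transparently.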

    Read the article

  • ssh connection operation timed out using rsync

    - by Mark Molina
    I use rsync to back up my remote server to my local device, but when I combine it with a cron job my SSH connection times out. Just to be clear, the data is stored on a remote server and I want it stored on my local server; the backup request must be sent from my local server to the remote server. The command for backing up the data works when I just type it in a terminal like this:

        rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    but when I combine it with a cron job like this:

        10 11 * * * rsync -chavzP --stats USERNAME@IP_ADDRESS: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    the SSH connection times out. When the cron job executes, it sends a mail to the root user with output like this:

        From local.xx.xx.xx Tue Jul 2 11:20:17 2013
        X-Original-To: username
        Delivered-To: [email protected]
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <username@server> rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=username>
        X-Cron-Env: <USER=username>
        X-Cron-Env: <HOME=/Users/username>
        Date: Tue, 2 Jul 2013 11:20:17 +0200 (CEST)
        ssh: connect to host IP_ADDRESS port XX: Operation timed out
        rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
        rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-42/rsync/io.c(452) [receiver=2.6.9]

    So the rsync command works when typed in a terminal but not when run from a cron job. Can anybody explain this?
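    One hedged thing to try (an assumption about the cause, not a confirmed diagnosis): cron runs with a minimal environment, so spell out full paths and the SSH options (port, identity file, no interactive prompts) rather than relying on your interactive shell's defaults, and capture the output to a log. The paths, port and key file below are placeholders:

        10 11 * * * /usr/bin/rsync -chavzP --stats -e "/usr/bin/ssh -p 22 -i /Users/username/.ssh/id_rsa -o BatchMode=yes" USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP >> /Users/username/backup.log 2>&1

    If the log then shows the same "Operation timed out", the problem is more likely network-side at that time of day (VPN, firewall, changing IP) than anything in cron's environment.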

    Read the article

  • Can Xen be configured to dedicate only one port of a dual-port NIC to a domU?

    - by jamieb
    I'm using CentOS 5.4 on my dom0 with a stock Xen kernel. I'm attempting to use the pciback module to hide some of the Ethernet ports from the host and reserve them for a domU I intend to use for a firewall (process described here). However, when I launch the domU, I get the following error message: Using config file "/etc/xen/firewall". Error: pci: improper device assignment specified: pci: 0000:01:04.0 must be co-assigned to the same guest with 0000:01:06.0, but it is not owned by pciback. lspci gives me the following output: 00:00.0 Host bridge: Intel Corporation 82945G/GZ/P/PL Memory Controller Hub (rev 02) 00:02.0 VGA compatible controller: Intel Corporation 82945G/GZ Integrated Graphics Controller (rev 02) 00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 01) 00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 01) 00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 01) 00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 01) 00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 01) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1) 00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01) 00:1f.2 IDE interface: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller (rev 01) 00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 01) 01:04.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) 01:06.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) 01:07.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) From the sound of the error message, it seems like I also need to dedicate eth0 (PCI ID 01:04.0) to the domU. Am I correct? If not, what am I doing wrong? Thanks!
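    A sketch of how that co-assignment requirement is usually satisfied, assuming pciback is loaded as a module in dom0 and that giving all three Realtek ports to the firewall domU is acceptable; the PCI addresses are taken from the lspci output above, and the file locations are typical for CentOS 5:

        # /etc/modprobe.conf (dom0): hide the co-assigned functions from dom0
        options pciback hide=(0000:01:04.0)(0000:01:06.0)(0000:01:07.0)

        # /etc/xen/firewall: assign the same set to the guest
        pci = [ '01:04.0', '01:06.0', '01:07.0' ]

    The error suggests 01:04.0 and 01:06.0 sit in the same assignment group (behind the same PCI bridge), so yes: whatever is passed through from that group has to be hidden from dom0 and handed to the same domU together. Whether 01:07.0 belongs to the same group is an assumption here.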

    Read the article

  • (13) Permission denied on Apache CGI attempt

    - by ncv
    I have recently upgraded my Apache2 server, and am now unable to run a CGI app. My logs are showing:

        (13) Permission denied: unable to connect to cgi daemon after multiple tries

    I understand that the error message means Apache is being denied some permission to some file, and I'm stumped as to how to track down and solve the problem. Is the file mentioned in the error message truly the blocked file? Or might the problem be caused by some other needed file? The .cgi file is right where it has always been, under /usr/share. The file ownership (root) and permissions (world readable/executable) are the same as they have always been for the file and its ancestors. The SELinux file labels are unchanged. The SELinux audit log shows no denial associated with Apache or the CGI program. In case a dontaudit rule was hiding the denial, I enabled auditing of those rules, but still saw nothing. I switched SELinux into permissive mode briefly, to no avail. I even tried restarting Apache while in permissive mode. This did not solve the problem. Any suggestions on how to solve this problem? I'm tempted to just revert to the older Apache.
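    That message comes from mod_cgid, which hands requests to its CGI daemon over a Unix socket, so the denied permission may be on the socket path rather than on the .cgi file itself. A hedged check, with the paths below as examples of typical layouts rather than known values for this system:

        # Find where the cgid socket is configured (the directive may be absent, meaning a compiled-in default)
        grep -Ri scriptsock /etc/httpd /etc/apache2 2>/dev/null

        # Then confirm the Apache run user can reach and write to that directory, e.g.
        #   ScriptSock /var/run/httpd/cgisock
        ls -ldZ /var/run/httpd

    If the upgrade also switched MPMs (mod_cgi on prefork vs. mod_cgid on threaded MPMs), that alone would explain why a previously working setup suddenly cares about the socket directory.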

    Read the article

  • Mac OS X Disk Encryption - Automation

    - by jfm429
    I want to setup a Mac Mini server with an external drive that is encrypted. In Finder, I can use the full-disk encryption option. However, for multiple users, this could become tricky. What I want to do is encrypt the external volume, then set things up so that when the machine boots, the disk is unlocked so that all users can access it. Of course permissions need to be maintained, but that goes without saying. What I'm thinking of doing is setting up a root-level launchd script that runs once on boot and unlocks the disk. The encryption keys would probably be stored in root's keychain. So here's my list of concerns: If I store the encryption keys in the system keychain, then the file in /private/var/db/SystemKey could be used to unlock the keychain if an attacker ever gained physical access to the server. this is bad. If I store the encryption keys in my user keychain, I have to manually run the command with my password. This is undesirable. If I run a launchd script with my user credentials, it will run under my user account but won't have access to the keychain, defeating the purpose. If root has a keychain (does it?) then how would it be decrypted? Would it remain locked until the password was entered (like the user keychain) or would it have the same problem as the system keychain, with keys stored on the drive and accessible with physical access? Assuming all of the above works, I've found diskutil coreStorage unlockVolume which seems to be the appropriate command, but the details of where to store the encryption key is the biggest problem. If the system keychain is not secure enough, and user keychains require a password, what's the best option?
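    For the mechanics of the unlock step (leaving aside which keychain is safe enough), a minimal sketch of a script that a root LaunchDaemon could run at boot; the UUID, the keychain item name, and the use of the System keychain are illustrative assumptions, not a security recommendation:

        #!/bin/sh
        # unlock-external.sh - hypothetical boot-time unlock of the encrypted CoreStorage volume
        LV_UUID="11111111-2222-3333-4444-555555555555"   # from: diskutil coreStorage list

        # read the passphrase from a keychain item and pass it to diskutil on stdin
        security find-generic-password -s "external-drive-passphrase" -w /Library/Keychains/System.keychain |
            diskutil coreStorage unlockVolume "$LV_UUID" -stdinpassphrase

    The trade-off you describe doesn't go away: whatever keychain or file holds the passphrase must be readable at boot without interaction, so physical access still yields the key. Fetching the passphrase from another machine on the network at boot is the usual way around that, at the cost of the drive not mounting when that source is unreachable.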

    Read the article

  • SSH client not showing prompt after successful login

    - by user431949
    I'm having problems with my SSH client on Ubuntu 10.10. When I switch on my computer, open a Terminal and execute the command ssh user@host, it gives me a password prompt; after I enter the right password, I get a prompt and can execute my commands on the remote computer. Now the problem is that after a little while (probably around 10 minutes), the terminal window stops accepting commands (no matter what I type, nothing shows). Once this happens, I close the Terminal window and try to start all over again by opening another Terminal window. But this time, after entering the right password, I don't get a welcome message or prompt; the cursor just keeps blinking on a new line. I ran the ssh command with the -v parameter, and the messages I get after a successful login are:

        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: Sending env LANG = en_GB.utf8

    Still the cursor keeps blinking on a new line without a prompt. However, the PuTTY SSH client works perfectly on the same machine. Thank you very much for your time. Your help would be greatly appreciated.
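    Two hedged things to try (guesses about the cause, not a confirmed diagnosis): enable keepalives, since the first symptom looks like an idle connection being silently dropped by a NAT device or firewall, and bypass the remote shell's startup files, since a hung .bashrc/.profile on the server would produce exactly the "authenticated but no prompt" behaviour:

        # ~/.ssh/config on the Ubuntu client: keep idle sessions alive
        Host *
            ServerAliveInterval 60
            ServerAliveCountMax 3

        # Check whether a shell started without startup files gets a prompt
        ssh -t user@host /bin/bash --noprofile --norc

    If the second command gives a working prompt, the culprit is something in the remote startup files (often a command waiting on a dead mount); if it also hangs, comparing connection settings with the working PuTTY session is the next step.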

    Read the article

  • Is there an SSL equivalent to an SSH agent?

    - by Matthew J Morrison
    Here is my situation: There are a number of developers who all need to have access to be able to install ruby gems and python eggs from a remote source. Currently, we have a server inside our firewall that hosts the gems and eggs. We now want the ability to be able to install things hosted on that server outside of our firewall. Since some of the gems and eggs that we host are proprietary I would like to somewhat lock access to that machine down, as unobtrusively as possible to the developers. My first thought was using something like ssh keys. So, I spent some time looking at SSL mutual authentication. I was able to get everything set up and working correctly, testing with curl, but the unfortunate thing was that I had to pass extra arguments to curl so it knows about the certificate, key and certificate authority. I was wondering if there is anything like the ssh agent that I can set up to provide that information automatically so that I can push the certificates and keys to the developer's machines so the developers don't have to log in or provide keys each time they try to install something. Another thing that I want to avoid is having to modify the 'gem' command and the 'pip' command to provide keys when they make the http connection. Any other suggestions that may solve this problem (not related to ssl mutual auth) are also welcome. EDIT: I've been continuing to research this and I came across stunnel. I think this may be what I'm looking for, any feedback regarding stunnel would also be great!
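    On the stunnel idea: a hedged sketch of a client-side configuration that presents the client certificate automatically, so gem and pip can talk to a local plain-HTTP port and never need to know about the keys. The hostnames, ports and file paths are placeholders:

        ; /etc/stunnel/gems.conf on each developer machine (example values)
        client = yes
        cert   = /etc/stunnel/dev-client.pem
        key    = /etc/stunnel/dev-client.key
        CAfile = /etc/stunnel/ca.pem
        verify = 2

        [gemserver]
        accept  = 127.0.0.1:8808
        connect = gems.example.com:443

    The developers would then add http://127.0.0.1:8808/ as a gem source (or pip index URL), which keeps the gem and pip commands themselves unmodified, much like an agent keeps ssh unmodified.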

    Read the article

  • Rebuild Fedora 19 ISO adding Kickstart for USB install

    - by dooffas
    I am attempting to edit a Fedora 19 DVD ISO to add a kickstart file. I then need this ISO burnt to a USB stick for installation. The error I get when booting is:

        Warning: Could not boot.
        Warning: /dev/root does not exist

    To try and determine which part of the process is failing, I have broken the process down into separate stages.

    Step 1: Burn the original ISO "Fedora-19-x86_64-DVD.iso" (available here) to a pendrive and see if that will install.

        dd if=/path/to/iso of=/dev/sdc

    Burning this image was successful and it installed without issue.

    Step 2: Extract the ISO, repackage it and burn it to a pendrive and see if that will install. PLEASE NOTE: The final command in this section has been broken down into multiple lines for ease of reading; in fact it was run as a single command on one line.

        mkdir -p /mnt/linux
        mount -o loop /tmp/linux-install.iso /mnt/linux
        cd /mnt/
        tar -cvf - linux | (cd /var/tmp/ && tar -xf - )
        cd /var/tmp/linux
        xorriso -as mkisofs -R -J -V "NewFedoraImage" -o ouput/file.iso
            -b isolinux/isolinux.bin -c isolinux/boot.cat
            -no-emul-boot -boot-load-size 4 -boot-info-table
            -isohybrid-mbr /usr/share/syslinux/isohdpfx.bin .

    This ISO was then burnt to the pendrive as before.

        dd if=/path/to/iso of=/dev/sdc

    This ISO burnt to the pen drive with no problem and will boot. I then see the Fedora options screen. After choosing either "Install Fedora 19" or "Test this media & install Fedora 19" I then receive the errors highlighted above. This means the kickstart file is not to blame, but repackaging the ISO. Is there something I am missing in the repackaging process? Any input would be great! NOTE: If it is of any help, I attempted Step 2 with an Ubuntu server ISO and the process was successful.
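    A hedged suspicion about what's missing: the Fedora DVD's isolinux.cfg boot entries locate the installer image by volume label (an inst.stage2=hd:LABEL=... argument), so repackaging with -V "NewFedoraImage" can leave the initramfs unable to find its root, which is consistent with the "/dev/root does not exist" message. A sketch of how to check and keep the labels consistent; the exact label string is whatever your isolinux.cfg actually contains:

        # See which label the boot entries expect
        grep -o 'LABEL=[^ ]*' /var/tmp/linux/isolinux/isolinux.cfg | sort -u

        # Either pass that same label to xorriso via -V (instead of "NewFedoraImage"),
        # or edit isolinux.cfg to reference the new label before repackaging.

    That the Ubuntu server ISO survived the same treatment would fit this explanation, since its installer does not locate itself by volume label in the same way.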

    Read the article

  • Why won't this script accept any arguments?

    - by Nate Wagar
    I'm trying to write an SVN post-commit hook and, strangely, am getting hung up on what should be the easiest part. The script:

        set REPO="$1"
        set REV="$2"
        set SVNBIN="/opt/CollabNet_Subversion/bin/"
        set SSHBIN="/usr/bin/ssh"
        set HOST="staging.domain.net"
        set timeout=30
        set USERNAME="svn-usr"
        set E_NO_CONNECT=2
        set E_WRONG_PASS=3
        set E_UNKOWN=25
        set CHANGED=`"$SVNBIN"svnlook changed --revision $REV $REPOS`
        echo "Here are changes: $CHANGED" >> /var/svn/repos/www/logs/testing
        echo "Command: $0; Repo: $REPO; Rev: $REV; Total: $#" >> /var/svn/repos/www/logs/testing
        set PROJECT ""

    Yet when I call it, it doesn't seem to be seeing the arguments I pass to it:

        /var/svn/repos/www/logs> sudo ../hooks/post-commit /var/svn/repos/www 33
        svnlook: missing argument: --revision
        Type 'svnlook help' for usage.
        /var/svn/repos/www/logs> cat testing
        Here are changes:
        Command: ../hooks/post-commit; Repo: ; Rev: ; Total: 1

    This is on a Solaris 10 SPARC box. I'm a bit of a script newbie, but shouldn't this be really easy??
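    A hedged guess at the cause, with a sketch of a fix: set VAR=value is csh syntax, but with no #! line a post-commit hook on Solaris will typically be run by /bin/sh, where each set simply replaces the positional parameters - which matches the output above ($REPO and $REV empty, $# reduced to 1). There is also a $REPOS/$REPO mismatch on the svnlook line. A Bourne-shell version of the relevant part:

        #!/bin/sh
        REPO="$1"
        REV="$2"
        SVNBIN="/opt/CollabNet_Subversion/bin"

        CHANGED=`"$SVNBIN/svnlook" changed --revision "$REV" "$REPO"`
        echo "Here are changes: $CHANGED" >> /var/svn/repos/www/logs/testing
        echo "Command: $0; Repo: $REPO; Rev: $REV; Total: $#" >> /var/svn/repos/www/logs/testing

    Alternatively, keeping the csh syntax but adding #!/bin/csh -f as the first line (and fixing $REPOS) should also work.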

    Read the article

  • When should we choose simple PHP mail() and when SMTP with login + password?

    - by user43353
    Hi, my case: a web application that needs to send 1,000 messages per day to a main Gmail account. (It only needs to send email, not receive it - no email client.)

    Option 1 - use the PHP mail() function + sendmail + php.ini config. PHP example:

        <?php
        $to = '[email protected]';
        $subject = 'the subject';
        $message = 'hello';
        $headers = 'From: [email protected]' . "\r\n" .
            'Reply-To: [email protected]' . "\r\n" .
            'X-Mailer: PHP/' . phpversion();
        mail($to, $subject, $message, $headers);
        ?>

    php.ini config (Ubuntu):

        sendmail_path = /usr/sbin/sendmail -t -i

    Pros: no email account needed, easy to set up. Cons: ?

    Option 2 - use Zend_Mail + an SMTP transport with login + password auth. PHP example (the Zend_Mail classes need to be included):

        $config = array('auth' => 'login', 'username' => 'myusername', 'password' => 'password');
        $transport = new Zend_Mail_Transport_Smtp('mail.server.com', $config);
        $mail = new Zend_Mail();
        $mail->setBodyText('This is the text of the mail.');
        $mail->setFrom('[email protected]', 'Some Sender');
        $mail->addTo('[email protected]', 'Some Recipient');
        $mail->setSubject('TestSubject');
        $mail->send($transport);

    Pros: ? Cons: ?

    Questions: Can option 1 be filtered as spam by the Gmail mail server? Please can you add pros + cons to the options above. Thanks

    Read the article

  • SQL Full-Text indexing not populating

    - by Sam
    We installed a clustered SQL 2005 installation on windows 2008 and reattached our san drives from another machine and restored to do a migration to new hardware. There have been a few minor issues, but this one has me stuck. Trying to populate Full-Text indexes is not working. I create a basic table with some simple text in a new database and get the same results as old indexes. 2010-09-27 10:30:46.85 spid19s Informational: Full-text Full population initialized for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'). Population sub-tasks: 1. 2010-09-27 10:31:15.36 spid19s Error '0x80070003' occurred during full-text index population for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'), full-text key value 0x000001DF. Attempt will be made to reindex it. 2010-09-27 10:31:15.37 spid19s The component 'MSFTE.DLL' reported error while indexing. Component path 'D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\MSFTE.DLL'. 2010-09-27 10:31:15.37 spid19s Error '0x80070003' occurred during full-text index population for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'), full-text key value 0x000001E0. Attempt will be made to reindex it. The rebuild/repopulate procedure finishes, but I get zero rows in the index. The .dll in the message is present and the service accounts have access to this. My FTData also has data in it, so it seems there wouldn't be permission issue on this folder. Application throws this error: “PHP Warning: mssql_query() [function.mssql-query]: message: Full-text catalog 'ikm_PageIndex_FText' is in an unusable state. Drop and re-create this full-text catalog. (severity 16) in E:\Inetpub\knowledgebase_insidemesa\lib\database\mssql.php on line 154” A microsoft discussion is the only post I found which had claimed to fix this - said it was registry related, but then didn't post the fix.

    Read the article

  • Break all hardlinks within a folder

    - by Georges Dupéron
    I have a folder which contains a certain number of files which have hard links (in the same folder or somewhere else), and I want to de-hardlink these files so they become independent, and changes to their contents won't affect any other file (their link count becomes 1). Below, I give a solution which basically copies each hard link to another location, then moves it back in place. However this method seems rather crude and error-prone, so I'd like to know if there is some command which will de-hardlink a file for me.

    Crude answer: Find files which have hard links (Edit: To also find sockets, etc. that have hardlinks, use find -not -type d -links +1):

        find -type f -links +1

    A crude method to de-hardlink a file (copy it to another location, and move it back). Edit: As Celada said, it's best to do a cp -p below, to avoid losing timestamps and permissions. Edit: Create a temporary directory and copy to a file under it, instead of overwriting a temp file; it minimizes the risk of overwriting some data, though the mv command is still risky (thanks @Tobu).

        # This is unhardlink.sh
        set -e
        for i in "$@"; do
          temp="$(mktemp -d ./hardlnk-XXXXXXXX)"
          [ -e "$temp" ] && cp -ip "$i" "$temp/tempcopy" && mv "$temp/tempcopy" "$i" && rmdir "$temp"
        done

    So, to un-hardlink all hard links (Edit: changed -type f to -not -type d, see above):

        find -not -type d -links +1 -print0 | xargs -0 unhardlink.sh

    Read the article

  • Error setting up Blackberry Internet Service with Outlook Web Access

    - by Travis
    I'm trying to set up BlackBerry Internet Service to connect to our Windows SBS2003 Outlook Web Access. I've tried every possible combination of credentials but I always get the same error:

        An error occured during email account validation. Please check your information and try again. If the error persists please contact your System Administrator.

    The fields are the following:

    Outlook Web Access URL: http://mail.domain.com/exchange (I've also tried just using the IP address, http://000.000.000.000/exchange, with no effect).
    User Name: JohnDoe (same as the OWA login / domain username - I've also tried DOMAIN\JohnDoe).
    Email Address: [email protected]
    Mailbox Name: This one confused me a little bit, but it seems it should be the same as the domain username (e.g. JohnDoe). I've also tried DOMAIN\JohnDoe, and a number of other things.

    No matter what I do, I get the same error message. At this point, I'm basically just trying things, because I don't really know how this service is supposed to work. Does anyone know what causes this particularly vague error message, and what I can change either in my email settings or on our Exchange server to resolve this?

    Read the article

  • Linux commands show different results

    - by ClydeFrog
    I'm really having a hard time to process these results on my Ubuntu server. I have a major problem with my JBoss server where I get FileNotFoundExceptions along with "No space left on device" errors. And I thought "maybe I'm out of disk space", and used df command to figure out how much I have left: root@ubuntu1:/# df -h Filsystem Storlek Anvnt Tillg Anv% Monterat på /dev/mapper/ubuntu1-root 36G 13G 21G 38% / none 2,0G 192K 2,0G 1% /dev none 2,0G 0 2,0G 0% /dev/shm none 2,0G 64K 2,0G 1% /var/run none 2,0G 0 2,0G 0% /var/lock /dev/sda1 228M 23M 193M 11% /boot /dev/mapper/vgdata-lvdata 79G 9,2G 66G 13% /data And as you can see, I have plenty of space left. And I also checked if I'm out of i-nodes: root@ubuntu1:/# df -i Filsystem Inoder IAnv IFria IAnv% Monterat på /dev/mapper/ubuntu1-root 2346512 61992 2284520 3% / none 505380 773 504607 1% /dev none 507383 1 507382 1% /dev/shm none 507383 30 507353 1% /var/run none 507383 2 507381 1% /var/lock /dev/sda1 124496 230 124266 1% /boot /dev/mapper/vgdata-lvdata 10486784 233945 10252839 3% /data But then i used du: root@ubuntu1:/# du -s -h /* 7,5M /bin 23M /boot 19G /data 192K /dev 11G /eniro 5,3M /etc 112K /home 0 /initrd.img 183M /lib 0 /lib64 16K /lost+found 12K /media 4,0K /mnt 4,0K /opt du: kan inte komma åt "/proc/20452/task/20452/fd/3": Filen eller katalogen finns inte du: kan inte komma åt "/proc/20452/task/20452/fdinfo/3": Filen eller katalogen finns inte du: kan inte komma åt "/proc/20452/fd/3": Filen eller katalogen finns inte du: kan inte komma åt "/proc/20452/fdinfo/3": Filen eller katalogen finns inte 0 /proc 18M /root 8,2M /sbin 4,0K /selinux 8,0K /srv 0 /sys 40K /tmp 691M /usr 1,2G /var 0 /vmlinuz Notice that /data and /eniro are 30G combined! How is it possible? Do I have a memory leak somewhere? Or is it something else? ----- EDIT 1 ----- Ok, I figured out that /data has its own mount so it's not possible to combine /data and /eniro because they aren't on the same mount. But how come it says 9,2G on the first command when it says 19G on the third on directory /data?
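    Two hedged checks that often narrow down df/du disagreements like these (suggestions, not a confirmed diagnosis): keep du from crossing filesystem boundaries, so a directory that is also a mount point isn't counted against the wrong filesystem, and look for deleted-but-still-open files, which occupy space that df reports but du can never see:

        du -xsh /data /eniro     # -x: stay on each argument's own filesystem
        df -h /data              # compare with what the mount itself reports
        lsof +L1                 # open files whose link count is 0 (already deleted)

    It may also be worth checking whether anything was written to the mount-point directories before the filesystems were mounted on them; files shadowed underneath a mount point show up in df for / but are invisible to du in the mounted tree.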

    Read the article

  • Dovecot unable to perform mysql query

    - by NathanJ2012
    I have been following the ISPMail tutorials on workaround.org (the 2.9 Wheezy version) and thus far everything has been working fine. When I reached the "Testing email delivery" step I noticed an error about the query in the output from /var/log/mail.log:

        May 14 06:48:59 mail postfix/pickup[17704]: EA4AD240A98: uid=0 from=<root>
        May 14 06:48:59 mail postfix/cleanup[17776]: EA4AD240A98: message-id=<[email protected]>
        May 14 06:48:59 mail postfix/qmgr[17706]: EA4AD240A98: from=<[email protected]>, size=429, nrcpt=1 (queue active)
        May 14 06:49:00 mail dovecot: auth-worker(17782): mysql(127.0.0.1): Connected to database mailserver
        May 14 06:49:00 mail dovecot: auth-worker(17782): Warning: mysql: Query failed, retrying: Table 'mailserver.users' doesn't exist
        May 14 06:49:00 mail dovecot: auth-worker(17782): Error: sql([email protected]): User query failed: Table 'mailserver.users' doesn't exist (using built-in default user_query: SELECT home, uid, gid FROM users WHERE username = '%n' AND domain = '%d')
        May 14 06:49:00 mail dovecot: lda([email protected]): msgid=<[email protected]>: saved mail to INBOX
        May 14 06:49:00 mail postfix/pipe[17780]: EA4AD240A98: to=<[email protected]>, relay=dovecot, delay=0.09, delays=0.03/0.01/0/0.06, dsn=2.0.0, status=sent (delivered via dovecot service)
        May 14 06:49:00 mail postfix/qmgr[17706]: EA4AD240A98: removed

    I find it rather interesting that it isn't finding the DB, so I went back through and checked EVERY file that I touched that involved the DB (including the postfix cf files) and everything is correct, so I am baffled at this point - but oddly enough the email still made it to the correct destination in /var/vmail/domain.com/. Should I be worried about this or am I missing something here? Since it is a message from Dovecot, it would be the query from dovecot-sql.conf.ext, which I am including here:

        driver = mysql
        connect = host=127.0.0.1 dbname=mailserver user=blocked password=***REMOVED***
        default_pass_scheme = PLAIN-MD5
        password_query = SELECT email as user, password FROM virtual_users WHERE email='%u';
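    If the warning is worth silencing, the usual approach in this kind of setup is to give Dovecot an explicit user_query so it stops falling back to its built-in default (which expects a users table). A hedged sketch for dovecot-sql.conf.ext, assuming maildirs under /var/vmail/<domain>/<user> and a vmail user/group with id 5000 (adjust to your actual layout and ids):

        user_query = SELECT '/var/vmail/%d/%n' AS home, 5000 AS uid, 5000 AS gid FROM virtual_users WHERE email='%u';

    Since delivery already succeeds ("saved mail to INBOX"), the warning appears to be cosmetic; a static userdb in Dovecot's main configuration is the other common way to avoid the SQL user lookup entirely.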

    Read the article

  • Cisco Router - Add a missing MIB file

    - by Jonathan Rioux
    I have a Cisco 881w, and I would like to setup NBAR in my NetFlow Analyzer. But it says that my router misses this MIB in order to allow NFA to poll the router with snmp to get NBAR infos. From the FAQ page of the NetFlow Analyzer website, it responds to my error: Q. I am able to issue the command "ip nbar protocol-discovery" on the router and see the results. But NFA says my router does not support NBAR, Why? A. Earlier version of IOS supports NBAR discovery only on router. So you can very well execute the command "ip nbar protocol-discovery" on the router and see the results. But NBAR Protocol Discovery MIB(CISCO-NBAR-PROTOCOL-DISCOVERY-MIB) support came only on later releases. This is needed for collecting data via SNMP. Please verify that whether your router IOS supports CISCO-NBAR-PROTOCOL-DISCOVERY-MIB. The missing MIB is: CISCO-NBAR-PROTOCOL-DISCOVERY-MIB I found it here: ftp://ftp.cisco.com/pub/mibs/v2/CISCO-NBAR-PROTOCOL-DISCOVERY-MIB.my But how can I add this MIB into the router? The IOS of my router is: c880data-universalk9-mz.151-3.T1.bin

    Read the article

  • How to email photo from Ubuntu F-Spot application via Gmail?

    - by Norman Ramsey
    My father runs Ubuntu and wants to be able to use the Gnome photo manager, F-Spot, to email photos. However, he must use Gmail as his client because (a) it's the only client he knows how to use and (b) his ISP refuses to reveal his SMTP password. I've got as far as setting up Firefox to use GMail to handle mailto: links and I've also configured firefox as the system default mailer using gnome-default-applications-properties. F-Spot presents a mailto: URL with an attach=file:///tmp/mumble.jpg header. So here's the problem: the attachment never shows up. I can't tell if Firefox is dropping the attachment header, if GMail doesn't support the header, or what. I've learned that: There's no official header in the mailto: URL RFC that explains how to add an attachment. I can't find documentation on how Firefox handles mailto: URLs that would explain to me how to communicate to Firefox that I want an attachment. I can't find any documentation for GMail's URL API that would enable me to tell GMail directly to start composing a message with a given file as an attachement. I'm perfectly capable of writing a shell script to interpolate around F-Spot to massage the URL that F-Spot presents into something that will coax Firefox into doing the right thing. But I can't figure out how to persuade Firefox to start composing a GMail message with a local file attached. Any help would be greatly appreciated.

    Read the article

  • ipmi - can't ping or remotely connect

    - by Fidel
    I've tried configuring the IPMI controller to accept remote connections, but I can't even ping it. Here is it status: #/usr/local/bin/ipmitool lan print 2 Set in Progress : Set Complete Auth Type Support : NONE PASSWORD Auth Type Enable : Callback : : User : NONE PASSWORD : Operator : PASSWORD : Admin : PASSWORD : OEM : IP Address Source : Static Address IP Address : 192.168.1.112 Subnet Mask : 255.255.255.0 MAC Address : 00:a0:a5:67:45:25 IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10 BMC ARP Control : ARP Responses Enabled, Gratuitous ARP Enabled Gratituous ARP Intrvl : 8.0 seconds Default Gateway IP : 192.168.1.1 Default Gateway MAC : 00:00:00:00:00:00 802.1q VLAN ID : Disabled 802.1q VLAN Priority : 0 RMCP+ Cipher Suites : 0,1,2,3 Cipher Suite Priv Max : uaaaXXXXXXXXXXX : X=Cipher Suite Unused : c=CALLBACK : u=USER : o=OPERATOR : a=ADMIN : O=OEM # /usr/local/bin/ipmitool user list 2 ID Name Enabled Callin Link Auth IPMI Msg Channel Priv Limit 1 true false true true USER 2 admin true false true true ADMINISTRATOR # /usr/local/bin/ipmitool channel getaccess 2 2 Maximum User IDs : 5 Enabled User IDs : 2 User ID : 2 User Name : admin Fixed Name : No Access Available : callback Link Authentication : enabled IPMI Messaging : enabled Privilege Level : ADMINISTRATOR # /usr/local/bin/ipmitool channel info 2 Channel 0x2 info: Channel Medium Type : 802.3 LAN Channel Protocol Type : IPMB-1.0 Session Support : multi-session Active Session Count : 0 Protocol Vendor ID : 7154 Volatile(active) Settings Alerting : disabled Per-message Auth : disabled User Level Auth : disabled Access Mode : always available Non-Volatile Settings Alerting : disabled Per-message Auth : disabled User Level Auth : disabled Access Mode : always available # /usr/local/bin/ipmitool chassis status System Power : on Power Overload : false Power Interlock : inactive Main Power Fault : false Power Control Fault : false Power Restore Policy : unknown Last Power Event : Chassis Intrusion : inactive Front-Panel Lockout : inactive Drive Fault : false Cooling/Fan Fault : false # arp Address HWtype HWaddress Flags Mask Iface 192.168.1.112 ether 00:A0:A5:67:45:25 C bond0 # /usr/local/bin/ipmitool -I lan -H 192.168.1.112 -U admin -P admin chassis power status Error: Unable to establish LAN session Unable to get Chassis Power Status In summary. It exists on the ARP list so arp's are being broadcast. I can't ping it and can't connect to it. Can anyone spot any glaring mistakes in the configuration? Many thanks, Fidel
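    A few hedged things to double-check from the ipmitool side (generic diagnostics, not a specific diagnosis of this board): on many BMCs the LAN channel must be explicitly enabled for access and ARP responses before it answers ping, the user needs a privilege level on that channel, and on boards where the BMC shares a physical NIC with the host the shared/dedicated choice is a BIOS or jumper setting rather than anything ipmitool can change:

        ipmitool lan set 2 access on          # enable LAN access on channel 2
        ipmitool lan set 2 arp respond on     # answer ARP for the BMC's IP
        ipmitool user priv 2 4 2              # user id 2 -> ADMINISTRATOR (4) on channel 2
        ipmitool user enable 2

    It is also common on shared-NIC BMCs that the host cannot ping or connect to its own BMC even when remote machines can, so testing from a second machine on the same LAN is worthwhile before changing anything.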

    Read the article

  • Supervisord doesn't stop nginx process

    - by Lennart Regebro
    I'm using Supervisor a lot, and in this project I have an nginx process managed by Supervisord. The relevant parts of the configuration are these:

        [supervisord]
        logfile=/home/projects/eceee-web/prod/var/log/supervisord.log
        logfile_maxbytes=5MB
        logfile_backups=10
        loglevel=info
        pidfile=/home/projects/eceee-web/prod/var/supervisord.pid
        ; childlogdir=/home/projects/eceee-web/prod/var/log
        nodaemon=false ; (start in foreground if true;default false)
        minfds=1024 ; (min. avail startup file descriptors;default 1024)
        minprocs=200 ; (min. avail process descriptors;default 200)
        directory=/home/projects/eceee-web/prod

        [program:nginx]
        command = /home/projects/eceee-web/prod/bin/nginx
        redirect_stderr = true
        autostart = true
        autorestart = true
        directory = /home/projects/eceee-web/prod
        stdout_logfile = /home/projects/eceee-web/prod/var/log/nginx-stdout.log
        stderr_logfile = /home/projects/eceee-web/prod/var/log/nginx-stderr.log

    The /home/projects/eceee-web/prod/bin/nginx command will start nginx in the foreground; it does not daemonize itself. Still, stopping it fails:

        supervisorctl stop nginx

    gives no answer, and the process continues running. Any idea why? This is on OS X Darwin, with Supervisor 3.0a9 and nginx 0.7.65.
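    A hedged sketch of the usual suspects, assuming the bin/nginx wrapper simply execs nginx: force nginx to stay in the foreground in its own config (in case something re-enables daemon mode), and have supervisord send a signal nginx actually shuts down on, killing the whole process group so the master and workers go together. These option names exist in Supervisor, but whether the *asgroup ones are available in 3.0a9 is an assumption worth verifying:

        # nginx.conf
        daemon off;

        # supervisord.conf, in the [program:nginx] section
        stopsignal = QUIT
        stopwaitsecs = 10
        stopasgroup = true
        killasgroup = true

    If supervisorctl stop nginx still hangs after that, supervisorctl status plus ps on the nginx PIDs should show whether supervisord is waiting on a process that has already detached from it.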

    Read the article

  • Unable to receive emails from Ubuntu postfix mail server

    - by Paddington
    I am unable to receive emails on an Ubuntu 11.04 server running postfix with the Plesk control panel. I can't see the mails even on webmail. I am able to send emails and am not getting any error messages on the email client when I try to receive. Here is the output of the logs: *tail -f /usr/local/psa/var/log/maillog Aug 29 10:38:31 cp9 postfix/tlsmgr[3811]: fatal: open database /var/lib/postfix/smtpd_scache.db: Invalid argument Aug 29 10:38:32 cp9 postfix/master[27738]: warning: process /usr/lib/postfix/tlsmgr pid 3811 exit status 1 Aug 29 10:38:32 cp9 postfix/master[27738]: warning: /usr/lib/postfix/tlsmgr: bad command startup -- throttling Aug 29 10:38:36 cp9 pop3d: Connection, ip=[::ffff:196.201.7.158] Aug 29 10:38:36 cp9 pop3d: IMAP connect from @ [::ffff:196.201.7.158]INFO: LOGIN, [email protected], ip=[::ffff:196.201.7.158] Aug 29 10:38:37 cp9 pop3d: 1346229517.874008 LOGOUT, [email protected], ip=[::ffff:196.201.7.158], top=0, retr=0, time=1, rcvd=24, sent=1716, maildir=/var/qmail/mailnames/essentialhuku.co.za/earle/Maildir Aug 29 10:14:05 cp9 postfix/tlsmgr[1133]: fatal: open database /var/lib/postfix/smtpd_scache.db: Invalid argument Aug 29 10:14:06 cp9 postfix/master[27738]: warning: process /usr/lib/postfix/tlsmgr pid 1133 exit status 1 Aug 29 10:14:06 cp9 postfix/master[27738]: warning: /usr/lib/postfix/tlsmgr: bad command startup -- throttling Aug 29 10:14:08 cp9 pop3d: Connection, ip=[::ffff:196.201.7.158
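    One hedged first step, aimed only at the repeated tlsmgr failure in that log (it may be unrelated to the missing inbound mail): "open database /var/lib/postfix/smtpd_scache.db: Invalid argument" usually points at a corrupt or incompatible Berkeley DB cache file, and the TLS session caches are safe to delete:

        postfix stop
        rm -f /var/lib/postfix/smtpd_scache.db /var/lib/postfix/smtp_scache.db
        postfix start

    If mail still doesn't arrive afterwards, watching the maillog while sending a test message from outside (and confirming that the domain's MX record and port 25 actually reach this Plesk box) would be the next thing to check, since the excerpt above only shows POP3/IMAP logins, not an inbound SMTP attempt.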

    Read the article

  • How to debug a kernel created using ubuntu-vm-builder?

    - by user265592
    Aim: trying to perform a code walkthrough of what functions are getting called for sending and receiving packets over the network. I am building a kernel and using gdb for debugging/tracing purposes. I have built a VM using the following command:

        time sudo ubuntu-vm-builder qemu precise --arch 'amd64' --mem '1024' --rootsize '4096' --swapsize '1024' --kernel-flavour 'generic' --hostname 'ubuntu' --components 'main' --name 'Bob' --user 'ubuntu' --pass 'ubuntu' --bridge 'br0' --libvirt 'qemu:///system'

    And I can run the VM successfully in QEMU using the following command:

        qemu-system-x86_64 -smp 1 -drive file=tmpGgEOzK.qcow2 "$@" -net nic -net user -serial stdio -redir tcp:2222::22

    Now I want to debug the kernel using gdb. For this I need an executable with debug symbols (vmlinux), which apparently I don't have, as the vm-builder never asked for any such options and simply created a .qcow2 file. Question 1: Am I taking the correct approach to solve the problem, and is there an easier way to do it? Question 2: Is there a way to debug this kernel using GDB? P.S. I don't have hardware support for KVM. Please correct me if I am wrong. Thanks.
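    A hedged sketch of the usual QEMU + gdb workflow, assuming you can obtain a vmlinux with debug symbols that matches the guest's kernel (on Ubuntu that generally means installing the corresponding linux-image dbgsym package from the ddebs archive, or building the kernel yourself):

        # start the guest frozen, with QEMU's gdb stub listening on tcp/1234
        qemu-system-x86_64 -smp 1 -drive file=tmpGgEOzK.qcow2 -net nic -net user -serial stdio -s -S

        # in another terminal (the vmlinux path is an example; it must match the guest kernel)
        gdb /usr/lib/debug/boot/vmlinux-3.2.0-XX-generic
        (gdb) target remote :1234
        (gdb) break ip_rcv        # e.g. a receive-path function to start the walkthrough
        (gdb) continue

    The qcow2 image itself never contains symbols, so the vmlinux really does have to come from a dbgsym package or from your own kernel build.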

    Read the article

  • Is ffmpeg incorrectly interpreting .aif files?

    - by marue
    Being on an Ubuntu 10.04 server i installed the ffmpeg packages with apt. ffmpeg is working afterwards, and doing as it should. Almost. For testing purposes i uploaded a few audiofiles. One of them, an aif file, is not being correctly interpreted. While on my workhorse (Mac SnowLeopard) ffmpeg tells the format as Stream #0.0: Audio: pcm_s24be, 44100 Hz, 2 channels, s32, 2116 kb/s my Ubuntu server says it is: Stream #0.0: Audio: pcm_s24be, 44100 Hz, stereo, s16, 2116 kb/s which is the wrong bitdepth. Ubuntu then fails to convert the file with the error message [pcm_s24be @ 0xcd4b580]invalid PCM packet Error while decoding stream #0.0 which certainly is not true. The file is perfectly valid. Are there any know issues for ffmpeg interpreting the aif format? How can i find out which version of the aif-codec ffmpeg is using? Any ideas how to approach this issue? ffprobe output: FFprobe version SVN-r20090707, Copyright (c) 2007-2009 Stefano Sabatini libavutil 49.15. 0 / 49.15. 0 libavcodec 52.20. 0 / 52.20. 1 libavformat 52.31. 0 / 52.31. 0 built on Jan 20 2010 00:13:01, gcc: 4.4.3 20100116 (prerelease) Input #0, aiff, from 'testfile.aif': Duration: 00:00:04.00, start: 0.000000, bitrate: 2117 kb/s Stream #0.0: Audio: pcm_s24be, 44100 Hz, stereo, s16, 2116 kb/s update 2: Forcing the conversion with -sample_fmt s32 doesn't change anything. Strange thing is: Even without using -sample_fmt s32 i just realized that the conversion is working and creates valid audiofiles. There just is the error message from above.

    Read the article

  • iptables-restore: line 1 failed

    - by Doug
    Hello, I am new to servers, and I was following this guide and it failed on the first command instructed. Could anyone give me a hand? http://wiki.debian.org/iptables ~ZORO~:/etc# iptables-restore < /etc/iptables.test.rules iptables-restore: line 1 failed Edit: iptables.test.rules ~ZORO~:/etc# cat /etc/iptables.test.rules *filter # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0 -A INPUT -i lo -j ACCEPT -A INPUT -i ! lo -d 127.0.0.0/8 -j REJECT # Accepts all established inbound connections -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT # Allows all outbound traffic # You could modify this to only allow certain traffic -A OUTPUT -j ACCEPT # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites) -A INPUT -p tcp --dport 80 -j ACCEPT -A INPUT -p tcp --dport 443 -j ACCEPT # Allows SSH connections for script kiddies # THE -dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE -A INPUT -p tcp -m state --state NEW --dport 30000 -j ACCEPT # Now you should read up on iptables rules and consider whether ssh access # for everyone is really desired. Most likely you will only allow access from certain IPs. # Allow ping -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT # log iptables denied calls (access via 'dmesg' command) -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7 # Reject all other inbound - default deny unless explicitly allowed policy: -A INPUT -j REJECT -A FORWARD -j REJECT COMMIT

    Read the article

  • Automate creation of Windows startup script?

    - by Niten
    Is there a good way to automate installing local startup (rather than login) scripts in Windows XP and Windows 7, via the command line, WMI, or otherwise (even COM or Win32 if it comes to that)? I need to setup a local startup script on a large number of computers, and unfortunately, Active Directory is absolutely not an option. I would like to write a script or small program that I can run on each computer to perform the startup script installation in order to save myself a lot of error-prone point-and-click manual labor. I see that when one uses gpedit.msc to create a local startup script, information about the script gets stored in the registry here: HKLM\Software\Policies\Microsoft\Windows\System\Scripts\Startup However, if you create such a script and then delete its registry key, the script will remain listed in the local Group Policy editor; as is so often the case in Windows, apparently there is more going on there than meets the eye. This leads me to question whether it's safe to manually add subkeys for new startup scripts here (I wouldn't want my script to be overwritten by later changes made using the local Group Policy editor, for instance)... Another option that's occurred to me is to create an item in the Task Scheduler configured to run at system startup. However, my concerns there are twofold: Can this be automated any more easily? For instance, the at command doesn't appear to let you schedule a task for system startup, and WMI's Win32_ScheduledJob interface looks unreliable (it fails to show any of my currently scheduled tasks, for one thing). Would I be able to prevent users from logging in until the scheduled startup task is completed, as can be done with "normal" Windows startup scripts? Thanks in advance for any suggestions, I've been banging my head against this one for a bit...
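    For the scheduled-task route, a hedged sketch with schtasks (present on both XP and 7); the task name and script path are placeholders, and running as SYSTEM sidesteps per-user credentials, though it does not by itself delay user logon the way a Group Policy startup script can:

        schtasks /Create /TN "LocalStartupScript" /SC ONSTART /RU SYSTEM /TR "C:\Scripts\startup.cmd"

    If blocking logon until the script finishes is a hard requirement, scripting the local-GPO startup-script entries (the registry keys you found plus the scripts.ini under the local Group Policy folder) is the other avenue, but as you note the extra state the Group Policy editor keeps makes that fragile to hand-edit.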

    Read the article

  • Event Log "Wake Source" when system wakes from sleep

    - by Doltknuckle
    So I've been troubleshooting sleep timers for our systems and have run across an interesting issue. I need a way to report how long a system was awake after a number of different inputs. Now, I've discovered that the System Log tracks wake and sleep events and even tells you the times that everything happens at. The thing it doesn't tell you is what triggered the wake event. It does give you a numerical code, however. Here are some examples of what I am finding:

        Index : 2901
        EntryType : Information
        InstanceId : 1
        Message : The system has resumed from sleep.
          Sleep Time: 2010-10-01T23:20:06.097488100Z
          Wake Time: 2010-10-03T17:41:12.796400500Z
          Wake Source: 0
        Category : (0)
        CategoryNumber : 0
        Source : Microsoft-Windows-Power-Troubleshooter
        --
        Index : 2841
        EntryType : Information
        InstanceId : 1
        Message : The system has resumed from sleep.
          Sleep Time: 2010-10-01T19:19:37.239789600Z
          Wake Time: 2010-10-01T21:28:48.921200800Z
          Wake Source: 4HID Keyboard Device
        Category : (0)
        CategoryNumber : 0
        Source : Microsoft-Windows-Power-Troubleshooter

    So here's my question: Does anyone know what the different numerical codes for the "Wake Source" mean? I think "0" is a magic packet and "4" is a USB device. Does anyone have any idea if there is any documentation out there on this for Windows 7? Thanks in advance.
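    Not documentation for the numeric codes, but a hedged cross-check: on Windows 7, powercfg can report the most recent wake source (and which devices are currently allowed to wake the machine) in plain text, which can be lined up against the numeric Wake Source in the matching Power-Troubleshooter event:

        powercfg -lastwake
        powercfg -devicequery wake_armed

    Comparing a few wake events captured both ways would let you build your own mapping of the numeric values for reporting purposes.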

    Read the article
