Search Results

Search found 4860 results on 195 pages for 'sudo petruza'.


  • Is there a serious issue with setting the SUID bit on tcpdump?

    - by Dean
    I'm running tcpdump on a remote machine and piping the output to Wireshark on my local machine over SSH. In order to do this, I had to set the SUID bit on tcpdump. For background, the remote machine is an Amazon EC2 instance running "Amazon Linux AMI 2012.09". On this image there is no root password, and it is not possible to log in as root. You can't use sudo without a TTY, and therefore you have to set the SUID bit. What are the practical risks of setting this bit on tcpdump? Is there any need to be paranoid? Should I unset it whenever I'm not capturing?
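
    One alternative I'm considering, assuming the Amazon Linux kernel and filesystem support file capabilities (an assumption on my part), is granting only the capture-related capabilities instead of full SUID root:

      sudo setcap cap_net_raw,cap_net_admin=eip /usr/sbin/tcpdump   # allow packet capture without full root
      getcap /usr/sbin/tcpdump                                      # verify what was granted
      sudo setcap -r /usr/sbin/tcpdump                              # revert when I'm done capturing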

    Read the article

  • AFP/SMB transfers cap at 2 megabytes/sec over wireless N

    - by CQM
    I wanted to transfer files between two Mac computers. The network is wireless-N and both computers have wireless-N modules in them. The problem is that when I transfer files between them via file sharing (AFP), the transfer speed caps at 2 megabytes/sec. Just downloading files from the internet I can get faster speeds, so this isn't a constriction of my wifi bandwidth; it appears to be a constriction of the protocol being used. My wireless-N link is set to 130 Mbit/s, so I should see real-world transfer speeds around 12-16 megabytes/sec. I ran this command on both computers: sudo sysctl -w net.inet.tcp.delayed_ack=0, which is supposed to reduce TCP overhead, but it made no difference. How can I get the speed I am expecting?
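
    To isolate whether the limit is the radio link or the file-sharing protocol, a raw TCP throughput test between the two Macs seems worth running (this assumes iperf is installed, e.g. from MacPorts, and the receiver's address below is a placeholder):

      iperf -s                      # on the receiving Mac
      iperf -c 192.168.1.20 -t 30   # on the sending Mac, pointed at the receiver's (placeholder) address
      # if iperf shows ~100 Mbit/s but AFP still tops out at 2 MB/s, the bottleneck is the protocol layer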

    Read the article

  • Workaround for starting zabbix agent on Ubuntu. update-rc.d LSB mismatch

    - by bryan kennedy
    I am trying to run the zabbix-agent (1.8.1) on boot on Ubuntu (Lucid, 10.04). Zabbix is installed just fine, and it starts just fine manually with /etc/init.d/zabbix-agent start. However, it doesn't start on boot, because when I run sudo update-rc.d zabbix-agent default I get: update-rc.d: warning: zabbix-agent start runlevel arguments (none) do not match LSB Default-Start values (2 3 4 5) update-rc.d: warning: zabbix-agent stop runlevel arguments (none) do not match LSB Default-Stop values (0 1 6) After googling around I found some cryptic information about this being a possible bug in Zabbix's startup scripts, but I can't find a workaround. I'm trying to understand how the update-rc.d system works, but I'm not getting very far. How can I modify this setup to start the zabbix-agent on boot?
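
    The workaround I'm about to try (just a guess pieced together from the update-rc.d man page, not anything Zabbix-specific) is to remove the existing links and re-register the script with runlevels that explicitly match the LSB header quoted in the warnings:

      sudo update-rc.d -f zabbix-agent remove
      sudo update-rc.d zabbix-agent start 20 2 3 4 5 . stop 80 0 1 6 .   # mirror Default-Start/Default-Stop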

    Read the article

  • Mac OS X & Linux: mount_nfs: can't access /nfs: Permission denied

    - by MountainX
    I have an Ubuntu 12.04 NFS server and an iMac NFS client running OS X 10.6.8. I believe I have everything set up properly, yet I still get this error on the Mac: mount_nfs: can't access /nfs: Permission denied My exports file on the Linux server uses the insecure option, like this: /export/home/me/ 192.168.100.132(rw,subtree_check,insecure,nohide) where 192.168.100.132 is the address of my Mac. I have even tried using -o resvport on the Mac (in addition to insecure on Linux) and I still get the same error as above. $ sudo mount -t nfs -o resvport 192.168.100.1:/home/me /Users/me/mount Here is the output of showmount: # showmount -e 192.168.100.1 Export list for 192.168.100.1: /export/home/me 192.168.100.132 .... I have reviewed this similar question: How to mount NFS export on Mac OS X? And I have reviewed this frequently recommended tutorial: http://www.cyberciti.biz/faq/apple-mac-osx-nfs-mount-command-tutorial/ I still can't find a solution. Any ideas?
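
    One thing I still want to rule out: the export is /export/home/me but the mount command above asks for /home/me, so the next test is re-exporting and then requesting the path exactly as exported:

      sudo exportfs -ra   # on the Linux server: re-read /etc/exports after any edit
      sudo mount -t nfs -o resvport 192.168.100.1:/export/home/me /Users/me/mount   # on the Mac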

    Read the article

  • -bash: ls: command not found in Terminal on Mac OS X

    - by art.mania
    I need to start using Git for my projects from now on, and I need to use some UNIX commands, but no matter what I do I always receive a "command not found" error. I installed MacPorts, but still can't run any UNIX command. When I try ls I get the error below, and the same for sudo or any other command: -bash: ls: command not found and when I try $PATH, I get the lines below: hakan-yilmaz-MacBook-Pro:~ hakanyilmaz$ $PATH -bash: /opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/Library/Frameworks/Python.framework/Versions/2.6/bin:/Library/Frameworks/Python.framework/Versions/Current/bin:/opt/subversion/bin/:PATH: No such file or directory I'm on Mac OS X 10.6.6. I spent 2-3 days googling and trying everything I found on forums, but no success. SOLUTION: I opened .bash_profile with TextWrangler and removed everything except export PATH=/opt/local/bin:/opt/local/sbin:$PATH Then I rebooted the Mac, and it's WORKING!!!!
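
    For anyone hitting the same thing before .bash_profile is fixed, the current shell can be made usable again by resetting PATH by hand (a stopgap rather than a fix):

      export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin   # restore the standard system paths
      /bin/ls -la ~/.bash_profile                                # tools can also be called by absolute path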

    Read the article

  • How can I tell whether an interrupted rm -r removed any files?

    - by Jake Petroules
    I installed sshfs on a Linux box and then mounted my Mac home directory. In the middle of troubleshooting a configuration issue, I did an ls -l on the mount directory (as a normal user), receiving: total 0 d????????? ? ? ? ? ? sl I then ran sudo rm -r on that directory but pressed Ctrl+C to terminate it immediately, before it looked like the command had done anything. I don't notice any files missing, but I want to be sure: is there a way I can inspect the filesystem log on my Mac to see whether any files were actually removed?
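
    The only check I've come up with so far (assuming Time Machine is enabled and the Mac is on 10.7 or later, neither of which I've confirmed) is diffing the live home directory against the most recent snapshot:

      LATEST=$(tmutil latestbackup)           # path to the newest Time Machine snapshot
      diff <(cd "$LATEST$HOME" && find . | sort) \
           <(cd "$HOME" && find . | sort)     # lines prefixed with '<' exist only in the backup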

    Read the article

  • sendmail user unknown - debian lenny

    - by Rimian
    My PHP mail() function just stopped working a short while ago; it has started returning FALSE. I am not much of a sysadmin, so please forgive my ignorance. I set my php.ini sendmail_path option to "sendmail_path = /usr/sbin/sendmail -t -i" and restarted Apache. Then I learnt how to test sendmail like so: sudo /usr/sbin/sendmail -bv [email protected] [email protected]... deliverable: mailer esmtp, host example.com., user [email protected] The example email is a real mailbox. I have also seen "user unknown" messages in the mail log. Can anyone please help me debug this? Cheers, Rim
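
    What I plan to try next (standard sendmail debugging as far as I understand it; the address below is the same placeholder as above) is sending a message by hand and watching the mail log while it is delivered:

      printf "Subject: test\n\ntest body\n" | /usr/sbin/sendmail -v [email protected]
      sudo tail -f /var/log/mail.log    # watch for the 'user unknown' rejection as delivery happens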

    Read the article

  • How to format DVD-RAM?

    - by AndrejaKo
    I have a few DVD-RAM discs, and when using udftools, specifically sudo mkudffs --media-type=dvdram /dev/sr0 where /dev/sr0 is my DVD-RAM drive, I get the error "trying to change type of multiple extents" and nothing happens. What should I do? EDIT: After trying with dvd+rw-tools, here's what I got: #dvd+rw-format /dev/dvd -format=full -ssa=default * BD/DVD±RW/-RAM format utility by <[email protected]>, version 7.1. * 4.6GB DVD-RAM media detected. * formatting 54.8| And the same error as before from mkudffs.
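
    What I'm planning to try next (pieced together from the dvd+rw-tools and udftools man pages, so treat it as a guess) is a forced low-level format followed by a fresh UDF filesystem with an explicit block size:

      sudo dvd+rw-format -force=full /dev/sr0                      # low-level reformat of the DVD-RAM
      sudo mkudffs --media-type=dvdram --blocksize=2048 /dev/sr0   # then lay down a new UDF filesystem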

    Read the article

  • How to unsquash and mount arch linux live CD

    - by steffen
    I am following this manual to install Arch Linux from within another Linux distro with the help of an Arch Linux live CD. Here is what I did: sudo mount -o loop Downloads/archlinux-2012.11.01-dual.iso arch_iso/ unsquashfs -d squashfs-root/ arch_iso/arch/x86_64/root-image.fs.sfs This results in a directory squashfs-root/ containing one file: root-image.fs. I assume that this is not what I want; I want to see something that looks like a Linux root folder. If I follow the next steps, "mount the file system" with mount -B /squashfs-root ${livecd_arch} and mount -t proc /proc ${livecd_arch}/proc, I get error messages like: mount: mount point /home/me/arch_root//proc does not exist What am I missing? Thanks!
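
    My current suspicion (unconfirmed) is that root-image.fs is itself a filesystem image, so the missing step would be to loop-mount it before doing the bind and /proc mounts:

      mkdir -p ~/arch_root
      sudo mount -o loop squashfs-root/root-image.fs ~/arch_root   # expose the actual root tree
      ls ~/arch_root                                               # should now show bin/ etc/ usr/ ...
      sudo mount -t proc proc ~/arch_root/proc                     # then the remaining mounts can follow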

    Read the article

  • ps ux on OS X shows the user for the ps command as root. Is this normal?

    - by snies
    I am running OS X 10.6.1. When I am logged in as a normal user in group staff and run ps ux, it lists my ps ux command as being run by root:

      snies   181  0.0  0.3  2774328  12500   ??   S    6:00PM  0:20.96  /System/Library...
      root   1673  0.0  0.0  2434788    508  s001  R+   8:16AM  0:00.00  ps ux
      snies   177  0.0  0.0  2457208    984   ??   Ss   6:00PM  0:00.52  /sbin/launchd
      snies  1638  0.0  0.0  2435468   1064  s001  S    8:13AM  0:00.03  -bash

    Is this normal behaviour? And if so, why? Please note that the user is not an Administrator account and is not able to sudo.
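
    One detail that might explain it (just a hunch on my part): if /bin/ps is installed setuid root, the ps process itself runs with root as its effective user, which would be what the USER column reports. That's easy to check:

      ls -l /bin/ps
      # -r-sr-xr-x  1 root  wheel  ...  /bin/ps    <- an 's' in the owner bits means setuid root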

    Read the article

  • BitchX - Segmentation fault

    - by alexus
    Last login: Tue Mar 16 15:29:57 on ttys002 mbp:~ alexus$ sudo port install bitchx Password: --- Computing dependencies for bitchx --- Fetching ncursesw --- Attempting to fetch ncurses-5.7.tar.gz from http://distfiles.macports.org/ncurses --- Verifying checksum(s) for ncursesw --- Extracting ncursesw --- Configuring ncursesw --- Building ncursesw --- Staging ncursesw into destroot --- Installing ncursesw @5.7_0+darwin_10 --- Activating ncursesw @5.7_0+darwin_10 --- Cleaning ncursesw --- Fetching ncurses --- Verifying checksum(s) for ncurses --- Extracting ncurses --- Configuring ncurses --- Building ncurses --- Staging ncurses into destroot --- Installing ncurses @5.7_0+darwin_10 --- Activating ncurses @5.7_0+darwin_10 --- Cleaning ncurses --- Fetching bitchx --- Attempting to fetch ircii-pana-1.1-final.tar.gz from http://voxel.dl.sourceforge.net/bitchx --- Verifying checksum(s) for bitchx --- Extracting bitchx --- Applying patches to bitchx --- Configuring bitchx --- Building bitchx --- Staging bitchx into destroot --- Installing bitchx @1.1_1+darwin --- Activating bitchx @1.1_1+darwin --- Cleaning bitchx mbp:~ alexus$ BitchX BitchX - Based on EPIC Software Labs epic ircII (1998). Version (BitchX-1.1-final) -- Date (20040326). Process [30864] Segmentation fault mbp:~ alexus$ Any ideas why it's dying with a segmentation fault, and how do I troubleshoot it?
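
    The first things I'd look at (generic OS X crash debugging, nothing BitchX-specific, and assuming Xcode's gdb is installed) are the crash report Snow Leopard writes and a live backtrace:

      ls -t ~/Library/Logs/DiagnosticReports/ | head   # look for a fresh BitchX_*.crash report
      gdb $(which BitchX)                              # or reproduce it under the debugger
      # at the (gdb) prompt: run, then bt after the crash to get a backtrace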

    Read the article

  • problems with Apache on Snow Leopard

    - by Hristo
    I kind of screwed up the Apache "stuff" on my Mac. Usually when I visit http://localhost/ I would see the "It works!" page, but now it just lists the directory and files inside /Library/WebServer/Documents. When I try to stop/start/restart the server with sudo apachectl stop, I get: httpd: Syntax error on line 68 of /etc/apache2/httpd.conf: Cannot load /usr/libexec/apache2/mod_disk_cache.so into server: dlopen(/usr/libexec/apache2/mod_disk_cache.so, 10): Symbol not found: _apr_file_info_get$INODE64\n Referenced from: /usr/libexec/apache2/mod_disk_cache.so\n Expected in: flat namespace\n in /usr/libexec/apache2/mod_disk_cache.so I don't want to do the MacPorts install (I tried it earlier); I just want to build from source with the usual ./configure, make, make install. Any ideas on how to get this working? Is there a way to totally remove Apache and then reinstall a fresh version? Thanks, Hristo
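
    As a stopgap (my own guess; it works around the symptom rather than the underlying library mismatch), disabling the module that fails to load should at least let apachectl run again:

      sudo sed -i.bak 's|^LoadModule disk_cache_module|#&|' /etc/apache2/httpd.conf   # comment out the module
      apachectl configtest                                                            # re-check the config
      sudo apachectl restart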

    Read the article

  • Linux: find thin server running on port 80 and kill it

    - by Andrew
    On my Linux server I ran: sudo thin start -p 80 -d Now I'd like to restart the server. The trouble is, I can't seem to find the old process in order to kill it. I tried netstat -anp, but what I see on port 80 is this:

      Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
      tcp        0      0 0.0.0.0:80       0.0.0.0:*          LISTEN   -

    So, it didn't give me a PID to kill... I tried pgrep -l thin but that gave me nothing. Meanwhile pgrep -l ruby gives me something like 6 processes running. I don't really understand why multiple ruby processes would be running, or which one I need to kill... How do I kill / restart the thin daemon?
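
    Two things I haven't tried yet that should reveal the PID (netstat only fills in the PID/Program name column for processes it has permission to inspect, so it usually needs to run as root):

      sudo netstat -tlnp | grep ':80 '   # run as root so the PID/Program name column is populated
      sudo lsof -i :80                   # alternative: list whatever is holding port 80 open
      sudo kill <PID>                    # then stop that process (<PID> is a placeholder)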

    Read the article

  • Running make for Nginx throws a “multiple target patterns” error

    - by Justin Meltzer
    When I run make inside my installed nginx directory I get the output: make -f objs/Makefile make[1]: Entering directory `/home/ec2-user/nginx/nginx-1.2.4' objs/Makefile:110: *** multiple target patterns. Stop. make[1]: Leaving directory `/home/ec2-user/nginx/nginx-1.2.4' make: *** [build] Error 2 I am on an Amazon Linux AMI. The steps I took from the beginning were:

      wget /path/to/nginx/tarball
      tar xvf nginx-1.2.4.tar.gz
      cd nginx-1.2.4
      ./configure --prefix=/nginx --a-bunch-of-other-options

    Then I ran make. I had also installed make itself by running sudo yum install make. Please let me know if there's any other information I should be providing.
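
    "Multiple target patterns" from GNU make usually means an unexpected colon ended up in a rule, often via a path passed to configure, so the exact line the error points at seems worth inspecting (the --a-bunch-of-other-options placeholder above is where I suspect the culprit is):

      sed -n '105,115p' objs/Makefile   # look for a stray ':' in a target or prerequisite path
      # if one of the configure arguments contains a colon or odd character, re-run ./configure without it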

    Read the article

  • OS X 10.6 Snow Leopard no longer mounting an external USB drive

    - by Brant Bobby
    I have a 1TB generic external hard drive containing a single HFS partition. I originally formatted this using Disk Utility and it worked fine. Now, for some reason, it's not auto-mounting when I start up. Using mount at the command line gives the following error: $ sudo mount /dev/disk1s2 /Volumes/Test /dev/disk1s2 on /Volumes/Test: Incorrect super block. ... but if I use the mount_hfs command it works fine, mounts, and is readable. $ mount_hfs /dev/disk1s2 /Volumes/Test/ fsck gives me an error about a bad super block: $ fsck /dev/disk1 ** /dev/rdisk1 (NO WRITE) BAD SUPER BLOCK: MAGIC NUMBER WRONG ... but fsck_hfs -fn /dev/disk1s2 doesn't find any problems and reports that the volume appears to be OK. In Disk Utility, the drive appears to have a single MS-DOS partition, with a curious notice saying it appears to be partitioned for Boot Camp. I have the Boot Camp HFS driver installed in Windows 7, and that OS sees the drive/partition normally. What's wrong with my disk?
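
    For completeness, here are the checks I'm running next (standard command-line equivalents of Disk Utility; plain mount and fsck don't understand HFS+, which may explain the super block complaints):

      diskutil list                     # confirm the partition map type and the disk1s2 identifier
      sudo fsck_hfs -fy /dev/disk1s2    # HFS+-aware repair, unlike plain fsck
      diskutil mount disk1s2            # ask the system to mount it the way auto-mount would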

    Read the article

  • Rails: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'

    - by misbehavens
    So I've got a Ruby on Rails application that I am trying to run (in development) on Snow Leopard. I've got it working on my Ubuntu computer, but now I need to get my Snow Leopard environment set up. Originally I installed the MySQL 2.8.1 Ruby gem and was running into this issue: uninitialized constant MysqlCompat::MysqlRes But thanks to this tutorial I was able to resolve it by running this command, which installs a previous version of the gem: export ARCHFLAGS="-arch i386 -arch x86_64" ;sudo gem install --no-rdoc --no-ri -v=2.7 mysql -- --with-mysql-dir=/usr/local/mysql --with-mysql-config=/usr/local/mysql/bin/mysql_config Now that I've resolved that issue, I'm running into a different error: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' This happens when I try to run rake db:migrate as well as when the server is running. How can I resolve this issue?
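
    My working theory (an assumption based on where the mysql.com OS X packages usually put their socket) is that the app is looking in the Linux default location, so pointing config/database.yml at the real socket path might be all that's needed:

      mysql_config --socket    # shows where mysqld actually creates its socket;
                               # often /tmp/mysql.sock with the mysql.com OS X packages
      # then set that path in config/database.yml, e.g.
      #   development:
      #     adapter: mysql
      #     socket: /tmp/mysql.sock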

    Read the article

  • Sending email using SMTP (Gmail) from Hudson CI

    - by jensendarren
    How can I set up Hudson CI so that I can send out emails from the server following a build failure? At the moment all I get is the following error: com.sun.mail.smtp.SMTPSendFailedException: 530 5.7.0 Must issue a STARTTLS command first One solution is to start Hudson as follows: java -Dmail.smtp.starttls.enable="true" -jar /usr/share/hudson/hudson.war However, I am already using the following to start Hudson: sudo /etc/init.d/hudson start I am thinking the solution is to somehow set the system property mail.smtp.starttls.enable in a property file somewhere, but I have no idea how to do that. What are my options? Thank you all in advance!
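
    What I'm going to try next (assuming a Debian/Ubuntu-style package where the init script reads options from a defaults file; I haven't confirmed the exact variable name) is passing the property through that file instead of changing how the war is launched:

      # /etc/default/hudson  (assumed location and variable for this packaging)
      JAVA_ARGS="-Dmail.smtp.starttls.enable=true"

      # then restart through the init script as usual
      sudo /etc/init.d/hudson restart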

    Read the article

  • Trouble with NFS file sharing on Synology 211 NAS and Ubuntu Client

    - by Aglystas
    I'm attempting to set up NFS file sharing and keep getting the error... mount.nfs: access denied by server while mounting 192.168.1.110:/myshared Here is the exact command I'm using to mount: sudo mount -o nolock 192.168.1.110:/myshared /home/emiller/MyShared I have enabled NFS in DSM and set NFS privileges in the Shares section of the control panel. Here is the /etc/exports entry from the NAS: volume1/myshared 192.168.1.*(rw,sync,no_wdelay,no_root_squash,insecure_locks,anonuid=0,anongid=0) I read some things about hosts.allow and hosts.deny, but it seems that if they are empty they aren't used for anything. I can see the share when I run ... showmount -e 192.168.1.110 Any help would be appreciated in this matter.
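
    One mismatch I want to rule out: the exports entry is for volume1/myshared, but the mount command asks the server for /myshared, so the next test is requesting the path exactly as exported:

      showmount -e 192.168.1.110    # confirm the exact exported path string
      sudo mount -t nfs -o nolock 192.168.1.110:/volume1/myshared /home/emiller/MyShared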

    Read the article

  • Linux dd command: partition-to-partition copy

    - by Ben Jackson
    I just used the dd command to copy the contents of one partition over to a partition on another drive, like this: dd if=/dev/sda2 of=/dev/sdb2 bs=4096 conv=noerror The sda2 partition was 66GB and sdb2 was 250GB. I read that by doing this, the extra space on the drive I am copying to will be wasted; is this true? I wasn't worried about losing the extra space for the time being. However, I just ran sudo kill -USR1 (PID) to view the current status of dd, and it has written over 66GB of data. Will it continue to write data until it gets to 250GB? If so, is there a way to stop the process without corrupting the copy, since waiting for it to write blank space seems like a waste of time?
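
    For the wasted-space part, my understanding (assuming the partition holds an ext2/3/4 filesystem, which I haven't actually said above) is that the copied filesystem can simply be grown to fill sdb2 once dd has finished:

      sudo e2fsck -f /dev/sdb2   # the copied filesystem must be checked before resizing
      sudo resize2fs /dev/sdb2   # grow the 66GB filesystem to fill the 250GB partition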

    Read the article

  • Problem adding public key for apt

    - by highBandWidth
    I was trying to get the official MongoDB packages for Ubuntu, following the instructions at http://www.mongodb.org/display/DOCS/Ubuntu+and+Debian+packages After adding the deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen line to my sources, I need to add the PGP key, since Synaptic says: W: GPG error: http://downloads-distro.mongodb.org dist Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9ECBEC467F0CEB10 Again following instructions, I did sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10 which says: Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv 7F0CEB10 gpg: requesting key 7F0CEB10 from hkp server keyserver.ubuntu.com ?: keyserver.ubuntu.com: Connection refused gpgkeys: HTTP fetch error 7: couldn't connect: Connection refused gpg: no valid OpenPGP data found. gpg: Total number processed: 0 Interestingly, I also get: $ apt-key list gpg: fatal: /home/myname/.gnupg: directory does not exist! secmem usage: 0/0 bytes in 0/0 blocks of pool 0/32768 How can I get apt to use this source?
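
    Since the failure looks like the keyserver connection being blocked (HKP uses port 11371 by default), the workarounds I've collected so far are to force the keyserver onto port 80, or to fetch the key over plain HTTP and import it from stdin (the lookup URL is my guess at the right form):

      sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
      # or:
      curl -s "http://keyserver.ubuntu.com/pks/lookup?op=get&search=0x9ECBEC467F0CEB10" | sudo apt-key add -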

    Read the article

  • How do I change HOSTNAME on an Ubuntu server?

    - by BryanWheelock
    I'm attempting to change the hostname on my shared server with Slicehost so I can set up Postfix as a null client. I edited /etc/hosts, and after a reboot the hostname is still incorrect. What am I doing wrong? username@mail Fri Jul 01 13:01:32 ~ $ sudo cat /etc/hostname mail.domain1.com username@mail Fri Jul 01 13:01:45 ~ $ cat /etc/hosts 127.0.0.1 localhost localhost.localdomain 208.78.100.198 mail.domain1.com username@mail Fri Jul 01 13:02:13 ~ $ hostname -f pop.where.secureserver.net I also intend to add another domain to this server; how do I configure that correctly?
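
    One thing I haven't tried yet: /etc/hostname already says mail.domain1.com, so presumably the running system just isn't applying it. Setting it by hand from that file should confirm whether the files themselves are right:

      sudo hostname -F /etc/hostname   # apply the name stored in /etc/hostname to the running system
      hostname -f                      # should now print mail.domain1.com via the /etc/hosts entry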

    Read the article

  • Symlink - Permission Denied

    - by John Smith
    I'm facing an interesting problem with plenty of Permission Denied output when using symlinks. Linux: Slackware 13.1. Directory containing the symlink:

      root@Tower:/var/lib# ls -lah
      drwxr-xr-x  8 root root  0 2012-12-02 20:09 ./
      drwxr-xr-x 15 root root  0 2012-12-01 21:06 ../
      lrwxrwxrwx  1 ntop ntop 21 2012-12-02 20:09 ntop -> /mnt/user/media/ntop6/

    Symlinked directory:

      root@Tower:/mnt/user/media# ls -lah
      drwxrwx---  1 nobody users 1.4K 2012-12-02 19:28 ./
      drwxrwx---  1 nobody users  128 2012-11-18 16:06 ../
      drwxrwxrwx  1 ntop   ntop   320 2012-12-02 20:22 ntop6/

    What I have done: I used chown -h ntop:ntop on the ntop symlink in /var/lib, and just to be sure, I applied chmod 777 to both directories. The action that gets Permission denied:

      root@Tower:/var/lib# sudo -u ntop mkdir /var/lib/ntop/test
      mkdir: cannot create directory `/var/lib/ntop/test': Permission denied

    Any ideas?
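
    My next step is to check traversal rights along the whole target path: as far as I understand it, the ntop user needs execute permission on every parent directory, and /mnt/user/media above is nobody:users with no access at all for other users:

      ls -ld /mnt /mnt/user /mnt/user/media /mnt/user/media/ntop6   # the execute bit is needed at every level
      sudo -u ntop ls /mnt/user/media                               # can ntop even enter the parent directory?
      # if traversal is the blocker, my guess at a fix:
      chmod o+x /mnt/user /mnt/user/media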

    Read the article

  • dev_install failed on ARM chromebook

    - by user1027721
    I'm following this guide to get access to emerge on Chrome OS: http://www.chromium.org/chromium-os/how-tos-and-troubleshooting/install-software-on-base-images Unfortunately I always get the same error, which is: $ sudo dev_install Starting installation of developer packages. First, we download the necessary files. Downloading https://commondatastorage.googleapis.com/chromeos-dev-installer/board/daisy/full-3.168.0.0/packages/app-misc/mime-types-8.tbz2 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 127 100 127 0 0 252 0 --:--:-- --:--:-- --:--:-- 305 [: 184: -ne: unexpected operator Extracting /usr/local/portage/packages/app-misc/mime-types-8.tbz2 I think that it somehow returns a 404 every time. Thanks for your help
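
    To confirm the 404 theory, checking the HTTP status of the exact URL the installer downloads should settle it (127 bytes received looks more like an error body than a real package):

      curl -sI https://commondatastorage.googleapis.com/chromeos-dev-installer/board/daisy/full-3.168.0.0/packages/app-misc/mime-types-8.tbz2 | head -1
      # an "HTTP/1.1 404 Not Found" here would mean the installer index points at packages that no longer exist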

    Read the article

  • Why do password entries over ssh take so long?

    - by Dean
    When I'm ssh'd into my server, any time I enter my password there's a 40-second delay before the server responds. This occurs when logging in, as well as whenever I run a command via sudo. The delay does not happen when I run su and enter my password, however. Using the -v flag for ssh doesn't show anything during this time, and looking at Wireshark, all traffic between the two machines stops while this is happening. Any idea what's happening, or advice on how to investigate? The server is running Debian squeeze (6.0.4).
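
    The pattern (login and sudo are slow, su is not, and no packets move during the pause) makes me suspect a name lookup timing out on the server; here's what I'm planning to check, as a guess rather than a diagnosis:

      time host <client-ip>                    # placeholder IP; a long pause here points at reverse DNS
      grep -i '^UseDNS' /etc/ssh/sshd_config   # 'UseDNS no' makes sshd skip the reverse lookup at login
      hostname; grep "$(hostname)" /etc/hosts  # sudo is known to stall when the local hostname doesn't resolve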

    Read the article
