Search Results

Search found 4860 results on 195 pages for 'sudo petruza'.


  • What's the right way to create an Ubuntu user whose home directory is /var/www/SITE?

    - by Leonnears
    First off, I should state that I'm completely ignorant when it comes to server administration on Ubuntu, and I'm doing what I can. I have been trying to do this for hours with no luck. Basically, I want to create an Ubuntu user whose home directory is /var/www/SITE, preferably chroot'd to it. The chroot part is not so important right now; first I'd rather get anything working at all. The user should be able to upload files here, and the webserver (the www-data user?) should be able to pick them up with no problem. I was able to create the user (the user is "anders") and give him the home directory /var/www/SITE. I gave him a password, and "anders" can connect over FTP just fine and upload files. But here's where things don't work: while anders can upload files to that /var/www/SITE directory, when I access the webpage in my browser I get a Forbidden error. Note that anders is also a member of the www-data group. I can fix this by running

        sudo chmod g+s /var/www/SITE/* anders -R

    but this is of course not ideal. Ideally the files should "work" as soon as I upload them. What's the right way to fix this? If it matters (I don't think so), I'm editing my files in Coda 2 and anders is the user for it.
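
    A hedged sketch of the usual fix, assuming the webserver only needs to read the uploads: hand the directory to the www-data group, set the setgid bit so new files inherit that group, and make the FTP daemon create group-readable files (the Forbidden error usually means www-data can't read what anders uploaded).

        # assumptions: Apache/nginx runs as www-data and needs read access only
        sudo chgrp -R www-data /var/www/SITE
        sudo chmod -R g+rX /var/www/SITE     # capital X: adds execute to directories only
        sudo chmod g+s /var/www/SITE         # new files/dirs inherit the www-data group
        # make uploads group-readable by default; for vsftpd that would be
        # local_umask=022 in /etc/vsftpd.conf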

    Read the article

  • BOINC error code -1200 when opening...

    - by Erik Vold
    I installed BOINC 6.10.21 on my Mac OS X 10.5 machine today, in order to upgrade from the 6.6 version I was running. I am the admin user, and I was logged in as the admin user. While installing 6.10.21 I was asked if non-admin users should be allowed to use BOINC, and I said 'yes'. Then when I tried to open BOINC I got a message like the following: "You currently are not authorized to manage the client. Either re-install and allow non-admin users or contact your administrator to add you to the 'boinc_master' user group." So I tried to reinstall first, and this time I was not asked if non-admin users should be allowed to use BOINC. I retried a few times and got no different result. So I downloaded 6.10.43 and installed that; again I was not asked the question, and when I tried to run BOINC I got the same "You currently are not authorized to manage the client..." message. So I did a Google search trying to figure out how to add my admin user to the boinc_master user group, and found a post suggesting I run the following in Terminal:

        sudo dscl . -append /Groups/boinc_master GroupMembership <your user's short name> CR

    So I did this, and now I get the following error:

        BOINC ownership or permissions are not set properly; please reinstall BOINC (Error code -1200)

    So I reinstall, and I am never asked the question about allowing non-admin users again, and I still get this error message after every reinstall attempt. What should I do?
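
    One hedged guess, assuming the trailing "CR" in that quoted command was typed literally (in the original post it almost certainly just meant "press carriage return"): the group may now contain a bogus member named CR, or a literal <your user's short name> entry. Worth inspecting and cleaning up:

        # see what actually ended up in the group
        dscl . -read /Groups/boinc_master GroupMembership
        # remove a bogus entry, then append the real short name (hypothetically 'erik')
        sudo dscl . -delete /Groups/boinc_master GroupMembership CR
        sudo dscl . -append /Groups/boinc_master GroupMembership erik

    If error -1200 persists after the group looks sane, re-running the installer is the usual next step.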

    Read the article

  • NFS4 permission denied when userid does not match (even though idmap is working)

    - by SystemParadox
    I have NFS4 set up with idmapd working correctly. ls -l from the client shows the correct user names, even though the user IDs differ between the machines. However, when the user IDs do not match, I get 'permission denied' errors trying to access files, even though ls -l shows the correct username. When the user IDs do happen to match by coincidence, everything works fine. sudo sysctl -w sunrpc.nfsd_debug=1023 gives the following output in the server syslog for the failed file access:

        nfsd_dispatch: vers 4 proc 1
        nfsv4 compound op #1/3: 22 (OP_PUTFH)
        nfsd: fh_verify(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsv4 compound op ffff88003d0f5078 opcnt 3 #1: 22: status 0
        nfsv4 compound op #2/3: 3 (OP_ACCESS)
        nfsd: fh_verify(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsd: fh_verify - just checking
        nfsv4 compound op ffff88003d0f5078 opcnt 3 #2: 3: status 0
        nfsv4 compound op #3/3: 9 (OP_GETATTR)
        nfsd: fh_verify(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsd: fh_verify - just checking
        nfsv4 compound op ffff88003d0f5078 opcnt 3 #3: 9: status 0
        nfsv4 compound returned 0
        nfsd_dispatch: vers 4 proc 1
        nfsv4 compound op #1/7: 22 (OP_PUTFH)
        nfsd: fh_verify(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsv4 compound op ffff88003d0f5078 opcnt 7 #1: 22: status 0
        nfsv4 compound op #2/7: 32 (OP_SAVEFH)
        nfsv4 compound op ffff88003d0f5078 opcnt 7 #2: 32: status 0
        nfsv4 compound op #3/7: 18 (OP_OPEN)
        NFSD: nfsd4_open filename dom_file op_stateowner (null)
        renewing client (clientid 4f96587d/0000000e)
        nfsd: nfsd_lookup(fh 28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba, dom_file)
        nfsd: fh_verify(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsd: fh_verify - just checking
        nfsd: fh_lock(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba) locked = 0
        nfsd: fh_compose(exp 08:01/22806529 srv/dom_file, ino=22809724)
        nfsd: fh_verify(36: 01070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsd: fh_verify - just checking
        fh_verify: srv/dom_file permission failure, acc=804, error=13
        nfsv4 compound op ffff88003d0f5078 opcnt 7 #3: 18: status 13
        nfsv4 compound returned 13

    Is that useful to anyone? Any hints on how to debug this would be greatly appreciated. Server kernel: 2.6.32-40-server (Ubuntu 10.04). Client kernel: 3.2.0-27-generic (Ubuntu 12.04). Same problem with my new server running 3.2.0-27-generic (Ubuntu 12.04). Thanks.
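
    For what it's worth, this is expected NFSv4 behaviour rather than an idmapd bug: with sec=sys (AUTH_SYS, the default), idmapd only translates names for display and attribute setting, while the server's actual permission checks still compare the raw numeric UIDs/GIDs in the RPC credential. Two hedged ways out:

        # 1) make the numeric IDs agree on both machines (hypothetical uid/gid 1005)
        sudo usermod -u 1005 someuser && sudo groupmod -g 1005 somegroup
        # 2) or switch to Kerberos, where the mapped identity is authoritative:
        #    server /etc/exports:  /srv/export  client(rw,sec=krb5)
        #    client:               mount -t nfs4 -o sec=krb5 server:/ /mnt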

    Read the article

  • Why do I get xfs_freeze "Operation not supported" error with ec2-consistent-snapshot? Debian Squeeze w/ext4 filesystem

    - by Michael Endsley
    I'm running the following command:

        [root@somehost ~]# ec2-consistent-snapshot --aws-credentials-file '/some/dir/file' --mysql --mysql-socket '/var/run/mysqld/mysql.sock' --mysql-username 'backup' --mysql-password 'password' --freeze-filesystem '/dev/xvda1' vol-xxxxxx

    It returns this error:

        xfs_freeze: cannot freeze filesystem at /dev/xvda1: Operation not supported
        ec2-consistent-snapshot: ERROR: xfs_freeze -f /dev/xvda1: failed(256)
        snap-eeb66393
        xfs_freeze: cannot unfreeze filesystem mounted at /dev/xvda1: Invalid argument
        ec2-consistent-snapshot: ERROR: xfs_freeze -u /dev/xvda1: failed(256)

    This is being run on Debian Squeeze with the ext4 Linux filesystem. Can anyone explain this error to me, or what might be its cause? When googling, I found information about it needing to be executed with sudo, but I'm performing the entire operation as root. I also found some posts about trying to run it after a CentOS upgrade using yum, but the situation appeared dissimilar. It's difficult to find things referring to this exact situation. xfs_freeze is available for use on the filesystem. Is it possible that the filesystem, despite being ext4, somehow doesn't support freezing? Sorry if I've missed some bit of StackExchange etiquette with this post -- it's my first venture here!
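
    A hedged explanation: ext4 does support freezing (it implements the same FIFREEZE ioctl that xfs_freeze uses, since kernel 2.6.29), but xfs_freeze wants a mount point, not a block device. Passing /dev/xvda1 makes it try to freeze the filesystem that the path /dev/xvda1 lives on -- the devtmpfs behind /dev -- which genuinely doesn't support freezing. Assuming the volume is mounted at /:

        # freeze/unfreeze by mount point, not device node
        xfs_freeze -f /
        xfs_freeze -u /
        # so the snapshot command would become
        ec2-consistent-snapshot ... --freeze-filesystem / vol-xxxxxx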

    Read the article

  • how to correctly mount fat32 partition in Ubuntu in order to preserve case

    - by Dean
    I've found a couple of problems that might be related to how my FAT32 partition was mounted. I hope you can help me solve them. I've also included the commands I used, to help others who find this post; sorry to those who might feel I should use less space. I have the following structure on my disk:

        dean@notebook:~$ sudo fdisk -l

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x08860886

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          13      102400    7  HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2              13        5737    45978624    7  HPFS/NTFS
        /dev/sda3            5738       10600    39062047+  83  Linux
        /dev/sda4           10601       19457    71143852+   5  Extended
        /dev/sda5           10601       11208     4883728+  82  Linux swap / Solaris
        /dev/sda6           11209       15033    30720000    b  W95 FAT32
        /dev/sda7           15033       19457    35537920    7  HPFS/NTFS

    In /etc/fstab I've got:

        UUID=91c57a65-dc53-476b-b219-28dac3682d31 / ext4 defaults 0 1
        UUID=BEA2A8AFA2A86D99 /media/NTFS ntfs-3g quiet,defaults,locale=en_US.utf8,umask=0 0 0
        UUID=0C0C-9BB3 /media/FAT32 vfat user,auto,utf8,fmask=0111,dmask=0000,uid=1000 0 0
        /dev/sda5 swap swap sw 0 0
        /dev/sda1 /media/sda1 ntfs nls=iso8859-1,ro,noauto,umask=000 0 0
        /dev/sda2 /media/sda2 ntfs nls=iso8859-1,ro,noauto,umask=000 0 0

    I checked my ID using id and got:

        dean@notebook:~$ id
        uid=1000(dean) gid=1000(dean) groups=4(adm),20(dialout),24(cdrom),46(plugdev),103(fuse),104(lpadmin),115(admin),120(sambashare),1000(dean)

    I don't know why, with these settings, I still have problems using svn like in this one. Thank you for your help!
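
    A hedged note on the underlying question: vfat is already case-preserving for long file names; what it can never be is case-sensitive, and the usual culprit for lost case is the handling of short 8.3-style names. Something like this (assuming the other options above are otherwise fine) keeps mixed-case short names intact:

        UUID=0C0C-9BB3 /media/FAT32 vfat user,auto,utf8,shortname=mixed,fmask=0111,dmask=0000,uid=1000 0 0

    If svn actually needs case-sensitivity (two paths differing only in case), no mount option can provide that on FAT32.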

    Read the article

  • configuring vsftpd anonymous upload. Creates files but freezes at 0 bytes

    - by Wayne
    vsftpd on Ubuntu, after sudo apt-get install vsftpd. I then did the configuration shown in the /etc/vsftpd.conf file below. Anonymous FTP allows cd to the upload directory and allows put myfile.txt, which gets created on the server, but then the client hangs and never proceeds. The file on the server remains at 0 bytes. Here are the folders and permissions:

        root@support:/home/ftp# ls -ld .
        drwxr-xr-x 3 root root 4096 Jun 22 00:00 .
        root@support:/home/ftp# ls -ld pub
        drwxr-xr-x 3 root root 4096 Jun 21 23:59 pub
        root@support:/home/ftp# ls -ld pub/upload
        drwxr-xr-x 2 ftp ftp 4096 Jun 22 00:06 pub/upload

    Here's the vsftpd.conf file:

        root@support:/home/ftp# grep -v '#' /etc/vsftpd.conf
        listen=YES
        anonymous_enable=YES
        write_enable=YES
        anon_upload_enable=YES
        dirmessage_enable=YES
        xferlog_enable=YES
        anon_root=/home/ftp/pub/
        connect_from_port_20=YES
        chown_uploads=YES
        chown_username=ftp
        nopriv_user=ftp
        secure_chroot_dir=/var/run/vsftpd
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key

    Here's an example file from an attempted upload:

        root@support:/home/ftp/pub/upload# ls -l
        total 0
        -rw------- 1 ftp nogroup 0 Jun 22 00:06 build.out

    This is the client attempting to upload... it is frozen at this point:

        $ ftp 173.203.89.78
        Connected to 173.203.89.78.
        220 (vsFTPd 2.0.6)
        User (173.203.89.78:(none)): ftp
        331 Please specify the password.
        Password:
        230 Login successful.
        ftp> put build.out
        200 PORT command successful. Consider using PASV.
        553 Could not create file.
        ftp> cd upload
        250 Directory successfully changed.
        ftp> put build.out
        200 PORT command successful. Consider using PASV.
        150 Ok to send data.
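
    A hedged reading: "150 Ok to send data" followed by a hang, with a 0-byte file already created, is the classic signature of a blocked active-mode (PORT) data connection -- the control channel works, but the server can't open the data channel back to the client. Assuming a firewall or NAT sits between the two, switching to passive mode with a fixed port range usually fixes it:

        # additions to /etc/vsftpd.conf (the port range is an arbitrary choice)
        pasv_enable=YES
        pasv_min_port=40000
        pasv_max_port=40100
        # then open 21 plus that range on the server's firewall, e.g.
        # ufw allow 21/tcp && ufw allow 40000:40100/tcp

    and type passive in the ftp client before put.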

    Read the article

  • How to test email spam scores with amavis?

    - by CaptSaltyJack
    I'd like a way to test a spam message to see the spam scores that SpamAssassin gives it. The SA db files (bayes_toks, etc.) reside in /var/lib/amavis/.spamassassin. I've been testing emails by doing this:

        sudo su amavis -c 'spamassassin -t msgfile'

    Though this yields some strange results, such as:

        Content analysis details:   (3.7 points, 5.0 required)

         pts rule name              description
        ---- ---------------------- --------------------------------------------------
         3.5 BAYES_99               BODY: Bayes spam probability is 99 to 100%
                                    [score: 1.0000]
        -0.0 NO_RELAYS              Informational: message was not relayed via SMTP
         0.0 LONG_TERM_PRICE        BODY: LONG_TERM_PRICE
         0.2 BAYES_999              BODY: Bayes spam probability is 99.9 to 100%
                                    [score: 1.0000]
        -0.0 NO_RECEIVED            Informational: message has no Received headers

    0.2 is an awfully low score for BAYES_999! But this is the first time I've used amavis; previously I've always just used SpamAssassin directly as a content filter in Postfix, but apparently running amavis/SpamAssassin is more efficient. So, with amavis in the picture, how can I run a test on a message to see its spam score breakdown? Another email I ran a test on got this result:

        2.0 BAYES_80 BODY: Bayes spam probability is 80 to 95% [score: 0.8487]

    It doesn't make sense that BAYES_80 can yield a higher score than BAYES_999. Help!
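
    A hedged clarification on the scoring itself: BAYES_999 fires in addition to BAYES_99 (any message at 99.9%+ matches both rules), so its 0.2 is a top-up on BAYES_99's 3.5 -- the real Bayes contribution is 3.7, comfortably above BAYES_80's 2.0. For testing under amavis, running SA as the amavis user (as above) is the right idea; adding debug channels shows which config and Bayes db actually get used:

        sudo -u amavis spamassassin -t -D bayes,config < msgfile 2>&1 | less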

    Read the article

  • nginx starts up before apache

    - by paullb
    I've been fumbling through setting up Redmine on an Ubuntu (12.04) box, and somewhere along the line nginx got set up; now Apache no longer loads because nginx has already grabbed the port. I tried removing nginx with the command below, but that didn't seem to make any difference: when I restarted the server and pointed my web browser at it, I still got the "Welcome to nginx" message.

        sudo apt-get purge nginx

    I have confirmed that nginx is gone, because when I run the above now I get as output:

        Package nginx is not installed, so not removed

    Yet every time I start the machine it is running again. I noticed the following among the running processes (if that is helpful):

        root      923  0.0  0.0  76784  1280 ?  Ss  03:00  0:00 nginx: master process /usr/sbin/nginx
        www-data  925  0.0  0.0  77092  1704 ?  S   03:00  0:00 nginx: worker process
        www-data  926  0.0  0.1  77092  2204 ?  S   03:00  0:00 nginx: worker process
        www-data  927  0.0  0.0  77092  1704 ?  S   03:00  0:00 nginx: worker process
        www-data  928  0.0  0.0  77092  1704 ?  S   03:00  0:00 nginx: worker process

    Any advice for bringing back Apache2 as the "default" (for lack of a better term) web server?
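
    A hedged note: on Ubuntu, 'nginx' is essentially a metapackage; the binary (and /usr/sbin/nginx is clearly still there) comes from a companion package such as nginx-full, nginx-light or nginx-common, which purging 'nginx' alone leaves behind -- init script included. Something like:

        # find what's actually installed, then purge all of it
        dpkg -l | grep nginx
        sudo apt-get purge nginx nginx-common nginx-full nginx-light
        # or, to keep nginx around but stop it starting at boot
        sudo service nginx stop && sudo update-rc.d -f nginx remove
        sudo service apache2 restart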

    Read the article

  • MySQL permission errors

    - by dotancohen
    It seems that on an Ubuntu 14.04 machine the user mysql cannot access anything. It is not writing logs nor reading files. Witness:

        bruno():mysql$ cat /etc/passwd | grep mysql
        mysql:x:116:127:MySQL Server,,,:/nonexistent:/bin/false

        bruno():mysql$ sudo mysql_install_db
        Installing MySQL system tables...
        140818 18:16:50 [ERROR] Can't read from messagefile '/usr/share/mysql/english/errmsg.sys'
        140818 18:16:50 [ERROR] Aborting
        140818 18:16:50 [Note] Installation of system tables failed!  Examine the logs in /var/lib/mysql for more information.
        ...boilerplate trimmed...

        bruno():mysql$ ls -la /usr/share/mysql/english/errmsg.sys
        -rw-r--r-- 1 root root 59535 Jul 29 13:40 /usr/share/mysql/english/errmsg.sys

        bruno():mysql$ wc -l /usr/share/mysql/english/errmsg.sys
        16 /usr/share/mysql/english/errmsg.sys

    Here we have seen that mysql cannot read /usr/share/mysql/english/errmsg.sys even though the permissions are open to read it, and in fact the regular login user can read the file (with wc). Additionally, MySQL is not writing any logs:

        bruno():mysql$ ls -la /var/log/mysql
        total 8
        drwxr-s---  2 mysql adm    4096 Aug 18 16:10 .
        drwxrwxr-x 18 root  syslog 4096 Aug 18 16:10 ..

    What might cause this user to not be able to access anything? What can I do about it?
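
    A hedged suspicion: when a process can't read world-readable files on Ubuntu, AppArmor is the usual suspect -- the shipped mysqld profile confines it to specific paths regardless of Unix permissions, and it bites especially hard if datadir or other paths were moved. Worth checking:

        # look for recent denials and the profile's state
        sudo dmesg | grep -i 'apparmor.*denied'
        sudo aa-status | grep mysql
        # put the profile in complain mode as a test (needs apparmor-utils)
        sudo aa-complain /etc/apparmor.d/usr.sbin.mysqld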

    Read the article

  • Understanding what needs to be in place for a server to send outgoing email from a linux box

    - by Matt
    I am attempting to configure an openSUSE 11.1 box to send outgoing email for a domain that the same server is hosting. I don't understand enough about SMTP servers and the like to know what needs to be in place and working. The system already had Postfix installed, and I confirmed it was running via:

        sudo /etc/init.d/postfix status

    I examined the Postfix config file in /etc/postfix/main.cf and configured a couple of items regarding the domain/host name and such, but left it largely default. I attempted to send an email from the command line with the following command:

        echo "test 123" | mail -s "test subject" [email protected]

    where differentdomain.com was not the same domain as the one being hosted on the server. However, the email never reaches the target account. Any suggestions? EDIT: In the Postfix log (/var/log/mail.info; there's nothing in .err) I see that Postfix is trying to connect to what appears to be a different SMTP server on our network, with the connection refused:

        connect to ourdomain.com.inbound15.mxlogic.net[our ip address]:25: Connection refused

    However, I can't figure out why it is 1) trying to connect to that server and 2) not just sending the messages itself... I mean, isn't Postfix an SMTP server? I did a grep -ri on ourdomain from /etc and see no configuration files anywhere telling it to do this. Why is it?
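
    A hedged answer to the "why": Postfix picks the next hop from the recipient domain's MX record in DNS, or from a relayhost if one is set in main.cf -- not from anything greppable elsewhere in /etc. The mxlogic.net host is most likely an MX that DNS returned (mxlogic is McAfee's hosted mail filtering), and "Connection refused" just means that box won't accept port-25 connections from this machine. To see what's actually in play:

        # the effective non-default settings (relayhost especially)
        postconf -n | egrep 'relayhost|mydestination|myhostname'
        # the route DNS gives for the recipient domain
        dig +short MX differentdomain.com
        # then watch a live delivery attempt
        echo "test 123" | mail -s "test subject" [email protected]
        tail -f /var/log/mail.info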

    Read the article

  • Unix Permissions issue with users belonging to the same group accessing a folder

    - by TK Kocheran
    I have a folder I'd really like to allow another user on this machine access to. I'm using mt-daapd to serve music to the network, so I'd like to enable the mt-daapd user to access my Music directory, /home/rfkrocktk/Music. The master user is rfkrocktk, obviously. I've tried to set all of the permissions properly on the directory, but the mt-daapd user can't access the files. I created a group called media-users and added both rfkrocktk and mt-daapd to it, in order to give mt-daapd permission to simply read all of the files in that directory and its subdirectories. If I run id on each of my users, here's what's displayed:

        $ id rfkrocktk
        uid=1000(rfkrocktk) gid=1000(rfkrocktk) groups=1000(rfkrocktk),4(adm),20(dialout),24(cdrom),29(audio),46(plugdev),104(lpadmin),115(admin),120(sambashare),124(vboxusers),1001(jupiter),2002(media-users)

        $ id mt-daapd
        uid=123(mt-daapd) gid=65534(nogroup) groups=65534(nogroup),2002(media-users)

    It definitely seems that both users are part of the media-users group, so what could be going wrong? If I run ls -l on the actual Music directory to see its permissions, here's the output:

        drwxr-Sr-- 201 rfkrocktk media-users 12288 2011-01-13 12:26 Music

    If I run ls -l inside the Music directory to get its children, here's the output:

        drwxr-Sr--   3 rfkrocktk media-users 4096 2010-12-20 15:31 2DBoy
        drwxr-Sr--   3 rfkrocktk media-users 4096 2010-05-25 12:50 ABBA
        drwxr-Sr--   3 rfkrocktk media-users 4096 2009-12-28 15:19 Access Denied
        drwxr-Sr--  10 rfkrocktk media-users 4096 2009-12-28 15:19 AC-DC
        drwxr-Sr--   3 rfkrocktk media-users 4096 2009-12-28 15:19 Aerosmith
        drwxr-Sr--   3 rfkrocktk media-users 4096 2010-06-04 10:45 A Flock of Seagulls
        drwxr-Sr--   4 rfkrocktk media-users 4096 2010-05-28 18:13 Alestorm
        drwxr-Sr--   3 rfkrocktk media-users 4096 2010-06-22 23:29 Amon Amarth
        drwxr-Sr--   5 rfkrocktk media-users 4096 2009-12-28 15:19 Anberlin
        ...

    From this, it would seem that I should be able to access the folders from mt-daapd, but I can't. Running sudo -i -u mt-daapd ls -l /home/rfkrocktk/Music displays nothing, indicating to me that for whatever reason, mt-daapd doesn't have access to read the folder. What am I doing wrong?
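
    The capital S in drwxr-Sr-- is the giveaway: an uppercase S means the setgid bit is set but the group execute bit is not, and without x on a directory, group members cannot enter it or open anything inside it. A sketch of the fix:

        # capital X adds execute only to directories (and already-executable files)
        chmod -R g+rX /home/rfkrocktk/Music
        # the home directory itself must also be traversable by mt-daapd; since
        # mt-daapd isn't in the rfkrocktk group, 'other' is its only route:
        chmod o+x /home/rfkrocktk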

    Read the article

  • Ati X1600 driver problem on Mac

    - by Mulot
    Hi all, I currently own an '06 MacBook Pro 1,1, and for some months I have had recurrent display bugs and artifacts. A quick search shows that a lot of other Mac users (iMac or MacBook Pro) have the same problem, caused by a defect in the X1600 video card. Apparently it's due to overheating; in my case, even without much heat, I get very bad display bugs such as colorful pixel lines, glitches, freezes and crashes -- all of this on Tiger, Leopard and Snow Leopard. I found an interesting article talking about this problem and trying to gather people so that Apple takes the serious GPU problem into consideration. In one of the comments, a user said he removed all bundles named with "radeon", and then he had no more problems under Leopard; it seems to work fine on Snow Leopard too. I did the same thing: I removed the driver's bundles and restarted, and I have no more problems -- but no more 3D acceleration either, which is not an acceptable solution. For those interested, here is the list of files to delete to stop having this problem:

        /System/Library/Extensions/ATIRadeonX1000.kext
        /System/Library/Extensions/ATIRadeonX1000GA.plugin
        /System/Library/Extensions/ATIRadeonX1000GLDriver.bundle
        /System/Library/Extensions/ATIRadeonX1000VADriver.bundle
        /System/Library/Extensions/ATIRadeonX2000.kext
        /System/Library/Extensions/ATIRadeonX2000GA.plugin
        /System/Library/Extensions/ATIRadeonX2000GLDriver.bundle
        /System/Library/Extensions/ATIRadeonX2000VADriver.bundle

    I would like to know if there is a way to fix this using other drivers, if that's possible, or by forming a group to push Apple to put a replacement program in place. Edit: to locate those files on your hard drive if you are not on Snow Leopard:

        sudo find / -iname "*radeon*"

    Read the article

  • How to fix locale settings in Debian squeeze

    - by blogjunkie
    I occasionally get locale errors, and I've tried running dpkg-reconfigure locales to fix the problem. Here's the output:

        :~$ sudo dpkg-reconfigure locales
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
                LANGUAGE = (unset),
                LC_ALL = (unset),
                LC_CTYPE = "UTF-8",
                LANG = "en_US.UTF-8"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        /usr/bin/locale: Cannot set LC_CTYPE to default locale: No such file or directory
        /usr/bin/locale: Cannot set LC_ALL to default locale: No such file or directory
        Generating locales (this might take a while)...
          en_US.UTF-8... done
        Generation complete.
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
                LANGUAGE = (unset),
                LC_ALL = (unset),
                LC_CTYPE = "UTF-8",
                LANG = "C"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
                LANGUAGE = (unset),
                LC_ALL = (unset),
                LC_CTYPE = "UTF-8",
                LANG = "C"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").

    I looked for /usr/bin/locale but it doesn't exist on my system. Do I need to create it? What do I put in there? Also, I found a related question which says the cause of that asker's problem was in the sshd_config file, which had the following entry:

        AcceptEnv LANG LC_*

    I'm mainly concerned that this may cause problems for my VPS; otherwise, if it's nothing major, I'll be happy to ignore the problem. What should I do? Thanks!
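
    A hedged diagnosis: LC_CTYPE = "UTF-8" is the value macOS Terminal sends by default, and it is not a valid locale name on Linux; with AcceptEnv LANG LC_* in sshd_config, the server imports it on every ssh login, producing exactly these warnings. Two sketches:

        # server side: set a sane default and stop importing the client's LC_*
        sudo update-locale LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8
        # and/or comment out 'AcceptEnv LANG LC_*' in /etc/ssh/sshd_config,
        # then: sudo /etc/init.d/ssh restart

    On the client (Mac) side, the equivalent fix is unchecking "Set locale environment variables on startup" in Terminal's advanced settings, so the bogus value is never sent.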

    Read the article

  • Ruby on Rails (Redmine) on Apache - 503 Error

    - by andrewtweber
    I am running a Ruby on Rails application called Redmine. It's been working fine, but today it's giving a 503 Service Temporarily Unavailable error. (It was initially set up by an employee who is now gone.) I checked the error log and it says:

        [Mon Nov 21 11:03:30 2011] [error] (111)Connection refused: proxy: HTTP: attempt to connect to 127.0.0.1:3000 (127.0.0.1) failed
        [Mon Nov 21 11:03:30 2011] [error] ap_proxy_connect_backend disabling worker for (127.0.0.1)

    Here's a chunk of my Apache config:

        <VirtualHost *:80>
            ServerName redmine.{domain}.com
            RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
            RewriteRule ^/(.*)$ balancer://redminecluster%{REQUEST_URI} [P,QSA,L]
        </VirtualHost>

        <Proxy balancer://redminecluster>
            BalancerMember http://127.0.0.1:3000
        </Proxy>

    I found this link: http://www.redmine.org/boards/2/topics/20561, which suggests I simply need to "start the redmine server." I've tried /etc/init.d/redmine start, which gives me this output:

        => Booting Mongrel
        => Rails 2.3.11 application starting on http://0.0.0.0:3000

    The contents of /etc/init.d/redmine:

        cd /var/redmine
        sudo ruby script/server -d -e production

    One thing I immediately notice is that it says 0.0.0.0 instead of 127.0.0.1. In addition, running top or ps -ef shows no record of a "mongrel" or "redmine" process. I've also tried restarting Apache before and after starting Redmine. Not sure where to go from here.
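
    Two hedged observations: 0.0.0.0 is fine (it means "listen on all interfaces", which includes 127.0.0.1), but "no mongrel process in ps" means the server prints its banner and then dies while daemonizing -- the 503 is just Apache finding nobody on port 3000. Running it in the foreground usually surfaces the real error:

        cd /var/redmine
        # no -d flag: stay attached so any stack trace is visible
        ruby script/server -e production
        # if it boots, confirm the listener from another shell
        netstat -tlnp | grep :3000

    A stale tmp/pids/server.pid or a broken database connection are common causes; log/production.log should say which.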

    Read the article

  • How does one change the UUID of a Volume on Mac OS X 10.6?

    - by Emmel
    Does anyone know how to change the UUID of a volume? The background for this question is that I have a duplicate UUID issue: I have /Volumes/OldMacHD with a UUID of XYZ, and I have /Volumes/Mirror1 with a UUID of XYZ (the same UUID! I bet that's because OldMacHD USED to be part of this mirror). I got these UUIDs via:

        diskutil info /dev/thatdisknumber | grep UUID

    I'd like to change the UUID of 'Mirror1'. I discovered by chance the hfs.util utility, since these are HFS volumes after all. The man page for hfs.util says that if you issue the -s flag, this changes the UUID. However, if you type hfs.util all by itself, it doesn't show you the -s option at all, just every option besides that! Grr. I tried it anyway:

        sudo /System/Library/Filesystems/hfs.fs/hfs.util -s /dev/disk4

    (the RAID volume). Nothing happens. No error message, no success message. The UUID is exactly the same. I tried it while the volume was unmounted. Any ideas?
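
    A hedged guess, not verified: filesystem utilities like hfs.util generally want the volume's slice device rather than the whole-disk node, so if the HFS volume actually lives on a partition, it may be worth aiming at that and checking the exit status, since the tool is silent either way:

        # hypothetical slice name -- check 'diskutil list' for the real one
        sudo /System/Library/Filesystems/hfs.fs/hfs.util -s /dev/disk4s2
        echo $?   # a non-zero status would at least confirm the silent failure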

    Read the article

  • need to stop mysql server on my mac os x

    - by al0ne evenings
    I just installed XAMPP on my Mac OS X machine. When I tried to start MySQL, it displayed a message that MySQL is already running on this computer, and that in order to start MySQL I must first stop the running one. I tried the following ways to stop it, but none of them works:

        mysqladmin version
        sudo /usr/local/mysql/mysql.server stop    # mysql.server: command not found
        mysqladmin -u root -p password shutdown    # restarts the server but does not shut it down

    When I use which mysql, it shows the path /usr/local/bin/mysql, and when I issue ps aux | grep mysqld I get the following output:

        zafarsaleem 85209 0.0 0.3 2699804 13204 ?? S 7:51AM 0:00.88 /Applications/MAMP/Library/bin/mysqld --basedir=/Applications/MAMP/Library --datadir=/Applications/MAMP/db/mysql --plugin-dir=/Applications/MAMP/Library/lib/plugin --lower-case-table-names=0 --log-error=/Applications/MAMP/logs/mysql_error_log.err --pid-file=/Applications/MAMP/tmp/mysql/mysql.pid --socket=/Applications/MAMP/tmp/mysql/mysql.sock --port=8889
        zafarsaleem 85093 0.0 0.0 2435488 924 ?? S 7:51AM 0:00.03 /bin/sh /Applications/MAMP/Library/bin/mysqld_safe --port=8889 --socket=/Applications/MAMP/tmp/mysql/mysql.sock --lower_case_table_names=0 --pid-file=/Applications/MAMP/tmp/mysql/mysql.pid --log-error=/Applications/MAMP/logs/mysql_error_log
        zafarsaleem 86693 0.0 0.0 2425480 180 s004 R+ 8:30AM 0:00.00 grep mysqld
        zafarsaleem 86507 0.0 0.3 2678756 11364 ?? S 8:07AM 0:00.63 /usr/local/Cellar/mysql/5.5.20/bin/mysqld --basedir=/usr/local/Cellar/mysql/5.5.20 --datadir=/usr/local/var/mysql --plugin-dir=/usr/local/Cellar/mysql/5.5.20/lib/plugin --max-allowed-packet=32M --log-error=/usr/local/var/mysql/Zafars-MacBook-Pro-2.local.err --pid-file=/usr/local/var/mysql/Zafars-MacBook-Pro-2.local.pid
        zafarsaleem 86447 0.0 0.0 2435488 920 ?? S 8:07AM 0:00.02 /bin/sh /usr/local/bin/mysqld_safe --max_allowed_packet=32M

    Please help. How can I resolve this issue?
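
    A hedged reading of that ps output: there are two separate MySQL servers running -- MAMP's bundled one (on port 8889) and a Homebrew 5.5.20 install -- and neither belongs to XAMPP, which is why XAMPP refuses to start a third. Stopping each with its own tooling should clear the way:

        # MAMP's copy (script name per a standard MAMP install; adjust if absent)
        /Applications/MAMP/bin/stopMysql.sh
        # the Homebrew copy (path assumes the Cellar layout shown above)
        /usr/local/Cellar/mysql/5.5.20/support-files/mysql.server stop
        # verify nothing is left listening
        ps aux | grep [m]ysqld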

    Read the article

  • Installing VirtualBox on BackTrack 5

    - by m0skit0
    I'm getting this error when running VirtualBox's installation script:

        $ sudo ~/Downloads/VirtualBox-4.1.14-77440-Linux_x86.run
        Verifying archive integrity... All good.
        Uncompressing VirtualBox for Linux installation...........
        VirtualBox Version 4.1.14 r77440 (2012-04-12T16:20:44Z) installer
        Removing previous installation of VirtualBox 4.1.14 r77440 from /opt/VirtualBox
        Installing VirtualBox to /opt/VirtualBox
        tar: Record size = 8 blocks
        Python found: python, installing bindings...
        Building the VirtualBox kernel modules
        Error! Bad return status for module build on kernel: 3.2.6 (i686)
        Consult the make.log in the build directory
        /var/lib/dkms/vboxhost/4.1.14/build/ for more information.
        ERROR: binary package for vboxhost: 4.1.14 not found

    Here's the log:

        $ cat /var/lib/dkms/vboxhost/4.1.14/build/make.log
        DKMS make.log for vboxhost-4.1.14 for kernel 3.2.6 (i686)
        Sun May 13 14:32:52 CEST 2012
        make: Entering directory `/usr/src/linux-headers-3.2.6'
        /usr/src/linux-headers-3.2.6/arch/x86/Makefile:39: /usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu: No such file or directory
        make: *** No rule to make target `/usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu'.  Stop.
        make: Leaving directory `/usr/src/linux-headers-3.2.6'

    The /usr/src/linux-headers-3.2.6/arch/x86/ directory:

        $ ls /usr/src/linux-headers-3.2.6/arch/x86/
        Kconfig        Makefile  ia32    lguest    mm        pci       tools  video
        Kconfig.cpu    boot      kernel  lib       net       platform  um     xen
        Kconfig.debug  crypto    kvm     math-emu  oprofile  power     vdso

    Makefile references to "cpu":

        $ cat /usr/src/linux-headers-3.2.6/arch/x86/Makefile | grep cpu
        include $(srctree)/arch/x86/Makefile_32.cpu
        # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)

    Before upgrading to 3.x I didn't have this problem; the script would install VirtualBox correctly. Any ideas on what might be causing this? Thanks in advance!
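
    A hedged fix: the headers tree is simply missing arch/x86/Makefile_32.cpu (which 32-bit module builds include), so restoring that one file from the matching vanilla kernel source should let DKMS build the modules. A sketch, assuming network access and an otherwise intact headers tree:

        # fetch the file from the v3.2 kernel tree and drop it into the headers
        cd /usr/src/linux-headers-3.2.6/arch/x86/
        sudo wget https://raw.githubusercontent.com/torvalds/linux/v3.2/arch/x86/Makefile_32.cpu
        # then re-run the installer so DKMS rebuilds vboxhost
        sudo ~/Downloads/VirtualBox-4.1.14-77440-Linux_x86.run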

    Read the article

  • How do I switch java versions to an earlier version in Fedora 17?

    - by JHutson456
    I just installed Fedora 17. I'm setting up the Android build environment and need Java. I downloaded and installed jdk-6u32-linux-amd64.rpm, ran java -version, and it spat out the correct version. Well, a day or two later I tried my first compile in Fedora 17 and it complained about Java and failed. I ran java -version again and, lo and behold, it spits out:

        $ java -version
        java version "1.7.0_03-icedtea"
        OpenJDK Runtime Environment (fedora-2.1.fc17.7-x86_64)
        OpenJDK 64-Bit Server VM (build 22.0-b10, mixed mode)

    I'm stumped. I mean, I've run the update/upgrade commands since I installed, but I didn't think those updated full version revisions... So I ran alternatives --config java, and that only gave me the Java 1.7 version. While digging around more, I discovered that the recommended version of Java for the build environment is jdk-6u27-linux-x64-rpm.bin, so I downloaded that from here: Oracle Download. When I ran:

        sudo sh jdk-6u27-linux-x64-rpm.bin

    it returned:

        Unpacking...
        Checksumming...
        Extracting...
        UnZipSFX 5.50 of 17 February 2002, by Info-ZIP ([email protected]).
          inflating: jdk-6u27-linux-amd64.rpm
          inflating: sun-javadb-common-10.6.2-1.1.i386.rpm
          inflating: sun-javadb-core-10.6.2-1.1.i386.rpm
          inflating: sun-javadb-client-10.6.2-1.1.i386.rpm
          inflating: sun-javadb-demo-10.6.2-1.1.i386.rpm
          inflating: sun-javadb-docs-10.6.2-1.1.i386.rpm
          inflating: sun-javadb-javadoc-10.6.2-1.1.i386.rpm
        Preparing...  ########################################### [100%]
            package jdk-2000:1.6.0_32-fcs.x86_64 (which is newer than jdk-2000:1.6.0_27-fcs.x86_64) is already installed
        Done.

    So now I'm confused. I ran alternatives --config java again, but it's still only returning 1.7, so I don't know what to do. I want to end up with 6u27 as the installed and functional version of the JDK. Thank you.
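
    A hedged sketch: the Oracle RPMs don't register themselves with Fedora's alternatives system, so alternatives --config java never sees them. Registering the binary by hand (and removing the already-installed 6u32 first, since RPM refuses the 6u27 downgrade) should make it selectable; the /usr/java path below is the Oracle RPM's default, so verify it on your box:

        # remove the installed 6u32 so 6u27 can go in
        sudo rpm -e jdk
        sudo rpm -ivh jdk-6u27-linux-amd64.rpm
        # register it with alternatives, then pick it
        sudo alternatives --install /usr/bin/java java /usr/java/jdk1.6.0_27/bin/java 20000
        sudo alternatives --config java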

    Read the article

  • cups log kills ubuntu 12.04 and sudoer permissions changed

    - by peterretief
    I am using Ubuntu 12.04 as a desktop and recently had a weird crash: the log file for CUPS filled up the entire drive and locked me out. What had also changed was that /var/lib/sudo had changed owner from root to peter (me). I didn't make this change -- I checked the history! I set the sudo files back to root and capped the maximum size of the CUPS log. Has anyone had a similar experience? It feels like someone is messing around with my settings. Is there any way to trace how the error occurred? Logs:

        auth.log:
        Jan  1 02:04:13 peter-desktop lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)
        Jan  1 02:04:13 peter-desktop lightdm: pam_ck_connector(lightdm:session): nox11 mode, ignoring PAM_TTY :0
        Jan  1 02:06:53 peter-desktop lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)
        Jan  1 02:06:53 peter-desktop lightdm: pam_ck_connector(lightdm:session): nox11 mode, ignoring PAM_TTY :0

        syslog:
        Jan  1 02:04:13 peter-desktop rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="903" x-info="http://www.rsyslog.com"] start
        Jan  1 02:04:13 peter-desktop rsyslogd: rsyslogd's groupid changed to 103
        Jan  1 02:04:13 peter-desktop rsyslogd: rsyslogd's userid changed to 101
        Jan  1 02:04:13 peter-desktop rsyslogd-2039: Could not open output pipe '/dev/xconsole' [try http://www.rsyslog.com/e/2039 ]
        Jan  1 02:04:13 peter-desktop bluetoothd[898]: Failed to init gatt_example plugin
        Jan  1 02:04:13 peter-desktop kernel: [    0.000000] Initializing cgroup subsys cpuset
        Jan  1 02:04:13 peter-desktop kernel: [    0.000000] Initializing cgroup subsys cpu
        Jan  1 02:04:13 peter-desktop kernel: [    0.000000] Linux version 3.2.0-25-generic-pae (buildd@palmer) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #40-Ubuntu SMP Wed May 23 22:11:24 UTC 2012 (Ubuntu 3.2.0-25.40-generic-pae 3.2.18)
        Jan  1 02:04:13 peter-desktop kernel: [    0.000000] KERNEL supported cpus:
        Jan  1 02:04:13 peter-desktop kernel: [    0.000000]   Intel GenuineIntel
        Jan  1 02:04:13 peter-desktop kernel: [    0.000000]   AMD AuthenticAMD
        Jan  1 02:04:13 peter-desktop kernel: [    0.000000]   NSC Geode by NSC
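
    A couple of hedged pointers rather than a diagnosis: the Jan 1 timestamps alongside a mid-2012 kernel suggest the hardware clock reset (common after a crash or a flat CMOS battery), which can make ownership dates and log history look stranger than they are. For the runaway log itself, CUPS has a built-in cap, and a debug log level is the usual culprit:

        # /etc/cups/cupsd.conf
        LogLevel warn      # 'debug' or 'debug2' will flood error_log
        MaxLogSize 1m      # rotate error_log once it reaches 1 MB
        # then: sudo service cups restart

    Checking what /var/log/cups/error_log was repeating before the disk filled should reveal the original trigger.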

    Read the article

  • Variable directory names over SCP

    - by nedm
    We have a backup routine that previously ran from one disk to another on the same server, but we have recently moved the source data to a remote server and are trying to replicate the job via scp. We need to run the script on the target server, and we've set up key-based scp (no username/password required) between the two servers. Using scp to copy specific files and directories works perfectly:

        scp -r -p -B [email protected]:/mnt/disk1/bsource/filename.txt /mnt/disk2/btarget/

    However, our previous routine iterates through directories on the source disk to determine which files to copy, then runs them individually through gpg encryption. Is there any way to do this only by using scp? Again, this script needs to run from the target server, and the user the job runs under only has scp (no ssh) access to the target system. The old job looked something like this:

        # Change to source dir
        cd /mnt/disk1

        # Create variable to store
        # directories named by date YYYYMMDD
        j="20000101/"

        # Iterate through directories in the current dir
        # to get the most recent folder name
        for i in $(ls -d */); do
          if [ "$j" \< "$i" ]; then
            j=${i%/*}
          fi
        done

        # Encrypt individual files from $j to target directory
        cd ./${j%%}/bsource/
        for k in $(ls -p | grep -v /$); do
          sudo /usr/bin/gpg -e -r "Backup Key" --batch --no-tty -o "/mnt/disk2/btarget/$k.gpg" "/mnt/disk1/$j/bsource/$k"
        done

    Can anyone suggest how to do this via scp from the target system? Thanks in advance.
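
    scp itself has no directory-listing verb, so a hedged workaround is sftp, which rides the same SSH subsystem and is often (though not always -- it depends on how the scp-only restriction is enforced) permitted wherever scp is. A sketch, assuming the remote layout shown above:

        # list the remote date-named dirs in batch mode, keep the newest
        latest=$(printf 'cd /mnt/disk1\nls -1\n' \
                 | sftp -q -b - [email protected] \
                 | grep -E '^[0-9]{8}$' | sort | tail -n 1)

        # pull that day's files, then encrypt locally on the target
        mkdir -p /tmp/bstage
        scp -r -p -B "[email protected]:/mnt/disk1/$latest/bsource/*" /tmp/bstage/
        for k in /tmp/bstage/*; do
          gpg -e -r "Backup Key" --batch --no-tty \
              -o "/mnt/disk2/btarget/$(basename "$k").gpg" "$k"
        done
        rm -rf /tmp/bstage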

    Read the article

  • Apache2 config variable is not defined

    - by Kurt Bourbaki
    I installed apache2 on Ubuntu 13.10. If I try to restart it using:

        sudo /etc/init.d/apache2 restart

    I get this message:

        AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message

    So I read that I should edit my httpd.conf file. But since I can't find it in the /etc/apache2/ folder, I tried to locate it using this command:

        /usr/sbin/apache2 -V

    But the output I get is this:

        [Fri Nov 29 17:35:43.942472 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOCK_DIR} is not defined
        [Fri Nov 29 17:35:43.942560 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_PID_FILE} is not defined
        [Fri Nov 29 17:35:43.942602 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_RUN_USER} is not defined
        [Fri Nov 29 17:35:43.942613 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_RUN_GROUP} is not defined
        [Fri Nov 29 17:35:43.942627 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Nov 29 17:35:43.947913 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Nov 29 17:35:43.948051 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Nov 29 17:35:43.948075 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        AH00526: Syntax error on line 74 of /etc/apache2/apache2.conf: Invalid Mutex directory in argument file:${APACHE_LOCK_DIR}

    Line 74 of /etc/apache2/apache2.conf is this:

        Mutex file:${APACHE_LOCK_DIR} default

    I took a look at my /etc/apache2/envvars file, but I don't know what to do with it. What should I do?
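
    A hedged pointer: those ${APACHE_*} variables are defined in /etc/apache2/envvars, which the init script and apache2ctl source before running the binary; invoking /usr/sbin/apache2 directly skips that step, hence every AH00111 warning. And on Debian-style layouts there is no httpd.conf at all -- the ServerName fix goes in its own conf snippet:

        # use apache2ctl (it sources envvars for you), or source them yourself
        apache2ctl -V
        source /etc/apache2/envvars && /usr/sbin/apache2 -V

        # silence AH00558 the Debian way
        echo "ServerName localhost" | sudo tee /etc/apache2/conf-available/servername.conf
        sudo a2enconf servername && sudo service apache2 reload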

    Read the article

  • OS X 10.6 Apply ipfw rules at startup

    - by Michael Irey
    I have a couple of firewall rules I would like to apply at startup. I have followed the instructions from http://images.apple.com/support/security/guides/docs/SnowLeopard_Security_Config_v10.6.pdf, page 192. However, the rules do not get applied at startup. I am running 10.6.8, NON-Server edition. I can, however, run the following, which applies the rules correctly:

        sudo ipfw /etc/ipfw.conf

    which results in:

        00100 fwd 127.0.0.1,8080 tcp from any to any dst-port 80 in
        00200 fwd 127.0.0.1,8443 tcp from any to any dst-port 443 in
        65535 allow ip from any to any

    Here is my /etc/ipfw.conf:

        # To get real 80 and 443 while loading vagrant vbox
        add fwd localhost,8080 tcp from any to any 80 in
        add fwd localhost,8443 tcp from any to any 443 in

    Here is my /Library/LaunchDaemons/ipfw.plist:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>ipfw</string>
            <key>Program</key>
            <string>/sbin/ipfw</string>
            <key>ProgramArguments</key>
            <array>
                <string>/sbin/ipfw</string>
                <string>/etc/ipfw.conf</string>
            </array>
            <key>RunAtLoad</key>
            <true />
        </dict>
        </plist>

    The permissions of all the files seem to be appropriate:

        -rw-rw-r-- 1 root wheel 151 Oct 11 14:11 /etc/ipfw.conf
        -rw-rw-r-- 1 root wheel 438 Oct 11 14:09 /Library/LaunchDaemons/ipfw.plist

    Any thoughts or ideas on what could be wrong would be very helpful!
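
    A few hedged checks, since launchd is usually the silent failure point here:

        # is the job loaded at all?
        sudo launchctl list | grep ipfw
        # load it by hand and watch for complaints
        sudo launchctl load -w /Library/LaunchDaemons/ipfw.plist
        # launchd errors land in the system log
        grep -i ipfw /var/log/system.log
        # validate the plist syntax
        plutil -lint /Library/LaunchDaemons/ipfw.plist

    Two assumption-laden tweaks worth ruling out: launchd daemon plists are conventionally named and labeled reverse-DNS (e.g. com.example.ipfw), and having both Program and ProgramArguments is redundant (ProgramArguments alone suffices). Neither should strictly break loading, but both are cheap to test.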

    Read the article

  • LogMeIn Hamachi for Linux

    - by tlunter
    So far, most of my work using LogMeIn Hamachi has been from either a Mac OS X or Windows system to a Windows or Linux computer. Recently I purchased a mini computer and have been running Ubuntu Server on it as my little server. I knew LogMeIn had a Linux client that is command-line only, but I often do all my work via the command line anyway, so that wasn't an issue. I added my user to the correct local file so that I could run the Hamachi daemon without sudo, and was able to connect to LogMeIn's service. I decided to set up my Linux server as a git server as well, and set that up correctly. The thing is, the server is behind my school's firewall, and I need to use Hamachi to get around that. Since most of the time I was using either Mac or Windows, I never had an issue SSHing onto any of my computers, since LogMeIn is fully featured for those OSes. From Linux (Arch), though, it seems like the client cannot correctly route to the LogMeIn IPs. I know from Windows that I can connect to the Linux computers, both of them. From Linux (Arch), though, I can't connect to my Mac, Windows, or Linux server; it keeps just dropping the connection. I was wondering if there is some configuration I need to make for this to work. I understand that it is most likely going to be a static configuration, since I assume it has to do with the computer not understanding that 5.*.*.* actually refers to another IP:Port. Has anyone had any experience getting this to work?
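
    A hedged thing to check: the command-line client sometimes comes up without a route for the Hamachi network, so 5.x.x.x peers are unreachable even though the daemon is logged in. Assuming the tunnel interface is named ham0 (verify with ip link):

        # is the interface up, and does anything route the 5.0.0.0/8 range?
        ip addr show ham0
        ip route | grep '^5\.'
        # if the route is missing, point the range at the tunnel
        sudo ip route add 5.0.0.0/8 dev ham0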

    Read the article

  • OSX: Mimic Ubuntu IP Masquerading via iptables with ipfw

    - by Dogbert
    Good day, I am attempting to replicate a setup I have between a router and an Ubuntu PC, to get the same setup working on my MacBook (10.6, Snow Leopard). First, I have a router that has a USB port. When I plug it into my Ubuntu PC, it creates an RNDIS connection, allowing me to connect to the router over the USB cable via an IP connection. When I plug it in via USB, it gets assigned the IP address 172.16.84.1, and a new adapter appears when I type ifconfig. I can then SSH into the device via ssh [email protected]. When I log in to the device, I flush the routes, then create the default route:

        admin@localhost> route -f
        admin@localhost> route add default 172.16.84.2

    Now, on my Ubuntu machine, I use iptables to enable IP masquerading:

        root@Valhalla> sudo iptables -t nat -A POSTROUTING -s 172.16.84.2 -j MASQUERADE

    Once this is all done, the router has internet access over the USB connection to my PC. I am trying to replicate this exact setup on my MacBook (Snow Leopard), but iptables does not exist for OS X -- not even a MacPorts version. I have scoured other questions on Stack Overflow that cover the usage of the ipfw command, which apparently works as a drop-in replacement for iptables. However, the syntax is significantly different, and I'm pretty much lost. Does anyone with ipfw experience have suggestions on how I could accomplish this and create a NAT connection via IP masquerading, like I could with my Ubuntu PC? Thank you for your assistance.
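
    A hedged sketch of the pre-pf Mac equivalent: ipfw has no MASQUERADE target of its own; NAT on 10.6 is done by the natd daemon plus an ipfw divert rule. Assuming the internet-facing interface is en1 (AirPort; check ifconfig) and the RNDIS adapter carries the 172.16.84.0/24 side:

        # allow the kernel to forward packets between interfaces
        sudo sysctl -w net.inet.ip.forwarding=1
        # start the NAT daemon on the outbound interface
        sudo natd -interface en1
        # divert traffic crossing en1 through natd (natd listens on the
        # 'natd' divert port, 8668, by default)
        sudo ipfw add divert natd all from any to any via en1

    This is roughly what OS X's own Internet Sharing does under the hood.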

    Read the article

