Search Results

Search found 30329 results on 1214 pages for 'ubuntu forums'.

Page 356/1214 | < Previous Page | 352 353 354 355 356 357 358 359 360 361 362 363  | Next Page >

  • How to have files created by a CMS take the same ownership as the SSH user

    - by Cam
    I am having difficulty on our Ubuntu server. I have an SSH user, and when I create files as that user their ownership is web_user:www-data. The problem arises when a file is uploaded or created through a content management system like Joomla: when files are uploaded through Joomla - such as components or modules - their ownership is set to www-data:www-data. This means I then need to chown all new files to web_user:www-data before we can edit them. Is there a way to specify, for a directory and its sub-directories, that all new files are created with the ownership web_user:www-data? Do I need to use something like setuid or setgid? Any help would be greatly appreciated.
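
    For reference, the setgid route I'm considering would look something like this (a sketch - /var/www stands in for our actual web root). As I understand it, setgid on a directory only controls the group of new files, not the owner, so Joomla's uploads would still belong to www-data - but combined with group write access that should be enough for web_user to edit them:

      # make new files under /var/www inherit the www-data group
      sudo chgrp -R www-data /var/www
      sudo find /var/www -type d -exec chmod g+s {} +
      # group write access so web_user can edit what Joomla creates
      sudo chmod -R g+w /var/www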

    Read the article

  • Permission issue accessing a Netatalk share on OS X

    - by Fresheyeball
    I have two users in Ubuntu. The first is me, and I am the owner of the folder in question; the second is my wife. Netatalk is running and we can both see the folder on the network, but while I can access it, she cannot: she gets an error in OS X, "... you don't have permission to see its contents". I have used chmod 777 on the folder, but it made no difference. Any ideas? UPDATE: The directory in question is a mounted hard drive at /media/ourPhotos
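
    In case it helps, here is what I've been checking (a sketch - the fstab line is only an example with placeholder IDs). If the external drive is formatted as FAT or NTFS, chmod is a no-op and the effective permissions come from the mount options instead:

      # how is the drive mounted? chmod has no effect on vfat/ntfs mounts
      mount | grep ourPhotos
      ls -ld /media/ourPhotos
      # for such filesystems, permissions are set at mount time, e.g. in /etc/fstab:
      # UUID=XXXX  /media/ourPhotos  ntfs-3g  uid=1000,gid=1000,umask=002  0  0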

    Read the article

  • VSFTPD: Cannot figure this thing out...

    - by A Wizard Did It
    Alright, I've been giving this my best, reading through various tutorials on Google, but I cannot seem to get vsftpd running the way I want. For a short while I had it working with one account, but then that stopped and I haven't been able to get it to work since. I have since reformatted, reinstalled Ubuntu 10.04 LTS, and run apt-get install vsftpd, and that's where I am now... I'd really appreciate it if anyone could help me understand exactly how this is supposed to work. How do I add FTP accounts and set their home directory to something like /var/www/public_html?
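
    From what I've read since, vsftpd authenticates ordinary system users by default, so "adding an FTP account" is just creating a local user whose home is the directory to serve. A minimal sketch of what I'm trying (the username is made up, and these are only the options I believe matter):

      # /etc/vsftpd.conf
      local_enable=YES          # allow local system users to log in
      write_enable=YES          # allow uploads and edits
      chroot_local_user=YES     # jail each user in their home directory

      # create an FTP-only account whose home is the web root
      sudo useradd -d /var/www/public_html -s /bin/false ftpuser
      sudo passwd ftpuser
      # note: vsftpd's PAM setup may require /bin/false to be listed in /etc/shells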

    Read the article

  • What process is resurrecting mysqld?

    - by ripper234
    I'm following this guide to reset my MySQL root password (I'm on Ubuntu). When I kill the mysqld process, it is immediately resurrected, with a parent process ID of 1. How can I find out what keeps resurrecting mysqld?

      $ ps -ef | grep mysql
      mysql    30136     1  0 07:16 ?      00:00:00 /usr/sbin/mysqld
      root     30295 30274  0 07:18 pts/0  00:00:00 grep --color=auto mysql
      $ kill -9 30136
      $ ps -ef | grep mysql
      mysql    30302     1  2 07:18 ?      00:00:00 /usr/sbin/mysqld
      root     30404 30274  0 07:18 pts/0  00:00:00 grep --color=auto mysql
      $
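
    For what it's worth, a parent PID of 1 on Ubuntu releases where MySQL runs as an upstart job suggests upstart is respawning it, so the sketch I'd try is stopping it through upstart rather than kill -9 (assuming the stock mysql job):

      status mysql                            # shows the upstart job state
      sudo stop mysql                         # stop it so it stays stopped
      sudo mysqld_safe --skip-grant-tables &  # then start by hand for the password reset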

    Read the article

  • Open file in local text editor from within an SSH connection

    - by Sam
    I'm not a vim guy. I'd like to be able to open log files in Sublime Text while in an SSH connection from within Terminal. Is there a way I could do this? I'm thinking there must be a command or something that could copy the file over to a temporary directory on OS X, open it in Sublime Text, and copy it back to the original location through SSH when I save it - similar to how FileZilla does it. I'm on Mac OS X MT; the server I SSH into is running Ubuntu; and I'm using Terminal.
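
    One approach I'm experimenting with (a sketch - it assumes OSXFUSE plus sshfs are installed on the Mac, and Sublime's subl command is on the PATH) is to mount the remote directory locally, so saves go back over SSH automatically:

      mkdir -p ~/remote-logs
      sshfs user@myserver:/var/log ~/remote-logs   # user and host are placeholders
      subl ~/remote-logs/syslog                    # edit in Sublime Text
      umount ~/remote-logs                         # unmount when done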

    Read the article

  • Why can't SVN checkout into a virtualbox shared folder?

    - by Alex Waters
    I am trying to check out into the VirtualBox shared folder with SVN 1.7 on Ubuntu 12.04, running as a guest on a Windows 7 host. I had read that this error was a 1.6 problem and updated, but am still receiving it:

      svn: E000071: Can't move '/mnt/hostShare/code/www/.svn/tmp/svn-hsOG5X' to '/mnt/hostShare/code/www/trunk/statement.aspx?d=201108': Protocol error

    I found a blog post about the same error in a Mac environment, but changing the folder/file permissions does nothing. vim .svn/entries just shows the number 12 - does this need to be changed? Thank you for any assistance! (Just another reason why I prefer git...)
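
    In case the diagnosis matters: as far as I can tell, Windows filesystems can't store a filename containing '?', and vboxsf passes that restriction through, which would explain why the move of 'statement.aspx?d=201108' fails. The workaround I'm falling back on (a sketch - the repo URL is made up) is keeping the working copy inside the guest filesystem:

      # names like 'statement.aspx?d=201108' are legal here, unlike on the share
      svn checkout http://svn.example.com/repo/trunk ~/www-checkout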

    Read the article

  • Apache running but site not accessible

    - by Shyam
    I'm pretty new to server administration, so I am not able to get to the root of the problem. I am running Apache2 with mod_php on a 1GB Rackspace Cloud Server (Ubuntu 9.10). My site goes down often, and I have to restart apache2 to get it working again. I checked the error.log file and there were no signs of any error messages; I even searched for words like [error] / error / warn / [warn], but got no results. The site goes down even while Apache is running: when the site was down, I checked the status with /etc/init.d/apache2 status and it gave "* Apache is running (pid 433)." Any suggestions on where I should look for the problem? Thanks a lot.
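
    Things I plan to capture the next time it goes down (a sketch):

      sudo netstat -plnt | grep ':80'            # is apache still listening?
      ps aux | grep '[a]pache2' | wc -l          # how many workers are alive?
      curl -sv -o /dev/null http://localhost/    # does it answer locally?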

    Read the article

  • What's a fast way to copy a lot of files from an internal hard-drive to external (USB) storage?

    - by jonathanconway
    I have a large amount of data - about 500 GB - on the internal hard drive of a desktop PC. This includes music, videos, PDFs... you name it. I want to copy everything to an external USB hard drive (1.5 TB capacity). The desktop PC runs Ubuntu. To begin with, I simply plugged in and mounted the hard drive and dragged the top-level folder onto it. It started copying, but seems to be proceeding very slowly: about 10 minutes in, it had only done about 500 MB. I'm sure this is slower than what the hardware can achieve, so I'm wondering if there's a quicker way of doing it. Would it be better to copy in portions of 500 MB or so, rather than all at once?
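
    A sketch of the rsync alternative I'm considering (the paths are placeholders) - it shows progress and, unlike a file-manager drag, can pick up where it left off if interrupted:

      rsync -ah --progress /home/me/ /media/external/backup/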

    Read the article

  • ERROR: MySQL server has gone away while running query

    - by Rashmi Nama
    I am using Ubuntu 12.04. I can connect to MariaDB from the command prompt without problems, and I have a database named dealer with some tables in it, but running any query gives an error. My steps are as follows:

      mysql -uroot -proot
      use dealer;
      select * from dealer_outlet limit 1;

    Now the error occurs:

      ERROR 2006 (HY000): MySQL server has gone away
      No connection. Trying to reconnect...
      Connection id: 3
      Current database: dealer
      ERROR 2006 (HY000): MySQL server has gone away
      No connection. Trying to reconnect...
      ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111 "Connection refused")
      ERROR: Can't connect to the server
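
    The "gone away" mid-query followed by "Connection refused" makes it look like the server process itself is dying, so the first things I'm checking (a sketch - the log path is the Ubuntu default) are the error log and the usual packet-size suspect:

      sudo tail -n 50 /var/log/mysql/error.log
      mysql -uroot -proot -e "SHOW VARIABLES LIKE 'max_allowed_packet';"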

    Read the article

  • Linux - File was deleted and then reappeared when folder was zipped

    - by davee9
    Hello, I am using BackTrack 4 Final, a Linux distro that is Ubuntu-based. I had a directory that contained around 5 files. I deleted one of the files, which sent it to the trash. I then zipped the directory up (now containing 4 files) using this command:

      zip -r directory.zip directory/

    When I then unzipped directory.zip, the deleted file was in there again. I couldn't believe this, so I zipped up the directory again, and the file reappeared again - but this time it could not be opened, because the operating system said it didn't exist or something. I don't remember the exact error, and I cannot make this happen again. Would anyone happen to know why a file that was deleted from a directory would reappear in that directory after it was zipped up? Thank you.
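
    To rule out a stale archive or a hidden leftover copy, the checks I would run if it happens again (a sketch - on Linux the trash normally lives under ~/.local/share/Trash, not in the directory itself):

      unzip -l directory.zip     # what did zip actually store?
      ls -la directory/          # any hidden files still in the directory?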

    Read the article

  • Slow write speeds on new Gigabit home file server

    - by Ryan Holder
    So I finally got all my parts delivered this week to set up a home file/backup server. It's currently running Ubuntu Server, and I'm using Samba to share files on my network. The server has a 2TB WD Green drive connected to an Asus M5A78L-M, which in turn is connected via CAT6a to my new Gigabit switch (TP-Link TL-SG1005D). My home desktop is also connected to this switch, again through CAT6a cable. When transferring files I get a steady 100MB/s reading from the server to my Windows machine, but when copying from my Windows machine to the server I get around 30-38MB/s. I know the drive is capable of faster speeds, so would anybody have an idea where the bottleneck is? Any help would be greatly appreciated :) EDIT: I have found that FTP's write speed is much closer to my Samba read speed, so I'm going to guess this is a software problem rather than hardware.
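
    To narrow it down, I'm going to separate the network from the disk (a sketch - the test file path is a placeholder, and it assumes iperf is available on both ends):

      iperf -s                      # on the server
      iperf -c 192.168.1.10         # on the desktop, using the server's IP
      # raw write speed of the WD Green, bypassing Samba entirely:
      dd if=/dev/zero of=/mnt/data/testfile bs=1M count=1024 conv=fdatasync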

    Read the article

  • Standalone server setup for compute capacity

    - by mikera
    I'm developing an application for my company that will require a lot of compute capacity (running some very big mathematical calculations), and I'm looking for some form of server setup to do this. For various reasons, we want to run this on-site in our office rather than hosting it externally. It's been a while since I last had to set up my own servers, so I thought I would tap into the collective wisdom of serverfault! My broad requirements are:

      - Budget $30-50k, with an aim to get as much compute capacity as possible for that budget
      - 64-bit servers suitable for running Ubuntu Linux + Java
      - A relatively standalone rack that can be installed in secure office space
      - Fast/low-latency network connections between the servers, but we don't really care about connectivity to the outside world
      - Storage capacity shared between the servers - they don't necessarily need their own storage provided they can be booted from a common image
      - Downtime can be tolerated (since the calculations are run in batch mode)
      - The software itself is fault-tolerant, so there is no need for extra resiliency in the server setup (cheap, replaceable commodity parts will be fine in general)

    Given these requirements, what kind of setup would you recommend, and why?

    Read the article

  • Install Linux with two hard drives

    - by rdecourt
    I have a machine with two hard drives; the first is 80 GB and the second is 120 GB. I'm about to format this machine and install Linux, and I want to put all the main partitions (/, /boot, /usr, etc.) on the first drive (sda) and mount the /home and /var partitions on the second disk (sdb). Is this possible, and do I have to do something after the installation, or is the second hard drive mounted automatically? How can I do it? I won't do it, but would there be any problem with putting /boot on the second hard drive? I'm using Ubuntu 12.04.
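
    If it helps anyone, the installer's manual partitioning step lets you assign /home and /var to the second disk directly, and it writes the mounts to /etc/fstab for you, so nothing extra is needed after installation. The resulting entries would look roughly like this (device names and filesystem are assumptions):

      # /etc/fstab
      /dev/sdb1  /home  ext4  defaults  0  2
      /dev/sdb2  /var   ext4  defaults  0  2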

    Read the article

  • Burning ISO images with wodim loses 2048 bytes at the end

    - by Grumbel
    If I burn an ISO image with:

      wodim -data dev=/dev/scd0 in.iso

    and then read it back out with:

      dd if=/dev/scd0 of=out.iso

    the resulting files are not identical: out.iso is 2048 bytes shorter than in.iso. What is going on here, and how can I fix it? I'm using Ubuntu 10.04 and wodim 1.1.10. PS: dd always ends with an Input/output error, not just with this CD but with all of them. I think it's just a limitation of dd, but an explanation of why it happens and how to avoid it would be welcome as well.
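
    Two things I've found worth trying (a sketch - isoinfo comes from the genisoimage package). Burning with padding can make the final sectors readable, and bounding dd at the ISO's own size avoids reading into the unreadable lead-out, which is a common cause of the trailing Input/output error:

      wodim -data -pad dev=/dev/scd0 in.iso   # pad the end of the track
      # read back exactly the image's length instead of until dd hits an error
      blocks=$(isoinfo -d -i /dev/scd0 | awk '/Volume size is:/ {print $4}')
      dd if=/dev/scd0 bs=2048 count=$blocks of=out.iso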

    Read the article

  • Add iPhones and iPads to an existing OpenVPN server

    - by Zoran
    Could someone please explain how to connect an iPhone and iPad to an existing OpenVPN server based on Ubuntu 8.04? I have seen similar posts (such as "Simplest VPN setup for iphone on Debian Linux?") but no answer that helps me. Most of our client machines (which connect to OpenVPN) are Windows 7 and Vista; now I have to add several iPhone and iPad users. How do I accomplish that task? EDIT: One more thing - I cannot use GuizmoVPN, since that would require jailbreaking the iPhone, which is not a possible solution.
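
    If the official OpenVPN Connect app from the App Store is acceptable (it needs no jailbreak), it imports a .ovpn profile plus certificates via iTunes file sharing or email. A minimal profile sketch, with the hostname and certificate names as placeholders:

      client
      dev tun
      proto udp
      remote vpn.example.com 1194
      ca ca.crt
      cert iphone1.crt
      key iphone1.key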

    Read the article

  • When HDD becomes full, how to create a symbolic link to the data store on another disk?

    - by Brij Raj Singh
    I have a Linux Ubuntu machine with an X GB hard disk. There is a folder, say, /opt/software/data. The disk /dev/sda1 is almost full, and I have attached another disk at /dev/sda2, which is mounted at /hdd2. Is it possible to link /opt/software/data with /hdd2/software/data so that every file gets stored in /hdd2/software/data but can still be referred to via /opt/software/data? I can't reinstall the software that creates this data in order to change its default storage location.
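
    A sketch of the two usual ways (assuming the data can be moved while the software is stopped) - a symlink at the old path, or a bind mount if the software refuses to follow symlinks:

      sudo mkdir -p /hdd2/software
      sudo mv /opt/software/data /hdd2/software/data
      sudo ln -s /hdd2/software/data /opt/software/data
      # bind-mount alternative: recreate the old path as an empty dir first
      # sudo mkdir /opt/software/data
      # sudo mount --bind /hdd2/software/data /opt/software/data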

    Read the article

  • Blank screen after grub menu

    - by Tim
    I just rebooted an Ubuntu Server 10.04 machine. After choosing boot options in the grub menu, though, it just displays a black screen with a blinking white underscore in the upper-left corner. The machine has had (hardware) trouble with networking before, but the problem remains after 10 minutes, so I don't think that's the cause now. Booting into recovery mode or using earlier kernels yields the same problem, and it also happens if I boot from another hard drive. I haven't yet tried booting from CD, as the machine lacks a CD reader. How should I diagnose the problem? Update: My boot options are:

      recordfail
      insmod ext2
      set root='(hd0,1)'
      search --no-floppy --fs-uuid --set 567[redacted]
      linux /boot/vmlinuz-2.6.32-29-generic root=UUID=567[redacted] ro quiet splash
      initrd /boot/initrd.img-2.6.32-29-generic

    Update: Also, I cannot access the virtual terminals (Ctrl+Alt+Fn).
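
    One diagnostic step (a sketch): at the grub menu, press 'e' and edit the linux line so the console messages aren't hidden - drop 'quiet splash', and try 'nomodeset' in case kernel mode-setting is blanking the display:

      linux /boot/vmlinuz-2.6.32-29-generic root=UUID=567[redacted] ro nomodeset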

    Read the article

  • HAProxy and Intermediate SSL Certificate Issue

    - by Sam K
    We are currently experiencing an issue verifying a Comodo SSL certificate on an Ubuntu AWS cluster. Browsers display the site/content fine and show all the relevant certificate information (at least, all the ones we've checked), but certain network proxies and the online SSL checkers report an incomplete chain. We have tried the following to resolve this:

      - Upgraded haproxy to the latest 1.5.3
      - Created a concatenated ".pem" file containing all the certificates (site, intermediate, with and without root)
      - Added an explicit "ca-file" attribute to the "bind" line in our haproxy.cfg file

    The ".pem" file verifies OK using openssl, and the various intermediate and root certificates are installed and showing in /etc/ssl/certs, but the checks still come back with an incomplete chain. Can anyone advise on anything else we can check or any other changes we can make to try to fix this? Many thanks in advance... UPDATE: The only relevant line from the haproxy.cfg (I believe) is this one:

      bind *:443 ssl crt /etc/ssl/domainaname.com.pem
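
    For what it's worth, two things we've since been looking at (a sketch - the file names are placeholders). First, haproxy serves exactly what is in the pem, and the order matters: site certificate first, then intermediates (the root is normally omitted) - the "ca-file" option on the bind line is for verifying client certificates, not for completing the served chain. Second, checking from outside shows the chain exactly as clients see it:

      cat site.crt intermediate.crt > /etc/ssl/domainaname.com.pem
      openssl s_client -connect www.example.com:443 -servername www.example.com < /dev/null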

    Read the article

  • Which FTP Daemon should I use if I want to use MySQL for authentication?

    - by wag2639
    We want to set up an FTP daemon on our Ubuntu 10.04 server, with a simple (probably custom-built) web interface, that uses MySQL for authentication. It will be public-facing, but only intended for use by a few customers or clients. I know of vsftpd, ProFTPd, and Pure-FTPd, but I'm not sure which is best for this application. The main features we would like are:

      a. Very good MySQL authentication integration
      b. The ability to specify, through MySQL, a list of folders/files (folder level is sufficient) that each user has access to

    Anything else would just be sprinkles on top.
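
    Of the three, Pure-FTPd ships a MySQL backend out of the box (the pure-ftpd-mysql package on Ubuntu), driven by per-lookup SQL queries, which covers requirement (a) and, via a per-user home directory column, much of (b). A sketch of its config, with the table and column names as assumptions:

      # /etc/pure-ftpd/db/mysql.conf
      MYSQLSocket    /var/run/mysqld/mysqld.sock
      MYSQLUser      ftpauth
      MYSQLPassword  secret
      MYSQLDatabase  ftpd
      MYSQLCrypt     md5
      MYSQLGetPW     SELECT Password FROM users WHERE User='\L'
      MYSQLGetUID    SELECT Uid FROM users WHERE User='\L'
      MYSQLGetGID    SELECT Gid FROM users WHERE User='\L'
      MYSQLGetDir    SELECT Dir FROM users WHERE User='\L'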

    Read the article

  • Own website fails to load first time

    - by AmazingDreams
    I have a website running on a VPS. The first time I try to load the website, the connection times out; if I then try again, it loads immediately. I'm not sure whether this is a DNS issue or a server issue. As far as I know everything is set up correctly, and it has been doing this from the moment I got this server and set up my domain name - about two to three months ago. You may take a look here: http://www.wegotcha.nl/ As you can see, at the moment it's just an image; there are no scripts running in the background or anything. The only error Apache gives me is that favicon.ico cannot be found. It's an Apache web server running on Ubuntu 12.04.1, and I update all packages almost every day (apt-get update && apt-get upgrade). I am merely an amateur in the area of web servers, so any help will be appreciated. :)
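
    To tell DNS apart from the server, I've started timing the phases of the first (slow) request separately (a sketch using curl's built-in timers):

      dig www.wegotcha.nl
      curl -o /dev/null -s -w 'dns %{time_namelookup}s  connect %{time_connect}s  total %{time_total}s\n' http://www.wegotcha.nl/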

    Read the article

  • How to invoke a command using specific proxy server?

    - by Xiè Jìléi
    Some applications support proxies (HTTP or SOCKS), and some do not. For browsers, I can specify a proxy server in the preferences/options dialog, and other applications may be able to configure proxy servers in config files. For the general case, can I invoke a command using a specific proxy? Something like the following:

      $ proxy-exec --type socks5 --server 1.2.3.4:8000 -- wget/ftp ...

    I'm using Ubuntu Maverick. P.S. On Win32 this can be implemented by hijacking the socket DLLs; I'm not familiar with Linux programming, but I guess something similar is possible on Linux.
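
    The closest existing tool I know of is proxychains, which wraps a single command much like the hypothetical proxy-exec above (a sketch; the proxy address is the one from my example):

      sudo apt-get install proxychains
      # in /etc/proxychains.conf, under [ProxyList], add:
      #   socks5 1.2.3.4 8000
      proxychains wget http://example.com/file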

    Read the article

  • Weird stuff in my /var/log/auth.log

    - by xXx
    I just checked the logs on my dedicated server and spotted something weird in auth.log:

      Jun 17 22:27:01 mutualab CRON[16249]: pam_unix(cron:session): session opened for user user by (uid=0)
      Jun 17 22:27:01 mutualab CRON[16249]: pam_unix(cron:session): session closed for user user
      Jun 17 22:28:01 mutualab CRON[16253]: pam_unix(cron:session): session opened for user user by (uid=0)
      Jun 17 22:28:01 mutualab CRON[16253]: pam_unix(cron:session): session closed for user alain
      Jun 17 22:29:01 mutualab CRON[16257]: pam_unix(cron:session): session opened for user user by (uid=0)
      Jun 17 22:29:01 mutualab CRON[16257]: pam_unix(cron:session): session closed for user user

    It looks like somebody logs in - and succeeds? - but logs out instantly? I have been getting the same entries for hours now... Do you know what is happening? N.B.: it's a 10.04 Ubuntu server.
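
    For what it's worth, pam_unix logs a session pair like this every time cron runs a job: the "by (uid=0)" just means the cron daemon (running as root) opened the session on the user's behalf, so a once-a-minute pair is normal for a per-minute crontab. Listing what cron is actually running (a sketch):

      crontab -l -u user                 # that user's crontab
      sudo ls /var/spool/cron/crontabs/  # all per-user crontabs
      ls /etc/cron.d/                    # system-wide job definitions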

    Read the article

  • Gitosis installation of public key not working...

    - by user29600
    I've been following this tutorial to install and set up git on Ubuntu Server 10.04, using Windows 7 as a client. After finally figuring out how it works (I had executed gitosis-init a bunch of times on the wrong key), I copied the id_rsa.pub file over to the server's /tmp folder and ran it again. Unfortunately it still doesn't work: when I execute git clone [email protected]:gitosis-admin.git it asks for gitosis's password rather than the RSA passphrase. I'm assuming it's the same problem this guy is having here... however, after following his instructions - purging git-core and gitosis, manually removing the /srv/gitosis folder, and following the instructions again (with the proper id_rsa.pub file this time) - I'm still having the same issue. Does anyone know what I'm doing wrong? Is there any way to probe for more information that might help in solving this?
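
    One way to see why ssh falls back to asking for gitosis's password is to watch which keys the client actually offers (a sketch - the hostname is a placeholder); if the key offered isn't the one fed to gitosis-init, that would explain the fallback:

      ssh -v gitosis@myserver 2>&1 | grep -iE 'offering|denied|authentic'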

    Read the article

  • Rsync over ssh with root access on both sides

    - by Tim Abell
    Hi, I have one older Ubuntu server and one newer Debian server, and I am migrating data from the old one to the new one. I want to use rsync to transfer data across, to make the final migration easier and quicker than the equivalent tar/scp/untar process. As an example, I want to sync the home folders one at a time to the new server. This requires root access at both ends, as not all files on the source side are world-readable and the destination has to be written with the correct permissions into /home. I can't figure out how to give rsync root access on both sides. I've seen a few related questions, but none quite match what I'm trying to do. I have sudo set up and working on both servers.
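
    The pattern I've seen suggested (a sketch - the username and paths are examples) is local sudo for read access plus --rsync-path to run the remote side under sudo; the remote account needs a NOPASSWD sudoers entry for rsync, and -e picks up my normal key even though the local command runs as root:

      sudo rsync -aHAX --numeric-ids --rsync-path='sudo rsync' \
          -e 'ssh -i /home/tim/.ssh/id_rsa' \
          /home/someuser/ tim@newserver:/home/someuser/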

    Read the article
