Search Results

Search found 24755 results on 991 pages for 'linux mom'.


  • which user is the website host

    - by Kossel
    I'm learning about servers, and I'm configuring nginx, MySQL, PHP and WordPress. The server distro is Debian 6. I created a new user and I want each user to own their own site folder (/var/www/site.one), so I ran chown -R kossel:kossel site.one. My problem is that WordPress only works if I chmod 644 wp-config.php, which lets everyone read it, while the WordPress site suggests that file should be 640. My question is: when someone opens mydomain.com, WordPress has to access the wp-config.php file, but which user actually "reads" that file? root? The user kossel? Someone else? How can I give it the proper permissions or owner?
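
    nginx itself never reads wp-config.php; the PHP worker (PHP-FPM or similar) does. A possible starting point, assuming PHP-FPM runs as www-data (the Debian default), which is an assumption to verify and not taken from the question:

      ps aux | grep -E 'php|nginx'                       # the first column shows which user the PHP worker runs as
      chown kossel:www-data /var/www/site.one/wp-config.php
      chmod 640 /var/www/site.one/wp-config.php          # owner read/write, group (the PHP user) read-only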

    Read the article

  • Do I have a bad SD card?

    - by User1
    I'm trying to copy data from my computer to an SD card. After a few hundred megs, I keep getting the following errors in dmesg:

      [34542.836192] end_request: I/O error, dev mmcblk0, sector 855936
      [34542.836284] FAT: unable to read inode block for updating (i_pos 13694981)
      [34542.836306] MMC: killing requests for dead queue
      [34542.836310] end_request: I/O error, dev mmcblk0, sector 9280
      [34542.837035] FAT: unable to read inode block for updating (i_pos 148486)
      [34542.837062] MMC: killing requests for dead queue
      [34542.837066] end_request: I/O error, dev mmcblk0, sector 1
      [34542.837074] FAT: bread failed in fat_clusters_flush
      [34542.837085] MMC: killing requests for dead queue

    These were all files I copied from a smaller SD card. I just want to transfer them to my new, larger card for my phone. I tried the same experiment with different files on a different machine and the card failed again. Reading data from the old card went fine. My computers are older and the SD card is new (16GB Class 4). Could it be that my computers are too old? Is there a definitive test to verify whether my SD card is bad?
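
    Two common checks, sketched below; the device name and mount point are placeholders for this setup, not values from the question. badblocks does a read-only surface scan by default, and the f3 tools are aimed at counterfeit or under-capacity cards:

      sudo badblocks -sv /dev/mmcblk0        # read-only scan, reports unreadable sectors
      f3write /media/sdcard && f3read /media/sdcard   # fills the mounted card, then verifies what was written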

    Read the article

  • blocking port 80 via iptables

    - by JoyIan Yee-Hernandez
    I'm having problems with iptables. I am trying to block port 80 from the outside; basically the plan is that we just need to tunnel via SSH, then we can get to the GUI etc. On the server I have this in my rules:

      Chain OUTPUT (policy ACCEPT 28145 packets, 14M bytes)
       pkts bytes target  prot opt in    out   source     destination
          0     0 DROP    tcp  --  *     eth1  0.0.0.0/0  0.0.0.0/0    tcp dpt:80 state NEW,ESTABLISHED

    and

      Chain INPUT (policy DROP 41 packets, 6041 bytes)
          0     0 DROP    tcp  --  eth1  *     0.0.0.0/0  0.0.0.0/0    tcp dpt:80 state NEW,ESTABLISHED

    Does anyone want to share some insights?
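
    Incoming web traffic is matched in the INPUT chain, so (assuming eth1 is the public interface, as in the question) the blocking rule would normally look like the sketch below. Since both rules above show 0 matched packets, it is also worth checking whether the traffic arrives on a different interface or hits an earlier ACCEPT rule first:

      iptables -A INPUT -i eth1 -p tcp --dport 80 -m state --state NEW -j DROP
      iptables -L INPUT -n -v --line-numbers    # verify rule order and watch the packet counters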

    Read the article

  • ssh can't connect after server ip changed

    - by Kery
    I have a server with Ubuntu installed. After I changed the network configuration and restarted the server, the ssh client can't connect to the server any more. But on the server itself I can use the ssh client to connect to it, and the netstat command shows that sshd is listening on port 22. From my computer (Windows 7) the ping command reaches the server's new IP fine. The configuration in /etc/network/interfaces is:

      auto eth0
      iface eth0 inet static
          address 10.80.x.x
          netmask 255.255.255.0
          gateway 10.80.x.1

    I'm very confused about this. I hope somebody can give me some ideas. Thank you in advance!
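
    A few things that are often checked in this situation; nothing here is specific to this setup, and the address/username are the question's placeholders:

      ssh -v user@10.80.x.x                          # on the client: the verbose output names the step that fails
      sudo iptables -L -n                            # on the server: any rule dropping tcp/22 from the new subnet?
      grep -i listenaddress /etc/ssh/sshd_config     # sshd may still be bound to the old address
      sudo ss -tlnp | grep ':22'                     # confirm the address sshd actually listens on (0.0.0.0 or 10.80.x.x)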

    Read the article

  • Thunderbird doesn't raise or give focus when you click "Write" or "Reply".

    - by Neil
    I'm using Thunderbird 2.0.22, the version that comes with Ubuntu Intrepid 8.10. When I hit "Reply" or "Write", a new email window pops up, but it ends up under the main Thunderbird window and doesn't have focus. Thunderbird is the only application that exhibits this weird behaviour, and it just started happening one day, whereas it worked fine before. I've seen this problem years ago as well, and I'm not sure how I fixed it then.
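
    One setting that is sometimes behind this on GNOME/Metacity (an assumption to check, not a confirmed diagnosis) is the window manager's focus_new_windows key: when it is set to "strict", new windows open without focus. Resetting it from a terminal:

      gconftool-2 --type string --set /apps/metacity/general/focus_new_windows smart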

    Read the article

  • How to version lock packages in Ubuntu?

    - by Sandra
    On CentOS there is the yum versionlock plugin, with which you can lock a package to a specific version so it is never upgraded past it. I would like puppet-server-2.7.19-1 and puppet-2.7.19-1 to stay on 2.7 and never be upgraded to 3.0. Puppet Labs have released 3.0 and put it into the stable repo, so 2.7 will get upgraded to 3.0, which is not backwards compatible. Does Ubuntu have something similar to yum versionlock?
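
    The closest Debian/Ubuntu equivalent is putting the packages on "hold", which keeps the currently installed version until the hold is removed. A sketch; the package names below (puppet, puppetmaster) follow the Debian/Ubuntu naming and are an assumption, not the RPM names from the question:

      echo "puppet hold" | sudo dpkg --set-selections
      echo "puppetmaster hold" | sudo dpkg --set-selections
      # or, on releases whose apt ships apt-mark:
      sudo apt-mark hold puppet puppetmaster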

    Read the article

  • Can't create LVM due to: not found (or ignored by filtering)

    - by James
    I'm planning to use LVM for KVM, but when I try to create a VG it fails, so how can I create my VG and LV? Thanks.

      [root@server ~]# vgcreate virtual-machines /dev/sda
        Device /dev/sda not found (or ignored by filtering).
        Unable to add physical volume '/dev/sda' to volume group 'virtual-machines'.
      [root@server ~]# df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/sda3             2.0T  929G  976G  49% /
      tmpfs                 3.9G  124K  3.9G   1% /dev/shm
      /dev/sda1             194M   57M  128M  31% /boot
      [root@server ~]# pvscan
        No matching physical volumes found
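
    The df output suggests /dev/sda already carries a partition table with mounted filesystems (/ and /boot), so LVM refuses to use the whole disk. The usual approach is to give LVM a dedicated partition or another disk. A sketch, where /dev/sda4 is a hypothetical free partition created first with a partitioning tool:

      pvcreate /dev/sda4
      vgcreate virtual-machines /dev/sda4
      lvcreate -n vm1 -L 20G virtual-machines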

    Read the article

  • How expensive is a hostname in htaccess? Other solutions possible?

    - by Nanne
    For easy allowing or disallowing of dynamic IP addresses you can add them as a hostname in a .htaccess file. As I have read from ".htaccess allow from hostname?", this does a reverse lookup on the connecting IP address to see whether the response matches the allowed name. (Well, actually Apache does a double lookup: first a reverse lookup, and then a forward lookup on the result of the reverse.) This is the reason we are currently not using dynamic-IP hostnames in the .htaccess: it "sounds" quite heavy, with two extra lookups for every request. Is this indeed quite heavy, and would a reasonably busy server that would rather have less load than more get away with it? (e.g. how does this 'load' compare to the rest? If a request is 1000 times more expensive than the lookups it might be negligible; on the other hand, it could be the final straw.) Are there other solutions? I could of course write a script that does a lookup of the hostname and puts the result in the .htaccess files, but this feels a bit like a hack.
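
    On the "other solutions" part: the usual workaround is exactly the script the question anticipates, run from cron, so the per-request lookups disappear entirely. A minimal sketch; the hostname and .htaccess path are placeholders:

      #!/bin/sh
      # Resolve the dynamic hostname once and rewrite the Allow line in .htaccess.
      IP=$(dig +short myhome.dyndns.example | tail -n 1)
      [ -n "$IP" ] && sed -i "s/^Allow from .*/Allow from $IP/" /var/www/site/.htaccess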

    Read the article

  • Backup all plesk MySQL Databases to individual files

    - by Michael
    Hi, because I'm new to shell scripting I need a hand. I currently back up all my databases to a single file, which makes restores pretty hard. The second problem is that my MySQL password doesn't work because of a Plesk bug, so I get the password from /etc/psa/.psa.shadow. Here is the code that I use to back up all my databases to a single file:

      mysqldump -uadmin -p`cat /etc/psa/.psa.shadow` --all-databases | bzip2 -c > /root/21.10.2013.sql.bz2

    I found some scripts on the web that back up each database to an individual file, but I don't know how to adapt them to my situation. Here is an example script:

      for db in $(mysql -e 'show databases' -s --skip-column-names); do
          mysqldump $db | gzip > "/backups/mysqldump-$(hostname)-$db-$(date +%Y-%m-%d-%H.%M.%S).gz";
      done

    Can someone help me make the script above work for my situation? Requirement: back up each database to an individual file using the Plesk password location.
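
    A sketch of the example loop with the same Plesk credentials as the working --all-databases dump; the output directory and filename pattern are illustrative only:

      #!/bin/sh
      # Dump each database to its own bzip2 file, authenticating the same way as before.
      PASS=$(cat /etc/psa/.psa.shadow)
      for db in $(mysql -uadmin -p"$PASS" -e 'show databases' -s --skip-column-names); do
          mysqldump -uadmin -p"$PASS" "$db" | bzip2 -c > "/root/backup-$db-$(date +%d.%m.%Y).sql.bz2"
      done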

    Read the article

  • Display CPU usage separately (without root privileges)

    - by synaptik
    I need to display the CPU usage for each processing core on a single shared-memory 12-core (SMP) machine. I don't have access to install htop, else I would simply use that. I don't need fancy graphs or meters, though they would be nice. For example, simply displaying: X X X X X X X X X X X X where each X is the percentage utilization of 1 of the 12 processing cores on my machine. FYI: I know I can simply look at the utilization in "top" and divide that number by the number of cores on my machine, but I prefer a solution that shows each core separately.
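
    Two options that need neither root nor an extra install in most cases (mpstat does require the sysstat package, which may or may not be present):

      top                # then press "1" to toggle a separate Cpu0..Cpu11 line per core
      mpstat -P ALL 1    # prints per-core utilisation once per second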

    Read the article

  • Using remote station as original

    - by Neka
    I have two computers with exactly the same Debian, configuration, apps and other stuff: one at work and another at home. It's inconvenient to maintain the same configuration on both stations (upgrading the OS, syncing configuration, etc.). Is there a way to use my home station as the "host" and the one at work as a "terminal"? As if I had one HDD shared by two computers, but each using its own resources such as the video card and so on. It looks like I need some remote tool such as VNC, but this is not a per-session thing; I need to use the "terminal" computer like the original all of the time.
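
    This is usually treated as two separate problems: showing the home machine's session at work (tools such as NX/X2Go or plain ssh -X keep a persistent remote session, where VNC only mirrors one) and keeping the files identical, which is a sync job. A minimal sketch of the sync half, assuming unison is installed on both machines and "home-host" is a placeholder for the home box reachable over ssh:

      unison /home/neka ssh://home-host//home/neka   # interactive two-way sync of the home directory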

    Read the article

  • Rsync to take the newest file. And a cron job?

    - by user1704877
    I have a log file on two different servers. The servers are behind a load balancer, so half the traffic goes to one server and half goes to the other. I need to take the newest log file from one machine and transfer it to the other machine, so that if the log file changes on one server it gets updated on the other. I think I need to use rsync. Do I also need to put it in a cron job?
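
    A hedged sketch of that idea: rsync's --update flag skips files that are already newer on the receiving side, and a crontab entry repeats the push on a schedule. "serverB" and the path are placeholders; the same job would run on each server pointing at the other:

      rsync -az --update /var/log/myapp.log serverB:/var/log/myapp.log
      # crontab entry (every 5 minutes):
      # */5 * * * * rsync -az --update /var/log/myapp.log serverB:/var/log/myapp.log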

    Read the article

  • Almost All Logical Volumes Disappeared - Recovery?

    - by Alex
    We had a crash of one of the two hard disks in a software RAID with LVM on top. The server is running Citrix XenServer. On the hard disk that is still intact, the volume group is detected fine, but only one LV is left. (Some hashes are replaced by "x".)

      # lvdisplay
        --- Logical volume ---
        LV Name                /dev/VG_XenStorage-x-x-x-x-408b91acdcae/MGT
        VG Name                VG_XenStorage-x-x-x-x-408b91acdcae
        LV UUID                x-x-x-x-x-x-vQmZ6C
        LV Write Access        read/write
        LV Status              available
        # open                 0
        LV Size                4.00 MiB
        Current LE             1
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           253:0

      root@rescue ~ # vgdisplay
        --- Volume group ---
        VG Name               VG_XenStorage-x-x-x-x-408b91acdcae
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  4
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                1
        Open LV               0
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               698.62 GiB
        PE Size               4.00 MiB
        Total PE              178848
        Alloc PE / Size       1 / 4.00 MiB
        Free  PE / Size       178847 / 698.62 GiB
        VG UUID               x-x-x-x-x-x-53w0kL

    I could understand if a full physical volume were lost, but why only the logical volumes? Is there any explanation for this? Is there any way to recover the logical volumes?

    EDIT: We are in a rescue system here. The problem is that the whole server does not boot (GRUB error 22). What we are trying to do is access the root filesystem, but everything was in the LVM. We have only this:

      (parted) print
      Model: ATA SAMSUNG HD753LJ (scsi)
      Disk /dev/sdb: 750GB
      Sector size (logical/physical): 512B/512B
      Partition Table: msdos

      Number  Start   End    Size   Type     File system  Flags
       1      32.3kB  750GB  750GB  primary               boot, lvm

    and this 750GB LVM volume is exactly what we see above.
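
    If the LVs disappeared from the metadata rather than the data itself being gone, LVM's own metadata archive can sometimes bring them back. A hedged sketch, assuming a copy of /etc/lvm/archive from the original system is reachable from the rescue environment; the <archive-file> name is deliberately left as a placeholder:

      vgcfgrestore -l VG_XenStorage-x-x-x-x-408b91acdcae       # list archived metadata versions
      vgcfgrestore -f /etc/lvm/archive/<archive-file>.vg VG_XenStorage-x-x-x-x-408b91acdcae
      vgchange -ay VG_XenStorage-x-x-x-x-408b91acdcae          # reactivate the VG
      lvscan                                                   # see whether the missing LVs are back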

    Read the article

  • ip route add HOMEIP via SERVERIP disconnects me from ssh

    - by Arya
    I want to use a VPN connection on my Debian server, but I get disconnected from ssh if I connect to the VPN. I thought that by using "ip route add" I could avoid getting disconnected from my server: it would continue to use the main connection for communication between my computer and the server, and the VPN for communication with other IPs. This is the command I use:

      ip route add PUBLICHOMEIP via PUBLICSERVERIP

    But I still get disconnected after the "ip route add" command. Am I making a mistake anywhere?
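
    For what it's worth, the usual pattern is to route the home IP via the server's existing default gateway (not via the server's own public IP), and to add that host route before bringing the VPN up. A hedged sketch using the question's placeholders:

      GW=$(ip route | awk '/^default/ { print $3; exit }')   # current default gateway, read before the VPN changes it
      ip route add PUBLICHOMEIP/32 via "$GW"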

    Read the article

  • Apache on Ubuntu very slow on inital calls, very fast afterwards

    - by papakost
    I have an Ubuntu 10 VPS with Apache 2 hosting a Magento website. The first hit to the site from any client takes about 15-20 seconds, while subsequent hits from the same client take 0-1 seconds. I don't think it is Magento caching, because this also happens when the first call is to a very light page and the next calls are to heavy ones. Does anyone have an idea of what is going wrong here?
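
    Two things commonly checked for slow first hits (assumptions to verify, not a diagnosis): reverse DNS lookups on each new client, and too few pre-forked children so the first request pays the process-spawn cost. The relevant Apache directives, with illustrative values only:

      # /etc/apache2/apache2.conf (illustrative values, not tuned recommendations)
      HostnameLookups Off
      KeepAlive       On
      StartServers    5
      MinSpareServers 5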

    Read the article

  • cannot find java even though it is there (ubuntu 12.04)

    - by Jeff Storey
    I'm trying to just execute the java command and it's saying it cannot be found, even though it is there. Here's what my output looks like:

      root@oneiric:/usr/lib/jvm/default-java/bin# ls -al java
      -rwxrwxrwx 1 uucp 143 5750 2012-09-20 11:14 java
      root@oneiric:/usr/lib/jvm/default-java/bin# ./java
      -su: ./java: No such file or directory

    So ls shows it's there, but it doesn't seem to execute. Can someone explain why this is?
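
    "No such file or directory" from a file that ls can see usually means the kernel could not find the binary's ELF interpreter (for example, a 32-bit loader missing on a 64-bit system), or the file is not a real executable at all; the uucp owner and 5 KB size here look suspicious for a JVM binary. Two hedged checks:

      file ./java                              # reports the architecture and type of the file
      readelf -l ./java | grep interpreter     # shows the dynamic loader the kernel tries to run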

    Read the article

  • How to serve a .php file locally?

    - by isomorphismes
    This part of the PHP documentation says that I should be able to start a small development server to serve some local .php files in a folder using php -S localhost:8000. But when I try that I get the following output:

      Usage: php [options] [-f] <file> [--] [args...]
             php [options] -r <code> [--] [args...]
             php [options] [-B <begin_code>] -R <code> [-E <end_code>] [--] [args...]
             php [options] [-B <begin_code>] -F <file> [-E <end_code>] [--] [args...]
             php [options] -- [args...]
             php [options] -a

    What am I doing wrong?
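
    The usage text shown does not list an -S option, which suggests a PHP CLI older than 5.4, the release that introduced the built-in server. Hedged checks, with the paths as placeholders:

      php -v                                   # the built-in server needs PHP >= 5.4
      cd /path/to/site && php -S localhost:8000
      php -S localhost:8000 -t /path/to/site   # or name the document root explicitly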

    Read the article

  • How to change the logrotate extension?

    - by Jayakrishnan T
    Hi all, currently my logrotate configuration adds a single number after the rotated log file: mylogfile.log is rotated to mylogfile.log.1. I would like to change the extension so that mylogfile.log is rotated to mylogfile.log.<current date>. Does anyone know a way to do this? My logrotate configuration is:

      /usr/local/jboss/jboss-3.2.7-ND1/server/default/log/consolelog.log {
          copytruncate
          rotate 1
          missingok
          notifempty
      }

    Currently I am renaming the rotated file with a script. Is there an option to change the extension in the logrotate configuration itself? Please help me.
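
    logrotate's dateext directive (optionally combined with dateformat) appends a date instead of a number, provided the installed logrotate version supports it. A sketch of the same stanza with those two lines added:

      /usr/local/jboss/jboss-3.2.7-ND1/server/default/log/consolelog.log {
          copytruncate
          rotate 1
          missingok
          notifempty
          dateext
          dateformat -%Y-%m-%d
      }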

    Read the article

  • Xorg fails to start under Ubuntu

    - by den-javamaniac
    I'm running desktop Ubuntu 9.10 on my Dell laptop; previously it was Ubuntu 9.04. After some period of time (say 3-4 months), X fails to start automatically after some reboots. When that happens, my network manager applet doesn't start either (after I run startx). Can anyone point out what I'm missing / what the problem is? EDIT: I get a perfect server boot, meaning no Xorg is started. A command-line interface is all I get, from login onwards.
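
    Two generic starting points (not a diagnosis): the X log usually says why the server gave up, and starting the display manager by hand shows whether it is X itself or just GDM that fails:

      less /var/log/Xorg.0.log          # look for lines tagged (EE)
      sudo service gdm start            # or: sudo /etc/init.d/gdm start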

    Read the article

  • ubuntu 64 or 32 bit for macbook/vps?

    - by ajsie
    I've got a MacBook Pro and wonder whether I should use the 64-bit or 32-bit Ubuntu Server. I've also got a VPS not hosted by me; how do I know which version to choose for it? How do you check how many bits your CPU works with? Can I use 64-bit on a 32-bit CPU, and vice versa?
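
    On the capability question: a 64-bit Ubuntu needs a CPU with the 64-bit ("lm") flag, while a 32-bit Ubuntu runs on either kind of CPU; it does not work the other way around. Some hedged checks that work on the MacBook and on the VPS alike:

      uname -m                            # x86_64 means a 64-bit kernel is already running
      grep -m1 -ow lm /proc/cpuinfo       # prints "lm" if the CPU is 64-bit capable
      lscpu | grep -i 'op-mode'           # shows 32-bit/64-bit support, if lscpu is installed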

    Read the article

  • Getting PAM/user info into php - something like Net_Finger instead of a db?

    - by digitaltoast
    I've got a very small user group who just need to log in, upload, check and then move specific files to a different area when ready. Right now, I use the nginx PAM auth module to log them in against their Unix accounts. As their login is their home directory, I've already got the info to send the uploads to the right area: one line of PHP and no database needed. But I'm maintaining a separate DB just so PHP can welcome them, grab their email and send them an email when processed. Yes, sure, I could use NoSQL or SQLite instead so as to not need a whole MySQL install. But it occurred to me that I've got all these blank user fields for phone numbers that I could populate with any data, so I could use something like PHP's Net_Finger. Which failed for me with:

      sudo pear install Net_Finger
      Starting to download Net_Finger-1.0.1.tgz (1,618 bytes)
      ....done: 1,618 bytes
      could not extract the package.xml file from "/build/buildd/php5-5.5.9+dfsg/pear-build-download/Net_Finger-1.0.1.tgz"
      Download of "pear/Net_Finger" succeeded, but it is not a valid package archive
      Error: cannot download "pear/Net_Finger"

    At which point I thought I'd stop and take a Server Fault reality check: is this a really bad/dangerous/stupid idea just to avoid maintaining details in two places rather than one? Is there a better way? Googling shows that it's not an oft-asked thing, so perhaps with good reason?
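
    On the "blank user fields" idea: those fields live in the GECOS column of the passwd database, so no finger daemon or PEAR package is strictly needed to read them; PHP can fetch them with posix_getpwnam(), and the shell equivalent is below ("someuser" is a placeholder):

      getent passwd someuser | cut -d: -f5    # the GECOS field: full name, room, phone, other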

    Read the article

  • Vicious net widget and dynamic interface

    - by tdi
    I've got a pretty standard vicious net widget for eth0, but this is my laptop and I move around a lot; sometimes I use wlan0, sometimes ppp0. Is there a way in vicious to dynamically choose the active interface?

      netwidget = widget({ type = "textbox" })
      -- Register widget
      vicious.register(netwidget, vicious.widgets.net,
          '<span color="' .. beautiful.fg_netdn_widget ..'">${eth0 down_kb}</span> <span color="' .. beautiful.fg_netup_widget ..'">${eth0 up_kb}</span>',
          3)
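
    Not a vicious-specific answer, but one way to find the interface currently carrying the default route from the shell, which a small Lua wrapper could call at widget-refresh time (8.8.8.8 is just an arbitrary external address used to pick a route):

      ip route get 8.8.8.8 | awk '{ for (i = 1; i < NF; i++) if ($i == "dev") print $(i+1) }'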

    Read the article
