Search Results

Search found 24965 results on 999 pages for 'linux kvm'.

Page 447 of 999

  • Upgrading phpmyadmin (and other packages) on Debian Squeeze

    - by westexasman
    I just set up a new VM with Debian Squeeze (latest stable release, 6.0.4). I am going for a web server, so I installed the usual: apache, php5, mysql, phpmyadmin, etc. Everything went well and everything is working. My question is about upgrading packages. I noticed the phpmyadmin version is 3.3.7; the latest is 3.4.10.1. Doing apt-get update/upgrade does not upgrade the package. How does one go about upgrading packages on a Debian Squeeze server if apt-get update/upgrade does not work? Thanks!
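
    A minimal sketch of one common approach, pulling a newer phpmyadmin from the squeeze-backports archive rather than the stable repository (the archive line and resulting version are illustrative, not from the question):

        # add the backports archive, then install phpmyadmin from it explicitly
        echo "deb http://backports.debian.org/debian-backports squeeze-backports main" \
            | sudo tee /etc/apt/sources.list.d/squeeze-backports.list
        sudo apt-get update
        sudo apt-get -t squeeze-backports install phpmyadmin

    Plain apt-get upgrade only moves within the release you track, which is why 3.3.7 stays put on Squeeze.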

    Read the article

  • ACL permissions not behaving as expected

    - by Yarin
    I set the following ACL on my web directory:

        setfacl -R -d -m mask:002 /var/www

    and then created a file as root that I expected to be readable by the default (apache) group:

        -rw--w-r--+ 1 root apache 0 Dec 17 22:32 newfile.py

    When I run getfacl on the file, I get:

        # file: newfile.py
        # owner: root
        # group: apache
        user::rw-
        group::rwx      #effective:-w-
        mask::-w-
        other::r--

    I'm not sure how to read this, but all I know is that the web server is throwing a permissions error because apache can't read the file. Can anyone explain what is going on here?
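
    For reference, the mask entry in an ACL caps the effective permissions of the group entries, which is why group::rwx collapses to an effective -w- here: the default mask:002 is write-only. A hedged sketch of a default ACL that keeps group read access instead, reusing the paths from the question:

        # default entries: apache group gets read (and execute on directories), mask stays permissive
        setfacl -R -d -m g:apache:rX,m::rwX /var/www
        # repair the file that was already created under the write-only mask
        setfacl -m m::rw /var/www/newfile.py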

    Read the article

  • ip route add HOMEIP via SERVERIP disconnects me from ssh

    - by Arya
    I want to use a VPN connection on my Debian server, but I get disconnected from SSH if I connect to the VPN. I thought that by using "ip route add" I could prevent getting disconnected from my server: it would continue to use the main connection for communication between my computer and the server, and the VPN for communication with other IPs. This is the command I use: ip route add PUBLICHOMEIP via PUBLICSERVERIP. But I get disconnected after the "ip route add" command too. Am I making a mistake anywhere?
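
    One hedged sketch of the usual pattern: the host route for the home IP normally has to point at the server's current default gateway (not at the server's own public address), and it has to be added before the VPN replaces the default route. The interface name below is illustrative:

        # pin the SSH client's address to the existing gateway, then start the VPN
        GW=$(ip route show default | awk '{print $3}')
        ip route add PUBLICHOMEIP/32 via "$GW" dev eth0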

    Read the article

  • Postfix SMTP Load Balance

    - by user103373
    I want to load balance outbound email between 3 Postfix gateways (sending mail only). The reason is to use multiple different source IPs to increase throughput and inbox delivery. Each gateway should receive an approximately equal amount of outbound messages. How is this possible? Please suggest.

                                 +---------- smtp A --------- Internet
                                 |
        clients -------- smtp lb ----- smtp B --------- Internet
                                 |
                                 +---------- smtp C --------- Internet
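
    A minimal sketch of one way to build the "smtp lb" box, assuming an HAProxy instance in TCP mode in front of the three gateways; each gateway then delivers to the Internet from its own source IP. Names and addresses are placeholders:

        frontend smtp_in
            bind *:25
            mode tcp
            timeout client 1m
            default_backend smtp_out

        backend smtp_out
            mode tcp
            balance roundrobin
            timeout connect 10s
            timeout server 1m
            server smtpA 10.0.0.11:25 check
            server smtpB 10.0.0.12:25 check
            server smtpC 10.0.0.13:25 check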

    Read the article

  • haproxy backend default location

    - by magd1
    If you go to www.company.com, I want it to redirect to /something/something on my server, but the URL should still show www.company.com. Is this possible in haproxy?

        backend new_marketing_server
            *** set default URL to /something/something ***
            mode http
            balance roundrobin
            timeout server 10m
            option httpclose
            server server1 10.86.151.142:80 minconn 32000 maxconn 3200 check port 80 inter 2000
            server server2 10.122.13.189:80 minconn 32000 maxconn 3200 check port 80 inter 2000
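
    A hedged sketch of one way to rewrite the bare "/" request inside the backend so the browser URL is left untouched (http-request set-path needs a reasonably recent HAProxy; older releases do the same job with a reqrep rule):

        backend new_marketing_server
            mode http
            balance roundrobin
            # rewrite only the root path; every other URI passes through unchanged
            http-request set-path /something/something if { path / }
            server server1 10.86.151.142:80 check port 80 inter 2000
            server server2 10.122.13.189:80 check port 80 inter 2000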

    Read the article

  • VPS stops responding every now and again

    - by Or W
    I have a Linode VPS that I use to host some of my websites. It's Ubuntu based and up to date in terms of all packages. I don't have any cron jobs scheduled or any automatic processes. I host a few (up to date) WordPress blogs there that have very little traffic altogether. Every day (at a different time) my server stops responding: I can't SSH to it, web access times out, and it just dies until I reboot it through the Linode manager. On the Linode dashboard I can see that the CPU is not very high (2-3%), incoming/outgoing traffic is at 0, and the IO count has a spike just before the server stops responding (SWAP IO is at 2k and IO Rate is at 5k). When I reboot the server everything is just fine. I'm trying to figure out a way to analyze what's going on at these random times when the server freezes up. How can I determine the problem?
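
    The swap-IO spike just before each freeze points at memory pressure, so one low-effort sketch is to log memory and process state every minute so there is data to read after the next lockup, and to look for OOM-killer activity afterwards (file paths are illustrative):

        # /etc/cron.d/freeze-watch -- snapshot memory, vmstat and the biggest processes
        * * * * * root (date; free -m; vmstat 1 2; ps aux --sort=-\%mem | head -15) >> /var/log/freeze-watch.log 2>&1

        # after rebooting, check whether the kernel started killing processes
        grep -iE 'out of memory|oom' /var/log/syslog /var/log/kern.log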

    Read the article

  • Binding MySQL to run from the public or private LAN IP address - which one is faster

    - by Lamin Barrow
    So we have 2 servers, both running at the same web host. We have bound MySQL to listen on the public IP address of the database server, and the web server connects to it over the public IP. Both servers are on the same private network. Currently, the DB connect call from our PHP script takes about 3ms to connect to the MySQL database server host. My question is: would MySQL data interaction from the web server be faster if we bind it to listen on the private LAN address of the database server instead of the public IP? Or is it the same regardless, and it won't make a difference?
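
    For what a comparison would look like, a hedged sketch: bind mysqld to the private address in my.cnf and time a trivial query from the web server against each address (addresses are placeholders):

        # /etc/mysql/my.cnf on the database server
        [mysqld]
        bind-address = 10.0.0.5        # private LAN address instead of the public IP

        # rough comparison from the web server
        time mysql -h 10.0.0.5    -u app -p -e 'SELECT 1;'
        time mysql -h 203.0.113.5 -u app -p -e 'SELECT 1;'

    If the host routes the public addresses over the same physical network anyway, the difference is usually negligible; it matters when public traffic leaves the private LAN or is metered.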

    Read the article

  • Right solution for /etc/hosts file reset on reboot

    - by user846226
    I've just installed Funtoo, and after setting the FQDN in /etc/conf.d/hostname I noticed that when I set a list of aliases in the /etc/hosts file, it gets overwritten on each reboot. Someone suggested pointing the aliases at the 127.0.0.2 IP address, but that's not a valid solution for me. Could someone point me to the file where I should place entries like

        127.0.0.1 local.foo
        127.0.0.1 local.bar

    in order to make them persist in /etc/hosts after rebooting? Thanks! PS: I think openresolv could be the one that is overwriting the file.
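
    A quick way to find out which boot-time script is actually rewriting the file is to search the init and network configuration trees for references to /etc/hosts (directories shown are typical OpenRC locations and may differ on Funtoo):

        grep -rl '/etc/hosts' /etc/init.d /etc/conf.d /lib/rc /lib/netifrc 2>/dev/null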

    Read the article

  • QNAP (469L) with Debian: can't connect to router

    - by agtoever
    I've been running my QNAP 469L with Debian (Wheezy deb7u3) for a few months. Yesterday I upgraded the memory to 4 GB. The system boots fine, but since the upgrade I'm not able to connect the server to my router (a TP-Link WR941ND). My configuration: the router runs a DHCP server (192.168.67.100 and up), with a preconfigured IP address for the QNAP (192.168.67.10); the router is on 192.168.67.1. As said, Debian is installed on the QNAP (which can be regarded as a normal computer). Networking hardware on the QNAP: Intel PRO/1000 Network Connection using the e1000e kernel module. This is what I have tried so far:
      - Replaced the network cable (tried 3 different cables on different router ports).
      - Checked for messages from the kernel: dmesg | grep eth. Besides the normal hardware messages I get an ADDRCONF(NETDEV_UP): eth0: link is not ready for each call to ifup.
      - Manually restarted the network: sudo service networking restart.
      - Checked sudo ifconfig (eth0 is up, but has no IP addresses).
      - Checked /etc/network/interfaces, which has (besides the loopback device) an allow-hotplug eth0 and iface eth0 inet dhcp, which is AFAIK the default Debian configuration.
      - Since the server has two Ethernet ports, checked that I'm using the right port (the hardware address that ifconfig reports for eth0 is the same as the hardware address in the router's preconfigured IP entry for the server).
      - Did a manual sudo ifdown eth0 && sudo ifup eth0 with no result (but an extra ADDRCONF(NETDEV_UP): eth0: link is not ready in the kernel log).
      - Did a DHCP request, dhclient -v eth0: for about a minute requests are sent (according to the terminal) and at the end I get No DHCPOFFERS received. No working leases in persistent database - sleeping.
      - Checked the router system log for received DHCP requests. I see them for some devices (my Mac, my iPhone) but not from the QNAP. The log entries look like DHCPS:Recv REQUEST from 84:85:06:07:75:6A followed by DHCPS:Send ACK to 192.168.67.101. There are no records from the QNAP's hardware address.
    So the two error messages that I do get are ADDRCONF(NETDEV_UP): eth0: link is not ready for every ifup, and No DHCPOFFERS received. No working leases in persistent database - sleeping. for every DHCP call.
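
    Since "link is not ready" points at the link layer rather than at DHCP itself, a hedged sketch for separating the two: give eth0 a temporary static configuration matching the addressing from the question, and check what the driver reports about the link:

        # /etc/network/interfaces -- temporary static fallback instead of DHCP
        auto eth0
        iface eth0 inet static
            address 192.168.67.10
            netmask 255.255.255.0
            gateway 192.168.67.1

        # link / negotiation status as seen by the NIC
        ethtool eth0

    If the link still never comes up with a static address, the problem is below IP (port, cable, or the e1000e link after the hardware change), not the DHCP exchange.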

    Read the article

  • Allow SFTP in iptables

    - by Kevin Orriss
    I have just purchased a VPS from Linode and am going through the setup guide. I have everything running (apache2, php, mysql, etc.) but I am being denied access via SFTP when using FileZilla to upload a file. Now, this is my second time installing the server, as I missed a section out the first time. I was able to connect to my server through SFTP in FileZilla the first time, and the thing I missed out was adding a new user and editing the iptables rules in the firewall. So it would seem that the guide I have been following has blocked SFTP but allowed SSH. Here is the iptables file:

        *filter

        # Allow all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT

        # Accept all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow all outbound traffic - you can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT

        # Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL).
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT

        # Allow SSH connections
        #
        # The -dport number should be the same port number you set in sshd_config
        #
        -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT

        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

        # Log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT

        COMMIT

    All I would like is a line I need to put in there which allows SFTP over port 22. Thank you for reading this.
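
    For reference, SFTP is carried inside the SSH connection, so the existing port-22 rule already covers it and no extra iptables line should be needed. A sketch of how one might confirm that the rules are actually loaded and that sshd has its SFTP subsystem enabled (paths are the Debian/Ubuntu defaults):

        sudo iptables -L INPUT -n --line-numbers      # the --dport 22 ACCEPT rule should be listed before the final REJECT
        grep -i '^Subsystem' /etc/ssh/sshd_config     # e.g. "Subsystem sftp /usr/lib/openssh/sftp-server"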

    Read the article

  • How do you get autofs and updatedb to work together?

    - by Veek.M
    /etc/my.misc:

        sda1 -fstype=ntfs,user,exec :/dev/sda1
        sda3 -fstype=ntfs,user,exec :/dev/sda3
        sda4 -fstype=ntfs,user,exec :/dev/sda4

    /etc/auto.master:

        /my /etc/my.misc --ghost

    When I run locate .pdf, I get nothing, because although the mount points (sda1, sda2, ...) are created in /my, there's nothing in them until I access them. Unfortunately this is not good enough for updatedb, and it purges its cache of /my/sdaX files. How do I prevent/solve this problem?
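
    A hedged sketch of the mlocate side of this: by default updatedb prunes autofs-backed mounts entirely, so even a mounted share under /my is skipped unless the pruning rules are relaxed in /etc/updatedb.conf (variable names are mlocate's; the default lists vary by distribution):

        # /etc/updatedb.conf -- let updatedb descend into the automounted disks
        PRUNE_BIND_MOUNTS="no"
        PRUNEFS="nfs nfs4 rpc_pipefs proc sysfs tmpfs"   # your distribution's default list, with "autofs" removed
        PRUNEPATHS="/tmp /var/spool"

    The shares also have to be mounted at the moment updatedb runs, e.g. by touching each mount point (ls /my/sda1 >/dev/null) right before the updatedb cron job.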

    Read the article

  • Listing packages in a repository?

    - by noloader
    I'm working on Ubuntu 12.04 Server. I want to install OpenStack, so I enabled the Cloud Archive repo: sudo add-apt-repository cloud-archive:havana. After the subsequent update and upgrade, I noticed python-crypto changed. python-crypto recently fixed a CVE, so I would like to ensure I'm using the patched version of python-crypto. I'd also like to compare the python-crypto in both Ubuntu and the Cloud Archive. How does one list the package information for both Ubuntu::python-crypto and CloudArchive::python-crypto? (And sorry I could not tag this with apt-cache; it's not available in the list of tags.) Thanks in advance.
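
    A sketch of the usual apt tooling for this, with the package name taken from the question:

        apt-cache policy python-crypto     # installed version, candidate, and every repository offering the package
        apt-cache madison python-crypto    # one line per available version, with the archive it comes from
        apt-cache show python-crypto       # full package metadata for each available version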

    Read the article

  • Best windows tool to scan and repair harddisk

    - by ICTdesk.net
    Does anybody know of a good software tool to scan and repair sectors on hard disks (an alternative to the standard tools included with Windows, e.g. scandisk/chkdsk)? I already know about all the emergency/ultimate boot CDs; I am looking for a tool that is not on one of the boot CDs. Thank you. Kindest regards, Marcel

    Read the article

  • What could be causing LVM errors on first boot after install in Debian?

    - by ianfuture
    I've installed Debian (Lenny) on a machine at home. It was set up during install to have a /boot partition; the rest was encrypted, with LVM on top of that and all the other partitions inside LVM. After the install completed, on first boot it asked for the password to decrypt (the same password for both drives), then it showed an error which said LVM could not find a physical device with a particular UUID, or something similar. The LVM install spans two HDs: one 120GB and one 40GB. The 120GB is master on its IDE cable and has /boot on it; the 40GB is slave on the other IDE cable. Is there anything that could be done to rescue this install, or to diagnose the problem? It took ages to get installed due to the time spent encrypting the drives and I'd rather not go through that again. :( Thanks, Ian
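
    A hedged sketch of the usual diagnosis from a rescue shell or live CD (device names are illustrative and depend on which partitions hold the encrypted volumes):

        # unlock each encrypted partition, then ask LVM what it can actually see
        cryptsetup luksOpen /dev/sda2 crypt_sda2
        cryptsetup luksOpen /dev/sdb1 crypt_sdb1
        pvscan                  # lists physical volumes and their UUIDs -- compare against the UUID in the error
        vgscan
        vgchange -ay            # activates the volume group once all physical volumes are visible

    If pvscan only shows one of the two drives, the boot-time failure is usually that the second device isn't being unlocked (or isn't listed in /etc/crypttab) before LVM starts.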

    Read the article

  • Crontab -- scheduling my backups

    - by Garfonzo
    I want to do a backup every Friday night (no, this is not the whole backup routine, just part of it). Each Friday night's backup will not be overwritten until 4 weeks later. So, essentially, I have four revolving backups: week1, week2, week3, and week4. Now, I need the week1 backup script to run every 4 weeks, but I also want week2's script to run every four weeks. I know that I can tell crontab to execute something every X weeks/days/hours/whatever. However, how do I set it up so that each of these four scripts actually runs on a different week? How do I avoid all 4 scripts running on the same night, then dutifully waiting for weeks only to all run again? Thanks.
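
    A minimal sketch of one common workaround, since cron has no native "every 4th week" field: schedule all four entries every Friday night and let a small shell test (weeks since the epoch, modulo 4) decide which one actually fires. The script names are placeholders:

        # m h dom mon dow  command          (note: % has to be escaped inside a crontab)
        0 23 * * 5  [ $(( $(date +\%s) / 604800 \% 4 )) -eq 0 ] && /usr/local/bin/backup-week1.sh
        0 23 * * 5  [ $(( $(date +\%s) / 604800 \% 4 )) -eq 1 ] && /usr/local/bin/backup-week2.sh
        0 23 * * 5  [ $(( $(date +\%s) / 604800 \% 4 )) -eq 2 ] && /usr/local/bin/backup-week3.sh
        0 23 * * 5  [ $(( $(date +\%s) / 604800 \% 4 )) -eq 3 ] && /usr/local/bin/backup-week4.sh

    Since consecutive Fridays are exactly one week apart, each entry fires on every fourth Friday and never on the same night as another.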

    Read the article

  • Confused about setting up subversion

    - by apache
    I've already compiled and installed Subversion; now I'm trying to add users to it. I've found two articles on this, but they seem to be going in entirely different directions. The first is here, which looks very simple and seems to say it's not necessary to create a user account (useradd ...). The second is here, which is a lot more complicated and seems to say I need to create a system user account for each svn user. Which one should I follow?
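
    For orientation, the two approaches usually differ in who does the authentication. A minimal sketch of the repository-level variant (svnserve), where svn users live in the repository's own password file and no system accounts are created; usernames are placeholders:

        # <repository>/conf/svnserve.conf
        [general]
        anon-access = none
        auth-access = write
        password-db = passwd

        # <repository>/conf/passwd
        [users]
        alice = alicespassword
        bob   = bobspassword

    The more involved article is likely describing the svn+ssh style, where each svn user is a real system account (useradd) and file permissions on the repository do the access control.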

    Read the article

  • I'm trying to set up a LAMP server so it's totally anonymous, any suggestions?

    - by flexterra
    I'm going to set up a web service which will use the LAMP stack. One of the most important features of the site is that it should be anonymous. We thought that a good feature would be for the server not to make any logs that could potentially identify a user. I'm working on a web app for a news organization. They want a site that allows people to submit news leads and tips (text/files) to journalists. We think that if we can provide good anonymity, people will be more inclined to provide information. We will also teach people how to use tools like Tor as an extra precaution for whistleblowers. Is this even possible? Any suggestions of obscure things we should look into?
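
    As one narrow piece of this, a hedged sketch of what suppressing client-identifying logs looks like in an Apache virtual host; the blunt option discards the logs entirely, the softer variant keeps a log but drops the client address from the format:

        <VirtualHost *:80>
            ServerName tips.example.org
            # discard access logging entirely
            CustomLog /dev/null common
            # or: log without the client IP
            # LogFormat "- %l %u %t \"%r\" %>s %b" noip
            # CustomLog /var/log/apache2/tips.log noip
            ErrorLog /dev/null
        </VirtualHost>

    PHP, MySQL and the system logs (auth, mail, firewall) would each need the same treatment, which is why the problem is bigger than the web server alone.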

    Read the article

  • Triggering GDM login on a remote machine

    - by creator
    I have to briefly describe the situation. We are planning to make a computer classroom with workstations running Ubuntu 10.04. Since making accounts for each student has not been considered reasonable, we decided to make accounts for each student group. We don't want students to share their passwords between groups, so the solution would be not to give them passwords at all, but to let the teacher log them in instead. Obviously he shouldn't go from one machine to another typing in credentials by hand, so we need some script that will connect to a remote machine over SSH and make GDM (or probably any other login manager, if GDM cannot serve this purpose) log in a specified user. I couldn't find any solutions, and I haven't noticed anybody in a similar situation asking for help, so my question is: can the scheme described be realized, and if yes, then how? Thanks in advance.

    Read the article

  • How can I avoid hard-coding YubiKey user identities into the PAM stack?

    - by CodeGnome
    The Yubico PAM Module seems to require changes to the PAM stack for each user that will be authenticated with a YubiKey. Specifically, it seems that each user's client identity must be added to the right PAM configuration file before the user can be authenticated. While it makes sense to add authorized keys to an authentication database such as /etc/yubikey_mappings or ~/.yubico/authorized_yubikeys, it seems like a bad practice to have to edit the PAM stack itself for each individual user. I would definitely like to avoid having to hard-code user identities into the PAM stack this way. So, is it possible to avoid hard-coding the id parameter to the pam_yubico.so module itself? If not, are there any other PAM modules that can leverage YubiKey authentication without hard-coding the stack?
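
    For reference, a hedged sketch of the central-mapping setup described in the pam_yubico documentation: the id= parameter is the Yubico API client id rather than a user identity, and the per-user key ids live in one mapping file, so the PAM stack itself never changes per user. The key ids below are placeholders:

        # /etc/pam.d/common-auth -- one line, no per-user entries in the stack
        auth required pam_yubico.so id=<api client id> authfile=/etc/yubikey_mappings

        # /etc/yubikey_mappings -- one line per user: <username>:<key id>[:<key id>...]
        alice:ccccccbchvht
        bob:cccccclkjnde:cccccctrvugh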

    Read the article

  • Gentoo+urxvt+terminus: How do I change font version?

    - by gaidal
    In my Debian installation I can type extended characters such as åäö by default using the terminus font; however, in Gentoo I can't get this to work so far. Nothing happens when I hit those keys, like in this thread: Missing glyphs in Terminus font, how to setup a fallback font? But in this case I know terminus supports those characters in at least some of its versions, since it works in Debian. So what I want is to find out how to see and choose which of the many different terminus font files is being used. I set the font in the same way on both Debian and Gentoo, using URxvt*font: xft:terminus:size=xx in .Xdefaults. Both systems use en_US.UTF-8 as the default locale.
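
    A sketch of how fontconfig can show which terminus file Xft actually resolves for the pattern used in .Xdefaults (the size is illustrative):

        fc-list | grep -i terminus          # every installed terminus face and the file it lives in
        fc-match -v 'terminus:size=12'      # the face fontconfig would pick for that pattern, including its charset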

    Read the article

  • MySQL on Ubuntu 4 is not running and is not remotely accessible

    - by user628119
    I installed MySQL on Ubuntu 4.4, and using the following commands I can see mysql and mysqld running:

        ps -ef | grep mysql
        ps -ef | grep mysqld

    But when I run netstat I don't see mysql or 3306 anywhere. In the my.cnf file I have set my IP, and the port is 3306. Also, when I run

        sudo netstat -tap | grep mysql

    I don't see anything. What commands do I need to run so that MySQL 5 runs on port 3306 and on ip=x.x.x.x and is remotely accessible? Looking forward to your reply.
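
    A hedged sketch of the my.cnf settings usually involved when mysqld is running but nothing listens on 3306 (the address is a placeholder for the server's own IP):

        # /etc/mysql/my.cnf
        [mysqld]
        # "skip-networking" must not be set anywhere, or no TCP socket is opened at all
        port         = 3306
        bind-address = x.x.x.x        # or 0.0.0.0 to listen on every interface

    After editing, restart MySQL (e.g. sudo /etc/init.d/mysql restart) and re-check with sudo netstat -tlnp | grep 3306; the remote side also needs a MySQL account that is allowed to connect from that client host.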

    Read the article

  • Setting differing ACLs on directories and files

    - by durandal
    Quick ACL question: I want to set up default permissions for a file share so that everyone can rwx all of the directories and so that all newly created files are rw. Everyone who is accessing this share is in the same group, so that isn't a concern. I have looked at doing this via ACLs without changing all of the users' umasks and such. Here are my current invocations:

        setfacl -Rdm g:mygroup:rwx share_name
        setfacl -Rm g:mygroup:rwx share_name

    My problem is that while I want all of the newly created sub-directories to be rwx, I only want newly created files to be rw. Does anyone have a better method to achieve my desired end result? Is there some way to set ACLs on directories separately from files, in a similar vein to "chmod +x" vs. "chmod +X"? Thanks
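
    For reference, setfacl accepts the same capital X as chmod, granting execute only to directories and to files that already have an execute bit, which is most useful for the non-default pass over existing content. For newly created files the effective rights are additionally capped by the mode the creating program requests, so files created as 0666 come out rw even when the inherited group entry says rwx. A sketch reusing the names from the question:

        # default ACL for future content, plus an access ACL for what already exists
        setfacl -R -d -m g:mygroup:rwX share_name
        setfacl -R    -m g:mygroup:rwX share_name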

    Read the article
