Search Results

Search found 53148 results on 2126 pages for 'ubuntu 9 10'.


  • my X server doesn't load a module called "glx", but my video drivers seem to be installed

    - by rumtscho
    I just got a new, very wide monitor (2560x1440), so there is no sense in maximizing my applications. I installed CompizConfig Settings Manager and enabled Grid. Nothing happened: the shortcuts don't move application windows. I went to System - Preferences - Appearance, Visual Effects tab. It was set to "disabled". When I try to set it to "normal" or "extra", a message box appears telling me that it's searching for video drivers, then disappears, and I get the error "Desktop effects could not be enabled". I opened Xorg.0.log and found these errors:

        (EE) Failed to load /usr/lib/xorg/modules/extensions//libglx.so
        (II) UnloadModule: "glx"
        (EE) Failed to load module "glx" (loader failed, 7)
        (II) LoadModule: "extmod"

    Going to Administration - System - Hardware Drivers, it said that there are no available and/or installed hardware drivers. But apt-get said that it cannot install nvidia-glx-185, as it is already installed. Googling my error message suggested that I install and run something called envyng. This let me install the nvidia drivers again, and now I can see in the Hardware Drivers window that they are installed and active. But the error in Xorg.0.log remains, and I still cannot enable the Compiz effects or use Grid. I don't have enough Linux experience to tell whether this is a single cause-and-effect chain of problems or three independent ones, but I'd appreciate help with any of them. I am running Ubuntu 9.10; the video card is a GeForce 7600GS.
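
    A couple of checks that usually narrow this down, as a sketch (paths are the stock Ubuntu 9.10 ones): the proprietary NVIDIA packages replace X.org's libglx.so with their own, and "loader failed, 7" is the classic symptom of a stale or mismatched copy left behind by a partial install.

        # which package owns the extension X is failing to load?
        dpkg -S /usr/lib/xorg/modules/extensions/libglx.so
        ls -l /usr/lib/xorg/modules/extensions/
        # once glx loads, this should report "direct rendering: Yes"
        glxinfo | grep -i 'direct rendering'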

    Read the article

  • fail2ban: iptables rules to block all but certain ports

    - by J Spen
    I just installed Ubuntu Server 14.04 and don't have much experience with iptables. I am trying to get a basic setup going where I only accept SSH connections on ports 22 and 2222, and I actually have that working with no problem using fail2ban's ssh jail. Then I wanted to block all other ports except 423 and 4242, but either method of DROPping all connections that are not listed seems not to work and locks me out of everything. Below is the setup that works:

        -P INPUT ACCEPT
        -P FORWARD ACCEPT
        -P OUTPUT ACCEPT
        -N fail2ban-ssh
        -A INPUT -p tcp -m multiport --dports 22,2222 -j fail2ban-ssh
        -A fail2ban-ssh -j RETURN

    I tried to change it either to:

        -P INPUT DROP
        -P FORWARD ACCEPT
        -P OUTPUT ACCEPT
        -N fail2ban-ssh
        -A INPUT -p tcp -m multiport --dports 22,2222 -j fail2ban-ssh
        -A fail2ban-ssh -j RETURN

    or to:

        -P INPUT ACCEPT
        -P FORWARD ACCEPT
        -P OUTPUT ACCEPT
        -N fail2ban-ssh
        -A INPUT -p tcp -m multiport --dports 22,2222 -j fail2ban-ssh
        -A INPUT -j DROP
        -A fail2ban-ssh -j RETURN

    I have noticed that the fail2ban-ssh rules are added to my iptables automatically on boot, because if I save them with iptables-persistent they end up entered twice. How do I go about blocking everything except those two sets of ports while using fail2ban? Is it a bad fail2ban configuration, or do I need to add the fail2ban-ssh -j RETURN somewhere else?
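
    One plausible reading of the lockout, sketched below (untested against this exact box): the fail2ban-ssh chain only bans or RETURNs -- it never ACCEPTs -- so with a DROP policy even clean SSH packets fall back into INPUT and hit the policy. The usual shape is to keep fail2ban's jump (it re-inserts it on restart, which also explains the doubled rules with iptables-persistent) and add explicit ACCEPTs plus loopback and established-state rules:

        iptables -P INPUT DROP
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # fail2ban inserts its own "-j fail2ban-ssh" jump when it starts
        iptables -A INPUT -p tcp -m multiport --dports 22,2222,423,4242 -j ACCEPT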

    Read the article

  • Sending mail through local MTA while domain MX records point to Google Apps

    - by Assaf
    My domain's email is managed by Google Apps, so that domain users get Gmail, Calendar, etc. But I also want to be able to send application notifications to users outside the domain via email (e.g. "someone commented on your post", and so on). However, if I try to send email through code I get blocked by Gmail after a few emails. I send marketing email through MailChimp, to minimize the risk of appearing as spam to my users (one-click unsubscribe, etc.), but I can't send application messages that way.

    I want to install a local MTA (my server runs Ubuntu), but I'm not sure what anti-spam measures I need to implement so that receiving MTAs don't think it's a spam server. What's stopping anyone from setting up a mail server and sending emails using my domain name? AFAIK it's the DNS records that show the MTA's address actually belongs to the domain, but my understanding of this is rather superficial, so someone please correct me if I'm wrong. What sort of DNS configuration do I need to put in place so that I don't get blacklisted (assuming I don't actually spam anyone)? The MX records already point to Google, and I'd like to keep it that way. So do I just need to define an A record for my internal mail server? Should it show email as coming from a sub-domain, so as not to conflict with the bare domain being managed by Google?

    Edit: Does the following SPF record make sense if I want email from my domain name to be sent by either Google's servers or any server with a DNS name ending in mydomain.com?

        "v=spf1 ptr mx:google.com mx:googlemail.com ~all"

    How should I set up reverse DNS for my server? If I have an A record that points mailsender.mydomain.com to my MTA's IP address, does it mean that reverse lookup will only allow emails sent from [email protected]?
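
    For comparison, a possible shape for the record (a sketch: include:_spf.google.com is Google's documented mechanism for Google Apps, while the ip4 address is a placeholder for the MTA's public IP; the ptr mechanism is widely discouraged because it forces extra DNS lookups at the receiver):

        mydomain.com.  IN  TXT  "v=spf1 include:_spf.google.com ip4:203.0.113.10 ~all"

    Reverse DNS is separate from SPF and doesn't constrain the From address: the PTR record is set by whoever controls the IP block (often the hosting provider), and receivers mostly check that it exists and matches the server's HELO name.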

    Read the article

  • x86_64 and memory issues

    - by Valery
    Recently I switched from the 32-bit version of Ubuntu to the 64-bit version, and now I'm experiencing some problems: all applications take about twice as much memory, and some take even more. For example, sshd on the new server:

        root      6608  0.0  0.0  67972  2912 ?   Ss  14:43  0:00 sshd: deploy [priv]
        deploy    6616  0.0  0.0  67972  1724 ?   S   14:43  0:00 sshd: deploy@pts/4
        root     20892  0.0  0.0  50916  1160 ?   Ss  15:53  0:00 /usr/sbin/sshd
        root     21170  0.0  0.0  67972  2912 ?   Ss  15:56  0:00 sshd: deploy [priv]
        deploy   21173  0.0  0.0  67972  1728 ?   S   15:56  0:00 sshd: deploy@pts/0
        root     23802  0.0  0.0  67972  2912 ?   Ss  16:08  0:00 sshd: deploy [priv]
        deploy   23804  0.0  0.0  67972  1724 ?   S   16:08  0:00 sshd: deploy@pts/1
        root     24570  0.0  0.0  67972  2908 ?   Ss  12:45  0:00 sshd: deploy [priv]
        deploy   24573  0.0  0.0  68112  1804 ?   S   12:45  0:00 sshd: deploy@pts/3
        deploy   25014  0.0  0.0   5168   852 pts/0 S+ 16:13  0:00 grep ssh

    and the same on the old server:

        root      4867  0.0  0.0  5312 1028 ?     Ss  Mar23  0:00 /usr/sbin/sshd
        root     23753  0.0  0.0  8052 2556 ?     Ss  16:15  0:00 sshd: deploy [priv]
        deploy   23755  0.0  0.0  8052 1524 ?     S   16:15  0:00 sshd: deploy@pts/0
        deploy   23770  0.0  0.0  3004  748 pts/0 D+  16:15  0:00 grep ssh

    The same problem occurs with postfix, nginx and some other applications.
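
    One thing worth noting when reading those listings: the fifth and sixth columns of ps aux are VSZ (virtual) and RSS (resident), in KiB. A 64-bit build doubles every pointer and enlarges some per-process reservations, which inflates VSZ far more than actual memory use, so comparing RSS is usually the more meaningful check -- a quick sketch:

        ps -o pid,vsz,rss,args -C sshd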

    Read the article

  • Problems getting Cron to run processes tagged @reboot for LDAP users

    - by Ben Torell
    I have a lab of computers running Ubuntu 9.10. Most of the people who log on to these computers are users from an LDAP server, not local users. We discovered that if an LDAP user has a crontab with an entry marked @reboot, the command will not actually run when a machine reboots. I'm pretty sure this is because the cron daemon starts before networking is fully up, so the crontabs of LDAP users aren't loaded, run, or checked for @reboot. In fact, cron will ignore LDAP users' crontabs entirely after a reboot until the user runs crontab -e again and saves, or until the cron daemon is restarted. We were able to fix one part of this problem by adding the following line to /etc/crontab:

        @reboot root /bin/sleep 45 && /etc/init.d/cron restart

    Thus, when cron starts back up after a reboot, it waits for networking to come up and then restarts the cron daemon. That fixes the problem of LDAP users' crontabs not being read at all. However, since it's the cron daemon being restarted and not the computer, @reboot entries are ignored. Is there a way for a user to make a command run when the daemon restarts, rather than on reboot? Or is there a better solution to this overall problem? Thanks.
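
    A rough per-user emulation of @reboot that survives daemon restarts (a sketch only: it leans on /proc/1's mtime approximating boot time on Linux, which you should verify, and the script path is illustrative) -- poll from a normal crontab entry and fire once per boot:

        * * * * * [ ! -e "$HOME/.booted" ] || [ /proc/1 -nt "$HOME/.booted" ] && touch "$HOME/.booted" && "$HOME/bin/on-boot.sh"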

    Read the article

  • Enable gzip on Nginx

    - by Rob Wilkerson
    Yes, I know there are a lot of other questions out there that look exactly like this one. I think I must've looked at all of them. Twice. In desperation, I'm adding another in case my specific configuration is the issue. Bear with me.

    First, the question: what do I need to do to get gzip compression to work?

    I have an Ubuntu 12.04 server running nginx 1.1.19. Nginx was installed from the following packages: nginx, nginx-common, nginx-full. The http block of my nginx.conf looks like this:

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            sendfile on;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_disable "msie6";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Both PageSpeed and YSlow report that I need to enable compression. I can see that the request headers include Accept-Encoding: gzip,deflate,sdch, but the response headers do not have the corresponding Content-Encoding header. I've tried various other config values (gzip_vary on, gzip_http_version 1.0, etc.), but no joy. As far as I know, I can only assume that nginx was compiled with compression support -- if there's any way to verify that, I'd love to know. If anyone sees anything I'm missing or can suggest further debugging, please let me know. I'm no sysadmin and I'm new to nginx, so I've exhausted everything I can think of or have read. Thanks.
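
    Two things worth checking, as a sketch (the directives are standard nginx; the values are illustrative): whether the build was configured without the gzip module at all, and whether the tested responses are types nginx compresses -- by default gzip on applies only to text/html:

        # prints the configure arguments; gzip is compiled in unless this matches
        nginx -V 2>&1 | grep -o 'without-http_gzip_module'

        # inside the http block: widen the compressed types and set a floor
        gzip_types text/css application/javascript application/json application/xml;
        gzip_min_length 256;
        gzip_proxied any;
        gzip_vary on;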

    Read the article

  • Any non-custom way to manage iptables with fail2ban and libvirt+kvm?

    - by Peter Hansen
    I have an Ubuntu 9.04 server running libvirt/kvm and fail2ban (for SSH attacks). Both libvirt and fail2ban integrate with iptables in different ways. Libvirt uses (I think) some XML config and during startup (?) configures forwarding to the VM subnet. Fail2ban installs a custom chain (probably at init) and periodically modifies it to ban/unban probable attackers. I also need to install my own rules to forward various ports to servers running in VMs and on other machines, and set up rudimentary security (e.g. drop all INPUT traffic except the few ports I want open), and of course I'd like the ability to add/remove rules safely without restarting. It seems to me iptables is a powerful tool that's sorely lacking some sort of standardized way of juggling all this stuff. Every project, and every sysadmin, seems to do it differently! (And I think there's lots of "cargo cult" admin going on here, with people cloning crude approaches like "use iptables-save like so".) Short of figuring out the gory details of exactly how both of these (and potentially other) tools manipulate the netfilter tables, and developing my own scripts or just manually executing iptables commands, is there any way to safely work with iptables while not breaking the functionality of these other tools? Any nascent standards or projects defined to bring sanity to this area? Even a helpful web page I missed that might cover at least these two packages together?
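
    One low-tech convention that coexists with both tools, sketched under the assumption that fail2ban and libvirt only ever rebuild their own chains: keep your static rules in a chain of your own, flush only that chain, and never flush the built-in ones (the chain name and port are arbitrary):

        iptables -N my-static 2>/dev/null   # create once; ignore "exists" errors
        iptables -F my-static               # rebuild only our own chain
        iptables -A my-static -p tcp --dport 80 -j ACCEPT
        # hook it into INPUT exactly once, ahead of the policy
        iptables-save | grep -q -- '-A INPUT -j my-static' || iptables -I INPUT 1 -j my-static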

    Read the article

  • Sudoers file allow sudo on specific file for active directory group

    - by tubaguy50035
    I have Active Directory sign-in working on an Ubuntu 12.04 box. When a user signs in, a script runs that needs sudo permission (it modifies the Samba config file). How do I specify this in my sudoers file? I've tried:

        %DOMAIN\\AD+Programmers ALL=NOPASSWD: /usr/local/bin/createSambaShare.php

    I've found various resources on the internet stating that this is how it would be done, but I'm not sure I have the first part right. What do they mean by DOMAIN -- the workgroup or the realm? I use Samba + winbind for the Active Directory integration. Here's my smb.conf:

        [global]
        security = ads
        netbios name = hostname
        realm = COMPANYNAME.COM
        password server = passwordserver
        workgroup = COMPANYNAME
        idmap uid = 1000-10000
        idmap gid = 1000-10000
        winbind separator = +
        winbind enum users = no
        winbind enum groups = no
        winbind use default domain = yes
        template homedir = /home/%D/%U
        template shell = /bin/bash
        client use spnego = yes
        domain master = no

    EDIT: The users that should have access to run that script are all part of the Programmers group, which has an Active Directory Domain Services folder of Company.com/Staff/Security Groups (not sure if that matters or not).
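
    A sketch of the two likeliest spellings -- which one applies depends on what id reports for a logged-in user: with winbind use default domain = yes the DOMAIN\ prefix is normally dropped, and when a prefix is needed it is the short workgroup name, not the realm (spaces in group names would be backslash-escaped):

        # if "id someuser" shows plain group names:
        %programmers ALL=(ALL) NOPASSWD: /usr/local/bin/createSambaShare.php
        # if groups still carry the domain prefix:
        %COMPANYNAME\\programmers ALL=(ALL) NOPASSWD: /usr/local/bin/createSambaShare.php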

    Read the article

  • VirtualBox VM running web server not accessible via external IP

    - by mwigdahl
    I have a Windows 7 machine running VirtualBox with an Ubuntu guest, which has a Bitnami LAMP stack installed. The guest is configured for bridged networking, and I can access its web server just fine from other machines on my LAN using the guest's IP. I'm trying to configure port forwarding so that I can access the web server from outside my LAN (the router is a 2WIRE model, as I'm on AT&T U-verse). I've set up port forwarding for ports 80 and 443 to the guest's IP, in the same manner as for my previous, physical web server, which worked just fine. However, I cannot access the new, virtual web server using my external IP on the forwarded port. I suspected Windows Firewall issues on the host, but disabling it didn't solve the issue. Any advice on what to try next?

    EDIT: I've now attempted disabling the firewall on the guest with sudo ufw disable -- that doesn't seem to help either. However, after checking the router's port forwarding in more detail, I may see the problem. My VM is named "linux", and it shows up inconsistently in the router's configuration pages: sometimes it reports a valid LAN IP, and other times it shows no IP at all. Even when it shows the correct IP, the router indicates that it is disconnected. Could this be an indication that the 2WIRE router doesn't play well with VirtualBox's bridged networking mode?
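
    A quick way to split the problem in half, as a sketch (assumes the guest's interface is eth0): watch inside the guest for the forwarded packets while hitting the external IP from outside the LAN. If nothing arrives, the router isn't forwarding at all -- which would fit the 2WIRE showing the VM as disconnected -- and the VirtualBox side is off the hook:

        sudo tcpdump -ni eth0 tcp port 80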

    Read the article

  • Are my iptables secure?

    - by Patricia
    I have this in my rc.local on my new Ubuntu server:

        iptables -F
        iptables -A INPUT -i eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --dport 9418 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --sport 9418 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --dport 5000 -m state --state NEW,ESTABLISHED -j ACCEPT   # Heroku
        iptables -A INPUT -i eth0 -p tcp --sport 5000 -m state --state ESTABLISHED -j ACCEPT        # Heroku
        iptables -A INPUT -p udp -s 74.207.242.5/32 --source-port 53 -d 0/0 --destination-port 1024:65535 -j ACCEPT
        iptables -A INPUT -p udp -s 74.207.241.5/32 --source-port 53 -d 0/0 --destination-port 1024:65535 -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
        iptables -P INPUT DROP
        iptables -P FORWARD DROP

    9418 is Git's port. 5000 is a port used to manage Heroku apps. And 74.207.242.5 and 74.207.241.5 are our DNS servers. Do you think that this is secure? Can you see any holes here?

    Update: Why is it important to block OUTPUT? This machine will be used only by me.
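
    Two gaps commonly flagged in rule sets of this shape, sketched below: with an INPUT policy of DROP and no loopback rule, anything talking to 127.0.0.1 breaks; and the per-port ESTABLISHED pairs can collapse into one connection-tracking rule, which is easier to audit:

        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW -j ACCEPT

    On the OUTPUT question: leaving it at ACCEPT is common for single-user machines; restricting it mainly limits what a compromised process could phone home to.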

    Read the article

  • Can't access folder on server - permission denied

    - by Michal Korzeniowski
    I am running a VPS with Ubuntu 11.04. After a clean MODX install I've tried to access http://www.encepence.pl/manager and got a permission denied error from the server. The thing is that I can easily access any other folder under that domain, and I can modify the manager folder's contents via FTP. I've tried modifying the virtual host with:

        <Directory /var/www/blackflow/data/www/encepence.pl/manager/>
            Options Indexes FollowSymLinks ExecCGI
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    but it didn't work. The existing configuration:

        <Directory /var/www/blackflow/data/www/encepence.pl>
            Options -ExecCGI -Includes
            php_admin_value open_basedir "/var/www/blackflow/data:."
            php_admin_flag engine on
        </Directory>

        <VirtualHost 192.166.219.34:80>
            ServerName encepence.pl
            CustomLog /var/www/httpd-logs/encepence.pl.access.log combined
            DocumentRoot /var/www/blackflow/data/www/encepence.pl
            ErrorLog /var/www/httpd-logs/encepence.pl.error.log
            ServerAdmin [email protected]
            ServerAlias www.encepence.pl
            SuexecUserGroup blackflow blackflow
            AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
            AddType application/x-httpd-php-source .phps
            php_admin_value open_basedir "/var/www/blackflow/data:."
            php_admin_value sendmail_path "/usr/sbin/sendmail -t -i -f [email protected]"
            php_admin_value upload_tmp_dir "/var/www/blackflow/data/mod-tmp"
            php_admin_value session.save_path "/var/www/blackflow/data/mod-tmp"
            VirtualDocumentRoot /var/www/blackflow/data/www/%0
        </VirtualHost>

    Any ideas on what might have gone wrong?
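
    Two checks that usually pinpoint which layer is denying, as a sketch (paths taken from the vhost above): the error log states whether it is Apache's Order/Allow, an .htaccess rule, or filesystem permissions -- and since the vhost runs with SuexecUserGroup blackflow, the tree must be readable by that user:

        tail -n 50 /var/www/httpd-logs/encepence.pl.error.log
        ls -ld /var/www/blackflow/data/www/encepence.pl/manager
        ls -l  /var/www/blackflow/data/www/encepence.pl/manager/.htaccess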

    Read the article

  • stdout, stderr, and what else? (going insane parsing slapadd output)

    - by user64204
    I am using slapadd to restore a backup. That backup contains 45k entries, which takes a while to restore, so I need some progress feedback from slapadd. Luckily there is the -v switch, which gives output similar to this:

        added: "[email protected],ou=People,dc=example,dc=org" (00003d53)
        added: "[email protected],ou=People,dc=example,dc=org" (00003d54)
        added: "[email protected],ou=People,dc=example,dc=org" (00003d55)
        .######## 44.22% eta 05m05s elapsed 04m spd 29.2 k/s
        added: "[email protected],ou=People,dc=example,dc=org" (00003d56)
        added: "[email protected],ou=People,dc=example,dc=org" (00003d57)
        added: "[email protected],ou=People,dc=example,dc=org" (00003d58)
        added: "[email protected],ou=People,dc=example,dc=org" (00003d59)

    Every N entries added, slapadd writes a progress line (.######## 44.22% eta 05m05s elapsed ...), which I want to keep, and one line per entry created, which I want to hide because it exposes people's email addresses -- but I still want to count those lines to know how many users were imported. My plan was:

        $ slapadd -v ... 2>&1 | tee log.txt | grep '########'   # real-time progress
        $ grep "added" log.txt | wc -l                          # user count once done

    I tried different variations of the above, and whatever I try I can't grep the progress line. So I traced slapadd:

        sudo strace slapadd -v ...

    And here is what I get:

        write(2, "added: \"[email protected]"..., 78added: "[email protected],ou=People,dc=example,dc=org" (00000009)
        ) = 78
        gettimeofday({1322645227, 253338}, NULL) = 0
        _######## 44.22% eta 05m05s elapsed 04m spd 29.2 k/s ) = 80
        write(2, "\n", 1
        )

    As you can see, the percentage line doesn't appear to be written to either stdout or stderr (FYI, I have validated with known working and failing commands that 2 is stderr and 1 is stdout). Q1: where is the progress line going? Q2: how can I grep on it while sending stderr to a file? Additional info: I'm running OpenLDAP 2.4.21 on Ubuntu Server 10.04.
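
    A guess worth testing, sketched below: progress meters that redraw in place usually terminate with \r instead of \n, so grep never sees a complete line (the truncated strace output also hints the meter goes to stderr like everything else, just without a newline). Translating \r and line-buffering the pipeline should expose it:

        slapadd -v ... 2>&1 | tee log.txt | tr '\r' '\n' | grep --line-buffered '########'

        # afterwards, count the imported entries
        grep -c 'added' log.txt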

    Read the article

  • Understanding tcptraceroute versus http response

    - by kojiro
    I'm debugging a web server that has a very high wait time before responding. The server itself is quite fast and has no load, so I strongly suspect a network problem. Basically, I make a web request:

        wget -O/dev/null http://hostname/
        --2013-10-18 11:03:08--  http://hostname/
        Resolving hostname... 10.9.211.129
        Connecting to hostname|10.9.211.129|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: '/dev/null'
        2013-10-18 11:04:11 (88.0 KB/s) - '/dev/null' saved [13641]

    So you see it took about a minute to give me the page, but it does give it to me with a 200 response. So I try a tcptraceroute to see what's up:

        $ sudo tcptraceroute hostname 80
        Password:
        Selected device en2, address 192.168.113.74, port 54699 for outgoing packets
        Tracing the path to hostname (10.9.211.129) on TCP port 80 (http), 30 hops max
         1  192.168.113.1  0.842 ms  2.216 ms  2.130 ms
         2  10.141.12.77  0.707 ms  0.767 ms  0.738 ms
         3  10.141.12.33  1.227 ms  1.012 ms  1.120 ms
         4  10.141.3.107  0.372 ms  0.305 ms  0.368 ms
         5  12.112.4.41  6.688 ms  6.514 ms  6.467 ms
         6  cr84.phlpa.ip.att.net (12.122.107.214)  19.892 ms  18.814 ms  15.804 ms
         7  cr2.phlpa.ip.att.net (12.122.107.117)  17.554 ms  15.693 ms  16.122 ms
         8  cr1.wswdc.ip.att.net (12.122.4.54)  15.838 ms  15.353 ms  15.511 ms
         9  cr83.wswdc.ip.att.net (12.123.10.110)  17.451 ms  15.183 ms  16.198 ms
        10  12.84.5.93  9.982 ms  9.817 ms  9.784 ms
        11  12.84.5.94  14.587 ms  14.301 ms  14.238 ms
        12  10.141.3.209  13.870 ms  13.845 ms  13.696 ms
        13  * * *
        …
        30  * * *

    I tried it again with 100 hops, just to be sure -- the packets never get there. So how is it that the server does respond to requests via HTTP, even after a minute? Shouldn't all requests just die? I'm not sure how to proceed debugging why this server is slow (as opposed to why it responds at all).
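
    One consistency check, as a sketch (requires an mtr build with TCP support): routers past hop 12 may simply decline to send ICMP time-exceeded while still forwarding the SYN, which would reconcile a dead-looking traceroute with a working HTTP connection:

        mtr --tcp -P 80 --report hostname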

    Read the article

  • How to get an inactive RAID device working again?

    - by Jonik
    After booting, my RAID1 device (/dev/md_d0 *) sometimes goes into a funny state and I cannot mount it.

    * Originally I created /dev/md0, but it has somehow changed itself into /dev/md_d0.

        # mount /opt
        mount: wrong fs type, bad option, bad superblock on /dev/md_d0,
               missing codepage or helper program, or other error
               (could this be the IDE device where you in fact use
               ide-scsi so that sr0 or sda or so is needed?)
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    The RAID device appears to be inactive somehow:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md_d0 : inactive sda4[0](S)
              241095104 blocks

        # mdadm --detail /dev/md_d0
        mdadm: md device /dev/md_d0 does not appear to be active.

    The question is: how do I make the device active again (using mdadm, I presume)? Other times it's alright (active) after boot, and I can mount it manually without problems. But it still won't mount automatically even though I have it in /etc/fstab:

        /dev/md_d0      /opt      ext4    defaults    0    0

    So a bonus question: what should I do to make the RAID device mount automatically at /opt at boot time? This is an Ubuntu 9.10 workstation. Background info about my RAID setup is in this question.
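
    The usual way out of the inactive state, sketched below (device names as in the question; take a backup before experimenting): stop the half-assembled array, let mdadm reassemble it from the superblocks, then pin the result so boot-time assembly is deterministic:

        mdadm --stop /dev/md_d0
        mdadm --assemble --scan
        # if it comes up cleanly, persist the assembly and refresh the initramfs
        mdadm --examine --scan >> /etc/mdadm/mdadm.conf
        update-initramfs -u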

    Read the article

  • What kernel modules are required for wi-fi to work?

    - by Leonid Shevtsov
    My custom-built 2.6.32 kernel cannot connect to any WPA-protected network. The kernel includes (probably?) everything that should be needed for wifi, including IPv4 network support (IPv6 is disabled), the ath5k wireless driver (which is used in the generic Ubuntu 2.6.31 kernel) and all crypto APIs. The card is detected; however, iwlist scan returns:

        wlan0     Failed to read scan data : Network is down

    and the network-manager log says:

        <info>  (wlan0): driver supports SSID scans (scan_capa 0x01).
        <info>  (wlan0): new 802.11 WiFi device (driver: 'ath5k')
        <info>  (wlan0): exported as /org/freedesktop/NetworkManager/Devices/1
        <info>  (wlan0): now managed
        <info>  (wlan0): device state change: 1 -> 2 (reason 2)
        <info>  (wlan0): bringing up device.
        <info>  (wlan0): preparing device.
        <info>  (wlan0): deactivating device (reason: 2).
        supplicant_interface_acquire: assertion `mgr_state == NM_SUPPLICANT_MANAGER_STATE_IDLE' failed
        <info>  modem-manager is now available
        <WARN>  default_adapter_cb(): bluez error getting default adapter: The name org.bluez was not provided by any .service files
        <info>  Trying to start the supplicant...
        <info>  (wlan0): supplicant manager state: down -> idle
        <info>  (wlan0): device state change: 2 -> 3 (reason 0)
        <WARN>  nm_supplicant_interface_add_cb(): Unexpected supplicant error getting interface: wpa_supplicant couldn't grab this interface.

    The exact same configuration works with the generic kernel. Is anything except wifi and the crypto API needed for wi-fi to work?
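
    One comparison worth making, sketched below (file names are the usual Ubuntu locations): diff the wireless options between the working generic config and the custom one. ath5k sits on top of mac80211/cfg80211, and wpa_supplicant on a kernel of this era also expects the wireless-extensions compatibility layer:

        grep -E 'CONFIG_(CFG80211|MAC80211|WIRELESS_EXT|LIB80211)' \
            /boot/config-2.6.31-generic .config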

    Read the article

  • Unicorn 3.3.1 and Rack 1.1.0 issues?

    - by user41422
    I'm upgrading from Ruby Enterprise Edition 1.8.6 to the latest 1.8.7 version with Unicorn, to facilitate an upgrade to Rails 2.3.10, and am running into some issues. Should I uninstall the older versions of these gems? Here are the log messages:

        I, [2011-02-02T22:06:16.328076 #30672]  INFO -- : listening on addr=0.0.0.0:8080 fd=3
        I, [2011-02-02T22:06:16.333137 #30672]  INFO -- : Refreshing Gem list
        /srv/ree/bin/unicorn_rails must be run inside RAILS_ROOT: #<Gem::LoadError: can't activate rack (~> 1.1.0, runtime) for ["actionpack-2.3.10", "rails-2.3.10"], already activated rack-1.2.1 for ["unicorn-3.3.1"]>
        I, [2011-02-02T22:07:12.259436 #30701]  INFO -- : listening on addr=0.0.0.0:8080 fd=3
        I, [2011-02-02T22:07:12.259952 #30701]  INFO -- : Refreshing Gem list
        /srv/ree/bin/unicorn_rails must be run inside RAILS_ROOT: #<Gem::LoadError: can't activate rack (~> 1.1.0, runtime) for ["actionpack-2.3.10", "rails-2.3.10"], already activated rack-1.2.1 for ["unicorn-3.3.1"]>
        I, [2011-02-02T22:09:27.787177 #30772]  INFO -- : listening on addr=0.0.0.0:8080 fd=3
        I, [2011-02-02T22:09:27.787691 #30772]  INFO -- : Refreshing Gem list
        /srv/ree/bin/unicorn_rails must be run inside RAILS_ROOT: #<Gem::LoadError: can't activate rack (~> 1.1.0, runtime) for ["actionpack-2.3.10", "rails-2.3.10"], already activated rack-1.2.1 for ["unicorn-3.3.1"]>
        I, [2011-02-02T22:10:44.175407 #30846]  INFO -- : listening on addr=0.0.0.0:8080 fd=3
        I, [2011-02-02T22:10:44.175928 #30846]  INFO -- : Refreshing Gem list
        /srv/ree/bin/unicorn_rails must be run inside RAILS_ROOT: #<Gem::LoadError: can't activate rack (~> 1.1.0, runtime) for ["actionpack-2.3.10", "rails-2.3.10"], already activated rack-1.2.1 for ["unicorn-3.3.1"]>
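
    The error itself points at the resolution, sketched here (version numbers taken from the log): Rails 2.3.10 demands rack ~> 1.1.0, but unicorn's looser dependency has already activated the newest installed rack, 1.2.1. Removing the newer rack -- or running everything under bundler so a single rack is chosen -- lets the two agree:

        gem uninstall rack -v 1.2.1
        gem list rack unicorn   # confirm only rack 1.1.x remains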

    Read the article

  • Replacing compiz/metacity with openbox reduces workspaces to 1

    - by Brian
    I like to use the GNOME desktop, but I prefer to replace its window manager with openbox, with 4 workspaces. However, when I run openbox --replace, the number of workspaces available drops to 1. If I go into obconf, workspaces is still configured to be 4 (~/.config/openbox/rc.xml shows the same). I can get the workspaces to reappear by changing the value in obconf to anything else, and then back to 4.

    I have been dealing with this problem since Ubuntu 9.04 (now up to 10.10), since I don't reboot very often, but it's really annoying to have to reset my workspaces whenever I do have to reboot. Changing the value in rc.xml and running openbox --reconfigure does not seem to have any effect. So what is obconf doing that I'm not (sends a dbus message, perhaps [EDIT: watching with dbus-monitor, I see no messages when changing the workspaces value in obconf])? I was hoping there would be a cleaner way to change the window manager than just running openbox --replace at login. So my questions are: Is there a better way to specify an alternate window manager (i.e. a way that doesn't cause the workspaces to break)? If not, how can I automatically set the number of workspaces back to 4?

    Update: I finally got around to trying what I commented on MrShunz's answer (adding WINDOW_MANAGER=/usr/bin/openbox to ~/.gnomerc). But the effect is the same as openbox --replace.
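
    A workaround sketch until the root cause is found (wmctrl speaks EWMH, so it works under any compliant window manager; the 2-second delay is arbitrary): force the desktop count right after openbox takes over, from the same login script:

        openbox --replace &
        sleep 2 && wmctrl -n 4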

    Read the article

  • Why am I getting this error in the logs?

    - by Matt
    OK, so I just started a new Ubuntu Server 11.10 machine and added the vhost, and all seems fine. I also restarted Apache, but when I visit the server IP (http://23.21.197.126/) in a browser I get a blank page, and when I tail the log:

        $ tail -f /var/log/apache2/error.log
        [Wed Feb 01 02:19:20 2012] [error] [client 208.104.53.51] File does not exist: /etc/apache2/htdocs
        [Wed Feb 01 02:19:24 2012] [error] [client 208.104.53.51] File does not exist: /etc/apache2/htdocs

    But my only file in sites-enabled is this:

        <VirtualHost 23.21.197.126:80>
            ServerAdmin [email protected]
            ServerName logicxl.com
            # ServerAlias
            DocumentRoot /srv/crm/current/public
            ErrorLog /srv/crm/logs/error.log
            <Directory "/srv/crm/current/public">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Is there something I am missing? The document root should be /srv/crm/current/public, not /etc/apache2/htdocs as the error suggests. Any ideas on how to fix this?

    UPDATE:

        $ sudo apache2ctl -S
        VirtualHost configuration:
        23.21.197.126:80       is a NameVirtualHost
                 default server logicxl.com (/etc/apache2/sites-enabled/crm:1)
                 port 80 namevhost logicxl.com (/etc/apache2/sites-enabled/crm:1)
        Syntax OK

    UPDATE:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName logicxl.com
            DocumentRoot /srv/crm/current/public
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /srv/crm/current/public/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>
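
    For what it's worth, /etc/apache2/htdocs is Apache's compiled-in default document root, served when a request matches no configured vhost. One hedged check, sketched below, is whether the vhost answers only when the Host header matches ServerName; if both requests return the app, those error entries may simply predate the vhost being enabled:

        # with the matching Host header -- should return the app
        curl -s -o /dev/null -D - -H 'Host: logicxl.com' http://23.21.197.126/
        # with the bare IP as Host -- reproduces the htdocs error if that is the cause
        curl -s -o /dev/null -D - http://23.21.197.126/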

    Read the article

  • How to set up mysql storage for certain rsyslog input matches?

    - by ylluminate
    I'm draining various logs from Heroku to an rsyslog Linux (Ubuntu) server, and am starting to have a little more to bite off than I can chew in terms of working with my log histories. I need to be able to drill back in time based on more flexible details, and with more flexible access, than the standard syslog file(s) provide. I'm thinking that logging to MySQL may be the correct approach, but how do I set this up so that it pulls only certain log entries into a table, based on an identifier? For example, I see a long hex string identifying each log entry from a certain Heroku app instance. I assume that I can pipe just those into MySQL, versus sending ALL rsyslog input there...

    Could someone please direct me to a resource that walks through setting something like this up, or simply provide some details that can help? I have 15+ years of Unix experience, so I just need some nudging in the right direction; I've not really done a tremendous amount of work with syslog daemons in terms of pooling various servers into one. Additionally, I'd be interested in any log review tools that could make drilling through log arrangements like this more handy for developers.
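
    A sketch of the classic-syntax approach on Ubuntu (the rsyslog-mysql package also creates the Syslog schema; the hex token, database, and credentials below are placeholders): filter on the identifier, hand matches to ommysql, and discard them from the general flow:

        # /etc/rsyslog.d/30-heroku-app.conf
        $ModLoad ommysql
        # entries containing the app's token go to MySQL and stop there
        :msg, contains, "d.f1a2b3c4" :ommysql:localhost,Syslog,rsyslog,PASSWORD
        & ~

    The trailing & ~ discards messages already matched by the preceding line so they don't also land in the flat files; drop it to keep both copies.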

    Read the article

  • KVM and libvirt: How to configure a new disc device to an existing VM?

    - by initall
    I've got an Ubuntu 9.04 server running two VMs. In /etc/libvirt/qemu/machine1.xml two disk devices are defined like this:

        <devices>
          <emulator>/usr/bin/kvm</emulator>
          <disk type='file' device='disk'>
            <source file='/vserver/machine1/disk0.qcow2'/>
            <target dev='hda' bus='ide'/>
          </disk>
          <disk type='file' device='disk'>
            <source file='/vserver/machine1/disk1.qcow2'/>
            <target dev='hdb' bus='ide'/>
          </disk>

    I need more storage space in at least one of the devices, and thought about adding a third hdc device by simply adding one in the same style as above and reorganising my mount structure (the virtual sizes of the current qcow2 files are unfortunately limited). My problem is that reloading libvirtd and restarting the VM do not result in a new visible device (checked with fdisk). I'm aware of extending an existing qcow2 file (converting to raw format, cat-ing/adding the new one, using something like gparted) -- but only as a last resort. Hopefully it's something very simple I'm missing?
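
    One likely reason the device never appears, plus the usual route around it, sketched below (disk name and size are illustrative): libvirt reads /etc/libvirt/qemu/*.xml only when the domain is defined, and hand edits there are easily overwritten, so changes normally go through virsh. Note also that the IDE bus tops out at four devices and hdc is often claimed by the virtual CD-ROM:

        qemu-img create -f qcow2 /vserver/machine1/disk2.qcow2 50G
        virsh attach-disk machine1 /vserver/machine1/disk2.qcow2 hdc \
            --driver qemu --subdriver qcow2 --persistent

    --persistent needs a newer libvirt than 9.04 shipped; there, the equivalent is virsh dumpxml machine1 > m.xml, add the disk element, virsh define m.xml, then fully shut down and start the VM.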

    Read the article

  • CentOS 5 VPN Server won't work

    - by Miro Markarian
    I have a CentOS 5 server configured to be both an L2TP server and a PPTP server, plus a RADIUS server hosting the AAA. My problem is that L2TP works great and I can connect to it, but I can't connect via PPTP: every time, it ends up with error #619 when it gets to the verifying-username-and-password stage. Here is the log from /var/log/messages:

        Dec 17 07:40:02 serverdl pptpd[8570]: CTRL: Client 5.52.247.62 control connection started
        Dec 17 07:40:03 serverdl pptpd[8570]: CTRL: Starting call (launching pppd, opening GRE)
        Dec 17 07:40:03 serverdl pppd[8571]: Plugin radius.so loaded.
        Dec 17 07:40:03 serverdl pppd[8571]: RADIUS plugin initialized.
        Dec 17 07:40:03 serverdl pppd[8571]: Plugin radattr.so loaded.
        Dec 17 07:40:03 serverdl pppd[8571]: RADATTR plugin initialized.
        Dec 17 07:40:03 serverdl pppd[8571]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
        Dec 17 07:40:03 serverdl pppd[8571]: pptpd-logwtmp: $Version$
        Dec 17 07:40:03 serverdl pppd[8571]: pppd 2.4.4 started by root, uid 0
        Dec 17 07:40:03 serverdl pppd[8571]: Using interface ppp0
        Dec 17 07:40:03 serverdl pppd[8571]: Connect: ppp0 <--> /dev/pts/2
        Dec 17 07:40:03 serverdl pptpd[8570]: GRE: read(fd=7,buffer=80515e0,len=8260) from network failed: status = -1 error = Protocol not available
        Dec 17 07:40:03 serverdl pptpd[8570]: CTRL: GRE read or PTY write failed (gre,pty)=(7,6)
        Dec 17 07:40:03 serverdl pppd[8571]: Modem hangup
        Dec 17 07:40:03 serverdl pppd[8571]: Connection terminated.
        Dec 17 07:40:03 serverdl pppd[8571]: Exit.
        Dec 17 07:40:03 serverdl pptpd[8570]: CTRL: Client 5.52.247.62 control connection finished

    Just yesterday, before I had set up L2TP, PPTP was working great; then I uninstalled it, removed all its config from /etc/*, installed L2TP first, and installed PPTP after it. That's when it stopped working. I believe it must be a radiusclient issue, because both the PPTP and L2TP services use RADIUS to authenticate. The other thing I suspect is the IP assignment for the PPP interfaces -- is the following config right?

    For L2TP:

        localip 10.10.10.1
        remoteip 10.10.10.2-254

    For PPTP:

        localip 10.10.9.1
        remoteip 10.10.9.2-254
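
    The GRE read failure in the log is the usual signature of this, sketched below: PPTP needs IP protocol 47 (GRE) end to end, not just TCP/1723, plus the kernel's PPTP connection-tracking helpers when NAT or a strict firewall sits in the path (module names differ by kernel generation -- CentOS 5's 2.6.18 uses the ip_* names):

        modprobe ip_conntrack_pptp    # newer kernels: nf_conntrack_pptp
        modprobe ip_nat_pptp          # newer kernels: nf_nat_pptp
        iptables -A INPUT -p gre -j ACCEPT
        iptables -A INPUT -p tcp --dport 1723 -j ACCEPT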

    Read the article

  • checksecurity / setuid changes, is this a bug or did somebody break in?

    - by Fabian Zeindl
    I received a mail from checksecurity on my Ubuntu 12.04 server with the following content:

        --- setuid.today      2012-06-03 06:48:09.892436281 +0200
        +++ /var/log/setuid/setuid.new.tmp    2012-06-17 06:47:51.376597730 +0200
        @@ -30,2 +30,2 @@
        - 131904 4755 2 root root 71280 Wed May 16 07:23:08.0000000000 2012 ./usr/bin/sudo
        - 131904 4755 2 root root 71280 Wed May 16 07:23:08.0000000000 2012 ./usr/bin/sudoedit
        + 143967 4755 2 root root 71288 Fri Jun  1 05:53:44.0000000000 2012 ./usr/bin/sudo
        + 143967 4755 2 root root 71288 Fri Jun  1 05:53:44.0000000000 2012 ./usr/bin/sudoedit
        @@ -42 +42 @@
        - 130507 666 1 root root 0 Sat Jun  2 18:04:57.0752979385 2012 ./var/spool/postfix/dev/urandom
        + 130507 666 1 root root 0 Mon Jun 11 08:47:16.0919802556 2012 ./var/spool/postfix/dev/urandom

    At first I was worried, then I realized that the change was actually two weeks ago -- I think there was a sudo update back then. Since checksecurity runs from /etc/cron.daily, I wondered why I only got the email now. I looked into /var/log/setuid/ and found the following files:

        total 32
        -rw-r----- 1 root adm   816 Jun 17 06:47 setuid.changes
        -rw-r----- 1 root adm   228 Jun  3 06:48 setuid.changes.1.gz
        -rw-r----- 1 root adm   328 May 27 06:47 setuid.changes.2.gz
        -rw-r----- 1 root root 1248 May 20 06:47 setuid.changes.3.gz
        -rw-r----- 1 root adm  4473 Jun 17 06:47 setuid.today
        -rw-r----- 1 root adm  4473 Jun  3 06:48 setuid.yesterday

    The obvious thing that confuses me is that the file setuid.yesterday is not from yesterday (= Jun 16). Is this a bug?
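
    Given that the diff matches a routine package update (a new sudo binary dated Jun 1), one way to separate update from intrusion, sketched below, is to verify the installed files against the Ubuntu archive checksums and compare the binary's date with the package changelog:

        apt-get install debsums
        debsums sudo                      # "OK" lines mean the binaries match the package
        apt-get changelog sudo | head -n 20

    As for setuid.yesterday being dated Jun 3: its date matches the previous setuid.changes rotation, which suggests checksecurity only refreshed its baseline when it last recorded a change -- a guess worth verifying by reading the short checksecurity script itself.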

    Read the article

  • How to create a init.d script for openssh-server which was compiled and installed from source using configure + make + make install?

    - by Patrick L
    I have installed openssh-server on my Ubuntu PC using apt-get install openssh-server. The version is 5.9. Now I would like to compile and install openssh-server version 6.2 from source. I have successfully downloaded the source and run the following commands:

        ./configure
        make
        make install

    I found that the new version was installed into /usr/local/sbin/, while the old version is in /usr/sbin/. The service script in /etc/init.d/ssh still points to /usr/sbin/, and the old openssh-server (v5.9) is still running. My questions: How can I replace the old openssh-server with the new one I have just compiled and installed? How can I create an init.d script to start and stop the new openssh-server manually, and how do I start it on boot? When I install openssh-server using apt-get install, the config files go into /etc/ssh/; if I compile and install from source, where is the config file? And if I compile openssh-server from source but install the openssh-client package with apt-get, will there be any config file conflicts? Thanks.
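
    On the config question: unless ./configure was given --sysconfdir, a source build defaults to the /usr/local prefix and reads /usr/local/etc/sshd_config, so it won't collide with the packaged client's files in /etc/ssh/. For the service, a minimal init.d sketch (the ssh-local name, pid file, and paths are illustrative):

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          ssh-local
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: source-built OpenSSH server in /usr/local
        ### END INIT INFO
        SSHD=/usr/local/sbin/sshd
        PIDFILE=/var/run/sshd-local.pid
        case "$1" in
          start)   start-stop-daemon --start --pidfile "$PIDFILE" \
                     --exec "$SSHD" -- -o PidFile="$PIDFILE" ;;
          stop)    start-stop-daemon --stop --pidfile "$PIDFILE" ;;
          restart) "$0" stop; sleep 1; "$0" start ;;
          *)       echo "Usage: $0 {start|stop|restart}" >&2; exit 1 ;;
        esac

    Install it as /etc/init.d/ssh-local, chmod +x it, register it with update-rc.d ssh-local defaults, and disable the packaged daemon (update-rc.d ssh disable) so the two don't fight over port 22.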

    Read the article

  • Apache2 Re-Routing from Domain Name to Internal IP Address

    - by Richard Grey
    The problem I am having is that when someone goes to my domain name (example.co.uk), Apache seems to re-route the request to the internal IP address of the server, i.e. 192.168.0.52. My Apache2 default sites-enabled file is as follows:

        ServerAdmin [email protected]
        ServerName trusteeguard.co.uk
        ServerAlias www.trusteeguard.co.uk
        DocumentRoot /var/www
        <Directory />
            Options FollowSymLinks
            AllowOverride All
        </Directory>
        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride All
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        ErrorLog /var/log/apache2/trusteeguard-error.log
        # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
        LogLevel warn
        CustomLog /var/log/apache2/trusteeguard-access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>

    This is an Ubuntu box if that is any help ;)
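
    One way to see who issues the redirect and to where, as a sketch: if the Location header below shows 192.168.0.52, the redirect is being generated rather than routed. A trailing-slash directory redirect built by Apache uses ServerName when UseCanonicalName is On (it is Off by default on Ubuntu, which uses the client's Host header instead), while a redirect to the internal address frequently comes from the application having its base URL set to that IP:

        curl -sI http://trusteeguard.co.uk/ | grep -i '^Location'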

    Read the article
