Search Results

Search found 24965 results on 999 pages for 'linux kvm'.

  • How to record desktop session with sound on Moblin?

    - by Moblin Newbie
    I have tried to record my desktop session with sound on a netbook running Moblin, but I can't record any audio: xvidcap just reports an error accessing /dev/dsp. Are there options I should pass to xvidcap, or should I use some other recording application?

    Update: I am using the latest xvidcap (1.1.7) and have read the FAQ. Unfortunately, Moblin's gnome-volume-control looks nothing like the one linked from the xvidcap FAQ; as far as I can tell, there is no way to set or even see the details that screenshot shows. alsamixer shows that PulseAudio is in use, if that gives anyone any clues. The device is an Acer Aspire One.
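    A couple of hedged things to try, assuming a stock PulseAudio setup (xvidcap speaks only OSS, hence the /dev/dsp error):

        # padsp is PulseAudio's OSS wrapper: it intercepts /dev/dsp access
        # and routes it to the PulseAudio daemon, which often fixes
        # OSS-only recorders like xvidcap:
        padsp xvidcap

        # Alternatively, capture screen plus audio with ffmpeg (assumes an
        # ffmpeg build with x11grab and the ALSA "pulse" plugin; the frame
        # size matches a typical Aspire One panel -- adjust as needed):
        ffmpeg -f x11grab -r 25 -s 1024x600 -i :0.0 \
               -f alsa -i pulse \
               output.avi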

  • How do you create virtual folders from saved searches?

    - by Jérôme Radix
    I would like to have, on unix-like platforms, the same functionality as Windows 7 library folders (aka virtual folders) in Windows Explorer. GNOME's Nautilus does that kind of virtual folder through saved searches, but I want a system-wide solution, not a GNOME-wide one. Is there a tool that creates virtual folders from the concatenation of multiple search queries (the results of multiple find commands)? The solution should index files for better performance, and you should be able to define the default folder for copy operations. I assume a solution to this kind of problem would use FUSE, but I can't see a complete solution to this task among existing FUSE applications.
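    Not an answer to the indexing or FUSE requirements, but as a crude system-wide stopgap, a cron-driven symlink farm rebuilt from saved find queries behaves like a read-only virtual folder. All paths and the query below are made up for illustration:

        #!/bin/sh
        # Rebuild a "virtual folder" of symlinks from one saved search.
        # Name collisions are ignored (last match wins) to keep it short.
        VFOLDER="$HOME/VirtualFolders/recent-pdfs"
        rm -rf "$VFOLDER"
        mkdir -p "$VFOLDER"
        find "$HOME" -name '*.pdf' -mtime -30 -print0 |
            xargs -0 -I{} ln -sf '{}' "$VFOLDER/"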

  • GitLab on a fresh Ubuntu 13 EC2 instance

    - by Polly
    I've spun up a fresh Amazon EC2 micro instance running Ubuntu 13 to use as a GitLab server. I know the specs are a little low, but it should serve well for my purposes. It has an elastic (static) IP address for which I have created an A record, git.mydomain.com. The first thing I did on the instance was add 1GB of swap to keep it happy from a memory perspective. I then set the hostname of the box to git.mydomain.com and followed https://github.com/gitlabhq/gitlabhq/blob/6-2-stable/doc/install/installation.md to the letter. Everything seems to have worked, except for the web server side of things. Running gitlab:check shows the following:

        Checking Environment ...
        Git configured for git user? ... yes
        Has python2? ... yes
        python2 is supported version? ... yes
        Checking Environment ... Finished
        Checking GitLab Shell ...
        GitLab Shell version >= 1.7.4 ? ... OK (1.7.4)
        Repo base directory exists? ... yes
        Repo base directory is a symlink? ... no
        Repo base owned by git:git? ... yes
        Repo base access is drwxrws---? ... yes
        update hook up-to-date? ... yes
        update hooks in repos are links: ... can't check, you have no projects
        Running /home/git/gitlab-shell/bin/check
        Check GitLab API access: /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED)
            from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `open'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `block in connect'
            from /usr/local/lib/ruby/2.0.0/timeout.rb:52:in `timeout'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:877:in `connect'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:862:in `do_start'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:851:in `start'
            from /home/git/gitlab-shell/lib/gitlab_net.rb:62:in `get'
            from /home/git/gitlab-shell/lib/gitlab_net.rb:29:in `check'
            from /home/git/gitlab-shell/bin/check:11:in `<main>'
        gitlab-shell self-check failed
        Try fixing it:
        Make sure GitLab is running;
        Check the gitlab-shell configuration file:
        sudo -u git -H editor /home/git/gitlab-shell/config.yml
        Please fix the error above and rerun the checks.
        Checking GitLab Shell ... Finished
        Checking Sidekiq ...
        Running? ... yes
        Number of Sidekiq processes ... 1
        Checking Sidekiq ... Finished
        Checking GitLab ...
        Database config exists? ... yes
        Database is SQLite ... no
        All migrations up? ... yes
        GitLab config exists? ... yes
        GitLab config outdated? ... no
        Log directory writable? ... yes
        Tmp directory writable? ... yes
        Init script exists? ... yes
        Init script up-to-date? ... yes
        projects have namespace: ... can't check, you have no projects
        Projects have satellites? ... can't check, you have no projects
        Redis version >= 2.0.0? ... yes
        Your git bin path is "/usr/bin/git"
        Git version >= 1.7.10 ? ... yes (1.8.3)
        Checking GitLab ... Finished

    It seems like I'm very nearly there. Searching on this error I have only found advice that unfortunately hasn't helped; I'm not using any kind of SSL setup, which is what most of the posts I found were about. I have tried appending "127.0.0.1 git.mydomain.com" to /etc/hosts and giving the instance a reboot, but there was no change. My config/gitlab.yml has host: git.mydomain.com in it, and my gitlab-shell/config.yml has gitlab_url: "http://git.mydomain.com/" in it. I'm sure I'm missing something simple, but I've been through every relevant link I can find and have had no positive results; thank you in advance for any help!
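    The Errno::ECONNREFUSED in the shell self-check means gitlab-shell could not open an HTTP connection to the GitLab web application itself, so the Unicorn app server is the first thing to verify. A hedged sequence (port 8080 is the 6.x default in unicorn.rb; confirm against your install):

        # Is Unicorn actually up and listening?
        sudo netstat -tlnp | grep -E ':8080|unicorn'
        curl -I http://127.0.0.1:8080/

        # If Unicorn answers locally, pointing gitlab-shell straight at it
        # sidesteps DNS and EC2 hairpin-NAT issues:
        sudo -u git -H editor /home/git/gitlab-shell/config.yml
        #   gitlab_url: "http://127.0.0.1:8080/"
        sudo service gitlab restart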

  • Rails /tmp/cache/assets permissions issue using Debian virtual machine hosted on OS X Lion

    - by Jim
    I am running Parallels Desktop 7 on OS X Lion. I have a VM with Debian installed, and inside that VM I set up a Rails development environment. I am using Parallels Tools to share my OS X home directory with the VM; the goal is to run the Rails server on the VM but host the files on OS X (so they are automatically backed up, and so I can use tools like TextMate to develop with). Everything seems to work with the shared directory: my Debian user can read, write, and execute files.

    However, when I cloned a recent Rails project from Git, I got an error when it tried to compile the CSS assets. My symptoms are exactly the same as in this question: http://stackoverflow.com/questions/7556774/rails-sprocket-error-compiling-css-assest-chown-issue

    I believe this is permissions-based, but it is really weird. My entire Rails project directory has permissions set to 777 and my Debian user owns it. If I navigate into /tmp/cache/assets, those permissions are the same. However, the three-character directories Rails creates there (DCE, DA1, D05, etc.) are being created without write permission! If I refresh the Rails page a few times, about 4 or 5 (with Rails creating new three-character directories every time), eventually one of the directories is created with the proper 777 permissions and everything works. This persists until I change the CSS files and Rails has to recompile.

    Does anyone have any idea what might be going on here? I can't fathom why it creates temp directories with incorrect permissions, or why the good permissions kick in after a few refreshes. It definitely seems to be an issue with the share, since the project works fine if I move it into a different directory on the VM. On the OS X side I've given the shared folder 777 permissions as well, but no dice. Any ideas?

    Update: I've found that the number of refreshes needed before it works is not random; it matches the number of assets being compiled. For example, if I edit one of my CSS files and there are four CSS files in app/assets/stylesheets, I have to refresh four times before the app finally works without the "operation not permitted" error.
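    Since the problem follows the Parallels share, one hedged workaround is to leave the code on the share but relocate Rails' tmp/ onto the VM's native filesystem, where mode bits behave normally. Paths below are examples:

        # Move the cache off the shared filesystem and symlink it back:
        cd ~/shared/my_rails_app           # project root on the Parallels share
        mv tmp /home/jim/my_rails_app-tmp  # native ext3/ext4 inside the VM
        ln -s /home/jim/my_rails_app-tmp tmp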

  • When does `cron.daily` run?

    - by warren
    When do entries in cron.daily (and .weekly and .hourly) run, and is it configurable? I haven't found a definitive answer to this and am hoping there is one. I'm running RHEL5 and CentOS 4, but answers for other distros/platforms would be great, too.
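    On RHEL/CentOS of that vintage, the schedule lives in /etc/crontab and is ordinary, editable cron syntax; the stock file looks like this (so cron.daily runs at 04:02 local time). Newer distros often hand these directories to anacron instead, in which case /etc/anacrontab is the place to look:

        # Stock /etc/crontab on RHEL5/CentOS (minute hour day month weekday):
        01 * * * * root run-parts /etc/cron.hourly
        02 4 * * * root run-parts /etc/cron.daily
        22 4 * * 0 root run-parts /etc/cron.weekly
        42 4 1 * * root run-parts /etc/cron.monthly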

  • Httpd and LDAP Authentication not working for sub-pages

    - by DavisTasar
    I just recently installed a Nagios implementation, and I'm trying to get LDAP authentication working for httpd on Red Hat. Here is the Apache config from nagios.conf (sanitized, of course):

        ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin"
        <Directory "/usr/local/nagios/sbin">
            #SSLRequireSSL
            Options ExecCGI
            AllowOverride none
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthzLDAPAuthoritative off
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>

        Alias /nagios "/usr/local/nagios/share"
        <Directory /usr/local/nagios/share>
            #SSLRequireSSL
            Options None
            AllowOverride none
            AuthBasicProvider ldap
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthzLDAPAuthoritative off
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>

    The initial authentication works: when you first hit the page, you can log in just fine. However, when you go anywhere else, it prompts for authentication again, fails (re-prompting), and logs this error:

        [Mon Oct 21 14:46:23 2013] [error] [client 172.28.9.30] access to /nagios/cgi-bin/statusmap.cgi failed, reason: verification of user id '<myuseraccount>' not configured, referer: http://<nagiosserver>/nagios/side.php

    I'm almost certain it's a simple flag or option, but I just can't find it, and I don't have much experience with Apache. Any assistance would be greatly appreciated.
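    Worth a close look: the two blocks are not identical. The cgi-bin <Directory> is missing AuthBasicProvider ldap, which matches the symptom exactly -- without it, mod_auth_basic falls back to its default (file) provider, which has no idea how to verify the user. A hedged fix is to mirror the working block:

        <Directory "/usr/local/nagios/sbin">
            ...
            AuthType Basic
            AuthBasicProvider ldap    # the line the working /nagios block has
            ...
        </Directory>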

  • Macvlan based interface pings from host but not from namespace

    - by jtlebi
    My setup: a private network, vboxnet1, 10.0.7.0/24, with one host (Ubuntu desktop) and one VM (Ubuntu server, VirtualBox). Addressing layout:

        HOST:              10.0.7.1
        VM:                10.0.7.101
        VM MAC NAMESPACE:  10.0.7.102

    On the VM, I ran the following commands:

        ip netns add mac                          # create a new namespace
        ip link add link eth0 mac0 type macvlan   # create a new macvlan interface
        ip link set mac0 netns mac

    In the mac namespace, inside the VM:

        ip link set lo up
        ip link set mac0 up
        ip addr add 10.0.7.102/24 dev mac0

    So we basically end up with (like Inception?):

        +------------------------+
        | Host: 10.0.7.1         |
        |                        |
        | +--------------------+ |
        | | VM: 10.0.7.101     | |
        | |                    | |
        | | +----------------+ | |
        | | | NS: 10.0.7.102 | | |
        | | +----------------+ | |
        | +--------------------+ |
        +------------------------+

    What works:

    - ping between Host and VM
    - ping between NS and NS
    - dhclient from the NS

    What does not work:

    - ping between NS and VM
    - ping between NS and Host

    Where I started to go nuts:

    - tcpdump on the host (the real machine) actually shows ARP requests AND replies
    - tcpdump in the NS shows ARP requests sent to the host
    - tcpdump on the VM makes the whole mess work (!) -- pings start getting answers as soon as tcpdump is started on the VM?!

    So, I bet you were eager for it: my question is, how do I make it work? I suspect something is wrong with ARP on the macvlan inside the NS, but I can't figure out what exactly. By the way, I ran the same experiments with the mac0 interface directly on the VM (no namespace) and it worked flawlessly.
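    The tcpdump-on-the-VM clue points at the NIC rather than at ARP: tcpdump switches the VM's interface into promiscuous mode, which lets it receive frames addressed to the macvlan's own, different MAC address; by default, VirtualBox adapters drop such frames. A hedged fix (the VM name is an example) is to allow promiscuous mode on the adapter:

        # Allow the VM's first NIC to receive frames for other MAC
        # addresses -- macvlan interfaces get their own MAC:
        VBoxManage modifyvm "ubuntu-server" --nicpromisc1 allow-all

    The same setting is in the GUI under the adapter's advanced options ("Promiscuous Mode: Allow All"); the VM must be powered off for modifyvm to take effect.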

  • Intermittent apt-get 'no installation candidate' error on fabric deploy

    - by jberryman
    I'm experiencing a strange issue with a fabric script I'm using to bootstrap a server on EC2. I launch a stock Ubuntu 12.04 AMI, wait for it to start, then proceed with:

        with settings(host_string="ubuntu@%s" % i.dns_name, connection_attempts=30):
            sudo('apt-get -qy update')
            sudo('apt-get -qy install --no-install-recommends mdadm')  # don't install postfix
            # etc...

    The apt-get update appears to run fine and gives no errors; however (about 2/3 of the time), installing mdadm throws a "no installation candidate" error. When I ssh into the server and run apt-get install mdadm, I get the same error. If I then run apt-get update by hand, the package installs fine. Any ideas on what might be happening, or how to debug it?
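    A plausible culprit on a fresh EC2 Ubuntu AMI is a race with cloud-init, which rewrites /etc/apt/sources.list (region-local mirrors) on first boot; an apt-get update that runs before that rewrite caches lists for the wrong mirror set. A hedged workaround is to wait for first-boot setup to finish before touching apt (the pgrep test is approximate; Ubuntu 12.04 predates cloud-init's status command):

        # Wait until cloud-init's first-boot work is done, then update:
        while pgrep -f cloud-init >/dev/null; do sleep 5; done
        apt-get -qy update
        apt-get -qy install --no-install-recommends mdadm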

  • Open file in local text editor from within an SSH connection

    - by Sam
    I'm not a vim guy. I'd like to be able to open log files in Sublime Text while in an SSH session in Terminal. Is there a way to do this? I'm thinking there must be a command or something that could copy the file over to a temporary directory on OS X, open it in Sublime Text, and copy it back to the original location through SSH when I save -- similar to how FileZilla does it. I'm on Mac OS X MT; the server I SSH into is running Ubuntu, and I'm using Terminal.
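    One approach that matches that FileZilla-style workflow is sshfs, which mounts the remote filesystem locally over SSH so Sublime Text edits the files "in place" (on OS X this needs the FUSE for OS X package plus sshfs, both installable via Homebrew; paths below are examples):

        # Mount the server's log directory onto a local folder:
        mkdir -p ~/remote-logs
        sshfs user@server:/var/log ~/remote-logs

        # Open it in Sublime Text ("subl" is Sublime's command-line helper):
        subl ~/remote-logs

        # Unmount when done:
        umount ~/remote-logs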

  • Amazon Web Services: Linux environment updated with the latest versions of MySQL, Python, Ruby, and the 3.2 kernel

    Amazon Web Services (AWS) has just rolled out a major update of the Amazon Linux AMI. The Linux operating-system image that runs on the platform now ships the most recent versions of Tomcat, MySQL, Python, GCC, Ruby, and more. This release was built with the primary goal of letting companies migrate to, or remain on, older versions of the tools; organizations can thus run different major versions of applications and programming languages side by side. This allows code that relies on...

  • Fastest reliable way to open the terminal?

    - by meder
    I had SUPER_L (the left Windows key) bound to gnome-terminal, but ever since upgrading from Ubuntu 8.10 (Intrepid) to 9.04, the key binding seems broken -- or perhaps the upgrade reset all the keybindings? It was very handy because I could throw open a terminal with one key (sorry, but Alt-F2 and typing gnome-terminal isn't practical for me). I remember setting it up with xev and some GUI akin to the Win32 registry editor. Anyway, I'm curious what you all use to open a terminal quickly.
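    If memory serves, that registry-editor-like GUI was gconf-editor, and the same binding can be restored from a shell with gconftool-2 via Metacity's run_command slots (the key paths are the stock GNOME 2 ones; whether a bare Super_L keysym is accepted as a binding is worth double-checking in gconf-editor):

        # Point command slot 1 at gnome-terminal and bind it to Super_L:
        gconftool-2 -t string -s /apps/metacity/keybinding_commands/command_1 gnome-terminal
        gconftool-2 -t string -s /apps/metacity/global_keybindings/run_command_1 Super_L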

  • Why doesn't NFS recognize a new UID?

    - by user76177
    I have two servers running RHEL6, with root access to both. The main server, which I will refer to as server, is a database server. The application server, which I will refer to as client, mounts a directory from server via NFS. There is a user, appuser, on both client and server; however, appuser's UID on client is 502, while appuser's UID on server is 506. Both need read and write access to the NFS share, so I made the share owned by appuser on server. Running id appuser on each yields uid=506(appuser).

    Of course, client did not recognize that ownership, since appuser had a different UID there. So I did the following:

    1. Changed the UID of appuser in /etc/passwd on client to 506.
    2. Changed ownership of appuser's $HOME on client back to appuser so that I could log in.

    Now, when I look at the NFS share from the client side, I see that it is owned by 502 -- the OLD UID for appuser on client. I can't change ownership of the NFS share from client, since that volume physically resides on server. I need the share to show appuser as the owner from both server and client. What step have I missed since changing the appuser UID on client? (Note: I have not rebooted client, or anything else.)
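    Assuming the mount is NFSv4 (the RHEL6 default), ownership is mapped by name through rpc.idmapd rather than raw UID, and both the idmapper and nscd cache old mappings. Flushing those caches and re-mounting is a reasonable hedged next step (the share path is an example):

        # On client, flush cached identity mappings:
        service rpcidmapd restart
        service nscd restart        # only if nscd is installed and running

        # Re-mount so file attributes are fetched fresh:
        umount /mnt/share && mount /mnt/share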

  • Apache crashing at random intervals; cannot find a reason in the log files

    - by Nick Downton
    We are having an issue with a VPS running Plesk 9.5 on Ubuntu 8.04. At seemingly random intervals, Apache disappears and needs to be started manually. I have checked the Apache error log, /var/log/messages, and the individual virtual hosts' Apache error files, and cannot find anything that coincides with the time of the failure; dmesg is empty, which is a bit odd. We have also had the psa service go down for no apparent reason while Apache stayed up. I'm at a loss to diagnose this, because none of the log files I can find point to any issue. Are there any others I can look at? Memory usage sits at about 55% (out of 400MB), and it isn't a particularly high-trafficked server. Any pointers as to where else I can find out what is going on would be very much appreciated. -- Nick
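    When Apache dies without leaving anything in its own logs, one hedged move is to let it leave a core dump on the next crash, which at least gives gdb something to inspect (paths follow Debian/Ubuntu conventions; this is a sketch, not a cure):

        # In the Apache config (e.g. /etc/apache2/apache2.conf), add:
        #   CoreDumpDirectory /var/cache/apache2-cores

        mkdir -p /var/cache/apache2-cores
        chown www-data /var/cache/apache2-cores

        # Raise the core size limit for Apache's environment, e.g. by
        # adding "ulimit -c unlimited" to /etc/default/apache2, then:
        /etc/init.d/apache2 restart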

  • OpenVPN + iptables / NAT routing

    - by Mikeage
    I'm trying to set up an OpenVPN VPN which will carry some (but not all) traffic from the clients to the internet via the OpenVPN server. My OpenVPN server has a public IP on eth0 and uses tap0 to create a local network, 192.168.2.x. I have a client that connects from local IP 192.168.1.101 and gets VPN IP 192.168.2.3. On the server, I ran:

        iptables -A INPUT -i tap+ -j ACCEPT
        iptables -A FORWARD -i tap+ -j ACCEPT
        iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    On the client, the default route remains via 192.168.1.1. To point HTTP at 192.168.2.1 instead, I ran:

        ip rule add fwmark 0x50 table 200
        ip route add table 200 default via 192.168.2.1
        iptables -t mangle -A OUTPUT -j MARK -p tcp --dport 80 --set-mark 80

    Now, if I try accessing a website on the client (say, wget google.com), it just hangs. On the server, I can see:

        $ sudo tcpdump -n -i tap0
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on tap0, link-type EN10MB (Ethernet), capture size 96 bytes
        05:39:07.928358 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 558838 0,nop,wscale 5>
        05:39:10.751921 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 559588 0,nop,wscale 5>

    where 74.125.67.100 is the IP it gets for google.com. Why isn't the MASQUERADE working? More precisely, the source shows up as 192.168.1.101 -- shouldn't there be something to indicate that it came from the VPN?

    Edit: some routes from the client:

        $ ip route show table main
        192.168.2.0/24 dev tap0  proto kernel  scope link  src 192.168.2.4
        192.168.1.0/24 dev wlan0  proto kernel  scope link  src 192.168.1.101  metric 2
        169.254.0.0/16 dev wlan0  scope link  metric 1000
        default via 192.168.1.1 dev wlan0  proto static
        $ ip route show table 200
        default via 192.168.2.1 dev tap0
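    The trace itself shows the problem: the client picked its source address (192.168.1.101, from the default wlan0 route) before the fwmark reroute happened, and the server's MASQUERADE rule only matches -s 192.168.2.0/24, so these packets are forwarded without being NATed. Two hedged fixes, on the client and server respectively:

        # On the client: rewrite marked traffic to the tap0 address as it
        # leaves the tunnel interface:
        iptables -t nat -A POSTROUTING -o tap0 -j MASQUERADE

        # Or on the server: masquerade everything leaving eth0, not just
        # packets sourced from 192.168.2.0/24:
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE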

  • cPanel: every URL is being redirected to http://:2083

    - by Frank
    On my cPanel server, I restored about 50 accounts from a crashed cPanel server. All of the sites were working fine, but suddenly, without my changing anything, every site started redirecting to the URL "http://:2083/". There is nothing in the logs, no errors. When I run wget, it says:

        $ wget grinfeld.com.br
        --2012-09-04 13:18:23--  http://grinfeld.com.br/
        Resolving grinfeld.com.br... 198.101.221.254
        Connecting to grinfeld.com.br|198.101.221.254|:80... connected.
        HTTP request sent, awaiting response... 301 Moved
        Location: https://:2083/ [following]
        https://:2083/: Invalid host name.
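    Port 2083 is cPanel's own SSL port, and a redirect there with an empty hostname looks like requests falling through to cPanel's default/unknown-domain vhost -- plausible if the restored accounts' vhost entries never made it into Apache's config. A hedged first step is regenerating httpd.conf from cPanel's stored account data:

        # Rebuild Apache's configuration from cPanel's account metadata:
        /usr/local/cpanel/scripts/rebuildhttpdconf
        service httpd restart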

  • Getting permission denied error

    - by JM4
    On my Media Temple DV 4.0 server, I am getting permission-denied errors:

        -bash: cd: httpdocs: Permission denied

    If I switch from my login user to root (sudo su, or su -), I can access the directory without any issue. This is just my site's files directory, so I'm not sure why I'm being denied. Additionally, I added my user to the sudoers file (via visudo) with:

        user ALL=(ALL) ALL

    Any suggestions as to what else could be the issue?
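    Note that sudo rights don't change ordinary filesystem checks. On a stock Plesk box, httpdocs is typically owned by the domain's system user and the psacln group, with no world access, so a hedged fix is adding your login user to Plesk's web-content groups (group names per default Plesk installs):

        # Add the login user to Plesk's web groups, then log out and back in:
        usermod -a -G psacln,psaserv youruser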

  • How does Slackware handle security updates?

    - by Abtin Forouzandeh
    I use a distribution that uses apt for package management and am accustomed to letting apt grab the list of package changes and install all the needed security updates. I've been considering migrating to Slackware; however, it seems Slackware does not have a comparable package-management workflow. How would I learn about new security updates? Is the only way to monitor http://www.slackware.com/security/?
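    For what it's worth, announcements also go out on the slackware-security mailing list, and recent Slackware releases ship slackpkg, which applies the patches/ tree much as apt applies a security pocket. A hedged sketch of the standard workflow:

        # One-time setup: uncomment a mirror in /etc/slackpkg/mirrors. Then:
        slackpkg update        # refresh the package and patch lists
        slackpkg upgrade-all   # apply pending updates from patches/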

  • Forward all traffic from one IP to another IP on OS X

    - by Josh
    This is related to another question I just asked. I have two IP addresses on my iMac that I want to "bridge" -- I'm not sure what the proper terminology is, so here's the situation. My iMac has a FireWire connection to my laptop and an Ethernet connection to the rest of my office. My laptop has an IP of 192.168.100.2 (on the FireWire interface). My iMac has an IP of 192.168.100.1 on the FireWire interface, and two IPs, 10.1.0.6 and 10.1.0.7, on its Ethernet interface. I want to forward all traffic coming in from 192.168.100.2 on my OS X machine to go out on IP 10.1.0.7, and vice versa -- can this be done? I assume I would use the ipfw command. Essentially I want to "bridge" the FireWire network to the Ethernet network so my laptop can see all the machines on the 10.1 network, and all those machines can see my laptop at 10.1.0.7. Is this possible?
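    What's described is NAT rather than bridging, and on OS X of the ipfw era the usual pieces were natd plus an ipfw divert rule (newer releases replace this stack with pf). A rough sketch, assuming en0 is the Ethernet interface -- treat the exact flags as something to verify against the natd/ipfw man pages:

        # 1. Allow the Mac to forward packets between interfaces:
        sysctl -w net.inet.ip.forwarding=1

        # 2. NAT traffic onto the Ethernet side, presenting the second IP:
        natd -interface en0 -alias_address 10.1.0.7

        # 3. Send en0 traffic through natd:
        ipfw add 100 divert natd all from any to any via en0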

  • Taking an image backup of an entire server?

    - by WarDoGG
    I am currently using a dedicated server for my hosting needs. However, the costs are too high, and I would like to suspend everything until I work out my business strategy again. Is there a way I can take a complete backup of the filesystem and run it in VMware? I cannot just copy the filesystem, because there are lots of tools installed and changes to the server configuration files that I myself don't know about (they were made by the developers). I need a snapshot of the entire disk image, with everything installed exactly as it is, because for development I need to work on this copy in VMware or VirtualBox. Is it possible to take a full image copy, and how do I do it?
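    A hedged sketch of the raw-image route: image the whole disk over SSH (ideally from a rescue/live environment so the filesystem is quiescent), then convert the image for VMware. Device names and hostnames below are examples:

        # Pull a raw image of the server's disk to your workstation:
        ssh root@server 'dd if=/dev/sda bs=1M' | dd of=server.img bs=1M

        # Convert the raw image into a VMware-compatible disk:
        qemu-img convert -f raw -O vmdk server.img server.vmdk

    The resulting server.vmdk can then be attached to a new VM in VMware (or converted similarly for VirtualBox).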

  • Cloning OpenVZ container

    - by Tiffany Walker
    I have an OpenVZ container on one host, and I would like to clone it over to my server; both run SolusVM. I only have root access to my server and would like to host the container there now. Can I use rsync to clone the drive while the OS is running on both, using a command like this?

        rsync -uazPx --exclude='/boot' --exclude='/proc' --exclude='/dev' \
              --exclude='/lib' --exclude='/tmp' --exclude='/var/lock' \
              / root@1.1.1.1:/

    Are there any other areas I should probably not copy over?
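    For comparison, a hedged two-pass variant: /sys (and /run, on newer templates) is normally excluded as well, excluding /lib would likely leave the target broken, and a second rsync with the container stopped catches anything that changed mid-copy:

        # Pass 1, container running (the bulk of the data):
        rsync -aHAX --numeric-ids \
              --exclude='/proc' --exclude='/sys' --exclude='/dev' \
              --exclude='/tmp' --exclude='/var/lock' \
              / root@1.1.1.1:/

        # Pass 2: stop the container and run the same command again for a
        # consistent final delta, then start it on the new node.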

  • Galera install failure on Fedora 18

    - by ehime
    I've been trying to reinstall MariaDB and have been encountering multiple issues:

        $ yum install MariaDB-Galera-server
        Error: Package: MariaDB-Galera-server-5.5.29-1.i386 (mariadb)
               Requires: galera
               Available: galera-23.2.4-1.rhel5.i386 (mariadb)
                   galera = 23.2.4-1.rhel5
         You could try using --skip-broken to work around the problem
         You could try running: rpm -Va --nofiles --nodigest

    There is a requirement that libssl.so.6 and libcrypto.so.6 be installed; these DO show up in my /lib64 and /lib, though as symlinks:

        /usr/lib
        -rwxr-xr-x  1 root root 1356700 Nov 23  2010 libcrypto.so.0.9.8e
        lrwxrwxrwx  1 root root      19 Jun 28 12:03 libcrypto.so.6 -> libcrypto.so.0.9.8e
        -rwxr-xr-x. 1 root root  394272 Mar 18 14:22 libssl.so.1.0.1e
        lrwxrwxrwx  1 root root      16 Jun 28 12:03 libssl.so.6 -> libssl.so.0.9.8e

        /usr/lib64
        -rwxr-xr-x  1 root root 1849680 Mar 18 14:21 libcrypto.so.1.0.1e
        lrwxrwxrwx  1 root root      26 Jun 28 11:54 libcrypto.so.6 -> /lib64/libcrypto.so.1.0.1e
        -rwxr-xr-x  1 root root  421712 Mar 18 14:21 libssl.so.1.0.1e
        lrwxrwxrwx  1 root root      23 Jun 28 11:54 libssl.so.6 -> /lib64/libssl.so.1.0.1e

    So the deps SHOULD be met. Trying yum install galera returns this:

        Resolving Dependencies
        --> Running transaction check
        ---> Package galera.i386 0:23.2.4-1.rhel5 will be installed
        --> Restarting Dependency Resolution with new changes.
        --> Running transaction check
        ---> Package galera.i386 0:23.2.4-1.rhel5 will be installed
        --> Finished Dependency Resolution

    No errors, but no install either...? Let's try wget and rpm'ing the package instead, I guess:

        $ wget https://launchpad.net/galera/2.x/23.2.4/+download/galera-23.2.4-1.rhel5.x86_64.rpm
        $ rpm -ivh galera-23.2.4-1.rhel5.x86_64.rpm

    This issues the dreaded error:

        Failed dependencies:
            libcrypto.so.6()(64bit) is needed by galera-23.2.4-1.rhel5.x86_64
            libssl.so.6()(64bit) is needed by galera-23.2.4-1.rhel5.x86_64

    But we saw above that these libraries are there. What's going on? Is openssl not installed?

        $ yum install openssl
        Loaded plugins: langpacks, presto, refresh-packagekit
        Package 1:openssl-1.0.1e-4.fc18.x86_64 already installed and latest version
        Nothing to do

    It's there... so what's going on, Fedora?
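    Two things are plausibly going on here. First, RPM resolves dependencies against the package database, not the filesystem, so hand-made libssl.so.6 symlinks never satisfy "libssl.so.6()(64bit)"; on Fedora that provide comes from the openssl098e compat package. Second, the repo is offering i386 MariaDB/galera packages while the hand-downloaded galera RPM is x86_64 -- the architectures need to match. A hedged sequence:

        # Install the compat OpenSSL that actually Provides libssl.so.6()(64bit):
        yum install openssl098e

        # Then install a galera build matching the MariaDB arch (x86_64 here):
        rpm -ivh galera-23.2.4-1.rhel5.x86_64.rpm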

  • How can I set deadline as the I/O scheduler for USB Flash devices by using udev rules?

    - by ????
    I have set CFQ as the default I/O scheduler, and I often get bad performance when writing data to a flash device. This is resolved if I use deadline as the I/O scheduler for USB flash devices, but I can't keep changing the scheduler manually. I think writing udev rules is a good idea; can someone please write the rules for me? I want:

    1. When I plug in a USB device, detect the type of the device.
    2. If it is a portable USB hard disk, do nothing. (I think a device with more than one partition is always a portable hard disk.)
    3. If it is a USB flash device, set deadline as its scheduler.
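    A minimal sketch of such a rule, saved as something like /etc/udev/rules.d/60-usb-scheduler.rules. It keys on ID_BUS=usb rather than partition count (which udev can't easily evaluate at add time) and uses the non-rotational flag to separate flash from spinning USB disks -- note that cheap flash sticks sometimes misreport rotational=1, so test against your actual devices:

        # Set the deadline elevator for non-rotational USB block devices:
        ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd[a-z]", \
            ENV{ID_BUS}=="usb", ATTR{queue/rotational}=="0", \
            ATTR{queue/scheduler}="deadline"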
