Search Results

Search found 27515 results on 1101 pages for 'embedded linux'.


  • Entire filesystem restore from rdiff-backup snapshot

    - by atmosx
    I'm trying to make a complete system restore from an rdiff-backup. The command line for the backup was:

        rdiff-backup --exclude-special-files --exclude /tmp --exclude /mnt --exclude /proc --exclude /sys / /mnt/backup/ebox/

    I created a new partition, mounted it at /mnt/gentoo, and did:

        rdiff-backup -r /mnt/vol2 /mnt/gentoo

    However, when I try to chroot into this system (following Gentoo's manual, which means mounting /dev and /proc), I get the following error:

        chroot: failed to run command `/bin/bash': No such file or directory

    All this takes place on a Parallels (virtual machine) Debian installation. Any ideas on how to proceed in order to fully restore the system? Best regards.
    PS: /mnt/gentoo/bin/bash works fine if I execute it directly, all files and permissions are in place, and rdiff-backup seems to work just fine. However, the system can neither boot (it exits with a kernel panic - cannot find init) nor be chrooted.
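
    A common cause of this chroot error is that /bin/bash exists but its dynamic linker or shared libraries were not restored (note that --exclude-special-files can drop device nodes and symlinks the system needs at boot). A minimal diagnostic sketch, assuming the restore is mounted at /mnt/gentoo as above:

        # Check whether bash's interpreter and libraries made it into the restore;
        # "file" prints the dynamic-linker path the binary expects.
        file /mnt/gentoo/bin/bash
        ls -l /mnt/gentoo/lib/ld-* /mnt/gentoo/lib64/ld-* 2>/dev/null
        # Mount the virtual filesystems the Gentoo handbook expects before chrooting:
        mount -t proc proc /mnt/gentoo/proc
        mount --rbind /dev /mnt/gentoo/dev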

    Read the article

  • FreeNAS/ZFS/Raid-Z slightly different disks

    - by muskratt
    I'm considering using FreeNAS and "recycling" some of my older 1 TB disks. Two are the exact same Western Digital model, another is a Seagate, and the fourth is a Samsung. Since not all disks are equal, on my Windows-based servers I typically create my arrays 1 GB undersized, so that a replacement disk can't turn out to be too small. Dell is notorious for sending replacement SATA disks of a different brand (knock on wood, no problems yet). Since drives of the same nominal size can vary by a few MBs, is there a way to make FreeNAS/ZFS/RAID-Z work the same way I do it for my Windows-based servers above? Thanks
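
    One approach, as a sketch rather than FreeNAS-specific advice: instead of giving ZFS the raw disks, create an identical fixed-size partition on each drive, slightly below the smallest disk's capacity, and build the RAID-Z vdev on the partitions. Assuming a FreeBSD/FreeNAS shell and hypothetical device names ada1-ada4:

        # Partition each disk with a GPT label and an identical, undersized slice.
        for d in ada1 ada2 ada3 ada4; do
            gpart create -s gpt $d
            gpart add -t freebsd-zfs -s 930G -l tank-$d $d   # 930G is an assumed safe size
        done
        # Build the pool on the equal-sized partitions rather than the whole disks.
        zpool create tank raidz gpt/tank-ada1 gpt/tank-ada2 gpt/tank-ada3 gpt/tank-ada4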

    Read the article

  • Hung Java JVM failing to respond to kill -3

    - by Hans
    I have a Java VM that is hanging "randomly". I quote the "randomly" bit because there is obviously a reason the VM is hanging, but the hang does not occur periodically. We have the same software running in different customer environments, and in those environments the JVM is not hanging. When the hang occurs, the process still exists but shows zero CPU utilization. I then attempt to execute kill -3, and the kill command hangs; no JVM thread dump is produced. I have spent time instrumenting the code to periodically log the thread stack traces, hoping to catch the JVM in a state that would indicate where the issue lies, but so far this attempt has not borne much fruit. Unfortunately I have not been able to reproduce this issue in my lab environment, so I am limited by what can be done at the customer site. The OSes in question are Red Hat Enterprise Linux 5.4 and SUSE 10, running Java version 1.6.0_05-b13. Has anyone had this problem? Any ideas on why kill -3 is failing to produce a Java thread dump? Thanks!
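
    When kill -3 gets no response (the in-process signal handler thread itself may be wedged), a forced dump from outside the JVM sometimes still works. A sketch, assuming the JDK's troubleshooting tools are on the box and <pid> stands in for the hung process:

        # jstack -F uses the debugger interface instead of the in-process
        # SIGQUIT handler, so it can dump threads even when kill -3 is ignored.
        jstack -F <pid> > threads.txt
        # Failing that, grab a core and inspect it offline:
        gcore <pid>                              # gdb helper; writes core.<pid>
        jstack $JAVA_HOME/bin/java core.<pid>
        # Also check what the process is blocked on at the kernel level:
        grep State /proc/<pid>/status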

    Read the article

  • Mount Docker container contents in host file system

    - by dflemstr
    I want to be able to inspect the contents of a Docker container (read-only). An elegant way of doing this would be to mount the container's contents in a directory. I'm talking about mounting the contents of a container on the host, not about mounting a folder on the host inside a container. I can see that there are two storage drivers in Docker right now: aufs and btrfs. My own Docker install uses btrfs, and browsing to /var/lib/docker/btrfs/subvolumes shows me one directory per Docker container on the system. This is however an implementation detail of Docker and it feels wrong to mount --bind these directories somewhere else. Is there a proper way of doing this, or do I need to patch Docker to support these kinds of mounts?
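
    If a live mount isn't strictly required, a driver-agnostic way to get a read-only view is to export the container's filesystem rather than reaching into /var/lib/docker. A sketch:

        # Stream the container's entire filesystem out as a tar archive ...
        docker export <container-id> | tar -tvf - | less    # just list the contents
        # ... or unpack it into a scratch directory for browsing:
        mkdir /tmp/container-root
        docker export <container-id> | tar -xf - -C /tmp/container-root
        # docker diff shows only what changed relative to the image:
        docker diff <container-id>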

    Read the article

  • Setting XFCE terminal PS1 value and making it permanent

    - by Matt
    I'm trying to add the value PS1='\u@\h: \w\$ ' to my terminal in XFCE. I added the line to (what I think is) the correct area in /etc/profile. The relevant segment is:

        # Set a default shell prompt:
        #PS1='`hostname`:`pwd`# '
        PS1='\u@\h: \w\$ '
        if [ "$SHELL" = "/bin/pdksh" ]; then
        # PS1='! $ '
          PS1='\u@\h: \w\$ '
        elif [ "$SHELL" = "/bin/ksh" ]; then
        # PS1='! ${PWD/#$HOME/~}$ '
          PS1='\u@\h: \w\$ '
        elif [ "$SHELL" = "/bin/zsh" ]; then
        # PS1='%n@%m:%~%# '
          PS1='\u@\h: \w\$ '
        elif [ "$SHELL" = "/bin/ash" ]; then
        # PS1='$ '
          PS1='\u@\h: \w\$ '
        else
          PS1='\u@\h: \w\$ '
        fi

    Most of that was already there; I just commented out the existing value and added the one I want. By manually opening the terminal and doing . profile, I can load these values, but they don't stick - I close the terminal and reopen, and I'm back to sh-4.1$. Maybe I'm doing this in the wrong place, but how can I make that value stick? All the info I've found on Google is Fedora/Ubuntu-specific. I use Slackware. Any help on this matter would be greatly appreciated.
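
    One likely explanation (an assumption, given the sh-4.1$ prompt): XFCE Terminal starts an interactive non-login shell, which never reads /etc/profile; bash reads ~/.bashrc in that case. And if the prompt really says sh-4.1$, the terminal may be running bash as sh, which for interactive non-login shells reads only the file named in $ENV. A sketch of making the prompt stick per user:

        # Put the prompt where interactive non-login bash shells will read it
        # (the quoted heredoc delimiter keeps the backslashes literal):
        cat >> ~/.bashrc <<'EOF'
        PS1='\u@\h: \w\$ '
        EOF
        # And have login shells pick it up too:
        echo '[ -f ~/.bashrc ] && . ~/.bashrc' >> ~/.bash_profile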

    Read the article

  • Subversion error: "Repository moved permanently; please relocate"

    - by Bart S.
    I've set up Subversion and Apache on my server. If I browse to it through my web browser it works fine (http://svn.host.com/reposname). However, if I do a checkout on my machine I get the following error:

        Command: Checkout from http://svn.host.com/reposname, revision HEAD, Fully recursive, Externals included
        Error: Repository moved permanently to 'http://svn.host.com/reposname/'; please relocate

    I checked Apache's error log, but it doesn't say anything (it does now - see edit). My repositories are stored under /var/www/svn/repos/ and my website is stored under /var/www/vhosts/x/... Here's the conf file for the subdomain:

        <Location />
            DAV svn
            SVNParentPath /var/www/svn/repos/
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /var/www/svn/auth/svn.htpasswd
            Require valid-user
        </Location>

    Authentication works fine. Does anyone know what might be causing this?

    Edit: So I restarted Apache (again) and tried it again, and now it gives me an error message, but it doesn't really help. Anyone have an idea what it means?

        [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] Could not fetch resource information.  [403, #0]
        [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] (2)No such file or directory: The URI does not contain the name of a repository.  [403, #190001]

    Edit 2: If I do svn info it doesn't give anything useful:

        [root@eduro eduro.nl]# svn info http://svn.domain.com/repos/
        Username: username
        Password for 'username':
        svn: Repository moved permanently to 'http://svn.domain.com/repos/'; please relocate

    I also tried doing a local checkout (svn checkout file:///var/www/svn/repos/reposname) and that works fine (adding / committing also works fine), so it seems it has something to do with Apache. Some other information: I'm running CentOS 5.3, Plesk 9.3, and Subversion 1.6.9 (r901367).

    Edit 3: I tried moving the repositories, but it didn't make any difference. SELinux is disabled, so that isn't it either.

    Edit 4: Really? Nobody :(?
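
    This error is what a Subversion client reports when the URL is answered by something other than mod_dav_svn, typically the vhost's ordinary filesystem handler issuing a trailing-slash redirect. A quick way to check which handler answers, as a sketch (curl prompts for the password when only a username is given):

        # Compare the server's answer with and without the trailing slash;
        # a plain directory handler will 301-redirect the first to the second,
        # while mod_dav_svn answers both itself.
        curl -i -u username http://svn.host.com/reposname  | head -20
        curl -i -u username http://svn.host.com/reposname/ | head -20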

    Read the article

  • How to automatically set default quota limits for users on XFS filesystem, when the new account is created

    - by acidburn2k
    I guess the title explains the problem pretty well. Do you have an idea for a mechanism that will automatically assign default quota values to every new account created (sort of like the way the skel scheme works, but in this area)? To be clear, I am looking for a generic, clean solution, not ugly cron-based scripts or wrapper scripts for creating users. I would also like to avoid any external, unmaintained stuff (like forgotten PAM modules and such). Anything that could lead to overhead and extra work in the future isn't really a solution, nor is checking for new accounts every minute.
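
    For reference, the closest thing to a built-in mechanism is cloning quotas from a prototype user with edquota -p; it still has to be triggered at account-creation time, so it may fall under the wrapper-script category the question rules out. A sketch (user names and limits are placeholders):

        # Create a prototype account once and give it the desired default limits:
        edquota -u protouser            # interactively set soft/hard block and inode limits
        # When a real account is created, clone the prototype's limits onto it:
        edquota -p protouser newuser
        # setquota does the same non-interactively (blocks-soft blocks-hard inodes-soft inodes-hard):
        setquota -u newuser 1000000 1200000 0 0 /home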

    Read the article

  • Changing time or offsetting it in OpenVZ contained server

    - by Milad Naseri
    I am trying to run a VPS, a Debian box contained in an OpenVZ container. Obviously, I cannot use date --set or any such command, as the time must be set via the parent node. The owner of the parent node, however, refuses to adjust the time (which is 30 minutes slower than the actual time). All the programs on my system consequently see the wrong time, and this throws a wrench into my syncing. Is there a way to change the system time without interference from the container's administrator? Or perhaps, failing that, a way to make the programs "see" a time 30 minutes ahead of what is reported by the container?
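
    For the per-program workaround, an LD_PRELOAD shim such as libfaketime can present individual processes with an offset clock without touching the container's time. A sketch, assuming the faketime package is installable inside the container:

        apt-get install faketime
        # Run a single program with the clock shifted 30 minutes ahead:
        faketime -f '+30m' some-program --its-args
        # Or preload the library for a whole shell session
        # (the library path varies by distro and architecture):
        export LD_PRELOAD=/usr/lib/faketime/libfaketime.so.1
        export FAKETIME='+30m'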

    Read the article

  • Zabbix not getting data for one filesystem

    - by Dennis Williamson
    I have Zabbix monitoring disk space for several volumes on several servers. It works fine on all of them except for one of the volumes on one of the servers which always reports as 0. However, when I run ./zabbix_get -s localhost -p 10050 -k 'vfs.fs.size[/home, free]' locally on the machine in question, it gives me the correct, non-zero size which matches the output of df. How can I go about troubleshooting and correcting this problem?
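
    Since the item works via zabbix_get locally, the first things worth ruling out are the agent the server actually talks to and the permissions the agent runs with. A diagnostic sketch, assuming the Zabbix server can reach the agent on port 10050:

        # Run the same query from the Zabbix *server*, not just from localhost:
        zabbix_get -s agent.host.name -p 10050 -k 'vfs.fs.size[/home, free]'
        # If that returns 0, check whether the zabbix user can see the mount:
        su -s /bin/sh zabbix -c 'df /home && stat -f /home'
        # And confirm the item key in the frontend matches exactly; spaces count:
        #   vfs.fs.size[/home,free]  vs  vfs.fs.size[/home, free]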

    Read the article

  • Why should I use a puppet parameterized class?

    - by robbyt
    Generally when working with complex Puppet modules, I will set variables at the node level or inside a class, e.g.:

        node 'foo.com' {
            $file_owner = "larry"
            include bar
        }

        class bar {
            $file_name = "larry.txt"
            include do_stuff
        }

        class do_stuff {
            file { $file_name:
                ensure => file,
                owner  => $file_owner,
            }
        }

    How, when, and why do parameterized classes help in this situation? How are you using parameterized classes to structure your Puppet modules?

    Read the article

  • How is network mounted software executed?

    - by CptSupermrkt
    I would like to understand how network-mounted software works. For example, at my place of work, we have a software server. Each client machine (hundreds of them) automatically mounts directories from the software server on boot. For example, a program like Matlab is installed just once on the software server, but each client machine can start up an instance of Matlab. What is going on under the hood? Let's say I run /opt/bin/matlab, and /opt/ is mounted from the software server: what happens when I press Enter to execute matlab on a client machine? The process is on the client machine, and I've already narrowed down that there isn't any implicit or hidden file transfer (i.e. copying Matlab to my machine temporarily for that session) by running Matlab on a computer with nearly zero disk space (i.e. not enough room for the transfer). Since Matlab was installed on the server, how is my client computer executing it? What mechanism is controlling this? What is happening behind the scenes?
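
    The short version: exec() maps the binary's pages from the network mount, and the kernel demand-pages them over the wire as they are touched; nothing is staged to local disk. A way to observe this on a client, as a sketch with hypothetical paths:

        # Confirm /opt really is a network mount:
        mount | grep ' /opt '
        # Start the program, then look at its memory map; the executable and its
        # libraries are mapped straight from the network-backed path:
        matlab &                              # hypothetical program
        grep /opt /proc/$!/maps | head
        # On NFS, client read counters grow as pages are faulted in:
        nfsstat -c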

    Read the article

  • Monitor or log directory permission changes?

    - by Myles
    I'm having an issue with a cPanel shared server running CentOS 5 where a few directories under the public_html folder keep getting changed to 777 from 755. The customer says they are not changing it, and I'm wondering if there is a way to monitor these specific directories to find out who or what is changing the permissions. I have looked into using auditctl, and after testing it and changing the permissions myself I don't see anything in the logs, so I'm not sure if I'm doing it right or if it's even possible. Does anybody have any suggestions or ideas on how I could figure out what is changing the permissions? Thanks!!
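
    auditd can do this; the watch needs the attribute-change flag, and results are read back with ausearch rather than by grepping the raw log. A sketch, assuming auditd is running and using a hypothetical path:

        # Watch the directory for attribute changes (chmod/chown are 'a' events):
        auditctl -w /home/user/public_html -p a -k permwatch
        # Reproduce the change, then query by key; -i resolves uids and syscalls:
        ausearch -k permwatch -i
        # The matching records name the syscall (chmod), pid, uid and exe
        # responsible. Remove the watch when finished:
        auditctl -W /home/user/public_html -p a -k permwatch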

    Read the article

  • Unable to start Apache after changes to rc.conf and resolv.conf

    - by shupru
    I had a working configuration this morning with the following simple /etc/rc.conf:

        ifconfig_rl0="DHCP"
        ifconfig_xl="inet 192.168.1.11 netmask 255.255.255."
        defaultrouter="192.168.1.1"

    I added the following lines:

        firewall_enable="YES"
        firewall_type="SIMPLE"
        firewall_logging="YES"
        sshd_enable="YES"
        apache_enable="YES"
        mysql_enable="YES"

    My httpd.conf includes:

        NameVirtualHost 192.168.1.11
        <VirtualHost 192.168.1.11>
        ...
        </VirtualHost>

    Now Apache and the SSH server are down. I changed rc.conf back to the last working configuration and there is still no SSH or Apache:

        apachectl start    # --> /usr/local/sbin/apachectl start: httpd could not be started
        apachectl status   # --> Looking up localhost
                           #     Making http connection to localhost
                           #     Alert!: Unable to connect to remote host.
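
    Given the firewall_enable/firewall_type="SIMPLE" additions, a likely culprit is the new ipfw ruleset blocking both sshd and httpd, and it stays loaded even after rc.conf is reverted, until a reboot or a flush. A diagnostic sketch from the console (log path is the usual FreeBSD default, an assumption):

        # See whether the firewall is loaded and what it is denying:
        ipfw list
        # Flush all rules to restore open access (do this on the console;
        # it would cut off an ssh session passing through the firewall):
        ipfw -f flush
        # Then check why httpd itself refuses to start:
        apachectl configtest
        tail /var/log/httpd-error.log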

    Read the article

  • How can I use a keyfile on a removable USB drive for my encrypted root in Debian?

    - by naivem
    I recently set up root encryption with a couple of LVM volumes inside one LUKS volume, and I am a little confused as to how I would go about getting it to unlock automatically using a keyfile stored on a USB flash drive. I presume I would have to put the drive in the fstab inside my initramfs (if there is one) and add a hook for USB device support. Essentially, I want to know what I have to do to enable my LUKS volume (containing all of my partitions except /boot) to unlock using a keyfile stored on a USB flash drive, rather than a manually entered passphrase.
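
    On Debian, the stock cryptsetup initramfs supports this through a keyscript; passdev waits for a device to appear, mounts it read-only, and reads the keyfile from it, so no fstab entry is needed. A sketch, with hypothetical device names and paths:

        # Generate a key and add it to the LUKS header (the passphrase still works):
        dd if=/dev/urandom of=/media/usb/keyfile bs=512 count=8
        cryptsetup luksAddKey /dev/sda2 /media/usb/keyfile
        # In /etc/crypttab, point the volume at the keyfile on the USB stick:
        #   cryptroot /dev/sda2 /dev/disk/by-label/KEYUSB:/keyfile luks,keyscript=passdev
        # Rebuild the initramfs so the keyscript and USB modules are included:
        update-initramfs -u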

    Read the article

  • Add the "SAMBA File Server" role to a server running SCO Unix?

    - by I.T. Support
    We're trying to get network access from Windows servers to a hard drive on a server running SCO Unix. I believe we need to add the "Samba File Server" role to the server so we can mount the drive as a network share that we can access from Windows. Is it possible to add the Samba role to a SCO Unix operating system? Are there any gotchas or concerns? Thanks
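
    SCO has no Windows-style "roles"; sharing a directory means installing a Samba package built for your SCO release (historically distributed via SCO's Skunkware collection, an assumption worth verifying) and writing a share definition by hand. A minimal sketch with assumed install paths:

        # Define the share (path and user are hypothetical) in smb.conf:
        cat >> /usr/local/samba/lib/smb.conf <<'EOF'
        [datadrive]
           path = /mnt/datadrive
           read only = no
           valid users = winuser
        EOF
        # Check the config parses, then start the daemons:
        testparm -s
        /usr/local/samba/sbin/smbd -D
        /usr/local/samba/sbin/nmbd -D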

    Read the article

  • Apache crashing at random intervals. Can not find a reason in log files

    - by Nick Downton
    We are having an issue with a VPS running Plesk 9.5 on Ubuntu 8.04. At seemingly random intervals Apache will disappear and needs to be started manually. I have checked the Apache error log, /var/log/messages, and the individual virtual hosts' Apache error files, and cannot find anything that coincides with the time of the failure. dmesg is empty, which is a bit odd. We have also had the psa service go down for no apparent reason while Apache stayed up. I'm at a loss to diagnose this, because none of the log files I can find point to any issues. Are there any others I can look at? Memory usage sits at about 55% (out of 400 MB) and it isn't a particularly highly trafficked server. Any pointers as to where else I can find out what is going on would be very much appreciated. Nick
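
    An empty dmesg on a VPS is itself a clue: under OpenVZ-style containers (which many Plesk VPSes of that era were, an assumption here), the host can kill processes on resource-limit hits without leaving anything in the guest's logs. Two things worth checking, as a sketch:

        # On OpenVZ/Virtuozzo guests, a failcnt > 0 here means a limit
        # (often privvmpages) was hit and processes were killed by the host:
        cat /proc/user_beancounters
        # A crude watchdog that records state the moment Apache vanishes:
        while sleep 30; do
            pgrep apache2 >/dev/null || { date; free -m; tail -50 /var/log/syslog; } >> /root/apache-death.log
        done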

    Read the article

  • Finding the current user authenticated by basic auth (Apache)

    - by jtd
    When you log in through a basic auth page, is the username you authenticated as stored anywhere (on the server or the client machine), maybe in an environment variable? Background: I have a common web administration page for an e-mail server, and I'd like to know who is doing what. When a user successfully logs in via basic auth, I want to be able to identify them and log their actions, so each time a request is submitted, I can write to a log file. The basic format would be: $username ran a $function against $useraccount. So if a user changed someone's permissions, e.g.: Admin-Bob ran a permission change against User-Scott. That way, if errors occur, I can easily trace back in the log file which actions caused them. I tried checking the %ENV hash, to no avail. Any ideas? I don't really want to get into PHP-like sessions, because that would mean scrapping my basic auth, which already gives me a fine degree of control. If I had to code something with sessions, I'd need to implement a system to block users after a maximum number of tries and so on, which I don't really want to code. I think this is better geared towards Server Fault because it pertains to Apache more so than to the programming language; sessions can be done in a myriad of languages.
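
    For CGI-style scripts behind Apache basic auth, the authenticated name is handed to the script as the standard REMOTE_USER variable (in Perl, $ENV{REMOTE_USER}); it is only set for requests inside the protected area, which may be why %ENV looked empty elsewhere. A sketch of logging with it (log path and fields are hypothetical):

        #!/bin/sh
        # Minimal CGI sketch: record who did what, using the variable Apache
        # sets after successful basic auth.
        echo "$(date '+%F %T') $REMOTE_USER ran $QUERY_STRING" >> /var/log/webadmin-actions.log
        echo "Content-Type: text/plain"
        echo
        echo "Logged action for $REMOTE_USER"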

    Read the article

  • chroot for unsecure programs execution

    - by attwad
    Hi, I have never set up a chroot-jailed environment before, and I am afraid I need some help to do it well. To explain shortly what this is all about: I have a webserver to which users send Python scripts to process various files that are stored on the server (the system is for research purposes). Every day a cron job starts the execution of the uploaded scripts via a command of this kind: /usr/bin/python script_file.py. All of this is really insecure, and I would like to create a jail into which I would copy the necessary files (uploaded scripts, files to process, the Python binary and its dependencies). I already looked at various utilities for creating jails, but none of them seemed up to date, or they lacked solid documentation (i.e. the links proposed in "How can I run an untrusted python script"). Could anyone guide me to a viable solution to my problem, like a working example of a script that creates a jail, puts some files in it, and executes a Python script? Thank you very much.
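
    A bare-bones jail can be assembled by copying the interpreter plus the shared libraries ldd reports, plus (for Python) its standard library. A sketch with hypothetical paths; note that a chroot alone is not a strong security boundary against genuinely hostile code:

        JAIL=/srv/pyjail                       # hypothetical location
        mkdir -p "$JAIL"/{bin,scripts,data}
        cp /usr/bin/python "$JAIL/bin/"
        # Copy every shared library the interpreter links against, keeping paths:
        for lib in $(ldd /usr/bin/python | grep -o '/[^ ]*'); do
            cp --parents "$lib" "$JAIL"
        done
        # Python also needs its standard library inside the jail (version assumed):
        cp -r --parents /usr/lib/python2.7 "$JAIL"
        # Run the untrusted script inside the jail (use su or similar to drop
        # root privileges in practice):
        chroot "$JAIL" /bin/python /scripts/script_file.py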

    Read the article

  • Is there a way to control two instantiated systemd services as a single unit?

    - by rascalking
    I've got a couple of Python web services I'm trying to run on a Fedora 15 box. They're being run by paster, and the only difference in starting them is the config file they read. This seems like a good fit for systemd's instantiated services, but I'd like to be able to control them as a single unit. A systemd target that requires both services seems like the way to approach that. Starting the target does start both services, but stopping the target leaves them running. Here's the service file:

        [Unit]
        Description=AUI Instance on Port %i
        After=syslog.target

        [Service]
        WorkingDirectory=/usr/local/share/aui
        ExecStart=/opt/cogo/bin/paster serve --log-file=/var/log/aui/%i deploy-%i.ini
        Restart=always
        RestartSec=2
        User=aui
        Group=aui

        [Install]
        WantedBy=multi-user.target

    And here's the target file (the two instance names were redacted in the original post; they have the form aui@<port>.service):

        [Unit]
        Description=AUI
        Requires=aui@<port1>.service
        Requires=aui@<port2>.service
        After=syslog.target

        [Install]
        WantedBy=multi-user.target

    Is this kind of grouping even possible with systemd?
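
    This behaviour is by design: Requires= pulls dependencies in on start, but stopping a unit does not stop the units it requires. Later systemd versions (newer than what Fedora 15 shipped, so this is an assumption about upgradeability) added PartOf=, which propagates stop and restart from the target down to the instances. A sketch using a drop-in so the template itself stays untouched:

        mkdir -p /etc/systemd/system/aui@.service.d
        cat > /etc/systemd/system/aui@.service.d/partof.conf <<'EOF'
        [Unit]
        PartOf=aui.target
        EOF
        systemctl daemon-reload
        # Now "systemctl stop aui.target" also stops every instance tied to it.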

    Read the article

  • Ubuntu Connect Button Disabled, cannot connect

    - by CSharperWithJava
    I'm trying to connect to my school network from Ubuntu 12. The network is WPA-encrypted, and I can connect just fine when the network manager establishes a connection automatically. However, when I get disconnected and try to reconnect, a username and password prompt pops up. The credentials are already stored and are auto-filled correctly, but the Connect button is disabled, so I can't actually connect to the network. What can I do? The only guess I've found so far is that the password is too short and Ubuntu is expecting a full WPA key. I don't administer the school network, though, so I certainly can't change that.
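
    Since the dialog asks for a username as well as a password, the network is probably WPA2-Enterprise (802.1X) rather than plain WPA-PSK, and the GUI keeps Connect disabled until every required field (EAP method, CA certificate, etc.) is filled in. Two hedged things to try from a terminal (SSID and key are placeholders):

        # Try associating from the command line, bypassing the dialog
        # (assumes NetworkManager 0.9's nmcli):
        nmcli dev wifi connect "SchoolSSID" password "thekey"
        # Inspect the stored profile to see which 802.1X fields are missing:
        sudo cat /etc/NetworkManager/system-connections/SchoolSSID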

    Read the article

  • default virtual network interface

    - by Zulakis
    I've got a single Ethernet connection to a network but need multiple IPs. Because of this, I am using virtual network interfaces, like this:

        auto intern
        iface intern inet static
            address ...
            netmask ...
            gateway ...

        auto intern:1
        iface intern:1 inet static
            address ...
            netmask ...
            gateway ...

    I need to specify which IP should be used by default for outgoing traffic. How can I do that?
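
    The kernel picks the source address for outgoing packets from the route it uses, so the usual fix is to pin a src on the default route (and keep only one gateway line between the two stanzas). A sketch with placeholder addresses:

        # Make outgoing traffic default to the preferred address:
        ip route change default via <gateway-ip> dev intern src <preferred-ip>
        # Verify which source address the kernel would now pick:
        ip route get 8.8.8.8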

    Read the article

  • Can only bring up one of two interfaces

    - by mstaessen
    I'm having a bizarre issue with my HP ProLiant DL360 G4p server. It has two gigabit Ethernet interfaces, but I can bring up only one of them. This is starting to freak me out, and that's why I turned here. I'm running the x64 Ubuntu 11.10 server edition. lshw -c network shows that the second interface is disabled. I have no idea why, or how to enable it.

        $ sudo lshw -c network
          *-network:0
               description: Ethernet interface
               product: NetXtreme BCM5704 Gigabit Ethernet
               vendor: Broadcom Corporation
               physical id: 2
               bus info: pci@0000:02:02.0
               logical name: eth0
               version: 10
               serial: 00:18:71:e3:6d:26
               size: 100Mbit/s
               capacity: 1Gbit/s
               width: 64 bits
               clock: 66MHz
               capabilities: pcix pm vpd msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.119 duplex=full firmware=5704-v3.27b, ASFIPMIc v2.36 ip=10.48.8.x latency=64 link=yes mingnt=64 multicast=yes port=twisted pair speed=100Mbit/s
               resources: irq:25 memory:fdf70000-fdf7ffff
          *-network:1 DISABLED
               description: Ethernet interface
               product: NetXtreme BCM5704 Gigabit Ethernet
               vendor: Broadcom Corporation
               physical id: 2.1
               bus info: pci@0000:02:02.1
               logical name: eth1
               version: 10
               serial: 00:18:71:e3:6d:25
               capacity: 1Gbit/s
               width: 64 bits
               clock: 66MHz
               capabilities: pcix pm vpd msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.119 firmware=5704-v3.27b latency=64 link=no mingnt=64 multicast=yes port=twisted pair
               resources: irq:26 memory:fdf60000-fdf6ffff

    If I try to ifup eth1, I get:

        $ sudo ifup eth1
        Ignoring unknown interface eth1=eth1.

    I figured that's what happens when there is no eth1 listed in /etc/network/interfaces. But when I add the configuration for eth1, I still can't ifup:

        $ sudo ifup eth1
        RTNETLINK answers: File exists
        Failed to bring up eth1.

    I've also tried ifconfig eth1 up, but without any result. For clarity, I have added a masked version of /etc/network/interfaces. I don't think it is the cause of the problem, though.

        $ cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 10.48.8.x
            netmask 255.255.255.y
            network 10.48.8.z
            broadcast 10.48.8.t
            gateway 10.48.8.u

        auto eth1
        iface eth1 inet static
            address 193.190.253.x
            netmask 255.255.255.y
            network 193.190.253.z
            broadcast 193.190.253.t
            gateway 193.190.253.u

    I really need some help fixing this. It's driving me crazy. Thanks.
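
    The "RTNETLINK answers: File exists" message usually means ifup tried to install a second default route: with a gateway line under both eth0 and eth1, bringing up the second interface conflicts with the route eth0 already installed. A sketch of the usual fix, keeping the masked placeholder addresses from the question:

        # Remove the "gateway" line from the eth1 stanza, then:
        sudo ifup eth1
        # If replies leaving via eth1 must use its own gateway, add a policy
        # route for that subnet instead of a second default route:
        ip route add default via 193.190.253.u dev eth1 table 100
        ip rule add from 193.190.253.x table 100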

    Read the article

  • SSH rsa key works with external IP not internal IP

    - by Ian
    I am using Rackspace cloud hosting. I have two servers behind a load balancer. Each server has an external IP and an internal IP. I want to set up a sync job that uses SSH to transfer files. I made an RSA key, and I can successfully SSH from server A into server B, using the external IP of server B, without being prompted for a password. If I try to do the same but use the internal IP, it prompts me for a password. I want to be able to use the key instead of the password. Why is this? Is there something special I have to do during key generation so it works for both IPs? Any help is appreciated.
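
    Keys are not bound to an IP at generation time, so key auth differing between two addresses of the same box usually means the two addresses don't actually reach the same sshd. A quick comparison sketch (placeholder addresses):

        # Watch which key is offered and what the server accepts on each address:
        ssh -v user@<external-ip> exit 2>&1 | grep -iE 'offering|accepted|denied'
        ssh -v user@<internal-ip> exit 2>&1 | grep -iE 'offering|accepted|denied'
        # Compare host keys; different fingerprints mean different machines:
        ssh-keygen -F <external-ip> -f ~/.ssh/known_hosts
        ssh-keygen -F <internal-ip> -f ~/.ssh/known_hosts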

    Read the article

  • How can I change the 'change' date of a file?

    - by Someone1234
    How can I change the 'change' date?

        $ touch -t 9901010000 test; stat test
          File: `test'
          Size: 0          Blocks: 0          IO Block: 4096   regular empty file
        Device: fe01h/65025d    Inode: 11279017    Links: 1
        Access: (0644/-rw-r--r--)  Uid: ( 1000/ x)   Gid: ( 1000/ x)
        Access: 1999-01-01 00:00:00.000000000 +0100
        Modify: 1999-01-01 00:00:00.000000000 +0100
        **Change: 2012-04-08 19:26:56.061614473 +0200**
        Birth: -
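
    For context: ctime cannot be set through the filesystem API (touch only covers atime and mtime); common workarounds are briefly winding the system clock back while making a metadata change, or editing the inode offline with debugfs on ext filesystems. A sketch of the clock trick (root required, and anything relying on wall time will notice):

        saved=$(date +%s)                # remember the real time
        date -s '1999-01-01 00:00:00'    # wind the clock back
        touch -t 9901010000 test         # any metadata change stamps ctime with "now"
        date -s "@$saved"                # restore the clock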

    Read the article
