Search Results

Search found 45752 results on 1831 pages for 'ubuntu linux'.


  • No network connectivity for my CentOS 6 installed in VMware Fusion on Mac OS X Lion

    - by gilzero
    I installed CentOS 6.2 (CentOS-6.2-x86_64-minimal.iso) with VMware Fusion (Version 4.1.2), not with EasyInstall. I am using Mac OS X Lion and connect to the internet via WiFi. The installed CentOS does not have network connectivity. How may I configure it to connect? Thanks for any help. Below is a screenshot of ifconfig. With the Network Adapter setting, I tried both NAT and Bridged WiFi.
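
    A common culprit on minimal CentOS installs is that the NIC ships with ONBOOT=no, so it never comes up; the guest itself never "connects to wifi" - it only sees a virtual Ethernet adapter that Fusion bridges or NATs over the Mac's WiFi. A minimal sketch, assuming the virtual NIC appears as eth0:

        # /etc/sysconfig/network-scripts/ifcfg-eth0 -- minimal DHCP setup
        DEVICE=eth0
        BOOTPROTO=dhcp
        ONBOOT=yes

        # apply the change
        service network restart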

    Read the article

  • Best MTA setup for home or laptop computers - not server

    - by thomasrutter
    Hello, what is a good MTA (e.g. Postfix or something else) setup for a home computer behind a NAT, or a laptop that connects to various different wifi networks? I've read a lot of Postfix tutorials on how to set it up this way or that, but they are usually geared towards computers that are servers, i.e. they:
    - have a static IP
    - have a domain name
    - are always connected to the same network
    My requirements are, I guess:
    - Ability to forward mail for "root" to another server of my choosing.
    - No listening for incoming SMTP connections - outgoing only.
    - Ability to route outgoing mail via an external SMTP server with authentication (and perhaps encryption).
    If not Postfix, I need an MTA which can queue up mails in case it temporarily has no internet connection. A sketch of such a setup follows below.
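
    For reference, a minimal Postfix "null client" sketch along these lines; the smarthost and addresses are placeholders (main.cf does not allow inline comments, so they sit on their own lines):

        # /etc/postfix/main.cf -- outgoing-only, relay via an external smarthost
        # no incoming SMTP from the network:
        inet_interfaces = loopback-only
        # deliver nothing locally:
        mydestination =
        # placeholder smarthost:
        relayhost = [smtp.example.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = encrypt

        # /etc/aliases -- forward root's mail, then run newaliases
        root: admin@example.com

    Credentials go in /etc/postfix/sasl_passwd (then run postmap on it), and Postfix queues outgoing mail automatically while the link is down, which covers the laptop case.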

    Read the article

  • Package visible in Synaptic but not in aptitude?

    - by sleepycat
    I can see libfreeimage3-dev in synaptic and have installed it from there, but why is it that I can't see it from the command line? I tried sudo aptitude update...

        mike@sleepycat:~$ sudo aptitude search '~n.*freeimage3.*'
        i   libfreeimage3      - Support library for graphics image formats
        p   libfreeimage3-dbg  - Support library for graphics image formats

    Any ideas?
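
    Two quick diagnostic checks (a sketch; Synaptic and aptitude read the same APT package lists, so a mismatch usually points at a stale cache rather than a search-pattern problem):

        # does apt itself know the package, and from which repo?
        apt-cache policy libfreeimage3-dev

        # list every version aptitude can see
        aptitude versions libfreeimage3-dev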

    Read the article

  • Is it possible to rate limit based on host headers? i.e. not just on IP address

    - by Blankman
    I have a web service endpoint that I am building where people will post an XML file, and it will really get pounded with over 1K requests per second. They are sending in these XML files via HTTP POST, but a good majority of them will be rate limited. The problem is, the rate limiting is done by the web application by looking up the source_id in the XML, and if it is over x requests per minute, the request is not processed further. I was wondering if I could do the rate-limit check earlier in the processing somehow, and thus save the 50K file going through the pipeline to my web servers and eating up resources. Could a load balancer make a call out to verify rate usage somehow? If this is possible, I could maybe put the source_id in a request header so the XML file doesn't even have to be parsed and loaded into memory. Is it possible to just look at the headers and not load the entire 50K XML file into memory? I really appreciate your insights, as this takes more knowledge of the entire TCP/IP stack etc.
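
    If the clients can be persuaded to copy source_id into a request header, nginx in front of the web servers could throttle on it without the app tier ever parsing the body - a sketch, where the X-Source-Id header name, rate, and backend address are assumptions:

        # http context: key the counters on the custom header
        limit_req_zone $http_x_source_id zone=per_source:10m rate=60r/m;

        server {
            location /endpoint {
                # reject excess posts before they reach the application
                limit_req zone=per_source burst=10 nodelay;
                proxy_pass http://127.0.0.1:8080;
            }
        }

    Note that nginx still receives the 50K body, but the application servers are spared; requests missing the header produce an empty key and are not limited, so the backend should still enforce its own check.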

    Read the article

  • open_basedir vs sessions

    - by liquorvicar
    On a virtual hosting server I have open_basedir set to .:/path/to/vhost/web:/tmp:/usr/share/pear for each virtual host. I have a client who's running WordPress and he's complaining about open_basedir errors like this:

        PHP WARNING: file_exists() [function.file-exists]: open_basedir restriction in effect. File(/var/lib/php/session/sess_42k7jn3vjenj43g3njorrnrmf2) is not within the allowed path(s): (.:/path/to/vhost/web:/tmp:/usr/share/pear)

    So the PHP session save_path isn't included in open_basedir, yet sessions across all sites on the server seem to be working fine, apart from this intermittent instance. I thought that perhaps the default session handler ignored open_basedir and this warning was caused by WP accessing the session file directly. However, from what I can see, PHP 5.2.4 introduced open_basedir checking for the session.save_path config: http://www.php.net/ChangeLog-5.php#5.2.4 (I am on PHP 5.2.13). Any ideas?
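
    If the session directory does need to be whitelisted, a per-vhost sketch for mod_php:

        # in the <VirtualHost> block; appends the session dir to the allowed paths
        php_admin_value open_basedir ".:/path/to/vhost/web:/tmp:/usr/share/pear:/var/lib/php/session"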

    Read the article

  • Terse, documented, correct way to create Kerberos-backed user shares in Greyhole

    - by MrGomez
    As a migration strategy away from Windows Home Server (which is currently out of support and intractable for our needs, for a variety of reasons), our little cloister of nerds has targeted Greyhole for our shared use at home. Despite the documentation's terseness, getting the system set up for simple, single-user operation isn't especially difficult, but this scenario fails to service our needs. Among other highlights of the system, we're attempting to emulate Integrated Windows Authentication (with Kerberos) and single-user shares to keep the Windows users in the house happy and well-supported. I'm aware of the underlying systems that go into Greyhole and understand how to set up per-user shares in Samba, but the documentation doesn't seem to support cases for Greyhole to sop up these directories as separate landing zones for replication. Enter my question: are both of these cases (IWA user authentication and user-partitioned personal shares) supported by Greyhole? If so, please cite or link the supporting documentation if it exists.

    Read the article

  • test filenames for regex patterns in bash

    - by rk
    I'm not sure exactly how the code should be written, but I want to test a file/folder for naming patterns, something like:

        if [ -d $i ] && [ regex([0-9].,$i) { do something }

    I want it to check that the file/folder is a directory and that its name is a number (i.e. 1 or 101 or 10007)...
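
    A working sketch of that test in plain bash (the =~ operator needs bash 3.0 or later):

        for i in *; do
            # a directory AND a name that is all digits (1, 101, 10007, ...)
            if [[ -d $i && $i =~ ^[0-9]+$ ]]; then
                echo "match: $i"
            fi
        done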

    Read the article

  • Sharing Bandwidth and Prioritizing Realtime Traffic via HTB, Which Scenario Works Better?

    - by Mecki
    I would like to add some kind of traffic management to our Internet line. After reading a lot of documentation, I think HFSC is too complicated for me (I don't understand all the curves stuff, I'm afraid I will never get it right), CBQ is not recommended, and basically HTB is the way to go for most people.

    Our internal network has three "segments" and I'd like to share bandwidth more or less equally between those (at least in the beginning). Further, I must prioritize traffic according to at least three kinds of traffic (realtime traffic, standard traffic, and bulk traffic). The bandwidth sharing is not as important as the fact that realtime traffic should always be treated as premium traffic whenever possible, but of course no other traffic class may starve either.

    The question is, what makes more sense and also guarantees better realtime throughput:
    - Creating one class per segment, each having the same rate (priority doesn't matter for classes that are not leaves, according to the HTB developer), where each of these classes has three sub-classes (leaves) for the 3 priority levels (with different priorities and different rates).
    - Having one class per priority level on top, each having a different rate (again, priority won't matter), where each has 3 sub-classes, one per segment, and all 3 leaves in the realtime class have highest prio, lowest prio in the bulk class, and so on.

    I'll try to make this more clear with the following ASCII art image:

    Case 1:

        root --+--> Segment A
               |      +--> High Prio
               |      +--> Normal Prio
               |      +--> Low Prio
               |
               +--> Segment B
               |      +--> High Prio
               |      +--> Normal Prio
               |      +--> Low Prio
               |
               +--> Segment C
                      +--> High Prio
                      +--> Normal Prio
                      +--> Low Prio

    Case 2:

        root --+--> High Prio
               |      +--> Segment A
               |      +--> Segment B
               |      +--> Segment C
               |
               +--> Normal Prio
               |      +--> Segment A
               |      +--> Segment B
               |      +--> Segment C
               |
               +--> Low Prio
                      +--> Segment A
                      +--> Segment B
                      +--> Segment C

    Case 1 seems like the way most people would do it, but unless I misread the HTB implementation details, Case 2 may offer better prioritizing.

    The HTB manual says that if a class has hit its rate, it may borrow from its parent, and when borrowing, classes with higher priority always get bandwidth offered first. However, it also says that classes having bandwidth available on a lower tree-level are always preferred over those on a higher tree level, regardless of priority.

    Let's assume the following situation: Segment C is not sending any traffic. Segment A is only sending realtime traffic, as fast as it can (enough to saturate the link alone), and Segment B is only sending bulk traffic, as fast as it can (again, enough to saturate the full link alone). What will happen?

    Case 1: Segment A-High Prio and Segment B-Low Prio both have packets to send; since A-High Prio has the higher priority, it will always be scheduled first, till it hits its rate. Now it tries to borrow from Segment A, but since Segment A is on a higher level and Segment B-Low Prio has not yet hit its rate, that class is now served first, till it also hits its rate and wants to borrow from Segment B. Once both have hit their rates, both are on the same level again and now Segment A-High Prio is going to win again, until it hits the rate of Segment A. Now it tries to borrow from root (which has plenty of bandwidth spare, as Segment C is not using any of its guaranteed share), but again, it has to wait for Segment B-Low Prio to also reach the root level. Once that happens, priority is taken into account again and this time Segment A-High Prio will get all the bandwidth left over from Segment C.

    Case 2: High Prio-Segment A and Low Prio-Segment B both have packets to send; again High Prio-Segment A is going to win as it has the higher priority. Once it hits its rate, it tries to borrow from High Prio, which has bandwidth spare, but being on a higher level, it has to wait for Low Prio-Segment B again to also hit its rate. Once both have hit their rates and both have to borrow, High Prio-Segment A will win again until it hits the rate of the High Prio class. Once that happens, it tries to borrow from root, which has again plenty of bandwidth left (all bandwidth of Normal Prio is unused at the moment), but it has to wait again until Low Prio-Segment B hits the rate limit of the Low Prio class and also tries to borrow from root. Finally both classes try to borrow from root, priority is taken into account, and High Prio-Segment A gets all the bandwidth root has left over.

    Both cases seem sub-optimal, as either way realtime traffic sometimes has to wait for bulk traffic, even though there is plenty of bandwidth left it could borrow. However, in Case 2 it seems like the realtime traffic has to wait less than in Case 1, since it only has to wait till the bulk traffic rate is hit, which is most likely less than the rate of a whole segment (and in Case 1 that is the rate it has to wait for). Or am I totally wrong here?

    I thought about even simpler setups, using a priority qdisc. But priority queues have the big problem that they cause starvation if they are not somehow limited. Starvation is not acceptable. Of course one can put a TBF (Token Bucket Filter) into each priority class to limit the rate and thus avoid starvation, but when doing so, a single priority class cannot saturate the link on its own any longer, even if all other priority classes are empty; the TBF will prevent that from happening. And this is also sub-optimal, since why wouldn't a class get 100% of the line's bandwidth if no other class needs any of it at the moment?

    Any comments or ideas regarding this setup? It seems so hard to do using standard tc qdiscs. As a programmer, it would be such an easy task if I could simply write my own scheduler (which I'm not allowed to do).
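
    For reference, a sketch of how Case 2's top level might look in tc; the 10mbit line speed and the rate split are placeholder numbers, not a recommendation:

        # root qdisc; unclassified traffic falls into 1:20
        tc qdisc add dev eth0 root handle 1: htb default 20

        # one top class capping the whole line
        tc class add dev eth0 parent 1:  classid 1:1  htb rate 10mbit ceil 10mbit

        # one class per priority level, each allowed to borrow up to the full line
        tc class add dev eth0 parent 1:1 classid 1:10 htb rate 4mbit ceil 10mbit prio 0   # realtime
        tc class add dev eth0 parent 1:1 classid 1:20 htb rate 4mbit ceil 10mbit prio 1   # standard
        tc class add dev eth0 parent 1:1 classid 1:30 htb rate 2mbit ceil 10mbit prio 2   # bulk

        # per-segment leaves under the realtime class (repeat analogously for 1:20, 1:30)
        tc class add dev eth0 parent 1:10 classid 1:11 htb rate 1334kbit ceil 10mbit prio 0
        tc class add dev eth0 parent 1:10 classid 1:12 htb rate 1333kbit ceil 10mbit prio 0
        tc class add dev eth0 parent 1:10 classid 1:13 htb rate 1333kbit ceil 10mbit prio 0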

    Read the article

  • RAID FS detection at boot time

    - by alex
    An excerpt from dmesg:

        md: Autodetecting RAID arrays.
        md: Scanned 2 and added 2 devices.
        md: autorun ...
        md: considering sdb1 ...
        md:  adding sdb1 ...
        md:  adding sda1 ...
        md: created md1
        md: bind<sda1>
        md: bind<sdb1>
        md: running: <sdb1><sda1>
        raid1: raid set md1 active with 2 out of 2 mirrors
        md1: detected capacity change from 0 to 1500299198464
        md: ... autorun DONE.
        md1: unknown partition table
        EXT3-fs (md1): error: couldn't mount because of unsupported optional features (240)
        EXT2-fs (md1): error: couldn't mount because of unsupported optional features (240)
        EXT4-fs (md1): mounted filesystem with ordered data mode

    Is it OK that the kernel tries to mount an ext4 RAID as ext3 and ext2 first? Is there a way to tell it to skip those two steps? Just in case, the fstab entry is:

        /dev/md1  /  ext4  noatime  0  1

    TIA.

    Read the article

  • Nagios orphaned services warnings

    - by Gordon
    We have had Nagios running on one of our servers without any problems for a while, but lately certain old service warnings have been reappearing and then disappearing on the service detail page. From looking at the logs, I found warnings like the following:

        Warning: The check of service 'Tomcat' on host 'virtual1' looks like it was orphaned (results never came back). I'm scheduling an immediate check of the service...

    Has anyone ever come across this before, or does anyone at least know a way to delete the old orphaned warnings? The Nagios version we are running is 3.0b7, so an update might be in order. Thanks.

    Read the article

  • Trouble with Apache starting on boot with SSL API key

    - by molleman
    I'm running on CentOS. The trouble is, when I restart my server I need to start my Apache and Varnish services. I use this to start both of them:

        service httpd restart && service varnish restart

    But I would like both of them to start when I reboot the server. I read I could use this:

        chkconfig httpd on

    But this is only for Apache; could I do this?

        chkconfig varnish on

    Finally, when I do my usual start of httpd, I am asked for my API key for SSL. Am I able to incorporate this into restarting both Varnish and httpd on startup, or am I doomed to run this command every time I restart?
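
    On the prompt: what Apache asks for at startup is the SSL private key's passphrase, and there are two common ways to stop it - a sketch, with paths as placeholders:

        # option 1: remove the passphrase from a copy of the key
        # (keep the original safe; protect the new file with chmod 600)
        openssl rsa -in /etc/pki/tls/private/server.key \
                    -out /etc/pki/tls/private/server.nopass.key

        # option 2: let mod_ssl fetch the passphrase from a script that
        # prints it to stdout (directive goes in ssl.conf):
        #   SSLPassPhraseDialog exec:/usr/local/sbin/ssl-passphrase

    With either in place, chkconfig httpd on and chkconfig varnish on should both survive a reboot unattended; chkconfig works for any service with an init script in /etc/init.d, so varnish qualifies.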

    Read the article

  • Understanding the output of syslogd -d

    - by Heoa
    What is the meaning of 80, F and X in the following output of syslogd -d?

        0: X X X X FF X X X X X FF X X X X X X X X X X X X X X  FILE: /var/log/auth.log (unused)
        1: FF FF FF FF X FF FF FF FF FF X FF FF FF FF FF FF FF FF FF FF FF FF FF FF  FILE: /var/log/syslog (unused)
        2: X X X FF X X X X X X X X X X X X X X X X X X X X X  FILE: /var/log/daemon.log (unused)
        3: FF X X X X X X X X X X X X X X X X X X X X X X X X  FILE: /var/log/kern.log (unused)
        4: X X X X X X FF X X X X X X X X X X X X X X X X X X  FILE: /var/log/lpr.log (unused)
        5: X X FF X X X X X X X X X X X X X X X X X X X X X X  FILE: /var/log/mail.log (unused)
        6: X FF X X X X X X X X X X X X X X X X X X X X X X X  FILE: /var/log/user.log (unused)
        7: X X 7F X X X X X X X X X X X X X X X X X X X X X X  FILE: /var/log/mail.info (unused)
        8: X X 1F X X X X X X X X X X X X X X X X X X X X X X  FILE: /var/log/mail.warn (unused)
        9: X X F X X X X X X X X X X X X X X X X X X X X X X  FILE: /var/log/mail.err (unused)
        10: X X X X X X X 7 X X X X X X X X X X X X X X X X X  FILE: /var/log/news/news.crit (unused)
        11: X X X X X X X F X X X X X X X X X X X X X X X X X  FILE: /var/log/news/news.err (unused)
        12: X X X X X X X 3F X X X X X X X X X X X X X X X X X  FILE: /var/log/news/news.notice (unused)
        13: 80 80 X 80 X 80 80 X 80 80 X 80 80 80 80 80 80 80 80 80 80 80 80 80 80  FILE: /var/log/debug (unused)
        14: 70 70 X X X 70 70 X 70 X X 70 70 70 70 70 70 70 70 70 70 70 70 70 70  FILE: /var/log/messages (unused)
        15: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1  WALL:
        16: F0 F0 FF FF F0 F0 F0 FF F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0 F0  PIPE: |/dev/xconsole (unused)

    Read the article

  • How can I password protect & let cgi-bin work?

    - by jaaaaaaax
    This is taken from the sites-available directory. It's a virtual host setting for Apache. Accessing myiphere/cgi-bin/ throws a 403. The directory setting for /var/www2/ is:

        drwxrwxrwx 8 www-data www-data

        NameVirtualHost myiphere
        <VirtualHost myiphere>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www2/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www2/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>
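
    For the password-protection half, basic auth can be layered onto the existing cgi-bin block - a sketch, with the htpasswd path as a placeholder:

        # create the credentials file once:
        #   htpasswd -c /etc/apache2/.htpasswd someuser
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
            AuthType Basic
            AuthName "Restricted scripts"
            AuthUserFile /etc/apache2/.htpasswd
            Require valid-user
        </Directory>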

    Read the article

  • EBS full device confusion

    - by Mike
    I have a 500GB EBS device (/dev/xvdf) mounted to /vol and all data on the box seems to be writing to /vol correctly (see du output below). For some reason /dev/xvda1 is totally full. Any idea what's going on here?

        $ df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvda1       32G   30G  8.0K 100% /
        udev             34G  8.0K   34G   1% /dev
        tmpfs            14G  176K   14G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none             34G     0   34G   0% /run/shm
        /dev/xvdb       827G  201M  785G   1% /mnt
        /dev/xvdf       500G  145G  356G  29% /vol

        $ du -sh *
        8.7M  bin
        18M   boot
        8.0K  dev
        5.1M  etc
        48K   home
        0     initrd.img
        80M   lib
        4.0K  lib64
        16K   lost+found
        4.0K  media
        20K   mnt
        4.0K  opt
        0     proc
        40K   root
        176K  run
        7.1M  sbin
        4.0K  selinux
        4.0K  srv
        0     sys
        4.0K  tmp
        414M  usr
        356M  var
        0     vmlinuz
        145G  vol
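
    Two quick diagnostics that might explain the mismatch (a sketch):

        # -x stays on the root filesystem, so /vol and /mnt don't inflate the total
        sudo du -shx /

        # deleted files still held open by a process consume space du can't see
        sudo lsof +L1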

    Read the article

  • Why do ping and dig provide a different IP address than nslookup?

    - by user1032531
    When pinging my domain name, which points to my home public IP, from two different servers on my LAN, they ping different IPs. Further investigation shows dig and nslookup providing different results. See below.

    A little history: my IP used to be 11.22.33.444 and is hosted by Comcast. I changed routers, and it somehow got changed to 55.66.77.888. I've since updated my 1and1 domain name to point to 55.66.77.888. desktop is a basic server, runs the web server, and connects wirelessly to my LAN. laptop is a GUI and connects via CAT5. Both run CentOS 6.4. My old router was a D-Link, and I used its "Virtual Server" feature to pass port 80 to desktop. My new router is a Linksys, and I use its "Port Forwarding" feature to pass port 80 to desktop (however, I haven't gotten this part working yet).

    What is going on? Why the different IPs? Obviously, it must somehow be stored on the server, but why does the actual machine even know the public IP since it is on a LAN? How do I purge the old IP?

        [root@desktop etc]# dig +short myDomain.com
        11.22.33.444
        [root@desktop etc]# nslookup www.myDomain.com
        Server:     8.8.8.8
        Address:    8.8.8.8#53

        Non-authoritative answer:
        Name:   www.myDomain.com
        Address: 55.66.77.888

        [root@desktop etc]# dig myDomain.com

        ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 <<>> myDomain.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13822
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;myDomain.com.              IN  A

        ;; ANSWER SECTION:
        myDomain.com.       16031   IN  A   11.22.33.444

        ;; Query time: 21 msec
        ;; SERVER: 8.8.8.8#53(8.8.8.8)
        ;; WHEN: Mon Oct 21 04:36:52 2013
        ;; MSG SIZE  rcvd: 44

        [root@laptop ~]# dig +short myDomain.com
        55.66.77.888
        [root@laptop ~]# nslookup www.myDomain.com
        Server:     192.168.0.1
        Address:    192.168.0.1#53

        Non-authoritative answer:
        Name:   www.myDomain.com
        Address: 55.66.77.888
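
    One way to separate stale resolver caches from the actual zone data is to ask the authoritative servers directly - a sketch; the nameserver shown is a placeholder for whatever the NS lookup returns:

        # find the zone's authoritative nameservers
        dig +short NS myDomain.com

        # then query one of them directly, bypassing 8.8.8.8 and the router's cache
        dig +short myDomain.com @ns1.example.com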

    Read the article

  • nginx start failing, says error.log doesn't exist

    - by sososo
    I structured my sites like /home/www/domain.com/public, private, log, backup. In the log folder, I created a blank error.log and access.log. My nginx file in sites-available for the domain looks like:

        server {
            access_log /home/www/domain1.com/log/access.log;
            error_log /home/www/domain1.com/log/error.log;
        }

    Trying to start nginx, it says:

        starting nginx: the config file /etc/nginx/nginx.conf syntax is ok
        [emerg] open() ".../access.log" failed (2: No such file or directory)

    Is this a permission issue?
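
    The "(2: No such file or directory)" points at a missing path component rather than permissions (a permissions failure would normally report error 13); note the directory tree above uses domain.com while the config says domain1.com. A quick way to inspect every component of the path, using util-linux's namei (a sketch):

        # -o shows the owner, -m the mode, for each path component in turn
        namei -om /home/www/domain1.com/log/error.log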

    Read the article

  • difference between compiled and installed via rpm (zypper)

    - by cherouvim
    On an openSUSE 11.1 box, I download, compile, and install ImageMagick via:

        wget ftp://.../pub/graphics/ImageMagick/ImageMagick-6.7.7-0.zip
        unzip ImageMagick-6.7.7-0.zip
        cd ImageMagick-6.7.7-0
        ./configure --prefix=/usr/local/ImageMagick
        make
        make install

    Everything works nicely until I discover that JPG is not supported:

        identify -list format | grep -i jpg
        [nothing related to JPG returned]

    So I reconfigure and recompile using:

        ./configure --prefix=/usr/local/ImageMagick --with-jpeg=yes --with-jp2=yes
        make
        make install

    But that changes nothing. I end up uninstalling:

        make uninstall

    and installing via zypper:

        zypper install ImageMagick

    This installed version 6.4.3, and now it does support JPG:

        identify -list format | grep -i jpg
        JPG* JPEG      rw-   Joint Photographic Experts Group JFIF format

    Any idea what is going on here? What is a possible reason that this capability of ImageMagick was not there when compiled from source but was there when installed from the rpm? Note that I don't necessarily care a lot about ImageMagick itself (since it now works), but generally about this kind of behaviour, because in one way or another I've seen it happen on other occasions as well.
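
    A plausible explanation worth checking: ImageMagick's configure silently drops JPEG support when the libjpeg development headers aren't installed, whereas the distribution rpm is built against them. A sketch (the -devel package name on openSUSE is an assumption):

        zypper install libjpeg-devel

        # re-run configure and inspect what it decided about the JPEG delegate
        ./configure --prefix=/usr/local/ImageMagick --with-jpeg=yes
        grep -i jpeg config.log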

    Read the article

  • Cannot connect on TFS 2012 server through SSL with invalid certificate

    - by DaveWut
    I saw the problem on some forums and even here, but not as specific as mine. So here's the thing: I've configured a TFS 2012 server on one of my personal servers at home, and now I'm trying to make it available through the internet, with the help of apache2 on a different, UNIX-based physical server. The thing is working perfectly; I don't have any problem accessing the address https://tfs.something.com/tfs through my browser. The address can be pinged and I do have access to the TFS control panel through it.

    How does it work? Well, with apache2 you can set up a virtual host and configure the ProxyPass and ProxyPassReverse settings, so the traffic comes in externally over a secure SSL connection, through a specified domain or sub-domain, but is locally redirected to a clear http session on a different port. This is my current setup.

    As I already said, I can access the web interface, but when I try to connect with Visual Studio 2012, it can't be done. Here's the error I receive: http://i.imgur.com/TLQIn.png The technical information tells me: "The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel." My SSL certificate is invalid and was automatically generated on my UNIX server. Even if I add it to the Trusted Root Certification Authorities, either on my TFS server or on my local workstation, it doesn't work; I still receive the same error.

    Is there a way to completely ignore certificate validation? If not, what have I done wrong? I mean, I've added the certificate to the trusted root certificates; it should work, as mentioned on some forums... If you need more information, please ask me, I'll be pleased to provide more. Dave
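
    For reference, the proxy setup described above usually reduces to something like this on the apache2 side (mod_ssl, mod_proxy and mod_proxy_http enabled) - a sketch, with hostnames, ports, and certificate paths as placeholders:

        <VirtualHost *:443>
            ServerName tfs.something.com
            SSLEngine on
            SSLCertificateFile    /etc/apache2/ssl/tfs.crt
            SSLCertificateKeyFile /etc/apache2/ssl/tfs.key

            # forward the encrypted traffic to the plain-http TFS instance
            ProxyPreserveHost On
            ProxyPass        /tfs http://192.168.1.10:8080/tfs
            ProxyPassReverse /tfs http://192.168.1.10:8080/tfs
        </VirtualHost>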

    Read the article

  • "postgres blocked for more than 120 seconds" - is my db still consistent?

    - by nn4l
    I am using an iSCSI volume on an Open-E storage system for several virtual machines running on a XenServer host. Occasionally, when there is very high disk I/O load on the virtual machines (and therefore also on the storage system), I get this error message on the VM consoles:

        [2594520.161701] INFO: task kjournald:117 blocked for more than 120 seconds.
        [2594520.161787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162194] INFO: task flush-202:0:229 blocked for more than 120 seconds.
        [2594520.162274] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162801] INFO: task postgres:1567 blocked for more than 120 seconds.
        [2594520.162882] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    I understand this message is the kernel informing that these processes haven't run for 120 seconds, most likely because a disk access to the storage system has not yet been processed. But what is the effect on the processes? For example, will the postgres process eventually write its data when the storage system is idle again after a few minutes, so that all data is still consistent? Or will it abort the write, leaving some tables in an inconsistent state?

    I certainly expect the former to be the case: if disk access is slow, postgres (or any other affected process) should just wait as long as it takes. I can live with the application hanging for a few minutes. But if there is a chance of data corruption, then any of these errors is really bad news. Please advise what to do here.

    Read the article

  • Booting Fedora guest VBox on /dev/mapper/vg0-fc17-root

    - by NevilleDNZ
    I already have the following logical volumes:

        host:/dev/mapper/vg0-fc17-boot (guestOS:/dev/hdb) formatted as ext4 (no partition table)
        host:/dev/mapper/vg0-fc17-root (guestOS:/dev/hdc) formatted as ext4 (no partition table)

    Do I have to create the following grub partition to boot a guest VM under VirtualBox?

        host:/dev/mapper/vg-fc17-mbr (guestOS:/dev/hda) with a partition table, and install the grub MBR here

    Or is there a better way? (Maybe grub on vg0-fc17-boot?)

    Read the article

  • Sharing disk volumes across OpenVZ guests to reduce Package Management Overhead

    - by andyortlieb
    Is it feasible to create a single "master" OpenVZ guest which would only be used for package management, and use something like mount --bind on several other OpenVZ guests to sort of trick them into using the environment installed by the master guest?

    The point of this would be so that users can maintain their own containers, and yet stay in sync with the master development environment, so they'll always have the latest & greatest requirements without worrying too much about system administration. If they need to install their own packages, they could put them in /opt or /usr/local (or set a path to their home directory).

    To rephrase, I would like several (developers', for example) OpenVZ guests whose /bin, /usr (and so on...) actually refer to the same disk location as that of a master OpenVZ guest, which can be started up to install and update common packages for the environment shared by all of this group of OpenVZ guests. For what it's worth, we're running Debian 6.

    Edit: I have tried mounting (bind, and readonly) /bin, /lib, /sbin, /usr in this fashion, and it refuses to start the containers, stating that files are already mounted or otherwise in use:

        Starting container ...
        vzquota : (error) Quota on syscall for id 1102: Device or resource busy
        vzquota : (error) Possible reasons:
        vzquota : (error)   - Container's root is already mounted
        vzquota : (error)   - there are opened files inside Container's private area
        vzquota : (error)   - your current working directory is inside Container's
        vzquota : (error)     private area
        vzquota : (error) Currently used file(s):
        /var/lib/vz/private/1102/sbin
        /var/lib/vz/private/1102/usr
        /var/lib/vz/private/1102/lib
        /var/lib/vz/private/1102/bin
        vzquota on failed [3]

    If I unmount these four volumes and start the guest, and then mount them after the guest has started, the guest never sees them mounted.
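
    One approach that might sidestep the vzquota check is to do the binds from a per-container mount hook, which vzctl runs after the container's root is mounted but before its processes start - a sketch, where the container IDs and the master's path are assumptions:

        #!/bin/bash
        # /etc/vz/conf/1102.mount -- executed on the host by vzctl for CT 1102
        source /etc/vz/vz.conf
        source ${VE_CONFFILE}
        # root of the "master" guest (CT 1101 here); it must already be mounted
        MASTER=/var/lib/vz/root/1101
        for d in bin sbin lib usr; do
            mount -n --bind ${MASTER}/${d} ${VE_ROOT}/${d}
        done

    Because the binds target the container's mounted root rather than its private area, vzquota has already initialised by the time they happen, which is the failure mode shown in the error above.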

    Read the article

  • How can I compare two directories to compare missing files, when the directories don't have the same structure?

    - by David Dean
    I've been sent an HDD of new and updated files from an organisation that we are working with, but we already have most of the files sitting on our servers, and we would like to update our local versions to match theirs. Normally, this would be a job for something like rsync, but our problem is that the directory structure they provide is very poorly organised and we've had to rearrange their files in the past to work best with our systems. So, my question is: how can I find out which files in the set they have provided are new or different from the versions that we have, when the directory structures are different? Once that question is answered, we can update the changed files and work out where to put the new files on our system, probably somewhat manually.
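
    One content-based approach: hash every file on both sides and compare the hash sets, which ignores the directory layout entirely - a sketch, with the two top-level paths as placeholders:

        # fingerprint both trees
        (cd /srv/ours   && find . -type f -exec md5sum {} +) | sort > ours.sums
        (cd /mnt/theirs && find . -type f -exec md5sum {} +) | sort > theirs.sums

        # hashes that exist only in their tree = new or changed content
        comm -23 <(awk '{print $1}' theirs.sums | sort -u) \
                 <(awk '{print $1}' ours.sums   | sort -u) > new-hashes.txt

        # map those hashes back to their filenames
        grep -Ff new-hashes.txt theirs.sums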

    Read the article
