I would like to clone a smaller LVM-formatted disk onto a larger one using dd and boot that disk in the same machine. Do I need to take any special considerations into account for LVM?
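For concreteness, here is roughly what I have in mind (a sketch only; /dev/sda is the old disk, /dev/sdb the new larger one, and the partition and volume names are just examples from a typical layout):
# raw clone of the whole smaller disk onto the larger one
dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync
# the clone keeps the old sizes, so I assume I'd then grow the PV partition
# (e.g. with fdisk/parted) and afterwards do something like:
pvresize /dev/sdb2
lvextend -l +100%FREE /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00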
Hi guys,
Now I'm using a router to connect my computer to the Internet. I installed VMware Workstation 7 on my computer and installed CentOS 5.3 as a guest OS. I want to use PuTTY to connect to the guest OS from my host computer, and the guest OS also needs to be able to access the Internet. How should I configure my host's network, the guest OS, and my router? I have little knowledge of networking, so could you give me step-by-step directions or something similar? Great thanks!
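For what it's worth, here is the sort of check I imagine doing once the networking mode is chosen (a sketch, assuming bridged networking so the router gives the guest its own LAN address, and assuming the guest's interface is eth0):
# on the CentOS 5.3 guest: confirm it got an address from the router
ifconfig eth0
# make sure the SSH daemon is running so PuTTY can connect to it
service sshd status || service sshd start
# then, from the host, point PuTTY at the guest's IP address on port 22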
After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would try a reinstall based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki).
When I try to load my iptables rules inside the container I get errors which I believe are related to kernel modules not being loaded into the container from the /vz/XXX.conf template model.
I've been testing with a few posts I've found, but I got stuck on this error:
WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
Error occurred at line: 2
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
I read about the template not loading all iptables modules so I added modules to the XXX.conf of the VZ virtual machine like this:
IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"
As the error remained, I read that I should rebuild the module dependencies on the virtual machine with depmod -a, but this returned an error:
WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory
So I then read about creating the directory empty and rerunning depmod -a.
I no longer get the dependencies error, but now I get the following, and I don't have a clue how to proceed:
WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
FATAL: Module ip_tables not found.
iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
Error occurred at line: 2
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
I understand that the iptables rules may have to be different inside the container, and perhaps some of the rules we are trying to apply (taken from our physical server) are not compatible, but these are just source-IP and destination-port checks that I would like to have available. I've heard that the CentOS template has no issues with this, so I assume it has to do with the container configuration.
Any help would be greatly appreciated.
I've set up a CentOS server on a Mac using VirtualBox, but the person who did it for me forgot the root password he set at the beginning, and now my website has a lot of problems due to permission issues.
So what can I do to retrieve the password, or at least to fix the permissions without using root?
The group for my website is apache, and I believe I'm not in that group.
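From what I've read, resetting the root password from the VirtualBox console might look roughly like this (a sketch, assuming a CentOS 5/6-style system with GRUB and no GRUB password):
# at the GRUB menu, press 'a' on the kernel line and append the word: single
# the system then boots to a root shell without asking for a password
passwd root
# set the new password, then
reboot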
Suppose I send a mail using the following command:
mailx [email protected]
Does mailx first try to find my ISP's SMTP server to relay the mail, or does it connect directly to the recipient's mail server? Does it depend on whether my PC has a public IP address or is behind NAT?
How do I check the settings of mailx on my PC?
How can I verify this using tcpdump?
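For the tcpdump part, I was thinking of something like this while sending the mail (a sketch; eth0 is an assumption about the outgoing interface):
tcpdump -i eth0 -nn 'tcp port 25'
# a single connection to my ISP's relay would suggest relaying;
# a connection to the recipient domain's MX host would suggest direct delivery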
Using iptraf, tcpdump and Wireshark I can see a SYN packet coming in, but only the ACK flag is set in the reply packet.
I'm running Debian 5 with kernel 2.6.36
I've turned off window_scaling and tcp_timestamps, tcp_tw_recycle and tcp_tw_reuse:
cat /etc/sysctl.conf
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_window_scaling = 0
net.ipv4.tcp_timestamps = 0
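In case it matters, /etc/sysctl.conf only takes effect at boot or when loaded explicitly; a quick sketch of how the running values can be confirmed:
sysctl -p /etc/sysctl.conf
sysctl net.ipv4.tcp_window_scaling net.ipv4.tcp_timestamps net.ipv4.tcp_tw_recycle net.ipv4.tcp_tw_reuse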
I've attached an image of the wireshark output.
http://imgur.com/pECG0.png
Output of netstat:
netstat -natu | grep '72.23.130.104'
tcp 0 0 97.107.134.212:18000 72.23.130.104:42905 SYN_RECV
I've been doing everything possible to find a solution and have yet to figure out the problem, so any help/suggestions are much appreciated.
UPDATE 1: I've set tcp_syncookies = 0 and noticed I am now replying with 1 SYN+ACK for every 50 SYN requests. The host trying to connect is sending a SYN request about once every second.
PCAP FILE
I'm beginning to deal with more than one user on my system (it's a VPS serving some sites) and I need to make sure I understand how group permissions work.
Here's my setup:
I have an account named "admin". It's basically the primary account that is used for serving most of the sites I control myself.
Now, I added a second account named "Ville" as one of my users wants to be able to administer that site.
So, I can do this the easy way and just chown their domains folder to the ville user and voilà, they have permission to do whatever they need.
However, let's say I also want to give the admin user access to the files (modifying and all): how can I put both users into the same group and give them both permission?
I've tried doing:
sudo usermod -a -G admin ville
to add ville to the admin group, but ville still cannot edit files owned by admin. Permissions on the primary directory for the ville user are read/write for both owner and group, and the files are currently owned by admin:admin.
But ville still can't write into the directory.
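For reference, this is the kind of shared-group setup I've been trying to get to (a sketch; /home/ville/domains is a made-up path standing in for the site directory):
# add ville to the admin group (only takes effect at ville's next login)
sudo usermod -a -G admin ville
# make the tree group-owned by admin and group-writable
sudo chgrp -R admin /home/ville/domains
sudo chmod -R g+rwX /home/ville/domains
# setgid on directories so newly created files inherit the admin group
sudo find /home/ville/domains -type d -exec chmod g+s {} \;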
So, what should I be doing here to get this right and secure at the same time?
Thank you.
So I have set up Apache to serve my PHP pages.
I read about Squid, but I don't understand why or how I should use it to speed up my web server.
From what I've learned, Squid sits on the same network (or another one) and caches content requested by web browsers; when another browser wants the same page, Squid returns the locally cached copy, so it never sends a request to the Apache server (faster response time for the client and reduced load for the server). So it seems that Squid is for the client side (web browsers) and has nothing to do with the server side (Apache).
But then some people describe how they have sped up Apache using Squid, so I'm confused.
Could Squid be used on the server side too? And how would that work?
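From the bits I've pieced together, the server-side ("accelerator" or reverse-proxy) setup would look something like this in squid.conf (a sketch only, assuming Apache has been moved to port 8080 on the same host and the site is www.example.com; directives vary between Squid versions):
http_port 80 accel defaultsite=www.example.com
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=apache
acl mysite dstdomain www.example.com
http_access allow mysite
cache_peer_access apache allow mysite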
Long Story Short
I am working with Tar archives that contain PNG images in base64 encoding. I would like to use Bash (or whatever else works) to hook into the extraction function of Tar so that the PNG images are decoded from base64 back to ordinary PNG files after they are unpacked. A simple cat "$input_file" | base64 -d > "$output_file" successfully decodes the images.
Is there a way I can hook into tar -xf so that users do not have to do any (or minimal) extra work to decode the images?
In the GNU Tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are in fact variables reserved to hold the names of functions to be hooked into various moments of Tar's execution. However, the documentation explains that these variables, along with other variables used to configure Tar, live in a file named backup-specs. Unfortunately, the path to this file is not given, and running sudo find / -name backup-specs tells me that this file is not present on my Ubuntu 13.04 system.
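If there is no real hook, a wrapper along these lines would also be acceptable (a bash sketch; the function name and the .b64 extension are made up, and my archives may name the encoded files differently):
#!/bin/bash
# extract an archive, then decode any extracted .b64 files into .png files
untar_and_decode() {
    local archive="$1"
    tar -xf "$archive" || return 1
    find . -name '*.b64' -print0 | while IFS= read -r -d '' f; do
        base64 -d "$f" > "${f%.b64}.png" && rm "$f"
    done
}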
Background Information not included in the Long Story Short
I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have begun writing a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited ability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving effect data as a Tar archive (I haven't pushed that to GitHub yet). However, the images in that Tar archive cannot be manipulated unless they are decoded from base64.
It is possible to monitor windows in screen (I mean the terminal multiplexer called screen) with Ctrl-a M. However, I only notice the notification when I am attached and looking at some window. What I want is to somehow also be notified when the screen session in question has been detached with Ctrl-a d.
That is, I issue the command to monitor a window in screen, then detach that screen; I now want to get a notification, in some form (a string in the bash session I'm using, an email, or anything), if the monitor detects activity.
Is this possible, and if yes how?
Is it possible to create a symlink to a directory, like /var/lib/tomcat6/webapps/MyWar, that I can access from anywhere? I want to be able to say cd myapp from any point in the directory tree and go to that directory. Or is that only possible from the directory where I create the symlink?
Do I have to update my ~/.bashrc file to include an alias like alias myapp="cd /var/lib/tomcat6/webapps/MyWar" and then just type myapp from anywhere? What is the best way to handle this so I don't always have to type the long directory path? I also want to be able to use that path in, say, a copy command, so an alias wouldn't help there. Hopefully I can do something similar to the way ~ maps to the home directory in any command.
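Something like this in ~/.bashrc is what I'm imagining, if it's sane (the variable name myapp is of course arbitrary):
# a variable usable in any command, not just cd
export myapp=/var/lib/tomcat6/webapps/MyWar
# lets "cd MyWar" work from anywhere
export CDPATH=.:/var/lib/tomcat6/webapps
# usage:
cd "$myapp"
cp some.war "$myapp"/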
I've had a Dell Latitude laptop since about 2000 without managing to destroy it. A month ago the Windows 2000 system on it did something stupid to its file system and Windows was completely lost.
No point in reinstalling Windows 2000, so I installed an Ubuntu Linux on the laptop. Everything seems normal (installed, rebooted, I can log in, run GnuChess, poke about).
... but ... when I attempt to launch Firefox from the top bar menu icon, I get a bunch of disk activity, the whirling cursor icon goes round a bit and then everything stops: disk, icon, mouse. Literally nothing happens for 5 minutes. Ubuntu is dead, as far as I can tell. A reboot, and I can repeat this reliably. So on the face of it, everything works but Firefox. That seems really strange.
The only odd thing about this system when Firefox is starting is that while it has an Ethernet port (which worked fine under Windows), it isn't actually plugged into an Ethernet network.
As this is the first Firefox launch since the Ubuntu install, maybe Firefox mishandles the lack of Internet access? But why would that lock up Ubuntu?
(I need to go try the obvious experiment of plugging it in).
I am currently using Ubuntu 10.10 on my laptop; my wireless chip is a Broadcom BCM4312 802.11b/g LP-PHY. I tried to activate the driver for the chip via System - Administration - Additional Drivers and enabled the Broadcom STA wireless driver.
But the laptop still can't detect any wireless signal.
Do I have to do any additional work to make the chip work? And how can I test whether there is physical damage to the chip itself?
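In case it clarifies what kind of test I mean, I could run something like this (a sketch; wlan0 is just an assumed interface name, and rfkill may need to be installed):
# is the chip visible on the PCI bus?
lspci | grep -i broadcom
# is the interface present, and is it blocked by a kill switch?
iwconfig
rfkill list
# scan for networks if the interface is up
sudo iwlist wlan0 scan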
The default Emacs on OS X, 22.1, appears to have some problems with the info/help system.
When you hit C-h i, it says "Can't find the Info directory node".
It works in Aquamacs, but I downloaded the precompiled Emacs 23 binaries from here and I still have the same problem.
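One workaround I've been considering is pointing Emacs at the Info files through the environment (a sketch, assuming the files live under /usr/share/info and Emacs is started from a shell that reads this):
# in ~/.profile or ~/.bashrc
export INFOPATH=/usr/share/info:$INFOPATH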
I have set up a monitoring server with the following collectd configuration:
<Plugin network>
Listen "0.0.0.0" "25826"
</Plugin>
Now my clients are sending data to the monitoring server (verified through tcpdump). The collection directory /var/lib/collectd/rrd also shows that the data is being dumped:
[ec2-user@x rrd]$ ll
total 4
drwxr-xr-x 11 root root 4096 Nov 20 17:53 x-web-1.y.com
[ec2-user@x rrd]$
I have also used find . -mmin 1 to verify that it's being constantly updated:
[ec2-user@x rrd]$ find . -mmin 1
./x-web-1.y.com/interface-eth0/if_errors.rrd
./x-web-1.y.com/interface-eth0/if_packets.rrd
./x-web-1.y.com/interface-eth0/if_octets.rrd
./x-web-1.y.com/disk-xvda1/disk_time.rrd
./x-web-1.y.com/disk-xvda1/disk_ops.rrd
./x-web-1.y.com/disk-xvda1/disk_octets.rrd
./x-web-1.y.com/disk-xvda1/disk_merged.rrd
But when I look it up through collectd-web, I don't see the clients.
What might be wrong in my setup?
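For completeness, I gather collectd-web reads the data directory from a collection.conf, which I assume would need something like this (a sketch; the file location and the libdir path vary by install):
datadir: "/var/lib/collectd/rrd/"
libdir: "/usr/lib/collectd/"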
Hello friends,
In my computer the LAN card model is Realtek RTL8168B/8111B PCI-E Gigabit Ethernet NIC (NDIS 6.20).
My system dual-boots Windows 7 and Red Hat 5.1.
Windows 7 automatically detected this LAN card, but in Red Hat the card is not detected. I have looked everywhere I can think of, such as the network settings and neat-tui, but nothing shows the LAN card.
I tried Google as well, but all the results offer Windows software for this LAN card.
So please, can anyone give me a link where I can download Linux drivers for this card, so that I can use the Internet there?
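To show what I mean by "not detected", this is roughly what I would check on the Red Hat side (a sketch; r8169 is the in-kernel driver that usually covers this chip, though the Red Hat 5.1 kernel may be too old for this revision):
# is the card visible on the PCI bus?
lspci | grep -i ethernet
# is a driver loaded for it?
lsmod | grep r8169
# try loading the driver by hand and see if an interface appears
modprobe r8169
ifconfig -a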
Thanks a lot in advance
Deepak Narwal
Every time I run
service network restart
this is what I get:
Shutting down interface eth0: Device state: 3 (disconnected)
[ OK ]
Shutting down interface eth1: [ OK ]
Shutting down loopback interface: Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
[ OK ]
Bringing up loopback interface: Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
[ OK ]
Bringing up interface eth0:
** (process:12951): WARNING **: fetch_connections_done: error fetching user connections: (2) The name org.freedesktop.NetworkManagerUserSettings was not provided by any .service files.
Active connection state: activating
Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/1
state: activated
Connection activated
[ OK ]
Here is my ifcfg-eth0
# Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller
DEVICE=eth0
BOOTPROTO=dhcp
DEFROUTE=yes
DHCPCLASS=
HWADDR=xxx
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
ONBOOT=yes
OPTIONS=layer2=1
PEERDNS=yes
PEERROUTES=yes
TYPE=Ethernet
UUID=xxx
And my ifcfg-eth1
# Intel Corporation 82541PI Gigabit Ethernet Controller
DEVICE=eth1
HWADDR=xxx
ONBOOT=no
And my ifcfg-lo
DEVICE=lo
IPADDR=127.0.0.1
NETMASK=255.0.0.0
NETWORK=127.0.0.0
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback
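One thing I was considering, though I'm not sure it's right, is telling NetworkManager to leave the loopback interface to the initscript by adding this to ifcfg-lo:
NM_CONTROLLED=no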
Any ideas?
I use reverse-i-search often, and that's cool. Sometimes, though, when pressing Ctrl+r multiple times, I go past the command I am actually looking for. Because Ctrl+r searches backward in history, from newest to oldest, I have to:
cancel,
search again and
stop exactly at the command, without passing it.
While at the reverse-i-search prompt, is it possible to search forward, i.e. from where I stand toward the newest entries? I naively tried Ctrl+Shift+r, with no luck. I heard about Ctrl+g, but that is not what I am looking for here. Does anyone have an idea?
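For context, I gather readline has a forward-search-history function normally bound to Ctrl+s, but Ctrl+s seems to be eaten by terminal flow control; this is the kind of thing I mean, if it's on the right track (a sketch):
# free Ctrl+s from XON/XOFF flow control in this terminal
stty -ixon
# now Ctrl+s at the prompt should do an incremental forward search;
# alternatively, bind another key in ~/.inputrc, for example:
#   "\C-t": forward-search-history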
Our development server at work is taking a dump on us, so at this point we're repurposing some other servers we have in our server room to replace it.
My boss wants me to test the servers before I even try installing anything on them. How do we go about this?
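For example, would something along these lines be a reasonable burn-in (a sketch; tool availability and device names like /dev/sda are assumptions):
# disk health via SMART (smartmontools)
smartctl -H /dev/sda
smartctl -t long /dev/sda
# memory: boot the memtest86+ entry from the boot menu and let it run overnight
# CPU and memory load test (the "stress" package)
stress --cpu 4 --vm 2 --vm-bytes 512M --timeout 3600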
I would like to find a way to open a URL without using a web browser, since the browser makes my computer run very slowly. Is there an easy way to do this from a command prompt, using ssh to access another network? Thanks in advance!
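Something like this is what I'm hoping for (a sketch; user@remote-host and the URL are placeholders, and it assumes curl or wget is installed on the remote machine):
# fetch a page from the other network's shell and page through it locally
ssh user@remote-host 'curl -s http://example.com/' | less
# or save it to a local file
ssh user@remote-host 'wget -qO- http://example.com/' > page.html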
I am trying to install a statistical program which requires the GNU Scientific Library (GSL). I have successfully installed GSL through yum, but the statistical program gives an error when I try to build it. I think there is a linking problem. How can I solve it?
$ sudo yum install gsl.x86_64
Installed:
gsl.x86_64 0:1.15-3.fc16
Dependency Installed:
atlas.x86_64 0:3.8.4-1.fc16
$ tar -xvzf prog.tgz
$ cd prog
$ make
$ gcc -O3 -Wall -Wshadow -pedantic -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -DVER32 -I/opt/local/include/ -L/opt/local/lib/ -c -o prog.o prog.c
In file included from prog.c:16:0:
prog.h:7:30: fatal error: gsl/gsl_sf_gamma.h: No such file or directory
compilation terminated.
make: *** [prog.o] Error 1
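My current suspicion is that the header is shipped in the gsl-devel package (the base gsl package contains only the shared libraries), so something like this may be the missing step (a sketch):
sudo yum install gsl-devel.x86_64
# confirm where the header landed (should be under /usr/include/gsl/)
rpm -ql gsl-devel | grep gsl_sf_gamma.h
# then rebuild
make clean && make
# at link time the program will also need -lgsl -lgslcblas -lm
# (the Makefile may already pass these)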
I have pretty much set up my VPS now, and want to upload some basic files to the server.
How is this done on Ubuntu 9.10?
I have PuTTY and use the terminal there...
Is there an FTP program, as on regular managed hosting, to just upload files with?
I was thinking about proftpd, but I don't have a clue how to get it working.
I am using my home laptop with Windows XP to manage the VPS.
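Since I already have PuTTY, I was wondering whether its pscp companion would be enough from the Windows XP side, something like this (a sketch; the user, host and paths are placeholders):
pscp C:\site\index.html root@my-vps-ip:/var/www/
pscp -r C:\site root@my-vps-ip:/var/www/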
Thanks