We had a hard disk crash of one of the two hard disks in a software RAID with LVM on top.
The server is running Citrix XenServer.
On the hard disk that is still intact, the volume group is detected fine, but only one LV is left.
(some hashes replaced by "x")
# lvdisplay
--- Logical volume ---
LV Name /dev/VG_XenStorage-x-x-x-x-408b91acdcae/MGT
VG Name VG_XenStorage-x-x-x-x-408b91acdcae
LV UUID x-x-x-x-x-x-vQmZ6C
LV Write Access read/write
LV Status available
# open 0
LV Size 4.00 MiB
Current LE 1
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
root@rescue ~ # vgdisplay
--- Volume group ---
VG Name VG_XenStorage-x-x-x-x-408b91acdcae
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 698.62 GiB
PE Size 4.00 MiB
Total PE 178848
Alloc PE / Size 1 / 4.00 MiB
Free PE / Size 178847 / 698.62 GiB
VG UUID x-x-x-x-x-x-53w0kL
I could understand if a full physical volume were lost, but why only the logical volumes?
Is there any explanation for this?
Is there any way to recover the logical volumes?
EDIT: We are working from a rescue system. The problem is that the server does not boot at all (GRUB error 22).
What we are trying to do is access the root filesystem, but everything was inside the LVM.
We have only this:
(parted) print
Model: ATA SAMSUNG HD753LJ (scsi)
Disk /dev/sdb: 750GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 32.3kB 750GB 750GB primary boot, lvm
And this 750GB LVM volume is exactly what we see on top.
EDIT 2:
Output of vgcfgrestore --list, run from the rescue system, as there is no root filesystem to chroot into.
# vgcfgrestore --list VG_XenStorage-x-b4b0-x-x-408b91acdcae
File: /etc/lvm/archive/VG_XenStorage-x-x-x-x-408b91acdcae_00000.vg
VG name: VG_XenStorage-x-x-x-x-408b91acdcae
Description: Created *before* executing '/sbin/vgscan --ignorelockingfailure --mknodes'
Backup Time: Fri Jun 28 23:53:20 2013
File: /etc/lvm/backup/VG_XenStorage-x-x-x-x-408b91acdcae
VG name: VG_XenStorage-x-x-x-x-408b91acdcae
Description: Created *after* executing '/sbin/vgscan --ignorelockingfailure --mknodes'
Backup Time: Fri Jun 28 23:53:20 2013
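For reference, this is the kind of command sequence we are wondering about (a sketch only, assuming the archive file listed above still describes the missing LVs; the VG name below is the masked one from the output, not the real one):
# Inspect the archived metadata first: do the missing LVs appear in it?
less /etc/lvm/archive/VG_XenStorage-x-x-x-x-408b91acdcae_00000.vg
# If they do, restore the VG metadata from that archive and reactivate the VG
vgcfgrestore -f /etc/lvm/archive/VG_XenStorage-x-x-x-x-408b91acdcae_00000.vg VG_XenStorage-x-x-x-x-408b91acdcae
vgchange -ay VG_XenStorage-x-x-x-x-408b91acdcae
lvdisplay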
I mount an NFSv4 export via /etc/fstab and use the shared folder in Nautilus.
There are two issues:
When I copy a large file (around 4 GB) to the NFS server, the progress bar rapidly goes to 2 GB and then basically stops moving. But the copy is still in progress; it is just not displayed correctly.
When I disconnect from the network without unmounting the NFS share, Nautilus freezes.
How can I work around that?
/etc/exports on the server:
/export/share 192.168.0.0/24(rw,sync,insecure,no_subtree_check,anonuid=1000,anongid=1000)
/etc/fstab on the client:
server:/share /mnt nfs4 soft,tcp
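One variation I am considering for the client-side mount (a sketch only, not a verified fix; the timeout values are illustrative) is to make the soft mount give up faster so Nautilus doesn't hang for long when the network goes away:
server:/share  /mnt  nfs4  soft,timeo=30,retrans=2  0 0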
Facebook photo privacy is more complex than most people think - including the bloggers who fill the Internet trying to explain it in simple terms.
Not only is there the basic album-level privacy setting to consider, but also what happens with tagging (and related privacy settings), as well as the Share button when clicked by a friend.
Has anybody seen a good, engineering-type (e.g. UML) diagram? I envision it would include the various privacy "states" a photo can be in, what causes state transitions, and the characteristics of each state.
Thanks
Is there a way in nginx to redirect everything from domain1.com to domain2.com, except for the homepage?
Right now I have:
server {
    listen 80;
    server_name www.domain1.com domain1.com;
    rewrite ^ http://domain2.com$uri permanent;
}
This works, except that I'd like http://domain1.com (without any additional path) to be left alone and not redirected. Basically, I need to redirect everything to avoid broken links, but I want to use the homepage of domain1 to serve a static file.
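One shape this could take (a sketch, not tested; the root path /var/www/domain1 is a placeholder) would be an exact-match location for the homepage plus a catch-all redirect:
server {
    listen 80;
    server_name www.domain1.com domain1.com;

    # Exact-match location: the bare homepage is served locally, not redirected
    location = / {
        root /var/www/domain1;   # placeholder path for the static file
        index index.html;
    }

    # Everything else keeps its path and is redirected permanently to domain2.com
    location / {
        return 301 http://domain2.com$request_uri;
    }
}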
The situation: My wife has a 13" MacBook Pro that she uses for email, Facebook, web surfing, and working from home. I'm about to buy us our first iPad. My wife's brother's computer just went belly-up, and she's contemplating giving him her MacBook and just using the iPad. The question is whether or not this is possible or realistic.
Obviously, the iPad is well-suited for the email/web/Facebook tasks, but the working-from-home thing is an absolute must -- if the iPad can't handle that, it's a deal-breaker. For my wife, working from home means two things:
Accessing her workplace computer's Windows Vista desktop, which she currently does via Remote Desktop.
Editing Office documents locally, which she currently syncs via Dropbox. Being able to edit documents locally is important, because sometimes she will download documents and edit them when she doesn't have network access (e.g. on the subway).
I'm more than happy to get a keyboard dock for her, so typing won't be an issue. Are there any iPad apps she can use to access her work computer and edit her work files?
Thanks for any suggestions!
I need to keep millions of files on an ext4 system.
I understand that having a structure with multiple subdirectories is the general accepted solution. I wonder what would be the optimal approach in terms of number of dirs/subdirs.
For example I tried a structure like 16/16/16/16 (that is, (sub)directories from 1 to 16) and I found that I am able to move 100K files to this structure in 2m50s.
When trying to move 100K files to a 8/8/8/8/8/8 structure it took 11 minutes. So the 16/16/16/16 approach seems to be better but I was wondering if anyone has some empirical experience with an even better dir/subdir distribution.
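For concreteness, here is a sketch of one way to bucket a file into such a 16/16/16/16 layout (the /data prefix and the use of MD5 are illustrative assumptions, not what I actually benchmarked): one hex digit of a hash per level gives 16 subdirectories at each of four levels.
#!/bin/bash
# Sketch: place a file under /data/<h1>/<h2>/<h3>/<h4>/ using the first four
# hex digits of the MD5 of its name, i.e. 16 possible subdirectories per level.
f="$1"
h=$(basename "$f" | md5sum | cut -c1-4)
dir="/data/${h:0:1}/${h:1:1}/${h:2:1}/${h:3:1}"
mkdir -p "$dir"
mv -- "$f" "$dir/"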
I have a script that gets all the info I need about my SharePoint farm:
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") > $null
$farm = [Microsoft.SharePoint.Administration.SPFarm]::Local
$websvcs = $farm.Services | where -FilterScript {$_.GetType() -eq [Microsoft.SharePoint.Administration.SPWebService]}
$webapps = @()
foreach ($websvc in $websvcs) {
    write-output "Web Applications"
    write-output ""
    foreach ($webapp in $websvc.WebApplications) {
        write-output "Webapp Name -->"$webapp.Name
        write-output ""
        write-output "Site Collections"
        write-output ""
        foreach ($site in $webapp.Sites) {
            write-output "Site URL --> -->" $site.URL
            write-output ""
            write-output "Websites"
            write-output ""
            foreach ($web in $site.AllWebs) {
                write-output "Web URL --> --> -->" $web.URL
                write-output ""
                write-output "Lists"
                write-output ""
                foreach ($list in $web.Lists) {
                    write-output "List Title --> --> --> -->" $list.Title
                    write-output ""
                }
                foreach ($group in $web.Groups) {
                    write-output "Group Name --> --> --> -->" $group.Name
                    write-output ""
                    foreach ($user in $group.Users) {
                        write-output "User Name --> --> --> -->" $user.Name
                        write-output ""
                    }
                }
            }
        }
    }
}
I want to write the output to an XML file, and then connect the XML file to HTML to make a page of it for managers to use.
How can I do it?
Thanks for the help!
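For illustration, this is the direction I have in mind (a sketch only; the output paths are placeholders, and it covers web applications, site collections, webs and lists but not the groups/users part): collect the data into objects instead of Write-Output text, then export it as XML and as a quick HTML table.
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") > $null
$farm = [Microsoft.SharePoint.Administration.SPFarm]::Local
$websvcs = $farm.Services | Where-Object { $_.GetType() -eq [Microsoft.SharePoint.Administration.SPWebService] }

# Build one record per web, carrying the parent web application and site collection
$report = @()
foreach ($websvc in $websvcs) {
    foreach ($webapp in $websvc.WebApplications) {
        foreach ($site in $webapp.Sites) {
            foreach ($web in $site.AllWebs) {
                $report += New-Object PSObject -Property @{
                    WebApplication = $webapp.Name
                    SiteCollection = $site.Url
                    Web            = $web.Url
                    Lists          = ($web.Lists | ForEach-Object { $_.Title }) -join '; '
                }
            }
        }
    }
}

# XML for further processing, HTML for a simple manager-friendly page (placeholder paths)
$report | Export-Clixml C:\reports\farm.xml
$report | ConvertTo-Html -Title "SharePoint Farm Report" | Out-File C:\reports\farm.html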
I've installed nginx server on my Mac from MacPorts: sudo port install nginx.
Then I followed the recommendation from the port installation console, created the launchd startup item for nginx, and started the server. It works fine (after I renamed nginx.conf.example to nginx.conf and mime.types.example to mime.types), but I can't stop it. I tried sudo nginx -s stop, but this doesn't stop the server: I can still see the "Welcome to nginx!" page in my browser at http://localhost/, and I still see the master and worker processes of nginx with ps -e | grep nginx.
What is the best way to start/stop nginx on Mac?
BTW, I've added "daemon off;" to nginx.conf, as recommended by various resources.
Thank you.
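For reference, this is the kind of start/stop sequence I'd expect to use (a sketch; the plist name is an assumption, so check /Library/LaunchDaemons for the exact file MacPorts installed):
# Stop nginx; launchd otherwise respawns the master process it manages,
# which may be why "nginx -s stop" alone appears not to work
sudo launchctl unload -w /Library/LaunchDaemons/org.macports.nginx.plist
# Start it again later
sudo launchctl load -w /Library/LaunchDaemons/org.macports.nginx.plist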
Right now I'm using a simple VMware virtual machine with Windows 7 preinstalled. The IP is assigned automatically (it's the same as the main OS's IP).
Is it possible to create several virtual machines that have different hardware specifications and different IP addresses? Here is what I mean regarding these issues:
Specs:
Certainly, you can easily change some specifications in the Settings menu (RAM size, HDD size), but what about advanced settings? For example:
Processor: is it AMD (2500+, 4000+, etc.) or Intel (Core 2, Pentium, etc.)?
RAM: is it Corsair 4 GB 1333 MHz, Kingston 2 x 2 GB 866 MHz, or something else?
HDD: is it a Seagate Barracuda 80 GB 5400 RPM, a Samsung 500 GB 7200 RPM, or some random SSD?
Programs that run inside a virtual machine shouldn't have a clue whether it's VMware or not.
IPs:
Every program launched under the main OS uses the real IP: 93.56.xx.xx
All programs that are launched under virtual machine A use IP 1: 74.78.xx.xx
All programs that are launched under virtual machine B use IP 2: 84.159.xx.xx
I believe that you have to use either VPN or Proxy to solve this problem.
To sum up:
The idea is to create 2-3 independent virtual machines with different hardware specifications and IP addresses. Programs running inside a given virtual machine shouldn't have a clue whether it's VMware or a real PC. Any ideas, tips, or configuration experience would be appreciated!
I'm in the middle of upgrading, and purchasing licensing for 3 of our Servers.
One will be a Windows Server 2008 machine, running SQL Server 2008.
The other two machines will be domain controllers, both running Windows 2003.
Our organisation has 30 Users.
I understand (through our reseller) that a Windows 2008 licence gives "downgrade" rights to use 2003.
Realistically, for the above setup of 3 machines, will I just need one set of 30 CALs for 2008?
I just tried updating the iPhone's firmware, and things went haywire.
iTunes informed me that the update had failed and linked me to this page.
The iPhone is stuck on the image from the top of that page.
I held down the sleep and home buttons until it turned off and turned it back on, and then went through the iPhone's 'restore' mode.
It then errored out again, and I'm stuck back at the logos on the iPhone.
Does anyone know how to fix this?
Thanks
I have a Windows 2003 Server that I want to migrate to new hardware. All the setup and configuration was done by my predecessor. Right now I'm in a real time crunch and I just want to copy all the files and settings to the new machine. Is there an easy way to do this or do I need to manually copy all the files and add all the settings?
A Microsoft KB article suggests "Automated System Recovery"; is this the best way forward?
My Mac Mini outputs to my two new monitors - Dell U2311Hs.
The LED on the bezel displays blue when receiving a signal, or yellow otherwise. Both screens are displaying blue.
It also seems my Mini can see both of them...
However, one of them is black. It just displays black, but appears to be receiving a signal (when I turn the Mac off, it then displays No Signal).
To make things weirder, on startup, the boot up (white with Apple logo) appears on the right monitor (the one that now displays black).
Occasionally, it flickers up on the black screen for 1 second.
I have tried Detect Displays. It appears to do nothing.
I'm also running a dual monitor KVM. Video connections are DVI-D.
How can I fix this situation?
Thanks.
Update
This is the weirdest thing: I used the DVI-D cable that came with the KVM, and it seems to have fixed it. I hadn't bothered before because it looks identical to any other DVI cable (in form and pin-out).
So, I will accept an answer if someone can tell me what the difference between these cables might be.
I have a Gigabyte Z68X-UD4-B3 motherboard. It advertises this new thing called "XHD", which is like RAID but makes a SSD and traditional-style drive work together to enable high speed with high capacity. I don't want to use this feature, and I already have Windows 7 64 installed without using this feature.
When I first installed my 2 hard drives (1 SSD and 1 traditional-style drive) in my machine and booted it up for the first time, it ran a program from the mobo that asked me if I wanted to set up XHD. Thinking it would go to some config screen, I said yes. It immediately started doing something with my drives and finished. I considered that strange, but figured it wouldn't matter when I simply install Windows onto my SSD only. I now have my BIOS and Windows running in AHCI mode with no RAID arrays and separate drives.
My SSD is one of those new Corsair Force GT drives which loses power every so often, causing Windows to BSOD. I've figured everything out about this problem, including installing the latest firmware from Corsair, and the only way to fix it at this point is by installing Intel Rapid Storage Technology to control AHCI instead of Windows, since the Windows AHCI driver disables the drive's power every once in a while and can't be configured not to do so.
I've tried installing Intel Rapid Storage Technology. When I reboot my machine after doing so, it BSODs just after the Windows logo. I've figured out this is because my SSD and my traditional drive are flagged as RAID, as seen in the "Intel Matrix Storage Manager" program found by switching the BIOS hard drive handling to "RAID" mode. This is due to the XHD auto-config program I mentioned earlier.
Normally, the BIOS is set to AHCI, and when the drives boot in AHCI mode, they work perfectly. So, I've concluded the data is stored in AHCI mode but the drives' flags are set to RAID.
I've figured out that I can accomplish my objective by using the "Intel Matrix Storage Manager" program on the mobo (with "Reset disks to non-RAID"), but doing so would cause it to completely wipe the drives I select. I want to simply toggle these flags from RAID to AHCI so Intel Rapid Storage Technology doesn't fail and cause a BSOD upon booting, but without wiping the drives.
I want to make it so that Nautilus requires me to press a key combination to delete files, so that I do not accidentally delete files on a network share with no confirmation again.
Ideally I would make the behavior identical to OS X's Finder, so that I press Ctrl+Backspace to delete files.
Hi all,
I'm a newbie systems administrator, so any advice would be great. I would like to set up Ubuntu 8.04 LTS in a small management-consulting office (around 50 workstations) instead of Windows XP. I would install MS Office 2007 via Wine (*). It would be a fresh installation, so the migration would be less of a pain.
The new setup would also include a small server as a document repository and a backup server for now. Later, I would install other goodies like an IM server, a document-management solution, and other collaborative tools.
What do you advise in this scenario? Do you think it is viable? Should I try to convince my managers this is a good idea? I consider myself a fairly experienced user of both systems, and I'm the only guy in charge of everything. I need to cut costs, and I think that antivirus and antimalware software are a waste of money and time. Is this a good idea, or should I give up, lock down the Windows systems, and install AV software? Is there anything else in this setup I'm not foreseeing?
(*) The only catch on my test machine so far has been that Office SmartArt doesn't work properly; the rest of Office 2007 seems OK.
I have a thinkpad t61 with a UPEK fingerprint reader. I'm running ubuntu 9.10, with fprint installed. Everything works fine (I am able to swipe my fingerprint to authenticate any permission dialogues or "sudo" prompts successfully) except for actually logging onto my laptop when I boot up or end my session.
I receive an error below the gnome login that says
"Could not locate any suitable fingerprints matched to available hardware."
What is causing this?
Here are the contents of the /etc/pam.d/common-auth file:
#
# /etc/pam.d/common-auth - authentication settings common to all services
#
# This file is included from other service-specific PAM config files,
# and should contain a list of the authentication modules that define
# the central authentication scheme for use on the system
# (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
# traditional Unix authentication mechanisms.
#
# As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
# To take advantage of this, it is recommended that you configure any
# local modules either before or after the default block, and use
# pam-auth-update to manage selection of other modules. See
# pam-auth-update(8) for details.
# here are the per-package modules (the "Primary" block)
auth sufficient pam_fprint.so
auth [success=1 default=ignore] pam_unix.so nullok_secure
# here's the fallback if no module succeeds
auth requisite pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
auth required pam_permit.so
# and here are more per-package modules (the "Additional" block)
auth optional pam_ecryptfs.so unwrap
# end of pam-auth-update config
#auth sufficient pam_fprint.so
#auth required pam_unix.so nullok_secure
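For completeness, the GDM-specific PAM file is probably also relevant (a sketch of what I'd expect there, assuming a default Ubuntu layout): the login screen only consults the modules above if /etc/pam.d/gdm pulls in common-auth or lists pam_fprint.so itself.
# /etc/pam.d/gdm should contain a line like this so the stack above applies at login:
@include common-auth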
I'm trying to implement the following scenario:
One VM-host, multiple guest VMs, each one gets its own IP-address (and domain).
Our server has only one physical interface, so the intended approach is to add virtual interfaces on eth0. To complicate our situation, the provider uses port security on their switches, so I can't run the guest interfaces in bridged mode, because then the switch detects a "spoofed" MAC address and kills the interface (permanently, forcing me to call support, which I'm sure will get them a little angry the third time ;) ).
My first guess was to use iptables and NAT to forward all packets from one virtual interface to another, but iptables doesn't seem to like virtual interfaces (at least I can't get it to work properly). So my second guess is to use the source IP of the packets arriving at the public interface.
Let's assume libvirt creates a virbr0-network with 192.168.100.0/24 and the guest uses 192.168.100.2 as IP-address.
This is what I tried to use:
iptables -t nat -I PREROUTING --src public_ip_on_eth0:0 -p tcp --dport 80 -j DNAT --to-destination 192.168.100.2:80
That doesn't give me the intended results either (accessing the server times out).
Is there a way to do what I'm trying to do, or even to route all traffic to a certain IP on a virtual interface to the VM's device?
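For reference, this is the shape of rule set I am aiming for (a sketch only; $PUBLIC_IP2 is a placeholder for the second public address, and assigning it as an alias on eth0 is an assumption):
# Add the extra public address as an alias on eth0 (placeholder address)
ip addr add $PUBLIC_IP2/32 dev eth0
# Match on the *destination* address of incoming packets rather than --src,
# and DNAT them to the guest on the libvirt NAT network
iptables -t nat -A PREROUTING -d $PUBLIC_IP2 -p tcp --dport 80 -j DNAT --to-destination 192.168.100.2:80
# Allow the forwarded traffic and make sure IP forwarding is enabled
iptables -A FORWARD -d 192.168.100.2 -p tcp --dport 80 -j ACCEPT
echo 1 > /proc/sys/net/ipv4/ip_forward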
I installed nginx using ports and started it with sudo. Accessing the nginx welcome page from localhost works perfectly, however accessing it from an external computer fails.
Doing an nmap on the computer from the outside reveals
80/tcp filtered http
So clearly the Mac firewall is blocking the port. I then proceeded to add the nginx executable to the firewall exception list, as seen in this image; however, nmap still shows port 80 as filtered, and I'm unable to access the webpage. The exact binary that is in the list is /opt/local/sbin/nginx, which to my knowledge is correct.
Any ideas what I should do? Thanks!
P.S. Turning the firewall off does allow me to access the website from the outside world, however that isn't an ideal solution.
I have an old Ubuntu 8.10 32-bit with MySQL 5.0.67.
There's 5.7GB of data in it and it grows by about 100MB every day.
About 3 days ago, the MySQL instance began dying suddenly and quietly (no log entry) during the nightly mysqldump.
What could be causing it?
Upgrading MySQL is a long-term project for me, unless there happens to be a specific bug in 5.0.67, in which case I guess I'll just need to reprioritize.
I'm hoping somebody might be familiar with this problem since this is a fairly popular version bundled with Ubuntu 8.10.
Thanks
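For reference, a diagnostic sketch (log paths assume a standard Ubuntu layout and may differ): if the kernel's OOM killer is terminating mysqld during the dump, it usually leaves traces in the kernel/system logs even when MySQL's own log is empty.
# Check whether the kernel's OOM killer is terminating mysqld during the dump
dmesg | grep -i -E 'oom|killed process'
grep -i -E 'oom|mysqld' /var/log/syslog /var/log/kern.log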