Search Results

Search found 12793 results on 512 pages for 'format specifiers'.


  • How can I recover XFS partitions from a formatted HD?

    - by giuprivite
    I deleted the partition table of my HD. I wanted to format another one, but by mistake, I formatted the wrong one. Then I also created some new partitions on it. Now I would like, if possible, to recover my old data. The old configuration was this: A primary NTFS partition with Windows, and a secondary partition with four logical partitions: a swap and three XFS partitions (two for Ubuntu and OpenSuSE, and one with the home for both systems). This is the output I get when I run gpart in a terminal: ubuntu@ubuntu:~$ sudo gpart /dev/sdb Begin scan... Possible partition(Windows NT/W2K FS), size(39997mb), offset(0mb) Possible extended partition at offset(39997mb) Possible partition(Linux swap), size(8189mb), offset(39997mb) Possible partition(SGI XFS filesystem), size(40942mb), offset(48187mb) Possible partition(SGI XFS filesystem), size(40942mb), offset(89149mb) Possible partition(SGI XFS filesystem), size(175044mb), offset(130112mb) End scan. Checking partitions... Partition(OS/2 HPFS, NTFS, QNX or Advanced UNIX): primary Partition(Linux swap or Solaris/x86): logical Partition(Linux ext2 filesystem): logical Partition(Linux ext2 filesystem): orphaned logical Partition(Linux ext2 filesystem): orphaned logical Ok. Guessed primary partition table: Primary partition(1) type: 007(0x07)(OS/2 HPFS, NTFS, QNX or Advanced UNIX) size: 39997mb #s(81915360) s(63-81915422) chs: (0/1/1)-(1023/254/63)d (0/1/1)-(5098/254/51)r Primary partition(2) type: 015(0x0F)(Extended DOS, LBA) size: 265245mb #s(543221849) s(81915435-625137283) chs: (1023/254/63)-(1023/254/63)d (5099/0/1)-(38912/254/2)r Primary partition(3) type: 000(0x00)(unused) size: 0mb #s(0) s(0-0) chs: (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r Primary partition(4) type: 000(0x00)(unused) size: 0mb #s(0) s(0-0) chs: (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r Looking at the first eight lines, it seems the data are still there... but I don't know how to recover them. I have a free second HD of about 500 GB (the formatted one is 320 GB) that I can use for the recovery process.
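
    A hedged recovery sketch (device names are assumptions — verify with fdisk -l first, and note that two of these steps write to a disk): clone the damaged 320 GB disk to the spare 500 GB drive, then let gpart write its guessed table onto the clone and try mounting read-only. The logical partitions inside the extended container may still need recreating by hand at the offsets gpart reported.

      # clone the damaged disk to the spare drive; all experiments then happen on the copy
      sudo ddrescue -f /dev/sdb /dev/sdc sdb_rescue.log
      # write gpart's guessed partition table onto the clone
      sudo gpart -W /dev/sdc /dev/sdc
      # re-read the table and try an XFS partition read-only
      sudo partprobe /dev/sdc
      sudo mount -t xfs -o ro /dev/sdc6 /mnt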

    Read the article

  • Postfix won't pipe to PHP file through aliases file

    - by jfreak53
    I'm trying to pipe from Postfix to a command. According to the Postfix logs it worked, but when I check the command it didn't. This is a fresh Postfix install. This is my alias file: # See man 5 aliases for format postmaster: root support: "| /usr/bin/php -q /var/www/pipe/pipe.php" I run sendmail [email protected], then type the message, and then on a separate line type . and it goes. I check the Postfix log /var/log/mail.log and this is what it states: Nov 2 15:32:33 server3 postfix/local[13284]: 42C429E0B5: to=<[email protected]>, relay=local, delay=156, delays=156/0.01/0/0.05, dsn=2.0.0, status=sent (delivered to command: /usr/bin/php -q /var/www/pipe/pipe.php) So according to that it worked, but it doesn't. If I run echo 'text' | /usr/bin/php -q /var/www/pipe/pipe.php it does work just fine. Any ideas what I did wrong? I know piping works; I originally checked it by running that command above WITHOUT the quotes, so just support: | /usr/bin/php -q /var/www/pipe/pipe.php What it did there was append my email, header and all, to the file pipe.php. So I know Postfix was piping it, but when I put in the quotes it says it's going, but it's not according to my script.
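
    Two things worth ruling out (assumptions on my part, not a confirmed diagnosis): Postfix only picks up /etc/aliases after the database is rebuilt with newaliases, and the piped command runs as an unprivileged delivery user, so the script must be able to write wherever it tries to log. A quick sketch (nobody is a stand-in for whatever user local delivery uses on this box):

      sudo newaliases        # rebuild the aliases database after editing /etc/aliases
      # re-run the pipe as a non-root user to surface permission problems
      sudo -u nobody /usr/bin/php -q /var/www/pipe/pipe.php < /tmp/test-message.txt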

    Read the article

  • Excel 2007: Named ranges problems when linking workbooks

    - by Mike
    I've 30+ workbooks, each with 5 specific worksheets (formatted the same). Each worksheet's data needs to be linked to a master workbook, so that I end up with 5 master workbooks and all the specific data in one long table format $A$2:$I$750. (Are you still with me? ;)) I don't have access to a database, so I'm having to link the sheets to their master workbook directly. I've highlighted the data I need; named the range; and then tried referencing this from my master workbook. I get the #VALUE error symbol when I try to link (=[WorkbookName]!MyNamedRange) to a cell that doesn't match the top left cell of my range. Example: MyNamedRange is always =$A$2:$I$43 on one specific sheet. On my master workbook it works if it's referenced at A2, but I get #VALUE if it's referenced at A1 or A44. Any ideas? I'm trying to link my data in one continuous table so I can run a pivot on it, and other things. Can it be done like this, or should I just copy and paste? I'm trying to keep things 'linked' so I do not need to spend time C&Ping all day. Many thanks Mike.
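
    The symptom "works in the range's own rows, #VALUE elsewhere" matches implicit intersection: a multi-cell name entered in a single cell only resolves where the formula's row overlaps the named range's rows (rows 2-43 here), which is why A2 works but A1 and A44 fail. A workaround sketch (the sheet name is an assumption): address the range explicitly with INDEX, e.g. =INDEX([WorkbookName.xlsx]Sheet1!MyNamedRange,ROW()-1,COLUMN()) entered at A2 of the master sheet and filled down and across.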

    Read the article

  • Can I tell if crashplan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, if CrashPlan has backed up a particular file, including the current updates to that file. I.e., that the current contents of a file are backed up. It's relatively easy to tell when CrashPlan last backed up a file: its file name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy, I could compare the backup time with the last modification time to the file, but that method appears to be somewhat dubious. A couple of methods I could think of, but I don't know how: Compare the current file to CrashPlan's metadata about that file. This needs knowledge about the format of CrashPlan's "cache" files as well as the hashing system used. This might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable. Restore the file to a temporary directory, and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way. I'll describe what I'm trying to achieve. It would be nice to know how to do the above, even if there are alternative methods for the following: I'm using CrashPlan for continuous backups of my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) that a group of WALs is backed up, remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed up, too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet. I'd like to be able to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the "archive_command" from "copy to archive directory" into "confirm CrashPlan has backed up the current version", and do away with the archive directory completely). (And, yes, I'm doing regular pg_dumpall runs, in addition to the above.)
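
    For what it's worth, here is the log-scraping method made scriptable — a minimal sketch, with the same caveat that it is dubious, and with the assumption that each backup_files.log line starts with a date and time followed by the file path (verify against the actual log format):

      #!/bin/bash
      # crude check: was this file modified after its last logged CrashPlan backup?
      f="$1"
      last=$(grep -F "$f" /usr/local/crashplan/log/backup_files.log.0 | tail -n 1 | awk '{print $1, $2}')
      [ -z "$last" ] && { echo "no backup logged"; exit 2; }
      if [ "$(stat -c %Y "$f")" -le "$(date -d "$last" +%s)" ]; then
          echo "backed up"
      else
          echo "modified since last backup"
      fi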

    Read the article

  • virtual disk image - file or partition

    - by tylerl
    I'm looking at the differences between using a file versus a partition to store a virtual disk image in VM use. The common knowledge is that partition-based images are faster than file-based images because of decreased overhead. It makes sense, but I've never seen any actual numbers. My own testing shows a different result. When I benchmark a direct-to-partition virtual disk, then format that same partition with ext4, create a virtual disk image stored on that ext4 filesystem, and benchmark that, I see no speedup at all for the direct-to-partition virtual disk. Instead, on some systems the file-based image is even faster (possibly due to host OS caching or something like that). This test was repeated many times on many systems, with fairly consistent results. So, perhaps throwing out the performance justification: is it still considered better to use a partition rather than a virtual disk image? Is there some other reason why direct partition access is better than image files? Or is there some reason to go the other way around? Perhaps an advantage in one of the virtual disk file formats that you don't get with raw partition images?
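
    For anyone repeating the comparison, a benchmark sketch that takes the host page cache out of the equation (paths and device names are placeholders); O_DIRECT makes the file-vs-partition numbers much more comparable:

      # raw partition
      sudo fio --name=part --filename=/dev/sdb1 --direct=1 --rw=randrw --bs=4k --size=1G --runtime=60
      # file-backed image on an ext4 filesystem on the same disk
      sudo fio --name=file --filename=/mnt/ext4/disk.img --direct=1 --rw=randrw --bs=4k --size=1G --runtime=60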

    Read the article

  • Interactive console based CSV editor

    - by Penguin Nurse
    Although spreadsheet applications for editing CSV files on the console used to be one of the earliest killer applications for personal computers, only a few of them are still actively maintained, and even less documentation about them is. After extensive searching of the web, manpages and source code, I ended up with the following three applications, all of which have fundamental drawbacks: sc: abbrev. for spreadsheet calculator; a nice tool with vi keybindings, but it does not put strings containing the delimiter into quotes when exporting to delimiter-separated format, and can't import CSV files correctly, i.e. all numbers are interpreted as strings GNU oleo: doesn't seem to have been actively maintained since 2001, and there are therefore no packages for major Linux distributions teapot: offers packages for various operating systems, but uses counter-intuitive naming for cells (numbers for both row and column, i.e. 11 seems to be intended to mean row 1, column 1) and carries superfluous code for an FLTK GUI The various Emacs modes also do not quote strings containing the delimiter well, or require much more typing to enter the scaffold of a table. I would therefore be very grateful for hints on overcoming one of these drawbacks, or towards another console-based CSV editor. It actually needn't do any calculations; just editing cells, columns and rows would be enough.

    Read the article

  • Restarting shell script with &disown using Monit

    - by Solas Admin
    I have a shell script that runs a C++ backend mail system (PluginHandler). I need to monitor this process in Monit and restart it if it fails. The script: export LD_LIBRARY_PATH=/usr/local/lib/:/CONFIDENTAL/CONFIDENTAL/Common/ cd PluginHandler/ ./PluginHandler This script does not have a PID file; we run it by executing ./rundaemon.sh &disown ./pluginhandler starts the process and starts logging into /etc/output/output.log I stop the process by identifying the process ID with [ps -f | grep PluginHandler] and then killing it. Monit can check the process just fine, but I think that when Monit starts the process after finding it not running, it can't do &disown, so the process ends as soon as it starts. This is the code in the monitrc file for checking this process: check process Backend matching "PluginHandler" if not exist then alert start "PATH/TO/SCRIPT/rundaemon.sh &disown" alert [email protected] only on {timeout} with mail-format {subject: "[BLAH"} I tried to stop the script from terminating by modifying it as follows, but this does not work either: export LD_LIBRARY_PATH=/usr/local/lib/:/home/CONFIDENTAL/production/CONFIDENTAL/Common/ cd PluginHandler/ (nohup ./PluginHandler &) return Any help writing proper Monit rules to resolve this issue would be greatly appreciated :)
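
    One pattern that usually sidesteps this (paths are assumptions): have the wrapper detach the daemon itself and record a PID file, then let Monit match on the PID file, so Monit's start command never needs &disown:

      #!/bin/bash
      # rundaemon.sh - start PluginHandler detached and record its PID
      export LD_LIBRARY_PATH=/usr/local/lib/:/CONFIDENTAL/CONFIDENTAL/Common/
      cd /path/to/PluginHandler/ || exit 1
      nohup ./PluginHandler >> /etc/output/output.log 2>&1 &
      echo $! > /var/run/pluginhandler.pid

    In monitrc the check then becomes: check process Backend with pidfile /var/run/pluginhandler.pid, with start program = "/PATH/TO/SCRIPT/rundaemon.sh".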

    Read the article

  • Sound out of sync after merging multiple mp4 files with Avidemux

    - by Goto10
    I am trying to join (merge) two or more .mp4 files together without re-encoding. Here is what I did: Started Avidemux 2.5.5. With File-Open, selected Input1.mp4. I received this message: "H.264 detected. If the file is using B-frames as reference it can lead to a crash or stuttering. Avidemux can use another mode which is safe but YOU WILL LOOSE SOME FRAME ACCURACY. Do you want to use that mode?". I chose "No". With File-Append, selected Input2.mp4. I received the same "H.264 detected" message again and chose "No". Set the Format to MP4 (from AVI). Saved the output file (called Output.mp4) with File-Save-Save Video. Unfortunately, when I play Output.mp4 in VLC, the sound is out of sync with the second video. How can I correct this?
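
    An alternative worth trying, since it sidesteps Avidemux's B-frame handling entirely (a sketch; stream copy only works when both files share identical codecs, resolution and audio parameters):

      printf "file '%s'\n" Input1.mp4 Input2.mp4 > list.txt
      ffmpeg -f concat -safe 0 -i list.txt -c copy Output.mp4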

    Read the article

  • How to discover true identity of hard disk?

    - by F21
    I have 2 fake external hard drives that claim to have a storage capacity of 2TB. I pulled the enclosures apart, and the hard drives seem to be refurbished ones with their labels replaced by Barracuda LP 2000 GB labels (the serial numbers on both labels are the same). Interestingly, one of the drives has 160G written on it in pencil. However, the counterfeiters seem to have done something to the firmware, because CrystalDiskInfo reports them as 2TB ST2000DL003 drives. I then deleted the 1.81 TB partition in Windows disk management and tried to create a new one and format it. Once I get to this point, the drives make some noise that is common to dying drives. I am not interested in using these drives for production, but I am interested in finding their true identity (manufacturer/serial number/model number, etc.) and restoring them to their factory defaults with the right capacity. Can this be done without any special equipment? This would be an interesting learning exercise. Some pictures of the drives in question: Here are the screens from CrystalDiskInfo: Note the serial numbers are the same (these are 2 different drives!). How is this done? Did they have to tamper with the controller board? I would assume that changing the firmware doesn't change the serial number at all.
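
    The real capacity can be established without special equipment; the original identity is harder, because tools like CrystalDiskInfo only report what the (tampered) firmware claims. A destructive capacity test under Linux (device name is a placeholder — triple-check it before running):

      sudo badblocks -wsv -b 4096 /dev/sdX     # write-mode scan; errors should pile up past the real capacity
      # or, on a mounted filesystem, the f3 suite:
      f3write /mnt/suspect && f3read /mnt/suspect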

    Read the article

  • USB drive dead after stopping copying process on Snow Leopard Server

    - by Anriëtte Combrink
    Hi there. I was copying to a flash drive from our Snow Leopard server when I stopped the copying process halfway through. The device then disappeared from the Desktop, so I unplugged it and plugged it right back in. The device just didn't show up. I unplugged it and plugged it into a Windows XP machine as well as a Windows 7 machine. On both machines, I right-clicked "My Computer" and selected "Manage…". On both PCs, the device was located under Removable Storage, but had no size and no drive letter. It shows up in "My Computer", but when I choose "Format…" from the right-click (context) menu, it says the drive could not be formatted. Can someone please advise me? The flash drive is about 5 minutes old and should have no reason to be dead. I really can't lose this drive (I don't need the data on it, I just need it to work again); any help would be appreciated. Thanks in advance.

    Read the article

  • Recover NTFS data from a ZFS pool that was exposed as an iSCSI target

    - by David
    This was me being stupid; the data is by no means critical, and this is a learning experience first, time saver second. I set up a 100GB iSCSI target via the bare-bones instructions in napp-it. It's a volume LU. I then had my Windows 7 machine connect to the iSCSI target, formatted it to NTFS, and tested its performance with some large iso file transfers. I then unmapped the drive, reconnected to the target, and was forced to format to NTFS again. It was then I realized the files I had transferred only existed on the iSCSI target. I threw a little fit and then went about my business. When I was cleaning up my experiment I noticed in this screen: http://imgur.com/1xlcu.jpg that my experimental target tank/iSCSI still has a lot of data in it. Assuming my isos are still in this pool, how would I go about recovering them? While writing this I used GetDataBack for NTFS from www.runtime.org; while it found two previous NTFS partitions, there was no data.
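
    A hedged sketch of where to look (dataset name taken from the screenshot; the zvol device path varies by platform): if any snapshot of the zvol predates the second format, the old NTFS filesystem can be cloned back; failing that, a signature scanner pointed at the zvol device may still carve out the isos.

      zfs list -t snapshot                      # any snapshot of tank/iSCSI from before the reformat?
      zfs clone tank/iSCSI@before tank/rescue   # if so, clone it and re-export as a target
      # otherwise copy the zvol out and scan it with testdisk/photorec elsewhere
      dd if=/dev/zvol/rdsk/tank/iSCSI of=/rescue/iscsi.img bs=1M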

    Read the article

  • Recovering a damaged microSDHC

    - by djechelon
    I just bought from eBay a Kingston 32GB microSDHC that was advertised as defective. The seller said there could be formatting problems or problems with the transfer of large files. Unfortunately, when I got it, it was a total mess: My Nikon camera doesn't read it at all (OK, maybe it doesn't support 32GB) My Linux laptop doesn't mount it: can't read superblock The same laptop refuses to mkfs.msdos because it failed whilst writing the reserved sector The same laptop, under Windows, can neither read nor format the card HTC HD2 mounts the MMC and lets me write via USB, but is unable to open the just-written files OK, folks, now you might say I should go through a PayPal complaint... it's not that easy. I consciously bought a half-price card that was known to show some defects, and PayPal complaints take time. Obviously, I can't accept that somebody sold me a completely useless computer decoration, so I'll keep that as a last option. My question is: do you know a way, under either Linux or Windows, to thoroughly scan, test and possibly repair memory cards, even if I have to lose some percentage of space because of bad sectors? If I can keep at least half of the card intact, that would certainly be fine. I used to do bad-sector marking with hard disks in the past. I almost forgot: MONSTR:/home/djechelon # fsck /dev/mmcblk0p1 fsck from util-linux-ng 2.17.2 dosfsck 3.0.9, 31 Jan 2010, FAT32, LFN Read 512 bytes at 0:Input/output error
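
    For the scan-and-mark part, the classic tool is badblocks (destructive; device name assumed) — though an I/O error at sector 0, as in the dosfsck output above, usually means the card's controller itself is failing, and no bad-block list will route around that:

      sudo badblocks -wsv -o bad.txt /dev/mmcblk0   # destructive write-mode test, log bad blocks
      sudo mkfs.vfat -l bad.txt /dev/mmcblk0p1      # hand the list to mkfs (mind matching block sizes)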

    Read the article

  • Configuring https access on HP A5120 Switch

    - by GerryEgan
    I am trying to configure HTTPS management on an HP A5120 switch running Version 5.20.99, Release 2215, and not having much luck. I have followed the manual by creating an SSL policy first and then enabling the HTTPS server with the SSL policy: ssl server-policy sslpol ip https ssl-server-policy sslpol ip https enable When I try to log onto the switch with Google Chrome I get the following error: Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error. When I look this up I find references to errors due to TLS being used in SSL, but I can find no way to specify the SSL version in the server policy. The manual has a configuration example that uses MSCEP to retrieve a certificate, but in Windows 2008 R2 that feature is only available in the Enterprise and Datacentre editions, which I don't have. I have SSH configured and it is using a locally generated certificate; I'm not sure if I can use that, but I'd like to if possible. Has anybody been able to set up HTTPS management on HP A-series switches without MSCEP? Any and all help appreciated! Here is a copy of my config with the interfaces removed: version 5.20.99, Release 2215 # sysname MYSYSNAME # irf domain 10 irf mac-address persistent timer irf auto-update enable undo irf link-delay # domain default enable system # telnet server enable # vlan 1 # vlan 100 description Management # radius scheme system primary authentication 127.0.0.1 1645 primary accounting 127.0.0.1 1646 user-name-format without-domain # domain system access-limit disable state active idle-cut disable self-service-url disable # user-group system group-attribute allow-guest # local-user admin password cipher authorization-attribute level 3 service-type ssh telnet terminal service-type web # stp enable # ssl server-policy sslpol pki-domain MYDOMAIN # interface NULL0 # interface Vlan-interface199 ip address 192.168.199.140 255.255.255.0 # interface GigabitEthernet1/0/1 poe enable stp edged-port enable # interface Ten-GigabitEthernet2/1/2 # dhcp-snooping # ntp-service unicast-server 192.168.1.71 # ssh server enable # ip https ssl-server-policy sslpol ip https enable # load xml-configuration # user-interface aux 0 1 user-interface vty 0 15 authentication-mode scheme

    Read the article

  • calculate AUC (GAM) in R [migrated]

    - by ahmad
    I used the following script to calculate AUC in R: library(mgcv) library(ROCR) library(AUC) data1=read.table("d:\\2005.txt", header=T) GAM<-gam(tuna ~ s(chla)+s(sst)+s(ssha),family=binomial, data=data1) gampred<- predict(GAM, type="response") rp <- prediction(gampred, data1$tuna) auc <- performance( rp, "auc")@y.values[[1]] auc roc <- performance( rp, "tpr", "fpr") plot( roc ) But when I run the script, the result is: rp <- prediction(gampred, data1$tuna) Error in prediction(gampred, data1$tuna) : Format of predictions is invalid. > > auc <- performance( rp, "auc")@y.values[[1]] Error in performance(rp, "auc") : object 'rp' not found > auc function (x, min = 0, max = 1) { if (any(class(x) == "roc")) { if (min != 0 || max != 1) { x$fpr <- x$fpr[x$cutoffs >= min & x$cutoffs <= max] x$tpr <- x$tpr[x$cutoffs >= min & x$cutoffs <= max] } ans <- 0 for (i in 2:length(x$fpr)) { ans <- ans + 0.5 * abs(x$fpr[i] - x$fpr[i - 1]) * (x$tpr[i] + x$tpr[i - 1]) } } else if (any(class(x) %in% c("accuracy", "sensitivity", "specificity"))) { if (min != 0 || max != 1) { x$cutoffs <- x$cutoffs[x$cutoffs >= min & x$cutoffs <= max] x$measure <- x$measure[x$cutoffs >= min & x$cutoffs <= max] } ans <- 0 for (i in 2:(length(x$cutoffs))) { ans <- ans + 0.5 * abs(x$cutoffs[i - 1] - x$cutoffs[i]) * (x$measure[i] + x$measure[i - 1]) } } return(as.numeric(ans)) } <bytecode: 0x03012f10> <environment: namespace:AUC> > > roc <- performance( rp, "tpr", "fpr") Error in performance(rp, "tpr", "fpr") : object 'rp' not found > plot( roc ) Error in levels(labels) : argument "labels" is missing, with no default Can anybody help me to solve this problem? Thank you in advance.

    Read the article

  • Formatted C: from Windows 7 setup, now it won't even install

    - by ocurro
    Help, I'm so confused. I did more or less what's been described here: I formatted Vista and installed Windows 7 over it. Problem is that I'm now unable to boot (...) [1] I'm installing Seven on top of Vista on an Acer AS1410 notebook. When it comes to the part where I choose where to install, I pick the partition labeled C:, but instead of keeping the windows.old files (what would I want them for?) I chose to go and carelessly format the partition (my bad). It now shows me this error: Setup was unable to create a new system partition or locate an existing system partition. See the Setup log files for more information Now the only option is "Load Driver". I have tried installing every single driver from the Acer website; none of them helped. I even flashed the original BIOS. I've tried going back and choosing "Repair" as in the picture [2], but I only get an error: "Failed to save startup options". I think this is weird; what else can I do? [1] superuser.com/questions/117076/formatting-of-an-xp-vista-dual-boot-machine-now-unable-to-boot-up-xp [2] www.howtogeek.com/wp-content/uploads/2007/08/image51.png
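
    One workaround commonly suggested for this particular setup error (a sketch, not a guaranteed fix): unplug any USB sticks and other disks, open a command prompt inside Setup with Shift+F10, recreate the system partition by hand, then retry the install:

      diskpart
      list disk
      select disk 0             # verify the number first - this wipes the selected disk
      clean
      create partition primary
      active
      format fs=ntfs quick
      exit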

    Read the article

  • Upgrade an Ubuntu 8.04 installation with VMware Server 1.0.8 and lots of guest OSes to Something Else

    - by Glyph
    I have an Ubuntu 8.04 (Hardy Heron) host machine which is running a whole slew of virtual machines in VMWare Server 1.0.8. Among other guest OSes, there is every release version of Ubuntu since 6.06, OpenSolaris 2009.06, and Windows XP. Right now I access these VMs from a variety of client OSes as well; Linux and Windows via the VMWare server console, and MacOS via X-forwarding the host machine's server console. I'd like to upgrade the host to Ubuntu 10.04 (Lucid Lynx), but from what I can tell, getting VMWare Server 1.x to work on a more recent version of Linux is a real pain. While VMware Server 2.x is a bit easier, it's still not packaged as Debian packages, so installing security updates is a big chore. As long as I'm upgrading anyway, I'd like to move to a virtualization solution that will allow me to automate applying updates. The options that I'm aware of right now are KVM (managed via virt-manager) and VirtualBox (as managed by its own tools or via its own libvirt bindings), but I'm open to other suggestions. For each option, I'd like to know how do I convert my guest images to the new format? am I going to have to re-activate my Windows guests (alternatively, "If the virtual hardware is different by default, can I avoid re-activation by changing some virtualization configuration to provide me with more similar virtual hardware") what are the management options like for each client OS (mac, linux, windows)? Thanks.
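
    On the image-conversion question, both candidates can consume VMware disks after a pass through qemu-img (a sketch; monolithic/flat VMDKs convert most reliably, split ones may need consolidating first):

      qemu-img convert -f vmdk -O qcow2 guest.vmdk guest.qcow2   # KVM-native format
      qemu-img convert -f vmdk -O raw guest.vmdk guest.img       # raw works for either
      VBoxManage clonehd guest.vmdk guest.vdi --format VDI       # VirtualBox-native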

    Read the article

  • Seagate 3TB ST3000DM001 hard drive not recognized by Linux, causes fdisk to hang

    - by MountainX
    I'm running Kubuntu 12.04. I have a brand new, never used Seagate 3TB ST3000DM001 hard drive. It's an internal drive. I installed it in a USB enclosure. When I connect it to my PC, nothing happens automatically. When I run sudo fdisk -l, fdisk hangs (without reporting this drive) until I disconnect the drive from the USB port. blkid won't report it either. I tried connecting it to both USB 2.0 and USB 3.0 ports on my PC, with the same result either way. I also tried two different USB enclosures, with the same result. If I take the same drive in the same enclosure and connect it to a Windows 7 laptop, it is recognized automatically as a USB mass storage device. I want to format the drive (probably ext4) and copy files to it. I have another drive, also in a USB enclosure, that is connected via USB 3.0 to this PC and works fine; it's a 2.0 TB Samsung HDD. I plan to copy files from the 2TB to the 3TB drive, once I get this issue resolved. My motherboard is an Asus P8B WS (LGA1155/Intel C206/Quad CrossFireX/SATA3 & USB 3.0/dual GbE/ATX). What is the resolution?
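
    A first diagnostic pass (generic commands; the enclosure's bridge chip is the usual suspect, since many older USB-SATA bridges simply cannot address drives over 2 TB):

      dmesg | tail -n 30          # kernel messages right after plugging the drive in
      lsusb                       # identify the enclosure's USB-SATA bridge
      sudo smartctl -i /dev/sdX   # if a device node appears and the bridge passes SAT commands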

    Read the article

  • Libvirt / QEmu Machine Fails and Refuses Restart Because of Memory Allocation Errors

    - by Elmar Weber
    I'm having a problem with libvirt. On a system restart all virtual machines (VMs) start without a problem and keep running. Then at some point a set of machines shuts down, according to their logs. When I try to restart a machine, I get an error that the memory allocation failed, although more than enough memory is free. server ~ # free total used free shared buffers cached Mem: 16176648 16025476 151172 0 285432 950300 -/+ buffers/cache: 14789744 1386904 Swap: 0 0 0 server ~ # virsh start zimbra error: Failed to start domain zimbra error: Unable to read from monitor: Connection reset by peer server ~ # tail -n 4 /var/log/libvirt/qemu/zimbra.log LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 3072 -smp 2,sockets=2,cores=1,threads=1 -name zimbra -uuid d05ddb7a-83c4-a77b-d8bc-a322648520cf -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/zimbra.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/var/lib/libvirt/images/zimbra.img,if=none,id=drive-ide0-0-0,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,fd=19,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:21:a9:ad,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 192.168.1.2:25 -k de -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 char device redirected to /dev/pts/2 Failed to allocate 3221225472 B: Cannot allocate memory 2012-07-06 08:42:56.076+0000: shutting down server ~ # uname -a Linux server 3.2.0-26-generic #41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux The system is an Ubuntu 12.04 server. The problem seems to have started with the last restart, which was due to a number of package upgrades and a kernel upgrade. I tried booting with the previous kernel; the problem persists. I was not able to pinpoint an exact event when the machines fail; they do it at nearly the same time. The last time it happened a duplicity job was running, but this was not always the case. Any suggestions on how to debug this? Best regards, elm
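
    One observation from the free output above: after buffers/cache only about 1.4 GB is reclaimable, which is less than the 3 GB the guest wants, so the host may genuinely be unable to satisfy the allocation. A diagnostic sketch (the last two lines must run as root and change kernel state):

      cat /proc/sys/vm/overcommit_memory /proc/sys/vm/overcommit_ratio
      grep -i commit /proc/meminfo              # CommitLimit vs Committed_AS
      sync; echo 3 > /proc/sys/vm/drop_caches   # release page cache
      echo 1 > /proc/sys/vm/compact_memory      # defragment free memory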

    Read the article

  • Outlook sends uninvited images as attachments

    - by serhio
    Outlook always sends along two pictures (image001.png and image002.gif) with my mails, even if I don't attach any images. How do I fix this? Thanks. EDIT - SIGNATURE The HTML of my signature: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML xmlns:o = "urn:schemas-microsoft-com:office:office"><HEAD><TITLE>Company default Signature</TITLE> <META content="text/html; charset=windows-1252" http-equiv=Content-Type> <META name=GENERATOR content="MSHTML 8.00.6001.18854"></HEAD> <BODY> <DIV align=left><FONT color=navy size=2 face=Arial><SPAN style="FONT-FAMILY: Arial; COLOR: navy; FONT-SIZE: 10pt"> <P class=MsoNormal align=left><BR>Cordialement,&nbsp;</SPAN></FONT><FONT color=navy size=2 face=Arial><SPAN style="FONT-FAMILY: Arial; COLOR: navy; FONT-SIZE: 10pt"><o:p>&nbsp;</o:p></SPAN></FONT></P> <P class=MsoNormal><FONT color=navy size=2 face=Arial><SPAN style="FONT-FAMILY: Arial; COLOR: navy; FONT-SIZE: 10pt">Firstname LASTNAME<BR></SPAN></FONT><FONT color=navy size=2 face=Arial><SPAN style="FONT-FAMILY: Arial; COLOR: navy; FONT-SIZE: 10pt">COMPANY Name<o:p></o:p></SPAN></FONT></P></DIV></BODY></HTML> EDIT 2 - STATIONERY I don't use the stationery format. EDIT 3 - HTML If I delete the signature (Ctrl+A, Del) in HTML mode, the images still appear. If I use the signature in text-only mode, the images disappear...

    Read the article

  • How to disable "safely remove hardware"

    - by Matt
    I have some Windows 7 virtual machines in Xen that have devices showing up in "safely remove hardware". I don't want users to ever be able to remove/eject any hardware at all. I'm told VMware has a hotplug option; Xen doesn't seem to provide this for PCI passthrough devices, so I'm looking for a reliable way to prevent users from ejecting devices. This issue is not necessarily related just to virtual machines; it seems to be a common problem with devices that are wrongly reported as removable. I'm ideally looking for a way to prevent all devices from appearing, or just prevent the "safely remove hardware" option from ever coming up. I've tried setting device capabilities for specific devices on boot with a script, but for some reason this doesn't always seem to work reliably. Is there a way to prevent this icon from appearing in the notification area completely, either by registry key or group policy? I should point out that setting this to "Administrators" in group policy did not seem to work: [Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options\Devices: Allowed to format and eject removable media]

    Read the article

  • How does Linux determine the SCSI address of a disk?

    - by Chris Sears
    Greetings, I'm working with RHEL 5.5 guest VMs under VMware ESX 4. When I configure the virtual disks in the VM hardware settings, each disk has a SCSI address in the format "N:M". For example, "1:3" would mean SCSI host number 1 and SCSI target ID 3. When I look at the disk info from the VM's BIOS or a Windows OS, the detected SCSI address info matches up with the virtual hardware settings. But under Linux, the SCSI address components don't match up, at least not completely or consistently. I've tried the three supported virtual SCSI and SAS drivers and they all seem to be "broken", but in different ways. Here's a list of the virtual hardware addresses vs what was detected under Linux with each of the drivers:

      Driver     vHW Addr   Linux Addr
      --------   --------   ----------
      LSI SAS    0:0        0:0
      LSI SAS    0:3        0:1
      LSI SAS    0:6        0:2
      LSI SCSI   1:1        2:1
      LSI SCSI   1:4        2:4
      LSI SCSI   1:7        2:7
      pvSCSI     2:2        1:2
      pvSCSI     2:5        1:5
      pvSCSI     2:8        1:8

    My main question is why does this happen under Linux? The next question is: how do I get it fixed or fix it myself? If I was going to guess, I'd say it's an issue with how the kernel is handing out the SCSI host number and how the Linux SCSI driver (included with VMware tools) is detecting the SCSI target number. Perhaps the order the drivers are loaded also has something to do with the issue. I'm guessing this would not involve udev, but I could be wrong. Any thoughts would be appreciated. Thanks! PS. My environment is VMware, but I don't need an answer for these drivers specifically. I imagine this might be a problem with any SCSI driver under Linux.

    Read the article

  • How to disable irritating Office File Validation security alert?

    - by Rabarberski
    I have Microsoft Office 2007 running on Windows 7. Yesterday I updated Office to the latest service pack, i.e. SP3. This morning, when opening an MS Word document (.doc format, a document I created myself some months ago) I was greeted with a new dialog box saying: Security Alert - Office File Validation WARNING: Office File Validation detected a problem while trying to open this file. Opening this is probably dangerous, and may allow a malicious user to take over your computer. Contact the sender and ask them to re-save and re-send the file. For more security, verify in person or via the phone that they sent the file. It includes two links to some Microsoft blabla webpage. Obviously the document is safe, as I created it myself some months ago. How do I disable this irritating dialog box? (On a side note, a rhetorical question: will Microsoft never learn? I consider myself a power user in Word, but I have no clue what could be wrong with my document for it to be considered dangerous. Let alone more basic users of Word. Sigh....)
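
    For what it's worth, the validator that arrived with SP3 can reportedly be switched off per application through a policy registry key (documented for the Office File Validation add-in; the 12.0 hive is Office 2007 — treat this as an assumption and back up the registry first):

      reg add "HKCU\Software\Policies\Microsoft\Office\12.0\Word\Security\FileValidation" /v EnableOnLoad /t REG_DWORD /d 0 /f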

    Read the article

  • IIS URl Rewrite working inconsistently?

    - by Don Jones
    I'm having some oddness with the URL rewriting in IIS 7. Here's my Web.config (below). You'll see "imported rule 3," which grabs attempts to access /sitemap.xml and redirects them to /sitemap/index. That rule works great. Right below it is imported rule 4, which grabs attempts to access /wlwmanifest.xml and redirects them to /mwapi/wlwmanifest. That rule does NOT work. (BTW, I do know it's "rewriting" not "redirecting" - that's what I want). So... why would two identically-configured rules not work the same way? Order makes no difference; Imported Rule 4 doesn't work even if it's in the first position. Thanks for any advice! EDIT: Let me represent the rules in .htaccess format so they don't get eaten :)

      RewriteEngine On
      # skip existing files and folders
      RewriteCond %{REQUEST_FILENAME} -s [OR]
      RewriteCond %{REQUEST_FILENAME} -l [OR]
      RewriteCond %{REQUEST_FILENAME} -d
      RewriteRule ^.*$ - [NC,L]
      # get special XML files
      RewriteRule ^(.*)sitemap.xml$ /sitemap/index [NC]
      RewriteRule ^(.*)wlwmanifest.xml$ /mwapi/index [NC]
      # send everything to index
      RewriteRule ^.*$ index.php [NC,L]

    The "sitemap" rewrite rule works fine; the 'wlwmanifest' rule returns a "not found." Weird.
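
    Judging purely from the rules as quoted, this may be a target mismatch rather than an engine inconsistency: the prose says wlwmanifest.xml should rewrite to /mwapi/wlwmanifest, but the rule sends it to /mwapi/index — and if /mwapi/index returns a 404, the symptom would look exactly like the rule "not working". Assuming /mwapi/wlwmanifest is the real endpoint, the corrected line would be:

      RewriteRule ^(.*)wlwmanifest.xml$ /mwapi/wlwmanifest [NC]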

    Read the article

  • Getting file not found error with pdebuild

    - by user35042
    I am attempting to build a Debian package using pdebuild on my main development server (running Debian wheezy). Here is the command I run: pdebuild --pbuilder cowbuilder --buildresult .. \ --debbuildopts -i -- \ --basepath /var/cache/pbuilder/base-wheezy.cow \ --distribution wheezy --configfile /etc/pbuilder/wheezy This works on other servers, but on one server I get this output: I: using cowbuilder as pbuilder dpkg-buildpackage: source package libexample-orange-util-perl dpkg-buildpackage: source version 0.08 dpkg-buildpackage: source changed by John User <[email protected]> dpkg-source -i --before-build libexample-orange-util-perl fakeroot debian/rules clean dh clean dh_testdir dh_auto_clean dh_clean dpkg-source -i -b libexample-orange-util-perl dpkg-source: info: using source format `3.0 (native)' dpkg-source: info: building libexample-orange-util-perl in libexample-orange-util-perl_0.08.tar.gz dpkg-source: info: building libexample-orange-util-perl in libexample-orange-util-perl_0.08.dsc dpkg-genchanges -S >../libexample-orange-util-perl_0.08_source.changes dpkg-genchanges: including full source code in upload dpkg-source -i --after-build libexample-orange-util-perl dpkg-buildpackage: source only upload: Debian-native package File not found: ../libexample-orange-util-perl_0.08.dsc There is no file ../libexample-orange-util-perl_0.08.dsc, but on other build servers no such file is needed (it gets created by the package build). What is causing this "file not found" error?

    Read the article
