Search Results

Search found 14689 results on 588 pages for 'executable format'.


  • Getting file not found error with pdebuild

    - by user35042
    I am attempting to build a Debian package using pdebuild on my main development server (running Debian wheezy). Here is the command I run:

        pdebuild --pbuilder cowbuilder --buildresult .. \
            --debbuildopts -i -- \
            --basepath /var/cache/pbuilder/base-wheezy.cow \
            --distribution wheezy --configfile /etc/pbuilder/wheezy

    This works on other servers, but on one server I get this output:

        I: using cowbuilder as pbuilder
        dpkg-buildpackage: source package libexample-orange-util-perl
        dpkg-buildpackage: source version 0.08
        dpkg-buildpackage: source changed by John User <[email protected]>
        dpkg-source -i --before-build libexample-orange-util-perl
        fakeroot debian/rules clean
        dh clean
        dh_testdir
        dh_auto_clean
        dh_clean
        dpkg-source -i -b libexample-orange-util-perl
        dpkg-source: info: using source format `3.0 (native)'
        dpkg-source: info: building libexample-orange-util-perl in libexample-orange-util-perl_0.08.tar.gz
        dpkg-source: info: building libexample-orange-util-perl in libexample-orange-util-perl_0.08.dsc
        dpkg-genchanges -S >../libexample-orange-util-perl_0.08_source.changes
        dpkg-genchanges: including full source code in upload
        dpkg-source -i --after-build libexample-orange-util-perl
        dpkg-buildpackage: source only upload: Debian-native package
        File not found: ../libexample-orange-util-perl_0.08.dsc

    There is no file ../libexample-orange-util-perl_0.08.dsc, but on other build servers no such file is needed (it gets created by the package build). What is causing this "file not found" error?

  • Libvirt / QEmu Machine Fails and Refuses Restart Because of Memory Allocation Errors

    - by Elmar Weber
    I'm having a problem with libvirt. On a system restart all virtual machines (VMs) are started without a problem and keep running. Then at some point in time a set of machines shuts down, according to their logs. When I try to restart a machine, I get an error that the memory allocation failed, although more than enough memory is free:

        server ~ # free
                     total       used       free     shared    buffers     cached
        Mem:      16176648   16025476     151172          0     285432     950300
        -/+ buffers/cache:   14789744    1386904
        Swap:            0          0          0

        server ~ # virsh start zimbra
        error: Failed to start domain zimbra
        error: Unable to read from monitor: Connection reset by peer

        server ~ # tail -n 4 /var/log/libvirt/qemu/zimbra.log
        LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 3072 -smp 2,sockets=2,cores=1,threads=1 -name zimbra -uuid d05ddb7a-83c4-a77b-d8bc-a322648520cf -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/zimbra.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/var/lib/libvirt/images/zimbra.img,if=none,id=drive-ide0-0-0,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,fd=19,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:21:a9:ad,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 192.168.1.2:25 -k de -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
        char device redirected to /dev/pts/2
        Failed to allocate 3221225472 B: Cannot allocate memory
        2012-07-06 08:42:56.076+0000: shutting down

        server ~ # uname -a
        Linux server 3.2.0-26-generic #41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    The system is an Ubuntu 12.04 server. The problem seems to have started with the last restart, which was due to a number of package upgrades and a kernel upgrade. I tried booting with the previous kernel; the problem persists. I was not able to pinpoint an exact event when the machines fail, but they do it at nearly the same time. The last time it happened a duplicity job was running, but this was not always the case. Any suggestions on how to debug this? Best regards, elm
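
    Not part of the original post, but when chasing this kind of allocation failure it helps to snapshot what the kernel reports as allocatable at the moment a guest refuses to start. A small Python sketch reading /proc/meminfo (the guest size is taken from the -m value in the log above; everything else is standard Linux accounting):

        #!/usr/bin/env python3
        # Snapshot the memory figures relevant to a failed guest start (Linux only).
        GUEST_MB = 3072  # the -m value from the qemu command line in the log above

        fields = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                fields[key] = int(rest.split()[0])  # values are reported in kB

        free_mb = fields["MemFree"] / 1024
        reclaimable_mb = (fields.get("Buffers", 0) + fields.get("Cached", 0)) / 1024
        commit_limit_mb = fields.get("CommitLimit", 0) / 1024
        committed_mb = fields.get("Committed_AS", 0) / 1024

        print(f"guest wants      : {GUEST_MB} MB")
        print(f"MemFree          : {free_mb:.0f} MB")
        print(f"Buffers + Cached : {reclaimable_mb:.0f} MB (reclaimable)")
        print(f"CommitLimit/used : {commit_limit_mb:.0f} / {committed_mb:.0f} MB")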

  • How to open semicolon delimited CSV-files in US-version of Excel

    - by Holgerwa
    When I double-click on a .csv file, it is opened in Excel. The CSV files have columns delimited with semicolons (not commas, but also a valid format). Using a German Windows/Excel setup, the opened file is displayed correctly: the columns are separated where the semicolons existed in the CSV file. But when I do the same on a (US) English Windows/Excel setup, only one column is imported, showing the whole data including the semicolons in the first column. (I don't have an English setup available for testing; users have reported this behavior.) I tried to change the list separator value in the Windows regional settings, but that didn't change anything. What can I do to be able to double-click-open those CSV files on an English setup?

    EDIT: It seems the best solution is not to rely on CSV files in this case. I was hoping that there is some formatting for CSV files that makes it possible to use them internationally. The best solution seems to be that I'll switch to creating XLS files. Thanks to all for your suggestions and helpful tips!
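
    Not from the original thread, but one pragmatic workaround is to normalize the delimiter before Excel ever sees the file. A minimal Python sketch (file names are placeholders, and it assumes the files use quoting the csv module can parse):

        import csv

        # Convert a semicolon-delimited CSV into a comma-delimited one so a
        # US-locale Excel opens it correctly on double-click.
        with open("report_semicolon.csv", newline="", encoding="utf-8") as src, \
             open("report_comma.csv", "w", newline="", encoding="utf-8") as dst:
            reader = csv.reader(src, delimiter=";")
            writer = csv.writer(dst, delimiter=",")
            writer.writerows(reader)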

  • ESXi 4.1 host not recognising existing VMFS datastore

    - by ThatGraemeGuy
    Existing setup: host1 and host2, ESX 4.0, 2 HBAs each; lun1 and lun2, 2 LUNs belonging to the same RAID set (my terminology might be sketchy here). This has been working just fine all along. I added host3, ESXi 4.1, 2 HBAs. If I view Configuration / Storage Adapters, I can see that both HBAs see both LUNs, but if I view Configuration / Storage, I only see 1 datastore. host1/2 can see both LUNs and I have VMs running on both too. I have rescanned, refreshed and even rebooted, but host3 refuses to acknowledge 1 of the datastores. Does anyone know what's going on?

    Update: I re-installed the host with ESX (not i) 4.0, the same version as the existing hosts, and it's still not recognising the VMFS. I think I'm going to SVmotion everything off that datastore and then format it.

    Update2: I've created the LUN from scratch and the problem gets even weirder. I've presented the LUN to all 3 hosts, and I can see the LUN in the vSphere client's Configuration / Storage Adapters section on all 3 hosts. If I create a datastore on the LUN via the Configuration / Storage section on host1, it works fine and I can create an empty folder via the datastore browser, but the datastore is not seen by host2 and host3. I can use the Add Storage wizard on host2 and it will see the LUN. At this point the "VMFS Label" column has the label I gave with "(head)" appended. If I try the Add Storage wizard's "Keep the existing signature" option, it fails with an error "Cannot change the host configuration." and a dialog box that says 'Call "HostStorageSystem.ResolveMultipleUnresolvedVmfsVolumes" for object "storageSystem-17" on vCenter Server "vcenter.company.local" failed.' If I try the Add Storage wizard's "Assign a new signature" option on host2, it will complete and the VMFS label will have "snap-(hexnumber)-" prepended. At this point it's also visible on host3, but not host1. I have a similar setup in a different datacenter which didn't give me all this trouble.

  • calculate AUC (GAM) in R [migrated]

    - by ahmad
    I used the following script to calculate AUC in R:

        library(mgcv)
        library(ROCR)
        library(AUC)
        data1 = read.table("d:\\2005.txt", header=T)
        GAM <- gam(tuna ~ s(chla) + s(sst) + s(ssha), family=binomial, data=data1)
        gampred <- predict(GAM, type="response")
        rp <- prediction(gampred, data1$tuna)
        auc <- performance(rp, "auc")@y.values[[1]]
        auc
        roc <- performance(rp, "tpr", "fpr")
        plot(roc)

    But when I run the script, the result is:

        rp <- prediction(gampred, data1$tuna)
        Error in prediction(gampred, data1$tuna) : Format of predictions is invalid.
        > auc <- performance( rp, "auc")@y.values[[1]]
        Error in performance(rp, "auc") : object 'rp' not found
        > auc
        function (x, min = 0, max = 1)
        {
            if (any(class(x) == "roc")) {
                if (min != 0 || max != 1) {
                    x$fpr <- x$fpr[x$cutoffs >= min & x$cutoffs <= max]
                    x$tpr <- x$tpr[x$cutoffs >= min & x$cutoffs <= max]
                }
                ans <- 0
                for (i in 2:length(x$fpr)) {
                    ans <- ans + 0.5 * abs(x$fpr[i] - x$fpr[i - 1]) * (x$tpr[i] + x$tpr[i - 1])
                }
            }
            else if (any(class(x) %in% c("accuracy", "sensitivity", "specificity"))) {
                if (min != 0 || max != 1) {
                    x$cutoffs <- x$cutoffs[x$cutoffs >= min & x$cutoffs <= max]
                    x$measure <- x$measure[x$cutoffs >= min & x$cutoffs <= max]
                }
                ans <- 0
                for (i in 2:(length(x$cutoffs))) {
                    ans <- ans + 0.5 * abs(x$cutoffs[i - 1] - x$cutoffs[i]) * (x$measure[i] + x$measure[i - 1])
                }
            }
            return(as.numeric(ans))
        }
        <bytecode: 0x03012f10>
        <environment: namespace:AUC>
        > roc <- performance( rp, "tpr", "fpr")
        Error in performance(rp, "tpr", "fpr") : object 'rp' not found
        > plot( roc )
        Error in levels(labels) : argument "labels" is missing, with no default

    Can anybody help me to solve this problem? Thank you in advance.
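
    (The long listing in the middle of the output is simply R printing the source of the auc function from the AUC package, because the prediction() call failed and the auc variable was never created.) For reference, the quantity the script is after - area under the ROC curve from predicted probabilities and 0/1 labels - looks like this in a minimal Python sketch with toy data, assuming scikit-learn is installed:

        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        # Toy stand-ins for the GAM's fitted probabilities and the presence/absence labels.
        y_true  = np.array([0, 0, 1, 1, 0, 1, 1, 0])
        y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.90, 0.65, 0.15])

        print("AUC =", roc_auc_score(y_true, y_score))
        fpr, tpr, _ = roc_curve(y_true, y_score)   # points of the ROC curve, if a plot is needed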

  • Disable the use of Internet Explorer through policies when called from HTML Help

    - by Stephane
    Hello, I have a locked-down environment where users are prohibited from doing, well, basically anything but run the specific programs we specify. We just switched a program from the venerable WinHelp help format to HTML Help (CHM), but that seems to have an unwanted and rather dangerous side effect: when a user clicks on a hyperlink inside the HTML help, a new Internet Explorer window is opened and the user is free to browse and do terrible things to my server (well, not that much, but still...). I have checked the session in this case and the IE window is actually hosted within the help engine: there is no iexplore.exe process running in the user session (and there cannot be: it's explicitly prohibited). We have disabled all help right now until we find a solution. I'm working with the help team to have all external URLs removed from the help file, but that is going to be a long and error-prone task. Meanwhile, I've checked all the group policy options, but I have to say I was unable to find anything that would prevent a standalone IE window hosted in a random process from running. I don't want to disable WinHTTP or the IE rendering engine or anything of the sort. But I need to prevent all users who are members of a specific AD user group from ever having an IE window displayed to them. The servers are running Windows 2003 and Citrix Metaframe 4.5. Thanks in advance

  • How to discover true identity of hard disk?

    - by F21
    I have 2 fake external hard drives that claim to have a storage capacity of 2TB. I pulled the enclosures apart and the hard drives seem to be refurbished ones with their labels replaced by Barracuda LP 2000 GB labels (the serial numbers on both labels are the same). Interestingly, one of the drives has 160G written on it in pencil. However, the counterfeiters seem to have done something to the firmware, because CrystalDiskInfo reports them as 2TB ST2000DL003 drives. I then deleted the 1.81 TB partition in Windows disk management and tried to create a new one and format it. Once I get to this point, the drives make some noise that is common to dying drives. I am not interested in using these drives for production, but I am interested in finding their true identity (manufacturer/serial number/model number, etc.) and restoring them to their factory defaults with the right capacity. Can this be done without any special equipment? This would be an interesting learning exercise. (The original post included photos of the drives and CrystalDiskInfo screenshots; note that the serial numbers reported are identical even though these are 2 different drives.) How is this done? Did they have to tamper with the controller board? I would assume that changing the firmware doesn't change the serial number at all.

  • ffmpeg: converting an avi to a reduced, shareable flv/mp4...

    - by meder
    I recently followed a guide and recompiled my ffmpeg so x264 is enabled. I used some generic settings to convert my 700 MB avi file to an mp4 file; the result was a 407 MB mp4 file.

    The original avi file's settings:

        Codec: DX50
        Resolution: 704x304
        Frame rate: 23.976023
        Stream 1
          Codec: mpga
          Type: Audio
          Channels: 2
          Sample rate: 48000 Hz
          Bitrate: 179 kb/s

    Command I used:

        ffmpeg -i input.avi -acodec libfaac -ab 128k -ac 2 -vcodec libx264 -vpre hq -crf 22 -threads 0 output.mp4

    The settings of the output file (output.mp4):

        Codec: avc1
        Resolution: 704x304
        Display resolution: 704x304
        Frame rate: 11.988011
        Stream 1
          Codec: mp4a
          Type: Audio
          Channels: 2
          Sample Rate: 48000 Hz
          Bits per sample: 16
          Bitrate: 1536 kb/s

    The quality of the output mp4 is pretty nice; it seems pretty much the same as the original source. However, I'm trying to reduce the file size and I'm not really sure whether I should go with an flv format or keep it mp4. The advantage the flv would have, obviously, is that it would be playable with a Flash player (I have come across some swf players which take a flash parameter to play an flv file), but maybe I could use the video element, as I'm only going to be displaying this video privately, so I don't have to worry about supporting legacy browsers such as IE. Can someone recommend some settings to specify in order for the file size to be around ~100-150 MB or so? I don't mind a reduction in quality, nor do I mind resizing it - I was going to do it initially but I wasn't sure what the guidelines were (if any) for dealing with resolution. Since this video is 704x304, would it still be OK if I forced it into a resolution that isn't a perfect fit for the aspect ratio? I have no clue about that part. I realize that I could have probably specified 28 instead of 22 for the CRF; I'm not sure if I should do that as opposed to maybe specifying a smaller resolution, which might make it smaller as well?
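
    Not from the original post, but the target-size question is mostly arithmetic: divide the size budget by the runtime and subtract the audio bitrate to get the video bitrate to aim for. A rough Python sketch (the duration is an assumption, not stated in the post):

        # Estimate the video bitrate that hits a target file size (values are assumptions).
        target_size_mb = 150          # desired output size
        duration_s     = 1.5 * 3600   # assumed runtime of ~1.5 h; not given in the post
        audio_kbps     = 128          # audio bitrate already used in the ffmpeg command

        total_kbps = target_size_mb * 8000 / duration_s   # 1 MB is roughly 8000 kilobits
        video_kbps = total_kbps - audio_kbps
        print(f"aim for roughly {video_kbps:.0f} kb/s of video "
              "(use that as the video bitrate for a two-pass encode)")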

  • cannot commit svn with dav on ubuntu

    - by hiddenkirby
    So there are several similar questions on serverfault... but the solution is still eluding me. I am running Subversion on Ubuntu 9.04, through Apache 2.2.x. When I attempt to commit, I get:

        Commit failed (details follow):
        Can't make directory '/home/kirb/svn/dav/activities.d': Permission denied

    It is definitely a permissions issue, but how to fix it is still eluding me. My repository is in /home/kirb/svn. http://serverfault.com/questions/61573/svn-commit-error says to chgrp, but I don't seem to be able to. All the Apache DAV stuff seems to be working, though: I can access my repository just fine through a browser. Apologies if I am missing something simple here. Thanks in advance, Kirb

    Additional edit: I am not able to sudo chgrp on the directory at all:

        sudo chgrp -R www-data /home/kirb/svn; chmod -R g+rwx /home/kirb/svn
        [sudo] password for kirb:
        chmod: changing permissions of `/home/kirb/svn': Operation not permitted
        chmod: changing permissions of `/home/kirb/svn/format': Operation not permitted
        chmod: changing permissions of `/home/kirb/svn/conf': Operation not permitted
        chmod: cannot read directory `/home/kirb/svn/conf': Permission denied
        chmod: changing permissions of `/home/kirb/svn/locks': Operation not permitted
        chmod: cannot read directory `/home/kirb/svn/locks': Permission denied
        chmod: changing permissions of `/home/kirb/svn/db': Operation not permitted
        chmod: cannot read directory `/home/kirb/svn/db': Permission denied
        chmod: changing permissions of `/home/kirb/svn/README.txt': Operation not permitted
        chmod: changing permissions of `/home/kirb/svn/hooks': Operation not permitted
        chmod: cannot read directory `/home/kirb/svn/hooks': Permission denied

  • Trouble cloning a Macbook Pro hard drive

    - by Mirko Froehlich
    I am trying to upgrade the 250GB hard drive in my MacBook Pro (early 2008 model) to a 750GB drive. I have connected the new drive via an external USB enclosure. The drive is recognized fine; I can format it, etc. However, every time I try to clone the drive, I am getting Input/Output errors. Before the clone operation, I have verified both the internal and the external drive using Disk Utility, and they both check out fine. After the clone operation, the external drive shows multiple "Invalid node structure" errors.

    I have tried two approaches for cloning the drive:

        1. Using Disk Utility, by starting from the OSX install DVD
        2. Using Carbon Copy Cloner

    The outcome is the same in both cases. The Carbon Copy Cloner logs show a handful of the following types of errors:

        rsync: mkstemp "<... an external filename ...>" failed: Input/output error (5)
        rsync: stat "<... an external filename ...>" failed: Input/output error (5)

    The actual files affected seem to be different across different runs of the application. Before the last run, I used Disk Utility to (once more) reformat the external drive and explicitly overwrite it with zeros, but this made no difference. I also tried running a surface scan in Tech Tool Pro overnight. It got about 2/3 of the way through before I had to disconnect the drive (had to take my MacBook Pro to work), but so far it didn't report any bad blocks. Assuming it scans the drive in the same order in which blocks would be allocated during actual use, it seems like if bad blocks were to blame for the clone failures, they should have been found already (given that the source drive is only 250GB). As a last attempt, I may try SuperDuper as well, although my understanding is that it uses the same underlying rsync approach as Carbon Copy Cloner, so it's unlikely to perform any better. Are there any other things I should try before I send the drive in for a replacement? Could these problems be caused by my internal drive, even though it works fine and checks out fine in Disk Utility?

  • How to disable irritating Office File Validation security alert?

    - by Rabarberski
    I have Microsoft Office 2007 running on Windows 7. Yesterday I updated Office to the latest service pack, i.e. SP3. This morning, when opening an MS Word document (.doc format, and a document I created myself some months ago), I was greeted with a new dialog box saying:

        Security Alert - Office File Validation
        WARNING: Office File Validation detected a problem while trying to open this file. Opening this is probably dangerous, and may allow a malicious user to take over your computer.
        Contact the sender and ask them to re-save and re-send the file. For more security, verify in person or via the phone that they sent the file.

    Including two links to some Microsoft blabla webpage. Obviously the document is safe, as I created it myself some months ago. How do I disable this irritating dialog box?

    (On a side note, a rhetorical question: will Microsoft never learn? I consider myself a power user in Word, but I have no clue what could be wrong with my document so that it is considered dangerous. Let alone more basic users of Word. Sigh...)

  • Which is the fastest way to move 1Petabyte from one storage to a new one?

    - by marc.riera
    First of all, thanks for reading, and sorry for asking something related to my job. I understand that this is something that I should solve by myself, but as you will see it's something a bit difficult.

    A small description:

        Now:   Storage = 1PB using DDN S2A9900 storage for the OSTs, 4 OSS, 10 GigE network (Lustre 1.6); 100 compute nodes with 2x Infiniband; 1 Infiniband switch with 36 ports.
        After: Storage = previous storage + another 1PB using DDN S2A 990 or LSI E5400 (still to decide) (Lustre 2.0); 8 OSS, 10 GigE network; 100 compute nodes with 2x Infiniband.

    Previous experience: transferred 120 TB in less than 3 days using the following command:

        tar -C /old --record-size 2048 -b 2048 -cf - dir | tar -C /new --record-size 2048 -b 2048 -xvf - 2>&1 | tee /tmp/dir.log

    So, big problem here: using big mathematical equations I conclude that we are going to need 1 month to transfer the data from one side to the new one. During this time the researchers will need to step back, and I'm personally not happy with this. I'm telling you that we have Infiniband connections because I think there may be a chance to use them, transferring the data with 18 compute nodes (18 * 2 IB = 36 ports) from one storage to the other. I'm trying to figure out whether the IB switch will handle all the traffic, but even if it just burns up, it will still go faster than using 10GigE. Also, having Lustre 1.6 and 2.0 agents on the same server works quite well; with this there is no need to go through 1.8 to upgrade the metadata servers in two steps. Any ideas? Many thanks

    Note 1: Zoredache, we can divide it in two blocks: (A) 600TB and (B) 400TB. The idea is to move (A) to the new storage, which is Lustre 2.0 formatted, then reformat where (A) was with Lustre 2.0, move (B) to this Lustre 2.0 block, and extend with the space where (B) was. This way we will end up with (A) and (B) on separate filesystems, with 1PB each.
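
    Not from the original post, but the "18 compute nodes as data movers" idea boils down to sharding the file list and running one tar pipeline per node. A minimal Python sketch of the sharding step (file names are placeholders):

        #!/usr/bin/env python3
        # Split a file listing into one chunk per transfer node.
        from pathlib import Path

        NODES = 18  # compute nodes used as data movers, as suggested in the post
        files = Path("filelist.txt").read_text().splitlines()  # hypothetical listing of /old

        chunks = [files[i::NODES] for i in range(NODES)]  # round-robin keeps chunks balanced by count
        for i, chunk in enumerate(chunks):
            Path(f"chunk_{i:02d}.lst").write_text("\n".join(chunk) + "\n")
            # each node would then run one pipeline over its own chunk, e.g.:
            #   tar -C /old --files-from chunk_NN.lst -cf - | tar -C /new -xf -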

  • Windows XP Setup Fails to Recognize USB Floppy after formatting AHCI disk

    - by Strahn
    I am attempting to install Windows XP Professional x64 onto an HP EliteBook 8540w. I have downloaded both the latest Intel Rapid Storage Technology drivers and the Intel Storage Matrix drivers that are listed on HP's website and copied the drivers over to a floppy disk (two separate floppies, one for each version of the drivers).

    Booting to my WinXP Pro x64 install CD, I go through the F6 process, load the driver and am able to see my HDD, and delete, create and format partitions on it. When I go to continue the install, after checking the disk, the system asks me to insert the disk labeled "Intel Rapid Storage Technology" and press Enter to continue. Nothing happens at this point when I press Enter. This happens whether I use the latest drivers or the older drivers.

    We have created a slipstreamed install CD using nLite that has the AHCI drivers integrated, which installs fine. However, we have identified a number of issues with the system that I believe are side effects of using nLite for the slipstreaming, and I am attempting to verify that. I have researched this issue and found a few examples of others having the same problem, but no solution. The USB floppy is a Lacie-branded floppy; connecting it to a working XP workstation shows it to be the Y-E Data USB floppy drive that is supposedly 100% compatible with XP per MS KB 916196.

  • Can I tell if crashplan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, if CrashPlan has backed up a particular file, including the current updates to that file, i.e. that the current contents of the file are backed up. It's relatively easy to tell when CrashPlan last backed up a file: its file name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy I could compare the backup time with the last modification time of the file, but that method appears to be somewhat dubious.

    A couple of methods I could think of, but I don't know how to do:

        1. Compare the current file to CrashPlan's metadata about that file. This needs knowledge about the format of CrashPlan's "cache" files as well as the hashing system used. This might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable.
        2. Restore the file to a temporary directory, and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way.

    I'll describe what I'm trying to achieve. It would be nice to know how to do the above, even if there are alternative methods for the following: I'm using CrashPlan for continuous backups of my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) that a group of WALs is backed up, remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed up too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet. I'd like to be able to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the "archive_command" from "copy to archive directory" into "confirm CrashPlan has backed up the current version", and do away with the archive directory completely.) (And, yes, I'm doing regular pg_dumpall's, in addition to the above.)
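
    Not part of the original question, but if the restore step can ever be scripted, the comparison half of option 2 is straightforward. A Python sketch (paths are hypothetical, and it assumes a restored copy has already been produced somehow):

        import hashlib
        from pathlib import Path

        def sha256(path: Path, bufsize: int = 1 << 20) -> str:
            """Stream a file through SHA-256 so large WAL segments don't load into memory."""
            h = hashlib.sha256()
            with path.open("rb") as f:
                while chunk := f.read(bufsize):
                    h.update(chunk)
            return h.hexdigest()

        current  = Path("/var/lib/postgresql/archive/000000010000000000000042")  # hypothetical WAL segment
        restored = Path("/tmp/crashplan-restore/000000010000000000000042")       # hypothetical restored copy

        if restored.exists() and sha256(current) == sha256(restored):
            print("backed up: restored copy matches current contents")
        else:
            print("not confirmed: restore missing or contents differ")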

  • Recover NTFS data from a ZFS pool that was exposed as an iSCSI target

    - by David
    This was me being stupid, and the data is by no means critical; it is a learning experience first, a time saver second. I set up a 100GB iSCSI target via the bare-bones instructions in napp-it. It's a volume LU. I then had my Windows 7 machine connect to the iSCSI target, formatted it to NTFS, and tested its performance with some large iso file transfers. I then unmapped the drive, reconnected to the target, and was forced to format to NTFS again. It was then I realized the files I had transferred only existed on the iSCSI target. I threw a little fit and then went about my business.

    When I was cleaning up my experiment I noticed in this screen: http://imgur.com/1xlcu.jpg that my experimental target tank/iSCSI still has a lot of data in it. Assuming my isos are still in this pool, how would I go about recovering them? While writing this I used GetDataBack for NTFS from www.runtime.org, and while it found two previous NTFS partitions, there was no data.

  • Postfix won't pipe to PHP file through aliases file

    - by jfreak53
    I'm trying to pipe from Postfix to a command. According to the Postfix logs it worked, but when I check the command it didn't. This is a fresh Postfix install. This is my alias file:

        # See man 5 aliases for format
        postmaster: root
        support: "| /usr/bin/php -q /var/www/pipe/pipe.php"

    I run sendmail [email protected], type the message, then type . on a separate line and it goes. I check the Postfix log /var/log/mail.log and this is what it states:

        Nov 2 15:32:33 server3 postfix/local[13284]: 42C429E0B5: to=<[email protected]>, relay=local, delay=156, delays=156/0.01/0/0.05, dsn=2.0.0, status=sent (delivered to command: /usr/bin/php -q /var/www/pipe/pipe.php)

    So according to that it worked, but it doesn't. If I run

        echo 'text' | /usr/bin/php -q /var/www/pipe/pipe.php

    it does work just fine. Any ideas what I did wrong? I know piping is working; I originally checked it by running the alias above WITHOUT the quotes, so just

        support: | /usr/bin/php -q /var/www/pipe/pipe.php

    What it did there was append my email, header and all, to the file pipe.php. So I know Postfix was piping it, but when I put in the quotes it says it's going but it's not, according to my script.
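
    Not from the original question, but a throwaway pipe target that just dumps stdin can separate the Postfix side from the PHP side: point the alias at a script like the sketch below and see whether anything arrives. Paths are placeholders; the script must be readable and executable by the user Postfix runs local pipe commands as.

        #!/usr/bin/env python3
        # Minimal pipe target for testing: append whatever Postfix hands over on stdin to a log file.
        import sys
        import datetime

        LOG = "/tmp/pipe-debug.log"   # quick-test location only

        with open(LOG, "a") as f:
            f.write(f"--- message received {datetime.datetime.now().isoformat()} ---\n")
            f.write(sys.stdin.read())
            f.write("\n")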

  • Interactive console based CSV editor

    - by Penguin Nurse
    Although spreadsheet applications for editing CSV files on the console used to be among the earliest killer applications for personal computers, only a few of them are still actively maintained, and even less documentation about them is. After an extensive search of the web, manpages and source code, I ended up with the following three applications, which all have fundamental drawbacks:

        sc: abbreviation for "spreadsheet calculator"; a nice tool with vi keybindings, but it does not put strings containing the delimiter into quotes when exporting to delimiter-separated format, and it can't import CSV files correctly, i.e. all numbers are interpreted as strings.
        GNU oleo: doesn't seem to have been actively maintained since 2001, and there are therefore no packages for major Linux distributions.
        teapot: offers packages for various operating systems, but uses, for example, counter-intuitive naming for cells (numbers for row and column, i.e. 11 seems to be intended to be row 1, column 1) and carries superfluous code for an FLTK GUI.

    Various Emacs modes also do not quote strings containing the delimiter well, or require much more typing for entering the scaffold of a table. Therefore I would be very grateful for a way of overcoming one of these drawbacks, or any hints towards another console-based CSV editor. It actually needn't do any calculations, just edit cells, columns and rows.
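
    As an aside on the quoting complaint about sc (not part of the original post): the behaviour the asker expects - quote a field only when it contains the delimiter - is what standard CSV libraries do by default, as this small Python illustration shows:

        import csv, io

        rows = [["name", "note"],
                ["widget", "contains; a semicolon"]]

        buf = io.StringIO()
        writer = csv.writer(buf, delimiter=";", quoting=csv.QUOTE_MINIMAL)
        writer.writerows(rows)        # fields containing the delimiter are quoted automatically
        print(buf.getvalue())
        # name;note
        # widget;"contains; a semicolon"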

  • Adding third disk as a single disk in a server with an existing RAID1

    - by slowhandsolo
    I've got a ProLiant DL360 G5 server (Fedora 13) with two SAS disks in a hardware RAID 1, working fine. Now I hot-plugged another SAS disk. I'd like to configure this new hard disk outside of my RAID, as a single non-RAID disk (e.g. /dev/sdb). Even after rebooting the server, I can't see the new disk with "fdisk -l". It displays only my hardware RAID, but not the new disk:

        [root@myserver]# fdisk -l

        Disco /dev/cciss/c0d0: 300.0 GB, 299966445568 bytes
        Disposit.           Inicio   Comienzo      Fin      Bloques  Id  Sistema
        /dev/cciss/c0d0p1   *               1      126       512000  83  Linux
        /dev/cciss/c0d0p2                 126    71798    292422656  8e  Linux LVM

        Disco /dev/dm-0: 234.9 GB, 234881024000 bytes
        Disco /dev/dm-1: 10.5 GB, 10536091648 bytes
        Disco /dev/dm-2: 21.0 GB, 20971520000 bytes
        Disco /dev/dm-3: 31.5 GB, 31474057216 bytes
        Disco /dev/dm-4: 1577 MB, 1577058304 bytes

    However, I can see the new disk using the HP Array Configuration Utility CLI for Linux, "hpacucli":

        [root@myserver]# hpacucli
        => controller slot=0 physicaldrive all show status

        physicaldrive 1I:1:1 (port 1I:box 1:bay 1, 300 GB): OK
        physicaldrive 1I:1:2 (port 1I:box 1:bay 2, 300 GB): OK
        physicaldrive 1I:1:3 (port 1I:box 1:bay 3, 300 GB): OK

        => controller slot=0 pd all show detail

        Smart Array P400i in Slot 0 (Embedded)

           array A
              physicaldrive 1I:1:1
                 Port: 1I
                 Box: 1
                 Bay: 1
              physicaldrive 1I:1:2
                 Port: 1I
                 Box: 1
                 Bay: 2

           unassigned
              physicaldrive 1I:1:3
                 Port: 1I
                 Box: 1
                 Bay: 3
                 Status: OK
                 Drive Type: Unassigned Drive

    As you can see, I've got two SAS disks in a RAID 1 and the new disk shows as "unassigned". Is there any way to work with the new disk as another non-RAID single disk? If relevant, I want to create a new partition on my new disk, format it with mkfs and mount it, but as I can't see it with fdisk, I don't know how to do it. Thanks!

  • Hotmail marking messages as junk

    - by Canadaka
    I was having problems with emails sent from my server being blocked completely by Hotmail, but I found out Hotmail had blocked my IP, and by contacting Hotmail I had the block removed. See this question for more info: Email sent from server with rDNS & SPF being blocked by Hotmail.

    But now all emails from my server are going directly to recipients' "Junk" folder on Hotmail and I can't figure out why. Hotmail says "Microsoft SmartScreen marked this message as junk and we'll delete it after ten days." I tried contacting the same people at Hotmail who had my IP block removed, but I haven't received any reply and it's been almost a week.

    Here are some details:

        - I have a valid SPF record for my domain: "v=spf1 a include:_spf.google.com ~all"
        - I have reverse DNS set up
        - I have a Sender Score of 100: https://www.senderscore.org/lookup.php?lookup=66.199.162.177&ipLookup.x=55&ipLookup.y=14
        - I have signed up for Microsoft's SNDS and was approved. My IP says "All of the specified IPs have normal status."
        - Microsoft added my IP to the JMRP database
        - My IP is not on any credible spam lists: http://www.anti-abuse.org/multi-rbl-check-results/?host=66.199.162.177
        - My FROM header is being sent in the proper format: "From: CKA <[email protected]>"

    Here is a test email source:

  • How can I recover XFS partitions from a formatted HD?

    - by giuprivite
    I deleted the partition table of my HD. I wanted to format another one, but by mistake I formatted the wrong one. Then I also created some new partitions on it. Now I would like, if possible, to recover my old data. The old configuration was this: a primary NTFS partition with Windows, and a secondary partition with four logical partitions: a swap and three XFS partitions (two for Ubuntu and OpenSuSE, and one with the home for both systems). This is the output I get when I run gpart in a terminal:

        ubuntu@ubuntu:~$ sudo gpart /dev/sdb

        Begin scan...
        Possible partition(Windows NT/W2K FS), size(39997mb), offset(0mb)
        Possible extended partition at offset(39997mb)
        Possible partition(Linux swap), size(8189mb), offset(39997mb)
        Possible partition(SGI XFS filesystem), size(40942mb), offset(48187mb)
        Possible partition(SGI XFS filesystem), size(40942mb), offset(89149mb)
        Possible partition(SGI XFS filesystem), size(175044mb), offset(130112mb)
        End scan.

        Checking partitions...
        Partition(OS/2 HPFS, NTFS, QNX or Advanced UNIX): primary
        Partition(Linux swap or Solaris/x86): logical
        Partition(Linux ext2 filesystem): logical
        Partition(Linux ext2 filesystem): orphaned logical
        Partition(Linux ext2 filesystem): orphaned logical
        Ok.

        Guessed primary partition table:
        Primary partition(1)
           type: 007(0x07)(OS/2 HPFS, NTFS, QNX or Advanced UNIX)
           size: 39997mb #s(81915360) s(63-81915422)
           chs:  (0/1/1)-(1023/254/63)d (0/1/1)-(5098/254/51)r

        Primary partition(2)
           type: 015(0x0F)(Extended DOS, LBA)
           size: 265245mb #s(543221849) s(81915435-625137283)
           chs:  (1023/254/63)-(1023/254/63)d (5099/0/1)-(38912/254/2)r

        Primary partition(3)
           type: 000(0x00)(unused)
           size: 0mb #s(0) s(0-0)
           chs:  (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r

        Primary partition(4)
           type: 000(0x00)(unused)
           size: 0mb #s(0) s(0-0)
           chs:  (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r

    Looking at the first eight lines, it seems the data are still there, but I don't know how to recover them. I have a free second HD of about 500 GB (the formatted one is 320 GB) that I can use for the recovery process.

  • Disappearing Arial on 2 Macs

    - by drewk
    I noticed that Safari started rendering common web pages in a funny manner on two different Macs that I have. One is a MacBook Pro and the other a Mac Pro desktop. Yahoo and Google would appear all excessively bold or all italic and not look right or acceptable at all. The computers are all running OS X 10.6.3 "Snow Leopard". It turns out that "Arial.TTF" and "Arial Bold.ttf" got deleted somehow on these two computers. I restored Arial through Font Book and got my web mojo back. So, questions:

        1) Has anyone seen Arial strangely, randomly disappear? The only thing these two computers have in common is that they are the only two out of eight on site that recently got Adobe CS 5 installed. Has anyone had CS 5 delete Arial?
        2) When I restored Arial with Font Book, it went into the User fonts rather than All fonts. Can I use Font Book to restore a font to /System/Library/Fonts, or do I need to do that manually?
        3) I located THIS article on the web regarding OS X fonts. Essentially, Snow Leopard did away with the older .dfont format and replaced it with open TrueType. There is a minimum font list, but Arial is not among them. Arial is installed by MS Office.
        4) Why are websites affected by Arial being missing anyway? If I look at the HTML source for Yahoo, for example, "arial" is specified by name only in an ad. Yahoo itself does not specify a font name. In my Safari preferences, I have Times and Courier specified as the defaults, which is the default for Safari when installed. How does a missing Arial screw things up anyway?

    Thanks in advance.

  • Cross-OS data recovery question, USB drive involved

    - by Moshe
    Here's the story: a MacBook had OS X 10.4 and Windows XP dual-booting using rEFIt. Then the Windows partition got corrupted and it wouldn't boot - presumably a virus. There were sensitive files there, and those were successfully copied to a USB drive; then 10.5 was installed on the hard drive, formatting the drive in the process. The USB drive's contacts cracked and the data is lost from there, unless it can be resoldered. The issue is that there is too much solder there already.

    So, how can the data in question be recovered? The files were Microsoft Money (not the latest version) files for the Windows version of the program. Right now, only OS X is installed on the MacBook. Is there a Mac-based program that can recover the Windows data, or am I better off trying to resolder the drive? Does anyone know how best to resolder a USB drive more than once, where the first solder is there but detached from the silicon? Also, what format (extension) are Microsoft Money files? In need of help!

  • HTTP Error: 413 Request Entity Too Large

    - by Torben Gundtofte-Bruun
    What I have: I have an iPhone app that sends HTTP POST requests (XML format) to a web service written in PHP. This is on a hosted virtual private server, so I can edit httpd.conf and other files on the server, and restart Apache.

    The problem: the web service works perfectly as long as the request is not too large, but around 1MB is the limit. After that, the server responds with:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>413 Request Entity Too Large</title>
        </head><body>
        <h1>Request Entity Too Large</h1>
        The requested resource<br />/<br />
        does not allow request data with POST requests, or the amount of
        data provided in the request exceeds the capacity limit.
        </body></html>

    The web service writes its own log file, and I can see that small messages are processed fine. Larger messages are not logged at all, so I guess that something in Apache rejects them before they even reach the web service?

    Things I've tried without success (I've restarted Apache after every change; these steps are incremental):

        1. Hosting provider's web-based configuration panel: disable mod_security
        2. httpd.conf: LimitXMLRequestBody 0 and LimitRequestBody 0
        3. httpd.conf: LimitXMLRequestBody 100000000 and LimitRequestBody 100000000
        4. httpd.conf: SecRequestBodyLimit 100000000

    At this stage, Apache's error.log contains a message:

        ModSecurity: Request body no files data length is larger than the configured limit (1048576)

    It looks like my step #4 didn't really take, which is consistent with step #1 but does not explain why mod_security appears to be active after all. What more can I try, to get the web service to receive large messages?
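
    Not part of the original question, but a quick way to see where the size cutoff sits after each configuration change is to post progressively larger dummy bodies. A small Python sketch (the URL is a placeholder, and it assumes the requests library is installed):

        import requests  # assumed available; any HTTP client works

        URL = "https://example.com/service.php"   # placeholder for the PHP web service endpoint

        for size_mb in (0.5, 1, 2, 4, 8):
            body = "<data>" + "x" * int(size_mb * 1024 * 1024) + "</data>"
            r = requests.post(URL, data=body.encode(),
                              headers={"Content-Type": "text/xml"}, timeout=60)
            print(f"{size_mb:>4} MB -> HTTP {r.status_code}")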

  • Windows 7 pc freezes for an indeterminate amount of time after unlocking

    - by pikes
    Not sure if this type of question is appropriate for this forum, but I've tried everything I can think of to solve this problem aside from format/reinstall. I recently got a new work PC (Dell OptiPlex 755) with Windows 7 Professional x64, with standard developer software installed for .NET development: VS2008, VS2005, SQL Management Studio, Office 2007, etc.

    Recently I've been having this weird problem where, after I lock my PC and try to unlock it, the screen stays black for a while after unlocking. I can Ctrl+Alt+Del and put my password in, but then it just goes black. The amount of time on the black screen seems to be related to the amount of time I am away from my PC. If I'm only away a few minutes, it'll take about a minute to get to the desktop. If away for an hour, it can take up to 15 minutes. If I lock it and go home for the night, I have to restart my PC in the morning (I've let it sit for an hour after a night of being locked and nothing happened). It doesn't do it every time, but definitely the majority of the time.

    One weird thing I've seen is that if I remote into my machine before trying to log back in, it does not do it. I uninstalled all software back to the point when I remember it started happening and it still does it. I was using this PC for a few weeks without this problem happening at all. Does anyone know what my next troubleshooting steps could be? My IT department tried to fix it by moving my old profile to another disk and having me log in, effectively recreating a profile from scratch, but that didn't solve it. As I said above, if this isn't the right forum for these types of questions, please let me know. Thanks in advance!

  • Estimating compressed file size using a list parameter

    - by Sai
    I am currently compressing a list of files from a directory with the following command:

        tar -cvjf test_1.tar.gz -T test_1.lst --no-recursion

    The above command will compress only those files mentioned in the list. I am doing this because the list is generated so that it fits a DVD. However, compression shrinks the data below the estimated size, and there is abundant space left on the DVD. This is something like a knapsack problem: I would like to estimate the compressed file size and add some more files to the list. I found that it is possible to estimate the size using the following command:

        tar -cjf - Folder/ | wc -c

    This command does not take a list parameter. Is there a way to estimate the compressed file size? I am also looking into options like Perl scripts, etc.

    Edit: I think I should provide more information, since I have been doing a lot of web searching. I came across a Perl script (link) that sort of emulates the knapsack algorithm. The current problem with that script is that it splits the files based on their original, uncompressed sizes. When I compress the files after splitting them, there is room left for adding more files, which I consider inefficient. There are 2 ways I could resolve the inefficiency:

        a) Compress the individual files and save them in a directory using a script. The compressed files could provide a better estimate: I could generate the split using the folder of compressed files and apply it to the uncompressed ones.
        b) Check whether the compressed file's size is less than the required size. If so, keep adding files until the requirement is met. However, adding new files to an existing compressed archive is an optimization problem by itself.
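
    Not from the original question, but the estimate can be computed without writing an archive to disk by streaming each listed file through a compressor and summing the output. A Python sketch (approximate: tar header overhead is ignored, and per-file compression can differ slightly from one solid tar.bz2):

        #!/usr/bin/env python3
        # Estimate the compressed size of exactly the files named in a list file.
        import bz2
        from pathlib import Path

        def estimated_compressed_size(list_file: str, chunk: int = 1 << 20) -> int:
            total = 0
            for name in Path(list_file).read_text().splitlines():
                if not name:
                    continue
                comp = bz2.BZ2Compressor()
                with open(name, "rb") as f:
                    while data := f.read(chunk):
                        total += len(comp.compress(data))
                total += len(comp.flush())
            return total

        if __name__ == "__main__":
            size = estimated_compressed_size("test_1.lst")   # same list file as in the tar command
            print(f"estimated archive size: {size / 1024 ** 2:.1f} MiB")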
