Search Results

Search found 18029 results on 722 pages for 'stripe size'.

Page 565 of 722

  • KVM error with device pass through

    - by javano
    I am running the following command, booting a Debian live CD and passing a host PCI device to the guest as a test, and KVM errors out: kvm -m 512 -boot c -net none -hda /media/AA502592502565F3/debian.iso -device pci-assign,host=07:00.0 PCI region 1 at address 0xf7920000 has size 0x80, which is not a multiple of 4K. You might experience some performance hit due to that. No IOMMU found. Unable to assign device "(null)" kvm: -device pci-assign,host=07:00.0: Device 'pci-assign' could not be initialized lspci | grep 07 07:00.0 Ethernet controller: 3Com Corporation 3c905C-TX/TX-M [Tornado] (rev 74) I shoved an old spare NIC into my motherboard to test PCI pass-through. I have searched the Internet with Google and found that errors relating to "No IOMMU found" often mean the PCI device is not supported by KVM. Does KVM have to support the device being "passed through"? I thought the point was to pass the device through and let the guest worry about it? Ultimately I want to pass through a PCI random number generator; is this not going to be possible with KVM? Thank you.
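
    A quick way to rule the host in or out: device assignment needs an IOMMU (Intel VT-d or AMD-Vi) enabled both in the BIOS and on the host kernel command line, independent of what the guest does with the card. A minimal check, assuming an Intel CPU and a GRUB-based host (the paths are assumptions):

        # Did the kernel find an IOMMU at boot?
        dmesg | grep -i -e iommu -e dmar

        # If not, add intel_iommu=on to the kernel command line and reboot,
        # e.g. in /etc/default/grub:
        #   GRUB_CMDLINE_LINUX="... intel_iommu=on"
        grep GRUB_CMDLINE_LINUX /etc/default/grub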

    Read the article

  • Can't write to samba share

    - by Tiddo
    I'm trying to set up a samba file server, but whatever I do I can't get write access to work (reading works fine). This is my current situation: I have a local fileserver with 3 harddisks mounted at /mnt/share/disk<nr>. 2 of these use the ext4 filesystem, the third one is ntfs. This file server runs Fedora 18 32-bit. The root folders of these harddisks are owned by superman:superman, and testparm outputs the following: [global] workgroup = WORKGROUP netbios name = FILE_SERVER server string = Samba Server Version %v interfaces = lo, eth0, 192.168.123.191/8 log file = /var/log/samba/log.%m max log size = 50 unix extensions = No load printers = No idmap config * : backend = tdb hosts allow = 192.168.123. cups options = raw wide links = Yes [share] comment = Home Directories path = /home/share/ write list = superman, @users force user = superman read only = No create mask = 0777 directory mask = 0777 inherit permissions = Yes guest ok = Yes I've tried a lot to get this to work: the disks are chmodded to 777, I've tried turning off SELinux, I've added the samba_share_t label to the disks, and as can be seen in the above output I tried to make the smb config as permissive as I could, but still I cannot write to the share (tried from Windows 7 and another Fedora installation). What can I try to be able to write to the shares? EDIT: The replies I got so far are mostly concerned with the smb.conf. I have however tried a lot of different setups, ready-made configs, and solutions to similar problems for the smb.conf file, so I suspect that the real problem is somewhere else.
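
    If the config really is this permissive, SELinux is the usual remaining suspect on Fedora; the sketch below relabels the share trees persistently and checks the audit log for denials. The /mnt/share path follows the question, and the policycoreutils-python package (which provides semanage) is an assumption:

        # Label the share trees for Samba and apply the labels
        semanage fcontext -a -t samba_share_t "/mnt/share(/.*)?"
        restorecon -Rv /mnt/share

        # As a quick test only: let smbd write to any file type
        setsebool -P samba_export_all_rw on

        # Did SELinux actually deny the writes?
        ausearch -m avc -ts recent | grep smbd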

    Read the article

  • File uploads and client_max_body_size in nginx + gunicorn + django

    - by carlosescri
    I need to configure nginx + gunicorn to be able to upload files greater than the default max size in both servers. My nginx .conf file looks like this: server { # ... location / { proxy_pass_header Server; proxy_set_header Host $http_host; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_connect_timeout 60; proxy_pass http://localhost:8000/; } } The idea is to allow requests of 20M for two locations: /admin/path/to/upload?param=value /installer/other/path/to/upload?param=value I've tried to add location directives at the same level as the one I've pasted here (getting 404 errors) and also tried to add them inside the location / directive (getting 413 Entity Too Large errors). My location directives look like these in their simplest form: location /admin/path/to/upload/ { client_max_body_size 20M; } location /installer/other/path/to/upload/ { client_max_body_size 20M; } But they don't work (I've actually tested lots of combinations and I'm getting desperate). Please help if you can: what settings do I need to set to make this work? Thank you so much!
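
    For reference, a minimal sketch of the usual fix: client_max_body_size can sit at http, server, or location level, but a location that should proxy uploads also needs its own proxy_pass (otherwise nginx tries to serve that path locally and returns 404). The upstream port and headers are carried over from the question; treat this as a sketch rather than a drop-in config:

        server {
            # upload endpoints: bigger body limit, same upstream
            location /admin/path/to/upload {
                client_max_body_size 20M;
                proxy_pass http://localhost:8000;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
            }

            location / {
                client_max_body_size 1M;   # nginx default elsewhere
                proxy_pass http://localhost:8000/;
            }
        }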

    Read the article

  • How can I avoid a few seconds of blank video when using -vcodec copy?

    - by arlomedia
    I'm processing user-uploaded videos on a CentOS web server with ffmpeg. I need to convert each video to a standard size and format, then extract a 30-second sample clip from each video. I want to use the "-vcodec copy" flag in the extraction command to avoid encoding a second time. This command works for my initial conversion: ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -acodec libfaac -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 formatted.mp4 And this sometimes works for the extraction: ffmpeg -i formatted.mp4 -vcodec copy -acodec copy -ss 0 -t 30 formatted_sample.mp4 However, when I run the extraction command on some videos, the extracted sample clip starts with several seconds of blank video. The audio starts right away but the video doesn't start for 3-6 seconds. To demonstrate the problem, I've uploaded two video clips and run the above commands on them. I created the first clip in Final Cut Express and encoded it with Handbrake before uploading to the web server: 1a) uploaded clip 1b) converted with first command 1c) extracted with second command, missing first six seconds By comparison, this second clip comes from Apple's website and does not show the problem: 2a) uploaded clip 2b) converted with first command 2c) extracted with second command, no problem Can anyone see what's different about the two source clips? And if so, is there anything I can do in my conversion command so that when the extraction command runs, the clip is set up to avoid the missing video? By the way, I initially had the problem with ffmpeg 0.6.1 installed from yum, but I upgraded to the latest git version and the problem remains.
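
    One hedged explanation and workaround: with -vcodec copy the extraction can only begin on a keyframe, so if a clip's encode puts its first keyframe several seconds into the stream, the sample gets audio immediately but no video until that keyframe arrives. Forcing a denser keyframe interval in the initial conversion (the -g value below is an assumption, roughly one keyframe per second at -r 15) usually lets the later stream copy cut cleanly:

        # Re-encode with a keyframe at least every 15 frames
        ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -g 15 \
               -acodec libfaac -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 formatted.mp4

        # The stream-copy extraction can then start on (or very near) a keyframe
        ffmpeg -i formatted.mp4 -vcodec copy -acodec copy -ss 0 -t 30 formatted_sample.mp4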

    Read the article

  • How To Replace Laptop HDD Without Losing Data?

    - by Ishan
    Hello, I recently went to the Dell Service center and they told me the HDD is faulty and needs to be replaced. I have a Studio 1457 laptop with a 500 GB HDD and don't want to lose the data (purchased in May 2010, still under warranty). I have searched a bit and I think it may be best to use disk imaging software for this task. However, I don't know of a good program. I have the following steps in mind: Get a 1 TB External HDD. Make an image of the existing 500 GB HDD and store the data on the external disk. Install the new HDD, install a brand new Windows copy and then install the imaging software on it. Using the same software I used to make the image, restore the old HDD image onto the new one. However, I have some questions in mind. First, is this possible? Second, I live in a country where piracy is a big issue and I am sure the support executive who will come to change the HDD will have a pirated copy. But I have genuine Windows 7 Pro and don't want to lose it. Now, Dell does not supply any OS disks, so I can't install it on the new HDD! If I follow the above steps, which version of Windows 7 will be retained? The one in the image (authentic) or the one on the new HDD (pirated)? I am ready to purchase a good program for this task and my budget is $50-60. Since the laptop is under warranty, the new HDD will be free. One last thing, I have created a Windows Migration file whose size is 70 GB. Can it be used to move from Windows 7 Pro to Windows 7 Pro? (In case I get a genuine copy of Windows 7!) Any other method to save all the data? Thanks in advance.
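
    For the imaging step, one free route is to boot the laptop from a Linux live USB and take a raw image with dd; a commercial tool is not strictly required. This is a sketch only — the device names and mount point below are assumptions, so verify them with lsblk before running anything:

        # From a Linux live session, with the external 1 TB drive attached:
        lsblk                              # identify the internal disk (e.g. /dev/sda) and the external partition (e.g. /dev/sdb1)
        mount /dev/sdb1 /mnt               # mount the external drive
        dd if=/dev/sda of=/mnt/laptop.img bs=4M conv=noerror,sync

        # After the new HDD is fitted, boot the live USB again and restore:
        dd if=/mnt/laptop.img of=/dev/sda bs=4M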

    Read the article

  • VMWare Raw Device Mapping Not Working

    - by George H. Lenzer
    While I'm waiting for VMware support to get back to me, I thought I'd ask here. I have a 400 gig LUN presented from a fibre channel SAN to my VMware host. It's legacy from another virtualization platform and I need to keep it as is to avoid a long period of downtime. I formatted my VMFS3 datastore with 4 meg blocks to allow up to 1 TB disks. Then I tried adding my 400 gig disk as a raw device in physical compatibility mode. I get the error: "File is larger than the maximum size supported by datastore 'Base Test'. [Base Test]VMTEST01/VMTEST01_2.vmdk" Originally I had the VMFS datastore formatted with 1 meg blocks, which was the cause of this problem since the largest disk allowed would be 256 gigs. But I deleted the datastore and then reformatted it with 4 meg blocks. I've also tried using virtual compatibility mode for the raw device but it still fails. Does anyone have any suggestions? I've been waiting for a little over a week for VMware, but that's fine because I'm not yet a paying customer. I'm still in the eval phase.
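
    If the vSphere/VI client keeps refusing, one thing worth trying from the host console is creating the RDM mapping file by hand with vmkfstools and then attaching the resulting .vmdk to the VM as an existing disk. A sketch only — the naa identifier is a placeholder and the datastore path is taken from the error message:

        # List the SAN LUNs the host can see
        esxcfg-scsidevs -c

        # Create a physical-compatibility RDM pointer on the VMFS3 datastore
        vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx \
            "/vmfs/volumes/Base Test/VMTEST01/VMTEST01_2.vmdk"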

    Read the article

  • Finding custom printer-specific options on *nix

    - by enbuyukfener
    Hey all, I have a printer set up using CUPS (FujiXerox Document Center 1100), it is named DC1100. There are capabilities of the printer that are not shown as options when listed by: lpoptions -l -d DC1100 The output is below: PrintoutMode/Printout Mode: Draft *Normal High Photo InputSlot/Media Source: Upper Lower MultiPurpose LargeCapacity Manual *Standard PageSize/Page Size: *Letter A4 C5 C6 COM10 DL Executive Legal Monarch Statement PageRegion/PageRegion: Letter A4 C5 C6 COM10 DL Executive Legal Monarch Statement STP_Brightness/Brightness: 0.00 0.02 0.04 [snip] 2.00 STP_Contrast/Contrast: 0.00 0.05 0.10 0.15 [snip] 4.00 STP_ColorCorrection/Color Correction: Accurate Bright Density [snip] Uncorrected STP_DitherAlgorithm/Dither Algorithm: Adaptive EvenTone Fast [snip] VeryFast STP_EnableDensity/Density Enable: *Disabled Enabled STP_Density/Density Value: 0.1 0.2 0.3 0.4 [snip] 8.0 STP_EnableGamma/Composite Gamma Enable: *Disabled Enabled STP_Gamma/Composite Gamma Value: 0.10 0.15 0.20 0.25 [snip] 4.00 STP_LinearContrast/Linear Contrast Adjustment: *False True STP_Duplex/Double-Sided Printing: DuplexNoTumble DuplexTumble *None Resolution/Rendering Resolution: *FromPrintoutMode 150x150dpi 300x300dpi 600x600dpi OutputType/Output Type: *FromPrintoutMode BlackAndWhite Grayscale STP_ImageType/Image Type: *FromPrintoutMode Photo Graphics LineArt None Text TextGraphics STP_Resolution/Resolution: *FromPrintoutMode 150dpi 300dpi 600dpi I am particularly looking for options for: "secure print" (possibly by setting a mode and setting a username), stapling, and hole punching. Perhaps I need a vendor-specific driver/PPD file? If so, any pointers? I have no idea where to look for one. I haven't been able to find one on the official site or on sites such as http://www.openprinting.org
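
    The STP_* options suggest the queue is using a generic Gutenprint PPD, which will never expose vendor features like secure print or finishing. A few commands to confirm what the queue is using and to switch it to a manufacturer PPD once you find one (the PPD path below is a placeholder):

        # Which PPD/driver does the DC1100 queue use now?
        grep -i -e '\*NickName' -e '\*ModelName' /etc/cups/ppd/DC1100.ppd

        # Drivers CUPS already knows about for this make
        lpinfo -m | grep -i xerox

        # Point the queue at a downloaded vendor PPD, then re-list its options
        lpadmin -p DC1100 -P /path/to/FujiXerox-DocumentCentre1100.ppd
        lpoptions -p DC1100 -l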

    Read the article

  • Postfix qmgr process causes heavy overload on mailservers

    - by Mattias
    We are using Postfix as the MTA for our e-mail marketing software and once in a while we see that the load on one of the mailservers rises above 5. The load is caused by the qmgr process, which is the heart of Postfix, and I see that it is consuming a lot of CPU resources. The process seems to be stuck, because after 15 minutes it is still doing the same thing and still increasing the load. Once I restart the postfix service the load rapidly decreases to below 1 and Postfix continues to send e-mails without any problems. I'm wondering if anyone else has encountered this problem and if people have suggestions on how to prevent it. The problem shows up on all our mailservers but almost never on more than one at a time. It seems to be triggered only when we are sending a mailing, but the size (10 or 100,000 e-mails) doesn't seem to make a difference. It happens maybe once a week or even less often, and the time and day are also different every time. We tried to solve the problem by decreasing the number of messages qmgr is allowed to process, but this didn't solve it. We are using Postfix 2.5.5 on Debian Lenny 5.0.8 (postfix is installed through the default Debian repository). No special messages can be found in the logs (syslog, messages, mail.*). Thank you for your time.
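
    When it happens again, a snapshot of what qmgr is working on (taken before the restart) usually narrows this down. A sketch of commands to capture, assuming the stock Debian Postfix layout (qshape ships with Postfix, though Debian may install it outside the default PATH):

        # Queue size and where messages are piling up
        postqueue -p | tail -n 1
        qshape deferred
        qshape active

        # What is qmgr actually doing right now? (interrupt after a few seconds)
        strace -tt -p $(pgrep -x qmgr) -o /tmp/qmgr.trace

        # The limits currently in force for the queue manager
        postconf qmgr_message_active_limit qmgr_message_recipient_limit default_destination_concurrency_limit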

    Read the article

  • Can I still restore partition table?

    - by Johannes Lund
    I was going to resize partitions on my Mac HD from Bootcamp. I changed my mind and was going to quit, but apparently I hit a button which made every single Mac partition disappear, and Windows 7 refused to restart or be reinstalled. The 1 TB HD consists of 3 partitions, I believe. Since I can't see their actual size (except Bootcamp), this is how I recall it: Macintosh HD, about 500GB (somewhere around 700GB according to Disk Utility, but 500 according to Finder, and 500GB was all I could access); Lion Recovery disk; Bootcamp, 293.36 GB. To fix this I connected my Mac via target disk mode to a PC and ran TestDisk. However, these are the results: since I don't have 10 reputation I can't post the image showing the TestDisk results, so I post a link instead, hoping that is OK. The two Mac partitions' sizes are completely wrong, and BOOTCAMP isn't showing. I tested using Disk Utility from the Snow Leopard DVD. There, there is one 293.36 GB Mac OS Extended partition. Before I had the FireWire cable for target disk mode, I tried reinstalling Windows. Without success, I tried again, formatting BOOTCAMP. Was that a bad thing to do? Could it have overwritten data from Macintosh HD? Unfortunately I have no backup. I could bring it to some kind of computer repair firm though.

    Read the article

  • Samba joined to AD cannot see users in the security tab on the client

    - by Jonathan
    I've got Samba joined via Kerberos and winbindd to our AD network, and user authentication and everything else is working great. However, when I try to add users/groups to file permissions it tells me they are not found. All the users and groups show up fine with getent, so I'm not sure why they are not showing up. Here is my smb.conf and I would much appreciate any help with this. #GLOBAL PARAMETERS [global] socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=11264 SO_SNDBUF=11264 workgroup = [hidden] realm = [hidden] preferred master = no server string = xerxes web/file server security = ADS encrypt passwords = yes log level = 3 log file = /var/log/samba/%m max log size = 50 printcap name = cups printing = cups winbind enum users = Yes winbind enum groups = Yes winbind use default domain = Yes winbind nested groups = Yes winbind separator = + winbind refresh tickets = yes idmap uid = 1600-20000 idmap gid = 1600-20000 template primary group = "Domain Users" template shell = /bin/bash kerberos method = system keytab nt acl support = yes [homes] comment = Home Direcotries valid users = %S read only = No browseable = No create mask = 0770 directory mask = 0770 force create mode = 0660 force directory mode = 2770 inherit owner = no [test] comment = Test path=/mnt/test writeable=yes valid users = %s create mask = 0770 directory mask = 0770 force create mode = 0660 force directory mode = 2770 inherit owner = no [printers] comment = All Printers path = /var/spool/cups browseable = no printable = yes
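
    A few checks that separate "winbind resolves users" from "a Windows client can enumerate them through the security tab"; the user and group names are placeholders, and the log file name follows the %m pattern in the config above:

        # Does winbind itself list users, groups and map names to SIDs?
        wbinfo -u | head
        wbinfo -g | head
        wbinfo -n someuser

        # Does NSS resolve them the same way smbd will?
        getent passwd someuser
        getent group "Domain Users"

        # Watch smbd while the client's security tab searches for a name
        tail -f /var/log/samba/<client-machine-name>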

    Read the article

  • Fix Video timelines

    - by Josh
    So, I have been going through and ripping all of my DVDs, and it seems that the way to get the highest quality out of these is to have DVD Shrink decrypt, rip, and decompress the DVDs. After that I usually end up with a high quality (large size) set of .vob files in a classic DVD structure. Then I use a Python script that I wrote to automate the process of finding the title sequence and then combining all of the title sequences' .vob files together into one file (similar to the "copy /b" command in Windows), and then changing the extension to .mpg (a more widely supported format than .vob). This allows me to get a high quality rip in about 40 min. The problem comes in playing the files. I need all of the ripped DVDs to play on my media computer using Windows Media Center, but Windows Media Center (and VLC for that matter) thinks that the video files are anywhere from 5 min. to 0 min. long, which is not a problem (the video will still play all the way through), but if you want to pause it, when it is unpaused the video will start all the way over (also, fast forward and rewind don't work). I suspect that something is wrong with the way the timeline is encoded in the video file; various forums on the internet recommended using VirtualDub to fix the errors. But when I try to open the file, VirtualDub says that the file is not in MPEG-1 encoding and may be MPEG-2. Is there any way to fix this? PS: I am aware that there was a similar question, but it hasn't had any activity for 2 months and is dealing more with WMV files.
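
    Since the bogus durations come from concatenating VOBs and simply renaming the result, a hedged alternative is to let ffmpeg remux the joined file (no re-encode) and regenerate the timestamps that players use for seeking and pause/resume; the file names here are placeholders:

        # Remux without re-encoding, rebuilding presentation timestamps
        ffmpeg -fflags +genpts -i title_combined.vob -vcodec copy -acodec copy title_fixed.mpg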

    Read the article

  • Which upgrade path for disk IO bound postgres server?

    - by user41679
    Hi all, We currently have a Sun x4270 with 2x quad-core Xeon Nehalem 2.93GHz CPUs (16 threads), 72 GB of RAM and 16 x 10k SAS disks split between the OS RAID 1, a partition for the Write Ahead Logs which is RAID 10, and a partition for the database tables and indexes which is also RAID 10, all XFS. I'm currently evaluating which path to go down in terms of upgrades. We'll be sharding the DB at some point soon, but for now I need to focus on hardware upgrades specifically. The machine is not CPU or memory bound at all at the moment; just IOWait is becoming an issue. The machine is mostly write access as we have a heavy caching layer. We're seeing about 300 write IOPS average on both the database partitions. We don't have any additional storage infrastructure like a Fibre Channel or iSCSI network. Budget isn't too much of a concern, something in line with the size of this server (i.e. no $1m IBM machines). Space is OK on the DB side of things; we're running out obviously, but there's also some reduction we can do. Additional space would be good though. My current thoughts are either: * iSCSI SAN, possibly with a 10Gbit network that has solid-state acceleration. * FusionIO card / Sun F20 card (will the FusionIO card work in the Sun box?) * DAS shelf (something like this http://www.broadberry.co.uk/das-direct-attached-storage-servers/cyberstore-224s-das) with a combination of 15k SAS disks and some Intel X25-E drives for DB indexes etc. What would I need to put in the x4270 to add a DAS shelf? I think it's a SAS HBA card; do I have to use Sun's own card or will any PCI Express card work? Anything else??? What would you guys do from your experience? I appreciate it's a lot of questions, but I haven't expanded a DB machine for a number of years and the landscape has changed dramatically since then! Any advice or feedback would be very much appreciated. Let me know if there's anything else I can clarify. Thanks in advance!

    Read the article

  • Come on! Let me teach you how to save photos from iPhone to computer

    - by goodm
    I am using an iPhone. When we have fun we take a lot of nice pics, but then there is a question: how do you transfer photos from the iPhone to a computer? Now let me show you, step by step. Step 1: Download the Tansee iPhone Transfer Photo free trial version here and then install it. You also need iTunes above 7.3 installed. Or download at: http://www.softseeking.com/prodail.aspx?proid=74 Step 2: Connect the iPhone to your computer. Step 3: Launch Tansee iPhone Transfer Photo and all the photos on your iPhone will display automatically. Step 4: Select the photos to be transferred to your computer; the selected files will be marked with a red border. You can select photos by clicking on each one, or just drag a rectangle to select a bundle of photos. You can also select all photos by clicking the right button of your mouse or clicking "File" to choose. Note: you can only select the first 6 photos if you haven't purchased. Step 5: Click the "Copy" button to select the output path and start to transfer photos to your computer. iPhone Camera Photo & Camera Video: click "Camera Roll" and follow the steps above to copy your iPhone Camera Photos and iPhone Camera Videos to the PC. Options settings: 1. Backup File Format: selects the backup photo file format; Tansee iPhone Transfer Photo supports BMP and JPG files now. 2. Backup Path: selects the directory for storing the backup photos. You can select a backup directory for each photo during backup by checking "Ask Every Time", or store all files in a specified directory by checking "Save Here" and selecting the directory in the edit box. 3. Backup Resolution: selects the photo size to be backed up.

    Read the article

  • Determining the Source of a Given File System Mount on Unix [migrated]

    - by phobos51594
    Background: Recently I have run into a bit of a snag on my home FreeBSD server. I recently upgraded it to the latest stable release, and I have noticed some strange behavior with the /var partition. Originally, I had the system configured such that /var had its own partition with /var/run and /var/log in memory disks (/tmp, too). After the upgrade, I noticed there is a new, fourth memory disk mounting directly to /var that I had not set up manually and that is not in my fstab. It is only 28 megs or so in size and is causing problems when trying to update my ports collection. The ramdisk mounts automagically at boot and cannot be unmounted while in multi-user mode. If I drop to single-user mode, I am able to unmount it without issue; however, rebooting causes it to pop right back up. System specifications have been included at the end of the post. Question: Is there any way to determine exactly what is mounting a given memory disk (or any filesystem, for that matter) after it has been mounted? Alternatively, does anybody have any ideas what might have caused the new /var ramdisk to pop up? System Specification # uname -a FreeBSD sarge 9.1-PRERELEASE FreeBSD 9.1-PRERELEASE #0: Thu Nov 22 14:02:13 PST 2012 donut@sarge:/usr/obj/usr/src/sys/GENERIC i386 # df Filesystem 1K-blocks Used Avail Capacity Mounted on /dev/da0s1a 515612 410728 63636 87% / devfs 1 1 0 100% /dev /dev/da0s1d 515612 287616 186748 61% /var /dev/da0s1e 6667808 2292824 3841560 37% /usr /dev/md0 63004 32 57932 0% /tmp /dev/md1 3484 8 3200 0% /var/run /dev/md2 31260 8 28752 0% /var/log /dev/md3 31260 512 28248 2% /var <-- This # cat /etc/fstab # Device Mountpoint FStype Options Dump Pass# /dev/da0s1a / ufs rw,noatime 1 1 /dev/da0s1d /var ufs rw,noatime 2 2 /dev/da0s1e /usr ufs rw,noatime 2 2 md /tmp mfs rw,-s64M,noatime 0 0 md /var/run mfs rw,-s4M,noatime 0 0 md /var/log mfs rw,-s32M,noatime 0 0 Thank you in advance for any assistance.
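
    On FreeBSD, the usual source of an md device on /var that appears neither in fstab nor in your own setup is the rc tmpmfs/varmfs machinery, which can decide at boot that /var should be memory-backed. A sketch of where to look and how to inspect the md devices:

        # Is rc configured (or defaulting) to create memory filesystems?
        grep -E '(tmp|var)mfs' /etc/rc.conf /etc/defaults/rc.conf

        # What backs each md device (malloc, swap, vnode) and how big is it?
        mdconfig -l -v

        # If varmfs is the culprit, pin it off in /etc/rc.conf:
        #   varmfs="NO"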

    Read the article

  • Centos/Postfix able to send mail but not receive it

    - by Dan Hastings
    I have set up Postfix and used the mail command to test, and an email was successfully sent and delivered. The email arrived in my Yahoo inbox, BUT the sender also received an email in the Maildir directory saying "I'm sorry to have to inform you that your message could not be delivered to one or more recipients", even though the message was delivered. I tried replying from Yahoo to the email but it never arrived. I have 1 MX record added at GoDaddy, which I did last week: Priority 0, Host @, Points to mail.domain.com, TTL 1 Hour. Postfix main.cf has the following added to it: myhostname = mail.domain.com mydomain = domain.com myorigin = $mydomain inet_interfaces = all mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain mynetworks = 192.168.0.0/24, 127.0.0.0/8 relay_domains = home_mailbox = Maildir/ I checked /var/log/maillog and found the following errors occurring: postfix/anvil[18714]: statistics: max connection rate 1/60s for (smtp:unknown) at Jun 3 09:30:15 postfix/anvil[18714]: statistics: max connection count 1 for (smtp:unknown) at Jun 3 09:30:15 postfix/anvil[18714]: statistics: max cache size 1 at Jun 3 09:30:15 postfix/smtpd[18772]: connect from unknown[unknown] postfix/smtpd[18772]: lost connection after CONNECT from unknown[unknown] postfix/smtpd[18772]: disconnect from unknown[unknown] Output of postconf -n: alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix debug_peer_level = 2 home_mailbox = Maildir/ html_directory = no inet_interfaces = all inet_protocols = all mail_owner = postfix mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain mydomain = domain.com myhostname = mail.domain.com mynetworks = 168.100.189.0/28, 127.0.0.0/8 myorigin = $mydomain newaliases_path = /usr/bin/newaliases.postfix queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES relay_domains = sample_directory = /usr/share/doc/postfix-2.6.6/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop unknown_local_recipient_reject_code = 550
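
    Bounced replies to a freshly set-up server usually mean port 25 never reaches Postfix at all (ISP or router blocking) rather than a Postfix misconfiguration; the "lost connection after CONNECT from unknown" lines point the same way. A few checks, with the hostname as a placeholder:

        # From a machine OUTSIDE your network: does anything answer on port 25?
        telnet mail.domain.com 25

        # On the server: is Postfix listening, and is the firewall letting 25 in?
        netstat -tlnp | grep :25
        iptables -L -n | grep -w 25

        # Watch the log while sending a test reply from Yahoo
        tail -f /var/log/maillog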

    Read the article

  • Copy-paste speed very slow for a large number of tiny files on Windows but not on Linux

    - by Arno2501
    I've got this folder which contains 15,000 tiny images (around 400 bytes each). If I copy-paste this folder on my laptop (Windows 7, i7 latest gen, superfast SSD) it takes about 30 seconds (yes, for 7 megs!!!); the average transfer rate is 400 KBytes / second, which is so slow. I mean my usual transfer rate is more like hundreds of MBytes per second!!! I get the same problem on my servers (Windows 2003, 2008/R2) and on every Windows box that I could get my hands on. On the other hand, if I do the same on a Linux box (Debian-based, ext3 FS) (which runs on the same SAN as all the Windows servers I've tested) it's nearly instantaneous!!! I'm pretty sure the size/number of the files may stress one filesystem more than another, but such differences!? Why is that? Why is it so slow on the Windows boxes (more than 30 sec for 7 MB) and so fast on the Linux ones (one sec or so)? (I mean this was not a hardlink that I created, it was a true copy.) Is it normal behaviour or something unusual?

    Read the article

  • Distortion problem with Creative audio equalizer

    - by e-t172
    Hi, I have a problem with the Creative Console EQ, and I don't know if it's fixable or not (is the EQ software or hardware on these cards?). Basically, I have enormous distortion with certain sounds in the 30-125 Hz range. When this happens I get some sort of "frrzzzz" (sorry, I'm French and don't really know the correct English word for that) on top of the original sound. I have a Sound Blaster Audigy SE. I'm using the Daniel_K drivers, on Windows 7 Professional x64. All the effects are disabled except EQ. Steps to reproduce: Put the card in 24bit/96kHz mode. The problem is also present with 16bit/48kHz but seems to be less audible. In the Creative Console, use the following EQ settings. Play this sound at a reasonably high volume. You should hear distortion on the two "booms", especially the second one. Disable the Creative EQ. Play the sound in an application with an integrated EQ (e.g. foobar2000, ffdshow) using the same EQ parameters. There is no distortion. Conclusion: the Creative EQ is broken. Is anyone having the same problem? I'm also interested in the results with other Creative cards or even other brands' sound cards with a similar EQ feature.

    Read the article

  • Update a bootable OS X drive clone with rsync?

    - by Joe
    The question: is it possible to keep a bootable backup drive clone of OS X updated with rsync? If rsync is not a viable option, are there alternatives? The setup: my situation is as follows. One internal Samsung 840 SSD [120g] in use as my OS X 10.8 boot disk on a recent model Mac Mini. I have successfully cloned that drive with Disk Utility to a 125g partition of another HDD in an external USB 3 enclosure, and at that point I am able to boot to it. The goal: as my last system went out in a fiery blaze taking much valuable data with it, I have a new respect for a proper backup solution and really want to do this right. My goal is to achieve an automated differential backup/update from Disk A to Disk B while, most importantly, maintaining boot-ability on the external drive. And I would prefer to do this differentially to minimize stress on the drives. Hence rsync was the first thing to come to mind. What I have tried: following along with Jamie Zawinski's differential mac bootable backup solution, running this manually initially worked - I tested it with only a very minuscule file change and everything was fine / the external booted and all. Now after subsequent passes rsync fails, throwing errors particularly relating to updating 'boot.efi' (I'm not at the machine currently; I will update the precise log message once I return home). Is this a drive partition size issue? Does rsync require more space? If it can't be done, are there any alternatives? I've heard whispers of dd.
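
    For reference, a hedged sketch of the kind of invocation that keeps an OS X clone bootable; the destination volume name is an assumption, -x keeps rsync on the boot filesystem, and Apple's bundled rsync 2.6.9 uses -E for extended attributes/resource forks (a newer rsync from MacPorts/Homebrew would use -A -X instead). Re-blessing the clone afterwards is cheap insurance:

        # Clone the boot volume onto the external partition
        sudo rsync -vax -E --delete --ignore-errors / /Volumes/CloneBackup/

        # Make sure the firmware still considers the clone bootable
        sudo bless --folder /Volumes/CloneBackup/System/Library/CoreServices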

    Read the article

  • Need help getting Perl module DBD::mysql installed for Bugzilla on Red Hat

    - by Alos Diallo
    Hi everyone, I am having some issues getting Bugzilla set up. I have the software on the server and am trying to get the prerequisites set up. I am using RedHat 4.1.2-42. I have all of the required Perl modules save one: DBD::mysql. When I try: sudo perl install-module.pl DBD::mysql I get the following response (this is only an excerpt): rm -f blib/arch/auto/DBD/mysql/mysql.so LD_RUN_PATH="/usr/lib64/mysql:/usr/lib64:/lib64" /usr/bin/perl myld gcc -shared -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic dbdimp.o mysql.o -o blib/arch/auto/DBD/mysql/mysql.so \ -L/usr/lib64/mysql -lmysqlclient -lz -lcrypt -lnsl -lm -L/usr/lib64 -lssl -lcrypto \ /usr/bin/ld: skipping incompatible /usr/lib/libssl.so when searching for -lssl /usr/bin/ld: skipping incompatible /usr/lib/libssl.a when searching for -lssl /usr/bin/ld: cannot find -lssl collect2: ld returned 1 exit status make: *** [blib/arch/auto/DBD/mysql/mysql.so] Error 1 /usr/bin/make -- NOT OK Running make test Can't test without successful make Running make install make had returned bad status, install seems impossible I then tried the following: CFLAGS="-I/usr/lib64/mysql:/usr/lib64:/lib64" perl install-module.pl DBD::mysql I get the same result. I have also tried to install it using CPAN but I get the same result. Right now I have DBD::mysql v3.0007 but need v4.00. Also, when I try to install OpenSSL it says I have the latest version. Does anyone know what I have to do to get this to work? Any help would be greatly appreciated. Thank you
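
    The "skipping incompatible /usr/lib/libssl.so" plus "cannot find -lssl" pair is the linker finding only the 32-bit OpenSSL library while building a 64-bit module, which points at a missing 64-bit development package rather than at DBD::mysql itself. A sketch of the usual fix on Red Hat-style systems:

        # Install the 64-bit OpenSSL development files (provides /usr/lib64/libssl.so)
        sudo yum install openssl-devel

        # Confirm the link target now exists, then retry the build
        ls -l /usr/lib64/libssl.so
        sudo perl install-module.pl DBD::mysql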

    Read the article

  • Using the Fedora 17 command-line 'mail' program, cannot send to Hotmail

    - by Eric Leschinski
    I am trying to use the console in Fedora 17 to send an automated email to myself. I run this: echo "email content" | mail -s "blah" [email protected] It works fine; Google treats it as a spam email, but once you mark it not spam everything is cool. For Hotmail there are policies to prevent the email from being sent. I do this: echo "email content" | mail -s "blah" [email protected] And the email returns as undeliverable; the email does not even appear in the spam folder and I get this as a response: ----- Transcript of session follows ----- ... while talking to mx3.hotmail.com.: >>> MAIL From:<[email protected]> SIZE=685 <<< 550 DY-001 (BAY0-MC3-F8) Unfortunately, messages from 184.90.101.28 weren't sent. Please contact your +Internet service provider. You can tell them that Hotmail does not relay dynamically-assigned IP ranges. +You can also refer your provider to http://mail.live.com/mail/troubleshooting.aspx#errors. 554 5.0.0 Service unavailable So apparently Hotmail doesn't like spammers so much that they are blocking anything with a dynamically-assigned IP range. Google does not do this. What is the easiest way to just get around this and send an email to Hotmail and end up in their spam folder, to be unblocked later by the user?
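
    Since the 550 is aimed at the dynamic IP itself, the usual way around it is to relay outgoing mail through an authenticated smarthost (your ISP's, or any mailbox provider that offers SMTP submission) instead of delivering straight to Hotmail's MX. A sketch assuming Postfix is the local MTA behind the mail command; the relay host and credentials are placeholders:

        # /etc/postfix/main.cf
        relayhost = [smtp.your-isp.example]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_use_tls = yes

        # /etc/postfix/sasl_passwd
        [smtp.your-isp.example]:587    username:password

        # Apply the changes
        postmap /etc/postfix/sasl_passwd
        systemctl restart postfix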

    Read the article

  • Cannot write to Samba shares

    - by Batsu
    Running Samba 3.5 on Red Hat Enterprise 6.1, I'm having issues sharing two folders. Here is the output of testparm: [global] workgroup = DOMAINNAME server string = Samba Server Version %v interfaces = lo, eth1 bind interfaces only = Yes map to guest = Bad User log file = /var/log/samba/log.%m max log size = 50 idmap uid = 16777216-33554431 idmap gid = 16777216-33554431 hosts allow = 10.50.183.48, 10.50.184.41, 10.50.184.199, 10.50.183.160, 127.0.0.1 hosts deny = 0.0.0.0/0 cups options = raw [test] comment = test folder path = /usr/local/samba valid users = claudio write list = claudio force user = claudio read only = No create mask = 0775 directory mask = 0775 [test2] comment = another test path = /home/claudio/tst valid users = claudio write list = claudio force user = claudio read only = No create mask = 0775 From the Windows XP machine I'm connecting from, I'm able to read test but not write to it, while for test2 I can't even access the folder (though I can see it listed). ls -l /usr/local ... drwxrwxrwx. 2 claudio claudio 4096 Dec 3 10:39 samba ... ls -l /usr/local/samba total 32 -rwxrwxrwx. 1 claudio claudio 9 Nov 29 16:26 asd.txt -rwxrwxrwx. 1 claudio claudio 728 Dec 3 10:16 out.txt ... ls -l /home/claudio/ ... drwxrwxr-x. 2 claudio claudio 4096 Dec 3 09:57 tst ... ls -l /home/claudio/tst total 4 -rw-rw-r--. 1 claudio claudio 4 Dec 3 09:57 asd.txt Any suggestions?
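
    With permissions and smb.conf this permissive, it helps to catch the refusal in smbd's own log and to rule out SELinux on RHEL 6 (semanage comes from policycoreutils-python, an assumption here):

        # Raise smbd logging temporarily and watch the per-client log
        smbcontrol smbd debug 10
        tail -f /var/log/samba/log.<client-hostname>

        # Is SELinux enforcing, and did it deny smbd?
        getenforce
        grep smbd /var/log/audit/audit.log | grep denied

        # If so, label both share paths persistently and re-apply the labels
        semanage fcontext -a -t samba_share_t "/usr/local/samba(/.*)?"
        semanage fcontext -a -t samba_share_t "/home/claudio/tst(/.*)?"
        restorecon -Rv /usr/local/samba /home/claudio/tst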

    Read the article

  • Windows Network File Transfer to Samba server: “Are you sure you want to copy this file without its properties?”

    - by jimp
    I am transferring a lot of files to a new NAS based on OpenMediaVault, with the Samba 3.5.6 service running. I am transferring from Windows 7 64-bit to the NAS, and on some media files Windows is prompting about losing some property data across the transfer. I have never seen this before when transferring to Samba boxes I have built myself (vs this turnkey solution), so I'm guessing there must be a Samba setting I can change to preserve the file properties in question instead of permanently losing whatever they contain (Date Taken? Exposure? Flash Fired? etc). Or maybe I've just never encountered this before; I'm really not sure. I tried adding ea support = yes and store dos attributes = yes to the [global] section, but the problem remains. The Linux file system is ext4 mounted with user_xattr (full options: defaults,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0) as Samba requires. Any ideas would be greatly appreciated. Thank you! Samba config: [global] workgroup = WORKGROUP server string = %h server include = /etc/samba/dhcp.conf dns proxy = no log level = 2 syslog = 2 log file = /var/log/samba/log.%m max log size = 1000 syslog only = yes panic action = /usr/share/samba/panic-action %d encrypt passwords = true passdb backend = tdbsam obey pam restrictions = yes unix password sync = no passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . pam password change = yes socket options = TCP_NODELAY IPTOS_LOWDELAY guest account = nobody load printers = no disable spoolss = yes printing = bsd printcap name = /dev/null unix extensions = yes wide links = no create mask = 0777 directory mask = 0777 use sendfile = no null passwords = no local master = yes time server = yes wins support = yes ea support = yes store dos attributes = yes Note: I found this related question, but it explains the loss due to the user trying to transfer from NTFS to FAT32.
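
    One way to see whether extended attributes are actually surviving the trip (both the ext4 mount and Samba have to cooperate before "ea support" matters) is to test them directly on the NAS; the share path and file names below are placeholders:

        # Can the ext4 mount store user xattrs at all?
        touch /srv/share/xattr-test
        setfattr -n user.test -v hello /srv/share/xattr-test
        getfattr -d /srv/share/xattr-test

        # After copying a photo from Windows, did Samba attach its DOS-attribute blob?
        getfattr -m - -d "/srv/share/some-copied-photo.jpg"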

    Read the article

  • Change Windows 7 Explorer's Details Pane limits

    - by Paul
    For some reason, MS decided to completely kill the status bar's functionality in Win7 (and maybe Vista, but I don't know for sure). I have tried all possible options such as Classic Shell and so on. Basically, the one thing I miss most is seeing at a glance the total size of my selected files. I know I can press Alt+Enter or whatever, but that's not the point. The point is that the so-called 'details' pane stops providing details if more than 15 files are selected! WTH? Cannot understand the reason behind such a stupid arbitrary limit, that doesn't seem to be user-configurable at all. Anyway, what I'm looking for is a way to change that limit, either via the registry or otherwise. If changing the limit isn't possible, would it be possible for some programming genius to create a very small optimized light-weight non resource-hungry (you get the idea!) program whose sole purpose will be to automatically click the "Show more details..." link in the details pane after every file selection? For bonus points, the program should wait for a second or two after file selection is complete to do this, so that selecting multiple files in quick succession doesn't hit the HDD repeatedly. Any takers for the challenge? :)

    Read the article

  • How to export SQL Server data from corrupted database (with disk write error)

    - by damitamit
    IT realised there was a disk write error on our production SQL Server 2005, which was causing the backups to fail. By the time they had realised this the nightly backup was old, so we were not able to just restore the backup on another server. The database is still running and being used constantly. However, DBCC CHECKDB fails. Also, the SQL Server backup task fails, Copy Database fails, and the Export Data Wizard fails. However, it seems all the data can be read from the tables (i.e. using bcp etc.). Another observation I have made is that the transaction log is nearly double the size of the database. (Does that mean all the changes aren't being written to the MDF?) What would be the best plan of attack to get the database to a state where backups are working and the data is safe? Take the database offline and use the MDF/LDF to somehow create the database on another SQL Server? Export the data from the database using bcp, create the database (using the Generate Scripts function on the corrupt DB to create the schema on the new DB) on another SQL Server, and use bcp again to import the data? Some other option that is the right course of action in this situation? The IT manager says the data is safe because, if the server fails, the data can be restored from the MDF/LDF. I'm not sure, so I insisted that we start exporting the data each night as a failsafe (using bcp, for example). IT are also having issues on the hardware side of things, as supposedly the disk error is on a virtualized disk and can't be rebuilt like a normal RAID array (or something like that). Please excuse my use of incorrect terminology and incorrect assumptions about how SQL Server operates. I'm the application developer and have been called in to help (as it seems IT know less about SQL Server than I do). Many thanks, Amit
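
    For the nightly failsafe, a hedged sketch of the bcp export/import commands (native format preserves data types); the server, database, table and path names are placeholders, and -T assumes a trusted Windows connection:

        rem Export one table in native format from the ailing server
        bcp MyDatabase.dbo.Orders out D:\backup\Orders.bcp -n -T -S PRODSQL01

        rem Import into the rebuilt database once its schema has been scripted across
        bcp NewDatabase.dbo.Orders in D:\backup\Orders.bcp -n -T -S NEWSQL01 -E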

    Read the article

  • Is there a good alternative to Videora iPod Converter?

    - by Richard
    I use Videora to convert my videos (in DivX/XviD format) to something I can play on my iPod Classic. I really dislike it. It's clunky, riddled with adverts, sometimes doesn't convert properly (the infamous "invalid public atom" error - see Google for more) and has a UI that truly stinks. On the upside, it's free, accepts a list of video files (via the oddly hard to find "1-click convert" button) and just gets on with the converting as it already knows the correct settings for my iPod. One final nice touch is that once they are converted, it'll automatically upload them into iTunes. Are there any alternatives which have all the upsides but none of the downsides? Bonus points if they can set the metadata in iTunes correctly for TV shows (season, show, episode) and delete the converted file afterwards (as my iTunes settings means that a copy is made elsewhere). I've looked at a bunch of applications (handbrake, virtualdub, mediacoder, format factory, any video converter, convertxtodvd) but many of them fail the "just select a list of files and get on with converting" test - let alone all the other features I want. I have no desire to individually set the video size of each file or the codec or the post-processing options. I'm currently using the command line version of HandBrake (handbrakecli) and a hand-written DOS batch file to go through every file in a folder and convert it. It does most of what I want, just not in a very slick way. Can anyone recommend anything better? It needs to work on Windows 7 and be free.
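
    For what it's worth, a sketch of the HandBrakeCLI loop being described, as a Windows batch file; the preset name and the iTunes auto-add folder are assumptions (check 'HandBrakeCLI --preset-list' and your own iTunes Media location), and dropping files into "Automatically Add to iTunes" makes iTunes import and file them itself:

        @echo off
        rem Convert every AVI in the current folder with HandBrake's iPod preset,
        rem then hand the result to iTunes via its auto-import folder.
        for %%F in (*.avi) do (
            HandBrakeCLI -i "%%F" -o "%%~nF.mp4" --preset="iPod"
            move "%%~nF.mp4" "%USERPROFILE%\Music\iTunes\iTunes Media\Automatically Add to iTunes"
        )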

    Read the article
